http://xxicla.dm.uba.ar/viewAbstract.php?code=1377
Conference abstracts, Session S04 - Operator Algebras, July 25, 15:00 ~ 15:50

## Invariants of Operator Systems

### University of Regina, Canada - argerami@uregina.ca

Operator systems are unital, self-adjoint subspaces of $B(H)$. They form a category whose morphisms are the unital completely positive maps. The problem of classifying these structures is very hard, even in the finite-dimensional case; in fact, there is still no classification in the 3-dimensional case! We will show some positive classification results, of both an abstract and a concrete flavour.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8723391890525818, "perplexity": 1978.2644636413395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890947.53/warc/CC-MAIN-20180122014544-20180122034544-00337.warc.gz"}
http://slideplayer.com/slide/233581/
# Pattern Recognition and Machine Learning

## Presentation on theme: "Pattern Recognition and Machine Learning"— Presentation transcript:

Pattern Recognition and Machine Learning. Chapter 3: Linear models for regression.

Linear Basis Function Models (1) Example: Polynomial Curve Fitting

Linear Basis Function Models (2) Generally, where φj(x) are known as basis functions. Typically, φ0(x) = 1, so that w0 acts as a bias. In the simplest case, we use linear basis functions: φd(x) = xd.

Linear Basis Function Models (3) Polynomial basis functions: These are global; a small change in x affects all basis functions.

Linear Basis Function Models (4) Gaussian basis functions: These are local; a small change in x only affects nearby basis functions. μj and s control location and scale (width).

Linear Basis Function Models (5) Sigmoidal basis functions: where Also these are local; a small change in x only affects nearby basis functions. μj and s control location and scale (slope).

Maximum Likelihood and Least Squares (1) Assume observations from a deterministic function with added Gaussian noise: which is the same as saying, Given observed inputs, , and targets, , we obtain the likelihood function where

Maximum Likelihood and Least Squares (2) Taking the logarithm, we get where is the sum-of-squares error.

Maximum Likelihood and Least Squares (3) Computing the gradient and setting it to zero yields Solving for w, we get $w_{ML} = \Phi^\dagger t$, where $\Phi^\dagger = (\Phi^T \Phi)^{-1}\Phi^T$ is the Moore-Penrose pseudo-inverse of the design matrix.

Maximum Likelihood and Least Squares (4) Maximizing with respect to the bias, w0, alone, we see that We can also maximize with respect to β, giving

Geometry of Least Squares Consider the subspace S spanned by the columns of Φ. wML minimizes the distance between t and its orthogonal projection on S, i.e. y. (t lives in an N-dimensional space; S is M-dimensional.)

Sequential Learning Data items considered one at a time (a.k.a. online learning); use stochastic (sequential) gradient descent: This is known as the least-mean-squares (LMS) algorithm. Issue: how to choose η?
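The maximum-likelihood solution via the pseudo-inverse, as described on these slides, can be sketched as follows. This is a minimal illustration, not from the slides; the function and variable names (`polynomial_design_matrix`, `w_ml`) and the sinusoidal toy data are my own.

```python
import numpy as np

def polynomial_design_matrix(x, degree):
    """Phi[n, j] = phi_j(x_n) = x_n**j; the j = 0 column of ones is the bias."""
    return np.vander(x, degree + 1, increasing=True)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
# noisy targets from a deterministic function plus Gaussian noise
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

Phi = polynomial_design_matrix(x, 3)
w_ml = np.linalg.pinv(Phi) @ t  # Moore-Penrose pseudo-inverse solution
```

`np.linalg.pinv` computes Φ† directly, which is numerically preferable to forming (ΦᵀΦ)⁻¹Φᵀ by hand.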
Regularized Least Squares (1) Consider the error function: With the sum-of-squares error function and a quadratic regularizer, we get which is minimized by Data term + Regularization term. λ is called the regularization coefficient.

Regularized Least Squares (2) With a more general regularizer, we have Lasso Quadratic

Regularized Least Squares (3) Lasso tends to generate sparser solutions than a quadratic regularizer.

Multiple Outputs (1) Analogously to the single output case we have: Given observed inputs, , and targets, , we obtain the log likelihood function

Multiple Outputs (2) Maximizing with respect to W, we obtain If we consider a single target variable, tk, we see that where , which is identical with the single output case.

The Bias-Variance Decomposition (1) Recall the expected squared loss, where The second term of E[L] corresponds to the noise inherent in the random variable t. What about the first term?

The Bias-Variance Decomposition (2) Suppose we were given multiple data sets, each of size N. Any particular data set, D, will give a particular function y(x;D). We then have

The Bias-Variance Decomposition (3) Taking the expectation over D yields

The Bias-Variance Decomposition (4) Thus we can write where

The Bias-Variance Decomposition (5)-(7) Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ. From these plots, we note that an over-regularized model (large λ) will have a high bias, while an under-regularized model (small λ) will have a high variance.
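The quadratic-regularizer (ridge) minimizer mentioned above, w = (λI + ΦᵀΦ)⁻¹Φᵀt, can be sketched directly. The function name `ridge_weights` and the toy data are illustrative, not from the slides.

```python
import numpy as np

def ridge_weights(Phi, t, lam):
    """Minimizer of the sum-of-squares error plus quadratic regularizer."""
    M = Phi.shape[1]
    return np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ t)

x = np.linspace(0.0, 1.0, 10)
Phi = np.vander(x, 4, increasing=True)  # cubic polynomial basis
t = x ** 2                              # targets exactly representable here

w_unreg = ridge_weights(Phi, t, 0.0)    # lam = 0 recovers plain least squares
w_reg = ridge_weights(Phi, t, 10.0)     # larger lam shrinks the weights
```

With λ = 0 this reduces to the ordinary least-squares solution; increasing λ trades data fit for a smaller weight norm.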
Bayesian Linear Regression (1) Define a conjugate prior over w Combining this with the likelihood function and using results for marginal and conditional Gaussian distributions gives the posterior where

Bayesian Linear Regression (2) A common choice for the prior is for which Next we consider an example …

Bayesian Linear Regression (3) 0 data points observed: Prior, Data Space

Bayesian Linear Regression (4) 1 data point observed: Likelihood, Posterior, Data Space

Bayesian Linear Regression (5) 2 data points observed: Likelihood, Posterior, Data Space

Bayesian Linear Regression (6) 20 data points observed: Likelihood, Posterior, Data Space

Predictive Distribution (1) Predict t for new values of x by integrating over w: where

Predictive Distribution (2)-(5) Example: Sinusoidal data, 9 Gaussian basis functions, with 1, 2, 4, and 25 data points.

Equivalent Kernel (1) The predictive mean can be written This is a weighted sum of the training data target values, tn. Equivalent kernel or smoother matrix.

Equivalent Kernel (2) Weight of tn depends on distance between x and xn; nearby xn carry more weight.

Equivalent Kernel (3) Non-local basis functions have local equivalent kernels: Polynomial, Sigmoidal

Equivalent Kernel (4) The kernel as a covariance function: consider We can avoid the use of basis functions and define the kernel function directly, leading to Gaussian Processes (Chapter 6).

Equivalent Kernel (5) for all values of x; however, the equivalent kernel may be negative for some values of x. Like all kernel functions, the equivalent kernel can be expressed as an inner product: where .

Bayesian Model Comparison (1) How do we choose the ‘right’ model?
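For the common zero-mean isotropic prior p(w) = N(0, α⁻¹I) mentioned in the Bayesian linear regression slides, the Gaussian posterior has precision S_N⁻¹ = αI + βΦᵀΦ and mean m_N = βS_NΦᵀt. A sketch, with illustrative names and toy data of my own:

```python
import numpy as np

def posterior_params(Phi, t, alpha, beta):
    """Posterior N(w | m_N, S_N) for prior N(0, alpha^-1 I) and noise precision beta."""
    S_N_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    S_N = np.linalg.inv(S_N_inv)
    m_N = beta * S_N @ Phi.T @ t
    return m_N, S_N

x = np.linspace(0.0, 1.0, 15)
Phi = np.vander(x, 3, increasing=True)
t = 0.5 + 2.0 * x                       # noiseless linear targets

m_N, S_N = posterior_params(Phi, t, alpha=1e-8, beta=25.0)
# with a nearly flat prior (tiny alpha), m_N approaches the ML solution
```

As more data arrive, S_N shrinks and the posterior concentrates, matching the 0/1/2/20-point sequence of plots described above.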
Assume we want to compare models Mi, i = 1, …, L, using data D; this requires computing the posterior p(Mi|D) ∝ p(Mi) p(D|Mi) (Posterior ∝ Prior × Model evidence, or marginal likelihood). Bayes Factor: ratio of evidence for two models.

Bayesian Model Comparison (2) Having computed p(Mi|D), we can compute the predictive (mixture) distribution A simpler approximation, known as model selection, is to use the model with the highest evidence.

Bayesian Model Comparison (3) For a model with parameters w, we get the model evidence by marginalizing over w Note that

Bayesian Model Comparison (4) For a given model with a single parameter, w, consider the approximation where the posterior is assumed to be sharply peaked.

Bayesian Model Comparison (5) Taking logarithms, we obtain With M parameters, all assumed to have the same ratio , we get a term that is negative and linear in M.

Bayesian Model Comparison (6) Matching data and model complexity

The Evidence Approximation (1) The fully Bayesian predictive distribution is given by but this integral is intractable. Approximate with where is the mode of , which is assumed to be sharply peaked; a.k.a. empirical Bayes, type II or generalized maximum likelihood, or evidence approximation.

The Evidence Approximation (2) From Bayes’ theorem we have and if we assume p(α, β) to be flat we see that General results for Gaussian integrals give

The Evidence Approximation (3) Example: sinusoidal data, Mth-degree polynomial,

Maximizing the Evidence Function (1) To maximise w.r.t. α and β, we define the eigenvector equation Thus A = αI + βΦᵀΦ has eigenvalues λi + α.

Maximizing the Evidence Function (2) We can now differentiate w.r.t. α and β, and set the results to zero, to get where N.B. γ depends on both α and β.

Effective Number of Parameters (1) w1 is not well determined by the likelihood; w2 is well determined by the likelihood; γ is the number of well-determined parameters. Likelihood, Prior

Effective Number of Parameters (2) Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1.
Effective Number of Parameters (3) Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1. Test set error

Effective Number of Parameters (4) Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1.

Effective Number of Parameters (5) In the limit N ≫ M, γ = M and we can consider using the easy-to-compute approximation

Limitations of Fixed Basis Functions M basis functions along each dimension of a D-dimensional input space require M^D basis functions in total: the curse of dimensionality. In later chapters, we shall see how we can get away with fewer basis functions, by choosing these using the training data.
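One evidence-maximization step of the re-estimation equations from the slides above (γ = Σᵢ λᵢ/(α + λᵢ), α ← γ/m_Nᵀm_N, 1/β ← ‖t − Φm_N‖²/(N − γ)) can be sketched as follows; the function name and toy data are my own, and in practice these updates are iterated to convergence.

```python
import numpy as np

def evidence_step(Phi, t, alpha, beta):
    """One re-estimation step for alpha and beta; returns (alpha_new, beta_new, gamma)."""
    N, M = Phi.shape
    lam = np.linalg.eigvalsh(beta * (Phi.T @ Phi))      # eigenvalues lambda_i
    S_N = np.linalg.inv(alpha * np.eye(M) + beta * Phi.T @ Phi)
    m_N = beta * S_N @ Phi.T @ t
    gamma = np.sum(lam / (alpha + lam))                 # well-determined parameters
    alpha_new = gamma / (m_N @ m_N)
    beta_new = (N - gamma) / np.sum((t - Phi @ m_N) ** 2)
    return alpha_new, beta_new, gamma

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
Phi = np.vander(x, 4, increasing=True)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(30)

alpha_new, beta_new, gamma = evidence_step(Phi, t, alpha=1.0, beta=10.0)
```

γ always lies between 0 and M, interpolating between "no parameters determined by the data" and "all M determined".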
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9871476888656616, "perplexity": 2766.092056372412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805417.47/warc/CC-MAIN-20171119061756-20171119081756-00570.warc.gz"}
http://www.cje.net.cn/CN/Y2010/V29/I02/324
• Research Report •

### Effects of sand burial on seed germination and seedling growth of Achnatherum inebrians

1. 1 Institute of Ecology, Hexi University, Zhangye 734000, Gansu, China; 2 Key Laboratory of Arid and Grassland Agroecology of the Ministry of Education, Lanzhou University, Lanzhou 730000, China

• Online: 2010-02-10 Published: 2010-02-10

### Effects of sand burial depth of Achnatherum inebrians seed on its germination and seedling growth.

1. WANG Ju-hong1,2, CHAI Yan-fei1, ZHANG Yong1

• Online: 2010-02-10 Published: 2010-02-10

Abstract: This paper studied the effects of different sand burial depths (0, 1, 2, 3, 4, and 5 cm) of Achnatherum inebrians seed on its germination and seedling growth under a constant temperature regime, aiming to explore the proliferation mechanism of A. inebrians populations and to supply a theoretical basis for the control and exploitation of A. inebrians. Sand burial depth of A. inebrians seed had significant effects on its seedling emergence percentage, date of first emergence, seedling height, and biomass allocation (P<0.001). The seedling emergence rate was the highest (92%) when the seed was buried at a depth of 2 cm, but the lowest (58.7%) when the sand burial depth was 5 cm. The seedling height reached its maximum (10.8 cm) when the sand burial depth was 3 cm, but was the shortest, at 6.3 and 7.1 cm, when the seed was buried at depths of 0 and 5 cm, respectively. The above- and below-ground biomass of A. inebrians was at its maximum at a sand burial depth of 2 cm, but at its minimum at a depth of 5 cm; the root was the longest (nearly 5 cm) at a sand burial depth of 2-3 cm and the shortest (1 cm) at 5 cm. It was suggested that the optimum sand burial depth of A. inebrians seed for seedling emergence and growth would be 2-3 cm.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26580995321273804, "perplexity": 8077.567742192176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362219.5/warc/CC-MAIN-20211202114856-20211202144856-00029.warc.gz"}
http://math.stackexchange.com/questions/247395/limiting-distribution-and-initial-distribution-of-a-markov-chain?answertab=active
# Limiting distribution and initial distribution of a Markov chain

For a Markov chain (can the following discussion be for either discrete time or continuous time, or just discrete time?),

1. if for an initial distribution, i.e. the distribution of $X_0$, there exists a limiting distribution for the distribution of $X_t$ as $t \to \infty$, I wonder if there exists a limiting distribution for the distribution of $X_t$ as $t \to \infty$ regardless of the distribution of $X_0$?

2. When talking about the limiting distribution of a Markov chain, is it in the sense that some distributions converge to a distribution? How is the convergence defined?

Thanks!

-

1. No: let $X$ be a Markov process in which every state is absorbing, i.e. if you start from $x$ then you always stay there. For any initial distribution $\delta_x$, there is a limiting distribution which is also $\delta_x$ - but this distribution is different for each initial condition.

2. The convergence of distributions of Markov chains is usually discussed in terms of $$\lim_{t\to\infty}\|\nu P_t - \pi\| = 0$$ where $\nu$ is the initial distribution and $\pi$ is the limiting one; here $\|\cdot\|$ is the total variation norm. AFAIK there is at least a strong theory for the discrete-time case, see e.g. the book by S. Meyn and R. Tweedie, "Markov Chains and Stochastic Stability" - the first edition you can easily find online. In fact, there are also extensions of this theory by the same authors to the continuous-time case - just check out their work to start with.

- Thanks! I was wondering if the limiting distribution which is independent of initial distributions is unique when it exists? –  Tim Dec 1 '12 at 15:10

@Tim: could you please define the uniqueness? As far as my guess is true, you mean exactly its independence from the initial distribution. –  S.D. Dec 1 '12 at 17:33

By "uniqueness" of the limiting distribution, I mean if there are two different probability measures on the state space s.t.
they can both be the limiting distribution for a Markov chain, and the limiting distribution is defined to be the same for all initial distributions. –  Tim Dec 1 '12 at 17:51

@Tim Given any initial distribution, it admits (if it admits one) a unique limiting distribution. Thus, if the latter is independent of the former, the latter is unique. In other words, suppose there are two limiting distributions $\pi_1$ and $\pi_2$; then $\|\nu_1 P^n - \pi_1\| \to 0$ and $\|\nu_2 P^n - \pi_2\| \to 0$ for some $\nu_1, \nu_2$, which contradicts the fact that the limit of $\nu_1 P^n$ is the same as that of $\nu_2 P^n$. –  S.D. Dec 2 '12 at 9:43

Thanks! (1) Do you mean that for any given initial distribution, its limiting distribution (not necessarily the same one for other initial distributions) is unique? Why is that? (2) Also, I feel the first comment on my other post is confusing: math.stackexchange.com/questions/248609/…. If you can let me know what you think, that will be appreciated! –  Tim Dec 2 '12 at 10:06
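The total-variation convergence discussed in this thread can be illustrated numerically for a small finite chain. The 2-state transition matrix below is an arbitrary example of my own (irreducible and aperiodic, so ν Pⁿ converges to the unique stationary π for every initial ν):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def tv_distance(mu, nu):
    """Total variation distance between two distributions on a finite space."""
    return 0.5 * np.abs(mu - nu).sum()

# stationary distribution: normalized left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()

nu = np.array([1.0, 0.0])  # a point-mass initial distribution
for _ in range(200):
    nu = nu @ P            # nu P^n after n steps
```

Starting instead from the other point mass (0, 1) drives ν to the same π, which is the independence-of-initial-distribution property asked about above.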
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9841476082801819, "perplexity": 197.38934255139105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500831098.94/warc/CC-MAIN-20140820021351-00237-ip-10-180-136-8.ec2.internal.warc.gz"}
https://codegolf.stackexchange.com/questions/62752/longest-common-prefix-of-2-strings/63016
# Longest Common Prefix of 2 Strings

Write a program that takes 2 strings as input, and returns the longest common prefix. This is code golf, so the answer with the shortest amount of bytes wins.

Test Case 1: "global", "glossary" → "glo"
Test Case 2: "department", "depart" → "depart"
Test Case 3: "glove", "dove" → ""

• Another good test case is "aca", "aba". – Morgan Thrapp Nov 3 '15 at 19:00
• Do you want a complete program that inputs from STDIN and prints to STDOUT, or are functions OK? – xnor Nov 3 '15 at 19:35
• Can we assume the input won't have newlines? Which characters will the input have? – Downgoat Nov 3 '15 at 23:59
• General note: People using a regex-based solution should not copy other people's regex answers without testing them yourself; this does not work in all regex engines. In particular, it gives different (both incorrect) answers in nvi and vim. – Random832 Nov 4 '15 at 16:38
• All of the examples given are in lowercase, but do we need to worry about case sensitivity? For example, should global and GLOSSARY return glo or ''? – AdmBorkBork Nov 5 '15 at 16:08

## MATLAB, 63 bytes

Defines a function that accepts 2 strings as input.

function f(a,b),c=1;try,while a(c)==b(c),c=c+1;end,end,a(1:c-1)

Had to include a try-statement for the cases where a is a longer string than b.

• If we have the freedom to always supply the shorter string to a, then 8 bytes can be removed.
• If it is allowed to define a and b in the workspace, then another 16 bytes can be removed.

# R, 130 bytes

substr(x[1],1,which.max(apply(do.call(rbind,lapply(strsplit(x,''),length<-,nchar(x[1]))),2,function(i)!length(unique(i))==1))-1)

Usage: x <- c('bubblegum','bubbafish')

# Scala, 90 bytes

object S extends App{print(args(0)zip args(1)takeWhile{case(a,b)=>a==b}map(_._1)mkString)}

It takes two Strings as arguments and outputs to stdout.

• This won't print a string to STDOUT, but a Vector[Tuple2[Char,Char]] – Jacob Nov 5 '15 at 9:50
• Fixed! Thanks for pointing out.
– corvus_192 Nov 5 '15 at 19:31

# brainfuck, 91 bytes

+[>>,>++++[<-------->-]<]<<[<<]>+[>>++++[>++++++++<-],]<[<]>>-<[>>[<+>>-<-]+>[<->,]<[<.,]>]

Requires an interpreter that either allows negative positions on the tape or wraps if you < from 0. Also requires , to return 0 every time you use it after input runs out. (In my experience these are both the most common behaviour.) Takes input as two words separated by a space.

This was a lot easier than I expected it to be! Usually I decide to write a brainfuck program and end up devoting quite a bit of time to it, but this one played nice. My first idea ended up working well and being rather short, especially for brainfuck.

This works by getting the entire first word and storing the characters in every second cell, then weaving in the second word (e.g. gglloosbsaalr y). Then, for each pair of characters a and b, it copies a one cell to the left and simultaneously replaces b with b-a. The cell a used to be in becomes NOT (b-a). If that's true, a is printed and the loop continues to the next set of characters. Otherwise, nothing is printed and the loop terminates.

I only used two real golfing tricks in this program. The first was combining two unrelated loops while gathering input. The first word is initially stored with each of its bytes subtracted by 32, so that space becomes 0 and the loop can end. Rather than adding 32 to each of those bytes and then getting the second word, the program does both at the same time. The second trick I used was abuse of , when I know the input is empty. The idiomatic way of setting a cell to 0 is [-]. However, if you know that the program has already read the entire input, most interpreters will let you try to get a byte of input anyway and set the current cell to NUL, or 0. I use this twice in my program, saving 4 bytes.
Ungolfed: +[>>,>++++[<-------->-]<] get first word (minus 32 at each byte) <<[<<]> go back to start +[>>++++[>++++++++<-],] get second word and add 32 to each byte of first word <[<]>>-< go back to start and clean up a little bit [ main loop >>[<+>>-<-] subtract letter from second word from letter of first word +>[<->,]< logical NOT the result [<.,]> if the result is 1: print the letter else: the loop dies and execution is terminated ] ô⟦ï0]Ă⇀$≔ï1[_]?1:ï1=0)ø⬯) Try it here (Firefox only). It barely looks like ES6. # MUMPS, 54 bytes t(a,b) f i=$L(a):-1:0 s p=$E(a,1,i) q:p=$E(b,1,i) q p Typically primitive stuff - it just compares successively-shorter prefixes of the strings until it hits a match. # Javascript: 67 Bytes (a,b)=>{for(i=0;i<a.length;i++){if(a[i]!=b[i])return a.slice(0,i)}} # ><>, 37 bytes i:0( ?\ 4*=?$>1+{:8 +[r]$1 ?!;o>:{= Try it online! Input is via STDIN, and is expected without quotes, separated by a space. For example, global glossary. After the input is read, the characters up to and including the space are reversed and pushed back onto the stack. For example, if the input were global glossary, the stack would be glossary labolg. The stack is then rotated to the left one step at a time. If the top two chars are the same, output. Otherwise, end. ## C# 147 146 string l(string a,string b){var s="";for(int i=0;i<Math.Min(a.Length,b.Length);i++){if(a[i]==b[i])s+=a[i];else return a.Substring(0,i);}return s;} string longestPrefix(string a, string b) { var s = ""; for (int i = 0; i < Math.Min(a.Length, b.Length); i++) { if (a[i] == b[i]) s+=a[i]; else return a.Substring(0, i); } return s; } ## How it works: It loops until characters on the same index do not match. Every character that matches is added to s string, otherwise return a new string from zero index to current iteration. ## Brainfuck, 61 bytes + [ ,[<+> >+<-] ++++[>--------<-] > ] <<[<] <+ [ ,[>+>-<<-] >>[<] <[.>] < ] Expects two words separated by a space. Try it online. 
# Java 8, 76 bytes

(a,b)->{String m="";for(int i=0;i<a.length&&a[i]==b[i];)m+=a[i++];return m;}

Lambda that takes 2 char[] arguments. Loops through until the letters stop matching or we match them all, appending them to a blank string as it goes.

# Scala, 85 83 77 bytes

def f(a:String,b:String)=a zip b takeWhile(a=>a._1==a._2) map(_._1) mkString

for example, f("global" , "glossary") returns glo

• (edited: the return type is made explicit by the last "mkString" invocation) – Leonardo Jul 3 '18 at 15:06

# K, 45 bytes

{*|(*v)@{&y~'x}.#[&/#:'v;]'v:{#[;x]'1+!#x}'x}

Takes input as a 2-element list.

## vim, 20 bytes

s/\(.*\).* \1.*/\1/

This also works with ex/vi (heirloom ex 050325), and the trailing slash is not required. Oddly, this should work in vim, but mysteriously fails. It works if I add another unused capture group, something which should not change the semantics of the regex at all:

s/\v(.*)(.* \1.*)/\1

It fails and gives garbage answers in nvi and the results are downright mysterious:

:1
global glossary
:s/\(.*\)\(.*\) \1\(.*\)/\1{\2,\3}/
global{,ry}

NOTE: This expects the words on the current [last in the file] line [or every line for the sed script], separated by a space, and containing no space. To operate on every line in ex/vim, add % to the beginning. I don't think mine is the only program here to have constraints like these.

# Swift, 34 bytes

import UIKit
"global".commonPrefixWith("glossary")

But with Swift 2 it is actually more like:

"global".commonPrefixWithString("glossary",options:.CaseInsensitiveSearch)

# C#, 112 bytes

class P{static void Main(string[]a){try{for(int i=0;a[0][i]==a[1][i];)System.Console.Write(a[0][i++]);}catch{}}}

Newlines and indentation for clarity:

class P{
    static void Main(string[]a){
        try{
            for(int i=0;a[0][i]==a[1][i];)
                System.Console.Write(a[0][i++]);
        }
        catch{}
    }
}

## Minkolang 0.10, 21 bytes

(od" "=,)x(0gdo=?.O1)

Expects input as two words, space-separated, like so: department depart. Try it here.
### Explanation (od" "=,) Loops through input until a space is encountered x Dumps extraneous space (0gdo= 1) Loops through second word and compares letters ?.O Halts if two letters are not equal, outputs them otherwise # pb, 105 bytes ^w[B!32]{>}>w[B!0]{t[B]vb[1]<[X]w[B!0]{>}b[T]w[B!1]{>}b[0]^>}v<[X]<t[0]w[T=0]{>t[B]^t[T-B]v}w[B!0]{b[0]>} Takes two words separated by a single space. (I can save a byte by using a tab instead but that feels like cheating.) In pb, the area that can be written to is thought of as a 2D space, with (0, 0) in the upper left. Additionally, input is initially kept at Y=-1. This program copies the second word of the input to Y=0 (starting at (0, 0)). Then, each letter is compared to the letter immediately above it until one is found that doesn't match. The rest of the word is erased and the desired output is already on the canvas so it's printed when execution halts. Ungolfed: ^w[B!32]{>}> # Go to the first letter of the second word w[B!0]{ # For each letter in the second word: t[B] # Save the letter to T vb[1] # Put a flag below that letter so it can be found later <[X]w[B!0]{>} # Go to the first empty space on Y=0 b[T] # Write the contents of T w[B!1]{>}b[0] # Go back to the flag and erase it ^> # Restart loop from next letter } v<[X]< # Go to (-1, 0) t[0] # Set T to 0 w[T=0]{ # While T is 0: >t[B] # Save the next letter of the second word to T ^t[T-B]v # Subtract the equivalent letter of the first word from T # If they were the same, T is 0 and the loop continues. 
} w[B!0]{b[0]>} # Erase the rest of the second word

# Ruby, 44 characters

->a,b{i=0;i+=1while a[i]&&a[i]==b[i];a[0,i]}

Sample run:

2.1.5 :001 > ->a,b{i=0;i+=1while a[i]&&a[i]==b[i];a[0,i]}["global", "glossary"] => "glo"
2.1.5 :002 > ->a,b{i=0;i+=1while a[i]&&a[i]==b[i];a[0,i]}["department", "depart"] => "depart"
2.1.5 :003 > ->a,b{i=0;i+=1while a[i]&&a[i]==b[i];a[0,i]}["glove", "dove"] => ""

# Dyalog APL, 12 bytes

{⊥⍨⌽=⌿↑⍵}↑∊

That's two bytes less than the previous APL solution! The overall function is ↑, which takes n elements (characters) from the flattened (∊) argument, where n is the result of applying the function {⊥⍨⌽=⌿↑⍵} to the argument: ↑⍵ convert list of strings to table (padding with spaces to form rectangle) =⌿ compare down (columns) giving boolean list ⌽ reverse ⊥⍨ count trailing trues*

*Literally it is a mixed-base to base-10 conversion, using the boolean list as both number and base: ⊥⍨0 1 0 1 1 is the same as 0 1 0 1 1⊥⍨0 1 0 1 1, which is 0×(0×1×0×1×1) + 1×(1×0×1×1) + 0×(0×1×1) + 1×(1×1) + 1×(1), which again is two (the number of trailing 1s).

# PHP, 49 bytes

<?=substr($t=$argv[1],0,strspn($t^$argv[2],"\0"));

Replace \0 with the actual byte.

# Java 7, 145 bytes

class M{public static void main(String[]a){for(char i=0,c;i<a[0].length();){c=a[0].charAt(i);if(c!=a[1].charAt(i++))break;System.out.print(c);}}}

Those pesky program requirements instead of a function.. Ungolfed:

class M{ public static void main(String[] a){ for(char i = 0, c; i < a[0].length(); ){ c = a[0].charAt(i); if(c != a[1].charAt(i++)){ break; } System.out.print(c); } } }

Try it here.

• Pretty sure you're allowed to assume 'programs or functions' unless full program is specified by OP – Xanderhall Dec 12 '16 at 14:53
• see here – Xanderhall Dec 12 '16 at 15:10
• @Xanderhall In the description it states "Write a program that takes 2 strings as input.."
– Kevin Cruijssen Dec 12 '16 at 18:38 • If you'd looked at the link I posted, you'd see that the community consensus is that the default for a challenge is "program or function". – Xanderhall Dec 14 '16 at 13:10 • @Xanderhall I know the default is program or function, but in this question the OP states it should be a program. The question rules overrules that default rule... – Kevin Cruijssen Dec 14 '16 at 14:33 # 05AB1E, 10 bytes (non-competing) .ps.p©å®Ï¤ Try it online! (^.*).*$\n\1 each input string is separate line. https://regex101.com/r/bTf1ud/1 # Powershell + Regex, 48 bytes $m=$args-join"n"-match"(^.*).*n\1";$Matches[1] one line input strings only. # Powershell pure, 58 56 bytes param($a,$b)for($i=0;$a[$i]-eq$b[$i]){$c+=$a[$i++]};"$c" Test script: $f = { param($a,$b)for($i=0;$a[$i]-eq$b[$i]){$c+=$a[$i++]};"$c" } "glo" -eq (&$f "global" "glossary") "depart" -eq (&$f "department" "depart") "" -eq (&$f "glove" "dove") Output: True True True # C# (Visual C# Compiler), 62 bytes (a,b)=>Concat(a.Zip(b,(x,y)=>x==y?x:'$').TakeWhile(x=>x!='$')) Try it online! Zip! This byte count includes only the lambda expression, and some necessary using static directives are not counted. It is assumed that no word will contain the magical char value $ (otherwise program may fail). Can use \0 instead (but that is longer to type). # Jelly, 6 bytes ¹Ƥ€f/Ṫ Try it online! # Rust, 75 bytes |a,b|a.chars().zip(b.chars()).take_while(|(a,b)|a==b).map(|v|v.0).collect() Try it online! Does unnecessary heap allocation for the result (idiomatic Rust code would return &str here as opposed to String), but it works so whatever. It's not like it matters. This iterates over string characters as long as characters match and then collects matched characters into a String. # K (oK) / K4, 21 19 bytes Solution: (*x)@&&\=/(#,/x)$x: Try it online! 
Explanation: Pad strings to combined length of the strings, check for equality, find matching indices, take minimum over resulting list, and index into first element of original input at these indices. (*x)@&&\=/(#,/x)$x: / the solution x: / save input as x$ / pad ( ) / do together ,/x / flatten (,/) x # / count (returns length) =/ / compare, equals (=) over (/) &\ / mins, min (&) scan (\) & / indices where true @ / index into ( ) / do this together *x / first (*) x # Java, 152 bytes String a="aa",b="ab";char[]c=a.toCharArray(),d=b.toCharArray();int e=0,f=Math.min(c.length,d.length);for(;e<f&&c[e]==d[e];e++);return new String(c,0,e); • What's your language and score? – Hand-E-Food Nov 4 '15 at 22:37
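For readers who want a readable reference to check the golfed entries against, here is an ungolfed sketch in Python (not a competing answer); Python's standard library also exposes the same operation as os.path.commonprefix, which despite its name works character by character on arbitrary strings.

```python
import os.path

def lcp(a, b):
    """Longest common prefix of two strings, compared character by character."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[:i]

assert lcp("global", "glossary") == "glo"
assert lcp("department", "depart") == "depart"
assert lcp("glove", "dove") == ""
assert lcp("aca", "aba") == "a"  # the extra test case from the comments
```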
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2345777451992035, "perplexity": 4859.67106796791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541309137.92/warc/CC-MAIN-20191215173718-20191215201718-00031.warc.gz"}
https://www.varsitytutors.com/calculus_1-help/how-to-graph-functions-of-points-of-inflection
# Calculus 1 : How to graph functions of points of inflection

## Example Questions

### Example Question #1 : Points Of Inflection

Find the inflection point(s) of .

Explanation: Inflection points can only occur when the second derivative is zero or undefined. Here we have

Therefore possible inflection points occur at  and . However, to have an inflection point we must check that the sign of the second derivative is different on each side of the point. Here we have

Hence, both are inflection points.

### Example Question #2 : Points Of Inflection

Below is the graph of . How many inflection points does  have?

Not enough information

Explanation: Possible inflection points occur when . This occurs at three values, . However, to be an inflection point the sign of  must be different on either side of the critical value. Hence, only  are inflection points.

### Example Question #3 : Points Of Inflection

Find the point(s) of inflection for the function .

and

and

There are no points of inflection.

Explanation: A point of inflection is found where the graph (or image) of a function changes concavity. To find this algebraically, we want to find where the second derivative of the function changes sign, from negative to positive, or vice versa. So, we find the second derivative of the given function. The first derivative using the power rule is , and the second derivative is .

We then find where this second derivative equals zero, when . We then look to see if the second derivative changes signs at this point. Both graphically and algebraically, we can see that the second derivative does indeed change sign at, and only at, , so this is our inflection point.

### Example Question #4 : Points Of Inflection

What are the  coordinates of the points of inflection for the graph ?

There are no points of inflection on this graph.

Explanation: Inflection points are the points of a graph where the concavity of the graph changes.
The inflection points of a graph are found by taking the second derivative of the graph's equation, setting it equal to zero, then solving for . To take the derivative of this equation, we must use the power rule, . We also must remember that the derivative of a constant is 0. After taking the first derivative of the graph's equation using the power rule, the equation becomes . In this problem the second derivative of the graph's equation comes out to ; factoring this equation, it becomes . Solving for when the equation is set equal to zero, the inflection points are located at .

### Example Question #5 : Points Of Inflection

Find all the points of inflection of .

There are no inflection points.

Explanation: In order to find the points of inflection, we need to find  using the power rule, . Now we set , and solve for . To verify this is a true inflection point we need to plug in a value that is less than it and a value that is greater than it into the second derivative. If there is a sign change around the point, then it is a true inflection point. Let

Now let

Since the sign changes from a positive to a negative around the point , we can conclude it is an inflection point.

### Example Question #6 : Points Of Inflection

Find all the points of inflection of

There are no points of inflection.

Explanation: In order to find the points of inflection, we need to find  using the power rule . Now to find the points of inflection, we need to set .

.

Now we can use the quadratic formula. Recall that the quadratic formula is , where a, b, c refer to the coefficients of the equation . In this case, a=12, b=0, c=-4. Thus the possible points of inflection are .

Now to check whether  or  are inflection points, we need to plug in a value higher and lower than each point. If there is a sign change then the point is an inflection point.

To check , let's plug in . Therefore  is an inflection point. Now let's check  with . Therefore  is also an inflection point.
### Example Question #7 : Points Of Inflection

Find all the points of inflection of .

There are no points of inflection.

Explanation: In order to find the points of inflection, we need to find  using the power rule . Now let's factor .

Now to find the points of inflection, we need to set .

.

From this equation, we already know one of the points of inflection, . To figure out the rest of the points of inflection we can use the quadratic formula. Recall that the quadratic formula is , where a, b, c refer to the coefficients of the equation . In this case, a=20, b=0, c=-18. Thus the other two points of inflection are .

To verify that they are all inflection points we need to plug in values higher and lower than each value and see if the sign changes. Let's plug in

Since there is a sign change at each point, all are points of inflection.

### Example Question #8 : Points Of Inflection

Find the points of inflection of .

There are no points of inflection.

There are no points of inflection.

Explanation: In order to find the points of inflection, we need to find . Now we set .

.

This last statement says that  will never be . Thus there are no points of inflection.

### Example Question #9 : Points Of Inflection

Find the points of inflection of the following function:

Explanation: The points of inflection of a given function are the values at which the second derivative of the function is equal to zero. The first derivative of the function is , and the derivative of this function (the second derivative of the original function) is . Both derivatives were found using the power rule.

Solving .

To verify that this point is a true inflection point we need to plug in a value that is less than the point and one that is greater than the point into the second derivative. If there is a sign change between the two numbers then the point in question is an inflection point.

Let's plug in . Now plug in .

Therefore,  is the only point of inflection of the function.
### Example Question #10 : Points Of Inflection

Find all the points of inflection of .

Explanation: In order to find all the points of inflection, we first find  using the power rule twice, . Now we set .

.

Now we factor the left-hand side. From this, we see that there is one point of inflection at . For the remaining point(s) of inflection, let's solve for x in the equation inside the parentheses.
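The recipe used throughout these explanations (set the second derivative to zero, then confirm a sign change on either side of each candidate) can be checked symbolically. A sketch using SymPy on a made-up example, f(x) = x**3 - 3*x (the quiz functions themselves were lost from this copy of the page):

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x                           # hypothetical example function

f2 = sp.diff(f, x, 2)                    # second derivative: 6*x
candidates = sp.solve(sp.Eq(f2, 0), x)   # where f'' = 0

# Keep only the candidates at which f'' actually changes sign nearby.
half = sp.Rational(1, 2)
inflection_points = [
    c for c in candidates
    if sp.sign(f2.subs(x, c - half)) != sp.sign(f2.subs(x, c + half))
]
print(inflection_points)                 # [0]
```

The fixed test offset of 1/2 is a simplification for this example; a function whose candidate points lie closer together than that would need a smaller offset.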
http://simple.wiktionary.org/wiki/offset
# offset

Offset is part of the Academic Word List. It is important for students in college and university.

## Verb

Plain form: offset
Third-person singular: offsets
Past tense: offset
Past participle: offset
Present participle: offsetting

1. (transitive) If $x$ offsets $y$, the loss caused by $y$ is balanced by $x$.
   The school will provide limited scholarships to offset the cost of tuition.
   Increases in efficiency partially offset the increased costs.
   The few problems are more than offset by the relatively large number of successes.
2. (transitive) If you offset $x$ against $y$, you compare or contrast them.
   All this solid colour is offset by the tiny yellow green flowers.
3. The past tense and past participle of offset.

## Noun

Singular: offset

1. (countable) An offset is something that balances (the loss of) something else.
2. (uncountable, technical) A particular way of printing where the ink moves from surface A to B and then from B to the final surface C.
3. (countable, technical) The image produced by this kind of printing.
4. (countable & uncountable, technical) An offset is the distance that something moves away from where it is supposed to be or where it was.
http://physics.stackexchange.com/questions/20045/projective-camera-back-projecting-a-point-on-the-image-plane-into-3-space
# projective camera: back-projecting a point on the image plane into 3-space

Suppose I have a projective camera model. For this model I would like to back-project a ray through a point on the image plane. I know that the equation for this is the following:

$$y(\lambda) = P^+ \pmb{x} + \lambda \pmb{c}$$

where $P^+$ denotes the pseudoinverse of the camera matrix $P$. $P$ has a dimensionality of 3 by 4. $x$ is the point on the image plane in homogeneous coordinates; hence its dimensionality is 3 by 1. $c$ is the center of the camera in 3-space in homogeneous coordinates. (Note that this equation is taken from the book "Multiple View Geometry in Computer Vision", page 162.)

Now I don't fully get this equation. I get that $P^+ x$ results in a point on the line we are looking for. Hence we have two points that we can use for constructing a line. However, I don't get the parametrization using $\lambda$. Why is the equation not of a form like:

$$y(\lambda) = (1-\lambda) \pmb{a} + \lambda \pmb{b}$$

Any help in understanding the original equation of the resulting ray would be appreciated! :D

- I'm unfamiliar with this notation, could you clarify it? What is $\lambda$? How is it that you can multiply $P^+$ and $x$, when $P^+$ is a matrix and $x$ is a scalar? When you say $c_0$ is the "center of the camera," what does that mean? –  Colin K Jan 26 '12 at 17:40

OK, let me try to clarify: $x$ is in fact a point on the image plane in homogeneous coordinates. $\lambda$ is just the parameter of the ray. I think geometrically it can be interpreted as the inverse of the depth of the point (but I'm not 100% sure about that). –  Tobias Domhan Jan 26 '12 at 17:55

And the center of the camera is the right null space of $P$: $PC=0$. –  Tobias Domhan Jan 26 '12 at 18:03

I'm sort of shocked that, with a master's degree in optics, there is a notation that is not only unknown to me, but seems completely nonsensical.
You keep talking about a product with $P$ either involving or producing scalars, but isn't $P$ your lens matrix? You say $\lambda$ is "the" ray parameter, but which parameter? Height? Angle? By homogeneous coordinates, do you mean normalized coordinates? Is the right null space of $P$ the optical axis? –  Colin K Jan 26 '12 at 19:14

OH! I just figured it out. This isn't a geometrical optics question, this is a computer vision thing. The "Camera Matrix" is not at all what I thought. Give me some time to figure this out now. –  Colin K Jan 26 '12 at 19:52
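The pieces discussed in this thread (the pseudoinverse $P^+$, and the camera centre $c$ as the right null space of $P$) can be checked numerically. A NumPy sketch with a made-up 3-by-4 camera matrix (the numbers are illustrative only): every point $y(\lambda) = P^+x + \lambda c$ on the ray reprojects to the same image point $x$.

```python
import numpy as np

# Hypothetical 3x4 camera matrix P (made-up intrinsics, illustration only).
P = np.array([[800.,   0., 320., 0.  ],
              [  0., 800., 240., 0.  ],
              [  0.,   0.,   1., 0.04]])

x = np.array([320., 240., 1.])   # image point, homogeneous 3-vector

P_plus = np.linalg.pinv(P)       # 4x3 pseudoinverse; P @ P_plus = I

# Camera centre c: the right null space of P, i.e. P @ c = 0.
c = np.linalg.svd(P)[2][-1]      # last right-singular vector

def ray(lam):
    """A point on the back-projected ray, in homogeneous 3-space coords."""
    return P_plus @ x + lam * c

# Every lambda gives the same image point back (up to homogeneous scale).
for lam in (0.0, 1.0, 5.0):
    y = P @ ray(lam)
    assert np.allclose(y / y[2], x)
```

Because $Pc = 0$, varying $\lambda$ moves the point in 3-space without changing its projection, which is exactly why the one-parameter family $P^+x + \lambda c$ sweeps out the back-projected ray.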
http://physics.stackexchange.com/users/139/gerard
# Gerard

reputation 2,042 · member for 3 years, 11 months · seen 20 hours ago · profile views 275

# 20 Questions

40 A list of inconveniences between quantum mechanics and (general) relativity?
28 Is anti-matter matter going backwards in time?
22 Can Noether's theorem be understood intuitively?
10 What are some useful ways to imagine the concept of spin as it relates to subatomic particles?
7 What physical forces pull/press water upwards in vegetation?

# 2,042 Reputation

+5 Is anti-matter matter going backwards in time?
+10 A list of inconveniences between quantum mechanics and (general) relativity?
+15 Potential that is proportional to distance
+10 Why does kinetic energy increase quadratically, not linearly, with speed?

48 Why does kinetic energy increase quadratically, not linearly, with speed?
14 Is Angular Momentum truly fundamental?
11 How can I measure the mass of the Earth at home?
8 How can we make an order-of-magnitude estimate of the strength of Earth's magnetic field?
6 What is up and down conversion in photonics?

# 66 Tags

48 newtonian-mechanics × 3
15 classical-mechanics × 2
48 kinematics
14 spin × 3
48 energy
14 angular-momentum
48 speed
12 home-experiment × 2
18 quantum-mechanics × 6
11 newtonian-gravity × 2

# 10 Accounts

Stack Overflow 2,471 rep 32857
Physics 2,042 rep 11425
Mathematics 362 rep 110
TeX - LaTeX 216 rep 4
Area 51 211 rep 3
http://math.stackexchange.com/users/38265/euler-is-alive?tab=activity
Euler....IS_ALIVE

Reputation 2,038

Jul21 comment a GRE question about ring theory, how can I know which one is correct? This and @DerekHolt information should be enough to answer the question.
Jul21 comment a GRE question about ring theory, how can I know which one is correct? $\{0\}$ and $R$ are ideals for every ring $R$.
Jul11 comment Proving linearity of derivative It must be a linear function. Otherwise, the linear approximation formula doesn't make any sense.
Jun18 comment Math Problem Help || Trig (Updated picture) This hurts my eyes.
Jun18 comment Palindromic numbers, and $3$. Nothing about this question makes sense.
Jun18 comment Palindromic numbers, and $3$. Even more furthermore, if $x$ is a palindromic number, it's still not true :p
Jun18 comment Palindromic numbers, and $3$. Did you mean to apply this to palindromic numbers?
Jun18 answered Palindromic numbers, and $3$.
May11 comment How to find the maximum value subject to constraints This is definitely in either your textbook or class.
Apr18 comment Continuity of integral from x to x+1 of Lp function Hint : Holder .
Mar24 comment Wave equation in three space dimensions The dot just means the $x$ variable. As you can see, the estimate only depends on the $t$ slice.
Mar8 comment How do none of the of the polynomials have a degree 2? It is definitely wrong because $x^2$ is a degree 2 polynomial!
Mar6 comment How to handle derivative of absolute value? Please look up the definition of weak derivative. The function might have a derivative in this sense.
Mar6 comment Two norms $||.||_1$ and $||.||_2$ on a vector space $V$ are equivalent. This question has been answered at least 50 times on this site.
Mar6 comment Find the limit of $\lim\limits_{x\rightarrow0}\frac{x}{\tan x}$. It was wrong. Now it's correct.
Mar6 comment Find the limit of $\lim\limits_{x\rightarrow0}\frac{x}{\tan x}$. False?
Mar6 comment What decides if a Coxeter Group is "crystalline" or "non-crystalline"? This is going to be very hard to completely intractable depending on how much math you know. Do you know any group theory? Have you read the wikipedia article on Coxeter groups? It might be a little easier to read as an introduction than a paper.
Feb27 comment About a theorem on stability of sytems of autonomous ODEs Is that word for word what the theorem says? It seems to be wrong then by your counterexample.
Feb27 comment Solve 2nd order ordinary differential equation by Laplace transforms and convolution of their inverse functions. (5.6-40) Not correct. You can easily check by plugging it into your original equation.
Feb26 comment How to solve $tx'(t) = (2t^2 + 1)x(t) + t^2$? This seems to be a simple 1d problem that you can solve with an integrating factor.
http://nrich.maths.org/8078/clue?nomenu=1
If he cycled for $7\frac{1}{2}$ hours, how many calories would he need? How many calories could he take in? What if he cycled for $8$ hours?
https://arxiv.org/list/cond-mat.str-el/2106
# Strongly Correlated Electrons ## Authors and titles for cond-mat.str-el in Jun 2021 [ total of 346 entries: 1-25 | 26-50 | 51-75 | 76-100 | ... | 326-346 ] [ showing 25 entries per page: fewer | more | all ] [1] Title: Intrinsic one-dimensional conducting channels in the Kondo insulator SmB6 Comments: Upon further follow-up studies, we discovered additional data in different parameter spaces that weakened our arguments. While this does not preclude the existence of 1-D conducting states, we feel that our previous supporting arguments are rendered ineffective, and we must suspend and withdraw our manuscript at this point Subjects: Strongly Correlated Electrons (cond-mat.str-el); Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Materials Science (cond-mat.mtrl-sci) [2] Title: Field-Direction Sensitive Skyrmion Crystals in Cubic Chiral Systems: Implication to $4f$-Electron Compound EuPtSi Subjects: Strongly Correlated Electrons (cond-mat.str-el) [3] Title: Field-induced meron and skyrmion superlattice in chiral magnets on the honeycomb lattice Subjects: Strongly Correlated Electrons (cond-mat.str-el); Mesoscale and Nanoscale Physics (cond-mat.mes-hall) [4] Title: Evolution of the metallic state of LaNiO$_3$/LaAlO$_3$ superlattices measured by $^8$Li $β$-detected NMR Authors: V.L. Karner (1 and 2), A. Chatzichristos (2 and 3), D.L. Cortie (1, 2 and 3), D. Fujimoto (2 and 3), R.F. Kiefl (2, 3 and 4), C.D.P. Levy (4), R. Li (4), R.M.L. McFadden (1 and 2), G.D. Morris (4), M.R. Pearson (4), E. Benckiser (5), A.V. Boris (5), G. Cristiani (5), G. Logvenov (5), B. Keimer (5), W.A. 
MacFarlane (1, 2 and 4) ((1) Department of Chemistry University of British Columbia, (2) Stewart Blusson Quantum Matter Institute, (3) Department of Physics and Astronomy University of British Columbia, (4) TRIUMF, (5) Max Planck Institute for Solid State Research) Subjects: Strongly Correlated Electrons (cond-mat.str-el) [5] Title: A DMI guide to magnets micro-world Comments: Contribution for the JETP special issue in honor of I.E. Dzyaloshinskii's 90th birthday Journal-ref: J. Exp. Theor. Phys. 132, 506-516 (2021) Subjects: Strongly Correlated Electrons (cond-mat.str-el) [6] Title: Multiple field-induced phases in the frustrated triangular magnet Cs$_3$Fe$_2$Br$_9$ Comments: 11 pages, 12 figures (minor corrections and 3 additional have been included in V3) Journal-ref: Phys. Rev. B 104, 064418 (2021) Subjects: Strongly Correlated Electrons (cond-mat.str-el); Materials Science (cond-mat.mtrl-sci) [7] Title: Flat-band ferromagnetism and spin waves in the Haldane-Hubbard model Subjects: Strongly Correlated Electrons (cond-mat.str-el); Mesoscale and Nanoscale Physics (cond-mat.mes-hall) [8] Title: Transport properties of the parent LaNiO2 Subjects: Strongly Correlated Electrons (cond-mat.str-el) [9] Title: $s$-wave paired composite-fermion electron-hole trial state for quantum Hall bilayers with $ν=1$ Subjects: Strongly Correlated Electrons (cond-mat.str-el); Mesoscale and Nanoscale Physics (cond-mat.mes-hall) [10] Title: Parallel-Chain Monte Carlo Based on Generative Neural Networks Subjects: Strongly Correlated Electrons (cond-mat.str-el); Disordered Systems and Neural Networks (cond-mat.dis-nn); Computational Physics (physics.comp-ph) [11] Title: Enhanced spin-orbit coupling and orbital moment in ferromagnets by electron correlations Subjects: Strongly Correlated Electrons (cond-mat.str-el); Materials Science (cond-mat.mtrl-sci) [12] Title: Distinct band reconstructions in kagome superconductor CsV$_3$Sb$_5$ Subjects: Strongly Correlated Electrons 
(cond-mat.str-el); Materials Science (cond-mat.mtrl-sci); Superconductivity (cond-mat.supr-con) [13] Title: Scaling of disorder operator at deconfined quantum criticality Subjects: Strongly Correlated Electrons (cond-mat.str-el); High Energy Physics - Theory (hep-th) [14] Title: Many-body energy invariant for $T$-linear resistivity Comments: 6 pages, 4 figures and 6 pages of supplementary information. Both authors contributed equally to this work Subjects: Strongly Correlated Electrons (cond-mat.str-el); Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech) [15] Title: Topological Magnons: A Review Authors: Paul McClarty Comments: 12 pages, 3 figures, to appear in Annual Review of Condensed Matter Physics Subjects: Strongly Correlated Electrons (cond-mat.str-el); Mesoscale and Nanoscale Physics (cond-mat.mes-hall) [16] Title: Charge order dynamics in underdoped La$\mathbf{_{1.6-\textit{x}}}$Nd$\mathbf{_{0.4}}$Sr$\mathbf{_\textit{x}}$CuO$\mathbf{_{4}}$ revealed by electric pulses Journal-ref: Appl. Phys. Lett. 118, 224104 (2021) Subjects: Strongly Correlated Electrons (cond-mat.str-el); Materials Science (cond-mat.mtrl-sci); Applied Physics (physics.app-ph) [17] Title: Transmission phase evolution in fully screened and overscreened Kondo impurities Authors: D. B. Karki Journal-ref: Phys. Rev. B 103, 235403 (2021) Subjects: Strongly Correlated Electrons (cond-mat.str-el) [18] Title: Quaternary-digital data storage based on magnetic bubbles in anisotropic materials Journal-ref: Phys. Rev. 
Applied 15, 064052 (2021) Subjects: Strongly Correlated Electrons (cond-mat.str-el); Materials Science (cond-mat.mtrl-sci); Applied Physics (physics.app-ph) [19] Title: Anomalous and Anisotropic Nonlinear Susceptibility in the Proximate Kitaev Magnet $α$-RuCl$_3$ Subjects: Strongly Correlated Electrons (cond-mat.str-el) [20] Title: Fluctuating spin and charge stripes in the two-dimensional Hubbard model in the thermodynamic limit Subjects: Strongly Correlated Electrons (cond-mat.str-el); Superconductivity (cond-mat.supr-con) [21] Title: Relationship between A-site Cation and Magnetic Structure in 3d-5d-4f Double Perovskite Iridates Ln2NiIrO6 (Ln=La, Pr, Nd) Subjects: Strongly Correlated Electrons (cond-mat.str-el); Materials Science (cond-mat.mtrl-sci) [22] Title: Correlated insulators, semimetals, and superconductivity in twisted trilayer graphene Subjects: Strongly Correlated Electrons (cond-mat.str-el); Superconductivity (cond-mat.supr-con) [23] Title: A unified view on symmetry, anomalous symmetry and non-invertible gravitational anomaly Subjects: Strongly Correlated Electrons (cond-mat.str-el); High Energy Physics - Theory (hep-th) [24] Title: Abelian SU$(N)_1$ Chiral Spin Liquids on the Square Lattice Comments: 41 pages, 21 figures, references added, figure 2(j) modified, abstract and introduction revised Subjects: Strongly Correlated Electrons (cond-mat.str-el); Quantum Gases (cond-mat.quant-gas) [25] Title: Spin Vortex Crystal Order in Organic Triangular Lattice Compound Journal-ref: Phys. Rev. Lett. 127, 147204 (2021) Subjects: Strongly Correlated Electrons (cond-mat.str-el)
http://physics.stackexchange.com/questions/22468/what-are-the-calculations-for-vacuum-energy?answertab=votes
# What are the calculations for Vacuum Energy?

On the wiki page, the vacuum energy in a cubic meter of free space ranges from $10^{-9}$, derived from the cosmological constant, to $10^{113}$, derived from calculations in Quantum Electrodynamics (QED) and Stochastic Electrodynamics (SED). I've looked at Baez and the references given on the wiki page, but none of them give a clear working of how these values are derived.

Can someone point me in the right direction as to how values like $10^{-9}$ are derived from the cosmological constant, or $10^{113}$ from calculations in Quantum Electrodynamics?

-

The cosmological constant is measured from WMAP data, baryonic density and standard candle distances/blueshifts; see en.wikipedia.org/wiki/Lambda-CDM_model . The so-called "vacuum energy" that one "calculates" from the Standard Model is nonsense; see the lead up to section 2.3.1 in damtp.cam.ac.uk/user/tong/qft/qft.pdf . For more on the cosmological constant with less brain-deadness, see arxiv.org/abs/1002.3966 –  genneth Mar 18 '12 at 0:49

I think section IV of this link may contain the sort of "calculation" you're asking about, although I don't have a working knowledge of the effective action formalism. –  twistor59 Mar 18 '12 at 10:21

The vacuum energy for a free field is the ground state energy of each field oscillator, ${1\over 2} \omega$, summed over all the modes. For a cubic periodic box of side-length $L$, you get

$$\sum_k {1\over 2} \sqrt{k^2+m^2}$$

where the sum is over all k's in an infinite 3d cubic lattice in which each k component is an integer multiple of ${2\pi\over L}$. When you make $L$ big, this makes the k-lattice continuous, and the sum turns into an integral:

$$\left({L\over 2\pi}\right)^3 \int {1\over 2}\sqrt{k^2+m^2}\, d^3k$$

If you put in a cutoff $\Lambda$, the result diverges as

$$E\propto V \Lambda^4$$

so that the energy density is proportional to the fourth power of the momentum cutoff. This integral reproduces the dimensional expectation when $\Lambda$ is the inverse Planck length.
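Numerically, the $E\propto V\Lambda^4$ estimate with the cutoff at the Planck scale reproduces the $10^{113}$ J/m³ figure from the question. A back-of-the-envelope Python sketch, with all loop prefactors (of order $1/16\pi^2$) deliberately dropped, so only the order of magnitude is meaningful:

```python
# rho ~ Lambda^4 in natural units, with the cutoff at the Planck energy.
# 1 GeV^4 / (hbar*c)^3 is an energy density; prefactors are dropped.
hbar_c    = 0.197327      # GeV * fm  (conversion constant hbar*c)
GeV_to_J  = 1.602177e-10  # joules per GeV
fm3_to_m3 = 1e-45         # cubic metres per cubic femtometre

# 1 GeV^4, expressed as an SI energy density in J/m^3 (~2.1e37):
gev4_in_J_per_m3 = GeV_to_J / (hbar_c**3 * fm3_to_m3)

cutoff = 1.22e19          # Planck energy in GeV
rho_qft = cutoff**4 * gev4_in_J_per_m3
print(f"{rho_qft:.1e} J/m^3")   # ~5e113 J/m^3
```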
For interacting field theories, the vacuum energy is the sum of all vacuum loop Feynman diagrams. In the free case, the loop is just a single propagator joined to itself (this is a very degenerate Feynman diagram). The signs of fermion and boson loops are opposite, and fermionic oscillators with the most natural definition of energy give an opposite sign vacuum energy in each oscillator. In a supersymmetric theory, when the Hamiltonian is presented in the form that preserves supersymmetry, the vacuum energy is zero. This is the only principle that we know today that can control the cosmological constant. The problem is that SUSY is broken in our world at a cutoff scale of about a TeV, so the cancellations in SUSY are not exact. This means that the residual non-SUSY vacuum energy has to cancel from the scale of the Higgs (at least) to the scale of the observed cosmological constant, which is many orders of magnitude smaller. You can't just get rid of vacuum energy by a natural statement that the vacuum has zero energy, because the vacuum in QCD (and in the Higgs mechanism) is full of crud. There is a pion condensate, a gluon condensate, and a Higgs condensate at the very least, and even if you make everything cancel for our exact values of the masses of the quarks and leptons, changing the mass of a quark alters the condensate energy density in extremely complicated ways, so that the subtraction constant must be tuned to a magical value with no dynamical explanation. Weinberg suggested that this is an anthropic accident: that we need to have a low cosmological constant to evolve intelligent life. This predicts that the cosmological constant should be of the exact same order as the density of matter today, after life has evolved, but no lower, since it doesn't need to be any lower. This is what is observed, so Weinberg might be right, and there might be no explanation for the cosmological constant.
If Weinberg is right, the string vacuum that describes our universe will be very special--- it will be a non-supersymmetric vacuum with an accidentally small cosmological constant. If this is an accident with no rhyme or reason, then it will be very useful in picking out the right vacuum. We'll know we have it when it produces the right cosmological constant.

-

This is very close to what I'm looking for; it's just the actual calculations that lead to such high values as 10^113 Jm^-3 that would be interesting, but if no one else can improve on this, then this is all I want. None of the references on the wiki page offer much in the way of an extensive answer. Thanks Ron. –  metzgeer Mar 22 '12 at 3:40

@metzgeer: The "actual calculations" are exactly what I did--- if you plug in $\Lambda=10^{19}\,\mathrm{GeV}$, you get the order of magnitude. Since it is so absurd, nobody bothers to do any more elaborate calculations. The value of $\Lambda$ is also sometimes taken to be $1\,\mathrm{TeV}$; this assumes that SUSY cancels vacuum energy contributions past this point. –  Ron Maimon Mar 22 '12 at 3:43

Ah... in that case all kudos to the bunny of love, question answered. Thanks. –  metzgeer Mar 22 '12 at 5:46

maybe some unaccounted fields have negative contributions that are not accounted for in our positive energy theories –  diffeomorphism Dec 18 '13 at 3:05

-

You can understand the origin of these numbers from a simple consideration of dimensional analysis and the cosmological data available. This keeps the answer intuitive, and any more complicated derivation will not change the answer substantially.

The first of your numbers, $10^{-9}$ Joules per cubic meter, is simply an empirical measurement in the framework of the Lambda-CDM model. Measurements of the CMB (WMAP), combined with type Ia supernovae, tell us that this is about the energy density of the universe, and that most of the energy density is in the form of dark energy.
We assume that the dark energy comes from a cosmological constant $\Lambda$. In natural units where $\hbar$ and $c$ are set equal to 1, a length is essentially an inverse energy. So in these units $\Lambda$ is about $10^{-46}$ GeV$^{4}$.

Here comes the essential point: if we consider the Planck mass to be the natural energy scale for the vacuum energy, then the observed energy density in the cosmological constant is too small by 122 orders of magnitude (and this is the origin of the second number - it just comes from taking the Planck mass to the fourth power in natural units).

So, the fundamental puzzle is: why is $\Lambda$ so small compared to the 'natural' scale we would expect? One way out is to argue that a different energy scale other than the Planck mass is what we should be comparing $\Lambda$ to.

-

If it is important: the Planck mass is neither large nor very small: it is roughly the amount of Vitamin D you should consume a day, and many living things are smaller, so it is not obviously fundamental, except as an indicator of where general relativistic effects and quantum mechanical effects start to confuse each other. –  Henry Mar 20 '12 at 22:51

Again, the question I'm asking about isn't about the cosmological constant, it's the Quantum Field Theory derivation. I have tried to be clear about this; see the question in the bounty. All the same, thanks for your replies. –  metzgeer Mar 21 '12 at 2:28

@metzgeer, there simply is no consensus for how to derive the vacuum energy from first principles. It is one of the biggest outstanding puzzles in physics, and a first principles derivation would revolutionize the field. Prior to the measurements of the CMB and type Ia supernovae, there was no clear indication that it would turn out to be $10^{-9}$ Joules per cubic meter. That's just what we measure. Its exceedingly small value (compared to the Planck scale) causes many physicists to invoke anthropic explanations.
–  kleingordon Mar 21 '12 at 6:17 @metzgeer And then there is also the perspective in the paper linked from the comments below your question: our current understanding of QFT can't predict the measured value of $10^{-9}$ Joules / m$^3$, but there's no "grand mystery" because the existence of $\Lambda$ in Einstein's equations can be considered to be a fundamental law in its own right. The key quote, however, is "there is no known natural way to derive the tiny cosmological constant that plays a role in cosmology from particle physics. And there is no known understanding of why this constant is not renormalized to a high value." –  kleingordon Mar 21 '12 at 8:44
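The "122 orders of magnitude" figure in the answer above follows directly from comparing the observed $\Lambda \approx 10^{-46}\ \mathrm{GeV}^4$ with the Planck mass taken to the fourth power. A quick sketch (the exact exponent shifts by a unit or so depending on conventions):

```python
import math

lambda_obs = 1e-46            # observed vacuum energy density, GeV^4
m_planck = 1.22e19            # Planck mass, GeV

natural_scale = m_planck**4   # "expected" vacuum energy density, GeV^4
orders = math.log10(natural_scale / lambda_obs)

print(f"discrepancy: about {orders:.0f} orders of magnitude")  # about 122
```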
https://pub.uni-bielefeld.de/publication/1893649
# Spin dynamics of quantum and classical Heisenberg dimers

Mentrup D, Schnack J, Luban M (1999) Physica A 272: 153.

Journal Article | Published | English
https://alexisperrier.com/aws/2017/12/04/automl_aws_data_science.html
# Build a predictive analytics pipeline in a flash

When Bayesian optimization meets the Stochastic Gradient Descent algorithm on the AWS marketplace, rich features bloom, models are trained, Time-To-Market shrinks and stakeholders are satisfied.

In this article, we present an AWS based framework that allows non-technical people to build predictive pipelines in a matter of hours while achieving results that rival solutions handcrafted by data scientists. This data flow revolves around two online services: Amazon's own Machine Learning service for model training and selection, and Predicsis.ai, a service available on the AWS marketplace, for feature engineering.

# Expectations - Frustrations

As the saying goes: to hire data scientists, call it machine learning; to raise money, call it Artificial Intelligence; and data science in all other cases. Whatever name you give to predictive analytics, it is the new gold rush for companies across all industries. This revolution impacts the way we communicate, trade and interact with the world. It's a new golden age of cybernetics that we have just begun to explore.

But predictive analytics takes time and can be frustrating. Expectations are high and, with a bullish job market, expertise is scarce. Predictive analytics workflows require a mix of domain knowledge, data science expertise, and software proficiency that is not only difficult to assemble but also challenging to bring to production.

Data scientist may be the sexiest job of the early 21st century, but it will be a few years before the surging new education programs, from bootcamps to college degrees, bring enough data scientists to the market. Companies don't have the luxury to wait as competition gears up. Within that context, AutoML offers a way to shorten project delivery time and reduce the need for junior data scientists.

Before presenting these two services, let's see where they fit within the different phases of a data science project.
# Data Science project management

A typical data science workflow can be framed as the following steps:

• Data: find, access and explore the data
• Features: extract, assess and evaluate, select and sort
• Models: find the right model for the problem at hand, compare, optimize, and fine tune
• Communication: interpret and communicate the results
• Production: transform the code into production ready code, integrate into the current ecosystem and deploy
• Maintain: adapt the models and features to the evolution of the environment

This workflow is not linear. Many iterations are necessary before first results can be interpreted and evaluated by the business. Business assessment of the results triggers new expectations and sends the team back to work on data, features and models. Finally, after several such iterations, the research code is ready to be refactored for production.

Feature engineering and model selection are the most resource intensive phases and the hardest to plan for. These two phases require data science expertise in order to tackle inherent data problems such as outliers, missing values and skewed distributions, and to make sure the models don't overfit, through cross validation and hyper-parameter tuning.

Two services available in the AWS ecosystem can help businesses reduce their needs for data science skills and speed up the discovery phase of the data science workflow: Predicsis.ai and AWS Machine Learning.

# AWS Machine Learning to the rescue

Launched in April 2015, Amazon's Machine Learning service aims at "putting data science in reach of any developer". AWS ML greatly simplifies the model training, optimization and selection phases by limiting the choice of available models to a single one, the stochastic gradient descent (SGD). By offering parameter tuning via a simple web interface, AWS ML further simplifies the modeling step while maintaining a very high predictive performance.
A simple UI and a simple yet powerful model are the key to a very efficient modeling process. The SGD is a veteran algorithm whose conception started in the 1950s with a seminal paper by Robbins and Monro titled A Stochastic Approximation Method. The SGD has been studied and optimized extensively since then. It is powerful and robust.

AWS ML goes beyond the modeling phase of the project by offering production ready endpoints with a simple click, thus removing the need for resource intensive code development.

However, before we can train a predictive model, we need to extract the variables with the biggest predictive potential from the raw data. This feature extraction phase is the most unpredictable and often requires expert domain knowledge. Schemes quickly become overcomplicated, and it's easy to end up with thousands of variables in the hope of increasing prediction performance, while the total number of variables has to be kept down, an unavoidable constraint in order to prevent overfitting.

# Predicsis.ai

This is where Predicsis.ai steps in, by boiling down the complexity of the feature engineering phase to a few clicks through smart Bayesian optimization of the feature space. Being able to automatically discover the variables that are the most predictive of the outcome, again via a simple web interface, is a game changer in a predictive analytics project.

Predicsis.ai is available on the AWS marketplace. It is the de facto companion to the AWS Machine Learning service and was selected as a key machine learning provider at the recent AWS re:Invent 2017.
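To make the "simple yet powerful model" concrete, here is a minimal from-scratch sketch of SGD training a logistic regression, in plain Python with no AWS dependency. The function and its toy data are illustrative only, not part of any AWS API:

```python
import math, random

def sgd_logistic(data, lr=0.1, epochs=500, seed=0):
    """Train logistic-regression weights with stochastic gradient descent.

    data: list of (features, label) pairs, label in {0, 1}.
    One randomly chosen example updates the weights at each step,
    which is what makes the method 'stochastic'.
    """
    rng = random.Random(seed)
    n_features = len(data[0][0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):   # shuffled pass over the data
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))          # predicted probability
            err = p - y                             # gradient of the log-loss wrt z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy usage: learn a simple AND-like rule.
train = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = sgd_logistic(train)
pred = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x, _ in train]
print(pred)  # [0, 0, 0, 1]
```

AWS ML wraps exactly this kind of update loop (plus regularization and feature transforms) behind its web interface, which is why the only knobs exposed to the user are the learning parameters.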
With these two services in mind, the entire predictive analytics pipeline now boils down to:

• Build the raw dataset
• Transform it into a powerful training dataset with Predicsis.ai
• Use that dataset to train a model on AWS ML
• Create an endpoint for production with AWS ML

With AWS ML and Predicsis.ai, building a predictive pipeline from raw data to production endpoints can now be done in a few hours by team members with no data science skills. People from the sales or marketing department can explore, assess and create efficient pipelines for their prediction targets.

The AWS ML service is well documented. We will focus here on Predicsis.ai and its use of Bayesian optimization to build feature rich datasets. AutoML is what drives Predicsis.ai.

# AutoML

AutoML means several things to different people but overall, AutoML is considered to be about algorithm selection, hyperparameter tuning of the model, iterative modeling, and model assessment. Without claiming exhaustiveness, some of the currently available AutoML platforms and actors are:

• H2O's AutoML, which automates the process of training a large selection of candidate prediction models.
• Auto-sklearn, which recently won the ChaLearn Automatic Machine Learning Challenge. Auto-sklearn is built on top of scikit-learn and includes 15 ML algorithms, 14 preprocessing methods, and all their respective hyperparameters, for a total of 110 hyperparameters, and optimizes hyperparameter selection through Bayesian optimization.
• TPOT, which automatically optimizes a series of feature preprocessors and models that maximize the cross-validation accuracy on the data set. It is also scikit-learn compatible.
• A precursor of AutoML is Auto-WEKA, which considers the problem of simultaneously selecting a learning algorithm and setting its hyperparameters
• And of course DataRobot, a leader in automated machine learning

To recapitulate, these AutoML libraries focus on optimizing

• the model selection: deciding whether to choose an SVM over a random forest or a gradient boosted machine.
• the hyperparameter tuning: which kernel should be used for the SVM, how to set the learning rate for the Stochastic Gradient Descent or the number of trees for the random forest?

and in some cases

• the transformations to be applied to the training data: should the data be one-hot encoded or normalized through a Box-Cox transformation?

While these approaches are very effective at creating powerful models, we are still left with the task of engineering the most predictive variables straight out of the raw data. The challenge is even more complex when dealing with hierarchical data.

# Hierarchical data

Consider for instance the case of predicting a potential buyer's behavior based on three different sources: the customer's current demographic profile, a trace of that person's online behavior, and a history of communication through emails and texts. We could probably also add that person's social network activities, emails or geolocation to the dataset. These different datasets come from different sources that can be aggregated to form a global customer centric hierarchical dataset.

For a single person we end up with a complex mix of many variables that come in all forms and shapes: categories, flags, tags, continuous values as well as text and time series. The potential feature space expands very quickly. Having such a large feature space triggers several problems such as multi-collinearity, overfitting and what is known as the curse of dimensionality, where the feature space becomes too sparse for the algorithm to be able to effectively infer the right signal for predictions.
On top of that, there's always the potential for a yet undiscovered mix of variables to hold even better predictive potential than the ones already engineered. If we want to reduce the time it takes to engineer and score these different variable mixes, we need to automate the construction of variables. The good news is that we can adapt the model centric AutoML approach to the feature space. In other words, the Bayesian approach to hyperparameter tuning can also be applied to the feature space, with the goal of having a concise and powerful training dataset.

# Bayesian optimization as feature fertilizer

Bayesian based feature optimization is presented in the paper Towards Automatic Feature Construction for Supervised Classification by Marc Boullé from Orange Labs. The process can be decomposed as follows:

1. Define an initial feature space from the original raw hierarchical data.
2. Extend the initial feature space by applying a pre-defined series of transformations on the data.
3. Score the constructed variables with an evaluation criterion based on a Bayesian approach.
4. Filter out variables that score below 0 (worse than random).

With that approach it becomes possible to use all the available data as input, set the dimension of the overall feature space considered for evaluation, and dictate the final number of features in the expected training dataset. This is exactly what Predicsis.ai allows. That optimized training dataset is then fed to the AWS ML service for model training and selection.

# Real life project

We will now show how to use Predicsis.ai in combination with AWS ML to implement a buyer prediction model based on multi-relational data.
The raw data is extracted from 4 tables:

• The Master table holds the customer's profile information, with demographic data as well as purchase history
• The Orders table has order information (status, amount, web site, …) as well as the user's web signature (OS, browser, mobile, …)
• The Emails table holds email campaign related information (campaign name, recipient action, …)
• The Visited Pages table, where the customer's online behavior is recorded.

From that data, we want to predict whether the customer will buy again from our store. The outcome corresponds to the LABEL column in the Master file. The whole dataset can be downloaded here.

We will first build the optimized dataset via Predicsis.ai and then use that dataset to train a model on AWS ML.

# Predicsis.ai in action

As mentioned before, Predicsis.ai is available on the AWS Marketplace, but you can also try out Predicsis.ai feature optimization by signing up for a free trial on their website. To try out the AWS marketplace version of Predicsis.ai:

• Login to your AWS account and go to the AWS marketplace Predicsis.ai page
• Start the EC2 instance.
• Copy the instance's URL into your favorite browser

You are now ready to start working with the Predicsis.ai platform. First create a new project and upload all your data (Master, Orders, Emails and VisitedPages). The workflow is split between feature importance exploration followed by feature selection and data export.

Start by exploring the Master file, since it contains the outcome variable (LABEL). The following screenshot shows the relative importance of the different variables from that file. For instance, we see that the age variable contributes 15% of the model prediction and is split into 3 bins, each with a different coverage and frequency. This visualization gives us a good way to understand what really drives our predictions.
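The per-customer aggregates that such a tool derives from a child table like Orders can be sketched by hand; the toy rows and column names below are invented for illustration (the real dataset's columns will differ):

```python
from statistics import mean

# Invented toy rows standing in for the Orders table.
orders = [
    {"customer_id": 1, "amount": 40.0, "status": "paid"},
    {"customer_id": 1, "amount": 60.0, "status": "paid"},
    {"customer_id": 2, "amount": 15.0, "status": "cancelled"},
]

def order_aggregates(orders):
    """Roll the child table up to one row per customer, the way
    relational feature construction flattens hierarchy into columns."""
    by_customer = {}
    for row in orders:
        by_customer.setdefault(row["customer_id"], []).append(row)
    feats = {}
    for cid, rows in by_customer.items():
        amounts = [r["amount"] for r in rows]
        feats[cid] = {
            "n_orders": len(rows),
            "mean_amount": mean(amounts),
            "max_amount": max(amounts),
            "n_cancelled": sum(r["status"] == "cancelled" for r in rows),
        }
    return feats

feats = order_aggregates(orders)
print(feats[1])  # {'n_orders': 2, 'mean_amount': 50.0, 'max_amount': 60.0, 'n_cancelled': 0}
```

Each aggregate becomes one candidate column joined onto the Master table; the scoring-and-filtering step then decides which of these candidates are worth keeping.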
As shown in the next screenshot, our first model achieves a Gini score of 0.69, with Gini = 2 * AUC - 1 where AUC is the classic Area Under Curve. Now we can improve on that model by adding the other data files to the feature mix. In the next screenshot, we add the Orders file and tell the model to generate 100 new feature aggregates and only keep a total of 30 features for our model. We explore the feature space and at the same time limit its dimension. Doing so will bring new features to the surface and create a new model with improved performance.

As we see below, our model's performance has been significantly improved from 0.69 to 0.87 while keeping the total number of features at 30: 6 of which come from the Master file, and 24 are new aggregates obtained from the Orders file.

We can iterate that feature generation process and add the other files (VisitedPages, Emails) to the feature mix, have the platform create and assess new aggregates, and only keep the most important and powerful variables.

This is just a glimpse of Predicsis.ai capabilities. Predicsis.ai also offers rich visualizations to assess and understand the lift and cumulative gains related to the population slice. Note that all these steps can also be carried out using the Predicsis.ai Python SDK.

The aggregation functions are quite simple (simple selects, binning or simple stats: mean, std, median). These functions can for the most part be implemented in SQL at the data extraction level, directly from the database, and are therefore straightforward to apply on unseen production data. It's also possible to use Predicsis.ai in batch mode to materialize those features on any new data with the same relational structure.

We can now export our optimized dataset to AWS ML and build a prediction model. To do so we follow the standard AWS ML workflow:

• Upload the training data to S3
• Define a data source. Let AWS ML infer the schema, double check and validate it.
• Let AWS create the training and testing subsets.
• AWS ML will suggest default transformations, aka recipes, which consist mostly of binning continuous variables. These binning transformations are important to achieve good predictions. You can either accept the suggested bins or the ones inferred by Predicsis.ai.
• Train the model. With parameter tuning reduced to the bare minimum, you will only have to choose the type and strength of regularization
• Wait a few minutes for the evaluation results.
• If satisfied with the prediction score, create an endpoint with a click.

You now have a fully trained predictive model, based on a highly optimized feature set, all in a matter of minutes without involving any scripting. For that particular dataset, we obtain an AUC of 0.87 with the Predicsis.ai boosted dataset, which is very decent for such an imbalanced dataset with so little effort.

# Conclusion

As predictive analytics becomes a must have for companies big and small, the need for fast and reliable data pipelines that can be easily understood and interpreted grows stronger. Using existing AWS based services available through simple UIs, we are able to build efficient end to end predictive models: from multi-relational raw data to optimized training datasets, powerful models and stable production endpoints.

The reliability of this approach is grounded in the robust mathematical methods that these two services are built upon: Bayesian optimization for feature extraction and stochastic optimization for model training. These methods have been studied for decades and can be safely and simply exploited by business users for real life data projects.
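The Gini/AUC conversion used throughout these scores is a one-liner; with it, the Gini values quoted earlier (0.69 and 0.87) correspond to AUCs of 0.845 and 0.935:

```python
def gini_from_auc(auc):
    # Gini = 2 * AUC - 1: a random classifier (AUC 0.5) scores 0,
    # a perfect one (AUC 1.0) scores 1.
    return 2 * auc - 1

def auc_from_gini(gini):
    return (gini + 1) / 2

print(auc_from_gini(0.69), auc_from_gini(0.87))  # 0.845 and 0.935 (up to float rounding)
```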
https://www.physicsforums.com/threads/pressure-ratio-problem.310998/
# Pressure ratio problem

1. Apr 30, 2009

### Bohrok

1. The problem statement, all variables and given/known data

Calculate the pressure ratio of He to N2 at which helium would have the same density as nitrogen if their temperatures were the same.

2. Relevant equations

I used D = m/v

3. The attempt at a solution

DHe = mHe/v
DN2 = mN2/v

Both gases occupy the same volume, so just v for both. Since DHe = DN2, mHe/v = mN2/v and mHe = mN2

For some x and y,
x mol He (4.003 g/mol He) = mHe
y mol N2 (28.01 g/mol N2) = mN2
4.003x g = 28.01y g
x = 7y

To me it looks like there are 7 times as many moles of He as N2 but I doubt that would directly apply to their pressure ratios. I think I'd have to use PV = nRT but I'm not sure how I'd put it in. I was actually helping some chemistry students earlier today with this and am hoping I can have the answer ready for them tomorrow morning.

Last edited: May 1, 2009

2. May 1, 2009

### Staff: Mentor

Honestly, I have no idea what the question asks. Oxygen? And why do you use neon in your calculations? Could be what you did is OK, but with all these typos/inconsistencies it is not.

3. May 1, 2009

### Bohrok

I fixed the Ne and oxygen; I have an older edition of the book than the one the students are using and I hadn't quite changed everything to match the problem in their book. Should be alright now.

4. May 1, 2009

### Staff: Mentor

OK. Now, knowing the ratio of numbers of moles, try to calculate the ratio of pressures using PV=nRT. Don't be surprised if everything cancels out.

5. May 4, 2009

### Bohrok

I think I got it now (don't know why I didn't look at it like this before)

I found that nHe/nN2 = 7/1, and using P = nRT/V,
PHe/PN2 = (nHeRT/V)/(nN2RT/V)
PHe/PN2 = nHe/nN2 = 7/1
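The arithmetic in the thread checks out; a quick numeric confirmation of the final ratio (a sketch added for illustration, not part of the original thread):

```python
M_He = 4.003    # molar mass of He, g/mol
M_N2 = 28.01    # molar mass of N2, g/mol

# Equal density, volume and temperature  =>  n_He * M_He = n_N2 * M_N2.
# With PV = nRT at equal V and T, the pressure ratio equals the mole ratio:
ratio = M_N2 / M_He     # P_He / P_N2 = n_He / n_N2
print(round(ratio, 3))  # 6.997, i.e. about 7
```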
http://tex.stackexchange.com/questions/63396/table-of-contents-for-thesis
I am writing my thesis and would prefer my table of contents to appear like

    Table Of Contents

centered, and not

    Contents

aligned to the left. If I use \tableofcontents it appears to the left. The other question is how do I take away chapters from my table of contents so that it appears like this:

    Table of Contents
    Chapter
    1 General information
    1.1 data
    1.2 good
    2 Background
    2.1 yes
    2.2 hello

Can anybody come to my aid? Thanks.

-

Welcome to TeX.StackExchange. You should ask one question at a time so it is easier for others with the same problem to find it. –  Spike Jul 15 '12 at 15:40

Welcome to TeX.SE! Please provide a bit more information about the setup of your document. E.g., which document class do you use? And, do you load any LaTeX packages that affect the look of the table of contents, i.e., what's produced by the \tableofcontents command? –  Mico Jul 15 '12 at 15:41

Please also clarify the intended look of the body of the ToC: Do you want to suppress all page numbers (which would normally be shown at the right-hand end of each row)? –  Mico Jul 15 '12 at 15:49

I recommend you take a look at the tocloft package. It provides lots of commands to modify the table of contents (and the list of figures, the list of tables, and similar lists). The following MWE illustrates the usage of the package, using some of the criteria outlined in your posting:

```latex
\documentclass{book}
\usepackage{tocloft}

\renewcommand{\cfttoctitlefont}{\hfill\large\bfseries}
\renewcommand{\cftaftertoctitle}{\hfill}
\renewcommand\cftchapfont{\mdseries} % default: \bfseries

\cftpagenumbersoff{chapter} % suppress mention of page numbers for chapters, etc
\cftpagenumbersoff{section}
\cftpagenumbersoff{subsection}

\begin{document}
\frontmatter
\tableofcontents
\mainmatter
\chapter{General information}
\section{Data}
\section{Good}
\chapter{Background}
\section{Yes}
\section{No}
\end{document}
```

-

omg, whats with that background color?
–  canaaerus Jul 15 '12 at 16:16 @canaaerus: I have no idea what's going on: When I view the image in a chrome browser window (or in a safari window on my iPad) the background is plain (i.e., white, transparent), as I intend it to be. However, when I view it in an IE browser window the background is a soft baby-blue... I have no idea what may be causing this oddity. –  Mico Jul 15 '12 at 16:28 @mico It's also blue in FF 13 on a Mac. –  Alan Munn Jul 15 '12 at 16:34 Strangely in gimp it is white too, but the image properties show under color profile: Lenovo ThinkPad, LCD Monitor, Lenovo ThinkPad LCD Monitor, Copyright (c) 2005 Lenovo Corporation, WhitePoint : D65 (daylight). Maybe some libs are acting on this WhitePoint and adjust the color?!? –  canaaerus Jul 15 '12 at 16:38 @canaaerus -- thanks for providing the pointer about the whitepoint profile being embedded. I've changed the save .png settings to exclude any embedding of color profiles. Hopefully this will do the trick. –  Mico Jul 15 '12 at 18:17
http://www.sciencechatforum.com/viewtopic.php?t=1537&start=0
## Membrane theory and the decline of scientific method

This is not an anything-goes forum, but rather a place to ask questions and request help for developing your ideas. Moderators: BioWizard, Marshall

### Membrane theory and the decline of scientific method

Nanometer-minded persons in science

V.V. Matveev and D.N. Wheatley. "Fathers" and "sons" of theories in cell physiology: the membrane theory. Cell. Mol. Biol., 51(8): 797-801, 2005.

Abstract. The last 50 years in the history of the life sciences are remarkable for a new and important feature that looks like a great threat to their future. The profound specialization that dominates quickly developing fields of science is causing a crisis of the scientific method. The essence of the method is the unity of two elements: the experimental data and the theory that explains them. Classically, the "fathers" of science were the creators of new ideas and theories. They were the true experts on their own theories. It is only they who had the right to say: "I am the theory". In other words, they were the carriers of theories, of theoretical knowledge. The fathers provided the necessary logical integrity to their theories, since theories in biology have yet to be based on strict mathematical proofs. This is not true of the sons. As a result of massive specialization, modern experts operate in very confined spaces. They formulate particular rules far from the level of theory. The main theories of science are known to them only at the textbook level. Nowadays, nobody can say: "I am the theory". With whom, then, is it possible to discuss science today at a broader theoretical level? How can a classical theory, for example the membrane theory, be changed or even disproved under these conditions? How can the "sons", with their narrow education, catch sight of the membrane theory's defects? As a result, "global" theories have few critics and little control over them.
Due to specialization, we have lost the ability to work at the experimental level of biology within the correct or appropriate theoretical context. The scientific method in its classic form is now being rapidly eroded. A good case can be made for "Membrane Theory", to which we will largely refer throughout this article. Find the full text here: http://www.actomyosin.spb.ru/fathersandsons.htm The illustration for the article: Forum Neophyte Posts: 38 Joined: 08 May 2006 Location: Russia

Since when are membranes a "theory," and since when must biological theories be based on "strict mathematical proofs"? I'd say that it's extremely rare for biological theories to be based on strict mathematical proofs; the only example that comes to mind is the Hardy-Weinberg theorem, although I'm sure there must be other examples. Hardstreet Member Posts: 566 Joined: 21 Jan 2006

The way I see it, specialization doesn't abolish the ability to theorize. Quite the contrary: it opens up a whole new level of detail. I agree, however, that there is no more space for the kind of theoretical speculation/philosophical musing done by the philosophers and early scientists of the past, but that's only because our knowledge base has increased significantly. When scientific fact replaces speculation, science moves on to the next problem, unwrapping the workings of biology one layer at a time. I do however believe very strongly in interdisciplinary discussion for the sake of appreciating big pictures (which is one of the main goals of this website). Most of the scientists I work with are not interested in much beyond the set of proteins or genes they have specialized in. That doesn't mean, however, that science as a whole is facing a crisis. I see it as having more "technicians", not fewer "scientists" (no offense intended to anyone). Oh, and by the way, what is a theory of the membrane? The lipid bilayer mosaic model? That's what you want to dispute? Didn't enough time and work go into validating that model?
Why waste more time on something that has been extensively examined? Wouldn't that bring science to a standstill? BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

I like your style of writing... also the painting... awesome! What I don't understand is why you think the scientific method is "eroding." Also, what exactly is the membrane theory? When I checked out the link it was exactly what you posted and a picture. Ohh... Welcome to the forums! :) darwinlemmings

BioWizard: One can imagine two situations: (i) Einstein's theory with a lot of details, and (ii) a lot of details without the theory. Cell physiology, I think, has been put in the second situation. BioWizard and darwinlemmings: "...what is a theory of the membrane?"---I recommend the references: darwinlemmings: the scientific method is eroding because you cannot test a theory if you know only a lot of details. A theory can be checked by itself or by another theory, not by "facts". What about this? Thanks Forum Neophyte Posts: 38 Joined: 08 May 2006 Location: Russia

Ok, let me put it this way. Can you please state your theory of the cell membrane in physiology? BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

I have been thinking about this a little. Although this specific topic is well outside my field, as a rule, following Kuhn's ideas, the "theory" is what tells you just what counts as relevant data and why, guides you in deciding what further experiments you should or could perform and why, allows you to state that your conclusions or results may have some application beyond what you are doing, etc. Without a theoretical framework, how would you decide whether "cell membrane theory" (and your critique) should be generated in a biology lab, a nuclear physics lab, or a history seminar? Sounds to me like (as Bio noted above) you are so deep within the technical details that you are just unaware of the overall theoretical framework.
This is actually not at all uncommon. In many branches of science, and for most people who operate in the sciences most of the time, what Kuhn called "normal science" prevails. When working within "normal science", most people go about their problem and puzzle solving, following routine methodology (a little Feyerabend here) and following what seems like the "common sense" of the craft. But, as a consequence, these people may well be completely unaware that the problems and methodologies employed stem from the theoretical paradigm they work in, and that the "common sense" is a direct product of working for extended periods of time within that theoretical paradigm without really critically considering it. As I said, not at all unusual. Forest_Dump Forum Moderator Posts: 8097 Joined: 31 Mar 2005 Location: Great Lakes Region

I agree with all that, Forest. However, I have yet to find out what a new theory of the membrane entails. BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

### Inside structure of the living cell

To BioWizard: Modern membrane physiology is a physiology of the soap bubble. We need a physiology based on the inside structure of the living cell. You may find some ideas for this in my article: Vladimir Matveev. Protoreaction of Protoplasm. Cell. Mol. Biol. 51(8): 715-723, 2005. See the full text here: http://www.actomyosin.spb.ru/protoreaction.htm Forum Neophyte Posts: 38 Joined: 08 May 2006 Location: Russia

Ohhh... are you talking about the whole thing with studying bubbles to get an idea about primitive cells? 8) that's cool 8) darwinlemmings

### Bubbles...

If you study only bubbles you will get nothing... Forum Neophyte Posts: 38 Joined: 08 May 2006 Location: Russia

Modern membrane physiology is a physiology of the soap bubble. That's not true. I've seen formulas that describe the physical properties of the cell based on cytoskeletal properties.
The internal compartment of the cell is no longer considered the liquid interior of a bubble, but a meshwork of microfilaments, intermediate filaments, and microtubules. BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

Studying bubbles can get you places! I thought you were a bubblologist too... I must have been mistaken. But seriously, there are studies on bubble/lipid-like substances that could have appeared on the early Earth and possibly acted as the membrane/cytoplasm of early cells. Some of these bubble things exhibit properties of "life" such as reproduction, growth, and an extremely primitive form of metabolism. darwinlemmings

### Bubbles, bubbles, and bubbles again!

To BioWizard: "Cytoskeletal" approaches include the basic principles of the membrane theory---the free state of K+ inside the cell, the sieve function of the plasma membrane, the pumping function of the Na-K-ATPase, and so on (see http://www.bioparadigma.spb.ru/files/Li ... unking.pdf for more details). Joining the cytoskeleton to the membrane is a contribution of the cytoskeletonists only.

To darwinlemmings: Coacervates do the same (see Oparin's works). But they have no membranes with pumps and carriers. Everybody investigates bubbles; nobody investigates coacervates, because the physiology MUST BE a bubble physiology. Save yourself if you want to study coacervates: referees will become angry. By the way, an article about referees: Pollack GH. Revitalizing science in a risk-averse culture: reflections on the syndrome and prescriptions for its cure. Cell Mol Biol (Noisy-le-grand). 2005 Dec 16;51(8):815-20. Forum Neophyte Posts: 38 Joined: 08 May 2006 Location: Russia

What do ion pumps and membrane potential have to do with a mechanical model of the cytoskeleton? Do you think you can clearly state what theory you are trying to put forth, or otherwise what existing theory you are trying to debunk, and why? I recently had a course on cell physiology. Not once have I heard the cell described as a bubble.
In fact, I positively heard one of the lecturers stress that the cell cannot be considered a vesicle, due to the presence of the cytoskeleton, which gives the cell exquisite mechanical properties very different from a simple liposome. BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

Oh, and by the way, why does anyone need to disprove anything concerning the Na+K+ channel? High-resolution structures of most of these channels have been deposited in the PDB, with detailed accounts of the function of these channels using X-ray crystallography and biochemical assays (in vitro). Some of these papers were just published this year. BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

### Know TWO languages

Dear BioWizard, Think of this forum as special lectures about membrane potentials and cell volume regulation. To better understand these lectures, please read this: You will find short extracts from Ling's last book here: These works of Ling's are available too: Somebody who knows TWO languages understands the nature of language BETTER. Forum Neophyte Posts: 38 Joined: 08 May 2006 Location: Russia

If you cannot sum up what you are trying to say (or at least give a glimpse of it) in a single post, I will not spend time sifting through hundreds of pages that I don't yet believe bear any merit for me. You keep saying that there are many flaws in our current understanding of the membrane. All I'm asking is for you to show me one flaw, so I can be convinced that you are on to something there. I find your analogy with languages a bit inaccurate. There is only one truth, and knowing different theories doesn't necessarily bring you closer to it. BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

### Science is not a book of cartoons

Dear BioWizard, Yes, there is only one truth, but if it is already discovered, why is science needed?
There is no truth at the front line of research; thus, many points of view on the same subject exist. Discovered truth is used only in factories. Do you want to be a worker in some factory, in a sausage factory, for example? You have two ways: to get new scientific ideas, or to save your time. You have chosen the latter in this case. It's your problem. It's impossible to study science by reading abstracts only or by looking at cartoons. There is no need to sift through hundreds of pages; it's quite enough to read a few key pages. As the French say, you don't need to drink a barrel of wine to understand its taste. I hope you well understand that I cannot spend time packing hundreds of pages into a few lines of abstract for everybody who is too lazy to read 3 pages. Good luck! Forum Neophyte Posts: 38 Joined: 08 May 2006 Location: Russia

All I asked is for you to point out a single flaw in the mainstream understanding of cell membrane physiology, and you have failed to do that. It isn't about being lazy, but more about you being able to clearly demonstrate your idea(s) on a message board. If you are a published scientist, you should know that before any journal agrees to publish your work, they ask you to submit an abstract. If your abstract doesn't demonstrate the merit of your work, they won't even ask for your data. Being able to present your case in a couple hundred words is one of the simplest skills a scientist has to have. I am sorry that you felt that way (and had to call me lazy), but what I asked from you is extremely simple. Present an abstract that is relevant to the science at hand, rather than a philosophical attack devoid of any science. If you are not willing to type a few words to explain your theory, I doubt anyone will be willing to read 76 pages to get to your point.

Vladimir Matveev wrote: Yes, there is only one truth, but if it is already discovered, why is science needed?
I specifically said: There is only one truth, and knowing different theories doesn't necessarily bring you closer to it. Cheers. BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

In fact, you specifically said that the cell is modelled as a liposome bubble, and I specifically said that it's an outdated model that has long been discarded. So good luck with your theory, whatever it is :) BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

Wouldn't you rather have someone summarize something into basically raw fact and key information rather than reading an entire book on something that can be summed up easily? I still don't understand why you think that studying coacervates is such a problem. darwinlemmings

BioWizard: "Why does anyone need to disprove anything concerning the Na+K+ channel? High resolution structures of most of these channels have been deposited to the PDB." ***The Na+K+ channel makes sense if the membrane separates two solutions. But according to Ling's approach the cell interior is NOT a solution; it is a gel where the properties of water are changed as a result of its interaction with proteins. From this point of view the Na+K+ channel is not a CHANNEL; it is a real protein structure connecting the water outside the cell with the gel inside. It is a very important difference. Why? Read the references I gave above. So the channel MUST be regarded as a receptor (sensor) only. This statement has a lot of consequences.

BioWizard: In fact, you specifically said that the cell is modelled as a liposome bubble, and I specifically said that it's an outdated model that has long been discarded. ***The membrane theory (MT) is 100 years old. Research on cell structure was done independently of MT dogmas. As a result we have a lot of data inconsistent with the theory. Interestingly, those who study cell structure do not know MT exactly, while those who established the main principles of MT are now dead.
darwinlemmings: "Wouldn't you rather have someone summarize something into basically raw fact and key information..." ***Some peculiarities of Ling's approach: (i) solute distribution between the cell and the medium is ruled by its adsorption (or not) onto intracellular structures (the high K+ "concentration" inside the cell is accounted for by its binding to proteins; no pumps); (ii) bound intracellular water is the real barrier for extracellular solutes, not the plasma membrane; (iii) ATP is not the main energy source for the cell; the real source is an energy transformation accompanying the cycle: protein-water-K+-ATP complex ⇌ protein + water + K+ + ADP + Pi; (iv) the cell does not have enough energy to supply even the Na-K pump alone postulated by MT. Ling's approach and MT are global theories at the cell scale. Therefore more and more questions will appear in your heads, friends. Please read at least a few key pages of Ling's papers. It is not necessary to become a preacher of Ling's theory. This theory will give you the capability (together with other knowledge) to understand scientific problems much more deeply! Deep understanding of something important is the best aim for all of us. Good luck! Once again, these works of Ling's are available: Forum Neophyte Posts: 38 Joined: 08 May 2006 Location: Russia

Ok, now we're getting somewhere that we can discuss specific points. I don't need to look at your references, unless I have to check experimental results.

***The Na+K+ channel makes sense if the membrane separates two solutions. But according to Ling's approach the cell interior is NOT a solution; it is a gel where the properties of water are changed as a result of its interaction with proteins. From this point of view the Na+K+ channel is not a CHANNEL; it is a real protein structure connecting the water outside the cell with the gel inside. It is a very important difference. Why? Read the references I gave above. So the channel MUST be regarded as a receptor (sensor) only. This statement has a lot of consequences.

The cytoplasm is not a gel.
The cytoplasm is, however, filled with a meshwork of filaments and microtubules. These networks, composed of microfilaments, intermediate filaments, and microtubules, give rigidity to the cell and give its cytoplasm gel-like properties. However, this doesn't at all say that the cytoskeleton is not interspersed with a solution, whose ionic concentrations have been worked out to nanomolar quantities. In that respect, the ion channel is actually interfaced by real solutions on both sides.

***The membrane theory (MT) is 100 years old. Research on cell structure was done independently of MT dogmas. As a result we have a lot of data inconsistent with the theory. Interestingly, those who study cell structure do not know MT exactly, while those who established the main principles of MT are now dead.

Errrrkaayyy. So why are we supposed to worry about it? Structural biologists and molecular biophysicists don't really care about outdated theories and models. Falsified models are discarded, and then science moves forward. I haven't heard anyone nag about Bohr's model of the atom in a very long time.

***Some peculiarities of Ling's approach: (i) solute distribution between the cell and the medium is ruled by its adsorption (or not) onto intracellular structures (the high K+ "concentration" inside the cell is accounted for by its binding to proteins; no pumps);

Not true. Increased intracellular K+ concentrations are caused by the Na+/K+-ATPase, Kir, Kgnc, and other K+ channels.

(ii) bound intracellular water is the real barrier for extracellular solutes, not the plasma membrane;

Absolutely meaningless. You are not only challenging MT here, but all of chemistry and biophysics. Water cannot act as a barrier for a solute (hence the use of the term solute). The plasma membrane acts as a thermodynamic barrier to charged and/or hydrophilic species.
(iii) ATP is not the main energy source for the cell; the real source is an energy transformation accompanying the cycle: protein-water-K+-ATP complex ⇌ protein + water + K+ + ADP + Pi; (iv) the cell does not have enough energy to supply even the Na-K pump alone postulated by MT.

Many of these pumps are assisted by other gradients, like glucose or osmotic gradients. The overall $\Delta G$ is sufficient to drive the translocation of the ion. Everywhere I've read about it, it clearly says that experimental data confirm the molecular mechanism of action. However, you are claiming fraudulent interpretation of data, so I will have to look more closely into the experimental design of those studies, and those of Mr. Ling. BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

YOU CALL THESE "KEY PAGES"?! THE FIRST ARTICLE HAS 76 PAGES! :evil: darwinlemmings

To BioWizard: You understood MT well, congratulations. But you do not know Ling's arguments. However, you do not need them if MT is Holy Writ for you. Your choice is your fate. To darwinlemmings: half of the pages are the Appendix, do not worry... Forum Neophyte Posts: 38 Joined: 08 May 2006 Location: Russia

If the arguments you made here are quoted from Ling's work, I really don't think I should spend any time reading his papers :) BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

It's not arguments, it's statements... Forum Neophyte Posts: 38 Joined: 08 May 2006 Location: Russia

Well, those statements are trying to argue something, aren't they? (hint: MT) BioWizard Posts: 10602 Joined: 24 Mar 2005 Location: United States Blog: View Blog (3)

Those statements are trying to draw your interest to a new approach and new ideas... Forum Neophyte Posts: 38 Joined: 08 May 2006 Location: Russia
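The energy bookkeeping argued over in this thread can be sketched numerically. A standard back-of-the-envelope estimate (the concentrations and potential below are typical textbook values assumed for illustration, not taken from the thread) uses the free energy of moving one mole of ion between compartments, ΔG = RT ln(c_dst/c_src) + zF(V_dst − V_src):

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # body temperature, K
Vm = -0.070  # membrane potential (inside minus outside), V

def transport_dG(c_src, c_dst, z, dV):
    """Free energy (J/mol) to move an ion of valence z from src to dst.
    dV is the potential of dst minus the potential of src."""
    return R * T * math.log(c_dst / c_src) + z * F * dV

# Na+ pumped out: ~12 mM inside -> ~145 mM outside, against a +70 mV step
dG_Na = transport_dG(0.012, 0.145, z=+1, dV=-Vm)
# K+ pumped in: ~4 mM outside -> ~140 mM inside, helped by the -70 mV step
dG_K = transport_dG(0.004, 0.140, z=+1, dV=Vm)

# One Na+/K+-ATPase cycle moves 3 Na+ out and 2 K+ in per ATP hydrolyzed
dG_cycle = 3 * dG_Na + 2 * dG_K
print(dG_Na, dG_K, dG_cycle)  # roughly 13e3, 2.4e3 and 44e3 J/mol
```

With these assumed values one pump cycle costs on the order of 44 kJ/mol, comfortably below the roughly 50-60 kJ/mol released by ATP hydrolysis under cellular conditions, which is the mainstream response to claim (iv) above.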
https://electronics.stackexchange.com/questions/86489/relation-and-difference-between-fourier-laplace-and-z-transforms/86496
# Relation and difference between Fourier, Laplace and Z transforms I have become a bit confused about these topics. They've all started looking the same to me. They seem to have the same properties such as linearity, shifting and scaling associated with them. I can't seem to put them separately and identify the purpose of each transform. Also, which one of these is used for frequency analysis? I couldn't find (with Google) a complete answer that addresses this specific issue. I wish to see them compared on the same page so that I can have some clarity. The Laplace and Fourier transforms are continuous (integral) transforms of continuous functions. The Laplace transform maps a function $f(t)$ to a function $F(s)$ of the complex variable s, where $s = \sigma + j\omega$. Since the derivative $\dot f(t) = \frac{df(t)}{dt}$ maps to $sF(s)$, the Laplace transform of a linear differential equation is an algebraic equation. Thus, the Laplace transform is useful for, among other things, solving linear differential equations. If we set the real part of the complex variable s to zero, $\sigma = 0$, the result is the Fourier transform $F(j\omega)$ which is essentially the frequency domain representation of $f(t)$ (note that this is true only if for that value of $\sigma$ the formula to obtain the Laplace transform of $f(t)$ exists, i.e., it does not go to infinity). The Z transform is essentially a discrete version of the Laplace transform and, thus, can be useful in solving difference equations, the discrete version of differential equations. The Z transform maps a sequence $f[n]$ to a continuous function $F(z)$ of the complex variable $z = re^{j\Omega}$. If we set the magnitude of z to unity, $r = 1$, the result is the Discrete Time Fourier Transform (DTFT) $F(j\Omega)$ which is essentially the frequency domain representation of $f[n]$. 
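The σ = 0 relationship above can be checked numerically for a concrete signal (chosen here for illustration): for f(t) = e^{-at}u(t) the Laplace transform is 1/(s + a), its ROC contains the imaginary axis, and evaluating at s = jω should match a direct numerical Fourier integral.

```python
import numpy as np

a = 1.0  # decay rate of f(t) = e^{-a t} u(t)
w = 2.0  # test frequency omega, rad/s

t = np.arange(0.0, 50.0, 1e-4)  # f(t) has decayed to ~e^{-50} by t = 50
dt = t[1] - t[0]
f = np.exp(-a * t)

# Laplace transform of e^{-a t} u(t) is 1/(s + a); substitute s = j*omega
F_laplace = 1.0 / (1j * w + a)

# Direct numerical Fourier integral of the same signal
F_fourier = np.sum(f * np.exp(-1j * w * t)) * dt

print(abs(F_laplace - F_fourier))  # ~0: both agree on the imaginary axis
```

The agreement holds exactly because the ROC (Re s > −a) includes the jω axis; for a growing signal the Fourier integral would diverge and the substitution would be invalid.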
• The s in the Laplace Transform is a complex number, say a+j$\omega$, so its a more general transform than the completely imaginary Fourier. In fact, so long as you're in the Region of Convergence, it's fair game to go back and forth between the two just by replacing j$\omega$ with s and vice versa – Scott Seidman Oct 25 '13 at 16:19 • I find it useful to think of the Fourier transform as something you apply to periodic signals, and the Laplace transform as something you apply to time-varying signals. (This is a consequence of what @ScottSeidman explained above.) – Li-aung Yip Oct 26 '13 at 6:23 • @Alfred: You haven't actually addressed which one of these is used for frequency analysis - for completeness it is probably worth mentioning that that most people use the FFT for frequency analysis, and how the FFT fits in with the things already listed. – Li-aung Yip Oct 26 '13 at 6:32 • @Li-aungYip, I think you may be conflating the Fourier series and the Fourier transform. The Fourier series is for periodic functions; the Fourier transform can be thought of as the Fourier series in the limit as the period goes to infinity. So, the Fourier transform is for aperiodic signals. Also, since periodic signals are necessarily time-varying signals, I don't "get" the distinction you're drawing. – Alfred Centauri Oct 26 '13 at 11:51 • @Li-aungYip Also, FFT is used to compute DFT which is not DTFT. DFT is like taking samples in the frequency domain after having a DTFT (which is continuous for aperiodic signals). It is just a tool used in computers for fast computations (okay, we can use it manually too). But FFT comes after you are past DTFT and CTFT. – Anshul Oct 30 '13 at 11:07 Laplace transforms may be considered to be a super-set for CTFT. You see, on a ROC if the roots of the transfer function lie on the imaginary axis, i.e. for s=σ+jω, σ = 0, as mentioned in previous comments, the problem of Laplace transforms gets reduced to Continuous Time Fourier Transform. 
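Following the comments about the FFT, DFT and DTFT: what the FFT computes is exactly the Z-transform F(z) = Σ f[n] z^{-n} sampled on the unit circle at z = e^{j2πk/N}. A small sketch with an arbitrary made-up sequence:

```python
import numpy as np

f = np.array([1.0, 2.0, 0.5, -1.0, 3.0])  # arbitrary finite sequence f[n]
N = len(f)
n = np.arange(N)

# Z-transform F(z) = sum_n f[n] z^{-n}, evaluated at z = e^{j 2 pi k / N}
z = np.exp(2j * np.pi * np.arange(N) / N)
F_unit_circle = np.array([np.sum(f * zk ** (-n)) for zk in z])

# The FFT computes exactly these N unit-circle samples
F_fft = np.fft.fft(f)

print(np.allclose(F_unit_circle, F_fft))  # True
```

So "frequency analysis with the FFT" is the r = 1 case of the Z-transform, just as the Fourier transform is the σ = 0 case of the Laplace transform.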
To rewind back a little, it would be good to know why Laplace transforms evolved in the first place when we had Fourier Transforms. You see, convergence of the function (signal) is a compulsory condition for a Fourier Transform to exist (absolutely summable), but there are also signals in the physical world where it is not possible to have such convergent signals. But, since analysing them is necessary, we make them converge, by multiplying a monotonously decreasing exponential e^σ to it, which makes them converge by its very nature. This new σ+jω is given a new name 's', which we often substitute as 'jω' for sinusoidal signals response of causal LTI systems. In the s-plane, if the ROC of a Laplace transform covers the imaginary axis, then it's Fourier Transform will always exist, since the signal will converge. It is these signals on the imaginary axis which comprise of periodic signals e^jω = cos ωt + j sin ωt (By Euler's). Much in the same way, z-transform is an extension to DTFT to, first, make them converge, second, to make our lives a lot easier. It's easy to deal with a z than with a e^jω (setting r, radius of circle ROC as untiy). Also, you are more likely to use a Fourier Transform than Laplace for signals which are non-causal, because Laplace transforms make lives much easier when used as Unilateral (One sided) transforms. You could use them on both sides too, the result will work out to be the same with some mathematical variation. • Your answer is saviour .... thumbs up for so precise and great explanation .. – pravin poudel Sep 5 '18 at 4:39 Fourier transforms are for converting/representing a time-varying function in the frequency domain. A laplace transform are for converting/representing a time-varying function in the "integral domain" Z-transforms are very similar to laplace but are discrete time-interval conversions, closer for digital implementations. They all appear the same because the methods used to convert are very similar. 
I will try to explain the difference between Laplace and Fourier transformation with an example based on electric circuits. So, assume we have a system that is described with a known differential equation, let say for example that we have a common RLC circuit. Also assume that a common switch is used to switch ON or OFF the circuit. Now if we want to study the circuit in the sinusoid steady state we have to use Fourier transform. Otherwise, if our analysis include the switch ON or switch OFF the circuit we have to implement the Laplace transformation for the differential equations. In other words the Laplace transformation is used to study the transient evolution of the system´s response from the initial state to the final sinusoid steady state. It Includes not only the transient phenomenon from the initial state of the system but also the final sinusoid steady state. Different tools for different jobs. Back at the end of the sixteenth century astronomers were starting to do nasty calculations. Logarithms were first calculated to transform multiplication and division into easier addition and subtraction. Likewise, Laplace and Z transforms turn nasty differential equations into algebraic equations that you have a chance of solving. Fourier series were originally invented to solve for heat flow in bricks and other partial differential equations. Application to vibrating strings, organ pipes, and time series analysis came later. In any LTI system for calculating transfer function we use only laplace transform instead of fourier or z transform because in fourier we get the bounded output ;it doesn't go to infinity. And z transform is used for discrete signals but the LTI systems are continous signals so we cannot use z transform .. Therefore by using laplace transform we can calculate transfer function of any LTI system.
http://www.ck12.org/book/Human-Biology---Digestion-and-Nutrition/r1/section/2.1/
Human Biology - Digestion and Nutrition

# 2.1: Why Do We Eat?

Difficulty Level: At Grade Created by: CK-12

Does what I eat really matter? What did you have for breakfast today? Does it matter? Food is essential to life because it provides the energy to run body functions and the building blocks to grow and repair body tissue. Your body is made of billions of cells. What you eat matters because the cells in your body need certain things that you can provide only by eating. Nutrition refers to the composition of food and how the various components of foods affect the body. In this section you will investigate valuable information about what is in the foods you eat. Throughout the rest of the unit, you will explore how your body uses the food you eat and how you can keep your digestive system healthy. "I never associated what I ate with how I felt. If I saw something I wanted to eat and my head said yes, I ate it. I got fat. It wasn't until I started listening to my body that I realized there was a link between what went in my mouth and how I functioned that day." -High School Dieter Most food, as you see it on the table, is of little use to your cells. Obviously, you can't simply graft a steak onto your leg to build a stronger leg muscle. Food must be broken down into many separate, simple molecules that can flow into your bloodstream and from there move into your cells where they are used for fuel or for building new molecules. The body's process of breaking down food into smaller particles is called digestion. This unit introduces you to how the digestive system functions.
This unit also presents some information on the cultural and social elements of eating, such as why we eat what we eat and why different people in different places eat different foods. You will also learn about some of the psychological aspects of eating, dieting, and eating disorders.

What are your favorite things to eat? Write them down. Then write a paragraph or two about whether or not you think your preferred diet is healthy. After you finish this unit, review your list and what you wrote to see if your views have changed.

At the end of this unit you will learn some general strategies for staying healthy. Although good nutrition is very important to your health, many other factors that relate to digestion and nutrition also affect your health. For example, stress, exercise, and sleep can all affect your appetite and your health in various ways. Keep in mind as you read the unit and do the activities that your growing body has special needs. Just as if you were building a house, you need energy and specific materials to build your body. Both come from the food you eat. It is hard to understand that what you eat today may affect your health later in life, but it is true. Eating habits, such as eating lots of fat, may lead to high blood pressure and clogged arteries later in life.

Choices Are Everywhere

What are the choices you have to make in a day (all choices, not just choices about food)? Work with a partner to generate a list. Share the list with your class, and come up with a comprehensive list of choices. There are many choices when it comes to food. You constantly receive messages from parents, friends, television, radio, magazines, and teachers about what to eat. How do you know what is right?

Each person's body is unique. Some people may need more energy or more of one mineral or vitamin than other people do. Also, a person's needs change with age and with levels of activity. But the basics remain the same.
You need to eat a balanced diet that includes foods from six basic nutrient groups. By the end of the unit, you will be able to answer these questions.

• How does good food help you fight off illness and resist infections?
• How does good nutrition affect growth and development?
• How does what you eat affect how you look?
• How does good food affect your ability to concentrate and think straight?
• Why does what you eat affect how well and how long you can exercise?

Regular exercise is an important part of staying healthy.

## Activity 1-1: Are You What You Eat?

Introduction

Are you what you eat? As you begin your study of nutrition, you can keep a food diary on the data sheets provided. With this information you will be able to analyze your diet.

Materials

• Resource 1: Food Diary
• Resource 2: Food Nutrient Chart (Also, see page 60.)
• Diet Data Sheet
• Activity Report
• Measuring cups and spoons, glasses with 4 ounces and 8 ounces of liquid
• Food labels
• Fast Food information sheets
• Food Models

Procedure

Step 1 Use the Food Diary (Resource 1) to record your diet for two consecutive days. Include the name of the food, the amount eaten, and the nutrient information listed on the Food Nutrient Chart.

Step 2 Complete the “totals” section of the Data Sheet for each day. Then complete the Activity Report.

## Your food and energy needs and the six nutrient groups

What Do You Think? Why is it that you see lots of ads for fast food and junk food, but very few ads for vegetables and fruits?

Let's start the unit with a discussion of the six basic nutrient groups: what they are and what they do for you. What is a nutrient? Food molecules that supply energy, building blocks for other molecules, and reserves for future use are called nutrients. You need six basic types of nutrients in your diet. The six nutrient groups are carbohydrates, protein, fats, vitamins, minerals, and water. Nutrients are not the same as calories.
Calories measure the amount of energy that food provides, no matter what the food or nutrient source might be. Just getting the right number of calories each day does not necessarily mean that you have all of the nutrients you need to stay healthy. For example, if you need 2,200 calories a day, getting those calories from candy and French fries will leave you less well nourished than if you get those calories from salad, bread, and a piece of chicken. You will learn more about calories in the next section.

Figure 1.1 The six essential types of nutrients are carbohydrates, protein, fats, vitamins, minerals, and water.

Did You Know? Carbohydrates represent only 2% of your body weight. Your body uses carbohydrates primarily to provide energy. However, the reverse is true in plants. Energy from the sun is used to make the carbohydrates that form most of the plant's structure and its energy reserves. Whether the plant is a redwood tree or a carrot, it is mostly carbohydrates.

Carbohydrates

Carbohydrates are food nutrients that provide energy and building blocks. The simplest carbohydrate molecules are sugars. One very important sugar is glucose, which is the common form of fuel circulating in our blood and used by our cells for energy. The atoms in the glucose molecule can be rearranged slightly to produce another important sugar called fructose. It is mostly fructose that makes fruits and honey sweet. Other sugars in our diets are molecules that result from combining glucose and fructose molecules together. A molecule of sucrose, which is common table sugar, consists of a molecule of glucose and a molecule of fructose bonded together. Two molecules of glucose bonded together make maltose, which is found in germinating seeds. Another small sugar molecule is galactose. Combining a galactose molecule and a glucose molecule produces a sugar called lactose, which is found in milk.
When many sugar molecules are connected together, they make big molecules called complex carbohydrates or starches. Most of the bodies of plants, as well as the pages of this book, are made up of a complex carbohydrate called cellulose. Cellulose is made up of long chains of glucose molecules. Starches are important sources of energy. Potatoes, rice, and wheat are three good examples of starch in our diet. Starches and sugars provide the body with energy, but also with building blocks that our cells can use to make other molecules. Word Origin of Carbohydrate Research the origin of the word carbohydrate. Also, find out what the word carbohydrate means. Then write the basic chemical structure. Carbohydrates we eat must be broken down into simple sugar molecules before the cells lining the digestive tract can absorb them and before they can be circulated in the blood. However, the enzymes we produce in our digestive tracts cannot digest some carbohydrates in our diet. For example, we cannot digest cellulose. Indigestible carbohydrate is called fiber. It is an important part of our diet even though it does not supply energy or building blocks. Fiber keeps things moving in the digestive system. You will explore how fiber works later. Figure 1.2 Examples of sources for simple sugars include such foods as fruit, honey, and refined sugar. Figure 1.3 Examples of sources for complex carbohydrates include such foods as pasta, bread, and potatoes. Figure 1.4 The pie graph shows the recommended percentages of daily calories you should obtain from the nutrient groups carbohydrates, fats, and protein. Note that you should obtain 55% of your daily calories from the food group carbohydrates. Complex carbohydrates in our diet can also bring with them other important nutrients such as vitamins and minerals. The American Heart Association recommends that you should get about 55% of your calories from carbohydrates. 
This does not mean, however, that you should get 55% of your calories from simple sugars in candy and junk food! Unlike foods composed of complex carbohydrates, foods rich in simple sugars usually don't contain fiber and important nutrients such as minerals and vitamins. Therefore, most of your carbohydrate intake should be complex carbohydrates rather than simple sugars. Perhaps you have heard that carbohydrates make you fat. Carbohydrates are actually fat-free, but they do provide calories. Carbohydrates contain less than $\frac{1}{2}$ the calories per gram that fat contains. When you take in more calories than you need, the excess is stored as fat no matter where the calories came from. Why do coaches tell their athletes to eat a big pasta dinner the night before a competition and simple sugars a few hours before the competition? Why don't the athletes eat pasta right before the competition and a candy bar the night before? Did You Know? Without sufficient fiber, the muscles in your intestine have to squeeze too hard. This can result in saclike bulges of the intestinal wall, causing a condition known as diverticulosis. What would you call the condition when the wall becomes-inflamed? (Hint: What do you call the condition of having an inflamed appendix?) Fiber is a carbohydrate that travels through the digestive tract but is not digested or absorbed. Fiber supplies no energy. It occurs in roots, stems, leaves, nuts, and seed coverings of vegetables, fruits, and whole grains. Fiber provides bulk for muscles of the digestive tract to squeeze against. This squeezing helps speed the passage of food through the food tube. Fiber also acts like a sponge by holding onto unhealthy substances in food to prevent them from being absorbed into the body. One example of an unhealthy substance is cholesterol. Fiber reduces the absorption of cholesterol into the bloodstream and lowers the chances of getting colon cancer. 
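The calorie arithmetic behind these recommendations is simple enough to check yourself. The sketch below is a minimal illustration with made-up gram amounts for one meal; the only figures taken from nutrition convention are the energy densities of about 4 calories per gram for carbohydrates and protein and about 9 for fat:

```python
# Approximate energy density (calories per gram) of the three
# calorie-supplying nutrient groups discussed in this unit.
CALORIES_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def calorie_breakdown(grams_by_nutrient):
    """Return total calories and each nutrient's percent share of them."""
    calories = {n: g * CALORIES_PER_GRAM[n] for n, g in grams_by_nutrient.items()}
    total = sum(calories.values())
    percents = {n: round(100 * c / total, 1) for n, c in calories.items()}
    return total, percents

# A made-up meal: 75 g carbohydrate, 20 g protein, 15 g fat.
total, percents = calorie_breakdown({"carbohydrate": 75, "protein": 20, "fat": 15})
print(total)     # 515
print(percents)  # {'carbohydrate': 58.3, 'protein': 15.5, 'fat': 26.2}
```

Notice that even though fat contributes the fewest grams in this example, it supplies over a quarter of the calories — gram for gram, fat adds up more than twice as fast as carbohydrate.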
Protein Protein in your food provides an important kind of building block-called amino acids-that your body needs to make its own proteins. You eat protein and digest it into amino acids. Your blood and cells absorb these amino acids from your digestive system. In your cells, the amino acids you get from meat, milk, eggs, beans, and fish link together to form thousands of different proteins that become part of you. Some of the proteins you make form the structure of your body, others become antibodies to fight off infection, and still others control and regulate cell activities. Did You Know? Proteins have a much more complicated structure than either carbohydrates or fats. Hydrogen, carbon, oxygen, and nitrogen make up the building blocks of protein, called amino acids. The amino acids join together to form the backbone of the protein molecule. Each protein molecule has a specific shape that allows it to fulfill its special jobs in the body. Twelve to eighteen percent of your body is made up of protein. Proteins do a variety of jobs in your body: They regulate body functions, build muscles and bones, make muscles contract, help fight illness, transport substances in your blood, and transmit information between cells. Figure 1.5 The recommended percentage of daily calories from the food group protein is 15%. Protein is one of the six essential types of nutrients that provide the raw materials for producing new cells. You need more protein when your body is growing rapidly, especially during infancy and adolescence. If you do not get enough protein at these critical times, your growth can be slowed. In addition, if you do not have enough amino acids (protein building blocks) available for building new cells during these critical times, some of the missed growth cannot be made up later. At your age you are most likely either in a growth spurt or you will be having one soon, so making correct nutritional choices is especially important. 
Vegans are people who don't eat any animal products, including meats, eggs, or dairy products. How can these people still get the protein their cells need to grow if each kind of plant they eat doesn't contain complete proteins?

Did You Know? People in some cultures eat little meat, by choice or because it is not available. They eat a mixture of plant proteins that together provide the right combination of essential amino acids. Some examples of vegetarian combinations that supply all essential amino acids are

• refried beans and tortillas,
• pea soup and rye bread,
• beans and pasta,
• beans and rice,
• baked beans and brown bread, and
• peanut butter on whole wheat bread.

Adding even a small amount of animal protein can supply missing amino acids, such as

• pasta and cheese, and
• vegetable stir-fry and small pieces of chicken.

Your cells can make most of the twenty amino acids. Your body is able to use other amino acids to make these amino acids, but there are nine that it cannot make. The nine amino acids your body cannot make are called essential amino acids. These essential amino acids must be obtained from the foods you eat. It is important to know that your body does not store excess amino acids like it stores excess carbohydrate or fat. Therefore, you have to get all of the amino acids you need each day in your diet. Proteins that contain all nine essential amino acids are termed complete proteins. Meat, fish, and milk products contain complete proteins. Other foods contain some, but not all, of the essential amino acids. Such foods contain incomplete proteins. Foods containing the incomplete proteins are grains, nuts, beans, and some other plants. If you regularly eat meat, poultry, fish, eggs, and milk products, you probably get enough complete protein. If you don't get enough protein, you can become sick and weak. If you eat more protein than you need, the extra calories can be stored as fat.
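The "complete protein by combination" idea above can be pictured as a set-union check. In this sketch the per-food amino acid sets are a deliberate teaching simplification (grains shown as lacking only lysine, beans as lacking only methionine) rather than real food-composition data; only the list of nine essential amino acids is standard:

```python
# The nine essential amino acids the body cannot make.
ESSENTIAL = {"histidine", "isoleucine", "leucine", "lysine", "methionine",
             "phenylalanine", "threonine", "tryptophan", "valine"}

# Illustrative, simplified profiles: grains tend to be low in lysine,
# beans tend to be low in methionine; milk is a complete protein.
AMINO_ACIDS = {
    "rice": ESSENTIAL - {"lysine"},
    "beans": ESSENTIAL - {"methionine"},
    "milk": set(ESSENTIAL),
}

def is_complete_meal(foods):
    """True if the foods together supply all nine essential amino acids."""
    supplied = set().union(*(AMINO_ACIDS[f] for f in foods))
    return ESSENTIAL <= supplied

print(is_complete_meal(["rice"]))           # False: lysine is missing
print(is_complete_meal(["rice", "beans"]))  # True: each covers the other's gap
```

This is exactly why the beans-and-rice pairings listed above work: the union of two incomplete sets can still cover all nine essentials.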
Figure 1.6 Examples of sources of protein include beans, chicken, peanut butter (on bread), and steak.

Fats

Fats, also known as lipids, play essential roles in your body. The body can make most of the fats it needs from other nutrients, so you don't have to have much fat in your diet. All cells need fat for building cell membranes. Fats also are found in high concentrations in brain and nerve cells. Certain vitamins (A, D, E, and K) are fat-soluble, which means that your body stores excess amounts of these vitamins in your body fat. Therefore, these vitamins are more abundant in foods that contain fats.

Fats are the major energy store for the body. You get more energy from a gram of fat than from a gram of carbohydrates. Fat has about 9 calories per gram. Carbohydrates and protein have about 4 calories per gram. However, if you store too much fat in your body, it can have a negative effect on your body. You gain weight, and your heart, muscles, and joints must work harder to move the extra weight.

Did You Know? You can gain weight from eating too much of any food, not just fatty foods. You store fat if you consume more calories than you burn. One pound of body fat contains about 3,500 calories.

There are saturated fats and unsaturated fats. You have probably heard about them in the news. Saturated and unsaturated fats have different chemical characteristics. Saturated fats are solid at room temperature. They come from meat, lard, butter, coconut oil, and palm oil. These fats should be very limited in your diet. Unsaturated fats are liquid at room temperature. They are products of plants such as olives, peanuts, corn, soybeans, and safflowers. Unsaturated fats also occur in fish. Unsaturated fats are better for you than saturated fats but should still be limited in your diet.

Did You Know? The body makes different kinds of fats by attaching long molecules called fatty acids to small glycerol molecules. Each glycerol molecule can carry three fatty acids.
That is why fats are also called triglycerides. The body can make different fatty acids, but there is one that must come from the diet. It is linoleic acid. Linoleic acid is common in plants. Figure 1.7 Examples of saturated fats include palm oil, ham, butter, and lard. Examples of unsaturated fats include olive and corn oil, avocado, nuts, and fish. One brand of granola has a label on the container stating, in big letters, “NO TROPICAL OILS.” Why do you think this has been pointed out? Did You Know? There are some easy ways to reduce the fat in your diet: • Eat smaller amounts of red meat. • Eat more fish and chicken than red meat. • Cut the fat off meat, and remove the skin from chicken. • Don't eat the grease from meat. • Drink skim or low-fat milk. • Eat low-fat cheese. • Avoid fried foods like chips and French fries. • Choose low-fat or fat-free ice cream or frozen yogurt. People who eat foods high in saturated fat run a greater risk of having high cholesterol levels in their blood and of developing heart disease. Cholesterol is a waxy, fatlike substance that is made by the body and is needed for making vitamin D, hormones, and cell membranes. You also can get cholesterol from the foods you eat. Meat, eggs, and animal fats are high in cholesterol. If you eat a lot of cholesterol, you are likely to have high levels in your blood. It is also possible to have a high level of cholesterol in your blood if you have a family history of high cholesterol, even though you don't eat foods high in cholesterol. Foods containing cholesterol are usually high in other fats, too, leading to excess fat and cholesterol in the body. Limiting the cholesterol and saturated fats in your diet is wise. High levels of cholesterol can contribute to atherosclerosis, or hardening of the arteries, and other forms of heart disease. Heart disease is a leading cause of death in the United States, even for those people who are under 65 years of age. 
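A fat guideline like the American Heart Association recommendation discussed in this unit can be checked by converting fat grams into calories at about 9 per gram. The sketch below is a rough illustration with invented intake numbers; the 30% and 10% limits match the guideline figures given in this unit:

```python
def within_fat_guideline(total_calories, fat_grams, saturated_fat_grams,
                         fat_limit=0.30, saturated_limit=0.10):
    """Check one day's intake against fat-calorie limits (9 calories/gram)."""
    fat_share = 9 * fat_grams / total_calories
    saturated_share = 9 * saturated_fat_grams / total_calories
    return fat_share <= fat_limit and saturated_share <= saturated_limit

# Invented example days on a 2,200-calorie diet:
print(within_fat_guideline(2200, 70, 20))  # True: ~29% fat, ~8% saturated
print(within_fat_guideline(2200, 90, 20))  # False: ~37% of calories from fat
```

The same conversion explains why cutting even 20 grams of fat matters: at 9 calories per gram, that is 180 calories, or about 8% of a 2,200-calorie day.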
Considering the list of possible ways to reduce fat in your diet, which three things would be the easiest for you to try as part of your own diet? Which three things would be the hardest for you to try? Why? Do you think it is important for you as an adolescent to monitor the fat in your diet? Why or why not? About 34% of the calories in an average American diet comes from fat. The American Heart Association recommends that 30% or less of your daily calories come from fat and under 10% of that from saturated fat. Figure 1.8 The recommended percentages of daily calories from the food group fats is 10% saturated fats and 20% unsaturated fats. Did You Know? • Fat buildup in the arteries tends to be slow, but it starts as early as age 10. There are no symptoms to warn you about this process until it is already well advanced. Therefore, it is important to lead a healthy lifestyle at an early age, before damage is done. • Although the body needs cholesterol to make other substances, one-third of American young people may be getting too much cholesterol in their diets. The liver can produce all the cholesterol the body needs. We do not have to eat any cholesterol at all. Vitamins A vitamin is a chemical the body needs in small amounts but cannot make for itself. Vitamins don't provide energy, but some vitamins help the body use the energy from a nutritious diet. Their most important job is to help enzymes do their jobs in cells. Your body needs most vitamins in only very small quantities. You don't need to get your vitamins from a bottle or jar if you eat the right amount of a variety of foods. There are usually more vitamins in a good diet than your body can use. But it is important to eat a healthy diet to get the vitamins needed. If your diet isn't providing the right vitamins, your body gets sick. You can also get sick by eating too much of some vitamins. Some vitamins dissolve in water. Your body uses the amounts of these vitamins that it needs. 
Then the excess leaves the body in your urine. Some vitamins are fat soluble, which means that your body stores excess amounts of these vitamins in your body fat. Toxic levels of fat-soluble vitamins can accumulate. If you take vitamins, be careful how many and which ones you take. Here you'll find basic information about vitamins. Check with a health professional for recommendations about the type and amount of vitamins that you might need. Did You Know? Cataracts are responsible for 50% of all cases of blindness. Cataracts are a clouding of the lens that lets light into the eye. Without enough light, the eye cannot see. Vitamins A and C seem to help protect the eye from formation of cataracts. Your body can make vitamin A from a pigment found in some plants. A pigment is a colored chemical. The pigment that is required for vitamin A is carotene. It is the molecule that makes carrots orange. You need vitamin A for healthy skin, bones, and teeth. You also need vitamin A for good vision. If you don't get enough vitamin A, you cannot see well at night. The vitamin B complex is a group of eight vitamins. Your cells use B vitamins in the chemical reactions that produce energy from food. You can get anemia (low blood iron) or beriberi (a disease involving the nerves, heart, and gut) if you do not consume enough B vitamins. Many people take vitamin supplements in the form of pills or vitamin shakes. Now that you know the various vitamins found in foods and what they do for your body functions, what is your personal “philosophy” about getting enough of all the vitamins you need? Do you take large doses of vitamin supplements, moderate amounts, or none at all? Why? How does the information in the section affect your decisions about the vitamins that you consume? Vitamin C helps your body fight infection. We don't know much about how vitamin C works. Before 1800, sailors did not have many fresh fruits and vegetables in their diets. 
They developed bleeding gums as a result of a disease called scurvy. It was discovered that citrus fruits like oranges, lemons, and limes cured and prevented this disease. After that discovery, the British navy required that all of their ships carry limes so the sailors could have a daily ration of lime juice. That is how British sailors got the name “limeys.” Warning: Be careful not to spend too much time in the sun without wearing sunblock. Too much exposure to the sun can damage your skin and may lead to skin cancer over time. Vitamin D helps your bones and teeth stay strong. It helps bones and teeth by regulating the absorption and use of the mineral calcium. You can get rickets, a disease of bone softening and poor bone growth, if your diet lacks vitamin D and you don't get enough sunlight. What does sunlight have to do with vitamins? If you get enough sunlight, your skin can make vitamin D. Even in cold climates, if your face is exposed to sunlight for an hour or so each day, you will not need supplementary vitamin D in your diet. Vitamin E protects red blood cells and is needed for the functioning of certain enzymes. Vitamin K assists the blood in clotting. You need foods with vitamins B and C almost every day. The B and C vitamins dissolve in blood because they are water-soluble. When vitamins B and C are eaten in excess of body needs, they pass from the body through urine. Vitamins A, D, E, and K dissolve in fat instead of water. If you take in more of these vitamins than you need, they accumulate in body fat and can build up to unhealthy levels in your cells. Did You Know? A serious type of birth defect is malformation of the spinal cord. It has recently been shown that a small, daily dose of a B vitamin called folate given to women during pregnancy reduces the incidence of these birth defects by 40%. Make a list of all the things that could go wrong with your body due to vitamin A, B, C, and D deficiencies. 
Which would be the hardest for you to deal with? Are you willing to eat the foods that contain that vitamin to prevent this problem? Why or why not? Minerals Minerals, like vitamins, do not provide calories, but they are essential for good health. Minerals are simple chemical elements that come from the earth. Just as with vitamins, you can get minerals you need by eating a balanced diet. What do minerals do? Some minerals, like most vitamins, are needed in only very small amounts. Such micronutrients may be needed to make molecules that have specific functions. For example, some enzymes need zinc to do their jobs in promoting specific chemical reactions. What Do You Think? Why do you think there are so many advertisements for milk featuring famous athletes and movie stars drinking milk? What audience are these ads targeting? As a consumer, do you think the ads are effective? Some minerals are needed in larger quantities and are called macronutrients. For example, sodium and potassium are needed to carry electrical charges that make nerves and muscles work. Iron is needed to make hemoglobin molecules carry oxygen in your red blood cells. Calcium and phosphorus are needed to build bones and teeth. Without adequate calcium and vitamin D, bone forms poorly in children and, in older people, can become brittle. As people age, they are at risk for a disease called osteoporosis. Osteoporosis is a condition in which bones become so porous and brittle that they are easily fractured. The fractures can lead to severe pain and disability. Osteoporosis occurs mainly in women after the age of 50. A key factor in the development of osteoporosis is the density (calcium content) of the bones in early adulthood when bone mass reaches its peak. It is important to consume enough calcium, particularly during adolescence and early adulthood, when bone is growing and increasing in density. Generally, males get enough calcium, while females aged 11 years and older generally do not. 
Because 60% of bone density is formed between the ages of 10 and 16, adolescence is a crucial time for building strong bones in young women. Females not only drink less milk, but they eat less food than males their age, too. Females can increase this essential nutrient in their diet by selecting calcium-rich foods such as cheese, yogurt, or green, leafy vegetables. The recommended daily amount of calcium for females who are 11-14 years of age is 1,200 milligrams. A sample of calcium-rich foods in a daily diet that meets the recommended daily amount of calcium would be

| Food | Milligrams of Calcium |
| --- | --- |
| 1 cup low-fat yogurt | 415 |
| 3 stalks of broccoli | 240 |
| 2 glasses low-fat milk | 600 |
| Total calcium | 1,255 |

Minerals are valuable nutrients, and the body usually recycles them. There are cells that are always remodeling your bones. Some cells break down bone and release calcium and phosphorus into the blood. Other cells take up these minerals from the blood to make new bone. When red blood cells are about 4 months old, they are broken down. The iron is extracted from the hemoglobin molecules. The blood transports this iron to the bone marrow. In the bone marrow the iron is recycled into new hemoglobin molecules in the new red blood cells that are always being produced in the bone marrow. The point to remember is that minerals, like vitamins, occur naturally in a healthy diet and, like vitamins, your body needs only reasonable amounts each day for normal growth and good health.

What do cooks mean when they say, “A colorful plate is a healthy plate”?

Figure 1.9 A person's body is about 50-60% water. But don't be confused by the drawing. You are not like a glass that fills from bottom to top. Instead, the water is distributed throughout your body.

Water

Water is essential for life. You need water for digestion, carrying waste, making urine, circulating blood, and holding your body temperature constant, and for the many chemical reactions that take place in your cells.
You lose 2 to 3 liters of water a day to perspiration, urine, stool, and breathing. You may lose even more water during very hot, dry weather and when exercising. Your body is about 50-60% water, so it would seem as if you have a good supply. However, the normal rate of loss of water from your body is fairly high. The rate of water loss can rapidly increase with vomiting, diarrhea, or excessive sweating. If you lose too much water, the cells of your body shrink and cannot function properly. Fever and diarrhea cause a rapid loss of water, so the sufferer must drink plenty of water to replace it. People who do not get enough water become dehydrated and, if severely so, may be given an intravenous infusion of fluids in the hospital.

Getting Enough Nutrients

How can we be sure we get all of these nutrients that we need? After all, they are hard to remember and most foods don't come labeled with their contents as breakfast cereal boxes do. A good trick for keeping track of nutrients is to think of all food as consisting of five basic food groups. Each food group provides certain nutrients. What are the five basic food groups? How much of them should you eat? The next section will address these questions.

## Activity 1-2: What's in Your Food?

Introduction

How can you test foods for nutrients? In this activity you test different foods for the presence of carbohydrates, proteins, and fats.

• Carbohydrates (sugar and starch) provide energy for your cells. The long chain of carbohydrate molecules is broken down to smaller sugar molecules.
• Proteins are digested into building blocks of amino acids. Amino acids are used for building and repairing cells, fighting infection, and other critical functions.
• Fats are large molecules that store energy and can be digested into building blocks called fatty acids. Fats help you absorb vitamins and are present in nerve and skin cells.
Materials

• Safety goggles
• Resource 1: Part A Data Sheet
• Resource 2: Part B Data Sheet
• Activity Report
• Glucose sugar solution
• Egg white, raw
• Butter, margarine, or vegetable oil
• Test tubes
• Test-tube holder
• Water bath
• Brown wrapping paper
• Plastic knife
• Starch solution
• Iodine solution
• Biuret solution
• Benedict's solution
• 3 medicine droppers
• Small pieces of various foods

CAUTION: You should wear goggles in all experimental laboratory situations. Make sure you are wearing goggles when working with any chemicals such as Benedict's solution. Also, wear goggles when working with heat or fire.

Procedure

Part A. Laboratory Tests for Nutrients

Testing for Carbohydrates: Starches

Step 1 Put 2 milliliters of starch solution into a test tube.
Step 2 Add a few drops of iodine solution.
Step 3 Record the results on the Table on Resource 1.

Testing for Carbohydrates: Glucose (Sugars)

Step 1 Pour about $5 \ ml$ of glucose (sugar) solution into a test tube.
Step 2 Wearing goggles, add several drops of Benedict's solution.
Step 3 Using safe lab technique, heat the liquid for about 3 minutes in the water bath.
Step 4 Record the results on the Table on Resource 1.

Testing for Proteins

Step 1 Put some raw egg white into a test tube.
Step 2 Wearing goggles, add 3-5 drops of Biuret solution.
Step 3 Record the results on the Table on Resource 1.

Testing for Fats and Oils

Step 1 Use a plastic knife to spread a small amount of butter or margarine on a piece of brown wrapping paper.
Step 2 Hold the brown paper up to the light and look at the stain.
Step 3 Record the results on the Table on Resource 1.

Summary of Test Results: Part A

Summarize your test results by completing answers to questions on the Activity Report.

Part B. Testing Foods for Nutrients

The tests you used on carbohydrates and proteins caused color changes, while the test for fats caused a change in the appearance of the brown wrapping paper.
In this activity you use the laboratory skills you learned in Part A to test foods for the presence of carbohydrates (starch and sugar), proteins, and fats.

Step 1 Put a few small pieces of a food to be tested into a test tube and add just enough water to cover the pieces of food.
Step 2 Refer to testing procedures in Part A to test each piece of food for the presence of carbohydrates, proteins, and fats. Begin with the first Step 2 of Part A.
Step 3 Record the results on the Table on Resource 2. Complete the Activity Report.

## Review Questions

1. Why does what you eat matter?
2. Compare a carbohydrate molecule and a glucose molecule.
3. What is meant by the terms essential amino acids and complete proteins?
4. What is the difference between saturated and unsaturated fat?
5. What are three examples of a vitamin or a mineral deficiency? What disorders can each cause?
6. What are the five body functions that need the recommended five 8-oz glasses of water you should drink every day?

6, 7, 8 | Feb 23, 2012 | Aug 11, 2015
https://byjus.com/wbjee/wbjee-syllabus/
# WBJEE Syllabus

Students must be well-versed with the WBJEE syllabus in order to secure maximum marks. The official syllabus is formulated and implemented by the West Bengal Joint Entrance Examination Board. Following is an overview of all the important concepts under each of the main subjects.

WBJEE Syllabus 2022:

### WBJEE Physics Syllabus 2022

• Atomic Physics
• Bulk properties of matter
• Current Electricity
• Electromagnetic induction & alternating current
• Electromagnetic waves
• Electrostatics
• Friction
• Gravitation
• Kinematics
• Kinetic theory of gases
• Laws of motion
• Magnetic effect of current
• Magnetics
• Nuclear Physics
• Newton’s Relation
• Optics I (Ray optics)
• Optics II (Wave Optics)
• Oscillations & Waves
• Particle nature of light & wave-particle dualism
• Physical World, Measurements, Units & dimensions
• Solid-state Electronics
• Surface Tension
• The motion of the centre of mass, connected systems
• Thermodynamics
• Viscosity

WBJEE Physics Important Topics & Weightage

| Topics | Weightage |
| --- | --- |
| Laws of Motion | 4% |
| Modern Physics – Atomic Models | 5% |
| Nuclear Physics | 5% |
| Rotational Motion | 4% |
| Solids & Semiconductor Devices | 5% |
| Wave Motion | 5% |

### WBJEE Physics Reference Books

• Concept of Physics – H.C. Verma
• Understanding Physics series – D.C. Pandey
• Concepts of Competition Physics for CBSE PMT – Agarwals
• NCERT Physics – Class 10 and 12

### WBJEE Chemistry Syllabus 2022

• Alcohol
• Application Oriented chemistry
• Aromatic Compounds
• Atomic Structure
• Atoms, Molecules and Chemical Arithmetic
• Chemical Bonding and Molecular Structure
• Chemical Energetics and Chemical Dynamics
• Chemistry in Industry
• Chemistry of Carbon Compounds
• Chemistry of Metals
• Chemistry of Non-Metallic Elements and their Compounds
• Coordination Compounds
• Environmental Chemistry
• Gaseous State
• Haloalkanes and Haloarenes
• Hydrogen
• Introduction to Bio-Molecules
• Ionic and Redox Equilibria
• Liquid State
• Physical Chemistry of Solutions
• Polymers
• Principles of Qualitative Analysis
• Radioactivity and Nuclear Chemistry
• Solid State
• Surface Chemistry
• The Periodic Table and Chemical Families

WBJEE Chemistry Important Topics & Weightage

| Topics | Weightage |
| --- | --- |
| Alcohol Phenol Ether | 4% |
| Carboxylic Acids & Derivatives | 4% |
| Chemical Equilibrium | 4% |
| Chemical Thermodynamics | 4% |
| Coordination Compounds | 4% |
| Ionic Equilibrium | 4% |
| p-Block Elements | 6% |
| Redox Reactions | 5% |

### WBJEE Chemistry Reference Books

• Physical Chemistry – O.P. Tandon
• Inorganic Chemistry – O.P. Tandon
• Concepts of Organic Chemistry – O.P. Tandon
• Organic Chemistry – Arihant
• Organic Chemistry – Morrison and Boyd
• NCERT Chemistry – Class 10 and 12

### WBJEE Maths Syllabus 2022

• Algebra
• Sets, Relations and Mappings
• Logarithms
• Arithmetic Progression, G.P., H.P.
• Complex Numbers
• Permutation and combination
• Polynomial equation
• Principle of mathematical induction
• Matrices
• Binomial theorem (positive integral index)
• Statistics and Probability
• Coordinate geometry of three dimensions
• Coordinate geometry of two dimensions
• Calculus
• Differential calculus
• Integral calculus
• Application of Calculus
• Differential Equations
• Vectors
• Trigonometry

WBJEE Maths Important Topics & Weightage

| Topics | Weightage |
| --- | --- |
| 3-D Geometry | 6% |
| Complex Numbers | 4% |
| Definite Integration | 5% |
| Indefinite Integration | 5% |
| Limits | 5% |
| Matrices & Determinants | 5% |
| Permutation & Combination | 4% |
| Probability | 7% |
| Sets, Relation & Functions | 5% |
| Theory of Equations | 4% |
| Vectors | 7% |

### WBJEE Maths Reference Books

• Mathematics – R.S. Aggarwal
• Objective Mathematics – R.D. Sharma
• NCERT Mathematics – Class 10 and 12
http://sachinashanbhag.blogspot.com/2014/07/on-student-debt.html
## Wednesday, July 2, 2014

### On Student Debt

The NYT has this by-now popular article asking people to take a chill-pill: The Reality of Student Debt Is Different From the Clichés. It is largely based on a Brookings Institution study which essentially claims that the sky is not falling. The 3 main takeaways from that study (emphasis mine):

1. Roughly one-quarter of the increase in student debt since 1989 can be directly attributed to Americans obtaining more education, especially graduate degrees. The average debt levels of borrowers with a graduate degree more than quadrupled, from just under $10,000 to more than $40,000. By comparison, the debt loads of those with only a bachelor’s degree increased by a smaller margin, from $6,000 to $16,000.

2. Increases in the average lifetime incomes of college-educated Americans have more than kept pace with increases in debt loads. Between 1992 and 2010, the average household with student debt saw an increase of about $7,400 in annual income and $18,000 in total debt. In other words, the increase in earnings received over the course of 2.4 years would pay for the increase in debt incurred.

3. The monthly payment burden faced by student loan borrowers has stayed about the same or even lessened over the past two decades. The median borrower has consistently spent three to four percent of their monthly income on student loan payments since 1992, and the mean payment-to-income ratio has fallen significantly, from 15 to 7 percent. The average repayment term for student loans increased over this period, allowing borrowers to shoulder increased debt loads without larger monthly payments.

The NYT tries to shine a light on the real problem:

The vastly bigger problem is the hundreds of thousands of people who emerge from college with a modest amount of debt yet no degree. For them, college is akin to a house that they had to make the down payment on but can’t live in. In a cost-benefit calculation, they get only the cost.
And they are far, far more numerous than bachelor’s degree holders with huge debt burdens. Here is an attempted "takedown" of the report and the NYT article. And here is a well-reasoned takedown of the takedown.
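The 2.4-year figure in takeaway 2 is just the ratio of the two quoted increases; a quick sanity check (Python, using only the numbers quoted above):

```python
# Numbers quoted from the Brookings study summary above (1992-2010).
income_increase = 7400    # increase in average annual household income, $
debt_increase = 18000     # increase in average total student debt, $

years_to_cover = debt_increase / income_increase
print(f"{years_to_cover:.1f} years of the extra earnings cover the extra debt")
# prints: 2.4 years of the extra earnings cover the extra debt
```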
http://mathhelpforum.com/differential-geometry/148038-question-infinite-derivative-print.html
# A question on infinite derivative.

• June 6th 2010, 06:42 PM
zzzhhh

Definition of infinite derivative: Let $E=\text{dom }f$ and $c\in E$, with c a limit point of E. A real-valued function f is said to have an infinite derivative at c if f is continuous at c and the limit $\lim\limits_{x\to c}\frac{f(x)-f(c)}{x - c}$ is $+\infty$ or $-\infty$. We write in this case $f'(c)=+\infty$ or $f'(c)=-\infty$.

For finitely differentiable functions it is well known that $(f+g)'=f'+g'$. But if both f and g have an infinite derivative at c, what about the differentiability of the sum f+g at c? It may still be differentiable, e.g. $f=x^{1/3}$ and $g=-x^{1/3}$: f+g=0, so (f+g)'=0 at c=0, even though $f'(c)+g'(c)=(+\infty)+(-\infty)$ is undefined.

Now I want to find an example such that $f'(c)=+\infty, g'(c)=-\infty$, but f+g is not differentiable at c, where c is a finite real number. I hope to find a function F satisfying 1) F is continuous at c, 2) F has a bounded derivative near c, except at c itself, and 3) $F'$ oscillates like $\sin\frac{1}{x}$ so that F is not differentiable at c. If I can find such an F, my example can be easily constructed. But I failed to find such an F. Can you help me find one, or construct the example in another way? Thanks!

• June 7th 2010, 02:40 AM
Ackbeet

Try two functions that behave more or less the same way as the functions you gave, but don't combine. How about $f(x)=\sqrt[3]{x}$ and $g(x)=-\sqrt[5]{x}$? Here's an example where, I think, there's too much symmetry, and the solution involves destroying symmetry.

• June 7th 2010, 05:30 PM
zzzhhh

$\sqrt[3]{x}-\sqrt[5]{x}$ does not meet my requirement: it has derivative $-\infty$ at 0. We should not only destroy the symmetry; the destroyer itself must not be differentiable at c. Since derivatives do not have first-kind discontinuities, I think the destroyer should have an oscillating derivative, that is, a second-kind discontinuity, near c, as condition 3) in my first post indicated.
• June 7th 2010, 06:50 PM
Ackbeet

Wow. It is not intuitive at all that my candidate actually had a derivative at 0. I must confess I didn't actually check it. I just figured you'd have a cusp at the origin, which would not have a definable two-sided derivative. If you plot my function, it does appear to have a cusp at the origin. But it must hang to the left of the origin just enough to get that negative slope. Anyway. You've got a tough assignment there. I would look into the construction of the everywhere continuous, nowhere differentiable function. I realize that that function, as is, does not solve your problem. But its construction might give you some ideas. You might also try cycloids, which have bona fide cusps.
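For the record, zzzhhh's claim is easy to check numerically. A small Python sketch of the difference quotient of $f(x)=\sqrt[3]{x}-\sqrt[5]{x}$ at $c=0$, using real odd roots so that negative x is handled too:

```python
def odd_root(x, n):
    """Real n-th root for odd n, defined for negative x as well."""
    sign = 1.0 if x >= 0 else -1.0
    return sign * abs(x) ** (1.0 / n)

def f(x):
    # f(x) = x**(1/3) - x**(1/5), with f(0) = 0
    return odd_root(x, 3) - odd_root(x, 5)

for x in (1e-3, 1e-6, 1e-9, -1e-9, -1e-6, -1e-3):
    q = (f(x) - f(0)) / x   # difference quotient at c = 0
    print(f"x = {x:9.0e}   quotient = {q:.3g}")
```

For $x>0$ the quotient is $x^{-2/3}-x^{-4/5}$; the second term dominates as $x\to 0$, and by odd symmetry the same value appears from the left, so the quotient diverges to $-\infty$ from both sides, i.e. $f'(0)=-\infty$ in the sense of the definition above.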
https://andthentheresphysics.wordpress.com/2014/10/25/back-to-basics/
## Back to basics Although I didn’t really think through starting this blog, one motivation was that I was aware that quite a lot of what was said about climate science in the blogosphere was simply wrong. I thought that if I could point out these basic errors, people might go “oh, okay, I didn’t realise. Thanks.” Of course that isn’t what happened and I was, obviously, naive to think it would. In one of my early posts, I pointed out what I thought was an error in a Watts Up With That post written by Roger Pielke Sr. Tom Curtis, very kindly 🙂 , came along and pointed out that I was actually wrong. So, I wrote another post in which I acknowledged my error and ended up in a lengthy discussion with Roger Pielke Sr about feedbacks. I thought his conclusion about whether or not they were already operating was wrong. We didn’t achieve much, but it was perfectly cordial; something I value quite highly these days. I haven’t really encountered Roger Pielke Sr much since then, but in the last couple of days have been engaged in a discussion with him on a RealClimate post. The discussion relates to his guest post on Judith Curry’s blog in which he suggests an alternative metric to assess global warming, which I’ve discussed before. The basic idea is to assess global warming using the following equation $\Delta Q = \Delta F - \Delta T / \lambda.$ This is essentially the energy budget formalism, and is a perfectly reasonable way to assess global warming (with some caveats). The problem, though, is that I think Roger’s definition – and understanding – of the terms isn’t quite correct. The equation is basically saying that if one considers some time interval over which there is a change in external forcing, $\Delta F$, and a change in temperature, $\Delta T$, then if the climate sensitivity is $\lambda$, there will be a change in system heat uptake rate (or change in radiative imbalance) of $\Delta Q$. 
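To make the bookkeeping concrete, here is a minimal numerical sketch of the formalism (Python; the values of $\lambda$, $\Delta F$ and $\Delta T$ are round illustrative numbers, not measurements):

```python
# dQ = dF - dT / lam : the energy-budget formalism discussed above.
# All numbers are round illustrative placeholders, not observations.
lam = 0.8   # climate sensitivity parameter, K per (W/m^2)
dF = 2.3    # change in external forcing over the interval, W/m^2
dT = 0.9    # change in global temperature over the interval, K

# Change in system heat uptake rate (change in radiative imbalance):
dQ = dF - dT / lam
print(f"change in radiative imbalance: {dQ:.3f} W/m^2")

# Note: dQ equals the *current* imbalance only if the initial imbalance
# was zero, i.e. only if the starting state was in equilibrium.
```

With these numbers $\Delta Q \approx 1.18$ W m⁻²; whether that is the imbalance itself or only the change in it is exactly the distinction at issue below.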
Roger, however, interprets $\Delta Q$ as simply being the radiative imbalance, rather than the change in radiative imbalance. This is only true if the initial state is one in which the system is in equilibrium and hence the initial system heat uptake rate (radiative imbalance) is zero. Additionally, Roger doesn’t appear to understand the definition of a radiative forcing. For example, in this comment he asks

What is the current (2014) radiative forcing from added CO2 concentrations (above the pre-industrial concentrations)? We do not need the difference between these two time periods, which the IPCC presents, but the current forcing given that some of the added CO2 has been adjusted to by the warming up to this time.

Now, I don’t think this actually makes sense. A radiative forcing is defined as being the change in net (down minus up) irradiance (solar plus longwave; in W m⁻²) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values. In other words, if there is some change in an external driver of climate change (Sun, volcanoes, anthropogenic GHGs), how does this change influence the net irradiance, or the net flux into the system, without the troposphere responding to this change? You can’t really define an instantaneous radiative forcing. You can define a total, or a change. Also the statement the current forcing given that some of the added CO2 has been adjusted to by the warming up to this time is similarly confused, since as the system adjusts, the radiative forcing doesn’t change. The net flux changes due to the Planck response and other feedbacks, but the forcing remains the same. I’ll also add that this comment is suggesting that Roger Pielke Sr also doesn’t understand the origin of some of the data that he’s using in his analysis.
So, this seems to be an example of someone who is critical both of mainstream climate science and of the IPCC, but doesn’t appear to understand some of the basic concepts that he’s ultimately criticising. I have a similar issue sometimes with Judith Curry who often criticises climate scientists (or the IPCC) for not considering natural warming processes, when in fact they do. The reason that it appears that they don’t consider these processes is really because these processes appear unable to explain our recent warming. It’s not that they’re being ignored, it’s that they seem to be much smaller than the anthropogenic influences. There’s nothing wrong with trying to understand the role of natural processes, but it would be good if those doing so (and who thought they might have a significant influence) appeared to understand why many think they’re not a significant driver of our current warming.

In a sense, this is why I sometimes find the whole idea of an improved dialogue difficult to contemplate. I’m all for improved dialogue but am not sure I see the point when the most suitable response to what someone says is “apart from being completely wrong, that’s an excellent point.” Now, of course, it would be great if – when pointing out these errors – the response was “oh, okay, I didn’t realise. Thanks.”. However – as I pointed out at the beginning of this post – I’d be naive to think that that was likely. Of course, if I’ve made some error (and I may well have) feel free to point them out. If you do, and if you’re right, I promise to respond with “oh, okay, I didn’t realise. Thanks.” 🙂

This entry was posted in Climate change, Climate sensitivity, Gavin Schmidt, Global warming, IPCC, Judith Curry, Science. Bookmark the permalink.

### 58 Responses to Back to basics

1. Pierre-Normand says: Yes, I’ve also questioned him on Climate Etc some time ago about his conception of total forcing and couldn’t make sense of his response.
It had also seemed to me that, once some of his assumptions were corrected, his derivation of sensitivity on the basis of ocean heat content was back in line with the IPCC’s. I’ll dig that up and compare with your recent discussion.

2. Pierre-Normand, Yes, that would be interesting to see. I also concluded that if he corrected the errors in his calculation it would probably bring it more in line with IPCC estimates.

3. Are there unique correct answers to this kind of semantic questions? Both IPCC and Pielke Sr seem to agree that RF is the imbalance that results from some specified change. The concept does not make sense without stating also the change that’s causing the imbalance, but I would not state as categorically as you do that the way Pielke Sr is using the concept is wrong. It’s not the imbalance that results from an assumed almost sudden change (almost as stratosphere is allowed to adjust) but the imbalance from a particular real history. As long as that’s made clear, I don’t think it’s right to state that his way of using the word is wrong. Some may find it confusing, but how badly is that really the case?

4. Pekka, I largely disagree with your comment and I don’t understand how you can claim this is semantics. The radiative forcing is defined as being the change in TOA flux after the stratosphere only has responded to some change in external driver. Therefore there is no way to determine the radiative forcing in 2014 without it being with reference to some earlier period. Of course you could define it as being with reference to 2014, but that would then, by definition, be zero.

It’s not the imbalance that results from an assumed almost sudden change (almost as stratosphere is allowed to adjust) but the imbalance from a particular real history.

I’m not even sure that I understand what you mean by this.
The radiative forcing is the change in net flux after some change in an external driver and after allowing the stratosphere to respond, but not the troposphere. You seem to be suggesting that it’s okay to redefine the term without making that clear. There is a difference between a radiative forcing and the radiative imbalance. The radiative forcing always has to be relative to some base and therefore cannot be determined for a single instant in time. The radiative imbalance – on the other hand – is the difference between the incoming and outgoing flux and so can be defined at some instant in time. I don’t really see how you can argue otherwise unless you’ve decided to redefine the term radiative forcing.

5. Pekka, I’ll add that in the IPCC definition a forcing refers only to external influences (Sun, volcanoes, anthro). All the other influences (Planck response, water vapour, lapse rate, clouds) are feedbacks. You – and Roger Pielke Sr – seem to be suggesting that feedbacks can influence the value of the forcing. I would argue that this is only true if you choose to redefine the meaning of forcing.

6. Perse says: ATTP, You’ll be glad to know that I have a goal to always respond to constructive criticism with “Oh, okay, I didn’t realize. Thanks”!

7. Pierre-Normand says: Pekka: “Are there unique correct answers to this kind of semantic questions?” This was not at issue. I was quite happy to allow him whatever definition he wanted. It had seemed to me that his derivation was not consistent with his own definition. But I’ll review this old topic (and the recent RealClimate exchange) before commenting further. Incidentally, I found by chance your “Kinetic gas theory for gas in gravitational field” this morning and found it quite useful since I am currently arguing with a Climate Etc.
regular that an isothermal state is perfectly consistent with a vertical density gradient in a box under gravity (something that he can’t fathom, and is quite unwilling to accept, for a variety of strange reasons). It also confirmed most of the intuitions that I had developed so far about the features of the velocity distribution that must satisfy the Boltzmann distribution at equilibrium. I never had thought about this problem before FOMD brought up a neat lunar thought experiment that illuminates the (lack of a) ‘problem’ with the idea of particles falling down while maintaining a constant velocity distribution.

8. IPCC has defined how IPCC uses the word. In spite of the important role IPCC has, they cannot decree that all other uses are wrong.

9. Pekka, In spite of the important role IPCC has, they cannot decree that all other uses are wrong.

In a sense I agree, but if someone is going to use the term differently, they should make that clear. You can’t use the same term as everyone else but mean something different, without making that clear. Similarly, if you’re going to argue that someone else (the IPCC) is wrong, you have to do so using their definition of the term, not yours (or you have to ensure that you’ve correctly mapped from their definition to yours). In another sense, I disagree. If an organisation defines a scientific term, in what scenario is it sensible for a single scientist to use the same term but mean something different? It would be like someone saying “I’ve decided that a metre is the distance an ant typically crawls in 10 seconds” and then use that to argue that everyone else’s calculations are wrong. To be honest, I’m actually slightly uncertain how to respond to what you’re saying. This isn’t about the term specifically, but about whether or not what Roger Pielke Sr appears to be suggesting makes scientific sense. The term itself is not that important. What it represents is what’s important.

10. BBD says: Bloody hell, Pekka! 🙂

11.
BBD, Maybe you should make it clear how you’re defining the terms “bloody hell”. Pekka might think you mean “good job!” 🙂

12. Eli Rabett says: Antoine Lavoisier on nomenclature and why Roger Sr. cannot play Humpty Dumpty

The impossibility of separating the nomenclature of a science from the science itself, is owing to this, that every branch of physical science must consist of three things; the series of facts which are the objects of the science, the ideas which represent these facts, and the words by which these ideas are expressed. Like three impressions of the same seal, the word ought to produce the idea, and the idea to be a picture of the fact. And, as ideas are preserved and communicated by means of words, it necessarily follows that we cannot improve the language of any science without at the same time improving the science itself; neither can we, on the other hand, improve a science, without improving the language or nomenclature which belongs to it. However certain the facts of any science may be, and, however just the ideas we may have formed of these facts, we can only communicate false impressions to others, while we want words by which these may be properly expressed.[3]

13. Concerning the RealClimate post of stefan (I assume that’s Rahmstorf), I like it very much, as it presents almost exactly the same arguments I have presented a couple of times at Climate Etc (perhaps also here, I’m not sure) about the choice of the indicator of warming to be used in reporting to the public. I didn’t discuss policy targets in those comments, but the arguments are the same.

14. Arthur Smith says: If Roger defines radiative forcing to mean what other people call radiative imbalance, which sounds like what his comments indicate, then his equation that ATTP quoted reduces to Delta T = 0 (because he has defined delta F = delta Q). That seems rather pointless, it certainly tells us nothing about feedbacks.
If that’s not how he defines radiative forcing, does anybody have a clue what his definition is? Pekka?

15. Arthur, Yes, that was what I realised after responding to Pekka’s comment. If Roger is using his equation as done by others, then the Planck response and the other feedbacks are represented by $\lambda$, the radiative imbalance is represented by $\Delta Q$, and the forcings are represented by $\Delta F$. If, however, he has redefined what is meant by the term forcings, then either his equation is trivial, as you suggest, or I no longer understand it.

16. Pierre-Normand says: This was the response that puzzled me: http://judithcurry.com/2014/04/28/an-alternative-metric-to-assess-global-warming/#comment-537841

This was my main criticism, posted four days later: http://judithcurry.com/2014/04/28/an-alternative-metric-to-assess-global-warming/#comment-542031

Re-reading this quickly, I may have neglected non-CO2 greenhouse gas contributions in my claim about the IPCC projections for 2100. Roger Sr. didn’t respond but I don’t blame him. I think he had departed already.

17. My reaction is certainly related to how I like to see such details discussed that may be misleading. In my way of thinking it’s appropriate to note that the use differs from the standard usage and that this makes understanding the message difficult or even impossible. That approach may help in resolving the issue rapidly. Commenting on another site it’s also appropriate to note that the concept is not used according to the standard practice. Whether there are errors in the argument, when the conceptual misunderstandings have been resolved is another issue. There are also cases where people use words intentionally to mislead, or use the same word in two different meanings and then build false conclusions based on that. In such cases it’s certainly appropriate to state that directly and strongly. I don’t think that Pielke Sr had that kind goals in his writing (but I may err on that).

18.
The thread Pierre-Normand linked to contains also one of the cases, where I presented arguments similar to stefan’s post at RC.

19. Pekka, I agree with you about Stefan’s post and with the points you made on Climate etc. I certainly don’t think that the OHC is a particularly good indicator/target for policy purposes.

20. Pierre-Normand says: Pekka: “There are also cases where people use words intentionally to mislead, or use the same word in two different meanings and then build false conclusions based on that. In such cases it’s certainly appropriate to state that directly and strongly. I don’t think that Pielke Sr had that kind goals in his writing (but I may err on that).”

When those kinds of self-serving equivocations occur, it’s very frustrating for the person arguing the other side of the debate, but I think it’s almost never intentional. It’s almost always stemming from a powerful mechanism for avoiding cognitive dissonance. But one can’t make this remark during the debate. It’s not a legitimate debate move to question one’s opponent’s cognitive integrity. So, the best we can do is to constantly re-frame the argument, insist on the faults in the other side’s argument, and hope the main point will sink in eventually… but not count on it.

21. Pierre-Normand says: Pekka: “The thread Pierre-Normand linked to contains also one of the cases, where I presented arguments similar to stefan’s post at RC”

Yes, I remember very well the good points that you made back then.

22. I am sure that Pielke Senior is wrong. He doesn’t seem to understand the fat-tail of thermal diffusion. There is always a fast transient (that the deniers mistake for a first-order damped exponential response) but always followed by a long fat tail (that they always conveniently ignore). Not using something akin to the heat equation when dealing with a large thermal mass and substituting first-order kinetics for the behavior is about as futile as trying to square the circle.
If you want to get back to the basics on this topic, start from there.

23. WHT, Well, yes, that’s another issue. He seems to want to use these simple models to assess global warming. I see no problem with that if your goal is to understand if it’s happening or if you want to quantify it generally. Suggesting that these simple models can be used to validate climate models or accurately quantify global warming seems a bit much.

24. Ah, glasshopper, you are young. Every novice must get through the phase of arguing with RP Sr about alternative metrics and his own defn of terms. Eventually The Enlightened realise that the answer is Mu.

25. William, Just trying to earn my wings 🙂

26. Oh, I’ve just realised that the correct response was probably wax on, wax off.

27. John Mashey says: On reading the post, I was going to invoke Humpty Dumpty, but seeing the comments I can only second Eli’s. The Rabett was too quick. The Alice books are fine sources: believing six impossible things before breakfast is often seen.

28. Steve Bloom says: I’m recalling that way back on his blog (what William refers to, ~ 2005 IIRC) RP Sr. started out with some similarly obscurantist reasoning with regard to the ability of surface stations to measure temperature in any meaningful way (this was the genesis of the Watts surface stations project, now foundered on the rocks of reality), going from there to The One True Metric business. While there still isn’t an OHC metric that can be tracked with any accuracy, I had assumed that now that we do have some measurements showing a warming trend we’d be hearing from RP Sr. shortly. As noted above, the cognitive dissonance is strong in that one, IMO part not-invented-here syndrome, part an unwillingness to admit that things might really be warming as they are, and finally a dollop of just plain old attention-seeking.

29. Steve Bloom says: Oh, a fourth factor: The personal feedback loop with RP Jr.

30.
Steve Bloom says: To clarify what is probably obvious, the cognitive dissonance arises from the unwillingness and is enhanced by the other three factors.

31. Tony Duncan says: When you understand the difference between David Carradine and Ralph Macchio you may be worthy of sensei Connolly’s attention.

32. Well, at least I know the difference between Ralph Macchio and Jaden Smith!

33. Tony, Actually, if you don’t start spelling Sensei Connolley’s name correctly, you may be getting some attention of your own (and by Sensei, I mean Dr) 😉

34. BBD says: Eventually The Enlightened realise that the answer is Mu. Either that, or it’s a finger pointing away to the moon.

35. Steve Bloom says: Mere kyus cannot be senseis, sorry.

36. There is simple and there is simplifying — what we want is the latter, because we do want to reduce the immensely complicated simulation known as a GCM. I streamed Gavin Schmidt’s talk at the Rotman conference this morning (along with 12 other people apparently) and paraphrased a few of his statements below:

“No climate model can be true.”
“No physical model of the real world can be true.”
“If you were to try to ’game’ the system, you would fail miserably.”
“Why do climate modelers pursue this endless task of increasing complexity, instead of simple energy balance models?” The answer he gave is because simple models do not include all the detailed factors.
“El Niños and La Niñas are random. … ENSO cannot be predicted more than 6 months in advance.”

Prior to his talk, there was twitter activity around Jim Fleming’s talk yesterday.

Steve Easterbrook @SMEasterbrook: Fleming: The butterfly effect is misnamed. Lorenz knew the perturbation would have to be really big. Better label: Mothra effect #Rotman2014

Gavin Schmidt @ClimateOfGavin: @smeasterbrook not sure I actually agree with this though. In GCMs smallest possible changes have same effect.
My take is that this speaker Jim Fleming suggested the idea that the chaotic models of climate as originally proposed by Edward Lorenz are not as chaotic as people think. Easterbrook interpreted that by stating that a butterfly was too weak a forcing to be able to change anything, and something more akin to a monstrous moth was needed to change the trajectory of climate.

I think that there are probably a couple of scales that we need to consider. Events such as hurricanes are likely unpredictable, but they are really inconsequential when compared to the largely deterministic trajectories of significant phenomena such as ENSO. Same with CO2, as that is a Godzilla of a forcing.

What we are trying to do at the Azimuth Project the last few months is to come up with effective models for ENSO. The idea is that a set of quasi-periodic forcing factors, when combined properly and input to a sloshing formulation, can generate the quasi-periodic time series of ENSO. This is a thread that started a couple weeks ago and shows how much progress we are making towards a simplifying model of ENSO: http://azimuth.mathforge.org/discussion/1504/symbolic-regression-machine-learning-and-enso-time-series/

37. Tom Curtis says:

1) For Pekka, equations constrain semantics. Therefore when Pielke Sr affirms that ΔQ = ΔF – ΔT/λ he also precludes a semantics in which, in effect, he defines F = Q. If he does not, he asserts not that ΔT happens to be zero, but that ΔT is necessarily zero as a matter of physical law. (Put another way, as he clearly does not accept that consequence, he is not entitled to a variant semantics that entails it.) In fact, given that the formula was derived using the standard IPCC semantics, if he varies the semantics he must independently derive the formula, whose coincidence in form to the standard formula is then purely accidental. As he does not do that, his variant semantics is a mistake, not a convention.
2) More generally, Pielke Sr may be defining forcing as the total radiative forcing (= (1-Albedo) * TSI/4 + Total Greenhouse Effect). In that case he is confused in including feedbacks with forcings. More importantly, by that definition ΔF = ΔQ so that, again, necessarily ΔT = 0. Consequently, while it makes more sense of how he could believe he is talking about forcings with his variant terminology, it does not dig him out of his hole. He could fix that by explicitly excluding all feedbacks from the calculation, but that then reduces the formulation to the standard IPCC position.

38. Eli Rabett says: Tom, equally the point, and Eli assumes Pekka agrees, semantics constrains equations. Pielke Sr. has led everyone a merry chase by using his own very personal semantics and simply not letting anyone know what he is doing until forced into the open, at which point he turned huffy. Eli, and the rest, not being mind readers, this is the sort of thing that gets you punched out in high school.

39. I noticed that last week Isaac Held formulated a model of thermal diffusion exactly like the one I did two years ago: http://theoilconundrum.blogspot.com/2012/01/thermal-diffusion-and-missing-heat.html

My impulse response shows a square-root hyperbolic response, which is the fat-tail profile I referred to earlier:

$\frac{1}{2}\,\frac{1}{x_0+\sqrt{Dt}}$

His step response looks like this:

$h(\tau) \approx 1/(1+(\pi\tau)^{-1/2})$

Same characteristic dependence with the fat tail, showing a slow asymptotic increase. Held gets his response function as an approximation to the solution to the heat equation, which I mentioned upthread. This is the stuff that Pielke Senior is not considering, and that Curry hasn’t a clue about. The deniers had better start paying attention to Held, as he understands what the idea of simplifying is all about.

40.
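[A small numeric aside on the fat tail described above: the approximate step response $h(\tau) \approx 1/(1+(\pi\tau)^{-1/2})$ quoted from Held approaches its asymptote far more slowly than a first-order exponential. A Python sketch, where the dimensionless time values are arbitrary illustrative choices and the exponential is just a generic first-order contrast, not anything from Held's post:]

```python
import math

def fat_tail_step(tau):
    """Diffusive (fat-tail) step response: h(tau) ~ 1/(1 + (pi*tau)^(-1/2))."""
    return 1.0 / (1.0 + (math.pi * tau) ** -0.5)

def first_order_step(tau):
    """First-order (exponential) step response, shown only for contrast."""
    return 1.0 - math.exp(-tau)

# The exponential is essentially saturated by tau ~ 10, while the
# diffusive response is still a few percent short even at tau = 1000.
for tau in (1, 10, 100, 1000):
    print(tau, round(first_order_step(tau), 4), round(fat_tail_step(tau), 4))
```

Even at $\tau = 1000$ the diffusive response has not reached 99% of its asymptote, which is the sense in which a first-order fit to the initial transient badly misrepresents the long-time behaviour.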
Tom and Eli, I’m not going to check what I wrote, but I think that I did in some way mention that using the same word in two different meanings and then drawing conclusions from the equality of the word is an explicit error. That’s what happens when it appears in the equation based on a different definition than in the text. I cannot always express my thoughts precisely enough in English. That’s particularly true when I try to be brief. This discussion has turned to discussing how I express myself, when I discuss how ATTP expresses his thoughts in discussing expressions that Pielke Sr uses.

41. WHT, That Isaac Held post looks very interesting. Thanks. Pekka, Yes, it does seem rather circular.

42. Mack says: ……. (.if it passes moderation.) …… [Mod: No, it didn’t 🙂 ]

43. Mack says: Ok ATTP, can I talk nicely to you here about “radiative forcing” and “radiative imbalance”? Can I explain these to you? You don’t seem to understand.

44. You can try, but I think I do. Go ahead, though.

45. Mack says: Well my comment that didn’t pass moderation explains the “radiative forcing” bit. The “radiative balance” or “imbalance” is false because you confuse basic heat transfer. Heat transfers in two categories: 1) radiative heat transfer, and 2) conduction and convection heat transfer (to all intents and purposes we can dismiss conduction in the atmosphere). Radiative heat transfer travels at the speed of light and the other two by a slower “mechanical” process. Here’s my take… The satellites look up and measure a real incoming solar radiation of about 1360 W/sq.m. Because it is a yearly global average it cannot be divided down and should be regarded as non-directional, covering the whole globe at the TOA. We look up from the Earth surface with land-based radiometers (properly shielded), and if the readings from these are collated correctly and interpreted correctly they yield a yearly global average of about 340 W/sq.m. The satellites also look down and… hey presto!
they also read about 340 W/sq.m. net yearly average at the earth’s surface. The instruments are only measuring radiative heat transfer. The satellites sitting in “cold” space measure only radiation. Trenberth, Salby and all the rest cannot get an Earth energy budget to “balance” in W/sq.m. because the status quo I’ve just explained is maintained continuously by the sun. As physics depicts… a large straight arrow strikes an object and a small wavy one comes off. Watts travelling at the speed of light become measurable in time, i.e. converted into joules, which then just waft off “mechanically” into space UNDETECTED. The only cause for concern being the definition of “space” 🙂 .

46. Mack, I’ll let that through because I said I would. I’m also assuming that you’re being serious, but it isn’t obvious. What is obvious is that it’s clear that you don’t understand radiative forcings or the concept of a radiative imbalance. They are as explained in my post, not as explained by you.

47. Mack says: Thanks ATTP (Anders?) 🙂 😉

Mack: Ok ATTP can I talk nicely to you here about “radiative forcing” and “radiative imbalance”? Can I explain these to you? You don’t seem to understand.

Ah. Mack appears to be a relative newcomer here. Perhaps he missed our recent discussion of this very important paper?

49. Tom Curtis says: Mack, just once: “The satellites look up and measure a real incoming solar radiation of about 1360 W/sq.m. Because it is a yearly global average it cannot be divided down and should be regarded as non-directional covering the whole globe at the TOA.”

The 1361 W/m^2 Total Solar Irradiance (TSI) is the power from incoming sunlight falling on a plane perpendicular to the incoming sunlight. As it happens, the Earth is an oblate spheroid, and hence only a very small part of it is perpendicular to the incoming sunlight at any time.
If we made a shield between us and the Sun, having the same diameter as the Earth, and maintained at the top of the atmosphere so as to always block all incoming sunlight, then the area of the shield that is bathed in sunlight would be one fourth of the area of the Earth. Ergo, without that shield, and on average, the same amount of sunlight is spread over four times the area. Hence the insolation on the Earth is 1361/4 W/m^2. Further, 30% of that incoming sunlight is reflected away, so that the total sunlight absorbed is 0.7 * 1361/4 = 238.2 W/m^2.

As it happens, the IR radiation leaving the Earth’s surface is 396 W/m^2 (Kiehl and Trenberth, 2010). That is 158 W/m^2 greater than the incoming solar radiation, so absent any other effect the Earth would cool very rapidly. However, the outgoing IR radiation as measured at the top of the atmosphere is 238.5 W/m^2 (also Kiehl and Trenberth). The difference between the outgoing radiation at the surface and at the TOA is the greenhouse effect. That difference has been directly measured (see also comment 43), although with a level of accuracy not adequate to determine the actual energy imbalance (which needs to be inferred from changes in Ocean Heat Content).

This much has been very basic physics, and has been confirmed repeatedly by direct observations. We can discuss more if you are prepared to acknowledge, and learn, at least this much. If not, I have no interest in further discussion.

50. Eli Rabett says: Radiative heat transfer only occurs at the speed of light in a vacuum. Otherwise multiple scattering and especially absorption/emission cycles slow it up. At the peak of a ghg absorption line essentially no radiation at the surface makes it to space.

51. Willard says: > using the same word in two different meanings and then drawing conclusions from the equality of the word is an explicit error.
http://en.m.wikipedia.org/wiki/Equivocation

A side note on semantics and equations:

Physics makes powerful use of mathematics, yet the way this use is made is often poorly understood. Professionals closely integrate their mathematical symbology with physical meaning, resulting in a powerful and productive structure. But because of the way the cognitive system builds expertise through binding, experts may have difficulty in unpacking their well-established knowledge in order to understand the difficulties novice students have in learning their subject. This is particularly evident in subjects in which the students are learning to use mathematics to which they have previously been exposed in math classes in complex new ways. In this paper, we propose that some of this unpacking can be facilitated by adopting ideas and methods developed in the field of cognitive semantics, a sub-branch of linguistics devoted to understanding how meaning is associated with language.

arxiv.org/pdf/1002.0472

Equations void of semantics are traces on a medium.

52. Where I have met the worst problems related to equating two concepts of the same name has been in modeling large scale energy systems. Such models are highly aggregated. Their variables have names that give the appearance of being well and uniquely defined. The Eurostat statistical database often contains data columns named similarly, but using those numbers as input to the models often gives nonsensical results. Similar problems are equally typical for many other highly aggregated models. A lot of additional research is needed to either make the model interpret the input correctly or to extract from the available input data values that correspond well enough to the variables of the model. That the same name is used in two different connections, or by two people in superficially similar conditions, is likely to lead to errors, unless good care is taken to remove such inconsistencies.

53.
Willard says: > That the same name is used in two different connections, or by two people in superficially similar conditions, is likely to lead to errors, unless good care is taken to remove such inconsistencies.

Here’s the latest equivocation I stumbled upon:

In functional programming, a monad is a structure that represents computations defined as sequences of steps: a type with a monad structure defines what it means to chain operations, or nest functions of that type together. This allows the programmer to build pipelines that process data in steps, in which each action is decorated with additional processing rules provided by the monad. As such, monads have been described as “programmable semicolons”; a semicolon is the operator used to chain together individual statements in many imperative programming languages, thus the expression implies that extra code will be executed between the statements in the pipeline. Monads have also been explained with a physical metaphor as assembly lines, where a conveyor belt transports data between functional units that transform it one step at a time. They can also be seen as a functional design pattern to build generic types. […] The name and concept comes from the eponymous concept (monad) in category theory, where monads are one particular kind of functor, a mapping between categories; although the term monad in functional programming contexts is usually used with a meaning corresponding to that of the term strong monad in category theory.

The inverse case is also common, when theoreticians invent separate fields that can then be connected via a correspondence, e.g.:

In programming language theory and proof theory, the Curry–Howard correspondence (also known as the Curry–Howard isomorphism or equivalence, or the proofs-as-programs and propositions- or formulae-as-types interpretation) is the direct relationship between computer programs and mathematical proofs.
It is a generalization of a syntactic analogy between systems of formal logic and computational calculi that was first discovered by the American mathematician Haskell Curry and logician William Alvin Howard. It is the link between logic and computation that is usually attributed to Curry and Howard, although the idea is related to the operational interpretation of intuitionistic logic given in various formulations by L. E. J. Brouwer, Arend Heyting and Andrey Kolmogorov (see Brouwer–Heyting–Kolmogorov interpretation). The relationship has been extended to include category theory as the three-way Curry–Howard–Lambek correspondence.

http://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence

54. Eli Rabett says: The hardest and most necessary thing in introductory classes is to teach students to write answers that integrate text for meaning with equations for process.

55. My first reminiscences of observing how difficult it is for many to connect the meaning of a problem to the mathematical expression or solution are from the elementary school, and related to a problem that was admittedly difficult for 3rd graders. Similar observations accrued throughout the school years. Children who get on the right track early learn mathematics much more easily also later. That’s at least what I believe.

56. ATTP, this is just a thought on the communication of mathematical ideas – the use of natural language vs. the use of symbolism vs.
both:

So the reader can see some context, ATTP said in comment 92 at http://www.realclimate.org/index.php/archives/2014/10/ocean-heat-storage-a-particularly-lousy-policy-target/ the following: “Delta Q will only be the same as the radiative imbalance at the final time interval if it is zero (in equilibrium) at the beginning.”

ATTP, do you think that also giving some basic symbolism along the lines of

x_2 – x_1 = x_2
x_2 – x_2 = x_1
0 = x_1

perhaps might have helped him see that you were communicating to him just an application of basic algebra?

57. K&A, I doubt it will help. Rather than responding to my point, he moved on to asking me a question which I decided to answer, only to discover that what he was asking wasn’t quite what I thought he was asking.

58. Eli Rabett says: With both Pielkes you can’t play their game. As MT said, with them it is Pielkes all the way down.
http://mathematica.stackexchange.com/questions/36846/how-can-i-calculate-all-irreducible-polynomials-of-31-degree-in-mathbb-z-2x
# How can I calculate all irreducible polynomials of degree 31 in $\mathbb Z_2[x]$?

How can I calculate all binary irreducible polynomials of degree 31? Or, how do I calculate all irreducible $f$ in $\mathbb Z_2[x]$? (Irreducibility in $\mathbb Z_2[x]$ and in $\mathbb R[x]$ are different things; for example, in $\mathbb Z_2[x]$ we have $x^2+1=(x+1)(x+1)$.)

I'm not quite sure what good it will be to calculate them, as there are 143,522,117. See oeis.org/… – belisarius has settled Nov 12 '13 at 4:08

Over R none will be irreducible. Over Q would be a different matter. Over Z_2 there will be a large number of them, as @belisarius has already indicated. – Daniel Lichtblau Nov 12 '13 at 16:15

This gives you the irreducible polynomials of degree up to n - 1 in $\mathbb Z_2[x]$:

    n = 5;
    Table[Pick @@ Transpose[({#, IrreduciblePolynomialQ[#, Modulus -> 2]} & /@
       (FromDigits[#, x] & /@ Tuples[{0, 1}, i]))], {i, n}] // Column

However, for degree 31 there are 2^32 == 4,294,967,296 tuples to explore. That's not feasible in this way.

I doubt the usefulness of calculating them all, but here's a memory-diet way to do that, given enough lifespan on your part:

    Needs["Combinatorica`"];
    s = {}; n = 31;
    While[(s = NextSubset[Range[n + 1], s]) != {},
      If[IrreduciblePolynomialQ[#, Modulus -> 2], Print@#] &[
        Array[x^# &, n + 1, 0].SparseArray[Thread[Rule[s, 1]], n + 1]];
    ]
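As a footnote to the count mentioned in the comments: rather than enumerating, one can count the irreducibles directly with the standard Möbius formula, $\frac{1}{n}\sum_{d\mid n}\mu(d)\,q^{n/d}$ monic irreducible polynomials of degree $n$ over $GF(q)$. Below is a quick Python check (Python rather than the thread's Mathematica, just to make the arithmetic explicit); it gives 69,273,666 irreducibles of degree exactly 31, so belisarius's larger figure presumably counts all degrees up to 31.

```python
def mobius(n):
    """Mobius function mu(n) by trial division."""
    result = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # repeated prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result          # one leftover prime factor
    return result

def count_irreducible(n, q=2):
    """Number of monic irreducible degree-n polynomials over GF(q):
    (1/n) * sum over d | n of mu(d) * q^(n/d)."""
    total = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return total // n

print(count_irreducible(31))  # 69273666, i.e. (2^31 - 2)/31 since 31 is prime
```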
http://polymathprojects.org/2012/06/12/polymath7-research-thread-1-the-hot-spots-conjecture/
The polymath blog

June 12, 2012

Polymath7 research thread 1: The Hot Spots Conjecture

Filed under: hot spots, research — Terence Tao @ 8:58 pm

The previous research thread for the Polymath7 project “the Hot Spots Conjecture” is now quite full, so I am now rolling it over to a fresh thread both to summarise the progress thus far, and to make it a bit easier to catch up on the latest developments. The objective of this project is to prove that for an acute angle triangle ABC, that

1. The second eigenvalue of the Neumann Laplacian is simple (unless ABC is equilateral); and
2. For any second eigenfunction of the Neumann Laplacian, the extremal values of this eigenfunction are only attained on the boundary of the triangle. (Indeed, numerics suggest that the extrema are only attained at the corners of a side of maximum length.)

To describe the progress so far, it is convenient to draw the following “map” of the parameter space. Observe that the conjecture is invariant with respect to dilation and rigid motion of the triangle, so the only relevant parameters are the three angles $\alpha,\beta,\gamma$ of the triangle. We can thus represent any such triangle as a point $(\alpha/\pi,\beta/\pi,\gamma/\pi)$ in the region $\{ (x,y,z): x+y+z=1, x,y,z > 0 \}$. The parameter space is then the following two-dimensional triangle:

Thus, for instance

1. A, N, P represent the degenerate obtuse triangles (with two angles zero, and one angle of 180 degrees);
2. B, F, O represent the degenerate acute isosceles triangles (with two angles 90 degrees, and one angle zero);
3. C, E, G, I, L, M represent the various permutations of the 30-60-90 right-angled triangle;
4. D, J, K represent the isosceles right-angled triangles (i.e. the 45-45-90 triangles);
5. H represents the equilateral triangle (i.e. the 60-60-60 triangle);
6. The acute triangles form the interior of the region BFO, with the edges of that region being the right-angled triangles, and the exterior being the obtuse triangles;
7.
The isosceles triangles form the three line segments NF, BP, AO. Sub-equilateral isosceles triangles (with apex angle smaller than 60 degrees) comprise the open line segments BH, FH, OH, while super-equilateral isosceles triangles (with apex angle larger than 60 degrees) comprise the complementary line segments AH, NH, PH.

Of course, one could quotient out by permutations and only work with one sixth of this diagram, such as ABH (or even BDH, if one restricted to the acute case), but I like seeing the symmetry as it makes for a nicer looking figure.

Here’s what we know so far with regards to the hot spots conjecture:

1. For obtuse or right-angled triangles (the blue shaded region in the figure), the monotonicity results of Banuelos and Burdzy show that the second claim of the hot spots conjecture is true for at least one second eigenfunction.
2. For any isosceles non-equilateral triangle, the eigenvalue bounds of Laugesen and Siudeja show that the second eigenvalue is simple (i.e. the first part of the hot spots conjecture), with the second eigenfunction being symmetric around the axis of symmetry for sub-equilateral triangles and anti-symmetric for super-equilateral triangles.
3. As a consequence of the above two facts and a reflection argument found in the previous research thread, this gives the second part of the hot spots conjecture for sub-equilateral triangles (the green line segments in the figure). In this case, the extrema only occur at the vertices.
4. For equilateral triangles (H in the figure), the eigenvalues and eigenfunctions can be computed exactly; the second eigenvalue has multiplicity two, and all eigenfunctions have extrema only at the vertices.
5.
For sufficiently thin acute triangles (the purple regions in the figure), the eigenfunctions are almost parallel to the sector eigenfunction given by the zeroth Bessel function; this in particular implies that they are simple (since otherwise there would be a second eigenfunction orthogonal to the sector eigenfunction). Also, a more complicated argument found in the previous research thread shows in this case that the extrema can only occur either at the pointiest vertex, or on the opposing side.

So, as the figure shows, there has been some progress on the problem, but there are still several regions of parameter space left to eliminate. It may be possible to use perturbation arguments to extend validity of the hot spots conjecture beyond the known regions by some quantitative extent, and then use numerical verification to finish off the remainder. (It appears that numerics work well for acute triangles once one has moved away from the degenerate cases B, F, O.)

The figure also suggests some possible places to focus attention on, such as:

1. Super-equilateral acute triangles (the line segments DH, GH, KH). Here, we know the second eigenfunction is simple (and anti-symmetric).
2. Nearly equilateral triangles (the region near H). The perturbation theory for the equilateral triangle could be non-trivial due to the repeated eigenvalue here.
3. Nearly isosceles right-angled triangles (the regions near D, G, K). Again, the eigenfunction theory for isosceles right-angled triangles is very explicit, but this time the eigenvalue is simple and perturbation theory should be relatively straightforward.
4. Nearly 30-60-90 triangles (the regions near C, E, G, I, L, M). Again, we have an explicit simple eigenfunction in the 30-60-90 case and an analysis should not be too difficult.
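[For the record, here is how explicit the isosceles right-angled case is. By a standard reflection argument (the particular choice of vertices below is an illustrative one, not from the post): take the 45-45-90 triangle with vertices $(0,0)$, $(1,0)$, $(1,1)$, so that the hypotenuse lies along the diagonal $y=x$ of the unit square. Every Neumann eigenfunction extends by even reflection across the diagonal to a diagonal-symmetric Neumann eigenfunction of the square, giving

```latex
u_{m,n}(x,y) = \cos(m\pi x)\cos(n\pi y) + \cos(n\pi x)\cos(m\pi y),
\qquad \lambda_{m,n} = \pi^2 (m^2+n^2), \quad m \ge n \ge 0.
```

In particular the second eigenvalue $\lambda_{1,0} = \pi^2$ is simple, with eigenfunction $\cos(\pi x) + \cos(\pi y)$, whose extrema on the triangle are $\pm 2$, attained at the vertices $(0,0)$ and $(1,1)$, the two endpoints of the hypotenuse (the side of maximum length), consistent with the numerics mentioned above.]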
There are a number of stretching techniques (such as in the Laugesen-Siudeja paper) which are good for controlling how eigenvalues deform with respect to perturbations, and this may allow us to rigorously establish the first part of the hot spots conjecture, at least, for larger portions of the parameter space.

As for numerical verification of the second part of the conjecture, it appears that we have good finite element methods that seem to give accurate results in practice, but it remains to find a way to generate rigorous guarantees of accuracy and stability with respect to perturbations. It may be best to focus on the super-equilateral acute isosceles case first, as there is now only one degree of freedom in the parameter space (the apex angle, which can vary between 60 and 90 degrees) and also a known anti-symmetry in the eigenfunction, both of which should cut down on the numerical work required.

I may have missed some other points in the above summary; please feel free to add your own summaries or other discussion below.

1. […] has been some progress in the polymath 7 project. See the new thread here. […] Pingback by New thread for Polymath 7 « Euclidean Ramsey Theory — June 12, 2012 @ 10:00 pm

2. Here is a simple eigenvalue comparison theorem: if $0 = \lambda_1(D) \leq \lambda_2(D) \leq \ldots$ denotes the Neumann eigenvalues of a domain D (counting multiplicity), and $T: D \to TD$ is a linear transformation, then $\|T\|_{op}^{-2} \lambda_k(D) \leq \lambda_k(TD) \leq \|T^{-1}\|_{op}^2 \lambda_k(D)$ for each k.
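[An aside: since rectangles have explicit Neumann spectra $\pi^2(m^2/a^2 + n^2/b^2)$, the comparison theorem just stated can be sanity-checked numerically. A Python sketch; the choice $T = \mathrm{diag}(a,b)$ mapping the unit square to a rectangle, the mode cutoff, and the tolerances are all arbitrary illustrative choices:]

```python
import math

def neumann_rect_eigs(a, b, k, mmax=20):
    """First k Neumann eigenvalues of the rectangle [0,a] x [0,b],
    which are pi^2 (m^2/a^2 + n^2/b^2) for integers m, n >= 0."""
    eigs = sorted(math.pi ** 2 * (m * m / a ** 2 + n * n / b ** 2)
                  for m in range(mmax) for n in range(mmax))
    return eigs[:k]

a, b, k = 1.5, 0.7, 10
sq = neumann_rect_eigs(1.0, 1.0, k)   # D = unit square
rect = neumann_rect_eigs(a, b, k)     # TD, with T = diag(a, b)
op_T = max(a, b)                      # ||T||_op
op_Tinv = 1.0 / min(a, b)             # ||T^{-1}||_op

# Check ||T||^{-2} lambda_k(D) <= lambda_k(TD) <= ||T^{-1}||^2 lambda_k(D).
for lam_sq, lam_rect in zip(sq, rect):
    assert op_T ** -2 * lam_sq - 1e-9 <= lam_rect <= op_Tinv ** 2 * lam_sq + 1e-9
```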
This is because of the Courant-Fisher minimax characterisation of $\lambda_k(D)$ as the supremum of the infimum of the Rayleigh-Ritz quotient $\int_D |\nabla u|^2/ \int_D |u|^2$ over all codimension k subspaces of $L^2(D)$, and because any candidate $u \in L^2(D)$ for the Rayleigh-Ritz quotient on D can be transformed into a candidate $u \circ T^{-1} \in L^2(TD)$ for the Rayleigh-Ritz quotient on TD, and vice versa. (This is not the most sophisticated comparison theorem available – for instance, the Laugesen-Siudeja paper has a more delicate analysis involving comparison of one triangle against two reference triangles, instead of just one – but it is one of the easiest to state and prove.)

One corollary of this theorem is that if one has a spectral gap $\lambda_2(D) < \lambda_3(D)$ for some triangle D, then this spectral gap persists for all nearby triangles TD, as long as T has condition number less than $(\lambda_3(D)/\lambda_2(D))^{1/2}$. This should allow us to start rigorously verifying the simplicity of the eigenvalue for at least some of the regions of the above figure, and in particular in the vicinity of the points C, D, E, G, I, J, K, L, M where the eigenvalues are explicit. With numerics, we should be able to cover other areas as well, except in the vicinity of the equilateral triangle H where of course we have a repeated eigenvalue, but perhaps some perturbative analysis near that triangle can establish simplicity there too.

Comment by Terence Tao — June 12, 2012 @ 10:50 pm

• Stability of Neumann eigenvalues was studied by Banuelos and Pang (Electron. J. Diff. Eqns., Vol. 2008(2008), No. 145, pp. 1-13) and Pang (http://dx.doi.org/10.1016/j.jmaa.2008.04.026). They prove that multiplicity 1 is stable under small perturbations, while multiplicity 2 is not. Hence the linear transformation above can be replaced with almost any small perturbation.
Comment by Bartlomiej Siudeja — June 12, 2012 @ 11:41 pm

• And a small last name correction: Siujeda should really be Siudeja. Here and in the main summary. [Oops! Sorry about that. Corrected, -T.]

Comment by Bartlomiej Siudeja — June 12, 2012 @ 11:46 pm

• Joe and I have a working high-order finite element code (to give increased order of approximation as we increase the resolution). We’re working on a mapped domain (as described in a different thread), and are starting to explore the parameter space you suggested. So far, no surprises, though we haven’t reached the perturbed equilateral triangle. We hope to post some results and graphics soon. Visualizing the results is taking some thought: for each point in parameter space, we want to record: whether the conjecture holds for the approximation; the approximate eigenvalue(s); the spectral gap; and some measure of the quality of the approximation.

Comment by Nilima Nigam — June 13, 2012 @ 4:25 am

3. Just a note: The rigorous numerical approach from [FHM1967] was used extensively to study eigenvalues of triangles by Pedro Antunes and Pedro Freitas. They studied various aspects of the Dirichlet spectrum using an improvement of [FHM1967] due to Payne and Moler (http://www.jstor.org/stable/2949550). This method also works extremely well with Bessel functions, even for far from degenerate triangles.

Comment by Bartlomiej Siudeja — June 12, 2012 @ 10:55 pm

• The Fox, Henrici and Moler paper is beautiful, and was updated by Betcke and Trefethen in SIAM Review in 2005. Barnett has a more recent paper discussing the method of particular solutions, based on Bessel functions, applied to the Neumann problem. This is harder, and the numerics are more challenging.

Comment by Nilima Nigam — June 13, 2012 @ 4:30 am

4. Continuing the ideas for Comments 13, 14, and 18 of the previous thread:

Consider a super-equilateral isosceles triangle (I will call it a 50-50-80 triangle to make things clear).
As discussed in Comments 14 and 18, since we know the second eigenfunction is anti-symmetric we can instead consider the 40-50-90 right triangle with mixed Dirichlet-Neumann. -It should also be that we can now “unfold” the 40-50-90 triangle into a 40-40-100 triangle with mixed Dirichlet-Neumann and, intuitively at least, it should be the case that the first non-trivial eigenfunction there is the eigenfunction we are looking for (Though while I think that “folding in” is always legal, appealing to the Rayleigh-Ritz formalism, in general “folding out” might introduce new first-non-trivial eigenfunctions). I am not sure if this really buys us anything though… -Having reduced the problem to the Dirichlet-Neumann boundary case, maybe it is possible to implement the method of particular solutions as suggested by Nilima in Comment 13 (links provided there). The method of particular solutions, at least as presented in those papers, considered a Dirichlet boundary condition that an eigenfunction was chosen to try and match. For the mixed problem, we now have a Dirichlet boundary (the fact that the other two boundaries are Neumann shouldn’t matter as those are taken care of for free when choosing an eigenfunction consisting of “Fourier-Bessel” functions anchored at the opposite angle). Comment by letmeitellyou — June 12, 2012 @ 11:48 pm • On the first non-trivial eigenfunction for a triangle with mixed boundary conditions (two sides Neumann, and one side Dirichlet): Intuitively, the following statement must be true for all such triangles: The maximum of the first non-trivial eigenfunction occurs at the corner opposite to the Dirichlet side. Perhaps this is on the books somewhere?
A probabilistic interpretation is as follows: The solution to the heat equation on the mixed-boundary triangle with initial condition $u_0 \equiv 1$ can be expressed probabilistically as $u(x,t)=P_x(\tau>t)$, where $\tau$ is the first time that $X_t$, a Brownian motion starting from $x$ and reflected on the Neumann sides, hits the Dirichlet side. Intuitively, to keep your Brownian motion alive the longest you would start it at the opposite corner. Of course this is all intuition and not a formal proof… Comment by letmeitellyou — June 13, 2012 @ 12:28 am • Probabilistic intuition is extremely convincing. In fact to make it even more appealing, think about a “regular” polygon that can be built by gluing matching Neumann sides of many triangles. We get a “regular” polygon with Dirichlet boundary conditions. By rotational symmetry the maximum survival time must happen at the center. Of course not every triangle gives a nice polygon (angles never add up to $2\pi$), and the ones we need never give one. We would need a multiple cover to make a polygon for arbitrary rational angles, but the intuition is kind of lost this way. Comment by Bartlomiej Siudeja — June 13, 2012 @ 12:53 am • Yah I was thinking about this as well… you would get sort of a spiral staircase, no? But I think there might be some issue with defining the Brownian motion on this spiral staircase as it might flip out near the origin (i.e. it will have some crazy winding number). Although, with probability 1, the Brownian motion won’t actually hit the origin so maybe it isn’t a big deal. On page 472 of the paper [BT2005] Timo Betcke, Lloyd N. Trefethen, Reviving the Method of Particular Solutions, they mention that the eigenfunction for the wedge cannot be extended analytically unless an integer multiple of the angle is $2\pi$. Comment by Chris Evans — June 13, 2012 @ 3:01 am • Actually maybe a proof can be furnished using a synchronous coupling! Consider a triangle with 1 side Dirichlet and 2 sides Neumann.
Orient it so that it lies in the right half plane $x\geq 0$ and has its Dirichlet side along the y-axis (so that the point with the largest $x$-coordinate in the triangle is the opposite corner, where we claim the hot spot is). Now consider two points $x$ and $y$ in the plane (I will abuse notation and call the points $x=(x_1,x_2)$ and $y = (y_1,y_2)$). Now consider a synchronously-coupled reflected Brownian motion $(X_t,Y_t)$ started from these two points (synchronously coupled means that they are driven by the same Brownian motion but they might of course reflect at different times). If $y$ lies to the right of $x$, it ought to be the case that $Y_t$ always lies to the right of $X_t$; consequently $X_t$ is more likely to hit the Dirichlet boundary than $Y_t$. It therefore would follow that the place to start to take the longest to hit the boundary is the point furthest to the right, i.e. the opposite corner as predicted. Notes: -The issues with coupled Brownian motions dancing around each other should be avoided here: in the acute triangle with all three sides Neumann this was an issue, but here there is only one corner to play around/bounce off of. -This is really stating the following monotonicity theorem: If $u_0\equiv 1$ then $u(x,t)$ is monotonically increasing from left to right for all $t>0$. There might be a more direct analytic proof. -Seeing as this was a very simple argument it is likely to be already known (or I could be wrong about the coupling preserving the orientation). Comment by letmeitellyou — June 13, 2012 @ 2:53 am • Unfortunately, I think the synchronous coupling can flip the orientation of $X_t$ and $Y_t$. Suppose for instance that $X_t$ and $Y_t$ are oriented vertically, and $Y_t$ hits one of the Neumann sides oriented diagonally. Then $Y_t$ can bounce in such a way that it ends up to the left of $X_t$. But perhaps some variant of this coupling trick should work… Comment by Terence Tao — June 13, 2012 @ 4:13 am • Ah, good point!
The points $x$ and $y$ would have to start such that the angle between them is smaller than the angle of the opposite side… this is actually a condition in the Banuelos-Burdzy paper as well (the “left-right” formalism is just a simpler way to discuss it). But I don’t think this will be an obstacle. I will work on writing this up more clearly. Edit: While talking in terms of all these angles is messy, the succinct explanation is: As long as the points $x$ and $y$ are such that the line segment connecting them is nearly horizontal (and it’s a wide range that is allowed based on the angles… basically anything from the angle you get if you ram them against the bottom line to the angle you get when you ram them against the top line), then what I wrote should hold. And that is sufficient to prove the lemma. Comment by Chris Evans — June 13, 2012 @ 4:28 am • Ok, here is a writeup which explains things more precisely http://www.math.missouri.edu/~evanslc/Polymath/MixedTriangle In there I only give an argument for the case that the angle opposite the Dirichlet side is acute… but I think the obtuse case should be true as well. It all boils down to whether the following probabilistic statement is true: Consider the infinite wedge $\{(r,\theta)\vert 0\leq\theta\leq\gamma < \pi\}$. Let $(X_t,Y_t)$ be a synchronously coupled Brownian motion starting from points $x$ and $y$ such that (thought of as elements of the complex plane), $0\leq \arg(y-x) \leq \gamma$. Then $0\leq \arg(Y_t-X_t) \leq \gamma$ for all $t>0$. Comment by Chris Evans — June 13, 2012 @ 5:48 am • I think this does indeed work for acute angles, so this should settle the super-equilateral isosceles case, but I’ll try to recheck the details tomorrow. I think I can also recast the coupling arguments as a PDE argument based on the maximum principle – this doesn’t add anything as far as the results are concerned, but may be a useful alternate way of thinking about these sorts of arguments.
(I come from a PDE background rather than a probability one, so I am perhaps biased in this regard.) This type of argument may also settle the non-isosceles case in regimes in which we can show that the nodal line is reasonably flat, though I don’t know how one would actually try to show that… Comment by Terence Tao — June 13, 2012 @ 6:44 am • OK, I wrote up both a sketch of the Brownian motion argument and the maximum principle argument on the wiki at http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture#Isosceles_triangles So I think we can now move super-equilateral isosceles triangles (the lines HD, HJ, HK in the above diagram) into the “done” column, thus finishing off all the isosceles cases. (Actually the argument also works for the lowest anti-symmetric mode of the sub-equilateral triangles as well, though this is not directly relevant for the hot spots conjecture.) So now we have to start braving the cases in which there is no axis of symmetry to help us… Comment by Terence Tao — June 13, 2012 @ 4:52 pm • I’m a bit confused about the PDE proof of Corollary 4. In the case where $x$ lies on the interior of $DB$, it is correct that $\nabla u$ is parallel to $DB$. However, we do not know its direction. If it has the same direction as the vector $DB$ then we are OK. But if its direction is $BD$ then it does not lie in the sector $S$. Comment by Hung Tran — June 13, 2012 @ 8:08 pm • By hypothesis, at this point $\nabla u$ lies on the boundary of the region $S_{\varepsilon(t+1)}$ (in particular, it is not in S). The only point on this boundary that is parallel to DB is the point which is a distance $\varepsilon(t+1)$ from the origin in the BD direction. (I should draw a picture to illustrate this but I was too lazy to do so for the wiki.) Comment by Terence Tao — June 13, 2012 @ 8:18 pm • Thanks for your clarification. I got that part. I’m still confused though.
In the proof, you basically performed the reflection arguments to consider the cases when $x$ lies on the interiors of $DB,\ AB$. By doing so, $x$ turns out to be an interior point of the domain and then it is pretty straightforward to deduce the result from the classical maximum principle. My concern is about the reflection arguments. Do you need something like $\dfrac{\partial^2 u}{\partial n^2}(x)=0$ in order to do so? Comment by Hung Tran — June 14, 2012 @ 5:15 am • No, to reflect around a flat edge one only needs the Neumann condition $\partial u / \partial n = 0$. The second normal derivative $\partial^2 u / \partial n^2$ will reflect in an even fashion (rather than an odd fashion) around the edge, and so does not need to vanish; it only needs to be continuous in order to obtain a C^2 reflection. Once one has a C^2 reflection, one solves the eigenfunction equation in the classical sense in the unfolded domain, and elliptic regularity in that domain upgrades the regularity to $C^\infty$ (at least as long as one stays away from the corners). Comment by Terence Tao — June 14, 2012 @ 2:58 pm • Oh, I meant at the specific point $x$. Your argument should be OK for eigenfunctions. But here we are dealing with the heat equation, right? In general, I think it would be really interesting to consider the heat equation $u_t - \Delta u =0$ in ${\rm ABC} \times (0,\infty)$ with the given initial data $u_0$ chosen in such a way that it is increasing along some specific directions. Let’s say $(u_0)_\xi \ge 0$ for some unit vector $\xi$. If we can use the maximum principle to show that $u_\xi \ge 0$ by essentially killing the boundary cases then we are done.
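[Editor's note: the mechanism proposed above, that a directional monotonicity $(u_0)_\xi \ge 0$ compatible with the Neumann condition is preserved by the heat flow, is easy to watch numerically in the simplest setting of one space dimension, where $v = u_x$ satisfies the heat equation with zero Dirichlet data at the endpoints. The Python sketch below is editorial, not from the discussion; the initial profile is a hypothetical choice satisfying $u_0'(0)=u_0'(1)=0$.]

```python
def heat_neumann_monotone(n=101, t_final=0.05):
    """Explicit finite differences for u_t = u_xx on [0,1] with Neumann ends,
    starting from the increasing profile u0(x) = x^2 (3 - 2x), which satisfies
    the compatibility condition u0'(0) = u0'(1) = 0.  Returns the smallest
    forward difference u[i+1]-u[i] seen over the whole evolution; by the
    maximum-principle argument it should stay nonnegative (up to rounding)."""
    h = 1.0 / (n - 1)
    dt = 0.4 * h * h                       # ratio dt/h^2 = 0.4 < 1/2: stable, monotone
    u = [(i * h)**2 * (3 - 2 * i * h) for i in range(n)]
    worst = min(u[i + 1] - u[i] for i in range(n - 1))
    for _ in range(int(t_final / dt)):
        un = u[:]
        for i in range(1, n - 1):
            un[i] = u[i] + dt * (u[i - 1] - 2 * u[i] + u[i + 1]) / (h * h)
        un[0], un[-1] = un[1], un[-2]      # homogeneous Neumann via mirror points
        u = un
        worst = min(worst, min(u[i + 1] - u[i] for i in range(n - 1)))
    return worst

worst = heat_neumann_monotone()
```

Discretely, the differences $v_i = u_{i+1}-u_i$ are updated by a nonnegative combination of neighboring differences (this is the discrete maximum principle), so `worst` stays nonnegative.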
Comment by Hung Tran — June 14, 2012 @ 4:13 pm • Ah, fair enough, but even when reflecting a solution to the heat equation rather than an eigenfunction, one still gets a classical (C^2 in space, C^1 in time) solution to the heat equation on reflection as long as the Neumann boundary condition is satisfied (and providing that the original solution was already C^2 up to the boundary, which I believe can be established rigorously in the acute triangle case), and then by applying parabolic regularity instead of elliptic regularity one can ensure that this is a smooth solution. (Alternatively, one can unfold the triangle around the edge of interest at time zero, solve the heat equation with Neumann data on the unfolded kite region, and then use the uniqueness theory of the heat equation to argue that this solution is necessarily symmetric around the edge of unfolding, and that the restriction to the original triangle is the original solution to the heat equation.) Comment by Terence Tao — June 14, 2012 @ 4:26 pm • Oh, thank you. Probably now I see my source of confusion. Probably one needs $\dfrac{\partial u_0}{\partial n}=0$ on ${\rm AB, \ BC, \ CA}$ in order to get higher regularity when reflecting. I was confused about this part. So why don’t we proceed by considering the heat equation with Neumann boundary condition in ${\rm ABC} \times (0,\infty)$ with given initial data $u_0$ satisfying something like $\dfrac{\partial u_0}{\partial n}=0$ on ${\rm AB, \ BC, \ CA}$ and $(u_0)_\xi \ge 0$ for some unit direction $\xi$. If we then let $v=u_\xi$ then $v$ also solves the heat equation. We want to show that $v \ge 0$ or so by using the maximum principle. As we know, $\max_{{\rm ABC} \times [0,T]} v = \max \{ \max_{\rm{ABC}} (u_0)_\xi, \max_{\rm{AB,BC,CA} \times (0,T)} v\}$. And since one can omit the boundary cases by performing the reflection method, it should be OK. Comment by Hung Tran — June 14, 2012 @ 6:24 pm • I have done some computations to support my argument above.
The point now is to build a function $u_0: {\rm ABC} \to \mathbb{R}$ so that $\dfrac{\partial u_0}{\partial n}=0$ on the edges and $(u_0)_\xi \ge 0$ for some unit vector $\xi$. Then $u$ inherits this monotonicity property of $u_0$, namely $u_\xi \ge 0$ in ${\rm ABC} \times (0,\infty)$. Here is the first computation in case ${\rm ABC}$ is an acute isosceles triangle like in Corollary 4. Let’s assume $A=(0,1),\ B=(-a,0), \ C=(a,0)$ for some $0 < a < 1$. Then we can build $u_0$ which is antisymmetric around ${\rm OA}$ as $u_0(x,y)=\sin(\frac{\pi}{2a}x) \cos(\frac{\pi}{2}y)^{1/a^2}$. It turns out that $(u_0)_x,\ \nabla u_0 \cdot (\frac{1}{a},1) \ge 0$ for $x \ge 0$. This is exactly the needed function for Corollary 4. I will try to build such a $u_0$ for a general acute triangle to see if the shape of ${\rm ABC}$ has anything to do with the direction $\xi$. It may then help us to see where the min and the max of the second eigenfunction are located. Comment by Hung Tran — June 15, 2012 @ 4:33 am • Great! Actually, half of my graduate thesis was on reflected Brownian motion and the other half was on maximum principles for systems… so it is cool to see that they are related. And on a more practical note, rigorously arguing the geometric properties of coupled Brownian motion can be a bit of a mess (involving Ito’s formula) so if it can be avoided by appealing to the maximum principle, so much the better. Comment by Chris Evans — June 13, 2012 @ 9:51 pm • After a night’s rest, I think the statement I made above about “the infinite wedge preserving the angle” only holds true in the acute case. For the obtuse case, it isn’t too hard to see how the angle won’t always be preserved. It still seems it should be the case that the maximum of the first eigenfunction for the mixed triangle should be at the vertex opposite the Dirichlet side… but at this point I suppose we only need to know the acute case.
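[Editor's note: Hung Tran's candidate $u_0$ above can be checked numerically. The sketch below, with the hypothetical choice $a = 0.6$, verifies by central differences that the normal derivative of $u_0$ vanishes on the base $BC$ and on the edge $AC$ (by antisymmetry it then also vanishes on $AB$), and that $u_0$ is increasing in $x$ at sample interior points. Editorial, not from the discussion.]

```python
import math

A = 0.6                                    # base half-width; the triangle is acute for 0 < a < 1

def u0(x, y, a=A):
    """Hung Tran's candidate: u0 = sin(pi x / 2a) * cos(pi y / 2)^(1/a^2)."""
    return math.sin(math.pi * x / (2 * a)) * math.cos(math.pi * y / 2) ** (1.0 / a**2)

def ddir(f, x, y, dx, dy, h=1e-6):
    """Central-difference directional derivative of f at (x, y) along (dx, dy)."""
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    return (f(x + h * dx, y + h * dy) - f(x - h * dx, y - h * dy)) / (2 * h)

# Normal derivative on the base BC (y = 0, outward normal (0, -1)):
base_err = max(abs(ddir(u0, x, 0.0, 0.0, -1.0)) for x in (-0.4, -0.1, 0.2, 0.5))

# Normal derivative on the edge AC from (a, 0) to (0, 1), outward normal (1, a):
edge_err = max(abs(ddir(u0, A * (1 - y), y, 1.0, A)) for y in (0.2, 0.5, 0.8))

# Monotonicity in x at a few interior sample points with x >= 0:
mono_ok = all(ddir(u0, x, y, 1.0, 0.0) >= -1e-9
              for x in (0.0, 0.1, 0.3) for y in (0.1, 0.4, 0.7) if x <= A * (1 - y))
```

On $AC$ one can also check by hand that $u_x + a\,u_y$ cancels identically, so the small `edge_err` is pure discretization noise.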
Edit: Actually I think the obtuse case might follow from the following paper by Mihai Pascu which uses an exotic “scaling coupling” to prove Hot-Spots results for $C^{1,\alpha}$ convex domains which are symmetric about one axis. http://www.ams.org/journals/tran/2002-354-11/S0002-9947-02-03020-9/home.html Reflecting the triangle across its Dirichlet side would give such a domain provided that we could “smooth out the corners” without affecting the eigenfunction too much. Comment by Chris Evans — June 13, 2012 @ 9:48 pm 5. Chris, I am not sure this is pertinent to your argument. But the regularity of the eigenfunctions for the mixed Dirichlet-Neumann case must degenerate, as the angle between the Dirichlet and Neumann sectors becomes near pi. To see this, think about a sector of a circle with Dirichlet data on one ray and the curvilinear arc, and Neumann on the remaining ray. The solution (by separation of variables) is again in terms of Bessel functions, but this time with fractional order. As long as the angle of the sector is less than pi, a reflection about the Neumann side would give you an eigenfunction problem with Dirichlet data, and you pick out the one with the right symmetry. However, as the interior angle approached pi, after reflection the doubled sector gets closer to the circle with a slit. The resulting eigenfunction is not smooth. This argument suggests that if, after reflections, you have a mixed boundary eigenproblem where the Dirichlet-Neumann segments are meeting at nearly flat angles, then there may be issues.
Comment by Nilima Nigam — June 13, 2012 @ 3:08 pm • Well, for our application the Dirichlet-Neumann region of interest is a folded super-equilateral triangle, so one of the angles between Dirichlet and Neumann is a right angle (thus becomes not an angle at all when unfolded) and the other is between 30 and 45 degrees, so the regularity looks pretty good ($C^\infty$ at the right angle, $C^{2,\varepsilon}$ at the less-than-45-degree angle, and $C^{3,\varepsilon}$ at the remaining angle between the two Neumann edges, which is less than 60 degrees). (From Bessel function expansion in a Neumann triangle we know that eigenfunctions have basically $\pi/\alpha$ degrees of regularity at an angle of size $\alpha$, and are $C^\infty$ when $\pi/\alpha$ is an integer. I think the same should also be true for solutions to the heat equation with reasonable initial data, though I didn’t check this properly.) But, yes, things are probably more delicate once the Dirichlet-Neumann angles get obtuse. In the case when the Dirichlet boundary comes from a nodal line from a Neumann eigenfunction, the Dirichlet boundary should hit the Neumann boundary at right angles (unless it is in a corner or is somehow degenerate), so this should not be a major difficulty. Comment by Terence Tao — June 13, 2012 @ 3:51 pm • Hmm… it seems that we have shown that for a triangle with mixed boundary conditions (one side Dirichlet, two sides Neumann), the extremum of the first eigenfunction lies at the vertex opposite the Dirichlet side, provided that angle is acute. Such a triangle could have the angle between the Dirichlet side and one of the Neumann sides arbitrarily close to $\pi$… but things should still be ok (provided what I wrote in the previous paragraph is true). In your example, you have two sides which are Dirichlet and only one side which is Neumann… maybe that is what makes the difference?
Comment by Chris Evans — June 13, 2012 @ 9:56 pm • Chris, I tried the case where there were two Neumann sides and one Dirichlet. Same problem – but my argument is for a mixed problem where the junction angle is nearing pi. As Terry points out, this concern may not arise for the argument you are trying. Comment by Nilima Nigam — June 14, 2012 @ 3:55 am 6. We’re exploring the parameter space corresponding to the region BDO in the triangle above. We’re taking a set of discrete points in this parameter set, and verifying the conjecture as well as computing the spectral gap for the corresponding domain. To debug, we’re taking a coarse spacing of pi/10 in each direction, but we will refine this. We’re using piecewise quadratic polynomials in an H^1 conforming finite element method, with Arnoldi iterations with shift to get the smaller eigenvalues. I have a quick question – is there some target spacing you’d like? This will influence some memory management issues. Comment by Nilima Nigam — June 13, 2012 @ 10:01 pm • Hmm, good question. As a test case for a back-of-the-envelope calculation, let’s look at the range of stability for the isosceles right-angled (i.e. 45-45-90) triangle (point D in the diagram), say with vertices (0,0), (1,0), (1,1) for concreteness. This is half of the unit square and so the Neumann eigenvalues can in fact be read off quite easily by Fourier series. The second eigenvalue is $\pi^2$, with eigenfunction $\cos \pi x + \cos \pi y$, and then there is a third eigenvalue at $2\pi^2$ with eigenfunction $\cos \pi(x+y) + \cos \pi(x-y)$. So, by Comment 2, the second eigenvalue remains simple for all linear images TD of this triangle with condition number less than $\sqrt{2}$.
To convert the 45-45-90 triangle into another right-angled $(\pi/2-\alpha, \alpha,\pi/2)$ triangle for some $0 < \alpha < \pi/2$ requires a transformation of condition number $\cot \alpha$, which lets one obtain simplicity of eigenvalues for such triangles whenever $\alpha > 0.615$, or about 35 degrees – enough to get about two thirds of the way from point D on the diagram to point C. This extremely back-of-the-envelope calculation suggests that increments of about 10 degrees (or about $\pi/20$) at a time might be enough to get a good resolution. But things may get worse as one approaches the equilateral triangle (point H) or the degenerate triangle (points B, F, O). By permutation symmetry it should be enough to explore the triangle BDH instead of BDO. The Laugesen-Siudeja paper at http://arxiv.org/abs/0907.1552 has some figures on eigenvalues in the isosceles case (Fig 2 and Fig 3) that could be used for comparison. Comment by Terence Tao — June 13, 2012 @ 10:37 pm • thanks, this is helpful. I’ll set this running with pi/50, to be on the safe side. This will take a few hours to run. certainly the numerics suggest that the manner in which I approach the point for the equilateral triangle impacts the spectral gap. however, the resolution is not sharp enough to make this formal. Comment by Nilima Nigam — June 13, 2012 @ 11:17 pm • A detail which will not affect any analytical attack, but which should be noted for anyone else doing numerics on this. As we search through parameter space, we look at what happens with a triangle with given edges – but we should probably fix one side, so we can compare eigenvalues. This is important since what we also want to examine is the spectral gap. Joe and I’ve fixed one side of the acute triangle to have length 1. As we range through parameter space, the other sides, and the area of the triangles, change. We are recording this information.
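[Editor's note: the threshold $\alpha > 0.615$ quoted above can be reproduced in a few lines. Realizing the deformation as $T = \mathrm{diag}(1, \tan\alpha)$ on the reference triangle with vertices $(0,0)$, $(1,0)$, $(1,1)$ keeps the right angle and shrinks the 45-degree angle to $\alpha$; for $\alpha < 45^\circ$ the condition number of $T$ is $\cot\alpha$, which equals $\sqrt{2}$ exactly when $\alpha = \arctan(1/\sqrt{2})$. A small editorial check:]

```python
import math

def cond_diag(t):
    """Condition number of T = diag(1, t): ratio of largest to smallest singular value."""
    s = sorted([1.0, abs(t)])
    return s[1] / s[0]

# Solve cot(alpha) = sqrt(2), i.e. alpha = arctan(1/sqrt(2)):
alpha = math.atan(1.0 / math.sqrt(2.0))

# The map diag(1, tan(alpha)) then has condition number exactly sqrt(2):
kappa = cond_diag(math.tan(alpha))

alpha_deg = math.degrees(alpha)   # roughly 35.26 degrees, matching "about 35 degrees"
```

This recovers $\alpha \approx 0.6155$ radians, consistent with the "$\alpha > 0.615$, or about 35 degrees" estimate in the comment.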
May I recommend that if anyone else is doing numerics on this problem, they also make available the area of the triangles used (or at least one side) for each choice of angles? This way, we’ll be able to compare eigenvalues on triangles with the same angles. Comment by Nilima Nigam — June 14, 2012 @ 4:07 am 7. I think I can show that the second eigenvalue is simple. It involves a few not-overly complicated cases of comparisons between a given triangle and a few known cases (through linear mappings). There seems to be a way to do all of this using one very complicated comparison (with 4-5 reference triangles) and an extremely ugly upper bound for acute triangles (many pages to write it down), but that is probably not worth pursuing. I will try to write something tonight, at least one simple case. It appears that even around equilateral everything should be OK. Comment by Bartlomiej Siudeja — June 13, 2012 @ 10:18 pm • Here is a very rough write-up of just one case containing equilateral, right isosceles, and some non-isosceles cases. I am sure this case can be optimized to include larger area. Another 3-4 cases and all triangles should be covered. I will try to optimize the approach before I post all the cases. Near the end of the argument there is an ugly inequality involving the triangle parametrization. It should reduce to a polynomial inequality, so in the worst case we can evaluate a few (or a bit more) points and find rough gradient estimates. http://pages.uoregon.edu/siudeja/simple.pdf Comment by Bartlomiej Siudeja — June 14, 2012 @ 2:25 am • I was playing with reference triangles a bit more, and it seems that one case with 3 reference triangles (near equilateral) and another with just 2 (near degenerate cases) should be enough to cover all acute triangles. Details to follow. Comment by Bartlomiej Siudeja — June 14, 2012 @ 3:06 pm • Great news!
In addition to resolving one part of the hot spots conjecture, I think having a rigorous lower bound on the spectral gap $\lambda_3-\lambda_2$ will also be useful for perturbation arguments if we are to try to verify things by rigorous numerics. Comment by Terence Tao — June 15, 2012 @ 1:26 am • This thread is getting somewhat large! I’d posted some of this information below, but this may be useful. A plot of the spectral gap for the approximated eigenvalues, $\lambda_3-\lambda_2$ multiplied by the area of the triangle $\Omega$, as we range through parameter space is here: Comment by Nilima Nigam — June 15, 2012 @ 1:48 am • The simplest proof that the eigenvalue is simple will have almost no gap bound. However, if one wants to get something for a specific triangle, one can use very complicated comparisons and upper bounds without much trouble. In particular the upper bound can include 3 or more known eigenfunctions. Except that even with just 2 eigenfunctions there is no way to write down the result from the Rayleigh quotient for the test function on a general triangle without using many pages. This is obviously not a problem for a specific triangle. The Mathematica package I mentioned in 12 was written specifically for those really ugly test functions. Comment by Bartlomiej Siudeja — June 15, 2012 @ 2:26 am 8. In comment thread 4, Terry suggested looking at the nodal line for more arbitrary triangles, which would then divide the triangle into two mixed domains. Running computer simulations (but only for the graphs $G_n$ as I am not set up to do more accurate numerical approximation), it seems that the nodal line is always near the sharpest corner. Perhaps it is even close to an arc? So then that mixed-boundary sub-domain might be handled by arguments similar to those in comment thread 4.
But I am not sure what we would do on the other sub-domain as it would have a strange geometry… A related question: Rather than divide into sub-domains by the nodal line, is it possible to divide with respect to another level curve, say $u = 2$? This would lead to the mixed boundary condition with Neumann boundary on some sides and “$u=2$” on some sides… but presumably the behavior of the heat flow on that region is the same as the mixed-Dirichlet-Neumann boundary heat flow after you subtract off the constant function $2$. Comment by Chris Evans — June 13, 2012 @ 10:30 pm • It may be easier to show that the extremum occurs at the sharpest corner than it is to figure out what happens to the other extremum (this was certainly my experience with the thin triangle case). See for instance Corollary 1(ii) of the Atar-Burdzy paper http://webee.technion.ac.il/people/atar/lip.pdf which establishes the extremising nature of the pointy corner for a class of domains that includes for instance parallelograms. Once one considers level sets of eigenfunctions at heights other than 0, I think a lot less is known. For instance, the Courant nodal theorem tells us that the nodal line $\{u=0\}$ of a second eigenfunction is a smooth curve that bisects the domain into two regions, but this is probably false once one works with other level sets (though, numerically, it seems to be valid for acute triangles). Comment by Terence Tao — June 13, 2012 @ 10:45 pm • There is a paper of Burdzy at http://arxiv.org/pdf/math/0203017.pdf devoted to the study of the nodal line in regions such as triangles, with the main tool being mirror couplings; I haven’t digested it, but it does seem quite relevant to this strategy. Comment by Terence Tao — June 14, 2012 @ 4:51 pm 9. I’ve been looking at the stability of eigenvalues/eigenfunctions with respect to perturbations, and it seems that the first Hadamard variation formula is the way to go. A little bit of setup. 
Following the notation on the wiki, we perturb off of a “reference” triangle $\hat \Omega$ to a nearby triangle $B \hat \Omega$, where B is a linear transformation close to the identity. The second eigenfunction on $B\hat \Omega$ can be pulled back to a mean zero function on $\hat \Omega$ which minimizes the modified Rayleigh quotient $\int_{\hat \Omega} \nabla^T u M \nabla u / \int_{\hat \Omega} u^2$ amongst mean zero functions, where $M = B^{-1} (B^{-1})^T$ is a symmetric perturbation of the identity matrix; this function then obeys the modified eigenvalue equation $-\nabla \cdot M \nabla u = \lambda_2 u$ with boundary condition $n \cdot M \nabla u = 0$. Now view B = B(t) as deforming smoothly in time with B(0)=I, then M also deforms smoothly in time with M(0)=I. As long as the second eigenvalue of the reference triangle is simple, I believe one can show that $\lambda$ and $u$ will also vary smoothly in time (after normalizing $u$ to have norm one). One can then solve for the derivatives $\dot \lambda_2(0), \dot u(0)$ at time zero by differentiating the eigenvalue equation and the boundary condition. What one gets is the first variation formulae $\dot \lambda_2(0) = \int_{\hat \Omega} \nabla^T u(0) \dot M(0) \nabla u(0)$ and $(-\Delta - \lambda_2(0)) \dot u(0) = \pi( \nabla \cdot \dot M(0) \nabla u(0) )$ subject to the inhomogeneous Neumann boundary condition $n \cdot \nabla \dot u(0) = - n \cdot \dot M(0) \nabla u(0)$ where $\pi$ is the projection to the orthogonal complement of $u(0)$ (and to $1$) and $\dot u(0)$ is also constrained to this orthogonal complement. I think that by using C^2 bounds on the reference eigenfunction $u(0)$, one should then be able to obtain $C^2$ bounds on the derivative $\dot u(0)$, though there is of course a deterioration if the spectral gap $\lambda_3(0)-\lambda_2(0)$ goes to zero. 
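[Editor's note: the first variation formula $\dot \lambda_2(0) = \int_{\hat \Omega} \nabla^T u(0) \dot M(0) \nabla u(0)$ above can be tested on a family with explicit spectrum. Take the unit square stretched to $[0,1]\times[0,b]$ by $B(t) = \mathrm{diag}(1,b(t))$ with $b>1$ and $\dot b = 1$, so that $\lambda_2(b) = \pi^2/b^2$, the pulled-back normalized second eigenfunction is $u = \sqrt{2}\cos(\pi y)$, and $M = B^{-1}(B^{-1})^T = \mathrm{diag}(1, 1/b^2)$, hence $\dot M = \mathrm{diag}(0, -2/b^3)$. A small editorial Python check, not part of the discussion:]

```python
import math

def lam2(b):
    """Second Neumann eigenvalue of the rectangle [0,1] x [0,b] for b > 1."""
    return math.pi**2 / b**2

b = 1.5

# Midpoint rule for int over the unit square of (u_y)^2 = 2 pi^2 sin^2(pi y)
# (the x-integration is trivial since u is independent of x):
n = 2000
integral = sum(2 * math.pi**2 * math.sin(math.pi * (j + 0.5) / n)**2
               for j in range(n)) / n

# First variation prediction: lam2_dot = Mdot_22 * int (u_y)^2, with Mdot_22 = -2/b^3:
predicted = -2.0 / b**3 * integral

# Direct central-difference derivative of the explicit eigenvalue lam2(b):
eps = 1e-6
direct = (lam2(b + eps) - lam2(b - eps)) / (2 * eps)
```

Both sides come out to $-2\pi^2/b^3$, as the formula predicts; the determinant factors of $B$ cancel in the Rayleigh quotient, consistent with the normalization used above.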
But this stability in C^2 norm should be enough to show, for instance, that if one has a reference triangle in which the second eigenvalue is simple and the second eigenfunction only has extrema at the vertices, then any sufficiently close perturbation of this triangle will also have this property. (Note from the Bessel function expansion that if an extremum occurs at an acute vertex, then the Hessian is definite at that vertex, and so for any small C^2 perturbation of that eigenfunction, the vertex will still be the local extremum.) Thus, for instance, we should now be able to get the hot spots conjecture in some open neighborhood of the open intervals BD and DH (and similarly for permutations). Furthermore it should be possible to quantify the size of this neighborhood in terms of the spectral gap. This argument doesn’t quite work for perturbations of the equilateral triangle H due to the repeated eigenvalue, but I think some modification of it will. EDIT: I think the equilateral case is going to be OK too. The variation formulae will control the portion of $\dot u(0)$ in the complement of the second eigenspace nicely, and so one can write the second eigenfunction of a perturbed equilateral triangle (after changing coordinates back to the reference triangle) as the sum of something coming from the second eigenspace of the original equilateral triangle, plus something small in C^2 norm. I think there is enough “concavity” in the second eigenfunctions of the original equilateral triangle that one can then ensure that for any sufficiently small perturbation of that triangle, the second eigenfunction only has extrema at the vertices. Will try to write up details on the wiki later.
Comment by Terence Tao — June 14, 2012 @ 4:21 pm • Using raw numerics (the finer-resolution calculation is not yet done), here is what I observe: one can perturb from the equilateral triangle in a symmetric way, i.e., by changing one angle by $-\epsilon$ and the others by $\epsilon/2$. Or one can perturb each angle differently. The spectral gap changes rather differently, depending on how one perturbs. I should revisit these calculations by scaling by the Jacobian of the mapping B of the domain in each case (following the Courant spectral gap result). Comment by Nilima Nigam — June 14, 2012 @ 6:03 pm • Here are some graphics, to explore the parameter region (BDH) above. To enable visualization, I’m plotting data as functions of $(\alpha,\beta)$. I’m taking a rectangular grid oriented with the sides BD and DH, with 25 steps in each direction. So there are $(25)^2$ grid points. Each parameter $(\alpha,\beta,\gamma)$ yields a triangle $\Omega$. I’m fixing one side to be of unit length. For details, please see the wiki. For each triangle, the second and third Neumann eigenvalues (the first and second non-zero Neumann eigenvalues) are computed. I also kept track of where max|u| occurs, where $u$ is the second eigenfunction. This is because numerically I can get either $u$ or $-u$. A plot of the 2nd Neumann eigenvalue as we range through parameter space is here: http://www.math.sfu.ca/~nigam/polymath-figures/Lambda2.jpg A plot of the 3rd Neumann eigenvalue as we range through parameter space is here: http://www.math.sfu.ca/~nigam/polymath-figures/Lambda3.jpg A plot of the spectral gap, $\lambda_3-\lambda_2$ multiplied by the area of the triangle $\Omega$, as we range through parameter space is here: One sees that the eigenvalues vary smoothly in parameter space, and that the spectral gap is largest for acute triangles without particular symmetries. For each triangle, I also kept track of the physical location of max|u|.
If it went to the corner (0,0), I allocated a value of 1; if it went to (1,0) I allocated a value of 2, and if it went to the third corner, I allocated 3. If the maximum was not reported to be at a corner, I put a value of 0. Plots of these values show the result. Note that we obtain some values of 0 inside parameter space. Please DON’T interpret this to mean the conjecture fails. Rather, this is a signal that the eigenfunction is likely flattening out near a corner, and that the numerical values at points near the corner are very close. I’m running these calculations with finer tolerances now, but it will take some hours. Comment by Nilima Nigam — June 14, 2012 @ 8:49 pm • Hi, I think there may be something to do using analytic perturbation theory. The first remark is that, using a linear diffeomorphism, we can pull back the Dirichlet energy form ($\int_T \left | \nabla u \right |^2 dxdy$) on a moving triangle $T$ to a quadratic form on a fixed triangle $T_0$ that can be written $\int_{T_0} {}^t\nabla u A \nabla u dxdy$ for some symmetric matrix $A$, so that studying the Neumann Laplacian on $T$ amounts to studying the latter quadratic form restricted to $H^1(T_0)$ with respect to the standard Euclidean scalar product. If we now let $A$ depend analytically on a real parameter $t$ then we get a real-analytic family in the sense of Kato-Rellich, so that the eigenvalues (and eigenvectors) are organized into real-analytic branches. Let $(E(t), u(t))$ be such an analytic eigenbranch. We define the function $f$ by $f(t)= \frac{ \| u(t)\|_{ \infty,\partial T_0 } }{ \| u(t)\|_{\infty,T_0} }$ (observe that now everything is defined on $T_0$) and suppose we can prove that this function is also analytic (that is, for any choice of analytic perturbation and any corresponding eigenbranch). Then I think we can prove the following statement: “For any triangle $T$ there is a Neumann eigenfunction whose maximum is on the boundary”. The proof would be as follows.
Start from your triangle $T$ and move one of its vertices along the corresponding altitude. This defines an analytic perturbation, and for any $t$ small enough the obtained triangle is obtuse. For $t$ very small the second eigenbranch is simple and satisfies the hot spots conjecture, so that if we follow this particular branch, the corresponding $f$ is identically $1$ for $t$ small enough, and since it is analytic it is always $1.$ The claimed eigenfunction is the one that corresponds to this eigenbranch (because of crossings, it need not be the second one). If we want to prove the real hot spots conjecture we can try to argue in the opposite direction: start from the second eigenvalue and follow the same perturbation. We now have to prove the following things: 1- For $t$ small the branch becomes simple, so that it corresponds to the $N$-th eigenvalue, 2- For any $N$ and any $t$ small enough, the $N$-th eigenfunction has its maximum on the boundary. Of course this line of reasoning relies heavily on the analyticity of $f$, which I haven’t been able to establish yet (observe that $t \mapsto u(t)$ is analytic with values in $H^1$, which is not good enough for $C^0$ bounds). Recently I have been thinking that maybe we could instead try to prove that $f_r$ is analytic, where the subscript $r$ means that we have removed a ball of that radius near each vertex. It should be easier to prove that this one is analytic (but then we need to prove something about the maximum of $u_2$ for any obtuse triangle when we remove a ball near each vertex). I finish by pointing to two references on multiplicities in the spectrum of triangles. – Hillairet-Judge, Simplicity and asymptotic separation of variables, CMP, 2011, 302(2) (Erratum, CMP, 2012, 311(3)) – Berry-Wilkinson, Diabolical points in the spectra of triangles, Proc. Roy. Soc.
London, 1984, 392(1802), pp. 15-43 Comment by Luc Hillairet — June 15, 2012 @ 11:57 am • [I was editing this comment and I accidentally transferred ownership of it to myself, which is why my icon appears here. Sorry, please ignore the icon; this is Nilima’s post. – T.] An analytic perturbation argument from known cases would certainly be great! I thought about a similar argument for the thin triangle case (http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture, under ‘thin not-quite-sectors’). But I was thinking about perturbing from a sector to the triangle, and you’re thinking about perturbing from one triangle to another. Let’s see if I follow your argument. Following the notation in (http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture, under ‘reformulation on a reference domain’), one can replace the reference triangle by any other. One then shows analyticity of the eigenvalues with respect to perturbations in the mapping B, and shows the domain of analyticity is large enough to cover all acute triangles. Is this correct? Comment by Nilima Nigam — June 15, 2012 @ 2:59 pm • I think it may be difficult to show analyticity of a sup norm; note that even the max of two analytic functions $\max(f(t),g(t))$ is not analytic when the two functions cross (e.g. $|t| = \max( t, -t)$). The enemy here is that as one varies t, a new local extremum gets created somewhere in the interior of the triangle, and eventually grows to the point where it overtakes the established extremum on the vertices, creating a non-analytic singularity in the L^infty norm. However, I think one does have analyticity as long as the extrema are unique (up to symmetry, in the isosceles case) and non-degenerate (i.e. their Hessian is definite), and the eigenvalue is simple.
This is for instance the case for the non-equilateral acute isosceles and right-angled triangles, where we know that the eigenvalues are simple and the extrema only occur at the vertices of the longest side, and a Bessel expansion at a (necessarily acute) extremal vertex shows that any extremum is non-degenerate (it looks like a non-zero scalar multiple of the 0th Bessel function $J_0(\sqrt{\lambda} r)$, plus lower order terms which are $o(r^2)$ as $r \to 0$). Certainly in this setting, the work of Banuelos and Pang ( http://eudml.org/doc/130789 ) applies, and small perturbations of the triangle give small perturbations of the eigenfunction in L^infty norm at least. This (together with uniform C^2 bounds for eigenfunctions in a compact family of acute triangles, which is sketched on the wiki, and is needed to handle the regions near the vertices) is already enough to give the hot spots conjecture for sufficiently small perturbations of a right-angled or non-equilateral acute isosceles triangle. The Banuelos-Pang results require the eigenvalue to be simple, so the perturbation theory of the equilateral triangle (in which the second eigenvalue has multiplicity 2) is not directly covered. However, it seems very likely that for any sufficiently small perturbation of the equilateral triangle, a second eigenfunction of the perturbed triangle should be close in L^infty norm to _some_ second eigenfunction of the original triangle (but this approximating eigenfunction could vary quite discontinuously with respect to the perturbation).
Assuming this, this shows the hot spots conjecture for perturbations of the equilateral triangle as well, because _every_ second eigenfunction of the equilateral triangle can be shown to have extrema only at the vertices, and to be uniformly bounded away from the extremum once one is a fixed distance away from the vertices (this comes from the strict concavity of the image of the complex second eigenfunction of the equilateral triangle, discussed on the wiki). The perturbation argument also shows that in order for the hot spots conjecture to fail, there must exist a “threshold” counterexample of an acute triangle in which one of the vertex extrema is matched by a critical point either on the edge or interior of the triangle, though it is not clear to me how to use this information. Comment by Terence Tao — June 15, 2012 @ 3:53 pm • Thanks! Actually what I had in mind was trying to prove that $t\mapsto u(t)$ is analytic with values in $C^0(T_0)$, but then I imprudently jumped to think that this would imply the analyticity of the sup norm. So I am not sure there is something to save from the analyticity approach I was suggesting. Except maybe the following fact: I think that the set of triangles such that $\lambda_2$ is simple is open and dense (and also of full measure for a natural class of Lebesgue-type measures). We have proved that for any mixed Dirichlet-Neumann boundary condition … except Neumann everywhere! I have a sketch of proof for the latter case but I never carried out the details (so there may be some bugs in the argument). Last thing concerning analyticity of the eigenvalues and eigenfunctions: this holds only for one-parameter analytic families of triangles. I don’t think the eigenvalues can be arranged to be analytic on the full parameter space (because there are crossings). Comment by Luc Hillairet — June 15, 2012 @ 5:01 pm 10.
I would like to propose a further probabilistic intuition, based on comment 15 of thread 1, and another possibility for attacking the problem. It is based on relating free Brownian motion with reflecting Brownian motion. If $B$ is a one dimensional Brownian motion, and we define the floor function $\lfloor \cdot \rfloor$ and the zig-zag function $f(x) = | x - 2 \lfloor (x+1) / 2 \rfloor |$, then $R=f(B)$ is a reflecting Brownian motion on $[0,1]$ (as can be rigorously proved using stochastic calculus and local time, for example) and its density is the fundamental solution of the heat equation with Neumann boundary conditions. To write an expression for the transition density $p_t^R$ of $R$ in terms of the transition density $p_t$ of $B$, write $y_1\sim y_2$ if $f(y_1)=f(y_2)$ and note that (1) $p_t^R(x,y)=\sum_{\tilde y\sim y} p_t(x,\tilde y)$ if $x,y\in (0,1)$ but $p_t^R(x,1)=2\sum_{\tilde y\sim 1}p_t(x,\tilde y)$ This explains why the boundary points 0 and 1 accumulate (or trap) heat at twice the rate of interior points, and I believe that from here one can conceptually prove hot spots in the very simple case of the interval. For two dimensional reflecting Brownian motion, one needs a similar reflection function. To construct it: think first of an equilateral triangle constructed as a kaleidoscope with 3 sides of equal length. Each point inside the triangle gives rise to a lattice of points in the plane which will be identified via the equivalence relation $\sim$. We then write the fundamental solution to the heat equation with Neumann boundary condition on the triangle via formula (1) for points in the interior of the triangle. However, points on the sides of the triangle accumulate heat at twice the rate, while corner points trap it at 6 times the rate (because the triangle is equilateral). In general one would hope that a corner of angle alpha gets heated $\lfloor 2\pi/\alpha\rfloor$ times faster than interior points.
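In the interval case, the method-of-images sum in formula (1) is easy to test numerically. Here is a minimal Python sketch (the helper names are mine, not from the thread): it sums Gaussian images over the reflection class $\{2k \pm y\}$ and checks that the resulting kernel conserves heat and has vanishing normal derivative at the endpoint, without any extra boundary factor.

```python
import numpy as np

def p_free(x, y, t):
    # Free 1D heat kernel (Gaussian transition density of Brownian motion).
    return np.exp(-(x - y) ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)

def p_neumann(x, y, t, K=50):
    # Method of images on [0,1]: the reflection class of y under the
    # zig-zag folding f is {2k + y} union {2k - y}, k in Z.
    # This is formula (1), truncated to |k| <= K, with NO extra
    # factor of 2 at the endpoints.
    total = 0.0
    for k in range(-K, K + 1):
        total += p_free(x, 2 * k + y, t) + p_free(x, 2 * k - y, t)
    return total

t, x = 0.1, 0.3
ys = np.linspace(0.0, 1.0, 2001)
vals = p_neumann(x, ys, t)
# Heat is conserved: the kernel integrates to 1 in y over [0,1].
mass = np.sum((vals[1:] + vals[:-1]) / 2) * (ys[1] - ys[0])
# Neumann condition: the y-derivative vanishes at the endpoint y = 1
# (estimated by a one-sided difference).
h = 1e-5
dpdy = (p_neumann(x, 1.0, t) - p_neumann(x, 1.0 - h, t)) / h
print(mass, dpdy)
```

The vanishing derivative at $y=1$ comes out of an exact cancellation between the $2k+y$ and $2k-y$ image families, which is consistent with the observation below that the boundary value needs no factor of 2 for the kernel to remain continuous.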
I think that stochastic calculus is not yet mature enough to prove that reflecting Brownian motion in the triangle can be constructed by applying the reflection $\sim$ to free Brownian motion (lacking a multidimensional Tanaka formula). However, one can see if formula (1) does give the fundamental solution to the heat equation with Neumann boundary conditions. Comment by Gerónimo — June 14, 2012 @ 4:23 pm • Hmm, I’m not so sure about the factor of 2 in the formula for $p_t^R(x,1)$, as this would imply that the heat kernel is discontinuous at the boundary, which I’m pretty sure is not the case. Note that the epsilon-neighbourhood of a boundary point in one dimension is only half as large as the epsilon-neighbourhood of an interior point, and so I think this factor of 1/2 cancels out the factor of 2 that one is getting from the folding coming from the zigzag function. So the heating at the endpoints is coming more from the convexity properties of the heat kernel than from folding multiplicity. Still, this does in principle give an explicit formula for the heat kernel on the triangle as some sort of infinitely folded up version of the heat kernel on something like the plane (but one may have to work instead with something more like the universal cover of a plane punctured at many points if the angles do not divide evenly into pi). One problem in the general case is that the folding map becomes dependent on the order of the edges the free Brownian motion hits, and so cannot be represented by a single map f unless one works in some complicated universal cover. Comment by Terence Tao — June 14, 2012 @ 4:38 pm • I agree, the formula for $p_t^R(x,1)$ shouldn’t have the factor of two, and the intuition there is incorrect.
However, it does suggest a new one: since the heat kernel $p_t$ decays rapidly, endpoints with nearby reflections will accumulate more density (the notion of nearby depends on the amount of time elapsed), and corners of angle $\alpha$ are points where there are (mainly) $2\pi/\alpha$ nearby reflections. Also, maybe one does not need to leave the plane to construct the reflecting Brownian motion, since two-dimensional free Brownian motion does not visit the corners of the triangle (by polarity of countable sets), so one only needs to keep changing the reflection edge as soon as a new one is reached. The transition density does indeed seem more complicated, but perhaps (1) might provide sensible approximations. Comment by Gerónimo — June 14, 2012 @ 5:44 pm • It is true that once one shows that the Neumann heat kernel is increasing toward the boundary, the hot-spots conjecture is true. But this approach is much harder than just proving the hot-spots conjecture. Until very recently there was the Laugesen-Morpurgo conjecture, stating that the Neumann heat kernel for a ball is increasing toward the boundary. This was settled by Pascu and Gageonea (http://www.sciencedirect.com/science/article/pii/S0022123610003526) in 2011 using mirror couplings. The reflection argument seems very appealing, but even for an interval I have not seen a proof that the Neumann heat kernel is increasing using the explicit series of Gaussian terms coming from reflections. The above paper also settles the interval case. One can also use the Dirichlet heat kernel to prove this (http://pages.uoregon.edu/siudeja/neumann.pdf, slides 6 and 7). For triangles, reflections are not enough to cover the plane. You may have to also flip the reflected triangle along the line perpendicular to the reflection side in order to ensure that you can cover the plane. This however means that you lose continuity on the boundary.
Comment by Bartlomiej Siudeja — June 14, 2012 @ 6:07 pm • A small correction: “the diagonal of the Neumann heat kernel should be increasing toward the boundary”, so $p_t^R(x,x)$ should be increasing when x goes to the boundary. Comment by Bartlomiej Siudeja — June 14, 2012 @ 8:32 pm • It does seem like such a procedure would be hard (perhaps hopelessly so) to implement for triangles that don’t tile the plane nicely (which are most triangles) for the reasons given in the other replies. But if such an argument were to work it would first need to be worked out for the case of an equilateral triangle. I’d be interested in seeing such an argument but I am not sure how it would go… Suppose the initial heat is a point mass at one corner, and draw out a full tiling of the plane. Then the unreflected heat flow would have a nice Gaussian distribution, and the reflected heat flow could be recovered by folding in all the triangles… but how would you show that the hottest point upon folding is at the corner you started the heat flow at? You have an infinite sum and it is not the case that each triangle in this sum has its maximum at that corner… Comment by Chris Evans — June 14, 2012 @ 8:26 pm 11. A couple of people asked for some pictures of nodal lines. Here are some on triangles which aren’t isosceles or right or equilateral, and whose angles aren’t within pi/50 of those special cases, either: Here are the nodal lines corresponding to the 2nd and 3rd Neumann eigenfunctions on a nearly equilateral triangle. Note the multiplicity of the 2nd eigenvalue is 1, but the spectral gap $\lambda_3-\lambda_2$ is small. I found these interesting. Comment by Nilima Nigam — June 14, 2012 @ 5:16 pm • Is the nearly equilateral triangle isosceles? If it is, the nearly antisymmetric case should not look the way it does. Every eigenfunction on an isosceles triangle must be either symmetric or antisymmetric. Otherwise the corresponding eigenvalue is not simple.
It is not impossible that the third one is not simple, but for a nearly equilateral triangle that is extremely unlikely. Here the antisymmetric case is the second eigenvalue, so it must be antisymmetric. Even if this triangle is not isosceles, the change in the shape of the nodal line is really huge. Comment by Bartlomiej Siudeja — June 15, 2012 @ 3:51 am • No, the nearly equilateral triangle is not isosceles. Comment by Nilima Nigam — June 15, 2012 @ 3:54 am • Also, do you have bounds on how the nodal lines should change as we perturb away from the equilateral triangle in an asymmetric fashion? This would be interesting to compare with. Comment by Nilima Nigam — June 15, 2012 @ 4:01 am • No, I do not think I have anything for nodal lines. One of the papers by Antunes and Freitas may have something, but they mostly concentrate on the way eigenvalues change. Nothing for nodal lines. It is quite surprising, and good for us, that the change is so big. Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:08 am 12. In case someone wants to see eigenfunctions of all known triangles and a square (right isosceles triangle), I have written a Mathematica package http://pages.uoregon.edu/siudeja/TrigInt.m. See ?Equilateral and ?Square for usage. A good way to see nodal domains is to use RegionPlot with eigenfunction>0. The package can also be used to facilitate linear deformations of triangles. In particular, Transplant moves a function from one triangle to another (put {x,y} as the function to see the linear transformation itself). There is a T[a,b] notation for the triangle with vertices (0,0), (1,0) and (a,b). The function Rayleigh evaluates the Rayleigh quotient of a given function on a given triangle (with one side on the x-axis). There are also other helper functions for handling triangles. Everything is symbolic, so parameters can be used.
Put this in Mathematica to import the package:
AppendTo[$Path,ToFileName[{$HomeDirectory, "subfolder", "subfolder"}]];
<< TrigInt`
The first line may be needed for the Mathematica kernel to see the file. After that, Equilateral[Neumann,Antisymmetric][0,1] gives the first antisymmetric eigenfunction, and Equilateral[Eigenvalue][0,1] gives the second eigenvalue. Comment by Anonymous — June 14, 2012 @ 8:15 pm • There is also a function TrigInt which is much faster than the regular Int for complicated trigonometric functions. Limits for the integral can be obtained using Limits[triangle]. For integration it might be a good idea to use the extended triangle notation T[a,b,condition] where condition is something like b>0. Comment by Bartlomiej Siudeja — June 14, 2012 @ 8:20 pm • I’m not a Mathematica user, so my question may be naive. Are the eigenfunctions being computed symbolically by Mathematica? If not, could you provide some details on what you’re using to compute the eigenfunctions/values? It would be great if you could post this information to the Wiki. Comment by Nilima Nigam — June 15, 2012 @ 4:04 am • They are computed using a general formula. The nicest write-up is probably in the series of papers by McCartin. All eigenfunctions look almost the same: a sum of three terms, each a product of two cosines/sines. The only difference is the integer coefficients inside the trig functions. The same formula works for Dirichlet, just with slightly different numbers. Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:12 am • Here is code from the package (with small changes for readability). First there are some convenient definitions.
h=1; r=h/(2Sqrt[3]);
u=r-y; v=Sqrt[3]/2(x-h/2)+(y-r)/2; w=Sqrt[3]/2(h/2-x)+(y-r)/2;
Then a function that contains all the cases; #1 and #2 are just integers, f and g are trig functions:
EqFun[f_,g_]:=f[Pi (-#1-#2)(u+2r)/(3r)]g[Pi (#1-#2)(v-w)/(9r)]+ f[Pi #1 (u+2r)/(3r)]g[Pi (2#2+#1)(v-w)/(9r)]+ f[Pi #2 (u+2r)/(3r)]g[Pi (-2#1-#2)(v-w)/(9r)];
All the cases:
Equilateral[Neumann,Symmetric]=EqFun[Cos,Cos]&;
Equilateral[Neumann,Antisymmetric]=EqFun[Cos,Sin]&;
Equilateral[Dirichlet,Symmetric]=EqFun[Sin,Cos]&;
Equilateral[Dirichlet,Antisymmetric]=EqFun[Sin,Sin]&;
The eigenvalue is the same regardless of the case. For Neumann you need 0<=#1<=#2. For Dirichlet: 0<#1<=#2. And the antisymmetric case cannot have #1=#2.
Equilateral[Eigenvalue]=Evaluate[4/27(Pi/r)^2(#1^2+#1 #2+#2^2)]&;
Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:22 am • I’m sorry, I’m really not familiar with this package. Am I correct, reading the script above, that you are computing an *analytic* expression for the eigenvalue? That is, if I give three angles of an arbitrary triangle (a,b,pi-a-b), your script renders the Neumann eigenvalue and eigenfunction in closed form? Or is this code for the cases where the closed form expressions for the eigenvalues are known (equilateral, right-angled, etc)? This is also very nice to have, for verification of other methods of calculation. When we map one triangle to another, the eigenvalue problem changes (see the Wiki, or previous discussions here). It is great if you have a code which can analytically compute the eigenvalues of the mapped operator on a specific triangle, or equivalently, eigenvalues on a generic triangle. Comment by Nilima Nigam — June 15, 2012 @ 4:36 am • This package is not fancy at all. It has formulas for the equilateral, the right isosceles, and the half of the equilateral. These are known explicitly. For other triangles it just helps evaluate the Rayleigh quotient on something like f composed with T (linear). This just gives upper bounds for eigenvalues.
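The closed-form spectrum above is easy to sanity-check. With h=1 and r=1/(2 Sqrt[3]), the formula 4/27 (Pi/r)^2 (m^2 + m n + n^2) reduces to (16 pi^2/9)(m^2 + m n + n^2). A minimal Python sketch (the function name is mine, not part of the package) enumerating the Neumann spectrum of the unit equilateral triangle from that formula:

```python
import math

def equilateral_neumann_eigenvalues(count, nmax=20):
    # Closed-form Neumann spectrum of the unit equilateral triangle:
    # lambda = (16 pi^2 / 9)(m^2 + m n + n^2), with 0 <= m <= n.
    # Each pair with m < n contributes a symmetric and an antisymmetric
    # eigenfunction; m = n contributes only the symmetric one.
    eigs = []
    for m in range(nmax + 1):
        for n in range(m, nmax + 1):
            lam = (16 * math.pi ** 2 / 9) * (m * m + m * n + n * n)
            eigs.append(lam)          # symmetric eigenfunction
            if m < n:
                eigs.append(lam)      # antisymmetric partner
    return sorted(eigs)[:count]

eigs = equilateral_neumann_eigenvalues(5)
# eigs[0] = 0 (the constant eigenfunction); eigs[1] = eigs[2], i.e. the
# second eigenvalue has multiplicity 2, as used repeatedly in this thread.
print(eigs)
```

The repeated first nonzero eigenvalue (from the (m,n)=(0,1) symmetric/antisymmetric pair) is exactly the multiplicity-2 second eigenvalue of the equilateral triangle that complicates the perturbation arguments earlier in this discussion.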
Or it might help speed up calculations for the Hadamard variation, since you do not need to think about what the linear transformation from one triangle to another is. And it can evaluate the Rayleigh quotient on the transformed triangle. It was handy for proving bounds for eigenvalues, and for seeing nodal domains in the known cases. I wish I had an analytic formula for the eigenvalues on an arbitrary triangle. Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:42 am • OK, thanks for the clarification! I was thrown by your initial comment about it accepting all parameters. Now I understand that you’re able to get good bounds, rather than the exact eigenvalues. Comment by Nilima Nigam — June 15, 2012 @ 5:23 am • The fact that there are quite a few known cases means that you can make a linear combination of known eigenfunctions (each transplanted to a given triangle) and evaluate the Rayleigh quotient. PDEToolbox is not a benchmark for FEM, but I have seen cases where 16GB of memory was not enough to bring the numerical result below the upper bound obtained from a test function containing 5 known eigenfunctions. Comment by Bartlomiej Siudeja — June 15, 2012 @ 5:12 am • PDEToolbox is great for generating a quick result, but not for careful numerics, and it doesn’t do high order. Yes, you could wait a long while to get good results if you relied solely on PDEToolbox. Joe Coyle (whose FEM solver we’re using) has implemented high-accuracy conforming approximants, and we’re keeping tight control on residual errors. Details of our approximation strategy are on the Wiki. I’m also thinking of implementing a completely non-variational strategy, so we have two sets of results to compare. Comment by Nilima Nigam — June 15, 2012 @ 5:29 am • I used to use PDEToolbox for visualizations, but I no longer have a license for it. Besides, it does not have 3D, and eigenvalues in 3D behave much worse than in 2D. I have written a wrapper for the eigensolver from the FEniCS project (http://fenicsproject.org/).
It is most likely not good for rigorous numerics, and I am not even a beginner in FEM. However, it works perfectly for plotting. In particular one can see that the nodal line moves away very quickly from the vertices. The nearly equilateral case Nilima posted must indeed be extremely close to equilateral. While Nilima crunches the data, anyone who wants to see more pictures is welcome to use my script. It is a rough implementation with not-so-good documentation, but it can handle many domains with any boundary conditions (also mixed). There is a readme file. Download link: http://pages.uoregon.edu/siudeja/fenics.zip. I have only tested this on a Mac, so I am not sure it will work in Windows or Linux, though it should. To get a triangle one can use
python eig.py tr a b -N -s number -m -c3 -e3
Here tr is the domain specification, a,b is the third vertex, -N gives Neumann, -s number is the number of triangles, -m shows the mesh, -c3 gives contours instead of surface plots (3 contours are good for the nodal line), and -e3 gives 3 eigenvalues. There are many options; python eig.py -h lists all of them with minimalistic explanations. Comment by Bartlomiej Siudeja — June 15, 2012 @ 4:57 pm 13. Some random thoughts about the nodal line: 1) I believe the Nodal Line Theorem guarantees that the nodal line consists of a curve with end points on the boundary and which divides the triangle into two sub-regions. It might be possible to prove that in fact the two endpoints of the nodal line lie on different sides of the triangle. (The alternate case, that the nodal line looks like a handle sticking out from one of the edges, feels wrong… in fact maybe it is the case that for no domain ever is it the case that the two endpoints of the nodal line lie on the same straight line segment of its boundary). 2) If 1) were true, then it would follow that the nodal line does in fact straddle one of the corners.
Moreover, we know a priori that the nodal line is orthogonal to the boundary (so at least locally near the boundary it starts to “bow out”). The nodal line ought not to be too serpentine… that would cause the second eigenfunction to have a large $H^1$-norm while allowing the $L^2$-norm to stay small… which would violate the Rayleigh-Ritz formulation of the 2nd eigenfunction. 3) Since the nodal line is “bowed out” at the boundary, and has incentive not to be serpentine, it seems like it shouldn’t “bow in”. If we could show that the slope/angle of the nodal line stays within a certain range then the arguments used for the mixed Dirichlet-Neumann triangle could be applied to show that the extremum of the eigenfunction in this sub-region in fact lies at the corner the nodal line is straddling. Of course this is all hand-wavy and means nothing without precise quantitative estimates :-/ In particular though, does any one know if the statement “for no domain ever is it the case that the two endpoints of the nodal line lie on the same straight line segment of its boundary” is true? I can’t think of any domain for which that would be the case… Comment by Chris Evans — June 14, 2012 @ 8:53 pm • I think your last statement is true. Suppose a nodal line for the Neumann Laplacian in a polygonal domain has both its endpoints on the same line segment. Consider the domain Q enclosed by the nodal line and the piece of the line segment between the nodal line end-points. This region is a subset of the original domain. Now, on Q, the eigenfunction u has the following properties: it satisfies $\Delta u + \Lambda u=0$ in Q, has zero Dirichlet data on the curvy part of the boundary of Q, and satisfies zero Neumann data on the straight line part of its boundary. Now reflect Q across the straight line segment, and you get a Dirichlet problem for $\Delta u + \Lambda u=0$ in the doubled domain.
I now claim $\Lambda$ cannot be an eigenvalue of the Dirichlet problem on this doubled domain. $\Lambda$ is the first eigenvalue of the mixed Dirichlet-Neumann problem on Q. This is easy – there are no other nodal lines in Q. Hence $\Lambda$ is smaller than the first eigenvalue of the Dirichlet problem on Q (fewer constraints). Doubling the domain just increases the value of the Dirichlet eigenvalue. So $\Lambda$ cannot be an eigenvalue on the doubled domain. Finally, we have the Helmholtz problem $\Delta u + \Lambda u=0$ on the doubled domain, with zero boundary data. We’ve just shown $\Lambda$ is not an eigenvalue, so the problem is uniquely solvable, and hence u=0 in the doubled domain. Comment by Nilima Nigam — June 14, 2012 @ 9:35 pm • I think there is something wrong with this argument. When you double the domain, the Dirichlet eigenvalue must go down. In fact $\Lambda$ is exactly equal to the first Dirichlet eigenvalue on the doubled Q (which has the Dirichlet condition all around). The doubled Q has a line of symmetry, hence by simplicity of the first Dirichlet eigenvalue, the eigenfunction must be symmetric. Hence it must satisfy the Neumann condition on the straight part of the boundary of Q. Comment by Bartlomiej Siudeja — June 14, 2012 @ 10:00 pm • Once we double the original domain and get a doubled Q with the Dirichlet condition all around, we can claim that this domain has larger eigenvalues than the original domain doubled with Dirichlet all around. Assuming the doubled domain is convex, we can use the Payne-Levine-Weinberger inequality $\mu_3\le\lambda_1$: Neumann is below Dirichlet. Without convexity we just have $\mu_2 <\lambda_1$. Our original eigenfunction gives an eigenvalue on the doubled domain, but unfortunately it might not be the second. If it were, we would be done. Under the convexity assumption it should be easier, but I am not sure yet how to finish the proof.
Comment by Bartlomiej Siudeja — June 14, 2012 @ 10:38 pm • I like the idea of taking advantage of the fact that the boundary is flat to reflect across it, but for the reasons Siudeja mentions I don’t quite follow the argument. Maybe it is possible to make an argument by reflecting the entire domain (not just the $Q$ in your notation) across the straight line segment. The reflected eigenfunction would then have a nodal line which is a circle… Thus we would have an eigenfunction which has only *one* nodal line and it is a loop floating in the middle… does the Nodal Line Theorem preclude this? Comment by Chris Evans — June 14, 2012 @ 11:27 pm • The unit disk contains a Neumann eigenfunction $J_0(j_1 r)$ (where $j_1$ is the first zero of $J_1 = -J_0'$) whose nodal line is a closed circle – but it isn’t the second eigenvalue. But it is the second eigenvalue amongst the radial functions, which already suggests one has to somehow “break the symmetry” (whatever that means) in order to rule out loops… Comment by Terence Tao — June 15, 2012 @ 12:28 am • I think that if one can prove that the second eigenfunction of an acute scalene triangle never vanishes at a vertex (i.e. the nodal line cannot cross a vertex), then a continuity argument (starting from a very thin acute triangle, for instance) shows that for any acute scalene triangle, the nodal line crosses each of the edges adjacent to the pointiest vertex exactly once. I don’t know how to prevent vanishing at a vertex though. (Note that for an equilateral or super-equilateral isosceles triangle, the nodal line does go through the vertex, though as shown in the image http://people.math.sfu.ca/~nigam/polymath-figures/nearly-equilateral-1.jpg from comment 11, the nodal line quickly moves off of the vertex once one perturbs off of the isosceles case.)
I was looking at the argument that shows the nodal line is not a closed loop, hoping to get some mileage out of a reflection argument, but unfortunately it relies on an isoperimetric inequality and does not seem to be helpful here. (The argument is as follows: if the nodal line is a closed loop, enclosing a subdomain D of the original triangle T, then by zeroing out everything outside of the loop we see that the second Neumann eigenvalue of T is at least as large as the first Dirichlet eigenvalue of D, which is in turn larger than the first Dirichlet eigenvalue of T. But there are isoperimetric inequalities that assert that among all domains of a given area, the first Dirichlet eigenvalue is minimised and the second Neumann eigenvalue is maximised at a disk, implying in particular that the Neumann eigenvalue of T is less than or equal to the Dirichlet eigenvalue of T, giving the desired contradiction.) Comment by Terence Tao — June 14, 2012 @ 10:55 pm • This is exactly what I was trying to do above. I think that the isoperimetric inequality is not needed. The Neumann eigenvalue is just equal to the Dirichlet eigenvalue in the loop (the Laplacian is local), which is larger than the Dirichlet eigenvalue on the whole domain, which is larger than the second Neumann eigenvalue on the whole domain (Polya and others). For convex domains even the third Neumann eigenvalue is below the first Dirichlet. But even this is not enough for our case. Comment by Bartlomiej Siudeja — June 14, 2012 @ 11:04 pm • I have done a few numerical plots for super-equilateral triangles sheared by a very small amount. It seems that the speed at which the nodal line moves away from the vertex when shearing grows as the isosceles triangle approaches the equilateral one. For the triangle with vertices (0,0), (1,0) and (1/2+epsilon, sqrt(3)/(2+epsilon)), the nodal line looks almost the same regardless of epsilon. I tried epsilon=0.1, 0.01, 0.0001. The nodal line touches the side about 1/3 of the way from the vertex.
Comment by Bartlomiej Siudeja — June 15, 2012 @ 6:31 pm

• I think reflection may actually work, unless I am missing something. Let T be the original acute triangle, Q the quadrilateral obtained by reflection, and S the reflection line. We assume that the nodal line of the second Neumann eigenfunction of T has endpoints on S. Now reflect across S to get an interior Dirichlet domain D. This one is smaller than Q, so by domain monotonicity it has a strictly larger first Dirichlet eigenvalue than Q with Dirichlet boundary conditions. Due to convexity of Q we get that the third Neumann eigenvalue of Q is not larger than the first Dirichlet eigenvalue of Q (http://www.jstor.org/stable/2375044). We will be done if we can show that the second Neumann eigenfunction of T gives the second or third Neumann eigenfunction of Q.

Due to the line of symmetry in Q, every eigenfunction must be symmetric or antisymmetric. If not, we could reflect it, then add the original and the reflection to get something symmetric. We could also subtract to get something antisymmetric. Hence a non-symmetric eigenfunction of Q implies a double eigenvalue. One of those must be symmetric, so it must be the Neumann eigenfunction of T, and we are done. So suppose that the second Neumann eigenfunction on Q is antisymmetric. If the third one is also antisymmetric, it must have an additional nodal line, hence by antisymmetry it must have at least 4 nodal domains. But this is not possible. Hence either the second or the third eigenfunction on Q must be symmetric, hence it must satisfy the Neumann condition on S. Therefore it must be the second eigenfunction on T. Contradiction.

Comment by Bartlomiej Siudeja — June 15, 2012 @ 12:44 am

• Nice! (Though I’m not clear about the line “Non symmetric eigenfunction of Q implies double eigenvalue”, it seems that this is neither true nor needed for the argument. Also, I replaced your jstor link with the stable link.)
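For readability, the chain of inequalities behind the reflection argument above can be summarized as follows (my paraphrase, writing $\mu_k$ for Neumann and $\lambda_1$ for first Dirichlet eigenvalues):

```latex
% D: nodal domain of the second Neumann eigenfunction of T, reflected
% across S; Q: the reflected quadrilateral; T: the original triangle.
\[
  \mu_2(T) \;=\; \lambda_1(D) \;>\; \lambda_1(Q) \;\ge\; \mu_3(Q) \;\ge\; \mu_2(T),
\]
% a contradiction.  The strict inequality is domain monotonicity
% (D \subsetneq Q); \lambda_1(Q) \ge \mu_3(Q) is the Levine--Weinberger
% inequality for the convex domain Q; and the last step holds because the
% symmetric extension of the eigenfunction of T is the second or third
% Neumann eigenfunction of Q.
```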
Comment by Terence Tao — June 15, 2012 @ 1:20 am

• No symmetry for an eigenfunction means that we can reflect the eigenfunction to get a new (different) one. Now take the sum to get something symmetric (Neumann on S), and subtract to get something antisymmetric (Dirichlet on S). Neither one will be 0, and they must be orthogonal. So the eigenvalue must be double or higher. This just means that the eigenspace for something symmetric can always be decomposed into symmetric and antisymmetric parts.

Comment by Bartlomiej Siudeja — June 15, 2012 @ 1:29 am

• Oh, I see what you mean now. (I had confused “non symmetric” with “anti-symmetric”.) I put a quick writeup of the argument on the wiki.

Comment by Terence Tao — June 15, 2012 @ 1:48 am

• The reference I included was to a paper of Friedlander, where he cites a much older paper by Levine and Weinberger where the inequality is proved. There is also a nice paper by Frank and Laptev that gives a good account of who proved what (http://www2.imperial.ac.uk/~alaptev/Papers/FrLap2.pdf).

Comment by Bartlomiej Siudeja — June 15, 2012 @ 2:16 am

14. Concerning the method of attack I suggested in the previous comment, it seems that 1) is proven (as the nodal line connects two edges, it does indeed straddle some vertex). It occurs to me that 2) and 3) can be more succinctly phrased as the conjecture that the mixed boundary domain consisting of this corner and nodal line is *convex*. I think showing that would be enough… because the nodal line intersects the boundary orthogonally, knowing this region is convex should control the slope of the nodal line enough that earlier arguments would get the extremum in the corner.

Comment by Chris Evans — June 15, 2012 @ 7:27 am

15. […] proposed by Chris Evans, and that has already expanded beyond the proposal post into its first research discussion post.
(To prevent clutter and to maintain a certain level of organization, the discussion gets cut up […]

Pingback by Three number theory bits: One elementary, the 3-Goldbach, and the ABC conjecture « mixedmath — June 15, 2012 @ 1:58 pm

16. […] previous research thread for the Polymath7 “Hot Spots Conjecture” project has once again become quite full, so […]

Pingback by Polymath7 research threads 2: the Hot Spots Conjecture « The polymath blog — June 15, 2012 @ 9:49 pm

• As you can see, I’ve rolled over the thread again as this thread is also approaching 100 comments and getting a little hard to follow. The pace is a bit hectic, but I guess this is a good thing, as it is an indication that we are making progress and understanding the problem better…

Comment by Terence Tao — June 15, 2012 @ 9:51 pm

17. […] been quite an active discussion in the last week or so, with almost 200 comments across two threads (and a third thread freshly opened up just now). While the problem is still not completely […]

Pingback by Updates on the two polymath projects « What’s new — June 15, 2012 @ 10:22 pm

18. […] time to roll over the research thread for the Polymath7 “Hot Spots” conjecture, as the previous research thread has again become […]

Pingback by Polymath7 research threads 3: the Hot Spots Conjecture « The polymath blog — June 24, 2012 @ 7:22 pm
https://fr.coursera.org/lecture/machine-learning-with-python/introduction-to-classification-95g22
## Introduction to Classification

### Skills you will learn

Python Libraries, Machine Learning, Regression, Hierarchical Clustering, K-Means Clustering

### Reviews

4.7 (13,158 ratings)

• 5 stars: 75.89 %
• 4 stars: 18.71 %
• 3 stars: 3.38 %
• 2 stars: 0.94 %
• 1 star: 1.05 %

RN, 25 May 2020: Labs were incredibly useful as a practical learning tool, which therefore helped in the final assignment! I wouldn't have done well in the final assignment without them, together with the lecture videos!

MJ, 3 June 2020: In peer-graded assignments, if someone is grading a peer below the passing criteria, then it should be compulsory to let the learner know the mistakes or shortcomings because of which he was not passed.

### From the lesson

Classification

### Taught by

• SAEED AGHABOZORGI, Ph.D., Sr. Data Scientist
• Joseph Santarcangelo, Ph.D., Data Scientist at IBM
http://archytas.birs.ca/events/2017/5-day-workshops/17w5144/schedule
# Schedule for: 17w5144 - Photonic Topological Insulators

Arriving in Banff, Alberta on Sunday, September 10 and departing Friday September 15, 2017

Sunday, September 10

16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre)

17:30 - 19:30 Dinner: A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room)

20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110))

Monday, September 11

07:00 - 08:45 Breakfast: Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room)

08:45 - 09:00 Introduction and Welcome by BIRS Station Manager (TCPL 201)

09:00 - 09:30 Iacopo Carusotto: Pumping and dissipation as an asset for topological photonics

In this talk I will review some general aspects of the different ways of injecting light into a topological photonic system and of extracting information about its dynamics from the emitted light. Rather than just a hindrance, the intrinsically non-equilibrium nature of optical systems can in fact be seen as a promising asset in view of exploring new physics beyond what is normally done in condensed-matter and ultracold atom systems. In the first part, I will review the basic features of the principal pumping schemes used in experiments on quantum fluids of light [1] and topological photonics. In particular, I will illustrate how these features have been exploited in recent experiments to highlight different aspects of topological physics. In the second part, I will present some theoretical proposals of new effects that can be studied in state-of-the-art systems of current interest for topological photonics.
Our long term goal is to push further the research on topological photonics in the direction of generating strongly correlated states of light in strongly nonlinear systems [2, 3] and to observe novel phase transitions in a driven-dissipative context [4].

References
[1] I. Carusotto and C. Ciuti, Quantum fluids of light, Rev. Mod. Phys. 85, 299 (2013)
[2] E. Macaluso and I. Carusotto, Hard-wall confinement of a fractional quantum Hall liquid, arXiv:1706.00353.
[3] R. O. Umucalilar and I. Carusotto, Spectroscopic signatures of a Laughlin state in an incoherently pumped cavity, to be submitted.
[4] J. Lebreuilly et al., Stabilizing strongly correlated photon fluids with non-Markovian reservoirs, arXiv:1704.01106.

(TCPL 201)

09:30 - 10:00 Miguel Bandres: Topological Lasers (TCPL 201)

10:00 - 10:30 Coffee Break (TCPL Foyer)

10:30 - 11:00 Hannah Price: Measuring the Berry curvature from geometrical pumping

Geometrical properties of energy bands underlie fascinating phenomena in a wide range of systems, including solid-state materials, ultracold gases and photonics. Most notably, local geometrical characteristics, like the Berry curvature, can be related to global topological invariants such as those classifying quantum Hall states or topological insulators. Regardless of the band topology, however, any non-zero Berry curvature can have important consequences, such as in the dynamical evolution of a wave-packet [1]. We experimentally demonstrate for the first time that wave-packet dynamics can be used to directly map out the Berry curvature over an energy band [2]. To this end, we use optical pulses in two coupled fibre loops to explore the discrete time-evolution of a wave-packet in a 1D geometrical charge pump, where the Berry curvature leads to an anomalous displacement of the wave packet under pumping.
This is a direct observation of Berry curvature effects in an optical system, and, more generally, a proof-of-principle demonstration that wave-packet dynamics can be used as a high-resolution tool for probing the geometrical properties of energy bands.

[1] D. Xiao, M.-C. Chang, and Q. Niu, Rev. Mod. Phys. 82, 1959 (2010).
[2] M. Wimmer, H. M. Price, I. Carusotto and U. Peschel, Nature Physics, 13, 6, 545 (2017).

(TCPL 201)

11:00 - 11:30 Tomoki Ozawa: Synthetic dimensions and four-dimensional quantum Hall effect in photonics

I discuss recent developments in the study of “synthetic dimensions” in photonics. The idea of synthetic dimensions is to identify internal states of a photonic cavity as extra dimensions, and to simulate higher-dimensional lattice models using physically lower-dimensional systems. The concept was originally proposed and experimentally realized in ultracold gases [1–5]. I first review the existing theoretical and experimental studies of synthetic dimensions. After discussing some challenges and limitations of the existing methods of synthetic dimensions, I explain our proposals for realizing synthetic dimensions in photonic cavities [6, 7], which overcome some of these limitations. Finally I discuss how the four-dimensional quantum Hall effect can be observed in photonics using synthetic dimensions [6, 8, 9].

[1] O. Boada, A. Celi, J. I. Latorre, and M. Lewenstein, Quantum Simulation of an Extra Dimension, Phys. Rev. Lett. 108, 133001 (2012).
[2] A. Celi, P. Massignan, J. Ruseckas, N. Goldman, I. B. Spielman, G. Juzeliunas, and M. Lewenstein, Synthetic Gauge Fields in Synthetic Dimensions, Phys. Rev. Lett. 112, 043001 (2014).
[3] M. Mancini, G. Pagano, G. Cappellini, L. Livi, M. Rider, J. Catani, C. Sias, P. Zoller, M. Inguscio, M. Dalmonte, and L. Fallani, Observation of chiral edge states with neutral fermions in synthetic Hall ribbons, Science 349, 1510 (2015).
[4] B. K. Stuhl, H. I. Lu, L. M. Aycock, D. Genkina, and I. B.
Spielman, Visualizing edge states with an atomic Bose gas in the quantum Hall regime, Science 349, 1514 (2015).
[5] L. F. Livi, G. Cappellini, M. Diem, L. Franchi, C. Clivati, M. Frittelli, F. Levi, D. Calonico, J. Catani, M. Inguscio, and L. Fallani, Synthetic dimensions and spin-orbit coupling with an optical clock transition, Phys. Rev. Lett. 117, 220401 (2016).
[6] T. Ozawa, H. M. Price, N. Goldman, O. Zilberberg, and I. Carusotto, Synthetic dimensions in integrated photonics: From optical isolation to four-dimensional quantum Hall physics, Phys. Rev. A 93, 043827 (2016).
[7] T. Ozawa and I. Carusotto, Synthetic dimensions with magnetic fields and local interactions in photonic lattices, Phys. Rev. Lett. 118, 013601 (2017).
[8] H. M. Price, O. Zilberberg, T. Ozawa, I. Carusotto, and N. Goldman, Four-Dimensional Quantum Hall Effect with Ultracold Atoms, Phys. Rev. Lett. 115, 195303 (2015).
[9] H. M. Price, O. Zilberberg, T. Ozawa, I. Carusotto, and N. Goldman, Measurement of Chern numbers through center-of-mass responses, Phys. Rev. B 93, 245113 (2016).

(TCPL 201)

11:30 - 12:00 Oded Zilberberg: Topological 2D pumps: a dynamical realization of the four-dimensional quantum Hall effect

The discovery of topological states of matter has profoundly augmented our understanding of phase transitions in physical systems. A prominent example thereof is the two-dimensional integer quantum Hall effect. It is characterized by the first Chern number, which manifests in the quantized Hall response induced by an external electric field. Generalizing the quantum Hall effect to four-dimensional systems leads to the appearance of a novel non-linear Hall response with a 4D symmetry that is quantized as well, but described by a 4D topological invariant - the second Chern number. Here, we report on the first realization of such 4D topological effects using 2D topological pumps.
The quantized bulk response of the pump is measured in a cold atomic system and the corresponding edge phenomena are studied using coupled photonic waveguide arrays. (TCPL 201)

12:00 - 13:30 Lunch: Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room)

13:00 - 14:00 Guided Tour of The Banff Centre: Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus. (Corbett Hall Lounge (CH 2110))

14:00 - 14:20 Group Photo: Meet in the foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo! (TCPL Foyer)

15:30 - 16:00 Coffee Break (TCPL Foyer)

16:00 - 16:30 Wladimir Benalcazar: Topological Corner and Hinge Modes in Crystalline Insulators

We describe topological crystalline insulators in 2D that host corner-localized modes. These insulators are protected by spatial group symmetries, and hence do not need time-reversal symmetry to be broken. As initially described in Science 357, 61 (2017), the bulk-boundary correspondence in these systems allows the edges of the 2D crystal to be gapped, as long as the edges are 1D topological insulators themselves. We will show the experimental realization of one such structure in a photonic system. We also describe topological pumping processes associated with these insulators that are higher-dimensional counterparts of the Thouless charge pump. When these pumping processes are extended into 3D via reverse dimensional reduction procedures, the systems break time-reversal symmetry and give rise to chiral, hinge-localized modes. (TCPL 201)

16:30 - 17:00 Yidong Chong: Effects of Nonlinearity and Disorder in Topological Photonics

In the first part of the talk, I discuss how optical nonlinearity alters the behavior of photonic topological insulators.
In the nonlinear regime, band structures and their associated topological invariants cannot be calculated. Nonetheless, nonlinear photonic lattices can support moving edge solitons that "inherit" many properties of linear topological edge states: they are strongly self-localized, and propagate unidirectionally along the lattice edge. These solitons can be realized in a variety of model systems, including (i) an abstract nonlinear Haldane model, (ii) a Floquet lattice of coupled helical waveguides, and (iii) a lattice of coupled-ring waveguides. Topological solitons can be "self-induced", meaning that they locally drive the lattice from a topologically trivial to a nontrivial phase, similar to how an ordinary soliton locally induces its own confining potential. This behavior can be used to design nonlinear photonic structures with power thresholds and discontinuities in their transmittance; such structures, in turn, may provide a novel route to devising nonlinear optical isolators.

In the second part of the talk, I discuss amorphous analogues of a two-dimensional photonic Chern insulator. These lattices consist of gyromagnetic rods that break time-reversal symmetry, arranged using a close-packing algorithm in which the level of short-range order can be freely adjusted. Simulation results reveal strongly-enhanced nonreciprocal edge transmission, consistent with the behavior of topological edge states. Interestingly, this phenomenon persists even into the regime where the disorder is sufficiently strong that there is no discernible spectral gap. (TCPL 201)

17:00 - 17:30 Philippe St-Jean: Lasing in topological edge states of a 1D lattice

Recently, the exploration of topological physics in photonic structures has triggered considerable efforts to engineer optical devices that are robust against external perturbation and fabrication defects [1].
However, due to the difficulty of implementing topological lattices in media exhibiting optical gain and/or nonlinearities, these realizations have mostly been limited so far to passive devices. Hence, cavity polaritons formed from the strong coupling between quantum well excitons and cavity photons are particularly appealing: their photonic part allows for engineering topological properties in lattices of coupled resonators [2,3], while their excitonic part gives rise to Kerr-like nonlinearities and to lasing through stimulated relaxation [4]. In this work [5], we demonstrate lasing in the topological edge states of a 1D lattice. This lattice emulates an orbital version of the Su-Schrieffer-Heeger (SSH) Hamiltonian by coupling the first excited states (l=1) of polariton micropillars arranged in a zigzag chain (Fig. 1 shows a SEM image of the lattice and a schematic representation of a micropillar, and Fig. 2 shows a real-space image of the emission from the orbital bands where we can observe the spatial distribution of the topological mode). Then, taking advantage of the nonlinear properties of polaritons, we evaluate the robustness of this lasing action by optically shifting the on-site energy of the edge pillar, thus breaking the chiral symmetry of the lattice. Under this perturbation, we observe that the localization of the topological mode is not significantly affected, leading to an immunity of the lasing threshold. The most promising perspective of this work is to extend these results to 2D lattices where we envision, in systems with broken time-reversal symmetry, topological lasers in 1D chiral edge states allowing backscattering-immune transport of coherent light.

References
[1] L. Lu, J. Joannopoulos, and M. Soljacic. Nat. Photon. 8, 821 (2014)
[2] M. Milicevic et al. Phys. Rev. Lett. 118, 107403 (2017)
[3] F. Baboux et al. Phys. Rev. B 95, 161114(R) (2017)
[4] I. Carusotto and C. Ciuti. Rev. Mod. Phys. 85, 299 (2013)
[5] P. St-Jean et al.
arXiv: 1704.07310 (accepted for publication in Nat. Photon.) (TCPL 201)

17:30 - 19:30 Dinner: A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room)

Tuesday, September 12

07:00 - 09:00 Breakfast (Vistas Dining Room)

09:00 - 09:30 Charles Fefferman: Schroedinger Operators with Honeycomb Potentials

The talk presents rigorous theorems for Schroedinger operators with the symmetries of the honeycomb. The results deal with Dirac points and low-lying bands in the weak binding and strong binding regimes, and in intermediate regimes. Joint work with Michael Weinstein and James Lee-Thorp. (TCPL 201)

09:30 - 10:00 Michael Weinstein: Edge states in honeycomb structures

Abstract: We present rigorous results on protected edge states for continuous Schroedinger operators with honeycomb potentials. We consider edges which arise (a) via interpolation, by a domain wall, between two distinct periodic potentials, and (b) at the sharp interface between a honeycomb structure and a homogeneous medium. Joint work with Charles Fefferman and James Lee-Thorp. (TCPL 201)

10:00 - 10:30 Coffee Break (TCPL Foyer)

10:30 - 11:00 Mark Ablowitz: Tight-binding models for longitudinally driven linear/nonlinear photonic lattices

A systematic method to find tight-binding approximations in longitudinally driven linear/nonlinear photonic lattices is developed; prototypes include honeycomb and staggered square lattices. A number of periodic helically varying lattices are investigated; these include sublattices with the same rotation, phase-offset rotation, counter-rotation, and different rotation sizes and frequencies. Both topological and nontopological modes are found. Topological modes possess unidirectionality and do not scatter off lattice defects. Asymptotic descriptions are found; numerical simulations for both the linear and nonlinear edge states agree with asymptotic theory.
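Several talks in this session revolve around finite tight-binding chains, the SSH model among them. As a generic, self-contained illustration (my own sketch, not code from any of the talks), diagonalizing an open SSH chain in the topological phase exposes the pair of exponentially small-energy modes localized at the ends:

```python
import numpy as np

def ssh_hamiltonian(n_cells, t1, t2):
    """Single-particle SSH Hamiltonian on an open chain of n_cells unit cells
    (two sites per cell), with intra-cell hopping t1 and inter-cell hopping t2."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        t = t1 if i % 2 == 0 else t2  # alternate intra- and inter-cell bonds
        H[i, i + 1] = H[i + 1, i] = -t
    return H

# Topological phase: |t1| < |t2| gives two near-zero modes at the chain ends.
H = ssh_hamiltonian(40, t1=0.5, t2=1.0)
vals, vecs = np.linalg.eigh(H)
zero_modes = np.argsort(np.abs(vals))[:2]
print(vals[zero_modes])  # energies exponentially small in the chain length

# Probability weight of one such mode, concentrated near the two edges.
w = np.abs(vecs[:, zero_modes[0]])**2
print(w[:4].sum() + w[-4:].sum())
```

Setting `t1 > t2` (the trivial phase) removes the near-zero modes, which is the dimerization-dependent distinction the SSH model is usually invoked for.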
(TCPL 201)

11:00 - 11:30 Yi Zhu: Electromagnetic waves in honeycomb structures

Motivated by the novel and subtle properties of electronic waves in graphene, there has been very wide interest in the propagation of waves in two-dimensional structures having the symmetries of a hexagonal tiling of the plane, with applications to electromagnetic and other types of waves. In this talk, we analyze the photonic analogs of graphene and related topological edge states. Specifically, we study the propagation of waves governed by the two-dimensional Maxwell equations in honeycomb media. The existence of Dirac fermions and the corresponding Dirac dynamics are rigorously analyzed. The introduction, through small and slow variations, of a domain wall across a line-defect gives rise to the bifurcation from Dirac points of highly robust (topologically protected) edge states. This talk is based on joint work with Michael I. Weinstein at Columbia University and James Lee-Thorp at the Courant Institute. (TCPL 201)

11:30 - 12:00 Alexander Watson: Wave-packet dynamics in periodic media: Berry curvature induced anomalous velocity and Landau-Zener inter-band transitions

We study the dynamics of wave-packets in media with a local periodic structure which varies adiabatically (over many periods of the periodic lattice) across the medium. We focus in particular on the case where symmetries of the periodic structure lead to degeneracies in the Bloch band dispersion surface. An example of such symmetry-induced degeneracies are the ‘Dirac points’ of media with ‘honeycomb lattice’ symmetry. Our results are as follows: (1) A systematic and rigorous derivation of the ‘anomalous velocity’ of wave-packets due to the Bloch band’s Berry curvature. The Berry curvature is large near to degeneracies, where it takes the form of a monopole. We also derive terms which do not appear in the works of Niu et al.
(2) Restricting to one spatial dimension, the derivation of the precise dynamics when a wave-packet is incident on a Bloch band degeneracy. In particular we derive the probability of an inter-band transition and show that our result is consistent with an appropriately interpreted Landau-Zener formula. I will present these results for a Schrödinger model; extending our results to the full Maxwell system is the subject of ongoing work. This is joint work with Michael Weinstein and Jianfeng Lu. (TCPL 201)

12:00 - 13:30 Lunch (Vistas Dining Room)

13:30 - 14:00 Vincenzo Vitelli: Topological Active Metamaterials

Liquids composed of self-propelled particles have been experimentally realized using molecular, colloidal, or macroscopic constituents. These active liquids can flow spontaneously even in the absence of an external drive. Unlike spontaneous active flow, the propagation of density waves in confined active liquids is not well explored. Here, we exploit a mapping between density waves on top of a chiral flow and electrons in a synthetic gauge field to lay out design principles for artificial structures termed topological active metamaterials. We design metamaterials that break time-reversal symmetry using lattices composed of annular channels filled with a spontaneously flowing active liquid. Such active metamaterials support topologically protected sound modes that propagate unidirectionally, without backscattering, along either sample edges or domain walls and despite overdamped particle dynamics. Our work illustrates how parity-symmetry breaking in metamaterial structure combined with microscopic irreversibility of active matter leads to novel functionalities that cannot be achieved using only passive materials.
(TCPL 201)

14:00 - 14:30 Andrea Alu: Topological and non-reciprocal photonics and phononics

In this talk, I will review our recent progress towards the concept, design and realization of magnet-free non-reciprocal photonic, acoustic and mechanical devices, and arrays of them offering strong topological protection, aimed at realizing reconfigurable, broadband isolators, gyrators and circulators, and one-way waveguides. We will discuss our approaches to induce topological order, and to design topological photonic metasurfaces based on spatio-temporal modulation, nonlinearities, and/or opto-mechanical interactions, and discuss our vision towards new transport phenomena for light and sound, and new nanophotonic and acoustic devices with enhanced non-reciprocal properties over broad bandwidths. (TCPL 201)

14:30 - 15:00 Sebastian Huber: A phononic quantized quadrupole insulator

All existing topological band structures can be traced back to a quantized dipole moment, or a mathematical generalization thereof. Recently, it has been shown theoretically how the quadrupole moment of a charge distribution can be quantized. The associated phenomenology includes in-gap states on surfaces two or more dimensions lower than the bulk. Here, we report on the experimental observation of such a quadrupole state in a mechanical metamaterial made from weakly coupled oscillators in a silicon membrane. We characterize the topological in-gap “corner states” together with the induced gapped edge modes. (TCPL 201)

15:00 - 15:30 Coffee Break (TCPL Foyer)

15:45 - 16:15 Zheng Wang: Topologically-protected optical forces

Radiation pressure of electromagnetic fields has been widely used for non-contact nanomanipulation and tunable optics. However, for the resulting force fields, the backscattering of light has been a major constraint that results in instability and dissipation.
Here we show that the complete suppression of backscattering in photonic topological materials provides new ways of controlling optical forces: long-range optical pulling forces exist in any line defect containing multiple edge states, while long-range conservative potentials can be established with single-mode edge states. The optical force fields are entirely defined by the topological band structures and the unit cell functions. (TCPL 201)

16:15 - 16:45 William Irvine: Spinning top-ology: Order, disorder and topology in mechanical gyro-materials and fluids

Geometry, topology and broken symmetry often play a powerful role in determining the organization and properties of materials. A recent example is the discovery that the excitation spectra of materials -- be they electronic, optical, or mechanical -- may be topologically non-trivial. I will explore the use of ‘spinning tops’ to explore this physics. In particular I will discuss an experimental and theoretical study of a simple kind of active meta-material – coupled gyroscopes – that naturally encodes non-trivial topology in its vibrational spectrum. These materials have topologically protected edge modes which we observe in experiment. Crucially, the geometry of the underlying lattice controls the presence of the time-reversal symmetry that is essential to the non-trivial topology of the spectrum. We exploit this to control the chirality of the edge modes by simply deforming the lattice. Moving beyond ordered lattices, we show that amorphous gyroscopic networks are naturally topological. We construct them from arbitrary point sets -- including hyperuniform, jammed, quasi-crystalline and uniformly random -- and control their topology through simple, local decorations.
(TCPL 201)

17:30 - 19:30 Dinner (Vistas Dining Room)

Wednesday, September 13

07:00 - 09:00 Breakfast (Vistas Dining Room)

09:00 - 09:30 Terry Loring: Local indices and quantified topological protection

We will discuss local formulas for K-theory that can be defined on models with irregular boundaries and lattice defects. This form of K-theory can work with models whose sites in real space are random or quasicrystalline. Simple formulas for these finite-volume invariants allow for fast numerics and quantitative statements about the robustness of certain states against disorder. (TCPL 201)

09:30 - 10:00 Emil Prodan: The K-theoretic Bulk-Boundary Principle for Patterned Resonators

Resonators couple to each other when put in contact, leading to collective resonant modes. An interesting problem is to understand and exploit these collective modes when the resonators form different patterns in space. In this talk I will first present a kaleidoscope of numerical examples where patterned resonators display spectral properties akin to the 2- and higher-dimensional Integer Quantum Hall Effect. In the second part, I will demonstrate how K-theory can be used to understand and predict the bulk and the edge spectrum of such systems. In particular, a simple K-theoretic version of the bulk-boundary principle will be presented which enables one to see when topological edge spectrum is to be expected. This last part will again be supported with a kaleidoscope of numerical examples. (TCPL 201)

10:00 - 10:30 Coffee Break (TCPL Foyer)

10:30 - 11:00 Max Lein: Symmetry Classification of Topological Photonic Crystals

In 2005 Haldane conjectured that topological phenomena were not quantum but wave effects. He proposed [RH08] an electromagnetic analog of the Quantum Hall Effect, something that was confirmed in a number of spectacular experiments [Wan+08; Rec+13] a few years later.
These and other, more recent works have naturally raised two questions: (1) How similar is the Quantum Hall Effect for light to the one from solid state physics? And (2) are there other, as-of-yet unknown topological effects in electromagnetic media? The crucial ingredients are symmetries, and when designing topological electromagnetic media, there are two axes to explore: One can choose the materials from which to build the photonic crystal (material symmetries) and then decide how to periodically arrange these materials (crystallographic symmetries). For material symmetries we answer both of these questions conclusively by first reformulating Maxwell's equations in Schrödinger form [DL17], and then adapting the Cartan-Altland-Zirnbauer classification scheme for topological insulators [DL14; DL16]. With regards to question (1), gyrotropic media are in the same symmetry class (class A) as solids exhibiting the Quantum Hall Effect. This is a first step to proving photonic bulk-edge correspondences that would make Haldane's conjecture precise: In a two-dimensional topological photonic crystal the Chern number quantifies the net number of edge modes traveling from left to right. Question (2) has a negative answer: in dimension d ≤ 3 there are no as-of-yet undiscovered topological effects due to material symmetries. In particular, despite some claims to the contrary, there is no electromagnetic analog of the Quantum Spin Hall Effect, as that requires the presence of an odd time-reversal symmetry (a realization of class AII).

Acknowledgements: M. L. thanks JSPS for support of his research with a WAKATE B grant. G. D.'s research is supported by the grant Iniciación en Investigación 2015 - No 11150143 funded by FONDECYT.

(TCPL 201)

11:00 - 11:30 Eli Levy: Probing Topological Properties of Quasicrystals with Waves

Among the large variety of complex non-periodic structures, quasicrystals and quasiperiodic distributions play a special role.
These structures have some of their physical properties (e.g. dielectric constant, potential, etc.) modulated according to a deterministic non-periodic pattern, such as a set of substitution rules or a cut & project construction. Such architectures have long been recognized to yield a pronounced long-range order manifesting as an infinite set of crystallographic Bragg peaks and a highly lacunar singular-continuous energy spectrum, with an infinite set of gaps arranged in a multifractal hierarchy. The possibility that such structures also possess distinct topological features has been discussed in both the mathematics and physics literature, including some descriptions of the spectrum through topological invariants. These topological numbers, emerging from the structural building rules, are known to label the dense set of spectral gaps. We present the topological properties of finite quasiperiodic chains studied using the scattering and also the diffraction of waves. We show that the topological invariants may be measured from the winding of a chiral scattering phase as a function of a phason structural degree of freedom. Using a Fabry-Perot point of view, this chiral phase is also shown to drive the spectral traverse of conveniently emulated edge states. Furthermore, we present a method to obtain all available topological numbers from the diffraction pattern of a quasicrystal, a method which may be termed topological quasicrystallography. Existing experimental realizations will be addressed, as well as possible generalizations. (TCPL 201)

11:30 - 12:00 Justin Cole (TCPL 201)

12:00 - 13:30 Lunch (Vistas Dining Room)

13:30 - 14:00 Gaurav Bahl: Chirality and non-reciprocity in optomechanical resonator systems

Time-reversal symmetry is a property shared by wave phenomena in linear stationary media.
However, broken time-reversal symmetry is required for synthesizing nonreciprocal devices like isolators, circulators, gyrators, and for topological systems supporting chiral states. Magnetic fields can of course enable nonreciprocal behavior for electromagnetic waves, but this method does not conveniently translate to the chip scale or to the acoustic domain, compelling us to search for nonmagnetic solutions. We have adopted a unique approach to address this challenge through the use of co-localized interacting modes of light and sound in resonator systems. The acousto-optical physics within these systems enables fundamental experiments having analogies to condensed-matter phenomena, including phonon laser action [1], cooling [2, 3], and electromagnetically induced transparency [4]. This talk will describe our experimental efforts to exploit the momentum conservation rules intrinsic to light-sound interactions for producing strong nonreciprocal behavior, using both optical and acoustic pumping. We have demonstrated that such 'nonreciprocal atoms' can be used to produce complete optical isolation with ultra-low loss over a very compact footprint [5]. Our results also reveal that chiral effects are pervasive throughout the phononic and photonic physical layers of these systems, for instance, showing that chirality can be dynamically imparted to phonon transport to suppress disorder-induced backscattering [6]. This talk will also describe how intuitions drawn from our optomechanical experiments can be used to design practical microwave and acoustic systems with reconfigurable topology and nonreciprocal responses.

References
1. G. Bahl, J. Zehnpfennig, M. Tomes, T. Carmon, "Stimulated optomechanical excitation of surface acoustic waves in a microdevice," Nature Communications, 2:403, 2011.
2. G. Bahl, M. Tomes, F. Marquardt, T. Carmon, "Observation of spontaneous Brillouin cooling," Nature Physics, Vol. 8, No. 3, pp. 203-207, 2012.
3. S. Kim, G. Bahl, "Role of optical density of states in two-mode optomechanical cooling," Optics Express 25(2), pp. 776-784, 2017.
4. J. Kim, M. Kuzyk, K. Han, H. Wang, G. Bahl, "Non-reciprocal Brillouin scattering induced transparency," Nature Physics, 11, pp. 275-280, 2015.
5. J. Kim, S. Kim, G. Bahl, "Complete linear optical isolation at the microscale with ultralow loss," Scientific Reports, 7:1647, 2017.
6. S. Kim, X. Xu, J.M. Taylor, G. Bahl, "Dynamically induced robust phonon transport and chiral cooling in an optomechanical system," Nature Communications 8, 205, 2017.

(TCPL 201)

14:00 - 14:30 Florian Marquardt: Engineering topological transport of phonons at the nanoscale

In this talk I will describe our recent ideas on how to engineer nanostructures that generate topological transport of vibrations. These include situations with explicit time-reversal symmetry breaking (via an optical field with optical vorticity) as well as with time-reversal symmetry intact. In the latter case, I will show how the snowflake phononic crystal, first invented for the purposes of optomechanics, provides an ideal platform in which to implement both pseudomagnetic fields for vibrations as well as a topological insulator.

- Pseudomagnetic fields for sound at the nanoscale. Christian Brendel, Vittorio Peano, Oskar Painter, and Florian Marquardt, Proceedings of the National Academy of Sciences (PNAS) 114, E3390–E3395 (2017)
- Snowflake Topological Insulator for Sound Waves. Christian Brendel, Vittorio Peano, Oskar Painter, and Florian Marquardt, arXiv:1701.06330 (2017)
- Topological Phases of Sound and Light. Vittorio Peano, Christian Brendel, Michael Schmidt, and Florian Marquardt, Phys. Rev. X 5, 031011 (2015)

(TCPL 201)

14:30 - 15:00 Alexander Khanikaev: All-Dielectric Photonic Topological Metamaterials and Metasurfaces (TCPL 201)

15:00 - 15:30 Coffee break (TCPL 201)

15:30 - 16:00 Steven Anlage: Exciting Reflectionless Unidirectional Edge Modes in a Reciprocal Photonic Topological Insulator Medium

Abstract: Photonic topological insulators are an interesting class of materials whose photonic band structure can have a band gap in the bulk while supporting topologically protected unidirectional edge modes. Recent studies on bianisotropic metamaterials that emulate the electronic quantum spin Hall effect using its electromagnetic analog are examples of such systems with a relatively simple and elegant design. In this presentation, we present a rotating magnetic dipole antenna, composed of two perpendicularly oriented coils, that can efficiently excite the unidirectional topologically protected surface waves in the bianisotropic metawaveguide (BMW) structure recently realized by T. Ma et al. [Phys. Rev. Lett. 114, 127401 (2015)], despite the fact that the BMW medium does not break time-reversal invariance. In addition to achieving a high directivity, the antenna can be tuned continuously to excite reflectionless edge modes in the two opposite directions at various amplitude ratios. We demonstrate its performance through experiments and compare to simulation results. For details, see Phys. Rev. B 94, 195427 (2016).

Acknowledgements: This work was supported by the ONR under Grant No. N000141512134, AFOSR COE Grant FA9550-15-1-0171, and the National Science Foundation under Grant Nos. NSF PHY-1415547, AFOSR FA9550-15-1-0075, ARO W911NF-16-1-0319, and NSF ECCS-1158644.

(TCPL 201)

16:00 - 16:30 Ling Lu: After a Weyl

Weyl points are the key to non-trivial topological phenomena in 3D. I will give several examples: 1) Two Weyl points of the same first Chern number form double-Weyl points. Examples of double-Weyl phonons will be shown in crystalline solids.
2) Two Weyl points of opposite first Chern number form 3D Dirac points. A glide-symmetry-protected gapped phase with a single surface Dirac cone can be obtained by gapping 3D Dirac points; it has a Z2 invariant. 3) A Chern crystal with a full 3D gap can be obtained by gapping a Weyl pair; it is characterized by three first Chern numbers. 4) Coupling two Weyl points with helical modulations provides one-way fibers of second Chern number in 4D parameter space. (TCPL 201)

16:30 - 17:00 Zubin Jacob: Dirac–Maxwell correspondence: Spin–1 bosonic topological insulator

arXiv:1708.08192

Fundamental differences between fermions and bosons are revealed in their spin and distribution statistics as well as the discrete symmetries they obey (charge, parity and time). While significant progress has been made on fermionic topological phases with time-reversal symmetry, the bosonic counterpart still remains elusive. We present here a spin-1 bosonic topological insulator for light by utilizing a Dirac-Maxwell correspondence. Marking a departure from existing structural photonic approaches which mimic the pseudo-spin-1/2 behavior of electrons, we exploit the integer spin and discrete symmetries of the photon to predict the existence of a distinct bosonic topological phase in continuous media. We introduce the bosonic equivalent of Kramers' theorem and topological quantum numbers for light, as well as the concept of photonic Dirac monopoles, Dirac strings and skyrmions, to underscore the correspondence between Maxwell's and Dirac's equations. We predict that a unique magneto-electric medium with anomalous parity and time-reversal symmetries, if found in nature, will exhibit a gapped quantum spin-1 Hall bosonic phase. Photons do not possess a conductivity transport parameter which can be quantized (unlike topological electronic systems), but we predict that the helical quantization of symmetry-protected edge states in bosonic topological insulators is amenable to experimental isolation.
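Several of the abstracts above characterize phases by first Chern numbers. As a side illustration, such an invariant can be evaluated numerically for any gapped two-band Bloch Hamiltonian. The sketch below uses the gauge-invariant lattice field-strength method of Fukui, Hatsugai and Suzuki, with the Qi-Wu-Zhang model as a stand-in example; the model choice, grid size and function names are illustrative assumptions, not taken from any of the talks.

```python
import numpy as np

def chern_number(h_of_k, nk=40):
    """First Chern number of the lower band of a two-band Bloch Hamiltonian
    h_of_k(kx, ky), via the lattice field-strength method (Fukui-Hatsugai-Suzuki)."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    # Lower-band eigenvector at every point of the discretized Brillouin zone.
    u = np.empty((nk, nk, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(h_of_k(kx, ky))  # eigh sorts ascending
            u[i, j] = vecs[:, 0]
    # Sum the Berry phase of every plaquette, built from gauge-invariant link loops.
    total = 0.0
    for i in range(nk):
        for j in range(nk):
            ip, jp = (i + 1) % nk, (j + 1) % nk
            loop = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            total += np.angle(loop)
    return round(total / (2.0 * np.pi))

def qwz(kx, ky, m=1.0):
    """Qi-Wu-Zhang two-band model; the lower band carries |C| = 1 for 0 < |m| < 2."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

print(abs(chern_number(qwz)))                         # 1 (topological phase, m = 1)
print(chern_number(lambda kx, ky: qwz(kx, ky, 3.0)))  # 0 (trivial phase, m = 3)
```

Because each link phase appears once in each orientation, the plaquette phases sum to an exact multiple of 2π, so the result is an integer up to floating-point noise regardless of the arbitrary gauge returned by `eigh`.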
(TCPL 201) 17:30 - 19:30 Dinner (Vistas Dining Room)

Thursday, September 14

07:00 - 09:00 Breakfast (Vistas Dining Room)

09:00 - 09:30 Nate Lindner: Controlling electrons in Floquet Topological Insulators

I will discuss the open-system dynamics and steady states of two-dimensional Floquet topological insulators: systems in which a topological Floquet-Bloch spectrum is induced by an external periodic drive. I will present a solution for the bulk and edge-state carrier distributions, which takes into account energy and momentum relaxation through radiative recombination and electron-phonon interactions, as well as coupling to an external fermionic reservoir. The resulting steady state resembles a topological insulator in the Floquet basis. The particle distribution in the Floquet edge modes exhibits a sharp feature akin to the Fermi level in equilibrium systems, while the bulk hosts a small density of excitations. Using these distributions, I will analyze the regimes where edge-state transport can be observed. These results show that signatures of the non-trivial topology persist in the non-equilibrium steady state. (TCPL 201)

09:30 - 10:00 Gil Refael: Topological frequency conversion in strongly driven quantum systems

When a small quantum system is subject to multiple periodic drives, it may realize multidimensional topological phases. In my talk, I will explain how to make such constructions, and show how a spin-1/2 particle driven by two elliptically-polarized light beams could realize the Bernevig-Hughes-Zhang model of 2D topological insulators. The observable consequence of such a construction is quantized pumping of energy between the two drive sources.
(TCPL 201) 10:00 - 10:30 Coffee Break (TCPL Foyer)

10:30 - 11:00 Hrvoje Buljan: Engineering synthetic gauge fields, Weyl semimetals, and anyons

I will present two topics of research in our group related to synthetic topological quantum matter [1]: (i) topological phases in 3D optical lattices, more specifically a proposal for experimental realization of Weyl semimetals in ultracold atomic gases [2], and (ii) anyons [3,4]. I will present one possible route to engineer anyons in a 2D electron gas in a strong magnetic field sandwiched between materials with high magnetic permeability, which induce electron-electron vector interactions, to engineer charged flux-tube composites [3]. I will also discuss intriguing concepts related to extracting observables from anyonic wavefunctions [4]: one can show that the momentum distribution is not a proper observable for a system of anyons [4], even though this observable was crucial for the experimental demonstration of Bose-Einstein condensation or ultracold fermions.

[1] N. Goldman, G. Juzeliunas, P. Ohberg, I. B. Spielman, Rep. Prog. Phys. 77, 126401 (2014).
[2] Tena Dubček, Colin J. Kennedy, Ling Lu, Wolfgang Ketterle, Marin Soljačić, Hrvoje Buljan, Weyl points in three-dimensional optical lattices: Synthetic magnetic monopoles in momentum space, Phys. Rev. Lett. 114, 225301 (2015).
[3] M. Todorić, D. Jukić, D. Radić, M. Soljačić, and H. Buljan, The Quantum Hall Effect with Wilczek's charged magnetic flux tubes instead of electrons, in preparation.
[4] Tena Dubček, Bruno Klajn, Robert Pezer, Hrvoje Buljan, Dario Jukić, Quasimomentum distribution and expansion of an anyonic gas, arXiv:1707.04712.

(TCPL 201)

11:00 - 11:30 Xiao Hu: Topological Phenomena Emerging from Honeycomb Structure

The honeycomb lattice has played an important role in fostering topology physics, as known from the Haldane model and the Kane-Mele model [1].
Recently, we propose a way to achieve all-dielectric topological photonics starting from a honeycomb structure. We identify a pseudospin degree of freedom in electromagnetic (EM) modes hosted by the honeycomb lattice, which can be exploited for establishing topological EM states with time-reversal symmetry [2]. We demonstrate theoretically the nontrivial topology by showing photonic band inversions and counter-propagating edge EM waves. I will show recent experimental microwave results which confirm our theory [3]. The idea can also be applied to electronic systems [4]. In terms of the tight-binding model on the honeycomb lattice with detuned nearest-neighbor hopping, we find that the topological state is characterized by mirror winding numbers, and the absence of the so-called minigap in the edge states can be shown analytically [5]. Recent progress and perspectives of the present approach will be discussed.

References:
[1] H.-M. Weng, R. Yu, X. Hu, X. Dai and Z. Fang, Adv. Phys. vol. 64, 227 (2015).
[2] L.-H. Wu and X. Hu: Phys. Rev. Lett. vol. 114, 223901 (2015).
[3] Y.-T. Yang, J.-H. Jiang, X. Hu and Z.-H. Hang: arXiv:1610.07780.
[4] L.-H. Wu and X. Hu: Sci. Rep. vol. 6, 24347 (2016).
[5] T. Kariyado and X. Hu: arXiv:1607.08706.

(TCPL 201)

11:30 - 12:00 Fabrice MORTESSAGNE: Dirac matter and topology with microwaves

The group "Waves in Complex Systems" in Nice (France) is interested in controlling the wave transport properties of various systems, whose mastered designs range from homogeneous systems with complex geometries to either periodic or disordered structured materials. Thanks to versatile experimental microwave platforms, we have recently developed analog approaches to topological effects in condensed matter, more specifically in 1D or 2D periodic or quasiperiodic (meta-)materials.
I will give a review of some of our results, ranging from the observation of a topological phase transition in strained artificial graphene to an intuitive physical interpretation of the gap-labelling in a Penrose tiling. (TCPL 201)

12:00 - 13:30 Lunch (Vistas Dining Room)

13:30 - 14:00 Patrick Ohberg: Driven lattices and non-local effects in photonic lattices

In this talk we will discuss some recent experiments with photonic lattices done at the Institute of Photonics and Quantum Sciences in Edinburgh, UK. In particular, the work with slowly driven lattices, where non-trivial topological phenomena can be observed, will be presented. We will also discuss some recent, perhaps rather speculative, theoretical ideas on how to create long-range interactions based on non-local photon fluids. (TCPL 201)

14:00 - 14:30 Alexander Cerjan: Exceptional contours formed in non-Hermitian topological photonic systems (TCPL 201)

14:30 - 15:00 Mohammad Hafezi: Quantum transport in topological photonics (TCPL 201)

15:30 - 16:00 Bo Zhen (TCPL 201)

17:30 - 19:30 Dinner (Vistas Dining Room)

Friday, September 15

07:00 - 09:00 Breakfast (Vistas Dining Room)

09:00 - 09:30 No talks this morning... (TCPL 201)

10:00 - 10:30 Coffee Break (TCPL Foyer)

11:30 - 12:00 Checkout by Noon. 5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to check out of the guest rooms by 12 noon. (Front Desk - Professional Development Centre)

12:00 - 13:30 Lunch from 11:30 to 13:30 (Vistas Dining Room)
https://www.gamedev.net/blogs/entry/207100-nuttin-to-do/
# nuttin to do

MAN there was nothing to do today so I went over to my friends house and got my butt kicked in basketball. When I got home a few hours later I decided to stay up all night [smile] now I'm working on this:

```cpp
// hangman
// (the six header names were swallowed by the page's HTML; these are
//  plausible guesses for a console hangman, not the author's originals)
#include <iostream>
#include <string>
#include <cstdlib>
#include <ctime>
#include <vector>
#include <algorithm>
using namespace std;

int main()
{
```

thats all I have right now.

Wow! You certainly got a lot done! Working on a Pacman clone, I can see... [grin]

You forgot #include<D3D8.dll>

no its hangman the computer im using can't handle directx :( ,but I might be getting a new one soon [smile].
https://wiki.sustainabletechnologies.ca/wiki/Special:MobileDiff/10382
# Changes, 2 years ago (m, no edit summary)

Line 1:
− [[Swale_sections.PNG|frame]]
+ [[Swale_sections.PNG|border]]

Flow (''Q'') in an open channel, such as a [[swale]], may be calculated using Manning's equation:

$Q=VA=\frac{AR^{\frac{2}{3}}S^{\frac{1}{2}}}{n}$

where ''A'' is the cross-sectional flow area, ''R'' is the hydraulic radius, ''S'' is the channel slope, and ''n'' is Manning's roughness coefficient.
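Manning's equation is straightforward to evaluate numerically. Below is a minimal sketch in SI units; the function name and the sample values (a grassed swale with A = 1.0 m², R = 0.5 m, S = 0.01 and n = 0.03) are illustrative assumptions, not values from this page.

```python
def manning_flow(area, hydraulic_radius, slope, n):
    """Open-channel flow by Manning's equation (SI units):
    Q = V * A, with V = R**(2/3) * S**(1/2) / n."""
    velocity = hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / n
    return velocity * area

# Assumed example: grassed swale, A = 1.0 m^2, R = 0.5 m, 1% slope, n = 0.03
q = manning_flow(1.0, 0.5, 0.01, 0.03)
print(round(q, 2))  # 2.1 (m^3/s)
```

Note that the hydraulic radius is R = A / P, the flow area divided by the wetted perimeter, so for a given cross-section only the geometry, slope and roughness are needed.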
https://codeforces.com/blog/entry/75908
### vovuh's blog

By vovuh, history, 14 months ago,

<almost-copy-pasted-part>

Hello!

Codeforces Round #634 (Div. 3) will start at Apr/13/2020 17:35 (Moscow time). You will be offered 6 or 7 problems (or 8) with expected difficulties to compose an interesting competition for participants with ratings up to 1600. However, all of you who wish to take part and have a rating of 1600 or higher can register for the round unofficially.

The round will be hosted by the rules of educational rounds (extended ACM-ICPC). Thus, during the round, solutions will be judged on preliminary tests, and after the round there will be a 12-hour phase of open hacks. I tried to make strong tests, since you would be upset if many solutions failed after the contest is over.

You will be given 6 or 7 (or 8) problems and 2 hours to solve them. Note that the penalty for a wrong submission in this round (and the following Div. 3 rounds) is 10 minutes.

Remember that only trusted participants of the third division will be included in the official standings table. As written at the link, this is a compulsory measure for combating unsporting behavior. To qualify as a trusted participant of the third division, you must:

• take part in at least two rated rounds (and solve at least one problem in each of them),
• not have reached a rating of 1900 or higher.

Regardless of whether you are a trusted participant of the third division or not, if your rating is less than 1600, then the round will be rated for you.

Thanks to MikeMirzayanov for the platform, help with ideas for problems and for coordination of my work. Thanks to my good friends Daria nooinenoojno Stepanova, Mikhail awoo Piklyaev, Maksim Neon Mescheryakov and Ivan BledDest Androsov for help in round preparation and testing the round. Also thanks to Artem Rox Plotkin and Dmitrii _overrated_ Umnov for the discussion of ideas and testing the round!

Good luck!
</almost-copy-pasted-part> UPD: Also thanks to ma_da_fa_ka and infinitepro for testing the round! UPD2: Editorial is published! • +358 » 14 months ago, # |   +45 vovuh logic » 14 months ago, # | ← Rev. 2 →   +12 vovuh hello , I guess my rating will down to less than 1600,but I've registered before,will I be rated in this case? • » » 14 months ago, # ^ |   0 May b nope... • » » 14 months ago, # ^ |   0 I think the you won't be rated in this situation. • » » 14 months ago, # ^ |   +17 I think after rating is updated, you can deregister and register back • » » 14 months ago, # ^ |   +4 I guess you are lucky (?) :D • » » » 14 months ago, # ^ |   0 Maybe » 14 months ago, # |   +11 Wow wow, another Div.3 but this time I will not get rate :D » 14 months ago, # |   +7 I good opportunity to get my rating back from today's Div2 lol » 14 months ago, # |   +9 What does "open hacking" mean, sorry I am new? How can you hack a solution? • » » 14 months ago, # ^ |   +1 After each contest, there is a time of 12 hrs where if we find someone's solution incorrect, then we can give input the case at which the submission fails, if hacked you are rewarded with some points sometimes. • » » » 14 months ago, # ^ |   0 Not after each contest LOL • » » » 14 months ago, # ^ |   +2 is there a way to sort solutions by programming languages? Now, I have to go through all the submissions to find Java submissions. • » » » » 14 months ago, # ^ | ← Rev. 3 →   0 On the right hand side of status page,you can find language option under status filter and select language Java. 
• » » » » 14 months ago, # ^ |   0 status > status filter > languages • » » 14 months ago, # ^ |   +11 1)After this contest (the same for following div.3 rounds and educational rounds), there's a period of 12-hour hacking phase.That means, you can hack anyone you want(while in normal rounds you can only hack your "roommate" during the contest).2)If you find something wrong in other's solutions(but passed the pretests), you can give an input which will make the solution fail (wa,tl,...)In div.2 you get 100 points from each successful hack and lost 50 points from each unsuccessful oneIn div.3 there's no prize or penalty for hacks. • » » » 14 months ago, # ^ |   0 Simply That Problem Will be Marked as Unsolved • » » » 14 months ago, # ^ |   0 What's "roommate"? Sorry, newbie. • » » » » 14 months ago, # ^ |   0 In a normal round, you are assigned to a specific room, with ~50 people if I'm not mistaken. During the round, you can attempt to hack their solutions, but only if you lock your own solution first (this is because you will be seeing others' codes, it wouldn't be fair if you could change your own code).A successful hacking attempt will give you a 100pt boost, while an unsuccessful hacking attempt will take away 50pts from you. • » » » » » 14 months ago, # ^ |   0 and what does locking the solution mean? If hacking starts after the round then how can one change his code? » 14 months ago, # | ← Rev. 4 →   -25 Hope it won't go too mathematical ;/ questions about string is more interesting;) • » » 14 months ago, # ^ |   +6 50iq is not bad actually idk » 14 months ago, # |   0 **BACK TO BACK CONTEST ** » 14 months ago, # |   +77 • » » 14 months ago, # ^ |   +79 But comparison of other websites code forces server is good • » » » 14 months ago, # ^ |   0 It's not just good it is pretty awesome when compared to others. Some of them had to cancel their contest just because the servers weren't able to handle the load and the submission queue was getting overloaded. 
• » » 14 months ago, # ^ |   +63 Did you ever participate in codechef cook off or luch time...they can't handle even 5000 participants properly...whereas codeforces handles nearly 20K participants...comparitively codeforces servers >>>>> any other cp server • » » » 14 months ago, # ^ |   0 Can't you guys take it as a meme... • » » » » 14 months ago, # ^ |   0 Exactly!!! • » » » » » 14 months ago, # ^ |   0 These guys become really too serious. • » » » » » » 14 months ago, # ^ |   0 Their serious and sensible reply let me know that CF's servers are not bad, very good instead. Next time I'll be more patient. • » » 14 months ago, # ^ |   +25 The girl is the middle would rather be saying that "Give the lengthy problem A and engage participants." • » » 14 months ago, # ^ |   0 Did you ever participate in atcoder contests It servers cannot hold even 5000 participants. Codeforces servers when it is +5000 submissions at time it will get some time but when it is less it works good! • » » » 14 months ago, # ^ |   +1 it was just a meme bro... I know CF is really good.. I apologize for posting this...fine? » 14 months ago, # | ← Rev. 2 →   0 After every contest i felt disappointed.But believes that practice makes a man perfect. Best wishes for all. <3 » 14 months ago, # |   +3 Hoping to see "Yet Another Problems". » 14 months ago, # |   +5 finally a div 3 contest :( » 14 months ago, # |   0 Can "Hacking A Soluton" also be included in Div 3 ?? Just an idea. • » » 14 months ago, # ^ |   +3 When div3 was first proposed then it was said that a beginner should waste time in hacking instead of solving problem. So contest time hacking is not available in div3 rounds. And there is 12 hours hacking phase for improving hacking skill. But i think 12 hour is too long. It can be reduced. • » » » 14 months ago, # ^ |   0 Is Hacking Possible in Virtual Contest?? 
• » » » » 14 months ago, # ^ |   0 yes if contest is still in progress hacking • » » » 14 months ago, # ^ |   0 I think 12 hour just for global, maybe someone finish the problems then have to go to bed.When they weak up, they can hack. • » » » » 14 months ago, # ^ |   0 Actually after 6 to 7 hours nothing for hacking. Almost all the hackable solution are already hacked in this time. » 14 months ago, # |   +4 Thank you, as always I am excited for another div.3 :))) » 14 months ago, # |   0 I should try to increase my rating in this contest.... » 14 months ago, # |   -16 Hello, is this rated? • » » 14 months ago, # ^ |   0 rated for div 3(below 1600 rating) » 14 months ago, # |   +4 I'll try to skew my propagating wave at least this time. Pray for strong pre-tests » 14 months ago, # | ← Rev. 2 →   +143 • » » 14 months ago, # ^ |   0 lol » 14 months ago, # |   0 Problem set in recent codeforces contests seems tough for me. Looking to get some easier problem this time. » 14 months ago, # |   0 Div 3 means higher participation. I hope the testing is fast! » 14 months ago, # |   +8 hopefully some short problem sentence » 14 months ago, # |   0 Looking forward for the contest .✌️ » 14 months ago, # |   +24 Why setter is yelling at tester? • » » 14 months ago, # ^ |   +41 Because of my invaluable suggestions :) • » » » 14 months ago, # ^ |   0 reveal unto us, are the problems sexy? • » » » 14 months ago, # ^ | ← Rev. 2 →   +16 I think you should explain about your suspicious behaviour in the past contest. Because it looks like you've cheated before.your suspicious behaviourApologize in advance if I mistakenly understand it. • » » » » 14 months ago, # ^ | ← Rev. 3 →   +5 I am not being rude, but I don't think so I need to explain you anything :) • » » » » » 14 months ago, # ^ |   +11 The thing is, your code is totally same with another code in the past contest.I don't want to be an annoying guy who repeat one thing everywhere. I just want to know what happen. 
If it isn't a cheat, I will sincerely apologize. If you've already been punished for that, I won't mention it anymore. I will keep annoying you if and only if you really cheated and haven't been punished so far.
• » » » » » » 14 months ago, # ^ |   0 Well, you can see my rank in this contest :)
• » » » » » » » 14 months ago, # ^ |   0 ok, got it.
» 14 months ago, # |   -9 Hoping for stronger pretests in A. Got hacked (unfortunately) in the educational rounds, and it had been a while since I had solved 3 T_T and I missed it thanks to the A.
• » » 14 months ago, # ^ |   0 You should be more careful with some corner cases. That's more important and valuable than strong pretests.
» 14 months ago, # |   0 I hope it will be like this for the second time!!
• » » 14 months ago, # ^ |   0 good luck and high rating bro
• » » » 14 months ago, # ^ |   0 thanks bro
• » » 14 months ago, # ^ |   +1 00:01 nice network
• » » » 14 months ago, # ^ |   0 Yes, it worked well this time (^_^)
• » » 14 months ago, # ^ |   0 You just took a risk there based on some basic observation; I don't think that on minute 2 you were able to prove your solution. So don't be too happy about that.
» 14 months ago, # |   0 I hope to see a Math problem~~
» 14 months ago, # |   +2 you make me a green i dare you
» 14 months ago, # |   0 good luck
» 14 months ago, # |   +3 Hope the system test won’t be too long.
» 14 months ago, # |   0 It's good to see vovuh back, but I expect that codeforces will do something in order to handle such a large number of submissions during the contests, and I guess this time it won't be queueforces :)
» 14 months ago, # |   +1 I hope that the gap between the div.3 rounds will be decreased, it's the best kind of round
» 14 months ago, # |   +2 Hope my rating will go up. I'm so vegetable......
» 14 months ago, # |   0 Hope the statements will be shorter -_-
• » » 14 months ago, # ^ |   +16
» 14 months ago, # |   +10 Thanks vovuh again for bringing such contests in this pandemic situation. We are highly thankful to codeforces and its community for holding contests in this period to keep us busy and involved. Thank you codeforces community!
» 14 months ago, # |   +9
• » » 14 months ago, # ^ |   +1 The color spectrum doesn't quite match the Codeforces bands XD
» 14 months ago, # |   +6 I am becoming Expert Blue after this round.
• » » 14 months ago, # ^ |   0 I don't think so
» 14 months ago, # |   0 Is it rated?
• » » 14 months ago, # ^ |   -7 For you, yes. For expert or more, no.
» 14 months ago, # |   +2 Isn't it more logical to not allow experts or above to submit solutions for the first 20 minutes, rather than making problem A hard/confusing, for div3 rounds at least? Many participants don't even submit a solution if they find problem A confusing or tough for its bracket.
» 14 months ago, # |   0 My rating is just 1600, so helpless
• » » 14 months ago, # ^ |   0 lol, similar is the case for guys with rating just 1900
» 14 months ago, # |   0 Good Luck Have Fun!
» 14 months ago, # |   0 what happens when i hack my own solution?
• » » 14 months ago, # ^ |   0 pts = your points. So, if you hack task A, then: new_pts = pts - points_from_task(A) + hack_points
• » » » 14 months ago, # ^ |   0 i guess both points are equal.
• » » 14 months ago, # ^ |   +1 If successful: you will lose a problem solved. If unsuccessful: nothing changes. So it's obviously meaningless to hack yourself unless you've found some mistakes in your code after the contest.
• » » » 14 months ago, # ^ |   0 oh, i got that, thank you.
» 14 months ago, # |   0 Good luck to you all
» 14 months ago, # |   -23 What you see: Codeforces Round #XYZ (Div.
3) What I see: Another unrated contest :'(
• » » 14 months ago, # ^ |   0 architb_12 why the hell are u giving this contest if u have such a mentality, and posting this nonsense doesn't show that u are cool
• » » » 14 months ago, # ^ | ← Rev. 2 →   -8 pizza_hut what mentality? I am just saying that I am sad that the contest is unrated. It is much more fun when it is rated. What is your problem with that?
• » » » » 14 months ago, # ^ |   -20 architb_12 what's the problem when the contest is rated for 70% of users on codeforces
• » » » » » 14 months ago, # ^ |   +7 I think you have completely misinterpreted me. I meant that it is more fun to give a contest when it is rated for you.
• » » » » » 14 months ago, # ^ |   0 hey, first of all, learn to give respect to others (doesn't matter whether they are highly rated or low) and don't talk rudely. He was just saying that this contest is unrated for him, and of course it was unrated for him because he has put in the effort to be at that position where Div3 and even Div2 are unrated for him. So, please don't post such nonsense. Even tourist gave the Div3 today, can you stop him... No nah?
» 14 months ago, # |   +3 Can the frequency of Div.3 contests increase?
» 14 months ago, # |   0 let's hope that i get one question correct on this site tutututu
» 14 months ago, # |   +61 This is what happens when Tourist attends Div 3.
• » » 14 months ago, # ^ |   +6 How can one read the problem, figure out the solution and write down the code in 1 minute.....
• » » » 14 months ago, # ^ |   +9 Think about reading 3 problems, figuring out solutions, coding all 3 of them and getting them accepted in 3 minutes.
• » » » 14 months ago, # ^ |   0 He doesn't read the problem, nor do any other red coders. They go straight to the test cases, find a pattern, and implement.
• » » » » 14 months ago, # ^ |   0 Unbelievable!
• » » » » » 14 months ago, # ^ |   +46 If problems see tourist coming, they give up and solve themselves.
• » » » » » » 14 months ago, # ^ |   +1 You are very humorous
• » » » » » » 14 months ago, # ^ |   0 lol
• » » » » » » 14 months ago, # ^ |   0 Actually tourist has solved the problems before they solve themselves :)
• » » » 14 months ago, # ^ |   0 technically, it was 36 seconds
• » » » 14 months ago, # ^ |   +3
• » » 14 months ago, # ^ |   0 he submits all the answers, and after that he reads the questions
» 14 months ago, # |   -15 What's the point of having 50000 testcases, if giving a verdict still takes you > 10 minutes?
• » » 14 months ago, # ^ |   +72 I'm sorry, what about you? I don't see any of your submissions that were judged for more than 1.5 minutes.
• » » » 14 months ago, # ^ | ← Rev. 2 →   0 F took > 10 minutes. It really didn't affect me. I understand it's because these rounds don't have pretests, I guess.
» 14 months ago, # |   +38 Wow the servers have REALLY IMPROVED. MikeMirzayanov orz
» 14 months ago, # |   +75 tourist solved 7 problems in 22 minutes and still atheists exist. orz
» 14 months ago, # |   -8 Tests today was rlly fast, thx)
» 14 months ago, # |   +16 A nice and balanced round !!
» 14 months ago, # |   +7 How to solve E2 and F?
» 14 months ago, # |   +12 Easy +rating, nice contest)
» 14 months ago, # |   +5 I was logged out suddenly while submitting. Please solve this problem, it occurs most often during contests for me!
» 14 months ago, # | ← Rev. 2 →   0 How do you do E2? I spent like 45 minutes binary searching for the second endpoint (and I think it was probably the right way, since you have blocks of xyx or just one large block of length x), so you can use binary search to find the optimal value for the second endpoint, but it wouldn't work.
EDIT: Just realized that I thought a_i<=26 for E2 (as it was in E1), so that's why I was getting wrong answer
• » » 14 months ago, # ^ |   +2 Observation: The contribution of a number is limited by the number of occurrences in the prefix and the number of occurrences in the suffix. In other words, if you have a prefix with 7 occurrences and a suffix with 2 occurrences, then you're only contributing to the final answer with just 4 (2 from the left side, which has 7 occurrences, and 2 from the suffix), so contribution = min(prefix, suffix) * 2. Knowing the above, we can just try, for each number x, a possible prefix that has 1 occurrence.. 2 occurrences.. 3 and so on; you can fill the middle with the element that occurs the most. You have only 200 distinct elements, just go over them all.
• » » » 14 months ago, # ^ |   0 Oh my, this is neat. I mean the idea about checking only the prefix and suffix. Thanks!!
» 14 months ago, # |   +10
» 14 months ago, # |   0 How to solve E2? Any idea??
• » » 14 months ago, # ^ |   +1 store the positions of every element in a vector v[203]. For each number between 1 and 200, run two pointers from the start and end of its position vector, and between them find the frequency of the most frequently occurring integer using a 2-d prefix sum array.
• » » » 14 months ago, # ^ |   0 can you elaborate on the 2-d prefix sum array? how are you storing values in it?
» 14 months ago, # |   0 my approach for E1 was to check for the next max frequency of a number between the extreme positions of all the numbers having max frequency... what's wrong in this??
» 14 months ago, # |   +1 more than 60k accepted submissions and approx 26k participants. wow, amazing.
» 14 months ago, # | ← Rev. 3 →   0 Why can't i hack someone? It writes "Illegal contest ID" or "Неверный идентификатор соревнования" (Russian for "Invalid contest ID")
• » » 14 months ago, # ^ |   0 Hack via the status page, not the standings page
• » » » 14 months ago, # ^ |   0 Thank you very much :)
» 14 months ago, # |   +1 Fast servers but speedforces :(
» 14 months ago, # | ← Rev.
2 →   +3 How did you all solve E2? I used Mo's and I wonder if that was overkill. • » » 14 months ago, # ^ |   0 Oh man the alphabet is only size 200, Mo's was definitely overkill. • » » 14 months ago, # ^ |   0 Yeah, I used Mo's too, don't know how others got that right • » » 14 months ago, # ^ |   0 complexity- (2*10^5)*200 I will tell very briefly... for each index i let x=arr[i] and till now(including i) let it(x) has appeared fth time so if (Frequency of x in whole array)/2 >= f then i found index j of x from the end of array such that in segment arr[j...n] occurrence of x is also f and hence I iterate through all the number 1<=n<=200 an found the no of occurrence O= of such n between (i,j) . so length of three blocks palindrome will be f+f+O.Do this and update the result if you got a larger length. • » » 14 months ago, # ^ | ← Rev. 4 →   0 Here's my approach: Loop through all $k$ possible "symbols". This will be the symbol forming the prefix and suffix (aaa...aaa) Put two pointers on the first ($i$) and last ($j$) occurrences of the symbol. Find what's the best score for the middle part (...bb...) between $i+1$ and $j-1$ using a precomputed table of counts for each prefix and each symbol. Move pointers towards each other to the next two occurrences of the symbol from #2. At every time, you know exactly how many of these symbols there are to the left (= to the right) of the middle part, so the score for this partition is middle part score + 2 * symbol count. Repeat until they meet, keeping track of the maximum score. $O(n\cdot k^2)$ in total, but the operation #3 is rare in practice, so it passes the pretests ¯\_(ツ)_/¯. Got MLE on the first attempt though, lol. Don't define int long long where it's not necessary. • » » » 14 months ago, # ^ |   0 Excellent solution. I thought O(N * K^2) would fail, but yours is super simple, so I guess it has a very LOW constant factor. • » » » 14 months ago, # ^ | ← Rev. 
2 →   +2 That is $O(n \cdot k)$ in total if you have position pre-calculation, not $O(n \cdot k^2)$. That is because $\left(\sum \text{occurrences of symbols}\right) = n, \text{not } n \cdot k$. • » » » » 14 months ago, # ^ |   +3 but for each prefix and suffix of given size and letter, he is going over all 200 letters to find biggest centre block. So I think its really $nk^2$ with small constants. • » » » » » 14 months ago, # ^ | ← Rev. 2 →   +3 I think prefix table can be calculated for vector(200), then the max_element(for "b") can be found in O(k) time. Since we traverse over O(n) two pointer pairs (i,j), this gives total time O(nk). • » » » 14 months ago, # ^ | ← Rev. 2 →   0 FYI this approach is almost optimal; but at step 2 you should initialize $i$ and $j$ to the two middle occurrences of the symbol (if the amount of symbols is odd, just ignore the middle one), then move them away from each other, towards the end.The benefit of this is that you can start with an empty set of frequences $f[200]$, then initialize that to the counts of each symbol between $i$ and $j$. Keep track of the most frequent symbol whenever you update ++$f[\cdot]$. And then do --$i$, ++$j$, and update $f[]$ as you go. You can always keep track of the most frequent letter in the middle part this way, and complexity is $O(n)$ because you do ++$f[\cdot]$ at most $n$ times. • » » » 14 months ago, # ^ |   +3 For E2, I coded a O(nklog(n))time algorithm using the same idea as yours except using binary search instead of prefix sums. I realised that the complexity is too large. However, for testcase 9, which has 134 repeated 2e5 times, my solution should simplify down to O(nlogn). However, when I ran it on my system, it took more than 30 seconds and gave me TLE on codeforces. Can you please help? 
Here is my submission https://codeforces.com/contest/1335/submission/76624993
• » » » » 14 months ago, # ^ |   0 I also tried the same thing but got TLE on test 15 (76625017). Later i tried to remove the binary search and precompute the freq table, but got memory limit exceeded on test 9 (76625886). Can anyone help?
• » » » » » 14 months ago, # ^ |   +3 Replace ll with int in your program
• » » » » » » 14 months ago, # ^ |   0 wtf, it worked. How can they set such a memory limit?
• » » » 14 months ago, # ^ | ← Rev. 4 →   +1 E2 complexity analysis: $\displaystyle \sum_{c=1}^k freq_c \cdot k = \displaystyle k \cdot \sum_{c=1}^k freq_c = O(k \cdot n)$ Sum over $c$ for choosing the number in the $1^{st}$ block. $freq_c$ for iterating over all possible lengths of the $1^{st}$ (and $3^{rd}$) block. Multiplied by $k$ for choosing the number in the $2^{nd}$ block. $\displaystyle \sum_{c=1}^k freq_c \neq k \cdot (freq_c)_{max} \neq k \cdot n$ $\displaystyle \sum_{c=1}^k freq_c = n$
• » » 14 months ago, # ^ |   0 What is Mo's? I thought of doing binary search but failed to do so.
• » » » 14 months ago, # ^ |   0 maybe Mo's algorithm?
• » » » 14 months ago, # ^ |   +3
• » » 14 months ago, # ^ |   0 Could you please explain how you applied Mo's?
• » » » 14 months ago, # ^ |   +2 The problem can be reduced to answering a bunch of queries of the form: how frequently does the most frequent element appear in the range [L,R]? If the alphabet is large, you could attack this with Mo's Algorithm. Unfortunately, it slipped my mind that since the alphabet is so small (only up to $200$ symbols), you could more simply create 200 prefix sum tables instead.
• » » » » 14 months ago, # ^ |   0 I didn't quite understand from your code how you're generating these queries. Could you please elaborate on how you're generating these ranges?
• » » » » 14 months ago, # ^ | ← Rev. 2 →   0 I did, but one must be careful to not use long long, since it used too much memory.
We have to use a $32$ bit type. (76620592)
• » » » » » 14 months ago, # ^ |   0 My solution is similar to yours but it is giving TLE on test 2. I am not able to figure out the reason. My Submission
• » » » » 14 months ago, # ^ |   0 Is Mo's algorithm the only way to do it?
» 14 months ago, # |   0 Really good round! I loved problems D and F. Had an idea for F but gave up after 20-25 minutes of implementation, after I realised I didn't account for something mentioned in the question. Had about 20 minutes for E1 and E2 (I regret not having attempted E1 and E2 before F). I believe I'd have been able to solve E if I had made a wiser decision earlier, but shit happens, lol. Also, ideas for E, anyone?
• » » 14 months ago, # ^ |   0 You can enumerate a and the number of a, and then enumerate what b is; it seems 200*200*n, but you can find the real time complexity is 200*n
• » » » 14 months ago, # ^ |   0 Keven Can you explain your solution? Is it similar to this: https://codeforces.com/blog/entry/75908?#comment-602884
• » » » » 14 months ago, # ^ |   0 200*200*n: the first 200 enumerates a, n is the number of a, so the first 200 times n is n; time complexity is 200*n
• » » » 14 months ago, # ^ |   +1 can you explain to me why it is not 200*200*n? I got AC on E1, but i assumed that the complexity was 200*200*n; now i sent the same code for E2 and i got AC
• » » » » 14 months ago, # ^ | ← Rev. 2 →   +1 In your solution you have y = v[a].size() - 1; which is actually not always n. In fact it would turn the factor of 200*n into just n.
• » » » » » 14 months ago, # ^ |   0 yeap, I already got it, thanks!
» 14 months ago, # |   0 can anyone point out which test case is giving wrong answer in this code for question B of this contest.
https://codeforces.com/contest/1335/submission/76554481
• » » 14 months ago, # ^ |   0 i missed the corner case when we have to take 26 unique characters. my bad
» 14 months ago, # |   0 In problem F, it would be better if the ranges of N and M were fixed individually instead of just N*M <= 1e6
» 14 months ago, # |   0 How do you guys solve D? I finally figured out that we must change 9 positions which are distributed across the nine squares. So I enumerate the column and the row (maybe like the N-queens Problem) and get the answer....
• » » 14 months ago, # ^ |   +58 Author's solution is to replace all 2 with 1.
• » » » 14 months ago, # ^ |   0 Wow, this solution is very cool! I think replacing 3 with 1 / 2 / ... is also ok.
• » » » 14 months ago, # ^ |   0 Why 2?
• » » » » 14 months ago, # ^ |   +3 You can take any 2 numbers.
• » » » 14 months ago, # ^ | ← Rev. 2 →   0 wow, that's innovative, instead of figuring out which cell to edit.
• » » » 14 months ago, # ^ |   +3 Yeah, I also changed every 9 to 8. It took just less than 4 minutes for me to solve this. Fastest problem D ever for me.
• » » 14 months ago, # ^ | ← Rev. 2 →   0 I place the number 9 - currentNumber (if it's non-zero, else 1) in every cell whose row, col and block were not visited before through any cell. To check that I use simple hashing.
• » » 14 months ago, # ^ |   0 Idea: You only need to make changes such that each row, col and block has at least two equal elements. If you make changes at indices (0-based) (0,0), (1,3), (2,6), (3,1), (4,4), (5,7), (6,2), (7,5), (8,8), you end up covering all rows, cols and blocks. So, make the required changes to those indices => problem solved. Code
/* Problem from CodeForces!
# Tags: [] Link: */
/* Written by: Aryan V S Date: Sunday 2020-04-12 */
#include <iostream>
#include <string>
#include <vector>
using namespace std;
typedef long long ll;
typedef unsigned long long ull;
#define all(x) x.begin (), x.end ()
#define rall(x) x.rbegin (), x.rend ()
vector<string> Sudoku (9, string (9, '0'));
int main () {
    ios::sync_with_stdio (false);
    cin.tie (nullptr);
    cout.tie (nullptr);
    cout.precision (10);
    cout << fixed; // << boolalpha;
    int T;
    cin >> T;
    while (T--) {
        for (int i = 0; i < 9; ++i)
            cin >> Sudoku [i];
        int x = 0, y = 3, z = 6;
        for (int i = 0; i < 9; i += 3) {
            Sudoku [i][x] = (Sudoku [i][x] == '1' ? '2' : '1');
            Sudoku [i + 1][y] = (Sudoku [i + 1][y] == '1' ? '2' : '1');
            Sudoku [i + 2][z] = (Sudoku [i + 2][z] == '1' ? '2' : '1');
            cout << Sudoku [i] << '\n' << Sudoku [i + 1] << '\n' << Sudoku [i + 2] << '\n';
            ++x; ++y; ++z;
        }
    }
    return 0;
}
• » » » 14 months ago, # ^ |   0 I did the same xD
mat[0][0]=max((mat[0][0]+1)%9,1);
mat[1][3]=max((mat[1][3]+1)%9,1);
mat[2][6]=max((mat[2][6]+1)%9,1);
mat[3][1]=max((mat[3][1]+1)%9,1);
mat[4][4]=max((mat[4][4]+1)%9,1);
mat[5][7]=max((mat[5][7]+1)%9,1);
mat[6][2]=max((mat[6][2]+1)%9,1);
mat[7][5]=max((mat[7][5]+1)%9,1);
mat[8][8]=max((mat[8][8]+1)%9,1);
• » » » » 14 months ago, # ^ |   0 Shhh, don't post anything remotely related to the word "same" :p They'll think we cheated
• » » » » » 14 months ago, # ^ |   0 lol should I delete it?
• » » » » » » 14 months ago, # ^ |   0 Nah, hehe. It's cool. Now, it'll look like: Yeah, they cheated, but if they're talking about it openly, they might not have.... Jokes apart, I'm just having fun. GL for future rounds
• » » 14 months ago, # ^ |   0 a[1][1] = a[2][1], a[2][4] = a[1][4], a[3][7] = a[2][7], a[4][2] = a[3][2], a[5][5] = a[4][5], a[6][8] = a[5][8], a[7][3] = a[8][3], a[8][6] = a[7][6], a[9][9] = a[8][9] This was my idea. Later, got that replacing all 2's with 1's was good enough.
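The replace-every-2-with-a-1 trick from the comments above is easy to sanity-check. A minimal sketch of the idea (my own illustration, not any contestant's submission; the grid is assumed to be a list of nine 9-character strings):

```python
def anti_sudoku(grid):
    # Each row, column and 3x3 block of a valid solved sudoku contains
    # exactly one '2' and exactly one '1'. Turning every '2' into a '1'
    # therefore puts two '1's into every row, column and block, and it
    # changes exactly 9 cells (one per block), which the problem allows.
    return [row.replace('2', '1') for row in grid]
```

Any other pair of digits works the same way, which is why "changing every 9 to 8" in the thread is an equally valid variant.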
» 14 months ago, # |   +92 Funny way to solve F without detecting cycles: notice that if two robots get to the same cell, then they will always move to the same cell from then on. This means that we can find, for each cell, where the robot starting in that cell will be after some $2^{22}$ moves using binary lifting.
• » » 14 months ago, # ^ |   +37 This is the author's solution :)
• » » » 14 months ago, # ^ |   0 Hi, this is my first contest. Can you please share the link to the author's solutions? I am not able to locate it. Thank you!
• » » » » 14 months ago, # ^ |   +23 I'll post them with the editorial. Please, be patient.
• » » » » » 14 months ago, # ^ |   -42 The contest was not at all balanced, I must say. You can even see the ratings of people and compare the no. of questions they solved. The first 4 questions were not of much use, as almost half of the contestants solved the first 4. Also, question D looked like it was meant for April Fools 2020 but was posted by mistake?
• » » » » » » 14 months ago, # ^ |   +34 Why are you posting this from a fake account? :)
• » » » » » » » 14 months ago, # ^ | ← Rev. 2 →   -21 You ignored my comment... What am I saying wrong according to you? You should accept the mistakes. :)
• » » » » » » » » 14 months ago, # ^ |   +21 Which mistakes? Prediction mistakes? Maybe, I need to clean my magic ball to see the future better. The only (and very doubtful) "mistake" here is the "gap" in $5400$ accepted solutions among official participants (which means that E1 was solved by 1/4 of the people who solved D). This is not a big gap. I could show you examples of really big gaps, but I don't think I need to prove anything to you anymore. If you don't realize that this prediction of difficulties is good enough, I have nothing to say to you. And, if you post your opinion from a fake account, it seems like you don't want anybody to know who you actually are :)
• » » » » » » » » » 14 months ago, # ^ |   -8 The last point is quite obvious..
:) I just wanted to say that these weekly contests are supposed to test our algorithmic (and not puzzle) skills. Thank you for replying..
• » » » » » » » » » 14 months ago, # ^ |   +2 Yeah, you are right. As far as I remember, all the Div.3s I have given were perfectly balanced and excellent problemsets, and all of them were made by you :)
• » » » » » » » » » 14 months ago, # ^ |   +11 Well, I disagree with you :) Most times there are some mistakes that you couldn't notice, but this time the contest went fine. There were also some minor mistakes, but not as fatal as usual. This is the rare case when almost everything is fine and I'm glad to see that.
• » » » » » » » » » 14 months ago, # ^ |   +4 Thank you for the round! No queue, no weak pretests. I found problem E2 very interesting, and problem D a bit hard for me but interesting too. Please make more div 3 contests, we really appreciate you :)
• » » » 14 months ago, # ^ |   0 That's actually a really cool approach. Have you also tested a cycle-based solution? I am trying to implement a fairly straightforward search and it keeps running out of memory. Looking at 32 pages of MLE results in Status, I think the memory limits for E and F could have been a little bit higher.
• » » » » 14 months ago, # ^ |   +3 The approach with extracting cycles and doing some dp is much harder to implement, so I didn't even try to do that. I could have but didn't do this.
• » » » » » 14 months ago, # ^ |   +9 I ended up flipping the dimensions if N > M, and surprisingly this helped (76628096). According to the test results, it uses up 254.5 out of 256 MB, lol.
• » » » » » » 14 months ago, # ^ |   +3 Well, I'm sorry to hear that. I really thought about increasing the memory limit, but my solution uses ~200MB (the two-dimensional vector of size $nm \log nm$), so I thought that the remaining 50MB would be enough (because this is the only array which should have such size).
• » » » » » » » 14 months ago, # ^ | ← Rev.
2 →   0 Hello, can you please explain to me why N*M*M solutions pass for problem E2? tourist has written code with that time complexity; here is his submission: https://codeforces.com/contest/1335/submission/76520445 And I am not able to understand how that happens; curiously, I even tried to check the test cases, and I see test case 9 which should break such solutions. So clearly I am not understanding something; can you please help me with this. Thanks!
• » » » » » » » » 14 months ago, # ^ |   +14 The solution of tourist has the complexity $O(nk)$ where $k$ is the size of the alphabet. If you take a look at the two outer loops, you can notice that they just iterate over all characters of the string, and this sum is obviously $n$. Maybe, when I post the editorial, it will be easier to understand why this is $O(nk)$. Just be patient.
• » » » » » » » » » 14 months ago, # ^ |   0 Oh, I see; I apologize for being a bit impatient, and I thank u for the reply. I understand the complexity analysis of that solution now. Thanks!
• » » » » » » » 14 months ago, # ^ |   +1 vovuh please look at my submission for E2: this one got MLE at 252700 KB, but now I am submitting it and it got accepted with 252800 KB (more space than the previous)... :'( and because of that I was not able to submit E1 during the contest either... very sad!!! :'(
• » » 14 months ago, # ^ |   +12 Thanks for teaching me something. I learned binary lifting for RMQ/LCA but didn't consider reusing the table for k-th ancestor queries.
» 14 months ago, # |   +4 I got E 5 min late -_- Nice round!
» 14 months ago, # |   0 please help me find which problem i have with this code on problem B? https://codeforces.com/contest/1335/submission/76592383
• » » 14 months ago, # ^ |   0 your code is giving wrong output on this test case: 1 / 10 10 10. Your output contains only 9 distinct characters, but the required number of distinct characters is 10
• » » 14 months ago, # ^ |   0 Idea is wrong.....
My idea: (1) Construct a string of length B with all distinct characters. (2) Construct a string of length A - B with each character the same as the last distinct character in String (1). (3) Concatenate Strings (1) and (2). (4) Print String (3) (which has length A) N / A times. (5) Print the remaining length (N % A) with the appropriate characters from String (3). Code
/* Problem from CodeForces!
# Tags: [] Link: */
/* Written by: Aryan V S Date: Sunday 2020-04-12 */
#include <iostream>
#include <string>
using namespace std;
typedef long long ll;
typedef unsigned long long ull;
#define all(x) x.begin (), x.end ()
#define rall(x) x.rbegin (), x.rend ()
int main () {
    ios::sync_with_stdio (false);
    cin.tie (nullptr);
    cout.tie (nullptr);
    cout.precision (10);
    cout << fixed; // << boolalpha;
    int T;
    cin >> T;
    while (T--) {
        int N, A, B;
        cin >> N >> A >> B;
        string S, P (A, '0');
        for (int i = 0; i < B; ++i) P [i] = i + 'a';
        for (int i = B; i < A; ++i) P [i] = B + 'a' - 1;
        int i = 0;
        for (; i + A < N; i += A) S += P;
        for (; i < N; ++i) S += P [i % A];
        cout << S << '\n';
    }
    return 0;
}
• » » 14 months ago, # ^ |   0 You have taken 'g' twice in your array
» 14 months ago, # |   0 How to solve E?
» 14 months ago, # | ← Rev. 2 →   +3 How to think fast like tourist... :( When he was solving each question in about 1 minute, I was drawing on mspaint.exe and looking for the solution :)
• » » 14 months ago, # ^ |   +1 practise, practise and practise. smart work + hard work
• » » » 14 months ago, # ^ | ← Rev. 2 →   0 In normal practice I could solve A and B of 1000 difficulty in less than 2 hours. :/ Probably my brain crashed because of taking too much practice last night lol...
• » » » » 14 months ago, # ^ |   0 they have been coding for many years now; if you continue doing this, you will be a master in the upcoming years
• » » » » 14 months ago, # ^ |   0 Do you know this website? https://vjudge.net/ I think it is a good site for practice. There are all kinds of problems to practice on.
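The five construction steps for problem B listed earlier boil down to: build one period of length a (b distinct letters plus padding), then tile it. A compact sketch of that idea (my own paraphrase, with the window property noted in the comments):

```python
def construct(n, a, b):
    # One period of length a: b distinct letters, padded with the last one.
    period = ''.join(chr(ord('a') + i) for i in range(b))
    period += chr(ord('a') + b - 1) * (a - b)
    # The result is periodic with period a, so every window of length a is a
    # rotation of the period and therefore contains exactly b distinct letters.
    return (period * (n // a + 1))[:n]
```

For example, construct(7, 5, 3) gives "abcccab", and every length-5 window of it has exactly 3 distinct letters.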
• » » 14 months ago, # ^ |   +5 Well, pen and paper seem more reasonable for me
• » » » 14 months ago, # ^ | ← Rev. 2 →   +3 I have my own drawing pad & stylus pen, so I don't wanna waste them too haha.
» 14 months ago, # |   +8 the best contest ever for me
» 14 months ago, # |   0 Problem D: Can somebody please help me find the problem with this submission? Apparently, the inputs are not being taken properly; instead, some absurd garbage values are being shown. I was stuck with this for the last 40 minutes of the contest and still could not figure it out.
• » » 14 months ago, # ^ |   0 You declared the array as integer, hence it will read the whole row at once (not character by character as it is supposed to be).
• » » 14 months ago, # ^ |   0 Input is not n*n integers, it is n strings in n lines.
• » » 14 months ago, # ^ |   0 You can't take the input as int sudoku[10][10]. Make it char sudoku[10][10] and change your code accordingly.
• » » 14 months ago, # ^ | ← Rev. 2 →   0 That memset tho...
• » » 14 months ago, # ^ |   0 cin >> sudoku[i][j]; this takes the whole row as input, not a single integer
» 14 months ago, # |   +1 https://codeforces.com/contest/1335/submission/76590246 https://codeforces.com/contest/1335/submission/76509810 both solutions are identical, and the person submitted them from different accounts, intentionally, to hack (and get points, if hacking gets you extra points, not sure tho)!
• » » 14 months ago, # ^ |   0 Maybe that's the reason why he's still gray.
• » » 14 months ago, # ^ |   +1 Successful hacks don't change anything except that the victim will lose a problem solved. Hacks with this method are...... meaningless, honestly
» 14 months ago, # |   -48 people making such a D should be banned from making future contests
• » » 14 months ago, # ^ |   +1 Can you ban MikeMirzayanov?
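On the binary-lifting idea for F that came up earlier in the thread: the grid of moves is a functional graph (every cell has exactly one outgoing move), so "where does a robot end up after K steps" can be answered by repeated squaring of the successor map. A generic sketch of that technique (my own illustration with 0-indexed nodes, not the author's exact code):

```python
def advance(nxt, k):
    # nxt[v] is the unique successor of node v; returns pos, where
    # pos[v] is the node reached from v after exactly k steps.
    pos = list(range(len(nxt)))
    jump = list(nxt)                    # jump[v] = node after 2^i steps
    while k:
        if k & 1:
            pos = [jump[p] for p in pos]
        jump = [jump[j] for j in jump]  # double the jump length
        k >>= 1
    return pos
```

Taking k large enough (the thread suggests 2^22 for this problem's constraints) guarantees every robot has entered its cycle, and robots that merged into the same cell stay merged, which is what the counting in F relies on.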
• » » 14 months ago, # ^ |   +11 People making such comments should be banned from making future comments • » » 14 months ago, # ^ |   0 I did read the "you can change any..." as "you can swap any numbers in the grid". Still have no solution for that problem. • » » » 14 months ago, # ^ |   0 you can see my solution i also thought the same • » » » » 14 months ago, # ^ |   +1 • » » 14 months ago, # ^ |   0 Can you please make some better to and provide them to problem setters so that the people who works so hard for making this problem can take some rest. • » » » 14 months ago, # ^ |   0 Haha , why should i do that. I only want to solve problems not make them. But if some problem is bad and i say that it's bad dosen't mean i need to make problems. • » » » » 14 months ago, # ^ |   0 Here is a Chinese proverb for you:“你行你上啊”. no can no BB! • » » 14 months ago, # ^ |   0 It's your problem... » 14 months ago, # | ← Rev. 2 →   0 ~~~~~ How is the answer for 6 1 5 1 1 1 1 5? Test case 2 E 2! According to me it should be 6! • » » 14 months ago, # ^ |   +1 1st and 3rd part must have the same length x • » » 14 months ago, # ^ |   +1 1 1 1 1 1 is the longest palindrome. What is your 6 length palindrome? • » » » 14 months ago, # ^ |   0 Damn! I missed x==x :/ • » » » » 14 months ago, # ^ |   0 I was also stuck at this point for half an hour • » » 14 months ago, # ^ |   +2 Number of a's on both sides of b's has to be equal as per the problem. • » » 14 months ago, # ^ |   0 It should be 5. The longest 3 block palindrome here is 11111 • » » 14 months ago, # ^ | ← Rev. 2 →   0 1 1 1 1 1 is the answer and here both x=0,y=5,x=0basically every answer should be a palindrome with max 2 types of elements, and the 2nd type of element should be sandwiched b/w 1st type of elements , hope it gives you a little idea ! PS: thanks ashkANOn for pointing out the mistake. 
• » » » 14 months ago, # ^ |   +1 For 1 1 1 1 1: x=0, y=5, x=0 i think
• » » 14 months ago, # ^ |   0 remember that the three blocks palindrome should be of the form x y x. The length of the first x and the second x is the same
» 14 months ago, # |   0 It's really bad for me. 1335B - Construct the String said " to construct a string s of length n consisting of lowercase Latin letters such that each substring of length a has exactly b distinct letters. " e.g. for the test case where n, a, b are 22 17 12, my code outputs abcdefghijgkkkkkkabcde. Is it wrong?
• » » 14 months ago, # ^ | ← Rev. 2 →   0 My construction is: abcdefghijkllllllabcde
• » » » 14 months ago, # ^ |   0 Oh, it's a really silly mistake. Thanks for helping
• » » 14 months ago, # ^ |   0 The first substring "abcdefghijgkkkkka" has only 11 distinct characters.
• » » 14 months ago, # ^ |   0 Yes, it is wrong for the prefix of size 17. I think you are trying to print this: "abcdefghijkllllllabcde".
» 14 months ago, # |   0 Can someone show me your solution to C? My solution is to binary-search the answer and check whether it is legal. But my solution runs nearly 900ms. Are there easier and faster solutions?
• » » 14 months ago, # ^ | ← Rev. 2 →   0 Here. Mine runs in 61ms. The idea can be figured out from the code, but hit me up in PM or here if you need help understanding something. Code
/* Problem from CodeForces!
# Tags: [] Link: */ /* Written by: Aryan V S Date: Sunday 2020-04-12 */ #include #include #include #include #include #include #include using namespace std; typedef long long ll; typedef unsigned long long ull; #define all(x) x.begin (), x.end () #define rall(x) x.rbegin (), x.rend () int main() { ios::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); cout.precision(10); cout << fixed;// << boolalpha; int T; cin >> T; while (T--) { int N, Same = 0; cin >> N; unordered_map M; vector A (N); for (int i = 0; i < N; ++i) { cin >> A [i]; ++M [A [i]]; Same = max(Same, M [A [i]]); } int Distinct = M.size() - 1; if (Same > Distinct && Same - Distinct > 1) cout << Distinct + 1; else if (Same > Distinct) cout << Distinct; else cout << Same; cout << '\n'; } return 0; } • » » » 14 months ago, # ^ |   0 Can you please briefly explain your solution?I have a little difficulty in understanding it.... • » » » » 14 months ago, # ^ |   0 Sure.I map every skill to the number of times it appears in the input. I do this because I'd like to find the skill value which is present in max number of people. This is what the variable Same holds. The size of the map will give the total number of different skill levels. However, one of the skill levels must be used to compose the team with same skill levels. So, the number of distinct skills are Map.size() — 1. And then it's a matter of evaluating which team size is maximal. • » » » » » 14 months ago, # ^ | ← Rev. 2 →   0 Btw, the only check you need to do is whether the biggest count equals the size of the map or if it is greater. If they’re equal, return min(biggest count, map size — 1). If not, return the min(biggest count, map size) • » » » » » 14 months ago, # ^ |   0 Thanks bro!I have understood your solution and finished the problem in 62 ms. • » » » » 14 months ago, # ^ |   0 I can explain mine if you want to :) SolutionThere is a greedy aproach to this. We want to maximize the number of people in both teams. 
So we want to check the maximum number of students with the same skill (let's name it max_same) and also that skill (let's name it num). After that, we want to check the maximum number of students with different skills, different from num (there is the risk of putting the same student in 2 groups). Both of these can be done with a simple sort on the initial vector and a linear scan. Now, if the absolute difference between the numbers of students in the 2 groups is at least 2, then we can transfer one student that we didn't take previously from the second group to the first, to maximize the number of different skills. The answer is the minimum between the numbers of students in the 2 groups. Code: 76607507
• » » » » » 14 months ago, # ^ | +1 Thank you, but I have finished the problem in 62 ms (^v^)
• » » » » » » 14 months ago, # ^ | 0 76583554 Can you please tell me why my answer is wrong in the second testcase. I have used the same logic as you. Please help.
• » » » » » 14 months ago, # ^ | 0 Can you please tell me why my answer is wrong in the second testcase. I have used the same logic as you. Please help. 76583554
» 14 months ago, # | ← Rev. 3 → -21 Hi, can I be unrated for Codeforces Round #634 (Div. 3)? The reason is Problem D with Java. I submitted the code https://codeforces.com/contest/1335/submission/76566765 during the contest without PrintWriter but TLE'd on test case 2, although my solution is O(81*10^4), which should be under the time limit. After the contest I submitted the exact same code with PrintWriter https://codeforces.com/contest/1335/submission/76614841 and got AC. @vovuh vovuh
» 14 months ago, # | +18 Best contest ever for div 3. Thank you god vovuh very much, and hope everyone can become expert!
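The counting rule described in the two explanations above (biggest single-skill count vs. number of distinct skills) can be sketched in a few lines. This is only a sketch of the thread's logic, not anyone's actual submission, and `two_teams` is a hypothetical helper name:

```python
from collections import Counter

def two_teams(skills):
    """Max team size x: one team of x students with pairwise distinct
    skills, one team of x students with equal skills, no student in both."""
    counts = Counter(skills)
    same = max(counts.values())  # biggest count of any one skill value
    distinct = len(counts)       # number of different skill values
    if same == distinct:
        # the duplicated value must also serve as one of the distinct skills
        return min(same, distinct - 1)
    return min(same, distinct)

# Checks worked out by hand from the rule above:
assert two_teams([4, 2, 4, 1, 4, 3, 4]) == 3
assert two_teams([2, 1, 5, 4, 3]) == 1
assert two_teams([1, 1, 1, 1]) == 1
assert two_teams([1, 1, 1, 3]) == 2
```

Note that the `same == distinct` case is the only one where the minimum has to drop by one, because one copy of the most frequent value is consumed by the distinct-skills team.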
» 14 months ago, # | +10 I didn't read D correctly and thought you needed to keep the "sudoku structure" (just swap numbers around, no replacing), so I came up with a backtracking solution and I just kept wondering how it was a div3 D and how so many people managed to solve it lol.
» 14 months ago, # | +3 Strong pretests and interesting problems, fast tests and good network. Thanks to vovuh & MikeMirzayanov
» 14 months ago, # | -19 Plz change the time of your contests in India.
» 14 months ago, # | 0 What is the intended complexity for E2?
• » » 14 months ago, # ^ | 0 O($N$ * $200$)
• » » » 14 months ago, # ^ | +3 The complexity of your code is O(N*200*200). Many other solutions with this complexity have passed. I think the best we can do is O(200*N*log(N))?
• » » » » 14 months ago, # ^ | +3 I believe my code is O(200N): https://codeforces.com/contest/1335/submission/76611393
• » » » » 14 months ago, # ^ | 0 See my reply below. It can be done in O(N*200).
• » » 14 months ago, # ^ | 0 O(200*200*nlogn)
• » » » 14 months ago, # ^ | 0 Not for E2
• » » » 14 months ago, # ^ | +5 It can be done in O(N*200) by analyzing from the "center" for each distinct value. A center for value V means two distinct pointers (a, b) where the amount of V before a is the same as the amount of V after b. Then keep jumping a backward and b forward, both at the same time, but each jump is actually to the previous/next occurrence of V. While you do this, count the frequency of values between a and b, keep the one with maximum occurrences, and add that to (amount of V before a)*2, keeping the maximum each time you move the pointers. This is linear for each value V.
• » » » » 14 months ago, # ^ | +3 Nice solution! Initially I had thought of this method, but I tried to move the pointers closer instead of farther and it led to some complications in maintaining the maximum.
• » » » » 14 months ago, # ^ | 0 Ah! I realise now that what I did was itself O(200*N).
I did the complexity analysis wrong. Thanks!
» 14 months ago, # | +4 Where can I report this self-hack? https://codeforces.com/contest/1335/submission/76616926
• » » 14 months ago, # ^ | 0 And then there's this one: https://codeforces.com/contest/1335/submission/76618522
• » » » 14 months ago, # ^ | 0 And then this guy is hacking his own solutions by deliberately putting in wrong tests: https://codeforces.com/contest/1335/submission/76620243
• » » 14 months ago, # ^ | 0
• » » 14 months ago, # ^ | 0 No need to report. There's no prize for such successful hacks. Also, there's no Best Hacker rank in div. 3, only in Educational Rounds.
» 14 months ago, # | +17 In F, why did you keep 1 as white and 0 as black? If your intention was to delay submission time by 2 minutes, congrats :D
» 14 months ago, # | 0 Could you please help me understand why my code for Problem C doesn't work? It seems pretty straightforward and I have seen others solve it my way, however my code fails on test 4. I have no idea why. Here is my submission.
• » » 14 months ago, # ^ | +1 I don't understand your code, but it seems to fail on the case "1 1 1 1 1 1 1...". The answer should be taking the whole array (n).
• » » » 14 months ago, # ^ | 0 My code fails on case 4, do you mean when all the numbers are the same? In this case the answer is 1. How my code works: 1. Count all the different numbers. 2. Find the biggest amount of repeating numbers. 3. Take the smaller of the two (+- 1). 'difs' is the number of different numbers, 'maxN' is the biggest amount of repeating numbers, 'curN' is the current amount of repeating numbers.
• » » 14 months ago, # ^ | +1 Don't compare two Integer (object type) values using ==. It compares references, not values.
• » » » 14 months ago, # ^ | +1 Thank you so much for your help. I used the intValue() method when comparing and my code is now working. Here is the working version.
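The O(200·N)-style idea discussed above can be sketched as follows. This is a rough, unoptimized reformulation framed as choosing k outer copies of each value (equivalent to walking the two pointers outward from the center); `longest_three_blocks` is a hypothetical helper, not anyone's actual submission:

```python
from collections import defaultdict

def longest_three_blocks(a, vmax=26):
    """Longest subsequence of the form x..x y..y x..x (k copies of some
    value v on each side, any single value repeated in the middle)."""
    n = len(a)
    # prefix[v][i] = occurrences of value v in a[:i]
    prefix = {v: [0] * (n + 1) for v in range(1, vmax + 1)}
    positions = defaultdict(list)
    for i, x in enumerate(a):
        for v in prefix:
            prefix[v][i + 1] = prefix[v][i]
        prefix[x][i + 1] += 1
        positions[x].append(i)

    best = max(len(p) for p in positions.values())  # middle block only
    for v, pos in positions.items():
        for k in range(1, len(pos) // 2 + 1):
            l, r = pos[k - 1], pos[len(pos) - k]  # k copies of v per side
            # best middle block strictly between positions l and r
            inner = max(prefix[u][r] - prefix[u][l + 1] for u in positions)
            best = max(best, 2 * k + inner)
    return best

assert longest_three_blocks([1, 1, 2, 2, 3, 2, 1, 1]) == 7
assert longest_three_blocks([1, 10, 10, 1]) == 4
assert longest_three_blocks([1, 3, 3]) == 2
assert longest_three_blocks([26]) == 1
```

The prefix tables cost O(vmax·N); the pair scan touches each occurrence at most once per value, with an O(distinct values) maximum at each step.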
» 14 months ago, # | +2 I am getting automatically logged out when I press the submit button for the first time in a contest. This has been happening to me regularly for the past 5 or 6 contests. I am still a specialist and those extra 15-20 seconds don't matter to me right now, but this might be a bug in the server that may need a correction. Please have a look MikeMirzayanov. Thanks!
• » » 14 months ago, # ^ | 0 Maybe your cookies are not working.
• » » » 14 months ago, # ^ | ← Rev. 2 → 0 But this is not happening to me on other coding platforms! If it were a cookies problem, then it would have occurred every time I hit submit, or on other platforms too! It does not even happen on the light Codeforces website!
• » » 14 months ago, # ^ | +1 I have faced the same issue when clicking from the problem itself instead of going to the submit code tab
• » » 14 months ago, # ^ | +1 I'm also facing some problem while submitting my first solution in the previous 2-3 contests. I get some HTML 403 forbidden error with some token value!
» 14 months ago, # | 0 In E1 test 2, set 107 is "4 1 4 1" and the expected answer is 4? Am I missing something?
• » » 14 months ago, # ^ | +1 I think that the counter in the evaluator that displays the WA test number is off by 1
• » » » 14 months ago, # ^ | 0 No, I'm pretty sure that's the line. I made a submission that outputs -1 at the 107th item, and it gets caught as -1. https://codeforces.com/contest/1335/submission/76621065 wrong answer 107th numbers differ - expected: '4', found: '-1'. Then I submitted an app that prints only the 107th line's input. And it is "4 1 4 1". https://codeforces.com/contest/1335/submission/76621266 It can't be 4.
• » » » » 14 months ago, # ^ | 0 It prints all the inputs starting from the first; 4 1 4 1 is actually the first test case.
• » » » » 14 months ago, # ^ | +1 On the 107th iteration you are taking input for the 1st time. It is taking the 1st case as input, not the 107th case.
• » » » » » 14 months ago, # ^ | 0 My bad, thanks.
It's actually 1 1 1 4 1.
• » » 14 months ago, # ^ | 0 Is that input visible from the test case?
» 14 months ago, # | 0 Why do some of the submissions contain this part? if (t == 'something') cout << "OK" << endl; Does it give any benefit?
• » » 14 months ago, # ^ | +2 They do that so they can get easy hacks from another account
http://mathhelpforum.com/differential-equations/98380-whats-laplase-transformation-one.html
# Math Help - What's the inverse Laplace transform of this one?

1. ## What's the inverse Laplace transform of this one?

$\frac{-2e^{-s}}{s^3}$

I know that the $e^{-s}$ will produce a unit step function $u$, but I don't know the rest.

2. Originally Posted by transgalactic:

$\frac{-2e^{-s}}{s^3}$

I know that the $e^{-s}$ will produce a unit step function $u$, but I don't know the rest.

Recall that

$\mathcal{L}^{-1}\left\{e^{-as}F\left(s\right)\right\}=u\left(t-a\right)f\left(t-a\right)$

So here, we see that

$\frac{-2e^{-s}}{s^3}=-e^{-s}\frac{2!}{s^3}$

Therefore,

$\mathcal{L}^{-1}\left\{-e^{-s}\frac{2!}{s^3}\right\}=-u\left(t-1\right)\cdot\left(t-1\right)^2$
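As a quick sanity check (not part of the original thread): applying the same shifting theorem in the forward direction, together with $\mathcal{L}\{t^2\}=\frac{2!}{s^3}$, recovers the given transform:

```latex
\mathcal{L}\left\{-u(t-1)\,(t-1)^2\right\}
  = -e^{-s}\,\mathcal{L}\left\{t^2\right\}
  = -e^{-s}\,\frac{2!}{s^{3}}
  = \frac{-2e^{-s}}{s^{3}}.
```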
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=LATEX-L;4e8f2280.0207&FT=&P=1751000&H=&S=b
## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

Re: User's thoughts about LPPL
Frank Mittelbach <[log in to unmask]>
Sun, 21 Jul 2002 01:38:45 +0200
text/plain (31 lines)

Thomas Bushnell, BSG writes:

> Boris Veytsman <[log in to unmask]> writes:
>
> > B. The *name* TeX is reserved for Knuth's program. If your program
> > is called TeX, it must satisfy the triptest. You can NOT correct bugs
> > in this program, you cannot do Debian QA for it -- you either take
> > it as is or rename it.
>
> No. You are quite wrong. Provided it still passes the triptest, you can
> call it TeX. You certainly can correct bugs or do Debian QA, provided
> the changes still pass the triptest.

Sorry, but I fear it's you who is quite wrong. The triptest is only there to help you determine that your implementation is okay. You are neither allowed to fix bugs nor to add extra features (new commands, or whatever).

Theoretically (now Goedel turns up again :-) a program is only allowed to call itself TeX if it produces, for all inputs, exactly the same output compared to the master copy in Stanford (there are technically a bunch of exceptions related to floating point stuff in dvi production, but that isn't related to the argument).

Have a look at Don's home page. There you find that upon his death TeX's version number goes up to \pi, and from there on all bugs are by definition features.

Of course you are allowed to rename the source files and produce whatever you wish from them, but you are not allowed to call the resulting thing "TeX" again, not even if you have the most valid bug fix up your sleeve.

frank
http://nrich.maths.org/7180/solution?nomenu=1
If page $1$ is on the outside of the first sheet, then page $2$ is on the inside. Pages $3$ and $4$ are on the same sheet, as are pages $5$ and $6$. Page $8$ therefore shares a sheet with page $7$, so it is on the inside of the fourth sheet. On the outside, on the other side, is therefore page $62$. The other three sheets have pages $63$ and $64$, $65$ and $66$, and $67$ and $68$ on them. Therefore there are $68$ pages, and four pages per sheet, so $17$ sheets.

Alternatively, the sum of the page numbers on the same side of a sheet is always constant, since one side's number increases by $1$ every time the other's decreases by $1$. This means the total is always $8 + 61 = 69$, so page $1$ shares a sheet with page $68$. Hence there are $68$ pages, so $17$ sheets.

This problem is taken from the UKMT Mathematical Challenges.
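The sheet layout used in the solution can be checked with a short script. This is not part of the original solution, and it assumes the standard booklet layout where sheet $i$ of an $n$-page booklet carries pages $2i-1$, $2i$, $n-2i+1$ and $n-2i+2$:

```python
def sheet_pages(n, i):
    """Pages printed on sheet i (1-indexed) of an n-page booklet."""
    return (2 * i - 1, 2 * i, n - 2 * i + 1, n - 2 * i + 2)

n = 68  # total pages found in the solution

# The fourth sheet carries pages 7 and 8 together with 61 and 62.
assert sheet_pages(n, 4) == (7, 8, 61, 62)

# On every sheet, page numbers on the same side pair up to n + 1 = 69,
# which is the constant-sum argument from the second paragraph.
for i in range(1, n // 4 + 1):
    a, b, c, d = sheet_pages(n, i)
    assert a + d == n + 1 and b + c == n + 1
```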
https://www.physicsforums.com/threads/number-theory-questions.115226/
Homework Help: Number theory questions

1. Mar 22, 2006 randommacuser

Hey all, I've got a few problems that are tripping me up tonight.

1. Let m, n be positive integers with m|n. Prove phi(mn) = m*phi(n). I know I can write n as a multiple of m, and m as a product of primes, and my best guess so far is that I can work with some basic properties of or formulas for the phi function to get the desired result. But I'm not making much progress.

2. Let n be an integer with 9|n. Prove n^7 = n mod 63. In this case I know it is enough to prove 7|(n^7-n) and 9|(n^7-n). But even though I imagine this would be a pretty simple application of Euler's theorem, I can't figure it out so far.

3. Suppose p is prime and p = 3 mod 4. Prove ((p-1)/2)! = +/- 1 mod p. This one has me totally stumped.

2. Mar 22, 2006 shmoe

1. Can you prove the simpler case where n is a prime power? Your approach should be easier here.
2. 9|(n^7-n) should be simple... Euler's theorem on 7|(n^7-n) is a good idea; what's going wrong? Show your work...
3. A small hint: consider Wilson's theorem.

3. Mar 22, 2006 devious_

1. Maybe you should think of a counting argument. What does m|n really tell you, and how can you count the numbers coprime to mn in { 1, ..., m, ..., n, ..., mn }?
2. n^7 = n (mod 7) by Fermat's Little Theorem (or Euler's Theorem), and you know that 9|n so 9|n^7. Can you see why n^7 = n (mod 9) too?
3. Maybe Wilson's theorem will help. If you write p = 4k+3 then

(p-1)! = (4k+2)! = -1 (mod p)

But also

(4k+2)! = [(4k+2)(4k+1)...(2k+2)](2k+1)!
= [(-1)(-2)(-3)...(-(2k+1))](2k+1)! (mod p)
= - [(2k+1)!]^2 (mod p)

and the minus sign is there because there are an odd number of terms (in fact we have (-1)^(2k+1) = -1). Can you see where to go from here?

4. Mar 22, 2006 devious_

Oops. Looks like I'm half an hour too late. In my defense, I didn't click the "Post" button properly, then went to make dinner, only to come back and see that it didn't actually post.

5.
Mar 22, 2006 neurocomp2003

First one... you should know phi(n); now treat mn as a single integer "mn"... what is phi("mn")?

6. Mar 22, 2006 randommacuser

I like this last suggestion for #1. I can do all that and I see where it is headed, but at the end I have expressions for phi(n) and phi(mn) that depend on different primes (or at least I can't prove they are the same). How do I use m|n to prove this?

As I suspected, #2 is a lot easier than I was making it. I have a bit of a related question, though. If 3 does not divide n, how can I prove that n^7 = n mod 9?

Haven't had a chance to look at #3 yet, but I will.

7. Mar 22, 2006 shmoe

The primes involved in the factorizations of n and m*n have to be the same, since m|n; they will possibly appear to different powers though.

You'd then know gcd(n, 9) = 1, so...

8. Mar 22, 2006 randommacuser

Got them all, I think. Thanks everyone!

9. Mar 23, 2006 Hurkyl Staff Emeritus

A useful form of the formula for phi(n) that I don't see often is:

$$\varphi(n) = n (1 - \frac{1}{p_1}) (1 - \frac{1}{p_2}) \cdots$$

which makes your problem (1) trivial!
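All three claims in the thread can be checked numerically before attempting a proof. This is a quick brute-force check (not from the thread; the totient here is deliberately naive and only suitable for small n):

```python
from math import gcd, factorial

def phi(n):
    # Brute-force Euler totient: count 1 <= k <= n with gcd(k, n) = 1.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# 1. If m | n then phi(m*n) = m * phi(n).
for n in range(1, 60):
    for m in range(1, n + 1):
        if n % m == 0:
            assert phi(m * n) == m * phi(n)

# 2. If 9 | n then n^7 = n (mod 63).
assert all(pow(n, 7, 63) == n % 63 for n in range(0, 630, 9))

# 3. For primes p = 3 (mod 4), ((p-1)/2)! = +/- 1 (mod p).
for p in (7, 11, 19, 23, 31, 43):
    assert factorial((p - 1) // 2) % p in (1, p - 1)
```

Both residues in claim 3 actually occur: for p = 7 the half-factorial is 6, which is -1 mod 7, while for p = 23 it is +1 mod 23.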
http://www.techjail.net/solved-couldnot-start-service-apache-tomcat-server-on-windows.html
## [SOLVED] Could not start the Apache Tomcat service on Windows

Hi, my uncle has a laptop with Windows 7 and asked me to configure an Apache Tomcat server on his PC. After I installed the server it worked, but from the very next day it would not start at startup, nor when started manually. I opened "services.msc" and viewed the service, but still had no luck starting it. I viewed the location of the service executable and found something like "tomcat7.exe //RC//Apache" (I really forgot that). Then I opened the location of the bin folder in Explorer:

Program Files\Apache…\Tomcat\bin\

After that I opened a command prompt there, tried running the Tomcat executable with the required parameter, and found that some DLL was missing. Then I opened the log file and saw something like:

[error] The specified module could not be found.
[error] Failed creating java C:\Program Files (x86)\Java\jre1.6.0_03\bin\client\jvm.dll
[error] The specified module could not be found.
[error] ServiceStart returned 1
[error] The specified module could not be found.

So I traced the path C:\Program Files (x86)\Java\jre1.6.0_03\bin\client\jvm.dll and found that the file was missing, so I tried locating the file elsewhere. Since my uncle isn't a developer, he hadn't kept any JRE around, and I found jvm.dll under the JDK location instead. Then I followed a few steps in the Tomcat service configuration dialog (shown in the screenshot): I browsed to the JDK's jvm.dll and set it as the JVM for the service.

Hope this solution works

Enjoy
https://docs.bmc.com/docs/ServerAutomation/87/using/launching-bmc-server-automation/rules-for-entering-paths
# Rules for entering paths BMC Server Automation requires you to enter paths using conventions that are atypical for Microsoft Windows or UNIX platforms. See the following sections: ## Entering UNIX host names For UNIX, precede a host name with two slashes. Use a slash to identify a directory. The following is an example of a directory on a UNIX host called unixtest1: //unixtest1/usr/bin ## Entering Windows host names For Windows, precede the host name with two slashes. For files, COM+, and metabase, use slashes to identify the disk drive, folders, and sub-folders. Windows paths are not case sensitive. The following is an example of a folder on a Windows host called win2ktest1: //win2ktest1/c/winnt/system32 The following is an example of a path to a COM+ property: Applications/IIS Utilities/Activation The following is an example of a path to a metabase value: LM/W3SVC/Default Web Site/ServerSize When entering a path to an item in the registry, use backslashes. The following is an example of a path to a registry value: When you enter a file path, if you do not specify a disk drive for a Windows machine, BMC Server Automation defaults the path to the C drive. For example, BMC Server Automation considers the following paths to be the same: • //win2ktest1/winnt • //win2ktest1/c/winnt ## Entering paths to configuration file entries When you enter paths to configuration files on Windows or UNIX, use a double slash (//) to separate the path to the configuration file from the path to a hierarchy within the file. For example, you might identify a configuration file entry as follows: /c/winnt/odbc.ini//Excel files/Driver32 In this example, /c/winnt/odbc.ini provides the path to a configuration file, while Excel files/Driver 32 is the path to an entry in the configuration file. ## Slashes appearing in values For most hierarchical assets, the path separator is a slash. To enter a value that includes a slash, you must "escape" the slash by preceding it with a backslash. 
For example, to create the value driver/32, enter: driver\/32 Note The Windows registry is an exception because its path separator is a backslash. If you enter a value in a Windows registry path that includes a backslash, you must escape the backslash by preceding it with a slash. For example, to create a registry value named C:\winnt, enter C:/\winnt. ## Using trailing slashes When you enter a folder or directory name, BMC Server Automation does not support trailing slashes. If you enter a path to a registry value with a name of empty string (that is, ""), you can use a trailing slash, as follows:
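The two escaping rules above are mirror images of each other, and can be illustrated with a small helper. This is purely illustrative; BMC Server Automation does not ship such a function, and `escape_bsa_value` is a hypothetical name:

```python
def escape_bsa_value(value, in_registry=False):
    """Escape path-separator characters in a value, per the rules above.

    Most hierarchical assets use '/' as the path separator, so a literal
    '/' inside a value is escaped with a preceding backslash. The Windows
    registry uses '\\' as the separator, so there a literal backslash is
    escaped with a preceding '/'.
    """
    if in_registry:
        return value.replace("\\", "/\\")
    return value.replace("/", "\\/")

# The two examples from the documentation:
assert escape_bsa_value("driver/32") == "driver\\/32"        # driver\/32
assert escape_bsa_value("C:\\winnt", in_registry=True) == "C:/\\winnt"
```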
https://us.sofatutor.com/mathematics/videos/parts-of-areas-and-circumferences-of-circles
# Parts of Areas and Circumferences of Circles (04:21 minutes)

Video Transcript

## Videos in this Topic

Use Equations and Inequalities to Solve Geometry Problems (9 Videos)

## Parts of Areas and Circumferences of Circles: Exercise

### Would you like to apply what you have learned? With the exercises for the video Parts of Areas and Circumferences of Circles you can review and practice it.

• #### Find the area of the sectors.

##### Hints

$A_{\text{circle}} = \pi~r^2$, where $\pi\approx 3.14$ and $r$ is the radius of the circle.

Don't forget about the units. Areas are in $m^2$ or ...

##### Solution

For the polar bear figure skating the coordinators need the largest area. The other area will be used for the penguin sumo wrestling. So the coordinators have to determine the area of a part of a circle twice. They need the formula for the area of a circle: $A_{\text{circle}}=\pi~r^2$, with $\pi\approx3.14$ and $r$ the radius of the circle. Because each ice floe is just a part of a circle, they have to multiply this formula by the corresponding fraction.

Let's start with ice floe number $1$: $A_{Floe1}=\frac13~\pi~r^2$. Plugging in the given values ($\pi\approx 3.14$ and $r=30~m$) we get

$\begin{array}{rcl} A_{Floe1}&=&\frac13(3.14)(30)^2\\ &=&\frac13(3.14)(900)\\ &\approx&942 \end{array}$

The area of ice floe number $1$, which is what we were looking for, is $942~m^2$.

In a similar way we can determine the area of ice floe number $2$. Here we have to multiply the area formula by $\frac34$: $A_{Floe2}=\frac34~\pi~r^2$. Again we plug in the given values for $\pi$ and $r=10~m$:

$\begin{array}{rcl} A_{Floe2}&=&\frac34(3.14)(10)^2\\ &=&\frac34(3.14)(100)\\ &\approx&235.5 \end{array}$

The area of ice floe number 2 is $235.5~m^2$. Thus the coordinators decide to take ice floe number $1$ for the polar bear figure skating and number $2$ for the penguin sumo wrestling.

• #### Recall the formulas for the area and circumference of both a circle and a sector of a circle.
##### Hints

$A$ is the symbol for the area, while $C$ stands for the circumference.

Pay attention to the meaning of the values:

• $r$ is the radius of the circle.
• $d=2r$ is the diameter of the circle.
• $\pi\approx 3.14$

For area we can use units such as $m^2$, and for circumference we can use units such as $m$. $r^2$ leads to the unit $m^2$.

##### Solution

Here are all the formulas you need.

• $r$ is the radius of the circle.
• $d=2r$ is the diameter of the circle.
• $\pi\approx 3.14$
• The area is given by $A_{\text{circle}}=\pi~r^2$. The units for the area are $m^2$ or ...
• The circumference is given by $C_{\text{circle}}=2~\pi~r=\pi~d$. The units for the circumference are $m$ or ...

If we want to determine the area or circumference of part of a circle, we have to multiply those formulas by the corresponding fraction. For the given examples we get:

• The area of a third of a circle: $A_{\text{sector of a circle}}=\frac13~\pi~r^2$
• The circumference of three quarters of a circle: $C_{\text{sector of a circle}}=\frac34~2~\pi~r$

• #### Find the circumferences of the sectors.

##### Hints

$C_{\text{circle}}=2~\pi~r$, where $\pi\approx 3.14$ and $r$ is the radius of the circle.

Don't forget about the units. The circumference of a circle is a length, which in this case will be given in $m$, and in other cases could be $ft$, $km$, or $mi$.

##### Solution

The coordinators of the games have a lot of work to do. Now, for the walrus swimming event, they have to find out which sector has the longest circumference. Therefore, they have to determine the circumferences of both ice floes. Fortunately, they know the formula for the circumference of a circle: $C_{\text{circle}}=2~\pi~r$, with $\pi\approx3.14$ and $r$ the radius of the circle. Because each ice floe isn't a whole circle (it's just a part), they have to multiply this formula by the corresponding fraction. They start with ice floe number $1$: $C_{Floe1}=\frac13~2~\pi~r$.
Plugging in the given values ($\pi\approx 3.14$ and $r=30~m$) they get

$\begin{array}{rcl} C_{Floe1}&=&\frac13(2)(3.14)(30)\\ &=&\frac13(3.14)(60)\\ &\approx&62.8 \end{array}$

The circumference of ice floe number $1$ is $62.8~m$.

They still have to determine the circumference of ice floe number $2$. For this they multiply the circumference formula by $\frac34$: $C_{Floe2}=\frac34~2~\pi~r$. Finally they plug in the given values for $\pi$ and $r=10~m$:

$\begin{array}{rcl} C_{Floe2}&=&\frac34(2)(3.14)(10)\\ &=&\frac34(3.14)(20)\\ &\approx&47.1 \end{array}$

This gives them the circumference of ice floe number $2$ as approximately $47.1~m$. Wow, they got it. They decide to use ice floe number $1$ for the walrus swimming event because it has the longer circumference. The games will start.

• #### Solve for the circumference of the ice floe for the Dolphin sprint.

##### Hints

Use the formula for the circumference of a circle: $C_{\text{circle}}=2~\pi~r$.

Multiply the circumference formula by the corresponding fraction.

The smallest value is $26.17~m$ and the largest one $47.1~m$.

##### Solution

We have to use the formula for the circumference of a circle, $C_{\text{circle}}=2~\pi~r$. For each given ice floe we multiply this formula by the corresponding fraction. The ice floes are already in the right order:

Ice Floe #5: $C_{\text{Floe5}}=\frac16~2~\pi~r=\frac16(2)(3.14)(25)\approx 26.17$. The circumference of this floe is given by $26.17~m$.

Ice Floe #4: $C_{\text{Floe4}}=\frac14~2~\pi~r=\frac14(2)(3.14)(18)=28.26$. The circumference of this floe is given by $28.26~m$.

Ice Floe #1: $C_{\text{Floe1}}=\frac12~2~\pi~r=\frac12(2)(3.14)(10)=31.4$. The circumference of this floe is given by $31.4~m$.

Ice Floe #2: $C_{\text{Floe2}}=\frac25~2~\pi~r=\frac25(2)(3.14)(15)=37.68$. The circumference of this floe is given by $37.68~m$.

Ice Floe #3: $C_{\text{Floe3}}=\frac35~2~\pi~r=\frac35(2)(3.14)(12)=45.216$. The circumference of this floe is given by $45.216~m$.
Ice Floe #6: $C_{\text{Floe6}}=\frac38~2~\pi~r=\frac38(2)(3.14)(20)=47.1$. The circumference of this floe is given by $47.1~m$.

So the organizers should best choose ice floe #6 if it's still available.

• #### Decide which ice floe has enough area to house the Polar Games Village.

##### Hints

Use the approximate value $\pi\approx 3.14$.

Each time you have to multiply the area formula $A_{\text{circle}}=\pi~r^2$ by the given fraction.

Just two ice floes are suitable.

##### Solution

We use the formula for the area of a circle, $A_{\text{circle}}=\pi~r^2$. Each time we have to multiply this formula by the given fraction.

Ice Floe #1: $A_{\text{Floe1}}=\frac12~\pi~r^2=\frac12(3.14)(20)^2=628$. Great, this ice floe is suitable because the area $628~m^2$ is larger than $527~m^2$.

Ice Floe #2: $A_{\text{Floe2}}=\frac23~\pi~r^2=\frac23(3.14)(15)^2=471$. Too bad, this area of $471~m^2$ is less than they require.

Ice Floe #3: $A_{\text{Floe3}}=\frac25~\pi~r^2=\frac25(3.14)(20)^2=502.4$. This floe isn't suitable either; the area of $502.4~m^2$ is too small.

Ice Floe #4: $A_{\text{Floe4}}=\frac34~\pi~r^2=\frac34(3.14)(15)^2\approx 530$. Awesome. This ice floe, with an area of $530~m^2$, is suitable for the village.

Ice Floe #5: $A_{\text{Floe5}}=\frac14~\pi~r^2=\frac14(3.14)(25)^2\approx 491$. Sorry, that's not large enough. The village can't be built on this ice floe.

• #### Calculate the area of the stadium seating.

##### Hints

The circumferences of the sectors are already given.

Take care of the measures: the circumference is a length and thus has the measure $m$ or ...

Just multiply the given circumferences by the corresponding fraction.

##### Solution

We already know the circumferences of the ice floes:

• $C_{\text{Floe1}}=62.8~m$
• $C_{\text{Floe2}}=47.1~m$

To determine the needed length for the seating we still have to multiply those values by the given fraction.

• For ice floe #1 we get $\frac16\times 62.8~m\approx 10.47~m$.
• The length corresponding to ice floe #2 is given by $\frac38\times 47.1~m\approx 17.66~m$.
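All of these exercises apply the same two sector formulas, so they can be wrapped in a pair of helper functions. A minimal sketch in Python (the function names are my own; `math.pi` is used instead of the video's rounded value $\pi\approx 3.14$, so results differ slightly from the worked values):

```python
import math

def sector_area(radius, fraction):
    """Area of a sector covering `fraction` of a full circle: fraction * pi * r^2."""
    return fraction * math.pi * radius ** 2

def sector_arc_length(radius, fraction):
    """Arc length of a sector covering `fraction` of a full circle: fraction * 2 * pi * r."""
    return fraction * 2 * math.pi * radius

# Ice floe #1 from the exercises: a third of a circle with r = 30 m
print(round(sector_area(30, 1 / 3), 2))        # 942.48 (the text's 942 uses pi ~ 3.14)
print(round(sector_arc_length(30, 1 / 3), 2))  # 62.83  (the text's 62.8 uses pi ~ 3.14)
```

Passing the fraction explicitly keeps the helpers reusable for any sector, e.g. `sector_area(10, 3 / 4)` for ice floe #2.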
https://brilliant.org/problems/boiling-points-of-covalent-hydrides/
# Boiling Points of Covalent Hydrides

Chemistry, Level pending

If hydrogen bonding is NOT considered, which of the following compounds would you expect to have the highest boiling point: $\ce{HF}, ~~\ce{HCl}, ~~\ce{HBr}, ~~\text{or} ~~\ce{HI}?$

If hydrogen bonding IS considered, which of the following compounds would you expect to have the highest boiling point: $\ce{HF}, ~~\ce{HCl}, ~~\ce{HBr}, ~~\text{or} ~~\ce{HI}?$
http://sankhya.isical.ac.in/articles/286
## Article

#### Title: A Berry-Esséen Inequality without Higher Order Moments

##### Issue: Volume 78, Series A, Part 2, Year 2016

###### Abstract

In this note, a generalized form of the celebrated Berry-Esséen inequality is developed, assuming only second-order moments to exist, in the case of independent but not identically distributed random variables. The result generalizes and unifies many well known and highly used forms of the Berry-Esséen inequality.
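For context, the classical i.i.d. Berry-Esséen bound that such results generalize requires a finite third moment. The statement below is standard background knowledge, not taken from the abstract:

```latex
% Classical Berry-Esseen bound (i.i.d. case with a third moment):
% X_1, ..., X_n i.i.d., E X_i = 0, E X_i^2 = \sigma^2 > 0,
% \rho = E|X_i|^3 < \infty, S_n = X_1 + ... + X_n,
% \Phi the standard normal CDF, C an absolute constant.
\[
  \sup_{x\in\mathbb{R}}
  \left|\,\Pr\!\left(\frac{S_n}{\sigma\sqrt{n}}\le x\right)-\Phi(x)\right|
  \;\le\; \frac{C\,\rho}{\sigma^{3}\sqrt{n}} .
\]
```

The note's contribution, per the abstract, is dispensing with the third-moment requirement $\rho<\infty$ in the non-identically-distributed setting.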
http://openstudy.com/updates/4f9c755ae4b000ae9ed17b12
## KingGeorge (4 years ago): [SOLVED] George's problem of the [insert arbitrary time unit]

Define a function $f: \mathbb{Z}^+ \longrightarrow \mathbb{Z}^+$ such that $f$ is strictly increasing, is multiplicative, and $f(2)=2$. Show that $f(n) = n$ for all $n$.

Hint 1: You need to find an upper and a lower bound for a certain $n$ and show that the bounds are the same.

Hint 2: Find the upper and lower bound for $n=18$. Using this, show that $f(3)=3$. Now deduce that $f(n)=n$ for all $n$.

[EDIT: It should be noted that this problem is relatively difficult (but only if you don't see the right process)]

1. **KingGeorge:** Strictly increasing means that if $a>b$, then $f(a)>f(b)$. Multiplicative means that if $a, b$ are coprime, then $f(ab)=f(a)f(b)$.

2. **beginnersmind:** Do you need help with this or is it more of a puzzle? (I managed to prove that $f(3)=3$.)

3. **KingGeorge:** More of a puzzle. I know how to get the answer. BTW, how did you prove $f(3)=3$?

4. **beginnersmind:** $2=f(2)<f(3)<f(4)=f(2)\cdot f(2)=4$

5. **KingGeorge:** 2 and 2 are not coprime, so that doesn't work.

6. **beginnersmind:** oh

7. **anonymous:** Any ring automorphism on Z that takes 2 to itself will by definition take every n to itself? (Just a guess, I'm not that far along yet.)

8. **KingGeorge:** That's certainly not the solution I have.

9. **anonymous:** Well it's kind of part of the definition of Z, isn't it, that what you describe in the problem would be the case? I mean, it should be a pretty straightforward proof by induction, I would think. But I might be misunderstanding the question.

10. **KingGeorge:** I'm using $\mathbb{Z}^+$, which is equivalent to $\mathbb{N}$ without 0. I'm just repeating the problem the way it was given to me.

11. **anonymous:** Hey! You can answer a question I have! I just discovered this site today. How do you do inline TeX? And yeah, I follow what ring you're referring to, I just don't see what makes the problem interesting, I guess.

12. **beginnersmind:** I need some help, I know almost no number theory. To define a multiplicative function I need to define it on all powers of all primes, right?

13. **anonymous:** Sorry, not a ring at all, just a set. Even so, it seems like a pretty simple induction proof to show that $f(n)=n$ if $f(2)=2$. Still could be missing something though.

14. **KingGeorge:** Use "\ (" instead of "\ [" (without the spaces) to do inline $\LaTeX$. Defining it on all powers of primes would certainly be helpful. Also, this is not just a simple induction proof.

16. **anonymous:** Awesome. Most sites I use just use the $ notation, haven't seen it with parentheses before. Also, yes, I see where the problem is with this, will think about it for some time and try to work it out.

17. **anonymous:** 1 is coprime to every integer, so $f(n)=n$ by definition.

18. **anonymous:** That just means $f(n\cdot 1)=f(n)f(1)$. First you need to show $f(1)=1$, which can be done like this: $f(2)=f(2\cdot 1)=f(2)f(1)\Longrightarrow 2=2f(1)\Longrightarrow f(1)=1$

19. **beginnersmind:** I think that only proves $f(n)=f(n)\cdot f(1)=f(n)$

20. **anonymous:** Right, what Joe said.

21. **beginnersmind:** Anyway, I'm stuck, I'll read KG's solution later.

22. **anonymous:** Or alternatively, the way that I thought about it: if $f$ is strictly increasing, the only element left in $\mathbb{Z}^+$ that is less than 2 is 1, so $f(1)$ has to be 1.

23. **anonymous:** I spent a good ten minutes thinking about rebuilding the series of primes, should have remembered, always start with the identity :P

24. **beginnersmind:** George Polya said, if you can't solve the problem, try to solve an easier one that's similar. Any suggestions?

25. **beginnersmind:** Looking only at the set of primes and numbers that are the product of n distinct primes, what constraint do we get for $f$ if we want to keep it strictly increasing?

26. **anonymous:** I tried building it up from primes that way and hit a dead end, which is when I went back to basics and realized you could just force the definition via the identity. Building it by primes seems a bit more difficult, and I'm not positive how you would go about it.

27. **beginnersmind:** "and realized you could just force the definition via the identity" — what do you mean by that?

28. **experimentX:** $f(2\cdot 1) = f(2)f(1)$, so $f(1) = 1$. $f(3\cdot 2) = f(3)\cdot 2$, so $f(3) = f(6)/2$, $f(5) = f(10)/2$, and $f(n) = f(2n)/2$ for odd $n$.

29. **KingGeorge:** @experimentX That is correct, but it still doesn't solve the original problem.

30. **KingGeorge:** I have posted a hint for those who had previously participated.

31. **anonymous:** $f(2) = f(1\cdot2) = f(1)\cdot f(2) = 2 \implies f(1) = 1$. Let's assume $f(k-1) = k-1$ and the same follows up from 2. Then $f(k) = f\left(\frac k 2 \cdot 2\right) = f\left(\frac k2\right)\cdot f(2)$ with $1 < \frac{k}2 < k-1$. Is this enough lol? :/ :(

32. **KingGeorge:** If you knew that $f(n)=n$ for at least one value of $n\geq3$ that would work (with a little bit of fixing). But first you need to show that $f(n)=n$ for some value of $n$ that's not 1 or 2.

33. **anonymous:** $2=f(2)<f(3)<f(4)=f(2)\cdot f(2)=4$ — what's wrong with this one?

34. **KingGeorge:** 2 is not coprime to 2. So $f(2)f(2)$ does not necessarily equal $f(4)$.

35. **experimentX:** $f(2) = f(1+1) = f(1) + f(1)$???

36. **anonymous:** I don't understand it. Can you give me an example which shows $f(2)f(2)$ isn't necessarily $f(4)$? Sorry if I sound dumb.

37. **KingGeorge:** @experimentX Not necessarily. This is not necessarily a homomorphism. @Ishaan94 Suppose $f(3) = 10$ and $f(6)=f(2)f(3)=20$. $f(4)$ could be anywhere in between. The function is given as multiplicative (see first post for definition), so if the two arguments don't have a gcd of 1, you can't say anything about the product.

38. **KingGeorge:** @experimentX It would probably be simpler to say that the function is just multiplicative. Not additive, and what you posted was the definition of additive.

39. **anonymous:** $f(3)>f(2) \implies f(3) \ge f(2) + 1 \implies f(3) \ge 3$, and $f(4) \ge 4 \implies f(n) \ge n$. Can we assume $f(n)=n+k$? But $k$ may not necessarily be a constant term, right?

40. **anonymous:** But this doesn't help either :/

41. **KingGeorge:** That's the right direction. If you follow that direction correctly, you'll be able to get the lower bound you need. To get the upper bound you need, you'll need to change things a little bit first.

42. **KingGeorge:** I'll post a second, more helpful hint in half an hour or so if you think you need it.

43. **anonymous:** $f(3) =3 +k$, $f(6) =f(3)f(2) = 6+2k$, $f(5)< 6 +2k$, $f(4)> 3+k$, so $f(5) \le 5 + 2k$ and $f(4) \ge 4+ k$, and $f(5) \ge f(4) + 1 \implies f(4)+1\le f(5)\le5+2k$. If I prove $f(4) +1 = 5+2k$ then this might work out, but $f(4) +1 \ge 5+k$. It's just more and more and more inequalities. :/

44. **anonymous:** From the above inequalities, $5+ k\le f(5) \le 5+2k$

45. **KingGeorge:** Have posted 2nd hint. I would also suggest letting $f(3)=k$. It makes the manipulations easier.

46. **anonymous:** Wait... $f(4) + 1 \le 5+ 2k \implies f(4) \le 4 +2k \implies f(3) <f(4) \le 4+2k \implies 4 + k \le f(4) \le 4 + 2k \implies n + k \le f(n) \le n + 2k$. Wow, nice. $4 + 2k< 5 + k \implies k< 1$. Yay! $k$ cannot be less than zero as $f(n)$ is supposed to be an increasing function, so $k$ has to be zero. Yay! But I am not sure if it's right. :/

47. **KingGeorge:** I don't think that's quite right. It's almost correct, but in your last expression, $4+2k>5+k$, I'm pretty sure the second $k$ is not the same $k$.

48. **KingGeorge:** If you could convince me that the $k$'s are in fact the same, you would indeed have a proof.

49. **anonymous:** No, it's the same. Really? $f(3) >2 \implies f(3) \ge f(2) + 1 \implies f(3) \ge 3 \implies f(n) \ge n$. Let $f(3) = 3 + k$, so $f(6) = f(2) \cdot f(3) = 6 + 2k$. Then:
(1) $f(5) < f(6) \implies f(5) < 6+ 2k \implies f(5) \le 5 + 2k$
(2) $f(4) < f(5) \implies f(4) + 1 \le f(5)$
(3) From 1 and 2: $f(4) +1 \le f(5) \le 5+2k$
(4) $f(3) < f(4) \implies f(3) + 1 \le f(4) \implies 4 + k \le f(4)$
(5) From 4 and 3: $5 + k \le f(5) \le 5 + 2k$
(6) Also from 3: $f(4) + 1\le 5+2k \implies f(4) \le 4 + 2k$
(7) From 4 and 6: $4+k \le f(4) \le 4+ 2k \implies n+k \le f(n) \le n +2k$
(8) From 5 and 6: $4+2k < 5+k \implies k<1 \implies k=0$
From 7 and 8: $n\le f(n)\le n \implies f(n) = n$

50. **anonymous:** Can I type QED now? :D

51. **KingGeorge:** I think I see the actual problem now. You're trying to compare an upper bound for 4 and a lower bound for 5. Your claim is that those bounds can't overlap, but I don't see anything that shows those bounds can't overlap. It is true, of course, that the best bounds don't overlap, but these bounds are not as small as possible. If $f(4)=4+2k$, then of course $4+2k<f(5)<6+2k$, and we're done. But what if $f(4)<4+2k$? That is still possible and we no longer get the same implication.

53. **anonymous:** Ohh hmm :( I will try again.

54. **anonymous:** Hmm, but I still do have the bounds $n+k \le f(n) \le n+2k$. Wait... for $n=2$: $2+k \le f(2) \le 2+2k \implies 2+ k \le 2\le 2+2k \implies k=0$

56. **anonymous:** Shall I type QED now? :D

57. **KingGeorge:** You have those bounds for $n \geq3$, so you can't use it for $n=2$. For $n=2$, you're given that it's 2. Your method is great, you just need to refine it to get an upper and lower bound for a single number and show that the bounds are the same.

58. **anonymous:** Oh, I knew something was wrong. Hmm, I will try again. (Y) :D

59. **KingGeorge:** It's very helpful to set $f(3)=a$. From there, get a lower and upper bound for $f(18)$. If you do it just right, you'll get a quadratic that you can solve for $a$.

60. **anonymous:** I can't take $f(18)=f(3)f(6)$. Can I? I think $f(2)\cdot f(9)$ is right. Hmm, $2f(9)$.

61. **KingGeorge:** That's the right track. Now what's the upper bound of $f(9)$ in terms of $f(3)$?

62. **KingGeorge:** Hint: $f(9)\leq f(10)-1$

63. **anonymous:** $4k-3$ or $4a -3$?

64. **KingGeorge:** Right. So this means that $f(18) \leq \text{____}$

65. **anonymous:** $8k - 6 \ge f(18)$. How do I solve it for $f(17)$?

66. **KingGeorge:** Ignore $f(17)$. You found the upper bound of $f(18)$. Now we need to find the lower bound for $f(18)$. Once again, you want to start at $f(3)=k$.

67. **KingGeorge:** In this part, you'll want to get a quadratic. You'll want to look at $f(3)$ and $f(5)$ and use these to get to $f(18)$.

68. **anonymous:** I can only get $f(15)$ using $f(3)$ and $f(5)$

69. **KingGeorge:** And from $f(15)$ you can get $f(18)$

70. **anonymous:** $2k + k^2 \le f(15) \le 2k^2 -k$

71. **KingGeorge:** Just use $2k + k^2 \le f(15)$. Now this means that $\text{______}\le f(18)$

72. **anonymous:** $(k+1)^2 + 2 \le f(18) \le 8k+6$

73. **KingGeorge:** $(k+1)^2 + 2 \le 8k+6$. Now just solve for $k$.

74. **anonymous:** $(k+1)^2 + 2 \le 8k + 6 \implies k^2 + 2k+3 \le 8k+6 \implies k^2 -6k - 3\le0 \implies 3 -2\sqrt3 \le k \le 3 + 2\sqrt3$. But is this what I am supposed to get?

75. **KingGeorge:** There was a typo that was making this rather hard. It should be $(k+1)^2 + 2 \le f(18) \le 8k-6$. You had this right originally; there was just a typo that was perpetuated. Try and solve this one.

76. **anonymous:** Ohh, I can't concentrate at all :( I see now: $3\le k \le 3 \implies k=3 \implies f(3) =3$

77. **anonymous:** So we finally have $f(3)=3$. But I don't get why we had to go all the way from 4 and 5 to 18. Why a quadratic?

78. **KingGeorge:** We can get the relation $k^2-6k+9 \le 0\implies (k-3)^2 \le 0$. There is only one value of $k$ that solves this, $k=3$. That's why we need a quadratic. It allows us to get an inequality with only one solution. If it were linear, we would have a line with infinitely many solutions satisfying the inequality.

79. **anonymous:** Eh, the proof still isn't complete; I will have to show it for $f(n)$. I think I will have to use induction. $f(3) =3$. Let's assume it works up from 3 to $2k-1$. $f(k) = 2\cdot f\left(k\right) = 2k$. I am not sure if it's right. I can only recall strong induction from your previous problem.

80. **KingGeorge:** To show it for all $n$, just take $f(2)f(3)=f(6)=2\cdot3=6$. So $f(4)=4$ and $f(5)=5$ since it has to be strictly increasing. We can just continue this process up to infinity.

81. **anonymous:** And why wasn't $f(1)$ enough for us to use induction? Why did we need to solve it for $f(3)$?

82. **KingGeorge:** $f(1)$ just let us get $f(2)$. We needed a value greater than 2 to generate larger numbers using the multiplicative rule.

83. **anonymous:** Ohh, thanks. I couldn't have done this without your help.

84. **KingGeorge:** You did very well. =D

85. **KingGeorge:** I appreciate the amount of time you spent on this. Thanks for actually doing this.
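The decisive step in the thread is that the two bounds $(k+1)^2 + 2 \le f(18) \le 8k - 6$, with $k = f(3)$, leave only $k = 3$. That step is easy to sanity-check mechanically; a quick sketch (the scan limit of 1000 is an arbitrary cutoff, and the scan starts at $k = 3$ because $f$ is strictly increasing with $f(2) = 2$):

```python
# Which integers k >= 3 satisfy (k+1)^2 + 2 <= 8k - 6?
# Algebraically this reduces to (k-3)^2 <= 0, so only k = 3 should survive.
candidates = [k for k in range(3, 1000) if (k + 1) ** 2 + 2 <= 8 * k - 6]
print(candidates)  # [3]
```

Since the left side grows quadratically and the right side only linearly, the finite cutoff loses nothing.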
http://tex.stackexchange.com/questions/71196/difference-between-begintheorem-endtheorem-and-theorem
# Difference between \begin{theorem}…\end{theorem} and {\theorem …}

I see that both attempts to display a theorem in the following code generate similar output.

    \documentclass{article}
    \usepackage{amsthm}
    \newtheorem{theorem}{Theorem}
    \begin{document}
    \begin{theorem}
    $$(a + b)^2 = a^2 + 2ab + b^2$$
    \end{theorem}
    { \theorem $$(a + b)^2 = a^2 + 2ab + b^2$$ }
    \end{document}

I want to know what the difference between \begin{theorem}...\end{theorem} and {\theorem ...} is.

- Are you familiar with Plain TeX and LaTeX differences? – percusse Sep 11 '12 at 20:02
- @percusse No, I work only with LaTeX. Is \begin{theorem}...\end{theorem} the LaTeX way of writing a theorem while {\theorem ...} is the TeX way of writing a theorem? – Lone Learner Sep 11 '12 at 20:04
- In a nutshell yes, but I'm not entitled to give a comprehensive answer. This is also the root of the confusion about \center, \centering and \begin{center}...\end{center}. See for example tex.stackexchange.com/questions/2651/… – percusse Sep 11 '12 at 20:06
- a misguided user just sent a bug report to the latex bugs site saying that the heading font for all theorems after using \remark{...} was changed to italic. (he didn't use braces to isolate the theorem input.) all latex documentation says that theorems are to be input as environments, not commands, so clearly, the instructions weren't being followed. egreg's answer gives good reasons why one should follow the instructions. – barbara beeton Sep 18 '12 at 15:04

Assume foo is some environment; with \begin{foo} LaTeX does some bookkeeping, opens a group and expands the macro \foo. With \end{foo} some checks are performed, \endfoo is expanded and the group is closed. In the case of theorem, we can test

    \show\theorem
    \show\endtheorem

which gives

    > \theorem=macro:
    ->\@thm {\let \thm@swap \@gobble \th@plain }{theorem}{Theorem}.
    > \endtheorem=macro:
    ->\endtrivlist \@endpefalse .
It may seem that \endtheorem is no big deal; but let's see what \endtrivlist means: > \endtrivlist=macro: ->\if@inlabel \leavevmode \global \@inlabelfalse \fi \if@newlist \@noitemerr \global \@newlistfalse \fi \ifhmode \unskip \par \else \@inmatherr {\end {\@currenvir }}\fi \if@noparlist \else \ifdim \lastskip >\z@ \@tempskipa \lastskip \vskip -\lastskip \advance \@tempskipa \parskip \advance \@tempskipa -\@outerparskip \vskip \@tempskipa \fi \@endparenv \fi . So you're missing several things if you omit \end{theorem}. Perhaps, in the case of theorem not much is missed, but only getting "similar" output doesn't guarantee that, maybe some pages later, something goes awry. The most striking aspect in the particular case is that the vertical spacing after the statement will be wrong, even if you leave an empty line after the closing brace. This practice is definitely not recommendable: some environments do the bulk of their work exactly at \end...; others do almost nothing at that stage. One should know in depth what every environment does. Finally, the {\theorem ...} syntax is clumsy. - Nice! Where should I type \show\theorem\show\endtheorem to see that lines? –  Sigur Sep 11 '12 at 22:08 @Sigur Just add them to your document and compile it from the command line. You can get a "printed" version good for non interactive runs with {\ttfamily\meaning\theorem}. –  egreg Sep 11 '12 at 22:10 It's interesting to note that \theorem survives without \endtheorem. Yes you "missing several things" but there's no compile error. This is because there's no grouping between the two commands; something inherent when using the \begin{theorem}...\end{theorem} pair. –  Werner Sep 11 '12 at 22:10 @Sigur: Review the .log file after the run, since \show flushes its contents there. –  Werner Sep 11 '12 at 22:11 @Werner No compile error doesn't mean that the output is correct. 
–  egreg Sep 11 '12 at 22:28 As soon as you add some material after both constructs you'll see the difference; \end{theorem} uses \endtrivlist which internally uses \par, effectively ending a paragraph; in the second construct there's no paragraph ending: \documentclass{article} \usepackage{amsthm} \newtheorem{theorem}{Theorem} \begin{document} \begin{theorem} $$(a + b)^2 = a^2 + 2ab + b^2$$ \end{theorem} aaa { \theorem $$(a + b)^2 = a^2 + 2ab + b^2$$ } aaa \end{document} -
http://cms.math.ca/cjm/msc/20F55
location:  Publications → journals Search results Search: MSC category 20F55 ( Reflection and Coxeter groups [See also 22E40, 51F15] ) Results 1 - 10 of 10

1. CJM 2013 (vol 66 pp. 323) Hohlweg, Christophe; Labbé, Jean-Philippe; Ripoll, Vivien. Asymptotical behaviour of roots of infinite Coxeter groups. Let $W$ be an infinite Coxeter group. We initiate the study of the set $E$ of limit points of "normalized" roots (representing the directions of the roots) of $W$. We show that $E$ is contained in the isotropic cone $Q$ of the bilinear form $B$ associated to a geometric representation, and illustrate this property with numerous examples and pictures in rank $3$ and $4$. We also define a natural geometric action of $W$ on $E$, and then we exhibit a countable subset of $E$, formed by limit points for the dihedral reflection subgroups of $W$. We explain how this subset is built from the intersection with $Q$ of the lines passing through two positive roots, and finally we establish that it is dense in $E$. Keywords: Coxeter group, root system, roots, limit point, accumulation set. Categories: 17B22, 20F55.

2. CJM 2013 (vol 66 pp. 481) Aguiar, Marcelo; Mahajan, Swapneel. On the Hadamard Product of Hopf Monoids. Combinatorial structures that compose and decompose give rise to Hopf monoids in Joyal's category of species. The Hadamard product of two Hopf monoids is another Hopf monoid. We prove two main results regarding freeness of Hadamard products. The first one states that if one factor is connected and the other is free as a monoid, their Hadamard product is free (and connected). The second provides an explicit basis for the Hadamard product when both factors are free. The first main result is obtained by showing the existence of a one-parameter deformation of the comonoid structure and appealing to a rigidity result of Loday and Ronco that applies when the parameter is set to zero. To obtain the second result, we introduce an operation on species that is intertwined by the free monoid functor with the Hadamard product. As an application of the first result, we deduce that the Boolean transform of the dimension sequence of a connected Hopf monoid is nonnegative. Keywords: species, Hopf monoid, Hadamard product, generating function, Boolean transform. Categories: 16T30, 18D35, 20B30, 18D10, 20F55.

3. CJM 2013 (vol 66 pp. 354) Kellerhals, Ruth; Kolpakov, Alexander. The Minimal Growth Rate of Cocompact Coxeter Groups in Hyperbolic 3-space. Due to work of W. Parry it is known that the growth rate of a hyperbolic Coxeter group acting cocompactly on ${\mathbb H^3}$ is a Salem number. This being the arithmetic situation, we prove that the simplex group (3,5,3) has smallest growth rate among all cocompact hyperbolic Coxeter groups, and that it is as such unique. Our approach provides a different proof for the analog situation in ${\mathbb H^2}$ where E. Hironaka identified Lehmer's number as the minimal growth rate among all cocompact planar hyperbolic Coxeter groups and showed that it is (uniquely) achieved by the Coxeter triangle group (3,7). Keywords: hyperbolic Coxeter group, growth rate, Salem number. Categories: 20F55, 22E40, 51F15.

4. CJM 2011 (vol 63 pp. 1238) Bump, Daniel; Nakasuji, Maki. Casselman's Basis of Iwahori Vectors and the Bruhat Order. W. Casselman defined a basis $f_u$ of Iwahori fixed vectors of a spherical representation $(\pi, V)$ of a split semisimple $p$-adic group $G$ over a nonarchimedean local field $F$ by the condition that it be dual to the intertwining operators, indexed by elements $u$ of the Weyl group $W$. On the other hand, there is a natural basis $\psi_u$, and one seeks to find the transition matrices between the two bases. Thus, let $f_u = \sum_v \tilde{m} (u, v) \psi_v$ and $\psi_u = \sum_v m (u, v) f_v$. Using the Iwahori-Hecke algebra we prove that if a combinatorial condition is satisfied, then $m (u, v) = \prod_{\alpha} \frac{1 - q^{- 1} \mathbf{z}^{\alpha}}{1 -\mathbf{z}^{\alpha}}$, where $\mathbf z$ are the Langlands parameters for the representation and $\alpha$ runs through the set $S (u, v)$ of positive coroots $\alpha \in \hat{\Phi}$ (the dual root system of $G$) such that $u \leqslant v r_{\alpha} < v$ with $r_{\alpha}$ the reflection corresponding to $\alpha$. The condition is conjecturally always satisfied if $G$ is simply-laced and the Kazhdan-Lusztig polynomial $P_{w_0 v, w_0 u} = 1$ with $w_0$ the long Weyl group element. There is a similar formula for $\tilde{m}$ conjecturally satisfied if $P_{u, v} = 1$. This leads to various combinatorial conjectures. Keywords: Iwahori fixed vector, Iwahori Hecke algebra, Bruhat order, intertwining integrals. Categories: 20C08, 20F55, 22E50.

5. CJM 2009 (vol 61 pp. 740) Caprace, Pierre-Emmanuel; Haglund, Frédéric. On Geometric Flats in the CAT(0) Realization of Coxeter Groups and Tits Buildings. Given a complete CAT(0) space $X$ endowed with a geometric action of a group $\Gamma$, it is known that if $\Gamma$ contains a free abelian group of rank $n$, then $X$ contains a geometric flat of dimension $n$. We prove the converse of this statement in the special case where $X$ is a convex subcomplex of the CAT(0) realization of a Coxeter group $W$, and $\Gamma$ is a subgroup of $W$. In particular a convex cocompact subgroup of a Coxeter group is Gromov-hyperbolic if and only if it does not contain a free abelian group of rank 2. Our result also provides an explicit control on geometric flats in the CAT(0) realization of arbitrary Tits buildings. Keywords: Coxeter group, flat rank, CAT(0) space, building. Categories: 20F55, 51F15, 53C23, 20E42, 51E24.

6. CJM 2001 (vol 53 pp. 1121) Monotone Paths on Zonotopes and Oriented Matroids. Monotone paths on zonotopes and the natural generalization to maximal chains in the poset of topes of an oriented matroid or arrangement of pseudo-hyperplanes are studied with respect to a kind of local move, called polygon move or flip. It is proved that any monotone path on a $d$-dimensional zonotope with $n$ generators admits at least $\lceil 2n/(n-d+2) \rceil-1$ flips for all $n \ge d+2 \ge 4$ and that for any fixed value of $n-d$, this lower bound is sharp for infinitely many values of $n$. In particular, monotone paths on zonotopes which admit only three flips are constructed in each dimension $d \ge 3$. Furthermore, the previously known 2-connectivity of the graph of monotone paths on a polytope is extended to the 2-connectivity of the graph of maximal chains of topes of an oriented matroid. An application in the context of Coxeter groups of a result known to be valid for monotone paths on simple zonotopes is included. Categories: 52C35, 52B12, 52C40, 20F55.

7. CJM 1999 (vol 51 pp. 1307) Johnson, Norman W.; Weiss, Asia Ivić. Quadratic Integers and Coxeter Groups. Matrices whose entries belong to certain rings of algebraic integers can be associated with discrete groups of transformations of inversive $n$-space or hyperbolic $(n+1)$-space $\mbox{H}^{n+1}$. For small $n$, these may be Coxeter groups, generated by reflections, or certain subgroups whose generators include direct isometries of $\mbox{H}^{n+1}$. We show how linear fractional transformations over rings of rational and (real or imaginary) quadratic integers are related to the symmetry groups of regular tilings of the hyperbolic plane or 3-space. New light is shed on the properties of the rational modular group $\PSL_2 (\bbZ)$, the Gaussian modular (Picard) group $\PSL_2 (\bbZ[{\it i}])$, and the Eisenstein modular group $\PSL_2 (\bbZ[\omega ])$. Categories: 11F06, 20F55, 20G20, 20H10, 22E40.

8. CJM 1999 (vol 51 pp. 1240) Monson, B.; Weiss, A. Ivić. Realizations of Regular Toroidal Maps. We determine and completely describe all pure realizations of the finite regular toroidal polyhedra of types $\{3,6\}$ and $\{6,3\}$. Keywords: regular maps, realizations of polytopes. Categories: 51M20, 20F55.

9. CJM 1999 (vol 51 pp. 1175) Lehrer, G. I.; Springer, T. A. Reflection Subquotients of Unitary Reflection Groups. Let $G$ be a finite group generated by (pseudo-) reflections in a complex vector space and let $g$ be any linear transformation which normalises $G$. In an earlier paper, the authors showed how to associate with any maximal eigenspace of an element of the coset $gG$, a subquotient of $G$ which acts as a reflection group on the eigenspace. In this work, we address the questions of irreducibility and the coexponents of this subquotient, as well as centralisers in $G$ of certain elements of the coset. A criterion is also given in terms of the invariant degrees of $G$ for an integer to be regular for $G$. A key tool is the investigation of extensions of invariant vector fields on the eigenspace, which leads to some results and questions concerning the geometry of intersections of invariant hypersurfaces. Categories: 51F15, 20H15, 20G40, 20F55, 14C17.

10. CJM 1998 (vol 50 pp. 829) Putcha, Mohan S. Conjugacy classes and nilpotent variety of a reductive monoid. We continue in this paper our study of conjugacy classes of a reductive monoid $M$. The main theorems establish a strong connection with the Bruhat-Renner decomposition of $M$. We use our results to decompose the variety $M_{\nil}$ of nilpotent elements of $M$ into irreducible components. We also identify a class of nilpotent elements that we call standard and prove that the number of conjugacy classes of standard nilpotent elements is always finite. Categories: 20G99, 20M10, 14M99, 20F55.
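The flip lower bound $\lceil 2n/(n-d+2) \rceil - 1$ from entry 6 can be evaluated directly. The sketch below uses integer ceiling division to avoid float rounding; the function name is mine, not from the paper.

```python
def flip_lower_bound(n: int, d: int) -> int:
    """ceil(2n / (n - d + 2)) - 1, the lower bound on the number of
    flips admitted by a monotone path on a d-dimensional zonotope
    with n generators (stated for n >= d + 2 >= 4)."""
    if not (n >= d + 2 >= 4):
        raise ValueError("bound stated only for n >= d + 2 >= 4")
    m = n - d + 2
    return (2 * n + m - 1) // m - 1   # integer ceiling division

# For fixed d >= 3 the ratio 2n/(n-d+2) stays just above 2 as n grows,
# so the bound settles at 2, consistent with the existence of paths
# admitting only three flips.
print(flip_lower_bound(6, 4), flip_lower_bound(100, 3))  # 2 2
```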
https://ts.gluon.ai/master/api/gluonts/gluonts.mx.distribution.nan_mixture.html
gluonts.mx.distribution.nan_mixture module¶ class gluonts.mx.distribution.nan_mixture.NanMixture(nan_prob: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], distribution: gluonts.mx.distribution.distribution.Distribution, F=None)[source] A mixture distribution of a NaN-valued Deterministic distribution and a Distribution. Parameters • nan_prob – A tensor of the probabilities of missing values. The entries should all be positive and smaller than 1. All axes should either coincide with the ones from the component distributions, or be 1 (in which case, the NaN probability is shared across the axis). • distribution – A Distribution object representing the Distribution of non-NaN values. Distributions can be of different types. Each component’s support should be made of tensors of shape (…, d). • F – A module that can either refer to the Symbol API or the NDArray API in MXNet. arg_names = None property distribution is_reparameterizable = False log_prob(x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol][source] Compute the log-density of the distribution at x. Parameters x – Tensor of shape (*batch_shape, *event_shape). Returns Tensor of shape batch_shape containing the log-density of the distribution for each event in x. Return type Tensor property nan_prob class gluonts.mx.distribution.nan_mixture.NanMixtureArgs(distr_output: gluonts.mx.distribution.distribution_output.DistributionOutput, prefix: Optional[str] = None)[source] Bases: mxnet.gluon.block.HybridBlock hybrid_forward(F, x: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol]) → Tuple[Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol], ...][source] Overrides to construct symbolic graph for this Block. Parameters • x (Symbol or NDArray) – The first input tensor. • *args (list of Symbol or list of NDArray) – Additional input tensors.
class gluonts.mx.distribution.nan_mixture.NanMixtureOutput(distr_output: gluonts.mx.distribution.distribution_output.DistributionOutput)[source] distr_cls alias of NanMixture distribution(distr_args, loc: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None, scale: Union[mxnet.ndarray.ndarray.NDArray, mxnet.symbol.symbol.Symbol, None] = None, **kwargs) → gluonts.mx.distribution.mixture.MixtureDistribution[source] Construct the associated distribution, given the collection of constructor arguments and, optionally, a scale tensor. Parameters • distr_args – Constructor arguments for the underlying Distribution type. • loc – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution. • scale – Optional tensor, of the same shape as the batch_shape+event_shape of the resulting distribution. property event_shape Shape of each individual event contemplated by the distributions that this object constructs. get_args_proj(prefix: Optional[str] = None) → gluonts.mx.distribution.nan_mixture.NanMixtureArgs[source]
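To make the log-density concrete, here is a minimal NumPy sketch (not gluonts code) of what `NanMixture.log_prob` computes, under the assumption that the non-NaN component is a univariate Gaussian: NaN entries contribute `log(nan_prob)`, finite entries contribute `log(1 - nan_prob)` plus the component log-pdf.

```python
import numpy as np

def nan_mixture_log_prob(x, nan_prob, mu=0.0, sigma=1.0):
    """Log-density of a NaN mixture: each entry is NaN with
    probability nan_prob, and otherwise Gaussian(mu, sigma)."""
    x = np.asarray(x, dtype=float)
    gauss_logpdf = (-0.5 * ((x - mu) / sigma) ** 2
                    - np.log(sigma) - 0.5 * np.log(2.0 * np.pi))
    # np.where keeps the NaN branch and the Gaussian branch separate,
    # so NaN inputs never leak into the returned log-density.
    return np.where(np.isnan(x),
                    np.log(nan_prob),
                    np.log1p(-nan_prob) + gauss_logpdf)
```

For example, `nan_mixture_log_prob([np.nan, 0.0], nan_prob=0.2)` returns `log 0.2` for the missing entry and `log 0.8` plus the standard-normal log-pdf at 0 for the observed one.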
http://www.physicsforums.com/showthread.php?p=3896030
# Computing curvatures by rsq_a Tags: computing, curvatures I'm having a really hard time working with non-standard expressions for the curvature. This deals with two expressions for the curvature, one for a 2D plane curve, and the other for an axi-symmetric surface in 3D. The plane curve Suppose we have a plane curve given by $x = h(y)$ for both x and y positive. In a paper I'm reading, the author writes: The surface curvature can be expressed in terms of the slope angle, $\beta$ of the surface as per $$\kappa = -\frac{d( \cos\beta)}{dy}.$$ The geometrical relation, $$\frac{dh}{dy} = \frac{1}{\tan \beta}$$ expresses the dependence of h on $\beta$ The second expression is more or less clear for me (modulo whether it should be negated or not). The first expression is not. How do you go from the standard definition: $$\kappa = \frac{h''}{(1+(h')^2)^{3/2}},$$ to this result? Surface curvature This one is from another paper. The author assumes that there is an axi-symmetric surface $S(z,r) = 0$, which is only a function of $z$ and $r$ in spherical coordinates. He states that the mean curvature is $$\kappa = (S_z^2 +S_r^2)^{-3/2} \left[ S_z^2 S_{rr} - 2S_z S_r S_{rz} + S_r^2 S_{zz} + r^{-1} S_r(S_r^2 + S_z^2)\right].$$ If we denote the downward angle of the slope at an arbitrary position on the drop surface by $$\delta$$, so that $$\cos\delta = \frac{S_z}{\sqrt{S_z^2+S_r^2}} \qquad \sin\delta = \frac{S_r}{\sqrt{S_z^2+S_r^2}}$$ then the curvature can be written as $$\kappa = \frac{1}{r} \frac{d}{dr} \left(r \sin\delta\right)$$ along $S = 0$ Again, I could really use some help in seeing how these expressions were derived. Or if it's not trivial, perhaps a source where the work is shown.
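For the plane-curve identity, note that $\tan\beta = 1/h'$ gives $\cos\beta = h'/\sqrt{1+(h')^2}$, and differentiating in $y$ yields exactly $h''/(1+(h')^2)^{3/2}$, i.e. the standard curvature up to the sign fixed by the orientation of $\beta$. A numerical spot check, using the sample curve $h(y)=y^2$ (my own choice, not from the papers):

```python
import math

# Sample curve h(y) = y^2, so h'(y) = 2y and h''(y) = 2.
def h_prime(y):
    return 2.0 * y

def kappa_standard(y):
    # kappa = h'' / (1 + h'^2)^(3/2), the textbook definition
    return 2.0 / (1.0 + h_prime(y) ** 2) ** 1.5

def cos_beta(y):
    # tan(beta) = 1 / h'(y)  =>  cos(beta) = h' / sqrt(1 + h'^2)
    hp = h_prime(y)
    return hp / math.sqrt(1.0 + hp * hp)

# Central finite difference of cos(beta) with respect to y
y, eps = 0.7, 1e-6
d_cosbeta_dy = (cos_beta(y + eps) - cos_beta(y - eps)) / (2.0 * eps)
print(d_cosbeta_dy, kappa_standard(y))  # equal up to the sign convention
```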
http://lt-jds.jinr.ru/record/72303?ln=en
/ hep-ph arXiv:1711.04303 Spin-flavor oscillations of ultrahigh-energy cosmic neutrinos in interstellar space: The role of neutrino magnetic moments Kurashvili, Podist (NCBJ, Swierk) ; Kouzakov, Konstantin A. (Moscow State U.) ; Chotorlishvili, Levan (Martin Luther U., Halle-Wittenberg) ; Studenikin, Alexander I. (Dubna, JINR) Published in: Phys.Rev. Year: 2017 Vol.: D96    Num./Issue: 10 Page No: 103017 Pages: 8 Year: 2017-11-21 published Abstract: A theoretical analysis of possible influence of neutrino magnetic moments on the propagation of ultrahigh-energy cosmic neutrinos in the interstellar space is carried out under the assumption of two-neutrino mixing. The exact solution of the effective equation for neutrino evolution in the presence of a magnetic field and matter is obtained, which accounts for four neutrino species corresponding to two different flavor states with positive and negative helicities. Using most stringent astrophysical bounds on the putative neutrino magnetic moment, probabilities of neutrino flavor and spin oscillations are calculated on the basis of the obtained exact solution. Specific patterns of spin-flavor oscillations are determined for neutrino-energy values characteristic of, respectively, the cosmogenic neutrinos, the Greisen-Zatsepin-Kuz'min (GZK) cutoff, and well above the cutoff. Note: 18 pages, 4 figures; fixed misprints in Eq. (7) DOI: 10.1103/PhysRevD.96.103017
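For context, the standard two-flavor vacuum oscillation probability, a much simpler special case than the paper's spin-flavor evolution (which also involves magnetic-moment and matter terms), can be sketched as follows; the helper name and units convention are mine:

```python
import math

def p_osc(theta, dm2_ev2, L_km, E_GeV):
    """Standard two-flavor vacuum oscillation (appearance) probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])."""
    phase = 1.27 * dm2_ev2 * L_km / E_GeV
    return math.sin(2.0 * theta) ** 2 * math.sin(phase) ** 2

# Maximal mixing (theta = pi/4) and a baseline tuned so the phase is
# pi/2 give full flavor conversion.
print(p_osc(math.pi / 4, 1.0, math.pi / (2 * 1.27), 1.0))
```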
https://brilliant.org/practice/sequences-warmup/
Calculus

# Sequences Warmup

If $$\{a_n\}$$ is a sequence defined by $$a_n = n^2 + n + 1$$ for each natural number $$n$$, what is $$a_5?$$

Suppose $$\{a_n\}$$ is a sequence defined by $a_1 = 1, a_2 = 1,$ and $a_n = a_{n-1} + a_{n -2}$ for each natural number $$n > 2.$$ What is $$a_6$$?

A geometric progression is a sequence in which $$a_n = r \cdot a_{n-1}$$ for each natural number $$n > 1$$, where $$r$$ is a real number called the common ratio. If $$a_n$$ is a geometric progression with $$a_1 = 5$$ and $$a_6 = 160$$, what is $$a_3$$?

On Day 1, Isabel has $200 in the bank, and she adds $5 at the start of each subsequent day. On what day will her account’s value reach $300?

Each day this week, Morgan had twice as much money as the day before. On Day 6, Morgan had $40. How many dollars did Morgan have on Day 1?
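The arithmetic behind the warmup problems can be checked mechanically; this sketch computes each answer directly from the problem statements (day indexing follows the wording "On Day 1, Isabel has $200"):

```python
# a_n = n^2 + n + 1  ->  a_5
a5 = 5**2 + 5 + 1

# Fibonacci-type recurrence a_n = a_{n-1} + a_{n-2}, a_1 = a_2 = 1
fib = [1, 1]
while len(fib) < 6:
    fib.append(fib[-1] + fib[-2])

# Geometric progression: a_6 = a_1 * r^5  =>  r = (160/5)^(1/5) = 2
r = (160 / 5) ** (1 / 5)
a3 = 5 * r**2

# Isabel: balance on day d is 200 + 5*(d - 1); first day it reaches 300
day = next(d for d in range(1, 100) if 200 + 5 * (d - 1) >= 300)

# Morgan doubles daily, so Day 1 holds Day 6's amount divided by 2^5
morgan_day1 = 40 / 2**5

print(a5, fib[5], a3, day, morgan_day1)  # 31 8 20.0 21 1.25
```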
https://studysoup.com/tsg/16705/discrete-mathematics-and-its-applications-7-edition-chapter-2-3-problem-78e
# a) Show that a partial function from A to B can be viewed

Solution for problem 78E, Chapter 2.3, Discrete Mathematics and Its Applications | 7th Edition (ISBN: 9780073383095)

Problem 78E

a) Show that a partial function from $$A$$ to $$B$$ can be viewed as a function $$f^{*}$$ from $$A$$ to $$B \cup\{u\}$$, where $$u$$ is not an element of $$B$$ and

$$f^{*}(a)= \begin{cases}f(a) & \text { if } a \text { belongs to the domain } \\ & \text { of definition of } f \\ u & \text { if } f \text { is undefined at } a .\end{cases}$$

b) Using the construction in (a), find the function $$f^{*}$$ corresponding to each partial function in Exercise 77.

Step-by-Step Solution:

Step 1: We must show that $$f^{*}$$ is well defined. For each $$a \in A$$, either $$a$$ belongs to the domain of definition of $$f$$ or it does not. If it does, then $$f^{*}(a) = f(a)$$ is a well-defined element of $$B$$; if it does not, then $$f^{*}(a) = u$$. In either case $$f^{*}(a)$$ is a well-defined element of $$B \cup\{u\}$$, so $$f^{*}$$ is a function from $$A$$ to $$B \cup\{u\}$$.
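Part (a)'s construction, extending a partial function to a total one by adjoining a sentinel value $u$ outside $B$, has a direct programming analogue. In this sketch (the names `U` and `totalize` are mine) the partial function is given as a dict, and the sentinel object plays the role of $u$:

```python
# The extra element u: a fresh object guaranteed not to be in B.
U = object()

def totalize(partial_map):
    """Turn a dict representing a partial function A -> B into a
    total function A -> B ∪ {U}, as in part (a)."""
    def f_star(a):
        return partial_map.get(a, U)   # U wherever f is undefined
    return f_star

f = totalize({1: "one", 2: "two"})     # domain of definition: {1, 2}
print(f(1), f(3) is U)                 # one True
```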
https://www.lessonplanet.com/teachers/popcorn-neutrino-lab
# Popcorn Neutrino Lab

Students participate in a modeling activity that simulates the cyclical role of experimental and theoretical science. Initially, students measure the mass of popcorn. They also record predictions of the mass of the kernels after they are popped.
http://math.stackexchange.com/questions/2796/help-in-getting-the-quadratic-equation
# Help in getting the Quadratic Equation I'm starting a chapter on Functions and they had the steps shown to reach the p-q equation. $$x_{1,2} = -\frac{p}{2} \pm\sqrt{\left(\frac{p}{2}\right)^2 - q}$$ So I wanted to do the same with the Quadratic Equation. I'm using the base quadratic equation $$ax^2+bx+c = 0.$$ The solution I have so far is as follows: $$x^2 + \frac{b}{a}x + \frac{c}{a}= 0$$ $$x^2 + \frac{b}{a}x = -\frac{c}{a}$$ $$x^2 + \frac{b}{a}x + \left(\frac{b}{2a}\right)^2 = -\frac{c}{a} + \left(\frac{b}{2a}\right)^2$$ $$\left(x + \frac{b}{2a}\right)^2 = \left(\frac{b}{2a}\right)^2 - \frac{c}{a}$$ $$\left(x + \frac{b}{2a}\right) = \pm\sqrt{\left(\frac{b}{2a}\right)^2 - \frac{c}{a}}$$ $$x = -\frac{b}{2a} \pm\sqrt{\left(\frac{b}{2a}\right)^2 - \frac{c}{a}}$$ My problem comes from trying to solve the insides of the square root: $$\sqrt{\left(\frac{b}{2a}\right)^2 - \frac{c}{a}} = \sqrt{\frac{b^2}{4a^2} - \frac{c}{a}}$$ $$= \sqrt{\frac{b^2}{4a^2} - \frac{c}{a} \left(\frac{4a}{4a}\right)} = \sqrt{\frac{b^2 - 4ac}{4a^2}}$$ $$= \sqrt{\frac{b^2 - 4ac}{\left(2a\right)^2}}$$ Then: $$x_{1,2} = \frac{-\left(\frac{b}{2a}\right) \pm\sqrt{b^2 -4ac}}{2a}$$ but there is still the problem of the -(b/2a) outside of the sqrt. What am I doing wrong? Also, TeX is awesome; is there a better way to do the 1,2 subscripts than _1,_2? $$x_{1,2} = \frac{-\left(\frac{b}{2a}\right) \pm\sqrt{b^2 -4ac}}{2a}.$$ The solution goes $$\frac{-b}{2a}\pm \sqrt{\frac{b^{2}-4ac}{4a^{2}}}=\frac{-b}{2a}\pm \frac{\sqrt{b^{2}-4ac}}{2a}$$ $$= \frac{-b \pm\sqrt{b^2 - 4ac}}{2a}$$ - A note on the TeX subscripts: try "x_{1,2}". As for the derivation, are you already familiar with "completing the square"? Otherwise, one thing you can try is to make the substitution $x=u-\frac{b}{2a}$, solve for u, and then reexpress the whole mess in terms of x. Good luck! –  Guess who it is. Aug 19 '10 at 13:20 The typical TeX way to do the 1,2 subscripts would be x_{1,2}, which yields $x_{1,2}$ as desired.
In other words, you use curly braces, like so: "_{subscripts go here}". –  Alex Basson Aug 19 '10 at 13:57 @Mangaldan: Thanks for the TeX tip! Yes, "completing the square" is what I had to use to get the quadratic equation, but I'm not sure what you mean. After all, I've already applied it earlier in the solution. –  IAE Aug 19 '10 at 14:23 SB, your dividing of the depression term $-\frac{b}{2a}$ again with $2a$ was what threw me off. :) At least WWright has already pointed you in the proper direction. –  Guess who it is. Aug 19 '10 at 14:39 Sorry I don't know how to do tex on websites, but I'm trying to learn. You just made a small mistake on the final step. In the second to last step, we actually have our full equation as: $\frac{-b}{2a}\pm \sqrt{\frac{b^{2}-4ac}{4a^{2}}}=\frac{-b}{2a}\pm \frac{\sqrt{b^{2}-4ac}}{2a}$ Now we can collect the common factor of 1/2a and get: $\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}$ Does that make sense? - I learned TeX more or less by looking at the Tex that other users wrote. View the source code for this page and then you can see how I wrote the math in TeX. It's quite easy actually, and you can display it by starting and ending with . –  IAE Aug 19 '10 at 13:52 WWright, this may be a useful crutch: codecogs.com/latex/eqneditor.php –  Guess who it is. Aug 19 '10 at 13:53 thanks for the help, give me a few minutes and it'll look right, hopefully :) –  WWright Aug 19 '10 at 14:08 Looks good now, you've done it right! :) –  Guess who it is. Aug 19 '10 at 14:21 You factored 2a out of the square root and put it in the denominator without factoring it out of -(b/2a). -
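The corrected formula from the answers can be sanity-checked numerically. This is a sketch of my own (not from the thread): it computes both roots with the single 2a denominator and substitutes them back into ax^2 + bx + c:

```python
import math

def quadratic_roots(a, b, c):
    # x = (-b +/- sqrt(b^2 - 4ac)) / (2a) -- note the single 2a denominator,
    # which is exactly the point the answers above are making.
    disc = b * b - 4 * a * c
    root = math.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

a, b, c = 2.0, -3.0, -5.0          # arbitrary example with real roots
for x in quadratic_roots(a, b, c):
    # each root should satisfy the original equation (up to rounding)
    assert abs(a * x * x + b * x + c) < 1e-9
```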
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.981856644153595, "perplexity": 988.005144672203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064951.43/warc/CC-MAIN-20150827025424-00176-ip-10-171-96-226.ec2.internal.warc.gz"}
http://mathhelpforum.com/trigonometry/217233-need-help-solving-trig-identity-print.html
Need help solving trig Identity

• April 11th 2013, 12:43 AM Gurp925

Hi, I am having trouble with these trig questions:

1) (tan x)(csc^2 x)/(sec^2 x)
2) (cot x)(sec^2 x)/(csc^2 x)

If anyone could help that would be great!

• April 11th 2013, 12:58 AM Gusbob

By identity I presume you mean simplify? The easiest way to do this is to write everything in terms of sines and cosines. For example $\tan(x)\cdot \csc^2(x) \cdot \frac{1}{\sec^2(x)}=\frac{\sin(x)}{\cos(x)}\cdot \frac{1}{\sin^2(x)}\cdot \frac{\cos^2(x)}{1}$

• April 11th 2013, 01:09 AM Prove It

Neither of these is an identity, or even an equality...

• April 11th 2013, 01:42 AM Gurp925

My mistake everyone with my wording; the unit is trig identities and equations, and the question is asking us to simplify by writing as a single trig ratio.

• April 11th 2013, 02:32 AM Gusbob

In which case refer to my previous post.
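Carrying Gusbob's rewrite one step further, the first expression collapses to cot(x) and, by the same sines-and-cosines method, the second collapses to tan(x). A quick numerical check of both simplifications (my own sketch, not part of the thread):

```python
import math

def expr1(x):
    # tan(x) * csc^2(x) / sec^2(x)
    return math.tan(x) * (1 / math.sin(x) ** 2) / (1 / math.cos(x) ** 2)

def expr2(x):
    # cot(x) * sec^2(x) / csc^2(x)
    return (1 / math.tan(x)) * (1 / math.cos(x) ** 2) / (1 / math.sin(x) ** 2)

for x in (0.3, 1.0, 2.5):
    assert math.isclose(expr1(x), 1 / math.tan(x), rel_tol=1e-9)  # cot(x)
    assert math.isclose(expr2(x), math.tan(x), rel_tol=1e-9)      # tan(x)
```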
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9652387499809265, "perplexity": 1412.2305013928199}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462426.4/warc/CC-MAIN-20150226074102-00160-ip-10-28-5-156.ec2.internal.warc.gz"}
https://infoscience.epfl.ch/record/119952
Infoscience Journal article # The role of radial electric fields in linear and nonlinear gyrokinetic full radius simulations The pivotal role played by radial electric fields in the development of turbulence associated with anomalous transport is examined by means of global gyrokinetic simulations. It is shown that the stabilizing effect of E x B flows on ion temperature gradient (ITG) modes is quadratic in the shearing rate amplitude. For a given shearing rate it leads to an increase in the critical gradient. The electric fields (zonal flows) self-generated by ITG modes interact in a nonlinear way and it is shown that a saturated level of both the zonal flow and ITG turbulence is reached in the absence of any collisional mechanism being included in the model. The quality of the global nonlinear simulations is verified by the energy conservation which is allowed by the inclusion of nonlinear parallel dynamics. This demonstrates the absence of spurious damping of numerical origin and thus confirms the nonlinear character of zonal flow saturation mechanism.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8562663197517395, "perplexity": 737.9628507102893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186891.75/warc/CC-MAIN-20170322212946-00512-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.zbmath.org/?q=ut%3Auniform+many-sorted+closure+operator
Found 2 Documents (Results 1–2)

A characterization of the $$n$$-ary many-sorted closure operators and a many-sorted Tarski irredundant basis theorem. (English) Zbl 07144285
MSC: 06A15 54A05

On many-sorted algebraic closure operators. (English) Zbl 1038.08001
MSC: 08A30 06A15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1771901249885559, "perplexity": 29147.137536566865}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104204514.62/warc/CC-MAIN-20220702192528-20220702222528-00758.warc.gz"}
http://mathhelpforum.com/calculus/134912-conceptual-integrations.html
# Math Help - Conceptual Integrations

1. ## Conceptual Integrations

To integrate $\int\sqrt{a^2-x^2}$ you will let x = ...
To integrate $\int\sqrt{x^2-a^2}$ you will let x = ...
To integrate $\int\sqrt{x^2+a^2}$ you will let x = ...

I'm not sure if I'm meant to derive these or just know them. Either way, I'm screwed.

2. Just know them, google 'trigonometric substitution for integration'

3. These all lead to trig substitutions. See: Integration by Trigonometric Substitution
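For reference, the standard substitutions the replies are pointing at (textbook convention, taking a > 0 and the usual θ-ranges) can be written out as follows; the thread itself leaves them as an exercise:

```latex
% Standard trigonometric substitutions (a > 0):
\begin{align*}
\sqrt{a^2 - x^2} &: \quad x = a\sin\theta, & \sqrt{a^2 - x^2} &= a\cos\theta \\
\sqrt{x^2 - a^2} &: \quad x = a\sec\theta, & \sqrt{x^2 - a^2} &= a\tan\theta \\
\sqrt{x^2 + a^2} &: \quad x = a\tan\theta, & \sqrt{x^2 + a^2} &= a\sec\theta
\end{align*}
```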
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996447563171387, "perplexity": 1809.3742654228058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111313.83/warc/CC-MAIN-20160428161511-00086-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/frictional-force-is-a-manifestation.7665/
# Frictional force is a manifestation

1. Oct 23, 2003

### anand

Frictional force is a manifestation of which fundamental force of nature? Is it the electromagnetic force? If so, how?

Last edited by a moderator: Feb 6, 2013
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9700435996055603, "perplexity": 6530.781324165051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865995.86/warc/CC-MAIN-20180624005242-20180624025242-00372.warc.gz"}
https://cstheory.meta.stackexchange.com/questions/837/answers-with-low-score-are-no-longer-shown-in-gray-text
# Answers with low score are no longer shown in gray text During the beta, answers with low score (score −3 or below if I remember correctly) were shown in gray text, but they are no longer shown in gray. I guess that this change is unintentional, judging from the way the style sheet is written. Namely, the style sheet still contains the declaration .downvoted-answer {color: #888888;} but it is effectively ignored because it is overridden by other declarations. • I'm looking into this. Do you happen to have a link to an answer with a lot of downvotes? – Jin Dec 20 '10 at 5:45 • @Jin: Thanks! Here is an example: cstheory.stackexchange.com/questions/3836/… Dec 20 '10 at 10:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.575863242149353, "perplexity": 1181.3630258461465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363791.16/warc/CC-MAIN-20211209091917-20211209121917-00345.warc.gz"}
http://www.zazzle.com/rage+face+gifts
Showing All Results 1,167 results Page 1 of 20 Related Searches: female me gusta comic, meme, comic meme Got it! We won't show you this product again! Undo
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8085224628448486, "perplexity": 4640.1218800142515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678700883/warc/CC-MAIN-20140313024500-00023-ip-10-183-142-35.ec2.internal.warc.gz"}
https://nanopartikel.info/en/glossar/reach/
# REACH > Glossar > REACH Short for Registration, Evaluation, Authorization of Chemicals. REACH is the novel EC regulation no. 1907/2006 that has been in force since June 1, 2007. For further information see https://echa.europa.eu/regulations/reach/understanding-reach
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8995454907417297, "perplexity": 17425.120740244995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057366.40/warc/CC-MAIN-20210922132653-20210922162653-00435.warc.gz"}
https://gateoverflow.in/312009/gate2010-mn-ga-1
+1 vote 88 views

Which of the following options is the closest in meaning to the word below$:$

Exhort

1. urge
2. condemn
3. restrain
4. scold

## 1 Answer

+1 vote Best answer

Exhort: to strongly encourage or try to persuade someone to do something. Similar words are urging or persuading.

Option A should be the answer.

by Boss (42.7k points) selected by
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31855496764183044, "perplexity": 21512.91278913924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145981.35/warc/CC-MAIN-20200224193815-20200224223815-00486.warc.gz"}
http://math.stackexchange.com/users/16205/vitalij-zadneprovskij?tab=activity
less info
reputation 213
bio website linkedin.com/pub/…
location Rome, Italy
age 28
member for 2 years, 6 months
seen 8 hours ago
profile views 44

Java programmer, theoretical computer science enthusiast. My blog

# 58 Actions

Feb28 comment MBA Business Statistics: What have you tried? What difficulties did you find?
Feb27 comment Consequences of difference between “strong” and weak Church-Rosser property: @RuiBaptista why is it locally Church-Rosser?
Feb27 revised Consequences of difference between “strong” and weak Church-Rosser property: Added definition of normal form
Feb27 revised Consequences of difference between “strong” and weak Church-Rosser property: Added some definitions
Feb27 comment Consequences of difference between “strong” and weak Church-Rosser property: @frabala added, also note that diamond property and Church-Rosser property is the same thing according to Barendregt
Feb27 revised Consequences of difference between “strong” and weak Church-Rosser property: added 1352 characters in body; edited title
Feb27 asked Consequences of difference between “strong” and weak Church-Rosser property
Feb25 comment Is Lambda calculus a purely equational theory? thank you very much!
Feb25 comment Is Lambda calculus a purely equational theory? If I understand correctly in another example $(\lambda x.xx)M$ rewrites to $xx[x \rightarrow M]$ and then to $M$. Am I right?
Feb25 comment Is Lambda calculus a purely equational theory? I have changed that to remove the wrong equation from the question. Seeing the revisions of the question it is possible to see the original form.
Feb25 accepted Is Lambda calculus a purely equational theory?
Feb25 revised Is Lambda calculus a purely equational theory? added 13 characters in body
Feb25 comment Is Lambda calculus a purely equational theory? @dtldarek thank you. If you post it as an answer, I will accept it
Feb25 asked Is Lambda calculus a purely equational theory?
Feb21 revised I can't do math? added 4 characters in body
Feb21 awarded Scholar
Feb21 accepted Meaning of variables and applications in lambda calculus
Feb21 awarded Commentator
Feb21 comment Meaning of variables and applications in lambda calculus: @ZhenLin Church encoding is what I was looking for. If you post it as an answer, I will accept it.
Feb21 awarded Custodian
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6117958426475525, "perplexity": 3105.597150088779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678683400/warc/CC-MAIN-20140313024443-00094-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/180011-limit-calculation.html
1. ## Limit calculation

I can't seem to get the hang of calculating these indeterminate forms... how do I figure out the limit of ln(cos x) / ln(cos 5x) as x goes to pi/2 from the left?

The left limits of the numerator and denominator at pi/2 are negative infinity. So I tried looking at the limit of their derivatives: tan x / 5 tan(5x). Taking out the 1/5, I'm left to calculate the limit of tan x / tan 5x. Once again the left limit at pi/2 is infinity. So looking again at the limit of the derivatives it comes out (sec x)^2 / 5(sec 5x)^2, which once again I have no idea how to calculate. I'm very confused.

2. Hint: $\frac{d}{dx} \ln(\cos(x))=\frac{-\sin(x)}{\cos(x)}=-\tan(x)$

3. Okay, so the derivatives of ln(cos(x)) and ln(cos(5x)) are -tan(x) and -5tan(5x). But how do I calculate the limit -tan(x) / -5tan(5x) = (1/5) * (tan(x) / tan(5x))?

4. $\lim_{x \to 0}\frac{\tan(x)}{5\tan(5x)}=\lim_{x \to 0}\frac{1}{5}\frac{\sin(x)}{\cos(x)}\cdot \frac{\cos(5x)}{\sin(5x)}=\lim_{x \to 0}\frac{1}{5}\frac{\sin(x)}{\sin(5x)}\cdot \frac{\cos(5x)}{\cos(x)}$ $=\lim_{x \to 0}\frac{1}{25}\frac{\sin(x)}{x}\cdot \frac{5x}{\sin(5x)}\cdot \frac{\cos(5x)}{\cos(x)}$ Can you finish from here?

5. Yes, I get it now. I wouldn't have thought of splitting up the sin x / 5 sin 5x like that...
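The value 1/25 for the x → 0 limit worked out in the last reply can be spot-checked numerically; this quick sketch is my own and not part of the thread:

```python
import math

def ratio(x):
    # tan(x) / (5 tan(5x)), the expression after one application of L'Hopital
    return math.tan(x) / (5 * math.tan(5 * x))

# near x = 0 the ratio should approach 1/25 = 0.04
for x in (1e-2, 1e-3, 1e-4):
    assert abs(ratio(x) - 1 / 25) < 1e-3
```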
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9946732521057129, "perplexity": 1522.7446189590135}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719542.42/warc/CC-MAIN-20161020183839-00181-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-7-section-7-3-solving-percent-problems-with-proportions-practice-page-490/10
## Prealgebra (7th Edition)

What percent of 40 is 8?

"What percent" is the percent, 40 is the base, and 8 is the amount.

$\frac{8}{40}=\frac{p}{100}$

Set the cross products equal: $40\times p = 8 \times 100$

Multiply: $40p = 800$

Divide both sides by 40: $\frac{40p}{40} = \frac{800}{40}$, so $p = 20$.

8 is 20% of 40.
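The cross-multiplication above can be mirrored in a couple of lines of Python (an illustration of my own, not part of the textbook solution):

```python
# amount / base = p / 100, so cross-multiplying gives base * p = amount * 100
amount, base = 8, 40
p = amount * 100 / base      # same as dividing both sides of 40p = 800 by 40
assert p == 20.0             # 8 is 20% of 40
```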
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8042663335800171, "perplexity": 2107.835283200548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947822.98/warc/CC-MAIN-20180425135246-20180425155246-00487.warc.gz"}
https://www.nature.com/articles/s41565-018-0101-7?error=cookies_not_supported&code=258ffc03-7fb3-4f91-bc9e-f58f53e0c77c
CO2 REDUCTION

# A reversible morphology

### Subjects

The electrocatalytic reduction of carbon dioxide to methane, one of the possible paths towards reducing greenhouse gas concentration and generating fuels sustainably, is a challenging reaction to catalyse as it can yield several other partially reduced compounds. Therefore, designing effective catalysts for the CO2-to-CH4 transformation requires an in-depth mechanistic understanding of the reaction conditions. Recently, a Cu(ii) phthalocyanine complex showed attractive activity and selectivity properties. Weng et al. now report an in situ study and show that this complex undergoes a morphological transformation that is responsible for the high activity recorded.

Credit: Macmillan Publishers Ltd

The researchers carry out X-ray absorption spectroscopy, cycling the electrochemical potential between the open circuit voltage (~0.80 V) and the voltage where the maximum catalytic activity occurs (–1.06 V). They observe the appearance of Cu(i) and then Cu(0) peaks. The peaks disappear as the potential is cycled back to less reducing conditions. Morphological analysis and theoretical calculations show the presence of Cu–Cu metallic bonds and the formation of Cu clusters of ~2 nm at –1.06 V. It is likely that these clusters are stabilized by the phthalocyanine ligands. Weng et al. therefore conclude that the superior catalytic performance of the copper complex is due to the reversible formation of the Cu(0) cluster.

## Author information

Authors

### Corresponding author

Correspondence to Alberto Moscatelli.

## Rights and permissions

Reprints and Permissions

Moscatelli, A. A reversible morphology. Nature Nanotech 13, 178 (2018). https://doi.org/10.1038/s41565-018-0101-7
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8018389344215393, "perplexity": 4871.799645208388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141733120.84/warc/CC-MAIN-20201204010410-20201204040410-00467.warc.gz"}
http://mathhelpforum.com/trigonometry/136764-exact-value.html
# Math Help - Exact value 1. ## Exact value I'm just wondering, how would I find the exact value of the following expression of sec(-25pie/6)? It's in my textbook but someone scratched the solution out... Thanks! 2. Originally Posted by kelvinly I'm just wondering, how would I find the exact value of the following expression of sec(-25pie/6)? It's in my textbook but someone scratched the solution out... Thanks! $\sec\left(-\frac{25\pi}{6}\right) = $ $\sec\left(\frac{25\pi}{6}\right) =$ $\frac{1}{\cos\left(\frac{25\pi}{6}\right)} =$ $\frac{1}{\cos\left(\frac{\pi}{6}\right)} = \frac{2}{\sqrt{3}}$ btw ... this is pie this is pi ... 3. Originally Posted by skeeter $\sec\left(-\frac{25\pi}{6}\right) = $ $\sec\left(\frac{25\pi}{6}\right) =$ $\frac{1}{\cos\left(\frac{25\pi}{6}\right)} =$ $\frac{1}{\cos\left(\frac{\pi}{6}\right)} = \frac{2}{\sqrt{3}}$ btw ... this is pie this is pi ... lol yeah typo! and thanks for the step by step solutions!
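Skeeter's reduction can be sanity-checked numerically; here's a quick sketch (my own, not from the thread) using Python's standard math module:

```python
import math

# sec(x) = 1/cos(x); cos is even and 2*pi-periodic,
# so cos(-25*pi/6) = cos(25*pi/6 - 4*pi) = cos(pi/6) = sqrt(3)/2
x = -25 * math.pi / 6
sec_x = 1 / math.cos(x)

print(sec_x)             # ≈ 1.1547
print(2 / math.sqrt(3))  # ≈ 1.1547, matching the exact value 2/sqrt(3)
```

(Rationalized, the exact value 2/sqrt(3) is the same as 2*sqrt(3)/3.)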
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9669368863105774, "perplexity": 1919.1071702805423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115856041.43/warc/CC-MAIN-20150124161056-00180-ip-10-180-212-252.ec2.internal.warc.gz"}
http://www.physics.utoronto.ca/people/homepages/savard/
Pierre Savard

Associate Professor and TRIUMF Scientist
Experimental Particle Physics
Office: MP-803
tel: (416) 978-0764
fax: (416) 978-8221
email: savard at physics.utoronto.ca

Brief CV

Associate Professor, University of Toronto, TRIUMF Scientist (2007-)
Assistant Professor, University of Toronto, TRIUMF Scientist (2002-2007)
Research Associate, University of Toronto (2000-2002)
FCAR Fellow, University of Toronto (1998-2000)
Ph.D., Université de Montréal (1998)
M.Sc., Université de Montréal (1993)
B.Sc., Université de Sherbrooke (1991)

Research Interests:

I was involved in the discovery of the Higgs boson, where I contributed to the WW analysis and was an editor of the ATLAS discovery paper. I'm currently convener of the Higgs subgroup responsible for Higgs decays to pairs of W bosons, with Biagio Di Micco. I'm also involved in the search for Dark Matter with the ATLAS experiment in the monojet final state, which is performed within the Exotics group of the ATLAS collaboration. I was convener of the Exotics physics group between 2008 and 2010, when the LHC produced its first collisions. The Exotics group is focused on looking for new physics (e.g. new particles, new forces, new dimensions) beyond the Standard Model, our current theoretical framework. Before that, I spent 10 years on the CDF experiment, where I published papers on exotic physics (W' search, leptoquark and SUSY searches, Large Extra Dimensions), top quark physics (top mass, top production cross section, single top production), and electroweak physics (WW production). On CDF, I was convener of the top and electroweak physics group at the beginning of Run II with Willis Sakumoto, I was Offline Analysis Coordinator with Avi Yagil, and I was in charge of calorimetry reconstruction and simulation. Before CDF, I worked on the design and testing of the hadronic endcap calorimeter of ATLAS.
One of the main design considerations was the search for Higgs bosons in the vector boson fusion channel. I've also worked on the SDC experiment (M.Sc.) and the OPAL experiment as an undergraduate student.

Current Students:

Joe Taenzer (Ph.D. on Associated Higgs production)
Steven Schramm (Search for Dark Matter in monojet final states)

Previous Students and Research Associates:

Pierre-Hughes Beauchemin (Research Associate, now professor at Tufts)
Teresa Spreitzer (Ph.D. Top Quark Production Cross Section, now postdoc at UofT)
Kostas Kordas (Research Associate, now professor in Greece)
Reda Tafirout (Research Associate, now Staff Scientist at TRIUMF)
Simon Sabik (Ph.D. Top Quark Mass, professor at Marianopolis College)
Sing Leung Cheung (Ph.D. Dijet resonance search)
Pier-Olivier Deviveiros (Ph.D. Dijet Angular Distributions, now postdoc at NIKHEF)
Cristen Adams (M.Sc. Large Extra Dimensions, now PhD student in atmospheric physics)
Bernd Stelzer (Ph.D. Single Top Production, now professor at Simon Fraser University)

Selected Publications:

Search for the Standard Model Higgs boson in the H -> WW(*) -> lnu lnu decay mode with 4.7 fb^-1 of ATLAS data at s**(1/2) = 7 TeV. Phys.Lett. B716 (2012) 62-81

Search for Large Extra Dimensions in the Production of Jets and Missing Transverse Energy in ppbar Collisions at s**(1/2) = 1.96 TeV. e-Print Archive: hep-ex/0605101

Measurement of the top quark mass using template methods on dilepton events in p anti-p collisions at s**(1/2) = 1.96-TeV. Published in Phys.Rev.D73:112006,2006. e-Print Archive: hep-ex/0602008

Top quark mass measurement from dilepton events at CDF II. Published in Phys.Rev.Lett.96:152002,2006. e-Print Archive: hep-ex/0512070

Top quark mass measurement using the template method in the lepton + jets channel at CDF II. Published in Phys.Rev.D73:032003,2006. e-Print Archive: hep-ex/0510048

Determination of the jet energy scale at the Collider Detector at Fermilab.
e-Print Archive: hep-ex/0510047

Measurement of the W+ W- production cross section in p anti-p collisions at s**(1/2) = 1.96-TeV using dilepton events. Published in Phys.Rev.Lett.94:211801,2005. e-Print Archive: hep-ex/0501050

Search for electroweak single top quark production in p anti-p collisions at s**(1/2) = 1.96-TeV. Published in Phys.Rev.D71:012005,2005. e-Print Archive: hep-ex/0410058

Measurement of the t anti-t production cross section in p anti-p collisions at s**(1/2) = 1.96-TeV using dilepton events. Published in Phys.Rev.Lett.93:142001,2004. e-Print Archive: hep-ex/0404036

Search for a W-prime boson decaying to a top and bottom quark pair in 1.8-TeV p anti-p collisions. Published in Phys.Rev.Lett.90:081802,2003. e-Print Archive: hep-ex/0209030

Search for single top quark production in p anti-p collisions at s**(1/2) = 1.8-TeV. Published in Phys.Rev.D65:091102,2002. e-Print Archive: hep-ex/0110067

Complete publication list from SPIRES database
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9580798149108887, "perplexity": 19526.698233091873}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698411148/warc/CC-MAIN-20130516100011-00016-ip-10-60-113-184.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/1540/relation-of-angular-speed-of-a-rigid-body-to-eulers-angles/1547
# Relation of angular speed of a rigid body to Euler's angles

My question was like this, and I have realised a few things but still have some doubts. I have a book in which a paragraph goes like this:

Now, $\dot\phi$, $\dot\theta$, $\dot\psi$ are respectively the angular speeds about the space z-axis, the line of nodes, and the body z-axis.

I don't know how Euler's angles of rotation are even connected with the angular velocity. My assumption is that the space set of axes remains unchanged, and the body set of axes rotates with the axis of rotation. According to my assumption, the angular velocity vector components should remain constant w.r.t. the space set of axes (not equal but unchanged), and the angular velocity vector is zero w.r.t. the body set of axes, because the body set of axes is rotating with the axis. Even if the body set of axes were stationary and the rigid body were rotating, would that mean the components would be connected to the Euler angles anyway? I think that Euler's angles are just angles of rotation that transform the space set of axes into the body set of axes. And I also don't understand what this 'line of nodes' means.

I have come to realize that in Euler's rotation, the space axes are rotated about the space Z-axis, the new space X-axis, and the body Z-axis (which is aligned by the new space X-axis rotation). Since there is rotation, there is angular speed, and the rotations are $\phi$, $\theta$, and $\psi$, so obviously the angular speeds are $\dot\phi$, $\dot\theta$, $\dot\psi$, and the line of nodes is the new space X-axis from the space Z-axis rotation. And there is no rigid body involved. But has this angular velocity got something to do with the rotation of a rigid body, like the stability of a spinning top? I don't know, but I hope I am right.

- en.wikipedia.org/wiki/Euler_angles#Euler_rotations <- does this help?
– Marek Dec 2 '10 at 11:21

If we express the change in angular velocity as $\Delta\vec\omega$ in local coordinates, with, for example, the angles $\phi$, $\theta$ and $\psi$ being rotations about the $Z$, $X$ and $Z$ axes respectively, then the answer is $$\Delta\vec{\omega}=\dot{\phi}\hat{k}+\mathrm{{Rot}(\hat{k},\phi})\left(\dot{\theta}\hat{i}+\mathrm{{Rot}(\hat{i},\theta})\left(\dot{\psi}\hat{k}\right)\right)$$ where Rot(axis,angle) is a 3x3 rotation matrix. We apply the angular rotation components in sequence on the local coordinate axes this way. Also $\hat{i}=(1,0,0)$, $\hat{j}=(0,1,0)$ and $\hat{k}=(0,0,1)$. The line of nodes must be the common normal between the two $z$-axes. We typically denote that as the $x$-axis (see Denavit-Hartenberg notation).
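The composition in the answer can be sketched numerically. The code below is my own illustration (the function names and the ZXZ convention are assumptions layered on the answer, not part of the post); it applies the three rate contributions in sequence exactly as in the formula:

```python
import math

def rot_z(a):
    """3x3 rotation matrix about the z-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(a):
    """3x3 rotation matrix about the x-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def matvec(M, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(M[r][c] * v[c] for c in range(3)) for r in range(3)]

def omega_zxz(phi, theta, dphi, dtheta, dpsi):
    """Space-frame angular velocity for ZXZ Euler angles, per the formula:
    omega = dphi*k + Rot(k, phi)(dtheta*i + Rot(i, theta)(dpsi*k)).
    Note the result depends on phi and theta but not on psi itself."""
    inner = matvec(rot_x(theta), [0.0, 0.0, dpsi])    # dpsi about the tilted (body) z-axis
    middle = [dtheta + inner[0], inner[1], inner[2]]  # plus dtheta about the line of nodes
    outer = matvec(rot_z(phi), middle)
    return [outer[0], outer[1], dphi + outer[2]]      # plus dphi about the space z-axis

# With theta = 0 the body z-axis coincides with the space z-axis,
# so the phi and psi rates simply add (gimbal lock):
print(omega_zxz(0.3, 0.0, 1.0, 0.0, 2.0))  # ≈ [0.0, 0.0, 3.0]
```

This also matches the book's statement in the question: $\dot\phi$ contributes about the space z-axis, $\dot\theta$ about the line of nodes (the rotated x-axis), and $\dot\psi$ about the body z-axis.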
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9039300084114075, "perplexity": 207.89419910036204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272329.26/warc/CC-MAIN-20140728011752-00215-ip-10-146-231-18.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/adsolute-and-conditional-convergence-of-alternating-series.391464/
# Absolute and conditional convergence of alternating series

• #1
I have a question regarding absolute and conditional convergence of alternating series. I know that the summation of [ tan(pi/n) ] diverges, but how do we prove that it converges conditionally (i.e., [ (-1)^n tan(pi/n) ])? Can Leibniz's theorem be used in this case? But tan(pi/2) is infinite? Any help is appreciated. =D

• #2 Dick (Homework Helper)
If you write it as sum n=3 to infinity then you can use the alternating series test. If the series includes n=2 then it would be undefined.

• #3
So the alternating series of tan(pi/n) converges conditionally for n>3 only? For n>0 it diverges?

• #4
Conditional convergence of an alternating series means that it converges but if you take the absolute value it diverges?

• #5 Dick (Homework Helper)
"So the alternating series of tan(pi/n) converges conditionally for n>3 only? For n>0 it diverges?" — Maybe. Read the fine print in the definition and consult a lawyer. I would prefer to call the case n>0 undefined rather than divergent.

• #6 Dick (Homework Helper)
"Conditional convergence of an alternating series means that it converges but if you take the absolute value it diverges?" — Well, yes.
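Dick's point can be illustrated numerically — a sketch of my own, not from the thread. Starting the sum at n = 3 (so every term is defined), the alternating partial sums settle down, while the sums of the positive terms keep growing, since tan(pi/n) behaves like pi/n for large n (compare the harmonic series):

```python
import math

def partial_sums(N):
    """Partial sums from n = 3 to N of the alternating series
    sum (-1)^n tan(pi/n) and of the absolute series sum tan(pi/n)."""
    alt = sum((-1) ** n * math.tan(math.pi / n) for n in range(3, N + 1))
    absolute = sum(math.tan(math.pi / n) for n in range(3, N + 1))
    return alt, absolute

a1, s1 = partial_sums(10_000)
a2, s2 = partial_sums(20_000)

# tan(pi/n) decreases monotonically to 0 for n >= 3, so the alternating
# series test applies: partial sums past N differ from the limit by at
# most the first omitted term, here tan(pi/10001) ~ 3e-4.
print(abs(a2 - a1))  # small: the alternating series is converging
# The absolute sums grow like pi*ln(N); doubling N adds about pi*ln(2) ~ 2.18.
print(s2 - s1)
```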
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.954799234867096, "perplexity": 4335.558527652733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413786.46/warc/CC-MAIN-20200531213917-20200601003917-00128.warc.gz"}
https://tex.stackexchange.com/questions/172279/what-conventions-or-packages-are-appropriate-for-formatting-records-algebraic-d
# What conventions or packages are appropriate for formatting records, algebraic data types in LaTeX?

LaTeX formatting practice is well-established and well-supported by existing packages for three common cases relevant in computer science:

• Source code listings
• Established, conventional mathematical notation
• Standard algorithmic pseudocode

I find I often want to describe things like records (a struct in C parlance) or algebraic data types in a semi-formal notation which is typeset much closer to math mode (proportional fonts, somewhat set off but well integrated with the body text) than source code (fixed-width fonts, jarringly distinct from the body text), but find that general math mode is a mess without imposing a bit more structure and convention.

For example, I might want to describe a concrete instance of some data as a few records:

    foo {
        field : value
        field : value
    }

    bar {
        field : value
        field : value
        field : value
    }

or I might want to sketch an algebraic data type like:

    type t = Foo(type, type)
           | Bar(type, type, type)

Are there packages, standard features of amsmath, or even simply examples people like and conventions to follow here? I'm looking for something that would be at home alongside, for example, the algorithmic environment for formatting pseudocode and simple math mode for describing pure functions. I am imagining something half-way in between a simplified ML-like notation and full-on amsmath, which is meant to be typeset in a proportional font with high readability, not to look like fixed-width code.

I find my manual attempts are hitting an awkward midpoint where I am making a lot of choices to force raw math mode (or, e.g., the align environment) to do something reasonable, and that things like the curly braces to delimit collections may be a poor choice. I'm hoping not to have to entirely derive my own notation and macros for it from scratch, but have so far failed to find anything preexisting that feels right.
• amsmath wasn't designed to cater for computer science, and isn't likely to do so in the near future. i hope you can find another compatible package. – barbara beeton Apr 18 '14 at 22:18
• Might be overkill, or might be exactly what you want: Simple algebraic data types for C has source for a program that can generate the LaTeX code for a provided ADT. No idea about the quality of the code it produces, though. – Mike Renfro Apr 18 '14 at 23:18

This task seems well suited for the tabstackengine package. While the package can emulate, in many ways, the behavior of the align style environments in math mode, it is, by default, a text stacking package. Thus, you can use an align style syntax (though in macro, not environment, form), but do so in text mode.

Note: in text mode, non-zero gaps are typically specified between columns because, unlike in math mode where relational operators generate their own spacing with respect to surrounding operands, text columns are set "as is", with leading and trailing spaces ignored.

    \documentclass{article}
    \usepackage{tabstackengine}
    \begin{document}
    \parindent 0in
    \setstacktabbedgap{1ex}% default 0pt
    \def\stackalignment{l}
    \tabbedLongstack{
    Foo &\{&\\
    & field A:& value\\
    & field AA:& value\\
    \}
    }\\
    \\
    \tabbedLongstack{
    Longbar &\{&\\
    & field B:& value\\
    & field BBB:& value\\
    & field BB:& value\\
    \}
    }\\
    \\
    \setstacktabulargap{1ex}% default \tabcolsep
    \tabularLongstack{lcll}{
    type t &=&Foo&(type, type)\\
    & $|$ & Longbar&(type, type, type)
    }
    \end{document}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6294272541999817, "perplexity": 3129.8677062431366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540511946.30/warc/CC-MAIN-20191208150734-20191208174734-00388.warc.gz"}
https://worldwidescience.org/topicpages/i/integration+program+avtip.html
#### Sample records for integration program avtip

1. Air Vehicle Technology Integration Program (AVTIP) Delivery Order 0015: Open Control Platform (OCP) Software Enabled Control (SEC) Hardware in the Loop Simulation - OCP Hardware Integration
National Research Council Canada - National Science Library
Paunicka, James L
2005-01-01
...) project sponsored by the DARPA Software Enabled Control (SEC) Program. The purpose of this project is to develop the capability to be an OCP test-bed and to evaluate the OCP controls and simulation environment for a specific test case...

2. Air Vehicle Technology Integration Program (AVTIP) Delivery Order 0008: Open Control Platform (OCP) Software Enabled Control (SEC) Hardware in the Loop Simulation Program
National Research Council Canada - National Science Library
Portilla, Eric
2004-01-01
...) contract awarded to Northrop Grumman Corporation (NGC). The OCP HITL program developed a Hardware-in-the Loop facility for demonstrating and evaluating High-Confidence Software and Systems (HCSS...

3. State Program Integrity Reviews
Data.gov (United States)
U.S. Department of Health & Human Services — State program integrity reviews play a critical role in how CMS provides effective support and assistance to states in their efforts to combat provider fraud and...

4. Integral Fast Reactor Program
International Nuclear Information System (INIS)
Chang, Y.I.; Walters, L.C.; Laidler, J.J.; Pedersen, D.R.; Wade, D.C.; Lineberry, M.J.
1993-06-01
This report summarizes highlights of the technical progress made in the Integral Fast Reactor (IFR) Program in FY 1992. Technical accomplishments are presented in the following areas of the IFR technology development activities: (1) metal fuel performance, (2) pyroprocess development, (3) safety experiments and analyses, (4) core design development, (5) fuel cycle demonstration, and (6) LMR technology R&D

5. Integrated data base program
International Nuclear Information System (INIS)
Notz, K.J.
1981-01-01
The IDB Program provides direct support to the DOE Nuclear Waste Management and Fuel Cycle Programs and their lead sites and support contractors by providing and maintaining a current, integrated data base of spent fuel and radioactive waste inventories and projections. All major waste types (HLW, TRU, and LLW) and sources (government, commercial fuel cycle, and I/I) are included. A major data compilation was issued in September, 1981: Spent Fuel and Radioactive Waste Inventories and Projections as of December 31, 1980, DOE/NE-0017. This report includes chapters on Spent Fuel, HLW, TRU Waste, LLW, Remedial Action Waste, Active Uranium Mill Tailings, and Airborne Waste, plus Appendices with more detailed data in selected areas such as isotopics, radioactivity, thermal power, projections, and land usage. The LLW sections include volumes, radioactivity, thermal power, current inventories, projected inventories and characteristics, source terms, land requirements, and a breakdown in terms of government/commercial and defense/fuel cycle/I and I

6. IMP - INTEGRATED MISSION PROGRAM
Science.gov (United States)
Dauro, V. A.
1994-01-01
IMP is a simulation language that is used to model missions around the Earth, Moon, Mars, or other planets. It has been used to model missions for the Saturn Program, Apollo Program, Space Transportation System, Space Exploration Initiative, and Space Station Freedom. IMP allows a user to control the mission being simulated through a large event/maneuver menu. Up to three spacecraft may be used: a main, a target and an observer. The simulation may begin at liftoff, suborbital, or orbital. IMP incorporates a Fehlberg seventh order, thirteen evaluation Runge-Kutta integrator with error and step-size control to numerically integrate the equations of motion. The user may choose oblate or spherical gravity for the central body (Earth, Mars, Moon or other) while a spherical model is used for the gravity of an additional perturbing body.
Sun gravity and pressure and Moon gravity effects are user-selectable. Earth/Mars atmospheric effects can be included. The optimum thrust guidance parameters are calculated automatically. Events/maneuvers may involve many velocity changes, and these velocity changes may be impulsive or of finite duration. Aerobraking to orbit is also an option. Other simulation options include line-of-sight communication guidelines, a choice of propulsion systems, a soft landing on the Earth or Mars, and rendezvous with a target vehicle. The input/output is in metric units, with the exception of thrust and weight which are in English units. Input is read from the user's input file to minimize real-time keyboard input. Output includes vehicle state, orbital and guide parameters, event and total velocity changes, and propellant usage. The main output is to the user defined print file, but during execution, part of the input/output is also displayed on the screen. An included FORTRAN program, TEKPLOT, will display plots on the VDT as well as generating a graphic file suitable for output on most laser printers. The code is double precision. IMP is written in

7. Integrated maintenance program (IMP)
International Nuclear Information System (INIS)
Zemdegs, R.T.; Chout, Q.B.
1998-01-01
Approaches to the maintenance of nuclear power plants have undergone significant change in the past several decades. The traditional breakdown approach has been displaced by preventive (calendar-based) maintenance and more recently, by condition-based maintenance (CBM). This is largely driven by the fact that traditional maintenance programs, derived primarily from equipment vendor recommendations, are generally unsuccessful in controlling maintenance costs or equipment failures. Many advances in the maintenance field have taken place since the maintenance plans for Ontario Hydro's nuclear plants were initially established.
Ontario Hydro nuclear plant operating costs can be substantially reduced and Incapability Factor improved with the application of modern maintenance processes and tools. Pickering is designated as the lead station for IMP. Of immediate concern is the fact that Pickering Nuclear Division has been experiencing a significant backlog of Operating Preventive Maintenance Callups. This backlog, over 2000, is unacceptable to both station management and the nuclear regulator, the Atomic Energy Control Board. In addition there are over 500 callups in various stages of revision (in hyperspace) without an adequate control or reporting system to manage their completion. There is also considerable confusion about the classification of 'licensing' callups, e.g. callups which are mandatory as a result of legal requirements. Furthermore the ineffectiveness of the Preventive Maintenance (PM) has been the subject of peer audits and Atomic Energy Control Board (AECB) findings over the past several years. The current preventive maintenance ratio PM/(PM+CM) at Pickering ND is less than 20%, due to the current high load of equipment breakdown. This past summer, an Independent Integrated Performance Assessment (IIPA) review at Ontario Hydro confirmed these concerns. Over the past several years, Ontario Hydro nuclear staff have evaluated several programs to improve

8. Integrated Financial Management Program
Science.gov (United States)
Pho, Susan
2004-01-01
Having worked in the Employees and Commercial Payments Branch of the Financial Management Division for the past 3 summers, I have seen the many changes that have occurred within the NASA organization. As I return each summer, I find that new programs and systems have been adapted to better serve the needs of the Center and of the Agency. The NASA Agency has transformed itself the past couple years with the implementation of the Integrated Financial Management Program (IFMP).
IFMP is designed to allow the Agency to improve its management of its Financial, Physical, and Human Resources through the use of multiple enterprise module applications. With my mentor, Joseph Kan, being the branch chief of the Employees and Commercial Payments Branch, I have been exposed to several modules, such as Travel Manager, WebTads, and Core Financial/SAP, which were implemented in the last couple of years under the IFMP. The implementation of these agency-wide systems has sometimes proven to be troublesome. Prior to IFMP, each NASA Center utilized its own systems for Payroll, Travel, Accounts Payable, etc. But with the implementation of the Integrated Financial Management Program, all the "legacy" systems had to be eliminated. As a result, a great deal of enhancement and preparation work is necessary to ease the transformation from the old systems to the new. All this work occurs simultaneously; for example, e-Payroll will "go live" in several months, but a system like Travel Manager will need to have information upgraded within the system to meet the requirements set by Headquarters. My assignments this summer have given me the opportunity to become involved with such work. So far, I have been given the opportunity to participate in projects resulting from a congressional request, several bankcard reconciliations, updating routing lists for Travel Manager, updating the majordomo list for Travel Manager approvers and point of contacts, and a NASA Headquarters project involving

9. State Program Integrity Assessment (SPIA)
Data.gov (United States)
U.S. Department of Health & Human Services — The State Program Integrity Assessment (SPIA) is the Centers for Medicare and Medicaid Services (CMS) first national data collection on state Medicaid program...

10.
SRS Tank Structural Integrity Program
International Nuclear Information System (INIS)
Maryak, Matthew
2010-01-01
The mission of the Structural Integrity Program is to ensure continued safe management and operation of the waste tanks for whatever period of time these tanks are required. Matthew Maryak provides an overview of the Structural Integrity Program to open Session 5 (Waste Storage and Tank Inspection) of the 2010 EM Waste Processing Technical Exchange.

11. Steam generator tube integrity program
International Nuclear Information System (INIS)
Dierks, D.R.; Shack, W.J.; Muscara, J.
1996-01-01
A new research program on steam generator tubing degradation is being sponsored by the U.S. Nuclear Regulatory Commission (NRC) at Argonne National Laboratory. This program is intended to support a performance-based steam generator tube integrity rule. Critical areas addressed by the program include evaluation of the processes used for the in-service inspection of steam generator tubes and recommendations for improving the reliability and accuracy of inspections; validation and improvement of correlations for evaluating integrity and leakage of degraded steam generator tubes, and validation and improvement of correlations and models for predicting degradation in steam generator tubes as aging occurs. The studies will focus on mill-annealed Alloy 600 tubing, however, tests will also be performed on replacement materials such as thermally-treated Alloy 600 or 690. An overview of the technical work planned for the program is given

12. Foreign energy conservation integrated programs
International Nuclear Information System (INIS)
Lisboa, Maria Luiza Viana; Bajay, Sergio Valdir
1999-01-01
The promotion of energy economy and efficiency is recognized as the single most cost-effective and least controversial component of any strategy of matching energy demand and supply with resource and environmental constraints.
Historically such efficiency gains are not out of reach for the industrialized market economy countries, but are unlikely to be reached under present conditions by developing countries and economies in transition. The aim of the work was to analyze the main characteristics of United Kingdom, France, Japan, Canada, Australia and Denmark energy conservation integrated programs

13. Mixed waste integrated program: Logic diagram
International Nuclear Information System (INIS)
Mayberry, J.; Stelle, S.; O'Brien, M.; Rudin, M.; Ferguson, J.; McFee, J.
1994-01-01
The Mixed Waste Integrated Program Logic Diagram was developed to provide technical alternatives for mixed wastes projects for the Office of Technology Development's Mixed Waste Integrated Program (MWIP). Technical solutions in the areas of characterization, treatment, and disposal were matched to a select number of US Department of Energy (DOE) treatability groups represented by waste streams found in the Mixed Waste Inventory Report (MWIR)

14. State Program Integrity Review Reports List
Data.gov (United States)
U.S. Department of Health & Human Services — Comprehensive state program integrity (PI) review reports (and respective follow-up review reports) provide CMS assessment of the effectiveness of the states PI...

15. Mixed waste integrated program: Logic diagram
Energy Technology Data Exchange (ETDEWEB)
Mayberry, J.; Stelle, S. [Science Applications International Corp., Idaho Falls, ID (United States); O'Brien, M. [Univ. of Arizona, Tucson, AZ (United States); Rudin, M. [Univ. of Nevada, Las Vegas, NV (United States); Ferguson, J. [Lockheed Idaho Technologies Co., Idaho Falls, ID (United States); McFee, J. [I.T. Corp., Albuquerque, NM (United States)
1994-11-30
The Mixed Waste Integrated Program Logic Diagram was developed to provide technical alternatives for mixed wastes projects for the Office of Technology Development's Mixed Waste Integrated Program (MWIP).
Technical solutions in the areas of characterization, treatment, and disposal were matched to a select number of US Department of Energy (DOE) treatability groups represented by waste streams found in the Mixed Waste Inventory Report (MWIR).

16. Integral Ramjet Booster Demonstration Program
Science.gov (United States)
1975-02-01
...vibration loads before motor firing at -65, +70, and +165°F; (2) the chambers are fabricated from roll and welded (TIG) L-605 sheet that is cold... [Remainder of snippet is residue from the report's list of figures: Typical Integral Booster Internal Configuration; Keyhole Grain Pressure and Thrust Versus Time at +70°F, +165°F, and -65°F (Sea Level); Radial-Slot Grain Design.]

17. Advances by the Integral Fast Reactor Program
International Nuclear Information System (INIS)
Lineberry, M.J.; Pedersen, D.R.; Walters, L.C.; Cahalan, J.E.
1991-01-01
The advances made by the Integral Fast Reactor Program at Argonne National Laboratory are the subject of this paper. The Integral Fast Reactor (IFR) is an advanced liquid-metal-cooled reactor concept being developed at Argonne National Laboratory. The advances stressed in the paper include fuel irradiation performance, improved passive safety, and the development of a prototype fuel cycle facility. 14 refs.

18. Containment integrity research program plan
International Nuclear Information System (INIS)
1987-08-01
This report presents a plan for research on the question of containment performance in postulated severe accident scenarios. It focuses on the research being performed by the Structural and Seismic Engineering Branch, Division of Engineering, Office of Nuclear Regulatory Research. Summaries of the plans for this work have previously been published in the "Nuclear Power Plant Severe Accident Research Plan" (NUREG-0900). This report provides an update to reflect current status.
This plan provides a summary of results to date as well as an outline of planned activities and milestones to the contemplated completion of the program in FY 1989.

19. Characterization, Monitoring and Sensor Technology Integrated Program
International Nuclear Information System (INIS)
1993-01-01
This booklet contains summary sheets that describe FY 1993 characterization, monitoring, and sensor technology (CMST) development projects. Currently, 32 projects are funded: 22 through the OTD Characterization, Monitoring, and Sensor Technology Integrated Program (CMST-IP), 8 through the OTD Program Research and Development Announcement (PRDA) activity managed by the Morgantown Energy Technology Center (METC), and 2 through Interagency Agreements (IAGs). This booklet is not inclusive of those CMST projects which are funded through Integrated Demonstrations (IDs) and other Integrated Programs (IPs). The projects are in six areas: Expedited Site Characterization; Contaminants in Soils and Groundwater; Geophysical and Hydrogeological Measurements; Mixed Wastes in Drums, Burial Grounds, and USTs; Remediation, D&D, and Waste Process Monitoring; and Performance Specifications and Program Support. A task description, technology needs, accomplishments, and technology transfer information are given for each project.

20. 75 FR 34805 - Program Integrity Issues
Science.gov (United States)
2010-06-18
... Mathematics Access to Retain Talent Grant (National Smart Grant) Programs. DATES: We must receive your... Association of College and University Business Officers, representing business officers. Val Meyers, Michigan... identifying and handling test score abnormalities, ensuring the integrity of the testing environment, and...

1. Integrated Data Base Program: a status report
International Nuclear Information System (INIS)
Notz, K.J.; Klein, J.A.
1984-06-01
The Integrated Data Base (IDB) Program provides official Department of Energy (DOE) data on spent fuel and radioactive waste inventories, projections, and characteristics. The accomplishments of FY 1983 are summarized for three broad areas: (1) upgrading and issuing of the annual report on spent fuel and radioactive waste inventories, projections, and characteristics, including ORIGEN2 applications and a quality assurance plan; (2) creation of a summary data file in user-friendly format for use on a personal computer and enhancing user access to program data; and (3) optimizing and documentation of the data handling methodology used by the IDB Program and providing direct support to other DOE programs and sites in data handling. Plans for future work in these three areas are outlined. 23 references, 11 figures.

2. Integral quality programs for radiodiagnostics services
International Nuclear Information System (INIS)
Alastuey, F.; Barranco, C.; Marco, R.; Perez, C.; Sanchez, J.; Pardo, J.; Madrid, G.
1993-01-01
The aim of the work entitled "Integral Quality Programs for Radiodiagnostics Services" is to present the experience accumulated over the past 10 years by the Radiodiagnostics Service of C.M.E. Ramon y Cajal in Zaragoza. The term "integral quality" will be defined conceptually in order to differentiate it from the classical quality control, which refers exclusively to the control of radiology equipment. The problem will be reviewed from the historical point of view, and a basic, homologated model, contrasted on the basis of the work of these 10 years, is proposed mainly to serve as the backbone for the working system in a Radiodiagnostics Service. (Author) 46 refs.

3. Developing an integrated dam safety program
International Nuclear Information System (INIS)
Nielsen, N. M.; Lampa, J.
1996-01-01
An effort has been made to demonstrate that dam safety is an integral part of asset management which, when properly done, ensures that all objectives relating to safety and compliance, profitability, stakeholders' expectations and customer satisfaction are achieved. The means to achieving this integration of the dam safety program and the level of effort required for each core function have been identified using the risk management approach to pinpoint vulnerabilities, and subsequently to focus priorities. The process is considered appropriate for any combination of numbers, sizes and uses of dams, and is designed to prevent exposure to unacceptable risks. 5 refs., 1 tab.

4. Integrating Cybersecurity into the Program Management Organization
Science.gov (United States)
2015-05-13
[Snippet consists of report documentation page boilerplate and briefing-slide fragments: "Threat to our National Economy"; "DOD Cybersecurity Gaps Could Be Canary in Federal Acquisition Coal Mine"; "Intangible Assets Create Vulnerabilities"; how the operational approach integrates with current or planned CONOPS, BCP, information architecture, programs or initiatives.]

5. In Situ Remediation Integrated Program: FY 1994 program summary
Energy Technology Data Exchange (ETDEWEB)
1995-04-01
The US Department of Energy (DOE) established the Office of Technology Development (EM-50) as an element of the Office of Environmental Management (EM) in November 1989. In an effort to focus resources and address priority needs, EM-50 introduced the concept of integrated programs (IPs) and integrated demonstrations (IDs). The In Situ Remediation Integrated Program (ISR IP) focuses research and development on the in-place treatment of contaminated environmental media, such as soil and groundwater, and the containment of contaminants to prevent the contaminants from spreading through the environment.
Using in situ remediation technologies to clean up DOE sites minimizes adverse health effects on workers and the public by reducing contact exposure. The technologies also reduce cleanup costs by orders of magnitude. This report summarizes project work conducted in FY 1994 under the ISR IP in three major areas: treatment (bioremediation), treatment (physical/chemical), and containment technologies. Buried waste, contaminated soils and groundwater, and containerized waste are all candidates for in situ remediation. Contaminants include radioactive waste, volatile and nonvolatile organics, heavy metals, nitrates, and explosive materials.

6. In Situ Remediation Integrated Program: FY 1994 program summary
International Nuclear Information System (INIS)
1995-04-01
The US Department of Energy (DOE) established the Office of Technology Development (EM-50) as an element of the Office of Environmental Management (EM) in November 1989. In an effort to focus resources and address priority needs, EM-50 introduced the concept of integrated programs (IPs) and integrated demonstrations (IDs). The In Situ Remediation Integrated Program (ISR IP) focuses research and development on the in-place treatment of contaminated environmental media, such as soil and groundwater, and the containment of contaminants to prevent the contaminants from spreading through the environment. Using in situ remediation technologies to clean up DOE sites minimizes adverse health effects on workers and the public by reducing contact exposure. The technologies also reduce cleanup costs by orders of magnitude. This report summarizes project work conducted in FY 1994 under the ISR IP in three major areas: treatment (bioremediation), treatment (physical/chemical), and containment technologies. Buried waste, contaminated soils and groundwater, and containerized waste are all candidates for in situ remediation.
Contaminants include radioactive waste, volatile and nonvolatile organics, heavy metals, nitrates, and explosive materials.

7. The Efficient Separations and Processing Integrated Program
International Nuclear Information System (INIS)
Kuhn, W.L.; Gephart, J.M.
1994-08-01
The Efficient Separations and Processing Integrated Program (ESPIP) was created in 1991 to identify, develop, and perfect separations technologies and processes to treat wastes and address environmental problems throughout the US Department of Energy (DOE) complex. The ESPIP funds several multiyear tasks that address high-priority waste remediation problems involving high-level, low-level, transuranic, hazardous, and mixed (radioactive and hazardous) wastes. The ESPIP supports applied R&D leading to demonstration or use of these separations technologies by other organizations within DOE's Office of Environmental Restoration and Waste Management. Examples of current ESPIP-funded separations technologies are described here.

8. Concrete containment integrity program at EPRI
International Nuclear Information System (INIS)
Winkleblack, R.K.; Tang, Y.K.
1984-01-01
Many in the nuclear power plant business believe that the catastrophic failure mode for reactor containment structures is unrealistic. One of the goals of the EPRI containment integrity program is to demonstrate that this is true. The objective of the program is to provide the utility industry with an experimental data base and a test-validated analytical method for realistically evaluating the actual over-pressure capability of concrete containment buildings and to predict leakage behavior if higher pressures were to occur. The ultimate goal of this research effort is to characterize the containment leakage mode and rate as a function of internal pressure and time so that the risk can be realistically assessed for hypothetical degraded core accidents.
Progress in the first and second phases of the three-phase analytical and testing efforts is discussed.

9. STEFINS: a steel freezing integral simulation program
International Nuclear Information System (INIS)
Frank, M.V.
1980-09-01
STEFINS (STEel Freezing INtegral Simulation) is a computer program for the calculation of the rate of solidification of molten steel on solid steel. Such computations arise when investigating core melt accidents in fast reactors. In principle this problem involves a coupled two-dimensional thermal and hydraulic approach. However, through physically reasonable assumptions a decoupled approach has been developed. The transient solidification of molten steel on a cold wall is solved in the direction normal to the molten steel flow, independently of the solution for the molten steel temperature and Nusselt number along the direction of flow. The solutions to the applicable energy equations have been programmed in cylindrical and slab geometries. Internal gamma heating of steel is included.

10. DOE In Situ Remediation Integrated Program
International Nuclear Information System (INIS)
Yow, J.L. Jr.
1993-01-01
The In Situ Remediation Integrated Program (ISRP) supports and manages a balanced portfolio of applied research and development activities in support of DOE environmental restoration and waste management needs. ISRP technologies are being developed in four areas: containment, chemical and physical treatment, in situ bioremediation, and in situ manipulation (including electrokinetics). The focus of containment is to provide mechanisms to stop contaminant migration through the subsurface. In situ bioremediation and chemical and physical treatment both aim to destroy or eliminate contaminants in groundwater and soils. In situ manipulation (ISM) provides mechanisms to access contaminants or introduce treatment agents into the soil, and includes other technologies necessary to support the implementation of ISR methods.
Descriptions of each major program area are provided to set the technical context of the ISM subprogram. Typical ISM needs for major areas of in situ remediation research and development are identified.

11. In Situ Remediation Integrated Program: Technology summary
Energy Technology Data Exchange (ETDEWEB)
1994-02-01
The In Situ Remediation Integrated Program (ISR IP) was instituted out of recognition that in situ remediation could fulfill three important criteria: significant cost reduction of cleanup by eliminating or minimizing excavation, transportation, and disposal of wastes; reduced health impacts on workers and the public by minimizing exposure to wastes during excavation and processing; and remediation of inaccessible sites, including deep subsurfaces and areas in, under, and around buildings. Buried waste, contaminated soils and groundwater, and containerized wastes are all candidates for in situ remediation. Contaminants include radioactive wastes, volatile and non-volatile organics, heavy metals, nitrates, and explosive materials. The ISR IP intends to facilitate development of in situ remediation technologies for hazardous, radioactive, and mixed wastes in soils, groundwater, and storage tanks. Near-term focus is on containment of the wastes, with treatment receiving greater effort in future years. ISR IP is an applied research and development program broadly addressing known DOE environmental restoration needs. Analysis of a sample of 334 representative sites by the Office of Environmental Restoration has shown how many sites are amenable to in situ remediation: containment--243 sites; manipulation--244 sites; bioremediation--154 sites; and physical/chemical methods--236 sites. This needs assessment is focused on near-term restoration problems (FY93--FY99). Many other remediations will be required in the next century. The major focus of the ISR IP is on the long-term development of permanent solutions to these problems.
Current needs for interim actions to protect human health and the environment are also being addressed.

12. In Situ Remediation Integrated Program: Technology summary
International Nuclear Information System (INIS)
1994-02-01
The In Situ Remediation Integrated Program (ISR IP) was instituted out of recognition that in situ remediation could fulfill three important criteria: significant cost reduction of cleanup by eliminating or minimizing excavation, transportation, and disposal of wastes; reduced health impacts on workers and the public by minimizing exposure to wastes during excavation and processing; and remediation of inaccessible sites, including deep subsurfaces and areas in, under, and around buildings. Buried waste, contaminated soils and groundwater, and containerized wastes are all candidates for in situ remediation. Contaminants include radioactive wastes, volatile and non-volatile organics, heavy metals, nitrates, and explosive materials. The ISR IP intends to facilitate development of in situ remediation technologies for hazardous, radioactive, and mixed wastes in soils, groundwater, and storage tanks. Near-term focus is on containment of the wastes, with treatment receiving greater effort in future years. ISR IP is an applied research and development program broadly addressing known DOE environmental restoration needs. Analysis of a sample of 334 representative sites by the Office of Environmental Restoration has shown how many sites are amenable to in situ remediation: containment--243 sites; manipulation--244 sites; bioremediation--154 sites; and physical/chemical methods--236 sites. This needs assessment is focused on near-term restoration problems (FY93--FY99). Many other remediations will be required in the next century. The major focus of the ISR IP is on the long-term development of permanent solutions to these problems. Current needs for interim actions to protect human health and the environment are also being addressed.

13.
Integrated rural development programs: a skeptical perspective
Science.gov (United States)
Ruttan, V W
1975-11-01
In examining integrated rural development programs, the question that arises is why it is possible to identify several relatively successful small-scale or pilot rural development projects, yet so difficult to find examples of successful rural development programs. Three bodies of literature offer some insight into the morphology of rural development projects, programs, and processes: the urban-industrial impact hypothesis; the theory of induced technical change; and the new models of institutional change that deal with institution building and the economics of bureaucratic behavior. The urban-industrial impact hypothesis helps in the clarification of the relationships between the development of rural areas and the development of the total society of which rural areas are a part. It is useful in understanding the spatial dimensions of rural development, where rural development efforts are likely to be most successful. Formulation of the hypothesis generated a series of empirical studies designed to test its validity. The effect of these studies has been the development of a rural development model in which the rural community is linked to the urban-industrial economy through a series of market relationships. Both the urban economy's rate of growth and the efficiency of the intersector product and factor markets place significant constraints on the possibilities of rural area development. It is not possible to isolate development processes in the contemporary rural community in a developing society from development processes in the larger society. The induced technical change theory provides a guide as to what must be done to gain access to efficient sources of economic growth, the new resources and incomes that are necessary to sustain rural development.
Design of a successful rural development strategy involves a combination of technical and institutional change. The ability of rural areas to respond to the opportunities for economic growth generated by local urban

14. Mixed Waste Integrated Program (MWIP): Technology summary
International Nuclear Information System (INIS)
1994-02-01
The mission of the Mixed Waste Integrated Program (MWIP) is to develop and demonstrate innovative and emerging technologies for the treatment and management of DOE's mixed low-level wastes (MLLW) for use by its customers, the Office of Waste Operations (EM-30) and the Office of Environmental Restoration (EM-40). The primary goal of MWIP is to develop and demonstrate the treatment and disposal of actual mixed waste (MLLW and MTRU). The vitrification process and the plasma hearth process are scheduled for demonstration on actual radioactive waste in FY95 and FY96, respectively. This will be accomplished by sequential studies of lab-scale non-radioactive testing, followed by bench-scale radioactive testing, followed by field-scale radioactive testing. Both processes create a highly durable final waste form that passes leachability requirements while destroying organics. Material handling technology and off-gas requirements and capabilities for the plasma hearth process and the vitrification process will be established in parallel.

15. Mixed Waste Integrated Program emerging technology development
International Nuclear Information System (INIS)
Berry, J.B.; Hart, P.W.
1994-01-01
The US Department of Energy (DOE) is responsible for the management and treatment of its mixed low-level wastes (MLLW). MLLW are regulated under both the Resource Conservation and Recovery Act and various DOE orders. Over the next 5 years, DOE will manage over 1.2 m³ of MLLW and mixed transuranic (MTRU) wastes.
In order to successfully manage and treat these mixed wastes, DOE must adapt and develop characterization, treatment, and disposal technologies which will meet performance criteria, regulatory approvals, and public acceptance. Although technology to treat MLLW is not currently available without modification, DOE is committed to developing such treatment technologies and demonstrating them at the field scale by FY 1997. The Office of Research and Development's Mixed Waste Integrated Program (MWIP) within the DOE Office of Environmental Management (EM), Office of Technology Development, is responsible for the development and demonstration of such technologies for MLLW and MTRU wastes. MWIP advocates and sponsors expedited technology development and demonstrations for the treatment of MLLW.

16. Mixed Waste Integrated Program emerging technology development
Energy Technology Data Exchange (ETDEWEB)
Berry, J.B. [Oak Ridge National Lab., TN (United States)]; Hart, P.W. [USDOE, Washington, DC (United States)]
1994-06-01
The US Department of Energy (DOE) is responsible for the management and treatment of its mixed low-level wastes (MLLW). MLLW are regulated under both the Resource Conservation and Recovery Act and various DOE orders. Over the next 5 years, DOE will manage over 1.2 m³ of MLLW and mixed transuranic (MTRU) wastes. In order to successfully manage and treat these mixed wastes, DOE must adapt and develop characterization, treatment, and disposal technologies which will meet performance criteria, regulatory approvals, and public acceptance. Although technology to treat MLLW is not currently available without modification, DOE is committed to developing such treatment technologies and demonstrating them at the field scale by FY 1997.
The Office of Research and Development's Mixed Waste Integrated Program (MWIP) within the DOE Office of Environmental Management (EM), Office of Technology Development, is responsible for the development and demonstration of such technologies for MLLW and MTRU wastes. MWIP advocates and sponsors expedited technology development and demonstrations for the treatment of MLLW.

17. Program Collaboration and Service Integration At-a-Glance
Centers for Disease Control (CDC) Podcasts
Dr. Kevin A. Fenton, Director of CDC's National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, discusses program collaboration and service integration, a strategy that promotes better collaboration between public health programs and supports appropriate service integration at the point-of-care.

18. Integrated program of use of Probabilistic Safety Analysis in Spain
International Nuclear Information System (INIS)
1998-01-01
Since 25 June 1986, when the CSN (Nuclear Safety Council) approved the Integrated Program of Probabilistic Safety Analysis, this program has articulated the main activities of the CSN. This document summarizes the activities developed during these years and reviews the Integrated Program.

19. Program collaboration and service integration activities among HIV programs in 59 U.S. health departments
Science.gov (United States)
Fitz Harris, Lauren F; Toledo, Lauren; Dunbar, Erica; Aquino, Gustavo A; Nesheim, Steven R
2014-01-01
We identified the level and type of program collaboration and service integration (PCSI) among HIV prevention programs in 59 CDC-funded health department jurisdictions. Annual progress reports (APRs) completed by all 59 health departments funded by CDC for HIV prevention activities were reviewed for collaborative and integrated activities reported by HIV programs for calendar year 2009. We identified associations between PCSI activities and funding, AIDS diagnosis rate, and organizational integration.
HIV programs collaborated with other health department programs through data-related activities, provider training, and providing funding for sexually transmitted disease (STD) activities in 24 (41%), 31 (53%), and 16 (27%) jurisdictions, respectively. Of the 59 jurisdictions, 57 (97%) reported integrated HIV and STD testing at the same venue, 39 (66%) reported integrated HIV and tuberculosis testing, and 26 (44%) reported integrated HIV and viral hepatitis testing. Forty-five (76%) jurisdictions reported providing integrated education/outreach activities for HIV and at least one other disease. Twenty-six (44%) jurisdictions reported integrated partner services among HIV and STD programs. Overall, the level of PCSI activities was not associated with HIV funding, AIDS diagnoses, or organizational integration. HIV programs in health departments collaborate primarily with STD programs. Key PCSI activities include integrated testing, integrated education/outreach, and training. Future assessments are needed to evaluate PCSI activities and to identify the level of collaboration and integration among prevention programs.

20. 20 CFR 220.64 - Program integrity
Science.gov (United States)
2010-04-01
... for reasons bearing on professional competence, professional conduct, or financial integrity; who has surrendered such a license while formal disciplinary proceedings involving professional conduct were pending...

1. 76 FR 34541 - Child and Adult Care Food Program Improving Management and Program Integrity
Science.gov (United States)
2011-06-13
... 7 CFR Parts 210, 215, 220 et al. Child and Adult Care Food Program Improving Management and Program..., 220, 225, and 226 RIN 0584-AC24 Child and Adult Care Food Program Improving Management and Program... management and integrity in the Child and Adult Care Food Program (CACFP), at 67 FR 43447 (June 27, 2002) and...

2.
Exploring Art and Science Integration in an Afterschool Program
Science.gov (United States)
Bolotta, Alanna
Science, technology, engineering, arts and math (STEAM) education integrates science with art, presenting a unique and interesting opportunity to increase accessibility in science for learners. This case study examines an afterschool program grounded in art and science integration. Specifically, I studied the goals of the program, its implementation, and the student experience (thinking, feeling and doing) as students participated in the program. My findings suggest that these programs can be powerful methods to nurture scientific literacy, creativity and emotional development in learners. To do so, this program made connections between disciplines and beyond, integrated holistic teaching and learning practices, and continually adapted programming while also responding to challenges. The program is therefore specially suited to engage the heads, hands and hearts of learners, and can make an important contribution to their learning and development. To conclude, I provide some recommendations for STEAM implementation in both formal and informal learning settings.

3. Slide layout and integrated design (SLIDE) program
International Nuclear Information System (INIS)
Roberts, S.G.
1975-01-01
SLIDE is a FORTRAN IV program for producing 35 mm color slides on the Control Data CYBER-74. SLIDE interfaces with the graphics package, DISSPLA, on the CYBER-74. It was designed so that persons with no previous computer experience can easily and quickly generate their own textual 35 mm color slides for verbal presentations. SLIDE's features include seven different colors, five text sizes, ten tab positions, and two page sizes. As many slides as desired may be produced during any one run of the program. Each slide is designed to represent an 8 1/2 in. x 11 in. or an 11 in. x 8 1/2 in. page.
The input data cards required to run the SLIDE program and the program output are described. Appendixes contain a sample program run showing input, output, and the resulting slides produced and a FORTRAN listing of the SLIDE program. (U.S.) 4. Program Integration for International Technology Exchange International Nuclear Information System (INIS) Rea, J.L. 1993-01-01 Sandia National Laboratories (SNL), Albuquerque, New Mexico, supports the International Technology Exchange Division (ITED) through the integration of all international activities conducted within the DOE's Office of Environmental Management (EM) 5. 76 FR 20534 - Program Integrity Issues Science.gov (United States) 2011-04-13 ... Code of Federal Regulations is available via the Federal Digital System at: http://www.gpo.gov/fdsys... educational programs or those that provide marketing, advertising, recruiting, or admissions services. We have... the institution to provide services, such as food service, other than educational programs, marketing... 6. 78 FR 17598 - Program Integrity Issues Science.gov (United States) 2013-03-22 ... College and Higher Education (TEACH) Grant Program, the Federal Pell Grant Program, and the Academic Competitiveness Grant (AGC) and National Science and Mathematics Access to Retain Talent Grant (National Smart... is most likely to be obtained. As the primary function of admissions representatives is to serve as... 7. Integrating Robot Task Planning into Off-Line Programming Systems DEFF Research Database (Denmark) Sun, Hongyan; Kroszynski, Uri 1988-01-01 a system architecture for integrated robot task planning. It identifies and describes the components considered necessary for implementation. The focus is on functionality of these elements as well as on the information flow. 
A pilot implementation of such an integrated system architecture for a robot......The addition of robot task planning in off-line programming systems aims at improving the capability of current state-of-the-art commercially available off-line programming systems, by integrating modeling, task planning, programming and simulation together under one platform. This article proposes...... assembly task is discussed.... 8. Program Collaboration and Service Integration At-a-Glance Centers for Disease Control (CDC) Podcasts 2010-09-15 Dr. Kevin A. Fenton, Director of CDC's National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, discusses program collaboration and service integration, a strategy that promotes better collaboration between public health programs and supports appropriate service integration at the point-of-care.  Created: 9/15/2010 by National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention.   Date Released: 9/15/2010. 9. Integrating Ethics in Community Colleges' Accounting Programs. Science.gov (United States) Clarke, Clifton 1990-01-01 Argues that two-year college business programs need to provide moral guidance and leadership to students to help stem the proliferation of fraudulent and questionable financial reporting practices. Reviews amoral and moral unity theories of business ethics. Discusses barriers to ethical instruction in business curricula, and ways to overcome them.… 10. A stochastic-programming approach to integrated asset and liability ... African Journals Online (AJOL) This increase in complexity has provided an impetus for the investigation into integrated asset- and liability-management frameworks that could realistically address dynamic portfolio allocation in a risk-controlled way. In this paper the authors propose a multi-stage dynamic stochastic-programming model for the integrated ... 11. Opportunities for Integrated Fast Ignition program International Nuclear Information System (INIS) Mackinnon, A. 
J.; Key, M. H.; Hatchett, S. P.; Tabak, M.; Town, R.; Gregori, G.; Patel, P. K.; Snavely, R.; Freeman, R. R.; Stephens, R. B.; Beg, F. 2005-01-01 Experiments designed to investigate the physics of particle transport and heating of dense plasmas have been carried out in a number of facilities around the world since the publication of the fast ignition concept in 1997. To date a number of integrated experiments, examining the capsule implosion and subsequent heating, have been carried out on the Gekko facility at the Institute of Laser Engineering (ILE) Osaka, Japan. The coupling of energy by the short pulse into the pre-compressed core in these experiments was very encouraging. More facilities capable of carrying out integrated experiments are currently under construction: Firex at ILE, the Omega EP facility at the University of Rochester, Z PW at Sandia National Lab, LIL in France and eventually high energy PW beams on the NIF. This presentation will review the current status of experiments in this area and discuss the capabilities of integrated fast ignition research that will be required to design the proof of principle and scaling experiments for fast ignition to be carried out on the NIF. (Author)

12. 75 FR 66665 - Program Integrity: Gainful Employment-New Programs Science.gov (United States) 2010-10-29 ..., requires the GAO to conduct a study and report on issues pertaining to the oral health of children... response to, an initiative by a governmental entity, such as the oral health program with the Federal... already understand the employment demands in their field. The commenters also believed that because...

13. Science.gov (United States) Pappalardo, Michele; Schaffer, William R. 2016-01-01 With the passage of the Workforce Innovation and Opportunity Act (WIOA) of 2014, Northampton Community College began the creation of Integrated Education and Training (IE&T) programs in October 2015.
After a needs assessment was conducted with the partners, programs were created to address the needs in the hospitality and healthcare sectors.…

14. Integrating computer programs for engineering analysis and design Science.gov (United States) Wilhite, A. W.; Crisp, V. K.; Johnson, S. C. 1983-01-01 The design of a third-generation system for integrating computer programs for engineering and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for the computer-aided engineering system. It is used for a repository of design data that are communicated between analysis programs, for a dictionary that describes these design data, for a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely-coupled design system. This method emphasizes an interactive extension of analysis techniques and manipulation of design data. Also, integrity mechanisms exist to maintain database correctness for multidisciplinary design tasks by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.

15. Light Water Reactor Sustainability Program: Integrated Program Plan International Nuclear Information System (INIS) 2016-02-01 and terrorism. The Light Water Reactor Sustainability (LWRS) Program is the primary programmatic activity that addresses Objective 1. This document summarizes the LWRS Program's plans. For the LWRS Program, sustainability is defined as the ability to maintain safe and economic operation of the existing fleet of nuclear power plants for a longer-than-initially-licensed lifetime.
It has two facets with respect to long-term operations: (1) manage the aging of plant systems, structures, and components so that nuclear power plant lifetimes can be extended and the plants can continue to operate safely, efficiently, and economically; and (2) provide science-based solutions to the industry to implement technology to exceed the performance of the current labor-intensive business model.

16. Light Water Reactor Sustainability Program: Integrated Program Plan Energy Technology Data Exchange (ETDEWEB) NONE 2016-02-15 proliferation and terrorism. The Light Water Reactor Sustainability (LWRS) Program is the primary programmatic activity that addresses Objective 1. This document summarizes the LWRS Program's plans. For the LWRS Program, sustainability is defined as the ability to maintain safe and economic operation of the existing fleet of nuclear power plants for a longer-than-initially-licensed lifetime. It has two facets with respect to long-term operations: (1) manage the aging of plant systems, structures, and components so that nuclear power plant lifetimes can be extended and the plants can continue to operate safely, efficiently, and economically; and (2) provide science-based solutions to the industry to implement technology to exceed the performance of the current labor-intensive business model.

17. Light Water Reactor Sustainability Program Integrated Program Plan International Nuclear Information System (INIS) Griffith, George; Youngblood, Robert; Busby, Jeremy; Hallbert, Bruce; Barnard, Cathy; McCarthy, Kathryn 2012-01-01 Nuclear power has safely, reliably, and economically contributed almost 20% of electrical generation in the United States over the past two decades. It remains the single largest contributor (more than 70%) of non-greenhouse-gas-emitting electric power generation in the United States. Domestic demand for electrical energy is expected to experience a 31% growth from 2009 to 2035.
At the same time, most of the currently operating nuclear power plants will begin reaching the end of their initial 20-year extension to their original 40-year operating license for a total of 60 years of operation. Figure E-1 shows projected nuclear energy contribution to the domestic generating capacity. If current operating nuclear power plants do not operate beyond 60 years, the total fraction of generated electrical energy from nuclear power will begin to decline - even with the expected addition of new nuclear generating capacity. The oldest commercial plants in the United States reached their 40th anniversary in 2009. The U.S. Department of Energy Office of Nuclear Energy's Research and Development Roadmap (Nuclear Energy Roadmap) organizes its activities around four objectives that ensure nuclear energy remains a compelling and viable energy option for the United States. The four objectives are as follows: (1) develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the life of the current reactors; (2) develop improvements in the affordability of new reactors to enable nuclear energy to help meet the Administration's energy security and climate change goals; (3) develop sustainable nuclear fuel cycles; and (4) understand and minimize the risks of nuclear proliferation and terrorism. The Light Water Reactor Sustainability (LWRS) Program is the primary programmatic activity that addresses Objective 1. This document summarizes the LWRS Program's plans.

18. Light Water Reactor Sustainability Program Integrated Program Plan Energy Technology Data Exchange (ETDEWEB) George Griffith; Robert Youngblood; Jeremy Busby; Bruce Hallbert; Cathy Barnard; Kathryn McCarthy 2012-01-01 Nuclear power has safely, reliably, and economically contributed almost 20% of electrical generation in the United States over the past two decades.
It remains the single largest contributor (more than 70%) of non-greenhouse-gas-emitting electric power generation in the United States. Domestic demand for electrical energy is expected to experience a 31% growth from 2009 to 2035. At the same time, most of the currently operating nuclear power plants will begin reaching the end of their initial 20-year extension to their original 40-year operating license for a total of 60 years of operation. Figure E-1 shows projected nuclear energy contribution to the domestic generating capacity. If current operating nuclear power plants do not operate beyond 60 years, the total fraction of generated electrical energy from nuclear power will begin to decline - even with the expected addition of new nuclear generating capacity. The oldest commercial plants in the United States reached their 40th anniversary in 2009. The U.S. Department of Energy Office of Nuclear Energy's Research and Development Roadmap (Nuclear Energy Roadmap) organizes its activities around four objectives that ensure nuclear energy remains a compelling and viable energy option for the United States. The four objectives are as follows: (1) develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the life of the current reactors; (2) develop improvements in the affordability of new reactors to enable nuclear energy to help meet the Administration's energy security and climate change goals; (3) develop sustainable nuclear fuel cycles; and (4) understand and minimize the risks of nuclear proliferation and terrorism. The Light Water Reactor Sustainability (LWRS) Program is the primary programmatic activity that addresses Objective 1. This document summarizes the LWRS Program's plans.

19. Light Water Reactor Sustainability Program: Integrated Program Plan Energy Technology Data Exchange (ETDEWEB) None, None 2017-05-01 proliferation and terrorism.
The Light Water Reactor Sustainability (LWRS) Program is the primary programmatic activity that addresses Objective 1. This document summarizes the LWRS Program’s plans. For the LWRS Program, sustainability is defined as the ability to maintain safe and economic operation of the existing fleet of nuclear power plants for a longer-than-initially-licensed lifetime. It has two facets with respect to long-term operations: (1) manage the aging of plant systems, structures, and components so that nuclear power plant lifetimes can be extended and the plants can continue to operate safely, efficiently, and economically; and (2) provide science-based solutions to the industry to implement technology to exceed the performance of the current labor-intensive business model.

20. Mixed Waste Integrated Program Quality Assurance requirements plan International Nuclear Information System (INIS) 1994-01-01 Mixed Waste Integrated Program (MWIP) is sponsored by the US Department of Energy (DOE), Office of Technology Development, Waste Management Division. The strategic objectives of MWIP are defined in the Mixed Waste Integrated Program Strategic Plan, and expanded upon in the MWIP Program Management Plan. This MWIP Quality Assurance Requirement Plan (QARP) applies to mixed waste treatment technologies involving both hazardous and radioactive constituents. As a DOE organization, MWIP is required to develop, implement, and maintain a written Quality Assurance Program in accordance with DOE Order 4700.1 Project Management System, DOE Order 5700.6C, Quality Assurance, DOE Order 5820.2A Radioactive Waste Management, ASME NQA-1 Quality Assurance Program Requirements for Nuclear Facilities and ANSI/ASQC E4-19xx Specifications and Guidelines for Quality Systems for Environmental Data Collection and Environmental Technology Programs.
The purpose of the MWIP QA program is to establish controls which address the requirements in 5700.6C, with the intent to minimize risks and potential environmental impacts; and to maximize environmental protection, health, safety, reliability, and performance in all program activities. QA program controls are established to assure that each participating organization conducts its activities in a manner consistent with risks posed by those activities.

1. Mixed Waste Integrated Program Quality Assurance requirements plan Energy Technology Data Exchange (ETDEWEB) 1994-04-15 Mixed Waste Integrated Program (MWIP) is sponsored by the US Department of Energy (DOE), Office of Technology Development, Waste Management Division. The strategic objectives of MWIP are defined in the Mixed Waste Integrated Program Strategic Plan, and expanded upon in the MWIP Program Management Plan. This MWIP Quality Assurance Requirement Plan (QARP) applies to mixed waste treatment technologies involving both hazardous and radioactive constituents. As a DOE organization, MWIP is required to develop, implement, and maintain a written Quality Assurance Program in accordance with DOE Order 4700.1 Project Management System, DOE Order 5700.6C, Quality Assurance, DOE Order 5820.2A Radioactive Waste Management, ASME NQA-1 Quality Assurance Program Requirements for Nuclear Facilities and ANSI/ASQC E4-19xx Specifications and Guidelines for Quality Systems for Environmental Data Collection and Environmental Technology Programs. The purpose of the MWIP QA program is to establish controls which address the requirements in 5700.6C, with the intent to minimize risks and potential environmental impacts; and to maximize environmental protection, health, safety, reliability, and performance in all program activities. QA program controls are established to assure that each participating organization conducts its activities in a manner consistent with risks posed by those activities.

2.
A program for performing angular integrations for transition operators International Nuclear Information System (INIS) Froese Fischer, C.; Godefroid, M.R.; Hibbert, A. 1991-01-01 The MCHF-MLTPOL program performs the angular integrations necessary for expressing the matrix elements of transition operators, E1, E2, ..., or M1, M2, ..., as linear combinations of radial integrals. All matrix elements for transitions between two lists of configuration states will be evaluated. A limited amount of non-orthogonality is allowed between orbitals of the initial and final state. (orig.)

3. Light Water Reactor Sustainability Program Integrated Program Plan Energy Technology Data Exchange (ETDEWEB) McCarthy, Kathryn A. [Idaho National Lab. (INL), Idaho Falls, ID (United States)]; Busby, Jeremy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Hallbert, Bruce [Idaho National Lab. (INL), Idaho Falls, ID (United States)]; Bragg-Sitton, Shannon [Idaho National Lab. (INL), Idaho Falls, ID (United States)]; Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States)]; Barnard, Cathy [Idaho National Lab. (INL), Idaho Falls, ID (United States)] 2014-04-01 Nuclear power has safely, reliably, and economically contributed almost 20% of electrical generation in the United States over the past two decades. It remains the single largest contributor (more than 70%) of non-greenhouse-gas-emitting electric power generation in the United States. Domestic demand for electrical energy is expected to experience a 31% growth from 2009 to 2035. At the same time, most of the currently operating nuclear power plants will begin reaching the end of their initial 20-year extension to their original 40-year operating license for a total of 60 years of operation. Figure E-1 shows projected nuclear energy contribution to the domestic generating capacity.
If current operating nuclear power plants do not operate beyond 60 years, the total fraction of generated electrical energy from nuclear power will begin to decline—even with the expected addition of new nuclear generating capacity. The oldest commercial plants in the United States reached their 40th anniversary in 2009. The U.S. Department of Energy Office of Nuclear Energy’s Research and Development Roadmap (Nuclear Energy Roadmap) organizes its activities around four objectives that ensure nuclear energy remains a compelling and viable energy option for the United States. The four objectives are as follows: (1) develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the life of the current reactors; (2) develop improvements in the affordability of new reactors to enable nuclear energy to help meet the Administration’s energy security and climate change goals; (3) develop sustainable nuclear fuel cycles; and (4) understand and minimize the risks of nuclear proliferation and terrorism. The Light Water Reactor Sustainability (LWRS) Program is the primary programmatic activity that addresses Objective 1. This document summarizes the LWRS Program’s plans.

4. Light Water Reactor Sustainability Program Integrated Program Plan Energy Technology Data Exchange (ETDEWEB) Kathryn McCarthy; Jeremy Busby; Bruce Hallbert; Shannon Bragg-Sitton; Curtis Smith; Cathy Barnard 2013-04-01 Nuclear power has safely, reliably, and economically contributed almost 20% of electrical generation in the United States over the past two decades. It remains the single largest contributor (more than 70%) of non-greenhouse-gas-emitting electric power generation in the United States. Domestic demand for electrical energy is expected to experience a 31% growth from 2009 to 2035.
At the same time, most of the currently operating nuclear power plants will begin reaching the end of their initial 20-year extension to their original 40-year operating license for a total of 60 years of operation. Figure E-1 shows projected nuclear energy contribution to the domestic generating capacity. If current operating nuclear power plants do not operate beyond 60 years, the total fraction of generated electrical energy from nuclear power will begin to decline—even with the expected addition of new nuclear generating capacity. The oldest commercial plants in the United States reached their 40th anniversary in 2009. The U.S. Department of Energy Office of Nuclear Energy’s Research and Development Roadmap (Nuclear Energy Roadmap) organizes its activities around four objectives that ensure nuclear energy remains a compelling and viable energy option for the United States. The four objectives are as follows: (1) develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the life of the current reactors; (2) develop improvements in the affordability of new reactors to enable nuclear energy to help meet the Administration’s energy security and climate change goals; (3) develop sustainable nuclear fuel cycles; and (4) understand and minimize the risks of nuclear proliferation and terrorism. The Light Water Reactor Sustainability (LWRS) Program is the primary programmatic activity that addresses Objective 1. This document summarizes the LWRS Program’s plans.

5. Integration of safety engineering into a cost optimized development program. Science.gov (United States) Ball, L. W. 1972-01-01 A six-segment management model is presented, each segment of which represents a major area in a new product development program. The first segment of the model covers integration of specialist engineers into 'systems requirement definition' or the system engineering documentation process.
The second covers preparation of five basic types of 'development program plans.' The third segment covers integration of system requirements, scheduling, and funding of specialist engineering activities into 'work breakdown structures,' 'cost accounts,' and 'work packages.' The fourth covers 'requirement communication' by line organizations. The fifth covers 'performance measurement' based on work package data. The sixth covers 'baseline requirements achievement tracking.'

6. Hazardous Waste Remedial Actions Program: integrating waste management International Nuclear Information System (INIS) Petty, J.L.; Sharples, F.E. 1986-01-01 The Hazardous Waste Remedial Actions Program was established to integrate Defense Programs' activities in hazardous and mixed waste management. The Program currently provides centralized planning and technical support to the Office of the Assistant Secretary for Defense Programs. More direct project management responsibilities may be assumed in the future. The Program, under the direction of the ASDP's Office of Defense Waste and Transportation Management, interacts with numerous organizational entities of the Department. The Oak Ridge Operations Office has been designated as the Lead Field Office. The Program's four current components cover remedial action project identification and prioritization; technology adaptation; an information system; and a strategy study for long-term, 'corporate' project and facility planning.

7. Boosting program integrity and effectiveness of the cognitive behavioral program EQUIP for incarcerated youth in The Netherlands NARCIS (Netherlands) Helmond, P.; Overbeek, G.; Brugman, D. 2014-01-01 This study examined whether a "program integrity booster" could improve the low to moderate program integrity and effectiveness of the EQUIP program for incarcerated youth as practiced in The Netherlands. Program integrity was assessed in EQUIP groups before and after the booster. Youth residing in

8.
Attitudes Toward Integration as Perceived by Preservice Teachers Enrolled in an Integrated Mathematics, Science, and Technology Teacher Education Program. Science.gov (United States) Berlin, Donna F.; White, Arthur L. 2002-01-01 Describes the purpose of the Master of Education (M. Ed.) Program in Integrated Mathematics, Science, and Technology Education (MSAT Program) at The Ohio State University and discusses preservice teachers' attitudes and perceptions toward integrated curriculum. (Contains 35 references.) (YDS)

9. Integrated inspection programs at Bruce Heavy Water Plant International Nuclear Information System (INIS) Brown, K.C. 1992-01-01 Quality pressure boundary maintenance and an excellent loss prevention record at Bruce Heavy Water Plant are the results of the Material and Inspection Unit's five inspection programs. Experienced inspectors are responsible for the integrity of the pressure boundary in their own operating area. Inspectors are part of the Technical Section, and along with unit engineering staff, they provide technical input before, during, and after the job. How these programs are completed, and the results achieved, are discussed. 5 figs., 1 appendix.

10. IAEA integrated safeguards instrumentation program (I2SIP) International Nuclear Information System (INIS) Arlt, R.; Fortakov, V.; Gaertner, K.J. 1995-01-01 This article is a review of the IAEA integrated safeguards instrumentation program. The historical development of the program is outlined, and current activities are also noted. Brief technical descriptions of certain features are given. It is concluded that the results of this year's efforts in this area will provide significant input and be used to assess the viability of the proposed concepts and to decide on the directions to pursue in the future.

11.
Integrated inspection programs at Bruce Heavy Water Plant Energy Technology Data Exchange (ETDEWEB) Brown, K C [Ontario Hydro, Tiverton, ON (Canada)] 1993-12-31 Quality pressure boundary maintenance and an excellent loss prevention record at Bruce Heavy Water Plant are the results of the Material and Inspection Unit's five inspection programs. Experienced inspectors are responsible for the integrity of the pressure boundary in their own operating area. Inspectors are part of the Technical Section, and along with unit engineering staff, they provide technical input before, during, and after the job. How these programs are completed, and the results achieved, are discussed. 5 figs., 1 appendix.

12. Danish integrated antimicrobial resistance monitoring and research program DEFF Research Database (Denmark) Hammerum, Anette Marie; Heuer, Ole Eske; Emborg, Hanne-Dorthe 2007-01-01 Resistance to antimicrobial agents is an emerging problem worldwide. Awareness of the undesirable consequences of its widespread occurrence has led to the initiation of antimicrobial agent resistance monitoring programs in several countries. In 1995, Denmark was the first country to establish a systematic and continuous monitoring program of antimicrobial drug consumption and antimicrobial agent resistance in animals, food, and humans, the Danish Integrated Antimicrobial Resistance Monitoring and Research Program (DANMAP). Monitoring of antimicrobial drug resistance and a range of research activities related to DANMAP have contributed to restrictions or bans of use of several antimicrobial agents in food animals in Denmark and other European Union countries.

13.
Trends and Features of Student Research Integration in Educational Program Science.gov (United States) Grinenko, Svetlana; Makarova, Elena; Andreassen, John-Erik 2016-01-01 This study examines trends and features of student research integration in educational program during international cooperation between Østfold University College in Norway and Southern Federal University in Russia. According to the research and education approach, the international project is aimed at using four education models, which linked student…

14. Planning integration FY 1996 program plan. Revision 1 International Nuclear Information System (INIS) 1995-09-01 This Multi-Year Program Plan (MYPP), Planning Integration Program, Work Breakdown Structure (WBS) Element 1.8.2, is the primary management tool to document the technical, schedule, and cost baseline for work directed by the US Department of Energy (DOE), Richland Operations Office (RL). As an approved document, it establishes an agreement between RL and the performing contractors for the work to be performed. It was prepared by Westinghouse Hanford Company (WHC) and Pacific Northwest Laboratory (PNL). The MYPPs for the Hanford Site programs are to provide a picture from fiscal year (FY) 1996 through FY 2002. At RL Planning and Integration Division (PID) direction, only the FY 1996 Planning Integration Program work scope has been planned and presented in this MYPP. Only those known significant activities which occur after FY 1996 are portrayed in this MYPP. This is due to the uncertainty of who will be accomplishing what work scope when, following the award of the Management and Integration (M&I) contract.

15. Integral Fast Reactor Program annual progress report, FY 1991 International Nuclear Information System (INIS) 1992-06-01 This report summarizes highlights of the technical progress made in the Integral Fast Reactor (IFR) Program in FY 1991.
Technical accomplishments are presented in the following areas of the IFR technology development activities: (1) metal fuel performance, (2) pyroprocess development, (3) safety experiments and analyses, (4) core design development, (5) fuel cycle demonstration, and (6) LMR technology R&D.

16. A mixed integer linear program for an integrated fishery | Hasan ... African Journals Online (AJOL) ... and labour allocation of quota based integrated fisheries. We demonstrate the workability of our model with a numerical example and sensitivity analysis based on data obtained from one of the major fisheries in New Zealand. Keywords: mixed integer linear program, fishing, trawler scheduling, processing, quotas ORiON: ...

17. Integral Fast Reactor Program. Annual progress report, FY 1992 Energy Technology Data Exchange (ETDEWEB) Chang, Y.I.; Walters, L.C.; Laidler, J.J.; Pedersen, D.R.; Wade, D.C.; Lineberry, M.J. 1993-06-01 This report summarizes highlights of the technical progress made in the Integral Fast Reactor (IFR) Program in FY 1992. Technical accomplishments are presented in the following areas of the IFR technology development activities: (1) metal fuel performance, (2) pyroprocess development, (3) safety experiments and analyses, (4) core design development, (5) fuel cycle demonstration, and (6) LMR technology R&D.

18. Integral Fast Reactor Program annual progress report, FY 1994 International Nuclear Information System (INIS) Chang, Y.I.; Walters, L.C.; Laidler, J.J.; Pedersen, D.R.; Wade, D.C.; Lineberry, J.J. 1994-12-01 This report summarizes highlights of the technical progress made in the Integral Fast Reactor (IFR) Program in FY 1994. Technical accomplishments are presented in the following areas of the IFR technology development activities: metal fuel performance; pyroprocess development; safety experiments and analyses; core design development; fuel cycle demonstration; and LMR technology R&D.

19.
Hawaii Integrated Biofuels Research Program: Final Subcontract Report, Phase III Energy Technology Data Exchange (ETDEWEB) 1992-05-01 This report is a compilation of studies done to develop an integrated set of strategies for the production of energy from renewable resources in Hawaii. Because of the close coordination between this program and other ongoing DOE research, the work will have broad-based applicability to the entire United States.

20. Integral Fast Reactor Program. Annual progress report, FY 1993 Energy Technology Data Exchange (ETDEWEB) Chang, Y.I.; Walters, L.C.; Laidler, J.J.; Pedersen, D.R.; Wade, D.C.; Lineberry, M.J. 1994-10-01 This report summarizes highlights of the technical progress made in the Integral Fast Reactor (IFR) Program in FY 1993. Technical accomplishments are presented in the following areas of the IFR technology development activities: (1) metal fuel performance, (2) pyroprocess development, (3) safety experiments and analyses, (4) core design development, (5) fuel cycle demonstration, and (6) LMR technology R&D.

1. Integral Fast Reactor Program. Annual progress report, FY 1993 International Nuclear Information System (INIS) Chang, Y.I.; Walters, L.C.; Laidler, J.J.; Pedersen, D.R.; Wade, D.C.; Lineberry, M.J. 1994-10-01 This report summarizes highlights of the technical progress made in the Integral Fast Reactor (IFR) Program in FY 1993. Technical accomplishments are presented in the following areas of the IFR technology development activities: (1) metal fuel performance, (2) pyroprocess development, (3) safety experiments and analyses, (4) core design development, (5) fuel cycle demonstration, and (6) LMR technology R&D.

2. Biomass Program 2007 Peer Review - Integrated Biorefinery Platform Summary Energy Technology Data Exchange (ETDEWEB) none, 2009-10-27 This document discloses the comments provided by a review panel at the U.S.
Department of Energy Office of the Biomass Program Peer Review held on November 15-16, 2007 in Baltimore, MD and the Integrated Biorefinery Platform Review held on August 13-15, 2007 in Golden, Colorado.

3. What is Program Collaboration and Service Integration (PCSI)? Centers for Disease Control (CDC) Podcasts 2009-12-07 This podcast provides a description of Program Collaboration and Service Integration (PCSI). Created: 12/7/2009 by National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention (NCHHSTP). Date Released: 12/7/2009.

4. 25 CFR 39.132 - Can a school integrate Language Development programs into its regular instructional program? Science.gov (United States) 2010-04-01 ... 25 Indians 1 2010-04-01 2010-04-01 false Can a school integrate Language Development programs into... Language Development Programs § 39.132 Can a school integrate Language Development programs into its regular instructional program? A school may offer Language Development programs to students as part of its...

5. Integrated initial training program for a CEGB operations engineer International Nuclear Information System (INIS) Tompsett, P.A. 1987-01-01 This paper considers the overall training programs undertaken by a newly appointed Operations Engineer at one of the Central Electricity Generating Board's (CEGB) Advanced Gas Cooled Reactor (AGR) nuclear power stations. The training program is designed to equip him with the skills and knowledge necessary for him to discharge his duties safely and effectively. In order to assist the learning process and achieve an integrated program, aspects of reactor technology and operation, initially the subject of theoretical presentations at the CEGB's Nuclear Power Training Center (NPTC), are reinforced by either simulation and/or practical experience on site.
In the later stages, plant-specific simulators, operated by trained tutors, are incorporated into the training program to provide the trainee with practical experience of plant operation. The trainee's performance is assessed throughout the program to provide feedback to the trainee, the trainers and station management.

6. Achieving High Reliability Operations Through Multi-Program Integration
Energy Technology Data Exchange (ETDEWEB)
Holly M. Ashley; Ronald K. Farris; Robert E. Richards
2009-04-01
Over the last 20 years the Idaho National Laboratory (INL) has adopted a number of operations and safety-related programs, each of which has periodically taken its turn in the limelight. As new programs have come along there has been natural competition for resources, focus and commitment. In the last few years, the INL has made real progress in integrating all these programs and is starting to realize important synergies. Contributing to this integration are both collaborative individuals and an emerging shared vision and goal of the INL fully maturing in its high reliability operations. This goal is so powerful because the concept of high reliability operations (and the resulting organizations) is a masterful amalgam and orchestrator of the best of all the participating programs (i.e. conduct of operations, behavior based safety, human performance, voluntary protection, quality assurance, and integrated safety management). This paper is a brief recounting of the lessons learned, thus far, at the INL in bringing previously competing programs into harmony under the goal (umbrella) of seeking to perform regularly as a high reliability organization. In addition to a brief diagram-illustrated historical review, the authors share the INL's primary successes (things already effectively stopped or started) and the gaps yet to be bridged.

7.
Steam-Generator Integrity Program/Steam-Generator Group Project
International Nuclear Information System (INIS)
1982-10-01
The Steam Generator Integrity Program (SGIP) is a comprehensive effort addressing issues of nondestructive test (NDT) reliability, inservice inspection (ISI) requirements, and tube plugging criteria for PWR steam generators. In addition, the program has interactive research tasks relating primary side decontamination, secondary side cleaning, and proposed repair techniques to nondestructive inspectability and primary system integrity. The program has acquired a service degraded PWR steam generator for research purposes. This past year a research facility, the Steam Generator Examination Facility (SGEF), specifically designed for nondestructive and destructive examination tasks of the SGIP, was completed. The Surry generator previously transported to the Hanford Reservation was then inserted into the SGEF. Nondestructive characterization of the generator from both primary and secondary sides has been initiated. Decontamination of the channelhead cold leg side was conducted. Radioactive field maps were established in the steam generator, at the generator surface and in the SGEF.

8. International Piping Integrity Research Group (IPIRG) Program. Final report
International Nuclear Information System (INIS)
Wilkowski, G.; Schmidt, R.; Scott, P.
1997-06-01
This is the final report of the International Piping Integrity Research Group (IPIRG) Program. The IPIRG Program was an international group program managed by the U.S. Nuclear Regulatory Commission and funded by a consortium of organizations from nine nations: Canada, France, Italy, Japan, Sweden, Switzerland, Taiwan, the United Kingdom, and the United States. The program objective was to develop data needed to verify engineering methods for assessing the integrity of circumferentially-cracked nuclear power plant piping.
The primary focus was an experimental task that investigated the behavior of circumferentially flawed piping systems subjected to high-rate loadings typical of seismic events. To accomplish these objectives a pipe system fabricated as an expansion loop with over 30 meters of 16-inch diameter pipe and five long radius elbows was constructed. Five dynamic, cyclic, flawed piping experiments were conducted using this facility. This report: (1) provides background information on leak-before-break and flaw evaluation procedures for piping, (2) summarizes technical results of the program, (3) gives a relatively detailed assessment of the results from the pipe fracture experiments and complementary analyses, and (4) summarizes advances in the state-of-the-art of pipe fracture technology resulting from the IPIRG program.

10. Human Research Program Integrated Research Plan. Revision A January 2009
Science.gov (United States)
2009-01-01
The Integrated Research Plan (IRP) describes the portfolio of Human Research Program (HRP) research and technology tasks. The IRP is the HRP strategic and tactical plan for research necessary to meet HRP requirements. The need to produce an IRP is established in HRP-47052, Human Research Program - Program Plan, and is under configuration management control of the Human Research Program Control Board (HRPCB). Crew health and performance is critical to successful human exploration beyond low Earth orbit. The Human Research Program (HRP) is essential to enabling extended periods of space exploration because it provides knowledge and tools to mitigate risks to human health and performance. Risks include physiological and behavioral effects from radiation and hypogravity environments, as well as unique challenges in medical support, human factors, and behavioral or psychological factors. The Human Research Program (HRP) delivers human health and performance countermeasures, knowledge, technologies and tools to enable safe, reliable, and productive human space exploration. Without HRP results, NASA will face unknown and unacceptable risks for mission success and post-mission crew health. This Integrated Research Plan (IRP) describes HRP's approach and research activities that are intended to address the needs of human space exploration, serve HRP customers, and how they are integrated to provide a risk mitigation tool.
The scope of the IRP is limited to the activities that can be conducted with the resources available to the HRP; it does not contain activities that would be performed if additional resources were available. The timescale of human space exploration is envisioned to take many decades. The IRP illustrates the program's research plan through the timescale of early lunar missions of extended duration.

11. Integrating the GalileoScope into Successful Outreach Programming
Science.gov (United States)
Michaud, Peter D.; Slater, S.; Goldstein, J.; Harvey, J.; Garcia, A.
2010-01-01
Since 2004, the Gemini Observatory’s week-long Journey Through the Universe (JTtU) program has successfully shared the excitement of scientific research with teachers, students and the public on Hawaii’s Big Island. Based on the national JTtU program started in 1999, the Hawai‘i version reaches an average of 7,000 students annually and each year features a different theme shared with a diverse set of learners. In 2010, the theme includes the integration of the GalileoScope, produced as a keystone project for the International Year of Astronomy. In preparation, a pilot teacher workshop (held in October 2009) introduced local island teachers to the GalileoScope and a 128-page educator’s activity resource book coordinated by the University of Wyoming. Response from this initial teacher’s workshop has been strong, and evaluations plus follow-up actions by participating teachers illustrate that the integration of the GalileoScope has been successful based upon this diverse sample. Integrating GalileoScopes into Chilean schools in 2010 is also underway at Gemini South. This program will solicit informal proposals from educators who wish to use the telescopes in classrooms, and a Spanish version of the teacher resource book is planned.
The authors conclude that integration of the GalileoScope into an existing outreach program is an effective way to keep content fresh, relevant and engaging for both educators and students. This initiative is funded by the Gemini Observatory outreach program. The Gemini Observatory is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (US), the Science and Technology Facilities Council (UK), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil), and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).

12. Stochastic programming problems with generalized integrated chance constraints
Czech Academy of Sciences Publication Activity Database
Branda, Martin
2012-01-01
Roč. 61, č. 8 (2012), s. 949-968. ISSN 0233-1934. R&D Projects: GA ČR GAP402/10/1610. Grant - others: SVV(CZ) 261315/2010. Institutional support: RVO:67985556. Keywords: chance constraints; integrated chance constraints; penalty functions; sample approximations; blending problem. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.707, year: 2012. http://library.utia.cas.cz/separaty/2012/E/branda-stochastic programming problems with generalized integrated.pdf

13.
Report of the Integrated Program Planning Activity for the DOE Fusion Energy Sciences Program
International Nuclear Information System (INIS)
None
2000-01-01
This report of the Integrated Program Planning Activity (IPPA) has been prepared in response to a recommendation by the Secretary of Energy Advisory Board that, "Given the complex nature of the fusion effort, an integrated program planning process is an absolute necessity." We, therefore, undertook this activity in order to integrate the various elements of the program, to improve communication and performance accountability across the program, and to show the inter-connectedness and inter-dependency of the diverse parts of the national fusion energy sciences program. This report is based on the September 1999 Fusion Energy Sciences Advisory Committee's (FESAC) report "Priorities and Balance within the Fusion Energy Sciences Program". In its December 5, 2000, letter to the Director of the Office of Science, the FESAC reaffirmed the validity of the September 1999 report and stated that the IPPA presents a framework and process to guide the achievement of the 5-year goals listed in the 1999 report. The National Research Council's (NRC) Fusion Assessment Committee draft final report "An Assessment of the Department of Energy's Office of Fusion Energy Sciences Program", reviewing the quality of the science in the program, was made available after the IPPA report had been completed. The IPPA report is, nevertheless, consistent with the recommendations in the NRC report. In addition to program goals and the related 5-year, 10-year, and 15-year objectives, this report elaborates on the scientific issues associated with each of these objectives. The report also makes clear the relationships among the various program elements, and cites these relationships as the reason why integrated program planning is essential.
In particular, while focusing on the science conducted by the program, the report addresses the important balances between the science and energy goals of the program, between the MFE and IFE approaches, and between the domestic and international aspects.

14. AN INTEGRATIVE GROUP PSYCHOTHERAPY PROGRAM FOR CHILDREN. THE WIZARDING SCHOOL
Directory of Open Access Journals (Sweden)
Oana Maria Popescu
2012-02-01
Full Text Available
One of the most important tendencies in child psychotherapy is the integration of various psychotherapeutic approaches and technical interventions belonging to different orientations. Based on the Harry Potter stories, the „Wizarding School” structured group therapy program is a 12-step integratively oriented program applicable in personal development, individual and group therapy for children aged 6 to 13 (at present being adapted for adult psychotherapy). The program takes place within a fairy tale, being therefore a type of informal hypnotic trance. The interventions are drawn from the lessons described in Harry Potter’s story at Hogwarts, based on the fundamental principles of child psychotherapy and including elements of play therapy, art therapy, hypnotherapy, cognitive-behavioural therapy, transactional analysis, supportive therapy, family therapy and person centred therapy. From a theoretical point of view the program is based on elements from a number of psychotherapeutic approaches, the main concept being that we need to create a therapeutic myth that is acceptable to a child. The program is not suitable for children with structural deficits, who have difficulties in making the difference between fantasy and reality.

15. Integrating student-focused career planning into undergraduate gerontology programs.
Science.gov (United States)
Manoogian, Margaret M; Cannon, Melissa L
2018-04-02
As our global older adult populations are increasing, university programs are well-positioned to produce an effective, gerontology-trained workforce (Morgan, 2012; Silverstein & Fitzgerald, 2017). A gerontology curriculum can comprehensively offer students an aligned career development track that encourages them to: (a) learn more about themselves as a foundation for negotiating career paths; (b) develop and refine career skills; (c) participate in experiential learning experiences; and (d) complete competency-focused opportunities. In this article, we discuss a programmatic effort to help undergraduate gerontology students integrate development-based career planning and decision-making into their academic programs and achieve postgraduation goals.

16. Implementation and integration of program packages NAMMU and HYPAC
International Nuclear Information System (INIS)
Nedbal, T.
1986-05-01
This work is prepared for the Swedish Power Inspectorate (SKI). The SKI has acquired, from the Atomic Energy Research Establishment (AERE) at Harwell, U.K., the computer model NAMMU for groundwater hydrology calculations. The code was first implemented on an AMDAHL 470, an IBM-compatible computer, and then modified in order to integrate it with HYPAC, which is a program package for pre- and post-processing finite element data, developed by KEMAKTA AB. This report describes the modifications done to both NAMMU and HYPAC, and the verification of the coupled program system NAMMU-HYPAC. (author)

17. An integrated approach to fire penetration seal program management
International Nuclear Information System (INIS)
Rispoli, R.D.
1996-01-01
This paper discusses the utilization of a PC-based program to facilitate the management of the Entergy Operations Arkansas Nuclear One (ANO) fire barrier penetration seal program.
The computer program was developed as part of a streamlining process to consolidate all aspects of the ANO Penetration Seal Program under one system. The program tracks historical information related to each seal such as maintenance activities, design modifications and evaluations. The program is integrated with approved penetration seal design details which have been substantiated by full scale fire tests. This control feature is intended to prevent the inadvertent utilization of an unacceptable penetration detail in a field application which may exceed the parameters tested. The system is also capable of controlling the scope of the periodic surveillance of penetration seals by randomly selecting the inspection population and generating associated inspection forms. Inputs to the data base are required throughout the modification and maintenance process to ensure configuration control and maintain accurate data base information. These inputs are verified and procedurally controlled by Fire Protection Engineering (FPE) personnel. The implementation of this system has resulted in significant cost savings and has minimized the allocation of resources necessary to ensure long term program viability.

18. Integration of the TNXYZ computer program inside the platform Salome
International Nuclear Information System (INIS)
Chaparro V, F. J.
2014-01-01
The present work shows the procedure carried out to integrate the code TNXYZ as a calculation tool at the graphical simulation platform Salome. The TNXYZ code proposes a numerical solution of the neutron transport equation, in several groups of energy, steady-state and three-dimensional geometry. In order to discretize the variables of the transport equation, the code uses the method of discrete ordinates for the angular variable, and a nodal method for the spatial dependence.
The Salome platform is a graphical environment designed for building, editing and simulating mechanical models, mainly focused on industry; unlike other software, it can integrate and control an external source code in order to form a complete scheme of pre- and post-processing of information. Before the integration into the Salome platform, the TNXYZ code was upgraded. TNXYZ was programmed in the 90s using a Fortran 77 compiler; for this reason the code was adapted to the characteristics of the current Fortran compilers; in addition, with the intention of extracting partial results over the process sequence, the original structure of the program underwent a modularization process, i.e. the main program was divided into sections where the code performs major operations. This procedure is controlled by the information module (YACS) on the Salome platform, and it could be useful for a subsequent coupling with thermal-hydraulics codes. Finally, with the help of the Monte Carlo code Serpent, several study cases were defined in order to check the process of integration; the verification consisted of comparing the results obtained with the code executed stand-alone against those obtained after it was modernized, integrated and controlled by the Salome platform. (Author)

19. The integrated approach to teaching programming in secondary school
Directory of Open Access Journals (Sweden)
Martynyuk A.A.
2018-02-01
Full Text Available
The article considers an integrated approach to teaching programming with the use of technologies of computer modeling and 3D graphics, allowing to improve the quality of education. It is shown that this method will allow you to systematize knowledge, improve the level of motivation through the inclusion of relevant technologies, develop skills of project activities, strengthen interdisciplinary connections, and promote the professional and personal self-determination of secondary school students.

20.
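The discrete-ordinates discretization mentioned in the TNXYZ record above can be illustrated in miniature. TNXYZ itself is three-dimensional, multigroup and nodal; the sketch below is only a hypothetical 1-D, one-group slab analogue with an S2 angular quadrature, diamond-difference sweeps and source iteration, written for illustration and not taken from TNXYZ:

```python
import math

def solve_slab(width=50.0, ncell=200, sig_t=1.0, sig_s=0.5, q=1.0,
               tol=1e-8, max_iter=500):
    """1-D, one-group discrete-ordinates (S2) solver: diamond-difference
    sweeps with source iteration, vacuum boundaries on both faces."""
    mus = (1.0 / math.sqrt(3.0), -1.0 / math.sqrt(3.0))  # S2 Gauss nodes
    wts = (1.0, 1.0)                                     # weights sum to 2
    dx = width / ncell
    phi = [0.0] * ncell                                  # scalar flux guess
    for _ in range(max_iter):
        phi_new = [0.0] * ncell
        for mu, w in zip(mus, wts):
            psi = 0.0                                    # vacuum incoming flux
            cells = range(ncell) if mu > 0 else range(ncell - 1, -1, -1)
            for k in cells:
                src = 0.5 * (sig_s * phi[k] + q)         # isotropic source
                c = abs(mu) / dx
                # diamond-difference relation for the outgoing edge flux
                psi_out = (src + psi * (c - 0.5 * sig_t)) / (c + 0.5 * sig_t)
                phi_new[k] += w * 0.5 * (psi + psi_out)  # cell-average flux
                psi = psi_out
        converged = max(abs(x - y) for x, y in zip(phi, phi_new)) < tol
        phi = phi_new
        if converged:
            break
    return phi

phi = solve_slab()
center = phi[len(phi) // 2]  # deep inside: ~ q / (sig_t - sig_s) = 2.0
```

Deep inside an optically thick slab the scalar flux approaches the infinite-medium value q/(Σt − Σs), which gives a quick sanity check on the sweep; the full 3-D multigroup nodal machinery of TNXYZ is, of course, far beyond this sketch.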
Program integration of predictive maintenance with reliability centered maintenance
International Nuclear Information System (INIS)
Strong, D.K. Jr; Wray, D.M.
1990-01-01
This paper addresses improving the safety and reliability of power plants in a cost-effective manner by integrating the recently developed reliability centered maintenance techniques with the traditional predictive maintenance techniques of nuclear power plants. The topics of the paper include a description of reliability centered maintenance (RCM), enhancing RCM with predictive maintenance, predictive maintenance programs, condition monitoring techniques, performance test techniques, the mid-Atlantic Reliability Centered Maintenance Users Group, test guides and the benefits of shared guide development.

1. Program NICOLET to integrate energy loss in superconducting coils
International Nuclear Information System (INIS)
Vogel, H.F.
1978-08-01
A voltage pickup coil, inductively coupled to the magnetic field of the superconducting coil under test, is connected so its output may be compared with the terminal voltage of the coil under test. The integrated voltage difference is indicative of the resistive volt-seconds. When multiplied with the main coil current, the volt-seconds yield the loss. In other words, a hysteresis loop is obtained if the integrated voltage difference phi = ∫ΔVdt is plotted as a function of the coil current, i. First, time functions of the two signals phi(t) and i(t) are recorded on a dual-trace digital oscilloscope, and these signals are then recorded on magnetic tape. On a CDC-6600, the recorded information is decoded and plotted, and the hysteresis loops are integrated by the set of FORTRAN programs NICOLET described in this report. 4 figures.

2. Integrating human resources and program-planning strategies.
Science.gov (United States)
Smith, J E
1989-06-01
The integration of human resources management (HRM) strategies with long-term program-planning strategies in hospital pharmacy departments is described. HRM is a behaviorally based, comprehensive strategy for the effective management and use of people that seeks to achieve coordination and integration with overall planning strategies and other managerial functions. It encompasses forecasting of staffing requirements; determining work-related factors that are strong "motivators" and thus contribute to employee productivity and job satisfaction; conducting a departmental personnel and skills inventory; employee career planning and development, including training and education programs; strategies for promotion and succession, including routes of advancement that provide alternatives to the managerial route; and recruitment and selection of new personnel to meet changing departmental needs. Increased competitiveness among hospitals and a shortage of pharmacists make it imperative that hospital pharmacy managers create strategies to attract, develop, and retain the right individuals to enable the department--and the hospital as a whole--to grow and change in response to the changing health-care environment in the United States. Pharmacy managers would be greatly aided in this mission by the establishment of a well-defined, national strategic plan for pharmacy programs and services that includes an analysis of what education and training are necessary for their successful accomplishment. Creation of links between overall program objectives and people-planning strategies will aid hospital pharmacy departments in maximizing the long-term effectiveness of their practice.

3.
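The loss calculation described in the NICOLET record above reduces to computing φ(t) = ∫ΔV dt and then the area of the hysteresis loop, since the loss energy is E = ∫ i·ΔV dt = ∮ i dφ. The following is a minimal, hypothetical re-implementation of that arithmetic with the trapezoidal rule (the original was a set of FORTRAN programs on a CDC-6600; nothing here is taken from NICOLET itself, and all the numbers in the check are made up):

```python
import math

def hysteresis_loss(t, delta_v, i):
    """Loss energy [J]: E = ∮ i dφ, with φ(t) = ∫ ΔV dt accumulated by
    the trapezoidal rule from sampled ΔV(t) [V] and coil current i(t) [A]."""
    loss = 0.0
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        dphi = 0.5 * (delta_v[k + 1] + delta_v[k]) * dt  # flux increment dφ
        loss += 0.5 * (i[k + 1] + i[k]) * dphi           # contribution i dφ
    return loss

# Synthetic check: if ΔV = R·i (a purely resistive voltage difference),
# the loop integral equals R·∫ i² dt.
N = 2000
t = [k / N for k in range(N + 1)]                # one second, one full cycle
i = [100.0 * math.sin(2.0 * math.pi * tk) for tk in t]
R = 1e-4                                         # assumed resistance [ohm]
E = hysteresis_loss(t, [R * ik for ik in i], i)  # analytic value: 0.5 J
```

Plotting φ against i for measured data would reproduce the hysteresis loop described in the abstract; the loop integral above is its enclosed area.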
The Glory Program: Global Science from a Unique Spacecraft Integration
Science.gov (United States)
Bajpayee, Jaya; Durham, Darcie; Ichkawich, Thomas
2006-01-01
The Glory program is an Earth and Solar science mission designed to broaden science community knowledge of the environment. The causes and effects of global warming have become a concern in recent years and Glory aims to contribute to the knowledge base of the science community. Glory is designed for two functions: one is solar viewing to monitor the total solar irradiance and the other is observing the Earth's atmosphere for aerosol composition. The former is done with an active cavity radiometer, while the latter is accomplished with an aerosol polarimeter sensor to discern atmospheric particles. The Glory program is managed by NASA Goddard Space Flight Center (GSFC) with Orbital Sciences in Dulles, VA as the prime contractor for the spacecraft bus, mission operations, and ground system. This paper will describe some of the more unique features of the Glory program including the integration and testing of the satellite and instruments as well as the science data processing. The spacecraft integration and test approach requires extensive analysis and additional planning to ensure existing components are successfully functioning with the new Glory components. The science mission data analysis requires development of mission-unique processing systems and algorithms. Science data analysis and distribution will utilize our national assets at the Goddard Institute for Space Studies (GISS) and the University of Colorado's Laboratory for Atmospheric and Space Physics (LASP). The satellite was originally designed and built for the Vegetation Canopy Lidar (VCL) mission, which was terminated in the middle of integration and testing due to payload development issues.
The bus was then placed in secure storage in 2001 and removed from an environmentally controlled container in late 2003 to be refurbished to meet the Glory program requirements. Functional testing of all the components was done as a system at the start of the program, very different from a traditional program.

4. Program integration on the Civilian Radioactive Waste Management System
International Nuclear Information System (INIS)
Trebules, V.B.
1995-01-01
The recent development and implementation of a revised Program Approach for the Civilian Radioactive Waste Management System (CRWMS) was accomplished in response to significant changes in the environment in which the program was being executed. The lack of an interim storage site, growing costs and schedule delays to accomplish the full Yucca Mountain site characterization plan, and the development and incorporation of a multi-purpose (storage, transport, and disposal) canister (MPC) into the CRWMS required a reexamination of Program plans and priorities. Dr. Daniel A. Dreyfus, the Director of the Office of Civilian Radioactive Waste Management (OCRWM), established top-level schedule targets and cost goals and commissioned a Program-wide task force of DOE and contractor personnel to identify and evaluate alternatives to meet them. The evaluation of the suitability of the Yucca Mountain site by 1998 and the repository license application date of 2001 were maintained, and a target date of January 1998 for MPC availability was established. An increased multi-year funding profile was baselined and agreed to by Congress. A $1.3 billion reduction in Yucca Mountain site characterization costs was mandated to hold the cost to $5 billion. The replanning process superseded all previous budget allocations and focused on program requirements and their relative priorities within the cost profiles. This paper discusses the process for defining alternative scenarios to achieve the top-level program goals in an integrated fashion.

5.
Faculty perceptions of the integration of SAP in academic programs
Directory of Open Access Journals (Sweden)
Sam Khoury
2012-08-01
Full Text Available
In order to prepare students for the workforce, academic programs incorporate a variety of tools that students are likely to use in their future careers. One of these tools employed by business and technology programs is the integration of live software applications such as SAP through the SAP University Alliance (SAP UA) program. Since the SAP UA program has been around for only about 10 years and the available literature on the topic is limited, research is needed to determine the strengths and weaknesses of the SAP UA program. A collaborative study of SAP UA faculty perceptions of their SAP UAs was conducted in the fall of 2011. Of the faculty invited to participate in the study, 31% completed the online survey. The results indicate that most faculty experienced difficulty implementing SAP into their programs and report that a need exists for more standardized curriculum and training, while a large percentage indicated that they are receiving the support they need from their schools and SAP.

6. Steam generator tube integrity program: Phase II, Final report
Energy Technology Data Exchange (ETDEWEB)
Kurtz, R.J.; Bickford, R.L.; Clark, R.A.; Morris, C.J.; Simonen, F.A.; Wheeler, K.R.
1988-08-01
The Steam Generator Tube Integrity Program (SGTIP) was a three phase program conducted for the US Nuclear Regulatory Commission (NRC) by Pacific Northwest Laboratory (PNL). The first phase involved burst and collapse testing of typical steam generator tubing with machined defects. The second phase of the SGTIP continued the integrity testing work of Phase I, but tube specimens were degraded by chemical means rather than machining methods.
The third phase of the program used a removed-from-service steam generator as a test bed for investigating the reliability and effectiveness of in-service nondestructive eddy-current inspection methods and as a source of service degraded tubes for validating the Phase I and Phase II data on tube integrity. This report describes the results of Phase II of the SGTIP. The object of this effort included burst and collapse testing of chemically defected pressurized water reactor (PWR) steam generator tubing to validate empirical equations of remaining tube integrity developed during Phase I. Three types of defect geometries were investigated: stress corrosion cracking (SCC), uniform thinning and elliptical wastage. In addition, a review of the publicly available leak rate data for steam generator tubes with axial and circumferential SCC and a comparison with an analytical leak rate model is presented. Lastly, nondestructive eddy-current (EC) measurements to determine accuracy of defect depth sizing using conventional and alternate standards is described. To supplement the laboratory EC data and obtain an estimate of EC capability to detect and size SCC, a mini-round robin test utilizing several firms that routinely perform in-service inspections was conducted.

8. Factors Influencing Learning Environments in an Integrated Experiential Program
Science.gov (United States)
Koci, Peter
The research conducted for this dissertation examined the learning environment of a specific high school program that delivered the explicit curriculum through an integrated experiential manner, which utilized field and outdoor experiences. The program ran over one semester (five months) and it integrated the grade 10 British Columbian curriculum in five subjects. A mixed methods approach was employed to identify the students' perceptions and provide richer descriptions of their experiences related to their unique learning environment.
Quantitative instruments were used to assess changes in students' perspectives of their learning environment, as well as other supporting factors including students' mindfulness and behaviours towards the environment. Qualitative data collection included observations, open-ended questions, and impromptu interviews with the teacher. The qualitative data describe the factors and processes that influenced the learning environment and give a richer, deeper interpretation which complements the quantitative findings. The research results showed positive scores on all the quantitative measures conducted, and the qualitative data provided further insight into descriptions of learning environment constructs that the students perceived as most important. A major finding was that the group cohesion measure was perceived by students as the most important attribute of their preferred learning environment. A flow chart was developed to help the researcher conceptualize how the learning environment, learning process, and outcomes relate to one another in the studied program. This research attempts to explain, through the consideration of this case study, how learning environments can influence behavioural change and how an interconnectedness among several factors in the learning process is influenced by the type of learning environment facilitated. Considerably more research is needed in this area to understand fully the complexity learning

9. Integrating Retired Registered Nurses Into a New Graduate Orientation Program.

Science.gov (United States)

Baldwin, Kathleen M; Black, Denice L; Normand, Lorrie K; Bonds, Patricia; Townley, Melissa

2016-01-01

The project goal was to decrease new graduate nurse (NGN) attrition during the first year of employment by improving communication skills and providing additional mentoring for NGNs employed in a community hospital located in a rural area. All NGNs participate in the Versant Residency Program.
Even with this standardized residency program, exit interviews of NGNs who resigned during their first year of employment revealed 2 major issues: communication problems with patients and staff, and a perceived lack of support/mentoring from unit staff. A clinical nurse specialist-led nursing team developed an innovative program integrating retired nurses, Volunteer Nurse Ambassadors (VNAs), into the Versant Residency Program to address both of those issues. All NGNs mentored by a retired nurse remain employed in the hospital (100% retention). Before the VNA program, the retention rate was 37.5%. Both the NGNs and VNAs saw value in their mentor-mentee relationship. There have been no critical incidents or failure-to-rescue events involving NGNs mentored by a VNA. Use of VNAs to support NGNs as they adjust to the staff nurse role can prevent attrition during their first year of nursing practice by providing additional support to the NGN.

10. INTEGRATION OPPORTUNITIES OF MIGRANTS, WITH ESPECIAL REGARDS TO SENSITIZATION PROGRAMS

Directory of Open Access Journals (Sweden)

Krisztina DAJNOKI

2017-06-01

Full Text Available As a result of the migration wave appearing in summer 2015, the issue of immigrant integration has more often become conspicuous. Although a significant decline has been recorded in the number of immigrants, social-economic-labor market integration is still a challenge for experts and a task to be resolved. In our opinion, the key to the success of migration strategies and integration-aimed programs depends on the attitude and awareness of society (public opinion and, on the organizational level, of the manager and future colleagues, as well as on the organizational culture and the approach of a proper human resource expert.
Besides adequate information, the recognition of international ‘best practices’ and the adaptation of operational diversity management, one of the possible methods of facilitating integration is the utilization of sensitization trainings. The article introduces partial results of a questionnaire survey involving 220 employees with respect to attributes associated with migrants, emphasizing the peculiarity and significance of sensitization trainings.

11. Analysis of integrated plant upgrading/life extension programs

International Nuclear Information System (INIS)

McCutchan, D.A.; Massie, H.W. Jr.; McFetridge, R.H.

1988-01-01

A present-worth generating cost model has been developed and used to evaluate the economic value of integrated plant upgrading/life extension projects in nuclear power plants. This paper shows that integrated plant upgrading programs can be developed in which a mix of near-term availability, power rating, and heat rate improvements can be obtained in combination with life extension. All significant benefits and costs are evaluated from the viewpoint of the utility, as measured in discounted revenue requirement differentials between alternative plans which are equivalent in system generating capacity. The near-term upgrading benefits are shown to enhance the benefit picture substantially. In some cases the net benefit is positive, even if the actual life extension proves to be less than expected.

12. Critical Care Organizations: Building and Integrating Academic Programs.

Science.gov (United States)

Moore, Jason E; Oropello, John M; Stoltzfus, Daniel; Masur, Henry; Coopersmith, Craig M; Nates, Joseph; Doig, Christopher; Christman, John; Hite, R Duncan; Angus, Derek C; Pastores, Stephen M; Kvetan, Vladimir

2018-04-01

13. In situ remediation integrated program: Success through teamwork

International Nuclear Information System (INIS)

Peterson, M.E.
1994-08-01

The In Situ Remediation Integrated Program (ISR IP), managed under the US Department of Energy's (DOE) Office of Technology Development, focuses research and development efforts on the in-place treatment of contaminated environmental media, such as soil and groundwater, and the containment of contaminants to prevent the contaminants from spreading through the environment. As described here, specific ISR IP projects are advancing the application of in situ technologies to the demonstration point, providing developed technologies to customers within DOE. The ISR IP has also taken a lead role in assessing and supporting innovative technologies that may have application to DOE.

14. The safety basis of the integral fast reactor program

International Nuclear Information System (INIS)

Pedersen, D.R.; Seidel, B.R.

1990-01-01

The Integral Fast Reactor (IFR) and metallic fuel have emerged as the US Department of Energy reference reactor concept and fuel system for the development of an advanced liquid-metal reactor. This article addresses the basic elements of the IFR reactor concept and focuses on the safety advances achieved by the IFR Program in the areas of (1) fuel performance, (2) superior local faults tolerance, (3) transient fuel performance, (4) fuel-failure mechanisms, (5) performance in anticipated transients without scram, (6) core-melt mitigation, and (7) actinide recycle.

15. Nuclear methods - an integral part of the NBS certification program

International Nuclear Information System (INIS)

Gills, T.E.

1984-01-01

Within the past twenty years, new techniques and methods have emerged in response to new technologies that are based upon the performance of high-purity and well-characterized materials. The National Bureau of Standards, through its Standard Reference Materials (SRM's) Program, provides standards in the form of many of these materials to ensure accuracy and the compatibility of measurements throughout the US and the world.
These standards, defined by the National Bureau of Standards as Standard Reference Materials (SRMs), are developed by using state-of-the-art methods and procedures for both preparation and analysis. Nuclear methods (activation analysis) constitute an integral part of that analysis process.

16. Integrating Professional Development into STEM Graduate Programs: Student-Centered Programs for Career Preparation

Science.gov (United States)

Lautz, L.; McCay, D.; Driscoll, C. T.; Glas, R. L.; Gutchess, K. M.; Johnson, A.; Millard, G.

2017-12-01

17. BWR Full Integral Simulation Test (FIST) program: facility description report

International Nuclear Information System (INIS)

Stephens, A.G.

1984-09-01

A new boiling water reactor safety test facility (FIST, Full Integral Simulation Test) is described. It will be used to investigate small breaks and operational transients and to tie results from such tests to earlier large-break test results determined in the TLTA. The new facility's full height and prototypical components constitute a major scaling improvement over earlier test facilities. A heated feedwater system, permitting steady-state operation, and a large increase in the number of measurements are other significant improvements. The program background is outlined and program objectives defined. The design basis is presented together with a detailed, complete description of the facility and measurements to be made. An extensive component scaling analysis and prediction of performance are presented.

18. Nuclear Application Programs Development and Integration for a Simulator

Energy Technology Data Exchange (ETDEWEB)

Park, Hyun-Joon; Lee, Tae-Woo [KEPCO Engineering and Construction Co., Deajeon (Korea, Republic of)]

2016-10-15

KEPCO E and C participated in the NAPS (Nuclear Application Programs) development project for the BNPP (Barakah Nuclear Power Plant) simulator.
The 3KEY MASTER™ platform was adopted for this project; it is comprehensive simulation platform software developed by WSC (Western Services Corporation) for the development and control of simulation software. The NAPS based on the actual BNPP project was modified in order to meet specific requirements for nuclear power plant simulators. Considerations regarding software design for the BNPP simulator and interfaces between the 3KM platform and application programs are discussed. Repeatability is one of the functional requirements for nuclear power plant simulators. In order to migrate software from actual plants to simulators, software functions for storing and retrieving plant conditions and program variables should be implemented. In addition, software structures need to be redesigned to meet the repeatability requirement, and source codes developed for actual plants have to be optimized to reflect the simulator's characteristics as well. Synchronization is an important consideration for integrating external application programs into the 3KM simulator.

19. Integrated research training program of excellence in radiochemistry

Energy Technology Data Exchange (ETDEWEB)

Lapi, Suzanne [Washington Univ., St. Louis, MO (United States)]

2015-09-18

20. Integrated Risk Management Within NASA Programs/Projects

Science.gov (United States)

2004-01-01

As NASA Project Risk Management activities continue to evolve, the need to successfully integrate risk management processes across the life cycle, between functional disciplines, stakeholders, various management policies, and within cost, schedule and performance requirements/constraints becomes more evident and important. Today's programs and projects are complex undertakings that include a myriad of processes, tools, techniques, management arrangements and other variables, all of which must function together in order to achieve mission success.
The perception and impact of risk may vary significantly among stakeholders and may influence decisions that have unintended consequences on the project during a future phase of the life cycle. In these cases, risks may be unintentionally and/or arbitrarily transferred to others without the benefit of a comprehensive systemic risk assessment. Integrating risk across people, processes, and project requirements/constraints serves to enhance decisions, strengthen communication pathways, and reinforce the ability of the project team to identify and manage risks across the broad spectrum of project management responsibilities. The ability to identify risks in all areas of project management increases the likelihood that a project will identify significant issues before they become problems and allows projects to make effective and efficient use of shrinking resources. A total-team integrated risk effort, a disciplined and rigorous process, and an understanding of project requirements/constraints together provide the opportunity for more effective risk management. Applying an integrated approach to risk management makes it possible to do a better job of balancing safety, cost, schedule, operational performance and other elements of risk. This paper will examine how people, processes, and project requirements/constraints can be integrated across the project lifecycle for better risk management and ultimately improve the

1. Benefits and costs of integrating technology into undergraduate nursing programs.

Science.gov (United States)

Glasgow, Mary Ellen Smith; Cornelius, Frances H

2005-01-01

2. Optimising an integrated crop-livestock farm using risk programming

Directory of Open Access Journals (Sweden)

SE Visagie

2004-06-01

Full Text Available Numerous studies have analysed farm planning decisions focusing on producer risk preferences. Few studies have focussed on farm planning decisions in an integrated crop-livestock farm context.
Income variability and means of managing risk continue to receive much attention in farm planning research. Different risk programming models have attempted to focus on minimising the income variability of farm activities. This study attempts to identify the optimal mix of crops and the number of animals the farm needs to keep in the presence of crop production risk for a range of risk levels. A mixed integer linear programming model was developed to model the decision environment faced by an integrated crop-livestock farmer. The deviation of income from the expected value was used as a measure of risk. A case study is presented with representative data from a farm in the Swartland area. An investigation of the results of the model under different constraints shows that, in general, strategies that depend on crop rotation principles are preferred to strategies that follow mono-crop production practices.

3. Two Inseparable Facets of Technology Integration Programs: Technology and Theoretical Framework

Science.gov (United States)

Demir, Servet

2011-01-01

This paper considers the process of program development aiming at technology integration for teachers. For this consideration, the paper focused on an integration program which was recently developed as part of a larger project. The participants of this program were 45 in-service teachers. The program continued for four weeks and the conduct of the…

4. Process improvement program evolves into compliance program at an integrated delivery system.

Science.gov (United States)

Tyk, R C; Hylton, P G

1998-09-01

An integrated delivery system discovered questionable practices when it undertook a process-improvement initiative for its revenue-to-cash cycle. These discoveries served as a wake-up call to the organization that it needed to develop a comprehensive corporate compliance program. The organization engaged legal counsel to help it establish such a program.
A corporate compliance officer was hired, and a compliance committee was set up. They worked with counsel to develop the structure and substance of the program and establish a corporate code of conduct that became a part of the organization's policies and procedures. Teams were formed in various areas of the organization to review compliance-related activities and suggest improvements. Clinical and nonclinical staff attended mandatory educational sessions about the program. By approaching compliance systematically, the organization has put itself in an excellent position to avoid fraudulent and abusive activities, and the government scrutiny they invite.

5. Integration of the program TNXYZ in the platform SALOME

International Nuclear Information System (INIS)

Chaparro V, F. J.; Silva A, L.; Del Valle G, E.; Gomez T, A. M.; Vargas E, S.

2013-10-01

This work presents the procedure carried out to integrate the TNXYZ code as a processing tool into the SALOME graphic simulation platform. The TNXYZ code solves the steady-state neutron transport equation for several energy groups, discretizing the angular variable by the discrete ordinates method and the spatial variable by nodal methods. The SALOME platform is a graphical environment designed for the construction, editing and simulation of mechanical models aimed at industry; unlike other software, it allows external source codes to be integrated into the environment to form a complete scheme of execution, supervision, and pre- and post-processing of information.
The TNXYZ code was written in the 1990s for a Fortran compiler of that time, so to be used at present it had to be updated to the characteristics of current compilers. In addition, the original scheme underwent a modularization process; that is, the main program was divided into sections where the code carries out important operations, with the intention of making data extraction along the processing sequence more flexible, which can be useful in a later development of coupling. Finally, to verify the integration, a BWR fuel assembly was modeled, as well as a control cell. The cross sections were obtained with the Serpent Monte Carlo code. Some results obtained with Serpent were used to verify the code and to begin its validation, and an acceptable agreement in the infinite multiplication factor was obtained. The validation process will be extended, and it is planned to present it in a future work. This work is part of the development of the research group formed between the Escuela Superior de Fisica y Matematicas del Instituto Politecnico Nacional (IPN) and the Instituto Nacional de Investigaciones Nucleares (ININ), in which a Mexican nuclear reactor simulation platform is being developed. (Author)

6. Strategic planning of an integrated program for state oversight agreements

International Nuclear Information System (INIS)

Walzer, A.E.; Cothron, T.K.

1991-01-01

Among the barrage of agreements faced by federal facilities are the State Oversight Agreements (known as Agreements in Principle in many states). These agreements between the Department of Energy (DOE) and the states fund the states to conduct independent environmental monitoring and oversight, which requires plans, studies, inventories, models, and reports from DOE and its management and operating contractors. Many states have signed such agreements, including Tennessee, Kentucky, Washington, Idaho, Colorado, California, and Florida.
This type of oversight agreement originated in Colorado as a result of environmental concerns at the Rocky Flats Plant. The 5-year State Oversight Agreements for Tennessee and Kentucky became effective on May 13, 1991, and fund these states nearly $21 million and $7 million, respectively. Implementation of these "comprehensive and integrated" agreements is particularly complex in Tennessee, where the DOE Oak Ridge Reservation houses three installations with distinctly different missions. The program development and strategic planning required for coordinating and integrating a program of this magnitude is discussed. Included are the organizational structure and interfaces required to define and coordinate program elements across plants and to effectively negotiate scope and schedules with the state. The planned Program Management Plan, which will contain implementation and procedural guidelines, and the management control system for detailed tracking of activities and costs are outlined. Additionally, issues inherent in the nature of the agreements and in implementation of a program of this magnitude are discussed. Finally, a comparison of the agreements for Tennessee, Kentucky, Colorado, and Idaho is made to gain a better understanding of the similarities and differences in State Oversight Agreements to aid in implementation of these agreements.

7. EM-54 Technology Development In Situ Remediation Integrated Program

International Nuclear Information System (INIS)

1993-08-01

The Department of Energy (DOE) established the Office of Technology Development (EM-50) as an element of Environmental Restoration and Waste Management (EM) in November 1989. EM manages remediation of all DOE sites as well as wastes from current operations. The goal of the EM program is to minimize risks to human health, safety and the environment, and to bring all DOE sites into compliance with Federal, state, and local regulations by 2019.
EM-50 is charged with developing new technologies that are safer, more effective and less expensive than current methods. The In Situ Remediation Integrated Program (the subject of this report) is part of EM-541, the Environmental Restoration Research and Development Division of EM-54. The In Situ Remediation Integrated Program (ISR IP) was instituted out of recognition that in situ remediation could fulfill three important criteria: significant cost reduction of cleanup by eliminating or minimizing excavation, transportation, and disposal of wastes; reduced health impacts on workers and the public by minimizing exposure to wastes during excavation and processing; and remediation of inaccessible sites, including deep subsurfaces and areas in, under, and around buildings. Buried waste, contaminated soils and groundwater, and containerized wastes are all candidates for in situ remediation. Contaminants include radioactive wastes, volatile and non-volatile organics, heavy metals, nitrates, and explosive materials. The ISR IP intends to facilitate development of in situ remediation technologies for hazardous, radioactive, and mixed wastes in soils, groundwater, and storage tanks. Near-term focus is on containment of the wastes, with treatment receiving greater effort in future years.

8. Integrative Reiki for cancer patients: a program evaluation.

Science.gov (United States)

Fleisher, Kimberly A; Mackenzie, Elizabeth R; Frankel, Eitan S; Seluzicki, Christina; Casarett, David; Mao, Jun J

2014-01-01

This mixed methods study sought to evaluate the outcomes of an integrative Reiki volunteer program in an academic medical oncology center setting. We used de-identified program evaluation data to perform both quantitative and qualitative analyses of participants' experiences of Reiki sessions. The quantitative data were collected pre- and postsession using a modified version of the distress thermometer.
The pre- and postsession data from the distress assessment were analyzed using a paired Student's t test. The qualitative data were derived from written responses to open-ended questions asked after each Reiki session and were analyzed for key words and recurring themes. Of the 213 pre-post surveys of first-time sessions in the evaluation period, we observed a more than 50% decrease in self-reported distress (from 3.80 to 1.55), anxiety (from 4.05 to 1.44), depression (from 2.54 to 1.10), pain (from 2.58 to 1.21), and fatigue (from 4.80 to 2.30), with P values indicating statistical significance. Regarding participants' experience with Reiki, we found 176 (82.6%) liked the Reiki session, 176 (82.6%) found the Reiki session helpful, 157 (73.7%) plan to continue using Reiki, and 175 (82.2%) would recommend Reiki to others. Qualitative analyses found that individuals reported that Reiki induced relaxation and enhanced spiritual well-being. An integrative Reiki volunteer program shows promise as a component of supportive care for cancer patients. More research is needed to evaluate and understand the impact that Reiki may have for patients, caregivers, and staff whose lives have been affected by cancer.

9. Brazilian Air Force aircraft structural integrity program: An overview

Directory of Open Access Journals (Sweden)

Alberto W. S. Mello Junior

2009-01-01

Full Text Available This paper presents an overview of the activities developed by the Structural Integrity Group at the Institute of Aeronautics and Space - IAE, Brazil, as well as the status of ongoing work related to the life extension program for aircraft operated by the Brazilian Air Force (BAF). The first BAF-operated airplane to undergo a DTA-based life extension was the F-5 fighter, in the mid 1990s. From 1998 to 2001, BAF worked on a life extension project for the BAF AT-26 Xavante trainer. All analysis and tests were performed at IAE.
The fatigue critical locations (FCLs) were presumed based upon structural design and maintenance data and also from exchange of technical information with other users of the airplane around the world. Following that work, BAF started in 2002 the extension of the operational life of the BAF T-25 “Universal”. The T-25 is the basic training airplane used by AFA, the Brazilian Air Force Academy. This airplane was also designed under the “safe-life” concept. As the T-25 fleet approached its service life limit, the Brazilian Air Force was questioning whether it could be kept in flight safely. The answer came through an extensive Damage Tolerance Analysis (DTA) program, briefly described in this paper. The current work on aircraft structural integrity is being performed for the BAF F-5 E/F that underwent an avionics and weapons system upgrade. Along with the increase in weight, new configurations and mission profiles were established. Again, a DTA program was proposed in order to establish the reliability of the upgraded F-5 fleet. As a result of all the work described, the BAF has not reported any accident due to structural failure on aircraft submitted to Damage Tolerance Analysis.

10. Integrating New Technologies and Existing Tools to Promote Programming Learning

Directory of Open Access Journals (Sweden)

Álvaro Santos

2010-04-01

Full Text Available In recent years, many tools have been proposed to reduce programming learning difficulties felt by many students. Our group has contributed to this effort through the development of several tools, such as VIP, SICAS, OOP-Anim, SICAS-COL and H-SICAS. Even though we had some positive results, the utilization of these tools doesn’t seem to significantly reduce weaker students’ difficulties. These students need stronger support to motivate them to get engaged in learning activities, inside and outside the classroom. Nowadays, many technologies are available to create contexts that may help to accomplish this goal.
We consider that a promising path goes through the integration of solutions. In this paper we analyze the features, strengths and weaknesses of the tools developed by our group. Based on these considerations we present a new environment, integrating different types of pedagogical approaches, resources, tools and technologies for programming learning support. With this environment, currently under development, it will be possible to review contents and lessons, based on video and screen captures. The support for collaborative tasks is another key point to improve and stimulate different models of teamwork. The platform will also allow the creation of various alternative models (learning objects) for the same subject, enabling personalized learning paths adapted to each student's knowledge level, needs and preferred learning styles. The learning sequences will work as a study organizer, following a suitable taxonomy, according to each student's cognitive skills. Although the main goal of this environment is to support students with more difficulties, it will provide a set of resources supporting the learning of more advanced topics. Software engineering techniques and representations, object orientation and event programming are features that will be available in order to promote the learning progress of students.

11. [Educative programs based on self-management: an integrative review].

Science.gov (United States)

Nascimento, Luciana da Silva; de Gutierrez, Maria Gaby Rivero; De Domenico, Edvane Birelo Lopes

2010-06-01

The objective was to identify definitions and/or explanations of the term self-management in educative programs that aim at its development. The authors also aimed to describe the educative plans and results of the educative programs analyzed. The methodology used was an integrative review, with 15 published articles (2002 to 2007).
The inclusion criteria were: the occurrence of the term self-management; the existence of an educative program for the development of self-management; and relation to the area of adult health. Self-management means the improvement or acquisition of abilities to solve problems in the biological, social and affective scopes. The review pointed to different educational methodologies. However, it also showed the predominance of traditional methods, with conceptual contents of a physiopathological nature. The learning was evaluated as favorable, with caveats regarding application in different populations and contexts and regarding the increased costs of the educative intervention. It was concluded that research has evidenced the importance of education for self-management, but is weakened by not relating the biopsychosocial demands of the chronic patient and by not describing in detail the teaching and evaluation methodologies employed.

12. Integrated Data Analysis (IDCA) Program - PETN Class 4 Standard

Energy Technology Data Exchange (ETDEWEB)

Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Phillips, Jason J. [Sandia National Lab.
(SNL-NM), Albuquerque, NM (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

2012-08-01

The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of PETN Class 4. The PETN was found to have: 1) an impact sensitivity (DH50) range of 6 to 12 cm, 2) a BAM friction sensitivity (F50) range of 7 to 11 kg and a TIL (0/10) of 3.7 to 7.2 kg, 3) an ABL friction sensitivity threshold of 5 or less psig at 8 fps, 4) an ABL ESD sensitivity threshold of 0.031 to 0.326 J/g, and 5) a thermal sensitivity comprising an endothermic feature with Tmin = ~141 °C and an exothermic feature with Tmax = ~205 °C.

13. Integrated Data Collection Analysis (IDCA) Program — Ammonium Nitrate

Energy Technology Data Exchange (ETDEWEB)

Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Phillips, Jason J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelley, Timothy J. [Bureau of Alcohol, Tobacco and Firearms, Redstone Arsenal, AL (United States); Reyes, Jose A.
[Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2013-05-17 The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of ammonium nitrate (AN). AN was tested, in most cases, both as received from the manufacturer and dried/sieved. The participants found the AN to be: 1) insensitive in Type 12A impact testing (although with a wide range of values), 2) completely insensitive in BAM friction testing, 3) less sensitive than the RDX standard in ABL friction testing, 4) less sensitive than RDX in ABL ESD testing, and 5) less sensitive than RDX and PETN in DSC thermal analyses. 14. Characterization, Monitoring, and Sensor Technology Integrated Program (CMST-IP) International Nuclear Information System (INIS) 1994-04-01 The Characterization, Monitoring, and Sensor Technology Integrated Program seeks to deliver needed technologies, timely and cost-effectively, to the Office of Waste Management (EM-30), the Office of Environmental Restoration (EM-40), and the Office of Facility Transition and Management (EM-60). The scope of characterization, monitoring, and sensor technology needs required by those organizations encompasses: (1) initial location and characterization of wastes and waste environments - prior to treatment; (2) monitoring of waste retrieval, remediation and treatment processes; (3) characterization of the composition of final waste treatment forms to evaluate the performance of waste treatment processes; and (4) site closure and compliance monitoring.
Wherever possible, the CMST-IP fosters technology transfer and commercialization of technologies that it sponsors 15. Steam generator tube integrity program. Phase I report International Nuclear Information System (INIS) Alzheimer, J.M.; Clark, R.A.; Morris, C.J.; Vagins, M. 1979-09-01 The results are presented of the pressure tests performed as part of Phase I of the Steam Generator Tube Integrity (SGTI) program at Battelle Pacific Northwest Laboratory. These tests were performed to establish margin-to-failure predictions for mechanically defected Pressurized Water Reactor (PWR) steam generator tubing under operating and accident conditions. Defect geometries tested were selected because they simulate known or expected defects in PWR steam generators. These defect geometries are Electric Discharge Machining (EDM) slots, elliptical wastage, elliptical wastage plus through-wall slot, uniform thinning, denting, denting plus uniform thinning, and denting plus elliptical wastage. All defects were placed in tubing representative of that currently used in PWR steam generators 16. The driving elements of an integrated configuration management program International Nuclear Information System (INIS) Zaalouk, M.G. 1990-01-01 The need for an effective long-term Plant Configuration Management Program (CMP) has been demonstrated in response to Plant Design Modification and Plant Life Extension activities. Utilities operating early-vintage nuclear plants have a particular need, as numerous modifications have been made without the benefit of an accurate, complete, properly maintained and controlled Design Basis. This paper presents a model for a long-term, cost-effective CMP which is based on and driven by the development, maintenance and control of accurate plant Design Basis Information.
The model also provides a systematic approach for devising and implementing an integrated Plant CMP based on the essential attributes of the Plant Configuration Management, including Design Basis 17. International piping integrity research group (IPIRG) program final report International Nuclear Information System (INIS) Schmidt, R.; Wilkowski, G.; Scott, P.; Olsen, R.; Marschall, C.; Vieth, P.; Paul, D. 1992-04-01 This is the final report of the International Piping Integrity Research Group (IPIRG) Programme. The IPIRG Programme was an international group programme managed by the U.S. Nuclear Regulatory Commission and funded by a consortium of organizations from nine nations: Canada, France, Italy, Japan, Sweden, Switzerland, Taiwan, the United Kingdom, and the United States. The objective of the programme was to develop data needed to verify engineering methods for assessing the integrity of nuclear power plant piping that contains circumferential defects. The primary focus was an experimental task that investigated the behaviour of circumferentially flawed piping and piping systems under high-rate loading typical of seismic events. To accomplish these objectives a unique pipe loop test facility was designed and constructed. The pipe system was an expansion loop with over 30 m of 406-mm diameter pipe and five long radius elbows. Five experiments on flawed piping were conducted to failure in this facility with dynamic excitation. The report: provides background information on leak-before-break and flaw evaluation procedures in piping; summarizes the technical results of the programme; gives a relatively detailed assessment of the results from the various pipe fracture experiments and complementary analyses; and, summarizes the advances in the state-of-the-art of pipe fracture technology resulting from the IPIRG Program 18.
High-level waste program integration within the DOE complex International Nuclear Information System (INIS) Valentine, J.H.; Malone, K.; Schaus, P.S. 1998-03-01 Eleven major Department of Energy (DOE) site contractors were chartered by the Assistant Secretary to use a systems engineering approach to develop and evaluate technically defensible cost savings opportunities across the complex. Known as the complex-wide Environmental Management Integration (EMI), this process evaluated all the major DOE waste streams including high level waste (HLW). Across the DOE complex, this waste stream has the highest life cycle cost and is scheduled to take until at least 2035 before all HLW is processed for disposal. Technical contract experts from the four DOE sites that manage high level waste participated in the integration analysis: Hanford, Savannah River Site (SRS), Idaho National Engineering and Environmental Laboratory (INEEL), and West Valley Demonstration Project (WVDP). In addition, subject matter experts from the Yucca Mountain Project and the Tanks Focus Area participated in the analysis. Also, departmental representatives from the US Department of Energy Headquarters (DOE-HQ) monitored the analysis and results. Workouts were held throughout the year to develop recommendations to achieve a complex-wide integrated program. From this effort, the HLW Environmental Management (EM) Team identified a set of programmatic and technical opportunities that could result in potential cost savings and avoidance in excess of $18 billion and an accelerated completion of the HLW mission by seven years. The cost savings, schedule improvements, and volume reduction are attributed to a multifaceted HLW treatment disposal strategy which involves waste pretreatment, standardized waste matrices, risk-based retrieval, early development and deployment of a shipping system for glass canisters, and reasonable, low cost tank closure 19.
Integrated Data Collection Analysis (IDCA) Program - SSST Testing Methods Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Whinnery, LeRoy L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Phillips, Jason J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Shelley, Timothy J. [Bureau of Alcohol, Tobacco and Firearms (ATF), Huntsville, AL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2013-03-25 The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the Proficiency Test and the reasons for these changes are documented in this report.
The most significant modifications in standard testing methods are: 1) including one specified sandpaper in impact testing among all the participants, 2) diversifying liquid test methods for selected participants, and 3) including sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately, the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent. 20. Integrated energy and climate program without nuclear power International Nuclear Information System (INIS) Haller, W. 2007-01-01 Under the German EU Council presidency, the European Union adopted an ambitious climate protection program in spring this year which has consequences for the entire energy sector. A fair system of burden sharing is currently being sought on the level of the European Union. However, the German federal government is not waiting for that agreement to be reached, but has added to the clear-cut EU plans in order to achieve more climate protection.
At the closed meeting of the federal cabinet in Meseberg on August 23-24, 2007, the key points of an integrated energy and climate program were adopted. The unprecedented set of measures comprises 30 points. In many cases, legal measures are required for implementation, which implies a heavy workload facing the federal government and parliament. A major step forward is seen in the federal government's intention to preserve the international competitiveness of the producing sector and energy-intensive industries also under changed framework conditions. The imperative guiding principle must be that care should take precedence over speed. European or worldwide solutions must be found for all measures, be it energy efficiency or climate protection, and all countries must be involved because, otherwise, specific measures taken by individual states will be ineffective. (orig.) 1. GENP-2, Program System for Integral Reactor Perturbation International Nuclear Information System (INIS) Boioli, A.; Cecchini, G.P. 1975-01-01 1 - Description of problem or function: GENP-2 is a system of programs that use 'generalized perturbation theory' to calculate the perturbations of reactor integral characteristics which can be expressed by means of ratios between linear or bilinear functionals of the real and/or adjoint fluxes (e.g. reaction rate ratios), due to cross section perturbations. 2 - Method of solution: GENP-2 consists of the following codes: DDV, SORCI, CIAP-PMN and GLOBP-2D. DDV calculates the real or adjoint fluxes and power distribution using multigroup diffusion theory in 2-dimensions. SORCI uses the fluxes from DDV to calculate the real and/or adjoint general perturbation sources. CIAP-PMN reads the sources from SORCI and uses them in the real or adjoint generalised importance calculations (2 dimensions, multigroup diffusion).
GLOBP-2D uses the importance calculated by CIAP-PMN, and the fluxes calculated by DDV, in generalised perturbation expressions to calculate the perturbation in the quantity of interest. 3 - Restrictions on the complexity of the problem: DDV, although variably dimensioned, has the following restrictions: - max. number of mesh points: 6400; - max. number of mesh points in one dimension: 81; - max. number of regions: 6400; - max. number of energy groups: 100; - if the power distribution is calculated, the product of the number of groups and the number of regions must not exceed 2500. The other programs have the same restrictions where applicable 2. Critical Issues Forum: A multidisciplinary educational program integrating computer technology Energy Technology Data Exchange (ETDEWEB) Alexander, R.J.; Robertson, B.; Jacobs, D. [Los Alamos National Lab., NM (United States) 1998-09-01 The Critical Issues Forum (CIF), funded by the US Department of Energy, is a collaborative effort between the Science Education Team of Los Alamos National Laboratory (LANL) and New Mexico high schools to improve science education throughout the state of New Mexico as well as nationally. By creating an educational relationship between LANL, with its unique scientific resources, and New Mexico high schools, students and teachers participate in programs that increase not only their science content knowledge but also their critical thinking and problem-solving skills. The CIF program focuses on current, globally oriented topics crucial to the security not only of the US but of all nations. The CIF is an academic-year program that involves both teachers and students in the process of seeking solutions for real world concerns. Built around issues tied to LANL's mission, participating students and teachers are asked to critically investigate and examine the interactions among the political, social, economic, and scientific domains while considering diversity issues that include geopolitical entities and cultural and ethnic groupings.
Participants are expected to collaborate through telecommunications during the research phase and participate in a culminating multimedia activity, where they produce and deliver recommendations for the current issues being studied. The CIF was evaluated and found to be an effective approach for teacher professional training, especially in the development of skills for critical thinking and questioning. The CIF contributed to students' ability to integrate diverse disciplinary content about science-related topics and supported teachers in facilitating the understanding of their students using the CIF approach. Networking technology in CIF has been used as an information repository, resource delivery mechanism, and communication medium. 3. Evaluation of integrated child development services program in Rajasthan, India Directory of Open Access Journals (Sweden) Madan Singh Rathore 2015-01-01 Full Text Available Background: The Integrated Child Development Services (ICDS) scheme is the largest program for promotion of maternal and child health and nutrition. Aims: The present study aims to evaluate the ICDS program in terms of infrastructure of anganwadi centers (AWCs), characteristics of anganwadi workers (AWWs), and coverage of supplementary nutrition (SN) and preschool education (PSE) to the beneficiaries. Methods: A total of 39 AWCs from a rural area and 15 from the urban area were surveyed. AWWs were interviewed, and records were reviewed. Information was collected using a predesigned and pretested questionnaire. Results: In the selected AWCs, 88.9% were running in pucca buildings, 38.9% had electricity, 35.1% had a separate kitchen, 1.8% had cooking gas, and toilets were available in 59.3% of AWCs. All the AWWs have received job training; 83.3% have received refresher training, 38.8% have received orientation training, 37% have received skill training in World Health Organization growth standards, and 18.5% have received skill training in mother and child health.
Of the registered beneficiaries, 86.9% of pregnant women, 90.7% of lactating women, and 72.6% of adolescent girls were availing SN, as were 95.4% of registered children aged 6 months to 3 years and 92.4% of registered children aged 3-6 years. Interruption in SN in the last 6 months was seen in 22.2% of AWCs. Appropriate and adequate PSE material was available in 59.2% of AWCs. Conclusion: There are program gaps in the infrastructure of AWCs, training of AWWs, coverage of SN, and interruption in the supply of SN. 4. Configuration Management Program - a part of Integrated Management System International Nuclear Information System (INIS) Mancev, Bogomil; Yordanova, Vanja; Nenkova, Boyka 2014-01-01 The Configuration Management (CM) Program is a part of the Integrated Management System. CM ensures that during the entire operational life of the plant the following requirements are met: · The basic design requirements of the plant are established, documented and maintained; · The physical structures, systems and components (SSCs) of the plant are in conformity with the design requirements; · The physical and functional characteristics of the plant are correctly incorporated in the operational and maintenance documentation, as well as in the documents for testing and training; · The changes in the design documentation are incorporated in the physical configuration and in the operative documentation; · The changes in the design are minimized by a management process for review according to approved criteria. The purpose of this report is to try to clarify the place of the configuration management program within the Integrated Management System of Kozloduy NPP and to present the computerized information system for organization of the operational activities (IS OOA) as a tool for effective management of the facility. (authors) 5. Integrated Healthcare Delivery: A Qualitative Research Approach to Identifying and Harmonizing Perspectives of Integrated Neglected Tropical Disease Programs.
Directory of Open Access Journals (Sweden) Arianna Rubin Means 2016-10-01 Full Text Available While some evidence supports the beneficial effects of integrating neglected tropical disease (NTD) programs to optimize coverage and reduce costs, there is minimal information regarding when or how to effectively operationalize program integration. The lack of systematic analyses of integration experiences and of integration processes may act as an impediment to achieving more effective NTD programming. We aimed to learn about the experiences of NTD stakeholders and their perceptions of integration. We evaluated differences in the definitions, roles, perceived effectiveness, and implementation experiences of integrated NTD programs among a variety of NTD stakeholder groups, including multilateral organizations, funding partners, implementation partners, national Ministry of Health (MOH) teams, district MOH teams, volunteer rural health workers, and community members participating in NTD campaigns. Semi-structured key informant interviews were conducted. Coding of themes involved a mix of applying in-vivo open coding and a priori thematic coding from a start list. In total, 41 interviews were conducted. Salient themes varied by stakeholder; however, dominant themes on integration included: significant variations in definitions, differential effectiveness of specific integrated NTD activities, community member perceptions of NTD programs, the influence of funders, perceived facilitators, perceived barriers, and the effects of integration on health system strength. In general, stakeholder groups provided unique perspectives, rather than contrarian points of view, on the same topics.
The stakeholders identified more advantages to integration than disadvantages; however, there are a number of both unique facilitators and challenges to integration from the perspective of each stakeholder group. Qualitative data suggest several structural, process, and technical opportunities that could be addressed to promote more effective and efficient integrated NTD 6. Integrated Pest Management: A Curriculum for Early Care and Education Programs Science.gov (United States) California Childcare Health Program, 2011 2011-01-01 This "Integrated Pest Management Toolkit for Early Care and Education Programs" presents practical information about using integrated pest management (IPM) to prevent and manage pest problems in early care and education programs. This curriculum will help people in early care and education programs learn how to keep pests out of early… 7. 42 CFR 455.232 - Medicaid integrity audit program contractor functions. Science.gov (United States) 2010-10-01 Title 42, Public Health; Centers for Medicare & Medicaid Services, Department of Health and Human Services; Medical Assistance Programs; Program Integrity: Medicaid... 8. Integrating scientific knowledge into large-scale restoration programs: the CALFED Bay-Delta Program experience Science.gov (United States) Taylor, K.A.; Short, A. 2009-01-01 Integrating science into resource management activities is a goal of the CALFED Bay-Delta Program, a multi-agency effort to address water supply reliability, ecological condition, drinking water quality, and levees in the Sacramento-San Joaquin Delta of northern California. Under CALFED, many different strategies were used to integrate science, including interaction between the research and management communities, public dialogues about scientific work, and peer review.
This paper explores ways science was (and was not) integrated into CALFED's management actions and decision systems through three narratives describing different patterns of scientific integration and application in CALFED. Though a collaborative process and certain organizational conditions may be necessary for developing new understandings of the system of interest, we find that those factors are not sufficient for translating that knowledge into management actions and decision systems. We suggest that the application of knowledge may be facilitated or hindered by (1) differences in the objectives, approaches, and cultures of scientists operating in the research community and those operating in the management community and (2) other factors external to the collaborative process and organization. 9. NASA Space Radiation Program Integrative Risk Model Toolkit Science.gov (United States) Kim, Myung-Hee Y.; Hu, Shaowen; Plante, Ianik; Ponomarev, Artem L.; Sandridge, Chris 2015-01-01 NASA Space Radiation Program Element scientists have been actively involved in the development of an integrative risk model toolkit that includes models for acute radiation risk and organ dose projection (ARRBOD), NASA space radiation cancer risk projection (NSCR), hemocyte dose estimation (HemoDose), GCR event-based risk model code (GERMcode), relativistic ion tracks (RITRACKS), NASA radiation track image (NASARTI), and the On-Line Tool for the Assessment of Radiation in Space (OLTARIS). This session will introduce the components of the risk toolkit with opportunity for hands-on demonstrations.
Brief descriptions of each tool are: ARRBOD, for organ dose projection and acute radiation risk calculation from exposure to a solar particle event; NSCR, for projection of cancer risk from exposure to space radiation; HemoDose, for retrospective dose estimation using multi-type blood cell counts; GERMcode, for basic physical and biophysical properties of an ion beam, and biophysical and radiobiological properties for beam transport to the target in the NASA Space Radiation Laboratory beam line; RITRACKS, for simulation of heavy ion and delta-ray track structure, radiation chemistry, DNA structure and DNA damage at the molecular scale; NASARTI, for modeling of the effects of space radiation on human cells and tissue by incorporating a physical model of tracks, cell nucleus, and DNA damage foci with image segmentation for the automated count; and OLTARIS, an integrated tool set utilizing HZETRN (High Charge and Energy Transport) intended to help scientists and engineers study the effects of space radiation on shielding materials, electronics, and biological systems. 10. Mixed Waste Integrated Program -- Problem-oriented technology development International Nuclear Information System (INIS) Hart, P.W.; Wolf, S.W.; Berry, J.B. 1994-01-01 The Mixed Waste Integrated Program (MWIP) is responding to the need for DOE mixed waste treatment technologies that meet dual regulatory requirements. MWIP is developing emerging and innovative treatment technologies to determine process feasibility. Technology demonstrations will be used to determine whether processes are superior to existing technologies in reducing risk, minimizing life-cycle cost, and improving process performance. Technology development is ongoing in technical areas required to process mixed waste: materials handling, chemical/physical treatment, waste destruction, off-gas treatment, final forms, and process monitoring/control.
MWIP is currently developing a suite of technologies to process heterogeneous waste. One robust process is the fixed-hearth plasma-arc process that is being developed to treat a wide variety of contaminated materials with minimal characterization. Additional processes encompass steam reforming, including treatment of waste under the debris rule. Advanced off-gas systems are also being developed. Vitrification technologies are being demonstrated for the treatment of homogeneous wastes such as incinerator ash and sludge. An alternative to conventional evaporation for liquid removal--freeze crystallization--is being investigated. Since mercury is present in numerous waste streams, mercury removal technologies are being developed 11. Part 2 -- current program integrating strategies and lubrication technology Energy Technology Data Exchange (ETDEWEB) Johnson, B. 1996-12-01 This paper is the second of two that describe the Predictive Maintenance Program for rotating machinery at the Palo Verde Nuclear Generating Station. The Predictive Maintenance program has been enhanced through organizational changes and improved interdisciplinary usage of technology. This paper will discuss current program strategies that have improved the interaction between the Vibration and Lube Oil programs. The "Lube Oil" view of the combined program along with case studies will then be presented.
The "Lube Oil" view of the combined program along with case studies will then be presented 13. School Integration Program in Chile: gaps and challenges for the implementation of an inclusive education program Directory of Open Access Journals (Sweden) Mauro Tamayo Rozas 2018-06-01 Full Text Available Constructing inclusive societies, leaving no one behind, is an ethical obligation. Developing inclusive educational programs helps ensure equal opportunities in one of the most critical stages of development. The aim of this study is to describe the implementation of the School Integration Program (SIP) in its different dimensions and in different zones of Chile. A descriptive and cross-sectional study of the perception of SIP Coordinators was performed in public and subsidized schools across the country through a web-based survey. A simple random convenience sampling of schools was performed, obtaining 1742 answers from educational establishments with SIP. A higher level of implementation of the program was identified in areas related to interdisciplinary work and comprehensive training, curricular and institutional aspects. On the other hand, deficiencies were identified in the implementation of accessibility, development of reasonable adjustments and participation of the educational community. Likewise, there are differences between the zones of Chile, with the North zone having the least progress. Although there are results in the work team and institutional development, the development of objective conditions and participation is still a pending task in the implementation of the SIP. 14. Integrated Task And Data Parallel Programming: Language Design Science.gov (United States) Grimshaw, Andrew S.; West, Emily A. 1998-01-01 This research investigates the combination of task and data parallel language constructs within a single programming language.
There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated 15.
Development of a Mathematics, Science, and Technology Education Integrated Program for a Maglev Science.gov (United States) Park, Hyoung Seo 2006-01-01 The purpose of the study was to develop an MST Integrated Program for making a Maglev hands-on activity for higher elementary school students in Korea. In this MST Integrated Program, students will apply Mathematics, Science, and Technology principles and concepts to the design, construction, and evaluation of a magnetically levitated vehicle. The… 16. McGill's Integrated Civil and Common Law Program. Science.gov (United States) Morissette, Yves-Marie 2002-01-01 Describes the bijural program of McGill University Faculty of Law. The program educates all first-degree law students in both the common law and civil law traditions, preparing them for the increasing globalization of legal practice. (EV) 17. Software-Programmed Optical Networking with Integrated NFV Service Provisioning DEFF Research Database (Denmark) Mehmeri, Victor; Wang, Xi; Basu, Shrutarshi 2017-01-01 We showcase demonstrations of “program & compile” styled optical networking as well as open platforms & standards based NFV service provisioning using a proof-of-concept implementation of the Software-Programmed Networking Operating System (SPN OS). 18. Specialization-Specific Course Assessments Integrated for Program Assessment OpenAIRE Qurban A. Memon; Adnan Harb; Shakeel Khoja 2012-01-01 The program assessment process combines assessments from individual courses to generate final program assessment to match accreditation benchmarks.
In developing countries, industrial environment is not diversified to allow graduating engineers to seek jobs in all disciplines or specializations of an engineering program. Hence, it seems necessary to seek evolution of an engineering program assessment for specialized requirements of the industry. This paper describes how specialization-specifi... 19. 76 FR 34385 - Program Integrity: Gainful Employment-Debt Measures Science.gov (United States) 2011-06-13 ... bachelor's and master's degree programs, and 20 years for programs that lead to a doctoral or first...-risk and underserved populations of students; and limit the growth of, and innovation in, new programs... and to society in general, nor that they would represent a poor financial risk. Sen. Rep. No. 758... 20. OSMOSE: An experimental program for the qualification of integral cross sections of actinides International Nuclear Information System (INIS) Hudelot, J. P.; Klann, R.; Fougeras, P.; Jorion, F.; Drin, N.; Donnet, L. 2004-01-01 The accurate integral cross sectional reaction rates in representative spectra for the actinides are discussed at OSMOSE program. The first step in obtaining better nuclear data consists of measuring accurate integral data and comparing it to integrated energy dependent data: this comparison provides a direct assessment of the effect of deficiencies in the differential data. The OSMOSE program includes a complete analytical program associated with experimental measurement program and aims at understanding and resolving discrepancies between calculated and measured values. The measurement covers a wide range of neutron spectra, from over-moderate thermal spectra to fast spectra. (authors) 1. The effectiveness of a cardiometabolic prevention program in general practices offering integrated care programs including a patient tailored lifestyle treatment. NARCIS (Netherlands) Hollander, M.; Eppink, L.; Nielen, M.; Badenbroek, I.; Stol, D.; Schellevis, F.; Wit, N. 
de 2016-01-01 Background & Aim: Selective cardio-metabolic prevention programs (CMP) may be especially effective in well-organized practices. We studied the effect of a CMP program in the academic primary care practices of the Julius Health Centers (JHC) that offer integrated cardiovascular disease management 2. Integration with Writing Programs: A Strategy for Quantitative Reasoning Program Development Directory of Open Access Journals (Sweden) Nathan D. Grawe 2009-07-01 Full Text Available As an inherently interdisciplinary endeavor, quantitative reasoning (QR risks falling through the cracks between the traditional “silos” of higher education. This article describes one strategy for developing a truly cross-campus QR initiative: leverage the existing structures of campus writing programs by placing QR in the context of argument. We first describe the integration of Carleton College’s Quantitative Inquiry, Reasoning, and Knowledge initiative with the Writing Program. Based on our experience, we argue that such an approach leads to four benefits: it reflects important aspects of QR often overlooked by other approaches; it defuses the commonly raised objection that QR is merely remedial math; it sidesteps challenges of institutional culture (idiosyncratic campus history, ownership, and inertia; and it improves writing instruction. We then explore the implications of our approach for QR graduation standards. Our experience suggests that once we engaged faculty from across the curriculum in our work, it would have been difficult to adopt a narrowly defined requirement of skills-based courses. The article concludes by providing resources for those who would like to implement this approach at the course and institutional level. 3. Montana Integrated Carbon to Liquids (ICTL) Demonstration Program Energy Technology Data Exchange (ETDEWEB) Fiato, Rocco A. [Accelergy Corporation, Houston, TX (United States); Sharma, Ramesh [Univ. 
of North Dakota, Grand Forks, ND (United States). Energy & Environmental Research Center (EERC); Allen, Mark [Accelergy Corporation, Houston, TX (United States). Integrated Carbon Solutions; Peyton, Brent [Montana State Univ., Bozeman, MT (United States); Macur, Richard [Montana State Univ., Bozeman, MT (United States). Dept. of Land Resources and Environmental Sciences; Cameron, Jemima [Australian Energy Company Ltd., Hovea (Australia). Australian American Energy Corporation (AAEC) 2013-12-01 Integrated carbon-to-liquids technology (ICTL) incorporates three basic processes for the conversion of a wide range of feedstocks to distillate liquid fuels: (1) Direct Microcatalytic Coal Liquefaction (MCL) is coupled with biomass liquefaction via (2) Catalytic Hydrodeoxygenation and Isomerization (CHI) of fatty acid methyl esters (FAME) or triglyceride fatty acids (TGFA) to produce liquid fuels, with process derived (3) CO2 Capture and Utilization (CCU) via algae production and use in BioFertilizer for added terrestrial sequestration of CO2, or as a feedstock for MCL and/or CHI. This novel approach enables synthetic fuels production while simultaneously meeting EISA 2007 Section 526 targets, minimizing land use and water consumption, and providing cost competitive fuels at current day petroleum prices. ICTL was demonstrated with Montana Crow sub-bituminous coal in MCL pilot scale operations at the Energy and Environmental Research Center at the University of North Dakota (EERC), with related pilot scale CHI studies conducted at the University of Pittsburgh Applied Research Center (PARC). Coal-Biomass to Liquid (CBTL) Fuel samples were evaluated at the US Air Force Research Labs (AFRL) in Dayton and greenhouse tests of algae based BioFertilizer conducted at Montana State University (MSU). Econometric modeling studies were also conducted on the use of algae based BioFertilizer in a wheat-camelina crop rotation cycle.
We find that the combined operation is not only able to help boost crop yields, but also to provide added crop yields and associated profits from TGFA (from crop production) for use as an ICTL plant feedstock. This program demonstrated the overall viability of ICTL in pilot scale operations. Related work on the Life Cycle Assessment (LCA) of a Montana project indicated that CCU could be employed very effectively to reduce the overall carbon footprint of the MCL/CHI process. Plans are currently being made to conduct larger 4. Role of innovative institutional structures in integrated governance. A case study of integrating health and nutrition programs in Chhattisgarh, India. Science.gov (United States) Kalita, Anuska; Mondal, Shinjini 2012-01-01 The aim of this paper is to highlight the significance of integrated governance in bringing about community participation, improved service delivery, accountability of public systems and human resource rationalisation. It discusses the strategies of innovative institutional structures in translating such integration in the areas of public health and nutrition for poor communities. The paper draws on experience of initiating integrated governance through innovations in health and nutrition programming in the resource-poor state of Chhattisgarh, India, at different levels of governance structures--hamlets, villages, clusters, blocks, districts and at the state. The study uses mixed methods--i.e. document analysis, interviews, discussions and quantitative data from facilities surveys--to present a case study analyzing the process and outcome of integration. The data indicate that integrated governance initiatives improved convergence between health and nutrition departments of the state at all levels. Also, innovative structures are important to implement the idea of integration, especially in contexts that do not have historical experience of such partnerships.
Integration also contributed towards improved participation of communities in self-governance, community monitoring of government programs, and therefore, better services. As governments across the world, especially in developing countries, struggle towards achieving better governance, integration can serve as a desirable process to address this. Integration can affect the decentralisation of power, inclusion, efficiency, accountability and improved service quality in government programs. The institutional structures detailed in this paper can provide models for replication in other similar contexts for translating and sustaining the idea of integrated governance. This paper is one of the few to investigate innovative public institutions of a and community mobilisation to explore this important, and under 5. 47 CFR 76.504 - Limits on carriage of vertically integrated programming. Science.gov (United States) 2010-10-01 ... programming. 76.504 Section 76.504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST... Limits on carriage of vertically integrated programming. (a) Except as otherwise provided in this section... national video programming services owned by the cable operator or in which the cable operator has an... 6. An Integrative Suicide Prevention Program for Visitor Charcoal Burning Suicide and Suicide Pact Science.gov (United States) Wong, Paul W. C.; Liu, Patricia M. Y.; Chan, Wincy S. C.; Law, Y. W.; Law, Steven C. K.; Fu, King-Wa; Li, Hana S. H.; Tso, M. K.; Beautrais, Annette L.; Yip, Paul S. F. 2009-01-01 An integrative suicide prevention program was implemented to tackle an outbreak of visitor charcoal burning suicides in Cheung Chau, an island in Hong Kong, in 2002. This study evaluated the effectiveness of the program. The numbers of visitor suicides reduced from 37 deaths in the 51 months prior to program implementation to 6 deaths in the 42… 7. 
77 FR 72435 - Pipeline Safety: Using Meaningful Metrics in Conducting Integrity Management Program Evaluations Science.gov (United States) 2012-12-05 ... effectiveness of their integrity management programs. Program evaluation is one of the key required program... activities that are in place to control risk. These measures indicate how well an operator is implementing... outcome is being achieved or not, despite the risk control activities in place. Failure Measures that... 8. Active Participation of Integrated Development Environments in the Teaching of Object-Oriented Programming Science.gov (United States) Depradine, Colin; Gay, Glenda 2004-01-01 With the strong link between programming and the underlying technology, the incorporation of computer technology into the teaching of a programming language course should be a natural progression. However, the abstract nature of programming can make such integration a difficult prospect to achieve. As a result, the main development tool, the… 9. Reluctant gerontologists: integrating gerontological nursing content into a prelicensure program. Science.gov (United States) Miller, Joanne M; Coke, Lola; Moss, Angela; McCann, Judith J 2009-01-01 Integration of readily available resources on care of older adults increased student and faculty interest and knowledge of gerontological nursing. The authors describe their use of these practical and easy-to-implement resources. 10. Using management action plans to integrate program improvement efforts Energy Technology Data Exchange (ETDEWEB) Meador, S.W.; Kidwell, R.J.; Shangraw, W.R.; Cardamone, E.N. [Project Performance Corporation, Sterling, VA (United States) 1994-12-31 The Department of Energy's (DOE's) Environmental Management Program is the country's largest and most sophisticated environmental program to date. The rapid expansion of the DOE's environmental restoration efforts has led to increased scrutiny of its management processes and systems.
As the program continues to grow and mature, maintaining adequate accountability for resources and clearly communicating progress will be essential to sustaining public confidence. The Office of Environmental Management must ensure that adequate processes and systems are in place at Headquarters, Operations Offices, and contractor organizations. These systems must provide the basis for sound management, cost control, and reporting. To meet this challenge, the Office of Environmental Restoration introduced the Management Action Plan process. This process was designed to serve three primary functions: (1) define the program's management capabilities at Headquarters and Operations Offices; (2) describe how management initiatives address identified program deficiencies; and (3) identify any duplication of efforts or program deficiencies. The Environmental Restoration Management Action Plan is a tracking, reporting, and statusing tool, used primarily at the Headquarters level, for assessing performance in key areas of project management and control. It is used by DOE to communicate to oversight agencies and stakeholders a clearer picture of the current status of the environmental restoration project management system. This paper will discuss how Management Action Plans are used to provide a program-wide assessment of management capabilities. 11. Paradox applications integration ATP's for MAC and mass balance programs International Nuclear Information System (INIS) Russell, V.K.; Mullaney, J.E. 1994-01-01 The K Basins Materials Accounting (MAC) and Material Balance (MBA) database systems were set up to run under one common applications program. This Acceptance Test Plan (ATP) describes how the code was to be tested to verify its correctness. The scope of the tests is minimal, since both MAC and MBA have already been tested in detail as stand-alone programs 12.
The Performance Enhancement Group Program: Integrating Sport Psychology and Rehabilitation Science.gov (United States) Granito, Vincent J.; Hogan, Jeffery B.; Varnum, Lisa K. 1995-01-01 In an effort to improve the psychological health of the athlete who has sustained an injury, the Performance Enhancement Group program for injured athletes was created. This paper will offer a model for the Performance Enhancement Group program as a way to: 1) support the athlete, both mentally and physically; 2) deal with the demands of rehabilitation; and 3) facilitate the adjustments the athlete has to make while being out of the competitive arena. The program consists of responsibilities for professionals in sport psychology (ie, assessment/orientation, support, education, individual counseling, and evaluation) and athletic training (ie, organization/administration, recruitment and screening, support, application of techniques, and program compliance). The paper will emphasize that the success of the program is dependent on collaboration between professionals at all levels. PMID:16558357 13. Integrated environmental monitoring program at the Hanford Site International Nuclear Information System (INIS) Jaquish, R.E. 1990-08-01 The US Department of Energy's Hanford Site, north of Richland, Washington, has a mission of defense production, waste management, environmental restoration, advanced reactor design, and research and development. Environmental programs at Hanford are conducted by Pacific Northwest Laboratory (PNL) and the Westinghouse Hanford Company (WHC). The WHC environmental programs include the compliance and surveillance activities associated with site operations and waste management. The PNL environmental programs address the site-wide and the off-site areas.
They include the environmental surveillance and the associated support activities, such as dose calculations, and also the monitoring of environmental conditions to comply with federal and state environmental regulations on wildlife and cultural resources. These are called ''independent environmental programs'' in that they are conducted completely separate from site operations. The Environmental Surveillance and Oversight Program consists of the following projects: surface environmental surveillance; ground-water surveillance; wildlife resources monitoring; cultural resources; dose overview; radiation standards and calibrations; meteorological and climatological services; emergency preparedness 14. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization Energy Technology Data Exchange (ETDEWEB) Groer, Christopher S [ORNL; Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL 2012-10-01 It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some \\emph{NP}-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. 
Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods. 15. Using management action plans to integrate program improvement efforts International Nuclear Information System (INIS) Meador, S.W.; Kidwell, R.J.; Shangraw, W.R.; Cardamone, E.N. 1994-01-01 The Department of Energy's (DOE's) Environmental Management Program is the country's largest and most sophisticated environmental program to date. The rapid expansion of the DOE's environmental restoration efforts has led to increased scrutiny of its management processes and systems. As the program continues to grow and mature, maintaining adequate accountability for resources and clearly communicating progress will be essential to sustaining public confidence. The Office of Environmental Management must ensure that adequate processes and systems are in place at Headquarters, Operation Offices, and contractor organizations. These systems must provide the basis for sound management, cost control, and reporting. To meet this challenge, the Office of Environmental Restoration introduced the Management Action Plan process. This process was designed to serve three primary functions: (1) define the program's management capabilities at Headquarters and Operations Offices; (2) describe how management initiatives address identified program deficiencies; and (3) identify any duplication of efforts or program deficiencies. The Environmental Restoration Management Action Plan is a tracking, reporting, and statusing tool, used primarily at the Headquarters level, for assessing performance in key areas of project management and control. It is used by DOE to communicate to oversight agencies and stakeholders a clearer picture of the current status of the environmental restoration project management system.
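The INDDGO abstract above describes dynamic programming over tree decompositions for maximum weighted independent set. As an illustrative sketch (not code from the INDDGO project), the width-1 special case of that DP — solving the problem exactly on a tree — captures the include/exclude table that the general algorithm maintains per decomposition bag; all node names and weights here are invented for the example:

```python
def max_weight_independent_set(tree, weights, root):
    """tree: dict mapping node -> list of children; weights: dict node -> weight.
    Returns the maximum total weight of an independent set in the tree."""
    # best[v] = (best subtree weight with v excluded, best with v included)
    best = {}

    def visit(v):
        exclude, include = 0, weights[v]
        for c in tree.get(v, []):
            visit(c)
            ex_c, in_c = best[c]
            exclude += max(ex_c, in_c)  # child is free to be in or out
            include += ex_c             # children of an included node must be out
        best[v] = (exclude, include)

    visit(root)
    return max(best[root])

# Tiny example: a path a-b-c with weights 3, 10, 3; the best set is {b}.
tree = {"a": ["b"], "b": ["c"]}
weights = {"a": 3, "b": 10, "c": 3}
print(max_weight_independent_set(tree, weights, "a"))  # -> 10
```

On a general graph, the same two-state table generalizes to one entry per independent subset of each decomposition bag, which is what makes the running time exponential in the decomposition width, as the abstract notes.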
This paper will discuss how Management Action Plans are used to provide a program-wide assessment of management capabilities 16. Integrating cognitive rehabilitation: A preliminary program description and theoretical review of an interdisciplinary cognitive rehabilitation program. Science.gov (United States) Fleeman, Jennifer A; Stavisky, Christopher; Carson, Simon; Dukelow, Nancy; Maier, Sheryl; Coles, Heather; Wager, John; Rice, Jordyn; Essaff, David; Scherer, Marcia 2015-01-01 Interdisciplinary cognitive rehabilitation is emerging as the expected standard of care for individuals with mild to moderate degrees of cognitive impairment for a variety of etiologies. There is a growing body of evidence in cognitive rehabilitation literature supporting the involvement of multiple disciplines, with the use of cognitive support technologies (CSTs), in delivering cognitive therapy to individuals who require cognitive rehabilitative therapies. This article provides an overview of the guiding theories related to traditional approaches of cognitive rehabilitation and the positive impact of current theoretical models of an interdisciplinary approach in clinical service delivery of this rehabilitation. A theoretical model of the Integrative Cognitive Rehabilitation Program (ICRP) will be described in detail along with the practical substrates of delivering specific interventions to individuals and caregivers who are living with mild to moderate cognitive impairment. The ultimate goal of this article is to provide a clinically useful resource for direct service providers. It will serve to further clinical knowledge and understanding of the evolution from traditional silo based treatment paradigms to the current implementation of multiple perspectives and disciplines in the pursuit of patient centered care. 
The article will discuss the theories that contributed to the development of the interdisciplinary team and the ICRP model, implemented with individuals with mild to moderate cognitive deficits, regardless of etiology. The development and implementation of specific assessment and intervention strategies in this cognitive rehabilitation program will also be discussed. The assessment and intervention strategies utilized as part of ICRP are applicable to multiple clinical settings in which individuals with cognitive impairment are served. This article has specific implications for rehabilitation which include: (a) An Interdisciplinary Approach is an 17. Universal file processing program for field programmable integrated circuits International Nuclear Information System (INIS) Freytag, D.R.; Nelson, D.J. 1985-01-01 A computer program is presented that translates logic equations into PROM-burner files (or the reverse) for programmable logic devices of various kinds, namely PROMs, FPLAs, FPLSs, and PALs. The program achieves flexibility through the use of a database containing detailed information about the devices to be programmed. New devices can thus be accommodated through simple extensions of the database. When writing logic equations, the user can define logic combinations of signals as new logic variables for use in subsequent equations. This procedure yields compact and transparent expressions for logic operations, thus reducing the chances for error. A logic simulation program is also provided so that an independent check of the design can be performed at the software level 18. Generic multiset programming for language-integrated querying DEFF Research Database (Denmark) Henglein, Fritz; Larsen, Ken Friis 2010-01-01 This paper demonstrates how relational algebraic programming based on efficient symbolic representations of multisets and operations on them can be applied to the query sublanguage of SQL in a type-safe fashion.
In essence, it provides a library for naïve programming with multisets in a generalized SQL-style fashion, but avoids many cases of asymptotically inefficient nested iteration through cross-products. 19. National Acid Precipitation Assessment Program: 1990 Integrated Assessment report International Nuclear Information System (INIS) 1991-11-01 The document, the 'Integrated Assessment,' is a summary of the causes and effects of acidic deposition and a comparison of the costs and effectiveness of alternative emission control scenarios. In developing the 'Integrated Assessment,' it was NAPAP's goal to produce a structured compilation of policy-relevant technical information. The Integrated Assessment is based on findings and data from a series of twenty-seven State-of-Science/Technology Reports (SOS/T) on acidic deposition published by NAPAP in 1990. The scope of the documents includes: (1) emissions, atmospheric processes and deposition; (2) effects on surface waters, forests, agricultural crops, exposed materials, human health, and visibility; and (3) control technologies, future emissions, and effects valuation 20. Employee health services integration: meeting the challenge. Successful program. Science.gov (United States) Lang, Y C 1998-02-01 1. The first step of a successful Employee Health Service integration is to have a plan supported by management. The plan must be presented to the employees prior to implementation in a "user friendly" manner. 2. Prior to computerization of employee health records, a record order system must be developed to prevent duplication and to enhance organization. 3. Consistency of services offered must be maintained. Each employee must have the opportunity to receive the same service. Complexity of services will determine the site of delivery. 4. Integration is a new and challenging development for the health care field. Flexibility and brainstorming are necessary in an attempt to meet both employee and employer needs. 1.
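The asymptotic win that the Henglein and Larsen abstract (entry 18 above) attributes to symbolic multiset representations can be illustrated with a small sketch — this is not their Haskell library, just the underlying idea: an SQL-style equijoin written as nested iteration touches the full cross-product, while indexing one side by the join key first makes the pass linear in input plus output size. All data and helper names below are invented for the example:

```python
from collections import defaultdict

def naive_join(xs, ys, key_x, key_y):
    # O(|xs| * |ys|): nested iteration over the cross-product
    return [(x, y) for x in xs for y in ys if key_x(x) == key_y(y)]

def indexed_join(xs, ys, key_x, key_y):
    # Index ys by join key, then probe: O(|xs| + |ys| + output size)
    index = defaultdict(list)
    for y in ys:
        index[key_y(y)].append(y)
    return [(x, y) for x in xs for y in index[key_x(x)]]

# Both produce the same multiset of pairs (duplicates preserved).
people = [("alice", 1), ("bob", 2)]
orders = [(1, "book"), (1, "pen"), (2, "mug")]
assert sorted(naive_join(people, orders, lambda p: p[1], lambda o: o[0])) == \
       sorted(indexed_join(people, orders, lambda p: p[1], lambda o: o[0]))
```

Because both functions return the same multiset, a library can expose the naïve formulation as its programming interface while executing the indexed form, which is the kind of transformation the abstract describes.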
The Environment for Application Software Integration and Execution (EASIE), version 1.0. Volume 2: Program integration guide Science.gov (United States) Jones, Kennie H.; Randall, Donald P.; Stallcup, Scott S.; Rowell, Lawrence F. 1988-01-01 The Environment for Application Software Integration and Execution, EASIE, provides a methodology and a set of software utility programs to ease the task of coordinating engineering design and analysis codes. EASIE was designed to meet the needs of conceptual design engineers that face the task of integrating many stand-alone engineering analysis programs. Using EASIE, programs are integrated through a relational data base management system. In volume 2, a SYSTEM LIBRARY PROCESSOR is used to construct a DATA DICTIONARY describing all relations defined in the data base, and a TEMPLATE LIBRARY. A TEMPLATE is a description of all subsets of relations (including conditional selection criteria and sorting specifications) to be accessed as input or output for a given application. Together, these form the SYSTEM LIBRARY which is used to automatically produce the data base schema, FORTRAN subroutines to retrieve/store data from/to the data base, and instructions to a generic REVIEWER program providing review/modification of data for a given template. Automation of these functions eliminates much of the tedious, error-prone work required by the usual approach to data base integration. 2. Integration Of Innovative Technologies And Affective Teaching & Learning In Programming Courses Directory of Open Access Journals (Sweden) Alvin Prasad 2015-08-01 Full Text Available Abstract Technology has been an integral component in the teaching and learning process in this millennium. In this review paper we evaluate the different technologies which are used to currently facilitate the teaching and learning of computer programming courses.
The aim is to identify problems or gaps in technology usage in the learning environment and suggest affective solutions for technology integration into programming courses at the University levels in the future. We believe that with the inclusion of suggested innovative technologies and affective solutions in programming courses teaching and learning will be attractive and best for the programming industry. 3. Lessons learned from the scaling-up of a weekly multimicronutrient supplementation program in the integrated food security program (PISA). Science.gov (United States) Lechtig, Aarón; Gross, Rainer; Vivanco, Oscar Aquino; Gross, Ursula; López de Romaña, Daniel 2006-01-01 Weekly multimicronutrient supplementation was initiated as an appropriate intervention to protect poor urban populations from anemia. To identify the lessons learned from the Integrated Food Security Program (Programa Integrado de Seguridad Alimentaria [PISA]) weekly multimicronutrient supplementation program implemented in poor urban populations of Chiclayo, Peru. Data were collected from a 12-week program in which multimicronutrient supplements were provided weekly to women and adolescent girls 12 through 44 years of age and children under 5 years of age. A baseline survey was first conducted. Within the weekly multimicronutrient supplementation program, information was collected on supplement distribution, compliance, biological effectiveness, and cost. Supplementation, fortification, and dietary strategies can be integrated synergistically within a micronutrient intervention program. 
To ensure high cost-effectiveness of a weekly multimicronutrient supplementation program, the following conditions need to be met: the program should be implemented twice a year for 4 months; the program should be simultaneously implemented at the household (micro), community (meso), and national (macro) levels; there should be governmental participation from health and other sectors; and there should be community and private sector participation. Weekly multimicronutrient supplementation programs are cost effective options in urban areas with populations at low risk of energy deficiency and high risk of micronutrient deficiencies. 4. Dynamic programming for Integrated Emission Management in diesel engines NARCIS (Netherlands) Schijndel, J. van; Donkers, M.C.F.; Willems, F.P.T.; Heemels, W.P.M.H. 2014-01-01 Integrated Emission Management (IEM) is a supervisory control strategy that aims at minimizing the operational costs of diesel engines with an aftertreatment system, while satisfying emission constraints imposed by legislation. In previous work on IEM, a suboptimal real-time implementable solution 5. 20 CFR 404.1503a - Program integrity. Science.gov (United States) 2010-04-01 ... is currently revoked or suspended by any State licensing authority pursuant to adequate due process procedures for reasons bearing on professional competence, professional conduct, or financial integrity; or who, until a final determination is made, has surrendered such a license while formal disciplinary... 6. 75 FR 5244 - Pipeline Safety: Integrity Management Program for Gas Distribution Pipelines; Correction Science.gov (United States) 2010-02-02 ... Management Program for Gas Distribution Pipelines; Correction AGENCY: Pipeline and Hazardous Materials Safety... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Regulations to require operators of gas distribution pipelines to develop and implement integrity management... 7. 
10 CFR 70.62 - Safety program and integrated safety analysis.
Science.gov (United States)
2010-01-01
...; (iv) Potential accident sequences caused by process deviations or other events internal to the... have experience in nuclear criticality safety, radiation safety, fire safety, and chemical process... this safety program; namely, process safety information, integrated safety analysis, and management...

8. The Transuranic Waste Program's integration and planning activities and the contributions of the TRU partnership
International Nuclear Information System (INIS)
Harms, T.C.; O'Neal, W.; Petersen, C.A.; McDonald, C.E.
1994-02-01
The Technical Support Division, EM-351, manages the integration and planning activities of the Transuranic Waste Program. The Transuranic Waste Program manager provides transuranic waste policy, guidance, and issue resolution to Headquarters and the Operations Offices. In addition, the program manager is responsible for developing and implementing an integrated, long-range waste management plan for the transuranic waste system. A steering committee, a core group of support contractors, and numerous interface working groups support the efforts of the program manager. This paper provides an overview of the US Department of Energy's transuranic waste integration activities and a long-range planning process that includes internal and external stakeholder participation. It discusses the contributions and benefits provided by the Transuranic Partnership, most significantly, the integration activities and the body of data collected and assembled by the Partnership.

9. Glutaminolysis and Fumarate Accumulation Integrate Immunometabolic and Epigenetic Programs in Trained Immunity
NARCIS (Netherlands)
Arts, R.J.; Novakovic, B.; Horst, R.; Carvalho, A.; Bekkering, S.; Lachmandas, E.; Rodrigues, F.; Silvestre, R.; Cheng, S.C.; Wang, S.; Habibi, E.; Goncalves, L.G.; Mesquita, I.; Cunha, C.; Laarhoven, A. van; Veerdonk, F.L. van de; Williams, D.L.; Meer, J.W. van der; Logie, C.; O'Neill, L.A.; Dinarello, C.A.; Riksen, N.P.; Crevel, R. van; Clish, C.; Notebaart, R.A.; Joosten, L.A.; Stunnenberg, H.G.; Xavier, R.J.; Netea, M.G.
2016-01-01
Induction of trained immunity (innate immune memory) is mediated by activation of immune and metabolic pathways that result in epigenetic rewiring of cellular functional programs. Through network-level integration of transcriptomics and metabolomics data, we identify glycolysis, glutaminolysis, and

10. Mismatch or cumulative stress: Toward an integrated hypothesis of programming effects
NARCIS (Netherlands)
Nederhof, Esther; Schmidt, Mathias V.
2012-01-01
This paper integrates the cumulative stress hypothesis with the mismatch hypothesis, taking into account individual differences in sensitivity to programming. According to the cumulative stress hypothesis, individuals are more likely to suffer from disease as adversity accumulates. According to the

11. Integrating declarative knowledge programming styles and tools for building expert systems
Energy Technology Data Exchange (ETDEWEB)
Barbuceanu, M; Trausan-Matu, S; Molnar, B
1987-01-01
The XRL system reported in this paper is an integrated knowledge programming environment whose major research theme is the investigation of declarative knowledge programming styles and features, and of the way they can be effectively integrated and used to support AI programming. This investigation is carried out in the context of the structured-object representation paradigm, which provides the glue keeping XRL components together. The paper describes several declarative programming styles and associated support tools available in XRL.
These include an instantiation system supporting a generalized view of the ubiquitous frame installation process; a description-based programming system providing a novel declarative programming style, which embeds a mathematically oriented description language in the structured-object environment, together with a transformational interpreter for using it; a semantics-oriented programming framework offering a specific semantic-construct-based approach that supports maintenance and evolution; and a self-description and self-generation tool which applies the latter approach to XRL itself. 29 refs., 16 figs.

12. Glass sampling program during DWPF Integrated Cold Runs
International Nuclear Information System (INIS)
Plodinec, M.J.
1990-01-01
The described glass sampling program is designed to achieve two objectives: to demonstrate the Defense Waste Processing Facility's (DWPF) ability to control and verify the radionuclide release properties of the glass product; and to confirm DWPF's readiness to obtain glass samples during production, and SRL's readiness to analyze and test those samples remotely. The DWPF strategy for control of the radionuclide release properties of the glass product, and verification of its acceptability, are described in this report. The basic approach of the test program is then defined.

13. Integrated predictive maintenance program vibration and lube oil analysis: Part I - history and the vibration program
Energy Technology Data Exchange (ETDEWEB)
Maxwell, H.
1996-12-01
This paper is the first of two papers which describe the Predictive Maintenance Program for rotating machines at the Palo Verde Nuclear Generating Station. The organization has recently been restructured and significant benefits have been realized by the interaction, or "synergy", between the Vibration Program and the Lube Oil Analysis Program. This paper starts with the oldest part of the program - the Vibration Program - and discusses the evolution of the program to its current state. The "Vibration" view of the combined program is then presented.

14. Integrated predictive maintenance program vibration and lube oil analysis: Part I - history and the vibration program
International Nuclear Information System (INIS)
Maxwell, H.
1996-01-01
This paper is the first of two papers which describe the Predictive Maintenance Program for rotating machines at the Palo Verde Nuclear Generating Station. The organization has recently been restructured and significant benefits have been realized by the interaction, or "synergy", between the Vibration Program and the Lube Oil Analysis Program. This paper starts with the oldest part of the program - the Vibration Program - and discusses the evolution of the program to its current state. The "Vibration" view of the combined program is then presented.

15. Integrated Worker Health Protection and Promotion Programs: Overview and Perspectives on Health and Economic Outcomes
Science.gov (United States)
Pronk, Nicolaas P.
2014-01-01
Objective: To describe integrated worker health protection and promotion (IWHPP) program characteristics, to discuss the rationale for integration of OSH and WHP programs, and to summarize what is known about the impact of these programs on health and economic outcomes. Methods: A descriptive assessment of the current state of the IWHPP field and a review of studies on the effectiveness of IWHPP programs on health and economic outcomes. Results: Sufficient evidence of effectiveness was found for IWHPP programs when health outcomes are considered. Impact on productivity-related outcomes is considered promising, but inconclusive, whereas insufficient evidence was found for health care expenditures.
Conclusions: Existing evidence supports an integrated approach in terms of health outcomes, but will benefit significantly from research designed to support the business case for employers of various company sizes and industry types. PMID:24284747

16. Clinical capabilities of graduates of an outcomes-based integrated medical program
Directory of Open Access Journals (Sweden)
Scicluna Helen A
2012-06-01
Full Text Available. Abstract. Background: The University of New South Wales (UNSW) Faculty of Medicine replaced its old content-based curriculum with an innovative new 6-year undergraduate-entry outcomes-based integrated program in 2004. This paper is an initial evaluation of the perceived and assessed clinical capabilities of recent graduates of the new outcomes-based integrated medical program compared to benchmarks from traditional content-based or process-based programs. Method: Self-perceived capability in a range of clinical tasks and assessment of medical education as preparation for hospital practice were evaluated in recent graduates after 3 months working as junior doctors. Responses of the 2009 graduates of UNSW's new outcomes-based integrated medical education program were compared to those of the 2007 graduates of UNSW's previous content-based program, to published data from other Australian medical schools, and to hospital-based supervisor evaluations of their clinical competence. Results: Three months into internship, graduates from UNSW's new outcomes-based integrated program rated themselves to have good clinical and procedural skills, with ratings that indicated significantly greater capability than graduates of the previous UNSW content-based program.
New program graduates rated themselves significantly more prepared for hospital practice in the confidence (reflective practice), prevention (social aspects of health), interpersonal skills (communication), and collaboration (teamwork) subscales than old program students, and significantly better than or equivalent to published benchmarks of graduates from other Australian medical schools. Clinical supervisors rated new program graduates highly capable for teamwork, reflective practice and communication. Conclusions: Medical students from an outcomes-based integrated program graduate with excellent self-rated and supervisor-evaluated capabilities in a range of clinically-relevant outcomes. The program

17. FY-94 buried waste integrated demonstration program report
International Nuclear Information System (INIS)
1994-01-01
The Buried Waste Integrated Demonstration (BWID) supports the applied research, development, demonstration, and evaluation of a multitude of advanced technologies. These technologies are being integrated to form a comprehensive remediation system for the effective and efficient remediation of buried waste. These efforts are identified and coordinated in support of the U.S. Department of Energy (DOE) Environmental Restoration and Waste Management (ER/WM) needs and objectives. This document summarizes previous demonstrations and describes the FY-94 BWID technology development and demonstration activities. Sponsored by the DOE Office of Technology Development (OTD), BWID works with universities and private industry to develop these technologies, which are being transferred to the private sector for use nationally and internationally. A public participation policy has been established to provide stakeholders with timely and accurate information and meaningful opportunities for involvement in the technology development and demonstration process.

18. [The development of an integrated suicide-violence prevention program for adolescents].
Science.gov (United States)
Park, Hyun Sook
2008-08-01
The purpose of this study was to develop an integrated suicide-violence prevention program for adolescents. Another purpose was to evaluate the effects of the integrated suicide-violence prevention program on self-esteem, parent-child communication, aggression, and suicidal ideation in adolescents. The study employed a quasi-experimental design. Participants for the study were high school students, 24 in the experimental group and 25 in the control group. Data was analyzed using the SPSS/WIN 11.5 program with chi2 test, t-test, and 2-way ANOVA. Participants in the integrated suicide-violence prevention program reported increased self-esteem scores, which was significantly different from those in the control group. Participants in the integrated suicide-violence prevention program reported decreased aggression and suicidal ideation scores, which was significantly different from those in the control group. The integrated suicide-violence prevention program was effective in improving self-esteem and decreasing aggression and suicidal ideation for adolescents. Therefore, this approach is recommended as an integrated suicide-violence prevention strategy for adolescents.

19. A Developmental Mapping Program Integrating Geography and Mathematics.
Science.gov (United States)
Muir, Sharon Pray; Cheek, Helen Neely
Presented and discussed is a model which can be used by educators who want to develop an interdisciplinary map skills program in geography and mathematics. The model assumes that most children in elementary schools perform cognitively at Piaget's concrete operational stage, and that readiness for map skills can be assessed with Piagetian or…

20. How to Integrate International Financial Reporting Standards into Accounting Programs
Science.gov (United States)
Singer, Robert A.
2012-01-01
It is expected the SEC will require U.S. domestic companies to prepare and file their annual 10-Ks in accordance with international financial reporting standards (IFRS) by 2016. Given the probability that the FASB-IASB convergence project (i.e., the Norwalk Agreement) will continue subsequent to mandatory adoption, U.S. accounting programs will be…

1. The Next Step in Educational Program Budgets and Information Resource Management: Integrated Data Structures.
Science.gov (United States)
Jackowski, Edward M.
1988-01-01
Discusses the role that information resource management (IRM) plays in educational program-oriented budgeting (POB), and presents a theoretical IRM model. Highlights include design considerations for integrated data systems; database management systems (DBMS); and how POB data can be integrated to enhance its value and use within an educational…

2. Merging Regular and Special Education Teacher Preparation Programs: The Integrated Special Education-English Project (ISEP).
Science.gov (United States)
Miller, Darcy E.
1991-01-01
Describes the Integrated Special Education-English Project (ISEP) which facilitated the gradual integration of special education and English teacher preparation programs. A description of the ISEP model and a case study are included. The case study indicated student teachers who participated in the ISEP improved special education and English…

3. 49 CFR 192.911 - What are the elements of an integrity management program?
Science.gov (United States)
2010-10-01
...) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.911 What are the elements of an integrity management program...

4.
A Critical Agency Network Model for Building an Integrated Outreach Program
Science.gov (United States)
Kiyama, Judy Marquez; Lee, Jenny J.; Rhoades, Gary
2012-01-01
This study considers a distinct case of a college outreach program that integrates student affairs staff, academic administrators, and faculty across campus. The authors find that social networks and critical agency help to understand the integration of these various professionals, and offer a critical agency network model of enacting change.…

5. Progress and status of the integral fast reactor (IFR) development program
International Nuclear Information System (INIS)
Chang, Y.I.
1992-01-01
This paper discusses the Integral Fast Reactor (IFR) development program, in which the entire reactor system - reactor, fuel cycle, and waste process - is being developed and optimized at the same time as a single integral entity. Detailed discussions on the present status of the IFR technology development activities in the areas of fuels, pyroprocessing, safety, core design, and fuel cycle demonstration are also presented.

6. Enhancing creative problem solving in an integrated visual art and geometry program: A pilot study
NARCIS (Netherlands)
Schoevers, E.M.; Kroesbergen, E.H.; Pitta-Pantazi, D.
2017-01-01
This article describes a new pedagogical method, an integrated visual art and geometry program, which has the aim of increasing primary school students' creative problem solving and geometrical ability. This paper presents the rationale for integrating visual art and geometry education. Furthermore

7. MathModelica - An Extensible Modeling and Simulation Environment with Integrated Graphics and Literate Programming
OpenAIRE
Fritzson, Peter; Gunnarsson, Johan; Jirstrand, Mats
2002-01-01
MathModelica is an integrated interactive development environment for advanced system modeling and simulation. The environment integrates Modelica-based modeling and simulation with graphic design, advanced scripting facilities, integration of program code, test cases, graphics, documentation, mathematical typesetting, and symbolic formula manipulation provided via Mathematica. The user interface consists of a graphical Model Editor and Notebooks. The Model Editor is a graphical user interfa...

8. Let's get technical: Enhancing program evaluation through the use and integration of internet and mobile technologies.
Science.gov (United States)
Materia, Frank T; Miller, Elizabeth A; Runion, Megan C; Chesnut, Ryan P; Irvin, Jamie B; Richardson, Cameron B; Perkins, Daniel F
2016-06-01
Program evaluation has become increasingly important, and information on program performance often drives funding decisions. Technology use and integration can help ease the burdens associated with program evaluation by reducing the resources needed (e.g., time, money, staff) and increasing evaluation efficiency. This paper reviews how program evaluators, across disciplines, can apply internet and mobile technologies to key aspects of program evaluation, which consist of participant registration, participant tracking and retention, process evaluation (e.g., fidelity, assignment completion), and outcome evaluation (e.g., behavior change, knowledge gain). In addition, the paper focuses on ease of use, relative cost, and fit with populations. An examination of how these tools can be integrated to enhance data collection and program evaluation is discussed. Important limitations of and considerations for technology integration, including the level of technical skill, the cost needed to integrate various technologies, data management strategies, and ethical considerations, are highlighted.
Lastly, a case study of technology use in an evaluation conducted by the Clearinghouse for Military Family Readiness at Penn State is presented and illustrates how technology integration can enhance program evaluation. Copyright © 2016 Elsevier Ltd. All rights reserved.

9. Toward an Integrated Design, Inspection and Redundancy Research Program.
Science.gov (United States)
1984-01-01
William Creelman, National Marine Service, St. Louis, Missouri; William H. Silcox, Standard Oil Company of California, San Francisco, California.
...develop physical models and generic tools for analyzing the effects of redundancy, reserve strength, and residual strength on the system behavior of marine... probabilistic analyses to be applicable to real-world problems, this program needs to provide the deterministic physical models and generic tools upon

10. Evaluation Of Model Based Systems Engineering Processes For Integration Into Rapid Acquisition Programs
Science.gov (United States)
2016-09-01
Program on ecosystem change and society: an international research strategy for integrated social–ecological systems NARCIS (Netherlands) Carpenter, S.R; Folke, C.; Nordström, A.; Olsson, O.; Schultz, L.; Agarwal, B.; Balvanera, P.; Campbell, B.; Castilla, J.C.; Cramer, W.; DeFries, R.; Eyzaguirre, P.; Hughes, T.P.; Polasky, S.; Sanusi, Z.; Spierenburg, M.J. 2012-01-01 The Program on Ecosystem Change and Society (PECS), a new initiative within the ICSU global change programs, aims to integrate research on the stewardship of social-ecological systems, the services they generate, and the relationships among natural capital, human wellbeing, livelihoods, inequality 13. Impact of a Sustained Job-Embedded Professional Development Program on Classroom Technology Integration Science.gov (United States) Grashel, Mark A. 2014-01-01 The purpose of this single case study was to examine a grant-funded program of professional development (PD) at a small rural high school in Ohio. Evidence has shown that the current model of technology professional development in-service sessions has had little impact on classroom technology integration. This PD program focused on 21st Century… 14. Shifting Views: Exploring the Potential for Technology Integration in Early Childhood Education Programs Science.gov (United States) Dietze, Beverlie; Kashin, Diane 2013-01-01 Using technology with children in play-based early learning programs creates questions for some within the Early Childhood Education (ECE) community. This paper presents how two faculty who teach in ECE-related degree programs integrated educational technology into their teaching pedagogy as a way to model to their students how it can be used to… 15. Integration of Vocational and Academic Curricula through the NSF Advanced Technological Education Program (ATE). 
Science.gov (United States) Bailey, Thomas R.; Matsuzuka, Yukari A study examined the impact of the Advanced Technological Education (ATE) program on efforts in academic and vocational integration. A case study of 10 community colleges housing ATE-funded projects collected data through interviews with administrators, faculty, ATE program practitioners, and faculty and administrators at collaborating high… 16. An Examination of the Feasibility of Integrating Motivational Interviewing Techniques into FCS Cooperative Extension Programming Science.gov (United States) Radunovich, Heidi Liss; Ellis, Sarah; Spangler, Taylor 2017-01-01 Demonstrating program impact through behavior change is critical for the continued success of Family and Consumer Sciences (FCS) Cooperative Extension programming. However, the literature suggests that simply providing information to participants does not necessarily lead to behavior change. This study pilot tested the integration of Motivational… 17. Determination of Safety Performance Grade of NPP Using Integrated Safety Performance Assessment (ISPA) Program International Nuclear Information System (INIS) Chung, Dae Wook 2011-01-01 Since the beginning of 2000, the safety regulation of nuclear power plant (NPP) has been challenged to be conducted more reasonable, effective and efficient way using risk and performance information. In the United States, USNRC established Reactor Oversight Process (ROP) in 2000 for improving the effectiveness of safety regulation of operating NPPs. The main idea of ROP is to classify the NPPs into 5 categories based on the results of safety performance assessment and to conduct graded regulatory programs according to categorization, which might be interpreted as 'Graded Regulation'. However, the classification of safety performance categories is highly comprehensive and sensitive process so that safety performance assessment program should be prepared in integrated, objective and quantitative manner. 
Furthermore, the results of assessment should characterize and categorize the actual level of safety performance of specific NPP, integrating all the substantial elements for assessing the safety performance. In consideration of particular regulatory environment in Korea, the integrated safety performance assessment (ISPA) program is being under development for the use in the determination of safety performance grade (SPG) of a NPP. The ISPA program consists of 6 individual assessment programs (4 quantitative and 2 qualitative) which cover the overall safety performance of NPP. Some of the assessment programs which are already implemented are used directly or modified for incorporating risk aspects. The others which are not existing regulatory programs are newly developed. Eventually, all the assessment results from individual assessment programs are produced and integrated to determine the safety performance grade of a specific NPP 18. Trauma Center Based Youth Violence Prevention Programs: An Integrative Review. Science.gov (United States) Mikhail, Judy Nanette; Nemeth, Lynne Sheri 2016-12-01 Youth violence recidivism remains a significant public health crisis in the United States. Violence prevention is a requirement of all trauma centers, yet little is known about the effectiveness of these programs. Therefore, this systematic review summarizes the effectiveness of trauma center-based youth violence prevention programs. A systematic review of articles from MEDLINE, CINAHL, and PsychINFO databases was performed to identify eligible control trials or observational studies. Included studies were from 1970 to 2013, describing and evaluating an intervention, were trauma center based, and targeted youth injured by violence (tertiary prevention). The social ecological model provided the guiding framework, and findings are summarized qualitatively. Ten studies met eligibility requirements. 
Case management and brief intervention were the primary strategies, and 90% of the studies showed some improvement in one or more outcome measures. These results held across both social ecological level and setting: both emergency department and inpatient unit settings. Brief intervention and case management are frequent and potentially effective trauma center-based violence prevention interventions. Case management initiated as an inpatient and continued beyond discharge was the most frequently used intervention and was associated with reduced rearrest or reinjury rates. Further research is needed, specifically longitudinal studies using experimental designs with high program fidelity incorporating uniform direct outcome measures. However, this review provides initial evidence that trauma centers can intervene with the highest of risk patients and break the youth violence recidivism cycle. © The Author(s) 2015. 19. The effectiveness and cost-effectiveness of an integrated cardiometabolic risk assessment and treatment program in primary care (the INTEGRATE study). NARCIS (Netherlands) Stol, D.; Badenbroek, I.; Hollander, M.; Nielen, M.; Schellevis, F.; Wit, N. de 2014-01-01 The effectiveness and cost-effectiveness of an integrated cardiometabolic risk assessment and treatment program in primary care (the INTEGRATE study): a stepped-wedge randomized controlled trial protocol. Rationale: The increasing prevalence of cardiometabolic disease (CMD), including cardiovascular 20. US/DOE Man-Machine Integration program for liquid metal reactors International Nuclear Information System (INIS) D'Zmura, A.P.; Seeman, S.E. 1985-03-01 The United States Department of Energy (DOE) Man-Machine Integration program was started in 1980 as an addition to the existing Liquid Metal Fast Breeder Reactor safety base technology program. 
The overall goal of the DOE program is to enhance the operational safety of liquid metal reactors by optimum integration of humans and machines in the overall reactor plant system and by application of the principles of human-factors engineering to the design of equipment, subsystems, facilities, operational aids, procedures and environments. In the four years since its inception the program has concentrated on understanding the control process for Liquid Metal Reactors (LMRs) and on applying advanced computer concepts to this process. This paper describes the products that have been developed in this program, present computer-related programs, and plans for the future 1. DITTY - a computer program for calculating population dose integrated over ten thousand years International Nuclear Information System (INIS) Napier, B.A.; Peloquin, R.A.; Strenge, D.L. 1986-03-01 The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long term nuclear waste disposal sites resulting from the ground-water pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program designs, data file requirements, input preparation, output interpretations, sample problems, and program-generated diagnostic messages 2. Integrating electron microscopy into nanoscience and materials engineering programs Science.gov (United States) Cormia, Robert D.; Oye, Michael M.; Nguyen, Anh; Skiver, David; Shi, Meng; Torres, Yessica 2014-10-01 Preparing an effective workforce in high technology is the goal of both academic and industry training, and has been the engine that drives innovation and product development in the United States for over a century. 
During the last 50 years, technician training has comprised a combination of two-year academic programs, internships and apprentice training, and extensive On-the-Job Training (OJT). Recently, and especially in Silicon Valley, technicians have four-year college degrees, as well as relevant hands-on training. Characterization in general, and microscopy in particular, is an essential tool in process development, manufacturing and QA/QC, and failure analysis. Training for a broad range of skills and practice is challenging, especially for community colleges. Workforce studies (SRI/Boeing) suggest that even four year colleges often do not provide the relevant training and experience in laboratory skills, especially design of experiments and analysis of data. Companies in high-tech further report difficulty in finding skilled labor, especially with industry specific experience. Foothill College, in partnership with UCSC, SJSU, and NASA-Ames, has developed a microscopy training program embedded in a research laboratory, itself a partnership between university and government, providing hands-on experience in advanced instrumentation, experimental design and problem solving, with real-world context from small business innovators, in an environment called the collaboratory'. The program builds on AFM-SEM training at Foothill, and provides affordable training in FE-SEM and TEM through a cost recovery model. In addition to instrument and engineering training, the collaboratory also supports academic and personal growth through a multiplayer social network of students, faculty, researchers, and innovators. 3. 
Integration of a Multizone Airflow Model into a Thermalsimulation Program DEFF Research Database (Denmark) Jensen, Rasmus Lund; Sørensen, Karl Grau; Heiselberg, Per 2007-01-01 An existing computer model for dynamic hygrothermal analysis of buildings has been extended with a multizone airflow model based on loop equations to account for the coupled thermal and airflow in natural and hybrid ventilated buildings. In water distribution network and related fields loop...... a methodology adopted from water distribution network that automatically sets up the independent loops and is easy to implement into a computer program. Finally an example of verification of the model is given which demonstrates the ability of the models to accurately predict the airflow of a simple multizone... 4. Integrating packing and distribution problems and optimization through mathematical programming Directory of Open Access Journals (Sweden) Fabio Miguel 2016-06-01 Full Text Available This paper analyzes the integration of two combinatorial problems that frequently arise in production and distribution systems. One is the Bin Packing Problem (BPP problem, which involves finding an ordering of some objects of different volumes to be packed into the minimal number of containers of the same or different size. An optimal solution to this NP-Hard problem can be approximated by means of meta-heuristic methods. On the other hand, we consider the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW, which is a variant of the Travelling Salesman Problem (again a NP-Hard problem with extra constraints. Here we model those two problems in a single framework and use an evolutionary meta-heuristics to solve them jointly. Furthermore, we use data from a real world company as a test-bed for the method introduced here. 5. Unpacking vertical and horizontal integration: childhood overweight/obesity programs and planning, a Canadian perspective. 
Science.gov (United States) Maclean, Lynne M; Clinton, Kathryn; Edwards, Nancy; Garrard, Michael; Ashley, Lisa; Hansen-Ketchum, Patti; Walsh, Audrey 2010-05-17 Increasingly, multiple intervention programming is being understood and implemented as a key approach to developing public health initiatives and strategies. Using socio-ecological and population health perspectives, multiple intervention programming approaches are aimed at providing coordinated and strategic comprehensive programs operating over system levels and across sectors, allowing practitioners and decision makers to take advantage of synergistic effects. These approaches also require vertical and horizontal (v/h) integration of policy and practice in order to be maximally effective. This paper examines v/h integration of interventions for childhood overweight/obesity prevention and reduction from a Canadian perspective. It describes the implications of v/h integration for childhood overweight and obesity prevention, with examples of interventions where v/h integration has been implemented. An application of a conceptual framework for structuring v/h integration of an overweight/obesity prevention initiative is presented. The paper concludes with a discussion of the implications of vertical/horizontal integration for policy, research, and practice related to childhood overweight and obesity prevention multiple intervention programs. Both v/h integration across sectors and over system levels are needed to fully support multiple intervention programs of the complexity and scope required by obesity issues. V/h integration requires attention to system structures and processes. A conceptual framework is needed to support policy alignment, multi-level evaluation, and ongoing coordination of people at the front lines of practice. 
Using such tools to achieve integration may enhance sustainability, increase effectiveness of prevention and reduction efforts, decrease stigmatization, and lead to new ways to relate the environment to people and people to the environment for better health for children. 6. Unpacking vertical and horizontal integration: childhood overweight/obesity programs and planning, a Canadian perspective Directory of Open Access Journals (Sweden) Ashley Lisa 2010-05-01 Full Text Available Abstract Background Increasingly, multiple intervention programming is being understood and implemented as a key approach to developing public health initiatives and strategies. Using socio-ecological and population health perspectives, multiple intervention programming approaches are aimed at providing coordinated and strategic comprehensive programs operating over system levels and across sectors, allowing practitioners and decision makers to take advantage of synergistic effects. These approaches also require vertical and horizontal (v/h) integration of policy and practice in order to be maximally effective. Discussion This paper examines v/h integration of interventions for childhood overweight/obesity prevention and reduction from a Canadian perspective. It describes the implications of v/h integration for childhood overweight and obesity prevention, with examples of interventions where v/h integration has been implemented. An application of a conceptual framework for structuring v/h integration of an overweight/obesity prevention initiative is presented. The paper concludes with a discussion of the implications of vertical/horizontal integration for policy, research, and practice related to childhood overweight and obesity prevention multiple intervention programs. Summary Both v/h integration across sectors and over system levels are needed to fully support multiple intervention programs of the complexity and scope required by obesity issues.
V/h integration requires attention to system structures and processes. A conceptual framework is needed to support policy alignment, multi-level evaluation, and ongoing coordination of people at the front lines of practice. Using such tools to achieve integration may enhance sustainability, increase effectiveness of prevention and reduction efforts, decrease stigmatization, and lead to new ways to relate the environment to people and people to the environment for better health for children. 7. Integrating the Principles of Effective Intervention into Batterer Intervention Programming: The Case for Moving Toward More Evidence-Based Programming. Science.gov (United States) Radatz, Dana L; Wright, Emily M 2016-01-01 The majority of batterer intervention program (BIP) evaluations have indicated they are marginally effective in reducing domestic violence recidivism. Meanwhile, correctional programs used to treat a variety of offenders (e.g., substance users, violent offenders, and so forth) that adhere to the "principles of effective intervention" (PEI) have reported significant reductions in recidivism. This article introduces the PEI, the principles on which evidence-based practices in correctional rehabilitation are based, and identifies the degree to which they are currently integrated into BIPs. The case is made that batterer programs could be more effective if they incorporate the PEI. Recommendations for further integration of the principles into BIPs are also provided. © The Author(s) 2015. 8. Sensitivity Analysis of the Integrated Medical Model for ISS Programs Science.gov (United States) Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M. 2016-01-01 Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs.
Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. It is termed "partial" because adjustments are made for the linear effects of all the other input values when calculating the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral 9. Gore-Mbeki Binational Commission integrated housing program.
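The PRCC procedure described in record 8 can be sketched in a few lines of NumPy (a generic illustration, not the IMM team's actual code): rank-transform every input and the output, regress the ranks of each input and of the output on the ranks of the remaining inputs, and correlate the two residual series.

```python
import numpy as np

def _rank(v):
    """Rank-transform a 1-D sample to 0..n-1 (ties broken arbitrarily,
    which is adequate for continuous simulation samples)."""
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(len(v))
    return r

def prcc(X, y):
    """Partial rank correlation of each column of X with output y:
    correlate the residuals of rank(X[:, j]) and rank(y) after
    regressing both on the ranks of the other input columns."""
    n, k = X.shape
    Rx = np.column_stack([_rank(X[:, j]) for j in range(k)])
    ry = _rank(y)
    coeffs = np.empty(k)
    for j in range(k):
        # design matrix: intercept plus ranks of all inputs except j
        A = np.column_stack([np.ones(n), np.delete(Rx, j, axis=1)])
        res_x = Rx[:, j] - A @ np.linalg.lstsq(A, Rx[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        coeffs[j] = np.corrcoef(res_x, res_y)[0, 1]
    return coeffs
```

Because only ranks enter the calculation, a coefficient near +1 or -1 flags a strong monotone influence of that input on the output even when the underlying model is nonlinear, which is exactly why the abstract pairs PRCC with SRRC for nonlinear simulations.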
Final report Energy Technology Data Exchange (ETDEWEB) NONE 1998-12-31 This report documents the work done under Grant DE-FG36-97GO10209, Innovative Renewable Energy Technology Transfer Program. PEER Consultants, PC, and its subcontractor, PEER Africa (Pty.) Ltd., received an $88,000.00 grant to plan and build two energy-efficient homes in the black township of Gugulethu in Cape Town, South Africa. These demonstration homes were given to the people of South Africa as a gesture of goodwill by the US government as part of the Gore-Mbeki Binational Commission (BNC) agreements and cooperation. The BNC is the term used to describe the agreement to work together by the US and the South African governments for economic development of South Africa in the areas of energy, commerce, agriculture, housing, and transportation. The BNC was formed in 1995. This project under the auspices of the BNC started in September 1996. The DOE-funded portion was performed between January 11, 1997 and February 28, 1997. 10. Integrated Risk and Knowledge Management Program -- IRKM-P Science.gov (United States) Lengyel, David M. 2009-01-01 The NASA Exploration Systems Mission Directorate (ESMD) IRKM-P tightly couples risk management and knowledge management processes and tools to produce an effective "modern" work environment. IRKM-P objectives include: (1) to learn lessons from past and current programs (Apollo, Space Shuttle, and the International Space Station); (2) to generate and share new engineering design, operations, and management best practices through preexisting Continuous Risk Management (CRM) procedures and knowledge-management practices; and (3) to infuse those lessons and best practices into current activities. The conceptual framework of the IRKM-P is based on the assumption that risks highlight potential knowledge gaps that might be mitigated through one or more knowledge management practices or artifacts.
These same risks also serve as cues for collection of knowledge, particularly knowledge of technical or programmatic challenges that might recur. 11. Aging Evaluation Programs for Jet Transport Aircraft Structural Integrity Directory of Open Access Journals (Sweden) Borivoj Galović 2012-10-01 Full Text Available The paper deals with criteria and procedures in evaluation of timely preventive maintenance recommendations that will support continued safe operation of aging jet transports until their retirement from service. The active service life of commercial aircraft has increased in recent years as a result of low fuel cost, and increasing costs and delivery times for fleet replacements. Air transport industry consensus is that older jet transports will continue in service despite anticipated substantial increases in required maintenance. Design concepts, supported by testing, have worked well due to the system that is used to ensure flying safety. Continuing structural integrity by inspection and overhaul recommendations above the level contained in maintenance and service bulletins is an additional requirement in such cases. Airplane structural safety depends on the performance of all participants in the system, and the responsibility for safety cannot be delegated to a single participant. This system has three major participants: the manufacturers who design, build and support airplanes in service, the airlines who operate, inspect and maintain airplanes, and the airworthiness authorities who establish rules and regulations, approve the design and promote airline maintenance performance. 12. Developing an Integrative Treatment Program for Cancer-Related Fatigue Using Stakeholder Engagement - A Qualitative Study.
Science.gov (United States) Canella, Claudia; Mikolasek, Michael; Rostock, Matthias; Beyer, Jörg; Guckenberger, Matthias; Jenewein, Josef; Linka, Esther; Six, Claudia; Stoll, Sarah; Stupp, Roger; Witt, Claudia M 2017-11-01 Although cancer-related fatigue (CRF) has gained increased attention in the past decade, it remains difficult to treat. An integrative approach combining conventional and complementary medicine interventions seems highly promising. Treatment programs are more likely to be effective if the needs and interests of the people involved are well represented. This can be achieved through stakeholder engagement. The aim of the study was to develop an integrative CRF treatment program using stakeholder engagement and to compare it to an expert version. In a qualitative study, a total of 22 stakeholders (4 oncologists, 1 radiation-oncologist, 1 psycho-oncologist, 5 nurses/nurse experts, 9 patients, 1 patient family member, 1 representative of a local Swiss Cancer League) were interviewed either face-to-face or in a focus group setting. For data analysis, qualitative content analysis was used. With stakeholder engagement, the integrative CRF treatment program was adapted to usual care using a prioritizing approach and allowing more patient choice. Unlike the expert version, in which all intervention options were on the same level, the stakeholder engagement process resulted in a program with 3 different levels. The first level includes mandatory nonpharmacological interventions, the second includes nonpharmacological choice-based interventions, and the third includes pharmacological interventions for severe CRF. The resulting stakeholder based integrative CRF treatment program was implemented as clinical practice guideline at our clinic (Institute for Complementary and Integrative Medicine, University Hospital Zurich). Through the stakeholder engagement approach, we integrated the needs and preferences of people who are directly affected by CRF. 
This resulted in an integrative CRF treatment program with graded recommendations for interventions and therefore potentially greater sustainability in a usual care setting. 13. Risk assessment to an integrated planning model for UST programs International Nuclear Information System (INIS) Ferguson, K.W. 1993-01-01 The US Postal Service maintains the largest civilian fleet in the United States, totaling approximately 180,000 vehicles. To support the fleet's daily energy requirements, the Postal Service also operates one of the largest networks of underground storage tanks, nearly 7,500 nationwide. A program to apply risk assessment to planning, budget development and other management actions was implemented during September, 1989. Working closely with a consultant, the Postal Service developed regulatory and environmental risk criteria and weighting factors for a ranking model. The primary objective was to identify relative risks for each underground tank at individual facilities. Relative risks at each facility were determined to be central to prioritizing scheduled improvements to the tank network. The survey was conducted on 302 underground tanks in the Northeast Region of the US. An environmental and regulatory risk score was computed for each UST. By ranking the tanks according to their risk score, tanks were classified into management action categories including, but not limited to, underground tank testing, retrofit, repair, replacement and closure. 14. Regulatory analysis of the Underground Storage Tank-Integrated Demonstration Program International Nuclear Information System (INIS) Smith, E.H. 1992-01-01 The Underground Storage Tank-Integrated Demonstration (UST-ID) Program has been developed to identify, demonstrate, test, and evaluate technologies that will provide alternatives to the current underground storage tank remediation program.
The UST-ID Program is a national program that consists of five participating US Department of Energy (DOE) sites where technologies can be developed and ultimately demonstrated. Once these technologies are demonstrated, the UST-ID Program will transfer the developed technology system to industry (governmental or industrial) for application or back to Research and Development for further evaluation and modification, as necessary. In order to ensure that the UST-ID Program proceeds without interruption, it will be necessary to identify regulatory requirements along with associated permitting and notification requirements early in the technology development process. This document serves as a baseline for identifying certain federal and state regulatory requirements that may impact the UST-ID Program and the demonstration of any identified technologies. 15. The biology of lysine acetylation integrates transcriptional programming and metabolism Directory of Open Access Journals (Sweden) Mujtaba Shiraz 2011-03-01 Full Text Available Abstract The biochemical landscape of lysine acetylation has expanded from a small number of proteins in the nucleus to a multitude of proteins in the cytoplasm. Since the first report confirming acetylation of the tumor suppressor protein p53 by a lysine acetyltransferase (KAT), there has been a surge in the identification of new, non-histone targets of KATs. Added to the known substrates of KATs are metabolic enzymes, cytoskeletal proteins, molecular chaperones, ribosomal proteins and nuclear import factors. Emerging studies demonstrate that no fewer than 2000 proteins in any particular cell type may undergo lysine acetylation. As described in this review, our analyses of cellular acetylated proteins using DAVID 6.7 bioinformatics resources have facilitated organization of acetylated proteins into functional clusters integral to cell signaling, the stress response, proteolysis, apoptosis, metabolism, and neuronal development.
In addition, these clusters also depict association of acetylated proteins with human diseases. These findings not only support lysine acetylation as a widespread cellular phenomenon, but also impel questions to clarify the underlying molecular and cellular mechanisms governing target selectivity by KATs. Present challenges are to understand the molecular basis for the overlapping roles of KAT-containing co-activators, to differentiate between global versus dynamic acetylation marks, and to elucidate the physiological roles of acetylated proteins in biochemical pathways. In addition to discussing the cellular 'acetylome', a focus of this work is to present the widespread and dynamic nature of lysine acetylation and highlight the nexus that exists between epigenetic-directed transcriptional regulation and metabolism. 16. Development of engineering program for integrity evaluation of pipes with local wall thinned defects International Nuclear Information System (INIS) Park, Chi Yong; Lee, Sung Ho; Kim, Tae Ryong; Park, Sang Kyu 2008-01-01 Integrity evaluation of pipes with local wall thinning by erosion and corrosion is increasingly important in maintenance of wall-thinned carbon steel pipes in nuclear power plants. Though a few programs for integrity assessment of wall-thinned pipes have been developed in the domestic nuclear field, those are limited to straight pipes and to the methodology proposed in ASME Sec. XI Code Case N-597. Recently, an engineering program for integrity evaluation of local wall-thinning defects in all kinds of pipes (straight, elbow, reducer and branch) was developed successfully. The program was designated as PiTEP (Pipe Thinning Evaluation Program), a name registered as a trademark in the Korea Intellectual Property Office.
The developed program proceeds through sequential steps of four integrity evaluation methodologies, composed of the construction code, Code Case N-597, its engineering method, and two developed owner evaluation methods. As the PiTEP program is operated through a familiar GUI (Graphical User Interface), it can be conveniently used by plant engineers with only measured thickness data, basic operating conditions, and pipe data. 17. The MICA Case Conference Program at Tewksbury Hospital, Mass.: an integrated treatment model. Science.gov (United States) Clodfelter, Reynolds C; Albanese, Mark J; Baker, Gregg; Domoto, Katherine; Gui, Amy L; Khantzian, Edward J 2003-01-01 This report describes the MICA (Mentally Ill Chemically Abusing) Program at the Tewksbury Hospital campus in Tewksbury, Massachusetts. Several campus facilities collaborate in the MICA Program. Through Expert Case Conferences, principles of integrated psychosocial treatment with dual diagnosis patients are demonstrated. An expert clinician focuses on the interplay between psychological pain, characterological traits, defenses, and the patient's drug of choice. Patients who have participated in the program have reported positive experiences. The staff reported that the program has resulted in facility improvement in assessment and treatment of complex dual diagnosis patients. 19. Integrated agriculture programs to address malnutrition in northern Malawi Directory of Open Access Journals (Sweden) Rachel Bezner Kerr 2016-11-01 agriculture and food security, alongside involving male leaders were some of the reasons that respondents named for changed gender norms. Conclusions Participatory education that explicitly addresses hegemonic masculinities related to child nutrition, such as women’s roles in child care, can begin to change dominant gender norms.
Involving male leaders, participatory methods and integrating agriculture and food security concerns with nutrition appear to be key components in the context of agrarian communities. 20. Integrated agriculture programs to address malnutrition in northern Malawi. Science.gov (United States) Kerr, Rachel Bezner; Chilanga, Emmanuel; Nyantakyi-Frimpong, Hanson; Luginaah, Isaac; Lupafya, Esther 2016-11-28 were some of the reasons that respondents named for changed gender norms. Participatory education that explicitly addresses hegemonic masculinities related to child nutrition, such as women's roles in child care, can begin to change dominant gender norms. Involving male leaders, participatory methods and integrating agriculture and food security concerns with nutrition appear to be key components in the context of agrarian communities. 1. Performance planning and measurement for DOE EM-International Technology Integration Program. A report on a performance measurement development workshop for DOE's environmental management international technology integration program International Nuclear Information System (INIS) Jordan, G.B.; Reed, J.H.; Wyler, L.D. 1997-03-01 This report describes the process and results from an effort to develop metrics for program accomplishments for the FY 1997 budget submission of the U.S. Department of Energy Environmental Management International Technology Integration Program (EM-ITI). The four-step process included interviews with key EM-ITI staff; the development of a strawman program logic chart; an all-day facilitated workshop with EM-ITI staff, during which preliminary performance plans and measures were developed and refined; and a series of follow-on discussions and activities, including a cross-organizational project database. The effort helped EM-ITI to crystallize and develop a unified vision of their future which they can effectively communicate to their own management and their internal and external customers.
The effort sets the stage for responding to the Government Performance and Results Act. The metrics developed may be applicable to other international technology integration programs. Metrics were chosen in areas of eight general performance goals for 1997-1998: (1) number of forums provided for the exchange of information, (2) formal agreements signed, (3) new partners identified, (4) customers reached and satisfied, (5, 6) dollars leveraged by EM technology focus area and from foreign research, (7) number of foreign technologies identified for potential use in remediation of DOE sites, and (8) projects advanced through the pipeline. 2. Business process modeling for the Virginia Department of Transportation : a demonstration with the integrated six-year improvement program and the statewide transportation improvement program. Science.gov (United States) 2005-01-01 This effort demonstrates business process modeling to describe the integration of particular planning and programming activities of a state highway agency. The motivations to document planning and programming activities are that: (i) resources for co... 3. Business process modeling for the Virginia Department of Transportation : a demonstration with the integrated six-year improvement program and the statewide transportation improvement program : executive summary. Science.gov (United States) 2005-01-01 This effort demonstrates business process modeling to describe the integration of particular planning and programming activities of a state highway agency. The motivations to document planning and programming activities are that: (i) resources for co... 4. Implementing preventive chemotherapy through an integrated National Neglected Tropical Disease Control Program in Mali. Directory of Open Access Journals (Sweden) Massitan Dembélé Full Text Available BACKGROUND: Mali is endemic for all five targeted major neglected tropical diseases (NTDs).
As one of the five 'fast-track' countries supported with the United States Agency for International Development (USAID) funds, Mali started to integrate the activities of existing disease-specific national control programs on these diseases in 2007. The ultimate objectives are to eliminate lymphatic filariasis, onchocerciasis and trachoma as public health problems and to reduce morbidity caused by schistosomiasis and soil-transmitted helminthiasis through regular treatment to eligible populations, and the specific objectives were to achieve 80% program coverage and 100% geographical coverage yearly. The paper reports on the implementation of the integrated mass drug administration and the lessons learned. METHODOLOGY/PRINCIPAL FINDINGS: The integrated control program was led by the Ministry of Health and coordinated by the national NTD Control Program. The drug packages were designed according to the disease endemicity in each district and delivered through various platforms to eligible populations involving the primary health care system. Treatment data were recorded and reported by the community drug distributors. After a pilot implementation of integrated drug delivery in three regions in 2007, the treatment for all five targeted NTDs was steadily scaled up to 100% geographical coverage by 2009, and program coverage has since been maintained at a high level: over 85% for lymphatic filariasis, over 90% for onchocerciasis and soil-transmitted helminthiasis, around 90% in school-age children for schistosomiasis, and 76-97% for trachoma. Around 10 million people have received one or more drug packages each year since 2009. No severe cases of adverse effects were reported. CONCLUSIONS/SIGNIFICANCE: Mali has scaled up the drug treatment to national coverage through integrated drug delivery involving the primary health care system. The successes and lessons 5.
A Review of Ocean Management and Integrated Resource Management Programs from Around the World OpenAIRE Seaplan 2018-01-01 This draft report is one of several prepared under contract to the Massachusetts Ocean Partnership (MOP) to support the Massachusetts Executive Office of Energy and Environmental Affairs (EOEEA) in its development of the integrated coastal ocean management plan mandated by the Massachusetts Oceans Act of 2008. The purpose of this report was to inventory and review ocean management and integrated resource management programs from around the world, including the United States, Europe, Australia... 6. In Situ Remediation Integrated Program, Evaluation and assessment of containment technology International Nuclear Information System (INIS) Gerber, M.A.; Fayer, M.J. 1994-04-01 The In Situ Remediation Integrated Program (ISRIP) was established by the US Department of Energy (DOE) to advance the state of the art of innovative in situ remediation technologies to the point of demonstration and to broaden the applicability of these technologies to the widely varying site remediation requirements throughout the DOE complex. This program complements similar ongoing integrated demonstration programs being conducted at several DOE sites. The ISRIP has been conducting baseline assessments on in situ technologies to support program planning. Pacific Northwest Laboratory conducted an assessment and evaluation of subsurface containment barrier technology in support of ISRIP's Containment Technology Subprogram. This report summarizes the results of that activity and provides a recommendation for prioritizing areas in which additional research and development is needed to advance the technology to the point of demonstration in support of DOE's site restoration activities. 7.
Integrating Corporate Social Responsibility Programs into the Ethical Dimension of the Organization OpenAIRE Ibrian CARAMIDARU; Sabina IRIMIE 2011-01-01 The purpose of this paper is to indicate the need to integrate corporate social responsibility programs into the global ethical vision of organizations. Such an approach requires the definition of the corporation in relation to the moral values it assumes and the ways in which moral values occur within the organization. On this foundation, the authors examined the various implications that moral values have on the initiation and conduct of corporate social responsibility programs. 8. Analysis to develop a program for energy-integrated farm systems Energy Technology Data Exchange (ETDEWEB) Eakin, D.E.; Clark, M.A.; Inaba, L.K.; Johnson, K.I. 1981-09-01 A program to use renewable energy resources and possibly develop decentralization of energy systems for agriculture is discussed. The purpose of the research presented is to establish the objective of the program and identify guidelines for program development. The program's objective is determined by: (1) an analysis of the technologies that could be utilized to transform renewable farm resources to energy by the year 2000, (2) the quantity of renewable farm resources that are available, and (3) current energy-use patterns. Individual research, development, and demonstration projects are fit into a national program of energy-integrated farm systems on the basis of: (1) market need, (2) conversion potential, (3) technological opportunities, and (4) acceptability.
Quantification of these factors for the purpose of establishing program guidelines is conducted using the following precepts: (1) market need is identified by current use of energy for agricultural production; (2) conversion potential is determined by the availability of renewable resources; and (3) technological opportunities are determined by the state-of-the-art methods, techniques, and processes that can convert renewable resources into farm energy. Each of these factors is analyzed in Chapters 2 to 4. Chapter 5 draws on the analysis of these factors to establish the objective of the program and identify guidelines for the distribution of program funds. Chapter 6 then discusses the acceptability of integrated farm systems, which cannot be quantified like the other factors. 9. Working together – integration of information literacy in educational programs at Blekinge Institute of Technology Directory of Open Access Journals (Sweden) 2013-12-01 The library can, together with the Schools, create and offer IL modules adapted to the educational programs. Today, IL education at BTH is quite extensive, but also irregular and highly dependent on contacts with individual teachers, which makes IL education vulnerable. In order to bring this problem to light, and inspired by the Borås model (presented at Creating Knowledge VI), as well as Sydostmodellen, the library at BTH contacted the Board of Education during the winter of 2012, and presented a plan on how the library and Schools at BTH could cooperate in order to integrate IL education within all educational programs. Suggestions regarding content, extent, progression, timing, assessment and learning outcomes of the IL education are the focal point of the presented plan. As the first result of the proposal, the library has been commissioned by the BTH Quality Assurance Council to review the situation regarding IL education at BTH together with the educational program directors.
In cooperation with the programs, the library should also make a plan for each program on how to integrate IL education as a part of generic skills. At the conference, the following themes were addressed and discussed during our presentation: sustainability of IL education, collaboration within the academy regarding IL education and how integration of IL education at university educational programs is reflected in research on IL in general. 10. Integrated safety assessment report, Haddam Neck Plant (Docket No. 50-213): Integrated Safety Assessment Program: Draft report International Nuclear Information System (INIS) 1987-07-01 The integrated assessment is conducted on a plant-specific basis to evaluate all licensing actions, licensee-initiated plant improvements and selected unresolved generic/safety issues to establish implementation schedules for each item. Procedures allow for a periodic updating of the schedules to account for licensing issues that arise in the future. The Haddam Neck Plant is one of two plants being reviewed under the pilot program. This report indicates how 82 topics selected for review were addressed, and presents the staff's recommendations regarding the corrective actions to resolve the 82 topics and other actions to enhance plant safety. 135 refs., 4 figs., 5 tabs. 11. A Integracao de Ensino das Ciencias da Saude (An Integrated Medical Education Program [in Brazil]). Science.gov (United States) Pourchet-Campos, M. A.; Guimaraes Junior, Paulino At the Sixth Annual Reunion of the Brazilian Association of Medical Schools (VI Reuniao Anual da Associacao Brasileira de Escolas Medicas) leaders in the Brazilian medical profession proposed an integrated educational program for training students in the fields of medicine and public health. Under Brazil's present system of education, all…
Work-Integrated Learning Process in Tourism Training Programs in Vietnam: Voices of Education and Industry Science.gov (United States) Khuong, Cam Thi Hong 2016-01-01 This paper addresses the work-integrated learning (WIL) initiative embedded in selected tourism training programs in Vietnam. The research was grounded on the framework of stakeholder ethos. Drawing on tourism training curriculum analysis and interviews with lecturers, institutional leaders, industry managers and internship supervisors, this study… 13. Advancing the US Department of Energy's Technologies through the Underground Storage Tank: Integrated Demonstration Program International Nuclear Information System (INIS) Gates, T.E. 1993-01-01 The principal objective of the Underground Storage Tank -- Integrated Demonstration Program is the demonstration and continued development of technologies suitable for the remediation of waste stored in underground storage tanks. The Underground Storage Tank Integrated Demonstration Program is the most complex of the integrated demonstration programs established under the management of the Office of Technology Development. The Program has the following five participating sites: Oak Ridge, Idaho, Fernald, Savannah River, and Hanford. 
Activities included within the Underground Storage Tank -- Integrated Demonstration are (1) characterizing radioactive and hazardous waste constituents, (2) determining the need and methodology for improving the stability of the waste form, (3) determining the performance requirements, (4) demonstrating barrier performance by instrumented field tests, natural analog studies, and modeling, (5) determining the need and method for destroying and stabilizing hazardous waste constituents, (6) developing and evaluating methods for retrieving, processing (pretreatment and treatment), and storing the waste on an interim basis, and (7) defining and evaluating waste packages, transportation options, and ultimate closure techniques including site restoration. The eventual objective is the transfer of new technologies as a system to full-scale remediation at the US Department of Energy complexes and sites in the private sector 14. 7 CFR 4290.1940 - Integration of this part with other regulations applicable to USDA's programs. Science.gov (United States) 2010-01-01 ... Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL BUSINESS INVESTMENT COMPANY ("RBIC") PROGRAM Miscellaneous § 4290.1940 Integration of this... extent applicable to this part, the Secretary will comply with subpart D of 7 CFR part 1900 and RD... 15. Vertical and Horizontal Integration of Laboratory Curricula and Course Projects across the Electronic Engineering Technology Program Science.gov (United States) Zhan, Wei; Goulart, Ana; Morgan, Joseph A.; Porter, Jay R. 2011-01-01 This paper discusses the details of the curricular development effort with a focus on the vertical and horizontal integration of laboratory curricula and course projects within the Electronic Engineering Technology (EET) program at Texas A&M University. Both software and hardware aspects are addressed. A common set of software tools is… 16.
Cultivating the Academic Integrity of Urban Adolescents with Ethical Philosophy Programming Science.gov (United States) Seider, Scott; Novick, Sarah; Gomez, Jessica 2013-01-01 This mixed-methods study considered the effects of ethical philosophy programming at a high-performing, high-poverty urban high school upon the academic integrity of participating adolescents ("n" = 279). Analyses of pre-post survey data revealed that participating adolescents reported significantly higher levels of academic integrity… 17. Macromodels of digital integrated circuits for program packages of circuit engineering design Science.gov (United States) Petrenko, A. I.; Sliusar, P. B.; Timchenko, A. P. 1984-04-01 Various aspects of the generation of macromodels of digital integrated circuits are examined, and their effective application in program packages of circuit engineering design is considered. Three levels of macromodels are identified, and the application of such models to the simulation of circuit outputs is discussed. 18. BASIC Programming for the Integration of Money, Demand Deposits Creation, and the Hicksian-Keynesian Model. Science.gov (United States) Tom, C. F. Joseph Money, banking, and macroeconomic textbooks traditionally present the topics of money, the creation of demand deposits by depository institutions, and the Hicksian-Keynesian Theory of Income and Interest separately, as if they were unrelated. This paper presents an integrated approach to those subjects using computer programs written in BASIC, the… 19. Integrating Social Responsibility into an Entrepreneurship Education Program: A Case Study Science.gov (United States) Maistry, Suriamurthee Moonsamy; Ramdhani, Jugathambal 2010-01-01 Entrepreneurship education in South Africa is often presented as a neutral discipline. Yet fundamental to any entrepreneurship education program should be the integration of key issues, such as ethics, values and social responsibility. 
This paper reports on a study that set out to explore student teachers' experiences of engaging in an… 20. 78 FR 22225 - Solicitation of Input From Stakeholders Regarding the Integrated Forest Products Research Program Science.gov (United States) 2013-04-15 ... Stakeholders Regarding the Integrated Forest Products Research Program AGENCY: National Institute of Food and Agriculture, Department of Agriculture. ACTION: Notice of Request for Stakeholder Input. SUMMARY: As part of... wood utilization issues, NIFA is soliciting stakeholder input that will allow it to identify the needs... 1. Curriculum-Integrated Information Literacy (CIIL) in a Community College Nursing Program: A Practical Model Science.gov (United States) Argüelles, Carlos 2016-01-01 This article describes a strategy to integrate information literacy into the curriculum of a nursing program in a community college. The model is articulated in four explained phases: preparatory, planning, implementation, and evaluation. It describes a collaborative process encouraging librarians to work with nursing faculty, driving students to… 2. Preparing Teachers for Technology Integration: Programs, Competencies, and Factors from the Literature Science.gov (United States) Oliver, Kevin; Townsend, Latricia 2013-01-01 This article presents a review of recent literature about preparing teachers for technology integration. The review found that six types of training programs are commonly implemented: pre-service training, long-term courses, short-term workshops and institutes, coaching/mentoring, learning communities, and product/assessment approaches. The review… 3. Dependent failure analysis research for the US NRC Risk Methods Integration and Evaluation Program International Nuclear Information System (INIS) Bohn, M.P.; Stack, D.W.; Campbell, D.J.; Rooney, J.J.; Rasmuson, D.M.
1985-01-01 The Risk Methods Integration and Evaluation Program (RMIEP), which is being performed for the Nuclear Regulatory Commission by Sandia National Laboratories, has the goals of developing new risk assessment methods and integrating the new and existing methods in a uniform procedure for performing an in-depth probabilistic risk assessment (PRA) with consistent levels of analysis for internal, external, and dependent failure scenarios. An important part of RMIEP is the recognition of the crucial importance of dependent common cause failures (CCFs) and the pressing need to develop effective methods for analyzing CCFs as part of a PRA. The NRC-sponsored Integrated Dependent Failure Methodology Program at Sandia is addressing this need. This paper presents a preliminary approach for analyzing CCFs as part of a PRA. A nine-step procedure for efficiently screening and analyzing dependent failure scenarios is presented, and each step is discussed 4. The integrated in situ testing program for the Waste Isolation Pilot Plant (WIPP) International Nuclear Information System (INIS) Matalucci, R.V. 1987-03-01 The US Department of Energy (DOE) is developing the Waste Isolation Pilot Plant (WIPP) Project in southeastern New Mexico as a research and development (R and D) facility for examining the response of bedded (layered) salt to the emplacement of radioactive wastes generated from defense programs. The WIPP Experimental Program consists of a technology development program, including laboratory testing and theoretical analysis activities, and an in situ testing program that is being done 659 m underground at the project site. This experimental program addresses three major technical areas that concern (1) thermal/structural interactions, (2) plugging and sealing, and (3) waste package performance. 
To ensure that the technical issues involved in these areas are investigated with appropriate emphasis and timing, an in situ testing plan was developed to integrate the many activities and tasks associated with the technical issues of waste disposal. 5 refs., 4 figs 5. Implementation of an integrity management program in a crude oil pipeline system Energy Technology Data Exchange (ETDEWEB) Martinez, Maria; Tomasella, Marcelo [Oleoductos del Valle, General Roca (Argentina); Rossi, Juan; Pellicano, Adolfo [SINTEC S.A., Mar del Plata, Buenos Aires (Argentina) 2005-07-01 The implementation of an Integrity Management Program (IMP) in a crude oil pipeline system is focused on the accomplishment of two primary corporate objectives: to increase safety operation margins and to optimize available resources. A proactive work philosophy ensures the safe and reliable operation of the pipeline in accordance with current legislation. The Integrity Management Program is accomplished by means of an interdisciplinary team that defines the strategic objectives that complement and are compatible with the corporate strategic business plan. The implementation of the program is based on the analysis of the risks due to external corrosion, third party damage, design and operations, and the definition of appropriate mitigation, inspection and monitoring actions, which will ensure long-term integrity of the assets. By means of a statistical propagation model of the external defects reported by a high-resolution magnetic inspection tool (MFL), together with the information provided by corrosion sensors, field repair interventions, close internal surveys and operation data, projected defect depth, remaining strength, and failure probability distributions were obtained. From the analysis, feasible courses of action were established, including the inspection and repair plan, the internal inspection program and both corrosion monitoring and mitigation programs. (author) 6.
US Department of Energy Integrated Resource Planning Program: Accomplishments and opportunities Energy Technology Data Exchange (ETDEWEB) White, D.L. [Oak Ridge National Lab., TN (United States); Mihlmester, P.E. [Aspen Systems Corp., Oak Ridge, TN (United States) 1993-12-17 The US Department of Energy Integrated Resource Planning Program supports many activities and projects that enhance the process by which utilities assess demand and supply options and, subsequently, evaluate and select resources. The US Department of Energy program coordinates integrated resource planning in risk and regulatory analysis; utility and regional planning; evaluation and verification; information transfer/technological assistance; and demand-side management. Professional staff from the National Renewable Energy Laboratory, Oak Ridge National Laboratory, Lawrence Berkeley Laboratory, and Pacific Northwest Laboratories collaborate with peers and stakeholders, in particular, the National Association of Regulatory Utility Commissioners, and conduct research and activities for the US Department of Energy. Twelve integrated resource planning activities and projects are summarized in this report. The summaries reflect the diversity of planning and research activities supported by the Department. The summaries also reflect the high levels of collaboration and teaming that are required by the Program and practiced by the researchers. It is concluded that the Program is achieving its objectives by encouraging innovation and improving planning and decision making. Furthermore, as the Department continues to implement planned improvements in the Program, the Department is effectively positioned to attain its ambitious goals. 7. Integrated neuroscience program: an alternative approach to teaching neurosciences to chiropractic students. 
Science.gov (United States) He, Xiaohua; La Rose, James; Zhang, Niu 2009-01-01 Most chiropractic colleges do not offer independent neuroscience courses because of an already crowded curriculum. The Palmer College of Chiropractic Florida has developed and implemented an integrated neuroscience program that incorporates neurosciences into different courses. The goals of the program have been to bring neurosciences to students, excite students about the interrelationship of neuroscience and chiropractic, improve students' understanding of neuroscience, and help the students understand the mechanisms underpinning the chiropractic practice. This study provides a descriptive analysis on how the integrated neuroscience program is taught via students' attitudes toward neuroscience and the comparison of students' perceptions of neuroscience content knowledge at different points in the program. A questionnaire consisting of 58 questions regarding the neuroscience courses was conducted among 339 students. The questionnaire was developed by faculty members who were involved in teaching neuroscience and administered in the classroom by faculty members who were not involved in the study. Student perceptions of their neuroscience knowledge, self-confidence, learning strategies, and knowledge application increased considerably through the quarters, especially among the 2nd-year students. The integrated neuroscience program achieved several of its goals, including an increase in students' confidence, positive attitude, ability to learn, and perception of neuroscience content knowledge. The authors believe that such gains can expand student ability to interpret clinical cases and inspire students to become excited about chiropractic research. The survey provides valuable information for teaching faculty to make the course content more relevant to chiropractic students. 8. 
Planning integration FY 1995 Multi-Year Program Plan (MYPP)/Fiscal Year Work Plan (FYWP) International Nuclear Information System (INIS) 1994-09-01 This Multi-Year Program Plan (MYPP) for the Planning Integration Program, Work Breakdown structure (WBS) Element 1.8.2, is the primary management tool to document the technical, schedule, and cost baseline for work directed by the US Department of Energy (DOE), Richland Operations Office (RL). As an approved document, it establishes a binding agreement between RL and the performing contractors for the work to be performed. It was prepared by the Westinghouse Hanford Company (WHC) and the Pacific Northwest Laboratory (PNL). This MYPP provides a picture from fiscal year 1995 through FY 2001 for the Planning Integration Program. The MYPP provides a window of detailed information for the first three years. It also provides 'execution year' work plans. The MYPP provides summary information for the next four years, documenting the same period as the Activity Data Sheets 9. Planning integration FY 1995 Multi-Year Program Plan (MYPP)/Fiscal Year Work Plan (FYWP) Energy Technology Data Exchange (ETDEWEB) 1994-09-01 This Multi-Year Program Plan (MYPP) for the Planning Integration Program, Work Breakdown structure (WBS) Element 1.8.2, is the primary management tool to document the technical, schedule, and cost baseline for work directed by the US Department of Energy (DOE), Richland Operations Office (RL). As an approved document, it establishes a binding agreement between RL and the performing contractors for the work to be performed. It was prepared by the Westinghouse Hanford Company (WHC) and the Pacific Northwest Laboratory (PNL). This MYPP provides a picture from fiscal year 1995 through FY 2001 for the Planning Integration Program. The MYPP provides a window of detailed information for the first three years. It also provides execution year work plans. 
The MYPP provides summary information for the next four years, documenting the same period as the Activity Data Sheets. 10. Office of Technology Development integrated program for development of in situ remediation technologies International Nuclear Information System (INIS) Peterson, M. 1992-08-01 The Department of Energy's Office of Technology Development has instituted an integrated program focused on development of in situ remediation technologies. The development of in situ remediation technologies will focus on five problem groups: buried waste, contaminated soils, contaminated groundwater, containerized wastes and underground detonation sites. The contaminants that will be included in the development program are volatile and nonvolatile organics, radionuclides, inorganics and highly explosive materials as well as mixtures of these contaminants. The In Situ Remediation Integrated Program (ISR IP) has defined the fiscal year 1993 research and development technology areas for focusing activities, and they are described in this paper. These R&D topical areas include: nonbiological in situ treatment, in situ bioremediation, electrokinetics, and in situ containment 11. Away Rotations and Matching in Integrated Plastic Surgery Residency: Applicant and Program Director Perspectives. Science.gov (United States) Drolet, Brian C; Brower, Jonathan P; Lifchez, Scott D; Janis, Jeffrey E; Liu, Paul Y 2016-04-01 Although nearly all medical students pursuing integrated plastic surgery residency participate in elective rotations away from their home medical school, the value and costs of these "away" rotations have not been well studied. The authors surveyed all integrated plastic surgery program directors and all applicants in the 2015 National Residency Matching Program. Forty-two program directors and 149 applicants (64 percent and 70 percent response rate, respectively) completed the survey.
Applicants reported 13.7 weeks spent on plastic surgery rotations during medical school, including a mean of 9.2 weeks on away rotations. Average reported cost for away rotations was $3,591 per applicant. Both applicants and program directors most commonly reported "making a good impression" (44.6 percent and 36.6 percent, respectively) or finding a "good-fit" program (27.7 percent and 48.8 percent, respectively) as the primary goal for away rotations. Almost all applicants (91.1 percent) believed an away rotation made them more competitive for matching to a program at which they rotated. Program directors ranked a strong away rotation performance as the most important residency selection criterion. Twenty-seven percent of postgraduate year-1 positions were filled by an away rotator, and an additional 17 percent were filled by a home medical student. Away rotations appear to be mutually beneficial for applicants and programs in helping to establish a good fit between students and training programs through an extended interaction with the students, residents, and faculty. In addition, making a good impression on a senior elective rotation (home or away) may improve an applicant's chance of matching to a residency program. 12. A Program to Protect Integrity of Body-Mind-Spirit: Mindfulness Based Stress Reduction Program Directory of Open Access Journals (Sweden) Oznur Korukcu 2015-03-01 Full Text Available Mindfulness-based applications allow health care staff to understand themselves as well as other individuals. Awareness-based applications are not only stress reduction techniques but also a way of understanding life and human existence, and they should not be used only to cope with disease. Emotions, thoughts and judgments can sometimes give direction to people's lives. Reaching a life without judgment and negative feelings brings with it a peaceful and happy life.
Mindfulness-based stress reduction exercises may help people enjoy the present moment, cope with challenges, stress, and disease, and accept negative life experiences rather than question their reasons. About three decades ago, Kabat-Zinn conducted the first investigation of the Mindfulness-Based Stress Reduction program, which is now commonly used all around the world. The 8-week program, which contains mindful living exercises (such as eating, walking, and cooking), yoga, body scan, and meditation practices, requires doing daily life activities and meditation with attention, openness, and acceptance. The aim of this review article is to give information about the Mindfulness-Based Stress Reduction program and to emphasize its importance. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2015; 7(1): 68-80] 13. Analisis Efektivitas Perangkat pada Program Desa Broadband Terpadu [Analysis of Device Effectiveness in Integrated Broadband Village Program] Directory of Open Access Journals (Sweden) Hilarion Hamjen 2016-12-01 Full Text Available The Indonesian government has a strong commitment to supporting the growth of e-commerce and the digital economy in Indonesia, to attain Indonesia's vision of becoming, by 2020, the largest digital-economy nation in Southeast Asia. Fundamentally, national connectivity support is needed from the central level down to the local level; one source of this support is the KPU/USO universal-service effort known as the DBT (Desa Broadband Terpadu, Integrated Broadband Village) program. This research determines the effectiveness of the devices deployed in phase 1 of the DBT program and its correlation to connectivity, using the importance-performance analysis method and the Chi-square statistical test. The results show that device effectiveness, covering the condition, function, maintenance, and utilization variables, averages 84.5 percent; at this level of effectiveness, none of these variables has a significant correlation to connectivity. 14. Program NICOLET to integrate energy loss in superconducting coils [in FORTRAN for the CDC-6600] Energy Technology Data Exchange (ETDEWEB) Vogel, H.F. 1978-08-01 A voltage pickup coil, inductively coupled to the magnetic field of the superconducting coil under test, is connected so that its output may be compared with the terminal voltage of the coil under test. The integrated voltage difference is indicative of the resistive volt-seconds. When multiplied by the main coil current, the volt-seconds yield the loss. In other words, a hysteresis loop is obtained if the integrated voltage difference phi = ∫ΔV dt is plotted as a function of the coil current, i. First, time functions of the two signals phi(t) and i(t) are recorded on a dual-trace digital oscilloscope, and these signals are then recorded on magnetic tape. On a CDC-6600, the recorded information is decoded and plotted, and the hysteresis loops are integrated by the set of FORTRAN programs NICOLET described in this report. 4 figures. 15.
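The NICOLET abstract (entry 14 above) reduces loss measurement to a numerical quadrature: the flux signal phi = ∫ΔV dt is plotted against coil current i, and the area enclosed by the resulting hysteresis loop gives the energy lost per cycle. A minimal Python sketch of that loop integration follows; it is an illustrative stand-in, not the CDC-6600 FORTRAN code itself, and the function name and synthetic loop data are invented for the example.

```python
import math
import numpy as np

def loop_loss(i, phi):
    """Enclosed area of a closed (i, phi) hysteresis loop via the
    trapezoidal rule, i.e. |contour integral of phi di|."""
    i = np.asarray(i, dtype=float)
    phi = np.asarray(phi, dtype=float)
    # differences of i around the closed contour (wraps back to i[0])
    di = np.diff(np.append(i, i[0]))
    # trapezoid midpoints of phi for each segment, with wraparound
    phi_mid = 0.5 * (phi + np.roll(phi, -1))
    return abs(np.sum(phi_mid * di))

# synthetic elliptical loop whose exact enclosed area is pi*a*b
t = np.linspace(0.0, 2.0 * math.pi, 2000, endpoint=False)
a, b = 100.0, 0.05  # current and flux amplitudes, arbitrary units
area = loop_loss(a * np.cos(t), b * np.sin(t))
print(area)  # close to pi*a*b for this synthetic loop
```

With a measured record, the same function would be applied to the digitized phi(t) and i(t) samples of one full excitation cycle.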
Islam - Science Integration Approach in Developing Chemistry Individualized Education Program (IEP) for Students with Disabilities Directory of Open Access Journals (Sweden) Jamil Suprihatiningrum 2017-11-01 Full Text Available The paper is based on research that tries to explore, explain and describe the Islam - science integration approach to developing an Individualized Education Program (IEP) for students with disabilities in chemistry lessons. As a qualitative case study, this paper is aimed at investigating how the Islam - science integration approach can underpin development of the IEP for chemistry. Participants were recruited purposively and data were collected by interviews, document analysis, and expert assessment (i.e., material experts, inclusive education experts, media experts, chemistry teachers and support teachers), then analyzed using content analysis. The result shows the Islam - science integration approach can be a foundation for developing the chemistry IEP by seeking support from the verses of the Qur'an and corresponding hadiths. Although almost all the subject matter in chemistry can be integrated with Islamic values, this study developed only two content areas, namely the Periodic System of Elements and Reaction Rate. 16. Integration of Health Systems Management Bachelors Program graduates into the Israeli healthcare market. Science.gov (United States) Schwartz-Ilan, Dana; Goldberg, Avishay; Pliskin, Joseph S; Peled, Ronit; Shvarts, Shifra 2005-01-01 Ben-Gurion University (BGU) in Beer-Sheva opened a special program (B.A. degree) for training junior academic administrative personnel who can improve the quality of service in health care organizations through suitable and high-quality administration. The program, the first of its kind in Israel, has been in operation since 1994, providing 50 candidates for administrative positions within the health system per year.
The research goals of the project described in this paper were to examine the integration of 224 graduates of the undergraduate program in Health Systems Management (HSM) within the private and public health system in Israel, including employment trends and evaluation of the program in retrospect. Questionnaires were sent to all graduates of the program. Participants were requested to answer questions regarding their present place of employment and their satisfaction with their academic degree. The findings showed that the graduates of the undergraduate program in HSM have integrated well into the health system, but not as well as they could have. The graduates encountered difficulties in their absorption into management roles in the public health system and feel that the extent of their abilities has yet to be fully recognized and utilized by the system. 17. The Future of Nearshore Processes Research: U.S. Integrated Coastal Research Program Science.gov (United States) Elko, N.; Feddersen, F.; Foster, D. L.; Hapke, C. J.; Holman, R. A.; McNinch, J.; Mulligan, R. P.; Ozkan-Haller, H. T.; Plant, N. G.; Raubenheimer, B. 2016-02-01 The authors, representing the acting Nearshore Advisory Council, have developed an implementation plan for a U.S. Nearshore Research Program based on the 2015 Future of Nearshore Processes report that was authored by the nearshore community. The objectives of the plan are to link research programs across federal agencies, NGOs, industry, and academia into an integrated national program and to increase academic and NGO participation in federal agency nearshore processes research. A primary recommendation is interagency collaboration to build a research program that will coordinate and fund U.S. nearshore processes research across three broad research themes: 1) long-term coastal evolution due to natural and anthropogenic processes; 2) extreme events; and 3) physical, biological and chemical processes impacting human and ecosystem health.
The plan calls for a new program to be developed by an executive committee of federal agency leaders, NGOs, and an academic representative, created similarly to the existing NOPP program. This leadership will be established prior to the 2016 Ocean Sciences meeting and will have agreed on responsibilities and a schedule for development of the research program. To begin to understand the scope of today's U.S. coastal research investment, a survey was distributed to ten federal agency R&D program heads. Six of the ten agencies indicated that they fund coastal research, with a combined annual coastal research budget of nearly $100 million (NSF has not responded). The priorities of the three research themes were ranked nearly equally, and potential research support ranged from $15-19 million for each theme, with approximately $12 million as direct contribution to academic research. Beyond addressing our fundamental science questions, it is critical that the nearshore community stay organized to represent academic interests on the new executive committee. The program goal is the integration of academic, NGO, and federal agencies. 18. IPEP: The integrated performance evaluation program for the Department of Energy's Office of Environmental Management International Nuclear Information System (INIS) Lindahl, P.C.; Streets, W.E.; Bass, D.A. 1995-01-01 The quality of the analytical data being provided to DOE's Office of Environmental Management (EM) for environmental restoration activities and the extent to which these data meet the data quality objectives are critical in the decision-making process. One of several quality metrics that can be used in evaluating a laboratory is its performance in performance evaluation (PE) programs. In support of DOE's environmental restoration and waste management efforts, EM has been charged with developing and implementing a program to assess the performance of participating laboratories.
Argonne National Laboratory (ANL) and DOE's Environmental Measurements Laboratory (EML) and Radiological and Environmental Sciences Laboratory (RESL) have been collaborating on the development and implementation of a comprehensive Integrated Performance Evaluation Program (IPEP) for DOE-wide implementation. The IPEP will use results from existing inorganic, organic, and radiological PE programs when these are available and appropriate for the analytes and matrices being determined for DOE's EM activities. Existing programs include the U.S. Environmental Protection Agency's (EPA's) Contract Laboratory Program (CLP), the Water Supply (WS) and Water Pollution (WP) PE studies for inorganic and organic analytes, and DOE's Quality Assessment Program (QAP) for radiological analytes. In addition, DOE has begun the development of the Mixed Analyte Performance Evaluation Program (MAPEP) to address the needs of the DOE Complex. These PE programs provide a spectrum of matrices and analytes covering the various inorganic, organic, and low-level radiologic categories found in routine environmental and waste samples. These PE programs already provide some assessment of laboratory performance; IPEP will expand these assessments by evaluating historical performance, as well as results from multiple PE programs, thereby providing an enhanced usage of the PE program information 19. DOE's integrated low-level waste management program and strategic planning Energy Technology Data Exchange (ETDEWEB) Duggan, G. [Dept. of Energy, Washington, DC (United States). Office of Environmental Restoration and Waste Management; Hwang, J.
[Science Applications International Corp., Germantown, MD (United States) 1993-03-01 To meet DOE's commitment to operate its facilities in a safe, economic, and environmentally sound manner, and to comply with all applicable federal, state, and local rules, regulations, and agreements, DOE created the Office of Environmental Restoration and Waste Management (EM) in 1989 to focus efforts on controlling waste management and cleaning up contaminated sites. In the first few years of its existence, the Office of Waste Management (EM-30) concentrated on operational and corrective activities at the sites. In 1992, the Office of Waste Management began to apply an integrated approach to managing its various waste types. Consequently, DOE established the Low-Level Waste Management Program (LLWMP) to properly manage its complex-wide LLW in a consistent manner. The objective of the LLWMP is to build and operate an integrated, safe, and cost-effective program to meet the needs of waste generators. The program will be based on acceptable risk and sound planning, resulting in public confidence and support. Strategic planning of the program is under way and is expected to take two to three years before implementation of the integrated waste management approach. 20. A status report on the integral fast reactor fuels and safety program International Nuclear Information System (INIS) Pedersen, D.R.; Seidel, B.R. 1990-01-01 The integral fast reactor (IFR) is an advanced liquid-metal-cooled reactor (ALMR) concept being developed at Argonne National Laboratory. The IFR program is specifically responsible for the irradiation performance, advanced core design, safety analysis, and development of the fuel cycle for the US Department of Energy's ALMR program. The basic elements of the IFR concept are (a) metallic fuel, (b) liquid-sodium cooling, (c) modular, pool-type reactor configuration, and (d) an integral fuel cycle based upon pyrometallurgical processing.
The most significant safety aspects of the IFR program result from its unique fuel design, a ternary alloy of uranium, plutonium, and zirconium. This fuel is based on experience gained through > 25 yr of operation of the Experimental Breeder Reactor II (EBR-II) with a uranium alloy metallic fuel. The ultimate criterion for fuel pin design is overall integrity at the target burnup. The probability of core meltdown is remote; however, a theoretical possibility of core meltdown remains. The next major step in the IFR development program will be a full-scale pyroprocessing demonstration to be carried out in conjunction with EBR-II. The IFR fuel cycle closure based on pyroprocessing will also have a dramatic impact on waste management options and on actinide recycling. 1. 34 CFR 425.1 - What is the Demonstration Projects for the Integration of Vocational and Academic Learning Program? Science.gov (United States) 2010-07-01 ... 34 Education 3 2010-07-01 2010-07-01 false What is the Demonstration Projects for the Integration of Vocational and Academic Learning Program? 425.1 Section 425.1 Education Regulations of the Offices... EDUCATION DEMONSTRATION PROJECTS FOR THE INTEGRATION OF VOCATIONAL AND ACADEMIC LEARNING PROGRAM General... 2. Educating Social Workers for Practice in Integrated Health Care: A Model Implemented in a Graduate Social Work Program Science.gov (United States) Mattison, Debra; Weaver, Addie; Zebrack, Brad; Fischer, Dan; Dubin, Leslie 2017-01-01 This article introduces a curricular innovation, the Integrated Health Scholars Program (IHSP), developed to prepare master's-level social work students for practice in integrated health care settings, and presents preliminary findings related to students' self-reported program competencies and perceptions. IHSP, implemented in a… 3.
An Integrated Constraint Programming Approach to Scheduling Sports Leagues with Divisional and Round-robin Tournaments Energy Technology Data Exchange (ETDEWEB) Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey 2014-01-01 Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that performs the scheduling in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. In the experimental evaluation, the integrated approach takes considerably less computational effort than the previous approach. 4. Progress and status of the Integral Fast Reactor (IFR) development program International Nuclear Information System (INIS) Chang, Yoon I. 1992-01-01 In the Integral Fast Reactor (IFR) development program, the entire reactor system -- reactor, fuel cycle, and waste process -- is being developed and optimized at the same time as a single integral entity. The ALMR reactor plant design is being developed by an industrial team headed by General Electric and is presented in a companion paper. Detailed discussions on the present status of the IFR technology development activities in the areas of fuels, pyroprocessing, safety, core design, and fuel cycle demonstration are presented in the other two companion papers that follow this one. 7. Integrated Data Collection Analysis (IDCA) Program — IDCA Quarterly Program Review, September 14 and 15, 2010 Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (IHD-NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (IHD-NSWC), Indian Head, MD (United States). Indian Head Division; Shelley, Timothy J. [Air Force Research Lab. (AFRL/RXQF), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates, Inc., Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab.
(LLNL), Livermore, CA (United States) 2012-02-07 The IDCA conducted a program review at Los Alamos National Laboratory, September 14 and 15, 2010. The review was divided into three parts: 1) an update on the current status of the program, 2) an information exchange and discussion on technical details for current issues and future planning, and 3) a tour of the SSST testing facilities at LANL. The meeting started with an update from DHS by Laura Parker and a restating of some of the objectives of the Proficiency Test in which the IDCA is currently engaged. This update was followed by a discussion of some high-level programmatic issues, particularly about ways of communicating the overall goals of the IDCA to non-technical representatives. The final topic focused on the difficulty of releasing information, including the DHS approval process, ITAR, and open publication. Next JGR presented a technical summary of accomplishments, schedule, milestones, and future directions. The key points made were: 1) about 1/3 of the materials have been tested, 2) some participants are behind others, causing a lag in report writing, 3) method reports have been assigned to various participants to speed up the process of reporting, 4) the SSST Compendium needs reformatting and restructuring, and 5) the Compendium needs a web site to host it, with access control. After the technical update, some of the Proficiency Test results were shown comparing data from the various laboratories. These results included comparisons of the RDX standard, KC/sugar mixtures (-100 mesh and as received), KC/dodecane, KP/Al, and KP/C. Impact, friction, ESD, and DSC results were the focus. All the participants were involved in these discussions. This report includes summary notes, presentations, and explanatory information. 8. Enhancing positive attitudes towards disability: evaluation of an integrated physiotherapy program.
Science.gov (United States) Morgan, Prue Elizabeth; Lo, Kristin 2013-02-01 This study explored whether attitudes towards disability in second-year undergraduate physiotherapy students could be enhanced by an on-campus integrated curriculum program. A pre-post design was used. Year 2 (pre-clinical) students participated in a 12-week program focused on optimising attitudes towards people with acquired or developmental neurological disability. The Discomfort subscale of the Interaction with Disabled Persons scale, rated on a six-point Likert scale, was applied prior to and at completion of the 12-week program, and compared to year 4 students, just prior to graduation. Qualitative data from year 2 reflective narratives were also gathered. Forty-seven second-year and 45 fourth-year physiotherapy students participated. The difference in Discomfort subscale scores between weeks 1 and 12 of year 2 was statistically significant (p = 0.0016). The difference in Discomfort subscale scores between year 2 week 1 and year 4 students was also statistically significant (p = 0.040). There was no significant difference in attitudes between students at the end of year 2 and the end of year 4 (p = 0.703). Qualitative data supported the development of more positive attitudes towards neurological disability across the 12-week year 2 pre-clinical program. Student attitudes towards people with acquired and/or developmental neurological disabilities can be enhanced through an on-campus integrated curriculum program. 9. Annual Report of the Integrated Status and Effectiveness Monitoring Program: Fiscal Year 2008 Energy Technology Data Exchange (ETDEWEB) Terraqua, Inc. (Wauconda, WA) 2009-07-20 This document was created as an annual report detailing the accomplishments of the Integrated Status and Effectiveness Monitoring Program (ISEMP) in the Upper Columbia Basin in fiscal year 2008. The report consists of sub-chapters that reflect the various components of the program.
Chapter 1 presents a report on programmatic coordination and accomplishments, and Chapters 2 through 4 provide a review of how ISEMP has progressed during the 2008 fiscal year in each of the pilot project subbasins: the John Day (Chapter 2), Wenatchee/Entiat (Chapter 3) and Salmon River (Chapter 4). Chapter 5 presents a report on the data management accomplishments in 2008. 10. Mixed Waste Integrated Program: A technology assessment for mercury-containing mixed wastes International Nuclear Information System (INIS) Perona, J.J.; Brown, C.H. 1993-03-01 The treatment of mixed wastes must meet US Environmental Protection Agency (EPA) standards for chemically hazardous species and also must provide adequate control of the radioactive species. The US Department of Energy (DOE) Office of Technology Development established the Mixed Waste Integrated Program (MWIP) to develop mixed-waste treatment technology in support of the Mixed Low-Level Waste Program. Many DOE mixed-waste streams contain mercury. This report is an assessment of current state-of-the-art technologies for mercury separations from solids, liquids, and gases. A total of 19 technologies were assessed. This project is funded through the Chemical-Physical Technology Support Group of the MWIP. 11. An integrated approach to strategic planning in the civilian high-level radioactive waste management program International Nuclear Information System (INIS) Sprecher, W.M.; Katz, J.; Redmond, R.J. 1992-01-01 This paper describes the approach that the Office of Civilian Radioactive Waste Management (OCRWM) of the Department of Energy (DOE) is taking to the task of strategic planning for the civilian high-level radioactive waste management program. It highlights selected planning products and activities that have emerged over the past year. It demonstrates that this approach is an integrated one, both in the sense of being systematic at the program level and in being a component of DOE strategic planning efforts.
Lastly, it indicates that OCRWM strategic planning takes place in a dynamic environment and consequently is a process that is still evolving in response to the demands placed upon it. 12. MPL - A program for computations with iterated integrals on moduli spaces of curves of genus zero Science.gov (United States) Bogner, Christian 2016-06-01 We introduce the Maple program MPL for computations with multiple polylogarithms. The program is based on homotopy-invariant iterated integrals on moduli spaces M0,n of curves of genus 0 with n ordered marked points. It includes the symbol map and procedures for the analytic computation of period integrals on M0,n. It supports the automated computation of a certain class of Feynman integrals. 13. Financial incentives and accountability for integrated medical care in Department of Veterans Affairs mental health programs. Science.gov (United States) Kilbourne, Amy M; Greenwald, Devra E; Hermann, Richard C; Charns, Martin P; McCarthy, John F; Yano, Elizabeth M 2010-01-01 This study assessed the extent to which mental health leaders perceive their programs as being primarily accountable for monitoring general medical conditions among patients with serious mental illness, and it assessed associations with modifiable health system factors. As part of the Department of Veterans Affairs (VA) 2007 national Mental Health Program Survey, 108 mental health program directors were queried regarding program characteristics. Perceived accountability was defined as whether their providers, as opposed to external general medical providers, were primarily responsible for specific clinical tasks related to serious mental illness treatment or high-risk behaviors. Multivariable logistic regression was used to determine whether financial incentives or other system factors were associated with accountability.
Thirty-six percent of programs reported primary accountability for monitoring diabetes and cardiovascular risk after prescription of second-generation antipsychotics, 10% for hepatitis C screening, and 17% for obesity screening and weight management. In addition, 18% and 27% of program leaders, respectively, received financial bonuses for high performance in screening for risk of diabetes and cardiovascular disease and for alcohol misuse. Financial bonuses for diabetes and cardiovascular screening were associated with primary accountability for such screening (odds ratio=5.01, p<…). Financial incentives to improve quality performance may promote accountability in monitoring diabetes and cardiovascular risk assessment within mental health programs. Integrated care strategies (co-location) might be needed to promote management of high-risk behaviors among patients with serious mental illness. 14. Constellation Program Human-System Integration Requirements. Revision E, Nov. 19, 2010 Science.gov (United States) Dory, Jonathan 2010-01-01 The Human-Systems Integration Requirements (HSIR) in this document drive the design of space vehicles, their systems, and equipment with which humans interface in the Constellation Program (CxP). These requirements ensure that the design of Constellation (Cx) systems is centered on the needs, capabilities, and limitations of the human. The HSIR provides requirements to ensure proper integration of human-to-system interfaces. These requirements apply to all mission phases, including pre-launch, ascent, Earth orbit, trans-lunar flight, lunar orbit, lunar landing, lunar ascent, Earth return, Earth entry, Earth landing, post-landing, and recovery. The Constellation Program must meet NASA's Agency-level human rating requirements, which are intended to ensure crew survival without permanent disability. The HSIR provides a key mechanism for achieving human rating of Constellation systems. 15.
Integrating Early Child Development and Violence Prevention Programs: A Systematic Review. Science.gov (United States) Efevbera, Yvette; McCoy, Dana C; Wuermli, Alice J; Betancourt, Theresa S 2018-03-01 Limited evidence describes promoting development and reducing violence in low- and middle-income countries (LMICs), a missed opportunity to protect children and promote development and human capital. This study presents a systematic literature review of integrated early childhood development plus violence prevention (ECD+VP) interventions in LMICs. The search yielded 5,244 unique records, of which N = 6 studies met inclusion criteria. Interventions were in Chile, Jamaica, Lebanon, Mexico, Mozambique, and Turkey. Five interventions were parent education programs, including center-based sessions (n = 3) and home visiting (n = 2), while one intervention was a teacher education program. All but one study reported improvements in both child development and maltreatment outcomes. The dearth of evidence on ECD+VP interventions suggests additional research is needed. Integrated ECD+VP interventions may improve multiple child outcome domains while leveraging limited resources in LMICs. © 2018 Wiley Periodicals, Inc. 16. US Department of Energy, Richland Operations Office Integrated Safety Management System Program Description International Nuclear Information System (INIS) SHOOP, D.S. 2000-01-01 The purpose of this Integrated Safety Management System (ISMS) Program Description (PD) is to describe the U.S. Department of Energy (DOE), Richland Operations Office (RL) ISMS as implemented through the RL Integrated Management System (RIMS). This PD does not impose additional requirements but rather provides an overview describing how various parts of the ISMS fit together. Specific requirements for each of the core functions and guiding principles are established in other implementing processes, procedures, and program descriptions that comprise RIMS. 
RL is organized to conduct work through operating contracts; therefore, it is extremely difficult to provide an adequate ISMS description that only addresses RL functions. Of necessity, this PD contains some information on contractor processes and procedures which then require RL approval or oversight. 17. The Bosnian Train and Equip Program: A Lesson in Interagency Integration of Hard and Soft Power Science.gov (United States) 2014-03-01 even before Bosnia officially declared independence. Most of the weaponry and commanders from the former Yugoslav People's Army in Bosnia, which was...ability, and disposition. It covers characteristics that research indicates affect team performance including attitudinal, demographic, and...The Train and Equip Program reduced foreign influence in the Federation, which helped remove impediments to reconciliation and integration in Bosnia. 18. Single session of integrated "Silver Yoga" program improves cardiovascular parameters in senior citizens Directory of Open Access Journals (Sweden) Ananda Balayogi Bhavanani 2015-06-01 Conclusion: There is a healthy reduction in HR, BP and derived cardiovascular indices following a single yoga session in geriatric subjects. These changes may be attributed to enhanced harmony of cardiac autonomic function as a result of coordinated breath-body work and mind-body relaxation due to an integrated "Silver Yoga" program. [J Intercult Ethnopharmacol 2015; 4(2): 134-137] 19. Movement integration in elementary classrooms: Teacher perceptions and implications for program planning. Science.gov (United States) Webster, Collin A; Zarrett, Nicole; Cook, Brittany S; Egan, Cate; Nesbitt, Danielle; Weaver, R Glenn 2017-04-01 Movement integration (MI), which involves infusing physical activity (PA) into regular classroom time in schools, is widely recommended to help children meet the national guideline of 60 min of PA each day.
Understanding the perspective of elementary classroom teachers (ECTs) toward MI is critical to program planning for interventions/professional development. This study examined the MI perceptions of ECTs in order to inform the design and implementation of a school-based pilot program that focused in part on increasing children's PA through MI. Twelve ECTs (Grades 1-3) from four schools were selected to participate based on their responses to a survey about their use of MI. Based on the idea that MI programming should be designed with particular attention to teachers who integrate relatively few movement opportunities in their classrooms, the intent was to select the teacher who reported integrating movement the least at her/his respective grade level at each school. However, not all of these teachers agreed to participate in the study. The final sample included two groups of ECTs, including eight lowest-integrating teachers and four additional teachers. Each ECT participated in an interview during the semester before the pilot program was implemented. Through qualitative analysis of the interview transcripts, four themes emerged: (a) challenges and barriers (e.g., lack of time), (b) current and ideal resources (e.g., school support), (c) current implementation processes (e.g., scheduling MI into daily routines), and (d) teachers' ideas and tips for MI (e.g., stick with it and learn as you go). The themes were supported by data from both groups of teachers. This study's findings can inform future efforts to increase movement opportunities for children during regular classroom time. Copyright © 2017 Elsevier Ltd. All rights reserved. 20. Integrating Gender into World Bank Financed Transport Programs: Component 1.
Case Study Summary and Final Report OpenAIRE IC Net 2004-01-01 The World Bank in November 2001 commissioned IC Net Limited of Japan to carry out a study titled 'Integrating Gender into World Bank Financed Transport Programs' in accord with the terms of reference (TOR) issued in June 2001. The study was financed by a grant from the Japanese Large Studies Trust Fund. The contract came into effect on 15 December 2001 and covers the period to 15 June 2004... 1. Demonstration of an Integrated Pest Management Program for Wheat in Tajikistan OpenAIRE Landis, Douglas A.; Saidov, Nurali; Jaliov, Anvar; El Bouhssini, Mustapha; Kennelly, Megan; Bahlai, Christie; Landis, Joy N.; Maredia, Karim 2016-01-01 Wheat is an important food security crop in Central Asia but frequently suffers severe damage and yield losses from insect pests, pathogens, and weeds. With funding from the United States Agency for International Development, a team of scientists from three U.S. land-grant universities in collaboration with the International Center for Agricultural Research in Dry Areas and local institutions implemented an integrated pest management (IPM) demonstration program in three regions of Tajikistan ... 2. Integration of simulation in postgraduate studies in Saudi Arabia: The current practice in anesthesia training program Directory of Open Access Journals (Sweden) Abeer Arab 2017-01-01 Full Text Available The educational programs in the Saudi Commission for Health Specialties are developing rapidly, particularly in the scientific areas related to what is commonly known as evidence-based medicine. This review highlights the critical need and importance of integrating simulation into anesthesia training and assessment. Furthermore, it describes the current utilization of simulation in the anesthesia and critical care assessment process. 3. The fundamentals of integrating service in a post-licensure RN to BSN program.
Science.gov (United States) Washington-Brown, Linda; Ritchie, Arlene 2014-01-01 Integrating service in a post-licensure registered nurse to bachelor of science in nursing (RN to BSN) program provides licensed registered nurse (RN) students the opportunity to learn, develop, and experience different cultures while serving the community and populations in need (McKinnon & Fitzpatrick, 2012). Service to the community, integrated with academic learning, can be applied in a wide variety of settings, including schools, universities, and community faith-based organizations. Academic service-learning (ASL) can involve a group of students, a classroom, or an entire school. In the RN to BSN program, the authors use a student-directed service-learning approach that integrates service-learning throughout the curriculum. RN students are introduced to service-learning at program orientation prior to the start of classes and receive reinforcement and active engagement throughout the curriculum. The students and volunteer agencies receive and give benefits from the services provided and the life lessons gained through mentorship, education, and hands-on experiences. 4. Development of an administrative system for an integral program of safety and occupational hygiene International Nuclear Information System (INIS) Dominguez R, J. 2004-01-01 The objective of this thesis is to apply the basic elements of administration to the development of an integral occupational safety and hygiene program that can serve as a guide for the creation of new programs and of an internal regulation on the subject.
In addition to applying these basic elements of integral administration, the thesis work implements the regulations in force as well as up-to-date concepts of occupational safety and hygiene. These premises guided the development of the occupational safety and hygiene program, which is intended to be applied in all areas of the National Institute of Nuclear Research, especially those certified under the ISO 9001:2000 quality management system, whose implementation achieved the objectives set out in the Institute's general policies. It should be noted that the Institute's primary activity is research and development on the peaceful uses of nuclear energy, achieved with strong support from the conventional industrial-type areas; it is in these areas that the present thesis work was carried out, while continuing to review and apply the nuclear regulations. (Author) 5. Open pre-schools at integrated health services - A program theory Directory of Open Access Journals (Sweden) Agneta Abrahamsson 2013-04-01 Full Text Available Introduction: Family centres in Sweden are integrated services that reach all prospective parents and parents with children up to their sixth year, because of the co-location of the health service with the social service and the open pre-school. The personnel on the multi-professional site work together to meet the needs of the target group. The article explores a program theory focused on the open pre-schools at family centres. Method: A multi-case design is used and the sample consists of open pre-schools at six family centres. The hypothesis is based on previous research and evaluation data.
It guides the data collection; data are collected and analysed stepwise. Both parents and personnel are interviewed individually and in groups at each centre. Findings: The hypothesis was expanded to a program theory. The compliance of the professionals was the most significant element that explained why the open access service facilitated positive parenting. The professionals act in a compliant manner to meet the needs of the children and parents as well as in creating good conditions for social networking and learning amongst the parents. Conclusion: The compliance of the professionals in this program theory of open pre-schools at family centres can be a standard in integrated and open access services, whereas the organisation form can vary. The best way of increasing the number of integrative services is to support and encourage professionals that prefer to work in a compliant manner. 7. Regulatory requirements of the integrated technology demonstration program, Savannah River Site (U) International Nuclear Information System (INIS) Bergren, C.L. 1992-01-01 The integrated demonstration program at the Savannah River Site (SRS) involves demonstration, testing and evaluation of new characterization, monitoring, drilling and remediation technologies for soils and groundwater impacted by organic solvent contamination. The regulatory success of the demonstration program has developed as a result of open communications between the regulators and the technical teams involved. This open dialogue is an attempt to allow timely completion of applied environmental restoration demonstrations while meeting all applicable regulatory requirements. Simultaneous processing of multiple regulatory documents (satisfying RCRA, CERCLA, NEPA and various state regulations) has streamlined the overall permitting process. Public involvement is achieved as various regulatory documents are advertised for public comment consistent with the site's community relations plan. The SRS integrated demonstration has been permitted and endorsed by regulatory agencies, including the Environmental Protection Agency (EPA) and the South Carolina Department of Health and Environmental Control.
EPA headquarters and regional offices are involved in DOE's integrated Demonstration Program. This relationship allows for rapid regulatory acceptance while reducing federal funding and time requirements. (author) 8. Epistemology, development, and integrity in a science education professional development program Science.gov (United States) Hancock, Elizabeth St. Petery This research involved interpretive inquiry to understand changes in the notion of "self" as expressed by teachers recently enrolled as graduate students in an advanced degree program in science education at Florida State University. Teachers work in a context that integrates behavior, social structure, culture, and intention. Within this context, this study focused on the intentional realm that involves interior understandings, including self-epistemology, professional self-identity, and integrity. Scholarship in adult and teacher development, especially ways of knowing theory, guided my efforts to understand change in these notions of self. The five participants in this study were interviewed in depth to explore their "self"-related understandings in detail. The other primary data sources were portfolios and work the participants submitted as part of the program. Guided by a constructivist methodology, I used narrative inquiry and grounded theory to conduct data analysis. As learners and teachers, these individuals drew upon epistemological orientations emphasizing a procedural orientation to knowledge. They experienced varying degrees of interior and exterior development in self and epistemology. They created integrity in their efforts to align their intentions with their actions with a dynamic relationship to context. This study suggests that professional development experiences in science education include consideration of the personal and the professional, recognize and honor differing perspectives, facilitate development, and assist individuals to recognize and articulate their integrity. 9. 
The Efficiency of an Integrated Program Using Falconry to Deter Gulls from Landfills

Directory of Open Access Journals (Sweden)

Ericka Thiériot

2015-04-01

Full Text Available

Gulls are commonly attracted to landfills, and managers are often required to implement cost-effective and socially accepted deterrence programs. Our objective was to evaluate the effectiveness of an intensive program that integrated the use of trained birds of prey, pyrotechnics, and playback of gull distress calls at a landfill located close to a large ring-billed gull (Larus delawarensis) colony near Montreal, Quebec, Canada. We used long-term survey data on bird use of the landfill, conducted behavioral observations of gulls during one season, and tracked birds fitted with GPS data loggers. We also carried out observations at another landfill located farther from the colony, where less refuse was brought and where a limited culling program was conducted. The integrated program based on falconry resulted in a 98% decrease in the annual total number of gulls counted each day between 1995 and 2014. A separate study indicated that the local breeding population of ring-billed gulls increased and then declined during this period but remained relatively large. In 2010, there was an average (±SE) of 59 ± 15 gulls/day using the site with falconry, and only 0.4% ± 0.2% of these birds were feeding. At the other site, there was an average of 347 ± 55 gulls/day, and 13% ± 3% were feeding. Twenty-two gulls tracked from the colony made 41 trips towards the landfills: 25% of the trips that passed by the site with falconry resulted in a stopover that lasted 22 ± 7 min, compared to 85% at the other landfill, lasting 63 ± 15 min. We concluded that the integrated program using falconry, which we consider more socially acceptable than selective culling, was effective in reducing the number of gulls at the landfill.

10.
Impact of a Post-Discharge Integrated Disease Management Program on COPD Hospital Readmissions.

Science.gov (United States)

Russo, Ashlee N; Sathiyamoorthy, Gayathri; Lau, Chris; Saygin, Didem; Han, Xiaozhen; Wang, Xiao-Feng; Rice, Richard; Aboussouan, Loutfi S; Stoller, James K; Hatipoğlu, Umur

2017-11-01

Readmission following a hospitalization for COPD is associated with significant health-care expenditure. A multicomponent COPD post-discharge integrated disease management program was implemented at the Cleveland Clinic to improve the care of patients with COPD and reduce readmissions. This retrospective study reports our experience with the program. Groups of subjects who were exposed to different components of the program were compared with regard to their readmission rates. Multivariate logistic regression analysis was performed to build predictive models for 30- and 90-d readmission. One hundred sixty subjects completed a 90-d follow-up, of whom 67 attended the exacerbation clinic, 16 received care coordination, 51 completed both, and 26 did not participate in any component despite referral. Thirty- and 90-d readmission rates for the entire group were 18.1 and 46.2%, respectively. Thirty- and 90-d readmission rates for the individual groups were: exacerbation clinic, 11.9 and 35.8%; care coordination, 25.0 and 50.0%; both, 19.6 and 41.2%; and neither, 26.9 and 80.8%, respectively. The model with the best predictive ability for 30-d readmission risk included the number of hospitalizations within the previous year and use of noninvasive ventilation (C statistic of 0.84). The model for 90-d readmission risk included receiving any component of the post-discharge integrated disease management program, the number of hospitalizations, and primary care physician visits within the previous year (C statistic of 0.87).
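The C statistic quoted for these readmission models is the probability that a model ranks a randomly chosen readmitted patient above a randomly chosen non-readmitted one (equivalently, the area under the ROC curve). A minimal pure-Python sketch of how it is computed, on invented risk scores rather than the study's data:

```python
def c_statistic(scores, outcomes):
    """Concordance index: fraction of (event, non-event) pairs the
    risk scores rank correctly, counting ties as half-concordant."""
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    nonevents = [s for s, y in zip(scores, outcomes) if y == 0]
    pairs = concordant = 0
    for e in events:
        for n in nonevents:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                concordant += 0.5
    return concordant / pairs

# Invented predicted readmission risks and observed 90-day outcomes
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
outcomes = [1, 1, 0, 1, 0, 1, 0, 0]
print(c_statistic(scores, outcomes))  # 0.8125
```

A C statistic of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, which is why the study's values of 0.84 and 0.87 indicate strong discrimination.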
Receiving any component of a post-discharge integrated disease management program was associated with a reduced 90-d readmission rate. Previous health-care utilization and lung function impairment were strong predictors of readmission. Copyright © 2017 by Daedalus Enterprises.

11. Integrating prevention of mother-to-child HIV transmission programs to improve uptake: a systematic review.

Directory of Open Access Journals (Sweden)

Lorainne Tudor Car

Full Text Available

BACKGROUND: We performed a systematic review to assess the effect of integrated perinatal prevention of mother-to-child transmission of HIV interventions, compared to non- or partially integrated services, on uptake in low- and middle-income countries. METHODS: We searched for experimental, quasi-experimental and controlled observational studies in any language from 21 databases and grey literature sources. RESULTS: Out of 28,654 citations retrieved, five studies met our inclusion criteria. A cluster randomized controlled trial reported a higher probability of nevirapine uptake at the labor wards implementing HIV testing and structured nevirapine adherence assessment (RRR 1.37, bootstrapped 95% CI 1.04-1.77). A stepped wedge design study showed marked improvement in antiretroviral therapy (ART) enrolment (44.4% versus 25.3%, p<0.001) and initiation (32.9% versus 14.4%, p<0.001) in integrated care, but the median gestational age at ART initiation (27.1 versus 27.7 weeks, p = 0.4), ART duration (10.8 versus 10.0 weeks, p = 0.3) and 90-day ART retention (87.8% versus 91.3%, p = 0.3) did not differ significantly. A cohort study reported no significant difference in either ART coverage (55% versus 48% versus 47%, p = 0.29) or eight weeks of ART duration before delivery (50% versus 42% versus 52%; p = 0.96) between integrated, proximal and distal partially integrated care. Two before-and-after studies assessed the impact of integration on HIV testing uptake in antenatal care.
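The bootstrapped confidence interval reported for the relative risk in the trial above can be illustrated with a percentile bootstrap: resample each arm with replacement, recompute the risk ratio, and take the empirical 2.5th and 97.5th percentiles. The uptake counts below are invented for the sketch and do not reproduce the trial's data:

```python
import random

random.seed(1)

treated = [1] * 140 + [0] * 60    # hypothetical arm: 140/200 uptake
control = [1] * 100 + [0] * 100   # hypothetical arm: 100/200 uptake

def risk_ratio(a, b):
    return (sum(a) / len(a)) / (sum(b) / len(b))

# Percentile bootstrap: resample both arms, recompute the statistic
boots = sorted(
    risk_ratio(random.choices(treated, k=len(treated)),
               random.choices(control, k=len(control)))
    for _ in range(2000)
)
lo, hi = boots[49], boots[1949]   # ~2.5th and 97.5th percentiles of 2000
print(f"RR = {risk_ratio(treated, control):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Because the interval comes from the resampling distribution rather than a normal approximation, it needs no closed-form standard error, which is why bootstrapped CIs are common for ratio statistics like the RRR above.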
The first study reported that significantly more women received information on PMTCT (92% versus 77%, p<0.001), were tested (76% versus 62%, p<0.001) and learned their HIV status (66% versus 55%, p<0.001) after integration. The second study also reported a significant increase in HIV testing uptake after integration (98.8% versus 52.6%, p<0.001). CONCLUSION: Limited, non-generalizable evidence supports the effectiveness of integrated PMTCT programs. More research measuring coverage and

12. Alberta Healthy Living Program--a model for successful integration of chronic disease management services.

Science.gov (United States)

Morrin, Louise; Britten, Judith; Davachi, Shahnaz; Knight, Holly

2013-08-01

The most common presentation of chronic disease is multimorbidity. Disease management strategies are similar across most chronic diseases. Given the prevalence of multimorbidity and the commonality in approaches, fragmented single-disease management must be replaced with integrated care of the whole person. The Alberta Healthy Living Program, a community-based chronic disease management program, supports adults with, or at risk for, chronic disease in improving their health and well-being. Participants gain confidence and skills in how to manage their chronic disease(s) by learning to understand their health condition, make healthy eating choices, exercise safely and cope emotionally. The program includes three service pillars: disease-specific and general health patient education, disease-spanning supervised exercise, and Better Choices, Better Health(TM) self-management workshops. Services are delivered in the community by an interprofessional team and can be tailored to target specific diverse and vulnerable populations, such as Aboriginal, ethno-cultural and francophone groups and those experiencing homelessness. Programs may be offered as a partnership between Alberta Health Services, primary care and community organizations.
Common standards reduce provincial variation in care, yet maintain sufficient flexibility to meet local and diverse needs and achieve equity in care. The model has been implemented successfully in 108 communities across Alberta. This approach is associated with reduced acute care utilization and improved clinical indicators, and achieves efficiencies through an integrated, disease-spanning, patient-centred approach. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

13. Evidence and Feasibility of Implementing an Integrated Wellness Program in Northeast Georgia.

Science.gov (United States)

Flanigan, Amber; Salm Ward, Trina

2017-08-01

Evidence for the connection between physical and mental health is growing, as is interest in providing a holistic, mind-body approach to improving mental health and wellness. A needs assessment in northeast Georgia identified several regional health priorities, including mental health and substance abuse, access to care, and cardiovascular health. The study's purpose is threefold: to (1) review evidence for integrated mind-body wellness services, (2) explore the feasibility of implementing wellness services in a small mental health agency serving northeast Georgia, and (3) conduct a brief survey assessing interest in a wellness program. The literature search identified articles within the past 10 years with these key words: "yoga," "mental health," "wellness program," "complementary alternative medicine," "tai chi," "mindfulness," "meditation," and "nutrition." The survey was distributed to the agency's affiliates. The literature review identified strong evidence for an integrated mind-body wellness program that includes yoga, tai chi, mindfulness meditation, and nutrition education. Among 73 survey respondents, 86 percent indicated interest in wellness services, and 85 percent agreed that wellness services are important to mental health and well-being.
Authors suggest a model that incorporates a holistic wellness program to complement mental health services and help facilitate physical and mental health. © 2017 National Association of Social Workers.

14. STARS: An Integrated, Multidisciplinary, Finite-Element, Structural, Fluids, Aeroelastic, and Aeroservoelastic Analysis Computer Program

Science.gov (United States)

Gupta, K. K.

1997-01-01

A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules into the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computers; associated graphics codes use OpenGL and the IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.

15. Environment, Safety, Health, and Quality Plan for the Buried Waste Integrated Demonstration Program

International Nuclear Information System (INIS)

Walker, S.
1994-05-01

The Buried Waste Integrated Demonstration (BWID) is a program funded by the US Department of Energy Office of Technology Development. BWID supports the applied research, development, demonstration, testing, and evaluation of a suite of advanced technologies that together form a comprehensive remediation system for the effective and efficient remediation of buried waste. This document describes the Environment, Safety, Health, and Quality requirements for conducting BWID activities at the Idaho National Engineering Laboratory. Topics discussed in this report, as they apply to BWID operations, include Federal, State of Idaho, and Environmental Protection Agency regulations; Health and Safety Plans; Quality Program Plans; Data Quality Objectives; and training and job hazard analysis. Finally, a discussion is given of CERCLA criteria and system and performance audits as they apply to the BWID Program.

16. Instrumentation and electrical program at the Three Mile Island Unit 2, Technical Integration Office

International Nuclear Information System (INIS)

Hecker, L.A.

1982-01-01

The Three Mile Island Unit 2 accident of March 28, 1979 presents unique research opportunities that can provide valuable information on nuclear power plant safety philosophy and safety systems performance. The Technical Integration Office at Three Mile Island was established by the Department of Energy to manage a broad-based research and development program. One significant part of this effort is the Instrumentation and Electrical Program, which operates: (1) to identify instruments and electrical components that failed during or since the accident; (2) to test and analyze them in order to identify the causes of failure; and (3) to assess the survivability of those that did not fail. The basis for selection of equipment is discussed, and the testing methodology is described. Also, some results of Instrumentation and Electrical Program work to date are presented.

17.
Working Together: Building Successful Policy and Program Partnerships for Immigrant Integration

Directory of Open Access Journals (Sweden)

Els de Graauw

2017-03-01

Full Text Available

Supporting and investing in the integration of immigrants and their children is critically important to US society. Successful integration contributes to the nation's economic vitality, its civic and political health, and its cultural diversity. But although the United States has a good track record on immigrant integration, outcomes could be better. A national, coherent immigrant integration policy infrastructure is needed. This infrastructure can build on long-standing partnerships between civil society and US public institutions. Such partnerships, advanced under Republican- and Democratic-led administrations, were initially established to facilitate European immigrants' integration in large American cities, and later extended to help refugees fleeing religious persecution and war. In the twenty-first century, we must expand this foundation by drawing on the growing activism of cities and states, new civil society initiatives, and public-private partnerships that span the country. A robust national integration policy infrastructure must be vertically integrated to include different levels of government and horizontally applied across public- and private-sector actors and different types of immigrant destinations. The resultant policy should leverage public-private partnerships, drawing on the energy, ideas, and work of community-based nonprofit organizations as well as the leadership and support of philanthropy, business, education, faith-based, and other institutions.
A new coordinating office to facilitate interagency cooperation is needed in the executive branch; the mandate and programs of the Office of Refugee Resettlement need to be secured and where possible expanded; the outreach and coordinating role of the Office of Citizenship needs to be extended, including through a more robust grant program to community-based organizations; and Congress needs to develop legislation and appropriate funding for a comprehensive integration

18. Integrating hypermedia into the environmental education setting: Developing a program and evaluating its effect

Science.gov (United States)

Parker, Tehri Davenport

1997-09-01

This study designed, implemented, and evaluated an environmental education hypermedia program for use in a residential environmental education facility. The purpose of the study was to ascertain whether a hypermedia program could increase student knowledge of and positive attitudes toward the environment and environmental education. A student/computer interface, based on the theory of social cognition, was developed to direct student interactions with the computer. A quasi-experimental research design was used. Students were randomly assigned to either the experimental or the control group. The experimental group used the hypermedia program to learn about the topic of energy. The control group received the same conceptual information from a teacher/naturalist. An Environmental Awareness Quiz was administered to measure differences in the students' cognitive understanding of energy issues. Students participated in one-on-one interviews to discuss their attitudes toward the lesson and the overall environmental education experience. Additionally, members of the experimental group were tape-recorded while they used the hypermedia program. These tapes were analyzed to identify aspects of the hypermedia program that promoted student learning.
The findings of this study suggest that computers, and hypermedia programs, can be integrated into residential environmental education facilities and can assist environmental educators in meeting their goals for students. The study found that the hypermedia program was as effective as the teacher/naturalist for teaching environmental education material. Students who used the computer reported more positive attitudes toward the lesson on energy and thought that they had learned more than the control group. Students in the control group stated that they did not learn as much as the computer group. The majority of students had positive attitudes toward the inclusion of computers in the camp setting and stated that they were a good

19. Adopting De Novo Programming Approach on IC Design Service Firms Resources Integration

Directory of Open Access Journals (Sweden)

James K. C. Chen

2014-01-01

Full Text Available

The semiconductor industry holds a very important position in the computer industry, the ICT field, and the development of new electronic technologies. IC design services are a key factor in the development of the semiconductor industry. More than 365 IC design service firms have been established around Hsinchu Science Park in Taiwan. Building an efficient planning model for integrating the resources of IC design service firms is therefore of considerable interest. This study constructs a planning model for resource integration in IC design service firms, using De Novo programming as a multi-criteria approach to achieve optimal resource allocation. Results show that IC design service firms should adopt an open-innovation approach and use design outsourcing to reduce costs and enhance business performance. This De Novo planning model applies not only to IC design service firms but also to strategic alliances and resource integration in other industries.
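The distinctive idea of De Novo programming, as applied in the abstract above, is that resource levels are not fixed constraints but are themselves designed within a total budget, which collapses the individual resource constraints into a single budget constraint. A minimal sketch with invented coefficients (two hypothetical services and two purchasable resources), not the study's actual model:

```python
profit = [30.0, 50.0]             # profit per unit of two hypothetical services
usage = [[4.0, 6.0],              # resource i consumed per unit of service j
         [2.0, 5.0]]
price = [10.0, 8.0]               # unit purchase price of each resource
budget = 1000.0                   # total funds available to buy resources

# Effective budget cost of one unit of each service: sum_i price[i] * usage[i][j]
cost = [sum(price[i] * usage[i][j] for i in range(len(price)))
        for j in range(len(profit))]

# With a single budget constraint, the continuous optimum funds the
# service with the highest profit per budget dollar.
best = max(range(len(profit)), key=lambda j: profit[j] / cost[j])
x = [0.0, 0.0]
x[best] = budget / cost[best]

# The resource portfolio to acquire follows from the chosen mix
resources_to_buy = [sum(usage[i][j] * x[j] for j in range(len(x)))
                    for i in range(len(price))]
total_profit = sum(p * q for p, q in zip(profit, x))
print("mix:", x, "resources:", resources_to_buy, "profit:", round(total_profit, 1))
```

Real De Novo formulations layer multiple objectives and integer restrictions on top of this budget-design idea, but the core contrast with conventional linear programming, redesigning the resource portfolio rather than accepting it as given, is already visible here.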
The planning model is thus applicable across industries.

20. The bottom-up approach to integrative validity: a new perspective for program evaluation.

Science.gov (United States)

Chen, Huey T

2010-08-01

The Campbellian validity model and the traditional top-down approach to validity have had a profound influence on research and evaluation. That model includes the concepts of internal and external validity and, within that model, the preeminence of internal validity as demonstrated in the top-down approach. Evaluators and researchers have, however, increasingly recognized that over-emphasis on internal validity reduces an evaluation's usefulness and contributes to the gulf between academic and practical communities regarding interventions. This article examines the limitations of the Campbellian validity model and the top-down approach and provides a comprehensive alternative, known as the integrative validity model for program evaluation. The integrative validity model includes the concept of viable validity, which is predicated on a bottom-up approach to validity. This approach better reflects stakeholders' evaluation views and concerns, makes external validity workable, and is therefore a preferable alternative for the evaluation of health promotion/social betterment programs. The integrative validity model and the bottom-up approach enable evaluators to meet scientific and practical requirements, facilitate advances in external validity, and gain a new perspective on methods. The new perspective also furnishes a balanced view of credible evidence and offers an alternative perspective for funding. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

1.
The Anne Frank Haven: A case of an alternative educational program in an integrative Kibbutz setting

Science.gov (United States)

Ben-Peretz, Miriam; Giladi, Moshe; Dror, Yuval

1992-01-01

The essential features of the programme of the Anne Frank Haven are the complete integration of children from low SES and different cultural backgrounds with Kibbutz children; a holistic approach to education; and the involvement of the whole community in an "open" residential school. After 33 years, it is argued that the experiment has proved successful in absorbing city-born youth into the Kibbutz, enabling at-risk populations to reach significant academic achievements, and ensuring their continued participation in the dominant culture. The basic integration model consists of "layers" of concentric circles in dynamic interaction. The innermost circle is the class, the learning community. The Kibbutz community and the foster parents form a supportive, enveloping circle, which enables students to become part of the outer community and to intervene in it. A kind of meta-environment, the inter-Kibbutz partnership and the Israeli educational system, influences the program through decision making and guidance. Some of the principles of the Haven — integration, community involvement, a year's induction for all new students, and open residential settings — could be useful for cultures and societies outside the Kibbutz. The real "secret" of success of an alternative educational program is its dedicated, motivated and highly trained staff.

2. Battelle integrity of nuclear piping program. Summary of results and implications for codes/standards

International Nuclear Information System (INIS)

Miura, Naoki

2005-01-01

The BINP (Battelle Integrity of Nuclear Piping) program was proposed by Battelle to elaborate pipe fracture evaluation methods and to improve LBB and in-service flaw evaluation criteria. The program was conducted from October 1998 to September 2003.
In Japan, CRIEPI participated in the program on behalf of electric utilities and fabricators, to keep abreast of the technical background for possible future revision of LBB and in-service flaw evaluation standards and to identify issues that need to be reflected in current domestic standards. The results obtained from the program have been used in the USNRC's new LBB Regulatory Guide program and in a proposal of revised in-service flaw evaluation criteria to the ASME Code Committee. The results were assessed for their implications for existing and future domestic standards. As a result, the impact of many of these issues, which were suspected of adversely affecting LBB approval or allowable flaw sizes in flaw evaluation criteria, was found to be relatively minor under actual plant conditions. At the same time, some issues that need to be resolved to establish advanced and rational standards in the future were identified. (author)

3. NRC integrated program for the resolution of Unresolved Safety Issues A-3, A-4 and A-5 regarding steam generator tube integrity: Final report

International Nuclear Information System (INIS)

1988-09-01

This report presents the results of the NRC integrated program for the resolution of Unresolved Safety Issues (USIs) A-3, A-4, and A-5 regarding steam generator tube integrity. A generic risk assessment is provided and indicates that risk from steam generator tube rupture (SGTR) events is not a significant contributor to total risk at a given site, nor to the total risk to which the general public is routinely exposed. This finding is considered to be indicative of the effectiveness of licensee programs and regulatory requirements for ensuring steam generator tube integrity in accordance with 10 CFR 50, Appendices A and B.
This report also identifies a number of staff-recommended actions that the staff finds can further improve the effectiveness of licensee programs in ensuring the integrity of steam generator tubes and in mitigating the consequences of an SGTR. As part of the integrated program, the staff issued Generic Letter 85-02 encouraging licensees of pressurized water reactors (PWRs) to upgrade their programs, as necessary, to meet the intent of the staff-recommended actions; however, such actions do not constitute NRC requirements. In addition, this report describes a number of ongoing staff actions and studies involving steam generator issues which are being pursued to provide added assurance that risk from SGTR events will continue to be small. 146 refs., 5 figs., 11 tabs.

4. Technology Integration Division FY 1992 Public Participation Program Management and Implementation Plan

International Nuclear Information System (INIS)

1991-12-01

The mission of the Office of Technology Development (OTD), to develop and apply existing and innovative environmental restoration and waste management technologies to the cleanup of Department of Energy (DOE) sites and facilities in accordance with applicable regulations, is to be carried out through the central mechanisms of the Integrated Demonstration (ID) and Integrated Program (IP). Regulations include provisions for public participation in DOE decision making regarding IDs. Beyond these requirements, DOE seeks to foster a more open culture in which public participation, based on two-way communication between DOE and the public, is not only welcomed but actively encouraged. The public to which the Program is addressed actually consists of several distinct "publics": state and local government officials; Indian tribes; citizen groups and individuals concerned about specific issues; citizen groups or individuals who are opinion leaders in their communities; other federal agencies; private industry; and academia involved in IDs.
Participation of these publics in decision making means that their concerns, needs, objectives, and other input are identified through two-way communication between them and DOE, and that these factors are considered when decisions are made about OTD activities. This plan outlines the Technology Integration Division's Public Participation Program goals and objectives, and the steps to be taken during Fiscal Year (FY) 1992 to move toward those goals and objectives, based on the challenges and opportunities currently recognized or assumed.

5. Unit and integration testing of Lustre programs: a case study from the nuclear industry

International Nuclear Information System (INIS)

Thevenod-Fosse, P.

1998-01-01

LUSTRE belongs to the class of synchronous data flow languages, which have been designed for programming reactive and real-time systems having safety-critical requirements. It is implemented in the SCADE tool. SCADE is a software development environment for real-time systems which consists of a graphical and textual editor and a C code generator. In previous work, a testing approach specific to LUSTRE programs has been defined, which may be applied at either the unit or integration testing levels of a gradual testing process. The paper reports on an industrial case study we have performed to exemplify the feasibility of the testing strategy. The software module, called SRIC (Source Range Instrumentation Channel), was developed by SCHNEIDER ELECTRIC in the SCADE environment. SRIC is extracted from a monitoring software system of a nuclear reactor; it comprises approximately 2600 lines of C code automatically generated by SCADE. Section 2 outlines the testing strategy. Section 3 then presents the results for the program SRIC, for which four testing levels were defined (unit testing followed by three successive integration testing levels). First conclusions and directions for future work are proposed in Section 4. (author)

6.
Integrated experimental test program on waterhammer pressure pulses and associated structural responses within a feedwater sparger

International Nuclear Information System (INIS)

Nurkkala, P.; Hoikkanen, J.

1997-01-01

This paper describes the methods and systems utilized in an integrated experimental thermohydraulic/mechanics analysis test program on waterhammer pressure pulses within a revised feedwater sparger of a Loviisa VVER-440-type reactor. The program was carried out in two stages: (1) measurements with a strictly limited set of operating parameters at the Loviisa NPP, and (2) measurements with the full set of operating parameters on a test article simulating the revised feedwater sparger. The experiments at the Loviisa NPP served as an invaluable source of information on the nature of waterhammer pressure pulses and structural responses. These tests thus helped to set the objectives and formulate the concept for a series of tests on a test article to study the waterhammer phenomena. The heavily instrumented full-size test article of a steam generator feedwater sparger was placed within a pressure vessel simulating the steam generator. The feedwater sparger was subjected to the full range of operating parameters, which resulted in waterhammer pressure pulse trains of various magnitudes and durations. Two different designs of the revised feedwater sparger were investigated (i.e., 'grounded' and 'with goose neck'). The following objectives were to be met within this program: (1) establish the thermohydraulic parameters that facilitate the occurrence of waterhammer pressure pulses, (2) provide a database for further analysis of the pressure pulse phenomena, (3) establish the location and severity of these waterhammer pressure pulses, (4) establish the structural response due to these pressure pulses, and (5) provide input data for structural integrity analysis. (orig.)

7.
[The IPT integrative program of psychological therapy for schizophrenia patients: new perspectives].

Science.gov (United States)

Pomini, Valentino

2004-04-01

The integrated psychological treatment (IPT) for schizophrenic patients is composed of six modules that can be implemented either separately or in an articulated way. In the latter case, the treatment begins with a cognitive remediation phase, which is followed by a social skills training phase. In the first phase, exercises focus specifically on selective attention, memory, logical reasoning, perception and communication skills. The second phase of the program offers three other modules that train further skills: 1) social skills, 2) emotional management, 3) interpersonal problem solving. The IPT program belongs to the so-called second generation of social skills training programmes. It has been validated by numerous controlled studies, either in its complete form or in partial forms containing only one or more of its sub-programmes. The results of these studies are globally positive. They show that IPT is an interesting therapeutic contribution to rehabilitation practice with schizophrenic patients. A third generation of social skills training has been elaborated on the basis of the current IPT program. These new additions to the IPT aim to promote the use in real life of the competencies trained in the sessions, either by adding specific homework, in-vivo or booster sessions, or by designing new programmes directed at specific rehabilitation objectives, such as integration in an apartment, the management of leisure time, or the return to a workplace. These new programmes have been studied; they are promising and seem to be a useful complement to the original IPT.

8. Plagiarism, Cheating and Research Integrity: Case Studies from a Masters Program in Peru.
Science.gov (United States) Carnero, Andres M; Mayta-Tristan, Percy; Konda, Kelika A; Mezones-Holguin, Edward; Bernabe-Ortiz, Antonio; Alvarado, German F; Canelo-Aybar, Carlos; Maguiña, Jorge L; Segura, Eddy R; Quispe, Antonio M; Smith, Edward S; Bayer, Angela M; Lescano, Andres G 2017-08-01 Plagiarism is a serious, yet widespread type of research misconduct, and is often neglected in developing countries. Despite its far-reaching implications, plagiarism is poorly acknowledged and discussed in the academic setting, and insufficient evidence exists in Latin America and developing countries to inform the development of preventive strategies. In this context, we present a longitudinal case study of seven instances of plagiarism and cheating arising in four consecutive classes (2011-2014) of an Epidemiology Masters program in Lima, Peru, and describe the implementation and outcomes of a multifaceted, "zero-tolerance" policy aimed at introducing research integrity. Two cases involved cheating in graded assignments, and five cases corresponded to plagiarism in the thesis protocol. Cases revealed poor awareness of and high tolerance for plagiarism, poor academic performance, and widespread writing deficiencies, compensated for with patchwriting and copy-pasting. Depending on the events' severity, penalties included course failure (6/7) and separation from the program (3/7). Students at fault did not engage in further plagiarism. Between 2011 and 2013, the Masters program sequentially introduced a preventive policy consisting of: (i) intensified research integrity and scientific writing education; (ii) a stepwise, cumulative writing process; (iii) honor codes; (iv) an active search for plagiarism in all academic products; and (v) a "zero-tolerance" policy in response to documented cases. No cases were detected in 2014.
In conclusion, plagiarism seems to be widespread in resource-limited settings, and a greater response with educational and zero-tolerance components is needed to prevent it. 9. Mixed Waste Integrated Program interim evaluation report on thermal treatment technologies International Nuclear Information System (INIS) Gillins, R.L.; DeWitt, L.M.; Wollerman, A.L. 1993-02-01 The Mixed Waste Integrated Program (MWIP) is one of several US Department of Energy (DOE) integrated programs established to organize and coordinate throughout the DOE complex the development of technologies for treatment of specific waste categories. The goal of the MWIP is to develop and deploy appropriate technologies for the treatment of DOE mixed low-level and alpha-contaminated wastes in order to bring all affected DOE installations and projects into compliance with environmental laws. Evaluation of treatment technologies by the MWIP will focus on meeting waste form performance requirements for disposal. Thermal treatment technologies were an early emphasis for the MWIP because thermal treatment is indicated (or mandated) for many of the hazardous constituents in DOE mixed waste and because these technologies have been widely investigated for these applications. An advisory group, the Thermal Treatment Working Group (TTWG), was formed during the program's infancy to assist the MWIP in evaluating and prioritizing thermal treatment technologies suitable for development. The results of the overall evaluation scoring indicate that the four highest-rated technologies were rotary kilns, slagging kilns, electric-arc furnaces, and plasma-arc furnaces. The four highest-rated technologies were all judged to be applicable to five of the six waste streams and are the only technologies in the evaluation with this distinction. Conclusions as to the superiority of one technology over others are not valid based on this preliminary study, although some general conclusions can be drawn 10.
Sustainable Environmental Education: Conditions and Characteristics Needed for a Successfully Integrated Program in Public Elementary Schools Science.gov (United States) Rieckenberg, Cara Rae This case study investigated what conditions and characteristics contributed to a successful environmental education program within elementary schools of a school district where environmental education was the mandate. While research does exist on practical application of environmental education within schools, little if any literature has been written or research conducted on schools actually implementing environmental education to study what contributes to the successful implementation of the program. To study this issue, 24 participants from a Midwestern school district were interviewed, six of whom were principals of each of the six elementary schools included in the study. All participants were identified as champions of environmental education integration within their buildings due to leadership positions held that focused on environmental education. Analysis of the data collected via interviews revealed findings that hindered the implementation of environmental education, findings that facilitated the implementation of environmental education, and findings that indicated an environmental education-focused culture existed within the schools. Conditions and characteristics found to contribute to the success of these schools' environmental education programs include: professional development opportunities, administrative support, peer leadership opportunities and guidance, passion for the content and for the environment, comfort and confidence with the content, and ease of activities and events that contribute to the culture and student success. Keywords: environmental education, integration, leadership, teachers as leaders. 11.
Integrating an internet-mediated walking program into family medicine clinical practice: a pilot feasibility study Directory of Open Access Journals (Sweden) Sen Ananda 2011-06-01 Full Text Available Abstract Background Regular participation in physical activity can prevent many chronic health conditions. Computerized self-management programs are effective clinical tools to support patient participation in physical activity. This pilot study sought to develop and evaluate an online interface for primary care providers to refer patients to an Internet-mediated walking program called Stepping Up to Health (SUH) and to monitor participant progress in the program. Methods In Phase I of the study, we recruited six pairs of physicians and medical assistants from two family practice clinics to assist with the design of a clinical interface. During Phase II, providers used the developed interface to refer patients to a six-week pilot intervention. Provider perspectives were assessed regarding the feasibility of integrating the program into routine care. Assessment tools included quantitative and qualitative data gathered from semi-structured interviews, surveys, and online usage logs. Results In Phase I, 13 providers used SUH and participated in two interviews. Providers emphasized the need for alerts flagging patients who were not doing well and the ability to review participant progress. Additionally, providers asked for summary views of data across all enrolled clinic patients as well as advertising materials for intervention recruitment. In response to this input, an interface was developed containing three pages: 1) a recruitment page, 2) a summary page, and 3) a detailed patient page. In Phase II, providers used the interface to refer 139 patients to SUH and 37 (27%) enrolled in the intervention. Providers rarely used the interface to monitor enrolled patients.
Barriers to regular use of the intervention included lack of integration with the medical record system, competing priorities, patient disinterest, and physician unease with exercise referrals. Intention-to-treat analyses showed that patients increased walking by an average of 1493 steps 12. Repository Integration Program: RIP performance assessment and strategy evaluation model theory manual and user's guide International Nuclear Information System (INIS) 1995-11-01 This report describes the theory and capabilities of RIP (Repository Integration Program). RIP is a powerful and flexible computational tool for carrying out probabilistic integrated total system performance assessments for geologic repositories. The primary purpose of RIP is to provide a management tool for guiding system design and site characterization. In addition, the performance assessment model (and the process of eliciting model input) can act as a mechanism for integrating the large amount of available information into a meaningful whole (in a sense, allowing one to keep the "big picture" and the ultimate aims of the project clearly in focus). Such an integration is useful both for project managers and project scientists. RIP is based on a "top down" approach to performance assessment that concentrates on the integration of the entire system, and utilizes relatively high-level descriptive models and parameters. The key point in the application of such a "top down" approach is that the simplified models and associated high-level parameters must incorporate an accurate representation of their uncertainty. RIP is designed in a very flexible manner such that details can be readily added to various components of the model without modifying the computer code. Uncertainty is also handled in a very flexible manner, and both parameter and model (process) uncertainty can be explicitly considered. Uncertainty is propagated through the integrated PA model using an enhanced Monte Carlo method.
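The Monte Carlo uncertainty-propagation step described in the RIP abstract above can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in, not RIP's actual models: the lognormal "leach rate" and uniform "travel time" inputs and the toy attenuation model are invented for illustration only.

```python
import math
import random
import statistics

# Minimal sketch of Monte Carlo uncertainty propagation in the spirit of a
# "top down" performance assessment: sample uncertain high-level inputs,
# push each realization through a simplified system model, and summarize
# the resulting distribution of the performance measure.

def system_model(leach_rate, travel_time):
    # Toy high-level performance measure: a source-term release rate
    # attenuated by transport delay (illustrative only, not RIP's model).
    return leach_rate * math.exp(-travel_time / 1.0e4)

def propagate(n_samples=10_000, seed=42):
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        leach_rate = rng.lognormvariate(mu=-9.0, sigma=1.0)  # uncertain input 1
        travel_time = rng.uniform(5.0e3, 5.0e4)              # uncertain input 2
        samples.append(system_model(leach_rate, travel_time))
    mean = statistics.mean(samples)
    p95 = statistics.quantiles(samples, n=100)[94]  # 95th percentile
    return mean, p95

mean, p95 = propagate()
```

With skewed (e.g. lognormal) inputs, the 95th percentile of the output sits well above the mean, which is exactly why such assessments report distributions rather than single point estimates.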
RIP must rely heavily on subjective assessment (expert opinion) for much of its input. The process of eliciting the high-level input parameters required for RIP is critical to its successful application. As a result, in order for any project to successfully apply a tool such as RIP, an enormous amount of communication and cooperation must exist between the data collectors, the process modelers, and the performance assessment modelers. 13. IOTA (Integrable Optics Test Accelerator): facility and experimental beam physics program Science.gov (United States) Antipov, S.; Broemmelsiek, D.; Bruhwiler, D.; Edstrom, D.; Harms, E.; Lebedev, V.; Leibfritz, J.; Nagaitsev, S.; Park, C. S.; Piekarz, H.; Piot, P.; Prebys, E.; Romanov, A.; Ruan, J.; Sen, T.; Stancari, G.; Thangaraj, C.; Thurman-Keup, R.; Valishev, A.; Shiltsev, V. 2017-03-01 The Integrable Optics Test Accelerator (IOTA) is a storage ring for advanced beam physics research currently being built and commissioned at Fermilab. It will operate with protons and electrons using injectors with momenta of 70 and 150 MeV/c, respectively. The research program includes the study of nonlinear focusing integrable optical beam lattices based on special magnets and electron lenses, beam dynamics of space-charge effects and their compensation, optical stochastic cooling, and several other experiments. In this article, we present the design and main parameters of the facility, outline progress to date and provide the timeline of the construction, commissioning and research. The physical principles, design, and hardware implementation plans for the major IOTA experiments are also discussed. 14.
IOTA (Integrable Optics Test Accelerator): Facility and experimental beam physics program International Nuclear Information System (INIS) Antipov, Sergei; Broemmelsiek, Daniel; Bruhwiler, David; Edstrom, Dean; Harms, Elvin 2017-01-01 The Integrable Optics Test Accelerator (IOTA) is a storage ring for advanced beam physics research currently being built and commissioned at Fermilab. It will operate with protons and electrons using injectors with momenta of 70 and 150 MeV/c, respectively. The research program includes the study of nonlinear focusing integrable optical beam lattices based on special magnets and electron lenses, beam dynamics of space-charge effects and their compensation, optical stochastic cooling, and several other experiments. In this article, we present the design and main parameters of the facility, outline progress to date and provide the timeline of the construction, commissioning and research. Finally, the physical principles, design, and hardware implementation plans for the major IOTA experiments are also discussed. 15. REFLECT: a program to integrate the wave equation through a plane stratified plasma International Nuclear Information System (INIS) Greene, J.W. 1975-01-01 A program was developed to integrate the wave equation through a plane stratified plasma with a general density distribution. The reflection and transmission of a plane wave are computed as a function of the angle of incidence. The polarization of the electric vector is assumed to be perpendicular to the plane of incidence. The model for absorption by classical inverse bremsstrahlung avoids the improper extrapolation of underdense formulae that are singular at the plasma critical surface. Surprisingly good agreement with the geometric-optics analysis of a linear layer was found. The system of ordinary differential equations is integrated by the variable-step, variable-order Adams method in the Lawrence Livermore Laboratory Gear package. 
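As a rough illustration of the kind of calculation the REFLECT abstract describes (not the REFLECT code itself), the sketch below integrates the one-dimensional wave (Helmholtz) equation E'' + k0^2 (1 - n(x)/nc) E = 0 through a linear density ramp at normal incidence. A plain fixed-step RK4 integrator stands in for the variable-step, variable-order Adams method of the Gear package; the wavenumber, scale length, and evanescent-side seeding are all illustrative assumptions.

```python
import math

# Sketch: integrate E'' + k^2(x) E = 0 through a linear density ramp
# n/nc = x/L, with the critical surface (k^2 = 0) at x = L. Absorption
# is omitted, so the layer should reflect totally: |R| = 1.

K0 = 20.0  # vacuum wavenumber, in units of 1/L (assumed value)
L = 1.0    # density scale length (assumed value)

def ksq(x):
    """Local k^2; negative (evanescent) beyond the critical surface."""
    return K0 * K0 * (1.0 - x / L)

def rk4_step(x, y, h):
    # y = (E, E'); the system is E' = y[1], E'' = -k^2(x) * E.
    def f(x, y):
        return (y[1], -ksq(x) * y[0])
    k1 = f(x, y)
    k2 = f(x + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
    k3 = f(x + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
    k4 = f(x + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
    return (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def reflection_coefficient(n_steps=4000):
    # Seed with the decaying (evanescent) solution past the critical
    # surface and integrate backward toward the vacuum boundary at x = 0.
    x = 1.5 * L
    kappa = math.sqrt(-ksq(x))
    y = (1.0, -kappa)  # locally E ~ exp(-kappa * x)
    h = -x / n_steps
    for _ in range(n_steps):
        y = rk4_step(x, y, h)
        x += h
    # Decompose E(0) into incident A*exp(i k0 x) and reflected B*exp(-i k0 x).
    E, dE = y
    A = (E - 1j * dE / K0) / 2
    B = (E + 1j * dE / K0) / 2
    return B / A

R = reflection_coefficient()
```

With no absorption model included, the field is real and the magnitude of the reflection coefficient comes out as exactly 1 (total reflection at the critical surface); adding an inverse-bremsstrahlung term, as REFLECT does, would make |R| < 1.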
Parametric studies of the absorption are summarized, and some possibilities for further development of the code are discussed. (auth) 16. Integrated Plant Safety Assessment, Systematic Evaluation Program: Yankee Nuclear Power Station (Docket No. 50-29) International Nuclear Information System (INIS) 1987-10-01 The US Nuclear Regulatory Commission (NRC) has prepared Supplement 1 to the final Integrated Plant Safety Assessment Report (IPSAR) (NUREG-0825), under the scope of the Systematic Evaluation Program (SEP), for Yankee Atomic Electric Company's Yankee Nuclear Power Station located in Rowe, Massachusetts. The SEP was initiated by the NRC to review the design of older operating nuclear power plants to reconfirm and document their safety. This report documents the review completed under the SEP for those issues that required refined engineering evaluations or the continuation of ongoing evaluations after the Final IPSAR for the Yankee plant was issued. The review has provided for (1) an assessment of the significance of differences between current technical positions on selected safety issues and those that existed when Yankee was licensed, (2) a basis for deciding how these differences should be resolved in an integrated plant review, and (3) a documented evaluation of plant safety. 2 tabs 17. Patient-centered care in cancer treatment programs: the future of integrative oncology through psychoeducation. Science.gov (United States) Garchinski, Christina M; DiBiase, Ann-Marie; Wong, Raimond K; Sagar, Stephen M 2014-12-01 The reciprocal relationship between the mind and body has been a neglected process for improving the psychosocial care of cancer patients. Emotions form an important link between the mind and body. They play a fundamental role in the cognitive functions of decision-making and symptom control. Recognizing this relationship is important for integrative oncology. 
We define psychoeducation as the teaching of self-evaluation and self-regulation of the mind-body process. A gap exists between research evidence and implementation into clinical practice. The patients' search for self-empowerment through the pursuit of complementary therapies may be a surrogate for inadequate psychoeducation. Integrative oncology programs should implement psychoeducation that helps patients to improve both emotional and cognitive intelligence, enabling them to better negotiate cancer treatment systems. 18. Covenant model of corporate compliance. "Corporate integrity" program meets mission, not just legal, requirements. Science.gov (United States) Tuohey, J F 1998-01-01 Catholic healthcare should establish comprehensive compliance strategies, beyond following Medicare reimbursement laws, that reflect mission and ethics. A covenant model of business ethics--rather than a self-interest emphasis on contracts--can help organizations develop a creed to focus on obligations and trust in their relationships. The corporate integrity program (CIP) of Mercy Health System Oklahoma promotes its mission and interests, educates and motivates its employees, provides assurance of systemwide commitment, and enforces CIP policies and procedures. Mercy's creed, based on its mission statement and core values, articulates responsibilities regarding patients and providers, business partners, society and the environment, and internal relationships. The CIP is carried out through an integrated network of committees, advocacy teams, and an expanded institutional review board. Two documents set standards for how Mercy conducts external affairs and clarify employee codes of conduct. 19. An object-oriented programming system for the integration of internet-based bioinformatics resources. Science.gov (United States) Beveridge, Allan 2006-01-01 The Internet consists of a vast inhomogeneous reservoir of data. 
Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data. 20. Integrating interdisciplinary pain management into primary care: development and implementation of a novel clinical program. Science.gov (United States) Dorflinger, Lindsey M; Ruser, Christopher; Sellinger, John; Edens, Ellen L; Kerns, Robert D; Becker, William C 2014-12-01 The aims of this study were to develop and implement an interdisciplinary pain program integrated in primary care to address stakeholder-identified gaps. Program development and evaluation project utilizing a Plan-Do-Study-Act (PDSA) approach to address the identified problem of insufficient pain management resources within primary care. A large Healthcare System within the Veterans Health Administration, consisting of two academically affiliated medical centers and six community-based outpatient clinics. An interprofessional group of stakeholders participated in a Rapid Process Improvement Workshop (RPIW), a consensus-building process to identify systems-level gaps and feasible solutions and obtain buy-in.
Changes were implemented in 2012, and in a 1-year follow-up, we examined indicators of engagement in specialty and multimodal pain care services as well as patient and provider satisfaction. In response to identified barriers, RPIW participants proposed and outlined two readily implementable, interdisciplinary clinics embedded within primary care: 1) the Integrated Pain Clinic, providing in-depth assessment and triage to targeted resources; and 2) the Opioid Reassessment Clinic, providing assessment and structured monitoring of patients with evidence of safety, efficacy, or misuse problems with opioids. Implementation of these programs led to higher rates of engagement in specialty and multimodal pain care services; patients and providers reported satisfaction with these services. Our PDSA cycle engaged an interprofessional group of stakeholders that recommended introduction of new systems-based interventions to better integrate pain resources into primary care to address reported barriers. Early data suggest improved outcomes; examination of additional outcomes is planned. Wiley Periodicals, Inc. 1. Towards the integration of mental practice in rehabilitation programs. A critical review Directory of Open Access Journals (Sweden) Francine Malouin 2013-09-01 Full Text Available Many clinical studies have investigated the use of mental practice (MP) through motor imagery (MI) to enhance functional recovery of patients with diverse physical disabilities. Although beneficial effects have been generally reported for training motor functions in persons with chronic stroke (e.g. reaching, writing, walking), attempts to integrate MP within rehabilitation programs have been met with mixed results. These findings have stirred further questioning about the value of MP in neurological rehabilitation.
In fact, despite abundant systematic reviews, which customarily focused on the methodological merits of selected studies, several questions about factors underlying observed effects remain to be addressed. This review discusses these issues in an attempt to identify factors likely to hamper the integration of MP within rehabilitation programs. First, the rationale underlying the use of MP for training motor function is briefly reviewed. Second, three modes of MI delivery are proposed based on the analysis of the research protocols from 27 studies in persons with stroke and Parkinson’s disease. Third, for each mode of MI delivery, a general description of MI training is provided. Fourth, the review discusses factors influencing MI training outcomes, such as: adherence to MI training; the amount of training and the interaction between physical and mental rehearsal; the use of relaxation; the selection of reliable, valid, and sensitive outcome measures; the heterogeneity of the patient groups; the selection of patients; and the mental rehearsal procedures. To conclude, the review proposes a framework for integrating MP in rehabilitation programs and suggests research targets for steering the implementation of MP in the early stages of the rehabilitation process. The challenge has now shifted towards the demonstration that MI training can enhance the effects of regular therapy in persons with subacute stroke during the period of 2. Archive of Core and Site/Hole Data and Photographs from the Integrated Ocean Drilling Program (IODP) Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The US Science Operator for the Integrated Ocean Drilling Program (IODP) operated the drilling vessel JOIDES Resolution from 2004-2013 for worldwide expeditions... 3. Integration of TGS and CTEN assays using the CTENFIT analysis and databasing program International Nuclear Information System (INIS) Estep, R.
2000-01-01 The CTENFIT program, written for Windows 9x/NT in C++, performs databasing and analysis of combined thermal/epithermal neutron (CTEN) passive and active neutron assay data and integrates that with isotopics results and gamma-ray data from methods such as tomographic gamma scanning (TGS). The binary database is reflected in a companion Excel database that allows extensive customization via Visual Basic for Applications macros. Automated analysis options make the analysis of the data transparent to the assay system operator. Various record browsers and information displays simplified record keeping tasks 4. Integrated Data Collection Analysis (IDCA) Program — Bullseye® Smokeless Powder Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Phillips, Jason J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelley, Timothy J. [Bureau of Alcohol, Tobacco and Firearms, Redstone Arsenal, AL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab.
(LLNL), Livermore, CA (United States) 2013-05-30 The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of Bullseye® smokeless powder (Gunpowder). The participants found the Gunpowder: 1) to have a range of sensitivity to impact, from less sensitive than RDX to almost as sensitive as PETN, 2) to be moderately sensitive to BAM and ABL friction, 3) to have a range for ESD, from insensitive to more sensitive than PETN, and 4) to have thermal sensitivity about the same as PETN and RDX. 5. Integrated Data Collection Analysis (IDCA) Program - RDX Standard Data Set 2 Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Phillips, Jason J. [Air Force Research Lab. (AFRL), Tyndall Air Force Base, FL (United States); Shelley, Timothy J. [Applied Research Associates, Tyndall Air Force Base, FL (United States); Reyes, Jose A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab.
(LLNL), Livermore, CA (United States) 2013-02-20 The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard, from testing the second time in the Proficiency Test. Compared with the first round of testing (Set 1), this RDX testing (Set 2) was found to have about the same impact sensitivity, more BAM friction sensitivity, less ABL friction sensitivity, similar ESD sensitivity, and the same DSC sensitivity. 6. Integrating Service-Learning and International Study into the Traditional Degree Programs OpenAIRE Newcomer, Quint 2010-01-01 In 2001, the University of Georgia Foundation made a significant commitment to expanding the opportunity for study abroad at UGA when it purchased a 155‐acre farm and built a new education and research center in San Luis de Monteverde, Costa Rica. UGA Costa Rica collaborates with departments and schools across the University to offer study abroad programs with courses directly related to major areas of study and that also integrate service‐learning as a central component of the overall ... 7. Reliability and integrity management program for PBMR helium pressure boundary components - HTR2008-58036 International Nuclear Information System (INIS) Fleming, K. N.; Gamble, R.; Gosselin, S.; Fletcher, J.; Broom, N. 2008-01-01 The purpose of this paper is to present the results of a study to establish strategies for the reliability and integrity management (RIM) of passive metallic components for the PBMR. The RIM strategies investigated include design elements, leak detection and testing approaches, and non-destructive examinations. Specific combinations of strategies are determined to be necessary and sufficient to achieve target reliability goals for passive components.
This study recommends a basis for the RIM program for the PBMR Demonstration Power Plant (DPP) and provides guidance for the development by the American Society of Mechanical Engineers (ASME) of RIM requirements for Modular High Temperature Gas-Cooled Reactors (MHRs). (authors) 8. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Quality Assurance Manual Energy Technology Data Exchange (ETDEWEB) C. L. Smith; R. Nims; K. J. Kvarfordt; C. Wharton 2008-08-01 The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment using a personal computer running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC). The role of the INL in this project is that of software developer and tester. This development takes place using formal software development procedures and is subject to quality assurance (QA) processes. The purpose of this document is to describe how the SAPHIRE software QA is performed for Versions 6 and 7, what constitutes its parts, and the limitations of those processes. 9. Present status of an integrated software system for HASP (Human Acts Simulation Program) International Nuclear Information System (INIS) Otani, Takayuki; Ebihara, Ken-ichi; Kambayashi, Shaw; Kume, Etsuo; Higuchi, Kenji; Fujii, Minoru; Akimoto, Masayuki 1994-01-01 In the Human Acts Simulation Program (HASP), human acts to be realized by a human-shaped intelligent robot in a nuclear power plant are simulated by computers. The major purpose of HASP is to develop basic and underlying design technologies for intelligent and automated power plants.
The objective of this paper is to show the present status of the HASP, with particular emphasis on activities targeted at the integration of developed subsystems to simulate the important capabilities of the intelligent robot, such as planning, robot dynamics, and so on. (author) 10. Information Management Architecture for an Integrated Computing Environment for the Environmental Restoration Program. Volume 1, Introduction International Nuclear Information System (INIS) 1994-09-01 As part of the Environmental Restoration Program at Martin Marietta, IEM (Information Engineering Methodology) was developed as part of a complete and integrated approach to the progressive development and subsequent maintenance of automated data sharing systems. This approach is centered around the organization's objectives, inherent data relationships and business practices. IEM provides the Information Systems community with a tool kit of disciplined techniques supported by automated tools. It includes seven stages: Information Strategy Planning; Business Area Analysis; Business System Design; Technical Design; Construction; Transition; Production 11. The VIS-AD data model: Integrating metadata and polymorphic display with a scientific programming language Science.gov (United States) Hibbard, William L.; Dyer, Charles R.; Paul, Brian E. 1994-01-01 The VIS-AD data model integrates metadata about the precision of values, including missing data indicators and the way that arrays sample continuous functions, with the data objects of a scientific programming language. The data objects of this data model form a lattice, ordered by the precision with which they approximate mathematical objects. We define a similar lattice of displays and study visualization processes as functions from data lattices to display lattices. Such functions can be applied to visualize data objects of all data types and are thus polymorphic. 12.
Advanced reactor development: The LMR integral fast reactor program at Argonne International Nuclear Information System (INIS) Till, C.E. 1990-01-01 Reactor technology for the 21st Century must develop with characteristics that can now be seen to be important for the future, quite different from the considerations that shaped the fundamental materials and design choices for present reactors in the 1950s. Argonne National Laboratory, since 1984, has been developing the Integral Fast Reactor (IFR). This paper will describe the way in which this new reactor concept came about; the technical, public acceptance, and environmental issues that are addressed by the IFR; the technical progress that has been made; and our expectations for this program in the near term. 3 figs 13. Advanced Recovery and Integrated Extraction System (ARIES) program plan. Rev. 1 International Nuclear Information System (INIS) Nelson, T.O.; Massey, P.W.; Cremers, T.L. 1996-01-01 The Advanced Recovery and Integrated Extraction System (ARIES) demonstration combines various technologies, some of which were/are being developed under previous/other Department of Energy (DOE) funded programs. ARIES is an overall processing system for the dismantlement of nuclear weapon primaries. The program will demonstrate dismantlement of nuclear weapons and retrieval of the plutonium into a form that is compatible with long term storage and that is inspectable in an unclassified form appropriate for the application of traditional international safeguards. The success of the ARIES demonstration would lead to the development of transportable modular or other facility-type systems for weapons dismantlement to be used at other DOE sites as well as in other countries 14. Material protection control and accounting program activities at the Urals electrochemical integrated plant International Nuclear Information System (INIS) McAllister, S.
1997-01-01 The Urals Electrochemical Integrated Plant (UEIP) is the Russian Federation's largest uranium enrichment plant and one of three sites in Russia blending high enriched uranium (HEU) into commercial grade low enriched uranium. UEIP is located approximately 70 km north of Yekaterinburg in the closed city of Novouralsk (formerly Sverdlovsk-44). DOE's MPC&A program first met with UEIP in June 1996; however, because of contractual issues the work did not start until September 1997. The six national laboratories participating in DOE's Material Protection Control and Accounting program are cooperating with UEIP to enhance the capabilities of the physical protection, access control, and nuclear material control and accounting systems. The MPC&A work at UEIP is expected to be completed during fiscal year 2001 15. Education, outreach, and inclusive engagement: Towards integrated indicators of successful program outcomes in participatory science. Science.gov (United States) Haywood, Benjamin K; Besley, John C 2014-01-01 The use and utility of science in society is often influenced by the structure, legitimacy, and efficacy of the scientific research process. Public participation in scientific research (PPSR) is a growing field of practice aimed at enhancing both public knowledge and understanding of science (education outreach) and the efficacy and responsiveness of scientific research, practice, and policy (participatory engagement). However, PPSR objectives focused on "education outreach" and "participatory engagement" have each emerged from diverse theoretical traditions that maintain distinct indicators of success used for program development and evaluation. Although areas of intersection and overlap between these two traditions exist in theory and practice, a set of comprehensive standards has yet to coalesce that supports the key principles of both traditions in an assimilated fashion.
To fill this void, a comprehensive indicators framework is proposed with the goal of promoting a more integrative and synergistic PPSR program development and assessment process. 16. Evaluation of an integrated services program to prevent subsequent pregnancy and birth among urban teen mothers. Science.gov (United States) Patchen, Loral; Letourneau, Kathryn; Berggren, Erica 2013-01-01 This article details the evaluation of a clinical services program for teen mothers in the District of Columbia. The program's primary objectives are to prevent unintended subsequent pregnancy and to promote contraceptive utilization. We calculated contraceptive utilization at 6, 12, 18, and 24 months after delivery, as well as occurrence of subsequent pregnancy and birth. Nearly seven in ten (69.5%) teen mothers used contraception at 24 months after delivery, and 57.1% of contraceptive users elected long-acting reversible contraception. In the 24-month follow-up period, 19.3% experienced at least one subsequent pregnancy and 8.0% experienced a subsequent birth. These results suggest that an integrated clinical services model may contribute to sustained contraceptive use and may prove beneficial in preventing subsequent teen pregnancy and birth. 17. [Effects of integrated disease management program on the outcome of patients with heart failure]. Science.gov (United States) Fan, Hui-hua; Shi, Hao-ying; Jin, Wei; Zhu, Ya-juan; Huang, Dai-ni; Yan, Yi-wen; Zhu, Feng; Li, Hong-li; Liu, Jian; Liu, Shao-wen 2010-07-01 To investigate the feasibility and efficacy on the outcome of patients with heart failure of integrated disease management program with heart failure clinic, patient education and telephone follow-up. A total of 145 hospitalized patients with chronic heart failure and LVEF ≤ 45% or patients with LVEF > 45% and NT-proBNP > 1500 ng/L were divided into conventional group (n = 71) and interventional group (n = 74). Patients were followed for 10 to 12 months. 
Baseline clinical characteristics, LVEF, and doses of evidence-based medications were similar between the 2 groups. During follow-up, the NYHA functional class was higher in the conventional group than in the interventional group (3.2 ± 0.5 vs 1.4 ± 0.5). An integrated disease management program with a heart failure clinic, patient education, and telephone follow-up can improve patient compliance with heart failure treatment, improve cardiac function, and reduce the cardiovascular event rate. 18. Integration of national and regional energy development programs in Baltic States International Nuclear Information System (INIS) Klevas, V.; Antinucci, M. 2004-01-01 The report is dedicated to the presentation of the general framework of regional energy planning activities in the Baltic States. The objective is to provide information on the context in which regional energy policy instruments have to operate, and which has to be taken into consideration when compiling energy development measures for regional development and structural funds. The major issue of the publication is the methodology for integrating energy management into the development of regional planning documents. The main objective of this publication is to make a brief overview of the prospects for regional energy development. The place of municipal and regional energy development programs in the general energy investment strategy is defined. The guidelines for regional energy programs are presented 19. EBR-2 [Experimental Breeder Reactor-2], IFR [Integral Fast Reactor] prototype testing programs International Nuclear Information System (INIS) Lehto, W.K.; Sackett, J.I.; Lindsay, R.W.; Planchon, H.P.; Lambert, J.D.B. 1990-01-01 The Experimental Breeder Reactor-2 (EBR-2) is a sodium-cooled power reactor supplying about 20 MWe to the Idaho National Engineering Laboratory (INEL) grid and, in addition, is the key component in the development of the Integral Fast Reactor (IFR).
EBR-2's testing capability is extensive and has seen four major phases: (1) demonstration of LMFBR power plant feasibility, (2) irradiation testing for fuel and material development, (3) testing the off-normal performance of fuel and plant systems, and (4) operation as the IFR prototype, developing and demonstrating the IFR technology associated with fuel and plant design. Specific programs being carried out in support of the IFR include advanced fuels and materials development and component testing. This paper discusses EBR-2 as the IFR prototype and the associated testing programs. 29 refs 20. Evaluation of the Corporate Social Responsibility Program "Organic Integrated System" of PT. Pembangkitan Jawa-Bali Unit Pembangkitan Paiton OpenAIRE Harianto, Ruth Carissa 2016-01-01 This research was conducted to evaluate the Corporate Social Responsibility program "Organic Integrated System" run by the general affairs and CSR division of PT. Pembangkitan Jawa-Bali Unit Pembangkitan Paiton. In implementing the "Organic Integrated System" Corporate Social Responsibility program, PT. Pembangkitan Jawa-Bali Unit Pembangkitan Paiton cooperates with the non-governmental organization Sekola Konang and the Suko Tani group as its target publics. This research uses ... 1. Development of the Integrated Performance Evaluation Program (IPEP) for the Department of Energy's Office of Environmental Management International Nuclear Information System (INIS) Lindahl, P.; Streets, E.; Bass, D.; Hensley, J.; Newberry, R.; Carter, M. 1995-01-01 Argonne National Laboratory (ANL) and DOE's Radiological and Environmental Sciences Laboratory (RESL), Environmental Measurements Laboratory (EML), and Grand Junction Project Office (GJPO) are collaborating with DOE's Office of Environmental Management (EM), Analytical Services Division (ASD, EM-263) and the Environmental Protection Agency (EPA) to develop an Integrated Performance Evaluation Program (IPEP).
The purpose of the IPEP is to integrate information from existing PE programs with expanded QA activities to develop information about the quality of radiological, mixed waste, and hazardous environmental sample analyses provided by all laboratories supporting EM programs. The IPEP plans to utilize existing PE programs when available and appropriate for use by DOE; new PE programs will be developed only when no existing program meets DOE's needs. Interagency Agreements have been developed between EPA and DOE to allow DOE to use major existing PE programs developed by EPA. In addition, the DOE radiological Quality Assessment Program (QAP) administered by EML is being expanded for use in EM work. RESL and GJPO are also developing the Mixed Analyte Performance Evaluation Program (MAPEP) to provide radiological, inorganic, and organic analytes of interest to EM programs. The use of information from multiple PE programs will allow a more global assessment of an individual laboratory's performance, as well as providing a means of more fairly comparing laboratories' performances in a given analytical area. The IPEP will interact with other aspects of the ASD such as audit and methods development activities to provide an integrated system for assessment and improvement of data quality 2. Integrated Status and Effectiveness Monitoring Program - Entiat River Snorkel Surveys and Rotary Screw Trap, 2007. Energy Technology Data Exchange (ETDEWEB) Nelle, R.D. 2008-01-01 The USFWS Mid-Columbia River Fishery Resource Office conducted snorkel surveys at 24 sites during the summer and fall survey periods of 2006 as part of the Integrated Status and Effectiveness Monitoring Program in the Entiat River. A total of 37,938 fish from 15 species/genera and an unknown category were enumerated. Chinook salmon were the overall most common fish observed and comprised 15% of fish enumerated, followed by rainbow trout (10%) and mountain whitefish (7%).
Day surveys were conducted during the summer period 2007 (August), while night surveys were conducted during the fall 2007 (October) surveys. The USFWS Mid-Columbia River Fishery Resource Office (MCFRO) operated two rotary screw traps on the Entiat River as part of the Integrated Status and Effectiveness Monitoring Program (ISEMP) from August through November of 2007. Along with the smolt traps, juvenile emigrants were also captured at remote locations throughout the Entiat watershed and its major tributary, the Mad River. A total of 999 wild Oncorhynchus mykiss and 5,107 wild run O. tshawytscha were PIT tagged during the study period. Rotary screw trap efficiencies averaged 22.3% for juvenile O. tshawytscha and 9.0% for juvenile O. mykiss. Rotary screw traps operated 7 days a week and remote capture operations were conducted when flow and temperature regimes permitted. This is the third annual progress report to the Bonneville Power Administration for the snorkel surveys conducted in the Entiat River as related to long-term effectiveness monitoring of restoration programs in this watershed. The objective of this study is to monitor the fish habitat utilization of planned in-stream restoration efforts in the Entiat River by conducting pre- and post-construction snorkel surveys at selected treatment and control sites. 3. Steam generator tube integrity program: Annual report, August 1995--September 1996. Volume 2 International Nuclear Information System (INIS) Diercks, D.R.; Bakhtiari, S.; Kasza, K.E.; Kupperman, D.S.; Majumdar, S.; Park, J.Y.; Shack, W.J. 1998-02-01 This report summarizes work performed by Argonne National Laboratory on the Steam Generator Tube Integrity Program from the inception of the program in August 1995 through September 1996.
The program is divided into five tasks: (1) assessment of inspection reliability, (2) research on ISI (in-service inspection) technology, (3) research on degradation modes and integrity, (4) tube removals from steam generators, and (5) program management. Under Task 1, progress is reported on the preparation of facilities and evaluation of nondestructive evaluation techniques for inspecting a mock-up steam generator for round-robin testing, the development of better ways to correlate failure pressure and leak rate with eddy current (EC) signals, the inspection of sleeved tubes, workshop and training activities, and the evaluation of emerging NDE technology. Results are reported in Task 2 on closed-form solutions and finite-element electromagnetic modeling of EC probe responses for various probe designs and flaw characteristics. In Task 3, facilities are being designed and built for the production of cracked tubes under aggressive and near-prototypical conditions and for the testing of flawed and unflawed tubes under normal operating, accident, and severe-accident conditions. Crack behavior and stability are also being modeled to provide guidance for test facility design, develop an improved understanding of the expected rupture behavior of tubes with circumferential cracks, and predict the behavior of flawed and unflawed tubes under severe accident conditions. Task 4 is concerned with the acquisition of tubes and tube sections from retired steam generators for use in the other research tasks. Progress on the acquisition of tubes from the Salem and McGuire 1 nuclear plants is reported 4. The NADI program and the JOICFP integrated project: partners in delivering primary health care. Science.gov (United States) Arshat, H; Othman, R; Kuan Lin Chee; Abdullah, M 1985-10-01 The NADI program (pulse in Malay) was initially launched as a pilot project in 1980 in Kuala Lumpur, Malaysia. It utilized an integrated approach involving both the government and the private sectors.
By sharing resources and expertise, and by working together, the government and the people can achieve national development faster and with better results. The agencies work through a multi-level supportive structure, at the head of which is the steering committee. The NADI teams at the field level are the focal points of services from the various agencies. Members of NADI teams also work with urban poor families as well as health groups, parents-teachers associations, and other similar groups. The policy and planning functions are carried out by the steering committee, the 5 area action committees and the community action committees, while the implementation function is carried out by the area program managers and NADI teams. The chairman of each area action committee is the head of the branch office of city hall. Using intestinal parasite control as the entry point, the NADI Integrated Family Development Program has greatly helped in expanding inter-agency cooperation and exchange of experiences by a coordinated, effective and efficient resource-mobilization. The program was later expanded to other parts of the country including the industrial and estate sectors. Services provided by NADI include: comprehensive health services to promote maternal and child health; adequate water supply, proper waste disposal, construction of latrines and providing electricity; and initiating community and family development such as community education, preschool education, vocational training, family counseling and building special facilities for recreational and educational purposes. 5. Steam generator tube integrity program. Semiannual report, August 1995--March 1996 International Nuclear Information System (INIS) Diercks, D.R.; Bakhtiari, S.; Chopra, O.K. 1997-04-01 This report summarizes work performed by Argonne National Laboratory on the Steam Generator Tube Integrity Program from the inception of that program in August 1995 through March 1996. 
The program is divided into five tasks, namely (1) Assessment of Inspection Reliability, (2) Research on ISI (in-service inspection) Technology, (3) Research on Degradation Modes and Integrity, (4) Development of Methodology and Technical Requirements for Current and Emerging Regulatory Issues, and (5) Program Management. Under Task 1, progress is reported on the preparation of and evaluation of nondestructive evaluation (NDE) techniques for inspecting a mock-up steam generator for round-robin testing, the development of better ways to correlate burst pressure and leak rate with eddy current (EC) signals, the inspection of sleeved tubes, workshop and training activities, and the evaluation of emerging NDE technology. Under Task 2, results are reported on closed-form solutions and finite element electromagnetic modeling of EC probe response for various probe designs and flaw characteristics. Under Task 3, facilities are being designed and built for the production of cracked tubes under aggressive and near-prototypical conditions and for the testing of flawed and unflawed tubes under normal operating, accident, and severe accident conditions. In addition, crack behavior and stability are being modeled to provide guidance on test facility design, to develop an improved understanding of the expected rupture behavior of tubes with circumferential cracks, and to predict the behavior of flawed and unflawed tubes under severe accident conditions. Task 4 is concerned with the cracking and failure of tubes that have been repaired by sleeving, and with a review of literature on this subject 6. US Department of Energy Mixed Waste Integrated Program performance systems analysis International Nuclear Information System (INIS) Ferrada, J.J.; Berry, J.B. 1994-01-01 The primary goal of this project is to support decision making for the U.S. Department of Energy (DOE)/EM-50 Mixed Waste Integrated Program (MWIP) and the Mixed Low-Level Waste Program.
A systems approach to the assessment of enhanced waste form(s) production will be employed, including coordination and configuration management of activities in specific technology development tasks. The purpose of this paper is to describe the development and application of a methodology for implementing a performance systems analysis on mixed waste treatment process technologies. The second section describes a conventional approach to process systems analysis, followed by a methodology to estimate uncertainties when analyzing innovative technologies. Principles from these methodologies have been used to develop a performance systems analysis for MWIP. The third section describes the systems analysis tools. The fourth section explains how the performance systems analysis will be used to analyze MWIP process alternatives. The fifth and sixth sections summarize this paper and describe future work for this project. Baseline treatment process technologies (i.e., commercially available technologies) and waste management strategies are evaluated systematically using the ASPEN PLUS program applications developed by the DOE Mixed Waste Treatment Project (MWTP). Alternatives to the baseline (i.e., technologies developed by DOE's Office of Technology Development) are analyzed using FLOW, a user-friendly program developed at Oak Ridge National Laboratory (ORNL). Currently, this program is capable of calculating rough order-of-magnitude mass and energy balances to assess the performance of the alternative technologies as compared to the baseline process. In the future, FLOW will be capable of communicating information to the ASPEN PLUS program 7. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE), Version 5.0 International Nuclear Information System (INIS) Russell, K.D.; Kvarfordt, K.J.; Hoffman, C.L.
1995-10-01 The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Graphical Evaluation Module (GEM) is a special application tool designed for evaluation of operational occurrences using the Accident Sequence Precursor (ASP) program methods. GEM provides the capability for an analyst to quickly and easily perform conditional core damage probability (CCDP) calculations. The analyst can then use the CCDP calculations to determine if the occurrence of an initiating event or a condition adversely impacts safety. It uses models and data developed in SAPHIRE specifically for the ASP program. GEM requires more data than that normally provided in SAPHIRE and will not perform properly with other models or databases. This is the first release of GEM, and the developers of GEM welcome user comments and feedback that will generate ideas for improvements to future versions. GEM is designated as version 5.0 to track the GEM codes along with the other SAPHIRE codes, as GEM relies on the same shared database structure 8. Integration DEFF Research Database (Denmark) Emerek, Ruth 2004-01-01 The contribution discusses the different conceptions of integration in Denmark - and what can be understood by successful integration. 9. Establishing an Integrative Medicine Program Within an Academic Health Center: Essential Considerations.
Science.gov (United States) Eisenberg, David M; Kaptchuk, Ted J; Post, Diana E; Hrbek, Andrea L; O'Connor, Bonnie B; Osypiuk, Kamila; Wayne, Peter M; Buring, Julie E; Levy, Donald B 2016-09-01 Integrative medicine (IM) refers to the combination of conventional and "complementary" medical services (e.g., chiropractic, acupuncture, massage, mindfulness training). More than half of all medical schools in the United States and Canada have programs in IM, and more than 30 academic health centers currently deliver multidisciplinary IM care. What remains unclear, however, is the ideal delivery model (or models) whereby individuals can responsibly access IM care safely, effectively, and reproducibly in a coordinated and cost-effective way. Current models of IM across existing clinical centers vary tremendously in their organizational settings, principal clinical focus, and services provided; practitioner team composition and training; incorporation of research activities and educational programs; and administrative organization (e.g., reporting structure, use of medical records, scope of clinical practice) and financial strategies (i.e., specific business plans and models for sustainability). In this article, the authors address these important strategic issues by sharing lessons learned from the design and implementation of an IM facility within an academic teaching hospital, the Brigham and Women's Hospital at Harvard Medical School, and review alternative options based on information about IM centers across the United States. The authors conclude that there is currently no consensus as to how integrative care models should be optimally organized, implemented, replicated, assessed, and funded. The time may be right for prospective research in "best practices" across emerging models of IM care nationally in an effort to standardize, refine, and replicate them in preparation for rigorous cost-effectiveness evaluations. 10.
SAMPLE RESULTS FROM THE INTEGRATED SALT DISPOSITION PROGRAM MACROBATCH 4 TANK 21H QUALIFICATION SAMPLES Energy Technology Data Exchange (ETDEWEB) Peters, T.; Fink, S. 2011-06-22 Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H to qualify them for use in the Integrated Salt Disposition Program (ISDP) Batch 4 processing. All sample results agree with expectations based on prior analyses where available. No issues with the projected Salt Batch 4 strategy are identified. This revision includes additional data points that were not available in the original issue of the document, such as additional plutonium results, the results of the monosodium titanate (MST) sorption test, and the extraction, scrub, and strip (ESS) test. This report covers the revision to the Tank 21H qualification sample results for Macrobatch (Salt Batch) 4 of the Integrated Salt Disposition Program (ISDP). A previous document covers initial characterization, which includes results for a number of non-radiological analytes. These results were used to perform aluminum solubility modeling to determine the hydroxide needs for Salt Batch 4 to prevent the precipitation of solids. Sodium hydroxide was then added to Tank 21 and additional samples were pulled for the analyses discussed in this report. This work was specified by Task Technical Request and by Task Technical and Quality Assurance Plan (TTQAP). 11. A process for integrating public involvement into technical/social programs International Nuclear Information System (INIS) Wiltshire, S.; Williams, C. 1994-01-01 Good technical/social decisions--those that are technically sound and publicly acceptable--result from a planning process that considers consulting the public a basic part of the technical program, as basic as hiring a technical consultant to advise about new ideas in computer modeling.
This paper describes a specific process for making public involvement an integral part of decision-making about high-level radioactive waste management, so that important technical, social, environmental, economic, and cultural information and values can be incorporated in a meaningful way in planning and carrying out a high-level waste management program or project. The process for integration must consider: (a) the decision or task for which public interaction is needed; (b) the people who should or will want to participate in the decision or task; (c) the goals or purposes of the communication or interaction--the agency's and the public's; (d) the kinds of information the public needs and that the agency needs in order to understand the relevant technical and social issues; and (e) the types of communication or involvement that best serve to meet the agency's and the public's goals 12. Development of a 3-D flow analysis computer program for integral reactor International Nuclear Information System (INIS) Youn, H. Y.; Lee, K. H.; Kim, H. K.; Whang, Y. D.; Kim, H. C. 2003-01-01 A 3-D computational fluid dynamics program, TASS-3D, is being developed for the flow analysis of a primary coolant system that consists of complex geometries, such as that of SMART. A pre/post-processor is also being developed to reduce pre/post-processing work such as computational grid generation, setting up the analysis conditions, and analyzing the calculated results. The TASS-3D solver employs a non-orthogonal coordinate system and a finite volume method (FVM) based on a non-staggered grid system. The program includes various models to simulate the physical phenomena expected to occur in the integral reactor and will be coupled with the core dynamics code, the core T/H code, and the secondary system code modules.
Currently, the application of TASS-3D is limited to single-phase liquid flow, but in the next stage the code will be further developed to include the two-phase phenomena expected during normal operation and the various transients of the integral reactor 13. Integrating between Malay culture and conservation in Green campus program: Best practices from Universitas Riau, Indonesia Science.gov (United States) Suwondo, Darmadi, Yunus, Mohd. 2017-11-01 The Green campus program (GCP) is a policy to optimize the role of the University of Riau in implementing sustainable development. Green campus development is done by integrating Malay culture and conservation in every implementation of the program. We identify the biophysical, economic, and socio-cultural characteristics as well as the problems encountered in the campus environment. This study uses desk study, survey, and focus group discussion (FGD). GCP analysis is divided into several stages, namely assessing the problem, design, implementation, monitoring, evaluation, and adjustment. The Bina Widya Campus of Universitas Riau has a good biodiversity of flora and fauna, with species characteristic of lowland tropical forest ecosystems. Plant species of the Dipterocarpaceae family are the dominant species, whereas the fauna comprises reptiles, aves (birds), and mammals. Efforts to maintain and enhance species diversity are undertaken by designing and constructing an Arboretum and an Ecoedupark for the ex situ conservation of flora and fauna. The enrichment of species is carried out by planting vegetation types that are closely related to Malay culture. On the other hand, the management of the green campus faces challenges in the diverse perceptions of stakeholders and low levels of academic participation. Economically, the existence of the campus provides a multiplier effect on the emergence of various economic activities of the community around the campus.
Implementation of the green university campus of Riau University by integrating Malay culture and conservation contributes to the creation of green open space, which is increasingly widespread and able to support sustainable development, especially in Pekanbaru City. 14. Education and leisure: analyzing the Integrated School Program in Belo Horizonte Directory of Open Access Journals (Sweden) Marcília de Sousa Silva 2015-01-01 Full Text Available This article aims to analyze the concepts of leisure and education that permeate the documents in the Integrated School Program in Belo Horizonte. The analysis was based on the Policy cycle approach and emphasized the contexts of influence and the policy text production. Thus, the formation of the political agenda, the Political Pedagogical Project of the Program and the Strategic Plan 2010-2030 BH were investigated. The policy context is not organized in a linear fashion; it is a process of interaction among interest groups. With the discourse of coping with school failure, revealed by the students' yield and flow evaluation indices (approval, repetition, and dropout), the Integrated School education documents announce education and leisure as forms of production, strengthening links between public and private. The right to education is restricted to children's and youths' access to and permanence in school, without creating a perspective of universalization and quality. The documents address leisure with a simplistic view of construction and maintenance of equipment and the idea of activity 15. BEfree: A new psychological program for binge eating that integrates psychoeducation, mindfulness, and compassion. Science.gov (United States) Pinto-Gouveia, José; Carvalho, Sérgio A; Palmeira, Lara; Castilho, Paula; Duarte, Cristiana; Ferreira, Cláudia; Duarte, Joana; Cunha, Marina; Matos, Marcela; Costa, Joana 2017-09-01 Binge eating disorder (BED) is associated with several psychological and medical problems, such as obesity.
Approximately 30% of individuals seeking weight loss treatments present binge eating symptomatology. Moreover, current treatments for BED lack efficacy at follow-up assessments. Developing mindfulness and self-compassion seems to be beneficial in treating BED, although there is still room for improvement, which may include integrating these different but complementary approaches. BEfree is the first program integrating psychoeducation-, mindfulness-, and compassion-based components for treating women with binge eating and obesity. To test the acceptability and efficacy up to 6-month postintervention of a psychological program based on psychoeducation, mindfulness, and self-compassion for obese or overweight women with BED. A controlled longitudinal design was followed in order to compare results between BEfree (n = 19) and a waiting-list group (WL; n = 17) from preintervention to postintervention. Results from BEfree were compared from preintervention to 3- and 6-month follow-up. BEfree was effective in eliminating BED; in diminishing eating psychopathology, depression, shame and self-criticism, body-image psychological inflexibility, and body-image cognitive fusion; and in improving obesity-related quality of life and self-compassion when compared to a WL control group. Results were maintained at 3- and 6-month follow-up. Finally, participants rated BEfree helpful for dealing with impulses and negative internal experiences. These results seem to suggest the efficacy of BEfree and the benefit of integrating different components such as psychoeducation, mindfulness, and self-compassion when treating BED in obese or overweight women. The current study provides evidence of the acceptability of a psychoeducation, mindfulness, and compassion program for binge eating in obesity (BEfree); Developing mindfulness and self-compassionate skills is an effective way of 16.
Fellowship Program in Health System Improvement: A novel approach integrating leadership development and patient-centred health system transformation. Science.gov (United States) Philippon, Donald J; Montesanti, Stephanie; Stafinski, Tania 2018-03-01 This article highlights a novel approach to professional development, integrating leadership development and patient-centred health system transformation in the new Fellowship Program in Health System Improvement offered by the School of Public Health at the University of Alberta. Early assessment of the program is also provided. 17. Evaluation of NSF's Program of Grants and Vertical Integration of Research and Education in the Mathematical Sciences (VIGRE) Science.gov (United States) National Academies Press, 2009 2009-01-01 In 1998, the National Science Foundation (NSF) launched a program of Grants for Vertical Integration of Research and Education in the Mathematical Sciences (VIGRE). These grants were designed for institutions with PhD-granting departments in the mathematical sciences, for the purpose of developing high-quality education programs, at all levels,… 18. Integrative Curriculum Development in Nuclear Education and Research Vertical Enhancement Program International Nuclear Information System (INIS) Egarievwe, Stephen U.; Jow, Julius O.; Edwards, Matthew E.; Montgomery, V. Trent; James, Ralph B.; Blackburn, Noel D.; Glenn, Chance M. 2015-01-01 Using a vertical education enhancement model, a Nuclear Education and Research Vertical Enhancement (NERVE) program was developed. The NERVE program is aimed at developing nuclear engineering education and research to 1) enhance skilled workforce development in disciplines relevant to nuclear power, national security and medical physics, and 2) increase the number of students and faculty from underrepresented groups (women and minorities) in fields related to the nuclear industry.
The program uses multi-track training activities that vertically cut across several education domains: undergraduate degree programs, graduate schools, and post-doctoral training. In this paper, we present the results of an integrative curriculum development in the NERVE program. The curriculum development began with nuclear content infusion into existing science, engineering and technology courses. The second step involved the development of nuclear engineering courses: 1) Introduction to Nuclear Engineering, 2) Nuclear Engineering I, and 3) Nuclear Engineering II. The third step is the establishment of nuclear engineering concentrations in two engineering degree programs: 1) electrical engineering, and 2) mechanical engineering. A major outcome of the NERVE program is a collaborative infrastructure that uses laboratory work, internships at nuclear facilities, on-campus research, and mentoring in collaboration with industry and government partners to provide hands-on training for students. The major activities of the research and education collaborations include: - One-week spring training workshop at Brookhaven National Laboratory: The one-week training and workshop is used to enhance research collaborations and train faculty and students on user facilities/equipment at Brookhaven National Laboratory, and for summer research internships. Participants included students, faculty members at Alabama A and M University and research collaborators at BNL. The activities include 1) tour and 19. SAPHIRE6.64, System Analysis Programs for Hands-on Integrated Reliability International Nuclear Information System (INIS) 2001-01-01 1 - Description of program or function: SAPHIRE is a collection of programs developed for the purpose of performing those functions necessary to create and analyze a complete Probabilistic Risk Assessment (PRA) primarily for nuclear power plants.
The programs included in this suite are the Integrated Reliability and Risk Analysis System (IRRAS), the System Analysis and Risk Assessment (SARA) system, the Models And Results Database (MAR-D) system, and the Fault tree, Event tree and P and ID (FEP) editors. Previously, these programs were released as separate packages. These programs include functions to allow the user to create event trees and fault trees, to define accident sequences and basic event failure data, to solve system and accident sequence fault trees, to quantify cut sets, and to perform uncertainty analysis on the results. Also included in this program are features to allow the analyst to generate reports and displays that can be used to document the results of an analysis. Since this software is a very detailed technical tool, the user of this program should be familiar with PRA concepts and the methods used to perform these analyses. 2 - Methods: SAPHIRE is written in MODULA-2 and uses an integrated commercial graphics package to interactively construct and edit fault trees. The fault tree solving methods used are industry-recognized top-down algorithms. For quantification, the program uses standard methods to propagate the failure information through the generated cut sets. SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE which automates the process for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events (that is, perform a Level 1, Level 2, and Level 3 analysis for operational events) in a very efficient and expeditious manner. This on-line reference guide will 20. Integrative Curriculum Development in Nuclear Education and Research Vertical Enhancement Program Energy Technology Data Exchange (ETDEWEB) Egarievwe, Stephen U.; Jow, Julius O.; Edwards, Matthew E.; Montgomery, V.
Trent [Nuclear Engineering and Radiological Science Center, Alabama A and M University, Huntsville, AL (United States); James, Ralph B.; Blackburn, Noel D. [Nonproliferation and National Security Department, Brookhaven National Laboratory, Upton, NY (United States); Glenn, Chance M. [College of Engineering, Technology and Physical Sciences, Alabama A and M University, Huntsville, AL (United States)] 2015-07-01 Using a vertical education enhancement model, a Nuclear Education and Research Vertical Enhancement (NERVE) program was developed. The NERVE program is aimed at developing nuclear engineering education and research to 1) enhance skilled workforce development in disciplines relevant to nuclear power, national security and medical physics, and 2) increase the number of students and faculty from underrepresented groups (women and minorities) in fields related to the nuclear industry. The program uses multi-track training activities that vertically cut across several education domains: undergraduate degree programs, graduate schools, and post-doctoral training. In this paper, we present the results of an integrative curriculum development in the NERVE program. The curriculum development began with nuclear content infusion into existing science, engineering and technology courses. The second step involved the development of nuclear engineering courses: 1) Introduction to Nuclear Engineering, 2) Nuclear Engineering I, and 3) Nuclear Engineering II. The third step is the establishment of nuclear engineering concentrations in two engineering degree programs: 1) electrical engineering, and 2) mechanical engineering. A major outcome of the NERVE program is a collaborative infrastructure that uses laboratory work, internships at nuclear facilities, on-campus research, and mentoring in collaboration with industry and government partners to provide hands-on training for students.
The major activities of the research and education collaborations include: - One-week spring training workshop at Brookhaven National Laboratory: The one-week training and workshop is used to enhance research collaborations and train faculty and students on user facilities/equipment at Brookhaven National Laboratory, and for summer research internships. Participants included students, faculty members at Alabama A and M University and research collaborators at BNL. The activities include 1) tour and 1. Integrating Hydrology and Historical Geography in an Interdisciplinary Environmental Masters Program in Northern Ontario, Canada Science.gov (United States) Greer, Kirsten; James, April 2016-04-01 Research in hydrology and other sciences is increasingly calling for new collaborations that "…simultaneously explore the biogeophysical, social and economic forces that shape an increasingly human-dominated global hydrologic system…" (Vorosmarty et al. 2015, p.104). With many environmental programs designed to help students tackle environmental problems, these initiatives are not without fundamental challenges (for example, they are often developed around a single epistemology of positivism). Many environmental graduate programs provide narrow interdisciplinary training (within the sciences, or bridging to the social sciences) but do not necessarily engage with the humanities. Geography, however, has a long tradition and history of bridging the geophysical, social sciences, and humanities. In this paper, we reflect on new programming in an Interdisciplinary Master's program in Northern Ontario, Canada, inspired by the rich tradition of geography. As Canada Research Chairs trained in different geographical traditions (historical geography and hydrology), we aim to bring together approaches in the humanities and geophysical sciences to understand hydrological and environmental change over time.
We are teaching in a small, predominantly undergraduate university located in Northern Ontario, Canada, a region shaped significantly by colonial histories and resource development. The Masters of Environmental Studies/Masters of Environmental Sciences (MES/MESc) program was conceived from a decade of interdisciplinary dialogue across three undergraduate departments (Geography, Biology and Chemistry, History) to promote an understanding of both humanistic and scientific approaches to environmental issues. In the fall of 2015, as part of our 2015-2020 Canada Research Chair mandates, we introduced new initiatives to further address the integration of humanities and sciences in our graduate program. We believe the new generation of environmental scientists and practitioners 2. Integrating Spiritual Care into a Baccalaureate Nursing Program in Mainland China. Science.gov (United States) Yuan, Hua; Porr, Caroline 2014-09-01 Holistic nursing care takes into account individual, family, community and population well-being. At the level of individual well-being, the nurse considers biological, psychological, social, and spiritual factors. However, in Mainland China spiritual factors are not well understood by nursing students. Accordingly, nursing faculty and students are reluctant to broach the topic of spirituality because it is either unknown to students or students believe that the provision of spiritual care is beyond their capabilities. We wonder then, what can we do as nurse educators to integrate spiritual care into a baccalaureate nursing program in Mainland China? The purpose of this article is to propose the integration of Chinese sociocultural traditions (namely religious/spiritual practices) into undergraduate nursing curricula as a means to enter into dialogue about spiritual well-being, to promote spiritual care, and to fulfill the requirements of holistic nursing care.
However, prior to discussing recommendations, an overview of the cultural context is in order. Thus, this article is constructed as follows: first, the complexity of Chinese society is briefly described; second, the historical evolution of nursing education in Mainland China is presented; and, third, strategies to integrate Chinese religious/spiritual practices into curricula are proposed. © The Author(s) 2014. 3. Integration of complex-wide mixed low-level waste activities for program acceleration and optimization International Nuclear Information System (INIS) McKenney, D.E. 1998-01-01 In July 1996, the US Department of Energy (DOE) chartered a contractor-led effort to develop a suite of technically defensible, integrated alternatives which would allow the Environmental Management program to accomplish its mission objectives in an accelerated fashion and at a reduced cost. These alternatives, or opportunities, could then be evaluated by DOE and stakeholders for possible implementation, provided that precursor requirements (regulatory changes, etc.) could be met and benefits to the Complex realized. This contractor effort initially focused on six waste types, one of which was Mixed Low-Level Waste (MLLW). Many opportunities were identified by the contractor team for integrating MLLW activities across the DOE Complex. These opportunities were further narrowed to six that had the most promise for implementation and savings to the DOE Complex. The six opportunities are: (1) the consolidation of individual site analytical services procurement efforts, (2) the consolidation of individual site MLLW treatment services procurement efforts, (3) establishment of ''de minimis'' radioactivity levels, (4) standardization of characterization requirements, (5) increased utilization of existing DOE treatment facilities, and (6) using a combination of DOE and commercial MLLW disposal capacity.
The results of the integration effort showed that by managing MLLW activities across the DOE Complex as a cohesive unit rather than as independent site efforts, the DOE could improve the rate of progress toward meeting its objectives and reduce its overall MLLW program costs. The savings potential for MLLW, if the identified opportunities could be implemented, could total $224 million or more. Implementation of the opportunities also could result in the acceleration of the MLLW ''work off schedule'' across the DOE Complex by five years 4. The integrated performance evaluation program quality assurance guidance in support of EM environmental sampling and analysis activities International Nuclear Information System (INIS) 1994-05-01 EM's (DOE's Environmental Restoration and Waste Management) Integrated Performance Evaluation Program (IPEP) has the purpose of integrating information from existing PE programs with expanded QA activities to develop information about the quality of radiological, mixed waste, and hazardous environmental sample analyses provided by all laboratories supporting EM programs. The guidance addresses the goals of identifying specific PE sample programs and contacts, identifying specific requirements for participation in DOE's internal and external (regulatory) programs, identifying key issues relating to application and interpretation of PE materials for EM headquarters and field office managers, and providing technical guidance covering PE materials for site-specific activities. Performance evaluation (PE) materials or samples are necessary for the quality assurance/control programs covering environmental data collection 5. Development of an Integrated Performance Evaluation Program (IPEP) for the Department of Energy's Office of Environmental Restoration and Waste Management International Nuclear Information System (INIS) Streets, W.E.; Ka; Lindahl, P.C.; Bottrell, D.; Newberry, R.; Morton, S.; Karp, K.
1993-01-01 Argonne National Laboratory (ANL), in collaboration with DOE's Radiological and Environmental Sciences Laboratory (RESL), Environmental Measurements Laboratory (EML), and Grand Junction Project Office (GJPO), is working with the Department of Energy (DOE) Headquarters and the US Environmental Protection Agency (EPA) to develop the Integrated Performance Evaluation Program (IPEP). The purpose of IPEP is to integrate performance evaluation (PE) information from existing PE programs with expanded quality assurance (QA) activities to develop information about the quality of radiological, mixed waste, and hazardous environmental sample analyses provided by all laboratories supporting DOE Environmental Restoration and Waste Management (EM) programs. The IPEP plans to utilize existing PE programs when available and appropriate for use by DOE-EM; new PE programs will be developed only when no existing program meets DOE's needs 6. Integrated experimental test program on waterhammer pressure pulses and associated structural responses within a feedwater sparger Energy Technology Data Exchange (ETDEWEB) Nurkkala, P.; Hoikkanen, J. [Imatran Voima Oy, Vantaa (Finland)] 1997-12-31 This paper describes the methods and systems as utilized in an integrated experimental thermohydraulic/mechanics analysis test program on waterhammer pressure pulses within a revised feedwater sparger of a Loviisa generation VVER-440-type reactor. This program was carried out in two stages: (1) measurements with a strictly limited set of operating parameters at Loviisa NPP, and (2) measurements with the full set of operating parameters on a test article simulating the revised feedwater sparger. The experiments at Loviisa NPS served as an invaluable source of information on the nature of waterhammer pressure pulses and structural responses. These tests thus helped to set the objectives and formulate the concept for a series of tests on a test article to study the water hammer phenomena.
The heavily instrumented full-size test article of a steam generator feedwater sparger was placed within a pressure vessel simulating the steam generator. The feedwater sparger was subjected to the full range of operating parameters which were to result in waterhammer pressure pulse trains of various magnitudes and duration. Two different designs of revised feedwater sparger were investigated (i.e. grounded and with goose neck). The following objectives were to be met within this program: (1) establish the thermohydraulic parameters that facilitate the occurrence of water hammer pressure pulses, (2) provide a database for further analysis of the pressure pulse phenomena, (3) establish location and severity of these water hammer pressure pulses, (4) establish the structural response due to these pressure pulses, (5) provide input data for structural integrity analysis. (orig.). 3 refs. 7. Integrating Pregnancy Prevention Into an HIV Counseling and Testing Program in Pediatric Primary Care. Science.gov (United States) Wheeler, Noah J; Upadhya, Krishna K; Tawe, Marie-Sophie; Tomaszewski, Kathy; Arrington-Sanders, Renata; Marcell, Arik V 2018-04-11 8. Integrated experimental test program on waterhammer pressure pulses and associated structural responses within a feedwater sparger Energy Technology Data Exchange (ETDEWEB) Nurkkala, P; Hoikkanen, J [Imatran Voima Oy, Vantaa (Finland)] 1998-12-31 This paper describes the methods and systems as utilized in an integrated experimental thermohydraulic/mechanics analysis test program on waterhammer pressure pulses within a revised feedwater sparger of a Loviisa generation VVER-440-type reactor. This program was carried out in two stages: (1) measurements with a strictly limited set of operating parameters at Loviisa NPP, and (2) measurements with the full set of operating parameters on a test article simulating the revised feedwater sparger.
The experiments at Loviisa NPS served as an invaluable source of information on the nature of waterhammer pressure pulses and structural responses. These tests thus helped to set the objectives and formulate the concept for a series of tests on a test article to study the water hammer phenomena. The heavily instrumented full-size test article of a steam generator feedwater sparger was placed within a pressure vessel simulating the steam generator. The feedwater sparger was subjected to the full range of operating parameters which were to result in waterhammer pressure pulse trains of various magnitudes and duration. Two different designs of revised feedwater sparger were investigated (i.e. grounded and with goose neck). The following objectives were to be met within this program: (1) establish the thermohydraulic parameters that facilitate the occurrence of water hammer pressure pulses, (2) provide a database for further analysis of the pressure pulse phenomena, (3) establish location and severity of these water hammer pressure pulses, (4) establish the structural response due to these pressure pulses, (5) provide input data for structural integrity analysis. (orig.). 3 refs. 9. "Thinking ethics": a novel, pilot, proof-of-concept program of integrating ethics into the Physiology curriculum in South India. Science.gov (United States) D, Savitha; Vaz, Manjulika; Vaz, Mario 2017-06-01 Integrating medical ethics into the physiology teaching-learning program has been largely unexplored in India. The objective of this exercise was to introduce an interactive and integrated ethics program into the Physiology course of first-year medical students and to evaluate their perceptions. Sixty medical students (30 men, 30 women) underwent 11 sessions over a 7-mo period.
Two of the Physiology faculty conducted these sessions (20-30 min each) during the routine physiology (theory/practicals) classes that were of shorter duration and could, therefore, accommodate the discussion of related ethical issues. This exercise was in addition to the separate ethics classes conducted by the Medical Ethics department. The sessions were open-ended, student-centered, and designed to stimulate critical thinking. The students' perceptions were obtained through a semistructured questionnaire and focus group discussions. The students found the program unique, thought provoking, fully integrated, and relevant. It seldom interfered with the physiology teaching. They felt that the program sensitized them about ethical issues and prepared them for their clinical years, to be "ethical doctors." Neutral observers who evaluated each session felt that the integrated program was relevant to the preclinical year and that the program was appropriate in its content, delivery, and student involvement. An ethics course taught in integration with the Physiology curriculum was found to be beneficial, feasible, and compatible with Physiology by students as well as neutral observers. Copyright © 2017 the American Physiological Society. 10. Overview of NASA's Universe of Learning: An Integrated Astrophysics STEM Learning and Literacy Program Science.gov (United States) Smith, Denise; Lestition, Kathleen; Squires, Gordon; Biferno, Anya A.; Cominsky, Lynn; Manning, Colleen; NASA's Universe of Learning Team 2018-01-01 NASA's Universe of Learning creates and delivers science-driven, audience-driven resources and experiences designed to engage and immerse learners of all ages and backgrounds in exploring the universe for themselves.
The project is the result of a unique partnership between the Space Telescope Science Institute, Caltech/IPAC, Jet Propulsion Laboratory, Smithsonian Astrophysical Observatory, and Sonoma State University, and is one of 27 competitively selected cooperative agreements within the NASA Science Mission Directorate STEM Activation program. The NASA's Universe of Learning team draws upon cutting-edge science and works closely with Subject Matter Experts (scientists and engineers) from across the NASA Astrophysics Physics of the Cosmos, Cosmic Origins, and Exoplanet Exploration themes. Together we develop and disseminate data tools and participatory experiences, multimedia and immersive experiences, exhibits and community programs, and professional learning experiences that meet the needs of our audiences, with attention to underserved and underrepresented populations. In doing so, scientists and educators from the partner institutions work together as a collaborative, integrated Astrophysics team to support NASA objectives to enable STEM education, increase scientific literacy, advance national education goals, and leverage efforts through partnerships. Robust program evaluation is central to our efforts, and utilizes portfolio analysis, process studies, and studies of reach and impact. This presentation will provide an overview of NASA's Universe of Learning, our direct connection to NASA Astrophysics, and our collaborative work with the NASA Astrophysics science community. 11. Remote community electrification program - small wind integration in BC's offgrid communities Energy Technology Data Exchange (ETDEWEB) 2011-07-01 The paper presents the Remote Community Electrification (RCE) program and wind integration in BC's off-grid communities. The program offers electric utility service to eligible remote communities in BC. Most of them are offered off-grid service, although it is cheaper to connect a community to the grid.
BC Hydro serves some communities that are not connected to the main grid. Local diesel or small hydro-generating stations are used to serve remote communities. The renewable energy program target is to reach 50% of remote communities. The reason that wind is a small part of the renewables is that hydro and biomass are abundant in BC. Other barriers include high installation costs, durability concerns, and lack of in-house technical expertise. The small wind initiatives taken so far have been relatively few and fairly small. It can be concluded that due to a poor wind resource and the relatively low cost of diesel, there is limited potential for wind in BC remote communities. 12. Pioneering Integrated Education and Research Program in Graduate School of Engineering and its Inquiry by Questionnaire Science.gov (United States) Minamino, Yoritoshi Department of Adaptive Machine Systems, Department of Materials and Manufacturing Science and Department of Business engineering have constructed educational programs of a consecutive system from master to doctor courses in the graduate school of engineering, "Pioneering Integrated Education and Research Program (PP)", to produce volitional and original-minded researchers with high abilities of research, internationality, leadership, practice, management and economics by cooperation between them for reinforcement of their ordinary curricula. This program consists of the basic PP for master course students and the international exchange PP, leadership PP and tie-up PP of company and University for doctor course students. In 2005, the basic PP was given to the master course students, and the effectiveness of the PP was then investigated by questionnaire.
The results of the questionnaire showed that the graduate school students improved their various abilities through the practical lessons conducted in cooperation between companies and our Departments in the basic PP, and that alumni of the basic PP now working in companies appreciated its benefits for the business planning, original conception, solution finding, patent, discussion, and report-writing skills required in companies. 13. Rehabilitation Program Integrating Virtual Environment to Improve Orientation and Mobility Skills for People Who Are Blind. Science.gov (United States) Lahav, Orly; Schloerb, David W; Srinivasan, Mandayam A 2015-01-01 This paper presents the integration of a virtual environment (BlindAid) in an orientation and mobility rehabilitation program as a training aid for people who are blind. BlindAid allows the users to interact with different virtual structures and objects through auditory and haptic feedback. This research explores whether and how use of the BlindAid in conjunction with a rehabilitation program can help people who are blind train themselves in familiar and unfamiliar spaces. The study focused on nine congenitally, adventitiously, and newly blind participants during their orientation and mobility rehabilitation program at the Carroll Center for the Blind (Newton, Massachusetts, USA). The research was implemented using virtual environment (VE) exploration tasks and orientation tasks in virtual environments and real spaces. The methodology encompassed both qualitative and quantitative methods, including interviews, a questionnaire, videotape recording, and user computer logs. The results demonstrated, first, that the BlindAid training gave participants additional time to explore the virtual environment systematically. Second, it helped elucidate several issues concerning the potential strengths of the BlindAid system as a training aid for orientation and mobility for both adults and teenagers who are congenitally, adventitiously, and newly blind. 14.
Long-term student outcomes of the Integrated Nutrition and Physical Activity Program. Science.gov (United States) Puma, Jini; Romaniello, Catherine; Crane, Lori; Scarbro, Sharon; Belansky, Elaine; Marshall, Julie A 2013-01-01 15. An "Evidence-Based" Professional Development Program for Physics Teachers Focusing on Knowledge Integration Science.gov (United States) Berger, Hana This dissertation is concerned with the design and study of an evidence-based approach to the professional development of high-school physics teachers, responding to the need to develop effective continuing professional development (CPD) programs in domains that require genuine changes in teachers' views, knowledge, and practice. The goals of the thesis were to design an evidence-based model for the CPD program, to implement it with teachers, and to study its influence on teachers' knowledge, views, and practice, as well as its impact on students' learning. The program was developed in three consecutive versions: a pilot version, a first version, and a second version. Based on the pilot version (which was not part of this study), we developed the first version of the program in which we studied difficulties in employing the evidence-based and blended-learning approaches. According to our findings, we modified the strategies for enacting these approaches in the second version of the program. The influence of the program on the teachers and students was studied during the enactment of the second version of the program. The model implemented in the second version of the program was characterized by four main design principles: 1. The KI and evidence aspects are acquired simultaneously in an integrated manner. 2. The guidance of the teachers follows the principles of cognitive apprenticeship in both the evidence and KI aspects. 3. The teachers experience the innovative activities as learners. 4. The program promotes continuity of teachers' learning through a structured "blended learning" approach.
The results of our study show that this version of the program achieved its goals; throughout the program the teachers progressed in their knowledge, views, and practice concerning knowledge integration, and in the evidence and learner-centered aspects. The results also indicated that students improved their knowledge of physics and knowledge integration skills that were developed 16. [Integrity]. Science.gov (United States) Gómez Rodríguez, Rafael Ángel 2014-01-01 To say that someone possesses integrity is to claim that that person is almost predictable in responses to specific situations, and that he or she can judge prudently and act correctly. There is a close interrelationship between integrity and autonomy, and autonomy rests on the deeper moral claim of all humans to integrity of the person. Integrity has two senses of significance for medical ethics: one sense refers to the integrity of the person in its bodily, psychosocial and intellectual elements; in the second sense, integrity is a virtue. Another facet of integrity of the person is the integrity of the values we cherish and espouse. The physician must be a person of integrity if the integrity of the patient is to be safeguarded. Autonomy has reduced violations in the past, but the character and virtues of the physician are the ultimate safeguard of the autonomy of the patient. A very important field in medicine is scientific research. It is the character of the investigator that determines the moral quality of research. The problem arises when legitimate self-interests are replaced by selfish ones, particularly when human subjects are involved. The final safeguard of the moral quality of research is the character and conscience of the investigator. Teaching must be relevant in the scientific field, but the most effective way to teach virtue ethics is through the example of a respected scientist. 17.
Integrated Program of Experimental Diagnostics at the NNSS: An Integrated, Prioritized Work Plan for Diagnostic Development and Maintenance and Supporting Capability International Nuclear Information System (INIS) 2010-01-01 This Integrated Program of Experimental Diagnostics at the NNSS is an integrated, prioritized work plan for the Nevada National Security Site (NNSS), formerly the Nevada Test Site (NTS), that is independent of individual National Security Enterprise Laboratories (Labs) requests or specific Subprograms being supported. This prioritized work plan is influenced by national priorities presented in the Predictive Capability Framework (PCF) and other strategy documents (the Primary and Secondary Assessment Technologies Plans and the Plutonium Experiments Plan). This document satisfies completion criteria for FY 2010 MRT milestone No. 3496: Document an integrated, prioritized work plan for diagnostic development, maintenance, and supporting capability. This document is an update of the 3-year NNSS plan written a year earlier, on September 21, 2009, to define and understand Lab requests for diagnostic implementation. This plan is consistent with Lab interpretations of the PCF, Primary Assessment Technologies, and Plutonium Experiment plans. 18. Integrating Research and Extension for the NSF-REU Program in Water Resources Science.gov (United States) Judge, J.; Migliaccio, K.; Gao, B.; Shukla, S.; Ehsani, R.; McLamore, E. 2011-12-01 Providing positive and meaningful research experiences to students in their undergraduate years is critical for motivating them to pursue advanced degrees or research careers in science and engineering. Such experiences not only offer training for the students in problem solving and critical thinking via hands-on projects, but also offer excellent mentoring and recruiting opportunities for the faculty advisors.
The goal of the Research Experience for Undergraduates (REU) Program in the Agricultural and Biological Engineering Department (ABE) at the University of Florida (UF) is to provide eight undergraduate students a unique opportunity to conduct research in water resources using interdisciplinary approaches, integrating research and extension. The students are selected from diverse cultural and educational backgrounds. The eight-week REU Program utilizes the extensive infrastructure of UF - Institute of Food and Agricultural Sciences (IFAS) through the Research and Education Centers (RECs). Two students are paired to participate in their own project under the direct supervision of one of the four research mentors. Four of the eight students are located at the main campus, in Gainesville, FL, and the four remaining students are located off-campus, at the RECs, where some of the ABE faculty are located. The students achieve an enriching cohort experience through social networking, daily blogs, and weekly video conferences to share their research and other REU experiences. The students are co-located during the Orientation week and also during the 5-day Florida Waters Tour. Weekly group meetings and guest lectures are conducted synchronously through video conferencing. The integration of research and extension is naturally achieved through the projects at the RECs, the guest lectures, Extension workshops, and visits to the Water Management Districts in Florida. In the last two years of the Program, we have received over 80 applicants from four-year and advanced 19. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0: Integrated Reliability and Risk Analysis System (IRRAS) reference manual. Volume 2 International Nuclear Information System (INIS) Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T.; Rasmuson, D.M.
1994-07-01 The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification to report generation. Version 1.0 of the IRRAS program was released in February of 1987. Since then, many user comments and enhancements have been incorporated into the program, providing a much more powerful and user-friendly system. This version has been designated IRRAS 5.0 and is the subject of this Reference Manual. Version 5.0 of IRRAS provides the same capabilities as earlier versions, adds the ability to perform location transformations and seismic analysis, and provides enhancements to the user interface as well as improved algorithm performance. Additionally, version 5.0 contains new alphanumeric fault tree and event tree editing features used for event tree rules, recovery rules, and end state partitioning. 20. Integrated Blanket Supplementary Feeding Program Reduces Levels of Stunting in Yenangyaung, Myanmar International Nuclear Information System (INIS) Aung, Thet; Baik, Diane 2014-01-01 Full text: BACKGROUND: Yenangyaung Township is ranked top among the six poorest of the 25 townships comprising Magway Division. There is food insecurity, poor transportation, high unemployment and migration rates, widespread illiteracy, poor hygiene, and a lack of health facilities.
Along with food insecurity, high rates of malnutrition are found. In 2010, 39.5 percent of children under five years of age were found to be stunted, 18.1 percent wasted, and 28.3 percent underweight. The World Food Program (WFP) and World Vision Myanmar (WV) have been collaborating in response to the food insecurity situation in Yenangyaung since 2005 through food assistance interventions. However, in 2011, WV target villages started focusing on implementation of food activities apart from just food assistance, a more sustainable approach. Thus, the project is now focusing on maintaining the food security status of the targeted communities by strengthening capacity in agricultural techniques, alternative livelihood skills, and health/nutrition education. METHODS: This project is focused on food provision for all pregnant and lactating mothers and children under 3, according to the criteria set by WFP, as well as nutrition education in the respective villages. Township health offices, village leaders, and trained volunteers carried out the activities of the project, including health/nutrition education, food distribution, cooking demonstrations, integration of immunization and vitamin A supplementation, pre-/post-natal care, growth monitoring, counseling, and referrals. The weight and MUAC of the children (n = 381) were taken every month, and height was measured every 3 months. Follow-up was conducted from January 2012 to December 2012. Children were discharged from the program when they reached 3 years of age, regardless of nutritional status. Thus, the data collected during the project were used to assess the impact of the program. RESULTS: No significant changes 1. Integration of Bilingual Emphasis Program into University Curriculum. Multiple Subjects Credential Program: Hupa, Yurok, Karuk, or Tolowa Emphasis.
Science.gov (United States) Bennett, Ruth A description of the American Indian Bilingual Teacher Credential Program offered by Humboldt State University (California) provides background information on the linguistic groups served by the program. Accompanying the program descriptions are lists of lower and upper division requirements, descriptions of competency exam, program schedule,… 2. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) version 5.0 International Nuclear Information System (INIS) Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T. 1994-07-01 The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume is the reference manual for the Systems Analysis and Risk Assessment (SARA) System Version 5.0, a microcomputer-based system used to analyze the safety issues of a "family" [i.e., a power plant, a manufacturing facility, any facility on which a probabilistic risk assessment (PRA) might be performed]. The SARA database contains PRA data primarily for the dominant accident sequences of a family and descriptive information about the family including event trees, fault trees, and system model diagrams. The number of facility databases that can be accessed is limited only by the amount of disk storage available. To simulate changes to family systems, SARA users change the failure rates of initiating and basic events and/or modify the structure of the cut sets that make up the event trees, fault trees, and systems. The user then evaluates the effects of these changes through the recalculation of the resultant accident sequence probabilities and importance measures. The results are displayed in tables and graphs that may be printed for reports.
A preliminary version of the SARA program was completed in August 1985 and has undergone several updates in response to user suggestions and to maintain compatibility with the other SAPHIRE programs. Version 5.0 of SARA provides the same capability as earlier versions and adds the ability to process unlimited cut sets; display fire, flood, and seismic data; and perform more powerful cut set editing. 3. The OSMOSE Experimental Program for the qualification of integral cross sections of actinides Energy Technology Data Exchange (ETDEWEB) Antony, Muriel; Hudelot, Jean-Pascal [CEA, Centre de Cadarache, F-13108 Saint Paul lez Durance (France); Klann, Raymond [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Ave., Argonne, IL 60439-4814 (United States) 2006-07-01 The need for better nuclear data on minor actinides has been stressed by various organizations throughout the world. It especially concerns studies on plutonium management and waste incineration in existing systems and transmutation of waste or Pu burning in future nuclear concepts. To address this issue, a Working Party of the OECD has been concerned with identifying these needs and has produced a detailed High Priority Request List for Nuclear Data. The first step in obtaining better nuclear data consists in measuring accurate integral data and comparing them to integrated energy-dependent data: this comparison provides a direct assessment of the effect of deficiencies in the differential data. Several international programs have indicated a strong desire to obtain accurate integral reaction rate data for improving the major and minor actinide cross sections. Data on major actinides (i.e. ²³⁵U, ²³⁶U, ²³⁸U, ²³⁹Pu, ²⁴⁰Pu, ²⁴¹Pu, ²⁴²Pu and ²⁴¹Am) are reasonably well known and available in the Evaluated Nuclear Data Files (JEFF, JENDL, ENDF-B). However, information on the minor actinides (i.e.
²³²Th, ²³³U, ²³⁷Np, ²³⁸Pu, ²⁴²Am, ²⁴³Am, ²⁴²Cm, ²⁴³Cm, ²⁴⁴Cm, ²⁴⁵Cm, ²⁴⁶Cm and ²⁴⁷Cm) is less well known and considered to be relatively poor in some cases, relying on models and extrapolation from a few data points. In this framework, the ambitious OSMOSE program between the Commissariat a l'Energie Atomique (CEA), Electricite de France (EDF) and the U.S. Department of Energy (DOE) has been undertaken with the aim of measuring the integral absorption rate parameters of actinides in the MINERVE experimental facility located at the CEA Cadarache Research Center. The OSMOSE Program (Oscillation in Minerve of isOtopes in 'Eupraxic' Spectra) includes a complete analytical program associated with the experimental measurement program and aims 4. CAD-based Monte Carlo program for integrated simulation of nuclear system SuperMC International Nuclear Information System (INIS) Wu, Y.; Song, J.; Zheng, H.; Sun, G.; Hao, L.; Long, P.; Hu, L. 2013-01-01 SuperMC is a Computer-Aided-Design (CAD) based Monte Carlo (MC) program for integrated simulation of nuclear systems developed by FDS Team (China), making use of a hybrid MC-deterministic method and advanced computer technologies. The design aim, architecture and main methodology of SuperMC are presented in this paper. The handling of multi-physics processes and the use of advanced computer technologies such as automatic geometry modeling, intelligent data analysis and visualization, high-performance parallel computing and cloud computing contribute to the efficiency of the code. SuperMC2.1, the latest version of the code for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated by using a series of benchmarking cases such as the fusion reactor ITER model and the fast reactor BN-600 model. 5.
Integrated Data Collection Analysis (IDCA) Program — RDX Standard Data Sets Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Phillips, Jason J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelley, Timothy J. [Bureau of Alcohol, Tobacco and Firearms, Huntsville, AL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2013-03-04 The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard, for a third and fourth time in the Proficiency Test and averaged with the analysis results from the first and second time. 
The results from averaging all four sets (1, 2, 3, and 4) of data suggest a material with slightly more impact sensitivity, more BAM friction sensitivity, less ABL friction sensitivity, similar ESD sensitivity, and the same DSC sensitivity, compared to the results from Set 1, which was used previously as the values for the RDX standard in IDCA Analysis Reports. 6. Integrated Data Collection Analysis (IDCA) Program - NaClO3/Icing Sugar Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall Air Force Base, FL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall Air Force Base, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2013-02-11 The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of a mixture of NaClO3 and icing sugar (NaClO3/icing sugar mixture).
The mixture was found to: be more sensitive than RDX but less sensitive than PETN in impact testing (180-grit sandpaper); be more sensitive than RDX and about the same sensitivity as PETN in BAM friction testing; be less sensitive than RDX and PETN in ABL ESD testing, except that one participant found the mixture more sensitive than PETN; and have one to three exothermic features, with the lowest-temperature event always observed at ~160°C in thermal testing. Variations in testing parameters also affected the sensitivity. 7. Integrated Data Collection Analysis (IDCA) Program - KClO4/Aluminum Mixture Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC IHD), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC IHD), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC IHD), Indian Head, MD (United States). Indian Head Division; Whinnery, LeRoy L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates (ARA), Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2012-01-17 The Integrated Data Collection Analysis (IDCA) program is conducting a Proficiency Test for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs).
Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of a mixture of KClO4 and aluminum (KClO4/Al mixture). This material was selected because of the challenge of performing SSST testing of a mixture of two solids. The mixture was found to be: 1) much less sensitive to impact than RDX (LLNL being the exception) and PETN, 2) more sensitive to friction than RDX and PETN, and 3) extremely sensitive to spark. The thermal analysis showed little or no exothermic character. One prominent endothermic feature was observed in the temperature range studied and identified as a phase transition of KClO4. 8. A risk characterization of safety research areas for integral fast reactor program planning International Nuclear Information System (INIS) Mueller, C.J.; Cahalan, J.E.; Hill, D.J.; Kramer, J.M.; Marchaterre, J.F.; Pedersen, D.R.; Sevy, R.H.; Tibbrook, R.W.; Wei, T.Y.; Wright, A.E. 1988-01-01 This paper characterizes the areas of integral fast reactor (IFR) safety research in terms of their importance in addressing the risk of core disruption sequences for innovative designs. Such sequences have traditionally been determined to constitute the primary risk to public health and safety. All core disruption sequences are folded into four fault categories: classic unprotected (unscrammed) events; loss of decay heat; local fault propagation; and failure of critical reactor structures. Event trees are used to describe these sequences and the areas in the IFR safety and related base technology research programs are discussed with respect to their relevance in addressing the key issues in preventing or delimiting core disruptive sequences. Thus a measure of potential for risk reduction is obtained for guidance in establishing research priorities. 9. Effect of an Integrated Health Management Program Based on Successful Aging in Korean Women.
Science.gov (United States) Ahn, Okhee; Cha, Hye Gyeong; Chang, Soo Jung; Cho, Hyun-Choul; Kim, Hee Sun 2015-01-01 This study evaluates the efficacy of an integrated health management program (IHMP) based on successful aging in older women. A single-group pretest and posttest research design was employed, with a sample of 33 older Korean women over 60 years registered in a public health center. The intervention, including exercise, health education, and social activities, was performed 3 hr per week for 12 weeks. Demographic characteristics, body composition, physical fitness, biomarkers, depression, and social support were measured. Data were analyzed with a Wilcoxon signed-rank test, with statistical significance set at p < .05. Physical fitness test results (p < .001) were significantly improved. Systolic blood pressure (p < .003), diastolic blood pressure (p = .030), and blood cholesterol (p = .011) were significantly decreased. Depression (p = .043) was significantly decreased, and social support (p < .001) was significantly increased. Adopting and maintaining an IHMP can be useful to promote physical, psychological, and social functioning that leads to successful aging in older Korean women. © 2014 Wiley Periodicals, Inc. 10. A risk characterization of safety research areas for Integral Fast Reactor program planning International Nuclear Information System (INIS) Mueller, C.J.; Cahalan, J.E.; Hill, D.J. 1988-01-01 This paper characterizes the areas of Integral Fast Reactor (IFR) safety research in terms of their importance in addressing the risk of core disruption sequences for innovative designs. Such sequences have traditionally been determined to constitute the primary risk to public health and safety. All core disruption sequences are folded into four fault categories: classic unprotected (unscrammed) events; loss of decay heat; local fault propagation; and failure of critical reactor structures.
Event trees are used to describe these sequences, and the areas in the IFR Safety and related Base Technology research programs are discussed with respect to their relevance in addressing the key issues in preventing or delimiting core disruptive sequences. Thus a measure of potential for risk reduction is obtained for guidance in establishing research priorities. 11. Investigation for integration of the German Public Health Service in catastrophe and disaster prevention programs in Germany International Nuclear Information System (INIS) Pfenninger, E.; Koenig, S.; Himmelseher, S. 2004-01-01 This research project aimed at investigating the integration of the GPHS into the plans for civil defence and protection as well as catastrophe prevention of the Federal Republic of Germany. Following a comprehensive analysis of the current situation, potential proposals for an improved integrative approach will be presented. In view of the lack of topics relevant for medical care in disaster medicine in educational curricula and training programs for medical students and postgraduate board programs for public health physicians, a working group of the Civil Protection Board of the German Federal Ministry of the Interior had already complained in its 1999 'Report on execution of legal rules for protection and rescue of human life as well as restitution of public health after disaster' that the integration of the GPHS into catastrophe and disaster prevention programs had not been sufficiently addressed. In a point-by-point approach, our project analysed the following issues: - Legislative acts for integration of the German Public Health Service into medical care in catastrophes and disasters to protect the civilian population of Germany and their implementation and execution.
- Administrative rules and directives on state and district levels that relate to the integration of the German Public Health Service into preparedness programs for catastrophe prevention and management and their implementation and execution. - Education and postgraduate training options for physicians and non-physician employees of the German Public Health Service to prepare for medical care in catastrophes and disasters. - State of knowledge and experience of the German Public Health Service personnel in emergency and disaster medicine. - Evaluation of the German administrative catastrophe prevention authorities with regard to their integration of the German Public Health Service into preparedness programs for catastrophe prevention and management. - Development of a concept to remedy the 12. Clinical integration and how it affects student retention in undergraduate athletic training programs. Science.gov (United States) Young, Allison; Klossner, Joanne; Docherty, Carrie L; Dodge, Thomas M; Mensch, James M 2013-01-01 A better understanding of why students leave an undergraduate athletic training education program (ATEP), as well as why they persist, is critical in determining the future membership of our profession. To better understand how clinical experiences affect student retention in undergraduate ATEPs. Survey-based research using a quantitative and qualitative mixed-methods approach. Three-year undergraduate ATEPs across District 4 of the National Athletic Trainers' Association. Seventy-one persistent students and 23 students who left the ATEP prematurely. Data were collected using a modified version of the Athletic Training Education Program Student Retention Questionnaire. Multivariate analysis of variance was performed on the quantitative data, followed by a univariate analysis of variance on any significant findings. The qualitative data were analyzed through inductive content analysis.
A difference was identified between the persister and dropout groups (Pillai trace = 0.42, F(1,92) = 12.95, P = .01). The follow-up analysis of variance revealed that the persister and dropout groups differed on the anticipatory factors (F(1,92) = 4.29, P = .04), clinical integration (F(1,92) = 6.99, P = .01), and motivation (F(1,92) = 43.12, P = .01) scales. Several themes emerged in the qualitative data, including networks of support, authentic experiential learning, role identity, time commitment, and major or career change. A perceived difference exists in how athletic training students are integrated into their clinical experiences between those students who leave an ATEP and those who stay. Educators may improve retention by emphasizing authentic experiential learning opportunities rather than hours worked, by allowing students to take on more responsibility, and by facilitating networks of support within clinical education experiences. 13. Clinical Integration and How It Affects Student Retention in Undergraduate Athletic Training Programs Science.gov (United States) Young, Allison; Klossner, Joanne; Docherty, Carrie L; Dodge, Thomas M; Mensch, James M 2013-01-01 Context A better understanding of why students leave an undergraduate athletic training education program (ATEP), as well as why they persist, is critical in determining the future membership of our profession. Objective To better understand how clinical experiences affect student retention in undergraduate ATEPs. Design Survey-based research using a quantitative and qualitative mixed-methods approach. Setting Three-year undergraduate ATEPs across District 4 of the National Athletic Trainers' Association. Patients or Other Participants Seventy-one persistent students and 23 students who left the ATEP prematurely. Data Collection and Analysis Data were collected using a modified version of the Athletic Training Education Program Student Retention Questionnaire. 
Multivariate analysis of variance was performed on the quantitative data, followed by a univariate analysis of variance on any significant findings. The qualitative data were analyzed through inductive content analysis. Results A difference was identified between the persister and dropout groups (Pillai trace = 0.42, F(1,92) = 12.95, P = .01). The follow-up analysis of variance revealed that the persister and dropout groups differed on the anticipatory factors (F(1,92) = 4.29, P = .04), clinical integration (F(1,92) = 6.99, P = .01), and motivation (F(1,92) = 43.12, P = .01) scales. Several themes emerged in the qualitative data, including networks of support, authentic experiential learning, role identity, time commitment, and major or career change. Conclusions A perceived difference exists in how athletic training students are integrated into their clinical experiences between those students who leave an ATEP and those who stay. Educators may improve retention by emphasizing authentic experiential learning opportunities rather than hours worked, by allowing students to take on more responsibility, and by facilitating networks of support within clinical education experiences. PMID:23672327 14. Integrated Design of Superconducting Magnets with the CERN Field Computation Program ROXIE CERN Document Server Russenschuck, Stephan; Bazan, M; Lucas, J; Ramberger, S; Völlinger, Christine 2000-01-01 The program package ROXIE has been developed at CERN for the field computation of superconducting accelerator magnets and is used as an approach towards the integrated design of such magnets.
It is also an example of fruitful international collaborations in software development. The integrated design of magnets includes feature-based geometry generation, conceptual design using genetic optimization algorithms, optimization of the iron yoke (both in 2d and 3d) using deterministic methods, end-spacer design, and inverse field calculation. The paper describes version 8.0 of ROXIE, which comprises an automatic mesh generator, a hysteresis model for the magnetization in superconducting filaments, the BEM-FEM coupling method for the 3d field calculation, a routine for the calculation of the peak temperature during a quench, and neural network approximations of the objective function for the speed-up of optimization algorithms, amongst others. New results of the magnet design work for the LHC are given as examples. 15. Oceanic crustal velocities from laboratory and logging measurements of Integrated Ocean Drilling Program Hole 1256D Science.gov (United States) Gilbert, Lisa A.; Salisbury, Matthew H. 2011-09-01 Drilling and logging of Integrated Ocean Drilling Program (IODP) Hole 1256D have provided a unique opportunity for systematically studying a fundamental problem in marine geophysics: What influences the seismic structure of oceanic crust, porosity or composition? Compressional wave velocities (Vp) logged in open hole or from regional refraction measurements integrate both the host rock and cracks in the crust. To determine the influence of cracks on Vp at several scales, we first need an accurate ground truth in the form of laboratory Vp on crack-free, or nearly crack-free, samples. We measured Vp on 46 water-saturated samples at in situ pressures to determine the baseline velocities of the host rock. These new results match or exceed Vp logs throughout most of the hole, especially in the lower dikes and gabbros, where porosities are low.
In contrast, samples measured at sea under ambient laboratory conditions had consistently lower Vp than the Vp logs, even after correction to in situ pressures. Crack-free Vp calculated from simple models of logging and laboratory porosity data for different lithologies and facies suggests that crustal velocities in the lavas and upper dikes are controlled by porosity. In particular, the models demonstrate significant large-scale porosity in the lavas, especially in the sections identified as fractured flows and breccias. However, crustal velocities in the lower dikes and gabbros are increasingly controlled by petrology as the layer 2-3 boundary is approached. 16. Integrated employee assistance program/managed behavioral health plan utilization by persons with substance use disorders. Science.gov (United States) Merrick, Elizabeth S Levy; Hodgkin, Dominic; Hiatt, Deirdre; Horgan, Constance M; Greenfield, Shelly F; McCann, Bernard 2011-04-01 New federal parity and health reform legislation, promising increased behavioral health care access and a focus on prevention, has heightened interest in employee assistance programs (EAPs). This study investigated service utilization by persons with a primary substance use disorder (SUD) diagnosis in a managed behavioral health care (MBHC) organization's integrated EAP/MBHC product (N = 1,158). In 2004, 25.0% of clients used the EAP first for new treatment episodes. After initial EAP utilization, 44.4% received no additional formal services through the plan, and 40.4% received regular outpatient services. Overall, outpatient care, intensive outpatient/day treatment, and inpatient/residential detoxification were most common. About half of the clients had co-occurring psychiatric diagnoses. Mental health service utilization was extensive.
Findings suggest that for service users with primary SUD diagnoses in an integrated EAP/MBHC product, the EAP benefit plays a key role at the front end of treatment and is often only one component of treatment episodes. Copyright © 2011 Elsevier Inc. All rights reserved. 17. Uranium removal from soils: An overview from the Uranium in Soils Integrated Demonstration program International Nuclear Information System (INIS) Francis, C.W.; Brainard, J.R.; York, D.A.; Chaiko, D.J.; Matthern, G. 1994-01-01 An integrated approach to remove uranium from uranium-contaminated soils is being conducted by four of the US Department of Energy national laboratories. In this approach, managed through the Uranium in Soils Integrated Demonstration program at the Fernald Environmental Management Project, Fernald, Ohio, these laboratories are developing processes that selectively remove uranium from soil without seriously degrading the soil's physicochemical characteristics or generating waste that is difficult to manage or dispose of. These processes include traditional uranium extractions that use carbonate as well as some nontraditional extraction techniques that use citric acid and complex organic chelating agents such as naturally occurring microbial siderophores. A bench-scale engineering design for heap leaching, a process that uses carbonate leaching media, shows that >90% of the uranium can be removed from the Fernald soils. Other work involves amending soils with cultures of sulfur and ferrous oxidizing microbes or cultures of fungi whose role is to generate mycorrhiza that excrete strong complexers for uranium. Aqueous biphasic extraction, a physical separation technology, is also being evaluated because of its ability to segregate fine particulate, a fundamental requirement for soils containing high levels of silt and clay.
Interactions among participating scientists have produced some significant progress not only in evaluating the feasibility of uranium removal but also in understanding some important technical aspects of the task. 18. [The impact of patient identification on an integrated program of palliative care in Basque Country]. Science.gov (United States) Larrañaga, Igor; Millas, Jesús; Soto-Gordoa, Myriam; Arrospide, Arantzazu; San Vicente, Ricardo; Irizar, Marisa; Lanzeta, Itziar; Mar, Javier 2017-12-05 To evaluate the process and the economic impact of an integrated palliative care program. Comparative cross-sectional study. Integrated Healthcare Organizations of Alto Deba and Goierri Alto-Urola, Basque Country. Patients who died of oncologic and non-oncologic causes in 2012 (control group) and 2015 (intervention group) liable to need palliative care according to McNamara criteria. Identification as palliative patients in primary care, use of common clinical pathways in primary and secondary care, and arranging training courses for health professionals. Change in the resource use profile of patients in their last 3 months. Propensity scores obtained by the genetic matching method were used to avoid non-randomization bias. The groups were compared by univariate analysis and the relationships between variables were analysed by logistic regressions and generalized linear models. One thousand and twenty-three patients were identified in 2012 and 1,142 patients in 2015. In 2015 the probability of being identified as a palliative patient doubled for deaths due to oncologic (19-33%) and non-oncologic (7-16%) causes. Prescriptions of opiates rose (25-68%) and deaths in hospital remained stable. Contacts per patient with primary care and home hospitalization increased, while hospital admissions decreased. Cost per patient rose by 26%. The integrated palliative care model increased the identification of the target population.
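The propensity-score step in the methods above can be illustrated with a small sketch. It is a simplified stand-in, not the study's procedure: the authors used genetic matching, whereas the code below fits a plain logistic regression by gradient ascent and pairs each treated unit with its nearest control on the score; all data and covariate names are synthetic.

```python
import numpy as np

def propensity_scores(X, t, lr=0.1, steps=2000):
    """Estimate P(treated | X) with a logistic regression fitted by
    gradient ascent on the log-likelihood (illustrative only)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])          # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))              # predicted P(treated)
        w += lr * Xb.T @ (t - p) / len(t)              # gradient step
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def nearest_neighbour_match(scores, treated):
    """Pair each treated unit with the control whose score is closest."""
    controls = np.where(~treated)[0]
    return [(i, controls[np.argmin(np.abs(scores[controls] - scores[i]))])
            for i in np.where(treated)[0]]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # synthetic covariates (e.g. age, comorbidity)
t = (rng.random(200) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(float)
scores = propensity_scores(X, t)
pairs = nearest_neighbour_match(scores, t.astype(bool))
```

Comparing resource-use outcomes within such matched pairs approximates the confounder adjustment the study describes.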
Relationships between variables showed that the identification had a positive impact on prescription of opiates, death outside the hospital and extension to non-oncologic diseases. Although the identification decreased admissions in hospital, costs per patient had a slight increase due to home hospitalizations. Copyright © 2017 Elsevier España, S.L.U. All rights reserved. 19. Maternal substance use and integrated treatment programs for women with substance abuse issues and their children: a meta-analysis Directory of Open Access Journals (Sweden) Milligan Karen 2010-09-01 Full Text Available Abstract Background The rate of women with substance abuse issues is increasing. Women present with a unique constellation of risk factors and presenting needs, which may include specific needs in their role as mothers. Numerous integrated programs (those with substance use treatment and pregnancy, parenting, or child services) have been developed to specifically meet the needs of pregnant and parenting women with substance abuse issues. This synthesis and meta-analysis reviews research in this important and growing area of treatment. Methods We searched PsycINFO, MedLine, PubMed, Web of Science, EMBASE, Proquest Dissertations, Sociological Abstracts, and CINAHL and compiled a database of 21 studies (2 randomized trials, 9 quasi-experimental studies, 10 cohort studies) of integrated programs published between 1990 and 2007 with outcome data on maternal substance use. Data were summarized and where possible, meta-analyses were performed, using standardized mean difference (d) effect size estimates. Results In the two studies comparing integrated programs to no treatment, effect sizes for urine toxicology and percent using substances significantly favored integrated programs and ranged from 0.18 to 1.41. Studies examining changes in maternal substance use from beginning to end of treatment were statistically significant and medium sized.
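The standardized mean difference (d) statistic pooled in this review can be computed with the usual Cohen's d formula. The sketch below is illustrative only: the severity scores are invented, and the review itself worked from published group statistics rather than raw data.

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented severity-of-use scores at treatment entry (pre) and exit (post)
pre = [7.1, 6.4, 8.0, 5.9, 7.5, 6.8]
post = [5.2, 5.9, 6.1, 4.8, 6.0, 5.5]
d = cohens_d(pre, post)   # a positive d indicates reduced severity at exit
```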
More specifically, in the five studies measuring severity of drug and alcohol use, the average effect sizes were 0.64 and 0.40, respectively. In the four cohort studies of days of use, the average effect size was 0.52. Of studies comparing integrated to non-integrated programs, four studies assessed urine toxicology and two assessed self-reported abstinence. Overall effect sizes for each measure were not statistically significant (d = -0.09 and 0.22, respectively). Conclusions Findings suggest that integrated programs are effective in reducing maternal substance use. However, integrated programs were not 20. Integrated Status and Effectiveness Monitoring Program - Entiat River Snorkel Surveys, 2006-2007. Energy Technology Data Exchange (ETDEWEB) Nelle, R.D. 2007-10-01 The USFWS Mid-Columbia River Fishery Resource Office conducted snorkel surveys at 11 sites during the summer 2006 survey period and at 15 sites during fall 2006 and winter 2007 survey periods as part of the Integrated Status and Effectiveness Monitoring Program in the Entiat River. A total of 39,898 fish from 14 species/genera and an unknown category were enumerated. Chinook salmon were the overall most common fish observed and comprised 19% of fish enumerated followed by mountain whitefish (18%) and rainbow trout (14%). Day and night surveys were conducted during the summer 2006 period (August), while night surveys were conducted during the fall 2006 (October) and winter 2007 (February/March) surveys. This is the second annual progress report to Bonneville Power Administration for the snorkel surveys conducted in the Entiat River as related to long-term effectiveness monitoring of restoration programs in this watershed. The objective of this study is to monitor the fish habitat utilization of planned in-stream restoration efforts in the Entiat River by conducting pre- and post-construction snorkel surveys at selected treatment and control sites. 1.
God imagery and affective outcomes in a spiritually integrative inpatient program. Science.gov (United States) Currier, Joseph M; Foster, Joshua D; Abernethy, Alexis D; Witvliet, Charlotte V O; Root Luna, Lindsey M; Putman, Katharine M; Schnitker, Sarah A; VanHarn, Karl; Carter, Janet 2017-08-01 Religion and/or spirituality (R/S) can play a vital, multifaceted role in mental health. While beliefs about God represent the core of many psychiatric patients' meaning systems, research has not examined how internalized images of the divine might contribute to outcomes in treatment programs/settings that emphasize multicultural sensitivity with R/S. Drawing on a combination of qualitative and quantitative information with a religiously heterogeneous sample of 241 adults who completed a spiritually integrative inpatient program over a two-year period, this study tested direct/indirect associations between imagery of how God views oneself, religious comforts and strains, and affective outcomes (positive and negative). When accounting for patients' demographic and religious backgrounds, structural equation modeling results revealed: (1) overall effects for God imagery at pre-treatment on post-treatment levels of both positive and negative affect; and (2) religious comforts and strains fully mediated these links. Secondary analyses also revealed that patients generally experienced reductions in negative emotion in God imagery over the course of their admission. These findings support attachment models of the R/S-mental health link and suggest that religious comforts and strains represent distinct pathways to positive and negative domains of affect for psychiatric patients with varying experiences of God. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved. 2. Integrated Data Collection Analysis (IDCA) Program - Final Review September 12, 2012 at DHS Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab.
(LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Phillips, Jason J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2012-11-26 The Integrated Data Collection Analysis (IDCA) program conducted a final program review at the Department of Homeland Security on September 12, 2012. The review was focused on the results of the program over the complete performance period. A summary presentation delineating the accomplished tasks started the meeting, followed by technical presentations on various issues that arose during the performance period. The presentations were completed with a statistical evaluation of the testing results from all the participants in the IDCA Proficiency Test study. The meeting closed with a discussion of potential sources of funding for continuing work to resolve some of these technical issues. This effort, funded by the Department of Homeland Security (DHS), put the issues of safe handling of these materials in perspective with standard military explosives. The study added Small-Scale Safety and Thermal (SSST) testing results for a broad suite of different HMEs to the literature, and suggested new guidelines and methods to develop safe handling practices for HMEs. Each participating testing laboratory used identical test materials and preparation methods wherever possible. Note, however, the test procedures differ among the laboratories. 
The results were compared among the laboratories and then compared to historical data from various sources. The testing performers involved were Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Naval Surface Warfare Center, Indian Head Division (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory, Tyndall AFB (AFRL/RXQL). These tests were conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent. 3. The NIAID Division of AIDS enterprise information system: integrated decision support for global clinical research programs Science.gov (United States) Gupta, Nitin; Varghese, Suresh; Virkar, Hemant 2011-01-01 The National Institute of Allergy and Infectious Diseases (NIAID) Division of AIDS (DAIDS) Enterprise Information System (DAIDS-ES) is a web-based system that supports NIAID in the scientific, strategic, and tactical management of its global clinical research programs for HIV/AIDS vaccines, prevention, and therapeutics. Different from most commercial clinical trials information systems, which are typically protocol-driven, the DAIDS-ES was built to exchange information with those types of systems and integrate it in ways that help scientific program directors lead the research effort and keep pace with the complex and ever-changing global HIV/AIDS pandemic. Whereas commercially available clinical trials support systems are not usually disease-focused, DAIDS-ES was specifically designed to capture and incorporate unique scientific, demographic, and logistical aspects of HIV/AIDS treatment, prevention, and vaccine research in order to provide a rich source of information to guide informed decision-making. 
Sharing data across its internal components and with external systems, using defined vocabularies, open standards and flexible interfaces, the DAIDS-ES enables NIAID, its global collaborators and stakeholders, access to timely, quality information about NIAID-supported clinical trials which is utilized to: (1) analyze the research portfolio, assess capacity, identify opportunities, and avoid redundancies; (2) help support study safety, quality, ethics, and regulatory compliance; (3) conduct evidence-based policy analysis and business process re-engineering for improved efficiency. This report summarizes how the DAIDS-ES was conceptualized, how it differs from typical clinical trial support systems, the rationale for key design choices, and examples of how it is being used to advance the efficiency and effectiveness of NIAID's HIV/AIDS clinical research programs. PMID:21816958 4. ADVANCING THE STUDY OF VIOLENCE AGAINST WOMEN USING MIXED METHODS: INTEGRATING QUALITATIVE METHODS INTO A QUANTITATIVE RESEARCH PROGRAM Science.gov (United States) Testa, Maria; Livingston, Jennifer A.; VanZile-Tamsen, Carol 2011-01-01 A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women’s sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided. PMID:21307032 5. Integrative shell of the program complex MARS (Version 1.0) radiation transfer in three-dimensional geometries International Nuclear Information System (INIS) Degtyarev, I.I.; Lokhovitskij, A.E.; Maslov, M.A.; Yazynin, I.A. 
1994-01-01 The first version of the integrative shell of the program complex MARS has been written for calculating radiation transfer in three-dimensional geometries. The shell allows the user to work conveniently with MARS, create input data files, and obtain graphic visualization of the calculated functions. Version 1.0 is adapted for IBM 286/386/486-type personal computers with at least 500K of operating memory. 5 refs 6. INTEGRATED PRACTICE LEARNING MODEL TO IMPROVE WAITER/S’ COMPETENCY ON HOSPITALITY STUDY PROGRAM, POLITEKNIK NEGERI BALI Directory of Open Access Journals (Sweden) 2017-12-01 Full Text Available The Hospitality Study Program, Politeknik Negeri Bali (PNB), had not implemented integrated learning practice optimally. The aim of this research was to improve the learning process through an integrated practice learning model involving three courses (Food Production, FB Service, English for Restaurant) on the same topic. This study was conducted on the fourth semester of the Hotel Study Program. Two classes were randomly selected as research samples: class IVA as the experimental group and class IVB as the control, with 26 students in each class. The application of integrated practice learning had an effect on students' achievement of waiter/s competency in the Hotel Study Program. Statistical testing showed a significant difference in competency achievement between the integrated practice learning group and the partial practice learning group. It is suggested that the management of the Hospitality Study Program encourage and facilitate lecturers, especially of core subjects, to apply integrated learning practices in order to achieve the competency. 7. IPAD applications to the design, analysis, and/or machining of aerospace structures.
[Integrated Program for Aerospace-vehicle Design] Science.gov (United States) Blackburn, C. L.; Dovi, A. R.; Kurtze, W. L.; Storaasli, O. O. 1981-01-01 A computer software system for the processing and integration of engineering data and programs, called IPAD (Integrated Programs for Aerospace-Vehicle Design), is described. The ability of the system to relieve the engineer of the mundane task of input data preparation is demonstrated by the application of a prototype system to the design, analysis, and/or machining of three simple structures. Future work to further enhance the system's automated data handling and ability to handle larger and more varied design problems is also presented. 8. Steam Generator Tube Integrity Program: Surry Steam Generator Project, Hanford site, Richland, Benton County, Washington: Environmental assessment International Nuclear Information System (INIS) 1980-03-01 The US Nuclear Regulatory Commission (NRC) has placed a Nuclear Regulatory Research Order with the Richland Operations Office of the US Department of Energy (DOE) for expanded investigations at the DOE Pacific Northwest Laboratory (PNL) related to defective pressurized water reactor (PWR) steam generator tubing. This program, the Steam Generator Tube Integrity (SGTI) program, is sponsored by the Metallurgy and Materials Research Branch of the NRC Division of Reactor Safety Research. This research and testing program includes an additional task requiring extensive investigation of a degraded, out-of-service steam generator from a commercial nuclear power plant. This comprehensive testing program on an out-of-service generator will provide NRC with timely and valuable information related to pressurized water reactor primary system integrity and degradation with time. This report presents the environmental assessment of the removal, transport, and testing of the steam generator along with decontamination/decommissioning plans. 9.
Reuniting the Solar System: Integrated Education and Public Outreach Projects for Solar System Exploration Missions and Programs Science.gov (United States) Lowes, Leslie; Lindstrom, Marilyn; Stockman, Stephanie; Scalice, Daniela; Klug, Sheri 2003-01-01 The Solar System Exploration Education Forum has worked for five years to foster Education and Public Outreach (E/PO) cooperation among missions and programs in order to leverage resources and better meet the needs of educators and the public. These efforts are coming together in a number of programs and products and in '2004 - The Year of the Solar System.' NASA's practice of having independent E/PO programs for each mission and its public affairs emphasis on uniqueness has led to a public perception of a fragmented solar system exploration program. By working to integrate solar system E/PO, the breadth and depth of the solar system exploration program is revealed. When emphasis is put on what missions have in common, as well as their differences, each mission is seen in the context of the whole program. 10. BOKASUN: A fast and precise numerical program to calculate the Master Integrals of the two-loop sunrise diagrams Science.gov (United States) Caffo, Michele; Czyż, Henryk; Gunia, Michał; Remiddi, Ettore 2009-03-01 We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations.
Program summary
Program title: BOKASUN
Catalogue identifier: AECG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECG_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 9404
No. of bytes in distributed program, including test data, etc.: 104 123
Distribution format: tar.gz
Programming language: FORTRAN77
Computer: Any computer with a Fortran compiler accepting FORTRAN77 standard. Tested on various PC's with LINUX
Operating system: LINUX
RAM: 120 kbytes
Classification: 4.4
Nature of problem: Any integral arising in the evaluation of the two-loop sunrise Feynman diagram can be expressed in terms of a given set of Master Integrals, which should be calculated numerically. The program provides a fast and precise evaluation method of the Master Integrals for arbitrary (but not vanishing) masses and arbitrary value of the external momentum.
Solution method: The integrals depend on three internal masses and the external momentum squared p. The method is a combination of an accelerated expansion in 1/p in its (pretty large!) region of fast convergence and of a Runge-Kutta numerical solution of a system of linear differential equations.
Running time: To obtain 4 Master Integrals on PC with 2 GHz processor it takes 3 μs for series expansion with pre-calculated coefficients, 80 μs for series expansion without pre-calculated coefficients, from a few seconds up to a few minutes for Runge-Kutta method (depending
11. An integrated strategy for analyzing the unique developmental programs of different myoblast subtypes. Directory of Open Access Journals (Sweden) 2006-02-01 Full Text Available An important but largely unmet challenge in understanding the mechanisms that govern the formation of specific organs is to decipher the complex and dynamic genetic programs exhibited by the diversity of cell types within the tissue of interest. Here, we use an integrated genetic, genomic, and computational strategy to comprehensively determine the molecular identities of distinct myoblast subpopulations within the Drosophila embryonic mesoderm at the time that cell fates are initially specified.
A compendium of gene expression profiles was generated for primary mesodermal cells purified by flow cytometry from appropriately staged wild-type embryos and from 12 genotypes in which myogenesis was selectively and predictably perturbed. A statistical meta-analysis of these pooled datasets--based on expected trends in gene expression and on the relative contribution of each genotype to the detection of known muscle genes--provisionally assigned hundreds of differentially expressed genes to particular myoblast subtypes. Whole embryo in situ hybridizations were then used to validate the majority of these predictions, thereby enabling true-positive detection rates to be estimated for the microarray data. This combined analysis reveals that myoblasts exhibit much greater gene expression heterogeneity and overall complexity than was previously appreciated. Moreover, it implicates the involvement of large numbers of uncharacterized, differentially expressed genes in myogenic specification and subsequent morphogenesis. These findings also underscore a requirement for considerable regulatory specificity for generating diverse myoblast identities. Finally, to illustrate how the developmental functions of newly identified myoblast genes can be efficiently surveyed, a rapid RNA interference assay that can be scored in living embryos was developed and applied to selected genes. This integrated strategy for examining embryonic gene expression and function provides 12. PRISMA: Program of Research to Integrate the Services for the Maintenance of Autonomy. A system-level integration model in Quebec Directory of Open Access Journals (Sweden) 2015-09-01 Full Text Available The Program of Research to Integrate the Services for the Maintenance of Autonomy (PRISMA began in Quebec in 1999. Evaluation results indicated that the PRISMA Project improved the system of care for the frail elderly at no additional cost. 
In 2001, the Quebec Ministry of Health and Social Services made implementing the six features of the PRISMA approach a province-wide goal in the programme now known as RSIPA (French acronym). Extensive province-wide progress has been made since then, but ongoing challenges include reducing unmet need for case management and home care services, creating incentives for increased physician participation in care planning and improving the computerized client chart, among others. PRISMA is the only evaluated international model of a coordination approach to integration and one of the few, if not the only, integration model to have been adopted at the system level by policy-makers. 13. Development of an administrative system for an integral program of safety and occupational hygiene; Desarrollo de un sistema administrativo para un programa integral de seguridad e higiene ocupacional Energy Technology Data Exchange (ETDEWEB) Dominguez R, J 2004-07-01 The objective of this thesis research is to provide a clear application of the basic elements of administration to the elaboration of an integral occupational safety and hygiene program that serves as a guide for the creation of new programs and of an internal integral regulation on the matter.
In addition to applying those basic elements of integral administration, this thesis research also gives effect to the regulations in force as well as to up-to-date concepts of safety and hygiene. These are the premises that guided the elaboration of the occupational safety and hygiene program, which will serve as a base to be applied in all the areas of the National Institute of Nuclear Research, and especially in those being certified under the ISO 9001:2000 quality management system, whose implantation achieved the objectives that the Institute has set out in its general policies. It should be mentioned that the Institute's primary activity is research and development in nuclear matters for the peaceful uses of nuclear energy, and the institutional objectives are achieved with strong support from the conventional industrial-type areas; it is in these areas that the present thesis research is developed, while still reviewing and applying the nuclear regulations. (Author) 14. Toward Integral Higher Education Study Programs in the European Higher Education Area: A Programmatic and Strategic View Directory of Open Access Journals (Sweden) Markus Molz 2009-12-01 Full Text Available This essay somehow arbitrarily freezes my ongoing attempt to grasp the present situation and future possibilities of higher education courses, programs, institutions and initiatives that are inspired by integral and likeminded approaches. The focus in this essay is on the European Higher Education Area and its specifics, whereas some implicit or explicit comparisons with the USA are made.
My reflections are triggered by the recurrent observation that in Europe there seems to be (i) more demand than offer of integrally oriented higher education programs, (ii) an imbalance between overused but little successful and underused but potentially more promising strategies to implement such programs, (iii) little or no learning from past failures, and (iv) little mutual awareness, communication and collaboration between different activists and initiatives in this field. The context for this essay is (i) the current societal macroshift, (ii) the unfolding of academic-level integral and likeminded research worldwide, and (iii) the large-scale reform of the European Higher Education systems brought about by the Bologna process, its (false) promises and the potential it nevertheless has for realizing examples of a more integral higher education. On this basis the consequences for attempts to overcome a relatively stagnant state of affairs in Europe are discussed. Given that most past attempts to implement programs inspired by an integral worldview have failed from the start, or disappeared after a relatively short period, or are marginalised or becoming remainstreamed, this essay aims to devise a potentially more promising strategic corridor and describes the contours of the results that could be brought about when following a developmental trajectory within this corridor. This futurising exercise is inspired by principles shared by many integral and likeminded approaches, especially the reconsideration, integration and transcendence of premodern, modern and postmodern 15. A workstation-integrated peer review quality assurance program: pilot study Science.gov (United States) 2013-01-01 16. A workstation-integrated peer review quality assurance program: pilot study International Nuclear Information System (INIS) O’Keeffe, Margaret M; Davis, Todd M; Siminoski, Kerry 2013-01-01 17.
Integrated Data Collection Analysis (IDCA) Program - RDX Type II Class 5 Standard, Data Set 1 Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorenson, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Moran, Jesse S. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates, Inc., Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Whipple, Richard E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2011-04-11 This document describes the results of the first reference sample material—RDX Type II Class 5—examined in the proficiency study for small-scale safety and thermal (SSST) testing of explosive materials for the Integrated Data Collection Analysis (IDCA) Program. The IDCA program is conducting proficiency testing on homemade explosives (HMEs). The reference sample materials are being studied to establish the accuracy of traditional explosives safety testing for each performing laboratory. These results will be used for comparison to results from testing HMEs. 
This effort, funded by the Department of Homeland Security (DHS), ultimately will put the issues of safe handling of these materials in perspective with standard military explosives. The results of the study will add SSST testing results for a broad suite of different HMEs to the literature, potentially suggest new guidelines and methods for HME testing, and possibly establish the accuracies needed in SSST testing to develop safe handling practices. Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of a reference sample of RDX Type II Class 5. The results from each participating testing laboratory are compared using identical test material and preparation methods wherever possible. Note, however, that the test procedures differ among the laboratories. These results are then compared to historical data from various sources. The performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Air Force Research Laboratory/RXQL (AFRL), Indian Head Division, Naval Surface Warfare Center (IHD-NSWC), and Sandia National Laboratories (SNL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to understand how to compare results when test protocols are not identical. 18. Effects of a Memory and Visual-Motor Integration Program for Older Adults Based on Self-Efficacy Theory. Science.gov (United States) Kim, Eun Hwi; Suh, Soon Rim 2017-06-01 This study was conducted to verify the effects of a memory and visual-motor integration program for older adults based on self-efficacy theory. A non-equivalent control group pretest-posttest design was implemented in this quasi-experimental study. The participants were 62 older adults from senior centers and older adult welfare facilities in D and G city (Experimental group=30, Control group=32). 
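The nonparametric comparison used in two-group designs like the one just described can be sketched in plain Python. This is a generic Mann-Whitney U statistic (midranks for ties, statistic only, no p-value), not the authors' SPSS analysis, and the sample scores below are invented for illustration:

```python
def midranks(values):
    """Map each distinct value to its average (mid) rank, 1-based."""
    s = sorted(values)
    ranks, i = {}, 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        ranks[s[i]] = (i + 1 + j) / 2  # average rank of the tie group
        i = j
    return ranks

def mann_whitney_u(a, b):
    """Smaller of U1/U2 for samples a and b (the normal approximation
    for a p-value is omitted for brevity)."""
    r = midranks(list(a) + list(b))
    u1 = sum(r[x] for x in a) - len(a) * (len(a) + 1) / 2
    return min(u1, len(a) * len(b) - u1)

# Invented post-test memory scores for two groups of older adults:
experimental = [24, 27, 21, 25, 30, 26]
control = [20, 22, 19, 23, 21, 25]
u = mann_whitney_u(experimental, control)  # smaller U; 5.0 here
```

In practice one would use a library routine that also returns the p-value; the hand-rolled version above is only meant to show what the statistic measures (rank overlap between the two groups).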
The experimental group took part in a 12-session memory and visual-motor integration program over 6 weeks. Data regarding memory self-efficacy, memory, visual-motor integration, and depression were collected from July to October of 2014 and analyzed with the independent t-test and the Mann-Whitney U test using PASW Statistics (SPSS) 18.0 to determine the effects of the interventions. Memory self-efficacy (t=2.20, p=.031), memory (Z=-2.92, p=.004), and visual-motor integration (Z=-2.49, p=.013) increased significantly in the experimental group as compared to the control group. However, depression (Z=-0.90, p=.367) did not decrease significantly. This program is effective for increasing memory, visual-motor integration, and memory self-efficacy in older adults. Therefore, it can be used to improve cognition and prevent dementia in older adults. © 2017 Korean Society of Nursing Science 19. Safe Cockroach Control: A Guide to Setting Up an Integrated Pest Management Program within a School System. Science.gov (United States) Cowles, Kathleen Letcher; And Others Integrated Pest Management (IPM) is a decision-making approach to pest control that has been used successfully on farms and in city parks, offices, homes, and schools. IPM programs help individuals decide when treatments are necessary, where treatment would be most helpful, and what combinations of tactics would be most effective, safe, and inexpensive… 20. Nurturing the Relationships of All Couples: Integrating Lesbian, Gay, and Bisexual Concerns into Premarital Education and Counseling Programs Science.gov (United States) Casquarelli, Elaine J.; Fallon, Kathleen M. 2011-01-01 Research shows that premarital counseling programs help engaged couples develop interpersonal and problem-solving skills that enhance their marital relationships. Yet, there are limited services for same-sex couples. This article assumes an integrated humanistic and social justice advocacy stance to explore the needs of lesbian, gay, and bisexual… 1. 
Hybrid Approximate Dynamic Programming Approach for Dynamic Optimal Energy Flow in the Integrated Gas and Power Systems DEFF Research Database (Denmark) Shuai, Hang; Ai, Xiaomeng; Wen, Jinyu 2017-01-01 This paper proposes a hybrid approximate dynamic programming (ADP) approach for the multiple time-period optimal power flow in integrated gas and power systems. ADP successively solves Bellman's equation to make decisions according to the current state of the system. So, the updated near future... 2. From Theory to Practice: Utilizing Integrative Seminars as Bookends to the Master of Public Administration Program of Study Science.gov (United States) Stout, Margaret; Holmes, Maja Husar 2013-01-01 Integrative seminar-style courses are most often used as an application-oriented capstone in place of a thesis or comprehensive exam requirement in Master of Public Administration (MPA) degree programs. This article describes and discusses the benefits of a unique approach of one National Association of Schools of Public Affairs and Administration… 3. Several problems of algorithmization in integrated computation programs on third-generation computers for short circuit currents in complex power networks Energy Technology Data Exchange (ETDEWEB) Krylov, V.A.; Pisarenko, V.P. 1982-01-01 Methods of modeling complex power networks with short circuits in the networks are described. The methods are implemented in integrated computation programs for short circuit currents and equivalents in electrical networks with a large number of branch points (up to 1000) on a computer with a limited online memory capacity (M equals 4030 for the computer). 4. Integrating Foreign Languages and Cultures into U.S. International Business Programs: Best Practices and Future Considerations Science.gov (United States) Sacco, Steven J. 2014-01-01 This paper describes the importance of foreign languages and cultures and their integration into U.S. international business programs. 
The author juxtaposes globalization strategies of European and American business schools and highlights pre-university foreign language study in Europe and the U.S. The paper goes on to describe model U.S.… 5. Patient-centeredness of integrated care programs for people with multimorbidity: results from the European ICARE4EU project. NARCIS (Netherlands) Heide, I. van der; Snoeijs, S.; Quattrini, S.; Struckmann, V.; Hujala, A.; Schellevis, F.; Rijken, M. 2018-01-01 Introduction: This paper aims to support the implementation of patient-centered care for people with multimorbidity in Europe, by providing insight into ways in which patient-centeredness is currently shaped in integrated care programs for people with multimorbidity in European countries. Methods: 6. Implementing Task-Based Language Teaching to Integrate Language Skills in an EFL Program at a Colombian University Science.gov (United States) Córdoba Zúñiga, Eulices 2016-01-01 This article reports the findings of a qualitative research study conducted with six first semester students of an English as a foreign language program in a public university in Colombia. The aim of the study was to implement task-based language teaching as a way to integrate language skills and help learners to improve their communicative… 7. BOKASUN: a fast and precise numerical program to calculate the Master Integrals of the two-loop sunrise diagrams OpenAIRE Caffo, Michele; Czyz, Henryk; Gunia, Michal; Remiddi, Ettore 2008-01-01 We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations. 8. 
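The second of the two BOKASUN methods just mentioned, Runge-Kutta solution of a system of linear differential equations, can be illustrated generically. The minimal classical RK4 stepper below is not BOKASUN's code; the harmonic-oscillator system is only a stand-in for the sunrise-diagram equations:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    where y is a list of floats."""
    def shift(u, k, c):  # u + c*k, elementwise
        return [ui + c * ki for ui, ki in zip(u, k)]
    k1 = f(t, y)
    k2 = f(t + h / 2, shift(y, k1, h / 2))
    k3 = f(t + h / 2, shift(y, k2, h / 2))
    k4 = f(t + h, shift(y, k3, h))
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Linear test system y1' = y2, y2' = -y1; the exact solution from
# (1, 0) is (cos t, -sin t).
f = lambda t, y: [y[1], -y[0]]
y, t, h = [1.0, 0.0], 0.0, math.pi / 1000
for _ in range(1000):
    y = rk4_step(f, t, y, h)
    t += h
# y is now very close to (cos pi, -sin pi) = (-1, 0)
```

For production use one would add adaptive step-size control; the fixed-step loop here is only meant to show the structure of the method.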
Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual Energy Technology Data Exchange (ETDEWEB) C. L. Smith; K. J. Kvarfordt; S. T. Wood 2008-08-01 The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. 
Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with 9. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual Energy Technology Data Exchange (ETDEWEB) C. L. Smith; K. J. Kvarfordt; S. T. Wood 2006-07-01 The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). 
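The phrase "quantify associated damage outcome frequencies" (Level 1 PRA) can be made concrete with a toy event-tree quantification: an initiating-event frequency is multiplied down each branch by the success or failure probabilities of the mitigating systems. This is a generic sketch, not SAPHIRE's actual algorithm, and every number below is invented:

```python
INIT_FREQ = 1e-2  # initiating-event frequency, per reactor-year (invented)
P_FAIL = {"injection": 1e-3, "recirculation": 5e-3}  # invented failure probs

def sequence_frequency(failed):
    """Frequency of the event-tree sequence in which exactly the
    systems in `failed` fail and the remaining systems succeed."""
    freq = INIT_FREQ
    for system, p in P_FAIL.items():
        freq *= p if system in failed else (1 - p)
    return freq

# Toy success criterion: core damage unless every mitigating system works.
all_sets = [set(), {"injection"}, {"recirculation"},
            {"injection", "recirculation"}]
core_damage_freq = sum(sequence_frequency(s) for s in all_sets if s)
# The four sequences partition the initiating-event frequency, so the
# sum over all of them recovers INIT_FREQ.
```

A real Level 1 model would derive the branch probabilities from fault trees rather than fixed numbers, but the bookkeeping is the same.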
SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with 10. Integrated Economic and Financial Analysis of China’s Sponge City Program for Water-resilient Urban Development Directory of Open Access Journals (Sweden) Xiao Liang 2018-03-01 Full Text Available To improve Chinese cities’ resilience to climate change, the Sponge City Program, which was designed to tackle water pollution, stormwater management, and flooding, was initiated in 2014. Being a major policy initiative, the Sponge City Program raises heated discussions among Chinese academics; however, no relevant extensive economic or financial analysis has been conducted. The research carries out an integrated economic and financial analysis of the Sponge City Program from the perspectives of two stakeholders: the government and the project manager. Different stakeholders have unique perspectives on the management of water projects. This study has two parts: economic analysis and financial analysis. The economic analysis is from the government perspective, and considers all the economic, environmental, and social effects. The financial analysis is from the project manager’s perspective, and judges the financial feasibility of projects. Changde city, one of the demo cities of the Sponge City Program, is chosen for the research. The results show that from the perspective of the government, the Sponge City Program should be promoted, because most water projects are economically feasible. 
From the perspective of the project manager, the program should not be invested in, because the water projects are financially infeasible. A more comprehensive and integrated plan for developing and managing the water projects of the Sponge City Program is required. Otherwise, the private sector may not be interested in investing in the water projects, and the water projects may not be operational in the long term. 11. Efficient Separations and Processing Integrated Program (ESP-IP): Technology summary International Nuclear Information System (INIS) 1994-02-01 The Efficient Separations and Processing Integrated Program (ESPIP) was created in 1991 to identify, develop and perfect separations technologies and processes to treat wastes and address environmental problems throughout the DOE Complex. These wastes and environmental problems, located at more than 100 contaminated installations in 36 states and territories, are the result of half a century of nuclear processing activities by DOE and its predecessor organizations. The cost of cleaning up this legacy has been estimated to be of the order of hundreds of billions of dollars, and ESPIP originated with the realization that even a marginal cost reduction from new separations technologies and processes would save billions of dollars. The ultimate mission for ESPIP, as outlined in the ESPIP Strategic Plan, is: to provide Separations Technologies and Processes (STPs) to process and immobilize a wide spectrum of radioactive and hazardous defense wastes; to coordinate STP research and development efforts within DOE; to explore the potential uses of separated radionuclides; to transfer demonstrated separations and processing technologies developed by DOE to the US industrial sector; and to facilitate competitiveness of US technology and industry in the world market. 
Technology research and development currently under investigation by ESPIP can be divided into four broad areas: cesium and strontium removal; TRU and other HLW separations; sludge technology; and other technologies. 12. Design and first integral test of MUSE facility in ALPHA program Energy Technology Data Exchange (ETDEWEB) Park, Hyun-sun; Yamano, Norihiro; Maruyama, Yu; Moriyama, Kiyofumi; Kudo, Tamotsu; Yang, Yanhua; Sugimoto, Jun [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment 1998-03-01 Vapor explosion (steam explosion, or energetic fuel-coolant interaction) is a phenomenon in which a hot liquid rapidly releases its internal energy into a surrounding colder and more volatile liquid when these liquids come into sudden contact. This rapid energy release leads to rapid vapor production on a timescale short compared to that of vapor expansion; the result is local pressurization similar to an explosion, which eventually threatens the surroundings through dynamic pressures and the subsequent expansion. It has been recognized that the energetics of vapor explosions strongly depend on the initial mixing geometry established by the contact of hot and cold liquids. Therefore, a new program has been initiated to investigate the energetics of vapor explosions in various contact geometries, i.e., pouring, stratified, coolant-injection and melt-injection modes, in a facility that is able to measure the energy conversion ratio and eventually to provide data to evaluate the mechanistic analytical models. In this report, the new facility, called MUSE (MUlti-configuration in Steam Explosions), and the results of the first integral test are described in detail. (author) 13. Integrated Data Collection Analysis (IDCA) Program - Statistical Analysis of RDX Standard Data Sets Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. 
(LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Phillips, Jason J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2015-10-30 The Integrated Data Collection Analysis (IDCA) program is conducting a Proficiency Test for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard. The material was tested as a well-characterized standard several times during the proficiency study to assess differences among participants and the range of results that may arise for well-behaved explosive materials. The analyses show that there are detectable differences among the results from IDCA participants. While these differences are statistically significant, most of them can be disregarded for comparison purposes to assess potential variability when laboratories attempt to measure identical samples using methods assumed to be nominally the same. 
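Interlaboratory spread of the kind this round-robin study quantifies is often summarized as the full range of results expressed as a percentage of their mean. A sketch of that summary with invented drop-height data (the real IDCA results are not reproduced here):

```python
def relative_range_pct(results):
    """Interlaboratory spread: (max - min) as a percentage of the
    mean, with one result per participating laboratory."""
    mean = sum(results) / len(results)
    return 100.0 * (max(results) - min(results)) / mean

# Invented impact drop-height results (cm) from five hypothetical labs:
heights = [17.0, 20.0, 22.0, 21.0, 24.0]
spread = relative_range_pct(heights)  # about 33.7% for these numbers
```

A spread in this range would fall inside the 26-42% band the report describes, though the report's own statistics also account for operator, method, environment, and instrument differences.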
The results presented in this report include the average sensitivity results for the IDCA participants and the ranges of values obtained. The ranges represent variation about the mean test values of between 26% and 42%. The magnitude of this variation is attributed to differences in operator, method, and environment as well as the use of different instruments that are also of varying age. The results appear to be a good representation of the broader safety testing community based on the range of methods, instruments, and environments included in the IDCA Proficiency Test. 14. Management of comments on DOE's Site Characterization Plan (SCP) and integration with the planned geotechnical program International Nuclear Information System (INIS) Bjerstedt, T.W.; Gil, A.V.; Baird, F.A. 1991-01-01 The US DOE has committed to respond to comments on the SCP throughout the site characterization process. As of January 1990 DOE had received 4,574 comments on both the SCP/Consultation Draft and the statutory SCP. Of these, 2,662 responses have been completed and returned to the originators. Many comments are programmatic in nature and express diverse concerns beyond the scope of the SCP. DOE uses a three-tiered process in responding to comments that integrates technical and management responsibilities. The process defines specific roles in developing, reviewing, and concurring on responses. Commitments or open items can be generated in DOE responses to comments, which are tracked in a relational database. Major changes reflected in the Secretary of Energy's 1989 reassessment of the high-level waste program were advocated in comments on the SCP. Most DOE commitments, however, deal with consideration of recommendations contained in SCP comments relevant to low levels of technical planning detail (SCP Study Plans). Commitments are discharged when referred to the appropriate quality-affecting or management process, whereupon their merits can be evaluated. 15. 
System description of the Repository-Only System for the FY 1990 systems integration program studies International Nuclear Information System (INIS) McKee, R.W.; Young, J.R.; Konzek, G.J. 1991-07-01 This document provides both functional and physical descriptions of a conceptual high-level waste management system defined as a Repository-Only System. Its purpose is to provide a basis for required system computer modeling and system studies initiated in FY 1990 under the Systems Integration Program of the US Department of Energy's (DOE) Office of Civilian Radioactive Waste Management (OCRWM). The Repository-Only System is designed to accept 3000 MTU per year of spent fuel and 400 equivalent MTU per year of high-level wastes for disposal in the geologic repository. This document contains both functional descriptions of the processes in the waste management system and physical descriptions of the equipment and facilities necessary for performance of those processes. These descriptions contain the level of detail needed for the projected systems analysis studies. The Repository-Only System contains all system components, from the waste storage facilities of the waste generators to the underground facilities for final disposal of the wastes. The major facilities in the system are the waste generator waste storage facilities, a repository facility that packages the wastes and then emplaces them in the geologic repository, and the transportation equipment and facilities for transporting the wastes between these major facilities. 18 refs., 39 figs 16. Hanford Site's Integrated Risk Assessment Program: No-intervention risk assessment International Nuclear Information System (INIS) Mahaffey, J.A.; Dukelow, J.S. Jr.; Stenner, R.D. 
1994-08-01 The long-term goal of the Integrated Risk Assessment Program (IRAP) is to estimate risks to workers, the public, organizations and groups with reserved rights to Site access, the ecosystem, and natural resources, to aid in managing environmental restoration and waste management at the Hanford Site. For each of these, information is needed about current risks, risks during cleanup, and end-state risks. The objective is four-fold: to determine if, when, and to what extent to remediate; to identify information unavailable but needed to make better cleanup decisions; to establish technology performance criteria for achieving desired cleanup levels; and to understand costs and benefits of activities from a Site-wide perspective. The no-intervention risk assessment is the initial evaluation of public health risks conducted under IRAP. The objective is to identify types of activities that the US Department of Energy (DOE) must accomplish for closure of the Hanford Site, defined as no further DOE intervention. There are two primary conclusions from the no-intervention risk assessment. First, some maintenance and operations activities at Hanford must be continued to protect the public from grave risks. However, when large Hanford expenditures are compared to cleanup progress, funds expended for maintenance and operations must be put in proper perspective. Second, stakeholders' emphasis on public risks at Hanford, as indicated by remediation priorities, is not in line with the risks estimated. The focus currently is on compliance with regulations, and on dealing with issues which are visible to stakeholders. 17. 
Toward Integral Higher Education Study Programs in the European Higher Education Area: A Programmatic and Strategic View Directory of Open Access Journals (Sweden) Markus Molz 2009-12-01 Full Text Available This essay somewhat arbitrarily freezes my ongoing attempt to grasp the present situation and future possibilities of higher education courses, programs, institutions and initiatives that are inspired by integral and likeminded approaches. The focus in this essay is on the European Higher Education Area and its specifics, whereas some implicit or explicit comparisons with the USA are made. My reflections are triggered by the recurrent observation that in Europe there seems to be (i) more demand than supply of integrally oriented higher education programs, (ii) an imbalance between overused but little successful and underused but potentially more promising strategies to implement such programs, (iii) little or no learning from past failures, and (iv) little mutual awareness, communication and collaboration between different activists and initiatives in this field. The context for this essay is (i) the current societal macroshift, (ii) the unfolding of academic-level integral and likeminded research worldwide, and (iii) the large-scale reform of the European Higher Education systems brought about by the Bologna process, its (false) promises and the potential it nevertheless has for realizing examples of a more integral higher education. On this basis the consequences for attempts to overcome a relatively stagnant state of affairs in Europe are discussed. Given that most past attempts to implement programs inspired by an integral worldview have failed from the start, or disappeared after a relatively short period, or are marginalised or becoming re-mainstreamed, this essay aims to devise a potentially more promising strategic corridor and describes the contours of the results that could be brought about when following a developmental trajectory within this corridor. 
This futurising exercise is inspired by principles shared by many integral and likeminded approaches, especially the reconsideration, integration and transcendence of premodern, modern and postmodern structures and practices. 18. CAD-based Monte Carlo program for integrated simulation of nuclear system SuperMC International Nuclear Information System (INIS) Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin 2015-01-01 Highlights: • The newly developed CAD-based Monte Carlo program SuperMC for integrated simulation of nuclear systems makes use of a hybrid MC-deterministic method and advanced computer technologies. SuperMC is designed to perform transport calculations of various types of particles; depletion and activation calculations including isotope burn-up, material activation and shutdown dose; and multi-physics coupling calculations including thermo-hydraulics, fuel performance and structural mechanics. The bi-directional automatic conversion between general CAD models and physical settings and calculation models can be well performed. Results and the process of simulation can be visualized with dynamical 3D datasets and geometry models. Continuous-energy cross section, burnup, activation, irradiation damage and material data etc. are used to support the multi-process simulation. An advanced cloud-computing framework makes the computation- and storage-intensive simulation more accessible as a network service to support design optimization and assessment. The modular design and generic interface promote its flexible manipulation and coupling of external solvers. 
• The newly developed advanced methods incorporated in SuperMC are introduced, including the hybrid MC-deterministic transport method, particle physical-interaction treatment, multi-physics coupling calculation, automatic geometry modeling and processing, intelligent data analysis and visualization, elastic cloud computing and parallel calculation. • The functions of SuperMC 2.1, integrating automatic modeling, neutron and photon transport calculation, and visualization of results and process, are introduced. It has been validated by using a series of benchmarking cases such as the fusion reactor ITER model and the fast reactor BN-600 model. - Abstract: The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine 19. Promoting convergence: The integrated graduate program in physical and engineering biology at Yale University, a new model for graduate education. Science.gov (United States) Noble, Dorottya B; Mochrie, Simon G J; O'Hern, Corey S; Pollard, Thomas D; Regan, Lynne 2016-11-12 20. Successfully integrating aged care services: A review of the evidence and tools emerging from a long-term care program Directory of Open Access Journals (Sweden) Michael J. Stewart 2013-02-01 Full Text Available Background: Providing efficient and effective aged care services is one of the greatest public policy concerns currently facing governments. Increasing the integration of care services has the potential to provide many benefits including increased access, promoting greater efficiency, and improving care outcomes. There is little research, however, investigating how integrated aged care can be successfully achieved. The PRISMA (Program of Research to Integrate Services for the Maintenance of Autonomy) project, from Quebec, Canada, is one of the most systematic and sustained bodies of research investigating the translation and outcomes of an integrated care policy into practice. 
The PRISMA research program has run since 1988, yet there has been no independent systematic review of this work to draw out the lessons learnt. Methods: Narrative review of all literature emanating from the PRISMA project between 1988 and 2012. Researchers accessed an online list of all published papers from the program website. The reference lists of papers were hand searched to identify additional literature. Finally, Medline, Pubmed, EMBASE and Google Scholar indexing databases were searched using key terms and author names. Results were extracted into specially designed spreadsheets for analysis. Results: 45 journal articles and two books authored or co-authored by the PRISMA team were identified. Research was primarily concerned with: the design, development and validation of screening and assessment tools; and results generated from their application. Both quasi-experimental and cross-sectional analytic designs were used extensively. Contextually appropriate expert opinion was obtained using variations on the Delphi Method. Literature analysis revealed the structures, processes and outcomes which underpinned the implementation. PRISMA provides evidence that integrating care for older persons is beneficial to individuals through reducing incidence of functional 1. Successfully integrating aged care services: A review of the evidence and tools emerging from a long-term care program Directory of Open Access Journals (Sweden) Michael J. Stewart 2013-02-01 2. Factors Influencing the Selection of the Systems Integration Organizational Model Type for Planning and Implementing Government High-Technology Programs Science.gov (United States) Thomas, Leann; Utley, Dawn 2006-01-01 While there has been extensive research in defining project organizational structures for traditional projects, little research exists to support high-technology government projects' organizational structure definition. 
High-technology government projects differ from traditional projects in that they are non-profit, span government-industry organizations, typically require significant integration effort, and are strongly susceptible to a volatile external environment. Systems Integration implementation has been identified as a major contributor to both project success and failure. The literature research bridges program management organizational planning, systems integration, organizational theory, and independent project reports, in order to assess Systems Integration (SI) organizational structure selection for improving the high-technology government project's probability of success. This paper will describe the methodology used to 1) identify and assess SI organizational structures and their success rate, and 2) identify key factors to be used in the selection of these SI organizational structures during the acquisition strategy process. 3. Integrated safety assessment report: Integrated Safety Assessment Program: Millstone Nuclear Power Station, Unit 1 (Docket No. 50-245): Draft report International Nuclear Information System (INIS) 1987-04-01 The Integrated Safety Assessment Program (ISAP) was initiated in November 1984 by the US Nuclear Regulatory Commission to conduct integrated assessments for operating nuclear power reactors. The integrated assessment is conducted on a plant-specific basis to evaluate all licensing actions, licensee-initiated plant improvements, and selected unresolved generic/safety issues to establish implementation schedules for each item. In addition, procedures will be established to allow for a periodic updating of the schedules to account for licensing issues that arise in the future. This report documents the review of Millstone Nuclear Power Station, Unit No. 1, operated by Northeast Nuclear Energy Company (located in Waterford, Connecticut). Millstone Nuclear Power Station, Unit No. 
1, is one of two plants being reviewed under the pilot program for ISAP. This report indicates how 85 topics selected for review were addressed. This report presents the staff's recommendations regarding the corrective actions to resolve the 85 topics and other actions to enhance plant safety. The report is being issued in draft form to obtain comments from the licensee, nuclear safety experts, and the Advisory Committee for Reactor Safeguards (ACRS). Once those comments have been resolved, the staff will present its positions, along with a long-term implementation schedule from the licensee, in the final version of this report 4. 0 + 5 Vascular Surgery Residents' Operative Experience in General Surgery: An Analysis of Operative Logs from 12 Integrated Programs. Science.gov (United States) Smith, Brigitte K; Kang, P Chulhi; McAninch, Chris; Leverson, Glen; Sullivan, Sarah; Mitchell, Erica L 2016-01-01 5. 'Integration' DEFF Research Database (Denmark) Olwig, Karen Fog 2011-01-01 … while the countries have adopted disparate policies and ideologies, differences in the actual treatment and attitudes towards immigrants and refugees in everyday life are less clear, due to parallel integration programmes based on strong similarities in the welfare systems and in cultural notions … of equality in the three societies. Finally, it shows that family relations play a central role in immigrants' and refugees' establishment of a new life in the receiving societies, even though the welfare society takes on many of the social and economic functions of the family. … 6. System description of the Basic MRS System for the FY 1990 Systems Integration Program studies International Nuclear Information System (INIS) McKee, R.W.; Young, J.R.; Konzek, G.J. 1991-07-01 This document provides both functional and physical descriptions of a conceptual high-level waste management system defined as a Basic MRS System. 
Its purpose is to provide a basis for required system computer modeling and system studies initiated in FY 1990 under the Systems Integration Program of the Office of Civilian Radioactive Waste Management (OCRWM). Two specific systems studies initiated in FY 1990, the Reference System Performance Evaluation and the Aggregate Receipt Rate Study, utilize the information in this document. The Basic MRS System is the current OCRWM reference high-level radioactive waste repository system concept. It is designed to accept 3000 MTU per year of spent fuel and 400 equivalent MTU per year of high-level wastes. The Basic MRS System includes a storage-only MRS that provides for a limited amount of commercial spent fuel storage capacity prior to acceptance by the geologic repository for disposal. This document contains both functional descriptions of the processes in the waste management system and physical descriptions of the equipment and facilities necessary for performance of those processes. The basic MRS system contains all system components, from the waste storage facilities of the waste generators to the underground facilities for final disposal of the wastes. The major facilities in the system are the waste generator waste storage facilities, an MRS facility that provides interim storage of wastes accepted from the waste generators, a repository facility that packages the wastes and then emplaces them in the geologic repository, and the transportation equipment and facilities for transporting the waste between these major facilities 7. Telephone-Based Coaching: A Comparison of Tobacco Cessation Programs in an Integrated Health Care System Science.gov (United States) Boccio, Mindy; Sanna, Rashel S.; Adams, Sara R.; Goler, Nancy C.; Brown, Susan D.; Neugebauer, Romain S.; Ferrara, Assiamira; Wiley, Deanne M.; Bellamy, David J.; Schmittdiel, Julie A. 2016-01-01 Purpose Many Americans continue to smoke, increasing their risk of disease and premature death. 
Both telephone-based counseling and in-person tobacco cessation classes may improve access for smokers seeking convenient support to quit. Little research has assessed whether such programs are effective in real-world clinical populations. Design Retrospective cohort study comparing wellness coaching participants with two groups of controls. Setting Kaiser Permanente, Northern California (KPNC), a large integrated health care delivery system. Subjects 241 patients who participated in telephonic tobacco cessation coaching from 1/1/2011–3/31/2012, and two control groups: propensity-score matched controls, and controls who participated in a tobacco cessation class during the same period. Wellness coaching participants received an average of two motivational-interviewing-based coaching sessions that engaged the patient, evoked their reasons to consider quitting, and helped them establish a quit plan. Measures Self-reported quitting of tobacco and fills of tobacco cessation medications within 12 months of follow-up. Analysis Logistic regressions adjusting for age, gender, race/ethnicity, and primary language. Results After adjusting for confounders, tobacco quit rates were higher among coaching participants vs. matched controls (31% vs. 23%). Coaching participants and class attendees filled tobacco-cessation prescriptions at a higher rate (47% for both) than matched controls (6%). Coaching was as effective as in-person classes and was associated with higher rates of quitting compared to no treatment. The telephonic modality may increase convenience and scalability for health care systems looking to reduce tobacco use and improve health. PMID:26559720 8. Cancer Care Ontario and integrated cancer programs: portrait of a performance management system and lessons learned. Science.gov (United States) Cheng, Siu Mee; Thompson, Leslee J 2006-01-01 A performance management system has been implemented by Cancer Care Ontario (CCO). 
This system allows for the monitoring and management of 11 integrated cancer programs (ICPs) across the Province of Ontario. The system comprises four elements: reporting frequency, reporting requirements, review meetings, and accountability and continuous improvement activities. CCO and the ICPs have recently completed quarterly performance review exercises for the last two quarters of the fiscal year 2004-2005. The purpose of this paper is to address some of the key lessons learned. The paper provides an outline of the CCO performance management system. These lessons included: data must be valid and reliable; performance management requires commitments from both parties in the performance review exercises; streamlining performance reporting is beneficial; technology infrastructure which allows for cohesive management of data is vital for a sustainable performance management system; performance indicators need to stand up to scrutiny by both parties; and providing comparative data across the province is valuable. Critical success factors which would help to ensure a successful performance management system include: corporate engagement from various parts of an organization in the review exercises; desire to focus on performance improvement and avoidance of blaming; and strong data management systems. The performance management system is a practical and sustainable system that allows for performance improvement of cancer care services. It can be a vital tool to enhance accountability within the health care system. The paper demonstrates that the performance management system supports accountability in the cancer care system for Ontario, and reflects the principles of the provincial government's commitment to continuous improvement of healthcare. 9. Integrated Data Collection Analysis (IDCA) Program - KClO4/Carbon Mixture Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. 
[Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2013-01-31 The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of a mixture of KClO4 and activated carbon—KClO4/C mixture. This material was selected because of the challenge of performing SSST testing of a mixture of two solids. The mixture was found to be insensitive to impact, friction, and thermal stimulus, and somewhat sensitive to spark discharge. This effort, funded by the Department of Homeland Security (DHS), ultimately will put the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study has the potential to suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. 
Each participating testing laboratory uses identical test materials and preparation methods wherever possible. Note, however, the test procedures differ among the laboratories. The results are compared among the laboratories and then compared to historical data from various sources. The testing performers involved for the KClO4/carbon mixture are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center, (NSWC IHD), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to understand how to compare results when these testing variables cannot be made consistent. 10. Integrated Data Collection Analysis (IDCA) Program - KClO3/Dodecane Mixture Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorenson, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Whinnery, LeRoy L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. 
(LLNL), Livermore, CA (United States) 2011-05-23 The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of a mixture of KClO3 and dodecane—KClO3/dodecane mixture. This material was selected because of the challenge of performing SSST testing of a mixture of solid and liquid materials. The mixture was found to be: 1) more sensitive to impact than RDX and PETN, 2) less sensitive to friction than PETN, and 3) less sensitive to spark than RDX. The thermal analysis showed few or no exothermic features, suggesting that the dodecane volatilized at low temperatures. A prominent endothermic feature, assigned to the melting of KClO3, was observed. This effort, funded by the Department of Homeland Security (DHS), ultimately will put the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study has the potential to suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. Note, however, the test procedures differ among the laboratories. The results are compared among the laboratories and then compared to historical data from various sources. The testing performers involved for the KClO3/dodecane mixture are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), and Indian Head Division, Naval Surface Warfare Center (NSWC IHD). 
These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to understand 11. Integrated Data Collection Analysis (IDCA) Program - AN and Bullseye Smokeless Powder Energy Technology Data Exchange (ETDEWEB) Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States); Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States); Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States); Phillips, Jason J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelley, Timothy J. [Bureau of Alcohol, Tobacco, and Firearms, Redstone Arsenal, AL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall Air Force Base, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2013-07-17 The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of ammonium nitrate (AN) mixed with Bullseye® smokeless powder (Gunpowder). 
The participants found the AN/Gunpowder to: 1) have a range of sensitivity to impact, comparable to or less than RDX, 2) be fairly insensitive to friction as measured by BAM and ABL, 3) have a range for ESD, from insensitive to more sensitive than PETN, and 4) have thermal sensitivity about the same as PETN and Gunpowder. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study has the potential to suggest new guidelines and methods and possibly establish the SSST testing accuracies needed when developing safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods. Note, however, the test procedures differ among the laboratories. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center, (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent. Keywords: Small-scale safety testing, proficiency test, impact-, friction-, spark discharge-, thermal testing, round-robin test, safety testing protocols, HME, RDX, potassium perchlorate, potassium 12. A linear programming computational framework integrates phosphor-proteomics and prior knowledge to predict drug efficacy. 
Science.gov (United States) Ji, Zhiwei; Wang, Bing; Yan, Ke; Dong, Ligang; Meng, Guanmin; Shi, Lei 2017-12-21 In recent years, the integration of 'omics' technologies, high-performance computation, and mathematical modeling of biological processes indicates that systems biology has started to fundamentally change how drug discovery is approached. The LINCS public data warehouse provides detailed information about cell responses to various genetic and environmental stressors. It can be of great help in developing new drugs and therapeutics, as well as in addressing the lack of effective drugs, drug resistance, and relapse in cancer therapies. In this study, we developed a Ternary-status-based Integer Linear Programming (TILP) method to infer cell-specific signaling pathway networks and predict compounds' treatment efficacy. The novelty of our study is that phosphoproteomic data and prior knowledge are combined for modeling and optimizing the signaling network. To test the power of our approach, a generic pathway network was constructed for the human breast cancer cell line MCF7, and the TILP model was used to infer MCF7-specific pathways with a set of phosphoproteomic data collected from ten representative small-molecule chemical compounds (most of which have been studied in breast cancer treatment). Cross-validation indicated that the MCF7-specific pathway network inferred by TILP was reliable in predicting a compound's efficacy. Finally, we applied TILP to re-optimize the inferred cell-specific pathways and predict the outcomes of five small compounds (carmustine, doxorubicin, GW-8510, daunorubicin, and verapamil), which are rarely used clinically for breast cancer. In the simulation, the proposed approach facilitates identifying a compound's treatment efficacy qualitatively and quantitatively, and the cross-validation analysis indicated good accuracy in predicting the effects of the five compounds. 
In summary, the TILP model is useful for discovering new drugs for clinical use, and also for elucidating the potential mechanisms by which a compound acts on its targets. 13. Integrating genomics and proteomics data to predict drug effects using binary linear programming. Science.gov (United States) Ji, Zhiwei; Su, Jing; Liu, Chenglin; Wang, Hongyan; Huang, Deshuang; Zhou, Xiaobo 2014-01-01 The Library of Integrated Network-Based Cellular Signatures (LINCS) project aims to create a network-based understanding of biology by cataloging changes in gene expression and signal transduction that occur when cells are exposed to a variety of perturbations. It is helpful for understanding cell pathways and facilitating drug discovery. Here, we developed a novel approach to infer cell-specific pathways and identify a compound's effects using gene expression and phosphoproteomics data under treatments with different compounds. Gene expression data were employed to infer potential targets of compounds and create a generic pathway map. Binary linear programming (BLP) was then developed to optimize the generic pathway topology based on the mid-stage signaling response of phosphorylation. To demonstrate the effectiveness of this approach, we built a generic pathway map for the MCF7 breast cancer cell line and inferred the cell-specific pathways by BLP. The first group of 11 compounds was utilized to optimize the generic pathways, and then 4 compounds were used to identify effects based on the inferred cell-specific pathways. Cross-validation indicated that the cell-specific pathways reliably predicted a compound's effects. Finally, we applied BLP to re-optimize the cell-specific pathways to predict the effects of 4 compounds (trichostatin A, MS-275, staurosporine, and digoxigenin) according to compound-induced topological alterations. 
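The binary linear programming step described in the two abstracts above — choosing a 0/1 state for each edge of a generic pathway map so that the implied node activities best match measured phosphorylation — can be illustrated with a deliberately tiny sketch. The three-edge network and the "measurements" below are invented for demonstration, and the exhaustive search stands in for a real ILP solver:

```python
from itertools import product

# Toy generic pathway: receptor R activates kinases A and B; A activates C.
# Each candidate edge is a binary decision variable:
# 1 = edge kept in the cell-specific pathway, 0 = edge pruned.
edges = [("R", "A"), ("R", "B"), ("A", "C")]

# Hypothetical phosphorylation measurements (1 = phosphorylated) under a
# perturbation that stimulates R; here C is observed inactive.
measured = {"A": 1, "B": 1, "C": 0}

def predicted(keep):
    """Propagate activity from the stimulated receptor R through kept edges."""
    active = {"R"}
    changed = True
    while changed:
        changed = False
        for (src, dst), k in zip(edges, keep):
            if k and src in active and dst not in active:
                active.add(dst)
                changed = True
    return {node: int(node in active) for node in measured}

# Exhaustive search over binary assignments: the brute-force equivalent of
# the BLP objective, minimizing mismatches with the measurements.
best = min(product([0, 1], repeat=len(edges)),
           key=lambda keep: sum(predicted(keep)[n] != measured[n]
                                for n in measured))

print(dict(zip(edges, best)))  # edges retained in the inferred pathway
```

In this toy instance the search keeps R→A and R→B (both observed active) and prunes A→C, since C was measured unphosphorylated; real formulations add prior-knowledge constraints and use a dedicated solver rather than enumeration.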
Trichostatin A and MS-275 (both HDAC inhibitors) inhibited the downstream pathway of HDAC1 and caused cell growth arrest via activation of p53 and p21; the effects of digoxigenin were the opposite. Staurosporine blocked the cell cycle via p53 and p21, but also promoted cell growth via activated HDAC1 and its downstream pathway. Our approach was also applied to the PC3 prostate cancer cell line, and the cross-validation analysis showed very good accuracy in predicting effects of 4 compounds. In summary, our computational model can be 14. Multicriteria decision methodology for selecting technical alternatives in the Mixed Waste Integrated Program International Nuclear Information System (INIS) 1993-11-01 The US Department of Energy (DOE) Mixed Waste Integrated Program (MWIP) has as one of its tasks the identification of a decision methodology and of the key decision criteria for that selection. The aim of a multicriteria analysis is to provide an instrument for a systematic evaluation of distinct alternative projects. Determination of this methodology will clarify (1) the factors used to evaluate these alternatives, (2) the evaluator's view of the importance of the factors, and (3) the relative value of each alternative. The selected methodology must consider the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) decision-making criteria for application to the analysis of technology subsystems developed by the DOE Office of Technology Development. This report contains a compilation of several decision methodologies developed in various national laboratories, institutions, and universities. The purpose of these methodologies may vary, but the core of the decision attributes is very similar. Six approaches were briefly analyzed; from these six, in addition to recommendations made by the MWIP technical support group leaders and CERCLA, the final decision methodology was extracted. 
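The multicriteria evaluation the MWIP abstract describes — scoring technical alternatives against weighted criteria — reduces, in its simplest form, to a weighted-sum model. The sketch below uses the report's five criterion categories, but the weights, the two alternative technologies, and their scores are entirely made up for illustration:

```python
# Illustrative weighted-sum multicriteria ranking. The five criteria come
# from the report's categories; the weights and the 0-10 scores for each
# hypothetical alternative are invented for demonstration only.
weights = {
    "process_effectiveness": 0.30,
    "developmental_status":  0.15,
    "life_cycle_cost":       0.20,
    "implementability":      0.20,
    "regulatory_compliance": 0.15,
}

alternatives = {  # hypothetical treatment technologies
    "vitrification": {
        "process_effectiveness": 9, "developmental_status": 6,
        "life_cycle_cost": 4, "implementability": 5,
        "regulatory_compliance": 8,
    },
    "thermal_desorption": {
        "process_effectiveness": 7, "developmental_status": 8,
        "life_cycle_cost": 7, "implementability": 7,
        "regulatory_compliance": 7,
    },
}

def score(alt):
    """Weighted sum of criterion scores (weights sum to 1.0)."""
    return sum(weights[c] * alt[c] for c in weights)

ranking = sorted(alternatives, key=lambda a: score(alternatives[a]),
                 reverse=True)
for name in ranking:
    print(f"{name}: {score(alternatives[name]):.2f}")
```

More elaborate variants of this idea (pairwise comparison matrices, utility functions, outranking methods) change how the weights and scores are elicited, but the ranking step remains a comparison of aggregate scores like this one.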
Slight variations are observed in the many methodologies developed by different groups, but most of the analyzed methodologies address similar aspects. These common aspects were the core of the methodology suggested in this report for use within MWIP for the selection of technologies. The set of criteria compiled and developed for this report has been grouped into five categories: (1) process effectiveness, (2) developmental status, (3) life-cycle cost, (4) implementability, and (5) regulatory compliance 15. In Situ Remediation Integrated Program: Evaluation and assessment of containment technology International Nuclear Information System (INIS) Gerber, M.A.; Fayer, M.J. 1994-06-01 Containment technology refers to a broad range of methods that are used to contain waste or contaminated groundwater and to keep uncontaminated water from entering a waste site. The U.S. Department of Energy's (DOE) Office of Technology Development has instituted the In Situ Remediation Integrated Program (ISRIP) to advance the state of the art of innovative technologies that contain or treat, in situ, contaminated media such as soil and groundwater, to the point of demonstration and to broaden the applicability of these technologies to the widely varying site remediation requirements throughout the DOE complex. This document provides an overview of the state of the art of containment technology; it discusses ongoing development projects, identifies the technical gaps, discusses the priorities for resolving those gaps, and identifies the site parameters affecting the application of a specific containment method. The containment technologies described in this document cover surface caps; vertical barriers such as slurry walls, grout curtains, sheet pilings, frozen soil barriers, and vitrified barriers; horizontal barriers; sorbent barriers; and gravel layers/curtains. 
Within DOE, containment technology could be used to prevent water infiltration into buried waste; to provide for long-term containment of pits, trenches, and buried waste sites; for the interim containment of leaking underground storage tanks and piping; for the removal of contaminants from groundwater to prevent contamination from migrating off-site; and as an interim measure to prevent the further migration of contamination during the application of an in situ treatment technology such as soil flushing. The ultimate goal is the implementation of containment technology at DOE sites as a cost-effective, efficient, and safe choice for environmental remediation and restoration activities 16. Demonstration of an Integrated Pest Management Program for Wheat in Tajikistan Science.gov (United States) Landis, Douglas A.; Saidov, Nurali; Jaliov, Anvar; El Bouhssini, Mustapha; Kennelly, Megan; Bahlai, Christie; Landis, Joy N.; Maredia, Karim 2016-01-01 Wheat is an important food security crop in central Asia but frequently suffers severe damage and yield losses from insect pests, pathogens, and weeds. With funding from the United States Agency for International Development, a team of scientists from three U.S. land-grant universities in collaboration with the International Center for Agricultural Research in Dry Areas and local institutions implemented an integrated pest management (IPM) demonstration program in three regions of Tajikistan from 2011 to 2014. An IPM package was developed and demonstrated in farmer fields using a combination of crop and pest management techniques including cultural practices, host plant resistance, biological control, and chemical approaches. The results from four years of demonstration/research indicated that the IPM package plots almost universally had lower pest abundance and damage and higher yields and were more profitable than the farmer practice plots. 
Wheat stripe rust infestation ranged from 30% to over 80% in farmer practice plots, while generally remaining below 10% in the IPM package plots. Overall yield varied among sites and years but was always at least 30% to as much as 69% greater in IPM package plots. More than 1,500 local farmers—40% women—were trained through farmer field schools and field days held at the IPM demonstration sites. In addition, students from local agricultural universities participated in on-site data collection. The IPM information generated by the project was widely disseminated to stakeholders through peer-reviewed scientific publications, bulletins and pamphlets in local languages, and via Tajik national television. PMID:28446990 17. Data Portal for the Library of Integrated Network-based Cellular Signatures (LINCS) program: integrated access to diverse large-scale cellular perturbation response data Science.gov (United States) Koleti, Amar; Terryn, Raymond; Stathias, Vasileios; Chung, Caty; Cooper, Daniel J; Turner, John P; Vidović, Dušica; Forlin, Michele; Kelley, Tanya T; D’Urso, Alessandro; Allen, Bryce K; Torre, Denis; Jagodnik, Kathleen M; Wang, Lily; Jenkins, Sherry L; Mader, Christopher; Niu, Wen; Fazel, Mehdi; Mahi, Naim; Pilarczyk, Marcin; Clark, Nicholas; Shamsaei, Behrouz; Meller, Jarek; Vasiliauskas, Juozas; Reichard, John; Medvedovic, Mario; Ma’ayan, Avi; Pillai, Ajay 2018-01-01 Abstract The Library of Integrated Network-based Cellular Signatures (LINCS) program is a national consortium funded by the NIH to generate a diverse and extensive reference library of cell-based perturbation-response signatures, along with novel data analytics tools to improve our understanding of human diseases at the systems level. In contrast to other large-scale data generation efforts, LINCS Data and Signature Generation Centers (DSGCs) employ a wide range of assay technologies cataloging diverse cellular responses. 
Integration of, and unified access to LINCS data has therefore been particularly challenging. The Big Data to Knowledge (BD2K) LINCS Data Coordination and Integration Center (DCIC) has developed data standards specifications, data processing pipelines, and a suite of end-user software tools to integrate and annotate LINCS-generated data, to make LINCS signatures searchable and usable for different types of users. Here, we describe the LINCS Data Portal (LDP) (http://lincsportal.ccs.miami.edu/), a unified web interface to access datasets generated by the LINCS DSGCs, and its underlying database, LINCS Data Registry (LDR). LINCS data served on the LDP contains extensive metadata and curated annotations. We highlight the features of the LDP user interface that is designed to enable search, browsing, exploration, download and analysis of LINCS data and related curated content. PMID:29140462 18. College and university environmental programs as a policy problem (Part 1): Integrating Knowledge, education, and action for a better world? Science.gov (United States) Clark, S.G.; Rutherford, M.B.; Auer, M.R.; Cherney, D.N.; Wallace, R.L.; Mattson, D.J.; Clark, D.A.; Foote, L.; Krogman, N.; Wilshusen, P.; Steelman, T. 2011-01-01 The environmental sciences/studies movement, with more than 1000 programs at colleges and universities in the United States and Canada, is unified by a common interest-ameliorating environmental problems through empirical enquiry and analytic judgment. Unfortunately, environmental programs have struggled in their efforts to integrate knowledge across disciplines and educate students to become sound problem solvers and leaders. We examine the environmental program movement as a policy problem, looking at overall goals, mapping trends in relation to those goals, identifying the underlying factors contributing to trends, and projecting the future. 
We argue that despite its shared common interest, the environmental program movement is disparate and fragmented by goal ambiguity, positivistic disciplinary approaches, and poorly rationalized curricula, pedagogies, and educational philosophies. We discuss these challenges and the nature of the changes that are needed in order to overcome them. In a subsequent article (Part 2) we propose specific strategies for improvement. © 2011 Springer Science+Business Media, LLC. 19. Generation IV Reactors Integrated Materials Technology Program Plan: Focus on Very High Temperature Reactor Materials Energy Technology Data Exchange (ETDEWEB) Corwin, William R [ORNL; Burchell, Timothy D [ORNL; Katoh, Yutai [ORNL; McGreevy, Timothy E [ORNL; Nanstad, Randy K [ORNL; Ren, Weiju [ORNL; Snead, Lance Lewis [ORNL; Wilson, Dane F [ORNL 2008-08-01 Since 2002, the Department of Energy's (DOE's) Generation IV Nuclear Energy Systems (Gen IV) Program has addressed the research and development (R&D) necessary to support next-generation nuclear energy systems. The six most promising systems identified for next-generation nuclear energy are described within this roadmap. Two employ a thermal neutron spectrum with coolants and temperatures that enable hydrogen or electricity production with high efficiency (the Supercritical Water Reactor-SCWR and the Very High Temperature Reactor-VHTR). Three employ a fast neutron spectrum to enable more effective management of actinides through recycling of most components in the discharged fuel (the Gas-cooled Fast Reactor-GFR, the Lead-cooled Fast Reactor-LFR, and the Sodium-cooled Fast Reactor-SFR). The Molten Salt Reactor (MSR) employs a circulating liquid fuel mixture that offers considerable flexibility for recycling actinides and may provide an alternative to accelerator-driven systems.
At the inception of DOE's Gen IV program, it was decided to significantly pursue five of the six concepts identified in the Gen IV roadmap to determine which of them was most appropriate to meet the needs of future U.S. nuclear power generation. In particular, evaluation of the highly efficient thermal SCWR and VHTR reactors was initiated primarily for energy production, and evaluation of the three fast reactor concepts, SFR, LFR, and GFR, was begun to assess viability for both energy production and their potential contribution to closing the fuel cycle. Within the Gen IV Program itself, only the VHTR class of reactors was selected for continued development. Hence, this document will address the multiple activities under the Gen IV program that contribute to the development of the VHTR. A few major technologies have been recognized by DOE as necessary to enable the deployment of the next generation of advanced nuclear reactors, including the development and qualification of 20. Integrated Program of Multidisciplinary Education and Research in Mechanics and Physics of Earthquakes Science.gov (United States) Lapusta, N. 2011-12-01 Studying earthquake source processes is a multidisciplinary endeavor involving a number of subjects, from geophysics to engineering. As a solid mechanician interested in understanding earthquakes through physics-based computational modeling and comparison with observations, I need to educate and attract students from diverse areas. My CAREER award has provided the crucial support for the initiation of this effort. Applying for the award made me to go through careful initial planning in consultation with my colleagues and administration from two divisions, an important component of the eventual success of my path to tenure. Then, the long-term support directed at my program as a whole - and not a specific year-long task or subject area - allowed for the flexibility required for a start-up of a multidisciplinary undertaking. 
My research is directed towards formulating realistic fault models that incorporate state-of-the-art experimental studies, field observations, and analytical models. The goal is to compare the model response - in terms of long-term fault behavior that includes both sequences of simulated earthquakes and aseismic phenomena - with observations, to identify appropriate constitutive laws and parameter ranges. CAREER funding has enabled my group to develop a sophisticated 3D modeling approach that we have used to understand patterns of seismic and aseismic fault slip on the Sunda megathrust in Sumatra, investigate the effect of variable hydraulic properties on fault behavior, with application to Chi-Chi and Tohoku earthquake, create a model of the Parkfield segment of the San Andreas fault that reproduces both long-term and short-term features of the M6 earthquake sequence there, and design experiments with laboratory earthquakes, among several other studies. A critical ingredient in this research program has been the fully integrated educational component that allowed me, on the one hand, to expose students from different backgrounds to the 1. Implementing and measuring the level of laboratory service integration in a program setting in Nigeria. Directory of Open Access Journals (Sweden) Henry Mbah Full Text Available The surge of donor funds to fight HIV&AIDS epidemic inadvertently resulted in the setup of laboratories as parallel structures to rapidly respond to the identified need. However these parallel structures are a threat to the existing fragile laboratory systems. Laboratory service integration is critical to remedy this situation. 
This paper describes an approach to quantitatively measure and track integration of HIV-related laboratory services into the mainstream laboratory services and highlights some key intervention steps taken to enhance service integration. A quantitative before-and-after study was conducted in 122 Family Health International (FHI360) supported health facilities across Nigeria. A minimum service package was identified including management structure; trainings; equipment utilization and maintenance; information, commodity and quality management for laboratory integration. A check list was used to assess facilities at baseline and 3 months follow-up. Level of integration was assessed on an ordinal scale (0 = no integration, 1 = partial integration, 2 = full integration) for each service package. A composite score grading expressed as a percentage of total obtainable score of 14 was defined and used to classify facilities (≥ 80% FULL, 25% to 79% PARTIAL and <25% NO integration). Weaknesses were noted and addressed. We analyzed 9 (7.4%) primary, 104 (85.2%) secondary and 9 (7.4%) tertiary level facilities. There were statistically significant differences in integration levels between baseline and 3 months follow-up period (p<0.01). Baseline median total integration score was 4 (IQR 3 to 5) compared to 7 (IQR 4 to 9) at 3 months follow-up (p = 0.000). Partial and fully integrated laboratory systems were 64 (52.5%) and 0 (0.0%) at baseline, compared to 100 (82.0%) and 3 (2.4%) respectively at 3 months follow-up (p = 0.000). This project showcases our novel approach to measure the status of each laboratory on the integration continuum. 2. Learning about the Earth through Societally-relevant Interdisciplinary Research Projects: the Honours Integrated Science Program at McMaster Science.gov (United States) Eyles, C.; Symons, S. L.; Harvey, C. T.
2016-12-01 Students in the Honours Integrated Science (iSci) program at McMaster University (Hamilton, Ontario, Canada) learn about the Earth through interdisciplinary research projects that focus on important societal issues. The iSci program is a new and innovative undergraduate program that emphasizes the links between scientific disciplines and focuses on learning through research and the development of scientific communication skills. The program accepts up to 60 students each year and is taught by a team of 18 instructors comprising senior and junior faculty, post-doctoral fellows, a lab coordinator, instructional assistant, a librarian and library staff, and an administrator. The program is designed around a pedagogical model that emphasizes hands-on learning through interdisciplinary research (Research-based Integrated Education: RIE) and is mostly project-based and experiential. In their freshman year students learn fundamental Earth science concepts (in conjunction with chemistry, physics, mathematics and biology) through research projects focused on environmental contamination, interplanetary exploration, the effect of drugs on the human body and environment, sustainable energy, and cancer. In subsequent years they conduct research on topics such as the History of the Earth, Thermodynamics, Plant-Animal Interactions, Wine Science, Forensics, and Climate Change. The iSci program attracts students with a broad interest in science and has been particularly effective in directing high quality students into the Earth sciences as they are introduced to the discipline in their first year of study through research projects that are interesting and stimulating. The structure of the iSci program encourages consideration of geoscientific applications in a broad range of societally relevant research projects; these projects are reviewed and modified each year to ensure their currency and ability to meet program learning objectives. 3. 
Smoking Prevention for Students: Findings From a Three-Year Program of Integrated Harm Minimization School Drug Education. Science.gov (United States) Midford, Richard; Cahill, Helen; Lester, Leanne; Foxcroft, David R; Ramsden, Robyn; Venning, Lynne 2016-01-01 This study investigated the impact of the Drug Education in Victorian Schools (DEVS) program on tobacco smoking. The program taught about licit and illicit drugs in an integrated manner over 2 years, with follow up in the third year. It focused on minimizing harm, rather than achieving abstinence, and employed participatory, critical-thinking and skill-based teaching methods. A cluster-randomized, controlled trial of the program was conducted with a student cohort during years 8 (13 years), 9 (14 years), and 10 (15 years). Twenty-one schools were randomly allocated to the DEVS program (14 schools, n = 1163), or their usual drug education program (7 schools, n = 589). One intervention school withdrew in year two. There was a greater increase in the intervention students' knowledge about drugs, including tobacco, in all 3 years. Intervention students talked more with their parents about smoking at the end of the 3-year program. They recalled receiving more education on smoking in all 3 years. Their consumption of cigarettes had not increased to the same extent as controls at the end of the program. Their change in smoking harms, relative to controls, was positive in all 3 years. There was no difference between groups in the proportionate increase of smokers, or in attitudes towards smoking, at any time. These findings indicate that a school program that teaches about all drugs in an integrated fashion, and focuses on minimizing harm, does not increase initiation into smoking, while providing strategies for reducing consumption and harm to those who choose to smoke. 4. Veteran participation in the integrative health and wellness program: Impact on self-reported mental and physical health outcomes. 
Science.gov (United States) Hull, Amanda; Brooks Holliday, Stephanie; Eickhoff, Christine; Sullivan, Patrick; Courtney, Rena; Sossin, Kayla; Adams, Alyssa; Reinhard, Matthew 2018-04-05 Complementary and integrative health (CIH) services are being used more widely across the nation, including in both military and veteran hospital settings. Literature suggests that a variety of CIH services show promise in treating a wide range of physical and mental health disorders. Notably, the Department of Veterans Affairs is implementing CIH services within the context of a health care transformation, changing from disease based health care to a personalized, proactive, patient-centered approach where the veteran, not the disease, is at the center of care. This study examines self-reported physical and mental health outcomes associated with participation in the Integrative Health and Wellness Program, a comprehensive CIH program at the Washington DC VA Medical Center and one of the first wellbeing programs of its kind within the VA system. Using a prospective cohort design, veterans enrolled in the Integrative Health and Wellness Program filled out self-report measures of physical and mental health throughout program participation, including at enrollment, 12 weeks, and 6 months. Analyses revealed that veterans reported significant improvements in their most salient symptoms of concern (primarily pain or mental health symptoms), physical quality of life, wellbeing, and ability to participate in valued activities at follow-up assessments. These results illustrate the potential of CIH services, provided within a comprehensive clinic focused on wellbeing not disease, to improve self-reported health, wellbeing, and quality of life in a veteran population. Additionally, data support recent VA initiatives to increase the range of CIH services available and the continued growth of wellbeing programs within VA settings. (PsycINFO Database Record (c) 2018 APA, all rights reserved). 5. 
[Perception of Primary Care physicians on the integration with cardiology through continuity of healthcare programs in secondary prevention]. Science.gov (United States) Cosin-Sales, J; Orozco Beltrán, D; Ledesma Rodríguez, R; Barbon Ortiz Casado, A; Fernández, G 2018-02-17 To determine the perception of Primary Care (PC) physicians on the integration with cardiology (CA) through continuity of healthcare programs. A cross-sectional and multicentre study was conducted, in which a total of 200 PC physicians from all over Spain completed a qualitative survey that evaluated the level of integration with CA in secondary prevention. Physicians were grouped according to the level of PC-CA integration. The integration between CA and PC was good, but it was better in those centres with a higher integration (74.0% vs. 60.0%; p=.02) and in general, physicians considered that integration had improved (92.0% vs. 73.0%; pintegration. In 55.8%, 63.6%, and 51.3% of hospital discharge reports, indications were given on when to perform the follow-up blood analysis, as well as information about returning to working life and sexual activity, respectively. The most common communication method was the paper-based report (75 vs. 84%; p=NS). The communication between healthcare levels was greater in those Primary Care centres with a higher level of integration, as well as periodicity of the communication and the satisfaction of physicians (80.0% vs. 63.0%; p=.005). The level of integration between PC and CA is, in general, satisfactory, but those centres with a higher level of integration benefit more from a greater communication and satisfaction. Copyright © 2018 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Publicado por Elsevier España, S.L.U. All rights reserved. 6. Learning in Context: Technology Integration in a Teacher Preparation Program Informed by Situated Learning Theory Science.gov (United States) Bell, Randy L.; Maeng, Jennifer L.; Binns, Ian C. 
2013-01-01 This investigation explores the effectiveness of a teacher preparation program aligned with situated learning theory on preservice science teachers' use of technology during their student teaching experiences. Participants included 26 preservice science teachers enrolled in a 2-year Master of Teaching program. A specific program goal was to… 7. Integrating Program Assessment and a Career Focus into a Research Methods Course Science.gov (United States) Senter, Mary Scheuer 2017-01-01 Sociology research methods students in 2013 and 2016 implemented a series of "real world" data gathering activities that enhanced their learning while assisting the department with ongoing program assessment and program review. In addition to the explicit collection of program assessment data on both students' development of sociological… 8. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) version 5.0, technical reference manual International Nuclear Information System (INIS) Russell, K.D.; Atwood, C.L.; Galyean, W.J.; Sattison, M.B.; Rasmuson, D.M. 1994-07-01 The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume provides information on the principles used in the construction and operation of Version 5.0 of the Integrated Reliability and Risk Analysis System (IRRAS) and the System Analysis and Risk Assessment (SARA) system. It summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms that these programs use to construct a fault tree and to obtain the minimal cut sets. 
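The quantification step these programs perform (obtaining the top-event probability from the minimal cut sets) follows standard probabilistic risk assessment formulas. As an illustration only, not SAPHIRE's own code, the exact inclusion-exclusion result and the commonly used "min cut upper bound" can be sketched in Python, assuming independent basic events:

```python
from itertools import combinations

def cut_set_prob(cut_set, p):
    """P(all basic events in a cut set occur), events assumed independent."""
    prod = 1.0
    for event in cut_set:
        prod *= p[event]
    return prod

def top_event_exact(cut_sets, p):
    """Exact top-event probability by inclusion-exclusion over the minimal
    cut sets (feasible only for small numbers of cut sets)."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        sign = (-1.0) ** (k + 1)
        for combo in combinations(cut_sets, k):
            # Joint occurrence of several cut sets = every basic event in
            # their union occurs.
            union = set().union(*combo)
            total += sign * cut_set_prob(union, p)
    return total

def min_cut_upper_bound(cut_sets, p):
    """The 'min cut upper bound' 1 - prod_i(1 - P(C_i)) used by PRA codes."""
    prod = 1.0
    for cs in cut_sets:
        prod *= 1.0 - cut_set_prob(cs, p)
    return 1.0 - prod

# Two minimal cut sets {A, B} and {A, C} with independent basic events
p = {"A": 0.01, "B": 0.1, "C": 0.2}
cuts = [{"A", "B"}, {"A", "C"}]
exact = top_event_exact(cuts, p)       # 0.001 + 0.002 - 0.0002 = 0.0028
bound = min_cut_upper_bound(cuts, p)   # 1 - 0.999 * 0.998 = 0.002998
```

Because the inclusion-exclusion expansion grows combinatorially, codes of this kind normally report the min-cut upper bound (or a truncated expansion) for realistically sized fault trees.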
It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that are appropriate under various assumptions concerning repairability and mission time. It defines the measures of basic event importance that these programs can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by these programs to generate random basic event probabilities from various distributions. Further references are given, and a detailed example of the reduction and quantification of a simple fault tree is provided in an appendix. 9. [Effects of an Integrated Internet Addiction Prevention Program on Elementary Students' Self-regulation and Internet Addiction]. Science.gov (United States) Mun, So Youn; Lee, Byoung Sook 2015-04-01 The purpose of this study was to develop an integrated internet addiction prevention program and test its effects on the self-regulation and internet addiction of elementary students who are at risk for internet addiction. A quasi-experimental study with a nonequivalent control group pretest-posttest design was used. Participants were assigned to the experimental group (n=28) or control group (n=28). Contents of the program developed in this study included provision of information about internet addiction, interventions for empowerment, and methods of behavioral modification. A pre-test and two post-tests were done to identify the effects of the program and their continuity. Effects were tested using repeated-measures ANOVA, simple effects analysis, and time contrasts. The self-regulation of the experimental group after the program was significantly higher than that of the control group. The score for internet addiction self-diagnosis and the internet use time in the experimental group were significantly lower than in the control group.
The effectiveness of the integrated internet addiction prevention program for elementary students at risk for internet addiction was validated. 10. Integration of Foreign Educational Technologies in the Content of Program of Pre-School Education in Ukraine Directory of Open Access Journals (Sweden) 2017-02-01 Full Text Available The article reveals the integration and implementation of foreign educational technologies in the content of educational programs of preschool education in Ukraine. The emphasis is on the implementation of programs based on the ideas of Waldorf education, Montessori programs, “SelfEsteem”, “Step by Step”, and “Education for sustainable development for children of pre-school age”. It is argued that the integration of foreign educational technologies into the optimization of the scientific and methodological support of preschool education content in Ukraine should be modelled on the basis of priority, primarily humanistically oriented, pedagogical ideas and technologies. Key words: educational technologies, integration, educational program, content of preschool education, children of pre-school age. 11. Expected value based fuzzy programming approach to solve integrated supplier selection and inventory control problem with fuzzy demand Science.gov (United States) Sutrisno; Widowati; Sunarsih; Kartono 2018-01-01 In this paper, a mathematical model in quadratic programming with fuzzy parameters is proposed to determine the optimal strategy for the integrated inventory control and supplier selection problem with fuzzy demand. To solve the corresponding optimization problem, we use expected value based fuzzy programming. Numerical examples are performed to evaluate the model.
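The abstract does not state the model's exact formulation, but expected value based fuzzy programming typically replaces each fuzzy coefficient with its credibilistic expected value before solving the crisp program; for a triangular fuzzy number (a, b, c) this is E = (a + 2b + c) / 4. A minimal sketch with hypothetical demand figures:

```python
def expected_value(tfn):
    """Credibilistic expected value of a triangular fuzzy number (a, b, c):
    E = (a + 2b + c) / 4. Turns a fuzzy demand into a crisp coefficient
    before the (quadratic) program is solved."""
    a, b, c = tfn
    return (a + 2 * b + c) / 4.0

# Hypothetical fuzzy demand estimates for two products
fuzzy_demand = {"p1": (80, 100, 130), "p2": (40, 50, 55)}
crisp_demand = {k: expected_value(v) for k, v in fuzzy_demand.items()}
# crisp_demand == {"p1": 102.5, "p2": 48.75}
```

Once the demand is defuzzified this way, the supplier-selection and inventory-control decisions reduce to an ordinary crisp quadratic program over purchase and storage quantities.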
From the results, the optimal amount of each product that have to be purchased from each supplier for each time period and the optimal amount of each product that have to be stored in the inventory for each time period were determined with minimum total cost and the inventory level was sufficiently closed to the reference level. 12. A framework for understanding outcomes of integrated care programs for the hospitalized elderly Directory of Open Access Journals (Sweden) Jacqueline M. Hartgerink 2013-11-01 Full Text Available Introduction: Integrated care has emerged as a new strategy to enhance the quality of care for hospitalised elderly. Current models do not provide insight into the mechanisms underlying integrated care delivery. Therefore, we developed a framework to identify the underlying mechanisms of integrated care delivery. We should understand how they operate and interact, so that integrated care programmes can enhance the quality of care and eventually patient outcomes.Theory and methods: Interprofessional collaboration among professionals is considered to be critical in integrated care delivery due to many interdependent work requirements. A review of integrated care components brings to light a distinction between the cognitive and behavioural components of interprofessional collaboration.Results: Effective integrated care programmes combine the interacting components of care delivery. These components affect professionals’ cognitions and behaviour, which in turn affect quality of care. Insight is gained into how these components alter the way care is delivered through mechanisms such as combining individual knowledge and actively seeking new information.Conclusion: We expect that insight into the cognitive and behavioural mechanisms will contribute to the understanding of integrated care programmes. 
The framework can be used to identify the underlying mechanisms of integrated care responsible for producing favourable outcomes, allowing comparisons across programmes. 13. Implementing and measuring the level of laboratory service integration in a program setting in Nigeria. Science.gov (United States) Mbah, Henry; Negedu-Momoh, Olubunmi Ruth; Adedokun, Oluwasanmi; Ikani, Patrick Anibbe; Balogun, Oluseyi; Sanwo, Olusola; Ochei, Kingsley; Ekanem, Maurice; Torpey, Kwasi 2014-01-01 The surge of donor funds to fight HIV&AIDS epidemic inadvertently resulted in the setup of laboratories as parallel structures to rapidly respond to the identified need. However these parallel structures are a threat to the existing fragile laboratory systems. Laboratory service integration is critical to remedy this situation. This paper describes an approach to quantitatively measure and track integration of HIV-related laboratory services into the mainstream laboratory services and highlights some key intervention steps taken to enhance service integration. A quantitative before-and-after study was conducted in 122 Family Health International (FHI360) supported health facilities across Nigeria. A minimum service package was identified including management structure; trainings; equipment utilization and maintenance; information, commodity and quality management for laboratory integration. A check list was used to assess facilities at baseline and 3 months follow-up. Level of integration was assessed on an ordinal scale (0 = no integration, 1 = partial integration, 2 = full integration) for each service package. A composite score grading expressed as a percentage of total obtainable score of 14 was defined and used to classify facilities (≥ 80% FULL, 25% to 79% PARTIAL and <25% NO integration). Weaknesses were noted and addressed. We analyzed 9 (7.4%) primary, 104 (85.2%) secondary and 9 (7.4%) tertiary level facilities. There were statistically significant differences in integration levels between baseline and 3 months follow-up (p<0.01). Baseline median total integration score was 4 (IQR 3 to 5) compared to 7 (IQR 4 to 9) at 3 months follow-up (p = 0.000). Partial and fully integrated laboratory systems were 64 (52.5%) and 0 (0.0%) at baseline, compared to 100 (82.0%) and 3 (2.4%) respectively at 3 months follow-up (p = 0.000).
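The composite grading described here implies seven service packages, each scored 0 (none), 1 (partial), or 2 (full), for a maximum obtainable score of 14, with FULL read as the high end of the percentage scale (≥ 80%). A minimal sketch of that scoring and classification:

```python
def composite_score(package_scores, max_score=14):
    """Composite grading: sum of ordinal ratings (0 = none, 1 = partial,
    2 = full) over the 7 minimum-service packages, expressed as a
    percentage of the maximum obtainable score of 14."""
    return 100.0 * sum(package_scores) / max_score

def classify(pct):
    # Thresholds as described in the study (FULL taken as the high end)
    if pct >= 80:
        return "FULL"
    if pct >= 25:
        return "PARTIAL"
    return "NO"

# Hypothetical facility matching the reported medians: 4/14 at baseline,
# 7/14 at the 3-month follow-up
baseline = [1, 0, 1, 1, 0, 1, 0]   # total 4 -> ~28.6% -> PARTIAL
followup = [2, 1, 1, 1, 0, 1, 1]   # total 7 -> 50.0% -> PARTIAL
```

Scoring every facility on this continuum at baseline and follow-up is what lets the study quantify movement toward integration rather than reporting a simple integrated/not-integrated flag.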
This project showcases our novel approach to measure the status of each laboratory on the integration continuum. 14. Possible stakeholder concerns regarding volatile organic compound in arid soils integrated demonstration technologies not evaluated in the stakeholder involvement program International Nuclear Information System (INIS) Peterson, T. 1995-12-01 The Volatile Organic Compounds in Arid Soils Integrated Demonstration (VOC-Arid ID) supported the demonstration of a number of innovative technologies, not all of which were evaluated in the integrated demonstration's stakeholder involvement program. These technologies have been organized into two categories and the first category ranked in order of priority according to interest in the evaluation of the technology. The purpose of this report is to present issues stakeholders would likely raise concerning each of the technologies in light of commentary, insights, data requirements, concerns, and recommendations offered during the VOC-Arid ID's three-year stakeholder involvement, technology evaluation program. A secondary purpose is to provide a closeout status for each of the technologies associated with the VOC-Arid ID. This report concludes with a summary of concerns and requirements that stakeholders have for all innovative technologies 15. Hypersonic research engine project. Phase 2: Aerothermodynamic Integration Model (AIM) data reduction computer program, data item no. 54.16 Science.gov (United States) Gaede, A. E.; Platte, W. (Editor) 1975-01-01 The data reduction program used to analyze the performance of the Aerothermodynamic Integration Model is described. 
Routines to acquire, calibrate, and interpolate the test data, to calculate the axial components of the pressure area integrals and the skin function coefficients, and to report the raw data in engineering units are included along with routines to calculate flow conditions in the wind tunnel, inlet, combustor, and nozzle, and the overall engine performance. Various subroutines were modified and used to obtain species concentrations and transport properties in chemical equilibrium at each of the internal and external engine stations. It is recommended that future test plans include the configuration, calibration, and channel assignment data on a magnetic tape generated at the test site immediately before or after a test, and that the data reduction program be designed to operate in a batch environment. 16. SAMPLE RESULTS FROM THE INTEGRATED SALT DISPOSITION PROGRAM MACROBATCH 5 TANK 21H QUALIFICATION MST, ESS AND PODD SAMPLES Energy Technology Data Exchange (ETDEWEB) Peters, T.; Fink, S. 2012-04-24 Savannah River National Laboratory (SRNL) performed experiments on qualification material for use in the Integrated Salt Disposition Program (ISDP) Batch 5 processing. This qualification material was a composite created from recent samples from Tank 21H and archived samples from Tank 49H to match the projected blend from these two tanks. Additionally, samples of the composite were used in the Actinide Removal Process (ARP) and extraction-scrub-strip (ESS) tests. ARP and ESS test results met expectations. A sample from Tank 21H was also analyzed for the Performance Objectives Demonstration Document (PODD) requirements. SRNL was able to meet all of the requirements, including the desired detection limits for all the PODD analytes. This report details the results of the Actinide Removal Process (ARP), Extraction-Scrub-Strip (ESS) and Performance Objectives Demonstration Document (PODD) samples of Macrobatch (Salt Batch) 5 of the Integrated Salt Disposition Program (ISDP). 
17. Structural Integrity Program for the Calcined Solids Storage Facilities at the Idaho Nuclear Technology and Engineering Center International Nuclear Information System (INIS) Bryant, J.W.; Nenni, J.A. 2003-01-01 This report documents the activities of the structural integrity program at the Idaho Nuclear Technology and Engineering Center relevant to the high-level waste Calcined Solids Storage Facilities and associated equipment, as required by DOE M 435.1-1, ''Radioactive Waste Management Manual.'' Based on the evaluation documented in this report, the Calcined Solids Storage Facilities are not leaking and are structurally sound for continued service. Recommendations are provided for continued monitoring of the Calcined Solids Storage Facilities 18. A Fortran program for the numerical integration of the one-dimensional Schroedinger equation using exponential and Bessel fitting methods International Nuclear Information System (INIS) Cash, J.R.; Raptis, A.D.; Simos, T.E. 1990-01-01 An efficient algorithm is described for the accurate numerical integration of the one-dimensional Schroedinger equation. This algorithm uses a high-order, variable step Runge-Kutta like method in the region where the potential term dominates, and an exponential or Bessel fitted method in the asymptotic region. This approach can be used to compute scattering phase shifts in an efficient and reliable manner. A Fortran program which implements this algorithm is provided and some test results are given. (orig.) 19. 
Structural Integrity Program for the Calcined Solids Storage Facilities at the Idaho Nuclear Technology and Engineering Center International Nuclear Information System (INIS) Jeffrey Bryant 2008-01-01 This report documents the activities of the structural integrity program at the Idaho Nuclear Technology and Engineering Center relevant to the high-level waste Calcined Solids Storage Facilities and associated equipment, as required by DOE M 435.1-1, 'Radioactive Waste Management Manual'. Based on the evaluation documented in this report, the Calcined Solids Storage Facilities are not leaking and are structurally sound for continued service. Recommendations are provided for continued monitoring of the Calcined Solids Storage Facilities 20. IT Workforce: Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps Science.gov (United States) 2016-11-01 principles and steps associated with workforce planning that agencies can utilize in their efforts to assess and address IT skill gaps. See GAO-04-39...As another example, our prior review of the United States Department of Agriculture’s Farm Service Agency’s Modernize and Innovate the Delivery of...IT WORKFORCE Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps
https://allopy.readthedocs.io/en/latest/penalty/
# Penalty

The penalties are classes added to the Portfolio Optimizer family of optimizers (i.e. PortfolioOptimizer, ActivePortfolioOptimizer) to impose a penalty on an asset class's weight based on the amount of uncertainty present for that asset class. Uncertainty in this instance does not mean risk (variance); rather, it signifies how uncertain we are of the estimates themselves: for example, how uncertain we are of the returns (mean) and volatility (standard deviation) estimates we have projected for the asset class.

Penalty Classes
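The documentation above describes the idea only in prose. As a rough conceptual sketch (not the actual allopy API: the function name and signature here are hypothetical), an uncertainty penalty can be thought of as a quadratic charge on each asset's weight, scaled by how much we distrust that asset's estimates:

```python
def uncertainty_penalty(weights, uncertainty, lam=1.0):
    """Quadratic penalty lam * sum_i u_i * w_i^2: at equal weight, an asset
    whose mean/volatility estimates are more uncertain costs more.
    (Hypothetical illustration, not allopy's implementation.)"""
    return lam * sum(u * w * w for w, u in zip(weights, uncertainty))

# Tilting the portfolio toward the poorly estimated asset (uncertainty 0.5)
# is penalized more heavily than tilting toward the well-estimated one (0.1).
concentrated_in_uncertain = uncertainty_penalty([0.2, 0.8], [0.1, 0.5])
concentrated_in_certain = uncertainty_penalty([0.8, 0.2], [0.1, 0.5])
```

An optimizer that subtracts such a penalty from its objective is nudged away from asset classes whose projections it cannot trust.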
https://tex.stackexchange.com/questions/546357/correcting-the-space-between-overlapping-bars
# Correcting the space between overlapping bars

How do I make spaces between the groups so that the bars stop overlapping each other? I'd have thought it would space them automatically.

\documentclass{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.17}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
    ybar,
    ymin=0,ymax=400,
    %enlargelimits=0.15,
    legend image code/.code={%
        \draw[#1, draw=none] (0cm,-0.1cm) rectangle (0.6cm,0.1cm);
    },
    ymajorgrids = true,
    legend style={at={(0.5,-0.10)}, anchor=north,legend columns=-1},
    ylabel={Execution Time (ms)},
    symbolic x coords={R2G,Gaussian,Box,Sobel, Total Edge},
    xtick=data,
    nodes near coords,
    nodes near coords style={anchor=west,rotate=90,inner xsep=1pt},
]
\addplot coordinates {(R2G,33) (Gaussian,54) (Box,127) (Sobel,246) (Total Edge, 145) };%CPU
\addplot [fill=teal!] coordinates {(R2G,8.221) (Gaussian,13.3254) (Box,14.958) (Sobel,29.935) (Total Edge, 43) };%GPU
\addplot coordinates {(R2G,20.234959834) (Gaussian,26.492609995) (Box,27.353843832) (Sobel,45.59262995) (Total Edge, 73.31923) };%FPGA
\addplot coordinates {(R2G,21.467651233) (Gaussian,38.359383243) (Box,40.454379543) (Sobel,48.592629955) (Total Edge, 81.31923) };%HLS
\legend{CPU,GPU,FPGA,HLS}
\end{axis}
\end{tikzpicture}
\end{document}

• You could add bar width=4pt, to the options of the plot. – user194703 May 26 '20 at 18:42
• Seems like it just makes all the bars super thin. May 26 '20 at 18:46
• Well you could use bar width=6pt. – user194703 May 26 '20 at 18:47
• i messed around the values, any number just makes it super thin, (just a line) . gyazo.com/afe8783785b93f2875a8e1a89d496b35 May 26 '20 at 18:48
• For example add width=\textwidth,height=0.4\textwidth, to the axis options. I don't think there is an automatic solution, you need to adjust the width of the axis, the width of the bars and the space between the bars (e.g. ybar=0pt instead of just ybar removes the space between the bars within a group) to your liking.
May 26 '20 at 18:56

I'll be happy to remove this but with bar width=6pt I get

\documentclass{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.17}
\begin{document}
\begin{tikzpicture}
\begin{axis}[bar width=6pt,
    ybar,
    ymin=0,ymax=400,
    %enlargelimits=0.15,
    legend image code/.code={%
        \draw[#1, draw=none] (0cm,-0.1cm) rectangle (0.6cm,0.1cm);
    },
    ymajorgrids = true,
    legend style={at={(0.5,-0.20)}, anchor=north,legend columns=-1},
    ylabel={Execution Time (ms)},
    symbolic x coords={R2G,Gaussian,Box,Sobel, Total Edge Detection},
    xtick=data,
    nodes near coords,
    nodes near coords style={anchor=west,rotate=90,inner xsep=1pt},
    x tick label style={text width=5em,anchor=north,align=center}
]
\addplot coordinates {(R2G,33) (Gaussian,54) (Box,127) (Sobel,246) (Total Edge Detection, 145) };%CPU
\addplot coordinates {(R2G,8.221) (Gaussian,13.3254) (Box,14.958) (Sobel,29.935) (Total Edge Detection, 43) };%GPU
\addplot coordinates {(R2G,20.234959834) (Gaussian,26.492609995) (Box,27.353843832) (Sobel,45.59262995) (Total Edge Detection, 73.31923) };%FPGA
\addplot coordinates {(R2G,21.467651233) (Gaussian,38.359383243) (Box,40.454379543) (Sobel,48.592629955) (Total Edge Detection, 81.31923) };%HLS
\legend{CPU,GPU,FPGA,HLS}
\end{axis}
\end{tikzpicture}
\end{document}

So I am unable to reproduce the outcome of your comment.

ADDENDUM: If you want the tick texts to go over two lines, one option is to set the text width appropriately.

• Thats really strange, maybe its because where i placed it, i put right under ymin=0 May 26 '20 at 18:58
• Just a follow up from that, if i wanted to rename "total edge" -> "Total Edge detection", is there a way for the words to go onto the 2nd line? normally it would overlap the other labels ? May 26 '20 at 19:08
• @Freon You can set the text width of these ticks to some appropriate value. – user194703 May 26 '20 at 19:17

This is a "problem" quite often reported here on TeX.SX. For me the solution is to provide bar width and bar shift in axis units instead of as lengths.
Then you just need to find a pair of these two keys that fits your aesthetics, and you can then change the axis width without changing the relative proportions between bar width, separation within one group, and separation between groups. But this can only be done when no symbolic coords are used. And symbolic coords are (almost?) always used in conjunction with \addplot coordinates. For both I (personally) don't see any advantage compared to using a table (i.e. \addplot table), which offers much more flexibility. So here is my proposed solution together with some other refinements of your code.

1. Create a data table from the \addplot coordinates.
2. Create xticklabels from the data table.
3. State bar width and bar shift in (absolute) axis coordinates.
4. Now you can freely scale the axis width.

Some related questions to yours and the answers are

### Bonus stuff you could do

Having this solution so far, you could even go one step further and add the \addplots in a loop instead of adding them one by one. For that see the answers to e.g.

And if you do that, there remains one point that I haven't addressed so far: I removed the manually set color from the second \addplot command (because this is not relevant for answering your question). Of course when using the loop you can't state the color as an option to the \addplot command any more. But instead you can create your own cycle list and invoke it. For that see section 4.7.7 in the PGFPlots manual (v1.17), and of course you will find examples here on TeX.SX too.

Now you will perhaps think: "Why should I do all this stuff? This seems to be much more work/code than in my 'simple' solution." And you are absolutely right about that. But the real benefit comes when you create styles with all of this. Then you make sure all your plots look quite similar, and changes are easily applied to all axes/plots by just changing the styles. Unfortunately I don't have good examples here, because questions here on TeX.SX are usually not about this topic.
But I will state some very basic examples and maybe you can imagine how simple it is to change stuff then, and maybe also how it would look if you combined the styles.

% used PGFPlots v1.17
\documentclass[border=5pt]{standalone}
\usepackage{pgfplotstable}
% use this `compat' level or higher to be able to provide (relative) axis
% units to `bar width' and `bar shift'
\pgfplotsset{compat=1.7}
\begin{document}
\begin{tikzpicture}[
        % step 3a:
        % define the values for the width and shift of the bars
        % (the total "width" at one coordinate is 1. So the sum of the two
        %  values should be maximum 1 so the bars don't overlap.)
        % (The values could also be given directly to the keys, but this here is
        %  more general and allows arbitrary calculations of the values.
        %  So one could e.g. also first get the number of columns in the data
        %  table and compute BarWidth from that.)
        /pgf/declare function={
            BarWidth = 0.175;
            BarShift = BarWidth/2 + 0.05;
        },
]
    % step 1:
    % create a data table
    % (when there is a space in a string it needs to be surrounded by curly
    %  brackets or one could use another col sep)
    \pgfplotstableread{
        x               CPU   GPU       FPGA          HLS
        R2G             33    8.221     20.234959834  21.467651233
        Gaussian        54    13.3254   26.492609995  38.359383243
        Box             127   14.958    27.353843832  40.454379543
        Sobel           246   29.935    45.59262995   48.592629955
        {Total \\ Edge} 145   43        73.31923      81.31923
    }\mydata
    \begin{axis}[
%        % step 4:
%        % adjust the width of the axis to your needs
%        width=\textwidth,
        % step 3b:
        % use the above defined values
        /pgf/bar width=BarWidth,
        /pgf/bar shift=BarShift,
        ybar,
        ymin=0,
        ymax=300,
        enlarge x limits={abs=0.5},
        ylabel={Execution Time (ms)},
        xtick=data,
        % step 2:
        % use xticklabels from table instead of stating symbolic x coords
        xticklabels from table={\mydata}{x},
        % (when you manually add line breaks you need to state how the text
        %  should be aligned)
        xticklabel style={
            align=center,
        },
        ymajorgrids=true,
        nodes near coords,
        nodes near coords style={
            anchor=west,
            rotate=90,
        },
        legend image code/.code={%
            \draw[#1, draw=none] (0cm,-0.1cm) rectangle (0.6cm,0.1cm);
        },
        legend style={
            % (use `xticklabel cs:' so you don't have to care about the yshift)
            at={(xticklabel cs:0.5)},
            anchor=north,
            legend columns=-1,
        },
        % use the \coordindex for all plots
        table/x expr={\coordindex},
    ]
http://mathoverflow.net/users/30412/michael-zieve
# Michael Zieve

Website: math.lsa.umich.edu/~zieve
Location: University of Michigan

I am a professor at the University of Michigan. In 2013-2014 I am on leave from Michigan, and I am at Shing-Tung Yau's Mathematical Sciences Center attached to Tsinghua University.

Top answers:
28 Possible counterexample to a theorem assuming Lang's conjecture
17 Is there an irreducible but solvable septic trinomial $x^7+ax^n+b = 0$?
15 Galois Group of $x^n-2$
14 When is $f(x_1, \dots, x_n)+c$ an irreducible polynomial for almost all constants $c$?
13 On composition of polynomials

Questions:
20 Can every curve be written as $f(x)=g(y)$?
15 Identifying Ramanujan's integer solutions of x^3+y^3+z^3=1 among Elkies' rational solutions
8 Question about doubly transitive groups with an n-cycle
7 The curve $(x+y+z)^3=27xyz$
7 The equation $x^m-1=y^n+y^{n-1}+…+1$ in prime powers $x,y$

Top tags:
154 nt.number-theory × 26, 105 ag.algebraic-geometry × 15, 72 polynomials × 10, 58 galois-theory × 6, 48 ac.commutative-algebra × 7, 42 elliptic-curves × 3, 38 fields × 4, 37 gr.group-theory × 9, 29 finite-fields × 4, 28 counterexamples

Accounts: MathOverflow 3,976 rep; Mathematics 1,063 rep; Academia 315 rep; Meta Stack Exchange 130 rep; Politics 101 rep
https://tex.stackexchange.com/questions/262911/temporarily-patch-a-command-xpatchcmd
# temporarily patch a command (xpatchcmd)

For context: I'm writing a CV with moderncv. I like the looks of it, but the package doesn't appear to allow easy layout customizations. What I want to achieve is one \section that formats a certain command differently than the others. In particular I'm using

\xpatchcmd\cventry{,}{\newline}{}{}

to replace a comma by a \newline. Is it possible to 'scope' the patch, such that it is only applied within, say, one section?

I suggest a different patch:

\xpatchcmd{\cventry}{,}{\cventrycomma}{}{}
\newcommand{\cventrycomma}{,}

Then you can do

\renewcommand{\cventrycomma}{\newline}

when you want to have a new line instead of a comma. The restoration can be obtained in two ways:

1. Enclose the part where you want the new line, including the \renewcommand, in a \begingroup...\endgroup pair
2. Issue \renewcommand{\cventrycomma}{,} when you want a comma again.

Method one can be hidden in an environment:

\newenvironment{specialsection}
  {\renewcommand{\cventrycomma}{\newline}}
  {}

• this looks like what i wanted - and it will use all latexs scopes that already exist automatically :) – IARI Aug 24 '15 at 14:04
• how would I use a parameter from the patched command (\cventry) inside the wrapped LaTeX command (\cventrycomma)? – IARI Aug 24 '15 at 14:37
• @IARI Sorry, I don't understand – egreg Aug 24 '15 at 14:59
• I'm sorry, I wasn't clear enough. cventry has 5 arguments. I want to be able to use these in xpatch, or even in the new proxy-newcommand - for instance i might want to replace #1 by \emph{#1} or something like that. If this isn't clear enouth I will write a separate question. – IARI Aug 25 '15 at 10:21
• @IARI That's definitely a new question – egreg Aug 25 '15 at 10:23

I would do this as follows:

\let\originalcventry\cventry% save a copy of \cventry
\xpatchcmd\cventry{,}{\newline}{}{}% create the patched version
...use the patched version...
\let\cventry\originalcventry% restore the original

• I'd suggest \LetLtxMacro instead of \let. In the sense that your restoration will not work. – egreg Aug 24 '15 at 13:52
• @egreg Where is \LetLtxMacro defined? I don't know it (and it sounds like I should). – Andrew Aug 24 '15 at 13:54
• While that would work I don't think it's a very good solution. Assume I want to do that multiple times, with difference commands. What I really want, is an actual 'scoping' (i.e. automatically use latex environments) and not have to come up with a new variable name - it looks like boilerplate to me. – IARI Aug 24 '15 at 13:56
• @Andrew In the letltxmacro package; its documentation explains why it should be used. If you test your restoration, you'll realize it doesn't work, because \xpatchcmd does nothing to the macro \cventry, but it acts on the internal macro \\cventry. – egreg Aug 24 '15 at 14:02
https://datascience.stackexchange.com/questions/26053/testing-fit-of-probability-distribution
# Testing fit of probability distribution

If I have fitted training data to a probability distribution, e.g. a Poisson distribution, how can I test this fit on some test data? To fit the Poisson distribution I am using R's fitdistrplus package, which uses MLE to determine the optimal coefficients of a given distribution. Therefore, I have the estimated $\lambda$ for a Poisson distribution based on my training data, but I am not sure how to test this on some unseen test data.
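One concrete recipe, sketched below in plain Python rather than R purely for illustration: estimate $\lambda$ by MLE on the training data (for a Poisson, the MLE is just the sample mean), then run a chi-square goodness-of-fit test of the fitted distribution against the held-out test data. The simulated draws stand in for real samples.

```python
import math
import random

random.seed(0)

def poisson_draw(lam, rng=random):
    # Knuth's algorithm; fine for small lambda.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Stand-ins for the real training and held-out test data.
train = [poisson_draw(3.0) for _ in range(500)]
test = [poisson_draw(3.0) for _ in range(200)]

# MLE for a Poisson rate is simply the sample mean of the training data.
lam_hat = sum(train) / len(train)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Chi-square goodness of fit on the TEST data: observed counts per value
# vs. counts the fitted Poisson predicts, pooling the tail (k >= 7) so the
# expected counts don't get too small.
k_max = 7
observed = [sum(1 for x in test if x == k) for k in range(k_max)]
observed.append(sum(1 for x in test if x >= k_max))
probs = [poisson_pmf(k, lam_hat) for k in range(k_max)]
probs.append(1.0 - sum(probs))
expected = [p * len(test) for p in probs]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 8 categories, minus 1, minus 1 estimated parameter -> 6 degrees of freedom;
# the 5% critical value of chi-square with 6 df is about 12.59.
rejected = chi2 > 12.59
```

A `chi2` above the critical value would reject the Poisson fit on the unseen data; in R, `gofstat` from fitdistrplus plays a similar role.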
https://anthonyashmore.com/talk/nankai-calabi-yau-metrics-cfts-and-random-matrix-theory/
# Nankai - Calabi-Yau Metrics, CFTs and Random Matrix Theory

### Abstract

Calabi-Yau manifolds have played a key role in both mathematics and physics, and are particularly important for deriving realistic models of particle physics from string theory. Unfortunately, very little is known about the explicit metrics on these spaces, leaving us unable, for example, to compute particle masses or couplings in these models. I will review recent progress in using numerical approximations to compute the spectrum of the $(p,q)$-form Laplacian on these spaces. I will finish with an example of what one can do with this new "data", giving an interesting link between Calabi-Yau metrics and random matrix theory.

Date: Aug 12, 2021, 12:00 AM
Location: Virtual
http://theartofmodelling.wordpress.com/
# 2013 AARMS Mathematical Biology Workshop

We are pleased to announce the 2013 AARMS Mathematical Biology Workshop to be held at Memorial University of Newfoundland, July 27-29, 2013 in St John's, Newfoundland. Registration closes on May 17, 2013 and abstracts should be submitted by June 30, 2013.

Plenary speakers:
Edward Allen, Texas Tech University
Linda Allen, Texas Tech University
Steve Cantrell, University of Miami
Odo Diekmann, Utrecht University
Simon Levin, Princeton University
Mark Lewis, University of Alberta
Philip Maini, Oxford University

For complete details please visit the conference website*: http://www.math.mun.ca/~ahurford/aarms/

*please note that there was a service outage for math.mun.ca on Monday Feb 18, but the link should work now.

Photo credit: Michelle Wille Photography

# Christmas gifts from Just Simple Enough!

And, no, not the gift of homework. The gifts of song and movie! Movie. This is a movie that I took in 2009 while attending a Mathematical Biology Summer School in Botswana. Click here. (Unfortunately, I encountered some technical difficulties uploading this to YouTube, but it's still watchable, albeit in micro-mini). Happy holidays everyone!

# Mathematical biology – by way of example

Mathematical biology takes many different forms depending on the practitioner. I take mine with one math and two biologys (the so-called "little m, big B"), but others like it stronger ("big M, little b"). Under my worldview, mechanistic models are a tool to analyze biological data; a tool that infuses our knowledge of the relevant biological processes into the analytical framework. That might sound very pie-in-the-sky, and so I've made up an example to illustrate what I mean. This example has been constructed so that it doesn't require any advanced knowledge: if you know how to add and multiply – that's all you'll need to answer these questions.
In the example below, the relevant biological processes are described in the section what we know already. You will need to use logical thinking to relate the what we know already section to the data reported in the DATASHEET so that you can answer the questions. If you have ever wondered 'what is Theoretical Biology?' this example helps to answer that question too. Specifically, the required steps to do modelling, as inspired by this example, would be: 1) to write down the information that goes in the what we know already section (you'd refer to these as the model assumptions); 2) to devise a scheme to relate what we know already with the biological quantities of interest (this is the model derivation step); and 3) to report the results of your analysis (model analysis and interpretation). As you work through this example, think about the types of questions that you are able to answer and how fulfilling it is that careful thinking has enabled us to draw some valuable conclusions. Understand too, that a criticism of mathematical modelling is that, in reality, everything might not happen quite as perfectly as we describe it to happen in the what we know already section. These sentiments capture the good and the bad of mathematical modelling. Mathematical models enable new and exciting insights, but our excitement is tempered because these insights are only possible owing to the assumptions that have been made, and while we do our best to make sure these assumptions are good, we know that these assumptions can never be perfect. If this sounds like fun, then have a go at the example below. If you want to email me your answers, I can email you back to let you know how you did (see here for my email address).

—————————————-

INFLUENZA X

A new and unknown disease, Influenza X, has swept through a small town (popn. 100). Your task is to describe the characteristics of the disease. Health officials want to know:

1. How many days are citizens infected before they recover?
2.
What fraction of infected citizens died from the disease?
3. What is the rate of becoming infected?

During the epidemic, citizens can be classified into one of these four groups:

• Susceptible
• Infected
• Recovered, or

As is shown in the diagram:

• Only Susceptible citizens can be Infected.
• Infected citizens either Die or Recover.
• Citizens must have been Infected before they can Recover.
• Only Infected citizens die from the disease.
• Once they have Recovered, citizens cannot be re-infected.
• All Infected citizens take the same number of days to Die or Recover.
• During the epidemic no one enters or leaves the city. No babies are born; no one dies of anything other than Influenza X.

During the epidemic all that was recorded was the number of citizens who were Susceptible, Infected or Recovered on each day and the number of people who had Died up until that point. This information is summarized in the DATASHEET provided at the end of this post. This information is also presented graphically below and you'll get a better understanding of the data by considering how the graphs and the DATASHEET are related (Question 4).

QUESTIONS

1. Fill in the missing values on the DATASHEET (below).
2. How many days are citizens infected before they recover?
3. What fraction of infected citizens died from the disease?
4. Label the axes on the graphs.
5. The transmission rate of Influenza X is 0.008 (the units have deliberately been omitted). Consider the graphs above and describe how this rate was estimated.
6. How is the unknown quantity from the DATASHEET calculated?

DATASHEET

Some definitions

• If a patient is infected on Day 1 and recovers on Day 4 that patient is infected for 3 days (i.e., Day 1-3 inclusive).
• Infected (cumulative) on Day T means the total number of citizens who have been infected any time from Day 1 to Day T (inclusive). Citizens who subsequently Died or Recovered are included in this number.
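The rules in the what we know already section translate directly into a small discrete-time simulation, which is one way to sanity-check answers for plausibility. The sketch below uses the transmission rate 0.008 quoted in Question 5; the 3-day infectious period and 20% case fatality are illustrative assumptions, not the datasheet's answers.

```python
# Discrete-time simulation of the Influenza X rules: Susceptible citizens
# become Infected, and each Infected cohort either Recovers or Dies after a
# fixed number of days. ASSUMED (not from the datasheet): 3-day infectious
# period and 20% case fatality.
beta = 0.008          # transmission rate (from the post)
days_infected = 3     # assumed infectious period
death_fraction = 0.2  # assumed fraction of infected citizens who die

S, R, D = 99.0, 0.0, 0.0
infected_cohorts = [1.0] + [0.0] * (days_infected - 1)  # by days since infection

for day in range(60):
    I = sum(infected_cohorts)
    new_infections = min(S, beta * S * I)
    finishing = infected_cohorts.pop()        # cohort ending its infection
    R += (1 - death_fraction) * finishing
    D += death_fraction * finishing
    S -= new_infections
    infected_cohorts.insert(0, new_infections)

# No one enters or leaves the town, so the four groups always sum to 100.
total = S + sum(infected_cohorts) + R + D
```

Running this shows the kind of trajectories the (missing) graphs would display, and the conservation check mirrors the "no one enters or leaves" rule.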
For some of my older posts describing Mathematical Biology, you can start here. # How to make mathematical models (even at home) As a WordPress blogger, I get a handy list of search terms that have led people to my blog. A particularly memorable search term that showed up on my feed was ‘how to make mathematical models at home’. What I liked about this query was that it suggests mathematical modelling as a recreational hobby: at home, in one’s spare time; just for fun. This speaks to an under-appreciated quality of mathematical modelling – that it’s really quite accessible once the core principles have been mastered. To get started, I would suggest any of the following textbooks*: Now, I know, you want to make your own mathematical model, not just read about other people’s mathematical models in a textbook. To start down this road, I think you should pay attention to two things: • How to make a diagram that represents your understanding of how the quantities you want to model change and interact, and; • Developing a basic knowledge of the classic models in the ecology, evolution and epidemiology including developing an understanding of what these models assume. This would correspond to reading Chapters 2 and 3 of A Biologist’s Guide to Mathematical Modeling. Remember that the classic model usually represents the most simple model that will be appropriate, and only in rare circumstances, might you be able to justify using a more simple model. For example, if the level of predation or disease spread for your population of interest is very low, then you might be able to use a model for single species population growth (exponential/logistic/Ricker) instead of the Lotka-Volterra or SIR models, however, if predation and disease spread are negligible, then it arguably wasn’t appropriate to call your problem ‘predator-prey’ or ‘disease spread’ in the first place. 
Almost by definition, it's usually not possible to go much simpler than the dynamics represented by the appropriate classic model. That should get you started. You can do this at the university library. You can do this for a project for a class. And, yes, you can even do this at home!

Footnotes:

*For someone with a background in mathematics some excellent textbooks are:

but while the above textbooks will give you a better understanding of how to perform model analysis, the 'For Biologist's' textbooks listed in this post are still the recommended reading to learn about model derivation and interpretation.

# Testing mass-action

UPDATE: I wrote this, discussing that I don't really know the justification for the law of mass action; however, comments from Martin and Helen suggest that a derivation is possible using moment closure/mean field methods. I recently found this article:

Use, misuse and extensions of "ideal gas" models of animal encounter. JM Hutchinson, PM Waser. 2007. Biological Reviews. 82:335-359.

I haven't had time to read it yet, but from the title it certainly sounds like it answers some of my questions.

——————–

Yesterday, I came across this paper from PNAS: Parameter-free model discrimination criterion based on steady-state coplanarity by Heather A. Harrington, Kenneth L. Ho, Thomas Thorne and Michael P.H. Stumpf. The paper outlines a method for testing the mass-action assumption of a model without non-linear fitting or parameter estimation. Instead, the method constructs a transformation of the model variables so that all the steady-state solutions lie on a common plane irrespective of the parameter values. The method then describes how to test if empirical data satisfies this relationship so as to reject (or fail to reject) the mass-action assumption. Sounds awesome!
One of the reasons I like this contribution is that I've always found mass-action to be a bit confusing, and consequently, I think developing simple methods to test the validity of this assumption is a step in the right direction. Thinking about how to properly represent interacting types of individuals in a model is hard because there are lots of different factors at play (see below). For me, mass-action has always seemed a bit like a magic rabbit from out of the hat; just multiply the variables; don't sweat the details of how the lion stalks its prey; just sit back and enjoy the show.

Figure 1. c x (1 Lion x 1 Eland) = 1 predation event per unit time, where c is a constant.

Before getting too far along, let's state the law:

Defn. Let $x_1$ be the density of species 1, let $x_2$ be the density of species 2, and let $f$ be the number of interactions that occur between individuals of the different species per unit time. Then, the law of mass-action states that $f \propto x_1 \times x_2$.

In understanding models, I find it much more straightforward to explain processes that just involve one type of individual – be it the logistic growth of a species residing on one patch of a metapopulation, or the constant per capita maturation rates of juveniles to adulthood. It's much harder for me to think about interactions: infectious individuals that contact susceptibles, who then become infected, and predators that catch prey, and then eat them. Because in reality: Person A walks around, sneezes, then touches the door handle that person B later touches; Person C and D sit next to each other on the train, breathing the same air. There are lots of different transmission routes, but to make progress on understanding mass-action, you want to think about what happens on average, where the average is taken across all the different transmission routes.
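One way to build intuition for the law stated above is a toy simulation: scatter the two species uniformly over the domain and count cross-species pairs that come within some encounter radius. Under mass-action, doubling one density should roughly double the encounter count. The radius, densities, and seed below are arbitrary choices, and this caricature sidesteps movement entirely.

```python
import random

def encounters(n1, n2, radius=0.02, seed=0):
    """Scatter n1 and n2 individuals uniformly on the unit square and count
    cross-species pairs closer than `radius`: a crude stand-in for the
    number of interactions per unit time."""
    rng = random.Random(seed)
    a = [(rng.random(), rng.random()) for _ in range(n1)]
    b = [(rng.random(), rng.random()) for _ in range(n2)]
    r2 = radius * radius
    return sum(1 for (x1, y1) in a for (x2, y2) in b
               if (x1 - x2) ** 2 + (y1 - y2) ** 2 < r2)

# Mass-action predicts f proportional to x1 * x2, so doubling one density
# should roughly double the encounter count.
base = encounters(400, 400)
doubled = encounters(800, 400)
ratio = doubled / base
```

The ratio lands near 2 (up to sampling noise and edge effects), which is exactly the proportionality $f \propto x_1 \times x_2$ for uniformly scattered individuals.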
In reality, also consider that: Person A was getting a coffee; Person B was going to a meeting; and Persons C and D were going to work. You want to think about averaging over all of a person's daily activities, and as such, all the people in the population might be thought of as being uniformly distributed across the entire domain. Then, the number of susceptibles in the population that find themselves in the same little $\Delta x$ as an infectious person is probably $\beta S(t) \times I(t)$.

Part of it is, I don't think I understand how I am supposed to conceptualize the movement of individuals in such a population. Individuals are going to move around, but at every point in time the density of the S's and the I's still needs to be uniform. Let's call this the uniformity requirement. I've always heard that a corollary of the assumption of mass-action was an assumption that individuals move randomly. I can believe that this type of movement rule might be sufficient to satisfy the uniformity requirement; however, I can't really believe that people move randomly, or for that matter, that lions and gazelles do either. I think I'd be more willing to understand the uniformity requirement as being met by any kind of movement where the net result of all the movements of the S's, and of the I's, is no net change in the density of S(t) and I(t) over the domain. That's why I find mass-action a bit confusing.

With that as a lead in: How do you interpret the mass-action assumption? Do you have a simple and satisfying way of thinking about it?

________________________________

This paper is relevant since the authors derive a mechanistic movement model and determine the corresponding functional response: How linear features alter predator movement and the functional response by Hannah McKenzie, Evelyn Merrill, Raymond Spiteri and Mark Lewis.

# Q1.
Define independent parameterization

## Mechanistic and phenomenological models

Mechanistic models describe the processes that relate variables to each other, attempting to explain why particular relationships emerge, rather than solely how the variables are related, as a phenomenological model would. Colleagues will ask me 'is this a mechanistic model' and then provide an example. Often, I decide that the model in question is mechanistic, even though the authors of these types of models may rarely emphasize this. Otto & Day (2008) wrote that mechanistic and phenomenological are relative model categorizations – suggesting that it is only productive to discuss whether one model is more or less mechanistic than another – and I've always thought of this as a nice way of looking at it. This has also led me to think that nearly any model, on some level, can be considered mechanistic. But, of course, not all models are mechanistic. Here's the definition that I am going to work from (derived from the Ecological Detective, see here):

Mechanistic models have parameters with biological interpretations, such that these parameters can be estimated with data of a different type than the data of interest.

For example, if we are interested in a question that can be answered by knowing how the size of a population changes over time, then our data of interest is number versus time. A phenomenological model could be parameterized with data describing number versus time taken at a different location. On the other hand, a mechanistic model could be parameterized with data on the number of births versus time and the number of deaths versus time; this is a different type of data, and using it is only possible because the parameters have biological interpretations by virtue of the model being mechanistic. The essence of a mechanistic model is that it should explain why; to do so, however, it is necessary to give biological interpretations to the parameters.
This, then, gives rise to a test of whether a model is mechanistic or not: if it is possible to describe a different type of data that could be used to parameterize the model, then we can designate the model as mechanistic.

## Validation

In mathematical modelling we can test our model structure and parameterization by assessing the model's agreement with empirical observations. The most convincing models are parameterized and formulated completely independently of the validation data. It is possible to validate both mechanistic and phenomenological models. Example 1 is a description of a series of three experiments that I believe would be sufficient to validate the logistic growth model.

Example 1. The model is

$\frac{d N}{d t} = r N \left(1-\frac{N}{K}\right)$

which has the solution N(t) = f(t, r, K, $N_0$), where $N_0$ is the initial condition, N(0).

Experiment 1 (Parameterization I):
1. Put 6 mice in a cage, 3 males and 3 females and of varied, representative ages. (This is a sexually reproducing species. I want a low density, but not so few that I am worried about inbreeding depression.) A fixed amount of food is put in the cage every day.
2. Every time the mice produce offspring, remove the offspring and put them somewhere else (i.e., keep the number of mice constant at 6 throughout Experiment 1).
3. Have the experiment run for a while; record the total time, the number of offspring, and the number of the original 6 mice that died.

Experiment 2 (Parameterization II):
4. Put too many mice in the cage, but the same amount of food every day as for Experiment 1. Let the population decline to a constant number. This is K.
5. r is calculated from the results of Experiment 1 and K as (No. births – No. deaths)/(total time) = 6r(1 – 6/K).

Experiment 3 (Validation):
6. Put 6 mice in the cage and the same amount of food as before. This time keep the offspring in the cage, and produce the time series N(t) by recording the number of mice in the cage each day.
Compare the empirical observations for N(t) with the now fully parameterized equation for f(t, r, K, N(0)).

The Question. Defining that scheme for model parameterization and validation was done to provide context for the following question:

• When scientists talk about independent model parameterization and validation – what exactly does that mean? How independent is independent enough? How is independent defined in this context?

If I were asked this, I would say that the parameterization and the validation data should be different. In the logistic growth model example (above), the validation data are taken at different densities and under a different experimental set-up. However, consider this second example.

Example 2. Another way to parameterize and validate a model is to use the same data, but to use only part of the information. As an example, consider the parameterization of r (the net reproductive rate) for the equation

$\frac{\partial u}{\partial t} = D\frac{\partial^2 u}{\partial x^2} + r u$           (eqn 1)

The solution to Equation (1) is u(x,t), a probability density that describes how the population changes in space and time; another result is that the radius of the species range increases at a rate c = $\sqrt{4rD}$. To validate the model, I will estimate c from species range maps (see Figure 1). To estimate r, I will use data on the change in population density taken from a core area (this approach is suggested in Shigesada and Kawasaki (1997): Biological Invasions, pp. 36-41; see also Figure 1). To estimate D, I will use data on wolf dispersal taken from satellite collars.

Returning to the question: is this data, describing the density of wolves in the core area, independent of the species range maps used for validation? The species range maps, at any point in time, provide information on both the number of individuals and where these individuals are.
The table that I used for the model parameterization is recovered from the species range maps by ignoring the spatial component (see Figure 1).

Figure 1. The location of wolves at time 0 (red), time 1 (blue) and time 2 (green). The circles are used to estimate c, the rate of expansion of the radius of the wolves' home range, at t = 0, 1, 2. The population size at t = 0, 1, 2 is provided in the table. The core area is shown as the dashed line. Densities are calculated by dividing the number of wolves by the size of the core area. The reproductive rate is calculated as the slope of a regression of the density of wolves at time t versus the density at time t-1. For this example, the above table will only yield two data points, (3,5) and (5,9).

While the data for the parameterization of r and the validation data for estimating c seem quite related, the procedure outlined in Example 2 is still a strong test of Equation (1). Equation (1) makes some very strong assumptions, the strongest of which, in my opinion, is that the dispersal distance and the reproductive success of an individual are unrelated. If the assumptions of Equation (1) don't hold, then there is no guarantee that the model predictions will bear any resemblance to the validation data. Furthermore, the construction of the table makes use of the biological definition of r, in contrast to a fully phenomenological approach to parameterization, which would fit the equation u(x,t) to the data on the locations of the wolves to estimate r and D, and would then prohibit validation against this same data set.

So, what are the requirements for independent model parameterization and validation? Are the expectations different for mechanistic versus phenomenological models?

# Blogging for MPE 2013

On the Mathematics of Planet Earth (MPE) webpage there is a call for bloggers. This is a great initiative and one that I would love to see really take off.
There are already some good mathematical biology-related blogs out there, and the MPE initiative is likely to bring more attention to blogging around the topic of mathematical biology.

Here at Memorial University of Newfoundland, as part of MPE, we are proud to be hosting the AARMS Summer School on Dynamical Systems and Mathematical Biology. This summer school consists of 4 courses over 4 weeks, from July 15 to August 9, 2013. These courses can often be transferred for credit at the student's home institution and will be taught by leading experts in each of the focus areas. The city of St John's offers a vibrant downtown, urban parks and walkways, and stunning coastlines. More information to follow.
http://cms.math.ca/cmb/msc/53C21?fromjnl=cmb&jnl=CMB
Canadian Mathematical Society
www.cms.math.ca

Search results

Search: MSC category 53C21 (Methods of Riemannian geometry, including PDE methods; curvature restrictions [See also 58J60])

Results 1 - 5 of 5

1. CMB 2011 (vol 55 pp. 663)
Zhou, Chunqin
An Onofri-type Inequality on the Sphere with Two Conical Singularities
In this paper, we give a new proof of the Onofri-type inequality
\begin{equation*} \int_S e^{2u} \,ds^2 \leq 4\pi(\beta+1) \exp \biggl\{ \frac{1}{4\pi(\beta+1)} \int_S |\nabla u|^2 \,ds^2 + \frac{1}{2\pi(\beta+1)} \int_S u \,ds^2 \biggr\} \end{equation*}
on the sphere $S$ with Gaussian curvature $1$ and with conical singularities divisor $\mathcal A = \beta\cdot p_1 + \beta \cdot p_2$ for $\beta\in (-1,0)$; here $p_1$ and $p_2$ are antipodal.
Categories: 53C21, 35J61, 53A30

2. CMB 2004 (vol 47 pp. 624)
Zhang, Xi
A Compactness Theorem for Yang-Mills Connections
In this paper, we consider Yang-Mills connections on a vector bundle $E$ over a compact Riemannian manifold $M$ of dimension $m > 4$, and we show that any set of Yang-Mills connections with uniformly bounded $L^{\frac{m}{2}}$-norm of curvature is compact in the $C^{\infty}$ topology.
Keywords: Yang-Mills connection, vector bundle, gauge transformation
Categories: 58E20, 53C21

3. CMB 2002 (vol 45 pp. 232)
Ji, Min; Shen, Zhongmin
On Strongly Convex Indicatrices in Minkowski Geometry
The geometry of indicatrices is the foundation of Minkowski geometry. A strongly convex indicatrix in a vector space is a strongly convex hypersurface. It admits a Riemannian metric and has a distinguished invariant---(Cartan) torsion. We prove the existence of non-trivial strongly convex indicatrices with vanishing mean torsion and discuss the relationship between the mean torsion and the Riemannian curvature tensor for indicatrices of Randers type.
Categories: 46B20, 53C21, 53A55, 52A20, 53B40, 53A35

4. CMB 2001 (vol 44 pp. 376)
Zhang, Xi
A Note on $p$-Harmonic $1$-Forms on Complete Manifolds
In this paper we prove that there is no nontrivial $L^{q}$-integrable $p$-harmonic $1$-form on a complete manifold with nonnegative Ricci curvature.
Keywords: $p$-harmonic, $1$-form, complete manifold, Sobolev inequality
Categories: 58E20, 53C21

5. CMB 1999 (vol 42 pp. 214)
Paeng, Seong-Hun; Yun, Jong-Gug
Conjugate Radius and Sphere Theorem
Bessa [Be] proved that for given $n$ and $i_0$, there exists an $\varepsilon(n,i_0)>0$ depending on $n$, $i_0$ such that if $M$ admits a metric $g$ satisfying $\mathrm{Ric}_{(M,g)} \ge n-1$, $\mathrm{inj}_{(M,g)} \ge i_0 > 0$ and $\mathrm{diam}_{(M,g)} \ge \pi-\varepsilon$, then $M$ is diffeomorphic to the standard sphere. In this note, we improve this result by replacing the lower bound on the injectivity radius with a lower bound on the conjugate radius.
Keywords: Ricci curvature, conjugate radius
Categories: 53C20, 53C21

© Canadian Mathematical Society, 2015 : https://cms.math.ca/
http://mathematica.stackexchange.com/users/142/wreach?tab=activity&sort=all&page=5
WReach
Reputation: 24,293

- Feb 2: revised Can I sum or mulitply 2 columns in a list of associations (added 316 characters in body)
- Feb 2: answered Can I sum or mulitply 2 columns in a list of associations
- Feb 2: comment on Can I sum or mulitply 2 columns in a list of associations
- Feb 1: comment on How to efficiently read data from any part inside huge file
- Feb 1: comment on Why Mathematica chooses bracket for function arguments over parenthesis? ("+1 This very example is discussed in The Mathematica Book.")
- Jan 31: reviewed Leave Open on How to remove scroll bars from embedded CDF
- Jan 31: reviewed Reopen on Mathematica script file in Linux - how to export PDF file and close kernel?
- Jan 31: answered Why doesn't my expression evaluate at the subkernel?
- Jan 31: answered Input a square matrix and return a list of the matrix entries read by antidiagonals
- Jan 31: answered Evaluation control with TagSet
- Jan 29: comment on Does RegularExpression support "(?R)"? ("@2012rcampion Yes, good idea. I have updated my answer to give an example. I suppose this post has outgrown its original billing as just an extended comment... :)")
- Jan 29: revised Does RegularExpression support "(?R)"? (added the section on named patterns)
- Jan 29: answered Does RegularExpression support "(?R)"?
- Jan 28: answered What is primary, function computation or function application?
- Jan 28: awarded Nice Answer
- Jan 27: comment on How to filter a dataset using Select and a parameter ("@GordonCoale Done.")
- Jan 27: revised How to filter a dataset using Select and a parameter (by request, added mention of the prefix and postfix forms of #[sel])
- Jan 27: answered How to filter a dataset using Select and a parameter
- Jan 24: awarded Nice Answer
- Jan 24: awarded Nice Question
https://www.physicsforums.com/threads/to-help-reduce-the-severity-of-accidents-average-net-force.895271/
# To help reduce the severity of accidents [Average net force]

1. Nov 30, 2016

### LionLieOn

1. The problem statement, all variables and given/known data

To help reduce the severity of accidents, an engineering company designs large plastic barrels filled with antifreeze that can be placed in front of bridge supports. In a simple test, a 1200 kg car moving at 20 m/s [W] crashes into several barrels. The car slows down to 8.0 m/s [W] in 0.40 s.

a) Find the average net force acting on the car during the collision.

I was comparing answers with a friend of mine and his final answer was 36000 N [W] and I got 36000 N [E]. I think I'm right, since 36000 N [W] doesn't make any sense; the car would have had to be driving [E] for that to happen. Then again, I'm unsure, so I would like to have a second opinion/correction.

2. Relevant equations

3. The attempt at a solution

Check the attachment.

#### Attached Files:

• ANTIFREEZE.jpg (File size: 24.6 KB, Views: 29)

2. Nov 30, 2016

### andrevdh

Newton's 2nd law tells us that the force and the acceleration will be in the same direction.

3. Nov 30, 2016

### LionLieOn

Ahh! Ok, I forgot about that, thank you! Other than the direction, is my work correct? Anything I should correct or look over?

4. Nov 30, 2016

### andrevdh

No, you seem to be on the right track :)

5. Nov 30, 2016

### andrevdh

Looking at the provided solution, the velocities before and after were inserted as positive values because + was chosen as W. The fact that the impulse came out negative tells us that it is in the opposite, or easterly, direction, which is also the direction of the force; the time interval does not affect the direction, since it is a scalar quantity.

6. Nov 30, 2016

### LionLieOn

So to correct my mistake in the attached file above, I should be putting 36000 N [W] instead of 36000 N [E]?

7. Nov 30, 2016

### andrevdh

No, the attached file is correct. Fnet comes out negative, so it is in the opposite direction of the chosen + west.

8.
Nov 30, 2016

### LionLieOn

Ahh ok! Thank you so much for your help and for checking over my work. Much appreciated!
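For completeness, the arithmetic behind the agreed answer can be checked in a couple of lines (a quick sketch; the thread's attached solution is not reproduced here):

```python
# Impulse-momentum theorem with West chosen as the positive direction,
# matching the sign convention used in the attached solution.
m = 1200.0    # kg
v_i = 20.0    # m/s [W]
v_f = 8.0     # m/s [W]
dt = 0.40     # s

a = (v_f - v_i) / dt   # average acceleration, W-positive convention
F_net = m * a          # average net force; a negative value means East
# F_net comes out to about -36000 N, i.e. 36000 N [E]
```

The negative sign under the W-positive convention is exactly the point settled in the thread: the force points East, opposite to the car's motion.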
https://zbmath.org/?q=ai%3Asavage.leonard-jimmie+ai%3Aolkin.ingram
# zbMATH — the first resource for mathematics

Inequalities on the probability content of convex regions for elliptically contoured distributions. (English) Zbl 0253.60021
Proc. 6th Berkeley Sympos. math. Statist. Probab., Univ. Calif. 1970, 2, 241-265 (1972).

##### MSC:
60E05 Probability distributions: general theory
62E99 Statistical distribution theory
https://barneyshi.me/2021/10/09/binary-tree-cams/
# Leetcode 968 - Binary tree cams

Note:

• DP with three states per node: dp[0] = this node has a cam; dp[1] = this node is covered because a child has a cam; dp[2] = this node is covered by its parent's cam.
• Assume both left and right subtrees exist.
• dp[0]: it doesn't matter whether the children have cams or not, so take the min over all states of each child.
• dp[1]: consider left and right separately. If the left child has a cam, pick the min of right[1] and right[0]; symmetrically for the right child.
• dp[2]: when the current node's parent has a cam, each subtree has two allowed situations: a cam of its own, or covered by its children.

Question: You are given the root of a binary tree. We install cameras on the tree nodes, where each camera at a node can monitor its parent, itself, and its immediate children. Return the minimum number of cameras needed to monitor all nodes of the tree.

Code:
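The post's original code block did not survive extraction; below is my own reconstruction of the three-state DP described in the notes, using `math.inf` to mark impossible states (an empty subtree cannot hold a camera, but is covered for free):

```python
import math

class TreeNode:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def min_camera_cover(root):
    # For each node, dp returns a triple of minimum camera counts:
    #   s[0]: this node holds a camera
    #   s[1]: no camera here, but a child's camera covers this node
    #   s[2]: no camera here, the parent's camera will cover this node
    def dp(node):
        if node is None:
            # An empty subtree cannot hold a camera, and needs no coverage.
            return (math.inf, 0, 0)
        l, r = dp(node.left), dp(node.right)
        # State 0: camera here, so the children may be in any state.
        has_cam = 1 + min(l) + min(r)
        # State 1: covered by a child, so at least one child holds a camera.
        by_child = min(l[0] + min(r[0], r[1]),
                       min(l[0], l[1]) + r[0])
        # State 2: covered by the parent; children must not rely on this node.
        by_parent = min(l[0], l[1]) + min(r[0], r[1])
        return (has_cam, by_child, by_parent)

    s = dp(root)
    return min(s[0], s[1])   # the root has no parent to rely on
```

For the LeetCode example tree [0,0,null,0,0] (a root whose left child has two leaves), this returns 1: a single camera on the middle node covers everything.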
https://arxiv.org/abs/1002.1975
astro-ph.HE

# Title: Measurement of the energy spectrum of cosmic rays above 10^18 eV using the Pierre Auger Observatory

Abstract: We report a measurement of the flux of cosmic rays with unprecedented precision and statistics using the Pierre Auger Observatory. Based on fluorescence observations in coincidence with at least one surface detector we derive a spectrum for energies above 10^18 eV. We also update the previously published energy spectrum obtained with the surface detector array. The two spectra are combined addressing the systematic uncertainties and, in particular, the influence of the energy resolution on the spectral shape. The spectrum can be described by a broken power law E^-gamma with index gamma = 3.3 below the ankle, which is measured at log10(E/eV) = 18.6. Above the ankle the spectrum is described by a power law with index 2.6 followed by a flux suppression, above about log10(E/eV) = 19.5, detected with high statistical significance.

Comments: Accepted for publication in Physics Letters B
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
Journal reference: Phys.Lett.B685:239-246,2010
DOI: 10.1016/j.physletb.2010.02.013
Cite as: arXiv:1002.1975 [astro-ph.HE] (or arXiv:1002.1975v1 [astro-ph.HE] for this version)

## Submission history
From: Fabian Schüssler
[v1] Tue, 9 Feb 2010 21:37:17 GMT (239kb)
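As a reading aid, the spectral shape quoted in the abstract can be sketched as a toy function. This is my own illustration, not the paper's parameterization: the normalization J0 and suppression width w are arbitrary guesses, and the paper's actual fit treats the suppression differently.

```python
def toy_flux(log10_E, J0=1.0, w=0.3):
    """Toy broken power law with the abstract's spectral indices.

    J0 is an arbitrary normalization and w an illustrative suppression
    width (in decades of energy); neither is a fitted value from the paper.
    """
    ankle, g1, g2, cutoff = 18.6, 3.3, 2.6, 19.5
    # index 3.3 below the ankle, 2.6 above; continuous at the ankle
    gamma = g1 if log10_E < ankle else g2
    flux = J0 * 10.0 ** (-gamma * (log10_E - ankle))
    # crude smooth flux suppression above about 10^19.5 eV
    return flux / (1.0 + 10.0 ** ((log10_E - cutoff) / w))
```

Below the ankle the toy flux falls as E^-3.3, above it as E^-2.6, and beyond log10(E/eV) ≈ 19.5 the suppression factor makes it fall faster than either power law, mimicking the shape the abstract describes.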
http://wavefrontshaping.net/index.php/top/57-community/tutorials/spatial-lights-modulators-slms/147-how-to-calibrate-linearly-aligned-nematic-liquid-crystal-based-slms
## How to calibrate linearly aligned nematic liquid crystal based SLMs

### Characterization of phase modulation

The goal is to determine the correspondence between the pixel values of an image sent to the SLM (encoded in 8 bits, i.e. 256 values for the devices tested) and the resulting phase modulation of the light reflected off of the SLM. Typical phase-only SLMs are composed of vertically aligned liquid crystals (Figure 1). At rest, light propagating along the $$z$$-axis is sensitive to the anisotropy of the liquid crystals. As a result, the material is birefringent: the light sees two optical indices, an ordinary index $$n_o$$ along the $$x$$-polarization and an extraordinary index $$n_e$$ along the $$y$$-polarization. The index $$n_y$$ seen by $$y$$-polarized light can be set to $$n_o$$ by applying a voltage on the liquid crystal plate, whereas the index $$n_x$$ for the $$x$$-polarization stays unchanged (see Figure 1). In the general case, the index for the $$y$$-polarization, $$n_y(V)$$, depends on the input voltage and can be tuned between $$n_o$$ and $$n_e$$:

$$n_x = n_o$$

$$n_y(V) \in \left[n_o,n_e\right]$$

Figure 1. Vertically aligned nematic liquid crystals at rest (left) show birefringence. It disappears upon application of a strong voltage (right).

We use this feature to characterize the phase modulation by observing the interference between the two paths propagating with $$x$$ and $$y$$-polarizations. The principle of the setup is presented in Figure 2. A laser beam, polarized at 45 degrees with respect to the orientation of the liquid crystals, is sent onto the SLM through a beam splitter (BS).

Figure 2. Schematic representation of the optical setup.

The optical field arriving at the SLM is identical for the two polarizations:

$$E_x = {E_0}/{\sqrt{2}}$$

$$E_y = {E_0}/{\sqrt{2}}$$

with $$E_x$$ and $$E_y$$ the optical fields on the $$x$$- and $$y$$-polarizations and $$E_0$$ the initial optical field amplitude.
All field values are given up to a global phase. Due to the birefringence, after reflection off the SLM the two polarization components have accumulated two different phases:

$$E_{x}^\text{SLM} = E_0e^{ik2en_x} = E_0e^{i\phi_0}$$

$$E_{y}^\text{SLM} = E_0e^{ik2en_y(V)} = E_0e^{i\left(\Delta\phi(V)+\phi_0\right)}$$

with $$e$$ the thickness of the SLM. The phase terms are given by:

$$\phi_0 = k2en_o$$

$$\Delta\phi(V) = k2e\left[n_y(V) - n_o\right]$$

After reflection on the beam splitter, the 45 degree polarization is selected by the analyzer. The two components of the light thus interfere, and the resulting intensity on the photodetector is modulated according to:

$$I \propto \cos^2\left(\frac{\Delta\phi(V)}{2}\right) \propto \frac{1}{2}\left[1 + \cos\left(\Delta\phi(V)\right)\right]$$

We experimentally measure the intensity as a function of the pixel value sent to the SLM. Figure 3 shows experimental data for SLMs from the three major brands: a Meadowlark Optics High Resolution 1920 x 1152, a Hamamatsu X13138-07 and a Holoeye PLUTO. One can directly extract the pixel value corresponding to a $$2\pi$$ phase modulation by measuring the period of the oscillating signal.

Typically, when using a device designed for a wide range of wavelengths, or when using an input wavelength different from the one the SLM is calibrated for, the factory calibration may not provide a linear response, and one needs to establish the lookup table of pixel values oneself. To do so, one can fit the experimental curve with the cosine function, which is somewhat less direct than the technique relying on the measurement of spatial fringes. To illustrate linear phase calibration, we use the Meadowlark SLM, which allows changing the onboard calibration by modifying the correspondence between the 8-bit pixel values and the voltage values sent to the pixels, encoded on 10 bits.
To calibrate the device and obtain a linear relation between the pixel value and a phase modulation between 0 and $$2\pi$$, we find, for each desired phase value, the closest point in the calibration data. We show in Figure 3.c and Figure 3.d the experimental characterization of the phase modulation before and after calibration.

Figure 3. Intensity (arbitrary units) of the interference signal as a function of the pixel value (0 to 255).

### Characterization of phase fluctuations

Due to the electrical addressing scheme (which can be digital for Holoeye devices or analog for the Hamamatsu or BNS/Meadowlark ones) and the need for regular depolarization of the liquid crystals to avoid permanent damage, rapid phase fluctuations occur [2]. (Note that it has been proposed to lower the temperature of the SLM to reduce the phase fluctuations by increasing the viscosity of the liquid crystals. One should take into account that this will likely decrease the achievable refresh rate too.) Depending on the brand and the model, these fluctuations usually occur between 100 Hz and 400 Hz. Unlike the approach with Young slits/holes, the technique presented here to characterize the phase modulation has the advantage of needing only a photodetector, which allows fast detection without requiring expensive instruments.

Let's write the phase modulation as a static term depending only on the applied voltage plus a zero-mean fluctuation term:

$$\Delta\phi\left(V,t\right) = \Delta\phi_0\left(V\right)+\delta\phi(V,t)$$

For $$\Delta\phi_0\left(V\right) = \pm \pi/2 \pmod{2\pi}$$, the measured intensity reads:

$$I\left(V,t\right) = I_0\left(V\right)+\delta I(V,t)$$

$$\delta I(V,t)/I_0\left(V\right) \approx \mp \delta\phi(V,t)/\Delta\phi_0\left(V\right)$$

Using the calibration from the previous step, we identify the voltage values corresponding to $$\Delta\phi_0\left(V\right) = \pm \pi/2 \pmod{2\pi}$$, set the whole SLM to this value, and measure the temporal fluctuations of the intensity.
We show in Figure 4 the fluctuations measured for the three tested devices. Please note that one should not compare these results quantitatively, as they were obtained under different conditions and with devices not in their most optimized configurations. In particular, the Holoeye device, which shows large fluctuations due to its digital addressing scheme, was a demo unit that had not been updated with the latest firmware, and we did not use the addressing configuration designed to reduce phase fluctuations (by decreasing the bit depth).

Figure 4. Intensity fluctuations (in arbitrary units) of the interference signal as a function of time (in seconds).

[1] A. Farré et al., Opt. Express, 13 (2011)

[2] A. Lizana et al., Opt. Express, 16 (2008)
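As an illustration of the calibration step above, the cosine-fringe inversion and the "closest point in the calibration data" lookup-table construction can be sketched in a few lines of numpy. The function names are mine, not part of the tutorial, and the sketch assumes a background-subtracted signal spanning at most one fringe period (arccos only resolves phases in [0, pi]; branch unwrapping would be needed for a full 2 pi ramp):

```python
import numpy as np

def phase_response(gray, intensity):
    """Recover the phase modulation per gray level from the fringe signal
    I ~ (1/2)[1 + cos(dphi)] by normalising and inverting the cosine.
    Valid for a monotonic phase ramp spanning at most [0, pi]."""
    i_n = (intensity - intensity.min()) / (intensity.max() - intensity.min())
    return np.arccos(np.clip(2 * i_n - 1, -1, 1))

def lookup_table(gray, phase, levels=256):
    """For each desired phase value, pick the closest calibrated gray level
    (the 'closest point in the calibration data' step described above)."""
    targets = np.linspace(0, phase.max(), levels)
    return gray[np.abs(phase[None, :] - targets[:, None]).argmin(axis=1)]
```

On synthetic data with a linear phase ramp, `phase_response` recovers the ramp exactly and `lookup_table` reduces to the identity mapping, which is a convenient sanity check before applying it to measured fringes.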
https://codegolf.stackexchange.com/questions/201489/solving-the-water-bucket-riddle
# Context

The water buckets riddle or the water jugs riddle is a simple riddle that can be enunciated in a rather general form as:

Given $$n > 0$$ positive integers $$a_1, a_2, \cdots, a_n$$ representing the capacities (in units of volume) of $$n$$ buckets and a positive integer $$t \leq \max(a_1, a_2, \cdots, a_n)$$, find a sequence of "moves" that places $$t$$ units of volume of water in some bucket $$i$$.

To define the valid "moves", let $$c_1, c_2, \cdots, c_n$$ represent the units of volume of water each bucket $$i$$ contains, with $$0 \leq c_i \leq a_i \;\forall i$$. Then, at each step you can do any of the following:

• fill a bucket $$i$$ entirely, setting $$c_i = a_i$$
• empty a bucket $$i$$ entirely, setting $$c_i = 0$$
• pour a bucket $$i$$ over a bucket $$j$$, setting $$\begin{cases} c_i = \max(0, c_i - (a_j - c_j)) \\ c_j = \min(a_j, c_j + c_i) \end{cases}$$ i.e. you pour bucket $$i$$ over bucket $$j$$ until bucket $$i$$ becomes empty or bucket $$j$$ becomes full, whichever happens first (or both, if both things happen at the same time).

Given the bucket capacities and the target measurement, your task is to output a minimal sequence of movements that places $$t$$ units of volume of water in one of the buckets.

# Input

The capacities of the buckets are positive integers. You can assume these capacities are unique and ordered. You can take them in a number of reasonable formats, including but not limited to:

• a list of integers
• arguments to a function

Additionally, you will take a positive integer t that is not larger than the maximum number present in the input capacity list. You can assume the input parameters specify a solvable instance of the water buckets problem.

# Output

Your program/function/etc should output the shortest sequence of moves that places t units of volume of water in one of the buckets. If several such sequences exist, you can output any one of them.
Please note that some moves commute, and that also introduces multiple solutions to some problems. Your program can print the sequence or return it as a list of moves or any other sensible thing. To identify the moves and the buckets, you can choose any encoding suitable for your needs, as long as it is consistent across test cases and completely unambiguous. A suggestion: use three letters to identify the three moves, like "E" for emptying a bucket, "F" for filling and "P" for pouring, and use numbers to identify the buckets (0-indexed, 1-indexed, or using their total capacities, for example). With this encoding, to identify a move you always need one letter and a number; in case of a "pouring" move, a second integer is also needed. It is up to you to pick, consistently, whether "P n m" means n was poured over m or m was poured over n.

# Test cases

We use the encoding above, and "P n m" means "pour bucket n over bucket m".

[1, 2, 3, 4], 1 -> ['F 1']
[1, 2, 3, 4], 2 -> ['F 2']
[1, 2, 3, 4], 3 -> ['F 3']
[1, 2, 3, 4], 4 -> ['F 4']
[13, 17], 1 -> ['F 13', 'P 13 17', 'F 13', 'P 13 17', 'E 17', 'P 13 17', 'F 13', 'P 13 17', 'E 17', 'P 13 17', 'F 13', 'P 13 17']
[4, 6], 2 -> ['F 6', 'P 6 4']
[1, 4, 6], 2 -> ['F 6', 'P 6 4']
[3, 4, 6], 2 -> ['F 6', 'P 6 4']
[4, 5, 6], 2 -> ['F 6', 'P 6 4']
[4, 6, 7], 2 -> ['F 6', 'P 6 4']
[1, 3, 5], 2 -> ['F 3', 'P 3 1']
[7, 9], 4 -> ['F 9', 'P 9 7', 'E 7', 'P 9 7', 'F 9', 'P 9 7']
[8, 9, 13], 6 -> ['F 9', 'P 9 8', 'P 8 13', 'P 9 8', 'F 13', 'P 13 8']
[8, 9, 13], 7 -> ['F 8', 'P 8 9', 'F 8', 'P 8 9']
[8, 9, 11], 10 -> ['F 8', 'P 8 9', 'F 11', 'P 11 9']
[8, 9, 12], 6 -> ['F 9', 'P 9 12', 'F 9', 'P 9 12']
[8, 9, 12], 5 -> ['F 8', 'P 8 12', 'F 9', 'P 9 12']
[23, 37, 41], 7 -> ['F 41', 'P 41 23', 'P 41 37', 'P 23 41', 'F 41', 'P 41 23', 'P 41 37', 'F 41', 'P 41 37', 'E 37', 'P 41 37', 'E 37', 'P 41 37', 'F 41', 'P 41 37']
[23, 31, 37, 41], 7 -> ['F 23', 'P 23 37', 'F 31', 'P 31 37', 'P 31 41', 'P 37 31', 'P 31 41']

You can check a vanilla Python reference
implementation here

• Can I take n as input? – Surculose Sputum Mar 22 '20 at 4:56
• @SurculoseSputum in languages such as C and related ones, where you need such a parameter to know when the array ends, yes. If you are coding in python, then I'm afraid not. – RGS Mar 22 '20 at 6:46

# Python 3, 243 239 bytes

-4 bytes thanks to @JonathanFrech!

def f(a,t,k=1):
 while g(a,t,[0]*len(a),[],k):k+=1
def g(a,t,c,p,k):n=len(a);k,i=k//n,k%n;k,j=k//n,k%n;exec(["c[i]=0","c[i]=a[i]","x=min(a[j]-c[j],c[i]);c[i]-=x;c[j]+=x"][k%3]);p+=k%3,i,j;return g(a,t,c,p,k//3)if k>2else{t}-{*c}or print(p)

Try it online!

Input: a list of bucket capacities a, and the target t. Output: to stdout, a list of integers, where each triplet m,i,j denotes a move: m is the move type (0,1,2 corresponds to empty, fill, pour), and i,j are the bucket indices (0-indexed). For move types empty and fill, the 2nd bucket is ignored.

How: Each sequence of moves p can be encoded by an integer k using modular arithmetic. g is a recursive function that checks if the sequence p encoded by k will result in the target t. If so, that sequence is printed to stdout, and a Falsy value is returned.

### Python 3.8 (pre-release), 279 249 bytes

Whopping -30 thanks to @ovs's double product trick!

from itertools import*
P=product
a,t=eval(input())
for r in count():
 for p in P(*tee(P((0,1,2),R:=range(n:=len(a)),R),r)):
  c=[0]*n;[exec(["c[i]=0","c[i]=a[i]","x=min(a[j]-c[j],c[i]);c[i]-=x;c[j]+=x"][m])for m,i,j in p]
  if t in c:print(p);exit()

Try it online!

Slow, ugly and can probably be golfed more. Input: from stdin, a,t where a is the list of bucket capacities, and t is the goal. Output: to stdout, the optimal list of moves, where each move has the form (m, i, j):

• m is the move type 0,1,2 (empty, fill, pour)
• i and j are the target buckets' indices (0-indexed).
• The moves empty and fill only affect the 1st bucket i, and thus the irrelevant 2nd bucket j is set to an arbitrary value.
• The move (2,i,j) pours the water from bucket i to bucket j.

How: This program simply tries all possible sequences of moves, in order of length. To generate all sequences of r moves:

• product((0,1,2), range(n), range(n)) generates a list of all possible moves by performing the Cartesian product between all move types 0,1,2, all values of i and all values of j.
• tee(product(...), r) clones the move list into r lists.
• product(*tee(...)) takes the Cartesian product of the r move lists, which results in all possible sequences of r moves.

To perform a sequence of moves p:

• c[i]=0, c[i]=a[i], and x=min(a[j]-c[j],c[i]);c[i]-=x;c[j]+=x respectively handle emptying, filling, and pouring between buckets i and j. Note that pouring can handle i==j, which results in a no-op.
• exec(["handle E", "handle F", "handle P"][m]) selects the correct handler for move type m.

• The inner for-loop is a little shorter with another product: for p in product([x for x in product((0,1,2),R,R)if x[1]^x[2]],repeat=r): – ovs Mar 22 '20 at 6:57
• And you don't really need to check if i and j are distinct: for p in product(product((0,1,2),R,R),repeat=r) (this is even slower than before) – ovs Mar 22 '20 at 7:07
• I found by hand that the correct k was 2358833, which when reached produces the correct sequence. – Surculose Sputum Mar 22 '20 at 15:13
• Ok, thanks for taking your time to verify this! Cool solution :D – RGS Mar 22 '20 at 15:51
• p+=[(k%3,i,j)]; ~> p+=(k%3,i,j),;. – Jonathan Frech Mar 22 '20 at 23:42

# JavaScript (ES6), 197 191 188 bytes

Takes input as (a)(t). Returns a string of concatenated operations Fx, Ex or Px>y, with 0-indexed buckets.

a=>F=(t,N)=>(g=(b,n,o)=>[...b,0].some((V,i,x)=>(x=a[i])-V^t?n&&b.some((v,j,[...B])=>(s='F',B[j]=i-j?x?(v+=V)-(B[s=`P${i}>`,i]=x<v?x:v):a[s='E',j]:0,g(B,n-1,[o]+s+j))):O=o))(a,N)?O:F(t,-~N)

Try it online!

The above test link inserts spaces between operations for readability. Some longer test cases have been removed.
# Javascript, 364 bytes I'm sure this can be golfed much better pretty easily. S=t=>G=>{L=t.length;r=(f,n,a,i,e=0)=>{if(0==n)return f.indexOf(G)>=0&&[];a=(A,B,C,D)=>(X=f.slice(),X[A]=B,X[C]=D,X);for(;e<L;++e){for(K of[0,t[e]])if(F=r(a(e,K),n-1))return[[+!K,e]].concat(F);for(i=0;i<L;++i)if(i!=e&&(O=r(a(e,Math.max(0,f[e]-t[i]+f[i]),i,Math.min(t[i],f[e]+f[i])),n-1)))return[[2,e,i]].concat(O)}};for(T=1;!(E=r(Array(L).fill(0),T));++T);return E} Returns an array of arrays. Each array is in the format [n, i] if n=0 (fill) or n=1 (empty), or [2, i, j] for "pour bucket i into bucket j". The buckets are always given as indices, starting from 0. Uses the same basic search method as the other answers. Unminified version: var S = (capacities, target) => { let n = capacities.length; var waterBuckets = (levels, maxSteps) => { if (maxSteps == 0) return levels.indexOf(target) >= 0 ? [] : false; let getCopy = () => levels.slice(); for (let i = 0; i < n; ++i) { for (let level of [0, capacities[i]]) { let levelsCopy = getCopy(); levelsCopy[i] = level; let res = waterBuckets(levelsCopy, maxSteps - 1); if (res) return [[+!level, i]].concat(res); } for (let j = 0; j < n; ++j) { if (i === j) continue; let levelsCopy = getCopy(); levelsCopy[i] = Math.max(0, levels[i] - capacities[j] + levels[j]); levelsCopy[j] = Math.min(capacities[j], levels[i] + levels[j]); let res = waterBuckets(levelsCopy, maxSteps - 1); if (res) return [[2, i, j]].concat(res); } } }; for (let s = 1;; ++s) { let r = waterBuckets(Array(n).fill(0), s); if (r) return r; } }; # Charcoal, 112 104 bytes ⊞υEθ⁰Fυ¿¬ⅈ¿№…ιLθη⪫✂ιLθLι¹ «FLθF²⊞υ⁺Eι⎇⁼κν∧λ§θκμ⟦§EFλκ⟧FLθFLθ¿⁻λκ«≔⌊⟦§ιλ⁻§θκ§ικ⟧ε⊞υ⁺Eι⎇⁼κν⁺με⎇⁼λν⁻μεμ⟦Pλκ Try it online! Link is to verbose version of code. Could save 6 bytes by including the final bucket state in the output. The code spends most of its time emptying or pouring empty buckets, so don't try it on the harder problems. Explanation: ⊞υEθ⁰ Start off with all buckets empty and no operations so far. 
(Each entry comprises n buckets plus an unspecified number of operations.)

Fυ¿¬ⅈ Perform a breadth-first search until a solution has been printed. (This relies on t being positive, as that means that at least one step is necessary.)

¿№…ιLθη⪫✂ιLθLι¹ « If one of the first n buckets contains t then this is a solution, in which case output it, otherwise:

FLθF² Loop over each bucket and whether it's being emptied or filled.

⊞υ⁺Eι⎇⁼κν∧λ§θκμ⟦§EFλκ⟧ Calculate the new bucket values and append the result with the additional operation.

FLθFLθ¿⁻λκ« Loop over each pair of distinct buckets.

≔⌊⟦§ιλ⁻§θκ§ικ⟧ε Calculate the amount that can be poured from one bucket to the other.

⊞υ⁺Eι⎇⁼κν⁺με⎇⁼λν⁻μεμ⟦Pλκ Calculate the new bucket values and append the result with the additional operation.

Adding an extra ¿ε to the beginning of this block does make the code a bit faster, but it wasn't significant enough to be able to solve the harder problems on TIO.
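All the answers above guarantee minimality by trying move sequences in order of length. As an ungolfed reference (my own sketch, not the "vanilla Python reference implementation" linked in the question), the same guarantee falls out of a breadth-first search over bucket states:

```python
from collections import deque

def solve(caps, target):
    """Breadth-first search over bucket states: returns one shortest move
    list leaving `target` units in some bucket. Moves are ('F', i),
    ('E', i) and ('P', i, j), 0-indexed, with ('P', i, j) pouring i into j."""
    n = len(caps)
    start = (0,) * n
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        state, moves = queue.popleft()
        if target in state:
            return moves
        candidates = []
        for i in range(n):
            filled = list(state); filled[i] = caps[i]
            candidates.append((tuple(filled), ('F', i)))
            emptied = list(state); emptied[i] = 0
            candidates.append((tuple(emptied), ('E', i)))
            for j in range(n):
                if i != j:
                    # pour i into j: until i is empty or j is full
                    amount = min(state[i], caps[j] - state[j])
                    poured = list(state)
                    poured[i] -= amount
                    poured[j] += amount
                    candidates.append((tuple(poured), ('P', i, j)))
        for nxt, move in candidates:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, moves + [move]))
    return None  # unreachable for the solvable inputs the challenge guarantees
```

Because BFS visits states in order of distance from the all-empty start, the first state containing the target yields a provably minimal sequence, matching the lengths in the test cases.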
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-4th-edition/chapter-16-questions-and-problems-page-772/16-44
## Chemistry (4th Edition)

(a) $[LiOH] = 6.02 \times 10^{- 5}M$
(b) $[Ba(OH)_2] = 3.01 \times 10^{-5}M$

1. Find $[OH^-]$:

pH + pOH = 14
9.78 + pOH = 14
pOH = 4.22

$[OH^-] = 10^{- 4.22}$
$[OH^-] = 6.02 \times 10^{- 5}M$

(a) Since LiOH is a strong base:

$[LiOH] = [OH^-] = 6.02 \times 10^{- 5}M$

(b) Since $Ba(OH)_2$ is a strong base with 2 $OH^-$:

$[OH^-] = [Ba(OH)_2] * 2$
$6.02 \times 10^{- 5} = [Ba(OH)_2] * 2$
$\frac{ 6.02 \times 10^{- 5}}{2} = [Ba(OH)_2]$
$[Ba(OH)_2] = 3.01 \times 10^{-5}M$
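The arithmetic can be verified with a few lines of Python (a quick check, assuming 25 °C so that pH + pOH = 14):

```python
# Numeric check of the solution above.
pH = 9.78
pOH = 14 - pH              # = 4.22
OH = 10 ** -pOH            # [OH-] in mol/L, ~6.02e-5 M
LiOH = OH                  # LiOH gives one OH- per formula unit
BaOH2 = OH / 2             # Ba(OH)2 gives two OH- per formula unit
print(OH, LiOH, BaOH2)
```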
http://mathhelpforum.com/calculus/186073-sequence-proof.html
# Math Help - Sequence Proof

1. ## Sequence Proof

Prove that if x_n --> infinity, then the sequence given by x_n / x_n+1 is convergent. I am supposed to use the definition of divergence, so I know there is a number n for which all x_n > M for any M chosen.

2. ## Re: Sequence Proof

Originally Posted by veronicak5678
Prove that if x_n --> infinity, then the sequence given by x_n / x_n+1 is convergent. I am supposed to use the definition of divergence, so I know there is a number n for which all x_n > M for any M chosen.

I think you are using an incorrect definition of divergence. To support what I have written in red: define $x_n$ as $x_{2k}=0, x_{2k+1}=1 \, \forall k \in \mathbb{Z}^+$. Then $x_n$ is divergent since it's not convergent. But if we take $M=5$ (say), we can't find any $n$ which would satisfy $x_n>M$. Also, can you please explicitly write what definition of $x_n \rightarrow \infty$ you are using?

3. ## Re: Sequence Proof

Originally Posted by veronicak5678
Prove that if x_n --> infinity, then the sequence given by x_n / x_n+1 is convergent.

Consider the sequence $x_n = \left\{ \begin{gathered} n,\text{ n odd} \hfill \\ 2^n ,\text{ n even} \hfill \\ \end{gathered} \right.$. Is it true that $\left( {x_n } \right) \to \infty ~?$ What can you say about $\frac{{x_n }}{{x_{n + 1} }}~?$

4. ## Re: Sequence Proof

Thanks for the answers. The definition of a limit at infinity from my book is as follows: sn → ∞ as n → ∞ provided that for every number M there is an integer N so that sn ≥ M whenever n ≥ N.

5. ## Re: Sequence Proof

Originally Posted by veronicak5678
The definition of a limit at infinity from my book is as follows: sn → ∞ as n → ∞ provided that for every number M there is an integer N so that sn ≥ M whenever n ≥ N.

That definition is standard. Now apply it to the example I gave you. It will show that $(x_n)\to\infty$. BUT does $\frac{{x_n }}{{x_{n + 1} }}$ converge as claimed?

6. ## Re: Sequence Proof

Not sure I follow...
We could use the same definition to show that x_n -> inf implies x_n+1 -> inf, but how does that prove that x_n / x_n+1 converges?

7. ## Re: Sequence Proof

Originally Posted by veronicak5678
Not sure I follow... We could use the same definition to show that x_n-> inf implies x_n+1-> inf, but how does that prove that x_n / x_n+1 converges?

Which of these two do you mean: $A~\left( {\frac{{x_n }}{{x_{n + 1} }}} \right)\text{ or }B~\left( {\frac{{x_n }}{{x_n + 1}}} \right)$ If it is A then the statement is false. If it is B then the proof is trivial.

8. ## Re: Sequence Proof

Sorry for being unclear. I mean B.

9. ## Re: Sequence Proof

Originally Posted by veronicak5678
Sorry for being unclear. I mean B.

If $(x_n)\to\infty$ then $\left( {\frac{1}{{x_n+1 }}} \right) \to 0$. Note that $\left( {\frac{{x_n }}{{x_n + 1}}} \right) = \left( {1 - \frac{1}{{x_n + 1}}} \right)$

10. ## Re: Sequence Proof

I understand that, but how can I prove this using just the definition of divergence, and no theorems about limits?

11. ## Re: Sequence Proof

Originally Posted by veronicak5678
I understand that, but how can I prove this using just the definition of divergence, and no theorems about limits?

If $\varepsilon > 0$ use the divergence definition to make $x_n + 1 > \frac{1}{{1 + \varepsilon }}$. Having to do it this way is just busy work. So that is as far as I am willing to take it.

12. ## Re: Sequence Proof

OK. Thanks for helping me.
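For readers finishing the exercise, the ε–N argument that post 11 leaves as "busy work" can be written out as follows (a sketch using the book's definition, taking M = 1/ε):

```latex
Since $\frac{x_n}{x_n + 1} = 1 - \frac{1}{x_n + 1}$, it suffices to show that
$\frac{1}{x_n + 1} \to 0$. Let $\varepsilon > 0$. Since $x_n \to \infty$,
applying the definition with $M = \tfrac{1}{\varepsilon}$ gives an integer $N$
such that $x_n \ge \tfrac{1}{\varepsilon} > 0$ whenever $n \ge N$. Then for
all $n \ge N$,
\[
\left| \frac{x_n}{x_n + 1} - 1 \right| = \frac{1}{x_n + 1}
  < \frac{1}{x_n} \le \varepsilon ,
\]
so $\frac{x_n}{x_n + 1} \to 1$, and in particular the sequence converges.
```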
https://web2.0calc.com/questions/what-is-the-coefficient-of
# What is the coefficient of...

What is the coefficient of $$a^2b^2$$ in $$(a+b)^4\left(c+\dfrac{1}{c}\right)^6$$?

Jz1234  Aug 26, 2017

#1

Let's look at the expansion of the second term, first. Note that, by the binomial expansion, the fourth term of this expansion will be: 20 c^3 (1/c)^3 = 20. And note that every other term in this expansion either contains "c" or "1/c" - or both - to some power(s). And in the expansion of (a + b)^4, the a^2b^2 term will be the third term....and its coefficient will be 6. So.....the a^2b^2 term in the product of both expansions will be 20 * 6 = 120.

CPhill  Aug 26, 2017
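CPhill's reasoning (C(6,3) = 20 for the constant term of (c + 1/c)^6, times C(4,2) = 6 for the a^2b^2 term of (a + b)^4) can be double-checked with a short script:

```python
from math import comb

const_term = comb(6, 3)        # coefficient of c^3 * (1/c)^3 in (c + 1/c)^6
a2b2_term = comb(4, 2)         # coefficient of a^2 b^2 in (a + b)^4
print(const_term * a2b2_term)  # 120
```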
https://fontstyle.org/willow-font.php
# Willow Font

Download the Willow Font font for free. Willow Font is a font / typeface offered for free. Please note that if the license offered here is non-commercial, you have to obtain a commercial-use license / permit from the original author.

Font Name: Willow Font
Author: Graphix Line Studio
Website: https://www.creativefabrica.com/designer/graphix-line-studio/ref/239238/
License: Free for personal use / Demo
Commercial License Website: https://www.creativefabrica.com/product/willow-5/ref/239238/

Font Description: Willow is a delicate, elegant, and flowing handwritten font. It has beautiful and well-balanced characters and, as a result, it matches a wide pool of designs. Willow is PUA encoded, which means you can access all of the glyphs and swashes with ease!
http://www.antipope.org/charlie/blog-static/2012/07/writing-a-novel-in-scrivener-e.html
Writing a novel in Scrivener: lessons learned

Some of you probably know about Scrivener, the writer's tool from Literature and Latte. (If you don't, the short explanation is that it isn't a word processor, it's an integrated development environment for books. It's cross-platform (although initially developed for Mac OS X—versions for Windows and Linux are available, and it's being ported to iOS and Android), modestly priced, and has more features than you can wave a bundle of sticks at, mostly oriented around managing, tagging, editing, and reorganizing collections of information including rich text files.)

I've used it before on several novels, notably ones where the plot got so gnarly and tangled up that I badly needed a tool for refactoring plot strands, but the novel I've finished, "Neptune's Brood", is the first one that was written from start to finish in Scrivener, because I have a long-standing prejudice against entrusting all my data to a proprietary application, however good it might be. That Scrivener was good enough to drag me reluctantly in is probably newsworthy in and of itself.

First of all, I should note what Scrivener can't do for an author. Many publishers these days have moved to electronic document workflow during production. Manuscripts are submitted in a standard format (they've settled on the hideous, proprietary, obsolete binary format of the Microsoft Word 97-2003 .doc file, simply because that's what most people use). Copy edits are applied to the .doc file using Word's change tracking feature with annotations in place of post-it notes. If you want to process copy edits in this brave new world, you need a word processor, because Scrivener's view of a book is so radically different from Microsoft Word's single monolithic file that there's no way to reconcile the two and add Word-style change tracking to Scrivener.
Luckily LibreOffice, a free fork of OpenOffice, is (a) free, (b) under active development again, and (c) can chow down on basic Word documents with change tracking and notes without throwing up most of the time. (The copy-edited manuscript of a novel does not contain Word BASIC macros, complex tables, or illustrations: it's just a stream of text with paragraph styles.)

So Scrivener stops supporting publisher workflow once you have submitted the manuscript. And arguably it stops an hour before then, because figuring out how to modify the output format generated by the Scrivener "Compile" menu option is a black art ... I found it easier to slurp the resulting Word document into LibreOffice for final tidying up and reformatting before I submitted it. Scrivener doesn't support Word's paragraph style mechanism as far as I can tell; it simply emits styled text. So its output isn't a direct product you can feed into an unattended turnkey pre-press package: you'll still have to pay someone to drive InDesign for you.

Other weaknesses: Scrivener 2.3 on OSX is a big program. There's an introductory tutorial project, and a video. And then there's a 300+ page manual in PDF. Why PDF, when Scrivener emits some of the cleanest epub files I've ever seen? And why doesn't it work with the OSX built-in help system? Who knows. Let's just say that learning Scrivener's ins and outs is an ongoing task. (For example, I was most of the way through this novel, the (counts) sixth that I've used Scrivener for to some extent, when I discovered that the Edit->Writing Tools submenu now contains a character name generator, as well as the obvious stuff like the spelling and grammar checker controls.)

General usage: In Scrivener, if you're writing a book you start by creating a new project, just as you would if you were starting to write a program using an IDE like XCode.
The project is a hierarchical outline-based container for your research notes (including PDFs and images and web pages, which you can slurp in as files or direct from the web by entering URLs) and the small files, or "scrivenings", that constitute the work in progress. Scrivenings are basically RTF files (more accurately, Apple's RTFD—a derivative format that allows the inclusion of additional sub-elements like images), or folders containing scrivenings. A chapter is basically a folder, and the scenes in the chapter are scrivenings, and you get a collapsible, hierarchical view. You also get the ability to edit scrivenings, either individually, or by multi-selecting a bunch of them and seeing them as a continuous scroll of text: most convenient if you want to edit scenes 1, 2, 4, 6, and 8 in a chapter but not 3, 5, and 7, for example.

That's treating it as a scene-based word processor. Scrivener provides other tools for looking at your data. There's a cork-board, in which you see each scrivening as an index card, and in which metadata (notes, defined keywords, all sorts of stuff) is transparently visible. Or you can display it as an outline in a classical outline processor mode. The general effect is to make it easy to search, organize, and see views of your data, and trivially easy to restructure a hierarchical document as long as you've broken it down properly into chapters containing sub-documents.

Other tools: in some ways the most useful feature it provides for a jobbing author is the Project->Show Project Targets option. You get a floating window with progress bars (updated in real time) containing (a) your progress towards the target word count for the entire document, and (b) your progress towards your target word count for the day. As motivational goads go, this one is invaluable when you're slogging through the difficult middle of a book, and the ending seems as far away as the beginning.
(Seriously, measuring your progress is one of the under-stated but vital tasks associated with any job: good luck getting Microsoft Word to help you with that.)

Again: Scrivener projects can get quite large, and are structured internally as a folder hierarchy. Scrivener has an option to package them up as a zip archive (which can be emailed around, or re-imported later), and also to back them up to a private folder. Mine is linked to my (private) Dropbox account, for obvious reasons: it gives me version-controlled offsite backups. It's not quite git or subversion, but if you want those, there's a "sync with external folder" option which looks like, yes, you could use it to sync with a heavyweight configuration management system. (Note: in my opinion, novels don't need heavyweight version control—they virtually never fork and you seldom have as many as two authors. Straight linear versioning is fine for 95% of cases.)

Stuff I don't use: there's a full-screen mode for folks who like to write without distractions. They are not me, and I just don't use it. The keyword tagging ... I can see types of work it would be useful for, but it's less obviously useful for fiction. Being able to define the status of a scrivening as planned, first-draft, or final is obviously useful to some people: but that's not how I work.

Finally, there's the question of how you get your data out of the application. You can do it piecemeal: Scrivener is happy to export individual scrivenings or files. Or you can do it wholesale, via the File->Compile menu.
Which takes the assembled scrivenings, filters them in accordance with whatever crazy criteria you set ("exclude odd-numbered scrivenings in even-numbered chapters" looks like it ought to be possible), applies transformations to them (Scrivener understands MultiMarkDown, so if the idea of proprietary RTF brings you out in cooties you can write in MMD text files), and generates a finished document in one of the target output formats—Word .doc is one, but it can also produce RTF, PDF, ODT, Final Draft, and ebook formats—epub or Mobi for Kindle. What's more, if you used MultiMarkDown it can emit LaTeX; given its footnote and endnote support, it may be a very useful tool for preparing academic papers that need a final production pass in LaTeX (a horrible format to work with by hand, in my opinion).

This isn't a formal review: it's just a comment to the effect that Scrivener works pretty much from the moment of conception to the hour before final submission of a finished manuscript. It doesn't completely replace the word processor in my workflow, but it relegates it to a markup and proofing tool rather than being a central element of the process of creating a book. And that's about as major a change as the author's job has undergone since WYSIWYG word processing came along in the late 80s (actually the late 70s if you were a researcher at Xerox PARC, but the rest of us had to wait).

My suspicion is that if this sort of tool spreads, the long-term result may be better structured novels with fewer dangling plot threads and internal inconsistencies. But time will tell.

PS: Comments are still switched off due to spammers. If you want to discuss it, the Google Groups Antipope storm refuge is open for new members and I'll start a topic thread there.

1: "Why PDF, when Scrivener emits some of the cleanest epub files I've ever seen?"
If I were to hazard a guess, I'd say it was because everyone and their dog already has a PDF reader installed on their desktop, whereas most people seem not to read ebooks at their desktop.

2: What's more, if you used MultiMarkDown it can emit LaTeX; given its footnote and endnote support, it may be a very useful tool for preparing academic papers that need a final production pass in LaTeX (a horrible format to work with by hand, in my opinion).

I'm not quite sure what you mean here - is it that LaTeX is a horrible format? Once you get past the learning curve, it's pretty much like Word Perfect, or html markup code. And while actually inserting the codes for integrals, fractions and whatnot can look pretty ugly at first, after a while it just sorta becomes, um, transparent, I guess. When I see something like $\sum\limits_{j=n}^{2n-1}(2j+1)=3n^2$, I just "see" it as it appears in a pdf output file. I'm only harping on this because I had to learn LaTeX in grad school at the ripe old age of 45; at the time it was a nightmare and I was wishing mightily for a text editor that was a little more friendly to noobs.

3: Many thanks Charlie. Seems like Literature and Latte could do with putting some focus on the end of the workflow - editing, revision, etc., particularly if javascript enabled eBooks find favour. Overloading the comments/changes elements so that paid reviewers could still use Word, and so that a user could pipe back errors they spot in an epub, would simplify some of what you are currently doing.

4: Looking at it cynically, the answer is "no": far more people write books than succeed in convincing a publisher to buy them and subject them to the post-acquisition editing/processing steps. So support for editing/revision features in Scrivener would only be attractive to a minority of customers. (I've also discussed it in email with Keith, who has looked at the problem and concluded that it's very hard.
The trouble is, Scrivener's internal model of a document -- as a folder hierarchy containing lots of little files -- is very different from the monolithic Word file that editors work on, and pulling a Word file back into Scrivener as a basis for diff/merge ops would require some extremely fancy section-level detection and matching, at a minimum. Integrating Word change tracking into Scrivener is therefore not practical.)

5: It's not just novel authors though. The real money is in business reports - and they certainly need the review and revision stage to work well. And the reason for mentioning overloading the track changes functionality is to tag the change with its location in the original document - bypassing the MS mess (eg delete the next five characters from position character no. 48261). Although, we could always make the editing/review stage a web/cloud based activity ....

6: Scrivener is pretty much useless for writing business reports. It's just not designed for that kind of project. It's a novel-writing tool, first and foremost. Why would the author of a highly successful niche product want to tune their product to do something entirely different?

7: I'm (pleasantly) surprised to hear that LibreOffice can mostly support track changes and comments; the last time I looked, consensus seemed to be that those were the lock-in feature to Word for many markets. In theory, Scrivener (and everything else) ought to be able to decode the Word 2007 XML formats (i.e. .docx), but for some reason the various industries are all very reluctant to move past Word 97. Making Scrivener work with Word 97 .doc files is basically the same problem as VCS magic-merge, with the added fun of dealing with Microsoft standards/code. It should be possible in theory, but I sure wouldn't want to do it. I do a bunch of short-story-sized writing of interconnected projects, generally 10k-20k words, with in-lined images and tables.
I frequently look at Scrivener as a way to escape the morass that is Word, but track changes and word comments are nearly impossible to remove from the process, so I've always shied away. Looking at the current version, I see that they've added SimpleNote integration, which is very tempting - the ability to get useful work done with the ipad instead of the mbp is pretty tempting. Have you tried this yet?

8: It's a damn good comic writing tool also.

9: A good overview of Scrivener but I think you blew it with your comment that "Scrivener is pretty much useless for writing business reports". I use Scrivener for business writing all the time. The ability to treat ideas as individual chunks of text to be moved and recombined at will is just as useful for business reports as it is for fiction writing. Being able to write "inside out" by starting with a small detail and expanding from there is a great way to get text flowing. The Research folders are incredibly useful for capturing and organizing source materials, related corporate documents, etc. There are even features that make it easy to transcribe recorded interview notes. Scrivener's shortcomings in the business context show up at the same place as you describe in your publishing workflow -- when it's time to get the draft into the hands of collaborators for review and additional input.

10: I personally get sick of the track comments. Back in the day, I actually saw a bug that filled up the entire hard drive with an infinite loop, due to a mess with the comments. I'm happier when people change font colors or do other simple annotations, especially if people are working across platforms. The simplest way I've seen to deal with comments is to enable the line numbering function, and then to simply comment on lines in a separate file. I'm still dipping my toe in Scrivener, and I do want to see how it deals with something other than a novel, too.
The counter-argument to using Scrivener on business reports is that it's not particularly designed for them. That's why you're supposed to buy the uber-expensive Windows Office, instead of a $40 novel-writing application. You'll save money on the increased functionality, right?

11: You'll save money on the increased functionality, right?

You will if you make your saving roll for sanity, but the odds are against you. Paragraph formatting was so buggy and difficult to build a mental model of in early versions of Word that after 2000 I just gave up on it completely.

12: The windows version of Scrivener is much less powerful, but even so beats the pants off MS Word etc. The dark art of sorting out the output is indeed dark, but once done need not be done again.

13: My wife just finished her first novel in Scrivener. She loves it. But her editor insisted on edits in MS Word. It nearly drove her crazy. She'll try the edits in Libre Office next time, but would happily pay more for Scrivener if it could deal with comments and change tracking better.

14: "And why doesn't it work with the OSX built-in help system?"

Perhaps because the developer realizes Apple's help system is one of the worst ever deployed on a computer platform. Search is dependent upon how the author has indexed the help, so you cannot depend upon finding something you're searching for - even if it is *in* the help. PDF is much more useful and versatile. -ccs

15: @13: > But her editor insisted on edits in MS Word.

The first book I sold, the publisher insisted that I send it to them printed on paper. They then paid someone to key it all back in to their compositor by hand. I guess I should have been grateful they didn't require I write it on parchment with a quill pen. For extra points, I had to airfreight the manuscript to their London group, which cost about US$100. By the time I sold the second book, the publisher (a different one) would accept a computer file.
And not only that, they were savvy enough that they'd take it online instead of mailed on a diskette! Fortunately I had a Compuserve account so I could talk to them, those being the days of the big time-share systems. Like AOLers later, they thought CIS was "online." They insisted I send it to them all preformatted with WordPerfect, and were astonished and confused when I told them I didn't own a copy, nor was I willing to go out and pay several hundred dollars for one. After several days of negotiation, they agreed to take the ordinary 80-column ASCII files my text editor put out. For all I know, they printed it out and hired someone to key it all in to WordPerfect...

I ran into something similar selling to a couple of magazines, including PC Tech Journal, which had an absolute policy of only accepting submissions in the current WordPerfect format, no exceptions. I eventually sold most of that stuff to Computer Shopper (not the same one Charlie wrote for) during the Stan Veit era. Computer Shopper's editors were apparently computer geeks instead of journalism grads, and they'd probably have cheerfully accepted files on hard-sector floppies in EBCDIC in some arcane word processor format... Word, though... blech!

16: Do any publishers make available a sample file of an edited document?

17: Not to the best of my knowledge. To see what one looks like you need to look for specialist sources that discuss the business of writing.

18: Thanks, Charlie. Maybe not a problem, but it could make it hard for anybody wanting to make tools for writers to use. I don't know what Open/Libre Office can cope with. Without reliable samples, how can anyone usefully test?

19: My brother wrote his dissertation in Scrivener, and is the one who recommended it to me. So it's clearly useful for more than just novels. One other thing worth mentioning is that Scrivener has a very reasonable trial mechanism. You get to try it (at full functionality) for a total of 30 non-consecutive days.
So if you don't use it for a month in the middle of your trial, you still have trial time remaining when you come back to it. I think that's rather nice of them.

20: Actually, WYSIWYG word processing was available, at the very latest, by 1984 with the arrival of the Macintosh, if not earlier with some other computer(s) of which I'm not aware. It certainly was not "the late '80s."

21: The word processing was available, but it wasn't until a bit later that laser printers came out and made desktop publishing affordable for small businesses, and shortly after that for everyone else. So, yes, late 80s.

22: Nice feedback. I don't use Scrivener, but many of the features listed here are also the ones Emacs Org-mode (http://orgmode.org) is focusing on. I wonder if there are Org-mode users outside of the geek/developers community. Thanks!

23: They also do a special extended trial for NaNoWriMo in November. And I think I ought to get my "winner's" cut-price deal. You get 20% off for just trying, 50% for hitting the magic 50,000 words. Charlie's style of working doesn't fit well with NaNoWriMo, but what I've picked up from trying amounts to two main things. 1: I know I can manage to organise things to have the time to write a lot of text. 2: Despite the efforts of my schoolteachers to constrain and denigrate my abilities—the essay for exams tends to kill talent (what can you write in a half-hour?)—I know I can produce a coherent 50,000 word story. Of course, it doesn't have to be good writing, but knowing you can handle something that size is a considerable boost. It makes a hundred thousand words a less intimidating target. Oh, 20% VAT... Ouch.

24: Sounds a bit like Final Draft, the de facto standard for doing screenplays. While you can do screenplays in Word or other word processors, Final Draft has all the fiddly "rules" for screenplays built in, like what things need to be in all caps, properly annotating dialog broken across page boundaries, etc.
And it has a really nice Courier font built in. (Screenplays have to be in Courier. Gotta emulate a Remington typewriter.) One advantage for screenwriters is that while initial submissions are done on paper, you can turn in the electronic files after purchase, because everyone in the industry can handle them.

25: Curious: how big of a factor is word count in writing your novel? I mean, do you start writing and keep writing until it's done? Or do you track your word count as you go along and trim/inflate pieces accordingly?

26: @21: The word processing was available [by 1984], but it wasn't until a bit later that laser printers came out and made desktop publishing affordable for small businesses, and shortly after that for everyone else. So, yes, late 80s.

I first met a WYSIWYG word processor around 1980 IIRC. Dedicated hardware that only did one thing, 8" floppies for storage, green screen, daisy wheel printer. Cost an arm, leg, kidney and first born. We didn't buy it, thank \${deity_of_choice}. Laser printers weren't generally available until the late 80s but existed before that. <smug>I've got a draft copy of the blue Smalltalk 80 book straight off Adele Goldberg's laser printer from 1982, WYSIWYG formatted on her D machine.</smug>

This page contains a single entry by Charlie Stross published on July 11, 2012 1:13 PM.
https://www.physicsforums.com/threads/on-verge-to-greatness.7140/
# On verge to greatness

1. Oct 13, 2003

### ATCG

On verge of greatness

I am excited to say that just today I have discovered a new notation for triple integrals! Currently I am working on how and what it applies to, and I will keep you posted on further developments.

Last edited: Oct 14, 2003

2. Oct 14, 2003

### ATCG

Update

I have an additional discovery: the Hyperdimensional coordinate theory. The notation has also been revised. If you would like to learn more about either the new notation or the Theory, please private message me. (Please notify me of any spelling errors. I am writing this update on my Pocket PC.)

3. Oct 15, 2003

### MathematicalPhysicist

What is Hyperdimensional coordinate theory?

4. Oct 15, 2003

### HallsofIvy

quantum gravity- he's pulling your chain. It's a lot easier to announce a great discovery than to actually make one!

5. Oct 15, 2003

### ATCG

I am not joking. You can't make that assumption unless you actually know what you are talking about.

6. Oct 15, 2003

### MathematicalPhysicist

At least the name is innovative (-:
https://socratic.org/questions/how-do-you-solve-for-x-in-ax-b-cx
# How do you solve for x in ax + b = cx?

We have that

$a x + b = c x$

$a x - c x = - b$

$x \cdot \left(a - c\right) = - b$

Assuming that $a \ne c$, we have that

$x = - \frac{b}{a - c}$
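The same closed form can be checked with a small Python helper (a sketch; the function name `solve_linear` is mine, not from the original answer):

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c*x for x.

    Rearranging gives x*(a - c) = -b, so x = -b / (a - c),
    which only makes sense when a != c.
    """
    if a == c:
        raise ValueError("a == c: the equation has no unique solution")
    return -b / (a - c)
```

For instance, with a = 2, b = 6, c = 5 the solution is x = 2, and indeed 2*2 + 6 = 5*2.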
https://lara.epfl.ch/w/cc11/homework_06
# Homework 6

Due Monday, November 21st, 10:10am.

A unit expression is defined by the following grammar, where are unit expressions themselves and is a base unit:

You may use to denote the set of the unit types

For readability, we use the syntactic sugar and

## Problem a)

Give the type rules for the arithmetic operations . Assume that the trigonometric functions take as argument radians, which are dimensionless (since they are defined as the ratio of arc length to radius).

## Problem b)

The unit expressions as defined above are strings, so that e.g. however physically these units match. Define a procedure such that your type rules type check expressions whenever they are correct according to physics.

## Problem c)

Determine the type of in the following code fragment. The values in angle brackets give the unit type expressions of the variables, and Pi is the usual constant in the Scala math library. Give the full type derivation tree using your rules from a) and b), i.e. the tree that infers the types of the variables.

```scala
val x: <m> = 800
val y: <m> = 6378
val g: <m/(s*s)> = 9.8
val R = x + y
val w = sqrt(g/R)
val T = (2 * Pi) / w
```

## Problem d)

Consider the following function that computes the Coulomb force, and suppose for now that the compiler can parse the type expressions:

```scala
def coulomb(<(N*m)/(C*C)> k, <C> q1, <C> q2, <m> r): <N> {
  return (k * q1 * q2) / (r*r)
}
```

The derived types are and . Does the code type check? Justify your answer.

## Problem e)

Suppose you want to use the unit feet in addition to the SI units above. How can you extend your type system to accommodate for this? Explain your answer. (Assume that 1m = 3.28084 feet.)

## Problem f)

For this problem we consider expressions whose free variables have only basic types, i.e. , , but no compound types like . Assume also that the only operators within expressions are , but that they can be combined in arbitrary ways and be arbitrarily large, as long as they type check.
Consider such an expression that computes a value of some type , where may be compound. Prove the following theorem for this case. Theorem: Suppose that the result type does not contain a base unit (or, equivalently, this base type occurs only with the overall exponent 0, as ). If we multiply all variables of type by a fixed numerical constant , the final result of the expression does not change. For example, suppose that we want to compute the center of mass of two objects in 2D, using the following expression: Here are the masses of the objects in , and are their distances in (meters) from some reference point. Because the final result has type , by the above theorem, we can multiply and by, e.g., ; the value is the same as the previous value. As another example, suppose we estimate a gravity on Moon by measuring a free fall from height , then compute the time needed for a free fall from another height , computing as The result type of this expression is (seconds). Therefore, if we multiply both and by a constant, say 100, the value of this expression does not change. Let us re-state the theorem by introducing a little more notation. If is an expression (which may contain variables and possibly some other variables), and are expressions, let denote the result of replacing in in with expression , and so on, with replaced by . The theorem above then says that, under the stated assumptions, for every constant and all variables whose type exponent in is zero, the following equality holds for all values of all variables (including but also variables other than ): Prove the above theorem. Note: We assume that neither nor divide by zero or take square roots of negative numbers. You do not need to worry about such errors in this problem. Hint: You may need to prove a more general theorem and derive the desired theorem as a consequence.
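One possible approach to problem b), offered here as a sketch rather than the intended solution, is to normalize every unit expression to a map from base units to rational exponents; two unit expressions are then physically equal exactly when their normal forms coincide. The nested-tuple encoding of the grammar and the function names below are my own assumptions:

```python
from fractions import Fraction

def normalize(unit):
    """Reduce a unit expression to a map {base_unit: exponent}.

    Unit expressions are encoded as nested tuples:
      ("base", "m"), ("*", u1, u2), ("/", u1, u2), ("sqrt", u)
    Exponents are kept as Fractions so sqrt stays exact.
    """
    op = unit[0]
    if op == "base":
        return {unit[1]: Fraction(1)}
    if op in ("*", "/"):
        left, right = normalize(unit[1]), normalize(unit[2])
        sign = 1 if op == "*" else -1
        out = dict(left)
        for base, exp in right.items():
            out[base] = out.get(base, Fraction(0)) + sign * exp
        # drop bases whose exponents cancelled to zero
        return {base: exp for base, exp in out.items() if exp != 0}
    if op == "sqrt":
        return {base: exp / 2 for base, exp in normalize(unit[1]).items()}
    raise ValueError(f"unknown unit constructor {op!r}")

def units_equal(u1, u2):
    """True iff the two unit expressions denote the same physical unit."""
    return normalize(u1) == normalize(u2)
```

With this, the type checker can accept `m*s` where `s*m` is expected, since both normalize to `{m: 1, s: 1}`.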
https://www.physicsforums.com/threads/fourier-transforms.220591/
# Fourier transforms

1. Mar 8, 2008

### Niles

[SOLVED] Fourier transforms

1. The problem statement, all variables and given/known data

Please take a look at the following: I have shown that the Fourier transform of f(t) = exp(-|t|) is $$\sqrt{\frac{2}{\pi}}\cdot \frac{1}{1+\omega ^2}$$. Now I am having trouble with question A. I know what the inverse Fourier transform is given by, but we have an odd function (exp(iwt)) multiplied by an even function (the above). This results in an odd function, so how do I rewrite it?

2. Mar 8, 2008

### Niles

Ok, I read something about the inverse Fourier transform of an even function, and it adds up now. Problem solved.
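The claimed transform can be sanity-checked numerically. Under the symmetric convention implied by the sqrt(2/pi) factor, F(w) = (1/sqrt(2*pi)) * integral of f(t) exp(-iwt) dt, and since exp(-|t|) is even, the transform is real and reduces to a cosine integral on [0, inf). A Python sketch (the function names are mine):

```python
import numpy as np
from scipy.integrate import quad

def ft_exp_abs(w):
    """Fourier transform of exp(-|t|) at frequency w, computed numerically.

    Because exp(-|t|) is even, the transform is real and equals
    2/sqrt(2*pi) * integral_0^inf exp(-t) * cos(w*t) dt.
    """
    integral, _ = quad(lambda t: np.exp(-t) * np.cos(w * t), 0, np.inf)
    return 2 * integral / np.sqrt(2 * np.pi)

def ft_closed_form(w):
    """The closed form sqrt(2/pi) / (1 + w^2) derived in the post."""
    return np.sqrt(2 / np.pi) / (1 + w ** 2)
```

Evaluating both at a few frequencies shows agreement to quadrature tolerance, confirming the derived expression under this convention.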
https://idoc.ias.u-psud.fr/biblio?f%5Bauthor%5D=1693
Biblio

Found 1 result. Filter: Author is Hansteen, V. H.

2003

Dynamics of solar coronal loops. I. Condensation in cool loops and its effect on transition region lines, Astronomy and Astrophysics, Dec 2003, Volume 411, p. 605-613
https://fr.maplesoft.com/support/help/maple/view.aspx?path=Student/MultivariateCalculus/SurfaceArea&L=F
Student[MultivariateCalculus] - Maple Programming Help

Student[MultivariateCalculus]

SurfaceArea - return the surface area defined by a function

Calling Sequence

SurfaceArea(f(x,y), x=a..b, y=c..d, opts)

SurfaceArea(surf, sopts)

Parameters

f(x, y) - algebraic expression

x, y - name; independent variables

a, b, c, d - real constants; limits of integration

surf - one of the possible regions of integration as described below

opts - (optional) equation(s) of the form option=value where option is one of areaoptions, edgeoptions, functionoptions, output, regionoptions, showarea, showedges, showfunction, or showregion; specify output options

sopts - (optional) equation(s) of the form option=value where option is one of areaoptions, output, regionoptions, showarea, or showregion

Description

• The SurfaceArea(f(x,y), x=a..b, y=c..d) calling sequence returns the value of the surface area defined by z=f(x,y) over the specified region, if this can be exactly determined, or the integral representing this value. In the latter case, use the evalf function to obtain a numerical approximation to the exact result.

• The output parameter can be used to select whether this command returns a value, the integral representing the surface area, or a plot showing the surface and region over which the area is to be considered.

• If the output=plot option is specified, the function f(x,y) is plotted over a region slightly larger than that specified by the second and third parameters. The part of the surface over that specified region is colored differently from the part outside that region. Vertical lines are drawn at the corners of the region.

• The opts and sopts arguments can contain any or some of the following equations that set output options. The valid options are described in the parameters section.
– areaoptions = list

Specifies the plot options for plotting the portion of the surface that lies over the selected region. For more information on plotting options, see plot3d/options.

– edgeoptions = list

Specifies the plot options for plotting the lines marking the corners of the region over which the surface area is to be computed. For more information on plotting options, see plot3d/options.

– functionoptions = list

Specifies the plot options for plotting the function $f\left(x,y\right)$. For more information on plotting options, see plot3d/options.

– output = value, plot, or integral

This option controls the return value of the function.

• output = value specifies that the value of the surface area is returned. Plot options are ignored if output = value. The default is output = value.

• output = plot specifies that a plot is displayed showing a graph of the expression, the specified region, and the portion of the surface over that region.

• output = integral specifies that the inert form of the surface area integral is returned. For this selection only, the endpoints of the integration ranges can be arbitrary algebraic expressions.

– regionoptions = list

Specifies the plot options for the region over which the area of the surface is being considered. For more information on plotting options, see plot3d/options.

– showarea = true or false

Specifies whether the area of the surface over the selected region is plotted (distinctly from the surface). The default is true.

– showedges = true or false

Determines whether the lines marking the region corners are plotted. The default is true.

– showfunction = true or false

Determines whether the function is plotted. When true, the function is plotted over a region slightly larger than the region over which the area is being considered. The default is true.

– showregion = true or false

Determines whether the region is plotted. The default is true.
• The SurfaceArea(surf) calling sequence returns the value of the surface area defined by surf, if this can be exactly determined, or the integral representing this value.

• Specify the surface surf using unevaluated function calls. The possible surfaces are Box, Sphere, and Surface.

– Box(r1, r2, r3)

Each $r_i$ must have type algebraic..algebraic. These represent the sides of the box. The surface integral is taken over each face of the box.

– Sphere(center, radius)

The first parameter of Sphere, center, must have type 'Vector'(3, algebraic). The second parameter, radius, must have type algebraic. These represent the center and radius of the sphere, respectively. If a coordinate system attribute is specified on center, the center is interpreted in this coordinate system.

– Surface(v, range, coordinate_system)

The first argument, $v$, must have type 'Vector'(3, algebraic). The second argument, range, can be:

• [name1, name2] = region(arguments), where region is any two-dimensional region that Student[MultivariateCalculus][MultiInt] accepts: Circle, Ellipse, Rectangle, Region, Sector, or Triangle.

• name1=range1, name2=range2, which explicitly specifies the ranges for the two parameters.

– caption = anything

A caption for the plot. The default caption is constructed from the parameters and the command options. caption = "" disables the default caption. For more information about specifying a caption, see plot/typesetting.

– title = anything

A title for the plot. The default title is constructed from the parameters and the command options. title = "" disables the default title. For more information about specifying a title, see plot/typesetting.

• For information on how to change the default colors, see the Student[SetColors] help page.
Examples

> with(Student[MultivariateCalculus]):

> SurfaceArea(x^2+y, x=a..b, y=c..d, output=integral)

$\int_{c}^{d}\int_{a}^{b}\sqrt{4\,x^{2}+2}\;dx\;dy$  (1)

> SurfaceArea(x^2+y, x=0..1, y=0..1)

$\frac{\sqrt{6}}{2}+\frac{\mathrm{arcsinh}\left(\sqrt{2}\right)}{2}$  (2)

> SurfaceArea(x^2+y, x=0..1, y=0..1, output=plot, functionoptions=[transparency=0.8])

> SurfaceArea(x^2+y, x=0..1, y=0..1, output=plot)

> SurfaceArea(Box(1..2, 2..3, 3..4))

$6$  (3)

> SurfaceArea(Box(1..2, 3..5, 6..9), output=plot)

> SurfaceArea(Sphere(<a,b,c>, r))

$4\,r^{2}\,\pi$  (4)

> SurfaceArea(Sphere(<0,0,0>, 3), output=plot)

> SurfaceArea(Surface(<s,t,1>, s=0..1, t=0..s))

$\frac{1}{2}$  (5)

> SurfaceArea(Surface(<s,t,1>, [s,t]=Triangle(<0,0>, <1,0>, <0,1>)))

$\frac{1}{2}$  (6)

> SurfaceArea(Surface(<s,t,1+s^2*t^2>, [s,t]=Sector(Ellipse(<0,0>, 2, 1, 0), -Pi/4, Pi/4)), output=plot)
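As a cross-check of result (2) above — this Python sketch is mine, not part of the Maple help page — the surface area of $z = x^2 + y$ over the unit square reduces to a one-dimensional integral, because the integrand $\sqrt{1 + z_x^2 + z_y^2} = \sqrt{2 + 4x^2}$ has no $y$-dependence:

```python
import math

def surface_area_x2_plus_y(n=100_000):
    """Area of z = x^2 + y over [0,1] x [0,1].  The surface-area integrand
    sqrt(1 + z_x^2 + z_y^2) = sqrt(2 + 4*x^2) does not depend on y, so the
    double integral collapses to a single trapezoid-rule integral in x."""
    f = lambda x: math.sqrt(2.0 + 4.0 * x * x)
    h = 1.0 / n
    return h * (0.5 * (f(0.0) + f(1.0)) + sum(f(k * h) for k in range(1, n)))

# Maple's closed form, result (2) above: sqrt(6)/2 + arcsinh(sqrt(2))/2
exact = math.sqrt(6.0) / 2.0 + math.asinh(math.sqrt(2.0)) / 2.0
assert abs(surface_area_x2_plus_y() - exact) < 1e-8
```

Both agree to within numerical error at roughly 1.7979, which matches evalf applied to the exact Maple result.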
http://meetings.aps.org/Meeting/MAR09/Event/93959
### Session B26: Focus Session: Computational Nanoscience II: Mechanics, Dynamics, and Assembly

11:15 AM–2:15 PM, Monday, March 16, 2009

Room: 328

Chair: Dennis Rapaport, Bar-Ilan University

Abstract ID: BAPS.2009.MAR.B26.9

### Abstract: B26.00009 : Atomistic Simulations of Hydrodynamic and Interaction Forces on Functionalized Silica Nanoparticles

1:15 PM–1:27 PM

#### Authors:

J. Matthew D. Lane (Sandia National Labs)

Ahmed E. Ismail (Sandia National Labs)

Michael Chandross (Sandia National Labs)

Christian D. Lorenz (King's College London)

Gary S. Grest (Sandia National Labs)

It is often desired to prevent the flocculation and phase separation of nanoparticles in solution. This can be accomplished either by manipulating the solvent or by tailoring the surface chemistry of the nanoparticles through functionalization with a monolayer of oligomer chains. Since it is not known how these functionalized coatings affect the interactions between nanoparticles and with the surrounding solvent, we present results from a series of molecular dynamics simulations of polyethylene oxide (PEO) coated silica nanoparticles of varying size (5 to 20 nm diameter) in water. For a single nanoparticle we determined the Stokes drag on the nanoparticle as it moves through the solvent and as it approaches a wall. Due to hydrodynamic interactions there are large finite size effects which we estimate by varying the size of the simulation cell. We also determined both solvent-mediated (velocity-independent) and lubrication (velocity-dependent) forces between two nanoparticles as a function of the coverage and chain length of the PEO chains.

To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2009.MAR.B26.9
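For a sense of scale — an illustrative back-of-the-envelope estimate, not a number from the abstract — the continuum Stokes drag on a sphere is $F = 6\pi\eta R v$, so a 10 nm-radius particle moving at 1 cm/s through water experiences a drag of order a few piconewtons; it is these tiny continuum-limit forces that the atomistic corrections studied here modify.

```python
import math

def stokes_drag(eta_pa_s, radius_m, speed_m_s):
    """Continuum no-slip Stokes drag on a sphere: F = 6*pi*eta*R*v."""
    return 6.0 * math.pi * eta_pa_s * radius_m * speed_m_s

# Illustrative inputs (assumed, not taken from the abstract): water at room
# temperature (eta ~ 1.0e-3 Pa*s), a 10 nm radius particle, v = 1 cm/s.
F = stokes_drag(1.0e-3, 10e-9, 1e-2)
assert 1.8e-12 < F < 2.0e-12   # roughly 2 piconewtons
```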
http://mathhelpforum.com/advanced-algebra/74514-dimension-base-set-print.html
# Dimension and basis of a set

• Feb 19th 2009, 12:53 PM

arbolis

Dimension and basis of a set

Hi MHF, I'm unable to solve this problem. A little help would be welcome (not a full answer, if possible). Find the dimension and a basis of this set: $\{ A \in \mathbb{R}^{n\times n} \text{ such that } A=A^T \}$.

My attempt: I'm tempted to answer $\dim S =n^2$ without thinking, but when I think about it I realize I don't know. $S$ is the set containing all the $n\times n$ matrices that are equal to their transpose. I'm not able to see further in order to find the dimension of $S$. And once I know the dimension, I'll know how many vectors form a basis of $S$. Thank you very much in advance.

• Feb 19th 2009, 01:22 PM

clic-clac

Hi. So you're considering the subspace of symmetric matrices. Given a symmetric $n\times n$ matrix, how many "parameters" (= coefficients) do you think you must choose to completely define it?

• Feb 19th 2009, 01:43 PM

arbolis

Quote:

Originally Posted by clic-clac

Hi. So you're considering the subspace of symmetric matrices. Given a symmetric $n\times n$ matrix, how many "parameters" (= coefficients) do you think you must choose to completely define it?

2n?

• Feb 19th 2009, 01:51 PM

clic-clac

Mhhh, in fact you just have to choose the upper coefficients, diagonal included. The other ones will be equal to their symmetric coefficient through the diagonal. So you have $1+2+...+n$ coefficients to choose, and that gives you the dimension.

• Feb 19th 2009, 02:20 PM

arbolis

Quote:

Originally Posted by clic-clac

Mhhh, in fact you just have to choose the upper coefficients, diagonal included. The other ones will be equal to their symmetric coefficient through the diagonal. So you have $1+2+...+n$ coefficients to choose, and that gives you the dimension.

Thank you very much! Very well explained. So $\dim S =\frac{n(n+1)}{2}$. I'll try now the other part of the question.

• Feb 19th 2009, 03:27 PM

arbolis

Checking my result.
A basis $\bold B$ is $\{ \begin{bmatrix} 1 & 0 & 0 & ... & 0 \\ 0 & 0 & 0 & ... & 0 \\ ...& ... & ... & ... & ... \\ 0 & 0 & 0 & ... & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 & 0 & ... & 0 \\ 1 & 0 & 0 & ... & 0 \\ ...& ... & ... & ... & ... \\ 0 & 0 & 0 & ... & 0 \end{bmatrix} ,$ $\begin{bmatrix} 0 & 0 & 0 & ... & 0 \\ 0 & 1 & 0 & ... & 0 \\ ...& ... & ... & ... & ... \\ 0 & 0 & 0 & ... & 0 \end{bmatrix}, ...\}$. All matrices are $n\times n$ and there are $\frac{n(n+1)}{2}$ matrices forming the basis. Am I right?

• Feb 19th 2009, 03:36 PM

ThePerfectHacker

For $1\leq i\leq j\leq n$ let $\bold{e}_{ij}$ be the matrix whose $ij$-th and $ji$-th entries are $1$ while everything else is zero. Notice if $i=j$ then $\bold{e}_{ij}$ is a matrix with only a $1$ at the $ii$ (or $jj$) location on the diagonal. For example, for $n=3$, $\bold{e}_{12}$ is: $\begin{bmatrix}0&1&0\\1&0&0\\0&0&0\end{bmatrix}$

The basis is $B = \{ \bold{e}_{ij} \mid 1\leq i\leq j\leq n\}$ and of course $|B| = \tfrac{1}{2}n(n+1)$.

• Feb 19th 2009, 04:24 PM

math2009

$A=\begin{bmatrix}a_{11} & \cdots & a_{ji} \\ \vdots & \ddots & \ \\ a_{ij} & \ & a_{nn} \end{bmatrix}$

$A=A^T$ means $a_{ij}=a_{ji}$. For $i\neq j$, i.e. off the diagonal, there are $n^2-n$ entries; $\because$ they are symmetric, $\therefore$ there are $\frac{n^2-n}{2}+n\ (\text{diagonal})=\frac{n^2+n}{2}$ distinct entries.

$\dim S=\frac{n^2+n}{2}$

$\mathfrak{B}=\{\begin{bmatrix}\ & \cdots & a_{ji} \\ \vdots & \ddots & \ \\ a_{ij} & \cdots & \ \end{bmatrix} \mid 1\leq i , j\leq n,\ a_{ij}=a_{ji}=1\}$

• Feb 19th 2009, 04:45 PM

arbolis

Ok, this is what I meant, but I didn't know how to explain it as TPH did and didn't know how to write it as math2009 did. Thank you very much for the help!

• Feb 19th 2009, 04:55 PM

math2009

At the beginning, I didn't know how to write math formulas properly.
I checked http://www.mathhelpforum.com/math-help/latex-help/ (LaTeX Tutorial) and used WinEdt; that's how I learned the LaTeX syntax. You can also click on a formula, and it pops up the LaTeX commands.
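TPH's basis $\{\bold{e}_{ij}\}$ can also be checked mechanically. This little script (my addition, not from the thread) builds those matrices for a given $n$ and confirms that there are $n(n+1)/2$ of them and that each is symmetric:

```python
def symmetric_basis(n):
    """Build TPH's basis e_ij for 1 <= i <= j <= n: a 1 at positions (i,j)
    and (j,i) (which coincide when i == j), zeros elsewhere.  Matrices are
    represented as plain lists of lists."""
    basis = []
    for i in range(n):
        for j in range(i, n):
            e = [[0] * n for _ in range(n)]
            e[i][j] = 1
            e[j][i] = 1
            basis.append(e)
    return basis

def is_symmetric(m):
    return all(m[i][j] == m[j][i] for i in range(len(m)) for j in range(len(m)))

for n in range(1, 6):
    b = symmetric_basis(n)
    assert len(b) == n * (n + 1) // 2      # the dimension found in the thread
    assert all(is_symmetric(e) for e in b)
```

Any symmetric matrix $A$ is recovered as $\sum_{i\leq j} a_{ij}\,\bold{e}_{ij}$, so the family both spans and has the right count, confirming $\dim S = \frac{n(n+1)}{2}$.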
https://motls.blogspot.com/2014/07/cms-sees-650-gev-leptoquarks.html
## Thursday, July 10, 2014 ... ///// ### CMS "sees" $650\GeV$ leptoquarks It seems pretty obvious to me by now that the LHC experiments were publishing lots of papers where things look "clear" and nothing new is observed and they were increasingly scrutinizing and postponing the searches and surveys that are potentially "seeing something new". Just to be sure, one may feel uncomfortable about this bias because the message extracted from the papers published before a moment $t$ is skewed. On the other hand, this bias is understandable because extraordinary claims deserve extraordinary care (and require extraordinary evidence). So one should arguably accept that it takes a longer time to complete papers with "positive claims" – and at the same moment, everyone should be aware of this bias and realize that the body of papers published up to an early enough moment looks more conservative than the actual evidence that the experimenters are probably seeing at that moment. You simply have to accept that the literature can't be mindlessly trusted as a faithful picture of "what science actually knows right now", and this bias is just one minor aspect of this disclaimer. The avalanche of anomalies in new papers published since the early July 2014 seems too strong to me to consider this clustering a set of coincidences. Note that ATLAS has published a paper on the anomalously high $W^+W^-$ cross section and CMS saw that an estimated 1% of Higgs decays go to $\mu^\pm \tau^\mp$, a flavor-violating combination that should be nearly absent and where the excess is 2.5 sigma. Today, the CMS released another paper with an anomaly, a paper on their search for leptoquarks: Search for Pair-production of First Generation Scalar Leptoquarks in $pp$ Collisions at $\sqrt{s} = 8\TeV$ Note that leptoquarks are (or would-be) speculative new particles that combine the charges of leptons as well as quarks. 
So a leptoquark carries (or, quite possibly, doesn't carry because it doesn't exist) a nonzero lepton number (like a lepton) and a nonzero baryon number (like a quark) and transforms as a color triplet (as a quark). The particular leptoquarks searched for in this paper were spin-0 scalar particles. And this is the money graph. As the picture at the top shows, the leptoquark (a single particle) is supposed to decay to a combination of lepton+quark. The decay necessarily exists because we don't observe any light and therefore stable leptoquarks; leptons and quarks have to be among the final products because of the lepton and baryon number conservation laws. The lepton may be either charged or neutral (neutrino). The percentage of the decays involving a (visible) charged lepton is known as $\beta$ (which stands for "branching ratio"). On the $x$-axis, you see the assumed mass of the leptoquark, on the $y$-axis, you see the maximum possible $\beta$. For a $650\GeV$ leptoquark, one predicts $20\pm 2\pm 2$ events (and the ability to prove something like $\beta\lt 0.075$) but they observe $36$ events (and can only say that $\beta\lt 0.2$). Quite a disagreement but the significance is actually just 2.4 sigma. Also, the shape is diluted – the excess exists for a wide range of leptoquark masses above $300\GeV$, while a "real" leptoquark would be expected to produce a narrower excess as a function of the mass. In fact, you could be bothered even by the other 2-sigma excess for masses around $850\GeV$. Leptoquarks are in principle possible additions to the particle zoo, even according to some of the string vacua, but many people's (and my) knee-jerk reaction would be to say that they're bizarre and unmotivated. And they could cause lots of trouble. Of course that if this fluke turned out to be a signal, people would jump on this bizarre possibility rather quickly. I would bet on a fluke but I am not really sure. 
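As a rough sanity check on the quoted numbers (my own back-of-the-envelope, not from the CMS paper): a pure Poisson fluctuation of an expected 20 events up to 36 or more has a tail probability of order $10^{-3}$, i.e. roughly 3 sigma; folding in the $\pm 2$ (stat) $\pm 2$ (syst) uncertainty on the prediction dilutes this toward the 2.4 sigma the collaboration quotes.

```python
import math
from statistics import NormalDist

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via 1 - P(X <= k-1) summed from the pmf."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

p = poisson_sf(36, 20.0)           # 20 events expected, 36 observed
z = NormalDist().inv_cdf(1.0 - p)  # equivalent one-sided Gaussian z-score
assert 1e-4 < p < 5e-3             # tail probability of order 10^-3
assert 2.7 < z < 3.5               # ~3 sigma before systematic uncertainties
```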
#### snail feedback (9) :

"The avalanche of anomalies in new papers published since the early July 2014 seems too strong to me to consider this clustering a set of coincidences."

Of course it is not a coincidence; you have to think of the logistics of experimental groups, graduate students, thesis subjects. I would bet that a large percentage of those 3000+ names on the papers of the two large LHC experiments are graduate students who have been working for years on their thesis. How many students do you think could get a thesis by studying Higgs production? Afaik a thesis subject still has to have an original contribution. Student advisers have their pet analyses honed on Monte Carlo events, maybe for ten years, so do not be surprised to see limits from even more far-out phenomenological models. There have been for years working groups on leptoquarks trying to bolster the claims for the ILC. http://inspirehep.net/record/449743?ln=en

Would not Cumrun Vafa at least be happy about certain (chiral) leptoquarks ... :-)?

Sorry for not being able to link to the Google search results I have found using certain search terms from my smart phone... When I have my laptop again tomorrow evening, I will maybe ask about it on a certain physics Q&A site.

Less than 4 sigma is just desperation from theorists. DAMA is 6-9 sigma BTW. But nobody cares as it does not fit the intellectual orthodoxy...

A rational theorist never goes to any "desperation"; instead, he is refining a (most) likely picture of how the Universe works according to the available data. Whatever the data are, he sees a solution. The fact that the expected evidence *does* depend on the agreement with the "official orthodoxy" - you mean previous scientific evidence and principles extracted from it - is known as rationality. Different magnitudes of excesses are needed in different contexts because the prior probabilities are different, and so are the probabilities of a completely different (more mundane) explanation.
If you think that the expected confidence level should be independent of the character of the new claim, you are only proving that you are a hopeless crank who has no clue what science is and how it works when it works.

You may be right, Dilaton, he could be happy. Well, I know some other F-theorists whose happiness in that case seems more guaranteed. ;-)

This kind of trolling comment, which I have seen at other dark places on the internet, does not belong here. Go home!