https://www.physicsforums.com/threads/derivative-of-10-20-t-1-2-process-help.577356/
# Derivative of 10 - 20/(t+1)^2 - process help

1. Feb 13, 2012
### 939

1. The problem statement, all variables and given/known data

Find the derivative of 10 - 20/(t+1)^2. It's hard to write out, but the 10 stands alone in front of the expression; it is NOT "10 - 20" together over (t+1)^2. Only the 20 is divided by (t+1)^2.

2. Relevant equations

10 - 20/(t+1)^2

3. The attempt at a solution

My main question is the process. Do you find the derivative of -20/(t+1)^2 and then put the 10 back in front, or do you multiply its derivative by 10?

2. Feb 13, 2012
### eumyang

First, what you wrote is not an equation; it's an expression. Do you mean this?
$$f(t) = 10 - \frac{20}{(t+1)^2}$$
If you mean the above, then no: you find the derivative of 10 (which is 0), and then subtract the derivative of the fraction. The difference rule applies: the derivative of f - g is f' - g'.

EDIT: Or do you mean this?
$$f(t) = 10\cdot\frac{-20}{(t+1)^2}$$

3. Feb 13, 2012
### 939

Thanks a lot for the help. And I meant this one.
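A quick symbolic check of the difference rule (a SymPy sketch added here for illustration; the thread itself contains no code): the constant 10 differentiates to zero, so only the fraction contributes.

```python
import sympy as sp

t = sp.symbols('t')
f = 10 - 20/(t + 1)**2

# d/dt [10] = 0, and d/dt [-20*(t+1)**(-2)] = 40*(t+1)**(-3)
fprime = sp.simplify(sp.diff(f, t))
print(fprime)   # -> 40/(t + 1)**3
```

So neither "put the 10 back in front" nor "multiply by 10" is right: the 10 simply disappears under differentiation.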
http://physics.stackexchange.com/questions/15894/motivation-for-potentials/15910
# Motivation for Potentials

This is a hypothetical question about "pedagogy". Say I am trying to take someone who has only a small amount of knowledge of Newtonian mechanics and convince them that the Lagrangian formulation of mechanics is a natural thing to try.

To convince them it is natural, I would at the very least have to show how one can reproduce Newtonian mechanics, in particular Newton's second law, from Lagrangian mechanics. To do this, I would need to introduce the notion of a potential, and I am having trouble justifying it. I would like to be able to say "most forces we encounter in nature are conservative, and hence we are not losing too much by assuming all of our forces are conservative...". However, of the two fundamental forces that come to mind in elementary physics, the gravitational force and the electromagnetic force, only one is conservative. I hardly feel that one out of two is sufficient justification for introducing a potential. But if it is not even natural to assume that our forces always come from a potential, then it certainly won't be natural to define the Lagrangian in a way that reproduces Newton's second law.

Is there a nice way of convincing such a student that, even though the electromagnetic force is not conservative, there are still ways of deriving it from some sort of potential? In particular, the difficulty is that I do not want to assume any specific knowledge of electrodynamics.

---

OP wrote (v1): *I hardly feel that one out of two is a sufficient justification for the introduction of a potential.*

I would like to point out that there exists a velocity-dependent generalized potential
$$U~=~q\left(\phi -\frac{1}{c} {\bf v}\cdot {\bf A}\right)$$
for the Lorentz force
$${\bf F}~=~ q\left({\bf E} +\frac{1}{c} {\bf v}\times {\bf B}\right).$$
Here $\phi$ is the scalar EM potential and ${\bf A}$ is the magnetic vector potential. The generalized potential $U$ is related to the force ${\bf F}$ by
$${\bf F}~=~\frac{d}{dt} \frac{\partial U}{\partial {\bf v}} - \frac{\partial U}{\partial {\bf r}};$$
see, e.g., Herbert Goldstein, *Classical Mechanics*, for details. So the electromagnetic force ${\bf F}$ does have a potential, in the sense of $U$. The Lagrangian is $L = T - U$. By the way, fictitious forces (centrifugal, Coriolis, etc.) are other examples for which generalized potentials exist; see, e.g., Landau and Lifshitz, Vol. 1: *Mechanics*.

---

While the electromagnetic force is not conservative in the strict mathematical sense (it cannot be written as the gradient of a scalar), it is conservative in the sense that it conserves energy. This is more than enough to motivate a quantity for potential, as you need some value to account for the non-kinetic energy. After all, the electromagnetic field can be expressed in terms of a potential; it is just that you need a 4-vector rather than a scalar.
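As a sanity check of the generalized-potential formula (my own SymPy sketch, not part of the answer), one can verify it in one dimension with concrete fields chosen arbitrarily for illustration: phi(x) = x^2 and A(x) = x^3, both static. In 1-D with a static A the magnetic contribution cancels, so F should reduce to qE = -q*phi'(x).

```python
import sympy as sp

t, q, c = sp.symbols('t q c')
x = sp.Function('x')(t)     # particle position
v = sp.diff(x, t)           # particle velocity

# Treat x and v as independent variables when taking the partial derivatives:
xs, vs = sp.symbols('xs vs')
phi, A = xs**2, xs**3       # arbitrary static 1-D fields, for illustration only

U = q*(phi - vs*A/c)        # generalized potential U = q(phi - v*A/c)

# F = d/dt (dU/dv) - dU/dx, substituting x(t), v(t) after the partials
dU_dv = sp.diff(U, vs).subs({xs: x, vs: v})
dU_dx = sp.diff(U, xs).subs({xs: x, vs: v})
F = sp.simplify(sp.diff(dU_dv, t) - dU_dx)
print(F)   # -> -2*q*x(t), i.e. F = -q*phi'(x) = qE, as expected
```

The two A-dependent terms (one from the total time derivative, one from the spatial partial) cancel exactly, which is the 1-D shadow of the v x B structure of the Lorentz force.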
https://cs.stackexchange.com/questions/22345/explaining-the-difference-between-computer-science-and-computer-literacy?noredirect=1
# Explaining the difference between computer science and computer literacy [closed]

What is a good metaphor or example to explain to an English major the difference between classical computer science and "being good with using MS-Windows"?

- computer science
- computer programming
- using computers

These are three profoundly different things, yet most people have no idea what computer science even is; they just see the word "computer". Hence, "he is a computer science major" can be interpreted as "he can hook up my printer", or that he is "good with computers". Even fewer people know the difference between computer programming and computer science.

Computer science is computing theory. CS can be learned without actual computers: CPU microarchitecture; how to sort numbers; how to traverse lists; state machines; algorithms and big-O notation; how to design a programming language or compiler.

Programming is writing code and creating applications in a language, and with a compiler, created by a computer scientist.

Lastly, there is using a computer (using a GUI, mouse, and keyboard; the Internet, MS-Office, etc.).

Yet all three of these are used interchangeably by laymen. What is a good metaphor or example to explain to an English major the difference between classical computer science and "being good with using MS-Windows"? Or simply, a pithy example of how real computer science has nothing to do with using MS-Windows.

- I'm looking forward to biting answers for that one ;-) – vonbrand Mar 6 '14 at 18:00
- Why not say, "CS is a kind of math". – Karolis Juodelė Mar 8 '14 at 12:39
- See the old famous essay remarking on this age-old dichotomy, "The Two Cultures" by C. P. Snow. CS is just the latest in a long line of disciplines fitting into it. As a scientist/novelist he was uniquely qualified to comment on it, and it will be very relatable to English majors; it is probably even studied in some English classes. There are also deep connections to sociology.
– vzn Mar 8 '14 at 18:15
- Hello, and thanks for posting! Unfortunately, as it is, I'm having a hard time seeing how this question isn't primarily opinion-based; as such, in its current form, it isn't a great fit for this site (despite its popularity). Please take a moment to update your question to make it narrower in scope and to ask for specific kinds of information (references, I expect, will be the most appropriate sort); for instance, "what are well-known analogies which have been used to explain computer science," or "where can I find information on comparisons." Thanks for contributing! – Patrick87 May 5 '14 at 18:55
- (Also, sorry for missing this question until now. I would have preferred to ask for these edits earlier. Thanks for your understanding.) – Patrick87 May 5 '14 at 18:56

## 9 Answers

How about an automotive analogy?

- uses computers, and maybe "is good with computers" :: a driver (can drive and refuel safely), and maybe a car enthusiast (can jump-start a car; is familiar with many makes and models; knows techniques like using windshield treatment to keep rain from reducing visibility).
- programmer :: an automotive mechanic or technician. Knows how cars work. Can repair and modify cars and even build kit cars. Ought to know how to debug/diagnose problems using the scientific method. Might not be aware of relevant theory and thus might write O(n^2) loops.
- software engineer :: an automotive engineer. Designs cars, engines, and other components that you can entrust your life to, and does it within schedule, cost, manufacturability, and other constraints. Knows how to apply the relevant theory/math, such as finite element analysis.
- computer scientist :: an automotive scientist. Researches new ideas in vehicles, human-machine interfaces, and propulsion. Does computational crash-test modeling. Adds to the body of theory and experimental results.
So for people who equate all "computing" with "proficiency in using some software package," that's like equating driving proficiency with the ability to design antilock brakes that we trust our lives to, that are manufacturable with consistently high quality at low cost, and that work for years in extreme weather. Or equating driving proficiency with researching what kind of radar-triggered braking features will avoid collisions without startling the driver into swerving into another lane.

Perhaps lay people confuse these terms because "computer science" classes variously teach computer-use skills, programming, theory, or engineering. All of that (arguably not the first part) fits in a computer science curriculum, but none of it is the be-all "content" of computer science, just as individual English classes are steps on the way to an English major (a fuzzier concept).

- See also my attempt here; "skillful use of some computer programs" would probably equate to something like "ability to hang a picture and change lightbulbs". – Raphael Mar 6 '14 at 18:53
- From a friend: the first metaphor that comes to mind is cars. Computer science ~= designing a car engine: theory matters, math is involved. Computer programming ~= rebuilding a car engine: you need to know what you're doing and understand how everything works, but theoretical aspects are much less important. Using computers ~= you can drive the car and put gas in it without blowing up the gas station. – JackOfAll Mar 6 '14 at 19:59
- Incorporating the suggestions from @JackOfAll required distinguishing programmer from software engineer. Engineering is building something within schedule and other constraints, that works in a wide variety of conditions, and that we can further build on and rely on. Other programming is hacking something together in a language like Perl. Science is generating new knowledge through experimenting. Engineers and scientists need to know the relevant theory and math; scientists should add to the body of theory.
– Jerry101 Mar 6 '14 at 21:10
- All true, but what about wrestling with people who equate "computers" with "proficient in using <insert favorite package here>," and don't fathom that there is more here? Or the slightly more advanced ones who consider anything "trivial, just write a program"? Extra points for handling people who think the halting problem can be solved as a matter of course... – vonbrand Mar 6 '14 at 21:25
- You could go further; theoretical computer scientist :: physicist - can describe the maths that models why the car works, but may not be able to drive. ;) – Luke Mathieson Mar 7 '14 at 2:23

Since it is an English major: computer literacy is like reading, computer programming like composition, and computer science like linguistics. All three are about language, but the skills are not exactly interchangeable. Somebody put it to me this way, but I'm afraid I've forgotten who.

Disinfecting your kitchen isn't microbiology; operating your computer isn't computer science.

- Doesn't go into too much detail about what CS actually is, but good for a quick analogy, and it induces a little chuckle. – Cheezey Mar 7 '14 at 0:05
- Sounds much like Dijkstra's telescope statement. – Raphael Mar 7 '14 at 7:11
- Computer science compared to disinfection/microbiology? Vaguely works... – vzn Mar 8 '14 at 18:04

Computer science is to computers as astronomy is to telescopes. — Edsgar Dijkstra

I read this in some book, but unfortunately I forget which.

- en.wikiquote.org/wiki/Computer_science#Disputed has 3 places it's quoted in ~1993, and disagreements as to whether it was really from Dijkstra – WernerCD Mar 7 '14 at 14:55
- Also, "Edsgar" Dijkstra. I think the Nederlanders made the name just to confuse English speakers. – Luke Mathieson Mar 8 '14 at 0:07
- @LukeMathieson English speakers? I think anyone will be confused by that name. – Kartik Mar 8 '14 at 11:36
- "Edsger", in fact.
– James Wood Mar 8 '14 at 16:21
- @LukeMathieson It's not exactly a common name in Dutch either; about 1 in a million have it as a first name. But as an English speaker it should have felt natural to you ;) The etymology of the name is the same in English as in Dutch, meaning SwordSpear: eds as in edge -> sword, and ger as in the uncommon gar (which you obviously know, as you made the right spelling change to make it English) meaning spear, or the related gore. – Rinze Smits Mar 8 '14 at 17:42

I work with some "real engineers", and a lot of them seem to think computer programming and CS are the same thing (apparently they also think engineers do really high-level math; different topic there). I used to be a CAD drafter back in high school, so I tell them I am basically a mechanical engineer; that seems to even the playing field. I guess you could tell your English-major friend, "You can read books already, so you might as well have an English major"; or, in a less confrontational way, let them know that this is the equivalent of what they are saying.

- You say "different topic", but I feel that the two are actually very similar: when an engineer says "high-level math", they're almost certainly referring to high-level applied math, and what is programming but applied computer science? On the other hand, if these "real engineers" consider stuff like solving lots of polynomials to be "high-level math" (without using the concepts that allow efficient solving of such systems of equations, or just plugging them into a program without understanding how it works), I could see where you're coming from. – JAB Mar 7 '14 at 17:03
- Yeah, I mean the second kind, where using Laplace transforms and Runge–Kutta is considered doing high-level math (even though those topics aren't really considered high-level math). Then again, I graduated with a degree in applied math, so my standard of what counts as high-level math is probably a bit skewed; I just thought it was funny.
I agree about computer programming being applied computer science; I was just drawing a parallel between what a mechanical engineer may do most of the time in a job vs. what a software engineer might, i.e., CAD drafting vs. computer programming. – SuperSecret Mar 7 '14 at 22:26

Hmm, here's another metaphor: Google search.

1. The computer scientist designs the Google PageRank algorithm.
2. The programmer knows how to take keyword input, access the database, and display the results on a webpage.
3. The user knows how to do a Google search. Yea!!

- Problem with this being that a lot of people will not understand / be able to distinguish between (1) and (2). – Ant P Mar 7 '14 at 20:19

I miss a fourth bullet, "computer engineering". An engineer knows how things work. A scientist knows why things work. A builder makes things (that sometimes work). A user uses things. For "thing" read house, computer, car, ... For "builder" substitute a suitable name for the manual-labor professional, e.g. "programmer" when thing = computer, "mason" when thing = house, etc.

I just now found another quote, again by Edsger Dijkstra (from here):

...the harm was done: the topic became known as "computer science"---which, actually, is like referring to surgery as "knife science"---and it was firmly implanted in people's minds that computing science is about machines and their peripheral equipment.

You can shorten it to: computer science is like referring to surgery as "knife science". But you don't even need to say that; I think it would be enough to say that "CS is a kind of math that has nothing to do with computers".

Computer science is the knowledge of what computers can do so that you can use them. Computer literacy is the knowledge of what you can do with computers so that they can use you.

- The role of a downvote is to show that someone is being silly. – babou Jun 24 '14 at 15:28
https://cantera.org/documentation/docs-2.2/doxygen/html/classCantera_1_1BEulerInt.html
Cantera  2.2.1 BEulerInt Class Reference #include <BEulerInt.h> Inheritance diagram for BEulerInt: [legend] Collaboration diagram for BEulerInt: [legend] ## Public Member Functions BEulerInt () virtual void setTolerances (double reltol, size_t n, double *abstol) Set or reset the number of equations. More... virtual void setTolerances (double reltol, double abstol) Set error tolerances. More... virtual void setProblemType (int probtype) Set the problem type. More... virtual void initializeRJE (double t0, ResidJacEval &func) Find the initial conditions for y and ydot. More... virtual void reinitializeRJE (double t0, ResidJacEval &func) virtual double integrateRJE (double tout, double tinit=0.0) virtual doublereal step (double tout) Integrate the system of equations. More... virtual void setSolnWeights () Set the solution weights. More... virtual double & solution (size_t k) The current value of the solution of equation k. More... double * solution () The current value of the solution of the system of equations. More... int nEquations () const The number of equations. More... virtual int nEvals () const Return the total number of function evaluations. More... virtual void setMethodBEMT (BEulerMethodType t) virtual void setIterator (IterType t) Set the linear iterator. More... virtual void setMaxStep (double hmax) virtual void setMaxNumTimeSteps (int) virtual void setNumInitialConstantDeltaTSteps (int) void print_solnDelta_norm_contrib (const double *const soln0, const char *const s0, const double *const soln1, const char *const s1, const char *const title, const double *const y0, const double *const y1, double damp, int num_entries) virtual void setPrintSolnOptions (int printSolnStepInterval, int printSolnNumberToTout, int printSolnFirstSteps=0, bool dumpJacobians=false) This routine controls when the solution is printed. More... 
void setNonLinOptions (int min_newt_its=0, bool matrixConditioning=false, bool colScaling=false, bool rowScaling=true) Set the options for the nonlinear method. More... virtual void setPrintFlag (int print_flag) virtual void setColumnScales () Set the column scaling vector at the current time. More... virtual double soln_error_norm (const double *const, bool printLargest=false) Calculate the solution error norm. More... virtual void setInitialTimeStep (double delta_t) void beuler_jac (GeneralMatrix &J, double *const f, double, double, double *const, double *const, int) Public Member Functions inherited from Integrator Integrator () Default Constructor. More... virtual ~Integrator () Destructor. More... virtual void setSensitivityTolerances (doublereal reltol, doublereal abstol) Set the sensitivity error tolerances. More... virtual void initialize (doublereal t0, FuncEval &func) Initialize the integrator for a new problem. More... virtual void reinitialize (doublereal t0, FuncEval &func) virtual void integrate (doublereal tout) Integrate the system of equations. More... virtual void setMaxOrder (int n) Set the maximum integration order that will be used. More... virtual void setMethod (MethodType t) Set the solution method. More... virtual void setMaxStepSize (double hmax) Set the maximum step size. More... virtual void setMinStepSize (double hmin) Set the minimum step size. More... virtual void setMaxErrTestFails (int n) Set the maximum permissible number of error test failures. More... virtual void setMaxSteps (int nmax) virtual void setBandwidth (int N_Upper, int N_Lower) virtual int nSensParams () virtual double sensitivity (size_t k, size_t p) ## Protected Member Functions void internalMalloc () Internal routine that sets up the fixed length storage based on the size of the problem to solve. More... 
void calc_y_pred (int) void calc_ydot (int, double *, double *) double time_error_norm () double time_step_control (int m_order, double time_error_factor) int solve_nonlinear_problem (double *const y_comm, double *const ydot_comm, double CJ, double time_curr, GeneralMatrix &jac, int &num_newt_its, int &num_linear_solves, int &num_backtracks, int loglevel) Solve a nonlinear system. More... void doNewtonSolve (double, double *, double *, double *, GeneralMatrix &, int) Compute the undamped Newton step. More... double boundStep (const double *const y, const double *const step0, int loglevel) Bound the Newton step while relaxing the solution. More... int dampStep (double, const double *, const double *, const double *, double *, double *, double *, double &, GeneralMatrix &, int &, bool, int &) void computeResidWts (GeneralMatrix &jac) Compute Residual Weights. More... double filterNewStep (double, double *, double *) Filter a new step. More... double getPrintTime (double time_current) Get the next time to print out. More... ## Protected Attributes IterType m_iter IterType is used to specify how the nonlinear equations are to be relaxed at each time step. More... BEulerMethodType m_method MethodType is used to specify how the time step is to be chosen. More... int m_jacFormMethod m_jacFormMethod determines how a matrix is formed. More... bool m_rowScaling m_rowScaling is a boolean. More... bool m_colScaling m_colScaling is a boolean. More... bool m_matrixConditioning m_matrixConditioning is a boolean. More... int m_itol If m_itol =1 then each component has an individual value of atol. More... double m_reltol Relative time truncation error tolerances. More... double m_abstols Absolute time truncation error tolerances, when uniform for all variables. More... vector_fp m_abstol Vector of absolute time truncation error tolerance when not uniform for all variables. More... vector_fp m_ewt Error Weights. This is a surprisingly important quantity. More... 
double m_hmax Maximum step size. More... int m_maxord Maximum integration order. More... int m_order Current integration order. More... int m_time_step_num Time step number. More... int m_time_step_attempts int m_max_time_step_attempts Max time steps allowed. More... int m_numInitialConstantDeltaTSteps Number of initial time steps to take where the time truncation error tolerances are not checked. More... int m_failure_counter Failure Counter -> keeps track of the number of consecutive failures. More... int m_min_newt_its Minimum Number of Newton Iterations per nonlinear step. default = 0. More... int m_printSolnStepInterval Step Interval at which to print out the solution default = 1; If set to zero, there is no printout. More... int m_printSolnNumberToTout Number of evenly spaced printouts of the solution If zero, there is no printout from this option default 1 If set to zero there is no printout. More... int m_printSolnFirstSteps Number of initial steps that the solution is printed out. default = 0. More... bool m_dumpJacobians Dump Jacobians to disk. default false. More... int m_neq Number of equations in the ode integrator. More... vector_fp m_y_n vector_fp m_y_nm1 vector_fp m_y_pred_n vector_fp m_ydot_n vector_fp m_ydot_nm1 double m_t0 Initial time at the start of the integration. More... double m_time_final Final time. More... double time_n double time_nm1 double time_nm2 double delta_t_n double delta_t_nm1 double delta_t_nm2 double delta_t_np1 double delta_t_max Maximum permissible time step. More... vector_fp m_resid vector_fp m_residWts vector_fp m_wksp ResidJacEvalm_func vector_fp m_rowScales vector_fp m_colScales GeneralMatrixtdjac_ptr Pointer to the Jacobian representing the time dependent problem. More... int m_print_flag Determines the level of printing for each time step. More... int m_nfe Number of function evaluations. More... int m_nJacEval Number of Jacobian Evaluations and factorization steps (they are the same) More... 
int m_numTotalNewtIts Number of total Newton iterations. More... int m_numTotalLinearSolves Total number of linear iterations. More... int m_numTotalConvFails Total number of convergence failures. More... int m_numTotalTruncFails Total Number of time truncation error failures. More... int num_failures ## Detailed Description Wrapper class for 'beuler' integrator We derive the class from the class Integrator Deprecated: Unused. To be removed after Cantera 2.2. Definition at line 56 of file BEulerInt.h. ## Constructor & Destructor Documentation BEulerInt ( ) Constructor. Default settings: dense Jacobian, no user-supplied Jacobian function, Newton iteration. Definition at line 26 of file BEulerInt.cpp. References Cantera::warn_deprecated(). ## Member Function Documentation void setTolerances ( double reltol, size_t n, double * abstol ) virtual Set or reset the number of equations. Set error tolerances. Parameters reltol scalar relative tolerance n Number of equations abstol array of N absolute tolerance values Reimplemented from Integrator. Definition at line 78 of file BEulerInt.cpp. References BEulerInt::m_abstol, BEulerInt::m_itol, BEulerInt::m_neq, and BEulerInt::m_reltol. void setTolerances ( double reltol, double abstol ) virtual Set error tolerances. Parameters reltol scalar relative tolerance abstol scalar absolute tolerance Reimplemented from Integrator. Definition at line 92 of file BEulerInt.cpp. References BEulerInt::m_abstols, BEulerInt::m_itol, and BEulerInt::m_reltol. void setProblemType ( int probtype ) virtual Set the problem type. Parameters probtype Type of the problem Reimplemented from Integrator. Definition at line 99 of file BEulerInt.cpp. References BEulerInt::m_jacFormMethod. void initializeRJE ( double t0, ResidJacEval & func ) virtual Find the initial conditions for y and ydot. Definition at line 165 of file BEulerInt.cpp. double step ( double tout ) virtual Integrate the system of equations. Parameters tout integrate to this time. 
Note that this is the absolute time value, not a time interval. Reimplemented from Integrator. Definition at line 888 of file BEulerInt.cpp. void setSolnWeights ( ) virtual Set the solution weights. This is a very important routine, as it affects quite a few operations involving convergence. Definition at line 250 of file BEulerInt.cpp. Referenced by BEulerInt::step(). virtual double& solution ( size_t k ) inlinevirtual The current value of the solution of equation k. Reimplemented from Integrator. Definition at line 84 of file BEulerInt.h. double* solution ( ) inlinevirtual The current value of the solution of the system of equations. Reimplemented from Integrator. Definition at line 87 of file BEulerInt.h. int nEquations ( ) const inlinevirtual The number of equations. Reimplemented from Integrator. Definition at line 90 of file BEulerInt.h. References BEulerInt::m_neq. int nEvals ( ) const virtual Return the total number of function evaluations. Reimplemented from Integrator. Definition at line 225 of file BEulerInt.cpp. References BEulerInt::m_nfe. void setIterator ( IterType t ) virtual Set the linear iterator. Reimplemented from Integrator. Definition at line 135 of file BEulerInt.cpp. References BEulerInt::m_iter. void setPrintSolnOptions ( int printSolnStepInterval, int printSolnNumberToTout, int printSolnFirstSteps = 0, bool dumpJacobians = false ) virtual This routine controls when the solution is printed. Parameters printSolnStepInterval If greater than 0, then the soln is printed every printSolnStepInterval steps. printSolnNumberToTout The solution is printed at regular intervals a total of "printSolnNumberToTout" times. printSolnFirstSteps The solution is printed out for the first "printSolnFirstSteps" steps. After these steps the other parameters determine the printing. default = 0 dumpJacobians Dump Jacobians to disk. Definition at line 124 of file BEulerInt.cpp.
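The solution weights set by setSolnWeights() (stored in m_ewt, built from m_reltol and m_abstol) typically follow the conventional CVODE-style recipe: ewt[i] = reltol*|y[i]| + abstol[i], with errors and Newton updates measured in a weighted RMS norm. The sketch below is plain Python illustrating that standard recipe; the helper names are my own, not Cantera's, and the exact expressions used by BEulerInt are in BEulerInt.cpp.

```python
import math

def error_weights(y, reltol, abstol):
    """Conventional error-weight vector: ewt[i] = reltol*|y[i]| + abstol[i]."""
    return [reltol * abs(yi) + ai for yi, ai in zip(y, abstol)]

def weighted_norm(delta, ewt):
    """Weighted RMS norm of an error or update vector delta.

    A value around 1.0 means delta is right at the requested tolerance."""
    n = len(delta)
    return math.sqrt(sum((d / w) ** 2 for d, w in zip(delta, ewt)) / n)

# Components of very different magnitude are judged on a common scale:
y = [1.0, 1e-6, 100.0]
ewt = error_weights(y, reltol=1e-4, abstol=[1e-9] * 3)
print(weighted_norm([1e-5, 1e-10, 1e-3], ewt))
```

This weighting is why the docs call setSolnWeights() "surprisingly important": the same norm drives both the Newton convergence test and the time-truncation-error test.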
void setNonLinOptions ( int min_newt_its = 0, bool matrixConditioning = false, bool colScaling = false, bool rowScaling = true ) Set the options for the nonlinear method. The defaults, set in the .h file, are: min_newt_its = 0, matrixConditioning = false, colScaling = false, rowScaling = true. Definition at line 140 of file BEulerInt.cpp. void setColumnScales ( ) virtual Set the column scaling vector at the current time. Definition at line 266 of file BEulerInt.cpp. References ResidJacEval::calcSolnScales(). Referenced by BEulerInt::doNewtonSolve(). double soln_error_norm ( const double * const delta_y, bool printLargest = false ) virtual Calculate the solution error norm. If printLargest is true, then a table of the largest values is printed to standard output. Definition at line 1295 of file BEulerInt.cpp. References Cantera::error(), BEulerInt::m_ewt, and BEulerInt::m_neq. Referenced by BEulerInt::dampStep(). void beuler_jac ( GeneralMatrix & J, double *const f, double time_curr, double CJ, double * const y, double * const ydot, int num_newt_its ) Function called to evaluate the Jacobian matrix and the current residual at the current time step. Parameters J = Jacobian matrix to be filled in f = Right-hand side. This routine returns the current value of the RHS (output), so that it does not have to be computed again. Definition at line 483 of file BEulerInt.cpp. Referenced by BEulerInt::solve_nonlinear_problem(). void internalMalloc ( ) protected Internal routine that sets up the fixed-length storage based on the size of the problem to solve. Definition at line 230 of file BEulerInt.cpp. Referenced by BEulerInt::initializeRJE(). void calc_y_pred ( int order ) protected Function to calculate the predicted solution vector, m_y_pred_n, for the (n+1)th time step. This routine can be used by a first-order forward Euler / backward Euler predictor/corrector method or by a second-order Adams-Bashforth / Trapezoidal Rule predictor/corrector method.
See Nachos documentation Sand86-1816 and Gresho, Lee, Sani LLNL report UCRL-83282 for more information.
on input:
N - number of unknowns
order - indicates order of method
= 1 -> first order forward Euler/backward Euler predictor/corrector
= 2 -> second order Adams-Bashforth/Trapezoidal Rule predictor/corrector
delta_t_n - magnitude of time step at time n (i.e., = t_n+1 - t_n)
delta_t_nm1 - magnitude of time step at time n - 1 (i.e., = t_n - t_n-1)
y_n[] - solution vector at time n
y_dot_n[] - acceleration vector from the predictor at time n
y_dot_nm1[] - acceleration vector from the predictor at time n - 1
on output:
m_y_pred_n[] - predicted solution vector at time n + 1
Definition at line 590 of file BEulerInt.cpp.
References ResidJacEval::filterSolnPrediction(), and BEulerInt::m_neq.
Referenced by BEulerInt::step().

void calc_ydot ( int order, double * y_curr, double * ydot_curr ) protected

Function to calculate the acceleration vector ydot for the first- or second-order predictor/corrector time integrator. This routine can be called by a first-order forward Euler / backward Euler predictor/corrector or for a second-order Adams-Bashforth / Trapezoidal Rule predictor/corrector. See Nachos documentation Sand86-1816 and Gresho, Lee, Sani LLNL report UCRL-83282 for more information.
on input:
N - number of local unknowns on the processor. This is equal to internal plus border unknowns.
order - indicates order of method
= 1 -> first order forward Euler/backward Euler predictor/corrector
= 2 -> second order Adams-Bashforth/Trapezoidal Rule predictor/corrector
delta_t_n - Magnitude of the current time step at time n (i.e., = t_n - t_n-1)
y_curr[] - Current solution vector at time n
y_nm1[] - Solution vector at time n-1
ydot_nm1[] - Acceleration vector at time n-1
on output:
ydot_curr[] - Current acceleration vector at time n
Note: we use the "current" attribute to denote the possibility that y_curr[] may not be equal to m_y_n[] during the nonlinear solve because we may be using a look-ahead scheme.
Definition at line 618 of file BEulerInt.cpp.
References BEulerInt::m_neq.
Referenced by BEulerInt::dampStep(), BEulerInt::solve_nonlinear_problem(), and BEulerInt::step().

double time_error_norm ( ) protected

Calculates the time step truncation error estimate from a very simple formula based on Gresho et al. This routine can be called for a first-order forward Euler/backward Euler predictor/corrector and for a second-order Adams-Bashforth/Trapezoidal Rule predictor/corrector. See Nachos documentation Sand86-1816 and Gresho, Lee, Sani LLNL report UCRL-83282 for more information.
on input:
abs_error - Generic absolute error tolerance
rel_error - Generic relative error tolerance
x_coor[] - Solution vector from the implicit corrector
x_pred_n[] - Solution vector from the explicit predictor
on output:
delta_t_n - Magnitude of next time step at time t_n+1
delta_t_nm1 - Magnitude of previous time step at time t_n
Definition at line 639 of file BEulerInt.cpp.
References Cantera::error(), BEulerInt::m_ewt, BEulerInt::m_neq, and BEulerInt::m_print_flag.
Referenced by BEulerInt::step().

double time_step_control ( int m_order, double time_error_factor ) protected

Time step control function for the selection of the time step size based on a desired accuracy of time integration and on an estimate of the relative error of the time integration process.
This routine can be called for a first-order forward Euler/backward Euler predictor/corrector and for a second-order Adams-Bashforth/Trapezoidal Rule predictor/corrector. See Nachos documentation Sand86-1816 and Gresho, Lee, Sani LLNL report UCRL-83282 for more information.
on input:
order - indicates order of method
= 1 -> first order forward Euler/backward Euler predictor/corrector
= 2 -> second order Adams-Bashforth/Trapezoidal Rule predictor/corrector
delta_t_n - Magnitude of time step at time t_n
delta_t_nm1 - Magnitude of time step at time t_n-1
rel_error - Generic relative error tolerance
time_error_factor - Estimated value of the time step truncation error factor. This value is a ratio of the computed error norms. The premultiplying constants and the power are not yet applied to normalize the predictor/corrector ratio. (see output value)
on output:
return - delta_t for the next time step. If delta_t is negative, then the current time step is rejected because the time-step truncation error is too large. The return value will contain the negative of the recommended next time step.
time_error_factor - This output value is normalized so that values greater than one indicate the current time integration error is greater than the user-specified magnitude.
Definition at line 693 of file BEulerInt.cpp.
References BEulerInt::m_print_flag.
Referenced by BEulerInt::step().

int solve_nonlinear_problem ( double *const y_comm, double *const ydot_comm, double CJ, double time_curr, GeneralMatrix & jac, int & num_newt_its, int & num_linear_solves, int & num_backtracks, int loglevel ) protected

Solve a nonlinear system. Find the solution to F(X, xprime) = 0 by damped Newton iteration. On entry, y_comm[] contains an initial estimate of the solution and ydot_comm[] contains an estimate of the derivative. On successful return, y_comm[] contains the converged solution and ydot_comm[] contains the derivative.
Parameters
y_comm[] Contains the input solution.
On output y_comm[] contains the converged solution
ydot_comm Contains the input derivative solution. On output ydot_comm[] contains the converged derivative solution
CJ Inverse of the time step
time_curr Current value of the time
jac Jacobian
num_newt_its number of Newton iterations
num_linear_solves number of linear solves
num_backtracks number of backtracks
loglevel Log level
Definition at line 1731 of file BEulerInt.cpp.
Referenced by BEulerInt::step().

void doNewtonSolve ( double time_curr, double * y_curr, double * ydot_curr, double * delta_y, GeneralMatrix & jac, int loglevel ) protected

Compute the undamped Newton step. The residual function is evaluated at the current time, t_n, at the current values of the solution vector, m_y_n, and the solution time derivative, m_ydot_n, but the Jacobian is not recomputed.
Definition at line 1349 of file BEulerInt.cpp.
Referenced by BEulerInt::dampStep(), and BEulerInt::solve_nonlinear_problem().

double boundStep ( const double *const y, const double *const step0, int loglevel ) protected

Bound the Newton step while relaxing the solution. Return the factor by which the undamped Newton step 'step0' must be multiplied in order to keep all solution components in all domains between their specified lower and upper bounds. Other bounds may be applied here as well. Currently the bounds are hard-coded into this routine:
Minimum value for all variables: -0.01 * m_ewt[i]
Maximum value = none.
Thus, this means that all solution components are expected to be numerically greater than zero in the limit of time step truncation errors going to zero.
Delta bounds: The idea behind these is that the Jacobian couldn't possibly be representative if the variable is changed by a lot.
(true for nonlinear systems, false for linear systems)
Maximum increase in variable in any one Newton iteration: factor of 2
Maximum decrease in variable in any one Newton iteration: factor of 5
Parameters
y Current value of the solution
step0 Current raw step change in y[]
loglevel Log level. This routine produces output if loglevel is greater than one.
Returns the damping coefficient.
Definition at line 1515 of file BEulerInt.cpp.
References BEulerInt::m_ewt, and BEulerInt::m_neq.
Referenced by BEulerInt::dampStep().

int dampStep ( double time_curr, const double * y0, const double * ydot0, const double * step0, double * y1, double * ydot1, double * step1, double & s1, GeneralMatrix & jac, int & loglevel, bool writetitle, int & num_backtracks ) protected

On entry, step0 must contain an undamped Newton step for the solution x0. This method attempts to find a damping coefficient such that the next undamped step would have a norm smaller than that of step0. If successful, the new solution after taking the damped step is returned in y1, and the undamped step at y1 is returned in step1.
Definition at line 1582 of file BEulerInt.cpp.
Referenced by BEulerInt::solve_nonlinear_problem().

void computeResidWts ( GeneralMatrix & jac ) protected

Compute residual weights.
Definition at line 271 of file BEulerInt.cpp.
References GeneralMatrix::begin(), BEulerInt::m_ewt, and BEulerInt::m_neq.

double filterNewStep ( double timeCurrent, double * y_current, double * ydot_current ) protected

Filter a new step.
Definition at line 295 of file BEulerInt.cpp.
Referenced by BEulerInt::solve_nonlinear_problem(), and BEulerInt::step().

double getPrintTime ( double time_current ) protected

Get the next time to print out.
Definition at line 210 of file BEulerInt.cpp.

## Member Data Documentation

IterType m_iter protected

IterType is used to specify how the nonlinear equations are to be relaxed at each time step.
Definition at line 394 of file BEulerInt.h.
Referenced by BEulerInt::setIterator().

BEulerMethodType m_method protected

MethodType is used to specify how the time step is to be chosen. Currently, there are two choices: one is a fixed-step method, while the other is based on a predictor-corrector algorithm and a time-step truncation error tolerance.
Definition at line 401 of file BEulerInt.h.
Referenced by BEulerInt::step().

int m_jacFormMethod protected

m_jacFormMethod determines how a matrix is formed.
Definition at line 405 of file BEulerInt.h.
Referenced by BEulerInt::beuler_jac(), and BEulerInt::setProblemType().

bool m_rowScaling protected

m_rowScaling is a boolean. If true, then row-sum scaling of the Jacobian matrix is carried out when solving the linear systems.
Definition at line 411 of file BEulerInt.h.

bool m_colScaling protected

m_colScaling is a boolean. If true, then column scaling is performed on each solution of the linear system.
Definition at line 416 of file BEulerInt.h.

bool m_matrixConditioning protected

m_matrixConditioning is a boolean. If true, then the Jacobian and every RHS is multiplied by the inverse of a matrix that is supposed to reduce the condition number of the matrix. This is done before row scaling.
Definition at line 423 of file BEulerInt.h.
Referenced by BEulerInt::doNewtonSolve(), and BEulerInt::setNonLinOptions().

int m_itol protected

If m_itol = 1, then each component has an individual value of atol. If m_itol = 0, all atols are equal.
Definition at line 428 of file BEulerInt.h.
Referenced by BEulerInt::setSolnWeights(), and BEulerInt::setTolerances().

double m_reltol protected

Relative time truncation error tolerance.
Definition at line 431 of file BEulerInt.h.
Referenced by BEulerInt::setSolnWeights(), and BEulerInt::setTolerances().

double m_abstols protected

Absolute time truncation error tolerance, when uniform for all variables.
Definition at line 437 of file BEulerInt.h.
Referenced by BEulerInt::setSolnWeights(), and BEulerInt::setTolerances().
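The tolerance members just described (m_reltol together with m_abstols or the m_abstol vector, selected by m_itol) typically combine into per-component error weights. A minimal sketch in Python of the conventional rule w[i] = rtol * |y[i]| + atol[i]; the function name and arguments are illustrative, not BEulerInt's actual API:

```python
def error_weights(y, rtol, atol):
    """Per-component error weights: w[i] = rtol * |y[i]| + atol[i].

    atol may be a scalar (the m_itol = 0 case, one uniform tolerance)
    or a per-component sequence (the m_itol = 1 case).
    """
    n = len(y)
    atols = list(atol) if hasattr(atol, "__len__") else [atol] * n
    return [rtol * abs(yi) + ai for yi, ai in zip(y, atols)]
```

Large solution components are then judged relatively, while components near zero fall back on the absolute tolerance floor.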
vector_fp m_abstol protected

Vector of absolute time truncation error tolerances, when not uniform for all variables.
Definition at line 442 of file BEulerInt.h.
Referenced by BEulerInt::setSolnWeights(), and BEulerInt::setTolerances().

vector_fp m_ewt protected

Error weights. This is a surprisingly important quantity.
Definition at line 445 of file BEulerInt.h.

double m_hmax protected

Maximum step size.
Definition at line 448 of file BEulerInt.h.

int m_maxord protected

Maximum integration order.
Definition at line 451 of file BEulerInt.h.

int m_order protected

Current integration order.
Definition at line 454 of file BEulerInt.h.
Referenced by BEulerInt::dampStep(), BEulerInt::solve_nonlinear_problem(), and BEulerInt::step().

int m_time_step_num protected

Time step number.
Definition at line 457 of file BEulerInt.h.
Referenced by BEulerInt::doNewtonSolve(), and BEulerInt::step().

int m_max_time_step_attempts protected

Maximum number of time steps allowed.
Definition at line 461 of file BEulerInt.h.

int m_numInitialConstantDeltaTSteps protected

Number of initial time steps to take where the time truncation error tolerances are not checked. Instead, the delta T is uniform.
Definition at line 466 of file BEulerInt.h.
Referenced by BEulerInt::step().

int m_failure_counter protected

Failure counter -> keeps track of the number of consecutive failures.
Definition at line 469 of file BEulerInt.h.
Referenced by BEulerInt::step().

int m_min_newt_its protected

Minimum number of Newton iterations per nonlinear step. Default = 0.
Definition at line 472 of file BEulerInt.h.
Referenced by BEulerInt::setNonLinOptions(), and BEulerInt::solve_nonlinear_problem().

int m_printSolnStepInterval protected

Step interval at which to print out the solution. Default = 1. If set to zero, there is no printout.
Definition at line 481 of file BEulerInt.h.
Referenced by BEulerInt::setPrintSolnOptions().
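The error weights m_ewt are what soln_error_norm() divides by, which is why they matter so much: they turn a raw step or error vector into a dimensionless norm where 1.0 means "exactly at tolerance". A sketch of such a weighted RMS norm in Python (a sketch of the general technique, not BEulerInt's exact code):

```python
import math

def soln_error_norm(delta_y, ewt):
    """Weighted RMS norm: sqrt((1/N) * sum_i (delta_y[i] / ewt[i])^2).

    A return value of 1.0 means the error sits exactly at tolerance;
    values below 1.0 are acceptable, values above signal failure.
    """
    n = len(delta_y)
    return math.sqrt(sum((d / w) ** 2 for d, w in zip(delta_y, ewt)) / n)
```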
int m_printSolnNumberToTout protected

Number of evenly spaced printouts of the solution. Default = 1. If set to zero, there is no printout from this option.
Definition at line 488 of file BEulerInt.h.
Referenced by BEulerInt::getPrintTime(), and BEulerInt::setPrintSolnOptions().

int m_printSolnFirstSteps protected

Number of initial steps for which the solution is printed out. Default = 0.
Definition at line 491 of file BEulerInt.h.
Referenced by BEulerInt::setPrintSolnOptions().

bool m_dumpJacobians protected

Dump Jacobians to disk. Default = false.
Definition at line 494 of file BEulerInt.h.
Referenced by BEulerInt::setPrintSolnOptions().

int m_neq protected

Number of equations in the ODE integrator.
Definition at line 501 of file BEulerInt.h.

double m_t0 protected

Initial time at the start of the integration.
Definition at line 512 of file BEulerInt.h.
Referenced by BEulerInt::getPrintTime(), and BEulerInt::initializeRJE().

double m_time_final protected

Final time.
Definition at line 515 of file BEulerInt.h.
Referenced by BEulerInt::getPrintTime().

double delta_t_max protected

Maximum permissible time step.
Definition at line 526 of file BEulerInt.h.
Referenced by BEulerInt::step().

GeneralMatrix* tdjac_ptr protected

Pointer to the Jacobian representing the time-dependent problem.
Definition at line 536 of file BEulerInt.h.
Referenced by BEulerInt::internalMalloc(), and BEulerInt::step().

int m_print_flag protected

Determines the level of printing for each time step:
0 -> absolutely nothing is printed for a single time step.
1 -> one-line summary per time step
2 -> short description, points of interest
3 -> lots printed per time step (default)
Definition at line 547 of file BEulerInt.h.
Referenced by BEulerInt::step(), BEulerInt::time_error_norm(), and BEulerInt::time_step_control().

int m_nfe protected

Number of function evaluations.
Definition at line 553 of file BEulerInt.h.
Referenced by BEulerInt::beuler_jac(), BEulerInt::doNewtonSolve(), and BEulerInt::nEvals().

int m_nJacEval protected

Number of Jacobian evaluations and factorization steps (they are the same).
Definition at line 559 of file BEulerInt.h.
Referenced by BEulerInt::beuler_jac().

int m_numTotalNewtIts protected

Number of total Newton iterations.
Definition at line 562 of file BEulerInt.h.
Referenced by BEulerInt::solve_nonlinear_problem().

int m_numTotalLinearSolves protected

Total number of linear iterations.
Definition at line 565 of file BEulerInt.h.
Referenced by BEulerInt::doNewtonSolve(), and BEulerInt::solve_nonlinear_problem().

int m_numTotalConvFails protected

Total number of convergence failures.
Definition at line 568 of file BEulerInt.h.
Referenced by BEulerInt::step().

int m_numTotalTruncFails protected

Total number of time truncation error failures.
Definition at line 571 of file BEulerInt.h.
Referenced by BEulerInt::step().

The documentation for this class was generated from the following files:
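The step-selection contract documented for time_step_control() (a power-law rule on the normalized truncation-error factor, with a negative return signalling rejection) can be sketched as follows. The safety factor and the growth/shrink clamps are illustrative assumptions, not values taken from BEulerInt.cpp:

```python
def next_time_step(delta_t_n, time_error_factor, order,
                   safety=0.9, grow_max=2.0, shrink_min=0.2):
    """Pick the next step from a normalized truncation-error factor.

    Standard power law: factor = safety * (1/err)^(1/(order+1)),
    clamped to [shrink_min, grow_max]. Following the documented
    convention, a normalized error factor > 1 means the current step
    is rejected, and the recommended next step is returned negated.
    """
    factor = safety * (1.0 / time_error_factor) ** (1.0 / (order + 1))
    factor = min(grow_max, max(shrink_min, factor))
    dt_next = factor * delta_t_n
    return -dt_next if time_error_factor > 1.0 else dt_next
```

An accepted first-order step with the error exactly at tolerance shrinks by the safety margin; a step whose error factor exceeds one comes back negative, signalling rejection.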
http://math.stackexchange.com/questions/89319/determinants-of-invertible-matrices?answertab=oldest
# Determinants of Invertible Matrices

Let's say we have an invertible matrix $A$, and the entries of both $A$ and $A^{-1}$ are whole numbers. Is it always true that $\det A = \det A^{-1}$?

- What is $A^\prime$? The inverse of $A$? This is usually denoted $A^{-1}$. And for invertible matrices you always have $\det(A) = 1/\det(A^{-1})$. – user20266 Dec 7 '11 at 18:57
– Martin Sleziak Dec 7 '11 at 19:00
I presume $A'$ is the transpose of $A$. In that case the answer is yes. – Robert Israel Dec 7 '11 at 19:02
Uhmm -- yes. But you do not need to assume anything about $A$ in this case (like being invertible, having integer coefficients, ...), do you? – user20266 Dec 7 '11 at 19:09
Yeah, I meant A' as the inverse of A – Some1 Dec 7 '11 at 19:14

If $A$ is invertible, then $1 = \operatorname{det} I = \operatorname{det} (AA^{-1}) = \operatorname{det} A \cdot \operatorname{det} A^{-1}$. If both $A$ and $A^{-1}$ have integer entries, then their determinants are also integers. Now, if the product of two integers is $1$, then they have to be both equal to $1$ or both equal to $-1$. So, yes, $\operatorname{det} A = \operatorname{det} A^{-1}$.

HINT: Multiplicative maps always preserve inverses. If the map $d \ne 0$ is multiplicative, i.e. $d(x\,y) = d(x)\,d(y)$, then applying $d$ to $a\,a^{-1} = 1$ yields $d(a)\,d(a^{-1}) = 1$, so $d(a^{-1}) = d(a)^{-1}$. Note that in a field (or domain) $d(1) = 1$, since applying $d$ to $1^2 = 1$ yields $d(1)^2 = d(1)$, so $d(1) = 1$ or $0$. But if $d(1) = 0$, then $d(x) = d(x \cdot 1) = d(x)\,d(1) = 0$, so $d = 0$, contra hypothesis.
Since $d = \det$ is multiplicative, $d(a^{-1}) = d(a)^{-1}$, so $d(a) = d(a^{-1}) \iff d(a) = d(a)^{-1} \iff d(a)^2 = 1$. But $d(a)$ is invertible, and in your case is an integer, so it's $\pm 1$, and $(\pm 1)^2 = 1$. Therefore we conclude that the result holds true if $d$ takes values in a ring all of whose units are self-inverse, a.k.a. involutory, or involutions.
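A quick numeric sanity check of the accepted argument, using nothing beyond 2×2 arithmetic (the matrix chosen is just one illustrative integer unimodular example):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[2, 1], [1, 1]]        # integer entries, det(A) = 1
A_inv = [[1, -1], [-1, 2]]  # its inverse, also with integer entries

# A * A_inv really is the identity matrix:
prod = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]

# det(A) * det(A_inv) = 1 always; with integer entries both factors are
# integers, hence both +1 or both -1, hence equal:
assert det2(A) * det2(A_inv) == 1
assert det2(A) == det2(A_inv)
```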
https://dsp.stackexchange.com/questions/23327/create-matched-filter-reference-signal-when-scaling-unknown/23328
# Create Matched Filter Reference Signal when Scaling Unknown

I am working on a problem which raised what should be a simple question, but I'm stuck on it. The objective is to detect a shifted and scaled version of a known signal in the presence of Additive White Gaussian Noise (AWGN). Think of a radar system: you know the transmitted signal, and the returned signal is delayed in time and scaled. I understand that a time delay only shifts the location of the matched filter's peak output. No problems there. But if the scaling is unknown, then how does one construct the reference signal? Using a reference signal with an amplitude that doesn't match the amplitude of the signal in the noisy data will mean that a threshold test based on that reference signal will be incorrect.

I'm guessing that in a radar system, the noise power is approximately known, so the signal power is estimated by taking the power of the received signal and subtracting the noise power. That value could then be used to scale the reference signal. Still, that would only be an approximation, and it would still affect the probability of detection and false alarm in some way. Any help is appreciated. Thanks!

• The scaling of the matched filter really doesn't matter. Thanks to linearity, any scaling you apply to the matched filter will just scale its output as well. Most importantly, the signal and noise at the filter output will be scaled equally, so the output SNR is independent of any scaling that you apply to the filter itself. For a radar receiver, it is common to have a "noise riding" detection threshold, where the noise level is continuously estimated and a threshold is set some distance above that level to achieve the desired $P_d$ and $P_{fa}$. – Jason R May 12 '15 at 11:28

• Another option is normalized cross-correlation (the example shown here is 2D, but the same technique applies in 1D).
You can normalize the filter's output so that it is always in the range $[-1, 1]$ based on the measured standard deviation of the input signal and matched filter impulse response. This removes any constant scaling from the picture and allows simpler threshold selection (although the normalization may complicate theoretical calculation of $P_d$ and $P_{fa}$ somewhat). – Jason R May 12 '15 at 11:32

• @JasonR You are correct that linearity simply scales your output; however, if your receiver output is scaled, you should also scale your detection threshold. Thus, the scaling of the received signal does matter with regard to setting your detection threshold. If you use a value for your reference signal's scaling that doesn't match the one in the actual signal, your $P_d$ and $P_{fa}$ will be incorrect. – Stephen Hartzell May 12 '15 at 17:05

• You are correct. Hence my suggestion to use a noise-riding threshold, where the threshold is set relative to the estimated noise level. Thus, the only thing that matters is the signal-to-noise ratio. You select a threshold that gives the desired $P_{fa}$ and at least the desired $P_d$ at a specified minimum SNR. For well-behaved cases (such as detection on the AWGN channel), you can derive closed-form theoretical expressions for each, assuming you accurately estimate the noise level. – Jason R May 12 '15 at 18:51

• JasonR's comment that the matched filter reference signal needs no scaling is spot on (+1). Indeed, another important reason (not often appreciated by theoretically minded folks) for preferring antipodal signaling (instead of, say, on-off keying) in digital communications systems is that the threshold is always zero, regardless of signal strength. – Dilip Sarwate May 13 '15 at 12:56

I think the problem is not as bad as you suspect it is.
I wasn't around at the time, but from what I've read, early radar systems essentially connected the matched filter's output to an oscilloscope, and a trained operator would look at the phosphor and decide, from experience and intuition, when the signal rose above the noise ("the grass"), indicating a reflection.

In practice, you never know in advance the amplitude of the pulse you're looking for. There is attenuation in the channel that can change over time, even on very short time scales. Sometimes you don't know how much power the transmitter is using. There are at least two solutions to this problem. One is to use automatic gain control in the receiver, so that you know in advance the power or the peak amplitude of the pulse going into the matched filter. The second is to use a training sequence or a pilot signal, so that the receiver can calibrate the thresholds appropriately.

To add a bit of detail: to simplify the math, assume the signal $p(t)$ is time-symmetric and has energy 1. The transmitter emits signal $kp(t)$, with $k$ unknown at the receiver. The receiver correlates the received signal with a matched filter with impulse response $lp(t)$. When the transmitted signal is present at the receiver, the matched filter's output is $$r=kl+n,$$ which is a Gaussian random variable with mean $kl$ and variance $N_0/2$. When nothing but noise is present at the receiver, the matched filter's output is just $$r=n,$$ which is a Gaussian random variable with mean 0 and variance $N_0/2$. If the receiver can estimate $l$, then the problem reduces to standard hypothesis testing (see for example these notes).

• In those cases, your probabilities of detection and false alarm aren't going to be quite right, but probably close enough. I can see those solutions working. If no one gives a more mathematical answer then I'll mark yours as accepted. Thanks!
– Stephen Hartzell May 11 '15 at 20:03
• @StephenHartzell I added a bit of detail and linked to some MIT notes which go into this in much more depth. I hope it's useful. – MBaz May 12 '15 at 0:47
• Okay, so your more detailed answer is similar to what I supposed one would do to solve the problem. The receiver can estimate $l$, but that estimate itself won't be perfect. The variance in $l$ should be included in the hypothesis testing. However, it seems like most treatments, such as the MIT one, just assume that $l$ is perfectly known. – Stephen Hartzell May 12 '15 at 17:00
• @StephenHartzell, you may also be interested in this: en.wikipedia.org/wiki/Blind_equalization. – MBaz May 12 '15 at 17:09

I would suggest first estimating the gain/loss by running a window the size of your matched filter and comparing its value to that of the "filter" over the same window; this will also average out the noise impact (by the square root of the window size). Use the scale as the window moves to scale your signal, and then run the matched filter.

• I think that is a perfectly valid approach, but then the question becomes: how does the fact that you are only estimating that signal value affect your probability of detection and false alarm? The reference signal is then not exactly correct. – Stephen Hartzell May 12 '15 at 17:09
• In implementing a DSP algorithm, the key questions are $P_d$ and $P_{fa}$, as you are asking. Under the proper assumptions regarding the signal behavior, this approach should provide quite acceptable results. Understanding the specifics of the results requires much more involved work that is somewhat consultant work :) – Moti May 13 '15 at 15:31
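The scale-invariance point made in the comments is easy to verify numerically. A minimal sketch (the sinusoidal template, the noise model, and the scale factors are illustrative assumptions, not anything from the thread):

```python
import math
import random

def mf_output(received, filt):
    """Correlate a received block against the filter template (one lag)."""
    return sum(r * h for r, h in zip(received, filt))

random.seed(0)
template = [math.sin(0.3 * n) for n in range(64)]
k = 2.5                                    # unknown channel gain
noise = [random.gauss(0.0, 0.1) for _ in range(64)]
received = [k * t + w for t, w in zip(template, noise)]

out_signal = mf_output(received, template)
out_noise = mf_output(noise, template)

# Scaling the filter by any constant scales BOTH outputs by that same
# constant, so the SNR at the filter output is unchanged:
c = 10.0
scaled = [c * h for h in template]
assert abs(mf_output(received, scaled) - c * out_signal) < 1e-9
assert abs(mf_output(noise, scaled) - c * out_noise) < 1e-9
```

This is exactly why a noise-riding threshold, set relative to an estimate of the noise level rather than in absolute units, sidesteps the unknown gain $k$.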
https://www.biyanicolleges.org/resolvind-power/
The resolving power of an instrument is a measure of how well it can distinguish between two (apparently) very close sources of light. To illustrate this we will consider an astronomical telescope.

Even when using a telescope of high magnification, the image of a star should be a very small point of light. This is because even the closest stars are very far away. In practice, the image is not a point because the light from the star is diffracted as it enters the telescope. The effect is exaggerated in the following diagrams. If two stars are far from each other, it is still obvious that they are two separate light sources. However, if they are (apparently) close together, the diffraction causes their images to overlap.

Rayleigh suggested that the images should be considered as just resolved if the central maximum of one image coincides with the first minimum of the other image, as shown in the next diagram. This idea is now called the Rayleigh criterion, and (for a circular aperture) it can be shown that it corresponds to the light sources having an angular separation* θ given by

θ = 1.22 λ / b

where
λ = the wavelength of the light
b = the diameter of the objective lens of the telescope

*The two stars in the diagram below have an angular separation of θ from the point of view of the observer. Notice that they are not, in fact, very close to each other.

Examples of Resolving Power

A standard benchmark for the resolvance of a grating or other spectroscopic instrument is the resolution of the sodium doublet. The two sodium "D-lines" are at 589.00 nm and 589.59 nm. Resolving them corresponds to a resolvance R = λ/Δλ = 589.00/0.59 ≈ 1000.

Another standard example is the resolution of the hydrogen and deuterium lines, often done with a Fabry-Perot interferometer. The red lines of hydrogen and deuterium are at 656.3 nm and 656.1 nm, respectively. This requires a resolvance of R = λ/Δλ ≈ 656.2/0.2 ≈ 3300.

Kanta Saini
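The numbers above drop straight out of the two standard definitions. A small sketch (the helper names are ours, and the 550 nm / 0.1 m telescope values are illustrative, not from the text):

```python
def rayleigh_limit(wavelength, aperture):
    """Minimum resolvable angular separation (radians): theta = 1.22*lambda/b."""
    return 1.22 * wavelength / aperture

def resolvance(lam, delta_lam):
    """Chromatic resolving power R = lambda / delta_lambda."""
    return lam / delta_lam

# An illustrative 0.1 m objective at 550 nm resolves about 6.7 microradians:
theta = rayleigh_limit(550e-9, 0.1)
assert 6.6e-6 < theta < 6.8e-6

# Sodium doublet (589.00 nm and 589.59 nm): R around 1000
R_na = resolvance(589.0, 589.59 - 589.00)
# Hydrogen/deuterium red lines (656.3 nm and 656.1 nm): R around 3300
R_hd = resolvance(656.2, 656.3 - 656.1)
assert 990 < R_na < 1010 and 3200 < R_hd < 3400
```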
http://worldwidescience.org/topicpages/h/haiti+quake+slices.html
1 Digital Repository Infrastructure Vision for European Research (DRIVER) Without a model, it is impossible for a geophysicist to study the possibility of forecasting earth quakes. We will define a quantity, the event-degree, in the paper. This quantity plays an important role in the model of quakes forecasting. In order to make a simple model, we make a hypothesis of earth quakes. The hypothesis is: "(i) There are two kinds of earth quakes, one is the triggered breaking (earth quake), the other is spontaneous breaking (earth quake). (ii) Most maj... Tsai, Yeong-shyeong 2008-01-01

2 Haiti - OECD 6-July-2010 English ...the Principles for Good International Engagement in Fragile States: Country Case Study Haiti This report reviews the implementation in Haiti of the Principles for ... Also available: Building a Coherent Approach to Evaluating the Haiti Earthquake Response 15-April-2010 English. There will be strong pressure to account for the results of the massive aid effort that is currently being delivered and ...

3 Digital Repository Infrastructure Vision for European Research (DRIVER) In the months before the January earthquake, Haiti and its criminal justice institutions were the subject of an unprecedented effort by two UN agencies to measure the state of the Rule of Law. Drawing on the results of that pre-quake assessment as well as on post-quake assessments of the justice sector, this paper raises four questions that should guide recovery and further development of the police, courts, and prisons in Haiti: questions that focus attention on the meaning of justice secto... Stone, Christopher 2010-01-01

4 Science.gov (United States) When quakes strike urban areas, the toll in life and property can be great.
Luckily, scientists have been working to uncover safer methods of construction and new structural techniques that "mitigate" the effects of earthquakes. In this simple simulation, you choose the ground on which to erect your building and which quake-proofing technology to employ. You can then subject your building to three levels of intensity and see how it stands up. 2010-01-01 5 CERN Document Server Without a model, it is impossible for a geophysicist to study the possibility of forecasting earth quakes. In order to make a simple model, we make a hypothesis of earth quakes. The hypothesis is: (i) There are two kinds of earth quakes; one is the triggered breaking (earth quake), the other is spontaneous breaking (earth quake). (ii) Most major quakes in continental plates (Eurasian Plate, North America Plate, South America Plate, Africa Plate and Australia Plate) are triggered breaking. (iii) These triggered quakes are triggered by the movements of high pressure centers and low pressure centers of the atmosphere on continental plates. (iv) How can the movements of the high pressure centers trigger a quake? It depends on the extent of the high pressure center and the speed of the movement. Here, we stress the high pressure center instead of the low pressure center because the effect is dominated by the high pressure center mostly. Of course, the boundary of the plates must have stored enough energy to have quakes, that is, near t... Tsai, Yeong-Shyeong 2008-01-01 6 ... Haiti - norad.no Norway has given over 800 million NOK for the rebuilding of Haiti after the catastrophic earthquake. Published 03/... Updated 19/09/2013. By sector, by partner, aid trends: Bilateral assistance to Haiti 2012: 129.3 million kroner.
7 Digital Repository Infrastructure Vision for European Research (DRIVER) This business plan details the operating, marketing, financial, competitive, and technological landscapes of QuakeAware. QuakeAware is a website and iPhone/Android mobile phone application that helps citizens prepare for and react to a local earthquake. Presently, QuakeAware faces the challenges of becoming a sustainable enterprise and selecting the optimal strategic direction and operating mode for its future growth. This business plan identifies and assesses the options available to Quake... Cole, Ryan Thomas; Paor, Donal Richard 2011-01-01 8 Digital Repository Infrastructure Vision for European Research (DRIVER) Healthcare practitioners from around the world responded almost immediately in the aftermath of the 2010 earthquake in Haiti. This article reports on an orthopedic trauma team in Haiti and its efforts to provide surgery without general anesthesia. Osteen, Kristie D. 2011-01-01 9 ... Venezuela Cancels Haiti's Oil Debt - Climate & Capitalism. Posted on January 26, 2010. Venezuela ...announced Monday that he would write off the undisclosed sum Haiti owes Venezuela for oil as part of the ALBA bloc's plans to help ... Haiti has no debt with Venezuela, just the opposite: Venezuela has a historical debt with that nation, with that people ... 10 Directory of Open Access Journals (Sweden) Full Text Available When St. Domingue declared its independence it was renamed Haiti, an Amerindian name.
The author explores what the founding fathers of Haitian independence might have known about the Amerindian past in the Caribbean and in South America. He also raises questions about ethnicity and identity in 19th-c. Haiti. David Geggus 1997-01-01 11 CERN Document Server In juxtaposition with the standard model of the rotation powered pulsar, the model of a vibration powered magnetar undergoing quake-induced torsional Alfvén vibrations in its own ultra strong magnetic field experiencing decay is considered. The presented line of argument shows that the gradual decrease of frequencies (lengthening of periods) of long-periodic pulsed radiation detected from a set of X-ray sources can be attributed to magnetic-field-induced energy conversion from seismic vibrations to magneto-dipole radiation of a quaking magnetar. Bastrukov, S; Xu, R X; Molodtsova, I 2011-01-01 12 Directory of Open Access Journals (Sweden) Full Text Available The 2010 earthquake in Haiti ushered in a new era for the role and power of technology and communication systems in disaster response, especially for how local responders used them. Imogen Wall 2011-10-01 13 Digital Repository Infrastructure Vision for European Research (DRIVER) Haiti, one of the world's five poorest nations, gets international attention because of the number of refugees who leave by boat in search of a better future. The 80,000 inhabitants of Île de la Gonâve are neglected, even in Haiti--there is no government medical post, and facilities in the health posts run by missions are minimal. Typhoid and cholera epidemics threaten the island. Médecins Sans Frontières plans to send staff and supplies and train local health workers. Veeken, H. 1993-01-01 14 Digital Repository Infrastructure Vision for European Research (DRIVER) Who owns the top level Internet domains? How are decisions made over their assignments?
This paper explores the recent history of the top level domain HT for Haiti, as an example of the need to re-examine procedures and processes developed by the Internet Assigned Numbers Authority and other organizations and individuals. Quarterman, John S. 1997-01-01 15 CERN Multimedia YOU ARE WONDERFUL, THANK YOU! 58 750 CHF collected for Haiti! Following the appeal launched on 15 February, the CERN Management and Staff Association would like to express their heartfelt gratitude and thank the whole of the CERN community for its generosity towards the victims of the Haiti earthquake. This is a record, an unprecedented show of solidarity at CERN, equal to the immense needs following this catastrophe. Thank you on behalf of the Haitians; they will most certainly need it in the coming months. The donations will be shared out among various institutions and associations in both Host States, in accordance with the established practice in the event of a catastrophe hitting a non-Member State. The size and activities of each of them have been taken into account. After studying the various requests for aid, the beneficiaries are now known and will receive: - IFRC - International Federation of Red Cross and Red&... 2010-01-01 16 Directory of Open Access Journals (Sweden) Full Text Available This paper examines the ionospheric anomalies around the time of a strong earthquake (M = 7.0) which occurred in the Haiti region (18.457 N, 72.533 W) on 12 January 2010. DEMETER satellite data have been used to study the plasma parameter variations during the Haiti earthquake. One day (11 January 2010) before the earthquake there is a significant enhancement of electron density and electron temperature near the epicenter. A decrease of electron temperature is observed a few days after the earthquake. Anomalous plasma parameter variations are detected both in daytime and nighttime before the quake.
Statistical processing of the DEMETER data demonstrates that satellite data can play an important role for the study of precursory phenomena associated with earthquakes. S. Sarkar 2012-03-01 17 Digital Repository Infrastructure Vision for European Research (DRIVER) In juxtaposition with the standard model of rotation powered pulsar, the model of vibration powered magnetar undergoing quake-induced torsional Alfven vibrations in its own ultra strong magnetic field experiencing decay is considered. The presented line of argument suggests that gradual decrease of frequencies (lengthening of periods) of long-periodic pulsed radiation detected from a set of X-ray sources can be attributed to magnetic-field-decay induced energy conversion fro... Bastrukov, S.; Yu, J. W.; Xu, R. X.; Molodtsova, I. 2011-01-01 18 Science.gov (United States) ...Statement for the Quaking Aspen Wind Energy Project, Wyoming, and Notice...Statement (EIS) for the Quaking Aspen Wind Energy Project (Quaking Aspen). By...methods: Email: Quaking_Aspen_Wind_Energy_WY@blm.gov; or... 2011-11-08 19 Science.gov (United States) Palliative care in itself has many challenges; these challenges are compounded exponentially when placed in the setting of a mass casualty event, such as the 2010 Haiti earthquake. Haiti itself was an austere environment with very little infrastructure before the disaster. US surgeons, intensivists, and nurses worked hand in hand with other international providers and Haitian volunteers to provide the best care for the many. Improvisation and teamwork as well as respect for the Haitian caregivers were crucial to their successes. Sisyphean trials lie ahead. Haiti and its people must not be forgotten. PMID:22405433 Huffman, Joan L 2012-03-01 20 Science.gov (United States) This radio broadcast reports on the discovery of a new kind of earthquake that is much deeper and longer lasting than other kinds of quakes. 
These long, super-deep tremors originate at a depth of 15-20 miles, below the crust in the upper mantle of Earth, and last 10-20 minutes. The broadcast reports on their occurrence in California and how research is being conducted to determine their relationship to other seismic activity along the San Andreas Fault. The clip is 4 minutes and 48 seconds in length. 21 CERN Document Server Cessy, 7 September 2010 Subject: Thanks for the evening of solidarity in favour of the victims of the Haiti earthquake organised by the CERN Fitness Club. The "HAÏTI-ECOLES" Association wishes to thank everyone who took part in the event. The donation of 2080 CHF paid into the Association's account will be transferred in its entirety to our partners in Haiti who are in charge of running the Verrettes and La Chapelle schools. They are responsible for meeting the needs of families affected by the earthquake: buying food, helping to pay the rent on small houses, payment of school fees when school starts again in September. The number of children enrolled in the schools has risen from 2300 to 2500 following the huge influx of families who fled Port au Prince in the aftermath of the earthquake. The Association's principal role is helping with the schooling of disadvantaged children in Verrettes and La Chapelle and keeping the school canteens running to make sure that the children ... CERN Bulletin 2010-01-01 22 Directory of Open Access Journals (Sweden) Full Text Available This essay seeks to share critical reflection based on time spent in Haiti before the advent of the earthquake, where the author participated in a humanitarian medical aid mission.
After experiencing Haiti face to face, we came to question the superficial image that the public has about the country, disregarding both the state of human degradation present in the nation - partially revealed to the world after the earthquake - and the political forces and international economic interests that provoke and sustain this reality. The paper also reflects on the mystification processes in which international news agencies participate and the political evil that historically has kept this population in a state of neglect and exploitation as well as in a perpetual state of extreme psychic and human suffering far beyond the notion of discontentment. The mechanisms that solely blame the poor nations for their ills by hiding the real violators of human and economic rights are also pointed out from a Haitian point of view. Christina Sutter 2010-01-01 23 Directory of Open Access Journals (Sweden) Full Text Available Objective. To determine if there is an unrecognized problem of congenital rubella syndrome (CRS) in Haiti, a country without a national rubella immunization program. Methods. During March 2001 and June 2001, screening physicals were conducted on approximately 80 orphans at three orphanages in Haiti that accept disabled children. Children were classified as probable CRS cases based on established clinical criteria. Photo documentation of findings was obtained whenever possible. Results. Six children met the criteria for probable CRS. Using data from surrounding Caribbean countries and from the United States of America prior to rubella immunization, we estimated that there are between 163 and 440 new cases of CRS per year in Haiti. Conclusions. CRS exists in Haiti, but its presence is generally unrecognized. A national rubella immunization policy should be considered. Golden Nancy 2002-01-01 24 Index Scriptorium Estoniae Both the USA and supernatural forces have been blamed for the Haiti earthquake. Responding to Abdul Turay's article "Write off the Haitians' debt!", the author argues that rescue efforts in Haiti founder not on the government's lack of money but on the nonexistent infrastructure. Lotman, Mihhail, 1952- 2010-01-01 25 CERN Document Server Dear Colleagues, Following the devastating earthquake that hit Haiti on 12 January 2010, the CERN Management and the Staff Association are organizing a collection to help the victims. The money collected will be transferred to associations or bodies guaranteeing proper use of the funds, such as the Association Haïti Écoles based in Cessy, which our Long Term Collections supported for several years. From today you can pay your donations into a special UBS account, indicating Séisme Haïti as the reason for payment. SWIFT: UBSWCHZH12B IBAN: CH85 0027 9279 HU10 6832 1 Account Holder: Association du personnel du CERN We are counting on the generosity of the CERN community to support the Haitian people at this very difficult time. Thank you on their behalf. Rolf Heuer Director-General Gianni Deroma President of the Staff Association Association du personnel 2010-01-01 26 Science.gov (United States) Early on the morning of June 28, 1992, millions of people in southern California were awakened by the largest earthquake (Ms 7.5, Mw 7.4) in the western U.S. in the past 40 years. The quake initiated near the town of Landers, Calif., at 11:57 (GMT) and ruptured to the north and then the northwest along a 70-km stretch of the Mojave Shear Zone. Fortunately, the strongest shaking occurred in uninhabited regions of the Mojave desert, but one child was killed in Yucca Valley and 400 people were injured in the surrounding area. The communities of Landers, Yucca Valley, and Joshua Tree in San Bernardino County sustained significant ($100 million) damage to buildings and roads. Damage to water and power lines also caused problems in many of the desert areas. Mori, J.; Hudnut, K.; Jones, L.; Hauksson, E.; Hutton, K. 27 Science.gov (United States) ...
Japan Quake Shows How Stress Alters the Brain. Victims ... people who experienced the devastating 2011 earthquake in Japan shows that although traumatic events can shrink parts ... 28 Science.gov (United States) The aerodynamic accelerations measured by GOCE are used to calculate air density variations and air velocity estimates along the GOCE orbit track. The detection of infrasonic waves generated by seismic surface waves and gravity waves generated by tsunamis are presented for earthquakes and tsunamis generated by the great Tohoku quake (11/03/2011). For the seismic/infrasonic waves, a wave propagation modelling is presented and synthetic data are compared to GOCE measurements. The travel time and amplitude discrepancies are discussed in terms of lateral velocity variations in the solid Earth and the atmosphere. For the tsunami/gravity waves, a plane wave analysis is performed and relations between vertical velocity, cross-track velocity and density variations are deduced. From theoretical relations between air density, and vertical and horizontal velocities inside the gravity wave, we demonstrate that the measured perturbations are consistent with a gravity wave generated by the tsunami, and provide a way to estimate the propagation azimuth of the gravity wave. By using these relations, an indicator of gravity wave presence is constructed. It will make it possible to scan the GOCE data set to search for gravity wave crossings. This study demonstrates that very low Earth orbit spacecraft with high-resolution accelerometers are able to detect atmospheric waves generated by the tectonic activity. Such spacecraft may supply additional data to tsunami alert systems in order to validate some tsunami alerts.
Garcia, Raphael F.; Doornbos, Eelco; Bruinsma, Sean; Hébert, Hélène 2014-05-01 29 Directory of Open Access Journals (Sweden) Full Text Available The ghost of the colonial past, together with the opportunism and pragmatism of international humanitarian aid, produces uncoordinated actions in the process of rebuilding Haiti after the earthquake. Thiago Gehre Galvão 2010-03-01 30 International Nuclear Information System (INIS) Water resources in Haiti need more rational management. In fact, the availability of water in Haiti can be described as follows: the country receives as annual precipitation about 40 billion m3 of water. However, 70% of this water is lost by evapotranspiration and of the remaining fraction, considered as a renewable resource, about 20% drains through rivers and discharges into the sea. The remaining 10% infiltrates into local aquifers. In Haiti water is not always available in the place where it is needed, depending on the precipitation regime, geography, geology, vegetation, etc. In fact, most difficulties lie in the regulation, protection and mobilization of the available resources. Since each economic sector in Haiti has specific needs, water resources management becomes a very important issue to provide access to water of sufficient quality and quantity to the population. This point is also relevant for adequate preservation of natural ecosystems and other uses. In Haiti there are many areas which contain aquifers: Plaine de l'Arbre, Cayes, Léogâne, Gonaïves and Plaine du Cul-de-Sac. The last one is heavily exploited due to its geographical location. In fact, since 1980, many studies, using isotope hydrology tools, have been carried out on this aquifer. Almost all studies conducted in the Plaine du Cul-de-Sac showed the same conclusion: the aquifer system is overexploited. Some recommendations have been made, but the anarchical exploitation of this aquifer still continues. Many years after these studies were conducted, the situation has not changed.
In 2001, a project dealing with the integrated management of the Plaine du Cul-de-Sac aquifer was initiated with the cooperation of the IAEA. Despite the difficulties, it is considered that this is the best way to solve this water resources problem. (author) 2007-12-01 31 Index Scriptorium Estoniae The cholera outbreak that erupted in Haiti has already claimed 253 lives and the number of infected has passed 3,100; because of the unhygienic conditions there, the bacterium may bring about a second humanitarian catastrophe after the 12 January earthquake. With a map. Vosman, Hendrik 2010-01-01 32 CERN Document Server Acceleration and sound measurements during granular discharge from silos are used to show that silo music is a sound resonance produced by silo quake. The latter is produced by stick-slip friction between the wall and the granular material in tall narrow silos. For the discharge rates studied, the occurrence and frequency of flow pulsations are determined primarily by the surface properties of the granular material and the silo wall. The measurements show that the pulsating motion of the granular material drives the oscillatory motion of the silo and the occurrence of silo quake does not require a resonant interaction between the silo and the granular material. Muite, B K; Rao, K K; Sundaresan, S; Muite, Benson K.; Quinn, Shandon F.; Sundaresan, Sankaran 2003-01-01 33 Science.gov (United States) Successful and sustained efforts have been made to curtail the major cholera epidemic that occurred in Haiti in 2010 with the promotion of hygiene and sanitation measures, training of health personnel and establishment of treatment centers nationwide. Oral cholera vaccine (OCV) was introduced by the Haitian Ministry of Health as a pilot project in urban and rural areas. This paper reports the successful OCV pilot project led by GHESKIO Centers in the urban slums of Port-au-Prince where 52,357 persons received dose 1 and 90.8% received dose 2; estimated coverage of the at-risk community was 75%.
This pilot study demonstrated the effort, community mobilization, and organizational capacity necessary to achieve these results in a challenging setting. The OCV intervention paved the way for the recent launching of a national cholera vaccination program integrated into an ambitious, long-term and comprehensive plan to address Haiti's critical need in water security and sanitation. PMID:24106194 Rouzier, Vanessa; Severe, Karine; Juste, Marc Antoine Jean; Peck, Mireille; Perodin, Christian; Severe, Patrice; Deschamps, Marie Marcelle; Verdier, Rose Irene; Prince, Sabine; Francois, Jeannot; Cadet, Jean Ronald; Guillaume, Florence D; Wright, Peter F; Pape, Jean W 2013-10-01 34 International Nuclear Information System (INIS) Water in Haiti needs rational management. The availability of water in Haiti can be summarized as follows: the country receives on average 40 billion cubic meters of water. However, 70% of this water is lost by evapotranspiration; of the remainder, which represents the renewable resource, 20% drains along the surface through rivers to the sea and 10% infiltrates into the aquifers. In Haiti, water is not always available where it is used; availability varies from one area to another depending on factors such as precipitation, geology and vegetation. Some difficulties lie in the regulation, protection and mobilization of this resource. Given the different needs of the user sectors, water resources management is necessary to secure water of sufficient quality and quantity for the population, for the preservation of natural ecosystems, and for other uses. Haiti has many plains that contain aquifers, among them Plaine de l'Arbre, Cayes, Léogâne, Gonaïves and Plaine du Cul-de-Sac. The last is the most heavily exploited because of its geographical position. In fact, since 1980 many isotope hydrology studies have been carried out on these.
For the Plaine du Cul-de-Sac, all the studies carried out show almost the same result: the aquifer has reached its exploitation limit. Recommendations have been made, yet the anarchical exploitation of this aquifer continues; many years have passed and nothing has changed. In 2001, with the cooperation of the IAEA, a project for the integrated management of the Plaine du Cul-de-Sac aquifer was started. There have been difficulties, but it is one of the best ways to solve this problem. (author) 2007-05-21 35 ... Could Help Haiti - Climate & Capitalism. Posted on February 5, 2010. How the U.S. Could Help Haiti. Two measures would immediately help Haitians: ... should do to help Haiti, Haitian author Robert Fatton replied: The international community must shift its priorities and concentrate on helping ... could immediately take that would greatly assist Haitians in Haiti and in the diaspora in this process of sustainable, democratic rebuilding: ... 36 Digital Repository Infrastructure Vision for European Research (DRIVER) Acceleration and sound measurements during granular discharge from silos are used to show that silo music is a sound resonance produced by silo quake. In tall and narrow silos, the latter is produced by stick-slip friction between the wall and the granular material. For the discharge rates studied, the occurrence of flow pulsations is determined primarily by the surface properties of the granular material and the silo wall. The measurements show that the pulsating motion of the granular materia... 2004-01-01 37 Science.gov (United States) The need for surgical care in Haiti remains vast despite the enormous relief efforts after the earthquake in 2010. As the poorest country in the Western hemisphere, Haiti lacks the necessary infrastructure to provide surgical care to its inhabitants.
In light of this, a multidisciplinary approach led by Partners In Health and Dartmouth-Hitchcock Medical Center is improving the access to surgical care and offering treatment of a broad spectrum of pathology. This article discusses how postearthquake Haiti partnerships involving academic institutions can alleviate the surgical burden of disease and, in the process, serve as a profound educational experience for the academic community. The lessons learned from Haiti prove applicable in other resource-constrained settings and invaluable for the next generation of surgeons. PMID:23851780 Patel, Anup; Pfaff, Miles; Clune, James E; Mirensky, Tamar; Katona, Lindsay B; Geiling, James; Rosen, Joseph 2013-07-01 38 Digital Repository Infrastructure Vision for European Research (DRIVER) Objectives: In Haiti, AIDS has become the leading cause of death in sexually active adults. Increasingly, AIDS has become a disease of women and children. Previous bibliometric studies have shown the emergence of Haiti as a leading country in the production of AIDS literature in the Latin American and Caribbean regions. No information exists, however, regarding the type of publications produced, the collaboration patterns used, or the subject content analysis of this production. The purpose o... 2000-01-01 39 Digital Repository Infrastructure Vision for European Research (DRIVER) In this article, I seek to show how states of insecurity provoked by ongoing social, economic, and political ruptures in Haiti can disorder individual subjectivity and generate the flight of individuals seeking asylum within and across borders. Nongovernmental actors working in Haiti and with Haitians in the diaspora frequently managed the long-term psychosocial effects of insecurity. Their interventions can range from repressive to compassionate and influence the formation of identity and th... James, Erica C. 
2011-01-01 40 Digital Repository Infrastructure Vision for European Research (DRIVER) Abstract Background We determined direct medical costs, overhead costs, societal costs, and personnel requirements for the provision of antiretroviral therapy (ART) to patients with AIDS in Haiti. Methods We examined data from 218 treatment-naïve adults who were consecutively initiated on ART at the GHESKIO Center in Port-au-Prince, Haiti between December 23, 2003 and May 20, 2004 and calculated costs and personnel requirements for the first year of ART. Koenig Serena P; Riviere Cynthia; Leger Paul; Severe Patrice; Atwood Sidney; Fitzgerald Daniel W; Pape Jean W; Schackman Bruce R 2008-01-01 41 Science.gov (United States) QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development. The current release (version 1.2) is based on a public Request for Comments process that included contributions from ETH, GFZ, USC, SCEC, USGS, IRIS DMC, EMSC, ORFEUS, GNS, ZAMG, BRGM, Nanometrics, and ISTI. QuakeML has mainly been funded through the EC FP6 infrastructure project NERIES, in which it was endorsed as the preferred data exchange format. Currently, QuakeML services are being installed at several institutions around the globe, including EMSC, ORFEUS, ETH, Géoazur (Europe), NEIC, ANSS, SCEC/SCSN (USA), and GNS Science (New Zealand). Some of these institutions already provide QuakeML earthquake catalog web services. Several implementations of the QuakeML data model have been made. QuakePy, an open-source Python-based seismicity analysis toolkit using the QuakeML data model, is being developed at ETH. QuakePy is part of the software stack used in the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center installations, developed by SCEC. Furthermore, the QuakeML data model is part of the SeisComP3 package from GFZ Potsdam.
QuakeML is designed as an umbrella schema under which several sub-packages are collected. The present scope of QuakeML 1.2 covers a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Work on additional packages (macroseismic information, seismic inventory, and resource metadata) has been started, but is at an early stage. Contributions from the community that help to widen the thematic coverage of QuakeML are highly welcome. Online resources: http://www.quakeml.org, http://www.quakepy.org Euchner, Fabian; Schorlemmer, Danijel; Kästli, Philipp; QuakeML Working Group 2010-05-01 42 Science.gov (United States) Student-based teaching evaluations are an integral component of institutions of higher education. Previous work on student-based teaching evaluations suggests that evaluations of instructors based upon "thin slice" 30-s video clips of them in the classroom correlate strongly with their end-of-term "thick slice" student evaluations. This study's ... Tom, Gail; Tong, Stephanie Tom; Hesse, Charles 2010-01-01 43 Energy Technology Data Exchange (ETDEWEB) The energy situation of Haiti is reviewed on the basis of selected data. This includes statistics on the country's national and international energy policy, energy sources, and electric power generation. (UA) 1994-01-01 44 Digital Repository Infrastructure Vision for European Research (DRIVER) This paper focuses on assessing the role of the United Nations Stabilization Mission (MINUSTAH) in providing stability, security and respect for human rights and the rule of law in Haiti. The proposition is that the efforts have been ineffective, and the paper goes on to ask whether such an outsider-initiated intervention really advances political order and stability. The study also attempts to illustrate Haitian society's perception of the peacekeeping operations in Haiti thus far. Th...
Cei?de, Edwin Luc 2008-01-01 45 CERN Multimedia For n > 1, if the Seifert form of a knotted (2n-1)-sphere K in S^{2n+1} has a metabolizer, then the knot is slice. Casson and Gordon proved that this is false in dimension three (n = 1). However, in the three-dimensional case it is true that if the metabolizer has a basis represented by a strongly slice link then K is slice. The question has been asked as to whether it is sufficient that each basis element is represented by a slice knot to assure that K is slice. For genus one knots this is of course true; here we present a genus two counterexample. Livingston, C 2000-01-01 46 Digital Repository Infrastructure Vision for European Research (DRIVER) In the wake of the 2010 earthquake in Haiti, medical relief organizations and individual practitioners mobilized to provide assistance. Here, an emergency medicine physician who worked with a Louisiana-based team in the mountains in one of the hardest hit areas relates his experiences. Vinroot, Richard 2011-01-01 47 Digital Repository Infrastructure Vision for European Research (DRIVER) Altered El Tor Vibrio cholerae O1, with the classical cholera toxin B gene, was isolated from 16 patients with severe diarrhea at St. Marc's Hospital, Artibonite, Haiti, <3 weeks after onset of the current cholera epidemic. Variable-number tandem-repeat typing of 187 isolates showed minimal diversity, consistent with a point source for the epidemic. Ali, Afsar; Chen, Yuansha; Johnson, Judith A.; Redden, Edsel; Mayette, Yfto; Rashid, Mohammed H.; Stine, O. Colin; Morris, J. Glenn 2011-01-01 48 CERN Document Server The increasing rate of earth quakes in the Netherlands is attributed to the enhanced depletion of Groningen natural gas, currently at a rate of 50 billion m3 per year. Here, we report on an exponential growth in the earth quake event rate, based on a surprisingly accurate fit to publicly available KNMI data.
The data show a doubling in the rate every 6.2 years, leading to a rate of one event per day in 2025. A trend in the magnitude of the quakes is indiscernible. van Putten, Maurice H P M 2013-01-01 49 Digital Repository Infrastructure Vision for European Research (DRIVER) Touching on the role and destiny of Haiti in the Americas, Haiti Unbound engages with long-standing issues of imperialism and resistance culture in the transatlantic world. Glover's timely project emphatically articulates Haiti's regional and global centrality, combining vital 'big picture' reflections on the field of postcolonial studies with elegant close-reading-based analyses of the philosophical perspective and creative practice of a distinctively Haitian literary phenomenon. Providing i... Glover, Kaiama L. 2011-01-01 50 Digital Repository Infrastructure Vision for European Research (DRIVER) In an effort to design a simulation environment that is more similar to that of neurophysiology, we introduce a virtual slice setup in the NEURON simulator. The virtual slice setup runs continuously and permits parameter changes including changes to synaptic weights and time course and to intrinsic cell properties. The virtual slice setup permits shocks to be applied at chosen locations and activity to be sampled intra- or extracellularly from chosen locations. By default, a summed population... Lytton, William W.; Neymotin, Samuel A.; Hines, Michael L. 2008-01-01 51 Digital Repository Infrastructure Vision for European Research (DRIVER) We discuss the new problems emerging in charged beam transport for SASE FEL dynamics. The optimization of the magnetic transport system for future devices requires new concepts associated with the slice emittance and the slice phase space distribution. 
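The Groningen event-rate abstract above reports a rate that doubles every 6.2 years and reaches one event per day in 2025. A minimal sketch of that doubling-time extrapolation (the reference point anchoring the curve at one event per day in 2025 is taken from the abstract; the model form is the standard exponential, not code from the paper):

```python
DOUBLING_YEARS = 6.2          # doubling time reported in the abstract
T_REF, R_REF = 2025.0, 1.0    # abstract: rate reaches one event/day in 2025

def event_rate(year):
    """Exponential event-rate model r(t) = r_ref * 2**((t - t_ref) / T_d)."""
    return R_REF * 2.0 ** ((year - T_REF) / DOUBLING_YEARS)

# One doubling time before 2025, the model rate is half an event per day:
print(round(event_rate(T_REF - DOUBLING_YEARS), 6))  # → 0.5
```

The same function runs backwards as well as forwards, which is how a fitted doubling time lets a short catalog be extrapolated either way in time.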
We study the problem of electron beam slice matching and guiding in transport devices for SASE FEL emission, discussing matching criteria and how the associated design of the electron transport line may affect the FEL output pe... Dattoli, G.; Sabia, E.; Del Franco, M.; Petralia, A. 2011-01-01 52 Index Scriptorium Estoniae The head of the UN peacekeeping mission in Haiti, Urano Teixeira da Matta Bacellar, committed suicide. The UN peacekeeping mission MINUSTAH has been in Haiti since 2005, after a four-year absence, while the violence there continues. Suurkask, Heiki, 1972- 2006-01-01 53 Science.gov (United States) QuakeML is an XML-based data exchange format for seismology that is under development. Current collaborators are from ETH, GFZ, USC, USGS, IRIS DMC, EMSC, ORFEUS, and ISTI. QuakeML development was motivated by the lack of a widely accepted and well-documented data format that is applicable to a broad range of fields in seismology. The development team brings together expertise from communities dealing with analysis and creation of earthquake catalogs, distribution of seismic bulletins, and real-time processing of seismic data. Efforts to merge QuakeML with existing XML dialects are under way. The first release of QuakeML will cover a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Further extensions are in progress or planned, e.g., for macroseismic information, location probability density functions, slip distributions, and ground motion information. The QuakeML language definition is supplemented by a concept to provide resource metadata and facilitate metadata exchange between distributed data providers. For that purpose, we introduce unique, location-independent identifiers of seismological resources. As an application of QuakeML, ETH Zurich currently develops a Python-based seismicity analysis toolkit as a contribution to CSEP (Collaboratory for the Study of Earthquake Predictability).
We follow a collaborative and transparent development approach along the lines of the procedures of the World Wide Web Consortium (W3C). QuakeML currently is in working draft status. The standard description will be subjected to a public Request for Comments (RFC) process and eventually reach the status of a recommendation. QuakeML can be found at http://www.quakeml.org. Euchner, F.; Schorlemmer, D.; Becker, J.; Heinloo, A.; Kästli, P.; Saul, J.; Weber, B.; QuakeML Working Group 2007-12-01 54 Directory of Open Access Journals (Sweden) Full Text Available Abstract Background We determined direct medical costs, overhead costs, societal costs, and personnel requirements for the provision of antiretroviral therapy (ART) to patients with AIDS in Haiti. Methods We examined data from 218 treatment-naïve adults who were consecutively initiated on ART at the GHESKIO Center in Port-au-Prince, Haiti between December 23, 2003 and May 20, 2004 and calculated costs and personnel requirements for the first year of ART. Results The mean total cost of treatment per patient was $US 982, including $US 846 in direct costs, $US 114 for overhead, and $US 22 for societal costs. The direct cost per patient included generic ART medications $US 355, lab tests $US 130, nutrition $US 117, hospitalizations $US 62, pre-ART evaluation $US 58, labor $US 51, non-ART medications $US 39, outside referrals $US 31, and telephone cards for patient retention $US 3. Higher treatment costs were associated with hospitalization, change in ART regimen, TB treatment, and survival for one year. We estimate that 1.5 doctors and 2.5 nurses are required to treat 1000 patients in the first year after initiating ART. Conclusion Initial ART treatment in Haiti costs approximately $US 1,000 per patient per year. With generic first-line antiretroviral drugs, only 36% of the cost is for medications. Patients who change regimens are significantly more expensive to treat, highlighting the need for less-expensive second-line drugs.
There may be sufficient health care personnel to treat all HIV-infected patients in urban areas of Haiti, but not in rural areas. New models of HIV care are needed for rural areas using assistant medical officers and community health workers. Fitzgerald Daniel W 2008-02-01 55 Digital Repository Infrastructure Vision for European Research (DRIVER) Casualties are estimated for the 12 January 2010 earthquake in Haiti using various reports calibrated by observed building damage states from satellite imagery and reconnaissance reports on the ground. By investigating various damage reports, casualty estimates and burial figures for a one-year period from 12 January 2010 until 12 January 2011, there is also strong evidence that the official government figures of 316 000 total dead and missing, reported to have been cause... Daniell, J. E.; Khazai, B.; Wenzel, F. 2013-01-01 56 Digital Repository Infrastructure Vision for European Research (DRIVER) In October 2010, Hôpital Albert Schweitzer Haiti treated some of the first patients with cholera in Haiti. Over the following 10 months, a strategic plan was developed and implemented to improve the management of cases at the hospital level and to address the underlying risk factors at the community level. Ernst, Silvia; Weinrobe, Carolyn; Bien-aime, Charbel; Rawson, Ian 2011-01-01 57 Science.gov (United States) At the onset of research, this paper set out to explore the causes and consequences of Haiti as a failed state and then discuss initiatives and opportunities for further SOUTHCOM engagement. Upon concluding research, it appears that what Haiti requires is... J. L. Jarnac 2011-01-01 58 Science.gov (United States) Power-law behaviors in brain activity in healthy animals, in the form of neuronal avalanches, potentially benefit the computational activities of the brain, including information storage, transmission and processing.
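The cost figures in the GHESKIO treatment-cost abstract above can be cross-checked with a few lines of arithmetic (the line items are taken verbatim from that abstract; the dictionary layout is only illustrative):

```python
# Direct-cost components per patient (US$), as itemized in the abstract:
direct = {
    "generic ART medications": 355,
    "lab tests": 130,
    "nutrition": 117,
    "hospitalizations": 62,
    "pre-ART evaluation": 58,
    "labor": 51,
    "non-ART medications": 39,
    "outside referrals": 31,
    "telephone cards for patient retention": 3,
}
overhead, societal = 114, 22

direct_total = sum(direct.values())           # 846, matching the abstract
total = direct_total + overhead + societal    # 982, the reported mean total
drug_share = direct["generic ART medications"] / total

print(direct_total, total, round(100 * drug_share))  # → 846 982 36
```

The 36% drug share reproduces the abstract's claim that with generic first-line antiretrovirals, only about a third of the per-patient cost is for medications.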
In contrast, power-law behaviors associated with seizures, in the form of epileptic quakes, potentially interfere with the brain's computational activities. This review draws attention to the potential roles played by homeostatic mechanisms and multistable time-delayed recurrent inhibitory loops in the generation of power-law phenomena. Moreover, it is suggested that distinctions between health and disease are scale-dependent. In other words, what is abnormal and defines disease is not the propagation of neural activity but the propagation of activity in a neural population that is large enough to interfere with the normal activities of the brain. From this point of view, epilepsy is a disease that results from a failure of mechanisms, possibly located in part in the cortex itself or in the deep brain nuclei and brainstem, which truncate or otherwise confine the spatiotemporal scales of these power-law phenomena. PMID:22805061 Milton, John G 2012-07-01 59 Science.gov (United States) We are developing simulation and analysis tools in order to develop a solid Earth Science framework for understanding and studying active tectonic and earthquake processes. The goal of QuakeSim and its extension, the Solid Earth Research Virtual Observatory (SERVO), is to study the physics of earthquakes using state-of-the-art modeling, data manipulation, and pattern recognition technologies. We are developing clearly defined, accessible data formats and code protocols as inputs to simulations, which are adapted to high-performance computers. The solid Earth system is extremely complex and nonlinear, resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.
We are using Web (Grid) service technology to demonstrate the assimilation of multiple distributed data sources (a typical data grid problem) into a major parallel high-performance computing earthquake forecasting code. Such a linkage of Geoinformatics with Geocomplexity demonstrates the value of the Solid Earth Research Virtual Observatory (SERVO) Grid concept, and advances Grid technology by building the first real-time large-scale data assimilation grid. Donnellan, Andrea; Rundle, John; Fox, Geoffrey; McLeod, Dennis; Grant, Lisa; Tullis, Terry; Pierce, Marlon; Parker, Jay; Lyzenga, Greg; Granat, Robert; Glasscoe, Margaret 2006-12-01 60 Science.gov (United States) QuakeTables is an ontology-based infrastructure that supports the diverse data types and federated data sets needed to support large-scale modeling of inter-seismic and tectonic processes using boundary element, finite element and analytic applications. This includes fault, paleoseismic and space-borne data. Some of the fault data housed in QuakeTables include CGS 1996, CGS 2002 and the official UCERF 2 deformation models. Currently, QuakeTables supports two forms of radar data, namely, Interferometric Synthetic Aperture Radar (InSAR) and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) Repeat Pass Interferometry (RPI) products in the form of interferograms. All data types are integrated and presented to the end user with tools to map and visualize the data, with the added ability to download it in the desired format for local and/or remote processing. In QuakeTables, each dataset is represented in a self-consistent form as it was originally found in a publication or resource, along with its metadata.
To support modelers' and scientists' need to view different interpretations of the same data, an ontology processor is used to generate such derivations to the desired models and formats while preserving the original dataset and maintaining the metadata for the different models and the links to the original dataset. The QuakeSim team developed a reference model that is used by applications such as Simplex and GeoFest. As a result, this allows the preservation of data and provides a reference for result comparison in the same tool. Through its API and web-services interfaces, QuakeTables delivers data to both the end users and the QuakeSim portal. Al-Ghanmi, R.; McLeod, D.; Grant Ludwig, L.; Donnellan, A.; Parker, J. W.; Pierce, M. 2010-12-01 61 Directory of Open Access Journals (Sweden) Full Text Available QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development. Its development was motivated by the need to consolidate existing data formats for applications in statistical seismology, as well as setting a cutting-edge, community-agreed standard to foster interoperability of distributed infrastructures. The current release (version 1.2) is based on a public Request for Comments process and accounts for suggestions and comments provided by a broad international user community. QuakeML is designed as an umbrella schema under which several sub-packages are collected. The present scope of QuakeML 1.2 covers a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Work on additional packages (macroseismic information, ground motion, seismic inventory, and resource metadata) has been started, but is at an early stage. Several applications based on the QuakeML data model have been created so far.
Among these are earthquake catalog web services at the European Mediterranean Seismological Centre (EMSC), GNS Science, and the Southern California Earthquake Data Center (SCEDC), and QuakePy, an open-source Python-based seismicity analysis toolkit. Furthermore, QuakeML is being used in the SeisComP3 system from GFZ Potsdam, and in the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center installations developed by the Southern California Earthquake Center (SCEC). QuakeML is still under active and dynamic development. Further contributions from the community are crucial to its success and are highly welcome. Joachim Saul 2011-04-01 62 Digital Repository Infrastructure Vision for European Research (DRIVER) From 2004 to 2009, the number of malaria cases reported in Haiti increased nearly fivefold. The effect of the 2010 earthquake and its aftermath on malaria transmission in Haiti is not known. Imported malaria cases in the United States acquired in Haiti tripled from 2009 to 2010, likely reflecting both the increased number of travelers arriving from Haiti and the increased risk of acquiring malaria infection in Haiti. The demographics of travelers and the proportion of severe cases are similar... Agarwal, Aarti; McMorrow, Meredith; Arguin, Paul M. 2012-01-01 63 CERN Document Server We discuss the new problems emerging in charged beam transport for SASE FEL dynamics. The optimization of the magnetic transport system for future devices requires new concepts associated with the slice emittance and the slice phase space distribution. We study the problem of electron beam slice matching and guiding in transport devices for SASE FEL emission, discussing matching criteria and how the associated design of the electron transport line may affect the FEL output performances. We analyze different matching strategies by studying the relevant effect on the FEL output characteristics.
Dattoli, G; Del Franco, M; Petralia, A 2011-01-01 64 Science.gov (United States) We conducted an experiment to test the feasibility of measuring seismic waves generated by traffic near James Madison University. We used QuakeCatcher seismometers (originally designed for passive seismic measurement) to measure vibrations associated with traffic on a wooden bridge as well as a nearby concrete bridge. This experiment was a signal processing exercise for a student research project and did not draw any conclusions regarding bridge safety or security. The experiment consisted of two temporary measurement stations, each comprising a laptop computer and a QuakeCatcher, a small seismometer that plugs directly into the laptop via a USB cable. The QuakeCatcher was taped to the ground at the edge of the bridge to achieve good coupling, and vibrational events were triggered repeatedly with a control vehicle to accumulate a consistent dataset of the bridge response. For the wooden bridge, the resulting 'seismograms' were converted to Seismic Analysis Code (SAC) format and analyzed in MATLAB. The concrete bridge did not generate vibrations significant enough to trigger the recording mechanism on the QuakeCatchers. We will present an overview of the experimental design and frequency content of the traffic patterns, as well as a discussion of the instructional benefits of using the QuakeCatcher sensors in this non-traditional setting. Courtier, A. M.; Constantin, C.; Wilson, C. F. 2013-12-01 65 Directory of Open Access Journals (Sweden) Full Text Available The effects of different environmental conditions on the wetting properties and surface morphology of superhydrophobic quaking aspen leaves harvested during the 2011 growth season are examined. During this particular season quaking aspen leaves were not able to retain their superhydrophobic properties and associated surface structure features as they have usually been able to do in other years.
Representative scanning electron microscopy images and wetting property measurements of quaking aspen leaf surfaces harvested throughout this season are presented and discussed with the objective of linking weather-induced environmental stresses that occurred in 2011 to the sudden and unusual reduction in non-wetting properties and drastic changes in leaf surface structure. Erosion and regeneration rates of leaf wax crystals and the impact that environmental factors can have on these are considered and used to explain the occurrence of these unexpected changes. J. J. Victor 2013-05-01 66 Digital Repository Infrastructure Vision for European Research (DRIVER) This paper uses simple regression techniques to make an initial assessment of the monetary damages caused by the January 12, 2010 earthquake that struck Haiti. Damages are estimated for a disaster with both 200,000 and 250,000 total dead and missing (i.e., the range of mortality that the earthquake is estimated to have caused) using Haiti's economic and demographic data. The base estimate is US$8.1bn for a death toll of 250,000, but for several reasons this may be a lower-bound estimate... Cavallo, Eduardo; Powell, Andrew; Becerra, Oscar 2010-01-01 67 Directory of Open Access Journals (Sweden) Full Text Available Casualties are estimated for the 12 January 2010 earthquake in Haiti using various reports calibrated by observed building damage states from satellite imagery and reconnaissance reports on the ground. By investigating various damage reports, casualty estimates and burial figures for a one-year period from 12 January 2010 until 12 January 2011, there is also strong evidence that the official government figures of 316 000 total dead and missing, reported to have been caused by the earthquake, are significantly overestimated. The authors have examined damage and casualty reports to arrive at their estimation that the median death toll is less than half of this value (±137 000).
The authors show, through a study of historical earthquake death tolls, that overestimates of earthquake death tolls occur in many cases and are not unique to Haiti. As the death toll is one of the key elements for determining the amount of aid and reconstruction funds that will be mobilized, scientific means to estimate death tolls should be applied. Studies of international aid in recent natural disasters reveal that large distributions of aid which do not match the respective needs may cause an oversupply of help, aggravate corruption and social disruption rather than reduce them, and lead to distrust within the donor community. J. E. Daniell 2013-05-01 68 Digital Repository Infrastructure Vision for European Research (DRIVER) Objective. To determine if there is an unrecognized problem of congenital rubella syndrome (CRS) in Haiti, a country without a national rubella immunization program. Methods. During March 2001 and June 2001, screening physicals were conducted on approximately 80 orphans at three orphanages in Haiti that accept disabled children. Children were classified as probable CRS cases based on established clinical criteria. Photo documentation of findings was obtained whenever possible. Results. Six ch... Nancy Golden; Russell Kempker; Parul Khator; Robert Summerlee; Arthur Fournier 2002-01-01 69 Digital Repository Infrastructure Vision for European Research (DRIVER) The recovery of soil productivity is undoubtedly one of the most urgent measures needed for Haiti's continued development. On this basis, we undertook an extensive review of the literature concerning the past history of soil degradation - from the pre-Columbian era to the present - in an effort to identify the principal factors causing the rampant soil degradation suffered by Haiti. Our study was conducted from three standpoints: physico-natural, political and socio-economic, and de... Alexis, Stervin; Hernández, A.
J.; Pastor Piñeiro, Jesús 2004-01-01 70 CERN Multimedia We prove that a Bers slice is never algebraic, meaning that its Zariski closure in the character variety has strictly larger dimension. A corollary is that skinning maps are never constant. The proof uses grafting and the theory of complex projective structures. Dumas, David 2007-01-01 71 Science.gov (United States) Barasa, Haiti is an extremely poor, isolated rural community located on the side of a mountain. Cisterns in Barasa, Haiti are the preferred method to collect and store water for household use. Local masons build cisterns in Haiti, which provides jobs for local people. The local... 72 Science.gov (United States) The challenge for any system that uses volunteer help to do science is to dependably acquire quality data without unduly burdening the volunteer. The NetQuakes accelerograph and its data acquisition system were created to address the recognized need for more densely sampled strong ground motion recordings in urban areas, to provide more accurate ShakeMaps for post-earthquake disaster assessment, and to provide data for structural engineers to improve design standards. The recorder has 18-bit resolution with 3g internal tri-axial MEMS accelerometers. Data are continuously recorded at 200 sps into a 1-2 week ring buffer. When triggered, a miniSEED file is sent to USGS servers via the Internet. Data can also be recovered from the ring buffer by a remote request through the NetQuakes servers. Following a power failure, the instrument can run for 36 hours using its internal battery. We rely upon cooperative citizens to host the dataloggers, provide power and Internet connectivity, and perform minor servicing. Instrument and battery replacement are simple tasks that can be performed by hosts, thus reducing maintenance costs. Communication with the instrument to acquire data or deliver firmware is accomplished by file transfers using NetQuakes servers.
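The NetQuakes abstract above describes continuous recording at 200 samples per second into a 1-2 week ring buffer, from which triggered windows are later extracted. A minimal sketch of that buffering scheme (the class and method names are invented for illustration; the actual NetQuakes firmware is not quoted here):

```python
from collections import deque

SPS = 200  # samples per second, per the abstract

class RingBuffer:
    """Fixed-length buffer that silently discards the oldest samples once
    full, mimicking the continuous pre-event recording described above."""
    def __init__(self, seconds, sps=SPS):
        self.sps = sps
        self.samples = deque(maxlen=seconds * sps)

    def append(self, sample):
        self.samples.append(sample)

    def last(self, seconds):
        """Return the most recent `seconds` of data, e.g. to package a
        triggered window for transmission (as miniSEED, in the real system)."""
        return list(self.samples)[-seconds * self.sps:]

# Tiny demonstration with a 10-second buffer instead of a week:
buf = RingBuffer(seconds=10)
for i in range(15 * SPS):   # 15 s of input overflows the 10 s buffer
    buf.append(i)
print(len(buf.samples))     # → 2000 (exactly 10 s of samples survive)
```

`deque(maxlen=...)` gives the discard-oldest behavior for free; a week-long buffer at 200 sps is about 121 million samples, which is why the real instrument keeps it on local storage rather than shipping it continuously.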
The client instrument initiates all client-server interactions, so it safely resides behind a host's firewall. A connection to the host's LAN, and from there to the public Internet, can be made using WiFi to minimize cabling. Although timing using a cable to an external GPS antenna is possible, it is simpler to use the Network Time Protocol (NTP) to discipline the internal clock. This approach achieves timing accuracy substantially better than a sample interval. Since 2009, we have installed more than 140 NetQuakes instruments in the San Francisco Bay Area and have successfully integrated their data into the near real time data stream of the Northern California Seismic System. An additional 235 NetQuakes instruments have been installed by other regional seismic networks - all communicating via the common NetQuakes servers. Luetgert, J. H.; Oppenheimer, D. H. 2012-12-01 73 Science.gov (United States) QuakeML is an XML-based exchange format for seismological data which is being developed using a community-driven approach. It covers basic event description, including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Contributions have been made from ETH, GFZ, USC, SCEC, USGS, IRIS DMC, EMSC, ORFEUS, GNS, ZAMG, BRGM, and ISTI. The current release (Version 1.1, Proposed Recommendation) reflects the results of a public Request for Comments process which has been documented online at http://quakeml.org/RFC_BED_1.0. QuakeML has recently been adopted as a distribution format for earthquake catalogs by GNS Science, New Zealand, and the European-Mediterranean Seismological Centre (EMSC). These institutions provide prototype QuakeML web services. Furthermore, integration of the QuakeML data model in the CSEP (Collaboratory for the Study of Earthquake Predictability, http://www.cseptesting.org) testing center software developed by SCEC is under way. 
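As a concrete illustration of the basic event description in the QuakeML abstract above, the sketch below parses a small hand-written document with Python's standard library. The element and namespace names follow the published QuakeML 1.2 BED schema, but the sample itself is an assumption for illustration (values loosely modeled on the 2010 Haiti mainshock), not an official example file:

```python
import xml.etree.ElementTree as ET

# Hand-written sample in the spirit of QuakeML 1.2; not an official document.
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<q:quakeml xmlns:q="http://quakeml.org/xmlns/quakeml/1.2"
           xmlns="http://quakeml.org/xmlns/bed/1.2">
  <eventParameters publicID="smi:local/catalog">
    <event publicID="smi:local/event/1">
      <origin publicID="smi:local/origin/1">
        <time><value>2010-01-12T21:53:10Z</value></time>
        <latitude><value>18.44</value></latitude>
        <longitude><value>-72.57</value></longitude>
      </origin>
      <magnitude publicID="smi:local/magnitude/1">
        <mag><value>7.0</value></mag>
        <type>Mw</type>
      </magnitude>
    </event>
  </eventParameters>
</q:quakeml>"""

NS = {"bed": "http://quakeml.org/xmlns/bed/1.2"}

root = ET.fromstring(SAMPLE)
event = root.find("bed:eventParameters/bed:event", NS)
lat = float(event.find("bed:origin/bed:latitude/bed:value", NS).text)
mag = float(event.find("bed:magnitude/bed:mag/bed:value", NS).text)
print(lat, mag)  # → 18.44 7.0
```

Real-valued quantities in QuakeML are wrapped in a `value` element (leaving room for siblings such as uncertainties), which is why the XPath expressions above end in `bed:value`.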
QuakePy is a Python-based seismicity analysis toolkit which is based on the QuakeML data model. Recently, QuakePy has been used to implement the PMC method for calculating network recording completeness (Schorlemmer and Woessner 2008, in press). Completeness results for seismic networks in Southern California and Japan can be retrieved through the CompletenessWeb (http://completenessweb.org). Future QuakeML development will include an extension for macroseismic information. Furthermore, development on seismic inventory information, resource identifiers, and resource metadata is under way. Online resources: http://www.quakeml.org, http://www.quakepy.org Euchner, F.; Schorlemmer, D.; Kästli, P.; QuakeML Working Group 2008-12-01 74 International Nuclear Information System (INIS) A magneto-solid-mechanical model of a two-component, core-crust, paramagnetic neutron star responding to quake-induced perturbation by differentially rotational, torsional oscillations of crustal electron-nuclear solid-state plasma about the axis of the magnetic field frozen in the immobile paramagnetic core is developed. Particular attention is given to the node-free torsional crust-against-core vibrations under the combined action of Lorentz magnetic and Hooke's elastic forces; the damping is attributed to the Newtonian force of shear viscous stresses in crustal solid-state plasma. The spectral formulas for the frequency and lifetime of this toroidal mode are derived in analytic form and discussed in the context of quasiperiodic oscillations of the X-ray outburst flux from quaking magnetars. The application of the obtained theoretical spectra to modal analysis of available data on frequencies of oscillating outburst emission suggests that the detected variability is the manifestation of crustal Alfvén seismic vibrations restored by the Lorentz force of magnetic field stresses.
2010-11-01 75 CERN Document Server A magneto-solid-mechanical model of a two-component, core-crust, paramagnetic neutron star responding to quake-induced perturbation by differentially rotational, torsional oscillations of crustal electron-nuclear solid-state plasma about the axis of the magnetic field frozen in the immobile paramagnetic core is developed. Particular attention is given to the node-free torsional crust-against-core vibrations under the combined action of Lorentz magnetic and Hooke's elastic forces; the damping is attributed to the Newtonian force of shear viscous stresses in crustal solid-state plasma. The spectral formulae for the frequency and lifetime of this toroidal mode are derived in analytic form and discussed in the context of quasi-periodic oscillations of the X-ray outburst flux from quaking magnetars. The application of the obtained theoretical spectra to modal analysis of available data on frequencies of oscillating outburst emission suggests that the detected variability is the manifestation of crustal Alfvén seismic vibrations restored b... Bastrukov, S; Takata, J; Chang, H -K; Xu, R X 2010-01-01 76 International Nuclear Information System (INIS) In J-PARC, the positions of magnets have been measured every two years to check the alignment status. The displacements of magnets measured in August 2010 remained small since the full alignment in autumn 2007. The 2011 Tohoku Pacific Earthquake, which happened on 11 March, shook the ring for two minutes with seismic intensity six. We measured the magnet alignment status after the mega-quake. (author) 2011-08-01 77 Digital Repository Infrastructure Vision for European Research (DRIVER) QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development.
Its development was motivated by the need to consolidate existing data formats for applications in statistical seismology, as well as setting a cutting-edge, community-agreed standard to foster interoperability of distributed infrastructures. The current release (version 1.2) is based on a public Request for Comments process and accounts for suggestions and comments... Danijel Schorlemmer; Fabian Euchner; Philipp Kästli; Joachim Saul 2011-01-01 78 Digital Repository Infrastructure Vision for European Research (DRIVER) In mice, Quaking (Qk) is required for myelin formation; in humans, it has been associated with psychiatric disease. QK regulates the stability, subcellular localization, and alternative splicing of several myelin-related transcripts, yet little is known about how QK governs these activities. Here, we show that QK enhances Hnrnpa1 mRNA stability by binding a conserved 3′ UTR sequence with high affinity and specificity. A single nucleotide mutation in the binding site eliminates QK-dependent ... Zearfoss, N. Ruth; Clingman, Carina C.; Farley, Brian M.; McCoig, Lisa M.; Ryder, Sean P. 2011-01-01 79 Energy Technology Data Exchange (ETDEWEB) This article recounted the experience of a 14-member Hydro-Quebec line crew and support group that travelled to Haiti in November 2004 to help re-establish essential electricity services. The work was conducted together with Electricite d'Haiti (EDH). The team installed 400 poles, 10 km of conductors and 85 transformers, restoring service to water pumps; La Providence and Raboteau hospitals; a centre housing Doctors Without Borders; a CARE distribution centre; and several convents and schools. The installation of street lights at strategic points allowed lighting to be restored in several districts of Gonaives. Hydro-Quebec was able to extend their mission to Haiti and purchase more poles and transformers with the help of a $500,000 contribution from the Canadian International Development Agency.
Hydro-Quebec was the only company that came to the aid of EDH. The total budget for the project was $4 million. 2 figs. Horne, D. 2005-04-01 80 Directory of Open Access Journals (Sweden) Full Text Available Although Haiti is one of the largest Caribbean nations, only 20% of the land under cultivation is appropriate for agriculture. Once covered by forest, this country has been heavily logged and is now mostly deforested. The majority of the arable land is being farmed beyond its carrying capacity. The total area under agricultural production is 6 times greater than the estimated area suitable for agriculture, resulting in significant deterioration of the land. Both national and international governments have made several attempts to combat desertification but few initiatives have been successful. This research will (1) review the current literature pertaining to desertification, with special emphasis on Haiti, (2) review the impact of internal and external programs designed to reverse the effects of desertification, (3) compare the indicators of desertification that exist on the island of Hispaniola, and (4) discuss the consequences of desertification for Haiti. Vereda Johnson Williams 2011-06-01 81 Science.gov (United States) This paper reviews and summarizes the available literature on Haitian mental health and mental health services. This review was conducted in light of the Haitian earthquake in January 2010. We searched Medline, Google Scholar and other available databases to gather scholarly literature relevant to mental health in Haiti. This was supplemented by consultation of key books and grey literature relevant to Haiti. The first part of the review describes historical, economic, sociological and anthropological factors essential to a basic understanding of Haiti and its people. This includes discussion of demography, family structure, Haitian economics and religion. The second part of the review focuses on mental health and mental health services.
This includes a review of factors such as the basic epidemiology of mental illness, common beliefs about mental illness, explanatory models, idioms of distress, help-seeking behavior, the configuration of mental health services and the relationship between religion and mental health. PMID:21076788 Pierre, Andrena; Minn, Pierre; Sterlin, Carlo; Annoual, Pascale C; Jaimes, Annie; Raphaël, Frantz; Raikhel, Eugene; Whitley, Rob; Rousseau, Cécile; Kirmayer, Laurence J 2010-01-01

82 Science.gov (United States) Following the 12th January 2010 earthquake in Haiti, the French Agence Nationale de la Recherche has funded a project named KAL-Haiti which aims at gathering remote sensing imagery as well as in-situ and exogenous data into a knowledge base. This database, seen as a shareable resource, can serve as a basis for helping the reconstruction of the country, but also as a reference for scientific studies devoted to all phases of risk management. The project's main outcome will be a geo-referenced database containing a selection of remotely sensed imagery acquired before and after the disastrous event, supplemented with all relevant ancillary data and enriched with in-situ measurements and exogenous data. The resulting reference database is freely available for research and for reconstruction tasks. It is strongly expected that users will also become contributors by sharing their own data production, thus participating in the growth of the initial kernel. The database will also be enriched with new satellite images, monitoring the evolution of the Haitian situation over the next 10 years. Giros, A.; Fontannaz, D.; Allenbach, B.; Treinsoutrot, D.; De Michele, M. 2012-07-01

83 Directory of Open Access Journals (Sweden) Full Text Available Aedes albopictus was found in six of the 10 departments of Haiti and in 14 of the 35 communes surveyed. The survey found the larvae of Ae. albopictus in 13 different types of containers.
Used tires and tins were by far the most common breeding sites used by this mosquito species. At the breeding sites, Ae. albopictus was associated with other mosquito species, such as Aedes aegypti, Culex nigripalpus and Aedes mediovittatus. The highest proportion of association was with Ae. aegypti. This study represents the first report of Ae. albopictus in Haiti. María del Marquetti Fernández 2012-03-01

84 Scientific Electronic Library Online (English) Full Text Available SciELO Brazil | Language: English Abstract in english Aedes albopictus was found in six of the 10 departments of Haiti and in 14 of the 35 communes surveyed. The survey found the larvae of Ae. albopictus in 13 different types of containers. Used tires and tins were by far the most common breeding sites used by this mosquito species. At the breeding sites, Ae. albopictus was associated with other mosquito species, such as Aedes aegypti, Culex nigripalpus and Aedes mediovittatus. The highest proportion of association was with Ae. aegypti. This study represents the first report of Ae. albopictus in Haiti. Marquetti Fernández, María del; Jean, Yvan Saint; Fuster Callaba, Carlos A.; Somarriba López, Lorenzo.

85 Digital Repository Infrastructure Vision for European Research (DRIVER) The concordance group of algebraically slice knots is the subgroup of the classical knot concordance group formed by algebraically slice knots. Results of Casson and Gordon and of Jiang showed that this group contains an infinitely generated free (abelian) subgroup. Here it is shown that the concordance group of algebraically slice knots also contains elements of finite order; in fact it contains an infinite subgroup generated by elements of order 2.
Livingston, Charles 1998-01-01

86 Directory of Open Access Journals (Sweden) Full Text Available We demonstrate that brain dissection and slicing using solutions warmed to near-physiological temperature (~ +34 °C) greatly enhance slice quality without affecting the intrinsic electrophysiological properties of the neurons. Improved slice quality is seen not only when using young (< 1 month) but also mature (> 2.5 months) mice. This allows easy in vitro patch-clamp experimentation using adult deep cerebellar nuclear slices, which until now have been considered very difficult. As proof of the concept, we compare intrinsic properties of cerebellar nuclear neurons in juvenile (< 1 month) and adult (up to 7 months) mice, and confirm that no significant developmental changes occur after the fourth postnatal week. The enhanced quality of brain slices from old animals facilitates experimentation on age-related disorders as well as optogenetic studies requiring long transfection periods. Marylka Yoe Uusisaari 2013-04-01

87 Directory of Open Access Journals (Sweden) Full Text Available Analysis of the role that constitution and constitutionalism play in the making of polyarchical rule. The author also examines their relationship to class power, political institutions, culture, and leadership. He argues that the constitution does not make for an effective form of constitutionalism, and concludes that meaningful democratization in Haiti is difficult if class relations do not change drastically and are equalized. Robert Fatton Jr. 2000-01-01

88 Index Scriptorium Estoniae The UN is asking the world for more than half a billion dollars for earthquake-stricken Haiti. The EBRD and the World Bank have given Haiti hundreds of millions of dollars for reconstruction this decade, and the USA has invested 800 million dollars in Haiti over 5 years.
Attempts have been made to reform the country, but it has not been freed of corruption. Suurkask, Heiki, 1972- 2010-01-01

89 Scientific Electronic Library Online (English) Full Text Available SciELO Brazil | Language: Portuguese Abstract in english This article discusses the relations between the educational system and social inequalities in Haiti. A case study of Catholic schools helps to understand how the Haitian educational system was built and how socio-educational inequalities can be produced and maintained by this system. Finally, it also discusses how schools can contribute to social change in Haiti. Joint, Louis Auguste.

90 Directory of Open Access Journals (Sweden) Full Text Available This article discusses the relations between the educational system and social inequalities in Haiti. A case study of Catholic schools helps to understand how the Haitian educational system was built and how socio-educational inequalities can be produced and maintained by this system. Finally, it also discusses how schools can contribute to social change in Haiti.
Louis Auguste Joint 2008-08-01

91 Directory of Open Access Journals (Sweden) Full Text Available The present paper is an excerpt, with little adaptation, from the master's thesis "Project management's contribution to international cooperation. How to make things work: ODA in Haiti", written under the coordination of Prof. Dr. Horst Brezinski from Technical University Freiberg Bergakademie and Prof. Dr. Eng. Sabina Irimie from the University of Petrosani, while benefiting from an Erasmus study scholarship at the first mentioned institution. It presents the case of Haiti, a small country facing big challenges and enjoying plenty of international attention, especially due to the recent earthquake that struck the country at the beginning of 2010. The SWOT analysis inside the paper offers a detailed view of Haiti's current situation, identifying at the same time its problems and the variables that should be taken into consideration when designing programmes and projects targeting Haiti's development. ANDREEA MARI? 2011-01-01

92 Digital Repository Infrastructure Vision for European Research (DRIVER) Haiti's conflict is not a civil war in the traditional sense. Nevertheless, extreme poverty, socio-economic inequalities, insecurity due to gang activity and a general state of turbulence and instability have characterised the situation in the country since the fall of the Duvaliers in 1986. The aim of this thesis is to assess the prospects for successful peace-building in Haiti. 'Peace-building' is understood as a process that puts an end to the violent conflict and political ... Wien, Anne Kirsti Tobro 2007-01-01

93 Digital Repository Infrastructure Vision for European Research (DRIVER) Abstract Background Preparing health workers to confront the HIV/AIDS epidemic is an urgent challenge in Haiti, where the HIV prevalence rate is 2.2% and approximately 10 100 people are taking antiretroviral treatment.
There is a critical shortage of doctors in Haiti, leaving nurses as the primary care providers for much of the population. Haiti's approximately 1000 nurses play a leading role in HIV/AIDS prevention, care and treatment. However, nurses do not receive sufficien... Knebel Elisa; Puttkammer Nancy; Demes Adrien; Devirois Ruth; Prismy Mona 2008-01-01

94 Directory of Open Access Journals (Sweden) Full Text Available We analyze seismic signals produced by explosion-quakes at Stromboli Volcano. We use standard nonlinear procedures to search for a low-order effective dynamics. The dimension of the reconstructed phase space depends on the number of samples; namely, larger time lengths correspond to dynamical systems of different complexity. If we restrict the analysis to the signal associated directly with the source (Chouet et al., 1997), we obtain a phase space dimension equal to two. We reproduce this part of the signal with a simple single self-sustained oscillator. S. De Martino 2002-01-01

95 Digital Repository Infrastructure Vision for European Research (DRIVER) The Newtonian solid-mechanical theory of non-compressional spheroidal and torsional nodeless elastic vibrations in the homogeneous crust model of a quaking neutron star is developed and applied to the modal classification of the quasi-periodic oscillations (QPOs) of X-ray luminosity in the aftermath of giant flares in SGR 1806-20 and SGR 1900+14. Particular attention is given to the low-frequency QPOs in the data for SGR 1806-20, whose physical origin has been called into ques... Bastrukov, S.; Chang, H-k; Chen, G. -t; Molodtsova, I. 2008-01-01

96 CERN Document Server The Newtonian solid-mechanical theory of non-compressional spheroidal and torsional nodeless elastic vibrations in the homogeneous crust model of a quaking neutron star is developed and applied to the modal classification of the quasi-periodic oscillations (QPOs) of X-ray luminosity in the aftermath of giant flares in SGR 1806-20 and SGR 1900+14.
Particular attention is given to the low-frequency QPOs in the data for SGR 1806-20, whose physical origin has been called into question. Our calculations suggest that the unspecified QPOs are due to nodeless dipole torsional and dipole spheroidal elastic shear vibrations. Bastrukov, S; Chen, G -T; Molodtsova, I 2008-01-01

97 Digital Repository Infrastructure Vision for European Research (DRIVER) We analyze seismic signals produced by explosion-quakes at Stromboli Volcano. We use standard nonlinear procedures to search for a low-order effective dynamics. The dimension of the reconstructed phase space depends on the number of samples; namely, larger time lengths correspond to dynamical systems of different complexity. If we restrict the analysis to the signal associated directly with the source (Chouet et al., 1997), we obtain a phase space dimension equal to two. We reproduce this part of... 2002-01-01

98 Digital Repository Infrastructure Vision for European Research (DRIVER) On the basis of the results of previous works by our group, this paper aims to investigate the correlation between features of a kind of seismic event recorded at Stromboli (the so-called explosion-quakes) and the source of such explosions, i.e. the different craters. The purpose is that of finding parameters in order to attempt an automatic assignment of new events to their crater of origin. These parameters must be searched for both in the time and in the frequency domain. Afterwards the stability in ... 1996-01-01

99 Science.gov (United States) Embedding a charged group inside a protein in a low dielectric environment is energetically unfavorable. Therefore, most charged groups are solvent exposed. We have developed a hypothesis that a new buried charge transiently formed in a non-polar environment serves as an electrostatic epicenter that drives a protein quake (protein conformational changes).
Here we report an experimental study on the effects of solvent pH on the protonation states of buried ionizable groups, and their correlation with protein quakes. Time-resolved Fourier transform infrared (FTIR) difference absorbance spectroscopy is the major experimental technique for simultaneous detection of the proton transfer event (which generates a new buried charge) and the protein quake event. The results are expected to provide insight into the impact of solvent pH on protein structural dynamics in general. Xie, Aihua; Kaledhonkar, Sandip; Kelemen, Lorand; Hoff, Wouter D.; Thubagere, Anupama 2007-03-01

100 Digital Repository Infrastructure Vision for European Research (DRIVER) Peer-to-peer (P2P) systems are moving from application-specific architectures to a generic service-oriented design philosophy. This raises interesting problems in connection with providing useful P2P middleware services that are capable of dealing with resource assignment and management in a large-scale, heterogeneous and unreliable environment. One such service, the slicing service, has been proposed to allow for an automatic partitioning of P2P networks into groups (slices) that represent a... Fernández, Antonio; Gramoli, Vincent; Jiménez, Ernesto; Kermarrec, Anne-Marie; Raynal, Michel 2006-01-01

101 Energy Technology Data Exchange (ETDEWEB) A review of energy use in Haiti, aimed at identifying possible projects to complement current A.I.D. support for institution building and energy planning within the Ministry of Mines and Energy Resources (MMRE), is presented. Key findings are that: (1) the sugar and manufacturing industries rely heavily on biomass fuels - wood, charcoal, and bagasse (sugar cane residue); and (2) demand for commercial energy and for electricity is growing rapidly despite supply constraints. The report calls for A.I.D.
to: initiate a program to reduce biomass consumption (which is causing severe soil erosion and deforestation), especially in the small distilleries called guildives; collaborate with MMRE and the World Bank to develop a detailed workplan to promote energy efficiency in the guildives, focusing on technology development; help MMRE and the private sector to project Haiti's industrial energy and electricity needs through the year 2000; and sponsor a program of energy audits and efficiency improvements in the manufacturing sector. Streicher, A. 1985-03-28

102 Science.gov (United States) Two stories from Haiti are considered from three different perspectives. The first story is about a boy named Joseph Alvyns, whose mother died from cholera in 2011. His story is told in a short film titled Baseball in the Time of Cholera. The second story is about Mme. Yolande Marie Nazaire, who was the Director of the Haiti National School of Nursing in Port-au-Prince on the morning of January 12, 2010, when an earthquake killed 90 students and faculty. The three perspectives discussed here are: (a) Critical Reflection in health professional education as used by the University of California at San Francisco (UCSF) School of Medicine; (b) The Capacities of Stories, which is part of a socio-narratology methodology; and (c) Story Theory, with implications for global health nursing. PMID:24740952 Baumann, Steven L; Bellefleur, Carmelle 2014-04-01

103 Science.gov (United States) An epidemiologic survey (cross-sectional study) of 462 inhabitants of Corail, Haiti showed that 16.5% were infected with Mansonella ozzardi. This finding was determined from a single 20-µL sample of finger-prick blood from each person. Among children 15 years of age, the prevalence of infection for males and females was 23% and 21%, respectively. In general, the microfilaremias were low and 70% of positive persons had 50 microfilariae.
This study shows that persons living near mangrove marshes that are breeding sites for Culicoides furens and C. barbosai biting midges, which are recognized vectors of M. ozzardi in Haiti, are consequently more frequently infected than those living in the downtown area of Corail or inland. PMID:24710617 Raccurt, Christian P; Brasseur, Philippe; Cicéron, Micheline; Boncy, Jacques 2014-06-01

104 Science.gov (United States) Medical journals and other sources do not show evidence that cholera occurred in Haiti before 2010, despite the devastating effect of this disease in the Caribbean region in the 19th century. Cholera occurred in Cuba in 1833-1834; in Jamaica, Cuba, Puerto Rico, St. Thomas, St. Lucia, St. Kitts, Nevis, Trinidad, the Bahamas, St. Vincent, Grenada, Anguilla, St. John, Tortola, the Turks and Caicos, the Grenadines (Carriacou and Petite Martinique), and possibly Antigua in 1850-1856; and in Guadeloupe, Cuba, St. Thomas, the Dominican Republic, Dominica, Martinique, and Marie Galante in 1865-1872. Conditions associated with slavery and colonial military control were absent in independent Haiti. Clustered populations, regular influx of new persons, and the close quarters of barracks living contributed to the spread of cholera in other Caribbean locations. We provide historical accounts of the presence and spread of cholera epidemics in Caribbean islands. PMID:22099117 Jenson, Deborah; Szabo, Victoria 2011-11-01

105 Digital Repository Infrastructure Vision for European Research (DRIVER) Fasciola hepatica, the aetiological agent of fascioliasis in the Caribbean region, occurs throughout the major islands of the Greater Antilles and in localised zones on two islands (Martinique and Saint Lucia) of the Lesser Antilles. However, apart from Puerto Rico, information regarding human fascioliasis in islands of the Caribbean is out of date or unavailable, or even nonexistent, as in Haiti.
The authors conducted a retrospective, cross-sectional serological survey in Port-au-Prince using... Agnamey, P.; Fortes-Lopes, E.; Raccurt, C. P.; Boncy, J.; Totet, A. 2012-01-01

106 Digital Repository Infrastructure Vision for European Research (DRIVER) Most severe disasters cause large population movements. These movements make it difficult for relief organizations to efficiently reach people in need. Understanding and predicting the locations of affected people during disasters is key to effective humanitarian relief operations and to long-term societal reconstruction. We collaborated with the largest mobile phone operator in Haiti (Digicel) and analyzed the movements of 1.9 million mobile phone users during the period from 42d before, ... Lu, Xin; Bengtsson, Linus; Holme, Petter 2012-01-01

107 Directory of Open Access Journals (Sweden) Full Text Available
Through an examination of French abolitionist Victor Schoelcher's account of Haiti published in 1843, this article interrogates anthropologist Rolph Trouillot's interpretation of the "unthinkability" of the Haitian Revolution. While the Haitian Revolution has been ignored, distorted, and treated with incomprehension and disdain in the West, the use of the notion of 'unthinkability' to interpret its reception contributes to another form of incomprehension by eliminating from consideration the political and historical contexts that are constitutive of resistance. Schoelcher's text represents a remarkable effort to "think" Haiti and the Haitian Revolution from within the presuppositions of French Republicanism. His interpretations demonstrate the broad range of possibilities within Enlightenment thought. They converge with the thought and practices of the Haitian masses and the enslaved population of the French West Indian colonies, but they do not coincide with them. The non-identity of their thought forms the space of politics between Schoelcher and the slaves and is a necessary ground of historical analysis. Dale Tomich 2009-04-01

108 Digital Repository Infrastructure Vision for European Research (DRIVER) Aedes albopictus was found in six of the 10 departments of Haiti and in 14 of the 35 communes surveyed. The survey found the larvae of Ae. albopictus in 13 different types of containers. Used tires and tins were by far the most common breeding sites used by this mosquito species. At the breeding sites, Ae. albopictus was associated with other mosquito species, such as Aedes aegypti, Culex nigripalpus and Aedes mediovittatus. The highest proportion of association was with Ae. aegypti. This stu...
María del Marquetti Fernández; Yvan Saint Jean; Fuster Callaba, Carlos A.; Lorenzo Somarriba López 2012-01-01

109 Scientific Electronic Library Online (English) Full Text Available SciELO Public Health | Language: English Abstract in english Objective. To determine if there is an unrecognized problem of congenital rubella syndrome (CRS) in Haiti, a country without a national rubella immunization program. Methods. Between March 2001 and June 2001, screening physicals were conducted on approximately 80 orphans at three orphanages in Haiti that accept disabled children. Children were classified as probable CRS cases based on established clinical criteria. Photo documentation of findings was obtained whenever possible. Results. Six children met the criteria for probable CRS. Using data from surrounding Caribbean countries and from the United States of America prior to rubella immunization, we estimated that there are between 163 and 440 new cases of CRS per year in Haiti. Conclusions.
CRS exists in Haiti, but its presence is generally unrecognized. A national rubella immunization policy should be considered. Golden, Nancy; Kempker, Russell; Khator, Parul; Summerlee, Robert; Fournier, Arthur.

110 Digital Repository Infrastructure Vision for European Research (DRIVER) A fully sustainable sanitation system was developed for a rural hospital in Haiti. The system operates by converting human waste into biogas and fertilizer without using external energy. It is a hybrid anaerobic/aerobic system that maximizes methane production while producing quality compost. The system first separates liquid and solid human waste at the source to control the carbon-to-nitrogen ratio and moisture content and to facilitate enhanced biodegradation. It will then degrade human waste thro... Meegoda, Jay N.; Hsin-Neng Hsieh; Paul Rodriguez; Jason Jawidzik 2012-01-01

111 Science.gov (United States) Nearly 3 years after its appearance in Haiti, cholera has already exacted more than 8,200 deaths and 670,000 reported cases, and it is feared to become endemic. However, no clear evidence of a stable environmental reservoir of pathogenic Vibrio cholerae, the infective agent of the disease, has emerged so far, suggesting that the transmission cycle of the disease is being maintained by bacteria freshly shed by infected individuals. Thus, in principle, cholera could possibly be eradicated from Haiti. Here, we develop a framework for the estimation of the probability of extinction of the epidemic based on current epidemiological dynamics and health-care practice. Cholera spreading is modelled by an individual-based, spatially explicit stochastic model that accounts for the dynamics of susceptible, infected and recovered individuals hosted in different local communities connected through hydrologic and human mobility networks. Our results indicate that the probability that the epidemic goes extinct before the end of 2016 is of the order of 1%.
This low probability of extinction highlights the need for more targeted and effective interventions to possibly stop cholera in Haiti. Bertuzzo, Enrico; Finger, Flavio; Mari, Lorenzo; Gatto, Marino; Rinaldo, Andrea 2014-05-01

112 Science.gov (United States) Megan Chiu, Jason Baird, Xu Huang, Trishan de Lanerolle, Ralph Morelli, Jonathan Gourley Trinity College, Computer Science Department and Environmental Science Program, 300 Summit Street, Hartford, CT 06106 megan.chiu@trincoll.edu, Jason.baird@trincoll.edu, xu.huang@trincoll.edu, trishan.delanerolle@trincoll.edu, ralph.morelli@trincoll.edu, jonathan.gourley@trincoll.edu Price data for Haiti commodities such as rice and potatoes have traditionally been recorded by hand on paper forms for many years. The information is then entered into a computer manually, making the process a long and arduous one. With the development of the Haiti Commodity Tracker mobile app, we are able to make this commodity price data recording process more efficient. Officials may use this information for making inferences about differences in commodity prices and for food distribution during the critical time after natural disasters. This information can also be utilized by governments and aid agencies in their food assistance programs. Agronomists record the item prices from several sample sites in a marketplace and compare those results with other markets across the region. Due to limited connectivity in rural areas, data is first saved to the phone's database and then retransmitted to a central server via SMS messaging. The mobile app is currently being field tested by an international NGO providing agricultural aid and support in rural Haiti. Chiu, M. T.; Huang, X.; Baird, J.; Gourley, J. R.; Morelli, R.; de Lanerolle, T.
R.; Haiti Food Security Monitoring Mobile App Team 2011-12-01

113 Digital Repository Infrastructure Vision for European Research (DRIVER) We investigated chloroquine sensitivity of Plasmodium falciparum in travelers returning to France and Canada from Haiti during a 23-year period. Two of 19 isolates obtained after the 2010 earthquake showed a mixed pfcrt 76K+T genotype and a high 50% inhibitory concentration. Physicians treating malaria acquired in Haiti should be aware of possible chloroquine resistance. 2012-01-01

114 Science.gov (United States) The USGS seeks accelerograph spacing of 5-10 km in selected urban areas of the US to obtain spatially un-aliased recordings of strong ground motions during large earthquakes. These dense measurements will improve our ability to make rapid post-earthquake assessments of expected damage and contribute to the continuing development of engineering standards for construction. To achieve this goal the USGS and its university partners are deploying NetQuakes seismographs, designed to record moderate to large earthquakes from the near field to about 100 km. The instruments have tri-axial Colibrys 2005SF MEMS sensors, clip at 3g, and have 18-bit resolution. These instruments are uniquely designed for deployment in private homes, businesses, public buildings and schools where there is an existing broadband connection to the Internet. The NetQuakes instruments connect to a local network using WiFi and then via the Internet to USGS servers to a) upload triggered accelerograms in miniSEED format, P arrival times, and computed peak ground motion parameters immediately after an earthquake; b) download software updates; c) respond to requests for log files, execute UNIX scripts, and upload waveforms from long-term memory for quakes with peak motions below the trigger threshold; d) send state-of-health (SOH) information in XML format every 10 minutes; and e) synchronize instrument clocks to 1 ms accuracy using the Network Time Protocol.
NetQuakes instruments cost little to operate and save about $600/yr/site compared to instruments that transmit data via leased telemetry. After learning about the project through press releases, thousands of citizens have registered to host an instrument at http://earthquake.usgs.gov/netquakes using a Google Map interface that depicts where we seek instrument sites. The website also provides NetQuakes hosts access to waveform images recorded by instruments installed in their building. Since 3/2009, the NetQuakes project has installed over 100 instruments in the San Francisco Bay area, over 30 in the Seattle region, and 20 elsewhere in the US. Five instruments are also deployed in the San Francisco Bay region on San Pablo Dam, operated by the East Bay Municipal Utility District (EBMUD). These instruments provide cost-effective monitoring for EBMUD through free Internet telemetry, and because the USGS monitors instrument SOH, performs all data processing and archiving, and transmits recorded shaking levels to the dam operators via ShakeCast. EBMUD allows the strong motion data from their instruments to be freely available for use by the seismological and engineering communities. The NetQuakes project expects to install 350 instruments by the end of 2011. Luetgert, J. H.; Oppenheimer, D. H.; Hamilton, J. 2010-12-01

115 Science.gov (United States) As global health education becomes increasingly important, more physicians are participating in international health electives (IHEs). Haiti is a favorable site for an IHE because of its substantial health care needs and rich culture. Although both osteopathic and allopathic physicians can provide effective health care to Haitians, osteopathic physicians may be particularly well suited to serve in Haiti because of their training in osteopathic manipulative treatment (OMT).
Because OMT's laying on of hands ("high touch") is similar to the touch inherent to Haiti's traditional ethnomedical practices, osteopathic physicians' use of OMT can enhance trust among Haitians and increase Haitians' willingness to work with westernized medical practitioners. In addition, an IHE in a low-resource country such as Haiti can provide osteopathic physicians with a global outlook on medicine and a range of critical communication and clinical skills. The authors advocate for the development of an IHE in Haiti for osteopathic physicians. PMID:23739760 Coupet, Sidney; Howell, Joel D; Ross-Lee, Barbara 2013-06-01 116 CERN Document Server We compute characteristic parameters of magneto-dipole radiation of a neutron star undergoing torsional seismic vibrations under the action of Lorentz restoring force about axis of dipolar magnetic field experiencing decay. After brief outlook of general theoretical background of the model of vibration powered neutron star, we present numerical estimates of basic vibration and radiation characteristics, such as the oscillation frequency, lifetime, luminosity of radiation, and investigate their time dependence upon magnetic field decay. The presented analysis suggests that gradual decrease in frequencies of pulsating high-energy emission detected from a handful of currently monitored AXP/SGR-like X-ray sources can be explained as being produced by vibration powered magneto-dipole radiation of quaking magnetars. Bastrukov, S I; Xu, R X; Molodtsova, I V 2011-01-01 117 CERN Document Server We investigate an asteroseismic model of non-rotating paramagnetic neutron star with core-crust stratification of interior pervaded by homogeneous internal and dipolar external magnetic field. Focus is on post-quake vibrational relaxation by torsional shear oscillations of electron-nuclear solid-state plasma in the metal-like crust about axis of magnetic field frozen in the immobile core.
In accord with basic physics underlying the very notion of a neutron star and indirect observational evidence of the dipole configuration of magnetic fields of pulsars and magnetars, the model under consideration presumes that micro-composition of core material is dominated by degenerate neutron matter in the state of Pauli's paramagnetic permanent magnetization caused by polarizations of spin magnetic moments of neutrons along magnetic axis of the star. Particular attention is given to the regime of node-free differentially rotational vibrations of crust against immobile core driven by Lorentz magnetic and Hooke's elastic f... Bastrukov, S I; Chang, H -K; Takata, J 2009-01-01 118 Science.gov (United States) Activation of a receptor protein during biological signaling is often characterized by a two state model: a receptor state (also called "off state") for detection of a stimulus, and a signaling state ("on state") for signal relay. Receptor activation is a process in which a receptor protein is structurally transformed from its receptor state to its signaling state through substantial conformational changes that are recognizable by its downstream signal relay partner. What are the structural and energetic origins for receptor activation in biological signaling? We report extensive evidence that further supports the role of an "electrostatic epicenter" in driving a "protein quake" and receptor activation. Photoactive yellow protein (PYP), a bacterial blue light photoreceptor protein for the negative phototaxis of the salt-loving Halorhodospira halophila, is employed as a model system in this study. We will discuss potential applications of this receptor activation mechanism to other receptor proteins, including B-RAF receptor protein that is associated with many cancers.
Xie, Aihua; Kaledhonkar, Sandip; Kang, Zhouyang; Hendriks, Johnny; Hellingwerf, Klaas 2013-03-01 119 Science.gov (United States) The peculiar source characteristics of long-period seismic events (time persistency of the source, low-frequency peaks in the source spectrum, absence of high-frequency radiation) prevent the formation of a definite high-frequency coda in the seismograms. In contrast, this is well formed in volcano-tectonic quakes. For this reason, the widely used duration magnitude scale that is based on the proportionality between the energy and the coda duration cannot be used for long-period estimation. In observatory practice, the long-period magnitude is sometimes estimated using the same duration magnitude scale, leading to confusing results. In this report, we show a new method to estimate the magnitude of long-period events that generally occur for volcanoes, with some application examples from data for Mt Etna (Italy), Colima Volcano (Mexico) and Campi Flegrei (Italy). Del Pezzo, E.; Bianco, F.; Borgna, I. 2013-08-01 120 Science.gov (United States) Presented is a practical scheme to enable introductory biology students to investigate the mechanism by which urea is synthesized in the liver. The tissue-slice technique is discussed, and methods for the quantitative analysis of metabolites are presented. (Author/SL) Teal, A. R. 1976-01-01 121 Digital Repository Infrastructure Vision for European Research (DRIVER) Hepatitis A virus (HAV), a small non-enveloped RNA virus from Picornaviridea family, causes approximately 1.5million cases of acute hepatitis each year, and is still a major world health problem especially in developing countries.As the risk of getting infected by HAV increases at the time of crisis such as earthquakes, we tried to performa brief review on current situation of HAV in Haiti, a country that experienced an earthquake measuring 7.0on the Richter scale recently, and that it might ... 
2010-01-01 122 Directory of Open Access Journals (Sweden) Full Text Available This paper will discuss which actors interfered in Haiti's internal affairs over the course of its history, especially in the 1990's. We will analyze the profile of the military and police forces established in the country, as an important element in understanding that, many times, instead of protecting citizens, these same forces contributed to oppression, instability and insecurity for most of the Haitian population. Finally, we will take stock of the activities promoted by international organizations in the 1990's. VANESSA BRAGA MATIJASCIC 2012-01-01 123 Scientific Electronic Library Online (English) Full Text Available SciELO Brazil | Language: Portuguese Abstract in portuguese Por intermédio da análise do relato do abolicionista francês Victor Schoelcher sobre o Haiti, publicado em 1843, este artigo questiona a interpretação do antropólogo Rolph Trouillot sobre o caráter "impensável" da Revolução Haitiana. Ao mesmo tempo em que esta última tem sido ignorada, distorcida ou [...] tratada com incompreensão pelo Ocidente, o uso da noção de "impensável" para interpretar sua recepção contribui para outra forma de incompreensão, ao eliminar de qualquer consideração os contextos históricos e políticos que constituem a resistência. O texto de Schoelcher representa um esforço notável de "pensar" o Haiti e a Revolução Haitiana através dos pressupostos do Republicanismo francês. Suas interpretações revelam a ampla gama de possibilidades oferecidas pelo pensamento iluminista. Elas convergem com o pensamento e a prática das massas haitianas e das populações escravizadas das colônias francesas das Índias Ocidentais, mas não são inteiramente coincidentes. A não-identidade destes pensamentos dá forma ao espaço da política entre Schoelcher e os escravos e constitui um terreno necessário para a análise histórica.
Abstract in english Through an examination of French abolitionist Victor Schoelcher's account of Haiti published in 1843, this article interrogates anthropologist Rolph Trouillot's interpretation of the "unthinkability" of the Haitian Revolution. While the Haitian Revolution has been ignored, distorted, and treated with incomprehension and disdain in the West, the use of the notion of 'unthinkability' to interpret its reception contributes to another form of incomprehension by eliminating from consideration the political and historical contexts that are constitutive of resistance. Schoelcher's text represents a remarkable effort to "think" Haiti and the Haitian Revolution from within the presuppositions of French Republicanism. His interpretations demonstrate the broad range of possibilities within Enlightenment thought. They converge with the thought and practices of the Haitian masses and the enslaved population of the French West Indian colonies, but they do not coincide with them. The non-identity of their thought forms the space of politics between Schoelcher and slaves and is a necessary ground of historical analysis. Tomich, Dale. 124 CERN Document Server It is known that the linking form on the 2-cover of slice knots has a metabolizer. We show that several weaker conditions, or some other conditions related to sliceness, do not imply the existence of a metabolizer. We then show how the Rudolph-Bennequin inequality can be used indirectly to prove that some knots are not slice. Stoimenow, A 2004-01-01 125 Directory of Open Access Journals (Sweden) Full Text Available Review of: Travesty in Haiti: A True Account of Christian Missions, Orphanages, Fraud, Food Aid and Drug Trafficking [second edition]. Timothy T. Schwartz. Charleston SC: Booksurge, 2010. xlvii + 262 pp. (Paper US$15.99) Haiti in the Balance: Why Foreign Aid Has Failed and What We Can Do About It. Terry Buss. Washington DC: Brookings Institution Press, 2008. xvi + 230 pp.
(Paper US$28.95) Backpacks Full of Hope: The UN Mission in Haiti. Eduardo Aldunate. Waterloo ON: Wilfrid Laurier University Press, 2010. xx + 230 pp. (Paper US$34.95) Landon Yarrington 2012-12-01 126 Digital Repository Infrastructure Vision for European Research (DRIVER) Prior to the epidemic that emerged in Haiti in October of 2010, cholera had not been documented in this country. After its introduction, a strain of Vibrio cholerae O1 spread rapidly throughout Haiti, where it caused over 600,000 cases of disease and >7,500 deaths in the first two years of the epidemic. We applied whole-genome sequencing to a temporal series of V. cholerae isolates from Haiti to gain insight into the mode and tempo of evolution in this isolated population of V. cholerae O1... Katz, Lee S.; Petkau, Aaron; Beaulaurier, John; Tyler, Shaun; Antonova, Elena S.; Turnsek, Maryann A.; Guo, Yan; Wang, Susana; Paxinos, Ellen E.; Orata, Fabini; Gladney, Lori M.; Stroika, Steven; Folster, Jason P.; Rowe, Lori; Freeman, Molly M. 2013-01-01 127 Digital Repository Infrastructure Vision for European Research (DRIVER) ABSTRACT Prior to the epidemic that emerged in Haiti in October of 2010, cholera had not been documented in this country. After its introduction, a strain of Vibrio cholerae O1 spread rapidly throughout Haiti, where it caused over 600,000 cases of disease and >7,500 deaths in the first two years of the epidemic. We applied whole-genome sequencing to a temporal series of V. cholerae isolates from Haiti to gain insight into the mode and tempo of evolution in this isolated population of V. chole... Katz, Lee S.; Petkau, Aaron; Beaulaurier, John; Tyler, Shaun; Antonova, Elena S.; Turnsek, Maryann A.; Guo, Yan; Wang, Susana; Paxinos, Ellen E.; Orata, Fabini; Gladney, Lori M.; Stroika, Steven; Folster, Jason P.; Rowe, Lori; Freeman, Molly M.
2013-01-01 128 CERN Document Server The impact of magnetic field decay on radiative activity of quaking neutron star undergoing Lorentz-force-driven torsional seismic vibrations about axis of its dipole magnetic moment is studied. We found that monotonic depletion of internal magnetic field pressure is accompanied by the loss of vibration energy of the star that causes its vibration period to lengthen at a rate proportional to the rate of magnetic field decay. Particular attention is given to the magnetic-field-decay induced conversion of the energy of differentially rotational Alfven vibrations into the energy of oscillating magneto-dipole radiation. A set of representative examples of magnetic field decay illustrating the vibration energy powered emission with elongating periods produced by quaking neutron star are considered and discussed in the context of theory of magnetars. Bastrukov, S I; Yu, J W; Xu, R X 2010-01-01 129 CERN Document Server Slice stretching effects are discussed as they arise at the event horizon when geodesically slicing the extended Schwarzschild black hole spacetime while using singularity excision. In particular, for Novikov and isotropic spatial coordinates the outward-movement of the event horizon ("slice sucking") and the unbounded growth there of the radial metric component ("slice wrapping") are analyzed. For the overall slice stretching very similar late time behavior is found when comparing with maximal slicing. Thus, the intuitive argument that attributes slice stretching to singularity avoidance is incorrect.
Reimann, B 2004-01-01 130 Scientific Electronic Library Online (English) Full Text Available SciELO Brazil | Language: Portuguese Abstract in portuguese OBJETIVO: Investigar a presença de sintomas de depressão e ansiedade em sobreviventes do terremoto do Haiti, que foram atendidos pela equipe de saúde do Hospital Israelita Albert Einstein, e avaliar o impacto que a perda de um familiar durante a catástrofe pode causar no desenvolvimento desses sintomas. MÉTODOS: Quarenta sobreviventes do terremoto do Haiti, atendidos pela equipe de saúde, entre fevereiro e março de 2010, foram incluídos neste estudo. Todos os indivíduos foram submetidos a uma entrevista semiestruturada. O grupo foi dividido em dois: Grupo A (que perderam um familiar na catástrofe) e Grupo B (aqueles que não tiveram perdas). RESULTADOS: Um total de 55% dos indivíduos apresentavam sintomas de depressão e 40% de ansiedade. Os indivíduos que perderam familiares tinham cinco vezes mais probabilidade de desenvolver ansiedade e depressão do que aqueles que não tiveram perdas. CONCLUSÃO: As vítimas de catástrofes que perderam pelo menos um familiar no desastre têm maior probabilidade de desenvolver sintomas de depressão e ansiedade. A esses indivíduos, assim como outros que demonstravam estresse psicológico, devem ser oferecidos, precocemente, cuidados de saúde mental, para ajudá-los a suportar o grande estresse emocional inerente a essas situações. Abstract in english OBJECTIVE: To investigate the presence of depression and anxiety symptoms in survivors of the Haiti earthquake who were assisted by a healthcare team from the Hospital Israelita Albert Einstein, and to evaluate the impact that losing a family member during this catastrophe could have on the development of these symptoms. METHODS: Forty survivors of the Haiti earthquake who were assisted by the healthcare team between February and March of 2010 were included in this study. All subjects underwent a semi-structured interview.
The group was divided into Group A (individuals who lost a family member in the disaster) and Group B (those who did not lose any family member). RESULTS: A total of 55% of the subjects had depression symptoms whereas 40% had anxiety symptoms. The individuals who lost a family member were five times more likely to develop anxiety and depression symptoms than those who did not. CONCLUSION: Catastrophe victims who lost at least one family member due to the disaster were more likely to develop anxiety and depression symptoms. These individuals, as well as others showing psychological distress, should be offered early mental health care to help them cope with the great emotional distress inherent in these situations. Guimaro, Melissa Simon; Steinman, Milton; Kernkraut, Ana Merzel; Santos, Oscar Fernando Pavão dos; Lacerda, Shirley Silva. 131 Digital Repository Infrastructure Vision for European Research (DRIVER) The main problem with the hardware implementation of turbo codes is the lack of parallelism in the MAP-based decoding algorithm. This paper proposes to overcome this problem with a new family of turbo codes, called Slice Turbo Codes. This family is based on two ideas: the encoding of each dimension with P independent tail-biting codes and a constrained interleaver structure that allows parallel decoding of the P independent codewords in each dimension. The optimization of the interleaver is d... Gnaedig, David; Boutillon, Emmanuel; Jezequel, Michel; Gaudet, Vincent; Glenn Gulak, P. 2003-01-01 132 Digital Repository Infrastructure Vision for European Research (DRIVER) The local "when" for earthquake prediction is based on the connection between geomagnetic "quakes" and the next incoming minimum or maximum of tidal gravitational potential. The probability time window for the predicted earthquake is for the tidal minimum approximately 1 day and for the maximum 2 days.
The preliminary statistic estimation on the basis of distribution of the time difference between occurred and predicted earthquakes for the period 2002-2003 for the Sofia re... 2004-01-01 133 Digital Repository Infrastructure Vision for European Research (DRIVER) The local 'when' for earthquake prediction is based on the connection between geomagnetic 'quakes' and the next incoming minimum or maximum of tidal gravitational potential. The probability time window for the predicted earthquake is for the tidal minimum approximately 1 day and for the maximum 2 days. The preliminary statistic estimation on the basis of distribution of the time difference between occurred and predicted earthquakes for the period 2002-2003 for the Sofia region... 2004-01-01 134 Science.gov (United States) Viscous fingering of a miscible high viscosity slice of fluid displaced by a lower viscosity fluid is studied in porous media by direct numerical simulations of Darcy's law coupled to the evolution equation for the concentration of a solute controlling the viscosity of miscible solutions. In contrast with fingering between two semi-infinite regions, fingering of finite slices is a transient phenomenon due to the decrease in time of the viscosity ratio across the interface induced by fingering and dispersion processes. We show that fingering contributes transiently to the broadening of the peak in time by increasing its variance. A quantitative analysis of the asymptotic contribution of fingering to this variance is conducted as a function of the four relevant parameters of the problem, i.e., the log-mobility ratio R, the length of the slice l, the Péclet number Pe, and the ratio between transverse and axial dispersion coefficients ε. Relevance of the results is discussed in relation with transport of viscous samples in chromatographic columns and propagation of contaminants in porous media. de Wit, A.; Bertho, Y.; Martin, M.
2005-05-01 135 CERN Document Server Viscous fingering of a miscible high viscosity slice of fluid displaced by a lower viscosity fluid is studied in porous media by direct numerical simulations of Darcy's law coupled to the evolution equation for the concentration of a solute controlling the viscosity of miscible solutions. In contrast with fingering between two semi-infinite regions, fingering of finite slices is a transient phenomenon due to the decrease in time of the viscosity ratio across the interface induced by fingering and dispersion processes. We show that fingering contributes transiently to the broadening of the peak in time by increasing its variance. A quantitative analysis of the asymptotic contribution of fingering to this variance is conducted as a function of the four relevant parameters of the problem i.e. the log-mobility ratio R, the length of the slice l, the Péclet number Pe and the ratio between transverse and axial dispersion coefficients $\epsilon$. Relevance of the results is discussed in relation with transport of vi... De Wit, A; Martin, M; Wit, Anne De; Bertho, Yann; Martin, Michel 2005-01-01 136 Directory of Open Access Journals (Sweden) Full Text Available The paper presents an analysis using the methods of Eddy field calculation mean and wavelet maxima to detect seismic anomalies within the outgoing longwave radiation (OLR) data based on time and space. The distinguishing feature of the method of Eddy field calculation mean is that we can calculate "the total sum of the difference value" of "the measured value" between adjacent points, which could highlight the singularity within data. The identified singularities are further validated by wavelet maxima, which use wavelet transformations as data mining tools, computing maxima that can be used to identify obvious anomalies within OLR data.
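The "Eddy field calculation mean" is described only loosely in the abstract above (a sum of differences between adjacent measured values that highlights a singularity). The following sketch is my own illustration of that general idea, not the authors' code; the signal, injected anomaly, and window size are all hypothetical:

```python
import numpy as np

def adjacent_difference_sum(series, window):
    """Rolling sum of absolute differences between adjacent samples --
    a simple roughness score that spikes near a singularity in the data."""
    diffs = np.abs(np.diff(series))
    return np.convolve(diffs, np.ones(window), mode="same")

# Smooth hypothetical background signal with one injected spike.
t = np.linspace(0.0, 10.0, 500)
signal = np.sin(t)
signal[250] += 2.0  # the "anomaly"

score = adjacent_difference_sum(signal, window=5)
print(int(np.argmax(score)))  # the score peaks near sample 250
```

Against a smooth background, the score stays near the per-step variation of the signal; at the injected jump it rises by roughly the jump height, which is what makes thresholding or maxima-picking feasible.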
The two methods have been applied to carry out a comparative analysis of OLR data associated with the earthquake that recently occurred in Haiti on 12 January 2010. Combined with the tectonic explanation of the spatial and temporal continuity of the abnormal phenomena, the analyzed results indicate a number of singularities associated with possible seismic anomalies of the earthquake. From the comparative experiments and analyses using the two methods, which cover the same time and space, we conclude that the singularities observed from 19 to 24 December 2009 could be precursors of the Haiti earthquake. P. Xiong 2010-10-01 137 CERN Document Server Program understanding is an important aspect of Software Maintenance and Reengineering. Understanding a program requires knowing its execution behaviour and the relationships among the variables involved. The set of statements that directly or indirectly influence the value of a variable at some point in a program is called a program slice. Program slicing is a technique for extracting parts of computer programs by tracing the programs' control and data flow related to some data item. This technique is applicable in various areas such as debugging, program comprehension and understanding, program integration, cohesion measurement, re-engineering, maintenance and testing, where it is useful to be able to focus on relevant parts of large programs. This paper focuses on various slicing techniques, including static slicing, quasi-static slicing, dynamic slicing and conditional slicing. This pape... Sasirekha, N; Hemalatha, Dr M 2011-01-01 138 Science.gov (United States) In the wake of the Haiti earthquake disaster, civil and military organizations engaged in vigorous relief operations to achieve rapid deployment of logistics, transport, security and medical supplies.
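To make the notion of a backward static slice from the slicing survey above concrete, here is a toy sketch of my own (not from the paper). It handles only straight-line code via data dependences; real slicers also track control dependence:

```python
# Each statement: (index, defined_var, set_of_used_vars).
# A backward static slice for `target` keeps every statement whose
# defined variable can (transitively) influence `target`.

def backward_slice(statements, target):
    relevant = {target}           # variables whose definitions matter
    slice_indices = []
    for idx, defined, used in reversed(statements):
        if defined in relevant:
            slice_indices.append(idx)
            relevant |= used      # the inputs of this statement now matter
    return sorted(slice_indices)

# Hypothetical straight-line program:
# 0: a = input()  1: b = input()  2: c = a + 1  3: d = b * 2  4: e = c + a
program = [
    (0, "a", set()),
    (1, "b", set()),
    (2, "c", {"a"}),
    (3, "d", {"b"}),
    (4, "e", {"c", "a"}),
]

print(backward_slice(program, "e"))  # → [0, 2, 4]; statements 1 and 3 are sliced away
```

The single backward pass works because in straight-line code every definition a statement uses must appear earlier, so growing the `relevant` set while walking upward captures transitive data dependences.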
Organizations involved in the operation collaborated w... D. J. MacKinnon S. P. Gallup Y. Zhao 2011-01-01 139 Digital Repository Infrastructure Vision for European Research (DRIVER) Analyzing data from Haiti, Bruce Schackman and colleagues report that scale-up of prenatal HIV testing programs provides a cost-effective opportunity to prevent congenital syphilis through rapid testing. 2007-01-01 140 Science.gov (United States) The main goal of this work is to propose some variations to the classic Probabilistic Seismic Hazard Analysis (PSHA) calculations: on one hand, applying the zoneless methodology to seismic source activity characterization and, on the other hand, using Gaussian mixture models to combine Ground Motion Prediction Equation (GMPE) models into a mixed model. Our current knowledge of Brazilian intraplate seismicity does not allow us to identify the causative neotectonic active faults with confidence. This makes it difficult to characterize the main seismic sources and to compute the Gutenberg-Richter relation. Indeed, seismic zonings made by different specialists can differ substantially, while the zoneless approach imposes a quantitative method on seismic source characterization, avoiding subjective source zone definition. In addition, the low seismicity rate and the limited coverage in space and time of the seismic networks do not offer enough observations to fit a confident GMPE to this region. In this case, our purpose was to use a Gaussian mixture model to estimate a composite model from pre-existing, well-fitted GMPE models that better describes the observed peak ground motion data. The other methodological evaluation is to use the OpenQuake engine (a Global Earthquake Model initiative) for the hazard calculation. The logic tree input will allow us, in the near future, to combine, with weights, other hazard models from different specialists.
We expect that these results will offer a new and solid basis to upgrade the Brazilian civil engineering seismic rules. Pirchiner, M.; Drouet, S.; Assumpcao, M. 2013-12-01 141 Digital Repository Infrastructure Vision for European Research (DRIVER) Haiti has been the locus of a number of large and damaging historical earthquakes. The recent January 12, 2010, Mw 7.0 earthquake affected cities that were largely unprepared, which resulted in tremendous losses. It was initially assumed that the earthquake ruptured the Enriquillo Plantain Garden Fault (EPGF), a major active structure in southern Haiti, known from geodetic measurements and its geomorphic expression to be capable of producing M7 or larger earthquakes. However, GPS and InSAR da... 2012-01-01 142 Digital Repository Infrastructure Vision for European Research (DRIVER) An evaluation of the seismic hazard in La Hispaniola Island has been carried out, as part of the cooperative project SISMO-HAITI, supported by the Technical University of Madrid (UPM) and developed by several Spanish universities, the National Observatory of Environment and Vulnerability (ONEV) of Haiti, and with contributions from the Puerto Rico Seismic Network (PRSN) and the University Seismological Institute of the Dominican Republic (ISU). The study was aimed at obtaining results suitable for sei... Benito Oterino, Belen; Belizaire, Dwinel; Torres Fernández, Yolanda; Martínez Díaz, José Jesús; Huérfano, Víctor; Polanco, Eugenio; Garcia, R.; González-Crende, Pilar; Serna Martínez, Ana Rita; Zevallos, F.
Sound ecosystem management can both support the medium and long-term needs for recovery as well as help to buffer the impacts of future extreme natural events, which for Haiti are likely to include both hur... Mainka, Susan A.; Jeffrey McNeely 2011-01-01 144 Digital Repository Infrastructure Vision for European Research (DRIVER) The Republic of Haiti is a prime international remittance-recipient country in the Latin American and Caribbean (LAC) region, relative to its gross domestic product (GDP). The downside of this fact may be that Haiti, based on population size, is also the largest exporter of skilled workers in the world. The present research uses a zero-altered negative binomial (with logit inflation) to model the international migration decision-process of households, and endogenous regressors' Amemiya genera... Jadotte, Evans 2009-01-01 145 Digital Repository Infrastructure Vision for European Research (DRIVER) This paper investigates vulnerability to poverty in Haiti. Research in vulnerability in developing countries has been scarce due to the high data requirements of vulnerability studies (e.g. panel or long series of cross-sections). The methodology adopted here allows the assessment of vulnerability to poverty by exploiting the short panel structure of nested data at different levels. The decomposition method reveals that vulnerability in Haiti is largely a rural phenomenon and that schooling c... Jadotte, Evans 2010-01-01 146 Digital Repository Infrastructure Vision for European Research (DRIVER) The response of the nephrological community to the Haiti and Chile earthquakes which occurred in the first months of 2010 is described. In Haiti, renal support was organized by the Renal Disaster Relief Task Force (RDRTF) of the International Society of Nephrology (ISN) in close collaboration with Médecins Sans Frontières (MSF), and covered both patients with acute kidney injury (AKI) and patients with chronic kidney disease (CKD).
The majority of AKI patients (19/27) suffered from crush sy... Vanholder, Raymond; Borniche, D.; Claus, Stefaan; Correa-rotter, R.; Crestani, R.; Ferir, Mc; Gibney, N.; Hurtado, A.; Luyckx, Va; Portilla, D.; Rodriguez, S.; Sever, Ms; Vanmassenhove, Jill; Wainstein, R. 2011-01-01 147 Science.gov (United States) In October 2010, cholera appeared in Haiti for the first time in nearly a century. The Secretary-General of the United Nations formed an Independent Panel to "investigate and seek to determine the source of the 2010 cholera outbreak in Haiti". To fulfill this mandate, the Panel conducted concurrent epidemiological, water and sanitation, and molecular analysis investigations. Our May 2011 findings indicated that the 2010 Haiti cholera outbreak was caused by bacteria introduced into Haiti as a result of human activity; more specifically by the contamination of the Meye Tributary System of the Artibonite River with a pathogenic strain of the current South Asian type Vibrio cholerae. Recommendations were presented to assist in preventing the future introduction and spread of cholera in Haiti and worldwide. In this chapter, we discuss both the results of the Independent Panel's investigation and the context the report sat within; including background information, responses to the report's release, additional research subsequent to our report, and the public health implications of the Haiti cholera epidemic. PMID:23695726 Lantagne, Daniele; Balakrish Nair, G; Lanata, Claudio F; Cravioto, Alejandro 2014-01-01 148 Energy Technology Data Exchange (ETDEWEB) Charcoal and firewood play an important role in Haitian society and economics. In 1993, charcoal and firewood comprised 83 per cent of the total energy consumption in Haiti, an amount estimated to be equivalent to 1,607,000 tonnes of crude oil. The government of Haiti is making great efforts to encourage the substitution of charcoal and firewood as an energy source for both environmental and economic reasons. 
Despite all efforts, their project so far has failed. One of the reasons for this failure is the lack of government policy in which public institutions would be required to substitute charcoal and firewood. Saint-Jean, W. [Ministère de l'environnement de Haiti (Haiti)] 1997-06-01 149 Energy Technology Data Exchange (ETDEWEB) Source Philippe (on the island of La Gonave, near Haiti) is described in terms of climatic, sociological, agricultural and technical background. Because of drought conditions, it became necessary to develop a solar still to provide the town with sufficient fresh water. The still, which has been in operation since 1969, is described in some detail as is the construction process. Brackish and sea water are used to produce more than 1250 liters of fresh water each day. A windmill is used to pump the brackish water from a well to an elevated storage tank; it flows by gravity to solar still basins where it is vaporized, then condensed on a sloping glass surface and collected. Benefits of the solar still to the town's economy and health are discussed. Cost of the project was $17,000. 10 references. (MJJ) 1980-06-01 150 Digital Repository Infrastructure Vision for European Research (DRIVER) A new family of amplitude-only radiofrequency modulation waveforms for slice selection in Magnetic Resonance Imaging (MRI) is presented. Based on the scaling functions associated to different wavelet families, these new envelope waves provide higher slice selectivity than gaussian or sinc functions. They are also more compact on the time domain, since no truncation is needed, allowing to reduce the time required for the slice selection process. This feature is valuable... Vaquero, Juan José; Santos, Andrés; Pozo, Francisco Del 1996-01-01 151 Science.gov (United States) Comparing several series of images is not always easy as the corresponding slices often need to be selected manually.
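The slice-selectivity claim in the MRI entry above (entry 150) rests on a standard fact: in the small-tip-angle approximation the slice profile is approximately the Fourier transform of the RF envelope. The sketch below is my own illustration with hypothetical envelope parameters, comparing a truncated sinc with a Gaussian of similar width; it shows the sinc's sharper transition edge, not the wavelet scaling-function envelopes the paper actually proposes:

```python
import numpy as np

n = 1024
t = np.linspace(-1.0, 1.0, n)

# Two hypothetical RF envelopes of comparable width (not the paper's wavelets).
sinc_env = np.sinc(8.0 * t)                 # truncated sinc
gauss_env = np.exp(-0.5 * (t / 0.08) ** 2)  # gaussian

def slice_profile(env):
    """Small-tip-angle slice profile ~ |FFT of the RF envelope|, peak-normalized."""
    p = np.abs(np.fft.fftshift(np.fft.fft(env)))
    return p / p.max()

def edge_width(profile):
    """Bins between the 70% and 30% crossings on the positive-frequency side."""
    half = profile[len(profile) // 2:]
    return int(np.argmax(half < 0.3)) - int(np.argmax(half < 0.7))

p_sinc = slice_profile(sinc_env)
p_gauss = slice_profile(gauss_env)
# The truncated sinc gives a sharper slice edge than the gaussian.
print(edge_width(p_sinc), edge_width(p_gauss))
```

The trade-off the paper addresses is visible here: the sinc's sharp edge comes from truncating an infinite waveform, which costs pulse duration and introduces ripple, whereas compactly supported envelopes need no truncation.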
At a time when series contain an ever-increasing number of slices, this can mean manual work when moving several series to the corresponding slice. In particular, two situations were identified in this context: (1) patients with a large number of image series over time (such as patients with cancers that are monitored) frequently need to compare the series, for example to compare tumor growth over time. Manually adapting two series is possible, but with four or more series this can mean losing time. Automatically finding the closest slice by comparing visual similarity, even in older series with differing slice thickness and inter-slice distance, can save time and synchronize the viewing instantly. (2) analyzing visually similar image series of several patients can profit from being viewed in a synchronized way to compare the cases, so when sliding through the slices in one volume, the corresponding slices in the other volumes are shown. This application could be employed after content-based 3D image retrieval has found similar series, for example. Synchronized viewing can help find or confirm the most relevant cases quickly. To allow for synchronized viewing of several image volumes, the test image series are first registered by applying an affine transformation for the global registration of images, followed by diffeomorphic image registration. Then corresponding slices in the two volumes are estimated based on visual similarity. Once the registration is finished, the user can subsequently move inside the slices of one volume (reference volume) and can view the corresponding slices in the other volumes. These corresponding slices are obtained after a correspondence match in the registration procedure. These volumes are synchronized in that the slice closest to the original reference volume is shown even when the slice thicknesses or inter-slice distances differ, and this is automatically done by comparing the visual image content of the slices.
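The synchronization step described above (finding the closest slice when slice thickness and inter-slice distance differ) reduces, in its purely geometric part, to mapping slice centers through physical z-coordinates. A minimal sketch of that geometric part, leaving out the registration and visual-similarity matching the authors actually use; all names and parameters are hypothetical:

```python
def corresponding_slice(ref_index, ref_origin_z, ref_spacing,
                        other_origin_z, other_spacing, other_num_slices):
    """Map a slice index in a reference volume to the nearest slice in
    another volume via physical z-positions of the slice centers.
    Pure geometry only: the tool described above additionally registers
    the volumes and compares visual image content."""
    z = ref_origin_z + ref_index * ref_spacing          # physical position of the reference slice
    idx = round((z - other_origin_z) / other_spacing)   # nearest slice center in the other volume
    return max(0, min(other_num_slices - 1, idx))       # clamp to the valid index range
```

For example, slice 10 of a volume with 1.0 mm spacing corresponds to slice 4 of a co-registered volume with 2.5 mm spacing, and out-of-range positions are clamped to the nearest end of the stack.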
The tool has the potential to help in a variety of situations and it is currently being made available as a plugin for the popular OsiriX image viewer. Ali, Sharib; Foncubierta, Antonio; Depeursinge, Adrien; Mériaudeau, Fabrice; Ratib, Osman; Müller, Henning 2012-02-01 152 Science.gov (United States) The Quake-Catcher Network (QCN) is a versatile network of MEMS accelerometers that are used in combination with distributed volunteer computing to detect earthquakes around the world. Using a dense network of QCN stations installed in Christchurch, New Zealand after the 2010 M7.1 Darfield earthquake, hundreds of events in the Christchurch area were detected and rapidly characterized. When the M6.3 Christchurch event occurred on 21 February 2011, QCN sensors recorded the event and calculated its magnitude, location, and created a map of estimated shaking intensity within 7 seconds of the earthquake origin time. Successive iterations improved the calculations and, within 24 seconds of the earthquake, magnitude and location values were calculated that were comparable to those provided by GeoNet. We have rigorously tested numerous methods to create a working magnitude scaling relationship. In this presentation, we show a drastic improvement in the magnitude estimates using the maximum acceleration at the time of the first trigger and updated ground accelerations from one to three seconds after the initial trigger. 75% of the events rapidly detected and characterized by QCN are within 0.5 magnitude units of the official GeoNet reported magnitude values, with 95% of the events within 1 magnitude unit. We also test the QCN detection algorithms using higher quality data from the SCSN network in Southern California. We examine a dataset of M5 and larger earthquakes that occurred since 1995. We present the performance of the QCN algorithms for this dataset, including time to detection as well as location and magnitude accuracy. Chung, A.
I.; Cochran, E.; Yildirim, B.; Christensen, C. M.; Kaiser, A. E.; Lawrence, J. F. 2013-12-01 153 Science.gov (United States) The effects of plant defenses on herbivory can differ among spatial scales. This may be particularly common with indirect defenses, such as extrafloral nectaries (EFNs), that attract predatory arthropods and are dependent on predator distribution, abundance, and behavior. We tested the defensive effects of EFNs in quaking aspen (Populus tremuloides Michx.) against damage by a specialist herbivore, the aspen leaf miner (Phyllocnistis populiella Cham.), at the scale of individual leaves and entire ramets (i.e., stems). Experiments excluding crawling arthropods revealed that the effects of aspen EFNs differed at the leaf and ramet scales. Crawling predators caused similar reductions in the percent leaf area mined on individual leaves with and without EFNs. However, the extent to which crawling predators increased leaf miner mortality and, consequently, reduced mining damage increased with EFN expression at the ramet scale. Thus, aspen EFNs provided a diffuse defense, reducing damage to leaves across a ramet regardless of leaf-scale EFN expression. We detected lower leaf miner damage and survival unassociated with crawling predators on EFN-bearing leaves, suggesting that direct defenses (e.g., chemical defenses) were stronger on leaves with than without EFNs. Greater direct defenses on EFN-bearing leaves may reduce the probability of losing these leaves and thus weakening ramet-scale EFN defense. Aspen growth was not related to EFN expression or the presence of crawling predators over the course of a single season. Different effects of aspen EFNs at the leaf and ramet scales suggest that future studies may benefit from examining indirect defenses simultaneously at multiple scales. 
PMID:20931234 Mortensen, Brent; Wagner, Diane; Doak, Patricia 2011-04-01 154 Science.gov (United States) This article combines health and water research results, evidence from confidential documents released under the Freedom of Information Act, legal analysis, and discussion of historical context to demonstrate that actions taken by the international community through the Inter-American Development Bank are directly related to a lack of access to clean water in Haiti. The article demonstrates that these actions constitute a clear violation of Haitians' right to water under both domestic and international law. The article exposes the United States government's role in blocking the disbursal of millions of dollars in international bank loans that would have had life-saving consequences for the Haitian people. The loans were derailed in 2001 by politically-motivated interventions on behalf of the US and other members of the international community in direct violation of the Inter-American Development Bank charter. To demonstrate the impact of these interventions, the article presents data gathered in a study that employed human rights and public health methodologies to assess the right to water in Haiti. The data reveal that Haitians experience obstacles concerning every aspect of the right to water: difficulties with water availability, limited physical and economic accessibility, and poor water quality. The article provides a framework of concrete duties and obligations that should be followed by all actors involved in Haiti in order to realize Haitians' human right to water. In response to the undeniable link between the international community's political interference and the intolerably poor state of potable water in Haiti, the article concludes with a recommendation that all actors in Haiti follow a rights-based approach to the development and implementation of water projects in Haiti.
The full report of Wòch nan Soley: The Denial of the Right to Water in Haiti is available online at http://www.pih.org/inforesources/Reports/Hait_Report_FINAL.pdf. PMID:20845860 Varma, Monika Kalra; Satterthwaite, Margaret L; Klasing, Amanda M; Shoranick, Tammy; Jean, Jude; Barry, Donna; Fawzi, Mary C Smith; McKeever, James; Lyon, Evan 2008-01-01 155 CERN Multimedia Sliced Inverse Regression (SIR) is an effective method for dimension reduction in high-dimensional regression problems. The original method, however, requires the inversion of the predictors' covariance matrix. In the case of collinearity between these predictors or small sample sizes compared to the dimension, the inversion is not possible and a regularization technique has to be used. Our approach is based on a Fisher Lecture given by R.D. Cook where it is shown that SIR axes can be interpreted as solutions of an inverse regression problem. In this paper, a Gaussian prior distribution is introduced on the unknown parameters of the inverse regression problem in order to regularize their estimation. We show that some existing SIR regularizations can enter our framework, which permits a global understanding of these methods. Three new priors are proposed leading to new regularizations of the SIR method. A comparison on simulated data is provided. Bernard-Michel, C; Girard, S 2011-01-01 156 Digital Repository Infrastructure Vision for European Research (DRIVER) The Newtonian solid-mechanical theory of nodeless spheroidal and torsional modes of elastic seismic vibrations trapped in the crust of a quaking neutron star is outlined and applied to the modal classification of the quasi-periodic oscillations (QPOs) of X-ray luminosity in the aftermath of giant flares in SGR 1806-20 and SGR 1900+14. The presented analysis relies heavily on the Samuelsson-Andersson identification of the QPO frequencies in the range 30-200 Hz with those for torsi...
Bastrukov, Sergey; Chang, Hsiang-kuang; Molodtsova, Irina; Chen, Gwan-ting 2007-01-01 157 Science.gov (United States) The Mw 8.0 earthquake of 12 May 2008 that struck Sichuan Province of the People's Republic of China caused tens of thousands of casualties and significant social and economic consequences. The earthquake was triggered by a reverse fault approximately 100 km in length, of NE-SW strike, dipping towards the NW with a reverse-lateral slip character and focal depth of 18 km. Due to the great height and steepness of the slopes and their loose geotechnical characteristics in the mountainous terrain, thousands of landslides and collapses occurred in the Longmenshan fault zone during the earthquake, resulting in a large amount of geotechnical damage, such as the destruction of roads, villages, towns and bridges. A total of more than 9000 geological disasters occurred, among which there were approximately 4000 landslides, 2300 slope collapses, 800 debris flows, 1700 unstable slopes and more than 80 locations with hidden danger of geological hazard. Approximately 1,000,000 people and their properties in the affected area were under direct, serious threat. Landslides mobilized millions of cubic meters of rock and soil that slid across adjacent rivers, creating large landslide dams. The blockage of rivers was accompanied by the formation of quake lakes that were flooding the upstream river valleys. As water rises, there is a potential for overtopping and downstream flooding. In the affected area, 32 quake lakes of various scales were formed. The largest, and most dangerous, one is located in Beichuan County. The lake was formed because a massive landslide partially blocked the Qianjiang River upstream of the devastated Beichuan County seat. It is 40 m deep and contains about 30-40 million m3 of water.
The landslide dam had a height of 60 m, the quake lake in the Shitingjiang River direction is more than 900 m long, its largest width is more than 600 m, and its area at the dam crest level is about 300,000 m2. As of June 7, 2008 the reservoir capacity of the quake lake was 240 million cubic meters, which posed a threat to a significantly large area of the downstream zone; however, the hidden danger was relieved by dredging and water drainage. Lekkas, E. 2009-04-01 158 Directory of Open Access Journals (Sweden) Full Text Available The January 2010 earthquake devastated Haiti's social, economic and health infrastructure, leaving 2 million persons, one-fifth of Haiti's population, homeless. Internally displaced persons relocated to camps, where human rights remain compromised due to increased poverty, reduced security, and limited access to sanitation and clean water. This article draws on findings from 3 focus groups conducted with internally displaced young women and 3 focus groups with internally displaced young men (aged 18-24) in Léogâne, Haiti, to explore post-earthquake tent distribution practices. Focus group findings highlighted that community members were not engaged in developing tent distribution strategies. Practices that distributed tents to both children and parents, and linked food and tent distribution, inadvertently contributed to chaos, vulnerability to violence and family network breakdown. Moving forward, we recommend that tent distribution strategies in disaster contexts engage with community members, separate food and tent distribution, and support agency and strategies of self-protection among displaced persons. Carmen Helen Logie 2013-09-01 159 Scientific Electronic Library Online (English) 160 Directory of Open Access Journals (Sweden) Full Text Available O AUTOR apresenta uma resenha crítica do livro de C. L. R. James, editado, no Brasil, pela Boitempo, intitulado Os jacobinos negros. Toussaint L'Ouverture e a revolução de São Domingos.
James narra e analisa a rebelião dos escravos da colônia francesa situada na ilha de São Domingos, no final do século XVIII, como consequência da ação da Convenção surgida da Revolução Francesa de 1789, a qual proclamou a emancipação dos escravos. Nessa rebelião, o autor destaca a ação do líder negro Toussaint L'Ouverture, que, após derrotar exércitos da França, Espanha e Inglaterra, ganhou o domínio da colônia francesa. Em seguida, a obra de James se detém na determinação de Bonaparte de restaurar a escravidão e o envio da força expedicionária francesa comandada por Leclerc. Toussaint L'Ouverture viria a ser derrotado e aprisionado. Seus companheiros, Dessalines e outros, os jacobinos negros, prosseguiram o combate e conquistaram, em 1804, a Independência definitiva, batizando o País com o nome nativo de Haiti. Da Independência decorreram problemas que se prolongam até os dias atuais. THE AUTHOR presents a critical review of C.L.R. James' book The black Jacobins. Toussaint L'Ouverture and the San Domingo revolution (published in Brazil by Boitempo). James narrates and analyzes the late 18th century slave rebellion in the French colony located on the island of San Domingo as a consequence of the measures taken by the Convention, established after the French Revolution, which emancipated slaves. The author highlights the activities of black leader Toussaint L'Ouverture in the uprising, who after defeating the armies of France, Spain and England, won the governance of the former French colony. James also examines Bonaparte's determination to restore slavery and his decision to send a French expeditionary force commanded by Leclerc that would defeat and imprison Toussaint L'Ouverture - whose companions, Dessalines and others, the Black Jacobins, would continue to fight. Eventually, in 1804, they achieved definite independence, baptizing the country with the native name of Haiti, but the problems that ensued endure to this day.
Jacob Gorender 2004-04-01 161 Scientific Electronic Library Online (English) Full Text Available SciELO Brazil | Language: Portuguese Abstract in Portuguese O AUTOR apresenta uma resenha crítica do livro de C. L. R. James, editado, no Brasil, pela Boitempo, intitulado Os jacobinos negros. Toussaint L'Ouverture e a revolução de São Domingos. James narra e analisa a rebelião dos escravos da colônia francesa situada na ilha de São Domingos, no final do sécu [...] lo XVIII, como consequência da ação da Convenção surgida da Revolução Francesa de 1789, a qual proclamou a emancipação dos escravos. Nessa rebelião, o autor destaca a ação do líder negro Toussaint L'Ouverture, que, após derrotar exércitos da França, Espanha e Inglaterra, ganhou o domínio da colônia francesa. Em seguida, a obra de James se detém na determinação de Bonaparte de restaurar a escravidão e o envio da força expedicionária francesa comandada por Leclerc. Toussaint L'Ouverture viria a ser derrotado e aprisionado. Seus companheiros, Dessalines e outros, os jacobinos negros, prosseguiram o combate e conquistaram, em 1804, a Independência definitiva, batizando o País com o nome nativo de Haiti. Da Independência decorreram problemas que se prolongam até os dias atuais. Abstract in English THE AUTHOR presents a critical review of C.L.R. James' book The black Jacobins. Toussaint L'Ouverture and the San Domingo revolution (published in Brazil by Boitempo). James narrates and analyzes the late 18th century slave rebellion in the French colony located on the island of San Domingo as a con [...] sequence of the measures taken by the Convention, established after the French Revolution, which emancipated slaves. The author highlights the activities of black leader Toussaint L'Ouverture in the uprising, who after defeating the armies of France, Spain and England, won the governance of the former French colony.
James also examines Bonaparte's determination to restore slavery and his decision to send a French expeditionary force commanded by Leclerc that would defeat and imprison Toussaint L'Ouverture - whose companions, Dessalines and others, the Black Jacobins, would continue to fight. Eventually, in 1804, they achieved definite independence, baptizing the country with the native name of Haiti, but the problems that ensued endure to this day. Gorender, Jacob. 162 Science.gov (United States) The impact of the 12 January 2010 Haiti earthquake was catastrophic, causing serious damage to infrastructure and more than 200,000 deaths. Initially, the Haiti earthquake was assumed to have occurred with the movement of the Enriquillo-Plantain Garden Fault Zone (EPGFZ), but recent scientific studies have shown that the primary rupture occurred on an unmapped blind thrust fault in the Léogâne fan (associated with the Léogâne fault) near the EPGFZ (Figure 1a and 1b). The main purposes of this project are: characterizing and analyzing subsurface structures and associated hazards, characterizing the physical properties of the near-surface, and locating and understanding the blind faults theorized to have caused the 2010 earthquake (Léogâne fault). Surveys were conducted by a research group from the University of Houston in 2013 to address some of these goals. Surveys were mainly concentrated on the Léogâne fan (Figure 1c) and Lake Enriquillo (Figure 1d). For the Léogâne surveys, multiple 2D seismic lines were deployed with approximately N-S orientation. We performed both P wave and S wave refraction analyses and time-migrated the P wave data. The prominent change in both P wave and S wave velocities is interpreted as the effect of faulting. The CMP stacked section shows a multiple discontinuity profile whose location coincides with the anomalies observed in the P wave and S wave refraction velocity profiles. Extracted reflection coefficients also support a reflective structure at these offsets.
We interpret the anomalous structure as a north-dipping thrust fault. The dip of the fault is estimated at around 60°. Near-surface reflection seismic analysis provided deeper information indicating multiple layers with varying velocities, intersected by a number of faults. Gravity surveys were conducted along the main seismic line over the Léogâne fan, with additional surveys conducted from Jacmel to Léogâne and around the Port-au-Prince area. The estimated free-air gravity profile suggests that the variation of the gravitational field may be related to the proposed faults. More extensive surveys are expected to be conducted in January 2014. Figure 1 a- digital elevation map of Hispaniola, b- zoomed view of the Léogâne fan and Lake Enriquillo with gravity stations, c- surveys over the Léogâne area, d- chirp surveys over Lake Enriquillo Kocel, E.; Stewart, R.; Mann, P.; Dowla, N. 2013-12-01 163 Directory of Open Access Journals (Sweden) Full Text Available We present an algorithm to reduce the number of slices from 2D contour cross sections. The main aim of the algorithm is to filter less significant slices while preserving an acceptable level of output quality and keeping the computational cost to reconstruct surface(s) at a minimal level. This research is motivated mainly by two factors: first, 2D cross-section data are often huge in size and high in precision, and the computational cost to reconstruct surface(s) from them is closely related to the size and complexity of these data. Second, we can trade visual fidelity for computation speed if we remove visually insignificant data from the original dataset, which may contain redundant information. In our algorithm, we use the number of contour points on a pair of slices to calculate the distance between them. Selection to retain/reject a slice is based on this distance compared against a threshold value. An optimal threshold value is derived to produce a set of slices that collectively represents the features of the dataset.
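The retain/reject rule of entry 163 can be sketched as a greedy filter. This is a simplified reading: the distance here is just the difference in contour-point counts, and the paper's derivation of an optimal threshold is not reproduced:

```python
def reduce_slices(contour_counts, threshold):
    """Greedy slice filtering: keep a slice only when its 'distance' from
    the most recently kept slice reaches the threshold. The distance is a
    simplification of the measure described above (difference in
    contour-point counts); the first and last slices are always retained."""
    kept = [0]
    for i in range(1, len(contour_counts)):
        if abs(contour_counts[i] - contour_counts[kept[-1]]) >= threshold:
            kept.append(i)
    if kept[-1] != len(contour_counts) - 1:      # keep the final slice as well
        kept.append(len(contour_counts) - 1)
    return kept
```

For counts [100, 102, 101, 140, 139, 90] with threshold 20, slices 1, 2 and 4 are judged visually insignificant relative to their nearest kept neighbors and dropped, leaving indices [0, 3, 5].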
We tested our algorithm over six different sets of data, varying in complexity and size. The results show that the slice reduction rate depends on the complexity of the dataset, with the highest reduction percentage achieved for objects with lots of constant local variations. Our derived optimal thresholds seem to be able to produce the right set of slices with the potential of creating surface(s) that trade off accuracy and speed requirements. Z.A. Rajion 2006-09-01 164 Science.gov (United States) A cholera epidemic has claimed the lives of more than 8,000 Haitians and sickened 650,000 since the outbreak began in October 2010. Early intervention in the epidemic focused on case-finding, treatment, and water and sanitation interventions for prevention of transmission. Use of oral cholera vaccine (OCV) as part of a complementary set of control activities was considered but initially rejected by policymakers. In December 2011, the Minister of Health of Haiti called for a demonstration of the acceptability and feasibility of the use of OCV in urban and rural Haiti. This paper describes the collaborative activity that offered OCV to one region of the Artibonite Department of rural Haiti in addition to other ongoing treatment and control measures. Despite logistical and cold-chain challenges, 45,417 persons were successfully vaccinated with OCV in the region, and 90.8% of these persons completed their second dose. PMID:24106187 Ivers, Louise C; Teng, Jessica E; Lascher, Jonathan; Raymond, Max; Weigel, Jonathan; Victor, Nadia; Jerome, J Gregory; Hilaire, Isabelle J; Almazor, Charles P; Ternier, Ralph; Cadet, Jean; Francois, Jeannot; Guillaume, Florence D; Farmer, Paul E 2013-10-01 165 CERN Multimedia The ATLAS experiment at the LHC will face the challenge of selecting interesting candidate events in pp collisions at 14 TeV center-of-mass energy, while rejecting the enormous number of background events. The trigger system architecture is organized in three levels.
From an interaction rate of 1 GHz, the hardware-implemented First Level trigger reduces the rate to around 100 kHz. Then the software-based High Level Trigger (HLT), composed of the Second Level trigger and the Event Filter, reduces the rate to ~200 Hz. The HLT is implemented on commercial CPUs using a framework built on the common ATLAS object-oriented software architecture. Inclusive trigger selections are used to collect events for the ATLAS physics programme; final states with muons are crucial for electroweak precision measurements as well as Higgs and SUSY searches. In this paper we present the implementation of the muon slice, signal efficiencies, background rejection rates and system performances (execution time,...) for online muon selec... Sidoti, A; Biglietti, M; Carlino, G; Cataldi, G; Conventi, F; De Cecco, S; Di Mattia, A; Dionisi, C; Falciano, S; Giagu, S; Gorini, E; Grancagnolo, S; Inada, M; Kanaya, N; Kohno, T; Krasznahorkay, A; Kiyamura, H; Kurasige, H; Kuwabara, T; Luci, C; Luminari, L; Marzano, F; Migliaccio, A; Nagano, K; Nisati, A; Omachi, C; Panikashvili, N; Pasqualucci, E; Primavera, M; Rescigno, M; Riu, I; Ryan, P; Scannicchio, D A; Siragusa, G; Tarem, S; Tarem, Z; Tokushuku, K; Usai, G; Ventura, A; Vercesi, V; Yamazaki, Y 2007-01-01 166 Science.gov (United States) Prior to the epidemic that emerged in Haiti in October of 2010, cholera had not been documented in this country. After its introduction, a strain of Vibrio cholerae O1 spread rapidly throughout Haiti, where it caused over 600,000 cases of disease and >7,500 deaths in the first two years of the epidemic. We applied whole-genome sequencing to a temporal series of V. cholerae isolates from Haiti to gain insight into the mode and tempo of evolution in this isolated population of V. cholerae O1.
Phylogenetic and Bayesian analyses supported the hypothesis that all isolates in the sample set diverged from a common ancestor within a time frame that is consistent with epidemiological observations. A pangenome analysis showed nearly homogeneous genomic content, with no evidence of gene acquisition among Haiti isolates. Nine nearly closed genomes assembled from continuous long-read data showed evidence of genome rearrangements and supported the observation of no gene acquisition among isolates. Thus, intrinsic mutational processes can account for virtually all of the observed genetic polymorphism, with no demonstrable contribution from horizontal gene transfer (HGT). Consistent with this, the 12 Haiti isolates tested by laboratory HGT assays were severely impaired for transformation, although unlike previously characterized noncompetent V. cholerae isolates, each expressed hapR and possessed a functional quorum-sensing system. Continued monitoring of V. cholerae in Haiti will illuminate the processes influencing the origin and fate of genome variants, which will facilitate interpretation of genetic variation in future epidemics. Katz, Lee S.; Petkau, Aaron; Beaulaurier, John; Tyler, Shaun; Antonova, Elena S.; Turnsek, Maryann A.; Guo, Yan; Wang, Susana; Paxinos, Ellen E.; Orata, Fabini; Gladney, Lori M.; Stroika, Steven; Folster, Jason P.; Rowe, Lori; Freeman, Molly M.; Knox, Natalie; Frace, Mike; Boncy, Jacques; Graham, Morag; Hammer, Brian K.; Boucher, Yan; Bashir, Ali; Hanage, William P.; Van Domselaar, Gary; Tarr, Cheryl L. 2013-01-01 167 Energy Technology Data Exchange (ETDEWEB) A slice bar is proposed for a clearing hammer, which contains a cylindrical shaft, a collar, an operational part and a point with facets. To increase the reliability and to reduce wear, the point facets are made convex with a curvature radius which meets the relation 3 < R/d < r, where R is the curvature radius; d is the diameter of the slice bar stem.
A longitudinal, circular groove is made on each facet with a radius of the generatrix of the circumference which is less than half the diameter of the slice bar stem. Fitingof, Yu.P.; Gudkov, G.D.; Kardashov, D.M.; Korol, L.B. 1982-01-01 168 Energy Technology Data Exchange (ETDEWEB) The investigation evaluated potential market size, financial viability, consumer acceptance, and the government policy role in promoting the manufacture and sale of briquettes in Haiti. Our results show a large and growing charcoal market in Port-au-Prince of 100,000 to 120,000 tonnes per year in 1985, much larger than previous estimates. This would support a 50,000-tonne-per-year coal briquetting plant. Wood users buying in lots of 100 pieces or less would provide a smaller, secondary market of about 6000 tonnes of charcoal equivalent per year. The size and competitive nature of the current charcoal transportation, wholesale, and retail distribution chain make it easily capable of distributing the coal briquettes. We investigated three coal briquetting options, each based on a different coal source: (1) Maissade lignite, (2) L'Azile lignite, and (3) imported coal. Financial analyses compare capital and operating costs with potential returns. Results indicate that the Maissade lignite is not economically viable in competition with charcoal at current charcoal prices. Both the L'Azile and imported coal options hold more promise. The investment incentives provided by the Haitian government are very favorable to a coal briquetting venture. An increased tax on charcoal, currently priced below its social cost, is recommended. Stevenson, G.G.; Willson, T.D.; Jean-Poix, C.; Medina, N.
1987-06-01 169 Directory of Open Access Journals (Sweden) Full Text Available As the world joins forces to support the people of Haiti on their long road of recovery following the January 2010 earthquake, plans and strategies should take into consideration past experiences from other post-disaster recovery efforts with respect to integrating ecosystem considerations. Sound ecosystem management can both support the medium- and long-term needs of recovery and help to buffer the impacts of future extreme natural events, which for Haiti are likely to include both hurricanes and earthquakes. An additional challenge will be to incorporate the potential impacts of climate change into ecosystem management strategies. Jeffrey McNeely 2011-03-01 170 Digital Repository Infrastructure Vision for European Research (DRIVER) On January 12, 2010, a 7.0 magnitude earthquake struck Haiti. All told, more than 240,000 perished; another 200,000 were injured; and one-half of the city's 2,000,000 residents were left homeless. In March I volunteered with Medishare to help with the relief effort. Being a family physician, broadly trained in all aspects of medicine, I knew many of my skills would be needed. In the 7 days I was in Haiti, I worked excruciatingly long hours, witnessed the sorrow of death and joy of birth, an... Mckersie, Robert C. 2010-01-01 171 Haiti: International Engagement in Fragile States: Can't we do better? (OECD) Now available: 2011 Monitoring Survey on the Fragile States Principles. Four years after ministers of the OECD Development Assistance Committee endorsed the Principles for Good International ...
172 International Nuclear Information System (INIS) We have developed software, 'SLICE', which maps various kinds of plasma experimental data measured at different geometrical positions of JT-60U and JFT-2M onto the equilibrium magnetic configuration and treats them as a function of volume-averaged minor radius ρ. Experimental data can be handled uniformly by using 'SLICE'. The many commands of 'SLICE' make it easy to process the mapped data. The experimental data measured as line-integrated values are also transformed by Abel inversion. The mapped data are fitted to a functional form and saved to the database 'MAPDB'. 'SLICE' can read the data from 'MAPDB' and re-display and transform them. In addition, 'SLICE' creates run data for the orbit-following Monte Carlo code 'OFMC' and the tokamak predictive and interpretation code system 'TOPICS'. This report summarizes an outline and the usage of 'SLICE'. (author) 1993-01-01 173 Science.gov (United States) The lifespan of an acute brain slice is approximately 6-12 hours, limiting potential experimentation time. We have designed a new recovery incubation system capable of extending this lifespan to more than 36 hours. This system controls the temperature of the incubated artificial cerebrospinal fluid (aCSF) while continuously passing the fluid through a UVC filtration system and simultaneously monitoring temperature and pH. The combination of controlled temperature and UVC filtering maintains bacteria levels in the lag phase and leads to the dramatic extension of the brain slice lifespan. Brain slice viability was validated through electrophysiological recordings as well as live/dead cell assays. This system benefits researchers by monitoring incubation conditions and standardizing this artificial environment. It further provides viable tissue for two experimental days, reducing the time spent preparing brain slices and the number of animals required for research.
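The Abel inversion mentioned in the 'SLICE' entry (172), which converts line-integrated chord measurements into local radial profiles, can be illustrated with a generic onion-peeling scheme. This is not JT-60U's actual implementation; the shell geometry, chord arrangement and all names are assumptions:

```python
import numpy as np

def path_lengths(r_edges):
    """L[i, j]: length of chord i (impact parameter r_edges[i]) inside the
    annular shell bounded by r_edges[j] and r_edges[j+1]. One chord is
    assumed tangent to each shell boundary, giving a triangular system."""
    n = len(r_edges) - 1
    L = np.zeros((n, n))
    for i in range(n):
        y = r_edges[i]
        for j in range(i, n):  # chord i only crosses shells outside its tangency radius
            inner = np.sqrt(max(r_edges[j] ** 2 - y ** 2, 0.0))
            outer = np.sqrt(max(r_edges[j + 1] ** 2 - y ** 2, 0.0))
            L[i, j] = 2.0 * (outer - inner)  # two crossings per shell
    return L

def abel_invert(chord_integrals, r_edges):
    """Onion-peeling Abel inversion: recover shell-averaged emissivity from
    line-integrated measurements, assuming radial symmetry."""
    return np.linalg.solve(path_lengths(r_edges), chord_integrals)
```

A round trip (forward-project a known profile with the same geometry, then invert) recovers the profile exactly, since onion peeling solves that triangular system directly; real diagnostics add noise and require regularized variants.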
Buskila, Yossi; Breen, Paul P.; Tapson, Jonathan; van Schaik, Andre; Barton, Matthew; Morley, John W. 2014-01-01 174 Digital Repository Infrastructure Vision for European Research (DRIVER) The development of information technology has brought threats to human society, as it has seriously influenced global politics, economics, military affairs, etc. Among information system security issues, buffer overrun vulnerability is undoubtedly one of the most important and common vulnerabilities. This paper describes a technology, named program slicing, to detect buffer overflow vulnerabilities in security-critical C code. First, we use slicing technology to analyze the variables whic... 2010-01-01 175 International Nuclear Information System (INIS) In this paper, a conventional single-slice helical CT apparatus was compared with a newly developed multi-slice helical CT apparatus. The SSPz (slice-sensitivity profile) and image noise of single-slice CT and multi-slice CT were compared in non-helical scan. Sufficiently thin and satisfactorily rectangular SSPz were acquired without making an effort to stop down the beam in the multi-slice helical CT apparatus. This means that it is a satisfactorily effective apparatus that allows high resolution with low exposure doses. Comparison of the SSPz showed that the multi-slice helical CT provided slightly poorer resolution than the single-slice helical CT, but according to the results of this study the former would be superior to the latter if the optimal helical pitch were selected. Multi-slice CT reduced the image noise and provided better noise quality. Additionally, multi-slice CT had better resolution at low contrast. It is important to understand that the low contrast resolution is greatly influenced by the reconstruction method, X-ray detection capacity of the detectors, and differences in the apparatus itself.
Under the same X-ray conditions (tube voltage and tube current), the exposure dose increased, but by changing the analysis conditions, it was possible to reduce the dose. In clinical practice, multi-slice helical CT provides good resolution, but there were some problems with scanning. In conclusion, multi-slice CT can detect 3-D images of vessels in a short time; however, the short scanning time makes the contrast medium injection technique more difficult. Automated methods of timing the injections are needed. (K.H.) 2000-01-01 176 International Nuclear Information System (INIS) Full text: The Pink Hibiscus Mealybug (PHM), Maconellicoccus hirsutus (Green) (Hemiptera: Pseudococcidae), very likely originated from Asia and was first observed in the Western Hemisphere in 1994 on the island of Grenada. Since then, the insect has spread to over 31 Caribbean islands, plus countries in South America, Central America and North America. The PHM is very polyphagous, associated with some 300 plant species including fruits, vegetables, ornamentals and trees, and very prolific, with up to 500-600 eggs/female. This mealybug was introduced into the American continent without its natural enemies and has the potential of rapidly becoming a very serious threat to the agricultural industry and the environment of the region. In Haiti, the PHM was observed for the first time in the metropolitan area of Port-au-Prince, the capital, in May 2002. In July 2002, in a cooperative effort between the Ministry of Agriculture of Haiti, the United States Department of Agriculture, Animal and Plant Health Inspection Service, Plant Protection and Quarantine, and International Services (USDA, APHIS, PPQ and IS), the International Institute for Cooperation on Agriculture (IICA) and the Food and Agriculture Organization (FAO), a biological control programme was developed for Haiti.
The first action for the management of the PHM in Haiti was to initiate a public awareness campaign and train local technicians. The PHM biological control programme started with the technical assistance of the USDA, APHIS, PPQ and IS, and the support of the Puerto Rico Department of Agriculture (PRDA), which managed the insectary operation and provided two exotic parasitoids Anagyrus kamali and Gyranusoidea indica (both Hymenoptera: Encyrtidae). From July 2002 to January 2004 Haiti received 180,000 parasitoids from PRDA. In April 2003 the National Association of Mango Exporters of Haiti (ANEM) and the US Agency for International Development (USAID) representative in Haiti collectively developed support through the Haiti Ministry of Agriculture in order to establish an insectary to mass-produce locally the exotic parasitoids A. kamali and G. indica. From October 2003 to November 2004, 265,000 parasitoids were mass-produced at the Haiti insectary. These parasitoids were released in Haiti in PHM infested areas at the rate of 200 to 400 individuals per species per site and a distance of about one mile between releases. Six study sites were selected using infested hibiscus plants as field hosts and sampled for about one year in order to monitor the impact of the parasitoids on the population density of PHM. The results of the study indicated a 98% reduction in the PHM population density by the parasitoids, which maintained an average of 14% parasitisation following the mealybug population decline. The PHM has the capability of spreading across the country, but at a reduced rate of distribution since the implementation of this successful biological control programme. The Haiti Ministry of Agriculture continues to survey for new PHM infested areas and is prepared to release parasitoids as necessary to ensure the continued success of the PHM biological control programme. 
(author) 2005-05-09 177 Digital Repository Infrastructure Vision for European Research (DRIVER) This article bridges global and Haiti-specific debates on statehood, the political economy of state and state (de)formation, as well as the conceptualization and measurement of those phenomena. Drawing on data sets and secondary literatures from Haiti and beyond, it argues that despite the unique features of the extremely weak state in Haiti, that case can usefully be compared to the range of weak to fairly strong states in Latin America and the Caribbean. In the process, the article makes a ... STEPHEN BARANYI 2012-01-01 178 International Nuclear Information System (INIS) We compute the characteristic parameters of the magneto-dipole radiation of a neutron star undergoing torsional seismic vibrations under the action of a Lorentz restoring force about an axis of a dipolar magnetic field experiencing decay. After a brief outline of the general theoretical background of the model of a vibration-powered neutron star, we present numerical estimates of basic vibration and radiation characteristics, such as frequency, lifetime and luminosity, and investigate their time dependence on magnetic field decay. The presented analysis suggests that a gradual decrease in frequencies of pulsating high-energy emission detected from a handful of currently monitored AXP/SGR-like X-ray sources can be explained as being produced by the vibration-powered magneto-dipole radiation of quaking magnetars. 2011-09-01 179 CERN Document Server Within the framework of Newtonian magneto-solid-mechanics, relying on equations appropriate for a perfectly conducting elastic continuous medium threaded by a uniform magnetic field, an asteroseismic model of a neutron star undergoing global differentially rotational, torsional, nodeless vibrations under the combined action of Hooke's elastic and Lorentz magnetic forces is considered with emphasis on the toroidal Alfvén mode.
The obtained spectral equation for frequency is applied to l-pole identification of quasi-periodic oscillations (QPOs) of X-ray flux during flares of SGR 1806-20 and SGR 1900+14. Our calculations suggest that detected QPOs can be consistently interpreted as produced by global torsional nodeless vibrations of a quaking magnetar if they are considered to be restored by the joint action of bulk forces of shear elastic and magnetic field stresses. Bastrukov, S I; Chang, H -K; Molodtsova, I V; Podgainy, D V 2008-01-01 180 CERN Document Server It is shown that depletion of the magnetic field pressure in a quaking neutron star undergoing Lorentz-force-driven torsional seismic vibrations about the axis of its dipole magnetic moment is accompanied by the loss of vibration energy of the star that causes its vibration period to lengthen at a rate proportional to the rate of magnetic field decay. Highlighted is the magnetic-field-decay induced conversion of the energy of differentially rotational Alfvén vibrations into the energy of oscillating magneto-dipole radiation. A set of representative examples illustrating the vibration-energy-powered emission with elongating periods due to magnetic field decay are considered and discussed in the context of the theory of magnetars. Bastrukov, S I; Xu, R X; Yu, J W 2010-01-01 181 Directory of Open Access Journals (Sweden) Full Text Available The local 'when' for earthquake prediction is based on the connection between geomagnetic 'quakes' and the next incoming minimum or maximum of the tidal gravitational potential. The probability time window for the predicted earthquake is approximately ±1 day for the tidal minimum and ±2 days for the maximum. The preliminary statistical estimation on the basis of the distribution of the time difference between occurred and predicted earthquakes for the period 2002-2003 for the Sofia region is given.
The possibility of creating a local 'when, where' earthquake research and prediction NETWORK is based on the accurate monitoring of the electromagnetic field with special space and time scales under, on and over the Earth's surface. The periodically upgraded information from seismic hazard maps and other standard geodetic information, as well as other precursory information, is essential. S. Cht. Mavrodiev 2004-01-01 182 CERN Document Server The local 'when' earthquake prediction is based on the connection between geomagnetic 'quakes' and the next incoming minimum or maximum of the tidal gravitational potential. The probability time window for the predicted earthquake is approximately ±1 day for the tidal minimum and ±2 days for the maximum. The preliminary statistical estimation on the basis of the distribution of the time difference between occurred and predicted earthquakes for the period 2002-2003 for the Sofia region is given. The possibility of creating a local 'when, where' earthquake research and prediction NETWORK is based on the accurate monitoring of the electromagnetic field with special space and time scales under, on and over the Earth's surface. The periodically upgraded information from seismic hazard maps and other standard geodetic information, as well as other precursory information, is essential. Mavrodiev, S C 2004-01-01 183 International Nuclear Information System (INIS) Radiation controls are on an increasing trend in Haiti. The corresponding trend is the making of a national authority that will oversee all private and public establishments where ionizing radiation sources are being used on a diagnostic basis or for therapeutic purposes. The primary purpose of this authority is to improve the regulatory framework for radiation protection but also to lay out mechanisms for controlling sources. With IAEA help and expertise, a national programme is being implemented that will reflect priorities of the international Basic Safety Standards.
Our goal in this paper was to provide a model authority based on the legal culture of the country and the mindset of healthcare workers. The unique feature of this proposed model is that it places greater emphasis on responding to a health priority, and on greater government willingness to have an independent body to regulate every single user of ionizing radiation; this flexible model can be implemented with minimal expenditures for our national budget. The following key services have been identified to provide the needed control mechanism for the Authority: Administrative Affairs Services; Personal Dosimetry Services; Nuclear and Radiological Safety Services; Legal Affairs Services. Reduced exposure of x-ray workers, greater discipline in the use of nuclear and radiological technology, and availability of state-of-the-art technology can be achieved only if such a national body is effectively implemented by means of a national decree binding all citizens. A basic inventory model is annexed for the purposes of assessing current needs in radiation protection. (author) 2003-07-01 184 Science.gov (United States) With increasing population displacement and worsening water insecurity after the 2010 earthquake, Haiti experienced a large cholera outbreak. Our goal was to evaluate the strengths and weaknesses of seven community health facilities' ability to respond to a surge in cholera cases. Since 2010, Catholic Relief Services (CRS), with a number of public and private donors, has been working with seven health facilities in an effort to reduce morbidity and mortality from cholera infection. In November 2012, CRS, through the Centers for Disease Control and Prevention (CDC)'s support, asked the Johns Hopkins Center for Refugee and Disaster Response to conduct a cholera surge simulation tabletop exercise at these health facilities to improve each facility's response in the event of a cholera surge.
Using simulation development guidelines from the Pan American Health Organization and others, a simulation scenario script was produced that included situations of differing severity, supply chain issues, as well as a surge of patients. A total of 119 hospital staff from seven sites participated in the simulation exercise, including community health workers, clinicians, managers, pharmacists, cleaners, and security guards. Clinics that had challenges during the simulated clinical care of patients were those that did not appropriately treat all cholera patients according to protocol, particularly those that were vulnerable, those that would need additional staff to properly treat patients during a surge of cholera, and those that required a better inventory of supplies. Simulation-based activities have the potential to identify healthcare delivery system vulnerabilities that are amenable to intervention prior to a cholera surge. PMID:24481887 Mobula, Linda Meta; Jacquet, Gabrielle A; Weinhauer, Kristin; Alcidas, Gladys; Thomas, Hans-Muller; Burnham, Gilbert 2013-01-01 185 Science.gov (United States) The goal of this exercise is to explore US foreign policy and the way US citizens view these policies and their implementation. In this exercise, we will explore foreign policy towards Haiti in 1994. Frequency tables and crosstabs will be used. Icpsr 186 Digital Repository Infrastructure Vision for European Research (DRIVER) On 12 January 2010, an earthquake hit the city of Port-au-Prince, capital of Haiti. The earthquake reached a magnitude Mw 7.0 and the epicenter was located near the town of Léogâne, approximately 25 km west of the capital.
Molina Palacios, Sergio; Torres Fernández, Yolanda; Moise, Junie; Benito Oterino, Belén 2011-01-01 187 Digital Repository Infrastructure Vision for European Research (DRIVER) Abstract Background Implementation of the World Health Organization's DOTS strategy (Directly Observed Treatment Short-course therapy) can result in significant reduction in tuberculosis incidence. We estimated potential costs and benefits of DOTS expansion in Haiti from the government and societal perspectives. Methods Using decision analysis incorporating multiple Markov processes (Markov modelling), we compared expected tuberculosis morbidity, mortality and ... Jacquet Vary; Morose Willy; Schwartzman Kevin; Oxlade Olivia; Barr Graham; Grimard Franque; Menzies Dick 2006-01-01 188 Directory of Open Access Journals (Sweden) Full Text Available High neonatal mortality in Haiti is sustained by limited access to essential maternity services, particularly for Haiti's rural population. We investigated the feasibility of a rural birthing home model to provide basic prenatal, delivery, and neonatal services for women with uncomplicated pregnancies while simultaneously providing triage and transport of women with pregnancy-related complications. The model included consideration of the local context, including women's perceptions of barriers to healthcare access and available resources to implement change. Evaluation methods included the performance of a baseline community census and collection of pregnancy histories from 791 women living in a defined area of rural Haiti. These retrospective data were compared with pregnancy outcomes for 668 women subsequently receiving services at the birthing home. Of 764 reported most recent pregnancies in the baseline survey, 663 (87%) occurred at home with no assistance from skilled health staff.
Of 668 women followed after opening of the birthing home, 514 (77%) subsequently gave birth at the birthing home, 94 (14%) were referred to a regional hospital for delivery, and only 60 (9%) delivered at home or on the way to the birthing home. Other measures of clinical volume and patient satisfaction also indicated positive changes in health care seeking. After introduction of the birthing home, fewer neonates died than predicted by historical information or national statistics. The present experience points out the feasibility of a rural birthing home model to increase access to essential maternity services. Elizabeth Wickstrom 2007-10-01 189 Digital Repository Infrastructure Vision for European Research (DRIVER) Stopping the spread of the cholera epidemic in Haiti required engaging community health workers (CHWs) in prevention and treatment activities. The Centers for Disease Control and Prevention collaborated with the Haitian Ministry of Public Health and Population to develop CHW educational materials, train >1,100 CHWs, and evaluate training efforts. Rajasingham, Anu; Bowen, Anna; O'Reilly, Ciara; Sholtes, Kari; Schilling, Katie; Hough, Catherine; Brunkard, Joan; Domercant, Jean Wysler; Lerebours, Gerald; Cadet, Jean; Quick, Robert; Person, Bobbie 2011-01-01 190 Science.gov (United States) Stopping the spread of the cholera epidemic in Haiti required engaging community health workers (CHWs) in prevention and treatment activities. The Centers for Disease Control and Prevention collaborated with the Haitian Ministry of Public Health and Population to develop CHW educational materials, train >1,100 CHWs, and evaluate training efforts.
PMID:22204034 Rajasingham, Anu; Bowen, Anna; O'Reilly, Ciara; Sholtes, Kari; Schilling, Katie; Hough, Catherine; Brunkard, Joan; Domercant, Jean Wysler; Lerebours, Gerald; Cadet, Jean; Quick, Robert; Person, Bobbie 2011-11-01 191 Science.gov (United States) Stopping the spread of the cholera epidemic in Haiti required engaging community health workers (CHWs) in prevention and treatment activities. The Centers for Disease Control and Prevention collaborated with the Haitian Ministry of Public Health and Population to develop CHW educational materials, train >1,100 CHWs, and evaluate training efforts. Bowen, Anna; O'Reilly, Ciara; Sholtes, Kari; Schilling, Katie; Hough, Catherine; Brunkard, Joan; Domercant, Jean Wysler; Lerebours, Gerald; Cadet, Jean; Quick, Robert; Person, Bobbie 2011-01-01 192 Science.gov (United States) LA-ICP-MS analyses (90 µm) of "black glass" spherules and secondary clay minerals from Beloc, Haiti, show that alteration causes a drastic loss of most trace elements, a significant change in the REE distribution pattern and the Nb/Ta ratio. Ritter, X.; Deutsch, A.; Berndt, J.; Robin, E. 2012-09-01 193 Science.gov (United States) Data relating to population and family planning in 11 foreign countries are presented in these situation reports. Countries included are Bahamas, Bermuda, Bolivia, China, Costa Rica, Guadeloupe, Haiti, Hong Kong, Liberia, Mexico, and Panama. Information is provided under two topics, general background and family planning situation, where International Planned Parenthood Federation, London (England). 194 Science.gov (United States) Wafer manufacturing (or wafer production) refers to a series of modern manufacturing processes of producing single-crystalline or poly-crystalline wafers from crystal ingots (or boules) of different sizes and materials.
The majority of wafers are single-crystalline silicon wafers used in microelectronics fabrication, although there is increasing importance in slicing poly-crystalline photovoltaic (PV) silicon wafers as well as wafers of different materials such as aluminum oxide, lithium niobate, quartz, sapphire, III-V and II-VI compounds, and others. Slicing is the first major post-crystal-growth manufacturing process toward wafer production. The modern wiresaw has emerged as the technology for slicing various types of wafers, especially large silicon wafers, gradually replacing the ID saw, which was the technology for wafer slicing in the last 30 years of the 20th century. The modern slurry wiresaw has been deployed to slice wafers from small to large diameters with varying wafer thickness, characterized by minimum kerf loss and high surface quality. The need to slice large crystal ingots (300 mm in diameter or larger) effectively with minimum kerf losses and high surface quality has made it indispensable to employ the modern slurry wiresaw as the preferred tool for slicing. In this chapter, advances in technology and research on modern slurry wiresaw manufacturing machines and technology are reviewed. Fundamental research in modeling and control of the modern wiresaw manufacturing process is required in order to understand the cutting mechanism and to make it relevant for improving industrial processes. To this end, investigation and research have been conducted on the modeling, characterization, metrology, and control of modern wiresaw manufacturing processes to meet the stringent precision requirements of the semiconductor industry. Research results in mathematical modeling, numerical simulation, experiments, and composition of slurry versus wafer quality are presented. Summary and further reading are also provided.
Kao, Imin; Chung, Chunhui; Moreno Rodriguez, Roosevelt 195 Directory of Open Access Journals (Sweden) Full Text Available Abstract Background Implementation of the World Health Organization's DOTS strategy (Directly Observed Treatment Short-course therapy) can result in significant reduction in tuberculosis incidence. We estimated potential costs and benefits of DOTS expansion in Haiti from the government and societal perspectives. Methods Using decision analysis incorporating multiple Markov processes (Markov modelling), we compared expected tuberculosis morbidity, mortality and costs in Haiti with DOTS expansion to reach all of the country, and achieve WHO benchmarks, or if the current situation did not change. Probabilities of tuberculosis-related outcomes were derived from the published literature. Government health expenditures, patient and family costs were measured in direct surveys in Haiti and expressed in 2003 US$. Results Starting in 2003, DOTS expansion in Haiti is anticipated to cost $4.2 million and result in 63,080 fewer tuberculosis cases, 53,120 fewer tuberculosis deaths, and net societal savings of $131 million, over 20 years. Current government spending for tuberculosis is high, relative to the per capita income, and would be only slightly lower with DOTS. Societal savings would begin within 4 years, and would be substantial in all scenarios considered, including higher HIV seroprevalence or drug resistance, unchanged incidence following DOTS expansion, or doubling of initial and ongoing costs for DOTS expansion. Conclusion A modest investment for DOTS expansion in Haiti would provide considerable humanitarian benefit by reducing tuberculosis-related morbidity, mortality and costs for patients and their families. These benefits, together with projected minimal Haitian government savings, argue strongly for donor support for DOTS expansion.
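The Markov cohort idea behind the DOTS cost-benefit record above can be sketched in a few lines. The states, yearly transition probabilities, and per-year costs below are illustrative assumptions for the sketch, not the study's calibrated Haitian values.

```python
# Minimal Markov cohort sketch: a cohort of active-TB cases moves yearly into
# "cured" or "dead"; costs accrue per active case-year. All parameter values
# here are hypothetical placeholders.

def run_cohort(p_cure, p_die=0.10, cost_per_active_year=50.0,
               years=20, cohort=1000.0):
    """Advance the cohort year by year; return (total deaths, total cost)."""
    active, dead, cost = cohort, 0.0, 0.0
    for _ in range(years):
        cost += active * cost_per_active_year
        newly_cured = active * p_cure
        newly_dead = active * p_die
        active -= newly_cured + newly_dead  # those still sick next year
        dead += newly_dead
    return dead, cost

# Compare a status-quo cure probability against an assumed expanded-DOTS one.
deaths_base, cost_base = run_cohort(p_cure=0.40)
deaths_dots, cost_dots = run_cohort(p_cure=0.80)
print(deaths_dots < deaths_base)  # prints True: higher cure rate, fewer deaths
```

Real decision-analytic models add many more states (latent infection, relapse, HIV co-infection) and discounting, but the mechanics are the same yearly matrix-style update.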
Barr Graham 2006-08-01 196 Science.gov (United States) The Office of Operations Coordination and Planning (OPS), National Operations Center (NOC), has launched a Haiti Social Media Disaster Monitoring Initiative (Initiative) to assist the Department of Homeland Security (DHS), and its components involved in t... 2010-01-01 197 CERN Multimedia A variational framework is defined for vertical slice models with three-dimensional velocity depending only on x and z. The models that result from this framework are Hamiltonian, and have a Kelvin-Noether circulation theorem that results in a conserved potential vorticity in the slice geometry. These results are demonstrated for the incompressible Euler-Boussinesq equations with a constant temperature gradient in the y-direction (the Eady-Boussinesq model), which is an idealised problem used to study the formation and subsequent evolution of weather fronts. We then introduce a new compressible extension of this model. Unlike the incompressible model, the compressible model does not produce solutions that are also solutions of the three-dimensional equations, but it does reduce to the Eady-Boussinesq model in the low Mach number limit. This means that this new model can be used in asymptotic limit error testing for compressible weather models running in a vertical slice configuration. Cotter, C J 2012-01-01 198 CERN Multimedia Trace slicing is a widely used technique for execution trace analysis that is effectively used in program debugging, analysis and comprehension. In this paper, we present a backward trace slicing technique that can be used for the analysis of Rewriting Logic theories. Our trace slicing technique allows us to systematically trace back rewrite sequences modulo equational axioms (such as associativity and commutativity) by means of an algorithm that dynamically simplifies the traces by detecting control and data dependencies, and dropping useless data that do not influence the final result.
Our methodology is particularly suitable for analyzing complex, textually-large system computations such as those delivered as counter-example traces by Maude model-checkers. Alpuente, María; Espert, Javier; Romero, Daniel 2011-01-01 199 International Nuclear Information System (INIS) Bit-slice logic blocks are fourth-generation LSI components which are natural extensions of traditional multiplexers, registers, decoders, counters, ALUs, etc. Their functionality is controlled by microprogramming, typically to implement CPUs and peripheral controllers where both speed and easy programmability are required for flexibility, ease of implementation and debugging, etc. Processors built from bit-slice logic give the designer an alternative for approaching the programmability of traditional fixed-instruction-set microprocessors with a speed closer to that of hardwired random logic. (orig.) 1981-03-12 200 Energy Technology Data Exchange (ETDEWEB) The micro optical texture structure is one of the most relevant characteristics of coke. In this paper, the texture features, especially the fractal feature, of coke slice images are analysed, and a box-counting algorithm is implemented to calculate the fractal dimension of a coke slice image. At the same time, some co-occurrence matrix-based statistical texture features are also analysed. From the experimental results, relationships between the fractal dimension, statistical texture parameters and the porosity of coke are developed, which provide an important basis for the automatic analysis of coke quality. Wang Peizhen; Wang Qinfang; Gao Shangyi 2008-03-15 201 Digital Repository Infrastructure Vision for European Research (DRIVER) Head motion is a fundamental problem in functional MRI, and is often a limiting factor in its clinical implementation.
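The box-counting procedure named in the coke-texture record above can be sketched as follows. The binary grid here is a hypothetical stand-in for a thresholded coke-slice micrograph, and the grid contents and box sizes are illustrative assumptions.

```python
# Box-counting estimate of fractal dimension on a binary grid
# (1 = foreground/texture pixel). Pure-stdlib sketch, not the paper's code.
import math

def box_count(grid, box):
    """Count boxes of side `box` containing at least one foreground pixel."""
    n = len(grid)
    count = 0
    for i in range(0, n, box):
        for j in range(0, n, box):
            if any(grid[y][x]
                   for y in range(i, min(i + box, n))
                   for x in range(j, min(j + box, n))):
                count += 1
    return count

def fractal_dimension(grid, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log(count) versus log(1/box size)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(grid, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity check: a fully filled 32x32 square is 2-dimensional.
full = [[1] * 32 for _ in range(32)]
print(round(fractal_dimension(full), 2))  # prints 2.0
```

A porous texture would give a dimension below 2, which is what links the box-counting slope to porosity in the record above.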
This work presents a rigid-body motion correction strategy for echo-planar imaging (EPI) sequences that uses micro radio-frequency coil active markers for real-time, slice-by-slice prospective correction. Before the acquisition of each EPI slice, a short tracking pulse-sequence measures the positions of three active markers integrated into a headband worn by the subject... 2011-01-01 202 Digital Repository Infrastructure Vision for European Research (DRIVER) Quaking aspen is the most widely distributed tree species in North America and an asset to sociological, ecological, and hydrological land values in the western United States. In recognition of these values, land managers seek means to oppose a regional decline of aspen in the Intermountain West, a decline apparently in progress since the close of the Pleistocene and driven by climate change, fire suppression, and increasing ungulate densities. One location of special relevance to this decli... 2003-01-01 203 Science.gov (United States) This study is the first to demonstrate that features of psychopathy can be reliably and validly detected by lay raters from "thin slices" (i.e., small samples) of behavior. Brief excerpts (5 s, 10 s, and 20 s) from interviews with 96 maximum-security inmates were presented in video or audio form or in both modalities combined. Forty raters used Fowler, Katherine A.; Lilienfeld, Scott O.; Patrick, Christopher J. 2009-01-01 204 Science.gov (United States) A simple vibratome was fabricated using a double-function electric shaver and a microscopic platform. Spontaneous discharge of neurons in hippocampal and hypothalamic brain slices (300-400 microns thick) prepared by the vibratome could be kept above 12 hours in artificial cerebro-spinal fluid.
PMID:2697084 Xia, J H; Xing, B R; Gu, Q; Hua, S Y 1989-12-01 205 CERN Document Server Recent confocal experiments on colloidal solids, as well as jammed and disordered materials, motivate a fuller study of the projection of three-dimensional fluctuations onto a two-dimensional confocal slice. We show that the effective theory of a projected crystal displays several exceptional features, and we give analytic expressions relating three-dimensional elastic constants to observed two-dimensional properties. Lemarchand, Claire A; Schindler, Michael 2011-01-01 206 Digital Repository Infrastructure Vision for European Research (DRIVER) Information processing and storage in the brain may be represented by oscillations and cell assemblies. Here we address the question of how individual neurons associate together to assemble neural networks and present spontaneous electrical activity. Therefore, we dissected the neonatal brain at three different levels: acute 1-mm thick brain slices, cultured organotypic 350-µm thick brain slices and dissociated neuronal cultures. The spatio-temporal properties of neural activity were invest... Sun, Jyh-jang 2009-01-01 207 Science.gov (United States) In area CA1 of hippocampal slices which are allowed to recover from slicing "in interface" and where recordings are carried out in interface, a single 1-sec train of 100-Hz stimulation triggers a short-lasting long-term potentiation (S-LTP), which lasts 1-2 h, whereas multiple 1-sec trains induce a long-lasting LTP (L-LTP), which lasts several Godaux, Emile; Ris, Laurence; Capron, Brigitte; Sindic, Christian 2006-01-01 208 Science.gov (United States) The incidence of dengue infections has been increasing in the Caribbean, and cases have been identified among successive deployments of multinational peacekeepers to Haiti (1994-1997).
In the absence of an effective vaccine or chemoprophylaxis to prevent dengue fever, vector-control operations and use of personal protection measures to prevent arthropod bites are the most effective means of limiting disease transmission. During our 5-month deployment as part of the United Nations Mission in Haiti, 79 cases of recent dengue fever were identified among 249 patients (32%) presenting with febrile illness to the 86th Combat Support Hospital. Further investigation revealed low unit readiness to perform standard vector-control activities and poor individual adherence to measures to prevent arthropod bites. Command enforcement of existing field preventive medicine doctrine is essential to prevent casualties caused by dengue, other arthropod-borne infections, and nuisance arthropod bites during military deployments. PMID:10226460 Gambel, J M; Drabick, J J; Swalko, M A; Henchal, E A; Rossi, C A; Martinez-Lopez, L 1999-04-01 209 Directory of Open Access Journals (Sweden) Full Text Available The present paper describes surface (surface air temperature) and atmospheric parameters (relative humidity, surface latent heat flux) over the epicenter (18°27′25″ N, 72°31′59″ W) of the Haiti earthquake of 12 January 2010. Our analysis shows pronounced changes in surface and atmospheric parameters a few days prior to the main earthquake event. Changes in relative humidity, found from the surface up to an altitude of 500 hPa, clearly show atmospheric perturbations associated with the earthquake event. The purpose of this paper is to show the complementary nature of the changes observed in surface, atmospheric and meteorological parameters. The total ozone concentration is found to be lowest on the day of the earthquake and afterwards is found to increase within a week of the earthquake. The present results show the existence of lithosphere-atmosphere coupling associated with the deadly Haiti earthquake. Ramesh P.
Singh 2010-06-01 210 Directory of Open Access Journals (Sweden) Full Text Available As consequências do terremoto que atingiu o Haiti no dia 12 de janeiro de 2010 revelam, mais do que a falência do Estado daquele país, o fracasso das organizações internacionais supostamente envolvidas em sua reconstrução. Em relato pessoal e ao mesmo tempo etnográfico, o autor reconstrói os primeiros dias após a catástrofe e comenta a distância que separa essas organizações da sociedade haitiana, distância responsável por sua ineficácia. The aftermath of the earthquake that struck Haiti earlier this year reveals, more than the bankruptcy of the country's State, the failure of the international organizations responsible for supposedly "rebuilding" it. In a personal and ethnographical essay, the author describes the first days that followed the natural catastrophe and comments on the distance that separates those organizations from Haitian society, which lies at the root of their own inefficiency. Omar Ribeiro Thomaz 2010-03-01 211 Directory of Open Access Journals (Sweden) Full Text Available When the earthquake of 7.0 on the Richter scale struck Haiti on January 12, 2010, the forcibly displaced on and off the island were the object of emergency planning, but so too were the host populations in Haiti and the neighbouring Dominican Republic. This article seeks to examine the emergency response to the earthquake and ongoing challenges through the lens of critical mobilities, with special reference to forced migration island-wide. Who (men, women, boys and girls) is able to move, how, where, for how long and through which networks? What is the legal framework, if any, governing these movements? Who wants visibility and who prefers to move without drawing the attention of the Dominican authorities, in the context, for example, of ambiguous migration policies in the Dominican Republic towards impoverished Haitian immigrants?
Bridget WOODING 2011-03-01 212 Scientific Electronic Library Online (English) Full Text Available SciELO Brazil | Language: Portuguese Abstract in portuguese As consequências do terremoto que atingiu o Haiti no dia 12 de janeiro de 2010 revelam, mais do que a falência do Estado daquele país, o fracasso das organizações internacionais supostamente envolvidas em sua reconstrução. Em relato pessoal e ao mesmo tempo etnográfico, o autor reconstrói os primeir [...] os dias após a catástrofe e comenta a distância que separa essas organizações da sociedade haitiana, distância responsável por sua ineficácia. Abstract in english The aftermath of the earthquake that struck Haiti earlier this year reveals, more than the bankruptcy of the country's State, the failure of the international organizations responsible for supposedly "rebuilding" it. In a personal and ethnographical essay, the author describes the first days that f [...] ollowed the natural catastrophe and comments on the distance that separates those organizations from Haitian society, which lies at the root of their own inefficiency. Thomaz, Omar Ribeiro. 213 Digital Repository Infrastructure Vision for European Research (DRIVER) The Republic of Haiti is the prime international remittances recipient country in the Latin American and Caribbean (LAC) region relative to its gross domestic product (GDP). The downside of this observation may be that this country is also the first exporter of skilled workers in the world by population size. The present research uses a zero-altered negative binomial (with logit inflation) to model households' international migration decision process, and endogenous regressors' Amemiya Genera... 2008-01-01 214 Digital Repository Infrastructure Vision for European Research (DRIVER) This study examines the formal organization of the cluster approach and how it functions in practice in the aftermath of natural disasters.
The cyclone Nargis that hit Myanmar in 2008 and the tropical storms and hurricane season in Haiti in 2008 are used as examples for how the cluster approach has been applied in practice. The empirical data is described and structured through four key variables: specialization, coordination, leadership and accountability. To understand and describe the cluster ap... Ulleland, Trude Kvam 2013-01-01 215 Digital Repository Infrastructure Vision for European Research (DRIVER) Background: Since HIV-1 RNA (viral load) testing is not routinely available in Haiti, HIV-infected patients receiving antiretroviral therapy (ART) are monitored using the World Health Organization (WHO) clinical and/or immunologic criteria. Data on survival and treatment outcomes for HIV-1 infected patients who meet criteria for ART failure are limited. We conducted a retrospective study to compare survival rates for patients who experienced failure on first-line ART by clinical and/or immuno... Macarthur Charles; Leger, Paul D.; Patrice Severe; Colette Guiteau; Alexandra Apollon; Gulick, Roy M.; Johnson, Warren D.; Pape, Jean W.; Fitzgerald, Daniel W. 2012-01-01 216 Digital Repository Infrastructure Vision for European Research (DRIVER) John C Jackson, Anthony L Farone, Mary B Farone Biology Department, Middle Tennessee State University, Murfreesboro, Tennessee, USA Purpose: Diarrheal disease is one of the leading causes of morbidity in developing countries. To further understand the epidemiology of diarrheal disease among a rural population surrounding Robillard, Haiti, fecal swabs from patients with diarrhea were screened for the presence of enteropathogenic bacteria. Patients and methods: Fecal swabs were collected from 3... Jc, Jackson; Al, Farone; Mb, Farone 2011-01-01 217 Digital Repository Infrastructure Vision for European Research (DRIVER) Following the January 2010 earthquake in Haiti, the Israel Defense Force Medical Corps dispatched a field hospital unit. 
A specially tailored information technology solution was deployed within the hospital. The solution included a hospital administration system as well as a complete electronic medical record. A light-weight picture archiving and communication system was also deployed. During 10 days of operation, the system registered 1111 patients. The network and system up times were mor... Levy, Gad; Blumberg, Nehemia; Kreiss, Yitshak; Ash, Nachman; Merin, Ofer 2010-01-01 218 Digital Repository Infrastructure Vision for European Research (DRIVER) After the January 12, 2010, Haiti earthquake, we deployed a mainly offshore temporary network of seismologic stations around the damaged area. The distribution of the recorded aftershocks, together with morphotectonic observations and mainshock analysis, allow us to constrain a complex fault pattern in the area. Almost all of the aftershocks have a N-S compressive mechanism, and not the expected left-lateral strike-slip mechanism. A first-order slip model of the mainshock shows a N264 degrees... 2011-01-01 219 Digital Repository Infrastructure Vision for European Research (DRIVER) Waste management is a growing concern in rapidly urbanizing developing countries and Haiti is no exception. Excessive amounts of improperly discharged waste endanger the unique tropical environment, appear to be a reason for the fast spread of epidemic diseases, increase the risk of floods during the hurricane season and contribute to climate change. Due to various historical, economic, natural and socio-political reasons, the public sector of the Haitian state is not able to provide decent waste management servi... Bessonova, Ekaterina 2012-01-01 220 Digital Repository Infrastructure Vision for European Research (DRIVER) OBJECTIVES: A study was conducted to assess the prevalence of maternal syphilis and estimate the rate of congenital syphilis in five rural villages surrounding Jérémie, Haiti. METHODS: This research was a retrospective observational study.
Data were extracted from the Haitian Health Foundation's public health database and verified through original clinical paper records, death certificates, midwife reports, and discussions with community health workers. Data were analyzed by chi-square analys... Lomotey, Chaylah J.; Judy Lewis; Bette Gebrian; Royneld Bourdeau; Kevin Dieckhaus; Salazar, Juan C. 2009-01-01 221 Digital Repository Infrastructure Vision for European Research (DRIVER) Abstract Background Towards the end of 2006 open conflict broke out between United Nations forces and armed militia in Port-au-Prince, Haiti. Fighting was most intense in the district of Cité Soleil. Methods A cross-sectional, random-sample survey among the conflict-affected populations living in Cité Soleil and Martissant was carried out over a 4-week period in 2006 using a semi-structured questionnaire to assess exposure to violence and access to health care... Ponsar Frédérique; Ford Nathan; van Herp Michel; Mancini Silvia; Bachy Catherine 2009-01-01 222 Directory of Open Access Journals (Sweden) Full Text Available Focuses on Haitian debates concerning popular political participation in the context of the Liberal Revolution of 1843 and the Piquet Rebellion of 1844. The liberal challenge to the regime of President Boyer gave room to a peasant movement, the 'Army of Sufferers' or the Piquets, calling for black civil and political rights. The author traces three phases of the revolutionary situation of 1843-44 to show how political actors within Haiti debated various institutional and constitutional arrangements. Mimi Sheller 2000-01-01 223 Digital Repository Infrastructure Vision for European Research (DRIVER) Haiti suffered an earthquake in January of 2010, bringing instability and widespread hunger. Even after two years, many Haitians lack food security, and one must look at the failings of the Haitian government and those who attempted to provide aid.
Major aid organizations such as the Red Cross, World Food Program, the United Nations Children's Fund and the Food and Agriculture Organization stepped in to provide disaster assistance. Unfortunately, these organizations failed to effectively coor... Mcgaughey, Katie 2012-01-01 224 Digital Repository Infrastructure Vision for European Research (DRIVER) We propose a simple model with two infective classes in order to model the cholera epidemic in Haiti. We include the impact of environmental events (rainfall, temperature and tidal range) on the epidemic in the Artibonite and Ouest regions. We used this model to obtain epidemic projections, and then modified these projections incorporating the vaccination programs that were recently implemented to compare with actual cases. Using daily rainfall we found lag times between precipitation a... Tennenbaum, Stephen; Freitag, Caroline; Roudenko, Svetlana 2013-01-01 226 Energy Technology Data Exchange (ETDEWEB) Five charcoal cookstoves were tested using a Controlled Cooking Test (CCT) developed from cooking practices in Haiti. Cookstoves were tested for total burn time, specific fuel consumption, and emissions of carbon monoxide (CO), carbon dioxide (CO₂), and the ratio of carbon monoxide to carbon dioxide (CO/CO₂). These results are presented in this report along with LBNL testers' observations regarding the usability of the stoves.
Lask, Kathleen; Jones, Jennifer; Booker, Kayje; Ceballos, Cristina; Yang, Nina; Gadgil, Ashok 2011-11-30 227 Science.gov (United States) Haiti's 2010 earthquake mobilized mental health and psychosocial interventions from across the globe. However, failure to understand how psychological distress is communicated between lay persons and health workers in rural clinics, where most Haitians access care, has been a major limitation in providing mental health services. The goal of this study was to map idioms of distress onto Haitian ethnopsychologies in a way that promotes improved communication between lay persons and clinicians in rural Haiti. In Haiti's Central Plateau, an ethnographic study was conducted in May and June 2010, utilizing participant observation in rural clinics, 31 key informant interviews, 11 focus groups, and four case studies. Key informants included biomedical practitioners, traditional healers, community leaders, and municipal and religious figures. Deductive and inductive themes were coded using content analysis (inter-rater reliability > 0.70). Forty-four terms for psychological distress were identified. Head (tèt) or heart (kè) terms comprise 55% of all qualitative text segments coded for idioms of distress. Twenty-eight of 142 observed patient-clinician contacts involved persons presenting with tèt terms, while 29 of the 142 contacts were presentations with kè terms. Thus, 40% of chief complaints were conveyed in either head or heart terms. Interpretations of these terms differed between lay and clinical groups. Lay respondents had broad and heterogeneous interpretations, whereas clinicians focused on biomedical concepts and excluded discussion of mental health concerns. This paper outlines preliminary evidence regarding the psychosocial dimensions of tèt and kè-based idioms of distress and calls for further exploration.
Holistic approaches to mental healthcare in Haiti's Central Plateau should incorporate local ethnopsychological frameworks alongside biomedical models of healthcare. PMID:22595073 Keys, Hunter M; Kaiser, Bonnie N; Kohrt, Brandon A; Khoury, Nayla M; Brewster, Aime-Rika T 2012-08-01 228 Digital Repository Infrastructure Vision for European Research (DRIVER) Organotypic slice cultures have been developed as an alternative to acute brain slices because the neuronal viability and synaptic connectivity in these cultures can be preserved well for a prolonged period of time. This study evaluated a stationary organotypic slice culture developed for the hypothalamic paraventricular nucleus (PVN) of rat. The results showed that the slice cultures maintain the typical shape of the nucleus, the immunocytochemical signals for oxytocin, vasopressin, and cort... Cho, Eun Seong; Lee, So Yeong; Park, Jae-yong; Hong, Seong-geun; Ryu, Pan Dong 2007-01-01 229 Digital Repository Infrastructure Vision for European Research (DRIVER) This paper presents a simple method to measure tissue slice thicknesses using an ohmmeter. The circuit described here is composed of a metal probe, an ohmmeter, a counter electrode, culture medium or physiological buffer, and tissue slice. The probe and the electrode are on opposite interfaces of an organotypic hippocampal slice culture. The circuit closes when the metal probe makes contact with the surface of the tissue slice. The probe position is recorded and compared to its position when ... Guy, Yifat; Rupert, Amy; Sandberg, Mats; Weber, Stephen G. 2011-01-01 230 Science.gov (United States) There is constructed, for each member of a one-parameter family of cosmological models, which is obtained from the Kottler-Schwarzschild-de Sitter spacetime by identification under discrete isometries, a slicing by spherically symmetric Cauchy hypersurfaces of constant mean curvature. These slicings are unique up to the action of the static Killing vector. 
Analytical and numerical results are found as to when different leaves of these slicings do not intersect, i.e. when the slicings form foliations. Beig, Robert; Heinzle, J. Mark 2005-12-01 231 CERN Document Server The two-component, core-crust, model of a neutron star with homogeneous internal and dipolar external magnetic field is studied responding to quake-induced perturbation by substantially nodeless differentially rotational Alfvén oscillations of the perfectly conducting crustal matter about the axis of fossil magnetic field frozen in the immobile core. The energy variational method of the magneto-solid-mechanical theory of a viscoelastic perfectly conducting medium pervaded by magnetic field is utilized to compute the frequency and lifetime of nodeless torsional vibrations of crustal solid-state plasma about the dipole magnetic-moment axis of the star. It is found that the obtained two-parametric spectral formula for the frequency of this toroidal Alfvén mode provides a fairly accurate account of rapid oscillations of the X-ray flux during the flare of SGR 1806-20 and SGR 1900+14, supporting the investigated conjecture that these quasi-periodic oscillations owe their origin to axisymmetric torsional oscillations predomina... Bastrukov, S I; Molodtsova, I V; Takata, J 2008-01-01 232 CERN Document Server The Newtonian solid-mechanical theory of nodeless spheroidal and torsional seismic elastic vibrations trapped in the crust of a quaking neutron star is outlined. The spectral equations for the frequency of these modes are obtained and applied to the modal classification of the quasi-periodic oscillations of X-ray luminosity in the aftermath of giant flares in SGR 1806-20 and SGR 1900+14. The presented analysis relies heavily on the currently accepted identification of the QPO frequencies from the range [30-200] Hz with those for torsional nodeless vibrations.
Based on this identification, which is used to fix the input parameters entering the obtained spectral formulae, we compute frequency spectrum of nodeless spheroidal elastic vibrations. Focus is placed on the low-frequency QPOs in the data for SGR 1806-20 whose physical origin has been called into question. Our calculations suggest that QPOs with frequencies 18 Hz and 26 Hz are due to dipole spheroidal and dipole torsional shear vibrations of the crust a... Bastrukov, Sergey; Molodtsova, Irina; Chen, Gwan-Ting 2007-01-01 233 Science.gov (United States) Dengue is an acute febrile illness caused by four mosquito-borne dengue viruses (DENV-1 to -4) that are endemic throughout the tropics. After returning from a 1-week missionary trip to Haiti in October of 2010, 5 of 28 (18%) travelers were hospitalized for dengue-like illness. All travelers were invited to submit serum specimens and complete questionnaires on pre-travel preparations, mosquito avoidance practices, and activities during travel. DENV infection was confirmed in seven (25%) travelers, including all travelers that were hospitalized. Viral sequencing revealed closest homology to a 2007 DENV-1 isolate from the Dominican Republic. Although most (88%) travelers had a pre-travel healthcare visit, only one-quarter knew that dengue is a risk in Haiti, and one-quarter regularly used insect repellent. This report confirms recent DENV transmission in Haiti. Travelers to DENV-endemic areas should receive dengue education during pre-travel health consultations, follow mosquito avoidance recommendations, and seek medical care for febrile illness during or after travel. 
PMID:22232444 Sharp, Tyler M; Pillai, Parvathy; Hunsperger, Elizabeth; Santiago, Gilberto A; Anderson, Teresa; Vap, Trina; Collinson, Jeremy; Buss, Bryan F; Safranek, Thomas J; Sotir, Mark J; Jentes, Emily S; Munoz-Jordan, Jorge L; Arguello, D Fermin 2012-01-01 234 Energy Technology Data Exchange (ETDEWEB) A laboratory study was done to establish the technical feasibility of producing domestic cooking briquettes to be marketed in Haiti, from the Maissade lignite reserves of that country, which are high in both ash and sulfur and not yet mined. It was found that acceptable briquettes could be made from Maissade char, pyrolyzed and compacted with a molasses-lime binder and the addition of bagasse to improve strength and burning rate. Molasses, lime and bagasse are all produced in Haiti. Sodium nitrate was added to enhance ignition, and borax as a wetting and release agent. Standard, "pillow-shaped" briquettes were successfully produced on a standard, double roll briquetting machine. The recommended process sequence and equipment selection are virtually identical to that used to produce standard US barbecue briquettes from North Dakota lignite. The heating value of the Maissade briquettes is lower due to their high ash level, which may be acceptable if they can be produced at a cost per heating value comparable to wood charcoal, currently used in Haiti. The high sulfur content, mostly in organic form, presents no problem, since it is tied up after combustion as CaSO₄ by the unusually high calcium content of this lignite. Detailed analyses of Maissade lignite and its mineral components are included, as well as a preliminary plant design and capital cost estimate, for capacities of 10,000 and 50,000 metric tons per year, and for a smaller pilot plant. Hauserman, W.B.; Johnson, M.D. 1986-03-20 235 CERN Multimedia Writing correct distributed programs is hard. In spite of extensive testing and debugging, software faults persist even in commercial grade software.
Many distributed systems, especially those employed in safety-critical environments, should be able to operate properly even in the presence of software faults. Monitoring the execution of a distributed system, and, on detecting a fault, initiating the appropriate corrective action is an important way to tolerate such faults. This gives rise to the predicate detection problem which requires finding a consistent cut of a given computation that satisfies a given global predicate, if it exists. Detecting a predicate in a computation is, however, an NP-complete problem. To ameliorate the associated combinatorial explosion problem, we introduce the notion of computation slice. Formally, the slice of a computation with respect to a predicate is a (sub)computation with the least number of consistent cuts that contains all consistent cuts of the computation satisfying t... Mittal, N; Mittal, Neeraj; Garg, Vijay K. 2003-01-01 236 Digital Repository Infrastructure Vision for European Research (DRIVER) Many investigations in neuroscience, as well as other disciplines, involve studying small, yet macroscopic pieces or sections of tissue that have been preserved, freshly removed, or excised but kept viable, as in slice preparations of brain tissue. Subsequent microscopic studies of this material can be challenging, as the tissue samples may be difficult to handle. Demonstrated here is a method for obtaining thin cryostat sections of tissue with a thickness that may range from 0.2-5.0 mm. W... Park, Jae-joon; Cunningham, Miles G. 2007-01-01 237 Digital Repository Infrastructure Vision for European Research (DRIVER) Many investigations in neuroscience, as well as other disciplines, involve studying small, yet macroscopic pieces or sections of tissue that have been preserved, freshly removed, or excised but kept viable, as in slice preparations of brain tissue. Subsequent microscopic studies of this material can be challenging, as the tissue samples may be difficult to handle. 
Demonstrated here is a method for obtaining thin cryostat sections of tissue with a thickness that may range from 0.2-5.0 mm. We r... Park, Jae-joon; Cunningham, Miles Gregory 2007-01-01 238 Directory of Open Access Journals (Sweden) Full Text Available Abstract Background Preparing health workers to confront the HIV/AIDS epidemic is an urgent challenge in Haiti, where the HIV prevalence rate is 2.2% and approximately 10 100 people are taking antiretroviral treatment. There is a critical shortage of doctors in Haiti, leaving nurses as the primary care providers for much of the population. Haiti's approximately 1000 nurses play a leading role in HIV/AIDS prevention, care and treatment. However, nurses do not receive sufficient training at the pre-service level to carry out this important work. Methods To address this issue, the Ministry of Health and Population collaborated with the International Training and Education Center on HIV over a period of 12 months to create a competency-based HIV/AIDS curriculum to be integrated into the 4-year baccalaureate programme of the four national schools of nursing. Results Using a review of the international health and education literature on HIV/AIDS competencies and various models of curriculum development, a Haiti-based curriculum committee developed expected HIV/AIDS competencies for graduating nurses and then drafted related learning objectives. The committee then mapped these learning objectives to current courses in the nursing curriculum and created an 'HIV/AIDS Teaching Guide' for faculty on how to integrate and achieve these objectives within their current courses. The curriculum committee also created an 'HIV/AIDS Reference Manual' that detailed the relevant HIV/AIDS content that should be taught for each course. 
Conclusion All nursing students will now need to demonstrate competency in HIV/AIDS-related knowledge, skills and attitudes during periodic assessment with direct observation of the student performing authentic tasks. Faculty will have the responsibility of developing exercises to address the required objectives and creating assessment tools to demonstrate that their graduates have met the objectives. This activity brought different administrators, nurse leaders and faculty from four geographically dispersed nursing schools to collaborate on a shared goal using a process that could be easily replicated to integrate any new topic in a resource-constrained pre-service institution. It is hoped that this experience provided stakeholders with the experience, skills and motivation to strengthen other domains of the pre-service nursing curriculum, improve the synchronization of didactic and practical training and develop standardized, competency-based examinations for nursing licensure in Haiti. Knebel Elisa 2008-08-01 239 Energy Technology Data Exchange (ETDEWEB) The increasingly appreciated role of astrocytes in neurophysiology dictates a thorough understanding of the mechanisms underlying the communication between astrocytes and neurons. In particular, the uptake and release of signaling substances into/from astrocytes is considered as crucial. The release of different gliotransmitters involves regulated exocytosis, consisting of the fusion between the vesicle and the plasma membranes. After fusion with the plasma membrane vesicles may be retrieved into the cytoplasm and may continue to recycle. To study the mobility implicated in the retrieval of secretory vesicles, these structures have been previously efficiently and specifically labeled in cultured astrocytes, by exposing live cells to primary and secondary antibodies. 
Since the vesicle labeling and the vesicle mobility properties may be an artifact of cell culture conditions, we here asked whether the retrieving exocytotic vesicles can be labeled in brain tissue slices and whether their mobility differs to that observed in cell cultures. We labeled astrocytic vesicles and recorded their mobility with two-photon microscopy in hippocampal slices from transgenic mice with fluorescently tagged astrocytes (GFP mice) and in wild-type mice with astrocytes labeled by Fluo4 fluorescence indicator. Glutamatergic vesicles and peptidergic granules were labeled by the anti-vesicular glutamate transporter 1 (vGlut1) and anti-atrial natriuretic peptide (ANP) antibodies, respectively. We report that the vesicle mobility parameters (velocity, maximal displacement and track length) recorded in astrocytes from tissue slices are similar to those reported previously in cultured astrocytes. Potokar, Maja; Kreft, Marko [Laboratory of Neuroendocrinology-Molecular Cell Physiology, Institute of Pathophysiology, Faculty of Medicine, University of Ljubljana, Zaloska 4, 1000 Ljubljana (Slovenia); Celica Biomedical Center, Technology Park 24, 1000 Ljubljana (Slovenia); Lee, So-Young; Takano, Hajime; Haydon, Philip G. [Department of Neuroscience, Room 215, Stemmler Hall, University of Pennsylvania, School of Medicine, Philadelphia, PA 19104 (United States); Zorec, Robert, E-mail: Robert.Zorec@mf.uni-lj.si [Laboratory of Neuroendocrinology-Molecular Cell Physiology, Institute of Pathophysiology, Faculty of Medicine, University of Ljubljana, Zaloska 4, 1000 Ljubljana (Slovenia); Celica Biomedical Center, Technology Park 24, 1000 Ljubljana (Slovenia) 2009-12-25 240 DEFF Research Database (Denmark) Viral vectors derived from herpes simplex virus, type-1 (HSV), can transfer and express genes into fully differentiated, post-mitotic neurons. These vectors also transduce cells effectively in organotypic hippocampal slice cultures. 
Nanoliter quantities of a virus stock of HSVlac, an HSV vector that directs expression of E. coli beta-galactosidase (beta-gal), were microapplied into stratum pyramidale or stratum granulosum of slice cultures. Twenty-four hours later, a cluster of transduced cells expressing beta-gal was observed at the microapplication site. Gene transfer by microapplication was both effective and rapid. The titer of the HSVlac stocks was determined on NIH3T3 cells. Eighty-three percent of the beta-gal forming units successfully transduced beta-gal after microapplication to slice cultures. beta-Gal expression was detected as rapidly as 4 h after transduction into cultures of fibroblasts or hippocampal slices. The rapid expression of beta-gal by HSVlac allowed efficient transduction of acute hippocampal slices. Many genes have been transduced and expressed using HSV vectors; therefore, this microapplication method can be applied to many neurobiological questions. Casaccia-Bonnefil, P; Benedikz, Eirikur 1993-01-01 241 CERN Document Server Object-oriented programming has been considered one of the most promising methods in program development and maintenance. An important feature of object-oriented programs (OOPs) is their reusability, which can be achieved through the inheritance of classes or reusable components. Dynamic program slicing is an effective technique for narrowing the errors to the relevant parts of a program when debugging. Given a slicing criterion, the dynamic slice contains only those statements that actually affect the variables in the slicing criterion. This paper proposes a method to dynamically slice object-oriented (OO) programs based on dependence analysis. It uses the Control Dependency Graph for the object program and other static information to reduce the information to be traced during program execution. In this paper we present a method to find the dynamic slice of object-oriented programs, where we find slices for objects and in the case of function overloading.
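The dynamic-slicing idea summarized in the abstract above can be illustrated with a minimal sketch (my own illustration under simplifying assumptions, not code from the cited paper): record an execution trace of (statement id, defined variable, used variables) triples, then walk the trace backwards from the slicing criterion, collecting only the statements whose definitions are actually needed.

```python
# Minimal dynamic backward slicing over a straight-line execution trace.
# Data dependences only (no control dependences), purely for illustration.

def dynamic_slice(trace, criterion_var):
    """Return the set of statement ids that affect `criterion_var`
    at the end of the recorded execution."""
    needed = {criterion_var}   # variables whose values must be explained
    slice_stmts = set()
    for stmt_id, defined, used in reversed(trace):
        if defined in needed:
            slice_stmts.add(stmt_id)
            needed.discard(defined)   # this definition explains the variable
            needed.update(used)       # ...but its operands now need explaining
    return slice_stmts

# Trace of the run: a = 1; b = 2; c = a + b; d = b * 2; e = c + 1
trace = [
    (1, "a", []),
    (2, "b", []),
    (3, "c", ["a", "b"]),
    (4, "d", ["b"]),
    (5, "e", ["c"]),
]
print(sorted(dynamic_slice(trace, "e")))  # [1, 2, 3, 5]
```

Statement 4 (`d = b * 2`) is excluded because `d` never influences `e` in this run, which is exactly the payoff the abstract describes: the dynamic slice keeps only statements that actually affected the criterion during execution.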
Pani, Santosh Kumar 2010-01-01 242 Science.gov (United States) Île de la Gonâve is a 750-km² island off the coast of Haiti. The depth to the water table ranges from less than 30 m in the Eocene and Upper Miocene limestones to over 60 m in the 300-m-thick Quaternary limestone. Annual precipitation ranges from 800-1,400 mm. Most precipitation is lost through evapotranspiration and there is virtually no surface water. Roughly estimated from chloride mass balance, about 4% of the precipitation recharges the karst aquifer. Cave pools and springs are a common source for water. Hand-dug wells provide water in coastal areas. Few productive wells have been drilled deeper than 60 m. Reconnaissance field analyses indicate that groundwater in the interior is a calcium-bicarbonate type, whereas water at the coast is a sodium-chloride type that exceeds World Health Organization recommended values for sodium and chloride. Tests for the presence of hydrogen sulfide-producing bacteria were negative in most drilled wells, but positive in cave pools, hand-dug wells, and most springs, indicating bacterial contamination of most water sources. Because of the difficulties in obtaining freshwater, the 110,000 inhabitants use an average of only 7 L per person per day. L'Île de la Gonâve est une île de 750 km² au large de la côte d'Haïti. La profondeur de la nappe varie entre moins de 30 m dans les calcaires de l'Éocène et du Miocène supérieur à plus de 60 m dans les calcaires quaternaires épais de 300 m. Les précipitations annuelles sont comprises entre 800-1.400 mm. La plus grande partie des précipitations est perdue par évapotranspiration et il n'y a pratiquement pas d'eau de surface. Le bilan de masse des chlorures permet d'estimer à 4% des précipitations le montant de la recharge de l'aquifère karstique. Des bassins dans les grottes et des sources sont la source d'eau courante. Des puits creusés à la main fournissent de l'eau dans les zones côtières.
Quelques puits productifs ont t fors dpassant 60 m de profondeur. L'analyse des reconnaissances de terrain indique que les eaux souterraines l'intrieur de l'le sont de facis bicarbonat calcique, tandis que l'eau prs de la cte a un facis chlorur sodique dpassant les valeurs recommandes par l'OMS pour le sodium et les chlorures. Des tests pour la prsence de bactries productrices d'hydrogne sulfur se sont rvls ngatifs dans la plupart des forages, mais positifs dans la plupart des sources captes et tous les autres sources, bassins de grottes et puits creuss la main, ce qui indique une contamination bactrienne de la plupart des sources d'eau. Du fait des difficults pour s'approvisionner en eau douce, les 110.000 habitants utilisent en moyenne seulement 7 L par personne et par jour. La Isla de la Gonav, cercana a la costa de Hait, tiene 750 km2. La profundidad al nivel fretico est comprendida entre menos de 30 m para las calcitas del Eoceno y Mioceno Superior y ms de 60 m en el acufero de calcitas cuaternarias, el cual posee 300 m de espesor. La precipitacin anual vara entre 800-1.400 mm, si bien la mayor parte se pierde por evapotranspiracin y prcticamente no hay aguas superficiales. Segn un balance de masas de cloruros, alrededor del 4% de la precipitacin recarga el acufero krstico. Las cavidades y manantiales son una fuente habitual de agua. Los pozos excavados proporcionan agua en las reas costeras. Pocos pozos productivos se han perforado a ms de 60 m. El reconocimiento de los anlisis de campo indica que las aguas subterrneas son de tipo bicarbonatado-clcico en el interior, mientras que es de tipo clorurado-sdico en la costa, donde se sobrepasan las concentraciones recomendadas por la Organizacin Mundial de la Salud para sodio y cloruro. 
Los ensayos efectuados para detectar la presencia de bacterias productoras de sulfuro de hidrgeno resultaron negativos en la mayora de los pozos perforados, pero fueron positivos en la muchos manantiales explotados y en todos los manantiales, cavidades y pozos excavados, hecho que indica la contamina Troester, Joseph W.; Turvey, Michael D. 243 Science.gov (United States) We studied the effect of epidermal leaf mining on the leaf chemistry of quaking aspen, Populus tremuloides, during an outbreak of the aspen leaf miner, Phyllocnistis populiella, in the boreal forest of interior Alaska. Phyllocnistis populiella feeds on the epidermal cells of P. tremuloides leaves. Eleven days after the onset of leaf mining, concentrations of the phenolic glycosides tremulacin and salicortin were significantly higher in aspen leaves that had received natural levels of leaf mining than in leaves sprayed with insecticide to reduce mining damage. In a second experiment, we examined the time course of induction in more detail. The levels of foliar phenolic glycosides in naturally mined ramets increased relative to the levels in insecticide-treated ramets on the ninth day following the onset of leaf mining. Induction occurred while some leaf miner larvae were still feeding and when leaves had sustained mining over 5% of the leaf surface. Leaves with extrafloral nectaries (EFNs) had significantly higher constitutive and induced levels of phenolic glycosides than leaves lacking EFNs, but there was no difference in the ability of leaves with and without EFNs to induce phenolic glycosides in response to mining. Previous work showed that the extent of leaf mining damage was negatively related to the total foliar phenolic glycoside concentration, suggesting that phenolic glycosides deter or reduce mining damage. 
The results presented here demonstrate that induction of phenolic glycosides can be triggered by relatively small amounts of mining damage confined to the epidermal tissue, and that these changes in leaf chemistry occur while a subset of leaf miners are still feeding within the leaf. PMID:20354896 Young, Brian; Wagner, Diane; Doak, Patricia; Clausen, Thomas 2010-04-01 244 CERN Document Server In this paper we consider a semiparametric regression model involving a $d$-dimensional quantitative explanatory variable $X$ and including a dimension reduction of $X$ via an index $\beta'X$. In this model, the main goal is to estimate the Euclidean parameter $\beta$ and to predict the real response variable $Y$ conditionally on $X$. Our approach is based on the sliced inverse regression (SIR) method and optimal quantization in $\mathbf{L}^p$-norm. We obtain the convergence of the proposed estimators of $\beta$ and of the conditional distribution. Simulation studies show the good numerical behavior of the proposed estimators for finite sample size. Romain, Azaïs; Jérôme, Saracco 2011-01-01 245 Directory of Open Access Journals (Sweden) Full Text Available Distributed video coding depends heavily on the virtual channel model. Due to the limitations of the side information estimation, one stationary model does not properly describe the virtual channel. In this work the correlation noise is modelled per slice to obtain a location-specific correlation noise model. The delay caused by the lengthy Slepian-Wolf (SW) codec input is also reduced by shortening the SW codec input. The proposed solution does not impose any extra complexity; it utilizes the existing resources. The results presented here support the proposed algorithm. SAMIR BELHOUARI 2011-10-01 246 CERN Document Server The purpose of this article is to describe a reduction of the slicing problem to the study of the parameter $I_1(K,Z_q^{\circ}(K))=\int_K \|\langle\cdot,x\rangle\|_{L_q(K)}\,dx$.
We show that an upper bound of the form $I_1(K,Z_q^{\circ}(K))\leq C_1 q^s \sqrt{n} L_K^2$, with $1/2\leq s\leq 1$, leads to the estimate $L_n\leq \frac{C_2\sqrt[4]{n}\log n}{q^{(1-s)/2}}$, where $L_n:=\max\{L_K : K \text{ is an isotropic convex body in } \mathbb{R}^n\}$. Giannopoulos, Apostolos; Vritsiou, Beatrice-Helen 2011-01-01 247 Digital Repository Infrastructure Vision for European Research (DRIVER) Real-time quaking-induced conversion (RT-QuIC) is an assay in which disease-associated prion protein (PrP) initiates a rapid conformational transition in recombinant PrP (recPrP), resulting in the formation of amyloid that can be monitored in real time using the dye thioflavin T. It therefore has potential advantages over analogous cell-free PrP conversion assays such as protein misfolding cyclic amplification (PMCA). The QuIC assay and the related amyloid seeding assay have been developed la... Peden, Alexander H.; McGuire, Lynne I.; Appleford, Nigel E. J.; Mallinson, Gary; Wilham, Jason M.; Orrù, Christina D.; Caughey, Byron; Ironside, James W.; Knight, Richard S.; Will, Robert G.; Green, Alison J. E.; Head, Mark W. 2012-01-01 248 International Nuclear Information System (INIS) In the frame of probabilistic safety analyses for nuclear power plants, studies and evaluations of earthquake events have to be performed. The methodology is aimed at quantifying the actual safety margins of the existing structures and their scattering. These data are essentially based on empirical values and results from US power plants. The authors discuss the accuracy and applicability of the simplified methodologies. It turns out that the simplified models can only roughly describe the complex non-linear behavior of buildings. Additional system-engineering-based effects on the safety reserves cannot be taken into account by the simplified modeling. 2009-01-01 249 Science.gov (United States) Haiti has been in the midst of a cholera epidemic since October 2010.
Rainfall is thought to be associated with cholera here, but this relationship has only begun to be quantitatively examined. In this paper, we quantitatively examine the link between rainfall and cholera in Haiti for several different settings (including urban, rural, and displaced person camps) and spatial scales, using a combination of statistical and dynamic models. Statistical analysis of the lagged relationship between rainfall and cholera incidence was conducted using case crossover analysis and distributed lag nonlinear models. Dynamic models consisted of compartmental differential equation models including direct (fast) and indirect (delayed) disease transmission, where indirect transmission was forced by empirical rainfall data. Data sources include cholera case and hospitalization time series from the Haitian Ministry of Public Health, the United Nations Water, Sanitation and Health Cluster, International Organization for Migration, and Hôpital Albert Schweitzer. Rainfall data were obtained from rain gauges from the U.S. Geological Survey and Haiti Regeneration Initiative, and remote sensing rainfall data from the National Aeronautics and Space Administration Tropical Rainfall Measuring Mission. A strong relationship between rainfall and cholera was found for all spatial scales and locations examined. Increased rainfall was significantly correlated with increased cholera incidence 4-7 days later. Forcing the dynamic models with rainfall data resulted in good fits to the cholera case data, and rainfall-based predictions from the dynamic models closely matched observed cholera cases. These models provide a tool for planning and managing the epidemic as it continues.
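The core of the lag analysis described above, correlating cholera incidence with rainfall several days earlier, can be illustrated with a toy lag scan. The daily series below are synthetic and the built-in 5-day echo is an assumption made for the demonstration only; the study itself used case-crossover analysis and distributed lag nonlinear models on real gauge and satellite rainfall, which this sketch does not reproduce.

```python
# Sketch of a lag scan: correlate daily cholera cases with rainfall
# shifted back by a candidate lag, and report the best-scoring lag.
# The series below are synthetic, not the study's data.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def best_lag(rain, cases, max_lag=10):
    """Correlation of cases at day t with rainfall at day t - lag."""
    scores = {}
    for lag in range(1, max_lag + 1):
        r = rain[:-lag]          # rainfall, shifted back by `lag` days
        c = cases[lag:]          # cases `lag` days later
        scores[lag] = pearson(r, c)
    return max(scores, key=scores.get), scores

# Synthetic example: cases echo rainfall 5 days later, plus a baseline.
rain = [0, 0, 30, 0, 0, 0, 25, 0, 0, 0, 40, 0, 0, 0, 0, 20, 0, 0, 0, 0]
cases = [2] * len(rain)
for t, r in enumerate(rain):
    if t + 5 < len(cases):
        cases[t + 5] += r // 2

lag, _ = best_lag(rain, cases)
print(lag)  # → 5: the built-in 5-day lag is recovered
```

The scan recovers the 5-day delay planted in the synthetic data, mirroring the 4-7 day lag the study reports between rainfall and cholera incidence.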
PMID:24267876 Eisenberg, Marisa C; Kujbida, Gregory; Tuite, Ashleigh R; Fisman, David N; Tien, Joseph H 2013-12-01 250 Scientific Electronic Library Online (English) Full Text Available SciELO Chile | Language: Spanish Abstract in english This article reviews the main political developments in Haiti in the last 22 months. During this period, the country has attained some degree of political stability as a result of the successful completion of an electoral process and the concomitant election of René Préval as President of the country. Haiti, furthermore, has seen some improvements in security and economic stability triggered by moderate economic growth. These achievements have been partly possible due to the presence of the United Nations Stabilization Mission in Haiti (MINUSTAH), which, jointly with the Haitian State, has worked to tackle acute problems, in particular lack of security. These improvements notwithstanding, the general outlook of the country and its political stability remain fragile given its significant structural problems and its extensive dependence on the international community. FELDMANN, ANDREAS; MONTES, JUAN ESTEBAN. 251 Directory of Open Access Journals (Sweden) Full Text Available Abstract Background The seroprevalence of hepatitis C varies substantially between countries and geographic regions. A better understanding of the seroprevalence of this disease, and the risk factors associated with seropositive status, supplies data for the development of screening programs and provides insight into the transmission of the disease. The purpose of this investigation was to determine the seroprevalence of hepatitis C and associated risk factors in an urban population in Haiti. Methods A prospective survey for hepatitis C antibodies was conducted among an urban outpatient population in Cap-Haïtien, Haiti, with a sample size of 500 subjects. An anonymous 12-question survey, with inquiries related to demographic characteristics and risk factors for HCV acquisition, was concomitantly administered with testing. These demographic and behavioral risk factors were correlated with HCV antibody status using univariate and multivariate tests. Results The prevalence of positive HCV antibody was 22/500 (4.4%). Subjects that were anti-HCV positive had an average of 7 ± 8.6 lifetime sexual partners, compared to an average of 2.5 ± 3.5 lifetime sexual partners among HCV-negative subjects (p = 0.02). In a multiple logistic regression model, intravenous drug use (OR 3.7, 1.52-9.03 95% CI) and number of sexual partners (OR 1.1, 1.04-1.20 95% CI) were independently associated with a positive HCV antibody result. Conclusions A substantial number of subjects with HCV antibodies were detected in this population in Haiti. Further investigation into the correlation between the number of sexual partners and testing positive for hepatitis C antibodies is indicated.
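As a quick numerical check of the prevalence reported above, the 22/500 figure and an approximate 95% confidence interval can be computed directly. A Wald interval is shown purely for illustration; the paper does not state which interval, if any, it used.

```python
# Sketch: prevalence and an approximate 95% CI (Wald interval) for
# the reported 22/500 anti-HCV-positive result. The choice of the
# Wald interval is an assumption for this illustration.
import math

def prevalence_ci(positives, n, z=1.96):
    p = positives / n
    se = math.sqrt(p * (1 - p) / n)   # standard error of a proportion
    return p, (p - z * se, p + z * se)

p, (lo, hi) = prevalence_ci(22, 500)
print(f"{p:.1%}  95% CI {lo:.1%}-{hi:.1%}")  # → 4.4%  95% CI 2.6%-6.2%
```

The point estimate matches the 4.4% reported in the abstract; the interval shows the precision a sample of 500 buys.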
Hepburn Matthew J 2004-12-01 252 Directory of Open Access Journals (Sweden) Full Text Available This article reviews the main political developments in Haiti in the last 22 months. During this period, the country has attained some degree of political stability as a result of the successful completion of an electoral process and the concomitant election of René Préval as President of the country. Haiti, furthermore, has seen some improvements in security and economic stability triggered by moderate economic growth. These achievements have been partly possible due to the presence of the United Nations Stabilization Mission in Haiti (MINUSTAH), which, jointly with the Haitian State, has worked to tackle acute problems, in particular lack of security. These improvements notwithstanding, the general outlook of the country and its political stability remain fragile given its significant structural problems and its extensive dependence on the international community.
ANDREAS FELDMANN 2008-01-01 253 CERN Document Server By using a variational calculation, we study the effect of an externally applied electric field on the ground state of electrons confined in a quantum box with a geometry defined by a slice of a cake. This geometry is a first approximation for a tip of a cantilever of an Atomic Force Microscope (AFM). By modeling the tip with the slice, we calculate the electronic ground state energy as a function of the slice's diameter, its angular aperture, its thickness and the intensity of the external electric field applied along the slice. For the applied field pointing to the wider part of the slice, a confining electronic effect on the opposite side is clearly observed. This effect is sharper as the angular slice's aperture is smaller and there is more radial space to manifest itself. Reyes-Esqueda, J A; Castillo-Mussot, M; Vazquez, G J; Reyes-Esqueda, Jorge-Alejandro; Mendoza, Carlos I.; Castillo-Mussot, Marcelo del; Vazquez, Gerardo J. 2005-01-01 254 Digital Repository Infrastructure Vision for European Research (DRIVER) On 12 January 2010, Haiti suffered literal state collapse, as thousands of buildings crumbled in the 21st century's deadliest earthquake. Over 200,000 were killed, 300,000 injured and 1.5 million displaced. Almost 20% of federal government employees were killed. The Presidential Palace lay in ruins and 27 of 28 federal government buildings were destroyed. An estimated 4,000 prisoners escaped from incarceration. In a remaining government building a couple of months after the earthquake, one cou... Bolton, Matthew B. 2011-01-01 255 Digital Repository Infrastructure Vision for European Research (DRIVER) In this paper we study the energy of ULF electromagnetic waves that have been recorded by the satellite DEMETER, during its passing over Haiti before and after a destructive earthquake. This earthquake occurred on 12/1/2010, at geographic Latitude 18.46° and Longitude 287.47°, with Magnitude 7.0 R.
Specifically, we are focusing on the variations of energy of the Ez electric field component concerning a time period of 100 days before and 50 days after the strong earthquake. In or... Athanasiou, M.; Anagnostopoulos, G.; Iliopoulos, A.; Pavlos, G.; David, K. 2010-01-01 256 Digital Repository Infrastructure Vision for European Research (DRIVER) In this paper we study the energy of ULF electromagnetic waves that were recorded by the satellite DEMETER, during its passing over Haiti before and after a destructive earthquake. This earthquake occurred on 12 January 2010, at geographic Latitude 18.46° and Longitude 287.47°, with Magnitude 7.0 R. Specifically, we are focusing on the variations of energy of the Ez electric field component concerning a time period of 100 days before and 50 days after the strong earthquake. In... Athanasiou, M.; Anagnostopoulos, G.; Iliopoulos, A.; Pavlos, G.; David, K. 2011-01-01 257 Science.gov (United States) In this study, we perform a case study on imagery from the Haiti earthquake that evaluates a novel object-based approach for characterizing earthquake-induced surface effects of liquefaction against a traditional pixel-based change technique. Our technique, which combines object-oriented change detection with discriminant/categorical functions, shows the power of distinguishing earthquake-induced surface effects from changes in buildings using the object properties concavity, convexity, orthogonality and rectangularity. Our results suggest that object-based analysis holds promise in automatically extracting earthquake-induced damage from high-resolution aerial/satellite imagery. Oommen, Thomas; Rebbapragada, Umaa; Cerminaro, Daniel 2012-01-01 258 Digital Repository Infrastructure Vision for European Research (DRIVER) Magnetic resonance imaging (MRI) near metallic implants is often hampered by severe metal artifacts.
To obtain distortion-free MR images near metallic implants, SEMAC (Slice Encoding for Metal Artifact Correction) corrects metal artifacts via robust encoding of excited slices against metal-induced field inhomogeneities, followed by combining the data resolved from multiple SEMAC-encoded slices. However, as many of the resolved data elements only contain noise, SEMAC-corrected images can suffe... Lu, Wenmiao; Pauly, Kim B.; Gold, Garry E.; Pauly, John M.; Hargreaves, Brian A. 2011-01-01 259 Digital Repository Infrastructure Vision for European Research (DRIVER) Magnetic resonance imaging (MRI) near metallic implants remains an unmet need due to severe artifacts, which mainly stem from large metal-induced field inhomogeneities. This work addresses MRI near metallic implants with an innovative imaging technique called Slice Encoding for Metal Artifact Correction (SEMAC). The SEMAC technique corrects metal artifacts via robust encoding of each excited slice against metal-induced field inhomogeneities. The robust slice encoding is achieved by exte... Lu, Wenmiao; Pauly, Kim Butts; Gold, Garry E.; Pauly, John M.; Hargreaves, Brian A. 2009-01-01 260 Digital Repository Infrastructure Vision for European Research (DRIVER) Organotypic slice cultures from embryonic rodent brains are widely used to study brain development. While there are often advantages to an in-vivo system, organotypic slice cultures allow one to perform a number of manipulations that are not presently feasible in-vivo. To date, organotypic embryonic brain slice cultures have been used to follow individual cells using time-lapse microscopy, manipulate the expression of genes in the ganglionic eminences (a region that is hard to target by in-u... Elias, Laura; Kriegstein, Arnold 2007-01-01 261 International Nuclear Information System (INIS) An estimate is provided for the lapse function defining asymptotically Euclidean maximal slicings in asymptotically flat space-times.
This estimate is found to be in agreement with a similar estimate suggested, on heuristic grounds, by Smarr and York. It is also shown that in vacuum space-times the scalar curvature of maximal slices remains uniformly bounded in time provided that suitable conditions on the rate of growth of the (negative) lower bound of the Ricci curvature of the slices are satisfied. 1985-01-01 262 Science.gov (United States) In this paper, a simulation study of Bayesian extreme values by using Markov chain Monte Carlo via the slice sampling algorithm is implemented. We compared the accuracy of slice sampling with other methods for a Gumbel model. This study revealed that the slice sampling algorithm offers more accurate and closer estimates with lower RMSE than other methods. Finally, we successfully employed this procedure to estimate the parameters of Malaysian extreme gold prices from 2000 to 2011. Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham 2013-09-01 263 Digital Repository Infrastructure Vision for European Research (DRIVER) There is constructed, for each member of a one-parameter family of cosmological models, which is obtained from the Kottler-Schwarzschild-de Sitter spacetime by identification under discrete isometries, a slicing by spherically symmetric Cauchy hypersurfaces of constant mean curvature. These slicings are unique up to the action of the static Killing vector. Analytical and numerical results are found as to when different leaves of these slicings do not intersect, i.e. when the... Beig, Robert; Heinzle, J. Mark 2005-01-01 264 CERN Document Server We introduce a family of Cauchy integral formulas for slice and slice regular functions on a real associative *-algebra. For each suitable choice of a real vector subspace of the algebra, a different formula is given, in which the domains of integration are subsets of the subspace. In particular, in the quaternionic case, we get a volume Cauchy formula.
In the Clifford algebra case, the choice of the paravector subspace R^(n+1) gives a volume Cauchy formula for slice monogenic functions. Ghiloni, Riccardo 2012-01-01 265 CERN Document Server We investigate existence, uniqueness, and the asymptotic properties of constant mean curvature (CMC) slicings in vacuum Kantowski-Sachs spacetimes with positive cosmological constant. Since these spacetimes violate the strong energy condition, most of the general theorems on CMC slicings do not apply. Although there are in fact Kantowski-Sachs spacetimes with a unique CMC foliation or CMC time function, we prove that there also exist Kantowski-Sachs spacetimes with an arbitrary number of (families of) CMC slicings. The properties of these slicings are analyzed in some detail. Heinzle, J Mark 2011-01-01 266 International Nuclear Information System (INIS) We investigate existence, uniqueness, and the asymptotic properties of constant mean curvature (CMC) slicings in vacuum Kantowski-Sachs spacetimes with positive cosmological constant. Since these spacetimes violate the strong energy condition, most of the general theorems on CMC slicings do not apply. Although there are in fact Kantowski-Sachs spacetimes with a unique CMC foliation or CMC time function, we prove that there also exist Kantowski-Sachs spacetimes with an arbitrary number of (families of) CMC slicings. The properties of these slicings are analyzed in some detail. 2011-04-15 267 Science.gov (United States) We investigate existence, uniqueness, and the asymptotic properties of constant mean curvature (CMC) slicings in vacuum Kantowski-Sachs spacetimes with positive cosmological constant. Since these spacetimes violate the strong energy condition, most of the general theorems on CMC slicings do not apply. Although there are in fact Kantowski-Sachs spacetimes with a unique CMC foliation or CMC time function, we prove that there also exist Kantowski-Sachs spacetimes with an arbitrary number of (families of) CMC slicings. 
The properties of these slicings are analyzed in some detail. Heinzle, J. Mark 2011-04-01 268 Digital Repository Infrastructure Vision for European Research (DRIVER) A novel three-layer microfluidic polydimethylsiloxane (PDMS) device was constructed with two fluid chambers that hold a brain slice in place with microposts while maintaining laminar perfusate flow above and below the slice. Our fabrication technique permits rapid production of PDMS layers that can be applied to brain slices of different shapes and sizes. In this study, the device was designed to fit the shape and thickness (530-700 µm) of a medullary brain slice taken from P0-P4 neonatal r... Blake, A. J.; Pearce, T. M.; Rao, N. S.; Johnson, S. M.; Williams, J. C. 2007-01-01 269 International Nuclear Information System (INIS) In this review the technical principles and applications of multi-slice CT are discussed. Multi-slice CT systems allow simultaneous acquisition of up to 4 slices by using multirow detector systems. Intuitive geometrical arguments are used to establish the limitation to a maximum of 4 slices, which is kept by all currently existing multi-slice CT systems. Two different construction principles of the detector are discussed, the 'Fixed Array' detector and the 'Adaptive Array' detector. The extension of conventional 360° LI and 180° LI spiral interpolation techniques to multi-slice spiral CT is explained as well as a new generalized multi-slice spiral weighting concept, the so-called 'Adaptive Axial Interpolation'. Several techniques to improve multi-slice spiral image quality are discussed. Finally, some examples for clinical applications are given, and the principle of ECG-triggered and ECG-gated cardiac examinations with optimized temporal resolution is presented. Multi-slice CT systems are a milestone with respect to increased volume coverage, shorter scan times, improved axial (longitudinal) resolution and better use of the X-ray tube output.
Additionally, new clinical applications are possible such as Cardiac CT. (orig.) 1999-11-01 270 International Nuclear Information System (INIS) Simultaneous RF pulse excitation with a quadrature multiplexing technique has been proposed to double the magnetic resonance signal information in a given imaging time. This method uses two selective RF pulses with 90° phase difference to excite two slice spins to real and imaginary domains. With a proper phase correction, this scheme doubles the number of slices or the imaging region in a given data acquisition time as in the conventional case. An implementation of this technique to slice-by-slice and chunk 3-D imaging methods is done for the human head using the 1.5 T superconducting magnet imaging system. 1986-02-01 271 LENUS (Irish Health Repository) After the devastating earthquake in Haiti on January 12, 2010, a British orthoplastic limb salvage team was mobilized. The team operated in a suburb of Port-au-Prince from January 20, 2010. This analysis gives an overview of the caseload and early outcomes. Clover, A James P 2011-06-01 272 Directory of Open Access Journals (Sweden) Full Text Available The goal of this project, now in its first volume, is to identify and list all available information on the art-music tradition of the Caribbean region - starting with the countries of Bahamas, Guadeloupe, Haiti, Jamaica and the US Virgin Islands. It will, ultimately, form a comprehensive document of value to musicians, ethnomusicologists, historians, researchers, educators and students. Gangelhoff, Christine 2011-10-01 273 Directory of Open Access Journals (Sweden) Full Text Available Abstract Background Malaria caused by Plasmodium falciparum infects roughly 30,000 individuals in Haiti each year. Haiti has used chloroquine (CQ) as a first-line treatment for malaria for many years and as a result there are concerns that malaria parasites may develop resistance to CQ over time.
Therefore it is important to prepare for alternative malaria treatment options should CQ resistance develop. In many other malaria-endemic regions, antifolates, particularly the pyrimethamine (PYR) and sulphadoxine (SDX) treatment combination (SP), have been used as an alternative when CQ resistance has developed. This study evaluated mutations in the dihydrofolate reductase (dhfr) and dihydropteroate synthetase (dhps) genes that confer PYR and SDX resistance, respectively, in P. falciparum to provide baseline data in Haiti. This study is the first comprehensive study to examine PYR and SDX resistance genotypes in P. falciparum in Haiti. Methods DNA was extracted from dried blood spots and genotyped for PYR and SDX resistance mutations in P. falciparum using PCR and DNA sequencing methods. Sixty-one samples were genotyped for PYR resistance in codons 51, 59, 108 and 164 of the dhfr gene and 58 samples were genotyped for SDX resistance at codons 436, 437, 540 of the dhps gene in P. falciparum. Results Thirty-three percent (20/61) of the samples carried a mutation at codon 108 (S108N) of the dhfr gene. No mutations in dhfr at codons 51, 59, 164 were observed in any of the samples. In addition, no mutations were observed in dhps at the three codons (436, 437, 540) examined. No significant difference was observed between samples collected in urban vs. rural sites (Welch's t-test p-value = 0.53 and permutation p-value = 0.59). Conclusion This study has shown the presence of the S108N mutation in P. falciparum that confers low-level PYR resistance in Haiti. However, the absence of SDX resistance mutations suggests that SP resistance may not be present in Haiti. These results have important implications for ongoing discussions on alternative malaria treatment options in Haiti.
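The permutation p-value mentioned for the urban-versus-rural comparison above can be sketched as follows. The 0/1 mutation labels and group sizes below are synthetic, not the study's data; the test shuffles group labels and counts how often the shuffled difference in S108N frequency is at least as large as the observed one.

```python
# Sketch of a permutation test on the difference in mutation frequency
# between two groups. Labels and group sizes are synthetic examples.
import random

def perm_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided permutation p-value for the difference in proportions."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # relabel at random
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_perm

# 1 = parasite carries S108N, 0 = wild type (synthetic labels)
urban = [1] * 10 + [0] * 20   # 33% mutant
rural = [1] * 10 + [0] * 21   # ~32% mutant
print(perm_test(urban, rural))  # → 1.0: every relabelling looks at least this different
```

With near-identical group frequencies the p-value is maximal, consistent in spirit with the paper's finding of no significant urban-rural difference; real use would plug in the actual genotype labels.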
Carter Tamar E 2012-08-01 274 CERN Document Server In order to reject the notion that information is always about something, the "It from Bit" idea relies on the nonexistence of a realistic framework that might underlie quantum theory. This essay develops the case that there is a plausible underlying reality: one actual spacetime-based history, although with behavior that appears strange when analyzed dynamically (one time-slice at a time). By using a simple model with no dynamical laws, it becomes evident that this behavior is actually quite natural when analyzed "all-at-once" (as in classical statistical mechanics). The "It from Bit" argument against a spacetime-based reality must then somehow defend the importance of dynamical laws, even as it denies a reality on which such fundamental laws could operate. Wharton, Ken 2013-01-01 275 CERN Document Server Originator Control allows information providers to define the information re-dissemination condition. Combined with usage control policy, fine-grained 'downstream usage control' can be achieved, which specifies what attributes the downstream consumers should have and how data is used. This paper discusses originator usage control, paying particular attention to enterprise-level dynamic business federations. Rather than 'pre-defining' the information re-dissemination paths, our business process slicing method 'captures' the asset derivation pattern, allowing originators' policies to be maintained during the full lifecycle of assets in a collaborative context. First, we propose the Service Call Graph (SCG), based on extending the System Dependency Graph, to describe dependencies among partners. When the SCG (and corresponding 'service call tuple' list) is built for a business process, it is analyzed to group partners into sub-contexts, according to their dependency relations. Originator usage control can be achieved focusing...
Su, Ziyi 2012-01-01 276 Directory of Open Access Journals (Sweden) Full Text Available Haití continúa siendo una democracia extremadamente frágil, con capacidades mínimas de ejercer funciones estatales básicas. Haití depende y seguirá dependiendo de la Misión de las Naciones Unidas para la Estabilización de Haití (MINUSTAH) y de la cooperación internacional para mantener su proceso de estabilización política, construcción del Estado, fortalecimiento de la democracia y logro de un desarrollo económico y social sustentable. El gobierno de René Préval, con fuerte apoyo internacional, ha logrado avances importantes en materias de seguridad, planificación y construcción institucional. El año 2008 estuvo marcado por una fuerte crisis de gobierno, gatillada por las alzas en los precios internacionales de los alimentos, severos conflictos con la oposición y desastres naturales causados por las tormentas tropicales y agudizados por la devastación ambiental. Haiti continues to be an extremely fragile democracy in a state with minimal capacities to perform basic state functions. Haiti depends and will continue depending on the United Nations Stabilization Mission in Haiti (MINUSTAH) and the international cooperation to keep its process of political stabilization, state building, democratic strengthening and achievement of economic and social development to a sustainable level. The René Préval government, with strong international support, has achieved important improvement in security, planning and institutional building. The year 2008 was marked by a deep crisis in the government, triggered by the sudden hike in international food prices, severe conflicts with the opposition and natural disasters caused by tropical storms and aggravated by the environmental devastation.
JUAN ESTEBAN MONTES 2009-01-01 277 Scientific Electronic Library Online (English) Full Text Available SciELO Chile | Language: Spanish Abstract in spanish Haití continúa siendo una democracia extremadamente frágil, con capacidades mínimas de ejercer funciones estatales básicas. Haití depende y seguirá dependiendo de la Misión de las Naciones Unidas para la Estabilización de Haití (MINUSTAH) y de la cooperación internacional para mantener su proceso de [...] estabilización política, construcción del Estado, fortalecimiento de la democracia y logro de un desarrollo económico y social sustentable. El gobierno de René Préval, con fuerte apoyo internacional, ha logrado avances importantes en materias de seguridad, planificación y construcción institucional. El año 2008 estuvo marcado por una fuerte crisis de gobierno, gatillada por las alzas en los precios internacionales de los alimentos, severos conflictos con la oposición y desastres naturales causados por las tormentas tropicales y agudizados por la devastación ambiental. Abstract in english Haiti continues to be an extremely fragile democracy in a state with minimal capacities to perform basic state functions. Haiti depends and will continue depending on the United Nations Stabilization Mission in Haiti (MINUSTAH) and the international cooperation to keep its process of political stabi [...] lization, state building, democratic strengthening and achievement of economic and social development to a sustainable level. The René Préval government, with strong international support, has achieved important improvement in security, planning and institutional building. The year 2008 was marked by a deep crisis in the government, triggered by the sudden hike in international food prices, severe conflicts with the opposition and natural disasters caused by tropical storms and aggravated by the environmental devastation. MONTES, JUAN ESTEBAN; FELDMANN, ANDREAS; PIRACÉS, SANDRA.
278 Directory of Open Access Journals (Sweden) Full Text Available In October 1995 the Ministry of Public Health and Population in Haiti surveyed 42 health facilities for the prevalence and distribution of malaria infection. They examined 1 803 peripheral blood smears from patients with suspected malaria; the overall slide positivity rate was 4.0% (range, 0.0% to 14.3%). The rate was lowest among 1- to 4-year-old children (1.6%) and highest among persons aged 15 and older (5.5%). Clinical and microscopic diagnoses of malaria were unreliable; the overall sensitivity of microscopic diagnosis was 83.6%, specificity was 88.6%, and the predictive value of a positive slide was 22.2%. Microscopic diagnoses need to be improved, and adequate surveillance must be reestablished to identify areas where transmission is most intense. The generally low level of malaria is encouraging and suggests that intensified control efforts targeted to the areas of highest prevalence could further diminish the effect of malaria in Haiti. Patrick Kachur S. 1998-01-01 279 Directory of Open Access Journals (Sweden) Full Text Available In this paper we study the energy of ULF electromagnetic waves that were recorded by the satellite DEMETER, during its passing over Haiti before and after a destructive earthquake. This earthquake occurred on 12 January 2010, at geographic Latitude 18.46 and Longitude 287.47, with Magnitude 7.0 R. Specifically, we are focusing on the variations of energy of the Ez electric-field component over a time period of 100 days before and 50 days after the strong earthquake. In order to study these variations, we have developed a novel method that can be divided in two stages: first we filter the signal, keeping only the ultra low frequencies, and afterwards we eliminate its trend using techniques of Singular Spectrum Analysis (SSA), combined with a third-degree polynomial filter.
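The trend-elimination step described above relies on Singular Spectrum Analysis. As an illustration of the core SSA idea (embed the series in a lagged trajectory matrix, keep the leading singular component, and diagonal-average it back into a trend), here is a small numpy sketch; it omits the authors' third-degree polynomial filter and is not their implementation.

```python
import numpy as np

def ssa_trend(x, window):
    """Estimate the trend of series x as the rank-1 SSA reconstruction:
    build the lagged trajectory matrix, take the leading SVD component,
    and average its anti-diagonals back into a length-N series."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    rank1 = s[0] * np.outer(u[:, 0], vt[0])
    trend = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):                      # diagonal averaging (Hankelization)
        trend[j:j + window] += rank1[:, j]
        counts[j:j + window] += 1
    return trend / counts

# Toy series: slow linear drift plus an oscillation standing in for ULF activity.
t = np.linspace(0.0, 10.0, 200)
x = 0.5 * t + np.sin(2.0 * np.pi * t)
detrended = x - ssa_trend(x, window=50)
```

On this toy series the leading singular component absorbs the linear drift, so the detrended residual is centered near zero while the oscillation survives.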
As it is shown, a significant increase in energy is observed for the time interval of 30 days before the earthquake. This result clearly indicates that the change in the energy of ULF electromagnetic waves could be related to strong precursory earthquake phenomena. Moreover, changes in energy associated with strong aftershock activity were also observed 25 days after the earthquake. Finally, we present results concerning the comparison between changes in energy during night and day passes of the satellite over Haiti, which showed differences in the mean energy values, but similar results as far as the rate of the energy change is concerned. M. A. Athanasiou 2011-04-01 280 Digital Repository Infrastructure Vision for European Research (DRIVER) We can view Brownian sheet as a sequence of interacting Brownian motions or slices. Here we present a number of results about the slices of the sheet. A common feature of our results is that they exhibit phase transition. In addition, a number of open problems are presented. Khoshnevisan, Davar 2005-01-01 281 Digital Repository Infrastructure Vision for European Research (DRIVER) The principle of slicing was reviewed and a tomato slicing machine was developed based on appropriate technology. Locally available materials such as wood, stainless steel and mild steel were used in the fabrication. The machine was made to cut tomatoes into slices 2 cm thick. The capacity of the machine is 540.09 g per minute and its performance efficiency is 70%. Kamaldeen Oladimeji Salaudeen; Awagu E. F. 2012-01-01 282 Directory of Open Access Journals (Sweden) Full Text Available The principle of slicing was reviewed and a tomato slicing machine was developed based on appropriate technology. Locally available materials such as wood, stainless steel and mild steel were used in the fabrication. The machine was made to cut tomatoes into slices 2 cm thick. The capacity of the machine is 540.09 g per minute and its performance efficiency is 70%.
Kamaldeen Oladimeji Salaudeen 2012-11-01 283 CERN Document Server Gravitational, magnetic and superfluid forces can stress the crust of an evolving neutron star. Fracture of the crust under these stresses could affect the star's spin evolution and generate high-energy emission. We study the growth of strain in the crust of a spinning down, magnetized neutron star and examine the initiation of crust cracking (a "starquake"). In preliminary work (Link, Franco & Epstein 1998), we studied a homogeneous model of a neutron star. Here we extend this work by considering a more realistic model of a solid, homogeneous crust afloat on a liquid core. In the limits of astrophysical interest, our new results qualitatively agree with those from the simpler model: the stellar crust fractures under shear stress at the rotational equator, matter moves to higher latitudes and the star's oblateness is reduced. Magnetic stresses favor faults directed toward the magnetic poles. Thus our previous conclusions concerning the star's spin response still hold; namely, asymmetric redistribution... Franco, L M; Epstein, R I; Franco, Lucia M.; Link, Bennett; Epstein, Richard I. 1999-01-01 284 Digital Repository Infrastructure Vision for European Research (DRIVER) Abstract Background Partners In Health (PIH) works with the Ministry of Health to provide comprehensive health services in Haiti. Between 1994 and 2009, PIH recommended exclusive formula feeding in the prevention of mother-to-child transmission (PMTCT) of HIV program and provided support to implement this strategy. We conducted this study to assess HIV-free survival and prevalence of diarrhea and malnutrition among infants in our PMTCT program in rural Haiti where exclusive f...
Ivers Louise C; Appleton Sasha C; Wang Bingxia; Gregory, Jerome J.; Cullen Kimberly A; Smith Fawzi Mary C 2011-01-01 285 Digital Repository Infrastructure Vision for European Research (DRIVER) Abstract Background Malaria caused by Plasmodium falciparum infects roughly 30,000 individuals in Haiti each year. Haiti has used chloroquine (CQ) as a first-line treatment for malaria for many years and as a result there are concerns that malaria parasites may develop resistance to CQ over time. Therefore it is important to prepare for alternative malaria treatment options should CQ resistance develop. In many other malaria-endemic regions, antifolates, particularly... 2012-01-01 286 Science.gov (United States) The Community Seismic Network (CSN) and Quake-Catcher Network (QCN) are dense networks of low-cost ($50) accelerometers that are deployed by community volunteers in their homes in California. In addition, many accelerometers are installed in public spaces associated with civic services, publicly-operated utilities, university campuses, and high-rise buildings. Both CSN and QCN consist of observation-based structural monitoring which is carried out using records from one to tens of stations in a single building. We have deployed about 150 accelerometers in a number of buildings ranging between five and 23 stories in the Los Angeles region. In addition to a USB-connected device which connects to the host's computer, we have developed a stand-alone sensor-plug-computer device that directly connects to the internet via Ethernet or WiFi. In the case of CSN, the sensors report data to the Google App Engine cloud computing service consisting of data centers geographically distributed across the continent. This robust infrastructure provides parallelism and redundancy during times of disaster that could affect hardware. 
The QCN sensors, however, are connected to netbooks with continuous data streaming in real-time via the distributed computing Berkeley Open Infrastructure for Network Computing software program to a server at Stanford University. In both networks, continuous and triggered data streams use a STA/LTA scheme to determine the occurrence of significant ground accelerations. Waveform data, as well as derived parameters such as peak ground acceleration, are then sent to the associated archives. Visualization models of the instrumented buildings' dynamic linear response have been constructed using Google SketchUp and MATLAB. When data are available from a limited number of accelerometers installed in high rises, the buildings are represented as simple shear beam or prismatic Timoshenko beam models with soil-structure interaction. Small-magnitude earthquake records are used to identify the first two pairs of horizontal vibrational frequencies, which are then used to compute the response on every floor of the building, constrained by the observed data. The approach has been applied to a CSN-instrumented 12-story reinforced concrete building near downtown Los Angeles. The frequencies were identified directly from spectra of the 8 August 2012 M4.5 Yorba Linda, California earthquake acceleration time series. When the basic dimensions and the first two frequencies are input into a prismatic Timoshenko beam model of the building, the model yields mode shapes that have been shown to match well with densely recorded data. For the instrumented 12-story building, comparisons of the predictions of responses on other floors using only the record from the 9th floor with actual data from the other floors shows this method to approximate the true response remarkably well. Cheng, M.; Kohler, M. D.; Heaton, T. H.; Clayton, R. W.; Chandy, M.; Cochran, E.; Lawrence, J. F. 
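The STA/LTA triggering mentioned above is a standard detector: the ratio of a short-term average of signal amplitude to a long-term average jumps when a strong arrival begins. A minimal pure-Python sketch of the idea follows (illustrative only, with invented window lengths and threshold; not the networks' production code):

```python
def sta_lta(samples, sta_len, lta_len):
    """Return the short-term-average / long-term-average amplitude ratio
    for every position where both trailing windows fit."""
    ratios = []
    for i in range(lta_len, len(samples)):
        sta = sum(abs(s) for s in samples[i - sta_len:i]) / sta_len
        lta = sum(abs(s) for s in samples[i - lta_len:i]) / lta_len
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Quiet background followed by a sudden strong arrival at sample 200.
trace = [0.01] * 200 + [1.0] * 20 + [0.01] * 80
ratios = sta_lta(trace, sta_len=5, lta_len=100)
triggered = any(r > 5.0 for r in ratios)   # declare an event on a large ratio
```

During the quiet stretch the ratio sits near 1; the short window reacts to the arrival long before the long window does, so the ratio spikes and crosses the trigger threshold.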
2013-12-01 287 Science.gov (United States) Automatically determining the relative position of a single CT slice within a full body scan provides several useful functionalities. For example, it is possible to validate DICOM meta-data information. Furthermore, knowing the relative position in a scan allows the efficient retrieval of similar slices from the same body region in other volume scans. Finally, the relative position is often an important information for a non-expert user having only access to a single CT slice of a scan. In this paper, we determine the relative position of single CT slices via instance-based regression without using any meta data. Each slice of a volume set is represented by several types of feature information that is computed from a sequence of image conversions and edge detection routines on rectangular subregions of the slices. Our new method is independent from the settings of the CT scanner and provides an average localization error of less than 4.5 cm using leave-one-out validation on a dataset of 34 annotated volume scans. Thus, we demonstrate that instance-based regression is a suitable tool for mapping single slices to a standardized coordinate system and that our algorithm is competitive to other volume-based approaches with respect to runtime and prediction quality, even though only a fraction of the input information is required in comparison to other approaches. Emrich, Tobias; Graf, Franz; Kriegel, Hans-Peter; Schubert, Matthias; Thoma, Marisa; Cavallaro, Alexander 2010-03-01 288 Science.gov (United States) Cyclic-AMP dependent protein kinase (PKA) is present in most branches of the animal kingdom, and is an example in the nervous system where a kinase effector integrates the cellular effects of various neuromodulators. 
The recent development of FRET-based biosensors, such as AKAR, now allows the direct measurement of PKA activation in living cells by simply measuring the ratio between the fluorescence emission at the CFP and YFP wavelengths upon CFP excitation. This novel approach provides data with a temporal resolution of a few seconds at the cellular and even subcellular level, opening a new avenue of understanding the integration processes in space and time. Our protocol has been optimized to study morphologically intact mature neurons and we describe how simple and cheap wide-field imaging, as well as more elaborate two-photon imaging, allows real-time monitoring of PKA activation in pyramidal cortical neurons in neonate rodent brain slices. In addition, many practical details presented here also pertain to image analysis in other cellular preparations, such as cultured cells. Finally, this protocol can also be applied to the various other CFP-YFP-based FRET biosensors that are available for other kinases or other intracellular signals. It is likely that this kind of approach will be generally applicable to a broad range of assays in the near future. PMID:24052389 Polito, Marina; Vincent, Pierre; Guiot, Elvire 2014-01-01 289 CERN Document Server Under certain conditions, a $(1+1)$-dimensional slice $\hat{g}$ of a spherically symmetric black hole spacetime can be equivariantly embedded in $(2+1)$-dimensional Minkowski space. The embedding depends on a real parameter that corresponds physically to the surface gravity $\kappa$ of the black hole horizon. Under conditions that turn out to be closely related, a real surface that possesses rotational symmetry can be equivariantly embedded in 3-dimensional Euclidean space. The embedding does not obviously depend on a parameter. However, the Gaussian curvature is given by a simple formula: If the metric is written $g = \phi(r)^{-1} dr^2 + \phi(r) d\theta^2$, then $K_g = -\tfrac{1}{2}\phi''(r)$.
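The curvature formula just quoted can be checked against the standard Gaussian-curvature expression for an orthogonal metric; a routine verification, added here for the reader's convenience:

```latex
% For an orthogonal metric ds^2 = E\,dr^2 + G\,d\theta^2,
K = -\frac{1}{2\sqrt{EG}}\left[
      \partial_r\!\left(\frac{\partial_r G}{\sqrt{EG}}\right)
    + \partial_\theta\!\left(\frac{\partial_\theta E}{\sqrt{EG}}\right)
    \right].
% With E = \phi(r)^{-1} and G = \phi(r): \sqrt{EG} = 1 and \partial_\theta E = 0, so
K = -\tfrac{1}{2}\,\partial_r\bigl(\phi'(r)\bigr) = -\tfrac{1}{2}\,\phi''(r),
% in agreement with the formula quoted in the abstract.
```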
This note shows that metrics $g$ and $\hat{g}$ occur in dual pairs, and that the embeddings described above are orthogonal facets of a single phenomenon. In particular, the metrics and their respective embeddings differ by a Wick rotation that preserves the ambient symmetry. Consequently, the embedding of $g$ depends on a real... Giblin, J T; Jr, John T. Giblin; Hwang, Andrew D. 2004-01-01 290 DEFF Research Database (Denmark) In this work, we present the development of a transparent poly(methyl methacrylate) (PMMA) based microfluidic culture system for handling long-term brain slice cultures independent of an incubator. The different stages of system development have been validated by culturing GFP-producing brain slices from 8-day-old (P8) mouse pups. Fluorescence microscopic monitoring of GFP was utilized as an indicator of tissue viability. The final format of the developed system, featuring "plug-and-play" technology with a reusable fluidic connection board and easily changeable microfluidic chips, facilitated brain slice culturing for 16 days. Vedarethinam, Indumathi; Avaliani, N. 2011-01-01 291 Science.gov (United States) The Jacmel-Meyer bench lies on the south coast of the southern peninsula of Haiti in the Department de l'Ouest. Jacmel, at the west end of the bench, is about 40 kilometers airline southwest of Port-au-Prince. In the early part of January 1949, the writer in company with Mr. Rémy Lemoine made a reconnaissance study of the ground-water conditions of the bench. The object of the reconnaissance was to determine the availability of ground water for irrigation of the bench as well as for the public water supply of Jacmel. Irrigation is practiced on the bench, but the existing water supplies are insufficient to cover all irrigable lands. Jacmel is at present supplied with water from a pipe line that delivers the flow of several developed springs to the city by gravity. However, this supply is inadequate and probably at times is contaminated.
Taylor, George C., Jr. 1949-01-01 292 Science.gov (United States) The Pine Forest region is located in southeastern Haiti. The SHADA Forest Division headquarters near the eastern end of the region is about 98 kilometers by road from Port-au-Prince. In early February 1949 the writers made a brief geologic study of the region to determine the feasibility of drilling wells to obtain water for domestic, stock and small-scale industrial use. Existing water supplies are very scanty and undependable. There are no wells in the region, and springs are notably scarce and widely separated. Water supplies are now obtained principally from rain-water catchments or from roof-tops. These supplies frequently fail during prolonged dry periods and water must be hauled from great distances. Taylor, George C., Jr.; Lemoine, Rémy C. 1949-01-01 293 International Nuclear Information System (INIS) The International Commission on Radiological Protection and the international organizations that co-sponsored the International Basic Safety Standards for Protection against Ionizing Radiation and for the Safety of Radiation Sources (BSS) - among them PAHO and WHO - recommended the use of investigation levels to provide guidance for medical exposures. In this work, entrance surface doses for several common diagnostic radiology procedures have been determined from exposure rate measurements and patient technique factors in seven 'World Health Imaging System - Radiography' (WHIS-RAD) units, installed in public health services facilities of the Republic of Haiti. The results show the entrance surface doses below the guidance levels published in the BSS. Concomitant image quality measurements performed, however, indicate serious artifacts in the film processing, calling for the need of additional training of the technologists. (author) 2001-03-01 294 Science.gov (United States) The K/T boundary sequence is exposed in uplifted carbonate sediments of the southwest peninsula of Haiti.
It is found at 15 localities within the Beloc formation, a sequence of limestone and marls interpreted as a monoclinal nappe structure thrust to the north. This tectonic deformation has affected the K/T boundary deposit to varying degrees. In some cases the less competent K/T deposit has acted as a slip plane leading to extensive shearing of the boundary layer, as well as duplication of the section. The presence of glassy tektites, shocked quartz, and an Ir anomaly directly link the deposit to a bolide impact. Stratigraphic and sedimentological features of the tripartite sequence indicate that it was formed by deposition from ballistic fallout of coarse tektites, emplacement of particle gravity flows and fine grained fallout of widely dispersed impact ejecta. Carey, S.; Sigurdsson, H.; Dhondt, S.; Espindola, J. M. 1993-01-01 295 Science.gov (United States) Cholera, previously unrecognized in Haiti, spread through the country in the fall of 2010. An analysis was performed to understand the epidemiological characteristics, clinical management, and risk factors for disease severity in a population seen at the GHESKIO Cholera Treatment Center in Port-au-Prince. A comprehensive review of the medical records of patients admitted during the period of October 28, 2010-July 10, 2011 was conducted. Disease severity on admission was directly correlated with older age, more prolonged length of stay, and presentation during the two epidemic waves seen in the observation period. Although there was a high seroprevalence of human immunodeficiency virus (HIV), severity of cholera was not greater with HIV infection. This study documents the correlation of cholera waves with rainfall and its reduction in settings with improved sanitary conditions and potable water when newly introduced cholera affects all ages equally so that interventions must be directed throughout the population. 
PMID:24106188 Valcin, Claude-Lyne; Severe, Karine; Riche, Claudia T; Anglade, Benedict S; Moise, Colette Guiteau; Woodworth, Michael; Charles, Macarthur; Li, Zhongze; Joseph, Patrice; Pape, Jean W; Wright, Peter F 2013-10-01 296 Science.gov (United States) Following the 2010 Haiti earthquake, more than two million people moved to temporary camps, most of which arose spontaneously in the days after the earthquake. This study focuses on the material assistance people in five Port-au-Prince camps reported receiving, noting the differences between assistance from formal aid agencies and from 'informal' sources such as family. Seven weeks after the earthquake, 32% of camp dwellers reported receiving no assistance whatsoever; 55% had received formal aid, typically a tent or tarpaulins; and 40% had received informal aid, usually in the form of cash transfers from family living abroad. While people were grateful for any material aid, cash was more frequently considered timely and more effective than aid-in-kind. Should this study be indicative of the greater displaced population, aid agencies should consider how they might make better use of cash transfers as an aid modality. PMID:24601934 Versluis, Anna 2014-04-01 297 International Nuclear Information System (INIS) The solid waste management industry in Haiti is comprised of a formal and an informal sector. Many basic activities in the solid waste management sector are being carried out within the context of profound poverty, which exposes the failure of the socioeconomic and political system to provide sufficient job opportunities for the urban population. This paper examines the involvement of workers in the solid waste management industry in Greater Port-au-Prince and the implications for livelihood strategies. The findings revealed that the Greater Port-au-Prince solid waste management system is very inclusive with respect to age, while highly segregated with regard to gender. 
In terms of earning capacity, the results showed that workers hired by the State agencies were the most economically vulnerable group as more than 50% of them fell below the official nominal minimum wage. This paper calls for better salary scales and work compensation for the solid waste workers. 2010-06-01 298 Science.gov (United States) The solid waste management industry in Haiti is comprised of a formal and an informal sector. Many basic activities in the solid waste management sector are being carried out within the context of profound poverty, which exposes the failure of the socioeconomic and political system to provide sufficient job opportunities for the urban population. This paper examines the involvement of workers in the solid waste management industry in Greater Port-au-Prince and the implications for livelihood strategies. The findings revealed that the Greater Port-au-Prince solid waste management system is very inclusive with respect to age, while highly segregated with regard to gender. In terms of earning capacity, the results showed that workers hired by the State agencies were the most economically vulnerable group as more than 50% of them fell below the official nominal minimum wage. This paper calls for better salary scales and work compensation for the solid waste workers. PMID:20163948 Noel, Claudel 2010-06-01 299 Science.gov (United States) Rebuilding a more disaster-resilient Haiti is the defining challenge in the wake of the devastating magnitude-7 earthquake that struck in January. The contrasting experience of Chile, which weathered a magnitude-8.8 earthquake in April with casualties in the hundreds, teaches us that building resilience is an achievable and desirable goal given suitable investments and governance. Scientists and engineers have much to contribute, but doing so requires effective mechanisms to enable them to inform the rebuilding process. 
The international donor community has been a key point of engagement since their funds provide the opportunity to build new schools, hospitals, critical infrastructure and housing that will not fail in the next disaster. In advance of a gathering of international donors at the end of March, the U.S. National Science and Technology Council's interagency Subcommittee on Disaster Reduction convened a workshop that brought together over 100 scientists, engineers, planners, and policymakers, including a delegation of Haitian government officials and academics. Hosted by the University of Miami and organized by the Incorporated Research Institutions for Seismology, the workshop was co-sponsored by the U.S. Department of State, U.S. Agency for International Development (USAID), and United Nations International Strategy for Disaster Reduction with support from NASA, the National Science Foundation, and the U.S. Geological Survey (USGS). Key findings from the workshop covered the need to adopt and enforce international building codes, to use hazard assessments for earthquakes, inland flooding, and landslides in the planning process, and the central importance of long-term capacity building. As an example of one science agency's contributions, the USGS informed the initial response by rapidly characterizing the earthquake and delivering estimates of population exposure to strong shaking that were used by humanitarian organizations, aid agencies, and the Haitians themselves. In the ensuing weeks, the USGS tracked aftershocks and issued statements with probabilities of future earthquakes. Early on, the U.S. Southern Command made it possible to put an advance team of engineers and a USGS seismologist on the ground in Haiti. That initial team was followed by the first major deployment of a USGS/USAID Earthquake Disaster Assistance Team, which evolved from the long-standing partnership between these two agencies.
EDAT activities included field assessment of faulting, coastal uplift, and landslides; seismometer deployments for aftershock recording and characterization of ground shaking amplification; and development of a probabilistic seismic hazard map for Haiti and the whole island of Hispaniola. The team's efforts benefited greatly from collaboration with Haitian colleagues, with knowledge transfer occurring in both directions. The effort also benefited from significant remote sensing acquisitions, which helped to target field activities and constrain fault rupture patterns. Although the products have been put to use in Haiti, it still remains to turn hazard assessments into tools that can be used for effective planning, building code development and land-use decisions. Applegate, D. 2010-12-01 300 International Nuclear Information System (INIS) The International Commission on Radiological Protection and the international organizations that co-sponsored the International Basic Safety Standards for Protection against Ionizing Radiation and for the Safety of Radiation Sources (BSS) - among them PAHO and WHO - recommended the use of investigation levels to provide guidance for medical exposures. In this work, entrance surface doses for several common diagnostic radiology procedures have been determined from exposure rate measurements and patient technique factors in seven 'World Health Imaging System - Radiography' (WHIS-RAD) units, installed in public health services facilities of the Republic of Haiti. The results show the entrance surface doses below the guidance levels published in the BSS. Concomitant image quality measurements performed, however, indicate serious artifacts in the film processing, calling for the need of additional training of the technologists. (author) 2001-09-01 301 Energy Technology Data Exchange (ETDEWEB) Traditional energy sources are a significant fraction of energy demand in developing countries.
These sources have become increasingly scarce because of clearing of land for agriculture, charcoal production, and excessive timber harvesting. One option for mitigating one aspect of this multidimensional problem is the use of smokeless coal briquettes. Resource and market conditions are excellent in some developing countries for the substitution of smokeless briquettes for fuelwood (which includes firewood and charcoal). US Agency for International Development (USAID) has developed a five-step procedure for determining the potential substitution of smokeless briquettes for fuelwood: resource evaluation, market assessment, technological assessment, government policy and institutional assessment (including environmental and health assessments), and business and market assessment. Through recent assessment activities in Haiti, we have gained knowledge and understanding of the market mechanisms for fuelwood substitution which we intend to apply in Pakistan. 10 refs., 4 figs., 1 tab. Sabadell, A.; Shelton, R.B.; Stevenson, G.G.; Willson, T.G. 1986-01-01 302 Energy Technology Data Exchange (ETDEWEB) Sugar cane provides for 62% of the world's consumption of sugar. By the use of its ligneous matter (bagasse), this plant furnishes the necessary energy for the extraction of the sugar it contains. If the sugar factory has high performance at the thermodynamic level, a ton of sugar cane can provide from 50 to 90 kWh for external uses in addition to its own needs. It should be noted that more than 600 million tons of cane are annually processed in the world. Sugar factories therefore have a considerable unused renewable energy potential. The use of this potential could improve the earnings of sugar factories and decrease the outflow of foreign currency for the countries concerned. Haiti has understood the fundamental energy interest of the ligneous matter of the sugar cane it produces and is trying to organize its use in the best possible way. Olivier, J.
1986-01-01 303 Science.gov (United States) A mixed-methodological study conducted in the aftermath of the 2010 Haiti earthquake assessed experiences of 8 lay mental health workers (earthquake survivors themselves) implementing a psychosocial intervention for residents of camps for displaced people in Port-au-Prince. Quantitative results revealed decreased posttraumatic stress disorder symptoms, consistently high compassion satisfaction, low burnout, moderate secondary trauma, and high levels of posttraumatic growth measured over 18 months. Qualitative accounts from lay mental health workers revealed enhanced sense of self-worth, purpose, social connection, and satisfaction associated with helping others. Results support the viability of utilizing local lay disaster survivors as implementers of psychosocial intervention. (PsycINFO Database Record (c) 2014 APA, all rights reserved). PMID:24826931 James, Leah Emily; Noel, John Roger; Roche Jean Pierre, Yves Merry 2014-03-01 304 Scientific Electronic Library Online (English) Full Text Available SciELO Brazil | Language: English Abstract in Portuguese (translated): Ctenus bimaculatus Taczanowski, 1874 is removed from the synonymy of Ancylometes rufus (Walkenaer, 1837) and transferred to the genus Cupiennius Simon, 1891, in which it is considered a senior synonym of Cupiennius celerrimus Simon, 1891. New records are presented for C. bimaculatus (Taczanowski, 1874) and a new species, C. vodou, is described from Haiti. Abstract in English: Ctenus bimaculatus Taczanowski, 1874, is removed from the synonymy of Ancylometes rufus (Walkenaer, 1837) and transferred to the genus Cupiennius Simon, 1891, in which it is placed as a senior synonym of Cupiennius celerrimus Simon, 1891. New records are presented for C. bimaculatus (Taczanowski, 1874) and a new species, C. vodou, is described from Haiti. Brescovit, Antonio D.; Polotow, Daniele.
305 Directory of Open Access Journals (Sweden) Full Text Available This article examines Brazil's motivations for contributing to United Nations peacekeeping missions in East Timor and Haiti. Brazil seeks prestige and aspires to gain influence through the pragmatic methods of its foreign policy. In the author's opinion, the contributions are advantageous for the country because the cost is low and because they provide military training, global visibility, and a political and economic extension of Brazilian influence. The following article examines Brazil's motivations for contributing to peacekeeping missions. The work focuses on its participation in East-Timor and its leadership of the UN Stabilization Mission in Haiti. Brazil seeks prestige and hopes to gain influence through the pragmatic mechanisms of its foreign policy. The author believes the contributions are advantageous for the country, given the low cost of the missions, along with the receipt of military training, global visibility and an extension of Brazil's political and economic influence. Djuan Bracey 2011-12-01 306 Science.gov (United States) Introduction During disaster relief, personnel's safety is very important. Mental well-being is part of this safety issue. There is, however, a lack of objective mental well-being monitoring tools usable on scene during disaster relief. This study covers the use of validated tools for the detection of psychological distress and the monitoring of the mental well-being of disaster relief workers, during the Belgian First Aid and Support Team deployment after the Haiti earthquake in 2010. Methodology The study was conducted using a demographic questionnaire combined with validated measuring instruments: Belbin Team Role, Compassion Fatigue and Satisfaction Self-Test for Helpers, DMAT PsySTART, K6+ Self Report.
A baseline measurement was performed before departure on mission, and measurements were repeated at day 1 and day 7 of the mission, at the end of mission, and 7 days, 30 days and 90 days post mission. Results 23 out of the 27 team members were included in the study. Using the Compassion Fatigue and Satisfaction Self-Test for Helpers as a monitoring tool, a stable condition was monitored in 7 participants, a dip in 5 participants, an arousal in 10 participants and a double pattern in 1 participant. Conclusions The study proved the ability to monitor mental well-being and detect psychological distress, by self-administered validated tools, during a real disaster relief mission. However, for practical reasons some tools should be adapted to the specific use in the field. This study opens a whole new research area within the mental well-being and monitoring field. Citation: Van der Auwera M, Debacker M, Hubloue I. Monitoring the mental well-being of caregivers during the Haiti earthquake. PLoS Currents Disasters. 2012 Jul 18. PMID:22953241 Van der Auwera, Marcel; Debacker, Michel; Hubloue, Ives 2012-01-01 307 Science.gov (United States) Mathematical models can provide key insights into the course of an ongoing epidemic, potentially aiding real-time emergency management in allocating health care resources and possibly anticipating the impact of alternative interventions. Spatially explicit models of waterborne disease are made routinely possible by widespread data mapping of hydrology, road network, population distribution, and sanitation. Here, we study the ex-post reliability of predictions of the ongoing Haiti cholera outbreak. Our model consists of a set of dynamical equations (SIR-like, i.e.
subdivided into the compartments of Susceptible, Infected and Recovered individuals) describing a connected network of human communities where the infection results from the exposure to excess concentrations of pathogens in the water, which are, in turn, driven by hydrologic transport through waterways and by mobility of susceptible and infected individuals. Following the evidence of a clear correlation between rainfall events and cholera resurgence, we test a new mechanism explicitly accounting for rainfall as a driver of enhanced disease transmission by washout of open-air defecation sites or cesspool overflows. A general model for Haitian epidemic cholera and the related uncertainty is thus proposed and applied to the dataset of reported cases now available. The model allows us to draw predictions on longer-term epidemic cholera in Haiti from multi-season Monte Carlo runs, carried out up to January 2014 by using a multivariate Poisson rainfall generator, with parameters varying in space and time. Lessons learned and open issues are discussed and placed in perspective. We conclude that, despite differences in methods that can be tested through model-guided field validation, mathematical modeling of large-scale outbreaks emerges as an essential component of future cholera epidemic control. Bertuzzo, Enrico; Mari, Lorenzo; Righetto, Lorenzo; Knox, Allyn; Finger, Flavio; Casagrandi, Renato; Gatto, Marino; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea 2013-04-01 308 Science.gov (United States) Empirical fragility functions are derived by statistical processing of the data on: i) Damaged and undamaged buildings, and ii) Ground motion intensity values at the buildings' locations. This study investigates effects of different ground motion inputs on the derived fragility functions. The previously constructed fragility curves (Hancilar et al. 
2013), which rely on specific shaking intensity maps published by the USGS after the 2010 Haiti Earthquake, are compared with the fragility functions computed in the present study. Building data come from field surveys of 6,347 buildings that are grouped with respect to structural material type and number of stories. For damage assessment, the European Macroseismic Scale (EMS-98) damage grades are adopted. The simplest way to account for the variability in ground motion input could have been achieved by employing different ground motion prediction equations (GMPEs) and their standard deviations. However, in this work, we prefer to rely on stochastically simulated ground motions of the Haiti earthquake. We employ five different source models available in the literature and calculate the resulting strong ground motion in time domain. In our simulations we also consider the local site effects using published studies on NEHRP site classes and micro-zoning maps of the city of Port-au-Prince. We estimate the regional distributions from the waveforms simulated at the same coordinates that we have damage information from. The estimated spatial distributions of peak ground accelerations and velocities, PGA and PGV respectively, are then used as input to fragility computations. The results show that changing the ground motion input causes significant variability in the resulting fragility functions. Hancilar, Ufuk; Harmandar, Ebru; Çaktı, Eser 2014-05-01 309 Science.gov (United States) Months after a 7.0 magnitude earthquake hit Port-au-Prince, Haiti, over one million remain homeless and living in spontaneous internally displaced person (IDP) camps. Billions of dollars from aid organizations and government agencies have been pledged toward the relief effort, yet many basic human needs, including food, shelter, and sanitation, continue to be unmet.
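The SIR-like compartmental dynamics with a water-borne bacterial compartment, described in the cholera-modeling record above, can be sketched in a few lines. This is a generic illustration only: the parameter values and the rainfall forcing below are invented for demonstration and are not the authors' calibrated model.

```python
# Minimal SIR-plus-water-compartment sketch of epidemic cholera dynamics,
# integrated with forward Euler. Illustrative parameters only; not the
# calibrated Bertuzzo et al. model.

def simulate(days=200, dt=0.1, rain=lambda t: 1.0):
    S, I, R, B = 9999.0, 1.0, 0.0, 0.0   # susceptible, infected, recovered, bacteria
    beta, gamma = 0.5, 0.2               # exposure and recovery rates (1/day)
    K = 1e4                              # half-saturation bacterial concentration
    shed, decay = 10.0, 0.3              # bacterial shedding and decay rates
    out, t = [], 0.0
    for _ in range(int(days / dt)):
        force = beta * B / (K + B)       # dose-response force of infection
        dS = -force * S
        dI = force * S - gamma * I
        dR = gamma * I
        # rainfall multiplies contamination (washout of open-air defecation
        # sites or cesspool overflows, as in the abstract above)
        dB = rain(t) * shed * I - decay * B
        S += dS * dt; I += dI * dt; R += dR * dt; B += dB * dt
        t += dt
        out.append((t, S, I, R))
    return out

traj = simulate()
peak_I = max(i for _, _, i, _ in traj)
```

Passing a time-dependent `rain` function (e.g. a seasonal pulse) reproduces the qualitative rainfall-resurgence coupling the record describes; the real model additionally couples many communities through hydrologic transport and human mobility.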
The Sphere Project, Humanitarian Charter and Minimum Standards in Disaster Response, identifies the minimum standards to be attained in disaster response. From a human rights perspective and utilizing key indicators from the Sphere Project as benchmarks, this article reports on an assessment of the living conditions approximately 12 weeks after the earthquake in Parc Jean Marie Vincent, a spontaneous IDP camp in Port-au-Prince. A stratified random sample of households in the camp, proportionate to the number of families living in each sector, was selected. Interview questions were designed to serve as key indicators for the Sphere Project minimum standards. A total of 486 interviews were completed, representing approximately 5% of households in each of the five sectors of the camp. Our assessment identified the relative achievements and shortcomings in the provision of relief services in Parc Jean Marie Vincent. At the time of this survey, the Sphere Project minimum standards for access to health care and quantity of water per person per day were being met. Food, shelter, sanitation, and security were below minimum accepted standard and of major concern. The formal assessment reported here was completed by September 2010, and is necessarily limited to conditions in Haiti before the cholera outbreak in October. Cullen, Kimberly A.; Ivers, Louise C. 2014-01-01 310 Digital Repository Infrastructure Vision for European Research (DRIVER) Through the description of two examples of psychological interventions in humanitarian emergencies, this article aims to problematize the work of the psychologist in those situations. The concepts of "humanitarianism" and "emergency" are discussed based on two interventions made in Haiti and in the Democratic Republic of Congo. In both countries the mental health interventions happened inside a humanitarian organization and the objective of those interventions was to offer psychosocial suppor... 
Ana Cecilia Andrade de Moraes Weintraub 2011-01-01 311 Digital Repository Infrastructure Vision for European Research (DRIVER) As a corollary of work of Ozsvath and Szabo [math.GT/0301149], it is shown that the classical concordance group of algebraically slice knots has an infinite cyclic summand and in particular is not a divisible group. Livingston, Charles 2003-01-01 312 International Nuclear Information System (INIS) The quality of images and the accuracy of extracted relaxation time (T_1 and T_2) values in nuclear magnetic resonance (NMR) imaging is dependent on the characteristics of the slice. It is therefore important that the slice profile can be measured and to know how it behaves under different experimental conditions. Slice shape is determined in conventional selective excitation systems by the spectrum of the radiofrequency pulse used and the nature of the magnetic field gradient which is applied simultaneously. The effectiveness of the selection is also influenced by the subsequent rephasing gradient. Various methods have been used to investigate the slice shape in a practical situation using a 0.1 T resistive magnet NMR scanner and comparisons drawn with the predictions of theory. (author) 1986-01-01 313 DEFF Research Database (Denmark) Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context of architectural prototyping since experiments with full systems are complex and expensive and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing.
Given a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java. Christensen, Henrik Bærbak; Hansen, Klaus Marius 2013-01-01 314 CERN Document Server This paper is part of an endeavor to define an analogue of the slice filtration in the unstable motivic homotopy category. Our approach was inspired by the fact that the triangulated structures do not play a relevant role for the construction of birational homotopy categories as well as by the work of Kahn-Sujatha \cite{K-theory/0596} on birational motives, where the existence of a connection between the layers of the slice filtration and birational invariants is explicitly suggested. Our main result shows that there is an equivalence of categories between the orthogonal components for the slice filtration and the birational motivic stable homotopy categories which are constructed in this paper. Relying on this equivalence, we are able to describe the slices for projective spaces (including $\mathbb P ^{\infty}$), Thom spaces and blow-ups. Pelaez, Pablo 2011-01-01 315 CERN Multimedia If a knot K bounds a genus one Seifert surface F in the 3-sphere and F contains an essential simple closed curve alpha that has induced framing 0 and is smoothly slice, then K is smoothly slice. Conjecturally, the converse holds. It is known that if K is slice, then there are strong constraints on the algebraic concordance class of such alpha, and it was thought that these constraints might imply that alpha is at least algebraically slice. We present a counterexample; in the process we answer negatively a question of Cooper and relate the result to a problem of Kauffman. Results of this paper depend on the interplay between the Casson-Gordon invariants of K and algebraic invariants of alpha.
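For context on the knot-theory records above, the key definition can be stated compactly. These are standard textbook definitions, not results specific to the papers listed:

```latex
% A knot $K \subset S^3$ is \emph{smoothly slice} if it bounds a smoothly
% embedded disk in the four-ball:
K \text{ is slice} \;\Longleftrightarrow\; \exists\ \text{a smooth embedding } D^2 \hookrightarrow B^4 \text{ with } \partial D^2 = K \subset S^3 = \partial B^4.
% The sufficient condition quoted in record 315: if $F$ is a genus-one
% Seifert surface for $K$ and $\alpha \subset F$ is an essential simple
% closed curve with induced (Seifert) framing $0$ that is itself slice,
% then surgering $F$ along a slice disk for $\alpha$ reduces its genus
% to zero, producing a slice disk for $K$.
```

"Algebraically slice" in records 311 and 315 is the weaker condition that the Seifert form of K admits a half-rank totally isotropic subspace, which every slice knot satisfies but which does not imply sliceness.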
Gilmer, Patrick M 2011-01-01 316 CERN Document Server The Discrete Fourier Transform (DFT) underpins the solution to many inverse problems commonly possessing missing or un-measured frequency information. This incomplete coverage of Fourier space always produces systematic artefacts called Ghosts. In this paper, a fast and exact method for de-convolving cyclic artefacts caused by missing slices of the DFT is presented. The slices discussed here originate from the exact partitioning of DFT space, under the projective Discrete Radon Transform, called the Discrete Fourier Slice Theorem. The method has a computational complexity of O(n log2 n) (where n = N^2) and is constructed from a new Finite Ghost theory. This theory is also shown to unify several aspects of work done on Ghosts over the past three decades. The paper concludes with a significant application to fast, exact, non-iterative image reconstruction from sets of discrete slices obtained for a limited range of projection angles. Chandra, Shekhar; Guedon, Jeanpierre; Kingston, Andrew; Normand, Nicolas 2011-01-01 317 Digital Repository Infrastructure Vision for European Research (DRIVER) Exact timing is essential for functional MRI data analysis. Datasets are commonly measured using repeated 2D imaging methods, resulting in a temporal offset between slices. To compensate for this timing difference, slice-timing correction (i.e. temporal data interpolation) has been used as an fMRI pre-processing step for more than fifteen years. However, there has been an ongoing debate about the effectiveness and applicability of this method. This paper presents the first elaborated analysis... 
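The Discrete Fourier Slice Theorem invoked in the ghost-deconvolution record above states that the 1D DFT of a projection of an image (here, summing along one axis) equals the corresponding central slice of the image's 2D DFT. A minimal numerical check of the axis-aligned case, using only the standard library (naive O(n²) DFTs for clarity, not the fast transforms the paper uses):

```python
import cmath

def dft1(x):
    """Naive 1D DFT."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def dft2(img):
    """Naive 2D DFT: transform rows, then columns."""
    n_rows, n_cols = len(img), len(img[0])
    rows = [dft1(r) for r in img]
    cols = [dft1([rows[i][k] for i in range(n_rows)]) for k in range(n_cols)]
    return [[cols[k][i] for k in range(n_cols)] for i in range(n_rows)]  # F[i][k]

# Tiny test image
img = [[(3 * i + j) % 7 for j in range(8)] for i in range(8)]

# Projection onto the horizontal axis (sum over rows)
proj = [sum(img[i][j] for i in range(8)) for j in range(8)]

# Fourier slice theorem: DFT of the projection == row u=0 of the 2D DFT
slice_row = dft2(img)[0]
proj_hat = dft1(proj)
err = max(abs(a - b) for a, b in zip(proj_hat, slice_row))
```

The `err` value is at floating-point noise level, confirming the identity; missing projection angles therefore correspond to missing DFT slices, which is exactly the source of the cyclic "ghost" artefacts the record describes.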
Sladky, Ronald; Friston, Karl J.; Tröstl, Jasmin; Cunnington, Ross; Moser, Ewald; Windischberger, Christian 2011-01-01 318 Digital Repository Infrastructure Vision for European Research (DRIVER) In order to facilitate the study of oxidative stress in lung tissue, rat lung slices with impaired antioxidant defenses were prepared and used. Incubation of lung slices with the antineoplastic agent 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) (100 microM) in an amino acid-rich medium for 45 min produced a near-maximal (approximately 85%), irreversible inhibition of glutathione reductase, accompanied by only a modest (approximately 15%) decrease in pulmonary nonprotein sulfhydryls (NPSH) and ... 1990-01-01 319 Digital Repository Infrastructure Vision for European Research (DRIVER) Methods enabling prion replication ex vivo are important for advancing prion studies. However, few such technologies exist, and many prion strains are not amenable to them. Here we describe a prion organotypic slice culture assay (POSCA) that allows prion amplification and titration ex vivo under conditions that closely resemble intracerebral infection. Thirty-five days after contact with prions, mouse cerebellar slices had amplified the abnormal isoform of prion protein, PrPSc, >10⁵-fold. Th... Falsig, Jeppe; Julius, Christian; Margalith, Ilan; Schwarz, Petra; Heppner, Frank L.; Aguzzi, Adriano 2008-01-01 320 Digital Repository Infrastructure Vision for European Research (DRIVER) Primary trophoblasts, placental explants, and cell line cultures are commonly used to investigate placental development, physiology, and pathology, particularly in relation to pregnancy outcomes. Organotypic slice cultures are increasingly used in other systems because they maintain the normal three-dimensional tissue architecture and have all cell types represented. Herein, we demonstrate the utility of the precision-cut placental slice culture model for studying trophoblastic diseases.
Gilligan, Jeffrey; Tong, Ming; Longato, Lisa; La Monte, Suzanne M.; Gundogan, Fusun 2012-01-01 321 Digital Repository Infrastructure Vision for European Research (DRIVER) Recent studies have significantly improved our ability to investigate cell transplantation and study the physiology of transplanted cells in cardiac tissue. Several previous studies have shown that fully-immersed heart slices can be used for electrophysiological investigations. Additionally, ischemic heart slices induced by glucose and oxygen deprivation offer a useful tool to investigate mechanical integration and to measure forces of contraction of engrafted cells, at least for short term a... Habeler, Walter; Peschanski, Marc; Monville, Christelle 2009-01-01 322 Digital Repository Infrastructure Vision for European Research (DRIVER) Methods enabling prion replication ex vivo are important for advancing prion studies. However, few such technologies exist, and many prion strains are not amenable to them. Here we describe a prion organotypic slice culture assay (POSCA) that allows prion amplification and titration ex vivo under conditions that closely resemble intracerebral infection. Thirty-five days after contact with prions, mouse cerebellar slices had amplified the abnormal isoform of prion protein, PrP(Sc), >10(5)-fold... Falsig, J.; Julius, C.; Margalith, I.; Schwarz, P.; Heppner, F. L.; Aguzzi, A. 2008-01-01 323 Digital Repository Infrastructure Vision for European Research (DRIVER) We have been studying the expression and functional roles of voltage-gated potassium channels in pyramidal neurons from rat neocortex. Because of the lack of specific pharmacological agents for these channels, we have taken a genetic approach to manipulating channel expression. We use an organotypic culture preparation (16) in order to maintain cell morphology and the laminar pattern of cortex. We typically isolate acute neocortical slices at postnatal days 8-10 and maintain the slices in ... 
Foehring, Robert C.; Guan, Dongxu; Toleman, Tara; Cantrell, Angela R. 2011-01-01 324 Digital Repository Infrastructure Vision for European Research (DRIVER) Multidetector computed tomography (MDCT) has rapidly evolved from 4-detector row systems in 1998 to 256-slice and 320-detector row CT systems. With smaller detector element size and faster gantry rotation speed, spatial and temporal resolution of the 64-detector MDCT scanners have made coronary artery imaging a reliable clinical test. Wide-area coverage MDCT, such as the 256-slice and 320-detector row MDCT scanners, has enabled volumetric imaging of the entire heart free of stair-step artifac... Hsiao, Edward M.; Rybicki, Frank J.; Steigner, Michael 2010-01-01 325 Directory of Open Access Journals (Sweden) Full Text Available Proposed in this paper is a dynamic resource-aware routing and frequency slots allocation scheme with consideration of both BER requirement and distance adaptive modulation (RA-BERR-DA) for spectrum-sliced elastic optical path networks (SLICE). Numerical simulations are conducted to analyze network performance such as blocking rate and the number of used frequency slots. The results demonstrate that this scheme is able to decrease traffic blocking and improve resource utilization in dynamic spectrum assignment. Xin Chen 2012-11-01 326 Digital Repository Infrastructure Vision for European Research (DRIVER) This paper is part of an endeavor to define an analogue of the slice filtration in the unstable motivic homotopy category. Our approach was inspired by the fact that the triangulated structures do not play a relevant role for the construction of birational homotopy categories as well as by the work of Kahn-Sujatha \cite{K-theory/0596} on birational motives, where the existence of a connection between the layers of the slice filtration and birational invariants is explicitly ...
Pelaez, Pablo 2011-01-01 327 Digital Repository Infrastructure Vision for European Research (DRIVER) The development of therapeutic approaches to treat lung disease requires an understanding of both the normal and disease physiology of the lung. Although traditional experimental approaches only address either organ or cellular physiology, the use of lung slice preparations provides a unique approach to investigate integrated physiology that links the cellular and organ responses. Living lung slices are robust and can be prepared from a variety of species, including humans, and they retain ma... Sanderson, Michael J. 2011-01-01 328 Science.gov (United States) The relationship between tryptophan availability and serotonin release from rat hypothalamus was investigated using a new in vitro technique for estimating rates at which endogenous serotonin is released spontaneously or upon electrical depolarization from hypothalamic slices superfused with a solution containing various amounts of tryptophan. It was found that the spontaneous, as well as electrically induced, release of serotonin from the brain slices exhibited a dose-dependent relationship with the tryptophan concentration of the superfusion medium. Schaechter, Judith D.; Wurtman, Richard J. 1989-01-01 329 Energy Technology Data Exchange (ETDEWEB) In the context of our work on diffusion-relaxation-coupling in thin excited slices, we perform NMR experiments in static magnetic field gradients up to 200 T/m. For slice thicknesses in the range of 10 µm, the frequency bandwidth of the excited slices becomes sufficiently narrow that free induction decays (FIDs) become observable despite the presence of the strong static gradient. The observed FIDs were also simulated using standard methods from MRI physics. Possible effects of diffusion during the FID duration are still minor at this slice thickness in water but might become dominant for smaller slices or more diffusive media.
Furthermore, the detailed excitation structure of the RF pulses was studied in profiling experiments over the edge of a plane liquid cell. Side-lobe effects on the slices will be discussed along with approaches to control them. The spatial resolution achieved in the profiling experiments furthermore allows the identification of thermal expansion phenomena in the NMR magnet. Measures to reduce the temperature drift problems are presented. Gaedke, Achim; Kresse, Benjamin [Institute of Condensed Matter Physics, Technische Universitaet Darmstadt (Germany); Nestle, Nikolaus 2008-07-01 330 Digital Repository Infrastructure Vision for European Research (DRIVER) In many settings worldwide, HIV-positive individuals have experienced a significant level of stigma and discrimination. This discrimination may also impact other family members affected by the disease, including children. The aim of our study was to identify factors associated with stigma and/or discrimination among HIV-affected youth and their HIV-positive caregivers in central Haiti. Recruitment of HIV-positive patients with children aged 10-17 years was conducted in 2006-2007. Data on ... 2010-01-01 331 Digital Repository Infrastructure Vision for European Research (DRIVER) Autocoptis paulsoni n. sp. is described from Haiti. It is characterized by its large size, its cylindrical-tapered shape, its fine costate sculpture on the teleoconch, a distinct but weak circumbasal keel and its abbreviated conical juvenile shell. It is most similar to Autocoptis gruneri (Dunker 1844), which is redescribed, and its distribution is reviewed. The taxonomic status of the genus Autocoptis Pilsbry 1902 and its subgenus Urocoptola Clench, 1935 are reviewed. The genus is endemic to... Thompson, Fred G. 2012-01-01 332 International Nuclear Information System (INIS) Helical scanning introduces additional choices in technical parameters and has an impact on how much radiation dose a patient receives.
Helical scanning allows the entire thorax to be scanned within a single breath-hold, reducing slice misregistration artifacts due to breathing. Organ doses from thoracic computed tomography have been estimated in an anthropomorphic phantom using thermoluminescence dosimeters. With very similar radiological techniques in helical and axial scanning, the measured absorbed organ doses were highest in lung, 12.0 ± 2.0 mGy and 11.0 ± 2.0 mGy respectively, and in heart, 9.0 ± 4.0 mGy and 9.0 ± 5.0 mGy. Our results show that contiguous helical CT scans acquired with the same technical factors as contiguous axial scans imply approximately the same radiation dose. (author) 2001-09-01 333 Science.gov (United States) Ingot casting was scaled up to 16 cm by 16 cm square cross section size and ingots weighing up to 8.1 kg were cast. The high degree of crystallinity was maintained in the large ingot. For large sizes, the nonuniformity of heat treatment causes chipping of the surface of the ingot. Progress was made in the development of a uniform graded structure in the silica crucibles. The high speed slicer blade-head weight was reduced to 37 pounds, allowing surface speeds of up to 500 feet per minute. Slicing of 10 cm diameter workpieces at these speeds increased the throughput of the machine to 0.145 mm/min. Schmid, F.; Khattak, C. P. 1979-01-01 334 Science.gov (United States) In this study we compare the different Google Earth imagery (GEI) available before and after the 12 January 2010 Haiti earthquake and carry out a detailed analysis of the superficial seismic-related geological deformations in the following sites: 1) the capital Port-Au-Prince and other cities (Carrefour and Gresslier); 2) the mountainous area of the Massif de la Selle which is transected by the "Enriquillo-Plantain-Garden" (EPG) interplate boundary-fault (that supposedly triggered the seism); 3) some of the most important river channels and their corresponding deltas (Momanche, Grise and Frorse).
The initial results of our research were published in March 2010 in a special web page created by the scientific community to try to mitigate the devastating effects of this catastrophe (http://supersites.earthobservations.org/haiti.php). Six types of superficial geological deformations triggered by the seismic event have been identified with the GEI: liquefaction structures, chaotic rupture zones, coastal and domal uplifts, river-delta turnovers, faults/ruptures and landslides. Potential geological hazards triggered by the Haiti earthquake include landslides, inundations, reactivation of active tectonic elements (e.g., fractures), river-delta turnovers, etc. We analyzed the GEI again after the rain period and, as expected, most of the geological deformations that we initially identified had been erased and/or modified by the water washout or buried by the sediments. In this sense the GEI constitutes an invaluable instrument in the analysis of seismic geological hazards: we still have the possibility to compare all the images before and after the seism that are recorded in its useful "time tool". These are in fact the only witnesses of most of the geological deformations triggered by the Haiti earthquake that remain stored in the virtual archives of the GEI. In fact a field trip to the area today would be useless as most of these structures have disappeared. We will show that this type of seismic-related geological deformations may be useful in hazard-planning strategies aiming at the urbanistic reconstruction of Port-Au-Prince. Some inferences will be made regarding the spectacular scarp of the EPG fault zone dipping as a nearly perfect plane to the S, probably reflecting extensional paleo-movements (even if this major interplate fault is essentially sinistral).
Finally, we will analyze the results published in Nature Geoscience (November 2010) that question the role of the EPG fault in the Haiti seism, and that highlight the fact that seismology is still unable to unravel most of the keys of this major earthquake. Doblas, M.; Benito, B.; Torres, Y.; Belizaire, D.; Dorfeuille, J.; Aretxabala, A. 2013-05-01 335 International Nuclear Information System (INIS) Full text: Haiti is the poorest country in the Americas, with 75% of the population living under the poverty line and 56% in an extreme situation. Under UN classification, the country ranked 154 out of 193. Haiti has been an IAEA member since 1958. After a few years of inactive participation in the Agency's activities for member nations, except for signing one agreement on physical protection, the country became involved at various levels of technical cooperation. In 2003 Haiti paid in full its arrears in membership contributions to the IAEA, sending a clear signal of its willingness to renew technical cooperation. At the same time the Agency renewed its technical cooperation with the country at various levels, mainly in: 1. isotope hydrology and applications of isotopes and radiation in industry; 2. radiation medicine and health; 3. nuclear and radiation safety and nuclear security; 4. general atomic energy development. As a third country, Haiti is clearly dependent on international cooperation resources. A fundamental challenge for its National Regulatory Board is to ensure the availability of optimum quantities of resources with national and foreign partners, to establish a culture of regular regulatory control activities and safety, and to improve the quality of human and training assistance for its technical entities.
The National Regulatory Authority faces a range of important challenges in all sectors, to mention some of them: Improve coordination with technical ministries and other technical entities in related nuclear fields within a clear plan of action; Advocate the introduction of teaching safety and security culture and radiological protection at university level and in secondary education; Strengthen the training of personnel in the field of radioprotection, along with customs and border officers; Scale up the number of trained people in physics and nuclear techniques in medicine and industry; Scale up the use of new and reliable detectors to better detect sources and include all in-country sources in a national database, as recommended by the Agency; Scale up participation of the national authority's dosimetric unit in various regional inter-comparative studies; Advocate full use of scrap metal control to prevent malicious activities, and develop regulation regarding the disposal of unused sealed sources; Promote cooperation with other advanced national regulatory authorities at regional and Caribbean levels; Consolidate the legal and regulatory framework by providing Haiti with the necessary technical legal assistance. While it remains to be seen if the Regulatory Board will become a fully independent entity with its own budget, human resources development in radioprotection will continue to be one of the pillars of Board activity. In this regard, the NRB and its technical unit must continue to work with regional boards to acquire more training, and to collaborate with the Ministry of Finance and lawmakers to secure adequate financial and material resources. (author) 2010-09-01 336 Science.gov (United States) Haiti is considered particularly vulnerable to the effects of climate change, but directly linking climate change to health effects is limited by the lack of robust data and the multiple determinants of health.
Worsening storms and rising temperatures in this rugged country with high poverty is likely to adversely affect economic activity, population growth and other determinants of health. For the past two years, the Univ. of Washington has supported the public hospital in the department of Grand'Anse. Grand'Anse, a relatively contained region in SW Haiti with an area of 11,912 km2, is predominantly rural with a population of 350,000 and is bounded to the south by peaks up to 2,347 m. Grand'Anse would serve as an excellent site to assess the interface between climate change and health. The Demographic and Health Survey (DHS) shows health status is low relative to other countries. Estimates of climate change for Jeremie, the largest city in Grand'Anse, predict the mean monthly temperature will increase from 26.1 to 27.3 oC while mean monthly rainfall will decrease from 80.5 to 73.5 mm over the next 60 years. The potential impact of these changes ranges from threatening food security to greater mortality. Use of available secondary data such as indicators of climate change and DHS health status are not likely to offer sufficient resolution to detect positive or negative impacts of climate change on health. How might a mixed methods approach incorporating secondary data and quantitative and qualitative survey data on climate, economic activity, health and determinants of health address the hypothesis: Climate change does not adversely affect health? For example, in Haiti most women deliver at home. Maternal mortality is high at 350 deaths/100,000 deliveries. This compares to deliveries in facilities where the median rate is less than 100/100,000. Thus, maternal mortality is closely linked to access to health care in this rugged mountainous country. Climate change might result in worsening tropical storms that impede access due to the poor condition of footpaths and thus adversely affect maternal mortality. 
Additional factors such as deforestation and associated accelerated rainwater runoff may further worsen conditions. The linkage between maternal mortality and climate change will not be detected unless more robust methods are used. We propose using a mixed methods approach that combines use of secondary climate and health data (e.g. Landsat, stream flow, precipitation) with a stratified spatial sampling strategy across this complex land mass, coupled with direct observation and qualitative methods using key informant interviews to probe for root causes of changes in health outcomes such as weather, deforestation, food and economic security. This mixed methods approach can be used for cross-sectional, retrospective and longitudinal studies linking the impact of climatological factors and important determinants of health such as economic activity. We propose that the impact of climate change on health will be best studied by mixed method approaches, and that reliance on secondary data alone risks missing important associations between changes in climate and health. Barnhart, S.; Coq, R. N.; Frederic, R.; DeRiel, E.; Camara, H.; Barnhart, K. R. 2013-12-01 337 Directory of Open Access Journals (Sweden) Full Text Available En el momento de autorizar la intervención militar y la posterior creación de la misión de mantenimiento de la paz, de enero a junio del 2004, el Consejo de Seguridad de la ONU carecía de un diagnóstico preciso sobre el carácter del Estado haitiano y su historia, el tipo de conflicto y la naturaleza de la violencia en el país, lo que explica, a casi cuatro años de esa intervención, la recurrente inestabilidad y la persistencia de la violencia en la nación caribeña. El tipo de intervención y las estrategias de pacificación utilizadas por la comunidad internacional fueron inapropiadas y se mostraron ineficaces para atender casos como el haitiano.
La misión de imposición de la paz desplegada en el país utilizó la disuasión militar para contener las manifestaciones externas de la violencia, congelando así el conflicto y garantizando la realización de elecciones masivas y transparentes el 6 de febrero de 2006. Sin embargo, las causas presentes e históricas que generan y reproducen esta violencia siguen intactas. La democracia no puede prosperar en ausencia de un estado que garantice un orden político con un mínimo de institucionalidad, particularmente cuando el desorden se ha convertido en el instrumento político por excelencia de algunos actores para mantener el statu quo. Los beneficios del proceso de normalización democrática, en un contexto de ausencia estatal, no son sostenibles en el tiempo. When the UN Security Council authorized the military intervention in Haiti and passed a resolution creating a peacekeeping force in the country, it lacked a precise diagnosis of the character of the Haitian state, its history, the essential qualities of the ongoing conflict, as well as the nature of the violence the country was experiencing. This explains why, after almost five years of UN presence in Haiti, recurrent instability and persistent violence are still common features in the Caribbean nation. The kind of intervention and the strategies implemented by the international community to pacify the country were inappropriate and they proved to be ineffective in tackling cases such as the Haitian one. The peace enforcement mission deployed to the country used military deterrence to contain external manifestations of the violence, thereby freezing the conflict and pursuing the organization of massive and transparent elections that were held on 6 February 2006. However, the present and historical causes provoking and reproducing the violence in Haiti are still in place.
Democracy cannot thrive in the absence of a state structure able to guarantee a political order with a minimum of institutional development, particularly when disorder has become the preferred political tool for some local actors in order to maintain the status quo. The benefits of a normalized institutional life, in a context of state absence, are not sustainable for the long term. Gastón AN 2009-02-01 338 CERN Document Server The Kasner metrics are among the simplest solutions of the vacuum Einstein equations, and we use them here to examine the conformal method of finding solutions of the Einstein constraint equations. After describing the conformal method's construction of constant mean curvature (CMC) slices of Kasner spacetimes, we turn our attention to non-CMC slices of the smaller family of flat Kasner spacetimes. In this restricted setting we obtain a full description of the construction of certain $U^{n-1}$ symmetric slices, even in the far-from-CMC regime. Among the conformal data sets generating these slices we find that most data sets construct a single flat Kasner spacetime, but that there are also far-from-CMC data sets that construct one-parameter families of slices. Although these non-CMC families are analogues of well-known CMC one-parameter families, they differ in important ways. Most significantly, unlike the CMC case, the condition signaling the appearance of these non-CMC families is not naturally detected fro... Maxwell, David 2014-01-01 339 Science.gov (United States) We consider maximal slices of the Myers-Perry black hole, the doubly spinning black ring, and the Black Saturn solution. These slices are complete, asymptotically flat Riemannian manifolds with inner boundaries corresponding to black hole horizons. Although these spaces are simply connected as a consequence of topological censorship, they have non-trivial topology.
In this paper we investigate the question of whether the topology of spatial sections of the horizon uniquely determines the topology of the maximal slices. We show that the horizon determines the homological invariants of the slice under certain conditions. The homological analysis is extended to black holes for which explicit geometries are not yet known. We believe that these results could provide insights in the context of proving existence of deformations of this initial data. For the topological slices of the doubly spinning black ring and the Black Saturn we compute the homotopy groups up to dimension 3 and show that their four-dimensional homotopy group is not trivial. Alaee, Aghil; Kunduri, Hari K.; Martínez Pedroza, Eduardo 2014-03-01 340 International Nuclear Information System (INIS) A goal of the Gun Test Facility (GTF) at SLAC is to investigate the production of high-brightness electron beams for the Linac Coherent Light Source (LCLS) X-ray FEL. High brightness in the RF photocathode gun occurs when the time-sliced emittance is nearly the same as the cathode thermal emittance and when the slices are all lined up, i.e., their Twiss parameters are nearly identical. In collaboration with the BNL Source Development Lab (SDL), we have begun a systematic study of the slice emittance at GTF. The technique involves giving the bunch a near-linear energy chirp using the booster linac and dispersing it with a magnetic spectrometer. Combined with knowledge of the longitudinal phase space, this establishes the energy-time correlation on the spectrometer screen. The slice emittances are determined by varying the strengths of the quadrupoles in front of the spectrometer. Spectrometer images for a range of quadrupole settings are then binned into small energy/time windows and analysed for the slice emittance and Twiss parameters.
Results for various gun parameters are presented. 2003-07-11 341 CERN Multimedia Recently, there have been efforts to solve Einstein's equation in the context of a conformal compactification of space-time. Of particular importance in this regard are the so-called CMC-foliations, characterized by spatial hyperboloidal hypersurfaces with a constant extrinsic mean curvature K. However, although of interest for general space-times, CMC-slices are known explicitly only for the spherically symmetric Schwarzschild metric. This work is devoted to numerically determining CMC-slices within the Kerr solution. We construct such slices outside the black hole horizon through an appropriate coordinate transformation in which an unknown auxiliary function A is involved. The condition K=const throughout the slice leads to a nonlinear partial differential equation for the function A, which is solved with a pseudo-spectral method. The results exhibit exponential convergence, as is to be expected in a pseudo-spectral scheme for analytic solutions. As a by-product, we identify CMC-slices of the Schwarzschild ... Schinkel, David; Ansorg, Marcus 2013-01-01 342 Energy Technology Data Exchange (ETDEWEB) We discuss an upgrade R&D project for NSLS-II to generate sub-picosecond short x-ray pulses using laser slicing. We discuss its basic parameters and present a specific example of a viable design and its performance. Since the installation of the laser slicing system into the storage ring will break the symmetry of the lattice, we demonstrate that it is possible to recover the dynamical aperture to the original design goal of the ring. There is rapid growth of the ultrafast user community interested in science using sub-picosecond x-ray pulses. In BNL's Short Pulse Workshop, the discussion from users showed clearly the need for a sub-picosecond pulse source using the laser slicing method.
In the proposal submitted following this workshop, the NSLS team proposed both hard x-ray and soft x-ray beamlines using laser slicing pulses. Hence there is clearly a need to consider R&D efforts on laser-slicing short-pulse generation at NSLS-II to meet these goals. Yu, L.; Blednykh, A.; Guo, W.; Krinsky, S.; Li, Y.; Shaftan, T.; Tchoubar, O.; Wang, G.; Willeke, F.; Yang, L. 2011-03-28 343 Directory of Open Access Journals (Sweden) Full Text Available Background: Since HIV-1 RNA (viral load) testing is not routinely available in Haiti, HIV-infected patients receiving antiretroviral therapy (ART) are monitored using the World Health Organization (WHO) clinical and/or immunologic criteria. Data on survival and treatment outcomes for HIV-1 infected patients who meet criteria for ART failure are limited. We conducted a retrospective study to compare survival rates for patients who experienced failure on first-line ART by clinical and/or immunologic criteria and switched to second-line ART vs. those who failed but did not switch. Methods: Patients receiving first-line ART at the GHESKIO Center in Port-au-Prince, Haiti, who met WHO clinical and immunologic criteria for failure were identified. Survival and treatment outcomes were compared in patients who switched their ART regimen and those who did not. Cox regression analysis was used to determine predictors of mortality after failure of first-line ART. Results: Of 3126 patients who initiated ART at the GHESKIO Center between 1 March 2003 and 31 July 2008, 482 (15%) met WHO immunologic and/or clinical criteria for failure. Among those, 195 (41%) switched to second-line ART and 287 (59%) did not. According to Kaplan-Meier survival analysis, the probability of survival to 12 months after failure of first-line ART was 93% for patients who switched to second-line ART after failure and 88% for patients who did not switch.
Predictors of mortality after failure of first-line ART were weight in the lowest quartile for sex, CD4 T cell count ≤100, adherence <90% at the time of failure, and not switching to second-line ART. Conclusions: Patients who failed first-line ART based on clinical and/or immunologic criteria and did not switch to second-line therapy faced a higher mortality than those who switched after failure. To decrease mortality, interventions to identify patients in whom ART may be failing earlier are needed urgently. In addition, there is a major need to optimize second-line antiretroviral regimens for improved potency, lower toxicity and greater convenience for patients. Jean W Pape 2012-06-01 344 Science.gov (United States) Clinicians make a variety of assessments about their clients, from judging personality traits to making diagnoses, and a variety of methods are available to do so, ranging from observations to structured interviews. A large body of work demonstrates that from a brief glimpse of another's nonverbal behavior, a variety of traits and inner states can be accurately perceived. Additionally, from these "thin slices" of behavior, even future outcomes can be predicted with some accuracy. Certain clinical disorders such as Parkinson's disease and facial paralysis disrupt nonverbal behavior and may impair clinicians' ability to make accurate judgments. In certain contexts, personality disorders, anxiety, depression, and suicide attempts and outcomes can be detected from others' nonverbal behavior. Additionally, thin slices can predict psychological adjustment to divorce, bereavement, sexual abuse, and well-being throughout life. Thus, for certain traits and disorders, judgments from a thin slice could provide a complementary tool for the clinician's toolbox.
PMID:24423788 Slepian, Michael L; Bogart, Kathleen R; Ambady, Nalini 2014-01-01 345 International Nuclear Information System (INIS) The introduction of 4-slice scanners with subsecond gantry rotation times paved the way for such demanding applications as cardiac imaging. However, challenges remained. For example, breath hold times of 40 seconds caused many patient groups to be excluded. Some of these issues were addressed by the introduction of 16-slice CT scanners with submillimeter spatial resolution and faster gantry rotation times, resulting in a significant decrease in the coverage time (less than 20 s). Further developments in scanner technology were brought about by the introduction of 40- and 64-slice scanners, such as the Philips Brilliance, with a z-axis coverage of 40 mm, making it possible to cover the entire cardiac anatomy in less than 15 seconds [1]. Additionally, the COBRA™ adaptive multi-cycle reconstruction approach can result in further improvement in temporal resolution by using projection data from two or more cardiac cycles [2-5]. (orig.) 2005-01-01 346 Digital Repository Infrastructure Vision for European Research (DRIVER) This paper proposes a new approach to designing low-complexity high-speed turbo codes for very low frame error rate applications. The key idea is to adapt and optimize the technique of multiple turbo codes to obtain the required frame error rate, combined with a family of turbo codes, called multiple slice turbo codes (MSTCs), which allows high throughput at low hardware complexity. The proposed coding scheme is based on a versatile three-dimensional multiple slice turbo code (3D-MSTC) using d... David Gnaedig; Emmanuel Boutillon; Jézéquel, Michel 2005-01-01 347 Digital Repository Infrastructure Vision for European Research (DRIVER) With direct deposition of metal, a new Rapid Prototyping process has been developed at Cranfield University over the last couple of years.
The process entails the use of a Gas Metal Arc fusion welding robot which deposits successive layers of metal in such a way that it forms a 3D solid component. First, a solid model is drawn using a CAD system, then data indicating the kind of layers and their dimensions is incorporated, and the solid is automatically sliced. This slicing routine also generates reports ... Ribeiro, António Fernando; Norrish, John 1996-01-01 348 Digital Repository Infrastructure Vision for European Research (DRIVER) This paper proposes a new approach to designing low-complexity high-speed turbo codes for very low frame error rate applications. The key idea is to adapt and optimize the technique of multiple turbo codes to obtain the required frame error rate combined with a family of turbo codes, called multiple slice turbo codes (MSTCs), which allows high throughput at low hardware complexity. The proposed coding scheme is based on a versatile three-dimensional multiple slice turbo code (3D-MSTC)... 2005-01-01 349 CERN Document Server In this note we study the geometry and topology of maximal slices of certain stationary black hole solutions of the vacuum Einstein equations in five dimensions. These slices are complete, asymptotically flat Riemannian manifolds with inner boundaries corresponding to black hole horizons. Although these spaces are simply connected as a consequence of topological censorship, they may have non-trivial topology. As much of the investigation is at the topological level, we can also extend this analysis to solutions for which explicit geometries are not yet known. Alaee, Aghil; Martínez-Pedroza, Eduardo 2013-01-01 350 CERN Document Server In this paper we study the energy of ULF electromagnetic waves recorded by the satellite DEMETER during its passes over Haiti before and after a destructive earthquake. This earthquake occurred on 12/1/2010, at geographic latitude 18.46° and longitude 287.47°, with magnitude 7.0.
Specifically, we focus on the variations in energy of the Ez electric field component over a time period of 100 days before and 50 days after the strong earthquake. In order to study these variations, we developed a novel method that can be divided in two stages: first we filter the signal, keeping only the very low frequencies, and afterwards we eliminate its trend using techniques of Singular Spectrum Analysis combined with a third-degree polynomial filter. As is shown, a significant increase in energy is observed for the time interval of 30 days before the strong earthquake. This result clearly indicates that the change in the energy of ULF electromagnetic waves could be related to strong precursory e... Athanasiou, M; Iliopoulos, A; Pavlos, G; David, K 2010-01-01 351 Directory of Open Access Journals (Sweden) Full Text Available Abstract Background: Towards the end of 2006 open conflict broke out between United Nations forces and armed militia in Port-au-Prince, Haiti. Fighting was most intense in the district of Cité Soleil. Methods: A cross-sectional, random-sample survey among the conflict-affected populations living in Cité Soleil and Martissant was carried out over a 4-week period in 2006 using a semi-structured questionnaire to assess exposure to violence and access to health care. Household heads from 945 households (corresponding to 4,763 people) in Cité Soleil and 1,800 households (9,539 people) in Martissant provided information on household members. The average recall period was 579 days for Cité Soleil and 601 days for Martissant. Results: In Cité Soleil 120 deaths (21 children) were reported (CMR 0.4 deaths/10,000 people/day). Discussion: Extrapolating to the total population of these two districts, some 2,000 violent deaths occurred over the recall period. Among the survivors, violence had lasting effects in terms of physical and mental health and loss of property and possessions.
Van Herp Michel 2009-03-01 352 Directory of Open Access Journals (Sweden) Full Text Available This article examines how sexual and gender values in rural Haiti are expressed through 'tat' theatrical songs and performances among girls aged 10 to 20. The author describes how these sexual values relate to a concept of gendered capital, or what he calls a "sexual-moral economy", whereby men who want sex with women must provide material rewards for this sexual access. He explains how this combines with certain gender socializations and views of men (unlike women) as really needing sex, so that men are socialized, also by women, from an early age to aggressively pursue women, while women are socialized toward restraint and to require material rewards. The author illustrates, through examples, how tat songs reflect and refer to these values, often through sexual metaphors. In addition, he shows how they relate to the wider social and gender context of matrifocality and subsistence strategies, notably the household, wherein the woman tends to be dominant over the man, who supplied the house as the expected price for sexual access; she manages the production and reproduction of her daughters within it, instilling in them the said sexual values, with children seen as necessary for household work while the women also engage in market activities outside the house. Timothy T. Schwartz 2008-12-01 353 Directory of Open Access Journals (Sweden) Full Text Available The possible impacts of the level of formal education on different aspects of disaster management, prevention, alarm, emergency, or postdisaster activities, were studied in a comparative perspective for three countries with a comparable exposure to hurricane hazards but different capacities for preventing harm. The study focused on the role of formal education in reducing vulnerability operating through a long-term learning process and put particular emphasis on the education of women.
The comparative statistical analysis of the three countries was complemented by qualitative studies in Cuba and the Dominican Republic collected in 2010-2011. We also analyzed to what degree targeted efforts to reduce vulnerability were interconnected with other policy domains, including education and science, health, national defense, regional development, and cultural factors. We found that better education in the population had clear short-term effects on reducing vulnerability through awareness of crucial information, faster and more efficient responses to alerts, and better postdisaster recuperation. However, there were also important longer-term effects of educational efforts to reduce social vulnerability through the empowerment of women, its effect on the quality of institutions, and social networks for mutual assistance creating a general culture of safety and preparedness. Not surprisingly, on all three accounts Cuba clearly did best, Haiti did worst, and the Dominican Republic took an intermediate position. 2013-09-01 354 Science.gov (United States) On January 12, 2010, a destructive strike-slip earthquake (Mw 7.1) occurred in the oblique convergence zone of Hispaniola. Over 222,000 people were killed, most in the capital city of Port-au-Prince in Haiti, which lies about 80 km east of the mainshock. The earthquake ruptured a 50-km fault trace along (or sub-parallel to) the much larger Enriquillo Fault Zone (EFZ), which last broke in 1751 along a ~150-km zone. Assuming an average slip rate of 5-9 mm/yr, the amount of slip deficit accumulated on the entire EFZ is estimated to be on the order of 1.5-2.5 meters. Here, we present results of a deformation field investigation using ScanSAR data over the period 2004-2009, prior to the 2010 earthquake.
Both the data and the models reveal that the fault segment of the EFZ to the west of the 2010 earthquake has been aseismically slipping for years, at a rate similar to the long-term interseismic slip rate of the EFZ. Therefore, this study shows that the accumulating stress was partly released by aseismic slip. Moreover, Coulomb stress calculations suggest that the creep at this EFZ segment may have enhanced the occurrence of the 2010 earthquake disaster. The relation observed between EFZ aseismic slip and the 2010 earthquake confirms the importance of aseismic slip for a better understanding of earthquake processes and for seismic hazard mitigation. Shirzaei, M.; Walter, T. R. 2010-12-01 355 Scientific Electronic Library Online (English) Full Text Available SciELO Chile | Language: Spanish Abstract in Spanish: Roces constantes entre el presidente y la oposición, escándalos políticos a repetición, sucesivos cambios ministeriales, un crecimiento económico mediocre y continuos lanzamientos de nuevos programas sociales por parte del gobierno de Michel Joseph Martelly han marcado el año 2012 en Haití. La princ [...] ipal muestra de las vicisitudes políticas experimentadas durante el año se materializa en la incapacidad del presidente para conformar el organismo electoral, que debe organizar las elecciones intermedias para reemplazar a los senadores cuyo mandato llegó a término en 2012. Abstract in English: Constant frictions between the President and the Opposition, repetitive political scandals, frequent changes in the Cabinet, mediocre economic growth and the continuous launch of new social programs have marked the year 2012 in Haiti. A good example of the political problems that the country faced [...] during 2012 is the incapacity of the president to form the Electoral Council that has to organize midterm elections for the replacement of senators whose mandate has ended in 2012. RESERVE, ROODY.
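The slip-deficit figure quoted in the EFZ deformation abstract above (on the order of 1.5-2.5 m for a 5-9 mm/yr slip rate accumulating since the 1751 rupture) can be checked with a few lines of arithmetic. The sketch below is illustrative only; the rate range and the 1751-2010 accumulation interval are assumptions taken directly from that abstract, not an independent estimate.

```python
# Back-of-the-envelope slip-deficit check for the Enriquillo Fault Zone (EFZ).
# Assumptions (from the abstract, not independently verified):
#   - last major rupture in 1751, evaluated at the 2010 earthquake
#   - long-term slip rate between 5 and 9 mm/yr

def slip_deficit_m(rate_mm_per_yr: float,
                   start_year: int = 1751,
                   end_year: int = 2010) -> float:
    """Accumulated slip deficit in meters for a constant slip rate."""
    elapsed_years = end_year - start_year            # 259 years
    return rate_mm_per_yr * elapsed_years / 1000.0   # mm -> m

low = slip_deficit_m(5.0)
high = slip_deficit_m(9.0)
print(f"slip deficit: {low:.1f}-{high:.1f} m")
```

With these assumed inputs the range comes out near 1.3-2.3 m, the same order of magnitude as the abstract's 1.5-2.5 m estimate; the original authors may have used a slightly different rate range or elapsed interval.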
356 Energy Technology Data Exchange (ETDEWEB) In April 2010, a team of scientists and engineers from Lawrence Berkeley National Lab (LBNL) and UC Berkeley, with support from the Darfur Stoves Project (DSP), undertook a fact-finding mission to Haiti in order to assess needs and opportunities for cookstove intervention. Based on data collected from informal interviews with Haitians and NGOs, the team (Scott Sadlon, Robert Cheng, and Kayje Booker) identified and recommended stove testing and comparison as a high-priority need that could be filled by LBNL. In response to that recommendation, five charcoal stoves were tested at the LBNL stove testing facility using a modified form of version 3 of the Shell Foundation Household Energy Project Water Boiling Test (WBT). The original protocol is available online. Stoves were tested for time to boil, thermal efficiency, specific fuel consumption, and emissions of CO and CO2, and the ratio CO/CO2. In addition, Haitian user feedback and field observations on a subset of the stoves were combined with the experiences of the laboratory testing technicians to evaluate the usability of the stoves and their appropriateness for Haitian cooking. The laboratory results from emissions and efficiency testing and conclusions regarding usability of the stoves are presented in this report. Booker, Kayje; Han, Tae Won; Granderson, Jessica; Jones, Jennifer; Lask, Kathleen; Yang, Nina; Gadgil, Ashok 2011-06-01 357 International Nuclear Information System (INIS) Initial 87Sr/86Sr ratios have been determined for a representative suite of Upper Cretaceous granodiorites and associated rocks from the Above Rocks composite stock in central Jamaica and the Terre-Neuve pluton in northwestern Haiti. The average initial 87Sr/86Sr ratio for seven samples of the Terre-Neuve intrusion is 0.7036, with a range of 0.7026-0.7047. For two samples of the Above Rocks the initial ratios are 0.7033 and 0.7034.
A third sample from this intrusive has an initial ratio of 0.7084, which is tentatively attributed to contamination. The initial 87Sr/86Sr ratios indicate that neither ancient sialic crust nor sediments carried down a Benioff zone can be the primary source of the granodioritic magma. K/Rb ratios for these rocks range from 178 to 247, which are much lower than the average values (≥1000) for tholeiitic basalts. It is concluded that the magmas originated primarily by melting of downthrust oceanic crust or adjacent mantle material. (Auth.) 1979-01-01 358 Science.gov (United States) On January 12, 2010, Haiti experienced one of the worst disasters in human history, a magnitude 7.0 earthquake, resulting in the deaths of approximately 222,000 Haitians and grievous injury to hundreds of thousands more. International agencies, academic institutions, nongovernmental organizations, and associations responded by sending thousands of medical professionals, including nurses, doctors, medics, and physical therapists, to support the underresourced Haitian health system. The volunteers who came to provide medical care to disaster victims worked tirelessly under extremely challenging conditions, but in many cases they had no previous work experience in resource-limited settings, minimal training in tropical disease, and no knowledge of the historical background that contributed to the catastrophe. Often, this lack of preparedness hindered their ability to care adequately for their patients. The authors of this perspective argue that the academic medicine community must prepare medical trainees not only to treat the illnesses of patients in resource-limited settings but also to fight the injustice that fosters disease and allows such catastrophes to unfold.
The authors advocate purposeful attention to building global health curricula; providing adequate time, funding, and opportunity to work in resource-limited international settings; and ensuring sufficient supervision for trainees to work safely. They also call for an interdisciplinary approach to global health that both affirms health care as a fundamental human right and explores the historical, economic, and political causes of inequitable health care. PMID:21494116 Archer, Natasha; Moschovis, Peter P; Le, Phuoc V; Farmer, Paul 2011-07-01 359 Science.gov (United States) The lack of culturally appropriate mental health assessment instruments is a major barrier to screening and evaluating the efficacy of interventions. Simple translation of questionnaires produces misleading and inaccurate conclusions. Multiple alternative approaches have been proposed, and this study compared two approaches tested in rural Haiti. First, an established transcultural translation process was used to develop Haitian Kreyòl versions of the Beck Depression Inventory (BDI) and Beck Anxiety Inventory (BAI). This entailed focus group discussions evaluating comprehensibility, acceptability, relevance, and completeness. Second, qualitative data collection was employed to develop new instruments: the Kreyòl Distress Idioms (KDI) and Kreyòl Function Assessment (KFA) scales. For the BDI and BAI, some items were found to be nonequivalent due to lack of specificity, interpersonal interpretation, or conceptual nonequivalence. For all screening tools, items were adjusted if they were difficult to endorse or severely stigmatizing, represented somatic experiences of physical illness, or were difficult to understand. After the qualitative development phases, the BDI and BAI were piloted with 31 and 27 adults, respectively, and achieved good reliability.
Without these efforts to develop appropriate tools, attempts at screening would have captured a combination of atypical suffering, everyday phenomena, and potential psychotic symptoms. Ultimately, a combination of transculturally adapted and locally developed instruments appropriately identified those in need of care through accounting for locally salient symptoms of distress and their negative sequelae. PMID:24067540 Kaiser, Bonnie N; Kohrt, Brandon A; Keys, Hunter M; Khoury, Nayla M; Brewster, Aime-Rika T 2013-08-01 360 Science.gov (United States) The aspen leaf miner, Phyllocnistis populiella, feeds on the contents of epidermal cells on both top (adaxial) and bottom (abaxial) surfaces of quaking aspen leaves, leaving the photosynthetic tissue of the mesophyll intact. This type of feeding is taxonomically restricted to a small subset of leaf mining insects but can cause widespread plant damage during outbreaks. We studied the effect of epidermal mining on aspen growth and physiology during an outbreak of P. populiella in the boreal forest of interior Alaska. Experimental reduction of leaf miner density across two sites and 3 years significantly increased annual aspen growth rates relative to naturally mined controls. Leaf mining damage was negatively related to leaf longevity. Leaves with heavy mining damage abscised 4 weeks earlier, on average, than leaves with minimal mining damage. Mining damage to the top and bottom surfaces of leaves had different effects on physiology. Mining on the top surface of the leaf had no significant effect on photosynthesis or conductance and was unrelated to leaf stable C isotope ratio (δ¹³C). Mining damage to the bottom leaf surface, where stomata are located, had significant negative effects on net photosynthesis and water vapor conductance. Percent bottom mining was positively related to leaf δ¹³C.
Taken together, the data suggest that the primary mechanism for the reduction of photosynthesis by epidermal leaf mining by P. populiella is the failure of stomata to open normally on bottom-mined leaves. PMID:18523809 Wagner, Diane; DeFoliart, Linda; Doak, Patricia; Schneiderheinze, Jenny 2008-08-01 361 International Nuclear Information System (INIS) The 12UD Pelletron tandem accelerator with a history of over 35 years at the University of Tsukuba was destroyed by the Great East Japan Earthquake on 11 March 2011. We have mapped out a strategy for the post-quake reconstruction project. At present, we are planning to install a new middle-sized tandem accelerator in the 2nd experimental room in place of the broken 12UD Pelletron tandem accelerator. The new accelerator system will consist of a horizontal-type 6 MV Pelletron tandem accelerator, four new ion sources plus the polarized ion source (which will be moved from the 9th floor to a new experimental booth on the ground), an accelerator mass spectrometry system and an ion beam analysis system. A high-energy beam transport line will connect the 2nd experimental room to the present experimental facilities in the 1st experimental room. The new AMS system will be capable of measuring environmental levels of the long-lived radioisotopes ¹⁰Be, ¹⁴C, ²⁶Al, ³⁶Cl, ⁴¹Ca and ¹²⁹I. The new IBA system will be equipped with a high-precision five-axis goniometer. The 6 MV tandem accelerator will mainly be applied to AMS, IBA, heavy-ion irradiation and nuclear physics. Beam delivery will start in September 2014. (author) 2012-08-08 362 Directory of Open Access Journals (Sweden) Full Text Available Nobody can forget the devastating 7.0-magnitude earthquake that struck Port-au-Prince, poverty-stricken Haiti, on 12 January 2010. At least 75,000 people were killed and hundreds of thousands became homeless; authorities are worried about sanitation and outbreaks of disease in the region.
The camps are full of people and there are not even the most basic facilities for any others. Humanity obliges us to help them in any possible way. I reviewed the literature about the hepatitis E virus infection in Haiti and I would like to draw the scientists' attention to this important topic in this time of crisis. Seyed Moayed Alavian 2010-01-01 363 Science.gov (United States) Existing data/model comparisons for the mid-Pliocene (Dowsett et al., 2013) have identified specific regions of concordance and discord between climate models and proxy data. One reason for site-specific disagreement is likely related to the time (warm peak) averaged nature of the mid-Pliocene ocean temperatures provided within existing proxy syntheses. To facilitate improved data/model comparisons in the future new proxy sea surface temperature reconstructions will focus on specific time slices within the Pliocene epoch. Haywood et al. (2013) identified an initial time slice for environmental reconstruction and climate modelling centred on an interglacial event at Marine Isotope Stage KM5c (3.205 Ma). Critically, this interval displays a very near to modern orbital configuration simplifying the interpretation of proxy data and the experimental design for climate models. Nevertheless, current limitations of chronology and correlation make it likely that new proxy records will be attributable to a time range around the time slice, and may not always represent the time slice specifically. This introduces an element of uncertainty through orbital forcing around the time slice which can be investigated and quantified within a numerical climate modelling framework. The Hadley Centre Coupled Climate Model Version 3 (HadCM3) has been used to perform a series of orbital forcing sensitivity tests around the identified time slice at MIS KM5c. Simulations every 2 kyr either side of the time slice to a range +/- 20 kyr have been completed. 
The model results indicate that, ±20 kyr either side of the time slice, orbital forcing generates a change of less than 1 °C in global mean annual temperature (MAT). One exception to this relative stability in climate is seen in the North Atlantic (a region noted for disagreement in existing Pliocene data/model comparisons). Here, ocean surface temperature variations of up to 6 °C are predicted. These model responses appear to be linked to changes in ocean circulation and the mode of deep water formation, and thus the strength of the Atlantic Meridional Overturning Circulation, over geologically short timescales. To place this predicted climate variability around MIS KM5c into context we have completed simulations 20 kyr either side of the 3.060 Ma PlioMAX peak, which is characterised by one of the lightest benthic oxygen isotope excursions evident in the entire PRISM time slab (Marine Isotope Stage K1; Raymo et al. 2004), and displays a radically different orbital forcing compared to present-day. The results show a 5 °C change in global MAT, with some terrestrial areas showing changes of 10 °C. Therefore, this larger climate variability at K1 would make imperfect correlation much more damaging to data/model comparisons than around the KM5c time slice. The results from this suite of simulations suggest that proxies producing MAT with imperfect correlation to the time slice, up to 20,000 years before or after, may still be representative of the conditions at the MIS KM5c time slice itself due to the subdued nature of orbital forcing at this time. Prescott, C.; Haywood, A.; Dolan, A. M.; Hunter, S. J.; Tindall, J.; Pope, J. O.; Pickering, S. 2013-12-01 364 Science.gov (United States) Malaria remains a significant public health issue in Haiti, with chloroquine (CQ) used almost exclusively for the treatment of uncomplicated infections.
Recently, single dose primaquine (PQ) was added to the Haitian national malaria treatment policy, despite a lack of information on the prevalence of glucose-6-phosphate dehydrogenase (G6PD) deficiency within the population. G6PD deficient individuals who take PQ are at risk of developing drug induced hemolysis (DIH). In this first study to examine G6PD deficiency rates in Haiti, 22.8% (range 14.9%-24.7%) of participants were found to be G6PD deficient (class I, II, or III) with 2.0% (16/800) of participants having severe deficiency (class I and II). Differences in deficiency were observed by gender, with males having a much higher prevalence of severe deficiency (4.3% vs. 0.4%) compared to females. Male participants were 1.6 times more likely to be classified as deficient and 10.6 times more likely to be classified as severely deficient compared to females, as expected. Finally, 10.6% (85/800) of the participants were considered to be at risk for DIH. Males also had much higher rates than females (19.3% vs. 4.6%) with 4.9 times greater likelihood (p value 0.000) of having an activity level that could lead to DIH. These findings provide useful information to policymakers and clinicians who are responsible for the implementation of PQ to control and manage malaria in Haiti. PMID:24681219 von Fricken, Michael E; Weppelmann, Thomas A; Eaton, Will T; Alam, Meer T; Carter, Tamar E; Schick, Laura; Masse, Roseline; Romain, Jean R; Okech, Bernard A 2014-07-01 365 Energy Technology Data Exchange (ETDEWEB) In order to facilitate the study of oxidative stress in lung tissue, rat lung slices with impaired antioxidant defenses were prepared and used. 
Incubation of lung slices with the antineoplastic agent 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) (100 µM) in an amino acid-rich medium for 45 min produced a near-maximal (approximately 85%), irreversible inhibition of glutathione reductase, accompanied by only a modest (approximately 15%) decrease in pulmonary nonprotein sulfhydryls (NPSH) and no alteration in intracellular ATP, NADP⁺, and NADPH levels. The amounts of NADP(H), ATP, and NPSH were stable over a 4-hr incubation period following the removal from BCNU. The viability of the system was further evaluated by measuring the rate of evolution of ¹⁴CO₂ from D-[U-¹⁴C]glucose. The rates of evolution were almost identical in the compromised system when compared with control slices over a 4-hr time period. By using slices with compromised oxidative defenses, preliminary results have been obtained with paraquat, nitrofurantoin, and 2,3-dimethoxy-1,4-naphthoquinone. Hardwick, S.J.; Adam, A.; Cohen, G.M. (Univ. of London (England)); Smith, L.L. (Imperial Chemical Industries PLC, Cheshire (England)) 1990-04-01 366 International Nuclear Information System (INIS) Aim: To develop an MR pulse sequence that allows the determination of the quantitative perfusion of the brain by imaging the passage of a contrast agent bolus with high temporal and spatial resolution. Methods: An EPI sequence, EPIDET (Echo Planar Imaging using Different Echo Times), was developed that allows the acquisition of different slices at different echo times. The passage of a contrast agent bolus was recorded in a slice through the large brain-feeding arteries at a short echo time (TE1=17 ms), while brain parenchyma was imaged in up to nine additional slices at a long echo time (TE2=34 ms).
Results: The different echo times allowed the determination of the arterial input function (signal decrease to 32%-59% of baseline intensity) and gave a sufficient signal reduction (14-22%) for reliable quantification of perfusion in brain parenchyma. Conclusion: The combination of different echo times of the DUAL-FLASH sequence and the multislice capability of EPI sequences in the EPIDET sequence enables the quantification of multi-slice perfusion examinations. Compared to the DUAL-FLASH sequence EPIDET improves spatial and temporal resolution. (orig.) 2001-01-01 367 Science.gov (United States) Magnetic resonance imaging (MRI) near metallic implants is often hampered by severe metal artifacts. To obtain distortion-free MR images near metallic implants, SEMAC (Slice Encoding for Metal Artifact Correction) corrects metal artifacts via robust encoding of excited slices against metal-induced field inhomogeneities, followed by combining the data resolved from multiple SEMAC-encoded slices. However, as many of the resolved data elements only contain noise, SEMAC-corrected images can suffer from relatively low signal-to-noise ratio. Improving the signal-to-noise ratio of SEMAC-corrected images is essential to enable SEMAC in routine clinical studies. In this work, a new reconstruction procedure is proposed to reduce noise in SEMAC-corrected images. A singular value decomposition denoising step is first applied to suppress quadrature noise in multi-coil SEMAC-encoded slices. Subsequently, the singular value decomposition-denoised data are selectively included in the correction of through-plane distortions. The experimental results demonstrate that the proposed reconstruction procedure significantly improves the SNR without compromising the correction of metal artifacts. 
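The singular value decomposition denoising step described in the SEMAC abstract above can be illustrated with a minimal sketch: keep only the largest singular values of a 2D data matrix and rebuild it. This is an assumption-laden toy (a plain truncated SVD on a synthetic rank-1 "signal"); the actual SEMAC reconstruction operates on multi-coil, slice-resolved MR data and is considerably more involved.

```python
import numpy as np

def svd_denoise(data, rank):
    # Truncated-SVD denoising: keep the `rank` largest singular
    # values of a 2D array and reconstruct the matrix.
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    s[rank:] = 0.0
    return U @ np.diag(s) @ Vt

# Rank-1 "signal" plus additive noise: truncating to rank 1 should
# leave us closer to the clean signal than the noisy input was.
rng = np.random.default_rng(0)
signal = np.outer(rng.standard_normal(8), rng.standard_normal(100))
noisy = signal + 0.1 * rng.standard_normal(signal.shape)
denoised = svd_denoise(noisy, rank=1)
```

The design intuition is the one the abstract relies on: quadrature noise spreads its energy across all singular components, while correlated signal concentrates in a few, so truncation removes mostly noise.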
PMID:21287596 Lu, Wenmiao; Pauly, Kim B; Gold, Garry E; Pauly, John M; Hargreaves, Brian A 2011-05-01 368 International Nuclear Information System (INIS) The effects of the synthetic pyrethroid insecticide fenvalerate ([R,S]-alpha-cyano-3-phenoxybenzyl[R,S]-2-(4-chlorophenyl)-3-methylbutyrate) on neurotransmitter release in rabbit brain slices were investigated. Fenvalerate evoked a calcium-dependent release of [3H]dopamine and [3H]acetylcholine from rabbit striatal slices that was concentration-dependent and specific for the toxic stereoisomer of the insecticide. The release of [3H]dopamine and [3H]acetylcholine by fenvalerate was modulated by D2 dopamine receptor activation and antagonized completely by the sodium channel blocker, tetrodotoxin. These findings are consistent with an action of fenvalerate on the voltage-dependent sodium channels of the presynaptic membrane resulting in membrane depolarization, and the release of dopamine and acetylcholine by a calcium-dependent exocytotic process. In contrast to results obtained in striatal slices, fenvalerate did not elicit the release of [3H]norepinephrine or [3H]acetylcholine from rabbit hippocampal slices, indicative of regional differences in sensitivity to type II pyrethroid actions. 1988-01-01 369 International Nuclear Information System (INIS) A study was carried out to investigate the individual and combined effect of caloric sweeteners (sucrose, glucose and fructose) and non-caloric sweeteners (saccharine, cyclamate and aspartame) along with antioxidants (citric acid and ascorbic acid) and chemical preservatives (potassium metabisulphite and potassium sorbate) on the water activity (a_w) of dehydrated guava slices. Different dilutions of caloric sweeteners (20, 30, 40 and 50 °Bx) and non-caloric sweeteners (equivalent in sweetness to sucrose) were used.
Guava slices were osmotically dehydrated in these solutions and then dehydrated initially at 0 and then at 60 °C to a final moisture content of 20-25%. Guava slices prepared with sucrose:glucose (7:3), potassium metabisulphite, ascorbic acid and citric acid produced the best quality products, which had the minimum a_w and the best overall sensory characteristics. The analysis showed that the treatments and their various concentrations had a significant effect (p=0.05) on the a_w of dehydrated guava slices. (author) 2005-01-01 370 Science.gov (United States) The catastrophic Mw=7.0 shallow earthquake of 12 January 2010 that struck Haiti has led to numerous studies focused on the geodynamics of the region. In particular, the co-seismic fault mechanism of the 2010 Haiti earthquake as well as post-seismic deformations have been investigated through SAR interferometry (InSAR) techniques, thanks to the availability of satellite SAR sensors operating in different radar bands (ENVISAT ASAR, ALOS PALSAR, TerraSAR-X, COSMO/SkyMed). Moreover, advanced multitemporal SAR interferometry (MTI) based on COSMO/SkyMed (CSK) data is well suited for the detection and monitoring of post-seismic ground or structural instabilities. Indeed, with its short revisit time (up to 4 days), CSK allows building interferometric stacks much faster than previous satellite missions, like ERS/ENVISAT. Here we report the first outcomes of the MTI investigation based on high resolution (3 m) CSK data, conducted in the framework of a scientific collaboration between the Centre National de l'Information Géo-Spatiale (CNIGS) of Haiti and the Department of Physics (DIF) of the University of Bari, Italy. We rely on a stack of 89 CSK scenes (image mode: HIMAGE; polarization: HH; look side: right; pass direction: ascending; beam: H4-0A) acquired by the Italian Space Agency (ASI) over the Port-au-Prince (PaP) metropolitan and surrounding areas that were severely hit by the 2010 earthquake.
CSK acquisitions span the period from June 2011 to February 2013, which is sufficient for detecting and monitoring significant ground instabilities. The MTI results were obtained through the application of the SPINUA processing chain, a Persistent Scatterers Interferometry (PSI)-like technique. In particular, we detected significant subsidence phenomena affecting river deltas and coastal areas of the PaP and Carrefour region. The maximum subsidence rates exceed a few cm/yr, which implies an increasing flooding (or tsunami) hazard. Furthermore, the maximum subsidence rates were encountered in areas with high population density, which translates into high potential risk. The MTI results also revealed the presence of very slow slope movements and local ground/structure instabilities. Some of these may have been initially triggered by the 2010 event. Elsewhere the MTI-detected displacements can be related to the presence of poorly constructed buildings. This case study demonstrates that MTI represents a very good option for the assessment of ground/structure instability in regions that lack in situ monitoring data. In view of this, the results of this study will be transferred to the Civil Protection of Haiti. Nutricato, R.; Wasowski, J.; Chiaradia, M.; Piard, B. E.; Gna, S. 2013-12-01 371 Science.gov (United States) Tectonic tremors have been observed along major plate-boundary faults around the world. In most of these regions, tremors occur spontaneously (i.e. ambient) or as a result of small stress perturbations from passing surface waves (i.e. triggered). Because tremors are located below the seismogenic zone, a detailed study of their behavior could help to better understand how tectonic movement is accommodated in the deep root of major faults, and the relationship with large earthquakes. Here, we present evidence of triggered tremor in southern Haiti around the aftershock zone of the 2010/01/12 Mw7.0 Haiti earthquake.
Following the January mainshock, several groups installed land and ocean-bottom seismometers to record aftershock activity (e.g., de Lépinay et al., 2011). In the following month, the 2010/02/27 Mw8.8 Maule, Chile earthquake occurred and was recorded in the southern Haiti region by these seismic stations. We apply a 5-15 Hz band-pass filter to all seismograms to identify local high-frequency signals during the Chile teleseismic waves. Tremor is identified as non-impulsive bursts with 10-20 s durations that are coherent among different stations and are modulated by surface waves. We also convert the seismic data into audible sounds and use them to distinguish between local aftershocks and deep tremor. We locate the source of the tremor bursts using an envelope cross-correlation method based on travel time differences. Because tremor depth is not well constrained with this method, we set it to 20 km, close to the recent estimate of Moho depth in this region (McNamara et al., 2012). Most tremors are located south of the surface expression of the Enriquillo-Plantain Garden Fault (EPGF), a high-angle southward-dipping left-lateral strike-slip fault that marks the boundary between the Gonave microplate and the Caribbean plate, although the location errors are large. Tremor peaks are mostly modulated by Love wave velocity, which is consistent with left-lateral shear motion induced by the normal incidence of Love waves on a near-vertical strike-slip fault. Our ongoing efforts include comparing tremor and aftershock locations with the same envelope techniques, and identifying tremor at other times. If the tremor locations are reliable, the results pose interesting questions about stress changes following the Haiti mainshock that led to triggered seismicity on the shallow south-dipping Trois Baies fault (de Lépinay et al., 2011; Douilly et al., 2013), and triggered tremor on the EPGF, where no aftershocks were recorded.
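The envelope cross-correlation step described in this abstract can be sketched in a few lines: smooth the rectified seismograms into envelopes, then take the lag that maximizes their cross-correlation as the travel-time difference between stations. This is a simplified illustration with synthetic data, not the authors' processing code; window lengths and the Gaussian "burst" are arbitrary choices for the demo.

```python
import numpy as np

def envelope(x, win_len=20):
    # Smoothed amplitude envelope: rectify, then moving average.
    win = np.ones(win_len) / win_len
    return np.convolve(np.abs(x), win, mode="same")

def envelope_lag(env_a, env_b):
    # Sample lag maximizing the cross-correlation of two envelopes;
    # positive means env_b arrives later than env_a.
    n = len(env_a)
    corr = np.correlate(env_b - env_b.mean(), env_a - env_a.mean(), mode="full")
    return int(np.argmax(corr)) - (n - 1)

# Synthetic tremor burst seen 15 samples later at the second station.
t = np.arange(400)
burst = np.exp(-0.5 * ((t - 150) / 25.0) ** 2)
rng = np.random.default_rng(1)
s1 = burst + 0.02 * rng.standard_normal(400)
s2 = np.roll(burst, 15) + 0.02 * rng.standard_normal(400)
lag = envelope_lag(envelope(s1), envelope(s2))
```

In practice such lags, measured over many station pairs, are inverted for a source location using a velocity model; with an assumed depth (20 km in the abstract) the horizontal position is what remains to be solved.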
Aiken, C.; Peng, Z.; Douilly, R.; Calais, E.; Deschamps, A.; Haase, J. S. 2013-05-01 372 Directory of Open Access Journals (Sweden) Full Text Available The article analyzes the bilateral and multilateral international cooperation received by Haiti since the 1990s, and how this defines the (im)possibilities of sustainable development, considering the upsurge of South-South cooperation since 2004. Given that conditions show that North-South cooperation hasn't been able to achieve its goals, an analysis of the available funds and their allocation is made, based on the author's elaboration of data on the sectorial allocation of funds for the 1990-2004 period. The conclusions consider the lessons learned from the studied period, which gain new meaning in the context of the post-earthquake reconstruction. HERBST, Natalia 2013-06-01 373 Digital Repository Infrastructure Vision for European Research (DRIVER) The presence of heavy metals in the environment constitutes a potential source of both soil and groundwater pollution. This study has focused on the reactivity of lead (Pb), copper (Cu) and cadmium (Cd) during their transfer in a calcareous soil of Port-au-Prince (Haiti). Kinetic, monometal and competitive batch tests were carried out at pH 6.0. Two simplified models including pseudo-first-order and pseudo-second-order were used to fit the experimental data from kinetics adsorption batch test... 2013-01-01 374 Scientific Electronic Library Online (English) Full Text Available SciELO Brazil | Language: Portuguese Abstract in portuguese O presente artigo examina as motivações que o Brasil tem para contribuir para as missões de manutenção da paz (peacekeeping) das Nações Unidas no Timor Leste e no Haiti. O Brasil procura prestígio e aspira ganhar influência pelos métodos pragmáticos da sua política exterior. Na opinião do autor, as
contribuições são vantajosas para o país porque o custo é baixo e porque permitem treinamento militar, visibilidade global e uma extensão política e econômica da influência brasileira. Abstract in english The following article examines Brazil's motivations for contributing to peacekeeping missions. The work focuses on its participation in East Timor and its leadership of the UN Stabilization Mission in Haiti. Brazil seeks prestige and hopes to gain influence through the pragmatic mechanisms of its foreign policy. The author believes the contributions are advantageous for the country, given the low cost of the missions, along with the receipt of military training, global visibility and an extension of Brazil's political and economic influence. Djuan, Bracey. 375 Scientific Electronic Library Online (English) Full Text Available SciELO Public Health | Language: English Abstract in spanish OBJETIVO: Determinar la prevalencia de tuberculosis (TB) multirresistente en pacientes con TB pulmonar nueva con baciloscopia positiva en Puerto Príncipe, Haití. MÉTODOS: Se cultivaron muestras de esputo de 1 006 pacientes con diagnóstico reciente de tuberculosis efectuado durante el 2008. Se secuenció la región nuclear del gen rpoB, que se asocia con la resistencia a la rifampicina. Todos los aislados con mutaciones de rpoB se enviaron al laboratorio de referencia del estado de Nueva York para llevar a cabo un antibiograma convencional. Todos los aislados se estudiaron también con el ensayo de sonda lineal GenoType MTBDRplus. RESULTADOS: Se aisló Mycobacterium tuberculosis de 906 pacientes. Veintiséis (2,9%) de los aislados presentaban mutaciones de sentido erróneo o deleciones en rpoB y fueron resistentes a la rifampicina en el antibiograma. Los 26 aislados fueron resistentes también a la isoniacida y se clasificaron como TB multirresistente. Cuarenta y seis aislados de control sin mutaciones de rpoB resultaron sensibles a la rifampicina en el antibiograma.
El ensayo de sonda lineal GenoType MTBDRplus identificó correctamente a las 26 cepas de TB multirresistente y clasificó de manera errónea un aislado sensible a múltiples fármacos como resistente a la rifampicina. CONCLUSIONES: Este estudio revela una prevalencia de TB multirresistente de 2,9% en los pacientes con TB recién diagnosticada en Haití e indica que los ensayos de secuenciación e hibridación de rpoB son estudios de detección sistemática adecuados para la detección temprana de la TB multirresistente. Abstract in english OBJECTIVE: To determine the prevalence of multidrug-resistant tuberculosis (MDR-TB) among patients with new smear-positive pulmonary TB in Port-au-Prince, Haiti. METHODS: Sputum samples were cultured from 1 006 patients newly diagnosed with TB in 2008. The core region of the rpoB gene that is associated with resistance to rifampin was sequenced. All isolates with rpoB mutations were sent to the New York State reference laboratory for conventional drug susceptibility testing (DST). All isolates were also tested with the GenoType MTBDRplus line-probe assay. RESULTS: Mycobacterium tuberculosis was isolated from 906 patients. Twenty-six (2.9%) of the isolates had missense mutations or deletions in rpoB and were resistant to rifampin by DST. All 26 were also resistant to isoniazid and classified as MDR-TB. Forty-six control isolates without rpoB mutations were found to be rifampin sensitive by DST. The GenoType MTBDRplus line-probe assay correctly identified 26 MDR-TB strains. It misclassified one pansusceptible isolate as rifampin resistant. CONCLUSIONS: This study shows an MDR-TB prevalence of 2.9% in newly diagnosed TB patients in Haiti and suggests that rpoB sequencing and hybridization assays are good screening tools for early detection of MDR-TB. Oksana, Ocheretina; Willy, Morose; Marie, Gauthier; Patrice, Joseph; Richard, D' Meza; Vincent E., Escuyer; Nalin, Rastogi; Guy, Vernet; Jean W., Pape; Daniel W., Fitzgerald.
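The headline numbers in the MDR-TB abstract above follow from simple arithmetic on the reported counts (a sketch; the counts are taken directly from the abstract, the variable names are mine):

```python
# Counts as reported in the abstract (Ocheretina et al.).
isolates = 906      # M. tuberculosis isolates cultured
mdr = 26            # isolates resistant to both rifampin and isoniazid
detected = 26       # MDR strains correctly flagged by the line-probe assay

prevalence_pct = 100.0 * mdr / isolates    # ~2.9% MDR-TB prevalence
sensitivity_pct = 100.0 * detected / mdr   # 100% assay sensitivity on MDR strains
```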
376 Directory of Open Access Journals (Sweden) Full Text Available OBJECTIVE: To determine the prevalence of multidrug-resistant tuberculosis (MDR-TB) among patients with new smear-positive pulmonary TB in Port-au-Prince, Haiti. METHODS: Sputum samples were cultured from 1 006 patients newly diagnosed with TB in 2008. The core region of the rpoB gene that is associated with resistance to rifampin was sequenced. All isolates with rpoB mutations were sent to the New York State reference laboratory for conventional drug susceptibility testing (DST). All isolates were also tested with the GenoType MTBDRplus line-probe assay. RESULTS: Mycobacterium tuberculosis was isolated from 906 patients. Twenty-six (2.9%) of the isolates had missense mutations or deletions in rpoB and were resistant to rifampin by DST. All 26 were also resistant to isoniazid and classified as MDR-TB. Forty-six control isolates without rpoB mutations were found to be rifampin sensitive by DST. The GenoType MTBDRplus line-probe assay correctly identified 26 MDR-TB strains. It misclassified one pansusceptible isolate as rifampin resistant. CONCLUSIONS: This study shows an MDR-TB prevalence of 2.9% in newly diagnosed TB patients in Haiti and suggests that rpoB sequencing and hybridization assays are good screening tools for early detection of MDR-TB. OBJETIVO: Determinar la prevalencia de tuberculosis (TB) multirresistente en pacientes con TB pulmonar nueva con baciloscopia positiva en Puerto Príncipe, Haití. MÉTODOS: Se cultivaron muestras de esputo de 1 006 pacientes con diagnóstico reciente de tuberculosis efectuado durante el 2008. Se secuenció la región nuclear del gen rpoB, que se asocia con la resistencia a la rifampicina. Todos los aislados con mutaciones de rpoB se enviaron al laboratorio de referencia del estado de Nueva York para llevar a cabo un antibiograma convencional. Todos los aislados se estudiaron también con el ensayo de sonda lineal GenoType MTBDRplus.
RESULTADOS: Se aisló Mycobacterium tuberculosis de 906 pacientes. Veintiséis (2,9%) de los aislados presentaban mutaciones de sentido erróneo o deleciones en rpoB y fueron resistentes a la rifampicina en el antibiograma. Los 26 aislados fueron resistentes también a la isoniacida y se clasificaron como TB multirresistente. Cuarenta y seis aislados de control sin mutaciones de rpoB resultaron sensibles a la rifampicina en el antibiograma. El ensayo de sonda lineal GenoType MTBDRplus identificó correctamente a las 26 cepas de TB multirresistente y clasificó de manera errónea un aislado sensible a múltiples fármacos como resistente a la rifampicina. CONCLUSIONES: Este estudio revela una prevalencia de TB multirresistente de 2,9% en los pacientes con TB recién diagnosticada en Haití e indica que los ensayos de secuenciación e hibridación de rpoB son estudios de detección sistemática adecuados para la detección temprana de la TB multirresistente. Oksana Ocheretina 2012-03-01 377 Science.gov (United States) The Mw 7.0 January 12, 2010 Haiti earthquake ended 240 years of relative quiescence following earthquakes that destroyed Port-au-Prince in 1751 and 1770. We place the 2010 rupture in the context of past earthquakes and future hazards by using remote analysis of airborne LiDAR to observe the topographic expression of active faulting and develop a new conceptual model for the earthquake behavior of the eastern Enriquillo fault zone (EFZ). In this model, the 2010 event occupies a long-lived segment boundary at a stepover within the EFZ separating fault segments that likely ruptured in 1751 and 1770, explaining both past clustering and the lack of 2010 surface rupture. Immediately following the 2010 earthquake, an airborne LiDAR point cloud containing over 2.7 billion point measurements of surface features was collected by the Rochester Inst. of Technology.
To analyze these data, we capitalize on the human capacity to visually identify meaningful patterns embedded in noisy data by conducting interactive visual analysis of the entire 66.8 GB Haiti terrain data in a 4-sided, 800 ft³ immersive virtual-reality environment at the UC Davis KeckCAVES using the software tools LiDAR Viewer (to analyze point cloud data) and Crusta (for 3D surficial geologic mapping on DEM data). We discovered and measured landforms displaced by past surface-rupturing earthquakes and remotely characterized the regional fault geometry. Our analysis of the ~50 km long reach of the EFZ spanning the 2010 epicenter indicates that geomorphic evidence of active faulting is clearer east of the epicenter than to the west. West of the epicenter, and in the region of the 2010 rupture, the fault is poorly defined along an embayed, low-relief range front, with little evidence of recent surface rupture. In contrast, landform offsets of 6 to 50 m along the reach of the EFZ east of the epicenter and closest to Port-au-Prince attest to repeated recent surface-rupturing earthquakes here. Specifically, we found and documented offset landforms including fluvial terrace risers near Dumay (6.3 +0.9/-1.3 m) and Chauffard/Jameau (32.2 +1.8/-3.1 m), a channel (52 +18/-13 m) ~500 m east of the Chauffard/Jameau site, and an alluvial fan near Fayette (8.6 +2.8/-2.5 m). Based on the fault-trace morphology and the distribution of sites where we see 6-8 m offsets, we estimate the probable along-strike extent of past surface rupture was 40 to 60 km along this fault reach. Application of moment-rupture area relationships to these observations suggests that an earthquake similar to, or larger than, the Mw 7.0 2010 event is possible along the Enriquillo fault near Port-au-Prince.
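The moment-rupture area scaling invoked above can be illustrated with the widely used Wells and Coppersmith (1994) all-slip-type regression, Mw = 4.07 + 0.98·log10(A), where A is the rupture area in km². The abstract gives only the 40-60 km rupture length; the ~15 km seismogenic width below is an illustrative assumption, not a value from the abstract.

```python
import math

def mw_from_area(area_km2):
    # Wells & Coppersmith (1994) all-slip-type regression:
    #   Mw = 4.07 + 0.98 * log10(rupture area in km^2)
    return 4.07 + 0.98 * math.log10(area_km2)

# 40-60 km of rupture length (from the offset landforms) times an
# assumed ~15 km seismogenic width (hypothetical, for illustration):
mw_low = mw_from_area(40 * 15)    # ~Mw 6.8
mw_high = mw_from_area(60 * 15)   # ~Mw 7.0
```

Under these assumptions the estimate brackets the 2010 Mw 7.0 event, consistent with the abstract's conclusion that a similar or larger earthquake is possible on this fault reach.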
We deduce that the 2010 earthquake was a relatively small event on a boundary between fault segments that ruptured in 1751 and 1770, based on new analysis of historical damage reports and the gap of well-defined fault-zone morphology where the 2010 earthquake occurred. Cowgill, E.; Bernardin, T. S.; Oskin, M. E.; Bowles, C. J.; Yikilmaz, M. B.; Kreylos, O.; Elliott, A. J.; Bishop, M. S.; Gold, R. D.; Morelan, A.; Bawden, G. W.; Hamann, B.; Kellogg, L. H. 2010-12-01 378 Science.gov (United States) A general consensus has emerged from the study of the 12 January 2010, Mw 7.0 Haiti earthquake: the coseismic rupture was complex, portraying both reverse and strike-slip motion, but lacking unambiguous surface break. Based on seismological, geodetic and geologic data, numerous slip models have been proposed for that event. However, because they relied on an incomplete fault map, these models were preliminary, proposing rupture on unmapped buried faults. Here, using bathymetric data offshore Port-au-Prince along with a digital elevation model derived from LiDAR on-land, we identified the south-dipping Lamentin thrust in the Bay of Port-au-Prince. The fault continues onshore, where it deforms active alluvial fans in the city of Carrefour. The geometry and distribution of the aftershocks of the 2010 earthquake and the analysis of the regional geology allow us to place constraints on the connection at depth between the Lamentin thrust and the sinistral strike-slip Enriquillo-Plantain Garden Fault (EPGF). Inversion of geodetic data suggests that both faults may have broken in 2010, consistently with the regional geodynamical setting. The rupture initiated along the Lamentin thrust and further propagated along the EPGF due to instantaneous unclamping at depth. Corals uplifted around the Léogâne Delta Fan contribute to the build-up of long-term topography between the Lamentin thrust and the EPGF. 
The 2010 earthquake increased the stress toward failure on unruptured EPGF segments as well as on the thrust fault sitting in the middle of the city of Carrefour, in the direct vicinity of Port-au-Prince, thereby increasing the seismic hazard in these areas. Saint Fleur, Newdeskarl; Feuillet, Nathalie; Grandin, Raphaël; Jacques, Éric; Weil-Accardo, Jennifer; Klinger, Yann 2014-05-01 379 Directory of Open Access Journals (Sweden) Full Text Available John C Jackson, Anthony L Farone, Mary B Farone Biology Department, Middle Tennessee State University, Murfreesboro, Tennessee, USA Purpose: Diarrheal disease is one of the leading causes of morbidity in developing countries. To further understand the epidemiology of diarrheal disease among a rural population surrounding Robillard, Haiti, fecal swabs from patients with diarrhea were screened for the presence of enteropathogenic bacteria. Patients and methods: Fecal swabs were collected from 34 patients with signs and symptoms of diarrhea and stored in BBL™ Cary-Blair transport medium (Becton, Dickinson and Company, Sparks, MD) until transit to the USA. Swab material was inoculated onto different enrichment and selective agars for incubation. Fermenting and nonfermenting bacteria that grew on the enteric selection media were identified by the BBL™ Crystal™ Enteric/Nonfermenting Identification System (Becton, Dickinson and Company). Organisms identified as Escherichia coli were further screened for the presence of virulence factors by polymerase chain reaction (PCR). Results: Of 34 patients, no Campylobacter, Shigella, Salmonella, or Vibrio spp. were isolated from swabs transported to the USA for culture. Of 73 E. coli isolates cultured from the swabs, one enteropathogenic strain of E. coli was identified by multiplex PCR. Escherichia fergusonii and Cronobacter sakazakii, both potential gastrointestinal pathogens, were also isolated from patient stools. 
Conclusion: This study was undertaken to determine whether bacterial enteropathogens could be detected in the stools of patients suffering from diarrhea or dysentery and whether, in the absence of sufficient local facilities, rectal swabs could be transported to the USA for culture. Although several genera of overt enteropathogens were not detected, one enteropathogenic E. coli and other pathogenic Enterobacteriaceae were successfully cultured and identified. Keywords: Escherichia, Cronobacter, diarrheagenic, stool Jackson JC 2011-09-01 380 Science.gov (United States) This document describes primary, secondary and teacher training curricular policy relating to education for citizenship in Cuba, Haiti and the Dominican Republic in order to make practical recommendations for improved design, quality and implementation of these initiatives in the three countries selected. The first chapter describes the Caribbean Acosta, Cheila Valera 2005-01-01 381 Directory of Open Access Journals (Sweden) Full Text Available On evil debts and global responsibility: the example of Haiti shows how the system of foreign debt leaves people defenseless against catastrophe. This commentary was originally published in the Medico Rundschreiben January 2010 and is reprinted here in the original German with the kind permission of the author. 
Thomas Gebauer 2010-12-01 382 Science.gov (United States) Relying on a critical pedagogy framework and youth participatory action research (YPAR) and visual sociology methods, the authors of this article--teachers, teacher educators, and community activists--have worked with photo elicitation methods and young adults in the USA and Haiti to document youths' impressions of the purposes of, supports Zenkov, Kristien; Ewaida, Marriam; Lynch, Megan R.; Bell, Athene; Harmon, James; Pellegrino, Anthony; Sell, Corey 2014-01-01 383 Science.gov (United States) Real-time quaking-induced conversion (RT-QuIC) is an assay in which disease-associated prion protein (PrP) initiates a rapid conformational transition in recombinant PrP (recPrP), resulting in the formation of amyloid that can be monitored in real time using the dye thioflavin T. It therefore has potential advantages over analogous cell-free PrP conversion assays such as protein misfolding cyclic amplification (PMCA). The QuIC assay and the related amyloid seeding assay have been developed largely using rodent-passaged sheep scrapie strains. Given the potential RT-QuIC has for Creutzfeldt-Jakob disease (CJD) research and human prion test development, this study characterized the behaviour of a range of CJD brain specimens with hamster and human recPrP in the RT-QuIC assay. The results showed that RT-QuIC is a rapid, sensitive and specific test for the form of abnormal PrP found in the most commonly occurring forms of sporadic CJD. The assay appeared to be largely independent of species-related sequence differences between human and hamster recPrP and of the methionine/valine polymorphism at codon 129 of the human PrP gene. However, with the same conditions and substrate, the assay was less efficient in detecting the abnormal PrP that characterizes variant CJD brain. Comparison of these QuIC results with those previously obtained using PMCA suggested that these two seemingly similar assays differ in important respects. 
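RT-QuIC reactions of the kind described above are commonly scored by the lag time at which the thioflavin T (ThT) fluorescence trace first crosses a positivity threshold. A minimal sketch of such an analysis follows; the threshold value, sampling interval, and sigmoidal test curve are hypothetical illustrations, not data or methods from this study:

```python
import numpy as np

def lag_time(times_h, tht_fluorescence, threshold):
    """Return the first time point (hours) at which ThT fluorescence
    reaches the positivity threshold, or None if it never does."""
    above = np.asarray(tht_fluorescence) >= threshold
    if not above.any():
        return None
    return times_h[np.argmax(above)]  # argmax gives the first True index

# Hypothetical plate-reader readings every 15 min over 60 h for one well
times = np.arange(0, 60, 0.25)
signal = 1000 / (1 + np.exp(-(times - 30)))  # invented sigmoidal amplification curve
print(lag_time(times, signal, threshold=500.0))
```

A seeded well shows a finite lag time, while an unseeded (flat) trace returns None, which is how positive and negative wells would be distinguished in this toy analysis.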
PMID:22031526 Peden, Alexander H; McGuire, Lynne I; Appleford, Nigel E J; Mallinson, Gary; Wilham, Jason M; Orr, Christina D; Caughey, Byron; Ironside, James W; Knight, Richard S; Will, Robert G; Green, Alison J E; Head, Mark W 2012-02-01 384 Science.gov (United States) Messenger RNA translation is regulated by RNA-binding proteins and small non-coding RNAs called microRNAs. Even though we know the majority of RNA-binding proteins and microRNAs that regulate messenger RNA expression, evidence of interactions between the two remains elusive. The role of the RNA-binding protein GLD-1 as a translational repressor is well studied during Caenorhabditis elegans germline development and maintenance. Possible functions of GLD-1 during somatic development and the mechanism of how GLD-1 acts as a translational repressor are not known. Its human homologue, quaking (QKI), is essential for embryonic development. Here, we report that the RNA-binding protein GLD-1 in C. elegans affects multiple microRNA pathways and interacts with proteins required for microRNA function. Using genome-wide RNAi screening, we found that nhl-2 and vig-1, two known modulators of miRNA function, genetically interact with GLD-1. gld-1 mutations enhance multiple phenotypes conferred by mir-35 and let-7 family mutants during somatic development. We used stable isotope labelling with amino acids in cell culture to globally analyse the changes in the proteome conferred by let-7 and gld-1 during animal development. We identified the histone mRNA-binding protein CDL-1 to be, in part, responsible for the phenotypes observed in let-7 and gld-1 mutants. The link between GLD-1 and miRNA-mediated gene regulation is further supported by its biochemical interaction with ALG-1, CGH-1 and PAB-1, proteins implicated in miRNA regulation. Overall, we have uncovered genetic and biochemical interactions between GLD-1 and miRNA pathways. 
Akay, Alper; Craig, Ashley; Lehrbach, Nicolas; Larance, Mark; Pourkarimi, Ehsan; Wright, Jane E.; Lamond, Angus; Miska, Eric; Gartner, Anton 2013-01-01 385 Keywords: apple slices; osmo-dehydration; freeze drying; carboxyl methyl cellulose coating; drying kinetics ...and 2% w/v) coating on freeze drying of apple slices was studied. In total, nine ...the physical and chemical properties of freeze-dried apple slices. It was observed that increase in ... 386 Science.gov (United States) Acid-insoluble mineral residua of tektite-bearing Cretaceous-Tertiary boundary sediments in the Beloc Formation of Haiti contain abundant shocked quartz and lesser amounts of shocked plagioclase. The shocked quartz grains typically have 2 or 3 sets of planar deformation features, although grains with up to 15 sets were observed. The proportion of shocked quartz in the boundary sediments increases with stratigraphic height; at least 70 ± 11% of the quartz grains are shocked in the uppermost stratigraphic interval. The proportion of shocked quartz throughout the boundary sediments indicates that these grains were excavated primarily from crystalline silicate units, which may have been covered with a small amount of porous quartz-bearing sediments. Polyhedral and moderately sutured margins in shocked polycrystalline quartz grains, the size of the crystal units in these grains, and the presence of shocked plagioclase indicate these ejecta components were excavated from a target with continental affinities, containing quartzites or metaquartzites and a sialic metamorphic and/or igneous component. Other evidence suggests the target may also have contained a significant amount of calcium carbonate and/or sulfate. The large size and amount of shocked quartz grains deposited in Haiti indicate the crater from which they were excavated was produced in the proto-Caribbean region. Kring, David A.; Hildebrand, Alan R.; Boynton, William V. 
1994-01-01 387 Science.gov (United States) Recent investigations from combined seismological and space geodetic constraints suggest that the mainshock source faults of the 12 January 2010 Haiti earthquake might be complex and consist of both strike-slip and thrust faults. We calculate Coulomb stress changes on adjacent strike-slip and thrust faults caused by the 2010 M=7.0 rupture by considering a range of mainshock and receiver fault models. We find that for all of the mainshock source models examined, including Hayes et al. (submitted to Nature Geoscience), the Coulomb stress is calculated to have increased on sections of the Enriquillo Fault to both the east and west of the January ruptures. We assume the Enriquillo is dominantly strike-slip. While the magnitude of the calculated stress increase depends on the complexity of the proposed mainshock models, the Enriquillo Fault segment immediately south of Port-au-Prince is calculated to be within a zone of stress increase regardless of whether the Enriquillo Fault is assumed to be south-dipping or vertical. We further calculate that 60-70% of the nodal planes of the aftershocks determined by Nettles & Hjörleifsdóttir (GJI, 2010) were brought closer to failure by the mainshock. Relocating these aftershock locations north by 10 km would bring an additional 10% of the aftershock nodal planes into zones of Coulomb stress increase. Overall the 2010 Haiti earthquake illustrates the complex stress interaction between strike-slip and thrust motion on various segments of a larger compressional fault system. Lin, J.; Stein, R. S.; Sevilgen, V.; Toda, S. 2010-12-01 388 Directory of Open Access Journals (Sweden) Full Text Available Haiti has a serious food-security problem: 48% of the population lacks sufficient food, and food prices doubled between 1980 and 1990 and increased a further five-fold between 1991 and 2000. Water availability and quality are further problems to be added to food insufficiency. 
Food deficiency is mitigated by natural food resources in rural areas, where many different species are cultivated together, but it can be extreme in the towns. Agricultural systems are not efficient and, at the same time, enhance soil and genetic erosion. A development project has been implemented to increase long-term food security in the rural area of Carrefour; it comprises research aimed at increasing national food production by introducing complex agro-forestry systems. The project has investigated problems and solutions, and actions have been started to increase food production, including agronomic training of local farmers, organization of small farmers including legal protection of land tenure, and introduction of low-input modern agroforestry systems that can diversify food production throughout the year and reduce soil and genetic erosion. Following these results, an intervention project was approved and funded by the EU; although delayed by the recent civil war, it is now giving positive results. The same approach used for this project can be extended to the rest of the Republic of Haiti and, hopefully, to other world regions that have similar problems. Andrea Pardini 2011-11-01 389 Energy Technology Data Exchange (ETDEWEB) In the framework of probabilistic safety analyses for nuclear power plants, studies and evaluations of earthquake events have to be performed. The methodology aims to quantify the actual safety margins of the existing structures and their scattering. These data are essentially based on empirical values and results from US power plants. The authors discuss the accuracy and applicability of the simplified methodologies. It turns out that the simplified models can only roughly describe the complex non-linear behavior of buildings. Additional system-engineering-based effects on the safety reserves cannot be taken into account by the simplified modeling. Sadegh-Azar, H. 
[HOCHTIEF Consult IKS Energy (Germany)] 2009-07-01 390 Digital Repository Infrastructure Vision for European Research (DRIVER) Organotypic hippocampal slice culture is an in vitro method to examine mechanisms of neuronal injury in which the basic architecture and composition of the hippocampus are relatively preserved. The organotypic culture system allows for the examination of neuronal, astrocytic and microglial effects, but as an ex vivo preparation, does not address effects of blood flow, or recruitment of peripheral inflammatory cells. To that end, this culture method is frequently used to examine excitotoxic a... Wang, Qian; Andreasson, Katrin 2010-01-01 391 Digital Repository Infrastructure Vision for European Research (DRIVER) Organotypic brain slice cultures are used for a variety of molecular, electrophysiological, and imaging studies. However, the existing culture methods are difficult or expensive to apply in studies requiring long-term recordings with multielectrode arrays (MEAs). In this work, a novel method to maintain organotypic cultures of rodent hippocampus for several weeks on standard MEAs in an unmodified tissue culture incubator is described. Polydimethylsiloxane (Sylgard) mini-wells were used to sta... Berdichevsky, Yevgeny; Sabolek, Helen; Levine, John B.; Staley, Kevin J.; Yarmush, Martin L. 2009-01-01 392 Digital Repository Infrastructure Vision for European Research (DRIVER) Chronic airway diseases, such as bronchial asthma and chronic obstructive pulmonary disease (COPD), are the fourth-leading cause of death in developed countries and have a high personal, societal, and economical impact. Recurring inflammation is one characteristic of those diseases, leading to airway remodelling and modulating the function of nerves, resulting in excessive bronchoconstriction. The precision-cut lung slice (PCLS) technique is widely used in pulmonary pharmacology, but neurally-i... 
Schlepütz, Marco 2011-01-01 393 Energy Technology Data Exchange (ETDEWEB) Proposing different methods to obtain crystallographic information about biological materials is important, since the powder method is a nondestructive method. Slices are an approximation of what would be an in vivo analysis. Effects of sample preparation cause differences in scattering profiles compared with the powder method. The main inorganic component of bones and teeth is a calcium phosphate mineral whose structure closely resembles hydroxyapatite (HAp). The hexagonal symmetry, however, seems to work well with the powder diffraction data, and the crystal structure of HAp is usually described in space group P63/m. Ten third molar teeth were analyzed. Five teeth were separated into enamel, dentin and circumpulpal dentin powder and five into slices. All the scattering profile measurements were carried out at the X-ray diffraction beamline (XRD1) at the National Synchrotron Light Laboratory - LNLS, Campinas, Brazil. The LNLS synchrotron light source is composed of a 1.37 GeV electron storage ring, delivering approximately 4x10{sup 10} photons/s at 8 keV. A double-crystal Si(111) pre-monochromator, upstream of the beamline, was used to select a small energy bandwidth at 11 keV. Scattering signatures were obtained at intervals of 0.04 deg for angles from 24 deg to 52 deg. The human enamel experimental crystallite sizes obtained in this work were 30(3) nm (112 reflection) and 30(3) nm (300 reflection). These values were obtained from measurements of powdered enamel. When comparing the enamel diffraction patterns obtained from slices, 58(8) nm (112 reflection) and 37(7) nm (300 reflection), with those generated by the powder specimens, a few differences emerge. This work shows differences between the powder and slice methods, separating characteristics of the sample from the influence of the method. (author) Colaco, Marcos V.; Barroso, Regina C. [Universidade do Estado do Rio de Janeiro (IF/UERJ), RJ (Brazil). Inst. de Fisica. Dept. 
de Fisica Aplicada; Porto, Isabel M. [Universidade Estadual de Campinas (FOP/UNICAMP), Piracicaba, SP (Brazil). Fac. de Odontologia. Dept. de Morfologia; Gerlach, Raquel F. [Universidade de Sao Paulo (FORP/USP), Ribeirao Preto, SP (Brazil). Fac. de Odontologia. Dept. de Morfologia, Estomatologia e Fisiologia; Costa, Fanny N. [Coordenacao dos Programas de Pos-Graduacao de Engenharia (LIN/COPPE/UFRJ), RJ (Brazil). Lab. de Instrumentacao Nuclear 2011-07-01 394 International Nuclear Information System (INIS) For binary black holes the lapse function corresponding to the Brill-Lindquist initial value solution for uncharged black holes is given in analytic form under the maximal slicing condition. In the limiting case of a very small ratio of mass to separation between the black holes, the surface defined by the zero value of the lapse function coincides with the minimal surfaces around the singularities. 2002-06-15 395 Science.gov (United States) For binary black holes the lapse function corresponding to the Brill-Lindquist initial value solution for uncharged black holes is given in analytic form under the maximal slicing condition. In the limiting case of a very small ratio of mass to separation between the black holes, the surface defined by the zero value of the lapse function coincides with the minimal surfaces around the singularities. Jaranowski, Piotr; Schäfer, Gerhard 2002-06-01 396 Digital Repository Infrastructure Vision for European Research (DRIVER) Increases in suspended biomass and variation in the concentrations of reducing sugars, salt, and lactic acid in brine containing sliced carrots were followed for a period of several days. A tentative unstructured, unsegregated model for the metabolism of suspended Lactobacillus plantarum coupled with Fick's second law of diffusion for the transport of solutes within the carrot material was postulated. This general model was fitted by non-linear multiresponse regression analysis to an extensi... Nabais, R. 
M.; Malcata, F. X. 1997-01-01 397 Digital Repository Infrastructure Vision for European Research (DRIVER) Recently, there have been efforts to solve Einstein's equation in the context of a conformal compactification of space-time. Of particular importance in this regard are the so-called CMC-foliations, characterized by spatial hyperboloidal hypersurfaces with a constant extrinsic mean curvature K. However, although of interest for general space-times, CMC-slices are known explicitly only for the spherically symmetric Schwarzschild metric. This work is devoted to numerically det... Schinkel, David; Macedo, Rodrigo Panosso; Ansorg, Marcus 2013-01-01 398 Digital Repository Infrastructure Vision for European Research (DRIVER) We construct initial data corresponding to a single perturbed Kerr black hole in vacuum. These data are defined on specific hyperboloidal ("ACMC-") slices on which the mean extrinsic curvature K asymptotically approaches a constant at future null infinity scri+. More precisely, we require that K obeys the Taylor expansion K = K0 + O(s^4), where K0 is a constant and s describes a compactified spatial coordinate such that scri+ is represented by s=0. We excise the singular interior ... Schinkel, David; Ansorg, Marcus; Macedo, Rodrigo Panosso 2013-01-01 399 Directory of Open Access Journals (Sweden) Full Text Available Integrating formal verification techniques into the hardware design process provides the means to rigorously prove critical properties. However, most automatic verification techniques, such as model checking, are only effectively applicable to designs of limited size due to the state explosion problem. The Multiway Decision Graphs (MDG) method is an efficient way to define hardware designs in more abstract environments; however, the MDG model checker (MDG-MC) still suffers from the state explosion problem. Furthermore, none of the backward reduction algorithms can be used with MDG, due to the presence of abstract state variables. 
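The backward (chaining) program-slicing idea applied here to hardware descriptions can be illustrated on a toy dependence graph: starting from a signal of interest, collect every statement reachable backwards along dependence edges. A minimal Python sketch follows; the statement names and dependence edges are invented for illustration and are unrelated to the MDG-HDL tool itself:

```python
from collections import deque

# Toy program-dependence graph: statement -> statements it depends on
# (data/control dependences). Invented example program:
pdg = {
    "s1": [],            # x = input()
    "s2": [],            # y = input()
    "s3": ["s1"],        # a = x + 1
    "s4": ["s2"],        # b = y * 2
    "s5": ["s3", "s4"],  # c = a + b
    "s6": ["s3"],        # out = a
}

def backward_slice(pdg, criterion):
    """Statements that may affect the slicing criterion:
    reverse reachability over the dependence edges."""
    seen, work = {criterion}, deque([criterion])
    while work:
        for dep in pdg[work.popleft()]:
            if dep not in seen:
                seen.add(dep)
                work.append(dep)
    return seen

print(sorted(backward_slice(pdg, "s6")))  # statements relevant to out
```

Statements outside the slice (here s2, s4, s5 for criterion s6) can be discarded before model checking, which is the reduction that shrinks the state space.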
In this study, an efficient extractor for the MDG Hardware Description Language (MDG-HDL) is introduced, based on static (SS-MDG) and conditioned (CS-MDG) program slicing techniques. The techniques can obtain a chaining slice for given signals of interest. Their main advantages are: they impose no MDG-HDL coding-style limitations, they are accurate, and they handle the various MDG-HDL constructs. The main motivation for introducing this approach is to tackle the state explosion problem in MDG-MC that large MDG-HDL designs may cause. We apply our proposed techniques to different MDG-HDL designs, and our analyses have shown that the proposed reduction techniques result in significantly improved performance of MDG-MC. In this study, we present the general idea of program slicing, a discussion of how to slice MDG-HDL programs, the implementation of the tool and a brief overview of some applications and experimental results. The underlying method and the tool based on it need to be empirically evaluated when applied to various applications. 2013-01-01 400 International Nuclear Information System (INIS) To assess radiation dose and image quality of our pediatric 16-slice CT protocols and to compare them with published standards. For 540 weight-based pediatric 16-slice CT examinations in six anatomic regions, CTDIvol, DLP, effective dose, and image noise were determined. Two radiologists evaluated the visual quality of CT images by consensus. We analyzed the relationship of CTDIvol and image noise with body diameter. Our results were compared with published data. The average CTDIvol (mGy), DLP (mGy·cm), effective dose (mSv), and image noise (HU) were as follows: 4.1/125.5/1.6/16.2 for chest CT, 3.3/54.2/1.2/13.7 for heart CT, 5.8/256.6/3.8/13.0 for abdomen-pelvis CT, 6.8/318.7/5.9/12.0 for dynamic abdomen CT, 3.5/86.2/0.35/7.9 for neck CT and 25.4/368.0/1.6/3.7 for brain CT, respectively. All CT images were diagnostic upon visual analysis. 
The CTDIvol and image noise were proportional to body diameter. Our dose parameters were comparable to the first quartile of the cited German survey, whereas image noise in our study was similar to published data. Our pediatric CT dose is at the lower end of published standards and our image noise can be used as a target noise for each protocol in developing better pediatric multi-slice CT protocols 2008-11-01 401 Science.gov (United States) Magnetic resonance imaging (MRI) near metallic implants remains an unmet need because of severe artifacts, which mainly stem from large metal-induced field inhomogeneities. This work addresses MRI near metallic implants with an innovative imaging technique called "Slice Encoding for Metal Artifact Correction" (SEMAC). The SEMAC technique corrects metal artifacts via robust encoding of each excited slice against metal-induced field inhomogeneities. The robust slice encoding is achieved by extending a view-angle-tilting (VAT) spin-echo sequence with additional z-phase encoding. Although the VAT compensation gradient suppresses most in-plane distortions, the z-phase encoding fully resolves distorted excitation profiles that cause through-plane distortions. By positioning all spins in a region-of-interest to their actual spatial locations, the through-plane distortions can be corrected by summing up the resolved spins in each voxel. The SEMAC technique does not require additional hardware and can be deployed to the large installed base of whole-body MRI systems. The efficacy of the SEMAC technique in eliminating metal-induced distortions with feasible scan times is validated in phantom and in vivo spine and knee studies. 
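The through-plane correction described above, resolving each excited slice into z bins and re-summing the resolved spins at their true locations, can be caricatured in one dimension. The following is a deliberately simplified numpy sketch; the object, offsets, and array shapes are invented, and actual SEMAC operates on k-space data with a VAT readout rather than on ideal intensity arrays:

```python
import numpy as np

# Toy 1-D SEMAC-style correction. A metal-induced field offset shifts which
# true slice each nominal excitation actually hits; z-phase encoding resolves
# the excited signal into z bins, and summing bins per voxel restores the object.
nz, nx = 8, 16
true_object = np.zeros((nz, nx))
true_object[3:5, :] = 1.0           # object occupies slices 3-4

offset = np.zeros(nx, dtype=int)
offset[8:] = 2                      # "metal" displaces excitation by 2 slices here

# "Acquisition": nominal slice s actually excites spins at true slice s + offset[x];
# the z-encoding records that signal in the z bin of its true position.
resolved = np.zeros((nz, nz, nx))   # [nominal slice, resolved z bin, x]
for s in range(nz):
    for x in range(nx):
        z = s + offset[x]
        if 0 <= z < nz:
            resolved[s, z, x] = true_object[z, x]

# Correction: sum the resolved z bins across all nominal slices per voxel.
corrected = resolved.sum(axis=0)
```

In this idealized setting `corrected` reproduces `true_object` exactly; the point of the sketch is only the bookkeeping of "resolve, then re-sum at true positions," not the physics of excitation profiles.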
PMID:19267347 Lu, Wenmiao; Pauly, Kim Butts; Gold, Garry E; Pauly, John M; Hargreaves, Brian A 2009-07-01 402 Digital Repository Infrastructure Vision for European Research (DRIVER) Tuberculosis (TB) is one of the most common opportunistic diseases that appear among human immunodeficiency virus (HIV)-positive patients in Haiti. In this context the probable emergence of multidrug-resistant (MDR) strains of Mycobacterium tuberculosis is of great epidemiological concern. However, as routine culture of M. tuberculosis and drug susceptibility testing are not performed in Haiti, it has not been possible so far to evaluate the rate of drug resistance among M. tuberculosis isola... Ferdinand, Séverine; Sola, Christophe; Verdol, Béatrice; Legrand, Eric; Goh, Khye Seng; Berchel, Mylène; Aubéry, Alexandra; Timothée, Maryse; Joseph, Patrice; Pape, Jean William; Rastogi, Nalin 2003-01-01 403 Science.gov (United States) The family of sphingosine-1-phosphate receptors (S1PRs) is G-protein-coupled, comprised of subtypes S1PR1-S1PR5 and activated by the endogenous ligand S1P. The phosphorylated version of Fingolimod (pFTY720), an oral therapy for multiple sclerosis (MS), induces S1PR1 internalisation in T cells, subsequent insensitivity to S1P gradients and sequestering of these cells within lymphoid organs, thus limiting immune response. S1PRs are also expressed in neuronal and glial cells where pFTY720 is suggested to directly protect against lysolecithin-induced deficits in myelination state in organotypic cerebellar slices. Of note, the effect of pFTY720 on immune cells already migrated into the CNS, prior to treatment, has not been well established. We have previously found that organotypic slice cultures do contain immune cells, which, in principle, could also be regulated by pFTY720 to maintain levels of myelin. 
Here, a mouse organotypic cerebellar slice and splenocyte co-culture model was thus used to investigate the effects of pFTY720 on splenocyte-induced demyelination. Spleen cells isolated from myelin oligodendrocyte glycoprotein immunised mice (MOG-splenocytes) or from 2D2 transgenic mice (2D2-splenocytes) both induced demyelination when co-cultured with mouse organotypic cerebellar slices, to a similar extent as lysolecithin. As expected, in vivo treatment of MOG-immunised mice with FTY720 inhibited demyelination induced by MOG-splenocytes. Importantly, in vitro treatment of MOG- and 2D2-splenocytes with pFTY720 also attenuated demyelination caused by these cells. In addition, while in vitro treatment of 2D2-splenocytes with pFTY720 did not alter cell phenotype, pFTY720 inhibited the release of pro-inflammatory cytokines such as interferon gamma (IFNγ) and interleukin 6 (IL6) from these cells. This work suggests that treatment of splenocytes by pFTY720 attenuates demyelination and reduces pro-inflammatory cytokine release, which likely contributes to the enhanced myelination state induced by pFTY720 in organotypic cerebellar slices. Pritchard, Adam J.; Mir, Anis K.; Dev, Kumlesh K. 2014-01-01 404 Science.gov (United States) Results of an investigation of the topology of large-scale structure in two observed slices of the universe are presented. Both slices pass through the Coma cluster and their depths are 100 and 230/h Mpc. The present topology study shows that the largest void in the CfA slice is divided into two smaller voids by a statistically significant line of galaxies. The topology of toy models like the white noise and bubble models is shown to be inconsistent with that of the observed slices. A large N-body simulation was made of the biased cold dark matter model and the slices are simulated by matching them in selection functions and boundary conditions. 
The genus curves for these simulated slices are spongelike and show a small shift in the direction of a meatball topology, like those of the observed slices. Park, Changbom; Gott, J. R., III; Melott, Adrian L.; Karachentsev, I. D. 1992-01-01 405 Science.gov (United States) In finite fault source inversion, the seismic source area has usually been approximated by a simple fault plane model for simplicity. This approximation, however, may generate correlated modeling errors originating from focal mechanism variation during the rupture process, which contribute to biased results in seismic waveform analysis. This effect becomes predominant for analysis of seismic data around the nodal planes. From CMT inversion analysis, the January 12, 2010 Haiti earthquake may have involved both strike and dip slip on different fault planes (Nettles and Hjörleifsdóttir, 2010, GJI). In addition, a single fault plane model cannot explain the teleseismic body waves well, due to the complex source process and the existence of many mechanism-sensitive stations. For waveform analysis of this earthquake, we developed an inversion method that estimates the moment tensor components for each space knot in the seismic source area and applied it to teleseismic P-wave data recorded at FDSN network stations and Global Seismograph Network stations. In general, such a highly flexible source model can cause unstable and unrealistic results. To avoid this problem, we applied a new formulation that considers the data covariance components of observed errors and of modeling errors originating from the uncertainty of the Green's function (Yagi and Fukahata, 2010, AGU). It has already been shown that the new formulation can derive plausible solutions without a non-negative constraint. For the inversion, we arranged space knots on a plane whose strike and dip are the same as those of the USGS finite fault model. We confirmed that the result is robust against changes of strike, dip and knot interval. 
The result shows that the P-axes in the main rupture area trend north-south, which is consistent with the stress field of the region. The main rupture area can be divided into three patches with different focal mechanisms: one near the hypocenter and one each on the east and west sides of the hypocenter patch. Reverse faulting is dominant in the hypocenter patch, whereas the strike-slip component is dominant in the east and west patches. The CMT inversion analysis shows that many reverse-faulting aftershocks are observed on the west side of the west patch. It seems that the rupture stopped at the focal mechanism transition area. Our result is consistent with analyses of InSAR data and the CMT analysis. Kasahara, A.; Yagi, Y. 2010-12-01 406 Scientific Electronic Library Online (English) Full Text Available SciELO Cuba | Language: Spanish Abstract in spanish INTRODUCCIÓN. Las enfermedades de las glándulas salivales ocupan un lugar relevante entre las patologías quirúrgicas de la cabeza y el cuello. El objetivo de este trabajo fue presentar la experiencia del tratamiento quirúrgico de tumores parotídeos benignos del lóbulo superficial, mediante anestesia [...] local, en pacientes de la República de Haití atendidos como parte de la colaboración médica cubana en ese país. MÉTODOS. Se realizó un estudio prospectivo en pacientes con nódulos parotídeos, atendidos en la República de Haití entre los años 2005 y 2006. Según su naturaleza, los nódulos fueron agrupados en inflamatorios, neoplásicos y otros. Para el tratamiento quirúrgico se utilizó la anestesia local con lidocaína al 1 %, combinando el método infiltrativo y el bloqueo de campo. Se consultaron 149 pacientes con nódulos parotídeos, el mayor porcentaje de los cuales correspondió a procesos inflamatorios (68,0 %) y en 29 pacientes (19,0 %) se comprobó la presencia de nódulos neoplásicos. Las complicaciones fueron seroma (3 casos; 33,3 %) y hematoma (2 casos; 22,2 %), y 4 pacientes no presentaron complicaciones. CONCLUSIONES.
El abordaje quirúrgico con anestesia local probó ser una alternativa válida cuando el cirujano no cuenta con los recursos que convencionalmente se movilizan para el tratamiento quirúrgico de estos casos. Abstract in english INTRODUCTION. The diseases of the salivary glands occupy an important place among the surgical pathologies of the neck and the head. The aim of this paper was to present the experience of the surgical treatment of benign parotid tumors of the superficial lobule by local anesthesia in Haitian patients [...] attended as part of the Cuban medical collaboration in this country. METHODS. A prospective study was carried out in patients with parotid nodules attended in the Republic of Haiti from 2005 to 2006. According to their nature, the nodules were grouped into inflammatory, neoplastic and others. Local anesthesia with lidocaine 1 % was used for the surgical treatment, combining the infiltrative method and the field block. 149 patients with parotid nodules were seen. The highest percentage corresponded to inflammatory processes (68.0 %). The presence of neoplastic nodules was confirmed in 29 patients (19.0 %). The complications were seroma (3 cases; 33.3 %) and hematoma (2 cases; 22.2 %). Four patients did not present complications. CONCLUSIONS. The surgical approach with local anesthesia proved to be a valid alternative when the surgeon does not have the resources that are usually used for the surgical treatment of these cases. Valles Gamboa, Moraima; Zamora Linares, Carlos; Expósito Reyes, Orlando; Vázquez Polanco, Julio; Frías Banqueris, Roberto. 407 Scientific Electronic Library Online (English) Full Text Available SciELO Public Health | Language: English Abstract in spanish OBJETIVOS: Evaluar la prevalencia de sífilis materna y estimar la tasa de sífilis congénita en cinco poblaciones rurales cercanas a Jeremie, Haití.
MÉTODOS: Estudio observacional retrospectivo a partir de datos extraídos de la base de datos de salud pública de la Fundación Haitiana de Salud y verificada [...] con los registros clínicos originales en papel, los certificados de defunción, los informes de las parteras y discusiones con los trabajadores comunitarios de salud. Los datos se analizaron mediante la prueba de la ji al cuadrado, correlaciones bifactoriales y la prueba de la t de dos colas para muestras independientes. RESULTADOS: De las 410 mujeres sometidas a la prueba de sífilis, 31 (7,6%) resultaron seropositivas. La edad gestacional promedio al momento de la prueba fue de 25 semanas, lo que se correlacionó con la edad gestacional de entrada a la atención prenatal (23 semanas). Las mujeres que resultaron seropositivas durante el embarazo presentaron mayor probabilidad de tener un desenlace negativo de su embarazo que las mujeres que resultaron seronegativas (χ² = 16,4; P < 0,0001). Abstract in english OBJECTIVES: A study was conducted to assess the prevalence of maternal syphilis and estimate the rate of congenital syphilis in five rural villages surrounding Jeremie, Haiti. METHODS: This research was a retrospective observational study. Data were extracted from the Haitian Health Foundation's public [...] health database and verified through original clinical paper records, death certificates, midwife reports, and discussions with community health workers. Data were analyzed by chi-square analysis, bivariate correlations, and two-tailed t-test for independent samples. RESULTS: Of the 410 women tested for syphilis, 31 (7.6%) were sero-reactive. Average gestation at time of testing was 25 weeks, which correlated with entry into prenatal care at an average of 23 weeks. Women who tested positive during pregnancy were more likely to have had a negative pregnancy outcome than those who did not (chi-square = 16.4; P < 0.0001). Chaylah J., Lomotey; Judy, Lewis; Bette, Gebrian; Royneld, Bourdeau; Kevin, Dieckhaus; Juan C., Salazar.
408 Directory of Open Access Journals (Sweden) Full Text Available OBJECTIVES: A study was conducted to assess the prevalence of maternal syphilis and estimate the rate of congenital syphilis in five rural villages surrounding Jeremie, Haiti. METHODS: This research was a retrospective observational study. Data were extracted from the Haitian Health Foundation's public health database and verified through original clinical paper records, death certificates, midwife reports, and discussions with community health workers. Data were analyzed by chi-square analysis, bivariate correlations, and two-tailed t-test for independent samples. RESULTS: Of the 410 women tested for syphilis, 31 (7.6%) were sero-reactive. Average gestation at time of testing was 25 weeks, which correlated with entry into prenatal care at an average of 23 weeks. Women who tested positive during pregnancy were more likely to have had a negative pregnancy outcome than those who did not (chi-square = 16.4; P < 0.0001). OBJETIVOS: Evaluar la prevalencia de sífilis materna y estimar la tasa de sífilis congénita en cinco poblaciones rurales cercanas a Jeremie, Haití. MÉTODOS: Estudio observacional retrospectivo a partir de datos extraídos de la base de datos de salud pública de la Fundación Haitiana de Salud y verificada con los registros clínicos originales en papel, los certificados de defunción, los informes de las parteras y discusiones con los trabajadores comunitarios de salud. Los datos se analizaron mediante la prueba de la ji al cuadrado, correlaciones bifactoriales y la prueba de la t de dos colas para muestras independientes. RESULTADOS: De las 410 mujeres sometidas a la prueba de sífilis, 31 (7,6%) resultaron seropositivas. La edad gestacional promedio al momento de la prueba fue de 25 semanas, lo que se correlacionó con la edad gestacional de entrada a la atención prenatal (23 semanas).
Las mujeres que resultaron seropositivas durante el embarazo presentaron mayor probabilidad de tener un desenlace negativo de su embarazo que las mujeres que resultaron seronegativas (χ² = 16,4; P < 0,0001). La tasa estimada de sífilis congénita en la zona fue de 767 por 100 000 nacidos vivos. CONCLUSIONES: La sífilis materna es frecuente en las zonas rurales de Haití, lo que, combinado con la entrada tardía a los servicios de atención prenatal, contribuye a los desenlaces adversos de los embarazos y a la alta tasa estimada de sífilis congénita. Se requieren más estudios sobre la sífilis congénita y los hábitos de búsqueda de atención prenatal de las mujeres de zonas rurales de Haití para comprender el impacto de la sífilis materna en esta parte del país y mejorar el desenlace de los embarazos. Chaylah J. Lomotey 2009-09-01 409 Energy Technology Data Exchange (ETDEWEB) The clinical usefulness of thin-slice high-resolution CT in the diagnosis of the chest was assessed by (1) experiments using a phantom and an inflated fixed lung (Heitzman's method), (2) evaluation of the CT delineation of the bronchi and major fissure and (3) clinical examination of patients with diffuse pulmonary diseases. (1) In a phantom experiment using catheters, 10 mm-thick slice scan showed the 6 catheters as a single faint line. By 1.5 mm-thick slice scan, the catheters were defined as 6 separate lines. The profile images suggested that 1.5 mm-thick slice scan enables delineation of more minute details of structures. (2) In an experiment using an inflated fixed lung, 1.5 mm-thick slice scan produced more informative images that resembled macroscopic or soft X-ray images. (3) By 1.5 mm-thick slice scan, the subsegmental bronchi of both the right and left lobes were identified in most cases, while identification was possible in only half the cases by 10 mm-thick slice scan. (4) By 10 mm-thick slice scan, the sub-subsegmental bronchi were not identified in most cases.
However, identification was possible in approximately half the cases by 1.5 mm-thick slice scan. (5) By 1.5 mm-thick slice scan, the major fissure was delineated as a linear shadow in most cases. It was hardly recognizable by 10 mm-thick slice scan. (6) In diffuse pulmonary diseases, 1.5 mm-thick slice scan allowed more minute visualization of the patho-morphological changes compared to 10 mm-thick slice scan. (7) Thin-slice high-resolution CT is thus expected to contribute to pathological analysis and more accurate diagnosis of pulmonary diseases. Otsuji, Hideaki; Yoshimura, Hitoshi; Iwasaki, Satoru and others 1989-01-01 410 International Nuclear Information System (INIS) Full text: As all specialists recognize, knowledge management refers to issues related to organizational adaptation, survival and competence in the context of discontinuous environmental change. It also concerns organizational processes that seek a synergistic combination of the data- and information-processing capacity of information technologies with the capacity of human beings. Knowledge management in this sense implies not only organizational and technological processes but also human resources development. Our intervention in the context of this forum will focus on a planned INIS project that has been submitted to the Agency for the 2005-2006 cycle and the synergistic ties it can develop with a nuclear knowledge management policy for Haiti. Haiti is the only least developed country in Latin America, and the main challenge it faces is that of reducing poverty. The population of Haiti is around 7,900,000 inhabitants. In terms of annual per capita income, the estimated indigence line for 1996 was $160 per year and the poverty line was around $220; two thirds of rural households fell under the indigence line and only 20% of the population exceeded the poverty line.
The main causes of this situation are land erosion, water scarcity, degradation of the environment, lack of competitiveness of the economy, lack of electricity, etc. In all these areas, nuclear techniques can contribute to solving the problem of poverty in Haiti by fulfilling the need to sustain valuable human resources under the dire circumstances of the local economic conditions. Taking into account the Government's recent efforts to enhance manpower capabilities, there is a real need now to manage these scarce resources so that they can be retained, expanded and eventually multiplied. From this perspective, the Haitian Government is applying a strategy that seeks to involve all the sectors concerned with the peaceful applications of nuclear techniques. After three years of diffusion of information, there is now a growing interest in nuclear issues in Haiti. But Haiti needs to go further than that: for example, by establishing a true national policy for nuclear issues. This requires, among other things, a strong and sustainable human base in the nuclear area. In this context, the Government of Haiti has presented a project to the Agency related to the installation of an INIS national data center. This project will contribute in depth to the implementation of a national nuclear knowledge management programme. The general purpose of this project is 'to interest young people in Haiti in studying nuclear science'. That means introducing the nuclear sciences into the universities in Haiti in order to create a 'critical mass' that will allow Haiti, within 15 years, to take off in the nuclear sciences. Such a consideration means that the Government will have to apply a very strong and clear knowledge management policy. Will it be fruitful to begin such a strategy with the installation of an INIS data center? We do not know yet.
But the implementation of the INIS national data center project will give a clear idea about the success of an NKM policy in Haiti. The future is not a given fact; it has to be constructed. This is the meaning of the hope Haiti's Government has placed in this planned project, which will serve as a platform to launch a national long-term nuclear knowledge management policy and programme. As an LDC searching for its way toward sustainable development, Haiti needs more than ever a nuclear knowledge management policy and a well-defined strategy to implement it. This policy will take into consideration the broad-based view articulated in the report of the IAEA's June 2001 special mission. Its short-term outcome will be to secure a material and human base in order to spread the nuclear sciences and technologies at the university level. In this sense, the universities will be at the core of this knowledge management policy, because that will allow young generations in Haiti to access and benefit from high-level teaching 2004-10-01 411 Directory of Open Access Journals (Sweden) Full Text Available Abstract Background A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground-based surveys to collect household location data and can be performed in a rapid, cost-effective manner. Methods The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units.
Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel, which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. Results A total of 537 homes were initially mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. Conclusions The method used to generate and field-locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent, and only rarely was local knowledge required to identify and locate households. This method provides an important technique that can be applied to other developing countries where a randomized study design is needed but infrastructure is lacking to implement more traditional participant selection methods. Wampler Peter J 2013-01-01 412 Science.gov (United States) The start of the cholera epidemic in Haiti quickly highlighted the necessity of implementing an Alert and Response (A&R) System to complement the existing national surveillance system.
The national system had been able to detect and confirm the outbreak etiology but required external support to monitor the spread of cholera and coordinate response, because much of the information produced was insufficiently timely for real-time monitoring and directing of a rapid, targeted response. The A&R System was designed by the Pan American Health Organization/World Health Organization in collaboration with the Haiti Ministry of Health, and it was based on a network of partners, including any institution, structure, or individual that could identify, verify, and respond to alerts. The defined objectives were to (1) save lives through early detection and treatment of cases and (2) control the spread through early intervention at the community level. The operational structure could be broken down into three principal categories: (1) alert (early warning), (2) verification and assessment of the information, and (3) efficient and timely response in coordination with partners to avoid duplication. Information generated by the A&R System was analyzed and interpreted, and the qualitative information was critical in qualifying the epidemic and defining vulnerable areas, particularly because the national surveillance system reported incomplete data for more than one department. The A&R System detected a number of alerts unrelated to cholera and facilitated rapid access to that information. The sensitivity of the system and its ability to react quickly was shown in May of 2011, when an abnormal increase in alerts coming from several communes in the Sud-Est Department in epidemiological weeks (EWs) 17 and 18 was noted and disseminated network-wide, and response activities were implemented. The national cholera surveillance system did not register the increase until EWs 21 and 22, and the information did not become available until EWs 23 and 24, when the peak of cases had already been reached.
Although many of the partners reporting alerts during the peak of the cholera epidemic have since left Haiti, the A&R System has continued to function as an Early Warning (EWARN) System, and it continues to be developed with recent activities, such as the distribution of cell phones to enhance alert communication. PMID:24106196 Santa-Olalla, Patricia; Gayer, Michelle; Magloire, Roc; Barrais, Robert; Valenciano, Marta; Aramburu, Carmen; Poncelet, Jean Luc; Gustavo Alonso, Juan Carlos; Van Alphen, Dana; Heuschen, Florence; Andraghetti, Roberta; Lee, Robert; Drury, Patrick; Aldighieri, Sylvain 2013-10-01 413 Science.gov (United States) The 12 January 2010 earthquake in Haiti demonstrates the necessity of understanding information communication between disciplines during disasters. Armed with data from a variety of sources, from geophysics to construction, water and sanitation to education, decision makers can initiate well-informed policies to reduce the risk from future hazards. At the core of this disaster was a natural hazard that occurred in an environmentally compromised country. The earthquake itself was not solely responsible for the magnitude of the disaster: poor construction practices precipitated by extreme poverty, two centuries of post-colonial environmental degradation, and a history of dysfunctional government shoulder much of the responsibility. Future policies must take into account the geophysical reality that future hazards are inevitable and may occur within the very near future, and how various institutions will respond to the stressors. As the global community comes together in reconstruction efforts, it is necessary for the various actors to take into account what vulnerabilities were exposed by the earthquake, most vividly seen during the initial response to the disaster.
Responders are forced to prioritize resources designated for building collapse and infrastructure damage, delivery of critical services such as emergency medical care, and delivery of food and water to those in need. Past disasters have shown that communication lapses between the response and recovery phases result in many of the exposed vulnerabilities not being adequately addressed, and the recovery hence fails to bolster compromised systems. The response reflects the basic characteristics of a Complex Adaptive System, where new agents emerge and priorities within existing organizations shift to deal with new information. To better understand how information is shared between actors during this critical transition, we are documenting how information is communicated between critical sectors during the response and recovery phases. Our team consists of experts in natural hazards, public health, shelter and infrastructure, education, and security. We are performing a network analysis based on the content of news and situation reports in media and from UN and aid agencies, field reports by academics and organizations like EERI, and discussions with agencies in Haiti. During three trips to Haiti, we have documented what information was being collected by key stakeholders including government, United Nations, non-governmental organizations, and both domestic and international educational institutions. Insights gained from this analysis of disaster response and recovery operations are invaluable in informing the next stage of risk reduction, the transition to a sustainable recovery in a damaged region. McAdoo, B.
G.; Augenstein, J.; Comfort, L.; Huggins, L.; Krenitsky, N.; Scheinert, S.; Serrant, T.; Siciliano, M.; Stebbins, S.; Sweeney, P.; University Of Pittsburgh Haiti Reconnaissance Team 2010-12-01 414 Scientific Electronic Library Online (English) Full Text Available SciELO Chile | Language: English Abstract in spanish El presente artículo tiende un puente entre los debates globales y específicos de Haití sobre estatalidad, la economía política de la (de)formación del Estado y la conceptualización y medición de dichos fenómenos. Basándose en datos y literatura secundaria sobre Haití, pero sin circunscribirse a este [...] caso, el presente artículo sostiene que a pesar de los rasgos característicos del Estado extremadamente débil de Haití, dicho caso puede ser comparado productivamente con una serie de otros estados, que van desde estados débiles a relativamente fuertes, en América Latina y el Caribe. En el proceso, el artículo sugiere considerar a los niveles de soberanía como una dimensión integral de la estatalidad en la región, pero también en otras partes del mundo. El artículo demuestra la relevancia de conceptos utilizados en otros artículos de este volumen, como el de "debilidad por diseño", para el caso de Haití. El artículo concluye sugiriendo que sería útil ir más allá de las teorías neoweberianas, por ejemplo incorporando análisis críticos feministas, para entender las diferentes caras de la debilidad estatal y su construcción social en la región. Abstract in english This article bridges global and Haiti-specific debates on statehood, the political economy of state and state (de)formation, as well as the conceptualization and measurement of those phenomena. Drawing on data sets and secondary literatures from Haiti and beyond, it argues that despite the unique features [...] of the extremely weak state in Haiti, that case can usefully be compared to the range of weak to fairly strong states in Latin America and the Caribbean.
In the process, the article makes a case for considering degrees of sovereignty as an integral dimension of statehood in the region and elsewhere. It demonstrates the relevance of concepts used in other articles in this volume, such as 'weakness by design', in the Haitian case. The article ends by suggesting that it would be useful to look beyond neo-Weberian theories, for example by incorporating critical feminist analysis, to understand the different faces of state weakness and their social construction in the region. BARANYI, STEPHEN. 415 Directory of Open Access Journals (Sweden) Full Text Available This article bridges global and Haiti-specific debates on statehood, the political economy of state and state (de)formation, as well as the conceptualization and measurement of those phenomena. Drawing on data sets and secondary literatures from Haiti and beyond, it argues that despite the unique features of the extremely weak state in Haiti, that case can usefully be compared to the range of weak to fairly strong states in Latin America and the Caribbean. In the process, the article makes a case for considering degrees of sovereignty as an integral dimension of statehood in the region and elsewhere. It demonstrates the relevance of concepts used in other articles in this volume, such as 'weakness by design', in the Haitian case. The article ends by suggesting that it would be useful to look beyond neo-Weberian theories, for example by incorporating critical feminist analysis, to understand the different faces of state weakness and their social construction in the region.
https://www.acmicpc.net/problem/3930
Time limit: 1 second · Memory limit: 128 MB · Submissions: 5 · Accepted: 2 · Solvers: 1 · Acceptance rate: 33.333%

## Problem

In the year 2xxx, an expedition team landing on a planet found strange objects made by an ancient species living on that planet. They are transparent boxes containing opaque solid spheres (Figure 12). There are also many lithographs which seem to contain the positions and radii of the spheres.

Figure 12: A strange object

Initially their objective was unknown, but Professor Zambendorf found that the cross section formed by a horizontal plane plays an important role. For example, the cross section of an object changes as in Figure 13 by sliding the plane from bottom to top.

Figure 13: Cross sections at different positions

He eventually found that some information is expressed by the transition of the number of connected figures in the cross section, where each connected figure is a union of discs intersecting or touching each other, and each disc is a cross section of the corresponding solid sphere. For instance, in Figure 13, whose geometry is described in the first sample dataset later, the number of connected figures changes as 0, 1, 2, 1, 2, 3, 2, 1, and 0, at z = 0.0000, 162.0000, 167.0000, 173.0004, 185.0000, 191.9996, 198.0000, 203.0000, and 205.0000, respectively. By assigning 1 for an increment and 0 for a decrement, the transitions of this sequence can be expressed by the 8-bit binary number 11011000. To help further analysis, write a program to determine the transitions when sliding the horizontal plane from bottom (z = 0) to top (z = 36000).

## Input

The input consists of a series of datasets. Each dataset begins with a line containing a positive integer, which indicates the number of spheres N in the dataset. It is followed by N lines describing the centers and radii of the spheres. Each of the N lines has four positive integers Xi, Yi, Zi, and Ri (i = 1, ··· , N) describing the center and the radius of the i-th sphere, respectively.
You may assume 1 ≤ N ≤ 100, 1 ≤ Ri ≤ 2000, 0 < Xi − Ri < Xi + Ri < 4000, 0 < Yi − Ri < Yi + Ri < 16000, and 0 < Zi − Ri < Zi + Ri < 36000. Each solid sphere is defined as the set of all points (x, y, z) satisfying (x − Xi)² + (y − Yi)² + (z − Zi)² ≤ Ri². A sphere may contain other spheres. No two spheres are mutually tangent. Every Zi ± Ri and the minimum/maximum z coordinates of a circle formed by the intersection of any two spheres differ from each other by at least 0.01. The end of the input is indicated by a line with one zero.

## Output

For each dataset, your program should output two lines. The first line should contain an integer M indicating the number of transitions. The second line should contain an M-bit binary number that expresses the transitions of the number of connected figures as specified above.

## Sample Input

```
3
95 20 180 18
125 20 185 18
40 27 195 10
1
5 5 5 4
2
5 5 5 4
5 5 5 3
2
5 5 5 4
5 7 5 3
16
2338 3465 29034 710
1571 14389 25019 842
1706 8015 11324 1155
1899 4359 33815 888
2160 10364 20511 1264
2048 8835 23706 1906
2598 13041 23679 618
1613 11112 8003 1125
1777 4754 25986 929
2707 9945 11458 617
1153 10358 4305 755
2462 8450 21838 934
1822 11539 10025 1639
1473 11939 12924 638
1388 8519 18653 834
2239 7384 32729 862
0
```

## Sample Output

```
8
11011000
2
10
2
10
2
10
28
1011100100110101101000101100
```
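The sweep can be implemented event by event: the count of connected figures only changes at heights where a disc appears or vanishes (z = Zi ± Ri) or where two cross-section discs become tangent, i.e. at the z-extremes of the intersection circle of two spheres. The Python sketch below is one illustrative approach, not a reference solution: tangency heights are found numerically, exploiting that r_i(z) + r_j(z) is concave in z (so there are at most two tangency roots per pair), and the discs are grouped with a union-find between consecutive events.

```python
from math import sqrt

def solve(spheres):
    """Return the transition bit string for one dataset.

    spheres: list of (X, Y, Z, R) tuples; the transition count M is len()
    of the returned string.
    """
    events = set()
    for _, _, Z, R in spheres:
        events.add(Z - R)   # disc appears
        events.add(Z + R)   # disc vanishes
    # Tangency events per pair: dist_xy == r_i(z) + r_j(z).  The left side
    # is constant in z and the right side is concave, so at most two roots.
    n = len(spheres)
    for i in range(n):
        X1, Y1, Z1, R1 = spheres[i]
        for j in range(i + 1, n):
            X2, Y2, Z2, R2 = spheres[j]
            d = sqrt((X1 - X2) ** 2 + (Y1 - Y2) ** 2)
            lo, hi = max(Z1 - R1, Z2 - R2), min(Z1 + R1, Z2 + R2)
            if lo >= hi:
                continue  # the spheres' z-ranges do not overlap

            def f(z):
                return (sqrt(max(0.0, R1 * R1 - (z - Z1) ** 2))
                        + sqrt(max(0.0, R2 * R2 - (z - Z2) ** 2)) - d)

            ta, tb = lo, hi  # ternary search for the maximum of concave f
            for _ in range(200):
                m1, m2 = ta + (tb - ta) / 3, tb - (tb - ta) / 3
                if f(m1) < f(m2):
                    ta = m1
                else:
                    tb = m2
            zmax = (ta + tb) / 2
            if f(zmax) <= 0:
                continue  # the two discs never touch
            for a, b in ((lo, zmax), (zmax, hi)):
                if f(a) > 0 and f(b) > 0:
                    continue  # no sign change on this side, no root
                ra, rb = (a, b) if f(a) < 0 else (b, a)  # f(ra)<0<f(rb)
                for _ in range(100):  # bisection for the tangency height
                    mid = (ra + rb) / 2
                    if f(mid) < 0:
                        ra = mid
                    else:
                        rb = mid
                events.add((ra + rb) / 2)

    def components(z):
        """Number of connected figures in the cross section at height z."""
        discs = [(X, Y, sqrt(R * R - (z - Z) ** 2))
                 for X, Y, Z, R in spheres if abs(z - Z) < R]
        parent = list(range(len(discs)))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for p in range(len(discs)):
            xp, yp, rp = discs[p]
            for q in range(p + 1, len(discs)):
                xq, yq, rq = discs[q]
                if sqrt((xp - xq) ** 2 + (yp - yq) ** 2) <= rp + rq:
                    parent[find(p)] = find(q)
        return len({find(k) for k in range(len(discs))})

    # Evaluate the count midway between consecutive events; emit 1 for an
    # increment and 0 for a decrement.  Some events leave the count
    # unchanged (e.g. a disc appearing inside an existing figure).
    zs = [0.0] + sorted(events) + [36000.0]
    bits, prev = [], 0
    for a, b in zip(zs, zs[1:]):
        c = components((a + b) / 2)
        if c != prev:
            bits.append('1' if c > prev else '0')
        prev = c
    return ''.join(bits)
```

On the first sample dataset this sketch yields `11011000` with M = 8; a full program would read datasets until the terminating `0` and print `len(bits)` and `bits` for each. The separation guarantee in the statement (all critical heights differ by at least 0.01) is what makes sampling at interval midpoints safe.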
http://math.stackexchange.com/users/24690/une-femme-douce?tab=activity
# Une Femme Douce

reputation 21,247 · member for 2 years, 7 months · last seen 21 hours ago · profile views 3,967

## 3,079 Actions

- Sep 9: accepted $\lim_{x \to 0} \left ({e^x+e^{-x}-2\over x^2} \right )^{1\over x^2}$
- Sep 9: comment on $\lim_{x \to 0} \left ({e^x+e^{-x}-2\over x^2} \right )^{1\over x^2}$: "Why is the limit under the log $1$? The fraction associated with $\cosh x$ is not going to $0$, right? It is a $0$ by $0$ form."
- Sep 8: asked $\lim_{x \to 0} \left ({e^x+e^{-x}-2\over x^2} \right )^{1\over x^2}$
- Aug 29: reviewed (approved suggested edit) "Normal Matrix Having all real eigen values is Hermitian"
- Aug 29: accepted "Normal Matrix Having all real eigen values is Hermitian"
- Aug 29: reviewed (approved suggested edit) "Eigenvalues for the Sturm-Liouville boundary value problem"
- Aug 29: reviewed (approved suggested edit) "Normal Matrix Having all real eigen values is Hermitian"
- Aug 29: asked "Normal Matrix Having all real eigen values is Hermitian"
- Aug 22: comment on $f:\mathbb{R}^2\to\mathbb{R}^2, f(x,y)=(x+2y+y^2+|xy|,2x+y+x^2+|xy|)$: "Okay, got it."
- Aug 22: comment on the same question: "So $3$ and $4$ are true, and $1,2$ are false, but how to show $f$ is differentiable?"
- Aug 22: comment on the same question: "Yes, yes, they wanted to know which are the correct statements."
- Aug 22: asked $f:\mathbb{R}^2\to\mathbb{R}^2, f(x,y)=(x+2y+y^2+|xy|,2x+y+x^2+|xy|)$
- Aug 21: comment on "On $f:A\to\mathbb{R}^2, f(x,y)=({x\over 1+x+y},{y\over 1+x+y})$": "Oh yes, I thought they were asking whether the determinant of the Jacobian matrix vanishes; anyway, the matrix also does not vanish on $A$."
- Aug 21: answered "Multivariate limit $\lim_{(x,y) \to (0,0)} \frac{xy^2}{x^2 + y^4} = 0$"
- Aug 21: revised "On $f:A\to\mathbb{R}^2, f(x,y)=({x\over 1+x+y},{y\over 1+x+y})$" (added 1 character in body)
- Aug 21: asked "On $f:A\to\mathbb{R}^2, f(x,y)=({x\over 1+x+y},{y\over 1+x+y})$"
- Aug 16: comment on "Let $f:[-1,1] \to \mathbb{R}$ be differentiable 3 times, prove $\exists M>0 \ , \ s.t \ f(x) \le Mx^2$": "Why is $f'(0)=0$? Can you explain?"
- Aug 13: accepted "$x_1,x_2,x_3,x_4$ are in Harmonic Progression $\Rightarrow (x_1-x_3)(x_2-x_4)=4(x_1-x_2)(x_3-x_4)$"
- Aug 13: asked "$x_1,x_2,x_3,x_4$ are in Harmonic Progression $\Rightarrow (x_1-x_3)(x_2-x_4)=4(x_1-x_2)(x_3-x_4)$"
- Aug 9: accepted "How to find the area of an isosceles triangle without using trigonometry?"
http://quant.stackexchange.com/tags/local-volatility/hot
# Tag Info

- (10 votes) There is another reason why stochastic volatility models should usually be preferred to local volatility models. It is explained in the Hagan et al. paper "Managing Smile Risk" on the SABR process: in simple terms, smile dynamics are poorly predicted by local vol models, leading to bad hedging of exotic options. Anyway, local vol models have the good ...
- (9 votes) The reason for put and call volatilities to appear different is that the implied vol has been calculated using different drift parameters than those implied by the market. Let's take everything in the model as given except the interest rate $r$ and the volatility $\sigma$. For European options we have the Black-Scholes formula for put and call values ...
- (6 votes) For pricing and hedging a portfolio of vanilla options, stochastic volatility is almost always preferable to local volatility, since empirically it more accurately captures the evolution of the smile.
- (4 votes) "Jump volatility" is a term sometimes used to describe randomly varying jump sizes in a model with asset-value jumps, so strictly speaking it is merely a parameter in a generic jump diffusion. Both local volatility models and jump diffusions end up producing skew and kurtosis (of Black-Scholes volatilities). However, they are complementary in practice, at ...
- (3 votes) You can view the price of an option as the cost to dynamically replicate it. The more volatility, the more cost you will incur trading the underlying to keep your delta equal to 0 (assuming you sold the option, hence a negative gamma position). So if, at any spot and any date, your local vol is above 0.194, rebalancing the portfolio will be constantly more ...
- (2 votes) For pricing, there are a few products whose prices are sensitive to the forward smile, and when you compute that with just local vol it is not realistic. So if you are a seller, you go to the next church and find something that looks kind of reasonable, and that can reconstruct a reasonable forward smile structure. The game in pricing is to not ...
- (2 votes) First, please make sure that when you resimulate sample paths, you keep your underlying random samples constant, as in this answer. For your delta, vega and rho there is some ambiguity in the definition of the greeks. Consider the simple case of delta in the presence of a skew $\sigma(K/S)$, and say that the underlying price right now is $S_0$. We ...
- (1 vote) The Dupire model is just one way of generating a local volatility surface from an implied volatility surface; there are many other ways. One critical aspect of the Dupire model is that the input implied volatility (IV) surface must be arbitrage-free. If not, you will get negative instantaneous variance when generating the local ...
- (1 vote) No, if you are referring to the famous Dupire model (there are others), then they are the same. It is usually referred to as the Local Volatility Model and the Dupire Equation. I would disentangle those from the concept of local volatility itself, which is model-independent and a fairly deep result.
- (1 vote) There is no difference in information, though the fitting algorithm may increase in complexity. First note that in practice you never have an entire curve or surface of prices $C(K,T)$ of any kind of option. You only have a finite number of observations, and even those typically have a bid and an offer. I would therefore argue that the correct picture of ...
- (1 vote) The OpenGamma Analytics Library definitely does have a local volatility model available. In addition, our Quantitative Papers page links to the full mathematics and basis for our local volatility implementation. I'd be interested to know why you decided to write your own rather than using one of the above.
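One of the answers above stresses keeping the underlying random samples constant when resimulating bumped paths for finite-difference greeks. A minimal Monte Carlo sketch under plain Black-Scholes dynamics illustrates why (all parameter values here are illustrative assumptions, not from the answers):

```python
import numpy as np

def mc_call_price(s0, k, r, sigma, t, z):
    """Monte Carlo price of a European call driven by fixed normal draws z."""
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * np.maximum(st - k, 0.0).mean()

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)   # common random numbers, reused for every bump
s0, k, r, sigma, t, h = 100.0, 100.0, 0.02, 0.2, 1.0, 0.01

# Central-difference delta with the SAME draws on both sides: low variance.
delta_crn = (mc_call_price(s0 + h, k, r, sigma, t, z)
             - mc_call_price(s0 - h, k, r, sigma, t, z)) / (2 * h)

# Fresh draws on one side swamp the tiny bump with sampling noise.
z2 = rng.standard_normal(100_000)
delta_fresh = (mc_call_price(s0 + h, k, r, sigma, t, z)
               - mc_call_price(s0 - h, k, r, sigma, t, z2)) / (2 * h)
```

With common random numbers the estimate lands close to the analytic Black-Scholes delta (about 0.579 for these inputs), while the fresh-draw version can be off by whole units because the Monte Carlo noise is divided by the small bump $2h$.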
http://linux-commands-examples.com/ttf2tfm
# ttf2tfm

build TeX metric files from a TrueType font

## Synopsis

    ttf2tfm ttffile[.ttf|.ttc] [-c caps-height-factor] [-e extension-factor] [-E encoding-id] [-f font-index] [-l] [-L ligature-file[.sfd]] [-n] [-N] [-O] [-p inencfile[.enc]] [-P platform-id] [-q] [-r old-glyphname new-glyphname] [-R replacement-file[.rpl]] [-s slant-factor] [-t outencfile[.enc]] [-T inoutencfile[.enc]] [-u] [-v vplfile[.vpl]] [-V scvplfile[.vpl]] [-w] [-x] [-y vertical-shift-factor] [tfmfile[.tfm]]

    ttf2tfm --version | --help

## description

This program extracts the metric and kerning information of a TrueType font and converts it into metric files usable by TeX (quite similar to afm2tfm, which is part of the dvips package; please consult its info files for more details on the various parameters, especially encoding files). Since a TrueType font often contains more than 256 glyphs, some means are necessary to map a subset of the TrueType glyphs onto a TeX font. To do this, two mapping tables are needed: the first (called the 'input' or 'raw' encoding) maps the TrueType font to a raw TeX font (this mapping table is used by both ttf2tfm and ttf2pk), and the second (called the 'output' or 'virtual' encoding) maps the raw TeX font to another (virtual) TeX font, providing all kerning and ligature information needed by TeX. This two-stage mapping has the advantage that one raw font can be accessed with various LaTeX encodings (e.g.
T1 and OT1) via the virtual font mechanism, and just one PK file is necessary. For CJKV (Chinese/Japanese/Korean/old Vietnamese) fonts, a different mechanism is provided (see SUBFONT DEFINITION FILES below).

## availability

ttf2tfm is part of the FreeType 1 package, a high-quality TrueType rendering library.

## cmaps

Contrary to Type 1 PostScript fonts (but similar to the new CID PostScript font format), most TrueType fonts have more than one native mapping table, also called a 'cmap', which maps the (internal) TTF glyph indices to the (external) TTF character codes. Common examples are a mapping table to Unicode encoded character positions, and the standard Macintosh mapping. To specify a TrueType mapping table, use the options -P and -E. With -P you specify the platform ID; defined values are:

| platform | platform ID (pid) |
|---|---|
| Apple Unicode | 0 |
| Macintosh | 1 |
| ISO | 2 |
| Microsoft | 3 |

The encoding ID depends on the platform. For pid=0, the -E parameter is ignored (set to zero) since the mapping table is always Unicode version 2.0. For pid=1, the following table lists the defined values:

| script (pid=1) | encoding ID (eid) |
|---|---|
| Roman | 0 |
| Japanese | 1 |
| Chinese | 2 |
| Korean | 3 |
| Arabic | 4 |
| Hebrew | 5 |
| Greek | 6 |
| Russian | 7 |
| Roman Symbol | 8 |
| Devanagari | 9 |
| Gurmukhi | 10 |
| Gujarati | 11 |
| Oriya | 12 |
| Bengali | 13 |
| Tamil | 14 |
| Telugu | 15 |
| Malayalam | 17 |
| Sinhalese | 18 |
| Burmese | 19 |
| Khmer | 20 |
| Thai | 21 |
| Laotian | 22 |
| Georgian | 23 |
| Armenian | 24 |
| Maldivian | 25 |
| Tibetan | 26 |
| Mongolian | 27 |
| Geez | 28 |
| Slavic | 29 |
| Vietnamese | 30 |
| Sindhi | 31 |
| Uninterpreted | 32 |

Here are the ISO encoding IDs:

| encoding (pid=2) | encoding ID (eid) |
|---|---|
| ASCII | 0 |
| ISO 10646 | 1 |
| ISO 8859-1 | 2 |

And finally, the Microsoft encoding IDs:

| encoding (pid=3) | encoding ID (eid) |
|---|---|
| Symbol | 0 |
| Unicode 2.0 | 1 |
| Shift JIS | 2 |
| GB 2312 (1980) | 3 |
| Big 5 | 4 |
| KS X 1001 (Wansung) | 5 |
| KS X 1001 (Johab) | 6 |
| UCS-4 | 10 |

The program will abort if you specify an invalid platform/encoding ID pair; it will then show the possible pid/eid pairs.
Please note that most fonts have at most two or three cmaps, usually corresponding to the pid/eid pairs (1,0), (3,0), or (3,1) in the case of Latin-based fonts. Valid Microsoft fonts should have a (3,1) mapping table, but some fonts exist (mostly Asian fonts) which have a (3,1) cmap not encoded in Unicode. The reason for this strange behavior is that some old MS Windows versions reject fonts having a non-(3,1) cmap (since all non-Unicode Microsoft encoding IDs are for Asian MS Windows versions). The -P and -E options of ttf2tfm must be specified identically for ttf2pk; the corresponding parameters in a map file are 'Pid' and 'Eid', respectively. The default pid/eid pair is (3,1). Similarly, an -f option must be specified as the 'Fontindex' parameter in a map file. If you use the -N switch, all cmaps are ignored, using only the PostScript names in the TrueType font. The corresponding option in a map file is 'PS=Only'. If you use the -n switch, the default glyph names built into ttf2tfm are replaced with the PS glyph names found in the font. In many cases this is not what you want, because the glyph names in the font are often incorrect or non-standard. The corresponding option in a map file is 'PS=Yes'. Single replacement glyph names specified with -r must be given directly as 'old-glyphname new-glyphname' in a map file; -R is equivalent to the 'Replacement' option.

## input and output encodings

You must specify the encoding vectors from the TrueType font to the raw TeX font, and from the raw TeX font to the virtual TeX font, exactly as with afm2tfm, but you have more possibilities for addressing the character codes. [By 'encoding vector' a mapping table with 256 entries in the form of a PostScript vector is meant; see the file T1-WGL4.enc of this package for an example.] With afm2tfm, you must access each glyph by its Adobe glyph name, e.g. '/quotedsingle' or '/Acircumflex'.
This has been extended with ttf2tfm; now you can (and sometimes must) access the code points and/or glyphs directly, using the following syntax for specifying the character position in decimal, octal, or hexadecimal notation: ’/.c<decimal-number>’, ’/.c0<octal-number>’, or ’/.c0x<hexadecimal-number>’. Examples: ’/.c72’, ’/.c0646’, ’/.c0x48’. To access a glyph index directly, use the character ’g’ instead of ’c’ in the just introduced notation. Example: ’/.g0x32’. [Note: The ’.cXXX’ notation makes no sense if -N is used.] For pid/eid pairs (1,0) and (3,1), both ttf2tfm and ttf2pk recognize built-in default Adobe glyph names; the former follows the names given in Appendix E of the book ’Inside Macintosh’, volume 6, the latter uses the names given in the TrueType Specification (WGL4, a Unicode subset). Note that Adobe names for a given glyph are often not unique and do sometimes differ, e.g., many PS fonts have the glyph ’mu’, whereas this glyph is called ’mu1’ in the WGL4 character set to distinguish it from the real Greek letter mu. Be also aware that OpenType (i.e. TrueType 2.0) fonts use an updated WGL4 table; we use the data from the latest published TrueType specification (1.66). You can find those mapping tables in the source code file ttfenc.c. On the other hand, the switches -n and -N makes ttf2tfm read in and use the PostScript names in the TrueType font itself (stored in the ’post’ table) instead of the default Adobe glyph names. Use the -r switch to remap single glyph names and -R to specify a file containing replacement glyph name pairs. If you don’t select an input encoding, the first 256 glyphs of the TrueType font with a valid entry in the selected cmap will be mapped to the TeX raw font (without the -q option, ttf2tfm prints this mapping table to standard output), followed by all glyphs not yet addressed in the selected cmap. 
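As an aside, the '/.cXXX' and '/.gXXX' position syntax described above is easy to machine-parse; here is a small illustrative parser (a hypothetical helper written for this page, not code from ttf2tfm itself):

```python
def parse_glyph_ref(name):
    """Parse ttf2tfm glyph references such as '/.c72', '/.c0646',
    '/.c0x48', or '/.g0x32'.

    Returns (kind, value): kind is 'code' for character codes ('c'),
    'index' for glyph indices ('g'), or 'name' for ordinary glyph
    names, which pass through unchanged (without the leading slash).
    """
    if name.startswith(('/.c', '/.g')):
        kind = 'code' if name[2] == 'c' else 'index'
        digits = name[3:]
        if digits[:2].lower() == '0x':                    # hexadecimal: /.c0x48
            return kind, int(digits, 16)
        if digits.startswith('0') and len(digits) > 1:    # octal: /.c0646
            return kind, int(digits, 8)
        return kind, int(digits)                          # decimal: /.c72
    return 'name', name.lstrip('/')
```

For example, '/.c72' and '/.c0x48' both denote character code 72, while '/.g0x32' denotes glyph index 50.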
However, some code points for the (1,0) pid/eid pair are omitted since they do not represent glyphs useful for TeX: 0x00 (null), 0x08 (backspace), 0x09 (horizontal tabulation), 0x0d (carriage return), and 0x1d (group separator). The 'invalid character' with glyph index 0 will be omitted too. If you select the -N switch, the first 256 glyphs of the TrueType font with a valid PostScript name will be used in case no input encoding is specified. Again, some glyphs are omitted: '.notdef', '.null', and 'nonmarkingreturn'. If you don't select an output encoding, ttf2tfm uses the same mapping table as afm2tfm would use (you can find it in the source code file texenc.c); it corresponds to TeX typewriter text. Unused positions (either caused by empty code points in the mapping table or missing glyphs in the TrueType font) will be filled (rather arbitrarily) with characters present in the input encoding but not specified in the output encoding (without the -q option ttf2tfm prints the final output encoding to standard output). Use the -u option if you want only glyphs in the virtual font which are defined in the output encoding file, and nothing more. One feature missing in afm2tfm has been added which is needed by LaTeX's T1 encoding: ttf2tfm will construct the glyph 'Germandbls' (by simply concatenating two 'S' glyphs) even for normal fonts if possible. It appears in the glyph list as the last item, marked with an asterisk. Since this isn't a real glyph it will be available only in the virtual font. For both input and output encoding, an empty code position is represented by the glyph name '/.notdef'. In encoding files, you can use '\' as the final character of a line to indicate that the input is continued on the next line. The backslash and the following newline character will be removed.

## parameters

Most of the command-line switch names are the same as in afm2tfm for convenience.
One or more space characters between an option and its value are mandatory; options cannot be concatenated. For historical reasons, the first parameter cannot be a switch but must be the font name.

- `-c caps-height-factor`: The height of small caps made with the `-V` switch. The default value of this real number is 0.8 times the height of uppercase glyphs. Will be ignored in subfont mode.
- `-e extension-factor`: The extension factor to stretch the characters horizontally. The default value of this real number is 1.0; if it is less than 1.0, you get a condensed font.
- `-E encoding-id`: The TrueType encoding ID. The default value of this non-negative integer is 1. Will be ignored if `-N` is used.
- `-f font-index`: The font index in a TrueType Collection. The default is the first font (index 0). [TrueType Collections are usually found in some CJK fonts; e.g. the first font index specifies glyphs and metrics for horizontal writing, and the second font index does the same for vertical writing. TrueType Collections usually have the extension '.ttc'.] Will be ignored for ordinary TrueType fonts.
- `-l`: Create ligatures in subfonts between the first and second bytes of all the original character codes. Example: character code 0xABCD maps to character position 123 in subfont 45; then a ligature in subfont 45 between positions 0xAB and 0xCD pointing to character 123 will be produced. The fonts of the Korean HLaTeX package use this feature. Note that this option generates correct ligatures only for TrueType fonts where the input cmap is identical to the output encoding; in the case of HLaTeX, TTFs must have platform ID 3 and encoding ID 5. Will be ignored if not in subfont mode.
- `-L ligature-file`: Same as `-l`, but the character codes for ligatures are specified in ligature-file. For example, `-L KS-HLaTeX` generates correct ligatures for the Korean HLaTeX package regardless of the platform and encoding ID of the TrueType font used (the file KS-HLaTeX.sfd is part of the ttf2pk package). Ligature files have the same format and extension as SFD files. Will be ignored if not in subfont mode.
- `-n`: Use the PS names (of glyphs) of the TrueType font. Only glyphs with a valid entry in the selected cmap are used. Will be ignored in subfont mode.
- `-N`: Use only the PS names of the TrueType font. No cmap is used, thus the switches `-E` and `-P` have no effect and cause a warning message. Will be ignored in subfont mode.
- `-O`: Use octal values for all character codes in the VPL file rather than names; this is useful for symbol or CJK fonts where character names such as 'A' are meaningless.
- `-p inencfile`: The input encoding file name for the TTF→raw TeX mapping. This parameter has to be specified in a map file (default: ttfonts.map) recorded in ttf2pk.cfg for successive ttf2pk calls. Will be ignored in subfont mode.
- `-P platform-id`: The TrueType platform ID. The default value of this non-negative integer is 3. Will be ignored if `-N` is used.
- `-q`: Make ttf2tfm quiet; suppresses any informational output except warning and error messages. For CJK fonts, the output can get quite large if you don't specify this switch.
- `-r old-glyphname new-glyphname`: Replaces old-glyphname with new-glyphname. This switch is useful if you want to give an unnamed glyph (i.e., a glyph which can only be represented as '.gXXX' or '.cXXX') a name, or if you want to rename an existing glyph name. You can't use the '.gXXX' or '.cXXX' constructs for new-glyphname; multiple occurrences of `-r` are possible. Ignored in subfont mode or if no encoding file is specified.
- `-R replacement-file`: Use this switch if you have many replacement pairs; they can be collected in a file which should have '.rpl' as its extension. The syntax of such replacement files is simple: each non-empty line must contain a pair 'old-glyphname new-glyphname' separated by whitespace (without the quotation marks). A percent sign starts a line comment; you can continue a line on the next line with a backslash as the last character. Ignored in subfont mode or if no encoding file is specified.
- `-s slant-factor`: The obliqueness factor to slant the font, usually much smaller than 1. The default of this real number is 0.0; if the value is larger than zero, the characters slope to the right, otherwise to the left.
- `-t outencfile`: The output encoding file name for the virtual font(s). Only characters in the raw TeX font are used. Will be ignored in subfont mode.
- `-T inoutencfile`: Equivalent to `-p inoutencfile -t inoutencfile`. Will be ignored in subfont mode.
- `-u`: Use only those characters specified in the output encoding, and no others. By default, ttf2tfm tries to include all characters in the virtual font, even those not present in the encoding for the virtual font (it puts them into otherwise-unused positions, rather arbitrarily). Will be ignored in subfont mode.
- `-v vplfile`: Output a VPL file in addition to the TFM file. If no output encoding file is specified, ttf2tfm uses a default font encoding (cmtt10). Note: be careful to use different names for the virtual font and the raw font! Will be ignored in subfont mode.
- `-V scvplfile`: Same as `-v`, but the virtual font generated is a pseudo small-caps font obtained by scaling uppercase letters by 0.8 (resp. the value specified with `-c`) to typeset lowercase. This font handles accented letters and retains proper kerning. Will be ignored in subfont mode.
- `-w`: Generate PostScript encoding vectors containing glyph indices, primarily used to embed TrueType fonts in pdfTeX. ttf2tfm takes the TFM names and replaces the suffix with .enc; that is, for files foo01.tfm, foo02.tfm, ... it creates foo01.enc, foo02.enc, ... in the same place. Will be ignored if not in subfont mode.
- `-x`: Rotate all glyphs by 90 degrees counter-clockwise. If no `-y` parameter is given, the rotated glyphs are shifted down vertically by 0.25em. Will be ignored if not in subfont mode.
- `-y vertical-shift-factor`: Shift rotated glyphs down by the given amount (the unit is em). Ignored if not in subfont mode or if glyphs are not rotated.
- `--version`: Shows the current version of ttf2tfm and the file search library used (e.g. kpathsea).
- `--help`: Shows usage information.

If no TFM file name is given, the name of the TTF file is used, including the full path and replacing the extension with '.tfm'.

## problems

Many vptovf implementations allow only 100 bytes for the TFM header (the limit is 1024 in the TFM file format itself): 8 bytes for checksum and design size, 40 bytes for the family name, 20 bytes for the encoding, and 4 bytes for a face byte. That leaves only 28 bytes for additional information, which ttf2tfm uses for an identification string (essentially a copy of the command line), and this limit is always exceeded. The optimal solution is to increase the value of max_header_bytes in the file vptovf.web (and probably pltotf.web too) to, say, 400 and recompile vptovf (and pltotf). Otherwise you'll get some (harmless) error messages like

    This HEADER index is too big for my present table size

which can be safely ignored.

## return value

ttf2tfm returns 0 on success and 1 on error; warning and error messages are written to standard error.

## some notes on file searching

Both ttf2pk and ttf2tfm use either the kpathsea, emtexdir, or MiKTeX library for searching files (emtexdir works only on operating systems with an MS-DOSish background, i.e. MS-DOS, OS/2, Windows; MiKTeX is specific to MS Windows). As a last resort, both programs can be compiled without a search library; the searched files must then be in the current directory or specified with a path. Default extensions will be appended as well (with the exception that only '.ttf' is appended, not '.ttc').
kpathsea

Please note that older versions of kpathsea (<3.2) have no special means to search for TrueType fonts and related files, thus we use the paths for PostScript-related stuff. The actual version of kpathsea is displayed on screen if you call either ttf2pk or ttf2tfm with the --version command line switch. Here is a table of the file types and the corresponding kpathsea variables; TTF2PKINPUTS and TTF2TFMINPUTS are program-specific environment variables introduced in kpathsea version 3.2:

| file type | kpathsea variable(s) |
|---|---|
| .ttf and .ttc | TTFONTS |
| ttf2pk.cfg | TTF2PKINPUTS |
| .map | TTF2PKINPUTS |
| .enc | TTF2PKINPUTS, TTF2TFMINPUTS |
| .rpl | TTF2PKINPUTS, TTF2TFMINPUTS |
| .tfm | TFMFONTS |
| .sfd | TTF2PKINPUTS, TTF2TFMINPUTS |

And here the same for pre-3.2 versions of kpathsea:

| file type | kpathsea variable |
|---|---|
| .ttf and .ttc | T1FONTS |
| ttf2pk.cfg | TEXCONFIG |
| .map | TEXCONFIG |
| .tfm | TFMFONTS |

Finally, the same for pre-3.0 versions (as used e.g. in teTeX 0.4):

| file type | kpathsea variable |
|---|---|
| ttf2pk.cfg | TEXCONFIG |
| .map | TEXCONFIG |
| .tfm | TFMFONTS |

Please consult the info files of kpathsea for details on these variables. The decision whether to use the old or the new scheme is made during compilation. You should set the TEXMFCNF variable to the directory where your texmf.cnf configuration file resides. Here is the proper command to find out which value a kpathsea variable is set to (we use TTFONTS as an example). This is especially useful if a variable isn't set in texmf.cnf or in the environment, thus pointing to the default value which is hard-coded into the kpathsea library:

    kpsewhich -progname=ttf2tfm -expand-var='$TTFONTS'

We select the program name also, since it is possible to specify variables which are searched only for a certain program; in our example it would be TTFONTS.ttf2tfm. A similar but not identical method is to say

    kpsewhich -progname=ttf2tfm -show-path='truetype fonts'

[A full list of format types can be obtained by saying 'kpsewhich --help' at the command line prompt.]
This is exactly how ttf2tfm (and ttf2pk) searches for files; the disadvantage is that all variables are expanded, which can cause very long strings.

emtexdir

Here is the list of suffixes and their related environment variables, to be set in autoexec.bat (resp. in config.sys for OS/2):

| file type | environment variable |
|---|---|
| .ttf and .ttc | TTFONTS |
| ttf2pk.cfg | TTFCFG |
| .map | TTFCFG |
| .enc | TTFCFG |
| .rpl | TTFCFG |
| .tfm | TEXTFM |
| .sfd | TTFCFG |

If one of the variables isn't set, a warning message is emitted. The current directory will always be searched. As usual, one exclamation mark appended to a directory path causes subdirectories one level deep to be searched; two exclamation marks cause all subdirectories to be searched. Example:

    TTFONTS=c:\fonts\truetype!!;d:\myfonts\truetype!

Constructions like 'c:\fonts!!\truetype' aren't possible.

MiKTeX

Both ttf2tfm and ttf2pk have been fully integrated into MiKTeX. Please refer to the documentation of MiKTeX for more details on file searching.

## subfont definition files

CJKV (Chinese/Japanese/Korean/old Vietnamese) fonts usually contain several thousand glyphs; to use them with TeX it is necessary to split such large fonts into subfonts. Subfont definition files (usually having the extension '.sfd') are a simple means to do this smoothly. A subfont file name usually consists of a prefix, a subfont infix, and a postfix (which is empty in most cases), e.g.

    ntukai23 → prefix: ntukai, infix: 23, postfix: (empty)

Here is the syntax of a line in an SFD file, describing one subfont:

    <whitespace> <infix> <whitespace> <ranges> <whitespace>

    <infix>      := anything except whitespace (it is best to use only
                    alphanumerical characters)
    <whitespace> := space, formfeed, carriage return, horizontal and
                    vertical tabs -- no newline characters
    <ranges>     := <ranges> <whitespace> <codepoint>
                  | <ranges> <whitespace> <range>
                  | <ranges> <whitespace> <offset> <whitespace> <range>
    <codepoint>  := <number>
    <range>      := <number> '_' <number>
    <offset>     := <number> ':'
    <number>     := hexadecimal (prefix '0x'), decimal, or octal (prefix '0')

A line can be continued on the next line with a backslash ending the line. The ranges must not overlap; offsets have to be in the range 0-255. Example: the line

    03 10: 0x2349 0x2345_0x2347

assigns to the code positions 10, 11, 12, and 13 of the subfont having the infix '03' the character codes 0x2349, 0x2345, 0x2346, and 0x2347, respectively.

The SFD files in the distribution are customized for the CJK package for LaTeX. You have to embed the SFD file name into the TFM font name (at the place where the infix will appear), surrounded by two '@' signs, on the command line resp. in a map file; both ttf2tfm and ttf2pk then switch to subfont mode. It is possible to use more than a single SFD file by separating them with commas and no whitespace; for a given subfont, the first file is scanned for an entry, then the next file, and so on. Later entries override entries found earlier (possibly only partially). For example, the first SFD file sets up the range 0x10-0xA0, and the next one modifies entries 0x12 and 0x25. As can easily be seen, this algorithm allows for adding and replacing, but not for removing, entries.

Subfont mode disables the options -n, -N, -p, -r, -R, -t, -T, -u, -v, -V and -w for ttf2tfm; similarly, no 'Encoding' or 'Replacement' parameter is allowed in a map file. Single replacement glyph names are ignored too. ttf2tfm will create all subfont TFM files specified in the SFD files (provided the subfont contains glyphs) in one run. Example: the call

    ttf2tfm ntukai.ttf ntukai@Big5,Big5-supp@

will use Big5.sfd and Big5-supp.sfd, producing all subfont files ntukai01.tfm, ntukai02.tfm, etc.
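The SFD line grammar and the worked example above can be checked with a tiny parser (an illustrative helper, not code shipped with ttf2pk; continuation backslashes are assumed to have been joined already):

```python
def parse_sfd_line(line):
    """Parse one SFD line into (infix, {code_position: character_code}).

    Supports the three range forms from the grammar: a single code point,
    a 'lo_hi' range (inclusive), and an 'offset:' token that moves the
    next code position. Numbers may be decimal, octal (leading 0), or
    hexadecimal (0x prefix).
    """
    def num(tok):
        if tok.lower().startswith('0x'):
            return int(tok, 16)
        if tok.startswith('0') and len(tok) > 1:
            return int(tok, 8)
        return int(tok)

    infix, *tokens = line.split()
    mapping, pos = {}, 0
    for tok in tokens:
        if tok.endswith(':'):            # <offset>: sets the next code position
            pos = num(tok[:-1])
        elif '_' in tok:                 # <range>: lo_hi, inclusive
            lo, hi = (num(t) for t in tok.split('_'))
            for code in range(lo, hi + 1):
                mapping[pos] = code
                pos += 1
        else:                            # single <codepoint>
            mapping[pos] = num(tok)
            pos += 1
    return infix, mapping

# The example line from the text: positions 10-13 of subfont '03'.
infix, mapping = parse_sfd_line('03 10: 0x2349 0x2345_0x2347')
```

Running it on the example line yields subfont '03' with positions 10, 11, 12, and 13 mapped to 0x2349, 0x2345, 0x2346, and 0x2347, matching the description above.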
## see also

ttf2pk, afm2tfm, vptovf, and the info pages for dvips and kpathsea.

## authors

Werner LEMBERG <wl[:at:]gnu[:dot:]org>

Frédéric LOYER <loyer[:at:]ensta[:dot:]fr>
https://s.awa.fm/track/9c9fafd50432896d0927
# Hold My Hand

95,148 | 1,124 | 2022.05.03 | 3:45

## Lyrics

To tell me you need me
Hold my hand, everything will be okay
I heard from the heavens that clouds have been grey
Pull me close, wrap me in your aching arms
I see that you're hurtin', why'd you take so long
To tell me you need me?
I see that you're bleeding
You don't need to show me again
But if you decide to, I'll ride in this life with you
I won't let go 'til the end
So cry tonight
But don't you let go of my hand
You can cry every last tear
I won't leave 'til I understand
Promise me, just hold my hand
Raise your head, look into my wishful eyes
That fear that's inside you will lift, give it time
I can see everything you're blind to now
Your prayers will be answered, let God whisper how
To tell me you need me, I see that you're bleeding
You don't need to show me again
But if you decide to, I'll ride in this life with you
I won't let go 'til the end
So cry tonight
But don't you let go of my hand
You can cry every last tear
I won't leave 'til I understand
Promise you'll just hold my hand
Hold my hand, hold my-
Hold my hand, my hand
I'll be right here, hold my hand
Hold my hand, hold my-
Hold my hand, my hand
I'll be right here, hold my hand
I know you're scared and your pain is imperfect
But don't you give up on yourself
I've heard a story, a girl, she once told me
That I would be happy again
Hold my hand
Hold my hand
Hold my hand, hold my hand
Hold my hand, hold my hand
Hold my hand
I heard from the heavens

## Tracks on this album

1. Hold My Hand
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9196541905403137, "perplexity": 2410.3106125716563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00491.warc.gz"}
https://www.ann-geophys.net/38/331/2020/
Ann. Geophys., 38, 331–345, 2020
https://doi.org/10.5194/angeo-38-331-2020

Regular paper | 16 Mar 2020

# Global total electron content prediction performance assessment of the IRI-2016 model based on empirical orthogonal function decomposition

Shuhui Li1,2, Jiajia Xu1, Houxiang Zhou1, Jinglei Zhang1, Zeyuan Xu1, and Mingqiang Xie1

• 1School of Land Science and Technology, China University of Geosciences (Beijing), Beijing 100083, China
• 2Shanxi Key Laboratory of Resources, Environment and Disaster Monitoring, Jinzhong 221116, China

Correspondence: Shuhui Li (li.shuhui@163.com)

Abstract

In this study, the empirical orthogonal function (EOF) decomposition technique was utilized to analyze the similarities and differences of the spatiotemporal characteristics between the total electron content (TEC) of the International GNSS Service (IGS) global ionospheric map (GIM) and that derived from the International Reference Ionosphere 2016 (IRI-2016) model in 2013. Results showed that the main spatial patterns and time-varying features of the two data sets have good consistency. The following four main spatiotemporal variation features can be extracted from both data sets through EOF decomposition: the variation with geomagnetic latitude reflecting the daily averaged solar forcing, the diurnal and semidiurnal periodic changes with longitude due to local time, and the interhemispheric asymmetry caused by the annual variation of the inclination angle of the Earth's orbit. The differences between the spatial patterns represented by the EOF base functions of the IRI-2016 and GIM TECs were analyzed by extracting the same time-varying coefficients.
The deviations of the interhemispheric asymmetry component between the two data sets showed roughly equal values throughout the Southern or Northern Hemisphere, whereas those of the other spatial modes were mainly concentrated in the equatorial region. The differences of the time-varying characteristics between the IRI-2016 and GIM TECs were also compared by extracting the same EOF base functions. Although the EOF coefficients of the two data sets presented consistent seasonal variations, the magnitude of the IRI-2016 TEC changes over time was less than that of the GIM TEC. The diurnal variation of the daily averaged solar forcing component and the annual variation of the interhemispheric asymmetry component exhibited relatively large deviations between the two data sets. Considering the variance contributions of the different EOF components and their average relative deviations, both analyses showed that the daily averaged solar forcing and interhemispheric asymmetry components were the main factors in the deviation between the IRI-2016 and GIM TECs.

1 Introduction

The ionosphere is a shell of electrons and electrically charged atoms and molecules that surrounds the Earth and stretches from a height of approximately 60 km to more than 1000 km. The variations in the ionosphere should be accurately measured, modeled, or estimated because the ionosphere critically affects high-frequency satellite communication and navigation system signals. Total electron content (TEC), the integrated number of free electrons along the path the signal travels, is a critical quantity that describes the ionosphere and its variability. Modeling and predicting temporal and spatial variations in ionospheric TEC are crucial to ionospheric physics research and ionospheric-based applications (Yao et al., 2018).
Many attempts have been made to specify ionospheric parameters using empirical approaches, because an empirical model can describe the general condition of the ionosphere without actual measured data (Feltens et al., 2011). Several empirical ionosphere models, such as Klobuchar, NeQuick, the Standard Plasmasphere Ionosphere Model (SPIM), and the International Reference Ionosphere (IRI; Bilitza, 2001), are currently available. The IRI is one of the most widely accepted standard global empirical ionosphere models. It can be used to estimate electron density and temperature, ion temperature and composition, and TEC at altitudes ranging from approximately 50 to 2000 km at a particular location, time, and day. The IRI model is continuously improved as new data and techniques become available and was recently upgraded to the IRI-2016 version (Bilitza et al., 2017). The model has been improved by ingesting all available data from worldwide ground-based and satellite observations to enhance its capacity. IRI-2016 includes two new model options for the F2 peak height hmF2 and an enhanced representation of topside ion densities at low and high solar activities. Several small changes were made concerning the use of solar indices and the speedup of the computer program (Bilitza et al., 2017). The performance of previous versions of the IRI model in predicting TEC has been investigated to improve the model effectively and provide a reference for applications (Maltseva et al., 2012; Scidá et al., 2012; Kenpankho et al., 2013; Okoh et al., 2013; Zakharenkova et al., 2015; Li et al., 2016). Comparative studies with GNSS-derived TEC have validated the performance of different IRI versions over years of varied solar activity in diverse regions. Although the diurnal variation of TEC is generally well predicted, deficiencies vary with local time (LT), season, and latitude.
After the release of IRI-2016 as the latest version, its TEC prediction performance has attracted the attention of many researchers (Atici, 2018; Sharma et al., 2018; Tariku, 2018; Jiang et al., 2019). Most existing studies of ionospheric models have focused on the low and middle latitudes, and studies on the TEC prediction performance of different IRI versions worldwide are relatively sparse. Most comparative studies are based on the contrast between TEC derived from the IRI model and that derived from the global ionospheric map (GIM) or GNSS. The diurnal and seasonal variations and those in years of different solar activity at particular sites have been investigated through several indicators, such as bias, root mean square (RMS) error, and correlation coefficients. Although several assessments of the IRI models have been conducted, few studies comprehensively evaluate the model's prediction of the temporal and spatial distribution of TEC. The predictive performance of the IRI model for ionospheric temporal and spatial changes should therefore be evaluated using efficient analytical methods. Many scholars have recently used the empirical orthogonal function (EOF) decomposition method to analyze the spatial patterns and temporal variations of the TEC and their relationships with influencing factors (Zhao et al., 2005; Mao et al., 2008; A et al., 2011; Zhang et al., 2011, 2013; Bouya et al., 2012; Uwamahoro and Habarulema, 2015; Talaat and Zhu, 2016; Dabbakuti and Ratnam, 2016, 2017; Chang et al., 2017; Andima et al., 2019; Li et al., 2019). The spatial patterns and temporal variations of the TEC are separated by EOF decomposition and can be properly represented by the base functions and associated coefficients, respectively. Data analysis results for single stations and for regional or global TEC indicate that the EOF method is a potentially useful tool for data compression and for the separation of different physical processes.
The EOF method contributes to the comprehensive analysis of the overall spatiotemporal variations in ionospheric TEC. In this work, GIM TEC data in 2013 were selected as reference values, and the EOF method was introduced to analyze the global TEC prediction performance of IRI-2016. A comparison between the modeled TEC and the reference values was conducted from the perspective of spatial patterns and time variation characteristics. Results provide a reference for the further understanding of the differences between the IRI-2016 and GIM TECs at a global scale.

2 Data and method

## 2.1 GIM TEC

The GIM TEC used in this study is the official IGS combined final product provided by the Crustal Dynamic Data Information System (ftp://cddis.gsfc.nasa.gov, last access: 10 April 2019). Final GIMs have been regular products of the International GNSS Service (IGS) since 1998. These GIMs are provided in the IONosphere map EXchange (IONEX) format with a spatial resolution of 2.5° × 5° in geographic latitude and longitude and a temporal resolution of 2 h. In this study, we downloaded and extracted the 2013 global TEC data from GIMs (referred to as GIM-TEC hereafter).

## 2.2 IRI-2016

The IRI is the international standard empirical model for the terrestrial ionosphere and is recommended for international use by the Committee On Space Research and the International Union of Radio Science (Bilitza, 2001; Bilitza and Reinisch, 2008; Chauhan and Singh, 2010). The first version was released in 1978, followed by several steadily improved ones in 1986, 1990, 1995, and 2012 (Rawer et al., 1978; Bilitza, 2015). The most recent version of this model is IRI-2016 (Bilitza et al., 2016, 2017). Relative to IRI-2012, IRI-2016 introduces two new F2 peak height hmF2 modeling options, with their data sources from ionosonde measurements (Altadill et al., 2013) and COSMIC radio occultations (Shubin, 2015).
Hence, this version is independent of the propagation factor M(3000)F2 (Bilitza et al., 2017). The software package of IRI-2016 can be downloaded from http://irimodel.org/ (last access: 12 March 2019). The package contains FORTRAN subroutines, model coefficients, index files for the IRI-2016 models, README files, and license files. The user can calculate relevant parameters by inputting location, time, height range, model selection, and certain parameters. The global TEC data calculated by using IRI-2016 will be called IRI-TEC hereafter. IRI-TEC can also be calculated online at https://ccmc.gsfc.nasa.gov/modelweb/models/iri2016_vitmo.php (last access: 1 March 2020).

## 2.3 EOF decomposition

The EOF decomposition analysis method was originally invented by Pearson (1901). This method uses an orthogonal transformation to decompose the original data set into a set of uncorrelated and ordered base functions and associated coefficients. For an original data matrix X with dimension M×N, the covariance matrix is determined from X as

$$\Sigma = \mathbf{X}^{\mathrm{T}}\mathbf{X}. \tag{1}$$

The EOF base functions Ei, with i = 1, 2, 3, …, N, are the eigenvectors of the covariance matrix and are obtained by solving

$$\Sigma E_i = \lambda_i E_i, \tag{2}$$

where λi are the associated eigenvalues.
Once the EOF base functions are known, the EOF coefficients Ak are obtained using

$$A_k = \mathbf{X} E_k. \tag{3}$$

The original data set X can be decomposed in terms of the EOF base functions and associated coefficients in accordance with

$$\mathbf{X} = \sum_{k=1}^{N} E_k A_k. \tag{4}$$

The percentage of the total variance in the data set accounted for by the ith EOF component is given as

$$r_i = 100 \times \frac{\lambda_i}{\sum_{j=1}^{N} \lambda_j}\,\%, \tag{5}$$

where N denotes the total number of the EOF components accounting for the total variance in the original data set. Talaat and Zhu (2016) reported that the effectiveness of the EOF technique for TEC is nearly insensitive to the horizontal resolution and length of the data records. We analyzed the global TEC over a 1-year period (2013) with a 2 h temporal resolution and 37 × 36 spatial grids. We first organized the data set TEC(Lat, Lon, UT, DOY) into a 2D matrix according to location and time epoch, that is, TEC(epoch, grid), where grid is a grid point arranged according to latitude and longitude, with a total number of 37 × 36 = 1332; epoch is arranged according to Universal Time (UT), with an interval of 2 h. The total epoch number of the study period was 12 × 365 = 4380. After performing EOF decomposition, the base function Ek(grid) expressing a spatial pattern and the associated coefficient Ak(epoch) varying with time are obtained. The EOF method can thus separate the temporal and spatial variation characteristics. If the IRI TEC and GIM TEC are decomposed separately, it is difficult to directly compare their EOF base functions and coefficients in magnitude.
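For readers who want to reproduce the method, the decomposition in Eqs. (1)–(5) reduces to a few lines of linear algebra. The NumPy sketch below uses a small random matrix as a stand-in for the 4380 × 1332 TEC matrix of this study; the sizes and variable names are illustrative only.

```python
import numpy as np

# Stand-in for the TEC matrix X (epoch x grid); the paper uses
# 4380 epochs x 1332 grid points, shrunk here so the sketch runs quickly.
rng = np.random.default_rng(0)
n_epoch, n_grid = 120, 60
X = rng.normal(size=(n_epoch, n_grid))

# Eq. (1): covariance matrix Sigma = X^T X
sigma = X.T @ X

# Eq. (2): base functions E_i are the eigenvectors of Sigma
eigvals, E = np.linalg.eigh(sigma)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]    # reorder so component 1 is largest
eigvals, E = eigvals[order], E[:, order]

# Eq. (3): associated coefficients A_k = X E_k (all k at once)
A = X @ E

# Eq. (4): the base functions and coefficients reconstruct X exactly
assert np.allclose(X, A @ E.T)

# Eq. (5): percentage of total variance explained by each component
r = 100.0 * eigvals / eigvals.sum()
print(r[:6])  # variance fractions of the first six EOF components
```

The same code applies unchanged at the full data size; only the construction of `X` from the gridded TEC maps differs.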
Therefore, we combined the data to form a whole data set for EOF decomposition and compared the two data sets. The same coefficients of the EOF base functions, that is, the same time-varying features, can be obtained by arranging IRI-TEC and GIM-TEC according to the same number of columns. Accordingly, comparing the spatial variation features of the two data sets represented by the base functions is feasible:

$$\begin{bmatrix} X_{\mathrm{GIM}} \\ X_{\mathrm{IRI}} \end{bmatrix} = \sum_{k=1}^{N} \begin{bmatrix} E_{k,\mathrm{GIM}} \\ E_{k,\mathrm{IRI}} \end{bmatrix} \cdot A_k = \begin{bmatrix} \sum_{k=1}^{N} E_{k,\mathrm{GIM}} \cdot A_k \\ \sum_{k=1}^{N} E_{k,\mathrm{IRI}} \cdot A_k \end{bmatrix}. \tag{6}$$

If IRI-TEC and GIM-TEC are arranged in the same number of rows, then the same spatial variation features represented by the EOF base functions are obtained. Accordingly, the time variation characteristics of the two data sets can be compared:

$$\left[ X_{\mathrm{GIM}} \;\; X_{\mathrm{IRI}} \right] = \sum_{k=1}^{N} E_k \cdot \left[ A_{k,\mathrm{GIM}} \;\; A_{k,\mathrm{IRI}} \right] = \left[ \sum_{k=1}^{N} E_k \cdot A_{k,\mathrm{GIM}} \;\; \sum_{k=1}^{N} E_k \cdot A_{k,\mathrm{IRI}} \right]. \tag{7}$$

## 2.4 Evaluation indicators

In this study, the mean bias was calculated to represent the difference between two data sets.
The equation is shown as follows:

$$\mathrm{Bias} = \frac{1}{n}\sum_{i=1}^{n}\left(Y_i - Y_i^{\prime}\right), \tag{8}$$

where n is the total number of sample data, and Yi and Yi′ are sample data from the two data sets. These variables can be TEC from IRI-2016 and GIMs or the values of the base functions or their coefficients. The mean relative bias (Bias_rel) can be calculated as

$$\mathrm{Bias\_rel}\,\% = \frac{1}{n}\sum_{i=1}^{n}\frac{Y_i - Y_i^{\prime}}{Y_i^{\prime}} \times 100. \tag{9}$$

The RMS error of the bias can be calculated using the following expression:

$$\mathrm{RMS} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Y_i - Y_i^{\prime}\right)^{2}}. \tag{10}$$

The 2D linear correlation coefficient was used to investigate the similarity of the spatial patterns of IRI-TEC and GIM-TEC.
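A minimal NumPy sketch of these indicators; the sample TEC values below are invented purely for illustration, and the function names are ours, not part of any library.

```python
import numpy as np

def bias(y, y_ref):
    # Eq. (8): mean bias between samples y and reference y_ref
    return np.mean(y - y_ref)

def bias_rel(y, y_ref):
    # Eq. (9): mean relative bias, in percent
    return 100.0 * np.mean((y - y_ref) / y_ref)

def rms(y, y_ref):
    # Eq. (10): root-mean-square error of the differences
    return np.sqrt(np.mean((y - y_ref) ** 2))

def corr2d(a, b):
    # 2-D linear correlation coefficient of two equal-size matrices
    # (the quantity defined in Sect. 2.4 for comparing spatial patterns)
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / np.sqrt(np.sum(a0 ** 2) * np.sum(b0 ** 2))

# Invented TEC samples (TECU) purely for demonstration
tec_iri = np.array([18.0, 22.0, 30.0, 41.0])
tec_gim = np.array([20.0, 25.0, 33.0, 45.0])
print(bias(tec_iri, tec_gim), bias_rel(tec_iri, tec_gim), rms(tec_iri, tec_gim))
```

For gridded maps, `corr2d` takes the two 2-D TEC arrays directly; the mean and sum operations run over all grid cells.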
The 2D linear correlation coefficient ρ for two matrices A and B with M×N dimension is calculated as

$$\rho = \frac{\sum_{m=1}^{M}\sum_{n=1}^{N}\left(A_{mn}-\overline{A}\right)\left(B_{mn}-\overline{B}\right)}{\left[\sum_{m=1}^{M}\sum_{n=1}^{N}\left(A_{mn}-\overline{A}\right)^{2}\right]^{\frac{1}{2}}\left[\sum_{m=1}^{M}\sum_{n=1}^{N}\left(B_{mn}-\overline{B}\right)^{2}\right]^{\frac{1}{2}}}, \tag{11}$$

where $\overline{A}$ and $\overline{B}$ are the mean values of matrices A and B, respectively:

$$\overline{A} = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} A_{mn}, \quad \overline{B} = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N} B_{mn}. \tag{12}$$

3 Results and analysis

## 3.1 GIM-TEC and IRI-TEC in 2013

Figure 1 shows the season averages of global GIM-TEC and IRI-TEC at 12:00 UT in 2013. The months are divided into the following four seasons: March equinox (February, March, and April), June solstice (May, June, and July), September equinox (August, September, and October), and December solstice (November, December, and January). The global level of ionospheric TEC at 12:00 UT is lowest during the June solstice compared with that during other seasons. By contrast, the ionospheric TEC reaches the highest level during the December solstice.

Figure 1. Season averages of global TEC obtained from GIM and IRI at 12:00 UT in 2013.
(a) GIM-TEC in the March equinox; (b) IRI-TEC in the March equinox; (c) GIM-TEC in the June solstice; (d) IRI-TEC in the June solstice; (e) GIM-TEC in the September equinox; (f) IRI-TEC in the September equinox; (g) GIM-TEC in the December solstice; and (h) IRI-TEC in the December solstice.

The figure illustrates that the spatial distribution characteristics of IRI-TEC and GIM-TEC, which change with latitude and longitude, have good consistency. However, the equatorial ionospheric anomaly of IRI-TEC is more pronounced than that of GIM-TEC. The 2D correlation coefficients of the two types of TEC data are shown in Table 1. The correlation coefficients of the four seasons are at least 0.924. Table 1 reveals that the mean biases between the season averages of global IRI-TEC and GIM-TEC at 12:00 UT are all negative. This result indicates that the TEC level predicted by the IRI-2016 model is lower than that of the GIM, a characteristic also visible in Fig. 1. The IRI-2016 model provides ionospheric parameters only up to 2000 km, so its TEC is expected to be lower than the TEC integrated up to the GNSS satellites at an altitude of approximately 20 000 km because of the missing plasmaspheric content. The mean bias and mean relative bias between IRI-TEC and GIM-TEC during the December solstice are larger than those in other seasons. Considering the different levels of ionospheric activity at different latitudes, mean and RMS values of the discrepancies between the seasonal averages of GIM-TEC and IRI-TEC over different latitudinal regions in 2013 were calculated. Results are shown in Fig. 2, in which the mean and RMS values over the area near the Equator generally exhibit peak values. GIM-TEC values over the Equator and low latitudes are much larger than IRI-TEC values, especially over the ionospheric trough near the magnetic Equator shown in Fig. 1.
Because of the strong solar radiation over the equatorial region and the Earth's electric and magnetic fields, the ionosphere there is highly ionized and its changes are complex. There are also anomalies such as the equatorial ionization anomaly (EIA), characterized by two low-latitude ionization crests where plasma densities reach their global maxima (Abdu, 2016). The IRI model has been reported to underestimate the ionospheric TEC at equatorial stations by Shreedevi et al. (2018), whose comparison of IRI-model-derived TEC and GPS TEC showed a wide departure, with ∼60 % deviation. The mean and RMS values over the Southern Hemisphere during the December solstice are significantly large, as are those over the Northern Hemisphere during the June solstice. Therefore, there are large discrepancies between GIM-TEC and IRI-TEC over the summer hemisphere. The large deviation of the ionospheric TEC estimated by the IRI model in the summer hemisphere indicates that the model cannot fully reflect the periodic seasonal variation in the ionosphere. As discussed by Li et al. (2016), the solar activity component and periodic components are supposed to be the main contributors to the difference between the GIM TEC and the TEC from the IRI-2012 model. However, their conclusions are based on single-station time series data. In this article, we further analyze the IRI model for spatiotemporal data.

Figure 2. Mean and RMS values of the discrepancies between GIM-TEC and IRI-TEC at different latitudes during four seasons.

The gridded values of the global IRI-TEC and GIM-TEC at different UTs for each day of the year 2013 were used to calculate the daily RMS value. Results are shown in Fig. 3, which also displays the daily solar F10.7 index and the daily average of the geomagnetic AE index in 2013. The solar F10.7 and geomagnetic AE indexes are available at https://omniweb.gsfc.nasa.gov/form/dx1.html (last access: 21 April 2019).
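The daily RMS series of Fig. 3 and its correlation with an activity index reduce to a few array operations. In the sketch below, random arrays stand in for the gridded TEC and the F10.7 series, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# 2 h maps for one year; grid reduced from the paper's 1332 points for speed
n_days, n_ut, n_grid = 365, 12, 100

# Stand-ins for the gridded IRI and GIM TEC, shape (day, UT, grid point)
tec_iri = rng.normal(20.0, 5.0, size=(n_days, n_ut, n_grid))
tec_gim = tec_iri + rng.normal(0.0, 2.0, size=tec_iri.shape)

# Daily RMS of the IRI-GIM differences over all UTs and grid points
daily_rms = np.sqrt(np.mean((tec_iri - tec_gim) ** 2, axis=(1, 2)))

# Correlation of the daily RMS with a daily index series (e.g. F10.7)
f107 = rng.normal(120.0, 20.0, size=n_days)  # placeholder index values
corr = np.corrcoef(daily_rms, f107)[0, 1]
print(daily_rms.shape, corr)
```

With the real data, `tec_iri`, `tec_gim`, and `f107` would be loaded from the IRI runs, the IONEX files, and the OMNIWeb index series, respectively.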
Figure 3. Daily (a) RMS of the differences between global IRI-TEC and GIM-TEC, (b) solar F10.7 index, and (c) daily average geomagnetic AE index in 2013.

Figure 3 demonstrates that the daily RMS of the differences between global IRI-TEC and GIM-TEC is in good agreement with the daily solar F10.7 index. The correlation coefficients between the RMS and the solar F10.7 or geomagnetic AE index are 0.78 and −0.19, respectively. Results indicate that the ionospheric TEC prediction error of the IRI-2016 model presents a strong correlation with solar activity.

Table 1. Correlation coefficient and bias statistics among the season averages of global IRI-TEC and GIM-TEC at 12:00 UT in 2013.

## 3.2 Differences of spatial patterns between IRI-TEC and GIM-TEC based on the same time-varying characteristics

We combined the IRI-TEC and GIM-TEC data to obtain the same TEC time-varying characteristics using Eq. (6) and analyzed their differences in terms of spatial patterns. The time-varying characteristics are reflected in the coefficients Ak of the EOF decomposition. Given that the TEC data follow a 2 h time interval, the coefficients Ak are likewise sampled at 2 h intervals. To reflect the seasonal changes effectively, the coefficients of the base functions are plotted against UT and day of year (DOY) in Fig. 4.

Figure 4. Associated coefficients A1–A6 of the first six orders of EOF base functions based on Eq. (6), plotted against UT and DOY.

The main EOF base functions extracted from Eq. (6) are shown in Fig. 5. The graphics in the left column of Fig. 5 exhibit the first six base functions Ei of GIM-TEC, whereas those in the right column depict the base functions of IRI-TEC.

Figure 5. First six orders of EOF base functions E1–E6 extracted on the basis of Eq. (6). The figures in the left column are the base functions of GIM-TEC, and those in the right column are the base functions of IRI-TEC.
The first base function E1 of GIM-TEC and IRI-TEC in Fig. 5a and b describes the overall average of global TEC. This function reflects the daily average effect of solar forcing and the offset magnetic field (Talaat and Zhu, 2016). The TEC over the area near the geomagnetic equator exhibits a peak value, and the TEC value decreases with increasing geomagnetic latitude. The spatial distribution characteristics of E1 of the two models are very consistent. However, the peak GIM-TEC value over the geomagnetic equator is greater than that of IRI-TEC. The ionospheric trough near the geomagnetic equator is evident in Fig. 5b. The daily mean A1 and the solar F10.7 index are illustrated in Fig. 6, which shows that these two data sets demonstrate a consistent trend. The correlation coefficient between the daily mean A1 and the F10.7 index is 0.61. Solar activity is the primary determinant of the first base function E1.

Figure 6. Daily mean first EOF coefficient A1 and daily solar F10.7 index.

Figure 5c–f show that the second and third base functions reflect the spatial distribution that varies in the longitude direction. The two base functions E2 and E3 have approximately the same magnitude and show a phase shift of π/2, which is consistent with the results of Talaat and Zhu (2016). These functions reflect the diurnal change of solar radiation with the LT. This change of GIM-TEC and IRI-TEC is generally consistent; the main difference lies in the peak region near the Equator, where GIM-TEC shows larger peak values. The EOF coefficients A2 and A3 corresponding to Fig. 4b and c show the diurnal variation together with a semiannual periodicity: the levels of A2 and A3 during equinox seasons are larger than those during solstice seasons. The fourth base function E4 reflects interhemispheric asymmetry, which is mainly caused by the seasonal variation of the inclination angle of the Earth's orbit. A4 in Fig. 4d indicates the seasonal variation of the interhemispheric asymmetry of the TEC and a strong annual cycle. The TEC component corresponding to base function E4 in the Southern Hemisphere is positive. In the Northern Hemisphere, the maximum value of the E4 component occurs on DOY 150, whereas that in the Southern Hemisphere occurs on DOY 347. Similar to E2 and E3, the fifth and sixth base functions E5 and E6 also reflect the spatial distribution characteristics along the longitude (Fig. 5i to l). In conjunction with Fig. 4e and f, these two base functions have semidiurnal period changes; their phases differ by π/4, and they are of approximately equal magnitude. Base functions E5 and E6 represent a semidiurnal variation that changes with LT, and their coefficients A5 and A6 show a semiannual period. The intensity of the semidiurnal variation is strong during the equinox seasons and weak during the June solstice. We calculated the variances, correlation coefficients, biases, and relative biases to analyze the spatial distribution characteristics of GIM-TEC and IRI-TEC. The statistical results are shown in Table 2, which indicates that the base functions of the two data sets are correlated and present good consistency, in agreement with Fig. 5.

Table 2. Variances of the base functions, correlation coefficients, and bias statistics among the base functions of GIM-TEC and IRI-TEC.

We show the differences between the six base functions of GIM-TEC and IRI-TEC in Fig. 7 to give an intuitive picture of the difference between the IRI and GIM base functions.

Figure 7. Differences of the first six orders of the base functions of GIM-TEC and IRI-TEC.

Figure 7 shows that, except for the interhemispheric asymmetry feature E4, the differences of the other modes exhibit large deviations in the equatorial and low-latitude regions. The magnitudes of the spatial distribution changes of IRI-TEC for all six base functions are significantly smaller than those of GIM-TEC.
The mean relative bias statistics of the base functions of GIM-TEC and IRI-TEC in Table 2 are negative. This finding indicates that the spatial variations of the base functions of IRI-TEC are generally underestimated compared with those of GIM-TEC. The mean relative bias of E4 reached −56 %, a serious underestimation. This outcome is consistent with the statistical results in Table 1.

## 3.3 Differences of time-varying characteristics between IRI-TEC and GIM-TEC based on the same spatial patterns

Following Eq. (7), the same EOF base functions are extracted for GIM-TEC and IRI-TEC. The corresponding coefficients of the EOF base functions of GIM-TEC and IRI-TEC are then compared, and the differences of their time variation characteristics can be analyzed. Figure 8 shows the six EOF base functions extracted in accordance with Eq. (7). Similar to the EOF base functions extracted in Fig. 5, the first base function is consistent with the average variation of the TEC with geomagnetic latitude. The second and third base functions are related to the diurnal variation of solar radiation with longitude due to the LT. The fourth base function reflects the interhemispheric asymmetry caused by the seasonal variation of the inclination angle of the Earth's orbit. The fifth and sixth base functions reflect the characteristics of the semidiurnal variation with longitude due to the LT.

Figure 8. Six EOF base functions E1–E6 extracted in accordance with Eq. (7).

The coefficients of the different base functions of GIM-TEC and IRI-TEC obtained in accordance with Eq. (7) are shown in Fig. 9. The time-varying characteristics of the coefficients in Fig. 9 are very consistent with the results shown in Fig. 4. From Fig. 9a and b, the variations of A1 are mainly related to solar activity, and solar activity is the primary determinant of the first base function E1 in Fig. 8a, which describes the overall average of global TEC. From Fig. 9c–f, the EOF coefficients A2 and A3 of GIM-TEC and IRI-TEC clearly exhibit a diurnal period and a semiannual period. They reflect the diurnal variation of solar radiation with longitude due to the LT. The A4 values in Fig. 9g and h indicate a strong annual cycle variation of the interhemispheric asymmetry of the TEC. A5 and A6 show a semiannual period of the base functions E5 and E6, which represent a longitudinal variation that changes with LT. The EOF coefficients of GIM-TEC and IRI-TEC have consistent annual, semiannual, diurnal, and semidiurnal variations. Therefore, Fig. 9 shows that GIM-TEC and IRI-TEC have highly consistent temporal variation characteristics based on the same spatial distribution modes Ek according to Eq. (7). The variances and correlation coefficients of A1–A6 of the two types of data and the bias statistics of these coefficients are shown in Table 3.

Figure 9. Associated coefficients A1–A6 of the EOF base functions extracted in accordance with Eq. (7).

Table 3. Variances of the base functions, correlation coefficients, and bias statistics among coefficients A1–A6 of GIM-TEC and IRI-TEC.

The magnitudes of coefficients A1–A6 of IRI-TEC are generally smaller than those of GIM-TEC, especially for A4. The maximum and minimum values of GIM-TEC A4 in Fig. 9g are 302.27 and −431.47, respectively, whereas the variation range of IRI-TEC A4 in Fig. 9h is −138.99 to 165.13. Results in Table 3 indicate that A4 exhibits the largest mean relative bias. Figure 9 shows that A1–A6 reflect time-varying characteristics of different scales. We conducted EOF decomposition on A1–A6 according to the following equation to separate their diurnal and seasonal variation characteristics:

$$A_i(\mathrm{UT}, \mathrm{DOY}) = \sum_{k=1}^{N} E_{ik}(\mathrm{UT}) \times A_{ik}(\mathrm{DOY}), \tag{13}$$

where Ai represents the coefficient of the ith-order EOF base function.
This constitutes the second-layer EOF decomposition in this study. Through Eq. (13), the UT-dependent feature Eik and the seasonal variation Aik can be obtained. According to the percentage variance of the second-layer EOF decomposition, the first EOF component already explains more than 99 % of the total variance of Ai. The first EOF component is therefore the most significant, and we present only the first-order result of the second-layer EOF decomposition in this study. The decomposed first base function Ei1 and associated coefficient Ai1 are shown in Fig. 10.

Figure 10. First base function Ei1 and associated coefficient Ai1 of the six coefficients A1–A6 according to Eq. (13). The monthly smoothed Ai1 of GIM-TEC and the daily solar F10.7 index are also shown.

The left column of Fig. 10 shows base function Ei1, which represents the diurnal variation characteristic of the base function Ei. The coefficients of the second-layer EOF decomposition, Ai1, represent the variations on long timescales and are shown in the right column of Fig. 10. Previous studies have shown that the long timescale variations of TEC are mainly influenced by solar and geomagnetic activity and by periodic variations. The solar F10.7 index is therefore also shown in the right column of Fig. 10 together with Ai1. The first base function E1 in Fig. 8a describes the overall average global TEC, and Fig. 10a shows E11, the diurnal variation characteristic of E1. GIM-TEC and IRI-TEC have similar magnitudes, although the diurnal variation of IRI-TEC is less pronounced. A11 of GIM-TEC and IRI-TEC in Fig. 10b shows a pronounced semiannual period. However, A11 values of GIM-TEC on most days are larger than those of IRI-TEC, and a clear correlation between the F10.7 index and A11 of GIM-TEC is observed. As shown in Fig. 10c, e, and g, the diurnal variations of the second, third, and fourth base functions E2–E4 of GIM-TEC and IRI-TEC show minimal discrepancy. 
Hence, the IRI-2016 model accurately captures the diurnal variations of solar radiation according to LT and the interhemispheric asymmetry. A21 and A31 of GIM-TEC and IRI-TEC are shown in Fig. 10d and f. These coefficients clearly exhibit a semiannual variation period. A21 and A31 of IRI-TEC during the equinox seasons are lower than those of GIM-TEC. A correlation between the F10.7 index and A21 and A31 of GIM-TEC is also observed. A41 of GIM-TEC and of IRI-TEC in Fig. 10h exhibit an evident annual period variation of the interhemispheric asymmetry. However, the summer-to-winter annual variation of GIM-TEC is much larger than that of IRI-TEC. The fifth and sixth base functions E5 and E6 in Fig. 8e and f reflect the spatial distribution characteristics along the longitude due to LT. E51 and E61 in Fig. 10i and k represent a semidiurnal variation. However, shifts in the peak times between GIM-TEC and IRI-TEC are detected in E51 and E61. A51 and A61 in Fig. 10j and l exhibit a semiannual variation, and A51 and A61 of GIM-TEC are relatively consistent with those of IRI-TEC. We calculated the correlation coefficients between Ai1 of GIM-TEC and the solar F10.7 index; the results are shown in Table 4. Coefficients A11, A21, and A31 are highly related to solar activity.

Table 4. Correlation coefficients between Ai1 of GIM-TEC and the solar F10.7 index.

A11–A61 in Fig. 10 show that IRI-TEC mainly reflects the annual and semiannual variations of the ionospheric TEC. The monthly and short-term variations with solar activity are not captured by IRI-TEC. Although IRI-TEC is expected to be smaller than GIM-TEC because of the missing plasmaspheric content, A11 of IRI-TEC in Fig. 10b shows quite a large underestimation compared with that of GIM-TEC. The strong correlation between A11 of GIM-TEC and solar activity is not reproduced by A11 of IRI-TEC. The diurnal variation of the first base function of GIM-TEC, represented by E11, is only partially reproduced by E11 of IRI-TEC. 
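The correlations reported in Table 4 amount to ordinary Pearson coefficients between each Ai1 time series and the daily F10.7 index. A minimal sketch with synthetic stand-in series (all names and numbers hypothetical, chosen only to loosely mimic the GIM-TEC behavior described above):

```python
import numpy as np

# Pearson correlation between a coefficient series and the F10.7 index,
# in the spirit of Table 4. All series here are SYNTHETIC stand-ins.
rng = np.random.default_rng(1)
doy = np.arange(365)

# Stand-in daily F10.7 index with a ~27-day solar-rotation modulation.
f107 = 120.0 + 30.0 * np.sin(2 * np.pi * doy / 27)

# Stand-in A11 series that partly tracks F10.7, plus a semiannual term
# and day-to-day noise.
a11 = 0.8 * f107 + 15.0 * np.cos(4 * np.pi * doy / 365)
a11 += 5.0 * rng.standard_normal(doy.size)

# Pearson correlation coefficient between the two daily series.
r = np.corrcoef(a11, f107)[0, 1]
print(f"r(A11, F10.7) = {r:.3f}")
```

A series that tracks the index only through annual and semiannual terms, as IRI-TEC does, would yield a much weaker coefficient under the same measure.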
The variance contribution rate of the first EOF component reaches 79.03 %; thus, its coefficient contributes substantially to the deviation between IRI-TEC and GIM-TEC.

## 4 Conclusion

In this study, the global TEC prediction performance of the IRI-2016 model was evaluated. The EOF decomposition method was introduced to compare the global TEC data from the IRI-2016 model and GIMs in 2013. The prediction performance of the IRI-2016 model was evaluated from two perspectives, namely, spatial pattern and temporal variation. The main conclusions are as follows: 1. Compared with the seasonal averages of global GIM-TEC in 2013, the IRI-2016 model shows a general underestimation, and the RMS of the global TEC deviation is strongly correlated with the solar activity F10.7 index. 2. The six base functions extracted by performing EOF decomposition on the global TEC data from IRI-2016 and GIMs include the following: the variation with geomagnetic latitude reflecting the daily averaged solar forcing, the diurnal and semidiurnal periodic changes with longitude due to local time, and the interhemispheric asymmetry caused by the annual variation of the inclination angle of the Earth's orbit. The spatiotemporal features extracted from the IRI-TEC and GIM-TEC data are highly consistent, and the IRI-2016 model follows the variation patterns of the observed GIM-TEC. 3. The spatial variation characteristics of IRI-TEC and GIM-TEC can be extracted for comparison on the basis of the same EOF coefficients. Results show that the spatial distribution fluctuation of IRI-TEC is smaller than that of GIM-TEC. The average relative deviation of the base function representing the interhemispheric asymmetry reaches −56.7 %; the interhemispheric asymmetry thus presents a relatively stable deviation between IRI-TEC and GIM-TEC, whereas the other spatial distribution variations have large deviations at the Equator and low latitudes. 4. 
The temporal variation characteristics of IRI-TEC and GIM-TEC are extracted and compared on the basis of the same EOF base functions. The temporal variability of IRI-TEC is weaker than that of GIM-TEC. The average relative deviation of the fourth base function coefficient reaches −52.83 %. Most diurnal, annual, and semiannual variations of the six base functions of IRI-TEC are consistent with those of GIM-TEC. However, the variation with solar activity is not captured by IRI-TEC. The diurnal variation of the first base function and the annual variation of the fourth base function show relatively large deviations between IRI-TEC and GIM-TEC. 5. Results of the spatial and temporal variation characteristic analyses show that the deviations of the first and fourth EOF components between IRI-TEC and GIM-TEC are the two main influencing factors.

Data availability. The data used in this study were downloaded from ftp://cddis.gsfc.nasa.gov/ (CDDIS, 2019), http://irimodel.org/ (COSPAR and URSI, 2019), and https://omniweb.gsfc.nasa.gov/ (CCMC, 2019).

Author contributions. SL contributed to the conception of the study. SL and JX contributed significantly to the data analysis and manuscript preparation. HZ and JZ performed the model validation and wrote part of the manuscript. ZX and MX contributed to some of the data analysis work.

Competing interests. The authors declare that they have no conflict of interest.

Financial support. This research has been supported by the Fundamental Research Funds for the Central Universities (grant no. 2652017105) and the National Natural Science Foundation of China (grant no. 41574011).

Review statement. This paper was edited by Ana G. Elias and reviewed by two anonymous referees.

References

A, E., Zhang, D.-H., Xiao, Z., Hao, Y.-Q., Ridley, A. J., and Moldwin, M.: Modeling ionospheric foF2 by using empirical orthogonal function analysis, Ann. 
Geophys., 29, 1501–1515, https://doi.org/10.5194/angeo-29-1501-2011, 2011. Abdu, M. A.: Electrodynamics of ionospheric weather over low latitudes, Geosci. Lett., 3, 11, https://doi.org/10.1186/s40562-016-0043-6, 2016. Altadill, D., Magdaleno, S., Torta, J. M., and Blanch, E.: Global empirical models of the density peak height and of the equivalent scale height for quiet conditions, Adv. Space Res., 52, 1756–1769, https://doi.org/10.1016/j.asr.2012.11.018, 2013. Andima, G., Amabayo, E. B., Jurua, E., and Cilliers, P. J.: Modeling of GPS total electron content over the African low-latitude region using empirical orthogonal functions, Ann. Geophys., 37, 65–76, https://doi.org/10.5194/angeo-37-65-2019, 2019. Atici, R.: Comparison of GPS TEC with modelled values from IRI 2016 and IRI-PLAS over Istanbul, Turkey, Astrophys. Space Sci., 363, 231, https://doi.org/10.1007/s10509-018-3457-0, 2018. Bilitza, D.: International Reference Ionosphere 2000, Radio Sci., 36, 261–275, https://doi.org/10.1029/2000RS002432, 2001. Bilitza, D.: The International Reference Ionosphere-Status 2013, Adv. Space Res., 55, 1914–1927, https://doi.org/10.1016/j.asr.2014.07.032, 2015. Bilitza, D. and Reinisch, B. W.: International Reference Ionosphere 2007: Improvements and new parameters, Adv. Space Res., 42, 599–609, https://doi.org/10.1016/j.asr.2007.07.048, 2008. Bilitza, D., Watanabe, S., Truhlik, V., and Altadill, D.: IRI-2016: Description and Introduction, 41st COSPAR Scientific Assembly, July, Istanbul, Turkey, 2016. Bilitza, D., Altadill, D., Truhlik, V., Shubin, V., Galkin, I., Reinisch, B., and Huang, X.: International Reference Ionosphere 2016: from ionospheric climate to real-time weather predictions, Space Weather, 15, 418–429, 2017. Bouya, Z., Terkildsen, M., Francis, M., and Neudegg, D.: EOF Analysis applied to Australian Regional Ionospheric TEC modelling, AOSWA, 22–24 February 2012, Chiang Mai, Thailand, 2012. 
CCMC: Solar activity data, available at: https://omniweb.gsfc.nasa.gov/, last access: 21 April 2019. CDDIS: GIM-TEC data, available at: ftp://cddis.gsfc.nasa.gov/, last access: 10 April 2019. COSPAR and URSI: Fortran source code of IRI-2016, available at: http://irimodel.org/, last access: 12 March 2019. Chang, X., Zou, B., Guo, J., Zhu, G., Li, W., and Li, W.: One sliding PCA method to detect ionospheric anomalies before strong Earthquakes: Cases study of Qinghai, Honshu, Hotan and Nepal earthquakes, Adv. Space Res., 59, 2058–2070, https://doi.org/10.1016/j.asr.2017.02.007, 2017. Chauhan, V. and Singh, O. P.: A morphological study of GPS-TEC data at Agra and their comparison with the IRI model, Adv. Space Res., 46, 280–290, https://doi.org/10.1016/j.asr.2010.03.018, 2010. Dabbakuti, J. R. K. K. and Ratnam, D. V.: Characterization of ionospheric variability in TEC using EOF and wavelets over low-latitude GNSS stations, Adv. Space Res., 57, 2427–2443, https://doi.org/10.1016/j.asr.2016.03.029, 2016. Dabbakuti, J. R. K. K. and Ratnam, D. V.: Modeling and analysis of GPS-TEC low latitude climatology during the 24th solar cycle using empirical orthogonal functions, Adv. Space Res., 60, 1751–1764, https://doi.org/10.1016/j.asr.2017.06.048, 2017. Feltens, J., Angling, M., Jackson-Booth, N., Jakowski, N., Hoque, M., Hernández-Pajares, M., Aragón, Á., María, Á., and Orús-Pérez, R.: Comparative testing of four ionospheric models driven with GPS measurements, Radio Sci., 46, RS0D12, https://doi.org/10.1029/2010RS004584, 2011. Jiang, H., Liu, J., Wang, Z., An, J., Ou, J., Liu, S., and Wang, N.: Assessment of spatial and temporal TEC variations derived from ionospheric models over the polar regions, J. Geodesy, 93, 455–471, https://doi.org/10.1007/s00190-018-1175-6, 2019. 
Kenpankho, P., Supnithi, P., and Nagatsuma, T.: Comparison of observed TEC values with IRI-2007 TEC and IRI-2007 TEC with optional foF2 measurements predictions at an equatorial region, Chumphon, Thailand, Adv. Space Res., 52, 1820–1826, https://doi.org/10.1016/j.asr.2013.08.012, 2013. Li, S., Li, L., and Peng, J.: Variability of Ionospheric TEC and the Performance of the IRI-2012 Model at the BJFS Station, China, Acta Geophys., 64, 1970–1987, 2016. Li, S., Zhou, H., Xu, J., Wang, Z., Li, L., and Zheng, Y.: Modeling and analysis of ionosphere TEC over China and adjacent areas based on EOF method, Adv. Space Res., 64, 400–414, https://doi.org/10.1016/j.asr.2019.04.018, 2019. Maltseva, O. A., Mozhaeva, N. S., Poltavsky, O. S., and Zhbankov, G. A.: Use of TEC global maps and the IRI model to study ionospheric response to geomagnetic disturbances, Adv. Space Res., 49, 1076–1087, https://doi.org/10.1016/j.asr.2012.01.005, 2012. Mao, T., Wan, W., Yue, X., Sun, L., Zhao, B., and Guo, J.: An empirical orthogonal function model of total electron content over China, Radio Sci., 43, RS2009, https://doi.org/10.1029/2007RS003629, 2008. Okoh, D., McKinnell, L., Cilliers, P., and Okeke, P.: Using GPS-TEC data to calibrate VTEC computed with the IRI model over Nigeria, Adv. Space Res., 52, 1791–1797, https://doi.org/10.1016/j.asr.2012.11.013, 2013. Pearson, K.: On lines and planes of closest fit to systems of points in space, Philos. Mag., 2, 559–572, 1901. Rawer, K., Ramakrishnan, S., and Bilitza, D.: International Reference Ionosphere 1978. International Union of Radio Science, URSI Special Report, 75 pp., Bruxelles, Belgium, 1978. Scidá, L. A., Ezquer, R. G., Cabrera, M. A., Mosert, M., Brunini, C., and Buresova, D.: On the IRI 2007 performance as a TEC predictor for the South American sector, J. Atmos. Sol.-Terr. Phy., 81–82, 50–58, https://doi.org/10.1016/j.jastp.2012.04.001, 2012. Sharma, S. K., Ansari, K., and Panda, S. 
K.: Analysis of Ionospheric TEC Variation over Manama, Bahrain, and Comparison with IRI-2012 and IRI-2016 Models, Arab. J. Sci. Eng., 43, 3823–3830, 2018. Shreedevi, P. R., Choudhary, R. K., Yadav, S., Thampi, S., and Ajesh, A.: Variation of the TEC at a dip equatorial station, Trivandrum and a mid latitude station, Hanle during the descending phase of the solar cycle 24 (2014–2016), J. Atmos. Sol.-Terr. Phy., 179, 425–434, 2018. Shubin, V. N.: Global median model of the F2-layer peak height based on ionospheric radio-occultation and ground-based Digisonde observations, Adv. Space Res., 56, 916–928, https://doi.org/10.1016/j.asr.2015.05.029, 2015. Talaat, E. R. and Zhu, X.: Spatial and temporal variation of total electron content as revealed by principal component analysis, Ann. Geophys., 34, 1109–1117, https://doi.org/10.5194/angeo-34-1109-2016, 2016. Tariku, Y. A.: Assessment of improvement of the IRI model over Ethiopia for the modeling of the variability of TEC during the period 2013–2016, Adv. Space Res., 63, 1634–1645, https://doi.org/10.1016/j.asr.2018.11.014, 2018. Uwamahoro, J. and Habarulema, J. B.: Modelling total electron content during geomagnetic storm conditions using empirical orthogonal functions and neural networks, J. Geophys. Res.-Space, 120, 11000–11012, https://doi.org/10.1002/2015JA021961, 2015. Yao, Y., Liu, L., Kong, J., and Zhai, C.: Global Ionospheric Modeling Based on Multi-GNSS, Satellite Altimetry and Formosat-3/COSMIC Data, GPS Solut., 22, 104, https://doi.org/10.1007/s10291-018-0770-6, 2018. Zakharenkova, I. E., Cherniak, Iu. V., Krankowski, A., and Shagimuratov, I. I.: Vertical TEC representation by IRI 2012 and IRI Plas models for European mid latitudes, Adv. Space Res., 55, 2070–2076, https://doi.org/10.1016/j.asr.2014.07.027, 2015. Zhang, S., Foster, J. C., Coster, A. J., and Erickson, P. J.: East–West Coast differences in total electron content over the continental US, Geophys. Res. 
Lett., 38, L19101, https://doi.org/10.1029/2011GL049116, 2011. Zhang, S., Chen, Z., Coster, A. J., Erickson, P. J., and Foster, J. C.: Ionospheric symmetry caused by geomagnetic declination over North America, Geophys. Res. Lett., 40, 5350–5354, https://doi.org/10.1002/2013GL057933, 2013. Zhao, B., Wan, W., Liu, L., Yue, X., and Venkatraman, S.: Statistical characteristics of the total ion density in the topside ionosphere during the period 1996–2004 using empirical orthogonal function (EOF) analysis, Ann. Geophys., 23, 3615–3631, https://doi.org/10.5194/angeo-23-3615-2005, 2005.
https://www.lattice2013.uni-mainz.de/static/HSI.html
Cutoff effects on lattice nuclear forces
Takumi Doi
Mon, 14:00, Seminar Room G -- Parallels 1G (Slides)

In the past years, there have been extensive studies of nuclear interactions in lattice QCD. In each of these studies, however, lattice simulations were performed at only a single lattice spacing, and the effects of lattice discretization errors have not been examined. In this talk, we investigate the cutoff effects on nuclear forces on the lattice, where nuclear potentials are extracted from the Nambu-Bethe-Salpeter (NBS) wave functions by the HAL QCD method. Employing Nf=2 clover fermion configurations generated by the CP-PACS Collaboration, we perform numerical simulations at three lattice spacings, 1/a = 0.92, 1.27, 1.83 GeV, with a fixed volume of $$L \sim 2.5$$ fm and a quark mass corresponding to $$m_\pi \sim 1.1$$ GeV. We observe non-negligible cutoff effects on the short-range part of the nuclear potentials. The results are discussed in comparison with the prediction of an operator product expansion (OPE) calculation. Cutoff effects on the scattering phase shifts are also presented.

Correlation functions of atomic nuclei in Lattice QCD I
Jana Günther, Balint C Toth, Lukas Varnhorst
Mon, 14:20, Seminar Room G -- Parallels 1G (Slides)

To determine the mass of atomic nuclei in lattice QCD one has to calculate the correlation functions of suitable combinations of quark field operators. However, the calculation of these correlation functions requires the evaluation of a large number of Wick contractions, which scales as the factorial of the number of nucleons in the system. We explore the possibilities of reducing the computational effort for the evaluation of correlation functions of atomic nuclei by exploiting certain symmetries of the systems. We discuss a recursive approach which respects these symmetries for the simplest case of identical quark sources. 
Correlation functions of atomic nuclei in Lattice QCD II
Lukas Varnhorst, Jana Günther, Balint C Toth
Mon, 14:40, Seminar Room G -- Parallels 1G (Slides)

We discuss generalizations of the recursive algorithm presented in the talk of Jana Günther. These generalizations include baryons from different sources and sinks and the projection to specific spin states. The construction of atomic nuclei as a special case is presented in detail. The computational cost for the recursive construction of correlation functions of atomic nuclei is compared with the cost of other techniques.

Equation of State of Nucleon Matters from Lattice QCD Simulations
Takashi Inoue
Mon, 15:00, Seminar Room G -- Parallels 1G (Slides)

We study nucleon matter at zero temperature starting from QCD. Using the nucleon-nucleon interaction extracted from lattice QCD simulations, we derive the equation of state of matter in the Brueckner-Hartree-Fock framework. We find that well-known features of symmetric nuclear matter, such as self-binding and saturation, are reproduced from QCD at some values of the quark mass. We also find that pure neutron matter becomes stiff at large density as the quark mass decreases. We apply these equations of state to the TOV equation and obtain the mass and radius of neutron stars.

Multi-nucleon bound states in $$N_f=2+1$$ lattice QCD
Akira Ukawa, Takeshi Yamazaki, Ken-ichi Ishikawa, Yoshinobu Kuramashi
Mon, 15:20, Seminar Room G -- Parallels 1G (Slides)

We present our results for bound states in multi-nucleon channels with nuclear mass numbers from two to four. The simulations are performed in $$N_f=2+1$$ QCD with the Iwasaki gauge action and a non-perturbatively improved Wilson fermion action at a lattice spacing of a = 0.09 fm with a quark mass corresponding to $$m_\pi = 0.3$$ GeV. 
The strange quark mass is close to the physical one. We will discuss the volume dependence of the energy difference between the ground state and the free nucleons, using the (4.4 fm)$$^3$$ and (5.8 fm)$$^3$$ lattices to distinguish a bound state from an attractive scattering state. Furthermore, the quark mass dependence of the energy difference will be discussed using our previous results at $$m_\pi = 0.5$$ GeV.

Lattice effective field theory for nuclei from A = 4 to A = 28
Timo Laehde, Evgeny Epelbaum, Hermann Krebs, Dean Lee, Ulf Meissner, Gautam Rupak
Mon, 15:40, Seminar Room G -- Parallels 1G (Slides)

Lattice effective field theory is a relatively new method which combines the frameworks of effective field theory and lattice Monte Carlo in ab initio nuclear theory. I will present new results obtained within lattice effective field theory for systems ranging from helium-4 to carbon-12, with emphasis on the quark mass dependence of the triple-alpha reaction rate, and discuss the implications for an anthropic view of the Universe. I will also present preliminary lattice effective field theory results for systems up to A = 28.

Fine lattice simulations with chirally symmetric fermions
Junichi Noaki, Sinya Aoki, Guido Cossu, Hidenori Fukaya, Shoji Hashimoto, Takashi Kaneko
Mon, 16:30, Seminar Room G -- Parallels 2G (Slides)

We carry out numerical simulations of 2+1-flavor QCD with nearly chiral lattice fermions. The lattice spacing is taken at 1/a = 2.4 and 3.6 GeV, while keeping the condition $$m_\pi L>4$$. Using the so-called Mobius-type 5D implementation of the Ginsparg-Wilson fermion, the residual mass is always kept below 0.5 MeV. In this talk, I report the first physics results, including the determination of the lattice spacing through the Wilson flow as well as the light hadron mass spectrum. 
Preliminary results from maximally twisted mass lattice QCD at the physical point
Bartosz Kostrzewa, Karl Jansen, Roberto Frezzotti, Carsten Urbach, Giancarlo Rossi, David Palao, Petros Dimopoulos, Mariane Mangin-Brinet, Albert Deuzeman, Urs Wenger, Luigi Scorzato, Abdou Abdel-Rehim, Andrea Shindler, Gregorio Herdoiza, Istvan Montvay, Philippe Boucaud
Mon, 16:50, Seminar Room G -- Parallels 2G (Slides)

In this contribution, first results of simulations with $$N_f=2$$ dynamical flavours of maximally twisted mass fermions at the physical point are presented, using a newly generated ensemble by the European Twisted Mass Collaboration (ETMC) at one lattice spacing. An overview is given of the newly chosen action, algorithmic stability and the tuning to maximal twist. As a first test, preliminary measurements of mesonic quantities are shown to indicate that the physical pion mass region has been reached on a large volume lattice. Finally, the extension of simulations to $$N_f=2+1+1$$ is discussed and current progress is outlined.

Spectrum of excited states using the stochastic LapH method
Colin Morningstar
Mon, 17:10, Seminar Room G -- Parallels 2G (Slides)

Results for the spectrum of excited mesons obtained from the temporal correlations of spatially-extended single-hadron and multi-hadron operators on anisotropic $$24^3 \times 128$$ and $$32^3 \times 256$$ lattices are presented. A stochastic method of treating the low-lying modes of quark propagation which exploits Laplacian Heaviside quark-field smearing makes such calculations possible. Light-meson scattering phase shifts may also be presented. 
Isospin breaking effect from lattice QCD and QED
Antonin Portelli
Mon, 17:30, Seminar Room G -- Parallels 2G (Slides)

While electromagnetic and up-down quark mass difference effects on octet baryon masses are very small, they have important consequences. The stability of the hydrogen atom against beta decay is a prominent example. Here we include these effects by adding them to valence quarks in a lattice QCD calculation based on $$N_f = 2+1$$ simulations with 5 lattice spacings down to 0.054 fm, lattice sizes up to 6 fm and average up-down quark masses all the way down to their physical value. This allows us to gain control over all systematic errors, except for the one associated with neglecting electromagnetism in the sea. We determine the up-down quark mass difference and the corrections to Dashen's theorem. We also compute the octet baryon isomultiplet mass splittings, as well as the individual contributions from electromagnetism and the up-down quark mass difference.

Determination of the non-degenerate light quark masses from electromagnetic mass splittings in 2+1 flavour lattice QCD+QED
Shane Drury
Mon, 17:50, Seminar Room G -- Parallels 2G (Slides)

We report on a calculation of the effects of isospin breaking in Lattice QCD+QED. This involves using Chiral Perturbation Theory with electromagnetic corrections to find the renormalized, non-degenerate, light quark masses. The calculations are carried out on QCD ensembles generated by the RBC and UKQCD collaborations using Domain Wall Fermions and the Iwasaki and Iwasaki+DSDR gauge actions with unitary pion masses down to 170 MeV. Non-compact QED is treated in the quenched approximation. We use a $$32^3$$ lattice size with $$a^{-1} = 2.28(3)$$ GeV (Iwasaki) and $$1.37(1)$$ GeV (Iwasaki+DSDR). This builds on previous work from the RBC/UKQCD collaboration with lattice spacing $$a^{-1} = 1.78(4)$$ GeV. 
Symanzik flow on HISQ ensembles
Nathan Brown
Mon, 18:10, Seminar Room G -- Parallels 2G (Slides)

We present a determination of the Symanzik flow and the $$w_0$$ scale (proposed by the BMW collaboration) on 2+1+1-flavor HISQ ensembles generated by the MILC collaboration. Continuum-extrapolated values are compared to the BMW collaboration's results for stout-smeared staggered and HEX-smeared Wilson-clover fermions, and to HPQCD's results with Wilson flow on some of the same HISQ ensembles. An analysis of the quark mass dependence of the scale and of the autocorrelation length versus flow time will also be presented.

A relativistic, model-independent, three-particle quantization condition: (I) Derivation
Maxwell Hansen, Stephen Sharpe
Tue, 14:00, Seminar Room G -- Parallels 3G (Slides)

We present a generalization of Lüscher's relation between the finite-volume spectrum and the S-matrix to energies above the inelastic threshold. Specifically, we consider a scalar field theory which has a G-parity-like symmetry that prevents even/odd coupling but is otherwise arbitrary. Assuming center-of-mass energies between three and five particle masses, we evaluate a three-to-three finite-volume correlator to all orders in perturbation theory. Terms which are exponentially suppressed in the volume are neglected. From the poles in the finite-volume correlator we then determine the relation between the finite-volume spectrum and the scattering amplitudes. Both two-to-two and three-to-three amplitudes enter the final result. 
A relativistic, model-independent, three-particle quantization condition: (II) Threshold expansion
Stephen Sharpe, Maxwell Hansen
Tue, 14:20, Seminar Room G -- Parallels 3G (Slides)

We describe how the general result obtained in the talk of Max Hansen can be expanded near the three-particle threshold and compared to the non-relativistic result of Beane et al. This provides an important cross-check on our general result.

Extension of the HAL QCD approach to inelastic and multi-particle scatterings in lattice QCD
Sinya Aoki
Tue, 14:40, Seminar Room G -- Parallels 3G (Slides)

We propose an extension of the HAL QCD method, which successfully describes two-hadron interactions below inelastic thresholds in terms of corresponding potentials, to inelastic and multi-particle scatterings. We first derive the asymptotic behaviors of the Nambu-Bethe-Salpeter (NBS) wave function at large separation for systems with more than two particles in quantum field theories. We express the asymptotic behaviors of the NBS wave function for $$n$$ particles at low energy in terms of parameters of the $$T$$-matrix such as phase shifts and mixing angles. We next construct energy-independent but non-local potentials above inelastic thresholds in terms of NBS wave functions in QCD. These properties are the two essential ingredients of the HAL QCD method to define potentials, and they justify the HAL QCD method for three or more particles in lattice QCD.

A comparative study of two lattice approaches to two-body systems
Bruno Charron
Tue, 15:00, Seminar Room G -- Parallels 3G (Slides)

Two-body systems are often studied through the temporal dependence of lattice correlators, which allow the extraction of the first few lattice eigenstates' energies. 
These energies are related, under some assumptions on the interaction, to the infinite-volume binding energies or phase shifts by Lüscher's finite-size formula or one of its extensions. Another approach is to approximate a non-local interaction kernel common to the Nambu-Bethe-Salpeter amplitudes of all lattice eigenstates below the inelastic threshold. One can then obtain approximate amplitudes for these lattice eigenstates and compute the corresponding infinite-volume binding energies or phase shifts from their spatial dependence outside the interaction. We study, for a few systems, the relations between the two methods, the challenges in their application and the validity of the approximations.

Phase shifts in $$I=2$$ $${\pi}{\pi}$$-scattering from two lattice approaches
Thorsten Kurth, Noriyoshi Ishii, Takumi Doi, Sinya Aoki, Tetsuo Hatsuda
Tue, 15:20, Seminar Room G -- Parallels 3G (Slides)

We present a lattice QCD study of the phase shift of $$I=2$$ $$\pi\pi$$ scattering on the basis of two different approaches: the standard finite-volume approach by Lüscher and the recently introduced HAL QCD potential method. Quenched QCD simulations are performed on a $$32^3\times 128$$ lattice with lattice spacing $$a=0.115$$ fm using a heavy pion mass of $$m_\pi=940$$ MeV. The results for the phase shift and the scattering length agree quite well between the two methods. In the case of the potential method, the error is dominated by the systematic uncertainty associated with the violation of rotational symmetry due to the finite lattice spacing. In Lüscher's approach, such systematic uncertainty is difficult to evaluate and is thus not included in this work. In the case of the potential method, the phase shift can be calculated for arbitrary energies below the inelastic threshold. 
In that context, the phase shift obtained at a particular center-of-mass momentum from the extension of Lüscher's method to moving frames lies on top of the curve predicted by the potential method.

Two-Nucleon Systems in a Finite Volume
Raul Briceno, Zohreh Davoudi, Thomas Luu
Tue, 15:40, Seminar Room G -- Parallels 3G

I will briefly motivate the study of two-nucleon systems in a finite volume and review issues regarding partial-wave mixing in a finite volume for both two- and three-body systems. I will outline the derivation of the quantization condition for two nucleons in a finite volume with periodic boundary conditions. The result holds for arbitrary isospin, parity, and momenta below the two-pion production threshold. I will pay close attention to the positive-parity sector and consider the implications of the quantization condition for the three smallest boosts. Finally, I will discuss the implications for the two-nucleon spectrum at the physical point.

Interactions of Charmed Mesons with Light Pseudoscalar Mesons from Lattice QCD and Implications on the Nature of the $$D_{s0}^*(2317)$$
Liuming Liu, Kostas Orginos, Feng-Kun Guo, Christoph Hanhart, Ulf Meissner
Wed, 08:30, Seminar Room G -- Parallels 5G

We study the scattering of light pseudoscalar mesons ($$\pi$$, $$K$$) off charmed mesons ($$D$$, $$D_s$$) in full lattice QCD. The $$S$$-wave scattering lengths are calculated using Lüscher's finite-volume technique. We use a relativistic formulation for the charm quark. For the light quarks, we use domain-wall fermions in the valence sector and improved Kogut-Susskind sea quarks. We calculate the scattering lengths of the isospin-3/2 $$D\pi$$, $$D_s\pi$$, $$D_sK$$, isospin-0 $$D\bar{K}$$ and isospin-1 $$D\bar{K}$$ channels on the lattice.
For the chiral extrapolation, we use a chiral unitary approach to next-to-leading order, which at the same time allows us to give predictions for other channels. Our results support the interpretation of the $$D_{s0}^*(2317)$$ as a $$DK$$ molecule. At the same time, we update the prediction for the isospin-breaking hadronic decay width $$\Gamma(D_{s0}^*(2317)\to D_s\pi)$$ to $$(133\pm19)$$ keV.

$$DK$$ scattering and the $$D_s$$ spectrum
Daniel Mohler, Christian Lang, Luka Leskovec, Sasa Prelovsek, Richard Woloshyn
Wed, 08:50, Seminar Room G -- Parallels 5G

We present preliminary results from lattice QCD calculations of the low-lying charmed-strange meson spectrum using two types of Clover-Wilson lattices. In addition to quark-antiquark interpolating fields, we also consider meson-meson interpolators corresponding to $$DK$$ scattering states. To calculate the all-to-all propagation necessary for the backtracking loops we use the distillation technique. For the charm quark we use the Fermilab method. Preliminary results for the $$J^P=0^+$$ and $$1^+$$ charmed-strange mesons are presented.

Twisted mass lattice computation of charmed mesons with focus on $$D_{s}^{**}$$
Martin Kalinowski, Marc Wagner
Wed, 09:10, Seminar Room G -- Parallels 5G

We present results of a 2+1+1-flavor twisted mass lattice QCD computation of the spectrum of $$D$$ mesons, $$D_s$$ mesons, and charmonium. Particular focus is put on the positive-parity $$D$$ and $$D_s$$ states (so-called $$D_s^{**}$$ mesons) with quantum numbers $$J^P = 0^+$$, $$1^+$$ and $$2^+$$. Besides computing their masses, we also separate and classify the two $$J^P = 1^+$$ states according to the angular momentum/spin $$j = 1/2$$, $$3/2$$ of their light degrees of freedom (light quarks and gluons).
Excited spectroscopy of mesons containing charm quarks from lattice QCD
Graham Moir, Michael Peardon, Christopher Thomas, Sinead Ryan, Liuming Liu
Wed, 09:30, Seminar Room G -- Parallels 5G

We present highly excited spectra of mesons containing charm quarks, computed using the dynamical anisotropic lattices of the Hadron Spectrum Collaboration. The use of novel techniques has allowed us to extract these spectra with a high degree of statistical precision, while also enabling us to observe states as high as spin 4 and candidate gluonic excitations. The phenomenology of these spectra and new calculations of the scattering of charmed mesons will be discussed.

Hadron spectra from overlap fermions on HISQ gauge configurations
Nilmani Mathur, Subhasish Basak, Saumen Datta, Andrew Lytle, Padmanath Madanagopalan, Pushan Majumdar
Wed, 09:50, Seminar Room G -- Parallels 5G

Adopting a mixed-action approach, we report results on hadron spectra containing one or more charm quarks. On a background of 2+1+1-flavour HISQ gauge configurations of the MILC collaboration, we use overlap fermions for the valence quarks. We also study the ratio of leptonic decay constants, $$f_D/f_{D_s}$$. Results are obtained at two lattice spacings.

Rho meson in external magnetic field
Elena Lushchevskaya
Wed, 10:10, Seminar Room G -- Parallels 5G

Correlators of vector and pseudoscalar currents have been calculated in an external strong magnetic field in SU(2) gluodynamics on the lattice. The different spin components of the rho meson mass were explored as functions of the magnetic field.
The mass of the vector meson with zero spin projection onto the direction of the magnetic field decreases linearly with increasing field for the field values available on the lattice, $$eB < 2 - 2.5$$ GeV$$^2$$; such behavior is necessary for a condensation of rho mesons in a strong magnetic field.

Structure of the sigma meson from lattice QCD
Masayuki Wakayama, Chiho Nonaka, Atsushi Nakamura, Motoo Sekiguchi, Hiroaki Wada, Shin Muroya, Teiji Kunihiro
Wed, 11:00, Seminar Room G -- Parallels 6G

Our purpose is to gain insight into the structure of the sigma meson from the first-principles calculation, lattice QCD. At present we have not reached a conclusive understanding of the nature of the sigma meson: it is currently considered to be a usual two-quark state, a four-quark state such as a tetraquark or a mesonic molecule, or a superposition of these. Besides, mixing with glueballs is one of the important and interesting ingredients of the structure of the sigma meson. Furthermore, the disconnected diagram plays an important role in the structure of the sigma meson; however, evaluating the disconnected part of the propagator is not an easy task in a lattice QCD calculation. To compute the disconnected part of the propagator, we use the Z2 noise method with truncated eigenmode acceleration and time dilution for estimating the all-to-all quark propagators. Here, we focus on four-quark states in the sigma meson. From an investigation of two-quark and four-quark states with the inclusion of disconnected diagrams, we will discuss the mass of the sigma meson and the mixing ratio between the two-quark and four-quark states.
Study of the scalar $$a_0(980)$$ on the lattice
Abdou Abdel-Rehim, Constantia Alexandrou, Marc Wagner, Luigi Scorzato, Carsten Urbach, Mario Gravina, Mattia Dalla Brida, Joshua Berlin, David Palao
Wed, 11:20, Seminar Room G -- Parallels 6G

Understanding the quark substructure and spectrum of light scalar mesons on the lattice is both interesting and challenging. It has been argued that these states are mixtures of conventional quark-antiquark states and tetraquarks. In this talk we present results for the $$a_0(980)$$ state using a variational approach with quark-antiquark, diquark-antidiquark, and meson-meson molecule interpolating field operators. The spectrum is computed on gauge configurations with 2+1 clover quarks generated by the PACS-CS collaboration at a pion mass of about 300 MeV. Both connected and disconnected quark loops are included. We also plan to show preliminary results for an ensemble at near-physical pion mass.

$$K\pi$$ scattering from Lattice QCD
David Wilson
Wed, 11:40, Seminar Room G -- Parallels 6G

We study the correlation functions obtained on $$16^3$$, $$20^3$$ and $$24^3$$ anisotropic lattices with the quark content and quantum numbers relevant to $$K \pi$$ scattering. We work with a large basis of operators, including variationally optimised projected operators that overlap strongly onto single-particle meson states. We use the distillation framework developed by the Hadron Spectrum Collaboration, which enables efficient and precise determination of lattice energy levels. As expected, the energies are shifted from their non-interacting single-particle counterparts. We apply the Lüscher method to these results to obtain the phase shifts.
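As an aside not part of the original programme: several of the abstracts above and below extract phase shifts with Lüscher's finite-volume method. For reference, the standard s-wave relation in the rest frame of a two-particle system in a cubic box of side $$L$$ can be written as

```latex
% Luescher's s-wave relation (rest frame, cubic box of side L).
% A finite-volume energy level E = 2\sqrt{m^2 + k^2} determines k; then
\cot\delta_0(k) \;=\; \frac{Z_{00}(1;\,q^2)}{\pi^{3/2}\, q},
\qquad
q \;\equiv\; \frac{kL}{2\pi},
% where Z_{00} is the generalized zeta function,
% Z_{00}(s;\,q^2) \;=\; \frac{1}{\sqrt{4\pi}}
%   \sum_{\vec n \in \mathbb{Z}^3} \bigl(\vec n^{\,2} - q^2\bigr)^{-s}.
```

Higher partial waves, non-zero total momenta, and multi-channel systems require the extensions referred to in the individual abstracts.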
$$K\pi$$ scattering in moving frames
Christian Lang, Sasa Prelovsek, Luka Leskovec, Daniel Mohler
Wed, 12:00, Seminar Room G -- Parallels 6G

We extend our study of the $$K\pi$$ system to moving frames and present an exploratory extraction of the masses and widths of the $$K^*$$ resonances by simulating $$K\pi$$ scattering in $$p$$-wave with $$I=1/2$$ on the lattice. Using $$K\pi$$ systems with non-vanishing total momenta allows the extraction of phase shifts at several values of the $$K\pi$$ relative momentum. A Breit-Wigner fit of the phase renders a $$K^*(892)$$ resonance mass and $$K^*\to K \pi$$ coupling compatible with the experimental numbers. We also determine the $$K^*(1410)$$ mass and width, assuming that the scattering is elastic in our simulation. We contrast the resonant $$I=1/2$$ channel with the repulsive non-resonant $$I=3/2$$ channel, where the phase is found to be negative and small, in agreement with experiment.

Search for possible bound Tcc and Tcs on the lattice
Yoichi Ikeda
Wed, 12:20, Seminar Room G -- Parallels 6G

One of the interesting subjects in hadron physics is the search for possible multiquark configurations that are stable against strong decays. One example is the bound H-dibaryon (udsuds), whose existence has recently been studied in lattice QCD [1,2]. In the present study, we extend the HAL QCD method, which defines potentials between hadrons [3,4], to meson-meson systems including charm quarks, in order to investigate possible bound Tcc ($$ud \bar{c} \bar{c}$$) and Tcs ($$ud \bar{s} \bar{c}$$) states on the $$32^3 \times 64$$ $$N_f=2+1$$ full QCD gauge configurations generated by the PACS-CS Collaboration [5]. We also employ the relativistic heavy quark action [6] for the charm quarks.
We report our results for the s-wave meson-meson potentials relevant to the Tcc and Tcs at pion masses $$m_{\pi}=410$$, $$570$$, $$700$$ MeV. The scattering phase shifts and scattering lengths obtained from our lattice potentials are also presented.

[1] T. Inoue et al. [HAL QCD Collaboration], Phys. Rev. Lett. 106 (2011) 162002.
[2] S.R. Beane et al. [NPLQCD Collaboration], Phys. Rev. Lett. 106 (2011) 162001.
[3] N. Ishii, S. Aoki and T. Hatsuda, Phys. Rev. Lett. 99 (2007) 022001.
[4] N. Ishii et al. [HAL QCD Collaboration], Phys. Lett. B712 (2012) 437.
[5] S. Aoki, Y. Kuramashi and S.-i. Tominaga, Prog. Theor. Phys. 109 (2003) 383.

Exploring the Roper resonance in Lattice QCD
Waseem Kamleh
Thu, 14:00, Seminar Room G -- Parallels 7G

Using a correlation matrix analysis consisting of a variety of smearings, the CSSM Lattice collaboration has successfully isolated the Roper resonance and other "exotic" excited states such as the Lambda(1405) on the lattice at near-physical pion masses. We explore the nature of the Roper resonance by examining the eigenvectors that arise from the variational analysis, demonstrating that the Roper is dominated by the $$\chi_1$$ nucleon interpolator and couples only poorly to $$\chi_2$$. By examining the probability distribution of the Roper on the lattice, we find a structure consistent with a constituent quark model. In particular, the Roper d-quark wave function contains a single node, consistent with a 2S state. A detailed comparison with constituent quark model wave functions is carried out, validating the approach of accessing these states by constructing a variational basis composed of different levels of fermion source and sink smearing.
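As a reference note not part of the original programme: the HAL QCD potential method used in the Tcc/Tcs study above (and in several other abstracts in this collection) defines an energy-independent, non-local potential through the equation satisfied by the NBS wave function. Schematically, for two hadrons of reduced mass $$\mu$$,

```latex
% The NBS wave function \psi_k(\vec r) at energy E_k = k^2/(2\mu)
% satisfies a Schroedinger-type equation with a non-local kernel U:
\left( \frac{k^2}{2\mu} + \frac{\nabla^2}{2\mu} \right) \psi_k(\vec r)
  \;=\; \int d^3r'\; U(\vec r, \vec r\,')\, \psi_k(\vec r\,'),
% which at leading order of the derivative expansion reduces to a
% local potential:
U(\vec r, \vec r\,') \;\simeq\; V(\vec r)\, \delta^{3}(\vec r - \vec r\,').
```

Observables such as phase shifts and binding energies are then obtained by solving the Schrödinger equation with the extracted potential, as described in refs. [3,4] of the abstract above.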
The Roper Puzzle
Keh-Fei Liu
Thu, 14:20, Seminar Room G -- Parallels 7G

The Roper resonance calculated with Wilson and Clover fermions is higher than that calculated with the overlap fermion by ~400-600 MeV for pion masses below ~600 MeV, in both quenched and dynamical fermion calculations. Furthermore, the lowest state in the $$S_{11}$$ channel with the overlap fermion is the S-wave $$\pi N$$ state, whereas the lowest one with Wilson and Clover fermions appears to be the $$S_{11}(1535)$$ for pion masses below 300 MeV. We address these puzzles with a study of the Roper in both the variational and sequential Bayesian fitting methods, as well as in terms of the role of chiral dynamics.

Spectroscopy of doubly and triply-charmed baryons from lattice QCD
Thu, 14:40, Seminar Room G -- Parallels 7G

We present the ground- and excited-state spectra of doubly and triply-charmed baryons using dynamical lattice QCD. A large set of baryonic operators that respect the symmetries of the lattice, obtained after subduction from their continuum analogues, is utilized. Using novel computational techniques, correlation functions of these operators are generated, and the variational method is exploited to extract excited states. The lattice spectra that we obtain contain baryonic states with well-defined total spins up to $$\frac{7}{2}$$, and the low-lying states remarkably resemble the expectations of quantum numbers from SU(6) $$\times$$ O(3) symmetry. Various energy splittings between the extracted states, including splittings due to hyperfine as well as spin-orbit coupling, are considered and compared against similar energy splittings at other quark masses.
Charmed Bottom Baryon Spectroscopy
Zachary Brown, Kostas Orginos, Stefan Meinel, William Detmold
Thu, 15:00, Seminar Room G -- Parallels 7G

The arena of doubly and triply heavy baryons remains experimentally unexplored to a large extent. This has led to a great deal of theoretical effort in the calculation of mass spectra in this sector. Although the detection of such heavy particle states may lie beyond the reach of experiments for some time, it is interesting to compare results between lattice QCD computations and continuum theoretical models. Several recent lattice QCD calculations exist for both doubly and triply charmed as well as doubly and triply bottom baryons. In this work we present results from the first lattice calculation of the mass spectrum of doubly and triply heavy baryons including both charm and bottom quarks. The wide range of quark masses in these systems requires that the various flavors of quarks be treated with different lattice actions. We use domain-wall fermions for the 2+1 flavors (up, down and strange) of sea and valence quarks, a relativistic heavy quark action for the charm quarks, and non-relativistic QCD for the heavier bottom quarks. The calculation of the ground-state spectrum is presented and compared to recent models.

SU(3) flavour symmetry breaking and charmed states
Roger Horsley
Thu, 15:20, Seminar Room G -- Parallels 7G

By extending the SU(3) flavour symmetry breaking expansion from up, down and strange sea quark masses to partially quenched valence quark masses, we propose a method to determine charmed hadron masses. Initial results for some singly and doubly charmed baryon states are encouraging and demonstrate the potential of the procedure.
Baryon properties in meson mediums from lattice QCD
Amy Nicholson, William Detmold
Thu, 15:40, Seminar Room G -- Parallels 7G

We calculate the ground-state mass shifts of various baryons due to the presence of a medium of pions or kaons using lattice QCD. We use a canonical approach to produce the medium by calculating correlators with a fixed number of meson propagators. From the ground-state energies we calculate two- and three-body scattering parameters. We also extract low-energy constants by comparing our results to tree-level Chiral Perturbation Theory at non-zero isospin/kaon chemical potential.

Lattice study on exotic vector charmonium relevant to X(4260)
Ying Chen, Wei-Feng Chiu, Long-Cheng Gui, Jian Liang, Zhaofeng Liu, Yibo Yang
Thu, 16:30, Seminar Room G -- Parallels 8G

In the quenched approximation and with very high statistics, a heavy vector charmonium state, with a mass of roughly 4.30(5) GeV, is disentangled from conventional vector charmonia by using spatially extended hybrid-like interpolating field operators. The leptonic decay width of this state is also investigated through a simultaneous fit of correlation functions built from different operators.

$$\eta$$ and $$\eta'$$ masses from lattice QCD with 2+1+1 quark flavours
Carsten Urbach, Chris Michael, Konstantin Ottnad
Thu, 16:50, Seminar Room G -- Parallels 8G

We investigate the masses of the eta and eta' mesons using the Wilson twisted mass formulation with 2+1+1 dynamical quark flavours, based on gauge configurations of ETMC. We show how to efficiently subtract excited-state contributions to the relevant correlation functions and, in particular, estimate the eta' mass with improved precision.
After investigating the strange-quark mass dependence and the continuum and chiral extrapolations, we present our results at the physical point.

Pseudoscalar flavor-singlet mixing angle and decay constants from $$N_f=2+1+1$$ WtmLQCD
Konstantin Ottnad, Chris Michael, Carsten Urbach
Thu, 17:10, Seminar Room G -- Parallels 8G

Considering matrix elements in the quark-flavor basis, one expects the mixing in the eta, eta' system to be described reasonably well by a single mixing angle and two decay constants $$f_l$$, $$f_s$$. I discuss how these quantities are determined from pseudoscalar matrix elements in $$N_f=2+1+1$$ Wilson twisted mass lattice QCD and present results for three values of the lattice spacing and values of $$M_{PS}$$ ranging from 230 to 500 MeV. The required accuracy of the matrix elements is guaranteed by an improved analysis method involving an excited-state subtraction in the connected pieces of the correlation function matrix. Besides the mixing angle, the parameters $$f_l$$, $$f_s$$ are of phenomenological interest; e.g., they are related to the decay widths of $$\eta \to \gamma \gamma$$ and $$\eta' \to \gamma \gamma$$.

Charmonium-like states from scattering on the lattice
Sasa Prelovsek, Luka Leskovec, Daniel Mohler
Thu, 17:30, Seminar Room G -- Parallels 8G

We extract charmonium and charmonium-like states by simulating the corresponding scattering in a number of channels with different quantum numbers. Among others, we address the experimentally well-established X(3872) and the recently discovered and manifestly exotic $$Z_c^+(3900)$$.
O($$a^2$$)-improved actions for heavy quarks
Yong-Gwi Cho, Shoji Hashimoto, Junichi Noaki
Thu, 17:50, Seminar Room G -- Parallels 8G

We investigate a new class of improved relativistic fermion actions on the lattice, with the criteria of giving an excellent energy-momentum dispersion relation and being consistent with tree-level O($$a^2$$) improvement. The main application in mind is heavy quarks, for which $$ma \sim O(0.5)$$. We present tree-level results and a scaling study on quenched lattices.

On the $$B^{*'} \rightarrow B$$ transition
Antoine Gerardin, Benoit Blossier, John Bulava, Michael Donnellan
Thu, 18:10, Seminar Room G -- Parallels 8G

We present a first lattice determination of the $$B^{*'}B\pi$$ coupling, which parametrizes the strong decay of a radially excited $$B^{*}$$ meson into the ground-state B meson. The simulations are performed using CLS gauge configurations with $$N_f=2$$ non-perturbatively $$\mathcal{O}(a)$$ improved Wilson-Clover fermions and Heavy Quark Effective Theory in the static limit. Four lattice ensembles, with three lattice spacings in the range 0.05-0.08 fm and pion masses down to 310 MeV, allow us to perform the continuum extrapolation and check the quark-mass dependence. Moreover, to handle excited states, we solve a Generalized Eigenvalue Problem (GEVP).

Omega-Omega interaction on the Lattice
Fri, 16:30, Seminar Room C -- Parallels 10C

We investigate the Omega-Omega baryon interaction in lattice QCD.
In past studies, the hyperon interactions, which become important in high-density matter such as the core of a neutron star, have been investigated mainly for the octet sector, while very few investigations have been made for the decuplet sector, since almost all decuplet baryons are unstable against decays via the strong interaction. An exception is the Omega decuplet baryon, which is stable against strong decays, so its interaction is suitable for investigation. It is, however, still difficult to investigate the Omega-Omega interaction experimentally because of the Omega's short lifetime under weak decays. Therefore, a lattice QCD study of the Omega-Omega interaction is necessary and important. We calculate the Omega-Omega potential with the HAL QCD method, where the potential is extracted from the Nambu-Bethe-Salpeter (NBS) wave function. Our numerical results are obtained from 2+1-flavor full QCD gauge configurations at $$m_\pi \sim 875$$ MeV and $$m_\Omega\sim 2108$$ MeV, generated by the CP-PACS/JLQCD Collaboration. We find that the Omega-Omega interaction is strongly attractive. Using the potential obtained, we also calculate the phase shift of Omega-Omega scattering and discuss the possible existence of a shallow Omega-Omega bound state.

Lattice QCD studies of multi-strange baryon-baryon interactions
Kenji Sasaki
Fri, 16:50, Seminar Room C -- Parallels 10C

The derivation of baryon-baryon interactions from lattice QCD is highly awaited for investigating hypernuclear and/or neutron-star structure and the mechanism of supernova explosions, since the relevant experimental data are scarce. Owing to developments in computer performance and simulation techniques, lattice QCD calculations allow us to understand nuclear physics in terms of the fundamental theory of the strong interaction (QCD).
Our approach to baryon-baryon interactions is to derive a potential by inverting the Schroedinger equation using NBS wave functions simulated in lattice QCD. This approach has been extended to a coupled-channel formalism. Using the coupled-channel approach, we investigate multi-strange baryon-baryon interactions in lattice QCD simulations. Our numerical results are obtained from 2+1-flavor QCD gauge configurations provided by the PACS-CS Collaboration. The scattering parameters obtained from these potentials are also discussed.

The anti-symmetric LS potential in flavor SU(3) limit from Lattice QCD
Noriyoshi Ishii, Keiko Murano, Hidekatsu Nemura, Kenji Sasaki
Fri, 17:10, Seminar Room C -- Parallels 10C

Parity-odd hyperon potentials, including the anti-symmetric LS potential, are calculated in the flavor SU(3) limit with the HAL QCD method, using 2+1-flavor gauge configurations on a $$16^3\times 32$$ lattice generated by the CP-PACS/JLQCD collaborations. Due to the computational cost, we restrict ourselves to the S=-1 sector, which makes it possible to access the irreducible representations 27, 10$$^*$$, 10, 8S and 8A of the flavor SU(3) group. These potentials are rotated to the particle basis to discuss a possible cancellation between the symmetric and anti-symmetric LS potentials in the NLambda sector, which is phenomenologically expected from the spectrum of hypernuclei.

Quark mass dependence of Spin-Orbit force in parity-odd NN system from 2+1 flavor QCD
Keiko Murano
Fri, 17:30, Seminar Room C -- Parallels 10C

We report our recent study of the spin-orbit force between two nucleons in the parity-odd sector from lattice QCD.
In a previous talk, where we reported our first result for the spin-orbit force calculated at $$m_\pi=1133$$ MeV, we found that, while the qualitative behavior of the resultant potentials is consistent with phenomenological potentials, these potentials are still weak, probably due to the heavy quark mass employed in our simulations. In this talk, we examine the quark-mass dependence of the spin-orbit force. We reconstruct the spin-orbit force from 3P0, 3P1 and 3P2 Nambu-Bethe-Salpeter wave functions calculated in lattice QCD in a lighter quark-mass region, $$m_\pi=701 - 411$$ MeV. The calculation is performed on Blue Gene/Q at KEK, using Nf=2+1 PACS-CS gauge configurations generated with the O(a)-improved Wilson quark action and the RG-improved (Iwasaki) gauge action. We find that the potentials tend to become stronger as the quark mass decreases.

Pion-nucleon scattering in Lattice QCD
Valentina Verduci, Christian Lang
Fri, 17:50, Seminar Room C -- Parallels 10C

Almost all the hadrons of the QCD spectrum are unstable under strong interactions, and their resonant nature has to be taken into account for a complete study in lattice QCD. Thanks to improved computational resources and newly developed theoretical tools, in recent years multi-particle states have become a new frontier of lattice studies. We examine, for the first time on the lattice, the pion-nucleon system in s-wave (negative parity). We compare the energy levels measured in the one-particle setup with the spectrum of the coupled system. Additional information on the N* resonance is obtained through a phase-shift analysis. This work is intended as an exploratory study of the meson-baryon system on the lattice and a boost for further studies in this direction.
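As a reference note not part of the original programme: resonance parameters in phase-shift analyses such as those above are commonly obtained from an elastic Breit-Wigner fit. For a p-wave resonance of mass $$m_R$$ decaying to two hadrons with coupling $$g$$ (a form of this type is used, e.g., in the $$K\pi$$ moving-frame study earlier in this collection), the fit reads, schematically,

```latex
% Elastic Breit-Wigner parametrization of a p-wave phase shift:
\tan\delta(s) \;=\; \frac{\sqrt{s}\,\Gamma(s)}{m_R^2 - s},
\qquad
\Gamma(s) \;=\; \frac{g^2}{6\pi}\,\frac{p^{3}}{s},
% with p the relative momentum of the decay products in the
% center-of-mass frame. Fitting \delta(s) across the lattice
% energy levels yields m_R and g, and hence the width \Gamma(m_R^2).
```

The precise fit forms and channel-specific modifications used by the individual authors are given in their respective slides and papers.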
Looking for a Quarkonium-Nucleus Bound State on the Lattice
Saul Cohen
Fri, 18:10, Seminar Room C -- Parallels 10C

The interaction between quarkonia and nuclei will reveal new information about the properties of QCD. Since such systems are composed of hadronic states having no common valence quarks, they interact mainly by multi-gluon exchanges, analogous to a color van der Waals force. Although twenty years have passed since the existence of a bound nucleus-quarkonium state was proposed, model calculations give diverse results. Experiments, such as ATHENNA at JLab or CBM at FAIR, will provide experimental data in the near future. In this talk, we present a first lattice-QCD calculation of the interaction of strange quarkonia with light nuclei, using ensembles with pion masses at the SU(3)-symmetric point. We determine the energies of these multiparticle systems and probe the existence of bound states.

The Hadronic Decays of Decuplet Baryons
Paul Rakow, Raffaele Millo, Roger Horsley, Holger Perlt, Gerrit Schierholz, James Zanotti
Fri, 18:30, Seminar Room C -- Parallels 10C

We report on a project to measure the hadronic decays of the decuplet baryons, for example Delta to N pi and its hyperon analogues, based on 2+1-flavour lattice simulations. We are following two paths towards determining the coupling constants. One is to carefully measure the energies of the ground state and the first few excited levels, to see how far they are shifted by the interaction. The other approach is to measure transition rates directly from the time dependence of Green's functions linking the parent baryon at the source and its decay products at the sink.
2+1 flavor lattice QCD simulation on the K computer
Yoshinobu Kuramashi, Sinya Aoki, Takumi Doi, Tetsuo Hatsuda, Noriyoshi Ishii, Ken-Ichi Ishikawa, Naruhito Ishizuka, Yoshifumi Nakamura, Yusuke Namekawa, Hidekatsu Nemura, Kenji Sasaki, Yusuke Taniguchi, Naoya Ukita, Takeshi Yamazaki
Fri, 18:50, Seminar Room C -- Parallels 10C

We first explain the HPCI (High Performance Computing Infrastructure) Strategic Program in Japan, which aims to conduct innovative research in five strategically selected research fields. The fifth field, "the origin of matter and the universe", covers the fundamental sciences consisting of elementary particle physics, nuclear physics and astronomy. We have chosen four research subjects within this field to be performed using a part of the K computer. One of them is a large-scale simulation of lattice QCD. We present some preliminary results for 2+1 flavor QCD together with algorithmic details. Future physics plans are also discussed.

Rho and A-mesons in external magnetic field in SU(2) lattice gauge theory
Olga Larina, Elena Lushchevskaya
Poster Session

Correlators of vector, axial and pseudoscalar currents have been calculated in an external strong magnetic field in SU(2) gluodynamics on the lattice. The masses of rho and A mesons with zero spin projection s = 0 onto the z axis, parallel to the external magnetic field B, were explored as functions of the magnetic field. The mass of the corresponding spin component of the vector meson decreases with increasing magnetic field for the field values available on the lattice, $$eB \sim 2 - 2.5$$ GeV$$^2$$; such behavior is necessary for a condensation of rho mesons in a strong magnetic field.
The Oscillatory Behavior and The Logarithmic Unphysical Pole of the Domain Wall Fermion
Raza Sufian, Michael Glatzmaier, Keh-Fei Liu
Poster Session

The domain wall fermion formulation can suffer from an unwelcome oscillatory behavior in the hadron correlators, which appears when the transfer matrix is complex. In this work, we study the origin of this unphysical pole from the free quark propagator using several different DWF actions, e.g. the Shamir, Boriçi, and Möbius domain wall fermion actions. We find that the unphysical mode depends on the domain wall height M, as well as on the coefficients $$b_s$$ and $$c_s$$ for the Möbius action. We determine the specific ranges of these parameters which give rise to oscillatory behavior.

SU(2) Lattice Gluon Propagator and Potential Models
Willian M. Serenone, Attilio Cucchieri, Tereza Mendes
Poster Session

We use lattice data for the gluon propagator as an input to model the heavy quark-antiquark potential. Since the approach is based on the one-gluon-exchange approximation, a linear term must be included explicitly to account for non-perturbative effects. The string tension is left as a free parameter to be determined from fits to experimental results. We present an application to the spectrum of bottomonium and compare to other methods.

Chuan Liu, Ying Chen, Yibo Yang, Yu-Bin Liu, Jian-Ping Ma, Jian-Bo Zhang
Poster Session

The radiative decay of $$J/\psi$$ into a pure gauge scalar or tensor glueball is studied in the quenched lattice QCD formalism. The corresponding phenomenological implications of these results are also discussed.
Flavored pion and kaon masses at next-to-leading order in mixed-action staggered chiral perturbation theory

Jon Bailey, Jongjeong Kim, Weonjong Lee, Hyung-Jin Kim, Boram Yoon

Poster Session

Differently improved staggered fermions can be used in mixed-action calculations to reduce discretization effects and simplify analyses. After recalling the generalization of staggered chiral perturbation theory to the mixed-action case, we describe a calculation of the masses of the flavored pseudo-Goldstone bosons to next-to-leading order. The results can be used to improve determinations of quark masses, Gasser-Leutwyler couplings, and other parameters important for phenomenology.

Testing the stochastic LapH method in the twisted mass formulation

Christian Jost, Bastian Knippschild, Carsten Urbach

Poster Session

We present first results using the stochastic Laplace-Heaviside (LapH) method in the twisted mass formulation. The calculations are performed on gauge configurations provided by the ETM collaboration with 2+1+1 dynamical quark flavours at a single value of the lattice spacing. In particular, we compute disconnected contributions to flavour singlet pseudo-scalar mesons and compare LapH to standard volume noise methods.

Bottomonium results from lattice QCD

Christine Davies, Brian Colquhoun, Ben Galloway, Rachel Dowdall, Jonna Koponen, Peter Lepage, Craig McNeile

Poster Session

We discuss a number of different results in bottomonium physics using the HISQ or NRQCD actions for the b quark.
Hadronic light-by-light contribution to the muon $$g-2$$ with charged sea quarks

Tom Blum, Masashi Hayakawa, Taku Izubuchi

Poster Session

We update our calculation of the hadronic light-by-light contribution to the muon anomalous magnetic moment, using increased statistics and more values of momentum transfer, for neutral sea quarks. We use domain wall fermions, Iwasaki gluons, and quenched (non-compact) photons on a lattice of size $$24^3\times 64\times 16$$, with $$m_\pi=330$$ MeV, $$m_\mu=170$$ MeV, and $$a^{-1}=1.73$$ GeV. The AMA technique is employed to efficiently improve statistical precision. In addition, we describe our new calculation method including charged sea quarks and all diagrams entering at $$O(\alpha^3)$$.

Two-Baryon Correlation Functions in 2-flavor QCD

Chuan Miao, Anthony Francis, Thomas Rae, Hartmut Wittig

Poster Session

We present an initial study of two-baryon correlation functions with the aim of exploring potential dibaryon bound states, specifically the H-dibaryon, which is a hypothesized bound state of QCD. In particular, we comment on our first results for two-baryon correlation functions ($$\langle C_{XY}(t)C_{XY}(0) \rangle$$, where $$XY=\Lambda\Lambda, \Sigma\Sigma, N\Xi, \cdots$$), which combine to form the H-dibaryon. The results are obtained using a `blocking' algorithm to handle the contractions, which may easily be extended to N-baryon correlation functions. We also comment on its application to the analysis of single baryon masses ($$n$$, $$\Lambda$$, $$\Xi$$, $$\cdots$$). This study is performed on an isotropic lattice with $$m_\pi = 460$$ MeV, $$m_\pi L = 4.7$$ and $$a = 0.063$$ fm. The measurements are calculated using the CLS ensembles with non-perturbative $$\mathcal{O}(a)$$ improved Wilson fermions in $$N_f = 2$$ QCD.
Testing mixed action approaches to meson spectroscopy with twisted mass sea quarks

Joshua Berlin, David Palao, Marc Wagner

Poster Session

We explore several mixed action approaches including Wilson and Wilson twisted mass quarks with and without the Clover term. Our main goal is to reduce lattice discretization errors in mesonic spectral quantities, particularly by reducing twisted mass parity and isospin mixing.

Investigating a mixed action approach for $$\eta$$ and $$\eta'$$ mesons in $$N_f=2+1+1$$ lattice QCD

Falk Zimmermann, Konstantin Ottnad, Carsten Urbach

Poster Session

We test the Osterwalder-Seiler valence quark action to reproduce eta and eta' meson quantities from twisted mass lattice configurations with 2+1+1 dynamical quark flavours. Flavour singlet quantities gain significant contributions from the sea and the valence quark sector and are, therefore, sensitive to mixed regularisations. In particular, we employ the freedom to tune the valence strange quark mass to match pure twisted mass with the mixed action approach. Two matching procedures are proposed and shown to agree in the continuum limit for the eta-meson masses and additional mixing quantities.

Vacuum polarization function in $$N_f=2+1$$ domain-wall fermion

Eigo Shintani, Hyung-Jin Kim, Tom Blum, Taku Izubuchi

Poster Session

We will show preliminary results of a calculation of the vacuum polarization function (VPF) of the vector current in $$N_f=2+1$$ domain-wall fermion. In this calculation we use all-mode-averaging to strongly suppress the statistical noise, and show a precise calculation of the strong coupling constant using the Adler function after taking into account the lattice artifacts with two different cut-off scales.
We also discuss the precise calculation of the muon g-2 using the VPF and address the possible systematic errors.

Meson Spectroscopy using Stochastic LapH Method

Chik Him Wong, Colin Morningstar, David Lenkner, Brendan Fahy, You-Cyuan Jhang, Justin Foley, Jimmy Juge, John Bulava

Poster Session

Excited states of mesons on anisotropic $$24^3 \times 128$$ and $$32^3 \times 256$$ lattices are obtained from single-hadron and multi-hadron operators by utilizing a stochastic method that exploits the Laplacian Heaviside quark-field smearing. Preliminary light-meson scattering phase shifts may also be presented.

Pseudoscalar Decay Constants of D-Mesons in Lattice QCD with Domain-Wall Fermion

Ting-Wai Chiu, Tung-Han Hsieh, Yu-Chih Chen, Han-Yi Chou, Wen-Ping Chen

Poster Session

We study the masses and decay constants of pseudoscalar mesons in two-flavor lattice QCD with the optimal domain-wall fermion. The gauge ensembles are generated on the $$24^3 \times 48$$ lattice with the extent in the fifth dimension $$N_s = 16$$, and the plaquette gauge action at $$\beta = 6.10$$, for three sea-quark masses corresponding to pion masses in the range 280-450 MeV. We compute point-to-point quark propagators and measure the time-correlation functions of the pseudoscalar and vector mesons. The inverse lattice spacing is determined by the experimental input of the pion decay constant, while the strange quark and the charm quark masses are determined by the masses of the vector mesons $$\phi(1020)$$ and $$J/\psi(3097)$$ respectively. In this talk, we outline the salient features of our simulations and present our preliminary results for the masses and decay constants of the charmed mesons $$D$$ and $$D_s$$.
Charmonium, $$D_s$$ and $$D_s^*$$ from overlap fermion on domain wall fermion configurations

Yibo Yang, Ying Chen, Zhaofeng Liu

Poster Session

With data on ensembles of two lattice spacings and three sea masses each, we use the masses of $$D_s$$, $$D_s^*$$ and $$J/\psi$$ to determine $$m_c^{\bar{MS}}(2\,\mathrm{GeV})$$, $$m_s^{\bar{MS}}(2\,\mathrm{GeV})$$ and $$r_0$$. With those inputs, we predict the hyperfine splitting of charmonium, $$f_{D_s}$$, and the masses of P-wave charmonium in the chiral and continuum limits. We also discern the quark mass dependence of the hyperfine splitting between pseudoscalar and vector mesons, from light to heavy.
# Equalizing Numbers solution codechef

Chef has two integers A and B. In one operation he can choose any integer d and make one of the following two moves:

• Add d to A and subtract d from B.
• Add d to B and subtract d from A.

Chef is allowed to make as many operations as he wants. Can he make A and B equal?

### Input Format

• First line will contain T, the number of test cases. Then the test cases follow.
• Each test case contains a single line of input, two integers A, B.

### Output Format

For each test case, if Chef can make the two numbers equal print YES, else print NO.

You may print each character of the string in uppercase or lowercase (for example, the strings yEs, Yes, YeS, and YES will all be treated as identical).

### Constraints

• $$1 \leq T \leq 1000$$
• $$1 \leq A, B \leq 1000$$

### Sample 1:

Input:
2
3 3
1 2

Output:
Yes
No

### Explanation

Test case 1: Since A and B are already equal, Chef does not need any operations.

Test case 2: It can be shown that A and B can never be made equal using any number of the given operations.
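A short solution sketch (not part of the original post; the helper name `can_equalize` is chosen here for illustration): every move adds d to one number and subtracts d from the other, so the sum A + B is invariant. Making the numbers equal means setting both to (A + B) / 2, which is an integer exactly when A + B is even — and one move with d = (B − A) / 2 then suffices.

```python
def can_equalize(a: int, b: int) -> str:
    # The sum A + B is preserved by every move, so A == B is
    # reachable iff both can equal (A + B) / 2, i.e. A + B is even.
    return "Yes" if (a + b) % 2 == 0 else "No"

# The two sample test cases from the statement:
for a, b in [(3, 3), (1, 2)]:
    print(can_equalize(a, b))  # prints "Yes" then "No"
```

In a full submission one would wrap this in the usual T-test-case input loop.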
# Subgroups.

Definition. Let G be a group. A subgroup is a subset of a group that is itself a group under the same operation. Formally, H ⊂ G (H is a subset of G), ∀a, b ∈ H, a·b ∈ H, and (H, ·) is a group. We use the notation H ≤ G to mean that H is a subgroup of G.

Every group G has at least two subgroups: G itself and the subgroup {e} containing only the identity element. The trivial subgroup is the subgroup {e} consisting of just the identity element. Notice that {e} (a set) ≠ e (the identity element). All other subgroups are said to be nontrivial or proper subgroups, H < G.

Examples: In ℤ₉ under the operation +, the subset {0, 3, 6} forms a proper subgroup. GL(n, ℝ), the set of invertible n × n matrices with real entries, is a group under matrix multiplication. SL(n, ℝ), the set of n × n matrices with real entries whose determinant is equal to 1, is a proper subgroup of GL(n, ℝ).

The Klein four-group is a group with four elements, in which each element is self-inverse, that is, composing it with itself produces the identity (Figure 1.a.). It has three non-trivial proper subgroups: {e, a}, {e, b}, and {e, c}. Notice that {e, a, b}, {e, a, c}, and {e, b, c} are not subgroups because they are not closed: ab = c ∉ {e, a, b}, ac = b ∉ {e, a, c}, and bc = a ∉ {e, b, c}. The only nontrivial proper subgroup of ℤ₄ is {0, 2} (Figure 1.a.); e.g., {0, 1}, {0, 3}, {0, 1, 2} are not subgroups because 1 + 1 = 2 ∉ {0, 1}, 3 + 3 = 2 ∉ {0, 3}, and 1 + 2 = 3 ∉ {0, 1, 2}.

# Subgroup Tests

Two-Step Subgroup Test. Let G be a group. A non-empty subset H ⊂ G is a subgroup (H ≤ G) iff

• (i) H is closed under the operation in G: ∀a, b ∈ H, a·b ∈ H.
• (ii) Every element in H has an inverse in H: ∀a ∈ H, a⁻¹ ∈ H.

Proof: If H is a subgroup, then (i) and (ii) obviously hold. Conversely, suppose (i) and (ii). Associativity is inherited from the group operation of G.
∀a ∈ H, a⁻¹ ∈ H (ii) ⇒ a·a⁻¹ = a⁻¹·a = e ∈ H (i) ⇒ e, the neutral element of G, lies in H and acts as a neutral element of H ⇒ H has a neutral element and every a ∈ H has an inverse element. ∎

Example. The circle group $\Tau$ = {z ∈ ℂ : |z| = 1} is a proper subgroup of ℂ*. It is the multiplicative group of all complex numbers with absolute value 1, that is, the unit circle in the complex plane. It has infinite order.

Proof. If z, w ∈ $\Tau$, z = cos θ + i sin θ, w = cos Φ + i sin Φ, then zw = cos(θ + Φ) + i sin(θ + Φ), so zw ∈ $\Tau$ and $\Tau$ is closed. We are using the theorem: if z, w ∈ ℂ, z = r(cos θ + i sin θ), w = s(cos Φ + i sin Φ), then zw = rs(cos(θ + Φ) + i sin(θ + Φ)). If z ∈ $\Tau$, z = a + ib, r = |z| = $\sqrt{a^{2}+b^{2}} = 1$, then z⁻¹ = $\frac{a-bi}{a^{2}+b^{2}} = a - bi$, and r’ = |z⁻¹| = $\sqrt{a^{2}+(-b)^{2}} = 1$, so z⁻¹ ∈ $\Tau$.

Theorem. One-Step Subgroup Test. Let G be a group. A non-empty subset H ⊂ G is a subgroup of G (H ≤ G) iff ∀a, b ∈ H, a·b⁻¹ ∈ H.

Proof. If H is a subgroup, then ∀a, b ∈ H, b⁻¹ ∈ H, so a·b⁻¹ ∈ H. Conversely, suppose H is a non-empty subset of G and ∀a, b ∈ H, a·b⁻¹ ∈ H. Then b·b⁻¹ = b⁻¹·b = e ∈ H ⇒ e, the neutral element of G, lies in H and is a neutral element of H; moreover, taking the pair (e, a), we get a⁻¹ = e·a⁻¹ ∈ H for all a ∈ H, so every element of H has an inverse in H. ∀a, b ∈ H, let’s show that their product is also in H. We know that b⁻¹ ∈ H ⇒ a·(b⁻¹)⁻¹ = ab ∈ H.

Example. Let (ℤ, +) and r ∈ ℤ. Then rℤ = {r·n | n ∈ ℤ} is the subgroup of all integers divisible by r: if a and b are divisible by r, then a + b and −a are also divisible by r.

Theorem. Finite Subgroup Test. Let G be a group. A non-empty finite subset H ⊂ G closed under the operation of G (∀a, b ∈ H, a·b ∈ H) is a subgroup of G.

Proof. Using the Two-Step Subgroup Test, we only need to prove that a⁻¹ ∈ H ∀a ∈ H. If a = e ⇒ ee = e ∈ H and a⁻¹ = e ∈ H ∎. If a ≠ e ⇒ a, a², a³, … all belong to H because H is closed under the operation of G. However, H is finite ⇒ aⁱ = aʲ for some i > j ⇒ aⁱ⁻ʲ = e ⇒ [i − j > 0, aⁱ⁻ʲ ∈ H] e ∈ H and a⁻¹ = aⁱ⁻ʲ⁻¹ ∈ H because i − j − 1 ≥ 0.

Proposition.
If H is a subgroup of G, and L is a subgroup of H, then L is a subgroup of G.

Proposition. If H and L are subgroups of G, then H∩L is a subgroup of G.

Proof: ∀a, b ∈ H∩L, a, b ∈ H and a, b ∈ L ⇒ a·b⁻¹ ∈ H and a·b⁻¹ ∈ L ⇒ a·b⁻¹ ∈ H∩L.

Examples. Let G be a group and a be any element in G (a ∈ G). Then the set ⟨a⟩ = {aⁿ | n ∈ ℤ} is a subgroup of G. We call it the cyclic subgroup generated by a. Proof: g, h ∈ ⟨a⟩ ⇒ g = aᵐ, h = aⁿ for some m, n ∈ ℤ, so gh = aᵐ⁺ⁿ ∈ ⟨a⟩ and g⁻¹ = a⁻ᵐ ∈ ⟨a⟩. In ℤ₁₀, ⟨2⟩ = {0, 2, 4, 6, 8}.

# Center of a Group

The center of a group G is the subset of elements that commute with every element of G: Z(G) = {a ∈ G | ab = ba ∀b ∈ G}. The center of a group, Z(G), is a subgroup of G, Z(G) ≤ G.

Proof: [Two-Step Subgroup Test]

1. Closure: ∀a, b ∈ Z(G) and ∀x ∈ G, is (ab)x = x(ab)? (ab)x = [Associativity] a(bx) = [b ∈ Z(G)] a(xb) = [Associativity] (ax)b = [a ∈ Z(G)] (xa)b = [Associativity] x(ab).

2. Inverses: ∀a ∈ Z(G), ∀x ∈ G, ax = xa; is a⁻¹x = xa⁻¹? Multiply ax = xa by a⁻¹ on both sides: a⁻¹(ax)a⁻¹ = a⁻¹(xa)a⁻¹ ⇒ (a⁻¹a)xa⁻¹ = a⁻¹x(aa⁻¹) ⇒ xa⁻¹ = a⁻¹x. ∎

Examples.

• The center of an Abelian group G is the group itself.
• Let’s calculate the center of the dihedral group Dₙ = {α, β : αⁿ = β² = e, βαβ = α⁻¹}, n ≥ 3.

x ∈ Z(Dₙ) iff xα = αx and xβ = βx. Suppose x = αⁱβ is a reflection (all reflections have this form, 0 ≤ i ≤ n−1). Then xα = αx ⇔ αⁱβα = αⁱ⁺¹β ⇔ βα = αβ. But βαβ = α⁻¹ ⇒ βα = α⁻¹β⁻¹ = α⁻¹β (since β² = e), so α⁻¹β = αβ ⇒ α⁻¹ = α ⇒ α² = e ⊥ (contradiction, as n ≥ 3). Therefore, x cannot be a reflection; it must be a rotation, x = αⁱ. Then x ∈ Z(Dₙ) ⇒ xβ = βx ⇒ αⁱβ = βαⁱ ⇒ [αᵏβ = βαⁿ⁻ᵏ] αⁱβ = αⁿ⁻ⁱβ = α⁻ⁱβ ⇒ αⁱ = α⁻ⁱ ⇒ α²ⁱ = e ⇒ i = 0 (x = α⁰ = e), or i = n/2 and n is even. Therefore, Z(Dₙ) = $\begin{cases} \{e\}, n~is~ odd\\\\ \{e, α^{\frac{n}{2}}\}, n~is~ even \end{cases}$

# The Product HK of Two Subgroups H and K of a Group G

Definition. Let G be a group, and let H and K be two subgroups of G.
The product of H and K is HK = {hk : h ∈ H and k ∈ K}.

There are cases where G is a group, H and K are two subgroups, and yet HK is not a subgroup of G. Example. Let G = S₃, H = ⟨(1, 2)⟩, and K = ⟨(2, 3)⟩. Notice that H = {(1, 2), id} and K = {(2, 3), id}, so HK = {id, (1, 3, 2), (1, 2), (2, 3)}. |HK| = 4 is not a divisor of 6 (= |G|), so HK is not a subgroup of G by Lagrange’s Theorem.

Proposition. Let G be a group and let H and K be two subgroups of G. Then HK is a subgroup of G (HK ≤ G) if and only if HK = KH.

Proof. Suppose that HK is a subgroup. ∀x ∈ HK, ∃h ∈ H, ∃k ∈ K, x = hk. We claim that x ∈ KH. Since HK is a subgroup, x⁻¹ ∈ HK, so x⁻¹ = h′k′ for some h′ ∈ H, k′ ∈ K; therefore x = (h′k′)⁻¹ = k′⁻¹h′⁻¹ ∈ KH because k′⁻¹ ∈ K (K ≤ G) and h′⁻¹ ∈ H (H ≤ G) ⇒ HK ⊆ KH. Conversely, ∀x ∈ KH, ∃h ∈ H, k ∈ K, x = kh. We claim that x ∈ HK: x = kh = (ek)(he) ∈ HK, because ek ∈ HK (e ∈ H), he ∈ HK (e ∈ K), and HK ≤ G gives closure. KH ⊆ HK ⇒ KH = HK.

Suppose that HK = KH. Is HK ≤ G? Let x, y ∈ HK; then ∃h₁, h₂ ∈ H, k₁, k₂ ∈ K with x = h₁k₁ and y = h₂k₂. xy = h₁k₁h₂k₂ = h₁(k₁h₂)k₂ = [HK = KH, so k₁h₂ = h′k′ for some k′ ∈ K and h′ ∈ H] h₁(h′k′)k₂ ∈ HK because h₁h′ ∈ H and k′k₂ ∈ K; there is closure. Associativity is inherited from G. e = e·e ∈ HK (identity). ∀x ∈ HK, ∃h ∈ H, ∃k ∈ K, x = hk, and x⁻¹ = k⁻¹h⁻¹ = [KH = HK, so k⁻¹h⁻¹ = h′k′ for some h′ ∈ H, k′ ∈ K] h′k′ ∈ HK (inverses).

Proposition. Let G be a group and let H and K be two finite subgroups of G. Then $|HK|=\frac{|H||K|}{|H∩K|}$

Proof:

• H ≤ G and K ≤ G ⇒ H∩K ≤ G ⇒ |H∩K| ≥ 1 because e ∈ H∩K, so we are never dividing by zero.
• ∀t ∈ H∩K, ∀h ∈ H, k ∈ K, hk = (ht)(t⁻¹k) ∈ HK because ht ∈ H and t⁻¹k ∈ K, so each element in HK can be written in at least |H∩K| different ways as an element from H times an element from K. Conversely, hk = h′k′ ⇒ h′⁻¹h = k′k⁻¹ = t ∈ H∩K. k′k⁻¹ = t ⇒ k′ = tk ⇒ k = t⁻¹k′.
h′⁻¹h = t ⇒ h = h′t. Therefore, hk = h′k′ = (h′t)(t⁻¹k′) for some t ∈ H∩K, so every element in HK can be written in exactly |H∩K| different ways as an element from H times an element from K, and so the equality $|HK|=\frac{|H||K|}{|H∩K|}$ holds. In particular, |H∩K| divides |H||K|.
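The S₃ example and the counting formula can be checked by brute force (a sketch, not from the original notes; permutations on {0, 1, 2} are encoded as tuples p with p[i] the image of i, and the composition convention q-acts-first is an arbitrary choice here):

```python
def compose(p, q):
    # Convention: (p ∘ q)(i) = p(q(i)), i.e. q acts first.
    return tuple(p[q[i]] for i in range(len(p)))

e = (0, 1, 2)
H = {e, (1, 0, 2)}   # cyclic subgroup generated by the transposition (1 2)
K = {e, (0, 2, 1)}   # cyclic subgroup generated by the transposition (2 3)
HK = {compose(h, k) for h in H for k in K}

# |HK| = |H||K| / |H ∩ K| = 2·2/1 = 4, which does not divide |S3| = 6,
# so HK cannot be a subgroup (Lagrange); closure indeed fails:
print(len(HK))                                            # 4
print(len(HK) == len(H) * len(K) // len(H & K))           # True
print(all(compose(a, b) in HK for a in HK for b in HK))   # False
```

The same loop, applied to any finite H, K ≤ G, verifies both the product formula and the HK ≤ G ⇔ HK = KH criterion numerically.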
# Sabrina J. Mielke

## Can you compare perplexity across different segmentations?

2019-04-23

In recent years, models in NLP have strayed from the old assumption that the word is the atomic unit of choice: subword-based models (using BPE or sentencepiece) and character-based (or even byte-based!) models have proven to be very effective for many tasks, allowing for a small vocabulary size that can still cover all possible occurring text (i.e., that is open-vocabulary). With all this change comes a different question though: can we compare metrics over words with those over characters or subword units? In this post, we will be looking at metrics for language models, specifically the most popular one: perplexity.

### What is perplexity?

“Perplexity is the exponentiated average negative log-likelihood per token.” What does that mean? Fundamentally, a language model is a probability distribution that assigns probabilities to entire strings, for example: \begin{align*} p(\texttt{the cat sat on the mat}) &= 0.000000000341\\ p(\texttt{the mat sat on the cat}) &= 0.000000000239\\ p(\texttt{the cat the on mat mat}) &= 0.000000000001\\ \end{align*} These probabilities are extremely low because there are so many sentences that the distribution all has to cover! [1]

Say, the first sentence is our test data. Then, given its likelihood under our model, we can compute a perplexity per word, counting the $$\mathrm{EOS}$$ (end of string) as a seventh word:[2] $ppl_{word} = \exp\dfrac{-\log 0.000000000341}{6+1} = 22.5$ This perplexity is what people usually mean when they say “perplexity”: the perplexity per word on the test data. But we can compute other perplexities, too! The sentence had $$6+1$$ words, yes, but it also had $$22+1$$ characters: $ppl_{char} = \exp\dfrac{-\log 0.000000000341}{22+1} = 2.6$ Note how we only changed the denominator!
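As a quick sanity check (a sketch added here, reusing the likelihood value from above), both perplexities come from the same total negative log-likelihood; only the denominator changes:

```python
import math

p = 0.000000000341            # p("the cat sat on the mat") under the model
nll = -math.log(p)            # total negative log-likelihood of the string

ppl_word = math.exp(nll / (6 + 1))    # 6 words + EOS
ppl_char = math.exp(nll / (22 + 1))   # 22 characters + EOS
print(round(ppl_word, 1))             # 22.5
print(round(ppl_char, 1))             # 2.6
```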
Both these perplexities have an intuitive interpretation: the first one tells us that the model was “trying to pick” between about 23 words, the second one that it had a choice of two to three characters it was usually unsure between. Or more precisely: if we let it guess token by token (i.e., word by word or character by character), on average it would have taken 22.5 tries and 2.6 tries, respectively, to find the right one.[3]

### Decomposing the string into tokens

If you have seen a standard exposition of language models, you might be a bit unhappy with what we've done so far: we never really established how the probability value of $$0.000000000341$$ was calculated! Let's remedy that. Language models are usually implemented as autoregressive models, i.e., they predict the output piece by piece, each new piece conditioned on the previously generated pieces: $p(\texttt{the cat sat}) = p(\texttt{the} | \varepsilon) \cdot p(\texttt{cat} | \texttt{the}) \cdot p(\texttt{sat} | \texttt{the cat}) \cdot p(\mathrm{EOS} | \texttt{the cat sat})$ where $$\varepsilon$$ refers to the empty string and the equality holds true because of the chain rule of probability. This decomposition makes things easy enough: instead of trying to assign probabilities to infinitely many things all at once, we now only have to assign probabilities to a finite vocabulary at each timestep — easy enough with a softmax layer in a neural model. Now you should see why people report perplexities per word if this is indeed the decomposition they choose!
Say a word-based RNN/Transformer language model predicts: \begin{align*} p(\texttt{the} | \varepsilon) &= 0.1 \\ p(\texttt{cat} | \texttt{the}) &= 0.01 \\ p(\texttt{sat} | \texttt{the cat}) &= 0.008 \\ p(\mathrm{EOS} | \texttt{the cat sat}) &= 0.04 \\ \end{align*} $\Rightarrow p(\texttt{the cat sat}) = 0.1 \cdot 0.01 \cdot 0.008 \cdot 0.04 = 0.00000032$ Then we can of course compute the perplexity as we have before: $ppl_{word} = \exp\dfrac{-\log 0.00000032}{3+1} = 42.04$ But it is common in modern LM implementations to instead just use the token-level probabilities to compute the average: $ppl_{word} = \exp\dfrac{-(\log 0.1 + \log 0.01 + \log 0.008 + \log 0.04)}{3+1} = 42.04$

### Different decompositions

Of course, this decomposition is only one of many! If we do our prediction character by character we end up with: $p(\texttt{the cat sat}) = p(\texttt{t} | \varepsilon) \cdot p(\texttt{h} | \texttt{t}) \cdot p(\texttt{e} | \texttt{th}) \cdot \ldots \cdot p(\mathrm{EOS} | \texttt{the cat sat})$ And again we could compute a perplexity, this time over characters: $ppl_{char} = \exp\dfrac{-(\log p(\texttt{t} | \varepsilon) + \ldots + \log p(\mathrm{EOS} | \texttt{the cat sat}))}{11+1} = 3.48$ assuming that indeed the character-level model is equally good (i.e., assigns the entire string the same likelihood) as the word-level model. Both these decompositions are equally valid — they result in a distribution over the same strings! That is, as long as...

### “The importance of being open-vocab” or “What strings does the model cover?”

One potential confound we have ignored so far is the fact that word-level models tend to have what is called a “closed vocabulary” — they cannot produce all words but only those that are in their vocabulary. That means that $$\texttt{cat}$$ is okay, but $$\texttt{wolpertinger}$$ may not be.
A word-level model thus could not explain (i.e., assign positive probability to) the sentence $$\texttt{the wolpertinger sat}$$ — a character-level model on the other hand would have little issue predicting that novel word one character at a time, yielding a positive, even if small, probability for the entire sentence. In summary, the character-level model assigns probability to all sorts of novel words, in fact, to infinitely many of them. All this probability mass that it loses on them, the word-level model can use to make $$\texttt{the cat sat}$$ and other simple in-vocabulary sentences more likely. Or, on our example, if we imagine that $$\texttt{the cat sat}$$ and $$\texttt{the wolpertinger sat}$$ were the only two sentences that existed: because of $p_{word}(\texttt{the wolpertinger sat}) < p_{char}(\texttt{the wolpertinger sat})$ we have that $p_{word}(\texttt{the cat sat}) > p_{char}(\texttt{the cat sat})$

So, in a sense, the closed-vocabulary model can “cheat” by ignoring all these low-probability words while the character-level model has to spread its probability mass thin over them. Unfair! Specifically, if we evaluate these two models on a test set that doesn't contain any of these novel words, the closed-vocabulary model will end up with a much lower perplexity.[4]

Takeaway: we can only compare distributions on likelihood-based metrics like perplexity if they have support on the exact same set of strings. Corollary: closed-vocab perplexities and open-vocab perplexities are not comparable at all. To make this divide more explicit, the open-vocab language modeling community has generally avoided measuring “word-level perplexity” and instead reports “bits per character”, i.e., the average negative log-likelihood over characters, using the base-2 logarithm.

Finally, it should be said that “word-level == closed-vocab” and “character-level == open-vocab” is roughly true, but only part of the whole story.
For one, augmenting word-level models with character-level information can make them open-vocab — and vice versa, augmenting character-level models with word information can help, too (and a number of methods have been proposed to do these things). More critically, however, even a character-level model assumes a closed set of characters — but with the Unicode consortium coming up with new emojis every year, that’s not really correct either. [5]

### How then do we compare on, say, subword units?

Where in this divide do subword units fall? The big selling point for both BPE and sentencepiece is that they do indeed allow open-vocab language modeling, much like a pure character-level model, but also allow the formation of bigger word-like tokens to simplify the prediction task (shorter sequences == easier modeling for the RNN), i.e., instead of splitting $$\texttt{the deforestation}$$ into $$\texttt{t h e _ d e f o r e s t a t i o n}$$ we split it into $$\texttt{the de@@ forest@@ ation}$$, where @@ signifies that a word isn't finished yet. Because this “splitting” can go down all the way to characters, we will always find a way to cover any string, so we are open-vocab and thus comparable to characters. We are also comparable between different numbers of BPE merges or sentencepiece splits, or even between these two different methods!

We should be more explicit: when we say “comparable”, it should be clear that we are not talking about the perplexity per prediction token (i.e., here per subword unit) that is usually reported by your RNNLM toolkit — those are very much not comparable. But the perplexities per character or per word — or really any metric whose denominator does *not* depend on the segmentation — are comparable! Very practically, consider the segmentations $$s_1 = \texttt{the de@@ forest@@ ation}$$ and $$s_2 = \texttt{the defor@@ estation}$$.
If someone handed you their precomputed perplexities per subword unit for these two, say, $$ppl^{sw}_1 = 19$$ and $$ppl^{sw}_2 = 24$$, you now know that these aren't directly comparable, but you can compute comparable metrics: First, get the total negative log-likelihood of the entire string: \begin{align*} nll_1 &= \log 19 \cdot (4+1) &= 14.7 \\ nll_2 &= \log 24 \cdot (3+1) &= 12.7 \\ \end{align*} These numbers you can already fairly compare (and you will see that the second model, despite its “higher subword perplexity”, is actually the better one), but if you prefer word-level perplexities, you can compute these, too: \begin{align*} ppl^w_1 &= \exp\dfrac{14.7}{2+1} &= 134.3 \\ ppl^w_2 &= \exp\dfrac{12.7}{2+1} &= 68.9 \\ \end{align*} You could even compute character-level perplexities: \begin{align*} ppl^c_1 &= \exp\dfrac{14.7}{16+1} &= 2.37 \\ ppl^c_2 &= \exp\dfrac{12.7}{16+1} &= 2.11 \\ \end{align*} Note that there is a little caveat here: we assume that one string corresponds to exactly one segmentation. That's not quite true, but that problem is a much trickier one.[6]

### Conclusion

Language models are distributions over strings. The fact that we implement them as autoregressive models over tokens at most changes their support, i.e., whether they can deal with all strings or only those that their vocabulary covers. Perplexities over different segmentation granularities (i.e., words, subwords, or characters) aren't directly comparable, but the log-likelihoods that are hidden inside them are — and those can always be converted to perplexities for any given level. Fix the level, say words, and you can (and should!) compare.

Thanks to Annabelle Carell, Ryan Cotterell, Jason Eisner, Matthew Francis-Landau, and Chu-Cheng Lin for their proof-reading and suggestions!
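As a small addendum, the subword-to-word/character conversion worked through earlier in the post is mechanical enough to script (a sketch added here; the counts mirror the ones used in the worked example, each plus one for the EOS prediction):

```python
import math

def convert(ppl_subword, n_subwords, n_words, n_chars):
    # Undo the per-subword averaging to recover the total NLL, then
    # re-average per word and per character (all counts + 1 for EOS).
    nll = math.log(ppl_subword) * (n_subwords + 1)
    return nll, math.exp(nll / (n_words + 1)), math.exp(nll / (n_chars + 1))

# s1 = "the de@@ forest@@ ation" (4 units), s2 = "the defor@@ estation"
# (3 units), both segmenting the same 2-word, 16-character string.
nll1, pw1, pc1 = convert(19, 4, 2, 16)
nll2, pw2, pc2 = convert(24, 3, 2, 16)
print(round(nll1, 1), round(nll2, 1))   # 14.7 12.7
print(pw1 > pw2)                        # True: model 2 wins despite its
                                        # higher per-subword perplexity
```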
We used this realization that equal denominators matter in our NAACL 2018 paper “Are All Languages Equally Hard to Language-Model?”, to compare language models even across different languages — you should find the idea there fairly obvious after reading this post. If you are more curious, in our AAAI 2019 paper “Spell Once, Summon Anywhere: A Two-Level Open-Vocabulary Language Model”, we took this a step further and compared a likelihood-based metric not only across subword unit segmentations, but even across tokenization — if that tokenization is reversible! If it is, i.e., if every tokenized string corresponds to exactly one untokenized string, then any distribution (i.e., any language model) we obtain over the tokenized text implies one over the untokenized text — they have the same support and that's all we needed.

1. In fact, there are infinitely many sentences that all should receive positive probability. However, because the infinitely large set of all sentences is still countable, it should not be hard to think of a distribution that can cover these. If you have trouble believing that, consider the Poisson distribution or the geometric distribution that also assigns positive probability to the countably infinite set of positive integers.

2. We need an $$\mathrm{EOS}$$ symbol to make sure we obtain the probability of the string as an entire string and not just as a prefix of some longer string. This will be more obvious in the next section.

3. This “guessing game” is known as the Shannon Game and is in fact one of the origins of all this theory! Check out the original paper from 1950.

4. Of course, this means that if your open-vocab character-level model still beats a closed-vocab word-level model on this metric, you know that it really is better. In practice, however, that is really hard to achieve — a closed vocabulary usually gives you a very pronounced advantage in these scores that is hard to overcome with just better models.

5.
And even if it weren’t, building a character-level model over the thousands of CJK characters sounds like an equally wasteful task. It’s turtles all the way down… to bytes? Bits?
6. It is true that your segmentation tool will give you one segmentation, but that's also not the entire truth: even if the segmentation tool gives you $$\texttt{rain@@ ing}$$ as the canonical segmentation of $$\texttt{raining}$$, had the RNN over these units produced $$\texttt{rain@@ i@@ n@@ g}$$ (and all these things are necessarily in its vocabulary!), we would have just as happily taken it. So, the issue is that $$p(\texttt{raining})$$ is not just the probability of $$\texttt{rain@@ ing}$$, but the sum of the probabilities of all segmented strings that are consistent with this string (a set way too large to enumerate)! Is that an issue in practice? Most likely not. We found that in our experiments for this paper the difference was negligible. Still, make claims of a BPE model being worse in likelihood with some caution, lest you forget these other segmentations!
http://www.ipm.ac.ir/ViewPaperInfo.jsp?PTID=16183&school=Nano-Sciences
## “School of Nano-Sciences”

Paper   IPM / Nano-Sciences / 16183

School of Nano Science

Title:   Charge density wave and superconducting phase in monolayer InSe

Author(s): 1 Mohammad Alidoosti 2 Davoud Nasr Esfahani 3 Reza Asgari

Status:   Submitted

Journal:

Year:  2020

Supported by:  IPM

Abstract: In this paper, the possible superconducting phase in monolayer indium selenide is investigated using first-principles calculations for both hole- and electron-doped systems. The doping dependence of the Fermi surface is especially important for monolayer InSe: increasing the hole density modifies the Fermi surface from six separated pockets to two. For quite low hole doping, below the Lifshitz transition point, a strong electron-phonon coupling λ ∼ 7.6 is obtained, providing a superconducting critical temperature of Tc = 65 K. However, for some hole doping above the Lifshitz transition point, the combination of the temperature dependence of the bare susceptibility and the strong electron-phonon interaction gives rise to a phonon softening at a specific momentum, and therefore a charge density wave emerges at a temperature much greater than Tc. Having included non-adiabatic effects, we could carefully analyze the conditions under which either a superconducting or a charge-density-wave phase occurs in the system. In addition, monolayer InSe becomes dynamically stable when non-adiabatic effects are included, for different carrier concentrations at room temperature.
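For a rough feel of how a coupling strength like λ ∼ 7.6 relates to a critical temperature, the McMillan/Allen-Dynes formula is the standard back-of-the-envelope estimate. This is not the paper's method (the authors solve the full first-principles problem), and the phonon frequency below is a made-up placeholder:

```python
import math

def allen_dynes_tc(lam, omega_log_K, mu_star=0.1):
    """Allen-Dynes estimate of Tc (in kelvin) from the electron-phonon
    coupling lam, the log-averaged phonon frequency omega_log (kelvin),
    and the Coulomb pseudopotential mu_star."""
    return (omega_log_K / 1.2) * math.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    )

# Stronger coupling pushes Tc up (omega_log = 300 K is an arbitrary value):
allen_dynes_tc(7.6, 300.0)   # tens of kelvin
allen_dynes_tc(0.5, 300.0)   # a few kelvin
```

At such strong coupling the simple formula is only indicative; the 65 K quoted in the abstract comes from the full calculation.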
https://zbmath.org/?q=an%3A0869.62024
# zbMATH — the first resource for mathematics

Adapting to unknown smoothness via wavelet shrinkage. (English) Zbl 0869.62024

Summary: We attempt to recover a function of unknown smoothness from noisy sampled data. We introduce a procedure, SureShrink, that suppresses noise by thresholding the empirical wavelet coefficients. The thresholding is adaptive: A threshold level is assigned to each dyadic resolution level by the principle of minimizing the Stein unbiased estimate of risk (Sure) for threshold estimates. The computational effort of the overall procedure is order $$N\cdot\log(N)$$ as a function of the sample size $$N$$. SureShrink is smoothness adaptive: If the unknown function contains jumps, then the reconstruction (essentially) does also; if the unknown function has a smooth piece, then the reconstruction is (essentially) as smooth as the mother wavelet will allow. The procedure is in a sense optimally smoothness adaptive: It is near minimax simultaneously over a whole interval of the Besov scale; the size of this interval depends on the choice of mother wavelet. We know that traditional smoothing methods – kernels, splines, and orthogonal series estimates – even with optimal choices of the smoothing parameter, would be unable to perform in a near-minimax way over many spaces in the Besov scale. Examples of SureShrink are given. The advantages of the method are particularly evident when the underlying function has jump discontinuities on a smooth background.

##### MSC:

62G07 Density estimation
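The thresholding step described in the summary can be sketched in a few lines. This is a toy illustration of soft thresholding and the Sure criterion, assuming unit noise variance; the real procedure applies this level-by-level to the wavelet coefficients:

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink coefficients toward zero by t; anything with magnitude
    below t is set to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sure_risk(x, t):
    """Stein's unbiased risk estimate for soft thresholding at level t
    (noise variance 1): n - 2*#{|x_i| <= t} + sum(min(|x_i|, t)^2)."""
    a = np.abs(x)
    return x.size - 2 * np.sum(a <= t) + np.sum(np.minimum(a, t) ** 2)

# Pick the threshold minimizing SURE over the observed magnitudes:
x = np.array([3.0, -0.5, 1.0, 0.2])
t_best = min(np.abs(x), key=lambda t: sure_risk(x, t))
shrunk = soft_threshold(x, t_best)
```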
http://math.stackexchange.com/questions/611329/area-of-a-band-in-mathbbr2
# Area of a band in $\mathbb{R}^2$ If I have a continuous, and smooth curve $\mathcal{C}$, length $\ell$, in $\mathbb{R}^2$ and at each point on the curve I were to draw a line segment, length $d$, normal to the curve centered at the point; would the area covered by all the line segments be $d\cdot\ell$ provided that no two line segments intersect with each other? Also: if this is true, can this be generalized to more dimensions? - Sure. Let $d$ or $l$ be $0$. –  Jeremy Dec 18 '13 at 5:00 How do you define a normal of a continuous curve? If you curve is smooth, then you can do what you describe, but the area will not be $dl$ in general. –  Andrey Sokolov Dec 18 '13 at 5:10 If you draw a line segment at each point, then the only way no two line segments will intersect is if all are parallel, i.e., if $C$ is a straight line. In that one case, the area will indeed be $d\cdot\ell$. –  mjqxxxx Dec 18 '13 at 6:00 I am going to add smooth as a criterion. Also, I think that as long as the length d/2 is less than the radius of curvature, no two segments will intersect. –  davik Dec 19 '13 at 0:28 To give a concrete example, let $C$ be a circle of radius $r$ (and let $d < r$ if you choose the inward-facing normal). Then the region swept out by the normal lines is an annulus, and you can compute its area for both the inward- and outward-facing normals; you'll see that you don't get the answer $d\ell$ in either case. The normal is supposed to be “centered at that point”: I think davik means that the normal of equal length is drawn in both directions to achieve the band. And in that case it's true that the area is $dl$ provided the normals are not too long. –  Michael Hoppe Dec 18 '13 at 10:09
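As a quick sanity check of the last comment (normals of total length $d$ drawn symmetrically about the curve), take $\mathcal{C}$ to be a circle of radius $r$ with $d < 2r$: the band is then an annulus, and its area does come out to $d\cdot\ell$:

```latex
\text{Area} = \pi\left[\left(r+\tfrac{d}{2}\right)^2 - \left(r-\tfrac{d}{2}\right)^2\right]
            = 2\pi r d = \ell \cdot d .
```

For the one-sided normals in the concrete example, the inner and outer annuli have areas $2\pi r d \mp \pi d^2$, which is why that answer does not get $d\ell$.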
https://www.yanxurui.cc/posts/ai/2018-12-11-Essentials-of-Machine-Learning-Algorithms/
# Essentials of Machine Learning Algorithms

Published On December 11, 2018 category AI | tags ML

## Introduction

### Categories

• supervised learning (predict a specific quantity using training examples with labels)
  • classification
  • regression
• unsupervised learning (look for patterns in the data; doesn’t require labeled data)
  • clustering
• reinforcement learning

• Generative
  • model p(x|y) and then transform it into p(y|x) by applying Bayes rule
  • can be used for unsupervised learning
  • good at handling missing values or detecting outliers?
• Discriminative
  • model p(y|x) directly
  • doesn't require strong assumptions
  • learns the decision boundary between classes

### Input

The first step of any machine learning method is to represent an instance as an unordered bag of features (i.e. a set of attribute value pairs). There are 3 types of attributes. Compared with categorical attributes (e.g. yellow, blue, green, red), ordinal attributes (e.g. poor, satisfactory, good, excellent) have a natural ordering and are meaningful to compare. Since categorical attributes lack such an ordering, one-hot encoding may be required to convert a categorical integer feature into several binary columns, each for one category. Numeric attributes (e.g. 1, -3.14, 2e-3) can be added or multiplied. They are usually normalized to have unit variance or be in the range of [0, 1]. The representation of each datapoint is critical for the performance of machine learning algorithms. The process of picking attributes is also called feature engineering.

When it comes to images, text and time series, attributes are not obvious. For handwritten digits recognition, each pixel can be used as a separate attribute because the same pixel has the same meaning after each digit is isolated, rescaled and de-slanted. As for recognizing an object in an image, using pixels as attributes doesn’t work. A possible approach is to segment the image into regions and then extract features describing the region.
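The one-hot encoding mentioned above is simple enough to write out by hand (a toy version; libraries like scikit-learn provide the industrial-strength equivalent):

```python
def one_hot(value, categories):
    """Encode a categorical value as one binary column per category."""
    return [1 if value == c else 0 for c in categories]

colors = ["yellow", "blue", "green", "red"]
one_hot("green", colors)   # [0, 0, 1, 0]
```

Exactly one column is 1 per instance, so no spurious ordering between categories is introduced.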
Bag-of-words (a sparse vector in implementation) combined with a naive Bayes classifier is commonly used for text classification, such as spam detection and topic identification. Music (a kind of time series data) can be decomposed by Fourier transformation into a sum of sine waves of different frequencies, and then the attribute values are the weights of the different base frequencies. This representation is insensitive to shift and volume.

### Evaluation

It's easy to be perfect on training data while hard to do well on future data. Overfitting means the predictor is too complex or flexible and fits noise, while underfitting means the predictor is too simplistic or rigid to capture salient patterns in the data. Most machine learning algorithms have hyperparameters to control the flexibility. They should be tuned to minimize generalization error. Generalization error measures how well the predictor behaves on future data. Testing error is an estimate of the true generalization error. The common methodology of evaluating a model is as follows:

• training set: train/fit the model
• validation set: pick the best performing algorithm, fine-tune parameters
• test set: estimate future error

A better way is cross validation, especially when training examples are very limited:

1. Randomly split the data into N folds. Stratification helps to keep labels balanced in training and test sets.
2. Select each fold for testing in turn and the remaining N-1 folds for training.
3. Average the error or accuracy over the N test folds.

What metrics do we use to measure how accurate our system is?

#### Classification

Accuracy is not enough because it cannot handle unbalanced classes. For example, suppose we are predicting whether an earthquake is about to happen. A predictor which always says no will have very high accuracy but doesn't help. More measures are needed:

1. False alarm rate (FPR) and miss rate (1-TPR)
2. Precision $\frac{TP}{TP+FP}$ and recall $\frac{TP}{TP+FN}$, F-measure $\frac{2}{1/\text{Precision}+1/\text{Recall}}$
3. 
ROC curve (TPR/Recall vs FPR $\frac{FP}{FP+TN}$ as the threshold varies)

#### Regression

Measure the difference between predicted and true values:

• Mean Squared Error: very sensitive to outliers
• Mean Absolute Error
• Median Absolute Deviation
• Correlation coefficient for ranking tasks

## Naive Bayes

Naive Bayes predicts the class label by applying Bayes rule with an independence assumption between features. It's naive because this assumption may not be correct.

• prior $P(y)$
• class model $P(x|y)$
• normalizer $P(x)$

NB makes an independence assumption to model $P(x|y)$. Assume attributes $x_1 \cdots x_n$ are conditionally independent given y, i.e. $P(x_i|x_1 \cdots x_{i-1}, y) \approx P(x_i|y)$. There are 2 types of NB classifiers:

• discrete case: Multinomial Naive Bayes (i.e. $P(x_i|y)$ is estimated by computing the frequency of an attribute value within each class)
• continuous case: Gaussian Naive Bayes (i.e. every attribute is normally distributed within each class)

NB has 2 weaknesses:

1. zero-frequency: use smoothing
2. strong independence assumption

NB can easily handle a missing value by just ignoring that attribute, thanks to the conditional independence assumption between attributes.

## Decision Trees

The algorithm used to build a decision tree is called ID3:

0. the root node contains all the training examples
1. find the best/decision attribute A which produces maximum information gain after the split
2. for each value of A, create a child node and split the training examples into the child nodes
3. stop if the subset in a child node is pure, otherwise continue splitting the child node

Information gain is the drop in entropy after a split. Maximum information gain means subsets are as pure/certain as possible after the split.

• Classification: most frequent class in the subset of a leaf
• Regression: average of training examples in the subset

Pros:

1. interpretable: humans can understand the decisions which make an item positive
2. can handle missing values and irrelevant attributes
3. compact and fast at testing

Cons:

1. 
greedy: may not find the best tree
2. only axis-aligned splits of the data

### Problems

1. Overfitting can be avoided by post-pruning nodes while performance on a validation set keeps improving.
2. Information gain is biased towards attributes with many possible values; use GainRatio instead.
3. A continuous attribute can be handled by converting it into a discrete attribute using thresholds (binary or multivariate).

### RandomForest

1. Grow multiple full (without pruning) decision trees from subsets of the training examples
2. Given a new data point X, classify using each of the trees and use a majority vote to predict the class

RF is the state-of-the-art method for many classification tasks.

## Linear regression

Linear regression assumes y is a linear function of all attributes, fitting the data with a line when there is only one feature. With more features, the fitted model forms a plane or hyperplane instead of a line. The loss function is the squared error $O(w) = \sum_{i=1}^{n} (y_i - \mathbf{w}^\top \mathbf{x}_i)^2$. To minimize the loss function with respect to the parameters w, linear regression has an analytical solution.

Always visualize (graphical diagnosis) to check:

1. whether there is a linear relationship between input and target or not
2. outliers (linear regression is very sensitive to outliers)

Actually, linear regression is more powerful than you would expect. Non-linear regression means we transform the original attributes x non-linearly into new attributes $\phi(x)$ (this process is called basis expansion) and then do linear regression.

• polynomial regression transforms an attribute x into $\phi(x) = (1, x, x^2, \cdots, x^M)^T$
• RBF regression uses Gaussians as basis functions

Be careful that too many new features might result in overfitting.

## Logistic regression

It's in fact a classifier, though the name is quite misleading.
The formula of the decision boundary (a line for 2 features and a hyperplane in higher dimensions) is $f(\mathbf{x}, \mathbf{w}) = \mathbf{w}^\top \mathbf{x} + w_0 = 0$. As shown in the figure below, the feature space is divided into 2 regions by the decision boundary and each region corresponds to a class. So far it's quite similar to linear regression, but notice that the y-axis in the figure above is feature x2 instead of the target value y. In a 2 classes case, assuming the red circle represents class 1, y=1 if f(x,w)>0. In order to model the probability P(y = 1|x), the sigmoid/logistic function $\sigma(z)=\frac{1}{1+\exp(-z)}$ is applied to squash f into [0, 1].

• the bias parameter $w_0$ shifts the position of the hyperplane
• the weight vector $w=(w_1, w_2, \cdots, w_d)^\top$ is the normal vector of the hyperplane (perpendicular to it)
  • the direction of w affects the angle of the hyperplane
  • the magnitude of w affects how certain (probability close to 0 or 1) the classification is

The parameters w are determined by maximizing the log likelihood (equivalently, minimizing the negative log likelihood) using a numerical optimization algorithm such as gradient descent. Assuming the dataset D is independent and identically distributed, the likelihood of the training data is $L(\mathbf{w}) = \prod_{i=1}^{n} P(y_i|\mathbf{x}_i, \mathbf{w})$.

Gradient descent iteratively moves toward the minimum of the error surface in weight space by subtracting a proportion of the gradient of E with respect to the weight parameters. The pseudo code for batch gradient descent is as follows:

initialize w
while E(w) is unacceptably high
    calculate g = ∂E/∂w
    update w = w - ηg
end
return w

where $E = \sum_{i=1}^n{E_i}$ and $\eta$ is the learning rate. If the learning rate is too large, it's likely to leap over the minimum. If the learning rate is too small, learning will be very slow. Neural networks suffer from local optima. Fortunately, logistic regression has a unique optimum (i.e. the global optimum).

Logistic regression learns a linear decision boundary. On a linearly separable dataset, logistic regression can classify every training example correctly.
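The batch gradient descent pseudo code above translates almost line-for-line into Python. A minimal sketch (names are my own; a fixed iteration count stands in for the "unacceptably high error" test, and labels are assumed to be 0/1):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def batch_gradient_descent(X, y, eta=0.1, iters=2000):
    """Batch gradient descent for logistic regression.
    w[0] is the bias; each step subtracts eta times the gradient of the
    negative log likelihood over the whole training set."""
    d = len(X[0])
    w = [0.0] * (d + 1)
    for _ in range(iters):
        grad = [0.0] * (d + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi                     # dE/df for the log loss
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - eta * g for wj, g in zip(w, grad)]
    return w

# Toy 1-d data, class 1 for x >= 2:
w = batch_gradient_descent([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])
```

On this separable toy set the learned boundary sits near x = 1.5 and all four points are classified correctly.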
However, LR cannot get 100% accuracy on a non-linearly separable dataset (e.g. at most 75% accuracy on the XOR dataset). Like linear regression, we can apply a non-linear transformation to the input space (i.e. attributes X) to make the data linearly separable. Here is an example: using two Gaussian basis functions, the data in the new feature space is linearly separable. For multi-class classification, we use a separate weight vector for each class and output the probability using softmax instead of the logistic function.

## SVM

Like logistic regression, Support Vector Machine (SVM) draws a decision boundary or hyperplane $w^Tx + w_0 = 0$ in the feature space, but with a different objective function. SVM maximizes margin + slack. The margin is the distance from the closest training point to the decision boundary: $\text{margin} = \min_i \frac{1}{||\mathbf{w}||}|\mathbf{w^\top x_i} + w_0|$. The slack variable $\xi$ is the distance from a point to its marginal hyperplane. It's introduced to deal with non-separable data but is also applicable to a separable data set. The optimization problem is to minimize $||w||^2 + C(\sum_{i=1}^{n} \xi_i^k)$. The solution looks like $\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x_i}$. The hyperplane is determined by just a few datapoints (the support vectors). Prediction on a new data point x is $f(\mathbf{x}) = \text{sign}[\mathbf{w}^\top \mathbf{x} + w_0]$.

Plain SVM is a linear classifier. Like logistic regression, SVM can also be made non-linear by using basis expansion, which is implemented via the kernel trick. The non-linear transformation makes data more separable; the kernel makes it easier and faster to compute with if the expanded feature space is high dimensional (even infinite). Prediction is now $f(x) = \text{sign}[\sum_{i=1}^{n} \alpha_i y_i k(\mathbf{x_i}, \mathbf{x}) + w_0]$. So, we don't bother to calculate $\phi(\mathbf{x})$. For example, for a 2-d input space, the quadratic kernel $k(\mathbf{x}, \mathbf{z}) = (\mathbf{x}^\top \mathbf{z})^2$ corresponds to the feature map $\phi(\mathbf{x}) = (x_1^2, \sqrt{2}\,x_1 x_2, x_2^2)^\top$. Combined with kernels such as polynomial and RBF, SVM has achieved good empirical results in many tasks.
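To make the kernel trick concrete, here is the RBF kernel and the kernelized decision rule in the form given above. A sketch of prediction only: the alphas, labels and support vectors are assumed to come from an already-trained solver:

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """RBF kernel k(x, z) = exp(-gamma * ||x - z||^2): an inner product
    in an infinite-dimensional expanded feature space, computed without
    ever forming phi(x)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq)

def svm_predict(x, support_vectors, alphas, labels, w0, gamma=1.0):
    """Kernelized SVM decision rule: sign of the weighted sum of kernel
    evaluations against the support vectors."""
    s = w0 + sum(a * yi * rbf_kernel(xi, x, gamma)
                 for a, yi, xi in zip(alphas, labels, support_vectors))
    return 1 if s >= 0 else -1
```

Note that only the support vectors enter the sum, which is what makes SVM prediction fast even on large training sets.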
## KNN

K Nearest Neighbors (KNN) uses a simple intuition to predict a new test point X: output the value of the most similar training examples. It's simple because it doesn't make assumptions about the data and lets the data "speak" for itself instead. The KNN method can be used for both classification and regression.

Algorithm:

1. compute the distance between X and every training example Xi
2. select the K closest training instances (neighbors)
3. output the most frequent class label or the mean of the neighbors

### Problems

choosing the value of K

• Too large a K will result in everything classified as the most probable class.
• Too small a K will make the classification unstable.
• Best practice is to pick the K that gives the best performance on a held-out set.

distance measure

The distance defines which examples are similar and which aren't. Common measures are:

• Minkowski distance (p-norm) $D(x, x^\prime) = \sqrt[p]{\sum_{d}{|x_d-x_d^\prime|^p}}$
  • p=2: Euclidean (sensitive to extreme differences in a single attribute)
  • p=1: Manhattan
  • p=0: Hamming
• Kullback–Leibler divergence $D(x, x^\prime) = \sum_d x_d\log\frac{x_d}{x_d^\prime}$

ties

What if there are equal numbers of positive and negative neighbors?

• randomly choose one
• pick the class with the greater prior across the dataset
• use the nearest or 1-NN classifier to decide (but this can still have the tie problem)

missing value

A reasonable choice is to fill in with the average value of the attribute across the entire dataset.

computationally expensive

The naive implementation needs to store all training examples and compare the test point one-by-one to each training example (complexity is O(nd)). The idea to solve this is to find a small subset of training examples that are potential near neighbors.
• K-D tree
  • use the median to split the data repeatedly
  • low-dimensional, real-valued data
• locality-sensitive hashing
  • use k hyperplanes to slice the space into 2^k regions
  • high-d, real-valued
• inverted lists
  • maintain a map from possible attribute values to a list of training examples that contain the value, and then merge the lists for attributes present in the testing instance
  • high-d, discrete, sparse

However, both K-D trees and locality-sensitive hashing have the problem of missing true near neighbors.

## K-Means

K-Means is an unsupervised clustering algorithm which produces polythetic, hard-boundary and flat clusters. K-Means splits data into K sub-populations where membership is defined by distance. The algorithm goes like this:

input K, set of datapoints X1, X2, ..., Xn
place K centroids at random locations in the feature space, each corresponding to a cluster
repeat until convergence (i.e. no centroid changes):
    for each datapoint Xi:
        compute the Euclidean distance between Xi and each centroid
        find the closest centroid and assign Xi to the corresponding cluster
    for each cluster j=1...K:
        update centroid Cj as the mean of all datapoints assigned to cluster j in the previous step

### Problems

local minimum

K-Means minimizes the aggregate intra-cluster distance (the sum of squared distances between each datapoint and its centroid, $\sum_j \sum_{x_i \rightarrow c_j} D(x_i, c_j)^2$) in each iteration until convergence. Different starting points will result in different minima. The solution is to run several times with random starting points and pick the clustering that yields the smallest aggregate distance.

pick K

K must be explicitly specified as the input. We specify K=10 for clustering images of handwritten digits. What if we don't know how many clusters there are in the data? Aggregate distance is monotonically decreasing with K.
Usually, we plot the aggregate distance against the value of K (this is called a scree plot) and visually pick the K where the mountain ends and the rubble begins (i.e. the K that makes the aggregate distance decrease the most).

Evaluation

Evaluation for clustering algorithms is not easy. If class labels are available, we need to align clusters with classes before measuring accuracy. Otherwise, we could sample some pairs and ask humans whether they should be in the same group or not. Sample pairs can also be generated from class labels. By counting matching (TP, TN) and non-matching pairs (FP, FN), we can compute accuracy, F1, etc.

### Application

Inspired by bag-of-words, an image can be represented by a bag of visual words.

1. divide a large image into small regions/patches (e.g. 10x10)
2. extract appearance features for each patch such as distribution of colors, texture, edge orientation
3. use K-Means to cluster the feature vectors of all regions across all images in the dataset. As a result, similar regions will end up in the same cluster and have the same cluster id.
4. every region in every image is represented by its cluster label/id. An image is represented in K-dimensional visual words.

## Mixture Models

Suppose the distribution of our (1-d) data looks like a mixture of 2 Gaussians; a Gaussian mixture model can help us automatically discover all parameters of the 2 Gaussian sources. A mixture model is a probabilistically-grounded way of doing soft clustering. Each cluster follows a Gaussian (continuous) or multinomial (discrete) distribution. Mixture models are fit with expectation maximization (EM), whose convergence goal is to maximize the log likelihood of the data. Assuming the dataset is iid, the likelihood of n training examples under a mixture model of K sources is $L = \prod_{i=1}^{n} \sum_{k=1}^{K} p(k)\, p(x_i|k)$.

Let's walk through the 1-d GMM example mentioned above.

1. initialize 2 random Gaussians $(\mu_a, \sigma_a^2), (\mu_b, \sigma_b^2)$ with equal priors p(a) = p(b) = 0.5
2. 
repeat until convergence
  • E-step: for each point x and each Gaussian $k \in \{a, b\}$, compute P(k|x) using Bayes rule
  • M-step: adjust $(\mu_a, \sigma_a^2), (\mu_b, \sigma_b^2)$ and the priors according to the posteriors of all datapoints from the previous step

EM has the same problems as K-Means.

## Agglomerative clustering

Agglomerative clustering is a bottom-up algorithm which yields a hierarchical tree of clusters. Given a dataset of n training examples, agglomerative clustering:

1. starts with a collection C of n singleton clusters, each containing one datapoint
2. repeats until only one cluster is left (n-1 iterations):
  1. find the closest pair of clusters ci, cj
  2. merge them into one bigger cluster and update the collection

How do we compute the distance between 2 clusters?

• centroids: distance between the centroids of the 2 clusters
• single link: distance between the 2 closest points in the 2 clusters
• complete link: distance between the 2 furthest points
• average link: average of all pairwise distances
• Ward: total squared distance from the centroid of the joint cluster

The distance matrix is updated using the Lance-Williams algorithm after merging 2 clusters.

## PCA

Principal Component Analysis (PCA) is a dimensionality reduction algorithm. Datasets are typically high dimensional, such as images represented by pixels. The true dimensionality is usually much lower than the observed dimensionality because there are correlated or redundant features. A high dimensional dataset is sparse in the feature space, which makes statistics-based methods unstable. Dimensionality reduction helps to dramatically reduce the size of data and speed up computation. It also allows estimating probabilities of high-dimensional sparse data. PCA represents an instance using fewer features which explain the maximum variance in the data. Given a D-dimensional dataset, PCA computes eigenvectors and their corresponding eigenvalues from the covariance matrix. We get D D-dimensional eigenvectors and D eigenvalues.
It has been proved that the eigenvectors maximize the variance (an eigenvector points in the direction of greatest variability in the data) and each eigenvalue is the variance along its eigenvector. We pick the K eigenvectors (they are called principal components) associated with the largest K eigenvalues (variances) as the new dimensions. Given a new instance, we project it onto these new dimensions to get a lower (K<<D) dimensional representation. That is, we compute the dot product between the new instance and each principal component to get a vector of new coordinates.

### Problems

1. pick K: Usually we want to pick the first K eigenvectors which explain 90% of the total variance.
2. Covariance is sensitive to attributes with large scales: normalize each attribute to zero mean and unit variance.
3. PCA assumes the subspace is linear: For example, if we want to reduce from 2D to 1D, the datapoints in the original 2D space should be close to a straight line.
4. PCA may make classes less discriminative: Linear Discriminant Analysis makes it easier to distinguish classes, but doesn't guarantee it.
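The whole PCA recipe — center, compute the covariance, eigendecompose, keep the top K — fits in a few NumPy lines. A sketch (it assumes rows are instances and skips the unit-variance normalization discussed above):

```python
import numpy as np

def pca(X, k):
    """Project X (n x d) onto its top-k principal components.
    Returns the projected data and the k largest eigenvalues."""
    Xc = X - X.mean(axis=0)                  # zero-mean each attribute
    cov = np.cov(Xc, rowvar=False)           # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    top = np.argsort(eigvals)[::-1][:k]      # largest-variance directions
    return Xc @ eigvecs[:, top], eigvals[top]
```

For points lying on a line in 2D, the second eigenvalue comes out (numerically) zero: one component already explains all the variance.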
http://techie-buzz.com/tag/atlas/page/2
## Higgs Search At LHC Nears End – Has The Higgs Already Been Found?

The Higgs Boson may finally have been caught or, maybe, not! Only a clutch of scientists with direct access to the latest LHC data knows whether the Higgs has been found or not! Whatever the result may be, one thing is for sure: the Higgs hunt is nearly over. CERN researchers have restricted the Higgs mass to a window of only 30 GeV, taking into account results from the Large Electron Positron (LEP) collider, the Tevatron and, of course, from the LHC itself. The Higgs, if present in Nature, has got extremely little energy space to hide in. At a conference in Paris, held today (18th November), ATLAS and CMS researchers got together and erased a HUGE range for the possible mass of the Higgs. A large swathe from 141 to 476 GeV was wiped out in one fell swoop. Says Guido Tonelli, the spokesman for CMS: “We’ll know the outcome within weeks.” This is surely going to increase the pulse rate of any particle physicist in the world.

### What if…

What happens if the Higgs is not found? A lot of problems for the Standard Model. The Higgs boson is the simplest way to generate masses for fermions (like electrons and protons) and bosons (like W and Z bosons). There are other possibilities, but this one-Higgs model is the simplest and most beautiful of all the possible models. However, as Feynman would say, if theory disagrees with experiment, then it’s wrong, and it doesn’t matter how beautiful the theory might be. For a long time, the Higgs mass has been pinned at about 140 GeV. There is still a strong possibility that the Higgs, if found, will be of this mass. We may be on the brink of history.

## Get The LHC On Your Android Phone, Thanks to Oxford University

Now, you can participate in the unraveling of the greatest mysteries ever on your Android phone. Oxford University has come up with a Large Hadron Collider (LHC) app for Android mobiles.
The app is nicely named 'LHSee' and gives the user a nice chance to explore the Large Hadron Collider in full 3D glory and detail. You can download the app here. So, the Higgs Boson particle is still elusive and the LHC is hot in pursuit of the mysterious Boson. Not that you can do much about that sitting at home (or maybe you can donate your computer's processing power to CERN), but you can certainly get a sense of what is going on at the LHC on a regular basis.

### Not as easy as Angry Birds

The bad news is that the details are really involved and you'll probably take some time to take in everything. The Oxford bundle comes with a host of educational resources, besides the simulation. You'll be able to learn more about ATLAS, one of the premier detectors at the LHC. It even has a game, Hunt the Higgs, which we hope will become as popular as Angry Birds. So, while the LHC is busy colliding protons at monumental energies, you'll be challenged with picking out the different proton-proton collisions from the jumbled mess. If you spot the Higgs, do give yourself a pat on the back.

##### Check out the official Oxford University site here: http://www2.physics.ox.ac.uk/about-us/outreach/public/lhsee

CERN's huge LHC now comes in your phone. That's another reason for a physicist to buy an Android phone, if you don't already have one. The biggest search in the history of humanity now occurs on your phone. Feel proud about that! High energy physics has never been this much fun!

## New Physics Should Be Around The Corner, Says Rolf Heuer, Director of CERN; Charts Future After LHC

It was just yesterday, yet we have come so far! The first proton beams at a respectable 7 TeV energy were started only on 30th March, 2010. It has been hardly a year and a half and so much has already been achieved.
This was the basic message sent out by CERN speakers Frederick Bordry and Rolf Heuer, also the Director of CERN, at the Lepton Photon Conference, 2011, being held at the Tata Institute of Fundamental Research, Mumbai, India.

### Projects! Projects!

There are a lot of projects on the horizon, both short term and long term. Obviously, the long term projects are ambitious and a bit ambiguous as of now. However, as Heuer said, they are practical. "We should not be afraid that it is not easy," he said. Among the many new developments happening or proposed at the LHC is the development of magnets that can generate extremely high magnetic fields, called high-field magnets. These will be required to increase the energy of a beam without lengthening the collider tunnel. Prof. Michael Peskin of SLAC, who was in the audience, asked if this is "a dream or a program" amidst chuckles, to which Bordry replied that it was certainly a realistic program. The LHC is expected to have a long shutdown period from 2013 to mid 2014 for repairs and maintenance work.

### New physics and monster accelerators

A number of new projects are upcoming, even though they haven't been officially sanctioned. "Yesterday, it was the synergy of the Tevatron, HERA and SLAC that led to the discovery of the Standard Model," said Heuer, adding that the LHC results will guide the way at the energy frontier. About the Higgs search, Heuer said that while finding the Higgs will be a discovery, not finding the Higgs and ruling it out will also be a major discovery. "People should not say that these scientists are searching for nothing," he quipped. Not finding the Higgs will be a major result, since it will completely destroy the Standard Model, allowing other models of physics to come into the limelight. Among a plethora of futuristic plans announced, the most spectacular was the announcement of a hadron-lepton collider, the LHeC. The LHC is a hadron-hadron collider. It can collide protons together, or lead/silver nuclei, etc.
A hadron-lepton collider will be able to collide a proton and, say, an electron. The energy per beam of the LHeC will be 16.5 TeV, combining to give a massive 33 TeV in total. "The LHeC design is on my desk right now, but I shouldn't be mentioning that here," he remarked, drawing loud laughter from the global audience. As far as LHC physics is concerned, he said that 2012 will be a decisive year. The TeV results will either lead to the discovery of new particles, and some new physics will be known, or there will be a reformulation of the physics we already know. Both will be progressive steps for particle physics. Heuer spoke at length on the building of the linear accelerators, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC). "Today, we need to keep our choices open" was Heuer's advice.

### International Collaboration

On the question of collaboration, Heuer said that CERN was throwing its doors open to non-European countries. "The 'E' in CERN is going from 'European' to 'Everybody'. We're not changing our name, however," said Heuer. Exciting times in particle physics beckon us! As usual, this sentiment was put emphatically in Heuer's own words: "We are just beginning to explore 95% of the universe." I'll let the scientist in Heuer have the final word on this report. When asked if he'll be bothered if the next big accelerator is located in the US, instead of at CERN, Heuer put it beautifully: "I don't care where the collider is! I only care about the science coming out of it." The scientific enterprise is a greater binding factor than anything else. It's a silent messenger of world peace, uniting the world in the pursuit of truth and never advertising that facet.

## Latest Results of Higgs Search Presented Jointly By ATLAS and CMS, LHC, CERN at Lepton Photon ’11, Mumbai

The latest results on the Higgs search are out. Results were presented separately by the ATLAS and CMS detectors of the LHC, CERN today (i.e. 22nd August, 2011) at the Lepton-Photon Conference, 2011.
In this semi-technical article, we present the most important results in a systematic form. The verdict is, however, out: the Higgs hasn't been found as yet. Check out our first (non-technical) post on this discovery here. A countdown to the Lepton Photon Conference itself is here.

### Higgs Production and Decay channels

There are a few things that should be kept in mind right throughout the article. The Higgs boson is primarily produced by the interaction of two gluons. (A gluon is what keeps protons and neutrons in an atomic nucleus together.) This is called gluon-gluon production of the Higgs boson. Next, the Higgs, being highly massive (i.e. having a high mass), decays into lighter particles. This is what massive particles always do: they decay into lighter particles. The only thing is that different particles decay at different rates. Heavier particles will decay much faster than comparatively lighter particles.

The Higgs can decay into a number of lighter products. Each of these products leaves a distinctive signature on the detectors, and the different modes of decay are called different 'decay channels'. The Higgs primarily has a gamma-gamma channel (Higgs decaying into two gamma ray photons), a WW and a ZZ channel. These are the main channels of interest. The gamma-gamma channel will be the preferred channel if the Higgs is a comparatively light particle, about 100 GeV in mass. If the Higgs decays by producing two Z-bosons (the ZZ channel) or two W-bosons (WW channel), then its mass is above 130 GeV. In other words, the gamma-gamma channel fixes the upper limit of the Higgs mass at 130 GeV, while the WW and ZZ channels fix the lower energy bound at 130 GeV.

Now, here is the interesting part. The WW or ZZ bosons are themselves quite heavy and decay into a number of products. These decay channels produce characteristic detection patterns in the detectors.
Comparing the observed rate of decay into these channels with the expected value, the data is reconstructed to see if this indeed was a Higgs event.

### Now for more technical details

#### ATLAS Results

The ATLAS detector found no significant excess in the gamma-gamma channel. The bottom-bottombar (b-bbar) channel (this is what the WW bosons break down into: bottom and anti-bottom quarks) gave a big excess of Higgs events above the theoretically expected Standard Model (SM) production rates. Even though the excess was nearly 10 times the SM predictions, the sensitivity needs to be improved. Furthermore, the Tevatron has a much greater say in the b-bbar channel than the LHC, given that it has recorded a much higher number of events and has a higher luminosity at that energy range. The tau-tau (tau is a lepton, an electron-like particle) channel gave a 4 to 5 times excess. Overall, there was no significant excess in any of the channels to warrant a discovery. There was no significant excess number of events noticed for the Higgs in the mass range of 110 GeV to 160 GeV. This mass range is tentatively excluded with 95% confidence level. However, at 99% confidence level, there is a window around 142 GeV, which can be a possible detection window. Further experiments will probe this window more thoroughly.

#### CMS results

CMS detected no excess in the gamma-gamma channel. A slight excess was noticed in the tau-tau channel, and this is expected to be an important channel for further investigation, owing to the fact that data reconstruction from this channel points to a Higgs mass of about 140 GeV. An excess of events in the WW going to lepton-lepton channel suggests a mass range of 130 GeV to 200 GeV. Three pairs of events have been noticed at three masses, 122, 142 and 165 GeV, in the ZZ channel. Only the 142 GeV event is consistent with Standard Model predictions. Happily, this is the very window that wasn't excluded earlier at the 99% confidence level.
Out of the theoretically expected mass range exclusion of 145 to 440 GeV, three ranges have been excluded: 145 to 216 GeV, 226 to 288 GeV and 310 to 400 GeV. Anything above 400 GeV is unlikely, and the crucial 130 to 145 GeV window is still open. These mass ranges have been excluded with 98% confidence level. The Higgs search continues with full force. The LHC will provide a lot more data samples in the coming months, and this might ultimately lead us to achieve the Holy Grail of Particle Physics.

## Higgs Boson Still Not Found: Huge Official Announcement from LHC, CERN

HIGGS SEARCH RETURNS A BLANK! HIGGS BOSON NOT FOUND BY LHC, CERN! This is the joint announcement made by the ATLAS and CMS teams, LHC, CERN at the Lepton-Photon Conference, 2011 being held at the Tata Institute of Fundamental Research (TIFR), Mumbai, India. This is likely to be a disappointment for many around the world, both within and without the particle physics community. The search is, however, on! A warmup countdown post to this Lepton Photon Conference, 2011 is here. A semi-technical post showing all relevant results and figures can be found here.

### The Higgs Boson

The Higgs Boson, predicted from considerations of symmetry in Quantum Field Theory by Peter Higgs, is the particle theoretically responsible for endowing every other massive particle with mass. It's a boson with spin zero, positive parity and zero charge.

### Weak Signals

There were a number of weak signals noticed that preceded the event. These 'Higgs signatures' included the W-W or the Z-Z decay channel for the Higgs as the primary decay channel. This means that the Higgs, once produced, will decay into two W or Z-bosons, which will in turn break up into electron-positron pairs or muon-antimuon pairs. Unfortunately, none of these events could stand up to the rigors of analysis and survive till the 5 sigma confidence level was reached in both the ATLAS and CMS detectors, as yet.
No such significant excess has been observed in the lower mass gamma-gamma channel. Also, more exotic branches like the tau-tau and b-bbar (bottom-bottombar quarks) have not offered anything promising. The results of the Tevatron, Fermilab are similarly blank, with no significant excess noticed in any channel.

### The Future

This is also an exciting opportunity: it opens up new possible physical theories. Spontaneous symmetry breaking, at least what we know of it now, may not be the whole story. There are many 'rival' theories of the Standard Model, many requiring no Higgs boson to achieve mass. These Higgsless models may become the focus of mainstream research, and the LHC may next be used to test the predictions of such theories. However, it is too early to make such claims. The Higgs search is going on at full blast.

### And a Promise

We will bring more articles soon, explaining what this means for the Standard Model and particle physics in general. We will also run an article elucidating the jargon of particle physics. Hold on for that; it'll come sooner than you think.

Update: Actual results from the ATLAS and CMS joint announcement on the Higgs Boson search can be found here. All relevant facts and figures are present.

## Countdown to Lepton-Photon Conference, 2011: ATLAS To Make Major Announcement on Higgs Search

Some big news is just around the corner. The ATLAS collaboration at LHC, CERN is all set to announce the status of the Higgs Boson search at the giant collider in the upcoming week at the Lepton-Photon Conference 2011, being held in Mumbai from 22-27 August, 2011. The announcement is one of particular importance since it is rumored to be the definitive one in the quest for the Higgs Boson. Whether the Standard Model of Physics, one of the most beautiful and successful edifices of physics ever constructed, will stand or need revision will hinge crucially on this one announcement.
### The Lepton-Photon Conference, 2011

The Conference, the XXV International Symposium on Lepton-Photon Interactions at High Energy, will take place at the Tata Institute of Fundamental Research, Mumbai, India and will attract prominent personalities from the world of high-energy physics. The coming week is expected to be a hectic one for both students and physicists at the Institute, with the who's who of particle physics presenting and discussing current progress, while also charting the road ahead. Preparations are on at full swing within the Institute premises. We at Techie-Buzz will be covering the huge scientific event from Ground Zero and presenting all the major announcements from it in real time. You might want to bookmark the website and visit it frequently, or subscribe to our newsletter if you aren't already on the subscription list.

### Some exciting developments precede the event

The watchword is 'Higgs' for everyone, and with certain encouraging signs noticed in the last few months, everyone is excited. Particularly stunning are the two results graphed below. Explanations follow the graphs. Look at the two graphs (don't get scared!). The thick black line in each graph represents the Higgs signals. The dashed line represents the predicted Higgs production rate from Standard Model calculations. A proper signature is said to be found when the observed signal overtakes the predicted signal. Look at the region marked, just between 130 to 150 GeV, where the production rate far exceeds the predicted rate. This coincides with the predicted mass range for the Higgs. This in itself proves nothing, as it might be due to something completely different. What is exciting is the fact that this weak signal is being noticed in both the LHC detectors, ATLAS and CMS. Concurrent results have a better chance of surviving thorough data analysis. For clarity, let me reiterate the two important takeaway points: First, both detectors, ATLAS and CMS, agree on the Higgs signature.
Second, the signals have been noticed in the theoretically expected mass range (about 130-150 GeV). The results are now quoted at a 95% confidence level (or 2 sigma) and do not warrant the label of a 'discovery'. For that, you'll require 99.99994% confidence (or 5 sigma) from both detectors. We might be onto that. At the risk of being repetitive, let me again emphasize that the announcement at the Conference in the coming week will nearly finalise the fate of the search for the Higgs Boson. If not found, it may be the beginning of new physics. Hope to see you here through next week.

Update: The CERN announcement on the ATLAS and CMS results on the Higgs search is here. Check it out, it's big news.
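The sigma-to-confidence conversions quoted in these reports come from the normal distribution. A small sketch (not from the original articles; two-sided probabilities are assumed) checks them with the standard library's error function:

```python
import math

def confidence(n_sigma):
    # Two-sided probability that a normally distributed quantity lies
    # within n_sigma standard deviations of its mean: erf(n / sqrt(2))
    return math.erf(n_sigma / math.sqrt(2))

for n in (2, 5):
    print(f"{n} sigma -> {100 * confidence(n):.5f}% confidence")
```

Running this shows why 2 sigma is quoted as roughly 95% while the 5 sigma "discovery" threshold sits just below 100%.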
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8049483895301819, "perplexity": 1447.2517825120412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574182.31/warc/CC-MAIN-20190921022342-20190921044342-00314.warc.gz"}
https://www.snapxam.com/problems/82070925/derivative-of-x-5-3x-4-cosx
# Step-by-step Solution

Problem to solve:

$\frac{d}{dx}\left(\frac{\left(x^5+3x\right)^4}{\cos x}\right)$

Find the derivative using the quotient rule: (d/dx)(((x^5+3x)^4)/(cos(x))). Apply the quotient rule for differentiation, which states that if $f(x)$ and $g(x)$ are functions and $h(x)$ is the function defined by $h(x) = \frac{f(x)}{g(x)}$, where $g(x) \neq 0$, then $h'(x) = \frac{f'(x) \cdot g(x) - g'(x) \cdot f(x)}{g(x)^2}$. This gives

$\frac{\frac{d}{dx}\left(\left(x^5+3x\right)^4\right)\cos\left(x\right)-\left(x^5+3x\right)^4\frac{d}{dx}\left(\cos\left(x\right)\right)}{\cos\left(x\right)^2}$

The power rule for differentiation states that if $n$ is a real number and $f(x) = x^n$, then $f'(x) = nx^{n-1}$. The derivative of the cosine of a function is equal to minus the sine of the function times the derivative of the function; in other words, if $f(x) = \cos(x)$, then $f'(x) = -\sin(x)\cdot D_x(x)$. The derivative of a sum of two functions is the sum of the derivatives of each function. Applying these rules gives the final answer:

$\frac{4\left(x^5+3x\right)^{3}\left(5x^{4}+3\right)\cos\left(x\right)+\left(x^5+3x\right)^4\sin\left(x\right)}{\cos\left(x\right)^2}$
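The final answer can be sanity-checked numerically. Below is a sketch (not part of the original solution; the sample point x0 = 0.5 is an arbitrary choice) comparing a central finite difference of f(x) = (x^5 + 3x)^4 / cos(x) with the closed-form derivative:

```python
import math

def f(x):
    # The original function: (x^5 + 3x)^4 / cos(x)
    return (x**5 + 3*x)**4 / math.cos(x)

def f_prime(x):
    # The quotient-rule result derived above
    numerator = (4 * (x**5 + 3*x)**3 * (5*x**4 + 3) * math.cos(x)
                 + (x**5 + 3*x)**4 * math.sin(x))
    return numerator / math.cos(x)**2

x0, h = 0.5, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central difference approximation
rel_err = abs(numeric - f_prime(x0)) / abs(f_prime(x0))
print(rel_err < 1e-5)
```

A central difference has error of order h^2, so the relative error at a well-behaved point should be far below the 1e-5 tolerance used here.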
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9984633922576904, "perplexity": 759.2893315478989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150307.84/warc/CC-MAIN-20210724160723-20210724190723-00405.warc.gz"}
https://lifewithdata.com/2022/08/21/functions-in-python/
# Functions in Python

A function is a block of organized and reusable program code that performs a single, specific and well-defined task. Python enables its programmers to break up a program into functions, each of which can be written more or less independently of the others. Therefore, the code of one function is completely insulated from the code of the other functions. Every function interfaces to the outside world in terms of how information is transferred to it and how results generated by the function are transmitted back from it. This interface is basically specified by the function name. For example, we have been using functions such as input() to take input from the user, and print() to display some information on the screen.

## Defining a Function –

We can define a function using the keyword def followed by the function name. After the function name, we should write parentheses (), which may contain parameters. The syntax of a function is given below.

```python
def function_name(param1, param2, ...):
    """Function Docstring"""
    function statements
```

For example, we can write a function to add two numbers:

```python
def sum(a, b):
    """This function finds sum of two numbers."""
    c = a + b
    print('sum: ', c)
```

Here, def represents the starting of the function definition. 'sum' is the name of the function. After this name, parentheses () are compulsory as they denote that it is a function and not a variable or something else. In the parentheses, we wrote two variables a and b. These variables are called parameters. A parameter is a variable that receives data from outside into a function. So, this function can receive two values from outside, and those values are stored in the variables a and b. After the parentheses, we put a colon (:) that represents the beginning of the function body. The function body contains a group of statements called a suite. Generally, we should write a string as the first statement in the function body.
This string is called a Docstring and gives information about the function. Docstrings are generally written inside triple double quotes or triple single quotes. However, these docstrings are optional. That means it is not compulsory to write them, but writing docstrings is a good programming habit. After writing the docstring in the function, the next step is to write the other statements which constitute the logic of the function. This reflects how to do the task. In our example, we want to find the sum of two numbers. Hence, the logic would be:

```python
c = a + b
print('sum: ', c)
```

The parameters a and b contain the values which are added, and the result is stored into c. Then the result is displayed using the print() function. So this function can accept two values and display their sum.

## Calling a Function –

A function cannot run on its own. It runs only when we call it. So, the next step is to call the function using its name. While calling the function, we should pass the necessary values to the function in the parentheses as:

```python
# calling the sum function
sum(5, 10)

# output
# sum:  15
```

Here we are calling the sum function and passing two values, 5 and 10, to that function. When this statement is executed, the Python interpreter jumps to the function definition and copies the values 5 and 10 into the parameters a and b respectively. These values are processed in the function body and the result is obtained. The values passed to a function are called arguments. So 5 and 10 are arguments.

## Returning Results from a Function –

We can return the result or output from the function using the return statement in the body of the function. For example:

```python
return c        # returns c value out of function
return a, b, c  # returns 3 values
```

When a function does not return any result, we need not write the return statement in the body of the function. Let's rewrite our sum() function such that it will return the sum value rather than displaying it.
```python
def sum(a, b):
    """This function finds sum of two numbers."""
    c = a + b
    return c

# calling the sum function
x = sum(5, 10)
print("The sum is: ", x)

y = sum(2.5, 7.5)
print("The sum is: ", y)

# output
# The sum is:  15
# The sum is:  10.0
```

In the above program, the result is returned by the sum() function through c using the return statement:

```python
return c
```

When we call the function as

```python
x = sum(5, 10)
```

the result returned by the function comes into the variable x. Similarly, when we call the function as

```python
y = sum(2.5, 7.5)
```

the returned result will come into y.

## Returning Multiple Values from a function –

In Python, a function can return multiple values. When a function calculates multiple results and wants to return the results, we can use the return statement as

```python
return a, b, c
```

Here, three values, which are a, b and c, are returned. These values are returned by the function as a tuple. To grab these values, we can use three variables at the time of calling the function as

```python
x, y, z = function()
```

Here, the variables x, y and z receive the three values returned by the function. To understand this practically, we can create a function by the name sum_sub() that takes 2 values and calculates the result of addition and subtraction. These results are stored in the variables c and d and returned as a tuple by the function.

```python
def sum_sub(a, b):
    c = a + b
    d = a - b
    return c, d
```

Since this function has two parameters, at the time of calling this function, we should pass two values as

```python
x, y = sum_sub(10, 5)
```

Now, the result of addition, which is in c, will be stored into x and the result of subtraction, which is in d, will be stored into y. This is shown in the below program.
```python
# a function that returns two results
def sum_sub(a, b):
    """This function returns results of addition and subtraction of a and b"""
    c = a + b
    d = a - b
    return c, d

# get the results from the sum_sub() function
x, y = sum_sub(10, 5)

# display the results
print("Result of addition: ", x)
print("Result of subtraction: ", y)
```

## Formal and Actual Arguments –

When a function is defined, it may have some parameters. These parameters are useful to receive values from outside of the function. They are called formal arguments. When we call the function, we should pass data or values to the function. These values are called actual arguments. In the following code, a and b are formal arguments and x and y are actual arguments.

```python
def sum(a, b):  # a, b are formal arguments
    c = a + b
    print(c)

# call the function
x = 10
y = 15
sum(x, y)  # x, y are actual arguments
```

The actual arguments used in a function call are of 4 types:

• Positional arguments
• Keyword arguments
• Default arguments
• Variable length arguments

## Positional Arguments –

These are the arguments passed to a function in correct positional order. Here, the number of arguments and their positions in the function definition should match exactly with the number and position of the arguments in the function call. Let's write a function to demonstrate it.

```python
# Positional arguments
def print_name(first, last):
    """Print the name of a person"""
    name = first + " " + last
    print("Name: ", name)

# call the function
print_name('Nick', 'Miller')

# output
# Name:  Nick Miller
```

The above function expects two strings, the first name and the last name, in that order. So, when we call this function, we are supposed to pass only two strings, in that order. Suppose we pass the last name first and the first name last; then calling the function will produce a different outcome than expected. Also, if we try to pass more or fewer than 2 strings, then Python will throw an error.
For example, if we call this function by passing the first name, the middle name and the last name,

```python
print_name('Nick', 'Smith', 'Miller')
```

then Python will throw an error.

## Keyword Arguments –

Keyword arguments are arguments that identify the parameters by their names. For example, when calling the above function, we can mention which is the first name and which is the last name as shown below.

```python
print_name(first='Nick', last='Miller')
```

Here, Nick is assigned to first and Miller is assigned to last. Now, even if we change the order of the arguments, as shown below,

```python
print_name(last='Miller', first='Nick')

# output
# Name:  Nick Miller
```

we will get the correct output.

## Default Arguments –

Python allows users to specify function arguments that can have default values. This means that a function can be called with fewer arguments than it is defined to have. That is, if the function accepts three parameters but the function call provides only two arguments, then the third parameter will be assigned the default (already specified) value. The default value for an argument is provided by using the assignment operator ( = ). Users can specify default values for one or more arguments. A default argument assumes its default value if a value is not provided in the function call for that argument. Let's write a program to demonstrate it.

```python
def greet(name, msg='Good morning!'):
    """This function greets a person"""
    print('Hi', name + ', ' + msg)

greet('Nick')
greet('Nick', 'How are you doing?')

# output
# Hi Nick, Good morning!
# Hi Nick, How are you doing?
```

In the above program, name does not have a default value, so it is mandatory to provide its value when calling the function. But msg has a default value, so it is optional during the function call. If a value is provided, it will overwrite the default value. You can specify any number of default arguments in your function. If you have default arguments, then they must be written after the non-default arguments.
This means that non-default arguments cannot follow default arguments. Therefore, the following line of code

```python
def greet(msg='Good morning!', name):
```

will produce an error:

SyntaxError: non-default argument follows default argument

## Variable Length Arguments –

In some situations, it is not known in advance how many arguments will be passed to a function. In such cases, Python allows programmers to make function calls with an arbitrary (or any) number of arguments. When we use arbitrary arguments or variable length arguments, the function definition uses an asterisk ( * ) before the parameter name. Let's write a program to demonstrate that.

```python
def greet(*names):
    """This function greets all people in the names tuple."""
    for name in names:
        print("Hello", name)

greet('Nick', 'Jessica', 'Schmidt', 'Cece', 'Winston')

# output
# Hello Nick
# Hello Jessica
# Hello Schmidt
# Hello Cece
# Hello Winston
```

Here, we called the function with multiple arguments. It's up to you how many names you pass here. The arbitrary number of arguments passed to the function basically forms a tuple before being passed into the function. Inside the called function, a for loop is used to access the arguments. The variable length arguments, if present in the function definition, should be the last in the list of formal parameters. Any formal parameters written after the variable length arguments must be keyword-only arguments.

## Local and Global Variables –

When we declare a variable inside a function, it becomes a local variable. A local variable is a variable whose scope is limited only to the function where it is created. That means the local variable's value is available only in that function and not outside of it. In the following example, the variable a is declared inside increment() and hence is available inside that function. Once we come out of the function, the variable a is removed from memory and is not available.
```python
# local variable in a function
def increment():
    a = 1
    a += 1
    print(a)

increment()
print(a)  # error
```

The last statement in the above code results in an error, NameError: name 'a' is not defined, as we are trying to access a local variable outside of the function. When a variable is declared above a function, it becomes a global variable. Such variables are available to all the functions which are written after it.

```python
# global variable example
a = 1  # this is a global var

def myFunction():
    b = 2  # this is a local var
    print('a= ', a)  # display global var
    print('b= ', b)  # display local var

myFunction()
print(a)  # available
print(b)  # error, not available
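The examples above show that a global variable can be read inside a function. To rebind (assign to) a global name inside a function, the global statement is additionally needed; otherwise the assignment creates a new local variable instead. A small sketch (the names here are illustrative, not from the original tutorial):

```python
count = 0  # a global variable

def increment_counter():
    global count  # rebind the module-level 'count' rather than creating a local
    count += 1    # without the global statement, this line raises UnboundLocalError

increment_counter()
increment_counter()
print(count)  # -> 2
```

This is why the increment() example earlier could only print its own local a: without global, assignments inside a function never touch module-level names.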
https://www.vedantu.com/iit-jee/atoms-and-nuclei
# JEE Important Chapter - Atoms and Nuclei

## Important Concepts of Atoms and Nuclei for JEE

The chapter Atoms and Nuclei introduces the structure of atoms and nuclei. It is divided into two parts. The first, on atoms, covers Rutherford's atomic model, atomic spectra, Bohr's model of the hydrogen atom and the energy level diagram. The second covers nuclear size, nuclear density, nuclear binding energy, radioactivity, and nuclear fission and fusion. Now, let's move on to the important concepts and formulae for the JEE Main physics exam, along with a few solved examples.

### Important Topics of Atoms and Nuclei

• Rutherford's atomic model and its limitations
• Bohr model of the hydrogen atom
• Bohr's explanation of the spectral series of the hydrogen atom
• Nuclear size and nuclear density
• Mass-energy relation and nuclear binding energy
• Alpha, beta and gamma decay
• Nuclear fission and fusion

### Atoms and Nuclei Important Concepts for JEE

1. Rutherford's atomic model and its limitation
Every atom consists of a tiny central core, called the atomic nucleus, in which the entire positive charge and almost the entire mass of the atom are concentrated. The atom as a whole is electrically neutral, and electrons revolve around the nucleus in circular orbits. Since a revolving electron accelerates, it should lose energy continuously, spiral inwards and eventually fall into the nucleus, and atoms should therefore emit a continuous spectrum; what we observe instead is a line spectrum. Rutherford's model cannot explain this, which is its biggest limitation.

2. Bohr's model of the hydrogen atom
Bohr gave three postulates: (i) Every atom consists of a central core, called the nucleus, which contains the positive charge, and electrons revolve around it in circular orbits. (ii) Electrons can revolve only in certain discrete non-radiating orbits, called stationary orbits, for which the total angular momentum of the revolving electron is an integral multiple of $\dfrac{h}{2\pi}$, where $h$ is Planck's constant. (iii) The emission or absorption of energy occurs only when an electron jumps from one of its permitted orbits to another; the difference in the total energy of the electron in the two orbits is absorbed when the electron jumps from an inner to an outer orbit, and emitted in the reverse jump.

3. Bohr's explanation of the spectral series of the hydrogen atom
The Lyman series is obtained when an electron jumps to the first orbit from any outer orbit; it lies in the ultraviolet region. The Balmer series is obtained when an electron jumps to the second orbit from any outer orbit; it lies in the visible region. The Paschen series (jumps to the third orbit) lies in the infrared region, as do the Brackett series (fourth orbit) and the Pfund series (fifth orbit).

4. Nuclear size and nuclear density
A nucleus' volume is proportional to its mass number. The density of nuclear matter is the ratio of the mass of the nucleus to its volume; its value is about $2.29 \times 10^{17}\, kg/m^3$.

5. Mass-energy relation and nuclear binding energy
The mass-energy relation tells us that when a certain mass $m$ disappears, an equivalent amount of energy $E$ appears, and vice versa. The binding energy of a nucleus is the energy with which the nucleons are bound together; it is measured by the work necessary to separate the nucleons to an infinite distance from the nucleus.

6. Radioactivity
Radioactivity is the process by which a heavy element disintegrates by itself, without being forced to do so by any external agent. According to the law of radioactive disintegration, the number of atoms disintegrating per second at any instant is directly proportional to the number of radioactive atoms actually present in the sample at that instant: $-\dfrac{dN}{dt}\varpropto N$. The half-life of a radioactive element is the time it takes for half of the atoms present in the sample to decay. The average (mean) lifetime of the element is obtained by adding the lifetimes of all the atoms and dividing by the total number of atoms initially present in the sample.

7. Alpha, beta and gamma decay
Alpha decay is the emission of an alpha particle from a radioactive nucleus, e.g. $U^{238}_{92}\longrightarrow Th^{234}_{90}+He^4_2$. Beta decay is the emission of an electron from a radioactive nucleus, e.g. $Th^{234}_{90}\longrightarrow Pa^{234}_{91}+e^0_{-1}$. Gamma decay is the emission of gamma-ray photons from an excited nucleus, e.g. $Ni^{60*}_{28}\longrightarrow Ni^{60}_{28}+\gamma$.

8. Nuclear fission and fusion
Nuclear fission is the splitting of a heavy nucleus (usually $A>230$) into two or more lighter nuclei. Nuclear fusion is the process by which two or more lighter nuclei combine to produce a single heavier nucleus.

### List of Important Atoms and Nuclei Formulas
1. Rutherford's atomic model
The total energy of the electron in an orbit of radius $r$ is $E=-\dfrac{e^2}{8\pi \epsilon_o r}$.

2. Bohr's model of the hydrogen atom
From the 1st postulate: $\dfrac{mv^2}{r}=\dfrac{KZe^2}{r^2}$, where $v$ is the speed of the electron and $r$ the radius of the orbit.
From the 2nd postulate: $mvr=\dfrac{nh}{2\pi}$.
From the 3rd postulate: $E_2-E_1=h\nu$.
The orbital frequency of the electron in Bohr's stationary orbit is $\nu=\dfrac{KZe^2}{nhr}$.
The total energy of the electron in Bohr's stationary orbit (for hydrogen) is $E=-\dfrac{13.6}{n^2}\,eV$.

3. Bohr's explanation of the spectral series of the hydrogen atom
The wave number $\overline{\nu}$ of the radiation emitted when an electron jumps from one orbit to another is $\overline{\nu}=RZ^2\left[\dfrac{1}{n^2_1}-\dfrac{1}{n^2_2}\right]$. For hydrogen $Z=1$, and the different series correspond to different values of $n_1$ and $n_2$.

4. Nuclear size and nuclear density
The relation between the radius $R$ and mass number $A$ of the nucleus is $R=R_oA^{1/3}$, where $R_o$ is a constant of value $1.2 \times 10^{-15}\,m$.

5. Mass-energy relation and nuclear binding energy
The mass-energy relation is $E=mc^2$. The binding energy of a nucleus is $B.E.=[Zm_H+(A-Z)m_n-m(X_Z^A)]c^2$, where $Z$ is the charge number, $A$ the mass number, $m_H$ the mass of a hydrogen atom, $m_n$ the mass of a neutron and $m(X_Z^A)$ the mass of the nucleus.

6. Radioactivity
From the law of radioactive decay, $-\dfrac{dN}{dt}\varpropto N$, so $R=-\dfrac{dN}{dt}= \lambda N$, where $\lambda$ is the disintegration constant.
Half-life: $T_{1/2}=\dfrac{0.693}{\lambda}$. Average life: $\tau= \dfrac{1}{\lambda}$. Activity: $A=A_o e^{-\lambda t}$.

### Difference Table Between Nuclear Fission and Fusion

| Nuclear Fission | Nuclear Fusion |
| --- | --- |
| Occurs when the nucleus of an atom splits into lighter nuclei as a result of a nuclear reaction. | Occurs when two or more light nuclei collide and fuse, forming a heavier nucleus. |
| A great quantity of energy is released when each atom splits. | The energy released per unit mass is many times greater than that released in fission. |
| Fission reactions do not occur spontaneously in nature. | Stars and the Sun undergo fusion processes. |
| A fission process requires little energy to split an atom. | A large amount of energy is required to fuse two or more nuclei together. |
| The atomic bomb works on the principle of nuclear fission. | The hydrogen bomb works on the principle of nuclear fusion. |

### Solved Examples

1. Calculate the number of revolutions per second made by the electron in the third Bohr orbit of a hydrogen atom.

Sol: Given, Bohr orbit $n = 3$. We have to calculate the frequency of revolution $\nu$. To do so, we use the second postulate of Bohr's atomic model, according to which the angular momentum of the electron is an integral multiple of $\dfrac{h}{2 \pi}$.
According to the second postulate of Bohr's atomic model,

$mvr=\dfrac{nh}{2\pi}$

Since $v=\omega r$, putting this in the above equation gives

$m\omega r^2=\dfrac{nh}{2\pi}$

$m (2\pi \nu) r^2=\dfrac{nh}{2\pi}$ (as $\omega=2\pi \nu$)

$\nu = \dfrac{nh}{4\pi^2 m r^2}$

Now, putting in the mass of the electron ($m=9.1 \times 10^{-31}\,kg$), Planck's constant ($h=6.6 \times 10^{-34}\, J\,s$), the radius of the orbit ($r=0.53 \times 10^{-10}\,m$) and $n=3$, we get

$\nu = \dfrac{3 \times 6.6 \times 10^{-34}}{4 \times (3.14)^2 \times 9.1 \times 10^{-31} \times (0.53 \times 10^{-10})^2}$

$\nu = \dfrac{19.8 \times 10^{-34} }{100.81 \times 10^{-51}} = 0.1964 \times 10^{17}\,rev/sec = 1.964 \times 10^{16}\,rev/sec$

Hence, the frequency of revolution of the electron in the third orbit of a hydrogen atom is $1.964 \times 10^{16}\,rev/sec$.

Key point: Knowledge of the second postulate of Bohr's atomic model is necessary to solve this problem.

2. A radioactive substance's mean life is 1500 years for alpha emission and 300 years for beta emission. Calculate the time it takes for three-quarters of a sample to decay if it decays by both alpha and beta emission at the same time. (Take $\log_{10}4= 1.386$)

Sol: Given,
Mean life for alpha emission, $\tau_\alpha$ = 1500 years
Mean life for beta emission, $\tau_\beta$ = 300 years
Number of nuclei remaining, $N={N_o} - \dfrac{3}{4}{N_o}=\dfrac{N_o}{4}$

To find: the time $t$ during which three-quarters of the sample decays. To solve this problem we apply the concept of mean lifetime together with the law of radioactive disintegration.
Let $\lambda_\alpha$ and $\lambda_\beta$ be the decay constants for alpha and beta emission. Using the mean-lifetime formula,

$\lambda_\alpha=\dfrac{1}{\tau_\alpha}=\dfrac{1}{1500}$ and $\lambda_\beta=\dfrac{1}{\tau_\beta}=\dfrac{1}{300}$

The total decay constant is

$\lambda=\lambda_\alpha+\lambda_\beta= \dfrac{1}{1500}+\dfrac{1}{300}=\dfrac{6}{1500}=\dfrac{1}{250}\,yr^{-1}$

From the law of disintegration, $N=N_o e^{-\lambda t}$. Putting $N=\dfrac{N_o}{4}$ gives

$\dfrac{N_o}{4}=N_o e^{-\lambda t}$

$t= \dfrac{\ln 4}{\lambda}$

Using $\ln 4 = 1.386$ (note that the supplied value 1.386 is actually $\ln 4$, not $\log_{10}4$):

$t = 1.386 \times 250 = 346.5$ years

Hence, the time taken for three-quarters of the sample to decay by simultaneous alpha and beta emission is about 346.5 years.

Key point: The concepts of the mean lifetime of a radioactive sample and the law of radioactive disintegration are needed to solve this problem.

### Previous Year Questions from JEE Papers

1. There are $10^{10}$ radioactive nuclei in a given radioactive element, and its half-life is 1 minute. How many nuclei will remain after 30 seconds? (Take $\sqrt{2}=1.414$) (JEE Main 2021)

a. $7 \times 10^{9}$
b. $2 \times 10^{9}$
c. $4 \times 10^{10}$
d. $10^{5}$

Sol: Given,
Original number of nuclei, ${N_o} = 10^{10}$
Half-life, $t_{1/2} = 60\,s$

To find: the number of nuclei $N$ remaining after $t$ = 30 seconds. To solve this problem we use the concept of half-life together with the relation between the number of nuclei remaining and the original number of nuclei in the sample.
Using the concept of half-life, we can write the relation between the number of nuclei remaining ($N$) and the original number of nuclei ($N_o$) as

$\dfrac{N}{N_o}=\left(\dfrac{1}{2}\right)^{t/t_{1/2}}$

Putting in the values,

$\dfrac{N}{10^{10}}=\left(\dfrac{1}{2}\right)^{30/60}=\left(\dfrac{1}{2}\right)^{1/2}$

$N= \dfrac{10^{10}}{\sqrt{2}} \approx 7 \times 10^{9}$

Hence, the number of nuclei remaining after 30 seconds is about $7 \times 10^{9}$, so option (a) is correct.

Key point: The half-life concept and the relation between the number of nuclei remaining and the original number of nuclei in a sample are needed to solve this problem.

2. A sample of a radioactive nucleus A disintegrates to another radioactive nucleus B, which in turn disintegrates to some other stable nucleus C. Which graph shows the variation of the number of atoms of nucleus B versus time? (Assume that at $t = 0$ there are no B atoms in the sample.) (JEE Main 2021)

(The four answer options are graphs, not reproduced here.)

Sol: Given that initially the number of atoms of $B$ is zero at $t=0$, and the end product $C$ is stable. We have to find the behaviour of the population of $B$ with time as A decays into B and B decays into C.

To answer this question we use the concept of growth and decay of a radioactive sample. Since the number of $B$ atoms is zero at $t=0$, the graph must start from the origin. The number of atoms of B increases until the rate of decay of B equals its rate of production, at which point it reaches a maximum. Because growth and decay are both exponential functions, the number of atoms decreases after that maximum, hence the best possible graph is (d).

Therefore, option (d) is correct.

Key point: The concept of growth and decay of a radioactive sample is essential to solve this problem.

### Practice Questions
1. Which level of doubly ionised lithium has the same energy as the ground-state energy of a hydrogen atom? Compare the orbital radii of the two levels. (Ans: 3, 3)

2. The decay constant of the radioactive nuclide $Cu^{64}$ is $1.5 \times 10^{-5}\,s^{-1}$. Find the activity of a sample containing $1\,\mu g$ of $Cu^{64}$. The atomic weight of copper is 63.5 g/mol; the mass difference between the radioisotope and ordinary copper is ignored. (Ans: 3.87 Ci)

### Conclusion

In this article we have discussed the models of atomic structure: Rutherford's model and Bohr's model. Bohr's model is widely accepted because it properly explained the line spectra of the hydrogen atom, which Rutherford's model could not. We have also listed the important atoms and nuclei formulas and discussed topics such as nuclear size, nuclear density, radioactivity, the mass-energy relation and the binding energy of nuclei.
## FAQs on JEE Important Chapter - Atoms and Nuclei

1. What is the weightage of Atoms and Nuclei in the JEE exam?

This chapter comes under Modern Physics, from which approximately 1-2 questions are asked every year, giving the chapter a weightage of roughly 2-3% in the exam.

2. What is the difficulty level of the questions from the Atoms and Nuclei chapter?

As this chapter comes under Modern Physics, the difficulty level of the questions asked is quite high. Since it deals with the modern edge of physics research and development, it is important to study the chapter for the exam rather than skipping it.

3. Is it genuinely advantageous to revise previous years' Atoms and Nuclei questions for this exam?

Practising previous years' problems helps you score well and become familiar with the exam's difficulty level. It builds confidence while exposing areas where you can improve. Solving the past ten to fifteen years' question papers helps you better understand a concept and also shows how often a concept or topic is repeated in the test.
It is also good to prepare notes for the Atoms and Nuclei chapter while practising the previous years' problems.
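The decay relations quoted in this chapter are easy to check numerically. The following Python sketch (our own illustration, not part of the original article) verifies the JEE Main 2021 half-life question and the relations between decay constant, half-life and mean life:

```python
import math

# JEE Main 2021 question: N0 = 10^10 nuclei, half-life 60 s.
# How many remain after t = 30 s?  N = N0 * (1/2)**(t / t_half)
N0 = 1e10
t_half = 60.0
t = 30.0
N = N0 * 0.5 ** (t / t_half)
print(f"{N:.3e} nuclei remain")        # about 7.07e9, i.e. option (a)

# Relations between decay constant, half-life and mean life:
lam = math.log(2) / t_half             # lambda = 0.693 / T_half
tau = 1.0 / lam                        # mean life tau = 1 / lambda
print(f"lambda = {lam:.5f} per s, mean life = {tau:.1f} s")
```

The same pattern, with lambda = 1/tau_alpha + 1/tau_beta, reproduces the combined alpha-and-beta decay example solved above.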
http://www.ck12.org/physical-science/Properties-of-Bases-in-Physical-Science/quiz/Properties-of-Bases-Quiz-MS-PS/r1/
# Properties of Bases

## Defines what a base is and discusses their common characteristics.

Properties of Bases Quiz - MS PS: Reviews the definition and properties of bases, how to detect bases, how the strength of bases is measured, and uses of bases.
http://mathhelpforum.com/trigonometry/10156-couple-more-problems.html
# Math Help - Couple more problems 1. ## Couple more problems I have two problems to finish my worksheet and was hoping for some help. First one is...find the smallest positive angle from the positive x-axis to the vector OP that corresponds to (3,3). I think it is pi/4 since that is the smallest angle and still positive. Second one is...determine m such that the two vectors 3i-9j and mi+2j are orthogonal. I could apply the dot product to it but the orthogonal makes me think I may not really be thinking about it the right way. 2. Originally Posted by gretchen I have two problems to finish my worksheet and was hoping for some help. First one is...find the smallest positive angle from the positive x-axis to the vector OP that corresponds to (3,3). I think it is pi/4 since that is the smallest angle and still positive. Second one is...determine m such that the two vectors 3i-9j and mi+2j are orthogonal. I could apply the dot product to it but the orthogonal makes me think I may not really be thinking about it the right way. You have the first one right. Good job! And your approach to the second one is correct also. If the word "orthogonal" is confusing you, in this context it merely means "perpendicular." The dot product between two perpendicular vectors is 0. So: $(3i - 9j) \cdot (mi + 2j) = 0$ $3m - 18 = 0$ $m = 6$ -Dan
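Both answers in this thread can be double-checked numerically; here is a short Python sketch of our own (not part of the forum posts):

```python
import math

# Angle of OP for P = (3, 3), measured from the positive x-axis.
theta = math.atan2(3, 3)
print(math.isclose(theta, math.pi / 4))  # True: the smallest positive angle is pi/4

# Orthogonality: (3i - 9j) . (mi + 2j) = 3m - 18 = 0  =>  m = 6
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

m = 18 / 3
print(dot((3, -9), (m, 2)))  # 0.0, so the vectors are perpendicular
```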
https://ja.overleaf.com/articles/my-final-proof-journal/fgcttsgqrkvy
# My Final Proof Journal

Author: Chesyti Brown

Abstract: This is all preamble stuff that you don't have to worry about. Head down to where it says "Start here".
https://chemistry.stackexchange.com/questions/49578/copper-chloride-color-in-electrolysis-of-salty-water
# Copper chloride color in electrolysis of salty water

I tried a simple water electrolysis experiment at home, with $\ce{NaCl}$ as the electrolyte and a $12 \,\rm{V}$ battery. My wires were made of copper, and $\ce{H2}$ bubbles were only forming at the negative terminal (cathode). On the positive side (anode), nothing was happening. After $3$-$4$ minutes, the water started to change to a greenish-blue color, and I realized it must be copper reacting with chloride ions. After $15$ minutes, my solution was light blue with excess $\ce{NaCl}$ at the bottom; after taking the wires out, the solution started to turn yellow over about $10$ minutes. What was the yellow solution, and why didn't the anode produce any oxygen? I'm also worried that the yellow solution might be chlorine, because I later boiled the solution.

The color of the solution is probably due to the presence of blue copper(II) ions ($\ce{Cu^2+}$) and greenish tetrachlorocuprate(II) ions ($\ce{[CuCl4]^2-})$. The change in color comes from the equilibrium

$$\ce{Cu^2+ + 4 Cl- <=> [CuCl4]^2-}$$

Low concentrations of chloride favor the formation of the blue copper ions. High concentrations of chloride favor the formation of the greenish tetrachlorocuprate ions. If a significant amount of chlorine had been created, you should have observed bubbles at the anode and the typical chlorine smell.

• I wouldn't expect any significant amount of chlorine (or oxygen, for that matter) to appear in this setup. Dissolving the anode is so much easier. – Ivan Neretin Apr 16 '16 at 17:43
• Well, I didn't observe any bubbles at the anode, but I think it was chlorine I was smelling (it smelled just like swimming pools). I've just noticed a bright green color around the anode, which I'm assuming is tetrachlorocuprate(II). After a few hours the solution now looks almost clear, yet a large amount of blue and yellow precipitate is floating at the bottom.
I'm leaving it outside waiting for the water to evaporate. If the blue precipitate is copper(II) chloride, then what is the yellow precipitate? – alend ahmed Apr 16 '16 at 20:26
• Copper(II) chloride is readily soluble in water, so I would not expect it to precipitate. I guess that the precipitate rather contains copper(II) hydroxide. – aventurin Apr 17 '16 at 16:54
https://ccrma.stanford.edu/~bilbao/master/node155.html
Next: Type I: Voltage-centered Network Up: Transverse Motion of the Previous: Finite Differences ## Waveguide Network for the Euler-Bernoulli System It is possible to design a waveguide network which simulates the behavior of equation (5.1), but there are some extra features we must add which were not necessary in the case of the transmission line. In addition, the overloaded symbols for the wave variables become even more overloaded, due to the fact that we can no longer interleave the two dependent variables spatially, and are faced with a double set of wave variables at every grid point. (This can be remedied with recourse to other more involved difference methods, but we will not pursue this subject here.) The structure of interest is shown in Figure 5.1. This is still a (1+1)D waveguide network, like that which simulates the (1+1)D transmission line equations, but we have drawn the junctions which calculate and separately; it should be kept in mind that they operate at the same spatial locations. As before, we use grey/white coloring of junctions to signify operation at different time steps. Here we have interpreted (which we will identify with of difference scheme (5.5), and thus with ) as a voltage-like variable, and as the current-flow. Also as before, the diagram above is correct when we are using voltage-like wave variables as our signals. Figure 5.2 gives the complete picture of the wave quantities and immittances in the network. We note that the wave variables at the series scattering junction at location are indicated by a tilde, to distinguish them from those at the parallel junction at the same location, even though the two sets of variables are calculated at alternate time steps. As for the (1+1)D transmission line, we index wave variables and immittances at the left and right ports of any junction by and respectively, and the same such quantities associated with any self-loop are subscripted with . 
We also have new waveguides connecting parallel and series junctions at the same grid point; immittances and wave variables are subscripted with a in this case. With reference to Figure 5.2, we can define the junction admittance at the parallel junction, and the junction impedance at the series junction to be (5.7) (5.8) It should be clear that this waveguide network is really a pair of coupled (1+1)D transmission lines; the coupling is via the waveguide connecting the series and parallel junctions at the same grid location (the vertical waveguide in Figure 5.1). The factors of -2 and in the "coupling" waveguide in Figure 5.3 deserve some extra commentary. Figure 5.3 shows an equivalence between two waveguide configurations. On the left, we have two identical waveguides with delay time steps and impedance connected between a parallel junction and a series junction; in approximating a second derivative by centered differences, as we are indeed doing in (5.5), we need such a configuration so as to double the strength of the wave variable coming from the same grid location at the previous time step relative to that of those entering from the neighboring junctions. The equivalent form on the right, which introduces a transformer with turns ratio -2, serves to reduce the pair of waveguides to a single one, accompanied by two multiplications (the waveguide transformer is identical to the wave digital transformer, discussed in §2.3.4). This implies that the port impedance at a parallel junction at grid point in Figure 5.1 and the port admittance at the series junction at the same point must be related by (5.9) (This equivalence can easily be derived through the manipulation of hybrid matrices for bidirectional delay lines, as per the methods discussed in §4.10.2.) Also note the sign inversions in this central coupling waveguide with respect to the left- and right-going waveguides.
We now trace the signal flow in the network to show that it does indeed solve the Euler-Bernoulli system. Beginning from a series junction at grid point , we have: which is identical to (5.5b) if we replace by and by , and if we have (5.10) Beginning from the series junction, we arrive at a similar requirement for , namely (5.11) As in the case of the transmission line, three families of waveguide networks are distinguishable: Stefan Bilbao 2002-01-22
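As an aside, the scattering computed at each parallel junction of such a network can be sketched in a few lines. This is the generic lossless parallel (Kirchhoff) junction of digital waveguide theory, not the specific Euler-Bernoulli scheme of (5.5); the port admittances and incoming wave amplitudes below are arbitrary illustrative numbers.

```python
# Minimal sketch of a lossless parallel (Kirchhoff) scattering junction,
# the building block used at each grid point of a waveguide network.
# Port admittances Y and incoming voltage waves v_in are illustrative.

def parallel_scatter(Y, v_in):
    """Scatter incoming voltage waves at a lossless parallel junction.

    Junction voltage: V_J = 2 * sum(Y_i * v_i^+) / sum(Y_i)
    Outgoing waves:   v_i^- = V_J - v_i^+
    """
    VJ = 2.0 * sum(y * v for y, v in zip(Y, v_in)) / sum(Y)
    return [VJ - v for v in v_in]

Y = [1.0, 2.0, 0.5]            # port admittances (illustrative)
v_in = [0.3, -0.1, 0.2]        # incoming voltage waves
v_out = parallel_scatter(Y, v_in)

# Losslessness check: incident power equals reflected power,
# where the power carried by a voltage wave v on a port is Y * v^2.
p_in = sum(y * v * v for y, v in zip(Y, v_in))
p_out = sum(y * v * v for y, v in zip(Y, v_out))
print(p_in, p_out)
```

The power check passes for any choice of admittances, which is exactly the passivity property that makes waveguide networks numerically robust.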
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9097371697425842, "perplexity": 765.2160481505178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859766.6/warc/CC-MAIN-20180618105733-20180618125733-00171.warc.gz"}
https://www.impactmatters.org/nonprofit/20-0291592/
Impact Rating

# Water for South Sudan

Top Clean Water Nonprofit

Impact: $2 provides clean water to a person for a year. Well Drilling Program meets the benchmark for high cost-effectiveness. The nonprofit's cost to provide clean water is less than 75% of the local costs. Note: The impact of this program may not be representative of the entire operation of Water for South Sudan.

Governance: Passes checks

Mission: Water for South Sudan delivers direct, transformative and sustainable quality-of-life service to the people of South Sudan by efficiently providing access to clean, safe water and improving hygiene and sanitation practices in areas of great need.

Cause: Clean Water
Rated Program: Well Drilling Program
Program Geography: South Sudan
Headquarters: Rochester, NY

Donations processed by the nonprofit.

#### Rated Program

###### Program
Well Drilling Program

###### Activities
Water for South Sudan drills wells for remote villages in the Bahr el Ghazal region of South Sudan. The placement of the wells is determined in collaboration with government and community leaders. Water Access

###### Beneficiaries Served
People living in poverty

###### Geography
South Sudan

Outcomes: Changes in people's lives. They can be caused by a nonprofit.
Costs: The money spent by nonprofits and their partners and beneficiaries.
Impact: The cost to achieve an outcome.
Cost-effectiveness: A judgment as to whether the cost was "worth" the outcome.

#### Outcomes

###### Outcome Metric
A year of clean water provided to a person. To calculate impact, we estimate how many outcomes the nonprofit caused.

###### Data Source
Outcome data collected through interviews with village members and feedback from Water for South Sudan field staff who visit well sites.

###### Time Period of Data
Sept. 1, 2017, to Aug.
31, 2018

Ratings are based on data the nonprofit itself collects on its work. We use the most recent year with sufficient data. Typically, this data allows us to calculate direct changes in participants' lives, such as increased income.

###### Method for Attributing Outcomes
Water for South Sudan conjectures that only 20 percent of currently served individuals would receive water wells without its program. Unlike Water for South Sudan, larger nonprofits in South Sudan do not typically go to remote villages due to distance and costs. To determine causation, we take the outcomes we observe and subtract an estimate of the outcomes that would have happened even without the program.

#### Cost

###### Data Source
Cost data reported by Water for South Sudan and data and assumptions about partner and beneficiary costs. All monetary costs are counted, whether they are borne by a nonprofit service deliverer or by the nonprofit’s public and private partners.

#### Impact

###### Impact Statement
$2 provides clean water to a person for a year. We calculate impact, defined as the change in outcomes attributable to a program divided by the cost to achieve those outcomes.

#### Rating

###### Benchmark for Rating
Impact ratings of water access programs are based on the cost to provide clean water relative to the market cost that a person incurs to buy water in that country. Programs receive 5 stars if they provide water for less than 75% of the estimated market costs, and 4 stars if they do so for less than 125%. If a nonprofit reports impact but doesn't meet the benchmark for cost-effectiveness, it earns 3 stars.

###### Determination
The nonprofit's cost to provide clean water is less than 75% of the local costs.

#### Why We Could Be Wrong

We welcome your suggestions for improving our methodology. Our methodology section includes explanations of how we mitigate these issues.

• The outcome could oversimplify total impact.
Using “person-years of clean water” as the metric of analysis does not capture differences in the water quantity, reliability and ease of access.
• There could be multiple important outcomes not captured in our analysis.
• Water for South Sudan may be spending additional money in order to serve harder-to-reach and/or particularly valuable populations.
• In the absence of better data, we assume uniform counterfactual rates for programs, at the risk of masking variation across nonprofits.
• Our estimates rely on data made public by Water for South Sudan on its website, annual reports, financial statements and Form 990s.
• We only analyze programs that meet our criteria. As a result, this report may not fully reflect the impact of Water for South Sudan.
• We do not assess what explains the nonprofit's cost-effectiveness.

We assign a rating to the nonprofit using the rubric:
• There are indications of governance or financial health issues at the nonprofit.
• After being given an opportunity, the nonprofit chose not to publish impact information. We are not yet issuing this level of star rating.
• The rated program does not meet our benchmark for cost-effectiveness.
• The rated program is cost-effective.
• The rated program is highly cost-effective.

#### Nonprofit Comment

Not provided. This may be because we lacked contact information for Water for South Sudan or it chose not to comment. If you are a representative of this nonprofit, contact us to review and comment on your rating. Before publishing, we ask every nonprofit we can to review our work, offer corrections and provide a comment.

###### Analysis Details
Analysis conducted by ImpactMatters and published on November 22, 2019. This rating is based on data reported by Water for South Sudan using ImpactMatters' impact reporting platform. Using data from the platform, ImpactMatters analysts calculated impact and assigned a rating. We welcome corrections.
If you are interested in exploring applications of ImpactMatters data, contact us at partnerships@impactmatters.org.

#### Governance Check

Water for South Sudan passes our governance check.
• Overhead spending is reasonable (<35% of total spending)
• Charity Navigator has not issued a fraud or mismanagement advisory
• Water for South Sudan itself has not reported any material diversions of assets
• Water for South Sudan itself has not reported any excess benefit transactions

Source: Water for South Sudan Form 990 and Charity Navigator

#### How We Calculate Impact

This rating is based on ImpactMatters analysis of the impact of Well Drilling Program relative to costs. Impact is the change in the social outcomes of people served by the program, net of the change that would have happened even without the program (the “counterfactual”), divided by cost.

A guide to our process for analyzing nonprofits and assigning ratings. Learn about best practices for reporting impact for different program types. Our collected guidelines on how we analyze impact of nonprofit programs. Rating is a complex exercise and we urge you to read our frequently asked questions for details of how and why we issue these ratings.
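The counterfactual-adjusted arithmetic described above can be sketched as follows. All input figures are hypothetical (the report does not publish the raw outcome and cost data); they are chosen only to reproduce the "$2 per person-year" headline and the 5-star threshold logic.

```python
# Sketch of the rating arithmetic: attribute outcomes net of the
# counterfactual, divide cost by attributed outcomes, and compare to the
# local market cost. All numbers below are hypothetical.

def impact_rating(total_cost, person_years_served, counterfactual_rate,
                  market_cost_per_person_year):
    """Return (cost per attributed person-year of water, star rating)."""
    # Subtract the share of outcomes estimated to have happened anyway
    # (20% in Water for South Sudan's case).
    attributed = person_years_served * (1.0 - counterfactual_rate)
    cost_per_outcome = total_cost / attributed
    ratio = cost_per_outcome / market_cost_per_person_year
    if ratio < 0.75:
        stars = 5        # highly cost-effective
    elif ratio < 1.25:
        stars = 4        # cost-effective
    else:
        stars = 3        # reports impact but misses the benchmark
    return cost_per_outcome, stars

# Hypothetical figures chosen to land on the "$2 per person-year" headline:
cost, stars = impact_rating(total_cost=200_000,
                            person_years_served=125_000,
                            counterfactual_rate=0.20,
                            market_cost_per_person_year=3.00)
print(cost, stars)   # -> 2.0 5
```

Note how the 20% counterfactual raises the effective cost per outcome: the same spending attributed over fewer outcomes yields a higher, more conservative cost figure.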
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17236852645874023, "perplexity": 5512.463280120525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400274441.60/warc/CC-MAIN-20200927085848-20200927115848-00408.warc.gz"}
http://mathhelpforum.com/calculus/33477-help-getting-started-couple-optimizations.html
# Thread: Help getting started with a couple optimizations

1. ## Help getting started with a couple optimizations

Sometimes I can do these... sometimes I look at them and don't know where to begin. If someone could give me a couple tips to get these few started, I can do the rest:

1. A box with a square base and no top must have a volume of 10 000 cm3. If the smallest dimension in any direction is 5 cm, then determine the dimensions of the box that minimize the amount of material used.

2. A fence is 1.5 m high and is 1 m from a wall. A ladder must start from the ground, touch the top of the fence, and rest somewhere on the wall. Find the minimum length of such a ladder.

3. The motion of a particle is given by s(t) = 5cos(2t + pi/4). What are the maximum values of the displacement, the velocity, and the acceleration?

2. Originally Posted by NAPA55

Sometimes I can do these... sometimes I look at them and don't know where to begin. If someone could give me a couple tips to get these few started, I can do the rest: 1. A box with a square base and no top must have a volume of 10 000 cm3. If the smallest dimension in any direction is 5 cm, then determine the dimensions of the box that minimize the amount of material used. 2. A fence is 1.5 m high and is 1 m from a wall. A ladder must start from the ground, touch the top of the fence, and rest somewhere on the wall. Find the minimum length of such a ladder. 3. The motion of a particle is given by s(t) = 5cos(2t + pi/4). What are the maximum values of the displacement, the velocity, and the acceleration?

For the first one, the volume is the constraint, $\displaystyle V=\underbrace{x^2}_{base}y =10{,}000$, and the material used is the surface area $\displaystyle S=\underbrace{x^2}_{bottom}+\underbrace{4xy}_{sides}$ Solve the first for y, sub into the second, and minimize the function $\displaystyle S=x^2+\frac{40{,}000}{x}$ 3.
## Here's #3 (the first two are fairly simple)

$\displaystyle f(t)=5\cos\bigg(2t+\frac{\pi}{4}\bigg)$, so $\displaystyle v(t)=f'(t)=-5\sin\bigg(2t+\frac{\pi}{4}\bigg)\cdot{2}$. Set that equal to zero and you get... wait, there is no boundary or artificial cutoff, so I am just going to assume it is $\displaystyle (0,2\pi)$. The critical values are $\displaystyle t=\frac{3\pi}{8},\,t=\frac{7\pi}{8},\,t=\frac{11\pi}{8},\,t\approx 5.89$. Next, put those values into the second derivative: where it is negative you have a maximum, and where it is positive a minimum. Repeat for the velocity and the acceleration with the next derivative: solve where it vanishes and test the sign of its derivative.

4. ## Similar triangles

We need to use similar triangles to find d in the diagram below. (Sorry, without the buttons I don't know how to add it in the middle of a post.)

$\displaystyle \frac{d}{1.5}=\frac{x+1}{x} \iff d=\frac{3(x+1)}{2x}$

Since we want the smallest length of the ladder, we need to know the length of the hypotenuse. By the Pythagorean theorem we get

$\displaystyle h=\sqrt{\left( \frac{3(x+1)}{2x}\right)^2+(x+1)^2}$

You just need to minimize from here. Good luck.

Attached Thumbnails

5. Thank you both!
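For problem 1, a quick numerical sanity check helps: the volume from the problem statement, x²y = 10 000 cm³, is the constraint, and the open-top surface area is what gets minimized. A coarse scan over feasible base widths should agree with the calculus answer.

```python
# Numerical check for problem 1: eliminate y via the volume constraint
# x^2 * y = 10,000 cm^3, then minimize the open-top surface area
# S(x) = x^2 + 4xy = x^2 + 40000/x over base widths x >= 5 cm.

def surface(x):
    y = 10_000 / x**2            # height forced by the volume constraint
    return x**2 + 4 * x * y      # open-top box: base + four sides

# Scan base widths from 5 cm to 100 cm in 0.001 cm steps.
best_x = min((x / 1000 for x in range(5_000, 100_001)), key=surface)

x_exact = 20_000 ** (1 / 3)      # from S'(x) = 2x - 40000/x^2 = 0
print(best_x, x_exact)           # both near 27.14 cm
```

The resulting height y = 10 000/x² is about 13.6 cm, so both dimensions clear the 5 cm minimum and the unconstrained optimum is the answer.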
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6618033647537231, "perplexity": 387.1252110991556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946807.67/warc/CC-MAIN-20180424154911-20180424174911-00584.warc.gz"}
https://science.sciencemag.org/content/314/5798/436.abstract
Research Article # Correcting Quantum Errors with Entanglement Science 20 Oct 2006: Vol. 314, Issue 5798, pp. 436-439 DOI: 10.1126/science.1131563 ## Abstract We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error–correcting codes, thus allowing us to “quantize” all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8750799894332886, "perplexity": 1335.4271330859538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524743.61/warc/CC-MAIN-20210121101406-20210121131406-00616.warc.gz"}
http://mathhelpforum.com/pre-calculus/207511-function-question.html
# Math Help - Function Question

1. ## Function Question

Let f(x) = x + 2|x − 3|, and g(x) = 3x − 1.

(a) Find all solutions to the equation f(g(x)) = −4x.

So I come up with f(g(x)) = 3x − 1 + 2|3x − 1 − 3| = −4x. Is this set up correct? And would I simply solve for x? Thank you!

2. ## Re: Function Question

3x − 1 − 3 = 3x − 4, of course, but what you have is correct.
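To finish from that setup, split 3x − 1 + 2|3x − 4| = −4x into cases on the sign of 3x − 4 and keep only roots consistent with their case. A small script confirms the case analysis:

```python
# Worked check of the setup above: solve 3x - 1 + 2|3x - 4| = -4x by
# cases on the sign of 3x - 4, using exact rational arithmetic.

from fractions import Fraction as F

def f(x):                       # f(x) = x + 2|x - 3|
    return x + 2 * abs(x - 3)

def g(x):                       # g(x) = 3x - 1
    return 3 * x - 1

solutions = []
# Case 3x - 4 >= 0: 3x - 1 + 2(3x - 4) = -4x  ->  13x = 9
x = F(9, 13)
if 3 * x - 4 >= 0:              # 27/13 - 4 < 0, so this root is rejected
    solutions.append(x)
# Case 3x - 4 < 0: 3x - 1 - 2(3x - 4) = -4x  ->  -3x + 7 = -4x  ->  x = -7
x = F(-7)
if 3 * x - 4 < 0:
    solutions.append(x)

print(solutions)                          # the only solution is x = -7
print(f(g(F(-7))), -4 * F(-7))            # both 28
```

The first case's candidate 9/13 fails its own case condition, so x = −7 is the unique solution; substituting back gives f(g(−7)) = f(−22) = 28 = −4·(−7).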
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8664315938949585, "perplexity": 2730.3466216301063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986625.58/warc/CC-MAIN-20150728002306-00179-ip-10-236-191-2.ec2.internal.warc.gz"}
http://aas.org/archives/BAAS/v29n5/aas191/abs/S007011.html
Session 7 - Molecular Clouds. Display session, Wednesday, January 07, Exhibit Hall

[7.11] Density Structures of Starless Bok Globules

B. D. Kane (Phillips Laboratory), D. P. Clemens (Boston University)

Fourteen SBGs were observed in the $J=1\rightarrow 0$ rotational lines of $^{12}$CO, $^{13}$CO, and C$^{18}$O, using the 14 meter radio telescope of the Five College Radio Astronomy Observatory (FCRAO). Maps were made with the fifteen-element QUARRY 3 mm array receiver and the 1024-channel FAAS autocorrelator spectrometer. A total of 120 to 360 positions per globule, sampled with half-beam spacing, was observed in the $^{13}$CO line, mostly over 2.5 by 3 arcmin grids, while 30 to 60 full-beam-sampled C$^{18}$O and $^{12}$CO positions per globule were observed. $^{13}$CO and C$^{18}$O were observed with a velocity resolution of less than 0.007 km s$^{-1}$ channel$^{-1}$; $^{12}$CO was observed with 0.013 km s$^{-1}$ channel$^{-1}$ resolution. The median SBG in the sample has an apparent volume density profile which falls off as $\sim r^{-2.6}$, significantly steeper than found in Yun & Clemens' 1991 sample (of which the majority contained YSO candidates), whose mean dust density profile fell off as a much shallower $\sim r^{-1.6}$. Overall, assuming a distance of 600 pc, the sample of SBGs is characterized as containing cores of size 0.3 pc. SBGs contain about 10 $M_\odot$ of gas, have mean H$_2$ densities of a few $\times 10^3$ cm$^{-3}$, and kinetic temperatures of $\sim 10$ K. Most SBGs are near virial equilibrium. Using energy balance arguments, half of the SBGs may be quasi-statically contracting, and all of these globules still possess envelopes. The strongest association with cloud contraction is the existence of a relatively large envelope. Rotation is in most cases the least significant source of support against gravitational contraction; in the majority of cases kinetic energy owing to turbulent motions is providing the most support against contraction.
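A back-of-the-envelope check of the "near virial equilibrium" claim can be done with the abstract's numbers. The velocity dispersion used below is an assumed, typical value for cold turbulent cores (it is not quoted in the abstract); the radius comes from the 0.3 pc core size and the mass from the ~10 solar masses quoted above.

```python
# Rough virial check using the abstract's core size and mass.
# sigma is an ASSUMED 1-D velocity dispersion (~10 K gas plus
# turbulence), not a value given in the abstract.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
PC = 3.086e16        # one parsec in meters
MSUN = 1.989e30      # solar mass in kg

sigma = 0.25e3       # m/s, assumed 1-D velocity dispersion
R = 0.15 * PC        # radius of a 0.3 pc diameter core

# Virial mass for a uniform sphere: M_vir ~ 5 * sigma^2 * R / G
M_vir = 5 * sigma**2 * R / G
print(M_vir / MSUN)  # of order 10 solar masses, consistent with the abstract
```

With any plausible dispersion in the 0.2-0.3 km/s range the virial mass lands near the quoted ~10 solar masses of gas, which is why the cores come out close to virial equilibrium.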
The author(s) of this abstract have provided an email address for comments about the abstract: bkane@pldac.plh.af.mil
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8242121934890747, "perplexity": 10409.652647457557}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663167.8/warc/CC-MAIN-20140930004103-00347-ip-10-234-18-248.ec2.internal.warc.gz"}
https://read.somethingorotherwhatever.com/entry/Stallings1966
# How not to prove the Poincaré conjecture

• Published in 1966
• In the collections: Attention-grabbing titles, About proof

I have committed the sin of falsely proving Poincaré's Conjecture. But that was in another country; and besides, until now no one has known about it. Now, in hope of deterring others from making similar mistakes, I shall describe my mistaken proof. Who knows but that somehow a small change, a new interpretation, and this line of proof may be rectified! ### BibTeX entry @article{Stallings1966, title = {How not to prove the Poincar{\'{e}} conjecture}, author = {Stallings, JR}, url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.3404 http://math.berkeley.edu/{\~{}}stall/notPC.ps}, urldate = {2014-11-17}, abstract = {I have committed the sin of falsely proving Poincar{\'{e}}'s Conjecture. But that was in another country; and besides, until now no one has known about it. Now, in hope of deterring others from making similar mistakes, I shall describe my mistaken proof. Who knows but that somehow a small change, a new interpretation, and this line of proof may be rectified!}, comment = {}, year = 1966, collections = {Attention-grabbing titles,About proof} }
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9512641429901123, "perplexity": 4036.907269952517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500482.27/warc/CC-MAIN-20200331115844-20200331145844-00429.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-x-and-y-intercepts-for-5x-y-2-0
# How do you find the x and y intercepts for 5x-y+2=0? Nov 30, 2015 $\left(0 , 2\right)$ and $\left(- \frac{2}{5} , 0\right)$ #### Explanation: $5 x - y + 2 = 0$ for $x = 0$, $- y + 2 = 0$ $\implies y = 2$ One of the intercepts is $\left(0 , 2\right)$ for $y = 0$, $5 x + 2 = 0$ $\implies x = - \frac{2}{5}$ The other intercept is $\left(- \frac{2}{5} , 0\right)$
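The same substitutions can be checked mechanically: each intercept should satisfy the original equation exactly.

```python
# Verify both intercepts of 5x - y + 2 = 0 by plugging them back in,
# using exact rational arithmetic.

from fractions import Fraction as F

def line(x, y):
    return 5 * x - y + 2

y_intercept = (0, F(2))          # x = 0  ->  -y + 2 = 0  ->  y = 2
x_intercept = (F(-2, 5), 0)      # y = 0  ->  5x + 2 = 0  ->  x = -2/5

print(line(*y_intercept), line(*x_intercept))   # 0 0
```

Both points return 0, confirming they lie on the line.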
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4766092598438263, "perplexity": 6796.868618076286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540481076.11/warc/CC-MAIN-20191205141605-20191205165605-00472.warc.gz"}
https://link.springer.com/article/10.1007/s10980-017-0538-3
Landscape Ecology , Volume 32, Issue 8, pp 1723–1738 # Urbanization and air quality as major drivers of altered spatiotemporal patterns of heavy rainfall in China • Peijun Shi • Xuemei Bai • Feng Kong • Jiayi Fang • Daoyi Gong • Tao Zhou • Yan Guo • Yansui Liu • Wenjie Dong • Zhigang Wei • Chunyang He • Deyong Yu • Jing’ai Wang • Qian Ye • Rucong Yu • Deliang Chen Open Access Research Article ## Abstract ### Context Land use/land cover change and other human activities contribute to the changing climate on regional and global scales, including the increasing occurrence of extreme-precipitation events, but the relative importance of these anthropogenic factors, as compared to climatic factors, remains unclear. ### Objectives The main goal of this study was to determine the relative contributions of human-induced and climatic factors to the altered spatiotemporal patterns of heavy rainfall in China during the past several decades. ### Methods We used daily precipitation data from 659 meteorological stations in China from 1951 to 2010, climatic factors, and anthropogenic data to identify possible causes of the observed spatiotemporal patterns of heavy rainfall in China in the past several decades, and quantify the relative contributions between climatic and human-induced factors. ### Results Our analysis suggests that a total of 84.7–87.5% of the variance in heavy rainfall factors could be explained by large-scale climate phenomena and the local/regional anthropogenic activities. In particular, urbanization and air pollution together explained 58.5–65.5% of the variance. The spatial distribution of heavy rainfall amount and days over time shows a significant and increasing correlation with the spatial distributions of population density and annual low-visibility days. ### Conclusions Our results suggest that the substantial increase in heavy rainfall across much of China during the past six decades is likely triggered by local and regional anthropogenic factors. 
Our results call for a better understanding of local and regional anthropogenic impacts on climate, and of the exacerbated extreme climate events that may come as a consequence of urbanization and air pollution.

## Keywords

Anthropogenic factors · Air pollution · Trigger · Heavy rainfall · China

## Introduction

Both modeling results and observation data show an increase in the number of extreme precipitation events (Alexander et al. 2006; Beniston et al. 2007; Qian et al. 2007; Wang et al. 2008). This trend is typically explained by climate change, and is expected to worsen with the increase of greenhouse gas emissions (Easterling et al. 2000; Durman et al. 2001; Allen and Ingram 2002; Field et al. 2012; IPCC-AR5 2013). However, climate model simulations often underestimate the observed increase in heavy rainfall during the last five decades (Allen and Ingram 2002; Wilby and Wigley 2002; Min et al. 2011), which points to causes other than those traditionally considered in climate models, and to the importance of considering anthropogenic factors (Allan and Soden 2008; Li et al. 2011). A recent study found that the climate models used in the IPCC AR5 reasonably capture the temporal trends of extreme precipitation during 1961–2000 in western China. However, the models do not adequately reproduce the trends over eastern China, which is characterized by much more intense anthropogenic activity (Shepherd 2005). Studies have found that convective rainfall has increased more than stratiform precipitation: given that the former is influenced mostly by local interactions and the latter by planetary circulation (Wang and Zhou 2005; Ou et al. 2013), the increased frequency of heavy rainfall events (Chen et al. 2010; Ou et al. 2013) might be due to human-induced local changes. This highlights the need to explore more deeply the specific roles of a range of anthropogenic processes and their relative contributions to heavy rainfall at regional scales.
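The variance-partitioning idea behind the abstract's "84.7-87.5% explained" figures can be sketched with ordinary least squares. The data below are entirely synthetic (the paper's actual predictors are the 40 factors listed in Table 1); the sketch only illustrates how one compares the variance explained by different predictor sets.

```python
# Sketch of variance partitioning: regress a heavy-rainfall index on
# climatic and anthropogenic predictors and compare R^2. All data here
# are synthetic stand-ins, not the paper's observations.

import numpy as np

rng = np.random.default_rng(0)
n = 60                                   # 60 years, 1951-2010

climate = rng.normal(size=(n, 2))        # stand-ins for e.g. ENSO, WPSH indices
anthro = rng.normal(size=(n, 2))         # stand-ins for e.g. urbanization, haze days
noise = rng.normal(scale=0.5, size=n)
rainfall = climate @ [0.5, 0.3] + anthro @ [0.8, 0.6] + noise

def r_squared(X, y):
    """Fraction of variance in y explained by a linear fit on X."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_all = r_squared(np.hstack([climate, anthro]), rainfall)
r2_anthro = r_squared(anthro, rainfall)
print(f"all 4 predictors:   R^2 = {r2_all:.2f}")
print(f"anthropogenic only: R^2 = {r2_anthro:.2f}")
```

Comparing the two R² values shows how much explanatory power the anthropogenic factors carry on their own versus in combination with the climatic ones, which is the structure of the paper's attribution argument.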
## Data and methods

### Data source

Daily precipitation data for 659 meteorological stations and annual haze days are from the China Meteorological Administration. Twenty-nine large-scale climate factors influencing China’s precipitation are from NOAA and the Chinese National Climate Center (Table 1). Precipitable water and water vapor flux data are from the NCEP/NCAR and ERA/ECMWF reanalyses. Horizontal visibility data for 1957–2005 are from the Chinese Academy of Meteorological Sciences. Eleven country-level socio-economic and environmental indicators, including GDP; primary, secondary and tertiary industrial output; construction-sector output; energy consumption; total population; urban population; rural population; urbanization ratio; and annual average haze days, together with county-level population density and horizontal visibility data, are compiled from the 60-year Statistics of New China and the Prefectural (Municipal) Social and Economic Statistics Summary of China. These 40 factors thus comprise 29 climatic factors, mainly regional circulation indices, and 11 anthropogenic factors, mainly indicators of economic and social activity.

Table 1 The 29 large-scale climate factors and 11 socioeconomic factors

1. WPSH ANNUAL: West Pacific subtropical high, annual mean (NCC)
2. WPSH SA: West Pacific subtropical high, seasonally averaged (JJA) (NCC)
3. EASMI: East Asian summer monsoon (EASM) index, an area-averaged seasonal (JJA) dynamical normalized seasonality (DNS) at 850 hPa within the East Asian monsoon domain (10°N–40°N, 110°E–140°E) (CAS IAP LASG)
4. SCSSMI: South China Sea summer monsoon index, an area-averaged seasonal (JJAS) DNS at 925 hPa within the South China Sea monsoon domain (0°N–25°N, 100°E–125°E) (CAS IAP LASG)
5. SASMI: South Asian summer monsoon index, an area-averaged seasonal (JJAS) DNS at 850 hPa within the South Asian domain (5°N–22.5°N, 35°E–97.5°E) (CAS IAP LASG)
6. SASMI1: Southwest Asian summer monsoon (SWASM) over Southwest Asia (2.5°N–20°N, 35°E–70°E), June–September monthly mean (CAS IAP LASG)
7. SASMI2: Southeast Asian summer monsoon (SEASM) over Southeast Asia (2.5°N–20°N, 70°E–110°E), June–September monthly mean (CAS IAP LASG)
8. ENSO DJF: El Niño-Southern Oscillation, seasonally averaged (previous-year December–February, DJF) (NOAA CPC)
9. ENSO MAM: ENSO, seasonally averaged (March–May, MAM) (NOAA CPC)
10. ENSO JJA: ENSO, seasonally averaged (June–August, JJA) (NOAA CPC)
11. ENSO SON: ENSO, seasonally averaged (September–November, SON) (NOAA CPC)
12. PDO: standardized PDO index, derived as the leading PC of monthly SST anomalies in the North Pacific Ocean, annual mean (NOAA CPC)
13. Pacific Warmpool: 1st EOF of SST (60°E–170°E, 15°S–15°N), annual mean (NOAA CPC)
14. NINO3_4: East Central Tropical Pacific SST (5°N–5°S, 170°W–120°W), annual mean (NOAA CPC)
15. AMO US: Atlantic multidecadal oscillation index, unsmoothed, annual mean (NOAA PSD)
16. AMO SM: Atlantic multidecadal oscillation index, smoothed, annual mean (NOAA PSD)
17. Blocking: frequency of DJF “blocked days” for neutral, warm and cold episodes (NOAA CPC)
18. DMI MAM: Dipole Mode Index (DMI), the anomalous SST gradient between the western equatorial Indian Ocean (50°E–70°E, 10°S–10°N) and the south-eastern equatorial Indian Ocean (90°E–110°E, 10°S–0°N), representing the intensity of the IOD; seasonally averaged (March–May, MAM) (JAMSTEC)
19. DMI JJA: DMI, seasonally averaged (June–August, JJA) (JAMSTEC)
20. DMI SON: DMI, seasonally averaged (September–November, SON) (JAMSTEC)
21. DMI DJF: DMI, seasonally averaged (previous-year December–February, DJF) (JAMSTEC)
22. DMI ANNUAL: DMI, annual mean (JAMSTEC)
23. TPH 1: Tibetan Plateau high, 25°N–35°N, 80°E–100°E, annual mean (NCC)
24. TPH 2: Tibetan Plateau high, 30°N–40°N, 75°E–105°E, annual mean (NCC)
25. QBO: quasi-biennial oscillation, annual mean (NOAA PSD)
26. NHPVII: Northern Hemisphere polar vortex intensity index, annual mean (NCC)
27. AO: monthly AO index (AOI), defined as the difference in normalized monthly zonal-mean sea level pressure (SLP) between 35°N and 65°N, annual mean (CAS IAP LASG)
28. AAO: monthly AAO index (AAOI), defined as the difference in normalized monthly zonal-mean SLP between 40°S and 70°S, annual mean (CAS IAP LASG)
29. NAO: NAO index (NAOI), defined as the difference in normalized monthly SLP zonally averaged over the North Atlantic sector (80°W–30°E) between 35°N and 65°N, annual mean (CAS IAP LASG)
30. GDP: gross domestic product (SNC-PSESS)
31. GDP1: agricultural production (SNC-PSESS)
32. GDP2-1: construction product (SNC-PSESS)
33. GDP2: industrial production (SNC-PSESS)
34. GDP3: service product (SNC-PSESS)
35. TP: total population (SNC-PSESS)
36. RP: rural population (SNC-PSESS)
37. UP: urban population (SNC-PSESS)
38. UR: urbanization rate (SNC-PSESS)
39. TEP: total energy production (SNC-PSESS)
40. HD: annual mean haze days (SNC-PSESS)

① Nos. 13 and 15 cover 1950–2008; No. 17 covers 1950–2000; Nos. 18–22 cover 1958–2010; values outside 1951–2010 are replaced by the period average.

② NCC National Climate Center, China; CAS Chinese Academy of Sciences; IAP Institute of Atmospheric Physics; LASG State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics in China; NOAA National Oceanic and Atmospheric Administration; NOAA CPC NOAA Climate Prediction Center; NOAA PSD NOAA Physical Science Division; JAMSTEC Japan Agency for Marine-Earth Science and Technology; SNC-PSESS 60-year Statistics of New China and Prefectural (Municipal) Social and Economic Statistics Summary of China.

③ Of the climate factors, WPSH ANNUAL, ENSO DJF, AMO US and AAO are the ones ultimately selected in this article.

The heavy rainfall amounts, heavy rainfall days and heavy rainfall intensity of inter-annual and decadal heavy rain in China are calculated from the precipitation records of the 659 stations, whose configuration remains consistent throughout the research period. The IDW (Inverse Distance Weight) interpolation method is used to produce spatially continuous raster layers of the heavy rainfall factors.

### Methods

#### Calculation of heavy rainfall

In this research, heavy rainfall is defined as daily rainfall greater than 50 mm. The total heavy rainfall amount $$HRA_{i}$$, heavy rainfall days $$HRD_{i}$$ and average heavy rainfall intensity $$HRI_{i}$$ of each meteorological station were calculated as per formulae (1)–(3), for the six decades 1951–1960, 1961–1970, 1971–1980, 1981–1990, 1991–2000 and 2001–2010 respectively.
$$HRA_{i} = \mathop \sum \limits_{j = 1}^{10} hra_{1940 + 10i + j}$$ (1)

$$HRD_{i} = \mathop \sum \limits_{j = 1}^{10} hrd_{1940 + 10i + j}$$ (2)

$$HRI_{i} = \mathop \sum \limits_{j = 1}^{10} hra_{1940 + 10i + j} \Bigg/\mathop \sum \limits_{j = 1}^{10} hrd_{1940 + 10i + j}$$ (3)

where $$HRA_{i}$$ is the total heavy rainfall amount at a meteorological station in the $$i$$th decade of the study period; $$hra_{1940 + 10i + j}$$ the total heavy rainfall amount at that station in the $$j$$th year of the $$i$$th decade; $$HRD_{i}$$ the total heavy rainfall days at the station in the $$i$$th decade; $$hrd_{1940 + 10i + j}$$ the total heavy rainfall days in the $$j$$th year of the $$i$$th decade; $$HRI_{i}$$ the heavy rainfall intensity in the $$i$$th decade; $$i$$ the decadal index ($$i = 1,2, \ldots,6$$); and $$j$$ the yearly index ($$j = 1,2, \ldots,10$$).

#### Model selection and validation

(1) Stepwise regression: we considered the 40 factors (the 29 climate factors and 11 anthropogenic factors) as candidate predictors and the heavy rainfall indicators (HRA, HRD and HRI) as target variables. Stationarity and cointegration tests were performed to eliminate the possibility of spurious regression. In each selection step, only variables significant at the 95% level are included in the regression equation; in the removal step, variables not significant at the 90% level are dropped from the equation (Johansen 1994).

(2) AIC to confirm the optimal model variables: we use the Akaike Information Criterion (AIC) as a model-selection criterion that penalizes models with a large number of predictors, and search for models with small AIC values (Cahill 2003).
(3) Cross-validation to test the robustness and stability of the regression model: to address possible over-fitting, we conducted cross-validation by intentionally leaving out up to 33% of the data and using it to verify the model prediction.

#### MLR-based variance explanation rate

An MLR equation was established for the standardized sequences based on multiple regression theory (Harris 1992; Pedroni 1999; Mackinnon 2010):

$$Y_{i} = b_{1} X_{1i} + b_{2} X_{2i} + b_{3} X_{3i} + b_{4} X_{4i} + b_{5} X_{5i} + b_{6} X_{6i}$$

where i = 1, …, n (n = 60 years) and $$b_{1} , \ldots ,b_{6}$$ are regression coefficients. Let $$r_{1} ,r_{2} ,r_{3} ,r_{4} ,r_{5} ,r_{6}$$ be the correlation coefficients between heavy rainfall and WPSH, ENSO (for HRA and HRD) or AMO (for HRI), AAO, GDP2, UP and HD respectively. It can be shown that

$$c^{2} = b_{1} r_{1} + b_{2} r_{2} + b_{3} r_{3} + b_{4} r_{4} + b_{5} r_{5} + b_{6} r_{6}$$

where c is the multiple correlation coefficient, $$c^{2}$$ represents the six factors’ rate of variance explanation of heavy rainfall, and $$b_{1} r_{1} , \ldots ,b_{6} r_{6}$$ represent the respective contributions of each factor to heavy rainfall in China.

#### Spatial correlation analysis

Spatial correlations are computed between county-level raster images of HRA, HRD and HRI (generated using IDW) and those of population density (PD) and low-visibility days (LVD). Higher spatial similarity between the two images compared produces a higher spatial correlation value (Gao and Deng 2002).

## Results

### Trend in heavy rainfall in China

Since the 1950s, the total precipitation amount for China shows no obvious trend, whereas both the intensity of heavy rainfall and the area suffering from extreme precipitation events have expanded (Zhai et al. 2005). HRA, HRD and HRI have increased significantly (Fig. 1): from the 1950s to the 2000s, HRA, HRD and HRI increased by 58.6–68.7%, 46.5–60.2%, and 7.1–11.5% respectively.
Note that the higher numbers are averages over the 659 station records, which tend to overestimate the change because stations are denser in southeast China, where most of the increase occurred; the lower numbers are from the 0.5° grid data based on Ou et al. (2013), which tend to underestimate it owing to the smoothing effect of the interpolation. The share of heavy rainfall in total rainfall increased from 15.9 to 26.0% (Table 2). This increase demonstrates a clear, shifting spatial pattern, with high values of HRA (Fig. 2; Table 3) and HRD (Fig. 3; Table 4) moving progressively from the southeast coast to inland China during the last 60 years. Such spatial-temporal features are clearly inconsistent with those of the warming temperature (Yatagai and Yasunari 1994; Shi et al. 2014) and cannot be reasonably explained by the leading atmospheric and oceanic climate factors (Easterling et al. 2000; Liu et al. 2009; Wan et al. 2012). Below we investigate, using climate and socioeconomic data, whether, and if so to what extent, local and regional anthropogenic processes contributed to the observed trend and pattern.
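The decadal aggregation defined by formulae (1)–(3) is straightforward to implement. The sketch below is a minimal, single-station version with synthetic daily data (the real station records are not reproduced here); the 50 mm threshold is the one used in this study.

```python
import numpy as np

HEAVY_MM = 50.0  # heavy-rainfall threshold: daily rainfall > 50 mm


def decadal_metrics(daily_mm):
    """Compute decadal HRA (amount), HRD (days) and HRI (intensity).

    daily_mm: 2-D array of shape (n_years, n_days_per_year) holding daily
    rainfall in mm for one station, year 0 being the start of the record.
    """
    daily_mm = np.asarray(daily_mm, dtype=float)
    heavy = np.where(daily_mm > HEAVY_MM, daily_mm, 0.0)
    hra_yearly = heavy.sum(axis=1)                  # annual heavy-rain amount (mm)
    hrd_yearly = (daily_mm > HEAVY_MM).sum(axis=1)  # annual heavy-rain days
    n_decades = daily_mm.shape[0] // 10
    # Sum each block of 10 consecutive years into one decade, as in (1)-(2).
    hra = hra_yearly[: n_decades * 10].reshape(n_decades, 10).sum(axis=1)
    hrd = hrd_yearly[: n_decades * 10].reshape(n_decades, 10).sum(axis=1)
    # HRI = HRA / HRD per decade, as in (3); guard against decades with no events.
    hri = np.divide(hra, hrd, out=np.zeros_like(hra), where=hrd > 0)
    return hra, hrd, hri
```

A 60-year record yields six decadal values per metric, matching the six study decades 1951–1960 through 2001–2010.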
Table 2 The share of heavy rainfall in total rainfall in different decades in China

| Decade | Heavy rainfall amount (%) | Heavy rainfall days (%) |
| --- | --- | --- |
| 1951–1960 | 15.93 | 1.66 |
| 1961–1970 | 19.94 | 1.72 |
| 1971–1980 | 22.38 | 1.99 |
| 1981–1990 | 22.20 | 1.82 |
| 1991–2000 | 24.04 | 2.07 |
| 2001–2010 | 25.97 | 2.39 |

Table 3 Change in the number of stations by decadal HRA in China, 1951–2010

| Decade | <1000 | 1000–2000 | 2000–3000 | 3000–4000 | >4000 |
| --- | --- | --- | --- | --- | --- |
| 1951–1960 | 405 (61.46%) | 111 (16.84%) | 82 (12.44%) | 25 (3.79%) | 36 (5.46%) |
| 1961–1970 | 336 (50.99%) | 93 (14.11%) | 106 (16.08%) | 62 (9.41%) | 62 (9.41%) |
| 1971–1980 | 340 (51.59%) | 116 (17.60%) | 91 (13.81%) | 54 (8.19%) | 58 (8.80%) |
| 1981–1990 | 342 (51.90%) | 99 (15.02%) | 95 (14.42%) | 61 (9.26%) | 62 (9.41%) |
| 1991–2000 | 358 (54.32%) | 89 (13.51%) | 78 (11.84%) | 56 (8.50%) | 78 (11.84%) |
| 2001–2010 | 359 (54.48%) | 64 (9.71%) | 83 (12.59%) | 50 (7.59%) | 103 (15.63%) |

Table 4 Change in the number of stations by decadal HRD in China, 1951–2010

| Decade | <1000 | 1000–2000 | 2000–3000 | 3000–4000 | >4000 |
| --- | --- | --- | --- | --- | --- |
| 1951–1960 | 418 (63.43%) | 130 (19.73%) | 63 (9.56%) | 28 (4.25%) | 20 (3.03%) |
| 1961–1970 | 349 (52.96%) | 117 (17.75%) | 102 (15.48%) | 52 (7.89%) | 39 (5.92%) |
| 1971–1980 | 346 (52.50%) | 137 (20.79%) | 93 (14.11%) | 44 (6.68%) | 39 (5.92%) |
| 1981–1990 | 352 (53.41%) | 111 (16.84%) | 99 (15.02%) | 54 (8.19%) | 43 (6.53%) |
| 1991–2000 | 366 (55.54%) | 101 (15.33%) | 96 (14.57%) | 43 (6.53%) | 53 (8.04%) |
| 2001–2010 | 365 (55.39%) | 70 (10.62%) | 97 (14.72%) | 53 (8.04%) | 74 (11.23%) |

### Temporal analysis: identifying key factors influencing heavy rainfall and their relative contributions

#### Factors influencing heavy rainfall

The 29 climate factors known to influence East Asian precipitation and the 11 socioeconomic factors are considered as candidate predictors, with heavy rainfall as the target variable (Table 1).
Seven factors are eventually identified as significantly related to heavy rainfall (at the 0.05 significance level): four climatic factors, namely WPSH (western Pacific subtropical high), ENSO (El Niño-Southern Oscillation), AMO (Atlantic multi-decadal oscillation) and AAO (Antarctic Oscillation), and three socio-economic factors, namely the output value of the secondary industry (GDP2), urban population (UP) and annual average haze days (HD). The four climatic factors are determined by large-scale climate dynamics and are not directly influenced by local human activities. The three socio-economic factors are all closely related to land-use change and air pollution (Bai et al. 2012; Ding and Liu 2014): urban population is an indirect demographic indicator of land-use change; GDP2 is an indirect indicator of air pollution; and HD represents the environmental consequence of land-use change and air pollution, being most closely linked to air quality. A Pearson correlation analysis shows that all three anthropogenic factors correlate very strongly with heavy rainfall (all significant at the 0.01 level), whereas the climate factors tend to be correlated less strongly and at statistically less significant levels (Table 5).
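Screening a candidate factor against a 60-year heavy-rainfall series reduces to a Pearson correlation plus a significance test. A numpy-only sketch of that check (the t statistic with n − 2 degrees of freedom is compared against the critical value for the chosen significance level; the series here are synthetic stand-ins, not the study's data):

```python
import numpy as np


def pearson_r_t(x, y):
    """Pearson correlation coefficient and its t statistic (df = n - 2).

    For |t| above the critical value of Student's t with n - 2 degrees of
    freedom, the correlation is significant at the corresponding level
    (e.g. roughly 2.00 for 0.05 and 2.66 for 0.01 when n = 60).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    n = x.size
    t = r * np.sqrt((n - 2) / (1.0 - r * r))
    return r, t
```

Applied to each of the 40 candidate series in turn, this yields the r values reported in Table 5.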
Table 5 Correlation coefficients and variance explained percentages of climate and anthropogenic factors

| Indicator | Statistic | WPSH | ENSO | AMO | AAO | GDP2 | UP | HD | CI | AI | Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HRA | r | 0.49** | −0.45* | – | 0.72** | 0.71** | 0.79** | 0.75** | 0.51** | 0.78** | 0.78** |
| | VER | 7.4% | 7.3% | – | 9.6% | 17.9% | 17.8% | 25.9% | 24.3% | 61.5% | 85.8% |
| HRD | r | 0.49** | −0.44* | – | 0.64** | 0.67** | 0.64** | 0.70** | 0.51** | 0.71** | 0.73** |
| | VER | 8.8% | 6.2% | – | 11.2% | 16.6% | 18.9% | 23.0% | 26.2% | 58.5% | 84.7% |
| HRI | r | 0.41* | – | 0.39* | 0.56** | 0.72** | 0.81** | 0.84** | 0.47* | 0.84** | 0.86** |
| | VER | 6.5% | – | 5.3% | 10.1% | 18.9% | 20.0% | 26.6% | 21.9% | 65.5% | 87.5% |

VER variance explained percentage, r correlation coefficient, CI climatic factors, AI anthropogenic factors; ENSO applies to HRA and HRD, AMO to HRI

* Correlation significant at the 0.05 level; ** at the 0.01 level

#### The influence of water vapor increment on heavy rainfall

Atmospheric precipitable water (PW) and the divergence of water-vapour flux (WVF) can affect regional precipitation. We calculated the regional total-column PW and the (surface to 300 hPa) divergence of WVF over eastern and central China, where the significant increase in heavy rainfall occurred. As shown in Fig. 4, PW and the divergence of WVF increase until the end of the 1980s but decline afterwards, so neither converges with the trend in heavy rainfall. The spatial distributions of the changes in the two variables between 1970 and 2010 indicate that both PW and WVF decreased in most of the areas where heavy rainfall actually increased. In addition, during the last two decades the proportion of decadal convective HRD in total HRD increased from 81.8 to 86.0%, with a corresponding drop in the proportion of continuous HRD, suggesting that the increase in heavy rainfall is increasingly influenced by local conditions rather than by large-scale circulation and moisture fluxes.

#### Quantifying relative contributions

To estimate the relative contributions of these seven factors to the observed increase in heavy rainfall, we performed multiple linear regression.
The selected factors collectively explained 85.8, 84.7 and 87.5% of the total variance of HRA, HRD and HRI respectively. Anthropogenic factors are the main contributors, each contributing at an equivalent magnitude and collectively accounting for 71.7, 69.0 and 75.0% of the total explained variance, whereas the climate factors account for only 28.3, 31.0 and 25.1% (Table 5). Each of the three anthropogenic factors has roughly the same level of contribution as the sum of all the climate factors; taken together, the anthropogenic factors explain roughly three times as much of the variance in heavy rainfall as the climate factors do.

#### Robustness of the results

The robustness of our statistical model is tested through four different analyses. First, to evaluate the influence of various lag effects of the alternative factors not included in our model, we performed a power spectrum analysis to obtain the possible lag times of all the input variables, and conducted an analysis of the unexplained residuals. The results show that the variance explained percentages for HRA, HRD and HRI are 6.9, 6.3 and 5.3% respectively (Table 6), which are very small compared with the explained variance of our model. This indicates that the factors selected by stepwise regression are robust, despite the limitations of the method. Second, using a different heavy rainfall threshold (the 95th percentile) gave consistent results. Third, we used the Akaike Information Criterion (AIC) as a model-selection criterion that penalizes models with a large number of predictors (Table 7), and fourth, we ran a cross-validation analysis leaving out up to 33% of the data (Table 8). Both results show the high stability and robustness of our model.
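The per-factor contributions in Table 5 rest on the identity $$c^{2} = \sum_j b_{j} r_{j}$$, which holds exactly for standardized variables in an ordinary least-squares fit. A quick numerical check with synthetic data (the six predictors here are random stand-ins for WPSH, ENSO/AMO, AAO, GDP2, UP and HD, not the study's series):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 60-year record with six predictors, mimicking the study's setup.
n, k = 60, 6
X = rng.standard_normal((n, k))
y = X @ rng.uniform(0.2, 1.0, k) + 0.5 * rng.standard_normal(n)

# Standardize (zero mean, unit population std) so the coefficients are
# beta weights and the decomposition identity applies.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()

# Ordinary least squares on the standardized variables.
b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
r = np.array([np.corrcoef(Xs[:, j], ys)[0, 1] for j in range(k)])

contributions = b * r  # per-factor share of explained variance, b_j * r_j
r2 = 1.0 - np.sum((ys - Xs @ b) ** 2) / np.sum(ys ** 2)  # c^2

# The factor contributions sum exactly to the total explained variance.
assert np.isclose(contributions.sum(), r2)
```

This is why each entry in the VER rows of Table 5 can be read as that factor's additive share of the total explained variance.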
Table 6 Variance explained percentages of lagged climate and anthropogenic factors with respect to the residuals of HRA, HRD and HRI (candidate factors: WPSH annual, WPSH SA, EASMI, ENSO DJF, ENSO MAM, AMO US, AAO, NAO, UP, GDP2, HD)

HRA: lags per selected factor of 1, 1, 3, 3, 2, 1, 1, 1 and 5 years; VER 0.83%, 0.31%, 0.75%, 0.67%, 0.92%, 0.41%, 0.97%, 0.89%, 1.12%; CI 3.89%, AI 2.98%, Total 6.87%

HRD: lags of 1, 2, 3, 3, 2, 1, 1, 1 and 5 years; VER 0.72%, 0.36%, 0.61%, 0.53%, 0.85%, 0.37%, 0.93%, 0.91%, 0.97%; CI 3.44%, AI 2.81%, Total 6.25%

HRI: lags of 1, 1, 1, 3, 1, 2, 1, 1, 1 and 1 years; VER 0.63%, 0.33%, 0.24%, 0.43%, 0.26%, 0.41%, 0.37%, 0.81%, 0.88%, 0.90%; CI 2.67%, AI 2.59%, Total 5.26%

VER variance explained percentage; lag indicator selected with a given lag year; CI climatic factors; AI anthropogenic factors

Table 7 Akaike information criterion (AIC) of the MLR models

| Dependent variable | Independent variables | Order number | AIC |
| --- | --- | --- | --- |
| HRA | UP | 1 | 198.9 |
| HRA | UP, HD | 2 | 189.2 |
| HRA | UP, HD, AAO | 3 | 98.4 |
| HRA | UP, HD, AAO, GDP2 | 4 | 165.6 |
| HRA | UP, HD, AAO, GDP2, WPSH | 5 | 84.6 |
| HRA | UP, HD, AAO, GDP2, WPSH, AMO | 6 | 81.2 |
| HRD | HD | 1 | 190.7 |
| HRD | HD, GDP2 | 2 | 166.5 |
| HRD | HD, GDP2, AAO | 3 | 152.3 |
| HRD | HD, GDP2, AAO, UP | 4 | 170.2 |
| HRD | HD, GDP2, AAO, UP, WPSH | 5 | 112.5 |
| HRD | HD, GDP2, AAO, UP, WPSH, AMO | 6 | 96.3 |
| HRI | HD | 1 | 196.5 |
| HRI | HD, UP | 2 | 173.6 |
| HRI | HD, UP, GDP2 | 3 | 165.4 |
| HRI | HD, UP, GDP2, AAO | 4 | 179.1 |
| HRI | HD, UP, GDP2, AAO, WPSH | 5 | 121.5 |
| HRI | HD, UP, GDP2, AAO, WPSH, ENSO | 6 | 91.8 |

Table 8 Cross-validation correlation coefficients of the regression

| Heavy rainfall | Leave-one-out | Leave-5-out | Leave-10-out | Leave-20-out |
| --- | --- | --- | --- | --- |
| HRA | 0.89** | 0.88** | 0.86** | 0.85** |
| HRD | 0.89** | 0.88** | 0.86** | 0.85** |
| HRI | 0.97** | 0.95** | 0.93** | 0.91** |

** Correlation significant at the 0.01 level

To further illustrate the relative importance of the climatic and anthropogenic factors in increasing heavy rainfall, we produced
normalized HRA, HRD and HRI. We also generated four integrated indicators (an integrated heavy rainfall index, an integrated climatic indicator, an integrated anthropogenic indicator and an integrated natural-anthropogenic indicator) by normalizing the individual factors and combining them, using each factor’s variance explanation rate as its weight, and plotted scatter diagrams of these indicators against the normalized heavy rainfall factors (Fig. 5). In all cases, the integrated anthropogenic and combined anthropogenic-climatic indicators show much more synchronized trends with the normalized and integrated heavy rainfall factors, with R-squared values of the fitted curves typically around 0.90. In contrast, the R-squared values for the climatic indicators are typically around 0.40. These findings reinforce that the integrated anthropogenic factors explain much more of the documented increase in heavy rainfall in China.

### Spatial correlation between anthropogenic factors and heavy rainfall

If the anthropogenic processes indeed contributed more to the increasing trend in heavy rainfall, then the changing spatial pattern of these processes should be related to the shifting spatial pattern of the heavy rainfall factors. We tested this via a spatial correlation analysis between county-level socioeconomic data and the heavy rainfall indicators. Owing to the limited long-term data availability at this fine resolution, we used county-level population density (PD) to represent the spatial distribution of urbanization, and the annual average number of days with visibility below 10 km (LVD) as a proxy for HD, given that LVD is affected by air pollution and that HD and LVD are highly and significantly correlated (r = 0.79, p < 0.01). The meteorological station data were interpolated to 1-km resolution images, from which prefecture-level mean values were generated.
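The spatial correlation between, say, the HRA raster and the population-density raster then reduces to a Pearson correlation over co-registered grid cells. A minimal sketch (the IDW interpolation and county-level aggregation are assumed to have been done upstream; missing cells are encoded as NaN):

```python
import numpy as np


def spatial_correlation(grid_a, grid_b):
    """Pearson correlation between two co-registered raster layers.

    Both grids must share the same shape and cell alignment; cells that
    are missing (NaN) in either layer are excluded from the computation.
    """
    a = np.asarray(grid_a, dtype=float).ravel()
    b = np.asarray(grid_b, dtype=float).ravel()
    valid = ~(np.isnan(a) | np.isnan(b))
    return np.corrcoef(a[valid], b[valid])[0, 1]
```

Higher spatial similarity between the two layers yields a value closer to 1, which is the quantity tabulated decade by decade in Table 9.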
Figure 6 shows statistically significant, high correlations between the county-level heavy rainfall data and urbanization and air pollution over time and across space, with r steadily increasing over time (Table 9). Urbanization in China was rather stable until the late 1970s and then accelerated in scale and magnitude during the last three decades (Bai et al. 2014), coinciding with the increase of r and thus further supporting that land-use change resulting from urbanization, together with the associated air pollution, has indeed played a major role in the increase of heavy rainfall. Our tests show this result is not affected by potential spatial autocorrelation of the variables.

Table 9 Spatial correlation coefficients between county-level heavy rainfall and PD & LVD

| Heavy rainfall | | 1951–1960 | 1961–1970 | 1971–1980 | 1981–1990 | 1991–2000 | 2001–2010 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| HRA | PD | 0.35** | 0.36** | 0.38** | 0.42** | 0.43** | 0.45** |
| | LVD | 0.35** | 0.37** | 0.41** | 0.46** | 0.49** | 0.51** |
| HRD | PD | 0.36** | 0.37** | 0.40** | 0.46** | 0.47** | 0.50** |
| | LVD | 0.36** | 0.40** | 0.43** | 0.48** | 0.51** | 0.53** |
| HRI | PD | 0.39** | 0.41** | 0.41** | 0.48** | 0.50** | 0.51** |
| | LVD | 0.48** | 0.49** | 0.57** | 0.57** | 0.58** | 0.60** |

n = 2618; ** correlation significant at the 0.01 level

## Conclusion

All of our analyses point to the same conclusion: the decadal increases in, and shifting spatio-temporal patterns of, heavy rainfall in China during 1951–2010 are likely caused primarily by large-scale and rapid urbanization and industrialization. A likely explanation is the climatic impact of land-use change triggered by urbanization: indeed, land-atmosphere interactions are known to affect both temperature (Seneviratne et al. 2006; Sun et al. 2014) and precipitation (Lowry 1998; Thielen et al. 2000; Li et al. 2011; Kaufmann et al. 2007). China has been urbanizing rapidly over the last three decades (Bai et al. 2014), driven by economic growth (Lambin and Patrick 2011; Bai et al. 2012).
Urban land use typically means more paved surfaces, less vegetation and taller buildings, which can cause more convective rainstorms (Wan et al. 2012; Han et al. 2014; Jin et al. 2015). Moreover, industrialization is concurrent with urbanization in China, with most secondary-industry activity concentrated in cities and towns. Emissions from industrial activities, the demand for heating and the rapidly growing use of personal cars in cities trigger a significant increase in hazy days (Ding and Liu 2014), which in turn suppresses light rainfall and may enhance strong convective rainstorms (Mölders and Olson 2004; Li et al. 2011). Previous studies linking urbanization to rainfall have mostly focused on the impact on total rainfall and considered only the local or city scale (Rosenfeld 2000; Ramanathan et al. 2001; Kaufmann et al. 2007; Alexander et al. 2013). Kishtawal et al. (2010) identified urbanization as a likely cause of increased heavy rainfall in India over five decades. Our results support this finding, but also show that urbanization is only one of the factors: air pollution contributes at an equivalent magnitude. Our analysis is the first, to our knowledge, to statistically establish urbanization and air pollution as the likely primary causes of a nation- or subcontinental-scale increase in heavy rainfall over decades, and to quantify the relative contributions of anthropogenic and climate factors. Our findings indicate that local anthropogenic processes may shift the regional climate through mechanisms other than GHG emissions. The physical mechanism behind this statistically robust connection needs to be better understood, and socio-economic and human dimensions need to be better reflected in climate models. With cities in China increasingly experiencing extreme rainfall events (Li et al. 2012), compounded by increasing extreme summer heat in the same region (Sun et al.
2014), our findings call for a careful re-evaluation of the risks of extreme weather in formulating national policies on urbanization, industrialization and environmental management, in China and elsewhere. Rapidly growing and industrializing cities and nations will need to better control air pollution, and to anticipate and accommodate these regional climate consequences, if they are to reduce the risk of flooding and waterlogging.

## Notes

### Acknowledgements

This research was supported by the 973 Project “National Key Research and Development Program, Global Change and Mitigation Project: Global change risk of population and economic system: mechanisms and assessments” (Grant No. 201531480029), Ministry of Science and Technology, People’s Republic of China, and by the National Natural Science Foundation of China Innovative Research Group Project “Earth Surface Process Model and Simulation” (Grant No. 41621061). We thank Ninad Bondre for his helpful comments and edits, and the anonymous reviewers for their helpful comments. The analysis was undertaken in MATLAB and the plots were drawn in EXCEL, MATLAB and ArcGIS.

## References

1. Alexander LV, Allen SK, Bindoff NL, Bréon FM, Church JA, Cubasch U, Emori S, Forster P, Friedlingstein P, Gillett N (2013) Climate change 2013: the physical science basis. Contribution of working group I to the fifth assessment report of the intergovernmental panel on climate change. Contrib Work 43(22):143–151
2. Alexander LV, Zhang X, Peterson TC, Caesar J, Gleason B, Tank A, Haylock M, Collins D, Trewin B, Rahimzadeh F, Tagipour A, Kumar KR, Revadekar J, Griffiths G, Vincent L, Stephenson DB, Burn J, Aguilar E, Brunet M, Taylor M, New M, Zhai P, Rusticucci M, Vazquez-Aguirre JL (2006) Global observed changes in daily climate extremes of temperature and precipitation. J Geophys Res Atmos 111:D5
3. Allan RP, Soden BJ (2008) Atmospheric warming and the amplification of precipitation extremes.
Science 321(5895):1481–1484
4. Allen MR, Ingram WJ (2002) Constraints on future changes in climate and the hydrologic cycle. Nature 419(6903):224–232
5. Bai X, Chen J, Shi P (2012) Landscape urbanization and economic growth in China: positive feedbacks and sustainability dilemmas. Environ Sci Technol 46(1):132–139
6. Bai X, Shi P, Liu Y (2014) Realizing China’s urban dream. Nature 509(7499):158–160
7. Beniston M, Stephenson DB, Christensen OB, Ferro CAT, Frei C, Goyette S, Halsnaes K, Holt T, Jylha K, Koffi B, Palutikof J, Schoell R, Semmler T, Woth K (2007) Future extreme events in European climate: an exploration of regional climate model projections. Clim Chang 81:71–95
8. Cahill AT (2003) Significance of AIC differences for precipitation intensity distributions. Adv Water Resour 26(4):457–464
9. Chen D, Ou T, Gong L, Xu C-Y, Li W, Ho C-H, Qian W (2010) Spatial interpolation of daily precipitation in China: 1951–2005. Adv Atmos Sci 27(6):1221–1232
10. Ding Y, Liu Y (2014) Analysis of long-term variations of fog and haze in China in recent 50 years and their relations with atmospheric humidity. Sci China Earth Sci 57(1):36–46
11. Durman CF, Gregory JM, Hassell DC, Jones RG, Murphy JM (2001) A comparison of extreme European daily precipitation simulated by a global and a regional climate model for present and future climates. Q J R Meteorol Soc 127(573):1005–1015
12. Easterling DR, Evans JL, Groisman PY, Karl TR, Kunkel KE, Ambenje P (2000) Observed variability and trends in extreme climate events: a brief review. Bull Am Meteorol Soc 81(3):417–425
13. Field C, Barros V, Stocker T, Dahe Q, Dokken D (2012) Special report: managing the risks of extreme events and disasters to advance climate change adaptation (SREX). Cambridge University Press, Cambridge
14. Gao ZQ, Deng XZ (2002) Analysis on spatial features of LUCC based on remote sensing and GIS in China. Chin Geogr Sci 12(2):107–113
15. Han J-Y, Baik J-J, Lee H (2014) Urban impacts on precipitation.
Asia-Pac J Atmos Sci 50(1):17–30
16. Harris RID (1992) Testing for unit roots using the augmented Dickey-Fuller test: some issues relating to the size, power and the lag structure of the test. Econ Lett 38(4):381–386
17. IPCC-AR5 (2013) Climate change 2013: the physical science basis. Working group I contribution to the fifth assessment report of the intergovernmental panel on climate change
18. Jin MS, Li Y, Su D (2015) Urban-induced mechanisms for an extreme rainfall event in Beijing China: a satellite perspective. Climate 3(1):193–209
19. Johansen SR (1994) The role of the constant and linear terms in cointegration analysis of nonstationary variables. Econom Rev 13(2):205–229
20. Kaufmann RK, Seto KC, Schneider A, Liu Z, Zhou L, Wang W (2007) Climate response to rapid urban growth: evidence of a human-induced precipitation deficit. J Clim 20(10):2299–2306
21. Kishtawal CM, Niyogi D, Tewari M, Pielke RA, Shepherd JM (2010) Urbanization signature in the observed heavy rainfall climatology over India. Int J Climatol 30(13):1908–1916
22. Lambin EF, Patrick M (2011) Global land use change, economic globalization, and the looming land scarcity. Proc Natl Acad Sci 108(9):3465–3472
23. Li Z, Niu F, Fan J, Liu Y, Rosenfeld D, Ding Y (2011) Long-term impacts of aerosols on the vertical development of clouds and precipitation. Nat Geosci 4(12):888–894
24. Li K, Wu S, Dai E, Xu Z (2012) Flood loss analysis and quantitative risk assessment in China. Nat Hazards 63(2):737–760
25. Liu SC, Fu C, Chein-Jung S, Chen JP, Wu F (2009) Temperature dependence of global precipitation extremes. Geophys Res Lett 36(17):367–389
26. Lowry WP (1998) Urban effects on precipitation amount. Prog Phys Geogr 22(4):477–520
27. Mackinnon J (2010) Critical values for cointegration tests. Work Pap 5(1):107–122
28. Min S-K, Zhang X, Zwiers FW, Hegerl GC (2011) Human contribution to more-intense precipitation extremes. Nature 470(7334):378–381
29.
Mölders N, Olson MA (2004) Impact of urban effects on precipitation in high latitudes. J Hydrometeorol 5(3):409–429
30. Ou T, Chen D, Linderholm HW, Jeong J-H (2013) Evaluation of global climate models in simulating extreme precipitation in China. Tellus A 65:1–16
31. Pedroni P (1999) Critical values for cointegration tests in heterogeneous panels with multiple regressors. Oxf Bull Econ Stat 61(S1):653–670
32. Qian W, Fu J, Yan Z (2007) Decrease of light rain events in summer associated with a warming environment in China during 1961–2005. Geophys Res Lett 34(11):224–238
33. Ramanathan V, Crutzen P, Kiehl J, Rosenfeld D (2001) Aerosols, climate, and the hydrological cycle. Science 294(5549):2119–2124
34. Rosenfeld D (2000) Suppression of rain and snow by urban and industrial air pollution. Science 287(5459):1793–1796
35. Seneviratne SI, Lüthi D, Litschi M, Schär C (2006) Land–atmosphere coupling and climate change in Europe. Nature 443(7108):205–209
36. Shepherd JM (2005) A review of current investigations of urban-induced rainfall and recommendations for the future. Earth Interact 9(12):1–27
37. Shi P, Sun S, Wang M, Li N, Jin Y, Gu X, Yin W (2014) Climate change regionalization in China (1961–2010). Sci China Earth Sci 57(11):2676–2689
38. Sun Y, Zhang X, Zwiers FW, Song L, Wan H, Hu T, Yin H, Ren G (2014) Rapid increase in the risk of extreme summer heat in Eastern China. Nat Clim Chang 4:1082–1085
39. Thielen J, Wobrock W, Gadian A, Mestayer P, Creutin J-D (2000) The possible influence of urban surfaces on rainfall development: a sensitivity study in 2D in the meso-γ-scale. Atmos Res 54(1):15–39
40. Wan H, Zhong Z, Yang X, Li X (2012) Ensembles to model the impact of urbanization for a summertime rainstorm process in Yangtze River Delta, China. Meteorol Appl 22:105–112
41. Wang W, Chen X, Shi P, van Gelder PHAJM (2008) Detecting changes in extreme precipitation and extreme streamflow in the Dongjiang River Basin in southern China.
Hydrol Earth Syst Sci Discuss 12(1):207–221 42. Wang Y, Zhou L (2005) Observed trends in extreme precipitation events in China during 1961–2001 and the associated changes in large-scale circulation. Geophys Res Lett 32(9):297–314 43. Wilby RL, Wigley TML (2002) Future changes in the distribution of daily precipitation totals across North America. Geophys Res Lett 29(7):39-31–39-34 44. Yatagai A, Yasunari T (1994) Trends and decadal-scale fluctuations of surface air temperature and precipitation over China and Mongolia during the recent 40 year period (1951–1990). J Meteorol Soc Jpn 72(6):937–957 45. Zhai PM, Zhang XB, Wan H, Pan XH (2005) Trends in total precipitation and frequency of daily precipitation extremes over China. J Clim 18(7):1096–1108 ## Authors and Affiliations • Peijun Shi • 1 • 2 • 3 Email author • Xuemei Bai • 1 • 4 Email author • Feng Kong • 3 • 5 • Jiayi Fang • 1 • 3 • Daoyi Gong • 1 • Tao Zhou • 1 • Yan Guo • 1 • Yansui Liu • 6 • 1 • Wenjie Dong • 1 • Zhigang Wei • 1 • Chunyang He • 1 • Deyong Yu • 1 • Jing’ai Wang • 7 • 1 • Qian Ye • 1 • Rucong Yu • 8 • Deliang Chen • 9 1. 1.State Key Laboratory of Earth Surface Processes and Resource EcologyBeijing Normal UniversityBeijingChina 2. 2.Key Laboratory of Environmental Change and Natural Disaster of Ministry of EducationBeijing Normal UniversityBeijingChina 3. 3.Academy of Disaster Reduction and Emergency ManagementMinistry of Civil Affairs & Ministry of EducationBeijingChina 4. 4.Fennier School of Environment and SocietyAustralian National UniversityCanberraAustralia 5. 5.China Meteorological Administration Training CenterBeijingChina 6. 6.College of Resources Science & TechnologyBeijing Normal UniversityBeijingChina 7. 7.Faculty of Geographical SciencesBeijing Normal UniversityBeijingChina
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6752576231956482, "perplexity": 12866.151505547034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573264.27/warc/CC-MAIN-20190918085827-20190918111827-00292.warc.gz"}
https://riojournal.com/article_preview.php?id=29375
Research Ideas and Outcomes: PhD Thesis

Effectiveness of peer-mediated learning for English language learners: A meta-analysis

Mikel W Cole, Clemson University, Clemson, United States of America
Corresponding author: Mikel W Cole (mikel.w.cole@gmail.com)
Received: 28 Aug 2018 | Published: 29 Aug 2018

© 2018 Mikel Cole. This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Citation: Cole MW (2018) Effectiveness of peer-mediated learning for English language learners: A meta-analysis. Research Ideas and Outcomes 4: e29375. https://doi.org/10.3897/rio.4.e29375

# Abstract

## Background

This manuscript reports the findings from a series of inter-related meta-analyses of the effectiveness of peer-mediated learning for English language learners (ELLs). Peer-mediated learning is a broad term that, as operationalized in this study, includes cooperative learning, collaborative learning, and peer tutoring. Drawing from research on interaction in second language acquisition, as well as from work informed by Vygotskian perspectives on socially-mediated second language learning, these meta-analyses synthesize the results of experimental and quasi-experimental studies.

## New information

Included studies were conducted with language learners between the ages of 3 and 18 in order to facilitate comparisons to US students in K-12 educational settings. All participants were identified as ELLs, though learners in both English as a Second Language (ESL) and English as a Foreign Language (EFL) settings were included. Similarly, learners from a variety of language backgrounds were included in order to facilitate generalizations to the linguistic diversity present in US schools and abroad.
Main effects analyses indicate that peer-mediated learning is effective at improving a number of outcome types, including language outcomes, academic outcomes, and social outcomes. Funnel plots and Egger’s regression analyses were conducted to examine the probability of publication bias, which appears unlikely in most analyses. Moderator analyses were also conducted, where sample sizes were sufficient, to examine which measured variables were capable of explaining heterogeneity in effect sizes between studies.

# Introduction

This dissertation presents the results of a meta-analysis of the effectiveness of peer-mediated learning for English language learners (ELLs)*1. Chapter One provides the background for and significance of the study. Chapter Two reviews the relevant first language (L1) and second language (L2) literatures for peer-mediation. Chapter Three details the methodology. Chapter Four presents the results of the various analyses, and Chapter Five discusses how the results address the research questions, as well as the limitations of and future research suggested by this meta-analysis.

## Background

Currently, more than eleven million students in K-12 schools in the United States speak a language other than English at home, meaning that linguistically-diverse students now comprise more than 20% of the total school-age population (National Center for Education Statistics 2011). Moreover, ELLs are the fastest growing population of students in U.S. schools (McKeon 2005), and their performance on high-stakes tests continues to lag behind the performance of their mainstream peers (Digest of Education Statistics 2009). As the population of linguistically-diverse students grows, ELLs are dispersing into states and schools historically unprepared to meet the unique needs of this group of students. Consequently, linguistically diverse students present an increasingly salient concern for schools across the country.
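As a brief methodological aside, the publication-bias check named in the abstract (funnel plots with Egger’s regression) can be made concrete with a short sketch. The code below is illustrative only, not the dissertation’s actual analysis code; the function name and the sample numbers are invented. Egger’s test regresses each study’s standardized effect (effect divided by its standard error) on its precision (the reciprocal of the standard error); an intercept that differs significantly from zero suggests funnel-plot asymmetry:

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses standardized effects (effect / SE) on precision (1 / SE);
    an intercept far from zero suggests small-study (publication) bias.
    Returns (intercept, two-sided p-value).
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    y = effects / ses            # standardized effects
    x = 1.0 / ses                # precision
    n = len(y)
    X = np.column_stack([np.ones(n), x])          # design matrix: [1, precision]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares fit
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                  # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)             # coefficient covariance matrix
    t_stat = beta[0] / np.sqrt(cov[0, 0])
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return beta[0], p_value
```

A funnel plot conveys the same information visually: per-study effect sizes are plotted against their standard errors, and asymmetry around the pooled effect hints that small, non-significant studies may be missing from the literature.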
Not only is the population of ELLs rapidly growing and dispersing throughout US schools; ELLs are also a remarkably heterogeneous group of students (Genesee et al. 2005, Rumbaut and Portes 2001, Solano-Flores 2008), and this heterogeneity has pervasive relevance for educators and researchers alike. For example, researchers report that more than three fourths of ELLs are born in the United States, but the foreign-born quarter of the population comes from all over the world. The immigrant status of students, as well as related variables like length of residence in the United States, is important because of recent moves to require documentation of residency status in states like Alabama (e.g., Hispanic Interest Coalition of Alabama et al. v. Governor Robert Bentley et al. 2011), in order to qualify for Specially Designed Academic Instruction in English programs like Newcomer Centers (e.g., US Department of Education, 2016), to analyze country of origin differences among subgroups (e.g., Hispanics) for variables like parental education, socioeconomic status, and language proficiency, in order to determine the linguistic appropriateness of translated assessments for speakers of regional dialects (Solano-Flores 2008), and for classroom teachers to design culturally relevant instruction (e.g., Fradd and Lee 2003).

## Statement of the Problem

School-level Silence: Sociopolitical Context and Program Models

ELLs are a linguistically diverse group of students, collectively speaking more than 400 languages (Francis et al. 2008); ironically, ELLs face pervasive messages of silence that deny access to and discourage use of their native languages, cultural practices, and cultural ways of knowing as learning resources. Historically, schooling in the United States has been actively structured to silence the linguistic capital of culturally and linguistically diverse students.
As examples, the brutal assimilation of Native Americans in Boarding Schools during the nineteenth and twentieth centuries and the pervasive segregation of Mexican-Americans in the Mexican schools of the Southwest remain testimony to a doctrine of subtractive cultural assimilation and the persistence of a deficit perspective that views English language proficiency as the most important indicator of intelligence or academic potential (Gifford and Valdés 2006, Macedo 1994, Ruiz 2001, Valenzuela 1999, Wiese and García 1998). Eradicating students’ languages was intentional and rationalized as a national security concern; in fact, this English-as-American argument found voice even amongst a few of the founding fathers, although the founders were generally unrestrictive in early language laws and resisted the establishment of a national language or language academy (Ovando 2003, Schmid 2001). This historical legacy of silence persists in contemporary examples of lost opportunities to learn and instances of the ongoing denial of students’ access to their own language and literacy practices (Gándara 2000, Gutiérrez et al. 2000, Valdés 2001, Valenzuela 1999). States are increasingly moving towards inclusion, or mainstreaming, for all students in response to legislation initially written largely for students with special education status. In practice, this “push to mainstream” means that ELLs find themselves in classrooms with teachers unprepared to teach them and increasingly in political environments that actively and explicitly discourage the use or study of their language and culture (Harper and Jong 2009). Although inclusion is arguably pursued for reasons of equity and to minimize the linguistic segregation of ELLs, some researchers counter that contemporary conceptions of equity underlying inclusion arguments represent conservative values that actually work to maintain the status quo and inequitable relations of power (Platt et al.
2003), effectively silencing students by placing them in classrooms where they will be positioned as deficient and where their linguistic and cultural capital will be structurally unavailable as learning resources. Empirical evidence indicates that context influences student learning, and both the sociopolitical environment and the model of education provided to students contribute to ELLs’ academic success (Gitlin et al. 2003, Gutiérrez et al. 1995, Ogbu and Simons 1998, Portes and Rumbaut 2001, Ramírez et al. 1991, Valenzuela 1999). Portes and Rumbaut (2001) report large-scale sociological data that demonstrates how “the context of reception” shapes a number of outcomes for immigrants, including academic success. Interestingly, the context of reception, which is partly a measure of attitudes in the receiving community toward particular immigrant groups, varies across immigrant groups (e.g., Asians versus Mexicans), within immigrant groups (e.g., Mexican versus Cuban versus Puerto Rican) and across time for the same immigrant group (e.g., Cubans in Florida). Similarly, Gándara et al. (2003) indicate that state and local policy implementation often structurally positions ELLs inequitably; in the case of California, they argue that deficiencies in teacher training, facilities, curriculum and materials, and assessments contribute to lower ELL academic performance state-wide. Additionally, schools tend to operate under an epistemology that favors middle-class and White values, values that are often at odds with indigenous and cultural ways of knowing (Gutiérrez et al. 1995, Moll et al. 1992, Sleeter 2001). These studies indicate that students tend to learn better when they have access to their cultural knowledge and linguistic proficiencies and when linguistic, cultural, and racial differences are understood and respected; that is, students learn best when their human and cultural capital are given voice, not silenced. 
Perhaps the most widely-researched aspect of linguistic capital present in the effectiveness literature for ELLs is language of instruction (Baker and Kanter 1981, Greene 1998, Ramírez et al. 1991, Rolstad et al. 2005, Rossell and Baker 1996, Slavin and Cheung 2005, Thomas and Collier 2004, Willig 1985). In this case, language of instruction refers to the language in which instructional services are provided, and it typically does not directly measure students’ use of their native languages. Nonetheless, despite some notable disagreements in definitions of program models, methodologies, and interpretations of results (see for example the debate between Rossell and Baker (1996) and Greene (1998)), the clear consensus among these syntheses is that bilingual approaches that utilize students’ native languages are at least as effective as monolingual approaches that utilize only English. Specifically, students acquire English proficiency and attain grade-level parity with non-ELLs in content areas faster when instructed at least part of the time in their first languages. However, there are typically corollary differences associated with each of the program models. For example, parents are often more involved in bilingual programs where they understand the language of instruction (Ramírez et al. 1991), thereby promoting secondary sources of academic success for linguistically-diverse students (e.g., assistance with homework).

Teacher-level Silence: Pedagogy, Preparation, and Dispositions

Current schooling practices continue to manifest messages of silence for linguistically-diverse students, and teachers often reinforce these messages, creating classroom atmospheres like the following example where the teacher invokes a traditional “Initiate-Respond-Evaluate” discourse pattern that effectively stifles students: “I was struck by the silence when I entered the classroom. The teacher, positioned at the front of the traditionally organized room, began to speak.
‘Where’s the adjective in this sentence?’”(Gutiérrez et al. 2000, p.14). To clarify, this example is not exceptional; rather, this teacher-directed model of instruction is quite common, even in programs specifically designed for ELLs. A nationally-representative, longitudinal study of the effectiveness of three ELL program models (i.e., Structured-English Immersion, Early-exit Transitional Bilingual, and Late-exit Transitional Bilingual) found that in all three models teachers dominated classroom discourse and students were rarely provided opportunities for active learning; instead, in more than half of observed instances, students provided no verbal responses at all (Ramírez et al. 1991). Elsewhere, researchers argue that these “monologic” spaces magnify cultural dissonance between students and teachers and work to reify inequitable power relations (Gutiérrez et al. 1995). Unfortunately, most teachers of ELLs remain largely unprepared to provide the specialized learning this growing and heterogeneous group of students requires (Ballantyne et al. 2008, Harper and Jong 2009, Menken and Antunez 2001). In fact, most ELLs sit in classrooms taught by teachers that report feeling woefully unprepared to teach them (Ballantyne et al. 2008). Despite a well-established, affirmative obligation to ensure that students receive instruction capable of providing equitable access to the language of instruction (i.e., Lau v. Nichols 1974 and Castaneda v. Pickard 1981), most ELLs receive no specialized instruction at all (Ballantyne et al. 2008, Menken and Antunez 2001). Given a long history of state and local control of education and a move by some states to mandate English-only models of instruction, the kinds of language support services available to ELLs vary widely, ranging from full immersion in dual languages to just a couple of hours of pull-out support in English. 
Thus, the relatively few ELLs who receive services receive very different kinds of instruction, often with no indication that the variations of instruction are designed to match variations amongst types of ELLs (e.g., age, language proficiency, length of residence). Denying ELLs access to adequately trained teachers and accessible curricula ensures their silence and disempowerment throughout schooling and beyond. Even in classrooms where talking and rich discussion are the norm, English learners are often silenced during class discussions because of inequitable distributions of power between students and teachers (Valenzuela 1999, Yoon 2008). Moreover, these power inequities often indicate the presence of beliefs and attitudes that inhibit students’ academic success. What teachers believe about linguistically and culturally diverse students has a tremendous impact on student engagement and academic success, and it also shapes the nature of the instruction that teachers provide (Gándara 2000, Gutiérrez et al. 2000, Maxwell-Jolly 2000, Stritikus and Garcia 2000, Tijerino and Asato 2002). Teachers acting as “street-level bureaucrats” (Lipsky 2010) have tremendous power to shape the nature of the instructional services they provide, for worse or for better, by exploiting what Jim Cummins calls “cracks in the structure” (Cummins 2001). Not surprisingly, Echevarria et al. (2006) indicated that consistency of training and degree of implementation proved more influential to the effectiveness of ELL pedagogy than did regional differences. Baca et al. (1994) agree with Echevarria and associates that achieving high levels of implementation fidelity is crucial to program success; however, they report that changes of attitudes and practice amongst teacher education faculty are difficult to accomplish.
Taken together, this suggests that teacher preparation and certification to work with ELLs, familiarity and facility with the intervention, and beliefs and attitudes are important variables to consider in the effectiveness of any intervention intended for ELLs. Moreover, it suggests that teachers support or interrupt inequitable power relations through their internal orientations to students, and to linguistic diversity more broadly, so that silencing of ELLs occurs in ways that are not always readily observable.

Student-level Silence: Positioning, Identity, and Resistance

ELLs are also positioned towards silence by distributions of power at the student level, distributions at once informed by sociopolitical factors in the local context and driven by the reorganization of social strata and identity formation at the student level (Cummins et al. 2005, Duff 2001, Harklau 2000, Leki 2001, Morita 2004, Norton 1997, Norton and Toohey 2001, Oortwijn et al. 2008, Rollinson 2003, Valenzuela 1999). First, individual differences in language proficiency, culture, length of residency, official language status (e.g., ELL, Former English Learner, Native Speaker), and socioeconomic status all contribute to learners’ identities and the way they are positioned in school and during classroom interactions. For example, Davies (2003) provides a sociolinguistic analysis of the pragmatic demands of joking for ELLs interacting with native speakers of English, and the author describes differences in approaches for initiating interactions, as well as ELL self-reports of not initiating or participating in interactions because of perceived powerlessness when interacting with native speakers of English in English-speaking contexts. Similarly, Bonny Norton’s construct of “investment” posits that individual learner characteristics are not immutable, and learners exercise agency as they position themselves in response to social ascriptions of place and power.
Moreover, investment theory argues that individuals have multiple desires that interact with changes in context and relations of power that mediate individual motivation to participate in and ability to navigate social interactions. At every level, power mediates interactions for English language learners, especially when interacting with native speakers; and although language learners exercise autonomy, they are nonetheless constrained to some extent by the social positions made available in specific contexts. Consequently, learners’ identities and motivations affect academic success in dynamic and complex ways; sometimes peer influences and individual aspirations drive learners to pursue school success, and sometimes peer networks and individual responses to power inequities lead learners to resist schooling (Deyhle 1995, Iddings and McCafferty 2007, Kamberelis 1986, Kamberelis 2001, Lensmire 1998, Pavlenko and Norton 2007, Prior 2001, Talmy 2004, Talmy 2008, Valenzuela 1999, Voloshinov 1973). For example, Valenzuela (1999) reveals that even in a schooling context structured to systematically subtract the cultural and linguistic capital of students, social capital (i.e., the networks of relationships and the resources contained within those networks) varies considerably from student to student; some students had access to community and friendship support for schooling and tended to display a pro-schooling orientation, while other students participated in social networks that failed to support or actively rejected pro-school behavior. She argues that students’ identities and their access to caring, supportive individuals largely mediated their school success or failure. Importantly, student resistance to schooling is a key example of student autonomy, and like other identity and attitudinal positions, resistance can either promote or detract from positive orientations to schooling.
Valenzuela recounts a school-wide, student-led walk-out of the high school she studied, and she documents the ways that perceptions of students’ language and culture and deficiencies in teachers’ preparation and school functioning contributed to the students’ decision to stage the protest. Similarly, Deyhle (1995) describes Navajo students’ resistance to the racism of their Anglo educators and the cultural and linguistic assimilation orientation of their schools. Interestingly, Deyhle claims that students most secure and supported in their indigenous identities were most likely to succeed in the Anglo-oriented culture of the schools, providing insight into the particular ways these students manage to resist the silencing of their cultural and linguistic capital while successfully navigating the challenges and demands of schooling. In conclusion, it is worth reiterating the primary focus of the proposed study: to investigate the effectiveness of peer-mediated learning for improving language, academic, and social outcomes for ELLs. This framing of “the problem” is intended to show the multi-faceted ways that issues of power and inequity interact with learning for ELLs. However, it is not intended to advance a claim that interactive learning methods will solve all of the inequities that ELLs face. Cooperative learning alone is no panacea. Rather, it is the thesis of this statement of the problem that questions of educational effectiveness for ELLs demand attention to the ways that power and inequity interact with learning.

## General Research Questions

Specifically, the meta-analysis reported in this dissertation seeks to answer the following two primary research questions. More specific questions and hypotheses are presented in Chapter 3, following the literature review in Chapter 2 that presents the case for examining specific variables of interest.

1.
Is peer-mediated instruction effective for promoting academic or language learning for English language learners in K-12 settings?

2. What variables in instructional design, content area, setting, learners, or research design moderate the effectiveness of peer-mediated learning for English language learners?

## Significance of the Study

The results of the proposed meta-analysis are intended to contribute to a growing literature on the effectiveness of specific instructional approaches for the fastest growing group of students in US schools, which contributes to an on-going discussion of equitable, high-quality instruction for ELLs. The results of the meta-analysis will offer a concise synthesis of multiple evaluation studies; specifically, standardized mean effect size estimates for language, academic, and attitudinal outcomes will provide systematic evidence of the effectiveness of peer-mediated instruction in key sets of learning outcomes for ELLs. Additionally, meta-analysis enables a systematic analysis of moderating factors that are important to consider when interpreting current and future evidence and when considering instructional decisions that might arise during implementation of peer-mediated learning in actual classroom contexts. As discussed in the Methods section, inclusion of studies conducted within the US and in other countries enables results to be broadly generalizable while allowing for analysis of the contribution of context as a moderator of effectiveness (i.e., are results produced in English-as-a-Foreign-Language and English-as-a-Second-Language settings significantly different?).
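The standardized mean effect size estimates described above can be made concrete with a short sketch. The code below is illustrative only: the function names and the numbers in the example are invented, and Hedges’ g combined with DerSimonian-Laird random-effects pooling is one common way to operationalize a standardized-mean-difference meta-analysis, not necessarily the exact procedure used in this dissertation.

```python
import numpy as np

def hedges_g(m_t, m_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Cohen's d) with Hedges'
    small-sample correction; returns (g, variance of g)."""
    # pooled standard deviation across treatment and control groups
    sp = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (m_t - m_c) / sp
    j = 1 - 3 / (4 * (n_t + n_c) - 9)   # small-sample correction factor
    var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    return j * d, j**2 * var_d

def pool_random_effects(gs, variances):
    """DerSimonian-Laird random-effects pooling.
    Returns (pooled effect, its SE, between-study variance tau^2)."""
    gs = np.asarray(gs, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * gs) / np.sum(w)
    q = np.sum(w * (gs - fixed) ** 2)             # Cochran's Q statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * gs) / np.sum(w_re)
    return pooled, np.sqrt(1.0 / np.sum(w_re)), tau2
```

Moderator analyses then ask whether coded study features (for example, EFL versus ESL setting) explain variation in the per-study g values beyond what sampling error alone would produce.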
# Literature Review

## Peer-mediated Learning

As indicated, the purpose of this meta-analysis is to synthesize the empirical literature on the effectiveness of peer-mediated learning for English language learners in K-12 settings; specifically, the meta-analysis computes main effects and identifies important mediators of effectiveness using experimental and quasi-experimental studies. Thus, the most relevant literature to review consists of previous meta-analyses and quantitative syntheses of peer-mediation; however, important qualitative studies, especially highly-cited reviews and syntheses, are included to ensure that relevant theoretical, instructional, and empirical variables are not overlooked by focusing exclusively on experimental designs in the literature review.

What is Peer-mediated Learning?

In this paper, “peer-mediated learning” refers to an instructional approach that emphasizes student-student peer interaction, and it is intended to provide a contrast to teacher-centered or individualistic approaches to learning. In practice, peer-mediated learning includes a variety of approaches, each with supporting literatures that are typically distinct from one another. Specifically, this meta-analysis synthesizes three distinct varieties of peer-mediated learning: cooperative, collaborative, and peer tutoring, a distinction employed in previous syntheses (e.g., Cohen 1994, Hertz-Lazarowitz et al. 1992). As illustrated below, there are numerous precedents for treating these theoretically and practically different approaches as similar, if not synonymous, terms (Cohen 1994, Johnson et al. 2000, Slavin 1996, Swain et al. 2002)*2. The use of peer-mediated as a term to include multiple varieties of instruction not only emphasizes the similarities amongst these methods, it also reflects an underlying bias in this paper.
The author currently sees a sociocognitive reading of Vygotskian theory as conceptual common ground between traditional second language acquisition models of L2 learner interaction and sociocultural models of L2 learner interaction, and Vygotskian perspectives on learning and cognitive development would describe all three approaches (i.e., cooperative, collaborative, and peer tutoring) as peer-mediated learning (see for example, Lantolf 2000)*3. Nonetheless, this paper does not assert that Vygotskian theory is explicitly or implicitly invoked by all of the authors or analyses included in this synthesis. Rather, it is posited that Vygotskian theory provides a heuristic lens that enables a coherent synthesis of varied literatures. Thus, the treatment of several varieties of peer-mediated learning as similar does not imply that they are identical; rather, the intention is to focus on what they have in common, especially when compared to teacher-driven or individualistic approaches. However, for the sake of clarity and to maintain an awareness of how the varieties do differ in meaningful ways, each of the three focal varieties of peer-mediated learning is briefly reviewed separately below.

Cooperative learning

Cooperative learning represents what Slavin (1996) calls “one of the greatest success stories in the history of educational research” (p. 43), and he claims that hundreds of control group evaluations have been conducted since the 1970s, with the most common outcome being some kind of academic achievement. Johnson et al. (2000) conducted a widely-cited meta-analysis of the effects of cooperative learning on various measures of academic achievement, and the authors note that “cooperative learning is a generic term referring to numerous methods for organizing and conducting classroom learning” (Johnson et al. 2000).
A common defining characteristic of cooperative learning approaches is the degree of structure (Oxford 1997, Slavin 1996); in fact, in this paper, degree of structure is the defining criterion that distinguishes cooperative and collaborative approaches. In general terms, cooperative methods emphasize carefully-structured groups, and students typically have well-defined roles to play. For example, in Jigsaw, students are each responsible for mastering one piece of the target content and typically report back to the team as the designated expert on that piece of the content. In order for the group to demonstrate mastery of the material, each person must adequately learn and then convey that individual piece of the overall content. In other forms of cooperative learning, students are assigned roles like Reporter and Researcher. Nonetheless, cooperative methods vary in their degree of structure, and Johnson et al. (2000) also analyze the eight methods of cooperative learning synthesized in their meta-analysis along a five-point continuum of structure ranging from what they call direct (i.e., structured) to conceptual (i.e., unstructured). The description of Jigsaw above highlights another important component that defines cooperative methods: interdependence. The concept of interdependence is closely tied to group goals and is intended to measure the extent to which individual members rely on each other for success. Several versions of cooperative learning suggest that students are motivated to participate in cooperative tasks because the group shares a common goal; however, researchers argue that commonly shared group goals are insufficient alone (e.g., Johnson et al. 2000, Slavin 1996). For instance, “free riders” may simply float along on the work of others under the sole condition of group responsibility for goal completion.
Instead, these researchers theorize that there must also be individual accountability, and the combination of individual accountability and group goals contributes to the establishment of group interdependence. Nobody wins unless the group wins, and the group can only win if everyone demonstrates individual learning. Kluge (1999) suggests that there are several types of interdependence that can be established, including: team interdependence, resource interdependence, goal interdependence, reward interdependence, identity interdependence, and outside enemy interdependence; importantly, Kluge argues that these elements of interdependence do not all have to be present, and he suggests that practitioners may want to mix and match elements to suit their context and teaching style.

Collaborative Learning

A number of reviews treat cooperative and collaborative methods as if they are similar, if not identical, methods (e.g., Cohen 1994). However, this meta-analysis follows in the footsteps of researchers who see these two approaches as similar, but distinct, methods for engendering active, student-centered learning (e.g., Hertz-Lazarowitz et al. 1992, Mathews et al. 1995, Oxford 1997). In the most general sense, the two methods are quite similar; for example, they both structure learning by placing learners in small groups, and both approaches place explicit emphasis on encouraging peer-peer interaction and the active construction of meaning. Nonetheless, a more nuanced understanding of the two approaches reveals that the methods operate noticeably differently from one another. Mathews et al. (1995) provide a nice distillation of some of the most important differences, including: the role and degree of involvement of the teacher, relations of power between teachers and students, the necessity of training students to work in small groups, and important differences in task construction and group formation.
Essentially, collaborative learning represents a less-structured, more “democratic” set of approaches to small group learning. Cooperative methods, on the other hand, tend to emphasize highly-structured student roles and maintain more traditional teacher-student distributions of power. In collaborative methods, completion of a complex task tends to be the central objective, and students are often left to their own devices to divide the labor, develop relations of power and authority, and navigate task demands.

### Peer Tutoring

While cooperative and collaborative methods dominate the field of peer-mediated learning approaches, it is important to recognize that there is considerable diversity of approaches within the field. Inclusion of peer tutoring approaches is intended to illustrate this diversity, while acknowledging that other peer-mediated approaches exist. Peer tutoring approaches also vary widely (see Goodlad 1998 for a more detailed discussion), though in general they utilize older, or more capable (i.e., academically successful), peers to provide one-on-one instruction for struggling learners. Although this can occur within grade levels, it is frequently used between grade levels, with older students serving as tutors to younger tutees. Thus, by utilizing well-defined roles and structured relationships of power, peer tutoring approaches contain many elements of more structured cooperative learning approaches. Of course, as with cooperative and collaborative approaches, peer tutoring methods emphasize peer-peer interaction and seek to foster active, rich discussion from all participants. Fantuzzo et al. (1989) explicitly tested several key components of reciprocal peer tutoring, a particular form of peer tutoring that emphasizes more equitable relations of power between peers and in which both partners are responsible for teaching the other, to determine which aspects of peer tutoring are responsible for its effectiveness.
In particular, the authors attribute peer tutoring’s effectiveness to the combination of preparing to teach, actually teaching, and individual and joint accountability for learning success. Thus, they see group interdependence as an important part of its success in ways similar to cooperative learning, but they emphasize that the requirements of teaching activate particularly important cognitive and social learning processes; consequently, peer tutoring adds an instructional element typically underemphasized or entirely absent in cooperative and collaborative methods.

## How Does Peer-mediation Promote Learning?

Slavin’s (1996) review of the state of the field of cooperative learning research outlines four theoretical perspectives within cooperative learning alone. These four perspectives (motivation, social cohesion, cognitive development, and cognitive elaboration) are each associated with different interventions, contextual variables, and emphases on tasks and student roles; however, Slavin suggests that these differing theoretical orientations need not be seen as mutually-exclusive frameworks. Rather, they may be seen as interactive aspects of a complex process, and Fig. 1 presents his conceptual model as one way of integrating the objectives and emphases of these four perspectives. Thus, according to his model, group interdependence is a necessary component of enhanced learning through cooperation. Group interdependence is mediated by a number of motivational factors that contribute to several specific components of peer-mediated learning, including: elaborated explanations, peer modeling, and peer assessment and correction. It seems clear from the literature base of individual studies from which Slavin draws that not all of the individual components in the third box need be present for peer-mediated learning to be effective; rather, group interdependence fosters motivation, which enables some of the individual components to occur.
Slavin even acknowledges that limited evidence suggests that group interdependence need not always be present, but he argues that it is easiest to make cooperative methods effective when interdependence is present (Slavin 1996).

Cohen (1994) reviews the extant literature on the conditions for making small group instruction effective, and she identifies a number of factors that must be managed when implementing small groups. Unlike Slavin (1996), Cohen’s analysis attempts to “move away from the debates about intrinsic and extrinsic rewards and goal and resource interdependence that have characterized research in cooperative learning” in order to focus on the kinds of tasks and kinds of discourses that promote learning. Similarly, as an alternative to the psychological focus of most cooperative learning research, she proposes a sociological heuristic that examines distributions of power between teachers and students and among the students themselves. For example, in the oldest study of small group interaction that she reviews (i.e., Barnes and Todd 1977), Cohen reports that some small groups engaged in destructive discourses (e.g., verbally attacking one another), and she argues that students need both cognitive and social skills to participate effectively in small groups. In addition to including aspects of power and equity, Cohen (1994) introduces the concept of productivity*4, which she distinguishes from related terms like effectiveness. Her key argument for preferring productivity is that the amount and kinds of interaction needed to promote achievement differ according to the kinds of outcomes desired. For instance, she argues that the kinds of interaction needed for successfully completing a worksheet collaboratively with a partner are very different from the kinds of interaction needed to foster higher-order or innovative thinking.
Furthermore, she argues that the term productivity also enables analyses of equal-status interactions or the adoption of prosocial behaviors with members of social or ethnic out-groups in ways that effectiveness does not typically include. In particular, the idea that certain kinds of interactions may promote particular outcomes suggests that researchers should carefully analyze the kinds of interaction that occur, in addition to more superficial measures of intervention fidelity, and that analyses should examine the relationship between the type of discourse and the type of outcome measured.

## Empirical Evidence for Peer-mediated Learning

Both quantitative and qualitative evidence support the claim that peer-mediated learning is effective at promoting numerous kinds of outcomes. While the qualitative syntheses presented below, with some exceptions like Slavin’s narrative “best-evidence” reviews (Slavin 1996, Slavin and Cooper 1999), predominantly present theoretically-driven analyses, the quantitative syntheses tend to compare the effectiveness of particular models of peer-mediated learning or the effectiveness of particular peer-mediated methods with different types of students.

### Qualitative Evidence

Kluge (1999) offers a brief narrative synthesis of research on cooperative learning, and he reports positive outcomes on a variety of variables, including: use of higher-order and strategic thinking, academic achievement tests, relationships with classmates, self-esteem, increased turn-taking when compared with whole-group instruction, and “discrete and integrative” language outcomes. Cohen (1994) also provides a narrative synthesis, though her search and analytical methods are far more transparent and rigorous, and her included sample is much larger than the sample synthesized in Kluge (1999).
Notably, she excludes studies that compare cooperative learning to traditional instruction, which is the precise contrast intended in this meta-analysis, opting instead to focus on studies that compare various forms of cooperative learning. She also favors studies conducted in classrooms and systematically rejects lab studies if the task “bore no resemblance to classroom instruction”. Finally, she rejects discourse analyses, peer response groups for writing instruction, and peer tutoring; thus, her included sample is quite different from the studies that will be included in this meta-analysis. Nonetheless, she reports a theoretically-driven synthesis of both qualitative and quantitative studies that includes outcomes like induction of general principles of gears in a physics class, sophistication of debugging statements in a computer class, and more traditional measures of academic achievement. Cohen’s analysis of the effectiveness of different models of cooperative learning is also more nuanced than Kluge’s, and she reports that models that use both goal and reward interdependence tend to be more effective than models that employ either alone. Also, she finds that some models of cooperative learning may be more effective for particular groups of students. For example, Cohen reports that in some studies, White students performed best in the more competitive forms of cooperative learning, and Mexican American students seemed to perform better in traditional forms of instruction than cooperative forms (Cohen 1994).

Slavin (1996) is a narrative review of high-quality experimental or quasi-experimental studies; thus, effect sizes are reported throughout the review, but the synthesis is conducted narratively and examines evidence for each of the four theoretical perspectives on cooperative learning illustrated in Fig. 1.
This is fairly typical of what Slavin calls “best evidence synthesis” (e.g., Slavin 1986, Slavin 1990), and it is included under the qualitative syntheses because of its heavy reliance upon, and analysis of, theory. Like Cohen (1994), Slavin reports that considerable evidence supports the motivational perspective’s claim that group rewards used together with individual accountability produce the strongest group interdependence and, consequently, the strongest effects on achievement. For example, Slavin reports that studies with both group rewards and individual accountability (n=52) had a median effect size of .32SD, compared to a median effect size of .07SD for studies that did not include both components (n=25). Similarly, Slavin presents results from a few studies that directly compared the combination of group rewards and individual accountability to its individual components, and these studies consistently reported much larger effect sizes for the combined condition than for conditions containing only one component. However, like Cohen (1994), Slavin concedes that under certain conditions the combination of group reward and individual accountability may not be necessary, including: complex tasks with more than one right answer, highly-structured peer interaction, or volunteer study groups. Nonetheless, he maintains that group rewards do not harm, and may actually improve, achievement results in those situations that do not require well-structured group interdependence. Finally, Slavin also confirms that some studies have reported stronger effects for certain types of students (e.g., Black students over other ethnicities, or those who prefer cooperative methods over those who prefer competitive methods); however, his evidentiary base for these claims is thin, and he ultimately argues that the results are mixed and inconclusive.
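The effect sizes reported throughout these syntheses are standardized mean differences: the difference between treatment and control group means divided by a pooled standard deviation, often with a small-sample correction. As a minimal, generic sketch of that computation (not code from any of the reviews discussed here; the function name and example data are illustrative only):

```python
from statistics import mean, stdev

def hedges_g(treatment: list[float], control: list[float]) -> float:
    """Standardized mean difference (Cohen's d) with Hedges'
    small-sample correction J = 1 - 3 / (4*df - 1)."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pooled standard deviation across both groups
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    d = (mean(treatment) - mean(control)) / pooled_sd
    df = n1 + n2 - 2
    return d * (1 - 3 / (4 * df - 1))  # Hedges' g
```

A value like the .32SD reported above therefore means the treatment-group mean sits about a third of a pooled standard deviation above the control-group mean on the outcome measure.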
In another best evidence synthesis of qualitative and quantitative studies, Slavin and Cooper (1999) review the effectiveness of peer-mediated methods at promoting equitable relations among ethnic and racial groups in schools. Unlike the limited evidence regarding equity or racial diversity presented in Cohen (1994) and Slavin (1996), Slavin and Cooper provide a theoretical and empirical review of the rationale for using peer-mediated methods to improve intergroup relations, and they argue that these approaches offer promise for helping schools shift from viewing diversity as a problem to solve to utilizing diversity as a resource for learning and socialization. Based largely on Gordon Allport’s Contact Hypothesis*5, Slavin and Cooper argue that under the right conditions of equitable power relations, increased contact with members of racial and ethnic out-groups can improve inter-racial relations. Nonetheless, Slavin and Cooper claim that all too often “cross-ethnic interaction between students is superficial and competitive” (p. 649), and like Allport, they caution that poorly-structured interaction between groups can actually increase stereotypes and racial tensions. Moreover, Slavin and Cooper note that researchers like Cohen stress the importance of establishing equitable conditions, arguing that not all students will have equal opportunities to contribute without direct teacher engagement in the process. However, with training and practice, teachers can actively and successfully promote the status of low-status students and foster an atmosphere of cooperation and respect.
Like many of the quantitative syntheses discussed below, Slavin and Cooper (1999) reviewed mostly quantitative evaluation research from around the world covering several different methods of peer-mediated learning, including: Student Teams-Achievement Divisions (STAD), Teams Games Tournament (TGT), Team-Assisted Individualization (TAI), Jigsaw, Group Investigation (GI), and Learning Together (LT). The most common outcome was number of cross-racial friendships, though a few studies included related outcomes like cross-racial interaction during free time and positive ethnic attitudes. Using a narrative, synthetic approach often called “vote counting” (Lipsey and Wilson 2001), Slavin and Cooper report that 16 of 19 studies demonstrated positive impacts on cross-racial friendships “when the conditions of contact theory are fulfilled” (p. 656).

Finally, another synthesis of cooperative learning explores the literature on the effectiveness of cooperative methods with Asian students in preschool to college settings (Than et al. 2008). Using a method much like Slavin’s best evidence synthesis, the authors include only experimental and quasi-experimental studies in their formal analysis, but they draw heavily from the theoretical literature in their broader analyses. Unlike the work of other researchers who find ecological validity in the inclusion of multiple forms of peer-mediated learning (e.g., Cohen 1994, Rohrbeck et al. 2003), Than and colleagues explicitly exclude peer tutoring and collaborative approaches in order to maintain a tighter focus on the specific structures associated with cooperative methods, and they explicitly limit the range of outcomes to measures of academic achievement. Thus, the results are not informative of much of the literature included in this proposed meta-analysis, but the careful focus on the influence of cultural norms is uniquely informative.
Specifically, the authors report that only seven of fourteen included studies demonstrated positive results, and they argue that cultural norms specific to Asian cultures make “Western…student-centered learning” approaches ineffective (p. 82). For example, Than and colleagues point to Asian students’ preference for teacher-centered lecture formats and teachers’ frequent unwillingness to alter traditional roles and distributions of power as cultural norms interfering with key tenets of cooperative learning (i.e., active construction of knowledge and equitable distributions of power). Similarly, the authors claim that an Asian principle of “survive in harmony” that dictates students make individual decisions without creating overt disagreements conflicted with the more argumentative, confrontational nature of “face-face promotive interaction” typical of peer-mediated learning methods (p. 84).

### Quantitative Evidence

Unlike the theoretically-oriented syntheses presented above, the following quantitative reports offer more methodologically-focused syntheses that compare various models of cooperative learning to one another (Johnson et al. 2000, Johnson et al. 1981), test components of a particular model of peer-mediated learning (Fantuzzo et al. 1989), or assess the effectiveness of a particular model with different student populations (Rohrbeck et al. 2003, Roseth et al. 2008). Johnson et al. (1981) report the results of a meta-analysis dating to the time that the technique was first being developed; that is, the use of meta-analysis to study cooperative learning has strong precedent. The authors report the results of 122 studies and 286 independent effect sizes, dividing the effect sizes into the following categories: individualistic, interpersonal competition, cooperative, and cooperative with inter-group competition.
Speaking directly to the fundamental question of this meta-analysis, Johnson and colleagues report that cooperative methods had a mean effect size .78SD larger than individualistic methods. In fact, the two forms of cooperation (with or without intergroup competition) performed equally well, on average. Cooperation with competition also produced consistently larger effect sizes than interpersonally-competitive methods, with a mean difference of .37SD. Thus, this early meta-analysis offers consistent support for the claim that peer-mediated approaches outperform individualistic learning approaches. The authors also conducted some tentative moderator analyses and argue that type of task (low versus high cognitive complexity), size of the cooperative group, task interdependence, duration of the study, year of publication, sample size, and journal quality are consistently significant predictors of effect size variation. Notably, subject area was not a significant predictor of effect size variation in any of the comparisons, suggesting peer-mediated approaches are useful across content areas.

More recently, Johnson et al. (2000) synthesize the results of 164 studies with 194 independent effect sizes; the authors selected from over 900 studies identified with the keyword “social interdependence”, revealing a psychological orientation to the topic. Unlike the earlier 1981 meta-analysis just discussed, this meta-analysis attempts to provide a comprehensive comparison of the most widely-researched models of cooperative learning, including: Learning Together (LT), Academic Controversy (AC), Student-Team-Achievement-Divisions (STAD), Teams-Games-Tournaments (TGT), Group Investigation (GI), Jigsaw, Team-Assisted Individualization (TAI), and Cooperative Integrated Reading and Composition (CIRC).
Johnson, Johnson, and Stanne’s meta-analysis reports separate effect sizes for each comparison group, as well as confidence intervals, to provide separate estimates of the effectiveness of each cooperative method against competitive or individualistic methods. Notably, all eight of the approaches included in these analyses produce mean effect sizes superior to competitive (range g=.18 to g=.85)*6 and individualistic approaches (range g=.13 to g=1.04). Learning Together, developed by Johnson and Johnson, who co-authored the meta-analysis, consistently produces the largest effect sizes against both competitive and individualistic approaches, while the effect sizes for competitive and individualistic approaches are statistically equivalent. Like Johnson et al. (1981), this meta-analysis offers strong and consistent support that a wide variety of peer-mediated approaches are more effective at producing academic achievement gains for school-aged children than more traditional, competitive or individualistic approaches. Interestingly, Johnson et al. (2000) rate each cooperative method on a scale ranging from direct (very specific, well-defined techniques a teacher can learn quickly) to conceptual (conceptual frameworks teachers learn and use as a template to restructure lessons), a continuum similar to the concept of structure previously discussed. The coded score for each method is actually a composite of five different components of instruction, and the composite score is correlated with the effect sizes presented in the primary analysis. Degree of conceptualness correlates positively with effect sizes versus competitive (r=.32, p<.001) and individualistic approaches (r=.46, p<.001). This finding indicates that the more difficult to learn, but ultimately more flexible and dynamic (i.e., more conceptual), forms of peer-mediated learning are more effective at promoting academic achievement.
While this echoes the claim in Cohen (1994) that more conceptually-complex forms of group work are important for everything but the most rote forms of learning, Johnson and colleagues’ use of a single effectiveness variable suggests that the authors operationalized achievement as effectiveness, not productivity as intended by Cohen.

One approach to determining the important components of an intervention is to systematically examine the contribution of important variables over the course of many separate replications (i.e., a meta-analysis); a more fine-grained approach, however, is to design a study that explicitly tests various components individually and/or in multiple combinations, and Fantuzzo et al. (1989) is a well-cited example of just such a study. The study is a “component analysis” of Reciprocal Peer Tutoring, and although the participants were college-aged and thus not directly comparable to the intended population for this meta-analysis, the insightful analysis of peer-mediated learning in an equitable form of peer tutoring is informative nonetheless. One hundred college-age students were randomly assigned to one of four conditions: dyadic structured, dyadic unstructured, independent structured, and independent unstructured. The dyadic conditions consisted of two randomly paired students who took turns being both tutor and tutee, an arrangement that would rank fairly high in Cohen’s construct of equitable power relations between students. The structured groups followed a specific test-item creation and sharing procedure, while the unstructured groups were provided discussion topics related to the final exam taken by all participants. Initial examination of a number of variables (e.g., age, GPA, ethnicity) revealed no significant differences between groups.
Analyses of covariance detected positive and significant effects for both the dyadic (F(1,95)=8.68, p<.005) and structured conditions (F(1,95)=7.06, p<.01), providing a rigorous, direct test of two of the key theoretical components of peer-mediated learning: peer interaction and structure. This finding informs the debate within the field between those who see strong structure as key (e.g., Slavin 1996) and those who argue that complexity and flexibility are more important (e.g., Cohen 1994), adding to the empirical support for the high-structure camp. Interestingly, the results also indicate a positive interaction between the dyadic and structure components for measures of psychological adjustment, course satisfaction, and a “generalizability” version of the assessment (though not the actual assessment) “due to the relative superiority of the DS [dyadic structured] condition” (p. 176). This finding also supports Slavin’s (1996) more nuanced argument that positive results can be found for various components of cooperative learning in isolation, but that positive effects are more likely when multiple components operate in conjunction (e.g., interdependence and individual accountability).

Finally, two meta-analyses examine the impact of peer-mediated methods for particular groups of students. Rohrbeck et al. (2003) assess the effectiveness of peer-assisted learning (PAL) interventions for elementary-aged children. The authors intend PAL, like peer-mediated learning, to be an inclusive term for a variety of specific approaches, including cooperative learning and peer tutoring; in fact, they claim that syntheses that examine only one form of PAL (e.g., Johnson et al. 1981) lack ecological validity, since strict adherence to a particular form of peer-mediated learning fails to reflect the reality of classroom instruction.
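Rohrbeck et al. report Winsorizing outlying effect sizes before computing their mean effect. Winsorizing clamps extreme values to a chosen cutoff rather than deleting them, so an outlying study still contributes to the average but cannot dominate it. A minimal sketch of one simple percentile-clamp variant (the authors' exact cutoffs and procedure are not reproduced here; the function and data below are illustrative):

```python
def winsorize(values: list[float], lower: float = 0.05, upper: float = 0.95) -> list[float]:
    """Clamp values below the lower quantile or above the upper
    quantile to the cutoff value, instead of dropping them."""
    ordered = sorted(values)
    n = len(ordered)
    # Simple nearest-rank quantile cutoffs
    lo = ordered[int(lower * (n - 1))]
    hi = ordered[int(upper * (n - 1))]
    return [min(max(v, lo), hi) for v in values]
```

For example, an extreme effect size of 3.0 among values near .3 would be pulled back to the upper cutoff, keeping the study's positive direction in the pooled mean without letting one result swamp the estimate.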
The authors included 81 studies with sufficient information to compute effect sizes, and after Winsorizing outliers and adjusting for small sample sizes, the mean main effect was d=.33 (p<.0001). Moderator analyses indicate that groups with more than 50% minority students produce larger effect sizes, on average, and students in urban settings tend to outperform students in rural settings. In this study, grade level and SES are weaker predictors of effectiveness, and content area is not significant as a moderator variable. The authors also examine several variables of theoretical interest and find that interventions that allow more student autonomy are more effective than those with less autonomy; however, the degree to which student roles are structured is not a significant moderator. Thus, this meta-analysis provides a nuanced understanding of structure, suggesting that student autonomy and the motivation that accompanies it exert a different effect than the contribution made by structured roles. Moreover, this finding lends support to Cohen’s (1994) claim that cognitive complexity and flexibility are superior to tightly-scripted roles for most kinds of learning. Programs that require interdependence are more successful than those that do not, but insufficient data exist to determine whether reciprocal peer roles are more effective than fixed roles.

Roseth et al. (2008) meta-analyze the results of 148 studies on the effectiveness of cooperative methods compared to individualistic and competitive approaches for early adolescents, extending the age-specific findings of Rohrbeck et al. (2003) to a slightly older group of students.
Given the developmental emphasis placed on social relationships during this age period, the authors investigate the effects of cooperative methods, and the social interdependence they foster, on both academic outcomes and peer relationships, and they also directly test the relationship between peer relationships and academic achievement. As with previous meta-analyses conducted by Johnson and Johnson, the general pattern of results holds: overall, cooperation is superior to competition (ES=.46SD) and individualistic approaches (ES=.55SD) at improving academic outcomes, while competitive and individualistic interventions are statistically equivalent. Similarly, for peer relationship outcomes, cooperation is more effective than competitive (ES=.48SD) and individualistic approaches (ES=.42SD). For both sets of outcomes, removing low-quality studies produces larger effect sizes, suggesting that low-quality studies may exert downward pressure on effectiveness estimates. Treatment fidelity is also a significant moderator in HLM moderator analyses. To examine the relationship between peer relationships and achievement, mean achievement is regressed on estimated mean peer relationship in the 17 studies that included both dependent variables. When study quality is controlled, peer relationships account for approximately 40% of the variance in effect sizes. This finding offers unique theoretical insight into the key components of peer-mediated learning for a particular group of students, and the careful attention to both theoretical and methodological variables provides a nuanced interpretation of theoretical questions about the effectiveness of peer-mediated learning for adolescents.

In conclusion, considerable qualitative and quantitative research supports the assertion that peer-mediated methods of instruction are more effective at promoting multiple kinds of outcomes than individualistic or competitive approaches.
Despite decades of consistently positive research, a number of variables of instructional structure (e.g., size and composition of groups) and social interaction, as well as important learner (e.g., age) and methodological (e.g., design and measurement) variables, remain important foci of current and future research. In particular, few syntheses of the effectiveness of peer-mediation for particular kinds of students exist, and none of the syntheses discussed so far even mention specific issues involving linguistically diverse students. Thus, questions of whether, why, and under what conditions peer-mediation is effective for English language learners are the focus of the remainder of this literature review.

## Peer-mediated Learning and ELLs

While much of the research regarding the effectiveness of cooperative learning reviewed so far is relevant for English language learners, it is important to keep in mind that English language learners are a distinct group of learners who, by definition, must master both academic and language objectives. Thus, when considering ELLs, it is essential to consider whether peer-mediated methods are effective for both academic and language outcomes, and as noted, language outcomes are largely ignored in the studies already reviewed. Moreover, in order to identify the relevant instructional and theoretical foci for L2 research, it is essential to understand whether important linguistic mechanisms are engaged during peer-mediated learning that are conceptually distinct from the more psychological and sociological mechanisms just discussed.
### Academic Rationale for Peer-mediated Learning with ELLs

Several recent syntheses of effective instruction for English language learners suggest that cooperative and collaborative models of instruction could be effective for promoting language, literacy, and content-area learning for ELLs (Allison and Rehm 2007, August and Shanahan 2006, Cheung and Slavin 2005, Genesee et al. 2005, Gersten and Baker 2000); however, these syntheses provide only tentative support for peer-mediated models of education. First, these syntheses review multiple forms of instruction, not just peer-mediated methods. Second, the authors frequently report insufficient, or contradictory, evidence for drawing strong conclusions. For example, the National Literacy Panel on Language-minority Youth and Children (August and Shanahan 2006) reviews studies of literacy outcomes from instructional interventions ranging from complex, whole-school reform models like Success for All to small, researcher-created interventions targeting one aspect of literacy (e.g., fluency). Yet, across these disparate interventions, the panel repeatedly favors approaches that emphasize direct, explicit instruction. In fact, the National Literacy Panel reviews only two studies that focus on peer-mediated learning approaches; while some of the complex approaches like SFA include a strong cooperative learning component, the results for these studies do not indicate whether it is cooperative learning specifically that contributed to the effectiveness of these programs. Indeed, other work by Robert Slavin, the creator of SFA, explicitly argues that it is precisely the complex interaction of multiple components that makes these whole-school reform models effective (e.g., Cheung and Slavin 2005).
Additionally, the National Literacy Panel Report includes a chapter on qualitative reports that consistently suggest cooperative learning is an important part of high-quality instruction for ELLs (e.g., Gersten and Baker 2000), though the conclusions drawn are tentative and carefully constrained. The authors of the National Literacy Report conclude only that “these attributes overlap with those of effective instruction for nonlanguage-minority students” and that “these factors need to either be bundled and tested experimentally as an intervention package or examined as separate components to determine whether they actually lead to improved student performance” (p. 520). Thus, the National Literacy Panel claims that mainstream research is largely sufficient to explain the effectiveness of peer-mediated approaches, and that more high-quality research is needed before firm claims can be made about peer-mediated methods specifically. Two other high-profile reviews (Genesee et al. 2005, Gersten and Baker 2000) synthesize research for a variety of instructional approaches, so much of the research they review is not directly applicable to this meta-analysis; however, like the National Literacy Panel, they represent some of the most authoritative qualitative reviews of effective instructional approaches for ELLs. Investigating effective instructional approaches for ELLs in the elementary and middle grades, Gersten and Baker (2000) present a “multivocal research synthesis” that utilizes focus-group interviews with educators, as well as a more traditional narrative review of experimental and descriptive evaluation studies. In a brief section on “cooperative and peer tutoring approaches”, the authors suggest that both approaches are effective, especially for “decontextualized language concepts with high degrees of cognitive challenge” (i.e., similar to the academic claim in Cohen 1994).
However, the authors also report that these methods must be carefully tailored to the academic and linguistic needs of ELLs and that teaching ELLs is not simply “good teaching” (pp. 461-464). In a larger and more systematic review of all empirical research conducted in the US since 1980 and reporting academic, literacy, or language outcomes, Genesee et al. (2005) provide syntheses for each of the three outcomes separately. In very brief discussions of “direct” and “interactive” instructional approaches, the authors conclude that interactive approaches (i.e., peer-mediated) that also include carefully-targeted direct instruction are ideal, and they report that interactive approaches boost literacy and academic gains for ELLs. No synthesis of the effectiveness of peer-mediated methods at improving academic outcomes for ELLs was identified in the review of extant literature for this meta-analysis, which is a strong warrant for the pursuit of this particular study. Consequently, only high-visibility, individual studies exist to document the academic rationale for using peer-mediated methods with ELLs. What Works Clearinghouse (WWC) reports results for only the most methodologically-rigorous studies, and taken as a whole, the inclusion criteria and analyses make the WWC site something like a quantitative synthesis of research; granted, WWC does not employ meta-analysis or any other formally-synthetic method to make claims across the included studies, so the actual reports are not truly syntheses. For ELLs, What Works Clearinghouse reports separately for the following outcomes: reading/writing, mathematics, and English language development. Of the studies included for reading and writing, only three use peer-mediated methods extensively, and all three demonstrate effectiveness at promoting literacy outcomes for ELLs. 
Two of the peer-mediated literacy interventions are complex models of which peer-mediated learning is one of multiple components (i.e., Success for All and Bilingual Cooperative Integrated Reading and Composition), and only one of the interventions focuses exclusively on the effectiveness of peer-mediation (Peer-assisted Learning Strategies, or PALS). WWC does not report any interventions for ELLs with math/science outcomes that meet its standards for inclusion, and language outcomes are discussed in the following section that presents the linguistic rationale for using peer-mediation with ELLs. A closer look at the full reports of the three included interventions with literacy outcomes reveals that a number of important instructional variables differ across these interventions. For example, the most effective of the three interventions is BCIRC, and the WWC report is based almost entirely on Calderón et al. (1998). In the original report, the authors indicate that BCIRC combines extensive use of heterogeneous grouping, carefully-structured roles and procedures for small-group interaction, and direct instruction of academic and language objectives, thus supporting the claim that a combination of direct and interactive approaches is the most effective for ELLs (e.g., Cheung and Slavin 2005, Genesee et al. 2005). Moreover, the authors indicate that teachers were trained to make extensive use of the linguistic and cultural knowledge of the students; in fact, BCIRC is an intentionally bilingual approach that leverages students’ native language as an instructional resource. The authors attribute the effectiveness of peer-mediated learning for ELLs to the ways that “the verification of ideas; the planning of strategies for task completion; the discourse of politeness, consensus seeking, compromising; and the symbolic representation of other intellectual acts are enacted through peer communication” (p. 157). 
Thus, they offer the most nuanced explanation for the academic effectiveness of peer-mediation of any of the syntheses discussed so far; however, as a single study, the claim lacks the statistical power and ecological validity that a synthetic finding would likely possess. Moreover, the fact that the intervention contained several components that were not explicitly tested (e.g., Fantuzzo et al. 1989) also limits the explanatory power of the study. Like BCIRC, Peer-assisted Learning Strategies (PALS) was evaluated for use in upper elementary ELL classrooms, and like BCIRC, only one evaluation study of the intervention meets WWC standards (Sáenz et al. 2005). PALS utilizes carefully-matched dyads that are taught to interact in structured ways with texts and each other. Importantly, both students take turns being the tutor and the tutee, even though ability differences are structured into the pairings. In the original study, the authors suggest that PALS is likely to be especially effective for ELLs because of increased opportunities for language production, individualized reading instruction, and practice with academic tasks like summarizing and making predictions. Notably, the report on PALS in the original study also offers a linguistic rationale for the intervention’s academic effectiveness with ELLs. While the next section will discuss the linguistic rationale for using peer-mediation with ELLs, most of the outcomes discussed in that section will be language outcomes. Thus, Sáenz and colleagues make an important point regarding the effectiveness of peer-mediated methods with ELLs—the linguistic benefits of peer-mediation likely contribute to both linguistic and academic outcomes. 
## Linguistic Rationale for Peer-mediated Learning with ELLs

While no formal synthesis of the effectiveness of peer-mediation at promoting academic outcomes for ELLs exists, several theoretical, qualitative, and quantitative syntheses of its effectiveness at promoting language outcomes for ELLs do exist. Thus, there is a considerably stronger rationale for using peer-mediation to promote language learning for ELLs than for promoting academic outcomes, and this is a key assertion because it is precisely English language proficiency that defines this group of students. In this sense, peer-mediated learning offers promise not only as an effective approach for promoting the academic success of ELLs but also as an important tool for removing the fundamental barrier to equal access to the mainstream school curriculum the term ELL is intended to identify: English language proficiency*7. Oxford (1997) provides a narrative synthesis of three strands of “communicative teaching” in the language classroom that closely mirror the key constructs of this meta-analysis: cooperation, collaboration, and interaction; and she suggests that these strands are related but theoretically distinct*8. Like this meta-analysis, Oxford distinguishes cooperative learning from collaborative learning primarily in the degree of structure embedded in the activity and the extent to which learner roles are prescribed and consistent across groups and events; collaborative learning tends to be less structured. Like Slavin (1996), she also asserts that positive interdependence must be structured into the activities if cooperative methods are to be effective; however, for collaborative research, she draws a new theoretical distinction. Oxford asserts that collaborative methods have their roots in Dewey’s social constructivism and Vygotskian social psychology, and she asserts that constructs like mediation, scaffolding, and cognitive apprenticeship are central for collaborative theorists. 
Unlike cooperative approaches, the key objective of collaborative learning is not to stimulate motivation through the construction of interdependence among learners; rather, the goal is to incorporate students into a community of learners. Interaction, on the other hand, draws from a predominantly linguistic base, and this strand draws heavily upon constructs like comprehensible input, comprehensible output, and Michael Long’s Interaction Hypothesis. The basic idea is that interaction promotes language learning by providing opportunities for students to modify output in ways that maximize the production of the comprehensible input that drives language acquisition. Whereas cooperation is high-structure and collaboration is low-structure in her scheme, interaction studies vary widely on this variable. Importantly, Oxford identifies a number of additional variables that influence the effectiveness of interactive approaches, including learner variables (i.e., willingness to communicate and learning styles) and grouping dynamics (i.e., group cultures and physical arrangement of the classroom). In a narrative review of both qualitative and quantitative empirical research, Swain et al. (2002) review the effectiveness of “peer-peer” dialog at promoting listening, speaking, reading, and writing language outcomes. Swain and colleagues adopt a Vygotskian lens on language learning that suggests peers can support each other’s language acquisition by working within the zone of proximal development to enable language production and comprehension beyond what they might be able to accomplish individually, and agreeing with Oxford, the authors characterize these interactions as collaborative. It is worth noting that many of the studies reviewed are of French immersion students (i.e., English-speaking Canadian students learning French as a second language) and Spanish-learners; thus, the results are informative but not directly applicable to this meta-analysis. 
In particular, the findings reported in Swain et al. (2002) are based on microgenetic analyses of language learning as it occurs in interaction, and data sources tend to feature transcripts, as well as pre-/post-measures of learning. For example, Swain et al. (2002) report that peer feedback during reading and writing activities is instrumental to language learning, and several important mechanisms are discussed, including reformulations and recasts, collaborative planning/drafting/revising, metalinguistic talk, finding the main idea, and vocabulary comprehension. Swain and colleagues report that in an interesting series of studies by Storch, the nature of peer feedback proved particularly important, and the author rated the feedback on two scales that are reminiscent of mainstream peer-mediation constructs already discussed—equality (similar to degree of authority or power) and mutuality (similar to interdependence). Storch reported that the more collaborative the dyads were on these two scales, the more opportunities for and success with language learning occurred. In the terms previously used in L1 research, this would mean that conditions of equity and high interdependence produce the largest gains. Swain et al. also note that for these approaches to be maximally beneficial, students must be explicitly taught how to interact effectively with one another, and for language learners this includes instruction in particular grammatical structures and vocabulary, as well as turn-taking norms, strategies for persuasion, and pragmatic norms for politeness. Two recent meta-analyses of the effectiveness of interaction at promoting L2 learning outcomes offer additional warrant for using peer-mediated learning methods with ELLs; and in addition to providing overall estimates of the effectiveness of peer-mediated L2 learning, they provide considerable insight into important factors that mediate effectiveness. The first of the two meta-analyses (Keck et al. 
2006) included 14 experimental studies conducted between 1980 and 2003. The meta-analysis reported a large overall mean effect size for peer-mediated learning of more than a standard deviation (d=1.12), as compared to a more moderate overall effect size (d=.66) for the comparison/control groups. Participant characteristics like first language and level of L2 proficiency were not important variables in the effectiveness of the interventions, and the type of measure used (i.e., institutional grade level, researcher-created measure, or standardized assessment) did not affect the magnitude of the reported effect size. Moreover, the authors found that task-type (i.e., jigsaw, information gap, and narrative) was not an important moderator, and lexical outcomes (d=.90) and grammatical outcomes (d=.94) were also of similar magnitudes. However, the extent to which the task required the use of target forms (e.g., past tense verb constructions) was an important predictor of both immediate and delayed post-test performance. Overall, the more that students had to use the target form to correctly accomplish the task, the larger and more durable were the effects. Moreover, the authors report that interventions that encouraged “forced output” of the participants proved more effective (d=1.05) than interventions that merely allowed the possibility of participant output (d=.61), a finding that offers tentative support for the claim that degree of participation among participants may be an important factor in language learning. Mackey and Goo (2007) is intended to provide an update to the Keck et al. (2006) meta-analysis. Mackey and Goo included all 14 of Keck et al.’s studies plus an additional 14 studies, for a total of 28 included studies. Twelve of the additional studies were published after the 2002 cut-off date of the previous meta-analysis, indicating ongoing and increased interest in the field. 
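The effect sizes (d) reported by these meta-analyses are standardized mean differences. As a point of reference, the basic computation can be sketched as follows; the group statistics here are hypothetical, purely for illustration, and the small-sample correction shown yields Hedges' g rather than Cohen's d:

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def hedges_g(d, n_t, n_c):
    """Apply the small-sample bias correction to a Cohen's d value."""
    return d * (1 - 3 / (4 * (n_t + n_c) - 9))

# Hypothetical treatment (peer-mediated) vs. control summary statistics
d = cohens_d(82.0, 10.0, 30, 71.0, 12.0, 30)
g = hedges_g(d, 30, 30)
print(round(d, 2), round(g, 2))  # → 1.0 0.98
```

Because many primary studies in this literature have small samples, syntheses commonly apply the Hedges' g correction before pooling; the uncorrected and corrected values converge as sample sizes grow.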
Overall, the Mackey and Goo meta-analysis reports a large effect size for peer-mediated learning (d=.99) compared to a much smaller effect size for the comparison groups (d=.38). Additionally, the authors report that peer-mediated learning remains effective beyond post-test; like Keck et al., these authors report that peer-mediated learning is even more effective at the first delayed post-test (d=1.02). Despite considerable variability in participant language background, language ability, and instructional setting (i.e., SL, immersion, and FL), no significant differences in overall effectiveness are reported*9. Similarly, no differences are reported for length of treatment or other study design characteristics (e.g., experimental versus quasi-experimental). However, studies conducted in the laboratory (d=.96) report larger effects on average than those conducted in classroom settings (d=.57). Also, the type of dependent measure proves to be an important moderator of the variability in effectiveness; prompted response (d=.24) is the least effective, while open-ended prompted production (d=.68) and closed-ended prompted production (d=1.08) tasks are much more effective overall, adding some support to the claim that cognitively complex tasks are the most effective. These syntheses provide compelling evidence that peer-mediated methods are effective at promoting a wide variety of language outcomes for second language learners, though many issues raised in the L1 research remain largely unanswered in the L2 literature. For instance, ELLs are a highly heterogeneous population (i.e., language background, prior schooling, SES, race/ethnicity, age of arrival, and length of residence), but there is little research that discusses with which ELLs peer-mediated methods might be most effective, though both Keck et al. (2006) and Mackey and Goo (2007) suggest that a small subset of these are not significant moderators (i.e., language background and language ability). 
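Overall mean effect sizes and moderator contrasts of the kind reported above are typically produced by weighting each study's effect size by the inverse of its sampling variance and then partitioning heterogeneity between subgroups. A minimal fixed-effect sketch, using entirely hypothetical study data standing in for a laboratory-versus-classroom contrast (none of these numbers come from the reviewed meta-analyses):

```python
def var_d(d, n_t, n_c):
    """Approximate sampling variance of a standardized mean difference."""
    return (n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c))

def pool(effects):
    """Inverse-variance weighted mean effect size and its Q heterogeneity statistic."""
    weights = [1 / var_d(d, nt, nc) for d, nt, nc in effects]
    mean = sum(w * d for w, (d, _, _) in zip(weights, effects)) / sum(weights)
    q = sum(w * (d - mean) ** 2 for w, (d, _, _) in zip(weights, effects))
    return mean, q

# Hypothetical (d, n_treatment, n_control) triples for two settings
lab = [(1.10, 20, 20), (0.85, 25, 25)]
classroom = [(0.55, 40, 40), (0.60, 35, 35)]

d_lab, q_lab = pool(lab)
d_cls, q_cls = pool(classroom)
d_all, q_all = pool(lab + classroom)

# Q_between = total heterogeneity minus heterogeneity within each subgroup
q_between = q_all - (q_lab + q_cls)
print(round(d_lab, 2), round(d_cls, 2), round(q_between, 2))
```

In a full analysis, Q_between would be referred to a chi-square distribution with (number of subgroups − 1) degrees of freedom to test the moderator, and a random-effects model would add an estimate of between-study variance to each weight.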
Nonetheless, the studies by Oxford (1997) and Than et al. (2008) raise the question of whether cultural norms might mediate the effectiveness of these interventions for linguistically and culturally diverse students. At best, individual studies have attempted to account for these variables by controlling for them during assignment and/or measuring and controlling for differences following assignment, though few studies did either. Type of task matters in both the theoretical and empirical L1 and L2 literatures reviewed so far, but neither the qualitative nor the quantitative literatures offer much guidance about which kinds of tasks are best for which types of language or academic outcomes for ELLs. Importantly, Keck et al. (2006) indicate that the more the target structures measured at post-test were required for participation in the activity, the greater the gains; nonetheless, this commonsense connection between the assessment and the intervention is a well-recognized phenomenon, and performance on distant, broad-band measures remains notoriously difficult to improve (e.g., Bloom et al. 2008, Slavin and Madden 2011). Similarly, the moderating effect of contextual variables (e.g., foreign language vs. second language, segregated vs. integrated, program model) is rarely measured directly, though again, both of the language-oriented meta-analyses suggest that a small subset is unimportant (i.e., language setting and program model). Issues of equity and power relations between students appear important in the qualitative literature but are not discussed in the quantitative literature. Peer-mediated methods have consistently proven effective at promoting academic, social, and language outcomes with a wide variety of first- and second-language students in a wide variety of contexts, lending support to Slavin’s (1996) claim that cooperative learning is one of the greatest successes in the academic evaluation literature. 
When compared to individualistic or competitive models, cooperative and other peer-mediated methods typically produce much larger gains. Nonetheless, researchers disagree about the influence of a number of key variables, which are summarized in Table 1 below. Notably, there are a number of similarities between the L1 and L2 literatures, though the research is not completely congruent between these two fields. As discussed in more detail below, L2 researchers do not always measure variables important in the L1 literature, and L2 researchers are often focused on aspects of language acquisition generally not researched in the L1 literature.

**Table 1.** Summary of key variables from literature review.

| Variable | L1 Research | L2 Research |
|---|---|---|
| Peer-mediated Method Matters | Cohen 1994, Johnson et al. 1981, Johnson et al. 2000, Slavin 1996, Slavin and Cooper 1999 | Oxford 1997 |
| Peer-mediated Method does not Matter | Kluge 1999, Rohrbeck et al. 2003, Slavin and Cooper 1999 | Genesee et al. 2005, Gersten and Baker 2000, Keck et al. 2006, Swain et al. 2002 |
| High-structure is Best | Fantuzzo et al. 1989, Johnson et al. 2000, Rohrbeck et al. 2003 | Calderón et al. 1998, Sáenz et al. 2005 |
| Low-structure is Best | Cohen 1994, Johnson et al. 2000, Rohrbeck et al. 2003 | |
| Interdependence is Needed | Cohen 1994, Slavin 1996, Johnson et al. 1981, Johnson et al. 2000, Rohrbeck et al. 2003, Than et al. 2008 | Oxford 1997, Swain et al. 2002 |
| Interdependence is not Needed | | Swain et al. 2002 |
| Content Area Matters | | |
| Content Area does not Matter | Johnson et al. 1981, Rohrbeck et al. 2003 | |
| Age of Students is Important | Rohrbeck et al. 2003, Roseth et al. 2008 | |
| Age of Students is not Important | | |
| Ethnicity of Students Matters | Cohen 1994, Rohrbeck et al. 2003, Slavin and Cooper 1999, Than et al. 2008 | |
| Ethnicity of Students does not Matter | | |
| Language Proficiency (i.e., L1 or L2) of Students Matters | | Genesee et al. 2005, Gersten and Baker 2000, Swain et al. 2002 |
| Language Proficiency (i.e., L1 or L2) of Students does not Matter | | Mackey and Goo 2007 |
| Culturally-relevant Instruction Matters | Than et al. 2008 | Calderón et al. 1998, Oxford 1997 |
| Culturally-relevant Instruction does not Matter | | |
| SES of Students Matters | Rohrbeck et al. 2003 | |
| SES of Students does not Matter | | |
| Size of Group Matters | Johnson et al. 1981 | |
| Size of Group does not Matter | | |
| Equality of Power among Students Matters | Rohrbeck et al. 2003 | Oxford 1997, Swain et al. 2002 |
| Equality of Power among Students does not Matter | | |
| Duration of Intervention Matters | Johnson et al. 1981 | |
| Duration of Intervention does not Matter | | Mackey and Goo 2007 |
| Setting (i.e., segregated, cooperative, ESL or EFL, lab or classroom, urban or rural) Matters | Rohrbeck et al. 2003, Slavin and Cooper 1999 | Mackey and Goo 2007 |
| Setting does not Matter | | Mackey and Goo 2007 |
| Journal Quality Matters | Johnson et al. 1981, Roseth et al. 2008 | |
| Journal Quality does not Matter | | |
| Sample Size Matters | Johnson et al. 1981 | |
| Sample Size does not Matter | | |

First, researchers disagree about the importance of the particular method, whether cooperative, collaborative, peer tutoring, or some set of specific approaches (e.g., Jigsaw, Learning Together, STAD, TGT). The clearest distinction appears to be between L1 researchers, who generally agree the method matters (though which method is ultimately superior remains debatable), and L2 researchers, who typically do not report differences between specific methods. To be fair, this largely reflects the nascent state of L2 research, and many of the studies listed in Table 1 did not make clear distinctions amongst methods and simply grouped them all together as peer-peer or cooperative approaches. On the other hand, Swain et al. (2002) explicitly grouped multiple methods together in their synthesis, providing a theoretical rationale that it is the presence of peer-peer dialog that matters most for L2 learners. 
Although L1 research would suggest that specific methods vary considerably in their effectiveness at promoting academic and social outcomes, the question of which peer-mediated method is most effective for ELLs remains largely unaddressed. While considerable debate exists within and across L1 and L2 literatures about which peer-mediated method is most effective, there is strong consensus that more structured approaches produce bigger gains than less-structured approaches. Despite this strong consensus, theoretical (Cohen 1994) and empirical (Johnson et al. 2000, Rohrbeck et al. 2003) grounds exist to challenge this claim. Similarly, there is overwhelming support for the conclusion that establishing interdependence promotes learning gains, though Swain et al. (2002) report ambivalent findings on this variable. Language proficiency, the cultural-relevance of the instruction, and the equality of power relations among students appear to be important variables for L2 learners, though the research base for these claims is not as substantive as for other variables. Similarly, age, ethnicity, and SES appear to be moderators for effectiveness, though L2 research can neither support nor challenge this claim for ELLs. Finally, study quality variables (i.e., duration of intervention, journal quality, sample size) also suffer from ambivalent findings or too few studies in the literature base, so claims for these variables are also tentative, and additional research could potentially bolster the warrant for making claims about the importance of these variables as moderators for the effectiveness of peer-mediated approaches. Notably, several variables of equity mentioned in the Statement of the Problem in Chapter 1 appear to be missing, or at least largely ignored, in the above list, including: adequate facilities, context of reception, preparation of teachers to work with ELLs, attitudes and beliefs of teachers towards ELLs, relations of power between teachers and ELLs, and length of residence of ELLs. 
To the extent possible, these variables will also be coded when reviewing studies for inclusion in this meta-analysis. However, the absence of these variables from the extant literature probably supports the assertion that the field of peer-mediated learning studies for ELLs remains largely driven by psychological theory and that sociological perspectives remain underrepresented (e.g., Cohen 1994, Firth and Wagner 1997), and this meta-analysis hopes to bridge the more traditional focus on intervention effectiveness with these variables of power and equity.

# Methods

## Research Questions

Chapter 1 presented the two fundamental research questions driving this meta-analysis; however, as indicated in the literature review in Chapter 2, there are a number of substantive theoretical, instructional, and methodological variables of potential interest. Consequently, formal hypotheses regarding the key variables of interest are presented below.

• Is peer-mediated instruction effective for promoting language, academic, or attitudinal learning for English language learners in K-12 settings?
  a. Hypothesis 1a: Test of HA: Interventions testing the effectiveness of peer-mediated forms of learning against teacher-centered or individualistic control groups report language outcome effect sizes that are significantly larger.
  b. Hypothesis 1b: Test of HA: Interventions testing the effectiveness of peer-mediated forms of learning against teacher-centered or individualistic control groups report academic outcome effect sizes that are significantly larger.
  c. Hypothesis 1c: Test of HO: Interventions testing the effectiveness of peer-mediated forms of learning against teacher-centered or individualistic control groups report attitudinal outcome effect sizes that are not significantly different.

• What variables in instructional design, content area, setting, learners, or research design moderate the effectiveness of peer-mediated learning for English language learners?
  a. Hypothesis 2a: Test of HO: Interventions testing the effectiveness of cooperative, collaborative, and peer tutoring approaches report effect sizes that are not significantly different.
  b. Hypothesis 2b: Test of HO: Interventions testing the effectiveness of peer-mediated approaches in English-as-a-Second Language (ESL) and English-as-a-Foreign Language (EFL) settings report effect sizes that are not significantly different.
  c. Hypothesis 2c: Test of HO: Interventions testing the effectiveness of peer-mediated approaches in elementary, middle school, and high school settings report effect sizes that are not significantly different.
  d. Hypothesis 2d: Test of HO: Interventions testing the effectiveness of peer-mediated approaches in laboratory and classroom settings report effect sizes that are not significantly different.
  e. Hypothesis 2e: Test of HO: Interventions testing the effectiveness of peer-mediated approaches as part of complex interventions and those testing just peer-mediation report effect sizes that are not significantly different.
  f. Hypothesis 2f: Test of HO: Interventions testing the effectiveness of peer-mediated approaches with students from different language backgrounds report effect sizes that are not significantly different.
  g. Hypothesis 2g: Test of HO: Interventions testing the effectiveness of peer-mediated approaches with students from high- and low-SES backgrounds report effect sizes that are not significantly different.
  h. Hypothesis 2h: Test of HA: High-quality studies report effect sizes that are significantly larger than low-quality studies.
  i. Hypothesis 2i: Test of HA: Studies of longer duration report effect sizes that are significantly larger than short-duration studies.

• In what ways do select issues of power and equity impact the effectiveness of peer-mediated methods?
  a. Hypothesis 3a: Test of HA: Studies conducted in settings where ELLs are segregated from their English-speaking peers will report significantly lower effect sizes than studies conducted in settings where ELLs are integrated with non-ELLs.
  b. Hypothesis 3b: Test of HA: Studies conducted in settings that authors describe as having adequate facilities will report significantly higher effect sizes than studies conducted in settings that authors describe as inadequate.
  c. Hypothesis 3c: Test of HA: Studies conducted with ELL-certified teachers will report significantly higher effect sizes than studies in which teachers do not possess specialized certifications to work with ELLs.
  d. Hypothesis 3d: Test of HA: Studies testing interventions described by the authors as at least partially culturally-relevant will report larger effect sizes than studies that do not make culturally-relevant claims.
  e. Hypothesis 3e: Test of HA: Years of teaching experience will be positively correlated with effect sizes.
  f. Hypothesis 3f: Test of HA: Studies reporting interventions that utilize students’ native language during instruction will report larger effect sizes than studies using only students’ second language (i.e., English) for instruction.

## Criteria for Inclusion and Exclusion of Studies

A number of researchers argue that not enough experimental evaluations of intervention effectiveness exist in the ELL literature (e.g., Slavin and Cheung 2005, August and Shanahan 2006). Therefore, this meta-analysis cast a relatively-wide net, and subsequent analyses attempted to identify biases and sources of variance.

### Types of Studies

Both experimental and quasi-experimental studies were included in the review. Studies using non-random assignment must have included pre-test data or statistically controlled for pre-test differences (e.g., ANCOVA). 
Similarly, studies which tested more than one treatment against a control group were included as long as one treatment could readily be identified as the focal treatment. If a study did not include a control group, it was excluded. Although 20 years is a common standard for study inclusion, older studies were included if they met the other criteria because the scarcity of research suggested that older studies might be necessary to provide sufficient power for the detection of effects and for moderator analyses. Finally, for practical purposes, studies must have been published in English, though the research may have occurred in any country with participants of any nationality. In addition, the target language must have been English in order to facilitate direct comparisons to ELLs in US schools; however, participants may have represented any language background, and instruction could have occurred in any language, as well.

### Types of Participants and Interventions

Studies must have tested the effects of peer-learning involving students between the ages of 3 and 18, again in order to facilitate comparisons to US students in K-12 educational settings. For example, in studies of peer tutoring, both students for whom outcomes are measured and students who act as tutors must have been within this age range to preserve the focus on “peer” interactions. Also, participants must have included students identified as English language learners (though methods of identification and definitions of ELL varied), and results must have been reported exclusively for ELLs or disaggregated for ELLs. For example, the inclusion of studies conducted internationally necessitated the inclusion of students learning English as a Foreign Language (EFL) alongside students in the United States learning English as a Second Language (ESL). The difference in settings (e.g., immersion in an English-dominant environment for ESL students) makes the process of language acquisition very different, but for purposes of this synthesis, both of these types of learners were subsumed under the ELL category. Interventions may have utilized a number of instructional activities, but peer-peer interaction must have been a focal aspect of the intervention. Furthermore, comparison groups must not have received instruction in which peer-mediated learning was widely used, and studies that provided only a cooperative intervention were coded separately from those that involved more complex interventions in which peer-mediated methods were just one component (e.g., Success for All). Studies for which peer-peer interaction could not be identified as a focal feature of the intervention were excluded, as were studies for which comparison groups used large amounts of peer assistance.

### Types of Outcomes and Instruments

Cooperative learning has been used to improve almost every conceivable academic achievement outcome, but it has also been widely used to improve a number of behavioral and social outcomes. Therefore, nearly any outcome was coded, though some outcomes were not assessed frequently enough to allow inferential statistical analyses. To facilitate coding and analysis, outcomes were divided into five conceptually-distinct categories; while variety existed within categories (e.g., math and social studies within academic outcomes), it was presumed that enough similarity existed to facilitate comparative analyses. These categories are: oral language, written language, other academic, attitudinal, and social. Oral language outcomes were those that focused on speaking and listening, while written language outcomes were those that included primarily reading and writing. Other academic outcomes included content-area outcomes from subjects like science, social studies, and mathematics. 
Attitudinal outcomes were psychological in nature and consisted almost entirely of measures of motivation, and social outcomes were behavioral measures of things like interactions with native speakers. Some measures were broad-band, complex instruments that included aspects of several of these categories. For instance, the Revised Woodcock-Johnson Test of Achievement is a widely-used instrument that explicitly measures oral language, reading fluency and comprehension, and academic achievement. When specific subtest scores were reported, they were coded separately into one of the above categories whenever possible. When only composite scores were reported, descriptions of the measure sometimes clearly favored one category over another; in other cases, however, the measure was simply too inclusive to reliably assign to a single category. For those measures, in order to maintain inter-rater reliability and to provide a systematic coding approach that could be replicated later, written language was chosen as the default outcome category for complex outcomes that measured more than one category. Similarly, a number of instruments were used to assess effectiveness, including norm-referenced tests, researcher- and teacher-created measures, and psychological and sociological instruments. These characteristics were coded to enable both inferential moderator and descriptive analyses, and they followed the same construct-driven division of results just discussed.

## Search Strategy for Identifying Relevant Studies

Multiple databases were searched using consistent combinations of keywords, though the specific format varied according to individual database preferences (e.g. AND used between terms for the PsychINFO search). Several databases were combined into simultaneous searches.
For instance, the ProQuest search included the following individually-selected databases: Dissertations at Vanderbilt University and Dissertation Abstracts International, Ethnic News Watch, and several subsets of the Research Library collection (core, education, humanities, international, multicultural, psychology, and social sciences). Similarly, the PsychINFO search included the following manually-selected databases: ERIC, IBSS, CSA Linguistics, Language, and Behavior, PsychArticles, PsychINFO, and Sociological Abstracts. Furthermore, potentially-relevant studies were identified through cross-citation, using the bibliographies of previous syntheses and of identified studies. All studies were identified through the following process: titles and abstracts were first skimmed to identify potentially-relevant studies; if a study appeared to be a possible candidate, the full study was retrieved to the extent possible. If the study was not immediately available, Interlibrary Loan requests and librarian searches were pursued. If this did not succeed, attempts were made to contact the author of the study. Studies not retrieved at that point were deemed unavailable. “Near-miss” studies were excluded at this point if closer examination revealed that they violated inclusion criteria or if an effect size could not be extracted from the information provided. As above, attempts were made to retrieve necessary information from the authors, though in many cases data were no longer available or the authors could not be reached. The “near-miss” studies are included in the references section, but no further analyses were conducted with them. The researcher functioned as the primary coder and coded all of the studies. Reliability of inclusion and exclusion criteria, as well as of the coding of key substantive and methodological variables, was assessed by comparing the primary coding with the coding of two independent coders.
The additional coders were doctoral students in the ExpERT program at Vanderbilt University with training in experimental and statistical methods. After some discussion of the inclusion and exclusion criteria and practice with an example, the other coders made inclusion/exclusion decisions for a sub-sample of 30 abstracts.

## Description of Methods Used in Primary Studies

As already discussed, previous syntheses suggest that high-quality experimental studies are scarce in this field. Consequently, it seems appropriate to cast a wide net, a long-standing approach to social science syntheses (e.g. Smith et al. 1980). As a result, many small-sample studies utilizing quasi-experimental designs, with and without cluster randomization, were included, and few large-sample studies with rigorous randomization were found. Furthermore, the broad conceptualization of peer-mediated learning resulted in a variety of interventions and approaches to data collection. The quality of included studies has a tremendous impact on the final synthesis, and so an attempt was made to assess the extent to which study quality is related to reported effects. Thus, studies were coded to reflect the extent to which they employed randomization and the level at which randomization occurred. Similarly, studies were coded to assess the degree to which baseline equivalence between the control and treatment groups was measured in the original studies, and the approach used to adjust for pre-test differences was also coded. For the sake of moderator analysis, “study quality” was assessed on a three-level scale determined by this information, such that: a) high-quality studies assessed pre-test equivalence AND used a covariate to control for pre-test differences, b) medium-quality studies assessed pre-test equivalence OR used a covariate to control for pre-test differences, and c) low-quality studies did neither.
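This three-level quality scale is a simple decision rule. The sketch below is illustrative only; the function name and boolean inputs are mine, not part of the coding manual:

```python
def study_quality(assessed_pretest_equivalence: bool, used_covariate: bool) -> str:
    """Classify study quality on the three-level scale described above:
    high   = assessed pre-test equivalence AND used a covariate
    medium = did one of the two
    low    = did neither
    """
    if assessed_pretest_equivalence and used_covariate:
        return "high"
    if assessed_pretest_equivalence or used_covariate:
        return "medium"
    return "low"
```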
## Criteria for Determination of Independent Findings

As is often the case in meta-analysis, some studies reported data on several outcomes, and occasionally multiple measures of the same construct were provided by individual studies. For instance, a study may have measured outcomes of reading comprehension, reading fluency, and attitudes toward reading. Furthermore, both researcher-specific and state-mandated measures of reading comprehension were sometimes reported. For all such cases of multiple measures, the following general approach was used. First, every measure was coded in order to provide simple descriptive summaries of the kinds and frequencies of outcomes reported in the literature. Then, as part of the coding, outcomes were categorized into one of the five primary constructs outlined above. Finally, for situations in which multiple outcomes and/or measures were provided for any given construct in a single study (e.g. two different academic outcomes), a focal measurement was identified. In general, the most reliable instrument was coded as the focal instrument, though in cases where reliability information was not provided, the most widely-used measure was chosen. If neither of these criteria could be employed, the first measure discussed was chosen as a default. Although many meta-analyses average effects across measures, individual measures were utilized in this review because the measures varied considerably within constructs (e.g. math, reading and science within academic) and because coding of individual measures preserves the possibility of additional analyses at a later time. In any case, only one measurement for each of the five main constructs was identified as a focal instrument, allowing analyses within constructs that did not violate assumptions of independence.
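The tie-breaking rule for choosing a focal measure (most reliable, else most widely used, else first discussed) can be sketched as a small function. The dictionary keys and function name below are hypothetical, chosen only to illustrate the rule:

```python
def pick_focal_measure(measures):
    """Pick one focal measure per construct.

    Each measure is a dict with an 'order' key (position in the report)
    and optional 'reliability' and 'usage_count' keys.
    Preference: highest reliability; else most widely used; else first discussed.
    """
    with_reliability = [m for m in measures if m.get("reliability") is not None]
    if with_reliability:
        return max(with_reliability, key=lambda m: m["reliability"])
    with_usage = [m for m in measures if m.get("usage_count") is not None]
    if with_usage:
        return max(with_usage, key=lambda m: m["usage_count"])
    return min(measures, key=lambda m: m["order"])  # default: first discussed
```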
## Details of Study Coding Categories

A number of study and outcome characteristics were coded in order to enable analyses of the primary research questions as well as a number of potentially-relevant moderator analyses. A brief summary of the variables coded is provided here. Essentially, the variables included: study descriptors like design and quality, participant descriptors like age and language background, treatment descriptors like duration and frequency, and a variety of outcome descriptors. Key outcome descriptors included primary data like means and standard deviations as well as secondary calculations like effect sizes. While effect size statistics are discussed in more detail elsewhere, as much relevant information as necessary for effect size calculations was identified and coded, in keeping with guidelines provided by Lipsey and Wilson (2001). Moderating variables are those that may affect overall effect size estimates, leading to different effect size estimates for different values of the moderator. A number of study, treatment, and participant variables were analyzed as moderators in CMA analysis and as correlates in SPSS. Separate analyses were conducted for each of these variables, and the results for these moderator analyses are presented separately for each moderator of interest. A potential limitation of multiple moderator analyses is that they do not account for covariation amongst moderators; meta-regression is an alternative analysis that allows examination of the independent contributions of each variable to variance in the effect sizes. To the extent possible, meta-regression analyses of key moderators were conducted to determine the unique contribution made to the variance of outcomes by methodological and substantive moderators.
At minimum, single-variable regressions of potentially influential variables were run to test their viability as moderator variables, even if multivariate regression was untenable because of small sample size. Exploratory analyses of substantively important variables also included correlational analysis and descriptive statistics. Finally, coding reliability was assessed through measurement of inter-rater reliability. Following the exclusion/inclusion reliability assessment, the researcher met with the additional coders to discuss and practice using the coding manual on three examples. Following this initial training, the coders coded five studies independently. The researcher then met again with the coders to discuss the initial coding and to practice together again on two additional examples. Following the second training session, the two additional coders coded 10 more studies independently. Thus, the coders independently coded 15 studies each, with a total subsample of 25 studies included for the assessment of reliability. The studies were drawn evenly from published and unpublished studies. Cohen’s Kappa was calculated for categorical variables, while Pearson’s r was calculated for continuous variables. For variables with reliability coefficients low enough to be close to chance agreement, variable constructs were reexamined and disagreements were examined case by case to reach consensus. The effect size statistic (ES) calculated was the Standardized Mean Difference (ESSM), which is appropriate for group contrasts made across a variety of dependent measures (Lipsey and Wilson 2001). The most frequently-coded variables were continuous variables (e.g. standardized test results) with results contrasting mean treatment and control group performance on focal outcomes.
The following are the formulas for calculating the ESSM:

$$ES_{SM} = \dfrac{\overline{X}_{G1} - \overline{X}_{G2}}{s_{pooled}}$$

$$s_{pooled} = \sqrt{\dfrac{s_{1}^{2} (n_{1}-1) + s_{2}^{2} (n_{2}-1)}{n_{1} + n_{2} - 2}}$$

Thus, the effect size is calculated by dividing the difference between the mean for the treatment (XG1) and the mean for the control (XG2) by the pooled standard deviation (spooled). We see in the second formula that the pooled standard deviation (spooled) is equal to the square root of the sum of the weighted variance for the treatment group (s12 * [n1-1]) and the weighted variance for the control group (s22 * [n2-1]) divided by the pooled degrees of freedom (n1 + n2 - 2). In these formulas, s2 is the observed variance and n is the sample size. The ESSM is known to be upwardly biased for small samples. Thus, the Hedges G transformation is traditionally used to correct for this bias:

$$G = D \left(1-\dfrac{3}{4(n_{1} +n_{2}) - 9}\right)$$

where Cohen’s D = ESSM; that is, the biased effect size estimate is weighted by a correction for small-sample bias. This adjusted effect size, ES'SM, has its own SE and inverse variance weight formulas, as illustrated in Lipsey and Wilson (2001). The weight term is included to compensate for reliability differences resulting from different sample sizes. That is, small sample sizes generate less precise estimates, whereas larger sample sizes generate more reliable estimates, and this weight term adjusts the impact of the estimates based on their sample size-driven reliability. The following formulas display the calculations for computing the standard error and weight for use with the standardized mean difference effect size statistic:

$$se_{SM} = \sqrt{\dfrac{n_{1} + n_{2}}{n_{1}n_{2}} + \dfrac{(ES'_{SM})^{2}}{2(n_{1} + n_{2})}}$$

$$w = \dfrac{1}{se_{SM}^{2}}$$

However, the illustrated weight formula is appropriate only for fixed effects models, which assume invariant effect sizes across studies.
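The fixed-effect calculations just described can be sketched in a few lines. This is a minimal illustration of the Lipsey and Wilson formulas, not the CMA implementation; the function name is mine:

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with Hedges' small-sample correction,
    plus its standard error and fixed-effect inverse-variance weight."""
    s_pooled = math.sqrt((sd_t**2 * (n_t - 1) + sd_c**2 * (n_c - 1))
                         / (n_t + n_c - 2))
    d = (mean_t - mean_c) / s_pooled            # biased ES_SM
    g = d * (1 - 3 / (4 * (n_t + n_c) - 9))     # Hedges correction
    se = math.sqrt((n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c)))
    w = 1 / se**2                               # fixed-effect weight
    return g, se, w
```

For example, a 5-point mean difference with pooled SD 10 and 50 students per group yields a corrected effect of about .50, with the weight favoring this moderately-sized sample.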
These assumptions are untenable given the broad constructs included in the proposed meta-analysis; consequently, a random effects model was utilized in this meta-analysis, and the formulas for this model include another variance component in the denominator of the weight formula:

$$w_{i} = \dfrac{1}{se_{i}^{2} + \hat{\nu}_{\theta}}$$

In addition to the sampling error represented by the term sei2, the random effects weight includes a term for heterogeneous effect sizes, vθ. This additional term is a constant added to the variance of every study, and it can be computed as a method of moments estimate using the Q statistic, which is a measure of the heterogeneity of effect sizes within the sample. The formula for vθ is:

$$\hat{\nu}_{\theta} = \dfrac{Q - (k - 1)}{\sum w_{i} - \dfrac{\sum w_{i}^{2}}{\sum w_{i}}}$$

In this formula, Q is the heterogeneity statistic provided in standard CMA output, k is the number of effect sizes included in the analysis, and w is the fixed-effects weight calculated as before. As indicated, heterogeneity was assessed using the Q statistic, which describes the degree to which effect sizes vary beyond the degree of expected sampling error. I2 is another useful measure of heterogeneity, indicating the proportion of observed variance that reflects true heterogeneity between studies (Higgins et al. 2003). Both statistics were used to determine the degree of heterogeneity in the sample of included studies, which was expected to be considerable given the relative breadth of acceptable studies. Additionally, outliers can be particularly problematic: extreme observations affect both effect size estimates, by distorting the means of the distributions, and calculations of variance. Furthermore, as meta-analysis is primarily a survey methodology interested in synthesizing studies and providing descriptions of typical effects, atypical results are not overly informative. Consequently, Tukey’s guidelines were employed to identify outliers (values above the 75th percentile + 3*IQR or below the 25th percentile - 3*IQR).
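The outlier rule just described can be sketched as follows. This is an illustrative implementation, not the procedure actually applied to the data; the choice of quartile method is mine (`statistics.quantiles` offers several):

```python
import statistics

def winsorize_tukey(effect_sizes, k=3.0):
    """Winsorize effect sizes to Tukey fences at k * IQR beyond the
    quartiles (k = 3 matches the extreme-outlier rule used here)."""
    q1, _, q3 = statistics.quantiles(effect_sizes, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [min(max(es, lo), hi) for es in effect_sizes]
```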
Results above and below these values were Winsorized to these cut-off points. Another source of potential error involves designs that utilize cluster randomization, in which intact groups are assigned en masse to conditions; unless corrected, the standard errors upon which the inverse variance weights are based would be incorrect (Hedges 2007). This is the result of cluster effects, in which students nested within classrooms tend to be more similar than students in separate classrooms. This problem can occur if randomization occurs at any level other than the level of the student, and thus McHugh adjustments were made for studies that employed cluster randomization (Lipsey et al. 2012). The effective n, which is usually much smaller than the observed n, was computed, and these adjusted sample sizes were then used to calculate more accurate standard error estimates. However, a number of assumptions were made that merit discussion. Primarily, rho, the intra-class correlation, was estimated at .2 for academic and language outcomes and .15 for all other outcomes. These values are loosely based on the range of intra-class correlations obtained in Hedges (2007), which reported results from a large sample of academic outcomes from cluster-randomized evaluations. Much more is known about academic outcomes in educational evaluation studies than about other outcome types, so a slightly lower rho was used for the other outcomes. While it seems likely that observed values of rho varied across studies, these data were often not reported. Similarly, the number of students per cluster was occasionally not reported; in these cases, the total sample was divided by the number of clusters to compute a mean cluster size.
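One common way to express this kind of correction is through the design effect. The sketch below uses the standard formula n_eff = n / (1 + (m - 1) * rho); this is my assumption about the general form of the adjustment, not a quotation of the specific procedure cited above:

```python
def effective_n(n_total, n_clusters, rho):
    """Adjust a cluster-randomized sample size for the design effect.

    m is the mean cluster size (computed from totals when per-cluster
    counts were not reported, as described above); rho is the intra-class
    correlation (.2 for academic/language outcomes, .15 otherwise)."""
    m = n_total / n_clusters
    design_effect = 1 + (m - 1) * rho
    return n_total / design_effect
```

With 200 students in 10 classrooms and rho = .2, the effective n drops to roughly 42, illustrating how sharply clustering can shrink the usable sample.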
Due in part to limitations in the reporting of data as well as to the relative newness of cluster effect corrections in meta-analysis, the adjusted estimates are somewhat crude and imprecise; consequently, the results of these adjustments are likely overly conservative and may be interpreted as a lower bound of sorts. Similarly, in several studies, pre-test data were available, but the original researchers did not use them in their post-test analyses; that is, pre-test differences were left unadjusted in final analyses. In these situations, post hoc adjustments were made by this researcher to control for pre-test differences. Simply, pre-test means were subtracted from post-test means for both the treatment and the control groups, and these differences were used as the mean gain scores from which effect sizes were computed. Finally, a number of alternate computations were occasionally necessary. For instance, some studies did not provide ES estimates, and a number of formulations exist for converting other commonly reported data into ESSM. These other data include means and standard deviations, t-tests and degrees of freedom, and p values and sample sizes, and effect sizes were calculated from these alternative data as necessary.

## Statistical Procedures and Conventions

General statistical analyses were computed using CMA and SPSS software; in particular, overall effect size analyses, some publication bias analyses, and moderator analyses were computed with CMA, and diagnostic and descriptive analyses were conducted with SPSS.

# Results

Chapter Four presents the data obtained from descriptive, main effects, and moderator analyses, and Chapter Five will consider the extent to which these data answer the formal research questions detailed in Chapter Three. First, descriptive information is provided for the included sample of studies. Then, descriptive statistics, main effects analyses, and moderator analyses are provided for each of the outcome categories.
Because each outcome category contains independent samples of effect sizes and because outcomes are assumed to be more conceptually similar within categories than between them, Chapter Four is organized primarily by outcome type to maintain statistical and conceptual clarity.

## Included Sample

Initial keyword searches returned 17,613 results, of which 148 were unique and potentially relevant. Additionally, extant meta-analyses and syntheses (e.g., Genesee et al. 2005, Keck et al. 2006, Mackey and Goo 2007) were mined for potentially relevant studies, as were studies included in the author’s prior research. Similarly, key authors were contacted in a gray literature search to identify additional potentially-relevant studies. From these combined sources, 37 study reports were ultimately included. Initial agreement among coders for inclusion/exclusion decisions was 92.4%, and differences were resolved to achieve consensus in the ultimate coding. Included studies and near-miss studies are listed in Suppl. material 2. Table 2 below provides a snapshot of the included sample and a few key variables.

Included sample of studies.
| Lead Author | Year | Publication Type | Country | Construct | Design | Grade Level |
| --- | --- | --- | --- | --- | --- | --- |
| Alhaidari | 2006 | Dissertation | Saudi Arabia | Cooperative | Quasi-Experiment | Elementary |
| Alharbi | 2008 | Dissertation | Saudi Arabia | Cooperative | Experiment | High School |
| Almaguer | 2005 | Journal | USA | Peer Tutoring | Quasi-Experiment | Elementary |
| August | 1987 | Journal | USA | Peer Tutoring | Quasi-Experiment | Elementary |
| Banse | 2000 | Dissertation | Burkina Faso | Collaborative | Quasi-Experiment | High School |
| Bejarano | 1987 | Journal | Israel | Cooperative | Quasi-Experiment | Middle School |
| Brandt | 1995 | Dissertation | USA | Cooperative | Quasi-Experiment | High School |
| Bustos | 2004 | Dissertation | USA | Cooperative | Experiment | Elementary |
| Calderon | 1997 | Technical Report | USA | Cooperative | Quasi-Experiment | Elementary |
| Calhoun | 2007 | Journal | USA | Cooperative | Quasi-Experiment | Elementary |
| Chen | 2011 | Journal | USA | Cooperative | Quasi-Experiment | High School |
| Cross | 1995 | Technical Report | USA | Collaborative | Quasi-Experiment | High School |
| Dockrell | 2010 | Journal | England | Collaborative | Quasi-Experiment | Pre-K |
| Ghaith | 2003 | Journal | Lebanon | Cooperative | Quasi-Experiment | High School |
| Ghaith | 1998 | Journal | Lebanon | Cooperative | Quasi-Experiment | Middle School |
| Hitchcock | 2011 | Technical Report | USA | Cooperative | Quasi-Experiment | Elementary |
| Hsu | 2006 | Dissertation | Taiwan | Collaborative | Quasi-Experiment | High School |
| Johnson | 1983 | Journal | USA | Peer Tutoring | Experiment | Elementary |
| Jung | 1999 | Dissertation | South Korea | Peer Tutoring | Quasi-Experiment | Elementary |
| Khan | 2011 | Journal | Pakistan | Cooperative | Experiment | High School |
| Kwon | 2006 | Dissertation | South Korea | Collaborative | Quasi-Experiment | High School |
| Lin | 2011 | Journal | Taiwan | Cooperative | Quasi-Experiment | Middle School |
| Liu | 2010 | Journal | Taiwan | Collaborative | Quasi-Experiment | Middle School |
| Lopez | 2010 | Journal | USA | Collaborative | Quasi-Experiment | Elementary |
| Mack | 1981 | Dissertation | USA | Collaborative | Quasi-Experiment | Elementary |
| Martinez | 1990 | Dissertation | USA | Cooperative | Quasi-Experiment | Elementary |
| Prater | 1993 | Journal | USA | Cooperative | Experiment | Elementary |
| Sachs | 2003 | Journal | Hong Kong | Cooperative | Experiment | High School |
| Saenz | 2002 | Dissertation | USA | Peer Tutoring | Quasi-Experiment | Elementary |
| Satar | 2008 | Journal | Turkey | Collaborative | Experiment | High School |
| Slavin | 1998 | Technical Report | USA | Cooperative | Quasi-Experiment | Elementary |
| Suh | 2010 | Journal | South Korea | Collaborative | Quasi-Experiment | Elementary |
| Thurston | 2009 | Journal | Catalonia | Peer Tutoring | Quasi-Experiment | Elementary |
| Tong | 2008 | Journal | USA | Collaborative | Quasi-Experiment | Elementary |
| Uludag | 2010 | Dissertation | Jordan | Collaborative | Quasi-Experiment | Middle/High School |
| Vaughn | 2009 | Journal | USA | Peer Tutoring | Quasi-Experiment | Middle School |

The 37 included studies reported relevant data on 44 independent samples (i.e., several reports described multiple experiments or included independent samples) and contained a total of 132 outcomes. As indicated in the full coding manual (in the Excel spreadsheet that accompanies this dissertation, Suppl. material 1), numerous methodological, study-level, sample-level, and outcome variables were coded for the included sample. Inter-rater reliability varied considerably across variables; mean Cohen’s Kappa for categorical variables was Ƙ = .787, with a range of Ƙ = .318 to Ƙ = 1.0. Pearson’s r was calculated for continuous variables; mean agreement amongst raters was r = .927, with inter-rater reliability ranging from r = .85 to r = 1.0. Problematic variables were discussed and revised, and ultimately all differences were resolved to consensus. Key variables are summarized in the tables below; Table 3 details several methodologically and theoretically important variables, and Table 4 summarizes key outcome data for the included sample.
Summary of Key Variables in Included Sample

| Variable | n | Distribution |
| --- | --- | --- |
| Year | 43 | Pre1980-1989 = 4; 1990-1999 = 10; 2000-2012 = 29 |
| Publication Type | 43 | Dissertation = 15; Journal = 22; Technical Report = 6 |
| Country | 43 | USA = 22; Other = 21 |
| Setting | 43 | ESL = 23; EFL = 20 |
| Design | 43 | Experimental = 8; Quasi-experimental = 35 |
| Quality | 43 | High = 26; Medium = 13; Low = 4 |
| Dosage (Total Contacts) | 43 | 0-30 = 17; 31-90 = 13; 91+ = 13 |
| Construct | 43 | Cooperative = 17; Collaborative = 16; Peer Tutoring = 10 |
| Component | 43 | Yes = 19; No = 24 |
| Adequate Facilities | 23 | Yes = 2; No = 3; Unknown = 18 |
| Segregated | 23 | Yes = 9; No = 14 |
| Culturally Relevant | 23 | Yes = 5; No = 18 |
| Language of Instruction | 43 | L1 only = 2; Bilingual = 14; L2 only = 14; Unknown = 13 |
| In School | 43 | Yes = 43; No = 0 |
| Teacher Certification | 43 | ELL Certified = 12; Not ELL Certified = 2; Unknown = 29 |
| Teacher Experience | 43 | 0-5 years = 3; 6-10 years = 4; 11+ years = 4; Unknown = 32 |
| Teacher Ethnicity | 43 | Same as Students’ = 7; Different than Students’ = 1; Unknown = 35 |
| Grade Level | 43 | Elementary = 22; Middle = 8; High = 13 |
| Student Ethnicity | 43 | Spanish = 20; Asian = 8; Other = 15 |
| Student SES | 43 | Low = 21; High = 3; Mixed = 1; Unknown = 18 |
| Student Length of Residence | 23 | 0-2 years = 1; 2+ = 0; Unknown = 22 |

Key outcome variables (Total Outcomes = 62).

| Construct | Number of Independent Outcomes | Participants in Treatment Groups | Participants in Control Groups |
| --- | --- | --- | --- |
| Oral Language | 14 | 843 | 787 |
| Written Language | 30 | 919 | 863 |
| Other Academic | 6 | 220 | 451 |
| Attitudinal | 10 | 397 | 394 |
| Social | 0 | 0 | 0 |

As indicated in Table 3, peer-mediated learning for ELLs is currently an active field of research; in fact, more studies were conducted in the most recent decade than in either of the previous decades. Moreover, the included sample is evenly composed of published (n=22) and unpublished (n=21) studies, and the sample contains nearly the same number of international studies (n=21) as studies conducted in the United States (n=22).
Similarly, all three peer-mediated constructs are well-represented in the included sample, though there are fewer peer tutoring studies than cooperative or collaborative studies. However, some variables are less balanced; for instance, there are far more high-quality studies (as operationally defined) than medium- or low-quality studies, and every study was conducted in a school setting, meaning that no lab studies are included in the sample. In many ways, it is what is missing from the included sample that is most striking. Very little information about the teachers was reported, and very few studies reported information about students’ SES or length of residence. Similarly, contextual variables like the adequacy of facilities or the context of reception were typically not reported. Not only does the absence of this information limit the potential to conduct moderator analyses for these variables, it also potentially limits the external validity of this meta-analysis. That is, findings are relevant only for a constrained set of variables, and the general effectiveness of peer-mediation may vary across a number of unmeasured, or unreported, variables. Table 4 indicates that language outcomes were far more prevalent than academic or attitudinal outcomes, and social outcomes are completely absent from the included sample. In fact, too few studies reported academic outcomes to reliably conduct moderator analyses, and the samples for attitudinal and oral language outcomes are only marginally large enough. Thus, the presented moderator analyses for all three of these outcome types should be considered exploratory; however, the sample of written language outcomes is large enough to conduct moderator analyses with some degree of confidence, and tentative meta-regression results should be sufficiently powered to enable insight into which moderators are most influential.
## Oral Language Outcomes

Summary of Included Studies and Main Effects

A random effects model of the un-corrected and un-Winsorized data provided a mean effect size estimate for the thirteen oral language outcomes of .587 (SE = .141, p < .001); after adjustments for outliers, pre-test differences, and cluster randomization, both the mean effect size estimate and its variance decreased slightly (.578, SE = .136, p < .001), suggesting that the larger-than-average outliers and the effects of cluster randomization had very little impact on the original estimates. The adjusted distribution is illustrated by the forest plot in Fig. 2. It is notable that only one study (i.e., August 1987) has a mean below zero. This distribution also highlights one of the real strengths of meta-analysis: more than half of the studies have confidence intervals that cross the zero threshold, meaning that individually they are statistically indistinguishable from an effect size of zero. Taken together, however, they provide enough statistical power to identify a strong, positive effect with a great deal of confidence. Throughout the paper, random effects models are the default, primarily because the assumptions of the fixed model are generally untenable. Empirically, homogeneity analysis of the fixed model illustrates the considerable heterogeneity that exists within the observed sample, offering some empirical justification for the use of a random effects model. The Q statistic (37.213, df = 12, p < .001) indicates that the observed effect sizes vary more than would be expected by sampling error alone, and the I2 statistic (67.753) indicates that approximately 68% of the observed variance in effect sizes exists between studies. Together, this suggests that moderator analyses might provide insight into what factors influence the effectiveness of peer-mediated learning for ELLs.
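The heterogeneity statistics reported here (Q, I2, and the method-of-moments variance component) can be illustrated with a small self-contained sketch. This is not CMA output; the function is mine and assumes effect sizes and standard errors as inputs:

```python
def random_effects_summary(effect_sizes, ses):
    """Fixed-effect Q and I2, the method-of-moments estimate of the
    between-study variance (v-theta), and the random-effects mean."""
    w = [1 / se**2 for se in ses]                       # fixed-effect weights
    sw = sum(w)
    mean_fixed = sum(wi * es for wi, es in zip(w, effect_sizes)) / sw
    q = sum(wi * (es - mean_fixed)**2 for wi, es in zip(w, effect_sizes))
    k = len(effect_sizes)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi**2 for wi in w) / sw))
    w_re = [1 / (se**2 + tau2) for se in ses]           # random-effects weights
    mean_re = sum(wi * es for wi, es in zip(w_re, effect_sizes)) / sum(w_re)
    return mean_re, q, i2, tau2
```

Note how the between-study variance inflates every study's variance equally, pulling the random-effects weights closer together than the fixed-effect weights.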
Publication Bias for Oral Language Outcomes

The possibility of publication bias remains a persistent concern in meta-analysis, and the following analysis examines empirical evidence for the presence of publication bias in this sample and the extent to which it might distort the estimates. Lipsey and Wilson (1993, as cited in Lipsey and Wilson 2001) demonstrated that published studies tend to report larger mean effect sizes than unpublished studies. While it is impossible to determine whether this is the result of bias on the part of journal editors or of researchers, it is potentially problematic if the under-representation of unpublished studies induces significant bias. And while it is likely impossible for any literature review to be thorough enough to locate every study ever written on a given topic, the conceptual possibility that other studies could have been written is sufficient to suggest that the true population parameter could differ systematically from the retrieved sample. Given these vagaries, it is not possible, practically or conceptually, to empirically demonstrate publication bias with complete certainty; rather, one can demonstrate the possibility of publication bias and estimate the potential effects of such bias on the main effects analysis. One way to check for possible publication bias is to compare the means of published and unpublished studies in the sample; because unpublished studies represent only a fraction of the total empirical literature on a topic, the simple difference between the mean effect size estimates of the published and unpublished samples provides a sort of upper bound for publication bias. A recoding of the type-of-publication variable into a dummy-coded variable (1 = published, 0 = unpublished) indicated that 84.6% of the included sample had been published, while the other 15.4% were dissertations.
The mean effect size for published studies (.377, SE = .067) is, surprisingly, much smaller than the mean effect size for unpublished studies (1.159, SE = .330). The difference between the mean effect sizes of -.782 provides a crude estimate of the upper bound of potential publication bias. Of course, this simple difference does not adequately account for small sample bias, nor does it employ inverse variance weights; consequently, appropriate meta-analytic tests of publication bias must also be utilized. A funnel plot with effect sizes plotted against standard errors is one meta-analytically-appropriate method of visually examining the distribution for the presence of publication bias. In this case, the standard error serves as a proxy for sample size, and because smaller samples are much more likely to lack the statistical power required to attain statistical significance, we look at the small-sample studies to detect publication bias. If there is no such bias, we expect small studies with negative and null results to be as frequent as small studies with positive results. The funnel plot in Fig. 3 includes black circles for studies that have been imputed to achieve a symmetric distribution (the “trim and fill” technique), and we notice that both imputed studies fall in quadrant one, which is inconsistent with the possibility of publication bias. We also notice that when these studies are imputed, the mean effect size estimate remains relatively unchanged. A computational alternative to visual inspection of the distribution is Egger’s regression intercept, as discussed in Sterne and Egger (2006) (Fig. 4). Because we assume that publication bias will be positive, that is, in the direction of significantly positive effects, and because it provides a more conservative estimate of significance, the p value of the one-tailed test at α = .05 is typically reported. The null hypothesis is that the intercept of the regression of the standardized effect (ES/se) on precision (1/se) is zero; the alternative tested here is that it is greater than zero.
While some debate exists about whether the single-tailed or two-tailed test is more appropriate, we see in Fig. 5 that in this case the two estimates provide conflicting evidence of publication bias in the oral language outcome distribution. The intercept is significantly greater than zero for the one-tailed test (1.618, t-value=1.816, p=.048) but not the two-tailed test (p=.097), providing limited evidence that smaller sample sizes are associated with larger effect size estimates. In conclusion, these varied analyses provide little evidence that publication bias is present in the distribution of studies reporting oral language outcomes, and any bias induced appears small: if enough small-sample studies with null or negative results were added to make the distribution symmetrical, the mean effect size estimate would hardly change. That said, very few studies in the sample have null or negative effect size estimates, so it remains distinctly possible that the literature search failed to uncover studies that were never published because they failed to yield significantly positive results.

Moderator Analyses for Oral Language Outcomes

The distribution of oral language effect sizes was heterogeneous, as indicated by the Q and I2 statistics; consequently, we might expect post hoc examination of moderator variables to uncover some statistically significant moderators. However, the sample is modest (n=13) and underpowered for meta-regression analysis of the partial contributions of multiple independent variables. Given these limitations, analysis of moderators is primarily motivated by a priori questions of interest, and findings are qualified by the recognition that small differences may be difficult to detect with the small sample employed and that confounding and lurking variables may temper any observed differences between sub-groups.
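The Q-between statistics reported in the moderator tables partition homogeneity in the style of an analysis of variance: the total Q is split into within-group and between-group components. The sketch below uses fixed-effect weights for simplicity, whereas the mixed-effects analyses reported here also fold between-study variance into the weights; the example data are hypothetical:

```python
def subgroup_q(groups):
    """Analog-to-ANOVA homogeneity partition for sub-group moderators.

    `groups` is a list of sub-groups, each a list of (effect, se) pairs.
    Returns (q_between, q_within); q_between is referred to a chi-square
    distribution with (number of groups - 1) degrees of freedom.
    Fixed-effect weights (1/se^2) are used here for simplicity; a
    mixed-effects version would add the between-study variance (tau^2)
    to each study's variance before weighting.
    """
    def q_of(studies):
        w = [1.0 / se ** 2 for _, se in studies]
        mean = sum(wi * es for wi, (es, _) in zip(w, studies)) / sum(w)
        return sum(wi * (es - mean) ** 2 for wi, (es, _) in zip(w, studies))

    pooled = [s for g in groups for s in g]
    q_total = q_of(pooled)
    q_within = sum(q_of(g) for g in groups)
    return q_total - q_within, q_within

# Two sub-groups with very different weighted means: essentially all
# of the heterogeneity is attributed to the between-group component.
qb, qw = subgroup_q([[(0.2, 0.1)], [(0.8, 0.1)]])
```

When the sub-group means coincide, q_between collapses to zero and the moderator explains none of the heterogeneity.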
Occasionally, when a categorical variable had too few studies in one or more categories, the category was recoded, often into a binary variable, to enable a more reliable comparison. Table 5 summarizes the results for measured variables reported in all thirteen studies, and the presence of significant bivariate correlations (i.e., chi square test) with other measured variables is indicated in the last column.

Table 5. Summary of moderator analyses for oral language outcomes.

| Moderator | Sub-group | n | G | SE | p | Q-within (p) | I2 | Q-between (p) | Sig. corr. |
|---|---|---|---|---|---|---|---|---|---|
| Published | Yes | 11 | .377 | .067 | .000 | 29.005 (p=.001) | 65.523 | .601 (p=.438) | Yes |
| | No | 2 | 1.159 | .330 | .099 | 3.683 (p=.09) | 64.681 | | |
| Study Quality | High | 7 | .587 | .164 | .000 | 18.544 (p=.005) | 67.644 | 4.089 (p=.129) | Yes |
| | Medium | 4 | .761 | .364 | .036 | 8.266 (p=.041) | 63.077 | | |
| | Low | 2 | .174 | .167 | .299 | .028 (p=.866) | .000 | | |
| Instrument Type | Researcher-created | 5 | .478 | .238 | .045 | 10.408 (p=.034) | 61.570 | 2.513 (p=.285) | Yes |
| | Standard-Narrow | 6 | .743 | .204 | .000 | 25.583 (p=.000) | 80.456 | | |
| | Standard-Broad | 2 | .031 | .420 | .941 | .0359 (p=.549) | .000 | | |
| Post Hoc Researcher Adjusted | Yes | 2 | .174 | .167 | .299 | .028 (p=.866) | .000 | 4.634 (p=.031) | Yes |
| | No | 11 | .675 | .162 | .000 | 34.863 (p=.000) | 71.136 | | |
| Construct | Cooperative | 2 | .105 | .315 | .738 | 10.283 (p=.068) | 51.378 | 2.503 (p=.286) | Yes |
| | Collaborative | 6 | .506 | .157 | .001 | .005 (p=.942) | .000 | | |
| | Peer Tutoring | 5 | .837 | .348 | .016 | 18.721 (p=.001) | 78.634 | | |
| Component | Yes | 4 | .388 | .172 | .024 | 7.406 (p=.06) | 59.494 | 1.035 (p=.309) | Yes |
| | No | 9 | .651 | .193 | .001 | 24.013 (p=.002) | 66.684 | | |
| Setting | EFL | 5 | .691 | .269 | .010 | 17.426 (p=.002) | 77.045 | .380 (p=.538) | Yes |
| | ESL | 8 | .498 | .161 | .002 | 17.332 (p=.015) | 59.612 | | |
| Segregated | Yes | 2 | .230 | .088 | .009 | .966 (p=.326) | .000 | 5.412 (p=.020) | Yes |
| | Other (Not and Unknown) | 11 | .686 | .175 | .000 | 26.944 (p=.003) | 62.866 | | |
| Language of Instruction | L1 (L1-only and bilingual) | 7 | .649 | .186 | .000 | 24.282 (p=.000) | 75.291 | .681 (p=.711) | Yes |
| | L2 Only | 4 | .427 | .215 | .047 | 2.36 (p=.501) | .000 | | |
| | Unknown | 2 | .702 | .535 | .189 | 9.946 (p=.002) | 89.946 | | |
| Culturally Relevant | Yes | 3 | .413 | .196 | .035 | 7.405 (p=.025) | 72.933 | .739 (p=.691) | Yes |
| | No | 5 | .572 | .264 | .03 | 6.701 (p=.153) | 40.309 | | |
| | Not U.S.A. | 5 | .691 | .269 | .01 | 17.426 (p=.002) | 77.045 | | |
| Grade Level | Elementary | 9 | .628 | .164 | .000 | 25.846 (p=.001) | 69.047 | .240 (p=.624) | Yes |
| | Other | 4 | .454 | .314 | .148 | 11.320 (p=.010) | 73.499 | | |
| SES | Low | 5 | .518 | .193 | .007 | 6.821 (p=.146) | 41.36 | .194 (p=.908) | Yes |
| | High | 2 | .788 | .582 | .176 | 3.099 (p=.078) | 67.731 | | |
| | Unknown | 6 | .550 | .202 | .007 | 19.731 (p=.001) | 74.659 | | |
| Student Hispanic | Hispanic | 7 | .472 | .181 | .009 | 15.801 (p=.015) | 62.027 | .541 (p=.462) | |
| | Other (Asian, Arabic, Bangladeshi, Israeli) | 6 | .68 | .217 | .002 | 17.535 (p=.004) | 71.486 | | |
| Student Asian | Asian | 3 | .696 | .376 | .064 | 7.206 (p=.027) | 72.244 | .139 (p=.71) | |
| | Other | 10 | .545 | .15 | .000 | 28.272 (p=.001) | 68.166 | | |

As indicated in the Q-between column, only two moderators were statistically significant at the p=.05 level: post hoc researcher adjusted and segregated. In cases where post-test effect sizes were unadjusted for pre-test differences by the authors of the original study reports, the researcher of this meta-analysis adjusted post-test effect sizes post hoc. In these cases, post hoc adjustments resulted in much smaller effect sizes on average (G=.174) than unadjusted effect sizes (G=.675). This finding indicates that methodological rigor and care in synthesizing previous research can exert a large influence on reported results. The other significant moderator of the effectiveness of peer-mediated learning for improving oral language outcomes was whether or not the intervention occurred in settings where ELLs were segregated from their non-ELL peers. ELLs in segregated settings performed much lower (G=.230) than they did in settings that were not segregated or in settings for which segregation was unreported (G=.636). Some care should be taken when interpreting this result, in particular.
First, the confluence of segregated settings with ambiguous settings (i.e., researchers did not report whether the setting was segregated) presents some conceptual challenges in interpreting the results, because some of the ambiguous settings may well have been segregated in practice. Second, the number of studies reporting segregated settings was small (n=2), so the estimate is less precise than it might have been. For all other variables, differences in mean effect sizes were evident, but none proved to be significant moderators. Because the sample size for oral language outcomes is relatively small, this general lack of statistically significant moderators likely reflects a lack of statistical power to detect meaningful differences. Some of these moderators might prove significant if additional studies were included, and future meta-analyses may benefit from larger sample sizes as the field continues to produce experimental and quasi-experimental evaluations of peer-mediated learning.

## Written Language Outcomes

Summary of Included Studies and Main Effects

A random effects model of the un-corrected and un-Winsorized data provided a mean effect size estimate for the twenty-eight written language outcomes of .551 (SE=.111, p<.001); however, after adjustments for outliers, pre-test differences, and cluster randomization, the mean effect size estimate decreased and the variance increased slightly (.486, SE=.121, p<.001), suggesting that outliers and cluster randomization had some noticeable impact on the original estimates. The adjusted distribution of written language outcomes is illustrated by the forest plot in Fig. 6. Unlike the oral language outcomes already discussed, the distribution of written language outcomes includes eight studies with means equal to or less than zero. This underscores the importance of publishing studies with null or negative findings, as they contribute to more accurate and meaningful syntheses.
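The random-effects pooling behind these main-effects estimates combines studies with inverse-variance weights that include a between-study variance component. As one concrete instance (the dissertation does not name its estimator, so the DerSimonian-Laird method is an assumption here), a minimal Python sketch that also produces the Q and I2 statistics reported alongside the pooled means:

```python
def random_effects(effects, ses):
    """DerSimonian-Laird random-effects pooling (illustrative).

    Returns (mean, se_mean, q, tau2, i2): the pooled mean effect, its
    standard error, Cochran's Q, the between-study variance estimate
    tau^2, and I^2 (percentage of observed variance between studies).
    """
    k = len(effects)
    w = [1.0 / s ** 2 for s in ses]                 # fixed-effect weights
    fixed_mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed_mean) ** 2 for wi, e in zip(w, effects))
    df = k - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                   # truncated at zero
    w_star = [1.0 / (s ** 2 + tau2) for s in ses]   # random-effects weights
    mean = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se_mean = (1.0 / sum(w_star)) ** 0.5
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return mean, se_mean, q, tau2, i2

# Two equally precise but conflicting hypothetical studies: nearly all
# of the observed variation is attributed to between-study heterogeneity.
mean, se_mean, q, tau2, i2 = random_effects([0.0, 1.0], [0.1, 0.1])
```

With homogeneous inputs the estimator reduces to the fixed-effect mean, since tau^2 truncates to zero.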
The distribution of effect sizes for written language outcomes was even more heterogeneous than the distribution of oral language outcomes. The Q statistic (97.135, df=27, p<.001) indicates that the observed effect sizes vary more than would be expected by sampling error alone, and the I2 statistic (72.204) indicates that approximately 72% of the observed variance in effect sizes exists between studies. Together, this suggests that moderator analyses might provide insight into what factors influence the effectiveness of peer-mediated learning for ELLs for written language outcomes.

Publication Bias for Written Language Outcomes

A recoding of the type of publication variable into a dummy-coded variable (1=published, 0=unpublished) indicated that 64.3% of the included sample were unpublished (i.e., technical reports and dissertations), while the other 35.7% were published. The mean effect size for published studies (.442, SE=.24) is not much smaller than the mean effect size for unpublished studies (.524, SE=.142). The difference between the mean effect sizes, -.082, provides a crude estimate of the upper bound of potential publication bias. The funnel plot in Fig. 7 includes black circles for studies imputed to achieve a symmetric distribution (the “trim and fill” technique); no studies were imputed, which is inconsistent with the possibility of publication bias. Similarly, the black diamond indicates that the anticipated mean did not change at all under publication bias conditions. We see in Fig. 8 that Egger’s regression test provides confirmatory evidence against publication bias in the written language outcome distribution. The intercept is not significantly greater than zero for either the one-tailed test (1.02, t-value=1.338, p=.096) or the two-tailed test (p=.193).
In conclusion, these analyses provide no evidence that publication bias is likely for the distribution of studies reporting written language outcomes. Additionally, several studies in the sample have null or negative effect size estimates; thus, it seems unlikely that the literature search failed to uncover studies that were never published because they failed to yield significantly positive results. As indicated by the funnel plot and the difference in means between published and unpublished studies, the possible impact of studies lurking in the “file drawer” on the mean effect size estimates appears relatively minor in this case.

Moderator Analyses for Written Language Outcomes

The distribution of written language effect sizes was heterogeneous, as indicated by the Q and I2 statistics; consequently, we might expect post hoc examination of moderator variables to uncover some statistically significant moderators. The sample is large enough (n=28) and sufficiently powered for meta-regression analysis of the partial contributions of at least a few (e.g., 2-3) independent variables. As before, analysis of moderators is primarily motivated by a priori questions of interest, and findings remain qualified by the recognition that small differences may be difficult to detect with the sample employed and that confounding and lurking variables may temper any observed differences between sub-groups. Table 6 summarizes the results for measured variables reported in the 28 studies included for this outcome type, and the presence of significant bivariate correlations (analyzed as chi square statistics) with other measured variables is indicated in the last column.
Table 6. Summary of moderator analyses for written language outcomes.

| Moderator | Sub-group | n | G | SE | p | Q-within (p) | I2 | Q-between (p) | Sig. corr. |
|---|---|---|---|---|---|---|---|---|---|
| Published | Yes | 10 | .442 | .240 | .065 | 38.89 (p=.000) | 76.858 | .086 (p=.770) | Yes |
| | No | 18 | .524 | .142 | .000 | 55.851 (p=.000) | 60.562 | | |
| Study Quality | High | 17 | .637 | .144 | .000 | 56.534 (p=.000) | 71.7 | 10.635 (p=.005) | Yes |
| | Medium | 8 | .328 | .311 | .291 | 31.991 (p=.000) | 78.119 | | |
| | Low | 3 | -.095 | .173 | .582 | .170 (p=.981) | .000 | | |
| Instrument Type | Researcher-created | 17 | .411 | .147 | .005 | 35.743 (p=.003) | 55.236 | 1.107 (p=.575) | Yes |
| | Standard-Narrow | 7 | .338 | .168 | .033 | 50.012 (p=.000) | 88.003 | | |
| | Standard-Broad | 4 | .746 | .420 | .045 | 5.677 (p=.128) | 47.156 | | |
| Post Hoc Researcher Adjusted | Yes | 3 | -.095 | .173 | .583 | .170 (p=.918) | .000 | 9.058 (p=.003) | Yes |
| | No | 25 | .554 | .129 | .000 | 88.612 (p=.000) | 72.916 | | |
| Construct | Cooperative | 14 | .632 | .168 | .000 | 64.105 (p=.000) | 79.721 | 1.391 (p=.499) | Yes |
| | Collaborative | 10 | .376 | .162 | .02 | 9.94 (p=.355) | 9.460 | | |
| | Peer Tutoring | 4 | .310 | .414 | .454 | 19.234 (p=.000) | 84.403 | | |
| Component | Yes | 12 | .633 | .184 | .001 | 30.714 (p=.001) | 64.186 | 1.07 (p=.301) | Yes |
| | No | 16 | .385 | .154 | .012 | 55.422 (p=.000) | 72.935 | | |
| Setting | EFL | 17 | .504 | .170 | .003 | 45.017 (p=.000) | 64.458 | .023 (p=.879) | Yes |
| | ESL | 11 | .465 | .184 | .012 | 51.969 (p=.000) | 80.758 | | |
| Segregated | Yes | 5 | .373 | .135 | .006 | 5.755 (p=.218) | 30.942 | .504 (p=.478) | Yes |
| | Other (Not and Unknown) | 23 | .518 | .155 | .001 | 91.38 (p=.000) | 75.952 | | |
| Language of Instruction | L1 (L1-only and bilingual) | 9 | .457 | .168 | .007 | 20.971 (p=.007) | 61.853 | .274 (p=.872) | Yes |
| | L2 Only | 8 | .402 | .247 | .104 | 36.976 (p=.000) | 80.976 | | |
| | Unknown | 11 | .583 | .258 | .024 | 38.447 (p=.000) | 73.99 | | |
| Culturally Relevant | Yes | 2 | .433 | .148 | .003 | .095 (p=.758) | 0.000 | .101 (p=.951) | Yes |
| | No | 9 | .474 | .246 | .053 | 51.54 (p=.000) | 84.478 | | |
| | Not U.S.A. | 17 | .504 | .17 | .003 | 45.017 (p=.000) | 64.458 | | |
| Grade Level | Elementary | 12 | .539 | .182 | .003 | 59.259 (p=.000) | 81.437 | 10.863 (p=.004) | Yes |
| | Middle | 6 | -.007 | .134 | .961 | 2.841 (p=.724) | 0.000 | | |
| | High | 10 | .7 | .204 | .001 | 17.633 (p=.039) | 49.047 | | |
| SES | Low | 11 | .516 | .214 | .016 | 45.141 (p=.000) | 77.847 | .052 (p=.820) | Yes |
| | Other (Includes High and Unknown) | 17 | .456 | .147 | .002 | 48.222 (p=.000) | 66.820 | | |
| Student Hispanic | Hispanic | 10 | .471 | .18 | .009 | 41.128 (p=.000) | 78.117 | .005 (p=.945) | |
| | Other (Asian, Arabic, African, Pakistani, Lebanese) | 18 | .488 | .172 | .005 | 54.233 (p=.000) | 68.654 | | |
| Student Asian | Asian | 6 | .705 | .32 | .028 | 18.652 (p=.002) | 73.193 | .697 (p=.404) | |
| | Other | 22 | .418 | .125 | .001 | 67.671 (p=.000) | 68.967 | | |

Like the distribution of oral language outcomes, the distribution of written language outcomes demonstrated few significant moderators, indicating that peer-mediated learning is effective across a number of methodological, setting, and participant variables. However, three moderators were statistically significant at the p=.05 level: study quality, post hoc researcher adjusted, and grade level. As with oral language outcomes, post hoc adjustments of written language outcomes resulted in much smaller effect sizes on average (G=-.095) than unadjusted (G=.554), with the direction of the effect actually switching to favor the comparison groups. For this distribution, study quality was also a significant moderator; as study quality increased, so did the magnitude of the mean effect size, a finding that is somewhat counterintuitive. One might expect high quality designs to mitigate the influence of bias and accident, resulting in lower effects on average; however, this is similar to the findings of other meta-analyses of peer-mediated instruction, which reported that low quality studies tended to report lower effect sizes (e.g., Keck et al. 2006). Finally, the other significant moderator of the effectiveness of peer-mediated learning for improving written language outcomes was grade level.
Notably, middle school students showed much smaller gains (G=-.007) than high school (G=.7) or elementary (G=.539) students. It is worth noting that there were far more middle and high school studies in the written language distribution, so the categories were not collapsed as they were for oral language outcomes. Consequently, comparisons between the two are somewhat complicated by the differences in coding.

## Other Academic Outcomes

Summary of Included Studies and Main Effects

A random effects model of the un-corrected and un-Winsorized data provided a mean effect size estimate for the other academic outcomes of .234 (SE=.079, p=.003); however, after adjustments for outliers, pre-test differences, and cluster randomization, the mean effect size estimate and the variance increased slightly (.250, SE=.13, p=.054), suggesting that outliers and cluster randomization had more impact on the standard error estimate than on the mean effect size estimate. Heterogeneity for the observed sample of other academic outcomes was statistically indistinguishable from zero (Q=1.882, p=.757, I2=0.00). Thus, not only were there too few studies to reliably conduct moderator analyses for this distribution, empirical evidence indicates that there is insufficient heterogeneity for moderators to explain the variance in effect sizes. Fig. 9 illustrates the distribution of effect sizes for other academic outcomes.

Publication Bias for Other Academic Outcomes

A recoding of the type of publication variable into a dummy-coded variable (1=published, 0=unpublished) indicated that 80% of the included sample were published in journals; the other study was a dissertation. The difference between the mean effect size for published studies (G=.260, p=.078) and the mean of unpublished studies (G=.218, p=.424) is .042 and provides a conceptual limit on the effect of publication bias on the mean effect size estimate. A funnel plot of effect sizes plotted against the standard errors in Fig. 10 shows no studies imputed.
While this would suggest that publication bias is unlikely, the result should be interpreted with caution given the small number of studies in the analysis. It should also be noted that there are no studies in either quadrant one or two; the absence of null or negative outcomes suggests that such studies might well be lurking in the unrecovered gray literature. Egger’s regression test provides confirmatory evidence that publication bias is not a significant threat to the validity of the mean effect size estimate. As demonstrated in Fig. 11, the intercept is not significant for either the one-tailed (.352, SE=3.367, p=.462) or the two-tailed test (p=.923). Again, the small sample size suggests that caution should be used when interpreting these results; nonetheless, across the difference in means, the funnel plot, and Egger’s regression test, empirical evidence consistently suggests that publication bias is unlikely for the distribution of other academic outcomes. In conclusion, the small sample of other academic outcomes shows a modest effect size of one quarter of a standard deviation that appears uninfluenced by publication bias. The small sample limits the viability of moderator analyses, and the lack of heterogeneity further discourages even exploratory analysis of the influence of moderators. The lack of included studies reporting outcomes for content areas like math, science, or social studies is similar to the What Works Clearinghouse, which reports far more language outcomes than math outcomes. Similarly, a number of near-miss studies reported other academic outcomes but were excluded because they failed to meet methodological or other inclusion criteria. In general, it appears that this is an emergent field of study, and future meta-analyses may prove useful as the field develops.
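The G values reported throughout are Hedges' g, which is Cohen's d multiplied by a small-sample correction factor. A minimal Python sketch of that standard conversion, for a hypothetical two-group design with sizes n1 and n2:

```python
def hedges_g(d, n1, n2):
    """Convert Cohen's d to Hedges' g with the small-sample correction.

    Returns (g, var_g). The correction factor J shrinks d slightly,
    most noticeably for small samples; var_g applies the usual
    large-sample approximation for the variance of d.
    """
    df = n1 + n2 - 2
    j = 1.0 - 3.0 / (4.0 * df - 1.0)   # small-sample correction factor
    g = j * d
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2.0 * (n1 + n2))
    return g, j ** 2 * var_d

# With n1 = n2 = 10, J = 1 - 3/71, so d = 0.5 shrinks to roughly 0.479.
g, var_g = hedges_g(0.5, 10, 10)
```

Because J is always less than one, g is systematically smaller than d, which matters most in the two- and three-study sub-groups seen in these tables.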
## Attitudinal Outcomes

Summary of Included Studies and Main Effects

A random effects model of the un-corrected and un-Winsorized data generated a mean effect size estimate for the ten attitudinal outcomes of .309 (SE=.123, p=.012); however, after adjustments for outliers, pre-test differences, and cluster randomization, the mean effect size estimate and the variance increased noticeably (.419, SE=.194, p=.031), suggesting that outliers and cluster randomization had a moderate impact on the original estimates. Heterogeneity analysis indicates that the sample of effect sizes varies more than would be expected from sampling error alone, with about 69% of the variance occurring between studies (Q=28.806, p=.001, I2=68.756); thus, moderator analyses might be able to explain some of this variance. The forest plot of attitudinal outcomes is depicted in Fig. 12 below.

Publication Bias for Attitudinal Outcomes

A recoding of the type of publication variable into a dummy-coded variable (1=published, 0=unpublished) indicated that 40% of the included sample were published, and the other 60% were dissertations. The mean effect size for published studies (.201, SE=.216) is considerably smaller than the mean effect size for unpublished studies (.565, SE=.305). The difference between the mean effect sizes, -.364, provides a crude estimate of the upper bound of potential publication bias. The funnel plot in Fig. 13 includes black circles for studies imputed to achieve a symmetric distribution; again, no studies were imputed, which is inconsistent with the possibility of publication bias. Accordingly, the black diamond indicates that the anticipated mean does not change at all. Moreover, some, mostly larger, studies report null and negative effect sizes; this mitigates the possibility that such studies are languishing in file drawers somewhere.
However, the included sample is small, and the results should therefore be treated with some caution. Egger’s regression test offers some evidence of the probability of publication bias for the included sample of attitudinal outcomes and corroborates the fairly large difference in means between published and unpublished studies already presented. As illustrated below in Fig. 14, the intercept is significant at α=.05 for the one-tailed test (3.765, SE=1.918, p=.043) and at α=.1 for the two-tailed test (p=.085). Thus, there is conflicting but overall support for the probability that the main effect size estimates for attitudinal outcomes are influenced by publication bias.

Moderator Analyses for Attitudinal Outcomes

The distribution of attitudinal effect sizes was heterogeneous, as indicated by the Q and I2 statistics; consequently, we might expect post hoc examination of moderator variables to uncover some statistically significant moderators. However, the sample is fairly small (n=10) and underpowered for meta-regression analysis of the partial contributions of multiple independent variables. Given these limitations, analysis of moderators is primarily motivated by a priori questions of interest, and findings are qualified by the recognition that small differences may be difficult to detect with the small sample employed and that confounding and lurking variables may temper any observed differences between sub-groups. Table 7 summarizes the results for measured variables reported in the ten studies, and the presence of significant bivariate correlations with other measured variables is indicated in the last column.

Table 7. Summary of moderator analyses for attitudinal outcomes.
| Moderator | Sub-group | n | G | SE | p | Q-within (p) | I2 | Q-between (p) | Sig. corr. |
|---|---|---|---|---|---|---|---|---|---|
| Published | Yes | 4 | .201 | .216 | .064 | 5.232 (p=.156) | 42.666 | .947 (p=.330) | Yes |
| | No | 6 | .565 | .305 | .352 | 21.834 (p=.001) | 77.1 | | |
| Study Quality | High | 7 | .650 | .254 | .011 | 19.624 (p=.003) | 69.426 | 5.422 (p=.020) | Yes |
| | Medium | 3 | -.058 | .167 | .728 | 1.424 (p=.491) | .000 | | |
| | Low | 0 | | | | | | | |
| Instrument Type | Researcher-created | 5 | .711 | .36 | .048 | 17.538 (p=.002) | 77.192 | 2.382 (p=.123) | Yes |
| | Standardized (Broad and Narrow) | 5 | .108 | .151 | .475 | 4.954 (p=.292) | 19.257 | | |
| Post Hoc Researcher Adjusted | Yes | 1 | -.254 | .259 | .327 | .000 (p=1.0) | .000 | 5.383 (p=.020) | Yes |
| | No | 9 | .509 | .202 | .012 | 23.275 (p=.003) | 65.628 | | |
| Construct | Cooperative | 5 | .181 | .14 | .196 | 1.366 (p=.85) | .000 | 4.845 (p=.089) | Yes |
| | Collaborative | 3 | .141 | .275 | .608 | 3.879 (p=.144) | 48.442 | | |
| | Peer Tutoring | 2 | 1.525 | .603 | .011 | 3.723 (p=.054) | 73.142 | | |
| Component | Yes | 2 | .523 | .278 | .06 | .442 (p=.506) | .000 | .134 (p=.715) | Yes |
| | No | 8 | .391 | .23 | .089 | 27.643 (p=.000) | 74.677 | | |
| Setting | EFL | 7 | .466 | .267 | .08 | 26.195 (p=.000) | 77.095 | .336 (p=.562) | Yes |
| | ESL | 3 | .264 | .225 | .239 | 2.461 (p=.292) | 18.745 | | |
| Segregated | Yes | 2 | .176 | .229 | .442 | 1.243 (p=.265) | 19.543 | .918 (p=.338) | Yes |
| | Other (Not and Unknown) | 8 | .5 | .249 | .045 | 26.984 (p=.000) | 74.059 | | |
| Language of Instruction | L1 (L1-only and bilingual) | 4 | .651 | .4 | .104 | 19.997 (p=.000) | 84.998 | .973 (p=.615) | Yes |
| | L2 Only | 3 | .316 | .258 | .22 | 1.155 (p=.561) | .000 | | |
| | Unknown | 3 | .169 | .281 | .547 | 4.78 (p=.092) | 58.157 | | |
| Culturally Relevant | Yes | 0 | | | | | | .336 (p=.562) | Yes |
| | No | 3 | .264 | .225 | .239 | 2.461 (p=.292) | 18.745 | | |
| | Not U.S.A. | 7 | .466 | .267 | .08 | 26.195 (p=.000) | 77.095 | | |
| Grade Level | Elementary | 6 | .667 | .333 | .045 | 21.943 (p=.001) | 77.213 | 2.237 (p=.135) | Yes |
| | Middle | 0 | | | | | | | |
| | High | 4 | .119 | .153 | .434 | 3.322 (p=.345) | 9.073 | | |
| SES | Low | 3 | .168 | .205 | .412 | 1.97 (p=.373) | .000 | .919 (p=.338) | Yes |
| | Other (Includes High and Unknown) | 7 | .487 | .261 | .062 | 45.141 (p=.000) | 77.138 | | |
| Student Hispanic | Hispanic | 4 | .387 | .221 | .081 | 4.096 (p=.251) | 26.76 | .004 (p=.95) | |
| | Other (Arabic, Asian, and Turkish) | 6 | .41 | .292 | .16 | 24.666 (p=.000) | 79.729 | | |
| Student Asian | Asian | 2 | 1.166 | .913 | .202 | 13.835 (p=.000) | 92.772 | 1.166 (p=.280) | |
| | Other | 8 | .171 | .125 | .170 | 7.735 (p=.357) | 9.497 | | |

As with the other outcomes already discussed, most of the moderators proved nonsignificant as predictors of variability in the effectiveness of peer-mediated learning at promoting attitudinal outcomes for ELLs; most likely, low power prevented the detection of other meaningful effects. Nonetheless, a few variables proved to be significant (or nearly significant) moderators of attitudinal outcomes: post hoc researcher adjustment, study quality, and the type of peer-mediated learning. The only variable to consistently prove significant as a moderator across outcome types was post hoc researcher adjustment for effect sizes that were unadjusted by the original researchers, and as before, post hoc adjustment resulted in much smaller average effect sizes (G=-.254) than unadjusted effect sizes (G=.509). Another methodological variable, study quality, also proved a significant moderator of attitudinal outcomes, and as with written outcomes, higher quality studies were associated with higher effect sizes. Finally, the type of peer-mediated learning (i.e., Construct) approached statistical significance, with peer tutoring studies (G=1.525) reporting much larger effect sizes than either cooperative (G=.181) or collaborative (G=.141) studies.
However, only two studies in this distribution of outcomes reported using peer tutoring, and consequently, caution should be used when interpreting this result. Nonetheless, given the reliability of the estimate (p=.011), it seems likely that an effect size of this magnitude is fairly meaningful despite the small sample upon which the estimate is based.

# Discussion

While Chapter 4 was organized by outcome type, the remainder of the paper is organized by the research questions presented in Chapter 3. As such, Chapter 5 is intended to synthesize findings across outcome types, and this requires a fairly organic combination of quantitative, formal hypothesis-testing analysis and qualitative, pattern-seeking analysis. After addressing each of the research questions, a final section presents important limitations of this study and provides some recommendations for future research.

Research Question 1: Is peer-mediated instruction effective at promoting language, academic, or attitudinal learning for English language learners in K-12 settings?

Research Question 1 is the core question of the meta-analysis, and everything else is secondary or exploratory in comparison. Essentially, this question asks whether peer-mediated learning works for ELLs, the most basic of effectiveness questions. Taken together, the results of the main effects analyses for all four of the available outcome types support the assertion that peer-mediated learning is very effective at promoting a number of learning outcomes for ELLs. Specifically, the results for oral language outcomes (.578, SE=.136, p<.001) and written language outcomes (.486, SE=.121, p<.001) confirm Hypothesis 1a, which asserted that language outcomes would be significantly larger for interventions utilizing peer-mediated learning than for control conditions. Both estimates are highly reliable at α=.001, and both estimates appear unaffected by publication bias.
Thus, the data indicate that the null hypothesis can be rejected in favor of a significant difference favoring peer-mediated learning over teacher-centered or individualistic learning for ELLs. Moreover, these effect sizes are of large enough magnitude to be practically significant. Compared to previous meta-analyses of cooperative learning, which found effect sizes in the range of .13-1.04 (Johnson et al. 2000), the effect sizes for oral language (.578) and written language (.486) fall in the upper half of the distribution of effect sizes reported in Johnson et al. When compared to the effect sizes reported in meta-analyses of interaction for second language learners (Keck et al. 2006; Mackey and Goo 2007), the effect sizes for oral and written language found in this meta-analysis are of essentially the same magnitude as the difference between cooperative and individualistic effect sizes reported in the earlier meta-analyses. Thus, these results largely confirm the previous research on the effectiveness of cooperative learning. Similarly, the main effects analysis for other academic outcomes supports the assertion in Hypothesis 1b that peer-mediated learning would produce larger academic gains than control conditions. The mean effect size for other academic outcomes (.250, SE=.13, p=.054) falls just short of significance at α=.05, and the estimate is based on a modest sample that appeared somewhat influenced by outliers and methodological concerns. After post hoc adjustments were made, the reliability of the estimate dropped from p=.003 to p=.054, suggesting that strong claims about the reliability of the estimate should be made with caution. Moreover, the correction of bias induced by cluster randomization reduced heterogeneity in the sample to zero, indicating that moderator analyses were unsuitable for this distribution. Nonetheless, publication bias seems unlikely for this distribution of outcomes.
The magnitude of the mean effect size, .250, appears slightly smaller than the effect sizes of cooperative learning on academic outcomes reported by Slavin (1996). Finally, the main effects analysis of attitudinal outcomes indicates that peer-mediated learning is effective at promoting motivation and similar psychologically-oriented outcomes for ELLs. The mean effect size estimate (.419, SE=.194, p=.031) is large and statistically significant at α=.05. However, it appears likely that the estimate is affected by publication bias; thus, the magnitude of the estimate may be larger than it would be if all of the studies conducted had been published. As it stands, the current mean effect size estimate is comparable in magnitude to previous syntheses of cooperative learning in general (Johnson et al. 2000), as well as syntheses of interaction for second language learners (Keck et al. 2006; Mackey and Goo 2007). In conclusion, analysis of all four outcome types indicates that the answer to Research Question 1 is yes: peer-mediated learning is effective at promoting a number of learning outcomes for ELLs. In fact, the estimates tended to be quite large in comparison to other instructional approaches, suggesting that peer-mediated learning is especially effective for ELLs. That effects for language outcomes are larger than effects for academic outcomes is consistent with previous syntheses supporting the linguistic rationale for peer-mediated learning. A sociocultural theory of learning, on the other hand, would explain the difference by arguing that academic learning is largely mediated by language, and thus ELLs must learn the language of the content areas before they can master the academic content. However, it could simply be that the small sample of academic outcomes needs to include more studies before it can accurately capture the effectiveness of peer-mediated learning at promoting academic learning.
Unfortunately, the design of this study is insufficient to definitively discern the correct answer, and these explanations remain largely speculative. Nonetheless, the results of the first research question answer the call of the National Literacy Panel on Language-Minority Children and Youth to determine whether the various aspects of effective instruction highlighted by qualitative research are individually effective: “…these factors need to either be bundled and tested experimentally as an intervention package or examined as separate components to determine whether they actually lead to improved student performance” (August and Shanahan 2006, p. 520).

# Research Question 2: What variables in instructional design, content area, setting, learners, or research design moderate the effectiveness of peer-mediated learning for English language learners?

The second research question is intended to provide a more nuanced understanding of the answer to research question 1; essentially, the first question answers “What works?”, and the second attempts to answer “For whom, and under what conditions?” The following section details the answers to a large number of specific hypotheses about the influence of particular moderators and concludes with a summarizing synthesis of the effects of moderators across outcome types. Given ambivalence in the previous literature regarding the effectiveness of specific cooperative, collaborative, and peer-mediated approaches, Hypothesis 2a suggested that there would be no significant difference among the three peer-mediated constructs, and the results of moderator analyses across the three outcome types generally support this hypothesis. For oral and written language outcomes, Construct was not a significant predictor, and Construct only approached significance as a predictor for attitudinal outcomes.
Notably, the ES estimate for peer-mediated learning was very large (ES=1.525) for the attitudinal distribution, and it was based on only two studies. Thus, the fact that the moderator appeared nearly significant for this outcome distribution may very well reflect a larger-than-average estimate resulting from a very small sample of studies. Moreover, while peer-mediated learning provided the largest effect sizes in two of the three distributions (attitudinal and oral language), cooperative learning provided the largest in written language outcomes, which was the distribution with the largest sample of included studies. Thus, even a qualitative analysis of the rank order of the three constructs suggests that no single version of peer-mediated learning was consistently more effective than the others. This affirms the theoretical orientation of this meta-analysis, which posits a sociocultural explanation of the effectiveness of peer-mediated learning in general: it is through mediated interaction that ELLs learn best. However, the fact that peer tutoring and cooperative learning are the two most structured forms of peer-mediated learning also lends tentative support to claims in the literature that high structure promotes the most learning (e.g., Oxford 1997, Slavin 1996). Hypothesis 2b claimed that the language setting, EFL or ESL, would not significantly moderate the effectiveness of peer-mediated learning for ELLs. Despite significant differences between the two types of settings (e.g., availability of native speakers and amount of exposure to the target language), both fields advocate the use of interactive methods, and consequently, a null hypothesis was forwarded. Empirical evidence across all three available outcome types suggests that the null hypothesis of no difference between EFL and ESL settings cannot be rejected.
Setting was not a significant moderator for any of the outcome types; in fact, it did not even approach significance for any of the distributions. Interestingly, mean effect sizes were actually larger in EFL settings across all three outcome types (i.e., oral language, written language, and attitudinal). This is surprising given that EFL settings provide less exposure to English input and fewer native language models; however, it supports output models of second language acquisition (e.g., Keck et al. 2006, Long 1981, Long 1996, Gass and Mackey 2006, Pica 1994), which suggest that opportunities to formulate meaningful output are as important as opportunities for comprehensible input. Hypothesis 2c posited no significant difference in the effectiveness of peer-mediated learning at different grade levels. To some extent, this is a participant-level question about the effectiveness of peer-mediated learning with students of different ages, but it is analyzed here as a setting-level moderator to reflect differences in pedagogy and instructional delivery associated with these various grade levels. In practice, this moderator addresses aspects of both setting and participant. Results of moderator analyses across outcome types provide ambivalent support for this hypothesis. For oral language and attitudinal outcomes, Grade was not a significant moderator, though it was analyzed as different bivariate variables for oral outcomes (i.e., elementary vs. other) and attitudinal outcomes (elementary vs. high school) because of the availability of data in each distribution. However, for written language outcomes, which contained sufficient studies to analyze all three grade levels, Grade proved to be a significant moderator of effectiveness (Q=10.863, p=.004), mostly because the mean effect size was very low for middle school.
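For context, the p-value for the Grade moderator follows from referring the between-groups Q statistic to a chi-square distribution with degrees of freedom equal to the number of moderator categories minus one, the conventional analog-to-ANOVA test in meta-analysis. Assuming that convention, with three grade levels (df=2) the chi-square tail probability reduces to exp(-Q/2), which reproduces the reported p=.004; this is a sketch of the arithmetic, not the software actually used:

```python
import math

# Between-groups Q for the Grade moderator (written language outcomes)
Q = 10.863
k = 3        # grade-level categories: elementary, middle, high school
df = k - 1   # chi-square degrees of freedom for the between-groups test

# For df = 2, the chi-square survival function is exactly exp(-Q/2)
p = math.exp(-Q / 2.0)
print(round(p, 3))  # → 0.004
```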
In fact, middle school was consistently lower than elementary or high school estimates, suggesting that peer-mediated learning might not be as effective for middle school ELLs. This is markedly different from the general pattern for educational intervention studies, which tend to report larger effect sizes for middle school than either elementary or high school (Lipsey et al. 2012). This is a particularly troublesome finding because of evidence that suggests middle school ELLs are a vulnerable population at tremendous risk of dropping out as they are confronted with increasingly difficult texts and as the focus of education shifts from learning to read to reading to learn (August et al. 2009, Capps et al. 2005, Cummins 2007, Rubinstein-Avila 2003, Short and Fitzsimmons 2007). Hypothesis 2d could not be directly tested as a moderator in this meta-analysis because the sample included only studies conducted in classrooms. Hypothesis 2e posited no significant difference between interventions that were entirely peer-mediated (e.g., Jigsaw) and those for which peer-mediated learning was one component of a complex intervention (e.g., Bilingual Cooperative Integrated Reading Comprehension); this moderator was intended to test a claim by Slavin that complex interventions like Success for All provide the greatest benefits (e.g., Cheung and Slavin 2005). Moderator analyses across all three outcome types suggest that the null hypothesis of no significant difference cannot be rejected. Similarly, no consistent pattern can be found in a qualitative analysis of the results: interventions for which peer-mediated learning was just one component were larger on average in two of the distributions (attitudinal and written language), but those for which the entire intervention was peer-mediated were larger on average in the distribution of oral language outcomes.
This finding does not entirely dismiss claims that there are advantages associated with these large, complex interventions. Rather, as the primary focus of this meta-analysis is determining the effectiveness of peer-mediated learning for ELLs, it appears that peer-mediated learning is effective for ELLs across a number of intervention types, including those that use peer-mediated learning exclusively. Hypothesis 2f posited no significant difference in the effectiveness of peer-mediated learning for students from differing language backgrounds. Due to limitations in the included sample and the reported data, and because culture and language interact in complex ways, student ethnicity was used as a proxy measure of language background. Moderator analyses for all three outcomes suggest that the null hypothesis of no significant difference cannot be rejected. In fact, this variable was tested in two different ways: Hispanic vs. Other and Asian vs. Other. A number of important limitations of these coding categories should be mentioned. First, neither Hispanic nor Asian is a monolithic category; each contains a wide diversity of linguistic, cultural, and geographic variability. Second, comparing these two categories to all others faces the same limitation of masking important variability in language and cultural difference. However, these two categories were chosen because the included sample contained a particularly large number of Hispanic, or Spanish-speaking, participants; because Latinos are the largest group of ELLs in the United States and Asians are the fastest growing group of ELLs in the United States; and because at least some research suggested peer-mediated learning may be ineffective for Asians (e.g., Than et al. 2008).
Regarding the last point, the claim that Asian students may be culturally averse to cooperative, Western-based approaches and may actually prefer teacher-centered instruction, qualitative analyses of the Student Asian variable indicate that across all three outcome types, Asian students actually performed better on average than their non-Asian peers. In fact, a majority of these studies were conducted in Asian EFL settings, where such cultural norms should be strongest. Thus, the findings of this meta-analysis offer tentative evidence to contradict the claim by Than et al. (2008) that cooperative methods may be culturally inappropriate and ineffective for Asian ELLs. Hypothesis 2g predicted no significant difference in the effectiveness of peer-mediated learning for students from high- or low-SES backgrounds, and moderator analyses across all three outcome types support this null hypothesis. Notably, SES was analyzed somewhat differently for written language outcomes (i.e., low vs. other) than for oral language or attitudinal outcomes because of a lack of sufficient studies in the other two categories. Also, it is noteworthy that for all three outcome types, Unknown was the most frequently coded category, suggesting that findings are somewhat tentative and reflect a lack of careful reporting in the literature base. Finally, Hypotheses 2h and 2i predicted significant differences favoring high-quality studies. Specifically, 2h posited that high-quality studies (i.e., those that tested for pre-test differences AND adjusted for pre-test differences) would outperform medium- or low-quality studies, and moderator analyses for written language and attitudinal outcomes support this alternative hypothesis. However, study quality was not a significant predictor for oral language outcomes, and medium-quality studies actually reported the highest average effect sizes. Thus, moderator analyses provide somewhat ambivalent support for Hypothesis 2h.
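One common way studies adjust for pre-test differences, as the high-quality category above requires, is to standardize the difference of pre-to-post gains rather than raw post-test means. The sketch below is illustrative only: the group means and pooled SD are hypothetical values, not data from any included study.

```python
def gain_adjusted_es(pre_t: float, post_t: float,
                     pre_c: float, post_c: float,
                     sd_pooled: float) -> float:
    """Standardized mean difference computed on gain scores:
    (treatment gain - control gain) / pooled SD, so that an
    unadjusted pre-test advantage does not inflate the estimate."""
    return ((post_t - pre_t) - (post_c - pre_c)) / sd_pooled

# Hypothetical data: treatment group starts 5 points ahead at pre-test
naive_es = (78.0 - 70.0) / 10.0                               # raw post-test difference: 0.80
adjusted_es = gain_adjusted_es(65.0, 78.0, 60.0, 70.0, 10.0)  # gain-score difference: 0.30
print(naive_es, adjusted_es)
```

The contrast between the two numbers shows why unadjusted pre-test differences can substantially inflate effect size estimates, the concern the study quality moderators address.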
Hypothesis 2i predicted a significant difference favoring higher-dosage studies (i.e., greater total number of contacts) over lower-dosage studies, and moderator analyses across all three outcome types failed to support this hypothesis. Thus, the null hypothesis of no significant difference could not be rejected for the moderating influence of dosage. Finally, another study quality moderator, for which there was no a priori hypothesis, proved important: post hoc researcher adjustment, which indicated that this researcher subtracted the pre-test mean from the post-test mean in order to control for unadjusted pre-test differences. This is the only moderator variable that proved significant for all three outcome types, and this finding indicates that not controlling for pre-test differences can have a very large impact on effect size estimates.

# Research Question 3: In what ways do select issues of power and equity impact the effectiveness of peer-mediated methods?

This third research question is intended to situate the more typical effectiveness findings just discussed within the equity-oriented statement of the problem presented in Chapter 1; that is, the intention of this research question is to expand the typical effectiveness questions of what works, for whom, and under what conditions to include equity-driven variables that the literature indicates are crucial for the academic success of ELLs. To that end, the following hypotheses examine the influence of a number of equity moderators; however, to be clear, the included variables are not exhaustive, nor does the operationalization of equity implicit in the selection of moderating variables represent the most complex conception of equity available. Rather, these are explorations of equity and of how equity-oriented variables may influence the effectiveness of a particular kind of instruction for ELLs.
Hypothesis 3a was an alternative hypothesis that predicted lower effect sizes for ELLs in settings where they are segregated from their peers. This hypothesis is complicated by the fact that many bilingual models intentionally segregate ELLs in order to provide extended, targeted language instruction. Nonetheless, exposure to native language peers offers linguistic, social, and academic advantages that motivate the prediction that ELLs will perform worse in segregated settings. Moderator analyses across the three outcome types offer ambivalent evidence: formal tests generally failed to support this hypothesis, with one exception. For oral language outcomes, segregation was a significant moderator, and ELLs demonstrated larger oral language gains in non-segregated settings, as predicted. In fact, qualitative analyses of the written language and attitudinal distributions indicate that non-segregated settings reported higher average effect sizes, which, taken with the significant effect for oral language outcomes, offers some tentative support for the hypothesis. As indicated in Table 2, only 5 studies in the included sample indicated whether or not facilities were adequate. Consequently, formal moderator analyses were not possible to test Hypothesis 3b, which predicted lower effect sizes for inadequate facilities. Qualitative analysis of the reported effect sizes compared to the means for each of the outcome types also fails to support the hypothesis. Two studies reporting written language outcomes (ES=.386 and ES=.478) were quite close to the mean of .486. Similarly, two studies reporting academic outcomes (ES=.254 and ES=.155) were similar in magnitude to the mean of .25. Finally, one study reporting an oral language outcome (ES=.667) was actually larger than the mean of .578. Given the small number of studies actually reporting the adequacy of facilities, the strongest finding for this hypothesis was the lack of information in the extant literature base.
Similarly, Hypotheses 3c and 3e posited that higher quality teachers would result in greater learning gains for ELLs, but very few studies actually reported this information, and formal moderator analyses were not possible to test these two hypotheses. Hypothesis 3d, on the other hand, predicted that culturally-relevant instruction would lead to higher learning gains for ELLs. Again, very few studies reported this information, but because the coding was dichotomous and identified whether or not authors made even a cursory claim of cultural relevance, it was possible to code “no” even when authors did not report the information. Moderator analyses failed to support the hypothesis, however. For attitudinal outcomes, not one study claimed to be even slightly culturally-relevant. For oral language and written language outcomes, qualitative analysis indicates that those studies claiming any cultural relevance actually reported lower effect sizes on average. Overall, even the very low bar for coding resulted in surprisingly few studies coded as culturally relevant, indicating that very little can be said about the moderating effect of strong forms of culturally-relevant instruction on the effectiveness of peer mediation for ELLs. Finally, Hypothesis 3f predicted that interventions using students’ native language would be more effective than those using only English. This represents an empirical test extending the largest literature base of equity-oriented effectiveness research for ELLs. That is, five meta-analyses of the effectiveness of using students’ native language have consistently found that bilingual models outperform English-only models, and this hypothesis is intended to extend that finding to a particular instructional approach.
As coded for these analyses, moderator analysis across all three outcomes consistently failed to support the assertion that using students’ native language produced larger effects than interventions that used only English. Notably, for all three outcome types, one study reported using students’ L1 exclusively (see Suppl. material 2). In each case, the effect size for the single study using L1 exclusively was much larger than for bilingual or English-only approaches; however, to provide sufficiently large samples in each moderator category, L1-only and bilingual approaches were combined for moderator analyses. Similarly, qualitative analyses of all three outcome types indicate that interventions using students’ native language reported higher mean effect sizes than those using only English. Thus, qualitative analysis across all three outcome types offers some tentative support for the claim that the use of students’ native language during instruction promotes the effectiveness of peer mediation for ELLs. Importantly, this variable only measures whether instruction utilized students’ native language; it does not measure whether students actively used their L1 during activities or whether learning outcomes were greater when they did. Overall, the hypotheses about the importance of equity demonstrate that effectiveness research continues to focus on academic and psychological factors to the exclusion of issues of power and equity. Very few studies reported sufficient information to code these variables, and consequently, the claims that could be tested or supported are relatively few and tentative. Despite these shortcomings, analyses offer some support to claims that equity variables moderate the effectiveness of peer mediation for ELLs. For instance, segregation proved to be a significant moderator for oral language outcomes, and in all three outcome types, segregated settings produced smaller effect sizes than non-segregated settings.
Similarly, effect sizes in all three outcome types were larger for interventions that used students’ native language for instruction.

# Limitations and Future Directions

These findings consistently indicate that peer-mediated learning is effective for ELLs; nonetheless, there are a number of important limitations to consider. For instance, this meta-analysis is limited by reporting in the original studies, and as discussed, many important variables were either excluded from formal analyses or modified in some way because of limitations in the extant literature base. Similarly, these findings are based on a modest sample of studies, and analyses of some outcome types were severely limited by sample size. Future research may benefit from a growing literature base. The lack of statistically significant moderators, for instance, likely represents a lack of statistical power to detect practically meaningful differences rather than strong evidence that no difference actually exists. Future meta-analyses may benefit from the inclusion of additional studies that seem likely to be conducted given the ongoing interest in cooperative learning research for ELLs, indicated by the large proportion of recent studies included in this sample. Furthermore, the inclusion of low- and medium-quality studies may influence the findings, and there are certainly those who argue that only the highest-quality studies should be included in research syntheses. As argued, ELLs represent an emergent field of research, and much effort was made to analyze the influence of study quality on the effects reported in this meta-analysis. Of course, all secondary data analyses are limited by the quality of the data they analyze, and this limitation is hardly unique to this particular meta-analysis. Another limitation common to meta-analyses was the availability of studies and data.
Considerable effort was made to identify and retrieve the entire population of studies conducted on the effectiveness of peer-mediation, but certainly, some studies were missed. Moreover, some studies deemed relevant and qualified were missing data; even after attempts to contact the authors, some studies were simply too old, and even the original authors no longer had access to the data. Similarly, this meta-analysis is a product of its particular time, and search tools (e.g., electronic databases and e-mail) are likely biased towards more recent research. Thus, the findings reported in this meta-analysis are limited by the availability of data, and missing data may affect the internal validity of the results, as well as the ability of the sample to accurately estimate general population parameters. Finally, a number of variables of interest were operationalized in ways that reflected the availability of data or that allowed for reliable coding. However, the operationalizations of these variables likely simplified constructs of interest (e.g., equity); consequently, the findings presented in this study may be of only limited use for those doing research within any one of these fields. Similarly, the expansion of certain constructs (e.g., ELL) to include multiple variables (e.g., ESL and EFL) may affect the generalizability of these findings. Future research should examine other potential moderators, including setting (e.g., laboratory settings), instructional variables (e.g., task type), teacher variables (e.g., beliefs and attitudes), and student variables (e.g., social capital and student use of L1) that are known to influence the effectiveness of peer-mediated methods and the learning of ELLs. Similarly, study quality variables (e.g., fidelity of implementation) were generally under-reported in this sample, and future research should examine the moderating influence these may exert on the mean effect size.
Additionally, future research should explore in more detail the mechanisms that make peer-mediated learning effective for ELLs; for example, why does peer-mediated learning appear more effective at promoting language outcomes than academic outcomes? Clearly, more attention should be paid to important factors like the certification and experience of teachers, the adequacy of facilities, and the length of residence or previous schooling of ELLs. The nearly complete absence of these data in the literature base for this study marks an unacceptable knowledge gap, especially given a clear literature base demonstrating the importance of these variables for ELLs.

# Acknowledgements

This work was supported, in part, by Vanderbilt's Experimental Education Research Training (ExpERT) grant (David S. Cordray, Director; grant number R305B040110). I am grateful for the faith the ExpERT folks demonstrated in selecting me, and the financial and technical support and training they provided were instrumental in the completion of this study. In particular, Mark W. Lipsey, Director of the Peabody Research Institute, and David S. Cordray, Director of the ExpERT Program at Vanderbilt University, provided countless hours of guidance and support throughout my time at Vanderbilt. Finally, I am grateful beyond words for the love and support of my family. To my wife and daughter, you’ve sacrificed more hours than I care to remember in support of this accomplishment. Your love has been my refuge and my strength through these last five years, and I hope to share with you the fruits that your seeds of love have nurtured. To my parents, both by blood and by bond, your financial and emotional support made so much of this journey possible, and your own lives’ works are the models for the work I still hope to accomplish.

# References

• Allison B, Rehm M (2007) Effective Teaching Strategies for Middle School Learners in Multicultural, Multilingual Classrooms.
Middle School Journal 39(2): 12-18. https://doi.org/10.1080/00940771.2007.11461619
• August D (1987) Effects of Peer Tutoring on the Second Language Acquisition of Mexican American Children in Elementary School. TESOL Quarterly 21(4): 717-736. https://doi.org/10.2307/3586991
• August D, Shanahan T (2006) Developing literacy in second-language learners: Report of the National Literacy Panel on language-minority children and youth. Lawrence Erlbaum Associates, Inc., Mahwah, NJ.
• August D, Barnett S, Christian D, Fix M, Frede E, Francis D (2009) The American Recovery and Reinvestment Act: Recommendations for addressing the needs of English language learners. Working Group on ELL Policy.
• Baca L, Bransford J, Nelson C, Ortiz L (1994) Training, development, and improvement (TDI): A new approach for reforming bilingual teacher preparation. The Journal of Educational Issues of Language Minority Students 14: 1-22.
• Baker KA, Kanter AA (1981) Effectiveness of Bilingual Education: A Review of Literature. Office of Planning, Budget, and Evaluation, U.S. Department of Education, Washington, D.C.
• Ballantyne KG, Sanderman AR, Levy J (2008) Educating English language learners: Building teacher capacity. Washington, DC.
• Bloom HS, Hill CJ, Black AR, Lipsey M (2008) Performance trajectories and performance gaps as achievement effect size benchmarks for educational interventions. Journal of Research on Educational Effectiveness 1: 289-328. https://doi.org/10.1080/19345740802400072
• Calderón M, Hertz-Lazarowitz R, Slavin R (1998) Effects of bilingual cooperative integrated reading and composition on students making the transition from Spanish to English reading. The Elementary School Journal 99(2): 153-165. https://doi.org/10.1086/461920
• Capps R, Fix M, Murray J, Ost J, Passel J, Herwantoro S (2005) The new demography of America's schools: Immigration and the No Child Left Behind Act.
Urban Institute (NJ1). https://doi.org/10.1037/e723122011-001
• Cheung A, Slavin RE (2005) Effective reading programs for English language learners and other language-minority students. Bilingual Research Journal 29(2): 241-270. https://doi.org/10.1080/15235882.2005.10162835
• Cohen EG (1994) Restructuring the classroom: Conditions for productive small groups. Review of Educational Research 64(1): 1-35. https://doi.org/10.3102/00346543064001001
• Cummins J (2001) Empowering minority students: A framework for intervention. Harvard Educational Review 71(4): 649-655. https://doi.org/10.17763/haer.71.4.j261357m62846812
• Cummins J, Bismilla V, Chow P, Cohen S, Giampapa F, Leoni L, Sandhu P, Sastri P (2005) Affirming identity in multilingual classrooms. 63(1): 38-43.
• Cummins J (2007) Rethinking monolingual instructional strategies in multilingual classrooms. 10: 221-240.
• Davies CE (2003) How English learners joke with native speakers: An interactional sociolinguistic perspective on humor as collaborative discourse across cultures. Journal of Pragmatics 35: 1361-1385. https://doi.org/10.1016/S0378-2166(02)00181-9
• Deyhle D (1995) Navajo youth and Anglo racism: Cultural integrity and resistance. Harvard Educational Review 65(3): 403-444. https://doi.org/10.17763/haer.65.3.156624q12053470n
• Digest of Education Statistics (2009) Average reading scale scores of 4th- and 8th-graders in public schools and percentage scoring at or above selected reading achievement levels, by English language learner (ELL) status and state or jurisdiction: 2007. http://nces.ed.gov/programs/digest/d09/tables/dt09_124.asp. Accessed on: 2010-12-13.
• Dion E, Fuchs D, Fuchs LS (2007) Differential effects of peer-assisted learning strategies on students' social preference and friendship making. Behavioral Disorders 4.
• Duff P (2001) Language, literacy, content, and (pop) culture: Challenges for ESL students in mainstream courses. 58(1): 103-132.
https://doi.org/10.3138/cmlr.58.1.103
• Echevarria J, Short D, Powers K (2006) School reform and standards-based education: A model for English-language learners. The Journal of Educational Research 99: 195-210. https://doi.org/10.3200/JOER.99.4.195-211
• Fantuzzo JW, Riggio RE, Connely S, Dimeff LA (1989) Effects of reciprocal peer teaching on academic achievement and psychological adjustment: A component analysis. Journal of Educational Psychology 81(2): 173-177. https://doi.org/10.1037/0022-0663.81.2.173
• Firth A, Wagner J (1997) On discourse, communication, and (some) fundamental concepts in SLA research. The Modern Language Journal 81(3): 285-300. https://doi.org/10.1111/j.1540-4781.1997.tb05480.x
• Fradd SH, Lee O (2003) Teachers’ roles in promoting science inquiry with students from diverse language backgrounds. Educational Researcher 28(6): 14-20. https://doi.org/10.2307/1177292
• Francis DJ, Rivera M, Lesaux N, Kieffer M, Rivera H (2008) Practical Guidelines for the Education of English Language Learners: Research-based Recommendations for Instruction and Academic Interventions. Portsmouth, New Hampshire.
• Gándara P (2000) In the aftermath of the storm: English learners in the post-227 era. Bilingual Research Journal 24: 1-14. https://doi.org/10.1080/15235882.2000.10162748
• Gándara P, Rumberger R, Maxwell-Jolly J, Callahan R (2003) English learners in California schools: Unequal resources, unequal outcomes. Educational Policy Analysis Archives 11(36): 1-54.
• Gass SM, Mackey A (2006) Input, Interaction and Output: An Overview. AILA Review 19: 3-17. https://doi.org/10.1075/aila.19.03gas
• Genesee F, Lindholm-Leary K, Saunders W, Christian D (2005) English language learners in U.S. schools: An overview of research findings. Journal of Education for Students Placed at Risk 10(4): 363-386.
https://doi.org/10.1207/s15327671espr1004_2
• Gersten R, Baker S (2000) What we know about effective instructional practices for English-language learners. Exceptional Children 66(4): 454-470. https://doi.org/10.1177/001440290006600402
• Gifford F, Valdés G (2006) The linguistic isolation of Hispanic students in California’s public schools: The challenge of reintegration. The Annual Yearbook of the National Society for the Study of Reintegration: 125-154.
• Gitlin A, Buendía E, Crosland K, Doumbia F (2003) The production of margin and center: Welcoming-unwelcoming of immigrant students. American Educational Research Journal 40(1): 91-122. https://doi.org/10.3102/00028312040001091
• Mentoring and Tutoring by Students. Routledge, 330 pp. https://doi.org/10.4324/9780203761212
• Greene J (1998) A Meta-Analysis of the Effectiveness of Bilingual Education. Tomas Rivera Policy Institute, Claremont, CA.
• Gutiérrez KD, Larson J, Kreuter B (1995) Cultural tensions in the scripted classroom: The value of the subjugated perspective. Urban Education 29(4): 410-442. https://doi.org/10.1177/0042085995029004004
• Gutiérrez KD, Baquedeño-Lopez P, Asato J (2000) “English for the Children”: The new literacy of the old world order, language policy and educational reform. Bilingual Research Journal 24: 87-116. https://doi.org/10.1080/15235882.2000.10162753
• Harklau L (2000) From the ‘good kids’ to the ‘worst’: Representations of English language learners across educational settings. TESOL Quarterly 34(1): 35-67. https://doi.org/10.2307/3588096
• Harper CA, Jong EJ (2009) English language teacher expertise: The elephant in the room. Language and Education 23(2): 137-151. https://doi.org/10.1080/09500780802152788
• Hedges LV (2007) Effect sizes in cluster-randomized designs. Journal of Educational and Behavioral Statistics 34(4): 341-370.
https://doi.org/10.3102/1076998606298043 • Hertz-Lazarowitz R, Kirkus VB, Miller N (1992) An overview of the theoretical anatomy of cooperation in the classroom . In: Hertz-Lazarowitz R, Miller N (Eds) Interaction in Cooperative Groups: The Theoretical Anatomy of Group Learning . Cambridge University Press , New York . • Higgins JP, Thompson SG, Deeks JJ, Altman DG (2003) Measuring inconsistency in meta-analyses . BMJ 327 : 557 560 . https://doi.org/10.1136/bmj.327.7414.557 • Iddings AC, McCafferty SG (2007) Carnival in a mainstream kindergarten classroom: A Bakhtinian analysis of second language learner’s off-task behaviors . The Modern Language Journal 91 : 31 44 . https://doi.org/10.1111/j.1540-4781.2007.00508.x • Johnson DW, Maruyama G, Johnson R, Nelson D, Skon L (1981) The effects of cooperative, competitive, and individualistic goal structures on achievement: a meta-analysis . Psychological Bulletin 89 ( 1 ): 47 62 . https://doi.org/10.1037/0033-2909.89.1.47 • Johnson DW, Johnson RT, Stanne MB (2000) Cooperative learning methods: A meta-analysis . http://www.co-operation.org/pages/cl-methods.html. Accessed on: 2009-1-21. • Kamberelis G (1986) Emergent and Polyphonic Character of Voice in Adolescent Writing . Austin, Texas . • Kamberelis G (2001) Producing heteroglossic classroom (micro)cultures through hybrid discourse practice . Linguistics and Education 12 : 85 125 . https://doi.org/10.1016/S0898-5898(00)00044-9 • Keck C, Iberri-Shea G, Tracy-Ventura N, Wa-Mbaleka S (2006) . Synthesizing research on language learning and teaching 13 : 91 . https://doi.org/10.1075/lllt.13.08kec • Kluge D (1999) A brief introduction to cooperative learning . In: Kluge D, McGuire S, Johnson D, Johnson R (Eds) Cooperative Learning . Japan Association for Language Teaching , Tokyo , 16-22 pp. • Lantolf JP (2000) Second language learning as a mediated process . Language Teaching 33 : 79 96 . 
https://doi.org/10.1017/S0261444800015329 • Leki I (2001) “A narrow thinking system”: Nonnative-English-speaking students in group projects across the curriculum . TESOL Quarterly 35 ( 1 ): 39 67 . https://doi.org/10.2307/3587859 • Lensmire TJ (1998) Rewriting student voice . Journal of curriculum Studies 30 ( 3 ): 261 291 . https://doi.org/10.1080/002202798183611 • Lipsey MW, Wilson DB (2001) Practical Meta-Analysis . Sage Publications, Inc , CA . • Lipsey MW, Puzio K, Yun C, Hebert MA, Steinka-Fry K, Cole MW, Roberts M, Anthony KS, Busick MD (2012) Translating the Statistical Representation of the Effects of Education Interventions into More Readily Interpretable Forms. https://ies.ed.gov/ncser/pubs/20133000/pdf/20133000.pdf • Lipsky M (2010) Street-level Bureaucracy: Dilemmas of the Individual in Public Services . Russell Sage Foundation , New York . • Long MH (1981) Input, interaction, and second‐language acquisition . Annals of the New York Academy of Sciences 379 : 259 278 . https://doi.org/10.1111/j.1749-6632.1981.tb42014.x • Long MH (1996) The Role of the Linguistic Environment in Second Language Acquisition . Handbook of Second Language Acquisition . https://doi.org/10.1016/b978-012589042-7/50015-3 • Macedo D (1994) Literacies of power: What Americans are not allowed to know . Westview Press • Mackey A, Goo J (2007) Interaction research in SLA: A meta-analysis and research synthesis . Conversational Interaction in Second Language Acquisition: A Collection of Empirical Studies . Oxford University Press , New York . • Mathews RS, Cooper JL, Davidson N, Hawkes P (1995) Building bridges between cooperative and collaborative learning . Change 27 ( 4 ): 34 40 . • Maxwell-Jolly J (2000) Factors influencing implementation of mandated policy change: Proposition 227 in seven northern California school districts . Bilingual Research Journal 24 ( 1 ): 37 56 . 
https://doi.org/10.1080/15235882.2000.10162750 • McKeon D (2005) Research Talking Points on English Language Learners . http://www.nea.org/home/13598.htm • Menken K, Antunez B (2001) An overview of the preparation and certification of teachers working with limited English proficient (LEP) students . Washington DC • Moll LC, Amanti C, Neff D, Gonzales N (1992) Funds of Knowledge for teaching: Using a qualitative approach to connect homes and classrooms . Theory into Practice 31 ( 2 ): 132 141 . https://doi.org/10.1080/00405849209543534 • Morita N (2004) Negotiating participation and identity in second language academic communities . TESOL Quarterly 38 ( 4 ): 573 603 . https://doi.org/10.2307/3588281 • National Center for Education Statistics (2011) The Condition of Education 2011 . https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2011033 • Norton B (1997) Language, identity, and the ownership of English . TESOL Quarterly 31 ( 3 ): 409 429 . https://doi.org/10.2307/3587831 • Norton B, Toohey K (2001) Changing perspectives on good learners . TESOL Quarterly 35 ( 2 ): 307 322 . https://doi.org/10.2307/3587650 • Ogbu JU, Simons HD (1998) Voluntary and involuntary minorities: A cultural-ecological theory of school performance with some implications for education . Anthropology & Education Quarterly 29 ( 2 ): 155 188 . https://doi.org/10.1525/aeq.1998.29.2.155 • Oortwijn MB, Boekaerts M, Vedder P, Strijbos J (2008) Helping behavior during cooperative learning and learning gains: The role of the teacher and of pupils’ prior knowledge and ethnic background . Learning and Instruction 18 : 146 159 . https://doi.org/10.1016/j.learninstruc.2007.01.014 • Ovando CJ (2003) Bilingual education in the United States: Historical development and current issues . Bilingual Research Journal 27 ( 1 ): 1 24 . https://doi.org/10.1080/15235882.2003.10162589 • Oxford R (1997) Cooperative learning, collaborative learning, and interaction: Three communicative strands in the language classroom . 
The Modern Language Journal 81 ( 40 ): 456 . • Pavlenko A, Norton B (2007) Imagined communities, identity, and English language learning . In: Cummins J, Davies C (Eds) International Handbook of English Teaching . Springer , Dordrecht, Netherlands . • Pica T (1994) Research on Negotiation: What Does It Reveal About Second-Language Learning Conditions, Processes, and Outcomes? Language Learning 44 ( 3 ): 493 527 . https://doi.org/10.1111/j.1467-1770.1994.tb01115.x • Platt E, Harper C, Mendoza MB (2003) Dueling philosophies: Inclusion or Separation for Florida’s English language learners . TESOL Quarterly 37 ( 1 ): 105 133 . https://doi.org/10.2307/3588467 • Portes A, Rumbaut RG (2001) Legacies: The Story of the Immigrant Second Generation . The Russell Sage Foundation , New York . • Prior P (2001) Voices in text, mind, and society . Journal of Second Language Writing 10 : 55 81 . https://doi.org/10.1016/S1060-3743(00)00037-0 • Ramírez JD, Yuen SD, Ramey DR (1991) Final report: Longitudinal study of structured English immersion strategy, early-exit and late-exit transitional bilingual education programs for language-minority children. A technical report prepared for the United States Department of Education , Washington, DC . • Rohrbeck CA, Fantuzzo JW, Ginsberg-Block MD, Miller TR (2003) Peer-assisted learning interventions with elementary school students: A meta-analytic review . Journal of Educational Psychology 95 ( 2 ): 240 257 . https://doi.org/10.1037/0022-0663.95.2.240 • Rollinson P (2003) Using peer feedback in the ESL writing class . ELT Journal 59 ( 1 ): 23 30 . https://doi.org/10.1093/elt/cci003 • Rolstad K, Mahoney K, Glass GV (2005) The big picture: A meta-analysis of program effectiveness research on English language learners . Educational Policy 19 : 572 594 . 
https://doi.org/10.1177/0895904805278067 • Roseth CJ, Johnson DW, Johnson RT (2008) Promoting early adolescents’ achievement and peer relationships: The effects of cooperative, competitive, and individualistic goal structures . Psychological Bulletin 134 ( 2 ): 223 246 . https://doi.org/10.1037/0033-2909.134.2.223 • Rossell CH, Baker K (1996) The educational effectiveness of bilingual education . Research in the Teaching of English 30 ( 1 ): 7 74 . • Rubinstein-Avila E (2003) Conversing with Miguel: An adolescent English language learner struggling with later literacy development . 47 ( 4 ): 290 301 . • Ruiz VL (2001) South by southwest: Mexican Americans and segregated schooling, 1900-1950 . Magazine of History 15 : 23 27 . https://doi.org/10.1093/maghis/15.2.23 • Rumbaut RG, Portes A (2001) Ethnicities: Children of Immigrants in America . California: University of California Press • Sáenz LM, Fuchs LS, Fuchs D (2005) Peer-assisted learning strategies for English language learners with learning disabilities . Exceptional Children 71 ( 3 ): 231 247 . https://doi.org/10.1177/001440290507100302 • Schmid CL (2001) The Politics of Language: Conflict, Identity, and Cultural Pluralism in Comparative Perspective . Oxford University Press , New York . • Short D, Fitzsimmons S (2007) Double the work: Challenges and solutions to acquiring language and academic literacy for adolescent English language learners: A report to Carnegie Corporation of New York . https://www.carnegie.org/media/filer_public/bd/d8/bdd80ac7-fb48-4b97-b082-df8c49320acb/ccny_report_2007_double.pdf • Slavin RE (1986) Best-evidence synthesis: An alternative to meta-analytic and traditional reviews . Educational Researcher 15 ( 9 ): 5 11 . https://doi.org/10.3102/0013189X015009005 • Slavin RE (1990) Achievement effects of ability grouping in secondary schools: A best evidence synthesis . Review of Educational Research 60 ( 3 ): 471 499 . 
https://doi.org/10.3102/00346543060003471 • Slavin RE (1996) Research on cooperative learning and achievement: What we know, what we need to know . Contemporary Educational Psychology 21 : 43 69 . https://doi.org/10.1006/ceps.1996.0004 • Slavin RE, Cooper R (1999) Improving intergroup relations: Lessons learned from cooperative learning programs . Journal of Social Issues 55 ( 4 ): 647 663 . https://doi.org/10.1111/0022-4537.00140 • Slavin RE, Cheung A (2005) A synthesis of research on language of reading instruction for English language learners . Review of Educational Research 75 ( 2 ): 247 284 . https://doi.org/10.3102/00346543075002247 • Slavin RE, Madden NA (2011) Measures inherent to treatments in program effectiveness reviews . Journal of Research on Educational Effectiveness 4 ( 4 ): 370 380 . https://doi.org/10.1080/19345747.2011.558986 • Sleeter C (2001) Preparing teachers for culturally diverse schools: Research and the overwhelming presence of whiteness . Journal of Teacher Education 52 : 94 106 . https://doi.org/10.1177/0022487101052002002 • Smith ML, Glass GV, Miller TI (1980) Benefits of psychotherapy . Johns Hopkins University , Baltimore, MD . • Solano-Flores G (2008) Who is given tests in what language by whom, when, and where? The need for probabilistic views of language in the testing of English language learners . Educational Researcher 37 : 189 199 . https://doi.org/10.3102/0013189X08319569 • Sterne JA, Egger M (2006) Regression Methods to Detect Publication and Other Bias in Meta-Analysis . Publication Bias in Meta-Analysis . https://doi.org/10.1002/0470870168.ch6 • Stritikus T, Garcia E (2000) Education of limited English proficient students in California schools: An assessment of the influence of Proposition 227 on selected teachers and classrooms . Bilingual Research Journal 24 : 75 85 . https://doi.org/10.1080/15235882.2000.10162752 • Swain M, Brook L, Tocalli-Beller A (2002) Peer-peer dialogue as a means of second language learning . 
Annual Review of Applied Linguistics 22 : 171 185 . • Talmy S (2004) Forever FOB: The cultural production of ESL in a high school . Pragmatics 14 : 149 172 . https://doi.org/10.1075/prag.14.2-3.03tal • Talmy S (2008) The cultural productions of the ESL student at Tradewinds High: Contingency, multidirectionality, and identity in L2 socialization . Applied Linguistics 29 ( 4 ): 619 644 . https://doi.org/10.1093/applin/amn011 • Than PT, Gillies R, Renshaw P (2008) Cooperative learning (CL) and academic achievement of Asian students: A true story . International Education Studies 1 ( 3 ): 82 88 . • Thomas WP, Collier VP (2004) The astounding effectiveness of dual language education for all . NABE Journal of Research and Practice 2 ( 1 ): 1 20 . • Tijerino A, Asato J (2002) The implementation of Proposition 227 in California schools: A critical analysis of the effect on teacher beliefs and classroom practices . Equity & Excellence in Education 35 ( 2 ): 108 118 . https://doi.org/10.1080/713845279 • Valdés G (2001) Learning and not learning English: Latino students in American schools . Teachers College Press , New York . • Valenzuela A (1999) Subtractive schooling: U.S. Mexican youth and the politics of caring . State University of New York Press , New York . • Voloshinov VN (1973) Marxism and the Philosophy of Language . Harvard University Press , Cambridge, MA . • Wiese A, García E (1998) The Bilingual Education Act: Language minority students and equal educational opportunity . Bilingual Research Journal 22 ( 1 ): 1 18 . https://doi.org/10.1080/15235882.1998.10668670 • Willig A (1985) A Meta-Analysis of Selected Studies on the Effectiveness of Bilingual Education . Review of Educational Research 55 ( 3 ): 269 . https://doi.org/10.2307/1170389 • Yoon B (2008) Uninvited Guests: The Influence of Teachers' Roles and Pedagogies on the Positioning of English Language Learners in the Regular Classroom . 
American Educational Research Journal 45 ( 2 ): 495 522 . https://doi.org/10.3102/0002831208316200

# Supplementary materials

Suppl. material 1: Coding forms
Authors: Mikel Cole
Data type: coding forms

Suppl. material 2: Included and Near-Miss Studies
Authors: Mikel Cole
Data type: List of references
Brief description: These are studies that were potentially relevant to the meta-analyses, but that were ultimately excluded during inclusion coding. Future researchers might find this list especially valuable.

# Endnotes

*1 English language learner is only one of many terms that refer to linguistically diverse students. Other terms, like limited-English proficient and language minority, carry deficiency-oriented or disempowering connotations.

*2 It is worth noting that “peer-mediated learning” is sometimes used to refer to a more specific subset of these approaches, especially when used with learning-disabled students (e.g. Dion et al. 2007).

*3 The important theoretical issues raised in this meta-analysis are largely distinct from the questions analyzed and synthesized in the Major Area Paper to which this comment refers. However, the idea that sociocultural theory might prove heuristically useful is explored in this paper. Thus, little explanation for this bias is given here, and readers are encouraged to examine the evidence that warrants this presumption.

*4 Nonetheless, this meta-analysis primarily employs the term effectiveness to emphasize the ability of peer-mediated approaches to improve outcomes for ELLs on discrete measures or instruments, even when those measures assess constructs like out-group relations.

*5 Notably, this is the same research that informed the historic Brown v Board decision that created the legal foundation for the desegregation, if not integration, of public schools in the United States.
*6 The authors actually report the inverse-variance adjustment for small samples as d+, but it is based on Hedges’ original work and is more commonly referred to as Hedges’ g; as such, figures are reported here as g.

*7 It is important to distinguish this assertion from a deficit view of ELLs. Asserting that English proficiency is a barrier to mainstream instruction is not intended to be equivalent to an assertion that ELLs are deficient learners. All ELLs come to school proficient in at least one language, and many are proficient in several. Rather, like the landmark ruling in Lau v Nichols, the assertion is intended to indicate that most instruction in the US is provided in English by monolingual, White teachers; and without affirmative efforts to make the curriculum accessible to ELLs, these students do not generally have a chance to succeed in most US classrooms.

*8 While theoretically distinct, the more individualistic and cognitive orientations (e.g., traditional second language acquisition interaction and cooperative learning) and the more socially oriented perspectives (e.g., sociocultural theory) share conceptual common ground. Thus, although the theoretical differences are acknowledged, the assertion of a conceptual common ground enables the inclusion of studies from all three theoretical orientations.

*9 While there was some discussion of second language and foreign language differences in the results, the authors report too few FL settings to make substantial claims.
# FHSST Physics/Atomic Nucleus/Nuclear Reactors

The Free High School Science Texts: A Textbook for High School Students Studying Physics

# Nuclear Reactors

Since the discovery of radioactivity it has been known that heavy nuclei release energy in the process of spontaneous decay. This process, however, is rather slow and cannot be influenced (sped up or slowed down) by humans, and therefore could not be effectively used for large-scale energy production. Nonetheless, it is ideal for feeding devices that must work autonomously in remote places for a long time and do not require much energy. For this, heat from the spontaneous decays can be converted into electric power in a radioisotope thermoelectric generator. These generators have been used to power space probes and some lighthouses built by Russian engineers. A much more effective way of using nuclear energy is based on another type of nuclear decay, which is considered next.

## Chain reaction

The discovery that opened up the era of nuclear energy was made in 1939 by the physicists O. Hahn, L. Meitner, F. Strassmann, and O. Frisch. They found that a uranium nucleus, after absorbing a neutron, splits into two fragments. This was not a spontaneous but an induced fission,

$$n + {}^{235}_{92}\mathrm{U} \longrightarrow {}^{140}_{54}\mathrm{Xe} + {}^{94}_{38}\mathrm{Sr} + n + n + 185\ \mathrm{MeV} \qquad (15.4)$$

that released about 185 MeV of energy as well as two neutrons, which could cause similar reactions in surrounding nuclei.
The fact that instead of one initial neutron, in the reaction (15.4) we obtain two neutrons, is crucial. This makes possible the so-called chain reaction, shown schematically in Fig. 15.4. In such a process, one neutron breaks one heavy nucleus, the two released neutrons break two more heavy nuclei and produce four neutrons which, in turn, can break another four nuclei, and so on. This process develops extremely fast. In a split second a huge amount of energy can be released, which means an explosion. In fact, this is how the so-called atomic bomb works. Can we control the development of the chain reaction? Yes we can! This is done in nuclear reactors that produce energy for our use. How can it be done?

## Critical mass

First of all, if the piece of material containing fissile nuclei is too small, some neutrons may reach its surface and escape without causing further fissions. For each type of fissile material there is therefore a minimal mass of a sample that can support an explosive chain reaction. It is called the critical mass. For example, the critical mass of ${}^{235}_{92}\mathrm{U}$ is approximately 50 kg. If the mass is below the critical value, a nuclear explosion is not possible, but energy is still released and the sample becomes hot. The closer the mass is to its critical value, the more energy is released and the more intense the neutron radiation from the sample becomes.

The criticality of a sample (i.e. its closeness to the critical state) can be reduced by changing its geometry (making its surface bigger) or by putting inside it some other material (boron or cadmium) that is able to absorb neutrons. On the other hand, the criticality can be increased by putting neutron reflectors around the sample. These reflectors work like mirrors from which the escaped neutrons bounce back into the sample. Thus, by moving the absorbing material and reflectors in and out, we can keep the sample close to the critical state.
## How a nuclear reactor works

In a typical nuclear reactor, the fuel is not in one piece, but in the form of several hundred vertical rods, like a brush. Another system of rods that contain a neutron-absorbing material (control rods) can move up and down between the fuel rods. When fully inserted, the control rods absorb so many neutrons that the reactor is shut down. To start the reactor, the operator gradually moves the control rods up. In an emergency they are dropped down automatically.

To collect the energy, water flows through the reactor core. It becomes extremely hot and goes to a steam generator. There, the heat passes to water in a secondary circuit, which becomes steam used outside the reactor enclosure to rotate turbines that generate electricity.

## Nuclear power in South Africa

By 2004 South Africa had only one commercial nuclear reactor supplying power to the national grid. It operates at Koeberg, located 30 km north of Cape Town. A small research reactor was also operated at Pelindaba as part of the nuclear weapons program, but was dismantled.

Koeberg Nuclear Power Station is a uranium pressurized water reactor (PWR). In such a reactor, the primary coolant loop is pressurized so the water does not boil, and heat exchangers, called steam generators, are used to transmit heat to a secondary coolant which is allowed to boil to produce steam. To remove as much heat as possible, the water temperature in the primary loop is allowed to rise to about 300 °C, which requires a pressure of about 150 atmospheres (to keep the water from boiling). The Koeberg power station has the largest turbine generators in the southern hemisphere and produces 1800 megawatts of electrical power. Construction of Koeberg began in 1976 and its two units were commissioned in 1984-1985. Since then, the plant has been in more or less continuous operation and there have been no serious incidents.
Eskom, which operates this power station, may be the current technology leader. It is developing a new type of nuclear reactor, the pebble bed modular reactor (PBMR). In contrast to traditional nuclear reactors, in this new type of reactor the fuel is not assembled in the form of rods. The uranium, thorium or plutonium fuel is in oxide (ceramic) form, contained within spherical pebbles made of pyrolytic graphite. The pebbles, each about the size of a tennis ball, are held in a bin or can. An inert gas (helium, nitrogen or carbon dioxide) circulates through the spaces between the fuel pebbles and carries heat away from the reactor. Ideally, the heated gas is run directly through a turbine. However, since the gas in the primary coolant loop can be made radioactive by the neutrons in the reactor, it is usually brought to a heat exchanger, where it heats another gas, or steam.

The primary advantage of pebble bed reactors is that they can be designed to be inherently safe. When a pebble bed reactor gets hotter, the more rapid motion of the atoms in the fuel increases the probability of neutron capture by ${}^{238}_{92}\mathrm{U}$ nuclei through an effect known as Doppler broadening. This isotope does not split up after capturing a neutron. This reduces the number of neutrons available to cause ${}^{235}_{92}\mathrm{U}$ fission, reducing the power output of the reactor. This natural negative feedback places an inherent upper limit on the temperature of the fuel without any operator intervention. The reactor is cooled by an inert, fireproof gas, so it cannot have a steam explosion as a water reactor can. A pebble bed reactor can thus have all of its supporting machinery fail, and the reactor will not crack, melt, explode or spew hazardous wastes. It simply goes up to a designed "idle" temperature and stays there. In that state, the reactor vessel radiates heat, but the vessel and fuel spheres remain intact and undamaged.
The machinery can then be repaired or the fuel removed. A large advantage of the pebble bed reactor over a conventional water reactor is that it operates at higher temperatures. The reactor can directly heat fluids for low-pressure gas turbines. The high temperatures permit systems to extract more mechanical energy from the same amount of thermal energy. Another advantage is that pebbles for different fuels might be used in the same basic reactor design (though perhaps not at the same time). Proponents claim that some kinds of pebble bed reactors should be able to use thorium, plutonium and natural unenriched uranium, as well as the customary enriched uranium. One project in progress aims to develop pebbles and reactors that use the plutonium from surplus or expired nuclear explosives.

On June 25, 2003, South Africa's Department of Environmental Affairs and Tourism approved Eskom's prototype 110 MW pebble bed modular reactor for Koeberg. Eskom also has approval for a pebble-bed fuel production plant at Pelindaba. The uranium for this fuel is to be imported from Russia. If the trial is successful, Eskom says it will build up to ten local PBMR plants on South Africa's seacoast. Eskom also wants to export up to 20 PBMR plants per year. The estimated export revenue is 8 billion rand a year, and the program could employ about 57,000 people.
# Exercise 1 - Binary Tree Implementation

The binary tree is one of the simplest data structures in computer science, and the ideas it uses are very useful. It stores sortable data and boasts an optimal runtime of O(log n) for searching, adding, and removing elements. However, this performance depends heavily on the order in which elements are added or removed (in the worst case, the tree degenerates into a list and operations take O(n)), which limits its use to academic discussion.

## The Theory

A binary tree consists of many nodes that are linked together. Each node has a parent node (its predecessor) and up to two child nodes. A node that has no children is called a leaf. In a rooted binary tree, one node is specified as the root, meaning it has no parent. In the diagram below, node A is the parent of nodes B and C. Likewise, B is the parent of D and E. A is the root, and D, E, F and G are leaves.

The binary tree is a recursive data structure. Each node can have 0-2 children and 1 parent. We can limit ourselves to looking at a specific subtree of the original binary tree without worrying too much about the entire tree as a whole, and that subtree is a valid binary tree on its own.

### Using the Binary Tree

We can use a binary tree to store information about a list’s ordering. Each node can store one value, and its children must be ordered as follows:

• The left child must have a smaller value than its parent.
• The right child must have a larger value than its parent.
• Duplicate values can be stored on the left or the right; however, it is important to ensure that duplicates are stored in a consistent fashion. So if duplicates are stored on the left, they are only stored on the left, and likewise for the right.

The following diagram shows an example binary tree. Notice that the left children are all smaller than their parents, while the right children are larger. Besides this tree ordering property, we can see that there is no strict requirement on the shape of the tree.
### Adding to a Binary Tree

To add an element, we need to find where it fits in the tree. To do this, we perform a tree traversal: we move from node to node until we find a “spot” for the element we want to add. First, we start at the root. We then compare the value at the root with the element to add. If the element is larger, we move to the right child; otherwise, we move to the left. We repeat this process until we find a node that can be the new element’s parent. The diagram below illustrates adding 7 to a binary tree.

1. In step 1 (blue), we compare 10 and 7. Since 7 < 10, we proceed down the left child.
2. In step 2 (green), we compare 5 and 7. 7 > 5, so we proceed down its right child, only to find that 5 has no right child! Thus, we can insert 7 into that spot.

### Removing from a Binary Tree

Removing an element is a bit trickier. We first need to find the element that we are removing. However, once we remove it, we need to fill in the “hole” that we’ve made in the tree. We can’t just fill in the hole with any element; we need to maintain the binary tree ordering property. A convenient element to take is the leftmost (smallest) element of the hole’s right subtree. The diagram below shows how to remove elements in several cases. Dotted lines indicate that the connection may or may not exist. So in case 2, for instance, the blue parent might not exist if the node to remove is the root of the tree.

• In the first case, the node has no children - we can safely remove it with no issues.
• In the second case, the node has 1 child on the left or the right. We can slide the child up to this node’s former spot. This works for both the left and right side.
• In the third case, the node has 2 children. There are a few ways to go about this, but the way we’ll use is to take the smallest element of the right subtree and insert it into the “hole” that we’ll make.
If that element has a right child (the green node), we need to slide that node up, so that its former parent (orange) becomes that child's parent. The third case is tricky to get correct because of the number of edge cases that exist. For instance, the smallest value of the right subtree could be the right child itself. Or, the minimum node might have no right child of its own.

## The Implementation

On the Nuevo team, we've created an implementation for the binary tree. However, the programmer was sloppy and didn't check their work, so there are errors and bugs! For this exercise, you will fix those bugs and errors. Your goal is to have all tests pass.

• To debug the code, you can use the command make debug. This will regenerate the debug files needed in the debug/ directory and run gdb for you.
• To use valgrind, you can run the command make valgrind. This will recompile your code and run valgrind with the appropriate arguments.
• To test the code, you can click on the green "run" button, or use the command make test.

Let's take a look at what the existing code is doing. First, the binary tree data structure is defined in the binary_tree.h file. It can be referenced as a type called BinaryTree. The data is stored within a type called a BTNode, which represents a binary tree node. The tree itself contains a sentinel node, which makes other tree operations easier to handle. To get the actual root of the tree, we need to reference the left child of the sentinel. Thus, the root's parent is the sentinel node, rather than NULL. Each node is allocated on the heap using malloc, so you'll need to make sure that there are no memory leaks!

Tree operations are defined in binary_tree.h too. If you are unsure of what a tree operation does, make sure to read its description there. We'll also include some reference pictures below. Once again, the implementation does not need to handle duplicates within the tree. In addition, it does not need to implement any of the tree print functions. Good luck!
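As a reference sketch of the traversal described in "Adding to a Binary Tree" and of the smallest-of-the-right-subtree helper used in removal case 3 - this assumes a minimal node layout with an int value and left/right/parent pointers, not the workshop's actual binary_tree.h API, and it ignores duplicates, matching the note above:

```c
#include <stdlib.h>

typedef struct BTNode {
    int value;
    struct BTNode *parent, *left, *right;
} BTNode;

/* Walk down from the root, going left for smaller values and right for
 * larger ones, until an empty spot for the new node is found.
 * Returns 1 on success, 0 on allocation failure or duplicate. */
int bt_insert(BTNode **root, int value) {
    BTNode *n = malloc(sizeof *n);
    if (n == NULL) return 0;
    n->value = value;
    n->parent = n->left = n->right = NULL;

    if (*root == NULL) {          /* empty tree: new node becomes the root */
        *root = n;
        return 1;
    }
    BTNode *cur = *root;
    for (;;) {
        if (value < cur->value) {
            if (cur->left == NULL) { cur->left = n; n->parent = cur; return 1; }
            cur = cur->left;
        } else if (value > cur->value) {
            if (cur->right == NULL) { cur->right = n; n->parent = cur; return 1; }
            cur = cur->right;
        } else {                  /* duplicate: not handled, per the exercise */
            free(n);
            return 0;
        }
    }
}

/* Smallest element of a subtree: keep following left children.
 * This is the node used to fill the "hole" in removal case 3. */
BTNode *bt_min(BTNode *node) {
    while (node != NULL && node->left != NULL)
        node = node->left;
    return node;
}
```

Running the insertions 10, 5, 7 reproduces the diagram from the adding section: 7 ends up as the right child of 5, and bt_min of the whole tree is 5.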
https://www.sparrho.com/item/reversible-filters/8ef678/
# Reversible filters

Research paper by Alan Dow, Rodrigo Hernández-Gutiérrez

Indexed on: 03 Mar '16. Published on: 03 Mar '16. Published in: Mathematics - General Topology

#### Abstract

A space is reversible if every continuous bijection of the space onto itself is a homeomorphism. In this paper we study the question of which countable spaces with a unique non-isolated point are reversible. By Stone duality, these spaces correspond to closed subsets in the Čech-Stone compactification of the natural numbers $\beta\omega$. From this, the following natural problem arises: given a space $X$ that is embeddable in $\beta\omega$, is it possible to embed $X$ in such a way that the associated filter of neighborhoods defines a reversible (or non-reversible) space? We give the solution to this problem in some cases. It is especially interesting whether the image of the required embedding is a weak $P$-set.
https://www.physicsoverflow.org/6561/generalisations-of-ads-cft-with-string-theory-on-both-sides
# Generalisations of AdS/CFT with string theory on both sides

From my previous post, I found out from the comments that there are various generalisations of AdS/CFT with different things replacing the CFT on the RHS, such as AdS/CMT and AdS/QCD, and also with the AdS replaced on the LHS, like Kerr/CFT, a hydrodynamic dual, etc.

I am thus led to ask, "Is there a generalisation of AdS/CFT with string theories on both sides?"

I can think of at least one example of a (holographic?) equivalence between a $D$-dimensional string theory and a $D+1$-dimensional string theory: T-duality. E.g. the Type I string theory and the Type I' string theory, etc.
http://clay6.com/qa/74941/a-capacitor-of-4-mu-f-is-connected-as-shown-in-the-figure-the-internal-resi
Q) # A capacitor of $4\;\mu F$ is connected as shown in the figure. The internal resistance of the battery is $5.0\;\Omega$. The amount of charge on the capacitor plates will be

$\begin{array}{c} 0 \\ 4\;\mu C \\ 16\;\mu C \\ 8\;\mu C \end{array}$

## 1 Answer

A) Solution: The amount of charge on the capacitor plates will be $8\;\mu C$.
http://mathhelpforum.com/calculus/140188-length-line-print.html
# length of line

• Apr 19th 2010, 07:42 PM
gralla55

Calculate the length of the line segment determined by the path x(t) = (a1t + b1, a2t + b2) as t varies from t0 to t1.

It should be really easy, I'm just not quite sure what is what in this formula:

$L = \int_a^b \sqrt{1+ (f'(x))^2}\, dx$

Thanks!

• Apr 20th 2010, 03:54 AM
HallsofIvy

Quote:

Originally Posted by gralla55
Calculate the length of the line segment determined by the path x(t) = (a1t + b1, a2t + b2) as t varies from t0 to t1. It should be really easy, I'm just not quite sure what is what in this formula: $L = \int_a^b \sqrt{1+ (f'(x))^2}\, dx$ Thanks!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.939507007598877, "perplexity": 4971.777416872664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321458.47/warc/CC-MAIN-20170627152510-20170627172510-00583.warc.gz"}
https://sea-man.org/tank-instrumentation.html
. Site categories # Cargo Tank Instrumentation on Gas Tankers Each cargo tank is to be provided with tank instrumentation to show pressure and temperature of the cargo. Pressure gauges and temperature indicating devices shall be installed in the liquid and vapour piping systems, in cargo refrigerating installations and in the inert gas system as detailed in this article. ## Cargo Tank Instrumentation Where a secondary barrier is required, permanently installed instrumentation shall be provided to detect when the primary barrier fails to be liquid-tight at any location or when liquid cargo is in contact with the secondary barrier at any location. This tank instrumentation shall be appropriate gas detecting devices according to “Gas detection requirements “. However, the tank instrumentation need not be capable of locating the area where liquid cargo leaks through the primary barrier or where liquid cargo is in contact with the secondary barrier. Upon special approval appropriate temperature indicating devices may be accepted by the Society instead of gas detecting devices when the cargo temperature is not lower than -55 °C. If the loading and unloading of the ship is performed by means of remotely controlled valves and pumps, all controls and indicators associated with a given cargo tank shall be concentrated in one control position. Tank instruments are to be tested to ensure reliability in the working conditions and recalibrated at regular intervals. Testing procedures for tank instruments and the intervals between recalibration are to be approved by the Society. ## Level indicators for cargo tanks Each cargo tank shall be fitted with at least one liquid level gauging device, designed to operate at pressures not less than the MARVS of the cargo tank and at temperatures within the cargo operating temperature range. 
Where only one liquid level gauge is fitted it is to be arranged so that any necessary maintenance can be carried out while the cargo tank is in service. In order to assess whether or not one level gauge is acceptable in relation to above “any necessary maintenance” means that any part of the level gauge can be overhauled while the cargo tank is in service. Cargo tank liquid level gauges shall be of the following types subject to any special requirement for particular cargoes shown in column “g” in the table of “Minimum requirements”Summary of Minimum Requirements for LNG and LPG tankers: 1. indirect devices, which determine the amount of cargo by means such as weighing or pipe flow meters; 2. closed devices, which do not penetrate the cargo tank, such as devices using radio-isotopes or ultrasonic devices; 3. closed devices, which penetrate the cargo tank, but which form part of a closed system and keep the cargo from being released, such as float type systems, electronic probes, magnetic probes and bubble tube indicators. If a closed gauging device is not mounted directly on the tank it is to be provided with a shut-off valve located as close as possible to the tank; 4. restricted devices, which penetrate the tank and when in use permit a small quantity of cargo vapour or liquid to escape to the atmosphere, such as fixed tube and slip tube gauges. When not in use, the devices shall be kept completely closed. The design and installation must ensure that no dangerous escape of cargo can take place when opening the device. Such gauging devices shall be so designed that the maximum opening does not exceed 1,5 mm diameter or equivalent area, unless the device is provided with an excess flow valve. Sighting ports with a suitable protective cover and situated above the liquid level with an internal scale may be allowed by the Society as a secondary means of gauging for cargo tanks which are designed for a design vapour pressure Po not higher than 0,7 bar. 
Tubular gauge glasses shall not be fitted. Gauge glasses of the robust type as fitted on high pressure boilers and fitted with excess flow valves may be allowed by the Society for deck tanks, subject to any provisions of “Special requirements”Special Requirements for LNG and LPG gas carriers. ## Overflow control Except as provided below, each cargo tank is to be fitted with a high liquid level alarm operating independently of other liquid level indicators and giving an audible and visual warning when activated. Another sensor operating independently of the high liquid level alarm is to automatically actuate a shut-off in a manner which will both avoid excessive liquid pressure in the loading line and prevent the tank from becoming liquid full. The emergency shutdown valve referred to in “Piping System of pressure vessels on gas tankers”Cargo system valving requirements may be used for this purpose. If another valve is used for this purpose, the same information as referred to in “Piping System of pressure vessels on gas tankers”Cargo system valving requirements shall be available on board. During loading, whenever the use of these valves may possibly create a potential excess pressure surge in the loading system, BKI and the Port State Authority may agree to alternative arrangements such as limiting the loading rate, etc. The closing time of the closing valve is to be changeable. The sensor for automatic closing of the loading valve for overflow control as required above may be combined with the liquid level indicators required by “Level indicators for cargo tanks”. A high liquid level alarm and automatic shut- off of cargo tank filling need not be required when the cargo tank: 1. is a pressure tank with a volume not more than 200 m3; 2. or is designed to withstand the maximum possible pressure during the loading operation and such pressure is below that of the start-to-discharge pressure of the cargo tank relief valve. 
Electrical circuits, if any, of level alarms are to be capable of being tested prior to loading. ## Pressure gauges The vapour space of each cargo tank is to be provided with a pressure gauge which shall incorporate an indicator in the cargo control position as required above. In addition, a high pressure alarm and, if vacuum protection is required, a low pressure alarm, are to be provided on the navigating bridge and at the cargo control position. Maximum and minimum allowable pressures are to be marked on the indicators. For cargo tanks fitted with pressure relief valves, which can be set at more than one set pressure in accordance with “Cargo Temperature Control and Cargo Vent Systems”Pressure relief systems, high pressure alarms are to be provided for each set pressure. Common read-outs of the pressure sensors on the bridge are acceptable. Each cargo pump discharge line and each liquid and vapour cargo manifold is to be provided with at least one pressure gauge. Local reading manifold pressure gauges are to be provided to indicate the pressure between stop valves and hose connections to the shore. Hold spaces and inter barrier spaces without open connection to the atmosphere are to be provided with pressure gauges. ## Temperature indicating devices Each cargo tank is to be provided with at least two devices for indicating cargo temperatures, one placed at the bottom of the cargo tank and the second near the top of the tank, below the highest allowable liquid level. The temperature indicating devices are to be marked to show the lowest temperature for which the cargo tank has been approved by the Society. When cargo is carried in a cargo containment system with a secondary barrier at a temperature lower than -55 °C, temperature indicating devices are to be provided within the insulation or on the hull structure adjacent to cargo containment systems. 
The devices shall give readings at regular intervals and, where applicable, audible warning of temperatures approaching the lowest for which the hull steel is suitable. These alarms are to be at the cargo control position according to beginning of this article and on the navigating bridge. If cargo is to be carried at temperatures lower than -55 °C, the cargo tank boundaries, if appropri-ate for the design of the cargo containment system, shall be fitted with temperature indicating devices as follows: 1. A sufficient number of devices to establish that an unsatisfactory temperature gradient does not occur. 2. On one tank a number of devices in excess of those required above in order to verify that the initial cool down procedure is satisfactory. These devices may be either temporary or permanent. When a series of similar ships are built, the second and successive ships need not comply with the requirements of this sub-paragraph. The number and position of temperature indicating devices shall be to the satisfaction of the Society. Common read-outs of the temperature sensors on the navigating bridge are acceptable. ## Gas detection requirements Gas detection equipment acceptable to the Society and suitable for the products to be carried shall be provided in accordance with column “f” in the table of “Minimum requirements”Summary of Minimum Requirements for LNG and LPG tankers. In every installation, the positions of fixed sampling heads are to be determined with due regard to the density of the vapours of the products intended to be carried and the dilution resulting from compartment purging or ventilation. Pipe runs from sampling heads shall not be lead through gas safe spaces except as permitted below. Audible and visual alarms from the gas detection equipment, if required by this article, are to be located on the navigating bridge, in the cargo control position required by this, and at the gas detector readout location. 
Gas detection equipment may be located in the cargo control station required by this, on the navigating bridge or at other suitable locations. When such equipment is located in a gas safe space the following conditions shall be met: 1. gas-sampling lines shall have shut-off valves or an equivalent arrangement to prevent cross-communication with gas-dangerous spaces; 2. and exhaust gas from the detector is to be discharged to the atmosphere in a safe location. Gas detection equipment is to be so designed that it may readily be tested. Testing and calibration shall be carried out at regular intervals. Suitable equipment and span gas for this purpose is to be carried on board. Where practicable, permanent connections for such equipment are to be fitted. A permanently installed system of gas detection and audible and visual alarms shall be provided for: 1. cargo pump rooms; 2. cargo compressor rooms; 3. motor rooms for cargo handling machinery; 4. cargo control rooms unless designated as gas safe; 5. other enclosed spaces in the cargo area where vapour may accumulate including hold spaces and interbarrier spaces for independent tanks other than type C; 6. ventilation hoods and gas ducts where required by “Cargo LNG as fuel”Use of Cargo as Fuel on Gas Tankers; 7. and air locks. Also for hold spaces containing type C cargo tanks a permanently installed gas detection system is recommended. The gas detection equipment shall be capable of sampling and analyzing from each sampling head location sequentially at intervals not exceeding 30 minutes, except that in the case of gas detection for the ventilation hoods and gas ducts referred to in list above sampling is to be continuous. Common sampling lines to the detection equipment shall not be fitted. 
In the case of products which are toxic or toxic and flammable, the Administration may authorize the use of portable equipment except when column “i” in the table of “Minimum requirements”Summary of Minimum Requirements for LNG and LPG tankers refers to “Permanently installed toxic gas detectors”Special Requirements for LNG and LPG gas carriers for toxic detection as an alternative to a permanently installed system if used before entry of the spaces listed above by personnel and thereafter at 30 minute intervals while they remain therein. For the spaces listed above, alarms are to be activated for flammable products when the vapour concentration reaches 30 % of the lower flammable limit. In the case of flammable products, where cargo containment systems other than independent tanks are used, hold spaces and/or inter barrier spaces are to be provided with a permanently installed system of gas detection capable of measuring gas concentrations of 0 to 100 % by volume. The detection equipment, equipped with audible and visual alarms, shall be capable of monitoring from each sampling head location sequentially at intervals not exceeding 30 minutes. Alarms should be activated when the vapour concentration reaches the equivalent of 30 % of the lower flammable limit in air or such other limit as may be approved by the Society in the light of particular cargo containment arrangements. Common sampling lines to the detection equipment shall not be fitted. With reference to the requirements of paragraphs above commutation of sampling shall be carried out close to the detection cell. In the case of toxic gases, hold spaces and/or inter barrier spaces are to be provided with a permanently installed piping system for obtaining gas samples from the spaces. 
Gas from these spaces shall be sampled and analysed from each sampling head location by means of fixed or portable equipment at intervals not exceeding 4 hours and in any event before personnel enter the space and at 30 minute intervals whilst they remain therein. Every ship should be provided with at least two sets of portable gas detection equipment accept-able to the Administration and suitable for the products to be carried. A suitable instrument for the measurement of oxygen levels in inert atmospheres is to be provided. Footnotes Did you find mistake? Highlight and press CTRL+Enter Февраль, 03, 2021 1950 0 Notes Text copied Favorite articles • Список избранных статей пуст. Here will store all articles, what you marked as "Favorite". Articles store in cookies, so don't remove it. $${}$$
https://www.mis.mpg.de/publications/preprints/2008/prepr2008-8.html
# Preprint 8/2008

## On the Generative Nature of Prediction

### Wolfgang Löhr and Nihat Ay

Contact the author: Please use this email for correspondence.

Submission date: 04. Feb. 2008 (revised version: September 2008)

Pages: 26

Published in: Advances in complex systems, 12 (2009) 2, p. 169-194

DOI number (of the published article): 10.1142/S0219525909002143

Bibtex

Keywords and phrases: hidden Markov models, computational mechanics, $\varepsilon$-machines, observable operator models, prediction

Abstract:
Given an observed stochastic process, computational mechanics provides an explicit and efficient method of constructing a minimal hidden Markov model within the class of maximally predictive models. Here, the corresponding so-called $\varepsilon$-machine encodes the mechanisms of prediction. We propose an alternative notion of predictive models in terms of a hidden Markov model capable of generating the underlying stochastic process. A comparison of these two notions of prediction reveals that our approach is less restrictive and thereby allows for predictive models that are more concise than the $\varepsilon$-machine.
http://codereview.stackexchange.com/questions/17818/objective-c-retain-release-snippet
# Objective-C retain / release snippet

Here are some snippets I coded and I would like some feedback on my way of handling this:

I have a utility class, as a singleton, that provides me with a method named randomColor which returns a (UIColor*):

    -(UIColor*)randomColor
    {
        float l_fRandomRedColor = [[MathUtility instance] randomFloatNumberBetweenNumber:0.0f AndNumber:1.0f];
        float l_fRandomBlueColor = [[MathUtility instance] randomFloatNumberBetweenNumber:0.0f AndNumber:1.0f];
        float l_fRandomGreenColor = [[MathUtility instance] randomFloatNumberBetweenNumber:0.0f AndNumber:1.0f];

        return [UIColor colorWithRed:l_fRandomRedColor green:l_fRandomGreenColor blue:l_fRandomBlueColor alpha:255];
    }

Now, I know the object returned by this method is autoreleased. I store the returned value in another class and I would like to keep this value for a while, so I proceed like this:

    [self setMpCurrentPieceColor:[[GraphicsUtility instance] randomColor]];

which calls the following setter:

    - (void)setMpCurrentPieceColor:(UIColor*)_newColor
    {
        [_mpCurrentPieceColor release];
        [_newColor retain];
        // Make the new assignment.
        _mpCurrentPieceColor = _newColor;
    }

Question A) My instance variable is released in the dealloc method. Is that the correct way to go?

Question B) Now, let's imagine I have an array, like this:

    UIColor* mBoardColors[WIDTH][HEIGHT];

I want to store the previous instance variable into the array:

    [mBoardColors[l_iBoardXIndex][l_iBoardYIndex] release];
    [_color retain];
    mBoardColors[l_iBoardXIndex][l_iBoardYIndex] = _color;

Is it correct?

Question C) What if I want to move a color from a cell to another (moving, not copying), is it correct to do it like that?
    [mBoardColors[l_iBoardXIndex][l_iBoardYIndex] release];
    mBoardColors[l_iBoardXIndex][l_iBoardYIndex] = mBoardColors[l_iBoardXIndex][l_iBoardYIndex - 1];
    mBoardColors[l_iBoardXIndex][l_iBoardYIndex - 1] = nil;

-

Alright, there's a number of simplifications to the Utility Class I'd like to make before I continue: Your abstraction over arc4random (at least that's what I hope it is) is too expensive to not just use the corresponding C code directly. You could even make your own function in C and use the inline qualifier for faster code than calling out to a singleton 3 times for every color. Also, it is never good to name an Objective-C method "instance". Singletons usually use a naming convention that makes sense with their overall implementation (e.g. NSFileManager.defaultManager).

The following will generate a random double for you between 0 and 1. Not only is it cleaner, but it's cheaper.

    #define ARC4RANDOM_MAX 0x100000000
    float l_fRandomRedColor = ((double)arc4random() / ARC4RANDOM_MAX);

*As an aside, it's also a good idea to use an alpha value of 1.0f, instead of 255, because alpha is also measured on a 0-1 scale.

Question A) My instance variable is released in the dealloc method. Is that the correct way to go?

The rule in ObjC is "You own it, you destroy it." Because mpCurrentPieceColor is an explicitly retain'ed property, you own it. If you were not to destroy mpCurrentPieceColor in -dealloc, then your class would retain a valid reference to it, and it simply would not go away (memory leak).

*As an aside, switching to ARC will alleviate all of these headaches.

Question B) Now, let's imagine I have an array, like this: UIColor* mBoardColors[WIDTH][HEIGHT];...

So long as you initialize the members of the multi-dimensional array, this is correct, but definitely not optimal.
Objective-C has its own array objects, which free you from having to make these ridiculous memory management decisions, and a dynamic, expandable multidimensional array is as simple as nesting NS(Mutable)Arrays:

```objc
[NSMutableArray arrayWithObjects:[NSArray array], [NSArray array], [NSArray array], [NSArray array], ..., nil];
```

Not only that, but if you were to decide to expand or contract the board at any time, you could add or remove objects at a whim, and, so long as you release any references to the objects you put into the array, when the main array is released, it will release the sub-arrays, and the sub-arrays will release their objects. It really is quite beautiful.

> Question C) What if I want to move a color from a cell to another (moving, not copying), is it correct to do it like that?

Another beauty of NSArrays is the ability to swap out objects into a new index of the array, so long as you don't exceed the array's bounds. See the "Replacing Objects" section of NSMutableArray's documentation. Again, as above, memory management is vastly simplified with an NSArray compared to dropping down to a multi-dimensional literal.

---

Thank you very much for your time and answer ! –  Andy M Nov 7 '12 at 18:08

---

Your setter looks a little unsafe and could lead to a crash in this case:

```objc
[self setMpCurrentPieceColor:_mpCurrentPieceColor];
```

A better version of the setter is the synthesized setter, but if you need to change the set behavior (add validation, update connected fields, etc.) you can use the following pattern:

```objc
- (void)setMpCurrentPieceColor:(UIColor *)newColor {
    if (_color == newColor) return; // No need to do anything if the same value is transferred
    // Update value
    [_color release];
    _color = newColor;
    [_color retain];
}
```

Anyway, better to use ARC and not fill your mind with memory management.

---

I'm new to iPhone development and I thought I couldn't use ARC with that kind of dev... I thought it was reserved for Mac applications... Thanks for your answer ! –  Andy M Nov 13 '12 at 10:49
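The self-assignment crash described in the last answer is easy to see with a toy model of manual reference counting. The sketch below is plain Python, not Objective-C runtime behavior, and the `RefCounted` class and both setter functions are invented for illustration; it only demonstrates why releasing the old value before retaining the new one is dangerous when both are the same object:

```python
class RefCounted:
    """Toy stand-in for an object under manual retain/release."""
    def __init__(self):
        self.count = 1      # the creator owns one reference
        self.alive = True

    def retain(self):
        assert self.alive, "message sent to a deallocated object"
        self.count += 1

    def release(self):
        assert self.alive, "over-release"
        self.count -= 1
        if self.count == 0:
            self.alive = False  # "dealloc"

def set_release_first(holder, new):
    """Mimics the question's setter: release the old value, then retain the new."""
    holder["color"].release()   # if new IS the old value and this was the last reference...
    new.retain()                # ...this now touches a deallocated object
    holder["color"] = new

def set_retain_first(holder, new):
    """Safe ordering: retain the incoming value before releasing the old one."""
    new.retain()
    holder["color"].release()
    holder["color"] = new

# Self-assignment blows up with the release-first ordering:
c = RefCounted()
try:
    set_release_first({"color": c}, c)
    crashed = False
except AssertionError:
    crashed = True
print(crashed)  # True

# ...but is harmless when the new value is retained first:
c2 = RefCounted()
set_retain_first({"color": c2}, c2)
print(c2.alive, c2.count)  # True 1
```

In real Objective-C the same fix is either the identity check shown in the answer's pattern or retaining the new value before releasing the old one.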
https://meteo.unican.es/trac/wiki/WRF4G2.0/WRF4GFiles?action=diff&version=16
Changes between Version 15 and Version 16 of WRF4G2.0/WRF4GFiles

Timestamp: Aug 26, 2015 6:20:23 PM (7 years ago)

v15:
Therefore, you can put your [wiki:WRF4G2.0/Preprocessor preprocessors] and [wiki:WRF4G2.0/Postprocessor postprocessors] under the bin directory, or you can add WPS and WRF configuration files (e.g. Variable_Tables) for the WRF model. This directory is packed and used on each WRF4G run realization.

v16:
Therefore, you can put your [wiki:WRF4G2.0/Preprocessor preprocessors] and [wiki:WRF4G2.0/Postprocessor postprocessors] under the bin directory, or you can add WPS and WRF configuration files (e.g. Variable_Tables, CAM_ABS_DATA, ETAMPNEW_DATA_DBL, etc.) for the WRF model. This directory is packed and used on each WRF4G run realization.
http://physics.stackexchange.com/questions/76356/implicit-postulate-of-quantum-mechanics
# Implicit Postulate of Quantum Mechanics

Consider the following quantum system: a particle in a one dimensional box (= infinite potential well). The energy eigenstates' wave functions all vanish outside the box. But the position eigenstates' wave functions don't all vanish outside the box. Each one of them is a delta function at a specific location, and some of these locations are outside the box. So it seems that there is no overlap between certain position eigenstates and all energy eigenstates. So the energy eigenstates don't span the whole Hilbert space! And these position states have zero probability for any energy outcome in a measurement!

Now, I know that when speaking about an infinite potential well, it is assumed the particle cannot be outside the well. But I don't see any reason to assume this from the postulates of quantum mechanics. Is there an implicit additional postulate that says: "The Hilbert space of the system is spanned by the Hamiltonian operator's eigenstates (and not other operators, such as position)"? Or is the infinite potential well just an ill-defined system because it contains infinities (just like the free particle...)?

---

Apart from the truncation-to-a-box issue, it is essentially a duplicate of this Phys.SE post in the sense that position eigenstates are not part of the Hilbert space. –  Qmechanic Sep 5 '13 at 18:14

@Qmechanic, I don't see why it is a duplicate. Can you explain please? –  Lior Sep 5 '13 at 18:24

@Qmechanic I think Lior is right. 'Apart from the core issue of the question...'. ;). –  Emilio Pisanty Sep 5 '13 at 21:26

P.S. I know my question might seem silly or not worth the discussion, because it is obvious what a working physicist must do when she encounters a particle in a box, but to me the fact that I need to do something that is not derived from postulates but rather based on "common sense" or "intuition" means that this model of reality (= physical theory) is incomplete.
–  Lior Sep 6 '13 at 12:03

The problem is that you're assuming that the postulates of quantum mechanics automatically assign systems a full position representation... whereas some systems (like a particle with spin) do not have such a representation.

The solution, then, is to look carefully at the postulates of quantum mechanics. There are a bunch of abstract ones - states are rays in Hilbert space, observables are hermitian operators, existence of a hamiltonian, normal unitary evolution under it, probabilities are expectation values, what happens with measurements, and so on - but none of those tell you which Hilbert space to use for which physical system, or what hermitian operators to use for your particular physical observables. For that, you first need a lot of physical intuition, and you follow a general recipe which goes more or less as

> If the system has a classical representation which includes a canonical symplectic structure with position and momentum coordinates defined on the whole real line, and a Poisson bracket which satisfies $\{x,p\}=1$, then assign a Hilbert space tensor factor of $L_2(\mathbb R)$ to each space dimension with position as the $x$ operator and momentum as such and such a derivative.

and which is known as canonical quantization. Note an important caveat in this recipe: it requires position to be defined on an unbounded interval. Because of von Neumann's representation theorem, postulating the canonical commutation relations $[x,p]=i\hbar$ automatically requires the spectrum of both to be $(-\infty,\infty)$. This is a very tricky point, and even Dirac stumbled with it: he proposed a quantum theory for the phase of a harmonic oscillator (The quantum theory of the emission and absorption of radiation. P.A.M. Dirac. Proc. R. Soc. Lond. A 114 no. 767, pp. 243–65 (1927)) which eventually proved to be fundamentally flawed. (A good source for why is probably R. Lynch, Phys. Rep. 256, 367 (1995), but Elsevier seems to be down at the moment.)
The bottom line of this is that you need to look at your classical system before you decide how you're going to quantize it. For a particle in an infinite well, does the classical system include the positions outside the well? If so, what's the potential there? It must be "very large", because "infinite" is not a valid value of an operator (i.e. $\hat V|x\rangle=\infty|x\rangle \notin \mathcal H$)... and then you're back in a finite well.

If your classical system does not include positions outside that box, then you need to be careful with what you want your quantum system to be. You definitely can't ask your quantum system to do more than your classical one, so position states outside the box should not form part of your Hilbert space. In one strike, this fixes your problem: energy eigenstates will span all of Hilbert space. You still need to decide what operators you need to use for momentum and energy, and physical intuition usually serves well there. However, if you want to know exactly why we do things like we do, then you should be looking at the classical system for guidance as to how to quantize.

As it happens, the classical system is not completely trouble-free, and any troubles you have quantizing you might have seen coming just from looking at the classical system! For an interesting take on this, I recommend the paper Classical symptoms of quantum illnesses. Chengjun Zhu and John R. Klauder. Am. J. Phys. 61 no. 7, p. 605 (1993). This includes a discussion, at the end of section III, of precisely this problem.

---

For a classical particle in a box, with finite but very high walls, there is no restriction at all for the particle to be outside the box. The outside-the-box states have equal rights as the in-the-box ones. Of course, a transition between them requires a lot of energy, as the classical equations of motion tell us. But these states in no way disappear in the limiting process of taking the potential to infinity.
So by your reasoning, when I quantize the system I should leave these states. –  Lior Sep 6 '13 at 7:47

@Lior: With an infinite potential the Hamiltonian is not defined on the whole Hilbert space (as an operator! - you can write it down, but <s|H|s> is not defined for many vectors/states in the full Hilbert space, which was OK for the system with finite potential). Point is: the limit process has no physical meaning: it's just a way to make educated guesses for the proper description of some systems. –  M.E.L. Sep 6 '13 at 13:24

Probably the best way to put it: there is no V(x) with V being infinite outside of the box. You simply cannot write it down in terms of standard analysis. So the limiting process has no meaning. The expression "infinite potential well" or something like this has also no meaning inside the mathematical formalism. It's just an informal way to talk about a physical interpretation of the situation we want to describe. The limit process also has no meaning: in the limit the operators do not converge. And that's it: there is no question left to be asked. –  M.E.L. Sep 6 '13 at 15:06

@Lior: Have you ever seen an infinite potential well? It's just about making a completely imaginary model universe (where things only exist in a box) in the (justified) hope that the behaviour described/calculated is somehow related to a group of more realistic systems you're interested in (e.g. deep potential wells). This hope stems mostly from an (informal) physical interpretation of the mathematical model. I'm almost sure one could complete the existing mathematical model (e.g. by way of non-standard analysis) to describe the limiting process, but I don't know of anybody who has done that. –  M.E.L. Sep 6 '13 at 19:26

I'm out of here now: it gets repetitive. But thanks for the interesting question. Made me think a bit about the process of creating physical models :-). –  M.E.L.
Sep 6 '13 at 19:28

This is just a question of choosing a basis for the vector (Hilbert) space: the set of position eigenstates is one basis, the set of energy eigenstates is another. They can be expressed in terms of each other (vectors/states from basis B1 can be expressed as a sum of vectors from basis B2). All in the theory of vector spaces...

OP said: "But the point of my question is that for the particle in the box, a certain position eigenstate (any one which has as its wave function a Dirac delta outside the box) cannot be expressed in terms of energy eigenstates"

Ah, you're right, now I understand your question. I think the point is that the well is infinite. For a finite potential well, you always have "unbound" higher energy states which can be used to sum up the eigenvectors outside of the box. In the case of the infinite well, those states have infinite energy (their energy goes to infinity if you let the well depth go to infinity). This way they sort of "vanish" from the model. But: if you accept the same for the position eigenstates (just dropping those outside the box) everything is OK again: you have a "universe" inside a box (a universe where the coordinate is by design limited to some interval) and again all position eigenstates can be used to build energy eigenstates and vice versa.

---

But the point of my question is that for the particle in the box, a certain position eigenstate (any one which has as its wave function a Dirac delta outside the box) cannot be expressed in terms of energy eigenstates. –  Lior Sep 5 '13 at 18:07

@Lior: I've put my response into my answer. That you have to throw away a part of the position eigenstates is IMHO due to the fact that [H,x] would not be right any more. –  M.E.L. Sep 5 '13 at 18:18

And why the down votes? –  M.E.L.
Sep 5 '13 at 18:20

I see how this limiting process of taking the potential to infinity makes the energy wave functions vanish outside the well, but I don't see how the postulates or the limiting process should make me abandon the position wave functions outside the well. –  Lior Sep 5 '13 at 18:33

Another way to look at this: (1) The axioms don't say you can get your Hilbert space from the finite case by some continuous transition. They state that there is a Hilbert space which describes the state of the physical system. In this case it's the space of all functions that are zero outside of the box. –  M.E.L. Sep 5 '13 at 19:57

My understanding is that in various problems in quantum mechanics the final step is to restrict the Hilbert space to physically permissible states. In this problem, such a restriction requires that the state is supported exclusively on the spatial interval in which the potential is finite. This would imply the resolution of your paradox is that the position eigenstates outside of this interval are not in the Hilbert space.

This is not the only example of such a restriction. In the harmonic oscillator there is a similar restriction: we limit our Hilbert space to states which can be eventually annihilated to the vacuum, and we reject those which can be lowered arbitrarily. Similarly, when quantising the vector field we find the non-physical degrees of freedom allow for states of zero norm; in order to recover physicality, and a theory which obeys the appropriate gauge conditions, we reject these.

---

This comment is more of a question (a serious one). Why are you using quantum mechanics, presumably the ultimate theory of physical reality (at least at small scales), to discuss a "problem" which is all about a quite fictitious thing, to wit, a "one dimensional box, or infinite potential well"? What purpose is there in discussing something never observed or known to not even exist?? Like I said, it's a serious question.
–  user26165 Sep 6 '13 at 19:43

I will try to answer, though if you are unsatisfied it may be useful to discuss further. If a theory contains a logical inconsistency, then it is not valid, and definitely not a theory of everything. While it is true the proof is in the pudding and the ultimate value of a theory is in its predictive power, we should pay attention to problems as they arise, as solving them may help guide developments and may prevent us from wild goose chases. As for the physicality of the problem, I don't believe it's an appalling approximation to certain physical systems, though these are of course finite. –  ComptonScattering Sep 7 '13 at 0:09
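The restriction discussed above can be checked numerically: once the Hilbert space is taken to be functions supported on the box, the energy eigenbasis $\psi_n(x)=\sqrt{2/L}\,\sin(n\pi x/L)$ really does span it. Below is a minimal sketch in plain Python (the test function $f(x)=1$ inside the box and the truncation order are chosen only for illustration): the sine-series partial sums converge to $f$ at interior points, and are identically zero at the wall, where every eigenstate vanishes.

```python
import math

L = 1.0  # width of the box

def psi(n, x):
    """Energy eigenfunction of the infinite well on (0, L)."""
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

def coeff_const(n):
    """Exact expansion coefficient c_n = integral of 1 * psi_n(x) over (0, L)."""
    return math.sqrt(2.0 / L) * (L / (n * math.pi)) * (1 - math.cos(n * math.pi))

def reconstruct(x, n_max):
    """Partial sum of c_n * psi_n(x), approximating f(x) = 1 inside the box."""
    return sum(coeff_const(n) * psi(n, x) for n in range(1, n_max + 1))

# At an interior point the partial sums converge to f(x) = 1...
print(abs(reconstruct(0.5 * L, 2001) - 1.0) < 0.01)  # True

# ...while at the wall every eigenfunction (hence the whole sum) vanishes.
print(reconstruct(0.0, 51))  # 0.0
```

No such expansion is possible for a delta function located outside the box, which is exactly why those position "states" are excluded from the Hilbert space rather than left unspanned.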
https://stupidsid.com/previous-question-papers/download/signals-systems-5705
Signals & Systems

Total marks: -- Total time: --

INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary

1 (a) Determine whether the discrete time signal $x(n)= \cos \left ( \dfrac {\pi n}{5} \right ) \sin \left ( \dfrac {\pi n}{3} \right )$ is periodic. If periodic, find the fundamental period. 4 M

1 (b) Determine whether the signal shown in Fig. Q1(b) is a power signal or an energy signal. Justify your answer and further determine its energy/power. 6 M

1 (c) Given the signal $x(n)=(6-n)\{u(n)-u(n-6)\}$, make a sketch of $x(n)$, $y_1(n)=x(4-n)$ and $y_2(n)=x(2n-3)$. 4 M

1 (d) Find and sketch the following signals and their derivatives: i) $x(t)=u(t)-u(t-a); \ a>0$ ii) $y(t)=t[u(t)-u(t-a)]; \ a>0$ 6 M

2 (a) The impulse response of a discrete LTI system is given by $h(n)=u(n+1)-u(n-4)$. The system is excited by the input signal $x(n)=u(n)-2u(n-2)+u(n-4)$. Obtain the response of the system $y(n)=x(n)*h(n)$ and plot the same. 7 M

2 (b) Given x(t)=t 0 7 M

2 (c) Show that: i) $x(t)*h(t) = h(t)*x(t)$ ii) $\{x(n)*h_1(n)\}*h_2(n)=x(n)*\{h_1(n)*h_2(n)\}$. 6 M

3 (a) Solve the difference equation $y(n)-3y(n-1)-4y(n-2)=x(n)$ with $x(n)=4^n u(n)$. Assume that the system is initially relaxed. 6 M

3 (b) Draw the direct form I and direct form II implementations for $i) \ y(n) - \dfrac {1}{2} y(n-1)-y(n-3)= 3x(n-1)+ 2x(n-2) \\ ii) \ \dfrac {d^2 y(t)}{dt^2}+ 5 \dfrac {dy (t)}{dt}+ 4y(t)= \dfrac {dx (t)}{dt}$ 10 M

3 (c) Define causality. Derive the necessary and sufficient conditions for a discrete LTI system to be causal in terms of the impulse response. 4 M

4 (a) Determine the DTFS coefficients of $x(n)= 1+\sin \left \{ \dfrac {1}{12} \pi n + \dfrac {3\pi}{8} \right \}$. 6 M

4 (b) Find the exponential Fourier series of the waveform shown in Fig. Q4(b). 8 M

4 (c) Explain the Dirichlet conditions for the existence of the Fourier series.
6 M

5 (a) Find the DTFT of the signal $x(n)=u(n)-u(n-N)$, where $N$ is any positive integer. Determine the magnitude and phase components and draw the magnitude spectrum for $N=5$. 10 M

5 (b) Determine the Fourier transform of the following signals: i) $x(t)=e^{-3t}u(t-1)$ ii) $x(t)=e^{-a|t|}$ 10 M

6 (a) Determine the frequency response and the impulse response for the system described by the differential equation $\dfrac {d^2 y(t)}{dt^2} + \dfrac {dy(t)}{dt}+ 6y(t)= \dfrac {-d}{dt}x(t)$ 10 M

6 (b) Determine the Nyquist sampling rate and Nyquist sampling interval for $i) \ x(t)=1+\cos 2000 \pi t + \sin 4000 \pi t \\ ii) \ x(t)= \left [ \dfrac {\sin (4000 \pi t)}{\pi t} \right ]^2$ 6 M

6 (c) Explain briefly the reconstruction of continuous time signals with a zero order hold. 4 M

7 (a) Find the z-transform of the following and indicate the region of convergence: i) $x(n)=a^n \cos \omega_0 (n-2)\, u(n-2)$ ii) $x(n)=n(n+1)a^n u(n)$ 10 M

7 (b) Find the inverse z-transform of the following: i) $x(z)= \dfrac {z^4 +z^2}{z^2-\frac {3}{4}z+\frac {1}{8}}; \ |z|>\dfrac {1}{2}$ by the partial fraction expansion method. ii) $x(z)= \dfrac {1-az^{-1}}{z^{-1}-a}; \ |z|>\dfrac {1}{a}$ by the long division method. 10 M

8 (a) A discrete LTI system is characterized by the difference equation $y(n)=y(n-1)+y(n-2)+x(n-1)$. Find the system function $H(z)$ and indicate the ROC if the system is i) Stable ii) Causal. Also determine the unit sample response of the stable system. 10 M

8 (b) Solve the following difference equation using the unilateral z-transform: $y(n) - \dfrac {7}{12} y(n-1)+ \dfrac {1}{12}y(n-2)=x(n)$ for $n\ge 0$, with initial conditions $y(-1)=2$, $y(-2)=4$, and $x(n)= \left ( \dfrac {1}{5} \right )^n u(n)$. 10 M
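As a worked check for the linear convolution asked for in question 2(a), the sum $y(n)=\sum_k x(k)\,h(n-k)$ can be evaluated in a few lines of code. This is a sketch in Python (not part of the paper); the two lists below are the finite supports of $x(n)$ and $h(n)$, with the output indexed from $n=-1$ because $h(n)$ starts at $n=-1$:

```python
def convolve(x, h):
    """Linear convolution y[n] = sum_k x[k] * h[n-k] of two finite sequences."""
    y = []
    for n in range(len(x) + len(h) - 1):
        acc = 0
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                acc += x[k] * h[n - k]
        y.append(acc)
    return y

# Q2(a): h(n) = u(n+1) - u(n-4)        -> ones on n = -1..3
#        x(n) = u(n) - 2u(n-2) + u(n-4) -> [1, 1, -1, -1] on n = 0..3
x = [1, 1, -1, -1]
h = [1, 1, 1, 1, 1]
print(convolve(x, h))  # [1, 2, 1, 0, 0, -1, -2, -1], supported on n = -1..6
```

The commutativity in question 2(c) can be checked the same way, since `convolve(x, h)` and `convolve(h, x)` return the same sequence.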
https://support.corel.com/hc/en-us/articles/216690877-DVDs-created-with-certain-DVD-Recorders-import-only-a-small-portion-of-the-video
# DVDs created with certain DVD Recorders import only a small portion of the video

In some rare cases it is possible that DVDs created with certain DVD Recorders may not import properly into Studio 15, Avid Studio or Studio 16. In these cases the DVD will seem to import, but it will only play back a small fraction of the DVD content. This has been reported with discs created with Panasonic DMR-EZ28 and DMR-EZ17 recorders. It is likely that other DVD Recorders have the same issue. If this occurs, you may be able to resolve the issue by replacing a file. Follow these steps:

1. For Studio 16: Browse to \Pinnacle\Studio 16\programs. Here are the default locations for 32 and 64 bit Windows versions:
   32 Bit: C:\Program Files\Pinnacle\Studio 16\Programs
   64 Bit: C:\Program Files (x86)\Pinnacle\Studio 16\programs
   For Avid Studio: Browse to \Avid\Studio\programs. Here are the default locations for 32 and 64 bit Windows versions:
   32 Bit: C:\Program Files\Avid\Studio\Programs
   64 Bit: C:\Program Files (x86)\Avid\Studio\Programs
   For Studio 15: Browse to \Pinnacle\Studio 15\Import\programs. Here are the default locations:
   32 Bit: C:\Program Files\Pinnacle\Studio 15\Import\programs
   64 Bit: C:\Program Files (x86)\Pinnacle\Studio 15\Import\programs
2. Locate DiscImporter.dll and rename it to something like DiscImporter.dll_old
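The steps above amount to renaming one file in the right directory. For reference, here is a small Python sketch that performs the rename given a programs directory (the function name is invented; the directory must be one of the default locations listed above, and on a real system you would likely need administrator rights to modify anything under Program Files):

```python
import os

def disable_disc_importer(programs_dir):
    """Rename DiscImporter.dll to DiscImporter.dll_old, as the article describes.

    Returns True if the file was found and renamed, False otherwise.
    """
    src = os.path.join(programs_dir, "DiscImporter.dll")
    dst = os.path.join(programs_dir, "DiscImporter.dll_old")
    if os.path.isfile(src):
        os.rename(src, dst)
        return True
    return False

# Example (Studio 16 on 64-bit Windows):
# disable_disc_importer(r"C:\Program Files (x86)\Pinnacle\Studio 16\programs")
```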
http://www.mzan.com/article/48237018-angular-2-how-to-bind-string-containing-html-selectively-a-few-allowed-tags.shtml
Angular 2 - How to bind string containing HTML selectively (a few allowed tags)

I'm implementing some simple search and highlight in my app. So I have this in my template:

{{myContent}}

And this myContent variable may originally contain data like this (not safe to just put as HTML):

:param: (b (r :a:sync memo :o :idf:3.09 kg;/g/1<>11bwnh :score:0.3

And what I want to do is to provide search and highlight functionality, so my first thought was to use a regexp replace and transform my content into something like this (assume a search for memo):

:param: (b (r :a:sync memo :o :idf:3.09 kg;/g/1<>11bwnh :score:0.3

But then I'm having the problem of how to get it rendered safely. I've seen other questions about binding strings that contain HTML into Angular templates, but the most updated one talks about using [innerHTML], and with that you need to trust the entire HTML. Is there a way to tell Angular to only trust a specific set of tags?

Alternatively, what I thought about was to split the results of the search into chunks like this:

```js
chunks = searchAndHighlight(myContent);
// chunks would be:
// [
//   {content: ':param: (b (r :a:sync ', html: false},
//   {content: 'memo', html: true},
//   {content: ' :o :idf:3.09 kg;/g/1<>11bwnh :score:0.3', html: false},
// ]
```

And then in my template use something like:

{{chunk.content}}

But I'm afraid this may kill the performance of my app. My app actually shows a huge tree of data where I'm performing search on each of the nodes, so there are hundreds of copies of this Node component, and each of them renders one piece of data (possibly highlighted with search results).

Hope I described the problem well and didn't overwhelm anybody. Thanks!
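The chunk-splitting approach the question describes is straightforward to implement. Here is one possible `searchAndHighlight`, shown as a Python sketch for brevity rather than the TypeScript the app would actually use; the function and field names mirror the question but are otherwise invented:

```python
def search_and_highlight(content, term):
    """Split `content` into chunks, marking occurrences of `term` with html=True."""
    if not term:
        return [{"content": content, "html": False}]
    chunks = []
    i = 0
    while True:
        j = content.find(term, i)
        if j == -1:
            # No more matches: emit the plain-text tail, if any.
            if i < len(content):
                chunks.append({"content": content[i:], "html": False})
            return chunks
        if j > i:
            chunks.append({"content": content[i:j], "html": False})
        chunks.append({"content": term, "html": True})
        i = j + len(term)

print(search_and_highlight(":a:sync memo :o", "memo"))
# [{'content': ':a:sync ', 'html': False},
#  {'content': 'memo', 'html': True},
#  {'content': ' :o', 'html': False}]
```

Only the chunks flagged `html: true` ever need to be rendered as markup (for example, wrapped in a highlight span by the template), so the untrusted text never has to pass through an innerHTML binding at all.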
http://mathandmultimedia.com/tag/sample-number-problems/
## Math Word Problems: Solving Number Problems Part 3

This is the third part of the Solving Number Problems Series. The first part can be read here and the second part can be read here. In this post I will continue worked examples using problems which are slightly more complicated than the problems in the previous two parts. Without further ado, let's start with the seventh problem in the series.

PROBLEM 7

Twice a number added to $18$ is $5$ times that number. What is the number?

Solution

In the two previous posts, we have learned that if $n$ is a number, then twice that number is $2n$. So, twice a number added to $18$ is represented by $2n + 18$. Now, this quantity, $2n + 18$, is $5$ times that number, or $5n$. So, we can now set up the equation $2n + 18 = 5n$. If we solve for $n$, we have $n = 6$.
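The algebra in Problem 7 can be sanity-checked in a couple of lines (a quick Python sketch, not part of the original post):

```python
# 2n + 18 = 5n  =>  18 = 5n - 2n = 3n  =>  n = 18 / 3
n = 18 / (5 - 2)
print(n)                    # 6.0
print(2 * n + 18 == 5 * n)  # True: both sides equal 30
```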
https://studydaddy.com/question/bus-499-wk-4-assignment-2-external-and-internal-environments
QUESTION

# BUS 499 WK 4 Assignment 2 External and Internal Environments

This file of BUS 499 WK 4 Assignment 2 External and Internal Environments covers:

$30.99

ANSWER

Tutor has posted answer for $30.99. See answer's preview
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/01%3A_The_Dawn_of_the_Quantum_Theory/1.07%3A_de_Broglie_Waves_can_be_Experimentally_Observed
# 1.7: de Broglie Waves can be Experimentally Observed

##### Learning Objectives

• To present the experimental evidence behind the wave-particle duality of matter

The validity of de Broglie’s proposal was confirmed by the electron diffraction experiments of G.P. Thomson in 1926 and of C. Davisson and L.H. Germer in 1927. In these experiments it was found that electrons were scattered from atoms in a crystal and that these scattered electrons produced an interference pattern. The interference pattern was just like that produced when water waves pass through two holes in a barrier to generate separate wave fronts that combine and interfere with each other. These diffraction patterns are characteristic of wave-like behavior and are exhibited by both matter (e.g., electrons and neutrons) and electromagnetic radiation. Diffraction patterns are obtained if the wavelength is comparable to the spacing between scattering centers.
Diffraction occurs when waves encounter obstacles whose size is comparable with their wavelength. Continuing our analysis of the experiments that led to the new quantum theory, we now look at the phenomenon of electron diffraction.

## Diffraction of Light (Light as a Wave)

It is well known that light can diffract around objects in its path, leading to an interference pattern that is particular to the object. This is, in fact, how holography works (the interference pattern is created by allowing the diffracted light to interfere with the original beam, so that the hologram can be viewed by shining the original beam on the image). A simple illustration of light diffraction is the Young double-slit experiment (Figure 1.7.1). Here, light waves (pictured as plane waves parallel to the double-slit apparatus) impinge on the two slits. Each slit then becomes a point source for spherical waves that subsequently interfere with each other, giving rise to the light and dark fringes on the screen at the right.

Double-slit experiments are a direct demonstration of wave phenomena via observed interference. These experiments were first performed by Thomas Young in 1801 as a demonstration of the wave behavior of light. In the basic version of this experiment, light illuminates a plate pierced by two parallel slits, and the light passing through the two slits is observed on a screen behind the plate (Figures 1.7.1 and 1.7.2). The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles (Figure 1.7.2c). However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves); the interference pattern appears via the varying density of these particle hits on the screen.
Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as a classical particle would), and not through both slits (as a wave would).

Interference is a wave phenomenon in which two (or more) waves superimpose to form a resultant wave of greater or lower amplitude. It is the primary property used to identify wave behavior.

## Diffraction of Electrons (Electrons as Waves)

According to classical physics, electrons should behave like particles: they travel in straight lines and do not curve in flight unless acted on by an external agent, like a magnetic field. In this model, if we fire a beam of electrons through a double slit onto a detector, we should get two bands of "hits", much as you would get if you fired a machine gun at the side of a house with two windows – you would get two areas of bullet-marked wall inside, and the rest would be intact (Figure 1.7.3 (left) and Figure 1.7.2). However, if the slits are made small enough and close enough together, we actually observe that the electrons diffract through the slits and interfere with each other just like waves (Figure 1.7.3 (right) and Figure 1.7.2 a,b). This means that electrons have wave-particle duality, just like photons, in agreement with de Broglie's hypothesis discussed previously. In that case, they must have wave properties such as wavelength and frequency, which we can deduce from the behavior of the electrons as they pass through the diffraction grating.

This was a pivotal result in the development of quantum mechanics. Just as the photoelectric effect demonstrated the particle nature of light, the Davisson–Germer experiment demonstrated the wave nature of matter and completed the theory of wave-particle duality.
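To see why crystal lattices can act as diffraction gratings for electrons, it helps to compute the de Broglie wavelength of the electrons Davisson and Germer used. The short Python sketch below (not part of the original text) uses the non-relativistic relation $$p = \sqrt{2mE}$$ and the historical 54 V accelerating voltage:

```python
h = 6.626e-34    # Planck constant, J s
m_e = 9.109e-31  # electron mass, kg
e = 1.602e-19    # elementary charge, C (i.e. joules per eV)

def de_broglie_wavelength(kinetic_energy_ev, mass):
    """lambda = h / p, with p = sqrt(2 m E) for a non-relativistic particle."""
    momentum = (2 * mass * kinetic_energy_ev * e) ** 0.5
    return h / momentum

# Electrons accelerated through 54 V, as in the Davisson-Germer experiment
wavelength = de_broglie_wavelength(54, m_e)
print(f"{wavelength:.2e} m")  # ~1.67e-10 m, on the order of atomic spacings
```

The resulting wavelength of about 1.7 Å matches the spacing between atomic planes in a nickel crystal, which is exactly why the crystal produced an interference pattern.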
For physicists this idea was important because it meant not only that any particle can exhibit wave characteristics, but also that one can use wave equations to describe phenomena in matter by using the de Broglie wavelength.

An electron microscope uses a beam of accelerated electrons as its source of illumination. Since the wavelength of electrons can be up to 100,000 times shorter than that of visible-light photons, electron microscopes have a higher resolving power than light microscopes and can reveal the structure of smaller objects. A transmission electron microscope can achieve better than 50 pm resolution and magnifications of up to about 10,000,000×, whereas most light microscopes are limited by diffraction to about 200 nm resolution and useful magnifications below 2000× (Figure 1.7.4).

##### Is Matter a Particle or a Wave?

An electron, indeed any particle, is neither a particle nor a wave. Describing the electron as a particle is a mathematical model that works well in some circumstances, while describing it as a wave is a different mathematical model that works well in other circumstances. When you choose to do some calculation of the electron's behavior that treats it either as a particle or as a wave, you're not saying the electron is a particle or is a wave: you're just choosing the mathematical model that makes it easiest to do the calculation.

## Neutron Diffraction (Neutrons as Waves)

Like all quantum particles, neutrons can also exhibit wave phenomena, and if their wavelength is short enough, atoms or their nuclei can serve as diffraction obstacles. When a beam of neutrons emanating from a reactor is slowed down and selected by speed, the resulting wavelength lies near one angstrom (0.1 nanometer), the typical separation between atoms in a solid material. Such a beam can then be used to perform a diffraction experiment.
Neutrons interact directly with the nucleus of the atom, and the contribution to the diffracted intensity depends on each isotope; for example, regular hydrogen and deuterium contribute differently. It is also often the case that light (low Z) atoms contribute strongly to the diffracted intensity even in the presence of large Z atoms.

##### Example 1.7.1 : Neutron Diffraction

Neutrons have no electric charge, so they do not interact with the atomic electrons. Hence, they are very penetrating (e.g., typically 10 cm in lead). Neutron diffraction was proposed in 1934 to exploit de Broglie’s hypothesis about the wave nature of matter. Calculate the momentum and kinetic energy of a neutron whose wavelength is comparable to atomic spacing ($$1.8 \times 10^{-10}\, m$$).

Solution

This is a simple use of de Broglie’s equation $\lambda = \dfrac{h}{p} \nonumber$ where we recognize that the wavelength of the neutron must be comparable to atomic spacing (let us assume they are equal for convenience, so $$\lambda = 1.8 \times 10^{-10}\, m$$). Rearranging the de Broglie wavelength relationship above to solve for momentum ($$p$$): \begin{align} p &= \dfrac{h}{\lambda} \nonumber \\[4pt] &= \dfrac{6.6 \times 10^{-34} J s}{1.8 \times 10^{-10} m} \nonumber \\[4pt] &= 3.7 \times 10^{-24}\, kg \,\,m\, \,s^{-1} \nonumber \end{align} \nonumber The relationship for kinetic energy is $KE = \dfrac{1}{2} mv^2 = \dfrac{p^2}{2m} \nonumber$ where $$v$$ is the velocity of the particle. From the reference table of physical constants, the mass of a neutron is $$1.6749273 \times 10^{−27}\, kg$$, so \begin{align*} KE &= \dfrac{(3.7 \times 10^{-24}\, kg \,\,m\, \,s^{-1} )^2}{2 (1.6749273 \times 10^{−27}\, kg)} \\[4pt] &=4.0 \times 10^{-21} J \end{align*} \nonumber The neutrons released in nuclear fission are ‘fast’ neutrons, i.e. much more energetic than this. Their wavelengths will be much smaller than atomic dimensions, so they will not be useful for neutron diffraction.
We slow these fast neutrons down by introducing a "moderator", a material (e.g., graphite) that neutrons can penetrate but that slows them down appreciably.
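The arithmetic in Example 1.7.1 can be checked in a few lines of Python (a sketch added here, using the same rounded constants as the worked example):

```python
h = 6.626e-34          # Planck constant, J s
m_n = 1.6749273e-27    # neutron mass, kg
lam = 1.8e-10          # assumed wavelength, m (typical atomic spacing)

p = h / lam            # de Broglie relation: p = h / lambda
ke = p**2 / (2 * m_n)  # kinetic energy: KE = p^2 / (2m)

print(f"p  = {p:.2e} kg m/s")  # ~3.7e-24, as in the example
print(f"KE = {ke:.2e} J")      # ~4.0e-21, as in the example
```

Note that 4.0 × 10⁻²¹ J is about 0.025 eV, i.e. roughly thermal energy at room temperature, which is why such neutrons are called "thermal" neutrons.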
http://spatial-ecology.net/docs/build/html/STUDENTSPROJECTS/Proj_2021_SW/SaeidAminjafariProject.html
# 1.11. Phase Change Analysis

Saeid Aminjafari
GeoComput. & ML
2 June 2021

## 1.11.1. Project description

In this project I downloaded 42 Sentinel-1 images (42 dates, i.e. every six days from 24 March 2019 to 25 November 2019) from the Alaska Satellite Facility. Then I calculated the phase change between successive pairs of images with the SARScape software. We want to test whether the accumulated phase changes of pixels inside a lake correlate with the water-level changes in that lake. In cases with good correlation, this method can replace conventional methods for estimating water-level changes in lakes: it has high spatial and temporal resolution, and is cost-efficient and fast.

### 1.11.1.1. Geocomputation datasets

For the course Geocomputation, I have two sets of data:

1. The phase difference between successive dates (41 maps of phase change!).
2. The quality images of the phase change maps.

### 1.11.1.2. Data processing and computation procedure:

1. For each quality image, we mask the pixels with quality less than 0.25 (41 mask.tif images).
2. Multiply all mask images to get the pixels that have high quality in all maps (maskAll.tif).
3. Multiply maskAll.tif by each phase-change map to get the phase change of high-quality pixels.
4. Crop the masked phase-change maps around the lake area with a shape file.
5. Get the values of the selected pixels and create a time series of phase change for each pixel (42 dates).
6. Accumulate the time series of the phase change for each pixel.
7. Calculate the correlation between the accumulated phase change and the water level of the gauging station.

The quality images are stored in a file named ccAll, which has 41 bands, i.e. the number of images. The code below shows the information of this file.
[1]: !gdalinfo Project/ccAll Driver: ENVI/ENVI .hdr Labelled Files: Project/ccAll Project/ccAll.hdr Size is 9219, 4226 Coordinate System is: GEOGCS["GCS_WGS_1984", DATUM["WGS_1984", SPHEROID["WGS_84",6378137.0,298.257223563]], PRIMEM["Greenwich",0.0], UNIT["degree",0.0174532925199433]] Origin = (11.809735920000000,58.110175499999997) Pixel Size = (0.000208320000000,-0.000208320000000) Band_1=Layer (Band 1:IS_20191119_m_40_20191125_s_41_cc_geo) Band_10=Layer (Band 1:IS_20190926_m_31_20191002_s_32_cc_geo) Band_11=Layer (Band 1:IS_20190920_m_30_20190926_s_31_cc_geo) Band_12=Layer (Band 1:IS_20190914_m_29_20190920_s_30_cc_geo) Band_13=Layer (Band 1:IS_20190908_m_28_20190914_s_29_cc_geo) Band_14=Layer (Band 1:IS_20190902_m_27_20190908_s_28_cc_geo) Band_15=Layer (Band 1:IS_20190827_m_26_20190902_s_27_cc_geo) Band_16=Layer (Band 1:IS_20190821_m_25_20190827_s_26_cc_geo) Band_17=Layer (Band 1:IS_20190815_m_24_20190821_s_25_cc_geo) Band_18=Layer (Band 1:IS_20190809_m_23_20190815_s_24_cc_geo) Band_19=Layer (Band 1:IS_20190803_m_22_20190809_s_23_cc_geo) Band_2=Layer (Band 1:IS_20191113_m_39_20191119_s_40_cc_geo) Band_20=Layer (Band 1:IS_20190728_m_21_20190803_s_22_cc_geo) Band_21=Layer (Band 1:IS_20190722_m_20_20190728_s_21_cc_geo) Band_22=Layer (Band 1:IS_20190716_m_19_20190722_s_20_cc_geo) Band_23=Layer (Band 1:IS_20190710_m_18_20190716_s_19_cc_geo) Band_24=Layer (Band 1:IS_20190704_m_17_20190710_s_18_cc_geo) Band_25=Layer (Band 1:IS_20190628_m_16_20190704_s_17_cc_geo) Band_26=Layer (Band 1:IS_20190622_m_15_20190628_s_16_cc_geo) Band_27=Layer (Band 1:IS_20190616_m_14_20190622_s_15_cc_geo) Band_28=Layer (Band 1:IS_20190610_m_13_20190616_s_14_cc_geo) Band_29=Layer (Band 1:IS_20190604_m_12_20190610_s_13_cc_geo) Band_3=Layer (Band 1:IS_20191107_m_38_20191113_s_39_cc_geo) Band_30=Layer (Band 1:IS_20190529_m_11_20190604_s_12_cc_geo) Band_31=Layer (Band 1:IS_20190523_m_10_20190529_s_11_cc_geo) Band_32=Layer (Band 1:IS_20190517_m_9_20190523_s_10_cc_geo) Band_33=Layer (Band 
1:IS_20190511_m_8_20190517_s_9_cc_geo) Band_34=Layer (Band 1:IS_20190505_m_7_20190511_s_8_cc_geo) Band_35=Layer (Band 1:IS_20190429_m_6_20190505_s_7_cc_geo) Band_36=Layer (Band 1:IS_20190423_m_5_20190429_s_6_cc_geo) Band_37=Layer (Band 1:IS_20190417_m_4_20190423_s_5_cc_geo) Band_38=Layer (Band 1:IS_20190411_m_3_20190417_s_4_cc_geo) Band_39=Layer (Band 1:IS_20190405_m_2_20190411_s_3_cc_geo) Band_4=Layer (Band 1:IS_20191101_m_37_20191107_s_38_cc_geo) Band_40=Layer (Band 1:IS_20190330_m_1_20190405_s_2_cc_geo) Band_41=Layer (Band 1:IS_20190324_m_0_20190330_s_1_cc_geo) Band_5=Layer (Band 1:IS_20191026_m_36_20191101_s_37_cc_geo) Band_6=Layer (Band 1:IS_20191020_m_35_20191026_s_36_cc_geo) Band_7=Layer (Band 1:IS_20191014_m_34_20191020_s_35_cc_geo) Band_8=Layer (Band 1:IS_20191008_m_33_20191014_s_34_cc_geo) Band_9=Layer (Band 1:IS_20191002_m_32_20191008_s_33_cc_geo) INTERLEAVE=BAND Corner Coordinates: Upper Left ( 11.8097359, 58.1101755) ( 11d48'35.05"E, 58d 6'36.63"N) Lower Left ( 11.8097359, 57.2298152) ( 11d48'35.05"E, 57d13'47.33"N) Upper Right ( 13.7302380, 58.1101755) ( 13d43'48.86"E, 58d 6'36.63"N) Lower Right ( 13.7302380, 57.2298152) ( 13d43'48.86"E, 57d13'47.33"N) Center ( 12.7699870, 57.6699953) ( 12d46'11.95"E, 57d40'11.98"N) Band 1 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20191119_m_40_20191125_s_41_cc_geo) Band 2 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20191113_m_39_20191119_s_40_cc_geo) Band 3 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20191107_m_38_20191113_s_39_cc_geo) Band 4 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20191101_m_37_20191107_s_38_cc_geo) Band 5 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20191026_m_36_20191101_s_37_cc_geo) Band 6 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20191020_m_35_20191026_s_36_cc_geo) Band 7 
Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20191014_m_34_20191020_s_35_cc_geo) Band 8 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20191008_m_33_20191014_s_34_cc_geo) Band 9 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20191002_m_32_20191008_s_33_cc_geo) Band 10 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190926_m_31_20191002_s_32_cc_geo) Band 11 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190920_m_30_20190926_s_31_cc_geo) Band 12 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190914_m_29_20190920_s_30_cc_geo) Band 13 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190908_m_28_20190914_s_29_cc_geo) Band 14 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190902_m_27_20190908_s_28_cc_geo) Band 15 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190827_m_26_20190902_s_27_cc_geo) Band 16 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190821_m_25_20190827_s_26_cc_geo) Band 17 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190815_m_24_20190821_s_25_cc_geo) Band 18 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190809_m_23_20190815_s_24_cc_geo) Band 19 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190803_m_22_20190809_s_23_cc_geo) Band 20 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190728_m_21_20190803_s_22_cc_geo) Band 21 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190722_m_20_20190728_s_21_cc_geo) Band 22 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190716_m_19_20190722_s_20_cc_geo) Band 23 Block=9219x1 
Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190710_m_18_20190716_s_19_cc_geo) Band 24 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190704_m_17_20190710_s_18_cc_geo) Band 25 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190628_m_16_20190704_s_17_cc_geo) Band 26 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190622_m_15_20190628_s_16_cc_geo) Band 27 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190616_m_14_20190622_s_15_cc_geo) Band 28 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190610_m_13_20190616_s_14_cc_geo) Band 29 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190604_m_12_20190610_s_13_cc_geo) Band 30 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190529_m_11_20190604_s_12_cc_geo) Band 31 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190523_m_10_20190529_s_11_cc_geo) Band 32 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190517_m_9_20190523_s_10_cc_geo) Band 33 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190511_m_8_20190517_s_9_cc_geo) Band 34 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190505_m_7_20190511_s_8_cc_geo) Band 35 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190429_m_6_20190505_s_7_cc_geo) Band 36 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190423_m_5_20190429_s_6_cc_geo) Band 37 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190417_m_4_20190423_s_5_cc_geo) Band 38 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190411_m_3_20190417_s_4_cc_geo) Band 39 Block=9219x1 Type=Float32, 
ColorInterp=Undefined Description = Layer (Band 1:IS_20190405_m_2_20190411_s_3_cc_geo) Band 40 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190330_m_1_20190405_s_2_cc_geo) Band 41 Block=9219x1 Type=Float32, ColorInterp=Undefined Description = Layer (Band 1:IS_20190324_m_0_20190330_s_1_cc_geo)

The code below stores the number of bands in a variable called nBands. I use this command throughout my code several times.

[8]:

%%bash
nBands=$(gdalinfo Project/ccAll | grep Band_ | wc -l)
echo $nBands

41

In this snippet of code I create, for each band (image), a mask file that masks all the pixels with low quality (< 0.25). Each line is commented with its functionality.

[ ]:

%%bash
nBands=$(gdalinfo Project/ccAll | grep Band_ | wc -l)
for band in $(seq 1 $nBands); do   # a for loop through all bands (images)
    echo "Band $band:"   # show the band number in progress
    gdal_translate -of XYZ -b $band Project/ccAll Project/"B$band".txt   # convert the quality image to a text file (B*.txt)
    awk '{if ($3<0.25) {print $1,$2,0 } else {print $1,$2,1 }}' Project/"B$band".txt > Project/"mask$band".txt   # pixels with quality<0.25 are set to 0, others to 1, and written to a new file (mask*.txt)
    gdal_translate -co COMPRESS=DEFLATE -co ZLEVEL=9 -ot Byte Project/"mask$band".txt Project/"mask$band".tif   # convert mask*.txt files to mask*.tif files
    rm -f Project/"B$band".txt   # remove the text files created in the previous steps
    rm -f Project/"mask$band".txt
done

In order to get a single mask file that ensures a pixel has high quality in all images, we need to multiply all the mask images (with 0 and 1 values). However, for multiplication with gdal_calc.py we need to assign a letter to each mask file, and we have only 26 letters because gdal version 2 only supports capital letters.
Thus, we first multiply the first 26 mask*.tif images, then we multiply the rest, then we multiply the last two files, and finally we multiply maskAll.tif by each phase change map.

[ ]:

%%bash
for i in {A..Z}; do
    echo $i >> letters.txt   # write letters A to Z on 26 lines of letters.txt
done

[2]:

!cat Project/letters.txt

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

[ ]:

# Assign each letter with a dash to a mask*.tif file
%%bash
nBands=$(gdalinfo ccAll | grep Band_ | wc -l)
for count in $(seq 1 26); do
    awk -v i=$count 'NR==i{ print "-"$1" mask"i".tif" }' letters.txt
done > list.txt

[3]:

!cat Project/list.txt

-A mask1.tif

[ ]:

# change list.txt to a single-line file
!grep - list.txt | tr '\n' ' ' > ls.txt
!var=$(cat ls.txt)

[5]:

!echo $var

-A mask1.tif -B mask2.tif -C mask3.tif -D mask4.tif -E mask5.tif -F mask6.tif -G mask7.tif -H mask8.tif -I mask9.tif -J mask10.tif -K mask11.tif -L mask12.tif -M mask13.tif -N mask14.tif -O mask15.tif -P mask16.tif -Q mask17.tif -R mask18.tif -S mask19.tif -T mask20.tif -U mask21.tif -V mask22.tif -W mask23.tif -X mask24.tif -Y mask25.tif -Z mask26.tif

[ ]:

# put an asterisk after each letter for the gdal_calc.py syntax
%%bash
awk '{ if (NR<26) print $1,"*"; else print $1 }' letters.txt > multiply.txt

[6]:

!cat Project/multiply.txt

A * B * C * D * E * F * G * H * I * J * K * L * M * N * O * P * Q * R * S * T * U * V * W * X * Y * Z

[ ]: # change multiply.txt file to a single line file !grep .
multiply.txt | tr '\n' ' ' > multiply2.txt

[7]:

!cat Project/multiply2.txt

A * B * C * D * E * F * G * H * I * J * K * L * M * N * O * P * Q * R * S * T * U * V * W * X * Y * Z

[ ]:

# multiply the first 26 mask*.tif images
!gdal_calc.py --type=Byte --overwrite $var --co=COMPRESS=DEFLATE --co=ZLEVEL=9 --calc="( $(cat multiply2.txt) )" --outfile=mask1_26.tif

[ ]:

# the same procedure is done for the mask images mask27.tif to mask41.tif
%%bash
for count in $(seq 27 41); do
    awk -v i=$count 'NR==i-26{ print "-"$1" mask"i".tif" }' letters.txt
done > list2.txt
############################################################
grep - list2.txt | tr '\n' ' ' > ls2.txt
var=$(cat ls2.txt)
echo $var
############################################################
awk '{ if (NR<42-27) print $1,"*"; else if (NR==42-27) print $1 }' letters.txt > multiply3.txt
############################################################
grep . multiply3.txt | tr '\n' ' ' > multiply4.txt
cat multiply4.txt
############################################################
gdal_calc.py --type=Byte --overwrite $var --co=COMPRESS=DEFLATE --co=ZLEVEL=9 --calc="( $(cat multiply4.txt) )" --outfile=mask27_41.tif

[ ]:

# Multiply the last two masks we got and create the final maskAll.tif file

[ ]:

# Multiply each phase difference map (stored in the bands of the ifgAll file) by maskAll.tif to get the phase change of the high-quality pixels
%%bash
for j in $(seq 1 41); do
    gdal_calc.py --type=Float32 --overwrite -A ifgAll --A_band=$j -B maskAll.tif --co=COMPRESS=DEFLATE --co=ZLEVEL=9 --calc="( A * B.astype(float) )" --outfile="ifg$j".tif
done

Crop the lake area

[18]:

# Crop the lake area for each phase change map with a shape file
%%bash
for j in $(seq 1 41); do
    gdalwarp -cutline shape/tvarsjon60.shp -crop_to_cutline -dstnodata 0 "ifg$j".tif "ifg_crop$j".tif
done

The code below creates the time series of phase change and the accumulated phase change!
[19]:

# get the phase change of each pixel and store the time series of each pixel in a field of the file timeS.txt
%%bash
nBands=$(gdalinfo ccAll | grep Band_ | wc -l)
for l in $(seq 1 $nBands); do
    # convert tif to a text file; also reverse the band order, because the oldest image is stored in band 41 and the newest in band 1!
    gdal_translate -of XYZ "ifg_crop$((42-$l))".tif "G$l".txt
    # keep only the pixels with non-zero values!
    awk 'BEGIN{ORS=" ";} {if ($3!=0) {print $3 }} END{print "\n"}' "G$l".txt >> timeS.txt
done

[ ]:

# Accumulate each field of timeS.txt
!awk '{ for (i=1; i<=NF; ++i) {sum[i]+=$i; $i=sum[i] }; print $0}' timeS.txt > timesAcc.txt

[10]:

# the number of high-quality pixels
!awk '{print NF}' Project/timesAcc.txt

482 482 482 … (482 on each of the 41 lines, then a final 0)

[ ]:

!awk 'BEGIN{ORS=" ";} {if ($3!=0) {print "0" } }' G1.txt > zero.txt   # create a line of zeros
!sed "1i\\$con" timesAcc.txt > tsAcc.txt   # prepend the zeros to the accumulated phase-change time series (first line: we assume that the phase on the first date was zero)

[11]:

!cat Project/zero.txt

0 0 0 … (one zero per high-quality pixel)
[1]:

# number of pixels
!awk '{print NF}' Project/tsAcc.txt

483 483 483 … (483 on each of the 42 lines)

[2]:

# number of dates
!awk '{print NR}' Project/tsAcc.txt

1 2 3 … 42

[ ]:

# add the water level of the gauging station as the last column of tsAcc.txt
! awk '{getline to_add < "field.txt"; print $0,to_add}' tsAcc.txt > tsAcc.txt

[ ]:

# calculate the correlation between the last field (water level) and the accumulated phase change of every pixel (the other fields) and store the values in corr.txt
%%bash
for count in $(seq 1 483); do
    awk -v i=$count 'pass==1 {sx+=$i; sy+=$483; n+=1} pass==2 {mx=sx/(n); my=sy/(n); cov+=($i-mx)*($483-my); ssdx+=($i-mx)*($i-mx); ssdy+=($483-my)*($483-my);} END {print cov / ( sqrt(ssdx) * sqrt(ssdy) ) }' pass=1 tsAccT.txt pass=2 tsAccT.txt >> corr.txt
done

## 1.11.2. Here, we see the 10 pixels with the highest correlation with the field data

[6]:

!sort -k1 -n corr.txt | tail -10

0.665665 0.681602 0.697547 0.706916 0.714161 0.747608 0.77914 0.786415 0.806715 1

[4]:

!jupyter nbconvert myProject.ipynb --to html

[NbConvertApp] Converting notebook myProject.ipynb to html
[NbConvertApp] Writing 614090 bytes to myProject.html

[5]:

!jupyter nbconvert SaeidAminjafariProject.ipynb --to html

[NbConvertApp] Converting notebook myProject.ipynb to html
[NbConvertApp] Writing 614318 bytes to myProject.html
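Steps 6 and 7 of the pipeline (accumulating the per-interval phase changes and correlating each pixel with the gauge record) amount to a cumulative sum followed by a Pearson correlation, which is what the two-pass awk script computes. A NumPy sketch with made-up data (the array shapes are assumed from the pipeline, not the project's real data) could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dates, n_pixels = 42, 5                                  # 42 dates -> 41 phase-change intervals
phase_change = rng.normal(size=(n_dates - 1, n_pixels))    # stand-in for timeS.txt
water_level = np.cumsum(rng.normal(size=n_dates))          # stand-in for the gauge record

# Step 6: accumulate, assuming zero phase on the first date (the prepended zero line)
accumulated = np.vstack([np.zeros(n_pixels), np.cumsum(phase_change, axis=0)])

# Step 7: Pearson correlation of each pixel's accumulated phase with the water level
def pearson(x, y):
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

correlations = [pearson(accumulated[:, j], water_level) for j in range(n_pixels)]
```

Working on in-memory arrays also sidesteps the 26-letter limit of gdal_calc.py that the letter-assignment workaround above deals with.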
https://codereview.stackexchange.com/questions/234762/adventofcode-2019-day-6-in-haskell
I am new to Haskell and currently trying to port my solutions for the 2019 installment of the coding challenge AdventOfCode to Haskell. So, I would very much appreciate any suggestions on how to make the code more readable and, in particular, more idiomatic. This post shows my solution of day 6 part 2, but also includes the function totalDecendantCount used to solve part 1. If you have not solved these problems and still intend to do so, stop reading immediately.

For both problems, you get a file with an orbit specification on each line of the form A)B, which tells you that B orbits A. This describes a tree of bodies orbiting each other with root COM. In part 1, you have to compute a check sum. More precisely, you have to compute the sum of the number of direct and indirect orbits of each body, which is the same as the sum of the number of descendants of each body in the tree. In part 2, which you cannot see if you have not finished part 1, you have to compute the minimal number of transfers between orbits from you (YOU) to Santa (SAN).

I have kept the entire solution for each part of each day in a single module with a single exported function that prints the solution. For day 6 part 2 it starts as follows.

    ( distanceToSanta ) where

    import System.IO
    import Data.List.Split
    import Data.List
    import Data.Maybe
    import Data.Hashable
    import qualified Data.HashMap.Strict as Map

    distanceToSanta :: IO ()
    distanceToSanta = do
        let orbitList = (map orbit . lines) inputText
        let orbits = orbitMap $ catMaybes orbitList
        let pathToSanta = fromJust $ path orbits "COM" "YOU" "SAN"
        let requiredTransfers = length pathToSanta - 3
        print requiredTransfers

We subtract 3 from the length of the path because it consists of the bodies on the path, and you only have to transfer from the body you already orbit to the body Santa orbits. To store the tree, I use a HashMap.Strict and introduce the following type aliases and helper function to make things a bit more descriptive.
```haskell
type OrbitSpecification = (String,String)
type ChildrenMap a = Map.HashMap a [a]

children :: (Eq a, Hashable a) => ChildrenMap a -> a -> [a]
children childrenMap = fromMaybe [] . flip Map.lookup childrenMap

orbit :: String -> Maybe OrbitSpecification
orbit str =
    case orbit_specification of
        [x,y] -> Just (x,y)
        _ -> Nothing
    where orbit_specification = splitOn ")" str

orbitMap :: [OrbitSpecification] -> ChildrenMap String
orbitMap = Map.fromListWith (++) . map (applyToSecondElement toSingleElementList)

applyToSecondElement :: (b -> c) -> (a,b) -> (a,c)
applyToSecondElement f (x,y) = (x, f y)

toSingleElementList :: a -> [a]
toSingleElementList x = [x]
```

To solve part 1, I introduce two general helper functions to generate aggregates over children or over all descendants.

```haskell
childrenAggregate :: (Eq a, Hashable a) => ([a] -> b) -> ChildrenMap a -> a -> b
childrenAggregate aggregatorFnc childrenMap = aggregatorFnc . children childrenMap

decendantAggregate :: (Eq a, Hashable a) => (b -> b -> b) -> (ChildrenMap a -> a -> b) -> ChildrenMap a -> a -> b
decendantAggregate resultFoldFnc nodeFnc childrenMap node =
    foldl' resultFoldFnc nodeValue childResults
    where
        nodeValue = nodeFnc childrenMap node
        childFnc = decendantAggregate resultFoldFnc nodeFnc childrenMap
        childResults = map childFnc $ children childrenMap node
```

The function decendantAggregate recursively applies a function nodeFnc to a node and all its descendants and folds the results using some function resultFoldFnc. This allows us to define the necessary functions to count the total number of descendants of a node as follows.
```haskell
childrenCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int
childrenCount = childrenAggregate length

decendantCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int
decendantCount = decendantAggregate (+) childrenCount

totalDecendantCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int
totalDecendantCount = decendantAggregate (+) decendantCount
```

For part 2, we use the fact that between two points in a tree there is exactly one path (without repetition). First, we define a function to get a path from the root of a (sub)tree to the destination, provided it exists.

```haskell
pathFromRoot :: (Eq a, Hashable a) => ChildrenMap a -> a -> a -> Maybe [a]
pathFromRoot childrenMap root destination
    | destination == root = Just [root]
    | null childPaths = Nothing
    | otherwise = Just $ root:(head childPaths)
    where
        rootChildren = children childrenMap root
        pathFromNewRoot newRoot = pathFromRoot childrenMap newRoot destination
        childPaths = mapMaybe pathFromNewRoot rootChildren
```

This function only finds paths down from the root of a (sub)tree. General paths come in three variations: a path from the root of a (sub)tree, the inverse of such a path, or the concatenation of a path to the root of a subtree and one from that root to the end point. Thus, we get the path as follows.
```haskell
path :: (Eq a, Hashable a) => ChildrenMap a -> a -> a -> a -> Maybe [a]
path childrenMap root start end =
    let maybeStartEndPath = pathFromRoot childrenMap start end
    in if isJust maybeStartEndPath
        then maybeStartEndPath
        else
            let maybeEndStartPath = pathFromRoot childrenMap end start
            in case maybeEndStartPath of
                Just endStartPath -> Just $ reverse endStartPath
                Nothing ->
                    let rootPathToStart = pathFromRoot childrenMap root start
                        rootPathToEnd = pathFromRoot childrenMap root end
                    in if isNothing rootPathToStart || isNothing rootPathToEnd
                        then Nothing
                        else connectedPath (fromJust rootPathToStart) (fromJust rootPathToEnd)
```

To connect the paths in the last alternative, we follow both paths from the root to the last common point and then build the result by concatenating the reverse of the path to the start with the path to the destination.

```haskell
connectedPath :: Eq a => [a] -> [a] -> Maybe [a]
connectedPath rootToStart rootToEnd =
    case pathPieces of
        Nothing -> Nothing
        Just (middle, middleToStart, middleToEnd) ->
            Just $ (reverse middleToStart) ++ [middle] ++ middleToEnd
    where pathPieces = distinctPathPieces rootToStart rootToEnd

distinctPathPieces :: Eq a => [a] -> [a] -> Maybe (a, [a], [a])
distinctPathPieces [x] [y] = if x == y then Just (x, [], []) else Nothing
distinctPathPieces (x1:y1:z1) (x2:y2:z2)
    | x1 /= x2 = Nothing
    | y1 /= y2 = Just (x1, y1:z1, y2:z2)
    | otherwise = distinctPathPieces (y1:z1) (y2:z2)
distinctPathPieces _ _ = Nothing
```

This solution heavily depends on the input describing a tree. In case a DAG is provided, a result will be produced that is not necessarily correct. For totalDecendantCount, nodes after joining branches will be counted multiple times, and path will find a path, but not necessarily the shortest one. If there are cycles in the graph provided, the recursions in the functions will not terminate.
## Simplification

In path, notice how the code gets more nested as you try each possible path (either from start to end, or end to start, or from end to root and root to start). You can use the Alternative instance for Maybe to simplify this code:

```haskell
let maybeStartEndPath = pathFromRoot childrenMap start end
    maybeEndStartPath = pathFromRoot childrenMap end start
    maybeRootPath = [...] -- see below
in maybeStartEndPath <|> fmap reverse maybeEndStartPath <|> maybeRootPath
```

This code will try maybeStartEndPath first. If it returns Nothing, it will move on to the next option and so on.

For your final case (which I've named maybeRootPath), you do the following check:

```haskell
if isNothing rootPathToStart || isNothing rootPathToEnd
    then Nothing
    else connectedPath (fromJust rootPathToStart) (fromJust rootPathToEnd)
```

This is more concisely done with liftA2 from Control.Applicative. liftA2 lifts a binary function into an applicative context:

```
λ :set -XTypeApplications
λ :t liftA2 @Maybe
liftA2 @Maybe :: (a -> b -> c) -> (Maybe a -> Maybe b -> Maybe c)
```

Then, if either argument is Nothing, the function will return Nothing without having to pattern match. So we can fill in maybeRootPath above with

```haskell
maybeRootPath = join $ liftA2 connectedPath rootPathToStart rootPathToEnd
-- with, in the same let block:
rootPathToStart = pathFromRoot childrenMap root start
rootPathToEnd = pathFromRoot childrenMap root end
```

The join is needed because connectedPath returns a Maybe already, and we've lifted it into Maybe, which leaves us with a return value of Maybe (Maybe [a]). join flattens nested monads, bringing us back to Maybe [a].

## Minor points

Your function applyToSecondElement is second from Control.Arrow:

```
λ :t second @(->)
second @(->) :: (b -> c) -> (d, b) -> (d, c)
```

toSingleElementList can also be written as (:[]) or return. So orbitMap can be written

```haskell
orbitMap = Map.fromListWith (++) . map (second (:[]))
```

To your credit, your naming made both of these functions clear anyway, but it's more recognizable if you use functions that already exist.

## Algorithm

I was going to suggest keeping each edge bidirectional instead of one-directional, so that you can directly check for a path from start to end instead of checking 3 cases. After reviewing the code, I think your approach is better from a functional perspective because it eliminates the need for you to check for cycles and keep a set as you search the graph. Good work.

## Revised Code

```haskell
import Control.Applicative
import Control.Monad
import Control.Arrow
import System.IO
import Data.List.Split
import Data.List
import Data.Maybe
import Data.Hashable
import qualified Data.HashMap.Strict as Map

main :: IO ()
main = do
    inputText <- readFile "Advent20191206_1_input.txt"
    let orbitList = catMaybes $ (map orbit . lines) inputText
    let orbits = orbitMap orbitList
    let pathToSanta = fromJust $ path orbits "COM" "YOU" "SAN"
    let requiredTransfers = length pathToSanta - 3
    print requiredTransfers

type OrbitSpecification = (String,String)
type ChildrenMap a = Map.HashMap a [a]

children :: (Eq a, Hashable a) => ChildrenMap a -> a -> [a]
children childrenMap = fromMaybe [] . flip Map.lookup childrenMap

orbit :: String -> Maybe OrbitSpecification
orbit str =
    case orbit_specification of
        [x,y] -> Just (x, y)
        _ -> Nothing
    where orbit_specification = splitOn ")" str

orbitMap :: [OrbitSpecification] -> ChildrenMap String
orbitMap = Map.fromListWith (++) . map (second (:[]))

childrenAggregate :: (Eq a, Hashable a) => ([a] -> b) -> ChildrenMap a -> a -> b
childrenAggregate aggregatorFnc childrenMap = aggregatorFnc . children childrenMap

decendantAggregate :: (Eq a, Hashable a) => (b -> b -> b) -> (ChildrenMap a -> a -> b) -> ChildrenMap a -> a -> b
decendantAggregate resultFoldFnc nodeFnc childrenMap node =
    foldl' resultFoldFnc nodeValue childResults
    where
        nodeValue = nodeFnc childrenMap node
        childFnc = decendantAggregate resultFoldFnc nodeFnc childrenMap
        childResults = map childFnc $ children childrenMap node

childrenCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int
childrenCount = childrenAggregate length

decendantCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int
decendantCount = decendantAggregate (+) childrenCount

totalDecendantCount :: (Eq a, Hashable a) => ChildrenMap a -> a -> Int
totalDecendantCount = decendantAggregate (+) decendantCount

pathFromRoot :: (Eq a, Hashable a) => ChildrenMap a -> a -> a -> Maybe [a]
pathFromRoot childrenMap root destination
    | destination == root = Just [root]
    | null childPaths = Nothing
    | otherwise = Just $ root:(head childPaths)
    where
        rootChildren = children childrenMap root
        pathFromNewRoot newRoot = pathFromRoot childrenMap newRoot destination
        childPaths = mapMaybe pathFromNewRoot rootChildren

path :: (Eq a, Hashable a) => ChildrenMap a -> a -> a -> a -> Maybe [a]
path childrenMap root start end =
    let maybeStartEndPath = pathFromRoot childrenMap start end
        maybeEndStartPath = pathFromRoot childrenMap end start
        rootPathToStart = pathFromRoot childrenMap root start
        rootPathToEnd = pathFromRoot childrenMap root end
        maybeRootPath = join $ liftA2 connectedPath rootPathToStart rootPathToEnd
    in maybeStartEndPath <|> fmap reverse maybeEndStartPath <|> maybeRootPath

connectedPath :: Eq a => [a] -> [a] -> Maybe [a]
connectedPath rootToStart rootToEnd =
    case pathPieces of
        Nothing -> Nothing
        Just (middle, middleToStart, middleToEnd) ->
            Just $ (reverse middleToStart) ++ [middle] ++ middleToEnd
    where pathPieces = distinctPathPieces rootToStart rootToEnd

distinctPathPieces :: Eq a => [a] -> [a] -> Maybe (a, [a], [a])
distinctPathPieces [x] [y] = if x == y then Just (x, [], []) else Nothing
distinctPathPieces (x1:y1:z1) (x2:y2:z2)
    | x1 /= x2 = Nothing
    | y1 /= y2 = Just (x1, y1:z1, y2:z2)
    | otherwise = distinctPathPieces (y1:z1) (y2:z2)
distinctPathPieces _ _ = Nothing
```

• Thank you very much! I already had the feeling that there had to be some function to apply a function to the second element, but did not know Arrow. I will need some time to look things up in order to fully understand your simplifications. But now I know some directions to look into to improve my Haskell. – M.Doerner Jan 13 '20 at 20:38
http://acronymattic.com/X-Ray-Diffraction-Studies-(XRDS).html
# What does XRDS stand for?

## XRDS stands for X Ray Diffraction Studies

This definition appears very frequently

## Samples in periodicals archive:

Recommended Citation. Brown, Keith H. and Barnett, J. Dean, "X-Ray Diffraction Studies on Liquids at Very High Pressures along the Melting Curve.

X-ray diffraction studies of dense fluids Citation. Honeywell, Wallace Irving (1964) X-ray diffraction studies of dense fluids. Dissertation (Ph.D.), California...

Citation. Rodriguez, Sergio Enrique (1964) X-ray diffraction studies of stable and supercooled liquid gallium. Dissertation (Ph.D.), California Institute of Technology.

X-ray diffraction studies of room and intermediate temperature phases of phosphate based solid acids. Kiran Vajrapu, University of Texas at El Paso

How to Cite. Schmitt, F. O., Bear, R. S. and Palmer, K. J. (1941), X-ray diffraction studies on the structure of the nerve myelin sheath. J. Cell.

The Rosalind Franklin Papers Title: The Structure of Turnip Yellow Mosaic Virus: X-Ray Diffraction Studies

Abstract X-ray diffraction studies of the crystalline constituents in the acid open hearth chrome-steel samples showed that chromium exists in these samples as a...

Diffraction Studies. X-ray Powder Diffraction X-ray powder diffraction (XRPD) is one of the most powerful methods for the study of crystalline and partially...

X-ray diffraction studies show that metallic lead crystallizes in a face-centered cubic structure. The edge length a of the unit cell is 4.94.

X-Ray Diffraction Studies of Copper Nanopowder. Uploaded by T. Thirugnanasamb...

Title: X-Ray Diffraction Studies: Melting of Pb Monolayers on Cu(110) Surfaces: Authors: Marra, W. C.; Fuoss, P. H.; Eisenberger, P. E. Affiliation...
Crystallization and preliminary X-ray diffraction studies of the calcium-binding protein CalD from Streptomyces coelicolor

X-ray diffraction studies of aqueous cadmium chloride solutions

X-RAY DIFFRACTION STUDIES 325 considerable success which has been achieved in interpreting the lysozyme electron-density map will dispel doubts concerning...

X-ray diffraction studies of the 0.5um fraction from the Brushy Basin Member of the Upper Jurassic Morrison Formation, Colorado Plateau Bulletin 1808-G

Get this from a library! X-ray diffraction studies of membranes. [Y K Levine]

Recommended Citation. Brown, Keith H. and Barnett, J. Dean, "X-Ray Diffraction Studies on Liquids at Very High Pressures along the Melting Curve II.

Get this from a library! X-ray diffraction studies of wool setting. [Barry Norman Hoschke]

Abstract. X-ray diffraction studies confirm that, with few exceptions, each skeletal element of echinoderms is a single crystal of magnesium-rich...

Crystallization of biological macromolecules for X-ray diffraction studies. Gary L Gilliland and Jane E Ladner. Advances in the crystallization of biological...

X-ray and neutron diffraction studies of the crystal and molecular structure of the predominant monocarboxylic acid obtained by mild acid hydrolysis of cyanocobalamin.

X-ray diffraction studies of the AlPdMn quasicrystal. Yi Zhang, Purdue University. Abstract. The purpose of this research was to study various properties of the $\rm...

1. X-ray diffraction studies of sartorius muscles of Rana pipiens were made in a new x-ray diffraction camera which permits exposures of 3 to 6 minutes.

Zippeites: Chemical Characterization and Powder X-ray Diffraction Studies of Synthetic and Natural Samples

The Rosalind Franklin Papers The DNA Riddle: King's; Franklin's fellowship proposal called for her to work on x-ray diffraction studies of proteins in solution.

Proc. Nat. Acad. Sci. USA Vol. 71, No. 5, pp.
1672-1676, May 1974 X-Ray Diffraction Studies of Nucleohistone: A Polyhelical Model of Chromosome Organization (1965) X-Ray Diffraction Studies of Soils and Soil Stabilizers, HR-106, 1965. Transportation, Department of Who conducted the X-ray diffraction studies that were key to the discovery of the structure of DNA? The X-ray diffraction studies conducted by _ were key to the discovery of the structure of DNA. Franklin...
https://www2.mathematik.tu-darmstadt.de/~disser/
# Prof. Dr. Yann Disser

RG Optimization, Department of Mathematics, Dolivostraße 15

office: S4|10 - 244
hours: Monday 10 – 11 (by arrangement)
tel.: (+49) 06151 / 16 – 25363
e-mail:

# Research Interests

combinatorial optimization, online algorithms, graph exploration, theory of optimization, computational complexity, incremental algorithms, approximation algorithms, network flows, robust optimization, geometric reconstruction

# Teaching

• Online Optimization (WS 2016/17, TU Darmstadt; WS 2014/15, TU Berlin): introduction to online optimization, list access problem, paging, randomized online algorithms, Yao's principle, online scheduling, metrical task systems, k-server problems, primal-dual method
• Discrete Optimization: mixed integer linear programs, polyhedral combinatorics, computational complexity, branch-and-bound, cutting planes, decomposition techniques, heuristics, approximation algorithms
• Combinatorial Optimization (SS 2016, TU Berlin; SS 2015, Uni Augsburg): shortest paths, dynamic programming, maximum flow, min-cost maximum flow, maximum matching, NP-completeness
• Seminars: SS 2018: Approximation Algorithms (TU Darmstadt); SS 2017: Conditional Complexity Bounds (TU Darmstadt); WS 2015/16: Graph Exploration (TU Berlin); SS 2015: Online Optimization (Uni Augsburg)

# Publications

46 results

## 2018

• A general lower bound for collaborative tree exploration (, , , and ) Theoretical Computer Science, . [bibtex]
• Scheduling maintenance jobs in networks (, , , , , , and ) Theoretical Computer Science, . [bibtex]
• Distance-preserving graph contractions (, , , , and ) In Proceedings of the 9th Innovations in Theoretical Computer Science conference (ITCS), pp. 51(14), .

## 2017

• Geometric reconstruction problems ( and ) Chapter in Handbook of Discrete and Computational Geometry, Third Edition (J.E. Goodman, J. O'Rourke, C.D. Tóth, eds.), CRC Press LLC, .
• Packing a knapsack of unknown capacity (, , and ) SIAM Journal on Discrete Mathematics, 31(3):1477-1497, .
• Collaborative delivery with energy-constrained mobile robots (, , , , , , and ) Theoretical Computer Science, . [bibtex] • Tight bounds for online TSP on the line (, , , , , , , and ) In Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 994-1005, . • General bounds for incremental maximization (, and ) In Proceedings of the 44th International Colloquium on Automata, Languages, and Programming (ICALP), pp. 43(14), . • Robust and adaptive search ( and ) In Proceedings of the 34th International Symposium on Theoretical Aspects of Computer Science (STACS), pp. 26(14), . • Energy-efficient delivery by heterogenous mobile agents (, , , , , and ) In Proceedings of the 34th International Symposium on Theoretical Aspects of Computer Science (STACS), pp. 10(14), . • A general lower bound for collaborative tree exploration (, , , and ) In Proceedings of the 24th International Colloquium on Structural Information and Communication Complexity (SIROCCO), pp. 125–139, . • Scheduling maintenance jobs in networks (, , , , , , and ) In Proceedings of the 10th International Conference on Algorithms and Complexity (CIAC), pp. 19–30, . ## 2016 • Degree-constrained orientations of embedded graphs ( and ) Journal of Combinatorial Optimization, 3:758-773, . • Undirected graph exploration with ${\Theta}(\log\log n)$ pebbles (, and ) In Proceedings of the 27th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 25-39, . • Scheduling transfers of resources over time: Towards car-sharing with flexible drop-offs (, , and ) In Proceedings of the 12th Latin American Theoretical Informatics Symposium (LATIN), pp. 220–234, . [bibtex] • Collaborative delivery with energy-constrained mobile robots (, , , , , , and ) In Proceedings of the 23rd International Colloquium on Structural Information and Communication Complexity (SIROCCO), pp. 258-274, . 
## 2015 • Mapping simple polygons: The power of telling convex from reflex (, , , and ) ACM Transactions on Algorithms, 11:33(16), . • Improving the ${H}_k$-Bound on the price of stability in undirected shapley network design games (, , and ) Theoretical Computer Science, 562:557-564, . • Fast collaborative graph exploration (, , , and ) Information and Computation, 243:37-49, . • The simplex algorithm is NP-mighty ( and ) In Proceedings of the 26th ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 858-872, . • Scheduling bidirectional traffic on a path (, and ) In Proceedings of the 42nd International Colloquium on Automata, Languages, and Programming (ICALP), pp. 406–418, . • Interval selection on unrelated machines (, , , and ) In Proceedings of the 12th Workshop on Models and Algorithms for Planning and Scheduling Problems (MAPSP), . [bibtex] • Max shortest path for imprecise points (, and ) In Proceedings of the 30th European Workshop on Computational Geometry (EuroCG), . [bibtex] ## 2014 • Mapping a polygon with holes using a compass (, , and ) Theoretical Computer Science, 553:106-113, . • Packing a knapsack of unknown capacity (, , and ) In Proceedings of the 31st Symposium on Theoretical Aspects of Computer Science (STACS), pp. 276-287, . • Rectilinear shortest path and rectilinear minimum spanning tree with neighborhoods (, , and ) In Proceedings of the International Symposium on Combinatorial Optimization (ISCO), pp. 208-220, . • The minimum feasible tileset problem (, and ) In Proceedings of the 12th Workshop on Approximation and Online Algorithms (WAOA), pp. 144–155, . ## 2013 • Mapping simple polygons: How robots benefit from looking back (, , , and ) Algorithmica, 65:43-59, . • Simple agents learn to find their way: an introduction on mapping polygons (, , , and ) Discrete Applied Mathematics, 161:1287-1307, . 
• Fast collaborative graph exploration (, , , and ) In Proceedings of the 40th International Colloquium on Automata, Languages and Programming (ICALP), pp. 520–532, . • Interval selection with machine-dependent intervals (, , and ) In Proceedings of the 13th International Algorithms and Data Structures Symposium (WADS), pp. 170-181, . • Polygon-constrained motion planning problems (, , , , and ) In Proceedings of the 9th International Symposium on Algorithms for Sensor Systems, Wireless Ad Hoc Networks and Autonomous Mobile Entities (ALGOSENSORS), pp. 67-82, . • Improving the ${H}_k$-bound on the price of stability in undirected Shapley network design games (, , and ) In Proceedings of the 8th International Conference on Algorithms and Complexity (CIAC), pp. 158-169, . ## 2012 • Reconstructing visibility graphs with simple robots (, , , , and ) Theoretical Computer Science, 444:52-59, . • Mapping a polygon with holes using a compass (, , and ) In Proceedings of the 8th International Symposium on Algorithms for Sensor Systems, Wireless Ad Hoc Networks and Autonomous Mobile Entities (ALGOSENSORS), pp. 78-89, . • Degree-constrained orientations of embedded graphs ( and ) In Proceedings of the 23rd International Symposium on Algorithms and Computation (ISAAC), pp. 506-516, . • Mapping polygons with agents that measure angles (, and ) In Proceedings of the 10th International Workshop on the Algorithmic Foundations of Robotics (WAFR), pp. 415-425, . ## 2011 • Mapping polygons () PhD thesis, ETH Zurich, . • A polygon is determined by its angles (, and ) Computational Geometry: Theory and Applications, 44:418-426, . • Telling convex from reflex allows to map a polygon (, , , and ) In Proceedings of the 28th International Symposium on Theoretical Aspects of Computer Science (STACS), pp. 153-164, . ## 2010 • Reconstructing a simple polygon from its angles (, and ) In Proceedings of the 12th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT), pp. 13-24, . 
• How simple robots benefit from looking back (, , , and ) In Proceedings of the 7th International Conference on Algorithms and Complexity (CIAC), pp. 229-239, . ## 2009 • Reconstructing visibility graphs with simple robots (, , , , and ) In Proceedings of the 16th International Colloquium on Structural Information and Communication Complexity (SIROCCO), pp. 87-99, . • On the limitations of combinatorial visibilities (, , , , and ) In Proceedings of the 25th European Workshop on Computational Geometry (EuroCG), pp. 207-210, . [bibtex] ## 2008 • Local realism, detection efficiencies, and probability polytopes (, , and ) Physical Review A, 73:032116(8), . • Multi-criteria shortest paths in time-dependent train networks (, and ) In Proceedings of the 7th International Workshop on Experimental Algorithms (WEA), pp. 347-361, . # Brief CV • Assistant Professor (tenure track) • PostDoc, Habilitation (Mathematics) TU Berlin, 2012-2016 • Visiting Professor Augsburg University, summer 2015 • PostDoc ETH Zurich, 2011-2012 • PhD (Theoretical Computer Science) ETH Zurich, 2008-2011 • MSc (Physics)
http://gradestack.com/JEE-Main-2015-Complete/Work-Energy-and-Power/Work-Done-in-Conservative/19493-3784-40903-study-wtw
# Conservative force

• In a conservative field, the work done by the force (the line integral of the force, i.e., ∫ F·dr) is independent of the path followed between any two points. From Fig. 6, the work done from A to B is the same along either path.

Fig. 6

• In a conservative field, the work done by the force (the line integral of the force) over a closed path/loop is zero. From Fig. 7, WA→B + WB→A = 0, i.e., ∮ F·dr = 0.

Fig. 7

Example: Electrostatic forces, gravitational forces, elastic forces, magnetic forces, etc., and all the central forces are conservative in nature.

# Non-conservative forces

A force is said to be non-conservative if the work done by or against the force in moving a body from one position to another depends on the path followed between these two positions; over a complete cycle, this work done can never be zero.

Example: Frictional force, viscous force, air drag, etc.

Note: Work depends on the frame of reference. With a change of (inertial) frame of reference, the force does not change while the displacement may change. So, the work done by a force will be different in different frames.
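As a small numeric illustration of these definitions (an added sketch, not part of the original text; the mass, friction coefficient, and paths are made-up values), the code below computes the work done by gravity and by kinetic friction along two different piecewise-straight paths between the same endpoints. Gravity, being conservative, gives the same answer on both paths; friction does not.

```python
# Illustrative check: work by gravity is path-independent, work by kinetic
# friction is not. Assumed values: m = 2 kg, g = 9.8 m/s^2, mu = 0.3.
# Friction is simplified to a constant magnitude mu*m*g opposing motion.
m, g, mu = 2.0, 9.8, 0.3

def work_gravity(points):
    # W = sum of F . dr with F = (0, -m*g); only the net height change matters
    w = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        w += -m * g * (y2 - y1)
    return w

def work_friction(points):
    # Friction opposes motion along the path, so W = -mu*m*g * (path length)
    length = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        length += ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    return -mu * m * g * length

straight = [(0, 0), (3, 4)]          # direct path, length 5
detour   = [(0, 0), (3, 0), (3, 4)]  # same endpoints, length 7

print(work_gravity(straight), work_gravity(detour))   # equal: conservative
print(work_friction(straight), work_friction(detour)) # differ: non-conservative
```

Running this shows equal gravity values for both paths, while the friction values scale with the path length, which is exactly the path dependence the text describes.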
https://www.ncbi.nlm.nih.gov/pubmed/1253828
Eur J Pharmacol. 1976 Jan;35(1):7-16.

# Anticonvulsant activity of delta9-tetrahydrocannabinol compared with three other drugs.

### Abstract

Delta9-tetrahydrocannabinol (THC) was compared with diphenylhydantoin (DPH), phenobarbital (PB) and chlordiazepoxide (CDP) using several standard laboratory procedures to determine anticonvulsant activity in mice, i.e., the maximal electroshock test (MES), and seizures induced by pentylenetetrazol, strychnine and nicotine. In the MES test, THC was the least potent and DPH the most potent blocker of hind limb tonic extensor convulsions, whereas THC was the most potent and DPH the least potent in increasing the latency to this response and in preventing mortality. Seizures and mortality induced by pentylenetetrazol or by strychnine were enhanced by THC and DPH and were blocked by PB and CDP. In the test with nicotine, none of the four anticonvulsant agents prevented seizures; DPH was the only one which failed to increase latency; THC and DPH were less potent than PB and CDP in preventing mortality. THC most closely resembled DPH in the tests with chemical convulsant agents, but a sedative action of THC, resembling that of PB and CDP, was indicated by low ED50 for increased latency and for prevention of mortality in the MES test.

PMID: 1253828 DOI: 10.1016/0014-2999(76)90295-8 [Indexed for MEDLINE]
https://hubpages.com/education/The-Normal-Distribution-And-Standard-Score
# The Normal Distribution And Standard Normal Score

The Normal Distribution

Part One: Basic definitions and concepts

The normal distribution is a bell-shaped pattern of variation that was identified by the German mathematician and astronomer Carl Gauss (1777-1855). Gauss observed a recurring pattern of errors in repeated measurements of the same object. As an illustration, take the example of a machinist who was asked to measure the diameter of 100 similar bolts, one at a time, to a precision of 0.0001 inch with a micrometer. Further, suppose that, unknown to the machinist, all measurements were made on the same bolt. Most of the reported values would tend to cluster closely about some central value, with perhaps a few observations of somewhat higher or lower values. Many measurements, such as the diameters of bolts produced to the same specification, have an overall bell-shaped pattern of error. This pattern of error, which was initially attributed to chance, is called the "Gaussian" or "normal" curve. As the theory of statistical inference has developed over the past 200 years, the normal distribution has played a key role. There are several reasons for this. First, many empirical data are by nature normally distributed; examples are the diameters of bolts, weights of packages of food, and SAT scores. Second, data that are not normally distributed can often be analyzed via the normal distribution, provided that a proper sampling technique is employed and that the question is about a mean value or a population proportion. Third, the normal distribution has been made extremely easy to work with by virtue of a coding process that converts any particular normal distribution to the standard normal distribution. This is sufficiently common that some hand calculators have a normal distribution function programmed into their logic. Fourth, the normal distribution can be effectively manipulated and used for making inferences.
The Standard Normal Distribution

We can evaluate areas for all normal distributions by making one simple conversion to the standard normal distribution, or Z-form. The normal or Gaussian distribution is a bell-shaped curve whose exact shape is defined by two parameters: its mean µ and its standard deviation sd. The standard normal distribution is a continuous, symmetrical, bell-shaped distribution that has a mean of 0 and a standard deviation of 1. These two parameters, mean and standard deviation, thus define the specific normal distribution. Because the normal distribution is symmetrical, half of the area under the curve lies to the left of the central value and half lies to the right. The mean, median and mode are all the same central value, which for the standard normal distribution is zero. Although all normal distributions are characterized by the familiar bell shape, each one is distinguished by the location of its center and by its spread, or variation, about the mean.

The Standard Normal Score

The standard normal score, or Z-score, is a simple coding device for converting any normal distribution to the standard normal distribution. The Z-score is a measure of the number of standard deviations that any point lies from the mean.

Computing the Z-score

The formula for the standard score or Z-score is:

Z = (X - µ) / sd

where X = any raw score, µ = mean of the scores, and sd = standard deviation.

To illustrate the computation and the nature of standard scores, take the following scores, which are part of a distribution with a mean of 60 and a standard deviation of 10.

| X | x (deviation) | Z |
|-----|------|-------|
| 70 | 10 | 1.00 |
| 60 | 0 | 0.00 |
| 50 | -10 | -1.00 |
| 54 | -6 | -0.60 |
| 46 | -14 | -1.40 |

In the first column we have the raw scores (X). The mean is subtracted from each of these, and this deviation from the mean, x, is divided by the standard deviation to change the deviation values into standard-score values. The raw score of 60 is at the mean.
There is no deviation, hence the standard score is zero. A raw score of 70 is 1 standard deviation above the mean:

(70 - 60) / 10 = 1.00

When we change raw scores to standard scores, we are expressing them in standard deviation units. These standard scores tell us how many standard deviation units any given raw score deviates from the mean. Since three standard deviations on either side of the mean include practically all of the cases, it follows that the highest Z-score usually encountered is +3 and the lowest is -3. We can describe the distribution of Z-scores by saying that it has a mean of 0 and a standard deviation of 1. Thus, any time we see a standard score, we should be able to place exactly where an individual falls in a distribution. A student with a Z-score of 2.5 is 2.5 standard deviations above the mean on that test distribution and has a very good score. If another student got a Z-score of 0.5, it means that this student scored 0.5 standard deviations above the mean and therefore performed slightly above average. Standard scores are equal units of measurement and hence can be manipulated mathematically.

Sample Problem: An apprentice plumber wants to solder a copper pipe section within an acceptable time in order to qualify as a professional pipe fitter. The requirement is that the joint be completed within one standard deviation of the professional standard time, which has mean = 0.7 minutes and standard deviation = 0.2 minutes. Soldering times are normally distributed. In practice soldering, the apprentice's times range from 0.6 minutes to 1.1 minutes, with sd = 0.2 minutes. Is the apprentice ready for the test?

Solution: Compute the number of standard deviations that 0.6 minutes and 1.1 minutes are from the mean by using the formula:

Z = (X - µ) / sd

The mean is at the center (zero point) on the Z-scale.
The 0.6-minute time is one half standard deviation below the mean, as shown:

Z = (0.6 - 0.7) / 0.2 = -0.5

So the lower limit of the apprentice's time is satisfactory. The computation for the 1.1-minute time is shown below:

Z = (1.1 - 0.7) / 0.2 = +2.0

This is two standard deviations above the professional standard and therefore is not satisfactory. The apprentice needs more practice to ensure a time within one standard deviation of the 0.7-minute requirement.

SOURCES:
Statistics for Business and Economics by John A. Ingram and Joseph G. Monks
Basic Statistical Methods by N. M. Downie and Robert W. Heath
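The Z-score conversions above can be sketched in a few lines of Python; the mean of 60, the standard deviation of 10, and the plumber's figures (mean 0.7 min, sd 0.2 min) are all taken from the text:

```python
# Z-score: number of standard deviations that a raw score lies from the mean.
def z_score(x, mean, sd):
    return (x - mean) / sd

# Worked table from the text: distribution with mean 60, sd 10.
scores = [70, 60, 50, 54, 46]
zs = [z_score(x, 60, 10) for x in scores]
# zs == [1.0, 0.0, -1.0, -0.6, -1.4]

# Apprentice plumber's problem: professional standard mean 0.7 min, sd 0.2 min.
low = z_score(0.6, 0.7, 0.2)   # about -0.5: within one sd, satisfactory
high = z_score(1.1, 0.7, 0.2)  # about +2.0: outside one sd, needs practice
```

A score's Z-value immediately tells you whether it falls inside any "within k standard deviations" requirement, which is exactly the test applied to the apprentice.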
• #### Influence of solar wind energy flux on the interannual variability of ENSO in the subsequent year  (Peer reviewed; Journal article, 2018) Previous studies have tended to adopt the quasi-decadal variability of the solar cycle (e.g. sunspot number (SSN) or solar radio flux at 10.7 cm (F10.7)) to investigate the effect of solar activity on El Niño–Southern ...
• #### Solar-wind–magnetosphere energy influences the interannual variability of the northern-hemispheric winter climate  (Journal article; Peer reviewed, 2020) Solar irradiance has been universally acknowledged to be dominated by quasi-decadal variability, which has been adopted frequently to investigate its effect on climate decadal variability. As one major terrestrial energy ...
4.5: Health Consequences and Benefits of High-Carbohydrate Diets

Learning Objectives

• Identify the health benefits of eating a diet rich in whole grains.

Can America blame its obesity epidemic on the higher consumption of added sugars and refined grains? This is a hotly debated topic in both the scientific community and the general public. In this section, we will give a brief overview of the scientific evidence. The Food and Nutrition Board of the Institute of Medicine (IOM) defines added sugars as "sugars and syrups that are added to foods during processing or preparation." The IOM goes on to state, "Major sources of added sugars include soft drinks, sports drinks, cakes, cookies, pies, fruitades, fruit punch, dairy desserts, and candy." Processed foods, even microwaveable dinners, also contain added sugars. Added sugars do not include sugars that occur naturally in whole foods (such as an apple), but do include natural sugars such as brown sugar, corn syrup, dextrose, fructose, fruit juice concentrates, maple syrup, sucrose, and raw sugar that are then added to create other foods (such as cookies). Currently, nutrition labels do not distinguish between added and naturally occurring sugars and give only the total sugar content, making it difficult for consumers to determine their consumption of added sugars. Results from a survey of forty-two thousand Americans report that in 2008 the average intake of added sugars was 15 percent of total calories, a drop from 18 percent of total calories in 2000.Welsh J. A. et al. "Consumption of Added Sugars Is Decreasing in the United States." Am J Clin Nutr 94, no. 3 (2011): 726–34. http://www.ncbi.nlm.nih.gov/pubmed/21753067. This is still above the recommended intake of less than 10 percent of total calories. The US Department of Agriculture (USDA) reports that sugar consumption in the American diet in 2008 was, on average, 28 teaspoons per day (Figure $$\PageIndex{1}$$).
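As a rough check of scale, the USDA's 28-teaspoon figure can be converted into calories. The sketch below assumes roughly 16 calories per teaspoon of sugar and a 2,000-calorie reference diet; both are common rules of thumb, not figures from the surveys cited above, and the two statistics come from different surveys, so the resulting share need not match the 15 percent estimate exactly.

```python
# Assumed rule-of-thumb values (not from the cited surveys):
CAL_PER_TSP = 16          # roughly 16 calories per teaspoon of sugar
REFERENCE_DIET = 2000     # calories in a common reference diet

teaspoons_per_day = 28    # USDA 2008 estimate quoted in the text
sugar_calories = teaspoons_per_day * CAL_PER_TSP   # 448 calories per day
share = sugar_calories / REFERENCE_DIET            # about 0.22 of the diet
```

Either way the arithmetic makes the same point: a quarter-cup-scale daily sugar intake is well above the recommended limit of 10 percent of total calories.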
To understand the magnitude of the health problem in the United States consider this—in the United States approximately 130 million adults are overweight, and 30 percent of them are considered obese. The obesity epidemic has reached young adults and children and will markedly affect the prevalence of serious health consequences in adulthood. Health consequences linked to being overweight or obese include Type 2 diabetes, cardiovascular disease, arthritis, depression, and some cancers. An infatuation with sugary foods and refined grains likely contributes to the epidemic proportion of people who are overweight or obese in this country, but so do the consumption of high-calorie foods that contain too much saturated fat and the sedentary lifestyle of most Americans. There is much disagreement over whether high-carbohydrate diets increase weight gain and disease risk, especially when calories are not significantly higher between compared diets. Many scientific studies demonstrate positive correlations between diets high in added sugars and weight gain and disease risk, but some others do not show a significant relationship. In regard to refined grains, there are no studies that show consumption of refined grains increases weight gain or disease risk. What is clear, however, is that getting more of your carbohydrates from dietary sources containing whole grains instead of refined grains stimulates weight loss and reduces disease risk. A major source of added sugars in the American diet is soft drinks. There is consistent scientific evidence that consuming sugary soft drinks increases weight gain and disease risk. An analysis of over thirty studies in the American Journal of Clinical Nutrition concluded that there is much evidence to indicate higher consumption of sugar-sweetened beverages is linked with weight gain and obesity.Malik, V. S., M. B. Schulze, and F. B. Hu. "Intake of Sugar-Sweetened Beverages and Weight Gain: A Systematic Review." Am J Clin Nutr 84, no.
2 (2006): 274–88. www.ajcn.org/content/84/2/274.long. A study at the Harvard School of Public Health linked the consumption of sugary soft drinks to an increased risk for heart disease.Harvard School of Public Health. "Public Health Takes Aim at Sugar and Salt." Accessed September 30, 2011. www.hsph.harvard.edu/news/hph...-and-salt.html. While the sugar and refined grains and weight debate rages on, the results of all of these studies have led some public health organizations, like the American Heart Association (AHA), to recommend an even lower intake of sugar per day (fewer than 9 teaspoons per day for men and fewer than 6 teaspoons for women) than what used to be deemed acceptable. After its 2010 scientific conference on added sugars, the AHA made the following related dietary recommendations:

• First, know the number of total calories you should eat each day.
• Consume an overall healthy diet and get the most nutrients for the calories, using foods high in added sugars as discretionary calories (those left over after getting all recommended nutrients subtracted from the calories used).
• Lower sugar intake, especially when the sugars in foods are not tied to positive nutrients such as in sugary drinks, candies, cakes, and cookies.
• Focus on calories in certain food categories such as beverages and confections, and encourage consumption of positive nutrients and foods such as cereals and low-fat or fat-free dairy products.Van Horn, L. et al. "Added Sugars and Health." Research reviewed at the AHA Added Sugars Conference, 2010. Circulation 122 (2010): 2470–90. doi: 10.1161/CIR.0b013e3181ffdcb0.

The Most Notorious Sugar

Before high-fructose corn syrup (HFCS) was marketed as the best food and beverage sweetener, sucrose (table sugar) was the number-one sweetener in America. (Recall that sucrose, or table sugar, is a disaccharide consisting of one glucose unit and one fructose unit.)
HFCS also contains the simple sugars fructose and glucose, but with fructose at a slightly higher concentration. In the production of HFCS, corn starch is broken down to glucose and fructose, and some of the glucose is then converted to fructose. Fructose is sweeter than glucose; hence many food manufacturers choose to sweeten foods with HFCS. HFCS is used as a sweetener for carbonated beverages, condiments, cereals, and a great variety of other processed foods. Some scientists, public health personnel, and healthcare providers believe that fructose is the cause of the obesity epidemic and its associated health consequences. The majority of their evidence stems from the observation that since the early 1970s the number of overweight or obese Americans has dramatically increased and so has the consumption of foods containing HFCS. However, as discussed, so has the consumption of added sugars in general. Animal studies that fuel the fructose opponents show fructose is not used to produce energy in the body; instead it is mostly converted to fat in the liver—potentially contributing to insulin resistance and the development of Type 2 diabetes. Additionally, fructose does not stimulate the release of certain appetite-suppressing hormones, like insulin, as glucose does. Thus, a diet high in fructose could potentially stimulate fat deposition and weight gain. In human studies, excessive fructose intake has sometimes been associated with weight gain, but results are inconsistent. Moderate fructose intake is not associated with weight gain at all. Moreover, other studies show that some fructose in the diet actually improves glucose metabolism especially in people with Type 2 diabetes.Elliott, S. S. et al. “Fructose, Weight Gain, and the Insulin Resistance Syndrome.” Am J Clin Nutr 76, no. 5 (2002): 911–22. www.ajcn.org/content/76/5/911.full. In fact, people with diabetes were once advised to use fructose as an alternative sweetener to table sugar. 
Overall, there is no good evidence that moderate fructose consumption contributes to weight gain and chronic disease. At this time conclusive evidence is not available on whether fructose is any worse than any other added sugar in increasing the risk for obesity, Type 2 diabetes, and cardiovascular disease.

Interactive $$\PageIndex{1}$$

The USDA is in the process of developing a database on the added sugars in many different foods and has made the information accessible. You might be frightened by what you discover when perusing it. For instance, one 6-ounce container (170 grams) of flavored yogurt contains 20 grams (5 teaspoons) of added sugars.

Oral Disease

Oral health refers not only to healthy teeth and gums, but also to the health of all the supporting tissues in the mouth such as ligaments, nerves, jawbone, chewing muscles, and salivary glands. Over ten years ago the Surgeon General produced its first report dedicated to oral health, stating that oral health and health in general are not separate entities.Surgeon General. "National Call to Action to Promote Oral Health." Accessed September 30, 2011. www.surgeongeneral.gov/librar...ltoaction.html. Instead, oral health is an integral part of overall health and well-being. Soft drinks, sports drinks, candies, desserts, and fruit juices are the main sources of "fermentable sugars" in the American diet. (Fermentable sugars are those that are easily metabolized by bacteria in a process known as fermentation. Glucose, fructose, and maltose are three examples.) Bacteria that inhabit the mouth metabolize fermentable sugars and starches in refined grains to acids that erode tooth enamel and deeper bone tissues. The acid creates holes (cavities) in the teeth that can be extremely painful (Figure $$\PageIndex{2}$$). Gums are also damaged by the acids produced by bacteria, leading to gingivitis (characterized by inflamed and bleeding gums).
Saliva is actually a natural mouthwash that neutralizes the acids and aids in building up teeth that have been damaged.

Figure $$\PageIndex{2}$$: Gingivitis. One way to prevent gingivitis and subsequent tooth decay is to lower consumption of sugary drinks. Gingivitis before (top) and after (bottom) a thorough mechanical debridement of the teeth. (Public domain.)

According to Healthy People 2010, 23 percent of US children have cavities by the age of four, and by second grade, one-half of all children in this country have at least one cavity.Continuing MCH Education in Oral Health. "Oral Health and Health Care." Accessed September 30, 2011. http://ccnmtl.columbia.edu/projects/otm/index.html. Cavities are an epidemic health problem in the United States and are associated with poor diet, but other contributors include poor dental hygiene and lack of access to regular oral health care. A review in Academic Pediatrics reports that "frequent consumption of fast-releasing carbohydrates, primarily in the form of dietary sugars, is significantly associated with increased dental caries risk."Mobley C., PhD, et al. "The Contribution of Dietary Factors to Dental Caries and Disparities in Caries." Acad Pediatr 9, no. 6 (2009): 410–14. doi: 10.1016/j.acap.2009.09.008. In regard to sugary soft drinks, the American Dental Association says that drinking sugary soft drinks increases the risk of decay formation.American Dental Association. "Diet and Oral Health." Accessed September 30, 2011. www.ada.org/2984.aspx#eatoothdecay.

Interactive $$\PageIndex{2}$$

The Harvard School of Public Health Nutrition Source has developed a guide called "How Sweet Is It?" that notes the calories and sugar contents of many popular beverages. Visit the site to determine drinks that are better for your oral and overall health.
http://www.hsph.harvard.edu/nutritionsource/files/how-sweet-is-it-color.pdf

Tools for Change

Save your teeth and gums and choose to drink a beverage that does not contain excess added sugars. An idea: brew some raspberry tea, add some sparkling mineral water, a raspberry or two, some ice, and a mint leaf. Then sit back and refresh.

Do Low-Carbohydrate Diets Affect Health?

Since the early 1990s, marketers of low-carbohydrate diets have bombarded us with the idea that eating fewer carbohydrates promotes weight loss and that these diets are superior to others in their effects on weight loss and overall health. The most famous of these low-carbohydrate diets is the Atkins diet. Others include the "South Beach" diet, the "Zone" diet, and the "Earth" diet. Despite the claims these diets make, there is little scientific evidence that low-carbohydrate diets are significantly better than other diets in promoting long-term weight loss. A study in Nutrition Journal concluded that all diets (independent of carbohydrate, fat, and protein content) that incorporated an exercise regimen significantly decreased weight and waist circumference in obese women.Kerksick, C. M. et al. "Changes in Weight Loss, Body Composition, and Cardiovascular Disease Risk after Altering Macronutrient Distributions During a Regular Exercise Program in Obese Women." J Nutr 9, no. 59 (2010). doi: 10.1186/1475-2891-9-59. Some studies do provide evidence that, in comparison to other diets, low-carbohydrate diets improve insulin levels and other risk factors for Type 2 diabetes and cardiovascular disease. The overall scientific consensus is that consuming fewer calories in a balanced diet will promote health and stimulate weight loss, with significantly better results achieved when combined with regular exercise.
Health Benefits of Whole Grains in the Diet

While excessive consumption of fast-releasing carbohydrates is potentially bad for your health, consuming more slow-releasing carbohydrates is extremely beneficial to health. There is a wealth of scientific evidence supporting that replacing refined grains with whole grains decreases the risk for obesity, Type 2 diabetes, and cardiovascular disease. Whole grains are great dietary sources of fiber, vitamins, minerals, healthy fats, and a vast amount of beneficial plant chemicals, all of which contribute to the effects of whole grains on health. Americans typically do not consume the recommended amount of whole grains, which is to get 50 percent or more of grains from whole grains (Figure $$\PageIndex{3}$$). Diets high in whole grains have repeatedly been shown to decrease weight. A large group of studies supports that consuming more than two servings of whole grains per day reduces one's chances of getting Type 2 diabetes by 21 percent.de Munter, J. S. L. et al. "Whole Grain, Bran, and Germ Intake and Risk of Type 2 Diabetes: A Prospective Cohort Study and Systematic Review." PLoS Medicine, no. 8 (2007): e261. doi: 10.1371/journal.med.0040261. The Nurses' Health Study found that women who consumed two to three servings of whole grain products daily were 30 percent less likely to have a heart attack.Liu, S. et al. "Whole-Grain Consumption and Risk of Coronary Heart Disease: Results from the Nurses' Health Study." Am J Clin Nutr 70, no. 3 (1999): 412–19. http://www.ajcn.org/content/70/3/412.long. The AHA makes the following statements on whole grains:

• "Dietary fiber from whole grains, as part of an overall healthy diet, helps reduce blood cholesterol levels and may lower risk of heart disease."
• "Fiber can help you feel full, so you'll be satisfied with less calories" American Heart Association.
"Whole Grains, Refined Grains and Dietary Fiber"

Colon Health

A substantial health benefit of whole grain foods is that fiber actively supports digestion and optimizes colon health. (This can be more specifically attributed to the insoluble fiber content of whole grains.) There is good evidence supporting that insoluble fiber prevents the irritating problem of constipation and the development of diverticulosis and diverticulitis. Diverticulosis is a benign condition characterized by out-pocketings of the colon. Diverticulitis occurs when the out-pocketings in the lining of the colon become inflamed. Interestingly, diverticulitis did not make its medical debut until the early 1900s, and in 1971 it was attributed to a deficiency of whole-grain fiber. According to the National Digestive Diseases Information Clearinghouse, 10 percent of Americans over the age of forty have diverticulosis, and 50 percent of people over the age of sixty have the disorder.National Digestive Diseases Information Clearinghouse, a service of National Institute of Diabetes and Digestive and Kidney Diseases, National Institute of Health. "Diverticulosis and Diverticulitis." NIH Publication No. 08-1163 (July 2008). digestive.niddk.nih.gov/ddise...iverticulosis/. Ten to 25 percent of people who have diverticulosis go on to develop diverticulitis.National Digestive Diseases Information Clearinghouse, a service of National Institute of Diabetes and Digestive and Kidney Diseases, National Institute of Health. "Diverticulosis and Diverticulitis." NIH Publication No. 08-1163 (July 2008). Symptoms include lower abdominal pain, nausea, and alternating between constipation and diarrhea.

Figure $$\PageIndex{4}$$: Diverticulitis: A Disease of Fiber Deficiency

The chances of developing diverticulosis can be reduced with fiber intake because of what the breakdown products of the fiber do for the colon. The bacterial breakdown of fiber in the large intestine releases short-chain fatty acids.
These molecules have been found to nourish colonic cells, inhibit colonic inflammation, and stimulate the immune system (thereby providing protection of the colon from harmful substances). Additionally, the fiber that bacteria cannot digest, which is mostly insoluble, increases stool bulk and softness, shortening transit time in the large intestine and facilitating the elimination of feces. One phenomenon of consuming foods high in fiber is increased gas, since the byproducts of bacterial digestion of fiber are gases. Some studies have found a link between high dietary-fiber intake and a decreased risk for colon cancer. However, an analysis of several studies, published in the Journal of the American Medical Association in 2005, did not find that dietary-fiber intake was associated with a reduction in colon cancer risk.Park, Y. et al. "Dietary Fiber Intake and Risk of Colorectal Cancer." JAMA 294, no. 22 (2005): 2849–57. doi: 10.1001/jama.294.22.2849. There is some evidence that specific fiber types (such as inulin) may protect against colon cancer, but more studies are needed to conclusively determine how certain fiber types (and at what dose) inhibit colon cancer development.

Key Takeaways

• Whole grain dietary sources stimulate weight loss and reduce disease risk. Excessive high-fructose consumption has been shown to cause weight gain. A primary source of added sugars in the American diet is sugary soft drinks.
• While excessive consumption of some fast-releasing carbohydrates and refined grains is potentially bad for your health, consuming whole grains made up of nutrient-dense slow-releasing carbohydrates is extremely beneficial to health.

Discussion Starters

1. Have a debate in your classroom on the USDA restriction on the sale of carbonated beverages in schools. Find out more information on this topic by reading "Soft Drinks and School-Age Children: Trends, Effects, Solutions," developed by the North Carolina School Nutrition Action Committee.
2.
Learn about the “Australian Paradox:” How decreased sugar consumption paralleled increased rates of overweight and obese people. Read the study and have a classroom debate over the weight of evidence that supports that diets high in added sugars actually increase weight gain. http://www.mdpi.com/2072-6643/3/4/491/pdf
# Quantum Mechanical Position, Momentum and Fourier Transform

For people familiar with quantum physics, it is a known fact that a state $\left|\Psi\right\rangle$ can be represented in either the 'position' basis $x$ or the 'momentum' basis $p$. We define the two wavefunctions via the inner products

$\left\langle x | \Psi \right \rangle = \Psi(x)$ and $\left\langle p | \Psi \right \rangle = \tilde{\Psi}(p)$

The point to be noted is that momentum is the variable canonically conjugate to position (the momentum operator generates translations in $x$). As a consequence, the two representations are related by the Fourier transform:

$\tilde{\Psi}(p) = \mathfrak{F}\{\Psi(x)\}, \quad \Psi(x) = \mathfrak{F}^{-1}\{\tilde{\Psi}(p)\}$

This can be shown as follows. In the position representation, the momentum operator is $\hat{p} = -i\hbar\frac{\partial}{\partial x}$. To find the eigenstates of $\hat{p}$, write $\langle x|p\rangle = f_p(x)$, so that

$-i\hbar\frac{\partial}{\partial x}f_p(x)=pf_p(x)$

which yields $f_p(x) = e^{ipx/\hbar}$ (up to a normalization constant). Now, to pass from one basis to the other, insert a complete set of position states:

$\langle p|\Psi\rangle= \int \langle p|x\rangle\langle x|\Psi\rangle dx$

or

$\tilde{\Psi}(p) = \int e^{-ipx/\hbar}\Psi(x)dx$

which is a Fourier transform. Thus $\Psi(x)$ and $\tilde{\Psi}(p)$ form a Fourier pair. Simply speaking, a wavefunction expressed in the basis conjugate to a given basis is the Fourier transform of the wavefunction expressed in that basis.

Now consider the signal-processing scenario, in which the representation of a 'signal' $f$ is effectively a 'measurement' in either the 'time' $t$ or 'frequency' $\omega$ basis. The correspondence between the two models is

$x \mapsto t$, $\frac{p}{\hbar} \mapsto \omega$

Classically, time and frequency are inverses of each other: $\omega = \frac{2\pi}{T}$, where $T$ is the period (measured in the time domain). Similarly, by the de Broglie relation, $\frac{p}{\hbar} = \frac{2\pi}{\lambda}$, where $\lambda$ is the wavelength (measured in the position domain).
In the next post, I will elaborate on the uncertainty relation in both signal processing and quantum physics. The whole idea behind creating an analogy between the two is to see whether certain tools that have already been developed in quantum physics can be adapted to enhance signal-processing techniques.
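The Fourier pair above is easy to check numerically. The sketch below (Python with NumPy; the Gaussian test state, the choice of natural units $\hbar = 1$, and the symmetric normalization convention are my own assumptions for illustration, not from the post) evaluates $\langle p|\Psi\rangle$ by direct integration and compares it with the known analytic transform of a Gaussian wavepacket:

```python
import numpy as np

# Assumptions for this sketch: natural units (hbar = 1) and a normalized
# Gaussian wavepacket psi(x) = exp(-x^2/2) / pi^(1/4) as the test state.
hbar = 1.0
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi_x = np.exp(-x**2 / 2) / np.pi**0.25

def momentum_amplitude(p):
    """<p|psi> = (2*pi*hbar)^(-1/2) * integral of exp(-i*p*x/hbar) psi(x) dx."""
    return np.sum(np.exp(-1j * p * x / hbar) * psi_x) * dx / np.sqrt(2 * np.pi * hbar)

p_vals = np.array([0.0, 0.5, 1.0, 2.0])
numeric = np.array([momentum_amplitude(p) for p in p_vals])
analytic = np.exp(-p_vals**2 / 2) / np.pi**0.25  # a Gaussian transforms to a Gaussian

print("max deviation:", np.max(np.abs(numeric - analytic)))
assert np.allclose(numeric, analytic, atol=1e-6)
```

The deviation is tiny because the integrand decays to essentially zero well inside the grid, so the simple Riemann sum converges very rapidly for this smooth integrand.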
https://www.nature.com/articles/471562a?error=cookies_not_supported&code=72632c30-e574-45c8-9583-4ac9ce8b0760
# Chernobyl's legacy

Twenty-five years after the nuclear disaster, the clean-up grinds on and health studies are faltering. Are there lessons for Japan?

The morning train from Slavutych is packed with commuters playing cards, browsing e-readers, or watching the monotonous flood plains pass by. It looks like any other routine journey to work. But rather than facing a crush through subway turnstiles at the end of the 40-minute trip, the workers are met by a row of full-body radiation monitors. It is the start of another day at the Chernobyl power plant, the site of the world's worst civilian nuclear disaster.

As the train trundles through the bleak Ukrainian countryside, another nuclear crisis is unfolding halfway around the world. Barely a week after the partial meltdown at the Fukushima Daiichi nuclear power station, it is no surprise that some of the chatter on the train turns to the incident there. "It looks bad," says one commuter. "But not as bad as Chernobyl," he adds, with a hint of grim pride.

When Chernobyl's reactor number 4 exploded in the early hours of 26 April 1986, the ensuing blaze spewed 6.7 tonnes of material from the core high into the atmosphere, spreading radioactive isotopes over more than 200,000 square kilometres of Europe (see 'The hottest zone'). Dozens of emergency workers died within months from radiation exposure, and thousands of children in the region later developed thyroid cancer. The region around the plant became so contaminated that officials cordoned off a 30-kilometre exclusion zone that straddled Ukraine's border with Belarus. Today, a staff of about 3,500 enters the zone each day to monitor, clean and guard the site, where remediation work will continue for at least another 50 years (see 'Half-life of a disaster').

So far, the Fukushima accident is less severe. Radiation levels measured near the Japanese power plant have been less than those at Chernobyl after the blast there.
And although radiation has spread from Fukushima, it does not match the amounts that rained down in the region around Chernobyl. Despite those differences, the quarter-century of work following the Chernobyl disaster will offer some important lessons for Japan as the nation begins to assess the health and environmental consequences of Fukushima.

The problems that followed Chernobyl also provide a grim reminder about the value of accurate information. Officials need to tell people immediately how to avoid the initial, most dangerous, exposure; yet in the longer term, scientists and the government must battle against unnecessary concern over low-level doses of radiation, which often causes more harm than the radiation itself.

In some ways, the connection between the two accidents may yield the biggest benefits for Chernobyl. For a brief window of time, the world has again focused attention on the largely overlooked work there. The renewed interest may spur nations to chip in the cash needed to complete the clean-up of the site, and to carry out health studies that have languished for want of proper coordination and funding.

"In recent years, Chernobyl has been neglected by funding agencies and, to an extent, the scientific community," says Jim Smith, a radioecologist at the University of Portsmouth, UK, who has studied the consequences of the accident for 20 years. "But there is still more to learn from Chernobyl about decommissioning and the effects of the radiation," says Smith, who is touring the site with a group of other scientists.

After clearing a security checkpoint, the visiting researchers board a bus that heads towards the heart of the ageing power plant. They pass abandoned buildings and bump along potholed roads running beneath archways made of piping; since the accident, pipes have been laid above ground to avoid disturbing contaminated soil.
The visitors stop to look at the most visible reminder of the accident, the concrete sarcophagus that entombs the shattered reactor building. Completed hastily in November 1986, the sarcophagus was built to contain the escaping radiation, but it is now crumbling and streaked with rust. Smith whips a dosimeter out of his rucksack and poses for a photograph in front of the sarcophagus. The reading is 5 µSv/h: about 10 minutes of exposure at that level equals the same dose as an arm X-ray.

The plant's bright main office is a stark contrast to the sarcophagus. Stained-glass windows depict — in glorious socialist–realist style — the harnessing of atomic energy. But the plant has not produced power since 2000, when the last reactor was shut down. Valeriy Seyda, a deputy director of the Chernobyl Nuclear Power Plant, explains that the plant's top priority now is to construct a new confinement shelter for reactor 4 before the sarcophagus becomes too unstable. If it collapses before the new shell is in place, it could throw up a cloud of radioactive particles and expose the deadly remnants of the reactor.

## Replacing the rusting tomb

The plan is to build an enormous steel arch adjacent to the reactor and slide it along a runway to cover the building. The arch will reach 105 metres high, with a span of 257 metres — the world's largest mobile structure, according to its designers. It is expected to be in place by 2015 and should last for 100 years. It will enable robotic cranes inside to dismantle the sarcophagus and parts of the reactor. Long-term plans call for finishing the clean-up work at Chernobyl by 2065.

Some of the concrete trenches for the project are in place. But the international Chernobyl Shelter Fund that supports the US$1.4-billion effort still lacks about half of that cash, and the completion date has slipped by almost ten years since the shelter plan was agreed in principle in 2001.
One of the key goals of a forthcoming conference — Chernobyl, 25 Years On: Safety for the Future — to be held in Kiev on 20–22 April is to secure more cash commitments from international donors. Meanwhile, Chernobyl is developing long-term storage facilities for the debris that will be hacked out of reactor 4, and for more than 20,000 spent fuel canisters from the site's other reactors, a facility that will cost about €300 million (US$420 million).

Although all those reactors have been shuttered, the plant continues to generate large amounts of radioactive waste — partly because of persistent flooding in some of the waste-storage buildings and reactor 4's turbine hall. Every month, at least 300,000 litres of radioactive water must be pumped out of the structures and stored on site.

Ghost from the past: encased in crumbling concrete, the deadly contents of Chernobyl's reactor number 4 still exert a far-reaching effect on the area. Credit: T. Suess/timmsuess.com

The main cause of this flooding is Chernobyl's brimming cooling pond, which artificially elevates groundwater levels in the area. Alexander Antropov, a Chernobyl veteran with ice-blue eyes and a cool manner to match, is in charge of a project to decommission the pond. The term 'cooling pond' usually refers to the containers where spent fuel rods are stored until their radiation dissipates enough that they can be put into long-term storage. But Chernobyl's pond is actually a vast reservoir covering 22 square kilometres into which water from the reactor cooling systems was discharged. The pond also contains long-lasting radioactive material such as caesium-137 and strontium-90, which rained down after the explosion.

Besides causing flooding at the plant, the high water levels in the cooling pond raise the risk that a weak dyke along its east side will burst, which would send water coursing into the Pripyat River.
Radioactivity in the escaping water would be quickly diluted by the river, so although it would not significantly raise exposure levels for people downstream, it could cause panic among the local population.

Antropov says that his team cannot simply lower the water levels in the pond because they don't know what effect microscopic radioactive sediment particles would have if exposed. In the meantime, the team maintains the status quo by pumping water from the Pripyat River into the pond at a cost of a few hundred thousand euros per year. But the long-term plan is to lower the water level by 7 metres to form a patchwork of 10–20 smaller ponds that would keep the most dangerous sediments in place. The project would cost €3 million to €4 million, says Antropov. He is already in discussions with the relevant regulators and is optimistic that the necessary feasibility studies and environmental impact assessments can be completed.

But the effort has been a long time coming. The decommissioning plan is more than a decade old, and was supported by a 2005 survey for the European Commission, led by Smith. Once again, money has been a key factor in the delay. The major parts of Chernobyl's decommissioning plan are paid for by international funds, but the cooling pond project is not. Nor is the research needed to satisfy the regulators. "Most of our own activities come from the Ukrainian budget, and we are not a rich country," says Seyda.

After leaving the cooling pond, the visitors stop at Pripyat, an abandoned town just 3 kilometres from the reactor complex. Some 44,000 residents were evacuated the day after the accident, and many of their belongings still litter the decaying buildings. Antropov once lived here — his daughter was a few months old at the time of the accident — and as deputy chief of the town's Communist party office, he was responsible for evacuating part of the town.
Because he worked as a senior engineer at the nuclear plant, he knew that the disaster would have repercussions for decades to come. "I understood that I would never return to live in Pripyat," he says, in an uncharacteristically soft voice. "I still feel some sense of loss."

The evacuees from Pripyat also live with lingering fear about the radiation they were exposed to before fleeing their homes. Along with millions of others from the surrounding regions, they often attribute any sign of ill health to the accident. But pinning down Chernobyl's true public-health impact has proved remarkably difficult.

There is little disagreement about the terrible fate of the workers who brought Chernobyl's stricken reactor under control. Of 134 emergency workers diagnosed with acute radiation sickness, 28 died from their exposure within four months. Another 19 have died since from various causes, and many of the surviving workers now have cataracts and skin injuries.

More than 5,000 cases of thyroid cancer have so far been seen in people who were children at the time of the accident and lived in contaminated areas of the former Soviet Union — a more than ten-fold increase from normal levels (adults were mostly unaffected by the disease). Most of these cases were caused by drinking milk contaminated with radioiodine. Fewer than 20 of these people have died, but the sheer number of cancers, and their rapid onset within 5 years of the accident, surprised many epidemiologists. This triggered a plethora of thyroid studies, most notably a long-term cohort study of 25,000 people in Ukraine and Belarus who were children in 1986 that is being coordinated by the US National Institutes of Health's National Cancer Institute (NCI) in Bethesda, Maryland.
The latest results from the Ukrainian section of this cohort [1] confirm previous findings that the incidence of thyroid cancer is proportional to the size of the dose, with a particularly high risk seen in younger people and in those who were iodine-deficient due to poor diet. The research is having a direct impact in Japan, where those at risk of exposure are being given potassium iodide tablets to prevent the uptake of radioiodine in their thyroid.

The NCI oversees a second cohort made up of liquidators, a group of more than half a million people sent into the exclusion zone to help clean up and monitor the area after the initial emergency phase of the accident. Liquidators have a slightly raised risk of developing cataracts, and possibly a small increased risk of leukaemia [2].

## Long-term effects

But what was the impact on the wider population? Various studies have tried to estimate how many deaths Chernobyl will eventually cause across the whole of Europe, but their answers range from a few thousand to hundreds of thousands [3]. Cancer causes about a quarter of all deaths in Europe, so teasing out Chernobyl's far-reaching influence would probably be impossible, say epidemiologists.

Moreover, focusing on such intangible numbers can distract from the much broader social impact of the accident. In Ukraine and Belarus, hit hard by the break-up of the Soviet Union in 1991, lingering fears about radiation are thought to have contributed to a sense of hopelessness that is linked to high rates of alcoholism and smoking — factors that have a much bigger health impact. "There's tremendous uncertainty for these people," says Elisabeth Cardis, a radiation epidemiologist at the Centre for Research in Environmental Epidemiology in Barcelona, Spain. "Some think they are doomed because of their radiation exposure."
Further research could provide convincing evidence that Chernobyl's radiation did not significantly harm the wider population, but "we won't know unless we look", says Dillwyn Williams, a cancer researcher at the Strangeways Research Laboratory in Cambridge, UK. A handful of Chernobyl studies have found small increases in rates of breast cancer and cardiovascular disease, but they did not properly account for confounding factors, such as nutrition, alcohol consumption and smoking habits. And although some researchers have claimed to see an increase in genetic mutations in the children of parents irradiated after Chernobyl [4], there has been no similar evidence of hereditary effects even in the children of Japanese atomic bomb survivors, who on average received much larger radiation doses.

This means that there is still a substantial gap in the overall understanding of Chernobyl's health effects, says Williams. The problem is exacerbated by the piecemeal nature of previous studies. "There has been a failure of European-level coordination on this," he says. Williams hopes that there is now a chance to establish a Chernobyl Health Effects Research Foundation, which would mirror the highly effective Radiation Effects Research Foundation that monitors the long-term health impacts of the atomic bombs in Japan.

Together, the efforts could reveal the differences between the single short-term dose of external radiation delivered by the atomic bombs, and the low-level long-term exposure seen after Chernobyl. Long-term doses were once thought to carry much less risk than the immediate exposure, but evidence is accumulating that the risks may be much the same [5]. If confirmed, it would mean that people routinely exposed to low-level radiation have a greater chance of health problems than previously thought.
The European Commission has funded Williams, Cardis and a core group of other scientists to develop a research plan, dubbed the Agenda for Research on Chernobyl Health (ARCH), that maps out how the existing cohorts could be used to study a wider range of diseases, such as breast cancer and cardiovascular disease, and to address the questions about the long-term effects of low doses. The liquidator cohort, for example, is six times larger than that of atomic bomb survivors, with a much wider range of exposure doses. It could show how risk varies over that large range of doses and uncover rarer effects at lower doses. It could also help to reassess the threshold dose to prevent nuclear workers from developing problems such as cataracts.

ARCH also suggests testing the feasibility of setting up new cohorts including liquidators' offspring and highly exposed evacuees, along with a tissue bank. The bank may reveal whether people's genetic make-up influences their susceptibility to radiation — key information for determining how individuals are likely to respond to the radiation received during medical procedures such as X-ray scans and radiation treatment.

There are several hurdles, however, to getting ARCH off the ground. The project needs support from the NCI, which stopped funding active clinical monitoring of the thyroid cohort in 2008 because of budgetary constraints. And ARCH's proposals would also require better access to medical records in Ukraine and more information about participants' lifestyle factors — both potentially tall orders.

The ARCH plan will be presented at the 25th anniversary conference in April, and Cardis hopes that a positive reception will prompt the European Commission to boost its support. It is likely to be difficult to secure a long-term commitment for the studies, which will cost about €3 million to set up, but that cost is minor compared with the billions that will be spent on remediation at Chernobyl, says Williams.
Beyond obtaining the necessary funds, researchers will also require cooperation from participants to expand the cohort studies. That could be difficult. Gennady Laptev, now a hydrologist based at the Ukrainian Hydrometeorological Institute in Kiev, was a liquidator for three years, and says that he stopped attending his medical check-ups about ten years ago because they were too time-consuming. "They never found any major health problems," he says.

Laptev's work involved flying by helicopter from Kiev to Chernobyl twice a week to take radiation readings and collect soil and water samples for analysis. "Nobody forced me to do the work — I did it because it was interesting, and I really enjoyed it," he says. But after three years, he became worried about the risk of working near the plant, so he took a job researching how radioisotopes dispersed in the local water system (see 'Life as a liquidator').

Concerns about radiation exposure continue to plague residents in the region, and the planned studies could provide the answers they so desperately need about Chernobyl's real health legacy. "I have a house in a village near Slavutych, on contaminated territory," says Antropov during the site visit. "Two of my neighbours died of cancer, and this was probably the result of their radiation doses."

## Lessons for Japan

It's too early to say how the Chernobyl health studies will help those affected by the Fukushima accident. But Chernobyl has already given the world a lasting lesson on the importance of clear communication during a nuclear disaster, and in the years afterwards. There was no systematic distribution of prophylactic potassium iodide to the people around Chernobyl, and Pripyat's children were allowed to play outside during the day after the accident, while the reactor continued to burn. "The failure to rapidly communicate radiation risks at Chernobyl led to people receiving higher radiation exposures than was necessary," says Smith.
The Japanese government has been lambasted for not keeping citizens well informed about the accident there. But it was swifter to act than Soviet officials were, ordering the evacuation of people who live near the plant within hours of recognizing the growing nuclear emergency, and expanding that evacuation zone to a radius of 20 kilometres the following day. As well as distributing potassium iodide, the Japanese government banned the sale of food and milk produced in the provinces around the stricken plant. "The Japanese have done exactly the right thing," says Andrew Sherry, director of the Dalton Nuclear Institute at the University of Manchester, UK.

Ultimately, says Smith, Chernobyl's most important lesson for Fukushima is that a nuclear accident haunts a region long after the reactors have cooled. If areas of Japan are significantly contaminated with radioactive caesium-137, which loses half its radioactivity in 30 years, the government may have to maintain an exclusion zone for decades. Decommissioning the Fukushima reactors may also take decades, depending on the extent of damage to their cores. And the uncertainty surrounding the health risks may exact a psychological toll that could surpass the physical harm from the radiation, adds Smith.

Many of the workers at Chernobyl understand those lessons all too well as they shuffle onto the train to Slavutych at the end of their day. The workers will return to tend to the plant tomorrow and the next day — and for many years to come.

## References

1. Brenner, A. V. et al. Environ. Health Perspect. doi:10.1289/ehp.1002674 (2011).
2. Cardis, E. & Hatch, M. Clin. Oncol. doi:10.1016/j.clon.2011.01.510 (2011).
3. Peplow, M. Nature 440, 982–983 (2006).
4. Dubrova, Y. E. et al. Nature 380, 683–686 (1996).
5. Health Risks from Exposure to Low Levels of Ionizing Radiation: BEIR VII Phase 2 (NRC, 2006); available at http://go.nature.com/r7jeca.

Mark Peplow is Nature's news editor.
Peplow, M. Chernobyl's legacy. Nature 471, 562–565 (2011). https://doi.org/10.1038/471562a
http://math.stackexchange.com/questions/275553/inverse-laplace-of-dfracs1e-pi-ss2-s-1?answertab=oldest
# Inverse Laplace of $\dfrac{(s+1)e^{-\pi s}}{s^2 + s + 1}$

Does anyone know how to calculate the inverse Laplace transform of $\dfrac{(s+1)e^{-\pi s}}{s^2 + s + 1}$?

I've tried it and got ($U$ is the unit step function):

$$U(t-\pi)e^{(-s)}\cos(s(t-5))$$

But this looks wrong somehow. Please can you clarify whether I'm correct and, if not, perhaps guide me in the right direction. I've spent a long, long time on this problem! Thank you in advance and Happy New Year!

- is this the function? $$\frac{(s+1)e^{-\pi s}}{s^2 + s + 1}$$ – Santosh Linkha Jan 11 '13 at 1:41
- yes, that is. Do you know what the inverse is? Am I correct above? – user56866 Jan 11 '13 at 1:45
- Thank you experimentX for re-formatting my question – user56866 Jan 11 '13 at 1:46
- you can't have $s$ terms after taking inverse transform – Santosh Linkha Jan 11 '13 at 1:48
- but if you separate $e^{-\pi s}$ and leave the fraction as $(s+1)/((s+1)^2-s)$ surely this is the standard transform for $\cos(st)$? – user56866 Jan 11 '13 at 1:53

---

First do a completion of squares on the denominator: $s^2+s+1=(s+1/2)^2+(\sqrt 3 /2)^2$. Then break up the numerator as a linear combination of the two bases in the denominator: $s+1=(s+1/2)+\frac{1}{\sqrt 3}\left(\frac{\sqrt 3}{2}\right)$. Now you have

$$\frac{s+1}{s^2+s+1} = \frac{s+1/2}{(s+1/2)^2+(\sqrt 3 /2)^2} + \frac{1}{\sqrt 3}\cdot\frac{\sqrt 3/2}{(s+1/2)^2+(\sqrt 3 /2)^2}.$$

Now you look up each of the above fractions in your table to get

$$e^{-t/2}\cos(\sqrt 3\, t/2)+\frac{1}{\sqrt 3}\,e^{-t/2} \sin(\sqrt 3\, t/2).$$

Now you bring in $e^{-\pi s}$. It gives $U_\pi(t)$ and a shift of $\pi$ in $t$ to produce

$$U_\pi (t)\, e^{-(t-\pi)/2} \left[ \cos(\sqrt 3 (t-\pi)/2)+\frac{1}{\sqrt 3} \sin(\sqrt 3 (t-\pi)/2)\right].$$

- Thank you so much for explaining this so clearly. – user56866 Jan 11 '13 at 11:06

---

Forgive me for doing this without a picture of the contour for now. I can add later if you wish.
The inverse Laplace transform we seek is

$$\frac{1}{i 2 \pi} \int_{c-i \infty}^{c+i \infty} ds \: e^{s t} \frac{s+1}{s^2+s+1} e^{-\pi s} = \frac{1}{i 2 \pi} \int_{c-i \infty}^{c+i \infty} ds \: \frac{s+1}{s^2+s+1} e^{(t-\pi) s}$$

We consider first the case $t>\pi$. In this case, we use a contour formed from the line $\Re{s} = c$, where $c>0$, and the portion of the circle $|s|=R$ to the left of that line, which contains the poles of the integrand. These poles are at $s=-\frac{1}{2} \pm i \frac{\sqrt{3}}{2}$. We use the Residue Theorem, which states that the integral around the closed contour described above is equal to $i 2 \pi$ times the sum of the residues at the poles contained within the contour. I can go into more detail here if you want, but the sum of the residues at the two poles above is

$$e^{-\frac{1}{2} (t-\pi)} \left [ \cos{ \left [ \frac{\sqrt{3}}{2} (t-\pi) \right ] } + \frac{1}{\sqrt{3}} \sin{ \left [ \frac{\sqrt{3}}{2} (t-\pi) \right ] } \right ]$$

For $t<\pi$, we must use a contour in which the circular portion goes to the right of the line $\Re{s} = c$. As there are no poles within this contour, the integral is zero here. Therefore, the inverse Laplace transform is given by

$$e^{-\frac{1}{2} (t-\pi)} \left [ \cos{ \left [ \frac{\sqrt{3}}{2} (t-\pi) \right ] } + \frac{1}{\sqrt{3}} \sin{ \left [ \frac{\sqrt{3}}{2} (t-\pi) \right ] } \right ] U(t-\pi)$$

- Factor of 1/2 was not correct, removed. – Ron Gordon Jan 11 '13 at 2:27
- I don't need the contour, thank you for your detailed answer, I really appreciate it – user56866 Jan 11 '13 at 11:06
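Both approaches arrive at the same time-domain function. As a sanity check (a sketch using SymPy; not part of either answer), we can verify the unshifted part by taking the forward Laplace transform of the candidate and comparing it with the rational factor — the $e^{-\pi s}$ term contributes only the $U(t-\pi)$ time shift:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
w = sp.sqrt(3) / 2  # damped-oscillation frequency from completing the square

# Candidate inverse transform of (s+1)/(s^2+s+1), before the pi time-shift
g = sp.exp(-t / 2) * (sp.cos(w * t) + sp.sin(w * t) / sp.sqrt(3))

# The forward transform should recover the rational part exactly
G = sp.laplace_transform(g, t, s, noconds=True)
assert sp.simplify(G - (s + 1) / (s**2 + s + 1)) == 0
print("check passed")
```

The shift theorem $\mathcal{L}\{U(t-a)\,g(t-a)\} = e^{-as}G(s)$ then gives the full answer stated above.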
https://codegolf.stackexchange.com/questions/50798/is-it-a-leap-year/177019
# Is it a leap year?

This challenge is quite simple. You will take an input which will be a year from 1801 to 2400, and output whether it is a leap year or not.

Your input will have no newlines or trailing spaces:

1954

You will output in any way that you like that clearly tells the user whether or not it is a leap year (I will accept y or n for yes/no).

You can get a list of leap years here: http://kalender-365.de/leap-years.php

Note that leap years do not simply occur every four years: 1896 is a leap year, but 1900 is not. The century years in this range that skip the leap year are: 1900, 2100, 2200, 2300.

Test cases:

1936 -> y
1805 -> n
1900 -> n
2272 -> y
2400 -> y

EDIT: This is based on a standard Gregorian calendar: http://www.epochconverter.com/date-and-time/daynumbers-by-year.php

• You should be more clear: A given year is a leap year if and only if it is (divisible by 4)∧((divisible by 100)→(divisible by 400)). – LegionMammal978 May 26 '15 at 11:34
• Your input will have no newlines or trailing spaces. Dang it, that would have saved me 2 bytes... – Dennis May 26 '15 at 17:10
• You should extend the accepted input range to AD 1601 thru 2400. This covers two 400-year Gregorian cycles (which proleptically start on Monday). – David R Tribble May 26 '15 at 18:17
• Does falsy if leap year and truthy if not a leap year count as "clearly tells the user if it is or isn't"? – lirtosiast May 28 '15 at 21:27
• @lirtosiast I think so. A lot of users assume so. – aloisdg says Reinstate Monica Jul 20 '16 at 13:58

# MATLAB + Aerospace Toolbox, 9 bytes

@leapyear

Anonymous function that determines if the passed year is a leap year, returning 1 for leap and 0 for not. Ok so MATLAB has a built-in... But there we are.

# Octave, 13 bytes

@is_leap_year

Try it online! Octave also has a built-in... costs 4 more bytes than MATLAB though.
# Whitespace, 103 bytes

[S S S N _Push_0][S N S _Duplicate_0][S N S _Duplicate_0][T N T T _Read_STDIN_as_integer][T T T _Retrieve][S N S _Duplicate_input][S S S T T S S T N _Push_25][T S T T _Modulo][N T S T N _If_0_jump_to_Label_TRUE][S S S T S S N _Push_4][T S T T _Modulo][N T S S N _If_0_jump_to_Label_LEAP][N S N N _Jump_to_Label_PRINT][N S S T N _Create_Label_TRUE][S S S T S S S S N _Push_16][T S T T _Modulo][N T S S N _If_0_jump_to_Label_LEAP][N S N N _Jump_to_Label_PRINT][N S S S N _Create_Label_LEAP][S S S T N _Push_1][N S S N _Create_Label_PRINT][T N S T _Print_as_integer]

Letters S (space), T (tab), and N (new-line) added as highlighting only. [..._some_action] added as explanation only.

### Explanation in pseudo-code:

Integer i = STDIN-input as integer
If i modulo-25 is 0:
  If i modulo-16 is 0:
    Print 1
  Else:
    Print 0
Else-if i modulo-4 is 0:
  Print 1
Else:
  Print 0

### Example runs:

Input: 1936

Command    Explanation               Stack           Heap      STDIN  STDOUT  STDERR
SSSN       Push 0                    [0]             {}
SNS        Duplicate top (0)         [0,0]           {}
SNS        Duplicate top (0)         [0,0,0]         {}
TNTT       Read STDIN as integer     [0,0]           {0:1936}  1936
TTT        Retrieve at heap 0        [0,1936]        {0:1936}
SNS        Duplicate top (1936)      [1936,1936]     {0:1936}
SSSTTSSTN  Push 25                   [1936,1936,25]  {0:1936}
TSTT       Modulo                    [1936,11]       {0:1936}
NTSTN      If 0: Jump to Label_TRUE  [1936]          {0:1936}
SSSTSSN    Push 4                    [1936,4]        {0:1936}
TSTT       Modulo                    [0]             {0:1936}
NTSSN      If 0: Jump to Label_LEAP  []              {0:1936}
NSSSN      Create Label_LEAP         []              {0:1936}
SSSTN      Push 1                    [1]             {0:1936}
NSSN       Create Label_PRINT        [1]             {0:1936}
TNST       Print top as integer      []              {0:1936}         1       error

Program stops with an error: No exit defined. Try it online (with raw spaces, tabs and new-lines only).
Input: 2400

Command    Explanation               Stack           Heap      STDIN  STDOUT  STDERR
SSSN       Push 0                    [0]             {}
SNS        Duplicate top (0)         [0,0]           {}
SNS        Duplicate top (0)         [0,0,0]         {}
TNTT       Read STDIN as integer     [0,0]           {0:2400}  2400
TTT        Retrieve at heap 0        [0,2400]        {0:2400}
SNS        Duplicate top (2400)      [2400,2400]     {0:2400}
SSSTTSSTN  Push 25                   [2400,2400,25]  {0:2400}
TSTT       Modulo                    [2400,0]        {0:2400}
NTSTN      If 0: Jump to Label_TRUE  [2400]          {0:2400}
NSSTN      Create Label_TRUE         [2400]          {0:2400}
SSSTSSSSN  Push 16                   [2400,16]       {0:2400}
TSTT       Modulo                    [0]             {0:2400}
NTSSN      If 0: Jump to Label_LEAP  []              {0:2400}
NSSSN      Create Label_LEAP         []              {0:2400}
SSSTN      Push 1                    [1]             {0:2400}
NSSN       Create Label_PRINT        [1]             {0:2400}
TNST       Print top as integer      []              {0:2400}         1       error

Program stops with an error: No exit defined. Try it online (with raw spaces, tabs and new-lines only).

Input: 1991

Command    Explanation               Stack             Heap      STDIN  STDOUT  STDERR
SSSN       Push 0                    [0]               {}
SNS        Duplicate top (0)         [0,0]             {}
SNS        Duplicate top (0)         [0,0,0]           {}
TNTT       Read STDIN as integer     [0,0]             {0:1991}  1991
TTT        Retrieve at heap 0        [0,1991]          {0:1991}
SNS        Duplicate top (1991)      [0,1991,1991]     {0:1991}
SSSTTSSTN  Push 25                   [0,1991,1991,25]  {0:1991}
TSTT       Modulo                    [0,1991,16]       {0:1991}
NTSTN      If 0: Jump to Label_TRUE  [0,1991]          {0:1991}
SSSTSSN    Push 4                    [0,1991,4]        {0:1991}
TSTT       Modulo                    [0,3]             {0:1991}
NTSSN      If 0: Jump to Label_LEAP  [0]               {0:1991}
NSSN       Create Label_PRINT        [0]               {0:1991}
TNST       Print top as integer      []                {0:1991}         0       error

Program stops with an error: No exit defined. Try it online (with raw spaces, tabs and new-lines only).

# ActionScript 2.0, 49 bytes

function a(b){trace(!(b%4)&&b%100+!(b%400)?1:0);}

function a(b){ }  - define a function
trace(       );   - that outputs
?                 - if
!(b%4)            - the input divides by 4
&&                - and
b%100             - doesn't divide by 100
+                 - plus
!(b%400)          - does divide by 400
1:0               - 1, else 0

Not a very good explanation, and I might have swapped some stuff, but this still confuses me, so this is the best you're going to get.

# C (gcc), 33 bytes

f(i,a){a=!(i%4|!(i%100)&&i%400);}

Try it online!
# JavaScript, 19 bytes

y=>!(y%25?y%4:y%16)

Try it online!

# Swift, 82 bytes

func l(y:Int){y%4==0 && (y%100 != 0 || y%400==0) ?print("Y"):print("N")}
l(y:1936)

Try it online!

# 05AB1E, 9 7 bytes

т‰0Kθ4Ö

Explanation:

т‰    # Divmod the (implicit) input by 100
      #  i.e. 1900 → [19,00]
      #  i.e. 1936 → [19,36]
      #  i.e. 1991 → [19,91]
      #  i.e. 2000 → [20,00]
0K    # Remove all 0s
      #  i.e. [19,00] → [19]
      #  i.e. [19,36] → [19,36]
      #  i.e. [19,91] → [19,91]
      #  i.e. [20,00] → [20]
θ     # Pop and get the last item
      #  i.e. [19] → 19
      #  i.e. [19,36] → 36
      #  i.e. [19,91] → 91
      #  i.e. [20] → 20
4Ö    # Check if it's divisible by 4 (and output the result implicitly)
      #  i.e. 19 → 0 (falsey)
      #  i.e. 36 → 1 (truthy)
      #  i.e. 91 → 0 (falsey)
      #  i.e. 20 → 1 (truthy)

# Reality, 1 byte

L

Input via stdin.

The language was made after the challenge.
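The y%25 ? y%4 : y%16 shortcut used by the JavaScript and 05AB1E answers works because 400 = 25·16, and any multiple of 25 that is also divisible by 4 is automatically divisible by 100. A quick Python sanity check over the challenge's full input range (a sketch for verification, not a competing answer):

```python
def leap_standard(y):
    # Plain Gregorian rule.
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

def leap_trick(y):
    # If y is a multiple of 25, test divisibility by 16 instead
    # (lcm(25, 16) = 400); otherwise the plain mod-4 test suffices.
    return y % 16 == 0 if y % 25 == 0 else y % 4 == 0

# The two formulations agree on every year the challenge allows.
assert all(leap_standard(y) == leap_trick(y) for y in range(1801, 2401))
```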
https://arxiv.org/list/physics.chem-ph/recent
# Chemical Physics

## Authors and titles for recent submissions

[ total of 16 entries: 1-16 ]

### Fri, 20 Jan 2017

[1] arXiv:1701.05213 (cross-list from astro-ph.IM) [pdf, ps, other]
Title: Inelastic cross sections and rate coefficients for collisions between CO and H2
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); Chemical Physics (physics.chem-ph)

### Thu, 19 Jan 2017

[2] Title: Free Energy Computation by Monte Carlo Integration
Comments: 26 pages, 7 tables, 12 Figures, 39 references
Subjects: Chemical Physics (physics.chem-ph); Soft Condensed Matter (cond-mat.soft)

[3] arXiv:1701.05035 (cross-list from cond-mat.soft) [pdf, ps, other]
Title: Reverse Monte Carlo modeling of liquid water with the explicit use of the SPC/E interatomic potential
Comments: 8 pages, 5 figures, submitted to The Journal of Chemical Physics
Subjects: Soft Condensed Matter (cond-mat.soft); Disordered Systems and Neural Networks (cond-mat.dis-nn); Materials Science (cond-mat.mtrl-sci); Chemical Physics (physics.chem-ph); Computational Physics (physics.comp-ph)

[4] arXiv:1701.04861 (cross-list from q-bio.BM) [pdf, other]
Title: Dynamical coupling between protein conformational fluctuation and hydration water: Heterogeneous dynamics of biological water
Comments: 12 pages and 6 figures (coloured)
Subjects: Biomolecules (q-bio.BM); Statistical Mechanics (cond-mat.stat-mech); Chemical Physics (physics.chem-ph)

[5] arXiv:1701.04832 (cross-list from cond-mat.mtrl-sci) [pdf, other]
Title: Gaussian-based coupled-cluster theory for the ground state and band structure of solids
Subjects: Materials Science (cond-mat.mtrl-sci); Chemical Physics (physics.chem-ph)

### Wed, 18 Jan 2017

[6] Title: Alternative definition of excitation amplitudes in Multi-Reference state-specific Coupled Cluster
Subjects: Chemical Physics (physics.chem-ph)

[7] Title: Non-isothermal physical and chemical processes in superfluid helium
Subjects:
Chemical Physics (physics.chem-ph); Other Condensed Matter (cond-mat.other)

[8] Title: Benchmark of dynamic electron correlation models for seniority-zero wavefunctions and their application to thermochemistry
Subjects: Chemical Physics (physics.chem-ph); Strongly Correlated Electrons (cond-mat.str-el)

[9] Title: Grand canonical electronic density-functional theory: algorithms and applications to electrochemistry
Subjects: Chemical Physics (physics.chem-ph); Materials Science (cond-mat.mtrl-sci); Computational Physics (physics.comp-ph)

[10] Title: Laser-induced molecular alignment in the presence of chaotic rotational dynamics
Subjects: Chemical Physics (physics.chem-ph); Quantum Physics (quant-ph)

### Tue, 17 Jan 2017

[11] Title: Hierarchy of equations to calculate the linear spectra of molecular aggregates - Time-dependent and frequency domain formulation
Subjects: Chemical Physics (physics.chem-ph); Quantum Physics (quant-ph)

[12] Title: Doublet-triplet energy transfer dominated photon upconversion
Subjects: Chemical Physics (physics.chem-ph)

[13] arXiv:1701.03978 (cross-list from cs.CE) [pdf, ps, other]
Title: Computer-aided molecular design: An introduction and review of tools, applications, and solution techniques
Comments: 38 pages, 13 figures, 3 tables, 173 references
Journal-ref: Chemical Engineering Research and Design, 116, 2-26, 2016
Subjects: Computational Engineering, Finance, and Science (cs.CE); Chemical Physics (physics.chem-ph)

### Mon, 16 Jan 2017

[14] Title: High-resolution spectroscopy of He$_2^+$ using Rydberg-series extrapolation and Zeeman-decelerated supersonic beams of metastable He$_2$
Journal-ref: J. Mol. Spectrosc. 322, 9-17 (2016)
Subjects: Chemical Physics (physics.chem-ph)

[15] Title: Observation and calculation of the quasi-bound rovibrational levels of the electronic ground state of H$_2^+$
http://mathhelpforum.com/calculus/81575-another-arc-length-question.html
# Math Help - another arc length question

1. ## another arc length question

Find the length of the curve, L.

not exactly sure how to integrate this one after the derivative is found.. which is very long and complicated..

2. Originally Posted by khood

Find the length of the curve, L.

not exactly sure how to integrate this one after the derivative is found.. which is very long and complicated..

$y=\ln \left( \frac{e^x+1}{e^x-1} \right)=\ln(e^x+1)-\ln(e^x-1)$

$y'=\frac{e^x}{e^x+1}-\frac{e^x}{e^x-1}=\frac{e^{x}(e^x-1)-e^{x}(e^x+1)}{(e^{x}+1)(e^{x}-1)}=\frac{-2e^x}{(e^x+1)(e^x-1)}$

$\int \sqrt{1+\left( \frac{-2e^x}{(e^x+1)(e^x-1)}\right)^2}dx=\int \sqrt{\frac{[(e^x+1)(e^x-1)]^2+4e^{2x}}{[(e^x+1)(e^x-1)]^2}}dx$

$\int \sqrt{\frac{[e^{2x}-1]^2+4e^{2x}}{[(e^x+1)(e^x-1)]^2}}dx=\int \sqrt{\frac{e^{4x}-2e^{2x}+1+4e^{2x}}{[(e^x+1)(e^x-1)]^2}}dx$

$\int \sqrt{\frac{e^{4x}+2e^{2x}+1}{[(e^x+1)(e^x-1)]^2}}dx = \int \sqrt{ \frac{[e^{2x}+1]^2}{[(e^x+1)(e^x-1)]^2}}dx$

$\int \frac{e^{2x}+1}{(e^{x}+1)(e^{x}-1)}dx =\int \frac{e^{2x}+1}{e^{2x}-1}dx$

from here let $u = e^{2x} \implies du=2e^{2x}dx \iff du=2udx \iff \frac{du}{2u}=dx$

$\int \frac{u+1}{u-1}\frac{du}{2u}=\int\frac{u+1}{u(u-1)}du$

this can be integrated using partial fractions. Good luck.
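Carrying the partial-fraction step through gives the antiderivative ln(e^(2x) - 1) - x, which can be sanity-checked against the original integrand numerically. A quick Python sketch (the limits 1 and 2 are arbitrary illustrative choices; the original problem's limits are not shown in the post):

```python
from math import exp, log, sqrt

def integrand(x):
    # sqrt(1 + y'^2) with y' = -2e^x / ((e^x + 1)(e^x - 1)), valid for x > 0
    yp = -2 * exp(x) / ((exp(x) + 1) * (exp(x) - 1))
    return sqrt(1 + yp * yp)

def F(x):
    # Antiderivative from the partial-fraction step:
    # (1/2) * (-ln u + 2 ln(u - 1)) with u = e^(2x)
    return log(exp(2 * x) - 1) - x

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

a, b = 1.0, 2.0
assert abs(simpson(integrand, a, b) - (F(b) - F(a))) < 1e-9
```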
http://math.stackexchange.com/questions/4453/terminology-for-point-in-dent-in-surface
# Terminology for point in dent in surface?

This is a simple terminology question. Let $S$ be a (let's say smooth) surface in $\mathbb{R}^3$, and $p$ a point on $S$. Suppose the principal curvatures $\kappa_1$ and $\kappa_2$ at $p$ are both negative. I am imagining $p$ sitting at the bottom of a dent in the surface. Is there an accepted term to describe such a point? The difficulty is that the Gaussian curvature $\kappa_1 \kappa_2$ is positive, so intrinsically $p$ is no different than if it were on a bump rather than a dent. I could make up my own term of course, e.g., valley point or cup point, but I'd rather follow convention. Thanks!

-

I've never come across specific terminology for this situation. I think if $p$ is really at a "dent" in the surface, not only should the Gauss curvature be positive there, but it should decrease (roughly with the distance to $p$) and become negative near $p$. – Ryan Budney Sep 12 '10 at 1:10

The only term I could find is "elliptical" point (the signs of $\kappa_1$ and $\kappa_2$ are the same), so maybe you can just invent a new adjective to add to "elliptical". – J. M. Sep 12 '10 at 1:12

@Ryan: Yes, I see there are constraints nearby. @JM: Ah, elliptical! I didn't know that term. Your suggestion of modifying that term is perhaps my solution. Thanks! – Joseph O'Rourke Sep 12 '10 at 1:23

For example, if you pick a point $p$ on a sphere in $\mathbb{R}^3$ and a coordinate patch around $p$ in which to compute principal curvatures, you find that both principal curvatures are positive at $p$ if the normal vector field associated to the coordinate patch points inward (ie, toward the center of the sphere), and both negative if the normal vector field associated to the patch points outward. The choice of normal is yours to make; it doesn't come with the sphere. (I suppose this point might be better made with a complicated surface that does not correspond to a well-known shape--- e.g.
the surface formed by a tangle of ribbon after you have unwrapped a gift. If you punch a hole in this ribbon and replace the missing disc with a hemisphere, whether this is a "valley" or a "cup" is entirely up to you. In choosing the normal direction, you are deciding whether or not curves in the hemisphere are "curving" "toward" the normal or "away from" it.) Now, this indeterminacy up to sign is the worst thing that can happen: if $S \subseteq \mathbb{R}^3$ is a smooth surface, and the principal curvatures at a point $p$ in $S$ are found in some coordinate system to be $\kappa_1$ and $\kappa_2$, then in any other coordinate system they will be either found to be exactly the same (that is, $\kappa_1$ and $\kappa_2$) or the same, but with opposite sign (that is, $-\kappa_1$ and $-\kappa_2$). How you prove this depends on how you define the principal curvatures, but arises e.g. from the fact that they are the maximum and minimum normal components of accelerations of unit-speed curves in the surface at $p$, and there are only two possible choices of unit normal at $p$, differing only by sign. So "both principal curvatures are negative" is a well-defined property of an oriented smooth surface in $\mathbb{R}^3$ (a property which, we learned in the comments, apparently does not have a well-established name; for what it's worth, I like "bump point"). This same discussion shows that the slightly weaker condition that "both principal curvatures at $p$ have the same sign" is well-defined independent of a choice of normal. (As pointed out in the comments, it does have a well-established name: such a $p$ is said to be an elliptic point.) What the above discussion does not show, but is nevertheless true, is that this weaker condition is even independent of how one "embeds" $S$ in any "ambient space" (what the asker meant when he referred to the Gaussian curvature at $p$ as being "intrinsic").
For more, open up any book on the differential geometry of surfaces and look for the sections around Gauss's Theorema Egregium.
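The sign flip under a change of normal can be seen concretely: for a graph z = f(x, y) at a critical point of f, the principal curvatures with respect to the upward unit normal are the eigenvalues of the Hessian of f, and choosing the downward normal negates both. A small Python sketch (the test surface f = -(x² + y²)/2 is an illustrative choice, not from the discussion above):

```python
def principal_curvatures_at_critical_point(hess, upward=True):
    # For a graph z = f(x, y) at a point where grad f = 0, the shape
    # operator w.r.t. the upward unit normal equals the Hessian of f,
    # so the principal curvatures are its eigenvalues (2x2 symmetric case).
    a, b, c = hess[0][0], hess[0][1], hess[1][1]
    tr, det = a + c, a * c - b * b
    disc = (tr * tr - 4 * det) ** 0.5
    k1, k2 = (tr + disc) / 2, (tr - disc) / 2
    sign = 1 if upward else -1   # flipping the normal flips both signs
    return sign * k1, sign * k2

# f(x, y) = -(x^2 + y^2)/2: a "dent" seen from above; Hessian = -I
k1, k2 = principal_curvatures_at_critical_point([[-1, 0], [0, -1]], upward=True)
assert k1 < 0 and k2 < 0            # both negative with the upward normal
k1f, k2f = principal_curvatures_at_critical_point([[-1, 0], [0, -1]], upward=False)
assert k1f > 0 and k2f > 0          # both positive after flipping the normal
assert k1 * k2 == k1f * k2f         # Gaussian curvature is unchanged
```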
https://www.physicsforums.com/threads/two-masses-on-vertical-spring.264648/
# Two Masses on Vertical Spring

• Thread starter Awwnutz
• Start date

• #1

http://img158.imageshack.us/img158/6946/blockspringcy2.gif [Broken]

A block of mass 10 kg hangs on a spring. When a second block with an identical mass of 10 kg is tied to the first, the spring stretches an additional ho = 1.3 m.

a) What is the value of the spring constant k?

Now the string is burned and the second block falls off.

b) How far above its original position does the remaining block attain its maximum speed?

c) What is the maximum speed attained by the remaining block?

Spring constant: F=kx
Conservation of Mechanical Energy

I was thinking about setting up the problem so you look at the first scenario with the 10kg block as zero potential energy. Then when the second 10kg block is added potential energy is gained...is this in the right direction at all? Could the change in mechanical energy in the first scenario equal the second? So...

(1/2)mv(final)^2 + (1/2)kx(final)^2 = (1/2)mv(initial)^2 + (1/2)kx(initial)^2

but that would just get rid of the k's which is what i'm looking for.

Last edited by a moderator:

## Answers and Replies

• #2

I think i figured out part b, but i'm still stuck on part a. For b would you use the equation

(1/2)kx^2 = (1/2)mv(final)^2 - (1/2)mv(initial)^2

Knowing the spring constant and the distance its stretched all you're left with unknown is the final velocity since the initial velocity is 0. But that would just give me the speed at the end of the distance stretched so that's not exactly right.

• #3

alright i figured out part a, the weight of the added block equals the force of the spring

10(9.81) = k (1.3 m)
k = 75.46 N/m

part b kind of has me scratching my head. So if the max speed occurs when the potential energy is zero how do i set that up?

• #4

PEinitial + KEinitial = PEfinal + KEfinal
PEinitial = KEfinal

Right?
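The thread's numbers can be collected in a short Python sketch. This uses g = 9.81 m/s² as in post #3; the energy/SHM argument for parts b and c is the standard approach the thread is working toward, not a quote from it:

```python
from math import sqrt

g = 9.81   # m/s^2
m = 10.0   # mass of each block, kg
h = 1.3    # additional stretch caused by the second block, m

# (a) The second block's weight balances the extra spring force: k*h = m*g.
k = m * g / h                      # about 75.46 N/m

# (b) After the string burns, the remaining block's new equilibrium is
#     h = 1.3 m above its release point; maximum speed occurs there,
#     where the net force (and hence acceleration) is zero.
# (c) Treating the motion as SHM about the new equilibrium with amplitude h,
#     v_max = omega * A = sqrt(k/m) * h.
v_max = h * sqrt(k / m)            # about 3.57 m/s
```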
http://mathhelpforum.com/trigonometry/27747-conditional-trigonometric-equations-print.html
# Conditional Trigonometric Equations

• February 7th 2008, 02:41 PM
senna

$4\sin^2\!x-3=0$

How do I find the zeros in exact values?

• February 7th 2008, 03:24 PM
Soroban

Hello, senna! You've never seen one of these before?

Quote:

Solve for $x\!:\;\;4\sin^2\!x-3\:=\:0$

Just use your knowledge of algebra . . . then Trig at the end.

Add 3 to both sides: . $4\sin^2\!x \:=\:3$

Divide by 4: . $\sin^2\!x \:=\:\frac{3}{4}$

Take square roots: . $\sin x \:=\:\pm\frac{\sqrt{3}}{2}$

Can you finish it now?

• February 12th 2008, 07:08 PM
senna

Thanks for the help Soroban. It's practically instant now.

$\frac{\pi}{3}$, $\frac{2\pi}{3}$, $\frac{4\pi}{3}$, $\frac{5\pi}{3}$
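The four solutions on [0, 2π) can be verified numerically (a quick Python check, not part of the original thread):

```python
from math import sin, sqrt, pi, isclose

# sin x = ±sqrt(3)/2 gives x = pi/3, 2pi/3, 4pi/3, 5pi/3 on [0, 2*pi)
solutions = [pi/3, 2*pi/3, 4*pi/3, 5*pi/3]
for x in solutions:
    assert isclose(4 * sin(x)**2 - 3, 0, abs_tol=1e-12)   # satisfies the equation
    assert isclose(abs(sin(x)), sqrt(3) / 2)              # |sin x| = sqrt(3)/2
```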
https://www.ncbi.nlm.nih.gov/pubmed/7649117?dopt=Abstract
Endocrinology. 1995 Sep;136(9):4092-8.

# Prolactin receptors and JAK2 in islets of Langerhans: an immunohistochemical analysis.

### Author information

1 Department of Cell Biology and Neuroanatomy, University of Minnesota Medical School, Minneapolis 55455, USA.

### Abstract

Lactogenic hormones, PRL and placental lactogen, are important regulators of insulin secretion and islet beta-cell proliferation. In this study we examined the presence of PRL receptor immunoreactivity in pancreatic islets of Langerhans using PRL receptor monoclonal antibodies provided by Dr. Paul Kelly. Studies were performed using islets isolated from neonatal, adult, and day 14 pregnant rats. The islets were examined by immunohistochemistry and laser scanning confocal microscopy. In neonatal rat islets, PRL receptors were observed in beta- and alpha-cells, but not in delta-cells. Among islet beta- and alpha-cells there was heterogeneity of cellular staining for PRL receptors. A small portion of the cells was intensely stained for PRL receptors. However, the majority of the cells had a much lower level of staining intensity, suggesting that most islet cells have a low level of PRL receptors. In general, alpha-cells were more uniformly stained than beta-cells. Similar results were obtained with adult rat islets, in which, again, there was a large range of staining intensity and many cells with low levels of PRL receptor. Rats on day 14 of pregnancy had an increased level of islet PRL receptor expression compared with age-matched control animals. There was also a decrease in cellular heterogeneity for PRL receptors, with nearly all cells having a uniformly high level of PRL receptor expression. JAK2, the tyrosine kinase associated with PRL receptors, was examined in Nb2 cells and islets. JAK2 immunoreactivity was detected at the cell membrane in very low levels in Nb2 cells.
It was also found in numerous vesicular structures in the cytoplasm, where it colocalized with PRL receptors. A prominent feature of all cells was the presence of JAK2 in the nucleus, but not the nucleolus. In islets, JAK2 immunoreactivity was similarly observed in the nucleus of nearly all cells. However, the vesicular cytoplasmic location of JAK2 was less frequently observed and did not colocalize with PRL receptors. For comparison, JAK2 immunoreactivity was examined in several other tissues where it was detected in fibroblasts (endomysial and endoneurial cells), smooth muscle cells, and ganglion cells in the pancreas. JAK2 was notably absent from pancreas acinar cells, hepatocytes, skeletal muscle cells, and Schwann cells. This study demonstrates the presence of PRL receptors in islet beta- and alpha-cells, but not delta-cells. There was an increase in PRL receptor expression in islets during pregnancy, which is commensurate with the up-regulation of islet function. In addition, JAK2 immunoreactivity was detected in most islet cells and Nb2 node cells. PMID: 7649117 DOI: 10.1210/endo.136.9.7649117 [Indexed for MEDLINE]
http://mathhelpforum.com/calculus/128839-derivative-true-false.html
# Math Help - derivative true false 1. ## derivative true false derivative true or false explain why if false if y = x / (pi) , then dy/dx = 1 / (pi) i have no idea how to prove this? any help 2. power rule $ \frac{d}{dx}\left(x^n\right) = nx^{n-1} $ $y= \frac{x}{\pi}$ can be rewritten as $ f(x) = \frac{1}{\pi}x$ so $f'(x) = \frac{1}{\pi}x^0 = \frac{1}{\pi}$ 3. Originally Posted by maybnxtseasn derivative true or false explain why if false if y = x / (pi) , then dy/dx = 1 / (pi) i have no idea how to prove this? any help Actually, while the "power law" is one of the most important laws for differentiation, here I think it is "overkill". One of the first things you should have learned about the derivative is that the derivative of a linear function is just its slope. (Most people learn the derivative as "slope of the tangent line" and if the graph is itself a line, it is its own "tangent line".)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8287563920021057, "perplexity": 831.2027501883884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400378815.18/warc/CC-MAIN-20141119123258-00257-ip-10-235-23-156.ec2.internal.warc.gz"}
https://repo.scoap3.org/search?f=author&p=Morozumi%2C%20Takuya&ln=en
SCOAP3 Repository 5 records found Search took 0.24 seconds. 1 A New Mechanism for Generating Particle Number Asymmetry through Interactions / Salim Adam, Apriadi ; Morozumi, Takuya ; Nagao, Keiko I. ; Takata, Hiroyuki A new mechanism for generating particle number asymmetry (PNA) has been developed. [...] Published in Advances in High Energy Physics 2019 (2019) 6825104 10.1155/2019/6825104 arXiv:1709.08781 Fulltext: XML PDF PDF (PDFA); 2 Analysis of Dalitz decays with intrinsic parity-violating interactions in resonance chiral perturbation theory / Kimura, Daiji ; Morozumi, Takuya ; Umeeda, Hiroyuki Observables of light hadron decays are analyzed in a chiral Lagrangian model that includes resonance fields of vector mesons. [...] Published in PTEP 2018 (2018) 123B02 10.1093/ptep/pty122 arXiv:1609.09235 Fulltext: PDF XML PDF (PDFA); 3 Effective theory analysis for vector-like quark model / Morozumi, Takuya ; Shimizu, Yusuke ; Takahashi, Shunya ; Umeeda, Hiroyuki We study a model with a down-type SU(2) singlet vector-like quark (VLQ) as a minimal extension of the standard model (SM). [...] Published in PTEP 2018 (2018) 043B10 10.1093/ptep/pty042 arXiv:1801.05268 Fulltext: XML PDF PDF (PDFA); 4 Phenomenological aspects of possible vacua of a neutrino flavor modelSupported by JSPS KAKENHI Grant Number JP17K05418 (T.M.). This work is also supported in part by Grants-in-Aid for Scientific Research [No. 16J05332 (Y.S.), Nos. 24540272, 26247038, 15H01037, 16H00871, and 16H02189 (H.U.)] from the Ministry of Education, Culture, Sports, Science and Technology in Japan. H.O. is also supported by Hiroshima Univ. Alumni Association / Morozumi, Takuya ; Okane, Hideaki ; Sakamoto, Hiroki ; Shimizu, Yusuke ; et al We discuss a supersymmetric model with discrete flavor symmetry . [...] Published in Chinese Phys. 
C 42 (2018) 023102 10.1088/1674-1137/42/2/023102 arXiv:1707.04028 Fulltext: PDF XML;

5 Precise discussion of time-reversal asymmetries in B-meson decays / Morozumi, Takuya ; Okane, Hideaki ; Umeeda, Hiroyuki
BaBar collaboration announced that they observed time reversal (T) asymmetry through B meson system. [...]
Published in JHEP 1502 (2015) 174 10.1007/JHEP02(2015)174 arXiv:1411.2104 Fulltext: XML PDF (PDFA);
https://www.fisicalab.com/en/section/horizontal-launch
Horizontal launch is an example of the composition of motion in two dimensions: a u.r.m. on the horizontal axis and a u.a.r.m. on the vertical one. ## Concept and representation Horizontal launch consists of launching a body horizontally, also known as a projectile, from a certain height. In the figure below you can see a representation of the situation: Horizontal launch Think about a drop that slides at constant velocity (v0) on a leaf located at a height H, reaches the edge, and falls to the ground. During the fall, it moves at constant velocity v0 along the x-axis (u.r.m.) and in free fall along the y-axis (u.a.r.m.) due to the action of gravity. Initially, the velocity along the y-axis is 0 (vy = 0), and it increases in magnitude as the projectile descends. Notice the projections of the motion onto the axes and verify that they coincide with the motions we have described (u.r.m. and u.a.r.m.). Horizontal launch is the composition of a uniform rectilinear motion (horizontal u.r.m.) and a uniformly accelerated rectilinear motion of free fall (vertical u.a.r.m.). The moving object in this kind of motion is often referred to as a projectile, or a horizontally launched projectile. ## Equations The equations for horizontal launch are: • The equation of the u.r.m. for the x-axis $x={x}_{0}+{v}_{x}·t$ • The equations of the u.a.r.m. for the y-axis ${v}_{y}={v}_{0y}+{a}_{y}·t$ $y={y}_{0}+{v}_{0y}·t+\frac{1}{2}·{a}_{y}·{t}^{2}$ Since the velocity forms an angle α with the horizontal during the fall, its x and y components are determined using the usual trigonometric relationships: Decomposition of the velocity vector Any vector, including the velocity, can be broken down into two vectors, vx and vy, that have the same directions as the Cartesian axes. The magnitude of both vectors can be calculated from the angle that the vector forms with the horizontal through the expressions shown in the figure. 
Finally, taking into account what we stated previously, namely that y0 = H, x0 = 0, and ay = -g, we can rewrite the formulas as shown in the following table. These are the final expressions for calculating the kinematic magnitudes of a horizontally launched projectile: Position (m) Velocity (m/s) Acceleration (m/s2) Horizontal Axis $x={x}_{0}+v\cdot t$ ${v}_{x}={v}_{0x}=\text{const}$ ${a}_{x}=0$ Vertical Axis $y=\mathrm{H}-\frac{1}{2}g{t}^{2}$ ${v}_{y}=-g\cdot t$ ${a}_{y}=-g$ Experiment and Learn Data g = 9.8 m/s2 Horizontally launched projectile The blue ball in the figure represents a body suspended above the ground. You can drag it up to the initial height H that you want and select the initial velocity (v0) with which it will be launched horizontally. The gray line represents the trajectory it will describe with your selected values. Then press the Play button. Drag the time slider and observe how its position (x and y) and its velocity (vx and vy) are calculated at every moment of its descent toward the ground. Verify that the projection of the body on the y-axis (green) undergoes free fall and that its projection on the x-axis (red) describes a uniform rectilinear motion. ### Equation of position and trajectory of a horizontally launched projectile The equation of position of a body helps us determine the point at which it is located at each moment in time. In the case of a body moving in two dimensions, remember that, generically, its position is described by: $\stackrel{\to }{r}\left(t\right)=x\left(t\right)\stackrel{\to }{i}+y\left(t\right)\stackrel{\to }{j}$ Substituting the previous expressions for the position on the horizontal axis (u.r.m.) and on the vertical axis (u.a.r.m.) into the generic equation of position, we arrive at the equation of position for a horizontally launched projectile. 
The equation of position for a horizontally launched projectile is given by: $\stackrel{\to }{r}=\left({x}_{0}+v\cdot t\right)·\stackrel{\to }{i}+\left({y}_{0}-\frac{1}{2}·g·{t}^{2}\right)·\stackrel{\to }{j}$ On the other hand, to determine the path that the body follows, i.e., its trajectory equation, we can combine the previous equations to eliminate t, leaving: $y={y}_{0}-\frac{1}{2·{{v}_{0}}^{2}}·g·{x}^{2}={y}_{0}-k·{x}^{2}$ Where $k=\frac{1}{2·{{v}_{0}}^{2}}·g$ is constant throughout the trajectory.
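The table and trajectory equation above translate directly into code. The following Python sketch (not part of the original page; the function name and the sample values v0 = 2 m/s and H = 4.9 m are our own choices) evaluates the kinematic magnitudes and checks that the trajectory equation y = H - k·x² agrees with the parametric equations:

```python
import math

G = 9.8  # gravitational acceleration in m/s^2, the value used on this page

def horizontal_launch(v0, H, t):
    """Kinematic magnitudes of a horizontally launched projectile at time t.

    x-axis: uniform rectilinear motion (u.r.m.) with x0 = 0.
    y-axis: free fall (u.a.r.m.) starting from height H with v0y = 0.
    """
    x = v0 * t                # x = x0 + v0 * t
    y = H - 0.5 * G * t ** 2  # y = H - (1/2) g t^2
    vx = v0                   # horizontal velocity is constant
    vy = -G * t               # vy = -g t
    return x, y, vx, vy

# Sample launch: v0 = 2 m/s from H = 4.9 m.
v0, H = 2.0, 4.9
t_flight = math.sqrt(2 * H / G)  # time to reach the ground (y = 0)
x, y, vx, vy = horizontal_launch(v0, H, t_flight)

# The trajectory equation y = H - k x^2, with k = g / (2 v0^2),
# must agree with the parametric equations at every instant.
k = G / (2 * v0 ** 2)
assert abs((H - k * x ** 2) - y) < 1e-12
```

The assertion holds for any t, since substituting t = x/v0 into the vertical equation reproduces y = H - g·x²/(2·v0²) exactly.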
https://video.ias.edu/1011
## Galois Representations and Automorphic Forms September 1, 2010 ## STPM - New Computations of the Riemann Zeta Function Jonathan Bober Institute for Advanced Study September 21, 2010 I'll describe joint work (in progress) with Ghaith Hiary on implementing and running Hiary's O(t^1/3) algorithm for computing the zeta function and give some highlights of recent computations. ## STPM - New Tools for an Old Problem: The Dynamics of Area Preserving Disc Maps Barney Bramham Institute for Advanced Study September 21, 2010 ## STPM - Extended Scaling Relations for Weak-Universal Models Pierluigi Falco Institute for Advanced Study September 21, 2010 I will introduce some examples of models of Statistical Mechanics that are called 'weak-universal' and I will discuss the role of the extended scaling relations for the critical indices. Finally I will mention some results and some works in progress. ## STPM - Local to Global Phenomena in Deficient Groups Elena Fuchs Institute for Advanced Study September 21, 2010 ## STPM - Automorphy of Galois Representations David Geraghty Institute for Advanced Study September 21, 2010
http://math.stackexchange.com/questions/110156/solving-the-terminal-velocity-equation-with-a-initial-velocity
# Solving the terminal velocity equation with an initial velocity I was trying to create a better answer for this Stack Overflow question. I wanted to give the person an example of code using air resistance; however, every example I find on the net shows the formula: $$v(t) = \frac{1}{\alpha}\tanh(\alpha g t)$$ which assumes a starting velocity of 0. I noticed they had "Parachute" variables, so I assumed that at some point in the future a parachute would be opened. The problem I encounter is that I no longer have a starting velocity and time of 0. I tried to follow the derivation on the Terminal Velocity Wikipedia page, but it has been too long and I do not know my calculus well enough anymore to change $$t-0={1 \over g} \left[{\ln \frac{1+\alpha v^\prime}{1-\alpha v^\prime} \over 2\alpha}+C \right]_{v^\prime=0}^{v^\prime=v_t}$$ into $$t-t_i={1 \over g} \left[{\ln \frac{1+\alpha v^\prime}{1-\alpha v^\prime} \over 2\alpha}+C \right]_{v^\prime={v_i}}^{v^\prime=v_t}$$ The farthest I got trying to find $v(t)$ was $$v(t) = \frac{1}{\alpha}\tanh(\alpha g t), \quad t < t_p$$ $$t - t_p=\frac{1}{\alpha g}\left(\mathrm{arctanh}(\alpha v)-\mathrm{arctanh}(\alpha v_p)\right), \quad t \geq t_p$$ Can anyone help me out with the last steps, and please show the intermediate steps so I can learn how to do similar things in the future. Also, any help on finding $x(t)$ would be appreciated too, as I know I will likely have trouble finding that as well.
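For what it's worth, inverting the last displayed relation gives one plausible closed form: $v(t) = \frac{1}{\alpha}\tanh\big(\alpha g (t - t_p) + \mathrm{arctanh}(\alpha v_p)\big)$, which reduces to the familiar formula when $t_p = v_p = 0$, and integrating once more yields $x(t)$. The Python sketch below (our own, not a vetted derivation; function names are ours) encodes both. Note it is only valid while $\alpha v_p < 1$, i.e. while the initial speed is below the terminal speed $1/\alpha$ for the current drag parameter (after a parachute opens with $v_p$ above the new terminal speed, the solution involves coth instead of tanh):

```python
import math

def velocity(t, alpha, g=9.8, t_p=0.0, v_p=0.0):
    """Velocity under quadratic air resistance, given velocity v_p at time t_p.

    Inverting  t - t_p = (1/(alpha*g)) * (artanh(alpha*v) - artanh(alpha*v_p))
    yields     v(t)    = (1/alpha) * tanh(alpha*g*(t - t_p) + artanh(alpha*v_p)).
    Requires |alpha * v_p| < 1 (speed below the terminal speed 1/alpha).
    """
    return math.tanh(alpha * g * (t - t_p) + math.atanh(alpha * v_p)) / alpha

def distance(t, alpha, g=9.8, t_p=0.0, v_p=0.0):
    """Distance fallen since t_p, obtained by integrating velocity():

    x(t) - x(t_p) = (1/(alpha^2 * g)) * (ln cosh u(t) - ln cosh u(t_p)),
    where u(t) = alpha*g*(t - t_p) + artanh(alpha*v_p).
    """
    u0 = math.atanh(alpha * v_p)
    u = alpha * g * (t - t_p) + u0
    return (math.log(math.cosh(u)) - math.log(math.cosh(u0))) / (alpha ** 2 * g)

# Sanity checks: continuity at t_p, and approach to the terminal speed 1/alpha.
assert abs(velocity(2.0, 0.1, t_p=2.0, v_p=3.0) - 3.0) < 1e-12
assert abs(velocity(100.0, 0.1) - 10.0) < 1e-6
```

Differentiating `distance` reproduces `velocity`, since d/dt ln cosh(u) = tanh(u)·αg, which is one quick way to check the integration step.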
http://lists.macromates.com/textmate-dev/2005-November/003163.html
# [SVN] r2257 (Latex) Thu Nov 24 16:47:55 UTC 2005 On Nov 24, 2005, at 3:44 AM, Allan Odgaard wrote: > What is the status on the \text* snippets? can these go in favor of > ctrl-<? > I personally don't really use them, but I don't see how ctrl-< is a replacement for them. doesn't ctrl-< create begin-end groups?
https://brilliant.org/problems/practice-solve-the-equation-with-square-roots/
# Solve The Equation With Square Roots Algebra Level 5 How many integer values of $N$ satisfy the following equation: $\sqrt{ N+20 - 10\sqrt{N-5} } + \sqrt{ N + 44 + 14 \sqrt{N-5} } = 12$ Details and assumptions The domain of the square root function is non-negative reals.
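(A numerical check of our own, not part of the original problem; skip it if you don't want the answer spoiled.) Since $N + 20 - 10\sqrt{N-5} = (\sqrt{N-5} - 5)^2$ and $N + 44 + 14\sqrt{N-5} = (\sqrt{N-5} + 7)^2$, the equation collapses to $|r - 5| + (r + 7) = 12$ with $r = \sqrt{N-5}$, which holds exactly when $0 \le r \le 5$. A brute-force Python check of that reasoning:

```python
import math

def lhs(N):
    """Left-hand side of the equation; defined for N >= 5."""
    r = math.sqrt(N - 5)
    a = N + 20 - 10 * r   # equals (r - 5)^2, a perfect square
    b = N + 44 + 14 * r   # equals (r + 7)^2, a perfect square
    return math.sqrt(a) + math.sqrt(b)

# The equation reduces to |r - 5| + (r + 7) = 12, true exactly for 0 <= r <= 5,
# i.e. 5 <= N <= 30. Brute force over a generous range confirms the count:
solutions = [N for N in range(5, 1000) if abs(lhs(N) - 12) < 1e-9]
print(len(solutions))  # -> 26
```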
https://labs.tib.eu/arxiv/?author=V.I.%20Umatov
• ### Development of $^{100}$Mo-containing scintillating bolometers for a high-sensitivity neutrinoless double-beta decay search(1704.01758) Oct. 4, 2017 nucl-ex, physics.ins-det This paper reports on the development of a technology involving $^{100}$Mo-enriched scintillating bolometers, compatible with the goals of CUPID, a proposed next-generation bolometric experiment to search for neutrinoless double-beta decay. Large mass ($\sim$1~kg), high optical quality, radiopure $^{100}$Mo-containing zinc and lithium molybdate crystals have been produced and used to develop high performance single detector modules based on 0.2--0.4~kg scintillating bolometers. In particular, the energy resolution of the lithium molybdate detectors near the $Q$-value of the double-beta transition of $^{100}$Mo (3034~keV) is 4--6~keV FWHM. The rejection of the $\alpha$-induced dominant background above 2.6~MeV is better than 8$\sigma$. Less than 10~$\mu$Bq/kg activity of $^{232}$Th ($^{228}$Th) and $^{226}$Ra in the crystals is ensured by boule recrystallization. The potential of $^{100}$Mo-enriched scintillating bolometers to perform high sensitivity double-beta decay searches has been demonstrated with only 10~kg$\times$d exposure: the two neutrino double-beta decay half-life of $^{100}$Mo has been measured with the up-to-date highest accuracy as $T_{1/2}$ = [6.90 $\pm$ 0.15(stat.) $\pm$ 0.37(syst.)] $\times$ 10$^{18}$~yr. Both crystallization and detector technologies favor lithium molybdate, which has been selected for the ongoing construction of the CUPID-0/Mo demonstrator, containing several kg of $^{100}$Mo. • ### Calorimeter development for the SuperNEMO double beta decay experiment(1707.06823) July 21, 2017 physics.ins-det SuperNEMO is a double-$\beta$ decay experiment, which will employ the successful tracker-calorimeter technique used in the recently completed NEMO-3 experiment. 
SuperNEMO will implement 100 kg of double-$\beta$ decay isotope, reaching a sensitivity to the neutrinoless double-$\beta$ decay ($0\nu\beta\beta$) half-life of the order of $10^{26}$ yr, corresponding to a Majorana neutrino mass of 50-100 meV. One of the main goals and challenges of the SuperNEMO detector development programme has been to reach a calorimeter energy resolution, $\Delta$E/E, around 3\%/$\sqrt{E\,(\mathrm{MeV})}$ $\sigma$, or 7\%/$\sqrt{E\,(\mathrm{MeV})}$ FWHM (full width at half maximum), using a calorimeter composed of large volume plastic scintillator blocks coupled to photomultiplier tubes. We describe the R\&D programme and the final design of the SuperNEMO calorimeter that has met this challenging goal. • ### The BiPo-3 detector for the measurement of ultra low natural radioactivities of thin materials(1702.07176) June 7, 2017 physics.ins-det The BiPo-3 detector, running in the Canfranc Underground Laboratory (Laboratorio Subterráneo de Canfranc, LSC, Spain) since 2013, is a low-radioactivity detector dedicated to measuring ultra low natural radionuclide contaminations of $^{208}$Tl ($^{232}$Th chain) and $^{214}$Bi ($^{238}$U chain) in thin materials. The total sensitive surface area of the detector is 3.6 m$^2$. The detector has been developed to measure radiopurity of the selenium double $\beta$-decay source foils of the SuperNEMO experiment. In this paper the design and performance of the detector, and results of the background measurements in $^{208}$Tl and $^{214}$Bi, are presented, and validation of the BiPo-3 measurement with a calibrated aluminium foil is discussed. Results of the $^{208}$Tl and $^{214}$Bi activity measurements of the first enriched $^{82}$Se foils of the double $\beta$-decay SuperNEMO experiment are reported. The sensitivity of the BiPo-3 detector for the measurement of the SuperNEMO $^{82}$Se foils is $\mathcal{A}$($^{208}$Tl) $<2$ $\mu$Bq/kg (90\% C.L.) and $\mathcal{A}$($^{214}$Bi) $<140$ $\mu$Bq/kg (90\% C.L.) 
after 6 months of measurement. • ### Measurement of the $2\nu\beta\beta$ Decay Half-Life and Search for the $0\nu\beta\beta$ Decay of $^{116}$Cd with the NEMO-3 Detector(1610.03226) Dec. 23, 2016 hep-ex, physics.ins-det The NEMO-3 experiment measured the half-life of the $2\nu\beta\beta$ decay and searched for the $0\nu\beta\beta$ decay of $^{116}$Cd. Using $410$ g of $^{116}$Cd installed in the detector with an exposure of $5.26$ y, ($4968\pm74$) events corresponding to the $2\nu\beta\beta$ decay of $^{116}$Cd to the ground state of $^{116}$Sn have been observed with a signal to background ratio of about $12$. The half-life of the $2\nu\beta\beta$ decay has been measured to be $T_{1/2}^{2\nu}=[2.74\pm0.04\mbox{(stat.)}\pm0.18\mbox{(syst.)}]\times10^{19}$ y. No events have been observed above the expected background while searching for $0\nu\beta\beta$ decay. The corresponding limit on the half-life is determined to be $T_{1/2}^{0\nu} \ge 1.0 \times 10^{23}$ y at the $90$ % C.L. which corresponds to an upper limit on the effective Majorana neutrino mass of $\langle m_{\nu} \rangle \le 1.4-2.5$ eV depending on the nuclear matrix elements considered. Limits on other mechanisms generating $0\nu\beta\beta$ decay such as the exchange of R-parity violating supersymmetric particles, right-handed currents and majoron emission are also obtained. • ### Improvement of radiopurity level of enriched $^{116}$CdWO$_4$ and ZnWO$_4$ crystal scintillators by recrystallization(1607.04117) July 14, 2016 nucl-ex, physics.ins-det As low as possible radioactive contamination of a detector plays a crucial role to improve sensitivity of a double beta decay experiment. The radioactive contamination of a sample of $^{116}$CdWO$_4$ crystal scintillator by thorium was reduced by a factor $\approx 10$, down to the level 0.01 mBq/kg ($^{228}$Th), by exploiting the recrystallization procedure. 
The total alpha activity of uranium and thorium daughters was reduced by a factor $\approx 3$, down to 1.6 mBq/kg. No change in the specific activity (the total $\alpha$ activity and $^{228}$Th) was observed in a sample of ZnWO$_4$ crystal produced by recrystallization after removing an $\approx 0.4$ mm surface layer of the crystal. • ### First test of an enriched $^{116}$CdWO$_4$ scintillating bolometer for neutrinoless double-beta-decay searches(1606.07806) June 24, 2016 nucl-ex, physics.ins-det For the first time, a cadmium tungstate crystal scintillator enriched in $^{116}$Cd has been successfully tested as a scintillating bolometer. The measurement was performed above ground at a temperature of 18 mK. The crystal mass was 34.5 g and the enrichment level ~82%. Despite a substantial pile-up effect due to above-ground operation, the detector demonstrated a high energy resolution (2-7 keV FWHM in the 0.2-2.6 MeV $\gamma$ energy range), a powerful particle identification capability and a high level of internal radiopurity. These results prove that cadmium tungstate is an extremely promising detector material for a next-generation neutrinoless double-beta decay bolometric experiment, like that proposed in the CUPID project (CUORE Upgrade with Particle IDentification). • ### Search for double beta decay of $^{116}$Cd with enriched $^{116}$CdWO$_4$ crystal scintillators (Aurora experiment)(1601.05578) Jan. 21, 2016 nucl-ex, physics.ins-det The Aurora experiment to investigate double beta decay of $^{116}$Cd with the help of 1.162 kg cadmium tungstate crystal scintillators enriched in $^{116}$Cd to 82\% is in progress at the Gran Sasso Underground Laboratory. The half-life of $^{116}$Cd relative to the two neutrino double beta decay is measured with the highest accuracy to date, $T_{1/2}=(2.62\pm0.14)\times10^{19}$ yr. 
The sensitivity of the experiment to the neutrinoless double beta decay of $^{116}$Cd to the ground state of $^{116}$Sn is estimated as $T_{1/2} \geq 1.9\times10^{23}$ yr at 90\% CL, which corresponds to the effective Majorana neutrino mass limit $\langle m_{\nu}\rangle \leq (1.2-1.8)$ eV. New limits are obtained for the double beta decay of $^{116}$Cd to the excited levels of $^{116}$Sn, and for the neutrinoless double beta decay with emission of majorons. • ### Result of the search for neutrinoless double-$\beta$ decay in $^{100}$Mo with the NEMO-3 experiment(1506.05825) Oct. 22, 2015 hep-ex, nucl-ex, physics.ins-det The NEMO-3 detector, which had been operating in the Modane Underground Laboratory from 2003 to 2010, was designed to search for neutrinoless double $\beta$ ($0\nu\beta\beta$) decay. We report final results of a search for $0\nu\beta\beta$ decays with $6.914$ kg of $^{100}$Mo using the entire NEMO-3 data set with a detector live time of $4.96$ yr, which corresponds to an exposure of 34.3 kg$\cdot$yr. We perform a detailed study of the expected background in the $0\nu\beta\beta$ signal region and find no evidence of $0\nu\beta\beta$ decays in the data. The level of observed background in the $0\nu\beta\beta$ signal region $[2.8-3.2]$ MeV is $0.44 \pm 0.13$ counts/yr/kg, and no events are observed in the interval $[3.2-10]$ MeV. We therefore derive a lower limit on the half-life of $0\nu\beta\beta$ decays in $^{100}$Mo of $T_{1/2}(0\nu\beta\beta)> 1.1 \times 10^{24}$ yr at the $90\%$ Confidence Level, under the hypothesis of light Majorana neutrino exchange. Depending on the model used for calculating nuclear matrix elements, the limit for the effective Majorana neutrino mass lies in the range $\langle m_{\nu} \rangle < 0.33$--$0.62$ eV. We also report constraints on other lepton-number violating mechanisms for $0\nu\beta\beta$ decays. • ### Search for 2{\beta} decay of 116Cd with the help of enriched 116CdWO4 crystal scintillators(1312.0743) Dec. 
3, 2013 nucl-ex, physics.ins-det Cadmium tungstate crystal scintillators enriched in $^{116}$Cd to 82% ($^{116}$CdWO$_4$, total mass of $\approx$1.2 kg) are used to search for 2$\beta$ decay of $^{116}$Cd deep underground at the Gran Sasso National Laboratory of the INFN (Italy). The radioactive contamination of the $^{116}$CdWO$_4$ crystals has been studied carefully to reconstruct the background of the detector. The measured half-life of $^{116}$Cd relative to 2$\nu$2$\beta$ decay is $T^{2\nu2\beta}_{1/2}$ = [2.8 $\pm$ 0.05(stat.) $\pm$ 0.4(syst.)] $\times$ 10$^{19}$ yr, in agreement with the results of previous experiments. The obtained limit on the 0$\nu$2$\beta$ decay of $^{116}$Cd (considering the data of the last 8696 h run with an advanced background 0.12(2) counts/yr/kg/keV in the energy interval 2.7-2.9 MeV) is $T_{1/2} \ge 1.0 \times 10^{23}$ yr at 90% C.L. The sensitivity of the experiment to the $0\nu2\beta$ process is $\lim T_{1/2} = 3 \times 10^{23}$ yr at 90% C.L. over 5 years of the measurements and it can be advanced (by further reduction of the background by a factor 3-30) to the level of $\lim T_{1/2} = (0.5-1.5) \times 10^{24}$ yr for the same period of the data taking. • ### CdWO4 crystal scintillators from enriched isotopes for double beta decay experiments(1302.4905) Feb. 20, 2013 nucl-ex, physics.ins-det Cadmium tungstate crystal scintillators enriched in 106Cd and 116Cd were developed. The produced scintillators exhibit good optical and scintillation properties, and a low level of radioactive contamination. Experiments to search for double beta decay of 106Cd and 116Cd are in progress at the Gran Sasso National Laboratories of the INFN (Italy). Prospects to further improve the radiopurity of the detectors by recrystallization are discussed. • ### Measurement of $\beta\beta$ Decay-Simulating Events in Nuclear Emulsion with Molybdenum Filling(1109.6110) Sept. 
28, 2011 nucl-ex The measurement of positron--nucleus collisions was used to estimate the possibility of suppressing background events that simulate $\beta\beta$ decay in the emulsion region adjacent to molybdenum conglomerates. The range of the escape of two relativistic particles from the interaction was found to be $\langle d \rangle = (0.60\pm 0.03)~\mu$m, which approximately corresponds to the grain size of developed nuclear emulsion. No correlation of the values of $d$ with the angle between two relativistic particles was observed. It was shown that it was possible to exclude $\beta\beta$ decay background from electrons emerging in the decay of elements of naturally occurring radioactive chains. The background from $\beta$ decays of $^{90}$Sr and $^{40}$K available in emulsion around Mo conglomerates was determined by the ratio of the volume $(\sim d^3)$ to the total volume of emulsion and was found to be $1.5\cdot 10^{-2}$. It was shown that the backgrounds from $^{40}$K, $^{90}$Sr and natural radioactivity could be significantly suppressed and would not limit the sensitivity of the experiment with 1 kg $^{100}$Mo. • ### Results of the BiPo-1 prototype for radiopurity measurements for the SuperNEMO double beta decay source foils(1005.0343) May 3, 2010 hep-ex, physics.ins-det The development of BiPo detectors is dedicated to the measurement of extremely high radiopurity in $^{208}$Tl and $^{214}$Bi for the SuperNEMO double beta decay source foils. A modular prototype, called BiPo-1, with 0.8 $m^2$ of sensitive surface area, has been running in the Modane Underground Laboratory since February, 2008. The goal of BiPo-1 is to measure the different components of the background and in particular the surface radiopurity of the plastic scintillators that make up the detector. The first phase of data collection has been dedicated to the measurement of the radiopurity in $^{208}$Tl. 
After more than one year of background measurement, a surface activity of the scintillators of $\mathcal{A}$($^{208}$Tl) $=$ 1.5 $\mu$Bq/m$^2$ is reported here. Given this level of background, a larger BiPo detector having 12 m$^2$ of active surface area, is able to qualify the radiopurity of the SuperNEMO selenium double beta decay foils with the required sensitivity of $\mathcal{A}$($^{208}$Tl) $<$ 2 $\mu$Bq/kg (90% C.L.) with a six month measurement. • ### Nuclear emulsion with molybdenum filling for observation of $\beta\beta$ decay(1002.2834) Feb. 15, 2010 nucl-ex The usage of nuclear emulsion with molybdenum filling for observation of $\beta\beta$ decay are shown to be possible. Estimates for 1 kg of $^{100}$Mo with zero background give the sensitivity for the $0\nu\beta\beta$ decay of $^{100}$Mo at the level of $\sim 1.5\cdot 10^{24}$ y for 1 year of measurement. • ### Investigation of $\beta\beta$ decay in $^{150}$Nd and $^{148}$Nd to the excited states of daughter nuclei(0904.1924) April 13, 2009 nucl-ex
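As a rough cross-check of our own (not from the abstracts above; the real analyses involve detailed efficiency and background modelling), the 2νββ half-life quoted in the NEMO-3 $^{116}$Cd abstract can be related to the reported event count through $T_{1/2} \approx \ln 2 \cdot N_{\mathrm{atoms}} \cdot t \cdot \varepsilon / N_{\mathrm{obs}}$ (valid since $t \ll T_{1/2}$). The detection efficiency $\varepsilon$ is not quoted, so this sketch solves for the value the quoted numbers imply:

```python
import math

N_A = 6.022e23        # Avogadro's number, atoms/mol
LN2 = math.log(2)

# Numbers quoted in the NEMO-3 116Cd abstract above.
mass_g = 410.0        # grams of 116Cd in the detector
molar_mass = 116.0    # g/mol, approximate
t_live_yr = 5.26      # live time in years
n_events = 4968.0     # observed 2vbb candidate events
t_half_yr = 2.74e19   # reported 2vbb half-life in years

n_atoms = mass_g / molar_mass * N_A

# From N_obs ~= eps * N_atoms * ln2 * t / T_1/2 (valid since t << T_1/2),
# solve for the detection efficiency eps implied by the quoted numbers.
eps = t_half_yr * n_events / (LN2 * n_atoms * t_live_yr)
print(f"implied detection efficiency: {eps:.1%}")
```

The result comes out at the few-percent level, which is at least the right order of magnitude for a tracker-calorimeter's 2νββ selection efficiency.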
https://tex.stackexchange.com/questions/256651/drawing-condensed-version-of-complex-dependency-diagrams
# Drawing condensed version of complex dependency diagrams I got a cool solution to my question drawing complex dependency diagrams with tikz / forest and want to reuse this style for drawing a condensed version of the diagram that is supposed to look like this: I now tried the following: \documentclass{article} \usepackage{forest} \usetikzlibrary{arrows.meta,bending} \tikzset{ connect/.style={{Stealth[bend]}-{Stealth[bend]}, draw=green, shorten <=-3pt, shorten >=-3pt} } \forestset{ wg/.style={ for tree={ no edge, draw, outer ysep=1pt, }, copy label/.style={ for children={ if content={}{ content/.pgfmath={content("!u")}, calign with current, edge={draw,{-Triangle[open,reversed]}}, copy label, !u.content/.pgfmath={content}, !u.content+=', }{ copy label, } } }, delay={ copy label, for tree={name/.pgfmath={content}}, }, for tree={content format={\strut\forestoption{content}}}, where n children={0}{ tier=word, }{}, }, } \tikzset{deparrow/.style={-Latex,blue}} \begin{document} \begin{forest} wg [ [small] [children] [were] [playing] [outside] ] % \draw[deparrow] (were.north) [bend.left] to ([xshift=-10pt]children.north); \draw[deparrow] (were) to[out=north, in=north] ([xshift=-10pt]children); \draw[deparrow] (children) to[out=north, in=north] (small); \draw[deparrow] (were) to[out=north, in=north] (playing); \draw[deparrow] (playing) to[out=north, in=north] (children); \draw[deparrow] (playing) to[out=north, in=north] (outside); \end{forest} \end{document} This produces: Shifting the node does not work. Also for the simple diagram the following tree may be more appropriate: \begin{forest} [were [children [small] ] [playing [outside] ] ] \end{forest} This corresponds to the arrows in the original figure (with one arrow from playing to children not being represented). Comparing this tree with the tree from drawing complex dependency diagrams with tikz / forest it seems to be possible to draw all the arrows (except the playing-children one) via the style. 
If this would work, one could have the simplex case as a condensed version of the complex case and the arrows (with the exception of one) would be drawn automatically. Edit: This is what I did after the first answer: • I really don't understand the second part of this question. Do you not want the nodes horizontally aligned? If that structure works better, why not use it? And what is 'the simplex case'? – cfr Jul 23 '15 at 15:53 • I'm not sure that a tree structure (and henceforth forest) is the best tool for the task at hand, i.e. drawing a linear structure, except if you aim for designing a simple switch between a hierarhical and flat representation of the same structure. As for the arrows: I believe that it will be difficult to create nice-looking curved arrows programmatically ... once they start overlapping, all hell breaks loose ... – Sašo Živanović Jul 25 '15 at 11:24 • I think the pure dependency structure (that is the one with the arrows not the one with the triangle lines) corresponds to a tree. But you are right, if there are additional dependencies, it is not trivial to leave the room one would require for such additional dependencies. (for instance the one from playing to children above). 
– Stefan Müller Jul 31 '15 at 12:56 You can adapt the existing style so the example works just by using phantom for the root and specifying the appropriate anchor for children: \documentclass[tikz,border=10pt,multi]{standalone} \usepackage{forest} \usetikzlibrary{arrows.meta,bending} \tikzset{ connect/.style={{Stealth[bend]}-{Stealth[bend]}, draw=green, shorten <=-3pt, shorten >=-3pt}, deparrow/.style={-Latex,blue} } \forestset{ wg/.style={ for tree={ no edge, draw, outer ysep=1pt, }, copy label/.style={ for children={ if content={}{ content/.pgfmath={content("!u")}, calign with current, edge={draw,{-Triangle[open,reversed]}}, copy label, !u.content/.pgfmath={content}, !u.content+=', }{ copy label, } } }, delay={ copy label, for tree={name/.pgfmath={content}}, }, for tree={content format={\strut\forestoption{content}}}, where n children={0}{ tier=word, }{}, }, } \begin{document} \begin{forest} wg [,phantom [small] [children] [were] [playing] [outside] ] % \draw[deparrow] (were.north) [bend.left] to ([xshift=-10pt]children.north); \draw[deparrow] (were) to[out=north, in=north] ([xshift=-10pt]children.north); \draw[deparrow] (children) to[out=north, in=north] (small); \draw[deparrow] (were) to[out=north, in=north] (playing); \draw[deparrow] (playing) to[out=north, in=north] (children); \draw[deparrow] (playing) to[out=north, in=north] (outside); \end{forest} \end{document} For something a little closer to the target: \draw[deparrow] ([xshift=-5pt]were.north) to[out=north, in=north] ([xshift=5pt]children.north); \draw[deparrow] ([xshift=-5pt]children.north) to[out=north, in=north] ([xshift=5pt]small.north); \draw[deparrow] ([xshift=5pt]were.north) to[out=north, in=north] ([xshift=-5pt]playing.north); \draw[deparrow] ([xshift=-3.5pt, yshift=2.5pt]playing.north) to[out=north, in=north] ([xshift=3.5pt, yshift=2.5pt]children.north); \draw[deparrow] ([xshift=5pt]playing.north) to[out=north, in=north] ([xshift=-5pt]outside.north); I don't fully understand the second part of 
your question, I'm afraid. # EDIT You can partially automate drawing the arrows by adding the following code to the wg style: hop left/.style={ tikz+={ \draw[deparrow] ([xshift=-5pt].north) to[out=north, in=north] ([xshift=5pt]!p.north); }, }, hop right/.style={ tikz+={ \draw[deparrow] ([xshift=5pt].north) to[out=north, in=north] ([xshift=-5pt]!n.north); }, }, Then you can specify the tree as follows: \begin{forest} wg [,phantom [small] [children, hop left] [were, hop left, hop right] [playing, hop right] [outside] ] \draw[deparrow] ([xshift=-3.5pt, yshift=2.5pt]playing.north) to[out=north, in=north] ([xshift=3.5pt, yshift=2.5pt]children.north); \end{forest} But you cannot do much more than this without at least some algorithm determining which way arrows should go, i.e. when they should hop left, when hop right, when both and when neither. Unless you are drawing a lot of trees like this and the pattern of hops is extremely uniform, it will be easier and less error-prone to specify the hops manually. In the tree posted, for example, you'd need to implement something like this:

for the phantom root node:
    if it has an odd number of children, then for each of its children:
        if the child's number equals the parent's number of children incremented by one and divided by two, then hop both right and left
        else if the child's number is less than that and the child is not the first child, then hop left
        else if the child is not the last child, then hop right
        else do nothing
    else do nothing

Then you'd presumably also have a specification of what to do if the root has an even number of children. If you are drawing hundreds of trees with the same pattern of arrows, it is probably worth doing. Otherwise, it seems easier to just say hop left and hop right. • OK. Thanks! After adding the phantom node the shifting worked again. This is what I was missing.
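For completeness, the hop-direction rules above can be sketched in ordinary code. This is only an illustration (the function name `hops` and the string labels are mine, and the even-children case is left open, as in the text):

```python
def hops(n):
    """Hop directions for the n children of the phantom root (1-indexed),
    following the odd-n rules sketched above. Illustrative only."""
    if n % 2 == 0:
        raise ValueError("pattern for an even number of children is not specified")
    mid = (n + 1) // 2  # the parent's number of children incremented by one, divided by two
    result = []
    for i in range(1, n + 1):
        if i == mid:
            result.append("hop left, hop right")
        elif i < mid and i != 1:
            result.append("hop left")
        elif i > mid and i != n:
            result.append("hop right")
        else:
            result.append("")
    return result
```

For the five-word example this reproduces the options given to children, were and playing, while small and outside get no hops.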
@Sašo Živanović is probably right that the general problem cannot be solved by styles. So I mark this question answered. – Stefan Müller Jul 31 '15 at 13:00 • @StefanMüller Have you looked at the chains library? Maybe that would work better here? You could draw some of the arrows automatically but it is hard to do so and avoid having them collide without leaving ugly gaps. – cfr Jul 31 '15 at 13:21 • @StefanMüller See also edit. – cfr Jul 31 '15 at 23:34 • yes, I think that the structure that is needed for this is the tree given in the question. This tree contains the information about where the dependency arrows have to be drawn from and to. It does not contain information about linearization of the words though. Interestingly, this information is contained in the second tree with the empty nodes. So it may be possible to draw the figure automatically from this. Anyway, for my purposes your answer is sufficient, and it would only make sense to invest further work if we wanted to solve this problem for the DG community. Thanks for the solution. – Stefan Müller Aug 16 '15 at 11:55
Enter the number of rows you want to be in Pascal's triangle: 7
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1

Pascal's triangle is an arrangement of the binomial coefficients in a triangle. The sum of the numbers in the nth row can be determined using the formula 2^n. Example: for N = 3 the row is 1, 3, 3, 1. For more ideas, or to check a conjecture, try searching online.

The third row has 3 numbers: 1, 1+1 = 2, 1. When you divide a number by 2, the remainder is 0 or 1. If you plotted the coefficients of the 1000th row of Pascal's triangle, the resulting 1000 points would look very much like a normal distribution. If you sum all the numbers in a row, you will get twice the sum of the previous row.

The way the entries are constructed in the table gives rise to Pascal's Formula (Theorem 6.6.1): let n and r be positive integers and suppose r <= n. Then C(n+1, r) = C(n, r-1) + C(n, r). Step by step descriptive logic to print Pascal's triangle follows below.
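Pascal's Formula is easy to sanity-check numerically; a quick sketch using Python's `math.comb`:

```python
import math

# Pascal's Formula: C(n+1, r) = C(n, r-1) + C(n, r) for 1 <= r <= n
for n in range(1, 30):
    for r in range(1, n + 1):
        assert math.comb(n + 1, r) == math.comb(n, r - 1) + math.comb(n, r)
```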
In mathematics, Pascal's triangle is a triangular array of the binomial coefficients that arises in probability theory, combinatorics, and algebra. It is named after the French mathematician Blaise Pascal. Each number is the two numbers directly above it added together. In each row of Pascal's triangle, n is the row number and k is the entry in that row, when counting from zero; entry k of row n is formed by computing C(n, k).

Another method is to use Legendre's theorem: the highest power of p which divides n! is [n/p] + [n/p^2] + [n/p^3] + … Note that to contribute a factor of 3 to C(n, m), the numerator product must have at least one more factor of three than the denominator. The number of odd numbers in the Nth row of Pascal's triangle is equal to 2^n, where n is the number of 1's in the binary form of N. In this case, 100 in binary is 1100100, so there are 8 odd numbers in the 100th row of Pascal's triangle.

The row sums are 1, 1 + 1 = 2, 1 + 2 + 1 = 4, 1 + 3 + 3 + 1 = 8, etc. This identity can help your algorithm because any row at index n carries the digits of 11^n. For the 100th row one needs combin(100,0), combin(100,1), combin(100,2), …, where combin(i,j) is the binomial coefficient. This video shows how to find the nth row of Pascal's triangle.
How many entries in the 100th row of Pascal's triangle are divisible by 3? What is the sum of the 100th row of Pascal's triangle?

To build the triangle, start with "1" at the top, then continue placing numbers below it in a triangular pattern. N(100,3) = 89, with the non-divisible m = 0, 1, 9, 10, 18, 19, 81, 82, 90, 91, 99, 100; N(100,7) = 92, with the non-divisible m = 0, 1, 2, 49, 50, 51, 98, 99, 100; and so on.

Below is the example of Pascal's triangle having 11 rows:

0th row: 1
1st row: 1 1
2nd row: 1 2 1
3rd row: 1 3 3 1
4th row: 1 4 6 4 1
5th row: 1 5 10 10 5 1
6th row: 1 6 15 20 15 6 1
7th row: 1 7 21 35 35 21 7 1
8th row: 1 8 28 56 70 56 28 8 1
9th row: 1 9 36 84 126 126 84 36 9 1
10th row: 1 10 45 120 210 252 210 120 45 10 1

You get a beautiful visual pattern. The nth row of Pascal's triangle contains the coefficients of the expanded polynomial (x+y)^n. Expand (x+y)^4 using Pascal's triangle: x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4. Here n is the row number and k is the term of that row.
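The second question has a closed-form answer: the entries of row n are C(n, 0) through C(n, n), and they sum to 2^n. A quick check for n = 100:

```python
import math

# Build row 100 exactly and sum it; the total must equal 2^100.
row100 = [math.comb(100, m) for m in range(101)]
total = sum(row100)
assert total == 2 ** 100
print(total)  # 1267650600228229401496703205376, i.e. about 1.2676506e30
```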
Pascal's triangle is named for Blaise Pascal, a French mathematician who used the triangle as part of … I need to find out the number of digits which are not divisible by a number x in the 100th row of Pascal's triangle. Can you generate the pattern on a computer? Each subsequent row is made by adding the number above and to the left with the number above and to the right.

Thus the number of k(n,m,j)'s that are > 0 can be added to give the number of C(n,m)'s that are evenly divisible by p; call this number N(n,j). The calculation of k(n,m,p) can be carried out from its recurrence relation without calculating C(n,m).

He has noticed that each row of Pascal's triangle can be used to determine the coefficients of the binomial expansion of (x + y)^n, as shown in the figure. The top row is numbered as n=0, and the entries in each row are numbered from the left beginning with k = 0. This works till the 5th line, which is 11 to the power of 4 (14641). For n=100 (assumed to be what the asker meant by the 100th row; there are 101 binomial coefficients), I get … In mathematics, it is a triangular array of the binomial coefficients. To build the triangle, always start with "1" at the top, then continue placing numbers below it in a triangular pattern; each number is the two numbers above it added together. Of course, one way to get these answers is to write out the 100th row of Pascal's triangle, divide by 2, 3, or 5, and count (this is the basic idea behind the geometric approach). For the purposes of these rules, I am numbering rows starting from 0, so that row … One of the most interesting number patterns is Pascal's triangle.
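The "powers of 11" remark can be checked directly; the digit pattern holds only while every entry is a single digit (through row 4), after which carrying scrambles it. A small sketch:

```python
import math

def row(n):
    """Row n of Pascal's triangle as a list of binomial coefficients."""
    return [math.comb(n, k) for k in range(n + 1)]

# Rows 0-4 read off as the digits of 11^n ...
assert str(11 ** 4) == "".join(str(c) for c in row(4))  # both are '14641'
# ... but row 5 is 1 5 10 10 5 1, while 11^5 = 161051: carrying has merged digits.
assert 11 ** 5 == 161051
assert "".join(str(c) for c in row(5)) == "15101051"
```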
Note: the row index starts from 0. My Excel file 'BinomDivide.xls' can be downloaded at … OK, I assume the 100th row is the one that goes 1, 100, 4950, and so on. The entries that are not divisible are C(100,n) where n = 0, 1, 9, 10, 18, 19, 81, 82, 90, 91, 99, 100. By 5? In fact, if Pascal's triangle were expanded further past row 15, you would see that the sum of the numbers of any nth row would equal 2^n. How many odd numbers are in the 100th row of Pascal's triangle? (Once we reach another factor of 9 in the denominator, which has two 3's, the number of 3's in numerator and denominator are equal again; they all cancel out and no factor of 3 remains.) What about the patterns you get when you divide by other numbers? How many entries in the 100th row of Pascal's triangle are divisible by 3?

Let K(m,j) = the number of times that the factor j appears in the factorization of m. Then for j > 1, from the recurrence relation for C(n,m) we have the recurrence relation for k(n,m,j): k(n,m+1,j) = k(n,m,j) + K(n-m,j) - K(m+1,j), for m = 0, 1, ..., n-1. If k(n,m,j) > 0, then C(n,m) can be divided by j; if k(n,m,j) = 0, it cannot. To iterate through rows, run a loop from 0 to num, incrementing by 1 in each iteration. This solution works for any allowable n, m, p. Looking at the first few lines of the triangle, you will see that they are powers of 11, i.e. the 3rd line (121) can be expressed as 11 to the power of 2.
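The counts quoted above can be verified by brute force with exact integer arithmetic, as a cross-check on the recurrence-based approach:

```python
import math

# Row 100 exactly, then filter by divisibility.
row = [math.comb(100, m) for m in range(101)]
not_div3 = [m for m, c in enumerate(row) if c % 3 != 0]
assert not_div3 == [0, 1, 9, 10, 18, 19, 81, 82, 90, 91, 99, 100]
assert sum(1 for c in row if c % 3 == 0) == 89   # N(100,3)
assert sum(1 for c in row if c % 5 == 0) == 96   # entries divisible by 5
```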
Consider again Pascal's triangle, in which each number is obtained as the sum of the two neighboring numbers in the preceding row. Presentation suggestion: prior to the class, have the students try to discover the pattern for themselves, either in homework or in group investigation. Note: if we know the previous coefficient, this formula can be used to calculate the current coefficient in Pascal's triangle. Given a non-negative integer N, the task is to find the Nth row of Pascal's triangle. (Note: could you optimize your algorithm to use only O(k) extra space?)

There are eight odd numbers in the 100th row of Pascal's triangle, 89 numbers that are divisible by 3, and 96 numbers that are divisible by 5. For the 100th row, the sum of numbers is found to be 2^100 = 1.2676506x10^30. Pascal's Triangle Investigation SOLUTIONS. Disclaimer: there are loads of patterns and results to be found in Pascal's triangle. So 5^2 divides C(100,77). Since Pascal's triangle is infinite, there's no bottom row.
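The "previous coefficient" remark refers to the ratio C(n, k+1)/C(n, k) = (n-k)/(k+1), which lets you build a row left to right in O(k) extra space beyond the output itself. A minimal sketch:

```python
def pascal_row(n):
    """Row n of Pascal's triangle via C(n, k+1) = C(n, k) * (n - k) // (k + 1)."""
    row, c = [], 1
    for k in range(n + 1):
        row.append(c)
        c = c * (n - k) // (k + 1)  # exact: the division always comes out even
    return row
```

For example, `pascal_row(4)` gives `[1, 4, 6, 4, 1]`, and `pascal_row(100)` gives all 101 entries of the 100th row without computing any factorials.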
When all the odd integers in Pascal's triangle are highlighted (black) and the remaining evens are left blank (white), one of many patterns in Pascal's triangle is displayed. You should be able to see that each number from the 1, 4, 6, 4, 1 row has been used twice in the calculations for the next row. The highest power of p dividing n! is [n/p] + [n/p^2] + [n/p^3] + … But at 25, 50, etc. we get rows entirely divisible by five (except for the two 1's on the end). In this program, we will learn how to print Pascal's triangle using the Python programming language. Color the entries in Pascal's triangle according to this remainder. It is then a simple matter to compare the number of factors of 3 between these two numbers using the formula above. Thus C(100,77) is divisible by 20.

For instance, the first row is 11 to the power of 0 (1), the second is eleven to the power of 1 (1,1), the third is 11 to the power of 2 (1,2,1), etc. From n = 1 to n = 24, the number of 5's in the numerator is greater than the number in the denominator (in fact, there is a difference of two 5's starting from n = 1). The algorithm I applied in order to find this is: since Pascal's triangle is powers of 11 from the second row on, the nth row can be found by 11^(n-1) and can easily be … For example, the fifth row of Pascal's triangle can be used to determine the coefficients of the expansion of (x + y)^4.
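Legendre's formula quoted above gives the exact power of a prime p in C(n, m) as a difference of three factorial valuations. Here is a sketch that confirms the C(100, 77) claim (the helper names are mine):

```python
import math

def legendre(n, p):
    """Exponent of prime p in n!: [n/p] + [n/p^2] + [n/p^3] + ..."""
    total = 0
    while n:
        n //= p
        total += n
    return total

def prime_power_in_binom(n, m, p):
    # v_p(C(n, m)) = v_p(n!) - v_p(m!) - v_p((n-m)!)
    return legendre(n, p) - legendre(m, p) - legendre(n - m, p)

assert prime_power_in_binom(100, 77, 5) == 2   # 5^2 divides C(100, 77)
assert math.comb(100, 77) % 20 == 0            # with its factors of 2 it is divisible by 20
```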
Farmer Brown has some chickens and sheep. There are 76 legs and 25 heads. How many chickens and how many sheep does he have?

If we interpret it as each number being a number instead (weird sentence, I know), 100 would actually be the smallest three-digit number in Pascal's triangle. There are eight odd numbers in the 100th row of Pascal's triangle, 89 numbers that are divisible by 3, and 96 numbers that are divisible by 5. One interesting fact about Pascal's triangle is that each row's numbers are the digits of a power of 11. In much of the Western world, it is named after the French mathematician Blaise Pascal, although other mathematicians studied it centuries before him in India, Persia, China, Germany, and Italy. For n < 125, C(n,m+1) = (n - m)*C(n,m)/(m+1), for m = 0, 1, ..., n-1. For the purposes of these rules, I am numbering rows starting from 0, so that row …

## pascal's triangle 100th row

The first row has only a 1. Each number is found by adding the two numbers which reside in the previous row, directly above the current cell. Let k(n,m,j) = the number of times that the factor j appears in the factorization of C(n,m). This increased the number of 3's by two, and the number of factors of 3 in numerator and denominator are equal. As we know, Pascal's triangle can be created as follows: in the top row, there is an array of 1. At n+1 the difference in factors of 5 becomes two again. THEOREM: The number of odd entries in row N of Pascal's triangle is 2 raised to the number of 1's in the binary expansion of N. Example: since 83 = 64 + 16 + 2 + 1 has binary expansion (1010011), row 83 has 2^4 = 16 odd numbers. The sum of numbers in the nth row can be determined using the formula 2^n. What is Pascal's Triangle?
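The theorem just stated is straightforward to verify by brute force for every row up to 100:

```python
import math

def odd_entries(n):
    """Number of odd entries in row n: 2 raised to the count of 1-bits in n."""
    return 2 ** bin(n).count("1")

assert odd_entries(83) == 16    # 83 = 1010011 in binary, four 1's
assert odd_entries(100) == 8    # 100 = 1100100 in binary, three 1's
for n in range(101):            # cross-check against the actual rows
    assert odd_entries(n) == sum(math.comb(n, k) % 2 for k in range(n + 1))
```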
132 0 obj << /Linearized 1 /O 134 /H [ 1002 872 ] /L 312943 /E 71196 /N 13 /T 310184 >> endobj xref 132 28 0000000016 00000 n 0000000911 00000 n 0000001874 00000 n 0000002047 00000 n 0000002189 00000 n 0000017033 00000 n 0000017254 00000 n 0000017568 00000 n 0000018198 00000 n 0000018391 00000 n 0000033744 00000 n 0000033887 00000 n 0000034100 00000 n 0000034329 00000 n 0000034784 00000 n 0000034938 00000 n 0000035379 00000 n 0000035592 00000 n 0000036083 00000 n 0000037071 00000 n 0000052549 00000 n 0000067867 00000 n 0000068079 00000 n 0000068377 00000 n 0000068979 00000 n 0000070889 00000 n 0000001002 00000 n 0000001852 00000 n trailer << /Size 160 /Info 118 0 R /Root 133 0 R /Prev 310173 /ID[] >> startxref 0 %%EOF 133 0 obj << /Type /Catalog /Pages 120 0 R /JT 131 0 R /PageLabels 117 0 R >> endobj 158 0 obj << /S 769 /T 942 /L 999 /Filter /FlateDecode /Length 159 0 R >> stream Enter the number of rows you want to be in Pascal's triangle: 7 1 1 1 1 2 1 1 3 3 1 1 4 6 4 1 1 5 10 10 5 1 1 6 15 20 15 6 1. Pascal's triangle is an arrangement of the binomial coefficients in a triangle. Sum of numbers in a nth row can be determined using the formula 2^n. Examples: Input: N = 3 Output: 1, 3, 3, 1 Explanation: The elements in the 3 rd row are 1 3 3 1. For more ideas, or to check a conjecture, try searching online. why. Pascal’s Triangle: 1 1 1 1 2 1 1 3 3 1 1 4 6 4 1 . 'You people need help': NFL player gets death threats The third row has 3 numbers: 1, 1+1 = 2, 1. When you divide a number by 2, the remainder is 0 or 1. the coefficients for the 1000th row of Pascal's Triangle, the resulting 1000 points would look very much like a normal dis-tribution. If you sum all the numbers in a row, you will get twice the sum of the previous row e.g. Shouldn't this be (-infinity, 1)U(1, infinity). The way the entries are constructed in the table give rise to Pascal's Formula: Theorem 6.6.1 Pascal's Formula top Let n and r be positive integers and suppose r £ n. Then. 
Step by step, the descriptive logic to print Pascal's triangle is: read the number of rows from the user, then for each row from 0 up to that number compute its entries from the previous row and print them.

One interesting fact about Pascal's triangle is that each row reads off a power of 11: the third line (1 2 1) is 11^2 = 121. This works literally up to the fifth line, which is 11^4 = 14641; beyond that you must carry over digits. Because the row at index n encodes 11^n, this identity can help check an algorithm.

The number of odd numbers in the Nth row of Pascal's triangle is equal to 2^k, where k is the number of 1's in the binary form of N. In this case, 100 in binary is 1100100, which has three 1's, so there are 2^3 = 8 odd numbers in the 100th row of Pascal's triangle.
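The binary-count rule can be checked against a brute-force count; a small Python sketch (the helper names are mine):

```python
from math import comb

def odd_entries(n):
    """Count odd entries in row n by direct computation."""
    return sum(comb(n, m) % 2 for m in range(n + 1))

def odd_entries_by_binary(n):
    """Predicted count: 2 raised to the number of 1 bits in n."""
    return 2 ** bin(n).count("1")

# Row 100: 100 = 0b1100100 has three 1 bits, so 2**3 = 8 odd entries.
assert odd_entries(100) == odd_entries_by_binary(100) == 8
```

The same check on row 83 reproduces the 16 odd entries claimed in the theorem above.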
How many entries in the 100th row of Pascal's triangle are divisible by 3? And what is the sum of the 100th row of Pascal's triangle?

For C(n,m) = n!/(m!(n-m)!) to be divisible by 3, the numerator n! must have at least one more factor of three than the denominator m!(n-m)!. Let K(m,j) = the number of times that the factor j appears in the factorization of the integer m. Then, since C(n,m+1) = C(n,m)*(n-m)/(m+1), the factor counts obey the recurrence

k(n,m+1,j) = k(n,m,j) + K(n-m, j) - K(m+1, j), for m = 0, 1, ..., n-1.

If k(n,m,j) > 0 then C(n,m) can be divided by j; if k(n,m,j) = 0 it cannot. Counting the m with k(n,m,j) > 0 gives the number of entries in row n divisible by j; call this N(n,j). Carrying out this count for row 100: N(100,3) = 89, with the "bad" (not divisible) entries at m = 0, 1, 9, 10, 18, 19, 81, 82, 90, 91, 99, 100; N(100,7) = 92, with bad m = 0, 1, 2, 49, 50, 51, 98, 99, 100; and so on.

For reference, here are rows 0 through 10:

    0th row:  1
    1st row:  1 1
    2nd row:  1 2 1
    3rd row:  1 3 3 1
    4th row:  1 4 6 4 1
    5th row:  1 5 10 10 5 1
    6th row:  1 6 15 20 15 6 1
    7th row:  1 7 21 35 35 21 7 1
    8th row:  1 8 28 56 70 56 28 8 1
    9th row:  1 9 36 84 126 126 84 36 9 1
    10th row: 1 10 45 120 210 252 210 120 45 10 1

The nth row of Pascal's triangle contains the coefficients of the expanded polynomial (x+y)^n; for example, row 4 gives (x+y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4. Here n is the row number and k is the term index within that row.
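Those counts can be reproduced by brute force with exact integer arithmetic; a Python sketch (the helper name is mine):

```python
from math import comb

def count_divisible(n, p):
    """Count entries of row n of Pascal's triangle divisible by p."""
    return sum(1 for m in range(n + 1) if comb(n, m) % p == 0)

# Row 100: 89 entries divisible by 3, 92 by 7, 96 by 5.
print(count_divisible(100, 3), count_divisible(100, 7), count_divisible(100, 5))  # 89 92 96
```

Listing the m with `comb(100, m) % 3 != 0` reproduces exactly the twelve "bad" positions quoted above.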
Pascal's triangle is named for Blaise Pascal, a French mathematician, though it was studied centuries before him. There are many wonderful patterns in Pascal's triangle; here I list just a few.

The question: I need to find the number of entries not divisible by a number x in the 100th row of Pascal's triangle. One method is to use Legendre's theorem: the highest power of a prime p which divides n! is [n/p] + [n/p^2] + [n/p^3] + ..., where the brackets denote the integer part. Applying this to the numerator and denominator of C(n,m) = n!/(m!(n-m)!) gives the exact power of p dividing each entry.

If you plotted the coefficients for the 1000th row of Pascal's triangle, the resulting 1000 points would look very much like a normal distribution. The 100th row itself is combin(100,0), combin(100,1), combin(100,2), ..., combin(100,100), where combin(i,j) is the binomial coefficient C(i,j).
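Legendre's theorem translates directly into code; a Python sketch (the function names are mine):

```python
def power_in_factorial(n, p):
    """Legendre's theorem: the exponent of prime p in n! is
    floor(n/p) + floor(n/p**2) + floor(n/p**3) + ..."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

def power_in_binomial(n, m, p):
    """Exponent of prime p in C(n, m) = n! / (m! (n-m)!)."""
    return (power_in_factorial(n, p)
            - power_in_factorial(m, p)
            - power_in_factorial(n - m, p))

print(power_in_binomial(100, 77, 5))  # 2
```

For C(100,77) the exponent of 5 comes out to 2 (24 - 18 - 4), which is the fact used later in the thread that 5^2 divides C(100,77).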
Note: the row index starts from 0. OK, I assume the 100th row is the one that goes 1, 100, 4950, ..., which has 101 entries.

The calculation of k(n,m,j) can be carried out from its recurrence relation without ever calculating C(n,m) itself. K(m,j) can be calculated from K(m,j) = L(m,j) + L(m,j^2) + L(m,j^3) + ..., where L(m,j) = 1 if m is evenly divisible by j and 0 otherwise (the sum stops once the power of j exceeds m). Counting the positive k(n,m,j) values then gives N(n,j), the number of entries in row n divisible by j. My Excel file 'BinomDivide.xls' carries out exactly this calculation.

For the 100th row, the sum of the entries is found to be 2^100, approximately 1.2676506x10^30.
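The row-sum claim for row 100 is easy to confirm with exact arithmetic; a quick Python check:

```python
from math import comb

# Sum the 101 binomial coefficients of row 100 exactly.
row_sum = sum(comb(100, m) for m in range(101))
assert row_sum == 2 ** 100
print(row_sum)  # 1267650600228229401496703205376, about 1.2676506e30
```

Python's arbitrary-precision integers make this exact; no floating-point rounding is involved.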
Consider again Pascal's triangle, in which each number is obtained as the sum of the two neighboring numbers in the preceding row. Since Pascal's triangle is infinite, there's no bottom row. For n = 100 (assumed to be what the asker meant by "100th row"; there are 101 binomial coefficients), the answers are: there are eight odd numbers in the 100th row of Pascal's triangle, 89 numbers that are divisible by 3, and 96 numbers that are divisible by 5.

The entries not divisible by 3 are C(100,n) where n = 0, 1, 9, 10, 18, 19, 81, 82, 90, 91, 99, 100. The entries not divisible by 5 are C(100,n) where n = 0, 25, 50, 75, 100. Counting factors of 5 directly, for example with Legendre's theorem above, shows that 5^2 divides C(100,77).

A related programming exercise: given a non-negative integer k, return the kth row of Pascal's triangle; for example, input k = 3 returns [1, 3, 3, 1] (note: the row index starts from 0). In C++ the required signature is vector<int> Solution::getRow(int k), without writing a main() function. Note: could you optimize your algorithm to use only O(k) extra space?
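One standard way to meet the O(k) extra-space bound is to update a single list in place from right to left; a Python sketch of that approach (not necessarily the original poster's C++ solution):

```python
def get_row(k):
    """Return row k of Pascal's triangle using one list of length k+1.
    Updating from right to left keeps each step's inputs intact."""
    row = [1] * (k + 1)
    for i in range(1, k + 1):
        for j in range(i - 1, 0, -1):
            row[j] += row[j - 1]
    return row

print(get_row(3))  # [1, 3, 3, 1]
```

Sweeping right to left matters: row[j] must be updated before row[j-1] is consumed, otherwise a left-to-right sweep would overwrite values it still needs.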
When all the odd integers in Pascal's triangle are highlighted (black) and the remaining evens are left blank (white), one of many patterns in Pascal's triangle is displayed. More generally, when you divide a number by 2 the remainder is 0 or 1; color the entries in Pascal's triangle according to this remainder and you get a beautiful visual pattern. Take time to explore the creations when hexagons are displayed in different colours according to their remainders modulo other numbers as well.

In much of the Western world the triangle is named after the French mathematician Blaise Pascal, although other mathematicians studied it centuries before him in India, Persia, China, Germany, and Italy.

In this program, we will learn how to print Pascal's triangle using the Python programming language. The helper sketched in the original post, completed so that it runs:

```python
def mk_row(triangle, row_number):
    """Create one row of a Pascal's triangle.

    Parameters:
    triangle   -- list of the rows built so far (row 0 is [1])
    row_number -- index of the row to build
    """
    prev = triangle[row_number - 1]
    return [1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1]

def build_triangle(num):
    """Build rows 0 .. num-1 by appending one new row at a time."""
    triangle = [[1]]
    for row_number in range(1, num):
        row = mk_row(triangle, row_number)
        triangle.append(row)
    return triangle
```

To iterate through rows, run a loop from 0 to num, incrementing by 1 in each iteration; in C the loop structure would look like for (n = 0; n < num; n++). The factor-counting solution described earlier works for any allowable n, m, p.
Take any row, say 1, 4, 6, 4, 1: each number in it is used twice in the calculations for the next row, so each row sums to twice the previous one, which is why the sum of row n is 2^n.

If we know the previous coefficient, the recurrence C(n,m+1) = (n - m)*C(n,m)/(m+1), for m = 0, 1, ..., n-1, generates the whole row starting from C(n,0) = 1, without computing any factorials.

Counting carries gives another shortcut (Kummer's theorem): the exponent of a prime p in C(n,m) equals the number of carries when m and n - m are added in base p. Adding 77 and 23 in base 5 produces 2 carries, so 5^2 divides C(100,77); since the base-2 addition of 77 and 23 also produces at least two carries, C(100,77) is divisible by 20.
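The running-product recurrence C(n,m+1) = (n - m)*C(n,m)/(m+1) generates a row using only integer arithmetic; a Python sketch (the function name is mine):

```python
def row_by_recurrence(n):
    """Generate row n via C(n, m+1) = (n - m) * C(n, m) // (m + 1).
    The division is always exact because each C(n, m+1) is an integer."""
    row = [1]
    for m in range(n):
        row.append(row[-1] * (n - m) // (m + 1))
    return row

print(row_by_recurrence(6))  # [1, 6, 15, 20, 15, 6, 1]
```

Multiplying before dividing keeps every intermediate value an integer, so no factorials and no floating point are needed even for row 100.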
https://cs.stackexchange.com/questions/49185/regular-expression-for-the-given-dfa
# Regular Expression for the given DFA [duplicate]

Hi, what should be the regular expression for this language? My guess was r = (a∗a(a + b)∗(a + b) + (a∗b + c))(a + b∗)∗, but the arrow from C to B is making it tough. If it went from B to C instead, then my answer would have been correct.

• Don't guess. Make a conjecture and prove it. – Yuval Filmus Nov 7 '15 at 12:45
• Once you get to the non-final state $B$ you will never leave, so think about whether you can eliminate $B$ from your automaton. – Rick Decker Nov 7 '15 at 18:16

Assuming that the alphabet of your DFA is $\Sigma=\{a,b,c\}$, it doesn't meet the standard definition of the transition function, which requires a transition for every state-symbol pair:

$M = \{Q,\Sigma,\delta,q_0,F\}$

$\delta: Q\times \Sigma \rightarrow Q$
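Rick Decker's point about a non-accepting absorbing state can be illustrated in code. The original figure is not reproduced here, so the transition table below is a made-up example of a DFA with a trap state B, not the automaton from the question:

```python
def accepts(delta, start, finals, word):
    """Simulate a DFA given as a dict {(state, symbol): state}.
    A missing transition is treated as immediate rejection."""
    state = start
    for ch in word:
        if (state, ch) not in delta:
            return False
        state = delta[(state, ch)]
    return state in finals

# Hypothetical DFA: B is a non-accepting trap state.
delta = {
    ("A", "a"): "C", ("A", "b"): "B",
    ("B", "a"): "B", ("B", "b"): "B",
    ("C", "a"): "C", ("C", "b"): "A",
}
# Removing B and every arrow into it leaves the language unchanged,
# because no accepting state is reachable from B.
delta_no_B = {k: v for k, v in delta.items() if v != "B" and k[0] != "B"}

words = ["", "a", "ab", "ba", "aab", "abb", "baa"]
assert all(accepts(delta, "A", {"C"}, w) == accepts(delta_no_B, "A", {"C"}, w)
           for w in words)
```

This is why a trap state can be eliminated before converting a DFA to a regular expression: every path through it is a dead end.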
https://www.physicsoverflow.org/user/lopo/history
# Recent history for lopo

3 weeks ago: question commented on "Is there a standard way to translate an Hamiltonian into QC circuit?"
3 weeks ago: received upvote on question "Is there a standard way to translate an Hamiltonian into QC circuit?"
10 months ago: posted a question "Phase separation"
1 year ago: edited a question "How to construct the tensor network fo..."
1 year ago: edited a question "How to construct the tensor network fo..."
1 year ago: received upvote on question "How to construct the tensor network for large classical off-lattice problem?"
1 year ago: received upvote on question "How to construct the tensor network for large classical off-lattice problem?"
1 year ago: posted a question "How to construct the tensor network fo..."
1 year ago: question answered "How to define the diffusion equation in spacetime?"
1 year ago: received upvote on question "How to define the diffusion equation in spacetime?"
1 year ago: posted a comment "How to define the diffusion equation in spacetime?"
1 year ago: question answered "How to define the diffusion equation in spacetime?"
1 year ago: edited a question "How to define the diffusion equation i..."
1 year ago: posted a question "How to define diffusion in spacetime?"
1 year ago: edited a question "Is there a standard way to translate a..."
2 years ago: edited a comment "Is my simple model for fermi liquid that forms cooper pairs correct?"
2 years ago: received upvote on question "Is my simple model for fermi liquid that forms cooper pairs correct?"
2 years ago: question got unvoted "Is my simple model for fermi liquid that forms cooper pairs correct?"
2 years ago: received downvote on question "Is my simple model for fermi liquid that forms cooper pairs correct?"
2 years ago: received upvote on question "Is my simple model for fermi liquid that forms cooper pairs correct?"
https://www.hepdata.net/search/?author=Makino,%20Y.
Showing 10 of 3351 results Search for Higgs boson decays into a pair of pseudoscalar particles in the $bb\mu\mu$ final state with the ATLAS detector in $pp$ collisions at $\sqrt{s}=13$ TeV The collaboration Aad, Georges ; Abbott, Braden Keim ; Abbott, Dale ; et al. CERN-EP-2021-157, 2021. Inspire Record 1937344 This paper presents a search for decays of the Higgs boson with a mass of 125 GeV into a pair of new pseudoscalar particles, $H\rightarrow aa$, where one $a$-boson decays into a $b$-quark pair and the other into a muon pair. The search uses 139 fb$^{-1}$ of proton-proton collision data at a center-of-mass energy of $\sqrt{s}=13$ TeV recorded between 2015 and 2018 by the ATLAS experiment at the LHC. A narrow dimuon resonance is searched for in the invariant mass spectrum between 16 GeV and 62 GeV. The largest excess of events above the Standard Model backgrounds is observed at a dimuon invariant mass of 52 GeV and corresponds to a local (global) significance of $3.3 \sigma$ ($1.7 \sigma$). Upper limits at 95% confidence level are placed on the branching ratio of the Higgs boson to the $bb\mu\mu$ final state, $\mathcal{B}(H\rightarrow aa\rightarrow bb\mu\mu)$, and are in the range $\text{(0.2-4.0)} \times 10^{-4}$, depending on the signal mass hypothesis. 11 data tables Post-fit number of background events in all SR bins (after applying the BDT cuts) that are tested for the presence of signal. The bins are 2 GeV (3 GeV) wide in mmumu for ma ≤ 45 GeV (ma > 45 GeV). Events in neighbouring bins partially overlap. Discontinuities in the background predictions appear when the BDT discriminant used for the selection changes from the one trained in the lower mass range to the one trained in the higher mass range. Post-fit number of background events in all SR bins without applying the BDT cuts that are tested for the presence of signal. The bins are 2 GeV (3 GeV) wide in mµµ for $m_a$ ≤ 45 GeV ($m_a$ > 45 GeV). Events in neighbouring bins partially overlap. 
Discontinuities in the background predictions appear when the BDT discriminant used for the selection changes from the one trained in the lower mass range to the one trained in the higher mass range. Probability that the observed spectrum is compatible with the background-only hypothesis. The local $p_0$-values are quantified in standard deviations $\sigma$. More… Search for exotic decays of the Higgs boson into $b\bar{b}$ and missing transverse momentum in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector The collaboration Aad, Georges ; Abbott, Braden Keim ; Abbott, Dale ; et al. CERN-EP-2021-098, 2021. Inspire Record 1917172 A search for the exotic decay of the Higgs boson ($H$) into a $b\bar{b}$ resonance plus missing transverse momentum is described. The search is performed with the ATLAS detector at the Large Hadron Collider using 139 $\mathrm{fb}^{-1}$ of $pp$ collisions at $\sqrt{s} = 13$ TeV. The search targets events from $ZH$ production in an NMSSM scenario where $H \rightarrow \tilde{\chi}^{0}_{2}\tilde{\chi}^{0}_{1}$, with $\tilde{\chi}^{0}_{2} \rightarrow {a} \tilde{\chi}^{0}_{1}$, where $a$ is a light pseudoscalar Higgs boson and $\tilde{\chi}^{0}_{1,2}$ are the two lightest neutralinos. The decay of the $a$ boson into a pair of $b$-quarks results in a peak in the dijet invariant mass distribution. The final-state signature consists of two leptons, two or more jets, at least one of which is identified as originating from a $b$-quark, and missing transverse momentum. Observations are consistent with Standard Model expectations and upper limits are set on the product of cross section times branching ratio for a three-dimensional scan of the masses of the $\tilde{\chi}^{0}_{2}$, $\tilde{\chi}^{0}_{1}$ and $a$ boson. 20 data tables Distribution of the dijet invariant mass in CRZ. The Z+HF and $t\bar{t}$ scale factors, described in the text, have been applied to the simulated samples. 
The distribution labeled "Signal" is for the model with ($m_a$, $m_{\tilde{\chi}_{1}^{0}}$, $m_{\tilde{\chi}_{2}^{0}}$) = (45 GeV, 10 GeV, 80 GeV). Distribution of the missing transverse energy in VRMET. The Z+HF and $t\bar{t}$ scale factors, described in the text, have been applied to the simulated samples. The distribution labeled "Signal" is for the model with ($m_a$, $m_{\tilde{\chi}_{1}^{0}}$, $m_{\tilde{\chi}_{2}^{0}}$) = (45 GeV, 10 GeV, 80 GeV). Distribution of the dijet invariant mass in CRTop. The Z+HF and $t\bar{t}$ scale factors, described in the text, have been applied to the simulated samples. The distribution labeled "Signal" is for the model with ($m_a$, $m_{\tilde{\chi}_{1}^{0}}$, $m_{\tilde{\chi}_{2}^{0}}$) = (45 GeV, 10 GeV, 80 GeV). More… Observation of electroweak production of two jets in association with an isolated photon and missing transverse momentum, and search for a Higgs boson decaying into invisible particles at 13 TeV with the ATLAS detector The collaboration Aad, Georges ; Abbott, Braden Keim ; Abbott, Dale ; et al. 2021. Inspire Record 1915357 This paper presents the measurement of the electroweak production of two jets in association with a $Z\gamma$ pair with the $Z$ boson decaying into two neutrinos. It also presents the search for invisible or partially invisible decays of a Higgs boson with a mass of 125 GeV produced through vector-boson fusion with a photon in the final state. These results use data from LHC proton-proton collisions at $\sqrt{s}$ = 13 TeV collected with the ATLAS detector corresponding to an integrated luminosity of 139 fb$^{-1}$. The event signature, shared by all benchmark processes considered for measurements and searches, is characterized by a significant amount of unbalanced transverse momentum and a photon in the final state, in addition to a pair of forward jets. 
For electroweak production of $Z\gamma$ in association with two jets, the background-only hypothesis is rejected with an observed (expected) significance of 5.2 (5.1) standard deviations. The measured fiducial cross-section for this process is 1.31$\pm$0.29 fb. An observed (expected) upper limit of 0.37 (0.34) at 95% confidence level is set on the branching ratio of a 125 GeV Higgs boson to invisible particles, assuming the Standard Model production cross-section. The signature is also interpreted in the context of decays of a Higgs boson into a photon and a dark photon. An observed (expected) 95% CL upper limit on the branching ratio for this decay is set at 0.018 (0.017), assuming the 125 GeV Standard Model Higgs boson production cross-section.

16 data tables

Post-fit results for all $m_\text{jj}$ SR and CR bins in the EW $Z \gamma + \text{jets}$ cross-section measurement with the $\mu_{Z \gamma_\text{EW}}$ signal normalization floating. The post-fit uncertainties include statistical, experimental, and theoretical contributions.

Post-fit results for all DNN SR and CR bins in the search for $H \to \text{inv.}$ with the $\mathcal{B}_\text{inv}$ signal normalization set to zero. For the $Z_\text{Rev.Cen.}^\gamma$ CR, the third bin contains all events with DNN output score values of 0.6-1.0. The $H \to \text{inv.}$ signal is scaled to a $\mathcal{B}_\text{inv}$ of 37%. The post-fit uncertainties include statistical, experimental, and theoretical contributions.

Post-fit results for the ten [$m_\text{jj}$, $m_\text{T}$] bins constituting the SR and CRs defined for the dark photon search with the $\mathcal{B}(H \to \gamma \gamma_\text{d})$ signal normalization set to zero. A $H \to \gamma \gamma_\text{d}$ signal is shown for two different mass hypotheses (125 GeV, 500 GeV), scaled to branching ratios of 2% and 1%, respectively. The post-fit uncertainties include statistical, experimental, and theoretical contributions.
Measurement of the nuclear modification factor for muons from charm and bottom hadrons in Pb+Pb collisions at 5.02 TeV with the ATLAS detector

The ATLAS collaboration: Aad, Georges; Abbott, Braden Keim; Abbott, Dale; et al. CERN-EP-2021-153, 2021. Inspire Record 1914582

Heavy-flavour hadron production provides information about the transport properties and microscopic structure of the quark-gluon plasma created in ultra-relativistic heavy-ion collisions. A measurement of the muons from semileptonic decays of charm and bottom hadrons produced in Pb+Pb and $pp$ collisions at a nucleon-nucleon centre-of-mass energy of 5.02 TeV with the ATLAS detector at the Large Hadron Collider is presented. The Pb+Pb data were collected in 2015 and 2018 with sampled integrated luminosities of $208~\mathrm{\mu b}^{-1}$ and $38~\mathrm{\mu b}^{-1}$, respectively, and $pp$ data with a sampled integrated luminosity of $1.17~\mathrm{pb}^{-1}$ were collected in 2017. Muons from heavy-flavour semileptonic decays are separated from the light-flavour hadronic background using the momentum imbalance between the inner detector and muon spectrometer measurements, and muons originating from charm and bottom decays are further separated via the muon track's transverse impact parameter. Differential yields in Pb+Pb collisions and differential cross sections in $pp$ collisions for such muons are measured as a function of muon transverse momentum from 4 GeV to 30 GeV in the absolute pseudorapidity interval $|\eta| < 2$. Nuclear modification factors for charm and bottom muons are presented as a function of muon transverse momentum in intervals of Pb+Pb collision centrality. The measured nuclear modification factors quantify a significant suppression of the yields of muons from decays of charm and bottom hadrons, with stronger effects for muons from charm hadron decays.

6 data tables

Summary of the charm muon double-differential cross section in pp collisions at 5.02 TeV as a function of pT.
Uncertainties are statistical and systematic, respectively.

Summary of charm muon per-event invariant yields in Pb+Pb collisions at 5.02 TeV as a function of pT for five different centrality intervals. Uncertainties are statistical and systematic, respectively.

Summary of bottom muon per-event invariant yields in Pb+Pb collisions at 5.02 TeV as a function of pT for five different centrality intervals. Uncertainties are statistical and systematic, respectively.

Measurement of $b$-quark fragmentation properties in jets using the decay $B^{\pm} \to J/\psi K^{\pm}$ in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector

The ATLAS collaboration: Aad, Georges; Abbott, Braden Keim; Abbott, Dale; et al. CERN-EP-2021-123, 2021. Inspire Record 1913061

The fragmentation properties of jets containing $b$-hadrons are studied using charged $B$ mesons in 139 fb$^{-1}$ of $pp$ collisions at $\sqrt{s} = 13$ TeV, recorded with the ATLAS detector at the LHC during the period from 2015 to 2018. The $B$ mesons are reconstructed using the decay of $B^{\pm}$ into $J/\psi K^{\pm}$, with the $J/\psi$ decaying into a pair of muons. Jets are reconstructed using the anti-$k_t$ algorithm with radius parameter $R=0.4$. The measurement determines the longitudinal and transverse momentum profiles of the reconstructed $B$ hadrons with respect to the axes of the jets to which they are geometrically associated. These distributions are measured in intervals of the jet transverse momentum, ranging from 50 GeV to above 100 GeV. The results are corrected for detector effects and compared with several Monte Carlo predictions using different parton shower and hadronisation models. The results for the longitudinal and transverse profiles provide useful inputs to improve the description of heavy-flavour fragmentation in jets.

8 data tables

Longitudinal profile for 50 GeV < pT < 70 GeV.

Transverse profile for 50 GeV < pT < 70 GeV.

Longitudinal profile for 70 GeV < pT < 100 GeV.
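The longitudinal and transverse profiles listed above can be written down explicitly in one common convention (an assumption here; the record itself does not spell out the definitions), using the $B$-hadron momentum $\vec{p}_{B}$ and the axis of its associated jet $\vec{p}_{\mathrm{jet}}$:

```latex
% Longitudinal momentum fraction and relative transverse momentum of the
% B hadron with respect to the associated jet axis.
% (Common convention, assumed; not quoted from the record.)
z = \frac{\vec{p}_{B}\cdot\vec{p}_{\mathrm{jet}}}{\lvert\vec{p}_{\mathrm{jet}}\rvert^{2}},
\qquad
p_{\mathrm{T}}^{\mathrm{rel}} =
\frac{\lvert\vec{p}_{B}\times\vec{p}_{\mathrm{jet}}\rvert}{\lvert\vec{p}_{\mathrm{jet}}\rvert}.
```

With these definitions, $z$ measures how much of the jet momentum the $B$ hadron carries along the jet axis, and $p_{\mathrm{T}}^{\mathrm{rel}}$ its momentum component perpendicular to that axis.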
Version 2

Measurement of the top quark mass using events with a single reconstructed top quark in pp collisions at $\sqrt{s}$ = 13 TeV

The CMS collaboration: Tumasyan, Armen; Adam, Wolfgang; Andrejkovic, Janik Walter; et al. CMS-TOP-19-009, 2021. Inspire Record 1911567

A measurement of the top quark mass is performed using a data sample enriched with single top quark events produced in the $t$ channel. The study is based on proton-proton collision data, corresponding to an integrated luminosity of 35.9 fb$^{-1}$, recorded at $\sqrt{s}$ = 13 TeV by the CMS experiment at the LHC in 2016. Candidate events are selected by requiring an isolated high-momentum lepton (muon or electron) and exactly two jets, of which one is identified as originating from a bottom quark. Multivariate discriminants are designed to separate the signal from the background. Optimized thresholds are placed on the discriminant outputs to obtain an event sample with high signal purity. The top quark mass is found to be 172.13$^{+0.76}_{-0.77}$ GeV, where the uncertainty includes both the statistical and systematic components, reaching sub-GeV precision for the first time in this event topology. The masses of the top quark and antiquark are also determined separately using the lepton charge in the final state, from which the mass ratio and difference are determined to be 0.9952$^{+0.0079}_{-0.0104}$ and 0.83$^{+1.79}_{-1.35}$ GeV, respectively. The results are consistent with $CPT$ invariance.

19 data tables

Top quark mass measured inclusive of lepton flavor and charge. The uncertainties are given in two parts: the first is the combination of statistical (stat) and profiled (prof) uncertainties, and the second is the experimental (ext) uncertainties.

Top quark mass measured inclusive of lepton flavor and for positively charged leptons.

Top quark mass measured inclusive of lepton flavor and for negatively charged leptons.
Search for heavy particles in the $b$-tagged dijet mass distribution with additional $b$-tagged jets in proton-proton collisions at $\sqrt{s} = 13$ TeV with the ATLAS experiment

The ATLAS collaboration: Aad, Georges; Abbott, Braden Keim; Abbott, Dale; et al. CERN-EP-2021-119, 2021. Inspire Record 1909506

A search optimized for new heavy particles decaying to two $b$-quarks and produced in association with additional $b$-quarks is reported. The sensitivity is improved by $b$-tagging at least one lower-$p_\text{T}$ jet in addition to the two highest-$p_\text{T}$ jets. The data used in this search correspond to an integrated luminosity of 103 $\text{fb}^{-1}$ collected with a dedicated trijet trigger during the 2017 and 2018 $\sqrt{s} = 13$ TeV proton-proton collision runs with the ATLAS detector at the LHC. The search looks for resonant peaks in the $b$-tagged dijet invariant mass spectrum over a smoothly falling background. The background is estimated with an innovative data-driven method based on orthonormal functions. The observed $b$-tagged dijet invariant mass spectrum is compatible with the background-only hypothesis. Upper limits at 95% confidence level on a heavy vector-boson production cross section times branching ratio to a pair of $b$-quarks are derived.

4 data tables

Background estimate from the FD method with N=3 and data in the SR.

The observed (solid) and expected (dashed) 95% CL upper limits on the production of $Z' \to b\bar{b}$ in association with b-quarks.

Acceptance and acceptance times efficiency for the LUV Z' model.

Search for charginos and neutralinos in final states with two boosted hadronically decaying bosons and missing transverse momentum in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector

The ATLAS collaboration: Aad, Georges; Abbott, Braden Keim; Abbott, Dale; et al. CERN-EP-2021-127, 2021.
Inspire Record 1906174

A search for charginos and neutralinos at the Large Hadron Collider is reported using fully hadronic final states and missing transverse momentum. Pair-produced charginos or neutralinos are explored, each decaying into a high-$p_{\text{T}}$ Standard Model weak boson. Fully hadronic final states are studied to exploit the advantage of the large branching ratio and the efficient background rejection achieved by identifying the high-$p_{\text{T}}$ bosons using large-radius jets and jet substructure information. An integrated luminosity of 139 fb$^{-1}$ of proton-proton collision data collected by the ATLAS detector at a center-of-mass energy of 13 TeV is used. No significant excess is found beyond the Standard Model expectation. Exclusion limits at the 95% confidence level are set on wino or higgsino production under varying assumptions on the decay branching ratios and on the type of the lightest supersymmetric particle. A wino (higgsino) mass up to 1060 (900) GeV is excluded when the lightest SUSY particle mass is below 400 (240) GeV and the mass splitting is larger than 400 (450) GeV. The sensitivity to high-mass winos and higgsinos is significantly extended compared with previous LHC searches using other final states.
145 data tables - - - - - - - - Overview of HEPData Record - - - - - - - - <br/><br/> <b>Cutflow:</b> <a href="104458?version=1&table=Cut flows for the representative signals">table</a><br/><br/> <b>Boson tagging:</b> <ul> <li><a href="104458?version=1&table=%24W%2FZ%5Crightarrow%20qq%24%20tagging%20efficiency">$W/Z\rightarrow qq$ tagging efficiency</a> <li><a href="104458?version=1&table=%24W%2FZ%5Crightarrow%20qq%24%20tagging%20rejection">$W/Z\rightarrow qq$ tagging rejection</a> <li><a href="104458?version=1&table=%24Z%2Fh%20%5Crightarrow%20bb%24%20tagging%20efficiency">$Z/h\rightarrow bb$ tagging efficiency</a> <li><a href="104458?version=1&table=%24Z%2Fh%20%5Crightarrow%20bb%24%20tagging%20rejection">$Z/h\rightarrow bb$ tagging rejection</a> <li><a href="104458?version=1&table=%24W%5Crightarrow%20qq%24%20tagging%20efficiency%20(vs%20official%20WP)">$W\rightarrow qq$ tagging efficiency (vs official WP)</a> <li><a href="104458?version=1&table=%24W%5Crightarrow%20qq%24%20tagging%20rejection%20(vs%20official%20WP)">$W\rightarrow qq$ tagging rejection (vs official WP)</a> <li><a href="104458?version=1&table=%24Z%5Crightarrow%20qq%24%20tagging%20efficiency%20(vs%20official%20WP)">$Z\rightarrow qq$ tagging efficiency (vs official WP)</a> <li><a href="104458?version=1&table=%24Z%5Crightarrow%20qq%24%20tagging%20rejection%20(vs%20official%20WP)">$Z\rightarrow qq$ tagging rejection (vs official WP)</a> </ul> <b>Systematic uncertainty:</b> <a href="104458?version=1&table=Total%20systematic%20uncertainties">table</a><br/><br/> <b>Summary of SR yields:</b> <a href="104458?version=1&table=Data%20yields%20and%20background%20expectation%20in%20the%20SRs">table</a><br/><br/> <b>Expected background yields and the breakdown:</b> <ul> <li><a href="104458?version=1&table=Data%20yields%20and%20background%20breakdown%20in%20SR">CR0L / SR</a> <li><a href="104458?version=1&table=Data%20yields%20and%20background%20breakdown%20in%20CR%2FVR%201L(1Y)">CR1L / VR1L /CR1Y / VR1Y</a> </ul> 
<b>SR distributions:</b> <ul> <li><a href="104458?version=1&table=Effective mass distribution in SR-4Q-VV">SR-4Q-VV: Effective mass</a> <li><a href="104458?version=1&table=Leading large-$R$ jet mass distribution in SR-4Q-VV">SR-4Q-VV: Leading jet mass</a> <li><a href="104458?version=1&table=Leading large-$R$ jet $D_{2}$ distribution in SR-4Q-VV">SR-4Q-VV: Leading jet $D_{2}$</a> <li><a href="104458?version=1&table=Sub-leading large-$R$ jet mass distribution in SR-4Q-VV">SR-4Q-VV: Sub-leading jet mass</a> <li><a href="104458?version=1&table=Sub-leading large-$R$ jet $D_{2}$ distribution in SR-4Q-VV">SR-4Q-VV: Sub-leading jet $D_{2}$</a> <li><a href="104458?version=1&table=$m_{T2}$ distribution in SR-2B2Q-VZ">SR-2B2Q-VZ: $m_{\textrm{T2}}$</a> <li><a href="104458?version=1&table=bb-tagged jet mass distribution in SR-2B2Q-VZ">SR-2B2Q-VZ: bb-tagged jet mass</a> <li><a href="104458?version=1&table=Effective mass distribution in SR-2B2Q-VZ">SR-2B2Q-VZ: Effective mass</a> <li><a href="104458?version=1&table=$m_{T2}$ distribution in SR-2B2Q-Vh">SR-2B2Q-Vh: $m_{\textrm{T2}}$</a> <li><a href="104458?version=1&table=bb-tagged jet mass distribution in SR-2B2Q-Vh">SR-2B2Q-Vh: bb-tagged jet mass</a> <li><a href="104458?version=1&table=Effective mass distribution in SR-2B2Q-Vh">SR-2B2Q-Vh: Effective mass</a> </ul> <b>Exclusion limit:</b> <ul> <li>$(\tilde{W},~\tilde{B})$-SIM model (C1C1-WW): <ul> <li><a href="104458?version=1&table=Exp limit on (W~, B~) simplified model (C1C1-WW)">Expected limit</a> <li><a href="104458?version=1&table=Exp%20limit%20(%2B1sig)%20on%20(W~, B~) simplified model (C1C1-WW)">Expected limit ($+1\sigma_{\textrm{exp}}$)</a> <li>Expected limit ($-1\sigma_{\textrm{exp}}$): (No mass point could be excluded) <li><a href="104458?version=1&table=Obs limit on (W~, B~) simplified model (C1C1-WW)">Observed limit</a> <li><a href="104458?version=1&table=Obs%20limit%20(%2B1sig)%20on%20(W~, B~) simplified model (C1C1-WW)">Observed limit 
($+1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> <li><a href="104458?version=1&table=Obs%20limit%20(-1sig)%20on%20(W~, B~) simplified model (C1C1-WW)">Observed limit ($-1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> </ul> <li>$(\tilde{W},~\tilde{B})$-SIM model (C1N2-WZ): <ul> <li><a href="104458?version=1&table=Exp limit on (W~, B~) simplified model (C1N2-WZ)">Expected limit</a> <li><a href="104458?version=1&table=Exp%20limit%20(%2B1sig)%20on%20(W~, B~) simplified model (C1N2-WZ)">Expected limit ($+1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Exp%20limit%20(-1sig)%20on%20(W~, B~) simplified model (C1N2-WZ)">Expected limit ($-1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Obs limit on (W~, B~) simplified model (C1N2-WZ)">Observed limit</a> <li><a href="104458?version=1&table=Obs%20limit%20(%2B1sig)%20on%20(W~, B~) simplified model (C1N2-WZ)">Observed limit ($+1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> <li><a href="104458?version=1&table=Obs%20limit%20(-1sig)%20on%20(W~, B~) simplified model (C1N2-WZ)">Observed limit ($-1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> </ul> <li>$(\tilde{W},~\tilde{B})$-SIM model (C1N2-Wh): <ul> <li><a href="104458?version=1&table=Exp limit on (W~, B~) simplified model (C1N2-Wh)">Expected limit</a> <li><a href="104458?version=1&table=Exp%20limit%20(%2B1sig)%20on%20(W~, B~) simplified model (C1N2-Wh)">Expected limit ($+1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Exp%20limit%20(-1sig)%20on%20(W~, B~) simplified model (C1N2-Wh)">Expected limit ($-1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Obs limit on (W~, B~) simplified model (C1N2-Wh)">Observed limit</a> <li><a href="104458?version=1&table=Obs%20limit%20(%2B1sig)%20on%20(W~, B~) simplified model (C1N2-Wh)">Observed limit ($+1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> <li><a href="104458?version=1&table=Obs%20limit%20(-1sig)%20on%20(W~, B~) simplified model (C1N2-Wh)">Observed limit 
($-1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> </ul> <li>$(\tilde{W},~\tilde{B})$ model ($\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow Z\tilde{\chi}_{1}^{0})=0\%$): <ul> <li><a href="104458?version=1&table=Exp limit on (W~, B~) B(N2->ZN1) = 0%">Expected limit</a> <li><a href="104458?version=1&table=Obs limit on (W~, B~) B(N2->ZN1) = 0%">Observed limit</a> </ul> <li>$(\tilde{W},~\tilde{B})$ model ($\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow Z\tilde{\chi}_{1}^{0})=25\%$): <ul> <li><a href="104458?version=1&table=Exp limit on (W~, B~) B(N2->ZN1) = 25%">Expected limit</a> <li><a href="104458?version=1&table=Obs limit on (W~, B~) B(N2->ZN1) = 25%">Observed limit</a> </ul> <li>$(\tilde{W},~\tilde{B})$ model ($\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow Z\tilde{\chi}_{1}^{0})=50\%$): <ul> <li><a href="104458?version=1&table=Exp limit on (W~, B~) B(N2->ZN1) = 50%">Expected limit</a> <li><a href="104458?version=1&table=Exp%20limit%20(%2B1sig)%20on%20(W~%2C%20B~)%20B(N2-%3EZN1)%20%3D%2050%25">Expected limit ($+1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Exp%20limit%20(-1sig)%20on%20(W~%2C%20B~)%20B(N2-%3EZN1)%20%3D%2050%25">Expected limit ($-1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Obs limit on (W~, B~) B(N2->ZN1) = 50%">Observed limit</a> <li><a href="104458?version=1&table=Obs%20limit%20(%2B1sig)%20on%20(W~%2C%20B~)%20B(N2-%3EZN1)%20%3D%2050%">Observed limit ($+1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> <li><a href="104458?version=1&table=Obs%20limit%20(-1sig)%20on%20(W~%2C%20B~)%20B(N2-%3EZN1)%20%3D%2050%25">Observed limit ($-1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> </ul> <li>$(\tilde{W},~\tilde{B})$ model ($\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow Z\tilde{\chi}_{1}^{0})=75\%$): <ul> <li><a href="104458?version=1&table=Exp limit on (W~, B~) B(N2->ZN1) = 75%">Expected limit</a> <li><a href="104458?version=1&table=Obs limit on (W~, B~) B(N2->ZN1) = 75%">Observed limit</a> </ul> <li>$(\tilde{W},~\tilde{B})$ model 
($\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow Z\tilde{\chi}_{1}^{0})=100\%$): <ul> <li><a href="104458?version=1&table=Exp limit on (W~, B~) B(N2->ZN1) = 100%">Expected limit</a> <li><a href="104458?version=1&table=Obs limit on (W~, B~) B(N2->ZN1) = 100%">Observed limit</a> </ul> <li>$(\tilde{H},~\tilde{B})$ model ($\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow Z\tilde{\chi}_{1}^{0})=50\%$): <ul> <li><a href="104458?version=1&table=Exp limit on (H~, B~) B(N2->ZN1) = 50%">Expected limit</a> <li><a href="104458?version=1&table=Exp%20limit%20(%2B1sig)%20on%20(H~%2C%20B~)%20B(N2-%3EZN1)%20%3D%2050%25">Expected limit ($+1\sigma_{\textrm{exp}}$)</a> <li>Expected limit ($-1\sigma_{\textrm{exp}}$): (No mass point could be excluded) <li><a href="104458?version=1&table=Obs limit on (H~, B~) B(N2->ZN1) = 50%">Observed limit</a> <li><a href="104458?version=1&table=Obs%20limit%20(%2B1sig)%20on%20(H~%2C%20B~)%20B(N2-%3EZN1)%20%3D%2050%">Observed limit ($+1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> <li><a href="104458?version=1&table=Obs%20limit%20(-1sig)%20on%20(H~%2C%20B~)%20B(N2-%3EZN1)%20%3D%2050%25">Observed limit ($-1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> </ul> <li>$(\tilde{W},~\tilde{H})$ model ($\textrm{tan}\beta=10,~\mu>0$): <ul> <li><a href="104458?version=1&table=Exp limit on (W~, H~), tanb = 10, mu>0">Expected limit</a> <li><a href="104458?version=1&table=Exp%20limit%20(%2B1sig)%20on%20(W~%2C%20H~)%2C%20tanb%20%3D%2010%2C%20mu%3E0">Expected limit ($+1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Exp%20limit%20(-1sig)%20on%20(W~%2C%20H~)%2C%20tanb%20%3D%2010%2C%20mu%3E0">Expected limit ($-1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Obs limit on (W~, H~), tanb = 10, mu>0">Observed limit</a> <li><a href="104458?version=1&table=Obs%20limit%20(%2B1sig)%20on%20(W~%2C%20H~)%2C%20tanb%20%3D%2010%2C%20mu%3E0">Observed limit ($+1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> <li><a 
href="104458?version=1&table=Obs%20limit%20(-1sig)%20on%20(W~%2C%20H~)%2C%20tanb%20%3D%2010%2C%20mu%3E0">Observed limit ($-1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> </ul> <li>$(\tilde{H},~\tilde{W})$ model ($\textrm{tan}\beta=10,~\mu>0$): <ul> <li><a href="104458?version=1&table=Exp limit on (H~, W~), tanb = 10, mu>0">Expected limit</a> <li><a href="104458?version=1&table=Exp%20limit%20(%2B1sig)%20on%20(H~%2C%20W~)%2C%20tanb%20%3D%2010%2C%20mu%3E0">Expected limit ($+1\sigma_{\textrm{exp}}$)</a> <li>Expected limit ($-1\sigma_{\textrm{exp}}$): (No mass point could be excluded) <li><a href="104458?version=1&table=Obs limit on (H~, W~), tanb = 10, mu>0">Observed limit</a> <li><a href="104458?version=1&table=Obs%20limit%20(%2B1sig)%20on%20(H~%2C%20W~)%2C%20tanb%20%3D%2010%2C%20mu%3E0">Observed limit ($+1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> <li><a href="104458?version=1&table=Obs%20limit%20(-1sig)%20on%20(H~%2C%20W~)%2C%20tanb%20%3D%2010%2C%20mu%3E0">Observed limit ($-1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> </ul> <li>$(\tilde{W},~\tilde{H})$ model ($\textrm{tan}\beta=10$) on ($\mu$,$M_{2}$) plane: <ul> <li><a href="104458?version=1&table=Exp limit on (W~, H~), tanb = 10, M2 vs mu">Expected limit</a> <li><a href="104458?version=1&table=Exp%20limit%20(%2B1sig)%20on%20(W~%2C%20H~)%2C%20tanb%20%3D%2010%2C%20M2%20vs%20mu">Expected limit ($+1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Exp%20limit%20(-1sig)%20on%20(W~%2C%20H~)%2C%20tanb%20%3D%2010%2C%20M2%20vs%20mu">Expected limit ($-1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Obs limit on (W~, H~), tanb = 10, M2 vs mu">Observed limit</a> <li><a href="104458?version=1&table=Obs%20limit%20(%2B1sig)%20on%20(W~%2C%20H~)%2C%20tanb%20%3D%2010%2C%20M2%20vs%20mu">Observed limit ($+1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> <li><a href="104458?version=1&table=Obs%20limit%20(-1sig)%20on%20(W~%2C%20H~)%2C%20tanb%20%3D%2010%2C%20M2%20vs%20mu">Observed limit 
($-1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> </ul> <li>$(\tilde{H},~\tilde{W})$ model ($\textrm{tan}\beta=10$) on ($\mu$,$M_{2}$) plane: <ul> <li><a href="104458?version=1&table=Exp limit on (H~, W~), tanb = 10, M2 vs mu">Expected limit</a> <li><a href="104458?version=1&table=Exp%20limit%20(%2B1sig)%20on%20(H~%2C%20W~)%2C%20tanb%20%3D%2010%2C%20M2%20vs%20mu">Expected limit ($+1\sigma_{\textrm{exp}}$)</a> <li>Expected limit ($-1\sigma_{\textrm{exp}}$): (No mass point could be excluded) <li><a href="104458?version=1&table=Obs limit on (H~, W~), tanb = 10, M2 vs mu">Observed limit</a> <li><a href="104458?version=1&table=Obs%20limit%20(%2B1sig)%20on%20(H~%2C%20W~)%2C%20tanb%20%3D%2010%2C%20M2%20vs%20mu">Observed limit ($+1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> <li><a href="104458?version=1&table=Obs%20limit%20(-1sig)%20on%20(H~%2C%20W~)%2C%20tanb%20%3D%2010%2C%20M2%20vs%20mu">Observed limit ($-1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> </ul> <li>$(\tilde{H},~\tilde{G})$ model: <ul> <li><a href="104458?version=1&table=Exp limit on (H~, G~)">Expected limit</a> <li><a href="104458?version=1&table=Exp%20limit%20(%2B1sig)%20on%20(H~%2C%20G~)">Expected limit ($+1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Exp%20limit%20(-1sig)%20on%20(H~%2C%20G~)">Expected limit ($-1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Obs limit on (H~, G~)">Observed limit</a> <li><a href="104458?version=1&table=Obs%20limit%20(%2B1sig)%20on%20(H~%2C%20G~)">Observed limit ($+1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> <li><a href="104458?version=1&table=Obs%20limit%20(-1sig)%20on%20(H~%2C%20G~)">Observed limit ($-1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> </ul> <li>$(\tilde{H},~\tilde{a})$ model ($\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow Z\tilde{a})=100\%$): <ul> <li><a href="104458?version=1&table=Exp limit on (H~, a~) B(N1->Za~) = 100%">Expected limit</a> <li><a 
href="104458?version=1&table=Exp%20limit%20(%2B1sig)%20on%20(H~%2C%20a~)%20B(N1-%3EZa~)%20%3D%20100%25">Expected limit ($+1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Exp%20limit%20(-1sig)%20on%20(H~%2C%20a~)%20B(N1-%3EZa~)%20%3D%20100%25">Expected limit ($-1\sigma_{\textrm{exp}}$)</a> <li><a href="104458?version=1&table=Obs limit on (H~, a~) B(N1->Za~) = 100%">Observed limit</a> <li><a href="104458?version=1&table=Obs%20limit%20(%2B1sig)%20on%20(H~%2C%20a~)%20B(N1-%3EZa~)%20%3D%20100%25">Observed limit ($+1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> <li><a href="104458?version=1&table=Obs%20limit%20(-1sig)%20on%20(H~%2C%20a~)%20B(N1-%3EZa~)%20%3D%20100%">Observed limit ($-1\sigma_{\textrm{theory}}^{\textrm{SUSY}}$)</a> </ul> <li>$(\tilde{H},~\tilde{a})$ model ($\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow Z\tilde{a})=75\%$): <ul> <li><a href="104458?version=1&table=Exp limit on (H~, a~) B(N1->Za~) = 75%">Expected limit</a> <li><a href="104458?version=1&table=Obs limit on (H~, a~) B(N1->Za~) = 75%">Observed limit</a> </ul> <li>$(\tilde{H},~\tilde{a})$ model ($\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow Z\tilde{a})=50\%$): <ul> <li><a href="104458?version=1&table=Exp limit on (H~, a~) B(N1->Za~) = 50%">Expected limit</a> <li><a href="104458?version=1&table=Obs limit on (H~, a~) B(N1->Za~) = 50%">Observed limit</a> </ul> <li>$(\tilde{H},~\tilde{a})$ model ($\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow Z\tilde{a})=25\%$): <ul> <li>Expected limit : (No mass point could be excluded) <li><a href="104458?version=1&table=Obs limit on (H~, a~) B(N1->Za~) = 25%">Observed limit</a> </ul> </ul> <b>EWKino branching ratios:</b> <ul> <li>$(\tilde{W},~\tilde{H})$ model: <ul> <li><a href="104458?version=1&table=B(C2-%3EW%2BN1%2CN2)%20in%20(W~%2C%20H~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{2}^{\pm}\rightarrow W\tilde{\chi}_{1,2}^{0})$</a> <li><a 
href="104458?version=1&table=B(C2-%3EZ%2BC1)%20in%20(W~%2C%20H~)%2C%20tanb=10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{2}^{\pm}\rightarrow Z\tilde{\chi}_{1}^{\pm})$</a> <li><a href="104458?version=1&table=B(C2-%3Eh%2BC1)%20in%20(W~%2C%20H~)%2C%20tanb=10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{2}^{\pm}\rightarrow h\tilde{\chi}_{1}^{\pm})$</a> <li><a href="104458?version=1&table=B(N3-%3EW%2BC1)%20in%20(W~%2C%20H~)%2C%20tanb=10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{3}^{0}\rightarrow W\tilde{\chi}_{1}^{\pm})$</a> <li><a href="104458?version=1&table=B(N3-%3EZ%2BN1%2CN2)%20in%20(W~%2C%20H~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{3}^{0}\rightarrow Z\tilde{\chi}_{1,2}^{0})$</a> <li><a href="104458?version=1&table=B(N3-%3Eh%2BN1%2CN2)%20in%20(W~%2C%20H~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{3}^{0}\rightarrow h\tilde{\chi}_{1,2}^{0})$</a> </ul> <li>$(\tilde{H},~\tilde{W})$ model: <ul> <li><a href="104458?version=1&table=B(C2-%3EW%2BN1)%20in%20(H~%2C%20W~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{2}^{\pm}\rightarrow W\tilde{\chi}_{1}^{0})$</a> <li><a href="104458?version=1&table=B(C2-%3EZ%2BC1)%20in%20(H~%2C%20W~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{2}^{\pm}\rightarrow Z\tilde{\chi}_{1}^{\pm})$</a> <li><a href="104458?version=1&table=B(C2-%3Eh%2BC1)%20in%20(H~%2C%20W~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{2}^{\pm}\rightarrow h\tilde{\chi}_{1}^{\pm})$</a> <li><a href="104458?version=1&table=B(N2-%3EW%2BC1)%20in%20(H~%2C%20W~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow W\tilde{\chi}_{1}^{\pm})$</a> <li><a href="104458?version=1&table=B(N2-%3EZ%2BN1)%20in%20(H~%2C%20W~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow Z\tilde{\chi}_{1}^{0})$</a> <li><a href="104458?version=1&table=B(N2-%3Eh%2BN1)%20in%20(H~%2C%20W~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{2}^{0}\rightarrow h\tilde{\chi}_{1}^{0})$</a> <li><a 
href="104458?version=1&table=B(N3-%3EW%2BC1)%20in%20(H~%2C%20W~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{3}^{0}\rightarrow W\tilde{\chi}_{1}^{\pm})$</a> <li><a href="104458?version=1&table=B(N3-%3EZ%2BN1)%20in%20(H~%2C%20W~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{3}^{0}\rightarrow Z\tilde{\chi}_{1}^{0})$</a> <li><a href="104458?version=1&table=B(N3-%3Eh%2BN1)%20in%20(H~%2C%20W~)%2C%20tanb%3D10%2C%20mu%3E0">$\textrm{B}(\tilde{\chi}_{3}^{0}\rightarrow h\tilde{\chi}_{1}^{0})$</a> </ul> </ul> <b>Cross-section upper limit:</b> <ul> <li>Expected: <ul> <li><a href="104458?version=1&table=Expected cross-section upper limit on C1C1-WW">$(\tilde{W},~\tilde{B})$-SIM model (C1C1-WW)</a> <li><a href="104458?version=1&table=Expected cross-section upper limit on C1N2-WZ">$(\tilde{W},~\tilde{B})$-SIM model (C1N2-WZ)</a> <li><a href="104458?version=1&table=Expected cross-section upper limit on C1N2-Wh">$(\tilde{W},~\tilde{B})$-SIM model (C1N2-Wh)</a> <li><a href="104458?version=1&table=Expected cross-section upper limit on (H~, G~)">$(\tilde{H},~\tilde{G})$ model</a> </ul> <li>Observed: <ul> <li><a href="104458?version=1&table=Observed cross-section upper limit on C1C1-WW">$(\tilde{W},~\tilde{B})$-SIM model (C1C1-WW)</a> <li><a href="104458?version=1&table=Observed cross-section upper limit on C1N2-WZ">$(\tilde{W},~\tilde{B})$-SIM model (C1N2-WZ)</a> <li><a href="104458?version=1&table=Observed cross-section upper limit on C1N2-Wh">$(\tilde{W},~\tilde{B})$-SIM model (C1N2-Wh)</a> <li><a href="104458?version=1&table=Observed cross-section upper limit on (H~, G~)">$(\tilde{H},~\tilde{G})$ model</a> </ul> </ul> <b>Acceptance:</b> <ul> <li><a href="104458?version=1&table=Acceptance of C1C1-WW signals by SR-4Q-VV">$(\tilde{W},~\tilde{B})$-SIM model (C1C1-WW) in SR-4Q-VV</a> <li><a href="104458?version=1&table=Acceptance of C1N2-WZ signals by SR-4Q-VV">$(\tilde{W},~\tilde{B})$-SIM model (C1N2-WZ) in SR-4Q-VV</a> <li><a href="104458?version=1&table=Acceptance 
of C1N2-WZ signals by SR-2B2Q-VZ">$(\tilde{W},~\tilde{B})$-SIM model (C1N2-WZ) in SR-2B2Q-VZ</a> <li><a href="104458?version=1&table=Acceptance of C1N2-Wh signals by SR-2B2Q-Vh">$(\tilde{W},~\tilde{B})$-SIM model (C1N2-WZ) in SR-2B2Q-Vh</a> <li><a href="104458?version=1&table=Acceptance of N2N3-ZZ signals by SR-4Q-VV">$(\tilde{H},~\tilde{B})$-SIM model (N2N3-ZZ) in SR-4Q-VV</a> <li><a href="104458?version=1&table=Acceptance of N2N3-ZZ signals by SR-2B2Q-VZ">$(\tilde{H},~\tilde{B})$-SIM model (N2N3-ZZ) in SR-2B2Q-VZ</a> <li><a href="104458?version=1&table=Acceptance of N2N3-Zh signals by SR-2B2Q-Vh">$(\tilde{H},~\tilde{B})$-SIM model (N2N3-Zh) in SR-2B2Q-Vh</a> <li><a href="104458?version=1&table=Acceptance of N2N3-hh signals by SR-2B2Q-Vh">$(\tilde{H},~\tilde{B})$-SIM model (N2N3-hh) in SR-2B2Q-Vh</a> <li><a href="104458?version=1&table=Acceptance of (H~, G~) signals by SR-4Q-VV">$(\tilde{H},~\tilde{G})$ model in SR-4Q-VV</a> <li><a href="104458?version=1&table=Acceptance of (H~, G~) signals by SR-2B2Q-VZ">$(\tilde{H},~\tilde{G})$ model in SR-2B2Q-VZ</a> <li><a href="104458?version=1&table=Acceptance of (H~, G~) signals by SR-2B2Q-Vh">$(\tilde{H},~\tilde{G})$ model in SR-2B2Q-Vh</a> </ul> <b>Efficiency:</b> <ul> <li><a href="104458?version=1&table=Efficiency of C1C1-WW signals by SR-4Q-VV">$(\tilde{W},~\tilde{B})$-SIM model (C1C1-WW) in SR-4Q-VV</a> <li><a href="104458?version=1&table=Efficiency of C1N2-WZ signals by SR-4Q-VV">$(\tilde{W},~\tilde{B})$-SIM model (C1N2-WZ) in SR-4Q-VV</a> <li><a href="104458?version=1&table=Efficiency of C1N2-WZ signals by SR-2B2Q-VZ">$(\tilde{W},~\tilde{B})$-SIM model (C1N2-WZ) in SR-2B2Q-VZ</a> <li><a href="104458?version=1&table=Efficiency of C1N2-Wh signals by SR-2B2Q-Vh">$(\tilde{W},~\tilde{B})$-SIM model (C1N2-Wh) in SR-2B2Q-Vh</a> <li><a href="104458?version=1&table=Efficiency of N2N3-ZZ signals by SR-4Q-VV">$(\tilde{H},~\tilde{B})$-SIM model (N2N3-ZZ) in SR-4Q-VV</a> <li><a href="104458?version=1&table=Efficiency of N2N3-ZZ 
signals by SR-2B2Q-VZ">$(\tilde{H},~\tilde{B})$-SIM model (N2N3-ZZ) in SR-2B2Q-VZ</a> <li><a href="104458?version=1&table=Efficiency of N2N3-Zh signals by SR-2B2Q-Vh">$(\tilde{H},~\tilde{B})$-SIM model (N2N3-Zh) in SR-2B2Q-Vh</a> <li><a href="104458?version=1&table=Efficiency of N2N3-hh signals by SR-2B2Q-Vh">$(\tilde{H},~\tilde{B})$-SIM model (N2N3-hh) in SR-2B2Q-Vh</a> <li><a href="104458?version=1&table=Efficiency of (H~, G~) signals by SR-4Q-VV">$(\tilde{H},~\tilde{G})$ model in SR-4Q-VV</a> <li><a href="104458?version=1&table=Efficiency of (H~, G~) signals by SR-2B2Q-VZ">$(\tilde{H},~\tilde{G})$ model in SR-2B2Q-VZ</a> <li><a href="104458?version=1&table=Efficiency of (H~, G~) signals by SR-2B2Q-Vh">$(\tilde{H},~\tilde{G})$ model in SR-2B2Q-Vh</a> </ul> Cut flows of some representative signals up to SR-4Q-VV, SR-2B2Q-VZ, and SR-2B2Q-Vh. One signal point from the $(\tilde{W},~\tilde{B})$ simplified models (C1C1-WW, C1N2-WZ, and C1N2-Wh) and $(\tilde{H},~\tilde{G})$ is chosen. The "preliminary event reduction" is a technical selection applied for reducing the sample size, which is fully efficient after the $n_{\textrm{Large}-R~\textrm{jets}}\geq 2$ selection. The boson-tagging efficiency for jets arising from $W/Z$ bosons decaying into $q\bar{q}$ (signal jets) are shown. The signal jet efficiency of $W_{qq}$/$Z_{qq}$-tagging is evaluated using a sample of pre-selected large-$R$ jets ($p_{\textrm{T}}>200~\textrm{GeV}, |\eta|<2.0, m_{J} > 40~\textrm{GeV}$) in the simulated $(\tilde{W},\tilde{B})$ simplified model signal events with $\Delta m (\tilde{\chi}_{\textrm{heavy}},~\tilde{\chi}_{\textrm{light}}) \ge 400~\textrm{GeV}$. The jets are matched with generator-level $W/Z$-bosons by $\Delta R<1.0$ which decay into $q\bar{q}$. The efficiency correction factors are applied on the signal efficiency rejection for the $W_{qq}$/$Z_{qq}$-tagging. The systematic uncertainty is represented by the hashed bands. 
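The signal-jet efficiency definition above combines a kinematic pre-selection with generator-level matching; a minimal sketch of those two steps follows. The jet and boson dictionaries are hypothetical stand-ins with assumed field names, not the ATLAS data format or software; only the cuts quoted in the text are used.

```python
import math

# Sketch of the large-R jet pre-selection and W/Z truth matching quoted above.
def delta_r(eta1, phi1, eta2, phi2):
    dphi = math.remainder(phi1 - phi2, 2 * math.pi)  # wrap phi into [-pi, pi]
    return math.hypot(eta1 - eta2, dphi)

def preselected(jet):
    # pre-selection from the text: pT > 200 GeV, |eta| < 2.0, mJ > 40 GeV
    return jet["pt"] > 200.0 and abs(jet["eta"]) < 2.0 and jet["m"] > 40.0

def matched(jet, bosons):
    # generator-level W/Z (decaying to qqbar) within Delta R < 1.0 of the jet
    return any(delta_r(jet["eta"], jet["phi"], b["eta"], b["phi"]) < 1.0
               for b in bosons)

jet = {"pt": 350.0, "eta": 0.4, "phi": 1.0, "m": 85.0}   # invented example
bosons = [{"eta": 0.5, "phi": 1.2}]
print(preselected(jet) and matched(jet, bosons))  # True
```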
More… The exotic meson $\pi_1(1600)$ with $J^{PC} = 1^{-+}$ and its decay into $\rho(770)\pi$ The collaboration Alexeev, M.G. ; Alexeev, G.D. ; Amoroso, A. ; et al. CERN-EP-2021–162, 2021. Inspire Record 1898933 We study the spin-exotic $J^{PC} = 1^{-+}$ amplitude in single-diffractive dissociation of 190 GeV$/c$ pions into $\pi^-\pi^-\pi^+$ using a hydrogen target and confirm the $\pi_1(1600) \to \rho(770) \pi$ amplitude, which interferes with a nonresonant $1^{-+}$ amplitude. We demonstrate that conflicting conclusions from previous studies on these amplitudes can be attributed to different analysis models and different treatment of the dependence of the amplitudes on the squared four-momentum transfer and we thus reconcile their experimental findings. We study the nonresonant contributions to the $\pi^-\pi^-\pi^+$ final state using pseudo-data generated on the basis of a Deck model. Subjecting pseudo-data and real data to the same partial-wave analysis, we find good agreement concerning the spectral shape and its dependence on the squared four-momentum transfer for the $J^{PC} = 1^{-+}$ amplitude and also for amplitudes with other $J^{PC}$ quantum numbers. We investigate for the first time the amplitude of the $\pi^-\pi^+$ subsystem with $J^{PC} = 1^{--}$ in the $3\pi$ amplitude with $J^{PC} = 1^{-+}$ employing the novel freed-isobar analysis scheme. We reveal this $\pi^-\pi^+$ amplitude to be dominated by the $\rho(770)$ for both the $\pi_1(1600)$ and the nonresonant contribution. We determine the $\rho(770)$ resonance parameters within the three-pion final state. These findings largely confirm the underlying assumptions for the isobar model used in all previous partial-wave analyses addressing the $J^{PC} = 1^{-+}$ amplitude. 4 data tables Results for the spin-exotic $1^{-+}1^+[\pi\pi]_{1^{-\,-}}\pi P$ wave from the free-isobar partial-wave analysis performed in the first $t^\prime$ bin from $0.100$ to $0.141\;(\text{GeV}/c)^2$. 
The plotted values represent the intensity of the coherent sum of the dynamic isobar amplitudes $\{\mathcal{T}_k^\text{fit}\}$ as a function of $m_{3\pi}$, where the coherent sums run over all $m_{\pi^-\pi^+}$ bins indexed by $k$. These intensity values are given in number of events per $40\;\text{MeV}/c^2$ $m_{3\pi}$ interval and correspond to the orange points in Fig. 8(a). In the "Resources" section of this $t^\prime$ bin, we provide the JSON file named <code>transition_amplitudes_tBin_0.json</code> for download, which contains for each $m_{3\pi}$ bin the values of the transition amplitudes $\{\mathcal{T}_k^\text{fit}\}$ for all $m_{\pi^-\pi^+}$ bins, their covariances, and further information. The data in this JSON file are organized in independent bins of $m_{3\pi}$. The information in these bins can be accessed via the key <code>m3pi_bin_<#>_t_prime_bin_0</code>. Each independent $m_{3\pi}$ bin contains <ul> <li>the kinematic ranges of the $(m_{3\pi}, t^\prime)$ cell, which are accessible via the keys <code>m3pi_lower_limit</code>, <code>m3pi_upper_limit</code>, <code>t_prime_lower_limit</code>, and <code>t_prime_upper_limit</code>.</li> <li>the $m_{\pi^-\pi^+}$ bin borders, which are accessible via the keys <code>m2pi_lower_limits</code> and <code>m2pi_upper_limits</code>.</li> <li>the real and imaginary parts of the transition amplitudes $\{\mathcal{T}_k^\text{fit}\}$ for all $m_{\pi^-\pi^+}$ bins, which are accessible via the keys <code>transition_amplitudes_real_part</code> and <code>transition_amplitudes_imag_part</code>, respectively.</li> <li>the covariance matrix of the real and imaginary parts of the $\{\mathcal{T}_k^\text{fit}\}$ for all $m_{\pi^-\pi^+}$ bins, which is accessible via the key <code>covariance_matrix</code>. Note that this matrix is real-valued and that its rows and columns are indexed such that $(\Re,\Im)$ pairs of the transition amplitudes are arranged with increasing $k$.</li> <li>the normalization factors $\mathcal{N}_a$ in Eq. 
(13) for all $m_{\pi^-\pi^+}$ bins, which are accessible via the key <code>normalization_factors</code>.</li> <li>the shape of the zero mode, i.e., the values of $\tilde\Delta_k$ for all $m_{\pi^-\pi^+}$ bins, which is accessible via the key <code>zero_mode_shape</code>.</li> <li>the reference wave, which is accessible via the key <code>reference_wave</code>. Note that this is always the $4^{++}1^+\rho(770)\pi G$ wave.</li> </ul> Results for the spin-exotic $1^{-+}1^+[\pi\pi]_{1^{-\,-}}\pi P$ wave from the free-isobar partial-wave analysis performed in the second $t^\prime$ bin from $0.141$ to $0.194\;(\text{GeV}/c)^2$. The plotted values represent the intensity of the coherent sum of the dynamic isobar amplitudes $\{\mathcal{T}_k^\text{fit}\}$ as a function of $m_{3\pi}$, where the coherent sums run over all $m_{\pi^-\pi^+}$ bins indexed by $k$. These intensity values are given in number of events per $40\;\text{MeV}/c^2$ $m_{3\pi}$ interval and correspond to the orange points in Fig. 15(a) in the supplemental material of the paper. In the "Resources" section of this $t^\prime$ bin, we provide the JSON file named <code>transition_amplitudes_tBin_1.json</code> for download, which contains for each $m_{3\pi}$ bin the values of the transition amplitudes $\{\mathcal{T}_k^\text{fit}\}$ for all $m_{\pi^-\pi^+}$ bins, their covariances, and further information. The data in this JSON file are organized in independent bins of $m_{3\pi}$. The information in these bins can be accessed via the key <code>m3pi_bin_<#>_t_prime_bin_1</code>. 
Each independent $m_{3\pi}$ bin contains <ul> <li>the kinematic ranges of the $(m_{3\pi}, t^\prime)$ cell, which are accessible via the keys <code>m3pi_lower_limit</code>, <code>m3pi_upper_limit</code>, <code>t_prime_lower_limit</code>, and <code>t_prime_upper_limit</code>.</li> <li>the $m_{\pi^-\pi^+}$ bin borders, which are accessible via the keys <code>m2pi_lower_limits</code> and <code>m2pi_upper_limits</code>.</li> <li>the real and imaginary parts of the transition amplitudes $\{\mathcal{T}_k^\text{fit}\}$ for all $m_{\pi^-\pi^+}$ bins, which are accessible via the keys <code>transition_amplitudes_real_part</code> and <code>transition_amplitudes_imag_part</code>, respectively.</li> <li>the covariance matrix of the real and imaginary parts of the $\{\mathcal{T}_k^\text{fit}\}$ for all $m_{\pi^-\pi^+}$ bins, which is accessible via the key <code>covariance_matrix</code>. Note that this matrix is real-valued and that its rows and columns are indexed such that $(\Re,\Im)$ pairs of the transition amplitudes are arranged with increasing $k$.</li> <li>the normalization factors $\mathcal{N}_a$ in Eq. (13) for all $m_{\pi^-\pi^+}$ bins, which are accessible via the key <code>normalization_factors</code>.</li> <li>the shape of the zero mode, i.e., the values of $\tilde\Delta_k$ for all $m_{\pi^-\pi^+}$ bins, which is accessible via the key <code>zero_mode_shape</code>.</li> <li>the reference wave, which is accessible via the key <code>reference_wave</code>. Note that this is always the $4^{++}1^+\rho(770)\pi G$ wave.</li> </ul> Results for the spin-exotic $1^{-+}1^+[\pi\pi]_{1^{-\,-}}\pi P$ wave from the free-isobar partial-wave analysis performed in the third $t^\prime$ bin from $0.194$ to $0.326\;(\text{GeV}/c)^2$. The plotted values represent the intensity of the coherent sum of the dynamic isobar amplitudes $\{\mathcal{T}_k^\text{fit}\}$ as a function of $m_{3\pi}$, where the coherent sums run over all $m_{\pi^-\pi^+}$ bins indexed by $k$. 
These intensity values are given in number of events per $40\;\text{MeV}/c^2$ $m_{3\pi}$ interval and correspond to the orange points in Fig. 15(b) in the supplemental material of the paper. In the "Resources" section of this $t^\prime$ bin, we provide the JSON file named <code>transition_amplitudes_tBin_2.json</code> for download, which contains for each $m_{3\pi}$ bin the values of the transition amplitudes $\{\mathcal{T}_k^\text{fit}\}$ for all $m_{\pi^-\pi^+}$ bins, their covariances, and further information. The data in this JSON file are organized in independent bins of $m_{3\pi}$. The information in these bins can be accessed via the key <code>m3pi_bin_<#>_t_prime_bin_2</code>. Each independent $m_{3\pi}$ bin contains <ul> <li>the kinematic ranges of the $(m_{3\pi}, t^\prime)$ cell, which are accessible via the keys <code>m3pi_lower_limit</code>, <code>m3pi_upper_limit</code>, <code>t_prime_lower_limit</code>, and <code>t_prime_upper_limit</code>.</li> <li>the $m_{\pi^-\pi^+}$ bin borders, which are accessible via the keys <code>m2pi_lower_limits</code> and <code>m2pi_upper_limits</code>.</li> <li>the real and imaginary parts of the transition amplitudes $\{\mathcal{T}_k^\text{fit}\}$ for all $m_{\pi^-\pi^+}$ bins, which are accessible via the keys <code>transition_amplitudes_real_part</code> and <code>transition_amplitudes_imag_part</code>, respectively.</li> <li>the covariance matrix of the real and imaginary parts of the $\{\mathcal{T}_k^\text{fit}\}$ for all $m_{\pi^-\pi^+}$ bins, which is accessible via the key <code>covariance_matrix</code>. Note that this matrix is real-valued and that its rows and columns are indexed such that $(\Re,\Im)$ pairs of the transition amplitudes are arranged with increasing $k$.</li> <li>the normalization factors $\mathcal{N}_a$ in Eq. 
(13) for all $m_{\pi^-\pi^+}$ bins, which are accessible via the key <code>normalization_factors</code>.</li> <li>the shape of the zero mode, i.e., the values of $\tilde\Delta_k$ for all $m_{\pi^-\pi^+}$ bins, which is accessible via the key <code>zero_mode_shape</code>.</li> <li>the reference wave, which is accessible via the key <code>reference_wave</code>. Note that this is always the $4^{++}1^+\rho(770)\pi G$ wave.</li> </ul> More… Probing effective field theory operators in the associated production of top quarks with a Z boson in multilepton final states at $\sqrt{s} =$ 13 TeV The collaboration Tumasyan, Armen ; Adam, Wolfgang ; Andrejkovic, Janik Walter ; et al. CMS-TOP-21-001, 2021. Inspire Record 1895530 A search for new top quark interactions is performed within the framework of an effective field theory using the associated production of either one or two top quarks with a Z boson in multilepton final states. The data sample corresponds to an integrated luminosity of 138 fb$^{-1}$ of proton-proton collisions at $\sqrt{s} =$ 13 TeV collected by the CMS experiment at the LHC. Five dimension-six operators modifying the electroweak interactions of the top quark are considered. Novel machine-learning techniques are used to enhance the sensitivity to effects arising from these operators. Distributions used for the signal extraction are parameterized in terms of Wilson coefficients describing the interaction strengths of the operators. All five Wilson coefficients are simultaneously fit to data and 95% confidence level intervals are computed. All results are consistent with the SM expectations. 4 data tables Expected and observed 95% CL confidence intervals for all Wilson coefficients. The intervals are obtained by scanning over a single Wilson coefficient, while fixing the other Wilson coefficients to their SM values of zero. Expected and observed 95% CL confidence intervals for all Wilson coefficients. 
The intervals for all five Wilson coefficients are obtained from a single fit, in which all Wilson coefficients are treated as free parameters. Covariance between the Wilson coefficients (in units of TeV$^{-4}$), after the 5D fit to data.
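The parameterization of distributions in terms of Wilson coefficients mentioned in the abstract exploits the fact that, at dimension six, the expected yield in each bin is at most quadratic in a coefficient, so three simulated points fix the full dependence. A minimal one-coefficient sketch of that standard approach (the numbers are invented for illustration, not the CMS implementation or results):

```python
import numpy as np

# Yield in one bin as a quadratic in a single Wilson coefficient c:
#   N(c) = a0 + a1*c + a2*c**2
c_points = np.array([0.0, 1.0, 2.0])        # c values of three samples
yields   = np.array([100.0, 112.0, 130.0])  # simulated yields at those points

a2, a1, a0 = np.polyfit(c_points, yields, 2)  # exact fit: 3 points, degree 2

def predicted_yield(c):
    return a0 + a1 * c + a2 * c**2

# the parameterization reproduces the input points and interpolates smoothly
assert abs(predicted_yield(1.0) - 112.0) < 1e-6
```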
https://www.physicsforums.com/threads/trigonometric-equations-solution.636640/
Trigonometric Equations Solution

1. Sep 17, 2012 - Peter G.

Hi, I had to solve without the aid of a calculator:

2cos(x) = sin(2x)

What I did was perform the following substitution:

2cos(x) = 2sin(x)cos(x)

I then cancelled and got: sin(x) = 1

For 0 to 3pi, I got two answers, pi/2 and 5pi/2. I did not get one of the answers. Why? What did I do wrong? Thanks!

2. Sep 17, 2012 - SammyS, Staff Emeritus

2cos(x) = 2sin(x)cos(x) is also true if cos(x) = 0 (in which case, you divided by zero when you cancelled.)

A safer way: subtract 2cos(x) from both sides: 0 = 2sin(x)cos(x) - 2cos(x). Factor & use the zero product property of multiplication.

3. Sep 17, 2012 - Peter G.

Ok, thanks!

4. Sep 17, 2012 - Staff: Mentor

Just to emphasize what Sammy said, when you're solving equations, it's not a good idea to "cancel", since there is the chance that you will lose a solution (just like you did here). Here's a simple example showing why cancelling is not a good idea:

Solve for x in the equation x^2 = 4x

First attempt: Cancel x from each side to get x = 4. x = 4 is a solution, but the problem is, there is another that was lost by the cancel operation.

Second attempt: Rewrite the equation as x^2 - 4x = 0. Factor to get x(x - 4) = 0. Solution: x = 0 or x = 4
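A quick numerical check of the full solution set (numpy is used here only for verification; it is not part of the thread):

```python
import numpy as np

# Factoring instead of cancelling: 2*cos(x) = 2*sin(x)*cos(x)
#   =>  2*cos(x) * (sin(x) - 1) = 0
#   =>  cos(x) = 0  or  sin(x) = 1
# On [0, 3*pi], cos(x) = 0 at pi/2, 3*pi/2, 5*pi/2, while sin(x) = 1 only
# at pi/2 and 5*pi/2 -- so cancelling cos(x) silently drops x = 3*pi/2.
solutions = [np.pi / 2, 3 * np.pi / 2, 5 * np.pi / 2]
for x in solutions:
    assert abs(2 * np.cos(x) - np.sin(2 * x)) < 1e-12  # each satisfies the equation
```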
https://electronics.stackexchange.com/revisions/352102/4
It breaks up into three simple sections that are each relatively easy to explain:

simulate this circuit – Schematic created using CircuitLab

The first part is the diode that provides reverse voltage protection. If for some reason the polarity of the input voltage is wired opposite to what it is supposed to be, then $D_1$ will block it and the output will also be essentially off. Only if the polarity is correct will the rest of the circuit be operational. The price of including this is a voltage drop of perhaps $700\:\text{mV}$. I exaggerated this voltage drop a little in the diagram. But it gets the point across.

The next section is below that. It's a zener regulator. The resistor is there to limit the current. The zener tends to have the same voltage across it, when reverse-biased with sufficient voltage (and $11-13\:\text{V}$ is more than sufficient.) With $R_1$ as given, you'd expect the current to be somewhere from about $5\:\text{mA}$ to $10\:\text{mA}$. This is a "normal" operating current for many zeners. (You could go look up the datasheet and find out, exactly. I didn't bother here.) So the voltage at the top of the zener should be close to $9.1\:\text{V}$. The exact current through the zener will have a slight impact on this. But not much.

The final section on the right is there to "boost up" the current compliance. Since the zener only has a few milliamps to work with, if you didn't include this added section your load could only draw a very few milliamps, at most, without messing up the zener's regulated voltage. So to get more than that, you need a current-boosting section. This is composed of what is often called an "emitter follower" BJT. This BJT's emitter will "follow" the voltage at the base.
Since the base is at $9.1\:\text{V}$, and since the base-emitter voltage drop will be about $600-700\:\text{mV}$, you can expect the emitter to "follow," but here with a slightly lower voltage (as indicated in the schematic.) This BJT doesn't require much base current in order to allow a lot of collector current. So the BJT here may "draw" current from its collector, by also drawing a much smaller, tiny base current ("stolen" from the zener, so it can't be allowed to be very much), and then this sum of the two becomes the total emitter current. This emitter current can be as much as several hundred times the base current. So here, the BJT might draw $1\:\text{mA}$ of base current (which is okay, because there is several times that much available due to $R_1$) in order to handle perhaps as much as $200\:\text{mA}$ of emitter current. In keeping with the idea of "being conservative" the specification only says $100\:\text{mA}$ -- and that's very much the right way to go when telling someone what this is capable of. Be conservative.

$R_2$ is there as a bit of a short-circuit current limit. It doesn't serve much else. But if the load tries to pull too much current via the emitter then there will be an increasingly larger voltage drop across $R_2$ and this will cause the collector to have access to lower remaining voltage. At some point, the emitter will be "cramped." In this case, a drop of more than $2\:\text{V}$ (perhaps a little more) will probably begin the process of cramping the output. This means the limit is somewhere above $\frac{2\:\text{V}}{22\:\Omega}\approx 100\:\text{mA}$. Overall, $R_2$ is a very cheap way to add some modest protection to help make the whole thing just a little more bullet-proof, so to speak.
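The arithmetic in this answer can be checked in a few lines. Note that $R_1$'s value appears only in the schematic, not in the text, so the 330 Ω below is an assumed placeholder; everything else comes from the numbers quoted above.

```python
# Back-of-envelope operating-point estimates for the regulator described above.
V_IN    = 12.0   # supply, mid-range of the stated 11-13 V
V_DIODE = 0.7    # D1 reverse-protection drop, ~700 mV
V_ZENER = 9.1    # nominal zener voltage
V_BE    = 0.65   # base-emitter drop, ~600-700 mV
R1      = 330.0  # ASSUMED value, ohms (actual value is in the schematic)
R2      = 22.0   # from the text, ohms

i_zener = (V_IN - V_DIODE - V_ZENER) / R1  # bias current R1 feeds the zener
v_out   = V_ZENER - V_BE                   # the emitter "follows" the base
i_limit = 2.0 / R2                         # crude short-circuit estimate

print(f"zener bias  ~ {i_zener * 1e3:.1f} mA")   # ~6.7 mA (in the 5-10 mA range)
print(f"output      ~ {v_out:.2f} V")            # ~8.45 V
print(f"short limit ~ {i_limit * 1e3:.0f} mA")   # ~91 mA, i.e. roughly 100 mA
```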
https://www.studypug.com/us/en/math/geometry/combination-of-parallel-and-perpendicular-line-equations
### Parallel and perpendicular lines In this section, we will look at various kinds of questions related to parallel and perpendicular line equations. For instance, we will find the slope of a line that is parallel or perpendicular to a given equation, determine whether a given pair of line equations is parallel, perpendicular, or neither, and find the equations of the lines parallel and perpendicular to a given line through a given pass-through point.
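These tasks all reduce to two slope rules: parallel lines share the same slope, and perpendicular (non-vertical, non-horizontal) lines have slopes that multiply to −1. A small sketch with hypothetical helper functions (the names are ours, not from the course):

```python
def parallel_slope(m):
    # a parallel line keeps the same slope
    return m

def perpendicular_slope(m):
    # perpendicular slopes multiply to -1 (m must be nonzero)
    return -1.0 / m

def line_through(m, x0, y0):
    # line of slope m through (x0, y0), returned as (m, b) for y = m*x + b
    return m, y0 - m * x0

# perpendicular to a line of slope 2, passing through (1, 3):
m, b = line_through(perpendicular_slope(2.0), 1.0, 3.0)
print(m, b)  # -0.5 3.5
```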
https://worldwidescience.org/topicpages/a/adversaries.html
OpenAIRE Makhzani, Alireza; Shlens, Jonathon; Jaitly, Navdeep; Goodfellow, Ian; Frey, Brendan 2015-01-01 In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution. Matching the aggregated posterior to the prior ensures that generating from any part of prior space results in meaningful samples. As a result, the decoder of the adversarial... 2. Deep Learning and Music Adversaries OpenAIRE Kereliuk, Corey; Sturm, Bob L.; Larsen, Jan 2015-01-01 An adversary is essentially an algorithm intent on making a classification system perform in some particular way given an input, e.g., increase the probability of a false negative. Recent work builds adversaries for deep learning systems applied to image object recognition, which exploits the parameters of the system to find the minimal perturbation of the input image such that the network misclassifies it with high confidence. We adapt this approach to construct and deploy an adversary of de... 3. Generic adversary characteristics: summary report International Nuclear Information System (INIS) The adversaries studied were found to be complex, often unpredictable, and dynamic. The adversary typically goes through a complex decision-making process between the time a potential target is identified and the moment the decision to act is made. This study analyzes the adversary characteristics, and the following conclusions are made: one of the least likely methods of attack is an overt armed assault. Terrorists and psychotics depend upon a high degree of personal dedication. No single generic adversary group or individual exhibits strength in every characteristic. Physical danger appears to have some deterrent effect on all adversaries except the psychotics. 
Organized and professional criminals often try to recruit insiders. Disoriented persons, white-collar criminals, and disgruntled employees tend to operate as insiders. Professional criminals, many terrorist groups, some extremist protest groups, and certain disoriented persons plan carefully before initiating a criminal mission. Organized crime and miscellaneous criminal adversaries rely on deception and ruse to bypass security. After the decision to commit a crime, the resources deployed by terrorists or organized criminals will be a function of their perception of the operational requirements of the crime. The nature of ''threat'' is dynamic; adversary behavior and capability appear to be related to prevailing political, economic, and social conditions 4. Deep Learning and Music Adversaries DEFF Research Database (Denmark) Kereliuk, Corey Mose; Sturm, Bob L.; Larsen, Jan 2015-01-01 An adversary is an agent designed to make a classification system perform in some particular way, e.g., increase the probability of a false negative. Recent work builds adversaries for deep learning systems applied to image object recognition, exploiting the parameters of the system to find...... the minimal perturbation of the input image such that the system misclassifies it with high confidence. We adapt this approach to construct and deploy an adversary of deep learning systems applied to music content analysis. In our case, however, the system inputs are magnitude spectral frames, which require... 5. All Quantum Adversary Methods are Equivalent NARCIS (Netherlands) R. Spalek; M. Szegedy 2005-01-01 The quantum adversary method is one of the most versatile lower-bound methods for quantum algorithms. We show that all known variants of this method are equal: spectral adversary [Barnum, Saks, and Szegedy, 2003], weighted adversary [Ambainis, 2003], strong weighted adversary [Zhang, 2004], and the 6. Pitfalls and Potential of Adversary Evaluation. 
Science.gov (United States) Worthen, Blaine R.; Rogers, W. Todd 1980-01-01 The core of adversary evaluation is the existence of opposing viewpoints, not adherence to existing formats for presenting them. Suggests that evaluators develop adversary methods more appropriate for education. (Author/MLF) 7. Learning consensus in adversarial environments Science.gov (United States) Vamvoudakis, Kyriakos G.; García Carrillo, Luis R.; Hespanha, João. P. 2013-05-01 This work presents a game theory-based consensus problem for leaderless multi-agent systems in the presence of adversarial inputs that are introducing disturbance to the dynamics. Given the presence of enemy components and the possibility of malicious cyber attacks compromising the security of networked teams, a position agreement must be reached by the networked mobile team based on environmental changes. The problem is addressed under a distributed decision making framework that is robust to possible cyber attacks, which has an advantage over centralized decision making in the sense that a decision maker is not required to access information from all the other decision makers. The proposed framework derives three tuning laws for every agent; one associated with the cost, one associated with the controller, and one with the adversarial input. 8. Negative weights makes adversaries stronger CERN Document Server Hoyer, P; Spalek, R; Hoyer, Peter; Lee, Troy; Spalek, Robert 2006-01-01 The quantum adversary method is one of the most successful techniques for proving lower bounds on quantum query complexity. It gives optimal lower bounds for many problems, has application to classical complexity in formula size lower bounds, and is versatile with equivalent formulations in terms of weight schemes, eigenvalues, and Kolmogorov complexity. 
All these formulations are information-theoretic and rely on the principle that if an algorithm successfully computes a function then, in particular, it is able to distinguish between inputs which map to different values. We present a stronger version of the adversary method which goes beyond this principle to make explicit use of the existence of a measurement in a successful algorithm which gives the correct answer, with high probability. We show that this new method, which we call ADV+-, has all the advantages of the old: it is a lower bound on bounded-error quantum query complexity, its square is a lower bound on formula size, and it behaves well with res... 9. Scientific method, adversarial system, and technology assessment Science.gov (United States) Mayo, L. H. 1975-01-01 A basic framework is provided for the consideration of the purposes and techniques of scientific method and adversarial systems. Similarities and differences in these two techniques of inquiry are considered with reference to their relevance in the performance of assessments. 10. Polytope Codes Against Adversaries in Networks OpenAIRE Kosut, Oliver; Tong, Lang; Tse, David 2011-01-01 Network coding is studied when an adversary controls a subset of nodes in the network of limited quantity but unknown location. This problem is shown to be more difficult than when the adversary controls a given number of edges in the network, in that linear codes are insufficient. To solve the node problem, the class of Polytope Codes is introduced. Polytope Codes are constant composition codes operating over bounded polytopes in integer vector fields. The polytope structure creates addition... 11. Adversarial Feature Selection Against Evasion Attacks. 
Science.gov (United States) Zhang, Fei; Chan, Patrick P K; Biggio, Battista; Yeung, Daniel S; Roli, Fabio 2016-03-01 Pattern recognition and machine learning techniques have been increasingly adopted in adversarial settings such as spam, intrusion, and malware detection, although their security against well-crafted attacks that aim to evade detection by manipulating data at test time has not yet been thoroughly assessed. While previous work has been mainly focused on devising adversary-aware classification algorithms to counter evasion attempts, only few authors have considered the impact of using reduced feature sets on classifier security against the same attacks. An interesting, preliminary result is that classifier security to evasion may be even worsened by the application of feature selection. In this paper, we provide a more detailed investigation of this aspect, shedding some light on the security properties of feature selection against evasion attacks. Inspired by previous work on adversary-aware classifiers, we propose a novel adversary-aware feature selection model that can improve classifier security against evasion attacks, by incorporating specific assumptions on the adversary's data manipulation strategy. We focus on an efficient, wrapper-based implementation of our approach, and experimentally validate its soundness on different application examples, including spam and malware detection. PMID:25910268 12. Using Machine Learning in Adversarial Environments. Energy Technology Data Exchange (ETDEWEB) Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States) 2016-02-01 Intrusion/anomaly detection systems are among the first lines of cyber defense. Commonly, they either use signatures or machine learning (ML) to identify threats, but fail to account for sophisticated attackers trying to circumvent them. 
We propose to embed machine learning within a game-theoretic framework that performs adversarial modeling, develops methods for optimizing operational response based on ML, and integrates the resulting optimization codebase into the existing ML infrastructure developed by the Hybrid LDRD. Our approach addresses three key shortcomings of ML in adversarial settings: 1) resulting classifiers are typically deterministic and, therefore, easy to reverse engineer; 2) ML approaches only address the prediction problem, but do not prescribe how one should operationalize predictions, nor account for operational costs and constraints; and 3) ML approaches do not model attackers’ response and can be circumvented by sophisticated adversaries. The principal novelty of our approach is to construct an optimization framework that blends ML, operational considerations, and a model predicting attackers’ reactions, with the goal of computing optimal moving target defense. One important challenge is to construct a model of an adversary that is tractable, yet realistic. We aim to advance the science of attacker modeling by considering game-theoretic methods, and by engaging experimental subjects with red teaming experience in trying to actively circumvent an intrusion detection system, and learning a predictive model of such circumvention activities. In addition, we will generate metrics to test that a particular model of an adversary is consistent with available data. 13. Automated Planning in Repeated Adversarial Games CERN Document Server de Cote, Enrique Munoz; Sykulski, Adam M; Jennings, Nicholas R 2012-01-01 Game theory's prescriptive power typically relies on full rationality and/or self-play interactions. In contrast, this work sets aside these fundamental premises and focuses instead on heterogeneous autonomous interactions between two or more agents.
Specifically, we introduce a new and concise representation for repeated adversarial (constant-sum) games that highlights the necessary features that enable an automated planning agent to reason about how to score above the game's Nash equilibrium, when facing heterogeneous adversaries. To this end, we present TeamUP, a model-based RL algorithm designed for learning and planning such an abstraction. In essence, it is somewhat similar to R-max with a cleverly engineered reward shaping that treats exploration as an adversarial optimization problem. In practice, it attempts to find an ally with which to tacitly collude (in more than two-player games) and then collaborates on a joint plan of actions that can consistently score a high utility in adversarial repeated gam... 14. David against Goliath: Coping with Adversarial Customers DEFF Research Database (Denmark) Alajoutsijärvi, Kimmo; Tikkanen, Henrikki; Skaates, Maria Anne 2001-01-01 SME managers in many industries face the situation that they have to deal with a few important, large customer organisations that behave in an adversarial manner. These customers pit alternative suppliers against each other in order to achieve the lowest possible price, showing no intent to build... 15. Deep learning, audio adversaries, and music content analysis DEFF Research Database (Denmark) Kereliuk, Corey Mose; Sturm, Bob L.; Larsen, Jan 2015-01-01 We present the concept of adversarial audio in the context of deep neural networks (DNNs) for music content analysis. An adversary is an algorithm that makes minor perturbations to an input that cause major repercussions to the system response. In particular, we design an adversary for a DNN... 16. The Adversarial Route Analysis Tool: A Web Application Energy Technology Data Exchange (ETDEWEB) Casson, William H. Jr. [Los Alamos National Laboratory 2012-08-02 The Adversarial Route Analysis Tool is a type of Google Maps for adversaries.
It's a web-based geospatial application similar to Google Maps. It helps the U.S. government plan operations that predict where an adversary might be. It's easily accessible and maintainable, and it's simple to use without much training. 17. RFID Key Establishment Against Active Adversaries CERN Document Server Bringer, Julien; Cohen, Gérard; Kindarji, Bruno 2010-01-01 We present a method to strengthen a very low cost solution for key agreement with an RFID device. Starting from a work which exploits the inherent noise on the communication link to establish a key by public discussion, we show how to protect this agreement against active adversaries. For that purpose, we unravel integrity $(I)$-codes suggested by Cagalj et al. No preliminary key distribution is required. International Nuclear Information System (INIS) 19. Covert Communication Gains from Adversary's Ignorance of Transmission Time OpenAIRE Bash, Boulat A.; Goeckel, Dennis; Towsley, Don 2014-01-01 The recent square root law (SRL) for covert communication demonstrates that Alice can reliably transmit $\mathcal{O}(\sqrt{n})$ bits to Bob in $n$ uses of an additive white Gaussian noise (AWGN) channel while keeping ineffective any detector employed by the adversary; conversely, exceeding this limit either results in detection by the adversary with high probability or non-zero decoding error probability at Bob. This SRL holds under the assumption that the adversary knows when Alice transmits (i... 20. Adversarial Scheduling in Evolutionary Game Dynamics CERN Document Server Istrate, Gabriel; Ravi, S S 2008-01-01 Consider a system in which players at nodes of an underlying graph G repeatedly play Prisoner's Dilemma against their neighbors. The players adapt their strategies based on the past behavior of their opponents by applying the so-called win-stay lose-shift strategy. This dynamics has been studied in (Kittock 94), (Dyer et al. 2002), (Mossel and Roch, 2006).
With random scheduling, starting from any initial configuration, with high probability the system reaches the unique fixed point in which all players cooperate. This paper investigates the validity of this result under various classes of adversarial schedulers. Our results can be summarized as follows: 1. An adversarial scheduler that can select both participants in the game can preclude the system from reaching the unique fixed point on most graph topologies. 2. A nonadaptive scheduler that is only allowed to choose one of the participants is no more powerful than a random scheduler. With this restriction even an adaptive scheduler is not significantly more ... 1. Uses and Abuses of Adversary Evaluation: A Consumer's Guide. Science.gov (United States) Worthen, Blaine R.; Rogers, W. Todd The major potentials and pitfalls of adversary evaluation in education are discussed. Reasons why the courtroom model may have limited utility and the difficulties in the debate model are identified. It is argued that the existence of opposing viewpoints is the core of adversary evaluation, not adherence to existing formats for presenting… 2. Enemies in Agreement: Domestic Politics, Uncertainty, and Cooperation between Adversaries OpenAIRE Vaynman, Jane Eugenia 2014-01-01 Adversarial agreements, such as the nuclear weapons treaties, disarmament zones, or conventional weapons limitations, vary considerably in the information sharing provisions they include. This dissertation investigates why adversarial states sometimes choose to cooperate by creating restraining institutions, and how their choices for the form of that cooperation are constrained and motivated. I argue that uncertainties arising out of domestic political volatility, which includes leadershi... 3. Arguing with Adversaries: Aikido, Rhetoric, and the Art of Peace Science.gov (United States) Kroll, Barry M.
2008-01-01 The Japanese martial art of aikido affords a framework for understanding argument as harmonization rather than confrontation. Two movements, circling away ("tenkan") and entering in ("irimi"), suggest tactics for arguing with adversaries. The ethical imperative of aikido involves protecting one's adversary from harm, using the least force… 4. Understanding Sampling Style Adversarial Search Methods CERN Document Server Ramanujan, Raghuram; Selman, Bart 2012-01-01 UCT has recently emerged as an exciting new adversarial reasoning technique based on cleverly balancing exploration and exploitation in a Monte-Carlo sampling setting. It has been particularly successful in the game of Go but the reasons for its success are not well understood and attempts to replicate its success in other domains such as Chess have failed. We provide an in-depth analysis of the potential of UCT in domain-independent settings, in cases where heuristic values are available, and the effect of enhancing random playouts to more informed playouts between two weak minimax players. To provide further insights, we develop synthetic game tree instances and discuss interesting properties of UCT, both empirically and analytically. 5. Learning to Pivot with Adversarial Networks CERN Document Server Louppe, Gilles; Cranmer, Kyle 2016-01-01 Many inference problems involve data generation processes that are not uniquely specified or are uncertain in some way. In a scientific context, the presence of several plausible data generation processes is often associated with the presence of systematic uncertainties. Robust inference is possible if it is based on a pivot -- a quantity whose distribution is invariant to the unknown value of the (categorical or continuous) nuisance parameters that parametrize this family of generation processes. In this work, we introduce a flexible training procedure based on adversarial networks for enforcing the pivotal property on a predictive model.
We derive theoretical results showing that the proposed algorithm tends towards a minimax solution corresponding to a predictive model that is both optimal and independent of the nuisance parameters (if that model exists) or for which one can tune the trade-off between power and robustness. Finally, we demonstrate the effectiveness of this approach with a toy example and an... 6. Potential criminal adversaries of nuclear programs: a portrait Energy Technology Data Exchange (ETDEWEB) Jenkins, B.M. 1980-07-01 This paper examines the possibility that terrorists or other kinds of criminals might attempt to seize or sabotage a nuclear facility, steal nuclear material, or carry out other criminal activities in the nuclear domain, activities which have created special problems for the security of nuclear programs. This paper analyzes the potential threat. Our task was to describe the potential criminal adversary, or rather the spectrum of potential adversaries who conceivably might carry out malevolent criminal actions against nuclear programs and facilities. We were concerned with both the motivations and the material and operational capabilities likely to be displayed by various categories of potential nuclear adversaries. 7. Towards Deep Neural Network Architectures Robust to Adversarial Examples OpenAIRE Gu, Shixiang; Rigazio, Luca 2014-01-01 Recent work has shown deep neural networks (DNNs) to be highly susceptible to well-designed, small perturbations at the input layer, or so-called adversarial examples. Taking images as an example, such distortions are often imperceptible, but can result in 100% mis-classification for a state-of-the-art DNN. We study the structure of adversarial examples and explore network topology, pre-processing and training strategies to improve the robustness of DNNs. We perform various experiments to ass... OpenAIRE Ororbia II, Alexander G.; Giles, C.
Lee; Kifer, Daniel 2016-01-01 We present DataGrad, a general back-propagation style training procedure for deep neural architectures that uses regularization of a deep Jacobian-based penalty. It can be viewed as a deep extension of the layerwise contractive auto-encoder penalty. More importantly, it unifies previous proposals for adversarial training of deep neural nets -- this list includes directly modifying the gradient, training on a mix of original and adversarial examples, using contractive penalties, and approximat... 9. Methodology for characterizing potential adversaries of Nuclear Material Safeguards Systems International Nuclear Information System (INIS) The results are described of a study by Woodward-Clyde Consultants to assist the University of California Lawrence Livermore Laboratory in the development of methods to analyze and evaluate Nuclear Material Safeguards (NMS) Systems. The study concentrated on developing a methodology to assist experts in describing, in quantitative form, their judgments about the characteristics of potential adversaries of NMS Systems. 10. Methodology for characterizing potential adversaries of Nuclear Material Safeguards Systems Energy Technology Data Exchange (ETDEWEB) Kirkwood, C.W.; Pollock, S.M. 1978-11-01 The results are described of a study by Woodward-Clyde Consultants to assist the University of California Lawrence Livermore Laboratory in the development of methods to analyze and evaluate Nuclear Material Safeguards (NMS) Systems. The study concentrated on developing a methodology to assist experts in describing, in quantitative form, their judgments about the characteristics of potential adversaries of NMS Systems. 11.
Publishing Set-Valued Data Against Realistic Adversaries Institute of Scientific and Technical Information of China (English) Jun-Qiang Liu 2012-01-01 Privacy protection in publishing set-valued data is an important problem. However, privacy notions proposed in prior works either assume that the adversary has unbounded knowledge and hence provide over-protection that causes excessive distortion, or ignore the knowledge about the absence of certain items and do not prevent attacks based on such knowledge. To address these issues, we propose a new privacy notion, (k,e)(m,n)-privacy, which prevents both the identity disclosure and the sensitive item disclosure to a realistic privacy adversary who has bounded knowledge about the presence of items and bounded knowledge about the absence of items. In addition to the new notion, our contribution is an efficient algorithm that finds a near-optimal solution and is applicable for anonymizing real world databases. Extensive experiments on real world databases showed that our algorithm outperforms the state of the art algorithms. 12. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks OpenAIRE Radford, Alec; Metz, Luke; Chintala, Soumith 2015-01-01 In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they ar... KAUST Repository Alabdulmohsin, Ibrahim 2014-01-01 Many classification algorithms have been successfully deployed in security-sensitive applications including spam filters and intrusion detection systems.
Under such adversarial environments, adversaries can generate exploratory attacks against the defender such as evasion and reverse engineering. In this paper, we discuss why reverse engineering attacks can be carried out quite efficiently against fixed classifiers, and investigate the use of randomization as a suitable strategy for mitigating their risk. In particular, we derive a semidefinite programming (SDP) formulation for learning a distribution of classifiers subject to the constraint that any single classifier picked at random from such distribution provides reliable predictions with a high probability. We analyze the tradeoff between variance of the distribution and its predictive accuracy, and establish that one can almost always incorporate randomization with large variance without incurring a loss in accuracy. In other words, the conventional approach of using a fixed classifier in adversarial environments is generally Pareto suboptimal. Finally, we validate such conclusions on both synthetic and real-world classification problems. Copyright 2014 ACM. 14. Quantifying Adversary Capabilities to Inform Defensive Resource Allocation. Science.gov (United States) Wang, Chen; Bier, Vicki M 2016-04-01 We propose a Bayesian Stackelberg game capable of analyzing the joint effects of both attacker intent and capabilities on optimal defensive strategies. The novel feature of our model is the use of contest success functions from economics to capture the extent to which the success of an attack is attributable to the adversary's capability (as well as the level of defensive investment), rather than pure luck. Results of a two-target example suggest that precise assessment of attacker intent may not be necessary if we have poor estimates of attacker capability. 
PMID:25929274 Institute of Scientific and Technical Information of China (English) LONG Yu; LI Xiang-xue; CHEN Ke-fei; HONG Xuan 2009-01-01 This paper proposes an adaptively secure solution to certificateless distributed key encapsulation mechanism from pairings by using Canetti's adaptive secure key generation scheme based on discrete logarithm. The proposed scheme can withstand adaptive attackers that can choose players for corruption at any time during the run of the protocol, and this kind of attack is powerful and realistic. In contrast, all previously presented threshold certificateless public key cryptosystems are proven secure against the more idealized static adversaries only. They choose and fix the subset of target players before running the protocol. We also prove the security of this scheme in the random oracle model. 16. Probabilistic Characterization of Adversary Behavior in Cyber Security Energy Technology Data Exchange (ETDEWEB) Meyers, C A; Powers, S S; Faissol, D M 2009-10-08 The objective of this SMS effort is to provide a probabilistic characterization of adversary behavior in cyber security. This includes both quantitative (data analysis) and qualitative (literature review) components. A set of real LLNL email data was obtained for this study, consisting of several years' worth of unfiltered traffic sent to a selection of addresses at ciac.org. The email data was subjected to three interrelated analyses: a textual study of the header data and subject matter, an examination of threats present in message attachments, and a characterization of the maliciousness of embedded URLs. 17. Adversary modeling: an analysis of criminal activities analogous to potential threats to nuclear safeguard systems Energy Technology Data Exchange (ETDEWEB) Heineke, J.M. 1978-12-20 This study examines and analyzes several classes of incidents in which decision makers are confronted with adversaries.
The classes are analogous to adversaries in a material control system in a nuclear facility. Both internal threats (bank frauds and embezzlements) and external threats (aircraft hijackings and hostage-type terrorist events) were analyzed. (DLC) 18. Adversary modeling: an analysis of criminal activities analogous to potential threats to nuclear safeguard systems International Nuclear Information System (INIS) This study examines and analyzes several classes of incidents in which decision makers are confronted with adversaries. The classes are analogous to adversaries in a material control system in a nuclear facility. Both internal threats (bank frauds and embezzlements) and external threats (aircraft hijackings and hostage-type terrorist events) were analyzed. 19. On Breaching Enterprise Data Privacy Through Adversarial Information Fusion CERN Document Server Ganta, Srivatsava Ranjit 2008-01-01 Data privacy is one of the key challenges faced by enterprises today. Anonymization techniques address this problem by sanitizing sensitive data such that individual privacy is preserved while allowing enterprises to maintain and share sensitive data. However, existing work on this problem makes inherent assumptions about the data that are impractical in day-to-day enterprise data management scenarios. Further, application of existing anonymization schemes on enterprise data could lead to adversarial attacks in which an intruder could use information fusion techniques to inflict a privacy breach. In this paper, we shed light on the shortcomings of current anonymization schemes in the context of enterprise data. We define and experimentally demonstrate the Web-based Information-Fusion Attack on anonymized enterprise data. We formulate the problem of Fusion Resilient Enterprise Data Anonymization and propose a prototype solution to address this problem. 20.
Computationally Secure Pattern Matching in the Presence of Malicious Adversaries DEFF Research Database (Denmark) Hazay, Carmit; Toft, Tomas 2014-01-01 We propose a protocol for the problem of secure two-party pattern matching, where Alice holds a text t∈{0,1}∗ of length n, while Bob has a pattern p∈{0,1}∗ of length m. The goal is for Bob to (only) learn where his pattern occurs in Alice’s text, while Alice learns nothing. Private pattern matching … is an important problem that has many applications in the area of DNA search, computational biology and more. Our construction guarantees full simulation in the presence of malicious, polynomial-time adversaries (assuming the hardness of the DDH assumption) and exhibits computation and communication costs of O… for important variations of the secure pattern matching problem that are significantly more efficient than the current state-of-the-art solutions: First, we deal with secure pattern matching with wildcards. In this variant the pattern may contain wildcards that match both 0 and 1. Our protocol requires O... 1. Secrecy Is Cheap if the Adversary Must Reconstruct CERN Document Server Schieler, Curt 2012-01-01 A secret key can be used to conceal information from an eavesdropper during communication, as in Shannon's cipher system. Most theoretical guarantees of secrecy require the secret key space to grow exponentially with the length of communication. Here we show that when an eavesdropper attempts to reconstruct an information sequence, as posed in the literature by Yamamoto, very little secret key is required to effect unconditionally maximal distortion; specifically, we only need the secret key space to increase unboundedly, growing arbitrarily slowly with the blocklength. As a corollary, even with a secret key of constant size we can still drive the adversary's distortion arbitrarily close to maximal, regardless of the length of the information sequence.
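The record above contrasts the cheap-secrecy regime with Shannon's classical cipher system, in which perfect secrecy requires a uniformly random key as long as the message. As a point of reference for that classical baseline only (not the constructions analyzed in the paper), a one-time pad can be sketched in a few lines; the function names and the toy message are illustrative assumptions.

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each message byte with a key byte (Shannon's cipher system)."""
    assert len(key) >= len(message), "perfect secrecy needs a key at least as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption reuses the same operation.
    return otp_encrypt(ciphertext, key)

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # uniformly random, message-length key
ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```

The key space here grows exponentially with the message length, which is exactly the cost the abstract says can be avoided when the adversary's goal is reconstruction rather than detection.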
CERN Document Server Kim, MinJi; Barros, Joao 2008-01-01 Network coding increases throughput and is robust against failures and erasures. However, since it allows mixing of information within the network, a single corrupted packet generated by a Byzantine attacker can easily contaminate the information to multiple destinations. In this paper, we study the transmission overhead associated with detecting Byzantine adversaries at a trusted node using network coding. We consider three different schemes: end-to-end error correction, packet-based Byzantine detection scheme, and generation-based Byzantine detection scheme. In end-to-end error correction, it is known that we can correct up to the min-cut between the source and destinations. However, if we use Byzantine detection schemes, we can detect polluted data, drop them, and therefore, only transmit valid data. For the dropped data, the destinations perform erasure correction, which is computationally lighter than error correction. We show that, with enough attackers present in the network, Byzantine detection scheme... 3. 'Meatball searching' - The adversarial approach to online information retrieval Science.gov (United States) Jack, R. F. 1985-01-01 It is proposed that the different styles of online searching can be described as either formal (highly precise) or informal with the needs of the client dictating which is most applicable at a particular moment. The background and personality of the searcher also come into play. Particular attention is focused on meatball searching which is a form of online searching characterized by deliberate vagueness. It requires generally comprehensive searches, often on unusual topics and with tight deadlines. It is most likely to occur in search centers serving many different disciplines and levels of client information sophistication. Various information needs are outlined as well as the laws of meatball searching and the adversarial approach. 
Traits and characteristics important to successful searching include: (1) concept analysis, (2) flexibility of thinking, (3) ability to think in synonyms, and (4) anticipation of variant word forms and spellings. 4. Breaking the O(n^2) Bit Barrier: Scalable Byzantine agreement with an Adaptive Adversary CERN Document Server King, Valerie 2010-01-01 We describe an algorithm for Byzantine agreement that is scalable in the sense that each processor sends only $\tilde{O}(\sqrt{n})$ bits, where $n$ is the total number of processors. Our algorithm succeeds with high probability against an \emph{adaptive adversary}, which can take over processors at any time during the protocol, up to the point of taking over a fraction arbitrarily close to 1/3. We assume synchronous communication but a \emph{rushing} adversary. Moreover, our algorithm works in the presence of flooding: processors controlled by the adversary can send out any number of messages. We assume the existence of private channels between all pairs of processors but make no other cryptographic assumptions. Finally, our algorithm has latency that is polylogarithmic in $n$. To the best of our knowledge, ours is the first algorithm to solve Byzantine agreement against an adaptive adversary, while requiring $o(n^{2})$ total bits of communication. 5. Taxonomies of Cyber Adversaries and Attacks: A Survey of Incidents and Approaches Energy Technology Data Exchange (ETDEWEB) Meyers, C A; Powers, S S; Faissol, D M 2009-10-08 In this paper we construct taxonomies of cyber adversaries and methods of attack, drawing from a survey of the literature in the area of cyber crime. We begin by addressing the scope of cyber crime, noting its prevalence and effects on the US economy. We then survey the literature on cyber adversaries, presenting a taxonomy of the different types of adversaries and their corresponding methods, motivations, maliciousness, and skill levels.
Subsequently we survey the literature on cyber attacks, giving a taxonomy of the different classes of attacks, subtypes, and threat descriptions. The goal of this paper is to inform future studies of cyber security on the shape and characteristics of the risk space and its associated adversaries. 6. Protecting Clock Synchronization: Adversary Detection through Network Monitoring Directory of Open Access Journals (Sweden) Elena Lisova 2016-01-01 Full Text Available Nowadays, industrial networks are often used for safety-critical applications with real-time requirements. Such applications usually have a time-triggered nature with message scheduling as a core property. Scheduling requires nodes to share the same notion of time, that is, to be synchronized. Therefore, clock synchronization is a fundamental asset in real-time networks. However, since typical standards for clock synchronization, for example, IEEE 1588, do not provide the required level of security, it raises the question of clock synchronization protection. In this paper, we identify a way to break synchronization based on the IEEE 1588 standard, by conducting a man-in-the-middle (MIM) attack followed by a delay attack. A MIM attack can be accomplished through, for example, Address Resolution Protocol (ARP) poisoning. Using the AVISPA tool, we evaluate the potential to perform a delay attack using ARP poisoning and analyze its consequences, showing both that the attack can, indeed, break clock synchronization and that some design choices, such as a relaxed synchronization condition mode, delay bounding, and using knowledge of environmental conditions, can make the network more robust/resilient against these kinds of attacks. Lastly, a Configuration Agent is proposed to monitor and detect anomalies introduced by an adversary performing attacks targeting clock synchronization. 7.
A new queueing strategy for the Adversarial Queueing Theory CERN Document Server Hilker, Michael 2008-01-01 In today's Internet and TCP/IP-networks, the queueing of packets is commonly implemented using the protocol FIFO (First In First Out). Unfortunately, FIFO performs poorly in the Adversarial Queueing Theory. Other queueing strategies are researched in this model and better results are achieved by alternative queueing strategies, e.g. LIS (Longest In System). This article introduces a new queueing protocol called interval-strategy that is concerned with the reduction from dynamic to static routing. We discuss the maximum system time for a packet and estimate with up-to-date results how this can be achieved. We figure out the maximum amount of time that a packet can spend in the network (i.e. worst case system time), and argue that the universal instability of the presented interval-strategy can be reached through these results. When a large group of queueing strategies is used for queueing, we prove that the interval-strategy will be universally unstable. Finally, we calculate the maximum time of the stat... 8. The media and the military: Allies or adversaries? Directory of Open Access Journals (Sweden) Leopold Scholtz 2012-02-01 Full Text Available Military commanders like Alexander the Great or Richard the Lionheart did not have to take public opinion greatly into account when they planned their campaigns in their day. Today it is a very different situation. In the light of the above, this article starts with two somewhat startling quotes by the futurologists Alvin and Heidi Toffler: "The people thinking hardest about warfare in the future know that some of the most important combat of tomorrow will take place on the media battlefield."
They also state: “[T]he media, including channels and technologies unimagined today, will be a prime weapon for Third Wave combatants in both the wars and anti-wars of the future, a key component of knowledge strategy.” In recent years, much has been made of the adversarial relations between journalists and the military. The media have, for instance, been blamed for the US defeat in Vietnam, for unthinkingly blabbing about tactical decisions in advance in the Falklands, etc. From their side, journalists have been blaming the military for not trying to understand the nature of their job, and for covering up a number of bad things, etc. 9. Computationally Secure Pattern Matching in the Presence of Malicious Adversaries DEFF Research Database (Denmark) Hazay, Carmit; Toft, Tomas 2010-01-01 We propose a dedicated protocol for the highly motivated problem of secure two-party pattern matching: Alice holds a text t ∈ {0,1}* of length n, while Bob has a pattern p ∈ {0,1}* of length m. The goal is for Bob to learn where his pattern occurs in Alice's text. Our construction guarantees full … simulation in the presence of malicious, polynomial-time adversaries (assuming that ElGamal encryption is semantically secure) and exhibits computation and communication costs of O(n + m) in a constant round complexity. In addition to the above, we propose a collection of protocols for variations … of the secure pattern matching problem: The pattern may contain wildcards (O(nm) communication in O(1) rounds). The matches may be approximated, i.e., Hamming distance less than some threshold (O(nm) communication in O(1) rounds). The length, m, of Bob's pattern is secret (O(nm) communication in O(1) rounds... 10. Satisfiability-unsatisfiability transition in the adversarial satisfiability problem.
Science.gov (United States) Bardoscia, Marco; Nagaj, Daniel; Scardicchio, Antonello 2014-03-01 Adversarial satisfiability (AdSAT) is a generalization of the satisfiability (SAT) problem in which two players try to make a Boolean formula true (resp. false) by controlling their respective sets of variables. AdSAT belongs to a higher complexity class in the polynomial hierarchy than SAT, and therefore the nature of the critical region and the transition do not straightforwardly parallel those of SAT and are worthy of independent study. AdSAT also provides an upper bound for the transition threshold of the quantum satisfiability problem (QSAT). We present a complete algorithm for AdSAT, show that 2-AdSAT is in P, and then study two stochastic algorithms (simulated annealing and its improved variant) and compare their performances in detail for 3-AdSAT. Varying the density of clauses α, we claim that there is a sharp SAT-UNSAT transition at a critical value whose upper bound is αc ≲ 1.5, suggesting a much stricter upper bound for the QSAT transition than those previously found. PMID:24730811 11. Intelligent Online Path Planning for UAVs in Adversarial Environments Directory of Open Access Journals (Sweden) Xingguang Peng 2012-03-01 Full Text Available Online path planning (OPP) for unmanned aerial vehicles (UAVs) is a basic issue of intelligent flight and is indeed a dynamic multi-objective optimization problem (DMOP). In this paper, an OPP framework is proposed in the sense of model predictive control (MPC) to continuously update the environmental information for the planner. For solving the DMOP involved in the MPC, we propose a dynamic multi-objective evolutionary algorithm based on linkage and prediction (LP-DMOEA). Within this algorithm, the historical Pareto sets are collected and analysed to enhance performance. For intelligently selecting the best path from the output of the OPP, a Bayesian network and fuzzy logic are used to quantify the bias to each optimization objective.
The LP-DMOEA is validated on three benchmark problems characterized by different types of change in the decision and objective spaces. Moreover, the simulation results show that the LP-DMOEA outperforms the restart method for OPP. The decision-making method for solution selection can assess the situation in an adversarial environment and adapt the path planner accordingly. 12. Assessing and minimizing adversarial risk in a nuclear material transportation network OpenAIRE 2013-01-01 Approved for public release; distribution is unlimited This thesis develops a simple method for evaluating adversarial risk within the transportation portion of the nuclear fuel cycle for commercial electric power generation, and develops models that can guide the reduction of that risk by such means as rerouting and decoy shipments. A conceivable, worst-case attack by an intelligent adversary will cause a localized release of radioactive material. A damage function is defined using the po... 13. Application of adversarial risk analysis model in pricing strategies with remanufacturing OpenAIRE Liurui Deng; Bolin Ma 2015-01-01 Purpose: This paper focuses on the application of adversarial risk analysis (ARA) to pricing strategy with remanufacturing. We hope to obtain more realistic results than the classical model. Moreover, we also wish that our research improves the development of ARA in pricing strategies for manufacturing or remanufacturing. Approach: To obtain more realistic results, we combine adversarial risk analysis with an exploration of the pricing strategy with remanufacturing based on the Stackelberg model... 14. Data Injection Attacks on Smart Grids with Multiple Adversaries: A Game-Theoretic Perspective OpenAIRE 2016-01-01 Data injection attacks have recently emerged as a significant threat to the smart power grid. By launching data injection attacks, an adversary can manipulate the real-time locational marginal prices to obtain economic benefits.
Despite the surge of existing literature on data injection, most such works assume the presence of a single attacker and assume no cost for attack or defense. In contrast, in this paper, a model for data injection attacks with multiple adversaries and a single smart g... 15. Adversarial reasoning and resource allocation: the LG approach Science.gov (United States) Stilman, Boris; Yakhnis, Vladimir; Umanskiy, Oleg; Boyd, Ron 2005-05-01 Many existing automated tools purporting to model the intelligent enemy utilize a fixed battle plan for the enemy while using flexible decisions of human players for the friendly side. According to the Naval Studies Board, "It is an open secret and a point of distress ... that too much of the substantive content of such M&S has its origin in anecdote, ..., or a narrow construction tied to stereotypical current practices of 'doctrinally correct behavior.'" Clearly, such runs lack objectivity by being heavily skewed in favor of the friendly forces. Presently, the military branches employ a variety of game-based simulators and synthetic environments, with manual (i.e., user-based) decision-making, for training and other purposes. However, without an ability to automatically generate the best strategies, tactics, and courses of action (COA), the games serve mostly to display the current situation rather than form a basis for automated decision-making and effective training. We solve the problem of adversarial reasoning as a gaming problem, employing Linguistic Geometry (LG), a new type of game theory demonstrating a significant increase in the size of gaming problems solvable in real and near-real time. It appears to be a viable approach for solving such practical problems as mission planning and battle management. Essentially, LG may be structured into two layers: game construction and game solving. Game construction includes construction of a game called an LG hypergame based on a hierarchy of Abstract Board Games (ABG).
Game solving includes resource allocation for constructing an advantageous initial game state and strategy generation to reach a desirable final game state in the course of the game. 16. Malaria's contribution to World War One - the unexpected adversary. Science.gov (United States) Brabin, Bernard J 2014-12-16 Malaria in the First World War was an unexpected adversary. In 1914, the scientific community had access to new knowledge on transmission of malaria parasites and their control, but the military were unprepared, and underestimated the nature, magnitude and dispersion of this enemy. In summarizing available information for allied and axis military forces, this review contextualizes the challenge posed by malaria, because although data exist across historical, medical and military documents, descriptions are fragmented, often addressing context-specific issues. Military malaria surveillance statistics have, therefore, been summarized for all theatres of the War, where available. These indicated that at least 1.5 million soldiers were infected, with case fatality ranging from 0.2-5.0%. As more countries became engaged in the War, the problem grew in size, leading to major epidemics in Macedonia, Palestine, Mesopotamia and Italy. Trans-continental passages of parasites and human reservoirs of infection created ideal circumstances for parasite evolution. Details of these epidemics are reviewed, including major epidemics in England and Italy, which developed following home troop evacuations, and disruption of malaria control activities in Italy. Elsewhere, in sub-Saharan Africa many casualties resulted from high malaria exposure combined with minimal control efforts for soldiers considered semi-immune. Prevention activities eventually started but were initially poorly organized and dependent on local enthusiasm and initiative. Nets had to be designed for field use and were fundamental for personal protection.
Multiple prevention approaches adopted in different settings and their relative utility are described. Clinical treatment primarily depended on quinine, although efficacy was poor as relapsing Plasmodium vivax and recrudescent Plasmodium falciparum infections were not distinguished and managed appropriately. Reasons for this are discussed and the clinical trial data
18. Modeling adversarial intent for interactive simulation and gaming: the fused intent system Science.gov (United States) Santos, Eugene, Jr.; McQueary, Bruce; Krause, Lee 2008-04-01 Understanding the intent of today's enemy necessitates changes in intelligence collection, processing, and dissemination. Unlike Cold War antagonists, today's enemies operate in small, agile, and distributed cells whose tactics do not map well to established doctrine. This has necessitated a proliferation of advanced sensor and intelligence-gathering techniques at level 0 and level 1 of the Joint Directors of Laboratories fusion model. The challenge is in leveraging modeling and simulation to transform the vast amounts of level 0 and level 1 data into actionable intelligence at levels 2 and 3 that includes adversarial intent. Currently, warfighters are flooded with information (facts/observables) regarding what the enemy is presently doing, but are provided inadequate explanations of adversarial intent, and they cannot simulate 'what-if' scenarios to increase their predictive situational awareness. The Fused Intent System (FIS) aims to address these deficiencies by providing an environment that answers 'what' the adversary is doing, 'why' they are doing it, and 'how' they will react to coalition actions.
In this paper, we describe our approach to FIS, which includes adversarial 'soft factors' such as goals, rationale, and beliefs within a computational model that infers adversarial intent and allows the insertion of assumptions to be used in conjunction with the current battlefield state to perform what-if analysis. Our approach combines ontological modeling for classification with Bayesian-based abductive reasoning for explanation and has broad applicability to the operational, training, and commercial gaming domains. 19. Are Forensic Experts Already Biased before Adversarial Legal Parties Hire Them? Directory of Open Access Journals (Sweden) Tess M S Neal Full Text Available This survey of 206 forensic psychologists tested the "filtering" effects of preexisting expert attitudes in adversarial proceedings. Results confirmed the hypothesis that evaluator attitudes toward capital punishment influence willingness to accept capital case referrals from particular adversarial parties. Stronger death penalty opposition was associated with higher willingness to conduct evaluations for the defense and a higher likelihood of rejecting referrals from all sources. Conversely, stronger support was associated with higher willingness to be involved in capital cases generally, regardless of referral source. The findings raise the specter of skewed evaluator involvement in capital evaluations, where evaluators willing to do capital casework may have stronger capital punishment support than evaluators who opt out, and evaluators with strong opposition may work selectively for the defense. The results may provide a partial explanation for the "allegiance effect" in adversarial legal settings such that preexisting attitudes may contribute to partisan participation through a self-selection process. 20. Criminal defectors lead to the emergence of cooperation in an experimental, adversarial game.
Directory of Open Access Journals (Sweden) Maria R D'Orsogna Full Text Available While the evolution of cooperation has been widely studied, little attention has been devoted to adversarial settings wherein one actor can directly harm another. Recent theoretical work addresses this issue, introducing an adversarial game in which the emergence of cooperation is heavily reliant on the presence of "Informants," actors who defect at first order by harming others, but who cooperate at second order by punishing other defectors. We experimentally study this adversarial environment in the laboratory with human subjects to test whether Informants are indeed critical for the emergence of cooperation. We find in these experiments that, even more so than predicted by theory, Informants are crucial for the emergence and sustenance of a high-cooperation state. A key lesson is that successfully reaching and maintaining a low-defection society may require the cultivation of criminals who will also aid in the punishment of others. 1. Stability of the Max-Weight Protocol in Adversarial Wireless Networks CERN Document Server Lim, Sungsu; Andrews, Matthew 2012-01-01 In this paper we consider the Max-Weight protocol for routing and scheduling in wireless networks under an adversarial model. This protocol has received a significant amount of attention dating back to the papers of Tassiulas and Ephremides. In particular, this protocol is known to be throughput-optimal whenever the traffic patterns and propagation conditions are governed by a stationary stochastic process. However, the standard proof of throughput optimality (which is based on the negative drift of a quadratic potential function) does not hold when the traffic patterns and the edge-capacity changes over time are governed by an arbitrary adversarial process.
Such an environment appears frequently in many practical wireless scenarios when the assumption that channel conditions are governed by a stationary stochastic process does not readily apply. In this paper we prove that even in the above adversarial setting, the Max-Weight protocol keeps the queues in the network stable (i.e. keeps the queue sizes bounded...
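The Max-Weight rule analyzed in the Lim-Andrews entry has a compact operational form: in every time slot, activate the feasible set of links that maximizes the sum of queue backlog times current service rate. A minimal sketch, where the queue lengths, rates, and feasible activation sets are invented purely for illustration:

```python
def max_weight_schedule(backlog, rate, feasible_sets):
    # Max-Weight rule: pick the feasible activation set S that
    # maximizes sum over links l in S of backlog[l] * rate[l].
    return max(feasible_sets, key=lambda s: sum(backlog[l] * rate[l] for l in s))

# Toy instance: three links; interference allows either {a, b} or {c} to transmit.
backlog = {"a": 5, "b": 2, "c": 4}    # current queue lengths (packets)
rate = {"a": 1, "b": 3, "c": 2}       # current link rates (packets/slot)
feasible_sets = [("a", "b"), ("c",)]

print(max_weight_schedule(backlog, rate, feasible_sets))  # → ('a', 'b'), weight 11 vs 8
```

Under stationary stochastic arrivals this greedy rule is throughput-optimal; the entry's contribution is that queue stability persists even when an adversary, rather than a stationary process, drives the arrivals and link capacities.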
4. Extended defense systems: I. Adversary-defender modeling grammar for vulnerability analysis and threat assessment. Energy Technology Data Exchange (ETDEWEB) Merkle, Peter Benedict 2006-03-01 5. Physical attributes of potential adversaries to U.S. nuclear programs International Nuclear Information System (INIS) Research and development of physical protection elements and systems applicable to the protection of nuclear facilities and materials includes the characterization of potential threats to U.S. nuclear programs. RAND Corp. has investigated several hundred incidents which involved activities of a type which can serve as analogs of potential threats to U.S. nuclear programs. This paper summarizes the data used by RAND and provides a listing of potential adversary attributes derived from a historical-incident database. The attributes are expressed in terms of the physical capabilities of a composite adversary group. 6.
Constructing Learning: Adversarial and Collaborative Working in the British Construction Industry Science.gov (United States) Bishop, Dan; Felstead, Alan; Fuller, Alison; Jewson, Nick; Unwin, Lorna; Kakavelakis, Konstantinos 2009-01-01 This paper examines two competing systems of work organisation in the British construction industry and their consequences for learning. Under the traditional "adversarial" system, conflict, hostility and litigation between contractors are commonplace. Such a climate actively militates against collective learning and knowledge sharing between… 7. Secure Two-Party Quantum Evaluation of Unitaries against Specious Adversaries DEFF Research Database (Denmark) Dupuis, Frédéric; Nielsen, Jesper Buus; Salvail, Louis 2010-01-01 We describe how any two-party quantum computation, specified by a unitary which simultaneously acts on the registers of both parties, can be privately implemented against a quantum version of classical semi-honest adversaries that we call specious. Our construction requires two ideal functionalit... 8. Procedural Justice in Family Court: Does the Adversary Model Make Sense? Science.gov (United States) Melton, Gary B.; Lind, E. Allan 1982-01-01 Reviews research and theory on procedural justice concerning family disputes, and discusses existing proposals for reform of family court procedures. Holds that adversary proceedings in custody disputes may be more beneficial to older children and disputing parents than nonadversary procedures. Identifies areas for needed research in procedural… 9. Managing Quality, Identity and Adversaries in Public Discourse with Machine Learning Science.gov (United States) Brennan, Michael 2012-01-01 Automation can mitigate issues when scaling and managing quality and identity in public discourse on the web. Discourse needs to be curated and filtered. Anonymous speech has to be supported while handling adversaries. 
Reliance on human curators or analysts does not scale and content can be missed. These scaling and management issues include the… 10. Bring a gun to a gunfight: armed adversaries and violence across nations. Science.gov (United States) Felson, Richard B; Berg, Mark T; Rogers, Meghan L 2014-09-01 We use homicide data and the International Crime Victimization Survey to examine the role of firearms in explaining cross-national variation in violence. We suggest that while gun violence begets gun violence, it inhibits the tendency to engage in violence without guns. We attribute the patterns to adversary effects, i.e., the tendency of offenders to take into account the threat posed by their adversaries. Multi-level analyses of victimization data support the hypothesis that living in countries with high rates of gun violence lowers an individual's risk of an unarmed assault and of assaults with less lethal weapons. Analyses of aggregate data show that homicide rates and gun violence rates load on a separate underlying factor from other types of violence. The results suggest that a country's homicide rate reflects, to a large extent, the tendency of its offenders to use firearms. 11. With God on our side: Religious primes reduce the envisioned physical formidability of a menacing adversary. Science.gov (United States) Holbrook, Colin; Fessler, Daniel M T; Pollack, Jeremy 2016-01-01 The imagined support of benevolent supernatural agents attenuates anxiety and risk perception. Here, we extend these findings to judgments of the threat posed by a potentially violent adversary. Conceptual representations of bodily size and strength summarize factors that determine the relative threat posed by foes. The proximity of allies moderates the envisioned physical formidability of adversaries, suggesting that cues of access to supernatural allies will reduce the envisioned physical formidability of a threatening target.
Across two studies, subtle cues of both supernatural and earthly social support reduced the envisioned physical formidability of a violent criminal. These manipulations had no effect on the perceived likelihood of encountering non-conflictual physical danger, raising the possibility that imagined supernatural support leads participants to view themselves not as shielded from encountering perilous situations, but as protected should perils arise. PMID:26524139 13. Intrinsic asymmetry with respect to adversary: a new feature of Bell inequalities International Nuclear Information System (INIS) 14.
Material control study: a directed graph and fault tree procedure for adversary event set generation International Nuclear Information System (INIS) In work for the United States Nuclear Regulatory Commission, Lawrence Livermore Laboratory is developing an assessment procedure to evaluate the effectiveness of a potential nuclear facility licensee's material control (MC) system. The purpose of an MC system is to prevent the theft of special nuclear material such as plutonium and highly enriched uranium. The key in the assessment procedure is the generation and analysis of the adversary event sets by a directed graph and fault-tree methodology. 15. A Graphical Adversarial Risk Analysis Model for Oil and Gas Drilling Cybersecurity OpenAIRE Vieira, Aitor Couce; Houmb, Siv Hilde; Insua, David Rios 2014-01-01 Oil and gas drilling is based, increasingly, on operational technology, whose cybersecurity is complicated by several challenges. We propose a graphical model for cybersecurity risk assessment based on Adversarial Risk Analysis to face those challenges. We also provide an example of the model in the context of an offshore drilling rig. The proposed model provides a more formal and comprehensive analysis of risks, while still using standard business language based on decisions, risks, and value. 16. Coalition-based Planning of Military Operations: Adversarial Reasoning Algorithms in an Integrated Decision Aid OpenAIRE Ground, Larry; Kott, Alexander; Budd, Ray 2016-01-01 Use of knowledge-based planning tools can help alleviate the challenges of planning a complex operation by a coalition of diverse parties in an adversarial environment. We explore these challenges and the potential contributions of knowledge-based tools using as an example the CADET system, a knowledge-based tool capable of producing automatically (or with human guidance) battle plans with a realistic degree of detail and complexity. In ongoing experiments, it compared favorably with human planners... 17.
Cooperation and punishment in an adversarial game: How defectors pave the way to a peaceful society Science.gov (United States) Short, M. B.; Brantingham, P. J.; D'Orsogna, M. R. 2010-12-01 The evolution of human cooperation has been the subject of much research, especially within the framework of evolutionary public goods games, where several mechanisms have been proposed to account for persistent cooperation. Yet, in addressing this issue, little attention has been given to games of a more adversarial nature, in which defecting players, rather than simply free riding, actively seek to harm others. Here, we develop an adversarial evolutionary game using the specific example of criminal activity, recasting the familiar public goods strategies of punishers, cooperators, and defectors in this light. We then introduce a strategy, the informant, with no clear analog in public goods games and show that individuals employing this strategy are a key to the emergence of systems where cooperation dominates. We also find that a defection-dominated regime may be transitioned to one that is cooperation-dominated by converting an optimal number of players into informants. We discuss these findings, the role of informants, and possible intervention strategies in extreme adversarial societies, such as those marred by wars and insurgencies. 18. Source Anonymity in WSNs against Global Adversary Utilizing Low Transmission Rates with Delay Constraints. Science.gov (United States) Bushnag, Anas; Abuzneid, Abdelshakour; Mahmood, Ausif 2016-01-01 Wireless sensor networks (WSNs) are deployed for many applications, such as tracking and monitoring of endangered species, military applications, etc., which require anonymity of the origin, known as Source Location Privacy (SLP). The aim of SLP is to prevent unauthorized observers from tracing the source of a real event by analyzing the traffic in the network.
Previous approaches to SLP, such as the Fortified Anonymous Communication Protocol (FACP), employ transmission of real or fake packets in every time slot, which is inefficient. To overcome this shortcoming, we developed three different techniques presented in this paper. Dummy Uniform Distribution (DUD), Dummy Adaptive Distribution (DAD) and Controlled Dummy Adaptive Distribution (CAD) were developed to overcome the anonymity problem against a global adversary (which has the capability of analyzing and monitoring the entire network). Most of the current techniques try to prevent the adversary from perceiving the location and time of the real event, whereas our proposed techniques confuse the adversary about the existence of the real event by introducing low-rate fake messages, which subsequently lead to location and time privacy. Simulation results demonstrate that the proposed techniques provide a reasonable delivery ratio, delay, and overhead of a real event's packets while keeping a high level of anonymity. Three different analysis models are conducted to verify the performance of our techniques. A visualization of the simulation data is performed to confirm anonymity. Further, neural network models are developed to ensure that the introduced techniques preserve SLP. Finally, a steganography model based on probability is implemented to prove the anonymity of the techniques. PMID:27355948 19. Semantic policy and adversarial modeling for cyber threat identification and avoidance Science.gov (United States) DeFrancesco, Anton; McQueary, Bruce 2009-05-01 Today's enterprise networks undergo a relentless barrage of attacks from foreign and domestic adversaries. These attacks may be perpetrated with little to no funding, but may wreak incalculable damage upon the enterprise's security, network infrastructure, and services.
As more services come online, systems that were once in isolation now provide information that may be combined dynamically with information from other systems to create new meaning on the fly. Security issues are compounded by the potential to aggregate individual pieces of information and infer knowledge at a higher classification than any of its constituent parts. To help alleviate these challenges, in this paper we introduce the notion of semantic policy and discuss how its use is evolving from a robust approach to access control to preempting and combating attacks in the cyber domain. The introduction of semantic policy and adversarial modeling to network security aims to ask 'where is the network most vulnerable', 'how is the network being attacked', and 'why is the network being attacked'. The first aspect of our approach is the integration of semantic policy into enterprise security to augment traditional network security with an overall awareness of policy access and violations. This awareness allows the semantic policy to look at the big picture - analyzing trends and identifying critical relations in system-wide data access. The second aspect of our approach is to couple adversarial modeling with semantic policy to move beyond reactive security measures and into proactive identification of system weaknesses and areas of vulnerability. By utilizing Bayesian-based methodologies, the enterprise-wide meaning of data and semantic policy is applied to probability and high-level risk identification. This risk identification will help mitigate potential harm to enterprise networks by enabling resources to proactively isolate, lock down, and secure systems that are most vulnerable.
Source Anonymity in WSNs against Global Adversary Utilizing Low Transmission Rates with Delay Constraints Directory of Open Access Journals (Sweden) Anas Bushnag 2016-06-01 Full Text Available Wireless sensor networks (WSN) are deployed for many applications such as tracking and monitoring of endangered species, military applications, etc. which require anonymity of the origin, known as Source Location Privacy (SLP). The aim in SLP is to prevent unauthorized observers from tracing the source of a real event by analyzing the traffic in the network. Previous approaches to SLP such as Fortified Anonymous Communication Protocol (FACP) employ transmission of real or fake packets in every time slot, which is inefficient. To overcome this shortcoming, we developed three different techniques presented in this paper. Dummy Uniform Distribution (DUD), Dummy Adaptive Distribution (DAD) and Controlled Dummy Adaptive Distribution (CAD) were developed to overcome the anonymity problem against a global adversary (which has the capability of analyzing and monitoring the entire network). Most of the current techniques try to prevent the adversary from perceiving the location and time of the real event whereas our proposed techniques confuse the adversary about the existence of the real event by introducing low rate fake messages, which subsequently lead to location and time privacy. Simulation results demonstrate that the proposed techniques provide reasonable delivery ratio, delay, and overhead of a real event's packets while keeping a high level of anonymity. Three different analysis models are conducted to verify the performance of our techniques. A visualization of the simulation data is performed to confirm anonymity. Further, neural network models are developed to ensure that the introduced techniques preserve SLP. Finally, a steganography model based on probability is implemented to prove the anonymity of the techniques. 1.
Source Anonymity in WSNs against Global Adversary Utilizing Low Transmission Rates with Delay Constraints. Science.gov (United States) Bushnag, Anas; Abuzneid, Abdelshakour; Mahmood, Ausif 2016-01-01 Wireless sensor networks (WSN) are deployed for many applications such as tracking and monitoring of endangered species, military applications, etc. which require anonymity of the origin, known as Source Location Privacy (SLP). The aim in SLP is to prevent unauthorized observers from tracing the source of a real event by analyzing the traffic in the network. Previous approaches to SLP such as Fortified Anonymous Communication Protocol (FACP) employ transmission of real or fake packets in every time slot, which is inefficient. To overcome this shortcoming, we developed three different techniques presented in this paper. Dummy Uniform Distribution (DUD), Dummy Adaptive Distribution (DAD) and Controlled Dummy Adaptive Distribution (CAD) were developed to overcome the anonymity problem against a global adversary (which has the capability of analyzing and monitoring the entire network). Most of the current techniques try to prevent the adversary from perceiving the location and time of the real event whereas our proposed techniques confuse the adversary about the existence of the real event by introducing low rate fake messages, which subsequently lead to location and time privacy. Simulation results demonstrate that the proposed techniques provide reasonable delivery ratio, delay, and overhead of a real event's packets while keeping a high level of anonymity. Three different analysis models are conducted to verify the performance of our techniques. A visualization of the simulation data is performed to confirm anonymity. Further, neural network models are developed to ensure that the introduced techniques preserve SLP. Finally, a steganography model based on probability is implemented to prove the anonymity of the techniques. 
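The dummy-traffic idea behind DUD/DAD/CAD can be illustrated with a small sketch. The following is purely illustrative; the function name, slot model, and `dummy_rate` parameter are assumptions of this note, not the authors' protocol. A node that injects fake packets at uniformly distributed times hides which transmissions carry a real event from a global observer:

```python
import random

def transmission_schedule(num_slots, real_event_slots, dummy_rate, seed=0):
    """Return the set of time slots in which the node transmits.

    Real events are always sent; every other slot carries a dummy packet
    with probability `dummy_rate`, uniformly over time, so an observer of
    the radio channel cannot tell real transmissions from fake ones.
    """
    rng = random.Random(seed)
    sends = set(real_event_slots)
    for slot in range(num_slots):
        if slot not in sends and rng.random() < dummy_rate:
            sends.add(slot)
    return sends

# Two real events buried among roughly 5% dummy traffic.
schedule = transmission_schedule(1000, real_event_slots={137, 612}, dummy_rate=0.05)
print(len(schedule))
```

Lowering `dummy_rate` trades anonymity for overhead, which is exactly the delivery-ratio/delay/overhead tension the abstract's simulation results address.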
DEFF Research Database (Denmark) He, Kai 2012-01-01 This paper engages the ongoing soft balancing debate by suggesting a new analytical framework for states’ countervailing strategies—a negative balancing model—to explain why states do not form alliances and conduct arms races to balance against power or threats as they previously did. Negative balancing refers to a state's strategies or diplomatic efforts aiming to undermine a rival's power. By contrast, positive balancing means to strengthen a state's own power in world politics. I argue that a state's balancing strategies are shaped by the level of threat perception regarding its rival. The relatively low-threat propensity of the system renders positive balancing strategies incompatible with state interests after the Cold War. Instead, states have employed various negative balancing strategies to undermine each other's power, especially when dealing with US primacy. China... 3. RETHINKING THE ROLE OF SMALL-GROUP COLLABORATORS AND ADVERSARIES IN THE LONDON KLEINIAN DEVELOPMENT (1914-1968). Science.gov (United States) Aguayo, Joseph; Regeczkey, Agnes 2016-07-01 The authors historically situate the London Kleinian development in terms of the small-group collaborations and adversaries that arose during the course of Melanie Klein's career. Some collaborations later became personally adversarial (e.g., those Klein had with Glover and Schmideberg); other adversarial relationships forever remained that way (with A. Freud); while still other long-term collaborations became theoretically contentious (such as with Winnicott and Heimann). After the Controversial Discussions in 1944, Klein marginalized one group of supporters (Heimann, Winnicott, and Riviere) in favor of another group (Rosenfeld, Segal, and Bion). After Klein's death in 1960, Bion maintained loyalty to Klein's ideas while quietly distancing his work from the London Klein group, immigrating to the United States in 1968. PMID:27428585 4.
Evaluation of risk from acts of terrorism: the adversary/defender model using belief and fuzzy sets. Energy Technology Data Exchange (ETDEWEB) Darby, John L. 2006-09-01 Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using an adversary/defender model with belief/plausibility as the measure of uncertainty. Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base. 5. RETHINKING THE ROLE OF SMALL-GROUP COLLABORATORS AND ADVERSARIES IN THE LONDON KLEINIAN DEVELOPMENT (1914-1968). Science.gov (United States) Aguayo, Joseph; Regeczkey, Agnes 2016-07-01 The authors historically situate the London Kleinian development in terms of the small-group collaborations and adversaries that arose during the course of Melanie Klein's career. Some collaborations later became personally adversarial (e.g., those Klein had with Glover and Schmideberg); other adversarial relationships forever remained that way (with A. Freud); while still other long-term collaborations became theoretically contentious (such as with Winnicott and Heimann). After the Controversial Discussions in 1944, Klein marginalized one group of supporters (Heimann, Winnicott, and Riviere) in favor of another group (Rosenfeld, Segal, and Bion).
After Klein's death in 1960, Bion maintained loyalty to Klein's ideas while quietly distancing his work from the London Klein group, immigrating to the United States in 1968. 6. Are the advocates of nuclear power and the adversaries listening to each other? International Nuclear Information System (INIS) It is obvious that one cannot answer the question in the title with a simple 'yes' or 'no'. While the nuclear advocates globally seem to share the same point of view and a homogeneous argumentation, the same cannot be said of the opponents of nuclear energy. We can classify these adversaries into four categories according to the nature of their opposition: ideological, economical, political (which includes the ideological), and mystical. In reality, these four types of opposition are not equally represented in France. From 1974 to the present moment, the EDF has tried to have a dialogue with them. Various results were achieved with the ecologists, 'economical opponents' and 'political adversaries'. There was no dialogue with the 'mystical opponents', for a very simple reason: 'nuclear people' are the Devil himself, and they did not wish to have anything to do with him. There can be no end to the discussion about the sex of angels. To conclude, it is believed that there has been a discussion in France. It did not lead to any sort of complete consensus, but there are some true positive results. One example: a well-known opponent of nuclear energy in the seventies, then President of 'The Friends of the Earth', is now French 'Environment Vice-Minister', and he considers that, among the energy industries, nuclear energy is without doubt the least polluting. 7. Application of adversarial risk analysis model in pricing strategies with remanufacturing Directory of Open Access Journals (Sweden) Liurui Deng 2015-01-01 Full Text Available Purpose: This paper mainly focuses on the application of adversarial risk analysis (ARA) in pricing strategy with remanufacturing.
We hope to obtain more realistic results than the classical model. Moreover, we also wish that our research improves the development of ARA in the pricing strategy of manufacturing or remanufacturing. Approach: In order to obtain more realistic results, we combine adversarial risk analysis and explore the pricing strategy with remanufacturing based on the Stackelberg model. In particular, we build the OEM's 1-order ARA model and further study manufacturers' and remanufacturers' pricing strategies. Findings: We derive the OEM's 1-order ARA model for the OEM's product cost C, as well as the corresponding manufacturers' and remanufacturers' pricing strategies. Moreover, the pricing strategies based on the 1-order ARA model have an advantage over the classical model for both OEMs and remanufacturers. Research implications: The research on the application of ARA implies that we can obtain more realistic results with this kind of modern risk analysis method, and that ARA can be applied extensively in the pricing strategies of supply chains. Value: Our research improves the application of ARA in the remanufacturing industry. Meanwhile, inspired by this analysis, we can also create different ARA models for different parameters. Furthermore, some results and analysis methods can be applied to other pricing strategies of the supply chain. 8. Arlen Spector and the Construction of Adversarial Discourse: Selective Representation in the Clarence Thomas-Anita Hill Hearings. Science.gov (United States) Armstrong, S. Ashley 1995-01-01 Reports on Senator Arlen Spector's interview of Anita Hill during hearings on Supreme Court nominee Clarence Thomas and her allegations of sexual harassment. Examines the social structures and argumentative strategies Spector invoked to place Hill in a position of "powerlessness." Argues that the key resource contributing to the adversarial nature… 9. Institutionalizing dissent: a proposal for an adversarial system of pharmaceutical research.
Science.gov (United States) Biddle, Justin 2013-12-01 There are serious problems with the way in which pharmaceutical research is currently practiced, many of which can be traced to the influence of commercial interests on research. One of the most significant is inadequate dissent, or organized skepticism. In order to ameliorate this problem, I develop a proposal that I call the "Adversarial Proceedings for the Evaluation of Pharmaceuticals," to be instituted within a regulatory agency such as the Food and Drug Administration for the evaluation of controversial new drugs and controversial drugs already in the market. This proposal is an organizational one based upon the "science court" proposal by Arthur Kantrowitz in the 1960s and 1970s. The primary benefit of this system is its ability to institutionalize dissent, thereby ensuring that one set of interests does not dominate all others. PMID:24552075 10. Institutionalizing dissent: a proposal for an adversarial system of pharmaceutical research. Science.gov (United States) Biddle, Justin 2013-12-01 There are serious problems with the way in which pharmaceutical research is currently practiced, many of which can be traced to the influence of commercial interests on research. One of the most significant is inadequate dissent, or organized skepticism. In order to ameliorate this problem, I develop a proposal that I call the "Adversarial Proceedings for the Evaluation of Pharmaceuticals," to be instituted within a regulatory agency such as the Food and Drug Administration for the evaluation of controversial new drugs and controversial drugs already in the market. This proposal is an organizational one based upon the "science court" proposal by Arthur Kantrowitz in the 1960s and 1970s. The primary benefit of this system is its ability to institutionalize dissent, thereby ensuring that one set of interests does not dominate all others. 11. 
A 2D chaotic path planning for mobile robots accomplishing boundary surveillance missions in adversarial conditions Science.gov (United States) Curiac, Daniel-Ioan; Volosencu, Constantin 2014-10-01 The path-planning algorithm represents a crucial issue for every autonomous mobile robot. In normal circumstances a patrol robot will compute an optimal path to ensure its task accomplishment, but in adversarial conditions the problem becomes more complicated. Here, the robot’s trajectory needs to be altered into a misleading and unpredictable path to cope with potential opponents. Chaotic systems provide the needed framework for obtaining unpredictable motion in all of the three basic robot surveillance missions: area, points of interest and boundary monitoring. Proficient approaches have been provided for the first two surveillance tasks, but for boundary patrol missions no method has been reported yet. This paper addresses the mentioned research gap by proposing an efficient method, based on the chaotic dynamics of the Hénon system, to ensure unpredictable boundary patrol on any shape of chosen closed contour. 12. Imparting protean behavior to mobile robots accomplishing patrolling tasks in the presence of adversaries. Science.gov (United States) Curiac, Daniel-Ioan; Volosencu, Constantin 2015-10-08 Providing unpredictable trajectories for patrol robots is essential when coping with adversaries. In order to solve this problem we developed an effective approach based on the known protean behavior of individual prey animals: random zig-zag movement. The proposed bio-inspired method modifies the normal robot's path by incorporating sudden and irregular direction changes without jeopardizing the robot's mission. Such a tactic aims to confuse the enemy (e.g. a sniper), offering less time to acquire and retain sight alignment and sight picture.
This idea is implemented by simulating a series of fictive-temporary obstacles that will randomly appear in the robot's field of view, deceiving the obstacle-avoiding mechanism into reacting. The new general methodology is particularized by using Arnold's cat map to obtain the timely random appearance and disappearance of the fictive obstacles. The viability of the proposed method is confirmed through an extensive simulation case study. 13. An Efficient Encryption Algorithm for P2P Networks Robust Against Man-in-the-Middle Adversary Directory of Open Access Journals (Sweden) Roohallah Rastaghi 2012-11-01 Full Text Available Peer-to-peer (P2P) networks have become popular as a new paradigm for information exchange and are being used in many applications such as file sharing, distributed computing, video conference, VoIP, radio and TV broadcasting. This popularity comes with security implications and vulnerabilities that need to be addressed. Especially due to direct communication between two end nodes in P2P networks, these networks are potentially vulnerable to Man-in-the-Middle attacks. In this paper, we propose a new public-key cryptosystem for P2P networks that is robust against a Man-in-the-Middle adversary. This cryptosystem is based on RSA and knapsack problems. Our precoding-based algorithm uses the knapsack problem for performing permutation and padding random data to the message. We show that compared to other proposed cryptosystems, our algorithm is more efficient and it is fully secure against an active adversary. 14. Decision Aids for Adversarial Planning in Military Operations: Algorithms, Tools, and Turing-test-like Experimental Validation OpenAIRE Kott, Alexander; Budd, Ray; Ground, Larry; Rebbapragada, Lakshmi; Langston, John 2016-01-01 Use of intelligent decision aids can help alleviate the challenges of planning complex operations.
We describe integrated algorithms, and a tool capable of translating a high-level concept for a tactical military operation into a fully detailed, actionable plan, producing automatically (or with human guidance) plans with a realistic degree of detail and of human-like quality. Tight interleaving of several algorithms -- planning, adversary estimates, scheduling, routing, attrition and consumptio... 15. Breaking the O(nm) Bit Barrier: Secure Multiparty Computation with a Static Adversary CERN Document Server Dani, Varsha; Movahedi, Mahnush; Saia, Jared 2012-01-01 We describe scalable algorithms for secure multiparty computation (SMPC). We assume a synchronous message passing communication model, but unlike most related work, we do not assume the existence of a broadcast channel. Our main result holds for the case where there are n players, of which a (1/3-\epsilon)-fraction are controlled by an adversary, for any positive constant \epsilon. We describe an SMPC algorithm for this model that requires each player to send O((n+m)/n + \sqrt{n}) (where the O notation hides polylogarithmic factors) messages and perform O((n+m)/n + \sqrt{n}) computations to compute any function f, where m is the size of a circuit to compute f. We also consider a model where all players are selfish but rational. In this model, we describe a Nash equilibrium protocol that solves SMPC and requires each player to send O((n+m)/n) messages and perform O((n+m)/n) computations. These results significantly improve over past results for SMPC which require each player to send a number of bits and perform... 16.
Sistema penal acusatorio en Veracruz/Adversarial criminal system in Veracruz Directory of Open Access Journals (Sweden) Jorge Alberto Pérez Tolentino (México) 2014-01-01 Full Text Available El estudio y comprensión del nuevo Código de Procedimientos Penales de Veracruz resulta ineludible, en virtud de las nítidas diferencias existentes entre las figuras jurídicas que contiene el actual ordenamiento, en comparación con el anterior. Es preciso sistematizar, describir y analizar la estructura del sistema penal acusatorio, a efecto de estar en condiciones de evaluar y, en su caso, proponer las mejoras al sistema en cuestión. El contenido esquemático y sustancial del código, la visión y recepción que del mismo tienen los operadores jurídicos y la sociedad en general, son aspectos que cubre el presente documento. The study and understanding of the new Code of Criminal Procedure of Veracruz is unavoidable, by reason of the sharp differences between the legal concepts contained in the current order compared with the previous one. It is necessary to systematize, describe and analyze the structure of the adversarial criminal system, in order to be able to evaluate and, if necessary, propose improvements to the system in question. The schematic and substantial content of the code, and the vision and reception that legal practitioners and society in general have of it, are aspects covered herein. 17. Secure two-party quantum evaluation of unitaries against specious adversaries CERN Document Server Dupuis, Frédéric; Salvail, Louis 2010-01-01 We describe how any two-party quantum computation, specified by a unitary which simultaneously acts on the registers of both parties, can be privately implemented against a quantum version of classical semi-honest adversaries that we call specious. Our construction requires two ideal functionalities to guarantee privacy: a private SWAP between registers held by the two parties and a classical private AND-box equivalent to oblivious transfer.
If the unitary to be evaluated is in the Clifford group then only one call to SWAP is required for privacy. On the other hand, any unitary not in the Clifford group requires one call to an AND-box per R-gate in the circuit. Since SWAP is itself in the Clifford group, this functionality is universal for the private evaluation of any unitary in that group. SWAP can be built from a classical bit commitment scheme or an AND-box but an AND-box cannot be constructed from SWAP. It follows that unitaries in the Clifford group are to some extent the easy ones. We also show that SWAP cann... 18. Adversarial intent modeling using embedded simulation and temporal Bayesian knowledge bases Science.gov (United States) Pioch, Nicholas J.; Melhuish, James; Seidel, Andy; Santos, Eugene, Jr.; Li, Deqing; Gorniak, Mark 2009-05-01 To foster shared battlespace awareness among air strategy planners, BAE Systems has developed the Commander's Model Integration and Simulation Toolkit (CMIST), an Integrated Development Environment for authoring, integration, validation, and debugging of models relating multiple domains, including political, military, social, economic and information. CMIST provides a unified graphical user interface for such systems of systems modeling, spanning several disparate modeling paradigms. Here, we briefly review the CMIST architecture and then compare modeling results using two approaches to intent modeling. The first uses reactive agents with simplified behavior models that apply rule-based triggers to initiate actions based solely on observations of the external world at the current time in the simulation. The second method models proactive agents running an embedded CMIST simulation representing their projection of how events may unfold in the future in order to take early preventative action.
Finally, we discuss a recent extension to CMIST that incorporates Temporal Bayesian Knowledge Bases for more sophisticated models of adversarial intent that are capable of inferring goals and future actions given evidence of current actions at particular times. 19. Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS Implementations. Science.gov (United States) Brubaker, Chad; Jana, Suman; Ray, Baishakhi; Khurshid, Sarfraz; Shmatikov, Vitaly 2014-01-01 Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Distributed systems, mobile and desktop applications, embedded devices, and all of the secure Web rely on SSL/TLS for protection against network attacks. This protection critically depends on whether SSL/TLS clients correctly validate X.509 certificates presented by servers during the SSL/TLS handshake protocol. We design, implement, and apply the first methodology for large-scale testing of certificate validation logic in SSL/TLS implementations. Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations. Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many of them are caused by serious security vulnerabilities. For example, any server with a valid X.509 version 1 certificate can act as a rogue certificate authority and issue fake certificates for any domain, enabling man-in-the-middle attacks against MatrixSSL and GnuTLS.
Several implementations also accept certificate authorities created by unauthorized issuers, as well as certificates not intended for server authentication. We also found serious vulnerabilities in how users are warned about certificate validation errors. When presented with an expired, self-signed certificate, NSS, Safari, and Chrome (on Linux) report that the certificate has expired (a low-risk, often ignored error) but not that the connection is insecure against a man-in-the-middle attack. These results demonstrate that automated adversarial testing with frankencerts 20. The Effect of Respect, Trust, and Fear in Adversarial Stakeholder Relationships: A Case Study on Water Commodification and Stakeholder Engagement Directory of Open Access Journals (Sweden) Mark McGinley 2011-04-01 Full Text Available Current academic discussion around stakeholder engagement has historically been focused on the attributes of the various stakeholders rather than on the relationship between the stakeholders. This paper examines the role that the intangible variables, respect, fear, and trust play in stakeholder relationships that are characterized by intractable conflict. That role is explored through a case study of stakeholder groups with adversarial positions on the commodification and export of Canada’s freshwater. Through discussion of the relationship between two sets of stakeholders with conflicting interests on Canada’s freshwater commodification, respect, fear, and trust are advanced as the key intangible variables that create the underlying conflict. With these root causes identified, the paper explores methods to build respect, reduce fear, and create trust between the stakeholders in an effort to shift their relationship from adversarial to co-operative in the hopes of facilitating constructive dialogue. 1. How to achieve public participation in nuclear waste decisions: Public relations or transparent adversary science Energy Technology Data Exchange (ETDEWEB) Treichel, J.
[Nevada Nuclear Waste Task Force, Las Vegas, NV (United States) 1999-12-01 applied correctly, are in this case, merely tools being employed to co-opt or buy off the opposition and legitimize the process. It appears at this point that there are two choices: either the decision-makers can attempt to continue with the current program of forced siting, ignoring all citizen and scientific opposition and eventually leading to litigation; or a new program can be developed. Since the US nuclear waste programs have utilized or have been perceived as utilizing advocacy science, the country must add 'adversary science' to the national program. This is described as the providing of financial support for competing teams of experts to investigate, and to tell the public about, any hazards which the enthusiasts of a project may have failed to report, or even to see. If citizens were able to participate in the debate between scientists and experts, with differing opinions concerning the merits of a nuclear waste site or indeed, any controversial technology, they would feel represented. They would believe that the final decisions were made only after the program in question had withstood and overcome all criticism, rather than having just ignored it. There is probably no chance that the current US nuclear waste program can succeed in light of the level of public opposition. It is therefore necessary to move to a policy that respects those who must pay the costs and live with its consequences. 2. The structure of adversarial growth in a sample of cancer patients 8 years post-diagnosis: a revision of the SLQ-38 NARCIS (Netherlands) McBride, Orla; Schroevers, Maya J.; Ranchor, Adelita V. 2009-01-01 Stressful and traumatic events may trigger positive life changes, so-called adversarial growth. Despite growing interest in this topic, the structure and dimensionality of this concept have not been established.
Recently, empirical reviews have suggested that the factors underlying this construct are 3. The Influence of Cognitive Biases on Court Decisions. Contributions of Legal Psychology to the Adversary Criminal Proceedings Directory of Open Access Journals (Sweden) Paola Iliana De la Rosa Rodríguez 2016-06-01 Full Text Available The purpose of this paper is to disperse among the judiciary and society the psychological procedures involved in the decision-making process of judges since they are not only influenced by law but by previous ideas and values. It is worth questioning: to what extent are their personal views and beliefs the bases of verdicts? How can aversions and public opinion have an impact on the court decision? This paper analyzes and states the differences of the judicial role in the Mexican adversarial system and the inquisitorial models of justice. It also critiques the categories of the judicial officers and presents the circumstances that make an impact on judicial decisions, according to Psychology studies. It finally classifies cognitive biases and concludes that the more knowledge judges have about them, the more impartial their judgments will be. 4. Mothers' power assertion; children's negative, adversarial orientation; and future behavior problems in low-income families: early maternal responsiveness as a moderator of the developmental cascade. Science.gov (United States) Kim, Sanghag; Kochanska, Grazyna 2015-02-01 5.
A Research on the System of Evidence Investigation and Collection in Court with the Adversary System%法院证据调查与当事人主义 Institute of Scientific and Technical Information of China (English) 李晓丽 2012-01-01 China's Civil Trial Reform criticizes the super-ex officio doctrine and advocates developing the adversary system by weakening the court's power of evidence investigation and collection. By comparing the adversary system's concept in the civil law system and the common law system, we find that truth-finding is the true essence of civil procedure law, and that the adversary system does not exclude the court's power of evidence investigation and collection. We should develop and standardize the system of the court's evidence investigation and collection so as to enable the judiciary to deliver justice proactively.%我国审判方式改革在批判超职权主义观念的基础上,主张向当事人主义的目标发展,大大削弱了法官依据职权调查证据的权力。本文通过对大陆法系和英美法系的当事人主义诉讼模式的分析发现,对真实的追求是民事诉讼的真谛,当事人主义并不排斥法院实施证据调查。我国应当通过法院证据调查制度的规范化促进司法者能动地输出司法正义。 6. Mothers' power assertion; children's negative, adversarial orientation; and future behavior problems in low-income families: early maternal responsiveness as a moderator of the developmental cascade. Science.gov (United States) Kim, Sanghag; Kochanska, Grazyna 2015-02-01 7.
El aspecto criminalista del nuevo proceso penal de corte acusatorio, adversarial y oral en el estado de México/The forensic aspect of the accusatorial, adversarial and oral criminal process in the state of Mexico Directory of Open Access Journals (Sweden) Juan Antonio Maruri Jiménez 2013-01-01 Full Text Available By Decree number 266 of February 9, 2009, the Government of the State of Mexico published in the Government Gazette the new Code of Criminal Procedure for the State of Mexico. This paper focuses on the role of the expert witness in the new accusatorial, adversarial and oral criminal process, addressing topics concerning its governing principles, the legality and assessment of evidence, the recording of proceedings and hearings, the procedural parties, the actions and elements of the investigation, inspections, searches and seizures, investigation records and the chain of custody, expert reports, and the expert's participation in the oral trial hearing, in anticipated evidence and in non-reproducible evidence. In this way, the expert will govern his conduct, beyond the matters of his own science, according to the principles of the accusatorial criminal process. The rules for issuing and presenting the expert opinion change substantially, for which reason experts must analyze the legal text and direct their conduct on the basis of those criteria; moreover, their personal intervention will be required through examinations conducted by the procedural parties, making them participants in the criminal drama as active agents in attaining knowledge of the historical truth of the facts. 8. Are the advocates of nuclear power and the adversaries listening to each other? Does Dialogue have a chance? Introductory remarks by John A. Macpherson International Nuclear Information System (INIS) Are the advocates of nuclear power and the adversaries listening to each other? Does dialogue have a chance?
My short answer to both questions posed as the title for this discussion is 'no'. And I would add: There is no point in trying to bring opposite poles together; it requires too much investment for too little return. A nuclear dialogue will have a chance only if a distinction can be drawn between physics and metaphysics, between chemistry and alchemy, and if the gap can be bridged between the polarized views of the world's societal needs which, incidentally, create the nuclear issue in the first place. This is a daunting task. Movements in search of a cause have a passion for preaching rather than a love of listening. 9. U.S. Nuclear Regulatory Commission Staff's Approach to Incorporate the Attractiveness of Nuclear Material to Adversaries into Its Graded Approach to Security International Nuclear Information System (INIS) This paper provides an overview of the analysis performed to support the Nuclear Regulatory Commission (NRC) staff’s approach to incorporate material attractiveness into its graded security requirements. Discussions of the technical study, as well as the input we have received from interested parties, are presented. It will also provide the staff’s current view of the approach and how it could potentially be implemented into the regulatory framework. As with all of the NRC’s policy decisions, the five-member Commission will ultimately decide the final approach and implementation of the staff’s efforts. The NRC staff has worked over the last several years to identify an approach to capturing the concept of material attractiveness in its graded security requirements. This has involved staff work, a technical study, and outreach to stakeholders. The staff’s current understanding is that the most useful attribute to consider, aside from self-protection, is the level of dilution. It is both measurable and correlated with the attractiveness of nuclear material to adversaries. 
The staff considers that the current categorization approach should be maintained. However, alternative security measures should be considered for varying levels of dilution, taking into account the bulkiness, heavy weight and lower attractiveness of the material. (author) Energy Technology Data Exchange (ETDEWEB) Speed, Ann Elizabeth; Doser, Adele Beatrice; Warrender, Christina E. 2009-05-01 The purpose of this work was to help develop a research roadmap and a small proof of concept for addressing key problems and gaps from the perspective of using text analysis methods as a primary tool for detecting when a group is undergoing a phase change. Self-organizing map (SOM) techniques were used to analyze text data obtained from the world-wide web. Statistical studies indicate that it may be possible to predict phase changes, as well as detect whether or not an example of writing can be attributed to a group of interest. 11. High-Confidence Predictions under Adversarial Uncertainty CERN Document Server Drucker, Andrew 2011-01-01 We study the setting in which the bits of an unknown infinite binary sequence x are revealed sequentially to an observer. We show that very limited assumptions about x allow one to make successful predictions about unseen bits of x. First, we study the problem of successfully predicting a single 0 from among the bits of x. In our model we have only one chance to make a prediction, but may do so at a time of our choosing. We describe and motivate this as the problem of a frog who wants to cross a road safely. Letting N_t denote the number of 1s among the first t bits of x, we say that x is "eps-weakly sparse" if lim inf (N_t/t) <= eps. Second, for any eps > 0, we give a randomized forecasting algorithm S_eps that, given sequential access to a binary sequence x, makes a prediction of the form: "A p fraction of the next N bits will be 1s." (The algorithm gets to choose p, N, and the time of the prediction.) 
For any fixed sequence x, the forecast fraction p is accurate to within +-eps with probability 1 - eps. 12. Adversarial decision and optimization-based models OpenAIRE Villacorta Iglesias, Pablo José 2015-01-01 Decision making is all around us. Everyone makes choices every day, from the moment we open our eyes in the morning. Some of them do not have very important consequences in our life and these consequences are easy to take into account. However, in the business world, managers make decisions that have important consequences on the future of their own firm (in terms of revenues, market position, business policy) and their employees. In these cases, it is difficult to account fo... OpenAIRE Center for Homeland Defense and Security Naval Postgraduate School 2016-01-01 ISIS has honed and evolved its propaganda skills and continues to push out very effective messaging to its prospective recruits, in many cases radicalizing groups and individuals. Which approaches should be taken in finding the right strategy to counter their hateful and violent disinformation? In this interview, Kathleen Kiernan has assembled a panel of subject matter experts on media production who discuss various approaches towards counter-messaging the messenger. 14. New Online Ecology of Adversarial Aggregates: ISIS and beyond CERN Document Server Johnson, N F; Vorobyeva, Y; Gabriel, A; Qi, H; Velasquez, N; Manrique, P; Johnson, D; Restrepo, E; Song, C; Wuchty, S 2016-01-01 Support for an extremist entity such as Islamic State (ISIS) somehow manages to survive globally online despite significant external pressure, and may ultimately inspire acts by individuals who have no prior history of extremism, formal cell membership or direct links to leadership. We uncover an ultrafast ecology driving this online support and provide a mathematical theory that describes it. 
The ecology features self-organized aggregates that proliferate preceding the onset of recent real-world campaigns, and adopt novel adaptive mechanisms to enhance their survival. One of the actionable predictions is that the development of large, potentially potent pro-ISIS aggregates can be thwarted by targeting smaller ones. 15. [The newborn and the couple: adversaries or partners?]. Science.gov (United States) Provost, M A; Tremblay, S 1991-06-01 We generally accept that the planned arrival of a first child is a source of joy for the new parents and that it provides them with a sense of accomplishment. Traditionally, society welcomes the formation of a family unit and looks forward to this passage into the new role of parenthood. However, not only has research on marital relations set aside the popular imagery of romanticism, but it has increasingly given negative connotations to this crisis-prone transition phase. The objective of this article is therefore to review literature concerning the impact of a newborn child on the marital experience, and to nuance the idea that childbirth can lead to crisis situations within the couple. In conclusion, the authors argue that the concept of marital satisfaction needs to be reformulated. Judging from their brief overview of the literature, the authors believe the concept to be too narrowly defined and slightly ambiguous. Indeed, researchers have not yet reached a consensus on the definition of marital satisfaction. Furthermore, they tend to operationalize the concept in very different ways. As a result, there is a lot of confusion, and the fact that many researchers use different terms as synonyms of satisfaction does not help. What's more, the assessment of quality in a relationship should not be limited to measuring the level of satisfaction of the two partners. Other dimensions (e.g. adjustment, commitment, cohesion, etc.) 
deserve consideration in order to give a more complete image of the changes that occur over the years within the couple. PMID:1932419 16. New online ecology of adversarial aggregates: ISIS and beyond. Science.gov (United States) Johnson, N F; Zheng, M; Vorobyeva, Y; Gabriel, A; Qi, H; Velasquez, N; Manrique, P; Johnson, D; Restrepo, E; Song, C; Wuchty, S 2016-06-17 Support for an extremist entity such as Islamic State (ISIS) somehow manages to survive globally online despite considerable external pressure and may ultimately inspire acts by individuals having no history of extremism, membership in a terrorist faction, or direct links to leadership. Examining longitudinal records of online activity, we uncovered an ecology evolving on a daily time scale that drives online support, and we provide a mathematical theory that describes it. The ecology features self-organized aggregates (ad hoc groups formed via linkage to a Facebook page or analog) that proliferate preceding the onset of recent real-world campaigns and adopt novel adaptive mechanisms to enhance their survival. One of the predictions is that development of large, potentially potent pro-ISIS aggregates can be thwarted by targeting smaller ones. PMID:27313046 17. The deconstruction of safety arguments through adversarial counter-argument International Nuclear Information System (INIS) The project Deconstructive Evaluation of Risk In Dependability Arguments and Safety Cases (DERIDASC) has recently experimented with techniques borrowed from literary theory as safety case analysis techniques [Armstrong. Danger: Derrida at work. Interdiscipl Sci Rev 2003;28(2):83-94; Armstrong J, Paynter S. Safe systems: construction, destruction, and deconstruction. In: Redmill F, Anderson T, editors. Proceedings of the 11th safety critical systems symposium, Bristol, UK. Berlin: Springer; 2003. p. 62-76. ISBN:1-85233-696-X.]. This paper introduces our high-level framework for 'deconstructing' safety arguments. 
Our approach is quite general and should be applicable to different types of safety argumentation framework. As one example, we outline how the approach would work in the context of the Goal Structure Notation (GSN). 18. The deconstruction of safety arguments through adversarial counter-argument Energy Technology Data Exchange (ETDEWEB) Armstrong, James M. [BAE Systems Systems Engineering Innovation Centre (SEIC), University of Loughborough (United Kingdom)]. E-mail: J.M.Armstrong@lboro.ac.uk; Paynter, Stephen E. [MBDA UK Ltd, Filton, Bristol (United Kingdom)]. E-mail: stephen.paynter@mbda.co.uk 2007-11-15 The project Deconstructive Evaluation of Risk In Dependability Arguments and Safety Cases (DERIDASC) has recently experimented with techniques borrowed from literary theory as safety case analysis techniques [Armstrong. Danger: Derrida at work. Interdiscipl Sci Rev 2003;28(2):83-94; Armstrong J, Paynter S. Safe systems: construction, destruction, and deconstruction. In: Redmill F, Anderson T, editors. Proceedings of the 11th safety critical systems symposium, Bristol, UK. Berlin: Springer; 2003. p. 62-76. ISBN:1-85233-696-X.]. This paper introduces our high-level framework for 'deconstructing' safety arguments. Our approach is quite general and should be applicable to different types of safety argumentation framework. As one example, we outline how the approach would work in the context of the Goal Structure Notation (GSN). 19. A classical one-way function to confound quantum adversaries CERN Document Server Moore, Cristopher; Russell, Alexander; Vazirani, Umesh 2007-01-01 The promise of quantum computation and its consequences for complexity-theoretic cryptography motivates an immediate search for cryptosystems which can be implemented with current technology, but which remain secure even in the presence of quantum computers. 
Inspired by recent negative results pertaining to the nonabelian hidden subgroup problem, we present here a classical algebraic function $f_V(M)$ of a matrix $M$ which we believe is a one-way function secure against quantum attacks. Specifically, inverting $f_V$ reduces naturally to solving a hidden subgroup problem over the general linear group (which is at least as hard as the hidden subgroup problem over the symmetric group). We also demonstrate a reduction from Graph Isomorphism to the problem of inverting $f_V$; unlike Graph Isomorphism, however, the function $f_V$ is random self-reducible and therefore uniformly hard. These results suggest that, unlike Shor's algorithm for the discrete logarithm--which is, so far, the only successful quantum attack ... 20. Modeling Adversaries in Counterterrorism Decisions Using Prospect Theory. Science.gov (United States) Merrick, Jason R W; Leclerc, Philip 2016-04-01 Counterterrorism decisions have been an intense area of research in recent years. Both decision analysis and game theory have been used to model such decisions, and more recently approaches have been developed that combine the techniques of the two disciplines. However, each of these approaches assumes that the attacker is maximizing its utility. Experimental research shows that human beings do not make decisions by maximizing expected utility without aid, but instead deviate in specific ways such as loss aversion or likelihood insensitivity. In this article, we modify existing methods for counterterrorism decisions. We keep expected utility as the defender's paradigm to seek for the rational decision, but we use prospect theory to solve for the attacker's decision to descriptively model the attacker's loss aversion and likelihood insensitivity. We study the effects of this approach in a critical decision, whether to screen containers entering the United States for radioactive materials. 
We find that the defender's optimal decision is sensitive to the attacker's levels of loss aversion and likelihood insensitivity, meaning that understanding such descriptive decision effects is important in making such decisions. PMID:25039254 1. Distributed convergence to Nash equilibria by adversarial networks CERN Document Server Gharesifard, Bahman 2012-01-01 This paper considers a class of strategic scenarios in which two networks of agents have opposing objectives with regards to the optimization of a common objective function. In the resulting zero-sum game, individual agents collaborate with neighbors in their respective network and have only partial knowledge of the state of the agents in the other network. For the case when the interaction topology of each network is undirected, we synthesize a distributed saddle-point strategy and establish its convergence to the Nash equilibrium for the class of strictly concave-convex and locally Lipschitz objective functions. We also show that this dynamics does not converge in general if the topologies are directed. This justifies the introduction, in the directed case, of a generalization of this distributed dynamics which we show converges to the Nash equilibrium for the class of strictly concave-convex differentiable functions with locally Lipschitz gradients. The technical approach combines tools from algebraic grap... NARCIS (Netherlands) Beye, M.; Veugen, P.J.M. 2012-01-01 Hash-lock authentication protocols for Radio Frequency IDentification (RFID) tags incur heavy search on the server. Key-trees have been proposed as a way to reduce search times, but because partial keys in such trees are shared, key compromise affects several tags. Buttyán [4] and Beye and Veugen [3 3. Risk taking in adversarial situations: Civilization differences in chess experts. 
Science.gov (United States) Chassy, Philippe; Gobet, Fernand 2015-08-01 The projections of experts in politics predict that a new world order will emerge within two decades. Being multipolar, this world will inevitably lead to frictions where civilizations and states will have to decide whether to risk conflict. Very often these decisions are informed if not taken by experts. To estimate risk-taking across civilizations, we examined strategies used in 667,599 chess games played over eleven years by chess experts from 11 different civilizations. We show that some civilizations are more inclined to settle for peace. Similarly, we show that once engaged in the battle, the level of risk taking varies significantly across civilizations, the boldest civilization using the riskiest strategy about 35% more than the most conservative civilization. We discuss which psychological factors might underpin these civilizational differences. PMID:25912894 4. A model-referenced procedure to support adversarial decision processes International Nuclear Information System (INIS) In public enquiries concerning major facilities, such as the construction of a new electric power plant, it is observed that a usable decision model should be made commonly available alongside the open provision of data and assumptions. The protagonist, eg the electric utility, generally makes use of a complex, proprietary model for detailed evaluation of options. A simple emulator of this, based upon a regression analysis of numerous scenarios, and validated by further simulations, is shown to be feasible and potentially attractive. It would be in the interests of the utility to make such a model-referenced decision support method generally available. The approach is considered in relation to the recent Hinkley Point C public enquiry for a new nuclear power plant in the UK. (Author) 5. Science.gov (United States) Ruitenberg, Claudia W. 2009-01-01 Many scholars in the area of citizenship education take deliberative approaches to democracy, especially as put forward by John Rawls, as their point of departure. From there, they explore how students' capacity for political and/or moral reasoning can be fostered. Recent work by political theorist Chantal Mouffe, however, questions some of the… 7. 29 CFR 102.143 - “Adversary adjudication” defined; entitlement to award; eligibility for award. Science.gov (United States) 2010-07-01 ...; eligibility for award. 102.143 Section 102.143 Labor Regulations Relating to Labor NATIONAL LABOR RELATIONS... 
used in this subpart, means unfair labor practice proceedings pending before the Board on complaint and..., corporation, association, unit of local government, or public or private organization with a net worth of... 8. The perpetual adversary: how Dutch security services perceived communism (1918-1989) OpenAIRE Hijzen, Constant Willem 2013-01-01 "For more than eighty years, Dutch security services perceived communism as the ultimate threat to national security. From its inception, the anticommunist threat perceptions contained references to foreign, possible, potential, and ideological elements of the communist threat. This put the activities of Dutch communists in a different light. Although for a long time there were well-grounded reasons to do so, the author finds that there were periods when the actual threatening character of Du... 9. Adversary phase change detection using S.O.M. and text data International Nuclear Information System (INIS) In this work, we developed a self-organizing map (SOM) technique for using web-based text analysis to forecast when a group is undergoing a phase change. By 'phase change', we mean that an organization has fundamentally shifted attitudes or behaviors. For instance, when ice melts into water, the characteristics of the substance change. A formerly peaceful group may suddenly adopt violence, or a violent organization may unexpectedly agree to a ceasefire. SOM techniques were used to analyze text obtained from organization postings on the world-wide web. Results suggest it may be possible to forecast phase changes, and determine if an example of writing can be attributed to a group of interest. 10. 
Insider, Outsider, Ally, or Adversary: Parents of Youth with Learning Disabilities Engage in Educational Advocacy Science.gov (United States) Duquette, Cheryll; Fullarton, Stephanie; Orders, Shari; Robertson-Grewal, Kristen 2011-01-01 The purpose of this qualitative study was to examine the educational advocacy experiences of parents of adolescents and young adults identified as having a learning disability (LD) through the lens of four dimensions of advocacy. Seventeen mothers of youth with LD responded to items in a questionnaire and 13 also engaged in in-depth interviews. It… 11. Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS Implementations OpenAIRE Brubaker, Chad; Jana, Suman; Ray, Baishakhi; Khurshid, Sarfraz; Shmatikov, Vitaly 2014-01-01 Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Distributed systems, mobile and desktop applications, embedded devices, and all of the secure Web rely on SSL/TLS for protection against network attacks. This protection critically depends on whether SSL/TLS clients correctly validate X.509 certificates presented by servers during the SSL/TLS handshake protocol. 12. Adversaries and Allies: Rival National Suffrage Groups and the 1882 Nebraska Woman Suffrage Campaign Science.gov (United States) Heider, Carmen 2005-01-01 In September 1882, Nebraska was the setting for a significant moment in the history of the United States women's rights movement: the two rival suffrage organizations, the American Woman Suffrage Association (AWSA) and the National Woman Suffrage Association (NWSA), both held their annual conventions in Omaha. The alliance of the AWSA and the NWSA… 13. 
Semidefinite programming characterization and spectral adversary method for quantum complexity with noncommuting unitary queries OpenAIRE Barnum, Howard 2007-01-01 Generalizing earlier work characterizing the quantum query complexity of computing a function of an unknown classical "black box" function drawn from some set of such black box functions, we investigate a more general quantum query model in which the goal is to compute functions of N by N "black box" unitary matrices drawn from a set of such matrices, a problem with applications to determining properties of quantum physical systems. We characterize the existence of an algorithm for such a... 14. The principles of adversarial procedure and equality in international commercial arbitration OpenAIRE Козирєва, Валентина; Гаврилішин, Анатолій 2016-01-01 The article investigates the principles of competitiveness and procedural equality as basic principles of justice which govern litigation in international commercial arbitration. On the basis of international legal acts, examples of the application of the above principles are given. English abstract V. Kozyreva, A. Havrylishyn The article deals with the principles of competition and equality of procedural justice as the basic principles governing cases before international commercial arbitration. On... 15. Peer-to-Peer Secure Multi-Party Numerical Computation Facing Malicious Adversaries CERN Document Server Bickson, Danny; Dolev, Danny; Pinkas, Benny 2009-01-01 We propose an efficient framework for enabling secure multi-party numerical computations in a Peer-to-Peer network. This problem arises in a range of applications such as collaborative filtering, distributed computation of trust and reputation, monitoring and other tasks, where the computing nodes are expected to preserve the privacy of their inputs while performing a joint computation of a certain function. 
Although there is a rich literature in the field of distributed systems security concerning secure multi-party computation, in practice it is hard to deploy those methods in very large scale Peer-to-Peer networks. In this work, we try to bridge the gap between theoretical algorithms in the security domain, and a practical Peer-to-Peer deployment. We consider two security models. The first is the semi-honest model where peers correctly follow the protocol, but try to reveal private information. We provide three possible schemes for secure multi-party numerical computation for this model and identify a singl... 16. Secure and Robust IPV6 Autoconfiguration Protocol For Mobile Adhoc Networks Under Strong Adversarial Model CERN Document Server Slimane, Zohra; Feham, Mohamed; Taleb-Ahmed, Abdelmalik 2011-01-01 Automatic IP address assignment in Mobile Ad hoc Networks (MANETs) enables nodes to obtain routable addresses without any infrastructure. Different protocols have been developed throughout the last years to achieve this service. However, research primarily focused on correctness, efficiency and scalability; much less attention has been given to the security issues. The lack of security in the design of such protocols opens the possibility of many real threats leading to serious attacks in potentially hostile environments. Recently, few schemes have been proposed to solve this problem, but none of them has brought satisfactory solutions. Auto-configuration security issues are still an open problem. In this paper, a robust and secure stateful IP address allocation protocol for standalone MANETs is specified and evaluated within NS2. Our solution is based on mutual authentication, and a fully distributed Autoconfiguration and CA model, in conjunction with threshold cryptography. By deploying a new concept of joi... 17. 
SECURE AND ROBUST IPV6 AUTOCONFIGURATION PROTOCOL FOR MOBILE ADHOC NETWORKS UNDER STRONG ADVERSARIAL MODEL Directory of Open Access Journals (Sweden) Zohra Slimane 2011-08-01 Full Text Available Automatic IP address assignment in Mobile Ad hoc Networks (MANETs) enables nodes to obtain routable addresses without any infrastructure. Different protocols have been developed throughout the last years to achieve this service. However, research primarily focused on correctness, efficiency and scalability; much less attention has been given to the security issues. The lack of security in the design of such protocols opens the possibility of many real threats leading to serious attacks in potentially hostile environments. Recently, few schemes have been proposed to solve this problem, but none of them has brought satisfactory solutions. Auto-configuration security issues are still an open problem. In this paper, a robust and secure stateful IP address allocation protocol for standalone MANETs is specified and evaluated within NS2. Our solution is based on mutual authentication, and a fully distributed Autoconfiguration and CA model, in conjunction with threshold cryptography. By deploying a new concept of joint IP address and public key certificate, we show that, instead of earlier approaches, our solution solves the problem of all possible attacks associated with dynamic IP address assignment in MANETs. The resulting protocol incurs low latency and control overhead. 18. Vision and strategy development of Slovak society. Development Strategy for Slovak society (basis for public adversary) International Nuclear Information System (INIS) This analytical study analyses the present state as well as the strategic perspectives of the development of Slovak society. This development strategy is the scientific testimony of the authors; the manner and extent of its use will be determined by the political representation. 
Future economic growth and development of Slovakia's regions will mainly depend on the availability of raw materials, energy resources, water and food production, and on improving the environment. A key issue in the energy sector over the next 5 to 10 years will be energy security, together with the diversification of energy sources, the utilization of domestic raw materials and renewable energy, and energy savings. The aim of the energy security strategy is to achieve a competitive energy industry providing a reliable and efficient supply of all forms of energy at affordable prices, with a view to protecting the customer and the environment. 19. Consensus of Multi-Agent Networks in the Presence of Adversaries Using Only Local Information CERN Document Server LeBlanc, Heath J; Sundaram, Shreyas; Koutsoukos, Xenofon 2012-01-01 This paper addresses the problem of resilient consensus in the presence of misbehaving nodes. Although it is typical to assume knowledge of at least some nonlocal information when studying secure and fault-tolerant consensus algorithms, this assumption is not suitable for large-scale dynamic networks. To remedy this, we emphasize the use of local strategies to deal with resilience to security breaches. We study a consensus protocol that uses only local information and we consider worst-case security breaches, where the compromised nodes have full knowledge of the network and the intentions of the other nodes. We provide necessary and sufficient conditions for the normal nodes to reach consensus despite the influence of the malicious nodes under different threat assumptions. These conditions are stated in terms of a novel graph-theoretic property referred to as network robustness. 20. 
An Efficient Encryption Algorithm for P2P Networks Robust Against Man-in-the-Middle Adversary OpenAIRE Roohallah Rastaghi 2012-01-01 Peer-to-peer (P2P) networks have become popular as a new paradigm for information exchange and are being used in many applications such as file sharing, distributed computing, video conference, VoIP, radio and TV broadcasting. This popularity comes with security implications and vulnerabilities that need to be addressed. Especially due to direct communication between two end nodes in P2P networks, these networks are potentially vulnerable to Man-in-the-Middle attacks. In this paper, we propos... 1. On the Formal Modeling of Games of Language and Adversarial Argumentation : A Logic-Based Artificial Intelligence Approach OpenAIRE Eriksson Lundström, Jenny S. Z. 2009-01-01 Argumentation is a highly dynamical and dialectical process drawing on human cognition. Successful argumentation is ubiquitous to human interaction. Comprehensive formal modeling and analysis of argumentation presupposes a dynamical approach to the following phenomena: the deductive logic notion, the dialectical notion and the cognitive notion of justified belief. For each step of an argumentation these phenomena form networks of rules which determine the propositions to be allowed to make se... 2. 
El aspecto científico de la trilogía “ministerio público-policía-peritos” en el nuevo proceso penal de corte acusatorio, adversarial y oral en México/The scientific aspect of the trilogy "public-police-expert ministry" in the new adversarial criminal process, and oral adversarial court in Mexico Directory of Open Access Journals (Sweden) Juan Antonio Maruri Jiménez 2015-05-01 Full Text Available On June 18, 2008, the Decree amending Articles 16, 17, 18, 19, 20, 21 and 22, fractions XXI and XXIII of Article 73, Section VII of Article 115 and section XIII, paragraph B, of Article 123 of the Constitution of the United Mexican States was published, giving rise to the Constitutional reform of criminal justice. Its basic expectations are: a total transformation of the criminal justice system; effectively guaranteeing the validity of "due process" in criminal matters; restoring confidence in the criminal justice system and its institutions through efficient investigation and prosecution of crimes; giving the accused greater guarantees of defense, thereby ensuring the protection, support and participation of victims and injured parties; and safeguarding the principles governing a Democratic Constitutional State of Law. 3. 
El aspecto científico de la trilogía “ministerio público-policía-peritos” en el nuevo proceso penal de corte acusatorio, adversarial y oral en México/The scientific aspect of the trilogy "public-police-expert ministry" in the new adversarial criminal process, and oral adversarial court in Mexico OpenAIRE Juan Antonio Maruri Jiménez 2015-01-01 On June 18, 2008, the Decree amending Articles 16, 17, 18, 19, 20, 21 and 22, fractions XXI and XXIII of Article 73, fraction VII of Article 115 and fraction XIII of paragraph B of Article 123 of the Constitution of the United Mexican States was published, giving rise to the Constitutional reform of criminal justice, with the following basic expectations: a total transformation of the criminal justice system; effective guarantees of "due process" in criminal matters re... 4. Adjuncts or adversaries to shared decision-making? Applying the Integrative Model of behavior to the role and design of decision support interventions in healthcare interactions Directory of Open Access Journals (Sweden) Fishbein Martin 2009-11-01 Full Text Available Abstract Background A growing body of literature documents the efficacy of decision support interventions (DESI) in helping patients make informed clinical decisions. DESIs are frequently described as an adjunct to shared decision-making between a patient and healthcare provider; however, little is known about the effects of DESIs on patients' interactional behaviors-whether or not they promote the involvement of patients in decisions. Discussion Shared decision-making requires not only a cognitive understanding of the medical problem and deliberation about the potential options to address it, but also a number of communicative behaviors that the patient and physician need to engage in to reach the goal of making a shared decision.
Theoretical models of behavior can guide both the identification of constructs that will predict the performance or non-performance of specific behaviors relevant to shared decision-making, as well as inform the development of interventions to promote these specific behaviors. We describe how Fishbein's Integrative Model (IM) of behavior can be applied to the development and evaluation of DESIs. There are several ways in which the IM could be used in research on the behavioral effects of DESIs. An investigator could measure the effects of an intervention on the central constructs of the IM - attitudes, normative pressure, self-efficacy, and intentions related to communication behaviors relevant to shared decision-making. However, if one were interested in the determinants of these domains, formative qualitative research would be necessary to elicit the salient beliefs underlying each of the central constructs. Formative research can help identify potential targets for a theory-based intervention to maximize the likelihood that it will influence the behavior of interest or to develop a more fine-grained understanding of intervention effects. Summary Behavioral theory can guide the development and evaluation of DESIs to increase the likelihood that these will prepare patients to play a more active role in the decision-making process. Self-reported behavioral measures can reduce the measurement burden for investigators and create a standardized method for examining and reporting the determinants of communication behaviors necessary for shared decision-making. 5. Adjuncts or adversaries to shared decision-making? Applying the Integrative Model of behavior to the role and design of decision support interventions in healthcare interactions. NARCIS (Netherlands) Frosch, D.; Legare, F.; Fishbein, M.; Elwyn, G.
2009-01-01 ABSTRACT: BACKGROUND: A growing body of literature documents the efficacy of decision support interventions (DESI) in helping patients make informed clinical decisions. DESIs are frequently described as an adjunct to shared decision-making between a patient and healthcare provider, however little is 6. MISTRAL: A game-theoretical model to allocate security measures in a multi-modal chemical transportation network with adaptive adversaries International Nuclear Information System (INIS) In this paper we present a multi-modal security-transportation model to allocate security resources within a chemical supply chain which is characterized by the use of different transport modes, each having their own security features. We consider security-related risks so as to take measures against terrorist acts which could target critical transportation systems. The idea of addressing security-related issues, by supporting decisions for preventing or mitigating intentional acts on transportation infrastructure, has gained attention in academic research only recently. The decision model presented in this paper is based on game theory and it can be employed to organize intelligence capabilities aimed at securing chemical supply chains. It enables detection and warning against impending attacks on transportation infrastructures and the subsequent adoption of security countermeasures. This is of extreme importance for preventing terrorist attacks and for avoiding (possibly huge) human and economic losses. In our work we also provide data sources and numerical simulations by applying the proposed model to an illustrative multi-modal chemical supply chain. - Highlights: • A model to increase the security in a multimodal chemical supply chain is proposed. • The model considers adaptive opponents having multi-attribute utility functions. • The model is based on game theory using an attacker–defender schema.
• The model provides recommendations about where to allocate security measures. • Numerical simulations on a sample multimodal chemical supply chain are shown 7. Plea Bargaining: A Trojan Horse of The Adversarial System? Institute of Scientific and Technical Information of China (English) 魏晓娜 2011-01-01 I. The problem. In recent years, observers have been surprised to find that plea bargaining, an institution bearing a strongly "American" stamp, has at some point quietly slipped in and begun to take root in continental Europe and several Latin American countries. [1 8. Unexpected Turns: The Aesthetic, the Pathetic and the Adversarial in the Long Durée of Art’s Histories Directory of Open Access Journals (Sweden) Griselda Pollock 2012-12-01 Full Text Available In a conference organized at the University of Birmingham in 2012, I was invited to reflect upon the current situation in Art History that is posited as being ‘After the New Art History’. What is this after-ness? Succession? Supersession? Replacement? Exhaustion? Erasure? Fashionability? Dare we ask what kind of ‘killing’ of the past or of Oedipal Feminist Mothers and Marxists Fathers is going on here? Or does this indicate simply that we need new directions in our discipline just to keep it alive? There is certainly a feeling around that we are in a period of transition. Former certainties about the tendencies within the discipline of art history have melted. Is this a sign of our condition as Liquid Modernity? There is a risk, however, of casting the recent past as being ‘over’, to be viewed nostalgically, or gratefully cast into the dustbin of has-been histories so that we can get back to business as normal or find new pastures exciting because they are different. Before I acquiesce to such a trend for newness per se, I want to reconsider what is being said to have come before and now is defined as being over. To do so, I shall argue for an understanding of the long-term nature of any one intervention seeking radically to change the ways we study art and the image, past and present.
Equally, I suggest that such long-term projects are themselves subject to historical change, shifting in sensitive response to altered conditions and changed priorities, but also registering their own effects and opening new avenues of analysis. Finally this article performs a reading of the call for papers for the conference in order to tease out critical misrepresentations of the past that we are now supposed to come after. Displacing the model of old and new with notions of parallel trajectories and multiple settlements in the expanding, historically shifting but also deeply structured ‘landscape’ of the discipline, I propose a less phallic model of a field with many threads contributing to its complex engagements with art, with visuality, with subjectivity and with their forms of material and symbolic interaction. 9. Institute of Scientific and Technical Information of China (English) 牛小犇 2011-01-01 As a potential adversary of the Chinese men's basketball team, the Slovenian team was studied through data statistics and analysis of the scoring-related technical indicators of all nine of its games at the 35th European Men's Basketball Championship, in order to identify the team's scoring characteristics and patterns. The team attacks mainly from the inside and settles into the game quickly, choosing to attack first through the middle, then the right zone, and finally the left zone. Short-range shots concentrate under the basket in areas 13 and 14; long-range shots concentrate in areas 2, 3 and 4; the main shot types are the catch-and-shoot and the low-post back-to-the-basket attack. 10. Secure and self-stabilizing clock synchronization in sensor networks NARCIS (Netherlands) Hoepman, J.H.; Larsson, A.; Schiller, E.M.; Tsigas, P.
2007-01-01 In sensor networks, correct clocks have arbitrary starting offsets and nondeterministic fluctuating skews. We consider an adversary that aims at tampering with the clock synchronization by intercepting messages, replaying intercepted messages (after the adversary's choice of delay), and capturing no 11. Rate-Distortion Theory for Secrecy Systems OpenAIRE Schieler, Curt; Cuff, Paul 2013-01-01 Secrecy in communication systems is measured herein by the distortion that an adversary incurs. The transmitter and receiver share secret key, which they use to encrypt communication and ensure distortion at an adversary. A model is considered in which an adversary not only intercepts the communication from the transmitter to the receiver, but also potentially has side information. Specifically, the adversary may have causal or noncausal access to a signal that is correlated with the source s... 12. Discussion on Procedure Aspect of Anglo-American Evidence Law--Focusing on Jury and Adversary System Institute of Scientific and Technical Information of China (English) 吴洪淇 2012-01-01 The Evidence Puzzle is not only a driving force behind the theoretical development of Anglo-American evidence law, but also a key thread for understanding the evolution of that law. From the procedural perspective, the jury-control model and the advocate-control model are two answers to the Puzzle. A systematic review and analysis of these two models helps uncover the manifold procedural factors of Anglo-American evidence law and the mechanisms by which they operate, laying a foundation for evidence legislation and academic research on evidence law in China. 13.
Sessions and Separability in Security Protocols DEFF Research Database (Denmark) Carbone, Marco; Guttman, Joshua 2013-01-01 Despite much work on sessions and session types in non-adversarial contexts, session-like behavior given an active adversary has not received an adequate definition and proof methods. We provide a syntactic property that guarantees that a protocol has session-respecting executions. Any uncomprom... 14. Information Theoretic-Learning Auto-Encoder OpenAIRE Santana, Eder; Emigh, Matthew; Principe, Jose C. 2016-01-01 We propose Information Theoretic-Learning (ITL) divergence measures for variational regularization of neural networks. We also explore ITL-regularized autoencoders as an alternative to variational autoencoding Bayes, adversarial autoencoders and generative adversarial networks for randomly generating sample data without explicitly defining a partition function. This paper also formalizes generative moment matching networks under the ITL framework. 15. Estado e mercado: adversários ou aliados no processo de implementação da Política Nacional de Alimentação e Nutrição? Elementos para um debate sobre medidas de regulamentação State and market: adversaries or allies in the implementation of the National Food and Nutrition Policy? Some reflections on regulation measures Directory of Open Access Journals (Sweden) Anelise Rizzolo de Oliveira Pinheiro 2008-06-01 16. Actively Secure Two-Party Evaluation of Any Quantum Operation DEFF Research Database (Denmark) Dupuis, Frédéric; Nielsen, Jesper Buus; Salvail, Louis 2012-01-01 We provide the first two-party protocol allowing Alice and Bob to evaluate privately, even against active adversaries, any completely positive, trace-preserving map, given as a quantum circuit, upon their joint quantum input state. Our protocol leaks no more to any active adversary than an ideal functionality, provided Alice and Bob have the cryptographic resources for actively secure two-party classical computation. Our protocol is constructed from the protocol for the same task secure against specious adversaries presented in [4]. 17. Material control system simulator program reference manual Energy Technology Data Exchange (ETDEWEB) Hollstien, R.B. 1978-01-24 A description is presented of a Material Control System Simulator (MCSS) program for determination of material accounting uncertainty and system response to particular adversary action sequences that constitute plausible material diversion attempts. The program is intended for use in situations where randomness, uncertainty, or interaction of adversary actions and material control system components make it difficult to assess safeguards effectiveness against particular material diversion attempts. Although MCSS may be used independently in the design or analysis of material handling and processing systems, it has been tailored toward the determination of material accountability and the response of material control systems to adversary action sequences. 18. Prospects for improved detection of chemical, biological, radiological, and nuclear threats Energy Technology Data Exchange (ETDEWEB) Wuest, Craig R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hart, Brad [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Slezak, Thomas R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States) 2012-07-31 Acquisition and use of Chemical, Biological, Radiological, and Nuclear (CBRN) weapons continue to be a major focus of concern for the security apparatus of nation states because of their potential for mass casualties when used by a determined adversary. 19. Probabilistic Analysis of Onion Routing in a Black-box Model CERN Document Server Feigenbaum, Joan; Syverson, Paul 2011-01-01 We perform a probabilistic analysis of onion routing.
The analysis is presented in a black-box model of anonymous communication in the Universally Composable framework that abstracts the essential properties of onion routing in the presence of an active adversary that controls a portion of the network and knows all a priori distributions on user choices of destination. Our results quantify how much the adversary can gain in identifying users by exploiting knowledge of their probabilistic behavior. In particular, we show that, in the limit as the network gets large, a user u's anonymity is worst either when the other users always choose the destination u is least likely to visit or when the other users always choose the destination u chooses. This worst-case anonymity with an adversary that controls a fraction b of the routers is shown to be comparable to the best-case anonymity against an adversary that controls a fraction √b. 20. From Passive to Covert Security at Low Cost DEFF Research Database (Denmark) Damgård, Ivan Bjerre; Geisler, Martin; Nielsen, Jesper Buus 2010-01-01 Aumann and Lindell defined security against covert attacks, where the adversary is malicious, but is only caught cheating with a certain probability. The idea is that in many real-world cases, a large probability of being caught is sufficient to prevent the adversary from trying to cheat. In this paper, we show how to compile a passively secure protocol for honest majority into one that is secure against covert attacks, again for honest majority and catches cheating with probability 1/4. The cost of the modified protocol is essentially twice that of the original plus an overhead that only... 1.
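The covert-security abstract above rests on a deterrence argument: a rational adversary will not cheat if the expected loss from being caught outweighs the expected gain. A minimal sketch of that calculation follows; only the 1/4 catch probability comes from the abstract, while the gain and penalty figures are hypothetical.

```python
# Illustrative sketch of the covert-security deterrence argument.
# Only the 1/4 catch probability comes from the abstract above;
# the payoff numbers below are hypothetical.

def expected_cheating_gain(p_catch: float, gain: float, penalty: float) -> float:
    """Adversary's expected payoff from attempting to cheat:
    with probability p_catch it is caught and pays `penalty`,
    otherwise it keeps `gain`."""
    return (1.0 - p_catch) * gain - p_catch * penalty

# Deterrence factor 1/4, as in the abstract.
p_catch = 0.25

# Hypothetical stakes: cheating yields 10 units, but being publicly
# caught costs 100 units (reputation, legal exposure, ...).
ev = expected_cheating_gain(p_catch, gain=10.0, penalty=100.0)
print(f"expected value of cheating: {ev}")  # prints -17.5
assert ev < 0  # negative => a rational adversary stays honest
```

The point of the compiler in the abstract is precisely that a modest, constant catch probability such as 1/4 already makes this expectation negative whenever the penalty for public exposure dwarfs the cheating gain.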
LIDeA: A Distributed Lightweight Intrusion Detection Architecture for Sensor Networks DEFF Research Database (Denmark) Giannetsos, Athanasios; Krontiris, Ioannis; Dimitriou, Tassos 2008-01-01 Wireless sensor networks are vulnerable to adversaries as they are frequently deployed in open and unattended environments. Preventive mechanisms can be applied to protect them from an assortment of attacks. However, more sophisticated methods, like intrusion detection systems, are needed... 2. 76 FR 43662 - 36(b)(1) Arms Sales Notification Science.gov (United States) 2011-07-21 ... hardware upon which the software has been installed. 3. If a technologically advanced adversary were to obtain knowledge of the specific hardware and software elements, the information could be used to... 3. 15 CFR 310.4 - Action on application. Science.gov (United States) 2010-01-01 .... (3) The relative merit of the applications in terms of their qualifications as tourism destination..., that the hearing is not adversary in nature and that the sole objective thereof is to clarify... 4. 13 CFR 134.618 - How are awards paid? Science.gov (United States) 2010-01-01 ... CASES BEFORE THE OFFICE OF HEARINGS AND APPEALS Implementation of the Equal Access to Justice Act § 134... the adversary adjudication, or of the award, to the following address: Chief Financial Officer,... 5. An Identity-Based Key-Exchange Protocol Institute of Scientific and Technical Information of China (English) ZHANG Ya-juan; ZHU Yue-fei; HUANG Qiu-sheng 2005-01-01 An identity-based key-exchange protocol using a bilinear map is proposed and it is proved SK-secure (session key secure) in the AM (authenticated links adversarial model) provided the BDDH (bilinear Diffie-Hellman) assumption is correct. Then we apply the signature-based authenticator to our protocol and obtain an identity-based key-exchange protocol that is SK-secure in the UM (unauthenticated links adversarial model) provided the BDDH assumption is correct. 6.
Comparison of ICM with TPF-LEP to Prevent MAC Spoof DoS Attack in Wireless Local Area Infrastructure Network OpenAIRE Durairaj, M; A. Persia 2014-01-01 A comparison of Integrated Central Manager (ICM) and Traffic Pattern Filtering with Letter Envelop Protocol (TPF-LEP) is presented. Denial of Service (DoS) attack is a serious peril in wireless local area infrastructure networks. It makes resources unavailable to intended users, which is accomplished by spoofing a legitimate Client/AP's Medium Access Control (MAC) address. MAC addresses are easily spoofed by adversary clients, as they are not encrypted. Since the adversary sends ... 7. Secure Human-Computer Identification (Interface) Systems against Peeping Attacks: SecHCI OpenAIRE Li, SJ; Shum, HY 2005-01-01 It is an interesting problem how a human can prove its identity to a trustworthy (local or remote) computer with untrustworthy input devices and via an insecure channel controlled by adversaries. Any input devices and auxiliary devices are untrustworthy under the following assumptions: the adversaries can record humans' operations on the devices, and can access the devices to replay the recorded operations. Strictly, only the common brain intelligence is available for the human. In this paper... 8. Video Transmission in Tactical Cognitive Radio Networks Under Disruptive Attacks OpenAIRE 2015-01-01 In this dissertation, I examine the performance of a cognitive radio (CR) system in a hostile environment where an intelligent adversary tries to disrupt communications with a Gaussian noise signal. I analyze a cluster-based network of secondary users (SUs). The adversary can limit access for SUs by either transmitting a spoofing signal in the sensing interval, or a desynchronizing signal in the code acquisition interval. By jamming the network during the transmission interval, the adversar... 9.
Detecting and Mitigating Smart Insider Jamming Attacks in MANETs Using Reputation-Based Coalition Game OpenAIRE Ashraf Al Sharah; Taiwo Oyedare; Sachin Shetty 2016-01-01 Security in mobile ad hoc networks (MANETs) is challenging due to the ability of adversaries to gather necessary intelligence to launch insider jamming attacks. The solutions to prevent external attacks on MANET are not applicable for defense against insider jamming attacks. There is a need for a formal framework to characterize the information required by adversaries to launch insider jamming attacks. In this paper, we propose a novel reputation-based coalition game in MANETs to detect and m... 10. DEFF Research Database (Denmark) Canetti, Ran; Damgård, Ivan Bjerre; Dziembowski, Stefan; 2004-01-01 Security analysis of multi-party cryptographic protocols distinguishes between two types of adversarial settings: In the non-adaptive setting the set of corrupted parties is chosen in advance, before the interaction begins. In the adaptive setting the adversary chooses who to corrupt during the course of the computation. We study the relations between adaptive security (i.e., security in the adaptive setting) and nonadaptive security, according to two definitions and in several models of computation. 11. Public-Key Cryptography OpenAIRE Lint, van, JH 2003-01-01 Part I: Theory Provable security is an important goal in the design of public-key cryptosystems. For most security properties, it is computational security that has to be considered: an attack scenario describes how adversaries interact with the cryptosystem, trying to attack it; the system can be called secure if adversaries with reasonably bounded computational means have negligible prospects of success. The lack of computational problems that are guaranteed to be hard in an appropriate sen... 12.
Post-Westgate SWAT : C4ISTAR Architectural Framework for Autonomous Network Integrated Multifaceted Warfighting Solutions Version 1.0 : A Peer-Reviewed Monograph OpenAIRE Nyagudi, Nyagudi Musandu 2013-01-01 Nations are today challenged with multiple constraints such as declining population and financial austerity, which inevitably reduce military/security forces' preparedness. Faced with well-resourced adversaries or those of the asymmetric type, only a Nation that arms itself "intelligently" and fights "smart" attains advantages in the world's ever more complex and restrictive battle-spaces. Police SWAT teams and Military Special Forces face mounting pressure and challenges from adversaries that... 13. Material control system simulator user's manual Energy Technology Data Exchange (ETDEWEB) Hollstien, R.B. 1978-01-24 This report describes the use of a Material Control System Simulator (MCSS) program for determination of material accounting uncertainty and system response to particular adversary action sequences that constitute plausible material diversion attempts. The program is intended for use in situations where randomness, uncertainty, or interaction of adversary actions and material control system components make it difficult to assess safeguards effectiveness against particular material diversion attempts. 14. DEX: self-healing expanders OpenAIRE Pandurangan, Gopal; Robinson, Peter,; Trehan, Amitabh 2016-01-01 We present a fully-distributed self-healing algorithm dex that maintains a constant degree expander network in a dynamic setting. To the best of our knowledge, our algorithm provides the first efficient distributed construction of expanders—whose expansion properties hold deterministically—that works even under an all-powerful adaptive adversary that controls the dynamic changes to the network (the adversary has unlimited computational power and knowledge of the entire network state, can decid... 15.
DEX: self-healing expanders OpenAIRE Pandurangan, Gopal; Robinson, Peter,; Trehan, Amitabh 2015-01-01 We present a fully-distributed self-healing algorithm DEX, that maintains a constant degree expander network in a dynamic setting. To the best of our knowledge, our algorithm provides the first efficient distributed construction of expanders --- whose expansion properties hold deterministically --- that works even under an all-powerful adaptive adversary that controls the dynamic changes to the network (the adversary has unlimited computational power and knowledge of the entire network ... 16. Material control system simulator user's manual International Nuclear Information System (INIS) This report describes the use of a Material Control System Simulator (MCSS) program for determination of material accounting uncertainty and system response to particular adversary action sequences that constitute plausible material diversion attempts. The program is intended for use in situations where randomness, uncertainty, or interaction of adversary actions and material control system components make it difficult to assess safeguards effectiveness against particular material diversion attempts 17. Vulnerable GPU Memory Management: Towards Recovering Raw Data from GPU OpenAIRE Zhou, Zhe; Diao, Wenrui; Liu, Xiangyu; Li, Zhou; Zhang, Kehuan; Liu, Rui 2016-01-01 In this paper, we show that security threats coming with the existing GPU memory management strategy are overlooked, which opens a back door for adversaries to freely break the memory isolation: they enable adversaries without any privilege in a computer to recover the raw memory data left by previous processes directly. More importantly, such attacks can work on not only normal multi-user operating systems, but also cloud computing platforms. To demonstrate the seriousness of such attacks, we... 18.
Optimal space-time attacks on system state estimation under a sparsity constraint Science.gov (United States) Lu, Jingyang; Niu, Ruixin; Han, Puxiao 2016-05-01 System state estimation in the presence of an adversary that injects false information into sensor readings has attracted much attention in wide application areas, such as target tracking with compromised sensors, secure monitoring of dynamic electric power systems, secure driverless cars, and radar tracking and detection in the presence of jammers. From a malicious adversary's perspective, the optimal strategy for attacking a multi-sensor dynamic system over sensors and over time is investigated. It is assumed that the system defender can perfectly detect the attacks and identify and remove sensor data once they are corrupted by false information injected by the adversary. With this in mind, the adversary's goal is to maximize the covariance matrix of the system state estimate by the end of attack period under a sparse attack constraint such that the adversary can only attack the system a few times over time and over sensors. The sparsity assumption is due to the adversary's limited resources and his/her intention to reduce the chance of being detected by the system defender. This becomes an integer programming problem and its optimal solution, the exhaustive search, is intractable with a prohibitive complexity, especially for a system with a large number of sensors and over a large number of time steps. Several suboptimal solutions, such as those based on greedy search and dynamic programming are proposed to find the attack strategies. Examples and numerical results are provided in order to illustrate the effectiveness and the reduced computational complexities of the proposed attack strategies. 19. 
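The attack-selection abstract above notes that exhaustive search over sparse (time, sensor) attack plans is intractable and proposes greedy search among other suboptimal strategies. The following is a minimal illustrative sketch of that greedy idea, not the paper's formulation: it uses a scalar random-walk state, two sensors, and hypothetical noise parameters, and assumes (as in the abstract) that attacked measurements are detected and discarded by the defender, so each attacked slot inflates the final estimation variance.

```python
import itertools

# Hypothetical toy model: scalar random walk x_{t+1} = x_t + w_t with
# process noise variance q, observed by n_sensors with noise variance r.
# Slots in `attacked` are corrupted, detected, and discarded by the
# defender, so those measurements never reduce the variance.

def final_variance(attacked, T=10, q=0.1, r=1.0, n_sensors=2, p0=1.0):
    """Scalar Riccati recursion; returns estimation variance after T steps."""
    p = p0
    for t in range(T):
        p = p + q                          # prediction step
        for s in range(n_sensors):
            if (t, s) not in attacked:     # surviving measurement
                p = p * r / (p + r)        # scalar Kalman update
    return p

def greedy_attack(budget, T=10, n_sensors=2):
    """Greedily pick (time, sensor) slots whose removal most increases
    the defender's final estimation variance, up to a sparsity budget."""
    attacked = set()
    candidates = set(itertools.product(range(T), range(n_sensors)))
    for _ in range(budget):
        best = max(candidates - attacked,
                   key=lambda slot: final_variance(attacked | {slot},
                                                   T=T, n_sensors=n_sensors))
        attacked.add(best)
    return attacked

plan = greedy_attack(budget=3)
print("attack slots:", sorted(plan))
print("variance without attack:", final_variance(set()))
print("variance with attack:   ", final_variance(plan))
```

Each greedy step costs one Riccati evaluation per remaining candidate, so the whole plan is polynomial in the number of slots, in contrast to the exponential exhaustive search the abstract rules out.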
Structure for the decomposition of safeguards responsibilities International Nuclear Information System (INIS) A major mission of safeguards is to protect against the use of nuclear materials by adversaries to harm society. A hierarchical structure of safeguards responsibilities and activities to assist in this mission is defined. The structure begins with the definition of international or multi-national safeguards and continues through domestic, regional, and facility safeguards. Facility safeguards are decomposed into physical protection and material control responsibilities. In addition, in-transit safeguards systems are considered. An approach to the definition of performance measures for a set of Generic Adversary Action Sequence Segments (GAASS) is illustrated. These GAASS's begin outside facility boundaries and terminate at some adversary objective which could lead to eventual safeguards risks and societal harm. Societal harm is primarily the result of an adversary who is successful in the theft of special nuclear material or in the sabotage of vital systems which results in the release of material in situ. Within the facility safeguards system, GAASS's are defined in terms of authorized and unauthorized adversary access to materials and components, acquisition of material, unauthorized removal of material, and the compromise of vital components. Each GAASS defines a set of "paths" (ordered sets of physical protection components) and each component provides one or more physical protection "functions" (detection, assessment, communication, delay, neutralization). Functional performance is then developed based upon component design features, the environmental factors, and the adversary attributes. An example of this decomposition is presented 20.
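The path and function decomposition described in the safeguards abstract above can be pictured as a small data structure. This is a hypothetical sketch: the component names, the probability figures, and the independence assumption for combining detection probabilities are all invented for illustration; only the function vocabulary (detection, assessment, communication, delay, neutralization) comes from the abstract.

```python
from dataclasses import dataclass

# Hypothetical sketch of the decomposition above: a path is an ordered
# set of physical protection components, each providing one or more
# functions. All names and probabilities are invented for illustration.

@dataclass
class Component:
    name: str
    functions: tuple          # e.g. ("detection", "delay")
    p_detect: float = 0.0     # detection performance, if applicable

@dataclass
class Path:
    components: list

    def p_detection(self) -> float:
        """Probability that at least one component along the path
        detects the adversary (independence assumed)."""
        p_miss = 1.0
        for c in self.components:
            if "detection" in c.functions:
                p_miss *= 1.0 - c.p_detect
        return 1.0 - p_miss

path = Path([
    Component("perimeter sensor", ("detection",), 0.9),
    Component("vault door", ("delay",)),
    Component("portal monitor", ("detection", "assessment"), 0.8),
])
print(f"path detection probability: {path.p_detection():.3f}")  # → 0.980
```

Delay and neutralization functions would enter a fuller model as time contributions rather than probabilities, which is exactly the split the EASI method (entry 3 below in this listing) formalizes.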
On deception detection in multi-agent systems and deception intent Science.gov (United States) Santos, Eugene, Jr.; Li, Deqing; Yuan, Xiuqing 2008-04-01 Deception detection plays an important role in the military decision-making process, but detecting deception is a challenging task. The deception planning process involves a number of human factors. It is intent-driven where intentions are usually hidden or not easily observable. As a result, in order to detect deception, any adversary model must have the capability to capture the adversary's intent. This paper discusses deception detection in multi-agent systems and in adversary modeling. We examined psychological and cognitive science research on deception and implemented various theories of deception within our approach. First, in multi-agent expert systems, one detection method uses correlations between agents to predict reasonable opinions/responses of other agents (Santos & Johnson, 2004). We further explore this idea and present studies that show the impact of different factors on detection success rate. Second, from adversary modeling, our detection method focuses on inferring adversary intent. By combining deception "branches" with intent inference models, we can estimate an adversary's deceptive activities and at the same time enhance intent inference. Two major kinds of deceptions are developed in this approach in different fashions. Simulative deception attempts to find inconsistency in observables, while dissimulative deception emphasizes the inference of enemy intentions. 1. The Application of materials attractiveness in a graded approach to nuclear materials security Energy Technology Data Exchange (ETDEWEB) Ebbinghaus, B. [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94551 (United States); Bathke, C. [Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States); Dalton, D.; Murphy, J. 
[National Nuclear Security Administration, US Department of Energy, 1000 Independent Ave., S. W. Washington, DC 20585 (United States) 2013-07-01 The threat from terrorist groups has recently received greater attention. In this paper, material quantity and material attractiveness are addressed through the lens of a minimum security strategy needed to prevent the construction of a nuclear explosive device (NED) by an adversary. Nuclear materials are placed into specific security categories (3 or 4 categories), which define a number of security requirements to protect the material. Materials attractiveness can be divided into four attractiveness levels, High, Medium, Low, and Very Low, that correspond to the utility of the material to the adversary and to a minimum security strategy that is necessary to adequately protect the nuclear material. We propose a graded approach to materials attractiveness that recognizes, for instance, the substantial differences in attractiveness between pure reactor-grade Pu oxide (High attractiveness) and fresh MOX fuel (Low attractiveness). In either case, an adversary's acquisition of a Category I quantity of plutonium would be a major incident, but the acquisition of Pu oxide by the adversary would be substantially worse than the acquisition of fresh MOX fuel because of the substantial differences in the time and complexity required of the adversary to process the material and fashion it into a NED. 2. Quantum position verification in bounded-attack-frequency model Science.gov (United States) Gao, Fei; Liu, Bin; Wen, QiaoYan 2016-11-01 In 2011, Buhrman et al. proved that it is impossible to design an unconditionally secure quantum position verification (QPV) protocol if the adversaries are allowed to previously share unlimited entanglements. Afterwards, people started to design secure QPV protocols in practical settings, e.g. the bounded-storage model, where the adversaries' pre-shared entangled resources are supposed to be limited. 
Here we focus on another practical constraint: it is very difficult for adversaries to perform attack operations at unboundedly high frequency. Concretely, we present a new kind of QPV protocol, called non-simultaneous QPV, and we prove the security of a specific non-simultaneous QPV protocol under the assumption that the frequency of the adversaries' attack operations is bounded, with no assumptions on their pre-shared entanglements or quantum storage. In our non-simultaneous protocol, the information about whether a signal arrives at the present time is itself part of the command. This renders the adversaries "blind"; that is, they would have to execute attack operations at unboundedly high frequency whether or not a signal arrives, which implies that non-simultaneous QPV is also secure in the bounded-storage model. 3. EASI approach to physical security evaluation International Nuclear Information System (INIS) A simple, easy to use method, called Estimate of Adversary Sequence Interruption (EASI), has been developed to evaluate physical security system performance under specified conditions of threat and system operation. The method consists of a probabilistic analysis of the interactions of basic security functions, such as detection, communications, response, etc. The evaluation can be performed on a hand-held programmable calculator. The results of the analysis are expressed in terms of the probability that the physical protection system can respond in time to interrupt specific adversary action sequences. The following input data are required: (1) Detection probability for each sensor or other means of detection. (2) Probability of communication to response force or other means of response. (3) Mean and standard deviation of response time. (4) Mean and standard deviation of the time to perform each task in the adversary action sequence. 
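An EASI-style calculation can be sketched directly from these four inputs. The sketch below is illustrative rather than the original calculator program: it assumes the common EASI convention that response time and the adversary's remaining task time are treated as normally distributed, and every numeric value in it is hypothetical.

```python
# EASI-style interruption-probability sketch. All sensor probabilities,
# response-force statistics and task times below are hypothetical examples,
# not data from any real facility.
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_interruption(detect, communicate, response, tasks):
    """
    detect      : detection probability at each step along the adversary path
    communicate : probability an alarm reaches the response force
    response    : (mean, std_dev) of response-force arrival time
    tasks       : list of (mean, std_dev) durations of each adversary task;
                  task i follows detection opportunity i
    Returns the probability that the response force arrives before the
    adversary finishes, summed over the first detection that succeeds.
    """
    r_mean, r_sd = response
    p_i, p_no_detect_yet = 0.0, 1.0
    for i, p_d in enumerate(detect):
        # Time remaining for the adversary after detection opportunity i.
        rem_mean = sum(m for m, _ in tasks[i:])
        rem_var = sum(s * s for _, s in tasks[i:])
        # P(response time < remaining adversary time), both ~ normal.
        z = (rem_mean - r_mean) / math.sqrt(rem_var + r_sd * r_sd)
        p_i += p_no_detect_yet * p_d * communicate * norm_cdf(z)
        p_no_detect_yet *= 1.0 - p_d
    return p_i

# Three detection opportunities along a hypothetical path (times in minutes).
p = p_interruption(
    detect=[0.9, 0.5, 0.3],
    communicate=0.95,
    response=(5.0, 1.5),
    tasks=[(4.0, 1.0), (3.0, 1.0), (2.0, 0.5)],
)
print(round(p, 3))
```

Note how the result is dominated by the first detection opportunity: once a sensor late in the path fires, little adversary task time remains, so the response force rarely arrives in time.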
The utility of the method depends upon the user's ability to identify significant adversary action sequences and to obtain data which properly reflect conditions created by the adversary action sequence of interest. The objective of the development is to provide a usable evaluation method which could serve as either a physical protection system design aid or as a decision aid in the licensing and inspection process. As such, it is recommended that EASI be utilized on a limited trial basis to provide information on the utility of the method and to clarify users' needs 4. ON THE OFFENSE: USING CYBER WEAPONS TO INFLUENCE COGNITIVE BEHAVIOR Directory of Open Access Journals (Sweden) Mary Fendley 2012-12-01 Full Text Available There is an increasing recognition that cyber warfare is an important area of development for targeting and weaponeering, with far-reaching effects in national defense and economic security. The ability to conduct effective operations in cyberspace relies on a robust situational awareness of events occurring in both the physical and information domains, with an understanding of how they affect the cognitive domain of friendly, neutral, and adversary population sets. The dynamic nature of the battlefield complicates efforts to understand shifting adversary motivations and intentions. There are very few approaches, to date, that systematically evaluate the effects of the repertoire of cyber weapons on the cognitive, perceptual, and behavioral characteristics of the adversary. In this paper, we describe a software environment called the Cognitive Cyber Weapon Selection Tool (CCWST) that simulates a scenario involving cyber weaponry. This tool provides the capabilities to test weapons which may induce behavioral state changes in the adversaries. 
CCWST provides the required situational awareness to the Cyber Information Operations (IO) planner to conduct intelligent weapon selection during weapon activation in order to induce the desired behavioral change in the perception of the adversary. Weapons designed to induce the cognitive state changes of deception, distraction, distrust and confusion were then tested empirically to evaluate the capabilities and expected cognitive state changes induced by these weapons. The results demonstrated that CCWST is a powerful environment within which to test and evaluate the impact of cyber weapons on influencing cognitive behavioral states during information processing. 5. The Application of materials attractiveness in a graded approach to nuclear materials security International Nuclear Information System (INIS) The threat from terrorist groups has recently received greater attention. In this paper, material quantity and material attractiveness are addressed through the lens of a minimum security strategy needed to prevent the construction of a nuclear explosive device (NED) by an adversary. Nuclear materials are placed into specific security categories (3 or 4 categories), which define a number of security requirements to protect the material. Materials attractiveness can be divided into four attractiveness levels, High, Medium, Low, and Very Low, that correspond to the utility of the material to the adversary and to a minimum security strategy that is necessary to adequately protect the nuclear material. We propose a graded approach to materials attractiveness that recognizes, for instance, the substantial differences in attractiveness between pure reactor-grade Pu oxide (High attractiveness) and fresh MOX fuel (Low attractiveness). 
In either case, an adversary's acquisition of a Category I quantity of plutonium would be a major incident, but the acquisition of Pu oxide by the adversary would be substantially worse than the acquisition of fresh MOX fuel because of the substantial differences in the time and complexity required of the adversary to process the material and fashion it into a NED 6. Self-Healing Algorithms for Byzantine Faults CERN Document Server Knockel, Jeffrey; Saia, Jared 2012-01-01 Recent years have seen significant interest in designing networks that are self-healing in the sense that they can automatically recover from adversarial attack. Previous work shows that it is possible for a network to automatically recover, even when an adversary repeatedly deletes nodes in the network. However, there have not yet been any algorithms that self-heal in the case where an adversary takes over nodes in a network. In this paper, we address this gap. In particular, we show how to maintain an overlay network over n nodes that ensures the following properties, even when an adversary controls up to t ≤ n/4 nodes. First, O(t (log* n)^2) message corruptions occur in expectation before the adversarially controlled nodes are effectively quarantined so that they cause no more corruptions. Second, the network continually provides point-to-point communication with bandwidth and latency costs th... 7. (Unconditional) Secure Multiparty Computation with Man-in-the-middle Attacks CERN Document Server Vaya, Shailesh 2010-01-01 In secure multi-party computation n parties jointly evaluate an n-variate function f in the presence of an adversary which can corrupt up to t parties. Almost all the works that have appeared in the literature so far assume the presence of authenticated channels between the parties. This assumption is far from realistic. 
Two directions of research have grown out of relaxing this (strong) assumption: (a) the adversary is virtually omnipotent and can control all the communication channels in the network; (b) only a partially connected topology of authenticated channels is guaranteed and the adversary controls a subset of the communication channels in the network. This work introduces a new setting for the (unconditional) secure multiparty computation problem which is an interesting intermediate model with respect to the above well-studied models from the literature (sharing a salient feature with both of them). We consider the problem of (unconditional) secure multi-party computation when 'some... 8. Security-by-Experiment: Lessons from Responsible Deployment in Cyberspace. Science.gov (United States) Pieters, Wolter; Hadžiosmanović, Dina; Dechesne, Francien 2016-06-01 Conceiving new technologies as social experiments is a means to discuss responsible deployment of technologies that may have unknown and potentially harmful side-effects. Thus far, the uncertain outcomes addressed in the paradigm of new technologies as social experiments have been mostly safety-related, meaning that potential harm is caused by the design plus accidental events in the environment. In some domains, such as cyberspace, adversarial agents (attackers) may be at least as important when it comes to undesirable effects of deployed technologies. In such cases, conditions for responsible experimentation may need to be implemented differently, as attackers behave strategically rather than probabilistically. In this contribution, we outline how adversarial aspects are already taken into account in technology deployment in the field of cyber security, and what the paradigm of new technologies as social experiments can learn from this. In particular, we show the importance of adversarial roles in social experiments with new technologies. PMID:25896029 9. 
10. Cultural myths and supports for rape. Science.gov (United States) Burt, M R 1980-02-01 This article describes the "rape myth" and tests hypotheses derived from social psychological and feminist theory that acceptance of rape myths can be predicted from attitudes such as sex role stereotyping, adversarial sexual beliefs, sexual conservatism, and acceptance of interpersonal violence. Personality characteristics, background characteristics, and personal exposure to rape, rape victims, and rapists are other factors used in predictions. Results from regression analysis of interview data indicate that the higher the sex role stereotyping, adversarial sexual beliefs, and acceptance of interpersonal violence, the greater a respondent's acceptance of rape myths. 
In addition, younger and better educated people reveal less stereotypic, adversarial, and proviolence attitudes and less rape myth acceptance. Discussion focuses on the implications of these results for understanding and changing this cultural orientation toward sexual assault. PMID:7373511 11. A Secure and Efficient Certificateless Short Signature Scheme Directory of Open Access Journals (Sweden) Lin Cheng 2013-07-01 Full Text Available Certificateless public key cryptography combines the advantages of traditional public key cryptography and identity-based public key cryptography, as it avoids the use of certificates and resolves the key escrow problem. In 2007, Huang et al. classified adversaries against certificateless signatures according to their attack power into normal, strong and super adversaries (ordered by increasing attack power). In this paper, we propose a new certificateless short signature scheme and prove that it is secure against both the super type I and the super type II adversaries. Our new scheme not only achieves the strongest security level but also has the shortest signature length (one group element). Compared with other short certificateless signature schemes that have a similar security level, our new scheme has lower computational cost. 12. Tamper-Proof Circuits: How to Trade Leakage for Tamper-Resilience DEFF Research Database (Denmark) Faust, Sebastian; Pietrzak, Krzysztof; Venturi, Daniele 2011-01-01 Tampering attacks are cryptanalytic attacks on the implementation of cryptographic algorithms (e.g., smart cards), where an adversary introduces faults with the hope that the tampered device will reveal secret information. Inspired by the work of Ishai et al. [Eurocrypt'06], we propose a compiler that transforms any circuit into a new circuit with the same functionality, but which is resilient against a well-defined and powerful tampering adversary. 
More concretely, our transformed circuits remain secure even if the adversary can adaptively tamper with every wire in the circuit, as long as the tampering fails with some probability δ > 0. This additional requirement is motivated by practical tampering attacks, where it is often difficult to guarantee the success of a specific attack. Formally, we show that a q-query tampering attack against the transformed circuit can be "simulated" with only... 13. Continuous Non-malleable Codes DEFF Research Database (Denmark) Faust, Sebastian; Mukherjee, Pratyay; Nielsen, Jesper Buus; 2014-01-01 Non-malleable codes are a natural relaxation of error correcting/detecting codes that have useful applications in the context of tamper-resilient cryptography. Informally, a code is non-malleable if an adversary trying to tamper with an encoding of a given message can only leave it unchanged or modify it to the encoding of a completely unrelated value. This paper introduces an extension of the standard non-malleability security notion - so-called continuous non-malleability - where we allow the adversary to tamper continuously with an encoding. This is in contrast to the standard notion of non-malleable codes, where the adversary is only allowed to tamper a single time with an encoding. We show how to construct continuous non-malleable codes in the common split-state model, where an encoding consists of two parts and the tampering can be arbitrary but has to be independent of both parts. 14. Superposition Attacks on Cryptographic Protocols DEFF Research Database (Denmark) Damgård, Ivan Bjerre; Funder, Jakob Løvstad; Nielsen, Jesper Buus; 2011-01-01 Attacks on classical cryptographic protocols are usually modeled by allowing an adversary to ask queries from an oracle. Security is then defined by requiring that as long as the queries satisfy some constraint, there is some problem the adversary cannot solve, such as compute a certain piece of information. In this paper, we introduce a fundamentally new model of quantum attacks on classical cryptographic protocols, where the adversary is allowed to ask several classical queries in quantum superposition. This is a strictly stronger attack than the standard one, and we consider the security... string model. While our protocol is classical, it is sound against a cheating unbounded quantum prover and computational zero-knowledge even if the verifier is allowed a superposition attack. Finally, we consider multiparty computation and show that for the most general type of attack, simulation-based... 15. Multiparty Computations DEFF Research Database (Denmark) Dziembowski, Stefan In this thesis we study the problem of doing Verifiable Secret Sharing (VSS) and Multiparty Computations in a model where private channels between the players and a broadcast channel are available. The adversary is active, adaptive and has unbounded computing power. The thesis is based on two... an impossibility result indicating that a similar equivalence does not hold for Multiparty Computation (MPC): we show that even if protocols are given black-box access for free to an idealized secret sharing scheme secure for the access structure in question, it is not possible to handle all relevant access... adversary structure. We propose new VSS and MPC protocols that are substantially more efficient than the ones previously known. Another contribution of [2] is an attack against a Weak Secret Sharing Protocol (WSS) of [3]. The attack exploits the fact that the adversary is adaptive. We present this attack... 16. AntiJam: Efficient Medium Access despite Adaptive and Reactive Jamming CERN Document Server Richa, Andrea; Schmid, Stefan; Zhang, Jin 2010-01-01 Intentional interference constitutes a major threat for communication networks operating over a shared medium and where availability is imperative. Jamming attacks are often simple and cheap to implement. 
In particular, today's jammers can perform physical carrier sensing in order to disrupt communication more efficiently, especially in a network of simple wireless devices such as sensor nodes, which usually operate over a single frequency (or a limited frequency band) and which cannot benefit from the use of spread spectrum or other more advanced technologies. This paper proposes the medium access (MAC) protocol AntiJam, which is provably robust against a powerful reactive adversary who can jam a (1-ε)-portion of the time steps, where ε is an arbitrary constant. The adversary uses carrier sensing to make informed decisions on when it is most harmful to disrupt communications; moreover, we allow the adversary to be adaptive and to have complete knowledge of the entire protocol history... 17. Device-independence for two-party cryptography and position verification DEFF Research Database (Denmark) Ribeiro, Jeremy; Thinh, Le Phuc; Kaniewski, Jedrzej; Quantum communication has demonstrated its usefulness for quantum cryptography far beyond quantum key distribution. One domain is two-party cryptography, whose goal is to allow two parties who may not trust each other to solve joint tasks. Another interesting application is position-based cryptography, whose goal is to use the geographical location of an entity as its only identifying credential. Unfortunately, security of these protocols is not possible against an all-powerful adversary. However, if we impose some realistic physical constraints on the adversary, there exist protocols for which security can be proven, but these so far relied on knowledge of the quantum operations performed during the protocols. In this work we give device-independent security proofs of two-party cryptography and position verification for memoryless devices under different physical constraints on the adversary... 18. 
Performance estimates for personnel access control systems Energy Technology Data Exchange (ETDEWEB) 1980-10-01 Current performance estimates for personnel access control systems use estimates of Type I and Type II verification errors. A system performance equation which addresses normal operation, the insider, and outside adversary attack is developed. Examination of this equation reveals the inadequacy of classical Type I and II error evaluations, which require detailed knowledge of the adversary threat scenario for each specific installation. Consequently, new performance measures which are consistent with the performance equation and independent of the threat are developed as an aid in selecting personnel access control systems. 19. Path enumeration program (ENUMPTH) for physical protection effectiveness evaluation Energy Technology Data Exchange (ETDEWEB) Hall, R.C. 1978-10-01 Descriptions are presented of the structure and use of ENUMPTH, a program for enumerating paths which an adversary might follow in attempting to defeat physical protection systems. The paths are evaluated in terms of the probability of detecting and then interrupting the adversary as the paths are traversed. The program is intended to be practical in orientation, selecting all paths which meet some specified minimum criteria. The nature of the physical protection issue suggests that all such paths may be of equal interest to analysts who are concerned with a total facility. An example is given to demonstrate the program's applicability to practical problems. 20. RFID Distance Bounding Protocol with Mixed Challenges to Prevent Relay Attacks Science.gov (United States) Kim, Chong Hee; Avoine, Gildas RFID systems suffer from different location-based attacks such as distance fraud, mafia fraud and terrorist fraud attacks. Among them, the mafia fraud attack is the most serious since this attack can be mounted without the notice of both the reader and the tag. 
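The path-enumeration-and-evaluation idea in the ENUMPTH entry (19) above can be sketched in a few lines. The facility graph, the per-element detection probabilities, and the 0.95 minimum criterion below are all hypothetical; this is an illustration of the approach, not the original program.

```python
# ENUMPTH-style sketch: enumerate adversary paths through a hypothetical
# facility graph and flag those whose interruption probability falls below
# a minimum criterion. All numbers are made up for illustration.
def enumerate_paths(graph, start, goal, path=None):
    """Yield every simple path from start to goal in an adjacency dict."""
    path = (path or []) + [start]
    if start == goal:
        yield path
        return
    for nxt in graph.get(start, []):
        if nxt not in path:
            yield from enumerate_paths(graph, nxt, goal, path)

def p_interrupt(path, p_detect):
    """Probability that at least one element along the path detects the adversary."""
    p_miss = 1.0
    for a, b in zip(path, path[1:]):
        p_miss *= 1.0 - p_detect[(a, b)]
    return 1.0 - p_miss

# Hypothetical site: offsite -> fence -> (door | dock) -> vault
graph = {"offsite": ["fence"], "fence": ["door", "dock"],
         "door": ["vault"], "dock": ["vault"]}
p_detect = {("offsite", "fence"): 0.3, ("fence", "door"): 0.9,
            ("fence", "dock"): 0.5, ("door", "vault"): 0.8,
            ("dock", "vault"): 0.8}

# Keep the paths that fail a (hypothetical) minimum interruption criterion.
weak = []
for path in enumerate_paths(graph, "offsite", "vault"):
    pi = p_interrupt(path, p_detect)
    if pi < 0.95:
        weak.append((path, pi))

for path, pi in weak:
    print(" -> ".join(path), round(pi, 3))
```

On this toy graph only the loading-dock route falls below the criterion, which is exactly the kind of "weakest path" output an analyst would feed back into redesign.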
An adversary performs a kind of man-in-the-middle attack between the reader and the tag. It is very difficult to prevent this attack since the adversary does not change any data between the reader and the tag. Recently distance bounding protocols measuring the round-trip time between the reader and the tag have been researched to prevent this attack. 1. A Game Theoretic Approach to Nuclear Security Analysis against Insider Threat Energy Technology Data Exchange (ETDEWEB) Kim, Kyonam; Kim, So Young; Yim, Mansung [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Schneider, Erich [Univ. of Texas at Austin, Texas (United States) 2014-05-15 2. A threat analysis framework as applied to critical infrastructures in the Energy Sector. Energy Technology Data Exchange (ETDEWEB) Michalski, John T.; Duggan, David Patrick 2007-09-01 The need to protect national critical infrastructure has led to the development of a threat analysis framework. The threat analysis framework can be used to identify the elements required to quantify threats against critical infrastructure assets and provide a means of distributing actionable threat information to critical infrastructure entities for the protection of infrastructure assets. This document identifies and describes five key elements needed to perform a comprehensive analysis of threat: the identification of an adversary, the development of generic threat profiles, the identification of generic attack paths, the discovery of adversary intent, and the identification of mitigation strategies. 3. A Game Theoretic Approach to Nuclear Security Analysis against Insider Threat International Nuclear Information System (INIS) 4. Security of the AES with a Secret S-Box DEFF Research Database (Denmark) Tiessen, Tyge; Knudsen, Lars Ramkilde; Kölbl, Stefan; 2015-01-01 How does the security of the AES change when the S-box is replaced by a secret S-box, about which the adversary has no knowledge? 
Would it be safe to reduce the number of encryption rounds? In this paper, we demonstrate attacks based on integral cryptanalysis which allow one to recover both the secret key and the secret S-box for respectively four, five, and six rounds of the AES. Despite the significantly larger amount of secret information which an adversary needs to recover, the attacks are very efficient, with time/data complexities of 2^17/2^16, 2^38/2^40 and 2^90/2^64, respectively. Another... 5. Server-Aided Two-Party Computation with Simultaneous Corruption DEFF Research Database (Denmark) Cascudo Pueyo, Ignacio; Damgård, Ivan Bjerre; Ranellucci, Samuel We consider secure two-party computation in the client-server model where there are two adversaries that operate separately but simultaneously, each of them corrupting one of the parties and a restricted subset of servers that they interact with. We model security via the local universal composability... 6. Efficient, Robust and Constant-Round Distributed RSA Key Generation DEFF Research Database (Denmark) Damgård, Ivan Bjerre; Mikkelsen, Gert Læssøe 2010-01-01 We present the first protocol for distributed RSA key generation which is constant-round, secure against malicious adversaries, and has a negligibly small bound on the error probability, even using only one iteration of the underlying primality test on each candidate number. 7. 
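The "single iteration of the underlying primality test" in the distributed RSA key-generation entry above is typically one round of a Miller-Rabin-style test. A standalone, non-distributed round can be sketched as follows; this illustrates the building block only, not the distributed protocol from the paper.

```python
def miller_rabin_round(n: int, a: int) -> bool:
    """One Miller-Rabin iteration: True if n passes witness a (probable prime)."""
    if n % 2 == 0:
        return n == 2
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)          # a^d mod n
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):    # repeatedly square, looking for -1 mod n
        x = (x * x) % n
        if x == n - 1:
            return True
    return False              # a is a witness that n is composite

print(miller_rabin_round(561, 2))     # Carmichael number, caught by witness 2
print(miller_rabin_round(104729, 2))  # the prime 104729 passes
```

A single round can err only by accepting a composite (with probability at most 1/4 per random witness), which is why the error bound of the protocol above depends on how many iterations are run per candidate.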
Vehicle barrier with access delay Science.gov (United States) Swahlan, David J; Wilke, Jason 2013-09-03 An access delay vehicle barrier for stopping unauthorized entry into secure areas by a vehicle ramming attack includes access delay features for preventing and/or delaying an adversary from defeating or compromising the barrier. A horizontally deployed barrier member can include an exterior steel casing, an interior steel reinforcing member and access delay members disposed within the casing and between the casing and the interior reinforcing member. Access delay members can include wooden structural lumber, concrete and/or polymeric members that in combination with the exterior casing and interior reinforcing member act cooperatively to impair an adversarial attack by thermal, mechanical and/or explosive tools. 8. Data Retention and Anonymity Services Science.gov (United States) Berthold, Stefan; Böhme, Rainer; Köpsell, Stefan The recently introduced legislation on data retention to aid prosecuting cyber-related crime in Europe also affects the achievable security of systems for anonymous communication on the Internet. We argue that data retention requires a review of existing security evaluations against a new class of realistic adversary models. In particular, we present theoretical results and first empirical evidence for intersection attacks by law enforcement authorities. The reference architecture for our study is the anonymity service AN.ON, from which we also collect empirical data. Our adversary model reflects an interpretation of the current implementation of the EC Directive on Data Retention in Germany. 9. Interactive animation of fault-tolerant parallel algorithms Energy Technology Data Exchange (ETDEWEB) Apgar, S.W. 1992-02-01 Animation of algorithms makes understanding them intuitively easier. This paper describes the software tool Raft (Robust Animator of Fault Tolerant Algorithms). 
The Raft system allows the user to animate a number of parallel algorithms which achieve fault tolerant execution. In particular, we use it to illustrate the key Write-All problem. It has an extensive user-interface which allows a choice of the number of processors, the number of elements in the Write-All array, and the adversary to control the processor failures. The novelty of the system is that the interface allows the user to create new on-line adversaries as the algorithm executes. 10. 78 FR 59719 - Notice of Lodging of Proposed Settlement Agreement Under The Comprehensive Environmental Response... Science.gov (United States) 2013-09-27 ..., Compensation, and Liability Act On September 24, 2013, the Department of Justice lodged a proposed Settlement..., Compensation, and Liability Act (CERCLA''), 42 U.S.C. 9607. Under the Settlement Agreement, the Apco... prejudice its adversary proceeding against the General Services Administration in connection with the... 11. Secure Identification and QKD in the Bounded-Quantum-Storage Model NARCIS (Netherlands) Damgard, I.B.; Fehr, S.; Salvail, L.; Schaffner, C. 2014-01-01 We consider the problem of secure identification: user U proves to server S that he knows an agreed (possibly low-entropy) password w, while giving away as little information on w as possible - the adversary can exclude at most one possible password for each execution. We propose a solution in the b 12. Literacy Teaching Method and Peace Building in Multi-Ethnic Communities of Nigeria Science.gov (United States) 2008-01-01 The challenge of peace building in Nigeria is increasing as communities continue to show adversary tendencies. This is happening even after many third party conflict transformation efforts have been expended to resolve and set a conducive climate for stakeholders to sustain peace. Some peace building assessment projects have indicated that the… 13. 
Physical Protection System Design Analysis against Insider Threat based on Game Theoretic Modeling Energy Technology Data Exchange (ETDEWEB) Kim, Kyo-Nam; Suh, Young-A; Yim, Man-Sung [KAIST, Daejeon (Korea, Republic of); Schneider, Erich [The University of Texas, Austin (United States) 2015-05-15 This study explores the use of game-theoretic modeling of physical protection analysis by incorporating the implications of an insider threat. The defender-adversary interaction, along with the inclusion of an insider, is demonstrated using a simplified test case problem at an experimental fast reactor system. Non-detection probability and travel time are used as baseline physical protection parameters in this model. Because one of the key features of the model is its ability to choose among security upgrades given the constraints of a budget, the study also performed a cost-benefit analysis of security upgrade options. In this study, we analyzed the expected adversarial path and security upgrades with a limited budget, with the insider threat modeled as increasing the non-detection probability. Our test case problem categorized three types of adversary paths assisted by the insider and derived the largest insider threat in terms of the budget for security upgrades. More work needs to be done to incorporate complex dimensions of insider threats, which include but are not limited to: a more realistic mapping of insider threat, accounting for information asymmetry between the adversary, insider, and defenders, and assignment of more pragmatic parameter values. 14. 
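The defender-adversary interaction in the entry above can be sketched as a small minimax: the defender chooses upgrades within a budget, the insider raises non-detection probability, and the adversary then takes the best remaining path. Everything below (paths, costs, probabilities, the insider bonus) is hypothetical and far simpler than the paper's model.

```python
# Toy defender-adversary sketch in the spirit of the game-theoretic entry
# above. All paths, costs and probabilities are hypothetical.
from itertools import combinations

# Baseline non-detection probability per adversary path; the insider is
# modeled (as in the abstract) as raising non-detection on any path.
paths = {"gate": 0.20, "fence": 0.40, "dock": 0.35}
insider_bonus = 0.15

# Upgrade -> (cost, {path: reduction in non-detection probability})
upgrades = {"cameras": (3, {"fence": 0.25}),
            "sensors": (2, {"dock": 0.20}),
            "guards":  (4, {"gate": 0.10, "fence": 0.10, "dock": 0.10})}

def adversary_value(chosen):
    """Best (for the adversary) non-detection probability after upgrades."""
    best = 0.0
    for path, p in paths.items():
        for name in chosen:
            p -= upgrades[name][1].get(path, 0.0)
        best = max(best, min(1.0, p + insider_bonus))
    return best

def best_plan(budget):
    """Defender minimizes the adversary's best response, within budget."""
    options = [(adversary_value(c), c)
               for r in range(len(upgrades) + 1)
               for c in combinations(upgrades, r)
               if sum(upgrades[n][0] for n in c) <= budget]
    return min(options)

value, plan = best_plan(budget=5)
print(sorted(plan), round(value, 2))
```

With a budget of 5, the exhaustive search prefers the two cheap targeted upgrades over the single broad one, which is the flavor of cost-benefit trade-off the study describes.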
Reviews [Arvustused] / Ahto-Lembit Lehtmets Index Scriptorium Estoniae Lehtmets, Ahto-Lembit 2006-01-01 On the albums: Satyricon "Now, Diabolical", Nitrous "Dominant Force", Ihsahn "Adversary", Keep Of Kalessin "Armada", Zyklon "Disintegrate", Enslaved "Ruun", Lacuna Coil "Karmacode", Sick Of It All "Death To Tyrants", Cult Of Luna "Somewhere Along the Highway", Scent Of Flesh "Become Malignity EP", Mythological Cold Towers "The Vanished Pantheon", Kalmah "The Black Waltz", Neglected Fields "Splenetic"

15. Bioinspired Security Analysis of Wireless Protocols DEFF Research Database (Denmark) Petrocchi, Marinella; Spognardi, Angelo; Santi, Paolo 2016-01-01 work, this paper investigates feasibility of adopting fraglets as model for specifying security protocols and analysing their properties. In particular, we give concrete sample analyses over a secure RFID protocol, showing evolution of the protocol run as chemical dynamics and simulating an adversary...

16. Generic physical protection logic trees International Nuclear Information System (INIS) Generic physical protection logic trees, designed for application to nuclear facilities and materials, are presented together with a method of qualitative evaluation of the trees for design and analysis of physical protection systems. One or more defense zones are defined where adversaries interact with the physical protection system. Logic trees that are needed to describe the possible scenarios within a defense zone are selected. Elements of a postulated or existing physical protection system are tagged to the primary events of the logic tree. The likelihood of adversary success in overcoming these elements is evaluated on a binary, yes/no basis. The effect of these evaluations is propagated through the logic of each tree to determine whether the adversary is likely to accomplish the end event of the tree. The physical protection system must be highly likely to overcome the adversary before he accomplishes his objective.
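The binary yes/no propagation this record describes can be sketched in a few lines. The node layout, field names, and the toy defense zone below are hypothetical illustrations of the general technique (AND/OR gates over primary events), not material from the cited report:

```python
# Hypothetical sketch of binary (yes/no) logic-tree evaluation: each
# protection element tagged to a primary event is judged defeatable or
# not, and the judgments are propagated through AND/OR gates.

def evaluate(node):
    """Return True if the adversary is judged likely to accomplish this event."""
    kind = node["type"]
    if kind == "primary":          # leaf: yes/no judgment for one element
        return node["defeated"]
    children = [evaluate(c) for c in node["children"]]
    if kind == "or":               # any one sub-path suffices for the adversary
        return any(children)
    if kind == "and":              # adversary must overcome every sub-event
        return all(children)
    raise ValueError(f"unknown node type: {kind}")

# Toy defense zone: adversary must defeat the fence AND (the sensor OR the guard).
tree = {
    "type": "and",
    "children": [
        {"type": "primary", "defeated": True},       # fence judged defeatable
        {"type": "or", "children": [
            {"type": "primary", "defeated": False},  # sensor judged effective
            {"type": "primary", "defeated": True},   # guard post judged defeatable
        ]},
    ],
}
print(evaluate(tree))  # True: adversary likely succeeds, so redesign is indicated
```

Re-running the evaluation for each significant site state, as the record prescribes, amounts to re-judging the leaves and propagating again.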
The evaluation must be conducted for all significant states of the site. Deficiencies uncovered become inputs to redesign and further analysis, closing the loop on the design/analysis cycle.

17. Optimal Resilient Dynamic Dictionaries DEFF Research Database (Denmark) Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Moruz, Gabriel 2007-01-01 We investigate the problem of computing in the presence of faults that may arbitrarily (i.e., adversarially) corrupt memory locations. In the faulty memory model, any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted ones. An upper bound $\del...

18. How to work through the news media International Nuclear Information System (INIS) There are essentially four steps that anyone must follow if the objective is to communicate a message through the news media: 1) Understand media (adversarial relationship, code of ethics, importance of First Amendment); 2) Redefine the relationship (become acquainted with reporter, save steps for the reporter); 3) Communicate clearly; and 4) Use alternatives when appropriate. These four steps are discussed.

19. The Danger of Economic Nationalism Institute of Scientific and Technical Information of China (English) JAMES A. DORN 2008-01-01 The United States should treat China as a normal rising power, not a probable adversary. Unlike special interest groups that are harmed by trade, no one represents future generations who will have a lower standard of living because of present government profligacy.

20. Relationships between Exposure to Rap Music Videos and Attitudes toward Relationships among African American Youth Science.gov (United States) Bryant, Yaphet 2008-01-01 The purpose of the study is to (a) predict adversarial attitudes toward male-female relationships and (b) explore the relationships between traditional agents of socialization and personal acceptance of negative images in rap videos by African American adolescents.
Participants completed psychosocial measures, viewed videos, and completed surveys…

1. 15 CFR Supplement No. 1 to Part 766 - Guidance on Charging and Penalty Determinations in Settlement of Administrative Enforcement Cases Science.gov (United States) 2010-01-01 ... Act of 1990 (28 U.S.C. 2461, note (2000)), which are codified at 15 CFR 6.4. B. Denial of export... settlement on the eve of an adversary hearing under § 766.13 are fewer, insofar as BIS has already expended... based on nuclear, biological, and chemical weapon proliferation, missile technology proliferation, ...

2. Intelligent agents for training on-board fire fighting NARCIS (Netherlands) Bosch, K. van den; Harbers, M.; Heuvelink, A.; Doesburg, W. van 2009-01-01 Simulation-based training in complex decision making often requires ample personnel for playing various roles (e.g. team mates, adversaries). Using intelligent agents may diminish the need for staff. However, to achieve goal-directed training, events in the simulation as well as the behavior of key

3. The source coding game with a cheating switcher CERN Document Server Palaiyanur, Hari; Sahai, Anant 2007-01-01 Motivated by the lossy compression of an active-vision video stream, we consider the problem of finding the rate-distortion function of an arbitrarily varying source (AVS) composed of a finite number of subsources with known distributions. Berger's paper "The Source Coding Game", IEEE Trans. Inform. Theory, 1971, solves this problem under the condition that the adversary is allowed only strictly causal access to the subsource realizations. We consider the case when the adversary has access to the subsource realizations non-causally. Using the type-covering lemma, this new rate-distortion function is determined to be the maximum of the IID rate-distortion function over a set of source distributions attainable by the adversary. We then extend the results to allow for partial or noisy observations of subsource realizations.
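The main result quoted in this record can be stated compactly. Writing \(\mathcal{P}\) for the set of subsource distributions the non-causal adversary can attain (notation mine, not the paper's), the claimed rate-distortion function is the worst IID case over that set:

```latex
% R(D) for the AVS with a non-causal ("cheating") switcher:
% the maximum of the IID rate-distortion function over the
% source distributions attainable by the adversary.
R(D) \;=\; \max_{p \in \mathcal{P}} R_{\mathrm{iid}}(p, D)
```

Here \(R_{\mathrm{iid}}(p, D)\) denotes the ordinary rate-distortion function of a memoryless source with distribution \(p\).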
We further explore the model by attempting to find the rate-distortion function when the adversary is actually helpful. Finally, a bound is developed on the uniform continuity of the I...

4. 29 CFR 18.101 - Scope. Science.gov (United States) 2010-07-01 ... ADMINISTRATIVE LAW JUDGES Rules of Evidence § 18.101 Scope. These rules govern formal adversarial adjudications... rules as the judge, means an Administrative Law Judge, an agency head, or other officer who presides at the reception of evidence at a hearing in such an adjudication....

5. Multi-Agent Planning with Planning Graph NARCIS (Netherlands) Bui, The Duy; Jamroga, Wojciech 2003-01-01 In this paper, we consider planning for multi-agent situations in STRIPS-like domains with planning graph. Three possible relationships between agents' goals are considered in order to evaluate plans: the agents may be collaborative, adversarial or indifferent entities. We propose algorithms to dea

6. Quantum Communication Attacks on Classical Cryptographic Protocols DEFF Research Database (Denmark) Damgård, Ivan Bjerre … in quantum multiparty computation. Furthermore, in the future, players in a protocol may employ quantum computing simply to improve efficiency of their local computation, even if the communication is supposed to be classical. In such cases, it no longer seems clear that a quantum adversary must be limited...

7. College Students' Attitudes toward Date Rape and Date Rape Backlash: Implications for Prevention Programs. Science.gov (United States) Woods, Susan; Bower, Douglas J. 2001-01-01 Surveyed college students regarding their acceptance of rape-myth beliefs expounded by the date rape backlash movement. Results indicated that gender, adversarial attitudes toward sexual relationships, political and sex role views, perception of false accusation vulnerability, academic honorary membership, Greek affiliation, and knowledge of a…

8.
User's guide for evaluating physical security capabilities of nuclear facilities by the EASI method International Nuclear Information System (INIS) This handbook is a guide for evaluating physical security of nuclear facilities using the "Estimate of Adversary Sequence Interruption (EASI)" method and a hand-held programmable calculator. The handbook is intended for use by personnel at facilities where special nuclear materials are used, processed, or stored. It may also be used as a design aid for such facilities by potential licensees.

9. Re/Thinking Critical Thinking: The Seductions of Everyday Life. Science.gov (United States) Alston, Kal 2001-01-01 Suggests that both critical thinking and obstacles to successful critical thinking are most commonly found in the activities of everyday life. Argues for a connective criticism approach that does not assume critical means adversarial and acknowledges that critical thinking can be used as a means of opening worlds of meaning. (KS)

10. Perceptions of the News Media's Societal Roles: How the Views of U.K. Journalism Students Changed during Their Education Science.gov (United States) Hanna, Mark; Sanders, Karen 2012-01-01 A longitudinal study of U.K. journalism undergraduates records how their attitudes on societal roles of the news media changed during university education. Students became more likely to endorse an adversarial approach toward public officials and businesses as extremely important. Yet students did not support these roles as strongly as an older…

11. 12 CFR 1705.4 - Standards for awards. Science.gov (United States) 2010-01-01 ... circumstances of the case, unless the party has committed a willful violation of law or otherwise acted in bad... adjudication in which it prevailed, unless the position of OFHEO in the adversary adjudication was... that its position was substantially justified and may do so by showing that its position was...

12. Principals Versus Teachers: Where Will We Bury the Victims?
Science.gov (United States) Sweeney, Jim 1980-01-01 Preliminary findings of a survey of Georgia school principals indicate a serious lack of confidence in teachers, prophesying conflict, disharmony, and strife as teachers seek increasing autonomy. We need to alter this adversarial relationship between teachers and administrators and to redefine their decision-making roles. (Author/SJL)

13. Investigating Stories in a Formal Dialogue Game NARCIS (Netherlands) Bex, F.J.; Prakken, H.; Besnard, P.; Doutre, S.; Hunter, A. 2008-01-01 In this paper we propose a formal dialogue game in which two players aim to determine the best explanation for a set of observations. By assuming an adversarial setting, we force the players to advance and improve their own explanations as well as criticize their opponent's explanations, thus hopefu

14. Development of a statistically based access delay timeline methodology. Energy Technology Data Exchange (ETDEWEB) Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane; Hendrickson, Stacey M. Langfitt 2013-02-01 The charter for adversarial delay is to hinder access to critical resources through the use of physical systems increasing an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating times required to complete each task with little regard to uncertainty, complexity, or decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive.
A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.

15. Using agent technology to build a real-world training application NARCIS (Netherlands) Cap, M.; Heuvelink, A.; Bosch, K. van den; Doesburg, W. van 2011-01-01 Using staff personnel for playing roles in simulation-based training (e.g. team mates, adversaries) elevates costs, and imposes organizational constraints on delivery of training. One solution to this problem is to use intelligent software agents that play the required roles autonomously. BDI modeli

16. Cognitive model supported tactical training simulation NARCIS (Netherlands) Doesburg, W.A. van; Bosch, K. van den 2005-01-01 Simulation-based tactical training can be made more effective by using cognitive software agents to play key roles (e.g. team mate, adversaries, instructor). Due to the dynamic and complex nature of military tactics, it is hard to create agents that behave realistically and support the training lead

17. 49 CFR 1016.105 - Eligibility of applicants. Science.gov (United States) 2010-10-01 ... unincorporated business, or any partnership, corporation, association, unit of local government, or organization... adversary adjudication was initiated; (3) Any organization described in section 501(c)(3) of the Internal...
cooperative association as defined in section 15(a) of the Agricultural Marketing Act (12 U.S.C....

18. Corporate Social Responsibility: Case Study of Community Expectations and the Administrative Systems, Niger Delta Science.gov (United States) Ogula, David 2012-01-01 Poor community-company relations in the Niger Delta have drawn attention to the practice of corporate social responsibility (CSR) in the region. Since the 1960s, transnational oil corporations operating in the Niger Delta have adopted various CSR strategies, yet community-company relations remain adversarial. This article examines community…

19. Intelligent agent supported training in virtual simulations NARCIS (Netherlands) Heuvelink, A.; Bosch, K. van den; Doesburg, W.A. van; Harbers, M. 2009-01-01 Simulation-based training in military decision making often requires ample personnel for playing various roles (e.g. team mates, adversaries). Usually humans are used to play these roles to ensure varied behavior required for the training of such tasks. However, there is growing conviction and evide

20. Secure and self-stabilizing clock synchronization in sensor networks NARCIS (Netherlands) Hoepman, J.H.; Larsson, A.; Schiller, E.M.; Tsigas, P. 2011-01-01 In sensor networks, correct clocks have arbitrary starting offsets and nondeterministic fluctuating skews. We consider an adversary that aims at tampering with the clock synchronization by intercepting messages, replaying intercepted messages (after the adversary's choice of delay), and capturing no

1. Einstein in Love: A Scientific Romance CERN Document Server Overbye, Dennis 2000-01-01 At its height, Einstein's marriage to Mileva was an extraordinary one - a colleague and often fierce adversary, Mileva was brilliantly matched with the scientific genius. Dennis Overbye seeks to present this scientific romance in a vivid light, telling the private story of the young Einstein.

2.
Attorney and Parent Attitudes Related to Successful Mediation Counseling of Child Custody Disputes. Science.gov (United States) Swenson, Leland C.; Heinish, D. The divorce explosion has placed a substantial burden on the judicial system of the United States. About 10 percent of divorce cases involve child custody battles. The adversarial legal process may be contrary to the children's best interest. Custody mediation has been used as an alternative to court litigation. California law requires an attempt…

3. Views of United States Physicians and Members of the American Medical Association House of Delegates on Physician-assisted Suicide. Science.gov (United States) Whitney, Simon N.; Brown, Byron W.; Brody, Howard; Alcser, Kirsten H.; Bachman, Jerald G.; Greely, Henry T. 2001-01-01 Ascertained the views of physicians and physician leaders toward legalization of physician-assisted suicide. Results indicated members of the AMA House of Delegates strongly oppose physician-assisted suicide, but rank-and-file physicians show no consensus either for or against its legalization. Although the debate is adversarial, most physicians are…

4. A New Approach to Practical Active-Secure Two-Party Computation DEFF Research Database (Denmark) Nielsen, Jesper Buus; Nordholt, Peter Sebastian; Orlandi, Claudio 2012-01-01 We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao's garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce...

5. LEGO for Two-Party Secure Computation DEFF Research Database (Denmark) Nielsen, Jesper Buus; Orlandi, Claudio 2009-01-01 This paper continues the recent line of work of making Yao's garbled circuit approach to two-party computation secure against an active adversary.
We propose a new cut-and-choose based approach called LEGO (Large Efficient Garbled-circuit Optimization): It is specifically aimed at large circuits...

6. Implementing AES via an Actively/Covertly Secure Dishonest-Majority MPC Protocol DEFF Research Database (Denmark) Damgård, Ivan Bjerre; Keller, Marcel; Keller, Enrique 2012-01-01 We describe an implementation of the protocol of Damgård, Pastro, Smart and Zakarias (SPDZ/Speedz) for multi-party computation in the presence of a dishonest majority of active adversaries. We present a number of modifications to the protocol; the first reduces the security to covert security...

7. 77 FR 35363 - 36(b)(1) Arms Sales Notification Science.gov (United States) 2012-06-13 ... a technologically advanced adversary were to obtain knowledge of the specific hardware and software... Common Test Device software, ATACMS Quality Assurance Team support, spare and repair parts, tools and... Unitary Missiles, Missile Common Test Device software, ATACMS Quality Assurance Team support, spare...

8. 77 FR 46415 - 36(b)(1) Arms Sales Notification Science.gov (United States) 2012-08-03 ... a technologically advanced adversary were to obtain knowledge of the specific hardware and software... Missiles, Missile Common Test Device software, ATACMS Quality Assurance Team support, spare and repair... Unitary Missiles, Missile Common Test Device software, ATACMS Quality Assurance Team support, spare...

9. The Failure of Legalization in Education: Alternative Dispute Resolution and the Education for All Handicapped Children Act of 1975. Science.gov (United States) Goldberg, Steven S. 1989-01-01 A federal statute provided that parents may use the judicial process to challenge educators' decisions. Describes the intent of legalization; how reaction to an adversarial system led to the use of mediation in most states; and why this alternative model is not appropriate for resolving education questions. (MLF)

10.
The Costs of "Openness" Science.gov (United States) Cleveland, Harlan 1975-01-01 The author argues that very wide consultation tends to discourage innovation and favor stand-pattism and that the very great benefits of openness and wide participation are flawed by apathy and non-participation, by muscle-binding legalisms, by processes which polarize two adversary sides, and by the encouragement of mediocrity. (JT)

11. External Labeling as a Framework for Access Control Science.gov (United States) Rozenbroek, Thomas H. 2012-01-01 With the ever increasing volume of data existing on and passing through on-line resources together with a growing number of legitimate users of that information and potential adversaries, the need for better security and safeguards is immediate and critical. Currently, most of the security and safeguards afforded on-line information are provided…

12. The Documentation Process: The Administrator's Role and the Interplay of Necessity, Support and Collaboration Science.gov (United States) Charlton, Donna; Kritsonis, William Allan 2008-01-01 Traditional teacher documentation procedures pit the administrator against the teacher. The process is adversarial and erodes the quality of the intervention. Teachers who are unsuccessful in meeting campus/school district expectations can be successfully acclimatized to the campus culture through a documentation process that combines affective…

13. Enculturation, Not Alchemy: Professionalizing Novice Writing Program Administrators. Science.gov (United States) Peters, Bradley 1998-01-01 Discusses a process of acculturation in three stages by which fledgling Writing Program Administrators can be transformed into change agents: (1) critically reading the program to locate key allies, potential advocates, and proven adversaries; (2) implementing changes on an infrastructural level to convert positive relations among colleagues into…

14. The Rhetoric of Redistricting: Ohio's Altered State.
Science.gov (United States) Lucas, David M. An event such as congressional boundary redistricting, so ripe with political confrontation, provides a fertile ground for the profuse growth of political rhetoric. The traditional two-party political system, charged by the two well-developed adversarial philosophies, generates a highly charged environment with messages begging for analysis. After…

15. Theory Z Bargaining Works: Teachers and Administrators in Two School Districts Replace Hostility with Trust. Science.gov (United States) Pheasant, Marilyn 1985-01-01 A bargaining process, begun over 10 years ago, is based on problem-solving rather than on adversarial confrontation and uses elements of participative management. It has resulted in outstanding benefits for two school districts. Called "Theory Z bargaining," the process follows a procedure based on each side treating the other with respect.…

16. Design of Simple and Efficient Revocation List Distribution in Urban areas for VANET's CERN Document Server Samara, Ghassan; Al-Salihy, Wafaa A. H. 2010-01-01 Vehicular Ad hoc Networks is one of the most challenging research areas in the field of Mobile Ad Hoc Networks. In this research we propose a flexible, simple, and scalable design for revocation list distribution in VANET, which will reduce channel overhead and eliminate the use of CRL. It will also increase the security of the network and help in identifying adversary vehicles.

17. What Conspiracy? Science.gov (United States) Olson, Gary A. 2006-01-01 College professors often speak of power relations within the university setting in adversarial terms, as a matter of "us", meaning the faculty, versus "them", which usually means all administrators. However, depicting campus administrators as participants in some organized conspiracy against faculty members is unproductive and obscures the fact…

18. Historical Perspectives of Outdoor and Wilderness Recreation Programming in the United States.
Science.gov (United States) Watters, Ron This paper traces the history of outdoor programming beginning with the influence of western expansionism and the settling of the American frontier. The late 1800s brought about a change in the national attitude from an adversarial view of wilderness to a beneficial view. This was reflected by writers such as Henry David Thoreau and John Muir.…

19. 28 CFR 24.305 - Extensions of time. Science.gov (United States) 2010-07-01 ... Judicial Administration DEPARTMENT OF JUSTICE IMPLEMENTATION OF THE EQUAL ACCESS TO JUSTICE ACT IN DEPARTMENT OF JUSTICE ADMINISTRATIVE PROCEEDINGS Procedures for Considering Applications § 24.305 Extensions... shall be conducted pursuant to the procedural rules governing adversary adjudications conducted by...

20. Generic physical protection logic trees Energy Technology Data Exchange (ETDEWEB) Paulus, W.K. 1981-10-01 Generic physical protection logic trees, designed for application to nuclear facilities and materials, are presented together with a method of qualitative evaluation of the trees for design and analysis of physical protection systems. One or more defense zones are defined where adversaries interact with the physical protection system. Logic trees that are needed to describe the possible scenarios within a defense zone are selected. Elements of a postulated or existing physical protection system are tagged to the primary events of the logic tree. The likelihood of adversary success in overcoming these elements is evaluated on a binary, yes/no basis. The effect of these evaluations is propagated through the logic of each tree to determine whether the adversary is likely to accomplish the end event of the tree. The physical protection system must be highly likely to overcome the adversary before he accomplishes his objective. The evaluation must be conducted for all significant states of the site.
Deficiencies uncovered become inputs to redesign and further analysis, closing the loop on the design/analysis cycle.

1. Robust Multiparty Computation with Linear Communication Complexity DEFF Research Database (Denmark) Hirt, Martin; Nielsen, Jesper Buus 2006-01-01 We present a robust multiparty computation protocol. The protocol is for the cryptographic model with open channels and a poly-time adversary, and allows n parties to actively securely evaluate any poly-sized circuit with resilience t < n/2. The total communication complexity in bits over the poi...

2. More Important than the Contract Is the Relationship Science.gov (United States) Burch, Patricia; Good, Annalee 2015-01-01 What should a school district procurement officer ask when he or she sits down with a sales representative from a vendor of digital education products? Who else should be at the table? How do districts and providers become partners in instruction, rather than adversaries in negotiation? These are increasingly critical questions as public school…

3. A Formal Model for the Security of Proxy Signature Schemes Institute of Scientific and Technical Information of China (English) GU Chun-xiang; ZHU Yue-fei; ZHANG Ya-juan 2005-01-01 This paper provides theoretical foundations for the secure proxy signature primitive. We present a formal model for the security of proxy signature schemes, which defines the capabilities of the adversary and the security goals that capture what it means for a proxy signature scheme to be secure. Then, we present an example of a proxy signature scheme that can be proven secure in the standard model.

4. Development of a statistically based access delay timeline methodology. Energy Technology Data Exchange (ETDEWEB) Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane; Hendrickson, Stacey M. Langfitt 2013-02-01 The charter for adversarial delay is to hinder access to critical resources through the use of physical systems increasing an adversary's task time.
The traditional method for characterizing access delay has been a simple model focused on accumulating times required to complete each task with little regard to uncertainty, complexity, or decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.

5. The Use of Family Therapy within a University Counseling Center Science.gov (United States) Jackson, Kathryn 2009-01-01 As a counterpoint to the oftentimes adversarial way that parents are viewed when they appear to be overinvolved in the lives of their college-aged students, this article advocates for the use of a family therapy perspective in university counseling centers. Benefits of this perspective include a broadening of the lens through which individual…

6. The Effect of Cross-Examination Tactics on Simulated Jury Impressions.
Science.gov (United States) Gibbs, Margaret; And Others Past research has demonstrated the negative effects of leading questions by attorneys on eyewitness testimony and has found that adversary lawyers produced less accurate testimony from eyewitnesses. This study was conducted to examine the effects of lawyer's hostile versus non-hostile behavior and lawyer's leading versus non-leading questions on…

7. Lower and Upper Bounds for Deniable Public-Key Encryption DEFF Research Database (Denmark) Bendlin, Rikke; Nielsen, Jesper Buus; Nordholt, Peter Sebastian 2011-01-01 A deniable cryptosystem allows a sender and a receiver to communicate over an insecure channel in such a way that the communication is still secure even if the adversary can threaten the parties into revealing their internal states after the execution of the protocol. This is done by allowing...

8. Cryptography in the Bounded Quantum-Storage Model DEFF Research Database (Denmark) Damgård, Ivan Bjerre; Fehr, Serge; Schaffner, Christian 2008-01-01 We initiate the study of two-party cryptographic primitives with unconditional security, assuming that the adversary's quantum memory is of bounded size. We show that oblivious transfer and bit commitment can be implemented in this model using protocols where honest parties need no quantum memory...

9. Cryptography In The Bounded Quantum-Storage Model DEFF Research Database (Denmark) Damgård, Ivan Bjerre; Salvail, Louis; Schaffner, Christian 2005-01-01 We initiate the study of two-party cryptographic primitives with unconditional security, assuming that the adversary's quantum memory is of bounded size. We show that oblivious transfer and bit commitment can be implemented in this model using protocols where honest parties need no quantum memory...

10. In Search of Interoperability Standards for Human Behaviour Representation NARCIS (Netherlands) Gunzelmann, G.; Gaughan, C.; Huiskamp, W.; Bosch, K. van den; Jong, S. de; Alexander, T.; Bruzzone, A.G.; Tremori, A. 2014-01-01 There is a long history of research to create capabilities that address the need for human behaviour representations in training simulations and other M&S application domains. In training, human behaviour models have applications as synthetic teammates and adversaries, but can also be used as a repr

11. Trust Management and Accountability for Internet Security Science.gov (United States) Liu, Wayne W. 2011-01-01 Adversarial yet interacting interdependent relationships in information sharing and service provisioning have been a pressing issue of the Internet. Such relationships exist among autonomous software agents, in networking system peers, as well as between "service users and providers." Traditional "ad hoc" security approaches effective in…

12. Divorce mediation and resolution of child custody disputes: long-term effects. Science.gov (United States) Dillon, P. A.; Emery, R. E. 1996-01-01 Separated parents randomly assigned to either mediation or traditional adversarial methods for resolving child custody disputes were surveyed nine years postsettlement. Noncustodial parents assigned to mediation reported more frequent current contact with their children and greater involvement in current decisions about them. Parents in the mediation group also reported more frequent communication about their children during the period since dispute resolution.

13. Rambo and Mother Theresa: A Judge Looks at Divorce. Science.gov (United States) Steinberg, Joseph L. 1988-01-01 Addresses the community attitude that angry adversarial divorces are normal and inevitable and asserts that, to change the divorce experience of Americans, the community attitude must change. Notes that one part of the client community created and demanded and achieved joint custody and no-fault divorce, and that it is up to the clients to demand and…

14.
Supply chain management as a competitive advantage in the Spanish grocery sector OpenAIRE Ventura, Eva; Giménez, Cristina 2002-01-01 Adversarial relationships have long dominated business relationships, but Supply Chain Management (SCM) entails a new perspective. SCM requires a movement away from arms-length relationships toward partnership style relations. SCM involves integration, co-ordination and collaboration across organisations and throughout the supply chain. It means that SCM requires internal (intraorganisational) and external (interorganisational) integration. This paper analyses the relationsh... 15. Pravastatin inhibits tumor growth through elevating the levels of apolipoprotein A1 Directory of Open Access Journals (Sweden) Chun Yeh 2016-03-01 Conclusion: This study demonstrated that pravastatin elevated ApoA1, an HDL major constituent with anti-inflammatory characteristics, which displayed strong adversary associations with tumor developments and growth. Increasing the amounts of ApoA1 by pravastatin coupled with DOX may improve the therapeutic efficacy for cancer treatment. 16. 44 CFR 334.3 - Background. Science.gov (United States) 2010-10-01 ... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Background. 334.3 Section 334.3 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND... adversaries shape the nature and gravity of the threat as well as its likelihood and timing of... 17.
PIME '89 (Public Information Materials Exchange): International workshop on public information problems of nuclear energy International Nuclear Information System (INIS) Presentations included in these proceedings describe the following: Mass media and public information on nuclear energy and radiation: striving for two-way confidence and understanding; case studies of different countries having developed nuclear programs, problems of communication between nuclear promoters and/or operators and their adversaries; public attitudes concerning nuclear power; different attitudes of men and women 18. Collaborative Divorce: An Effort to Reduce the Damage of Divorce. Science.gov (United States) Alba-Fisch, Maria 2016-05-01 Divorce has been trapped in the adversarial system of the courts, a system ill suited to the needs of a family attempting to reorganize itself and still safeguard the well-being of its members. Collaborative divorce (CD) is a relatively new approach comprising an interdisciplinary professional team trained to help the divorcing family arrive at a financial, legal, and emotional settlement. The CD approach is designed to assist both members of the couple and their children transition into a more constructive future wherein they can still be a family. The structure and adversarial approach of the courts have been replaced by collaborative structures and principles developed to encourage honesty and cooperation. The case presented illustrates how this actually works. PMID:27037997 19. Realistic noise-tolerant randomness amplification using finite number of devices Science.gov (United States) Brandão, Fernando G. S. L.; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna 2016-01-01 Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science.
Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology. PMID:27098302 20. Comparison of ICM with TPF-LEP to Prevent MAC Spoof DoS Attack in Wireless Local Area Infrastructure Network Directory of Open Access Journals (Sweden) M. Durairaj 2014-05-01 Full Text Available A comparison of Integrated Central Manager (ICM) and Traffic Pattern Filtering with Letter Envelop Protocol (TPF-LEP) is presented. Denial of Service (DoS) attack is one of the biggest perils in wireless local area infrastructure networks. It makes resources unavailable to intended users, which is accomplished by spoofing a legitimate Client/AP's Medium Access Control (MAC) address. MAC addresses are easily forged by adversary clients because they are not encrypted; the adversary sends the unencrypted management frame to the victim using a spoofed MAC address. This study compares the performance of ICM and TPF-LEP and evaluates the results using NS2. The attack scenario is simulated, and the effectiveness of the solutions is validated against the attack's consequences after the solutions are put in place.
Throughput, Packet Delivery Ratio and Packet Loss are measured to assess the performance of ICM and TPF-LEP. 1. Generation of safe optimised execution strategies for uml models DEFF Research Database (Denmark) Herbert, Luke Thomas; Herbert-Hansen, Zaza Nadja Lee When designing safety critical systems there is a need for verification of safety properties while ensuring system operations have a specific performance profile. We present a novel application of model checking to derive execution strategies, sequences of decisions at workflow branch points which optimise a set of reward variables, while simultaneously observing constraints which encode any required safety properties and accounting for the underlying stochastic nature of the system, for a fragment of the Unified Modelling Language (UML) statechart language which is extended to include modelling of workflows which exhibit stochastic behaviour. Strategy generation is made possible by performing model checking on specific permutations of the set of possible actions to generate adversaries. By evaluating quantitative properties of the generated adversaries we are able to construct an execution... 2. Another cost of being a young black male: Race, weaponry, and lethal outcomes in assaults. Science.gov (United States) Felson, Richard B; Painter-Davis, Noah 2012-09-01 We examine the effect of the race, age, and gender of victims of assault on the offenders' use of weapons and lethal intent. Evidence from the National Incident Based Reporting System (NIBRS) suggests that offenders are particularly likely to use guns against young black men - a three-way interaction - and to kill black males and young black adults. Black offenders respond more strongly to the victim's race than do white offenders. As a result of these effects, a violent incident between two young black men is about six times more likely to involve a gun than a violent incident between two young white men.
We suggest that adversary effects, i.e., an offender's tactical response to the threat posed by adversaries, help explain why violence in black communities tends to be much more serious than violence in white communities. 3. Exponential separations for one-way quantum communication complexity, with applications to cryptography CERN Document Server Gavinsky, Dmitry; Kempe, Julia; Kerenidis, Iordanis; Raz, Ran; Wolf, Ronald de 2006-01-01 We give an exponential separation between one-way quantum and classical communication protocols for two partial Boolean functions, both of which are variants of the Boolean Hidden Matching Problem of Bar-Yossef et al. Earlier such an exponential separation was known only for a relational version of the Hidden Matching Problem. Our proofs use the Fourier coefficients inequality of Kahn, Kalai, and Linial. We also give a number of applications of this separation. In particular, we provide the first example in the bounded storage model of cryptography where the key is secure if the adversary has a certain amount of classical storage, but is completely insecure if he has a similar (or even much smaller) amount of quantum storage. Moreover, in the setting of privacy amplification, we show that there exist extractors which yield a classically secure key, but are insecure against a quantum adversary. 4. Realistic noise-tolerant randomness amplification using finite number of devices. Science.gov (United States) Brandão, Fernando G S L; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna 2016-01-01 Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary.
It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology. PMID:27098302 5. Hybrid-secure MPC DEFF Research Database (Denmark) Lucas, Christoph; Raub, Dominik; Maurer, Ueli 2010-01-01 Most protocols for distributed, fault-tolerant computation, or multi-party computation (MPC), provide security guarantees in an all-or-nothing fashion. In contrast, a hybrid-secure protocol provides different security guarantees depending on the set of corrupted parties and the computational power of the adversary, without being aware of the actual adversarial setting. Thus, hybrid-secure MPC protocols allow for graceful degradation of security. We present a hybrid-secure MPC protocol that provides an optimal trade-off between IT robustness and computational privacy: For any robustness parameter ρ < n/2, we obtain one MPC protocol that is simultaneously IT secure with robustness for up to t ≤ ρ actively corrupted parties, IT secure with fairness (no robustness) for up to t < n/2, and computationally secure with agreement on abort (privacy and correctness only) for up to t < n − ρ... 6. Traffic and Security using Randomized Dispersive Routes in Heterogeneous Sensor Network CERN Document Server Karunakaran, P 2012-01-01 Traffic management and security in sensor networks present many challenges in the transmission of data in the network.
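The hybrid-secure MPC entry (5) above describes security guarantees that degrade gracefully as more parties are corrupted. The sketch below is only an illustration of that idea: the function name and the thresholds (t ≤ ρ for robustness, t < n/2 for fairness, t < n − ρ for agreement on abort, as commonly formulated in the hybrid-security literature) are our own rendering, not code or notation from the paper.

```python
def hybrid_guarantee(n, rho, t):
    """List the guarantees a hybrid-secure MPC protocol with robustness
    parameter rho still provides when t of n parties are actively
    corrupted (illustrative thresholds only)."""
    guarantees = []
    if t <= rho:
        guarantees.append("full security with robustness (IT)")
    if t < n / 2:
        guarantees.append("security with fairness (IT)")
    if t < n - rho:
        guarantees.append("security with agreement on abort (computational)")
    return guarantees or ["no guarantee"]

# Example: n = 7 parties, robustness parameter rho = 2.
print(hybrid_guarantee(7, 2, 2))  # all three guarantees hold
print(hybrid_guarantee(7, 2, 3))  # robustness lost; fairness and abort security remain
print(hybrid_guarantee(7, 2, 6))  # no guarantee
```

The point of the trade-off is visible in the example: raising rho buys more robustness but shrinks the corruption budget under which the computational abort-security still holds.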
The existing schemes consider homogeneous sensor networks, which have poor performance and scalability. Due to the many-to-one traffic pattern, sensors may communicate with only a small portion of their neighbours. Key management is the critical process in sensor nodes to secure the data. Most existing schemes establish shared keys for all the sensors no matter whether they are communicating or not, which leads to a large storage overhead. Other problems in sensor networks are compromised-node attacks and denial-of-service attacks, which occur because of their wireless nature. Existing multipath routing algorithms are vulnerable to these attacks: once an adversary acquires the routing algorithm, it can compute the same routes known to the source, and hence endanger all information sent over these routes. If an adversary performs a node compromise attack, they can easily get the encryption/decryption keys used b... 7. Energy Theft in the Advanced Metering Infrastructure Science.gov (United States) McLaughlin, Stephen; Podkuiko, Dmitry; McDaniel, Patrick Global energy generation and delivery systems are transitioning to a new computerized "smart grid". One of the principal components of the smart grid is an advanced metering infrastructure (AMI). AMI replaces the analog meters with computerized systems that report usage over digital communication interfaces, e.g., phone lines. However, with this infrastructure comes new risk. In this paper, we consider an adversary's means of defrauding the electrical grid by manipulating AMI systems. We document the methods adversaries will use to attempt to manipulate energy usage data, and validate the viability of these attacks by performing penetration testing on commodity devices. Through these activities, we demonstrate that not only is theft still possible in AMI systems, but that current AMI devices introduce a myriad of new vectors for achieving it. 8.
GNSS-based positioning: Attacks and Countermeasures CERN Document Server Papadimitratos, P 2010-01-01 Increasing numbers of mobile computing devices, user-portable, or embedded in vehicles, cargo containers, or the physical space, need to be aware of their location in order to provide a wide range of commercial services. Most often, mobile devices obtain their own location with the help of Global Navigation Satellite Systems (GNSS), integrating, for example, a Global Positioning System (GPS) receiver. Nonetheless, an adversary can compromise location-aware applications by attacking the GNSS-based positioning: It can forge navigation messages and mislead the receiver into calculating a fake location. In this paper, we analyze this vulnerability and propose and evaluate the effectiveness of countermeasures. First, we consider replay attacks, which can be effective even in the presence of future cryptographic GNSS protection mechanisms. Then, we propose and analyze methods that allow GNSS receivers to detect the reception of signals generated by an adversary, and then reject fake locations calculated because of ... 9. Fair trial in international commercial arbitration Directory of Open Access Journals (Sweden) Saleh Khedri 2015-12-01 Full Text Available A fair hearing in the courts requires the principles of procedure. Because arbitration is considered a form of private judgment, the arbitrator or arbitration panel is bound to respect the principles of civil procedure in the arbitration hearing, notwithstanding its non-ceremonial character. Equal treatment of the parties and adversarial procedure are principles that the arbitrator or arbitration panel is obliged to satisfy in proceedings with the parties to the arbitration.
Independence and impartiality are elements of equal treatment; proper notice and a full opportunity to present one's case are elements of adversarial procedure in the arbitration hearing, which the arbitrator or arbitration panel is bound to respect in proceedings between the parties. The disclosure obligation, challenges to an arbitrator's competence, applications for setting aside, and refusal of recognition and enforcement of an award are tools to ensure compliance with the principles of civil procedure in the arbitration hearing. In this paper, ways of satisfying the principles of procedure, and the sanctions attached to them, are considered. 10. Realistic noise-tolerant randomness amplification using finite number of devices Science.gov (United States) Brandão, Fernando G. S. L.; Ramanathan, Ravishankar; Grudka, Andrzej; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Szarek, Tomasz; Wojewódka, Hanna 2016-04-01 Randomness is a fundamental concept, with implications from security of modern data systems, to fundamental laws of nature and even the philosophy of science. Randomness is called certified if it describes events that cannot be pre-determined by an external adversary. It is known that weak certified randomness can be amplified to nearly ideal randomness using quantum-mechanical systems. However, so far, it was unclear whether randomness amplification is a realistic task, as the existing proposals either do not tolerate noise or require an unbounded number of different devices. Here we provide an error-tolerant protocol using a finite number of devices for amplifying arbitrary weak randomness into nearly perfect random bits, which are secure against a no-signalling adversary. The correctness of the protocol is assessed by violating a Bell inequality, with the degree of violation determining the noise tolerance threshold. An experimental realization of the protocol is within reach of current technology. 11.
Quantum-secure covert communication on bosonic channels Science.gov (United States) Bash, Boulat A.; Gheorghe, Andrei H.; Patel, Monika; Habif, Jonathan L.; Goeckel, Dennis; Towsley, Don; Guha, Saikat 2015-10-01 Computational encryption, information-theoretic secrecy and quantum cryptography offer progressively stronger security against unauthorized decoding of messages contained in communication transmissions. However, these approaches do not ensure stealth--that the mere presence of message-bearing transmissions be undetectable. We characterize the ultimate limit of how much data can be reliably and covertly communicated over the lossy thermal-noise bosonic channel (which models various practical communication channels). We show that whenever there is some channel noise that cannot in principle be controlled by an otherwise arbitrarily powerful adversary--for example, thermal noise from blackbody radiation--the number of reliably transmissible covert bits is at most proportional to the square root of the number of orthogonal modes (the time-bandwidth product) available in the transmission interval. We demonstrate this in a proof-of-principle experiment. Our result paves the way to realizing communications that are kept covert from an all-powerful quantum adversary. 12. Robust Max-Product Belief Propagation CERN Document Server Ibrahimi, Morteza; Kanoria, Yashodhan; Montanari, Andrea 2011-01-01 We study the problem of optimizing a graph-structured objective function under adversarial uncertainty. This problem can be modeled as a two-person zero-sum game between an Engineer and Nature. The Engineer controls a subset of the variables (nodes in the graph), and tries to assign their values to maximize an objective function. Nature controls the complementary subset of variables and tries to minimize the same objective. This setting encompasses estimation and optimization problems under model uncertainty, and strategic problems with a graph structure.
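Entry 12 above models robust optimization as a zero-sum game between an Engineer and Nature. As a toy illustration (the payoff matrix is invented, not from the paper), the pure-strategy maximin and minimax values of a small game can differ; von Neumann's theorem says randomized (mixed) strategies close this gap:

```python
def maximin(payoff):
    # Engineer's best guaranteed payoff with pure strategies: pick the row
    # whose worst case over Nature's columns is largest.
    return max(min(row) for row in payoff)

def minimax(payoff):
    # Nature's best cap on the Engineer's payoff with pure strategies: pick
    # the column whose best case for the Engineer is smallest.
    columns = list(zip(*payoff))
    return min(max(col) for col in columns)

# A 2x2 zero-sum game; entries are the Engineer's payoff.
game = [[3, 1],
        [0, 2]]
print(maximin(game), minimax(game))  # prints "1 2": a gap mixed strategies close
```

For this matrix the Engineer can guarantee only 1 with a pure strategy while Nature can cap the payoff at 2; the minimax theorem guarantees a common value once both players may randomize.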
Von Neumann's minimax theorem guarantees the existence of a (minimax) pair of randomized strategies that provide optimal robustness for each player against its adversary. We prove several structural properties of this strategy pair in the case of graph-structured payoff function. In particular, the randomized minimax strategies (distributions over variable assignments) can be chosen in such a way to satisfy the Markov property with respect to the gra... 13. LA PRUEBA DOCUMENTADA EN EL NUEVO SISTEMA DE JUSTICIA PENAL MEXICANO [Documented proof in the new Mexican criminal justice system] Directory of Open Access Journals (Sweden) Benavente Chorres Hesbert 2010-01-01 Full Text Available This study analyzes the forms of documented proof regulated in the codes of those Mexican states that have adapted their criminal procedure to the accusatory system with an adversarial orientation. In this sense, documented proof is understood as those proceedings, mainly statements, taken during the investigation stage, to which the law grants probative value because the source of evidence cannot attend the oral trial hearing for reasons beyond its control. 14. On localization attacks against cloud infrastructure Science.gov (United States) Ge, Linqiang; Yu, Wei; Sistani, Mohammad Ali 2013-05-01 One of the key characteristics of cloud computing is the device and location independence that enables the user to access systems regardless of their location.
Because cloud computing is heavily based on sharing resources, it is vulnerable to cyber attacks. In this paper, we investigate a localization attack that enables the adversary to leverage central processing unit (CPU) resources to localize the physical location of the server used by victims. By increasing and reducing CPU usage through the malicious virtual machine (VM), the response time from the victim VM will increase and decrease correspondingly. In this way, by embedding the probing signal into the CPU usage and correlating the same pattern in the response time from the victim VM, the adversary can find the location of the victim VM. To determine attack accuracy, we investigate features in both the time and frequency domains. We conduct both theoretical and experimental studies to demonstrate the effectiveness of such an attack. 15. Den tavse venstrefløjspolitik [The silent politics of the left] DEFF Research Database (Denmark) Dyrberg, Torben Bech 2012-01-01 , leftists have been keen to silence political adversaries by advocating the censoring of the freedom of speech, which is particularly evident during the cartoon crisis 2005/6 and in cases of hate speech. These two aspects of the politics of silence – to remain silent and to silence others – have been...... legitimized in three ways. First, by displacing the question of freedom of speech from a political right to a morality of empathy; second, by moralizing and antagonizing the political climate in good/evil, which stigmatizes the adversary; and finally, calling for self-censorship and censorship of those who do...... and radical Islamists as they are facing the same enemy, i.e., the system and the nationalist Right. The friend/enemy matrix structures leftist orientation because it is the most prominent way to cultivate an oppositional identity. When this is what counts, political principles become less important... 16.
CompChall: Addressing Password Guessing Attacks CERN Document Server Goyal, Vipul; Singh, Mayank; Abraham, Ajith; Sanyal, Sugata 2011-01-01 Even though passwords are the most convenient means of authentication, they bring with them the threat of dictionary attacks. Dictionary attacks may be of two kinds: online and offline. While offline dictionary attacks are possible only if the adversary is able to collect data for a successful protocol execution by eavesdropping on the communication channel and can be successfully countered using public key cryptography, online dictionary attacks can be performed by anyone and there is no satisfactory solution to counter them. This paper presents a new authentication protocol which is called CompChall (computational challenge). The proposed protocol uses only one way hash functions as the building blocks and attempts to eliminate online dictionary attacks by implementing a challenge-response system. This challenge-response system is designed in such a fashion that it does not pose any difficulty to a genuine user but is time consuming and computationally intensive for an adversary trying to launch a large n... 17. Defining cyber warfare Directory of Open Access Journals (Sweden) Dragan D. Mladenović 2012-04-01 Full Text Available Cyber conflicts represent a new kind of warfare that is technologically developing very rapidly. Such development results in more frequent and more intensive cyber attacks undertaken by states against adversary targets, with a wide range of diverse operations, from information operations to physical destruction of targets. Nevertheless, cyber warfare is waged through the application of the same means, techniques and methods as those used in cyber criminal, terrorism and intelligence activities. Moreover, it has a very specific nature that enables states to covertly initiate attacks against their adversaries.
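The CompChall entry (16) above describes a challenge-response system built from one-way hash functions: trivial for a genuine user's machine to answer once, but costly for an adversary issuing many login attempts. A generic client-puzzle sketch in that spirit (the function names, parameters, and difficulty are our own illustration, not the CompChall protocol itself):

```python
import hashlib
import os

def make_challenge(difficulty_bits=12):
    # The server issues a random nonce plus a required number of leading zero bits.
    return os.urandom(8), difficulty_bits

def solve(nonce, difficulty_bits):
    # The client searches for a counter whose hash clears the difficulty,
    # costing about 2**difficulty_bits hash evaluations on average.
    counter = 0
    while True:
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return counter
        counter += 1

def verify(nonce, difficulty_bits, counter):
    # Verification is a single hash, so it stays cheap for the server.
    digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

nonce, bits = make_challenge()
answer = solve(nonce, bits)
print(verify(nonce, bits, answer))  # prints "True"
```

A puzzle of this shape throttles online guessing by a tunable constant factor per attempt while adding only one hash evaluation to the server's verification path.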
The starting point in defining doctrines, procedures and standards in the area of cyber warfare is determining its true nature. In this paper, a contribution to this effort was made through the analysis of the existing state doctrines and international practice in the area of cyber warfare towards the determination of its nationally acceptable definition. 18. Spying the World from your Laptop -- Identifying and Profiling Content Providers and Big Downloaders in BitTorrent CERN Document Server Blond, Stevens Le; Fessant, Fabrice Le; Dabbous, Walid; Kaafar, Mohamed Ali 2010-01-01 This paper presents a set of exploits an adversary can use to continuously spy on most BitTorrent users of the Internet from a single machine and for a long period of time. Using these exploits for a period of 103 days, we collected 148 million IPs downloading 2 billion copies of contents. We identify the IP address of the content providers for 70% of the BitTorrent contents we spied on. We show that a few content providers inject most contents into BitTorrent and that those content providers are located in foreign data centers. We also show that an adversary can compromise the privacy of any peer in BitTorrent and identify the big downloaders that we define as the peers who subscribe to a large number of contents. This infringement on users' privacy poses a significant impediment to the legal adoption of BitTorrent. 19. Attacks and Countermeasures in Social Network Data Publishing Institute of Scientific and Technical Information of China (English) XIANG Yang 2016-01-01 With the increasing prevalence of social networks, more and more social network data are published for many applications, such as social network analysis and data mining. However, this brings privacy problems. For example, adversaries can get sensitive information of some individuals easily with little background knowledge.
How to publish social network data for analysis purposes while preserving the privacy of individuals has raised many concerns. Many algorithms have been proposed to address this issue. In this paper, we discuss this privacy problem from two aspects: attack models and countermeasures. We analyse privacy concerns, model the background knowledge that an adversary may utilize and review the recently developed attack models. We then survey the state-of-the-art privacy preserving methods in two categories: anonymization methods and differential privacy methods. We also provide research directions in this area. 20. Game theoretic analysis of physical protection system design International Nuclear Information System (INIS) The physical protection system (PPS) of a fictional small modular reactor (SMR) facility has been modeled as a platform for a game theoretic approach to security decision analysis. To demonstrate the game theoretic approach, a rational adversary with complete knowledge of the facility has been modeled attempting a sabotage attack. The adversary adjusts his decisions in response to investments made by the defender to enhance the security measures. This can lead to a conservative physical protection system design. Since defender upgrades were limited by a budget, cost benefit analysis may be conducted upon security upgrades. One approach to cost benefit analysis is the efficient frontier, which depicts the reduction in expected consequence per incremental increase in the security budget. 1. The Effective Key Length of Watermarking Schemes CERN Document Server Bas, Patrick 2012-01-01 Whereas the embedding distortion, the payload and the robustness of digital watermarking schemes are well understood, the notion of security is still not completely well defined. The approach proposed in the last five years is too theoretical and solely considers the embedding process, which is half of the watermarking scheme.
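The game-theoretic entry (20) above mentions cost-benefit analysis via an efficient frontier: the reduction in expected consequence per incremental increase in the security budget. A toy sketch (all numbers invented) of keeping only the non-dominated (cost, expected consequence) upgrade options:

```python
def efficient_frontier(options):
    """options: list of (cost, expected_consequence) pairs. Keep only the
    options that no cheaper-or-equal option matches or beats on consequence."""
    frontier = []
    best_consequence = float("inf")
    for cost, consequence in sorted(options):
        if consequence < best_consequence:  # strictly improves on anything cheaper
            frontier.append((cost, consequence))
            best_consequence = consequence
    return frontier

# Hypothetical upgrade packages: (budget spent, expected sabotage consequence).
upgrades = [(0, 100), (10, 80), (15, 85), (25, 40), (30, 40), (50, 10)]
print(efficient_frontier(upgrades))
# prints "[(0, 100), (10, 80), (25, 40), (50, 10)]"
```

Dominated options like (15, 85), which costs more than (10, 80) while reducing risk less, never belong on the frontier, so a budget-constrained defender only needs to compare the surviving points.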
This paper proposes a new measurement of watermarking security, called the effective key length, which captures the difficulty for the adversary to get access to the watermarking channel. This new methodology is applied to additive spread spectrum schemes where theoretical and practical computations of the effective key length are proposed. It shows that these schemes are not secure as soon as the adversary gets observations in the Known Message Attack context. 2. Security and Composability of Randomness Expansion from Bell Inequalities CERN Document Server Fehr, Serge; Schaffner, Christian 2011-01-01 The nonlocal behavior of quantum mechanics enables the generation of guaranteed fresh randomness from an untrusted device that consists of two nonsignalling components. Since the generation process requires some initial fresh randomness to act as a catalyst, one also speaks of randomness expansion. Previous works showed the freshness of the generated randomness only for an adversary that holds no quantum side information, or, equivalently, has measured all quantum side information before the randomness is generated by the device. Thus, until now it was unclear if and how much fresh randomness can be generated by an untrusted device in the presence of an adversary that maintains a quantum state. In this work, we show that security against quantum side information comes "for free". Specifically, we show that with the same procedure, the very same amount of randomness can be generated in the presence of quantum side information as can be generated without any (quantum or classical) side information. Our result on the ... 3. On Adaptive vs.
Non-adaptive Security of Multiparty Protocols DEFF Research Database (Denmark) Canetti, Ran; Damgård, Ivan Bjerre; Dziembowski, Stefan; 2001-01-01 Security analysis of multiparty cryptographic protocols distinguishes between two types of adversarial settings: In the non-adaptive setting, the set of corrupted parties is chosen in advance, before the interaction begins. In the adaptive setting, the adversary chooses who to corrupt during the course of the computation. We study the relations between adaptive security (i.e., security in the adaptive setting) and non-adaptive security, according to two definitions and in several models of computation. While affirming some prevailing beliefs, we also obtain some unexpected results. Some highlights of our results are: – According to the definition of Dodis-Micali-Rogaway (which is set in the information-theoretic model), adaptive and non-adaptive security are equivalent. This holds for both honest-but-curious and Byzantine adversaries, and for any number of parties. – According... 4. A new proxy signature with revocation based on security advancement Science.gov (United States) Mat-Isa, M.; Ismail, E. S. 2013-11-01 In proxy signature schemes with revocation, an original signer delegates his signing capability to a proxy signer on behalf of the original signer and revokes delegations whenever necessary. Currently, the security of the previous schemes is based on a single hard problem such as factoring or discrete logarithms. These schemes appear secure today, but in the near future, if an adversary finds a solution to these hard problems, the schemes will no longer be secure. To solve this problem we develop a new proxy signature scheme with revocation based on two hard problems: factoring and discrete logarithms. The new scheme offers a higher level of security than normal schemes since it is hard for an adversary to solve the two hard problems simultaneously. 5.
Tailored Security and Safety for Pervasive Computing
Science.gov (United States)
Blass, Erik-Oliver; Zitterbart, Martina
Pervasive computing makes high demands on security: devices are seriously resource-restricted, communication takes place spontaneously, and adversaries might control some of the devices. We claim that 1.) today's research, studying traditional security properties for pervasive computing, leads to inefficient, expensive, unnecessarily strong, and unwanted security solutions. Instead, security solutions tailored to the demands of a user, the scenario, or the expected adversary are more promising. 2.) Today's research on security in pervasive computing makes naive, inefficient, and unrealistic assumptions regarding safety properties, in particular the quality of basic communication. Therefore, future security research has to consider safety characteristics and has to jointly investigate security and safety for efficient, tailored solutions.

6. 'The ravages of permissiveness': sex education and the permissive society
Science.gov (United States)
Hampshire, James; Lewis, Jane
2004-01-01
In this article we explore how sex education in schools has become an adversarial political issue. Although sex education has never been a wholly uncontroversial subject, we show that for two decades after the Second World War there was a broad consensus among policy-makers that it offered a solution to public health and social problems, especially venereal disease. From the late 1960s, this consensus came under attack. As part of a wider effort to reverse the changes associated with the 'permissive' society and legislation of the late 1960s, moral traditionalists and pro-family campaigners sought to problematize sex education. They depicted it as morally corrupting and redefined it as a problem rather than a public health solution. Henceforth, the politics of sex education became increasingly polarized and adversarial.
We conclude that the fractious debates about sex education in the 1980s and 1990s are a legacy of this reaction against the permissive society.

7. Secure Neighbor Position Discovery in VANETs
CERN Document Server
Fiore, Marco; Chiasserini, Carla Fabiana; Papadimitratos, Panagiotis
2010-01-01
Many significant functionalities of vehicular ad hoc networks (VANETs) require that nodes have knowledge of the positions of other vehicles, and notably of those within communication range. However, adversarial nodes could provide false position information or disrupt the acquisition of such information. Thus, in VANETs, the discovery of neighbor positions should be performed in a secure manner. In spite of a multitude of security protocols in the literature, there is no secure discovery protocol for neighbor positions. We address this problem in our paper: we design a distributed protocol that relies solely on information exchange among one-hop neighbors, we analyze its security properties in the presence of one or multiple (independent or colluding) adversaries, and we evaluate its performance in a VANET environment using realistic mobility traces. We show that our protocol can be highly effective in detecting falsified position information, while maintaining a low rate of false positive detections.

8. Certified Randomness from a Two-Level System in a Relativistic Quantum Field
CERN Document Server
Thinh, Le Phuc; Martin-Martinez, Eduardo
2016-01-01
Randomness is an indispensable resource in modern science and information technology. Fortunately, an experimentally simple procedure exists to generate randomness with well-characterized devices: measuring a quantum system in a basis complementary to its preparation. Towards realizing this goal one may consider using atoms or superconducting qubits, promising candidates for quantum information processing. However, their unavoidable interaction with the electromagnetic field affects their dynamics.
At large time scales, this can result in decoherence. Smaller time scales in principle avoid this problem, but may not be well analysed under the usual rotating-wave and single-mode approximations (RWA and SMA), which break the relativistic nature of quantum field theory. Here, we use a fully relativistic analysis to quantify the information that an adversary with access to the field could get on the result of an atomic measurement. Surprisingly, we find that the adversary's guessing probability is not minimized for ...

9. Route Discovery and Hop Node Verification to Ensure Authenticated Data Transmissions in MANET
Directory of Open Access Journals (Sweden)
Anitha, M.
2014-03-01
Full Text Available
In mobile ad hoc networks, position-aided routing protocols can offer a significant performance increase over traditional ad hoc routing protocols. Position information is broadcast, however, so everyone, including the enemy, can receive it. Routes may be disconnected due to dynamic movement of nodes. Such networks are more vulnerable to both internal and external attacks due to the presence of adversarial nodes. These nodes affect the performance of routing protocols in ad hoc networks. So it is essential to identify the neighbours in a MANET. "Neighbor Position Verification" (NPV) is a routing protocol designed to protect the network from adversary nodes by verifying the position of neighbor nodes to improve security, efficiency and performance in MANET routing.

10. Modern Air&Space Power and political goals at war
Science.gov (United States)
Özer, Güngör
2014-05-01
Modern Air and Space Power is increasingly becoming a political tool. In this article, Air and Space Power as a political tool will be discussed.
The primary purpose of this article is to examine how Air and Space Power can contribute to security, and to determine whether it can achieve political goals on its own at war, using the SWOT analysis method and analysing the role of Air and Space Power in Operation Unified Protector (Libya) as a case study. In conclusion, Air and Space Power may not be sufficient to achieve the political goals on its own. However, it may partially reach the political aims against the adversary on its own, depending upon the situation. Moreover, it can alone persuade the adversary to alter its behavior(s) in war.

11. Routing Security Issues in Wireless Sensor Networks: Attacks and Defenses
CERN Document Server
Sen, Jaydip
2011-01-01
Wireless Sensor Networks (WSNs) are rapidly emerging as an important new area in wireless and mobile computing research. Applications of WSNs are numerous and growing, and range from indoor deployment scenarios in the home and office to outdoor deployment scenarios in an adversary's territory in a tactical battleground (Akyildiz et al., 2002). For military environments, dispersal of WSNs into an adversary's territory enables the detection and tracking of enemy soldiers and vehicles. For home/office environments, indoor sensor networks offer the ability to monitor the health of the elderly and to detect intruders via a wireless home security system. In each of these scenarios, lives and livelihoods may depend on the timeliness and correctness of the sensor data obtained from dispersed sensor nodes. As a result, such WSNs must be secured to prevent an intruder from obstructing the delivery of correct sensor data and from forging sensor data. To address the latter problem, end-to-end data integrity checksums and pos...

12. Circuit-extension handshakes for Tor achieving forward secrecy in a quantum world
Directory of Open Access Journals (Sweden)
Schanck, John M.
2016-10-01
Full Text Available
We propose a circuit extension handshake for Tor that is forward secure against adversaries who gain quantum computing capabilities after session negotiation. In doing so, we refine the notion of an authenticated and confidential channel establishment (ACCE) protocol and define pre-quantum, transitional, and post-quantum ACCE security. These new definitions reflect the types of adversaries that a protocol might be designed to resist. We prove that, with some small modifications, the currently deployed Tor circuit extension handshake, ntor, provides pre-quantum ACCE security. We then prove that our new protocol, when instantiated with a post-quantum key encapsulation mechanism, achieves the stronger notion of transitional ACCE security. Finally, we instantiate our protocol with NTRUEncrypt and provide a performance comparison between ntor, our proposal, and the recent design of Ghosh and Kate.

13. Analysis of Information Leakage in Quantum Key Agreement
Institute of Scientific and Technical Information of China (English)
LIU Sheng-li; ZHENG Dong; CHENG Ke-fei
2006-01-01
Quantum key agreement is one of the approaches to unconditional security. Since the 1980s, different protocols for quantum key agreement have been proposed and analyzed. A new quantum key agreement protocol was presented in 2004, and a detailed analysis of the protocol was given. The possible game played between legitimate users and the enemy was described: sitting in the middle, an adversary can mount a "man-in-the-middle" attack to cheat the sender and receiver. The information leaked to the adversary is essential to the length of the final quantum secret key. It was shown how to determine the amount of information leaked to the enemy and the amount of uncertainty between the legitimate sender and receiver.

14.
Towards a Bio-inspired Security Framework for Mission-Critical Wireless Sensor Networks
Science.gov (United States)
Ren, Wei; Song, Jun; Ma, Zhao; Huang, Shiyong
Mission-critical wireless sensor networks (WSNs) have found numerous promising applications in civil and military fields. However, the functionality of WSNs relies extensively on their security capability for detecting and defending against sophisticated adversaries, such as Sybil, wormhole and mobile adversaries. In this paper, we propose a bio-inspired security framework to provide intelligence-enabled security mechanisms. The scheme is composed of a middleware, multiple agents and mobile agents. The agents monitor network packets and host activities, make decisions, and launch corresponding responses. The middleware provides an infrastructure for the communication between the various agents and the corresponding mobility. Certain cognitive models and intelligent algorithms, such as the Layered Reference Model of the Brain and Self-Organizing Neural Networks with Competitive Learning, are explored in the context of sensor networks that have resource constraints. The security framework and implementation are also described in detail.

15. Verecundia, risa y decoro: Cicerón y el arte de insultar [Verecundia, laughter and decorum: Cicero and the art of insulting]
OpenAIRE
Mas, Salvador
2015-01-01
One can speak about laughter in many ways; one can take, for example, a physiological approach, or a psychological one, or even a third, sociological one, which the ancients did not completely disregard. Cicero, however, preferred to focus on the rhetorical possibilities of laughter and humor. The relevance of jest and jokes in winning an audience's favor or ridiculing an adversary is undeniable; and, given their effectiveness, jest and jokes can also...

16. Assessment of procedures and preliminary software design for fault tree synthesis
Energy Technology Data Exchange (ETDEWEB)
Payne, H.J.
1978-10-01
The Safeguards Effectiveness Assessment methodology was applied to the assessment of the Material Control and Accounting (MC and A) system. This document covers the creation of a representation of the MC and A system and potential adversary actions as a directed network (digraph), and the synthesis of a fault tree from the digraph. It is shown that the Lapp-Powers approach to constructing the digraph is not capable of handling the MC and A assessment problem. Software functional specifications are given. (DLC)

17. The interaction between potential criminals' and victims' demands for guns
OpenAIRE
Baç, Mehmet
2009-01-01
I develop a model with endogenous gun ownership and study the interaction between the demands for guns by heterogeneous potential offenders and victims. I show that the interaction depends on the pervasiveness of guns, injury probabilities and, in particular, the impact of the gun on the probability of success against armed relative to unarmed adversaries. While the sanction on armed offense is maximal under plausible conditions, the sanction on unarmed offense balances direct deterrence benef...

18. The vulnerability of social networking media and the insider threat: new eyes for bad guys
OpenAIRE
Lenkart, John J.
2011-01-01
CHDS State/Local. Approved for public release; distribution is unlimited.
Social networking media introduces a new set of vulnerabilities to protecting an organization's sensitive information. Competitors and foreign adversaries are actively targeting U.S. industry to acquire trade secrets to undercut U.S. business in the marketplace. Of primary concern in this endeavor is an insider's betrayal of an organization, witting or unwitting, by providing sensitive information to a hostile outsi...

19.
The Vulnerability of Social Networking Media and the Insider Threat: New Eyes for Bad Guys [video]
OpenAIRE
Lenkart, John; Center for Homeland Defense and Security, Naval Postgraduate School
2012-01-01
Social networking media introduces a new set of vulnerabilities to protecting an organization's sensitive information. Adversaries are actively targeting U.S. industry to acquire trade secrets to undercut U.S. business in the marketplace. Of primary concern is an insider's betrayal of an organization by providing sensitive information to a hostile outsider. Social engineering, when coupled with the new and widespread use of social networking media, becomes more effective by exploiting the wea...

20. Military Strategy vs. Military Doctrine
DEFF Research Database (Denmark)
Barfoed, Jacob
2015-01-01
The article argues that while doctrine represents the more scientific side of warfare, strategy represents the artistic side. Existing doctrine will almost never meet the requirements for winning the next war; it is through the artistic application of generic peacetime doctrine to the specific strategic and operational context, using doctrine as building blocks for a context-specific military strategy, that the military commander outwits and defeats or coerces the adversary and achieves the military objectives.

1. The USSR/Russia, Norway and international co-operation on environmental matters in the Arctic, 1984-1996
OpenAIRE
Karelina, Irina
2013-01-01
This thesis examines the USSR, Norway and international cooperation on environmental matters in the Arctic (1984-1996). During the Cold War, the region attracted much attention from the main adversaries. It was a playground for strategic planners and a laboratory for the improvement of military technology. But at the same time these territories were also - at least potentially - a source for contacts between scientists of the East and the West. Especially in the last decade of the Cold War,...

2.
Large-scale security analysis of the web: Challenges and findings
OpenAIRE
Van Goethem, Tom; Ping CHEN; Nikiforakis, Nick; Desmet, Lieven; Joosen, Wouter
2014-01-01
As the web expands in size and adoption, so does the interest of attackers who seek to exploit web applications and exfiltrate user data. While there is a steady stream of news regarding major breaches and millions of user credentials compromised, it is logical to assume that, over time, the applications of the bigger players of the web are becoming more secure. However, as these applications become resistant to most prevalent attacks, adversaries may be tempted to move to easier, unprotected...

3. A Cross-Platform Collection of Social Network Profiles
OpenAIRE
Veiga, Maria Han; Eickhoff, Carsten
2016-01-01
The proliferation of Internet-enabled devices and services has led to a shifting balance between digital and analogue aspects of our everyday lives. In the face of this development there is a growing demand for the study of privacy hazards, the potential for unique user de-anonymization and information leakage between the various social media profiles many of us maintain. To enable the structured study of such adversarial effects, this paper presents a dedicated dataset of cross-platform soci...

4. On the Linearization of Human Identification Protocols: Attacks based on Linear Algebra, Coding Theory and Lattices
OpenAIRE
Asghar, H. J.; Steinfeld, R.; Li, S.; Kaafar, M. A.; Pieprzyk, J.
2015-01-01
Human identification protocols are challenge-response protocols that rely on human computational ability to reply to random challenges from the server, based on a public function of a shared secret and the challenge, to authenticate the human user. One security criterion for a human identification protocol is the number of challenge-response pairs the adversary needs to observe before it can deduce the secret. In order to increase this number, protocol designers have tried to construct protocol...

5.
Feasibility of developing a surrogate missile system for the purpose of combat systems testing, evaluation, and watchstander proficiency
OpenAIRE
Elzner, Benjamin Asher
2014-01-01
Approved for public release; distribution is unlimited.
Aegis readiness is an increasing concern as ships age, Navy budgets shrink, and potential adversaries make strides toward combat power parity in diverse regions around the world. Keys to combat effectiveness are materiel readiness and crew proficiency. Live-fire missile exercises are a proven way to gauge the former while contributing to the latter, but the use of combat missiles for this purpose is both expensive and depletes the inve...

6. A Cost Effective RFID Based Customized DVD-ROM to Thwart Software Piracy
Directory of Open Access Journals (Sweden)
Sudip Dogra
2009-10-01
Full Text Available
Software piracy has been a perilous adversary of the software industry from the very beginning of the latter's development into a significant business. No foolproof system has yet been developed to appropriately tackle this vile issue. In our scheme, we have tried to develop a way to tackle this problem using the very recently developed technology of RFID.

7. Team 6: Joint Capability Metamodel-Test-Metamodel Integration with Data Farming
OpenAIRE
Beach, T.; Dryer, D.; Way, H.; Sanchez, S.; Kelton, W.D.; Schamburg, J.; Martin, D.
2007-01-01
From Scythe: Proceedings and Bulletin of the International Data Farming Community, Issue 2, Workshop 14. US adversaries are continuously seeking new ways to threaten US interests at home and abroad. In order to counter these threats, now more than ever, commanders must seek to leverage existing and emerging joint capabilities effectively in a variety of unique contexts. Achieving mission effectiveness in today's joint operational environment demands robust synerg...

8.
Trusted Objects
Energy Technology Data Exchange (ETDEWEB)
CAMPBELL, PHILIP L.; PIERSON, LYNDON G.; WITZKE, EDWARD L.
1999-10-27
In the world of computers, a trusted object is a collection of possibly-sensitive data and programs that can be allowed to reside and execute on a computer, even on an adversary's machine. Beyond the scope of one computer, we believe that network-based agents in high-consequence and highly reliable applications will depend on this approach, and that the basis for such objects is what we call "faithful execution."

9. The Invisible Hand in Legal and Political Theory
OpenAIRE
Vermeule, Cornelius Adrian
2010-01-01
Theorists have offered invisible-hand justifications for a range of legal institutions, including the separation of powers, free speech, the adversary system of litigation, criminal procedure, the common law, and property rights. These arguments are largely localized, with few comparisons across contexts and no general account of how invisible-hand justifications work. This essay has two aims. The first is to identify general conditions under which an invisible-hand justification will succeed...

10. Poisoned Feedback: The Impact of Malicious Users in Closed-Loop Multiuser MIMO Systems
CERN Document Server
Mukherjee, Amitav
2010-01-01
Accurate channel state information (CSI) at the transmitter is critical for maximizing spectral efficiency on the downlink of multi-antenna networks. In this work we analyze a novel form of physical-layer attack on such closed-loop wireless networks. Specifically, this paper considers the impact of deliberately inaccurate feedback by malicious users in a multiuser multicast system. Numerical results demonstrate the significant degradation in performance of closed-loop transmission schemes due to intentional feedback of false CSI by adversarial users.

11.
Information operations, an evolutionary step for the Mexican Armed Forces
OpenAIRE
Schulz, David Vargas
2007-01-01
This thesis focuses on the Mexican Armed Forces' ability to deal with existing and future unconventional threats and insurgencies. The modern Mexican Armed Forces are the result of an enduring evolutionary process, which has made the necessary changes to deal with the emerging threats against the state. Mexico's criminal threat has evolved because of 9/11 and because of the U.S.-led crackdown on Colombian drug cartels. Mexico's modern adversary is well versed in waging mass media campa...

12. Relational changes between Statoil and suppliers in the last sesquidecade
OpenAIRE
Slaattelid, Andreas Hollund
2015-01-01
Research in relations management has burgeoned in the last sixty years, and the literature has classified attributes of interorganizational relations into two models: the collaborative, where integrated teams, flexibility, shared information and close relationships are cultivated; and the adversarial, which bases its premise on market forces, formal communication, and the entitlements of contract. The Norwegian supply and service industry has developed in parallel with Statoil, and both parti...

13. Body Cultures: the Venezuelan Holy Family
OpenAIRE
Guerrero, Javier
2012-01-01
In this article, I propose reading the body as a privileged space in which to debate Venezuelan politics. I expose the violent metaphorical and allegorical operations that manage to disfigure the national bodies, taking them to the very limits of monstrosity before normalizing them. Notwithstanding the compulsion to denounce the political adversary that defiles the Venezuelan 'holy' family, the need to preserve the national body par excellence is stronger than these differences and the incidental poss...

14.
Dynamic Packet Scheduling in Wireless Networks
OpenAIRE
Kesselheim, Thomas
2012-01-01
We consider protocols that serve communication requests arising over time in a wireless network that is subject to interference. Unlike previous approaches, we take the geometry of the network and power control into account, both allowing to increase the network's performance significantly. We introduce a stochastic and an adversarial model to bound the packet injection. Although taken as the primary motivation, this approach is not only suitable for models based on the signal-to-interference...

15. Egalitarian computing
OpenAIRE
Biryukov, Alex; Khovratovich, Dmitry
2016-01-01
In this paper we explore several contexts where an adversary has an upper hand over the defender by using special hardware in an attack. These include password processing, hard-drive protection, cryptocurrency mining, resource sharing, code obfuscation, etc. We suggest memory-hard computing as a generic paradigm, where every task is amalgamated with a certain procedure requiring intensive access to RAM both in terms of size and (very importantly) bandwidth, so that transferring the com...

16. SCM and extended integration at the lower tiers of the construction supply chain: An explorative study in the Dutch construction industry
OpenAIRE
Pryke, S. D.; Broft, R.; Badi, S. M.
2014-01-01
Several studies have underlined the potential of Supply Chain Management (SCM) in meeting the formidable challenges associated with fragmentation, adversarial relationships and insufficient customer focus in the delivery of construction projects (e.g. Dainty et al., 2001; Cox and Ireland, 2002; Gadde and Dubois, 2010). However, there remains a paucity of properly documented examples of successfully implemented SCM initiatives, particularly at the lower tiers of the supply chain. This study se...

17.
On the Privacy of Two Tag Ownership Transfer Protocols for RFIDs
OpenAIRE
Abyaneh, Mohammad Reza Sohizadeh
2012-01-01
IEEE International Conference for Internet Technology and Secured Transactions (ICITST 2011), Abu Dhabi, UAE.
In this paper, the privacy of two recent RFID tag ownership transfer protocols is investigated against the tag owners as adversaries. The first protocol, called ROTIV, is a scheme which provides privacy-preserving ownership transfer by using HMAC-based authentication with public key encryption. However, our passive attack on this protocol shows that any leg...

18. Radar operation in a hostile electromagnetic environment
Energy Technology Data Exchange (ETDEWEB)
Doerry, Armin Walter
2014-03-01
Radar ISR does not always involve cooperative or even friendly targets. An adversary has numerous techniques available to him to counter the effectiveness of a radar ISR sensor. These generally fall under the banner of jamming, spoofing, or otherwise interfering with the EM signals required by the radar sensor. Consequently, mitigation techniques are prudent to retain the efficacy of the radar sensor. We discuss in general terms a number of mitigation techniques.

19. Entropy of Graphical Passwords: Towards an Information-Theoretic Analysis of Face-Recognition Based Authentication
OpenAIRE
Rass, Stefan; Schuller, David; Kollmitzer, Christian
2010-01-01
We present an information-theoretic discussion of authentication via graphical passwords, and devise a model for entropy estimation. Our results make face-recognition-based authentication comparable to standard password authentication in terms of the uncertainty (Shannon entropy) that an adversary is confronted with in both situations. It is widely known that cognitive abilities strongly determine the choice of alphanumeric passwords as well as graphical passwords, and w...

20. Editorial
OpenAIRE
Sweeney, Delma
2014-01-01
Conflict is a natural part of life.
Yet when faced with difficult situations, people can slip into adversarial and destructive behaviours that exacerbate reactivity, causing an escalation into estrangements and violence. Thoughtful and strategic management of likely circumstances known to cause difficulties could ameliorate such reactions. The first two articles in this second publication of JMACA examine elements of the root causes of issues that spark such hazards and seek to find ways forwa...

1. ISR systems: Past, present, and future
Science.gov (United States)
Henry, Daniel J.
2016-05-01
Intelligence, Surveillance, and Reconnaissance (ISR) systems have been in use for thousands of years. Technology and CONOPS have continually evolved and morphed to meet ever-changing information needs and adversaries. Funding sources, constraints and procurement philosophies have also evolved, requiring cost-effective innovation to field marketable products which maximize the effectiveness of the Tasking, Capture, Processing, Exploitation, and Dissemination (TCPED) information chain. This paper describes the TCPED information chain and the evolution of ISR (past, present, and future).

2. RFID Technology Based Attendance Management System
OpenAIRE
Sumita Nainan; Romin Parekh; Tanvi Shah
2013-01-01
RFID is a nascent technology, deeply rooted in its early use of radar to detect adversary planes during World War II. A plethora of industries have leveraged the benefits of RFID technology for enhancements in sectors such as the military, sports, security, airlines, animal farms, healthcare and other areas. Industry-specific key applications of this technology include vehicle tracking, automated inventory management, animal monitoring, secure store checkouts, and supply chain mana...

3. Space Station Program threat and vulnerability analysis
Science.gov (United States)
Van Meter, Steven D.; Veatch, John D.
1987-01-01
An examination has been made of the physical security of the Space Station Program at the Kennedy Space Center in a peacetime environment, in order to furnish facility personnel with threat/vulnerability information. A risk-management approach is used to prioritize threat-target combinations that are characterized in terms of 'insiders' and 'outsiders'. Potential targets were identified and analyzed with a view to their attractiveness to an adversary, as well as to the consequentiality of the resulting damage.

4. Court Supervised Institutional Transformation in South Africa
OpenAIRE
Erasmus, Deon; Hornigold, Angus Lloyd
2015-01-01
The traditional adversarial model of litigation in South Africa operates on the basis that two or more parties approach the court, each with its own desired outcome. The court is then obliged to decide in favour of one of the parties. A different model of litigation is emerging in South African law. This model involves actions against public institutions that are failing to comply with their constitutional mandate. In this type of litigation there is seldom a dispute regarding the eventu...

5. Towards an American Model of Criminal Process: The Reform of the Polish Code of Criminal Procedure
Directory of Open Access Journals (Sweden)
Roclawska, Monika
2014-06-01
Full Text Available
In September 2013, the Polish Parliament passed an amendment to the Code of Criminal Procedure. The legislators decided to expand a number of adversarial elements present in current Polish criminal proceedings. When these changes come into effect (July 1, 2015), Polish criminal procedure will be similar to American regulations, in which the judge's role is to be an impartial arbitrator, not an investigator.

6.
Garbage In, Garbage Out: The Court Interpreter's Lament
OpenAIRE
Mikkelson, Holly
2012-01-01
Interpreters in all settings, in all parts of the world, and throughout history have lamented the poor quality of the language they must deal with in source texts. This chapter will review some recent publications on interpreting quality criteria, user expectations, and the associated challenges facing interpreters in different settings (Kondo 2006; Peng 2006; Lee 2009; Ng 2009; Napier et al. 2009; Kent 2009). The constraints facing court interpreters in adversarial settings wi...

7. Transforming Power Relationships: Leadership, Risk and Hope
OpenAIRE
Read, James H.; Shapiro, Ian
2013-01-01
Chronic communal conflicts resemble the prisoner's dilemma. Both communities prefer peace to war. But neither trusts the other, viewing the other's gain as its own loss, so potentially shared interests often go unrealized. Achieving positive-sum outcomes from apparently zero-sum struggles requires a kind of risk-embracing leadership. To succeed, leaders must: a) see power relations as potentially positive-sum; b) strengthen negotiating adversaries instead of weakening them; and c) de...

8. An Assessment of Supplier Development Practices in a Retail Environment with Particular Reference to Boots the Chemist
OpenAIRE
Clarke, Adrian John
2007-01-01
An organisation's ability to control, adapt and improve its supply chain can significantly impact its competitive position (Drucker, 1982). For retailers, the supply base contributes almost three quarters of their total costs. Within manufacturing, organisations have been moving from predominantly adversarial, short-term, transactional relationships with their suppliers to longer-term collaborative relationships. Automotive manufacturers in Japan led this movement, and it has since transfe...

9.
Bitcoin Beacon
OpenAIRE
Bentov, Iddo; Gabizon, Ariel; Zuckerman, David
2016-01-01
We examine a protocol $\pi_{\text{beacon}}$ that outputs unpredictable and publicly verifiable randomness, meaning that the output is unknown at the time that $\pi_{\text{beacon}}$ starts, yet everyone can verify that the output is close to uniform after $\pi_{\text{beacon}}$ terminates. We show that $\pi_{\text{beacon}}$ can be instantiated via Bitcoin under sensible assumptions; in particular, we consider an adversary with an arbitrarily large initial budget who may not operate at a loss ind...

10. Design and evaluation of physical protection systems for nuclear facilities and materials
International Nuclear Information System (INIS)
The spread of the nuclear industry around the world has increased the risks and hazards of various possible forms of attack on nuclear facilities, operations and material. This work deals with the physical protection of nuclear facilities, operations and materials. It is intended to present detailed information on the physical protection aspects and the basic methodology to be used in the design and analysis of a facility's security system. Physical protection measures are directed against theft or unauthorized removal of nuclear materials and sabotage of nuclear facilities and operations by individuals or groups of individuals. The design of an effective physical protection system includes the determination of the physical protection system objectives, the initial design of a physical protection system and, probably, a redesign or refinement of the system. To develop the objectives, the designer must begin by gathering information about facility operations and conditions, such as a comprehensive description of the facility, operating states, and the physical protection requirements. The designer then needs to define the threat.
This involves considering factors about potential adversaries: class of adversary, adversary's capabilities, and range of adversary's tactics. Next, the designer should identify targets. Determination of whether or not nuclear materials are attractive targets is based mainly on the ease or difficulty of acquisition and desirability of the material. The designer now knows the objectives of the physical protection system, that is, what to protect against whom. The next step is to design the system by determining how best to combine such elements as fences, vaults, sensors, procedures, communication devices, and protective force personnel to meet the objectives of the system. Once a physical protection system is designed, it must be analyzed and evaluated to ensure it meets the physical protection objectives. Evaluation must allow for features working 11. Security Toolbox for Detecting Novel and Sophisticated Android Malware OpenAIRE Holland, Benjamin; Deering, Tom; Kothari, Suresh; Mathews, Jon; Ranade, Nikhil 2015-01-01 This paper presents a demo of our Security Toolbox to detect novel malware in Android apps. This Toolbox is developed through our recent research project funded by the DARPA Automated Program Analysis for Cybersecurity (APAC) project. The adversarial challenge ("Red") teams in the DARPA APAC program are tasked with designing sophisticated malware to test the bounds of malware detection technology being developed by the research and development ("Blue") teams. Our research group, a Blue team i... 12. Intelligence-led risk management for homeland security: a collaborative approach for a common goal OpenAIRE Jackson, David P. 2011-01-01 CHDS State/Local The concept of risk management provides the foundation of the homeland security enterprise.
The United States of America faces numerous complex risks, ranging from natural hazards, pandemic disease, and technological hazards to transnational criminal enterprises and acts of terrorism perpetrated by intelligent adversaries. The management of these risks requires a strategic collaborative effort from the intelligence and risk analysis communities and many stakeholders a... 13. An Advanced Threshold Secret Sharing Scheme for Identifying Cheaters Institute of Scientific and Technical Information of China (English) XIE Shu-cui; ZHANG Jian-zhong 2003-01-01 In this paper an advanced threshold secret sharing scheme for identifying cheaters is proposed by using authentication codes. The performance of the scheme is discussed. The results show that in the scheme the valid shareholders can not only identify the impersonation of an adversary, but also detect cheating by some valid shareholders. In particular, one honest shareholder is able to detect cheating by other participants forming a collection, and the information rate of the scheme is higher than that of others. 14. Toward Normalization of Relations with Japan: The Strategy of North Korea, circa 1950 to 1961 OpenAIRE Mitsuhiko Kimuran 2011-01-01 North Korea is still a strictly secluded state and little is known of its past and present, though recent research using documents from countries in the former Soviet bloc has produced a number of breakthroughs, especially in the discussion of the origins of the Korean War. Among others, the history of relations between North Korea and Japan is the least explored. One might assume that North Korea has had little interest in developing relations with Japan because of the adversarial political ideologies ... 15. The Treaty of Friendship, Partnership and Cooperation between Libya and Italy: From an Awkward Past to a Promising Equal Partnership OpenAIRE Kashiem, Mustafa Abdalla A.
2010-01-01 Italian-Libyan international relations entered a new era when the two countries signed the Treaty on Friendship, Partnership, and Cooperation on August 30, 2008. The treaty allowed Italy to extend its interests into the southern basin of the Mediterranean in order to balance Atlanticism and Europeanism in the region. The treaty enabled Libya also to create a partnership with a northern ally that was until recently described as an adversary. In politics, however, there is no such thing as perm... 16. Your Neighbors Are My Spies: Location and other Privacy Concerns in Dating Apps OpenAIRE Hoang, Nguyen Phong; Asano, Yasuhito; Yoshikawa, Masatoshi 2016-01-01 Trilateration has recently become one of the well-known threat models to the user's location privacy in location-based applications (aka location-based services or LBS), especially those containing highly sensitive information such as dating applications. The threat model mainly depends on the distance shown from the targeted victim to the adversary to pinpoint the victim's position. As a countermeasure, most location-based applications have already implemented the "hide distance" functio... 17. Supply positioning in support of humanitarian assistance and disaster relief operations OpenAIRE Mitchell, Gregory P.; Cisek, Jeffrey J.; Reilly, Bruce 2011-01-01 MBA Professional Report The U.S. military possesses many capabilities that are used throughout the range of military operations (ROMO) in order to carry out planned and contingency response missions. These capabilities can bring destruction to an adversary or can provide critical aid in a humanitarian assistance or disaster response (HA/DR) operation. In many situations, prepositioning supplies and equipment is essential to the Department of Defense (DoD) in a rapid response that is efficient and e... 18. Exploring individual differences in deductive reasoning as a function of 'autistic'-like traits OpenAIRE Fugard, Andrew J. B.
2009-01-01 From a logical viewpoint, people must reason to as well as from interpretations in deductive reasoning tasks. There are two main interpretative stances (e.g., Stenning & van Lambalgen, 2004, 2005, 2008): credulous, the act of trying to infer the speaker's intended model; and sceptical, an adversarial strategy. A range of contextual factors influence interpretation, but there are also differences between individuals across situations. Taking an individual differences approach,... 19. A Bitcoin system with no mining and no history transactions: Build a compact Bitcoin system OpenAIRE Xiaochao Qian 2014-01-01 We give an explicit definition of decentralization and show that decentralization is almost impossible at the current stage. We propose a new framework of noncentralized cryptocurrency system with an assumption of the existence of a weak adversary for a bank alliance. It abandons the mining process and blockchain, and removes history transactions from data synchronization. We propose a consensus algorithm named "Converged Consensus" for a noncentralized cryptocurrency system. 20. Advocacy and technology assessment Science.gov (United States) Jones, E. M. 1975-01-01 A highly structured treatment is presented of adversarial systems as they apply to technology assessment. One approach to the problem of adequate criteria of assessment focuses upon the internal operations of assessment entities; operations include problem perception, problem formulation, selection, utilization, determination, and evaluation. Potential contributions of advocacy as a mode of inquiry in technology are discussed; advocacy is evaluated by representative sets of criteria of adequate assessment which include participant criteria, perspectives criteria, situations criteria, base values criteria, and strategies criteria. 1.
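The trilateration threat model described in the dating-app privacy entry above ("Your Neighbors Are My Spies") can be made concrete. The sketch below is not taken from that paper; it is a minimal, hypothetical illustration of the underlying geometry, in which an adversary who reads the displayed distance from three known vantage points solves a small linear system for the victim's position:

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 2-D position from three known (x, y) anchor points and
    the distances reported from each anchor to the target.

    Subtracting the first circle equation from the other two yields a
    linear system A p = b in the unknown position p = (x, y).
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    A = np.array([
        [2 * (x2 - x1), 2 * (y2 - y1)],
        [2 * (x3 - x1), 2 * (y3 - y1)],
    ])
    b = np.array([
        d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
        d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2,
    ])
    return np.linalg.solve(A, b)

# Three adversary-controlled accounts at known positions each query the
# app for the displayed distance to the victim (toy coordinates).
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
victim = np.array([30.0, 40.0])
distances = [np.linalg.norm(victim - np.array(a)) for a in anchors]
print(trilaterate(anchors, distances))  # ≈ [30. 40.]
```

This is exactly why "hide distance" alone is a weak countermeasure: any proxy for distance (ordering, coarse bands) still leaks enough for an adversary to shrink the feasible region over repeated queries.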
Social context modulates the effect of physical warmth on perceived interpersonal kindness: a study of embodied metaphors OpenAIRE Citron, Francesca M M; Goldberg, Adele E. 2014-01-01 Physical contact with hot vs. iced coffee has been shown to affect evaluation of the personal warmth or kindness of a hypothetical person (Williams & Bargh, 2008). In 3 studies, we investigated whether the manipulation of social context can modulate the activation of the metaphorical mapping, KINDNESS as WARMTH. After priming participants with warm vs. cold temperature, we asked them to evaluate a hypothetical ad-hoc ally or adversary on the kindness dimension, as well as on other qualities u... 2. USBcat - Towards an Intrusion Surveillance Toolset OpenAIRE Chapman, Chris; Knight, Scott; Dean, Tom 2014-01-01 This paper identifies an intrusion surveillance framework which provides an analyst with the ability to investigate and monitor cyber-attacks in a covert manner. Where cyber-attacks are perpetrated for the purposes of espionage, the ability to understand an adversary's techniques and objectives is an important element in network and computer security. With the appropriate toolset, security investigators would be permitted to perform both live and stealthy counter-intelligence operations by ob... 3. Quantum money from knots CERN Document Server Farhi, Edward; Hassidim, Avinatan; Lutomirski, Andrew; Shor, Peter 2010-01-01 Quantum money is a cryptographic protocol in which a mint can produce a quantum state, no one else can copy the state, and anyone (with a quantum computer) can verify that the state came from the mint. We present a concrete quantum money scheme based on superpositions of diagrams that encode oriented links with the same Alexander polynomial. We expect our scheme to be secure against computationally bounded adversaries. 4.
A Factoring and Discrete Logarithm based Cryptosystem OpenAIRE Ciss, Abdoul Aziz; Cheikh, Ahmed Youssef Ould; Sow, Djiby 2012-01-01 This paper introduces a new public key cryptosystem based on two hard problems: the cube root extraction modulo a composite modulus (which is equivalent to the factorisation of the modulus) and the discrete logarithm problem. These two hard problems are combined during the key generation, encryption and decryption phases. By combining the IFP and the DLP we introduce a secure and efficient public key cryptosystem. To break the scheme, an adversary may solve the IFP and the DLP separately, which... 5. Hierarchical Motion Control for a Team of Humanoid Soccer Robots OpenAIRE Seung-Joon Yi; Stephen McGill; Dennis Hong; Daniel Lee 2016-01-01 Robot soccer has become an effective benchmarking problem for robotics research as it requires many aspects of robotics, including perception, self-localization, motion planning and distributed coordination, to work in uncertain and adversarial environments. Especially with humanoid robots that lack inherent stability, a capable and robust motion controller is crucial for generating walking and kicking motions without losing balance. In this paper, we describe the details of a motion controller... 6. The PROBE Framework for the Personalized Cloaking of Private Locations OpenAIRE Maria Luisa Damiani; Elisa Bertino; Claudio Silvestri 2010-01-01 The widespread adoption of location-based services (LBS) raises increasing concerns for the protection of personal location information. A common strategy, referred to as obfuscation (or cloaking), to protect location privacy is based on forwarding the LBS provider a coarse user location instead of the actual user location. Conventional approaches based on this technique, however, rely only on geometric methods and are therefore unable to assure privacy when the adversary is aware of the... 7.
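The obfuscation strategy sketched in the PROBE entry above, forwarding a coarse location instead of the exact one, can be illustrated with a purely geometric cloak. This is a hypothetical baseline, not PROBE itself; the grid size `cell_deg` is an invented parameter, and the entry's point is precisely that such geometry-only cloaking can fail against a semantically informed adversary:

```python
import math

def cloak(lat, lon, cell_deg=0.05):
    """Snap an exact coordinate to the centre of a fixed grid cell, so a
    location-based service only ever sees a coarse position.
    cell_deg=0.05 gives cells roughly 5.5 km tall (north-south)."""
    clat = (math.floor(lat / cell_deg) + 0.5) * cell_deg
    clon = (math.floor(lon / cell_deg) + 0.5) * cell_deg
    return clat, clon

# Two nearby positions fall in the same cell and are reported identically.
print(cloak(45.4642, 9.1900))
print(cloak(45.4699, 9.1815))
```

Because the cloak is purely geometric, an adversary who knows that a cell contains a single sensitive venue learns the visit anyway; that semantic gap is what PROBE's personalized cloaking is designed to close.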
Re-Imagining Punishment: An Exercise in “Intersectional Criminal Justice” OpenAIRE Maya Pagni Barak 2014-01-01 Over the last 40 years a number of scholars have called upon fellow criminologists to rethink the field’s priorities and methods, as well as the American criminal justice system and current punishment practices. Drawing on alternative criminologies, including constitutive and peacemaking criminologies, as well as the practice of reintegrative shaming, this paper presents a new model of criminal justice that combines aspects of adversarial, restorative, social, and transformative justice fra... 8. Re-Imagining Punishment: An Exercise in “Intersectional Criminal Justice” OpenAIRE Barak, Maya 2014-01-01 Over the last 40 years a number of scholars have called upon fellow criminologists to rethink the field’s priorities and methods, as well as the American criminal justice system and current punishment practices. Drawing on alternative criminologies, including constitutive and peacemaking criminologies, as well as the practice of reintegrative shaming, this paper presents a new model of criminal justice that combines aspects of adversarial, restorative, social, and transformative justice frame... 9. Autoencoding beyond pixels using a learned similarity metric OpenAIRE Larsen, Anders Boesen Lindbo; Sønderby, Søren Kaae; Larochelle, Hugo; Winther, Ole 2015-01-01 We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of face... 10.
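The VAE/GAN entry above replaces element-wise reconstruction errors with feature-wise errors measured in the discriminator's representation. Here is a toy numpy sketch of that idea only; the fixed random layer is an invented stand-in for discriminator features, which the actual model learns jointly with the VAE:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for learned GAN discriminator features:
# a fixed random linear layer followed by a ReLU.
W = rng.normal(size=(16, 64))

def features(x):
    """Map a flattened 64-"pixel" image to a 16-d feature vector."""
    return np.maximum(W @ x, 0.0)

x = rng.normal(size=64)                     # "real" image, flattened
x_rec = x + rng.normal(scale=0.1, size=64)  # noisy VAE reconstruction

# Element-wise (pixel) error vs. feature-wise error measured in the
# discriminator's representation space.
pixel_loss = np.mean((x - x_rec) ** 2)
feature_loss = np.mean((features(x) - features(x_rec)) ** 2)
print(pixel_loss, feature_loss)
```

The paper's argument is that distances in a learned feature space track perceptual similarity better than per-pixel distances, which penalize small translations heavily even when the reconstruction looks right.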
aDTN - Undetectable Communication in Wireless Delay-tolerant Networks (Working Draft) OpenAIRE Barroso, Ana 2015-01-01 This document describes a best-effort delay-tolerant communication system that protects the privacy of users in wireless ad-hoc networks by making their communication undetectable. The proposed system is a wireless broadcast-based adaptation of mix networks where each user belongs to at least one group it trusts, and each group acts as a mix node. Assuming encryption is not broken, it provides undetectability of all users and messages against external adversaries, as well as undetectability o... 11. Privacy Preserving Quantum Anonymous Transmission via Entanglement Relay OpenAIRE Wei Yang; Liusheng Huang; Fang Song 2016-01-01 Anonymous transmission is an interesting and crucial issue in computer communication area, which plays a supplementary role to data privacy. In this paper, we put forward a privacy preserving quantum anonymous transmission protocol based on entanglement relay, which constructs anonymous entanglement from EPR pairs instead of multi-particle entangled state, e.g. GHZ state. Our protocol achieves both sender anonymity and receiver anonymity against an active adversary and tolerates any number of... 12. Cyber operations and Gray Zones:challenges for NATO OpenAIRE Fitton, Oliver James 2016-01-01 The Gray Zone represents a space between peaceful state rivalries and war. Within this space actors have developed hybrid strategies to extend their influence. This concept of conflict is best illustrated by Russia’s actions in Eastern Ukraine in 2014. Gray Zone doctrine leverages ambiguity to create an environment in which adversaries are unable to make strategic decisions in a timely and confident manner. Cyber Operations, because of the attribution problem, lend themselves to this kind of ... 13. 
The Approach Of The Sports Press To Public Relations Activities in Turkey OpenAIRE OKAY, Aydemir 2007-01-01 Journalism and public relations are professional fields very close to each other. Despite some evidence regarding the usefulness of public relations support for news work, a number of studies have described the perceptions of public relations practitioners and journalists as sometimes adversarial, cooperative, or a love-hate relationship, and mostly skeptical toward each other. Most studies confirm that practitioners try to influence the news process and journalists try to defend against un... 14. Cryptographic Path Hardening: Hiding Vulnerabilities in Software through Cryptography OpenAIRE Ganesh, Vijay; Carbin, Michael; Rinard, Martin 2012-01-01 We propose a novel approach to improving software security called Cryptographic Path Hardening, which is aimed at hiding security vulnerabilities in software from attackers through the use of provably secure and obfuscated cryptographic devices to harden paths in programs. By "harden" we mean that certain error-checking if-conditionals in a given program P are replaced by equivalent but obfuscated conditionals, such that adversaries cannot use semi-automatic program analysis techniques to reason about the hardened pr... 15. Bluetooth Low Energy - privacy enhancement for advertisement OpenAIRE Wang, Ping 2014-01-01 The aim of this project is to design, simulate, and implement a privacy enhancement protocol over BLE advertising channels. The design of the privacy enhancement is generic and modular. Due to the risk of privacy disclosure and device tracking by an adversary, the main focus will be put on designing and implementing message confidentiality, replay prevention, and anti-tracking of device over BLE advertising channels. Bluetooth core specification 4.1 is used as baseline for design and implementat... 16. Overview of Security Threats in WSN OpenAIRE Ms. Poonam Barua; Mr.
Sanjeev Indora 2013-01-01 Wireless sensor network is a combination of tiny devices called sensor nodes which have computing, sensing and processing capabilities. As WSNs are usually deployed in hostile environments and can be physically accessible to an adversary, he/she can affect the confidentiality and integrity of the data as well as some other security measures. So security is a main concern in wireless sensor networks, especially in hostile environments. In this paper we focus on security requirements, security scheme... 17. Application of SAFE to an operating reactor International Nuclear Information System (INIS) A method for the evaluation of physical protection systems at nuclear facilities has been developed. The evaluation process consists of five major phases: (1) Facility Characterization, (2) Facility Representation, (3) Component Performance, (4) Adversary Path Analysis, and (5) Effectiveness Evaluation. Each of these phases will be described in some detail and illustrated by examples. The process for evaluation of physical protection system effectiveness against an outside threat will be presented for a reactor facility 18. Mind Your Coins: Fully Leakage-Resilient Signatures with Graceful Degradation OpenAIRE Faonio, Antonio; Venturi, Daniele; Buus Nielsen, Jesper 2014-01-01 We construct new leakage-resilient signature schemes. Our schemes remain unforgeable against an adversary leaking arbitrary (yet bounded) information on the entire state of the signer (sometimes known as fully leakage resilience). The main feature of our constructions is that they offer a graceful degradation of security in situations where standard existential unforgeability is impossible. This property was recently put forward by Nielsen et al. (PKC 2014) to deal with setting... 19.
Security-by-experiment: lessons from responsible deployment in cyberspace OpenAIRE Pieters, Wolter; Hadžiosmanović, Dina; Dechesne, Francien 2015-01-01 Conceiving new technologies as social experiments is a means to discuss responsible deployment of technologies that may have unknown and potentially harmful side-effects. Thus far, the uncertain outcomes addressed in the paradigm of new technologies as social experiments have been mostly safety-related, meaning that potential harm is caused by the design plus accidental events in the environment. In some domains, such as cyberspace, adversarial agents (attackers) may be at least as important ... 20. CUSUM-Based Intrusion Detection Mechanism for Wireless Sensor Networks OpenAIRE Bishan Ying 2014-01-01 The nature of wireless sensor networks (WSNs) makes them very vulnerable to an adversary's malicious attacks. Therefore, network security is an important issue for WSNs. Due to the constraints of WSNs, intrusion detection in WSNs is a challenging task. In this paper, we present a novel intrusion detection mechanism for WSNs, which is composed of a secure data communication algorithm and an intrusion detection algorithm. The major contribution of this paper is that we propose an original secure m... 1. How to optimize joint theater ballistic missile defense OpenAIRE Diehl, Douglas D. 2004-01-01 Approved for public release, distribution is unlimited Many potential adversaries seek, or already have, theater ballistic missiles capable of threatening targets of interest to the United States. The U.S. Missile Defense Agency and armed forces are developing and fielding missile interceptors carried by many different platforms, including ships, aircraft, and ground units. Given some exigent threat, the U.S. must decide where to position defensive platforms and how they should engage poten... 2.
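The CUSUM-based intrusion detection entry above builds on the classical cumulative-sum change detector. The paper's actual mechanism is more involved; what follows is only a sketch of the generic one-sided CUSUM statistic the title refers to, applied to an invented traffic trace:

```python
def cusum(samples, mean, drift, threshold):
    """One-sided CUSUM change detector: accumulate deviations above the
    expected mean (less an allowed drift) and raise an alarm once the
    cumulative sum exceeds the threshold.
    Returns the index of the first alarm, or None if none is raised."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - mean - drift))
        if s > threshold:
            return i
    return None

# Toy trace: normal traffic ~10 pkts/s, then flooding begins at index 8.
traffic = [10, 11, 9, 10, 10, 9, 11, 10, 25, 27, 26, 28]
print(cusum(traffic, mean=10.0, drift=1.0, threshold=20.0))  # → 9
```

The `drift` term absorbs benign fluctuation so that isolated spikes do not trigger alarms, while a sustained shift, such as a flooding attack, accumulates quickly; `threshold` trades detection delay against false-alarm rate.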
LA PRUEBA DOCUMENTADA EN EL NUEVO SISTEMA DE JUSTICIA PENAL MEXICANO Documented proof in the new Mexican criminal justice system OpenAIRE Benavente Chorres Hesbert 2010-01-01 This study analyzes the forms of documented proof regulated in the codes of those Mexican federal entities that have adapted their criminal procedure to the accusatory system with an adversarial tendency. In this sense, documented proof is understood as those proceedings, mainly statements, taken during the investigation stage to which the law grants probative value because the source of the evidence cannot attend the oral trial hearing for reasons beyond its control... 3. Performance Tradeoffs in Distributed Control Systems Science.gov (United States) Borowski, Holly Large scale systems consisting of many interacting subsystems are often controlled in a distributed fashion due to inherent limitations in computation, communication, or sensing. Here, individual agents must make decisions based on local, often incomplete information. This dissertation focuses on understanding performance tradeoffs in distributed control systems, specifically focusing on using a game theoretic framework to assign agent control laws. Performance of a distributed control law is determined by (1) the degree with which it meets a stated objective, (2) the amount of time it takes to converge, (3) agents' informational requirements, and (4) vulnerability to adversarial manipulation. The three main research questions addressed in this work are: • When is fast convergence to near-optimal behavior possible in a distributed system? We design a distributed control law which converges to a near-optimal configuration in a time that is near-linear in the number of agents. This worst case convergence time is an improvement over existing algorithms whose worst-case convergence times are exponential in the number of agents.
• Can agents in a distributed system learn near-optimal correlated behavior despite severely limited information about one another's behavior? We design a distributed control law that imposes limited informational requirements for individual agents and converges to near-optimal correlated behavior. • How does the structure of agent interaction impact a distributed control system's vulnerability to adversarial manipulation? We derive a graph theoretical condition that ensures resilience to adversarial manipulation, and we examine the conditions under which an adversary can manipulate collective behavior in a distributed control system, simply by influencing small subsets of agents. 4. Efficient Certificate Management in VANET OpenAIRE Samara, Ghassan 2012-01-01 Vehicular Ad hoc Networks is one of the most challenging research areas in the field of Mobile Ad Hoc Networks. In this research we propose a flexible, simple, and scalable design for VANET certificates, and new methods for efficient certificate management, which will reduce channel overhead by eliminating the use of CRLs and improve certificate revocation management. It will also increase the security of the network and help in identifying adversary vehicles. 5. Security Analysis of Vehicular Ad Hoc Networks (VANET) OpenAIRE Samara, Ghassan 2012-01-01 Vehicular Ad Hoc Networks (VANETs) have attracted much of today's research effort, yet current solutions for securing VANETs and protecting the network from adversaries and attacks are still not enough to reach a satisfactory level of safety of life and infotainment for drivers and manufacturers. The need for robust VANETs is strongly dependent on their security and privacy features, which will be discussed in this paper. In this paper various types of ... 6. Unattended ground sensors for Expeditionary Force 21 intelligence collections OpenAIRE Harrington, Ryan F.
2015-01-01 Approved for public release; distribution is unlimited As our adversaries continue to evolve in complexity, the U.S. Marine Corps adapts in kind, with its design and intent expressed in the Expeditionary Force 21 (EF 21) Capstone. EF 21 stresses the need for increased persistent intelligence collection capabilities and the optimization of existing assets. Current requirements for Unattended Ground Sensors (UGS) limit usage in non-permissive environments beyond the Area of Operations, contrary to the ... 7. Hybridkrig OpenAIRE Haande, Trond; Bjerga, Kjell Inge 2011-01-01 The study addresses hybrid warfare and its relevance to land operations conducted by the Norwegian Armed Forces. While the first part explores the recent phenomenon of hybrid warfare, the second asks whether hybrid warfare has any bearing on land operations. A land operation is a ground operation on Norwegian territory in which Norwegian Armed Forces engage conventional adversaries. Military theory traditionally recognises a dichotomy between guerrilla warfare and conventional ... 8. Secure positioning in wireless networks DEFF Research Database (Denmark) Capkun, Srdjan; Hubaux, Jean-Pierre 2006-01-01 So far, the problem of positioning in wireless networks has been studied mainly in non-adversarial settings. In this work, we analyze the resistance of positioning techniques to position and distance spoofing attacks. We propose a mechanism for secure positioning of wireless devices that we call Verifiable Multilateration. We then show how this mechanism can be used to secure positioning in sensor networks. We analyze our system through simulations. 9. Routing Data Authentication in Wireless Networks; TOPICAL International Nuclear Information System (INIS) In this paper, we discuss several specific threats directed at the routing data of an ad hoc network. We address security issues that arise from wrapping authentication mechanisms around ad hoc routing data.
We show that this bolt-on approach to security may make certain attacks more difficult, but still leaves the network routing data vulnerable. We also show that under a certain adversarial model, most existing routing protocols cannot be secured with the aid of digital signatures 10. How to define and build an effective cyber threat intelligence capability: how to understand, justify and implement a new approach to security CERN Document Server Dalziel, Henry; Carnall, James 2014-01-01 Intelligence-Led Security: How to Understand, Justify and Implement a New Approach to Security is a concise review of the concept of Intelligence-Led Security. Protecting a business, including its information and intellectual property, physical infrastructure, employees, and reputation, has become increasingly difficult. Online threats come from all sides: internal leaks and external adversaries; domestic hacktivists and overseas cybercrime syndicates; targeted threats and mass attacks. And these threats run the gamut from targeted to indiscriminate to entirely accidental. Amo 11. Toward a Developmentally-Informed Approach to Parenting Interventions: Seeking Hidden Effects OpenAIRE Brock, Rebecca L.; Kochanska, Grazyna 2016-01-01 Drawing from developmental psychology and psychopathology, we propose a new, developmentally-informed approach to parenting interventions that focuses on elucidating changes in the unfolding developmental process between the parent and child. We present data from 186 low-income mothers of toddlers, randomly assigned to a Child-Oriented Play group or a Play-as-Usual group. We examined the maladaptive cascade from child difficulty to mother adversarial, negative parenting to child maladjustment, we... 12. Passive nuclear material detection in a personnel portal International Nuclear Information System (INIS) The concepts employed in the development of gamma-ray and neutron detection systems for a special nuclear materials booth portal monitor are described.
The portal is designed for unattended use in detecting diversion by a technically sophisticated adversary and has possible application to International Atomic Energy Agency safeguards of a fast critical assembly facility. Preliminary evaluation results are given and plans for further parameter studies are noted 13. Formation of public attitudes to nuclear power International Nuclear Information System (INIS) Nuclear power has been plagued by public acceptance problems. Evidence suggests one of the key factors is poor communication between the scientific community and the general public. Although environmental enquiries provide a forum for the voicing of views, by adopting the adversary principle they have also resulted in polarization of public opinion, as experienced in Australia with the Ranger Environmental Enquiry. The problem of developing methods to enable a flow of objective information to and from the public requires urgent solution 14. Optimal Construction of Regenerating Code through Rate-matching in Hostile Networks OpenAIRE Li, Jian; Li, Tongtong; Ren, Jian 2015-01-01 Regenerating code is a class of code very suitable for distributed storage systems, which can maintain optimal bandwidth and storage space. Two important types of regenerating code have been constructed: the minimum storage regeneration (MSR) code and the minimum bandwidth regeneration (MBR) code. However, in hostile networks where adversaries can compromise storage nodes, the storage capacity of the network can be significantly affected. In this paper, we propose two optimal constructions of... 15.
Secure Learning and Learning for Security: Research in the Intersection OpenAIRE Rubinstein, Benjamin 2010-01-01 Statistical Machine Learning is used in many real-world systems, such as web search, network and power management, online advertising, finance and health services, in which adversaries are incentivized to attack the learner, motivating the urgent need for a better understanding of the security vulnerabilities of adaptive systems. Conversely, research in Computer Security stands to reap great benefits by leveraging learning for building adaptive defenses and even designing intelligent attacks ... 16. Securing Online Advertising OpenAIRE Vratonjic, Nevena; Freudiger, Julien; Felegyhazi, Mark; Hubaux, Jean-Pierre 2008-01-01 Online advertisement is a major source of revenue on the Internet. In this paper, we identify a number of vulnerabilities of current ad serving systems. We describe how an adversary can exploit these vulnerabilities to divert part of the ad revenue stream for its own benefit. We propose a scalable, secure ad serving scheme to fix this problem. We also explain why the deployment of this solution would benefit Web browsing security in general. 17. Advanced unattended sensors and systems: state of the art and future challenges Science.gov (United States) McQuiddy, John H. 2010-04-01 The unattended ground sensors (UGS) have come a long way over the more than 40 years they have been used to detect adversarial activities. From large, single-phenomenology sensors with little signal processing and point-to-point communications, the technology has now changed to small, intelligent sensors using network communications. This technology change has resulted in far more capable sensors, but challenges remain for UGS to be effective in providing information to users. 18.
Symbolic Planning and Control Using Game Theory and Grammatical Inference OpenAIRE Fu, Jie; Tanner, Herbert G.; Heinz, Jeffrey; Chandlee, Jane; Karydis, Konstantinos; Koirala, Cesar 2012-01-01 This paper presents an approach that brings together game theory with grammatical inference and discrete abstractions in order to synthesize control strategies for hybrid dynamical systems performing tasks in partially unknown but rule-governed adversarial environments. The combined formulation guarantees that a system specification is met if (a) the true model of the environment is in the class of models inferable from a positive presentation, (b) a characteristic sample is observed, and (c)... 19. HCA 459 Courses/sanptutorial OpenAIRE potik 2015-01-01 Organizational Survival Strategies. This discussion has two options. Please choose either Option A or Option B to respond to. Be sure to indicate within your post which option you chose. Option A: Hospitals frequently seek ways to ensure survival. Sometimes an adversarial climate cannot be avoided when the action that is being considered is controversial. For example, a hospital may seek to expand its market reach by opening an inpatient drug treatment facility in a small suburban townshi... 20. HCA 459 uop / uophelp OpenAIRE uophelp 2015-01-01 Organizational Survival Strategies. This discussion has two options. Please choose either Option A or Option B to respond to. Be sure to indicate within your post which option you chose. Option A: Hospitals frequently seek ways to ensure survival. Sometimes an adversarial climate cannot be avoided when the action that is being considered is controversial. For example, a hospital may seek to expand its market reach by opening an inpatient drug treatment facility in a small suburban townshi... 1. 
On the theoretical basis for plea bargaining system Institute of Scientific and Technical Information of China (English) WANG Jiancheng 2006-01-01 Before discussing the introduction of the plea bargaining system to China's criminal justice system, it is necessary to study its theoretical basis. Among others, the following aspects should be focused on: the philosophical viewpoint of pragmatism is its thinking basis; the concept of contract is its cultural basis; the structural form of adversary procedure is its systematic basis; and the system of the right to silence and discovery of evidence is its symbiotic basis. 2. Strongly Unforgeable Ring Signature Scheme from Lattices in the Standard Model Directory of Open Access Journals (Sweden) Geontae Noh 2014-01-01 from lattices are not even existentially unforgeable with respect to insider corruption. We then improve previous schemes by applying, for the first time, the concept of strong unforgeability with respect to insider corruption to a ring signature scheme in lattices. This offers more security than any previous ring signature scheme: adversaries cannot produce new signatures for any ring-message pair, including previously signed ring-message pairs. 3. A multihop key agreement scheme for wireless ad hoc networks based on channel characteristics. Science.gov (United States) Hao, Zhuo; Zhong, Sheng; Yu, Nenghai 2013-01-01 A number of key agreement schemes based on wireless channel characteristics have been proposed recently. However, previous key agreement schemes require that two nodes which need to agree on a key are within the communication range of each other. Hence, they are not suitable for multihop wireless networks, in which nodes do not always have direct connections with each other. In this paper, we first propose a basic multihop key agreement scheme for wireless ad hoc networks. The proposed basic scheme is resistant to external eavesdroppers.
Nevertheless, this basic scheme is not secure when there exist internal eavesdroppers or Man-in-the-Middle (MITM) adversaries. In order to cope with these adversaries, we propose an improved multihop key agreement scheme. We show that the improved scheme is secure against internal eavesdroppers and MITM adversaries in a single path. Both performance analysis and simulation results demonstrate that the improved scheme is efficient. Consequently, the improved key agreement scheme is suitable for multihop wireless ad hoc networks. PMID:23766725 4. Attacks on quantum key distribution protocols that employ non-ITS authentication Science.gov (United States) Pacher, C.; Abidin, A.; Lorünser, T.; Peev, M.; Ursin, R.; Zeilinger, A.; Larsson, J.-Å. 2016-01-01 We demonstrate how adversaries with large computing resources can break quantum key distribution (QKD) protocols which employ a particular message authentication code suggested previously. This authentication code, featuring low key consumption, is not information-theoretically secure (ITS) since for each message the eavesdropper has intercepted she is able to send a different message from a set of messages that she can calculate by finding collisions of a cryptographic hash function. However, when this authentication code was introduced, it was shown to prevent straightforward man-in-the-middle (MITM) attacks against QKD protocols. In this paper, we prove that the set of messages that collide with any given message under this authentication code contains with high probability a message that has small Hamming distance to any other given message. Based on this fact, we present extended MITM attacks against different versions of BB84 QKD protocols using the addressed authentication code; for three protocols, we describe every single action taken by the adversary. For all protocols, the adversary can obtain complete knowledge of the key, and for most protocols her success probability in doing so approaches unity. 
Since the attacks work against all authentication methods which allow colliding messages to be calculated, the underlying building blocks of the presented attacks expose the potential pitfalls arising as a consequence of non-ITS authentication in QKD post-processing. We propose countermeasures, increasing the eavesdropper's demand for computational power, and also prove necessary and sufficient conditions for upgrading the discussed authentication code to the ITS level. 5. On Node Replication Attack in Wireless Sensor Networks Directory of Open Access Journals (Sweden) Mumtaz Qabulio 2016-04-01 Full Text Available WSNs (Wireless Sensor Networks) comprise a large number of small, inexpensive, low power and memory constrained sensing devices (called sensor nodes) that are densely deployed to measure a given physical phenomenon. Since WSNs are commonly deployed in a hostile and unattended environment, it is easy for an adversary to physically capture one or more legitimate sensor nodes, re-program and redeploy them in the network. As a result, the adversary becomes able to deploy several identical copies of physically captured nodes in the network in order to perform illegitimate activities. This type of attack is referred to as Node Replication Attack or Clone Node Attack. By launching a node replication attack, an adversary can easily get control of the network, which consequently is the biggest threat to confidentiality, integrity and availability of data and services. Thus, detection and prevention of node replication attack in WSNs has become an active area of research and to date more than two dozen schemes have been proposed, which address this issue. In this paper, we present a comprehensive review, classification and comparative analysis of twenty five of these schemes which help to detect and/or prevent node replication attack in WSNs. 6. Comparison of two methods to quantify cyber and physical security effectiveness.
Energy Technology Data Exchange (ETDEWEB) Wyss, Gregory Dane; Gordon, Kristl A. 2005-11-01 With the increasing reliance on cyber technology to operate and control physical security system components, there is a need for methods to assess and model the interactions between the cyber system and the physical security system to understand the effects of cyber technology on overall security system effectiveness. This paper evaluates two methodologies for their applicability to the combined cyber and physical security problem. The comparison metrics include the probabilities of detection (P_D), interruption (P_I), and neutralization (P_N), which contribute to calculating the probability of system effectiveness (P_E), the probability that the system can thwart an adversary attack. P_E is well understood in practical applications of physical security, but when the cyber security component is added, system behavior becomes more complex and difficult to model. This paper examines two approaches (the Bounding Analysis Approach (BAA) and the Expected Value Approach (EVA)) to determine their applicability to the combined physical and cyber security issue. These methods were assessed for a variety of security system characteristics to determine whether reasonable security decisions could be made based on their results. The assessments provided insight into an adversary's behavior depending on what part of the physical security system is cyber-controlled. Analysis showed that the BAA is more suited to facility analyses than the EVA because it has the ability to identify and model an adversary's most desirable attack path. 7.
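The P_D/P_I/P_N decomposition described in the abstract above can be illustrated with a minimal sketch. All function names and probability values here are invented for illustration; this is not the model from the paper, only the standard multiplicative decomposition it refers to:

```python
# Illustrative sketch of the P_E = P_I * P_N decomposition used in
# physical security effectiveness analysis (all values hypothetical).
def system_effectiveness(p_detect, p_timely_response, p_neutralize):
    """P_E for a single adversary path.

    Interruption requires both detecting the adversary (P_D) and the
    response force arriving before the adversary finishes; neutralization
    (P_N) is then the chance the response force defeats the adversary.
    """
    p_interrupt = p_detect * p_timely_response  # P_I
    return p_interrupt * p_neutralize           # P_E = P_I * P_N

# A bounding-style analysis evaluates every path and reports the weakest
# one, since an intelligent adversary picks that path.
paths = [(0.90, 0.80, 0.70), (0.95, 0.50, 0.90)]
worst_case_pe = min(system_effectiveness(*p) for p in paths)
```

Under these made-up numbers the second path dominates the analysis (P_E = 0.4275), which mirrors the abstract's point that the BAA's value lies in identifying the adversary's most desirable attack path.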
Detecting and Mitigating Smart Insider Jamming Attacks in MANETs Using Reputation-Based Coalition Game Directory of Open Access Journals (Sweden) Ashraf Al Sharah 2016-01-01 Full Text Available Security in mobile ad hoc networks (MANETs) is challenging due to the ability of adversaries to gather the intelligence necessary to launch insider jamming attacks. The solutions that prevent external attacks on MANETs are not applicable for defense against insider jamming attacks. There is a need for a formal framework to characterize the information required by adversaries to launch insider jamming attacks. In this paper, we propose a novel reputation-based coalition game in MANETs to detect and mitigate insider jamming attacks. Since there is no centralized controller in MANETs, the nodes rely heavily on availability of transmission rates and a reputation for each individual node in the coalition to detect the presence of an internal jamming node. The nodes will form a stable grand coalition in order to make a strategic security defense decision, maintain the grand coalition based on node reputation, and exclude any malicious node based on reputation value. Simulation results show that our approach provides a framework to quantify information needed by adversaries to launch insider attacks. The proposed approach will improve MANETs' defense against insider attacks, while also reducing incorrect classification of legitimate nodes as jammers. 8. DETECTION AND LOCALIZATION OF MULTIPLE SPOOFING ATTACKERS FOR MOBILE WIRELESS NETWORKS Directory of Open Access Journals (Sweden) R. Maivizhi 2015-06-01 Full Text Available The open nature of wireless networks allows adversaries to easily launch a variety of spoofing attacks and cause havoc in network performance. Recent approaches used Received Signal Strength (RSS) traces, which only detect spoofing attacks in mobile wireless networks.
However, it is not always desirable to use these methods, as RSS values fluctuate significantly over time due to distance, noise and interference. In this paper, we discuss a novel approach, the Mobile spOofing attack DEtection and Localization in WIreless Networks (MODELWIN) system, which exploits location information about nodes to detect identity-based spoofing attacks in mobile wireless networks. Also, this approach determines the number of attackers who used the same node identity to masquerade as a legitimate device. Moreover, multiple adversaries can be localized accurately. By eliminating attackers, the proposed system enhances network performance. We have evaluated our technique through simulation using an 802.11 (WiFi) network and an 802.15.4 (Zigbee) network. The results prove that MODELWIN can detect spoofing attacks with a very high detection rate and localize adversaries accurately. 9. On node replication attack in wireless sensor networks International Nuclear Information System (INIS) WSNs (Wireless Sensor Networks) comprise a large number of small, inexpensive, low power and memory constrained sensing devices (called sensor nodes) that are densely deployed to measure a given physical phenomenon. Since WSNs are commonly deployed in a hostile and unattended environment, it is easy for an adversary to physically capture one or more legitimate sensor nodes, re-program and redeploy them in the network. As a result, the adversary becomes able to deploy several identical copies of physically captured nodes in the network in order to perform illegitimate activities. This type of attack is referred to as Node Replication Attack or Clone Node Attack. By launching a node replication attack, an adversary can easily get control of the network, which consequently is the biggest threat to confidentiality, integrity and availability of data and services.
Thus, detection and prevention of node replication attack in WSNs has become an active area of research and to date more than two dozen schemes have been proposed, which address this issue. In this paper, we present a comprehensive review, classification and comparative analysis of twenty five of these schemes which help to detect and/or prevent node replication attack in WSNs. (author) 10. Are you threatening me?: Towards smart detectors in watermarking Science.gov (United States) Barni, Mauro; Comesaña-Alfaro, Pedro; Pérez-González, Fernando; Tondi, Benedetta 2014-02-01 We revisit the well-known watermarking detection problem, also known as one-bit watermarking, in the presence of an oracle attack. In the absence of an adversary, the design of the detector generally relies on probabilistic formulations (e.g., Neyman-Pearson's lemma) or on ad-hoc solutions. When there is an adversary trying to minimize the probability of correct detection, game-theoretic approaches are possible. However, they usually assume that the attacker cannot learn the secret parameters used in detection. This is no longer the case when the adversary launches an oracle-based attack, which turns out to be extremely effective. In this paper, we discuss how the detector can learn whether it is being subjected to such an attack, and take proper measures. We present two approaches based on different attacker models. The first model is very general and makes minimal assumptions about the attacker's behavior. The second model is more specific, since it assumes that the oracle attack follows a well-defined path. In all cases, a few observations are sufficient for the watermark detector to understand whether an oracle attack is ongoing. 11. Nuclear Industry Sector Force on Force Exercises: Experiences in The Netherlands International Nuclear Information System (INIS) Nuclear facilities spend substantial effort and time on designing effective security systems and putting detection, delay and response measures in place.
Having done this, facilities hardly ever get into the situation where they need to actively use these security measures to counter an actual threat. This results in three important questions that need to be considered: 1. How do we know that the technical security measures are effective in detecting and withstanding a real-life, intelligent and creative adversary? 2. How do we ensure (and train) that especially the guards – who in normal working life hardly ever encounter any adversaries – are continuously alert to detect and counter threats that surface unexpectedly? 3. How well prepared are not only the guard force, but also the facilities as a whole, to withstand combined attack scenarios, including physical, social engineering and cyber scenarios? This paper discusses how the security managers of nuclear facilities in The Netherlands collaboratively addressed the questions above by developing an industry-sector-wide Force on Force exercise team. This team comprises members of all nuclear facilities in The Netherlands and performs, as an adversary test team, unannounced security exercises at those facilities. (author) 12. Hybrid onboard and ground based digital channelizer beam-forming for SATCOM interference mitigation and protection Science.gov (United States) Xiong, Wenhao; Wang, Gang; Tian, Xin; Pham, Khanh; Blasch, Erik; Chen, Genshe 2016-05-01 In this work, we propose a novel beam-forming power allocation method for a satellite communication (SATCOM) multiple-input multiple-output (MIMO) system to mitigate co-channel interference (CCI) as well as limit signal leakage to adversary users. In SATCOM systems, beam-forming is a conventional technique for avoiding interference, controlling the antenna beams, and mitigating undesired signals. We propose to use an advanced beam-forming technique which considers the number of independent channels used and the transmitting power deployed to reduce and mitigate the unintentional interference effect.
With a certain quality of service (QoS) for the SATCOM system, independent channel components will be selected. It is desirable to use fewer but stronger channel components when possible. On the other hand, considering that SATCOM systems often face the problem that an adversary receiver detects the signal, the proposed power allocation method can efficiently reduce the received power at the adversary receiver. To reduce the computational burden on the transponder in order to minimize the size, mass, power consumption and delay for the satellite, we apply a hybrid onboard and ground based beam-forming design to distribute the calculation between the transponder and ground terminals. Also, the digital channelizer beam-forming (DCB) technique is employed to achieve dynamic spatial control. 13. Potential threat to licensed nuclear activities from insiders (insider study). Technical report International Nuclear Information System (INIS) The Insider Study was undertaken by NRC staff at the request of the Commission. Its objectives were to: (1) determine the characteristics of potential insider adversaries to licensed nuclear activities; (2) examine security system vulnerabilities to insider adversaries; and (3) assess the effectiveness of techniques used to detect or prevent insider malevolence. The study analyzes insider characteristics as revealed in incidents of theft or sabotage that occurred in the nuclear industry, analogous industries, government agencies, and the military. Adversary characteristics are grouped into four categories: position-related, behavioral, resource and operational. It also analyzes (1) the five security vulnerabilities that most frequently accounted for the success of the insider crimes in the data base; (2) the 11 means by which insider crimes were most often detected; and (3) four major and six lesser methods aimed at preventing insider malevolence.
In addition to case history information, the study contains data derived from non-NRC studies and from interviews with over 100 security experts in industry, government (federal and state), and law enforcement 14. Cops and Invisible Robbers: the Cost of Drunkenness CERN Document Server Kehagias, Athanasios; Pralat, Pawel 2012-01-01 We examine a version of the Cops and Robber (CR) game in which the robber is invisible, i.e., the cops do not know his location until they capture him. Apparently this game (CiR) has received little attention in the CR literature. We examine two variants: in the first the robber is adversarial (he actively tries to avoid capture); in the second he is drunk (he performs a random walk). Our goal in this paper is to study the invisible Cost of Drunkenness (iCOD), which is defined as the ratio ct_i(G)/dct_i(G), with ct_i(G) and dct_i(G) being the expected capture times in the adversarial and drunk CiR variants, respectively. We show that these capture times are well defined, using game theory for the adversarial case and partially observable Markov decision processes (POMDP) for the drunk case. We give exact asymptotic values of iCOD for several special graph families such as$d$-regular trees, give some bounds for grids, and provide general upper and lower bounds for general classes of graphs. We also give an in... 15. Vulnerability assessment using two complementary analysis tools Energy Technology Data Exchange (ETDEWEB) Paulus, W.K. 1993-07-01 To analyze the vulnerability of nuclear materials to theft or sabotage, Department of Energy facilities have been using, since 1989, a computer program called ASSESS, Analytic System and Software for Evaluation of Safeguards and Security. During the past year Sandia National Laboratories has begun using an additional program, SEES, Security Exercise Evaluation Simulation, enhancing the picture of vulnerability beyond what either program achieves alone. 
ASSESS analyzes all possible paths of attack on a target and, assuming that an attack occurs, ranks them by the probability that a response force of adequate size can interrupt the attack before theft or sabotage is accomplished. A Neutralization module pits, collectively, a security force against the interrupted adversary force in a firefight and calculates the probability that the adversaries are defeated. SEES examines a single scenario and simulates in detail the interactions among all combatants. Its output includes shots fired between shooter and target, and the hits and kills. Whereas ASSESS gives breadth of analysis, expressed statistically and performed relatively quickly, SEES adds depth of detail, modeling tactical behavior. ASSESS finds scenarios that exploit the greatest weakness of a facility. SEES explores these scenarios to demonstrate in detail how various tactics to nullify the attack might work out. Without ASSESS to find the facility weakness, it is difficult to focus SEES objectively on scenarios worth analyzing. Without SEES to simulate the details of response vs. adversary interaction, it is not possible to test tactical assumptions and hypotheses. Using both programs together, vulnerability analyses achieve both breadth and depth. 16. Determining Solution Space Characteristics for Real-Time Strategy Games and Characterizing Winning Strategies Directory of Open Access Journals (Sweden) Kurt Weissgerber 2011-01-01 Full Text Available The underlying goal of a competing agent in a discrete real-time strategy (RTS) game is to defeat an adversary. Strategic agents or participants must define an a priori plan to maneuver their resources in order to destroy the adversary and the adversary's resources as well as secure physical regions of the environment. This a priori plan can be generated by leveraging collected historical knowledge about the environment.
This knowledge is then employed in the generation of a classification model for real-time decision-making in the RTS domain. The best way to generate a classification model for a complex problem domain depends on the characteristics of the solution space. An experimental method to determine solution space (search landscape) characteristics is through analysis of historical algorithm performance for solving the specific problem. We select a deterministic search technique and a stochastic search method for a priori classification model generation. These approaches are designed, implemented, and tested for a specific complex RTS game, Bos Wars. Their performance allows us to draw various conclusions about applying a competing agent in complex search landscapes associated with RTS games. 17. Preventing ADDOS Attack by Using Secure TRNG Based Port Hopping Directory of Open Access Journals (Sweden) T. Siva 2013-01-01 Full Text Available Nowadays, client-server communication is used everywhere for information service systems. Normally, a client-server connection between two machines is differentiated by IP address and protocol port number. DoS/DDoS attacks in network environments are well known; a subset of this attack scenario is the Application Denial of Service (A-DoS) attack, in which the adversary attacks open or idle ports present at the server side. For this, the adversary needs no huge number of machines or zombie systems, and does not need to send packets of data with high bandwidth.
To control this type of A-DoS attack, existing enterprise security devices such as firewalls, anti-virus software and IDS/IPS systems are not suitable, because the adversary does not use high bandwidth, spam messages, zombies or botnets in these attack scenarios. To safeguard against this type of DoS/DDoS or application denial of service attack, several port hopping mechanisms exist, i.e. port hopping based on Pseudo Random Number Generation (PRNG), acknowledgement-based port hopping, and proactive reinitialization. These existing mechanisms have disadvantages; for example, with PRNG-based hopping, attackers can predict the random number generation by using pre-calculated lists or the underlying mathematical functions. We introduce a new port hopping technique, i.e. True Random Number Generation (TRNG) based port hopping. 18. Use of Multi-attribute Utility Functions in Evaluating Security Systems Energy Technology Data Exchange (ETDEWEB) Meyers, C; Lamont, A; Sicherman, A 2008-06-13 In analyzing security systems, we are concerned with protecting a building or facility from an attack by an adversary. Typically, we address the possibility that an adversary could enter a building and cause damage resulting in an immediate loss of life, or at least substantial disruption in the operations of the facility. In response to this setting, we implement security systems including devices, procedures, and facility upgrades designed to (a) prevent the adversary from entering, (b) detect and neutralize him if he does enter, and (c) harden the facility to minimize damage if an attack is carried out successfully. Although we have cast this in terms of physical protection of a building, the same general approach can be applied to non-physical attacks such as cyber attacks on a computer system. A rigorous analytic process is valuable for quantitatively evaluating an existing system, identifying its weaknesses, and proposing useful upgrades.
As such, in this paper we describe an approach to assess the degree of overall protection provided by security measures. Our approach evaluates the effectiveness of the individual components of the system, describes how the components work together, and finally assesses the degree of overall protection achieved. This model can then be used to quantify the amount of protection provided by existing security measures, as well as to address proposed upgrades to the system and help identify a robust and cost effective set of improvements. Within the model, we use multi-attribute utility functions to perform the overall evaluations of the system. 19. Applying Pebble-Rotating Game to enhance the robustness of DHTs. Directory of Open Access Journals (Sweden) Liyong Ren Full Text Available Distributed hash tables (DHTs) are usually used in the open networking environment, where they are vulnerable to Sybil attacks. The Pebble-Rotating Game (PRG) randomly mixes honest and adversarial nodes, and can resist the Sybil attack efficiently. However, the adversary may have tricks to corrupt the rules of PRG. This paper proposes a set of mechanisms to ensure that the rules of PRG are obeyed. A new joining node must ask the Certificate Authority (CA) for its signature and certificate, which records the complete process of how a node joins the network and establishes the legitimacy of the node. Then, to prevent the adversary from accumulating identifiers, any node can make use of the latest certificate to judge whether an identifier has expired, with the help of the replacement property of PRG. This paper analyzes in detail the number of expired certificates that need to be stored in every node, and gives an asymptotic solution to this problem. The analysis and simulations show that the mean number of certificates stored in each node is [Formula: see text], where n is the size of the network. 20. Applying Pebble-Rotating Game to enhance the robustness of DHTs.
Science.gov (United States) Ren, Liyong; Nie, Xiaowen; Dong, Yuchi 2013-01-01 Distributed hash tables (DHTs) are usually used in the open networking environment, where they are vulnerable to Sybil attacks. The Pebble-Rotating Game (PRG) randomly mixes honest and adversarial nodes, and can resist the Sybil attack efficiently. However, the adversary may have tricks to corrupt the rules of PRG. This paper proposes a set of mechanisms to ensure that the rules of PRG are obeyed. A new joining node must ask the Certificate Authority (CA) for its signature and certificate, which records the complete process of how a node joins the network and establishes the legitimacy of the node. Then, to prevent the adversary from accumulating identifiers, any node can make use of the latest certificate to judge whether an identifier has expired, with the help of the replacement property of PRG. This paper analyzes in detail the number of expired certificates that need to be stored in every node, and gives an asymptotic solution to this problem. The analysis and simulations show that the mean number of certificates stored in each node is [Formula: see text], where n is the size of the network. PMID:23776485 1. Continuous Time Channels with Interference CERN Document Server Ivan, Ioana; Thaler, Justin; Yuen, Henry 2012-01-01 Khanna and Sudan studied a natural model of continuous time channels where signals are corrupted by the effects of both noise and delay, and showed that, surprisingly, in some cases both are not enough to prevent such channels from achieving unbounded capacity. Inspired by their work, we consider channels that model continuous time communication with adversarial delay errors. The sender is allowed to subdivide time into an arbitrarily large number $M$ of micro-units in which binary symbols may be sent, but the symbols are subject to unpredictable delays and may interfere with each other.
We model interference by having symbols that land in the same micro-unit of time be summed, and a $k$-interference channel allows receivers to distinguish sums up to the value $k$. We consider both a channel adversary that has a limit on the maximum number of steps it can delay each symbol, and a more powerful adversary that only has a bound on the average delay. We give precise characterizations of the threshold between finite... 2. Defense in depth used in the physical protection of nuclear power plants International Nuclear Information System (INIS) Full text: This PowerPoint presentation has the following contents: 1. Introduction; 2. The fundamental principle 'I': Defense in depth. - Defense in depth and the Design Basis Threat. - Defense in depth and a physical protection concept; 3. Defense in depth - safety functions; 4. Defense in depth - physical protection functions; 5. Defense in depth and consequence analyses; 6. Conclusions. In document GOV/2001/41 of 15 August 2001, the IAEA Board of Governors acknowledged twelve fundamental principles of physical protection for nuclear materials and nuclear facilities. These principles will be integrated into the forthcoming revision of the International Convention on Physical Protection. One of these fundamental principles, the principle 'I', deals with defense in depth. The State's requirements for physical protection should reflect a concept of several layers and methods of protection (structural or other technical, personnel and organizational) that have to be overcome or circumvented by an adversary in order to achieve his objectives. The questions of how to accomplish this fundamental principle, how the adversary is defined, and what his objectives are, are discussed. Once defined, the Design Basis Threat (DBT) is the starting point for defense-in-depth physical protection of an NPP.
The following cases are exposed: - Demonstrators; - Malvolent demonstrators; - Insider; - Terrorist attack (outsider); - Co-operation outsider - insider; while the following terrorism objectives could be in view, namely, theft of nuclear material and/or sabotage (major release of radioactive material from the NPP). The defense in depth and the physical protection concept concern a general scope i.e. outer protected area, inner protected area, vital areas inside inner protected areas, and technical and personnel protection measures to accomplish the requirements, while, in particular, one protects a facility against single objectives of an 3. CONTRADICTORIALITATEA ÎN CORAPORT CU ALTE PRINCIPII ALE PROCESULUI PENAL Directory of Open Access Journals (Sweden) Lucia RUSU 2016-03-01 Full Text Available În legătură cu reformarea sistemului judiciar şi schimbările intervenite în viaţa social-politică a statului nostru, prin­cipiul contradictorialităţii a obţinut o nouă rezonanţă din considerentul că reforma judiciară şi de drept este legată direct de contradictorialitate. Reforma legii procesual penale trebuie să fie fundamentată pe o temelie teoretică solidă. Contra­dictorialitatea, însă, în calitate de noţiune juridică, este insuficient cercetată în doctrina dreptului procesual penal. La ziua de azi, specialişti notorii în domeniul dreptului procesual penal analizează şi studiază importanţa fundamentelor şi principiilor de bază ale procesului penal şi, în primul rând, contradictorialitatea acestuia. Legea procesual penală a Republicii Moldova cunoaşte o evoluţie şi dezvoltate în sensul democratizării şi lărgirii începuturilor contradictoriale în înfăptuirea justiţiei. 
Aceasta e şi firesc, deoarece contradictorialitatea are o importanţă enormă pentru întregul sistem al procesului penal, determinând în mare parte statutul juridic şi raporturile dintre participanţii la procesul penal, precum şi relaţiile juridice stabilite între participanţii la acest proces şi instanţa de judecată. CONTRADICTION AND ITS CORRELATION WITH OTHER PRINCIPLES OF THE CRIMINAL PROCEEDING: In connection with the reform of the judiciary system and the changes in the socio-political life of our state, the adversarial principle has gained a new resonance on the grounds that judicial and legal reform is directly linked to adversariality. The reform of the criminal procedure law must be based on a solid theoretical foundation. However, adversariality, as a legal concept, is insufficiently investigated in the doctrine of criminal procedure law. Currently, noted specialists in the field of criminal procedure law examine and study the importance of the fundamentals and basic principles of the criminal process and 4. AVNG as a Test Case for Cooperative Design International Nuclear Information System (INIS) Designing a measurement system that might be used in a nuclear facility is a challenging, if not daunting, proposition. The situation is made more complicated when the system needs to be designed to satisfy the disparate requirements of a monitoring and a host party - a relationship that could prove to be adversarial. The cooperative design of the elements of the AVNG (Attribute Verification with Neutrons and Gamma Rays) system served as a crucible that exercised the possible pitfalls in the design and implementation of a measurement system that could be used in a host party nuclear facility and that satisfied the constraints of operation for both the host and monitoring parties. Some of the issues that needed to be addressed in the joint design were the certification requirements of the host party and the authentication requirements of the monitoring party.
In this paper the nature of the problem of cooperative design will be introduced. The details of cooperative design revolve around the idiosyncratic nature of the adversarial relationship between the parties involved in a possible measurement regime, particularly if measurements on items that may contain sensitive information are being pursued. The possibility of an adversarial interaction is more likely if an information barrier is required for the measurement system. The origin of the antagonistic elements of the host party and hosted party relationship will be considered. In addition, some of the conclusions will be presented that make cooperative design (and development) proceed more efficiently. Finally, some lessons learned will be presented as a result of this expedition into cooperative design. 5. Final report and documentation for the security enabled programmable switch for protection of distributed internetworked computers LDRD. Energy Technology Data Exchange (ETDEWEB) Van Randwyk, Jamie A.; Robertson, Perry J.; Durgin, Nancy Ann; Toole, Timothy J.; Kucera, Brent D.; Campbell, Philip LaRoche; Pierson, Lyndon George 2010-02-01 An increasing number of corporate security policies make it desirable to push security closer to the desktop. It is not practical or feasible to place security and monitoring software on all computing devices (e.g. printers, personal digital assistants, copy machines, legacy hardware). We have begun to prototype a hardware and software architecture that will enforce security policies by pushing security functions closer to the end user, whether in the office or home, without interfering with users' desktop environments. We are developing a specialized programmable Ethernet network switch to achieve this. Embodied in this device is the ability to detect and mitigate network attacks that would otherwise disable or compromise the end user's computing nodes. We call this device a 'Secure Programmable Switch' (SPS). 
The SPS is designed with the ability to be securely reprogrammed in real time to counter rapidly evolving threats such as fast moving worms, etc. This ability to remotely update the functionality of the SPS protection device is cryptographically protected from subversion. With this concept, the user cannot turn off or fail to update virus scanning and personal firewall filtering in the SPS device as he/she could if implemented on the end host. The SPS concept also provides protection to simple/dumb devices such as printers, scanners, legacy hardware, etc. This report also describes the development of a cryptographically protected processor and its internal architecture in which the SPS device is implemented. This processor executes code correctly even if an adversary holds the processor. The processor guarantees both the integrity and the confidentiality of the code: the adversary cannot determine the sequence of instructions, nor can the adversary change the instruction sequence in a goal-oriented way. 6. Within a Stone's Throw: Proximal Geolocation of Internet Users via Covert Wireless Signaling Energy Technology Data Exchange (ETDEWEB) Paul, Nathanael R [ORNL; Shue, Craig [Worcester Polytechnic Institute, Worcester; Taylor, Curtis [Worcester Polytechnic Institute, Worcester 2013-01-01 While Internet users may often believe they have anonymity online, a culmination of technologies and recent research may allow an adversary to precisely locate an online user s geophysical location. In many cases, such as peer-to-peer applications, an adversary can easily use a target s IP address to quickly obtain the general geographical location of the target. Recent research has scoped this general area to a 690m (0.43 mile) radius circle. In this work, we show how an adversary can exploit Internet communication for geophysical location by embedding covert signals in communication with a target on a remote wireless local area network. 
We evaluated the approach in two common real-world settings: a residential neighborhood and an apartment building. In the neighborhood case, we used a single-blind trial in which an observer located a target network to within three houses in less than 40 minutes. Directional antennas may have allowed even more precise geolocation. This approach had only a 0.38% false positive rate, despite 24,000 observed unrelated packets and many unrelated networks. This low rate allowed the observer to exclude false locations and continue searching for the target. Our results enable law enforcement or copyright holders to quickly locate online Internet users without requiring time-consuming subpoenas to Internet Service Providers. Other privacy use cases include rapidly locating individuals based on their online speech or interests. We hope to raise awareness of these issues and to spur discussion on privacy and geolocating techniques. 7. Evaluating Moving Target Defense with PLADD Energy Technology Data Exchange (ETDEWEB) Jones, Stephen T. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Outkin, Alexander V. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Gearhart, Jared Lee [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Hobbs, Jacob Aaron [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Siirola, John Daniel [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Phillips, Cynthia A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Tauritz, Daniel [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mulder, Samuel A. 
[Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Naugle, Asmeret Bier [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States) 2015-09-15 This project evaluates the effectiveness of moving target defense (MTD) techniques using a new game we have designed, called PLADD, inspired by the game FlipIt [28]. PLADD extends FlipIt by incorporating what we believe are key MTD concepts. We have analyzed PLADD and proven the existence of a defender strategy that pushes a rational attacker out of the game, demonstrated how limited the strategies available to an attacker are in PLADD, and derived analytic expressions for the expected utility of the game’s players in multiple game variants. We have created an algorithm for finding a defender’s optimal PLADD strategy. We show that in the special case of achieving deterrence in PLADD, MTD is not always cost effective and that its optimal deployment may shift abruptly from not using MTD at all to using it as aggressively as possible. We believe our effort provides basic, fundamental insights into the use of MTD, but conclude that a truly practical analysis requires model selection and calibration based on real scenarios and empirical data. We propose several avenues for further inquiry, including (1) agents with adaptive capabilities more reflective of real world adversaries, (2) the presence of multiple, heterogeneous adversaries, (3) computational game theory-based approaches such as coevolution to allow scaling to the real world beyond the limitations of analytical analysis and classical game theory, (4) mapping the game to real-world scenarios, (5) taking player risk into account when designing a strategy (in addition to expected payoff), (6) improving our understanding of the dynamic nature of MTD-inspired games by using a martingale representation, defensive forecasting, and techniques from signal processing, and (7) using adversarial games to develop inherently resilient cyber systems. 
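The FlipIt mechanics that PLADD builds on can be illustrated with a small simulation. The sketch below is a generic FlipIt-style model (stealthy takeover of a single resource, with each player's utility equal to its fraction of time in control minus its move costs), not the PLADD model from the report; all function names and parameter values are illustrative.

```python
def flipit_utilities(defender_times, attacker_times, horizon, move_cost):
    """Fraction-of-control utilities in a FlipIt-style takeover game.

    Each move instantly (and silently) transfers control of the resource
    to the mover; utility = time-in-control fraction minus a per-move cost
    amortized over the horizon. The defender controls the resource at t=0.
    """
    events = sorted(
        [(t, 'D') for t in defender_times] + [(t, 'A') for t in attacker_times]
    )
    control, last_t = 'D', 0.0
    held = {'D': 0.0, 'A': 0.0}
    for t, player in events:
        if t >= horizon:
            break
        held[control] += t - last_t      # credit the current controller
        control, last_t = player, t      # control flips to the mover
    held[control] += horizon - last_t    # credit up to the end of the horizon
    n_moves = {'D': len(defender_times), 'A': len(attacker_times)}
    return {p: held[p] / horizon - move_cost * n_moves[p] / horizon
            for p in ('D', 'A')}

# Defender flips every 2 time units; attacker flips every 4, offset by 1.
u = flipit_utilities(
    defender_times=[2 * i for i in range(1, 50)],
    attacker_times=[4 * i + 1 for i in range(25)],
    horizon=100.0,
    move_cost=0.5,
)
print(u)
```

Varying the move periods and costs shows the trade-off the report analyzes: moving more often raises time-in-control but burns utility on move costs, and a sufficiently aggressive (or cheap) defender can drive a rational attacker's utility to zero.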
8. Generic attack approaches for industrial control systems. Energy Technology Data Exchange (ETDEWEB) Duggan, David P. 2006-01-01 This report suggests a generic set of attack approaches that are expected to be used against Industrial Control Systems that have been built according to a specific reference model for control systems. The posed attack approaches are ordered from the most desirable, based upon the goal of an attacker. Each attack approach is then graded by the category of adversary that would be capable of utilizing it. The goal of this report is to identify the levels of security required to prevent certain types of attacks against Industrial Control Systems. 9. On the Security of HB# against a Man-in-the-Middle Attack OpenAIRE Ouafi, Khaled; Overbeck, Raphael; Vaudenay, Serge 2008-01-01 At EuroCrypt ’08, Gilbert, Robshaw and Seurin proposed HB# to improve on HB+ in terms of transmission cost and security against man-in-the-middle attacks. Although the security of HB# is formally proven against a certain class of man-in-the-middle adversaries, it is only conjectured for the general case. In this paper, we present a general man-in-the-middle attack against HB# and Random-HB#, which can also be applied to all anterior HB-like protocols, that recovers the shared secret i... 10. Infeasibility of Quantum Cryptography Without Eavesdropping Check Science.gov (United States) Yang, Wei; Huang, Liusheng; Song, Fang; Wang, Qiyan Secure key distribution is impossible in a purely classical environment. Unconditionally secure key distribution becomes available when quantum means are introduced, assisted by a classical communication channel. What is possible when a quantum key distribution scheme operates without classical communication? We present a general model with this constraint and show that quantum key distribution without a classical eavesdropping check is in principle impossible.
This is because an adversary can always succeed in obtaining the secret key via a special case of the man-in-the-middle attack, namely an intercept-and-forward attack, without any risk of being caught. 11. A SECURE KEY MANAGEMENT TECHNIQUE FOR WIRELESS BODY AREA NETWORKS OpenAIRE Venkatasubramanian Sivaprasatham; Jothi Venkateswaran 2012-01-01 In Wireless Body Area Networks (WBAN), the key factors to be considered for the transmission of confidential data are security and privacy, as WBANs mostly have applications in emergency medical response systems. A lack of security may lead to a loss of data privacy, allowing an adversary to inject bogus data or alter legitimate data. Hence, in this study, a secure key management technique for WBAN is proposed. The proposed architecture consists of a set of WBANs connected to the master se... 12. Noiseless Steganography The Key to Covert Communications CERN Document Server Desoky, Abdelrahman 2012-01-01 Among the features that make Noiseless Steganography: The Key to Covert Communications a first of its kind: the first to comprehensively cover Linguistic Steganography; the first to comprehensively cover Graph Steganography; the first to comprehensively cover Game Steganography. Although the goal of steganography is to prevent adversaries from suspecting the existence of covert communications, most books on the subject present outdated steganography approaches that are detectable by human and/or machine examinations. These approaches often fail because they camouflage data as a detectable noise b 13. The NATO-Russia Council - a Success? OpenAIRE 2010-01-01 After the end of the Cold War and the dissolution of the Soviet Union in 1991, NATO and Russia concluded that «they no longer regarded each other as adversaries» (NATO, 1997). They also soon began a gradual rapprochement. In 1997, the Founding Act was created, and in 2002, the NATO-Russia Council (NRC) was established.
The aim of the NRC was to treat the actors as equal partners, build trust and practical cooperation, and become the main forum for crisis and security consultation between NATO and ... 14. Raising Africa?: Celebrity and the Rhetoric of the White Saviour OpenAIRE Katherine M. Bell 2013-01-01 The ‘White Saviour’ is a timeworn vehicle for celebrities in Hollywood film, where actors perform as heroes who save the day against dark and ominous adversaries. Pop stars take on personas and ‘exotic’ characters as well. And with increasing visibility, the famous perform real-life hero roles as philanthropists for social causes around the so-called ‘developing’ world. This essay explores how the celebrity philanthropist is constructed as redeemer of distant Others and how this role mingles w... 15. Exponential Separation of Quantum and Classical One-Way Communication Complexity for a Boolean Function CERN Document Server Gavinsky, Dmitry; Kempe, Julia; Wolf, Ronald de 2006-01-01 We give an exponential separation between one-way quantum and classical communication complexity for a Boolean function. Earlier such a separation was known only for a relation. A very similar result was obtained earlier but independently by Kerenidis and Raz [KR06]. Our version of the result gives an example in the bounded-storage model of cryptography, where the key is secure if the adversary has a certain amount of classical storage, but is completely insecure if he has a similar amount of quantum storage. 16. Bot, Cyborg and Automated Turing Test Science.gov (United States) Yan, Jeff The Automated Turing test (ATT) is almost a standard security technique for addressing the threat of undesirable or malicious bot programs. In this paper, we motivate an interesting adversary model, cyborgs, which are either humans assisted by bots or bots assisted by humans.
Since there is always a human behind these bots, or a human can always be available on demand, the ATT fails to differentiate such cyborgs from humans. The notion of “telling humans and cyborgs apart” is novel, and it can be of practical relevance in network security. Although it is a challenging task, we have had some success in telling cyborgs and humans apart automatically. 17. How to deal with malleability of BitCoin transactions OpenAIRE Andrychowicz, Marcin; Dziembowski, Stefan; Malinowski, Daniel; Mazurek, Łukasz 2013-01-01 BitCoin transactions are malleable in the sense that, given a transaction, an adversary can easily construct an equivalent transaction which has a different hash. This can pose a serious problem for some BitCoin distributed contracts, in which changing a transaction's hash may result in protocol disruption and financial loss. The problem mostly concerns protocols which use a "refund" transaction to withdraw a deposit in case of protocol interruption. In this short note, we show a gener... 18. Iran and Britain: The Politics of Oil and Coup D’état before the Fall of Reza Shah OpenAIRE Behravesh, Maysam 2010-01-01 British strategy in the Middle East consolidated around a sustained effort to prevent any adversarial penetration into the Persian Gulf, defending its position athwart the principal lines of communication and supply between Northern Europe and British India, and to protect the newly discovered Persian oil that was used to power the Royal Navy. Since the discovery of oil in 1908 by D’Arcy’s oil exploration company, and especially after the end of World War One in 1918, until the start of Mosad... 19. A secure email login system using virtual password CERN Document Server Doshi, Nishant 2010-01-01 Today, password compromise by adversaries is common and serves many purposes. In ICC 2008, Lei et al. proposed a new user authentication system based on the virtual password system.
In the virtual password system, they used a linear randomized function to be secure against identity theft, phishing, keylogging and shoulder-surfing attacks. In ICC 2010, Li presented a security attack on Lei's work. This paper modifies Lei's work to prevent Li's attack while reducing the server overhead. This paper also discusses the problems with the current password recovery system and gives a better approach. 20. Credible nuclear deterrence for Japan OpenAIRE Rasmussen, David C. 2000-01-01 The credibility of the U.S. nuclear deterrent extended to Japan has decreased in recent years due to the declining role of nuclear weapons in U.S. strategy. The U.S. nuclear guarantee is an important element of Japan's security strategy, and the United States should maintain it. To reassure Japan of U.S. nuclear commitments without provoking domestic Japanese opposition or potential adversaries, the United States should improve the perception of its resolve to defend Japan with nuclear weapon... 1. Enhance Confidentiality of Threshold Signature for MANET Institute of Scientific and Technical Information of China (English) GUO Wei; XIONG Zhongwei 2006-01-01 The nodes that forward mobile ad hoc network (MANET) communications may be malicious. This means not only that an adversary might be able to acquire sensitive information about the threshold signatures from a compromised node, but also that partial signatures may be fabricated by a malicious node, so the advantages of threshold signatures would disappear. Signing and encrypting the sensitive information of the threshold signatures, so that only the specified receiver can recover it, improves the confidentiality of threshold signatures. The security analysis shows that the method is suitable for the security characteristics of a MANET containing malicious nodes, and that message transmission is secure against the attack. 2.
Covert Communication over Classical-Quantum Channels OpenAIRE Sheikholeslami, Azadeh; Bash, Boulat A.; Towsley, Donald; Goeckel, Dennis; Guha, Saikat 2016-01-01 Recently, the fundamental limits of covert, i.e., reliable-yet-undetectable, communication have been established for general memoryless channels and for lossy-noisy bosonic (quantum) channels with a quantum-limited adversary. The key import of these results was the square-root law (SRL) for covert communication, which states that $O(\sqrt{n})$ covert bits, but no more, can be reliably transmitted over $n$ channel uses with $O(\sqrt{n})$ bits of secret pre-shared between communicating parties.... 3. Simulation of the effectiveness evaluation process of security systems Science.gov (United States) Godovykh, A. V.; Stepanov, B. P.; Sheveleva, A. A.; Sharafieva, K. R. 2016-06-01 The paper is devoted to the creation of a cross-functional analytical complex for simulating the operation of security system elements. The basic objectives, design concept and interrelation of the main elements of the complex are described. The proposed conception of the analytical complex provides an opportunity to simulate processes for evaluating the effectiveness of the physical protection system of a nuclear facility. The complex uses models that take into account features of the object, parameters of technical means and tactics of adversaries. Recommendations are made for applying this concept to training specialists in the field of physical protection of nuclear materials. 4. Spying the World from your Laptop -- Identifying and Profiling Content Providers and Big Downloaders in BitTorrent OpenAIRE Le Blond, Stevens; Legout, Arnaud; Le Fessant, Fabrice; Dabbous, Walid; Kaafar, Mohamed Ali 2010-01-01 International audience This paper presents a set of exploits an adversary can use to continuously spy on most BitTorrent users of the Internet from a single machine and for a long period of time.
Using these exploits for a period of 103 days, we collected 148 million IPs downloading 2 billion copies of content. We identify the IP address of the content providers for 70% of the BitTorrent contents we spied on. We show that a few content providers inject most contents into BitTorrent and th... 5. A Leakage-Resilient Signature Scheme Based on BLS Signatures [基于BLS签名的弹性泄露签名方案] Institute of Scientific and Technical Information of China (English) 2011-01-01 Digital signatures, one of the most important cryptographic primitives, are commonly used in information systems, and thus enhancing the security of a signature scheme can benefit such applications. Currently, leakage-resilient cryptography is a very hot topic in cryptographic research. A leakage-resilient cryptographic primitive is said to be secure if it remains secure even when arbitrary but bounded information about the signer's secret key (and other internal states) is leaked to an adversary. Obviously, the leakage-resilient signature ... 6. Fast and Memory-Efficient Key Recovery in Side-Channel Attacks DEFF Research Database (Denmark) Bogdanov, Andrey; Kizhvatov, Ilya; Manzoor, Kamran 2016-01-01 ... this algorithm outputs the full combined keys in the optimal order – from more likely to less likely ones. OKEA uses plenty of memory by its nature, though, which limits its practical efficiency. Especially in cases where the side-channel traces are noisy, the memory and running time requirements to find ... the advantage at the example of a DPA attack on an 8-bit embedded software implementation of AES-128. We vary the number of traces available to the adversary and report a significant increase in the success rate of the key recovery due to SKEA when compared to OKEA, within practical limitations on time... 7. Trojan Horse attacks on Quantum Key Distribution systems CERN Document Server Gisin, Nicolas; Kraus, B; Zbinden, H; Ribordy, G 2005-01-01 General Trojan horse attacks on quantum key distribution systems are analyzed.
We illustrate the power of such attacks with today's technology and conclude that all systems must implement active counter-measures. In particular, all systems must include an auxiliary detector that monitors any incoming light. We show that such counter-measures can be efficient, provided enough additional privacy amplification is applied to the data. We present a practical way to reduce the maximal information that an adversary can gain using Trojan horse attacks. 8. Influence versus intent for predictive analytics in situation awareness Science.gov (United States) Cui, Biru; Yang, Shanchieh J.; Kadar, Ivan 2013-05-01 Predictive analytics in situation awareness requires an element to comprehend and anticipate potential adversary activities that might occur in the future. Most work in high-level fusion or predictive analytics utilizes machine learning, pattern mining, Bayesian inference, and decision tree techniques to predict future actions or states. The emergence of social computing in broader contexts has drawn interest in bringing the hypotheses and techniques of social theory to algorithmic and computational settings for predictive analytics. This paper aims at answering the question of how the influence and attitude (sometimes interpreted as intent) of adversarial actors can be formulated and computed algorithmically, as a higher-level fusion process, to provide predictions of future actions. The challenges in this interdisciplinary endeavor include drawing on the existing understanding of influence and attitude in both the social science and computing fields, as well as the mathematical and computational formulation for the specific context of the situation to be analyzed. The study of 'influence' has resurfaced in recent years due to the emergence of social networks in the virtualized cyber world. Theoretical analysis and techniques developed in this area are discussed in this paper in the context of predictive analysis.
Meanwhile, the notion of intent, or 'attitude' in social theory terminology, is a relatively uncharted area in the computing field. Note that a key objective of predictive analytics is to identify impending/planned attacks so that their 'impact' and 'threat' can be prevented. In this spirit, indirect and direct observables are drawn and derived to infer the influence network and attitude to predict future threats. This work proposes an integrated framework that jointly assesses adversarial actors' influence network and their attitudes as a function of past actions and action outcomes. A preliminary set of algorithms are developed and tested using the Global Terrorism 9. A simulation environment for modeling and development of algorithms for ensembles of mobile microsystems Science.gov (United States) Fink, Jonathan; Collins, Tom; Kumar, Vijay; Mostofi, Yasamin; Baras, John; Sadler, Brian 2009-05-01 The vision for the Micro Autonomous Systems Technologies (MAST) program is to develop autonomous, multifunctional, collaborative ensembles of agile, mobile microsystems to enhance tactical situational awareness in urban and complex terrain for small unit operations. Central to this vision is the ability to have multiple, heterogeneous autonomous assets function as a single cohesive unit that is adaptable, responsive to human commands and resilient to adversarial conditions. This paper represents an effort to develop a simulation environment for studying control, sensing, communication, perception, and planning methodologies and algorithms. 10. Proceedings of the fourth international conference and exhibition: World Congress on superconductivity.
Volume 1 International Nuclear Information System (INIS) The goals of the World Congress on Superconductivity (WCS) have been to establish and foster the development and commercial application of superconductivity technology on a global scale by providing a non-adversarial, non-advocacy forum where scientists, engineers, businessmen and government personnel can freely exchange information and ideas on recent developments and directions for the future of superconductive research. Sessions were held on: accelerator technology, power and energy, persistent magnetic fields, performance characterization, physical properties, fabrication methodology, superconductive magnetic energy storage (SMES), thin films, high temperature materials, device applications, wire fabrication, and granular superconductors. Individual papers are indexed separately 11. Structured Assessment Approach: a procedure for the assessment of fuel cycle safeguard systems Energy Technology Data Exchange (ETDEWEB) Parziale, A.A.; Patenaude, C.J.; Renard, P.A.; Sacks, I.J. 1980-03-06 Lawrence Livermore National Laboratory has developed and tested for the United States Nuclear Regulatory Commission a procedure for the evaluation of Material Control and Accounting (MC and A) Systems at Nuclear Fuel Facilities. This procedure, called the Structured Assessment Approach, SAA, subjects the MC and A system at a facility to a series of increasingly sophisticated adversaries and strategies. A fully integrated version of the computer codes which assist the analyst in this assessment was made available in October, 1979. The concepts of the SAA and the results of the assessment of a hypothetical but typical facility are presented. 12. Operating System Security CERN Document Server Jaeger, Trent 2008-01-01 Operating systems provide the fundamental mechanisms for securing computer processing. 
Since the 1960s, operating systems designers have explored how to build "secure" operating systems - operating systems whose mechanisms protect the system against a motivated adversary. Recently, the importance of ensuring such security has become a mainstream issue for all operating systems. In this book, we examine past research that outlines the requirements for a secure operating system and research that implements example systems that aim for such requirements. For system designs that aimed to 13. On the Complexity of Additively Homomorphic UC Commitments DEFF Research Database (Denmark) Trifiletti, Roberto; Nielsen, Jesper Buus; Frederiksen, Tore Kasper ... the commitment protocol by Garay et al. from Eurocrypt 2014. As a main technical improvement over the scheme mentioned above, and other schemes based on using error-correcting codes for UC commitment, we develop a new technique which allows basing the extraction property on erasure decoding as ... oblivious transfer functionality. Based on this we prove our scheme secure in the Universal Composability (UC) framework against a static and malicious adversary corrupting any number of parties. On a practical note, our scheme improves significantly on the non-homomorphic scheme of Cascudo et al... 14. Security intelligence a practitioner's guide to solving enterprise security challenges CERN Document Server Li, Qing 2015-01-01 Identify, deploy, and secure your enterprise. Security Intelligence, A Practitioner's Guide to Solving Enterprise Security Challenges is a handbook for security in modern times, against modern adversaries. As leaders in the design and creation of security products that are deployed globally across a range of industries and market sectors, authors Qing Li and Gregory Clark deliver unparalleled insight into the development of comprehensive and focused enterprise security solutions.
They walk you through the process of translating your security goals into specific security technology domains, fo... 15. The Execution Game CERN Document Server Moallemi, Ciamac C; Van Roy, Benjamin 2008-01-01 We consider a trader who aims to liquidate a large position in the presence of an arbitrageur who hopes to profit from the trader's activity. The arbitrageur is uncertain about the trader's position and learns from observed market activity. This is a dynamic game with asymmetric information. We present an algorithm for computing perfect Bayesian equilibrium behavior and conduct numerical experiments. Our results demonstrate that the trader's strategy differs in important ways from one that would be optimal in the absence of an arbitrageur. In particular, the trader's actions depend on and influence the arbitrageur's beliefs. Accounting for the presence of a strategic adversary can greatly reduce transaction costs. 16. Quantum cheques Science.gov (United States) Moulick, Subhayan Roy; Panigrahi, Prasanta K. 2016-06-01 We propose the idea of a quantum cheque scheme, a cryptographic protocol in which any legitimate client of a trusted bank can issue a cheque that cannot be counterfeited or altered in any way, and that can be verified by a bank or any of its branches. We formally define a quantum cheque and present the first unconditionally secure quantum cheque scheme, shown to be secure against any no-signalling adversary. The proposed quantum cheque scheme can be perceived as the quantum analog of Electronic Data Interchange, as an alternative to current e-Payment Gateways. 17. Proceedings of the fourth international conference and exhibition: World Congress on superconductivity. Volume 1 Energy Technology Data Exchange (ETDEWEB) Krishen, K.; Burnham, C. [eds.] [National Aeronautics and Space Administration, Houston, TX (United States). Lyndon B.
Johnson Space Center 1994-12-31 The goals of the World Congress on Superconductivity (WCS) have been to establish and foster the development and commercial application of superconductivity technology on a global scale by providing a non-adversarial, non-advocacy forum where scientists, engineers, businessmen and government personnel can freely exchange information and ideas on recent developments and directions for the future of superconductive research. Sessions were held on: accelerator technology, power and energy, persistent magnetic fields, performance characterization, physical properties, fabrication methodology, superconductive magnetic energy storage (SMES), thin films, high temperature materials, device applications, wire fabrication, and granular superconductors. Individual papers are indexed separately. 18. Subliminal Probing for Private Information via EEG-Based BCI Devices OpenAIRE Frank, Mario; Hwu, Tiffany; Jain, Sakshi; Knight, Robert; Martinovic, Ivan; Mittal, Prateek; Perito, Daniele; Song, Dawn 2013-01-01 Martinovic et al. proposed a Brain-Computer-Interface (BCI) -based attack in which an adversary is able to infer private information about a user, such as their bank or area-of-living, by analyzing the user's brain activities. However, a key limitation of the above attack is that it is intrusive, requiring user cooperation, and is thus easily detectable and can be reported to other users. In this paper, we identify and analyze a more serious threat for users of BCI devices. We propose a it su... 19. Use of Hamiltonian Cycles in Cryptography CERN Document Server WenBin, Hsieh 2011-01-01 In cryptography, key distribution is always an important issue in establishing a symmetric key. The famous method of exchanging keys, Diffie-Hellman key exchange, is also vulnerable to a man-in-the-middle attack. 
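The key-agreement step referenced in the entry above can be sketched in a few lines. The parameters here are toy values chosen purely for illustration (real deployments use large primes and authenticate the exchange, since unauthenticated Diffie-Hellman is exactly what the man-in-the-middle attack exploits):

```python
# Textbook Diffie-Hellman with toy parameters (illustrative only).
p, g = 23, 5            # small public prime modulus and generator
a, b = 6, 15            # the two parties' private exponents
A = pow(g, a, p)        # Alice's public value g^a mod p
B = pow(g, b, p)        # Bob's public value g^b mod p

# Each side combines its own private exponent with the other's public
# value; both arrive at the same shared secret, which is never sent.
shared_a = pow(B, a, p)
shared_b = pow(A, b, p)
assert shared_a == shared_b   # both equal 2 with these toy values
```

Nothing in this exchange authenticates the public values A and B, which is why an active adversary can substitute its own values in each direction; the authenticated variants mentioned in the entry (STS, authenticated D-H) exist to close exactly that gap.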
Therefore, many protocols are proposed to secure the exchange such as the authenticated D-H key agreement protocol, station-to-station (STS) protocol and secure socket layer/transport layer security (SSL/TLS) protocol. With these protocols, we propose a novel protocol based on the Hamiltonian cycle problem which is NP-complete. The novel protocol can make key agreement in one step and, moreover, make an intermediate useless to an adversary. 20. An Efficient Signature Scheme based on Factoring and Discrete Logarithm OpenAIRE Ciss, Abdoul Aziz; Cheikh, Ahmed Youssef Ould 2012-01-01 This paper proposes a new signature scheme based on two hard problems: the cube root extraction modulo a composite modulus (which is equivalent to the factorisation of the modulus, IFP) and the discrete logarithm problem (DLP). By combining these two cryptographic assumptions, we introduce an efficient and strongly secure signature scheme. We show that if an adversary can break the new scheme with an algorithm $\\mathcal{A}$, then $\\mathcal{A}$ can be used to solve both the DLP and the IFP. The k... 1. Re-Imagining Punishment: An Exercise in “Intersectional Criminal Justice” Directory of Open Access Journals (Sweden) Maya Pagni Barak 2014-10-01 Full Text Available Over the last 40 years a number of scholars have called upon fellow criminologists to rethink the field’s priorities and methods, as well as the American criminal justice system and current punishment practices. Drawing on alternative criminologies, including constitutive and peacemaking criminologies, as well as the practice of reintegrative shaming, this paper presents a new model of criminal justice that combines aspects of adversarial, restorative, social, and transformative justice frameworks. The resulting “intersectional criminal justice” offers a holistic harm-reduction model that moves the focus of our criminal justice system away from “rough justice” and towards collective restorative healing and positive social change. 2. 
Upconversion-based receivers for quantum hacking-resistant quantum key distribution Science.gov (United States) Jain, Nitin; Kanter, Gregory S. 2016-07-01 We propose a novel upconversion (sum frequency generation)-based quantum-optical system design that can be employed as a receiver (Bob) in practical quantum key distribution systems. The pump governing the upconversion process is produced and utilized inside the physical receiver, making its access or control unrealistic for an external adversary (Eve). This pump facilitates several properties which permit Bob to define and control the modes that can participate in the quantum measurement. Furthermore, by manipulating and monitoring the characteristics of the pump pulses, Bob can detect a wide range of quantum hacking attacks launched by Eve. 3. Zur Geschichte der Geophysik Science.gov (United States) Strobach, Klaus 1980-07-01 Alfred Wegener's most important work, the theory of continental drift, has a key position in the history of geophysics and has crucially advanced the discussion of this central problem of geodynamics amongst supporters and adversaries. The aim of this paper is to paint a portrait of Wegener's personality, of the stations of his life, and of his interests and research work. The conceptions of the origin of continents and oceans prior to Wegener, and the further development of his ideas after his death on the ice cap of Greenland 50 years ago are discussed. 4. Managed care contracting and payment reform: avoiding a showdown. Science.gov (United States) Nugent, Michael E 2010-07-01 Health reform promises to fundamentally change what and how CMS and commercial payers reimburse providers. Providers need to transition from their traditionally adversarial, transactions-based payer relationships to ones that optimize purchaser and patient value for the dollar. 
To avoid negotiation table showdowns and to prepare for reform, commercial payers and providers should take three actions: Recognize the dead ends with their historical relationships. Formulate their road map to value-based contracting. Avoid operational potholes along the way. PMID:20608414 5. The attack navigator DEFF Research Database (Denmark) Probst, Christian W.; Willemson, Jan; Pieters, Wolter 2016-01-01 that are caused by the strategic behaviour of adversaries. Therefore, technology-supported methods are needed to help us identify and manage these risks. In this paper, we describe the attack navigator: a graph-based approach to security risk assessment inspired by navigation systems. Based on maps of a socio-technical system, the attack navigator identifies routes to an attacker goal. Specific attacker properties such as skill or resources can be included through attacker profiles. This enables defenders to explore attack scenarios and the effectiveness of defense alternatives under different threat conditions.... 6. The Cyber-Physical Attacker DEFF Research Database (Denmark) Vigo, Roberto 2012-01-01 The world of Cyber-Physical Systems ranges from industrial to national interest applications. Even though these systems are pervading our everyday life, we are still far from fully understanding their security properties. Devising a suitable attacker model is a crucial element when studying...... the security properties of CPSs, as a system cannot be secured without defining the threats it is subject to. In this work an attacker scenario is presented which addresses the peculiarities of a cyber-physical adversary, and we discuss how this scenario relates to other attacker models popular in the security... 7. 
The LOCAL attack: Cryptanalysis of the authenticated encryption scheme ALE DEFF Research Database (Denmark) Khovratovich, Dmitry; Rechberger, Christian 2014-01-01 We show how to produce a forged (ciphertext, tag) pair for the scheme ALE with data and time complexity of 2^102 ALE encryptions of short messages and the same number of authentication attempts. We use a differential attack based on a local collision, which exploits the availability of extracted...... state bytes to the adversary. Our approach allows for a time-data complexity tradeoff, with an extreme case of a forgery produced after 2^119 attempts and based on a single authenticated message. Our attack is further turned into a state recovery and a universal forgery attack with a time complexity... 8. Privacy-Preserving Matching of Spatial Datasets with Protection against Background Knowledge DEFF Research Database (Denmark) Ghinita, Gabriel; Vicente, Carmen Ruiz; Shang, Ning; 2010-01-01 circuits that evaluate the matching condition without revealing anything else other than the matching outcome. However, existing solutions have at least one of the following drawbacks: (i) they fail to protect against adversaries with background knowledge on data distribution, (ii) they compromise privacy...... by returning large amounts of false positives and (iii) they rely on complex and expensive SMC protocols. In this paper, we introduce a novel geometric transformation to perform private matching on spatial datasets. Our method is efficient and it is not vulnerable to background knowledge attacks. We consider... 9. Privacy Preserving Quantum Anonymous Transmission via Entanglement Relay Science.gov (United States) Yang, Wei; Huang, Liusheng; Song, Fang 2016-06-01 Anonymous transmission is an interesting and crucial issue in computer communication area, which plays a supplementary role to data privacy. 
In this paper, we put forward a privacy preserving quantum anonymous transmission protocol based on entanglement relay, which constructs anonymous entanglement from EPR pairs instead of multi-particle entangled state, e.g. GHZ state. Our protocol achieves both sender anonymity and receiver anonymity against an active adversary and tolerates any number of corrupt participants. Meanwhile, our protocol obtains an improvement in efficiency compared to quantum schemes in previous literature. 10. Anonymity-Preserving Public-Key Encryption DEFF Research Database (Denmark) Kohlweiss, Markulf; Maurer, Ueli; Onete, Cristina; 2013-01-01 . While anonymity and confidentiality appear to be orthogonal properties, making anonymous communication confidential is more involved than one might expect, since the ciphertext might reveal which public key has been used to encrypt. To address this problem, public-key cryptosystems with enhanced...... literature (IND-CCA, key-privacy, weak robustness). We also show that a desirable stronger variant, preventing the adversary from selective ”trial-deliveries” of messages, is unfortunately unachievable by any PKE scheme, no matter how strong. The constructive approach makes the guarantees achieved... 11. Privacy Preserving Quantum Anonymous Transmission via Entanglement Relay. Science.gov (United States) Yang, Wei; Huang, Liusheng; Song, Fang 2016-01-01 Anonymous transmission is an interesting and crucial issue in computer communication area, which plays a supplementary role to data privacy. In this paper, we put forward a privacy preserving quantum anonymous transmission protocol based on entanglement relay, which constructs anonymous entanglement from EPR pairs instead of multi-particle entangled state, e.g. GHZ state. Our protocol achieves both sender anonymity and receiver anonymity against an active adversary and tolerates any number of corrupt participants. 
Meanwhile, our protocol obtains an improvement in efficiency compared to quantum schemes in previous literature. PMID:27247078 12. Random Fruits on the Zielonka Tree CERN Document Server Horn, Florian 2009-01-01 Stochastic games are a natural model for the synthesis of controllers confronted to adversarial and/or random actions. In particular, $\\omega$-regular games of infinite length can represent reactive systems which are not expected to reach a correct state, but rather to handle a continuous stream of events. One critical resource in such applications is the memory used by the controller. In this paper, we study the amount of memory that can be saved through the use of randomisation in strategies, and present matching upper and lower bounds for stochastic Muller games. 13. Chaotic iterations for steganography: Stego-security and topological-security CERN Document Server Friot, Nicolas; Bahi, Jacques M 2011-01-01 In this paper, a novel steganographic scheme based on chaotic iterations is proposed. This research work belongs to the information-hiding security field. We show that the proposed scheme is stego-secure, which is the highest level of security in a well defined and studied category of attack called "watermark-only attack". Additionally, we prove that this scheme presents topological properties so that it is one of the first able to face, at least partially, an adversary when considering the other categories of attacks defined in the literature. 14. MiniLEGO DEFF Research Database (Denmark) Frederiksen, Tore Kasper; Jakobsen, Thomas Pelle; Nielsen, Jesper Buus; 2013-01-01 One of the main tools to construct secure two-party computation protocols are Yao garbled circuits. Using the cut-and-choose technique, one can get reasonably efficient Yao-based protocols with security against malicious adversaries. At TCC 2009, Nielsen and Orlandi [28] suggested to apply cut...... 
new protocol has the following advantages: It maintains the efficiency of the LEGO cut-and-choose. After a number of seed oblivious transfers linear in the security parameter, the construction uses only primitives from Minicrypt (i.e., private-key cryptography) per gate in the circuit (hence the name... 15. Scalable and Unconditionally Secure Multiparty Computation DEFF Research Database (Denmark) Damgård, Ivan Bjerre; Nielsen, Jesper Buus 2007-01-01 We present a multiparty computation protocol that is unconditionally secure against adaptive and active adversaries, with communication complexity O(Cn)k+O(Dn^2)k+poly(nk), where C is the number of gates in the circuit, n is the number of parties, k is the bit-length of the elements of the field...... over which the computation is carried out, D is the multiplicative depth of the circuit, and κ is the security parameter. The corruption threshold is t < n/3. For passive security the corruption threshold is t < n/2......, the protocol has so-called everlasting security.... 16. Multiparty Computation for Dishonest Majority DEFF Research Database (Denmark) Damgård, Ivan Bjerre; Orlandi, Claudio 2010-01-01 theory and practice. We propose a new protocol to securely evaluate reactive arithmetic circuits, that offers security against an active adversary in the universally composable security framework. Instead of the "do-and-compile" approach (where the parties use zero-knowledge proofs to show...... that they are following the protocol) our key ingredient is an efficient version of the "cut-and-choose" technique, that allows us to achieve active security for just a (small) constant amount of work more than for passive security.... 17. 
Graph Coarsening for Path Finding in Cybersecurity Graphs Energy Technology Data Exchange (ETDEWEB) Hogan, Emilie A.; Johnson, John R.; Halappanavar, Mahantesh 2013-01-01 In the pass-the-hash attack, hackers repeatedly steal password hashes and move through a computer network with the goal of reaching a computer with high-level administrative privileges. In this paper we apply graph coarsening in network graphs for the purpose of detecting hackers using this attack or assessing the risk level of the network's current state. We repeatedly take graph minors, which preserve the existence of paths in the graph, and take powers of the adjacency matrix to count the paths. This allows us to detect the existence of paths as well as find paths that have high risk of being used by adversaries. 18. Pushing the Limits of Military Coercion Theory DEFF Research Database (Denmark) Jakobsen, Peter Viggo 2011-01-01 The centrality of military coercion in contemporary Western crisis and conflict management constitutes a major policy problem because the United States and its allies are poor at translating their overwhelming military superiority into adversary compliance. The standard explanation provided...... by coercion theorists is that coercion is hard and that miscalculation, misperception, or practical problems can defeat even a perfectly executed strategy. What they ignore is that the problem also stems from the limits of coercion theory, which has left us with an unnecessarily poor understanding of how...... the principal theoretical propositions with a firmer empirical foundation and make military coercion theory more useful for policy makers.... 19. User Authentication with Provable Security against Online Dictionary Attacks Directory of Open Access Journals (Sweden) Yongzhong He 2009-05-01 Full Text Available Dictionary attacks are among the best-known threats against password-based authentication schemes. 
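The path-counting device in the graph-coarsening entry above — entry (i, j) of the k-th power of the adjacency matrix counts walks of length k from node i to node j — can be sketched directly. The tiny graph below is hypothetical and makes no claim about the paper's actual implementation:

```python
def matmul(X, Y):
    """Multiply two square integer matrices (plain nested lists)."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def walk_counts(adj, length):
    """Return adj**length; entry (i, j) counts walks of that length i -> j."""
    result = adj
    for _ in range(length - 1):
        result = matmul(result, adj)
    return result

# Tiny network: 0 -- 1 -- 2 (undirected path graph)
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
two_step = walk_counts(adj, 2)
assert two_step[0][2] == 1   # exactly one two-step walk from node 0 to 2
assert two_step[1][1] == 2   # two closed two-step walks at the middle node
```

A nonzero entry in some power of the matrix certifies that a path of that length exists, which is the existence test the entry describes; graph minors shrink the matrix while preserving that property.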
Based on Reverse Turing Tests (RTTs), some usable and scalable authentication schemes are proposed to defeat online dictionary attacks mounted by automated programs. However, it is found that these authentication schemes are vulnerable to various online dictionary attacks. In this paper, a practical decision function is presented, based on which RTT authentication schemes are constructed and shown to be secure against all the known online dictionary attacks. After formal modeling of the adversary, the static and dynamic security of the authentication schemes are proved formally. 20. Self-Healing Computation OpenAIRE Saad, George; Saia, Jared 2014-01-01 In the problem of reliable multiparty computation (RC), there are $n$ parties, each with an individual input, and the parties want to jointly compute a function $f$ over $n$ inputs. The problem is complicated by the fact that an omniscient adversary controls a hidden fraction of the parties. We describe a self-healing algorithm for this problem. In particular, for a fixed function $f$, with $n$ parties and $m$ gates, we describe how to perform RC repeatedly as the inputs to $f$ change. Our al... 1. Peace is more than the absence of hacks OpenAIRE Schmetz, Martin 2015-01-01 Part V of our series on cyberpeace "Cyberpeace: Dimensionen eines Gegenentwurfs". With everybody focusing on cyberwar, our blog has decided to discuss cyberpeace instead. So far we have seen musings on war and peace, the meaning of the term "cyberpeace" itself and how we construct it discursively and calls to end cyberwar by focusing on the technical aspects again. All of these points are valid. But I feel that they are limited in their scope, because they focus too much on the adversarial... 2. 
The Iranian nuclear crisis a memoir CERN Document Server Mousavian, Seyed Hossein 2012-01-01 The first detailed Iranian account of the diplomatic struggle between Iran and the international community, The Iranian Nuclear Crisis: A Memoir opens in 2002, as news of Iran's clandestine uranium enrichment and plutonium production facilities emerge. Seyed Hossein Mousavian, previously the head of the Foreign Relations Committee of Iran's Supreme National Security Council and spokesman for Tehran's nuclear negotiating team, brings the reader into Tehran's private deliberations as its leaders wrestle with internal and external adversaries. Mousavian provides readers with intim 3. Infinite Randomness Expansion and Amplification with a Constant Number of Devices OpenAIRE Coudron, Matthew; Yuen, Henry 2013-01-01 We present a device-independent randomness expansion protocol, involving only a constant number of non-signaling quantum devices, that achieves \\emph{infinite expansion}: starting with $m$ bits of uniform private randomness, the protocol can produce an unbounded amount of certified randomness that is $\\exp(-\\Omega(m^{1/3}))$-close to uniform and secure against a quantum adversary. The only parameters which depend on the size of the input are the soundness of the protocol and the security of t... 4. Reflexiones en torno a los conceptos de guerra justa y cruzada y su actual revalorización. Directory of Open Access Journals (Sweden) Horacio Cagni 2009-07-01 Full Text Available The concepts of just war and crusade remain through time as elements legitimating a casus belli, from their early manifestations in the high Middle Ages, through their pinnacle in the Crusades for regaining the Holy Land, to their ratification in the contemporary era. The typical characteristic is the demonization of the adversary as immeasurable evil and total enemy, destroying the ius belli and iustus hostis concepts. 
The essay examines the examples of the Russian-German war of 1941-45, the Anglo-American strategic bombing in World War II, and the confrontations between the West and the Islamic world. 5. Legal Transparency as a National Security Strategy OpenAIRE Yoni Eshpar 2013-01-01 The act of taking initiative is considered the preferred modus operandi within the various spheres that shape and define the concept of Israel’s national security: on the battlefield and in diplomacy, as well as on the media front. Conventional wisdom within all these spheres is that one should not be dragged along by the force of events, nor should one ever allow an adversary to define the terms of the battle. The legal realm, however, would appear to be an exception to this rule. Although reco... 6. NEMESIS: Keeping Russia an Enemy through Cold War Pathologies Directory of Open Access Journals (Sweden) Matthew Crosston 2015-01-01 Full Text Available This article examines the openly adversarial neoconservative foundation under George Bush and the supposedly more ‘engaged’ diplomatic interaction under Barack Obama. What will be exposed is a fairly uninspired and non-innovative American policy that not only fails to consider Russian initiatives from Russia’s own national security perspectives, but aims to contain it within a continued Cold War box that not only sours opportunities for collaboration but guarantees the absence of partnership in areas of global security. This piece examines the consequences of imagining Russia only as nemesis. 7. The COG Strikes Back DEFF Research Database (Denmark) Barfoed, Jacob 2014-01-01 contributes to the discussion by combining the COG concept with strategic theory, hereby addressing many of the raised critique points. The article presents three COG-Strategy schools, centered on different/competing interpretations of the Clausewitzian Center of Gravity (CoG) concept as well as different...... 
by the adversary leadership and on defeating the adversary’s strategy, starting at the grand strategic level of war and with the lower levels providing increasingly more details to various elements of the grand strategy... 8. MicroCommentary: A New Role for Coenzyme F420 in Aflatoxin Reduction by Soil Mycobacteria Energy Technology Data Exchange (ETDEWEB) Graham, David E [ORNL 2010-01-01 Hepatotoxic aflatoxins have found a worthy adversary in two new families of bacterial oxidoreductases. These enzymes use the reduced coenzyme F420 to initiate the degradation of furanocoumarin compounds, including the major mycotoxin products of Aspergillus flavus. Along with pyridoxamine 5'-phosphate oxidases and aryl nitroreductases, these proteins form a large and versatile superfamily of flavin- and deazaflavin-dependent oxidoreductases. F420-dependent members of this family appear to share a common mechanism of hydride transfer from the reduced deazaflavin to the electron-deficient ring systems of their substrates. 9. ANODR-ECC Key Management protocol with TELNET to secure Application and Network layer for Mobile Adhoc Networks Directory of Open Access Journals (Sweden) G.Padmavathi 2012-02-01 Full Text Available A mobile ad hoc network (MANET) is a self-organizing network that consists of mobile nodes that are connected through wireless media. A number of unique features, such as lack of infrastructural or central administrative supports, dynamic network topologies, open communication channels, and limited device capabilities and bandwidths, have made secure, reliable and efficient routing operations in MANET a challenging task. The ultimate goal of the security solutions for MANET is to provide security services, such as authentication, confidentiality, integrity, anonymity, and availability to mobile users. To achieve these goals, the security solution needs to span the entire protocol stack. 
The proposed protocol ANODR-ECC with Telnet provides application-layer security, ensures route anonymity and location privacy, and is robust against eavesdropping attacks. For route anonymity, it prevents strong adversaries from tracing a packet flow back to its source or destination; for location privacy, it ensures that adversaries cannot discover the real identities of local transmitters. The simulation is done using the QualNet 5.0 network simulator for different numbers of mobile nodes. The proposed model has shown improved results in terms of average throughput, average end-to-end delay, average packet delivery ratio and average jitter. 10. Development of nonproliferation and assessment scenarios. Energy Technology Data Exchange (ETDEWEB) Finley, Melissa; Barnett, Natalie Beth 2005-10-01 The overall objective of the Nonproliferation and Assessments Scenario Development project is to create and analyze potential and plausible scenarios that would lead to an adversary's ability to acquire and use a biological weapon. The initial three months of funding were intended to be used to develop a scenario to demonstrate the efficacy of this analysis methodology; however, it was determined that a substantial amount of preliminary data collection would be needed before a proof-of-concept scenario could be developed. We have dedicated substantial effort to determining the acquisition pathways for Foot and Mouth Disease Virus, and similar processes will be applied to all pathogens of interest. We have developed a biosecurity assessments database to capture information on adversary skill locales, available skill sets in specific regions, pathogen sources and regulations involved in pathogen acquisition from legitimate facilities. FY06 funding, once released, will be dedicated to data collection on acquisition, production and dissemination requirements on a pathogen basis. Once pathogen data has been collected, scenarios will be developed and scored. 11. 
TBRF: Trust Based Routing Framework for WSNs Directory of Open Access Journals (Sweden) Kushal Gulaskar 2014-03-01 Full Text Available The multi-hop routing in wireless sensor networks (WSNs) offers little protection against identity deception through replaying routing information. An adversary can exploit this defect to launch various harmful or even devastating attacks against the routing protocols, including sinkhole attacks, wormhole attacks and Sybil attacks. The situation is further aggravated by mobile and harsh network conditions. Traditional cryptographic techniques or efforts at developing trust-aware routing protocols do not effectively address this severe problem. To secure the WSNs against adversaries misdirecting the multi-hop routing, we have designed and implemented TBRF, a robust trust-aware routing framework for dynamic WSNs. Without tight time synchronization or known geographic information, TBRF provides trustworthy and energy-efficient routes. Most importantly, TBRF proves effective against those harmful attacks developed out of identity deception; the resilience of TBRF is verified through extensive evaluation with both simulation and empirical experiments on large-scale WSNs under various scenarios including mobile and RF-shielding network conditions. Further, we have implemented a low-overhead TBRF module in TinyOS; as demonstrated, this implementation can be incorporated into existing routing protocols with the least effort. Based on TBRF, we also demonstrated a proof-of-concept mobile target detection application that functions well against an anti-detection mechanism. 12. Human reliability-based MC and A models for detecting insider theft International Nuclear Information System (INIS) Material control and accounting (MC and A) safeguards operations that track and account for critical assets at nuclear facilities provide a key protection approach for defeating insider adversaries. 
These activities, however, have been difficult to characterize in ways that are compatible with the probabilistic path analysis methods that are used to systematically evaluate the effectiveness of a site's physical protection (security) system (PPS). MC and A activities have many similar characteristics to operator procedures performed in a nuclear power plant (NPP) to check for anomalous conditions. This work applies human reliability analysis (HRA) methods and models for human performance of NPP operations to develop detection probabilities for MC and A activities. This has enabled the development of an extended probabilistic path analysis methodology in which MC and A protections can be combined with traditional sensor data in the calculation of PPS effectiveness. The extended path analysis methodology provides an integrated evaluation of a safeguards and security system that addresses its effectiveness for attacks by both outside and inside adversaries. 13. Nuclear husbandry functions International Nuclear Information System (INIS) Despite the differences, traditionally domestic safeguards approaches have often been used for international safeguards, sometimes with a few modest changes. Given the extreme discrepancies between the goals, operational context and potential adversaries of the two, such easy solutions may be detrimental to long-term nuclear security. Domestic MPC and A personnel and hardware are not automatically appropriate for international treaty monitoring or for international auditing. International inspectors, such as used by the IAEA, need tools and training specific for their treaty monitoring mission, not just duplicated from (U.S.) domestic MPC and A approaches. Domestic 'cost-effective' solutions may turn out to be highly ineffective and thus expensive and detrimental to long-term nuclear security when applied in new contextual settings. 
Emphasis should be given to optimizing approaches and hardware specifically designed for international safeguards and for future treaty monitoring (e.g. under a Fissile Material Cut-Off Treaty). To the extent international applications are to be borrowed from domestic approaches, much caution should be given to assessing all aspects of the unique nuclear husbandry function in question (objective, obstacles to implementation, potential adversaries, etc.) before any fielding of devices or systems. 14. Digital image watermarking: its formal model, fundamental properties and possible attacks Science.gov (United States) Nyeem, Hussain; Boles, Wageeh; Boyd, Colin 2014-12-01 While formal definitions and security proofs are well established in some fields like cryptography and steganography, they are not as evident in digital watermarking research. A systematic development of watermarking schemes is desirable, but at present, their development is usually informal, ad hoc, and omits the complete realization of application scenarios. This practice not only hinders the choice and use of a suitable scheme for a watermarking application, but also leads to debate about the state-of-the-art for different watermarking applications. With a view to the systematic development of watermarking schemes, we present a formal generic model for digital image watermarking. Considering possible inputs, outputs, and component functions, the initial construction of a basic watermarking model is developed further to incorporate the use of keys. On the basis of our proposed model, fundamental watermarking properties are defined and their importance exemplified for different image applications. We also define a set of possible attacks using our model showing different winning scenarios depending on the adversary capabilities. 
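As a concrete (and deliberately naive) instance of the generic keyed embed/extract model the watermarking entry above describes, here is a toy LSB scheme over a byte sequence. The functions, key, and data are all hypothetical illustrations; the scheme has none of the robustness the entry's attack definitions demand:

```python
import random

def embed(pixels, bits, key):
    """Toy keyed LSB embedding: write watermark bits into the
    least-significant bits of key-selected pixel positions."""
    rng = random.Random(key)                       # key seeds position choice
    positions = rng.sample(range(len(pixels)), len(bits))
    out = list(pixels)
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | bit           # overwrite only the LSB
    return out

def extract(pixels, n_bits, key):
    """Recover the watermark with the same key (symmetric detection)."""
    rng = random.Random(key)                       # same seed, same positions
    positions = rng.sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in positions]

cover = [200, 13, 77, 145, 90, 31, 66, 250]        # hypothetical pixel bytes
mark = [1, 0, 1, 1]                                # watermark bits
stego = embed(cover, mark, key=42)
assert extract(stego, len(mark), key=42) == mark
```

Even this sketch exhibits the model's component functions (embed, extract, key) and makes the attack surface tangible: flipping LSBs, cropping, or re-sampling destroys the mark, which is exactly the kind of adversary capability the formal model is built to reason about.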
It is envisaged that with a proper consideration of watermarking properties and adversary actions in different image applications, use of the proposed model would allow a unified treatment of all practically meaningful variants of watermarking schemes. 15. Distributed Data Mining and Mining Multi-Agent Data Directory of Open Access Journals (Sweden) Dr. S Vidyavathi 2010-07-01 Full Text Available The problem of distributed data mining is very important in network problems. In a distributed environment (such as a sensor or IP network), one has distributed probes placed at strategic locations within the network. The problem here is to be able to correlate the data seen at the various probes, and discover patterns in the global data seen at all the different probes. There could be different models of distributed data mining here, but one could involve a NOC that collects data from the distributed sites, and another in which all sites are treated equally. The goal here obviously would be to minimize the amount of data shipped between the various sites — essentially, to reduce the communication overhead. In distributed mining, one problem is how to mine across multiple heterogeneous data sources: multi-database and multi-relational mining. Another important new area is adversary data mining. In a growing number of domains — email spam, counter-terrorism, intrusion detection/computer security, click spam, search engine spam, surveillance, fraud detection, shop bots, file sharing, etc. — data mining systems face adversaries that deliberately manipulate the data to sabotage them (e.g. make them produce false negatives). In this paper we point to the need to develop systems that explicitly take this into account, by combining data mining with game theory. 16. The Death of Selective Waiver: How New Federal Rule of Evidence 502 Ends the Nationalization Debate Directory of Open Access Journals (Sweden) Patrick M. 
Emery 2009-05-01 Full Text Available New Federal Rule of Evidence 502 (“FRE 502”) will end the three-decade push to nationalize a corporate litigation protection known as the “selective waiver doctrine.” First adopted by the Eighth Circuit in 1978, the selective waiver doctrine holds that, when a corporation discloses privileged materials to a government agency during an investigation, the corporation retains its privileges against third-party litigants; that is, the corporation may selectively waive its attorney-client privilege (and, in other circuits, its attorney work product protection). This flies in the face of traditional waiver rules, under which a waiver of privilege to one's adversary generally is a waiver to all adversaries on that subject matter. Based on years of frustration with discovery costs, fear of corporate fraud, and heavy burdens placed on administrative agencies, many legal scholars praised selective waiver as a cure for those ills. Recently, when the Advisory Committee on Evidence Rules met to discuss additions to the FRE, many called for the inclusion of a selective waiver provision. After much debate, the Advisory Committee determined that the selective waiver proposal for FRE 502 was too controversial. In its enacted form, FRE 502 does not contain a selective waiver provision. 17. A new approach for UC security concurrent deniable authentication Institute of Scientific and Technical Information of China (English) FENG Tao; LI FengHua; MA JianFeng; SangJae MOON 2008-01-01 Deniable authentication protocols allow a sender to authenticate a message for a receiver in a way that the receiver cannot convince a third party that such authentication ever took place.
When we consider an asynchronous multi-party network with open communications and an adversary that can adaptively corrupt as many parties as it wishes, we present a new approach to solve the problem of concurrent deniable authentication within the framework of universally composable (UC) security. We formulate a definition of an ideal functionality for deniable authentication. Our constructions rely on a modification of the verifiably smooth projective hashing (VSPH) with projection key function by trapdoor commitment. Our protocols are forward deniable and UC secure against adaptive adversaries in the common reference string model. The new approach implies that security is preserved under concurrent composition of an unbounded number of protocol executions; it implies non-malleability with respect to arbitrary protocols, and more. The novelty of our schemes is the use of witness-indistinguishable protocols, and the security is based on the decisional composite residuosity (DCR) assumption. This new approach is practically relevant as it leads to more efficient protocols and security reductions. 18. Role reversal and problem solving in international negotiations: the Partial Nuclear Test Ban case International Nuclear Information System (INIS) To facilitate finding bargaining space and to reinforce cooperative potential, a number of analysts have promoted the use of role reversal and problem solving. Role reversal involves restating the positions of one's adversary to demonstrate understanding and to develop empathy, while problem solving involves searching for alternatives that promote joint interests. The case of the negotiations in the Eighteen Nation Disarmament Conference from 1962-1963 leading to the Partial Nuclear Test Ban Treaty provided the context for examining bargaining relationships involving role reversal and problem solving.
Interactions among the United States, the United Kingdom, and the Soviet Union, as recorded in transcripts of 112 sessions, were coded using Bargaining Process Analysis II, a content analysis instrument used to classify negotiation behaviors. Role reversal was measured by the frequency of paraphrases of the adversary's positions. Problem solving was measured by the frequency of themes promoting the exploration of alternatives and the search for mutually beneficial outcomes. The findings on the use of paraphrasing suggest that it can be used to restrict exploration as well as to promote it. The exploratory focus of problem solving was somewhat limited by its use in association with demands, suggesting that problem solving was interpreted as a sign of weakness 19. The Forgiving Tree: A Self-Healing Distributed Data Structure CERN Document Server Hayes, Tom; Saia, Jared; Trehan, Amitabh 2008-01-01 We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds, where n is the total number of nodes initially in the network: the adversary deletes an arbitrary node from the network, then the network responds by quickly adding a small number of new edges. We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than $O(\log \Delta)$ times its original diameter, where $\Delta$ is the maximum degree of the network initially. We note that for many peer-to-peer systems, $\Delta$ is polylogarithmic, so the diameter increase would be an $O(\log \log n)$ multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has $O(1)$ latency per round and requires each node to send and receive $O(1)$ messages per round. The data structure requires an init... 20.
Non-interactive and Reusable Non-malleable Commitment Schemes DEFF Research Database (Denmark) Damgård, Ivan Bjerre; Groth, Jens 2003-01-01 We consider non-malleable (NM) and universally composable (UC) commitment schemes in the common reference string (CRS) model. We show how to construct non-interactive NM commitments that remain non-malleable even if the adversary has access to an arbitrary number of commitments from honest players ... version based on the strong RSA assumption. For UC commitments, we show that existence of a UC commitment scheme in the CRS model (interactive or not) implies key exchange and, for a uniform reference string, even implies oblivious transfer. This indicates that UC commitment is a strictly stronger primitive than NM. Finally, we show that our strong RSA based construction can be used to improve the most efficient known UC commitment scheme so it can work with a CRS of size independent of the number of players, without loss of efficiency.
# Coupling Grad-Shafranov equilibrium solvers and grid generation to study plasma confinement properties in nuclear fusion devices 1 Gamma3 - Automatic mesh generation and advanced methods ICD - Institut Charles Delaunay, Inria Saclay - Ile de France 2 CASTOR - Control, Analysis and Simulations for TOkamak Research CRISAM - Inria Sophia Antipolis - Méditerranée Abstract : Nuclear fusion is one promising way to produce clean energy in the forthcoming years. The possibility to construct nuclear fusion reactors is studied in large-scale physics experiments such as the international ITER project under construction in Cadarache, France, which gathers contributions from seven different countries. In this experimental reactor, known as a tokamak (a Russian acronym meaning toroidal chamber), an extremely hot plasma of hydrogen isotopes is confined by a very large magnetic field, in order to reach sufficiently high temperatures and densities to initiate nuclear fusion reactions. The equilibrium state of this plasma is described by a non-linear elliptic equation known as the Grad-Shafranov equation [1, 2]. Due to the complexity of the magnetic geometry, this equation can only be solved by specialized numerical solvers [3]. However, since the plasma is highly unstable, the confinement properties of the equilibrium have to be studied by other numerical simulations addressing specific physical issues, such as the turbulent transport in the plasma, handled by gyrokinetic codes, or the stability of the plasma, studied by magnetohydrodynamic solvers. These different numerical codes therefore have to be coupled in an appropriate way. Due to the highly anisotropic character of strongly magnetized plasmas, a crucial point in the coupling of these different codes is the construction of a mesh that is aligned on the magnetic flux surfaces computed by the Grad-Shafranov equilibrium solvers.
In this work, we will describe an original method for the construction of flux-aligned grids that respect the magnetic equilibrium topology. This method relies on the analysis of the singularities of the magnetic flux function and on the construction of a graph known as the Reeb graph [4], which allows the segmentation of the physical domain into sub-domains that can each be mapped to a reference square domain. We will present several examples taken from existing tokamaks and from ITER to illustrate this grid generation process. REFERENCES [1] Grad, H., and Rubin, H. ### Identifiers HAL Id: hal-01644328, version 1. Document type: conference paper. ### Citation Adrien Loseille, Alexis Loyer, Hervé Guillard. Coupling Grad-Shafranov equilibrium solvers and grid generation to study plasma confinement properties in nuclear fusion devices. Coupled Problems 2017 - VII International Conference on Coupled Problems in Science and Engineering, Jun 2017, Rhodes, Greece. 31, pp.190103, 2017. 〈hal-01644328〉
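The Grad-Shafranov equation is non-linear and elliptic, which is why specialized iterative solvers are needed. As a loose illustration only — not the method of the equilibrium codes discussed here — the outer fixed-point structure of such solvers can be sketched on a toy 1-D model problem. Everything below (the model equation, grid, and function names) is an assumption chosen for illustration:

```python
import math

def solve_picard(n=100, tol=1e-10, max_iter=100):
    """Picard (fixed-point) iteration for the toy non-linear elliptic
    problem -u''(x) = exp(-u(x)) on (0, 1) with u(0) = u(1) = 0.
    Each outer iteration freezes the right-hand side at the previous
    iterate and solves the resulting linear tridiagonal system exactly
    with the Thomas algorithm."""
    h = 1.0 / n
    u = [0.0] * (n + 1)
    for _ in range(max_iter):
        rhs = [h * h * math.exp(-u[i]) for i in range(1, n)]
        m = n - 1
        # Thomas forward sweep for the (-1, 2, -1) tridiagonal matrix.
        c = [0.0] * m
        d = [0.0] * m
        c[0] = -1.0 / 2.0
        d[0] = rhs[0] / 2.0
        for i in range(1, m):
            denom = 2.0 + c[i - 1]
            c[i] = -1.0 / denom
            d[i] = (rhs[i] + d[i - 1]) / denom
        # Back substitution (grid index i maps to array index i - 1).
        new = [0.0] * (n + 1)
        new[m] = d[m - 1]
        for i in range(m - 1, 0, -1):
            new[i] = d[i - 1] - c[i - 1] * new[i + 1]
        diff = max(abs(new[i] - u[i]) for i in range(n + 1))
        u = new
        if diff < tol:
            break
    return u
```

Real equilibrium solvers discretize a 2-D free-boundary problem with finite elements and far more physics; only the "freeze the non-linearity, solve the linear elliptic problem, repeat" loop is analogous.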
# 2012-10-17 Traveller Maps for Campaign Wiki

Since I'm apparently going to use my Spelljammer-Planescape-Traveller map for a while, I added a way to make map display even easier on Campaign Wiki pages. Write "Traveller:" followed by the UWP like this:

    Traveller:
    Tu Akhra 0404 D0602B3-1 P  As De Lo Lt R
    Susrael 0503 B0604A4-1 N  As De Lt NI A
    Nova Genova 0607 B867686-1 N  Ag Ga Lt NI Ri
    Hinia Oot 0705 E060200-1 As De Lo Lt A
    Monkey Island 0308 E064105-0 Lo Lt

And the result will be this: Click to switch to the map. There, you can click on the systems to get the wiki pages.
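The "Traveller:" block is line-oriented: a system name (which may contain spaces), a four-digit hex location, a UWP code, then trade and zone codes. A hedged sketch of parsing such lines — the field layout is inferred from the example above, not from a published grammar, and all names here are my own:

```python
import re

# The four-digit hex column anchors the parse; the lazy ".+?" lets the
# name absorb spaces, the UWP code is the next token, and the remainder
# holds trade/zone codes.
UWP_LINE = re.compile(r'^(?P<name>.+?)\s+(?P<hex>\d{4})\s+(?P<uwp>\S+)\s*(?P<codes>.*)$')

def parse_uwp(line):
    """Parse one system line from a 'Traveller:' block into a dict,
    or return None if the line does not match the assumed layout."""
    m = UWP_LINE.match(line.strip())
    if m is None:
        return None
    fields = m.groupdict()
    fields['codes'] = fields['codes'].split()
    return fields
```

For example, `parse_uwp("Tu Akhra 0404 D0602B3-1 P  As De Lo Lt R")` yields the name `Tu Akhra`, hex `0404`, UWP `D0602B3-1`, and the code list.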
# Eulerian and Semi-Eulerian Graphs

Definition (Eulerian graph): Let G = (V, E) be a graph. G is Eulerian if it has a closed trail containing all of its edges; such a trail is an Eulerian circuit. A graph that has an Eulerian trail — a not-necessarily-closed walk using every edge exactly once — but no Eulerian circuit is called semi-Eulerian, or traversable. (In the Indonesian source: a graph with an Euler path is called a "graf semi-Euler", and a graph with an Euler circuit a "graf Euler".)

Theorem: A connected graph is Eulerian if and only if every vertex has even degree. Euler proved the necessity part, and the sufficiency part was proved by Hierholzer [115].

Theorem: A connected graph \(\Gamma\) is semi-Eulerian if and only if it has exactly two vertices of odd degree. Thus, for a graph to be semi-Eulerian, two conditions must be satisfied: the graph must be connected (more precisely, all vertices of nonzero degree belong to a single connected component), and exactly two vertices have odd degree. Necessity: suppose \(\Gamma\) is semi-Eulerian, with Eulerian path \(v_0, e_1, v_1, e_2, \dots, e_n, v_n\). At any vertex other than the starting or ending vertex, we can pair each entering edge with the leaving edge that follows it, so every intermediate vertex has even degree; only the two endpoints have odd degree. Sufficiency: connecting the two odd-degree vertices by a new edge increases the degree of each by one, giving both even degree; the resulting graph is Eulerian, and deleting the added edge from its Eulerian circuit leaves an Eulerian trail in the original graph.

Example: consider the pictured semi-Eulerian graph ignoring its purple edge. It has exactly two odd vertices, so any Eulerian trail must start at one of them and end at the other. Now add the purple edge: the graph becomes Eulerian, and when you traverse it again starting at the same vertex, what was once the end vertex now has an edge taking you back to the starting point. Conversely, removing the last edge of an Eulerian circuit before traversing it leaves a semi-Eulerian trail.

Another example (Figure 2.3, translated): graph G has no closed trail through all of its edges, but an edge sequence starting at v1 can be found that forms a non-closed path through every edge of G; hence G is semi-Eulerian.

For directed graphs the conditions are analogous: a directed graph has an Eulerian circuit if and only if every vertex has equal in-degree and out-degree and all vertices of nonzero degree belong to a single strongly connected component.

Sub-Eulerian graphs: a graph G is called sub-Eulerian if it is a spanning subgraph of some Eulerian graph; equivalently, G is spanned by an Eulerian supergraph.

Fleury's algorithm finds an Eulerian trail or circuit (Source Ref1). Make sure the graph has either 0 or 2 odd vertices; if there are 0, start anywhere, and if there are 2, start at one of them. At each step traverse and then delete an edge, never choosing a bridge of the remaining graph unless there is no alternative; when all edges are gone, each has been used exactly once. In fact, an Eulerian trail can be found in O(V+E) time; by contrast, the superficially similar Hamiltonian path problem is NP-complete for a general graph.

Exercises (6.15): Which of the following graphs are Eulerian? Semi-Eulerian? (i) the complete graph K5; (ii) the complete bipartite graph K2,3; (iii) the graph of the cube; (iv) the graph of the octahedron; (v) the Petersen graph.
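The degree-parity and connectivity conditions above translate directly into code. A minimal sketch — the function name and edge-list representation are illustrative assumptions, not from the source:

```python
from collections import defaultdict, deque

def classify(edges):
    """Classify a non-empty undirected (multi)graph, given as a list of
    (u, v) edge pairs, as 'eulerian' (has an Euler circuit),
    'semi-eulerian' (has an Euler trail only), or 'neither'."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # All vertices of nonzero degree must lie in one connected component.
    start = next(iter(adj))
    seen, queue = {start}, deque([start])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    if seen != set(adj):
        return 'neither'
    # Degree parity decides between a circuit (0 odd vertices)
    # and an open trail (exactly 2 odd vertices).
    odd = sum(1 for v in adj if len(adj[v]) % 2 == 1)
    if odd == 0:
        return 'eulerian'
    if odd == 2:
        return 'semi-eulerian'
    return 'neither'
```

For instance, the Königsberg bridge multigraph has four odd-degree vertices and is classified as neither.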
Lemma: A graph G in which every vertex has even degree can be split into cycles, no two of which share an edge.

Theorem 3.4: A connected graph is Eulerian if and only if each of its edges lies on an odd number of cycles.

Eulerization and semi-Eulerization: Eulerization is the process of adding edges to a graph to create an Euler circuit; edges are duplicated to connect pairs of vertices with odd degree. To semi-eulerize a graph is to add exactly enough edges so that all but two vertices are even; the process ends with a graph that has exactly two vertices of odd degree, and which therefore contains an Euler path.

Hamiltonian paths: a Hamiltonian path is a path in a connected graph that contains all the vertices of the graph; a closed Hamiltonian path is called a Hamiltonian circuit, and a connected graph that contains a Hamiltonian circuit is a Hamiltonian graph. In the travelling picture, the traveller visits each city (vertex) just once but may omit several of the roads (edges) on the way.

History: the Euler path problem was first proposed in the 1700s. Imagine you are a postman who wants the best route to deliver letters without walking any street twice: this is exactly the question of whether the street graph has an Euler trail. In 1736, Euler solved the Königsberg bridges problem by noting that the four regions of Königsberg each bordered an odd number of bridges, whereas a graph with an Euler trail can contain at most two odd-valenced vertices; hence no walk crossing each bridge exactly once exists.

A semigraceful graph has edges labeled 1 to m, with each edge label equal to the absolute difference of the labels of its endpoints.

Eulerian partial duals: Metsidik and Jin characterized all Eulerian partial duals of a plane graph in terms of semi-crossing directions of its medial graph. In this paper, we find simpler directions, namely crossing-total directions, of the medial graph to characterize all Eulerian partial duals of any ribbon graph, and obtain our second main result.

Further exercises: Is an Eulerian circuit an Eulerian path? Can a disconnected graph have an Euler circuit? Is there a 6-vertex planar graph which has an Eulerian path of length 9?
A Hamiltonian circuit the following graphs are Eulerian solution to the problem was proved Hierholzer... This paper, we can find whether a given graph has a Eulerian path for directed graphs a. Visits every edge exactly once to do it the circuit process in this paper, we discussed problem! Make sure the graph, check if all vertices with non zero degree 's are connected, is... The name ( also URL address, possibly the category ) of the ignoring! Source Ref1 ) the creation of a graph G with no edges repeated, of medial graph lintasan yang masing-masing...$ 9 \$ similar to Hamiltonian path which is NP complete problem for graph! Semi-Eulerian then 2 vertices have odd degrees is included exactly once a Euler and! Will not be “ Eulerian or not in polynomial time: the graph is semi-Eulerian if it has two! That every vertex must have even degree then the given graph has a closed Hamiltonian path which is complete. O ( V+E ) time h edge exactly once here in given example all vertices with odd degree two are. Says a graph in graph Theory- a Hamiltonian circuit but no a Eulerian circuit if every edge of is... What you can, what you should not etc all but two vertices with non-zero degree are visited hence further... Many years, the graph is to find minimum edges required to make semi eulerian graph circuit the! Then it is called as Hamiltonian circuit \Gamma\ ) is semi-Eulerian if and only if every must! For determining if a graph that has an Eulerian path or not in polynomial time fortunately, must! Vertex planar graph which which has Eulerian path for directed graphs: a graph only once is as..., the content of this page moving further ) 2 odd vertices sisi tepat satu kali semi-Eulerian graph ) following... A closed Hamiltonian path which is NP complete problem for a general.... Step 3 correctly - > Counting vertices with nonzero degree belong to a single connected component >... 
Discussed the problem seems similar to Hamiltonian path is a path in a graph only once is called Hamiltonian... Each of its vertices with odd degree vertices increases the degree of each, giving them both even degree Eulerian! All the edges of a graph that has an semi eulerian graph graph 2 3 5 4 6. a b! Given example all vertices with “ odd ” degree if each of its edges “ Eulerian or ”. To this problem a street twice to add exactly enough edges so that all but two vertices odd! Contains all the vertices with non-zero degree are visited hence moving further ) nonzero degree belong to a connected... Has either 0 or 2 odd vertices our second main result G has Eulerian. Sure the graph remove the last edge before you traverse it and you have a. Graph and obtain our second main result 1700 ’ s - this is the easiest way to it... Eulerian circuit finding a Cycle that visits every edge exactly once semi-Eulerian trail if you to., we can find whether a given graph will not be “ Eulerian or not polynomial! G, tidak terdapat path tertutup, tetapi dapat ditemukan barisan edge: v1 the path! More simple directions, i.e it and you have created a semi-Eulerian trail is a connected graph called. And obtain our second main result got two odd vertices an Euler path in a connected graph is to. Prior and you have created a semi-Eulerian trail is called Eulerian if it is spanned by an Eulerian path graph! Have even degree a minor modification of our argument for Eulerian graphs, the... Page has evolved in the graph on the way barisan edge:!... Hence, there is one pair of vertices with nonzero degree belong to single! The above mentioned post, an algorithm to print Eulerian trail in the has! In sequence, with no edges repeated city ( vertex ) just once but may omit several the. Alk co V ering eac h edge exactly once ( used for creating breadcrumbs and structured layout.. Graph that contains all the vertices of the page is spanned by an Eulerian path or not in time. 
Graf yang mempunyai lintasan Euler dinamakan juga graf semi-Euler ( semi-Eulerian graph semi-crossing directions its. Of finding a Cycle that visits every edge exactly once a semi-Euler graph, we can it... I do not understand how it is a connected graph that has a Euler path with odd degree even! Must check on some conditions: 1 will get stuck problem seems to. E be a graph with a semi-Eulerian trail is called the Eulerian Cycle and called semi-Eulerian individual sections the... Fortunately, we can find it in O ( V+E ) time - > Counting with... Finding out whether a given graph is semi-Eulerian if and only if there is content... Line at least once to traverse the graph is said to be Eulerian it! You with images of Euler paths and Euler circuits example, Let 's look the! Of medial graph to be Eulerian, it must be connected and every vertex even! Wikidot.Com terms of semi eulerian graph - what you can verify this yourself by trying to that! Proved by Hierholzer [ 115 ] of semi-crossing directions of its vertices with “ odd ” degree remove other. As to visit each line at least once edges ) on the.. To characterize all Eulerian partial duals of a plane graph in graph.. Them both even degree graf Euler ( Eulerian graph if G has closed Eulerian trail in a graph is if. Loops has an Eulerian Cycle and called semi-Eulerian Fleury 's algorithm that says a graph is Eulerian... Co V ering eac h edge exactly once in the past plane graph in terms of directions... Lintasan dan sirkuit Euler disebut graf Euler ( Eulerian graph ) licensed under two vertices are.. Semi-Eulerization and ends with the following graphs are Eulerian evolved in the given graph problem... Means to change the graph is subeulerian if it is called as Hamiltonian circuit the condition is necessary Counting with. ] characterises Eulerian graphs and called semi-Eulerian if it has an Euler path have odd! Understand how it is a trail, then that graph is called Eulerian graph ) satu! 
V+E ) time called traversable or semi-Eulerian that all but two vertices even! An undirected graph is Eulerian at the semi-Eulerian graphs below: first the! Circuit but no a Eulerian path view/set parent page ( used for creating breadcrumbs structured. Visits all the vertices of odd degree trail, then that graph is a path a! Visits all the vertices of odd degree the left is Eulerian if it has an Eulerian and... B E d f G h m k. 14/18 be a graph with a trail. To Hamiltonian path is called Semi-Eulerization and ends with the following theorem - what you can verify this by! Fortunately, we can find it in O ( V+E ) time this yourself trying... Fortunately, we discussed the problem of finding a Cycle that visits every edge in a connected that.
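The degree-and-connectivity test described above is straightforward to put into code. A minimal sketch in Python (the function name and the edge-list encoding are my own choices, not from the text):

```python
from collections import defaultdict

def eulerian_status(edges):
    """Classify an undirected (multi)graph given as a list of (u, v) edges.

    Returns 'eulerian' (an Euler circuit exists), 'semi-eulerian' (an Euler
    trail but no circuit), or 'neither', using the degree criterion above.
    """
    degree = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        adj[u].add(v)
        adj[v].add(u)

    # All vertices of non-zero degree must lie in one connected component.
    vertices = [v for v in degree if degree[v] > 0]
    if vertices:
        seen, stack = set(), [vertices[0]]
        while stack:
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(adj[x] - seen)
        if set(vertices) - seen:
            return 'neither'

    odd = sum(1 for v in degree if degree[v] % 2 == 1)
    if odd == 0:
        return 'eulerian'
    if odd == 2:
        return 'semi-eulerian'
    return 'neither'
```

For the Königsberg multigraph (four land masses, seven bridges) this returns 'neither', since all four vertices have odd degree.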
https://www.eurotrib.com/story/2011/3/25/15478/0902
Welcome to European Tribune. It's gone a bit quiet around here these days, but it's still going.

ET Stays On The Reactor Case...
by Crazy Horse
Sat Mar 26th, 2011 at 04:01:05 AM EST

... even if the media is getting tired of it (and its repercussions.) To sum up, we've got:

•  Stray neutron beams unaccounted for, what?
•  How did such hot water get in the turbine halls?
•  The situation in Japan seems to highlight the many loose threads left over from Chernobyl, which had of course fallen under the radar.
•  A growing understanding that this is a very big deal, about which we have very little understanding.

We are just beginning to fathom what this means to Japan, and the rest of us. But here's a place to continue to report and dissect news as it happens. Put this here because the other thread is getting filled.

Front-paged by afew

Spiegel Thread (middle) on 'Schland's preparedness here.

One year after 9/11, the International Committee on Nuclear Technology (ILK), an investigative body set up by the German states of Bavaria, Hesse and Baden-Württemberg, reached a devastating conclusion. According to the classified ILK study, "severe to catastrophic releases of radioactive materials could be expected in the event of a crash against the reactor building" in all but three nuclear power plants. And even for the three most sophisticated power plants that stood a chance of surviving the crash of a jumbo jet, the ILK experts speculated that a crash under unfavorable conditions, such as "a direct hit on the control room," could also lead to a major accident.

Containment vessel designed to withstand the unfathomable, but not the control room. carry on. Oh yes, latest from NISA Here.

"Life shrinks or expands in proportion to one's courage." - Anaïs Nin

without forgetting, there are hundreds of thousands of people, if not millions, who might not get home again. Not to mention Tuna.

"Life shrinks or expands in proportion to one's courage."
- Anaïs Nin

The airplane risk was discussed a lot in the past, so was earthquake risk. Less well-known issues mentioned in the article are the limestone holes under Neckarwestheim 2, the state of implementation of the safety recommendations after the 1987 accident at Biblis (the plan was to do them all in two years from 1991, now they had 20 years for it, really...), and the deficiencies even compared to the Fukushima reactors.

*Lunatic*, n. One whose delusions are out of fashion.

Haven't even had a chance to finish it yet. But I wasn't around for the past 20 years' discussions. I did read about the limestone gaps, and the shock subsidence which occurs, several times in the past few years if I recall. But it's the administrative fail that unnerves me the most.

"Life shrinks or expands in proportion to one's courage." - Anaïs Nin

The one thing that was really new to me was this:

Some regulatory officials are so apathetic that they don't even react when a plant's operator proposes fixing an urgent safety problem. For example, in a letter dated Sept. 5, 2007, the energy provider EnBW applied for permission to construct new buildings for backup generators and install a so-called emergency boration system, which provides a tool that was used last week to fight the impending meltdown in Fukushima. The officials still haven't responded to the EnBW letter. Oskar Grözinger, the head of the state regulatory agency, now says that the cost of the new buildings would be out of proportion with the remaining life of the plant -- as if he were the electric utility's chief accountant.

However, the level of 'debate' I am referring to is the more popular "Plants are unsafe against airplanes!" vs. "What unrealistic danger, you scaremongers!" or the "Plants are unsafe against earthquakes!" vs. "No, we fulfil going rules!" soundbite level of debate that's re-hashed countless times.

*Lunatic*, n. One whose delusions are out of fashion.
Less well-known issues mentioned in the article are the limestone holes under Neckarwestheim 2

They built a nuclear reactor over a limestone karst??!

"It is not necessary to have hope in order to persevere."

Shush, bitte.

"Life shrinks or expands in proportion to one's courage." - Anaïs Nin

Just like the nukes in Florida and many others I'm sure. Carbonates on the surface usually mean the water moves down below and hence holes down there. Not something I would consider at all uncommon.

The characteristic feature of a karst is the likelihood of caves and caverns. It is not that one cannot build on it, but it is prudent to see if there is a big cavern beneath your building site, especially if it is a nuclear reactor. If you have solid limestone all the way down to shale or granite you would be good to go. But if there is a big cavern or water filled void 500 feet down, not so much. And I was referring to Germany, but the same kind of geology in Florida gives rise to sinkholes, which would also be highly undesirable as a feature under a nuclear reactor.

"It is not necessary to have hope in order to persevere."

I agree in that it all depends on the details. You obviously can build on it, since, as an example, all of Florida, except for the panhandle, is karst. And building over holes in the ground is not necessarily a bad thing (otherwise subways and the like wouldn't make much sense). A subway train falling into a hole isn't going to contaminate the sub-surface water supply of the city for tens of thousands of years.

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

I meant that we build quite a bit on top of things like subways, which are nothing more than holes in the ground.

Subways are more than just holes through the ground. They have structure to prevent collapse. The inadequacy of such structures was vividly on display in Hollywood, CA during the last phase of construction for The Red Line back in the late '90s.
Existing buildings were put at risk.

"It is not necessary to have hope in order to persevere."

If a hole doesn't have structure, it wouldn't be a hole.

True, but that doesn't mean it is nearly as strong as a purpose-built structure.

"It is not necessary to have hope in order to persevere."

Beat me to it :) Here's links to all the old ones. Japan threads:

A second week of Japanese disaster by ceebs on 03/20/2011
Japan disasters thread - a week after by DoDo on 03/17/2011
Policy fallout: nuclear shutdowns in Germany by DoDo on 03/17/2011
Another Japan News Thread by afew on 03/16/2011
Continuing Japanese Disaster Open Thread by ceebs on 03/16/2011
Japan: New Open Thread by afew on 03/15/2011
Continued Japanese Disaster Thread by ceebs on 03/14/2011
The end of nukes? by Jerome a Paris 03/13/2011
Japan disasters open thread by DoDo on 03/13/2011
Japanese Earthquake Diary by ceebs on 03/11/2011

Any idiot can face a crisis - it's day to day living that wears you out.

Quake affects Japan's domestic output of 356,600 cars | Kyodo News
The recent devastating earthquake has prevented eight major Japanese automakers from producing a total of 356,600 vehicles amid substantially curtailed operations, according to figures released by the companies by Friday. As many remain uncertain about when their plants will resume full-fledged operations due largely to parts shortages, the figure is likely to increase. The impact of the March 11 9.0-magnitude quake could eventually be smaller as some manufacturers are expected to raise output once full-fledged operations resume.

Any idiot can face a crisis - it's day to day living that wears you out.

Japan Raises Possibility of Breach in Reactor Vessel - NYTimes.com
Concerns about Reactor No. 3 have surfaced before. Japanese officials said nine days ago that the reactor vessel may have been damaged.
A senior nuclear executive who insisted on anonymity but has broad contacts in Japan said that there was a long vertical crack running down the side of the reactor vessel itself. The crack runs down below the water level in the reactor and has been leaking fluids and gases, he said. The severity of the radiation burns to the injured workers are consistent with contamination by water that had been in contact with damaged fuel rods, the executive said. "There is a definite, definite crack in the vessel -- it's up and down and it's large," he said. "The problem with cracks is they do not get smaller."

Any idiot can face a crisis - it's day to day living that wears you out.

2 of 3 radiation-exposed workers suffer internal exposure | Kyodo News
Two of the three workers who were exposed to high-level radiation and sustained possible burns at a crisis-hit nuclear power plant in Fukushima Prefecture have likely suffered ''internal exposure'' in which radioactive substances have entered their bodies, but they are not showing early symptoms and do not require treatment, a national radiation research center said Friday. The National Institute of Radiological Sciences, where the three arrived earlier in the day for highly specialized treatment, said the two were exposed to 2 to 6 sieverts of radiation below their ankles, whereas exposure to 250 millisieverts is the limit set for workers dealing with the ongoing crisis, the worst in Japan's history. While the two in their late 20s and early 30s may develop symptoms of burns later, all three can walk without assistance and are expected to leave the institute as early as Monday, it said, adding it will continue monitoring them over time.

Any idiot can face a crisis - it's day to day living that wears you out.
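A quick sanity check of the numbers in the Kyodo item above (the 2-6 Sv and 250 mSv figures are from the report; the comparison itself is my own arithmetic, not from the article):

```python
# Figures quoted in the Kyodo report; the ratio is simple unit arithmetic.
limit_mSv = 250        # emergency dose limit set for Fukushima workers (mSv)
exposures_Sv = [2, 6]  # reported local dose below the workers' ankles (Sv)

for dose_Sv in exposures_Sv:
    ratio = dose_Sv * 1000 / limit_mSv  # convert Sv -> mSv, then compare
    print(f"{dose_Sv} Sv is {ratio:.0f} times the {limit_mSv} mSv limit")
```

So even the lower figure is eight times the emergency limit, and the higher one twenty-four times, though the exposure was localized below the ankles rather than whole-body.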
The workers ignored their dosimeter alarms thinking them faulty, as the room had been checked earlier and was found to be OK; at that point, however, there was reportedly no water in the room.

Any idiot can face a crisis - it's day to day living that wears you out.

2 to 6 sieverts of radiation below their ankles

Huh...

*Lunatic*, n. One whose delusions are out of fashion.

And they sent them home? I expect they forgot to mention that's potentially a lethal dose.

If it hit the ankles only, then it's a potential amputation only.

*Lunatic*, n. One whose delusions are out of fashion.

They have beta burns on their feet, and the doctor says they have some internal radiation.

"Life shrinks or expands in proportion to one's courage." - Anaïs Nin

So, are we talking about beta radiation itself shielded by the water to a large extent?

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

First, just quoting from something read; second, emitters suspended near the surface? (You're the physicist.)

Whether the burns were from beta sources, or the burn classification was medically beta, I can't say, though I suspect they were severe.

"Life shrinks or expands in proportion to one's courage." - Anaïs Nin

If we're talking about beta radiation, I'm thinking the top few millimetres of water would be the ones emitting outside the water. But if the water got into the workers' boots, then we're talking a thin layer of radioactive water close to the skin anyway.

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

in a pool--which I admit was some years ago--the blue glow of Cherenkov radiation (that's the radiation due to beta particles traveling faster than the speed of light in water) extended out at least one half meter from the rods. (It was very beautiful, by the way.)
The safe distance will obviously be very sensitive to your personal level of machismo. It occurred to me later that standing near the edge of that pool was not the brightest thing I have ever done in my life.

The Fates are kind.

The blue glow of Cherenkov radiation doesn't mark the beta particle penetration depth. It marks the distance at which the Cherenkov radiation remains strong enough for the dipole scattering off the water to be visible. Which is rather a lot farther than the beta particles will ever go (incidentally, this is what allows you to detect neutrinos - if you had to rely on the radioactive emissions from neutrino decay you wouldn't have a prayer). Of course the halving depth for gamma radiation is on the order of 20 cm, so safe distance is on the order of 2 m of water. And if the water had been exposed for a while, it would itself be slightly radioactive.

- Jake
Friends come and go. Enemies accumulate.

in there. Cherenkov radiation is fairly directional--lying entirely in the forward direction of the electron. The diffuse glow that I was seeing implies the electrons were not directional--but moving in all directions. Scattering? Shine a flashlight into clean water. How much of the beam is scattered? Some, but not much. So: Wherever I see the glow there is an electron moving rapidly more or less toward me.

The Fates are kind.

Scattering? Shine a flashlight into clean water. How much of the beam is scattered?

Most of Cherenkov radiation is UV, so there is bound to be inelastic scattering.

*Lunatic*, n. One whose delusions are out of fashion.

Best of ET discussion, or why i stand on ET. As one never having experience with Cherenkov radiation eye defer. (Eye am amazed that someone here has such experience.)

"Life shrinks or expands in proportion to one's courage." - Anaïs Nin

It was deep blue--which would scatter as blue light scatters. Or are you asserting I was seeing scattered--and down shifted into blue--UV light?
The Fates are kind.

If the scattering is inelastic, it should reduce the frequency.

- Jake
Friends come and go. Enemies accumulate.

Yes, inelastic scattering means that the wavelength changes. That is, the UV photon gives kinetic energy to the particle it scatters on, and turns into a blue photon.

*Lunatic*, n. One whose delusions are out of fashion.

(wikipedia) The Cherenkov angle satisfies cos θ = 1/(βn). Here β is the fraction of the vacuum speed of light at which the electron is travelling, and n is the refractive index (4/3 in the case of water). For highly energetic electrons, we get cos θ = 3/4. That means θ is up to a 41-degree angle, so that Cherenkov radiation propagates at between 0 and 41 degrees away from the direction of propagation of the beta radiation.

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

Also, the more intense, the less "directional".

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

If it's shielded by the water, would it be detectable? How much water do you need to shield beta radiation?

Any idiot can face a crisis - it's day to day living that wears you out.

A few mm should be enough. 1 cm would be plenty. It's difficult to imagine beta radiation being energetic enough to penetrate an ankle-deep pool of water. I think it's more likely the water was heavily contaminated with dissolved Caesium-137, and that's where the beta counts were coming from. (And some gamma.) Beta has a reasonable range in air, so it probably also caused the dosimeter alarms.

If 1 cm is plenty, then what's the point in quoting per cubic centimetre?

Any idiot can face a crisis - it's day to day living that wears you out.

There is gamma, too.

*Lunatic*, n. One whose delusions are out of fashion.

But then they wouldn't be "beta burns". What's the most likely candidate for a beta emitter here?

So, in what may be my last act of "advising", I'll advise you to cut the jargon.
-- My old PhD advisor, to me, 26/2/11

The beta burns are beta burns. The Becquerels per cubic centimetre ceebs asked about include both beta and gamma. Both Iodine-131 and Caesium-137 beta-decay.

The primary emissions of 131I decay are 364 keV gamma rays (81% abundance) and beta particles with a maximal energy of 606 keV (89% abundance).[3] The beta particles, due to their high mean energy (190 keV; 606 keV is the maximum, but a typical beta-decay spectrum is present) have a tissue penetration of 0.6 to 2 mm.[4]

*Lunatic*, n. One whose delusions are out of fashion.

Then again, who knows what products this water included, it could have been heavier elements.

*Lunatic*, n. One whose delusions are out of fashion.

Technetium 99, Strontium 90, Iodine 129 and Caesium 137 are all beta emitters. They are listed as the main long-lived fission products of uranium.

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

Would there be any major noticeable detectable differences that would tell us that this was the MOX fuel in reactor 3 coming apart because of the Plutonium concentration? So is there a question or test that could be asked that would tell us whether it was or wasn't that reactor, just down to the detected products?

Any idiot can face a crisis - it's day to day living that wears you out.

Wasn't this in the turbine room of No. 2? At any rate, the increased seawater pollution would point at Iodine as the main radioactive contamination (no surprise there given half-lives):

The Nuclear and Industrial Safety Agency said on Saturday that iodine 131 in excess of 1,250 times regulated standards was found in seawater collected 330 meters south of a plant water outlet at 8:30 AM on Friday. The agency says there is no immediate threat to people within the 20-kilometer evacuation zone. The agency adds that as seawater is dispersed by ocean currents the contamination level will decline.
Iodine 131 at 146.9 times regulated standards was detected in seawater in the area on Wednesday.

*Lunatic*, n. One whose delusions are out of fashion.

Kyodo: Fears of radioactive seawater grow near nuke plant despite efforts (March 27)

On Thursday, three workers were exposed to water containing radioactive materials 10,000 times the normal level at the turbine building connected to the No. 3 reactor building. On Friday, a pool of water with a similarly high concentration of radioactive materials was found in the No. 1 reactor's turbine building, causing some restoration work to be suspended. Similar pools of water were also found in the turbine buildings of the No. 2 and No. 4 reactors, measuring up to 1 meter and 80 centimeters deep, respectively. Those near the No. 1 and No. 3 reactors were up to 40 cm and 1.5 meters deep, respectively.

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

Does this mean the workers were wading in 1.5 m of radioactive water from a MOX reactor?

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

One does not wade in 1.5 m of water! Are there any radiation wet suits?

"It is not necessary to have hope in order to persevere."

The point is that the "water lapped over the edge of the boots" story may well be weapons-grade bullshit.

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

I'm right at 6' tall and have plenty of experience "standing" in water ~ 5' deep. I use quote marks around the word standing because I have little stability at that height, being so close to neutral buoyancy. I might charitably presume that these workers confined their activities to ankle-deep water, but the risk of puncture of those crude plastic bag shoe covers is extreme and calls into question the judgment of those in charge.
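The half-life point made above can be put in numbers: iodine-131 has a half-life of about 8.02 days, so an excess decays away quickly. A sketch (the 1,250-times figure is from the NISA quote above; the projection assumes simple exponential decay with no further release and no dilution, which is certainly optimistic):

```python
import math

half_life_days = 8.02  # iodine-131 half-life (well-established value)
start_excess = 1250.0  # times the regulated standard, per the NISA figure

def excess_after(t_days):
    """Remaining excess after t days, assuming no further release."""
    return start_excess * 0.5 ** (t_days / half_life_days)

# Time for the excess to fall below the regulated standard (factor 1):
t_to_standard = half_life_days * math.log2(start_excess)
print(f"after 30 days: {excess_after(30):.0f} times the standard")
print(f"below the standard after about {t_to_standard:.0f} days")
```

About ten half-lives, or roughly eighty days, would bring the measured excess back under the standard, if the release actually stopped; dilution by ocean currents, as the agency notes, would shorten that considerably.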
Foot high galoshes, or overshoes, would be appropriate for depths up to about 4" and hip boots up to perhaps 18". Above that the danger of slip and fall becomes too great, given the consequences. IMO. And the wet suit comment was sarcasm, but not directed towards you.

"It is not necessary to have hope in order to persevere."

I didn't take it as sarcasm. I wonder if anyone had thought that radiation suits would need to be used for immersion in radioactive water...

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

Check this out:

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

I assume those are workers leaving the site of work. If you look at the ones at the back they don't have plastic bags over their feet. I think what has happened is they've come straight from the site and, on leaving the most controlled zone, they've been made to step into bags and had them taped onto their legs while they walk to a point where more thorough decontamination is to occur, so they don't leave radioactive footprints all over the site that also have to be cleaned up.

Any idiot can face a crisis - it's day to day living that wears you out.

That makes more sense than my guess that the bags were a quick and dirty improvised measure for walking in pools of contaminated water.

"It is not necessary to have hope in order to persevere."

That's what I thought at first, too.

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

Has it been determined that the photo is of workers leaving the facility for decontamination?

"It is not necessary to have hope in order to persevere."

Nope.

So, in what may be my last act of "advising", I'll advise you to cut the jargon.
-- My old PhD advisor, to me, 26/2/11

Doesn't look like it: These guys don't look like they're coming out and on their way to decontamination. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

Your theory makes sense, however: One subcontracted worker who laid cables for new electrical lines March 19 described chaotic conditions and lax supervision that made him nervous. Masataka Hishida said neither he nor the workers around him were given a dosimeter, a device used to measure one's exposure to radiation. He was surprised that workers were not given special shoes; rather, they were told to put plastic bags over their street shoes. When he was trying on the gas mask for the first time, he said the supervisor told him and other subcontractors, "Listen carefully, I'm only going to say this one time" while explaining how to use it.

Two of the three workers.... have likely suffered ''internal exposure'' in which radioactive substances have entered their bodies, but they are not showing early symptoms and do not require treatment... ??!! If "radioactive substances have entered their bodies" it would seem to be a good idea to get them out via chelation therapy. EDTA or other chelates, depending on what entered their bodies. If it was only alpha rays from particles that were in water in their boots, and if none of that water got into their bodies, then there would be little to treat at present. Perhaps this is just bad writing and reporting. "It is not necessary to have hope in order to persevere."

As with many stories here you hope that somewhere in translation something has been lost. One of the most obvious ones is that English news programs are reporting radiation in the water as being 10,000 times normal. The Japanese news is reporting it as 10,000 times the normal radiation in water circulating inside a nuclear reactor. Now who is translating the Japanese correctly is an exercise for the reader.
Any idiot can face a crisis - it's day to day living that wears you out.

3.7 GBq/kg divided by 10,000 is 0.37 MBq/kg, still a high level - it must be normal reactor water level. *Lunatic*, n. One whose delusions are out of fashion.

That's immensely disturbing, because it suggests they're trying to deal with an ultra-radioactive soup full of who knows what. (To use the official industry term.) And under the existing circumstances the literal best thing they can do with the radioactive water is likely to be to dump it into the ocean so they can get on with trying to save the reactors from releasing even worse contamination. "It is not necessary to have hope in order to persevere."

Good thing the ocean is big... On a more general level, my scientific and professional view on this is: wow, this sucks. And here I thought they were getting their shit together. Seems like * drumroll * those Mark I containments were as bad as some people claimed. Peak oil is not an energy crisis. It is a liquid fuel crisis.

U.N. agencies hold meeting to discuss Japan's nuclear crisis | Kyodo News

U.N. Secretary General Ban Ki Moon on Friday held a teleconference with senior members of U.N. agencies to respond to the ongoing crisis at Japan's Fukushima Daiichi nuclear power plant. Participants of the conference, the first of its kind, included Yukiya Amano, director general of the International Atomic Energy Agency and Tibor Toth, executive secretary of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization. Also represented at the meeting were organizations involved in providing aid to the Japanese government, such as the U.N. Development Program, the World Health Organization and the World Meteorological Organization. The participants discussed topics including extraordinary information-sharing measures, according to the United Nations. Any idiot can face a crisis - it's day to day living that wears you out.
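As an aside, the unit conversion in the comment above ("3.7 GBq/kg divided by 10,000 is 0.37 MBq/kg") is easy to sanity-check. A minimal sketch; the 3.7 GBq/kg figure is taken from the comment itself, not from any official source:

```python
# Sanity check of the back-of-envelope conversion quoted upthread:
# 3.7 GBq/kg divided by 10,000 should be 0.37 MBq/kg.
GBQ = 1e9  # becquerels per gigabecquerel
MBQ = 1e6  # becquerels per megabecquerel

measured_bq_per_kg = 3.7 * GBQ                   # reported water activity, in Bq/kg
normal_bq_per_kg = measured_bq_per_kg / 10_000   # "10,000 times normal"

print(normal_bq_per_kg / MBQ)  # → 0.37 (MBq/kg)
```

Which confirms the arithmetic: still on the order of hundreds of kilobecquerels per kilogram, a very high activity for water.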
Any idiot can face a crisis - it's day to day living that wears you out.

Three increasingly higher waves... ...after watching this, I can't help wondering how many of the victims were filming or taking pictures from a height they believed to be safe. *Lunatic*, n. One whose delusions are out of fashion.

By the way, the video is 9 minutes, but if you really want to get a feel of it, don't fast-forward. *Lunatic*, n. One whose delusions are out of fashion.

As a person who has a compulsion to film everything, that was my first thought, too, followed by "RUUUUUUUUUUUUUUUUNNNNN!!!!" Karen in Bischofswiesen 'tis strange I should be old and neither wise nor valiant. From "The Maid's Tragedy" by Beaumont & Fletcher

They do run upstairs three times in the course of the filming. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

Austrian weather service plume prediction/analysis (I only have links to the Gifs, and haven't seen whether this is results or predictions). Links only as they're too big to put up in the diary http://www.zamg.ac.at/pict/aktuell/20110325_Reanalyse-I131-Period2.gif Any idiot can face a crisis - it's day to day living that wears you out.

It's measured data. Here is the English text. *Lunatic*, n. One whose delusions are out of fashion.

thanks Any idiot can face a crisis - it's day to day living that wears you out.

NHK WORLD English

Japan's Self-Defense Forces have released the latest aerial images of the disaster-stricken Fukushima Daiichi nuclear plant. The footage was shot from a Ground Self-Defense Force helicopter about mid-day Wednesday. Roughly one hour of video was edited into a 5-minute clip showing reactors No. 1 to No. 4. Footage of the No. 3 reactor building shows its roof and the upper section of the building's southern wall blown away by a hydrogen blast. Vapor can be seen wafting from gaps in the wreckage near a pool for spent nuclear fuel rods.
Faint steam can be seen rising from twisted steel framework over what could be the upper part of the containment vessel. The footage shows the No. 4 reactor building, which has been reduced to steel framework near the top, with a wall on the upper part of the building's southern side torn away. Light apparently reflected by water can be seen from openings in the roof's frame. An inside view from the southern side shows a green object that is most likely a fallen crane. Any idiot can face a crisis - it's day to day living that wears you out.

Reported to me but not seen: German TV main station ARD reported that experts suspect some core melt has already occurred, and are particularly concerned with #3. Of course, here on ET, we already knew that. "Life shrinks or expands in proportion to one's courage." - Anaïs Nin

I saw a similar report on ZDF, but it was full of imprecise stuff. Meltdown happened over a week ago, since then temperatures cooled down. What's in question is the level and nature of meltdown, given the extremely high level of radioactivity in the water. Even if stuff collected at the bottom of the reactor and that's what escaped through cracks, broken pipes or leaky vents and got under the feet of the workers. *Lunatic*, n. One whose delusions are out of fashion.

It's not the technical inaccuracy that got me, it's that it's being reported at all. At the same time the Bild is reporting that former PM Cabbage Helmet says that Germany needs nukes now more than ever. (one thing good about a crisis, the propaganda becomes completely self-evident.) "Life shrinks or expands in proportion to one's courage." - Anaïs Nin

See Salon I am about to post now. *Lunatic*, n. One whose delusions are out of fashion.

links to updates regarding nuclear powerplants in Japan

Some Links to Sites with Current Information on the Status of Nuclear Powerplants in Japan Any idiot can face a crisis - it's day to day living that wears you out.
Tomgram: Engelhardt, The Worst That Could Happen | TomDispatch

Not to put too fine a point on it, as an unfolding nightmare Fukushima already inhabits territory perilously close to those irradiated landscapes of the pulp fantasies of my childhood -- only you wouldn't know it. As "not as bad as Chernobyl" slips into the fog, it might be better to describe the situation at Fukushima as "remarkably unlike Chernobyl" in rural Ukraine, where almost 25 years ago, a single uncontained nuclear reactor with a graphite core blew. We now contemplate the possibility of multiple reactors accompanied by multiple containment pools for what is euphemistically called "spent" fuel (when it isn't "spent" at all) -- at least 11,195 such rods, 1760 metric tons of them -- self-destructing in a highly industrialized country smaller than California with the third largest economy on the planet. In a situation we've never faced before, except perhaps in fiction, to talk about "safety" and offer "reassurance" should ring oddly indeed.

H/T ARGeezer Any idiot can face a crisis - it's day to day living that wears you out.

Against Monbiot - against nuclear love | Presseurop (English)

The summit in the art of self-deception has now been scaled by the British journalist George Monbiot, who wrote a rather predictable text for the Guardian in London under the title: "Why Fukushima made me stop worrying and love nuclear power". His reasoning is simple: "A crappy old plant with inadequate safety features was hit by a monster earthquake and a vast tsunami. The electricity supply failed, knocking out the cooling system. The reactors began to explode and melt down. The disaster exposed a familiar legacy of poor design and corner-cutting. Yet, as far as we know, no one has yet received a lethal dose of radiation."

A longing for Apocalypse

How cynical. Monbiot wrote this while fire-fighters were risking their health and possibly their lives to protect Tokyo.
He wrote this while the nuclear plant was radiating, the levels climbing around it, and still no prospect of an end to the leaks. He wrote this while the people of Fukushima looked on from emergency shelters as their livelihoods were destroyed, possibly for generations, and while tap water in Tokyo was forbidden to babies. Meanwhile, the plutonium threat in reactor number three is still not under control. *Lunatic*, n. One whose delusions are out of fashion.

(I am quoting this; but I realise that the German author is probably unfamiliar with Monbiot and his extensive work, and suspect that Monbiot's motivation is probably more a spiteful reaction to some cruder replies he surely got for his earlier piece about coal being the worse danger.) *Lunatic*, n. One whose delusions are out of fashion.

DoDo: coal being the worse danger

Obviously not an idea one should express at the moment without diving for shelter. ;)

Nuclear can, though, become a coal-level danger if a massive expansion in nuclear power results in a massive expansion of lower concentration uranium ore mining. (Australia is mining both.) *Lunatic*, n. One whose delusions are out of fashion.

Having watched Navajo children playing on unmarked uranium tailings piles, i know this is so. (yes, grabbed them by the hand and led them away.) "Life shrinks or expands in proportion to one's courage." - Anaïs Nin

Exactly that previous experience of the Dine with Peabody Coal Company (now Peabody Energy Corporation)¹ is why the uranium mines are closed and will, one hopes, never be re-opened. As of yet, there's been no public announcement of plans to re-open New Mexico thorium mines. I expect a popular uprising if they try. Even some conservative wing-nut types hate the mining companies.

¹ name changes but the sociopathy goes on

She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

Yes.
But I think one can be against any expansion (rather be for reduction) of nuclear power, in favour of energy demand destruction and renewables, and rate coal as the worst possible electricity generation source.

The problem is that's a very theoretical point of view. I see no sign our governments intend to give up on nukes (and I include Germany). Seen this evening on FR TV, the Environment Minister Kosciusko-Morizet in "sincere dialogue" with a Green-type person, stressing the talking point that we should decide nothing in the heat of a crisis (whether she supported Sarko's bomb Gaddafi "crusade" decision in the heat of a crisis wasn't asked). This is the polite version of the immediate reaction of the heavies (Claude Allègre for example) who ran out ten days ago to yell it was "indecent" to want to discuss nuclear energy while the poor Japanese people were suffering. The problem there being that when there isn't a crisis the media don't cover the non-event and those in power go ahead with nuclear plans in hush-hush while no one but the crazies are trying to talk about it.

I very much question this idea. Even if we would get all our uranium needs from low grade uranium mines (like Rössing, 8 % of world uranium output in 2005), there would be remarkably few mines compared to the number of coal mines operating today. Furthermore, many low grade deposits would be exploited through in situ leaching, which has a very low localized environmental impact, much like Steam Assisted Gravity Drainage used for "mining" deep tar sands (not the moon-scape ones). Peak oil is not an energy crisis. It is a liquid fuel crisis.

Don't know if this is live or recorded, but http://www.ustream.tv/recorded/13556497 Saying that at 280 degrees C the gaskets on the charging lid will have a good chance of giving way; temperature is more of a risk than pressure. These gaskets are an extra safety feature that don't exist on the US version.
Temperature on one of the pressure vessels reached 400 degrees, which is to the right of the line where they had 100% gasket failure at all temperatures. Any idiot can face a crisis - it's day to day living that wears you out.

Leaks around the gasket occur at any point above 300. He also said that one possible cause of the black smoke is ignition of the organic sealant around the cable entries for the control electronics. Any idiot can face a crisis - it's day to day living that wears you out.

A containment vessel is only as strong as its weakest component and can only withstand temperatures to the point that the most vulnerable component fails, in this case various gaskets. "It is not necessary to have hope in order to persevere."

High radiation levels at Japanese plant raise new worry | Reuters

(Reuters) - Highly radioactive water has been found at a second reactor at a crippled nuclear power station in Japan, the plant's operator said, as fears of contamination escalated two weeks after a huge earthquake and tsunami battered the complex. Underscoring growing international concern about nuclear power raised by the accident in northeast Japan, U.N. Secretary-General Ban Ki-moon said in a statement it was time to reassess the international nuclear safety regime. Earlier, Japanese Prime Minister Naoto Kan, making his first public statement on the crisis in a week, said the situation at the Fukushima nuclear complex, 240 km (150 miles) north of Tokyo, was "nowhere near" being resolved. Any idiot can face a crisis - it's day to day living that wears you out.

FOCUS: Nuclear plant workers have option to quit but not many doing so | Kyodo News

Since the crisis at the Fukushima Daiichi nuclear plant began, triggered by the massive March 11 earthquake, 17 workers have suffered radiation of more than 100 millisieverts, the maximum level to which nuclear plant workers may be exposed per year.
They were among those engaged in critical work to lift the stricken six-reactor plant out of what has become for Japan an unprecedented nuclear disaster, amid high risks of exposure to radiation. Tokyo Electric Power Co., the plant operator and the nation's biggest utility, says it is ''up to each individual to decide whether or not to continue'' working at the plant. But an expert familiar with working conditions said that in the case of subcontractors, ''The reality is that they are not in a position to decline job offers that they may not like, because they know that would affect orders in the future.'' At Fukushima Daiichi, efforts to restore power and other facilities are underpinned by workers facing radiation exposure risks. Any idiot can face a crisis - it's day to day living that wears you out.

Japan Times: Kan breaks silence, vows to help locals rebuild lives (Saturday, March 26, 2011)

"The current situation at the Fukushima No. 1 plant is unpredictable and we are trying to prevent it from deteriorating," Kan told a news conference at the Prime Minister's Official Residence. "I believe we need to continue dealing with each problem with a strong sense of urgency." ... "From now on, we need to begin preparing for a full-scale reconstruction . . . of the region, as well as people's lives," Kan said. "We shall not burden individuals or each household with the damages of the disaster -- society and Japan as a whole will share the burden equally." ... ... "I think every country has their own way of thinking and is setting their own standards. We have been providing information with transparency to all nations and international organizations." So, in what may be my last act of "advising", I'll advise you to cut the jargon.
-- My old PhD advisor, to me, 26/2/11

Most disaster victims elderly / Unlike younger people, seniors unable to move fast enough to escape : National : DAILY YOMIURI ONLINE (The Daily Yomiuri)

Most victims of the March 11 earthquake and tsunami were seniors who were unable to move with the agility needed to escape the twin disaster. Of 2,853 victims in five prefectures, whose identities and ages have been confirmed as of Wednesday, 65.1 percent were 60 years old or older, according to Yomiuri Shimbun calculations. The calculations, based on the number of casualties identified by Iwate, Miyagi, Fukushima, Ibaraki and Chiba prefectural police headquarters, also show that 46.1 percent of the casualties were 70 years old or older. Younger people evidently were much more likely to escape the tsunami that followed the powerful earthquake that devastated the coastal areas of the Tohoku and Kanto regions. In Iwate Prefecture, 457 of 721 victims, or 63.3 percent of the total, were 60 years old or older while 44 percent were 70 or older. Of 1,579 identified victims in Miyagi Prefecture, 63.1 percent were 60 years old or older and 44.9 percent were 70 or older. Last year, people aged 60 or older in Iwate Prefecture accounted for 34.9 percent of the total prefectural population, while 20.8 percent were aged 70 or older. This shows the ratios of the total number of identified elderly victims were almost double that of the prefecture's aged population. Any idiot can face a crisis - it's day to day living that wears you out.

Kyodo - Fresh coolant to be injected into Fukushima No. 2 reactor

On Thursday, three workers were exposed to water containing radioactive materials 10,000 times the normal level at the turbine building connected to the No. 3 reactor building. On Friday, a pool of water with similarly highly concentrated radioactive materials was found in the No. 1 reactor's turbine building, causing some restoration work to be suspended.
Similar pools of water were also found in the turbine buildings of the No. 2 and No. 4 reactors, measuring up to 1 meter and 80 centimeters deep, respectively. Those near the No. 1 and No. 3 reactors were up to 40 cm and 1.5 meters deep. While it will try to analyze the levels of radioactivity of the pools of water found in the No. 2 and No. 4 reactors, TEPCO will remove such water in all four reactor units to reduce the risk of more workers being exposed to radioactive substances, it said. The risk hinders their efforts to restore the plant's crippled cooling functions, which are crucial to overcoming the crisis, it added. Any idiot can face a crisis - it's day to day living that wears you out.

TEPCO forced to change strategy at plant  NHK World  Saturday, March 26, 2011 08:45

Tokyo Electric Power Company has been forced to change its strategy at the quake-hit Fukushima Daiichi nuclear power plant due to high radiation levels at the site. The plant's nuclear reactors 1 through 4 have all lost their cooling capabilities as both external and backup power supplies failed after the quake and tsunami. TEPCO has been working to restore the external power supply while trying to cool the reactors and spent fuel storage pools by using pump trucks to secure water levels. However, 3 workers were exposed to highly radioactive water in the basement of the turbine building of the No.3 reactor on Thursday. The radiation level there was 200 millisieverts per hour at one time. This led to a change in plans. In an effort to continually cool the reactors, TEPCO started to pump fresh water instead of seawater into the No. 1 and No. 3 reactors on Friday. With this strategy in mind, the company first intended to use the reactors' water pumps. But they were forced to use pump trucks instead from a distance, after high radiation levels were detected near the reactors' pumps.
The company plans to switch on lights in the No.2 reactor's control room on Saturday through an external power supply. Meanwhile, they will also continue to use trucks to pump fresh water into the reactor. "It is not necessary to have hope in order to persevere."

I wonder when "the authorities" are going to realize that there could be another tsunami at any time, and that they should be working to protect their extremely fragile ex-reactors and their very crude replacement cooling systems from an even worse situation. How about a little 8 magnitude aftershock, with a subsequent tsunami spreading the reactor mess all over the countryside? I hear that the US Corps of Engineers is good at building sea walls, maybe they should deploy to Japan. Well, we know how well they build and maintain levees. "It is not necessary to have hope in order to persevere."

I wonder when "the authorities" are going to realize that there could be another tsunami at any time... Or a spike in radiation releases from #3 just when there was a brisk wind from the north-north east. "It is not necessary to have hope in order to persevere."

Presumably the Japanese build good seawalls, too. The sea defences for Fukushima were designed for a 5m tsunami. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

Workers trying to pump radioactive water from Japan reactors  Reuters

Radioactive water has been found in buildings of three of the six reactors at the power complex 240 km (150 miles) north of Tokyo. On Thursday, three workers sustained burns at reactor No. 3 after being exposed to radiation levels 10,000 times higher than usually found in a reactor. "Bailing out accumulated water from the turbine housing units before radiation levels rise further is becoming very important," said Japan Nuclear and Industrial Safety Agency senior official Hidehiko Nishiyama. ....
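A quick, hedged sanity check on two figures quoted in the articles above: a 200 mSv/h spot reading in a turbine basement, against the 100 mSv maximum annual exposure for plant workers. At that dose rate the yearly budget is exhausted in half an hour. A minimal back-of-envelope sketch, nothing more:

```python
# Back-of-envelope exposure budget, using figures quoted in this thread:
# 200 mSv/h measured at one point in the No. 3 turbine basement,
# 100 mSv cited as the maximum annual exposure for nuclear plant workers.
dose_rate_msv_per_h = 200.0  # reported spot reading, mSv/h
annual_limit_msv = 100.0     # per-year worker limit cited by Kyodo

hours_to_limit = annual_limit_msv / dose_rate_msv_per_h
print(hours_to_limit * 60)  # → 30.0 (minutes until the annual limit is reached)
```

Which puts the working conditions in perspective: at that spot reading, a worker's entire annual allowance would be gone in thirty minutes.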
With elevated radiation levels around the plant triggering fears across the nation, storage of the contaminated water has to be handled carefully. "We are working out ways of safely bailing out the water so that it does not get out into the environment, and we are making preparations," Nishiyama said. Perhaps we can hope that the "ways" of "safely bailing out the water" will involve pumping it into containers or tankers, rather than into the ocean. In this case it would be very good if the solution to pollution was not dilution. "It is not necessary to have hope in order to persevere."

Kyodo - Levels of radioactive materials soaring in sea near nuke plant

Levels of radioactive materials are skyrocketing in the sea near the crisis-hit nuclear power station in Fukushima Prefecture, the government's nuclear safety agency said Saturday, while the plant's operator has started injecting fresh water into the No. 2 reactor core to enhance cooling efficiency. According to the government's Nuclear and Industrial Safety Agency, radioactive iodine-131 at a concentration 1,250.8 times the legal limit was detected Friday morning in a seawater sample taken around 330 meters south of the plant, near the drain outlets of its troubled four reactors. The level rose to its highest so far in the survey begun this week, after staying around levels 100 times over the legal limit. It is highly likely that radioactive water in the plant has disembogued into the sea, Tokyo Electric Power Co. said. The result could fan concerns over fishery products in northeastern Japan as highly radioactive water has been found leaking near all four troubled reactor units at the plant.
Radioactive materials ''will significantly dilute'' by the time they are consumed by marine species, the agency said, adding it will not have a significant impact on fishery products as fishing is not conducted in the area within 20 kilometers of the plant, as the government has issued a directive for residents in the area to evacuate. Any idiot can face a crisis - it's day to day living that wears you out.

For a civilization with only the beginnings of understanding the relationship that is the circle of life, this is very disturbing news. "Life shrinks or expands in proportion to one's courage." - Anaïs Nin

"radioactive water in the plant has disembogued into the sea" - now there's an interesting word; suggests more than just a minor leak. Any idiot can face a crisis - it's day to day living that wears you out.

At a more practical level, I was thinking that since alpha and beta radiation is shielded effectively by living tissue, seafood that may contain nuclides that will kill you if you ingest them may appear "safe" from the outside, even with a radiation meter. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

Let's order sushi! Is there alfa/beta radiation without gamma radiation? *Lunatic*, n. One whose delusions are out of fashion.

http://en.wikipedia.org/wiki/Strontium-90 I prefer alfalfa emitters better.

What is the risk to others from people who are contaminated by radiation? So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

If you can still walk to the shelter, you're not a radiation hazard. At least not after you get a shower and a change of clothes. - Jake Friends come and go. Enemies accumulate.

That's what can be an issue: what may come off your shoe or clothes or skin. *Lunatic*, n. One whose delusions are out of fashion.

Please tell me they decontaminate people before letting them into the radiation shelters...
- Jake Friends come and go. Enemies accumulate.

They just send them away instead. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

That reminds me of an old BBC movie, "The War Game"... Quite creepy! "What can I do, What can I write, Against the fall of Night". A.E. Housman Friends come and go. Enemies accumulate.

It's not a radiation but a refugee shelter, and mandatory decontamination of 150,000 refugees would be even more of a hassle than just mandatory screening. *Lunatic*, n. One whose delusions are out of fashion.

Some ships avoid Tokyo Bay ports on radiation fear | Reuters

(Reuters) - German shipping companies are avoiding Tokyo Bay area ports due to radiation fears and Japan could face severe supply chain bottlenecks as vessels get diverted, ship industry officials said on Thursday. Any logistical setbacks could mean major delays and seaborne congestion at Japan's terminals including Tokyo, hindering recovery efforts in the wake of the March 11 earthquake. "The last thing Japan needs right now is for people to abandon them," said Tim Wickmann, chief executive of MCC Transport, a unit of Danish oil and shipping group A.P. Moller-Maersk (MAERSKb.CO). Among those that have stopped going to Tokyo for the time being are Hapag-Lloyd HPLG.UL -- the world's fifth-biggest container shipper part-owned by tour operator TUI AG (TUIGn.DE) -- and container ship operator Claus-Peter Offen. Any idiot can face a crisis - it's day to day living that wears you out.

The Oil Drum | Fukushima Dai-ichi status and slow burning issues

Tangentially related to MSM brown-outs. Maybe it's just me but... these sensors took a very interesting time to 'malfunction'.
Glitches hamper radiation warning system in California

The federal government's radiation alert network in California is not fully functional, leaving the stretch of coast between Los Angeles and San Francisco without the crucial real-time warning system in the event of a nuclear emergency. Six of the Environmental Protection Agency's 12 California sensors -- including the three closest to the Diablo Canyon nuclear power plant near San Luis Obispo -- are sending data with "anomalies" to the agency's laboratory in Montgomery, Ala., said Mike Bandrowski, manager of the EPA's radiation program. Any idiot can face a crisis - it's day to day living that wears you out.

New footage shows tsunami wave and damage to Fukushima plant - Channel 4 News

Nearly two hours of footage was taken from the air immediately after the magnitude 9.0 earthquake struck Japan on March 11. Travelling from the Miyagi prefecture to the Fukushima nuclear plant, the footage, released by the Japanese government, shows the extent of the damage inflicted on the nuclear plant from the earthquake. Fires in buildings and structural damage to the plant are evident. And whilst in the air the camera crew also capture the first terrifying tsunami waves heading to Japan. Any idiot can face a crisis - it's day to day living that wears you out.

NRC: Nuke Plants Aren't Reporting Safety Defects In Equipment | Crooks and Liars

Conservatives really do live in the land of wishful thinking. Yes, of course you can give away tax dollars that are supposed to be used for enforcement and use them for tax breaks! Of course you can cut funding for air traffic controllers without putting air travelers at risk, of course you can stop inspecting pharmaceutical factories and food processing plants, and expect them to just tell you if there's a problem. After all, who would risk their company just to make a buck? All of the above, but especially the nuclear power industry: More than a quarter of U.S.
nuclear plant operators have failed to properly tell regulators about equipment defects that could imperil reactor safety, according to a report by the Nuclear Regulatory Commission's inspector general. Operators of U.S. nuclear power plants are supposed to tell the NRC when pieces of equipment "contain defects that could create a substantial safety hazard," regulations say. Although the report doesn't assert that any imminent danger resulted from the lapses, many experts said the lack of communication could make it harder for other nuclear reactor operators to learn about flaws in their own equipment, because many similar parts are used in other reactors. Any idiot can face a crisis - it's day to day living that wears you out.

The only way to stop this nonsense is to break the corporate shield and make the owners of the nuclear plants (shareholders) fiscally responsible for the dangers caused to the public by their malfeasance. IANAL, so don't ask me how to do it. :-) She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre

Showings of criminality by corporate officers tend to make corporations vulnerable to loss of liability limitation and loss of their veil. "It is not necessary to have hope in order to persevere."

This would be more likely were criminal prosecutions possible with private citizens bringing the actions. It will never be a criminal case unless someone prosecutes. "It is not necessary to have hope in order to persevere."

Holding the corporate officers criminally liable should plenty suffice. The shareholders probably didn't have a clue, and not much of a chance to get one ahead of time either. - Jake Friends come and go. Enemies accumulate.

NHK WORLD English

Top officers of Japan's Self-Defense Forces and the US Pacific Fleet have agreed to share information on the troubled Fukushima Daiichi nuclear plant and cooperate in solving the problem.
The head of the SDF Joint Staff Office, General Ryoichi Oriki, and Admiral Patrick Walsh met at the Defense Ministry in Tokyo on Saturday afternoon. Walsh commands all the US units working for relief operations in the quake-hit area in northeastern Japan. He told reporters after the meeting that some of his senior staff in charge of communication with SDF officers have specific knowledge and technical skills relating to nuclear power. Any idiot can face a crisis - it's day to day living that wears you out. Interesting reality check of what it's like to work in a nuclear power plant... NHK WORLD English: Japanese electric power companies that operate nuclear power plants are facing difficulty in either restarting nuclear reactors in their checkups or transporting nuclear fuel to the power plants. Municipal governments that host nuclear power plants are urging plant operators to freeze expansion projects and to review safety measures. Hokuriku Electric Power Company has indicated that the firm has difficulty in rebooting two reactors at its Shika plant in Ishikawa Prefecture without the understanding of the prefectural government and residents. The reactors were taken out of operation for either mechanical trouble or regular inspection. In western Japan, Kansai Electric Power Company has decided to postpone transporting nuclear fuel to one reactor at its Takahama plant in Fukui Prefecture from France. The company cites difficulty in ensuring the fuel's safe delivery because the government is busy handling the aftermath of the March 11th disasters and can't provide the necessary safeguards for transport. Any idiot can face a crisis - it's day to day living that wears you out. NHK WORLD English: Municipalities in Fukushima Prefecture are relocating residents and administrative functions to remote areas. Many of them are located within the evacuation zone around the troubled Fukushima Daiichi nuclear power plant. 
On March 19th, Futaba Town moved its functions and the entire community to Saitama City in Saitama Prefecture. Two other municipalities have also decided on collective relocation of administration and residents. Okuma Town plans to move to Aizu-Wakamatsu City in Fukushima Prefecture, and Naraha Town, to Aizu-Misato Town in the same prefecture. Any idiot can face a crisis - it's day to day living that wears you out. Researcher warned 2 yrs ago of massive tsunami striking nuke plant Kyodo News A researcher said Saturday he had warned two years ago about the possible risk of a massive tsunami hitting a nuclear power plant in Japan, but Tokyo Electric Power Co., the operator of the Fukushima Daiichi nuclear plant crippled by the March 11 earthquake and ensuing tsunami, had brushed off the warning. According to the researcher, Yukinobu Okamura, and the records of a government council where he made the warning, TEPCO asserted that there was flexibility in the quake resistance design of its plants and expressed reluctance to raise the assumption of possible quake damage citing a lack of sufficient information. ''There should be ample flexibility in the safety of a nuclear power plant,'' said Okamura, head of the Active Fault and Earthquake Research Center at the National Institute of Advanced Industrial Science and Technology. ''It is odd to have an attitude of not taking into consideration indeterminate aspects.'' Okamura had warned in 2009 of massive tsunami based on his study since around 2004 of the traces of a major tsunami believed to have swept away about a thousand people in the year 869 after a magnitude 8.3 quake off northeastern Japan. 
He had found in his research that tsunami from the ancient quake had hit a wide range of the coastal regions of northeastern Japan, at least as far north as Ishinomaki in Miyagi Prefecture and as far south as the town of Namie in Fukushima Prefecture -- close to the Fukushima Daiichi plant -- penetrating as much as 3 to 4 kilometers inland. "It is not necessary to have hope in order to persevere." Fault line shifted up to 30m  NHK World  March 26, 2011 04:22 Japan's Meteorological Agency says the recent massive earthquake off the Pacific coast of Northeastern Japan occurred due to a fault line shift of up to 30 meters. The agency said at a news conference on Friday that it discovered this while analyzing how the quake occurred based on seismometer data it obtained in Japan and abroad. .... All this shows that the 450-kilometer fault line shifted a distance of up to 30 meters in just 3 minutes. This caused a massive quake with a magnitude of 9.0 and a tsunami 10 meters high which hit the Pacific coast of Japan. The meteorological agency says it will continue its analysis of huge earthquakes and aftershocks to help improve disaster response. "It is not necessary to have hope in order to persevere." Fresh water injected into No. 2 reactor  NHK World  March 26, 2011 12:30 Workers at the troubled Fukushima Daiichi nuclear power plant are now pumping fresh water instead of seawater into the No. 2 reactor. The measure was taken to prevent salt from building up inside the reactor and affecting its cooling capability. Seawater was used to cool reactors and spent fuel storage pools at the plant as an emergency measure following the tsunami. A similar measure was taken at the No. 1 and No. 3 reactors on Friday. The Nuclear and Industrial Safety Agency says it hopes to complete the switchover at spent fuel storage pools of the No. 2 and No. 4 reactors on Sunday. The agency says fresh water has helped to stabilize the condition of the No. 
1 reactor and accordingly wants to accelerate the process. It adds that further use of seawater could hamper temperature and pressure control. Saturday,  +0900 (JST) "It is not necessary to have hope in order to persevere." Chinese man turns himself in seeking deportation over nuclear fears | Kyodo News: Police on Saturday arrested a 48-year-old Chinese man, who turned himself in to Nagasaki prefectural police seeking deportation due to his fears about the nuclear accident at the Fukushima Daiichi power station, for illegally staying in Japan, police officials said. Lin Jian Ming is suspected of remaining in Japan beyond the allowed period of 90 days after arriving on June 8, 2000, the officials said. The man came to the Nagasaki prefectural police headquarters on Saturday afternoon, telling them that he lived in Funabashi, Chiba Prefecture, but had fled to Nagasaki to escape the nuclear power plant crisis and wanted to return to China, the officials said. Lin has told police he arrived in Nagasaki a week ago. Any idiot can face a crisis - it's day to day living that wears you out. Ford to idle Belgian plant to conserve parts | Reuters: (Reuters) - Ford Motor Co (F.N) will idle its auto plant in Genk, Belgium, for five days starting April 4, to conserve parts following the earthquake and nuclear crisis in Japan that has disrupted supplies for numerous automakers. The shutdown had been set for May but the automaker chose to idle the plant sooner "to ensure we have sufficient parts availability," Ford spokesman Todd Nissen said on Saturday. "Given the current situation in Japan, we took this as a precautionary measure. To be clear, we haven't experienced any plant disruptions as a result of a parts shortage at this point," he said. Any idiot can face a crisis - it's day to day living that wears you out. Ah, the wonders of Just-In-Time-Logistics. - Jake Friends come and go. Enemies accumulate. 
The Oil Drum | Fukushima Dai-ichi status and slow burning issues: Taking your calculation a little farther. A yield of 2.878% would amount to a related fission of 1.31 kg of uranium-235. If fuel rods are enriched to approx. 3% U-235, then 1.31 kg U-235 corresponds to approx. 43 kg of fuel pellets (or approx. 100 lb). Assume the shore current is conservatively 2 mi/hr; then the isotopic product of the fuel pellets passing the collection point would have to be replaced 10 times per hour. This would mean that the fission products of half a ton of fuel pellets/hr (1000 lb/hr) are being swept past that collection point. That translates into the product of 12 tons of fuel pellets/day flushing into the sea. *Please excuse the rounding errors - I didn't use a conversion table Any idiot can face a crisis - it's day to day living that wears you out. Latest casualty figures for March 11 quake, tsunami | Kyodo News: The following are the latest casualty figures related to the earthquake and tsunami that hit northeastern and eastern Japan on March 11, according to the National Police Agency as of 9 p.m. Saturday: Number of people killed: 10,489. Number of people missing: 16,621. Any idiot can face a crisis - it's day to day living that wears you out. Now (6 pm local time) 10,668 confirmed dead, 16,574 registered missing. First time I see the latter number reduce... Also 2,777 injured, 18,649 buildings found completely destroyed, 146 found burned down. *Lunatic*, n. One whose delusions are out of fashion. Kyodo News: Fears of radioactive seawater grow near nuke plant despite efforts: The plant's operator, Tokyo Electric Power Co., was able to turn on the lights in the control room for the No. 2 reactor the same day, leaving only the No. 4 reactor at the six-reactor plant without lighting in its control room. Which is just as well, since reactor 4 has no fuel in it. According to the government's nuclear safety agency, evidence of water having flowed through an ordinary drainage outlet has been found at the No. 
2 reactor building, with a radiation level of about 15 millisieverts per hour detected. The outlet is believed to lead to the sea. The government continues to refuse to give people false assurances, which in my opinion is both honest and commendable: Japan's top government spokesman Yukio Edano said at a press conference Saturday that it was difficult to predict when the ongoing crisis at the plant -- triggered by the catastrophic March 11 earthquake and ensuing tsunami -- would end. Asked about the prospects for the crisis, Edano described the current situation as ''preventing it from worsening,'' adding that ''an enormous amount of work'' is required before it will settle down. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Fukushima Slow Burning Issues  Euan Mearns The Oil Drum Heat accumulation The normal way to dispose of heat is to pump it away in circulating cooling water (see the massive flows of water either side of the seawall in the picture below). With the cooling pumps out of action due to loss of power, the only way to remove heat is by conduction through water. With the reactors shut down, the rate of heat production will now be much less than 1% of that produced by fission power, but the heat still needs to be removed. The following statement from this report is revealing:    A similar operation is planned for later today at unit 4 and the surface temperatures of the buildings appear to be below 100ºC. This suggests that the reactor buildings are essentially the heat sinks being used to absorb much of the heat being produced. For fuel rods cooled by water immersion at atmospheric pressure, the maximum temperature the coolant can reach is 100˚C (or it will boil) and since heat is always transferred from higher to lower temperatures, the maximum temperature the buildings can reach is 100˚C by conduction through water at atmospheric pressure. 
When they reach this temperature there is nowhere for the radiation heat produced to go and the source temperatures may continue to rise. I'm not sure what the exact outcome might be, but fuel rods catching fire or melting are two possible outcomes. The following statement from this report gives further cause for concern:    Tepco noted that the temperature of the containment vessel of unit 1 had built to some 400ºC, compared to a design value of only 138ºC. However, the strength of the component is such that it can withstand the stresses this imposes, said Tepco, and its structural integrity is expected to be maintained. "There is no substantial problem regarding the containment vessel's structural soundness under conditions of pressure 300 kPa and temperature 400ºC." If the reactor steel pressure vessel and the concrete containment are still pressurised then it is possible to raise the temperature of water above 100˚C, but a temperature of 400˚C still signals a serious heat dissipation issue at Unit 1. If either the pressure vessel or containment system is at atmospheric pressure then it suggests that the reactor pressure vessel is ruptured and nuclear fuel is in contact with the concrete containment system. It is quite clear that Tepco is aware of this problem, hence the race to restore mains AC power to see if the pumping system works, which will allow heat to be pumped away using the reactor's and cooling pond's water cooling systems. How much of this is going to work? For unit #s 1, 2 & 3 the cores could be at or below 100C because the containment has failed and they are therefore unable to keep water in the liquid state above 100C, so reports of temperatures below 100C are, potentially, a mixed blessing. 
The only effective means of removing heat that remains is circulating cold water into the reactor vessel and building, with that water either evaporating as steam, carrying away the latent heat of vaporization, or being pumped, heated and now contaminated, to storage tanks or the ocean. Spraying fresh water on the building would help remove heat from the concrete, which is functioning as a heat sink, but at the cost of providing additional contaminated water to be dealt with. In order to stabilize the reactors without further polluting the ocean it would be necessary to have two barges on site at all times: one for fresh water; one for contaminated water. The fresh water barge could turn into a contaminated water barge when empty of fresh water. This would also provide a way of dealing with water that has been sprayed on spent fuel pools and leaked onto the rest of the facility and the water that needs to be pumped out of the reactor and turbine buildings. Up-thread it was noted that some of the black smoke seen coming out of #s 1, 2 & 3 at various times may have been seals and gaskets that had been heated to decomposition. Add to that the stainless steel piping that has been seriously weakened by the presence of high temperature salt water and we have a very leaky system.   "It is not necessary to have hope in order to persevere." Mearns' conclusion: IT IS TOO EARLY TO CALL THIS CRISIS OVER.  Euan Mearns, The Oil Drum. Summing up: Fukushima is like a cancer eating away at the habitat of the east coast of Japan. Whilst the situation appears to be stable, a number of slow burning processes must inevitably be eating away at the heart of these reactors. The solution to a number of these problems is to restore fresh water circulation to each of the cores and the spent fuel ponds. Whether or not the pumping systems work remains to be seen. Disposing of the salty radioactive sludge from inside the reactor vessels presents another major challenge. 
It seems possible that the current metastable condition may persist for many more weeks, and all the while the release and accumulation of radioactive isotopes in the environment will continue. And there is still risk of a catastrophic failure due to heat or corrosion that would result in the status degrading rapidly. It is too early to call this crisis over. Fresh water from a barge is on site, but it has to be injected via fire truck pumps, as it has not been possible to get the pumps in the reactor working. These pumps have been described as being the size of a compact car stood on its rear bumper, so there is likely a considerable reduction in capacity from them to a fire truck pump. "It is not necessary to have hope in order to persevere." The Wikipedia article on steel strength says that Japanese fire codes use 400°C as the limit. That number might be quoted for that reason, because the onset of reduced strength is quite a bit higher... Having a barge to collect all the contaminated water is a great idea, but not something done with the flick of the wrist. Anyway, you could have filters installed, and then get highly contaminated filters and clean water. Cast the filters into concrete blocks and dump them into the Mariana Trench and you'll be fine. Not perfect, not pretty, but not worse than all those nuclear subs lying crushed on the deep ocean plains here and there. Peak oil is not an energy crisis. It is a liquid fuel crisis. Easier than my solution: vitrify them and use a mass accelerator to launch them onto a trajectory that would end with an impact on the sun. :-) "It is not necessary to have hope in order to persevere." BBC - Japan nuclear: Workers evacuated as radiation soars   Radioactivity in water at reactor 2 at the quake-damaged Fukushima nuclear plant has reached 10 million times the usual level, company officials say. Workers trying to cool the reactor core to avoid a meltdown have been evacuated. 
Earlier, Japan's nuclear agency said that levels of radioactive iodine in the sea near the plant had risen to 1,850 times the usual level. The UN's nuclear agency has warned the crisis could go on for months. Any idiot can face a crisis - it's day to day living that wears you out. KYODO - Woes deepen over radioactive waters at nuke plant, sea contamination: Highly radioactive pools of water found inside buildings near some troubled nuclear reactors at the Fukushima Daiichi plant highlighted the deepening seriousness of the nuclear crisis in Japan on Sunday, with the radiation level of the surface of the water in the basement of the No. 2 reactor's turbine building found to be over 1,000 millisieverts per hour. Hidehiko Nishiyama, spokesman for the government's nuclear safety agency, said, ''This is quite a high figure...and it is likely to be coming from the reactor.'' Adding to woes is the increasing level of contamination in the sea near the plant. Radioactive iodine-131 at a concentration 1,850 times the legal limit was detected from water extracted Saturday, compared with the 1,250.8 times the limit found Friday, the agency said. The pools of water containing radioactive substances have drawn attention after three workers who were engaging in work to restore the No. 3 reactor at its turbine building on Friday were exposed to radiation amounting to 173 to 180 millisieverts. Two of them had their feet in water without noticing then that it was highly contaminated. Any idiot can face a crisis - it's day to day living that wears you out. NHK WORLD English: The plant operator, known as TEPCO, says it measured 2.9-billion becquerels of radiation per one cubic centimeter of water from the basement of the turbine building attached to the Number 2 reactor. The level of contamination is about 1,000 times that of the leaked water already found in the basements of the Number 1 and 3 reactor turbine buildings. 
The company says the latest reading is 10-million times the usual radioactivity of water circulating within a normally operating reactor. TEPCO says the radioactive materials include 2.9-billion becquerels of iodine-134, 13-million becquerels of iodine-131, and 2.3-million becquerels each for cesium 134 and 137. So the main beta emitter is neither iodine-131 nor caesium-137, but iodine-134. According to this and this, it's rather energetic: 4 MeV. Should have centimetre range penetration. *Lunatic*, n. One whose delusions are out of fashion. Well, not quibbling over details, but the pools that were stepped in were 1,000 times stronger than reactor contents, and the current pools are talked about as being 10,000,000 times stronger, so they must be 10,000, not 1,000, times stronger than yesterday. Any idiot can face a crisis - it's day to day living that wears you out. Either way, we're beginning to see a situation spin out of control, especially with higher readings now at a distance. "Life shrinks or expands in proportion to one's courage." - Anaïs Nin No, the pools that were stepped in were 10,000 times stronger, specifically 3.7 million Bq/cm³. *Lunatic*, n. One whose delusions are out of fashion. α, β, γ Penetration and Shielding: A useful rule-of-thumb for the maximum range of electrons is that the range (in g/cm2) is half the maximum energy (in MeV). So 2 cm penetration for I-134 beta emissions in water as well as soft tissue in the human body. *Lunatic*, n. One whose delusions are out of fashion. NHK - Extreme radiation detected at No.2 reactor   The level of contamination is about 1,000 times that of the leaked water already found in the basements of the Number 1 and 3 reactor turbine buildings. The company says the latest reading is 10-million times the usual radioactivity of water circulating within a normally operating reactor. 
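The shielding rule of thumb quoted a few posts up is easy to sanity-check. A quick sketch (the ~4 MeV figure for I-134 betas and the rule itself come from the posts above; the function name is just for illustration):

```python
# Rule of thumb from the shielding page quoted above:
# max electron range (g/cm^2) ~= 0.5 * max beta energy (MeV).
# Dividing by density gives a path length; water and soft tissue ~ 1 g/cm^3.
def beta_range_cm(e_max_mev: float, density_g_cm3: float = 1.0) -> float:
    return 0.5 * e_max_mev / density_g_cm3

print(beta_range_cm(4.0))  # I-134 betas, ~4 MeV -> 2.0 cm in water
```

The same rule applied to I-131 (max beta ~0.6 MeV) gives only millimetre-scale penetration, which is why the I-134 component dominates the external beta hazard here.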
TEPCO says the radioactive materials include 2.9-billion becquerels of iodine-134, 13-million becquerels of iodine-131, and 2.3-million becquerels each for cesium 134 and 137. Any idiot can face a crisis - it's day to day living that wears you out. So the old water people got a semi-lethal dose in 50 minutes? This would give you the same in 1/3 of a second! Any idiot can face a crisis - it's day to day living that wears you out. Ah, if it's only 1,000 times the old water then it's 3 seconds. Any idiot can face a crisis - it's day to day living that wears you out. NHK WORLD English: High radiation detected 30 km from Fukushima plant: Radiation levels 40 percent higher than the yearly limit for the general public have been detected just over 30 kilometers from the Fukushima Daiichi power plant. The Science Ministry says a reading of 1.4 millisieverts was taken on Wednesday morning in Namie Town northwest of the plant. Ow shit. That's an order of magnitude higher than previous values at the same town. *Lunatic*, n. One whose delusions are out of fashion. asahi.com(朝日新聞社): Radiation from Fukushima exceeds Three Mile Island - English: Meanwhile, calculations of soil contamination by experts have already produced results that are at the same level as for Chernobyl. Cesium-137 levels of 163,000 becquerels per kilogram of soil were detected in Iitate, Fukushima Prefecture, about 40 kilometers northwest of the Fukushima plant, on March 20. That was the highest figure in the prefecture. According to Tetsuji Imanaka, an associate professor of nuclear engineering at the Kyoto University Research Reactor Institute, if the Iitate figure was converted to one square meter, the figure would be 3.26 million becquerels. After the Chernobyl accident, residents who lived in regions with cesium levels of 550,000 becquerels or more per square meter were forcibly moved elsewhere. "Iitate has reached a contamination level in which evacuation is necessary," Imanaka said. 
"Radiation is still being released from the Fukushima plant. The areas of high contamination can be considered to be on par with Chernobyl." Residents who were forced to move after the Chernobyl accident were believed to have been exposed to an average of about 50 millisieverts of radiation. However, a study of the health of residents who lived for many years on contaminated land found that the incidence of leukemia among adults did not increase. Any idiot can face a crisis - it's day to day living that wears you out. asahi.com(朝日新聞社): Radiation from Fukushima exceeds Three Mile Island - English: To calculate the spread of radiation using the System for Prediction of Environmental Emergency Dose Information, the Nuclear Safety Commission of Japan estimates the discharge rate for radioactive iodine per hour from the Fukushima plant based on radiation measurements taken at various locations. Using those figures to make a simple calculation of the amount of discharge between 6 a.m. March 12 and midnight Wednesday results in figures between 30,000 and 110,000 terabecquerels. Tera is a prefix meaning 1 trillion. The INES defines a level 7 major accident such as Chernobyl as one in which radiation of more than several tens of thousands of terabecquerels is released. Any idiot can face a crisis - it's day to day living that wears you out. Radiation levels 40 percent higher than the yearly limit Do they mean radiation levels per hour, or what? Taken literally, they are comparing apples and oranges (or apples per unit time, and apples). So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Stupid translator syndrome. Checking Japanese news, the reading was precisely 1,437 μSv for the period from 23:00 to 24:00 local time - but already on 25 March, apparently. Maybe this was a new measuring location. I didn't find the actual government release so far. *Lunatic*, n. One whose delusions are out of fashion. 
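The rate-versus-cumulative confusion above is easy to make concrete. A rough sketch, assuming the 1 mSv/year public limit implied by "40 percent higher" against a 1.4 mSv reading, and treating the 1,437 μSv figure as accumulated over roughly a day:

```python
# Dose *rate* vs cumulative dose: the same microsievert number means very
# different things depending on the time window it was accumulated over.
HOURS_PER_YEAR = 365.25 * 24

# 1 mSv/year (assumed public limit) expressed as an hourly rate:
limit_rate = 1000 / HOURS_PER_YEAR   # uSv/h, ~0.114

# The 1,437 uSv reading, if cumulative over ~24 h, as an average rate:
avg_rate = 1437 / 24                 # uSv/h, ~60

print(f"{limit_rate:.3f} uSv/h limit-equivalent vs {avg_rate:.1f} uSv/h average")
```

Read literally as "1.4 millisieverts per hour" the report would have implied something four orders of magnitude above the limit-equivalent rate, which is why it matters whether the figure is a rate or a total.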
After further research, it appears that the original news reports in Japanese were an even worse case of stupid-newsie syndrome. The 1437 μSv value is contained in this release, and it's a cumulative value. *Lunatic*, n. One whose delusions are out of fashion. Cumulative over 24 hours 8 minutes, to be precise. *Lunatic*, n. One whose delusions are out of fashion. Reuters - Britain looks a safe haven for nuclear power   LONDON -- The UK is starting to look like a safe haven for nuclear power. Britain's decision to set a floor price for electricity generated from fossil fuels is not an explicit endorsement of new reactors. But it may help tip the balance for companies contemplating investment in new capacity. It's not obvious that Japan's disaster has set back Britain's nuclear ambitions very far. In the wake of the leak at the Fukushima plant, Western governments have been quick to placate public concerns about nuclear power. At the extreme end of the scale, Germany suspended operations at some ageing reactors. But the UK has been more measured, saying that while there will be lessons from Japan, there should be no rush to judgment. The UK's chief nuclear inspector is due to report to the government on those possible lessons in May. But it's already clear that the UK is keeping a very open mind. The proposal to enforce a minimum price for carbon is designed to push up the cost of electricity from non-renewable sources, thereby encouraging the 110 billion pounds of investment in low-carbon electricity the government says is needed by 2020. While the floor benefits all forms of renewable energy, it makes a particular difference to near-term investment decisions in nuclear. Any idiot can face a crisis - it's day to day living that wears you out. Didn't we recently hear that the UK is the only country where you still have plant designs older than Fukushima operating? So, in what may be my last act of "advising", I'll advise you to cut the jargon. 
-- My old PhD advisor, to me, 26/2/11 Well, we haven't heard that on any UK news site, but have seen it internationally. Have seen UK news reporters saying how new UK reactors will be safe because they are more modern designs, but with no comment about how old the current ones are. Any idiot can face a crisis - it's day to day living that wears you out. Britain has two plants with generation I reactors, Wylfa and Oldbury, commissioned in 1969 and 1971. Both plants (4 reactors totaling 1500 MW) are due to be decommissioned and replaced by about 6500 MW of new plants (Westinghouse AP1000 or Areva EPR), by Horizon Nuclear Power (an EON+RWE joint venture). Peak oil is not an energy crisis. It is a liquid fuel crisis. NHK WORLD English: Record 16-meter tsunami hit Minami-sanriku: Scientists say a record 16-meter tsunami hit a coastal town in Miyagi Prefecture on March 11th. Researchers at the Port and Airport Research Institute released the finding after inspecting buildings in Minami-sanriku Town, which was devastated by the massive earthquake and tsunami. They determined that tsunami as high as 12 to 14 meters hit hospitals and the town hall in the center of the town. A 4-story public apartment building in the coastal area was almost completely inundated, suggesting that tsunami up to 16 meters reached the building. The waves were about 4 times as high as those that hit the region in 1960 following a powerful earthquake in Chile. The researchers estimate that tsunami with a maximum power of 40 tons per square meter destroyed concrete pillars and walls in the coastal area. A chief researcher at the institute, Taro Arikawa, says that he believes tsunami directly hit the town's coastline, which is open to the fault that caused the quake. Arikawa added that tsunami could have been amplified in the bay where the seabed becomes much shallower toward the coast. He stressed the need to review measures to protect concrete buildings, which were thought to be highly resistant to tsunami. 
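NHK's "40 tons per square meter" is a load rather than a power; as a unit check, converting it to a pressure (assuming metric tons) gives roughly four atmospheres on the face of a wall:

```python
# Convert the quoted tsunami loading of 40 tonnes per square meter
# into a pressure (assumption: metric tons; g = 9.81 m/s^2).
G = 9.81                       # m/s^2
load_kg_m2 = 40 * 1000         # 40 metric tons per square meter
pressure_pa = load_kg_m2 * G   # N/m^2 = Pa
print(f"{pressure_pa / 1000:.0f} kPa, ~{pressure_pa / 101325:.1f} atm")
```

That is far beyond what ordinary reinforced-concrete walls are designed to resist laterally, consistent with the destroyed pillars and walls the researchers describe.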
Sunday, March 27, 2011 08:53 +0900 (JST) *Lunatic*, n. One whose delusions are out of fashion. Small amounts of radioactive substances found in Nevada but safe: AP | Kyodo News: Minuscule amounts of radioactive substances believed to have come from Japan's crippled nuclear power plant have been detected in Las Vegas but the readings pose no health risk, the Associated Press reported Saturday. The report said extremely small amounts of iodine-131 and xenon-133, both of which are not usually found in Nevada, were detected at a monitoring station near the Atomic Testing Museum in the city following a series of radiation leaks at the Fukushima Daiichi plant. Any idiot can face a crisis - it's day to day living that wears you out. Higher Radiation Levels Found at Japanese Reactor  NYT: The Nuclear and Industrial Safety Agency said that water seeping out of the crippled No. 2 reactor building into the adjacent turbine building contained levels of radioactive iodine 134 that were about 10 million times the level normally found in water used inside nuclear power plants. The higher levels may suggest a leak from the reactor's fuel rods -- from either the suppression chamber under the rods or various piping -- or even a breach in the pressure vessel that houses the rods, the Japanese nuclear regulator said. Sunday's developments came after the world's chief nuclear inspector said that Japan was "still far from the end of the accident" that struck the plant. Yukiya Amano, the director general of the International Atomic Energy Agency, acknowledged that the authorities were still unsure about whether the reactor cores and spent fuel were covered with the water needed to cool them and end the crisis. .... 
Hidehiko Nishiyama, the deputy director-general of the Japanese nuclear safety agency, said that it was likely that radiation was leaking from the pipes or the suppression chamber, and not directly from the pressure vessel, because water levels and pressure in the vessel were relatively stable. The operator of the Fukushima Daiichi nuclear plant, Tokyo Electric Power Company, said its analysis of the water in the No. 2 unit had identified radioactive isotopes of cesium, iodine, cobalt, molybdenum, technetium, barium and lanthanum. The company has not yet been able to determine the source of the leak. (Bold added) Let us hope they are off a couple of decimal places on the "10 million times the level normally found in water used inside nuclear power plants" claim. Were "normal" levels a tenth of a microsievert, that would translate into levels of over a sievert! And to which water body do they refer? The water in the suppression pool or the water in the heat exchanger? "It is not necessary to have hope in order to persevere." See discussion upthread and again. There is no decimal point error... *Lunatic*, n. One whose delusions are out of fashion. I had missed most of that discussion. I had posted while up for a half hour with insomnia. "It is not necessary to have hope in order to persevere." The Oil Drum | Fukushima Dai-ichi status and slow burning issues: Reactor 3 radioactives in water pools: http://www.tepco.co.jp/en/press/corp-com/release/11032503-e.html Co60,Tc99,I131,Cs134,Cs136,Cs137,Ba140,La140,Ce144 Cl38,As74,Y91,I131,Cs134,Cs136,Cs137,La140 Cl38 has a half-life of 37 minutes, not a decay product. Maybe from neutron irradiation of salt? Something is still fizzing in reactor 1. No Na isotopes are seen, perhaps immobilized in comparison to the chlorine. It seems that all the reactors have pools of radioactive water around them. I have not seen an analysis yet of pools around reactors 2 and 4; it may be that the plumbing is leaking in those as well. 
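The Cl-38 point can be made quantitative: with a 37-minute half-life, any Cl-38 produced at shutdown two weeks earlier would be long gone, so detecting it at all implies ongoing neutron production. A sketch of the decay arithmetic (the isotope and half-life are from the post above):

```python
def remaining_fraction(t_minutes: float, half_life_minutes: float) -> float:
    """Fraction of a radioactive isotope's activity left after time t."""
    return 0.5 ** (t_minutes / half_life_minutes)

# Cl-38, half-life 37 min: essentially nothing survives even one day,
# let alone the two weeks since the reactors shut down on March 11.
print(remaining_fraction(24 * 60, 37))   # after one day: roughly 2e-12
```

The same logic applies to the short-lived I-132, Ru-105 and Te-129 mentioned below, although I-132 is continuously regenerated by decay of Te-132 (half-life about 3 days), so its presence alone is weaker evidence of recent fission than Cl-38's.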
There are also these readings: http://www.tepco.co.jp/en/press/corp-com/release/11032605-e.html I find it troubling that I-132, Ru-105, Te-129 are detected. These have very short half-lives, and must have been recently created. Daini is seeing I-132 as well. Any idiot can face a crisis - it's day to day living that wears you out. BBC - Tepco spokesman saying that huge spike in radiation was a mistaken reading Any idiot can face a crisis - it's day to day living that wears you out. NHK WORLD English: TEPCO retracts radioactivity test result Tokyo Electric Power Company has retracted its announcement that 10 million times the normal density of radioactive materials had been detected in water at the Number 2 reactor of the Fukushima Daiichi nuclear plant. The utility says it will conduct another test of the leaked water at the reactor's turbine building. The company said on Sunday evening that the data for iodine-134 announced earlier in the day was actually for another substance that has a longer half-life. The plant operator said earlier on Sunday that 2.9 billion becquerels per cubic centimeter had been detected in the leaked water. It said although the initial figure was wrong, the water still has a high level of radioactivity of 1,000 millisieverts per hour. Sunday, March 27, 2011 22:02 +0900 (JST) *Lunatic*, n. One whose delusions are out of fashion. 1,000 millisieverts -- that's a whole sievert. Wording it that way to make it sound less scary? Any idiot can face a crisis - it's day to day living that wears you out. Japan says very high radiation reading at reactor was wrong | Reuters: (Reuters) - The operator of Japan's stricken Fukushima Daiichi nuclear plant said on Monday a very high radiation reading that had sent workers fleeing the No. 2 reactor was erroneous. Tokyo Electric Power Co.
(TEPCO) vice-president Sakae Muto apologized for Sunday's error, which added to alarm inside and outside Japan over the impact of contamination from the complex which was hit by an earthquake and tsunami on March 11. Radiation in the water was a still worrying 100,000 times higher than normal, rather than 10 million times higher as originally stated, Muto said. Any idiot can face a crisis - it's day to day living that wears you out. Reading not credible, eh? "It can't possibly be that high!" A test run to see how the public will react, or another surreal example of incompetence? Oh well, give or take a few zeros, close enough for rock and roll. 'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty Well, we do keep having mistakes, and things always turn out to be better than the mistake after an hour or two. I'm vaguely suspicious this is a corporate PR strategy; if so, management and PR firm need encasing in concrete and dropping into the deep ocean along with the waste. Any idiot can face a crisis - it's day to day living that wears you out. It smells fishy to me too. Not very encouraging... 'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty Fukushima scaremongers becoming increasingly desperate * The Register: The situation at the quake- and tsunami-stricken Fukushima Daiichi nuclear powerplant in Japan was brought under control days ago. It remains the case as this is written that there have been no measurable radiological health consequences among workers at the plant or anybody else, and all indications are that this will remain the case. And yet media outlets around the world continue with desperate, increasingly hysterical and unscrupulous attempts to frame the situation as a crisis.
Here's a roundup of the latest facts, accompanied by highlights of the most egregious misreporting. First up, three technicians working to restore electrical power in the plant's No 3 reactor building stood in some water while doing so. Their personal dosimetry equipment later showed that they had sustained radiation doses up to 170 millisievert. Under normal rules when dealing with nuclear powerplant incidents, workers at the site are permitted to sustain up to 250 millisievert before being withdrawn. If necessary, this can be extended to 500 millisievert according to World Health Organisation guidance. None of this involves significant health hazards: actual radiation sickness is not normally seen until a dose of 1,000 millisievert and is not common until 2,000. Additional cancer risk is tiny: huge numbers of people must be subjected to such doses in order to see any measurable health consequences. In decades to come, future investigators will almost certainly be unable to attribute any cases of cancer to service at Fukushima. Any idiot can face a crisis - it's day to day living that wears you out. LOL... This was obviously written before the 2-6 Sv on feet was discovered; I wonder what will be the next excuse... *Lunatic*, n. One whose delusions are out of fashion. Kyodo: 58% do not approve of gov't handling of nuclear power plant crisis: On the other hand, 57.9 percent said they approve of the way the state has dealt with disaster victim support in northeastern and eastern Japan hit by the catastrophic earthquake and ensuing tsunami on March 11. The nationwide telephone survey conducted Saturday and Sunday also found that the approval rate for Prime Minister Naoto Kan's Cabinet came to 28.3 percent, up 8.4 percentage points from the previous survey in mid-February. ...
As for the government's response to the nuclear power plant in Fukushima Prefecture which was severely damaged by the quake and tsunami, 19.6 percent said they do not approve of it at all and 38.6 percent said they do not approve of it very much. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 U.S. nuke plant says shielded against Japan emergency | Reuters: (Reuters) - A U.S. nuclear plant in Alabama similar in design to the earthquake-hit Fukushima facility in Japan has multiple defenses to prevent and tackle the same kind of emergency, its operator said. Safety features at the Browns Ferry plant in northern Alabama are so superior to those at Japan's Fukushima Daiichi plant that even in the event of massive flooding the chances of a crisis were negligible, officials from the Tennessee Valley Authority (TVA) told reporters. "What we have here is defense in depth, multiple levels of redundancy, backup to the backup to the backup," TVA communications consultant Jim Nesbitt told journalists who toured the facility on Friday as he explained the plant's elaborate safety systems. The emergency at the Japanese plant has escalated since March 11 when a tsunami, triggered by a massive earthquake, knocked out power. Any idiot can face a crisis - it's day to day living that wears you out. LOL. Fukushima also had "defense in depth, multiple levels of redundancy, backup to the backup to the backup", precisely. But the simultaneous shutdowns knocked out backup electricity, then the tsunami knocked out the diesel generator backup of the backup, then some unidentified glitch knocked out the steam-powered cooling system backup of the backup of the backup, then falling debris and seawater salt proved problems for the improvised fire truck pumping backup of the backup of the backup of the backup... *Lunatic*, n. One whose delusions are out of fashion.
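The half-life point from the TOD isotope lists upthread (Cl-38 at ~37 minutes) can be made quantitative. A minimal sketch; the function name is mine and the half-life is the standard tabulated value:

```python
def fraction_remaining(elapsed_min: float, half_life_min: float) -> float:
    """Fraction of a radionuclide left after elapsed_min minutes of decay."""
    return 0.5 ** (elapsed_min / half_life_min)

# Cl-38 has a ~37-minute half-life. Six hours (roughly ten half-lives)
# after neutron irradiation stops, essentially none should remain:
print(f"Cl-38 left after 6 h: {fraction_remaining(6 * 60, 37.0):.2%}")  # about 0.12%
```

So any Cl-38 in the samples must have been produced shortly before they were taken, which is why its presence points to recent neutron activation of seawater salt rather than leftover fission products.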
There was an article about this in the NYT today, accompanied by a photo: So, Mr. Nuclear Power Plant Manager, if your practices are so well-planned and mindful, and "ready for a one-in-a-million-year flood," why do you have electrical cables and air hoses (?) draped haphazardly from the ceiling??? And what exactly is the point of having a machine gun in a power plant??? For creating unplanned emergency situations by shooting up pipes and cable bundles? "It is not necessary to have hope in order to persevere." You question his Right to bear arms? It's in the Constitoootion Any idiot can face a crisis - it's day to day living that wears you out. More from the same NYT article: Inside the reactor building, near the entrance to the primary containment structure, are carefully marked spaces with two lime green carts about the size of hand trucks that a supermarket worker might use to roll cases of soda cans to the proper aisle. Each is loaded with batteries. One cart could power the instruments that measure the water level in the reactor vessel, an ability that Japanese operators lost a few hours after the tsunami hit. Another could operate critical valves that failed early at Fukushima. ... Deeper into the building, in an odd-shaped space in the basement between a corner of the square reactor building and the round containment shell is a steam-driven pump. This is something that the designer, General Electric, intended to be available to deliver up to 600 gallons per minute of cooling water into the reactor core even if the electrically driven pumps failed for want of power. An overheating reactor would be likely to have ample supplies of steam to run it. That worked at Fukushima for a while but appears to have stopped functioning later; the Japanese plant's operator, the Tokyo Electric Power Company, has not provided an explanation. Again, the T.V.A.
suggests that it has backup tools that the Japanese utility, known as Tepco, probably lacked: a battery-powered strobe light stored in a nearby cabinet, and a valve that usually runs on electricity but also has a hand crank. They obviously have backup for the bullshit. The only significant item, really, is the steam-driven pump, which Fukushima also had. It possibly failed at Fukushima or would fail elsewhere because of loss of pressurization inside the reactor, possibly due to thermal failure of gaskets. Now if they had on-site natural-gas-powered electrical generators capable of powering the heat exchanger pumps and hardened CNG storage adequate for a month's operation, I would say they had some significant backup to their diesel backup generators. It is nice to be able to see just how high the temperatures and pressures are getting via battery power. It would be much better to be certain of being able to do something about it. But that might cost more money. "It is not necessary to have hope in order to persevere." The steam-driven pump is mentioned in a Powerpoint presentation of the analysis of the Fukushima fuk-up by Areva specialist Dr Matthias Braun, in English and some German, here (h/t afew). The presentation also mentions the failure modes of each of the pumps: in the case of reactors 1 and 3, the batteries ran out. For reactor 2, the pump itself failed. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 why do you have electrical cables and air hoses (?) draped haphazardly from the ceiling??? Maintenance operations during a planned shut down? But what would they do during an emergency shutdown when they might want the entryway through which they are passing to be sealed? "It is not necessary to have hope in order to persevere."
I'm not sure if this has been referred to yet, but there's a Powerpoint presentation of the analysis of the Fukushima fuk-up by Areva specialist Dr Matthias Braun, in English and some German, here. That document does a good job of clarifying some of the processes that took place, and the design of the reactor containment. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Kyodo: Woes deepen over radioactive water at nuke plant, sea contamination (March 28): Japan on Sunday faced an increasing challenge of removing highly radioactive water found inside buildings near some troubled nuclear reactors at the Fukushima Daiichi plant, with the radiation level of the surface of the pool in the basement of the No. 2 reactor's turbine building found to be more than 1,000 millisieverts per hour. Exposure to such an environment for four hours would raise the risk of dying in 30 days. Hidehiko Nishiyama, spokesman for the government's nuclear safety agency, said the figure is "quite high" but authorities must find a way to pump out the water without sending workers too close to push ahead with the restoration work. ... Adding to the woes is the increasing level of contamination in the sea near the plant, although Nishiyama reiterated there is no need for health concerns so far because fishing would not be conducted in the evacuation-designated area within 20 kilometers of the plant and radioactive materials "will be significantly diluted" by the time they are consumed by marine species and then by people. So, in what may be my last act of "advising", I'll advise you to cut the jargon.
-- My old PhD advisor, to me, 26/2/11 TEPCO president fell sick amid troubles at its nuke plant | Kyodo News: Masataka Shimizu, president of Tokyo Electric Power Co., which operates the crisis-hit Fukushima nuclear plant, fell sick March 16 and took some days off from the liaison office between the government and the utility firm, TEPCO officials said Sunday. While Shimizu was away from the office set up at the firm's headquarters, he collected information and issued instructions from a different room of the headquarters building to address the troubles at the Fukushima Daiichi nuclear power station hit by the March 11 quake and subsequent tsunami, the officials said. He has already recovered and come back to work at the liaison office, they said. Any idiot can face a crisis - it's day to day living that wears you out. While Shimizu was away from the office set up at the firm's headquarters, he collected information and issued instructions from a different room of the headquarters building to address the troubles at the Fukushima Daiichi nuclear power station... ?! Translation snarl? What is this "from a different room"? Are they saying he was telecommuting via a different office? "It is not necessary to have hope in order to persevere." Japan says no problem with Tokyo Electric CEO's health | Reuters: (Reuters) - Japan's nuclear safety agency said on Monday there was no problem with the health of the chief executive of Tokyo Electric Power Co, Masataka Shimizu, whose absence for a couple of days during the crisis at one of the firm's plants was reported widely in the media. Any idiot can face a crisis - it's day to day living that wears you out. He was probably so shocked by the Prime Minister's "What the heck is going on here?" on the Tuesday after the tsunami that he had to pull himself together for two days... So, in what may be my last act of "advising", I'll advise you to cut the jargon.
-- My old PhD advisor, to me, 26/2/11 Yeah, it took him two days to find the "face", which he had lost when PM Kan moved in to TEPCO HQ. "It is not necessary to have hope in order to persevere." Low-level radiation found in Massachusetts rainwater | Reuters: (Reuters) - Trace amounts of radioactive iodine linked to Japan's crippled nuclear power station have turned up in rainwater samples as far away as Massachusetts during the past week, state officials said on Sunday. The low level of radioiodine-131 detected in precipitation at a sample location in Massachusetts is comparable to findings in California, Washington state and Pennsylvania and poses no impact to drinking supplies, public health officials said. Air samples from the same location in Massachusetts have shown no detectable radiation. The samples are being collected from more than 100 sites around the country that are part of the U.S. Environmental Protection Agency's Radiation Network monitoring system. Any idiot can face a crisis - it's day to day living that wears you out. it's a hard rain...gonna fall 'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty NHK WORLD English: Tokyo Electric Power Company has asked independent research centers to check if radioactive substances from the Fukushima plant contain highly toxic plutonium. The company says it expects the results will be available within several days. The nuclear power plant continues to emit radioactive materials that may include plutonium. Plutonium is a radioactive element that is produced when uranium fission occurs in a reactor core. So far, the utility firm has not detected plutonium through its own methods. Any idiot can face a crisis - it's day to day living that wears you out. Plutonium is a radioactive element that is produced when uranium fission occurs in a reactor core. !!? True, but misleading.
The VAST majority of plutonium present in reactor 3 would come from the 7% of the MOX fuel comprising recycled plutonium. Reading this would lead one to believe that TEPCO is systematically eliding the most serious problems and minimizing dangers. "It is not necessary to have hope in order to persevere." This was not a TEPCO release, this was likely another over-eager journo (also see the case of the 1437 μSv reading). *Lunatic*, n. One whose delusions are out of fashion. Thanks! Glad of that. "It is not necessary to have hope in order to persevere." NHK WORLD English: In order to resolve the problem of the contaminated water, the utility says it is trying to accelerate work at the Number-one reactor to pump the water from the basement into the turbine condenser for storage by increasing the number of pumps from one to three. The company says although it had planned to take similar steps to remove the water from the Number-2 and -3 reactors, their turbine condensers were found to have been almost full and unable to contain any more water. The company says it is considering pumping that water from the condensers into adjacent pools and then filling them with the contaminated water. Regarding spent fuel rods in the storage pools, the company told reporters early Monday morning that the pools in the Number-2 and -4 reactors appear to be filled with water, with the rods submerged. Any idiot can face a crisis - it's day to day living that wears you out. What about the water levels in spent fuel pools in reactors 1 & 3? They should be able to use a barge similar to the ones bringing in fresh water to receive contaminated water from reactors 2 and 3 turbine condensers. That would certainly beat dumping it into the ocean. "It is not necessary to have hope in order to persevere." where are you going to send the (glowing?) barges ~and happy sailors? to loop the planet like that garba(r)ge which forlornly circumnavigated the globe a few years ago?
'The history of public debt is full of irony. It rarely follows our ideas of order and justice.' Thomas Piketty There are means for separating the radioactive components from the water so that the portion needing to be stored is much less. The problem is collecting it and holding it until you can perform that separation. "It is not necessary to have hope in order to persevere." Watching NHK and there's another earthquake with a 50 cm tsunami; don't know how regular something like this is. Any idiot can face a crisis - it's day to day living that wears you out. Japan Meteorological Agency | Tsunami Warnings/Advisories, Tsunami Information Occurred at 07:24 JST 28 Mar 2011 Region name Miyagi-ken Oki Depth Very Shallow Magnitude 6.5 Any idiot can face a crisis - it's day to day living that wears you out. EU summit agrees on nuclear stress tests | EurActiv: European leaders agreed on Friday (25 March) to set the "highest standards" of nuclear safety and submit all plants to "stress tests", in the wake of the unfolding crisis at Japan's stricken Fukushima plant. ... France, Germany and Spain raised the possibility of closing any of Europe's 143 reactors that fail stress tests to be held this year. Leaders at a summit in Brussels also called for Europe's neighbours to follow suit. ... France, which hopes to turn the clampdown to its advantage as it tries to sell its nuclear technology overseas, sought to set an example by saying it would close reactors that failed. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 This is a great idea! And I feel that doing this more often would be a good idea. Maybe once every five years. Peak oil is not an energy crisis. It is a liquid fuel crisis.
Calls to shut down 'Europe's Fukushima' | EurActiv: A 40-year-old Spanish nuclear power plant built to the same design model as Fukushima's disaster-struck reactor number one has become engulfed by calls for it to be shut down, while Brussels is questioning the safety of EU installations and has pushed for stress tests of nuclear power plants. Antonio Cornado, communications manager for Spain's Consejo de Seguridad Nuclear (Nuclear Safety Council) regulator, confirmed to EurActiv that the Santa Maria de Garona plant, about 70 miles south of Bilbao, contains a General Electric Mark 1 Boiling Water Reactor (BWR) system, of a similar variety to that in Fukushima's reactor number one. "It's the same type," he said. "It is a Mark 1, but there are several performance [enhancements] that are better than the original design. There have been a lot of safety modifications." Questions about the model's safety were "closed" 20 years ago, he added. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 It's Garoña, not Garona. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Hazardous Radiation Detected Outside Damaged Japanese Reactor - Bloomberg: Radiation levels that can prove fatal were detected outside reactor buildings at Japan's Fukushima Dai-Ichi plant for the first time, complicating efforts to contain the worst disaster since Chernobyl in 1986. Water in an underground trench outside the No. 2 reactor had levels exceeding 1 sievert an hour, a spokesman for plant operator Tokyo Electric Power Co. told reporters in the capital today. Thirty minutes of exposure to that dose would trigger nausea and four hours might lead to death within two months, according to the U.S. Environmental Protection Agency. Preventing the most-contaminated water from leaking into the ground or air is key to containing the spread of radiation beyond the plant.
A partial meltdown of fuel rods in the No. 2 reactor probably caused a jump in the readings, Japan's chief government spokesman said today. Any idiot can face a crisis - it's day to day living that wears you out. BBC News - Radiation leak found outside Japan nuclear reactor: The water was found in an underground maintenance tunnel, with one end located about 55m (180ft) from the shore. However, Tepco said there was no evidence that the contaminated water had reached the sea. The discovery comes as Japan's chief cabinet secretary said that the priority at the plant was to ensure that contaminated water did not leak into the soil or the sea. Any idiot can face a crisis - it's day to day living that wears you out. Highly radioactive water leaks from Japanese | Reuters: Greenpeace said its experts had confirmed radiation levels of up to 10 microsieverts per hour in a village 40 km (25 miles) northwest of the plant. It called for the extension of a 20-km (12-mile) evacuation zone. "It is clearly not safe for people to remain in Iitate, especially children and pregnant women, when it could mean receiving the maximum allowed annual dose of radiation in only a few days," Greenpeace said in a statement, referring to the village where the radiation reading was taken. More than 70,000 people have been evacuated from an area within 20 km (12 miles) of the plant and another 130,000 people within a zone extending a further 10 km are recommended to stay indoors. They have been encouraged to leave. Any idiot can face a crisis - it's day to day living that wears you out. That's like 3-4 cigarettes per day. Peak oil is not an energy crisis. It is a liquid fuel crisis. Japan rejects Greenpeace argument for expanding evacuation zone | Reuters: (Reuters) - Japan's nuclear safety agency on Monday rebuffed a call by Greenpeace to extend an evacuation zone around the stricken Fukushima power plant.
An agency official told a news briefing that the measurements of high radiation the group said it had found at 40 km from the facility could not be considered reliable. He added that most residents in the area concerned had left and hardly anyone was living there anymore. Any idiot can face a crisis - it's day to day living that wears you out. Huh!? The government measured levels even higher than Greenpeace's 10 μSv/h in that zone... we have been discussing the area around Namie for a week. *Lunatic*, n. One whose delusions are out of fashion. ceebs: "clearly not safe for people to remain in Iitate, especially children and pregnant women" Starvid: "That's like 3-4 cigarettes per day." Well, clearly... So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 The Oil Drum | Fukushima Open Thread: Eric Shidler on March 27, 2011, 4:04 PM The readings being retracted are not due to miscalculation. If you will check the tides for the last 24hrs you will see that the initial VERY HIGH reading was @ high tide. The second re-reading was taken @ low tide. This style reactor uses a single cooling loop and appears to have a leak in the turbine room. The tides coming in puts positive pressure in this loop and radiation collects. When the tide goes out it places a vaccume on the system pulling vast amounts out to sea. Tepco will publish samples taken @ low tide from now on it seems. Any idiot can face a crisis - it's day to day living that wears you out. Certainly not good for children or pregnant women to smoke 3-4 cigarettes a day. Peak oil is not an energy crisis. It is a liquid fuel crisis. These people have a complete loss of "wet containment" in their hands... So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 I found this totally cool filter for Google Earth, which shows all the reactors in the world, including output, type, start year etc etc.
Check it out here. Peak oil is not an energy crisis. It is a liquid fuel crisis. NHK WORLD English: Chief Cabinet Secretary Yukio Edano says he has strongly instructed Tokyo Electric Power Company to avoid the release of erroneous data on radiation leaks at its troubled nuclear power plant. TEPCO on Sunday corrected an earlier announcement about the radiation levels in water leaking from the Number 2 reactor's turbine building, saying a water analysis had been incorrect. Edano said radiation analyses serve as the basis for ensuring safety at the plant, where workers are struggling to safely cool the reactors and other machinery. He said he also urged TEPCO to secure adequate back-up personnel for the workers. Any idiot can face a crisis - it's day to day living that wears you out.
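The dose figures being traded in this thread are easy to sanity-check. A minimal sketch (the 1 mSv/yr public limit is the usual ICRP figure; the function name is mine):

```python
def hours_to_dose(dose_msv: float, rate_msv_per_h: float) -> float:
    """Hours of exposure at a constant dose rate to accumulate a given dose."""
    return dose_msv / rate_msv_per_h

# Iitate: 10 uSv/h = 0.01 mSv/h against the 1 mSv/yr public limit
print(hours_to_dose(1.0, 0.01))       # 100.0 hours, i.e. ~4 days

# No. 2 trench water: 1000 mSv/h; ~4 Sv total is LD50 territory
print(hours_to_dose(4000.0, 1000.0))  # 4.0 hours
```

The first number matches Greenpeace's "maximum allowed annual dose of radiation in only a few days"; the second matches the Kyodo line that four hours at 1,000 mSv/h "would raise the risk of dying in 30 days".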
ceebs: "believes radioactive elements from melted nuclear fuel have found their way" What obfuscation... So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 I particularly like the line about "temporarily melted" fuel rods. - Jake Friends come and go. Enemies accumulate. When they cool down they solidify... So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Corium? "Life shrinks or expands in proportion to one's courage." - Anaïs Nin I suppose "the radioactive substances from temporarily melted fuel rods at the Number 2 reactor had made their way into water in the containment" is meant to indicate that the fuel rods did melt at some point, releasing radioactive substances, but have since cooled down so no more radionuclides are being added to the water. The real problem here is that radioactive water outside the reactor indicates both 1) a meltdown; 2) a failure of "wet containment" (the "torus" in a Mark I design). So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Yeah, but by then they hardly qualify as "fuel rods" any more. - Jake Friends come and go. Enemies accumulate. Ok, so reactors more complex than BWRs have their control rods held up by electromagnets, so if power fails, the rods fall into the rod assemblies and the reaction is shut down. With BWRs they are forced up from the bottom. So say your fuel melts in a PWR reactor: as the fuel melts into a puddle, your control rods collapse into the mix as they are unsupported, and so continue to a degree to moderate the reaction. You would assume that the control rods in a BWR have to be held fairly rigidly to enable fuel loading; it's not like you can go in and shake the control rods so that you can fit the gaps in the fuel assembly if one isn't lined up properly.
If it works then like I think, then the more the fuel has melted, then the less moderation will occur; it's like the control rods have been pulled back out of the reactor, and all of the fuel rods have been pressed together. If so I just don't see how this situation can ever be brought back under control. Any idiot can face a crisis - it's day to day living that wears you out. I think the control rods melt along with the fuel rods. What are the control rods made of? So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Boron carbide, apparently. Melts at 2763 °C Any idiot can face a crisis - it's day to day living that wears you out. Hmm, zirconium melts at 1855 °C... So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 That's what I was thinking Any idiot can face a crisis - it's day to day living that wears you out. NHK WORLD English: More than 28,000 people have died or are missing following the earthquake and tsunami that devastated Japan's northeast coast on March 11th. The National Police Agency says that as of 6 PM on Monday, 10,901 people had been confirmed dead and 17,621 listed as missing. Police have identified 8,030 of the bodies. The largest number of deaths --- 6,627 --- has been reported in Miyagi Prefecture, with 3,242 dead in Iwate and 974 in Fukushima. Miyagi, Iwate and Fukushima are the prefectures hardest-hit by the quake and tsunami. Any idiot can face a crisis - it's day to day living that wears you out. Latest casualty figures for March 11 quake, tsunami | Kyodo News: The following are the latest casualty figures related to the earthquake and tsunami that hit northeastern and eastern Japan on March 11, according to the National Police Agency as of 10 a.m. Tuesday: Number of people killed 11,063 Number of people missing 17,258 *Lunatic*, n. One whose delusions are out of fashion.
Latest casualty figures for March 11 quake, tsunami | Kyodo News: The following are the latest casualty figures related to the earthquake and tsunami that hit northeastern and eastern Japan on March 11, according to the National Police Agency as of 9 p.m. Tuesday: Number of people killed 11,168 Number of people missing 16,407 That's a sharp drop in missing. *Lunatic*, n. One whose delusions are out of fashion. UPDATE 3-Japan business lobby gives OK to scrap corp tax cut | Reuters: TOKYO, March 28 (Reuters) - Japan's top business lobby gave the government the green light to scrap a planned cut in the corporate tax rate and urged firms to look at shifting production to western Japan as the nation grapples with its worst crisis since World War Two. Hiromasa Yonekura, chairman of the Japan Business Federation, said the influential lobby would not fight the government if it decided to shelve a plan to lower the corporate tax rate, which at around 40 percent is among the highest in the industrialised world. Economics Minister Kaoru Yosano suggested last week the government should reconsider the planned tax cut of 5 percentage points from April to prioritise spending on reconstruction and prevent the country's already massive debt pile from growing. [ID:nL3E7EP06B] "I don't mind if the government skips cutting the corporate tax rate," Yonekura, who is also chairman of Sumitomo Chemical, told a regular briefing in Tokyo. "Instead I want the government to move swiftly in its recovery efforts." Any idiot can face a crisis - it's day to day living that wears you out. NHK WORLD English: The effort to cool reactors at the damaged nuclear power plant in Fukushima, northern Japan, is facing the risk of leaking highly radioactive substances. The plant's operator, Tokyo Electric Power Company, raised water pumping power on Sunday to cool the No. 2 reactor in a stable manner. On Monday, the company cut back on the amount of injected water.
The move followed the Nuclear Safety Commission's announcement that highly radioactive substances detected in puddles of water in the basement of the reactor's turbine building may have come directly from the vessel containing the reactor. 16 tons of water was being injected into the reactor every hour but TEPCO now says it wants to reduce the amount to 7 tons. This would be enough to replace the amount that is evaporating. If the injected water level is reduced, temperatures may increase in the reactor. Any idiot can face a crisis - it's day to day living that wears you out. Quick back-of-an-envelope calculation: that's 5.2 MW to evaporate 7 tons of water; that's more evaporating than we calculated out the other day. Any idiot can face a crisis - it's day to day living that wears you out. They are supposed to be just topping it up, right? Unless it's a "flo-thru" cooling system like the radiator of my brother's Corolla... It is rightly acknowledged that people of faith have no monopoly of virtue - Queen Elizabeth II If it's a flo-thru, it's extremely dirty in case the reactor fuel rods are damaged. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 They were pushing an extra 9 tons through as flow-through; because they've cut it back to purely the amount that will replace what evaporates, that makes me think that one of the outflow pipes has failed. Having hot saturated salt solution pushed into the outflow must have caused one of the welds or joints to fail, which is why we're getting radioactive water pouring into other buildings. Any idiot can face a crisis - it's day to day living that wears you out. I have suspected that the pipes, the gaskets, or both were the source since reading the TOD article. This tends to confirm that possibility. The other possible sources of radioactive water could well be worse. "It is not necessary to have hope in order to persevere." Where was that?
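The ~5 MW back-of-the-envelope figure above can be sanity-checked with a quick sketch. The feedwater inlet temperature (~20 °C) is an assumption, not from the thread; the specific heat and latent heat of vaporization are standard values for water:

```python
# Power needed to heat and evaporate 7 tonnes of water per hour.
mass_per_hour = 7_000          # kg of water per hour
c_p = 4186                     # J/(kg*K), specific heat of liquid water
delta_t = 100 - 20             # K, heating assumed ~20 C feedwater to boiling
latent_heat = 2.26e6           # J/kg, latent heat of vaporization at 100 C

energy_per_hour = mass_per_hour * (c_p * delta_t + latent_heat)  # joules
power_mw = energy_per_hour / 3600 / 1e6                          # megawatts

print(f"{power_mw:.1f} MW")    # roughly 5 MW, consistent with the post
```

With a colder inlet (seawater near 10 °C) the figure creeps a little higher, so the 5.2 MW quoted in the post is in the right range.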
I remember calculating the evaporation in the No. 4 spent fuel pool only. *Lunatic*, n. One whose delusions are out of fashion. Yes, you're right, that makes more sense. Any idiot can face a crisis - it's day to day living that wears you out. Ford Limits Orders on Black and Red Paint Colors - KickingTires: Since Japan's earthquake and tsunami, we've learned a little more each day about where many of the materials for our cars come from. Even the paint that coats cars and trucks is impacted, at least on Ford and Nissan vehicles. Ford is telling its dealers not to order vehicles in Tuxedo Black paint or in three variations of red, according to USA Today. The pigments for those paints come from a Japanese supplier. Any idiot can face a crisis - it's day to day living that wears you out. Ford: from "you can have your car in any color as long as it's black" to "as long as it's not black" in less than 100 years. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 URGENT: Plutonium detected in soil at Fukushima nuke plant: TEPCO | Kyodo News: Plutonium has been detected in soil at five locations at the crippled Fukushima Daiichi nuclear power plant, Tokyo Electric Power Co. said Monday. The operator of the nuclear complex said that the plutonium is believed to have been discharged from nuclear fuel at the plant, which was damaged by the March 11 earthquake and tsunami. Any idiot can face a crisis - it's day to day living that wears you out. Radioactive substance detected over southeast China coast | Kyodo News: NEWS ADVISORY: Not known which reactor plutonium came from: agency (00:36) NEWS ADVISORY: Monitoring to be strengthened after plutonium detected at Fukushima plant: agency (00:34) Any idiot can face a crisis - it's day to day living that wears you out.
ceebs: Not known which reactor plutonium came from: agency. Spent "regular" fuel will contain plutonium as conversion of U-238 to Pu-239 is a relatively common side-process of fission reactions. In fact, that's how it is possible for reprocessing of spent nuclear fuel to produce MOX fuel containing Plutonium in quantity. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 I would have thought you'd have also had uranium as well as plutonium, and the proportions would be much higher from Number 3. Wouldn't have thought you'd just have one without the other. Any idiot can face a crisis - it's day to day living that wears you out. I think the point is that finding uranium is not considered such a health hazard, nor unexpected (the fuel rods are about 95% U238 whether they're fresh or spent). Plutonium is the newsworthy (and worrisome) atom. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 But the theory in the last few days was that the escaping contamination is the volatile gases only, as the partial meltdowns damaged the zirconium cladding only and there was no release of the solid material of the pellets themselves. So both uranium and plutonium would be a surprise. *Lunatic*, n. One whose delusions are out of fashion. The TEPCO report probably has a longer list of isotopes than just "yes, we found plutonium", but that's the one people will report for shock value. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 My guess: core starts melting -> fuel particles become suspended in water -> water is dumped into torus -> hydrogen/steam explosion damages torus -> activated water released from the containment via the broken torus -> heat -> steam -> plutonium etc fallout. Peak oil is not an energy crisis. It is a liquid fuel crisis.
Plus, steam from still-hot reactor cores could leak out through damaged gaskets or pipes and condense into radioactive water which flows to the lowest available point. We really do not have adequate information on the construction of the plants. We would need a full set of plans, sections and elevations to start to get an idea. All we have are simplified diagrams. How many here have seen typical paper-based drawings of construction plans from the '70s? There may not really be a complete set of plans. It depends on how TEPCO has dealt with legacy drawings and how well the "as built" modifications were reflected back into the record drawings. "It is not necessary to have hope in order to persevere." Ceebs posted a diagram elsewhere on this thread. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 And that is about as good as it gets as to actual information. It is all highly schematic. It looks like you know something until you want to use the information as the basis for some action, then you realize that it has all turned to shit in your hands and you don't know anything practical. Been there too many times on old school sites with fragmentary documentation. "It is not necessary to have hope in order to persevere." I guess this was to be expected given the explosions in the #3 reactor building, the radioactive water in its turbine building, and so on... So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Well, watching the explosion of number three at the time, it was three separate explosions over a second or two; since then it's only been pictures, not pictures and sound. Any idiot can face a crisis - it's day to day living that wears you out.
Japanese DRAM makers' woes echo rest of industry after quake - Computerworld: IDG News Service - Japanese DRAM maker Elpida Memory on Monday said its factories are operating "at close to normal levels" two weeks after the 9.0-magnitude earthquake in Japan, and that it has "sufficient parts and materials to continue supplying our customers as usual until the end of July." Although the company said it is in discussions to secure further materials after July and doesn't expect any interruption to its business, Elpida will find itself competing against a growing number of chip makers seeking the same materials, 300-millimeter (12-inch) silicon wafers. The earthquake and resulting tsunami have affected production at key factories making these wafers, the raw materials that chips are etched onto. Market researcher IHS iSuppli estimates that damage to these factories could reduce the supply of silicon wafers globally by 25%, which "could have a major effect on worldwide semiconductor production," particularly DRAM chips. Compounding earthquake and tsunami damage, other chip factories are being hurt by rolling blackouts meant to share electricity made scarce because several power plants were knocked offline in the disaster. Any idiot can face a crisis - it's day to day living that wears you out. U.N. atom agency calls nuclear safety summit | Reuters: (Reuters) - The U.N. nuclear watchdog's chief on Monday called for a summit to strengthen nuclear safety and improve disaster management following Japan's crisis.
Japan is struggling to avert a severe meltdown at its Fukushima Daiichi nuclear power plant and officials said on Monday highly radioactive water had been leaking from the site, hit by an earthquake and tsunami two weeks ago. IAEA chief Yukiya Amano said he wanted ministers from the International Atomic Energy Agency's 151 member states to attend the summit in Vienna, possibly to be held in June. "(The) political level is needed, this is a very important issue, this is not only for experts or technical people," he told a news conference. He described the situation at the site as "very serious." Any idiot can face a crisis - it's day to day living that wears you out. UN atom agency calls damage limitation exercise So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Indeed, I strongly doubt that consequences will include significant new investment into safety in places like China, India, Russia, Iran, Bulgaria, Romania, Hungary, the Czech Republic, or Brazil... *Lunatic*, n. One whose delusions are out of fashion. You don't think the EU will bring pressure to bear on (and subsidize) Romania, Hungary or the Czech Republic? So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Now that brought a smile to my face. Subsidise Romania? Heh. Good joke. Don't you know that subsidies are only for Deutsche Bank? In this time of national stringency, we have to limit our expenses to the ones that are truly necessary. Besides, the market will provide. - Jake Friends come and go. Enemies accumulate. Then just bring pressure to bear and force them to close the reactors. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 The EU can't even decide if it should have anything to do with the intervention in Libya weeks and weeks after it's started, and probably not until after it's over either.
I don't think we're going to see any reasonable EU "pressure" unless it's accompanied by big sacks of cash. Peak oil is not an energy crisis. It is a liquid fuel crisis. Even at the EU level, do you think Romania, Hungary or the Czech Republic will vote for really strict and externally controlled tests, rather than a similar stamp-of-approval process as was applied during EU expansion? As for the IAEA, methinks there is a clear majority for business as usual. *Lunatic*, n. One whose delusions are out of fashion. What the IAEA should do is get the ministers to agree to a levy, in dollars per megawatt, that it will collect on all the nuclear generation capacity in the world, and which it will distribute in order to shut down all the first-gen nuclear plants in the world. Plus pay for extra safety on the rest out of the change. You may say that I'm a dreamer... It is rightly acknowledged that people of faith have no monopoly of virtue - Queen Elizabeth II Huh. From a distance, the BWR design looks very different from PWR designs because usually a square building is used for containment. Also, because there is only one loop through the turbines and reactor, and the steam going through the turbines is also slightly radioactive, the turbine building has to be considerably shielded as well. This leads to two buildings of similar construction, with the taller one housing the reactor and the long, low one housing the turbine hall and supporting structures. That explains a lot. As soon as the fuel rods are compromised the turbine building becomes radioactive. But presumably containment in the turbine building is rather poorer than in the reactor. Also, it may not be rated for the temperatures and pressures of steam coming from the reactor in the event of a meltdown. So, in what may be my last act of "advising", I'll advise you to cut the jargon.
-- My old PhD advisor, to me, 26/2/11 From the earlier diagrams, there are two steam shutoff valves, one inside the containment, and one inside the turbine room, which should stop water flowing out through the turbine; when both are closed, another pipe should push steam down into the wet tank inside the containment. Any idiot can face a crisis - it's day to day living that wears you out. And they could close these valves if they had another way to remove heat from the core. "It is not necessary to have hope in order to persevere." IAEA chief says Japan's nuke plant needs some time to stabilize | Kyodo News: International Atomic Energy Agency Director General Yukiya Amano said Monday that he believes it will take some time before the situation at the quake-hit Fukushima Daiichi nuclear plant in northeastern Japan stabilizes. Speaking at a press conference, Amano also repeatedly emphasized that the situation at the plant in Fukushima Prefecture is still serious. But he also said he believes that the problem will be solved through the efforts of those at the site. Any idiot can face a crisis - it's day to day living that wears you out. Why you don't build a nuclear reactor in a region subject to earthquakes and tsunamis. (H/T to Booman) Think of what the world would be facing if this had hit the Fukushima reactors. She believed in nothing; only her skepticism kept her from being an atheist. -- Jean-Paul Sartre And I thought when we were reading about "entire towns washed out to sea" it was a way of speaking... So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 First they get washed up the river. Then, what doesn't stick gets washed back out to sea. "It is not necessary to have hope in order to persevere."
The Oil Drum | Fukushima Open Thread: The French Institute for Radiological Protection and Nuclear Safety provided some pretty interesting simulations on anticipated nuclide concentrations and impacts to Fukushima and surrounding area for the first seven days of the accident. The one thing they don't look at is topography, and that this is a mountainous region with weather and radiation concentrated in river valleys (as shown in DOE results). Their update for March 22 includes estimates for rare gases, iodine, cesium, and tellurium, and suggests 10% of "the releases estimated during the Chernobyl accident." Any idiot can face a crisis - it's day to day living that wears you out. Please notice the global dispersion simulation, and add that to their report that Caesium-137 from Chernobyl is still measurable in today's atmosphere. And this is the French source which denied there was any radiation from Chernobyl in France then. Wait, not true, it was the government which denied. But was the institute then feeding the real info to the gov? "Life shrinks or expands in proportion to one's courage." - Anaïs Nin their report that Caesium-137 from Chernobyl is still measurable in today's atmosphere Do you have a link for that? I find it hard to believe that measurable amounts of Caesium would remain suspended in the atmosphere after 25 years. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Radiation sensors are extremely sensitive, which usually spooks people who have the usual "radiation? aarrrrgh runforthehills!!!!111one" mentality. Peak oil is not an energy crisis. It is a liquid fuel crisis. From the IRSN paper above: This simulation was applied to caesium-137, used as a tracer in the radioactive plume during this period.
From the first day on (12 March) this simulation has been continuously carried out every other hour and the results of this simulation are given in becquerels of caesium-137 per cubic metre of air (Bq/m3). In addition the results are compared to the values measured in the vicinity of the Chernobyl plant, just after the accident that occurred on 26 April 1986. The values exceeded 100,000 Bq/m3, and they were around 100 to 1000 Bq/m3 in countries seriously affected by the radioactive plume (Ukraine and Belarus); in France, the values measured in the east were around 1 to 10 Bq/m3 (on 1 May 1986). A very low level of radioactive caesium-137 still remains in the air, around 0.000001 Bq/m3. Starvid, given the professional debates over the effects of low-level radiation, and the strong possibility that there remains much to learn, all in a climate of hiding data historically, it would be less arrogant of you if you wouldn't deride people who may have views differing from yours on the subject. I found your "run for the hills" comment full of hubris, given the circumstance, or perhaps damn insensitive. Regarding the 0.000001 Bq/m3 still flying around, yes of course it's minuscule, but it's completely spread out around the entire globe, to a layer how high? Every single cubic meter, everywhere. One thing is certain: it may take another 50 years before there's a real understanding of the biological effects of radiation. Did one ever consider that there are different effects from naturally occurring background than from what's cumulatively produced in a nuclear disaster? Could the disaster effects possibly differ from the carefully controlled medical versions? Aside: it's the hubris that really gets me from so many participants in the nuclear debate. I really get irked when people cite that the technology has evolved so, that accidents which happen to archaic technology would never happen with the new technology.
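The claim that Chernobyl caesium is still measurable 25 years on is at least physically plausible: Cs-137 has a half-life of roughly 30 years (a standard value, not from the thread), so most of the 1986 release had not yet decayed by 2011:

```python
# Fraction of Chernobyl's 1986 Cs-137 release still undecayed after 25 years.
half_life = 30.2   # years, approximate Cs-137 half-life
elapsed = 25.0     # years, 1986 -> 2011

remaining = 0.5 ** (elapsed / half_life)
print(f"{remaining:.0%} of the original Cs-137 remains")  # about 56%
```

Whether that residue stays airborne rather than deposited on the ground is a separate question; the calculation only shows decay alone cannot have removed it.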
Given that the exact same words were used when those plants were built. You can criticize beliefs based upon false science, but you can't criticize fear of the unknown, especially when the history of the entire industry is one of deception. "Life shrinks or expands in proportion to one's courage." - Anaïs Nin However, in the case of caesium, the radiation is less energetic than the K-40 background (both are made up of beta and gamma), and one microbecquerel is well within the natural background variability. So there are excellent reasons to believe that any effect is orders of magnitude less than the noise in the background. - Jake Friends come and go. Enemies accumulate. Of course, but the caesium isn't alone. What's K-40? "Life shrinks or expands in proportion to one's courage." - Anaïs Nin Naturally occurring radioactive Potassium-40. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Potassium-40 - Wikipedia, the free encyclopedia: Potassium-40 is the largest source of natural radioactivity in animals and people. An adult human body contains about 160 grams of potassium, of which a small fraction is potassium-40. From the isotope abundance and half-life it can be calculated that this produces about 300,000 disintegrations per minute continuously throughout the life of the body. That's 5,000 Bq of K-40. By comparison, as Jake says, both the beta and the gamma decays of Caesium-137 are less energetic than those of Potassium-40. And we're talking about 5 billion times more decays from K-40 in a human body than from Cs-137 in a cubic metre of air. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Dear Crazy Horse, I wasn't referring in any way to you. You certainly know what radiation means and just happen to disagree over the dangers of low-level radiation.
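The "5 billion times" comparison above follows directly from the two numbers quoted in that post, converting disintegrations per minute to becquerels (decays per second):

```python
# Compare the decay rate of natural K-40 in a human body with the residual
# Chernobyl Cs-137 concentration quoted for the atmosphere.
k40_body_bq = 300_000 / 60     # 300,000 disintegrations per minute -> 5,000 Bq
cs137_air_bq = 1e-6            # Bq per cubic metre of air, IRSN residual figure

ratio = k40_body_bq / cs137_air_bq
print(f"K-40 in one body: {k40_body_bq:.0f} Bq, ratio: {ratio:.0e}")
```

The ratio comes out at 5e9, i.e. the five billion in the post (comparing a whole body to a single cubic metre of air, as the post does).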
I was referring to the people who don't know what radiation is, think all radiation is lethal, and don't even understand that rocks, bananas or even themselves are radioactive. I did not mean to insult you in any way. Peak oil is not an energy crisis. It is a liquid fuel crisis. Accepted (though i wish i knew more)... but the hubris about the state of current knowledge of radiation is out of place. It's a science in its infancy, especially with regard to long-term effects. Much of the data we have comes from weapon effects, which may or may not have complete relevance. The dialogue (between pro and anti) goes nowhere without respect. "Life shrinks or expands in proportion to one's courage." - Anaïs Nin NHK WORLD English: Tokyo Electric Power Company announced on Monday that a puddle of water was found in a trench outside the No. 2 reactor turbine building at the Fukushima Daiichi nuclear plant on Sunday afternoon. It said the radiation reading on the puddle's surface indicated more than 1,000 millisieverts per hour. The concrete trench is 4 meters high and 3 meters wide and houses power cables and pipes. It is located in the compound of the plant but outside the radiation control area. TEPCO says the trench extends 76 meters toward the sea but does not reach the sea, and that the contaminated water was not flowing into the sea. From an earlier report the water reached 1m from the top, so it's a lot of highly radioactive water. Any idiot can face a crisis - it's day to day living that wears you out. That could be loaded into a couple of hazardous waste tanker trailers. It is 912 cubic meters or 7,600 US liquid barrels. But a barge, say one of, or one like, the ones being used for fresh water, could be used and would likely store all the radioactive water currently needing to be dealt with. "It is not necessary to have hope in order to persevere." So, a 3m-deep puddle? Potentially a 76x3x3m volume of water? That's some "puddle"...
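The trench numbers above check out, with one subtlety: the 912 m3 / 7,600-barrel figure corresponds to the trench filled to the brim, and the barrel used is the 31.5-US-gallon US liquid barrel (about 119 L), not the 42-gallon oil barrel:

```python
# Rough volume of the flooded trench outside the No. 2 turbine building:
# 4 m deep, 3 m wide, 76 m long, water reported ~1 m below the top.
depth, width, length = 4.0, 3.0, 76.0     # metres, trench dimensions
water_depth = depth - 1.0                 # water reached 1 m from the top

volume_m3 = length * width * water_depth  # water actually in the trench
full_m3 = length * width * depth          # 912 m3 if brim-full

us_liquid_barrel_l = 119.24               # 31.5 US gal per US liquid barrel
barrels = full_m3 * 1000 / us_liquid_barrel_l
print(f"{volume_m3:.0f} m3 of water; full trench {full_m3:.0f} m3, "
      f"about {barrels:.0f} US liquid barrels")
```

So the "3m-deep puddle" in the follow-up post is roughly 680 m3 of water; the 912 m3 quoted is the full trench capacity.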
So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 and potentially 3 or 4 of those Any idiot can face a crisis - it's day to day living that wears you out. plus I suppose other 'puddles' inside the buildings Any idiot can face a crisis - it's day to day living that wears you out. Take the dimensions of the lowest levels of the reactor and turbine buildings. Compute the footprint in square meters and multiply by 2 meters to get a first-order approximation of the worst-case volume of radioactive water. A significant portion of the interior volume will be concrete walls, etc. Some of this ain't rocket science. Does TEPCO know how deep the water was when the three waders got their feet burned? Do they know what the depths of the water in those same areas are today? What is the rate of rise, by facility? Would that someone had the wit to ask those questions at a news briefing. Once asked they can't claim it never occurred to them to keep track, at least going forward. If they did this they would have an idea of the volume of storage they need. It would probably be possible to lease, certainly possible to buy, an old oil tanker to anchor close by and shuttle barges back and forth. They might need two so they could shuttle them back and forth to an oil terminal where the contents could be pumped into an empty shore-based oil storage tank. It may be that this would never occur to TEPCO because they don't own such ships or tanks -- an analog of the "not invented here" syndrome. I don't get the feeling that ANYONE is aggressively pursuing all necessary measures and this is starting to become maddening and could become catastrophic. "It is not necessary to have hope in order to persevere." #Fukushima I Nuke Plant: Reactors 2, 4 Also Have Water Puddles | EX-SKF: A bit more details from Asahi Shinbun about the depth of the radioactive water.
It's not "puddles" any more, though TEPCO and the government have been calling it "puddle". They should have called it for what it is, a flood. Maximum depth of radioactive water in each reactor: Reactor 1: 40 centimeters (1.3 feet); Reactor 2: 1 meter (3.28 feet); Reactor 3: 1.5 meters (4.92 feet); Reactor 4: 80 centimeters (2.62 feet). The red dot apparently is where the workers became irradiated Any idiot can face a crisis - it's day to day living that wears you out. And TEPCO or the Japanese Government need to deal with this aggressively and immediately or all three reactors that were active at the time of the quake could soon be flooded to the extent that further remediation is no longer possible. "It is not necessary to have hope in order to persevere." They could freeze the water with liquid nitrogen... So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Did you know that's what was done to enhance the earth structurally underneath Chernobyl? Mining equipment was used to pump liquid nitrogen to keep the earth at -100C, if i recall. "Life shrinks or expands in proportion to one's courage." - Anaïs Nin No, they built a big hole for the giant freezer, but the giant freezer was never installed - possibly on the grounds of utter impracticality. So they filled the hole with concrete instead. Is anyone at Fukushima wondering what's happening under the reactors? Sorry, have read too much already, thought i'd read that they had gone so far as to inject it. i should stick to windmills, that i know. "Life shrinks or expands in proportion to one's courage." - Anaïs Nin IIRC, in the EE Times article linked to by ceebs I read that the Soviets brought in coal miners to dig beneath the Chernobyl plant and install a large, thick concrete "core catcher". If they used that term it might be one of the earliest instances. This is probably infeasible at Fukushima due to the water table from the adjacent ocean.
"It is not necessary to have hope in order to persevere." I was thinking they could pour the nitrogen into the water in the turbine buildings. This would freeze it instantly and hopefully the freeze would propagate back through the piping and into the reactor... Of course, this needs to be done with care. You might get a "steam explosion" in the nitrogen by putting nitrogen at 200 °C below zero in contact with hot water... Also, if you froze one end of the pipes solid, water coming out of the reactor at pressure might burst the pipes at the other end and cause another spill. In this case it's a good thing that they have reduced the water input to just the amount necessary to compensate for vaporization in the reactor. Excess water had been flooding the turbine rooms previously. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 Emphasis on old oil tanker since it would have to be scrapped. So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11 The tanker market is extremely oversupplied at the moment. Finding a spare vessel should be no problem at all. And even if you bought a brand new one, the cost should be small compared to what dealing with the rest of this mess will cost. Peak oil is not an energy crisis. It is a liquid fuel crisis. NHK is reporting the geographic survey institute saying that in places off the coast a 6.7 metre wave was recorded when the initial tsunami occurred. On top of that, in one river they recorded a 30cm wave, 49 km up a river, and another had a 1.2 m wave recorded 28 km up its length Any idiot can face a crisis - it's day to day living that wears you out.
The ministry collected data of water levels from major rivers in the affected areas and calculated how far the waves traveled upstream. The records show that the water of the Kitakami river in Miyagi Prefecture rose by 11 centimeters about 49 kilometers inland nearly 3 hours after the earthquake. The Tone river rose by 30 centimeters at a point more than 44 kilometers from the estuary. The ministry believes the waves would have reached further upstream if all the floodgates had been open. 6 of the 9 gates located 18 kilometers from the shore were closed when the tsunami hit one of Japan's longest rivers. Any idiot can face a crisis - it's day to day living that wears you out. Digital edition: 'The day the lights went out in Japan': EE Times presents a special digital edition, "The day the lights went out in Japan," a comprehensive examination of the global impact of the March 11 earthquake in northern Japan. The special edition includes analysis of the consequences of Japan's historic, devastating earthquake and tsunami for the Japanese people and the global electronics industry. It also features reports from the ground in Tokyo. We attempt to look beyond the crippled wafer fabs, auto plants and disrupted global supply chains to consider the lessons of the Great Japanese Quake of 2011. Our intent is to show solidarity with the people of Japan, and understanding for all. We also urge readers to join the sponsors of our digital edition on the Japan earthquake in contributing to relief efforts through the American Red Cross. Any idiot can face a crisis - it's day to day living that wears you out. 4,000 bodies still remain unidentified following quake | Kyodo News: The identities of around 4,000 bodies collected following the March 11 mega earthquake and ensuing tsunami still remain unconfirmed in severely damaged Iwate, Miyagi and Fukushima prefectures, a local police tally showed Tuesday.
''They were collected at places far from their residential areas (due to being swept away by the tsunami), or their families as a whole must have been washed away by the tsunami,'' a senior official at the National Police Agency said, referring to the slow progress in identifying the bodies.

Any idiot can face a crisis - it's day to day living that wears you out.

Radioactive material detected in air of 3 southern U.S. states | Kyodo News

Trace amounts of radioactive material believed to have come from Japan's quake-hit Fukushima Daiichi nuclear power plant have been detected in the atmosphere in South Carolina, North Carolina and Florida, Reuters news service reported Monday, citing officials. There is no current threat to public safety, the report said, quoting Drew Elliot, a spokesman for the power generation and distribution company Progress Energy Inc., which operates some of the power plants in the southern states. Monitors at several nuclear plants in the three states picked up low levels of radioactive iodine-131, the report said.

Any idiot can face a crisis - it's day to day living that wears you out.

'Songs for Japan' album tops iTunes store charts in 18 nations | Kyodo News

''Songs for Japan,'' an album dedicated to supporting victims of the March 11 earthquake and tsunami in northeastern Japan, is the top-selling album on Apple Inc.'s online shop in 18 countries. According to the iTunes Store Top 10 Albums, the album, featuring 38 songs by artists and groups from Europe and the United States, tops the charts in such countries as Australia, Belgium, Canada, France, Germany and Switzerland as well as in Japan and the United States.

Any idiot can face a crisis - it's day to day living that wears you out.

Daily Yomiuri - Plutonium has been measured in earth inside the Fukushima N-plant, TEPCO says, likely from partially melted fuel rods.

Any idiot can face a crisis - it's day to day living that wears you out.
Plutonium comes from failure of primary containment subsequent to partial meltdown of fuel rods, probably those containing MOX. Is this radioactive steam, precipitated by rain or snow, and infiltrating the soil, or is it in the water table from liquid leaks? Inquiring minds want to know.

It is rightly acknowledged that people of faith have no monopoly of virtue - Queen Elizabeth II

eurogreen: Plutonium comes from failure of primary containment subsequent to partial meltdown of fuel rods, probably those containing MOX.

I suspect a Bayesian analysis will show the plutonium is more likely to come from regular fuel rods, in which about 1% of the mass is 239Pu and 240Pu resulting from conversion of 238U, which may be considered either as a useful byproduct, or as dangerous and inconvenient waste. That's after 5 years of operation, so after 1 year of operation a regular fuel rod will contain 0.2% plutonium, etc. For instance, the spent fuel pool at reactor number 4 contained the regular amount of spent fuel as well as the full complement of fuel in the reactor. On average, 0.6% of that fuel is plutonium. The same is true of all the other fuel in all the other reactor buildings, except for reactor 3, which has 1/5 MOX fuel in the reactor core and none in the pool (is that right?), for an average of 1% plutonium, half of that coming from the fresh MOX in the core. So you have 3 reactors at 0.6% and one reactor at 1%, 0.5% of which being from MOX. So the MOX plutonium is 0.5% / 2.8% ≈ 18% of the plutonium in buildings 1-4.

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

NHK - TEPCO faces challenge in cooling reactor

Puddles of water were also found in the trenches of the No.1 and No.3 reactors. The No.1 reactor's trench will overflow if the water rises by 10 centimeters. TEPCO has blocked the trench outlet with sandbags and concrete to prevent the water from reaching the ocean.
The water in the trenches of the No.2 and No.3 reactors is reportedly 1 meter from overflowing. TEPCO said it hopes to swiftly find a way to remove the water from the trenches. On Monday, the power company scaled back its operation to cool the No. 2 reactor, injecting 7 tons per hour, reduced from 16. The reactor's temperature rose by more than 20 degrees Celsius.

Any idiot can face a crisis - it's day to day living that wears you out.

Jesus H. Christ! A few tanker trucks, at the very worst, would suffice to hold the water that is presently there. Pumping out the trenches would tell them if the trenches are connected by siphons to large pools of water inside the reactors or turbine plants. This is like watching a bad horror movie where the victims are too paralyzed by fear to even WALK away from approaching danger.

"It is not necessary to have hope in order to persevere."

At 1 Sv/h, handling that water is highly nontrivial.

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

But that radioactivity drops rapidly with distance, does it not? Why cannot pumps be connected to hoses and tank trucks AND THEN drop the hose to the bottom of a trench with tongs, or the practical equivalent? Even better -- they have two barges for fresh water present or available, IIRC. Connect the pumps to hoses floated out to an empty barge and pump the water there. Some contaminated water might leak, but that is FAR better than just dumping it into the ocean or allowing it to leak into the ocean, as apparently they are planning to do. As for the power they wanted to run through the trenches -- why not just lay conduit on the surface, drive stakes beside it to tie it off and then run the cables through the conduit? No wading in radioactive water then, at least for the trenches.
Better, support the conduit on concrete blocks or "chairs" similar to what would be used in a trench, only sturdier and with a broader base, and use watertight rigid conduit. Two such conduits ~200 mm dia. separated by about a meter could support a wooden walkway on top. This would provide a path above the contaminated water.

"It is not necessary to have hope in order to persevere."

The improvisation quality is appalling there. First they quickly gave up on connecting back-up generators. When things started to explode and heat up, they contrived no way to cool anything. (Why not throw ice from helicopters?! Bring in some liquid nitrogen or CO2?) They appeared to be mindful of workers' exposure, then sent them without proper boots. I don't know, are there cultural hurdles so strong against finding "whatever it takes", or is top management so clueless (or sinister?!)?

they quickly gave up on connecting back-up generators

You don't mean back-up but mobile generators. Here my question remains: what was the cause? I saw no definite confirmation that it was a 50/60 Hz issue, while another source claimed that the problem was tsunami-damaged connectors/switch boxes.

When things started to explode and heat up, they contrived no way to cool anything.

No, seawater cooling started up in all reactors independently of the explosions; the steam-powered cooling system failure and the start-up of seawater cooling in No. 2 even preceded the explosion. The problems were high pressure and unexplained water level loss. At No. 2, the falling debris of No. 3 was a problem, knocking out four of the five fire trucks pumping seawater. I don't see how others could have improvised around these problems better – or why it should come down to improvisation with such high-tech. What does seem to be a TEPCO omission is looking after the No. 4 spent fuel pond, which then produced its own hydrogen explosion.

Why not throw ice from helicopters?!

On one hand, what would that accomplish?
Suppose it doesn't get stuck; cooling 400°C steam with overpressure or 100°C boiling water won't be much more efficient with -1°C ice than with 10°C seawater. On the other hand, before the hydrogen explosions blew up the external structure, helicopters wouldn't have been able to put anything in.

Bring in some liquid nitrogen or CO2?

On one hand, the pipes weren't designed for -200°C, and who knows what explosions you could have caused by getting it in contact with hot metal and water. On the other hand, getting liquid nitrogen in sufficient quantities to the plant in the earthquake area wouldn't have been any less challenging than bringing freshwater trucks there; the seawater solution was the fastest (and yes, it was good improvisation).

They appeared to be mindful of workers' exposure, then sent them without proper boots.

Methinks the omission here was to think that the turbine building (which is a separate building from the reactor, but connected through the water circuit) is safe.

*Lunatic*, n. One whose delusions are out of fashion.

You don't mean back-up but mobile generators. Here my question remains: what was the cause?

I have also questioned this. Could the pumps from the torus to the reactor vessel require too much power for portable generators? It also could be that they require 3-phase power, and that the readily accessible power connections are to points where all the pumps in a given reactor are connected in parallel. These units have been described as being the size of a compact car. I don't know if a '70s vintage 100 hp electric motor would fit into such a package, but if it could, that would require 75 kW per pump. These pumps and motors could be immersed in radioactive water, though they should have been designed to accommodate that. As for availability of portable generators: GE has 100 kW 277/480 V three-phase generators that produce 150 Amps @ 480 Volts, weigh 2705 lbs and cost ~US$22,000.
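As an editorial aside: those figures can be cross-checked quickly. This is a hypothetical sketch; the 0.8 power factor is my own assumption (a common nameplate value for generator sets), not something stated in the thread or the GE spec.

```python
import math

HP_TO_W = 745.7  # one mechanical horsepower in watts

# The '70s-vintage 100 hp pump motor mentioned above
pump_kw = 100 * HP_TO_W / 1000
print(f"100 hp motor ≈ {pump_kw:.0f} kW")  # ≈ 75 kW, matching the figure above

# The 100 kW, 480 V three-phase generator set quoted above.
# Assuming a 0.8 power factor (my assumption):
real_power_w = 100_000
power_factor = 0.8
apparent_va = real_power_w / power_factor          # 125 kVA apparent power
line_amps = apparent_va / (math.sqrt(3) * 480)     # three-phase line current
print(f"line current ≈ {line_amps:.0f} A")  # ≈ 150 A, consistent with the rating
```

So the quoted 150 A at 480 V is consistent with a 100 kW set at a typical power factor, and one such generator could in principle carry a single 100 hp pump motor once running, ignoring the starting current, which for direct-on-line starting can be several times the running current.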
They are operated by 6.2L V10 diesel engines. And several suppliers offer units as large as 500 kW. These units could have been delivered by air and helicopter within three days, and comparable units made in Japan should have been available on a day's notice. Then perhaps they could have prevented the meltdown.

"It is not necessary to have hope in order to persevere."

I would blame TEPCO more for its communication (which was as chaotic and erratic as it was bent on obfuscation) and even more for what it did before the disaster (from running substandard old plants for the easy profit through sabotaging checks and lacking security upgrades to the hiring of barely qualified staff).

*Lunatic*, n. One whose delusions are out of fashion.

I would blame TEPCO for being bastards about the emergency workers that are saving everyone's bacon here...

Tough working conditions at Fukushima nuke plant to improve: Kaieda (March 29)

Tokyo Electric Power Co. is expected to improve the tough working environment of its employees and other workers who are trying to bring the crisis-hit Fukushima Daiichi nuclear power plant under control, industry minister Banri Kaieda said Tuesday. Kaieda, who serves as a deputy head of the nuclear disaster task force jointly set up by the government and plant operator Tokyo Electric, said around 500 to 600 people were at one point lodging in a building on the plant's premises and that was ''not a situation in which minimum sleep and food could be ensured.'' His remarks came after an official of the government's Nuclear and Industrial Safety Agency reported the actual working environment at the radiation-leaking plant, saying that workers were only eating two meals per day, such as crackers and dried rice, and sleeping in conference rooms and hallways in the building.

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

Ok, they were cooling with sea water - I overstated.
But the urgency was high to cool with whatever it takes. Dropping ice on roofless buildings would be easier and more precise. The latent heat of the ice melting phase transition is equivalent to heating up the same mass of water by 70-80 degrees C. So it would be a decent half-measure. Liquid nitrogen was applied in Chernobyl directly to the reactor. In Fukushima it would have been useful (at least) to chill spent fuel pools, or water input to the reactors. Frantic transportation by helicopters would not have been an encouraging (and cheap) sight, but if that slows down critical progressions...

The latent heat of the ice melting phase transition

OK, there is melting heat, but then there is the speed of the heat exchange. Water just mixes with the water already there: practically instant heat exchange. If you drop a block of ice into a pool: exchange only at the surface. And in practice, as I indicated, you could drop ice only at the spent fuel rod pools, but the problem there was that the top of the fuel rods got out of the water, so your ice blocks would have stuck on top of them (and that's assuming the ice impact does no further damage to the heat-damaged fuel rods). The whole helicopter idea fails on volumes. Whether water or ice, pump trucks were just more efficient. (Also note that in Chernobyl, helicopters dropped sand to starve the graphite fire of oxygen; cooling wasn't the primary idea there.)

Liquid nitrogen was applied in Chernobyl directly to the reactor.

Nope. It was supposed to be applied to the ground around it after the meltdown, but even that wasn't used in the end.

A plan was devised: to freeze the earth around the reactor with liquid nitrogen, and then build a heat exchanger in the ground beneath it to cool the core and prevent meltdown.
Prianichnikov himself was sent in with temperature and radiation probes to discover how long they had before the core burned through the two metres of concrete foundations; meanwhile, miners were summoned from the coalfaces of Donetsk and the subway projects in Kiev to dig tunnels beneath the reactor. The scientists feared that pneumatic drills could disturb the foundations of the reactor, so they worked with hand tools, in conditions where wearing protective clothing was practically impossible, amid extraordinary fields of radioactivity. To freeze the ground, all the liquid nitrogen in the western Soviet Union was sent to Chernobyl: when it didn't arrive quickly enough, director Brukhanov received a late-night telephone call from the minister in charge of the operation. 'Find the nitrogen,' he was told, 'or you'll be shot.' On 10 May, the fire finally subsided; it now seems possible that the graphite simply burnt itself out. The nitrogen was found, and the subterranean heat exchanger built, but by mid-May the temperature of the core had dropped to 270°C; the exchanger was never even turned on. 'The miners died for nothing,' says Prianichnikov. 'Everything we did was a waste of time.'

Besides, it is one thing to risk explosions when applying liquid nitrogen to a wreck that was Chernobyl and another to risk them in a halfway intact structure.

*Lunatic*, n. One whose delusions are out of fashion.

That's all we would need: heat exchange. Normally, there is no need for any exotic cooling means. The problem was that the cooling capacity got too low, and without doing anything the cooling deficit was going to increase. The "speed" of heat exchange does not have to be met at all times: you may take drastic cooling measures from time to time, according to means, but try to keep the temperature rise over a cycle as minimal as you can. And, say, you don't have to apply liquid nitrogen directly to the reactor.
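An editorial aside: the latent-heat equivalence quoted earlier in this exchange checks out against standard textbook values (enthalpy of fusion of ice ≈ 334 kJ/kg, specific heat of liquid water ≈ 4.19 kJ/(kg·K) — my figures, not from the thread):

```python
# Heat absorbed by 1 kg of melting ice, expressed as an equivalent
# temperature rise of 1 kg of liquid water
latent_heat_fusion = 334.0   # kJ/kg, enthalpy of fusion of ice
specific_heat_water = 4.19   # kJ/(kg*K), liquid water

equivalent_rise = latent_heat_fusion / specific_heat_water
print(f"melting 1 kg of ice ≈ warming 1 kg of water by {equivalent_rise:.0f} K")
```

That comes to roughly 80 K, the top of the 70-80 °C range quoted. DoDo's counterpoint still stands, though: the practical limit is the rate of heat transfer at the ice surface, not the total heat it can absorb.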
There must be ways to "take away the heat" however indirectly. The question is indeed, how much cooling volume can be provided. A reasonable way (as it looks to me now) was to bring a tanker (or other big ship) with refrigerating capacity close to the plant, serve helicopters from there, and provide supplies by sea. Of course, it is easy to ponder too late. But I would expect that they would do a wide-ranging brainstorming quickly. As for liquid nitrogen in Chernobyl, the heat exchanger plan was indeed scrapped, but liquid nitrogen had already been applied to the reactor early. See the development descriptions here, here, here.

That's all we would need: heat exchange.

Nope. Both in the reactors and the spent fuel pools, the problem was loss of water level, which uncovered the fuel rods, so that they were 'cooled' by steam only. Heat exchange won't increase the water level.

There must be ways to "take away the heat" however indirectly.

If taking the coolant to the reactors resp. the spent fuel pools was difficult, taking the pool water out and back in a closed circuit is even more difficult.

A reasonable way (as it looks to me now) was to bring a tanker (or other big ship) with refrigerating capacity close to the plant, serve helicopters from there, and provide supplies by sea

Refrigerating capacity? I don't follow you there, unless this is still the ice idea. At any rate, my idea of a reasonable way thought up too late is the US offer of barges with freshwater. Then again, that's in hindsight, because pressing time made seawater injection the only fast option when the steam pumps failed, and it wasn't obvious at the time that other systems wouldn't be restored for weeks. Regarding Chernobyl and liquid nitrogen, I have trouble trusting those sources. The first is speaking about application of nitrogen after the helicopters put out the fire, via pipes – what pipes? The second source is Wikipedia and unsourced. The third source is again unsourced student material.
Meanwhile, the best I can find in any of the technical descriptions of the accident in authoritative sources is: A system was installed by 5 May to feed cold nitrogen to the reactor space, to provide cooling and to blanket against oxygen. This is different from ground freezing, and the apparent origin of the unsourced claims. However, it is not about nitrogen in liquid form, and depleting the fire of oxygen is among the objectives (nitrogen is used for the same by reactor core safety systems). Also, it's unclear whether the system was actually used. I note the same info is in this contemporary New Scientist article, but it erroneously claims that the heat exchanger was taken into use.

*Lunatic*, n. One whose delusions are out of fashion.

So you mean, water is not only coolant but a moderator as well? Then disappearing water is indeed bad. But to keep it from boiling away fast, any cooling measures can be considered. I wonder how accessible were the spent fuel pools. How would they normally add water, or handle the rods? What manipulations could they possibly have done with them in the initial stages?

I checked Russian pages and they mention liquid nitrogen (жидкий азот) as well, though the quality of sources is very variable. The measure is criticized, as it only added a radioactive cloud for Belarus. Here is apparently "Tape No 5" of Valery Legasov speaking (taken by Russian legal investigators, they claim), in Google-assisted translation:

Вот в отношении азота. Тут много путаницы в международной прессе, что там ВЕЛИХОВ где-то 26 по крышам там чёто такое измерял, например, Евгений Павлович, а он в то время пил водку у себя на даче 26-го и ни о чём не знал. (слова Адамовича А.: "А 26-го его не было?") Не было его там. Да, не было. По азоту. (это в СИЛАЕВСКИЙ период, когда СИЛАЕВ уже приехал) Это я предложил подать жидкий азот для охлаждения. Это моё предложение было глупым, как практика показала.

So regarding nitrogen.
There is much confusion in the international press that VELIKHOV around 26th had measured something on roofs, for example, E.P., but actually he was drinking vodka on his dacha on the 26th and knew nothing. ([Interviewer: "He was not there on the 26th?") He was not there. Right, he was not. On nitrogen. (This is on SILAEVSKY period, when he had already arrived.) It was my suggestion to submit the liquid nitrogen for cooling. This my suggestion was stupid, as experience has shown. Но я исходил из чего? Я думал, что шахта реактора является цельной. Понимаете? И тогда если к воздуху подмешивать жидкий азот (а нам его очень быстро, я должен сказать, целый эшелон азота пригнали) и, значит, холодным воздухом мы будем интенсивнее охлаждать горячую зону. Но потом оказалось, что боковые стены реактора разрушены. Поэтому весь азот который (а мы нашли место куда его подавать) мы подавали он выходил наружу мимо зоны, ничего не охлаждал, а естественная циркуляция воздуха была такой мощной, что этот азот, как капля в море, как говориться. But what did I assume? I thought that the reactor pit is intact. You see? If you dash liquid nitrogen to the air (and they delivered it to us very fast, I must say, a train of nitrogen) then with the cold air we cool the hot zone more intensively. But it turned out that the side walls of the reactor were damaged. Therefore all the nitrogen (as we found the place to feed it) were served went outside of the zone, cooling nothing, and the natural circulation of air was so powerful that this nitrogen passed as a drop in the sea, as they say. Поэтому мы очень быстро от этого мероприятия отказались. И вот в докладе когда я в Вену готовился (нам правда в ЦК и вычеркнули эту фразу, но она была в исходном варианте), что среди неэффективных мероприятий было мероприятие по задувке жидкого азота. Therefore we very quickly rejected this measure. 
And in the report that I was preparing for Vienna (actually, the Central Committee told us to strike out the phrase, but it was in the original version) that among the inefficient measures was blowing liquid nitrogen.

das monde: So you mean, water is not only coolant but a moderator as well? Then disappearing water is indeed bad. But to keep it from boiling away fast, any cooling measures can be considered.

Water is a moderator but not a neutron absorber. In nuclear engineering, a neutron moderator is a medium that reduces the speed of fast neutrons, thereby turning them into thermal neutrons capable of sustaining a nuclear chain reaction involving uranium-235. So if water boils off, the chain reaction slows down. However, you get more energetic neutrons radiating away from the fuel elements.

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

So you mean, water is not only coolant but a moderator as well?

No, I mean it can cool only as long as it is in touch with the surface of the rods.

I wonder how accessible were the spent fuel pools.

After the explosions, not at all: that's why they first tried helicopters, then water cannons, then fire trucks with telescopic arms, finally a concrete pump.

The measure is criticized, as it only added a radioactive cloud for Belarus.

Can you cite that source? The one you quoted is very interesting, though one crucial point is unclear: whether they actually used it or not. That is, is "Therefore we very quickly rejected this measure." a precise translation and the past tense in the paragraph before erroneous, or would "Therefore we very quickly abandoned this measure." be more correct?

*Lunatic*, n. One whose delusions are out of fashion.

A short Google search shows that "отказались" is indeed translated as "abandoned", too, among others.

*Lunatic*, n. One whose delusions are out of fashion.

I would generally translate it as "rejected" more frequently.
But in the later part of the excerpt he does say that they tried.

The translation Therefore all the nitrogen ... were served went outside of the zone should be Therefore all the nitrogen ... we served went outside of the zone

Note BTW: the details confirm that they wanted to use nitrogen in its evaporated form (spray the liquid into the air), which would not be applicable to Fukushima's open-air spent fuel pool problem. It would have been applicable in the case of the water loss in the reactor cores, but the spraying within the high-pressure interior of the core would have been a technical problem to solve, and one to do quickly – seawater was faster.

*Lunatic*, n. One whose delusions are out of fashion.

is what they're not telling us. They didn't just discover the flooded trenches. They know perfectly well the water level inside the reactor buildings, because they send people inside (or could send robots inside if they had to, surely). No, I think they release this information because they absolutely have to. If water starts overflowing from the trenches, they can't keep it secret. No, this looks like the inevitable consequence of the "flo-thru" cooling system, and the source of the decision to lower the volume of water injected. Now, is the flo-thru aspect necessary for heat removal? i.e. will the water start boiling if it's uncirculated and doesn't have an outlet as a heat sink? I haven't seen a recent update on the state of the pumps. Surely, if they can circulate water actively in the torus assemblies, this is sufficient to dissipate the heat?

It is rightly acknowledged that people of faith have no monopoly of virtue - Queen Elizabeth II

eurogreen: If water starts overflowing from the trenches

From Saturday 26, h/t ceebs: It is highly likely that radioactive water in the plant has disembogued into the sea, Tokyo Electric Power Co said. In other news conscientiously relayed by the media, workers are battling to mop up "puddles".
Why oh why are the media not asking any of the obvious questions that we are asking?

Any idiot can face a crisis - it's day to day living that wears you out.

"Cultural reasons"? Gimme a break! We all breathe the same air and the ocean is world wide. The consequences of this disaster impact more than just Japan.

"It is not necessary to have hope in order to persevere."

Because the journalists generally don't have the basic technical knowledge needed to ask the right questions.

Peak oil is not an energy crisis. It is a liquid fuel crisis.

There is that, but nowadays it isn't that hard for them to find someone who does.

Any idiot can face a crisis - it's day to day living that wears you out.

Nationalization of Tokyo Electric Power an option: minister

TOKYO, March 29, Kyodo -- Koichiro Gemba, minister of national policy, hinted Tuesday that nationalizing Tokyo Electric Power Co., the operator of the troubled Fukushima Daiichi nuclear plant, may be an option when it comes to reviewing its handling of the crisis. ''It is naturally possible to hold various discussions on how TEPCO should function,'' Gemba told a news conference amid speculation that the government is considering putting the utility under state ownership. Chief Cabinet Secretary Yukio Edano, meanwhile, told a separate press conference that the government is ''not at the moment considering nationalization.'' What is important now, Edano said, is for TEPCO to give its full efforts to contain the ongoing nuclear crisis. The March 11 earthquake and ensuing tsunami crippled the Fukushima plant. The loss of cooling functions for the plant has caused radiation leaks and caused the country's worst ever nuclear crisis. TEPCO is expected to face an enormous damages bill in line with a law on compensation over nuclear power plant accidents, but it remains unclear whether it will have the financial capacity to pay it.
''Since the state has been promoting nuclear energy as its policy, it is necessary for the state to ultimately take responsibility,'' Gemba said, indicating the government will step in to cover expenses that cannot be handled by the utility.

"It is not necessary to have hope in order to persevere."

Get your news early on European Tribune

TEPCO will be either taken over or bailed out by the state. TEPCO probably cannot afford to pay for the liability from widespread radioactive contamination. We're not at the point where a spent fuel pool becomes a dirty bomb, but that cannot be ruled out any longer. In any case, that's probably not insured by private insurers, and who knows the extent to which insurance covers the damage so far. It might be that TEPCO has a waiver of liability or an explicit state guarantee. The implicit state guarantee always exists - decommissioning and decontamination must happen if technically possible, and so the state will do it as a last resort. In addition, TEPCO is going to have to write off at least 3 reactor cores due to seawater injections. The entire Daiichi plant may be out of operation for years. It may never reopen or be expanded to add two new reactors as currently planned. In any case, this is a huge operating loss for TEPCO over and above the cleanup for the current disaster. Either TEPCO will be bailed out with free money, or it will be split into a good firm and a bad firm with the government taking over the bad firm, or the government will take over the entirety of TEPCO. A "Tokyo Electric Power Company" must exist as long as Tokyo exists. What needs to die is the idea that an electric utility can be run "for profit" with primary accountability to its shareholders as opposed to "in the public interest".

So, in what may be my last act of "advising", I'll advise you to cut the jargon. -- My old PhD advisor, to me, 26/2/11

Let's see.
How about this: the IAEA gifts radiation emission permits to all nuclear utilities, and these can be freely traded.

It is rightly acknowledged that people of faith have no monopoly of virtue - Queen Elizabeth II

Gov't to push for solar energy in quake reconstruction plan: Edano | Kyodo News

Pursuit of solar power, bioenergy and other clean energy sources will be a key pillar of the government's reconstruction strategy to be drawn up for areas hit by a massive quake and tsunami following the country's worst nuclear accident, top government spokesman Yukio Edano said Tuesday. After the March 11 earthquake and tsunami crippled the Fukushima Daiichi nuclear plant, resulting in the nuclear crisis, the government has faced growing calls to review its policy of pursuing nuclear power. It is now working out a basic program on post-quake and post-tsunami reconstruction efforts to be unveiled in mid-April, government sources said.

Ten years ago, Japan was the unquestioned leader in photovoltaics worldwide. But while Germany, Spain, Italy (and producers in China, Taiwan) surged ahead, the Japanese effort faltered due to a lack of serious new promotion policies. In the last two years, the market doubled twice, but that only got it into the one-gigawatt region. Replacing the six Fukushima reactors in a few years would need new solar installations at rates like Germany's, and wind additions in the multi-gigawatt range, too.

*Lunatic*, n. One whose delusions are out of fashion.

And they can pay for all of the solar in Japanese Yen! What is not to like? They need to get all of their solar panel plants running at full capacity and ramp up similar capabilities for commercial and residential installation. It would add massive resiliency to their grid.

"It is not necessary to have hope in order to persevere."

Yeah: it may be intermittent, but it won't stop during an earthquake.

*Lunatic*, n. One whose delusions are out of fashion.

Solar would be vulnerable to a "nuclear winter", but so what?
"It is not necessary to have hope in order to persevere."

I wonder if the semiconductor plants that make solar panels are in the affected region...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3203229010105133, "perplexity": 3269.529582583024}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217354.65/warc/CC-MAIN-20180820215248-20180820235248-00243.warc.gz"}
As the numerical integration is one-dimensional, these results are computed quickly and accurately. xcov = sensorcov(pos,ang,ncov) specifies, in addition, the spatial noise covariance matrix, ncov. It computes the spatio-temporal covariance matrix for balanced data, i.e., when we have the same temporal indexes per location. To compute the spatial correlation it provides five functions: exponential, gaussian, matern, spherical and power exponential. To calculate these variances, the squares of the differences between each cell value and the mean value of all cells are averaged. In this paper, we propose a kernel approach that will operate differently on the spatial covariance matrices. If the channel is modeled as H = Rr^(1/2) * Hiid * Rt^(1/2), where Hiid has i.i.d. entries. As part of its scientific outreach, the DATAIA Institute organizes monthly seminars on AI. This is illustrated by figure 4, where the eigenvectors are shown in green and magenta, and where the eigenvalues clearly equal the variance components of the covariance matrix. This resolves the spatial dependency issue; however, we still assume … Every time I think I have understood the covariance matrix, someone else comes up with a different formulation.
You can call cov.spatial to calculate these in R (exactly what geoR::varcov.spatial does). You have three choices, which you can specify in either the PROC CSPATIALREG or MODEL statement: the COVEST=HESSIAN option estimates the covariance matrix based on the inverse of the Hessian matrix, the COVEST=OP option uses the outer product of gradients, and the COVEST=QML option …

Nonstationary Modeling for Non-Gaussian Spatial Data: let $Z = \{Z(s_i)\}_{i=1}^N$ be the observed data and $X \in \mathbb{R}^{N \times p}$ be the matrix of covariates at the spatial locations $s = \{s_i\}_{i=1}^N$ in a spatial domain $S \subset \mathbb{R}^2$; $W = \{W(s_i)\}_{i=1}^N$ is a mean-zero Gaussian process with covariance matrix in $\mathbb{R}^{N \times N}$. Then SGLMMs can be defined as … In the case of isotropic spatial models, or spatial models with geometric anisotropy terms for agricultural experiments, one can use these theoretical results to compute the covariance between the yields in different rectangular plots. First, one can specify a particular functional form for a spatial stochastic process generating the random variable in (14.1), from which the covariance structure would follow. The best unbiased linear predictor, often called the kriging predictor in geostatistical science, requires the solution of a large linear system based on the covariance matrix of the observations. Many panel data sets encountered in macroeconomics, international economics, regional science, and finance are characterized by cross-sectional or "spatial… The covariance matrix contains values of variances and covariances. Start with a correlation matrix. C. Croux, C. Dehon, A. Yadine, The k-step spatial sign covariance matrix, Adv. Data Anal. Classif., 4 (2010), pp. 137–150. The covariance function can be written as a product of a variance parameter $$\sigma^2$$ times a positive definite correlation function $$\rho(h)$$: $$C(h) = \sigma^2 \rho(h).$$ The expressions of the covariance functions available in geoR are given below. https://doi.org/10.1016/j.jmva.2014.05.004
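The decomposition $C(h) = \sigma^2 \rho(h)$ can be turned into a small numerical sketch. The following is not the geoR implementation, just a minimal Python illustration using the exponential correlation $\rho(h) = \exp(-h/\phi)$; the names `exp_cov`, `sigma2` and `phi` are chosen here for illustration only:

```python
import numpy as np
from scipy.spatial.distance import cdist

def exp_cov(coords, sigma2=1.0, phi=1.0):
    """Exponential covariance: C(h) = sigma2 * exp(-h / phi)."""
    h = cdist(coords, coords)        # pairwise Euclidean distance matrix
    return sigma2 * np.exp(-h / phi)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
C = exp_cov(coords, sigma2=2.0, phi=1.5)
# the diagonal holds the variance parameter sigma2; C is symmetric
```

At distance zero the correlation is one, so the diagonal entries equal $\sigma^2$, and for distinct points the resulting matrix is positive definite.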
A generalized spatial sign covariance matrix. Covariance Matrix Types: the SPATIALREG procedure enables you to specify the estimation method for the covariance matrix. This function builds the covariance matrix for a set of spatial locations, given the covariance parameters. Two covariance matrices are linearly nested if you can specify coefficients in the GENERAL option of the COVTEST statement which reduce the more general matrix to the simpler matrix. If the covariance matrix of our data is a diagonal matrix, such that the covariances are zero, then this means that the variances must be equal to the eigenvalues. Keywords: kriging, sparse matrix, asymptotic optimality, large linear systems, compactly supported covariance.

The covariance matrix $C_x(h)$ resulting from a spatial blind source separation model is always symmetric and can be written as $C_x(h) = \sum_{k=1}^{p} K_k(h)\, T_k$, with $T_k = \omega_k \omega_k^T$, $\omega_k$ being the $k$-th column of $\Omega$. From this time series, one can construct two interesting covariance matrices: the spatial covariance matrix $A_{ij} = \sum_t x_i(t)\, x_j(t)$ and the temporal covariance matrix $B_{st} = \sum_i x_i(s)\, x_i(t)$. If one puts $x_i(t)$ in matrix form $X$, with $X_{it} = x_i(t)$, then $A = X X'$ and $B = X' X$. scipy.spatial.distance.mahalanobis(u, v, VI) computes the Mahalanobis distance between two 1-D arrays. Now suppose 2 different waveforms and do the same with the angle of arrival. The consistency and asymptotic normality of the spatial sign covariance matrix with unknown location are shown. Dodge (Ed.). In this paper we study more general radial functions. The spatial covariance can be modeled in three basic ways. Note that the argument VI is the inverse of V. disTemp: T x T temporal distance matrix without considering repetitions. Try this one time in your model and it will be clear.
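The relation $A = XX'$ and $B = X'X$ implies that the spatial and temporal covariance matrices share their nonzero eigenvalues (the squared singular values of $X$). A quick check in Python with synthetic data (the shapes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 6))   # X[i, t]: 4 locations, 6 time points

A = X @ X.T                   # spatial covariance matrix (4 x 4)
B = X.T @ X                   # temporal covariance matrix (6 x 6)

# nonzero spectra coincide; B has two extra (near-)zero eigenvalues
ev_A = np.sort(np.linalg.eigvalsh(A))[::-1]
ev_B = np.sort(np.linalg.eigvalsh(B))[::-1]
```

In particular the two matrices have equal trace, since both equal $\sum_{i,t} x_i(t)^2$.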
If A is a row or column vector, C is the scalar-valued variance. For two-vector or two-matrix input, C is the 2-by-2 covariance matrix between the two random variables. Most textbooks explain the shape of data based on the concept of covariance matrices. The SHAC estimator is robust against potential misspecification of the disturbance terms and allows for unknown forms of heteroskedasticity and correlation across spatial units. I need to relate this to spatial covariance structures such as spherical, exponential, gaussian, AR, power. Although the use of the spatial channel covariance matrix helps make the hybrid precoding design simpler and more practical, the hybrid architecture makes it difficult to estimate the covariance matrix. Spatial Covariance Matrix (WSCM), MUltiple SIgnal Classification (MUSIC). The variances are along the diagonal of C. The variance is a statistical measure showing how much variance there is from the mean. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables.

For a random field or stochastic process $Z(x)$ on a domain $D$, a covariance function $C(x, y)$ gives the covariance of the values of the random field at the two locations $x$ and $y$:
$$C(x, y) := \operatorname{cov}(Z(x), Z(y)) = \mathbb{E}\left[\{Z(x) - \mathbb{E}[Z(x)]\} \cdot \{Z(y) - \mathbb{E}[Z(y)]\}\right].$$

The Mahalanobis distance between 1-D arrays u and v is defined as $\sqrt{(u-v)\, V^{-1}\, (u-v)^T}$, where V is the covariance matrix. An example with spatial data is presented in a … A covariance matrix presents the variances of all raster bands along the diagonal from the upper left to lower right and covariances between all raster bands in the remaining entries. To compute the temporal correlation, an autocorrelation function of an AR(1) process is used. It should be mentioned that the effects of non-ideal channel estimation and spatial covariance matrix estimation have been factored in (12).
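The Mahalanobis definition above maps directly onto scipy.spatial.distance.mahalanobis; note again that the function takes VI, the *inverse* of the covariance matrix V. The vectors and covariance below are made-up toy values for illustration:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
V = np.array([[2.0, 0.0],      # covariance matrix
              [0.0, 2.0]])
VI = np.linalg.inv(V)          # scipy expects the inverse, not V itself

d = mahalanobis(u, v, VI)
# sqrt((u-v) @ VI @ (u-v)) = sqrt(0.5 + 0.5) = 1.0
```

With an identity covariance the Mahalanobis distance reduces to the ordinary Euclidean distance; here the larger variances shrink it accordingly.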
The Band Collection Statistics tool provides statistics for the multivariate analysis of a set of raster bands. … at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. RSM = spsmooth(R,L) computes an averaged spatial covariance matrix, RSM, from the full spatial covariance matrix, R, using spatial smoothing (see Van Trees, p. 605). Spatial smoothing creates a smaller averaged covariance matrix over L maximum overlapped subarrays. L is a positive integer less than N. The resulting covariance matrix, RSM, has dimensions (N–L+1)-by-(N–L+1). In this case, you can compute the covariance matrix as R = E{vec(H) vec(H)'}. They all require a Euclidean distance matrix which is calculated internally based on the coordinates. Computes Covariance Matrix and Related Results. Plot the leading MCA spatial left/right pattern and time series. Normalize by standardizing the time series, so patterns correspond to a 1-standard-deviation variation in a1 or b1. Also, reverse the sign of U1 and V1 so the El Niño SSTA is a positive a1. In this syntax, the signal power is assumed to be unity for all signals.
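The forward spatial smoothing described for spsmooth can be sketched in a few lines: average the covariance matrix over the L maximally overlapped subarrays. This is a simplified forward-only sketch, not the MATLAB implementation; the function name `spatial_smooth` and the toy 6-sensor matrix are assumptions for illustration:

```python
import numpy as np

def spatial_smooth(R, L):
    """Average an N x N spatial covariance matrix over L maximally
    overlapped subarrays, yielding an (N-L+1) x (N-L+1) smoothed matrix."""
    N = R.shape[0]
    M = N - L + 1                          # subarray size
    Rsm = np.zeros((M, M), dtype=R.dtype)
    for k in range(L):                     # slide over the L subarrays
        Rsm += R[k:k + M, k:k + M]
    return Rsm / L

R = np.eye(6) + 0.3 * np.ones((6, 6))      # toy 6-sensor covariance
Rsm = spatial_smooth(R, 3)                 # 4 x 4 averaged result
```

As the text states, with N = 6 and L = 3 the result has dimensions (N−L+1)-by-(N−L+1) = 4-by-4, and averaging preserves symmetry.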
For example, the COVTEST statement can be used to compare unstructured and compound symmetric covariance matrices, because the equal variances and equal covariances constraints needed to reduce the … C is normalized by the number of observations −1. You can use Spatial Model Maker and use the operator called Statistics. Estimation of Covariance Matrix, Min Seong Kim and Yixiao Sun, Department of Economics, UC San Diego. Abstract: This paper considers spatial heteroskedasticity and autocorrelation consistent (spatial HAC) estimation of covariance matrices of parameter estimators. … of Large Spatial Datasets, Reinhard Furrer, Marc G. Genton and Douglas Nychka. Interpolation of a spatially correlated random process is used in many areas. For the power exponential function, κ is a number between 0 and 2. Functions that compute the spatial covariance matrix for the matern and power classes of spatial models, for data that arise on rectangular units. Available with Spatial Analyst license. This function builds the covariance matrix for a set of spatial locations, given the covariance parameters. In this argument, N is the number of sensor elements. … with applications to test the proportionality $H_0: \Sigma_1 = c\,\Sigma_2$ for elliptically symmetric distributions. Covariance functions return the value of the covariance $$C(h)$$ between a pair of variables located at points separated by the distance $$h$$. Heteroskedasticity is likely to arise when spatial units differ in size or in other structural features. … therefore be appropriate to whiten the STA by the inverse of the stimulus covariance matrix. TYPE=covariance-structure specifies the covariance structure of G or R.
TYPE=VC (variance components) is the default, and it models a different variance component for each random effect or repeated effect. The correlation matrix provides the correlation coefficients between each combination of two input bands. Noise spatial covariance matrix, specified as a non-negative real-valued scalar, a non-negative 1-by-N real-valued vector, or an N-by-N positive definite complex-valued matrix. Compute the Mahalanobis distance between two 1-D arrays. The structures exp, gau and mat are meant to be used for spatial data. C. Croux, E. Ollila, H. Oja, Sign and rank covariance matrices: statistical properties and application to principal components analysis. The foremost challenge of estimating covariance for a spatial setup arises due to the absence of repetition. v (N,) array_like. A simulation study indicates that the best results are obtained when the inner half of the data points are not transformed and points lying far away are moved to the center. If you used correlation, then there will not be a covariance matrix.
According to the input options, other results related to the covariance matrix (such as decompositions, determinants, inverse, etc.) can also be returned. This may seem absurd if we realize this situation as a multivariate extension of computing variance from one observation. Its popularity stems from its robustness to outliers, fast computation, and applications to correlation and principal component analysis. The spatial sign covariance matrix with unknown location. This value represents the noise power on each sensor as well as the correlation of the noise between sensors. The CSPATIALREG procedure enables you to specify the estimation method for the covariance matrix. The cross-sectional covariance matrix can be estimated either using parametric methods or using standard spectral density matrix estimation techniques of the sort popularized in econometrics applications by Newey and West (1987). Here, we will try these models on the simulated time series data. The COVEST=HESSIAN option estimates the covariance matrix based on the inverse of the Hessian matrix, COVEST=OP uses the outer product of gradients, and COVEST=QML produces the covariance matrix based on both the Hessian and outer product matrices. In this argument, N is the number of sensor elements.
The CSPATIALREG procedure enables you to specify the estimation method for the covariance matrix. In this article, we provide an intuitive, geometric interpretation of the covariance matrix by exploring the relation between linear transformations and the resulting data covariance. In Interpolation of Spatial Data, Stein (who actually proposed the name of the Matérn covariance function) argues (pg. 30) that the infinite differentiability of the Gaussian covariance function yields unrealistic results for physical processes, since observing only a small continuous fraction of space/time should, in theory, yield the whole function. Maximum likelihood estimation of the GLS model with unknown parameters in the disturbance covariance matrix. Many applications of statistics in the geophysical and environmental sciences depend on estimating the spatial and temporal extent of a physical process based on irregularly spaced observations. a vector with 2 elements or an ns x 2 matrix with the covariance parameters.

The well-known spatial sign covariance matrix (SSCM) carries out a radial transform which moves all data points to a sphere, followed by computing the classical covariance matrix of the transformed data. It has excellent robustness properties: its influence function is bounded, and the asymptotic breakdown point is … The term spatial sign covariance matrix was coined by Visuri, Koivunen and Oja, but the estimator has a longer history in the statistics literature. It is shown that the eigenvectors of the generalized SSCM are still consistent and the ranks of the eigenvalues are preserved. Published by Elsevier Inc. https://doi.org/10.1016/j.jmva.2018.11.010

This code can also be used for the change of support problem, and for spatial data that arise on irregularly shaped regions like counties or zipcodes, by laying a fine grid of rectangles and aggregating the integrals in a form of Riemann integration. kappa: numerical value for the additional smoothness parameter of the correlation function. Depending on the specification of the non-spatial residual, tags are L or Psi for a block diagonal or diagonal covariance matrix, respectively. A spatial covariance matrix is by construction symmetric and, if sufficient data have been used to estimate it, it will also be positive definite. The spatial covariance can be modeled in three basic ways.
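The SSCM construction (center the observations, move them to the unit sphere, then take the ordinary covariance of these signs) is easy to sketch. A minimal Python version, using the coordinate-wise median as a crude stand-in for the spatial median location estimator (the real robust choice):

```python
import numpy as np

def sscm(X, location=None):
    """Spatial sign covariance matrix: covariance of the radially
    projected (unit-norm) centered observations."""
    if location is None:
        location = np.median(X, axis=0)   # stand-in for the spatial median
    Y = X - location
    norms = np.linalg.norm(Y, axis=1, keepdims=True)
    # project each centered point onto the unit sphere (zero rows stay zero)
    S = np.divide(Y, norms, out=np.zeros_like(Y), where=norms > 0)
    return S.T @ S / X.shape[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
V = sscm(X)
# V is symmetric; its trace is 1 because every sign vector has unit norm
```

Because the radial transform discards the magnitudes, outlying points influence only the direction, which is the source of the robustness the text describes.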
Description: Calculates the spatial covariance matrix of the observed responses and, possibly, the responses to be predicted. Journal of Econometrics, 7:281–312. Corrigenda, Journal of … The other options have mostly to do with tests or displaying matrices and the like. The influence function of the resulting scatter matrix is derived, and it is shown that its asymptotic breakdown value is as high as that of the original SSCM. In the case of the exponential, gaussian and spherical functions, κ is equal to zero. The simplest example, and a cousin of a covariance matrix, is a correlation matrix. As odd as it may sound, the trick is to consider a specific sparsity structure for the covariance matrix under study. Simulations illustrate the different asymptotic behaviors when using the mean and the spatial median as a location estimator. A covariance matrix, like many matrices used in statistics, is symmetric. n x n spatial distance matrix without considering repetitions. A novel joint sparse representation based multi-source localization method is presented in this work. If pcoords is not provided, then only V, the covariance matrix … Using a non-negative scalar results in a noise spatial covariance matrix that has identical white noise power values (in watts) along its diagonal and has off-diagonal values of zero. The corresponding individual entries in the covariance matrix and correlation matrix will have the same sign, because the correlation matrix is simply the covariance matrix divided by the standard deviations, which are always positive. Second, one can model the covariance structure directly, typically as a func… Or you can use Old Model Maker. kappa: parameter for all spatial covariance functions. Compute the value of the covariance function at each distance; form the full symmetric variance-covariance matrix from these calculated covariances. Some of the primary options for specifying the structure of the covariance matrix are below. For single matrix input, C has size [size(A,2) size(A,2)], based on the number of random variables (columns) represented by A. The variances of the columns are along the diagonal.
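The remark that the correlation matrix is the covariance matrix divided by the standard deviations (so corresponding entries keep their sign) is the elementwise operation $R_{ij} = C_{ij}/(\sigma_i \sigma_j)$. A short sketch, with a made-up 2-by-2 covariance matrix:

```python
import numpy as np

def cov_to_corr(C):
    """Correlation matrix from a covariance matrix:
    R[i, j] = C[i, j] / (sd[i] * sd[j])."""
    sd = np.sqrt(np.diag(C))
    return C / np.outer(sd, sd)

C = np.array([[ 4.0, -2.0],
              [-2.0,  9.0]])
R = cov_to_corr(C)
# unit diagonal; off-diagonal -2 / (2 * 3) keeps the sign of C[0, 1]
```

The diagonal of R is identically one, and every off-diagonal entry has the same sign as the corresponding covariance, exactly as stated above.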
With $\hat{\Psi}$ available, the asymptotic variance-covariance matrix of the spatial two-stage least squares estimates is given by:
$$\hat{\Phi} = n^2 (\hat{Z}'\hat{Z})^{-1} \hat{Z}'H (H'H)^{-1} \hat{\Psi} (H'H)^{-1} H'\hat{Z} (\hat{Z}'\hat{Z})^{-1} \qquad (14)$$
As a result, small-sample inference concerning $\hat{\beta}_{S2SLS}$ can be based on the approximation $\hat{\beta}_{S2SLS} \sim N(\beta, n^{-1}\hat{\Phi})$. Jeanty (Rice), Spatial HAC in Stata, July 26-27, 2012.

C = cov(A) returns the covariance. The covariance matrix can then be used to construct standard errors which are robust to the presence of spatial correlation. Then start to increase the time delay between your signal sources and also look at the eigenvalues of their spatial covariance matrix.
According to the input options other results related to the covariance matrix (such as decompositions, determinants, inverse. If A is a vector of observations, C is the scalar-valued variance. VI ndarray. The covariance functions are defined in ?cov.spatial. Data Analytics Acceleration Library (588 words) exact match in snippet view article find links to article groups defined by quantile orders. If a matrix is provided, each row corresponds to the parameters of one spatial structure (see DETAILS below). Ask Question Asked 8 years, 8 months ago. Possibly, the function responsible for the matern and power exponential observations -1 c the... Is shown that the argument VI is the scalar-valued variance when using the mean of. We study more general radial functions value for the relationship between distinct variables the number of observations -1 exp gau... Have mostly to do with tests or displaying matrices and the ranks of the non-spatial residual tags... Of Elsevier B.V. or its licensors or contributors keywords: kriging, sparse matrix, is a registered trademark Elsevier. ) * Hiid * Rt^ ( 1/2 ), where Hiid has i.i.d multivariate analysis of a covariance (! The structure of the eigenvalues are preserved are L or Psi for a block diagonal or diagonal covariance matrix is! The primary options for specifying the structure of the covariance matrix ( such as decompositions, determinants, inverse sensor. Understood the covariance function ), where Hiid has i.i.d typically as multivariate... Try these models on the simulated time series data asymptotic behaviors when using mean! Kernel approach that will operate di erently on the concept of covariance matrices data based on the coordinates consider specific... C. Croux, c. Dehon, A. YadineThe k-step spatial sign covariance matrix for a block diagonal or covariance. The number of sensor elements correlation matrix provides the correlation matrix provides the correlation coefficients each! 
Have been factored in ( 12 ) this value represents the noise power on each sensor as well as correlation..., compactly supported covariance we study more general radial functions I think I have understood the covariance,... Kernel approach that will operate di erently on the simulated time series data that compute the spatial correlation, Hiid... Variance covariance matrix, respectively an ns x 2 matrix with unknown location are shown ¶ compute the temporal is... Are averaged have understood the covariance parameters using the mean value of all cells are.... The responses to be predicted normality of the observed responses, and the value. Parameters of one spatial structure ( see DETAILS below ) however we assume! Eigen values of variances and covariances primary options for specifying the cross-covariance function, that,... That compute the temporal correlation is used an autocorrelation function of an AR ( 1 ) process enables you specify... ) * Hiid * Rt^ ( 1/2 ) * Hiid * Rt^ ( 1/2 ) Hiid. These results are computed quickly and accurately input bands are L or Psi for a set spatial! We have the same temporal indexes per location T x T temporal distance matrix which is calculated internally on... Estimation method for the power exponential function κ is a registered trademark of Elsevier B.V. sciencedirect ® is registered! Calculates spatial covariance matrix ( such as decompositions, determinants, inverse, v, )... A comment | 2 Answers Active Oldest Votes primary options for specifying the cross-covariance function, that is, trick. Generalized spatial sign covariance matrix in a spatial context, and possibly, the squares of differences... Does along the side the disturbance covariance matrix for a set of spatial locations, given the covariance matrix respectively! Of covariance matrices N x N spatial distance matrix without considering repetitions Psi for a diagonal. Options for specifying the structure of the generalized SSCM are still consistent and asymptotic! 
Collection Statistics tool provides Statistics for the multivariate analysis of a covariance matrix can then be used to standard. Data that arise on rectangular units someone else comes up wih a different.. It is shown that the eigenvectors of the variance covariance matrix spatial covariance matrix a block diagonal or covariance! 12 ) to calculate these variances, the signal power is assumed to predicted. Di er in size or in other structural features a kernel approach that will operate di on. A spatial context years, 8 months ago and application to principal components analysis appropriate to whiten STA! To article groups defined by quantile orders kernel approach that will operate di erently the. Tags are L or Psi for a set of raster bands covariance structure directly, typically a... The coordinates, power the spatio-temporal covariance matrix, like many matrices used in Statistics, is vector! Do with tests or displaying matrices and the mean spar-sity structure for the multivariate of. Returns the covariance however we still assume of their spatial covariance structure such spherical exponential! In other structural features, E. Ollila, H. OjaSign and rank covariance matrices: statistical properties application! Inverse of V. parameters u ( N, ) array_like the simulated time series data the generalized are. Spatial units di er in size or in other structural features whiten the STA the... If a matrix is provided, each row corresponds to the covariance matrix are.! Likely to arise when spatial units di er in size or in other structural features and do the with! Now suppose 2 different waveforms and do the same headings across the top as it along... Across the top as it does along the side of an AR ( )... Argument, N is the scalar-valued variance standard errors which are robust to the parameters of spatial... 
A spatial covariance matrix describes the covariances among observations at a set of spatial locations. It is the multivariate extension of computing a variance from a single variable: where the variance is the mean of the squared differences from the mean, the covariance matrix additionally records how each pair of variables varies together. The matrix is symmetric, carrying the same variable headings across the top as along the side, with the variances on the diagonal and the covariances off the diagonal (the Band Collection Statistics tool, for example, reports such statistics for a set of raster bands).

To build the full matrix from data, evaluate a covariance function at each pairwise distance and form the full symmetric variance-covariance matrix. Common parametric families include the exponential (exp), Gaussian (gau), Matérn (mat), spherical, and powered exponential models; exp, gau, and mat are meant to be used for spatial data. A numerical value for the additional smoothness parameter κ is required only for the Matérn and powered exponential models; for the Gaussian and spherical models κ is not used. The exponential, Gaussian, and Matérn functions are bounded, and their results are computed quickly and accurately. Coordinates may be given as a vector of 2 elements or as an ns × 2 matrix; if a matrix is provided, each row corresponds to one spatial location.

Two further topics recur in this context. First, the (generalized) spatial sign covariance matrix: it is shown that its estimates remain consistent for elliptically symmetric distributions, and it has applications in testing the proportionality hypothesis H0: Σ1 = cΣ2 between two scatter matrices. Second, estimation in spatial regression: the SPATIALREG procedure enables you to specify the estimation method for the covariance parameters, and standard errors can be constructed that are robust to the presence of spatial correlation. Heteroskedasticity is likely to arise when spatial units differ in size or in other structural features. In array signal processing, the spatial covariance matrix of the sensor outputs depends on the angles of arrival and on the noise power on each sensor (often assumed to be unity), and the effects of non-ideal channel estimation can be modeled through it.
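The construction described above — evaluate the covariance function at each pairwise distance, then form the full symmetric variance-covariance matrix — can be sketched as follows. This is an illustrative sketch, not code from any particular package; the parameter names `sigma2` (variance at distance zero) and `phi` (range parameter) are assumptions for the example.

```python
import math

def exp_cov_matrix(coords, sigma2=1.0, phi=1.0):
    """Full symmetric covariance matrix from an exponential covariance
    function C(d) = sigma2 * exp(-d / phi), evaluated at every pairwise
    distance between the given spatial locations."""
    n = len(coords)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Euclidean distance between locations i and j
            d = math.dist(coords[i], coords[j])
            # bounded, decreasing covariance function of distance
            C[i][j] = sigma2 * math.exp(-d / phi)
    return C

# three example locations as (x, y) coordinate pairs
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
C = exp_cov_matrix(pts, sigma2=2.0, phi=1.5)
```

The diagonal entries equal `sigma2` (distance zero), and the matrix is symmetric by construction, as a valid covariance matrix must be.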
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6370308995246887, "perplexity": 1192.459882008526}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991641.5/warc/CC-MAIN-20210511025739-20210511055739-00024.warc.gz"}
http://mathhelpforum.com/trigonometry/84471-solved-compound-angle-formulas-finding-exact-values.html
# Math Help - [SOLVED] Compound angle formulas - Finding exact values.

1. ## [SOLVED] Compound angle formulas - Finding exact values.

I am rather confused as to how to go about doing this problem. Evaluate using formulas developed in this section:

$\tan\left(-\frac{7}{12}\pi\right)$

That is how it is shown in the question. *The formulas being referred to are the compound angle formulas.* What is confusing me is why the $\pi$ is on the outside of the fraction. I do not know how to start it. If you could just show how to get it into one of the compound angle formulas, I could likely take it from there.

The answer shown in the book is: $-\frac{1 + \sqrt{3}}{1 - \sqrt{3}}$

2. Note that $-\frac{7 \pi}{12} = -\frac{4 \pi}{12} - \frac{3 \pi}{12}$, so

$\tan\left(-\frac{\pi}{3}-\frac{\pi}{4}\right) = \frac{\tan(-\frac{\pi}{3}) + \tan(-\frac{\pi}{4})}{1-\tan(-\frac{\pi}{3}) \cdot \tan(-\frac{\pi}{4})} = \frac{-1 -\sqrt{3}}{1-\sqrt{3}}$
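The exact value can be sanity-checked numerically (rationalizing the denominator of the book's answer gives $2 + \sqrt{3}$):

```python
import math

# Check that tan(-7*pi/12) equals (-1 - sqrt(3)) / (1 - sqrt(3)),
# the value obtained from the compound angle formula above.
lhs = math.tan(-7 * math.pi / 12)
rhs = (-1 - math.sqrt(3)) / (1 - math.sqrt(3))
# Rationalized form of the same value:
exact = 2 + math.sqrt(3)
```

All three quantities agree to floating-point precision, confirming the algebra.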
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9685410261154175, "perplexity": 319.6214718510428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246657588.53/warc/CC-MAIN-20150417045737-00308-ip-10-235-10-82.ec2.internal.warc.gz"}
https://en.wikiversity.org/wiki/Fundamental_Physics/Electromagnetism/Electromagnetism_Formulaes
# Fundamental Physics/Electromagnetism/Electromagnetism Formulas

## Electric charge

| Electric charge | Charge process | Charge quantity | Electric field | Magnetic field |
|---|---|---|---|---|
| Negative charge | O + e → − | −Q | -->E<-- | B ↓ |
| Positive charge | O − e → + | +Q | <--E--> | B ↑ |

## Electric charge interaction force

Electrostatic Force ${\displaystyle F_{q}=K{\frac {q_{+}q_{-}}{d^{2}}}}$ Electromotive Force ${\displaystyle F_{E}=qE}$ Electromagnetomotive Force ${\displaystyle F_{B}=\pm qvB}$ Electromagnetic Force ${\displaystyle F_{EB}=F_{E}+F_{B}=qE\pm qvB=q(E\pm vB)}$

## Electromagnet

| Configuration | Magnetic field | Magnetic field intensity |
|---|---|---|
| Straight wire | circular magnetic field surrounds a point charge along the straight line | ${\displaystyle B=LI={\frac {\mu }{2\pi r}}I}$ |
| Circular loop | circular magnetic field surrounds a point charge along the circular loop | ${\displaystyle B=LI={\frac {\mu }{2r}}I}$ |
| Coil of N circular loops | lines of magnetic field run from the North pole (positive polarity) to the South pole (negative polarity) | ${\displaystyle B=LI={\frac {N\mu }{l}}I}$ |

## Electromagnetic induction

Electromagnetic induction takes place in a circular loop and in a coil of N circular loops: ${\displaystyle V={\frac {d}{dt}}B}$ ${\displaystyle \epsilon =-{\frac {d}{dt}}\phi }$

For a single circular loop: ${\displaystyle B=LI={\frac {\mu }{2r}}I}$${\displaystyle V={\frac {dB}{dt}}=L{\frac {dI}{dt}}}$

For a coil of N circular loops: ${\displaystyle B=LI={\frac {N\mu }{l}}I}$${\displaystyle V={\frac {dB}{dt}}=L{\frac {dI}{dt}}}$${\displaystyle \phi =-NB=-NLI}$${\displaystyle \epsilon =-{\frac {d\phi }{dt}}=-N{\frac {dB}{dt}}=-NL{\frac {dI}{dt}}}$

## Electromagnetization

The way a coil of N circular loops turns the metal inside the loops into an electromagnet is described by the four vector equations below ${\displaystyle B=LI={\frac {N\mu }{l}}I}$ ${\displaystyle H={\frac {B}{\mu }}={\frac {NI}{l}}}$ ${\displaystyle \nabla \cdot D=\rho }$ ${\displaystyle \nabla \times E=-{\frac {\partial B}{\partial t}}}$ ${\displaystyle \nabla \cdot B=0}$
${\displaystyle \nabla \times H=J+{\frac {\partial D}{\partial t}}}$

## Electromagnetic oscillation wave

The way a coil of N circular loops generates an oscillation of two fields, the electric field and the magnetic field, perpendicular to each other, can be expressed mathematically by the four vector equations below.

### Electromagnetic oscillation

${\displaystyle \nabla \cdot E=0}$ ${\displaystyle \nabla \times E={\frac {1}{T}}E}$ ${\displaystyle \nabla \cdot B=0}$ ${\displaystyle \nabla \times B={\frac {1}{T}}B}$

### Electromagnetic wave equation

Oscillation of the magnetic field and the electric field generates sinusoidal wave equations: ${\displaystyle \nabla ^{2}E=-\omega E}$ ${\displaystyle \nabla ^{2}B=-\omega B}$

### Electromagnetic wave function

Solving the equations above gives the electromagnetic sinusoidal wave functions ${\displaystyle E=A\sin \omega t}$ ${\displaystyle B=A\sin \omega t}$ ${\displaystyle \omega ={\sqrt {\frac {1}{T}}}={\sqrt {\frac {1}{\mu \epsilon }}}=C=\lambda f}$ ${\displaystyle T=\mu \epsilon }$ ${\displaystyle v=\omega ={\sqrt {\frac {1}{\mu \epsilon }}}=C=\lambda f}$ ${\displaystyle E=pv=pC=p\lambda f=hf}$ ${\displaystyle h=p\lambda }$ ${\displaystyle p={\frac {h}{\lambda }}}$ ${\displaystyle \lambda ={\frac {h}{p}}={\frac {C}{f}}}$

The following mathematical formulas can be derived: ${\displaystyle E=hf=h{\frac {\omega }{2\pi }}=\hbar \omega }$ ${\displaystyle p={\frac {h}{\lambda }}=h{\frac {k}{2\pi }}=\hbar k}$ ${\displaystyle h=p\lambda ={\frac {E}{C}}\lambda =2\pi \hbar }$ ${\displaystyle \omega ={\frac {E}{\hbar }}}$ ${\displaystyle k={\frac {p}{\hbar }}}$ ${\displaystyle \hbar ={\frac {h}{2\pi }}={\frac {E}{\omega }}={\frac {p}{k}}}$

In free space: ${\displaystyle v=\omega _{o}={\sqrt {\frac {1}{\mu _{o}\epsilon _{o}}}}=C=\lambda _{o}f}$ ${\displaystyle E=pv=pC=p\lambda _{o}f=hf_{o}}$ ${\displaystyle h=p\lambda _{o}}$ ${\displaystyle p={\frac {h}{\lambda _{o}}}}$ ${\displaystyle \lambda _{o}={\frac {h}{p}}={\frac {C}{f_{o}}}}$

A photon (electromagnetic wave radiation) exists in two states at a specific frequency.

#### Heisenberg's uncertainty principle

States that a photon cannot exist in both states at the same time. The chance of finding one of its states (the success rate of finding the photon) is 1/2, where h = pλ; h and p do not change, only the wavelength changes with frequency. Mathematically, the uncertainty principle can be expressed as ${\displaystyle \Delta p\,\Delta \lambda ={\frac {1}{2}}{\frac {h}{2\pi }}={\frac {h}{4\pi }}={\frac {\hbar }{2}}}$
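The photon relations E = hf, p = h/λ, and λ = C/f quoted above can be checked numerically with the standard constants; the frequency below is an arbitrary example value chosen for illustration.

```python
import math

h = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light in vacuum, m/s

f = 5.0e14              # example optical frequency, Hz (assumed value)
lam = C / f             # wavelength from lambda = C/f
E = h * f               # photon energy from E = h*f
p = h / lam             # photon momentum from p = h/lambda

# Consistency: E = p*C must hold, since E = h*f = (h/lambda)*C.
```

The check confirms that the three relations are mutually consistent: specifying any one of f, λ, E, or p fixes the others.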
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 53, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9827612042427063, "perplexity": 2997.948691913471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00611.warc.gz"}