R & D Projects:

1. Intrusion Detection in Network Slices and Software-Defined Networking. Funded by a research grant from C3i Hub, IIT Kanpur. Dr. Mahendra Pratap Singh (PI), Dr. Alwyn R Pais (Co-PI), and Dr. Radhika B S (Co-PI) received Rs. 36.33 Lakhs. 2024-2026
2. Automatic Early Detection of Lung Cancer from LDCT Images using Deep Learning Neural Networks. Sponsored by SERB, Govt. of India. PI: Prof. Annappa and Dr. Jeny Rajan, at the cost of Rs. 30 Lakhs. 14 March 2023
3. Logical Correctness for Batteryless Internet of Things. Sponsored by the SERB-SRG Scheme, Govt. of India. PI: Dr. Biswajit R Bhowmik, at the cost of Rs. 20,01,090. 2022-2024
4. Restricted Proper Edge Colorings of Graphs. Sponsored by Mathematical Research Impact Centric Support (MATRICS), SERB, DST, at the cost of Rs. 2.2 Lakhs. 2020-2023
5. Design and Implementation of Multi-Attribute Void-Aware Routing Algorithm for Software-Defined Underwater Acoustic Modems. Sponsored by SERB. PI: Dr. Beerappa Rama Chandavarkar, at the cost of Rs. 44 Lakhs. 2019-2022
6. Multi-Graph based Anomaly Detection Model for Social Network Analysis using Machine Learning. Sponsored by DST. PI: Dr. M. Venkatesan, at the cost of Rs. 19.72 Lakhs. 2019-2022
7. Speaker Recognition System for Kannada Language in Emotional Environment. Sponsored by DST. PI: Dr. Shashidhar G Koolagudi, at the cost of Rs. 40 Lakhs. 2019-2022
8. CAMP 81: Prototype of a Reliable ICN Router using Non-Volatile Memory. Sponsored by the NITK Alumni '81 batch. PI: Dr. Mohit P Tahiliani, Co-PI: Dr. Basavaraj Talawar, at the cost of Rs. 1 Lakh. 2019-2021
9. CP-ABE Scheme with Decryption Keys of Constant Size using ECC with Expensive Threshold Access. Sponsored by DST. PI: Dr. Alwyn Roshan Pais, Co-PIs: Dr. P. Santhi Thilagam and Mr. Mahendra Pratap Singh, at the cost of Rs. 31.12 Lakhs. 2018-2021
10. Automatic Detection and Quantification of Focal Cortical Dysplasia Regions from Magnetic Resonance Brain Images using Machine Learning Techniques. Sponsored by DST (CSRI). PI: Dr. Jeny Rajan, at the cost of Rs. 33.09 Lakhs. 2018-2021
11. Quantitative Understanding of Energy in NFV Frameworks (QUEEN). Sponsored by Intel Technology India Pvt. Ltd. PI: Dr. Mohit P Tahiliani, Co-PI: Dr. Basavaraj Talawar, at the cost of Rs. 48 Lakhs.
12. Characterization and Identification of Dialects in Kannada Language. Sponsored by DST-Science & Engineering Research Board (SERB). PI: Dr. Shashidhar G. Koolagudi, at the cost of Rs. 35 Lakhs. 2017-2020
13. Information Security Education and Awareness, Phase II. Sponsored by DIT, MCIT. PI: Dr. Alwyn Roshan Pais, Co-PI: Dr. P. Santhi Thilagam, at the cost of Rs. 2.7 crore (approx.). 2015-2020
14. Retinal Cysts Identification and Quantification from Low-SNR Optical Coherence Tomography Scans using Image Processing Techniques. Sponsored by DST (SERB EMR grant). PI: Dr. Jeny Rajan, Co-PIs: Dr. Shashidhar G Koolagudi and Dr. Abhishek Kothari, at the cost of Rs. 33.5 Lakhs (approx.). 2017-2019
15. Development of a Tool for Detecting Application-Layer Distributed Denial of Service Attacks on Web Applications. Sponsored by MEITY, Government of India. PI: Dr. P. Santhi Thilagam, at the cost of Rs. 29.78 Lakhs. 2017-2019
16. An Automatic System for Identification of Phonological Processes in Children of Age Two and a Half to Six and a Half Years. Sponsored by DST. PI: Dr. Shashidhar G. Koolagudi, Co-PI: Prof. Venkat Raja, at the cost of Rs. 30.00 Lakhs. 2016-2019
17. Design of a Modular FPGA-Accelerated Chip Multiprocessor Architecture Simulator. Sponsored by DST. PI: Dr. Basavaraj Talawar, at the cost of Rs. 26.9 Lakhs. 2016-2019
{"url":"https://cse.nitk.ac.in/research/development","timestamp":"2024-11-10T11:23:04Z","content_type":"application/xhtml+xml","content_length":"43345","record_id":"<urn:uuid:bd1da50f-aca4-4893-a9a1-64a52c4de53d>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00020.warc.gz"}
Cite as: Ioannis Anagnostides, Christoph Lenzen, Bernhard Haeupler, Goran Zuzic, and Themis Gouleakis. Almost Universally Optimal Distributed Laplacian Solvers via Low-Congestion Shortcuts. In 36th International Symposium on Distributed Computing (DISC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 246, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik.

@InProceedings{anagnostides_et_al:LIPIcs.DISC.2022.6,
  author =    {Anagnostides, Ioannis and Lenzen, Christoph and Haeupler, Bernhard and Zuzic, Goran and Gouleakis, Themis},
  title =     {{Almost Universally Optimal Distributed Laplacian Solvers via Low-Congestion Shortcuts}},
  booktitle = {36th International Symposium on Distributed Computing (DISC 2022)},
  pages =     {6:1--6:20},
  series =    {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =      {978-3-95977-255-6},
  ISSN =      {1868-8969},
  year =      {2022},
  volume =    {246},
  editor =    {Scheideler, Christian},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =   {Dagstuhl, Germany},
  URL =       {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2022.6},
  URN =       {urn:nbn:de:0030-drops-171978},
  doi =       {10.4230/LIPIcs.DISC.2022.6},
  annote =    {Keywords: Distributed algorithms, Laplacian solvers, low-congestion shortcuts}
}
{"url":"https://drops.dagstuhl.de/search/documents?author=Lenzen,%20Christoph","timestamp":"2024-11-14T17:24:11Z","content_type":"text/html","content_length":"118447","record_id":"<urn:uuid:2319fae7-b8d4-428a-8e08-faaef7a44036>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00761.warc.gz"}
Algebra seminars, 2021-22 - School of Mathematics

Defining an affine partition algebra
Maud De Visscher, City, University of London
Tuesday 28 September 2021, 15:00-16:00, Lecture Theatre C, Watson Building
The partition algebra was originally defined independently by Martin and Jones in the 1990s. It is a diagram algebra and satisfies a double centraliser property with the symmetric group. In this talk, I will define an affine version of the partition algebra by generators and relations and describe some of its properties. I will also relate it to the affine partition category recently defined by Brundan and Vargas. This is joint work with Samuel Creedon.

Highest weight categories in the modular representation theory of Lie algebras
Matthew Westaway, University of Birmingham
Tuesday 5 October 2021, 15:00-16:00, Lecture Theatre C, Watson Building
Highest weight categories are ubiquitous in Lie-theoretic representation theory. They are of particular importance in the study of representations of Lie algebras in characteristic zero, and of algebraic groups in all characteristics. For Lie algebras in positive characteristic, however, the story is more complicated. The ideas of highest weight theory are still present, but a slightly different framework is needed to capture all their nuances. This talk will explain this in more detail, utilising ideas from a 2018 paper of Brundan and Stroppel. The work is joint with Simon.

𝔰𝔩₂-triples in classical Lie algebras over fields of positive characteristic
Rachel Pengelly, University of Birmingham
Tuesday 12 October 2021, 15:00-16:00, Lecture Theatre C, Watson Building
Let K be an algebraically closed field. Given three elements of some Lie algebra over K, we say that these elements form an 𝔰𝔩₂-triple if they generate a subalgebra which is a homomorphic image of 𝔰𝔩₂(K).
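For reference, an 𝔰𝔩₂-triple (e, h, f) is a triple of elements satisfying the defining relations of the standard basis of 𝔰𝔩₂(K):

```latex
[h, e] = 2e, \qquad [h, f] = -2f, \qquad [e, f] = h.
```

Any triple satisfying these relations spans a subalgebra that is a homomorphic image of 𝔰𝔩₂(K).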
In characteristic 0, the Jacobson-Morozov theorem provides a bijection between the orbits of nilpotent elements of the Lie algebra and the orbits of 𝔰𝔩₂-triples. In this talk I will discuss the progress made in extending this result to fields of characteristic p. In particular, I will focus on the results in classical Lie algebras, which can be found as subsets of 𝔤𝔩ₙ(K).

Enumerating transitive groups, bounding generator numbers, and complexity of algorithms in Computational Group Theory
Derek Holt, University of Warwick
Tuesday 19 October 2021, 15:00-16:00, Lecture Theatre C, Watson Building
Computer databases that contain information on various types of groups can play a vital role in research. Examples include finite groups up to order 2000, primitive permutation groups up to degree 4095, and transitive permutation groups, recently extended to degree 48. In the first part of the talk, we provide some details on the recent successful lengthy computer calculations involved in the enumeration of the 195,826,352 transitive groups of degree 48 (i.e. conjugacy classes in the symmetric group). For a finitely generated group G, let d(G) be the smallest number of elements required to generate G. In the second part of the talk, we survey results bounding d(G) for various types of finite permutation and matrix groups of a given degree, and describe how knowledge of the transitive groups of degree 48 can be used to improve a result of Gareth Tracey bounding d(G) for transitive permutation groups. Finally, we describe recent results obtained jointly with Gareth, which bound d(G) log |G|, and are partly motivated by attempts to estimate the complexity of algorithms to compute the automorphism group of G.
Unipotent representations in the local Langlands correspondence
Beth Romano, University of Oxford
Tuesday 26 October 2021, 15:00-16:00, Lecture Theatre C, Watson Building
The local Langlands correspondence (LLC) is a kaleidoscope of conjectures relating local Galois theory, complex Lie theory, and representations of p-adic groups. This talk will give an introduction to the part of the LLC involving unipotent representations. Reducing modulo p, we can move from representations of p-adic groups to representations of finite reductive groups, which have a rich structure developed by Deligne-Lusztig. I will talk about joint work with Anne-Marie Aubert and Dan Ciubotaru in which we lift some of this structure to p-adic groups. I will not assume previous familiarity with these topics; instead I'll give an introduction to these ideas via examples.

Maximal irredundant base size and relational complexity
Veronica Kelsey, University of Manchester
Tuesday 2 November 2021, 15:00-16:00, Lecture Theatre C, Watson Building
We begin with definitions, examples and motivation for various numerical invariants of permutation groups, such as base size, maximal irredundant base size and relational complexity. We then give upper bounds on these numerical invariants for certain families of groups.

On relational complexity and other statistics for permutation groups
Nick Gill, Open University
Tuesday 9 November 2021, 15:00-16:00, Lecture Theatre C, Watson Building
I will discuss some recent work studying various statistics to do with finite permutation groups. These include b(G), minimum base size, and RC(G), relational complexity. I will focus mainly on open questions concerning these statistics but will also discuss various bits of joint work with Hudson, Liebeck, Loda and Spiga.
Decomposition of spin representations of symmetric groups in characteristic 2
Lucia Morotti, Hannover University
Tuesday 16 November 2021, 15:00-16:00
Any representation of a double cover S̃ₙ of a symmetric group can also be viewed as a representation of Sₙ when reduced to characteristic 2. However, not much is known about the corresponding decomposition matrices. For example, while the decomposition numbers of Specht modules indexed by 2-part partitions are known, the decomposition numbers of spin irreducible modules indexed by 2-part partitions are still mostly unknown, with in most cases only multiplicities of maximal composition factors (under a certain ordering of the modular irreducible representations) being known. In this talk I will characterise the irreducible representations that appear when reducing 2-part spin representations to characteristic 2 and describe part of the corresponding rows of the decomposition matrices.

Structure properties of nilpotent elements in simple Lie superalgebras
Leyu Han, University of Birmingham
Tuesday 23 November 2021, 15:00-16:00, Lecture Theatre C, Watson Building
Lie superalgebras, which can be viewed as a generalization of Lie algebras incorporating a ℤ₂-grading, have become a topic of intense research interest in both mathematics and physics since the mid-1970s. In this talk, I will mainly focus on introducing some background on Lie superalgebras, for example the classification of simple Lie superalgebras, nilpotent orbits, and so on. I will also give examples of Lie superalgebras which are simple. Then I will mention my research on the centralizer of nilpotent elements in simple Lie superalgebras and its centre, and state my main result on the above structures.

Class 2 nilpotent and twisted Heisenberg groups
Dávid Szabó, Rényi Institute, Budapest
Tuesday 30 November 2021, 15:00-16:00
Every finitely generated nilpotent group G of class at most 2 can be obtained from 2-generated such groups using central and subdirect products.
As a corollary, G embeds into a generalisation of the 3×3 Heisenberg matrix group with entries coming from suitable abelian groups depending on G. In this talk, we present the key ideas of these statements and briefly mention how they emerged from investigating the so-called Jordan property of various transformation groups.

The dual approach to Coxeter and Artin groups
Barbara Baumeister, Bielefeld University
Tuesday 7 December 2021, 15:00-16:00
Independently, Brady and Watt as well as Bessis, Digne and Michel started to study Coxeter systems (W,S) and Artin groups by replacing the simple system S by the set of all reflections. In particular, this provides new presentations for the Artin groups of spherical type. I will give an introduction to this fascinating world. I will also present a slight modification of the new concept.

Statistical topological data analysis: some musings about networks and applications
Ralf Koehl, Giessen University
Tuesday 14 December 2021, 15:00-16:00
I will start with an overview of how linear algebra (in particular eigenvalue techniques) helps with the understanding of networks. Then I mention random walks (which for me is a combination of linear algebra with limit arguments). Then I go to the core topic of the talk: persistent homology, starting with plenty of examples. Then I mention how a group theorist can end up working with networks. And finally I explain how a pure mathematician can train themselves for applying methods from network theory by studying properties of Riemannian manifolds via approximations.

The quaternionic × dihedral group of order 32 quotient singularity is also a quiver variety, as are its 81 crepant resolutions
Travis Schedler, Imperial College London
Wednesday 2 February 2022, 15:30-16:30
I will consider the order-32 central product of the quaternionic and dihedral groups of order eight, which naturally acts via symplectic four-by-four matrices.
The quotient is a fascinating singular cone which was predicted in 2015 by physicists to be isomorphic to a quite different object, a Nakajima quiver variety. We will prove this, using basic representation theory and geometry. This allows us to give a new description of all 81 crepant resolutions of the singularity, which are all given as hyperpentagon spaces (a hyperkähler version of the moduli of pentagons in ℝ³). Moving beyond this, we prove that all crepant resolutions of the analogous quiver cone for the n-pointed star are also hyperpolygon spaces. For example, there are precisely 1684 hyperhexagon spaces. The count uses the combinatorics of hyperplane arrangements.

Cluster structures for Grassmannians
Karin Baur, University of Leeds
Wednesday 9 February 2022, 15:30-16:30
The category of Cohen-Macaulay modules over a quotient of a preprojective algebra is a cluster category associated to the coordinate ring of the Grassmannian Gr(k,n). We study this category and describe some of its indecomposable modules. We also explain how one can associate frieze patterns to them.

Groups, languages, and automata
Marialaura Noce, University of Göttingen
Wednesday 16 February 2022, 15:30-16:30
In this talk we will give an introduction to automata groups, explaining the connections between the latter, groups of automorphisms of rooted trees, and formal languages. Then we present examples, important recent developments, and open problems.

Generation versus invariable generation
Daniele Garzoni, Tel Aviv University
Wednesday 23 February 2022, 15:30-16:30
I will first define the concept of invariable generation of finite groups, and give some motivation. There are some striking differences with respect to the usual generation. We will experience this by taking some known results for the usual generation and seeing what happens in the invariable setting. We will discuss minimal generating sets, generating graphs, random generation, and we will see some open questions.
We will mainly consider groups at the extremes: soluble groups, at one end; nonabelian simple groups, at the other end.

Skew-power series over prime rings (joint with William Woods)
Adam Jones, University of Manchester
Wednesday 9 March 2022, 15:30-16:30
Let R be a ring carrying an automorphism σ and a σ-derivation δ. We are interested in the skew-power series ring R[[x;σ,δ]], in the cases when it is well defined. Specifically, we want to prove analogues of properties of the well-studied skew-polynomial ring R[x;σ,δ] for the skew-power series case. We will focus on the question: if R is a prime, Noetherian ring, is R[[x;σ,δ]] also prime? We will partially answer this question in the case where R carries an appropriate topology such that (σ,δ) are continuous, focusing particularly on the case where δ=σ-1.

Irreducible Representations of Rational Cherednik Algebras in Positive Characteristic
Martina Balagovic, University of Newcastle
Wednesday 16 March 2022, 15:30-16:30, 310 Watson
Rational Cherednik algebras are a class of associative non-commutative infinite-dimensional algebras depending on a reflection group and several parameters. In this talk I will consider such an algebra over a field of positive characteristic and its category O, explain how it differs from the corresponding category over the complex numbers, and talk about some methods for finding explicit descriptions of irreducible representations in this category in terms of singular vectors and characters. The first half of the talk (setup) is old joint work with Harrison Chen, and the second half (type A computations) is recent joint work with Jordan Barnes.

The Orbit Method for Complex Groups
Lucas Mason-Brown, University of Oxford
Wednesday 23 March 2022, 15:30-16:30, Venue to be confirmed
The classification of all irreducible unitary representations of a reductive Lie group G is one of the fundamental unsolved problems in representation theory.
In the 1960s, Kostant and Kirillov proved that the irreducible unitary representations of a *solvable* Lie group G are (approximately) classified by the co-adjoint orbits of G. In the 1980s, David Vogan conjectured that a version of this result should hold for semisimple Lie groups. This set of theorems (in the solvable case) and conjectures (in the reductive case) is referred to as the "Orbit Method". In recent joint work with Ivan Losev and Dmytro Matvieievskyi, we define an orbit method correspondence for complex reductive algebraic groups (regarded as real groups by restriction of scalars). I will report on this work and discuss possible extensions to the case of real groups.

A new family of symplectic singularities
Paul Levy, Lancaster University
Wednesday 30 March 2022, 14:30-15:30, Lecture Theatre B, Watson Building
Symplectic singularities were defined by Beauville more than 20 years ago. The main classes of known examples are symplectic quotient singularities ℂ^2n/Γ (where Γ is a finite subgroup of the symplectic group) and (normalisations of) nilpotent orbit closures. Until very recently, it was suspected that these exhausted all possible isolated symplectic singularities. In this talk, I will explain three markedly different constructions of a completely new family of isolated symplectic singularities χₙ (n ≥ 5): as partial resolutions of quotient singularities for the dihedral group of order 2n; as deformations arising via the corresponding Calogero-Moser space; and in the universal cover of the nilpotent cone of 𝔤𝔩ₙ. The special case n=5 had earlier appeared in relation to a particular Slodowy slice in type E₈.

Powerfully nilpotent, solvable and simple groups
Gunnar Traustason, University of Bath
Wednesday 27 April 2022, 15:30-16:30, Lecture Theatre B, Watson Building
In this talk we discuss a special subclass of powerful groups called powerfully nilpotent groups. These are finite p-groups that possess a central series of a special kind.
We will describe some structure theory and a 'classification' in terms of an ancestry tree and powerful coclass. One can view powerfully nilpotent groups as the powerful analogue of nilpotent groups. There is likewise a natural powerful analogue of solvable groups, "powerfully solvable groups", that we will also discuss briefly. For a special situation one can also introduce "powerfully simple groups".

Frobenius algebras and fractional Calabi-Yau categories
Joseph Grant, University of East Anglia
Wednesday 11 May 2022, 15:30-16:30, Lecture Theatre B, Watson Building
Given a quiver we consider two algebras: its path algebra and its preprojective algebra. If the quiver is Dynkin, i.e. its underlying graph is a simply laced Dynkin diagram, then both algebras have nice properties: the derived category of the path algebra is fractionally Calabi-Yau, and the preprojective algebra is Frobenius with a Nakayama automorphism of finite order. One can show that, if stated carefully, these properties are equivalent. I will give an introduction to the concepts above and, time permitting, will describe some of the ingredients of the proof of this equivalence.
{"url":"https://www.birmingham.ac.uk/research/activity/mathematics/algebra/past-algebra-seminars/algebra-seminars-2021-22","timestamp":"2024-11-03T09:42:54Z","content_type":"text/html","content_length":"51454","record_id":"<urn:uuid:9d56d920-5111-4820-9f08-b7d34705656d>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00287.warc.gz"}
The data set represents the numbers of movies that a sample of 20 people watched in a year.
(a) Construct a frequency distribution for the data set using six classes. Include class limits, midpoints, boundaries, frequencies, relative frequencies, and cumulative frequencies.
(b) Display the data using a frequency histogram and a frequency polygon on the same axes.
(c) Display the data using a relative frequency histogram.
(d) Describe the shape of the distribution as symmetric, uniform, skewed left, skewed right, or none of these.
(e) Display the data using an ogive.
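Part (a) can be sketched in Python. Since the textbook's 20 data values are not reproduced in this excerpt, the `movies` list below is hypothetical sample data, not the original data set:

```python
# Build a six-class frequency distribution with limits, midpoints,
# boundaries, frequencies, relative frequencies, and cumulative frequencies.
import math

# Hypothetical data: movies watched in a year by 20 people.
movies = [2, 0, 5, 12, 8, 3, 19, 7, 4, 10, 1, 6, 15, 9, 2, 11, 0, 4, 8, 13]
k = 6  # number of classes
width = math.ceil((max(movies) - min(movies) + 1) / k)  # class width

rows, cum = [], 0
for i in range(k):
    lower = min(movies) + i * width
    upper = lower + width - 1
    freq = sum(lower <= x <= upper for x in movies)
    cum += freq
    rows.append({
        "limits": (lower, upper),
        "midpoint": (lower + upper) / 2,
        "boundaries": (lower - 0.5, upper + 0.5),
        "frequency": freq,
        "relative": freq / len(movies),
        "cumulative": cum,
    })

for r in rows:
    print(r)
```

Parts (b), (c) and (e) then amount to plotting the `frequency`, `relative` and `cumulative` columns against the class boundaries.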
{"url":"https://www.solutioninn.com/study-help/elementary-statistics-picturing/the-data-set-represents-the-numbers-of-movies-that-a-778493","timestamp":"2024-11-14T22:26:57Z","content_type":"text/html","content_length":"83761","record_id":"<urn:uuid:2eaeb480-84c1-4468-b679-2d1df44a87e5>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00713.warc.gz"}
MS Excel | ICTug.com

MS Excel is a spreadsheet application that lets you create, organize and analyze numerical and statistical data. MS Excel lets you perform calculations with the help of inbuilt functions. You can also create charts, budgets, etc. with Excel.

In this tutorial, you will learn about the Microsoft Office Excel window, its features, and how they work. By the end of this tutorial, you should be able to interact and work with the Excel window and its features.

In this tutorial, you will learn about cell addresses. You will learn how to use cells, read cell addresses, and perform other tasks. By the end of this tutorial, you should be able to work with MS Excel cell addresses to complete your Excel tasks.

In this tutorial, you will learn how to deal with rows and columns in your worksheet. By the end of this tutorial, you should be able to work with the rows and columns contained in your worksheet to accomplish your tasks.

In this tutorial, you will learn how to hide rows and columns in a given worksheet. You will also learn how to unhide the hidden rows in a given worksheet. By the end of this tutorial, you should be able to hide and unhide rows and columns in a given worksheet.

In this tutorial, you will learn how to format cells in a given worksheet. You will learn several ways to format your worksheet. By the end of this tutorial, you should be able to format cells in your worksheet using different formatting tools.

In this tutorial, you will learn about the number formats in an Excel file. You will also learn how to use number formats to control value display in a worksheet. By the end of this tutorial, you should be able to work with number formats to control the display of values in your worksheet.

This tutorial will let you learn how to deal with number formats in cells of a given worksheet. By the end of this tutorial, you should be able to format cells with dates and percentages and also be able to deal with such cells.
In this tutorial, you will learn how to insert separators in cells containing numeric data. By the end of this tutorial, you should be able to insert automatic separators in your numeric data without the need to type them.

In this tutorial, you will learn how to use the autofill feature to speed up data entry in your worksheet. By the end of the tutorial, you should be able to work with autofill to quickly enter a series of data without typing. Watch this tutorial for details!

In this tutorial, you will learn about conditional formatting. By the end of this tutorial, you should be able to use this feature to apply specific formatting to cells.

In this tutorial, you will learn about different simple formulas to perform calculations. By the end of this tutorial, you should be able to work with the MS Excel basic formulas to perform calculations.

In this tutorial, you will learn about complex formulas with relative cell references. By the end of this tutorial, you should be able to use complex formulas while working with relative cell references.

In this tutorial, you will learn about complex formulas with absolute cell references. By the end of this tutorial, you should be able to use complex formulas while working with absolute cell references.

In this tutorial, you will learn how to use the Insert Function assistant in MS Excel to help you insert functions in your worksheet. By the end of this tutorial, you should be able to use the Insert Function assistant to insert functions in your worksheet.

In this tutorial, you will learn how to lock rows and columns contained in your worksheet. By the end of this tutorial, you should be able to lock the rows and columns in your worksheet. Watch this tutorial to discover why we need to lock rows and columns.

In this tutorial, you will learn how to print worksheets.
You will learn more about the print preview and other settings before you print your document. By the end of the tutorial, you should be able to print your worksheet data and make some important settings as well.

This tutorial is about the Print Titles command feature in MS Excel. In this tutorial, you will learn how to use this feature to print sheets with specific row headings or column headings on each page. By the end of the tutorial, you should be able to print your sheet(s) with each sheet carrying its titles (headings).

In this tutorial, you will learn how to display formulas in a worksheet. By the end of the tutorial, you should be able to display the formulas used to perform calculations in your worksheet.

In this tutorial, you will learn how to effectively manage and work with your worksheets. By the end of this tutorial, you should be able to effectively manage your worksheets.

In this tutorial, you will learn how to sort and filter data contained in your worksheet. By the end of this tutorial, you should be able to sort and filter data contained in your worksheet.

In this tutorial, you will learn how to use the Insert Function assistant to manage functions. The focus will be on several functions such as SUM, MIN, MAX, AVERAGE, COUNT, COUNTBLANK, COUNTIF, etc. By the end of this tutorial, you should be able to use all these functions in your worksheet.

In this tutorial, you will learn how to use the RANK function for grading purposes.

In this tutorial, you will learn about the IF function to perform logical comparisons between values. By the end of the tutorial, you should be able to use the IF function for logical comparisons.

In this tutorial, you will learn how to use the VLOOKUP function in your worksheet. So, by the end of this tutorial, you should be able to use this function to manage your data.
In this tutorial, you will learn how to create and work with charts in worksheets. By the end of this tutorial, you should be able to create charts and work with them in the worksheet containing your data.
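The IF and VLOOKUP behaviour described in these tutorials can be illustrated outside Excel. Below is a minimal Python sketch; the function names `excel_if` and `vlookup` are ours, and the lookup mirrors exact-match VLOOKUP (range_lookup = FALSE):

```python
# Pure-Python sketches of the Excel IF and exact-match VLOOKUP behaviour.
def excel_if(condition, value_if_true, value_if_false):
    """Mimics =IF(condition, value_if_true, value_if_false)."""
    return value_if_true if condition else value_if_false

def vlookup(value, table, col_index):
    """Mimics exact-match =VLOOKUP(value, table, col_index, FALSE):
    find `value` in the first column of `table` and return the entry in
    column `col_index` (1-based, as in Excel); '#N/A' if not found."""
    for row in table:
        if row[0] == value:
            return row[col_index - 1]
    return "#N/A"

# Hypothetical grading table, in the spirit of the RANK/IF tutorials above.
grades = [("alice", 78), ("bob", 52), ("carol", 91)]
score = vlookup("bob", grades, 2)             # -> 52
print(excel_if(score >= 60, "pass", "fail"))  # prints "fail"
```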
{"url":"https://www.ictug.com/index.php/node/18","timestamp":"2024-11-08T11:18:46Z","content_type":"text/html","content_length":"126382","record_id":"<urn:uuid:a8a7f3d6-128f-4979-b43c-9bb3b09cbc16>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00542.warc.gz"}
Analysis and Key Parameter Optimization Design of Leningrader Seal Performance
Wuhan Univ. J. Nat. Sci., Volume 29, Number 2, April 2024, Page(s) 177-192
DOI: https://doi.org/10.1051/wujns/2024292177
Published online 14 May 2024
Wuhan University Journal of Natural Sciences, 2024, Vol. 29 No. 2, 177-192
Engineering Technology. CLC number: TB42

^1 College of Mechanical and Electrical Engineering, Lanzhou University of Technology, Lanzhou 730050, Gansu, China
^2 Wenzhou Pump and Valve Engineering Research Institute, Lanzhou University of Technology, Wenzhou 325000, Zhejiang, China
^3 State Key Laboratory of Solid Lubrication, Lanzhou Institute of Chemical Physics, Chinese Academy of Sciences, Lanzhou 730000, Gansu, China

Received: 27 August 2023

In order to improve the performance and service life of the Leningrader seal of the Stirling engine piston rod, interference, preload and friction coefficient were taken as influencing factors, and the response surface method was adopted, with the contact stress of the sealing surface and the von Mises stress of the sealing sleeve as response indices and with the optimization goal of reducing wear and extending life. The three key parameters were analyzed and optimized, the influence of each parameter on the sealing performance and service life was obtained, and the best combination of the three was determined. The results show that the interaction between preload and interference has the greatest impact on contact stress, while the interaction between interference and friction coefficient has the most significant effect on von Mises stress. The optimized parameters can reduce the maximum contact stress and maximum von Mises stress of the sealing sleeve by 26.3% and 20.6%, respectively, under a media pressure of 5-9 MPa.
Test bench verification shows that the leakage of the optimized sealing device over 12 h is reduced by 0.44 cc·min^-1 (1 cc = 1 cm^3). The wear rate of the sealing sleeve is 1.08% before optimization and 0.45% after optimization, indicating that the parameters optimized in this paper are effective.

Key words: Leningrader seal / Stirling engine / performance analysis / optimized design / parameter configuration

Cite this article: YANG Dongya, WANG Xuelin, WANG Feng, et al. Analysis and Key Parameter Optimization Design of Leningrader Seal Performance[J]. Wuhan Univ J of Nat Sci, 2024, 29(2): 177-192.

Biography: YANG Dongya, male, Associate professor, research direction: Stirling machine sealing technology. E-mail: yangdy@lut.edu.cn

Foundation item: Supported by the National Natural Science Foundation of China (51675509) and the Wenzhou Public Welfare Industrial Technology Project (G20170026)

© Wuhan University 2024. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

0 Introduction

The piston rod Leningrader seal is the last seal of the Stirling engine, and its sealing performance directly affects the working efficiency and service life of the engine^[1]. The failure mechanism of the Leningrader seal is jointly determined by working conditions, material characteristics and structural parameters; in particular, high-temperature and high-pressure working conditions aggravate the deformation and wear of the seal, thus affecting its sealing performance, reliability and service life. At present, Leningrader seals still have durability problems, which can lead to incomplete sealing or leaks and thus affect the performance and reliability of the engine. In addition, the seals require high-precision processing and assembly, with correspondingly high production costs.
Furthermore, maintenance and replacement of seals are also costly, increasing the cost of use. Therefore, on the premise of ensuring "zero" leakage of the piston rod seal, optimization design is of great significance for improving the working efficiency of the Stirling engine.
Numerous scholars have studied the performance of piston rod seals. Zhou et al^[2] studied the sealing characteristics of a rubber X-shaped combination seal under high-pressure hydrogen gas, developed an Abaqus subroutine to simulate the sealing performance coupled with hydrogen expansion, and evaluated the applicability of the X-shaped seal ring. Zhang et al^[3] established a finite element model of an O-ring and studied the effects of friction coefficient, precompression amount, and medium pressure on its static and dynamic sealing performance. Zhang et al^[4] established a two-dimensional finite element model of an O-ring using ANSYS software and compared it with experimental results to determine the effects of clearance size changes and fluid pressure transients on the O-ring. Kaushal et al^[5] optimized the shape of the sealing ring structure to enhance the sealing performance of the labyrinth V-shaped dynamic seal; the modified shape improved the sealing performance by 50% compared with the original structure. Hu et al^[6] established a numerical model of V-ring seals based on experimental data and studied the combined effects of geometric dimensions on the sealing performance and application of the sealing ring. Azzi et al^[7] studied the influence of different sealing elements and operating parameters on the frictional force of sealing elements. Luo et al^[8] simulated the frictional behavior of the cylinder seal, combining the analysis results with experimental results to reveal the dynamic changes of the seal caused by friction.
Yakovlev^[9] provided a detailed experimental setup to study the wear of lip seals in frictional contact with rotating shafts, and proposed an empirical dependency to evaluate the life of rubber and polyurethane lip seals according to their degree of wear. Hu et al^[10] designed a combination seal of a tilted pad and a polytetrafluoroethylene slip ring that can compensate for seal wear automatically. Zhang et al^[11] employed response surface methodology to investigate the changes in the maximum contact stress response of rubber cylinders under various factors and levels, with the aim of enhancing the sealing performance of the sealant. Androsovich et al^[12] utilized response surface methodology to generate a geometric parameter function for labyrinth seals, with the objective of improving the operational efficiency and sealing performance of gas turbines; an optimized labyrinth seal was then developed and its features were compared with those of the initial seal. Zhang et al^[13] applied response surface methodology and a multi-objective optimization design approach to optimize and analyze the contact pressure and leakage rate of dynamic seals under hydraulic pressure, and obtained the variation patterns of contact pressure and leakage rate. Liu et al^[14] combined response surface methodology with a Box-Behnken experimental design to obtain functional expressions for the optimization objectives and constraint conditions in terms of the optimization parameters; by means of a particle swarm optimization algorithm, the natural frequency was significantly improved, thereby optimizing the dynamic characteristics of dry gas seals. Jiang et al^[15] aimed to improve the sealing performance of the metal spring C-ring, established an optimization model based on the multi-island genetic algorithm, and investigated the optimization design of key structural parameters.
Cao et al^[16] employed an orthogonal experimental design method to optimize the structural parameters of sealing rings, taking the minimum reduction of the maximum contact pressure between the cap seal and the piston rod and the longest seal life as optimization objectives.
The above studies investigated the sealing performance of seals with different structures in three main aspects: operating parameters, sealing structure, and sealing materials. However, there is little research on Leningrader seals, and regression design methods have not been used for their parameter optimization. For optimizing the sealing parameters, it is crucial to minimize the contact stress and von Mises stress while ensuring "zero" leakage of the seal. Firstly, the distribution of contact stress on the sealing surface and von Mises stress in the sealing sleeve was investigated using the finite element method, and the influence of different sealing parameters, namely assembly interference fit, spring preload force, and sealing surface friction coefficient, on the sealing performance was analyzed. Based on this analysis, the key sealing structural parameters were optimized and configured using regression design methods. Finally, the optimization results were verified for leakage and wear on an experimental platform, providing theoretical guidance for the optimization of the Leningrader sealing structure.
1 Modeling
1.1 Geometric Model
Figure 1 illustrates the structure of the Stirling seal system. The sealing sleeve is commonly made of a PTFE-filled modified composite material, while the locating seat, casing, and brace ring are typically made of brass. Since the geometric structure, constraints, boundary conditions, loads, stresses, and strains of the Leningrad seal are all axisymmetric, the model can be treated as two-dimensional axisymmetric. The contact stress at the sealing interface is denoted by $σ_c$, the maximum contact stress by $σ_{cmax}$, the von Mises stress by $σ_v$, and the maximum von Mises stress by $σ_{vmax}$. Fig.
1 Structural diagram of the Stirling engine sealing system
1.2 Mathematical Model
1.2.1 Interference fit model analysis
In engineering applications, the piston rod and the sealing sleeve are assembled with a base-shaft-system interference fit, which corresponds to a radial compressive stress denoted by $P_r$, as illustrated in Fig. 2.
Fig. 2 Schematic diagram of force analysis of Leningrader parts
The formula for calculating the radial force $P_r$ is given in Eq. (1)^[17], where Δ is half of the interference fit amount between the sealing sleeve and the piston rod, Δ = (d - d_0)/2; d_0 (mm) is the inner diameter of the casing; E (MPa) is the compressive modulus of elasticity of the material; r (mm) is the radius of the piston rod axis, r = 0.5d; s (mm^2) is the minimum cross-sectional area of the sealing sleeve, s = π(d_1^2 - d^2); d_1 (mm) is the minimum outside diameter of the sealing sleeve; and d is the outer diameter of the piston rod.
1.2.2 Preload force model analysis
The forces on the sealing sleeve under the spring preload $P_c$ (MPa) and the radial compression $P_r$ with applied work pressure $P_0$ are shown in Fig. 2, where $P_c = 4kx/[\pi(D_0^2 - d_0^2)]$ and k = 8 N/mm is the spring stiffness coefficient. The equilibrium equation for the force acting on the sealing sleeve in the y-axis direction^[17] is
$\pi (d \cdot R_{a1} + D_1 \cdot R_{a2}) \cdot P_x \, \mathrm{d}y - \frac{\pi}{4}(D_1^2 - d_1^2) \cdot \mathrm{d}P_y = 0$ (2)
where $D_1$ is the maximum outer diameter of the sealing sleeve, and $R_{a1}$ and $R_{a2}$ are the friction coefficients of the contact surfaces of the piston rod and the sealing sleeve, respectively. According to the principle of the combination seal, the relationship between the radial specific pressure $P_x$ and the axial specific pressure $P_y$ is
$P_x = R \cdot P_y$ (3)
Substituting (3) into (2) and solving gives
$\ln P_y = \frac{4R}{d_1}\left[\frac{R_{a1} + R_{a2}(D_1/d_1)}{(D_1/d_1)^2 - 1}\right] y + C$ (4)
Substituting the boundary condition $P_y|_{y=0} = P_0$ into Eq. (4) yields Eq. (5):
$P_y = P_c \cdot \exp\left\{\frac{4Ry}{d_1}\left[\frac{R_{a1} + R_{a2}(D_1/d_1)}{(D_1/d_1)^2 - 1}\right]\right\}$ (6)
$P_x = R \cdot P_c \cdot \exp\left\{\frac{4Ry}{d_1}\left[\frac{R_{a1} + R_{a2}(D_1/d_1)}{(D_1/d_1)^2 - 1}\right]\right\}$ (7)
where y takes the maximum displacement of 2 mm and the pressure transfer coefficient R = 1, so that $P_x = P_y$. The contact stress on the 45° cone is then $P_N$:
$P_N = 2P_c \cdot \exp\left\{\frac{4Ry}{d_1}\left[\frac{R_{a1} + R_{a2}(D_1/d_1)}{(D_1/d_1)^2 - 1}\right]\right\}$ (8)
When the media pressure $P_0$ is considered, after the medium gas is filled at a certain pressure, the sealing sleeve is pressed tightly against the piston rod. The axial force on the sealing sleeve is then $P_y$ and the radial force is $P_x$:
$P_y = (P_c + P_0) \cdot \exp\left\{\frac{4Ry}{d_1}\left[\frac{R_{a1} + R_{a2}(D_1/d_1)}{(D_1/d_1)^2 - 1}\right]\right\}$ (9)
$P_x = R \cdot (P_c + P_0) \cdot \exp\left\{\frac{4Ry}{d_1}\left[\frac{R_{a1} + R_{a2}(D_1/d_1)}{(D_1/d_1)^2 - 1}\right]\right\}$ (10)
$P_N = 2(P_c + P_0) \cdot \exp\left\{\frac{4Ry}{d_1}\left[\frac{R_{a1} + R_{a2}(D_1/d_1)}{(D_1/d_1)^2 - 1}\right]\right\}$ (11)
1.3 Analysis Steps and Basic Assumptions
Engineering experience shows that during installation the Leningrad seal casing is affected by the media pressure, the initial pre-tightening force and the interference fit. Therefore, the entire sealing process is simulated in three load steps during the numerical simulation stage. Step 1: Apply to the sealing sleeve the radial pressure corresponding to the interference fit together with the preloading force, so that the sealing sleeve undergoes precompression, simulating its initial state, as shown in Fig. 3(a). Step 2: As shown in Fig. 3(b), H_2 gas is introduced to apply media pressure to the combination seal. Step 3: After the loads are applied, the piston rod undergoes reciprocating motion along the Y-axis to simulate the dynamic sealing process, as depicted in Fig. 3(c). Fig.
3 Analysis steps of the Leningrader seal
The initial boundary conditions are set as follows: the media pressure is 9 MPa, the preload force is 32.5 MPa, the interference fit amount is 0.12 mm, and the average velocity of the piston rod is 2.5 m/s. The friction coefficient is set to 0.2 for all contact surfaces except that between the piston rod and the sealing casing, which is affected by the oil film. Based on actual working conditions, the following assumptions are made for the Leningrad seal: 1) the pressure variation in the direction of the lubricating oil film thickness is negligible; 2) the influence of oil film curvature is negligible; 3) the lubricating oil is a Newtonian fluid; 4) the viscosity is constant in the direction of the lubricating film thickness.
2 Leningrader Seal Performance Analysis
The maximum contact stress on the sealing surface and the maximum von Mises stress distribution in the sealing sleeve were obtained through finite element analysis, as illustrated in Fig. 4 and Fig. 5, respectively. The contact stress is an indirect indicator of the sealing performance. In theory, leakage can be prevented when the maximum contact stress is greater than the media pressure; moreover, the greater the contact stress, the better the sealing effect^[18]. However, excessive contact stress increases power consumption and aggravates wear, thereby affecting service life. The analysis shows that the maximum contact stress of the Leningrader sealing surface is greater than the medium pressure (Fig. 4), so the sealing effect meets the standard. On this basis, the contact stress should be kept as low as possible.
Fig. 4 Cloud chart of maximum contact stress of the sealing sleeve under positive stroke (a) and return stroke (b)
Fig. 5 Cloud chart of maximum von Mises stress in the sealing sleeve under positive stroke (a) and return stroke (b)
The von Mises stress reflects the overall stress level of the cross-section and is a key parameter for evaluating fatigue failure of the sealing ring.
The greater the von Mises stress, the more prone the material is to stress relaxation and cracking that lead to seal failure, and the shorter the service life^[19]. As shown in Fig. 5, under the imposed boundary conditions, the von Mises stress concentration of the Leningrad seal sleeve is mainly located at the 45° conical surface. Particularly during the return stroke, the stress concentration area spreads to the sealing surface, which can result in cracks and early failure of the sealing surface.
3 Analysis of the Influence of Key Parameters on Sealing Performance
3.1 Analysis of the Influence of Interference Quantity
In the reciprocating movement of the piston rod, the interference fit improves the stability and sealing between the piston rod and the sealing sleeve. However, an excessively large interference fit I will increase the contact stress, equivalent stress and wear of the sealing surface, and reduce the service life of the sealing casing. If the interference fit I is too small and the preload force cannot provide enough radial compression, the sealing structure will not meet the required working conditions, the piston rod and the sealing sleeve cannot fit closely, and the sealing sleeve will be distorted. Therefore, it is essential to study the effect of the interference fit I on the sealing performance. Based on the mechanical analysis, the friction coefficient of the sealing surface was fixed at u = 0.2, and under normal conditions the interference fit varies between 0.1 and 0.15 mm^[20]. The finite element analysis is combined with Eq. (1), and the corresponding change curves of the contact stress at each node of the sealing surface are shown in Fig. 6. During the operation of the sealing device, the piston rod reciprocates and the sealing element wears, resulting in a decrease in radial thickness. The greater the interference fit of the sealing element, the smaller the radial clearance between the sealing surfaces.
Therefore, under the action of media pressure, a larger radial compensation force is generated when the sealing surfaces are in contact, leading to an increase in contact pressure on the sealing surfaces^[21].
Fig. 6 Curves of contact stress on the sealing surface under different interference fits I during the positive stroke (a) and return stroke (b)
Figure 7 shows the von Mises stress values of the sealing sleeve under different interference fits. The contact stress between the sealing surfaces results from the combined effect of external forces and the elastic deformation of the sealing sleeve. The elastic deformation generates internal stress in the sealing sleeve, which affects the magnitude of the von Mises stress. Moreover, as the interference fit increases, the axial length of the sealing sleeve changes more significantly, producing more elastic deformation and internal stress and further increasing the von Mises stress. Therefore, the larger the interference fit, the larger the von Mises stress^[22], which can affect the sealing performance and service life of the sealing sleeve. Considering both the contact pressure and the equivalent stress, an appropriate interference fit must be selected for the specific application to ensure the reliability and safety of the sealing sleeve.
Fig. 7 Effect of different interference fits I on the von Mises stress of the sealing sleeve
3.2 Analysis of the Influence of Spring Preload Force
The preload force of the spring has an essential effect on the contact pressure. As the preload force of the spring increases, the elastic deformation of the spring increases, causing slight changes in the relative position between the sealing surfaces, which can affect the shape and size of the sealing surfaces and thus the sealing performance. Suppose the preload force of the spring is too large.
In this case, the pressure between the sealing surfaces may exceed the bearing limit of the sealing material, causing wear and deformation of the sealing surfaces and thus reducing the sealing performance. If the preload force is too small, the pressure between the sealing surfaces may be insufficient to ensure good sealing performance. Therefore, in the Leningrader seal, the preload force of the spring should be selected reasonably based on the material characteristics of the sealing element and the requirements of the working environment to ensure optimal sealing performance. According to the mechanical analysis, the compression amount of the sealing sleeve spring is between 12 and 14.2 mm. Combined with Eqs. (2)-(11), five preload forces P_c were selected: 29.7, 31.1, 32.5, 34.0, and 35.4 MPa. The corresponding change curves of the contact stress on the sealing surface are shown in Fig. 8. When the seal is subjected to working pressure, the spring produces a greater reaction force due to its increased elastic deformation, and the contact pressure on the sealing surface increases^[23]. In addition, increasing the spring preload force shortens the spring, changes its initial shape, leads to changes in the shape and size of the sealing sleeve, and thus affects the contact stress.
Fig. 8 Curves of contact stress on the sealing surface under different preload forces P_c during the positive stroke (a) and return stroke (b)
Figure 9 shows the curves of the von Mises stress of the sealing sleeve under different preload forces. As the contact stress increases, the deformation of the sealing sleeve also increases, leading to an increase in the von Mises stress. Increasing the preload force of the spring makes the contact stress distribution on the sealing sleeve more uneven, thereby making the von Mises stress more concentrated.
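The preload model of Section 1.2.2 can be checked numerically. The sketch below evaluates the cone contact stress of Eq. (11) for the five preload forces listed above, using the documented values P_0 = 9 MPa, y = 2 mm, R = 1; the sleeve diameters D_1 = 30 mm, d_1 = 24 mm and friction coefficients R_a1 = R_a2 = 0.2 are hypothetical placeholders, not the paper's actual geometry.

```python
import math

def cone_contact_stress(p_c, p_0=9.0, d1=24.0, D1=30.0,
                        ra1=0.2, ra2=0.2, y=2.0, R=1.0):
    """Contact stress P_N on the 45° cone, Eq. (11).
    Pressures in MPa, lengths in mm; D1, d1, ra1, ra2 are
    hypothetical placeholders for illustration."""
    ratio = D1 / d1
    factor = (ra1 + ra2 * ratio) / (ratio ** 2 - 1.0)
    return 2.0 * (p_c + p_0) * math.exp(4.0 * R * factor * y / d1)

preloads = [29.7, 31.1, 32.5, 34.0, 35.4]  # MPa, from Section 3.2
stresses = [cone_contact_stress(pc) for pc in preloads]
# P_N scales linearly with (P_c + P_0): higher preload -> higher contact stress
for pc, pn in zip(preloads, stresses):
    print(f"P_c = {pc:5.1f} MPa -> P_N = {pn:7.2f} MPa")
```

The monotonic increase of P_N with P_c mirrors the trend discussed around Fig. 8; the absolute values depend entirely on the placeholder geometry.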
If the von Mises stress exceeds the bearing limit of the material, it can lead to damage and failure of the sealing sleeve. Therefore, combining the above analysis, the preload force in practical applications needs to be designed and selected reasonably according to the specific situation.
Fig. 9 Effect of different preload forces P_c on the von Mises stress of the sealing sleeve
3.3 Analysis of the Impact of Friction Coefficient
The power consumption and service life of sealing elements are closely related to the quality of the sealing surface, including surface roughness, hardness, and other factors, which are important causes of failure in reciprocating motion seals. The friction coefficient u of the modified polytetrafluoroethylene sealing sleeve lies between 0.1 and 0.4, so five values of u, namely 0.1, 0.15, 0.2, 0.25 and 0.3, were used in the finite element analysis. The variation curves of the contact stress between the sealing surfaces are shown in Fig. 10, with the interference fixed at I = 0.12 mm and the preload force at P_c = 32.5 MPa. The contact stress exhibits a parabolic curve during both the positive and return strokes of the reciprocating motion. When the piston rod reciprocates, the greater the friction coefficient, the more serious the wear and the larger the clearance of the sealing surface. Under the same interference and preload, the smaller the friction coefficient, the greater the contact stress. Figure 11 shows the variation curves of the von Mises stress in the sealing sleeve. From the variation of the curves and the stress differences, it can be seen that the friction coefficient has a considerable influence on the von Mises stress value. Considering the contact stress and von Mises stress together, the friction coefficient of the sealing surfaces needs to be taken into account when designing and selecting seals to ensure their reliability and service life. Fig.
10 Curves of contact stress on the sealing surface under different friction coefficients u during the positive stroke (a) and return stroke (b)
Fig. 11 Effect of different friction coefficients u on the von Mises stress of the sealing sleeve
4 Optimization Design of Sealing Parameters Based on Response Surface Methodology
Optimization of the Stirling engine Leningrader seal parameters must combine two aspects: the maximum contact stress and the maximum von Mises stress. Based on the regression design method, and on the premise of ensuring reliable sealing performance, the reduction of the maximum contact stress on the contact surface between the sealing sleeve and the piston rod and the minimization of the maximum von Mises stress of the sealing sleeve were taken as the response optimization targets, and three factors, assembly interference, spring preload, and sealing surface friction coefficient, were selected for parameter optimization design. Multiple quadratic regression equations were used to fit the functional relationship between the factors and the response values, and the best sealing parameter combination was obtained by analyzing the response surfaces and the ANOVA results.
4.1 Response Surface Analysis with the Maximum Contact Stress as the Index
The design results of the response surface method with the maximum contact stress as the index are shown in Fig. 12, which presents the surface plots of the maximum contact stress of the sealing casing under the pairwise interactions of the parameters. The response surface is steeply inclined, the color change is significant, and the contour plot is clearly elliptical, so the interaction is significant. Therefore, the factor combination that has the greatest influence on the contact stress is preload force and interference fit. The corresponding estimated regression coefficients are shown in Table 1.
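The multiple quadratic regression fit underlying such a response surface can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's experimental design: a full quadratic model in three coded factors (standing in for I, P_c and u) is fitted by least squares, and the known coefficients are recovered exactly because the data are noiseless.

```python
import numpy as np

def quad_features(X):
    """Full quadratic model terms for 3 factors:
    [1, x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3]."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 ** 2, x2 ** 2, x3 ** 2,
                            x1 * x2, x1 * x3, x2 * x3])

rng = np.random.default_rng(0)
# Coded factor levels in [-1, 1]; in an RSM design, -1/+1 would map to the
# low/high levels of I, P_c and u given in Section 3
X = rng.uniform(-1.0, 1.0, size=(60, 3))
beta_true = np.array([5.0, 40.0, 0.8, -12.0, 30.0, 0.01, 8.0, 1.5, -4.0, 0.2])
y = quad_features(X) @ beta_true  # noiseless synthetic response

beta_hat, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
print(np.round(beta_hat, 3))
```

In the paper's workflow, ANOVA P-values for each term (Table 1) would then be computed from the residual variance of such a fit; here only the coefficient estimation step is sketched.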
It can be observed that the model value (P-value) of the maximum contact stress of the sealing surface is 0.001, indicating that the model responds significantly to the response value. Since the lack-of-fit value is greater than 0.05, it is determined that there is no lack of fit in this model.
Fig. 12 Surface diagrams of the contact stress response of the sealing surface with the maximum contact stress as the index under the interactions of preload force P_c, friction coefficient u and interference fit I
The P-value represents the significance of the impact of the three factor parameters on the maximum contact stress in the model: a smaller P-value indicates a greater influence of the corresponding factor on the contact stress. In this model, the individual effects of interference fit, preload force, and friction coefficient on the sealing sleeve are all significant. Regarding the two-factor interactions, the most significant is that of preload force and interference fit, while the P-values of friction coefficient with preload force and of friction coefficient with interference fit are both greater than 0.05, indicating lower significance for the contact stress. Figure 13 shows the residual plots for the maximum contact stress of the sealing surface; the spatial distribution of the model residuals should be uniform or normal. From Fig. 13(a), it can be seen that almost all the residuals lie on the same straight line, i.e., a linear relationship, so the residuals approximately obey the normal distribution. The histogram in Fig. 13(c) is nearly bell-shaped, which conforms to the shape of the normal distribution. In Fig. 13(b) and (d), the fitted values show no "trumpet" or "funnel" shape, proving that the residual plots have no obvious defects, and the observed values arranged in chronological order fluctuate up and down, indicating no systematic relationship between the residuals and either the fitted values or their order.
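The straight-line check of Fig. 13(a) is a normal probability plot: sorted residuals are plotted against theoretical normal quantiles, and a correlation near 1 supports the normality judgment. A stdlib-only sketch (the residual values below are made up for illustration):

```python
import math
from statistics import NormalDist, fmean

def normal_quantile_corr(residuals):
    """Correlation between sorted residuals and theoretical normal
    quantiles (Blom plotting positions); values near 1 indicate the
    residuals lie on a straight line, as in Fig. 13(a)."""
    n = len(residuals)
    r = sorted(residuals)
    nd = NormalDist()
    q = [nd.inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]
    mr, mq = fmean(r), fmean(q)
    cov = sum((a - mr) * (b - mq) for a, b in zip(r, q))
    var_r = sum((a - mr) ** 2 for a in r)
    var_q = sum((b - mq) ** 2 for b in q)
    return cov / math.sqrt(var_r * var_q)

# Hypothetical residuals (MPa) of the kind plotted in Fig. 13
residuals = [-1.2, -0.7, -0.4, -0.1, 0.0, 0.2, 0.5, 0.8, 1.3]
print(round(normal_quantile_corr(residuals), 4))
```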
It is thus judged that the residuals are normal.
Fig. 13 Residual diagrams of the maximum contact stress on the sealing surface
Table 1 Estimated regression coefficients and analysis of variance for the maximum contact stress at the sealing interface
4.2 Response Surface Analysis Based on Maximum von Mises Stress
Analysis of the response surface results using the maximum von Mises stress as the indicator shows that the factor combination with the greatest influence on the von Mises stress is the interference fit and the friction coefficient. The response surface is steeply inclined and the elliptical contours are obvious, as shown in Fig. 14. The corresponding estimated regression coefficients are shown in Table 2. The model value (P-value) of the maximum von Mises stress of the sealing sleeve is reported as 0, which is less than 0.05, indicating that the model responds significantly to the response value. At the same time, the lack-of-fit value for the maximum von Mises stress is greater than 0.05, so no lack of fit is judged to occur.
Fig. 14 Surface diagrams of the contact stress response of the sealing surface with the maximum von Mises stress as the index under the interactions of preload force P_c, friction coefficient u and interference fit I
In this model, the individual effects of all single factors on the sealing casing are highly significant. Among the two-factor interactions, the most significant is u × I, and the P-values of u × P_c and P_c × I are both greater than 0.05, which is consistent with the response surface results.
Fig. 15 Residual plots of the maximum von Mises stress of the sealing casing
Figure 15 shows the residual plots of the maximum von Mises stress of the sealing casing. In Fig.
15(a), all the residuals are almost uniformly distributed along the same straight line, i.e., a linear relationship, and Fig. 15(c) resembles a bell shape, in line with the morphology of the normal distribution. Figures 15(b) and (d) show that the fitted values do not appear "trumpet-shaped", and the observed values arranged in chronological order fluctuate up and down; that is, both the fitted values and the order of the residuals behave normally, and the normal-distribution behavior is good.
Table 2 Estimated regression coefficients and analysis of variance for the maximum von Mises stress of the seal sleeve
4.3 Dual-Objective Collaborative Optimization and Validation
With different target parameters, the improvement results differ significantly. In order to synthesize the design-of-experiment analysis under the maximum contact stress and maximum von Mises stress metrics, 15 sets of optimal solutions were predicted by response surface simulation analysis for the maximum contact stress (Table 3) and the maximum von Mises stress (Table 4), respectively, under the interaction of each parameter. The analysis shows that the optimal parameter values of the 7th group in Table 3 and the 8th group in Table 4 coincide, with a maximum contact stress of 19.7112 MPa, a maximum von Mises stress of 18.9279 MPa, a friction coefficient u of 0.22, an interference fit amount I of 0.113 mm, and a preload force P_c of 29.71 MPa. To confirm the optimality of these parameters, a finite element simulation was conducted with the optimized sealing parameters. The results showed a maximum contact stress of 20.12 MPa and a maximum von Mises stress of 18.95 MPa, close to the predicted values. After optimization, under the same operating conditions, the maximum contact stress and maximum von Mises stress of the sealing ring decreased by 26.3% and 20.6%, respectively, compared with those before optimization.
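A quick back-of-envelope check of these verification figures, using only values quoted in the text; the pre-optimization stresses are inferred here from the reported percentage reductions rather than taken from the paper directly:

```python
# RSM-predicted optimum vs. finite element verification (MPa), Section 4.3
pred_contact, fea_contact = 19.7112, 20.12
pred_vm, fea_vm = 18.9279, 18.95

# Relative deviation of the RSM prediction from the FEA verification
err_contact = abs(fea_contact - pred_contact) / fea_contact * 100  # ~2%
err_vm = abs(fea_vm - pred_vm) / fea_vm * 100                      # ~0.1%
print(f"contact-stress deviation: {err_contact:.2f}%")
print(f"von Mises deviation:      {err_vm:.2f}%")

# Pre-optimization stresses implied by the reported 26.3% and 20.6% reductions
pre_contact = fea_contact / (1 - 0.263)
pre_vm = fea_vm / (1 - 0.206)
print(f"implied pre-optimization stresses: {pre_contact:.2f} MPa, {pre_vm:.2f} MPa")
```

The sub-3% deviation between prediction and simulation is what justifies calling the FEA results "close to the predicted values".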
In order to verify the applicability of the optimized parameters, five working media pressures of 5, 6, 7, 8 and 9 MPa were selected to compare the corresponding maximum contact stress and maximum von Mises stress before and after optimization. As shown in Fig. 16, the optimized contact stress and von Mises stress values are reduced by more than 20% under all five media pressures, which proves that the optimized parameters are effective.
Fig. 16 Comparison of maximum contact stress and maximum von Mises stress before and after optimization under the five media pressures
Table 3 Parameter combinations of the optimal solutions for maximum contact stress
Table 4 Parameter combinations of the optimal solutions for maximum von Mises stress
5 Test Bench Verification
In order to test the effectiveness of the optimized design and the correctness of the performance analysis, this paper uses the piston rod Leningrader sealing experimental device for experimental verification; the principle of the sealing device is shown in Fig. 17. The test device can perform leakage detection tests on the piston rod seal workpiece. The piston rod dynamic sealing device must ensure reliable gas sealing ability, so as to avoid the engine power drop or shutdown caused by medium leakage. Therefore, the test device can simulate Stirling engine working conditions and, on this basis, collect the leaked gas and measure the leakage.
Fig. 17 Schematic diagram of the Leningrader piston rod sealing performance experiment
In the actual operation of the Stirling engine, the medium that leaks past the piston rod sealing device enters the crankcase directly, then passes through a one-way valve into the oil and gas separator for oil-gas separation, and is finally measured by a micro flowmeter. To ensure that the power of the Stirling engine is not reduced by the medium leakage, the crankcase is generally pressurized.
However, the sealing problem between the extended shaft and the housing when the motor is connected to the crankshaft has not been reliably solved, which causes incomplete collection of the leaked medium (hydrogen) and low test accuracy. Therefore, a magnetic drive is used to ensure that all the medium leaking through the rod sealing device is collected in the crankcase without escaping outside. On this basis, the crankcase of the test bench is also filled with medium gas during the test, so as to eliminate the possible mixing between the leaked medium gas and any non-medium gas in the crankcase; using the same medium gas throughout ensures the sensitivity and accuracy of the gas leakage measurement. To address the thermal equilibrium of the system, the measurement starting point of the test bench is set at the moment the system reaches pressure and thermal balance during operation. Experiments show that the system takes ~3.25 h to reach steady state from startup. To ensure the accuracy and reliability of the leakage measurement, the starting point of each measurement is set to 4 h after the start of operation, and the operation time is 12 h. The measurement records before and after optimization are shown in Table 5. The experimental steps are as follows: 1) Open the switch of the hydrogen cylinder (10 MPa)→Open the pressure-reducing valve→Adjust the inlet pressure of the pressure-reducing valve to 10 MPa→Adjust the outlet pressure of the pressure-reducing valve to 7 MPa→Open globe valve 1→Piezometer 1 displays that the input pressure P_0 in the top cavity is constant at 7 MPa→Close globe valve 1 to maintain pressure.
2) Hydrogen gas (P_0) from the top chamber leaks into the piston rod sealing chamber (P_1) through the Leningrader sealing element→The pressure drops from P_0 to P_1→The lubricating oil flows into the oil line through the filter→The oil pump begins to work→The outlet oil pressure of the oil pump rises to 0.1 MPa→The hydraulic control check valve opens (minimum opening pressure 0.1 MPa)→Piezometer 3 shows that the oil pressure of the entire oil circuit is 0.1 MPa→The lubricating oil is divided by the three-way valve block and flows into the nozzle and the crankcase oil inlet, respectively→The split oil circuits lubricate the sealing sleeve and the brace ring, respectively. 3) Start the motor and increase the operating frequency to the maximum→The rotor of the linear motor drives the magnetic coupling to run at maximum speed→The reciprocating speed of the piston rod increases from minimum to maximum→Attach an infrared thermometer to the corresponding position on the outer wall of the cylinder→Monitor and record the real-time temperature of the Leningrader seal. 4) When the temperature measured by the infrared thermometer is constant (the sealed hydrogen gas of the entire device reaches thermal equilibrium)→Record the reading of piezometer 1→It displays the top chamber pressure P_0 (around 7 MPa).
5) The hydrogen gas in the piston rod sealing chamber (P[1]) leaks into the crankcase through the sealing sleeve (P[2])→The pressure drops from P[1] (0.1-8 MPa) to P[2]→Open globe valve2→The oil and gas mixture flows out after being filtered and dried by the oil and gas separator→When the pressure of the dry hydrogen gas flowing through piezometer2 reaches 0.14 MPa→The pneumatic one-way valve (minimum opening pressure difference of 0.035 MPa) opens→The pressure difference between the two ends of the flowmeter is 0.035 MPa (the inlet pressure of the flowmeter is 0.135 MPa, and the outlet pressure is atmospheric)→The flowmeter valve (minimum opening pressure difference of 0.035 MPa) opens→Piezometer2 displays the pressure of the entire gas circuit as 0.035 MPa. 6) When the temperature measured by the infrared thermometer is constant→Piezometer2 begins to record the pressure reading (MPa)→The flow meter begins to count (SCCM, i.e. mL·min^-1 under standard conditions)→Record the instantaneous flow rate and cumulative flow rate of hydrogen gas flowing through the flow meter→When the hydrogen gas has completely leaked out and the reading on piezometer2 is 0, stop counting→Close globe valve2. As shown in Table 5, when the rotational frequency is 10 Hz, the difference in leakage before and after optimization is small and the optimization effect is insignificant, indicating low wear and leakage at the low rotational frequency stage. As the speed increases, the frequency of seal surface wear increases and the wear loss grows, so the difference in leakage before and after optimization increases. At a speed of 50 Hz, with a top chamber pressure of 7 MPa and an oil tank temperature of 49.1 ℃, the maximum difference in leakage is 0.56 cc·min^-1. When the rotational frequency is limited to 30 Hz, the pressure in the top chamber is 7 MPa and the oil tank temperature is 44.2 ℃.
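The bookkeeping in step 6, recording the instantaneous flow in SCCM and accumulating it until piezometer2 reads 0, amounts to integrating the instantaneous flow over time. A minimal sketch of that accumulation, with hypothetical sample readings (the real test bench records its own data):

```python
# Hypothetical instantaneous flow readings (SCCM = mL/min at standard
# conditions), sampled once per minute. The cumulative leakage is the time
# integral of the instantaneous flow, approximated here by the trapezoidal rule.
times_min = [0, 1, 2, 3, 4]
flow_sccm = [0.0, 0.8, 1.2, 0.9, 0.0]

total_ml = sum(
    (flow_sccm[i] + flow_sccm[i + 1]) / 2 * (times_min[i + 1] - times_min[i])
    for i in range(len(times_min) - 1)
)
print(f"cumulative leaked volume ~ {total_ml:.2f} mL")
```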
The variation curves of the leakage volume and the wear volume of the system during 12 h of operation are shown in Fig. 18; the weighing method was used to obtain the wear loss.
Fig. 18 Curve chart of leakage and wear before and after optimization
To ensure weighing accuracy, the test specimens were cleaned and dried before weighing. The experiment was then started, and after completion the above steps were repeated to obtain the wear loss. A comparison of the wear loss before and after optimization is shown in Table 6. When the piston rod starts reciprocating (0-1 h), the large interference fit between the sealing sleeve and the piston rod at assembly makes the radial compression stress much larger than the hydrogen pressure, which ensures there is essentially no leakage on the sealing surface. After 1-2 h of operation of the test rig, wear of the sealing sleeve reduces the radial interference, causing the contact stress on the sealing surface to fall below the hydrogen pressure and resulting in slow leakage. At this stage, the leakage of the optimized sealing structure is greater than that before optimization. After running for 3 h, the wear of the sealing sleeve increases, and the leakage of the optimized sealing structure starts to become less than that before optimization. After running for 6 h, due to the continuous increase in wear, the film thickness ratio between the sealing sleeve and the piston rod surface increases. At this point, the lubrication state of the sealing sleeve changes from mixed lubrication to elastohydrodynamic lubrication, the oil film thickens, and the hydrogen leakage rate slows. When the running time reaches 10 h, the sealing sleeve reaches the stable wear stage under oil lubrication, the check valve discharges the same amount of leaking hydrogen as enters, and the entire crankcase is in equilibrium.
In the experiment, the measured leakage before optimization is 2.1 cc·min^-1 and the optimized leakage is 1.67 cc·min^-1, a difference of 0.44 cc·min^-1. According to Table 6, the wear rate of the sealing sleeve was reduced from 1.08% before optimization to 0.45% after optimization. This confirms the finite element analysis results: the sealing performance and service life of the seal are enhanced under these parameters, and the optimization parameters are effective.
Table 5 Test record of leakage of sealing components before and after optimization
Table 6 Wear amount of the sealing sleeve before and after optimization
6 Conclusion
Taking the maximum contact stress and maximum von Mises stress in the Leningrader seal of the Stirling engine as the performance and service life evaluation indicators of the seal sleeve, a set of optimal parameters was obtained through response surface optimization: interference fit I of 0.113 mm, preload force P[c] of 29.71 MPa, and friction coefficient u of 0.22. After optimization, the maximum contact stress was 20.12 MPa, a decrease of 26.3%, and the maximum von Mises stress was 18.95 MPa, a decrease of 20.5%. The applicability of the optimization parameters was verified at medium pressures of 5-9 MPa, showing that the optimization is effective over this range, with decreases greater than 20%. The experimental results, with leakage and wear as the core indicators, verify the simulation results over 12 h of operation: after optimization, the leakage of the seal structure is 1.67 cc·min^-1, 0.44 cc·min^-1 less than before optimization, and the wear rate is 0.45%, 0.63 percentage points less than before optimization, which proves that the optimization parameters are effective.
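The figures quoted in the conclusion can be cross-checked with simple arithmetic. In this sketch the "before" values are reconstructed from the stated after-values and differences, so a small rounding gap against the measured 2.1 cc·min^-1 is expected:

```python
# Back-of-the-envelope check of the reported leakage and wear figures.
leak_after = 1.67   # cc/min, optimized leakage
leak_saving = 0.44  # cc/min reduction relative to pre-optimization
wear_after = 0.45   # %, optimized wear rate
wear_saving = 0.63  # percentage-point reduction relative to pre-optimization

# Implied pre-optimization values (the paper reports 2.1 cc/min and 1.08%).
leak_before = leak_after + leak_saving
wear_before = wear_after + wear_saving

print(f"implied pre-optimization leakage: {leak_before:.2f} cc/min")
print(f"implied pre-optimization wear rate: {wear_before:.2f} %")
```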
All Tables
Table 1 Regression coefficients and analysis of variance for estimating the maximum contact stress at the sealing interface
Table 2 Estimated regression coefficients and analysis of variance for the maximum von Mises stress of the seal sleeve
Table 3 Parameter combination of an optimal solution for maximum contact stress
Table 4 Parameter combination of an optimal solution for maximum von Mises stress
Table 5 Test record of leakage of sealing components before and after optimization
Table 6 Wear amount of the sealing sleeve before and after optimization
All Figures
Fig. 1 Structural diagram of the Stirling engine sealing system
Fig. 2 Schematic diagram of force analysis of Leningrader parts
Fig. 3 Analysis steps of the Leningrader seal
Fig. 4 Cloud chart of maximum contact stress of sealing sleeve under positive stroke (a) and return stroke (b)
Fig. 5 Cloud chart of maximum von Mises stress in the sealing sleeve under positive stroke (a) and return stroke (b)
Fig. 6 Curve of contact stress on the sealing surface with different interference fit I under positive stroke (a) and return stroke (b)
Fig. 7 Effect of different interference fit I on von Mises stress of sealing sleeve
Fig. 8 Curve of contact stress on the sealing surface with different preload force P[c] under positive stroke (a) and return stroke (b)
Fig. 9 Effect of different preload force P[c] on von Mises stress of sealing sleeve
Fig. 10 Curve of contact stress on the sealing surface with different friction coefficient u under positive stroke (a) and return stroke (b)
Fig. 11 Effect of different friction coefficient u on von Mises stress of sealing sleeve
Fig. 12 Surface diagram of contact stress response of sealing surface with the maximum contact stress as the index under the interaction of preload force P[c], friction coefficient u and interference fit I
Fig. 13 Residual diagram of the maximum contact stress on the sealing surface
Fig. 14 Surface diagram of contact stress response of sealing surface with the maximum von Mises stress as the index under the interaction of preload force P[c], friction coefficient u and interference fit I
Fig. 15 Residual plot of the maximum von Mises stress in the seal sleeve
Fig. 16 Comparison of maximum contact stress and maximum von Mises stress before and after optimization under the five media pressures
Fig. 17 Schematic diagram of the Leningrader piston rod sealing performance experiment
Fig. 18 Curve chart of leakage and wear before and after optimization
Real Harmonic Analysis by Pascal Auscher, Lashi Bandara
Publisher: ANU eView 2012
ISBN-13: 9781921934087
Number of pages: 113
This book presents the material covered in graduate lectures delivered at The Australian National University in 2010. Moving from the classical periodic setting to the real line, then to higher dimensional Euclidean spaces and finally to, nowadays, sets with minimal structures, the theory has reached a high level of applicability.
Download or read it online for free here: Download link (940KB, PDF)
Similar books
Lectures on Potential Theory by M. Brelot (Tata Institute of Fundamental Research). In the following we shall develop some results of the axiomatic approaches to potential theory, principally some convergence theorems; they may be used as fundamental tools and applied to the classical case as we shall indicate sometimes.
Fourier Series and Systems of Differential Equations and Eigenvalue Problems by Leif Mejlbro (BookBoon). This volume gives some guidelines for solving problems in the theories of Fourier series and systems of differential equations and eigenvalue problems. It can be used as a supplement to the textbooks in which one can find all the necessary proofs.
Lectures on Harmonic Analysis by Thomas Wolff (American Mathematical Society). An inside look at the techniques used and developed by the author. The book is based on a graduate course on Fourier analysis he taught at Caltech. It demonstrates how harmonic analysis can provide penetrating insights into deep aspects of analysis.
Chebyshev and Fourier Spectral Methods by John P. Boyd (Dover Publications). The text focuses on use of spectral methods to solve boundary value, eigenvalue, and time-dependent problems, but also covers Hermite, Laguerre, rational Chebyshev, sinc, and spherical harmonic functions, cardinal functions, etc.
[PDF] Principles of Quantum Mechanics as Applied to Chemistry and Chemical Physics by Donald D. Fitts
Category: Quantum Chemistry
Language: English
Format: PDF
Free Download: Available
Title: Principles of Quantum Mechanics as Applied to Chemistry and Chemical Physics
Author: Donald D. Fitts
Edition: Illustrated
Publisher: Cambridge University Press, 1999
ISBN: 9780521658416
Length: 361 pages
Size: 1.49 MB
Book Description: Quantum behavior encompasses a large fraction of modern science and technology, including the laws of chemistry and the properties of crystals, semiconductors, and superfluids. This graduate-level text presents the basic principles of quantum mechanics using modern mathematical techniques and theoretical concepts, such as hermitian operators, Hilbert space, Dirac notation, and ladder operators. The first two chapters serve as an introduction to quantum theory with a discussion of wave motion and Schrödinger's wave mechanics. Coverage then details the fundamental principles of quantum mechanics. Throughout, basic theory is clearly illustrated and applied to the harmonic oscillator, angular momentum, the hydrogen atom, the variation method, perturbation theory, and nuclear motion. This volume is the ideal textbook for beginning graduate students in chemistry, chemical physics, molecular physics and materials science.
Rigidbody RotateAround
How in Unity to implement RotateAround through a Rigidbody?
If your rigidbody is kinematic, you can use MovePosition and MoveRotation. You can add RotateAround to the Rigidbody class pretty easily, but remember to call it in FixedUpdate:

public static class ExtensionMethods
{
    public static void RotateAround(this Rigidbody rb, Vector3 point, Vector3 axis, float angle)
    {
        Quaternion rotation = Quaternion.AngleAxis(angle, axis);
        Vector3 deltaPos = rb.position - point;
        rb.MoveRotation(rotation * rb.rotation);
        rb.MovePosition(point + rotation * deltaPos);
    }
}

As you can see, you calculate the rotation quaternion, and apply it to both the rotation and the vector from point to the rigidbody. For a non-kinematic rigidbody, this is more complicated, as you have to calculate both the velocity and angular velocity. Comment on this question if you need code for that.
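The quaternion trick in the answer above (rotate the offset from the pivot, then add it back to the pivot) is ordinary rotate-about-a-point math. As an illustration independent of Unity, here is the same idea sketched in Python using Rodrigues' rotation formula instead of quaternions; the function name and sample values are purely illustrative:

```python
import math

def rotate_around(position, point, axis, angle_deg):
    """Rotate `position` around `point` about the unit vector `axis`
    by `angle_deg` degrees, mirroring the Rigidbody extension's logic."""
    theta = math.radians(angle_deg)
    kx, ky, kz = axis
    # Offset from the pivot (the `deltaPos` in the C# snippet).
    vx, vy, vz = (p - q for p, q in zip(position, point))
    # Rodrigues' formula: v' = v cos(t) + (k x v) sin(t) + k (k.v)(1 - cos(t))
    c, s = math.cos(theta), math.sin(theta)
    cross = (ky * vz - kz * vy, kz * vx - kx * vz, kx * vy - ky * vx)
    dot = kx * vx + ky * vy + kz * vz
    v_rot = tuple(v * c + cr * s + k * dot * (1 - c)
                  for v, cr, k in zip((vx, vy, vz), cross, (kx, ky, kz)))
    # Translate back relative to the pivot (the `MovePosition` step).
    return tuple(p + v for p, v in zip(point, v_rot))

# 90 degrees about the z-axis: (1, 0, 0) around the origin lands near (0, 1, 0)
print(rotate_around((1, 0, 0), (0, 0, 0), (0, 0, 1), 90))
```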
Excel find growth rate

I need to determine our compounded annual growth rate. Strategy: sales in the fifth year are 6,175/970 higher than in the first year. What is the formula for calculating compound annual growth rate (CAGR), also called the "equivalent rate of return", in Excel? The good news is that you can do these calculations yourself, using Excel to find the CAGR of your current investments.

Growth rate percentage is generally calculated with this formula: =(Last number/First number)^(1/number of intervals)-1, which here is =(D35/C35)^(1/1)-1. To force a positive number, wrap that formula in an ABS function: =ABS((D35/C35)^(1/1)-1).

The way to set this up in Excel is to have all the data in one table, then break out the calculations line by line. For example, deriving the compound annual growth rate of a company's sales over 10 years gives a CAGR of 5.43% for the decade.

The GROWTH function in Excel calculates predicted exponential growth by using existing data: it returns the y-values for a series of new x-values that you specify, using existing x-values and y-values. Its arguments are known_y's (a required set of known Y values used to estimate growth) and known_x's (the provided set of X values).

If you search the web to learn how to calculate a compound growth rate in Excel, you'll likely find instructions for calculating only one type of growth rate. There are different ways of calculating average growth in Excel (e.g. LOGEST, LINEST, lines of best fit, etc.), and some of these will give different results. How to calculate the fitted average growth rate (FAGR) in Excel: the first method relies on values in the AvgGrth column; the second method calculates the monthly growth rate directly. The monthly FAGR can then be converted to an annual rate, and the calculations confirmed.

To calculate a percentage increase, first work out the difference (increase) between the two numbers you are comparing: Increase = New Number - Original Number. If you want to calculate a percentage increase in Excel (i.e. increase a number by a specified percentage), simply multiply the number by 1 + the percentage. A common need in business, and when working with Excel, is calculating the percentage a value changes from one period to another; for example, showing how revenue changed from one quarter of the current year to the same quarter of the previous year is a standard metric reported in business. The year-over-year growth rate shows the percentage change from the past 12 months; investors usually want to see it.

Further examples and notes:
- A country's current population is 100 million with an annual growth rate of 3.5%. If the growth rate remains constant, what will the population be?
- From a chart in Excel I need to automatically calculate what the annual percentage growth rate is of a trend line. Does anyone know how to automate this in
- The RATE function works; you just have to convert the output from per month to per year: =RATE(60,-200,-10000,30000,0)*12.
- To calculate an annualized yield rate, type the following formula into cell F2: =((B2/C2)^(1/D2))^E2-1.
- Compound annual growth rate (CAGR) is a business- and investing-specific term. To calculate the CAGR of revenues over the three-year period spanning the "end" of 2004 to the "end" of 2007: CAGR(0,3) = (13000
- Note that because FRED uses levels and rounded data as published by the source, calculations of percentage changes and/or growth rates in some series may
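The formula this page keeps returning to, =(Last/First)^(1/intervals)-1, can be sanity-checked outside Excel. A quick Python sketch, with sample numbers taken from the 970-to-6,175 example at the top of the page and assuming four yearly intervals:

```python
# CAGR = (last_value / first_value) ** (1 / number_of_intervals) - 1,
# the same formula as Excel's =(Last/First)^(1/n)-1.
def cagr(first_value, last_value, intervals):
    return (last_value / first_value) ** (1 / intervals) - 1

# A value growing from 970 to 6175 over 4 yearly intervals
# (the "fifth year vs. first year" case mentioned above).
rate = cagr(970, 6175, 4)
print(f"CAGR = {rate:.2%}")
```

For instance, a value that grows from 100 to 121 over two intervals has a CAGR of exactly 10%, since 1.1^2 = 1.21.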
seminars - Introductory Panorama of Derived Algebraic Geometry III
Derived algebraic geometry is a generalization of ordinary algebraic geometry, taking homotopical ideas as a new input. It has been known to have surprising applications, from solving classical problems in algebraic topology and number theory to providing new ways of thinking about representation theory and mathematical physics, to name a few. The goal of this lecture series is to give an introductory overview of the subject. Our purpose is not to focus on a specific application, but to explain the philosophy and some of the main ideas of the subject by exposing the audience to as many different characters in the subject as possible, without dealing with too many technical details. Both the level and the topics of the lectures will crucially depend on participants' demand. Familiarity with basics of scheme theory, algebraic topology, and homological algebra would be helpful, if not
GANITH is an algebraic geometry toolkit, used for the computation and visualization of algebraic equations. It also provides the computational mathematics infrastructure for the Shastra toolkits. Example applications for geometric modeling and computer graphics are algebraic curve and surface display, curve-curve intersections, surface-surface intersections, global and local parameterizations, and implicitization. GANITH also incorporates techniques for interpolation and least-squares approximation (multivariate data fitting) with algebraic curves and surfaces. The GANITH toolkit manipulates arbitrary degree polynomials and power series, and can be used to solve a system of algebraic equations. Power series manipulations are used to generate piecewise rational approximations to algebraic curves and surfaces. Arbitrary rational parametric surfaces can be displayed in GANITH, taking care of poles and base points. Animation facilities allow the visualization of entire families of algebraic curves and surfaces. Download information is here.
Software Usage
C. Bajaj, A. Royappa. The GANITH Algebraic Geometry Toolkit. Proceedings: 1st Annual Conference on the Design and Implementation of Symbolic Computation Systems, Lecture Notes in Computer Science, No. 429, Springer-Verlag, (1990), 268-269.
S. Abhyankar, C. Bajaj. Automatic Parameterization of Rational Curves and Surfaces I: Conics and Conicoids. Computer Aided Design, 19, 1, (1987), 11-14. (pdf)
S. Abhyankar, C. Bajaj. Automatic Parameterization of Rational Curves and Surfaces II: Cubics and Cubicoids. Computer Aided Design, 19, 9, (1987), 499-502. (pdf)
S. Abhyankar, C. Bajaj. Automatic Parameterization of Rational Curves and Surfaces III: Algebraic Plane Curves. Computer Aided Geometric Design, 5:4, (1988), 309-321. (pdf)
C. Bajaj, C. Hoffmann, R. Lynch, J. Hopcroft. Tracing Surface Intersections. Computer Aided Geometric Design, 5:4, (1988), 285-307. (pdf)
S. Abhyankar, C. Bajaj. Automatic Parameterization of Rational Curves and Surfaces IV: Algebraic Space Curves. ACM Transactions on Graphics, 8, 4, (1989), 325-334. (pdf)
C. Bajaj, G. Xu. NURBS Approximation of Surface/Surface Intersection Curves. Advances in Computational Mathematics, 2, 1, (1994), 1-21.
C. Bajaj, G. Xu. Piecewise Rational Approximation of Real Algebraic Curves. Journal of Computational Mathematics, vol. 15, no. 1, (1997), 55-71.
C. Bajaj, G. Xu. Spline Approximations of Real Algebraic Surfaces. Journal of Symbolic Computation, Special Issue on Parametric Algebraic Curves and Applications, 23, 2-3, (1997), 315-333. (pdf)
C. Bajaj, R. Holt, A. Netravali. Rational Parameterizations of Nonsingular Real Cubic Surfaces. ACM Transactions on Graphics, 17, 1, (1998), 1-31. (pdf)
C. Bajaj, A. Royappa. Parameterization in Finite Precision. Proceedings: Graphics Interface '92, (1992), Vancouver, Canada, Canadian Information Processing Society, 29-36.
C. Bajaj, G. Xu. Rational Spline Approximations of Real Algebraic Curves and Surfaces. Advances in Computational Mathematics, edited by H.P. Dikshit and C. Michelli, World Scientific Publishing Co., Approximations and Decomposition Series, vol. 4, (1994), 73-85.
C. Bajaj. Geometric Computations with Algebraic Varieties of Bounded Degree. Proceedings: 6th Annual ACM Symposium on Computational Geometry, (1990), Berkeley, California, 148-156. (pdf)
s - Yuchen Wu
« on: February 12, 2023, 11:14:19 PM »
Dear Prof. Ivrii,
I did some homework on heat equation problems to prepare for the quiz, and I found that some of the integrals seem hard to compute. I am wondering whether we need to compute the integral completely correctly to get the full mark on the quiz, or whether just simplifying the heat formula is fine. Thank you.
Recursion in x86 NASM Assembly | Cratecode

Note: this page has been created with the use of AI. Please take caution, and note that the content of this page does not necessarily reflect the opinion of Cratecode.

Recursion might sound like a complicated term, but it's actually a simple concept. Recursion is a way of solving problems by breaking them down into smaller instances of the same problem, and then solving these smaller problems until you reach a base case. When working with recursion in x86 NASM assembly, things might look a bit different than when working with higher-level languages. But fear not, we'll break it down for you!

Understanding Recursion

Recursion is a programming technique where a function calls itself. It's often used to solve problems that can be divided into smaller, identical problems. Imagine having a stack of books, and you want to know the total weight of all of them. You could weigh each book individually and add the weights together, or you could weigh one book, then weigh the rest of the books as a smaller stack, and add those two weights together. This process can be repeated until only one book remains.

Recursion in x86 NASM Assembly

In x86 NASM assembly, recursion is implemented by using the call instruction to invoke a function (also known as a subroutine or procedure). When the function is called, the address of the next instruction is stored on the stack, to be used later by the ret instruction to return to the calling point. The function can then call itself in the same manner. Let's take a look at a simple example of recursion, calculating the factorial of a number (n! = n * (n-1) * (n-2) * ... * 1).
Here's a basic implementation of a recursive factorial function in x86 NASM assembly:

section .text
global _start

_start:
    ; Place the number whose factorial we want to calculate in the EAX register
    mov eax, 5
    ; Call the recursive factorial function
    call factorial
    ; The result will be stored in the EAX register

    ; Exit the program
    mov eax, 1
    xor ebx, ebx
    int 0x80

; Recursive factorial function
factorial:
    ; Base case: if EAX is 1, return 1
    cmp eax, 1
    je .base_case

    ; Recursive case: multiply EAX by the factorial of (EAX - 1)
    push eax            ; Save the current value of EAX on the stack
    dec eax             ; Decrement EAX by 1
    call factorial      ; Call the factorial function recursively

    ; Multiply the result by the original value of EAX
    pop ebx             ; Retrieve the original value of EAX from the stack
    imul eax, ebx       ; Multiply EAX by EBX (result = EAX * EBX)
    ret

.base_case:
    mov eax, 1          ; If EAX is 1, return 1
    ret

This example demonstrates the implementation of recursion in x86 NASM assembly. The factorial function calls itself with a decremented value of EAX until it reaches the base case (EAX = 1). The results of each recursive call are multiplied together to calculate the factorial of the original input number. Now that you've had a taste of recursion in x86 NASM assembly, you'll be well-prepared to tackle more complex recursive problems in your assembly programming journey. Remember, practice makes perfect!

What is recursion and how is it used in x86 NASM assembly language?

Recursion is a programming concept where a function calls itself in order to solve a problem. In x86 NASM assembly language, recursion can be implemented by following the steps of saving the current state, calling the function again with updated parameters, and, finally, restoring the original state before returning the result.
This technique allows you to break down complex problems into smaller, more manageable tasks.

How do I implement a recursive function in x86 NASM Assembly?

To implement a recursive function in x86 NASM Assembly, follow these steps:
• Save the current state of the registers and the stack pointer.
• Update the parameters for the recursive call.
• Call the function again.
• Restore the original state of the registers and the stack pointer.
• Calculate and return the result based on the returned value from the recursive call.

Here's a simple example of a recursive function to calculate the factorial of a number:

section .text
global _start

_start:
    ; Calculate 5!
    mov eax, 5
    call factorial      ; Result is returned in EAX

    ; Exit the program
    mov eax, 1
    int 0x80

factorial:
    ; Base case
    cmp eax, 1
    je end_factorial

    ; Save the current value on the stack
    push eax
    ; Update the parameter for the recursive call
    dec eax
    ; Recursive call
    call factorial
    ; Restore the saved value and combine it with the result
    pop ebx
    imul eax, ebx
end_factorial:
    ret

How can I optimize the performance of recursive functions in x86 NASM Assembly?

Recursive functions can lead to performance issues, such as stack overflow and slow execution, if not handled properly. To optimize the performance of recursive functions in x86 NASM Assembly, you can:
• Use tail recursion when possible, which is a form of recursion that allows the compiler to optimize the code and eliminate stack overflow issues.
• Implement memoization to store the results of previously computed function calls, reducing the number of redundant calculations.
• Consider using an iterative approach instead of recursion if the problem can be solved more efficiently that way.

How do I handle stack overflow issues in recursive functions in x86 NASM Assembly?

Stack overflow issues can occur in recursive functions when the stack size exceeds the available memory.
To handle stack overflow issues in x86 NASM Assembly, you can: • Optimize the recursive function using tail recursion, which eliminates the need for additional stack frames. • Increase the stack size by adjusting the stack frame allocation or the program's linker settings. • Use an iterative approach instead of recursion if it's more suitable for the problem at hand. • Implement memoization to reduce the depth of the recursion tree and the number of stack frames needed.
About me

I am a consultant in communication systems. For the last 20 years I worked at Loral Space Systems in Palo Alto, CA, where I held a Distinguished Engineer position. I was involved in all sorts of communication system designs. My work involved simulation of signal quality through satellite links. I used SPW by Synopsys for most of my work, but also Matlab. Before Loral, I worked at Booz Allen Hamilton, Aerospace Corporation and Northrop, in Los Angeles. I have an MSEE from USC. I write these tutorials for fun and for love of my field. It is forever challenging and fascinating. I am currently writing a book on link budgets and hence have not been able to add any new tutorials. I will be giving a talk in Ukraine on May 16th on the use of Matlab for spectral estimation. I was born in India, came to the US with my parents while young and went to school in California. I am also the author of the children's books, “The Reading Lesson” and “The Verbal Math Lesson“. These are wonderful books; if you have young children, please check them out. I am married, live in Danville, CA, and have two grown children. One works at Apple Computer and the other runs a publishing company. Over my 45-year career in engineering, I have given many talks to school and college groups on topics related to electrical engineering, as well as women in engineering. I have had a wonderful, rewarding career, and if you would like me to talk about any of these topics, I am happy to do so. If you have questions/comments on digital communications or signal processing, please post them on the relevant topic page. Charan Langton

66 Comments on “About me”

1. Great website!
□ Definitely a fantastic website. One has been referring to it since 2007. One is proud to have benefitted from your hard work, Charan, knowing that you hail from India. I have purchased the Kindle edition of your Fourier Series book, which is proving useful in preparation for a forthcoming interview. It is extremely well written, as are your other tutorials, having a very fundamental approach enabling good understanding by the reader. Great work 2. I greatly appreciate your efforts in putting these pages together. I have seen your previous versions and this one seems to be more refined. The Facebook page has increased its utility even further. I am not directly involved in digital comm but I do read material on the subject to keep myself updated. I also share these with my students and colleagues. Thanks one more time for your efforts. □ Rehmat, Thank you very much. Very happy that you like them. – Charan 3. Great work, very useful collections. 4. I started my career in digital comm and moved to audio DSP. I came across the OFDM article that I took a print of some years ago and that was lying in the desk. I didn't understand it then but read it again recently. It was so clearly written and simple to understand. It was an “Aha” moment and I want to mention my thanks. Keep up the good work □ Srikanth, This paper on OFDM seems to be the most popular of all my articles. Probably because the issue of the inverse FFT is so confusing. Glad you liked it. Charan Langton 5. Hi, Just wanted to let you know that I've found your tutorials hugely helpful. I spent a lot of time looking at different books and tutorials and yours are my favorite. Please write a few chapters on equalizers! I've now read many different tutorials/book-chapters and I never manage to keep up with the mathematical notation. 6.
Dear Charan Madam, I have a doubt in Digital Signal Processing. My question is that for determining the frequency components of the samples we use the DFT, which involves complex quantities. We know that this DFT is derived from the complex Fourier series of a periodic signal by letting its time period tend to infinity, and then we discretize the Fourier transform for a finite-length sequence. Can't we use the Fourier transform which is derived from the real Fourier series of the periodic signal by letting its time period tend to infinity, and then use it for a finite-length sequence, which will have only real quantities? This is the Discrete Hartley Transform. This also has a fast computation algorithm. Could you explain this to me? Arunpradhap Natarajan □ Discrete signals are different from continuous signals in that we cannot determine the true frequency of the underlying signal. A discrete signal can represent a lot of different analog frequencies. Because of this the DTFT repeats and the CTFT does not. We cannot sample the CTFT to represent the DTFT for this main reason. Once the signal is sampled, then you can use either the DFS (discrete Fourier series) or the DTFT or the DFT, but not the CTFT. BTW, whenever we are talking about any kind of “fast algorithm”, we are referring to computation on a discrete series. Charan Langton ☆ Can't we derive the CTFT in terms of real quantities from the real Fourier series? If so, give me that equation. Arunpradhap Natarajan ○ No, you cannot. There is no per se connection between a real quantity and the CTFT. One is a quality of a signal and the other a mathematical procedure on the whole signal. This is a very general question you are asking. It does not have an answer in an “equation”.
■ Dear Charan Madam, If I have a rectangular pulse with duration 0 to T, is its Fourier transform a sinc function of only positive frequency, or does it include negative frequency also? If it includes negative frequency, what is the meaning of negative frequency in a pulse having positive duration only? Arunpradhap Natarajan ■ Dear Madam, if we can't derive the Fourier transform from the real Fourier series, then how does the Discrete Hartley Transform exist? Thanks with regards, Arunpradhap Natarajan 7. Dear Madam, The Hilbert Transform produces a -90 degree phase shift if the signal frequency is f>0 and a +90 degree phase shift if the signal frequency is f<0. If I have a bandwidth of frequencies 0 Hz to B Hz, how can the Hilbert transform produce the +90 degree phase shift, since I don't have negative frequency? Arunpradhap Natarajan 8. Dear Charan Madam, If I have a rectangular pulse with duration 0 to T, is its Fourier transform a sinc function of only positive frequency, or does it include negative frequency also? If it includes negative frequency, what is the meaning of negative frequency in a pulse having positive duration only? Arunpradhap Natarajan 9. Hi Charan, Just a quick email to say thanks for your great website. We're a maritime VSAT operator (incidentally using various SSL-built satellites..) and your tutorials have been very useful for a number of our guys. Many thanks again and keep up the good work! Kind regards, Bertrand Hartman OmniAccess S.L. □ Thanks a lot. – Charan 10. Great website! Thanks for your work. 11. Great website, LOVE IT! 12. Great! I love and like it. 13. Hello Charan, I love this website. I just finished my MSEE project, and your tutorials on MIMO and OFDM were both extremely helpful in learning these topics (especially since my program isn't up to date and doesn't quite cover these topics).
Just a note: On tutorial 10 (TWTAs), the right side of some figures and equations seems to “end early.” Thanks for using your free time to teach the next and existing generations of comm engineers 🙂 14. It's great. 15. Thank you so much for the effort you have put in to make these topics so approachable. Your section on modulation (All About Modulation Part 1) is the most understandable description of the topic I have found. I wonder if you would be open to adding a short section showing the spectrum of, for example, QPSK modulation, being the convolution of the baseband and carrier signals. I would be happy to help if you find your time too committed at this time. And if you want to leave well enough alone, it is still a great explanation of the topic! □ Thank you Darren, I will try to add the spectrums. If you can help, I am most grateful! 16. I am currently doing my Master's in telecommunication, and while I am hardly a DSP guy, OFDM is clearly important in everything. I have really enjoyed your simple (and example-supported) explanation, and wanted to thank you for all your effort. The tutorial definitely goes in my “background folder” if I ever need to refresh my memory! Best Regards 17. Hi Charan, This website is awesome. You write the tutorials in such a style that a beginner like me can understand the concepts. You did a great job. I really want to say thank you for what you are doing. I am also looking for an LDPC Code tutorial. But unfortunately I did not find it here. Could you please write a tutorial about LDPC Coding and Decoding? I really like your explanation style. I hope you grant my request. Thank you again. Best Regards, 18. It's been 20 years since I needed to delve deeply into these concepts. I've been building lasers, imagers, and other photonic devices until recently and was struggling to come back up to speed. Your website and very lucid explanations have helped considerably. I'm so glad I stumbled into it.
Thanks for your efforts to educate us all. 19. Hello Charan, I have referred to your tutorial on the MAP turbo decoding algorithm; it guides the engineers who implement the algorithm, and I have not found such a clear explanation anywhere else. It is great work. I request you to also update the tutorial on the Max-Log-MAP turbo decoding algorithm, since it is difficult to represent the data in fixed point for the MAP decoding algorithm. Best Regards □ Thanks Ramamurthy. I remember, it was a tough one to do. I had someone point out some errors but have not had time to update. Thank you for your comment. 20. Hello, This May, the Institution of Engineering and Technology will release a publication that I feel will be of interest to you, entitled Digital Communications: Principles and Systems. Digital Communications: Principles and Systems provides a thorough grounding in digital communications using an innovative engineering-first approach to build a nonmathematical overview covering building blocks, signal processing tasks, general features and design considerations. Topics covered include transmission channels, source coding, digital baseband transmission, digital modulation, noise impact in digital transmission, error control coding, advanced signal enhancement techniques for wireless channels, and digital transmission link analysis and design. The reader is given an insight into the engineering concepts and the underlying physical considerations, a clear appreciation of the parameters involved, and an understanding of the interplay of these parameters. The book includes several unpublished original derivations, new insights and alternative approaches that make the understanding of key topics and their application much easier.
Digital Communications: Principles and Systems is an ideal textbook for those who wish to: • gain a thorough understanding of the core principles; • undertake digital communication systems analysis, design and computer simulations; • deal with specialized applications; • keep up to date with advances in the technology. Topics covered include: • overview of digital communication • linear channels and systems • nonlinear systems • sampling of baseband and band-pass signals • quantization and PCM • source coding and lossless data compression • line codes • transmission through band-limited AWGN channels • transmitted digital signals • noise impact in digital transmission • error control coding • digital transmission link analysis and design About the Author: Ifiok Otung is a Chartered Engineer with broad and international experience of research and teaching at various universities in Europe and Africa. He has previously worked as a consultant for the UK Electrical and Electronic Engineering Assessment Network and the Engineering Subject Centre of the UK Higher Education Academy. He has authored over 110 publications, and is a regular reviewer of technical articles and textbooks for some of the world's leading academic publishers. Ifiok Otung is currently Professor of Satellite Communications at the University of Glamorgan, where he teaches MSc courses in Satellite, Mobile and Digital Communications. 21. Really, this website is awesome. 22. Dear Mrs Langton, This is a spontaneous email after one of the can't-remember-how-many times I have visited your website. The material you provide has helped me incredibly. The way you present all these complex topics is compact and clear at the same time. You are a great teacher and I hope you keep providing us with your wonderful tutorials. Best Regards, Panos Papaioannou □ So nice! Thank you for saying that. You made my day! 23. Dear madam, It is a great website and has been very helpful in the communication field.
I visit this website frequently. I am doing a project on ‘MIMO-OFDM with Beamforming’. The aim of the project is to write code in Matlab and calculate the BER accordingly, and also to calculate the inter-user interference (IUI) if more than 2 users are transmitting to a base station at the same time. I am very confused about how to implement OFDM with the beamforming technique. It would be a great help, and I would be very pleased, if I could get any related code/concepts to help me understand and get going. Yours faithfully, IIT Madras 24. Thank you, professor Charan 🙂 I think I just know who I want to be 🙂 25. Great website! Loved the explanations, your notes, wonderful resource for teaching. (I have been invited as a guest lecturer at COEP; your notes are helping me a lot! 🙂 ) 26. Thank you so much for the great website and your effort. After so many searches on the internet, I finally found a website with amazing, understandable tutorials. 27. Hi, glad I found this website. Just reading how the DFT is born from the womb of the Fourier series… but it is still confusing and many questions have arisen for me while reading. I am trying to go deeply; I will ask you some questions soon. I will not ask you until I am defeated by these concepts (meaning I put in my 100% first and then I will ask you, admin). (from: India) 28. Hi. I was going through the first part of the modulation tutorial. The Offset QPSK section has an inaccuracy in the I and Q channel diagrams. The Q channel is supposed to be offset by half a symbol time, i.e. one bit time, but the figure shows an offset of half a bit time. Please correct me if I am wrong, else correct the figure 🙂 Thanks and regards 29. Great to be here! I learn a lot! Thank you very much! 30. Nice website, thank you 🙂 31. Mam, I really loved the content provided by you. I am planning to design a communication payload system for C-band, for which I did the EIRP calculation considering a BER of 10^-5 and a noise temperature of only 293 K.
Mam, how should I proceed to get the specifications of the various components? □ Hello Saroj, This is a big question. What are you trying to accomplish? Charan Langton 32. Hello Charan, First of all, thank you for your efforts in building up such a good platform for people who are interested in DSP and other related fields to communicate and study. I started touching DSP only about 3 months ago, and I took a look at one of your tutorials, ‘Fourier Transform of continuous and discrete signals’, and it really does help. It clarifies lots of conceptual questions that confused me for quite a long time, and even my prof cannot clearly answer some of those. I haven't finished reading that tutorial, and so far I have a question regarding the FT of a periodic signal. As you mentioned in that tutorial, even though historically the FT was initially used for aperiodic signals, it is also applicable to periodic signals, and the result will be discrete replicas of the FT of the original time-domain function. I also heard about the Poisson summation, and in fact they look quite similar, so I am wondering if there is any potential relationship between them? Another example is when you take the FT of a sine function, which means you are taking the FT of a periodic function, but we usually do a truncation and only take the FT within one period, which yields two impulses on the two sides of the frequency domain. And if I choose to consider the whole frequency domain instead of only one period, then the final frequency spectrum will be replicas of the impulse; is that a right inference? Is that the reason why we usually choose to do the FT of periodic signals within one period? Thank you. □ The CTFT and the DTFT of a sine wave are just two impulses at the frequency of the signal. When you do the DFT, if the length of the DFT is not equal to an integer multiple of the period length (in samples), then you will see other components.
I am not sure about your Poisson comment. I will have to look into that. There is no repeating of the impulses for the sine, for the periodic or the single basic case. I may not have explained this well in the tutorial. I will be posting a new one. ☆ I really appreciate your reply and I am looking forward to your further tutorials. If you have some ideas regarding the Poisson summation, please leave me comments. Thank you. 33. Dear Sir/Madam, I referred to your material titled “Linear Time Invariant (LTI) Systems and Matched Filter”, which is available at http://www.complextoreal.com. It is very useful for my lectures and for building some applications related to radar signal processing. Thanks for it. I would like to have some material related to the “Ambiguity Diagram”, where I require a clear description like the above-mentioned topic. Please sir…. 34. Hi Charan, Your tutorials are beautifully written. I wanted to thank you for your very well written material. I would appreciate it if you could recommend a well written book on digital communications. I have Digital Communications by Proakis, but unfortunately it is not rigorous enough. It has a lot of content, but most of it is without any proof. I am looking for a book that teaches the concepts intuitively, proves things mathematically in a well understandable manner, or gives enough information that I can research the rigorous proofs myself. I am an engineer with almost 20 years of experience in the field of DSP, so I am not completely unfamiliar with the subject. Thanks for your recommendation 35. Hello Mam, I have recently started following your tutorials; you seem to have great in-depth knowledge of the subject, especially in regard to its practical aspects. I have been reading many on the same topic; however, their approach is very theoretical. I am obliged to be taught by the masters in the field. Looking forward to your book soon, especially for Indians. Thanks once again. 36.
Thanks a lot. I want to know about synchronization in communication (concepts, basics and fundamentals) and I would appreciate it if you could help me. 37. When will the printed book be available? Thank you. □ It keeps getting delayed as we find more little errors in it. Having an eBook has helped us to clean it up. But I think it is now planned to start printing in mid-March and should be available by the end of April on Amazon. Thanks for sending me that error. Will look at it and fix it. Thanks again! – Charan BTW, what did you think of the other chapters? 38. Hello Charan, This is Seth, the physics student from UCLA. I enjoyed our discussion on the flight from Oakland, and thank you for your insight on weighing career options. The work I mentioned related to solitons is referenced on our research webpage: http://acoustics-research.physics.ucla.edu/solitary-waves/ On that page there is a video of the effect along with a few pictures and citations. Best Wishes! □ Hello Seth, Please send me your address. I promised to send you a copy of the book, which I will do. Hope your new school year is going well. – Charan 39. Hello Charan, My students and I have greatly enjoyed your tutorials over the years. Would you possibly be available for a brief Skype conversation with my senior digital communications class, to talk about what a career in communications looks like, and to share any career advice you might have for them? Sincerely, Tim 40. Great website. And easy to understand. Hope to see more posts about digital communication. 41. Dear Madam, I do a lot of bird song recording and song analysis in India, and in pursuit of a deeper understanding of song signals and the Fourier transform, I got and read a few chapters of your book, The Intuitive Guide to Fourier Transform, available on the net. I am very impressed by the clarity and style of presentation of the subject. I am also highly impressed by the other related material on your website, which I am slowly reading.
I wonder whether you could send me PDFs of the last three chapters of your book, so that I could complete the reading. Best regards, Pratap Singh 42. Dear Charan Langton, Thank you for your “All About Modulation - Part I” explanation of pi/4 QPSK and pi/4 DQPSK. I am an electronics engineer from China; I want to consult you about how to demodulate pi/4 QPSK, 16QAM and GMSK by the I/Q phase angle. Do you have any relevant tutorials? 43. Thank you for the great book. It is easier to understand than those written by professors. 44. Tony, I am glad you liked the book. Please be sure to write a review on Amazon. Thank you. 45. I am grateful for your wonderful tutorials. I want to express my gratitude and hope you realize how much of a difference this material is making in my life. 46. Charan, hi. I would like to get the second printing of your Fourier Analysis book – can you tell me how I can do that? Amazon does not say which one one gets if one selects the book/Kindle version shown. 47. Hi Charan - I was trying to reach you via an FB message, but in case you see this first, could you please e-mail me? I had written up feedback for you on your nice filter write-up that I thought you would be interested in seeing. It is too long to post here and includes graphics. □ I will be happy to see it. I have not been checking my site here, so I missed your post. You can email me at charldsp@gmail.com 48. Very nice work here. Also, very pleasant to see that you got your MSEE from USC. I guess we may have had a similar time frame at USC. I got my MSEE from USC too, in about the middle of the '80s. I also got a PhD from USC in '91. I love all of your tutorials. Post your new book announcement here when it is ready. I have seen Chapter 2 of your book and it looks fascinating!
GCSE iGCSE Maths Cheat Sheets and Formulae | MyMathsCloud

• 2D Shapes Area and Perimeter Formulae
• 2D and 3D Shape Properties
• 3D Shapes - Surface Area and Volume Formulae and Hints
• Angles (Parallel Lines Only) Questions By Topic
• Angles (Parallel Lines Only) Questions By Topic Solutions
• Circle Theorems - How To Know and Recognise Which Theorem To Use
• Converting Between Fractions, Decimals and Percentages
• Inequalities - Linear Graphs (shading regions)
• Probability Tree Diagrams
• Probability Venn Diagrams With Set Notation
• Probability Venn Diagrams Without Set Notation
• Quadratics - Factorising, Solving and Completing The Square
• Sine and Cosine Rule - Slides
• Straight Line Graphs - Shading Inequalities
• Types Of Graphs To Know (basic shapes)
Examining Load Average

Many Linux administrators and support technicians regularly use the top utility for real-time monitoring of their system state. In some shops, it is very typical to check top first when there is any sign of trouble. In that case, top becomes the de facto critical measurement of the machine's health. If top looks good, there must not be any system problems. top is rich with information—memory usage, kernel states, process priorities, process owner and so forth all can be obtained from top. But, what is the purpose of those three curious load averages, and what exactly are they trying to tell me? To answer those questions, an intuitive as well as a detailed understanding of how the values are formed is necessary. Let's start with intuition. The three load-average values in the first line of top output are the 1-minute, 5-minute and 15-minute averages. (These values also are displayed by other commands, such as uptime, not only top.) That means, reading from left to right, one can examine the aging trend and/or duration of the particular system state. The state in question is CPU load—not to be confused with CPU percentage. In fact, it is precisely the CPU load that is measured, because load averages do not include any processes or threads waiting on I/O, networking, databases or anything else not demanding the CPU. (Strictly speaking, Linux also counts tasks in uninterruptible sleep, typically blocked on disk I/O, as we will see when we reach the kernel code.) It narrowly focuses on what is actively demanding CPU time. This differs greatly from the CPU percentage. The CPU percentage is the amount of a time interval (that is, the sampling interval) that the system's processes were found to be active on the CPU. If top reports that your program is taking 45% CPU, 45% of the samples taken by top found your process active on the CPU. The rest of the time your application was in a wait. (It is important to remember that a CPU is a discrete state machine. It really can be at only 100%, executing an instruction, or at 0%, waiting for something to do. There is no such thing as using 45% of a CPU.
The CPU percentage is a function of time.) However, it is likely that your application's rest periods include waiting to be dispatched on a CPU and not on external devices. That part of the wait percentage is then very relevant to understanding your overall CPU usage pattern. The load averages differ from CPU percentage in two significant ways: 1) load averages measure the trend in CPU utilization not only an instantaneous snapshot, as does percentage, and 2) load averages include all demand for the CPU not only how much was active at the time of measurement. Authors tend to overuse analogies and sometimes run the risk of either insulting the reader's intelligence or oversimplifying the topic to the point of losing important details. However, freeway traffic patterns are a perfect analogy for this topic, because this model encapsulates the essence of resource contention and is also the chosen metaphor by many authors of queuing theory books. Not surprisingly, CPU contention is a queuing theory problem, and the concepts of arrival rates, Poisson theory and service rates all apply. A four-processor machine can be visualized as a four-lane freeway. Each lane provides the path on which instructions can execute. A vehicle can represent those instructions. Additionally, there are vehicles on the entrance lanes ready to travel down the freeway, and the four lanes either are ready to accommodate that demand or they're not. If all freeway lanes are jammed, the cars entering have to wait for an opening. If we now apply the CPU percentage and CPU load-average measurements to this situation, percentage examines the relative amount of time each vehicle was found occupying a freeway lane, which inherently ignores the pent-up demand for the freeway—that is, the cars lined up on the entrances. So, for example, vehicle license XYZ 123 was found on the freeway 30% of the sampling time. Vehicle license ABC 987 was found on the freeway 14% of the time. 
That gives a picture of how each vehicle is utilizing the freeway, but it does not indicate demand for the freeway. Moreover, the percentage of time these vehicles are found on the freeway tells us nothing about the overall traffic pattern except, perhaps, that they are taking longer to get to their destination than they would like. Thus, we probably would suspect some sort of a jam, but the CPU percentage would not tell us for sure. The load averages, on the other hand, would. This brings us to the point. It is the overall traffic pattern of the freeway itself that gives us the best picture of the traffic situation, not merely how often cars are found occupying lanes. The load average gives us that view because it includes the cars that are queuing up to get on the freeway. It could be the case that it is a nonrush-hour time of day, and there is little demand for the freeway, but there just happens to be a lot of cars on the road. The CPU percentage shows us how much the cars are using the freeway, but the load averages show us the whole picture, including pent-up demand. Even more interesting, the more recent that pent-up demand is, the more the load-average value reflects it. Taking the discussion back to the machinery at hand, the load averages tell us by increasing duration whether our physical CPUs are over- or under-utilized. The point of perfect utilization, meaning that the CPUs are always busy and, yet, no process ever waits for one, is the average matching the number of CPUs. If there are four CPUs on a machine and the reported one-minute load average is 4.00, the machine has been utilizing its processors perfectly for the last 60 seconds. This understanding can be extrapolated to the 5- and 15-minute averages. In general, the intuitive idea of load averages is the higher they rise above the number of processors, the more demand there is for the CPUs, and the lower they fall below the number of processors, the more untapped CPU capacity there is. 
But all is not as it appears. The load-average calculation is best thought of as a moving average of processes in Linux's run queue marked running or uninterruptible. The words “thought of” were chosen for a reason: that is how the measurements are meant to be interpreted, but not exactly what happens behind the curtain. It is at this juncture in our journey when the reality of it all, like quantum mechanics, seems not to fit the intuitive way as it presents itself. The load averages that the top and uptime commands display are obtained directly from /proc. If you are running Linux kernel 2.4 or later, you can read those values yourself with the command cat /proc/loadavg. However, it is the Linux kernel that produces those values in /proc. Specifically, timer.c and sched.h work together to do the computation. To understand what timer.c does for a living, the concepts of time slicing and the jiffy counter help round out the picture. In the Linux kernel, each dispatchable process is given a fixed amount of time on the CPU per dispatch. By default, this amount is 10 milliseconds, or 1/100th of a second. For that short time span, the process is assigned a physical CPU on which to run its instructions and allowed to take over that processor. More often than not, the process will give up control before the 10ms are up through socket calls, I/O calls or calls back to the kernel. (On an Intel 2.6GHz processor, 10ms is enough time for approximately 50-million instructions to occur. That's more than enough processing time for most application cycles.) If the process uses its fully allotted CPU time of 10ms, an interrupt is raised by the hardware, and the kernel regains control from the process. The kernel then promptly penalizes the process for being such a hog. As you can see, that time slicing is an important design concept for making your system seem to run smoothly on the outside. It also is the vehicle that produces the load-average values.
The 10ms time slice is an important enough concept to warrant a name for itself: the quantum value. There is not necessarily anything inherently special about 10ms, but there is about the quantum value in general, because whatever it is set to (it is configurable, but 10ms is the default), it controls how often, at a minimum, the kernel takes control of the system back from the applications. One of the many chores the kernel performs when it takes back control is to increment its jiffies counter. The jiffies counter measures the number of quantum ticks that have occurred since the system was booted.

When the quantum timer pops, execution enters timer.c at a kernel function called do_timer(). Here, all interrupts are disabled so the code is not working with moving targets. The jiffies counter is incremented by 1, and a check is made to see whether the load-average calculation should run. In actuality, the load average is not truly computed on each quantum tick, but driven by a counter that is based on the HZ frequency setting and tested on each quantum tick. (HZ is not to be confused with the processor's MHz rating. This variable sets the pulse rate of particular Linux kernel activity; one HZ tick equals one quantum, or 10ms by default.) Although the HZ value can be configured in some versions of the kernel, it is normally set to 100. The calculation code uses the HZ value to determine the calculation frequency. Specifically, the timer.c:calc_load() function runs the averaging algorithm every 5 * HZ ticks, or roughly every five seconds.
Following is that function in its entirety:

    unsigned long avenrun[3];

    static inline void calc_load(unsigned long ticks)
    {
        unsigned long active_tasks; /* fixed-point */
        static int count = LOAD_FREQ;

        count -= ticks;
        if (count < 0) {
            count += LOAD_FREQ;
            active_tasks = count_active_tasks();
            CALC_LOAD(avenrun[0], EXP_1, active_tasks);
            CALC_LOAD(avenrun[1], EXP_5, active_tasks);
            CALC_LOAD(avenrun[2], EXP_15, active_tasks);
        }
    }

The avenrun array contains the three averages we have been discussing. The calc_load() function is called by update_times(), also found in timer.c, which is the code responsible for supplying calc_load() with the ticks parameter. Unfortunately, this function does not reveal its most interesting aspect: the computation itself. However, that can be located easily in sched.h, a header used by much of the kernel code. In there, the CALC_LOAD macro and its associated values are defined:

    extern unsigned long avenrun[];      /* Load averages */

    #define FSHIFT    11            /* nr of bits of precision */
    #define FIXED_1   (1<<FSHIFT)   /* 1.0 as fixed-point */
    #define LOAD_FREQ (5*HZ)        /* 5 sec intervals */
    #define EXP_1     1884          /* 1/exp(5sec/1min) as fixed-point */
    #define EXP_5     2014          /* 1/exp(5sec/5min) */
    #define EXP_15    2037          /* 1/exp(5sec/15min) */

    #define CALC_LOAD(load,exp,n) \
        load *= exp; \
        load += n*(FIXED_1-exp); \
        load >>= FSHIFT;

Here is where the tires meet the pavement. It should now be evident that reality does not match the illusion. At least, this is certainly not the type of averaging most of us were taught in grade school. But it is an average nonetheless. Technically, it is an exponential decay function and is the moving average of choice for most UNIX systems as well as Linux. Let's examine its details. The macro takes three parameters: the load-average bucket (one of the three elements in avenrun[]), a constant exponent and the number of running/uninterruptible processes currently on the run queue.
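The macro's fixed-point arithmetic can be modeled in a few lines of Python (an illustrative sketch, not kernel code; note that in the kernel, count_active_tasks() returns the task count already multiplied by FIXED_1, so both load and n below are fixed-point quantities):

```python
FSHIFT = 11                      # bits of precision
FIXED_1 = 1 << FSHIFT            # 1.0 in fixed-point (2048)
EXP_1, EXP_5, EXP_15 = 1884, 2014, 2037

def calc_load(load, exp, n):
    """One CALC_LOAD step: load and n are fixed-point; exp is the decay factor."""
    load *= exp
    load += n * (FIXED_1 - exp)
    return load >> FSHIFT

# Model a machine whose run queue holds a steady four tasks:
load = 0
for _ in range(120):             # 120 five-second intervals = 10 minutes
    load = calc_load(load, EXP_1, 4 * FIXED_1)

# The one-minute average converges toward 4.00, as expected.
print(load / FIXED_1)
```

Running the same loop with EXP_5 or EXP_15 shows the longer windows approaching 4.00 more slowly, which is exactly the staggered behavior the three uptime numbers display.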
The possible exponent constants are listed above: EXP_1 for the 1-minute average, EXP_5 for the 5-minute average and EXP_15 for the 15-minute average. The constants are magic numbers derived from the function y = 2048 / e^(5/(60x)), where x is the averaging window in minutes: when x=1, y=1884; when x=5, y=2014; and when x=15, y=2037. The point of the magic numbers is that they let the CALC_LOAD macro work entirely in fixed-point arithmetic: they are nothing more than multipliers applied to the running load average to make it a moving average. (The mathematics of fixed-point representation are beyond the scope of this article, so I will not attempt an explanation.)

The virtue of the exponential decay function is that it not only smooths the dips and spikes into a useful trend line, it also steadily decreases the weight of activity as it ages. As time moves forward, more recent CPU events carry more significance in the load average. This is what we want, because recent CPU activity probably has more of an impact on the current state than ancient events. In the end, the load averages give a smooth trend from 15 minutes through the current minute and give us a window into not only the CPU usage but also the average demand for the CPUs. The higher the load average rises above the number of physical CPUs, the more the CPUs are being used and the more demand there is for them; as it recedes, demand eases. With this understanding, the load average can be used together with the CPU percentage to obtain a more accurate view of CPU activity.

It is my hope that this serves not only as a practical interpretation of Linux's load averages but also illuminates some of the dark mathematical shadows behind them. For more information, a study of the exponential decay function and its applications would shed more light on the subject.
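Those three constants can be reproduced directly from that relationship (a quick check, assuming the formula y = 2048 / e^(5/(60x)) with x in minutes):

```python
import math

FIXED_1 = 2048   # 1.0 in the kernel's 11-bit fixed-point representation

def exp_constant(minutes):
    """Fixed-point decay multiplier for an averaging window of `minutes`."""
    return round(FIXED_1 / math.exp(5.0 / (60.0 * minutes)))

# Reproduces the kernel's magic numbers: 1884 2014 2037
print(exp_constant(1), exp_constant(5), exp_constant(15))
```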
But for the more practical-minded, plotting the load average vs. a controlled number of processes (that is, modeling the effects of the CALC_LOAD algorithm in a controlled loop) would give you a feel for the actual relationship and how the decaying filter applies. Ray Walker is a consultant specializing in UNIX kernel-level code. He has been a software developer for more than 25 years, working with Linux since 1995. He can be contacted at
Disorder-Induced Double Resonant Raman Process in Graphene
Joaquin Rodriguez-Nieva, Millie S. Dresselhaus
Ph.D. student: 2011-2016

We studied the Double-Resonant (DR) Raman scattering process in disordered graphene, and showed the dependencies of the D and D’ band Raman intensities on laser energy, defect concentration and electronic lifetime. Several important features, which were contrasted with experiments, were discussed:
1. The laser energy dependencies of both the D and D’ band intensities are sensitive to the scattering potentials, thus providing detailed information about defects.
2. The D and D’ bands show a different laser energy dependence.
3. When the defect collision rate becomes faster than the electron-phonon collision rate, the ratio $I_D/I_G$ saturates as a function of defect concentration.

Figure from Eckmann, et al., Nano Lett 12, 3925 (2012)

Numerous theoretical and experimental works on the DR process are available, yet some of the most interesting and potentially useful questions remain to be answered:
1. What are the distinguishing signatures of the different types of defects in the Raman spectra?
2. Edges vs. point defects?
3. Laser energy dependence?
4. D vs. D’ bands?
5. Why is $I_D \gg I_{D'}$?

The Raman intensity is concentrated at backscattering of the photoexcited electron-hole pair, which fixes the momentum transfer. The Raman intensity can be integrated analytically using the hierarchy of scales $\gamma (\sim 10\,\mathrm{meV}) \ll \omega_{\mathbf{q}} (\sim 0.2\,\mathrm{eV}) \ll E_L (\sim \mathrm{eV})$:

$\frac{dI_{\mathrm{DR}}^{\mu}}{d \Omega_{\mathrm{f}}} = \frac{\alpha^2}{16 \pi} \frac{F^2_{\mu}}{\rho u_{\mathrm{F}} \omega_{\mathbf{q}}} \left(\frac{u_{\mathrm{F}}}{c} \frac{E_L}{\omega_{\mathbf{q}}}\right)^2 \frac{n_{\mathrm{i}}|U_\mu(\mathbf{q})|^2}{u^2_{\mathrm{F}}}\ln\left(\frac{\omega_{\mathbf{q}}}{\gamma}\right)$

Experiments typically show $I_D \gg I_{D'}$. What determines the ratio $I_D/I_{D'}$? Two effects mainly determine it:
1. Long-wavevector scattering
2.
Suppression of backscattering.

By (indirectly) measuring λ, we can identify the nature of the defects. Several laser energy dependencies of the integrated D and D’ band intensities are obtained in experiments... why?
a) The Raman spectrum is defect sensitive: the potential is probed at backscattering.
b) $\gamma \propto E_L$

Fig. 2. Laser energy dependence of the integrated Raman intensity ratio $I_D/I_G$ between the D and G bands obtained from our model (solid line), and experimental points from Ref. [2]. The dashed line indicates the frequently used $I_D \propto E_L^{-4}$ fit.

The dispersive behavior of the D and D’ bands explains their different laser energy dependence: $I_D/I_{D'} \propto (\omega_{\mathbf{q}\approx K}/\omega_{\mathbf{q}\approx\Gamma})^3$

G band: $I_{\mathrm{G}} \propto E^4_{\mathrm{L}}$

Edge-induced D band [3]: $I_D=\alpha^2 \lambda_K \frac{u^2_F}{c^2} \frac{E_{\mathrm{L}}}{\omega^2_{\mathbf{q}}} \frac{u_F L_e}{A} \ln \left(\frac{\omega_{\mathbf{q}}}{2\gamma}\right)$

Fig. 3. Dependence of the integrated D band intensity on defect concentration $n_i$ as obtained within our model (solid line), and experimental points of Ref. [4].

Saturation of the D band intensity with defect concentration is controlled by the electronic lifetime due to electron-phonon (ep) and electron-defect (d) scattering:

$\gamma = \gamma^{\mathrm{d}} + \gamma^{\mathrm{ep}}, \quad \begin{cases} \gamma^{\mathrm{ep}}>\gamma^{\mathrm{d}} & \to I_{\mathrm{D}} \propto n_{\mathrm{i}}\\ \gamma^{\mathrm{ep}}<\gamma^{\mathrm{d}} & \to \frac{dI_{\mathrm{D}}}{dn_{\mathrm{i}}} = 0 \end{cases}$

Typical values: $\gamma^{\mathrm{ep}} \sim 15$ meV and $\gamma^{\mathrm{d}} [\mathrm{meV}] \approx \frac{n_i|U_0|^2E_{\mathrm{L}}}{2(\hbar u_F)^2} \sim 10\,n_\mathrm{i} [10^{12} \mathrm{cm}^{-2}]$

Raman spectroscopy can provide detailed information about the elastic scattering potential due to impurities, allowing one to identify the nature of defects by using the laser energy dependence of the D and D’ bands, or the $I_D/I_{D'}$ ratio. Several experiments can be used to test our predictions, such as correlations with transport measurements or doping effects.
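The crossover implied by those typical values is easy to check (a small sketch using only the order-of-magnitude numbers quoted above; the function names are ours, not the paper's):

```python
GAMMA_EP_MEV = 15.0   # electron-phonon broadening, ~15 meV (typical value above)

def gamma_defect_mev(n_i):
    """Electron-defect broadening in meV, with n_i in units of 1e12 cm^-2."""
    return 10.0 * n_i

def d_band_saturated(n_i):
    """I_D stops growing with n_i once defect scattering dominates the lifetime."""
    return gamma_defect_mev(n_i) > GAMMA_EP_MEV

# For these parameter values, saturation sets in near n_i ~ 1.5e12 cm^-2.
print(d_band_saturated(1.0), d_band_saturated(2.0))
```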
Further computational work is required to model more accurately the scattering potential introduced by the different types of defects. JFRN, et al., PRB 90, 235410 (2014)
Stochastic Modeling and Assisted History-Matching Using Multiple Techniques of Multi-Phase Flowback from Multi-Fractured Horizontal Tight Oil Wells

1. Introduction

In recent years, as a result of low gas prices and relatively high oil prices, many producers have turned their attention to light tight oil (LTO) reservoirs as a means of producing commercial wells. Much like shale gas reservoirs, LTO reservoirs are typically very low in permeability and require extensive hydraulic fracturing to allow for commercial production. As a result, operators are seeking new methods to estimate hydraulic fracture properties, particularly early in the well life. Over the past five years, numerous authors have identified quantitative flowback analysis as a suitable method which in most cases aligns well with more conventional long-term production data analysis (e.g., [2]). Although the majority of the literature has focused on shale gas reservoirs, there has been a substantial amount of research conducted in analyzing flowback from LTO wells. These methods have been applied to LTO plays across North America. A comprehensive literature review was given by [2] and the reader is guided to this work for details. In this paper, only the papers relevant to Monte Carlo (MC) simulation and the application of evolutionary algorithms to the flowback problem will be discussed. [3] developed a simple straight-line method for estimating fracture pore volume for shale gas wells that exhibit a period of unit slope which falls between the early transient flow period and the late transient flow period during multi-phase flow. In their relationship, rate-normalized pressure (RNP) is inversely proportional to fracture pore volume and total compressibility.
The authors use a combination of the generalized reduced gradient method (GRG) and evolutionary algorithms in order to decouple the involved parameters. The GRG algorithm is used to find a possible optimal combination of the unknown parameters, and then the evolutionary algorithm is used to generate the probability density function (PDF) and cumulative distribution function (CDF) associated with the unknown parameters. Although the approach of decoupling parameters is unique, the GRG algorithm has been shown by many authors to get trapped in local optima rather than finding the true global optimum; therefore, this approach could be strengthened by using a more rigorous algorithm such as a genetic algorithm (GA), which will typically find a near-optimal solution. With the exception of the application of assisted parameter estimation algorithms, this approach is very similar to the flowing material balance (FMB) applied by [1], although that analysis focused on single-phase fracture depletion prior to the breakthrough of formation fluids. Reference [4] developed an approach for analyzing both flowback and long-term online production data from gas condensate wells using numerical simulation combined with a multi-objective (MO) GA to derive key fracture and reservoir parameters. The authors rigorously modeled flowback data using a triple-porosity system (matrix, primary hydraulic fractures and induced/natural fractures) using the multiple interacting continua approach (MINC, [5]). The model also includes multiple water trapping mechanisms (permeability jail, capillary pressure and gravity segregation). This is likely the most rigorous method developed to date, although it is also the most computationally intensive. The model could be made more rigorous by including coupled flow-geomechanical simulation, although this would add further computational intensity. The base tool used in this work is a modified version of what was developed by [1], which is described in detail in [2].
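The local-optima problem noted above is easy to reproduce with a toy example (a sketch, not the GRG2 code: plain gradient descent on a one-dimensional function with two minima, with and without random restarts):

```python
def f(x):
    """Toy objective: a local minimum near x≈1.35 and a global one near x≈-1.47."""
    return x**4 - 4.0*x**2 + x

def df(x):
    return 4.0*x**3 - 8.0*x + 1.0

def descend(x, lr=0.01, steps=2000):
    """Plain 'downhill' gradient descent from a single starting point."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

single = descend(2.0)                  # trapped in the local minimum near 1.35
starts = [-2.0, -1.0, 0.0, 1.0, 2.0]   # multi-restart from several seeds
multi = min((descend(s) for s in starts), key=f)

print(f(multi) < f(single))            # the restarts find the deeper minimum
```

The same trap-and-restart behavior carries over to higher-dimensional history-matching problems, where the search space is far harder to inspect by eye.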
The objective of the current work is to apply MC simulation both for uncertainty quantification and for assisted history-matching purposes. Several other single-objective (SO) and MO techniques for assisted history-matching (focusing on evolutionary algorithms) are tested to determine the ability of each algorithm to identify the global minimum and therefore output the most realistic set of key uncertain fracture parameters. Results of the application of a gradient-based algorithm are presented to demonstrate how these techniques are insufficient for optimization of complex problems, unless the initial guess is closer to the absolute minimum than any local minima in the search space. Application of several evolutionary algorithms suggests that these algorithms are useful for this application, assuming a suitable search space is defined. Discussion will also be provided on how to speed up the performance of these algorithms, making them more applicable for widespread use by industry.

2. Theory and Methods

The analysis procedure used in this work is shown in Figure 1. The emphasis of this work is on the final two steps of the analysis procedure, in which a variety of algorithms are used in an attempt to find the optimal solution and quantify uncertainty in key fracture parameter estimates. The first three steps, however, will also be demonstrated in the context of the field example, as these steps assist in setting up the search space for the stochastic simulation and assisted history-matching.

Algorithms Used. In this work, six different algorithms were tested for the purpose of uncertainty quantification and assisted history-matching.
The methods applied in this paper include: 1) MC MO simulation (Palisade^® @RISK^TM); 2) Microsoft^® Excel’s SO gradient-based (GRG2) algorithm (GRG Nonlinear Solver); 3) Microsoft^® Excel’s SO Evolutionary Solver; 4) Palisade^® Evolver’s SO GA; 5) the GAPS MO GA (based on the NSGA-II non-dominated sorting genetic algorithm); and 6) Palisade^® Evolver’s SO OptQuest^TM algorithm.

Figure 1. Summary of procedure for analyzing flowback data using deterministic, stochastic and assisted history-matching techniques.

Each algorithm will be briefly discussed here, with more details available in the literature. There are two main characteristics of all of these algorithms: 1) the objective function (OF); and 2) constraints. The OF is the key parameter that one is attempting to minimize or maximize (minimization in this case). Constraints are relationships which must be satisfied for a solution to be considered acceptable. The OFs used in this work are sum-of-squares OFs comparing measured water and oil rates with modeled rates. Cumulative production OFs can also be introduced to further constrain the problem. The OFs used in this work will be discussed in the coming sections. Since there are no hard constraints applicable to this problem, the only constraints used will be the input ranges of the uncertain parameters, which will also be discussed in the coming sections.

Monte Carlo Simulation. Traditional deterministic analysis techniques combine single-point estimates of key input variables to provide a single-point estimate of the result. This type of analysis assumes that the true values of all inputs are known in order to derive an accurate solution. Often these single-point estimates may differ greatly from the actual result and can lead to negative outcomes such as financial loss. In the majority of real-life problems, certainty in all parameters is rarely the case; while some variables may be known precisely or can be estimated with a reasonable degree of accuracy (e.g.,
from lab testing or other methods), others may contain a high degree of uncertainty [6]. Stochastic simulation provides a platform to incorporate the uncertainty of inputs in order to derive a range of possible outcomes. This provides the analyst with vastly more information about the problem and assists in making smart decisions in which both the potential upside and downside are understood. Using these techniques is, in essence, similar to running hundreds or thousands of what-if scenarios simultaneously, while removing the often time-consuming and/or biased human component. Further, the results are presented in a manner in which they can easily be interpreted. The key components of a stochastic simulation include: 1) defining uncertain variables using a probability distribution; 2) defining key output variables; 3) running a series of simulations using an appropriate sampling technique (MC or Latin hypercube); and 4) developing a distribution of suitable output parameters using an OF. In this work, MC simulations were conducted using Palisade Corporation’s @RISK^TM add-in for Microsoft^® Excel^TM. As mentioned previously, MC simulation is conducted in such a way that multiple objectives are considered.

Microsoft^® Excel’s GRG Non-Linear Solver (GRG2 Gradient-Based Algorithm). This technique is based on the Generalized Reduced Gradient 2 (GRG2) algorithm, an extension of a version of the GRG code developed by [7], and is a SO algorithm. The solver combines a graphical user interface and algebraic modeling language for linear, nonlinear and integer programs and is integrated into the host spreadsheet as closely as possible [8]. These techniques are generally applied to smooth problems (i.e. smooth in both the OF and constraints), although these methods are often applied unwisely to optimization problems that do not meet the smoothness criteria [9].
These algorithms are “downhill” in nature and therefore tend to get trapped in the closest local minimum surrounding the initial guess and struggle to escape it. Application of a good initial guess (i.e. the deterministic solution) or the multi-restart technique, in which the algorithm is started from multiple randomly generated starting points, can allow these algorithms to find the absolute minimum rather than being trapped in local minima in the vicinity of the starting point.

Microsoft^® Excel’s Evolutionary Solver. The version of the Evolutionary Solver available in the standard version of Excel^TM is a SO algorithm developed by [10]. Although the exact workings of the solver are not available in the literature or from the developer, much like other GAs, this algorithm retains a population of solutions; however, it is a steady-state GA (rather than a generational GA), meaning that only one solution is replaced by a better solution at each model iteration. This GA operates on a time-based constraint in which the algorithm has a set amount of time to find a better solution than the existing best solution, and it outputs a single best solution. As a result of the time-based nature of this GA, the longer the user-defined maximum time without improvement, the better the algorithm's chance of locating the absolute minimum, although the technique is designed to find a “good” solution rather than the optimal solution (the two may be equivalent or at least similar). As with other GAs, there are four main steps applied within the algorithm: 1) selection; 2) crossover/mating; 3) mutation; and 4) replacement. Reference [11] offers a variety of other more advanced algorithms which combine the simple GA with classical optimization techniques, other evolutionary algorithms, as well as Tabu Search and Scatter Search, which may lead to better ultimate solutions. These advanced versions were not tested in this work.

Palisade^® Evolver’s Genetic Algorithm.
Palisade^® Evolver’s GA is a SO GA which contains 5 potential solving methods: 1) Recipe; 2) Order; 3) Grouping; 4) Budget; and 5) Project. The Recipe method is the default and is designed to be used when parameter values can be varied independently; it can be applied to the majority of optimization problems, especially when the relationship between the adjustable variables is not well understood or cannot be handled better by one of the other techniques. In this work, the Recipe solving technique is used. The GA used in Evolver^TM is unique, much like that used in Microsoft^® Excel’s Evolutionary Solver, in that it uses a steady-state approach, meaning that only one organism is replaced at a time rather than the entire generation. According to [11], this method has been shown to work as well as or better than the generational method, although no evidence is provided in their literature. When comparing the results of Evolver’s GA with other GAs that use the generational approach, the number of “equivalent generations” can be set by constraining the number of trials to be equal to the size of the population multiplied by the desired number of generations. Parallelization is also utilized by the program to improve computational efficiency. The same four general steps of a GA are applied in this algorithm as were applied in Excel’s Evolutionary Solver. This algorithm is designed to find the global minimum. A uniform crossover scheme is used by this algorithm, meaning that half of the parameters of the child come from each parent.

GAPS Multi-Objective Genetic Algorithm (Based on NSGA-II Algorithm). GAPS is a MO GA based on the NSGA-II algorithm developed by [12]. As a MO GA, the algorithm is designed to account for objective conflict and yields a Pareto front of solutions which are all mathematically equivalent.
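The notion of Pareto dominance underlying the NSGA-II sort can be stated in a few lines (a generic sketch for a minimization problem; the objective values below are invented, e.g. water-rate and oil-rate misfits):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimizing every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (water misfit, oil misfit) for four candidate history-matches:
candidates = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (4.0, 4.0)]
print(pareto_front(candidates))  # (4.0, 4.0) is dominated by (2.0, 2.0)
```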
If a single optimal solution is desired, then the user must select the solution along the Pareto front which is most applicable to the problem being solved. The NSGA-II algorithm is an extension of the NSGA algorithm developed by [13]. The basis of the algorithm is the nondominated sorting procedure, hence its name. The modified algorithm was developed to handle the main criticisms of the original algorithm, and offers the following benefits over the original:
- Utilizes a faster method for nondominated sorting.
- Preserves elitism, meaning that the best solutions are maintained without modification.
- Incorporates a parameter-less diversity preservation mechanism to replace the need for a sharing parameter, which is the traditional mechanism for maintaining diversity.
- Utilizes parallelization to improve solution speed by allowing calculations to be spread out over multiple processors.
The algorithm rests on two key concepts: 1) nondominated sorting; and 2) diversity preservation. It follows the same general concept as the other GAs discussed above, although it is generational in nature. This algorithm has been shown to work well for three OFs by [5], although it generally begins to fail as further objectives are added in complex problems (four or more in this particular problem). To handle this deficiency, the U-NSGA-III algorithm was developed by [14]. This algorithm uses a continuous single-point crossover scheme, meaning that crossover occurs at randomly chosen points and the two children get the genetic material from either side of the crossover point.

Palisade^® Evolver’s OptQuest Algorithm. OptQuest^TM is a “black box” optimizer first developed by [15] to find the global optimum solution. The algorithm is not totally context-independent, because the selection of the solution representation gives some information to the optimization algorithm.
The model allows the user to represent solutions as a mixture of continuous, discrete, integer, binary, permutation and other specialized variables, which provides the optimizer with some information about the system. The software ultimately chooses solvers based on the characteristics of the optimization model (pure or mixed, constrained or unconstrained, and deterministic or stochastic). The optimizer is based on Scatter Search, but also uses the principles of Tabu Search, an artificial neural network and other methods in an attempt to derive a global optimum solution more rapidly. Scatter Search is an optimization algorithm comparable to a GA.

3. Field Example

A full analysis demonstrating each step of the analysis procedure (shown in Figure 1) will be provided. One multi-fractured horizontal well (MFHW) from a 3-well pad in a LTO play in the Western Canadian Sedimentary Basin (WCSB) will be analyzed, although the other two wells on the pad, as well as other wells in the area and other LTO plays in North America, have also been analyzed using this procedure. This well was previously analyzed by [1], but using a deterministic approach. To protect operator confidentiality, well location and reservoir and completion information has been withheld. A summary of the completion and stimulation performed is given below:
- Cased-hole completion.
- Hydraulically fractured with hybrid water fracs in 18 stages using plug-and-perf technology (single perforation cluster per stage).
- Fracture stages spaced at ~330 ft.
- 1350 STB of fracture fluid and 45 T of proppant pumped per stage.
Assessing the microseismic collected on this well, the assumption of circular bi-wing planar fractures appears to be reasonable and will be used in this analysis. Preceding the flowback data used for this analysis, plugs were drilled out with coil tubing following stimulation, after which the well was placed on flowback monitoring through a test separator.
Rate and pressure data were gathered every 15 minutes for approximately 300 hours during flowback, following a 12-day shut-in period. Inputs common to the different flowback analysis techniques are shown below in Table 1. Note that the individual hydraulic fracture width is approximately twice what is expected for a simple bi-wing planar fracture (0.25 in/stage). This is likely due to some fracture complexity (or possibly multi-planar fractures) which could not be resolved at the microseismic level.

Raw Data and Diagnostic Plots. Water, oil and gas rates as well as bottom-hole flowing pressure and gas-oil ratio (GOR) are shown below in Figure 2(a), while water RNP and its derivative (RNP’) are shown in Figure 2(b). From Figure 2(a) it can be observed that flowback initiates with more than 2 days of single-phase water production (fracture fluid) prior to the breakthrough of hydrocarbons (formation fluid). Hydrocarbons break through just after 2 days, with an initial oil rate of ~36 STB/D. For about the first 6 days of hydrocarbon production (8 days total), the GOR is approximately constant at 1250 scf/STB, which is equal to the solution gas level. At ~8 days there is a rapid drop in flowing pressure (resulting from a rapid decrease in choke size) below the bubble point, followed by a rapid increase in GOR suggesting a breakthrough of gas into the fractures. Therefore, only the first 8 days of production were considered for this analysis, as this is the period where production is under two-phase (water + oil) flow in the formation and fractures (the tool cannot currently model three-phase flow).

Table 1. Input parameters for flowback field example.

Figure 2. Flowback data: (a) Water, oil and gas rate, as well as bottom-hole flowing pressure and GOR; and (b) Water RNP and RNP’.
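The diagnostic quantities behind plots like Figure 2(b) are straightforward to compute (a sketch with invented numbers: rate-normalized pressure RNP = (p_i − p_wf)/q, material balance time MBT = Q/q, and the log-log slope used to identify flow regimes; during boundary-dominated fracture depletion, RNP grows in proportion to MBT, giving the unit slope cited below):

```python
import math

def material_balance_time(cum, rate):
    """MBT = cumulative production / instantaneous rate."""
    return cum / rate

def rnp(p_init, p_wf, rate):
    """Rate-normalized pressure, e.g. psi per (STB/D)."""
    return (p_init - p_wf) / rate

def log_log_slope(x, y):
    """Least-squares slope of log(y) vs log(x): the flow-regime diagnostic."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic depletion signature: RNP proportional to MBT gives a unit slope.
mbt = [0.2, 0.5, 1.0, 2.0, 3.0]          # days (invented)
rnp_vals = [40.0 * t for t in mbt]        # psi/(STB/D) (invented)
print(round(log_log_slope(mbt, rnp_vals), 3))  # 1.0
```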
Over the first 8 days of production, water rate and bottom-hole flowing pressure generally decline, while the hydrocarbon rate generally increases following breakthrough, as would be expected from a formation with minimal mobile water and constantly decreasing bottom-hole flowing (and fracture) pressure. From Figure 2(b), before-breakthrough (BBT) flow-regimes can be identified using the RNP’ curve from the period of single-phase production. The first flow-regime interpreted is a short period of transient radial flow within the fractures (0-slope), which appears to last until ~0.1 days of material balance time (MBT), although the data is scarce and noisy during this period, making it difficult to conclude this flow-regime identification with a significant degree of certainty. From ~0.1 days to ~3 days of MBT a clear period of fracture depletion (unit slope) is identified, up until breakthrough. After breakthrough (ABT), the derivative is non-linear in nature, as would be expected as multiple flow-regimes are occurring, although a depletion-like signature remains dominant. Casing pressures were converted to sandface pressures using a wellbore model, and initial formation pressure was estimated from p* obtained from a Diagnostic Fracture Injection Test (DFIT), which also yielded the estimate of matrix permeability. The GOR and bubble-point pressure are defined based on PVT analysis of the reservoir fluid from a group of offsetting wells. Initial fracture pressure was determined by a trial-and-error process conducted by [1] and was maintained for this analysis.

Rate-Transient Analysis of BBT Single-Phase Data. To assist with the history-matching process, rate-transient analysis (RTA) is applied to the flow-regimes identified in Figure 2(b) (repeated in Figure 3(a)). Radial flow analysis, which is shown in Figure 3(b), is used to analyze flow-regime (FR) 1 and estimate fracture conductivity (permeability). Delimiters are shown to highlight the interpreted period of radial flow.
Using the slope of the radial flow plot, fracture permeability is estimated to be ~3500 md, assuming a fracture width of 0.5 in/fracture with a small negative skin. The negative skin may be a result of maximum proppant concentration near the wellbore, as discussed previously. The FMB, which is shown in Figure 3(c), is used to assess FR 2 and estimate BBT fracture volume and fracture half-length. From the x-intercept of the plot, the total fracture volume is estimated to be approximately 24,000 STB, and the BBT fracture half-length to be ~441 ft per stage, assuming a circular fracture shape and a fracture width of 0.5 in/fracture. From the y-intercept an additional measure of fracture permeability can be derived, ~3400 md, assuming a fracture width of 0.5 in/stage. The total calculated fracture volume is approximately equal to the total volume injected during the fracture stimulation, suggesting that the majority of pumped fluid has been converted into effective fracture volume (prior to hydrocarbon breakthrough). This conversion percentage is higher than expected even for a well with minimal natural fracturing (as inferred from microseismic and experience in the formation of interest), and may result from the impact of the other two wells being stimulated on the same pad prior to flowback of the well. These values will be used in the deterministic history-matching process. Finally, the fracture parameters estimated from radial flow analysis and the FMB can be confirmed by using the Fetkovich type-curve (Figure 3(d)), which is designed for analyzing radial to boundary-dominated flow behaviour. Because MBT is used in the calculation of t_Dd, fracture depletion data falls down the harmonic stem, with a positive deviation indicating the breakthrough of formation fluid.

Figure 3. Rate-transient analysis of BBT single-phase data: (a) Water RNP and RNP’ plot; (b) Early radial flow analysis; (c) Flowing material balance; and (d) Fetkovich type-curve.
Parameters estimated from quantitative RTA of this flowback data are provided in Table 2. Deterministic History-Match. Deterministic history-matching was first conducted to validate the application of the conceptual model to this dataset, confirm selection of a fracture shape and geometry model, and confirm RTA-derived parameters for BBT fracture properties. For this analysis, a circular shape with a single bi-wing fracture being generated from each stage was selected for simplicity, as was done by [1], under the assumption that cylinder radius is equal to fracture half-length.
Table 2. Parameters solved from each BBT RTA technique.
The history-matches, guided by the BBT RTA-derived parameters, are provided in Figure 4. The deterministic history-match was not significantly changed from what was presented by Clarkson et al. (2014), although minor improvements were made. Note that the deterministic history-match is shown here in black to maintain continuity of this match throughout the paper. From Figure 4 it can be seen that the deterministic history-match to water is very good throughout the eight days of flowback modeled, while the hydrocarbon match is very good for the first 6.5 days, at which point it begins to underestimate production. After conducting the first three stages of the analysis procedure, the remainder of the paper will focus on the stochastic simulation and assisted history-matching component of the procedure, which is the main contribution of this work. The key history-match parameters are given in Table 3 (assumed to be fracture permeability, cylinder radius (fracture half-length) both BBT and ABT of formation fluid, breakthrough pressure, and the Corey relative permeability exponents for oil and water in the fractures). Matrix relative permeability exponents were assumed to be 2 for both oil and water, although, due to the very high mobile oil saturation, these curves have minimal impact on the analysis.
From Table 3, it can be seen that the fracture half-length decreases following the breakthrough of formation fluids, as expected. A ~7% decrease in drainage radius (~4% decrease in effective half-length) was observed, and this was applied during stochastic simulation and assisted history-matching to reduce the number of uncertain parameters. In many cases the reduction in half-length as a result of breakthrough can be significantly larger. Further, breakthrough pressure was set slightly higher than formation pressure to account for the supercharge effect associated with high-rate injection. Matrix relative permeability exponents were set to two for both water and oil throughout.
Figure 4. Deterministic flowback match: (a) Water, oil and gas production rates; and (b) Cumulative water, oil and gas.
Table 3. Key history-match parameters for flowback simulation.
Stochastic Simulation and Assisted History-Matching. In this section, the results from multiple stochastic and assisted history-matching techniques will be discussed. As mentioned previously, the algorithms used include: 1) MC simulation (Palisade® @RISK™); 2) Microsoft® Excel’s Gradient-based (GRG2) algorithm (GRG Nonlinear Solver); 3) Microsoft® Excel’s Evolutionary Solver; 4) Palisade® Evolver’s GA; 5) GAPS MO GA (based on the NSGA-II algorithm); and 6) Palisade® Evolver’s OptQuest™ algorithm. The results of each individual technique will be discussed, followed by a comparison of the results of each of the techniques. As discussed previously, only the first 8 days of flow data were analyzed because, during this flow period, the flowing pressure remains above the bubble point, and therefore only two-phase flow exists in the matrix and fractures. During this period, the GOR is also relatively constant, as would be expected for flow above the bubble point. Monte Carlo Simulation. As discussed previously, stochastic history-matching can be a multi-step process, with multiple refinement stages.
For example, two sets of MC simulations were conducted in the presented example. Following the first stage, inputs including fracture compressibility and matrix properties were held constant for the final set of simulations, which will be discussed here. Further refinement stages could be conducted using information from past runs to adjust input distributions and increase the number of success cases. The parameter distributions for the first refinement stage are shown below in Table 4.
Table 4. Input distributions/ranges for Monte Carlo simulation and assisted history-matching. *Upper bound on fluid in place given assumptions of fracture shape, width and porosity.
Because sufficient data were not available to construct proper input distributions, uniform distributions were used for each parameter between a reasonable low and high value. In some cases, the high and low values were constrained by physical limits (i.e. the upper limit of half-length, and the lower limits of n’, m’ and breakthrough pressure), whereas other limits were set at a reasonable range and then adjusted following the screening stage of iterations. The input parameter ranges were also further constrained by the initial screening phase of simulations (500,000 iterations with significantly wider parameter ranges), as well as reasonable limits on the uncertain parameters. The initial screening phase was used to rule out outlier matches which occurred with minimal frequency. The same limits were used for the application of the assisted history-matching techniques, which will be discussed in the following section. This analysis is comparable to that conducted by [16], although the number of uncertain parameters was reduced from 10 to 5, placing the focus on the most important parameters. Using this approach allows reasonable coverage of the sample space with a smaller number of iterations.
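The uniform-between-bounds sampling described above can be sketched as follows. The parameter names and ranges below are placeholders standing in for the five uncertain parameters of Table 4, not the study’s actual bounds.

```python
import random

# Placeholder bounds (low, high) for the five uncertain parameters; the
# real ranges live in Table 4 and are not reproduced in this excerpt.
BOUNDS = {
    "k_frac_md": (2000.0, 5000.0),   # fracture permeability
    "p_bt_psia": (3700.0, 4100.0),   # breakthrough pressure
    "xf_bbt_ft": (300.0, 460.0),     # BBT half-length
    "n_oil": (1.0, 5.0),             # Corey oil exponent n'
    "m_water": (1.0, 10.0),          # Corey water exponent m'
}

def draw_case(rng):
    """One MC realization: an independent uniform draw per parameter."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in BOUNDS.items()}

def run_screening(n_iter, seed=0):
    """Screening-style batch of uniform MC draws (seeded for repeatability)."""
    rng = random.Random(seed)
    return [draw_case(rng) for _ in range(n_iter)]
```

Each case would then be run through the flowback simulator and scored against the objective functions defined below; the screening batch is only used to trim the ranges, as the text describes.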
[17] conducted significantly fewer simulations than would be needed to cover the sample space, although the purpose of that work was to demonstrate the purpose and application of MC simulation to history-matching flowback data from MFHWs. The following objective functions (OFs) were used in the MC simulations, the assisted history-matching algorithms, or both. The OFs take the form of sums of squared residuals for the rate and cumulative production of the water and oil phases. Because the well is flowed above the bubble point throughout the analysis period of the flowback, and the GOR is relatively constant at approximately the solution gas level, the gas phase is not considered and is effectively lumped in with the oil phase.
Water Rate OF: $OF_{q_w}=\sum_{i=1}^{n}\left(q_{w,data}-q_{w,sim}\right)^{2}$ (1)
where n is the number of data points collected during the portion of the flowback data being analyzed for each phase.
Oil Rate OF: $OF_{q_o}=\sum_{i=1}^{n}\left(q_{o,data}-q_{o,sim}\right)^{2}$ (2)
Cumulative Water OF: $OF_{Q_w}=\sum_{i=1}^{n}\left(Q_{w,data}-Q_{w,sim}\right)^{2}$ (3)
Cumulative Oil OF: $OF_{Q_o}=\sum_{i=1}^{n}\left(Q_{o,data}-Q_{o,sim}\right)^{2}$ (4)
Summed Rate OF: $OF_{q_t}=\sum_{i=1}^{n}\left[w_{w}\left(q_{w,data}-q_{w,sim}\right)^{2}+w_{o}\left(q_{o,data}-q_{o,sim}\right)^{2}\right]$ (5)
The summed rate OF given by Equation 5 is used for the SO algorithms. There are, however, two issues associated with using summed OFs: 1) objective conflict leading to erroneous results; and 2) weighting can have a significant impact on the results of the algorithm. In this work, a 1:1 weighting was used for direct comparison to the MO algorithms (equivalent to using $w_w = w_o = 0.5$). Alternate criteria, such as those applied by [16] [17], could also be used if a reasonable baseline deterministic match is not available.
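Equations (1) through (5) translate directly into code; a minimal sketch:

```python
def sq_residual_of(data, sim):
    """Sum-of-squared-residuals objective, the form of Equations (1)-(4);
    `data` and `sim` are equal-length sequences of rates or cumulatives."""
    return sum((d - s) ** 2 for d, s in zip(data, sim))

def summed_rate_of(qw_data, qw_sim, qo_data, qo_sim, w_w=0.5, w_o=0.5):
    """Equation (5): weighted sum of the water and oil rate OFs. The
    defaults reproduce the 1:1 weighting used in the text (w_w = w_o = 0.5)."""
    return (w_w * sq_residual_of(qw_data, qw_sim)
            + w_o * sq_residual_of(qo_data, qo_sim))
```

Because water rates are roughly an order of magnitude larger than oil rates here (as noted later in the Pareto discussion), unequal weights could be used to rebalance Equation (5); the study deliberately keeps 1:1 for comparability with the MO algorithms.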
In these two papers, the authors used an R² value for each phase (in terms of rate) greater than 0.9 (as suggested by [18]) and a total cumulative production for each phase within 10% (as suggested by [19]) to determine a successful match. This approach was shown to be relatively successful in the studies by [16] [17]. However, some successful solutions in both cases were found where a high R² value existed but the match was poor, due to the inherent nature of R² as a match fit indicator, especially when dealing with highly nonlinear problems with a large number of data points. For the current study, 100,000 iterations were conducted, and fairly strict criteria were enforced to obtain a successful match, including the following (the solution must be better than the deterministic solution for both phases):
・ $OF_{q_w}<OF_{q_w,deterministic}$
・ $OF_{q_o}<OF_{q_o,deterministic}$
・ $OF_{Q_w}<OF_{Q_w,deterministic}$
・ $OF_{Q_o}<OF_{Q_o,deterministic}$
Using all four of these criteria, only 79 matches were found (~0.1%), while if only the two rate criteria were used, as is usually done with the assisted history-matching techniques, ~10× the number of matches were found (~1%). Based on these results, it is clear that the random behaviour of MC simulation is not particularly efficient in finding optimal solutions, making the use of modern assisted history-matching techniques desirable, particularly when a deterministic solution is not available. For the remainder of this paper, only the two rate OFs will be used in the application of all algorithms, because adding more OFs can often cause these algorithms to converge slowly and creates further objective conflict, potentially leading to a less desirable solution.
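The acceptance test above reduces to a simple comparison against the deterministic OF values. A minimal sketch (the OF names are shorthand for Equations (1)-(4)):

```python
def is_successful_match(ofs, ofs_det, rates_only=False):
    """Accept a stochastic case only if it beats the deterministic solution,
    mirroring the four criteria listed above. `ofs` and `ofs_det` map the
    OF names ('qw', 'qo', 'Qw', 'Qo') to their values; `rates_only=True`
    applies just the two rate criteria, as the assisted techniques do."""
    keys = ["qw", "qo"] if rates_only else ["qw", "qo", "Qw", "Qo"]
    return all(ofs[k] < ofs_det[k] for k in keys)
```

Applied over 100,000 MC cases, the all-four version yields the stricter (~0.1%) success set and the rates-only version the larger (~1%) set quoted in the text.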
The parameters for the deterministic match, as well as the best 5 matches (in order of increasing OF value) when considering the summation of the two OFs, are shown below in Table 5, along with the mean, standard deviation and P10/P90 ratio of the 861 acceptable matches. The history-matches associated with the top 10 MC simulations, as well as the deterministic history-match, are shown in Figure 5. For later comparison to the assisted history-matching results, the average of the top five iterations was assumed to represent the best solution found using MC simulation, as each of these solutions has a total OF within 1% of the others. The values for each of the uncertain parameters are provided in Table 6. The average summed rate OF is ~14% lower than the deterministic solution.
Table 5. Stochastic history-match parameters.
Table 6. Average values of top 5 Monte Carlo simulations.
Figure 5. Stochastic flowback history-match: (a) Water production rates; (b) Cumulative water production; (c) Oil production rates; (d) Cumulative oil production; (e) Gas production rates; and (f) Cumulative gas production.
The parameter distributions generated from the stochastic history-matching exercise are provided in Figure 6. R² values are shown to indicate the lognormal nature of the output distributions, and the deterministic and mean values are provided for reference. Note that the parameter distribution for BBT drainage area is not provided because the focus is on the half-length calculated from the drainage area (using the assumed shape and geometry constraints), since half-length is one of the key parameters controlling long-term production of the well. From Table 5 and Figure 6 it can be seen that the range of values used to match the flowback data is fairly small (P10/P90 ≤ 2).
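The P10/P90 spread statistic quoted above can be sketched as follows. Note that oil-and-gas convention sometimes labels the optimistic (high) value P10; the sketch sidesteps the naming question by always dividing the 90th percentile by the 10th, so the ratio is ≥ 1 either way.

```python
def percentile(values, p):
    """Percentile (0-100) with linear interpolation on a sorted copy."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    f = int(k)
    return s[f] + (k - f) * (s[min(f + 1, len(s) - 1)] - s[f])

def p10_p90_ratio(values):
    """Spread measure used above, computed as 90th over 10th percentile
    so it is independent of which tail is labeled 'P10'."""
    return percentile(values, 90) / percentile(values, 10)
```

A ratio near 1 indicates a tightly constrained parameter; the text’s P10/P90 ≤ 2 observation means none of the matched parameters varied by more than about a factor of two across the 861 acceptable matches.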
Further, it can be seen that the average values of the 861 successful matches are within 5% of the deterministic match, with the exception of the relative permeability exponents (n’ = 17% and m’ = 6%), which are sensitive to small changes due to the small magnitude of their values. Some key results to point out are as follows:
・ Fracture permeability covers the entire input distribution, suggesting that fracture permeability may fall outside the search space. However, very few matches beyond the selected range were found during the screening phase.
・ Breakthrough pressures, including the deterministic match, are greater than reservoir pressure estimated from DFIT analysis. Values (other than the deterministic match) also fell near the upper limit of 4100 psia, suggesting a better match to the available data could be achieved using a breakthrough pressure of 4100 psia. However, experimentation with a variety of fracture parameters suggested that breakthrough pressures greater than 4100 psia led to breakthrough in the model significantly before breakthrough in the actual data, which led to setting the upper limit at the selected value. A breakthrough pressure of 4100 psia still yields a breakthrough earlier than the data; however, the fact that flow initiates at ~36 STB/D, which is higher than other similarly completed wells on the same pad, suggests that some early-time hydrocarbon data may not have been recorded. An improved late-time match was observed with earlier breakthrough. Overall, the results suggest a near-fracture supercharge of up to 10% following an 11-day shut-in between stimulation and the onset of flowback, in which the bridge plugs were milled out.
Figure 6. Stochastic flowback history-match parameter distributions: (a) Fracture permeability; (b) Breakthrough pressure; (c) BBT half-length; (d) Corey oil relative permeability exponent, n’; (e) Corey water relative permeability exponent, m’.
The supercharge has been shown to be much higher in some formations, depending on factors such as the stimulation pumped, the shut-in time between stimulation and flowback, and other reservoir and fluid properties.
・ BBT half-length values fell in the range of 413 to 441 ft, suggesting that a BBT half-length less than 400 ft is unlikely and that a high degree of fracture efficiency was achieved.
・ Fracture relative permeability exponents to oil fall in a tight band between 1.1 and 1.6, suggesting minimal potential variability in this parameter.
・ Fracture relative permeability exponents to water are far less constrained than those to oil, falling between 2.1 and 9, although they are significantly higher than those to oil. This has been observed in nearly all wells analyzed using these methods. In this case it can be seen that the values between the P10 (3.5) and P90 (7.2) follow a lognormal distribution and yield a P10/P90 ratio of ~2; 20% of the solutions fell outside this range and may be considered as outliers.
Assisted History Matching. In addition to the MC simulation approach demonstrated above, five assisted history-matching algorithms were applied in an attempt to find the best possible history-match to the same flowback data set discussed above. Two types of algorithms were used in this analysis (gradient-based and evolutionary), with a total of five techniques being tested: 1) Microsoft® Excel’s SO Gradient-based (GRG2) algorithm (GRG Nonlinear Solver); 2) Microsoft® Excel’s SO Evolutionary Solver; 3) Palisade® Evolver’s SO GA; 4) GAPS MO GA (based on the NSGA-II algorithm); and 5) Palisade® Evolver’s SO OptQuest™ algorithm. For this analysis, the lower and upper bounds given above in Table 4 are used as constraints on the algorithms, and no further constraints were applied.
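The Corey-type fracture relative permeability curves whose exponents (n’ for oil, m’ for water) are discussed in the bullets above can be sketched as follows. The endpoint saturations and maximum kr values default to the simplest case (0 and 1) and are illustrative assumptions, not the study’s fitted values.

```python
def corey_krw(sw, swc=0.0, sor=0.0, krw_max=1.0, m=2.0):
    """Corey water relative permeability with exponent m (m' in the text).
    swc/sor are connate-water and residual-oil saturations (assumed 0 here)."""
    swn = (sw - swc) / (1.0 - swc - sor)        # normalized water saturation
    swn = min(max(swn, 0.0), 1.0)               # clamp to [0, 1]
    return krw_max * swn ** m

def corey_kro(sw, swc=0.0, sor=0.0, kro_max=1.0, n=2.0):
    """Corey oil relative permeability with exponent n (n' in the text)."""
    son = (1.0 - sw - sor) / (1.0 - swc - sor)  # normalized oil saturation
    son = min(max(son, 0.0), 1.0)
    return kro_max * son ** n
```

An exponent of 1 gives the straight-line curves used later as initial guesses; the matched water exponents of 2.1 to 9 bend the water curve sharply downward, strongly suppressing water mobility at intermediate saturations relative to oil.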
The same initial guesses for the uncertain parameters (Table 7) are used to seed each of the algorithms, although many evolutionary algorithms do not require an initial guess, as they generate an initial population based on the constraints on the uncertain parameters. Most available evolutionary algorithms are implemented in a way that the initial guess will be a member of the first population.
Table 7. Initial guesses for assisted history-matching parameters.
As was discussed previously, and as will be demonstrated below, the initial guess is critical to achieving good results from the GRG algorithm, because these algorithms will tend to find the closest local minimum in the OF (owing to the downhill nature of the algorithm). The initial guesses were selected based on the following criteria:
・ Fracture permeability: RTA of early-time flowback data suggested a maximum fracture permeability of ~3500 md (as was used in the deterministic history-match), and therefore a slightly lower value was selected for this application.
・ Breakthrough pressure: DFIT analysis suggested an initial reservoir pressure of ~3700 psia, and therefore a 200 psia supercharge effect was assumed (~5%).
・ Drainage area: set based on results of the FMB in the deterministic analysis.
・ Relative permeability exponents: straight-line relative permeability curves were assumed, as may be expected for homogeneous, perfectly planar fractures under ideal flowing conditions.
The results of each algorithm will be discussed, followed by a comparison of the results of each algorithm, as well as the average results of the top five MC simulations. Microsoft® Excel’s GRG Nonlinear Solver. As discussed previously, the initial guess is critical to the quality of the result using this type of algorithm, due to the “downhill” nature of the algorithm and its tendency to get trapped in local minima. This impact will be demonstrated in this section.
Due to the deficiencies of this algorithm in solving complex problems with multiple minima, a poor result was expected using the initial guesses shown in Table 7. However, it was interesting to determine whether a reasonable quality initial guess (i.e. the deterministic solution) could be used to converge on the absolute minimum and ultimately find an optimized solution. This is of interest because these algorithms run very quickly compared to GAs due to their simple nature, and are built directly into Microsoft® Excel™, allowing for fast and simple application following the deterministic history-matching exercise. To test the capacity of the algorithm for solving this problem, two runs were completed. In the first run the initial guesses shown in Table 8 were used, and in the second run, the deterministic solution (see Table 2) was used. In both cases the algorithm converged to a solution very quickly, given the speed of the tool being used, suggesting very few iterations were required to locate a minimum, although the exact iteration count is not provided by the standard version of Excel™ unless the algorithm is stopped at each iteration (which was not done in this case). The initial guess and final solution for the two sets of input parameters are provided in Table 8 and Table 9, with the solutions being compared to actual data and the deterministic history-match in Figure 7.
Table 8. Initial guesses and final solutions for Attempt #1 using EXCEL’s GRG non-linear solver.
From Table 8 and Table 9 it can be seen that the final solution yields a 4% higher summed rate OF than the deterministic solution when using the initial guesses shown in Table 7, while the optimal solution using the deterministic match as the initial guess yields a 16% reduction in the summed rate OF, which equates to an ~2% improvement over any of the MC simulations.
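The initial-guess sensitivity demonstrated by the two attempts above can be illustrated on a toy one-dimensional objective with two minima of different depth. This is a conceptual stand-in, not the GRG2 algorithm or the actual five-parameter flowback OF: plain gradient descent with a numerical derivative shares the relevant "downhill only" behaviour.

```python
def grad_descent(f, x0, lr=0.05, iters=2000, h=1e-6):
    """Plain gradient descent with a central-difference derivative. Like a
    GRG-type solver, it only moves downhill, so it settles in whichever
    basin the initial guess x0 sits in."""
    x = x0
    for _ in range(iters):
        g = (f(x + h) - f(x - h)) / (2 * h)
        x -= lr * g
    return x

# Double-well stand-in for the OF surface: minima near x ~ +0.96 (local)
# and x ~ -1.04 (global), separated by a ridge near x ~ 0.08.
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x

x_bad = grad_descent(f, 0.5)    # poor seed (Attempt #1 analog) -> local minimum
x_good = grad_descent(f, -0.5)  # good seed (Attempt #2 analog) -> global minimum
```

The two runs land in different basins with different OF values, mirroring why Attempt #2 (seeded with the deterministic solution) succeeded where Attempt #1 did not.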
From Figure 7 it can be seen that Attempt #1 yields a very poor history-match, especially to the hydrocarbon phases, while Attempt #2 yields an excellent history-match to all three phases. As will be seen in the coming sections, the gradient solver replicated the results of the evolutionary algorithms when seeded with the deterministic history-match as an initial guess. This suggests that no local minima exist between the deterministic solution and the absolute minimum. Further, this type of method may be used to optimize a history-match once a reasonable deterministic solution is found, although additional testing would be required to further substantiate this claim.
Table 9. Initial guesses and final solutions for Attempt #2 using EXCEL’s GRG non-linear solver.
Microsoft® Excel’s Evolutionary Solver. In this section the results of Excel’s Evolutionary Solver will be demonstrated. This Solver algorithm is the first of two SO GAs tested in this work. This solver also uses several classical optimization methods to attempt to improve upon the solutions found by the GA, thus making it a hybrid GA. As discussed previously, details of how Excel’s Evolutionary Solver works are not readily available, and very little assistance was provided by the developer to help understand exactly which techniques are employed. Given that this is a time-based, rather than a generation-based, algorithm, the exact number of iterations conducted is unknown, although the algorithm converged significantly faster than the other algorithms tested, suggesting that significantly fewer than 10,000 iterations were conducted in finding the best solution. The input parameters used for this algorithm are shown below in Table 10. A mutation rate of 15% was selected for all GAs, as this was preprogrammed into the version of the GAPS algorithm which was used in this work.
A population size of 100 was used for all of the population-based algorithms, based on the suggestions of Kanfar and Clarkson (2016). Max time without improvement was set to a high value to give the algorithm sufficient time to search for a better solution, given the calculation speed of the spreadsheet-based tool used in this work (~30 seconds/iteration). Based on the values provided in Table 10, and the approximate run speed of the spreadsheet, ~300 iterations were allowed to find an improved solution; alternatively, the algorithm terminates once the maximum change of the combined OF falls below 0.01%. In this case it is unclear whether the algorithm terminated based on the time or convergence criteria. The same convergence criteria were used with Palisade’s algorithms, which will be discussed in a coming section. As with other SO algorithms, a single best solution is found by the algorithm. The parameters resulting from the optimization are found in Table 11, and the resulting combined OF is ~16% lower than that of the deterministic match.
Figure 7. Gradient solver flowback history-match: (a) Water production rates; (b) Cumulative water production; (c) Oil production rates; (d) Cumulative oil production; (e) Gas production rates; and (f) Cumulative gas production.
Table 10. Input parameters for Excel’s single-objective Evolutionary Solver.
Table 11. Optimal match parameters for Excel’s single-objective Evolutionary Solver.
Palisade® Evolver’s Genetic Algorithm. In this section the results of Palisade® Evolver’s SO GA will be demonstrated. Evolver™ is the second SO GA used in this work. Much like Excel’s Evolutionary Solver, Evolver™ uses a steady-state approach, which the company has found to work as well as or better than the generational approach. Further, given that this is a proprietary commercial tool, details on the exact workings of the algorithm are not readily available. Based on the information provided by the developer, the algorithm operates in a manner comparable to a basic GA, although several specialty operators are included to improve the results of the algorithm. The algorithm is trial-based rather than generation-based, and therefore, to mimic the generational approach used by the GAPS MO GA, 10,000 trials were conducted (equivalent to 100 generations with a population of 100). A convergence criterion for maximum change in the OF is also used as an input for termination of the algorithm, although it was not reached. The input parameters used for this algorithm are shown below in Table 12. A cross-over rate of 50% is used, as this is the default setting in the program, meaning that each child receives half of its genes from each parent. Changing the cross-over rate could significantly impact algorithm performance, and it can be changed during an optimization run. As with other SO algorithms, a single best solution is found by the algorithm. The parameters resulting from the optimization are given in Table 13, and the resulting combined OF is ~16% lower than that of the deterministic match. The best solution was found in the 7455th trial, although only 245 trials were required to get within less than 1% of the best solution, suggesting that significantly fewer trials could have been run for this particular scenario. Fewer trials, however, would limit the search extent of the algorithm, which may lead to poor results in some cases, as the majority of the early trials produce significantly higher OF values. Figure 8(a) shows the average and minimum OF for the 100 equivalent generations (100 trials is equal to 1 generation). From the average curve, the steady-state nature of the algorithm becomes apparent.
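The steady-state, trial-based behaviour described above can be sketched with a toy GA using the 15% mutation rate and 50% uniform crossover mentioned in the text. This is a generic sketch, not Evolver’s proprietary implementation: each trial breeds one child from two random parents and, if it improves on the worst member, replaces it (so the population evolves continuously rather than in generations). The sphere objective stands in for the flowback OF.

```python
import random

def steady_state_ga(objective, bounds, trials=3000, pop=100,
                    p_mut=0.15, seed=1):
    """Toy steady-state GA: uniform crossover (50/50 gene pick), mutation
    by redrawing a gene uniformly within its bounds, worst-member
    replacement. Returns (best individual, best OF value)."""
    rng = random.Random(seed)
    population = [[rng.uniform(lo, hi) for lo, hi in bounds]
                  for _ in range(pop)]
    scores = [objective(ind) for ind in population]
    for _ in range(trials):
        pa, pb = rng.sample(range(pop), 2)       # two random parents
        child = []
        for j, (lo, hi) in enumerate(bounds):
            gene = population[pa][j] if rng.random() < 0.5 else population[pb][j]
            if rng.random() < p_mut:             # 15% mutation rate
                gene = rng.uniform(lo, hi)
            child.append(gene)
        c_score = objective(child)
        worst = max(range(pop), key=scores.__getitem__)
        if c_score < scores[worst]:              # steady-state replacement
            population[worst], scores[worst] = child, c_score
    best = min(range(pop), key=scores.__getitem__)
    return population[best], scores[best]

# Stand-in objective: minimum at (3, 3, 3) inside bounds [0, 10]^3.
sphere = lambda x: sum((v - 3.0) ** 2 for v in x)
best, best_of = steady_state_ga(sphere, [(0.0, 10.0)] * 3)
```

As with Evolver’s run, most of the improvement happens early; the remaining trials mainly widen the searched region.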
Unlike a generational GA, where one would expect to see the generational average decrease over time, in this case the average decreases for approximately 15 equivalent generations before beginning to fluctuate between 37 million and 47 million for the remaining 85 equivalent generations. From the minimum curve it can be seen that a value within 1% of the minimum is found within the third equivalent generation and remains relatively constant for the remainder of the equivalent generations. This result can also be seen by plotting the algorithm’s improvement progress (Figure 8(b)). The logarithmic x-axis is used to better show the optimization progression. Lastly, Figure 8(c) provides the OF value per trial, and it can again be seen that values approaching the minimum are found quite quickly and continue to be found throughout the remainder of the optimization.
Table 12. Input parameters for Evolver’s single-objective genetic algorithm.
Table 13. Best match parameters for Evolver’s single-objective genetic algorithm.
Figure 8. Evolver™ SO GA results: (a) By equivalent generation, showing both the equivalent generation average and minimum; (b) By progression step; and (c) By trial, also showing the minimum achieved value.
GAPS Multi-Objective Genetic Algorithm. In this section, the results of the only MO GA tested will be demonstrated. This is the GAPS algorithm developed by Mohammed Kanfar for the Tight Oil Consortium at the University of Calgary, and it is based on the NSGA-II algorithm, as discussed previously. The benefits of using MO algorithms were discussed previously, so in this section the focus will be solely on the results of the algorithm. Although it is common practice in the application of GAs to run half as many generations as the population size, in this application the number of generations was set equal to the population size, to allow the algorithm to “dig deeper” towards an absolute minimum.
Note that larger population sizes allow the algorithm to explore further in the search space. The impact of running more generations will be discussed below. The algorithm was run with 100 generations with populations of 100, following the recommendations of [5], who ran 50 generations with populations of 100 in a similar application using numerical rather than analytical simulation (for a total of 10,000 runs). The input parameters used for this algorithm are given in Table 14. As is the case with all MO GAs, the final generation does not converge to a single solution, but instead converges to a Pareto Front of nondominated, mathematically-equivalent solutions. In this case, the Pareto Front is convex in nature, which suggests that the two phase-rate objectives are conflicting (a straight line would suggest non-conflicting OFs). To converge on a single best solution, the solutions were filtered, removing solutions that have an OF higher than a certain threshold (with the threshold being continuously reduced until only several solutions remained around the corner point of the Pareto Front), and then visual inspection was used to pick the final solution. There are currently no methods available in the literature for selecting the single best solution, and therefore an approach similar to that used by [5] was used in this application, due to the similarity of the problems. The evolution of the Pareto Front from generation to generation will first be investigated. The advancing Pareto Front is shown in Figure 9. Figure 9(a) provides the final generation, along with the other generations shown in groups of ten. From this plot it can be seen that there is a large amount of scatter in the first 10 generations, but by the second set of ten generations, convergence on the ultimate Pareto Front begins.
For the Pareto Front, a semi-log presentation was chosen, with the oil rate OF on a log scale and the water rate OF on a Cartesian scale, as this was found to best demonstrate the results in this case (however, a Cartesian plot will be used in Figure 9(d)).
Table 14. Input parameters for GAPS multi-objective genetic algorithm.
Figure 9. Pareto diagram for flowback history-match: (a) Generations grouped into sets of 10 generations, showing significant scatter in the first 10 generations; (b) Generations grouped into sets of 5 generations, starting at generation 11; (c) Every 10th generation, to show advancement of the Pareto Front over time; (d) Every 10th generation between Generation 50 and Generation 100, to demonstrate convergence on the ultimate Pareto Front. The single best solution is shown with a star.
In Figure 9(b), the first ten generations are eliminated and the remaining 90 generations are broken into groups of 5. From this plot it can be seen that there is consistent improvement for approximately 50 generations prior to converging on the ultimate Pareto Front. This can also be seen in Figure 9(c), which shows every tenth generation starting at Generation 10. Finally, in Figure 9(d), the final 50 generations are shown, and it can be seen that there is no obvious improvement beyond 50 generations; therefore, the population-to-generation ratio of two used by [4], as well as many others when applying GAs, is suggested for future applications of this algorithm. This will reduce runtime by 50% without having a significant impact on the final generation results. The single best solution selected using the method described above is shown with a star in Figure 9(d), and it can be seen that this point is near the corner point of the Pareto Front, suggesting a relatively equivalent trade-off between the two objectives. From Figure 9 it can also be seen that the average value of the water rate OF is ~20× greater than that for the oil rate OF.
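The nondominated filtering behind the Pareto Fronts above, and a crude stand-in for the “corner point” pick, can be sketched as follows. Note the actual study used threshold filtering plus visual inspection for the final pick; the distance-to-origin rule below is only an illustrative proxy, and the dominance check assumes no duplicate points.

```python
def pareto_front(points):
    """Nondominated subset of (OF1, OF2) pairs: a point is kept unless some
    other point is <= in both objectives (strictly better in at least one;
    assumes distinct points, so q != p suffices)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

def corner_point(front):
    """Illustrative corner-point pick: normalize each objective to [0, 1]
    over the front, then take the point closest to the ideal (0, 0)."""
    f1 = [p[0] for p in front]
    f2 = [p[1] for p in front]
    def norm(v, lo, hi):
        return 0.0 if hi == lo else (v - lo) / (hi - lo)
    return min(front,
               key=lambda p: norm(p[0], min(f1), max(f1)) ** 2
                           + norm(p[1], min(f2), max(f2)) ** 2)
```

A convex front (as observed here) means improving one phase’s OF costs the other, which is exactly the objective conflict the text attributes to the summed-OF formulation.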
This is due to the fact that water rates are much larger than the oil rates, and therefore visually similar deviations in rate will be approximately an order of magnitude higher. This information could have been used with the SO algorithms to reduce potential bias towards achieving a better water than hydrocarbon history-match, although an equal weighting appears to still be effective for this problem, based on the results of the preceding and following sections. Next, Generation 100 will be investigated in greater detail, focusing primarily on the extent of variability in the key parameter estimates during this final generation. The parameters corresponding to the best match, and the average, standard deviation and P10/P90 ratio for Generation 100, are given in Table 15. The best match leads to a summed OF which is ~16% lower than that of the deterministic match. Palisade® Evolver’s OptQuest™ Algorithm. In this section the results of Palisade® Evolver’s OptQuest™ will be demonstrated. Much like Evolver’s GA, this is an SO algorithm. This algorithm has its basis in Scatter Search, which shares many similarities with GAs, although it also includes integer programming, Tabu Search and an Artificial Neural Network to improve its results and efficiency, as discussed previously. The algorithm is trial-based, much like Evolver’s GA. In this case, since there was no basis for comparison of the algorithm, the maximum number of trials was set to a very large value (100,000), allowing the convergence criterion for maximum change in the OF to control the termination of the optimization. The input parameters used for this algorithm are shown below in Table 16. As with other SO algorithms, a single best solution is found by the algorithm. The parameters resulting from the optimization are found in Table 17, and the resulting combined OF is ~16% lower than that of the deterministic match.
In this particular case, 33,756 trials were required to reach the set criteria, although a value with a combined OF within 1% of the optimal value was found in 13,421 trials, which equates to a ~60% reduction in optimization time; many significantly higher OF values were, however, found in the final 20,000 trials.

Table 15. GAPS multi-objective genetic algorithm Generation 100 results.

Table 16. Input parameters for Palisade's single-objective genetic algorithm.

Table 17. Best match parameters for Evolver's single-objective OptQuest algorithm.

To allow comparison with the GAs, the results were filtered into "equivalent generations" of 100 trials. Figure 10(a) provides the average and minimum OF for the 338 "equivalent generations" (the 338th "equivalent generation" only contains 56 trials). From the average curve, the differences between OptQuest's performance and a generational GA become apparent. Unlike a generational GA, where one would expect to see the generational average go down over time, in this case the average decreases for approximately 10 "equivalent generations" prior to stabilizing, with four groups of "equivalent generations" having significantly higher values, which occur when the algorithm tries radically different areas of the search space. This is characteristic of a Scatter Search algorithm, which utilizes Tabu Search and an Artificial Neural Network to stop the algorithm from going back to areas of the search space which either have yielded, or are expected to yield, inferior solutions. From the minimum curve, it can be seen that a value within 1% of the minimum is found in the 133rd "equivalent generation" and remains relatively constant for the remainder of the "equivalent generations". This result can also be seen by plotting the algorithm's improvement progress, which is shown in Figure 10(b).
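The "equivalent generation" filtering described above can be reproduced in a few lines (a minimal sketch; the OF series below is synthetic):

```python
def equivalent_generations(of_values, size=100):
    """Group a trial-ordered list of OF values into 'equivalent
    generations' of `size` trials and return (average, minimum) per
    group; the last group may be short (e.g. 56 trials above)."""
    groups = [of_values[i:i + size] for i in range(0, len(of_values), size)]
    return [(sum(g) / len(g), min(g)) for g in groups]

# synthetic OF series: 250 trials -> 3 groups of 100, 100 and 50 trials
series = [100.0 - 0.3 * i for i in range(250)]
stats = equivalent_generations(series)
```

Plotting the per-group average and minimum against the group index gives curves analogous to Figure 10(a).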
Lastly, Figure 10(c) shows the OF value per trial, and it can again be seen that values approaching the minimum are found quite quickly and continue to be found throughout the remainder of the optimization. The same impact can be seen as in the "equivalent generation" case in Figure 10.

Figure 10. Evolver's SO OptQuest results: (a) By equivalent generation, showing both the equivalent-generation average and minimum; (b) By progression step; and (c) By trial, also showing the minimum achieved value.

Summary of Results. In the previous sections the results of several techniques, including: 1) Deterministic Analysis; 2) MC MO simulation (Palisade^® @RISK^TM); 3) Microsoft^® Excel's SO Gradient-based (GRG2) algorithm (GRG Nonlinear Solver); 4) Microsoft^® Excel's SO Evolutionary Solver; 5) Palisade^® Evolver's SO GA; 6) GAPS MO GA (based on the NSGA-II algorithm); and 7) Palisade^® Evolver's SO OptQuest^TM algorithm, were discussed individually. In this section the results of the different techniques will be compared. In Figure 11 the history-match to both the water and oil phases is shown for each of the techniques, while in Figure 12 the key parameters and total OFs are provided. The results from Excel's GRG Nonlinear Solver have not been included, as this required manipulation of the initial guess to achieve an acceptable history-match (although its key match parameters will be discussed below). From Figure 11 it can be seen that the deterministic match matches the early-time hydrocarbon production better than the other algorithms, although it provides a far inferior late-time match to the two hydrocarbon phases. The late-time hydrocarbon match can be improved further by increasing breakthrough pressure, although it was determined that this leads to premature breakthrough by the model, and therefore an upper limit of 4100 psia was enforced. It can also be seen that the water match from each of the solving techniques is similar.
Further, the hydrocarbon rate profiles for all but the deterministic history-match look similar.

Figure 11. Flowback history-match using different algorithms: (a) Water rate match; (b) Cumulative water production match; (c) Oil rate match; (d) Cumulative oil production match; (e) Gas rate match; and (f) Cumulative gas production match.

Figure 12. Flowback key history-match parameters found using different algorithms: (a) Fracture permeability; (b) BBT half-length; (c) Breakthrough pressure; (d) Oil relative permeability exponent, n'; (e) Water relative permeability exponent, m; and (f) Total OF value.

Additional key observations based on Figure 12 are presented below:
・ Fracture permeability ranges from 3102 md to 3190 md, with the lowest value coming from Evolver's GA and the highest coming from the average of the top five MC simulations. Each of the algorithms finds a fracture permeability ~350 md lower than the deterministic match (~10% difference). The percent variability across the five algorithms is ~2.5% when compared to the deterministic match.
・ Breakthrough pressure approaches the upper limit for each of the five algorithms and is significantly higher than the deterministic match (~7%). An earlier breakthrough yields a better late-time oil match, which is where oil rates are highest and therefore have the greatest potential to add to the OF value. This is also the piece of data where the deterministic solution deviates most from the measured data. As mentioned previously, a breakthrough pressure of greater than 4100 psia leads to premature breakthrough, although it also yields a better late-time history-match. A breakthrough pressure of 4100 psia suggests a 10% supercharge of the formation directly surrounding the fractures, which results from pumping the fracture at significantly higher pressures than formation pressure (mini water-flood effect).
・ BBT half-length is nearly constant across the six techniques, ranging from 437 - 441 ft, which is to be expected given the rather definitive results of the FMB shown above. The deterministic history-match used the same BBT half-length as the four main assisted history-matching techniques.
・ The oil relative permeability exponent shows almost no variability across the five algorithms, ranging from 1.43 - 1.46. This is ~20% higher than the value used in the deterministic history-match.
・ The water relative permeability exponent shows slightly more variability across the five algorithms, ranging from 5.30 - 5.87. Each of the algorithms predicted a water exponent exceeding that of the deterministic history-match by an average of ~4%. The percent variability across the five algorithms is ~11.4% when compared to the deterministic match.
・ The total OF for the four assisted history-matching algorithms was nearly identical, ranging from 34.7 - 35.4 million. The average of the top five MC simulations was ~2% higher than the other assisted history-matching techniques. The five different algorithms improved the total OF by 14.7% - 16.4%, although this suggests that the deterministic match still falls within the ±20% range often accepted in industry in this particular case.
The above results demonstrate that each of the algorithms finds a very similar optimal value for each of the key parameters, suggesting that this likely represents the global optimum. After reviewing the total OF, it is clear that there is significant benefit to applying these algorithms once bounds on the key parameters can be estimated. Another interesting observation is that the deterministic history-match yielded values within 10% of the optimal values for three out of the five uncertain parameters. The only exceptions are the relative permeability exponents to oil and water, which varied by ~20% and ~11% respectively.
This higher differential can be attributed to the low values of these exponents, making them particularly sensitive when calculating percent difference (although the absolute values were within 0.25 and 0.66 of the average optimal values, respectively). Based on the results shown above, it would be expected that application of Excel's GRG Non-Linear Solver in multi-restart mode would likely yield the same results. This was not tested to its full extent in this analysis, although using the deterministic history-match as an initial guess led to parameters similar to those found by the other algorithms. This result suggests that the multi-restart method would likely be successful in this problem, and also demonstrates that there are no local minima between the deterministic match and the global optimum.

4. Discussion

The basis of this work is the tool developed by [1] for analyzing multi-phase (water, oil and gas) flowback data from MFHWs following hydraulic fracture stimulation to estimate key fracture properties such as effective fracture half-length and fracture permeability. The base tool, with the modifications discussed previously, was then used to conduct a deterministic history-match. Following the deterministic history-match, MC simulation was used to determine the variability in the key history-match parameters which can be used to effectively match the data (rate OFs for oil and water lower than those of the deterministic match). Once this analysis was conducted, the results of the best MC simulations were compared to the results of several assisted history-matching techniques in an attempt to find the global optimum (which corresponds to the "true" fracture parameters, assuming the model and other hard inputs are correct). Algorithm complexity varied from Excel's GRG Non-Linear Solver, which is based on GRG, to SO and MO GAs, and an algorithm known as OptQuest^TM which combines several optimization techniques into a single algorithm.
It was demonstrated that each technique could essentially locate the same optimal set of parameters, suggesting that this corresponds to the absolute minimum rather than a local minimum, which in turn led to a significant improvement in history-matching over the deterministic analysis. Despite the versatility of the methods described, there are several areas which warrant further discussion. Two of the biggest challenges when applying MC simulation and other assisted history-matching techniques are: 1) selecting which variables to consider unknowns; and 2) developing an input distribution for the unknowns. These methods are typically most successful, and converge faster, when the number of inputs is limited to the minimum possible number with the smallest ranges, minimizing the search space for the algorithm. In the case of flowback analysis, there are many uncertain inputs, making this a difficult problem to solve using these methods; it is therefore important to select the most important parameters as uncertain (e.g. fracture half-length and conductivity), while assuming that some less critical inputs are constant (e.g. initial fracture pressure and fracture porosity). The next challenge is developing an input distribution for the uncertain parameters (particularly for MC simulation). In an ideal scenario, the input distributions can be developed from existing data, allowing for greater precision and ultimately better output results, although this requires a significant amount of analogous data. For some scenarios, such as history-matching long-term production from wells with a significant number of analogs which have all been analyzed, this is feasible. Further, in many cases parameters such as matrix permeability have been demonstrated extensively in the literature to follow a lognormal distribution.
Unfortunately, this is not the case with flowback analysis, where the data set is generally limited, or in many cases non-existent, due to the novelty of industry interest in analyzing this data and the lack of widespread (although rapidly growing) application. For example, the basic techniques used in this work have been applied by several companies, including in an SPE paper by [20]. Due to the lack of data for developing an input distribution, a simple uniform distribution was used in this work for the five selected input parameters, where the distribution range was limited as much as possible using available offset analysis as well as the deterministic history-match. As these methods gain further traction in industry, and are applied to more wells, developing better input distributions will likely become possible, making the application of these methods more versatile. Another challenge is determining an acceptable number of iterations (i.e. the number of generations and population size in the GAPS algorithm) that allows achieving reasonable results while minimizing run time, to make the application of the techniques to a large number of wells more feasible. In this work, the purpose was to demonstrate the applicability of the different techniques used, and therefore run time was not a consideration, although this will become more important as these techniques continue to gain traction in industry. In this case, other than the Excel^TM Solver methods, each technique required multiple days of run time, making the techniques not practically applicable to a large number of wells. Further, the tool is still in the research phase and could be made significantly more efficient (~1 iteration per second, comparable to other similar commercial tools), which would also help to significantly reduce run time.
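The uniform-distribution sampling described above can be sketched as follows. The parameter names and ranges are illustrative only, not the actual bounds used in this work:

```python
import random

def sample_parameters(bounds, n_trials, seed=0):
    """Draw uniform MC samples for each uncertain input, as done here
    where no data existed to justify a richer distribution.
    `bounds` maps parameter name -> (low, high)."""
    rng = random.Random(seed)
    return [{name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
            for _ in range(n_trials)]

# hypothetical bounds for the five uncertain inputs
bounds = {
    "k_f_md": (2800.0, 3600.0),     # fracture permeability
    "x_f_ft": (420.0, 460.0),       # BBT half-length
    "p_bt_psia": (3600.0, 4100.0),  # breakthrough pressure
    "n_oil": (1.0, 2.0),            # oil rel-perm exponent
    "m_water": (4.5, 6.5),          # water rel-perm exponent
}
trials = sample_parameters(bounds, n_trials=1000)
```

Each sampled dictionary would then be run through the flowback model and scored by the OF, with the best trials retained for comparison against the assisted history-matching results.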
Trying to determine an acceptable number of iterations is an area of future work which will require application to more than the several wells which have been analyzed using these techniques. In this paper, it was demonstrated that Excel's GRG Non-Linear Solver is highly ineffective when a relatively generic initial guess is used, as this algorithm will find the local minimum closest to the initial guess. When the deterministic solution was used as the initial guess, the algorithm converged to parameters similar to the other techniques applied, suggesting there are no local minima between the deterministic solution and the optimal solution. This may not always be the case, and in some applications a complete deterministic analysis may not be conducted prior to applying an assisted history-matching algorithm. The convergence speed of this algorithm makes it ideal, although its application clearly has limitations. One solution is to apply the multi-restart techniques discussed previously, where the algorithm is run from a series of different initial guesses in an attempt to find the global minimum. It is likely that a substantial number of restarts would be required to find the optimal solution for the flowback problem, and therefore extensive testing would be required before confidently applying this technique and determining how its run time compares to the other algorithms tested. The standard version of Solver available in Excel^TM does not offer a multi-restart option, although the developer of this solver (Frontline Solvers) offers more advanced versions which include this option as well as further improvements and additional algorithms. In this work, six techniques were applied for assisted history-matching purposes. These methods were selected as they were either developed within the research group (GAPS algorithm) or commercially available from reputable vendors whose products are used extensively in industry (Microsoft^® and Palisade^®).
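The multi-restart strategy discussed above can be sketched in a few lines. The local search below is a crude derivative-free stand-in for a gradient-based solver such as GRG2, and the objective is a toy 1-D function chosen only because it has both a local and a global minimum:

```python
import random

def local_descent(f, x0, step=0.1, iters=200, seed=42):
    """Crude local search (random small moves, accept only
    improvements), standing in for a gradient-based solver."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

def multi_restart(f, bounds, restarts=20, seed=1):
    """Run the local search from many random initial guesses and keep
    the best result -- the multi-restart strategy discussed above."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(restarts):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        x, fx = local_descent(f, x0)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# toy objective with a local minimum near x = 1.35 (f ~ -2.6) and the
# global minimum near x = -1.47 (f ~ -5.44)
f = lambda x: x[0] ** 4 - 4 * x[0] ** 2 + x[0]
x_best, f_best = multi_restart(f, bounds=[(-5.0, 5.0)])
```

A single descent started in the wrong basin stops at the local minimum; restarting from many random points recovers the global one, which is exactly the behavior observed when the GRG solver was given a generic versus a good initial guess.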
Future work on this topic will focus on the application of additional algorithms available both commercially and in the literature. Testing of further MO GAs would be of particular interest, as they overcome the biggest challenge of the SO algorithms which were the primary focus of this work. Testing further techniques is warranted, seeking algorithms which converge faster and/or are potentially more effective in consistently finding the optimal solution. A detailed investigation, using multiple examples (both simulated and field examples), will also be conducted to determine the number of iterations required to achieve the desired result. This will allow for a better comparison of both convergence speed and accuracy, which was not directly addressed in this paper.

5. Conclusions

In this work, several algorithms were tested for the purpose of uncertainty analysis and assisted history-matching of flowback data. In previous work, [16] [17] applied MC simulation to the same data set, although fewer iterations were conducted with significantly more input parameters and wider bounds, bringing the results into question. The GAPS algorithm has also been tested by [4] for history-matching three-phase flowback with a numerical simulator, and [3] attempted to use a combination of algorithms to try to decouple parameters in one of their analysis tools, although more rigorous methods could have been applied. Other than these limited studies, uncertainty quantification and assisted history-matching have not been investigated for application to the flowback problem. The main conclusions of this study are as follows:
・ MC simulation can effectively be applied for both uncertainty quantification and assisted history-matching, assuming enough trials are conducted to effectively cover the search space. For practical application, this limits the number of uncertain parameters and the distribution range for these parameters.
・ As anticipated, application of a gradient-based algorithm was not successful unless a very good initial guess was provided. This is due to the nature of the algorithm, limiting its application in the absence of the multi-restart feature.
・ Each of the techniques tested (excluding Excel's GRG Non-Linear Solver), including both SO and MO techniques, was able to converge to a very similar optimal solution, suggesting that they were likely finding the global optimum. There are often problems associated with applying SO algorithms to MO problems due to competing objectives, although this issue did not appear to arise in the analyzed well. It was demonstrated that each of these techniques provided a significant improvement in history-match quality over a single deterministic analysis, although deterministic history-matching is useful in determining which parameters should be considered uncertain and in constraining the ranges of these uncertain parameters.
・ Further testing is warranted to determine the widespread applicability of these techniques, and to reduce run time, making the application more desirable for industry.
・ Additional algorithms should be investigated for a larger number of wells to determine which techniques are most applicable to the flowback problem. Specifically, testing additional MO algorithms would be desirable, as these algorithms tend to better represent the problem. It is possible that MO algorithms other than the GAPS algorithm could provide better results for flowback analysis.

Acknowledgements. Jesse Williams-Kovacs would like to thank the University of Calgary for supporting this research. Chris Clarkson would like to acknowledge Encana/Shell and Alberta Innovates Technologies Futures (AITF) for support of his Chair position in Unconventional Gas and Light Oil Research at the University of Calgary, Department of Geoscience. The authors would also like to thank Mohammed Kanfar for providing the GAPS algorithm as well as technical support.
Finally, the sponsors of the Tight Oil Consortium (TOC), hosted at the University of Calgary, are acknowledged for their support.

Nomenclature
ABT = After-breakthrough
BBT = Before-breakthrough
CDF = Cumulative distribution function
DFIT = Diagnostic fracture injection test
FMB = Flowing material balance
FR = Flow-regime
GA = Genetic algorithm
GOR = Gas-oil ratio
GRG = Generalized reduced gradient
LTO = Light tight oil
MBT = Material balance time
MC = Monte Carlo
MINC = Multiple interacting continua approach
MO = Multi-objective
NSGA = Nondominated sorting genetic algorithm
PDF = Probability density function
PVT = Pressure-Volume-Temperature
RNP = Rate-normalized pressure
RNP' = Rate-normalized pressure derivative
RTA = Rate-transient analysis
SO = Single-objective

Field Variables
F[c] = Fracture conductivity, md-ft
m' = Corey water relative permeability constant for the fractures, dimensionless
n' = Corey oil relative permeability constant for the fractures, dimensionless
p = Pressure, psia
p[wf] = Sandface flowing pressure, psia
p^* = Extrapolated initial reservoir pressure, psia
q[o] = Oil production (surface) flowrate, STB/D
q[w] = Water production (surface) flowrate, STB/D
Q[o] = Cumulative oil production (surface), STB
Q[w] = Cumulative water production (surface), STB
r[wa] = Apparent wellbore radius (r[wa] = r[w] e^-s), ft
w = Objective function weighting factor, dimensionless
x[f] = Fracture half-length, ft

Dimensionless Variables
t[Dd] = Dimensionless decline time

Subscripts
ABT = After-breakthrough
BBT = Before-breakthrough
BT = Breakthrough
D = Dimensionless variable
f = Fracture
o = Oil
T = Total
w = Water
wf = Sandface
ultraviolet radiation
Variety of electromagnetic radiation with frequencies between a few and a few hundred quadrillion oscillations per second, corresponding to wavelengths between a few hundred billionths and a few billionths of a metre. Known in everyday life as the part of the radiation we receive from the sun that causes our skin to tan.

unified field theory
Collective designation for Einstein's unsuccessful attempts to formulate a theory in which gravity and other interactions, notably electromagnetism, are described in a unified manner – a theory in which gravity and electromagnetism would be no more than different facets of one and the same underlying structure, in the same manner in which magnetism and the electrostatic force are facets of a more general description of electromagnetism. After Einstein, quite a number of scientists have searched for a unified description of all interactions; the best-known modern incarnation of the idea of unification is string theory.

uniqueness theorems
Given a set of physical laws, one interesting class of question is aimed at finding out the variety of situations those laws allow. For example, is there only a single kind of rotating black hole, or do the laws of general relativity admit an infinite variety of such objects? Theorems addressing this kind of question are generally known as uniqueness theorems – in their purest form, they state that, given a certain set of physical laws and a certain set of additional conditions, there is no more than one configuration of spacetime and matter that fits the bill. In general relativity, the most famous such theorems are the black hole uniqueness theorems. They are explored in the spotlight text How many different kinds of black hole are there? A different aspect of the question of uniqueness is addressed in the spotlight text The many ways of building an empty, unchanging universe.
Static Pressure Vs. Dynamic Pressure Vs. Total Pressure

What is Static Pressure?
Static pressure is a fundamental concept in fluid mechanics that refers to the pressure exerted by a fluid when it is at rest or moving at a constant velocity. It is called "static" pressure because it does not account for the dynamic effects of fluid motion, such as changes in velocity or acceleration. Static pressure arises from the random motion and collisions of molecules within a fluid. As these molecules collide with the walls of a container or a surface, they exert a force perpendicular to the surface. The cumulative effect of these molecular collisions results in the static pressure. In practical terms, static pressure can be thought of as the pressure exerted by a fluid on a solid surface, such as the walls of a pipe or the surface of an object immersed in a fluid. It is an important parameter in various fields, including engineering, aerodynamics, and HVAC (heating, ventilation, and air conditioning) systems. Static pressure is typically measured in units of force per unit area, such as pascals (Pa) or pounds per square inch (psi). It is influenced by factors such as fluid density, the height of the fluid column (in cases where gravity is involved), and the geometry of the system. Understanding static pressure is crucial in engineering applications to ensure proper fluid flow, structural integrity, and system performance. It is often considered alongside other types of pressure, such as dynamic pressure (related to fluid motion) and total pressure (the sum of static and dynamic pressures), to analyze and design fluid systems effectively.

What is Dynamic Pressure?
Dynamic pressure is a concept in fluid mechanics that represents the pressure exerted by a fluid due to its motion or velocity. Unlike static pressure, which accounts for the pressure at rest, dynamic pressure considers the impact of fluid movement.
When a fluid flows or moves, it possesses kinetic energy associated with its velocity. This kinetic energy is converted into dynamic pressure, which represents the additional pressure exerted by the fluid due to its motion. The dynamic pressure can be understood as the pressure increase that would occur if the fluid were to be abruptly brought to rest. The dynamic pressure is influenced by the density of the fluid and the square of its velocity: as the fluid velocity increases, the dynamic pressure increases quadratically. This relationship is described by Bernoulli's equation, which relates the static pressure, dynamic pressure, and total pressure (the sum of static and dynamic pressures) in a fluid flow system. Dynamic pressure is an important parameter in various applications, including aerodynamics, hydraulics, and fluid dynamics. It helps in analyzing and predicting fluid behavior, such as the forces exerted on objects moving through a fluid, the performance of fluid machinery, and the design of aerodynamic surfaces and structures. In engineering and physics, dynamic pressure is often measured in units of force per unit area, such as pascals (Pa) or pounds per square inch (psi). Understanding dynamic pressure is crucial for designing efficient and safe fluid systems, optimizing vehicle performance, and ensuring structural integrity in applications where fluid motion plays a significant role.

What is Total Pressure?
Total pressure, also known as stagnation pressure or pitot pressure, is a concept in fluid mechanics that represents the sum of the static pressure and the dynamic pressure of a fluid flow. It is called "total" pressure because it takes into account both the pressure at rest and the pressure due to fluid motion. Total pressure accounts for the fact that when a fluid is in motion, its kinetic energy contributes to the overall pressure.
In addition to the static pressure (pressure at rest), which arises from the molecular collisions within the fluid, the dynamic pressure (pressure due to fluid motion) adds to the total pressure. The total pressure is often measured using a device called a Pitot tube, which consists of a tube facing into the fluid flow. The Pitot tube has one opening facing upstream to measure the stagnation or total pressure, and one or more additional openings facing perpendicular to the flow to measure the static pressure. By subtracting the static pressure from the total pressure, the dynamic pressure can be determined. The total pressure is a crucial parameter in various fluid flow applications, such as aerodynamics, hydraulics, and HVAC systems. It is used to calculate parameters like fluid velocity, volumetric flow rate, and energy losses in a system. The total pressure is also used to determine the efficiency of fluid machinery, evaluate the performance of aircraft and vehicles, and design ventilation and air conditioning systems. In engineering and physics, the total pressure is typically measured in units of force per unit area, such as pascals (Pa) or pounds per square inch (psi). By considering both static and dynamic pressures, total pressure provides a comprehensive understanding of the pressure conditions in a fluid flow, enabling accurate analysis, design, and optimization of fluid systems.

The Difference Between Static Pressure, Dynamic Pressure And Total Pressure
In the field of fluid mechanics, it is important to understand the differences between static pressure, dynamic pressure, and total pressure. Static pressure is the pressure exerted by a fluid when it is at rest. It is measured perpendicular to the surface of the fluid and is independent of the direction of flow. Dynamic pressure, on the other hand, is the pressure exerted by a fluid when it is in motion. It is measured parallel to the direction of flow and is dependent on the velocity of the fluid.
Total pressure, also known as stagnation pressure, is the sum of static pressure and dynamic pressure. This is the maximum pressure that a fluid can exert on an object when it is brought to a complete stop. The total pressure is measured using a pitot tube, which is a device that measures the velocity of a fluid and converts it into pressure. Understanding these three types of pressures is crucial in various applications such as aerodynamics, hydraulics, and ventilation. For example, in aerodynamics, static pressure is used to measure air pressure inside an aircraft cabin while dynamic pressure is used to calculate the lift force acting on an airplane wing. The total pressure is used to measure airspeed and altitude. In hydraulics, static pressure is used to measure the pressure in a pipeline while dynamic pressure is used to calculate the flow rate of a fluid. The total pressure is used to measure the efficiency of a hydraulic system. In ventilation systems, static pressure is used to measure the resistance of an air duct while dynamic pressure is used to calculate the airflow rate. The total pressure is used to measure the efficiency of a ventilation system. Dynamic Pressure Calculator To calculate dynamic pressure, you need to know the fluid density and the velocity of the fluid. Here's the formula to calculate dynamic pressure: Dynamic Pressure (q) = 0.5 * Density (ρ) * Velocity² (v²) - Dynamic Pressure (q) is the pressure due to fluid motion. - Density (ρ) is the density of the fluid. - Velocity (v) is the velocity of the fluid. To use the calculator, simply input the values of fluid density and velocity, and it will calculate the dynamic pressure for you. Here's an example: Density (ρ) = 1.2 kg/m³ Velocity (v) = 10 m/s Using the formula: Dynamic Pressure (q) = 0.5 * 1.2 * (10²) Dynamic Pressure (q) = 0.5 * 1.2 * 100 Dynamic Pressure (q) = 60 Pa Therefore, the dynamic pressure in this example is 60 Pascal (Pa). 
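The calculation above is easily scripted. The sketch below also inverts the formula to recover flow speed from Pitot-tube readings (total minus static pressure), as described earlier; the pressure readings used are illustrative values only, and incompressible flow is assumed:

```python
import math

def dynamic_pressure(density, velocity):
    """q = 0.5 * rho * v**2, the formula given above
    (SI units: kg/m^3 and m/s give Pa)."""
    return 0.5 * density * velocity ** 2

def pitot_velocity(p_total, p_static, density):
    """Flow speed from Pitot-tube readings: dynamic pressure is the
    total (stagnation) pressure minus the static pressure, and
    q = 0.5 * rho * v**2 rearranges to v = sqrt(2*q/rho)."""
    return math.sqrt(2.0 * (p_total - p_static) / density)

q = dynamic_pressure(density=1.2, velocity=10.0)  # ~60 Pa, matching the worked example
v = pitot_velocity(p_total=101385.0, p_static=101325.0, density=1.2)  # ~10 m/s
```

Note that the two functions are inverses of each other: feeding the 60 Pa dynamic pressure back through the Pitot relation recovers the original 10 m/s velocity.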
Keep in mind that the units used for density and velocity should be consistent (e.g., kg/m³ for density and m/s for velocity) to obtain the correct units for dynamic pressure. What Is A Dynamic Pressure Sensor? A dynamic pressure sensor is a device used to measure the pressure exerted by a fluid due to its motion or velocity. It is specifically designed to accurately capture and quantify the dynamic pressure in a fluid flow. Dynamic pressure sensors utilize various sensing technologies to convert the pressure into an electrical signal that can be measured and analyzed. Some common types of dynamic pressure sensors include piezoelectric sensors, piezoresistive sensors, and capacitive sensors. Piezoelectric sensors operate based on the principle of the piezoelectric effect, where certain materials generate an electrical charge when subjected to mechanical stress. When the fluid flow exerts pressure on the piezoelectric sensor, it generates an electrical charge proportional to the dynamic pressure. Piezoresistive sensors, on the other hand, employ the property of certain materials to change their electrical resistance in response to mechanical strain. These sensors contain piezoresistive elements that deform under fluid pressure, causing a change in resistance, which can be measured and correlated to the dynamic pressure. Capacitive sensors utilize changes in capacitance to measure pressure. These sensors consist of a diaphragm that deforms with fluid pressure, leading to a variation in the separation between capacitor plates. This change in capacitance is then detected and converted into an electrical signal representing the dynamic pressure. Dynamic pressure sensors find application in various fields, including aerospace, automotive, HVAC, wind tunnels, and fluid dynamics research. They are used to analyze fluid flow behavior, monitor performance in fluid systems, measure aerodynamic forces, and optimize designs for efficient and safe operation. 
It is important to select a dynamic pressure sensor suited to the specific application, considering factors such as pressure range, accuracy, response time, environmental conditions, and compatibility with data acquisition systems. Manufacturers and suppliers offer a range of dynamic pressure sensors tailored for different requirements.

Applications Of Dynamic Pressure Sensors

Dynamic pressure sensors have numerous applications across industries where the measurement of pressure due to fluid motion is critical. Some common applications of dynamic pressure sensors include:

1. Aerodynamics and Wind Tunnel Testing: Dynamic pressure sensors are used to measure the air pressure exerted on aircraft wings, fuselages, and other aerodynamic surfaces during wind tunnel testing. This data helps in analyzing and optimizing the aerodynamic performance of aircraft and spacecraft.

2. Automotive Testing: Dynamic pressure sensors play a crucial role in automotive applications such as airflow measurement, intake and exhaust system analysis, combustion analysis in engines, and tire aerodynamics. They help in evaluating vehicle performance and fuel efficiency, and in optimizing designs for better aerodynamics.

3. HVAC (Heating, Ventilation, and Air Conditioning) Systems: Dynamic pressure sensors are used to monitor and control airflow in HVAC systems. They help ensure efficient ventilation, maintain proper air distribution, and optimize energy consumption in heating and cooling processes.

4. Fluid Dynamics Research: Dynamic pressure sensors are extensively used in fluid dynamics research and development, including studies of fluid flow behavior, turbulence, and fluid-structure interactions. They provide valuable data for validating computational fluid dynamics (CFD) models and improving understanding of fluid phenomena.

5. Gas and Liquid Flow Measurement: Dynamic pressure sensors are employed in various industries to measure and monitor gas and liquid flow rates. They are used in pipelines, industrial processes, and flow meters to ensure accurate and efficient flow measurements.

6. Aerospace and Defense: Dynamic pressure sensors find applications in aerospace and defense systems for measuring airspeed, altitude, and dynamic pressure in aircraft, rockets, missiles, and unmanned aerial vehicles (UAVs). They contribute to flight safety, navigation, and performance evaluation.

7. Environmental Monitoring: Dynamic pressure sensors are utilized in environmental monitoring systems to measure wind speed, air pressure, and atmospheric conditions. They assist in weather forecasting, climate studies, and environmental research.

8. Fluid Machinery and Turbines: Dynamic pressure sensors are employed in monitoring and controlling fluid machinery and turbines. They provide valuable data for optimizing efficiency, detecting abnormalities, and ensuring safe and reliable operation.

These are just a few examples of the wide-ranging applications of dynamic pressure sensors. Their versatility and accuracy make them essential tools in industries that rely on precise pressure measurements to optimize processes, enhance performance, and ensure safety.

What Is A Static Pressure Sensor?

A static pressure sensor is a device designed to measure the static pressure exerted by a fluid when it is at rest or moving at a constant velocity. It is used to accurately measure the pressure at a specific point in a fluid system where there is no fluid motion. Static pressure sensors employ various technologies to convert the static pressure into an electrical signal that can be measured and analyzed. Some common types of static pressure sensors include strain gauge sensors, capacitive sensors, and piezoresistive sensors.

Strain gauge sensors utilize strain-sensitive elements that change their electrical resistance when subjected to mechanical stress.
These sensors consist of a diaphragm or membrane that deforms under static pressure, causing strain on the strain gauges. The change in resistance is measured and correlated to the static pressure.

Capacitive sensors use changes in capacitance to measure static pressure. These sensors have a diaphragm or membrane that deforms under fluid pressure, resulting in a variation in the separation between capacitor plates. The change in capacitance is then detected and converted into an electrical signal representing the static pressure.

Piezoresistive sensors employ materials that change their electrical resistance in response to mechanical strain. These sensors contain piezoresistive elements that deform under fluid pressure, leading to a change in resistance. This change is measured and converted into an electrical signal proportional to the static pressure.

Static pressure sensors find applications in various industries and systems where precise measurement of static pressure is crucial. Some common applications include HVAC systems, cleanrooms, pneumatic systems, medical devices, industrial processes, and building automation. They are used to ensure proper fluid flow, optimize energy consumption, monitor system performance, and maintain safety and efficiency.

When selecting a static pressure sensor, factors such as pressure range, accuracy, temperature sensitivity, response time, and compatibility with data acquisition systems should be considered to ensure reliable and accurate measurements.

What Is The Difference Between Dynamic And Static Pressure Transducers?

The difference between a dynamic pressure transducer and a static pressure transducer lies in their respective capabilities to measure pressure in different fluid conditions.

1. Dynamic Pressure Transducer

A dynamic pressure transducer is specifically designed to measure the pressure exerted by a fluid due to its motion or velocity.
It is capable of accurately capturing and quantifying the dynamic pressure in a fluid flow. Dynamic pressure transducers are typically used in applications where the fluid is in motion, such as airflow in wind tunnels, aerodynamics testing, automotive testing, and fluid dynamics research. These transducers are designed to respond to rapid changes in pressure and provide real-time measurements of dynamic pressure variations. They are commonly used where the measurement of pressure fluctuations and rapid pressure changes is critical.

2. Static Pressure Transducer

A static pressure transducer, on the other hand, is designed to measure the pressure exerted by a fluid when it is at rest or moving at a constant velocity. It is specifically used to measure static pressure in a fluid system where there is no fluid motion. Static pressure transducers are commonly used in applications such as HVAC systems, industrial processes, cleanrooms, and pneumatic systems. They are designed to accurately measure and monitor the pressure at a specific point in the system, providing steady and stable readings. Static pressure transducers are typically used to measure and monitor average pressure values in applications where the fluid is not in motion or is moving at a constant velocity.

In summary, the main difference between a dynamic pressure transducer and a static pressure transducer lies in their ability to measure pressure in different fluid conditions. Dynamic pressure transducers are suitable for measuring pressure fluctuations and rapid pressure changes in fluid flow, while static pressure transducers are used to measure steady and stable pressure in static or constant-flow conditions.
{"url":"https://www.supmeaauto.com/training/static-pressure-vs-dynamic-pressure-vs-total-pressure","timestamp":"2024-11-14T10:43:09Z","content_type":"text/html","content_length":"95023","record_id":"<urn:uuid:614e920a-bfef-46cb-bd46-8987e3401a5a>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00429.warc.gz"}
Math page for Christopher Phan

Academic Background

Academic positions

• Fixed-term Assistant Professor, Department of Mathematics and Statistics, Winona State University, Winona, Minnesota, August 2012-May 2018.
• Visiting Assistant Professor, Department of Mathematics, Bucknell University, Lewisburg, Pennsylvania, August 2010-May 2012.
• Temporary Lecturer, Department of Mathematics, University of Glasgow, Glasgow, Scotland, September 2009-August 2010.
• Adjunct Instructor, General Education Department, Cooking & Hospitality Institute of Chicago, Chicago, Illinois, August-September 2009.
• Graduate Teaching Fellow, Department of Mathematics, University of Oregon, Eugene, Oregon, September 2003-June 2009.

Full CV

Teaching demos

These are some materials that I created for use in the classroom.

Interactive demos

Non-interactive materials

Ring Theory Research

I was a noncommutative ring theorist whose primary research interest was homological results involving graded algebras. For example, I studied the Koszul and \(\mathcal{K}_2\) properties.

Research articles

• Quotients of Koszul algebras and 2-\(d\)-determined algebras (with T. Cassidy), Communications in Algebra, 42 (2014), 3742–3752. Preprint available at arXiv:1210.3847 [math.RA]. MR3200055

Vatne and Green & Marcos have independently studied the Koszul-like homological properties of graded algebras that have defining relations in degree 2 and exactly one other degree. We contrast these two approaches, answer two questions posed by Green & Marcos, and find conditions that imply the corresponding Yoneda algebras are generated in the lowest possible degrees.

• The Yoneda algebra of a graded Ore extension, Communications in Algebra, 40 (2012) 834–844. Preprint available at arXiv:1002.2318 [math.RA]. MR2899911

Let \(A\) be a connected-graded algebra with trivial module \(k\), and let \(B\) be a graded Ore extension of \(A\).
We relate the structure of the Yoneda algebra \(\mathrm{E}(A):= \mathrm{Ext}_A (k,k)\) to \(\mathrm{E}(B)\). Cassidy and Shelton have shown that when \(A\) satisfies their \(\mathcal{K}_2\) property, \(B\) will also be \(\mathcal{K}_2\). We prove the converse of this result.

• Localization algebras and deformations of Koszul algebras (with T. Braden, A. Licata, N. Proudfoot, and B. Webster), Selecta Mathematica, 17 (2011) 533–572. Preprint available at arXiv:0905.1335 [math.RA]. MR2827176

We show that the center of a flat graded deformation of a standard Koszul algebra \(A\) behaves in many ways like the torus-equivariant cohomology ring of an algebraic variety with finite fixed point set. In particular, the center of \(A\) acts by characters on the deformed standard modules, providing a “localization map”. We construct a universal graded deformation of \(A\) and show that the spectrum of its center is supported on a certain arrangement of hyperplanes which is orthogonal to the arrangement coming from the algebra Koszul dual to \(A\). This is an algebraic version of a duality discovered by Goresky and MacPherson between the equivariant cohomology rings of partial flag varieties and Springer fibers; we recover and generalize their result by showing that the center of the universal deformation for the ring governing a block of parabolic category \(\mathcal{O}\) for is isomorphic to the equivariant cohomology of a Spaltenstein variety. We also identify the center of the deformed version of the “category \(\mathcal{O}\)” of a hyperplane arrangement (defined by the authors in a previous paper) with the equivariant cohomology of a hypertoric variety.

• Noncommutative Koszul algebras from combinatorial topology (with T. Cassidy and B. Shelton), Journal für die reine und angewandte Mathematik (Crelle’s Journal), 646 (2010) 45–63. Preprint available at arXiv:0811.3450 [math.RA].
MR2719555

Associated to any uniform finite layered graph \(\Gamma\) there is a noncommutative graded quadratic algebra \(A(\Gamma)\) given by a construction due to Gelfand, Retakh, Serconek and Wilson. It is natural to ask when these algebras are Koszul. Unfortunately, a mistake in the literature states that all such algebras are Koszul. That is not the case and the theorem was recently retracted. We analyze the Koszul property of these algebras for two large classes of graphs associated to finite regular CW-complexes, \(X\). Our methods are primarily topological. We solve the Koszul problem by introducing new cohomology groups \(H_X(n,k)\), generalizing the usual cohomology groups \(H_n(X)\). Along with several other results, our methods give a new and primarily topological proof of the main result of [Serconek and Wilson, J. Algebra 278: 473–493, 2004] and [Piontkovski, J. Alg. Comput. 15, 643–648, 2005].

• The Yoneda algebra of a \(\mathcal{K}_2\) algebra need not be another \(\mathcal{K}_2\) algebra (with T. Cassidy and B. Shelton), Communications in Algebra, 38 (2010) 46–48. Preprint available at arXiv:0810.4656 [math.RA]. MR2597480

The Yoneda algebra of a Koszul algebra or a \(D\)-Koszul algebra is Koszul. \(\mathcal{K}_2\) algebras are a natural generalization of Koszul algebras, and one would hope that the Yoneda algebra of a \(\mathcal{K}_2\) algebra would be another \(\mathcal{K}_2\) algebra. We show that this is not necessarily the case by constructing a monomial \(\mathcal{K}_2\) algebra for which the corresponding Yoneda algebra is not \(\mathcal{K}_2\).

• Generalized Koszul properties for augmented algebras, Journal of Algebra, 321 (2009) 1522–1537. Preprint available at arXiv:0711.3480 [math.RA]. MR2494406

Under certain conditions, a filtration on an augmented algebra \(A\) admits a related filtration on the Yoneda algebra \(\mathrm{E}(A) := \mathrm{Ext}_A(K,K)\).
We show that there exists a bigraded algebra monomorphism \[\mathrm{gr}\;\mathrm{E}(A) \hookrightarrow \mathrm{E}_{\mathrm{Gr}}(\mathrm{gr}\;A),\] where \(\mathrm{E}_{\mathrm{Gr}}(\mathrm{gr}\;A)\) is the graded Yoneda algebra of \(\mathrm{gr}\;A\). This monomorphism can be applied in the case where \(A\) is connected graded to determine that \(A\) has the \(\mathcal{K}_2\) property recently introduced by Cassidy and Shelton.

• Koszul and generalized Koszul properties for noncommutative graded algebras, University of Oregon, Department of Mathematics, 2009.

We investigate some homological properties of graded algebras. If \(A\) is an \(R\)-algebra, then \(\mathrm{E}(A):= \mathrm{Ext}_A(R,R)\) is an \(R\)-algebra under the cup product and is called the Yoneda algebra. (In most cases, we assume \(R\) is a field.) A well-known and widely-studied condition on \(\mathrm{E}(A)\) is the Koszul property. We study a class of deformations of Koszul algebras that arises from the study of equivariant cohomology and algebraic groups and show that under certain circumstances these deformations are Poincaré–Birkhoff–Witt deformations. Some of our results involve the \(\mathcal{K}_2\) property, recently introduced by Cassidy and Shelton, which is a generalization of the Koszul property. While a Koszul algebra must be quadratic, a \(\mathcal{K}_2\) algebra may have its ideal of relations generated in different degrees. We study the structure of the Yoneda algebra corresponding to a monomial \(\mathcal{K}_2\) algebra and provide an example of a monomial \(\mathcal{K}_2\) algebra whose Yoneda algebra is not also \(\mathcal{K}_2\). This example illustrates the difficulty of finding a \(\mathcal{K}_2\) analogue of the classical theory of Koszul duality. It is well-known that Poincaré–Birkhoff–Witt algebras are Koszul. We find a \(\mathcal{K}_2\) analogue of this theory.
If \(V\) is a finite-dimensional vector space with an ordered basis, and \(A:=\mathbb{T}(V)/I\) is a connected-graded algebra, we can place a filtration \(F\) on \(A\) as well as on \(\mathrm{E}(A)\). We show there is a bigraded algebra embedding \[\Lambda: \mathrm{gr}\;\mathrm{E}(A) \hookrightarrow \mathrm{E}_{\mathrm{Gr}}(\mathrm{gr}\;A).\] If \(I\) has a Gröbner basis meeting certain conditions and \(\mathrm{gr}_F\;A\) is \(\mathcal{K}_2\), then \(\Lambda\) can be used to show that \(A\) is also \(\mathcal{K}_2\). This dissertation contains both previously published and co-authored materials.
{"url":"https://clipdude.com/math/index.html","timestamp":"2024-11-03T00:37:44Z","content_type":"text/html","content_length":"16269","record_id":"<urn:uuid:73beec37-e923-4b49-8c35-25b12bee46e2>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00452.warc.gz"}
Use SUMIFS function with multiple criteria | WPS Office Academy

Uploaded time: August 30, 2021
Difficulty: Intermediate

In our work, we often need to filter by multiple criteria when compiling statistics. In such cases, we can use the SUMIFS function. The syntax of the SUMIFS function is:

SUMIFS(Sum range, Range1, Criteria1, Range2, Criteria2, ...)

Using the data in this table as an example, we want to calculate the total amount of fruit imported from South Korea in orders greater than $50,000. There are two conditions: one is South Korea, and the other is amount > $50,000.

First, click cell E5, then click Insert Function. Enter SUMIFS in the Search for Function box of the dialog and click OK. The Function Arguments dialog box will then pop up.

Sum range refers to the actual cells used for the sum calculation. In this table, we need to sum the amounts, so the corresponding sum range is the data in the Sales column, namely C2:C15.

Range 1 is the calculation area for Criteria 1. Since Criteria 1 is South Korea, the corresponding calculation area is the data in the Country column, namely B2:B15. Criteria 1 is South Korea, so we enter South Korea here. When entering text criteria, we need to wrap them in double quotation marks. We can add further criteria in sequence later; each additional condition requires a corresponding criteria range for the calculation.

Range 2 is the corresponding area for Criteria 2 (amount > $50,000), namely the cell area C2:C15.
Enter >50,000 for Criteria 2 to indicate that the amount is greater than $50,000. Then click the OK button: the total amount of imported fruit from South Korea in orders over $50,000 comes to $24,540,000. This is the basic use of the SUMIFS function. Did you get it? For further study, you're welcome to visit WPS Academy.
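The SUMIFS pattern above (one sum range plus repeated range/criteria pairs, where a row contributes only if it matches every criterion) can be sketched in plain Python. This is an illustration of the logic only, not WPS's implementation, and the sales table below is made-up stand-in data rather than the tutorial's actual worksheet:

```python
def sumifs(rows, sum_key, *criteria):
    """Sum rows[sum_key] over rows matching every (key, predicate) pair,
    mirroring SUMIFS(sum_range, range1, criteria1, range2, criteria2, ...)."""
    total = 0
    for row in rows:
        if all(pred(row[key]) for key, pred in criteria):
            total += row[sum_key]
    return total

# Hypothetical sales table standing in for B2:C15 in the tutorial.
sales = [
    {"country": "South Korea", "amount": 60_000},
    {"country": "South Korea", "amount": 40_000},
    {"country": "Japan",       "amount": 80_000},
    {"country": "South Korea", "amount": 75_000},
]

# Criteria 1: country is "South Korea"; Criteria 2: amount > 50,000.
result = sumifs(
    sales, "amount",
    ("country", lambda c: c == "South Korea"),
    ("amount",  lambda a: a > 50_000),
)
print(result)  # 135000 (60,000 + 75,000; the 40,000 row fails Criteria 2)
```

As in the spreadsheet function, adding another condition is just appending another range/criteria pair; a row that fails any one criterion is excluded from the sum.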
{"url":"https://www.wps.com/academy/use-sumifs-function-with-multiple-criteria/1861462/","timestamp":"2024-11-09T23:31:12Z","content_type":"text/html","content_length":"167730","record_id":"<urn:uuid:5d7cdcf2-6fb4-46b0-a40c-970d066e913d>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00126.warc.gz"}
Turret Rotation using HingeConstraint Servo

Hello, so I’m trying to make a turret rotate to face the player’s mouse. I am using a HingeConstraint in Servo mode. This is what it does right now:

local Plr = game.Players.LocalPlayer
local PlrMouse = Plr:GetMouse()
PlrMouse.TargetFilter = script.Value.Value.Parent.Parent
local UIS = game:GetService('UserInputService')

local function turretFunction()
    local RadAngle = math.acos(math.clamp(script.Value.Value.CFrame.LookVector.Unit:Dot(PlrMouse.UnitRay.Direction), -1, 1))
    local VectorDot = script.Value.Value.CFrame.LookVector.Unit:Dot(PlrMouse.UnitRay.Direction)
    local DegAngle = math.deg(RadAngle)
    if VectorDot < 0 then
        script.Value.Value.HingeConstraint.TargetAngle = script.Value.Value.HingeConstraint.CurrentAngle - DegAngle
    elseif VectorDot > 0 then
        script.Value.Value.HingeConstraint.TargetAngle = script.Value.Value.HingeConstraint.CurrentAngle + DegAngle
    end
end

Any idea how to fix it?

3 Likes

If I understand what you’re trying to do, the whole function can just be:

local delta = PlrMouse.Hit.Position - script.Value.Value.CFrame.Position
script.Value.Value.HingeConstraint.TargetAngle = math.deg(math.atan2(-delta.Z, delta.X))

Not sure what you were trying to do with all the dots and stuff, maybe I misunderstood.

4 Likes

You should assign DegAngle directly to TargetAngle, not sum this value with the CurrentAngle. That will fix the problem in the video, but you’ve also got a problem with the math resolving the angle. The two vectors used with the dot product to find the cosine of the angle need to have the same origin. script.Value.Value.CFrame.LookVector is in world space, so its origin is 0,0,0. According to the API, PlrMouse.UnitRay.Direction is not in world space; its origin is the CurrentCamera’s. You’ll want to use a vector from script.Value.Value.CFrame.Position to PlrMouse.Hit.Position instead of the UnitRay.
local vecA = script.Value.Value.CFrame.LookVector.Unit
local vecB = (PlrMouse.Hit.Position - script.Value.Value.CFrame.Position).Unit
local theta = math.acos(vecA:Dot(vecB))

This page has more information on using the dot product with the inverse cosine to resolve theta.

Trying to find the angle between where the turret is pointing and where the player’s mouse is pointing so that it can turn accordingly. Are you sure I don’t have to add/subtract with the CurrentAngle? Because the same problem is still happening with your code. My thought process is that since it is getting the angle between the two vectors, it is not taking into account how big the current angle is. Also just realised that maybe I have to reset the target angle to 0 after?

Oh, you’re right, the issue is that the vectors are backwards. My code should still fix the not-of-same-origin issue. I think you should swap the vectors around so the mouse vector makes vector A, and the barrel makes vector B. That way the resulting theta value is how far it needs to move to make vector B align with vector A. You would need to project vecB across the vecA space so it’s in 2D to get the exact angle of rotation around just the Y axis. Using inverse tangent, like @nicemike40’s suggestion, would remedy that issue. (Remember to find the mouse hit position vector that has the HingeConstraint as its origin.)

1 Like

Like you want it to point towards where the cursor is in 3D space? The code I posted should do that. Like @InfinityDesign said, just set TargetAngle directly instead of trying to add the delta and all that.

Yea, so I used your code, and it works, but only if I subtract 90 from the final angle; otherwise it always faces 90 degrees away from the mouse position. Subtracting 90 fixes it, but is there another way to fix it?

Sure, rotate one of the attachment points 90 degrees, or swap the x and z in the atan2 (and play around with negatives).
1 Like

I know that it is a bit old, but I’m really curious: may I know where this formula comes from?

Imagine a point (x, y) in 2D space, and a line from the origin (0, 0) to (x, y). math.atan2(y, x) gives you the angle in radians that line makes with the X axis. I use it here to get the rotation of the turret-to-mouse line in the XZ plane. I use math.deg to turn the radians into degrees.

How do you know when you should do math.atan2(y, x) or math.atan2(x, y) without testing?

Drawing out the trigonometry diagram helps.

@nicemike40’s solution works but I believe it will have an issue with a rotated base (tank turret). The method below is the same thing, just with ToObjectSpace added to counteract the issue. The axes are different, as you might want to measure the horizontal or vertical angle.
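The two approaches debated in the thread (acos of a dot product versus atan2) can be contrasted in a language-neutral sketch. This is plain Python rather than Luau, the vectors are illustrative values of my own, and the Y axis here plays the role of Roblox's Z:

```python
import math

def angle_acos_dot(a, b):
    """Unsigned angle between 2D vectors a and b via the dot product.
    Always in [0, 180] degrees, so it cannot tell left from right."""
    dot = a[0] * b[0] + a[1] * b[1]
    na = math.hypot(*a)
    nb = math.hypot(*b)
    # Clamp for float safety, like math.clamp(..., -1, 1) in the thread.
    cos_theta = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cos_theta))

def target_angle_atan2(turret_pos, mouse_hit):
    """Signed heading of the turret-to-mouse line in the horizontal plane,
    matching the TargetAngle assignment suggested in the thread."""
    dx = mouse_hit[0] - turret_pos[0]
    dz = mouse_hit[1] - turret_pos[1]
    return math.degrees(math.atan2(-dz, dx))

# acos(dot) reports 90 degrees for a target on either side of the barrel,
# so a servo driven by it cannot know which way to turn...
print(angle_acos_dot((1, 0), (0, 1)), angle_acos_dot((1, 0), (0, -1)))

# ...while atan2 keeps the sign, giving opposite headings for the two sides.
print(target_angle_atan2((0, 0), (0, -1)), target_angle_atan2((0, 0), (0, 1)))
```

This is exactly why the accepted fix sets TargetAngle from atan2 directly instead of adding an unsigned acos angle to CurrentAngle: atan2 returns an absolute, signed heading, so no bookkeeping against the current angle is needed.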
{"url":"https://devforum.roblox.com/t/turret-rotation-using-hingeconstraint-servo/1244664","timestamp":"2024-11-11T21:24:10Z","content_type":"text/html","content_length":"55474","record_id":"<urn:uuid:7f26c255-58e1-488b-a588-4da85da38d83>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00760.warc.gz"}
Digital lodging platform Airbnb Inc. (NASDAQ: ABNB) disappointed investors with its outlook for Q2 2023. The popular disruptor and innovator of BnB travel bookings has had gravity-defying growth and earnings throughout the weakening economy. It was a prime example of consumers shifting their discretionary spending towards services like Airbnb over goods like apparel. Airbnb is one of the major beneficiaries of the positive normalization of the travel and leisure industry in the post-pandemic era. Other travel and leisure booking platforms, including Expedia Group Inc. (NASDAQ: EXPE), TripAdvisor (NASDAQ: TRIP), and Booking Holdings Inc. (NASDAQ: BKNG), have also benefited from the travel recovery. The pent-up demand for leisure and vacation travel and lodging continues to drive spending even as other industries face negative normalization after their 2021 peaks.

Falling ADR Concerning

However, high inflation appears to be revealing some kinks in the armor for Airbnb, as it lowered its forecast for Q2 2023 revenues. The company insists that demand is still robust and that the lowered revenue range results from a broader mix and new Host pricing tools. However, the company expects average daily rates (ADR) to fall slightly. This means there's pressure on margins from the inventory glut, which is causing landlords to lower their prices in the face of falling demand. Almost 80% of Airbnb rooms are going for under $100 per night, and its Airbnb Rooms, which are private rooms, average $67 per night. This is good for customers but not for hosts or Airbnb's top and bottom lines.

Strong Q1 2023 Earnings But Cautious

On May 9, 2023, Airbnb released its fiscal first-quarter 2023 results for the quarter ended March 2023. The company reported a GAAP earnings-per-share (EPS) profit of $0.18 versus consensus analyst estimates for a profit of $0.10, beating estimates by $0.08. Net income was $319 million for the quarter and $1.9 billion.
Nights and Experiences Booked rose 19% YoY and 49% sequentially to 121 million. While the U.S. saw stable growth in North American Nights and Experiences, Asia Pacific saw a 48% YoY spike as the region recovered from the pandemic. Revenues rose 20.5% year-over-year (YoY) to $1.82 billion, beating analyst estimates of $1.79 billion. Gross Bookings Value increased 19% YoY to $20.4 billion. The Average Daily Rate (ADR) was flat YoY at $168. The company achieved GAAP profits again, which led it to raise its stock buyback program to up to $2.5 billion in stock, or 3% of its market cap.

In-Line Guidance Fails to Impress

Airbnb issued in-line guidance for Q2 2023, with revenues of $2.35 billion to $2.45 billion versus the $2.42 billion consensus analyst estimate. Nights and Experiences are expected to face unfavorable YoY comparisons, and growth is expected to be lower than revenue growth in the quarter. The company expects ADR to be slightly lower in Q2 2023 due to mix shifts and the launch of new Host pricing tools. Full-year 2023 adjusted EBITDA margin is expected to be in line with full-year 2022.

CEO Insights

On the conference call, ABNB CEO Brian Chesky commented, "We have some big ideas for where to take Airbnb next. We're building the foundation for new products and services that we plan to launch in 2024 and beyond. At the same time, Airbnb is still underpenetrated in many markets around the world. So we're increasing our focus on these less mature markets, and we are already seeing positive results." He noted that Germany and Brazil are two of its fastest-growing markets, crediting the rollout of its expansion playbook for the accelerated growth. The company introduced Airbnb Rooms along with 50 new features and upgrades. It launched new pricing tools that let Hosts see what other Airbnbs in the area are charging, along with listings in high demand that are getting booked, so that they can price more competitively.
Stays longer than three months will get larger discounts. Airbnb Rooms are the most affordable option, with average prices of only $67 a night. He noted that 80% of Airbnb rooms are priced under $100. The company also unveiled anti-party crackdowns to curb disruptive behavior.

Weekly Descending Triangle

ABNB's weekly candlestick chart illustrates a descending triangle that commenced after the stock peaked at $178.88 in April 2022. A descending triangle consists of a falling trendline, indicating lower highs on bounces, against a flat bottom trendline. ABNB formed a flat bottom around $86.81 in December 2022. It triggered a weekly market structure low (MSL) breakout through $88.84 to stage a rally up to $144.63 in February 2023 before peaking at the triangle's falling trendline on consecutive breakout attempts. The weekly stochastic peaked on its complete oscillation just under the 80-band before falling back towards the 20-band. The weekly 20-period exponential moving average (EMA) resistance is at $112.48, followed by the weekly 50-period MA at $108.67. Pullback support levels are at $99.71, $93.28, the $88.84 weekly MSL trigger, and the $82.58 swing low.
{"url":"http://business.newportvermontdailyexpress.com/newportvermontdailyexpress/article/marketbeat-2023-6-1-what-does-the-airbnb-guidance-drop-say-about-travel-demand","timestamp":"2024-11-07T20:05:18Z","content_type":"text/html","content_length":"104181","record_id":"<urn:uuid:934c215f-6255-44d1-8a04-94376e3bd1ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00872.warc.gz"}
Assigning Oxidation Numbers Practice Worksheet

Assigning Oxidation Numbers Practice Worksheets act as fundamental tools in the realm of maths, supplying a structured yet flexible platform for students to explore and grasp mathematical ideas. These worksheets offer a structured approach to comprehending numbers, supporting a solid foundation upon which mathematical proficiency thrives. From the simplest counting exercises to the intricacies of advanced computations, Assigning Oxidation Numbers Practice Worksheets accommodate learners of diverse ages and ability levels.

Introducing the Essence of Assigning Oxidation Numbers Practice Worksheet

Assigning Oxidation Numbers: The oxidation number is a positive or negative number that is assigned to an atom to indicate its degree of oxidation or reduction. In oxidation-reduction processes, the driving force for chemical change is the exchange of electrons between chemical species.

Assign oxidation numbers to the atoms in each substance: CH2O, NH3, Rb2SO4, Zn(C2H3O2)2. Assign oxidation numbers to the atoms in each substance: C6H6, B(OH)3, Li2S, Au. Identify what is being oxidized and reduced in this redox reaction by assigning oxidation numbers to the atoms: 2NO + Cl2 → 2NOCl.

At their core, Assigning Oxidation Numbers Practice Worksheets are vehicles for conceptual understanding. They encapsulate a myriad of mathematical principles, guiding learners through the maze of numbers with a collection of engaging and purposeful exercises. These worksheets go beyond the boundaries of typical rote learning, encouraging active involvement and cultivating an intuitive grasp of numerical relationships.
Nurturing Number Sense and Reasoning

ASSIGNING OXIDATION NUMBERS WORKSHEET: Using the rules above, the unknown oxidation state is the number that must be added to the total of the known oxidation states to make the total of the oxidation states of the compound zero. For example, to find the oxidation state of sulfur in H

Assign the oxidation numbers of each element in the following chemical species: HCl, H2O, NH3, NO3, K2Cr2O7, Hg2Cl2, HgCl2, Al(OH)3, Na3PO4. Q2: Which element is oxidized and which element is reduced in

The heart of Assigning Oxidation Numbers Practice Worksheets lies in cultivating number sense: a deep comprehension of what numbers mean and how they relate. They encourage exploration, inviting students to investigate arithmetic operations, decode patterns, and unlock the secrets of sequences. Through thought-provoking challenges and practical problems, these worksheets become gateways to refining reasoning skills, supporting the analytical minds of budding mathematicians.

From Theory to Real-World Application

The worksheet contains 5 questions that require assigning oxidation numbers, writing an oxidation and a reduction half-reaction, and identifying the oxidizing and reducing agents. This practice worksheet can be done in class or as homework.

Assigning Oxidation Numbers Practice Worksheets serve as conduits connecting academic abstractions with the tangible realities of day-to-day life. By infusing practical scenarios into mathematical exercises, students witness the relevance of numbers in their surroundings.
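The sum-to-zero rule stated above can be written as a tiny solver. Note that the worked example in the source is cut off after "H", so H2SO4 is my assumed example compound here, with H = +1 and O = -2 as the known oxidation states:

```python
def unknown_oxidation_state(known, overall_charge=0):
    """Solve for a single unknown oxidation state from the rule that the
    oxidation numbers, weighted by atom counts, sum to the overall charge.

    `known` maps element -> (atom_count, oxidation_number) for every atom
    except the one unknown atom (assumed to occur once in the formula).
    """
    known_total = sum(count * ox for count, ox in known.values())
    return overall_charge - known_total

# Assumed example, sulfur in H2SO4: 2*(+1) + x + 4*(-2) = 0, so x = +6.
s_state = unknown_oxidation_state({"H": (2, +1), "O": (4, -2)})
print(s_state)  # 6
```

The same helper handles ions by passing the ion's charge, e.g. manganese in MnO4^- follows from 4*(-2) plus the unknown summing to -1, giving +7.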
From budgeting and measurement conversions to interpreting statistical information, such worksheets equip pupils to apply their quantitative skills beyond the boundaries of the classroom.

Varied Tools and Techniques

Flexibility is inherent in these practice worksheets, which employ a range of instructional tools to address different learning styles. Visual aids such as number lines, manipulatives, and digital resources help students visualize abstract ideas. This varied approach promotes inclusivity, accommodating students with different preferences, strengths, and cognitive styles.

Inclusivity and Cultural Relevance

In an increasingly diverse world, these worksheets can embrace inclusivity by incorporating examples and problems that resonate with learners from diverse backgrounds. Culturally relevant contexts foster a setting where every learner feels represented and valued, strengthening their connection with the material.

Crafting a Path to Mastery

Practice worksheets chart a course toward fluency. They instill perseverance, critical thinking, and problem-solving abilities, traits essential not just in one subject but in many aspects of life, and they nurture an appreciation for the elegance and logic inherent in the discipline.

Embracing the Future of Education

In an era marked by technological advancement, practice worksheets adapt readily to digital platforms. Interactive interfaces and digital resources augment traditional learning, offering immersive experiences that transcend spatial and temporal boundaries.
This combination of conventional methods with technological innovation promotes a more dynamic and engaging learning environment.

Final Thought

Assigning oxidation numbers practice worksheets turn the topic into a journey of exploration, discovery, and mastery. They transcend standard pedagogy, acting as catalysts for curiosity and inquiry: students unlock the subject one problem, one solution, at a time.

Rules for Assigning Oxidation Numbers

Chemists have developed rules used to assign oxidation numbers. The following rules will help you determine the oxidation state of an atom or ion:

1. A free atom has an oxidation number of zero; it is not sharing, gaining, or losing electrons.
2. Polyatomic elements have an oxidation number of zero for each atom.

For the full rule set and worked exercises, see the oxidation-reduction reaction exercises on Chemistry LibreTexts.
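The "identify what is oxidized and reduced" exercises reduce to comparing each element's oxidation number before and after the reaction. A minimal Python sketch for 2NO + Cl2 -> 2NOCl, with oxidation numbers assigned by the rules above (N is +2 in NO and +3 in NOCl; Cl is 0 in the free element Cl2 and -1 in NOCl); the helper name is illustrative:

```python
# Oxidation numbers before and after the reaction 2NO + Cl2 -> 2NOCl,
# assigned with the standard rules (O = -2 throughout; a free element is 0).
before = {"N": +2, "Cl": 0}   # N in NO, Cl in Cl2
after  = {"N": +3, "Cl": -1}  # N and Cl in NOCl

def redox_roles(before, after):
    roles = {}
    for element, start in before.items():
        change = after[element] - start
        # A rise in oxidation number means electrons were lost: oxidized.
        roles[element] = ("oxidized" if change > 0
                          else "reduced" if change < 0 else "unchanged")
    return roles

print(redox_roles(before, after))  # {'N': 'oxidized', 'Cl': 'reduced'}
```

So nitrogen is oxidized and chlorine is reduced, which is the expected worksheet answer.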
How to Perform a Paired Samples t-Test in R

What is a Paired Samples t-test?

The t-test provides insight into whether the difference between the means of two groups is due to chance or is reliable (i.e., would be found again in another measurement from the same population). As opposed to a descriptive statistic, which describes only the sample being measured, the t-test is an inferential statistic: it describes the sample and supports a generalization to the entire population from which the sample was drawn. It is also called the dependent t-test or repeated-measures t-test.

The paired-samples t-test is a parametric technique used when we have one group of participants and collect data from them on two occasions or under two different conditions. For example, we might collect data about stress or anxiety levels before and after exams. We therefore have one continuous dependent variable and one categorical independent variable (time 1 and time 2, or before and after).

When Should a Paired Sample t-test be Used?

The paired-sample t-test is used to compare two population means when the two samples are correlated. In other words, you should use a paired t-test when you have the same subjects in both conditions being compared.

An Example of a Dependent t-test

Suppose a teacher wants to know whether a training program in statistics improves exam marks. The teacher gives students an exam before the training program and records marks on a scale of 1 to 5. The students take the statistics classes, and the teacher gives them the exam again afterwards. We thus have one dependent variable, the exam mark, and one independent variable, the training program (before and after), and we test the following hypotheses:

Null hypothesis: There is no significant difference in exam marks before and after the statistics training program.
Alternative hypothesis: There is a significant difference in exam marks before and after the statistics training program.

R Function to Compute the Dependent t-test

The code to run a dependent-samples t-test in R is as follows:

t.test(x, y, paired = TRUE)

x: numeric vector (pre-scores)
y: numeric vector (post or follow-up scores)
paired: a logical value specifying that we want a paired t-test

Note that when x and y are supplied as vectors, a data argument is not needed; that argument belongs to the formula interface of t.test.
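For readers outside R, the same statistic is easy to compute by hand. Here is a sketch in Python using only the standard library; it reproduces the t statistic and degrees of freedom of R's t.test(x, y, paired = TRUE) (the p-value additionally requires a t-distribution CDF, which is omitted). The function name and example marks are illustrative.

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t statistic and degrees of freedom.

    t = mean(d) / (sd(d) / sqrt(n)), where d are the pairwise differences
    and sd is the sample standard deviation.
    """
    if len(x) != len(y):
        raise ValueError("paired samples must have equal length")
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    t = mean(d) / (stdev(d) / math.sqrt(n))
    return t, n - 1  # degrees of freedom

# Exam marks (scale 1-5) for the same five students, after vs before training:
after_marks  = [4, 5, 3, 5, 4]
before_marks = [3, 4, 2, 5, 3]
t, df = paired_t(after_marks, before_marks)
print(round(t, 3), df)  # 4.0 4
```

A large positive t here reflects that four of the five students improved by one mark, consistent with rejecting the null hypothesis of no difference.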
Refined Estimation of Earthquake Source Parameters: Orefice, Antonella Refined Estimation of Earthquake Source Parameters: Methods, Applications and Scaling Relationships, [Dissertation thesis], Alma Mater Studiorum Università di Bologna, PhD programme, 24th cycle. DOI 10.6092/unibo/amsdottorato/4286.

The objective of this thesis is the refined estimation of source parameters. To this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate spectral parameters, that is, corner frequencies and low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes the correction for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when we are not able to model the propagation accurately. In this case the empirical Green function approach is a very useful tool to study seismic source properties. The Empirical Green Functions (EGFs) allow us to represent the contribution of propagation and site effects to the signal without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity.
This technique was applied to: 1) a large event, the Mw = 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes between 3 and 5.6; and 3) a small event, the Mw = 2.9 Laviano mainshock (Southern Italy).
I'm thinking about learning how to program. Well, I just finished booking air tickets to Vietnam, and thought I'd write a little program to illustrate some simple concepts and have some fun. Hopefully this will prove to be both fun and educational in starting to learn how to program in C#! It should also prove just how fast and productive C# is, because you can create an entire online air ticket booking system in only a few minutes and in less than 100 lines of code! See! C# ROCKS~! What I've come up with is a program that perfectly mimics what I get from the Quantas online ticket reservation system. Oh, did I mention that I'm flying with Singapore Air? The project is a very simple console application that demonstrates the use of:
• while
• Console.WriteLine
• Console.ReadLine
• if... else...
• string.ToLower()
• string.StartsWith(prefix)
• the "+=" addition operator

using System;

namespace ReserveQuantasTickets
{
    class Program
    {
        static void Main(string[] args)
        {
            int price = 450;
            while (true) // This perfectly mimics the functionality of the Quantas online ticket reservation system.
            {
                Console.WriteLine("********************************************************");
                Console.WriteLine("******** Welcome to Quantas flight booking *************");
                Console.WriteLine("********************************************************");
                Console.WriteLine("Please enter your name then press Enter."); // User-friendly communications
                if (UserIsFedUp()) break; // Notice that we don't actually store the name, as it is unimportant.
                Console.WriteLine("Please enter your point of departure.");
                if (UserIsFedUp()) break; // The departure point is also unimportant.
                Console.WriteLine("Please enter your destination.");
                if (UserIsFedUp()) break; // The destination is also unimportant.
                Console.WriteLine("Please enter your departure date.");
                if (UserIsFedUp()) break; // The departure date is also unimportant.
                Console.WriteLine("Please enter your return date.");
                if (UserIsFedUp()) break; // The return date is also unimportant.
                Console.WriteLine(string.Format("Your tickets will cost: {0}", price += 50)); // We care only about increasing the price.
                Console.WriteLine("Press Enter to proceed to payment...");
                if (UserIsFedUp()) break;
                // The following is the actual error from the Quantas web site.
                Console.WriteLine("Please review the following items");
                Console.WriteLine("* We are unable to confirm the fare for the flights you have selected. Please cancel these segments and choose a different fare or flights and resubmit your confirmation request. (15012)");
                Console.WriteLine("Press Enter to \"Start over\"");
                if (UserIsFedUp()) break;
            }
        }

        private static bool UserIsFedUp() // Notice that this is static.
        {
            // StartsWith avoids the crash Substring(0, 8) would cause on short input
            // (and its off-by-one against the nine-letter word), while still
            // allowing for trailing exclamation marks.
            string input = Console.ReadLine();
            return input != null && input.ToLower().StartsWith("expletive");
        }
    }
}
2004 Publications Resulting from the Use of NERSC Resources On their Allocation Year 2005 ERCAP Request Forms Principal Investigators reported 1,270 refereed publications (published or submitted) for the preceding 12 months, based on using, at least in part, NERSC resources. PI Musa Ahmed M. Hochlaf. "Theoretical spectroscopy of tetratomic molecules". Trends in Chemical Physics, (2004), in press (invited review). M. Hochlaf, J. Palaudoux and A. Ben Houria. "Theoretical spectroscopy of molecular dications". Transworld Research World. In press (invited review). M. Hochlaf. "Ab initio investigations of CS2+ for internal energies lower than 10 eV". J. Phys. B 37, 595 (2004). F. Richter, M. Hochlaf, P. Rosmus, F. Gatti and H.-D. Meyer. "A study of the mode-selective trans-cis isomerisation in HONO using ab initio methodology". J. Chem. Phys. 120, 1306 (2004). M. Hochlaf and J. H. D. Eland. "A theoretical and experimental study of the SO22+ dication". J. Chem. Phys. 120, 6449 (2004). J. H. D. Eland, M. Hochlaf, G. C. King, P. S. Kreynin, R. LeRoy, I. R. McNab and J. M. Robbe. "Photo double ionization of CO: comparison of theory with experiment". J. Phys. B 37, 3197 (2004). M. Hochlaf, K.-M. Weitzel and C. Y. Ng. "Vacuum ultraviolet pulsed field ionization-photoelectron study of H2S in the energy range of 10-17 eV". J. Chem. Phys.120, 6944 (2004). X.-M. Qian, K.-C. Lau, G. Z. He, C. Y. Ng and M. Hochlaf. "Vacuum Ultraviolet Pulsed field ionization study of ND3: accurate thermochemistry for the ND2/ND2+ and ND3/ND3+ system". J. Chem. Phys. 120, 8476 (2004). P. Palaudoux and M. Hochlaf. "Theoretical investigations of the N2H2+ cation and of its reactivity" J. Chem. Phys. 121, 1782 (2004). M. Hochlaf. "Ion-molecule reactions: Theoretical studies of the [N2 + CO]+ system". J. Phys. Chem. A, (2004) (in press) PI Mowfak Al-Jassim Y. Yan and M.M. Al-Jassim Adsorption of water molecules on the CdTe(001) surface Surface Science (submitted). Y. Yan, M.M. Al-Jassim, and K.M. 
Jones Atomic structure and effects of double-positioning twins in CdTe J. Appl. Phys. (accepted). Y. Yan, G.M. Dalpian, M.M. Al-Jassim, and Su-Huai Wei Energetics and electronic structure of stacking faults in ZnO Phys. Rev. B (submitted). Y. Yan and M.M. Al-Jassim Water adsorption on ZnO (11-20) surfaces. Surface Science (to be submitted). PI Robert Albers "Modeling Solid-Solid Phase Transformations: From Single Crystal to Polycrystal Behavior,'' R. C. Albers, R. Ahluwalia, T. Lookman, and A. Saxena, accepted for publication in Computational and Applied Mathematics. "Impurities Block the Alpha to Omega Martensitic Transformation in Titanium,'' R. G. Hennig, D. R. Trinkle, J. Bouchet, S. G. Srinivasan, R. C. Albers, and J. W. Wilkins, Submitted to Nature. "Extended X-Ray Absorption Fine Structure Measurements of Laser-Shocks in Ti and V and Phase Transformation in Ti,'' B. Yaakobi, D. D. Meyerhofer, T. R. Boehly, J. J. Rehr, B. A. Remington, P. G. Allen, S. M. Pollaine, and R. C. Albers, Phys. Rev. Lett. 92, 95504 (2004). "Charge and Dimensional Effects on the Properties of CaNiN,'' Michael Springborg and R. C. Albers, Phys. Rev. B. 69, 235115 (2004). "Thermal Stabilization of the HCP Phase in Titanium,'' Sven P. Rudin, M. D. Jones, and R. C. Albers, Phys. Rev. B 69, 94117 (2004). "New Pseudo-Phase Structure for Alpha-Pu,'' J. Bouchet, R. C. Albers, M. D. Jones, and G. Jomard, Phys. Rev. Lett. 92, 95503 (2004). "Landau Theory for Shape Memory Polycrystals,'' R. Ahluwalia, T. Lookman, A. Saxena, and R. C. Albers, Acta Materialia 52, 209 (2004). "Constitutive Response of Polycrystalline Shape Memory Alloys,'' A. Saxena, R. Ahluwalia, T. Lookman, and R. C. Albers, Materials Science Forum 426-432, 2255 (2003). "A New Mechanism for the Alpha to Omega Martensitic Transformation in Pure Titanium,'' D. R. Trinkle, R. G. Hennig, S. G. Srinivasan, D. M. Hatch, M. D. Jones, H. T. Stokes, R. C. Albers, and J. W. Wilkins, Phys. Rev. Lett. 91, 025701, 2003. 
PI Greg Aldering "Photometry of SN~2002ic and Implications for the Progenitor Mass-Loss History" Wood-Vasey, W. M, Aldering, G., & Wang,~L. (2004), Astrophys. J., accepted for publication "Type Ia supernova rate at a redshift of ~ 0.1,'' Blanc, G., Afonso, C., Alard, C., Albert, J.N., Aldering, G., Amadon, A., Andersen, J., Ansari, R., Aubourg, E., Balland, C., Bareyre, P., Beaulieu, J.P., Charlot, X., Conley, A., Coutures, C., Dahlen, T., Derue, F., Fan, X., Ferlet, R., Folatelli, G., Fouque, P., Garavini, G., Glicenstein, J.F., Goldman, B., Goobar, A., Gould, A., Graff, D., Gros, M., Haissinski, J., Hamadache, C., Hardin, D., Hook, I.M., deKat, J., Kent, S., Kim, A., Lasserre, T., LeGuillou, L., Lesquoy, E., Loup, C., Magneville, C., Marquette, J.B., Maurice, E., Maury, A., Milsztajn, A., Moniez, M., Mouchet, M., Newberg, H., Nobili, S., Palanque-Delabrouille, N., Perdereau, O., Prevot, L., Rahal, Y.R., Regnault, N., Rich, J., Ruiz-Lapuente, P., Spiro, M., Tisserand, P., Vidal-Madjar, A., Vigroux, L., Walton, N.A., Zylberajch, S. (2004), Astronomy & Astrophysics, 423, 881. "Spectroscopic Observations and Analysis of the Peculiar SN~1999aa,'' Garavini, G., Folatelli, G., Goobar, A., Nobili,~S., Aldering, G. , Amadon, A., Amanullah, R., Astier, P., Balland, C., Blanc, G., Burns, M.~S., Conley, A., Dahlen, T., Deustua, S. E., Ellis, R., Fabbro, S., Fan, X., Frye, B., Gates, E. L., Gibbons, R., Goldhaber, G., Goldman, B., Groom, D. E., Haissinski, J., Hardin, D., Hook, I. M., Howell, D. A., Kasen, D., Kent, S., Kim, A. G., Knop, R. A., Lee, B. C., Lidman, C., Mendez, J., Miller, G. J., Moniez, M., Mourao, A., Newberg, H., Nugent, P. E., Pain, R., Perdereau, O., Perlmutter, S., Prasad, V., Quimby, R., Raux, J., Regnault, N., Rich, J., Richards, G. T., Ruiz-Lapuente, P., Sainton, G., Schaefer, B. E., Schahmaneche, K., Smith, E., Spadafora, A. L., Stanishev, V., Walton, N. A., Wang, L., Wood-Vasey., W. M. (2004), Astronomical. Journal, 128, 387. 
"New Constraints on Omega_M, Omega_Lambda, and w from an Independent Set of 11 High-Redshift Supernovae Observed with the Hubble Space Telescope'', Knop, R. A., Aldering, G., Amanullah, R., Astier, P., Blanc, G., Burns, M. S., Conley, A., Deustua, S. E., Doi, M., Ellis, R., Fabbro, S., Folatelli, G., Fruchter, A. S., Garavini, G., Garmond, S., Garton, K., Gibbons, R., Goldhaber, G., Goobar, A., Groom, D. E., Hardin, D., Hook, I., Howell, D. A., Kim, A. G., Lee, B.~C., Lidman, C., Mendez, J., Nobili, S., Nugent, P. E., Pain, R., Panagia, N., Pennypacker, C. R., Perlmutter, S., Quimby, R., Raux, J., Regnault, N., Ruiz-Lapuente, P., Sainton, G., Schaefer, B., Schahmaneche, K., Smith, E., Spadafora, A. L., Stanishev, V., Sullivan, M., Walton, N. A., Wang, L., Wood-Vasey, W. M., & Yasuda, N., (2003), Astrophysical. Journal, 598, 102. PI Yoram Alhassid "Nuclear level statistics: extending the shell model theory to higher temperatures," Y. Alhassid, G.F. Bertsch, and L. Fang, Phys. Rev. C 68, 044322 (2003). "Effects of spin and exchange interaction on the Coulomb-blockade peak statistics in quantum dots," Y. Alhassid and T. Rupp, Phys. Rev. Lett. 91, 056801 (2003). "A universal Hamiltonian for a quantum dot in the presence of spin-orbit interaction," Y. Alhassid and T. Rupp, cond-mat/0312691 (2003). "Spin of a quantum dot and conductance peak motion in parallel magnetic field," D. Huertas-Hernando and Y. Alhassid, cond-mat/0404619 (2004). "Disordered systems with interactions: induced two--body ensembles and the Hartree--Fock approach," Y. Alhassid, H.A. Weidenmuller, and A. Wobst, cond-mat/0406495 (2004). PI Paul Alivisatos Manna, L., Wang, L. W., Cingolani, R. & Alivisatos, A. P. First-principles Calculations of Unpassivated and Surfactant-passivated Bulk Surfaces of Wurtzite CdSe: a Model System for Studying the Anisotropic Growth of CdSe Nanocrystals. Physical Review B submitted (2004). Puzder, A., Williamson, A. J., Zaitseva, N., Galli, G., Manna, L. & Alivisatos, A. 
P. First-Principles Simulations of the Interaction between CdSe Nanoparticles and Organic Molecules: Effects on Nanoparticle Growth. Nano Letters submitted (2004). Milliron, D. J., Hughes, S. M., Cui, Y., Manna, L., Li, J. B., Wang, L. W. & Alivisatos, A. P. Colloidal nanocrystal heterostructures with linear and branched topology. Nature 430, 190-195 (2004). J. B. Neaton, K. H. Khoo, C. D. Spataru, and S. G. Louie, "Electron transport and optical properties of nanostructures from first principles", submitted. PI David Alumbaugh Tompkins, M., Alumbaugh, D. L., Stanley, D., and Lu, X., 2004, Numerical analysis of near-borehole and anisotropic effects on the response of a new multi-component induction logging tool; Geophysics, 69, 140-151. PI Thomas Antonsen "Further developments for a particle in cell code for efficientlty modeling wakefield acceleration schemes" J. H. Cooley, T. M. Antonsen Jr., C. Huang, V. Decyk, S. Wang, E. S. Dodd, C. Ren, W. B. Mori, and T. Katsouleas, AIP Conference Proceedings 647, 232 (2002). "Resonant heating of a cluster plasma by intense laser light", T. Taguchi, T. M. Antonsen Jr. H. M. Milchberg , Physical Review Letters 92 (20): Art. No. 205003 May 21 2004 PI Jonathan Arons A. Spitkovsky and J. Arons, 2004, "Time-dependence in Relativistic Collisionless Shocks: Theory of the Variable "Wisps" in the Crab Nebula", ApJ, 603, 669 P. Demorest, R. Ramachandran, D.C. Backer, S. M. Ransom, V. Kaspi, J. Arons, A. Spitkovsky, 2004, "Orientations of Spin and Magnetic Dipole Axes of Pulsars in the J0737--3039 Binary Based on Polarimetry Observations at the Green Bank Telescope", submitted to ApJ Letters, astro-ph/0402025 V.M. Kaspi, S. M. Ransom, D.C. Backer, R. Ramachandran, P. Demorest, J. Arons, A. Spitkovsky, 2004, "Green Bank Telescope Observations of the Eclipse of Pulsar "A" in the Double Pulsar Binary PSR J0737-3039," accepter to ApJ Letters, astro-ph/0401614 PI Cynthia Atherton Rotman, D.A., C.S. Atherton, D.J. Bergmann, P.J. 
Cameron-Smith, C.C.Chuang, P.S. Connell, J.E. Dignon, A. Franz, K.E. Grant, D.E. Kinnison, C.R. Molenkamp, D.D. Proctor, J.R. Tannahill, 2004: IMPACT, the LLNL 3D global atmospheric chemical transport model for the combined troposphere and stratosphere: Model description and analysis of ozone and other trace gases, J. Geophys. Res., 109 doi:10.1029 Marcy, T.P., D.W. Fahey, R.S. Gao, P.J. Popp, E.C. Richard, T.L. Thompson, K.H. Rosenlof, E.A. Ray, R.J. Salawitch, C.S. Atherton, D.J. Bergmann, B.A. Ridley, A.J. Weinheimer, M. Loewenstein, E.M. Weinstock, and M.J. Mahoney, 2004: Quantifying stratospheric ozone in the upper troposphere using in situ measurements of HCl, Science, 304, 261-265. Penner, J.E., S.Y. Zhang, and C.C. Chuang, 2003: Soot and smoke aerosol may not warm climate, J. Geophys. Res., 108, 4657, doi:10.1029/2003JD003409. Douglass, A.R., P.S. Connell, R.S. Stolarski, and S.E. Strahan,, 2004: Radicals and Reservoirs in the GMI Chemistry and Transport Model: Comparison to Measurements, J. Geophys. Res., Vol. 109, No. D16, doi:10.1029/2004JD004632, 17 August 2004 Considine, D., P.S. Connell, D.J. Bergmann, D.A. Rotman, and S.E. Strahan,, 2004: Sensitivity of Global Modeling Initiative CTM predictions of Antarctic ozone recovery to GCM and DAS generated meteorological fields, J. Geophys. Res., Vol. 109, No. D15, D1530110.1029/2003JD004487 03 August 2004 PI Robert Averback Ion beam smoothening of metal surfaces Zhong Y, Ashkenazy Y, Albe K, et al. JOURNAL OF APPLIED PHYSICS 94 (7): 4432-4439 OCT 1 2003 Mayr SG, Samwer K, Averback RS Surface kinetics during growth and ion irradiation of glassy metal films SCRIPTA MATERIALIA 49 (10): 961-967 NOV 2003 Simulations of the inert gas condensation processes Krasnochtchekov P, Albe K, Averback RS ZEITSCHRIFT FUR METALLKUNDE 94 (10): 1098-1105 OCT 2003 Effect of ion bombardment on stress in thin metal films Mayr SG, Averback RS PHYSICAL REVIEW B 68 (21): Art. No. 
214105 DEC 2003 Evolution of thin-film morphologies in metals during ion beam bombardment Mayr SG, Ashkenazy Y, Averback RS NUCLEAR INSTRUMENTS & METHODS IN PHYSICS RESEARCH SECTION B-BEAM INTERACTIONS WITH MATERIALS AND ATOMS 212: 246-252 DEC 2003 Molecular dynamics simulations of void and helium bubble stability in amorphous Si during heavy ion bombardment M. Okuniewski, Y. Ashkenazy, B.J. Heuser, and R.S. Averback J. Appl. Phys. 96 (2004) Molecular dynamics simulations of cluster nucleation during inert gas condensation Pavel Krasnochtchekov and R.S. Averback J. Phys. Chem. (in press) Strings and interstitials in liquids, glasses and crystals K. Nordlund, Y. Ashkenazy, R.S. Averback and A.V. Granato submitted to Phys. Rev. Lett. Shock induced amorphization as the onset to spall Y. Ashkenazy and R.S. Averback submitted to Appl. Phys. Lett. PI Dmitri Babikov D. Babikov, B. Kendrick, P. Zhang, and K. Morokuma, Cyclic-N3: II. Large geometric phase effects in the vibrational spectra, J. Chem. Phys. (submitted). R. D. Averitt, D. E. Hooks, D. Babikov, J. Barber, A. J. Taylor, and D. J. Funk, Temperature-dependent far-infrared spectra of single crystals of high explosives using terahertz time-domain spectroscopy, J. Phys Chem. A. (submitted). D. Babikov, Accuracy of gates in a quantum computer based on vibrational eigenstates, J. Chem. Phys. (accepted, to appear in Oct. 2004) D. Babikov, P. Zhang, and K. Morokuma, Cyclic-N3: I. An accurate potential energy surface for the ground doublet electronic state up to the energy of the 2A2/2B1 conical intersection, J. Chem. Phys. (accepted, to appear in Sept. 2004). D. Babikov, Entrance channel localized states in ozone: possible application to helium nanodroplet isolation spectroscopy, J. Chem. Phys. 119, pp. 6554-6559, 2003. D. Babikov, B. Kendrick, R. B. Walker, and R. T Pack, Formation of ozone: scattering resonances and anomalous isotope effect, J. Chem. Phys.119, pp. 2577-2589, 2003. D. Babikov, B. Kendrick, R. B. 
Walker, R. Schinke, and R. T Pack, Quantum origin of an anomalous isotope effect in ozone formation, Chem. Phys. Let. 372, pp. 686-691, 2003. D. Babikov, B. Kendrick, R. B. Walker, R. T Pack, P. Fleurat-Lesard and R. Schinke, Metastable states of ozone calculated on an accurate potential energy surface, J. Chem. Phys. 118, pp. 6298-6308, PI Diana Bacon Bacon DH, and BP McGrail. 2003. Lessons Learned From Reactive Transport Modeling of a Low-Activity Waste Glass Disposal System. Computers and Geosciences, 29(3):361-370. Bacon DH, and BP McGrail. 2003. Waste Form Release Calculations for Performance Assessment of the Hanford Immobilized Low-Activity Waste Disposal Facility Using a Parallel, Coupled Unsaturated Flow and Reactive Transport Simulator. Scientific Basis for Nuclear Waste Management XXVI. Materials Research Society, Warrendale, PA. Freedman VL, P Saripalli, DH Bacon, and PD Meyer. 2003. "Modeling changes in the hydraulic properties of subsurface media using film depositional models. Part 1. Application to mineral precipitation and dissolution reactions in saturated porous media." Vadose Zone Journal. [In Press] McGrail BP, DH Bacon, PD Meyer, MI Ojovan, DM Strachan, and IV Startceva. 2003. New Developments in Field Studies of Low Activity Waste Glass Corrosion and Contaminant Transport. Scientific Basis for Nuclear Waste Management XXVI. Materials Research Society, Warrendale, PA. PI Scott Baden Gregory T. Balls, Scott B. Baden, and Phillip Colella, SCALLOP: A Highly Scalable Parallel Poisson Solver in Three Dimensions, Conf. Proc., SC '03, Phoenix AZ, November 2003. PI Ferdinand Baer Baer, F., and J. J. Tribbia, 2004: Sensitivity of atmospheric prediction models to amplitude and phase. Submitted to Tellus. Fournier, A., 2003: Atmospheric energetics in the wavelet domain. Part II: Time-averaged observed atmospheric blocking, J. Atmos. Sci., 60, 319-338. 
Fournier, A., 2004: Instantaneous wavelet energetic transfers between atmospheric blocking and local eddies, J. Climate, in press. Fournier, A., Mark A. Taylor and Joseph J. Tribbia, 2004: The Spectral Element Atmospheric Model: High-resolution parallel computation and response to regional forcing, Mon. Wea. Rev., 132, 726-748. Schneider, E., D. DeWitt, A. Rosati, B. Kirtman, M. Ji, and J. Tribbia 2004: Retrospective ENSO forecasts: sensitivity to atmospheric model and ocean resolution, to appear, Mon Wea. Rev. Wang, H., and G.-T. Yeh, 2004: A characteristic-based semi-Lagrangian method for hyperbolic systems of conservation laws. (Submitted to Adv. In Atmos. Sci.). Temam, R and J. Tribbia, 2004: Open Boundary Conditions for the Primitive and Boussinesq Equations, Accepted, J. Atmos. Sci. PI Mark Baertschy "Solving the three-body Coulomb breakup problem using exterior complex scaling"; C. W. McCurdy, M. Baertschy, and T. N. Rescigno; J. Phys. B: At. Mol. Opt. Phys. 37 (2004) R137-R187. "Toward Gaining Mechanistic Insight From Closed Loop Learning Control: the Importance of Basis in Searching the Phase Space"; Florian Langhojer, David Cardoza, Mark Baertschy, and Thomas Weinacht; submitted to J. Chem. Phys, July 2004. PI David Bailey David H. Bailey and Xiaoye S. Li, "A Comparison of Three High-Precision Quadrature Programs", Proceedings of the Real Numbers and Computing Conference, Lyon, France, Sept 2003. David H. Bailey, Karthik Jeyabalan and Xiaoye S. Li, "A Comparison of Three High-Precision Quadrature Programs", [signifiantly revised from #1], submitted to Experimental Mathematics. David H. Bailey, Jonathan M. Borwein, Vishall Kapoor and Eric Weisstein, "Ten Problems in Experimental Mathematics: A Challenge", Aug. 2004, to be submitted to American Mathematical Monthly. 
Laura Carrington, Nicole Wolter, Allan Snavely, and Cynthia Bailey Lee, "Applying an Automated Framework to Produce Accurate Blind Performance Predictions of Full-Scale HPC Applications", DOD HPCMOD Users Group Conference, Williamsburg, VA, June 2004. A. Snavely, X. Gao, C. Lee, L. Carrington, N. Wolter, J. Labarta, J. Gimenez, P. Jones, "Performance Modeling of HPC Applications", Parallel Computing Europe, Sept. 2003. P. K. Agarwal, R. A. Alexander, E. Apra, S. Balay, A. S. Bland, J. Colgan, E. F. D'Azevedo, J. J. Dongarra, T. H. Dunigan, Jr., M. R. Fahey, R. A. Fahey, A. Geist, M. Gordon, R. J. Harrison, D. Kaushik, M. Krishnakumar, P. Lusczek, A. Mezzacappa, J. A. Nichols, J. Nieplocha, L. Oliker, T. Packwood, M. S. Pindzola, T. C. Shulthess, J. S. Vetter, J. B. White III, T. L. Windus, P. H. Worley, T. Zacharia, "ORNL Cray X1 Evaluation Status Report", Proceedings of the 46th Cray User Group Conference, Knoxville, TN, May 17-21, 2004. G. Mahinthakumar, M. Sayeed, J. Blondin, P. Worley, W. Hix, A. Mezzacappa, "Performance Evaluation and Modeling of a Parallel Astrophysics Application", in Proceedings of the High Performance Computing Symposium 2004, p. 27-33, ed. Joerg Meyer. P. H. Worley, J. Levesque, "The Performance Evolution of the Parallel Ocean Program on the Cray X1", in Proceedings of the 46th Cray User Group Conference, Knoxville, TN, May 17-21, 2004. PI Perla Balbuena Y. Wang and P. B. Balbuena, "Combined ab initio quantum mechanics and classical molecular dynamics studies of polymer electrolytes: Competitive solvation of Li+ and LiCF3SO3", J. Phys. Chem. B, in press. P. B. Balbuena, D. Altomare, N. Vadlamani, S. Bingi, L. A. Agapito, and J. M. Seminario, "Adsorption of O, OH, and H2O on Pt-based bimetallic clusters alloyed with Co, Cr, and Ni", J. Phys. Chem. A, 108, 6378, (2004). Y. Wang and P. B. Balbuena, "Roles of proton and electric field in the electroreduction of O2 on Pt(111) surfaces: Results of an ab initio molecular dynamics study", J. Phys. Chem.
B, 108, 4376-4384 (2004). D. S. Mainardi and P. B. Balbuena, "Hydrogen and Oxygen Adsorption on Rhn (n = 1-6) Clusters", J. Phys. Chem. A, 107, 10370-10380 (2003). D. S. Mainardi, S. Calvo, A. P. J. Jansen, J. J. Lukkien, and P. B. Balbuena, "Dynamic Monte Carlo Simulations of O2 Adsorption and Reaction on Pt(111)", Chem. Phys. Lett., 382, 553-560 (2003). P. B. Balbuena, D. Altomare, L. A. Agapito, and J. M. Seminario, "Theoretical analysis of oxygen adsorption on Pt-based clusters alloyed with Co, Ni, or Cr embedded in a Pt matrix", J. Phys. Chem. B, 107, 13671-13680 (2003). E. J. Lamas and P. B. Balbuena, "Adsorbate effects on structure and shape of supported nanoclusters: A molecular dynamics study", J. Phys. Chem. B, 107, 11682-11689 (2003). S-P. Huang, D. S. Mainardi, and P. B. Balbuena, "Structure and dynamics of graphite-supported nanoclusters", Surf. Sci., 545, 163-179 (2003). PI Arun Bansil M. Samsel-Czekala, G. Kontrym-Sznajd, G. Doring, W. Schulke, J. Kwiatkowska, F. Maniawski, S. Kaprzyk, and A. Bansil: "Electron Momentum Density in Cu0.9Al0.1", Applied Physics A 76, 87 (2003). R. Saniz, B. Barbiellini, A. B. Denison, and A. Bansil: "Spontaneous Magnetization and Electron Momentum Density in 3D Quantum Dots", Phys. Rev. B 68, 165326 (2003). I. G. Kaplan, B. Barbiellini, and A. Bansil: "Compton scattering beyond the impulse approximation", Phys. Rev. B 68, 235104 (2003). M. C. Asensio, J. Avila, L. Roca, A. Tejeda, G. D. Gu, M. Lindroos, R. S. Markiewicz, and A. Bansil: "Emergence of multiple Fermi surface maps in angle-resolved photoemission from Bi2212", Phys. Rev. B 67, 014519 (2003). S. Sahrakorpi, M. Lindroos and A. Bansil: "Site selectivity properties of the ARPES matrix element in Bi_2Sr_2CaCu_2O_8", Phys. Rev. B 68, 054522 (2003). Y.-D. Chuang, A. D. Gromko, A.V. Fedorov, Y. Aiura, K. Oka, Yoichi Ando, M. Lindroos, R. S. Markiewicz, A. Bansil and D.S.
Dessau: "Bilayer Splitting and Coherence Effects in Optimal and Underdoped Bi_2Sr_2CaCu_2O_(8+\delta)", Phys. Rev. B 69, 094515 (2004). M. Lindroos, R.S. Markiewicz, and A. Bansil: "Special Photon Energies for Extracting the Bosonic Spectral Function Mediating Superconductivity in Bi2212 via ARPES", Phys. Rev. B 69, 140505 (2004). A. Bansil, R.S. Markiewicz, C. Kusko, M. Lindroos, and S. Sahrakorpi: "Matrix Element and Strong Electron Correlation Effects in ARPES from Cuprates", J. Phys. Chem. Solids 65, 1417 (2004). A. Bansil, D. Nissenbaum, and B. Barbiellini: "Spontaneous Spin Magnetization in Quantum Dots", J. Phys. Chem. Solids (2004, in press). A. Bansil, M. Lindroos, S. Sahrakorpi and R. S. Markiewicz: "Influence of the Third Dimension of Quasi-Two-Dimensional Cuprate Superconductors on Angle-Resolved Photoemission Spectra", submitted to Phys. Rev. Lett. (2004). S. Sahrakorpi, M. Lindroos, R.S. Markiewicz, and A. Bansil: "Evolution of Mid-gap States and Residual 3-Dimensionality in La_2-xSr_xCuO_4", submitted to Nature (2004). PI Don Batchelor "Nonlinear Fluxes and Forces from Radio-frequency Waves with Application to Driven Flows in Tokamaks", J. R. Myra, L. A. Berry, D. A. D'Ippolito, and E. F. Jaeger, Phys. Plasmas 11, 1786 (2004). "Full Wave Simulations of Fast Wave Mode Conversion and Lower Hybrid Wave Propagation in Tokamaks", J. C. Wright, P. T. Bonoli, M. Brambilla, F. Meo, E. D'Azevedo, D. B. Batchelor, E. F. Jaeger, L. A. Berry, C. K. Phillips and A. Pletzer, Phys. Plasmas 11, 2473 (2004). "Fast Ion Absorption of the High Harmonic Fast Wave in the National Spherical Torus Experiment," A. L. Rosenberg, J. E. Menard, J. R. Wilson, S. Medley, R. Andre, D. Darrow, R. Dumont, B. P. LeBlanc, C. K. Phillips, M. Redi, T. K. Mau, E. F. Jaeger, P. M. Ryan, D. W. Swain, R. W. Harvey, J. Egedal, and the NSTX Team, Phys. Plasmas 11, 2441 (2004).
"Effects of Non-Maxwellian Species on Electromagnetic Wave Propagation and Absorption in Magnetically Confined Plasmas", R. J. Dumont, C. K. Phillips, and D. N. Smithe, Submitted to Phys. Rev. Lett. "Sheared Poloidal Flow Driven by Mode Conversion in Tokamak Plasmas", E. F. Jaeger, L. A. Berry, J. R. Myra, D. B. Batchelor, E. D'Azevedo, P. T. Bonoli, C.K. Phillips, D. N. Smithe, D. A. D'Ippolito, M. D. Carter, R. J. Dumont, J. C. Wright, R. W. Harvey, Phys. Rev. Lett. 90, 195001-1 (2003). PI Victor Batista "Model Study of Coherent-Control of the Femtosecond Primary Event of Vision", by Samuel C. Flores and Victor S. Batista, J. Phys. Chem. B 108, 6745, 2004. "QM/MM Study of Energy Storage and Molecular Rearrangements due to the Primary Event in Vision", by Jose A. Gascon and Victor S. Batista, Biophys. J., in press, 2004. "Molecular Quantum Entanglement in Functionalized Semiconductor Nanostructures", by Luis G.C. Rego, Sabas Abuabara and Victor S. Batista, PNAS, submitted, 2004. "Matching Pursuit Split Operator Fourier Transform Computations of Thermal Correlation Functions", by Xin Chen, Yinghua Wu and Victor S. Batista, J. Chem. Phys., submitted. "Quantum Tunneling in Multidimensional Systems: A Matching-Pursuit Description", by Yinghua Wu and Victor S. Batista, J. Chem. Phys. 121, 1676, 2004. "Quantum Dynamics Simulations of the Interfacial Electron Transfer in Sensitized TiO2 Semiconductors", by Luis G.C. Rego and Victor S. Batista, J. Am. Chem. Soc. 125, 7989, 2003. "Matching Pursuit for Simulations of Quantum Processes", by Yinghua Wu and Victor S. Batista, J. Chem. Phys. 118, 6720 (2003). Erratum: "Matching Pursuit for Simulations of Quantum Processes" [J. Chem. Phys. 118, 6720, 2003], by Yinghua Wu and Victor S. Batista, J. Chem. Phys. 119, 7606 (2003). "Coherent Control: Principles and Semiclassical Implementations", by Victor S. Batista and Paul Brumer, Centre de Recherches Mathématiques, CRM Proceedings and Lecture Notes, Volume 33 (2003). PI John Bell J.B. Bell, M.S.
Day, J.F. Grcar, and M.J. Lijewski, "Stochastic Algorithms for the Analysis of Numerical Flame Simulations", Report LBNL-49326-Journal, J. Comp. Phys, in press, 2004. J.F. Grcar, P. Glarborg, J.B. Bell, M.S. Day, A. Loren, and A.D. Jenson, "Effects of Mixing on Ammonia Oxidation in Combustion Environments at Intermediate Temperatures", Report LBNL-54187, Proceedings of the Combustion Institute, v. 30, in press, 2004. J.B. Bell, M.S. Day, C.A. Rendleman, S.E. Woosley, and M.A. Zingale, "Direct Numerical Simulations of Type Ia Supernovae Flames II: The Rayleigh-Taylor Instability", LBNL Report LBNL-54300, Astrophysical Journal, v. 608, pp. 883-906, 2004. J.B. Bell, M.S. Day, C.A. Rendleman, S.E. Woosley, and M.A. Zingale, "Direct Numerical Simulations of Type Ia Supernovae Flames I: The Landau-Darrieus Instability", Report LBNL-54088, Astrophysical Journal, v. 606, pp. 1029-1038, 2004. J.B. Bell, M.S. Day, C.A. Rendleman, S.E. Woosley, and M.A. Zingale, "Adaptive low Mach number simulations of nuclear flame microphysics", Report LBNL-52395, J. Comp. Phys, v. 195, pp. 677-694, 2004. P. McCorquodale, P. Colella, D.P. Grote and J.-L. Vay, "A node-centered local refinement algorithm for Poisson's equation in complex geometries," Report LBNL-54138, J. Comp. Phys, in press, 2004. D. Trebotich, P. Colella, and G. Miller, "A stable and convergent scheme for viscoelastic flow in contraction channels," Report LBNL-55880, J. Comp. Phys, in review, 2004. R.K. Crockett, P. Colella, R.T. Fisher, R.I. Klein, and C.F. McKee, "An unsplit, cell-centered Godunov method for ideal MHD," Report LBNL-55881, J. Comp. Phys, in review, 2003. PI Roy Benedek Dopant-induced Stabilization of Rhombohedral LiMnO2 against Jahn-Teller Distortion, R. Prasad, R. Benedek, and M. M. Thackeray, Physical Review B, submitted (2004). Effect of Co on the magnetism and phase stability of lithiated manganese oxides, R. Prasad, R. Benedek, and M. M. Thackeray, vol. 26, 147 (2003). 
Structural considerations of intermetallic electrodes for lithium batteries, M. M. Thackeray, J. T. Vaughey, C. S. Johnson, A. J. Kropf, R. Benedek, L. M. L. Fransson, K. Edstrom, Journal of Power Sources vol 113, 124 (2003). Divalent dopant criterion for the suppression of Jahn-Teller distortion in Mn oxides: First-principles calculations and x-ray spectroscopy measurements for Co in LiMnO2, vol 68, article 012101. PI Amitava Bhattacharjee Anisotropic weak whistler wave turbulence in electron magnetohydrodynamics, S. Galtier and A. Bhattacharjee, Physics of Plasmas 10, 3065 (2003). Wave driven magnetic reconnection in the Taylor problem, R. Fitzpatrick, A. Bhattacharjee, Z.-W. Ma, and T. Linde, Physics of Plasmas 10, 4284 (2003). WIND observations pertaining to current disruptions and ballooning instabilities during substorms, L.-J. Chen, A. Bhattacharjee, K. Sigsbee, M. Fillingim, G. Parks and R. Lin, Geophysical Research Letters 30, 1335, doi:10.1029/2002GL016317 (2003). Sufficient condition for finite-time singularity and tendency towards self-similarity in a high- symmetry Euler flow, C. S. Ng and A. Bhattacharjee, in Tubes, Sheets and Singularities in Fluid Dynamics, edited by K. Bajer and H. K. Moffatt, IUTAM Symposium Series (Kluwer Academic Publishers 2003), Vol. 71, pp. 317-328. P. Zhu, A. Bhattacharjee, and Z. W. Ma, Finite-ky Ballooning Instability in the Near-Earth Magnetotail, to appear in Journal of Geophysical Research. K. Germaschewski, A. Bhattacharjee, Rainer Grauer, David Keyes, and Barry Smith, Using Krylov-Schwarz methods in an adaptive mesh refinement environment, in Adaptive Mesh Refinement - Theory and Applications, Lecture Notes in Computational Sciences and Engineering (LNCSE) series, editors Tomasz Plewa, Timur Linde and V. 
Gregory Weirs, Springer 2004 PI Julian Borrill "High-Resolution Observations of the Cosmic Microwave Background Power Spectrum with ACBAR", C-L Kuo et al, 2004, ApJ, 600, 32 "Asymmetries in the CMB anisotropy field", Eriksen, H. K., Hansen, F. K., Banday, A. J., Górski, K. M. and Lilje, P. B., 2004, ApJ, 605, 14, [astro-ph/0307507] "Testing for non-Gaussianity in the WMAP data: Minkowski functionals and the length of the skeleton", Eriksen, H. K., Novikov, D. I., Lilje, P. B., Banday, A. J. and Górski, K. M., 2004, ApJ, in press, [astro-ph/0401276]. "Foreground removal by an Internal Linear Combination method: limitations and implications", Eriksen, H. K., Banday, A. J., Górski, K. M. and Lilje, P. B., 2004, ApJ, in press, [astro-ph/0403098]. "Bayesian power spectrum analysis of the first-year WMAP data", O'Dwyer, I. J., Eriksen, H. K., Wandelt, B. D., Jewell, J. B., Larson, D. L., Górski, K. M., Banday, A. J., Levin, S. and Lilje, P. B., 2004, ApJL, submitted, [astro-ph/0407027]. "Power spectrum estimation from high-resolution maps by Gibbs sampling", Eriksen, H. K., O'Dwyer, I. J., Jewell, J. B., Wandelt, B. D., Larson, D. L., Górski, K. M., Banday, A. J., Levin, S. and Lilje, P. B., 2004, ApJS, in press, [astro-ph/0407028]. "The N-point correlation functions of the first-year Wilkinson Microwave Anisotropy Probe sky maps", Eriksen, H. K., Banday, A. J., Górski, K. M. and Lilje, P. B., 2004, ApJ, submitted, [astro-ph/ E. Keihänen, H. Kurki-Suonio, T. Poutanen, D. Maino, and C. Burigana: "A maximum likelihood approach to the destriping technique", accepted for publication in A&A. T. Poutanen, D. Maino, H. Kurki-Suonio, E. Keihänen, E. Hivon: "Cosmic microwave background power spectrum estimation with the destriping technique", MNRAS 353, 43 (2004) O'Dwyer et al. "The CMB Power Spectrum from the Background Emission Anisotropy Scanning Telescope (BEAST)". astro-ph/0312610, accepted for publication in ApJ Supplement Series.
PI Virginia Brown V.R. Brown, B.F. Gibson, J.A. Carlson, and R. Schiavilla, Eur. Phys. J. A 18, 289-29 (2003) PI Suse Broyde "Simulating structural and thermodynamic properties of carcinogen-damaged DNA", S. Yan, M. Wu, D. J. Patel, N. E. Geacintov and S. Broyde, Biophys. J. 84, 1-12 (2003). "Role of Base Sequence Context in Conformational Equilibria and Nucleotide Excision Repair of Benzo[a]pyrene Diol Epoxide Adenine Adducts", S. Yan, M. Wu, T. Buterin, H. Naegeli, N. E. Geacintov, and S. Broyde, Biochemistry 42, 2339-2354 (2003). "Extending the Understanding of Mutagenicity: Structural Insights into Primer Extension Past a Benzo[a]pyrene Diol Epoxide-DNA Adduct", R. Perlow and S. Broyde, J. Mol. Biol. 327, 797-818 (2003). "Conformations of Stereoisomeric Base Adducts to 4-Hydroxyequilenin", S. Ding, R. Shapiro, N. E. Geacintov and S. Broyde, Chem. Res. Tox. 16, 695-707 (2003). "Human RNA Polymerase II is partially blocked by DNA Adducts Derived from Tumorigenic Benzo[c]phenanthrene Diol Epoxides: relating biological consequences to conformational preferences", T. M. Schinecker, R. A. Perlow, S. Broyde, N. E. Geacintov and D. A. Scicchitano, Nucleic Acids Res. 31, 6004-6015 (2003). "Solution Structure of an O6-[4-oxo-4(3-pyridyl)butyl]guanine adduct in an 11mer DNA duplex: evidence for formation of a base 'triplex'", L. A. Peterson, C. Vu, B. E. Hingerty, S. Broyde and M. Cosman, Biochemistry 42, 13134-13144 (2003). "Substrate discrimination by formamidopyrimidine-DNA glycosylase - A Mutational analysis", E. I. Zaika, R. A. Perlow, E. Matz, S. Broyde, R. Gilboa, A. P. Grollman and D. O. Zharkov, J. Biol. Chem. 279, 4849-4861 (2004). "Conformational searches elucidate effects of stereochemistry on structures of deoxyadenosine covalently bound to tumorigenic metabolites of benzo[c]phenanthrene", M. Wu, S. Yan, J. Tan, D. J. Patel, N. E. Geacintov and S. Broyde, Frontiers in Bioscience 9, 2807-2818 (2004).
"Structural and stereoisomer effects of model estrogen quinone-derived DNA adducts: N6-(2-hydroxyestron-6(alpha,beta)-yl)-2'-deoxyadenosine and N2-(2-hydroxyestron-6(alpha,beta)-yl)-2'-deoxyguanosine", L. Wang, B. E. Hingerty, R. Shapiro and S. Broyde, Chem. Res. Tox. 3, 311-324 (2004). PI Vasily Bulatov V. Bulatov, W. Cai, J. Fier, M. Hiratani, T. Pierce, M. Tang, M. Rhee, K. Yates, T. Arsenlis, ParaDiS on BlueGene/L: scalable line dynamics, SuperComputing 2004 (2004 in press). M. Hiratani and V. V. Bulatov, Solid-Solution Hardening by Point-Like Obstacles of Different Kinds: Discrete Dislocation Dynamics, Philosophical Magazine Letters (2004 in press) J. Marian, W. Cai and Vasily V. Bulatov, Dynamic Transitions in Dislocation Motion: from smooth to rough to twinning, Nature Materials, 3, 158 (2004). A. Athanasios, B. Wirth, and M. Rhee, Polycrystal-Plasticity-Based Plasticity Model for Irradiated Copper, Philosophical Magazine A (2004 in press) W. Cai, V. V. Bulatov, J. Chang, J. Li, and S. Yip, Periodic Image Effects in Dislocation Modeling, Philosophical Magazine A, 83, 539 (2003). M. de Koning, W. Cai and V. V. Bulatov, A Mechanism for Anomalous Dislocation Multiplication in FCC Metals, Physical Review Letters, 91, 022503 (2003). M. Hiratani and H.M. Zbib, On Dislocation Interactions and Patterning: Stochastic Discrete Dislocation Dynamics, Journal of Nuclear Materials, 323, 290 (2003) PI Philip Cameron-Smith "IMPACT, the LLNL 3-D global atmospheric chemical transport model for the combined troposphere and stratosphere: Model description and analysis of ozone and other trace gases", D. A. Rotman, C. S. Atherton, D. J. Bergmann, P. J. Cameron-Smith, C. C. Chuang, P. S. Connell, J. E. Dignon, A. Franz, K. E. Grant, D. E. Kinnison, C. R. Molenkamp, D. D. Proctor, and J. R. Tannahill, J. Geophys. Res., 109, D04303, doi:10.1029/2002JD003155, 2004. PI Andrew Canning L.W. Wang, Calculating quantum transports using periodic boundary conditions, Phys. Rev.
B (submitted). NMR chemical shifts in amino acids: effects of environments in the condensed phase, Y. Yoon, B.G. Pfrommer, S.G. Louie and A. Canning, Solid State Communications, Vol. 131, pp. 15-19 (2004) Atomistic simulation of fcc Pt75Ni25 and Pt75Re25 Cubo-octahedral nanoparticles, G. Wang, M.A. Van Hove, P.N. Ross and M.I. Baskes, to appear in Bull. Mat. Res. Soc. The Grassman-metal all band iterative diagonalization method for total energy calculations of metallic systems, D. Raczkowski, L.W. Wang and A. Canning, submitted to Phys. Rev. B. PI Joseph Carlson "Quantum Monte Carlo Studies of Superfluid Fermi Gases", S. Y. Chang, J. Carlson, V. R. Pandharipande, and K. E. Schmidt, accepted by Phys. Rev. A. "Superfluid Fermi Gases with Large Scattering Length", J. Carlson, S-Y. Chang, V. R. Pandharipande, and K. E. Schmidt, Phys. Rev. Lett. 91, 050401 (2003). "Parity-Violating Interaction Effects in the np System", R. Schiavilla, J. Carlson, and M. Paris, submitted to Phys. Rev. C. "Quantum Monte Carlo Calculations of Excited States in A=6-8", S. C. Pieper, R. B. Wiringa, and J. Carlson, submitted to Phys. Rev. C, LA-UR-04-5712. PI Yuen-Dat Chen Measurement of the Total Active 8B Solar Neutrino Flux at the Sudbury Neutrino Observatory with Enhanced Neutral Current Sensitivity, S.N. Ahmed et al. (The SNO Collaboration), Phys. Rev. Lett. 92, 181301 (2004), LBNL 55219 Constraints on Nucleon Decay via Invisible Modes from the Sudbury Neutrino Observatory, S.N. Ahmed et al. (the SNO Collaboration), Phys. Rev. Lett. 92, 102004 (2004), LBNL 53918 PI James Chelikowsky D.V. Melnikov and J.R. Chelikowsky: "Ab Initio Absorption Spectra of Germanium Nanocrystals," Solid State Comm. 127, 361 (2003). C. Troparevsky, L. Kronik, and J.R. Chelikowsky: "Optical Excitations in CdSe Quantum Dots," J. Chem. Phys. 119, 2284 (2003). S. Ogut, R. Burdick, Y. Saad, and J.R. Chelikowsky: "Ab Initio Calculations for the Large Dielectric Matrices of Confined Systems," Phys. Rev. Lett.
90, 127401 (2003). L. Kronik, R. Fromherz, E. Ko, G. Gantefor, and J.R. Chelikowsky: "Photoemission spectra of deuterated silicon clusters: experiment and theory," Euro. Phys. J. D 24, 33 (2003). M.M.G. Alemany and James R. Chelikowsky: "Edge-sharing tetrahedra: Precursors of the E$^\prime_\gamma$ defects in amorphous silica," Phys. Rev. B 68, 054206 (2003). D.V. Melnikov and J.R. Chelikowsky: "Quantum confinement in phosphorus-doped silicon nanocrystals," Phys. Rev. Lett. 92, 046802 (2004). S. Ogut and J.R. Chelikowsky: "Charge State Dependence Jahn-Teller Distortions of the E-center Defect in Crystalline Silicon," Phys. Rev. Lett. 91, 235503 (2003). F.-C. Chuang, C.Z. Wang, S. Ogut, J.R. Chelikowsky and K.M. Ho: "Melting of small Sn clusters by ab initio molecular dynamics simulations," Phys. Rev. B 69, 165408 (2004). G. Nesher, L. Kronik and J.R. Chelikowsky: "Ab initio absorption spectra of Ge nanocrystals," Phys. Rev. B, (submitted). S. Li, M.M.G. Alemany and J.R. Chelikowsky: "Ab initio calculations for the photoelectron spectra of vanadium clusters," J. Chem. Phys. (in press). PI Liu Chen "Parallel Data Streaming Implemented for Gyrokinetic Toroidal Code", S. Klasky, S. Ethier, Z. Lin, K. Martins, D. McCune, and R. Samtaney, Proceedings of the ACM/IEEE Conference on Supercomputing, (November 2003). "Turbulence Spreading and Transport Scaling in Global Gyrokinetic Particle Simulation", Z. Lin and T. S. Hahm, Phys. Plasmas 11, 1099-1108 (March 2004). "Turbulence Spreading into Linearly Stable Zone and Transport Scaling", T. S. Hahm, P. H. Diamond, Z. Lin, K. Itoh, and S.-I. Itoh, Plasma Phys. Contr. Fusion 46, A323-A333 (May 2004). "Porting the 3D Gyrokinetic Particle-in-Cell Code GTC to the NEC SX-6 Vector Architecture: Perspectives and Challenges", S. Ethier and Z. Lin, to appear in Computer Physics Communications, 2004. "A Gyrokinetic Electron and Fully Kinetic Ion Particle Simulation Model", Y. Lin, X. Y. Wang, Z. Lin, and L.
Chen, submitted to Phys. Plasmas, 2004. "Calculating the Thermal Structure of Solar Active Regions in Three Dimensions", Y. Mok, R. Lionello, Z. Mikic and J. Linker, Astrophys. J., accepted. "Discrete Alfven Eigenmode in High-Beta Toroidal Plasma", S. Hu and L. Chen, Phys. Plasmas 11, 1 (2004). PI Jacqueline Chen T. Echekki and J. H. Chen, Direct Numerical Simulation of Autoignition in Non-homogeneous Hydrogen-Air Mixtures, Combust. Flame, 134:169-191 (2003). E. R. Hawkes and J. H. Chen, "Direct Numerical Simulation of Hydrogen Enriched Lean Premixed Methane/Air Flames," Combust. Flame 138:242-258, (2004b). S. Liu, J. C. Hewson, J. H. Chen, and H. Pitsch, "Effects of Strain Rate on High-Pressure Nonpremixed N-Heptane Autoignition in Counterflow," Combust. Flame, 137:320-339 (2004). E. R. Hawkes and J. H. Chen, "Evaluation of Models for Flame Stretch in the Thin Reaction Zones Regime," to appear in Proceedings of the Combustion Institute, 30, (2004c). E. R. Hawkes and J. H. Chen, "Comparison of Direct Numerical Simulations of Lean Premixed Flames with Strained Laminar Flame Calculations," submitted to Combust. Flame, (2004b). J. C. Sutherland, P. J. Smith, and J. H. Chen, "A Generalized Method for A Priori Evaluation of Combustion Reaction Models," submitted to Combust. Flame, (2004a). J. C. Sutherland, P. J. Smith, and J. H. Chen, "Quantification of Differential Diffusion in Nonpremixed Systems," submitted to Combust. Theory and Modeling, (2004b). J. H. Chen, E. R. Hawkes, J. C. Hewson, R. Sankaran, H. G. Im, S. D. Mason, and P. Pebay, "Ignition Front Propagation in a Constant Volume With Temperature Inhomogeneities," submitted to Combust. Flame, (2004). R. Sankaran, H. G. Im, E. R. Hawkes, and J. H. Chen, "The Effects of Nonuniform Temperature Distribution on the Ignition of a Lean Homogeneous Hydrogen-Air Mixture," to appear in Proceedings of the Combustion Institute, 30, (2004). R. Seiser, J. H. Frank, S. Liu, J. H. Chen, R. J. Sigurdsson, and K.
Seshadri, "Ignition of Hydrogen in Unsteady Nonpremixed Flows," to appear in Proceedings of the Combustion Institute, 30, (2004). PI Yang Chen Y. Chen and S.E. Parker, "A delta-f particle method for gyrokinetic simulations with kinetic electrons and electromagnetic perturbations," J. Comp. Phys. 189 (2003) 463-475 S.T. Jones and S.E. Parker, "Including electron inertia without advancing electron flow," Journal of Computational Physics 191 (2003) 322-327 Y. Chen, S.E. Parker, B.I. Cohen, A.M. Dimits, W.M. Nevins, D. Shumaker, V.K. Decyk and J.N. Leboeuf, "Simulations of turbulent transport with kinetic electrons and electromagnetic effects," Nucl. Fusion 43 (2003) 1121-1127 S.E. Parker, Y. Chen, W. Wan, B.I. Cohen and W.M. Nevins, "Electromagnetic gyrokinetic simulations," Phys. Plasmas 11(5) (2004) 2594-2599 S. Vadlamani, S.E. Parker, Y. Chen and C.C. Kim, "The particle-continuum method: an algorithmic unification of particle-in-cell and continuum methods," Computer Physics Communications, prn. 23/07/2004, available online at www.sciencedirect.com C.C. Kim, C.R. Sovinec, S.E. Parker and the NIMROD Team, "Hybrid kinetic-MHD simulations in general geometry," Computer Physics Communications, prn. 09/08/2004, available online at Y. Su, S. Jones, R. Ergun, S. Parker, "Modeling of field-aligned electron bursts by dispersive Alfven waves in the dayside auroral region," accepted Journal of Geophysical Research - Space Physics, August 2004 W. Wan, Y. Chen and S.E. Parker, "Gyrokinetic delta-f simulation of the collisionless and semi-collisional tearing mode instability," submitted to Phys. Plasmas. W. Wan, Y. Chen and S.E. Parker, "delta-f simulation of the collisionless tearing mode instability with kinetic ion response," submitted to IEEE Transactions on Plasma Science. PI Hai-Ping Cheng Density functional study of the adsorption of a C60 monolayer on Ag(111) and Au(111) surfaces, Lin-Lin Wang and Hai-Ping Cheng, Phys. Rev. B 69, 165417 [12 pages] (2004).
Coherent electron transport through an azobenzene molecule: A light driven molecular switch, C. Zhang, M.-H. Du, Hai-Ping Cheng, X.-G. Zhang, A.E. Roitberg, and J.L. Krause, Phys. Rev. Lett. 92, 158301 [4 pages] (2004). Electronic structure and spin-dependent tunneling conductance under finite bias, Chun Zhang, Xiao-Guang Zhang, P.S. Krstic, Hai-Ping Cheng, and W.H. Butler, Phys. Rev. B 69, 134406 [12 pages] (2004). Rotation, translation, charge transfer, and electronic structure of C60 on Cu(111) surface, Lin-Lin Wang and Hai-Ping Cheng, Phys. Rev. B 69, 40404 [7 pages] (2004). Hydrolysis of a Two-Membered Silica Ring on the Amorphous Silica surface: A Computational Study that Combines Quantum Mechanics and Classical Interatomic Potential Functions, Mao-Hua Du, Andrew Kolchin and Hai-Ping Cheng, J. Chem. Phys. 120, 1044-1054 (2004). Bulk separative enrichment in metallic or semiconducting single-walled carbon nanotubes, Z.-H. Chen, X. Du, M.-H. Du, C.D. Rancken, H.-P. Cheng, and A.G. Rinzler, Nano Lett. 3, 1245-1249 (2003). Manipulation of fullerene-induced impurity states in carbon peapods, Mao-Hua Du and Hai-Ping Cheng, Phys. Rev. B 68, 113402 [4 pages] (2003). Stability of free and oxidized silver clusters, M. Schmidt, Ph. Cahuzac, C. Bréchignac and Hai-Ping Cheng, J. Chem. Phys. 118, 10956-10962 (2003). Molecular dynamics simulation of potential energy sputtering on LiF surface by slow highly charged ions, Lin-Lin Wang, Ajith Perera, and Hai-Ping Cheng, Phys. Rev. B 68, 115409 [13 pages] (2003) Water-silica surface interactions: A combined quantum-classical molecular dynamics study of energetics and reaction pathways, Mao-Hua Du, Andrew Kolchin, and Hai-Ping Cheng, J. Chem. Phys. 119, 6418-6422 (2003). Water-silica interactions in clusters, Mao-Hua Du, Lin-Lin Wang, Andrew Kolchin, and Hai-Ping Cheng, Euro. Phys. J. D 24, 323-326 (2003). PI Wai-Yim Ching L. Ouyang, H. Yao, S. Richey, Y.-N. Xu and W.Y.
Ching, "On the crystal structure and optical Properties of YSiO2N", Phys. Rev. B 69, 094112-1-6 (2004). L. Ouyang, P. Rulis, W.Y. Ching, G. Nardin, and L. Randaccio, "Electronic Structure and Bonding in Adenosylcobalamin", Inorg. Chem. 43 (4), 1235-1241 (2004). Lizhi Ouyang and W.Y. Ching, "Electronic structure and dielectric properties of gate material: (ZrO2)x(SiO2)1-x", J. Appl. Phys. 95 (12), 7918-7924 (2004). P. Rulis, W.Y. Ching and M. Kohyama, "Ab-initio ELNES/XANES spectral calculation of polar and non-polar grain boundary models in β-SiC", Acta Materialia, 52[10], 3009-3018 (2004). T. Mizoguchi, I. Tanaka, S. Yoshioka, M. Kunisu, T. Yamamoto, and W.Y. Ching, "First-principles calculations of ELNES/XANES of selected wide gap materials: dependence on crystal structure and orientation", Phys. Rev. B 70, 045103 (2004). W.Y. Ching, L. Ouyang, Hongzhi Yao and Y.-N. Xu, "Electronic Structure and Bonding in the Y-Si-O-N quaternary Crystals", Phys. Rev. B 70 (July 15, 2004). W.Y. Ching, "The electronic structure and bonding of all crystalline phases in the SiO2-Y2O3-Si3N4 phase equilibrium diagram", J. Amer. Ceram. Soc. (invited feature article, November, 2004). Paul Rulis, Lizhi Ouyang, and W.Y. Ching, "Electronic Structure and Bonding in Calcium Apatite Crystals: Hydroxyapatite, Fluorapatite, Chlorapatite, and Bromapatite", Phys. Rev. B. (accepted, 2004). Jun Chen, Yong-Nian Xu, Paul Rulis, Lizhi Ouyang, and W.Y. Ching, "Ab-initio tensile experiments on Y-doped Σ3 grain boundary in α-Al2O3", submitted to Acta Materialia. Paul Rulis, Jun Chen, Lizhi Ouyang and W.Y. Ching, Xiaotao Su, S.H. Garofalini, "Electronic structure and bonding of the intergranular glassy films (IGF) in polycrystalline Si3N4: Ab-initio studies and classical MD simulations", submitted to Advanced Materials. PI Mei-Yin Chou "Effects of the Substrate on Quantum Well States: A First-Principles Study for Ag/Fe(100)", C. M. Wei and M. Y. Chou, Phys. Rev. B 68, 125406/1-5 (2003).
"Quantum Confinement and Electronic Properties of Silicon Nanowires", X. Zhao, C. M. Wei, L. Yang, and M. Y. Chou, Phys. Rev. Lett. 92, 236805/1-4 (2004). "Thermal Stability and Electronic Structure of Pb Films on Si(111)", M. H. Upton, C. M. Wei, M. Y. Chou, T. Miller, and T.-C. Chiang, Phys. Rev. Lett. 93, 026802/1-4 (2004). "First-Principles Study of NaAlH4 and Na3AlH6 Complex Hydrides", A. Peles, J. A. Alford, Zhu Ma, Li Yang, and M. Y. Chou, Phys. Rev. B (in press). "Alternative Low-Symmetry Structure for 13-Atom Metal Clusters", C. M. Chang and M. Y. Chou, Phys. Rev. Lett. (in press). PI Daryl Chrzan Tianshu Li, J.W. Morris, Jr., and D.C. Chrzan, "Ideal tensile strength of B2 transition-metal aluminides", Physical Review B 70, 54107 (2004) Yi DO, Sharp ID, Xu Q, Liao CY, Ager JW III, Beeman JW, Liliental-Weber Z, Yu KM, Zakharov D, Haller EE, and Chrzan DC, "Modeling the stress evolution of Ion Beam Synthesized Nanocrystals," Materials Research Society, Spring Meeting, 2004. J. Deslippe, R. Tedstrom, M. S. Daw, D. C. Chrzan, T. Neeraj and M. J. Mills, "Dynamic Scaling in a Simple One-Dimensional Model of Dislocation Activity," Philosophical Magazine 84, 2445-2454 (2004). PI Catherine Chuang Penner, J.E., S.Y. Zhang, and C.C. Chuang, 2003: Soot and smoke aerosol may not warm climate, J. Geophys. Res., 108, 4657, doi:10.1029/2003JD003409. Rotman, D.A., C.S. Atherton, D.J. Bergmann, P.J. Cameron-Smith, C.C. Chuang, and others, 2004: IMPACT, the LLNL 3D global atmospheric chemical transport model for the combined troposphere and stratosphere: Model description and analysis of ozone and other trace gases, J. Geophys. Res., 109, doi:10.1029/2002JD003155. Chin, H.-N.S., M.J. Leach, G.A. Sugiyama, J.M. Leone Jr., H. Walker, J.S. Nasstrom, and M.J. Brown, 2004: Evaluation of an urban canopy parameterization in a mesoscale model using VTMX and URBAN 2000 data, submitted to Mon. Wea. Rev. Duffy, P.B., B. Govindasamy, J. Milovich, K. Taylor, and S.
Thompson, 2003: High resolution simulations of global climate, Part 1: Present climate, Clim. Dyn., 21, 371-390. Wickett, M.E., K. Caldeira, and P.B. Duffy, 2003: Effect of horizontal grid resolution on simulations of oceanic CFC-11 uptake and direct injection of anthropogenic CO2, J. Geophys. Res. (Oceans), 108, 10.1029/2001JC001130. Govindasamy, B., P.B. Duffy, and J. Coquard, 2003: High resolution simulations of global climate, Part 2: Effects of increased greenhouse gases, Clim. Dyn., 21, 391-404. Iorio, J., P.B. Duffy, M. Khairoutdinov, and D. Randall, 2004: Effect of model resolution and subgrid scale physics on daily precipitation in the continental United States, Clim. Dyn., in press. Coquard, J., P.B. Duffy, and K.E. Taylor, 2004: Present and future surface climate in the western U.S. as simulated by 15 global climate models, Clim. Dyn., in press. Marcy, T.P., D.W. Fahey, R.S. Gao, P.J. Popp, E.C. Richard, T.L. Thompson, K.H. Rosenlof, E.A. Ray, R.J. Salawitch, C.S. Atherton, and others, 2004: Quantifying stratospheric ozone in the upper troposphere using in situ measurements of HCl, Science, 304, 261-265. PI Bruce Cohen Y. Chen, S.E. Parker, B.I. Cohen, A.M. Dimits, W.M. Nevins, D. Shumaker, V.K. Decyk, and J.-N. Leboeuf, Simulations of Turbulence with Kinetic Electrons and Electromagnetic Effects from the Summit Framework, November, 2002, Nucl. Fusion 43, 1121 (2003). S. E. Parker, Y. Chen, W. Wan, B.I. Cohen, and W.M. Nevins, Electromagnetic Gyrokinetic Simulations, (APS DPP Annual Meeting, October 2003, Invited Paper), Phys. Plasmas 11, 231 (2004). S. Woodruff, B. W. Stallard, H. S. McLean, E. B. Hooper, R. Bulmer, B. I. Cohen, D. N. Hill, C.T. Holcomb, J. Moller, R. D. Wood, Increasing the magnetic helicity content of a plasma by pulsing a magnetized source, February 2004, submitted to Phys. Rev. Lett. S. Woodruff, B.I. Cohen, E.B. Hooper, H.S. McLean, C.T. Holcomb, B.W. Stallard, D.N. Hill, L.L. Smith, R.D. Wood, G. Cone, C.R.
Sovinec, Controlled and Spontaneous Magnetic Field Generation in a Gun-Driven Spheromak, February 2004, submitted to Phys. Plasmas. W.M. Nevins, et al., A Statistical Analysis of ITG Turbulence, in preparation, Feb. 2004. Pigarov, A.Y., Krasheninnikov, S.I., Rognlien, T.D., et al., Multi-fluid code simulations including anomalous non-diffusive transport of plasma and impurities in the tokamak SOL, accepted for publication in Contrib. Plasma Phys. (2004). T.D. Rognlien, M.V. Umansky, X.Q. Xu, and R.H. Cohen, Self-consistent simulation of turbulence and transport in tokamak edge plasmas, Contrib. Plasma Phys. 44 (2004) 188. Xu, X.Q., Nevins, W.M., Cohen, R.H., Rognlien, T.D., and Umansky, M.V., Correlation of density pedestal width and neutral penetration length, Contrib. Plasma Phys. 44 (2004) 105. Xu, X.Q., Nevins, W.M., Cohen, R.H., Myra, J.R., and Snyder, P.B., Dynamical simulations of boundary physics turbulence in divertor geometry, New Journal of Physics 4 (53), 1-15 (2003). Xu, X.Q., Nevins, W.M., Rognlien, T.D., Bulmer, R.H., Greenwald, M., Mahdavi, A., Pearlstein, L.D., and Snyder, P., Transitions of turbulence in plasma density limits, Phys. Plasmas 10 (5), 1773-1781 (2003). Holland, C., Diamond, P.H., Champeaux, S., Kim, E., Rosenbluth, M.N., Tynan, G.R., Crocker, N., Nevins, W.M., and Candy, J., Investigations of the role of nonlinear couplings in structure formation and transport regulation: experiment, simulation, and theory, Nuclear Fusion 43 (8), 761-780 (2003). T. D. Rognlien, M. V. Umansky, X. Q. Xu, and R. H. Cohen, Self-Consistent Simulation of Turbulence and Transport in Tokamak Edge Plasmas, Contrib. Plasma Phys. 44, No. 1-3, 188-193 (2004). M. V. Umansky, T. D. Rognlien, X. Q. Xu, R. H. Cohen, and W. M. Nevins, Turbulence in Divertor region of Tokamak Edge Plasma, Contrib. Plasma Phys. 44, No.
1-3, 182-187 (2004). Snyder PB, Wilson HR, Ferron JR, Lao LL, Leonard AW, Mossessian D, Murakami M, Osborne TH, Turnbull AD, Xu XQ, ELMs and constraints on the H-mode pedestal: peeling-ballooning stability calculation and comparison with experiment, Nuclear Fusion, 44 (2): 320-328 FEB 2004. "Role of trapped electron mode turbulence in internal transport barrier control in the Alcator C-Mod Tokamak", D. R. Ernst, P. T. Bonoli, P. J. Catto, W. Dorland, C. L. Fiore, R. S. Granetz, M. Greenwald, A. E. Hubbard, M. Porkolab, M. H. Redi, J. E. Rice, K. Zhurovich, and the Alcator C-Mod Group, Phys. Plasmas 11 (5) (2004) 2637. Gyrokinetic Simulations of Ion and Impurity Transport, C. Estrada-Mila, J. Candy and R.E. Waltz, submitted to Phys. Plasmas. Smoothness of Turbulent Transport Across a Minimum-q Surface, J. Candy, R.E. Waltz and M.N. Rosenbluth, Phys. Plasmas 11, 1879 (2004). The Local Limit of Global Gyrokinetic Simulations, J. Candy, R.E. Waltz and W. Dorland, Phys. Plasmas 11, L25 (2004). Effects of Electromagnetic Turbulence in the Neoclassical Ohm's Law, F.L. Hinton, R.E. Waltz and J. Candy, Phys. Plasmas 11, 2433 (2004). Anomalous Transport Scaling in the DIII-D Tokamak Matched by Supercomputer Simulation, J. Candy and R.E. Waltz, Phys. Rev. Lett. 91, 045001 (2003). Burning Plasma Confinement Projections and Renormalization of the GLF23 Driftwave Transport Model, J.E. Kinsey, G. Staebler and R.E. Waltz, Fusion Sci. and Tech. 44, 763 (2003). PI Marvin Cohen F. J. Ribeiro, M. L. Cohen, Phys. Rev. B 69, 212507 (2004). H. Sun, F. J. Ribeiro, J.-L. Li, D. Roundy, M. L. Cohen, S. G. Louie, Phys. Rev. B 69, 024110 (2004). P. Tangney, S. G. Louie, and M. L. Cohen, Phys. Rev. Lett. 93, 065503 (2004). P. Zhang, W. Luo, V. H. Crespi, M. L. Cohen, and S. G. Louie, Phys. Rev. B 70, 085109 (2004). R. B. Capaz, C. D. Spataru, P. Tangney, M. L. Cohen, and S. G. Louie, Phys. Stat. Solid (b), (2004). J. W. Morris, D.M. Clatterbuck, D.C. Chrzan, C.R. Krenn, W. Luo, M. L.
Cohen, Materials Science Forum 426-432, 4429-4434 (2003). A. Trave, F.J. Ribeiro, S.G. Louie and M.L. Cohen, "Energetics and Structural Characterization of C60 Polymerization in BN and Carbon Nano-peapods", submitted to Phys. Rev. B. W. Luo, W. Duan, S. G. Louie, and M. L. Cohen, "Structural and electronic properties of n-doped and p-doped SrTiO3", submitted to Phys. Rev. B. R. B. Capaz, C. D. Spataru, P. Tangney, M. L. Cohen, and S. G. Louie, "Temperature Dependence of the Band Gap of Semiconducting Carbon Nanotubes", submitted to Phys. Rev. Lett. PI John Cooke A. Franceschetti, S.J. Pennycook, and S.T. Pantelides, "Oxygen chemisorption on Au nanoparticles", Chem. Phys. Lett. 374, 471 (2003). A. Franceschetti and S.T. Pantelides, "Excited-state relaxations and Franck-Condon shift in Si quantum dots", Phys. Rev. B 68, 033313 (2003). S.W. Wang, A.Y. Borisevich, S.N. Rashkeev, M.V. Glazoff, K. Sohlberg, S.J. Pennycook, and S.T. Pantelides, "Dopant adsorbed as single atoms prevent degradation of catalysts", Nature Materials 3, 274 (2004). H.S. Baik, M. Kim, G.S. Park, S.A. Song, M. Varela, A. Franceschetti, S.T. Pantelides, and S.J. Pennycook, "Interface structure and non-stoichiometry in HfO2 dielectrics", Appl. Phys. Lett. 85, 672 (2004). J.P. Buban, A. Franceschetti, J.C. Idrobo, N.D. Browning, X. Song, G. Daniels, A. Gurevich, D.C. Larbalestier, S.T. Pantelides, and S.J. Pennycook, "Cooperative Doping Mechanisms at Grain Boundaries in Ca-Doped YBCO", submitted to Phys. Rev. Lett. M. Varela, A. Franceschetti, A.R. Lupini, W. Tian, R. Jin, B.C. Sales, D.G. Mandrus, S.J. Pennycook, and S.T. Pantelides, "Ordered phases in doped manganites - atomic-resolution spectroscopy and density-functional calculations", submitted to Science. PI Silvia Crivelli S. Crivelli, O. Kreylos, B. Hamann, N. Max, and W. Bethel (2004). ProteinShop: A Tool for Interactive Protein Manipulation. Journal of Computer-Aided Molecular Design Vol. 18, pp. 271-285. S. Crivelli and T. Head-Gordon (2004).
A New Load Balancing Strategy for the Solution of Dynamical Large Tree Search Problems Using a Hierarchical Approach, IBM Journal of Research and Development. PI Peter Cummings Rivera, J. L., McCabe, C., and Cummings, P. T., "Oscillatory behavior of double nanotubes under extension: A simple nanoscale damped spring," Nano Letters, 3, 1001-1005 (2003). Leng YS, Keffer DJ, Cummings PT, Structure and dynamics of a benzenedithiol monolayer on a Au(111) surface, Journal of Physical Chemistry B 107 (43): 11940-11950 2003. Z. Zhang, P. Fenter, L. Cheng, N. C. Sturchio, M. J. Bedzyk, M. Predota, A. Bandura, J. Kubicki, N. Lvov, P. T. Cummings, A. A. Chialvo, M. K. Ridley, P. Bénézeth, L. Anovitz, D. A. Palmer, M. L. Machesky, D. J. Wesolowski, "Ion Adsorption at the Oxide-Water Interface: Linking Molecular and Macroscopic Properties," Langmuir, 20 (12): 4954-4969 (2004). M. Predota, A. V. Bandura, P. T. Cummings, J. D. Kubicki, D. J. Wesolowski, A. A. Chialvo, M. L. Machesky, "Electric Double Layer at Rutile Surfaces. I. Structure of Water from Molecular Dynamics Using Ab Initio Potentials," Journal of Physical Chemistry B 108 (32), 12049-12060 (2004). M. Predota, Z. Zhang, P. Fenter, D. J. Wesolowski and P. T. Cummings, "Electric double layer at rutile surfaces. II. Adsorption of ions from molecular dynamics," Journal of Physical Chemistry B 20 (12); 4954-4969 (2004). C. McCabe, D. Bedrov, O. Borodin, G. D. Smith and P. T. Cummings, "A Molecular Dynamics Study of The Rheology of Perfluoroalkanes: Comparing A Fully Atomistic and United Atom Model," Ind. Eng. Chem. Res., 42, 6956-6961 (2003). S. Bair and C. McCabe, A Study of Mechanical Shear Bands in Liquids at High-Pressure, Tribology International, 37, 783-789 (2004). C. McCabe, S. C. Glotzer, J. Kieffer, M. Neurock and P. Cummings, Multiscale Simulation of the Synthesis, Assembly and Properties of Nanostructured Organic/Inorganic Hybrid Materials, Journal of Theoretical and Computational Nanoscience, in press (2004). A.
Striolo, C. McCabe, and P.T. Cummings, "Thermodynamic and Transport Properties of Polyhedral Oligomeric Silsesquioxanes in Poly(Dimethyl Siloxane)", Nano Letters, submitted (2004). A. Striolo, A. A. Chialvo, P. T. Cummings, "Water adsorption in carbon-slit nanopores," Langmuir, 19 (20): 8583-8591 2003. PI Larry Curtiss O. Borodin, G. D. Smith, R. Bandyopadhyaya, P. Redfern, L. A. Curtiss, Molecular dynamics study of nanocomposite polymer electrolyte based on poly(ethylene oxide)/LiBF4, Modelling and Simulation in Materials Science and Engineering 12 (3): S73-S89 May 2004. Y. Duan, J. W. Halley, L. Curtiss, and P. Redfern, Mechanisms of Lithium Transport in Amorphous Polyethylene Oxide, J. Chem. Phys., submitted. A. S. Barnard and P. Zapol, A model for the phase stability of arbitrary nanoparticles as a function of size and shape, J. Chem. Phys. 121 (9), 4276 (2004). Effects of particle morphology and surface hydrogenation on the phase stability of TiO2, A. S. Barnard and P. Zapol, Phys. Rev. B, 2004 (in press). A. Barnard, P. Zapol, Predicting the Energetics, Phase Stability and Morphology Evolution of Faceted and Spherical Anatase Nanocrystals, J. Phys. Chem. (submitted). Modeling the morphology and phase stability of TiO2 nanocrystals in water, A. S. Barnard, P. Zapol and L.A. Curtiss, J. Theo. Chem. Comp. (2004), submitted. Shaping Nanoscale Architecture Through Surface Chemistry, Z. V. Saponjic, D. Tiede, L. Chen, N. Dimitrijevic, A. Goshe, A. S. Barnard, P. Zapol, L. Curtiss, and T. Rajh, Advanced Materials (2004). PI Daniel D'Ippolito "Blob Dynamics in 3D BOUT Simulations of Tokamak Edge Turbulence," D. A. Russell, D. A. D'Ippolito, J. R. Myra, W. M. Nevins, and X. Q. Xu, submitted to Phys. Rev. Lett. (2004). PI Ronald Davidson Drift Compression and Final Focus Options for Heavy Ion Fusion, H. Qin, R. C. Davidson, J. J. Barnard and E. P. Lee, Nuclear Instruments and Methods in Physics Research, in press (2004).
Chaotic Particle Trajectories in High-Intensity Finite-Length Charge Bunches, S. T. Hudson, H. Qin, and R. C. Davidson, Nuclear Instruments and Methods in Physics Research, in press (2004). Survey of Collective Instabilities and Beam-Plasma Interactions in Intense Heavy Ion Beams, R. C. Davidson, I. D. Kaganovich, H. Qin, et al., Nuclear Instruments and Methods in Physics Research, in press (2004). Three-Dimensional Simulation Studies of the Temperature Anisotropy Instability in Intense Charged Particle Beams, E. A. Startsev, R. C. Davidson, and H. Qin, Nuclear Instruments and Methods in Physics Research, in press (2004). The Electromagnetic Darwin Model for Intense Charged Particle Beams, W. W. Lee, R. C. Davidson, E. A. Startsev, and H. Qin, Nuclear Instruments and Methods in Physics Research, in press (2004). Drift Compression and Final Focus of Intense Heavy Ion Beams, H. Qin, R. C. Davidson, J. J. Barnard and E. P. Lee, submitted to Physical Review Special Topics on Accelerators and Beams (2004). Electromagnetic Weibel Instability in Intense Charged Particle Beams With Large Temperature Anisotropy, E. A. Startsev and R. C. Davidson, Physics of Plasmas 10, in press (2003). Wall-Impedance-Driven Collective Instability in Intense Charged Particle Beams, R. C. Davidson, H. Qin and G. Shvets, Physical Review Special Topics on Accelerators and Beams 6, 104402 (2003). Analytical Theory and Nonlinear Delta-f Perturbative Simulations of Temperature Anisotropy Instability of Intense Charged Particle Beams, E. A. Startsev, R. C. Davidson and H. Qin, Physical Review Special Topics on Accelerators and Beams 6, 084401 (2003). Nonlinear Delta-f Simulations of Collective Effects in Intense Charged Particle Beams, H. Qin, Physics of Plasmas 10, 2708 (2003). Belova, E. V., R. C. Davidson, H. Ji, M. Yamada, Kinetic effects on the stability properties of field-reversed configurations: II. Nonlinear evolution, Phys. Plasmas v.11, 2523 (2004). N. N. Gorelenkov, E. V. Belova, H. L.
Berk, C. Z. Cheng, E. Fredrickson, W. Heidbrink, S. Kaye, G. Kramer, Beam Ion Driven Instabilities in NSTX, Phys. Plasmas v.11, 2586 (2004). Belova, E. V., N. N. Gorelenkov, C. Z. Cheng, Self-consistent equilibrium model of low aspect ratio toroidal plasma with energetic beam ions, Phys. Plasmas v.10, 3240 (2003). PI Murray Daw J. Deslippe, R. Tedstrom, M. Daw, D. Chrzan, T. Neeraj, and M. Mills, "Dynamic scaling in a simple one-dimensional model of dislocation activity," Phil. Mag. 84, 2445-2454 (2004). PI Eric DeWeaver DeWeaver, E., and S. Nigam, 2004: On the forcing of ENSO teleconnections by anomalous heating and cooling. J. Climate, 17, 3225-3235. PI David Dean Coupled-cluster approach to nuclear physics, D. J. Dean and M. Hjorth-Jensen, Phys. Rev. C 69, 054320 (2004). Coupled Cluster Calculations of Ground and Excited States of Nuclei, K. Kowalski, D. J. Dean, M. Hjorth-Jensen, T. Papenbrock, and P. Piecuch, Phys. Rev. Lett. 92, 132501 (2004). Solution of large scale nuclear structure problems by wave function factorization, T. Papenbrock, A. Juodagalvis, and D. J. Dean, Phys. Rev. C 69, 024312 (2004). Systematic study of deformed nuclei at the drip lines and beyond, M. V. Stoitsov, J. Dobaczewski, W. Nazarewicz, S. Pittel, and D. J. Dean, Phys. Rev. C 68, 054312 (2003). Consequences of Nuclear Electron Capture in Core Collapse Supernovae, W. R. Hix, O. E. B. Messer, A. Mezzacappa, M. Liebendörfer, J. Sampaio, K. Langanke, D. J. Dean, and G. Martínez-Pinedo, Phys. Rev. Lett. 91, 201102 (2003). Free energy and criticality in the nucleon pair breaking process, M. Guttormsen, R. Chankova, M. Hjorth-Jensen, J. Rekstad, S. Siem, A. Schiller, and D. J. Dean, Phys. Rev. C 68, 034311 (2003). Electron Capture Rates on Nuclei and Implications for Stellar Core Collapse, K. Langanke, G. Martínez-Pinedo, J. M. Sampaio, D. J. Dean, W. R. Hix, O. E. B. Messer, A. Mezzacappa, M. Liebendörfer, H.-Th. Janka, and M. Rampp, Phys. Rev. Lett.
90, 241102 (2003). Factorization of shell-model ground states, T. Papenbrock and D. J. Dean, Phys. Rev. C 67, 051303 (2003). How magic is the magic Ni-68 nucleus?, K. Langanke, J. Terasaki, F. Nowacki, D. J. Dean, and W. Nazarewicz, Phys. Rev. C 67, 044314 (2003). Neutral-current neutrino-nucleus cross sections for A~50-65 nuclei, A. Juodagalvis, K. Langanke, G. Martínez-Pinedo, W.R. Hix, D.J. Dean, and J.M. Sampaio, in press, Nucl. Phys. A (2004). PI Tomas Diaz de la Rubia B.D. Wirth and E.M. Bringa, "A Kinetic Monte Carlo Model for Helium Diffusion and Clustering in Fusion Environments", Physica Scripta, T108 (2004) 80. B.D. Wirth, G.R. Odette, J. Marian, L. Ventelon, J.A. Young and L.A. Zepeda-Ruiz, "Multiscale Modeling of Radiation Damage in Fe-based Alloys in the Fusion Environment", Journal of Nuclear Materials 329-333 (2004) 103. L. A. Zepeda-Ruiz, J. Rottler, S. Han, G. J. Ackland, R. Car, and D. J. Srolovitz, "Strongly Non-Arrhenius Interstitial Diffusion in Vanadium", Physical Review B - Rapid Communication, in press. L. A. Zepeda-Ruiz, J. Marian, and B. D. Wirth, "On the Character of Self-Interstitial Dislocation Loops in Vanadium", accepted for publication in Philosophical Magazine. B. Remington et al., "Materials science under extreme conditions of pressure and strain rate", Met. Mat. Trans. A 35, 2587 (2004). L. Davila, P. Erhart, E.M. Bringa, M.A. Meyers, V.A. Lubarda, M. Schneider, R. Becker and M. Kumar, "Shock-induced void collapse in fcc metals", submitted to Applied Physics Letters. P. Erhart, E. Bringa and M. Kumar, "Atomistic simulation of shocks in porous metals", submitted to Phys. Rev. Lett. "A new model for cosmic ray erosion of volatiles from grains in the interstellar medium", E. M. Bringa and R. E. Johnson, Astrophysical Journal 603, 159 (2004). "Molecular dynamics simulation of sputtering from a cylindrical track: EAM vs. pair potentials", O.J. Tucker, D. Ivanov, R.E. Johnson, L. Zhigilei and E.M. Bringa, submitted to Nucl. Instr.
and Meth. Phys. Res. B. "High-Energy Ion Tracks in Thin Films", David M. Follstaedt, Adam K. Norman, Paolo Rossi, Barney L. Doyle, Floyd D. McDaniel and Eduardo M. Bringa, submitted to Nucl. Instr. and Meth. in Phys. Res. "Atomistic simulations of threshold displacement energies in SiO2", F. Mota, M.-J. Caturla, J. M. Perlado, E. Dominguez, A. Kubota, Journal of Nuclear Materials, accepted for publication (2004). "Differences in deformation processes in nanocrystalline nickel with low and high angle boundaries from atomistic simulations", M.J. Caturla, T. G. Nieh, J. S. Stolken, Applied Physics Letters 84 (2004) 598-600. "Transformations in the medium-range order of fused silica under high pressure", L. P. Davila, M.-J. Caturla, A. Kubota, B. Sadigh, T. Diaz de la Rubia, J. F. Shackelford, S. H. Risbud, S. H. Garofalini, Physical Review Letters 91 (2003) 205501-4. "Molecular dynamics simulations of energy deposition in solids", M. J. Caturla, A. Gras Marti, J. J. Jimenez-Rodriguez, J.C. Jimenez Saez, M.C. Perez-Martin, Advances in Quantum Chemistry 45 (2004). "Temperature dependent defect properties from ion-irradiation in Pu(Ga)", J. Fluss, B. D. Wirth, M. Wall, M. J. Caturla, T. Diaz de la Rubia, and T. E. Felter, Journal of Alloys and Compounds 368 (2004) 62-74. "Modeling microstructure evolution of f.c.c. metals under irradiation in the presence of He", M. J. Caturla, T. Diaz de la Rubia, M. Fluss, Journal of Nuclear Materials 323 (2003) 163-168. "Fused Silica Final Optics For Inertial Fusion Energy: Radiation Studies And System-Level Analysis", Jeffery F. Latkowski, Alison Kubota, Maria J. Caturla, Sham N. Dixit, Joel A. Speth, and Stephen A. Payne, Fusion Science and Technology 43 (2003) 540-558. PI Dimitre Dimitrov C. Nieter and J. R. Cary, "VORPAL: a versatile plasma simulation code", J. Comput. Phys., vol. 196, pp. 448-473 (2004). PI Chris Ding MPH: a Library for Distributed Multi-Component Environment. Chris Ding and Yun He.
Submitted to Int'l Journal of High Performance Computing. ZioLib: a Parallel I/O Library, Woo-Sun Yang and Chris Ding, Oct 2003, LBNL Tech Report, LBNL-53521. PI Julie d'Itri V. I. Kovalchuk, J. L. d'Itri, "Catalytic Chemistry of Chloro- and Chlorofluorocarbon Dehalogenation: From Macroscopic Observations to Molecular Level Understanding." Applied Catalysis A: General, 271 (2004) 13-25. V. Yu. Borovkov, D. R. Luebke, V. I. Kovalchuk, J. L. d'Itri, "Hydrogen-Assisted 1,2-Dichloroethane Dechlorination Catalyzed by Pt-Cu/SiO2: Evidence for Different Functions of Pt and Cu Sites." Journal of Physical Chemistry B, 107 (2003) 5568-5574. PI Jianjun Dong D. Hatch, H. Stokes, Jianjun Dong, J. Gunter, H. Hao, and J. Lewis, "SiC transition paths between the zinc-blende and rocksalt structure types", submitted. H. Stokes, D. Hatch, Jianjun Dong, and J. Lewis, "Mechanisms of phase transition between the B1 and B2 structure types in NaCl and PbS", Phys. Rev. B 69, 174111 (2004). Jianjun Dong, A.A. Kinkhabwala, and P.F. McMillan, "High-pressure polymorphism in phosphorus nitrides", Phys. Stat. Solid. B 241, 2319 (2004). P.F. McMillan, S.K. Deb, and Jianjun Dong, "High pressure phase transition in beta-Ge3N4: a Raman scattering study", J. Raman Spectroscopy 28, 885 (2003). PI Sebastian Doniach J. Lipfert, J. Franklin, F. Wu and S. Doniach, Protein Misfolding and Amyloid Formation for the Peptide GNNQQNY from Yeast Prion Protein Sup35: Simulation by Reaction Path Annealing, submitted for publication. PI William Dorland B. D. Jemella, M. A. Shay, J. F. Drake and B. N. Rogers, Impact of Frustrated Singularities on Magnetic Island Growth, Phys. Rev. Lett. 91, 125002, 2003. M. A. Shay, J. F. Drake, M. Swisdak and B. N. Rogers, The scaling of embedded collisionless reconnection, Phys. Plasmas 11, 2199, 2004. M. A. Shay and M. Swisdak, Three species collisionless reconnection: Effect of O+ on magnetotail reconnection, Phys. Rev. Lett. (in press 2004). B. D. Jemella, J. F. Drake, and M. A.
Shay, Singular Structure of Magnetic Islands Resulting from Reconnection, Phys. Plasmas (in press 2004). C. Cattell, J. Dombeck, J. Wygant, J. F. Drake, M. Swisdak, et al., Cluster observations of electron holes in association with magnetotail reconnection and comparison to simulations, J. Geophys. Res. (in press 2004). M. Swisdak, J. F. Drake, J. G. McIlhargey and M. A. Shay, The transition from anti-parallel to component magnetic reconnection, J. Geophys. Res. (submitted 2004). J. F. Drake, M. A. Shay, M. Swisdak, W. Thongthai, Electron energization during magnetic reconnection, Phys. Rev. Lett. (submitted 2004). PI Philip Duffy J. Coquard, P.B. Duffy, and K.E. Taylor, 2004: Present and future surface climate in the western U.S. as simulated by 15 global climate models, Climate Dynamics, in press. P.B. Duffy, B. Govindasamy, J. Iorio, J. Milovich, K. Taylor, M. Wehner, and S. Thompson, 2003: High Resolution Simulations of Global Climate, Part 1: Present Climate. Climate Dynamics, 21, 371-390. B. Govindasamy and P.B. Duffy, 2003: High Resolution Simulations of Global Climate, Part 2: Effects of Increased Greenhouse Gases. Climate Dynamics, 21, 391-404. Iorio, J., P. Duffy, B. Govindasamy, S. Thompson, 2004: Effects of increased resolution on the simulation of daily precipitation statistics in the US, Climate Dynamics (in press). PI Barry Dunietz Dunietz, B.D. and Markovic, N. and Ross, P.H. and Head-Gordon, M., The initiation of Electro-oxidation of CO on Pt based electrodes at full coverage conditions simulated by ab-initio electronic structure calculations, J. Phys. Chem. B, 108, (2004), 9888. Ugalde, J.M. and Dunietz, B.D. and Dreuw, A. and Head-Gordon, M. and Boyd, R.J., The size of Fe(II) spin states and the spin dependence of the structure of Fe(II)-porphyrins, J. Phys. Chem. B, 108, (2004), 4653. PI Charlotte Elster "Three-Body Scattering at Intermediate Energies", H. Liu, Ch. Elster, W.
Gloeckle, nucl-th/0410051, submitted to Phys. Rev. C. The Operator form of 3H (3He) Wave Function and its Spin Structure, I. Fachruddin, W. Gloeckle, Ch. Elster, A. Nogga, Phys. Rev. C69, 064002 (2004). Model Study of Three-Body Forces in the Three-Body Bound State, H. Liu, Ch. Elster, W. Gloeckle, Few-Body Systems 33, 241 (2003). The Nd Break-Up Process in Leading Order in a Three-Dimensional Approach, I. Fachruddin, Ch. Elster, W. Gloeckle, Phys. Rev. C68, 054003 (2003). Lorentz Boosted NN Potentials for Few-Body Systems: Application to the three-nucleon bound state, H. Kamada, W. Gloeckle, J. Golak, Ch. Elster, Mod. Phys. Lett. A18, 124-127 (2003). PI Eric Esarey C. Nieter and J.R. Cary, "VORPAL: a versatile plasma simulation code", J. Comp. Phys. 196 (2004), p. 448. R.M.G.M. Trines, L.P.J. Kamp, T.J. Schep, W.P. Leemans, E.H. Esarey and F.W. Sluijter, "Enhancement of high-energy electron production through suppression of Raman backscattering," Europhys. Lett. 66 (2004), p. 492. C.G.R. Geddes, Cs. Toth, J. van Tilborg, E. Esarey, C.B. Schroeder, D.L. Bruhwiler, C. Nieter, J. Cary and W.P. Leemans, "High quality electron beams from a laser wakefield accelerator using plasma channel guiding," Nature (2004), accepted. M.S. Hur, G. Penn, R. Lindberg and J.S. Wurtele, "Slowly varying envelope kinetic simulations of pulse amplification by Raman Backscattering," Phys. Plasmas (2004), accepted. P. Messmer and D.L. Bruhwiler, "A parallel electrostatic solver for the VORPAL code," Comp. Phys. Comm. (2004), in press. R.E. Giacone, J.R. Cary, C. Nieter, D.L. Bruhwiler, E.H. Esarey and W.P. Leemans, "Formation of clean single beams in laser wake field acceleration", Phys. Rev. Lett. (2004), submitted. R.R. Lindberg, A.E. Charman, J.S. Wurtele and L. Friedland, "Robust autoresonant excitation in the plasma beat-wave accelerator: A theoretical study," Phys. Rev. Lett. 93, 055001 (2004). PI William Fawley Zholents, A.A.
and Fawley, W.M., ``Proposal for Intense Attosecond Radiation from an X-Ray Free-Electron Laser'', Phys. Rev. Lett., 92, 224801 (2004). PI Andrew Felmy Rosso KM and Dupuis M (2004) Reorganization energy associated with small polaron mobility in iron oxide, JOURNAL OF CHEMICAL PHYSICS 120 (15): 7050-7054. Rosso KM, Smith DMA, and Dupuis M (2004) Aspects of aqueous iron and manganese (II/III) self-exchange electron transfer reactions, JOURNAL OF PHYSICAL CHEMISTRY A 108 (24): 5242-5248. Rustad JR, Loring JS, Casey WH (2004) Oxygen-exchange pathways in aluminum polyoxocations, GEOCHIMICA ET COSMOCHIMICA ACTA 68 (14): 3011-3017 JUL 2004. Rustad JR, Rosso KM, Felmy AR (2004) Molecular dynamics investigation of ferrous-ferric electron transfer in a hydrolyzing aqueous solution: Calculation of the pH dependence of the diabatic transfer barrier and the potential of mean force, JOURNAL OF CHEMICAL PHYSICS 120 (16): 7607-7615 APR 22 2004. E.J. Bylaska, D.A. Dixon, A.R. Felmy, T.L. Windus, E. Aprà, C.-G. Zhan, and P.G. Tratnyek (2004), The Energetics of the Hydrolysis, Dehydrohalogenation, and Reductive Dehalogenation of 4,4-Dichloro-diphenyl-trichloroethane, Journal of Physical Chemistry A, vol. 108, pages 5883-5893. E. Aprà, E. J. Bylaska, D. J. Dean, A. Fortunelli, F. Gao, P. S. Krstić, J. C. Wells, and T. L. Windus (2003) NWChem for Material Science, Computational Materials Science, vol. 28, pages 209-221. M. Valiev, E. J. Bylaska, and J. H. Weare (2003), Bonding Structure of 3d Transition Metal Dimers With the Projector Augmented Plane Wave Method, Journal of Chemical Physics, vol. 119, pages PI Paul Fischer J.W. Lottes and P.F. Fischer, "Hybrid Multigrid/Schwarz Algorithms for the Spectral Element Method," J. Sci. Comp., to appear (2004). S. J. Thomas, J. M. Dennis, H. M. Tufo, and P. F. Fischer, ``A Schwarz Preconditioner for the Cubed-Sphere," SIAM J. Sci. Comp., Vol. 25, No. 2, pp. 442-453 (2003). F. X. Giraldo, J. B. Perot and P. F.
Fischer, ``A spectral element semi-Lagrangian (SESL) method for the spherical shallow water equations,'' J. of Comp. Phys., 190 (2), pp. 623-650 (2003). J. D. Scheel, M. R. Paul, M. C. Cross, and P. F. Fischer, ``Traveling waves in rotating Rayleigh-Bénard Convection: Analysis of modes and mean flow,'' Phys. Rev. E (to appear). K.-H. Chiam, M. C. Cross, H. S. Greenside, and P. F. Fischer, ``Transport of Passive Scalars by Spiral Defect Chaos in Rayleigh-Bénard Convection,'' submitted (Aug. 2003). T. Iliescu and P.F. Fischer, ``Backscatter in the Rational LES Model," Computers and Fluids (to appear). P.F. Fischer and J.W. Lottes, "Hybrid Schwarz-Multigrid Methods for the Spectral Element Method: Extensions to Navier-Stokes," Proc. of the 15th Int. Conf. on Domain Decomposition Methods, Berlin. Paul Fischer, Frederic Hecht, and Yvon Maday, "A Parareal in Time Semi-implicit Approximation of the Navier-Stokes Equations," Proc. of the 15th Int. Conf. on Domain Decomposition Methods, Berlin. PI Graham Fleming Energy transfer pathways in Photosystem I studied by one and two color photon echo spectroscopy, H. M. Vaswani, J. Stenger, M. Yang, P. Fromme, G. R. Fleming, in press. The Mechanism of Energy Transfer and Trapping in Photosystem I, H. M. Vaswani, M. Yang, A. Damjanovic, and G. R. Fleming. In Proceedings of Femtochemistry VI, Ultrafast Molecular Events in Chemistry and Biology, pp. 401-408, Elsevier (2004). Chlorophyll Fluorescence Quenching by Xanthophylls, A. Dreuw, G.R. Fleming and M. Head-Gordon. Phys. Chem. Chem. Phys., 5, 3247-3256 (2003). PI Ching-Yao Fong C. Y. Fong, L. H. Yang, J. E. Pask, and S. Dag, 'Electronic and magnetic properties in zincblende structure half-metal superlattices', Appl. Phys. Lett. 84, 239 (2004). b. C. Y. Fong and M. C. Qian, 'New spintronic superlattices composed of half-metallic compounds with zinc-blende structure', J. Phys.: Condens. Matter 16, 1 (2004). c. M. C. Qian, C. Y. Fong, W. E. Pickett, J. E. Pask, L. H. Yang and S.
Dag, 'Spin-polarized ballistic transport in a thin superlattice of zincblende half metallic compounds', submitted to Physical Review B. d. M. C. Qian, C. Y. Fong, W. E. Pickett, and Huai-Yu Wang, 'An ab initio investigation on the zinc-blende MnAs nanocrystallite', J. Appl. Phys. 95, 7459 (2004). e. M. C. Qian, C. Y. Fong, and L. H. Yang, 'Coexistence of a localized majority spin moment and an itinerant minority spin channel in MnC', Phys. Rev. B (Brief report) 70, 1 (2004). PI Alberto Franceschetti Lin-Wang Wang, Marco Califano, Alex Zunger, and Alberto Franceschetti, "Pseudopotential Theory of Auger Processes in CdSe Quantum Dots", Phys. Rev. Lett. 91, 056404 (2003). M. Califano, A. Zunger, and A. Franceschetti, "Efficient Inverse Auger Recombination at Threshold in CdSe Nanocrystals", Nano Letters (Communication) 4 (3), 525-531 (2004). Marco Califano, Alex Zunger, and Alberto Franceschetti, "Direct carrier multiplication due to inverse Auger scattering in CdSe quantum dots", Appl. Phys. Lett. 84, 2409 (2004). Marco Califano, Alex Zunger, and Alberto Franceschetti, "Radiative decay of bright and dark excitons in CdSe nanocrystal quantum dots", submitted to Phys. Rev. Lett. Marco Califano, Alex Zunger, and Alberto Franceschetti, "Lifetime and polarization of the radiative decay of excitons, trions and biexcitons in CdSe nanocrystal quantum dots", submitted to Phys. Rev. PI Joachim Frank P.S. Umesh Adiga, Ravi Malladi, William Baxter, and Robert M. Glaeser, A binary segmentation approach for boxing ribosome particles in cryo EM micrographs, Journal of Structural Biology 145 (2004). PI Stuart Freedman "A High Sensitivity Search for Anti-Neutrinos from the Sun and other Sources at KamLAND", KamLAND Collaboration, Phys. Rev. Lett. 92, 021802 (2003). "Measurement of Neutrino Oscillation with KamLAND: Evidence for Spectral Distortion", KamLAND Collaboration, submitted to Phys. Rev. Lett.
PI Arthur Freeman Screened-exchange determination of the optical properties of large gap insulators: CaF2, (M. Kim, Y.-J. Zhao, A.J. Freeman, and W. Mannstadt) Appl. Phys. Lett., 84, 3579 (2004). Electronic structure and light-induced conductivity in a transparent refractory oxide, (J.E. Medvedeva, A.J. Freeman, M.I. Bertoni, and T.O. Mason) Phys. Rev. Lett., 93, 016408 (2004). Hopping versus bulk conductivity in transparent oxides: 12CaO.7Al2O3, (J.E. Medvedeva and A.J. Freeman) Appl. Phys. Lett., 85, 955 (2004). Combining high conductivity with complete optical transparency: A band-structure approach, (J.E. Medvedeva and A.J. Freeman), submitted to Phys. Rev. B. Electronic structure properties and BCS superconductivity in beta-pyrochlore oxides: KOs2O6, (R. Saniz, J. E. Medvedeva, Lin-Hui Ye, T. Shishidou, and A. J. Freeman), accepted to Phys. Rev. B. PI Alex Friedman A. Friedman, "Simulation of Intense Beams for Heavy Ion Fusion," Proc. 15th International Symposium on Heavy Ion Inertial Fusion, Princeton, NJ, 7-11 June 2004 (in press). D.P. Grote, "Simulation of Integrated Beam Experiment Design," Proc. 15th International Symposium on Heavy Ion Inertial Fusion, Princeton, NJ, 7-11 June 2004 (in press). S.M. Lund, D.P. Grote, E. P. Lee and R.C. Davidson, "Simulations of Beam Emittance Growth from the Collective Relaxation of Space-Charge Non-uniformities," Proc. 15th International Symposium on Heavy Ion Inertial Fusion, Princeton, NJ, 7-11 June 2004 (in press); submitted to Nuclear Instruments and Methods in Physics Research A (2004). W. M. Sharp, J. J. Barnard, D. P. Grote, C. M. Celata, S. S. Yu, D. V. Rose and D.R. Welch, "Simulation of Drift-Compression for Heavy-Ion-Fusion," Proc. 15th International Symposium on Heavy Ion Inertial Fusion, Princeton, NJ, 7-11 June 2004 (in press). J.-L. Vay, P. Colella, J. W. Kwan, P. McCorquodale, D. B. Serafini, A. Friedman, D. P. Grote, G. Westenskow, J.-C. Adam, A. Héron, and I.
Haber, "Application of Adaptive Mesh Refinement to Particle-In-Cell Simulations of Plasmas and Beams," Phys. Plasmas 11, 2928 (2004). E. Henestroza, S. Eylon, P. K. Roy, S. S. Yu, A. Anders, F. M. Bieniosek, W. G. Greenway, B. G. Logan, R. A. MacGill, D. B. Shuman, D. L. Vanecek, W. L. Waldron, W. M. Sharp, T. L. Houck, R. C. Davidson, P. C. Efthimion, E. P. Gilson, A. B. Sefkow, D. R. Welch, D. V. Rose, and C. L. Olson, "Design and characterization of a neutralized-transport experiment for heavy-ion fusion," Phys. Rev. ST Accel. Beams 8, 083501 (2004). S. M. Lund and B. Bukh, "Influence of conducting plate boundary conditions on the transverse envelope equations describing intense ion beam transport," Phys. Rev. ST Accel. Beams 7, 064201 (2004). D. P. Grote, E. Henestroza, and J. W. Kwan, "Design and simulation of a multibeamlet injector for a high-current accelerator,'' Phys. Rev. ST Accel. Beams 6, 014202 (2003). D. R. Welch, T. C. Genoni, D. V. Rose, B. V. Oliver, R. E. Clark, C. L. Olson, and S. S. Yu, "Assisted pinched transport of heavy ion beams in a fusion chamber," Phys. Plasmas 10, 2442 (2003). W. M. Sharp, D. A. Callahan, M. Tabak, S. S. Yu, P. F. Peterson, D. R. Welch, D. V. Rose, and C. L. Olson, "Modeling Chamber Transport for Heavy-Ion Fusion," Fusion Science & Technology 43, 393 (2003). PI Charlotte Froese-Fischer C. Froese Fischer and G. Tachiev, Breit-Pauli energy levels, lifetimes, and transition probabilities for beryllium-like to neon-like sequences, in Atomic Data and Nuclear Data Tables, Vol. 87, No. 1, pp. 1-184 (2004). PI Miguel Furman Ji Qiang, Miguel A. Furman and Robert D. Ryne (LBNL), "Parallel Particle-In-Cell Simulation of Colliding Beams in High Energy Accelerators," LBNL-52600, Nov. 2003; Proc. SC2003, Phoenix, AZ, Nov. 15-21, 2003; http://www.sc-conference.org/sc2003/inter_cal/inter_cal_detail.php?eventid=10694#2 Y. Cai, M. Pivi (SLAC), M.A.
Furman (LBNL), "Buildup of Electron Cloud in the PEP-II Particle Accelerator in the Presence of a Solenoid Field and with Different Bunch Pattern," SLAC-PUB-10164, LBNL-54689, Sep 2003; http://prst-ab.aps.org/PRSTAB/v7/i2/e024402.
R. Cimino (LNF-INFN-Frascati and CERN), I.R. Collins (CERN), M.A. Furman (LBNL), M. Pivi (SLAC), F. Ruggiero (CERN), G. Rumolo (GSI Darmstadt), and F. Zimmermann (CERN), "Can Low Energy Electrons Affect High Energy Physics Accelerators?," CERN-AB-2004-012 (ABP), LBNL-54594, SLAC-PUB-10350, February 9, 2004; Phys. Rev. Lett. 93, 014801 (2004).
Ji Qiang, Miguel A. Furman and Robert D. Ryne (LBNL), "A Parallel Particle-In-Cell Model for Beam-Beam Interaction in High Energy Ring Colliders," LBNL-54598, May 2004; J. Comp. Phys. 198, Issue 1, 20 July 2004, Pages 278-294; http://dx.doi.org/doi:10.1016/j.jcp.2004.01.008

PI Alan Garfinkel
Omichi, C, Lamp, ST, Lin, SF, Yang, J, Baher, A, Zhou, S, Attin, M, Lee, MH, Karagueuzian, HS, Kogan, B, Qu, Z, Garfinkel, A, Chen, PS and Weiss, JN (2004). "Intracellular Ca Dynamics in Ventricular Fibrillation." Am J Physiol Heart Circ Physiol 286: H1836-44.
Weiss, JN, Chen, PS, Wu, TJ, Siegerman, C and Garfinkel, A (2004). "Ventricular Fibrillation: New Insights into Mechanisms." Ann N Y Acad Sci 1015: 122-132.
Xie, F, Qu, Z, Yang, J, Baher, A, Weiss, JN and Garfinkel, A (2004). "A simulation study of the effects of cardiac anatomy in ventricular fibrillation." J Clin Invest 113(5): 686-93.

PI Bruce Garrett
J.E. Jaffe, M. Dupuis, M. Gutowski, First-Principle Study of Band Offsets in alpha-Cr2O3/alpha-Fe2O3(0001) Interfaces, Phys. Rev. B, 69, 205106 (2004).
I.N. Yakovkin, M. Gutowski, The SrTiO3/Si(001) Epitaxial Interface: A Density Functional Theory Study, Phys. Rev. B, accepted for publication.
Sotiris S. Xantheas and Edoardo Aprà, J. Chem. Phys. 120, 823 (2004).

PI Ahmed Ghoniem
M. Marzouk and A.F. Ghoniem. K-means clustering for partition and dynamic load balance of parallel hierarchical N-body simulation. Submitted to Journal of Computational Physics.
Y.M. Marzouk. Vorticity structure and evolution in a transverse jet with new algorithms for scalable particle simulation. PhD Thesis, MIT, 2004.
Y.M. Marzouk and A.F. Ghoniem. Vorticity transformation mechanisms and mixing in a transverse jet. To appear at the 57th APS Annual Meeting of the Division of Fluid Dynamics, Seattle, WA, November 2004.
D. Wee, Y. Marzouk, and A.F. Ghoniem. K-means clustering and diffusive interpolation for high-resolution vortex particle methods. To appear at the 57th APS Annual Meeting of the Division of Fluid Dynamics, Seattle, WA, November 2004.
D. Wee, Y.M. Marzouk, and A.F. Ghoniem. Lagrangian simulation of a jet in crossflow at a finite Reynolds number. To appear at the 43rd AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, January 2005.
Y.M. Marzouk and A.F. Ghoniem. Vorticity formulation for an actuated jet in crossflow. 42nd AIAA Aerospace Sciences Meeting, Reno, NV, January 2004.

PI Robert Glaeser
C. Yang, E.G. Ng, and P.A. Penczek, "Matrix-free Constructions of Circulant and Block Circulant Preconditioners." To appear in Numerical Linear Algebra and Applications (2004).
C. Yang, E.G. Ng and P.A. Penczek, "Unified 3-D Structure and Projection Orientation Refinement Using Quasi-Newton Algorithm." To appear in Journal of Structural Biology (2004).

PI James Glimm
S. Dutta, E. George, J. Glimm, J. Grove, H. Jin, T. Lee, X. Li, D.H. Sharp, K. Ye, Y. Yu, Y. Zhang, and M. Zhao. Shock wave interactions in spherical and perturbed spherical geometries. Nonlinear Analysis, Elsevier, 2004.
S. Dutta, E. George, J. Glimm, X.L. Li, A. Marchese, Z.L. Xu, Y.M. Zhang, J.W. Grove and D.H. Sharp. Numerical methods for the determination of mixing. Laser and Particle Beams, 21: 437-442.
L. Li, J. Glimm, and X.-L. Li. All isomorphic distinct cases for multi-component interfaces in a block. J. Comp.
Appl. Mathematics, 152: 236-276, 2003.
J. Glimm, J.W. Grove, Y. Kang, T. Lee, X. Li, D.H. Sharp, Y. Yu, K. Ye and M. Zhao. Errors in numerical solutions of spherically symmetric shock physics problems. Contemporary Mathematics, 2004. University at Stony Brook Preprint Number SB-AMS-04-03, Los Alamos National Laboratory number LA-UR-04-0713.
J. Glimm, H. Jin, M. Laforest, F. Tangerman, and Y. Zhang. A two pressure numerical model of two fluid mixing. SIAM J. Multiscale Model. Simul., 1: 458-484, 2003.
S. Dutta, J. Glimm, J.W. Grove, D.H. Sharp, and Y. Zhang. Error comparison in tracked and untracked spherical simulations. Computers and Mathematics with Applications, 2003, accepted. University at Stony Brook Preprint Number AMS-03-10.
E. George and J. Glimm. Self similarity of Rayleigh-Taylor mixing rates. Phys. Fluids, 2004, submitted. University at Stony Brook Preprint Number AMS-04-05.
J. Glimm, J.W. Grove, Y. Kang, T. Lee, X. Li, D.H. Sharp, Y. Yu, K. Ye and M. Zhao. Statistical Riemann problems and a composition law for errors in numerical solutions of shock physics problems. SISC, 2003, in press. University at Stony Brook Preprint Number SB-AMS-03-11, Los Alamos National Laboratory number LA-UR-03-2921.
H. Jin, X.F. Liu, T. Lu, B. Cheng, J. Glimm, and D.H. Sharp. Rayleigh-Taylor mixing rates for compressible flow. Phys. Fluids, 2004, submitted. University at Stony Brook Preprint Number AMS-04-06, Los Alamos National Laboratory number LA-04-1384.
Y. Zhang, P. Drake, J. Glimm, J. Grove, and D.H. Sharp. Radiation coupled front tracking simulations for laser driven shock experiments. J. Nonlinear Analysis, 2004, submitted. Los Alamos National Laboratory number LA-UR-04-2381.

PI Balasubraman Govindasamy
P.B. Duffy, B. Govindasamy, J. Iorio, J. Milovich, K. Taylor, M. Wehner, and S. Thompson, 2003: High Resolution Simulations of Global Climate, Part 1: Present Climate. Climate Dynamics, 21, 371-390.
B. Govindasamy and P.B. Duffy, 2003: High Resolution Simulations of Global Climate, Part 2: Effects of Increased Greenhouse Gases. Climate Dynamics, 21, 391-404.
Iorio, J., P. Duffy, B. Govindasamy, S. Thompson, 2004: Effects of increased resolution on the simulation of daily precipitation statistics in the US. Climate Dynamics (in press).

PI Stephen Gray
Theoretical study of dielectrically coated metallic nanowires, J.M. Oliva and S.K. Gray, Chem. Phys. Lett. 379, 325-331 (2003).
Optical scattering from isolated metal nanoparticles and arrays, G.A. Wurtz, J.S. Im, S.K. Gray, and G.P. Wiederrecht, J. Phys. Chem. B 107, 14191-14198 (2003).
The equilibrium constants for molecular hydrogen adsorption in carbon nanotubes based on iteratively determined nano-confined bound states, T. Lu, E.M. Goldfield, and S.K. Gray, J. Theor. Comp. Chem. 2, 621-626 (2003).
Quantum states of molecular hydrogen and its isotopes in single-walled carbon nanotubes, T. Lu, E.M. Goldfield, and S.K. Gray, J. Phys. Chem. B 107, 12989-12995 (2003).
Quantum wave packet and quasiclassical trajectory studies of OH + CO: Influence of the reactant channel well on thermal rate constants, D.M. Medvedev, S.K. Gray, E.M. Goldfield, M.J. Lakin, D. Troya, and G.C. Schatz, J. Chem. Phys. 120, 1231-1238 (2004).
A new expression for the direct quantum mechanical evaluation of the thermal rate constant, D.M. Medvedev and S.K. Gray, J. Chem. Phys. 120, 9060-9070 (2004).
Quantum dynamics of vibrationally activated OH-CO reactant complexes, Y. He, E.M. Goldfield, and S.K. Gray, J. Chem. Phys., in press (2004).
Surface plasmons at single nanoholes in Au-films, L. Yin, V.K. Vlasko-Vlasov, A. Rydh, J. Pearson, U. Welp, S.-H. Chang, S.K. Gray, G.C. Schatz, D.E. Brown, and C.W. Kimball, Appl. Phys. Lett. 85, 467 (2004).
Controlled spatiotemporal excitation of metal nanoparticles with femtosecond pulses, T.W. Lee and S.K. Gray, Phys. Rev. B, submitted (2004).
PI Chris Greene
"Photofragmentation of the H3 molecule, including Jahn-Teller coupling effects," V. Kokoouline, C.H. Greene, Phys. Rev. A 69 (3): Art. No. 032711, March 2004.
"Dissociative recombination of polyatomic molecules: a new mechanism," C.H. Greene, V. Kokoouline, Phys. Scripta T110, 178 (2004).
"Triatomic dissociative recombination theory: Jahn-Teller coupling among infinitely many Born-Oppenheimer surfaces," V. Kokoouline and C.H. Greene, Faraday Discussions 127, 413 (2004).
Two Body: "Atom-Molecule Laser Fed by Stimulated Three-Body Recombination," B. Borca, J.W. Dunn, V. Kokoouline, C.H. Greene, Phys. Rev. Lett. 91, 070404 (2003).
R-Matrix: "Electron-molecule scattering calculations in a 3D finite element R-matrix approach," S. Tonzani, C.H. Greene, J. Chem. Phys., in press (2004).

PI Keith Gubbins
F.R. Hung, G. Dudziak, M. Sliwinska-Bartkowiak and K.E. Gubbins, Freezing/Melting Behavior within Carbon Nanotubes, Mol. Phys. 102 (2004), 223-234.
M. Sliwinska-Bartkowiak, F.R. Hung, E.E. Santiso, B. Coasne, F.R. Siperstein and K.E. Gubbins, Effect of Confinement on Freezing in Cylindrical Pores, Adsorption, accepted (2004).
B. Coasne, K.E. Gubbins, F.R. Hung and M. Sliwinska-Bartkowiak, Freezing and Melting of Binary Mixtures Confined in a Nanopore, Mol. Phys., accepted (2004).
B. Coasne, K.E. Gubbins, F.R. Hung, J. Czwartos and M. Sliwinska-Bartkowiak, Freezing of Mixtures Confined in a Slit Nanopore, Adsorption, submitted (2004).
Thermodynamic and transport properties of water confined in carbon nanopores; A. Striolo, A.A. Chialvo, P.T. Cummings, and K.E. Gubbins, Water Adsorption in Carbon-Slit Nanopores, Langmuir, 19 (2003) 8583-8591.
A. Striolo, K.E. Gubbins, A.A. Chialvo, and P.T. Cummings, Simulated Water Adsorption Isotherms in Carbon Nanopores, Molecular Physics, 102 (2004) 243.
A. Striolo, K.E. Gubbins, A.A. Chialvo, and P.T. Cummings, Effect of Pore Connectivity on Water Adsorption Isotherms in Non-Activated Graphitic Nanopores, Adsorption, accepted (2004).
A. Striolo, P.K. Naicker, A.A. Chialvo, P.T. Cummings, and K.E. Gubbins, Simulated Water Adsorption Isotherms in Hydrophilic and Hydrophobic Cylindrical Nanopores, Adsorption, submitted (2004).

PI Rajan Gupta
Mass dependence of the hairpin vertex in quenched QCD, Stephen R. Sharpe, Phys. Rev. D69 (2004) 034504.
Perturbative matching of staggered four-fermion operators with hypercubic fat links, Weonjong Lee, Stephen Sharpe, Phys. Rev. D68 (2003) 054510.
Unphysical Operators in Partially Quenched QCD, Stephen R. Sharpe, Ruth S. Van de Water, Phys. Rev. D69 (2004) 054027.
The phase diagram of twisted mass lattice QCD, Stephen R. Sharpe, Jackson M. S. Wu, hep-lat/0407025.
Light pseudoscalar decay constants, quark masses, and low energy constants from three-flavor lattice QCD, MILC Collaboration: C. Aubin, C. Bernard, C. DeTar, Steven Gottlieb, E.B. Gregory, U.M. Heller, J.E. Hetrick, J. Osborn, R. Sugar, D. Toussaint, hep-lat/0407028.

PI Maciej Gutowski
S.A. Chambers, T. Droubay, T.C. Kaspar, M. Gutowski, Experimental Determination of the Valence Band Maximum for SrTiO3 and TiO2 Anatase by X-ray and Ultraviolet Photoemission, J. Vac. Sci. Technol. B (Proceedings of the 31st Conference on the Physics and Chemistry of Semiconductor Interfaces (PCSI)), 22, 2205 (2004).
S.A. Chambers, T. Droubay, T.C. Kaspar, M. Gutowski, M. van Schilfgaarde, Accurate Valence Band Maximum Determination for SrTiO3(001), Surf. Sci. Lett., 554, 81 (2004).
J.E. Jaffe, M. Gutowski, Differences and Similarities between ZrO2 and HfO2 from the Monoclinic to the Cotunnite Phase, Phys. Rev. B, submitted for publication.
J.E. Jaffe, M. Dupuis, M. Gutowski, First-Principle Study of Band Offsets in alpha-Cr2O3/alpha-Fe2O3(0001) Interfaces, Phys. Rev. B, 69, 205106 (2004).
I.N. Yakovkin, M.
Gutowski, The SrTiO3/Si(001) Epitaxial Interface: A Density Functional Theory Study, Phys. Rev. B, accepted for publication.
M. Gutowski, T. Autrey, Computational Studies of Boron/Nitrogen and Aluminum/Nitrogen Compounds for Chemical Hydrogen Storage, Prepr. Pap. - Am. Chem. Soc., Div. Fuel Chem. 49, 275 (2004).
T. Autrey, A. Gutowska, L. Li, J. Linehan, M. Gutowski, Chemical Hydrogen Storage in Nanostructured Materials. Control of Hydrogen Release and Reactivity from Ammonia Borane Complexes, Prepr. Pap. - Am. Chem. Soc., Div. Fuel Chem. 49, 150 (2004).
M. Haranczyk, M. Gutowski, Fluorine-Substituted Phenols as Probes to Study Intermolecular Proton Transfer Induced by Excess Electron Attachment to Uracil-Phenol Complexes, Internet Electronic Journal of Molecular Design (IEJMD), 3, 368 (2004), http://www.biochempress.com
I. Dabkowska, J. Rak, M. Gutowski, J.M. Nilles, S.T. Stokes, K.H. Bowen, Barrier-Free Intermolecular Proton Transfer Induced by Excess Electron Attachment to the Complex of Alanine with Uracil, J. Chem. Phys., 120, 6064 (2004).
M. Haranczyk, I. Dabkowska, J. Rak, M. Gutowski, J.M. Nilles, S.T. Stokes, D. Radisic, K.H. Bowen, Excess Electron Attachment Induces Barrier-Free Proton Transfer in Anionic Complexes of Thymine and Uracil with Formic Acid, J. Phys. Chem. B, 108, 6919 (2004).
I. Dabkowska, J. Rak, M. Gutowski, J.M. Nilles, S.T. Stokes, D. Radisic, K.H. Bowen, Barrier-Free Proton Transfer in Anionic Complex of Thymine with Glycine, Phys. Chem. Chem. Phys., 6, 4351 (2004).
M. Haranczyk, J. Rak, M. Gutowski, D. Radisic, S.T. Stokes, J.M. Nilles, K.H. Bowen, Effect of Hydrogen Bonding on Barrier-Free Proton Transfer in Anionic Complexes of Uracil with Weak Acids: (U...HCN)- versus (U...H2S)-, Isr. J. Chem. (Joshua Jortner special issue), 44, issue no. 1-2 (2004).
I. Dabkowska, M. Gutowski, J. Rak, Interaction with Glycine Increases the Stability of a Mutagenic Tautomer of Uracil. A Density Functional Theory Study, J. Am. Chem. Soc., submitted for publication.
J.H. Miller, A. Aceves-Gaona, M.B. Ernst, M. Haranczyk, M.S. Gutowski, E.R. Vorpagel, M. Dupuis, Molecular Energetics of Clustered Damage Sites, Radiat. Res., special issue on the 3rd International Workshop on Space Radiation Research, 2004, Port Jefferson, NY, submitted for publication.
M. Haranczyk, M. Gutowski, Valence and Dipole-Bound Anions of the Most Stable Tautomers of Guanine, J. Am. Chem. Soc., submitted for publication.
R. Bachorz, M. Haranczyk, I. Dabkowska, J. Rak, M. Gutowski, Anion of the Formic Acid Dimer as a Model for Intermolecular Proton Transfer Induced by Excess Electron Attachment, J. Chem. Phys., submitted for publication.

PI Stephan Haas
"Quantum Antiferromagnetism in Quasicrystals," S. Wessel, A. Jagannathan, and S. Haas, Phys. Rev. Lett. 90, 177205 (2003).
"Adaptive Design of Nano-Scale Dielectric Structures for Photonics," Y. Chen, Y. Rong, W. Li, S. Haas, and A.F.J. Levi, J. Appl. Phys. 94, 6065 (2003).
JAP 95, 1420 (2004).
PRL 92, 157202 (2004).
PRB, accepted (2004); cond-mat/0311397.
PRL, accepted (2004); cond-mat/0403375.
APL, accepted (2004); cond-mat/0402346.

PI Salman Habib
The Semiclassical Regime of the Chaotic Quantum-Classical Transition, Benjamin D. Greenbaum, Salman Habib, Kosuke Shizume, and Bala Sundaram, Phys. Rev. Lett. (submitted); quant-ph/0401174.

PI Edward Hamilton
Nonadiabatic alignment of asymmetric top molecules: Rotational revivals, M. Poulsen, E. Peronne, H. Stapelfeldt, C. Bisgaard, S. Viftrup, E. Hamilton, and T. Seideman, J. Chem. Phys. 121, 783 (2004).
Nonadiabatic laser induced alignment of iodobenzene molecules, M. Poulsen, E. Peronne, H. Stapelfeldt, C. Bisgaard, E. Hamilton, and T. Seideman, Phys. Rev. A, submitted.

PI Bruce Harmon
"Structure and Stability of the Si(105) surface," C.V. Ciobanu, V.B. Shenoy, C.Z. Wang, and K.M. Ho, Surf. Sci. Lett. 544, L715 (2003).
"Self-assembly of steps on Si(113) surfaces: an atomistic perspective," C. V. Ciobanu, D. T. Tambe, V.
B. Shenoy, C. Z. Wang, and K. M. Ho, Phys. Rev. B 68 (Rapid Commun.), 201302 (2003).
"Heat-induced transformation of nanodiamond into a tube-shaped fullerene: A molecular dynamics simulation," Gun-Do Lee, C.Z. Wang, Jaejun Yu, Euijoon Yoon, K.M. Ho, Phys. Rev. Lett. 91, 265701.
"Medium-Sized silicon oxide clusters by Si3O3-ring assembly," W.C. Lu, C.Z. Wang and K.M. Ho, Chem. Phys. Lett. 378, 225 (2003).
"Melting of small Sn clusters by ab initio molecular dynamics simulations," F.-C. Chuang, C.Z. Wang, S. Ogut, James R. Chelikowsky, and K.M. Ho, Phys. Rev. B 69, 165408 (2004).
"Core energy and Peierls stress of screw dislocation in Molybdenum: a periodic cell tight-binding study," Ju Li, C.Z. Wang, J.-P. Chang, W. Cai, V. Bulatov, K.M. Ho, and S. Yip, Phys. Rev. B (to appear).
"Impact of Interface Relaxation on Nanoscale corrugation in Pb/Si(111) Islands," Z.L. Chan, C.Z. Wang, M. Hupalo, M.C. Tringides, W.C. Lu, and K.M. Ho, Phys. Rev. Lett. (submitted).
"Ab initio Molecular Dynamics Simulation of liquid Al_xGe_1-x Alloys," Songyou Wang, C.Z. Wang, F.C. Chuang, J.R. Morris, and K.M. Ho, Phys. Rev. B (submitted).
"An ab initio calculation of the structure and energies of {1012} twin boundaries in Zr, Ti, and Mg," J.R. Morris, Y.Y. Ye and M.H. Yoo, to appear in Phil. Mag.
"Ab initio calculation of bulk and defect properties of ductile rare-earth intermetallic compounds," J.R. Morris, Y.Y. Ye, Y.B. Lee, B.N. Harmon, K.A. Gschneidner, Jr., and Alan M. Russell, Acta Mat. 52, 4849-4857 (2004).
Fabrication of photonic band gap crystals using microtransfer molded templates, W.Y. Leung, H. Kang, K. Constant, D. Cann, C.-H. Kim, R. Biswas, M.M. Sigalas, and K.-M. Ho, J. Appl. Phys. 93, 5866.
Enhanced complete photonic band gap in the optimized planar diamond structure, R. Biswas, I. El-Kady, and K.-M. Ho, Photonics and Nanostructures 1, 15 (2003).
Lattice symmetry applied in transfer matrix methods for photonic crystals, L.L. Lin, Z-Y. Li, K.-M. Ho, J. Appl. Phys. 94, 811 (2003).
Application of structural symmetries in the plane-wave based transfer matrix method for three dimensional photonic crystal waveguides, Phys. Rev. B 68, 245117 (2003).

PI Robert Harrison
G.G. Maisuradze, D.L. Thompson, A.F. Wagner, and M. Minkoff, "Interpolating Moving Least Squares Methods for Fitting Potential Energy Surfaces: Detailed Analysis of One-Dimensional Applications," Journal of Chemical Physics (also MCS Preprint No. 1083), Vol. 119, No. 19, 10002-10014, November 15, 2003.
A. Kawano, Y. Guo, D.L. Thompson, A.F. Wagner, and M. Minkoff, "Improving the Accuracy of Interpolated Potential Energy Surfaces by Using an Analytical Zeroth-Order Potential Function," Journal of Chemical Physics (also MCS Preprint No. 1137), Vol. 120, No. 14, 6414-6422, April 8, 2004.
Y. Guo, A. Kawano, D.L. Thompson, A.F. Wagner, and M. Minkoff, "Interpolating moving least-squares methods for fitting potential energy surfaces: Applications to classical dynamics calculations," Journal of Chemical Physics (also MCS Preprint No. 1188), to appear, Vol. 121, No. 9, Sept. 1, 2004.
G.G. Maisuradze, A. Kawano, D.L. Thompson, A.F. Wagner, and M. Minkoff, "Interpolating Moving Least-Squares Methods for Fitting Potential Energy Surfaces: Analysis of an Application to a Six-Dimensional System," Journal of Chemical Physics (also MCS Preprint No. 1188), resubmitted with revisions.
Kurt Sattelmeyer, "Use of 2h and 3h-p-like Coupled-Cluster Tamm-Dancoff Approaches for the Equilibrium Properties of Ozone," Chem. Phys. Lett., in press.
R. Olson, M.S. Gordon, K. Christe and D. Dixon, "[N5]+[N5]-, ...," J. Phys. Chem., submitted for publication.
Michael Schuurman, et al., "The heats of formation of HNCO and NCO revisited: Definitive ab initio results via focal point analysis," submitted.

PI Wick Haxton
M. Bender, G.F. Bertsch and P-H.
Heenen, "Correlation energies by the generator coordinate method: computational aspects for quadrupolar deformations," Physical Review C69, 034340 (2004).
K. Hagino, G.F. Bertsch and P-G. Reinhard, "Quadrupole correlation energy by the generator coordinate method," Physical Review C68, 024306 (2003).

PI Martin Head-Gordon
Are both symmetric and buckled dimers on Si(100) minima? Density functional and multireference perturbation theory calculations, Y. Jung, Y. Shao, M.S. Gordon, D.J. Doren and M. Head-Gordon, J. Chem. Phys. 119, 10917-10923 (2003).
Failure of time-dependent density functional theory for long-range charge-transfer excited states: the zincbacteriochlorin-bacteriochlorin and bacteriochlorophyll-spheroidene complexes, A. Dreuw and M. Head-Gordon, J. Am. Chem. Soc. 126, 4007-4016 (2004).
Aromaticity of 4-membered ring 6 pi-electron systems: N2S2 and Li2C4H4, Y. Jung, T. Heine, P.v.R. Schleyer and M. Head-Gordon, J. Am. Chem. Soc. 126, 3132-3138 (2004).
The spatial size of Fe(II) spin states and the spin-dependence of the structure of Fe(II)-porphyrins, J.M. Ugalde, B. Dunietz, A. Dreuw, M. Head-Gordon and R.J. Boyd, J. Phys. Chem. B 108, 4653-4657.
What is the nature of the long bond in the (TCNE)2(2-) pi-dimer?, Y. Jung and M. Head-Gordon, Phys. Chem. Chem. Phys. 6, 2008-2011 (2004).
Initiation of electro-oxidation of CO on Pt-based electrodes at full coverage conditions simulated by ab initio electronic structure calculations, B.D. Dunietz, N. Markovic, G. Somorjai, P.N. Ross and M. Head-Gordon, J. Phys. Chem. B 108, 9888-9892 (2004).
Antes, I., D. Chandler, H. Wang and G. Oster, "The unbinding of ATP from F1-ATPase," Biophys. J., 85, 695-706 (2003).
Maibaum, L. and D. Chandler, "A coarse-grained model of water confined in a hydrophobic tube," J. Phys. Chem. B, 107, 1189-1193 (2003).
Maibaum, L., A.R. Dinner and D. Chandler, "Micelle Formation and the Hydrophobic Effect," J. Phys. Chem. B 108, 6778-6781 (2004).
Jung, Y., J.P. Garrahan and D. Chandler, "Excitation lines and the breakdown of Stokes-Einstein relations in super-cooled liquids," Phys. Rev. E, 69, 061205.1-061205.7 (2004).

PI Teresa Head-Gordon
N. Fawzi, V. Chubukov, L.A. Clark, S. Brown & T. Head-Gordon (2004). Influence of denatured and intermediate states of folding on protein aggregation. Protein Science, submitted.
S. Brown & T. Head-Gordon (2004). Intermediates in the folding of proteins L and G. Protein Sci. 13, 958-970.
S. Crivelli & T. Head-Gordon (2004). A new load balancing strategy for the solution of dynamical large tree search problems using a hierarchical approach. IBM R&D Journal 48, 153-160.
S. Brown & T. Head-Gordon (2003). Cool-walking: a new Markov chain Monte Carlo sampling method. J. Comp. Chem. PAK Symposium 24, 68-76.
T. Head-Gordon & S. Brown (2003). Minimalist models for protein folding and design. Curr. Opin. Struct. Biol. 13, 160-167.
S. Brown, N. Fawzi, & T. Head-Gordon (2003). Coarse-grained sequences for protein folding and design. Proc. Natl. Acad. Sci. 100, 10712-10717.
E. Eskow, B. Bader, R. Byrd, S. Crivelli, T. Head-Gordon, V. Lamberti and R. Schnabel (2003). An optimization approach to the problem of protein structure prediction. Submitted to Math Programming Series B.

PI Eric Held
Free-boundary simulations of DIII-D plasmas with the NIMROD code, S.E. Kruger, C.R. Sovinec, D.D. Schnack, E.D. Held, accepted in Computer Physics Communications, 2004.
Nonlocal Closures for Plasma Fluid Simulations, E.D. Held, J.D. Callen, C.C. Hegna, C.R. Sovinec, T.A. Gianakon, and S.E. Kruger, Physics of Plasmas, 11 (2419), May 2004.
Unified Form for Parallel Ion Viscous Stress in Magnetized Plasmas, E.D. Held, Physics of Plasmas, 10 (4708), December 2003.
Conductive Electron Heat Flow along an Inhomogeneous Magnetic Field, E.D. Held, J.D. Callen, and C.C. Hegna, Physics of Plasmas, 10 (3933), October 2003.
PI Brian Hingerty
"Solution Structure of an O6-[4-oxo-4-(3-Pyridyl)butyl]guanine Adduct in an 11mer DNA Duplex: Evidence for Formation of a Base Triplex," L.A. Peterson, C. Vu, B.E. Hingerty, S. Broyde and M. Cosman, Biochemistry 42, 13134-13144 (2003).
"Structural and Stereoisomer Effects of Model Estrogen Quinone-Derived DNA Adducts: N6-(2-Hydroxyestron-6(alpha,beta)-yl)-2'-deoxyadenosine and N2-(2-Hydroxyestron-6(alpha,beta)-yl)-2'-deoxyguanine," L. Wang, B.E. Hingerty, R. Shapiro and S. Broyde, Chemical Res. in Tox. 17, 311-324 (2004).

PI Justin Hnilo
Boyle, J., D. Williamson, R. Cederwall, M. Fiorino, J. Hnilo, J. Olsen, T. Phillips, G. Potter, S. Xie, 2004: Diagnosis of CAM2 in NWP configuration. J. Geophys. Res., submitted.
Phillips, T.J., G. Potter, D.L. Williamson, R. Cederwall, J. Boyle, M. Fiorino, J. Hnilo, J. Olson, J.J. Yio, and S. Xie, 2004: Evaluating parameterizations in general circulation models: Climate simulation meets weather prediction. Bull. Amer. Meteor. Soc., 84, accepted.
Xie, S., M. Zhang, J. Boyle, R.T. Cederwall, G.L. Potter, and W. Lin, 2004: Impact of a revised convection triggering mechanism on CAM2 model simulations: Results from short-range forecasts. J. Geophys. Res., in press.
Williamson, D.L., J. Boyle, R. Cederwall, M. Fiorino, J. Hnilo, J. Olson, T. Phillips, G. Potter, and S. Xie, 2004: Moisture and temperature budgets at the ARM Southern Great Plains site in forecasts with the CAM2. J. Geophys. Res., 84, submitted.

PI Hong Im
Wang, Y. and Trouve, A. (2004), "Artificial acoustic stiffness reduction in fully compressible, direct numerical simulation of combustion," Combust. Theory Modelling, v. 8, pp. 633-660.
Wang, Y. and Rutland, C.J. (2004), "Effects of temperature and equivalence ratio on the ignition of n-heptane fuel droplets in turbulent flow," Proc. Combust. Inst., v. 30, in press.
Yoo, C.S. and Im, H.G. (2004), "Transient dynamics of edge flames in a laminar nonpremixed hydrogen-air counterflow," Proc. Combust. Inst., v. 30, in press.
Sankaran, R., Im, H.G., Hawkes, E.R. & Chen, J.H. (2004), "The effects of nonuniform temperature distribution on the ignition of a lean homogeneous hydrogen-air mixture," Proc. Combust. Inst., v. 30, in press.

PI Bing Jap
P.J. Walian and B.K. Jap (2003), "A new era in membrane channel biology," Structure 11, 1467-1468.
P.J. Walian, T.A. Cross and B.K. Jap (2004), "Structural genomics of membrane proteins," Genome Biology 5, 215.
Y.D. Kwon, I. Nagy, P.D. Adams, W. Baumeister and B.K. Jap (2004), "Crystal structures of the Rhodococcus proteasomes with and without its pro-peptide: Implications for the role of the pro-peptide in proteasome assembly," J. Mol. Biol. 335, 233.

PI Stephen Jardin
E.A. Lazarus et al., "Simulation of A Discharge for the NCSX Stellarator," Fusion Science and Tech. 46, 209 (2004).
P.R. Garabedian et al., "Reactors with Stellarator Stability and Tokamak Transport," to appear in Fusion Science and Tech.
L.P. Ku et al., "A Compact Quasi-axisymmetric Stellarator Reactor," Proc. of 20th IEEE/NPSS Symposium on Fusion Engineering, San Diego, CA (2003).
L.P. Ku et al., "Development of Compact Quasi-Axisymmetric Stellarator Configurations," 14th International Stellarator Workshop, Greifswald, Germany (2003).
J. Lyon et al., "Optimization of Stellarator Reactor Parameters," to appear in Fusion Science and Tech.
R. Samtaney, Suppression of the Richtmyer-Meshkov instability in the presence of a magnetic field. Physics of Fluids, Volume 15, No. 8, pp. L53-56, 2003.
R. Samtaney, S.C. Jardin, P. Colella and D.F. Martin, 3D Adaptive mesh refinement simulation of pellet injection in tokamaks. Computer Physics Communications, to appear 2004. Also PPPL Report No. 3891, October 2003.
V. Wheatley, D.I. Pullin and R. Samtaney, Regular refraction of a MHD shock at an oblique planar density interface. Journal of Fluid Mechanics, to appear 2004.
J. A. Breslau, S. C. Jardin, and W.
Park, Simulation Studies of the Role of Reconnection in the Current Hole Experiments in JET, Phys. Plasmas, 10, 1665 (2003).
W. Park, J.A. Breslau, J. Chen, et al., Nonlinear Simulation Studies of Tokamaks and STs, Nuclear Fusion, 43, 483 (2003).
S. Jardin, "A Triangular Finite Element with First-Derivative Continuity Applied to Fusion MHD Applications," J. Comp. Phys., 2004.
H.R. Strauss, "Nonlinear magnetohydrodynamics in the Dag confinement configuration," Phys. Plasmas 11(3): 1236-1239, March 2004.
G.W. Hammett, S.C. Jardin, and B.C. Stratton, "Non-existence of normal tokamak equilibria with negative central current," Physics of Plasmas 10, 4048-4052, October 2003.
V.S. Lukin and S.C. Jardin, "Magnetohydrodynamic modeling of two-dimensional reconnection in the Magnetic Reconnection Experiment," Physics of Plasmas, 10: 3131-3138, August 2003.

PI Julius Jellinek
Theoretical Determination of Electron Binding Energy Spectra of Anionic Magnesium Clusters, P.H. Acioli and J. Jellinek, Eur. Phys. J. D 24, 27-32 (2003).
Collisionless Fragmentation of Non-Rotating Nin (n=4-14) Clusters: A Molecular Dynamics Study, H. Avci, M. Civi, Z.B. Guvenc, and J. Jellinek, J. Phys. B: At. Mol. Opt. Phys. 36, 3487-3507 (2003).
Metal Clusters and Metallicity: The Paradigm of Magnesium, J. Jellinek and P.H. Acioli, in Metal-Ligand Interactions in Molecular-, Nano-, Micro-, and Macro-Systems in Complex Environments, N. Russo, D.R. Salahub, and M. Witko, Eds., Kluwer Academic Publishers, Dordrecht, 2003, pp. 121-152.
Reactivity of the Nin(T) (n=54,55,56) Clusters with D2 (v,j) Molecule: Molecular Dynamics Simulations, S. Ozcelik, Z.B. Guvenc, P. Durmus, and J. Jellinek, Surf. Sci. 566-568, 377-382 (2004).
Structural and Electronic Properties of Small Beryllium Clusters: A Theoretical Study, S. Srinivas and J. Jellinek, J. Chem. Phys. (in press).

PI Chueng-Ryong Ji
Electromagnetic structure of the rho meson in the light front quark model, H.-M. Choi and C.-R. Ji, Phys. Rev. D, in press (2004); hep-ph/0402114.
Investigating the parity of the exotic Theta baryon from the kaon photoproduction, B.-G. Yu, T.-K. Choi and C.-R. Ji, Phys. Rev. C, in press (2004); nucl-th/0312075.
Radiative scalar meson decays in the light-front quark model, M.A. DeWitt, H.-M. Choi and C.-R. Ji, Phys. Rev. D68, 054026 (2003).
Transition form factors between pseudoscalar and vector mesons in light-front dynamics, B. Bakker, H.-M. Choi and C.-R. Ji, Phys. Rev. D67, 113007 (2003).
Molar mass estimate of dark matter from the dark mass distribution measurements, Y. Mishchenko and C.-R. Ji, Phys. Rev. D68, 063503 (2003).

PI Donald Johnson
Schaack, T.K., T.H. Zapotocny, A.J. Lenzen and D.R. Johnson, 2004: Global climate simulation with the University of Wisconsin global hybrid isentropic coordinate model. J. Climate, 17, 2998-3016.

PI Sidney Kahana
A. Pastorello, L. Zampieri, M. Turatto, E. Cappellaro, S. Benetti, W.P.S. Meikle, D. Branch, E. Baron, F. Patat, M. Armstrong, G. Altavilla, M. Salvo, and M. Riello, Low Luminosity Type II Supernovae: Spectroscopic and Photometric Evolution, MNRAS (2004), 347, 74-94.
R.C. Thomas, David Branch, E. Baron, Ken'ichi Nomoto, Weidong Li, Alexei V. Filippenko, On the Geometry of the High-Velocity Ejecta of the Peculiar Type Ia Supernova 2000cx, Ap. J. (2004), 601.
P. Hauschildt and E. Baron, Improved discretization of the wavelength derivative term in CMF operator splitting numerical radiative transfer, A&A (2004), 417, 317-324.
David Branch, R.C. Thomas, E. Baron, D. Kasen, K. Hatano, Ken'ichi Nomoto, Weidong Li, Alexei V. Filippenko, R. Rudy, Direct Analysis of Spectra of the Peculiar Type Ia Supernova 2000cx, ApJ (2004), in press.
E. Baron and P. Hauschildt, Co-moving frame radiative transfer in spherical media with arbitrary velocity fields, A&A (2004), in press.
C. Fransson, P. M. Challis, R. A. Chevalier, A. V. Filippenko, R. P. Kirshner, C. Kozma, D. C. Leonard, T. Matheson, E. Baron, P. Garnavich, B. Leibundgut, P. Lundqvist, R. McCray, N. Panagia, M. M.
Phillips, C. S. J. Pun, B. Schmidt, G. Sonneborn, N. B. Suntzeff, L. Wang, and J. C. Wheeler, Hubble Space Telescope and Ground-Based Observations of SN 1993J and SN 1998S: CNO Processing in the Progenitors, Ap. J., (2004), submitted. D. Branch, E. Baron, R. C. Thomas, D. Kasen, W. Li, and A. V. Filippenko Reading the Spectra of the Most Peculiar Type Ia Supernova 2002cx, PASP, (2004), in press. Travis S. Barman, Peter H. Hauschildt, France Allard, Model Atmospheres for Irradiated Stars in pre- Cataclysmic Variables, ApJ (2004), in press. E. Baron, P. Nugent, D. Branch, and P. Hauschildt, Type II Supernovae as Cosmological Probes: A SEAM Distance to SN 1999em, PRL, submitted. PI Martin Karplus R. Bitetti-Puzer, W. Yang, and M. Karplus, Generalized ensembles serve to improve the convergence of free energy simulations, Chem. Phys. Lett. 377, 633-641 (2003). Q. Cui, G. Li, J. Ma, and M. Karplus, A normal mode analysis of structural plasticity in the biomolecular motor F1-ATPase, J. Mol. Biol. 340, 345-372 (2004). W. Yang, R. Bitetti-Putzer, and M. Karplus, Free energy simulations: Use of reverse cumulative averaging to determine the equilibrated region and the time required for convergence, J. Chem. Phys. 120, 2618-2628 (2004). W. Yang, R. Bitetti-Putzer, and M. Karplus, Chaperoned alchemical free energy simulations: A general method for QM, MM, and QM/MM potentials, J. Chem. Phys. 120, 9450-9453 (2004). A. van der Vaart, J. Ma, and M. Karplus, the Unfolding Action of GroEL on a Protein Substrate, Biophys. J. 87, 562-573 (2004). Y. Q. Gao, W. Yang, R. A. Marcus, and M. Karplus, A model for the cooperative free energy transduction and kinetics of ATP hydrolysis by F1-ATPase, Proc. Natl. Acad. Sci. USA 100, 11339-11344 (2003). R. J. Petrella, and M. Karplus, The role of carbon-donor hydrogen bonds in stabilizing tryptophan conformations, Proteins: Struc. Func. Bio. 54, 716-726 (2004). A. Banerjee, W. Yang, M. Karplus, and G. L. 
Verdine, Disulfide trapping of a repair enzyme interrogating undamaged DNA elucidates recognition of damaged DNA, submitted to Nature. PI Thomas Katsouleas Plasma wakefield acceleration in self-ionized gas or plasmas, Phys. Rev. E 68, 047401, 2003. Meter-Scale Plasma-Wakefield Accelerator Driven by a Matched Electron Beam, Phys. Rev. Lett. 93, 014802 (2004). Plasma Accelerators at the Energy Frontier and on Tabletops, Physics Today 56, 47 (2003). On the Possibility of a Multi-bunch Plasma Afterburner for Linear Colliders, submitted to Phys. Rev. Special Topics Acc. & Beams, 2004. PI Ricky Kendall M.-S. Wu, R. A. Kendall, S. Aluru, ``A Tunable Collective Communication Framework on Cluster of SMPs,'' Proceedings of the IASTED International Conference on Parallel and Distributed Computing and Networks (PDCN 2004), Innsbruck, Austria, February 17-19, pp. 56-63, 2004. J. Bentz and R. A. Kendall, ``Parallelization of general matrix multiply routines using OpenMP,'' in the Proceedings of the Workshop on OpenMP Applications and Tools, WOMPAT 2004, Houston, TX, May 17-18, 2004. M.-S. Wu, R. A. Kendall, and S. Aluru, ``Exploring Collective Communications on a Cluster of SMPs,'' in the Proceedings of the 7th International Conference on High Performance Computing and Grid in Asia Pacific Region, HPCAsia2004, Omiya Sonic City, Tokyo Area, Japan, July 20-22, pp. 114-117, 2004. J. Bentz and R. A. Kendall, ``Parallelization of general matrix multiply routines using OpenMP,'' submitted to Lecture Notes in Computer Science. PI David Keyes S. Bhowmick, L. McInnes, B. Norris, and P. Raghavan, The Role of Multi-Method Linear Solvers in PDE-Based Simulations, Proceedings of the 2003 International Conference on Computational Science and its Applications, ICCSA 2003, Montreal, Canada, May 18-21, 2003. Lecture Notes in Computer Science 2677, pp. 828-839, 2003. Xiaoye S. Li, An Overview of SuperLU: Algorithms, Implementation, and User Interface, Tech.
Report LBNL-53848, Lawrence Berkeley National Laboratory, September 2003. To appear in a special issue of ACM Trans. Math. Software on the Advanced Computational Software Collection. H. MacMillan, T. Manteuffel, and S. McCormick, First-order system least squares and electrical impedance tomography, SIAM J. Numer. Anal. 42 (2004), pp. 461-483. M. Brezina, R. Falgout, S. MacLachlan, T. Manteuffel, S. McCormick, and J. Ruge, Adaptive smoothed aggregation (aSA), SIAM J. Sci. Comp. 25 (2004), pp. 1896-1920. V. Akcelik, J. Bielak, G. Biros, I. Epanomeritakis, A. Fernandez, O. Ghattas, E. Kim, D. O'Hallaron, and T. Tu, High-resolution forward and inverse earthquake modeling on terascale computers, Proceedings of SC2003, Phoenix, AZ, November 2003. G. Biros and O. Ghattas, Inexactness Issues in the Lagrange-Newton-Krylov-Schur method, in Large-Scale PDE-Constrained Optimization, L. Biegler, O. Ghattas, M. Heinkenschloss, and B. van Bloemen Waanders, eds., Springer-Verlag, Lecture Notes in Computational Science and Engineering series, Heidelberg, 2003. B. Hientzsch, Domain decomposition preconditioners for spectral Nedelec elements in two and three dimensions, in Domain Decomposition Methods in Science and Engineering, R. Kornhuber et al., eds., Springer, LNCSE, 2004, pp. 597-604. B.C. Lee, R. Vuduc, J. Demmel, K. Yelick, "Performance models for evaluation and automatic tuning of symmetric sparse matrix-vector multiply," Proceedings of the International Conference on Parallel Processing, Montreal, August 2004 (to appear). E.-J. Im, K. Yelick, R. Vuduc, "SPARSITY: Framework for optimizing sparse matrix-vector multiply," International Journal of High Performance Computing Applications, 18 (1), pp. 135-158, February 2004. A.C. Hindmarsh, P.N. Brown, K.E. Grant, S.L. Lee, R. Serban, D.E. Shumaker, and C.S. Woodward, SUNDIALS: Suite of Nonlinear and Differential/Algebraic Equation Solvers, ACM Transactions on Mathematical Software, submitted, 2003.
PI Kwiseon Kim "The Inverse Band Structure Approach: Find the Atomic Configurations that Have Desired Electronic Properties" by A. Zunger, S.V. Dudiy, K. Kim, W.B. Jones, submitted to the Proceedings of the 27th International Conference on the Physics of Semiconductors (IOP, 2004). "Material Design via Genetic Algorithms for Semiconductor Alloys and Superlattices" by K. Kim, P.A. Graf, W.B. Jones, submitted to the Proceedings of the 27th International Conference on the Physics of Semiconductors (IOP, 2004). PI Sung-Hou Kim Higher Order Ramachandran Maps (2004), Bioinformatics (in press), Choi IG, Kwon J, Kim SH. Local feature frequency profile: A method to measure structural similarity in proteins, Proc. Natl. Acad. Sci. USA 101 (11): 3797-3802, Mar 16, 2004. PI Richard Klein Crockett, R., Colella, P., Fisher, R., Klein, R. I., & McKee, C. F. 2004, J. Comp. Phys., submitted. Krumholz, M. R., McKee, C. F., & Klein, R. I., 2004a, ApJ, 611, 399. Krumholz, M. R., McKee, C. F., & Klein, R. I., 2004b, ApJ, submitted. PI John Klepeis A. J. Williamson, C. Bostedt, L. Pizzagalli, T. van Buuren, T. M. Willey, L. J. Terminello and G. Galli, Probing the Electronic Density of States of Germanium Nanoparticles: A Method for Determining Atomic Structure, Nano Lett. 4, 1041 (2004). A. Puzder, A.J. Williamson, F. Reboredo and G. Galli, Structural Stability and Optical Properties of Nanomaterials with Reconstructed Surfaces, Phys. Rev. Lett. 91, 157405 (2003). D. Prendergast, J.C. Grossman, A.J. Williamson, J.L. Fattebert and G. Galli, Optical properties of silicon nanoparticles in the presence of water: A first principles theoretical analysis, J. Amer. Chem. Soc., in press. A. Puzder, A.J. Williamson and G. Galli, Self-Healing of CdSe Nanoparticles: A First Principles Study, Phys. Rev. Lett. 92, 217401 (2004). E. Draeger, J.C. Grossman, A.J. Williamson and G. Galli, Optical Properties of Silicon Nanoclusters: The role of Synthesis, J. Chem.
Phys. 120, 10807 (2004). E. Draeger, J.C. Grossman, A.J. Williamson and G. Galli, Synthesis dynamics of passivated silicon nanoclusters, Phys. Stat. Solidi B 239, 11 (2003). A. Puzder, A.J. Williamson, J.C. Grossman and G. Galli, Optical Emission of Silicon nanoclusters, J. Amer. Chem. Soc. 125, 2786 (2003). PI Kwok Ko K. Ohmi, M. Tawada, Y. Cai, S. Kamada, K. Oide, and J. Qiang, Phys. Rev. Lett. 92, 214801 (2004). PI Boris Kogan Omichi C, Lamp ST, Lin SF, Yang J, Baher A, Zhou S, Attin M, Lee MH, Karagueuzian HS, Kogan B, Qu Z, Garfinkel A, Chen PS, Weiss JN. Intracellular Ca dynamics in ventricular fibrillation. Am J Physiol Heart Circ Physiol 2004; 286: H1836-H1844. Samade R, Kogan B. The properties of the cardiac cell mathematical model with the Markovian representation of potassium channel gating processes under high pacing rate (Computer simulation study). Proceedings of the International Conference on Mathematics and Engineering Techniques in Medicine and Biological Sciences 2004. Huffaker R, Lamp ST, Weiss JN, Kogan B. Intracellular calcium cycling, early afterdepolarizations, and reentry in simulated long QT syndrome. Heart Rhythm (in press) 2004. PI Joel Koplik Self-affine fronts in self-affine fractures, Phys. Rev. Lett. 92, 014501 (2004), G. Drazer, H. Auradou, J. Koplik and J.-P. Hulin. Microstructure and velocity fluctuations in sheared suspensions, J. Fluid Mech. 511, 237 (2004), G. Drazer, J. Koplik, B. Khusid and A. Acrivos. Wetting and particle absorption in nanoflows, submitted to Phys. Fluids (2004), G. Drazer, J. Koplik, B. Khusid and A. Acrivos. PI Veerabhadra Kotamarthi Kotamarthi, V. R., M. Lazaro, Y.-S. Chang, D. E. James, J. Kuiper, and S. Pulugurtha (2004). Air Quality Impacts of Desert Wind-Blown Dust in a Southwestern U.S. Urban Valley: Preliminary Results from High-Resolution 3-D Model Simulations. Preprints, Sixth Conference on Atmospheric Chemistry: Air Quality in Megacities. CD-ROM, Seattle, WA, AMS, January, 12. Harris, L. and V.
R. Kotamarthi, The characteristics of the Chicago Lake Breeze and its Effects on Trace Particle Transport: Results from an Episodic Event Simulation. Submitted to the Journal of Applied Meteorology, 2004. Kotamarthi, V. R., P. V. Doskey, J. Weinstein, S. Springstien, J. S. Gaffney, and N. A. Marley, Modeling of trace gases measured from the North-Central Mexico forest fire smoke plume over Phoenix, AZ. Submitted to Atmospheric Environment, 2004. Jeffrey S. Gaffney, Nancy A. Marley, Mary M. Cunningham, and V. Rao Kotamarthi, 7Be Measurements in the Houston and Phoenix Urban Areas: An Estimation of Upper Atmospheric Ozone Contributions, Accepted for Publication, AWMA Journal, 2004. PI Steven Krueger Luo, Y., S. K. Krueger, and S. Moorthi, 2004: Cloud Properties Simulated by a Single-Column Model. Part 1: Comparison to Cloud Radar Observations of Cirrus Clouds. Submitted to J. Atmos. Sci., Sep 2003; revised, June 2004. PI Andrew Lacis Q. Ma and R. H. Tipping, A Simple Analytical Parameterization for the Water Vapor Millimeter Wave Foreign Continuum. J. Quant. Spect. Radiat. Transfer 82, 517-531 (2003). J. Boissoles, C. Boulet, R. H. Tipping, A. Brown, and Q. Ma, Theoretical Calculation of the Translation-Rotation Collision-Induced Absorption in N2-N2, O2-O2, and N2-O2 Pairs. J. Quant. Spect. Radiat. Transfer 82, 505-516 (2003). A. P. Mishra, T. K. Balasubramanian, R. H. Tipping, and Q. Ma, Absorption Spectroscopy in Solid Hydrogen: Challenges to Experimentalists and Theorists. J. Mol. Structure 695-696, 103-109 (2004). PI Susan Lamb McDowell, J. C.; Clements, D. L.; Lamb, S. A.; Shaked, S.; Hearn, N. C.; Colina, L.; Mundell, C.; Borne, K.; Baker, A. C.; Arribas, S., Chandra Observations of Extended X-Ray Emission in Arp 220, 2003, ApJ, 591. PI Jodi Lamoureux G.M. Bernstein, B. Jain; "Dark Energy Constraints from Weak Lensing Cross-Correlation Cosmography"; Astrophysical Journal 600 (2004) 17-25. B. Jain, A.
Taylor; "Cross-correlation Tomography: Measuring Dark Energy Evolution with Weak Lensing"; Phys. Rev. Lett. 91 (2003) 141302. M. White, C. Vale; "Simulations of weak gravitational lensing"; astro-ph/0312133. R. Massey, A. Refregier, J. Rhodes; "Probing Dark Matter and Dark Energy with Space-Based Weak Lensing"; astro-ph/0403229. R. Massey, A. Refregier, D. Bacon, R. Ellis; "An Enlarged Cosmic Shear Survey with the William Herschel Telescope"; astro-ph/0404195. R. Massey, A. Refregier, D. Bacon; "Shapelets 'multiple multipole' shear measurement methods"; astro-ph/0408568. R. Massey, A. Refregier; "Polar Shapelets"; astro-ph/0408445. PI Uzi Landman "On the Electronic and Atomic Structures of Small AuN- (N=4-14) Clusters: A Photoelectron Spectroscopy and Density-Functional Study", H. Hakkinen, B. Yoon, U. Landman, Xi Li, H.J. Zhai, L.-S. Wang, J. Phys. Chem. A 107, 6168 (2003). "Small is different: energetic, structural, thermal, and mechanical properties of passivated nanocluster assemblies", U. Landman and W.D. Luedtke, Faraday Discussions of the Chemical Society 125, 1 (2004), Introductory Lecture to the 125th Faraday Discussion at Liverpool, June 2003. "Frictional Forces and Amontons' Law: From the Molecular to the Macroscopic Scale", J. Gao, W.D. Luedtke, D. Gourdon, M. Ruths, J.N. Israelachvili, U. Landman, Feature Article, J. Phys. Chem. B 108, 3410 (2004). Cover article. "Hydrogen in gold nanowires: structure and electronic conductance of dissociative and molecular states", R.N. Barnett, H. Hakkinen, A. Scherbakov, and U. Landman, Nano Letters (2004). PI Jean-Noel Leboeuf Candy J, Waltz RE, Dorland W., "The local limit of global gyrokinetic simulations", Physics of Plasmas, vol. 11, no. 5, May 2004, pp. L25-8. Ernst DR, Bonoli PT, Catto PJ, Dorland W, Fiore CL, Granetz RS, Greenwald M, Hubbard AE, Porkolab M, Redi MH, Rice JE, Zhurovich K,
"Role of trapped electron mode turbulence in internal transport barrier control in the Alcator C-Mod Tokamak", Physics of Plasmas, vol. 11, no. 5, May 2004, pp. 2637-48. Gombosi TI, Powell KG, De Zeeuw DL, Clauer CR, Hansen KC, Manchester WB, Ridley AJ, Roussev II, Sokolov IV, Stout QF, Toth G., "Solution-adaptive magnetohydrodynamics for space plasmas: Sun-to-Earth simulations", Computing in Science & Engineering, vol. 6, no. 2, March-April 2004, pp. 14-35. Publisher: IEEE Comput. Soc, USA. Romanelli M, Bourdelle C, Dorland W., "Effects of high density peaking and high collisionality on the stabilization of the electrostatic turbulence in the Frascati Tokamak Upgrade", Physics of Plasmas, vol. 11, no. 8, Aug. 2004, pp. 3845-53. "The scaling of embedded collisionless reconnection," M. A. Shay, J. F. Drake, M. Swisdak, B. N. Rogers, Phys. Plasmas, Vol. 11, p. 2199, 2004. "The temporal structure of the fast convective flow in the plasma sheet: Comparison between observations and two-fluid simulations," S. Ohtani, M. A. Shay, T. Mukai, J. Geophys. Res., 109, A03210, doi:10.1029/2003JA010002, 2004. L. F. Wanex, V. I. Sotnikov, B. S. Bauer, and J. N. Leboeuf, "Linear analysis of stabilizing mechanisms for the kink instability", Phys. Plasmas 11, 1372-1378 (2004). V. I. Sotnikov, B. S. Bauer, J. N. Leboeuf, P. Hellinger, P. Travnicek and V. Fiala, "Development of global magnetohydrodynamic instabilities in z-pinch plasmas in presence of non-ideal effects", Phys. Plasmas 11, 1897-1907 (2004). P.-A. Gourdain and J.-N. Leboeuf, "Contour dynamics method for solving the Grad-Shafranov equation with applications to high beta equilibria", Phys. Plasmas 11, 4372-4381 (2004). James C. Kniep, Jean-Noel G. Leboeuf, and Viktor K. Decyk, "Gyrokinetic Particle-In-Cell Calculations of Ion Temperature Gradient Driven Turbulence with Parallel Nonlinearity and Strong Flow Corrections", Computer Physics Communications, in press, August 2004. V. I. Sotnikov, B. S. Bauer, J. N. Leboeuf, P.
Hellinger, P. Travnicek and V. Fiala, "Hybrid Simulations of Z-Pinches", Computer Physics Communications, in press, August 2004. H. Karimabadi, P.L. Pritchett, W. Daughton, and D. Krauss-Varban, "Ion-Ion Kink Instability in the Magnetotail: 2. Three-Dimensional Full Particle and Hybrid Simulations and Comparison with Observations," J. Geophys. Res., 108(A11), 1401, doi:10.1029/2003JA010109, 2003. P.L. Pritchett and F.V. Coroniti, "Three-Dimensional Collisionless Magnetic Reconnection in the Presence of a Guide Field," J. Geophys. Res., 109, A01220, doi:10.1029/2003JA009999, 2004. P.L. Pritchett and F.V. Coroniti, "Reconnection-Generated Plasma Flows Interacting with the Near-Earth Plasma Sheet," J. Geophys. Res., submitted, 2004. P.L. Pritchett and F.V. Coroniti, "Onset and Saturation of Guide-Field Magnetic Reconnection," Phys. Plasmas, submitted, 2004. PI Frank Lee "Electric Polarizability of Neutral Hadrons from Lattice QCD", J. Christensen, W. Wilcox, F.X. Lee, L. Zhou, hep-ph/0408024, submitted to Phys. Rev. D. "A study of pentaquarks on the lattice with overlap fermions", N. Mathur, F.X. Lee, A. Alexandru, C. Bennhold, Y. Chen, S.J. Dong, T. Draper, I. Horvath, K.F. Liu, S. Tamhankar, J.B. Zhang, hep-ph/0406196, Phys. Rev. D (in press). "The Sequential Empirical Bayes Method: An Adaptive Constrained-Curve Fitting Algorithm for Lattice QCD", Y. Chen, S.J. Dong, T. Draper, I. Horvath, F.X. Lee, K.F. Liu, N. Mathur, C. Srinivasan, J.B. Zhang, hep-lat/0405001, submitted to Phys. Rev. D. PI Wei-li Lee Alfven Waves in Gyrokinetic Plasmas, W. W. Lee and H. Qin, Phys. Plasmas, 3196 (August 2003). Parallel Data Streaming Implemented for Gyrokinetic Toroidal Code, S. Klasky, S. Ethier, Z. Lin, K. Martins, D. McCune, and R. Samtaney, Proceedings of the ACM/IEEE Conference on Supercomputing (November 2003). Turbulence Spreading and Transport Scaling in Global Gyrokinetic Particle Simulation, Z. Lin and T. S. Hahm, Phys. Plasmas 11, 1099-1108 (March 2004).
Turbulence Spreading into Linearly Stable Zone and Transport Scaling, T. S. Hahm, P. H. Diamond, Z. Lin, K. Itoh, and S.-I. Itoh, Plasma Phys. Contr. Fusion 46, A323-A333 (May 2004). Porting the 3D Gyrokinetic Particle-in-Cell Code GTC to the NEC SX-6 Vector Architecture: Perspectives and Challenges, S. Ethier and Z. Lin, to appear in Computer Physics Communications, 2004. Theoretical and Numerical Properties of a Gyrokinetic Plasma: issues related to transport time scale simulation, W. W. Lee, to appear in Computer Physics Communications (2004). A Gyrokinetic Electron and Fully Kinetic Ion Particle Simulation Model, Y. Lin, X. Y. Wang, Z. Lin, and L. Chen, submitted to Phys. Plasmas, 2004. Dynamics of Turbulence Spreading in Magnetically Confined Plasmas, O. D. Gurcan, P. H. Diamond, T. S. Hahm, and Z. Lin, submitted to Phys. Plasmas, 2004. Self-Correcting Multigrid Solver, J. L. V. Lewandowski, PPPL Report 3976 (submitted for publication). PI William Lester A.C. Kollias and W. A. Lester, Jr., "Quantum Monte Carlo and Electron Localization Function Study of the Electronic Structure of CO2+", J. Mol. Struct. THEOCHEM 634, 1 (2003). C. Schuetz, M. Frenklach, A. C. Kollias, and W. A. Lester, Jr., "Geometry Optimization in Quantum Monte Carlo with Solution Mapping: Application to Formaldehyde," J. Chem. Phys. 119, 9386 (2003). A. Aspuru-Guzik, O. El Akramine, J. C. Grossman, and W. A. Lester, Jr., "Quantum Monte Carlo for Electronic Excitations of Free-Base Porphyrin," J. Chem. Phys. 120, 3049 (2004). A.C. Kollias, O. Couronne, and W. A. Lester, Jr., "Quantum Monte Carlo Study of the Reaction: Cl + CH3OH -> CH2OH + HCl," J. Chem. Phys. 121, 1357 (2004). PI Michael Levi Albert et al. (2004), PASP, submitted. Huterer, Kim, Krauss, & Broderick, ApJ, accepted. Blanc et al. (2004), A&A, 423, 881. Garavini et al. (2004), AJ, 128, 387. Kim et al. (2004), MNRAS, 347, 909. Rhodes et al. (2004), APh, 20, 377. Knop et al. (2003), ApJ, 598, 102. PI M.C.
Lin "Ab Initio Studies of Alkyl Radical Reactions: Combination and Disproportionation Reactions of CH3 with C2H5 and the Decomposition of Chemically Activated C3H8", R. S. Zhu, Z. F. Xu and M. C. Lin, J. Chem. Phys., 120, 6566-73 (2004). "Thermal Decomposition of Ethanol. III. Kinetics and Mechanism for the CH3 + C2H5OH Reaction", Z. F. Xu, J. Park and M. C. Lin, J. Chem. Phys., 120, 6593-99 (2004). "Kinetics and Mechanisms for the Reactions of Phenyl Radical with Ketene and its Deuterated Isotopomer: An Experimental and Computational Study", Y. M. Choi and M. C. Lin, ChemPhysChem (2004), 5(2). "Kinetics of Phenyl Radical Reactions with Propane, n-Butane, n-Hexane and n-Octane: Reactivity of C6H5 toward the Secondary C-H Bond of Alkanes", J. Park, Liming Wang and M. C. Lin, Int. J. Chem. Kinet. 2004, 36(1), 49-56. "Quantum Chemical/vRRKM Study on the Thermal Decomposition of Cyclopentadiene", I. V. Tokmakov, L. V. Moskaleva, and M. C. Lin, Int. J. Chem. Kin. (2004), 36(3), 139-151. PI Yu Lin Lin, Y., X. Y. Wang, Z. Lin, and L. Chen, A gyrokinetic electron and fully kinetic ion particle simulation model, Physics of Plasmas, submitted, 2004. PI Feng Liu Guang-Hong Lu, Martin Cuma, and Feng Liu, Quantitative understanding of strain stabilization of Ge/Si(105) surface from first-principles, submitted to Physical Review Letters. PI Keh-Fei Liu ``Chiral Logs in Quenched QCD'', Y. Chen, S.J. Dong, T. Draper, I. Horvath, F.X. Lee, K.F. Liu, N. Mathur, J.B. Zhang, Phys. Rev. D70, 034502 (2004) [hep-lat/0304005]. ``A study of pentaquarks on the lattice with overlap fermions'', N. Mathur, F.X. Lee, A. Alexandru, C. Bennhold, Y. Chen, S.J. Dong, T. Draper, I. Horvath, K.F. Liu, S. Tamhankar, J.B. Zhang, submitted to Phys. Rev. D [hep-ph/0406196]. ``Roper Resonance and S_{11}(1535) from Lattice QCD'', N. Mathur, S.J. Dong, T. Draper, I. Horvath, F.X. Lee, K.F. Liu, J.B. Zhang, submitted to Phys. Rev. Lett.
[hep-ph/0306199]. ``The Sequential Empirical Bayes Method: An Adaptive Constrained-Curve Fitting Algorithm for Lattice QCD'', Ying Chen, Shao-Jing Dong, Terrence Draper, Ivan Horvath, Keh-Fei Liu, Nilmani Mathur, Sonali Tamhankar, Cidambi Srinivasan, Frank X. Lee, Jianbo Zhang, submitted to Phys. Rev. D [hep-lat/0405001]. ``The Kentucky Noisy Monte Carlo Algorithm for Wilson Dynamical Fermions'', B. Joo, I. Horvath, and K.F. Liu, Phys. Rev. D67 (2003) 074505 [hep-lat/0112033]. ``On the Local Structure of Topological Charge Fluctuations in QCD'', I. Horvath, S.J. Dong, T. Draper, F.X. Lee, K.F. Liu, H.B. Thacker, J.B. Zhang, Phys. Rev. D67 (2003) 011501 [hep-lat/0203027]. ``Low-Dimensional Long-Range Topological Charge Structure in the QCD Vacuum'', I. Horvath, S.J. Dong, T. Draper, F.X. Lee, K.F. Liu, N. Mathur, H.B. Thacker, J.B. Zhang, Phys. Rev. D68, 114505 (2003). ``A Finite Baryon Density Algorithm'', K.F. Liu, Proceedings of the Third International Workshop on QCD and Numerical Analysis, Edinburgh, June 2003, to appear in Springer Lecture Notes [hep-lat/0312027]. ``Nonperturbative renormalisation of composite operators with overlap quarks'', J. B. Zhang, D. B. Leinweber, K. F. Liu, A. G. Williams, Nucl. Phys. Proc. Suppl. 128, 240 (2004) [hep-lat/0311030]. ``Uncovering Low-Dimensional Topological Structure in the QCD Vacuum'', I. Horvath, S.J. Dong, T. Draper, K.F. Liu, N. Mathur, F.X. Lee, H.B. Thacker, J.B. Zhang, Gargnano 2002, Quark Confinement and the Hadron Spectrum, 312-314 (2003) [hep-lat/0212013]. PI Zhengyu Liu Lee, D. and Z. Liu, 2004: Assess seasonal ocean-atmosphere interaction in the North Pacific. Clim. Dyn., submitted. Liu, Z. and L. Wu, 2004: Atmospheric response to North Pacific SST: The role of ocean-atmosphere coupling. J. Clim., 17, 1859-1882. Liu, Z., Q. Zhang and L. Wu, 2003a: Assess remote impacts on tropical Atlantic climate variability: statistic assessment and dynamic assessment. J. Clim., in press. Liu, Z., W. Lewis and A.
Ganopolski, 2003b: A Coordinated Acceleration Scheme for the Simulation of Long Term Climate Evolution. Clim. Dyn., in press. Notaro, M., Z. Liu, R. Gallimore, S. Vavrus, J. Kutzbach, R. Jacob and C. Prentice, 2004: Simulated and Observed Pre-Industrial to Modern Vegetation and Climate Changes. J. Clim., submitted. Vavrus, S., 2004: The impact of cloud feedbacks on Arctic climate under greenhouse forcing. J. Clim., 17, 603-615. Wu, L. and Z. Liu, 2004: A coupled modeling study of North Atlantic decadal variability. J. Clim., submitted. Wu, L., Z. Liu, R. Gallimore, R. Jacob, D. Lee, and Y. Zhong, 2003: A coupled modeling study of Pacific decadal variability: The Tropical Mode and The North Pacific Mode. J. Clim., 16, 1101-1120. Yang, H. and Z. Liu, 2003: Tropical-extratropical and interhemisphere climatic interactions: atmospheric bridge and oceanic tunnel. J. Clim., submitted. Yang, H., Z. Liu and Q. Zhang, 2004: Tropical ocean decadal variability and the resonance of planetary wave basin modes: II: Numerical Study. J. Clim., 17, 1711-1721. PI Steven Louie K. Khoo, M. S. Mazzoni, and S. G. Louie, Phys. Rev. B 69, 201401 (2004). Spataru C.D., Ismail-Beigi S., Benedict L.X., Louie S.G., Phys. Rev. Lett. 92, 077402 (2004). Spataru C.D., Ismail-Beigi S., Benedict L.X., Louie S.G., Appl. Phys. A 78, 1129 (2004). H. Sun, F. J. Ribeiro, J.-L. Li, D. Roundy, M. L. Cohen, S. G. Louie, Phys. Rev. B 69, 024110 (2004). P. Tangney, S. G. Louie, and M. L. Cohen, Phys. Rev. Lett. 93, 065503 (2004). P. Zhang, W. Luo, V. H. Crespi, M. L. Cohen, and S. G. Louie, Phys. Rev. B 70, 085109 (2004). R. B. Capaz, C. D. Spataru, P. Tangney, M. L. Cohen, and S. G. Louie, Phys. Stat. Solidi (b), in press (2004). X. Lu, M. Grobis, K. Khoo, S. G. Louie, and M. F. Crommie, Phys. Rev. Lett., in press (2004). M. L. Tiago, S. Ismail-Beigi, S. G. Louie, Phys. Rev. B 69, 125212 (2004). A. Trave, F.J. Ribeiro, S.G. Louie and M.L.
Cohen, "Energetics and Structural Characterization of C60 Polymerization in BN and Carbon Nano-peapods", submitted to Phys. Rev. B (2004). W. Luo, W. Duan, S. G. Louie, and M. L. Cohen, "Structural and electronic properties of n-doped and p-doped SrTiO3", submitted to Phys. Rev. B (2004). R. B. Capaz, C. D. Spataru, P. Tangney, M. L. Cohen, and S. G. Louie, "Temperature Dependence of the Band Gap of Semiconducting Carbon Nanotubes", submitted to Phys. Rev. Lett. (2004). M.L. Tiago, S. Ismail-Beigi, and S.G. Louie, submitted to J. Chem. Phys. (2004). PI Carlos Lousto Coalescence Remnant of Spinning Binary Black Holes, J. Baker (NASA, Goddard), M. Campanelli, C. O. Lousto, R. Takahashi (Copenhagen, Theor. Astrophys. Ctr.), Jan 2004, 4pp, published. Radiation Content of Conformally Flat Initial Data, C. O. Lousto (Texas U., Brownsville, CGVA), Richard H. Price (Utah U.), UTBRG-2004-001, Jan 2004, 4pp, published in Phys. Rev. D69:087503, 2004; e-Print Archive: gr-qc/0401045. The Final Plunge of Spinning Binary Black Holes, John G. Baker (NASA, Goddard), M. Campanelli, Carlos O. Lousto (Texas U., Brownsville), R. Takahashi (Copenhagen, Theor. Astrophys. Ctr.), UTBRG-2003-001, May 2003, 17pp; e-Print Archive: astro-ph/0305287. PI Walter Loveland R. Arratia-Perez, L. Hernandez-Acevedo and G. L. Malli, "Calculated optical and magnetic properties of hexafluorouranate(V) anion: UF6-", J. Chem. Phys. (in press, accepted Aug 5, 2004). G. L. Malli, "Relativistic Quantum Chemistry of Heavy and Superheavy Elements: Fully Relativistic Coupled-Cluster Calculations for Molecules of Heavy and Transactinide Superheavy Elements", Fundamental World of Quantum Chemistry, Vol. 3 (in press, accepted July 2004). G. L. Malli, M. Siegert and D. P. Turner, Int. J. Quantum Chem. 99, 940-949 (2004). G. L. Malli, Mol. Phys. 101, 287 (2003). This paper investigates the electronic structure, etc.
of UCl6 and our future plans are to investigate the corresponding SHE system E124Cl6, in order to test the validity or otherwise of the extrapolation of relativistic and electron correlation effects to the farthest regions of the Periodic Table. G. L. Malli, J. Chem. Phys. 116, 5476 (2002). The electronic structure and bonding of seaborgium hexabromide is investigated by including all 316 electrons in the wavefunction of SgBr6. W. Loveland, Rev. Sci. Instrum. 73, 505 (2002). W. Loveland, K.E. Gregorich, J.B. Patin, D. Peterson, C. Rouki, P.M. Zielinski, and K. Aleklett, Phys. Rev. C66, 044617 (2002). T.N. Ginter, K.E. Gregorich, N. Seward, W. Loveland, C.M. Folden, D.C. Hoffman, U.W. Kirbach, D.M. Lee, H. Nitsche, J.B. Patin, R. Sudowe, P.A. Wilk, P.M. Zielinski, R. Eichler and K. Aleklett, Phys. Rev. C67, 064609 (2003). K.E. Gregorich, T.N. Ginter, W. Loveland, D. Peterson, J.B. Patin, K. Aleklett, C.M. Folden, A. Ghiorso, D.C. Hoffman, D.M. Lee, H. Nitsche, J. P. Omtvedt, L.A. Omtvedt, L. Stavsetra, R. Sudowe, P.A. Wilk, and P.M. Zielinski, Eur. Phys. J. A 18, 633 (2003). PI Chung-Pei Ma C.-P. Ma and E. Bertschinger (2004), Astrophysical Journal, 612, 28, ``A Cosmological Kinetic Theory for the Evolution of Cold Dark Matter Halos with Substructure: Quasi-Linear Theory''. C.-P. Ma and M. Boylan-Kolchin (2004), Physical Review Letters, 93, 021301, ``Are Halos of Collisionless Cold Dark Matter Collisionless?''. M. Boylan-Kolchin and C.-P. Ma (2004), Monthly Notices of the Royal Astronomical Society, 349, 1117, ``Major Mergers of Galaxy Halos: Cuspy or Cored Inner Density Profile?''. M. Boylan-Kolchin, C.-P. Ma, and E. Quataert (2004), Astrophysical Journal Letters, 613, ``Core Formation in Galactic Nuclei due to Recoiling Black Holes''. PI Evan Ma H.W. Sheng and E. Ma, Atomic Packing of the Inherent Structure of Simple Liquids, Phys. Rev. E 69 (2004) 062202-1-4. W. Luo, H.W. Sheng, F.M. Alamgir, J.M. Bai, J.H. He and E.
Ma, Icosahedral Short-Range Order in Amorphous Alloys, Phys. Rev. Lett. 92 (2004) 145502-1-4. P.S. Schilling, V. Palshin, J.H. He and E. Ma, Overlapping Solubility in Mechanically Alloyed Fe-Ni and Fe-Cu, Phys. Rev. B 68 (2003) 224204-1-5. PI Osni Marques A Study of Robust Scientific Libraries For The Advancement of Sciences and Engineering, T. Drummond, O. Marques, J. Roman and V. Vidal. To appear in the Proc. of the VECPAR 2004 Conference, Springer. The Advanced CompuTational Software (ACTS) Collection, how can it work for you?, T. Drummond and O. Marques, submitted to ACM TOMS. PI Angelo Mascarenhas Y. Zhang, B. Fluegel, and A. Mascarenhas, Total and negative refraction in real crystals for ballistic electrons and light, Phys. Rev. Lett. 91, 157404 (2003). Y. Zhang, A. Mascarenhas, and L.W. Wang, III-V-Bi versus III-V-N: similar and dissimilar aspects, submitted to Phys. Rev. B. Y. Zhang, B. Fluegel, M. C. Hanna, A. Mascarenhas, L.-W. Wang, Y. J. Wang, and X. Wei, Impurity perturbation to the host band structure and recoil of the impurity state, Phys. Rev. B 68, 75210. Y. Zhang, B. Fluegel, M. C. Hanna, J. F. Geisz, L.-W. Wang, and A. Mascarenhas, Effects of heavy nitrogen doping in III-V semiconductors - How well does the conventional wisdom hold for the dilute nitrogen III-V-N alloys?, Phys. Stat. Sol. (b) 240, 396 (2003). (an invited talk at ICNS-5) S. Zh. Karazhanov, Y. Zhang, L.-W. Wang, A. Mascarenhas, and S. Deb, Hyper-deep, high-lying resonant defect states, and strong lattice relaxation of oxygen vacancies in WO3, Phys. Rev. B 68, 233204. S. Zh. Karazhanov, Y. Zhang, A. Mascarenhas, S. Deb, and L.-W. Wang, Oxygen vacancy in cubic WO3 studied by first-principles pseudopotential calculation, Proceedings of the 5th International Conference on Electrochromism, Solid State Ionics 165, 43 (2003). Y. Zhang and A. Mascarenhas, Effects due to and derived from spontaneous ordering in III-V semiconductors, Mat. Res. Soc. Symp. Proc. 794, T10.1 (2004).
(an invited talk at the 2003 MRS Fall Meeting) X. Huang, Jing Li, Y. Zhang, and A. Mascarenhas, From 1D Chain to 3D Network: Syntheses, Structures, and Optical Properties of Novel Hybrid II-VI Nanocomposites, JACS 125, 7049 (2003). B. Fluegel, Y. Zhang, A. Mascarenhas, X. Huang, and J. Li, Electronic properties of hybrid organic-inorganic semiconductors, Phys. Rev. B (in press). PI Manos Mavrikakis "A first-principles study of surface and subsurface hydrogen on and in Ni(111): Diffusional Properties and Coverage-Dependent Behavior", J. Greeley, M. Mavrikakis, Surface Science 540, 215 (2003). "Adsorption and Dissociation of O2 on Gold Surfaces: Effect of Steps and Strain", Y. Xu, M. Mavrikakis, Journal of Physical Chemistry B 107, 9298 (2003). "Atomic and molecular adsorption on Ir(111)", W. P. Krekelberg, J. Greeley, M. Mavrikakis, Journal of Physical Chemistry B 108, 987 (2004). "Why Au and Cu are more selective than Pt for Preferential Oxidation of CO at low temperature", S. Kandoi, A. A. Gokhale, L. C. Grabow, J. A. Dumesic, M. Mavrikakis, Catalysis Letters 93, 93 (2004). "On the origin of the catalytic activity of nanometer gold particles for low temperature CO oxidation", N. Lopez, T. V. W. Janssens, B. S. Clausen, Y. Xu, M. Mavrikakis, T. Bligaard, J. K. Nørskov, Journal of Catalysis (Priority Communication), 223, 232 (2004). "Competitive Paths for Methanol Decomposition on Pt(111)", J. Greeley, M. Mavrikakis, Journal of the American Chemical Society 126, 3910 (2004). "Adsorption and dissociation of O2 on Pt-Co and Pt-Fe alloys", Y. Xu, A. Ruban, M. Mavrikakis, Journal of the American Chemical Society 126, 4717 (2004). "Strain-Induced Formation of Subsurface Species in Transition Metals", J. Greeley, W. P. Krekelberg, M. Mavrikakis, Angewandte Chemie International Edition 43, 4296 (2004). "Effect of Sn on the reactivity of Cu surfaces", A. A. Gokhale, G. Huber, J. A. Dumesic, M. Mavrikakis, Journal of Physical Chemistry B (in press).
"A New Class of Alloy Catalysts Designed from First-Principles", J. Greeley, M. Mavrikakis, Nature Materials (in press). PI William McCurdy "Solving the three-body Coulomb breakup problem using exterior complex scaling", C. W. McCurdy, M. Baertschy and T. N. Rescigno, J. Phys. B 37, R137 (2004). "Ab initio study of low-energy electron collisions with tetrafluoroethene C2F4", C. S. Trevisan, A. E. Orel, and T. N. Rescigno, Phys. Rev. A 70, 012704 (2004). "Complex potential surface for the 2B1 metastable state of the water anion", Daniel J. Haxton, Zhiyong Zhang, C. W. McCurdy, and T. N. Rescigno, Phys. Rev. A 69, 062713 (2004). "Dynamics of dissociative attachment of electrons to water through the 2B1 metastable state of the anion", Daniel J. Haxton, Zhiyong Zhang, H.-D. Meyer, T. N. Rescigno, and C. W. McCurdy, Phys. Rev. A 69, 062714 (2004). "Low-energy electron scattering of NO: Ab initio analysis of the 3Sigma-, 1Delta, and 1Sigma+ shape resonances in the local complex potential model", Zhiyong Zhang, Wim Vanroose, C. W. McCurdy, A. E. Orel, and T. N. Rescigno, Phys. Rev. A 69, 062711 (2004). "Theoretical treatment of double photoionization of helium using a B-spline implementation of exterior complex scaling", C. W. McCurdy, D. A. Horner, T. N. Rescigno and F. Martín, Phys. Rev. A 69, 032707 (2004). "Implementation of exterior complex scaling in B-splines to solve atomic and molecular collision problems", C. W. McCurdy and F. Martín, J. Phys. B 37, 917 (2004). "Threshold Vibrational Excitation of CO2 by Slow Electrons", Wim Vanroose, Zhiyong Zhang, C. W. McCurdy, and T. N. Rescigno, Phys. Rev. Lett. 92, 053201 (2004). "Ab initio study of low-energy electron collisions with ethylene", C. S. Trevisan, A. E. Orel, and T. N. Rescigno, Phys. Rev. A 68, 062707 (2003). "Scattering of slow electrons by polar molecules: Application of effective-range potential theory to HCl", Wim Vanroose, C. W. McCurdy, and T. N. Rescigno, Phys. Rev.
A 68, 052713 (2003) PI William McMahon "Step structures on MOCVD-grown III-V phosphide (001) surfaces: How do steps and Sb affect CuPt ordering of GaInP?"; I.G. Batyrev, W.E. McMahon, S.B. Zhang, J.M. Olson, and S.-H. Wei; submitted to Phys. Rev. Lett. "Borderline magic clustering: The observation of tetravalent Pb cluster arrays on Si(111)-7x7"; Shao-Chun Li, Jin-Feng Jia, Rui-Fen Dou, Qi-Kun Xue, Iskander G. Batyrev, and S.B. Zhang; Phys. Rev. Lett. (accepted for publication). "An STM and LEED Study of MOCVD-Prepared P/Ge (100) to (111) Surfaces"; W.E. McMahon, A.E. Kibbler and J.M. Olson; Surf. Sci. (accepted for publication). "Tip size effect on the appearance of a STM image for complex surfaces: Theory versus experiment for Si(111)-7x7"; Y. L. Wang, H.-J. Gao, H. M. Guo, H. W. Liu, I. G. Batyrev, W. E. McMahon, and S. B. Zhang; Phys. Rev. B (accepted for publication). "An RDS, LEED, and STM Study of MOCVD-Prepared Si(100) Surfaces"; T. Hannappel, W.E. McMahon and J.M. Olson; J. Cryst. Growth (accepted for publication). PI Anthony Mezzacappa Electron capture rates on nuclei and implications for stellar core collapse, K. Langanke, G. Martinez-Pinedo, J.M. Sampaio, D.J. Dean, W.R. Hix, O.E.B. Messer, A. Mezzacappa, M. Liebendorfer, H.-Th. Janka, and M. Rampp, Phys. Rev. Lett. 90, 241102 (2003). Consequences of nuclear electron capture in core collapse supernovae, W.R. Hix, O.E.B. Messer, A. Mezzacappa, M. Liebendorfer, J. Sampaio, K. Langanke, D.J. Dean, G. Martinez-Pinedo, Phys. Rev. Lett. 91, 201102 (2003). Neutral-current neutrino-nucleus cross sections for A=50-65 nuclei, A. Juodagalvis, K. Langanke, G. Martinez-Pinedo, W.R. Hix, D.J. Dean, and J.M. Sampaio, in press, Nucl. Phys. A (2004). Gamow-Teller GT+ distributions in nuclei with mass 90-97, A. Juodagalvis and D.J. Dean, submitted to Phys. Rev. C (2004). ADI-like Preconditioners for Boltzmann Transport, E. F. D'Azevedo, O.E.B. Messer, A. Mezzacappa, B. Peyton, and M. 
Liebendoerfer, SISC, 2003 (in press). Neutrino-induced fission of neutron-rich nuclei, E. Kolbe, K. Langanke, and G. M. Fuller, Phys. Rev. Lett. 92, 111101 (2004); astro-ph/0308350. Pulsar kicks from a dark matter sterile neutrino, G. M. Fuller, A. Kusenko, I. Mocioiu, S. Pascoli, Phys. Rev. D68, 103002 (2003); astro-ph/0307267. Jordan, G. C., IV, Gupta, S. S., and Meyer, B. S. 2003, Nuclear Reactions Important in Alpha-Rich Freeze-outs, Physical Review C 68, 065801. A Comparison of Algorithms for the Efficient Solution of the Linear Systems Arising from Multi-Group Flux-Limited Diffusion Problems, F. Douglas Swesty, Dennis Smolarski, and Paul Saylor, to appear in The Astrophysical Journal. Supernova Science at Spallation Neutron Sources, W. R. Hix, A. Mezzacappa, O. E. B. Messer, and S.W. Bruenn, Journal of Physics G: Nuclear and Particle Physics 29, 2523-2542 (2003). PI Bogdan Mihaila "Renormalizing the Schwinger-Dyson equations in the auxiliary field formulation of lambda phi^4 field theory," Fred Cooper, Bogdan Mihaila, and John Dawson, hep-ph/0407119, LA-UR-04-4605 (submitted to Physical Review D) "Real-time dynamics of the O(N) model in 1+1 dimensions," Bogdan Mihaila, Physical Review D 68, 36002 (2003); hep-ph/0303157; LA-UR-03-1817 "Quantum dynamics of phase transitions in broken symmetry lambda phi^4 field theory," Fred Cooper, John Dawson, and Bogdan Mihaila, Physical Review D 67, 56003 (2003); hep-ph/0209051 "Dynamics of broken symmetry lambda phi^4 field theory," Fred Cooper, John Dawson, and Bogdan Mihaila, Physical Review D 67, 51901R (2003); hep-ph/0207346 PI David Mikkelsen Microturbulent drift mode stability before internal transport barrier formation in the Alcator C-Mod radio frequency heated H-mode, M. H. Redi, W. Dorland, C. L. Fiore, P. T. Bonoli, M. J. Greenwald, J. E. Rice, J. Baumgaertel, T. S. Hahm, G. W. Hammett, K. Hill, D. C. McCune, D. R. Mikkelsen, G. Rewoldt; Submitted to Physics of Plasmas M. Romanelli, C. Bourdelle, and W. 
Dorland, Phys. Plasmas 11 (2004) 3845 Micro-stability and transport modelling of internal transport barriers on JET, X. Garbet, et al., Nuclear Fusion 43 (2003) 975. Invited talk at EPS-2003: C. Fiore et al., Control of Internal Transport Barriers on C-Mod, Phys. Plasmas 11, 2480 (2004) Invited talk at EPS-2003: Ernst et al., Role of Trapped Electron mode turbulence in internal transport barrier control in the Alcator C-Mod Tokamak, Phys. Plasmas 11, 2637 (2004) Invited talk at EPS-2004: C. Fiore et al., Internal Transport Barrier Production and control in Alcator C-Mod, to be published: 31st European Physical Society Conference on Plasma Physics and Controlled Fusion, London, UK, June, 2004 PI Norman Miller Brekke, L.D., N.W.T. Quinn, N.L. Miller, and J.A. Dracup, 2004: Climate Change Impacts Uncertainty for San Joaquin River Basin. LBNL 51393, J. Amer. Water Resources Assoc., 40, 149-164. Dale, L. L., C. D. Whitehead, and A. Fargeix, 2004: Electricity Price and Southern California Water Supply Options, Water Resources Research. (Accepted) Hayhoe, K., D. Cayan, C.B. Field, P.C. Frumhoff, E.P. Maurer, N.L. Miller, S.C. Moser, S.H. Schneider, and Others, 2004: Emissions Pathways, Climate Change, and Impacts on California. Proc. National Academy of Sciences, 101, 12422-12427. Maxwell, R.M. and N.L. Miller, 2004: On the development of a coupled land surface and groundwater model for use in watershed management. J. Hydrometeorology, (Submitted). Miller, N.L., K.E. Bashford, and E. Strem, 2003: Potential impacts of climate change on California hydrology. J. Amer. Water Resources Assoc., 39, 771-784. Miller, N.L., A.W. King, M.A. Miller, E.P. Springer, M.L. Wesely and others. 2004: The DOE Water Cycle Pilot Study, LBNL-53826. Bull. Amer. Meteorological Soc. (Accepted). Quinn, N.W.T., L.D. Brekke, N.L. Miller, T. Hienzer, H. Hidalgo, and J.A. 
Dracup, 2004: Model integration for assessing future hydroclimate impacts on water resources, agricultural production, and environmental quality in the San Joaquin Basin, California. Envir. Modeling and Software, 19, 305-316. PI William Miller "On the efficient path integral evaluation of thermal rate constants within the quantum instanton approximation" T. Yamamoto and W. H. Miller Journal of Chemical Physics 120, 3086 (2004) "Path integral calculation of thermal rate constants within the quantum instanton approximation: Application to the H+CH4->H2+CH3 hydrogen abstraction reaction in full Cartesian space" Y. Zhao, T. Yamamoto and W. H. Miller Journal of Chemical Physics 120, 3100 (2004) "Time averaging the semiclassical initial value representation for the calculation of vibrational energy levels. II. Application to H2CO, NH3, CH4, CH2D2" A.L. Kaledin and W. H. Miller Journal of Chemical Physics 119, 2078 (2003) PI Warren Mori Rumolo, G., Ghalam, A.Z., Katsouleas, T., Huang, C.K., Decyk, V.K., Ren, C., Mori, W.B., Zimmermann, F., and Ruggiero, F., Electron Cloud Effects on Beam Evolution in a Circular Accelerator, Phys. Rev. ST Accel. Beams, Vol. 6, pp. 081002:1-9 (2003). Deng, S., Barnes, C.D., Clayton, C.E., O'Connell, C., Decker, F.J., Emma, P., Erdem, O., Huang, C., Hogan, M.J., Iverson, R., Johnson, K., Joshi, C., Katsouleas, T., Krejcik, P., Lu, W., Marsh, K.A., Mori, W.B., Muggli, P., Siemann, R.H., and Walz, D., Modeling of Beam-Ionized Sources for Plasma Accelerators, Proc. of the 2003 Particle Accelerator Conference, pp. 1933-1935 (2003). Fonseca, R.A., Silva, L.O., Tonge, J.W., Mori, W.B., and Dawson, J.M. Three-Dimensional Weibel Instability in Astrophysical Scenarios, Physics of Plasmas, Vol. 10, No. 5, pp. 1979-1984 (May 2003). Blue, B.E., Clayton, C.E., O'Connell, C.L., Decker, F.-J., Hogan, M. 
J., Huang, C., Iverson, R., Joshi, C., Katsouleas, T.C., Lu, W., Marsh, K.A., Mori, W.B., Muggli, P., Siemann, R., and Walz, D., Plasma-Wakefield Acceleration of an Intense Positron Beam, Phys. Rev. Lett., Vol. 90, No. 21, pp. 214801/1-4 (May 2003). Hogan, M.J., Clayton, C.E., Huang, C., Muggli, P., Wang, S., Blue, B.E., Walz, D., Marsh, K.A., O'Connell, C.L., Lee, S., Iverson, R., Decker, F.-J., Raimondi, P., Mori, W.B., Katsouleas, T.C., Joshi, C., and Siemann, R.H., Ultrarelativistic-Positron-Beam Transport through Meter-Scale Plasmas, Phys. Rev. Lett., Vol. 90, No. 20, pp. 205002/1-4 (May 2003). Silva, Luis O., Marti, Michael, Davies, Jonathan R., Fonseca, Ricardo A., Ren, Chuang, Tsung, Frank S., and Mori, Warren B., Proton Shock Acceleration in Laser-Plasma Interactions, Phys. Rev. Lett., Vol. 92, No. 1, pp. 015002/1-4 (January 2004). Muggli, P., Blue, B.E., Clayton, C.E., O'Connell, C.L., Decker, F.-J., Hogan, M.J., Huang, C., Iverson, R., Joshi, C., Katsouleas, T.C., Lu, W., Marsh, K.A., Mori, W.B., and Siemann, R., A Meter-Scale Plasma Wakefield Accelerator Driven by a Matched Electron Beam, Phys. Rev. Lett., Vol. 93, 014802:1-4 (July 2004). Tsung, F.S., Narang, R., Mori, W.B., Joshi, C., Fonseca, R.A., and Silva, L.O., Near GeV Energy Laser Wakefield Acceleration of Self-Injected Electrons in a cm Scale Plasma Channel, to appear Phys. Rev. Lett. (2004). Ren, C., Tzoufras, M., Tsung, F.S., Mori, W.B., Amorini, S., Fonseca, R.A., Silva, L.O., Adam, J.C., and Heron, A., A Global Model for Laser Driven MeV Electrons in Fast Ignition, submitted to Phys. Rev. Lett. (2004). PI James Morris K.A. Gschneidner, Jr., A.M. Russell, A. O. Pecharsky, Z. Zhang, T.A. Lograsso, J.R. Morris, D.K. Hsu, C.H.C. Lo, Y.Y. Ye, A.J. Slager, and D.C. Kesse, A New Family of Ductile Intermetallic Compounds, Nature Materials 2, 587-591 (2003). J. R. Morris, Y. Y. Ye, Y. B. Lee, B. N. Harmon, K. A. Gschneidner, Jr., and Alan M. 
Russell, Ab initio calculation of bulk and defect properties of ductile rare-earth intermetallic compounds, Acta Mat. 52, 4849-4857 (2004). H. Chen, M. Khantha and T. Egami, MRS Symp. Proc. 754, 65 (2003). C. L. Fu and J. H. Schneibel, Reducing the thermal expansion anisotropy in Mo5Si3 by Nb and V additions: theory and experiments, Acta Mater. 51 (17), 5083 (2003). M. Krcmar and C. L. Fu, Structural and electronic properties of BaTiO3 slabs: mechanism for surface conduction, Physical Review B68, 115404 (2003). M. Krcmar and C. L. Fu, First-principles study of point defect structure in C15 ZrCo2 and ZrCr2, and B2 ZrCo, Physical Review B68, 134110 (2003) C. L. Fu, C. T. Liu, Xun-Li Wang, M. Krcmar, and J. A. Fernandez-Baca, Magnetism-induced solid solution softening in NiAl with Co, Fe, Mn, and Cr solute atoms: theory and experiment, Intermetallics 12, 911 (2004). A. Janotti, M. Krcmar, C. L. Fu, and R. C. Reed, Solute diffusion in metals: larger atoms can move faster, Physical Review Letters 92, 085901 (2004). G. A. Farnan, C. L. Fu, Z. Gai, M. Krcmar, A. P. Baddorf, Z. Zhang, and J. Shen, Electronic stability of magnetic Fe/Co superlattices with monatomic layer alternation, Physical Review Letters 91, 226106 (2003). PI Farrokh Najmabadi P. Mioduszewski, A. Grossman, et al. J. Nucl. Mat. 313 (2003) 1304 A.E. Koniges, A. Grossman, et al. Nucl. Fusion 43 (2003) 107 A. Grossman, et al. "Magnetic Structure at the Edge of a Compact Stellarator (NCSX)" to be published in J. Nucl. Mater. PI Rick Nebel A.H. Glasser and X.Z. Tang, The SEL macroscopic modeling code, Computer Physics Communications, accepted and in press (2004). A. N. Simakov and P. J. Catto, Phys. Plasmas 10, 4744 (2003). Control of linear and nonlinear resistive wall modes, J.M. Finn and L. Chacon, Physics of Plasmas, vol. 11, no. 5, p. 1866-78, May 2004. G. Lapenta, J.U. Brackbill, W.S. Daughton, Physics of Plasmas, 10, 1577-1587, 2003. P. Ricci, G. Lapenta, J.U. 
Brackbill, Physics of Plasmas, 11, 4102-4114, 2004. W. Daughton, G. Lapenta, P. Ricci, "Nonlinear Evolution of the Lower-hybrid Drift Instability in a Current Sheet", Physical Review Letters, to appear PI John Negele N to Delta electromagnetic transition form-factors from lattice QCD, C. Alexandrou, Ph. de Forcrand, Th. Lippert, H. Neff, J.W. Negele, K. Schilling, W. Schroers, and A. Tsapalis, Phys. Rev. D 69, 114506 (2004) Momentum dependence of the N to Delta electromagnetic transition form factors from Lattice QCD, C. Alexandrou, Ph. de Forcrand, H. Neff, J. W. Negele, W. Schroers and A. Tsapalis, to be submitted to Phys. Rev. Lett. Transverse Structure of Nucleon Parton Distributions from Lattice QCD, Ph. Hagler, J. W. Negele, D. B. Renner, W. Schroers, Th. Lippert, and K. Schilling, to appear in Phys. Rev. Lett. Moments of Nucleon Generalized Parton Distributions in Lattice QCD, Philipp Hagler, John Negele, Dru Renner, Wolfram Schroers, Thomas Lippert, and Klaus Schilling, Phys. Rev. D 68 (2003) 034505. PI Brian Nelson V.A. Izzo and T.R. Jarboe, Physics of Plasmas 10 (7), 2003. PI Gregory Newman Commer, M., and Newman, G., 2004, A parallel finite-difference approach for three-dimensional transient electromagnetic modeling with galvanic sources: Geophysics, in press. Newman, G. A., and Boggs, P. T., 2004, Solution accelerators for large-scale three-dimensional electromagnetic inverse problems: Inverse Problems, in press. Newman, G. A., and Commer, M., 2004, New advances in transient electromagnetic inversion: Geophysical Journal International, in press. PI Cheuk-Yiu Ng H. K. Woo, P. Wang, K.-C. Lau, X. Xing, C. Y. Ng, J. Chem. Phys., 2004, 120, 1756. "Vacuum Ultraviolet-Infrared Photo-Induced Rydberg Ionization Spectroscopy: C-H Stretching Frequencies for Trans-2-butene and Trichloroethene cations." X.-M. Qian, K.-C. Lau, G. Z. He, C. Y. Ng, M. Hochlaf, J. Chem. Phys., 2004, 120, 8476. 
"Vacuum Ultraviolet Pulsed Field Ionization Study of ND3: Accurate thermochemistry for the ND2-ND2+ and ND3-ND3+ System." H. K. Woo, P. Wang, K.-C. Lau, X. Xing, C. Y. Ng, J. Chem. Phys., 2004, 120, 9561. "Single-Photon Vacuum-Ultraviolet Laser-Pulsed-Field Ionization-Photoelectron Studies of Trans- and X.-M. Qian, K.-C. Lau, C. Y. Ng, J. Chem. Phys., 2004, 120, 11031. "A high-Resolution Pulsed Field Ionization-Photoelectron-Photoion Coincidence Study of Vinyl Bromide." H. K. Woo, K.-C. Lau, and C. Y. Ng, Chinese J. Chem. Phys., in press. "Vibrational Spectroscopy of Trichloroethene Cation by Vacuum Ultraviolet Pulsed Field Ionization-Photoelectron Method." H. K. Woo, P. Wang, K. C. Lau, X. Xing, and C. Y. Ng, J. Phys. Chem. A, in press. "VUV Pulsed Field Ionization-Photoelectron and VUV-IR Photo-Induced Rydberg Ionization Study of Cis-dichloroethene." PI Esmond Ng Padma Raghavan, Keita Teranishi, and Esmond G. Ng, "A Latency Tolerant Hybrid Sparse Solver Using Incomplete Cholesky Factorization". Numerical Linear Algebra and Applications, 10 (2003), pp. 541-560. Also appeared as LBNL-51018. Timothy A. Davis, John R. Gilbert, Stefan I. Larimore, and Esmond G. Ng, "A Column Approximate Minimum Degree Ordering Algorithm". ACM Trans. Math. Software, 30 (2004). Also appeared as LBNL-47109. Xiaoye Li, "An Overview of SuperLU: Algorithms, Implementation, and User Interface". To appear in ACM Trans. on Math. Software, 2004. C. Yang, W. Gao, Z. Bai, X. Li, L. Lee, P. Husbands, E. Ng, "An Algebraic Sub-structuring Method for Large-scale Eigenvalue Calculation". Submitted to SIAM J. Sci. Comput. Also appeared as LBNL-55050, May 2004. PI Jerry Nolen J.A. Nolen, "Overview of the U.S. Rare Isotope Accelerator Proposal," Nucl. Phys. A734 (2004) 661-668. Ostroumov, P. N.; Aseev, V. N.; Mustapha, B., "Beam loss studies in high-intensity heavy-ion linacs," accepted for pub. in Phys. Rev. ST A&B (2004). Shepard, K. W.; Ostroumov, P. N.; Delayen, J. 
R., "High energy ion linacs based on superconducting spoke cavities." Published In: Phys. Rev. ST Accel. Beams; 6(8): 080101(9); Aug. 2003. Ostroumov, P. N.; Aseev, V. N.; Nolen, J. A., "Design study of acceleration and utilization of high power beams in the RIA facility." Published In: Trans. Am. Nucl. Soc.; 88: 278-85; 2003 American Nuclear Society ANS 2003 Annual Meeting; San Diego, CA; Jun 1-5, 2003 ANL/PHY/CP-111064 Ostroumov, P. N., "Heavy-ion beam dynamics in the RIA accelerators." Published In: Nucl. Instrum. Methods Phys. Res. A; 519(1-2): 412-24; Feb. 21, 2004 Nolen, J. A.; Gomes, I. C., "Preliminary assessment of ground water activation for RIA." Published In: Trans. Am. Nucl. Soc.; 88: 309-16; 2003. I. C. Gomes, J. A. Nolen, C. B. Reed, "The Use of Electron Beams in RIA R&D", accepted for publication in Nucl. Phys. A (2004). Claude B. Reed, Jerry A. Nolen, James R. Specht, Vince J. Novick, and Perry Plotkin, "A 20 kW BEAM-ON-TARGET TEST OF A HIGH-POWER LIQUID LITHIUM TARGET FOR RIA," accepted for publication in Nucl. Phys. A (2004). PI Mark Novotny "Angular Dependence of Switching Properties in Single Fe Nanopillars", G. Brown, S. Stinnett, M.A. Novotny, P.A. Rikvold, J. Applied Physics, vol. 95, 6666-6668 (2004); also in June 7, 2004 issue of Virtual Journal of Nanoscience and Technology. "Dynamics of Desynchronization in a Conservative Update Protocol", A.K. Kolakowska, M.A. Novotny, and P. Verma, Physical Review E, in press, preprint cond-mat/0403341. "On the Possibility of Quasi-Small-World Nanomaterials", M.A. Novotny and S.M. Wheeler, Brazilian Journal of Physics, vol. 34, p. 395-400 (2004), invited article. "Discrete-Event Analytic Technique for Surface Growth Problems", A. Kolakowska and M.A. Novotny, Physical Review B, vol. 69, 075407 [5 pages] (2004). "Update Statistics in Conservative Parallel Discrete Event Simulations of Asynchronous Systems", A. Kolakowska, M.A. Novotny, and P.A. Rikvold, Physical Review E, vol. 
68, 046705 [14 pages] (2003). "Algorithmic Scalability in Globally Constrained Conservative Parallel Discrete-Event Simulations of Asynchronous Systems", A.K. Kolakowska, M.A. Novotny, and G. Korniss, Physical Review E, vol. 67, 046703 [13 pages] (2003). PI Arthur Nozik Y.-H. Kim, M. J. Heben, and S. B. Zhang, "Nanotube wires on commensurate InAs surfaces: Binding energies, band alignments, and bipolar doping by the surfaces", Phys. Rev. Lett. 92, 176102 (2004). J. Feng, Y.-H. Kim, S. B. Zhang, S.-Y. Ding, M.P. Tucker, G. Rumbles, and M.E. Himmel, "Cyclodextrins stabilize TOPO-(CdSe)ZnS quantum dots in water", MRS Proceedings Vol. 823, W4.5 (2004). J. Feng, Y.-H. Kim, S.B. Zhang, S.Y. Ding, B.M. Keyes, M.T. Tucker, G. Rumbles, M.E. Himmel, "Cyclodextrin driven hydrophobic/hydrophilic transformation of semiconductor nanoparticles", Angewandte Chemie, Int. Ed. Engl., submitted. Y.-H. Kim, M.J. Heben, and S.B. Zhang, "First-Principles Band Offsets of Carbon Nanotubes with III-V Semiconductors", AIP Conference Proceedings, submitted. Y. Zhao, Y.-H. Kim, M.-H. Du, and S. B. Zhang, "Icosahedral Quantum Dots and 2D Quasicrystals for Group IV Semiconductors", AIP Conference Proceedings, submitted. Y. Zhao, Y.-H. Kim, M.-H. Du, and S. B. Zhang, "First-principles prediction of icosahedral quantum dots for tetravalent semiconductors", Phys. Rev. Lett. 93, 015502 (2004). PI Peter Nugent "Could There Be a Hole in Type Ia Supernovae?"; Kasen, Daniel; Nugent, Peter; Thomas, R. C.; Wang, Lifan; The Astrophysical Journal, Volume 610, Issue 2, pp. 876-887. "Improved discretization of the wavelength derivative term in CMF operator splitting numerical radiative transfer"; Hauschildt, P. H.; Baron, E; Astronomy and Astrophysics, v.417, p.317-324 (2004) "On the Geometry of the High-Velocity Ejecta of the Peculiar Type Ia Supernova 2000cx"; Thomas, R. C.; Branch, David; Baron, E.; Nomoto, Ken'ichi; Li, Weidong; Filippenko, Alexei V.; The Astrophysical Journal, Volume 601, Issue 2, pp. 
1019-1030. "New Constraints on ΩM, ΩΛ, and w from an Independent Set of 11 High-Redshift Supernovae Observed with the Hubble Space Telescope"; Knop, R. A.; Aldering, G.; Amanullah, R.; Astier, P.; Blanc, G.; Burns, M. S.; Conley, A.; Deustua, S. E.; Doi, M.; Ellis, R.; Fabbro, S.; Folatelli, G.; Fruchter, A. S.; Garavini, G.; Garmond, S.; Garton, K.; Gibbons, R.; Goldhaber, G.; Goobar, A.; Groom, D. E.; Hardin, D.; Hook, I.; Howell, D. A.; Kim, A. G.; Lee, B. C.; Lidman, C.; Mendez, J.; Nobili, S.; Nugent, P. E.; Pain, R.; Panagia, N.; Pennypacker, C. R.; Perlmutter, S.; Quimby, R.; Raux, J.; Regnault, N.; Ruiz-Lapuente, P.; Sainton, G.; Schaefer, B.; Schahmaneche, K.; Smith, E.; Spadafora, A. L.; Stanishev, V.; Sullivan, M.; Walton, N. A.; Wang, L.; Wood-Vasey, W. M.; Yasuda, N.; The Astrophysical Journal, Volume 598, Issue 1, pp. 102-137. "Atmospheric Models of Red Giants with Massive-Scale Non-Local Thermodynamic Equilibrium"; Short, C. I.; Hauschildt, P. H.; The Astrophysical Journal, Volume 596, Issue 1, pp. 501-508. "Direct Analysis of Spectra of the Peculiar Type Ia Supernova 2000cx"; Branch, David; Thomas, R. C.; Baron, E.; Kasen, Daniel; Hatano, Kazuhito; Nomoto, K.; Filippenko, Alexei V.; Li, Weidong; Rudy, Richard J.; The Astrophysical Journal, Volume 606, Issue 1, pp. 413-423. "Optical Photometry and Spectroscopy of the SN 1998bw-like Type Ic Supernova 2002ap"; Foley, Ryan J.; Papenkova, Marina S.; Swift, Brandon J.; Filippenko, Alexei V.; Li, Weidong; Mazzali, Paolo A.; Chornock, Ryan; Leonard, Douglas C.; Van Dyk, Schuyler D.; The Publications of the Astronomical Society of the Pacific, Volume 115, Issue 812, pp. 1220-1235. PI Volker Oberacker 'Axially symmetric Hartree-Fock-Bogoliubov Calculations for Nuclei Near the Drip-lines', E. Teran, V.E. Oberacker and A.S. Umar, Phys. Rev. C 67, 064314 (2003) 'Hartree-Fock-Bogoliubov Calculations in Coordinate Space: Neutron-Rich Sulfur, Zirconium, Cerium, and Samarium Isotopes', V.E. Oberacker, A.S. Umar, E. 
Teran, and A. Blazkiewicz, Phys. Rev. C 68, 064302 (2003) 'HFB calculations with high-energy continuum coupling: nuclear structure at neutron dripline', A.S. Umar, V.E. Oberacker, and E. Teran, book chapter in "Fission and Properties of Neutron-Rich Nuclei", ed. J.H. Hamilton, A.V. Ramayya, and H.K. Carter; World Scientific, p. 109-116 (2003) 'HFB Calculations for Nuclei far from Stability', A.S. Umar, V.E. Oberacker, E. Teran, and A. Blazkiewicz, Proc. NATO Advanced Studies Institute, Kemer, Turkey (Sep. 22 - Oct. 2, 2003), in print 'Solution of the HFB continuum problem on a 2-D lattice: neutron-rich and dripline nuclei', V.E. Oberacker, A.S. Umar, E. Teran, and A. Blazkiewicz, Proc. Int. Symp. "A New Era of Nuclear Structure Physics" (NENS03), Niigata, Japan (Nov. 19 - 22, 2003), World Scientific (2004, in print) 'Half lives of isomeric states from SF of 252Cf and large deformations in 104Zr and 158Sm', J.K. Hwang, A.V. Ramayya, J.H. Hamilton, D. Fong, C.J. Beyer, P.M. Gore, E.F. Jones, E. Teran, V.E. Oberacker, A.S. Umar, Y.X. Luo, J.O. Rasmussen, S.J. Zhu, S.C. Wu, I.Y. Lee, P. Fallon, M.A. Stoyer, S. J. Asztalos, T.N. Ginter, J.D. Cole, G.M. Ter-Akopian, and R. Donangelo, subm. to Phys. Rev. C (July 2003) 'Prompt muon-induced fission: a sensitive probe for nuclear energy dissipation and fission dynamics', V.E. Oberacker, A.S. Umar, and F.F. Karpeshin, "Progress in Muon Research", edited book (invitation only), ed. Frank Columbus, Nova Science Publishers, Inc., Hauppauge, NY (publ. date: Fall 2004); see Cornell Univ. e-print archive: nucl-th/0403087 PI Grazyna Odyniec S.-L. Blyth - "Jet study in ultra-relativistic heavy-ion collisions with the ALICE detector at the LHC" - J. Phys. G: Nucl. Part. Phys. 30 S1155-S1158 (2004) PI Joseph Oefelein J. C. Oefelein. Thermophysical characteristics of shear-coaxial LOX-H2 flames at supercritical pressure. 
Proceedings of the 30th International Symposium on Combustion, Chicago, 30 (1): In Print. PI Doug Olson Centrality and pseudorapidity dependence of charged hadron production at intermediate pT in Au+Au collisions at sqrt(sNN) = 130 GeV Submitted April 15, 2004, to be published in Physical Review C e-Print Archives (nucl-ex/0404020) Production of e+e- Pairs Accompanied by Nuclear Dissociation in Ultra-Peripheral Heavy Ion Collisions Submitted April 7, 2004, to be published in Physical Review C e-Print Archives (nucl-ex/0404012) Photon and neutral pion production in Au+Au collisions at sqrt(s_NN) = 130 GeV Submitted January 8, 2004, to be published in Physical Review C e-Print Archives (nucl-ex/0401008) Azimuthally sensitive HBT in Au+Au collisions at sqrt(sNN) = 200 GeV Submitted December 8, 2003, published June 30, 2004 Phys. Rev. Lett. 93 (2004) 012301 Azimuthal anisotropy at the Relativistic Heavy Ion Collider: the first and fourth harmonics Submitted October 29, 2003, published February 13, 2004 Phys. Rev. Lett. 92 (2004) 062301 Identified particle distributions in pp and Au+Au collisions at sqrt{snn}=200 GeV Submitted October 6, 2003, published March 19, 2004 Phys. Rev. Lett. 92 (2004) 112301 Multi-strange baryon production in Au-Au collisions at sqrt(snn) = 130 GeV Submitted July 31, 2003, published May 5, 2004 Phys. Rev. Lett. 92 (2004) 182301 Pion-Kaon Correlations in Central Au+Au Collisions at sqrt(snn) = 130 GeV Submitted July 31, 2003, published December 31, 2003 Phys. Rev. Lett. 91 (2003) 262302 rho-0 Production and Possible Modification in Au+Au and p+p Collisions at sqrt(snn) = 200 GeV Submitted July 30, 2003, published March 5, 2004 Phys. Rev. Lett. 92 (2004) 092301 Three-Pion Hanbury Brown Twiss Correlations in Relativistic Heavy-Ion Collisions from the STAR Experiment Submitted June 19, 2003, published December 31, 2003 Phys. Rev. Lett. 91 (2003) 262301 PI Joyce Penner Penner, J.E., Chen, Y., and X. 
Dong, 2004: Observational evidence of a change in radiative forcing due to the indirect aerosol effect, Nature, 427, 231-234. Penner, J.E., S.Y. Zhang, and C.C. Chuang, 2003: Soot and smoke aerosol may not warm climate, J. Geophys. Res., 108, D21, Art. No. 4657, doi: 10.1029/2003JD003409. Herzog, M., D. Weisenstein, and J.E. Penner, 2004: An aerosol module for global chemical transport models: Model description and impact of non-sulfate aerosol particles, J. Geophys. Res., in press. Feng, Y., J.E. Penner, S. Sillman, and X. Liu, 2004: The effects of cloud overlap in photochemical models, J. Geophys. Res., D04310, doi: 10.1029/2003JD004040. PI Saul Perlmutter "Spectroscopic confirmation of high-redshift supernovae with the ESO VLT", C. Lidman, et al., accepted for publication in Astronomy and Astrophysics "Spectroscopic Observations and Analysis of the Peculiar SN 1999aa", G. Garavini, et al., 2004, Astronomical Journal, 128, 387 "New Constraints on Omega_M, Omega_Lambda, and w from an Independent Set of Eleven High-Redshift Supernovae Observed with HST", R. A. Knop, et al., (The Supernova Cosmology Project), 2003, Astrophysical Journal, 598, 102 "Multicolor Light Curves of Type Ia Supernovae on the Color-Magnitude Diagram: A Novel Step toward More Precise Distance and Extinction Estimates", L. Wang, G. Goldhaber, G. Aldering, & S. Perlmutter, 2003, Astrophysical Journal, 590, 944 "The Hubble diagram of type Ia supernovae as a function of host galaxy morphology", M. Sullivan, et al., 2003, Monthly Notices of the Royal Astronomical Society, 340, 1057 PI Franz-Josef Pfreundt "Integrated Performance Analysis of Distributed Computer Systems", Pfreundt et al., International Supercomputer Conference, Heidelberg, June 2004, Tutorial "IPACS - Application Benchmarking and Performance Prediction", Dirk Merten, Workshop on Performance Characterisation, Oakland, May 2004 PI Steven Pieper Can Modern Nuclear Hamiltonians Tolerate a Bound Tetraneutron? Steven C. Pieper, Phys. Rev. 
Lett. 90, 252501-1:4 (2003). Quantum Monte Carlo calculations of excited states in A = 6-8 nuclei, Steven C. Pieper, R. B. Wiringa, and J. Carlson, submitted to Phys. Rev. C. PI Michael Pindzola Laser-modified charge-transfer processes in proton collisions with Lithium atoms, M S Pindzola, T Minami, and D R Schultz, Phys. Rev. A 68, 013404 (July 2003). Electron-impact excitation of Li to high principal quantum number, M C Witthoeft, J P Colgan, and M S Pindzola, Phys. Rev. A 68, 022711 (August 2003). Electron-impact ionization of all ionization stages of Beryllium, J P Colgan, S D Loch, M S Pindzola, C P Ballance, and D C Griffin, Phys. Rev. A 68, 032712 (Sept 2003). A pseudo-state sensitivity study on hydrogenic ions, C P Ballance, N R Badnell, and E S Smyth, J. Phys. B 36, 3707 (Sept 2003). Electron-impact excitation of Beryllium and its ions, C P Ballance, D C Griffin, J P Colgan, S D Loch, and M S Pindzola, Phys. Rev. A 68, 062705 (Dec 2003). Time-dependent studies of single and multiple photoionization of H2+, J P Colgan, M S Pindzola, and F J Robicheaux, Phys. Rev. A 68, 063413 (Dec 2003). Asymmetry in the strong field ionization of Rydberg atoms by few-cycle pulses, A Gurtler, F J Robicheaux, W J van der Zande, and L D Noordam, Phys. Rev. Lett. 92, 033002 (March 2004). Double photoionization of helium at high photon energies, J P Colgan and M S Pindzola, J. Phys. B 37, 1153 (March 2004). Carrier phase dependence in the ionization of Rydberg atoms by short radio-frequency pulses: a model system for high harmonic generation, A Gurtler, F J Robicheaux, M J J Vrakking, W J van der Zande, and L D Noordam, Phys. Rev. Lett. 92, 063901 (June 2004). Electron-impact excitation of neon: a pseudo-state convergence study, C P Ballance and D C Griffin, J. Phys. B 37, 2943 (July 2004). 
PI Joel Primack Generating Hot Gas in Simulations of Disk-Galaxy Major Mergers, by Cox, T.J., Primack, J., Jonsson, P., Somerville, R., Astrophysical Journal, 607 (2004), L87-L90 1.2 The Dark Side of the Halo Occupation Distribution by Kravstov, A., Berlind, A., Wechsler, R. H., Klypin, A., Gottloeber, S., Allgood, B., Primack, J., Astrophysical Journal, 609 (2004), 35 1.3 Modeling Galaxy-Mass Correlations in Dissipationless Simulations by Tasitsiomi, A., Kravtsov, A., Wechsler, R. H., Primack, J., Astrophysical Journal, in press 1.4 Density profiles of LCDM clusters by Tasitsiomi, A., Kravtsov, A., Gottloeber, S., Klypin, A., Astrophysical Journal, 607 (2004), 125-139 1.5 Formation of globular clusters in hierarchical cosmology, Kravtsov, A., Gnedin, O., Astrophysical Journal, submitted (astro-ph/0305199) 1.6 Secular bar formation in galaxies with a significant amount of dark matter, Valenzuela, O, Klypin, A, Monthly Notices of the Royal Astronomical Society, 345 (2003), 406 PI Abhay Ram J.C. Wright, P.T. Bonoli, M. Brambilla et al., "Full-wave simulations of fast wave mode conversion and lower hybrid wave propagation in tokamaks," Physics of Plasmas, vol. 11, pg. 2473 (2004). J. C. Wright, P. T. Bonoli, E. D'Azevedo, and M. Brambilla, "Ultrahigh resolution simulations of mode converted Ion cyclotron waves and lower hybrid waves," 18th International Conference on Numerical Simulation of Plasmas September 7-10, 2003, Falmouth, Massachusetts, USA, to be published in Computer Phys. Comm. (2004). E. Nelson-Melby, M. Porkolab, P.T. Bonoli, Y. Lin, A. Mazurenko, and S.J. Wukitch, "Experimental observations of mode converted ion cyclotron waves in a tokamak plasma by phase contrast imaging," Physical Review Letters, vol. 90, pg. 155004 (2003). D. R. Ernst, P. T. Bonoli, P. J. Catto, W. Dorland, C. L. Fiore, R. S. Granetz, M. Greenwald, A. E. Hubbard, M. Porkolab, M. H. Redi, J. E. Rice, K. 
Zhurovich, and the Alcator C-Mod Group, "Role of trapped electron mode turbulence in internal transport barrier control in the Alcator C-Mod Tokamak," Phys. Plasmas, vol. 11, pg. 2637 (2004); also APS-DPP invited talk UI1.5, Bull. Am. Phys. Soc., vol. 48, pg. 332 (2003). J. Kesner, D.T. Garnier, A. Hansen, M. Mauel, L. Bromberg, "D-D Fusion in a Levitated Dipole," Nuclear Fusion, vol. 44, pg. 193 (2004). PI David Randall Randall, D. A., M. Khairoutdinov, A. Arakawa, and W. Grabowski, 2003: Breaking the cloud-parameterization deadlock. Bull. Amer. Meteor. Soc., 84, 1547-1564. Khairoutdinov, M., D. A. Randall, and C. DeMott, 2004: Simulation of the atmospheric general circulation using a cloud-resolving model as a super-parameterization of physical processes. Submitted to J. Atmos. Sci. Raisanen, P., H. W. Barker, M. F. Khairoutdinov, J. Li, and D. A. Randall, 2004: Stochastic generation of subgrid-scale cloudy columns for large-scale models. Submitted to Quart. J. Roy. Meteor. Soc. Cole, J. N., H. W. Barker, D. A. Randall, M. F. Khairoutdinov, and E. Clothiaux, 2004: Interactions between Clouds and Radiation at Scales Unresolved by Global Climate Models. Submitted to Geophysical Research Letters. PI John Rehr "Dynamic screening effects in x-ray absorption spectra," A. L. Ankudinov, A. I. Nesvizhskii, and J. J. Rehr, Phys. Rev. B 67, 115120 (2003). "Mass Absorption Coefficient of Tungsten and Tantalum, 1450 eV to 2350 eV: Experiment, Theory, and Application," Z. H. Levine, S. Grantham, C. Tarrio, D. J. Paterson, I. McNulty, T. M. Levin, A. L. Ankudinov and J. J. Rehr, J. Res. Natl. Inst. Stand. Technol. 108, 1 (2003). "Time-dependent Density Functional Theory Calculations of X-ray Absorption," J. J. Rehr and A. L. Ankudinov, Int. J. Quantum Chem. 95, 487 (2003). "Solid State Effects on X-ray Absorption, Emission and Scattering Processes," J. J. Rehr and A. L. Ankudinov, Radiation Physics and Chemistry, 70, 453 (2004). 
"Failure of the Quasi-particle Picture of X-ray Absorption ?," J. J. Rehr, Foundations in Physics, 33, 1735 (2003). "Spin-dependent sum rules for x-ray absorption spectra," A. L. Ankudinov, J. J. Rehr, H. Wende, A. Scherz, K.~Baberschke, Europhys. Lett. 66, 44 (2004). "Final-state rule vs the Bethe-Salpeter equation for deep-core x-ray absorption spectra," Physica Scripta (in press, 2004). "First-principles ultraviolet and x-ray spectra over broad ranges," E. L. Shirley, J. A. Soininen, and J. J. Rehr (SPIE Proceedings, in press 2004). PI William Riley H.S. Cooley, W.J. Riley, M.S. Torn, and Y. He, (2004) Effect of harvest on regional climate and soil moisture and temperature, submitted to JGR Atmospheres. W.J. Riley (2003) Predicting the d18O value of the soil-surface CO2 flux using a high-dimension model representation technique, submitted to J. Ecological Modelling. Riley, W.J. (2003) Impact of the near-surface d18O value of soil water on the d18O value of the soil-surface CO2 flux, submitted to Geochimica et Acta. PI Tony Rollett Crystal-melt interfacial free energies and mobilities in fcc and bcc Fe D. Y. Sun, M. Asta, and J. J. Hoyt, Phys. Rev. B 69, 174103 (2004) Kinetic coefficient of Ni solid-liquid interfaces from molecular-dynamics simulations D. Y. Sun, M. Asta, and J. J. Hoyt, Phys. Rev. B 69, 024108 (2004) Crystal-melt interfacial free energies in metals: fcc versus bcc D. Y. Sun, M. Asta, J. J. Hoyt, M. I. Mendelev, and D. J. Srolovitz Phys. Rev. B 69, 020102 (2004) The anisotropic solid-liquid free energy of the Lennard-Jones system J. R. Morris and X. Song, J. Chem. Phys. 119, 3920 (2003). Developments in approaches to determining the anisotropy of the solid-liquid interfacial free energy J. R. Morris and R. E. Napolitano, Journal of Metals 56, 40 (2004). From Atoms to Dendrites J.J. Hoyt, Alain Karma, Mark Asta, and D.Y. Sun, Journal of Metals 56, 49 (2004). Ab initio molecular dynamics simulation of liquid AlxGe1-x Alloys, S. Wang, C. Z. 
Wang, F.-C. Chuang, J. R. Morris and K. M. Ho, submitted to Phys. Rev. B. Recent developments and outstanding challenges in theory and modeling of liquid metals, J. Morris, U. Dahlborg, M. Calvo-Dahlborg, C. Z. Wang, and K. M. Ho, submitted to J. Non-cryst. Solids. The melting lines of model silicon calculated from coexisting solid-liquid interfaces, S. Yoo, X. C. Zeng and J. R. Morris, J. Chem. Phys. 120, 1654 (2004). An accurate method to calculate liquid and solid free energies for embedded atom potentials, X. Song and J. R. Morris, Phys. Rev. B 67, 092203 (2003). PI Robert Ryne J. Qiang, R. D. Ryne, I. Hofmann, "Space-charge driven emittance growth in a 3D mismatched anisotropic beam," Phys. Rev. Lett. 92, 174801 (2004). K. Ohmi, M. Tawada, Y. Cai, S. Kamada, K. Oide, J. Qiang, "Study of the beam-beam limit in e+e- circular colliders," Phys. Rev. Lett. 92, 214801 (2004). D. Higdon, M.C. Kennedy, J. Cavendish, J. Cafeo, and R.D. Ryne, "Combining Field Data and Computer Simulations for Calibration and Prediction," accepted by SIAM Journal on Scientific Computing. J. Amundson, P. Spentzouris, J. Qiang, R. Ryne, "An Accelerator Modeling Tool with 3D Space Charge," PRST-AB (submitted). J. Qiang, M. Furman, and R. Ryne, "A Parallel Particle-In-Cell Model for Beam-Beam Interactions in High Energy Ring Colliders," J. Comp. Phys. (in press). J. Qiang and R. Gluckstern, "Three-Dimensional Poisson Solver for a Charged Beam with Large Aspect Ratio in a Conducting Pipe," Comp. Phys. Comm. 160, 120 (2004). J. Qiang, "Halo formation due to beam-beam interactions of beams optically mismatched at injection," PRST-AB, vol. 7, 031001 (2004). J. Qiang, M. Furman, and R. Ryne, "Parallel Particle-In-Cell Simulation of Colliding Beams in High Energy Accelerators," Proceedings of Supercomputing 2003 (refereed), Phoenix, AZ (2003). PI Henry Schaefer R. D. DeKock, M. J. McGuire, P. Piecuch, W. D. Allen, H. F. Schaefer, K. Kowalski, S. A. Spronk, D. B. Lawson, and L.
Laursen, "The Electronic Structure and Vibrational Spectrum of trans-HNOO", J. Phys. Chem. A 108, 2893 (2004) M. Schuurman, S. Muir, W. D. Allen, and H. F. Schaefer, "Toward Subchemical Accuracy in Computational Thermochemistry: Focal Point Analysis of the Heat of Formation of NCO and [H,N,C,O] Isomers", J. Chem. Phys. (in press) Y. Yamaguchi and H. F. Schaefer, "The Diazocarbene (CNN) Molecule: Characterization of the Triplet Sigma Minus and Triplet Pi Electronic States", J. Chem. Phys. (in press) S. E. Wheeler, W. D. Allen, and H. F. Schaefer, "Thermodynamics of Disputed Soot Formation Intermediates C4H3 and C4H5", J. Chem. Phys. 121 (in press). L. D. Speakman, B. N. Papas, H. L. Woodcock, and H. F. Schaefer, "A Reinterpretation of Microwave and Infrared Spectroscopic Studies of Benzaldehyde", J. Chem. Phys. 120, 4247 (2004) K. W. Sattelmeyer, Y. Yamaguchi, and H. F. Schaefer, "Energetics of the Low-Lying Isomers of HCCO", Chem. Phys. Lett. 383, 266 (2004) N. R. Brinkmann, G. S. Tschumper, G. Yan, and H. F. Schaefer, "An Alternate Mechanism for the Dimerization of Formic Acid", J. Chem. Phys. 107, 10208 (2003) K. W. Sattelmeyer, H. F. Schaefer, and J. F. Stanton, "Use of 2h and 3h-p- like Coupled Cluster Tamm-Dancoff Approaches for the Equilibrium Properties of Ozone", Chem. Phys. Lett. 378, 42 (2003) 9. B. N. Papas, S. Wang, N. DeYonker, H. L. Woodcock, and H. F. Schaefer, "The Naphthalenyl, Anthracenyl, Tetracenyl, and Pentacenyl Radicals, and their Anions", J. Phys. Chem. A 107, 6311 (2003) PI Rocco Schiavilla PARITY-VIOLATING INTERACTIONS AND CURRENTS IN THE DEUTERON, R. Schiavilla, J. Carlson, and M. Paris, Phys. Rev C67, 032501R (2003) PARITY VIOLATING ELECTRON DEUTERON SCATTERING AND THE PROTON'S NEUTRAL WEAK AXIAL VECTOR FORM FACTOR, T.M.Ito et al., Phys. Rev.Lett. 92, 102003 (2004) MODERN THEORIES OF LOW-ENERGY ASTROPHYSICAL REACTIONS, L.E. Marcucci, K.M. Nollett, R. Schiavilla, and R.B. Wiringa, Nucl. Phys. 
A, in press (2004). Parity-Violating Interaction Effects in the np Systems, R. Schiavilla, J. Carlson, and M. Paris, submitted to Phys. Rev. C (2004). PI Dalton Schnack C. R. Sovinec, A. H. Glasser, T. A. Gianakon, D. C. Barnes, R. A. Nebel, S. E. Kruger, D. D. Schnack, S. J. Plimpton, A. Tarditi, M. S. Chu, and the NIMROD Team, "Nonlinear magnetohydrodynamics simulation using high-order finite elements," Journal of Computational Physics 195 (2004) 355. E. D. Held, J. D. Callen, C. C. Hegna, C. R. Sovinec, T. A. Gianakon, and S. E. Kruger, "Nonlocal Closures for Plasma Fluid Simulations," Physics of Plasmas, 11, 2419 (2004). S. E. Kruger, D. D. Schnack, C. R. Sovinec, E.D. Held, "Free-boundary simulations of DIII-D plasmas with the NIMROD code," to be published in Computer Physics Communications. C.C. Kim, S.E. Parker, and C.R. Sovinec, "Hybrid kinetic-MHD simulations in general geometry," to be published in Computer Physics Communications. PI Niklas Schneider Schneider, N. and B. D. Cornuelle, 2004: The forcing of the Pacific Decadal Oscillation. J. Climate, submitted. PI David Schultz Observation of Trielectronic Recombination in Be-like Cl, M Schnell, G Gwinner, N R Badnell, M E Bannister, S Bohm, J P Colgan, S Kieslich, S D Loch, D M Mitnik, A Muller, M S Pindzola, S Schippers, D Schwalm, W Shi, A Wolf, and S G Zhou, Phys. Rev. Lett. 91, 043001 (July 2003). Dielectronic recombination data for dynamic finite-density plasmas: I. Goals and methodology, N R Badnell, M G O'Mullane, H P Summers, Z Altun, M A Bautista, J P Colgan, T W Gorczyca, D M Mitnik, M S Pindzola, and O Zatsarinny, Astronomy and Astrophysics 406, 1151 (August 2003). Three-body, diatomic association in cold hydrogen plasmas, P S Krstic, R K Janev, and D R Schultz, J. Phys. B 36, L249 (August 2003). Dielectronic recombination data for dynamic finite-density plasmas: II.
The Oxygen isoelectronic sequence, O Zatsarinny, T W Gorczyca, K T Korista, N R Badnell, and D W Savin, Astronomy and Astrophysics 412, 587 (Dec 2003). Dielectronic recombination data for dynamic finite-density plasmas: III. The Beryllium isoelectronic sequence, J P Colgan, M S Pindzola, A D Whiteford, and N R Badnell, Astronomy and Astrophysics 412, 597 (Dec 2003). Computational atomic physics for plasma edge modeling, D R Schultz, P S Krstic, T Minami, M S Pindzola, F J Robicheaux, J P Colgan, S D Loch, D C Griffin, C P Ballance, N R Badnell, and H P Summers, Contrib. Plasma Phys. 44, 247 (April 2004). Dielectronic recombination data for dynamic finite-density plasmas: IV. The Carbon isoelectronic sequence, O Zatsarinny, T W Gorczyca, K T Korista, N R Badnell, and D W Savin, Astronomy and Astrophysics 417, 1173 (April 2004). Dielectronic recombination data for dynamic finite-density plasmas: V. The Lithium isoelectronic sequence, J P Colgan, M S Pindzola, and N R Badnell, Astronomy and Astrophysics 417, 1183 (April 2004). A collisional-radiative study of Li plasmas, S D Loch, C J Fontes, J P Colgan, M S Pindzola, C P Ballance, D C Griffin, M G O'Mullane, and H P Summers, Phys. Rev. E 69, 066405 (June 2004). Dielectronic recombination data for dynamic finite-density plasmas: VI. The Boron isoelectronic sequence, Z Altun, A Yumak, N R Badnell, J P Colgan, and M S Pindzola, Astronomy and Astrophysics 420, 775 (June 2004). PI Stephen Schwartz Benkovitz, C. M., Schwartz, S. E., Jensen, M. P., Miller, M. A., Easter, R. C., and Bates, T. S. Modeling atmospheric sulfur over the Northern Hemisphere during the ACE-2 experimental period. J. Geophys. Res., in press (2004). Benkovitz C.M., Schwartz S. E., and Kim B.-G. Evaluation of a Chemical Transport Model for Sulfate using ACE-2 Observations and Attribution of Sulfate Mixing Ratios to Source Regions and Formation Processes. Geophys. Res. Lett. 30, 1641, doi:10.1029/2003GL016942, 2003. Liu Y, Daum P. H. and McGraw R.
An analytic expression for predicting the critical radius in the autoconversion parameterization. Geophys. Res. Lett. 31, L06121, doi:10.1029/2003GL019117 (2004). Schwartz S. E., Uncertainty requirements in radiative forcing of climate change. J. Air Waste Management Assoc., accepted, 2004. Schwartz S. E. Aerosols, Clouds, and Climate Change, In Nucleation and Atmospheric Aerosols 2004, M. Kasahara and M. Kulmala, Eds., Kyoto Univ. Press (2004); ISBN 4 87698 635 5; pp. 323-338. Yoon C. and McGraw R. Representation of generally-mixed multivariate aerosols by the quadrature method of moments: I. Statistical foundation. J. Aerosol Sci. 35, 561-576 (2004). Yoon C. and McGraw R. Representation of generally-mixed multivariate aerosols by the quadrature method of moments: II. Aerosol dynamics. J. Aerosol Sci. 35, 577-598 (2004). Anderson T. L., Charlson R. J., Schwartz S. E., Knutti R., Boucher O., Rodhe H., and Heintzenberg J. Climate Forcing by Aerosols--A Hazy Picture. Science 300, 1103-1104 (2003); Discussion reply 302, 1680-1681 (2003). Buseck P. and Schwartz S. E. Tropospheric Aerosols. In Treatise on Geochemistry, H. D. Holland and K. K. Turekian, Exec. eds.; Vol. 4, The Atmosphere, R. F. Keeling, Ed., Elsevier, London (2003); ISBN: 0-08-044339-7; pp. 91-142. Yu S., Kasibhatla P. S., Wright D. L., Schwartz S. E., McGraw R. and Deng A. J. Moment-based simulation of microphysical properties of sulfate aerosols in the eastern United States: Model description, evaluation and regional analysis. J. Geophys. Res. 108, 4353, doi:10.1029/2002JD002890, 2003. PI Edward Seidel P. Diener, 2003, Class. Quantum Grav., 20(22), 4901-4917, A New General Purpose Event Horizon Finder for 3D Numerical Spacetimes. J. Thornburg, 2003, Class. Quantum Grav., 21(2), 743-766, A Fast Apparent-Horizon Finder for 3-Dimensional Cartesian Grids in Numerical Relativity. L. Baiotti, I. Hawke, P. J. Montero, F. Loeffler, L. Rezzolla, N. Stergioulas, J. A. Font, E. Seidel, 2004, Submitted to Phys.
Rev. D, Three-dimensional relativistic simulations of rotating neutron star collapse to a Kerr black hole. PI Junko Shigemitsu "Heavy Light Mesons with Staggered Light Quarks," M. Wingate, J. Shigemitsu, C. Davies, P. Lepage, H. Trottier, Phys. Rev. D 67:054505 (2003). "High-Precision Lattice QCD Confronts Experiment," C.T.H. Davies, ..., A. Gray, ..., J. Shigemitsu, M. Wingate et al., Phys. Rev. Lett. 92:022001 (2004). "One-loop Matching of the Heavy-light A and V currents with NRQCD Heavy and Improved Naive Light Quarks," E. Gulez, J. Shigemitsu, M. Wingate, Phys. Rev. D69:074501 (2004). "The B and D Decay Constants in 3 Flavor Lattice QCD," M. Wingate, C.T.H. Davies, A. Gray, P. Lepage, J. Shigemitsu, Phys. Rev. Lett. 92:162001 (2004). PI Donald Sinclair J.B. Kogut & D.K. Sinclair, The finite temperature transition for 2-flavor lattice QCD at finite isospin density, hep-lat/0407027. S.J. Hands, et al., Non-compact QED(3) with N(f) = 1 and N(f) = 4, hep-lat/0404013. J.B. Kogut, et al., The pseudo-Goldstone spectrum of two color QCD at finite density, hep-lat/0305003, Phys. Rev. D68:054507, 2003. Simon Hands, et al., Fermi surface phenomena in the (2+1)-d four Fermi model, hep-lat/0302021, Phys. Rev. D68:016005, 2003. D. Toublan & J.B. Kogut, Isospin chemical potential and the QCD phase diagram at nonzero temperature and baryon chemical potential, hep-ph/0301183, Phys. Lett. B564:212-216, 2003. John B. Kogut & Costas G. Strouthos, The phase diagram of compact QED coupled to a four Fermi interaction, hep-lat/0211024, Phys. Rev. D67:034504, 2003. PI Eric Skyllingstad Smith, C., and E. D. Skyllingstad, 2004, Numerical simulation of katabatic flow with changing slope angle. Mon. Wea. Rev., submitted. PI George Smoot SNAP Collaboration, G. Aldering et al., "Supernova / Acceleration Probe: A Satellite Experiment to Study the Nature of the Dark Energy," astro-ph/0405232. E. Jeong & G.F. Smoot, "Search for Cosmic Strings in CMB Anisotropies," astro-ph/0406432.
PI Philip Snyder "Smoothness of Turbulent Transport Across a Minimum-q Surface," J. Candy, R.E. Waltz and M.N. Rosenbluth, Phys. Plasmas 11, 1879 (2004). "The Local Limit of Global Gyrokinetic Simulations," J. Candy, R.E. Waltz and W. Dorland, Phys. Plasmas 11, L25 (2004). "Effects of Electromagnetic Turbulence in the Neoclassical Ohm's Law," F.L. Hinton, R.E. Waltz and J. Candy, Phys. Plasmas 11, 2433 (2004). "An Eulerian Gyrokinetic Maxwell Solver," J. Candy and R.E. Waltz, J. Comput. Phys., 186, 545 (2003). "Anomalous Transport Scaling in the DIII-D Tokamak Matched by Supercomputer Simulation," J. Candy and R.E. Waltz, to be published in Phys. Rev. Lett. (2003). "Burning Plasma Projections Using Driftwave Transport Models and Scaling for the H-mode Pedestal," J.E. Kinsey, G. Bateman, T. Onjun, A.H. Kritz, A. Pankin, G.M. Staebler, and R.E. Waltz, Nucl. Fusion 43, 1845 (2003). "A Mechanism for Tearing Onset Near Ideal Stability Boundaries," D.P. Brennan, R.J. La Haye, A.D. Turnbull, et al., Phys. Plasmas, 10, 1643 (2003). "Burning Plasma Confinement Projections and Renormalization of the GLF23 Drift-Wave Transport Model," Jonathan E. Kinsey, Gary M. Staebler, and Ronald E. Waltz, Fusion Science and Technology, 44, 763 (2003). "ELMs and Constraints on the H-Mode Pedestal: A Model Based on Peeling-Ballooning Modes," P.B. Snyder, H.R. Wilson, J.R. Ferron et al., Nucl. Fusion 44, 320 (2004). "Characterization of Peeling-Ballooning Constraints on the Pedestal in Tokamaks," P.B. Snyder, H.R. Wilson, T.H. Osborne, and A.W. Leonard, Plasma Phys. Control. Fusion 46, A131 (2004). PI Carl Sovinec C. R. Sovinec, A. H. Glasser, T. A. Gianakon, D. C. Barnes, R. A. Nebel, S. E. Kruger, D. D. Schnack, S. J. Plimpton, A. Tarditi, M. Chu, and the NIMROD Team, Nonlinear Magnetohydrodynamics Simulation using High-Order Finite Elements, Journal of Computational Physics 195, 355 (2004). C. C. Kim, C. R. Sovinec, S. E.
Parker, and the NIMROD Team, Hybrid Kinetic-MHD Simulations in General Geometry, Computer Physics Communications, in press. R. H. Cohen, H. L. Berk, B. I. Cohen, T. K. Fowler, A. H. Glasser, E. B. Hooper, L. L. LoDestro, E. C. Morse, L. D. Pearlstein, T. D. Rognlien, D. D. Ryutov, C. R. Sovinec, and S. Woodruff, Theoretical Investigation of Field-Line Quality in a Driven Spheromak, Nuclear Fusion 43, 1220 (2003). P. Martin, L. Marrelli, G. Spizzo, P. Franz, P. Piovesan, I. Predebon, T. Bolzonella, S. Cappello, A. Cravotta, D. F. Escande, L. Frassinetti, S. Ortolani, R. Paccagnella, D. Terranova, B. E. Chapman, D. Craig, S. C. Prager, J. S. Sarff, P. Brunsell, J.A. Malmberg, J. Drake, Y. Yagi, H. Koguchi, Y. Hirano, R. B. White, C. Sovinec, C. Xiao, R. A. Nebel, D. D. Schnack, and the RFX, MST, EXTRAP T2R, and TPE-RX teams, Overview of Quasi Single Helicity Experiments in Reversed Field Pinches, Nuclear Fusion 43, 1855 (2003). E. D. Held, J. D. Callen, C. C. Hegna, C. R. Sovinec, T. A. Gianakon, and S. E. Kruger, Nonlocal Closures for Plasma Fluid Simulations, Physics of Plasmas 11, 2419 (2004). S. E. Kruger, C. R. Sovinec, D. D. Schnack, and E. D. Held, Free Boundary Simulations of DIII-D Plasmas with the NIMROD Code, accepted for publication in Computer Physics Communications. PI Frank Spera Energy-constrained open-system magmatic processes IV: Geochemical, thermal and mass consequences of energy-constrained recharge, assimilation and fractional crystallization (EC-RAFC), Geochem. Geophys. Geosyst. 4:8002, 2003 (W. Bohrson & F. Spera) Energy-constrained open-system magmatic processes 3. Energy-constrained recharge, assimilation, and fractional crystallization (EC-RAFC), Geochem. Geophys. Geosyst. 3:8001, 2002 (F. Spera & W. Bohrson) Transition to chaos and flow dynamics of thermochemical porous medium convection, Transp. Porous Media 50(1-2):179-195, 2003 (S. Schoofs & F.
Spera) Shear viscosity of rhyolite-vapor emulsions at magmatic temperatures by concentric cylinder rheometry, J. Volcanol. Geotherm. Res. 113(1-2):243-258 (D. Stein & F. Spera) Open-System Magma Chamber Evolution: An Energy-Constrained Geochemical Model Incorporating the Effects of Concurrent Eruption, Recharge, Variable Assimilation and Fractional Crystallization (EC-E'RAFC), Jour. of Petrology, in press 2004 (F. Spera & W. Bohrson) PI Don Spong D. A. Spong, D.J. Strickler, S.P. Hirshman, et al., QPS Transport Physics Flexibility Using Variable Coil Currents, Fusion Science and Technology 46, 215 (July 2004). D.J. Strickler, S.P. Hirshman, D.A. Spong, M.J. Cole, J.F. Lyon, B.E. Nelson, D.E. Williamson, and A.S. Ware, Development of a Robust Quasi-Poloidal Compact Stellarator, Fusion Science and Technology 45, 15 (January 2004). Magnetic Diagnostic Responses for Compact Stellarators, S. P. Hirshman, E. A. Lazarus, J. D. Hanson, S. F. Knowlton, L. L. Lao, Phys. Plasmas 11, 595 (2004). D. A. Spong, R. Sanchez, A. Weller, "Shear Alfven Continua in Stellarators," Phys. of Plasmas, Vol. 10, pg. 3217, August 2003. J. F. Lyon, P. R. Goncharov, S. Murakami, T. Ozaki, D. E. Greenwood, D. A. Spong, S. Sudo, and LHD Groups I/II, "Spatially Resolved Measurements of Energetic Neutral Particle Distributions in the Large Helical Device," Review of Scientific Instruments, Vol. 74 (2003) pg. 1873. Nonlinear MHD analysis for LHD plasmas, K. Ichiguchi, N. Nakajima, M. Wakatani, B. A. Carreras, and V. E. Lynch, Nuclear Fusion 43, 1101 (2003). K. Ichiguchi, N. Nakajima, B. A. Carreras, Nonlinear Analysis for Stabilization of Interchange Mode in LHD Plasmas, Fusion Science & Technology, Volume 46, Number 1 (July 2004), Pages 34-43. Effects of Self-consistent Flows on Island Generation in Interchange Mode, K. Ichiguchi and B. A. Carreras, submitted to J. Plasma Fusion Res.
Quiet-time statistics of electrostatic turbulent fluxes from the JET tokamak and the W7-AS and TJ-II stellarators, R. Sanchez, B. P. van Milligen, D. E. Newman, and B. A. Carreras, Phys. Rev. Lett., 90, 185005-1 (2003). PI Garrison Sposito Ab initio computational crystallography of 2:1 clay minerals. Keith Refson, Sung-Ho Park and Garrison Sposito. Journal of Physical Chemistry B 107:13376-13383 (2004). PI Phillip Sprangle D. F. Gordon, R. F. Hubbard, J. H. Cooley, B. Hafizi, and P. Sprangle, "Quasi-monoenergetic electrons from unphased injection into channel guided laser wakefield accelerators," Phys. Rev. E, submitted for publication. R. F. Hubbard, D. F. Gordon, J. H. Cooley, B. Hafizi, T. G. Jones, D. Kaganovich, P. Sprangle, A. Ting, A. Zigler, and J. Dexter, "Trapping and Acceleration of Nonideal Injected Electron Bunches in Laser Wakefield Accelerators," IEEE Trans. Plasma Sci., submitted for publication. S. Eisenmann, B. Greenberg, T. Palhan, A. Zigler, D. Kaganovich, R. F. Hubbard, J. Cooley, A. Ting, D. F. Gordon, P. Sprangle, M. Franenkel, S. Maman, D. Fisher, and Z. Henis, "All Optical electron injector using an intense ultrashort pulse laser and a solid wire target," Phys. Plasmas, submitted for publication. A. Ting, D. Kaganovich, D. F. Gordon, R.F. Hubbard and P. Sprangle, "Generation and measurements of high energy injection electrons from the high density laser ionization and ponderomotive acceleration," Phys. Plasmas, submitted for publication. D. Kaganovich, A. Ting, D. Gordon, T. G. Jones, R. Hubbard, and P. Sprangle, "Generation of high energy electrons in a double gas jet and laser wakefield acceleration," IEEE Trans. Plasma Sci., submitted for publication. PI Malcolm Stocks "Spin waves in paramagnetic BCC iron: spin dynamics simulations," Xiuping Tao, D. P. Landau, T. C. Schulthess, G. Malcolm Stocks, Phys. Rev. Letters (submitted). "Magnetic structure of Ni-rich NiTa and permalloy-Ta alloys," Nassrin Y. Moghadam, G. Malcolm Stocks, Phys.
Rev. B (accepted) (2004). "Ab-initio spin dynamics applied to nanoparticles: canted magnetism of a finite Co chain along a Pt(111) surface step edge," B. Ujfalussy, B. Lazarovits, L. Szunyogh, G. M. Stocks and P. Weinberger, Phys. Rev. B Rapid Communication (accepted) (2004). "Ab initio study of canted magnetism of finite metallic chains at surfaces," B. Lazarovits, B. Ujfalussy, L. Szunyogh, G. M. Stocks and P. Weinberger, J. Phys. Cond. Matt. (accepted) (2004). "Magnetic properties of quantum corrals from first principles calculations," B. Lazarovits, B. Ujfalussy, L. Szunyogh, G. M. Stocks and P. Weinberger, J. Phys. Cond. Matt. (accepted) (2004). "Electronic structure calculations on alloys using the Polymorphous Coherent-Potential Approximation," S. Pella, J. S. Faulkner, G. Malcolm Stocks and B. Ujfalussy, Phys. Rev. B (accepted) (2004). "Multi-teraflops studies of the magnetic structure of FeMn alloys and interfaces," A. Canning, B. Ujfalussy, T. Schulthess, X.-G. Zhang et al., Parallel and Distributed Scientific and Engineering Computing: Practice and Experience, Editors: Y. Pan and L.T. Yang, Published by Nova Science, 2004. "First Principles Calculations of the Magnetic Structure in FeMn/Co Bilayers," Ujfalussy B, Schulthess T C and Stocks G M, Computer Simulation Studies in Condensed-Matter Physics XV (Eds. Landau D P, Lewis S P, Schuttler H B, Springer Proceedings in Physics), vol 90 (2003). "Parallel multi-teraflops studies of the magnetic structure of FeMn alloys," A. Canning, B. Ujfalussy, T. Schulthess, X.-G. Zhang et al., Proceedings of the IPDPS03 Conference, Nice, France, 2003. "Phase transitions in ferro-antiferromagnetic bilayers with a stepped interface," D.P. Landau, S.-H. Tsai, and T.C. Schulthess, J. Magn. Magn. Mater. 272-276, E817 (2004). "Thin ferromagnetic-antiferromagnetic bilayers: dependence of magnetic ordering on the interface," D.P. Landau, S.-H. Tsai, and T.C. Schulthess, 10 pages, in press.
[proceedings for the Third International Conference on Computational Modeling and Simulation of Materials, publisher: Techna Group] PI Robert Street Chen, Y., Ludwig, F. L., & Street, R. L. (2004) Stably-stratified flows near a notched, transverse ridge across the Salt Lake Valley, Journal of Applied Meteorology, AMS, in press. Ludwig, F. L., Horel, J., and Whiteman, C. D. (2004) Using EOF analysis to identify important surface wind patterns in mountain valleys, Journal of Applied Meteorology, AMS, 43, pp. 969-983. PI Erich Strohmaier Hongzhang Shan, Erich Strohmaier, and Leonid Oliker; Optimizing Performance of Superscalar Codes for a Single Cray X1 MSP Processor, Proceedings of the 2004 Cray Users Group Meeting, Knoxville, TN. Hongzhang Shan and Erich Strohmaier; Performance Characteristics of the Cray X1 and Their Implications for Application Performance Tuning, Proceedings of the 2004 International Conference on Supercomputing, 175-183, 2004. Erich Strohmaier and Hongzhang Shan; Architecture independent performance characterization and benchmarking for scientific applications, MASCOTS 2004 (to appear). PI Maxim Sukharev Maxim Sukharev and Tamar Seideman, "Optimal Control Approach to Suppression of Radiationless Transitions," Physical Review Letters (expected to be published in the August 27, 2004 issue). Maxim Sukharev and Tamar Seideman, "Optical Control of Internal Conversion of Polyatomic Molecules," Journal of Chemical Physics (submitted). PI Xianzhu Tang X.Z. Tang and A.H. Boozer, Physics of Plasmas, 10, 3661 (2003). X.Z. Tang and A.H. Boozer, Physics of Plasmas, 11, 171 (2004). X.Z. Tang and A.H. Boozer, Physics of Plasmas, 11, 2679 (2004). Z. Wang and X.Z. Tang, Physics of Plasmas, 11, 3502 (2004). PI John Taylor James C. Orr, Bala Govindasamy, Phillip Duffy, John A. Taylor and Roger Dargaville (2004) Large Unforced Interannual Variability in Background Atmospheric CO2. Science (submitted). Christopher J. Anderson, R. W. Arritt, W. J. Gutowski, Jr., E. S.
Takle, Z. Pan, J. A. Taylor, Mike Dvorak, J. O. Roads and Ana Nunes (2003) Intercomparison of Interannual Variability of North American Monsoon in Regional Climate Model Simulations. Proceedings of the American Meteorological Society, to appear. Raymond W. Arritt, Christopher J. Anderson, Eugene S. Takle, Zaitao Pan, William J. Gutowski, Jr., Francis O. Otieno, Renato da Silva, Daniel Caya, Jens H. Christensen, Daniel Lüthi, Miguel A. Gaertner, Clemente Gallardo, Song-You Hong, Colin Jones, H.-M. H. Juang, J. J. Katzfey, William M. Lapenta, René Laprise, Jay W. Larson, Glen E. Liston, John L. McGregor, Roger A. Pielke (2003) Ensemble Methods for Seasonal Limited Area Forecasts. Proceedings of the American Meteorological Society, to appear. Christopher J. Anderson, Raymond W. Arritt, Eugene S. Takle, Zaitao Pan, William J. Gutowski, Jr., Francis O. Otieno, Renato da Silva, Daniel Caya, Jens H. Christensen, Daniel Lüthi, Miguel A. Gaertner, Clemente Gallardo, Filippo Giorgi, Song-You Hong, Colin Jones, H.-M. H. Juang, J. J. Katzfey, William M. Lapenta, René Laprise, Jay W. Larson, Glen E. Liston, John L. McGregor, Roger A. Pielke, Sr., John O. Roads, John A. Taylor, Hydrological Processes in Regional Climate Model Simulations of the Central United States Flood of June-July 1993, Journal of Hydrometeorology (2003). James C. Orr, Bala Govindasamy, Phillip Duffy and John A. Taylor (2002) Rectification changes with model resolution and from year to year, Proceedings American Geophysical Union Conference, Paper No. GC72B-0233, San Francisco, 6-10 December, 2002. PI Owen Toon Fridlind A. M., A. S. Ackerman, E. J. Jensen, et al., Evidence for the predominance of mid-tropospheric aerosols as subtropical anvil cloud nuclei, Science 304, 718-722, 2004. Xueref, I., C. Gerbig, A. Fridlind, J. C. Lin, S. C. Wofsy, B. C. Daube, A. S.
Ackerman, et al., Combining a receptor-oriented framework for tracer distributions with a cloud-resolving model to study transport in deep convective clouds: Application to the NASA CRYSTAL-FACE campaign, Geophysical Research Letters 31 (14), 2004. Ackerman, A. S., M. P. Kirkpatrick, D. E. Stevens, and O. B. Toon, The impact of humidity above stratiform clouds on indirect aerosol climate forcing, provisionally accepted for publication by Nature, 2004. PI Doug Toussaint C. Aubin et al., Semileptonic decays of D mesons in three-flavor lattice QCD, hep-ph/0408306, submitted to Phys. Rev. Lett. C. Aubin et al., Light pseudoscalar decay constants, quark masses, and low energy constants from three-flavor lattice QCD, hep-lat/0407028, submitted to Phys. Rev. D. C. Aubin et al., First determination of the strange and light quark masses from full lattice QCD, hep-lat/0405022, to be published in Phys. Rev. D. C. Aubin et al., Light hadrons with improved staggered quarks: approaching the continuum limit, hep-lat/0402030, to be published in Phys. Rev. D. C. Bernard et al., Topological susceptibility with the improved Asqtad action, Phys. Rev. D68 (2003) 114501. T. Burch and D. Toussaint, Hybrid configuration content of heavy S-wave mesons, Phys. Rev. D68 (2003) 094504. C.T.H. Davies et al., High-Precision Lattice QCD Confronts Experiment, Phys. Rev. Lett. 92 (2004) 022001. C. Bernard et al., Lattice calculation of $1^{-+}$ hybrid mesons with improved Kogut-Susskind fermions, Phys. Rev. D68 (2003) 074505. PI Dave Turner Mapping Algorithms to the Network Topology in a Portable Manner, Dave Turner, Bin Tong, and Masha Sosonkina, Proceedings of the PARA'04 Workshop on the State-of-the-Art in Scientific Computing (June 2004). Efficient Message-Passing within SMP Systems, Xuehua Chen and Dave Turner, Recent Advances in Parallel Virtual Machine and Message Passing Interface, 10th European PVM/MPI Conference, Venice, Italy, pg 286-293 (October 2003).
Integrating New Capabilities into NetPIPE, Dave Turner, Adam Oline, Xuehua Chen, and Troy Benjegerdes, Recent Advances in Parallel Virtual Machine and Message Passing Interface, 10th European PVM/MPI Conference, Venice, Italy, pg 37-44 (October 2003). PI George Vahala Lattice Boltzmann and Quantum Lattice Gas Representations of One-Dimensional Magnetohydrodynamic Turbulence; L. Vahala, G. Vahala and J. Yepez, Phys. Lett. A306, 227-234 (2003). Non-Uniform Grid Lattice Boltzmann Simulations of 1-D Dissipative Magnetohydrodynamics; A. I. D. Macnab, G. Vahala and L. Vahala, Prog. in Computational Fluid Dynamics (invited paper, accepted for publication, 2004). Inelastic Vector Soliton Collisions: A Quantum Lattice Gas Representation; G. Vahala, L. Vahala and J. Yepez, Phil. Trans. Royal Soc. London A362, 1677-1690 (2004). Quantum Lattice Gas Representation of Some Classical Solitons; G. Vahala, J. Yepez and L. Vahala, Phys. Lett. A310, 187-196 (2003). PI Michel Van Hove Monte Carlo simulations of segregation in Pt-Re catalyst nanoparticles, G. Wang, M.A. Van Hove, P.N. Ross and M.I. Baskes, to appear in J. Chem. Phys. Atomistic simulation of fcc Pt75Ni25 and cubo-octahedral nanoparticles, G. Wang, M.A. Van Hove, P.N. Ross and M.I. Baskes, to appear in Bull. Mat. Res. Soc. Monte Carlo simulations of segregation in Pt-Ni catalyst nanoparticles, G. Wang, M.A. Van Hove, P.N. Ross and M.I. Baskes, submitted to J. Chem. Phys. A class of trust-region methods for parallel optimization, P.D. Hough and J.C. Meza, SIAM J. of Optimization 13, 264 (2002). PI James Vary Anna C. Hayes, Petr Navratil and James P. Vary, "Neutrino-12C Scattering in the ab initio Shell Model with a Realistic Three-Body Interaction," Phys. Rev. Lett. 91, 012502 (2003). [arXiv: nucl-th/ Petr Navratil, Anna C. Hayes, James P. Vary and W. Erich Ormand, "Ab-initio shell model with a chiral-symmetry-based three-nucleon force for the p-shell nuclei," Nucl. Phys. A (to appear). D. Chakrabarti, A. Harindranath and J.P.
Vary, "A Study of q-qbar States in Transverse Lattice QCD Using Alternative Fermion Formulations," Phys.Rev.D69:034502,2004; hep-ph/0309317 J.P. Vary, B.R. Barrett, R. Lloyd, P. Navratil, A. Nogga and W. E. Ormand, "Shell Model in a First Principles Approach," Proceedings of the VI International Conference on Radioactive Nuclear Beams, Elsevier, Amsterdam, 2003. B.R. Barrett, P. Navratil, A. Nogga and W. E. Ormand and J.P. Vary, "No-core Shell-Model Calculations in Light Nuclei with Three-Nucleon Forces, Proceedings of the VI International Conference on Radioactive Nuclear Beams," Elsevier, Amsterdam, 2003. M.A. Hasan, J.P. Vary and P. Navratil, "Hartree-Fock Approximation for the Ab-initio No-Core Shell Model," Phys. Rev. C69 (2004) 034332; nucl-th/0312008 D. Chakrabarti, A. Harindranath, L. Martinovic and J.P. Vary, "Kinks in Two-Dimensional (Phi)4 Theory," Phys.Lett.B582:196-202,2004; hep-th/0309263 H. Zhan, A. Nogga, B.R. Barrett, J.P. Vary and P. Navratil, "Extrapolation Method for the No-Core Shell Model," Phys. Rev. C 69, 034302 (2004). [nucl-th/0401047] A.M. Shirokov, A.I. Mazur, S.A. Zaytsev, J.P. Vary and T.A. Weber, "Inverse Scattering Tridiagonal NN Potentials and Few-Nucleon Systems," Proceedings of The XVIIth International Workshop on High Energy Physics and Quantum Field Theory, 2004 (to appear). A. Nogga, E. Epelbaum, P. Navr'atil, W. Gl\"ockle, H. Kamada, Ulf-G. Meissner, H. Witala, B.R. Barrett, and J.P. Vary, "Probing Chiral Interactions in Light Nuclei, Proceedings of the 17th International Conference on Few Body Problems in Physics," Nuclear Physics A737, 236(2004) B.R. Barrett, P. Navratil, A. Nogga, W.E Ormand, I. Stetcu, J.P. Vary and H. Zhan "The Ab Initio Large-Basis No-Core Shell Model" International Conference on Nuclear Physics, Large and Small: Microscopic Studies of Collective Phenomena, Hacienda Cocoyoc, Morelos, Mexico, April 2004 (to appear in the proceedings). Richard J. Lloyd and James P. 
Vary, "All-charm Tetraquarks," accepted for publication Phys. Rev. D., May 27, 2004, hep-ph/0311179 A.M. Shirokov, S.A. Zaytsev, A.I. Mazur, J.P. Vary and T.A. Weber, "Nucleon-Nucleon Interaction in the J-Matrix Inverse Scattering Approach and Few-Nucleon Systems," to be published in `` J-matrix method and its applications'' edited by A. D. Alhaidari, E. J. Heller, H. A. Yamani and M. S. Abdelmonem. (Nova Science Publishers, Inc) [ArXiv: nucl-th/0312029]. B.R. Barrett, P. Navratil, A. Nogga, W.E. Ormand, I. Stetcu, J.P. Vary and H. Zhan, "The Ab-Initio Large-Basis No-Core Shell Model," Proceedings of the 8th International Spring Seminar On Nuclear Physics, Paestum, Italy, World Scientific (Singapore) to appear. V.T. Kim, G.B. Pivovarov and J.P. Vary, "Phase Transition in Light-Front Phi^4(1+1)," Phys.Rev.D69:085008(2004); hep-th/0310216. PI Haobin Wang D. Egorova, M. Thoss, W. Domcke, and H. Wang, ``Modeling of Ultrafast Electron-Transfer Processes: Validity of Multi-Level Redfield Theory'', J. Chem. Phys., Vol 119, 2761 (2003). M. Thoss, W. Domcke, and H. Wang, ``Theoretical Study of Vibrational Wave-packet Dynamics in Electron-transfer Systems'', Chem. Phys., Vol. 296, 217 (2004). H. Wang and M. Thoss, ``Nonperturbative simulation of pump-probe spectra for electron transfer reactions in the condensed phase'', Chem. Phys. Lett., Vol. 389, 43 (2004). H. Wang and M. Thoss, ``Semiclassical simulation of absorption spectra for a chromophore coupled to an anharmonic bath'', Chem. Phys., Vol. 304, 121 (2004). M. Thoss, I. Kondov, and H. Wang, ``Theoretical study of ultrafast heterogeneous electron transfer reactions at dye-semiconductor interfaces'', Chem. Phys., Vol. 304, 169 (2004). M. Thoss and H. Wang, ``Semiclassical Description of Molecular Dynamics Based on Initial-value Representation Methods'', Annu. Rev. Phys. Chem., Vol. 55, 299 (2004). PI Lin-Wang Wang J. Li, L.W. Wang, "First principle study of core/shell structure quantum dots", Appl. Phys. Lett.
84, 3648(2004) D. Milliron, S. M. Hughes, Y. Cui, L. Manna, J. Li, L.W. Wang, A.P. Alivisatos, "Colloidal nanocrystal heterostructures with linear and branched topology", Nature, 430, 190(2004). H. Yu, J. Li, R.A. Loomis, P.C. Gibbons, L.W. Wang, W.E. Buhro, "Cadmium Selenide Quantum Wires and the Transition from 3D to 2D Confinement", J. Am. Chem. Soc. 125, 16168 (2003). L.W. Wang, "Effects of stacking faults on the electronic structures of quantum rods", J. Comp. Theor. Nano. (in press). J. Li, L.W. Wang, "Deformation potentials of CdSe quantum dots", Appl. Phys. Lett. (in press). S. Zh. Karazhanov, Y. Zhang, L.W. Wang, A. Mascarenhas, S. Deb, "Resonant defect states and strong lattice relaxation of oxygen vacancies in WO3", Phys. Rev. B 68, 233204 (2003). S. Zh. Karazhanov, Y. Zhang, A. Mascarenhas, S. Deb, L.W. Wang, "Oxygen vacancy in cubic WO3 studied by first principles pseudopotential calculations", Solid State Ionics, 165, 43 (2003). J. Li, L.W. Wang, "Shape effects on electronic states of nanocrystals", Nano Lett. 3, 1357 (2003). L.W. Wang, J. Li, "First principle thousand atom quantum dot calculations", Phys. Rev. B 69, 153302 (2004). J. Li, L.W. Wang, "Electronic structure of InP quantum rods: differences between wurtzite, zinc blende, and different orientations", Nano Lett. 4, 29 (2004). PI Andrew Ware "Second ballooning stability in high-beta, compact stellarators", A. S. Ware, D. Westerly, E. Barcikowski, L. A. Berry, G. Y. Fu, S. P. Hirshman, J. F. Lyon, R. Sanchez,D. A. Spong, and D. J. Strickler, Phys. Plasmas 11, 2453 (2004). PI Warren Washington Meehl, G.A., and C. Tebaldi, 2004: More intense, more frequent and longer lasting heat waves in the 21st century. Science, 305, 994-997. Dai, A., W.M. Washington, G.A. Meehl, T.W. Bettge,and W.G. Strand, 2004: The ACPI climate change simulations. Climatic Change, 62, 29-43. Dai, A., A. Hu, G.A. Meehl, W.M. Washington, and W.G. 
Strand, 2004: North Atlantic Ocean circulation changes in a millennial control run and projected future climates. J. Climate, in press. Hu, A., G. A. Meehl, and W. Han 2004: Detecting thermohaline circulation changes from ocean properties in a coupled model, Geophysical Research Letters, 31, L13204, doi:10.1029/2004GL020218. Hu, A., G. A. Meehl, W. M. Washington, and A. Dai 2004: Response of the Atlantic thermohaline circulation to increased atmospheric CO2 in a coupled model, Journal of Climate, in press. Meehl, G.A., W.M. Washington, T.M.L. Wigley, J.M. Arblaster, and A. Dai, 2003: Solar and greenhouse gas forcing and climate response in the 20th century. J. Climate, 16, 426-444. Meehl, G.A., J.M. Arblaster, and J. Loschnigg, 2003: Coupled ocean-atmosphere dynamical processes in the tropical Indian and Pacific Ocean regions and the TBO. J. Climate, 16, 2138-2158. Meehl, G.A., W.M. Washington, T.M.L. Wigley, J.M. Arblaster, and A. Dai, 2004: Mechanisms of an intensified Hadley circulation in response to solar forcing in the 20th century. In: The Hadley Circulation: Past, Present, and Future, Cambridge University Press, in press. Meehl, G.A., W.M. Washington, C. Ammann, J.M. Arblaster, T. M.L. Wigley and C. Tebaldi, 2004: Combinations of natural and anthropogenic forcings and 20th century climate. Journal of Climate, in press. Meehl, G.A., W.M. Washington, J.M. Arblaster, and A. Hu, 2004: Factors affecting climate sensitivity in global coupled models. Journal of Climate, 17, 1584-1596. PI William Weber F. Gao and W. J. Weber, Atomic-Scale Simulations of Cascade Overlap and Damage Evolution in Silicon Carbide, J. Materials Research 18 [8]: 1877-1883 (2003). L. R. Corrales, W. J. Weber, A. Chartier, C. Meis, and J.-P. Crocombette, Comment on Large Swelling and Percolation in Irradiated Zircon, J. Physics: Condensed Matter 15 [37]: 6447-6456 (2003). F. Gao and W. J.
Weber, Recovery of Close Frenkel Pairs Produced by Low Energy Recoils in SiC, Journal of Applied Physics 94 [7]: 4348-4356 (2003). R. Devanathan, L. R. Corrales, W. J. Weber, A. Chartier, and C. Meis, Molecular Dynamics Simulation of Disordered Zircon, Physical Review B 69 [6]: 064115, 1-9 (2004). W. J. Weber, F. Gao, R. Devanathan, and W. Jiang, The Efficiency of Damage Production in Silicon Carbide, Nucl. Instrum. and Methods in Physics Res. B 218: 68-73 (2004). F. Gao, M. Posselt, V. Belko, Y. Zhang, and W. J. Weber, Structures and Energetics of Defects: A Comparative Study of 3C- and 4H-SiC, Nucl. Instrum. and Methods in Physics Res. B 218: 74-79 (2004). R. Devanathan, F. Gao, and W. J. Weber, Amorphization of Silicon Carbide by Carbon Displacement, Applied Physics Letters 84 [19]: 3909-3911 (2004). F. Gao, W. J. Weber, M. Posselt, and V. Belko, Atomic Computer Simulations of Defect Migration in 3C and 4H-SiC, Materials Science Forum 457-460: 457-460 (2004). F. Gao and W. J. Weber, Mechanical Properties and Elastic Constants Due to Damage Accumulation and Amorphization in SiC, Physical Review B 69 [22]: 224108, 1-10 (2004). F. Gao, W. J. Weber, M. Posselt, and V. Belko, Atomistic Study of Intrinsic Defect Migration in 3C-SiC, Physical Review B 69 [24]: 245205, 1-5 (2004). PI Michael Weinert Giant Coster-Kronig transitions and intrinsic line shapes of the anomalous Pd M$_{4,5}$VV Auger spectrum of Pd/Ag(100) dilute surface alloys. D. A. Arena, R.A. Bartynski, R.A. Nayak, A.H. Weiss, S. L. Hulbert, and M. Weinert, Phys. Rev. Lett. 91, 176403 (2003) Structure determination of disordered organic molecules on surfaces from the Bragg spots of low-energy diffraction and total-energy calculations. T. Zheng, W. T. Tysoe, H. C. Poon, M. Weinert, and D. K. Saldin, Phys.Rev. B 69, 035401 (2004). Divacancies, impurities, and diffusion in Al. M. Alatalo and M. Weinert Phys. Rev. B (in press). 
Structure of the hydrogen stabilized MgO(111)-(1x1) polar surface: Integrated experimental and theoretical studies. V. K. Lazarov, R. A. Plass, H-C. Poon, D. K. Saldin, M. Weinert, S. A. Chambers, and M. Gajdardziska-Josifovska, Phys. Rev. B (submitted). X-ray absorption near-edge structure analysis of the chemical environment of zinc in the tribological film formed by zinc dialkyl dithiophosphate decomposition on steel. M. D. Pauli, T. S. Rufasel, J. K. Mowlen, M. Weinert, D. K. Saldin, and W. T. Tysoe, Tribology International (in press). Ultrahigh vacuum study of the requirements for formation of a chiral template. D. Stacchiola, L. Burkholder, Y. Zheng, M. Weinert, and W. T. Tysoe, Surface Science (submitted). PI Harold Weitzner H. Strauss, L.E. Sugiyama, G. Y. Fu, W. Park, J. Breslau, "Simulation of two fluid and energetic particle effects in stellarators," Nuclear Fusion (2004). H. Strauss, ``Nonlinear dynamics in the Dag Confinement Configuration," Phys. Plasmas 11, 1236 (2004). H. Strauss, ``MHD Simulations with Resistive Wall and Magnetic Separatrix," Computers in Physics (2004). Zaslavsky GM, Edelman MA, "Fractional kinetics: from pseudochaotic dynamics to Maxwell's Demon", PHYSICA D-NONLINEAR PHENOMENA 193 (1-4): 128-147 JUN 15 2004. Lyubomudrov O, Edelman M, Zaslavsky GM, "Pseudochaotic systems and their fractional kinetics", INTERNATIONAL JOURNAL OF MODERN PHYSICS B 17 (22-24): 4149-4167 Part 1, SEP 30 2003. Carreras BA, Lynch VE, Garcia L, Edelman M, Zaslavsky GM, "Topological instability along filamented invariant surfaces", CHAOS 13 (4): 1175-1187 DEC 2003. C. S. Chang, Seunghoe Ku, and H. Weitzner, "Numerical study of neoclassical plasma pedestal in a tokamak geometry," Phys. Plasmas APS Invited Issue 11, 2649 (2004) H. Weitzner and C. S. Chang, Phys. Plasmas 11, 3060 (2004) R. Maingi, C. S. Chang, et al., "Effect of gas fueling location on H-mode Access in NSTX," Plasma Phys. Cont. Fusion, accepted (2004) S. Hahn, G. Park, C. S. Chang, C. K.
Choi, "Diffusion in a two-dimensional anisotropic web map by extrinsic noise applied to the intrinsically perturbed quantity," Phys. Rev. E 69, 017202 (2004) P. Garabedian, Computational mathematics and physics of fusion reactors, Proc. Natl. Acad. Sci. USA 100 (2003) 13741-13745. P. Garabedian and M.E. Meurer, Cavitational Flow and Magnetohydrodynamics, International Journal of Computational Fluid Dynamics 18 (2004) 413-420. PI Martin White R. Yan, M. White, A. Coil, "Mock catalogs for the DEEP2 redshift survey", ApJ 607 (2004) M. White, C. Vale, "Simulations of weak gravitational lensing", Astroparticle Physics, in press. C. Vale, A. Amblard, M. White, "Cluster lensing of the CMB", New Astronomy, in press. A. Amblard, C. Vale, M. White, "Weak lensing of the CMB by large-scale structure", New Astronomy, in press. C. Vale, H. Hoekstra, L. van Waerbeke, M. White, "Large-scale systematic signals in weak lensing surveys", ApJ Lett., in press. A. Meiksin, M. White, "The effects of UV background correlations on Ly-a forest flux statistics", MNRAS 350 (2004) 1107. J. Bolton, A. Meiksin, M. White, "Radiative transfer through the intergalactic medium", MNRAS 348 (2004) L43. T. Fang, M. White, "Probing the statistics of the temperature-density relation of the IGM", ApJ 606 (2004) L9. PI James Wiley W. Horton and C. Chiu, Laser Z-pinch dipole-target experiments to simulate space physics acceleration processes, Physics of Plasmas, Vol. 11, 4, April, 2004. W. Horton, G.T. Hoang, C. Bourdelle, X. Garbet, M. Ottaviani and L. Colas, Electron transport and the critical temperature gradient, Physics of Plasmas, Vol. 11, 5, May, 2004. [DOI: 10.1063/ 1.1690761]. Manish Mithaiwala and Wendell Horton, Substorm injection electron flux, submitted to the J. of Geophys. Res. (2004). J.Q. Dong, S.M. Mahajan and W. Horton, Double tearing mode in plasmas with anomalous electron viscosity, Physics of Plasmas, 10, 3151-3159, 2003. W. Horton, B. Hu, J. Q. Dong and P. 
Zhu, Turbulent electron thermal transport in Tokamaks, New Journal of Physics, 5, 1.1-1.33 (2003). W. Horton, R. S. Weigel, D. Vassiliadis, and I. Doxas, Substorm classification with the WINDI Model, Nonlinear Processes in Geophysics, 1-9, 2003. H. Sugama, T.-H. Watanabe, and W. Horton, Comparison between kinetic and fluid simulations of slab ion temperature gradient driven turbulence, Physics of Plasmas, 10, 726-736, 2003. C. Crabtree, W. Horton, H.V. Wong and J.W. Van Dam, Bounce-averaged stability of compressional modes in geotail flux tubes, J. Geophysical Research 108 A2 1084. doi: 10.1029/2002JA009555, 2003. PI John Wilkins Impurities block the alpha to omega martensitic transformation in titanium. R.G. Hennig, D.R. Trinkle, J. Bouchet, S.G. Srinivasan, R.C. Albers, and J.W. Wilkins. Submitted to Nature Materials Complexity of Small Silicon Self-Interstitial Defects. D.A. Richie, J. Kim, S.A. Barr, K.R. A. Hazzard, R.G. Hennig, and J.W. Wilkins. Physical Review Letters 92, 45501 (2004). A new mechanism for the alpha to omega martensitic transformation in pure titanium. D.R. Trinkle, R.G. Hennig, S.G. Srinivasan, D.M. Hatch, M.D. Jones, H.T. Stokes, R.C. Albers, and J.W. Wilkins. Physical Review Letters 91, 025701 (2003). PI Andrew Williamson D. Prendergast, J.C. Grossman, A.J. Williamson, J.L. Fattebert and G. Galli, Optical properties of silicon nanoparticles in the presence of water: A first principles theoretical analysis, J. Amer. Chem. Soc. in press. A. J. Williamson, F. Reboredo and G. Galli, Chemisorption at the Nanoscale: An alternative mechanism for hydrogen storage, Applied Physics Letters, in press. A. J. Williamson, C. Bostedt, L. Pizzagalli, T. van Buuren, T. M. Willey, L. J. Terminello and G. Galli, Probing the Electronic Density of States of Germanium Nanoparticles: A Method for Determining Atomic Structure, Nano Lett. 4, 1041 (2004). A. Puzder, A.J. Williamson and G. Galli, Self-Healing of CdSe Nanoparticles: A First Principles Study, Phys. 
Rev. Lett. 92, 217401 (2004). E. Draeger, J.C. Grossman, A.J. Williamson and G. Galli, Optical Properties of Silicon Nanoclusters: The role of Synthesis, J. Chem. Phys. 120, 10807 (2004). E. Draeger, J.C. Grossman, A.J. Williamson and G. Galli, Synthesis dynamics of passivated silicon nanoclusters, Phys. Stat. Solidi B 239, 11 (2003). A. Puzder, A.J. Williamson, J.C. Grossman and G. Galli, Optical Emission of Silicon nanoclusters, J. Amer. Chem. Soc. 125, 2786 (2003). A. Puzder, A.J. Williamson, F. Reboredo and G. Galli, Structural Stability and Optical Properties of Nanomaterials with Reconstructed Surfaces, Phys. Rev. Lett. 91, 157405 (2003). PI Brian Wirth P. Jing, T. Khraishi, J.A. Young and B.D. Wirth, Dislocation Dynamics Simulations of the Effect of Irradiation-induced Helium Bubbles on the Mechanical Properties of Metals, accepted for publication, Phil. Mag. L. Zepeda-Ruiz, J. Marian and B.D. Wirth, On the Character of Self-Interstitial Dislocation Loops in Vanadium, accepted for publication, Phil. Mag. J. Marian, B.D. Wirth, G.R. Odette and J.M. Perlado, Cu Diffusion in a-Fe: Determination of Solute Diffusivities using Atomic-Scale Simulations, accepted for publication, Computational Materials B.D. Wirth, G.R. Odette, J. Marian, L. Ventelon, J.A. Young and L.A. Zepeda-Ruiz, Multiscale Modeling of Radiation Damage in Fe-based Alloys in the Fusion Environment , Journal of Nuclear Materials 329-333 (2004) 103. B.D. Wirth and E.M. Bringa, A Kinetic Monte Carlo Model for Helium Diffusion and Clustering in Fusion Environments, Physica Scripta, T108 (2004) 80. K. Morishita, R. Sugano and B.D. Wirth, MD and KMC modeling of the growth and shrinkage mechanisms of helium-vacancy clusters in Fe, J Nucl Mat 323 (2003) 243. J. Marian, B.D. Wirth, R. Scaueblin, G.R. Odette and J.M. Perlado, MD modeling of defects in Fe and their interactions, J Nucl Mat 323(2003) 181. K. Morishita, R. Sugano and B.D. 
Wirth, Thermal stability of helium-vacancy clusters and bubble formation: Multiscale modeling for fusion materials development, Fusion Science and Technology 44 (2003) 441. K. Morishita, R. Sugano, B.D. Wirth and T. Diaz de la Rubia, Thermal stability of helium-vacancy clusters in iron, Nuclear Instruments and Methods B 202 (2003) 76. J.S. Robach, I.M. Robertson, B.D. Wirth, and A. Arsenlis, In-situ transmission electron microscopy observations and molecular dynamics simulations of dislocation-defect interactions in ion- irradiated copper, Philosophical Magazine A 83 (2003) 955. PI Stan Woosley ``Direct Numerical Simulations of Type Ia Supernovae Flames II: The Rayleigh-Taylor Instability'', Bell, J. B., Day, M. S., Rendleman, C. A., Woosley, S. E. & Zingale, M. A. 2004, ApJ, 608, 883. ``Direct Numerical Simulations of Type Ia Supernovae Flames I: The Landau-Darrieus Instability'', Bell, J. B., Day, M. S., Rendleman, C. A., Woosley, S. E. & Zingale, M. A. 2004, ApJ, 606, 1029. ``Adaptive Low Mach Number Simulations of Nuclear Flame Microphysics'', Bell, J. B., Day, M. S., Rendleman, C. A., Woosley, S. E. & Zingale, M. A. 2004, J. Comput. Phys., 195, 2, 677. Glatzmaier, G.A. (2004) "Planetary and Stellar Dynamos: Challenges for Next Generation Models" in "Astrophysical and Geophysical Magnetohydrodynamics" ed. A.M. Soward, in press. ``Two-dimensional, Time-dependent, Multi-group, Multi-angle Radiation Hydrodynamics Test Simulation in the Core-Collapse Supernova Context," (with E. Livne, R. Walder, T.A. Thompson, and I. Lichtenstadt), Astrophys. J., 609, 277, 2004 (astro-ph/0312633). ``Shock Breakout in Core-Collapse Supernovae and its Neutrino Signature" (A. Burrows, T.A. Thompson and P.A. Pinto), Astrophys. J., 592, 434, 2003. ``Gravitational Waves from Axisymmetric, Rotating Stellar Core Collapse," (A. Burrows, C.D. Ott, E. Livne, and R. Walder), Astrophys. J., 600, 834, 2004. ``Viscosity and Rotation in Core-Collapse Supernovae," (T.A. Thompson, E. 
Quataert, and A. Burrows), submitted to Astrophys. J. 2004. "The Collapse of Rotating Massive Stars in Three Dimensions", C. L. Fryer, M. S. Warren, ApJ, 601, 391, 2004. "Gravitational Waves from Stellar Collapse: Correlations to Explosion Asymmetries", C. L. Fryer, D. E. Holz, S. A. Hughes, ApJ, 609, 288, 2004. PI Ruqian Wu The formation of Au cluster on O-deficient TiO2(110), J. Hong and R.Q. Wu, Phys. Rev. B, submitted. Thickness dependence of Properties of SiO2 film on Mo(110), J. Hong and R.Q. Wu, J. Chem. Phys. submitted. The instability of Au clusters on MgO(001), J. Hong and R.Q. Wu, Phys. Rev. B, submitted. PI Yu-Shu Wu Pan, L., Y.S. Wu, K. Zhang, 2004, Flow Diversion and Focusing in Unsaturated fractured Tuffs at Yucca Mountain, Nevada, Steady State Analysis, Vadose Zone Hydrology, Vol.3, no.1, p233-246, Zhang, K., Y.S. Wu, G.S. Bodvarsson, and H.H. Liu, 2004, Flow Focusing in Unsaturated Fracture Networks: A Numerical Investigation, submitted to Vadose Zone Hydrology, Vol.3, no. 2, p624-633, PI Donald Wuebbles Naik, V., D. J. Wuebbles, E. H. DeLucia, and J. A. Foley, 2003: Influence of geoengineering climate on the terrestrial biosphere. Environmental Management, 32, 373-381. Naik, V., C. Delire, D. J. Wuebbles, 2004: The sensitivity of global isoprenoid emissions to climate variability and atmospheric CO2. J. Geophys. Res., 109, doi: 10.1029/2003JD004236. Tao, Z., S. M. Larson, D. J. Wuebbles, A. Williams, and M. Caughey, 2004: Sensitivity of regional ozone to temporal distributions of emissions. Atmos. Environ., in press. Wuebbles, D. J., and K. Hayhoe, 2004: Climate change in the Midwest: informing regional policy decisions. Mitigation and Adaptation Strategies for Global Change, in press. PI Tao Ye T. Ye and J. L. Bull, "Direct Numerical Simulations of Bubble Expansion in Gas Embolotherapy", accepted and to appear in J. Biomech. Eng. - Trans. ASME, 2004. PI Katherine Yelick Evaluating Support for Global Address Space Languages on the Cray X1. C. 
Bell, W. Chen, D. Bonachea, K. Yelick. International Conference on Supercomputing. St. Malo, France, June 2004. Problems with using MPI 1.1 and 2.0 as compilation targets for parallel language implementations. Dan Bonachea and Jason Duell. 2nd Workshop on Hardware/Software Support for High Performance Scientific and Engineering Computing, SHPSEC-PACT03, September 2003. Message Strip Mining Heuristics for High Speed Networks. C. Iancu, P. Husbands, W. Chen. VECPAR. Valencia, Spain. June 2004. Proposal for Extending the UPC Memory Copy Library Functions, v0.7. D. Bonachea. UPC community forum, 2004. A Proposal for a UPC Memory Consistency Model, v1.0 (May 5, 2004). Lawrence Berkeley National Lab Tech Report LBNL-54983. Array Prefetching for Irregular Array Accesses in Titanium. Jimmy Su and Katherine Yelick, IPDPS Workshop on Java for Parallel and Distributed Computing, Santa Fe, New Mexico, April 2004. Distributed Immersed Boundary Simulation in Titanium. Ed Givelberg and Katherine Yelick. Submitted for publication. See http://www.cs.berkeley.edu/~givelber/. Polynomial-time Algorithms for Enforcing Sequential Consistency in SPMD Programs with Arrays, Wei-Yu Chen, Arvind Krishnamurthy, Katherine Yelick, 16th International Workshop on Languages and Compilers for Parallel Computing (LCPC), College Station, Texas, October 2003 PI Pui-kuen Yeung Donzis, D.A., Sreenivasan, K.R. and Yeung, P.K. (2004) Dissipative anomaly in passive scalars. Submitted to Journal of Fluid Mechanics (in revision). PI Marco Zaider Zaider, M. and E.K. Lee, Treatment planning for low dose rate and high dose rate brachytherapy. In: Basic and Advanced Techniques in Prostate Brachytherapy (A. Dicker, G.S. Merrick, L.G. Gomella, R.K. Valicenti and F. Waterman, eds.), Martin Dunitz, London (2003). Zaider, M., Functional Imaging. In: Brachytherapy in the New Millennium (S. Nag, Ed.) (2003). Zaider, M., Aspects of Brachytherapy Physics, In: Textbook of Radiation Oncology, 2-nd Edition (S.A. 
Leibel and T.L. Phillips, Eds.), Saunders (2003). Zaider, M. and J.F. Dicello, Microdosimetry and its medical applications. In: Charged Particle and Photon Interactions with Matter (Mozumder and Hatano, Eds.), Marcel Dekker, Inc. (2003) pp.533-550. Merriam, J.C., L. Zheng, J. Merriam, M. Zaider and B. Lindstrom, The effect of incisions for cataract on corneal curvature. Ophthalmology 110: 1807-1813 (2003). Todor, D.A., M. Zaider, G.N. Cohen, M.F. Worman and M.J. Zelefsky, Intraoperative dynamic dosimetry in prostate implants. Phys. Med. Biol. 48: 1153-1171 (2003). Crouch, J., S.M. Pizer, E.L. Chaney and M. Zaider, Medial techniques to automate finite element analysis of prostate deformation. Transact. Med. Imaging (in press, 2003). Crouch, J., S.M. Pizer, G. Mageras, G. Cohen, M. Zaider, S. Joshi, and E. Chaney, Validation of a method for non-rigid registration of prostate images using finite element analysis. Int. J. Radiat. Onc. Biol. Phys. (submitted, 2003). Lea, E.K. and M. Zaider, Intraoperative dynamic dose optimization in permanent prostate implant. Int. J. Radiat. Onc. Biol. Phys. 56: 854-861 (2003). Brenner, D.J., R. Doll, D.T. Goodhead, E.J. Hall, C.E. Land, J.B. Little, J.H. Lubin, D.L. Preston, T.J. Preston, J.S. Puskin, E. Ron, R.K. Sachs, J.M. Samet, R.B. Setlow and M. Zaider, Cancer Risks attributable to low doses of ionizing radiation: Assessing what we really know. PNAS 100: 13761-13766 (2003). Rosenfeld, A.B., D.L. Cutajar, M.L.F. Lerch, G.J. Takacs, J. Brady, T. Braddock, V. Perventailo, J. Bucci, J. Kersley, M. Zaider and M. Zelefsky, In vivo dosimetry and seed localization in prostate brachytherapy with permanent implants. IEEE Transactions on Nuclear Science, NS-51 (in press, 2004). PI Peter Zapol A. S. Barnard and P. Zapol A model for the phase stability of arbitrary nanoparticles as a function of size and shape. J. Chem. Phys. 121(9) 4276 (2004) A. Barnard, P.
Zapol Predicting the Energetics, Phase Stability and Morphology Evolution of Faceted and Spherical Anatase Nanocrystals, J. Phys. Chem. (submitted) PI Shengbai Zhang S. Limpijumnong, S. B. Zhang, S.-H. Wei, and C. H. Park, "Doping by large size-mismatched impurities: The microscopic origin for arsenic- or antimony-doped p-type zinc oxide", Phys. Rev. Lett. 92, 155504 (2004). S. B. Zhang, L. Zhang, L. Xu, E. G. Wang, X. Liu, J.-F. Jia, and Q.-K. Xue, "Spin driving reconstructions on the GaAs(001):Mn surface", Phys. Rev. B 69 (Rapid Comm.), 121308 (2004). Y. Zhao, Y.-H. Kim, M.-H. Du, and S. B. Zhang, "First-principles prediction of icosahedral quantum dots for tetravalent semiconductors", Phys. Rev. Lett. 93, 015502 (2004). X. Luo, S. B. Zhang, and S.-H. Wei, "Theory of Mn supersaturation in Si and Ge", Phys. Rev. B 70, 033308 (2004). D. Segev and S.-H. Wei, "Design of shallow donor levels in Diamond by isovalent-donor coupling", Phys. Rev. Lett. 91, 126406 (2003). A. Janotti, S.-H. Wei, and S. B. Zhang, "Donor-donor Binding in Semiconductors: Engineering shallow donor levels for ZnTe", Appl. Phys. Lett. 83, 3522 (2003). S. A. Awadalla, Alan W. Hunt, K. G. Lynn, H. Glass, C. Szeles, and S.-H. Wei, "Investigation of isoelectronic oxygen related defect in CdTe crystals using thermoelectric effect spectroscopy", Phys. Rev. B. 69, 075210 (2004). S.-H. Wei, "Overcoming the doping bottleneck in semiconductors", Comp. Mater. Sci. 30, 337 (2004). A. Janotti, S. B. Zhang, S.-H. Wei, and C. G. Van de Walle, "Effects of N on the electronic structures of H defects in III-V semiconductors", Optical Materials 25, 261 (2004). P. Carrier and S.-H. Wei, "Unusual optical transitions in wurtzite AlN", J. Li, K. B. Nam, M. L. Nakami, J. Y. Lin, and H. X. Jiang, Appl. Phys. Lett. 83, 5163 (2003). 
PI Zhenyu Zhang Kinetic pathway for the formation of one-dimensional magnetic atom wires on stepped Cu(111) surfaces; Yina Mo, Kalman Varga, Efthimios Kaxiras and Zhenyu Zhang, submitted to Phys. Rev. Lett. ``Lagrange functions'' -- a family of powerful basis sets for real-space order-N electronic structure calculations; Kalman Varga, Zhenyu Zhang, Sokrates T. Pantelides Phys. Rev. Lett., accepted for publication. Critical layer thickness in Stranski-Krastanow growth of Ge on Si(001); K. Varga, L.G. Wang, S. T. Pantelides, Z. Y. Zhang Surf. Sci. 562, L225 (2004). Adatom ascending at step edges and faceting on fcc metal (110) surfaces; W. G. Zhu, F. B. de Mongeot, U. Valbusa, E.G. Wang, Z. Y. Zhang Phys. Rev. Lett. 92, 106102 (2004) Adsorption of a carbon atom on the Ni-38 magic cluster and three low-index nickel surfaces: A comparative first-principles study; Zhang QM, Wells JC, Gong XG, Zhang ZY Phys. Rev. B 69 205413 (2004). PI Jianxin Zhong J.J. Zhang, K.W. Zhang, J.X. Zhong, Local self-organization of islands in embedded nanodot systems, Appl. Phys. Lett. 84, 1853-1855 (2004) J.J. Zhang, K.W. Zhang, J.X. Zhong, Replication and alignment of quantum dots in multilayer heteroepitaxial growth, Surf. Sci. 551, L40-L46, (2004). J.X. Zhong, G.M. Stocks, J.J. Zhang, K.W. Zhang, Strain engineering of nanocomposite materials, Proceedings of the 11th International Conference on Composites/Nano Engineering, edited by David Hui, p867-868 (2004). J.X. Zhong, G.M. Stocks, Control of Doping and Electronic Transport in Nanowires, Mat. Res. Soc. Symp. Proc. Vol. 820, O4.7.1. (2004). J.X. Zhong, J.C. Wells, Q. Niu, Z.Y. Zhang, Dependence of surface strain on island geometry in embedded quantum-dot systems, Surf. Sci. 539, L525-L530 (2003).
PI Oleg Zikanov "Anisotropy of MHD turbulence at low magnetic Reynolds number," by Vorobev, Zikanov, Thess, Davidson, Knaepen "On the transition from two-dimensional to three-dimensional MHD turbulence" by Thess, Zikanov Both papers are to appear in the proceedings of 2004 CTR Summer Research Program (Center for Turbulence Research, Stanford University and NASA Ames). PI Alex Zunger L. He, G. Bester and A. Zunger, "Strain induced interfacial hole localization and light/heavy-hole reversal in self-assembled quantum dots: compressive InAs/GaAs vs. tensile InAs/InSb", Phys. Rev. B L. He, G. Bester and A. Zunger, "Metal-nonmetal transition and excitonic ground state in InAs/InSb quantum dots", Phys. Rev. Lett. (submitted). G. Bester, J. Shumway and A. Zunger, "Theory of excitonic spectra and entanglement engineering in dot molecules", Phys. Rev. Lett. 93, 047401 (2004). A. Zunger and G. Bester, "Theory of excitons, charged excitons, exciton fine-structure and entangled excitons in self-assembled semiconductor quantum dots", Physica E 21, 204 (2004). G. Bester and A. Zunger, "Compositional and size-dependent spectroscopic shifts in charged self-assembled InxGa1-xAs/GaAs quantum dots", Phys. Rev. B 68, 073309 (2003). G. Bester, S. Nair, and A. Zunger, "Pseudopotential calculation of the excitonic fine structure of million-atom self-assembled In1-xGaxAs/GaAs quantum dots", Phys. Rev. B 67, R161306 (2003). M. Califano and A. Zunger, "Anisotropy of interband transitions in InAs quantum wires: an atomistic theory", to be published in Phys. Rev. B M Califano, A Zunger, and A Franceschetti, "Direct carrier multiplication due to inverse Auger scattering in CdSe quantum dots", Appl. Phys. Letts. 84, 2409 (2004). M Califano, A Zunger, and A Franceschetti, "Efficient inverse Auger recombination at threshold in CdSe nanocrystals", Nano Letts. 4, 525 (2004).
M Califano, G Bester, and A Zunger, "Prediction of a Shape-Induced Enhancement in the Hole Relaxation in Nanocrystals", Nano Letts. 3, 1197 (2003). M. Sanati, G. Hart and A. Zunger, "Ordering Tendencies in Octahedral MgO-ZnO Alloys", Phys. Rev. B 68, 155210 (2003). S. Dudiy and A. Zunger, "Optical Consequences of Long Range Order in Wurtzite AlGaN Alloys", Physical Review B, Rapid Communications 68, 041302 (2003). S.V. Dudiy and A. Zunger, "Type-I to Type-II Transition at the Interface Between Random and Ordered Domains of AlGaN", Applied Physics Letters. 84, 1874 (2004) P. Mahadevan and A. Zunger, "Ferromagnetism in Mn-doped GaAs due to Substitutional-Interstitial Complexes," Physical Review B 68, 075202 (2003). R. Magri and A. Zunger, "Predicting Interband Transition Energies for InAs/GaSb Superlattices using the Empirical Pseudopotential Method", Phys. Review B 68, 155329 (2003). R. Magri and A. Zunger, "Theory of Optical Properties of III-V Superlattices: The Role of the Interface", J. Vac. Sci. Technol. B 21, 1896 (2003). R. Magri and A. Zunger, "Theory of Optical Properties of Segregated InAs/GaSb Superlattices", IEE Proceedings Optoelectronics 140, 409 (2003). V. Blum and A. Zunger, "Structural complexity in binary bcc ground states: The case of bcc Mo-Ta", Physical Review B, Rapid Communications 69, 020103 (2004) V. Blum and A. Zunger, "Mixed-Basis Cluster Expansion for Thermodynamics of bcc Alloys", to be published in Physical Review B. P. Mahadevan and A. Zunger, "First-Principles Investigation of the Assumption Ferromagnetism of 3d Transition Metal Impurities in GaAs", Physical Review B. 69, 115211 (2004) P. Mahadevan and A. Zunger, "Unusual Directional Dependence of Exchange Energies in GaAs: Mn, is the RKKY limit ever relevant?" Physical Review Letters (accepted). Y.J. Zhao and A. Zunger, "Site Preference for Mn Substitution in Spintronic CuMnX2 Chalcopyrite Semiconductors", Phys. Rev. B. 69, 075208 (2004) Y.J. Zhao and A.
Zunger, "Electronic Structure and Ferromagnetism of Mn-Substituted CuAlS2, CuGaS2, CuInS2, CuGaTe2", Phys. Rev. B. 69, 104422 (2004) Y.J. Zhao, P. Mahadevan and A. Zunger, "Comparison of predicted ferromagnetic tendencies of Mn substituting the Ga site in III-Vs and in I-III-VI2 chalcopyrite semiconductors", App. Phys. Letters, 84, 3753 (2004) PI Piotr Zyla "New Measurement of Xi-minus -> Lambda + pi-minus Decay Parameters", M. Huang et al, Phys. Rev. Lett. 93 (2004) 011802 Review of Particle Physics, S. Eidelman, K.G. Hayes, K.A. Olive et al., Phys. Lett. B592, 1 (2004)
In this blog, we summarize the LaTeX code for the equations of Graph Neural Network (GNN) models, which is useful as a quick reference for your research. For common notation, we denote the graph as G=(V,E), where V is the set of nodes with size |V|=N and E is the set of edges with |E| = N_e. A denotes the adjacency matrix. For each node v, we use h_v and o_v as the hidden state and output vector of that node.
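As a concrete illustration of this notation, here is a minimal NumPy sketch that builds A for a small graph and applies one message-passing step to the stacked hidden states h_v. The mean-aggregation update used here is our own illustrative choice, not an update rule prescribed by the post:

```python
import numpy as np

# Graph G = (V, E): N = 4 nodes, N_e = 3 undirected edges
N = 4
edges = [(0, 1), (1, 2), (2, 3)]  # the set E, with |E| = N_e = 3

# Adjacency matrix A (symmetric, since the edges are undirected)
A = np.zeros((N, N))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

# Hidden state h_v for each node v, stacked into an (N, d) matrix H
d = 2
H = np.random.default_rng(0).normal(size=(N, d))

# One illustrative message-passing step: each node replaces its hidden
# state with the mean of its neighbours' hidden states (a common GNN
# building block; real models add learned weights and nonlinearities)
deg = A.sum(axis=1, keepdims=True)
H_next = A @ H / np.maximum(deg, 1.0)

print(H_next.shape)  # one row per node v, one updated h_v each
```

An output layer would then map each updated h_v to the output vector o_v.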
{"url":"http://deepnlp.org/blog?tag=gcn","timestamp":"2024-11-12T02:12:29Z","content_type":"text/html","content_length":"24440","record_id":"<urn:uuid:3673c6df-f231-4a51-9dc8-41f9a0ac70b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00842.warc.gz"}
Matrix-Proof-of-Work Consensus Algorithm MPoW (Matrix-Proof-of-Work) is an innovative PoW consensus algorithm developed by SeeleTech and implemented in Seele's main-net. Compared to a conventional PoW consensus algorithm, MPoW requires miners to compute the determinants of sub-matrices from a matrix constructed with n hashes, rather than brute-force hashing with a hash function to find the target. It consists of several steps which can efficiently prevent ASICs and GPUs from dominating the network.
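The page does not spell out the construction, so as a rough illustration only, here is a toy Python sketch of the idea: derive a small matrix from n hashes of the block and nonce, score it by determinants of sub-matrices, and accept a nonce whose score beats a difficulty target. The matrix size, minor selection, and scoring rule here are invented for illustration and are not Seele's actual parameters.

```python
import hashlib

def hash_matrix(block: bytes, nonce: int, n: int = 3):
    """Build an n x n integer matrix from n hashes of (block, nonce)."""
    return [
        list(hashlib.sha256(block + nonce.to_bytes(8, "big") + bytes([i])).digest()[:n])
        for i in range(n)
    ]

def det2(m):
    """Determinant of a 2 x 2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mpow_score(block: bytes, nonce: int) -> int:
    """Sum |det| over the 2 x 2 minors obtained by deleting row 0 and one column."""
    m = hash_matrix(block, nonce)
    score = 0
    for j in range(3):
        minor = [[m[r][c] for c in range(3) if c != j] for r in (1, 2)]
        score += abs(det2(minor))
    return score

def mpow_attempt(block: bytes, nonce: int, target: int) -> bool:
    # A nonce "wins" when the matrix score falls below the difficulty target
    return mpow_score(block, nonce) < target
```

Mining would then loop over nonces calling mpow_attempt until it returns True; the intended effect of the scheme is that the linear-algebra work resists the fixed-function pipelines that make ASICs efficient at plain hashing.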
{"url":"https://www.seelenet.com/","timestamp":"2024-11-06T01:20:29Z","content_type":"text/html","content_length":"6673","record_id":"<urn:uuid:89a3dae1-93e7-43a7-bf8a-991857a0ffca>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00612.warc.gz"}
GCSE Maths This course is for all GCSE maths students, whether you are doing higher or foundation. Each topic will push you closer to the grade you deserve, focusing on the easiest and quickest way to pass the maths GCSE exam. All higher-only topics are labelled and should be skipped by students doing the foundation tier.
{"url":"https://www.onmaths.com/course/gcse-maths/","timestamp":"2024-11-07T09:58:29Z","content_type":"text/html","content_length":"106303","record_id":"<urn:uuid:965762bb-1f65-41e1-b178-230a567357f0>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00638.warc.gz"}
RIGHT + FIND with LEFT + FIND So I can run automations to notify individuals, I have a column that is a contact list. For data reporting purposes, I need to upload the names into an API in a (last_name, first_name) format. To get around this, I have concocted a two-step process: 1. Write a function in a cell to pull back (first_name last_name) 2. Write another function to put it into (last_name, first_name) format. The first function is as follows, and is working fine to pull the contact name into the "Prep - KLG Primary Contact (auto calculate)" cell: =[KLG Project Lead]@row The issue comes in at step two. I have written the following function: =RIGHT([Prep - KLG Primary Contact (auto calculate)]@row, FIND(" ", [Prep - KLG Primary Contact (auto calculate)]@row, 1)) + ", " + LEFT([Prep - KLG Primary Contact (auto calculate)]@row, FIND(" ", [Prep - KLG Primary Contact (auto calculate)]3)) It works well for some names, but not for others, and I can't figure out why. Can someone help? Best Answer • OHHHHH. Duh. I found a solution for you. You can't use the same method to find the last name as you did the first because the lengths of the two names are different. You have to take the full length of the name and subtract the length of the first name and the space to get the number of characters you need for the last name. :) This formula should do the trick for you. 😝 =RIGHT([Prep - KLG Primary Contact (auto calculate)]@row, LEN([Prep - KLG Primary Contact (auto calculate)]@row) - FIND(" ", [Prep - KLG Primary Contact (auto calculate)]@row, 1)) + ", " + LEFT ([Prep - KLG Primary Contact (auto calculate)]@row, FIND(" ", [Prep - KLG Primary Contact (auto calculate)]@row)) • Are there extra spaces in those names? It seems to be grabbing and repeating the last letter of the first name. You might want to trim those results. I'm going to play with this formula and see what I can find. • Hmmm. Try this slight adjustment. 
=RIGHT([Prep - KLG Primary Contact (auto calculate)]@row, FIND(" ", [Prep - KLG Primary Contact (auto calculate)]@row, 1)) + ", " + LEFT([Prep - KLG Primary Contact (auto calculate)]@row, FIND(" ", [Prep - KLG Primary Contact (auto calculate)]@row)) • Hi @Mike Wilday, no extra spaces are present. I checked that :) I tried your adjusted formula, and I'm still experiencing the same issue. • You were lucky some worked; it only comes out right when the lengths of the last name and first name happen to line up. Try this: =RIGHT([Prep - KLG Primary Contact (auto calculate)]@row, LEN([Prep - KLG Primary Contact (auto calculate)]@row) - FIND(" ", [Prep - KLG Primary Contact (auto calculate)]@row, 1)) + "," + LEFT([Prep - KLG Primary Contact (auto calculate)]@row, FIND(" ", [Prep - KLG Primary Contact (auto calculate)]@row)) I added a check to subtract where it finds the " " from the length of the entire string. Brent C. Wilson, P.Eng, PMP, Facilityy Professional Services Inc.
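For readers outside Smartsheet, the accepted answer's logic can be sketched in Python (illustrative only; Smartsheet's FIND is 1-based while Python indexing is 0-based, so the off-by-one bookkeeping differs slightly):

```python
def to_last_first(full_name: str) -> str:
    """Swap "First Last" into "Last, First", mirroring the LEN/FIND fix."""
    space = full_name.find(" ")       # like FIND(" ", name) locating the separator
    first = full_name[:space]         # like LEFT(name, FIND(" ", name))
    last = full_name[space + 1:]      # like RIGHT(name, LEN(name) - FIND(" ", name))
    return f"{last}, {first}"

print(to_last_first("Mike Wilday"))   # Wilday, Mike
```

The key point is the same as in the thread: the last name's length must be computed as total length minus the first name and the space, not reused from the first FIND.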
{"url":"https://community.smartsheet.com/discussion/93720/right-find-with-left-find","timestamp":"2024-11-04T01:29:51Z","content_type":"text/html","content_length":"454796","record_id":"<urn:uuid:baed6a09-f0e9-4405-a6b6-380729d3c307>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00021.warc.gz"}
Majority element in array
Given an array A of size n, find an element that occurs more than n/2 times.
The problem can be solved using the Moore Voting Algorithm (MVA); the second (verification) part of MVA is a must. In this algorithm every element tries to up-vote itself. It is a two-step process:
a) The first pass of MVA only gives you a candidate which occurs the most number of times in the given array. Notice 'MOST' here.
b) In the second pass, we iterate over the array once again to determine whether this candidate occurs more than n/2 times or not.
For the implementation, we just need to maintain two variables, voter and votes. The voter may cancel someone else's vote or it may up-vote itself. If the votes become zero, then we change the voter to the current element. So simple, right?

#include <stdio.h>
#include <stdlib.h>

/* Moore voting algorithm, two steps:
 * find the majority candidate, then check whether it is really the majority. */
int get_majority_candidate(int *arr, int n)
{
    int current_element = arr[0], element_vote = 1;
    for (int i = 1; i < n; ++i) {
        if (arr[i] == current_element)
            ++element_vote;          /* the candidate up-votes itself */
        else
            --element_vote;          /* a different element cancels one vote */
        /* two cases may arise: element_vote is greater than 0, or it is 0;
         * if it is 0, change current_element to arr[i] */
        if (element_vote == 0) {
            current_element = arr[i];
            element_vote = 1;
        }
    }
    return current_element;
}

int check_if_major(int *arr, int n, int candidate)
{
    int count = 0;
    for (int i = 0; i < n; ++i)
        if (arr[i] == candidate)
            ++count;
    return (count > n / 2) ? 1 : 0;
}

int main(void)
{
    int n;
    scanf("%d", &n);
    int *arr = (int *) malloc(n * sizeof(int));
    for (int i = 0; i < n; ++i)
        scanf("%d", &arr[i]);
    int majority_candidate = get_majority_candidate(arr, n);
    if (check_if_major(arr, n, majority_candidate))
        printf("\n%d is the majority element\n", majority_candidate);
    else
        printf("\nThere is no majority element in the array\n");
    free(arr);
    return 0;
}
{"url":"http://binomial.me/tutorials/majority-element-in-array/","timestamp":"2024-11-02T06:27:09Z","content_type":"text/html","content_length":"21752","record_id":"<urn:uuid:bececb72-2262-4fa2-bd43-223225b8a6e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00268.warc.gz"}
RD Sharma Class 10 Solutions Chapter 16 Probability Ex 16.2
These Solutions are part of RD Sharma Class 10 Solutions. Here we have given RD Sharma Class 10 Solutions Chapter 16 Probability Ex 16.2.
Question 1. Suppose you drop a die at random on the rectangular region shown in the figure. What is the probability that it will land inside the circle with diameter 1 m?
Question 2. In the accompanying diagram a fair spinner is placed at the centre O of the circle. Diameter AOB and radius OC divide the circle into three regions labelled X, Y and Z. If ∠BOC = 45°, what is the probability that the spinner will land in the region X?
Question 3. A target shown in the figure consists of three concentric circles of radii 3, 7 and 9 cm respectively. A dart is thrown and lands on the target. What is the probability that the dart will land on the shaded region?
Question 4. In the figure, points A, B, C and D are the centres of four circles that each have a radius of length one unit. If a point is selected at random from the interior of square ABCD, what is the probability that the point will be chosen from the shaded region?
Question 5. In the figure, JKLM is a square with sides of length 6 units. Points A and B are the mid-points of sides KL and LM respectively. If a point is selected at random from the interior of the square, what is the probability that the point will be chosen from the interior of ∆JAB?
Question 6. In the figure, a square dart board is shown. The length of a side of the larger square is 1.5 times the length of a side of the smaller square. If a dart is thrown and lands on the larger square, what is the probability that it will land in the interior of the smaller square?
Hope given RD Sharma Class 10 Solutions Chapter 16 Probability Ex 16.2 are helpful to complete your math homework. If you have any doubts, please comment below.
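The worked solutions were images in the original and are not reproduced here, but Question 6 can be solved without the figure, since all lengths are given relative to each other; it illustrates the geometric-probability method used throughout this exercise:

```latex
P(\text{dart lands in the smaller square})
  = \frac{\text{area of smaller square}}{\text{area of larger square}}
  = \frac{s^2}{(1.5s)^2} = \frac{1}{2.25} = \frac{4}{9}
```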
{"url":"https://mcqquestions.guru/rd-sharma-class-10-solutions-chapter-16-probability-ex-16-2/","timestamp":"2024-11-06T14:03:40Z","content_type":"text/html","content_length":"65436","record_id":"<urn:uuid:9ab9e0fb-f328-449d-b3de-f27e3a471915>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00585.warc.gz"}
A different approach for multi-level distance labellings of path structure networks
For a positive integer $k$, a radio $k$-labelling of a simple connected graph $G=(V, E)$ is a mapping $f$ from the vertex set $V(G)$ to a set of non-negative integers such that $|f(u)-f(v)|\geqslant k+1-d(u,v)$ for each pair of distinct vertices $u$ and $v$ of $G$, where $d(u,v)$ is the distance between $u$ and $v$ in $G$. The \emph{span} of a radio $k$-labelling $f$, denoted by $span_f(G)$, is defined as $\displaystyle\max_{v\in V(G)}f(v)$, and the \emph{radio $k$-chromatic number of $G$}, denoted by $rc_k(G)$, is $\displaystyle\min_{f}\{~span_f(G)\}$, where the minimum is taken over all radio $k$-labellings of $G$. In this article, we present results on the radio $k$-chromatic number of the path $P_n$ for $k\in\{n-1, n-2, n-3\}$ using a different but simple approach.
Article Details
How to Cite
Saha, L. (2024). A different approach for multi-level distance labellings of path structure networks. Tamkang Journal of Mathematics, 55(1), 15–23. https://doi.org/10.5556/j.tkjm.55.2024.3913
Griggs J R, Kral' D (2009) Graph labellings with variable weights, a survey. Discrete Appl. Math. 157: 2646-2658.
Georges J P, Mauro D W, Stein M I (2001) Labelling products of complete graphs with a condition at distance two. SIAM J. Discrete Math. 14: 28-35.
Griggs J R, Jin X T (2006) Real number graph labelling with distance conditions. SIAM J. Discrete Math. 20: 302-327.
Griggs J R, Yeh R K (1992) Labelling graphs with a condition at distance 2. SIAM J. Discrete Math. 5: 586-595.
Hale W K (1980) Frequency assignment, theory and application. Proc IEEE 68: 1497-1514.
Roberts F S (2003) Working group agenda of DIMACS/DIMATIA/Renyi working group on graph colorings and their generalizations, posted at http://dimacs.rutgers.edu/workshops/GraphColor/main.html.
Chartrand G, Erwin D, Harary F, Zhang P (2001) Radio labelings of graphs. Bull. Inst. Combin. Appl. 33: 77-85.
Chartrand G, Erwin D, Zhang P (2005) A graph labeling problem suggested by FM channel restrictions. Bull. Inst. Combin. Appl. 43: 43-57.
Chartrand G, Erwin D, Zhang P (2000) Radio antipodal colorings of cycles. Proceedings of the Thirty-First Southeastern International Conference on Combinatorics, Graph Theory and Computing (Boca Raton, FL, 2000) 144: 129-141.
Chartrand G, Nebesky L, Zhang P (2004) Radio k-colorings of paths. Discuss. Math. Graph Theory 24: 5-21.
Khennoufa R, Togni O (2011) The radio antipodal and radio numbers of the hypercube. Ars Combin. 102: 447-461.
Khennoufa R, Togni O (2005) A note on radio antipodal colourings of paths. Math. Bohem. 130(3): 277-282.
Liu D D-F (2008) Radio number for trees. Discrete Math. 308: 1153-1164.
Liu D D-F, Zhu X (2005) Multi-level distance labelings for paths and cycles. SIAM J. Discrete Math. 19: 610-621.
Rao Kola S, Panigrahi P (2009) Nearly antipodal chromatic number ac′(Pn) of the path. Math. Bohem. 134(1): 77-86.
Rao Kola S, Panigrahi P (2009) On radio (n-4)-chromatic number of the path Pn. AKCE Int. J. Graphs Comb. 6(1): 209-217.
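As a tiny worked instance of the radio $k$-labelling condition defined in the abstract (not taken from the paper), consider the path $P_3: v_1 v_2 v_3$ with $k=2$, i.e. $k=n-1$:

```latex
f(v_1)=0,\ f(v_3)=1,\ f(v_2)=3:\qquad
|f(v_1)-f(v_3)| = 1 \geqslant k+1-d(v_1,v_3) = 3-2 = 1,\quad
|f(v_2)-f(v_1)| = 3 \geqslant 3-1 = 2,\quad
|f(v_2)-f(v_3)| = 2 \geqslant 3-1 = 2.
```

All three constraints hold, so this is a radio $2$-labelling of span $3$. Since the centre vertex must differ by at least $2$ from both endpoints while the endpoints themselves must be distinct, no labelling with span $2$ is possible, giving $rc_2(P_3)=3$.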
{"url":"https://journals.math.tku.edu.tw/index.php/TKJM/article/view/3913","timestamp":"2024-11-08T08:47:10Z","content_type":"text/html","content_length":"31989","record_id":"<urn:uuid:1d1569b8-aae1-4c53-9cbd-0d7fb43c301c>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00339.warc.gz"}
Some thoughts about evolution and probabilities
Aaron Sloman
This paper is
A PDF version may be added later.
A partial index of discussion notes is in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
These comments are an extension to the discussion of teaching intelligent design alongside evolutionary theory, available at
We should expose unsolved problems in current science and show how to think about them, instead of leaving it only to people who want to attack science either because they are ignorant about it or because their primary goal is religious, as in many of the arguments about intelligent design (ID). Some of the arguments for ID are based on real gaps in current theories. We should present the gaps and show how to think about them, instead of leaving it to the theists and theologians to offer only one way to think about them, disguised as science. By exposing the unsolved problems in the context of a scientific approach, rather than leaving them to be exposed to some people only in the anti-scientific approach, we help people to see why the fact that they are unsolved does not show that current theories are false -- though they could turn out to be, as happened to Newton. Even less does it show that ID is true, of course.
The improbability of evolution of humans
Here is a rough and ready calculation: suppose that producing a design for a human-like animal requires 30000 binary design decisions (probably a considerable underestimate), and that after each decision there are always at least two further branches. Then the search space of possible designs requiring that number of steps has 2 to the 30000 end nodes, which is a number with over 9,000 digits in it. Now suppose that the earth is 4 billion years old and that at any time there are a million million million species, each switching to a new variant every second.
The maximum total number of designs that could have been explored on that basis would be a decimal number with only 27 digits. Suppose I've underestimated the number of species in existence at any time by a factor of a billion: that would bring the upper limit up to 36 digits. Compared with a number containing over 9,000 digits that's infinitesimal. Now suppose that there are a billion billion possible ways of designing a human. It's still the case that the probability of random design switches leading to any of the human designs is minuscule, and likewise for most of the other animal or plant designs that exist on earth.
That might lead someone to think that intelligent design was needed to guide the processes. I suppose the standard answer would be that nothing that is highly unlikely needs its existence to be explained just because it is unlikely. It might just be one of the many highly improbable things that happen in the universe (including events in gambling casinos), and it could for that reason be the case that no other place in the universe has anything remotely like humans in physical or information-processing capabilities, just as it is very unlikely that anyone else on earth looks and thinks exactly like me. In that case the truth (it's just one of those highly improbable, inexplicable things) would be rather boring, and scientists like Einstein would not like that. So the search for a more aesthetically and scientifically satisfying explanation of how things are might lead some to try to bring in an intelligent designer. In principle that is no more unscientific than Democritus postulating an atomic theory of matter when he had no idea how to test his theory. But there could be other explanations.
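The digit counts in this argument are easy to verify mechanically. This quick Python check (mine, not Sloman's) counts the digits of the search space and of a generous upper bound on designs explored, taking 10^18 species each switching once per second for 4 billion years; the exact counts differ a little from the rounded figures in the text, but the gulf between the two magnitudes is the point:

```python
# Digits in 2**30000: the size of the design search space
search_space_digits = len(str(2 ** 30000))

# Generous upper bound on designs explored: one switch per species per second
seconds = 4_000_000_000 * 365 * 24 * 3600   # ~4 billion years in seconds
species = 10 ** 18                          # "a million million million"
explored_digits = len(str(seconds * species))

print(search_space_digits, explored_digits)  # 9031 36
```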
For instance the calculations could be wrong because there is a deep mathematical reason why almost all the design choices would fail to produce viable organisms, so that the vast majority of 30000-step explorations would terminate very early, leaving the remainder exploring only a small subset of the search space. Moreover, there could be further mathematical arguments about the way in which the possibilities for further change (or non-change) of any species at any time would be heavily pruned by the existence of all the other coexisting species (designs) at any one time (including the location in a food pyramid, i.e. the location in predator-prey league tables). I.e. there may be what Brian Goodwin calls 'laws of form' (mathematical laws) that are not really part of Darwinian theory but may be needed to provide answers to the questions that the *honest* ID people are asking. (I think this is also closely related to some of Stuart Kauffman's ideas and to the books by Ian Stewart and Jack Cohen.) As far as I know these questions and the possible answers are not generally taught to biology students. One of my ulterior motives is to provide an educational niche where they will have a chance of being taught, developed and tested, because they are the scientific answers to some of the valid questions the ID people pose, but answer incorrectly, sometimes honestly sometimes dishonestly.
I suspect there is a mathematical theorem something like this waiting to be proved: In any environment that supports Darwinian processes it is an inevitable property of ecosystems with co-evolving species in a multitude of cooperative, competitive and parasitic relationships that if the combinatorics of the physical infrastructure makes it physically possible for species with cognitive capabilities to exist, then (a) the probability of some such species actually evolving is very high, and (b) once that has happened, the probability of a small subset of those species developing ever richer cognitive competences (up to some limit) is close to 1. Actually I don't think I know yet how to formulate the theorem, let alone prove it. Maybe someone else has already done it. Installed: (some time before Nov 2005?) Last updated: 11 Nov 2005; 5 Feb 2012
{"url":"https://www.cs.bham.ac.uk/research/projects/cogaff/misc/evo-prob.html","timestamp":"2024-11-13T02:35:37Z","content_type":"text/html","content_length":"7124","record_id":"<urn:uuid:ad05d87f-ebfc-4e13-b110-de9df729bf38>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00020.warc.gz"}
Archimedes Hat Box Theorem
Archimedes' Hat Box Theorem - 2018/05/30
Take a sphere, and encase it in a cylinder. Take a horizontal slice between two planes. You end up with a narrow cylinder, and a smaller, "slopey" cylinder. The Archimedes Hat-Box Theorem says these two cylinders have the same surface area. In fact the theorem says that the surface areas are the same no matter how thick the slice is, so as a consequence we can take a "slice" that captures the entire cylinder and sphere, and that means that the surface area of the curved surface of the cylinder is exactly the same as the surface area of the sphere. The cylinder has height $2r$ and circumference $2{\pi}r$, so the area is $4{\pi}r^2.$
But how do we prove this? Suppose the radius of the sphere is $R$, and take a really thin slice. Suppose the base of the slice is at height $h$, and it's of thickness $\delta{h}$. We can see that the surface area of the slice of cylinder is $2{\pi}R\cdot\delta{h}$. The slice of the sphere is a smaller cylinder with sloping sides. We can see that the radius is smaller, but the slope "adds height" to the cylinder, and we need to allow for that. So if the smaller radius is $r$ and the sloped "height" is $\delta{l}$, then the surface area of the cylinder from the sphere is $2{\pi}r\cdot\delta{l}$.
We can see from the diagram that we have similar triangles, marked here in red. Note that the hypotenuse of the little triangle, the one marked $\delta{l}$, is at right angles to the main radius. The large triangle has hypotenuse $R$ and one side $r$. The smaller triangle has hypotenuse $\delta{l}$, and the equivalent side is $\delta{h}$. So $\frac{r}{R}=\frac{\delta{h}}{\delta{l}}$, or, rearranging, we have $r\cdot\delta{l}=R\cdot\delta{h}$. So the surface area of the slopey cylinder is $2{\pi}r\cdot\delta{l}$, which we can bracket as $2{\pi}(r\cdot\delta{l})$.
Replacing what's in the brackets we get $2{\pi}(R\cdot\delta{h})$. That is, of course, $2{\pi}R\cdot\delta{h}$, which is the area of the slice through the enclosing cylinder, and we are done. For larger slices we can simply subdivide as finely as we need and get the sum to be as close as necessary to the area of the larger slice. Linearisation is powerful when it works, and it works here. And thus we have Archimedes' Hat Box Theorem.
You can follow me on Mathstodon.
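The subdivision argument can also be checked numerically. This Python sketch (mine, not from the post) sums the sloped strips $2{\pi}r\cdot\delta{l}$ over a slice of the unit sphere, parametrizing by the angle so the strips are genuine approximations, and compares with the matching cylinder slice:

```python
import math

def sphere_band_area(R: float, h0: float, h1: float, steps: int = 100_000) -> float:
    """Sum the strips 2*pi*r*dl between heights h0 and h1 (measured from the centre),
    parametrizing the sphere by h = R*sin(theta), so r = R*cos(theta) and dl = R*dtheta."""
    t0, t1 = math.asin(h0 / R), math.asin(h1 / R)
    dt = (t1 - t0) / steps
    area = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * dt          # midpoint of each small strip
        area += 2 * math.pi * (R * math.cos(t)) * (R * dt)
    return area

R = 1.0
band = sphere_band_area(R, -0.3, 0.5)
cylinder_slice = 2 * math.pi * R * (0.5 - (-0.3))   # hat-box prediction: 2*pi*R*height
print(abs(band - cylinder_slice) < 1e-6)            # True
```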
{"url":"https://www.solipsys.co.uk/new/ArchimedesHatBoxTheorem.html?AlgorithmsAndSizesOfInstances","timestamp":"2024-11-08T06:10:21Z","content_type":"text/html","content_length":"28551","record_id":"<urn:uuid:35a5f81c-1727-4b94-8079-f03a58ba1de1>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00244.warc.gz"}
remainder, remainderf, remainderl From cppreference.com Defined in header <math.h> float remainderf( float x, float y ); (1) (since C99) double remainder( double x, double y ); (2) (since C99) long double remainderl( long double x, long double y ); (3) (since C99) Defined in header <tgmath.h> #define remainder( x, y ) (4) (since C99) 1-3) Computes the IEEE remainder of the floating point division operation x/y. 4) Type-generic macro: If any argument has type long double, remainderl is called. Otherwise, if any argument has integer type or has type double, remainder is called. Otherwise, remainderf is called. The IEEE floating-point remainder of the division operation x/y calculated by this function is exactly the value x - n*y, where the value n is the integral value nearest the exact value x/y. When |n - x/y| = ½, the value n is chosen to be even. In contrast to fmod(), the returned value is not guaranteed to have the same sign as x. If the returned value is 0, it will have the same sign as x. [edit] Parameters x, y - floating point values [edit] Return value If successful, returns the IEEE floating-point remainder of the division x/y as defined above. If a domain error occurs, an implementation-defined value is returned (NaN where supported). If a range error occurs due to underflow, the correct result is returned. If y is zero, but the domain error does not occur, zero is returned. [edit] Error handling Errors are reported as specified in math_errhandling Domain error may occur if y is zero. If the implementation supports IEEE floating-point arithmetic (IEC 60559), • The current rounding mode has no effect. • FE_INEXACT is never raised, the result is always exact. • If x is ±∞ and y is not NaN, NaN is returned and FE_INVALID is raised • If y is ±0 and x is not NaN, NaN is returned and FE_INVALID is raised • If either argument is NaN, NaN is returned POSIX requires that a domain error occurs if x is infinite or y is zero.
fmod, but not remainder, is useful for doing silent wrapping of floating-point types to unsigned integer types: (0.0 <= (y = fmod(rint(x), 65536.0)) ? y : 65536.0 + y) is in the range [-0.0 .. 65535.0], which corresponds to unsigned short, but remainder(rint(x), 65536.0) is in the range [-32767.0, +32768.0], which is outside of the range of signed short. [edit] Example
#include <stdio.h>
#include <math.h>
#include <fenv.h>
#pragma STDC FENV_ACCESS ON
int main(void)
{
    printf("remainder(+5.1, +3.0) = %.1f\n", remainder(5.1, 3));
    printf("remainder(-5.1, +3.0) = %.1f\n", remainder(-5.1, 3));
    printf("remainder(+5.1, -3.0) = %.1f\n", remainder(5.1, -3));
    printf("remainder(-5.1, -3.0) = %.1f\n", remainder(-5.1, -3));
    // special values
    printf("remainder(+0.0, 1.0) = %.1f\n", remainder(0.0, 1));
    printf("remainder(-0.0, 1.0) = %.1f\n", remainder(-0.0, 1));
    printf("remainder(+5.1, Inf) = %.1f\n", remainder(5.1, INFINITY));
    // error handling
    printf("remainder(+5.1, 0) = %.1f\n", remainder(5.1, 0));
    if (fetestexcept(FE_INVALID))
        puts("    FE_INVALID raised");
}
Possible output:
remainder(+5.1, +3.0) = -0.9
remainder(-5.1, +3.0) = 0.9
remainder(+5.1, -3.0) = -0.9
remainder(-5.1, -3.0) = 0.9
remainder(+0.0, 1.0) = 0.0
remainder(-0.0, 1.0) = -0.0
remainder(+5.1, Inf) = 5.1
remainder(+5.1, 0) = -nan
    FE_INVALID raised
[edit] See also
div, ldiv, lldiv — computes quotient and remainder of integer division (C99) (function)
fmod — computes remainder of the floating-point division operation (function)
remquo — computes signed remainder as well as the three last bits of the division operation (function)
{"url":"http://ld2014.scusa.lsu.edu/cppreference/en/c/numeric/math/remainder.html","timestamp":"2024-11-12T22:41:44Z","content_type":"text/html","content_length":"48619","record_id":"<urn:uuid:470042f1-0370-41ef-8103-117ac0b826d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00686.warc.gz"}
Estimate How Far Away
Here is a clever method to estimate how far away something is:
• Hold your arm straight out, thumb up
• Close one eye, align your thumb with the distant object
• Switch eyes (don't move your thumb!)
• Your thumb will seem to change position
Now ... estimate how far it moved sideways (you could imagine the length of a car or something). Multiply that by 10 and you have an estimate of how far away it is. Here your thumb seems to jump about half a car length. Half a car length is about 2.5 meters. Times 10: the car is about 25 meters away.
How it Works
The distance from your eyes to your thumb is about 10 times the distance between your eyes. And so the distance to the far object is also about 10 times the width your thumb seems to move at the far object. This works because the triangles are similar, and so the relative lengths are the same.
Learn the Size of Things
To be useful you need to know how long, wide or tall things are!
• Small cars are 4 m long
• Large cars are 5 m long
• Cars are about 1.8 m wide
• Adults are about 1.8 m tall
• A 5 year old is about 1 m tall
• A normal doorway is 2 m high and 0.8 m wide
• A truck and trailer is about 20 m long
• The width of a small house is about 8 m
• The width of a large house is about 12 m
• The height of a single-storey house is about 5 m
• The height of a two-storey house is about 8 m
• Tall buildings have about 3.5 m for every storey
(Note: to use this method for height, tilt your head and thumb 90° to the side.)
Your Turn
Go outside and stand on a high spot where you can see lots of things (roads, buildings, etc). Write down what you see and estimate how far away:
Object | "Thumb move" | ×10 = Distance (How Far Away)
Bonus Activity: Get a map, mark where you stood and where each object is, and work out how far away they really are. How good were you?
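The method above is just a multiplication, but a tiny helper makes the worksheet quicker to fill in; the 10× factor is the arm-length-to-eye-spacing ratio from the "How it Works" section:

```python
def estimate_distance(apparent_shift_m: float, ratio: float = 10.0) -> float:
    """Distance ≈ (how far the thumb appeared to jump, in metres) × ratio.
    The default ratio of 10 is the typical arm length divided by eye spacing."""
    return apparent_shift_m * ratio

# Half a car length (~2.5 m) of apparent jump -> about 25 m away
print(estimate_distance(2.5))   # 25.0
```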
{"url":"http://wegotthenumbers.org/estimate-distance.html","timestamp":"2024-11-08T23:55:41Z","content_type":"text/html","content_length":"7349","record_id":"<urn:uuid:35967e33-4b2b-461e-9b97-2b0903518bab>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00538.warc.gz"}
The DO Loop
Statistical programming in SAS with an emphasis on SAS/IML programs

Analytics | Learn SAS
Poisson regression in SAS
This article demonstrates how to use PROC GENMOD to perform a Poisson regression in SAS. There are different examples in the SAS documentation and in conference papers, but I chose this example because it uses two categorical explanatory variables. Therefore, the Poisson regression can be visualized by using a contingency

Analytics | Machine Learning
Fit, simulate, fit: How models can collapse after generations of recursive fitting
An article published in Nature has the intriguing title, "AI models collapse when trained on recursively generated data" (Shumailov, et al., 2024). The article is quite readable, but I also recommend a less technical overview of the result: "AI models fed AI-generated data quickly spew nonsense" (Gibney, 2024). The Gibney

Analytics | Programming Tips
A geometric solution to isotonic regression
A previous article shows that you can run a simple (one-variable) isotonic regression by using a quadratic programming (QP) formulation. While I was reading a book about computational geometry, I learned that there is a connection between isotonic regression and the convex hull of a certain set of points. Whaaaaat?

Learn SAS | Programming Tips
QPSOLVE: A new SAS IML function for quadratic optimization
Since the pandemic began in 2020, the SAS IML developers have added about 50 new functions and enhancements to the SAS IML language in SAS Viya. Among these functions are new modern methods for optimization that have a simplified syntax as compared to the older 'NLP' functions that are available

Learn SAS | Programming Tips
How to use keyword-value pairs when calling SAS IML subroutines
Just like the SAS DATA step, the SAS IML language supports both functions and subroutines. A function returns a value, so the calling syntax is familiar:
y = func(x1, x2); /* the function returns one value, y */
In this syntax, the input arguments are x1 and x2. The

Analytics | Learn SAS | Programming Tips
Isotonic regression: An application of quadratic optimization
Isotonic regression (also called monotonic regression) is a type of regression model that assumes that the response variable is a monotonic function of the explanatory variable(s). The model can be nondecreasing or nonincreasing. Certain physical and biological processes can be analyzed by using an isotonic regression model. For example, a
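The isotonic-regression posts listed here describe a QP formulation and a convex-hull view of the problem; for comparison, the classic pool-adjacent-violators algorithm (PAVA) solves the same one-variable, equal-weights problem directly. This Python sketch is illustrative only and is not the SAS IML code from those posts:

```python
def isotonic_fit(y):
    """Pool Adjacent Violators: nondecreasing least-squares fit to y (equal weights)."""
    # Each block holds [sum, count]; merge neighbouring blocks while means decrease.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    # Expand each block back to its members, all set to the block mean.
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

print(isotonic_fit([1, 3, 2, 4]))   # [1.0, 2.5, 2.5, 4.0]
```

PAVA runs in linear time for this special case, whereas the QP view generalizes to arbitrary partial orders and weights.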
{"url":"https://blogs.sas.com/content/iml/page/4","timestamp":"2024-11-05T12:30:17Z","content_type":"text/html","content_length":"69269","record_id":"<urn:uuid:ceee1c48-061c-43e0-83f8-def6ff5ba6a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00189.warc.gz"}
Transactions Online
Shojiro SAKATA, Masaya FUJISAWA, "Can the BMS Algorithm Decode Up to Errors? Yes, but with Some Additional Remarks" in IEICE TRANSACTIONS on Fundamentals, vol. E93-A, no. 4, pp. 857-862, April 2010, doi: 10.1587/transfun.E93.A.857.
Abstract: It is a well-known fact that the BMS algorithm with majority voting can decode up to half the Feng-Rao designed distance d[FR]. Since d[FR] is not smaller than the Goppa designed distance d[G], that algorithm can correct up to errors. On the other hand, it has been considered to be evident that the original BMS algorithm (without voting) can correct up to errors similarly to the basic algorithm by Skorobogatov-Vladut. But, is it true? In this short paper, we show that it is true, although we need a few remarks and some additional procedures for determining the Groebner basis of the error locator ideal exactly. In fact, as the basic algorithm gives a set of polynomials whose zero set contains the error locators as a subset, it cannot always give the exact error locators, unless the syndrome equation is solved to find the error values in addition.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E93.A.857/_p
author={Shojiro SAKATA and Masaya FUJISAWA},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Can the BMS Algorithm Decode Up to Errors? Yes, but with Some Additional Remarks},
TY - JOUR
TI - Can the BMS Algorithm Decode Up to Errors? Yes, but with Some Additional Remarks
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 857
EP - 862
AU - Shojiro SAKATA
AU - Masaya FUJISAWA
PY - 2010
DO - 10.1587/transfun.E93.A.857
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E93-A
IS - 4
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - April 2010
ER -
{"url":"https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E93.A.857/_p","timestamp":"2024-11-04T01:51:29Z","content_type":"text/html","content_length":"61421","record_id":"<urn:uuid:d81ad3fb-3f39-430e-9341-36d737116044>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00824.warc.gz"}
Logarithmic Graph Paper Generator - Log and Semi-Log Graph Papers

Logarithmic Graph Maker

An online logarithmic graph maker for creating custom printable logarithmic paper, including log-log, semi-log, and asymmetric graph paper. Simply customize, download, and print on a variety of paper formats such as A3, A4, A5, Letter, or any custom size of your choice. You can adjust the line spacing, thickness, color, borders, margins, and more. For quick, ready-to-download pre-made templates, visit our Graph Papers Gallery section.

What is Logarithmic Graph Paper?

Logarithmic (log-log) paper has two logarithmic axes and is used to plot data sets with a wide range of values, i.e. where the gap between the smallest and largest values is too large to fit comfortably on a linear scale; for example, when the smallest numbers are around 10 or 20 and the largest are in the range of 1,000,000.

Semi-log graph paper has one linear axis and one logarithmic axis. It is mostly used when the range of the data on one axis is extremely large. It may also be used when one axis does not follow a linear progression.

Why use logarithmic scales for graphs?

The main reason for using logarithmic scales in graphs and charts is to counter skewness towards larger values. They are also useful when one axis does not follow a linear progression, or when you need to show percentage change.

What is the difference between log-log and semi-log graph paper?

A semi-logarithmic graph has one logarithmic axis and one linear axis. In log-log graphs, both axes have a logarithmic scale.

Log Graph Paper Maker

With this log page maker tool you can create your own logarithmic (log-log) or semi-logarithmic (semi-log) graph paper. You can change the line thickness, the colors of both axes, the log base (from 2 to 16), the line spacing of the linear scale in the semi-log case, the number of cycles, the paper orientation, the paper size, the margins, and more.

Plain Grid | Engineering Graph Paper | Polar Graph Paper | Spider Web Graph
Browse our section of pre-made templates for ready-to-download commonly used papers.
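The reason widely spaced values fit on one sheet is that a logarithmic axis gives every decade the same amount of paper. A minimal Python sketch (plain math, no plotting library) showing that log10 maps each factor-of-ten step to one equal unit of axis distance:

```python
import math

def log_axis_position(value, axis_min):
    """Distance (in decades) of `value` from the left edge of a log axis
    that starts at `axis_min`. One decade = one equal unit of paper."""
    return math.log10(value) - math.log10(axis_min)

# Values spanning five orders of magnitude, on an axis starting at 10.
values = [10, 100, 1_000, 10_000, 100_000, 1_000_000]
positions = [log_axis_position(v, 10) for v in values]
print(positions)  # equal spacing: approximately 0, 1, 2, 3, 4, 5 decades
```

On a linear axis the same six values would crowd the first five into less than a tenth of the sheet; on the log axis they are evenly spread.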
{"url":"https://mathpolate.com/graph/logarithmic","timestamp":"2024-11-11T09:33:44Z","content_type":"text/html","content_length":"72094","record_id":"<urn:uuid:dabf6385-1b54-482d-bf90-057acab7ca1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00019.warc.gz"}
Neutrino oscillations, energy-momentum conservation and entanglement
Evgeny Akhmedov
MPI-K, Heidelberg & Kurchatov Inst., Moscow
Munich, Sept. 5-9, 2011

4-mom. conservation and ν production
Neutrino oscillations and energy-momentum conservation – an intricate relationship.
Calculation of rates of processes in quantum theory (generalized Fermi's Golden Rule):
\[
\Gamma = \frac{1}{\prod_i 2E_i}\int \prod_f \frac{d^3 p_f}{(2\pi)^3\, 2E_f}\,(2\pi)^4\,\delta^4\Big(\sum_i p_i - \sum_f p_f\Big)\,|\mathcal{M}|^2
\]
The factor δ⁴ ensures energy-momentum conservation. This formula is used to calculate neutrino production rates and detection cross sections. If applied to neutrino production, it implies that the neutrino 4-momentum p = (E, p⃗) can be determined from the 4-momenta of all the other particles participating in the production process. E.g. for π → µ + ν decay: if the 4-momenta of π and µ have well-defined values, then the neutrino 4-momentum is fully determined by energy-momentum conservation.

A dichotomy
But: due to the on-shell relation E² = p⃗² + m², if the neutrino energy and momentum are exactly known, so is its mass!
The emitted neutrino is then a mass eigenstate rather than a flavor eigenstate (= a coherent superposition of different mass eigenstates). Neutrino oscillations cannot occur! (Mass eigenstates do not oscillate.)
A dichotomy: on the one hand, energy-momentum conservation is an exact law of nature. On the other hand, exact energy and momentum conservation at neutrino production or detection would apparently make the oscillations impossible. Significant confusion in the literature.

Kinematic entanglement – a way out?
A suggested way out (an attempt to base the theory of neutrino oscillations on exact energy-momentum conservation):
Assumption: the neutrino is produced in an entangled state with the accompanying particle(s), so that the 4-momentum of the entangled state obeys exact energy-momentum conservation.
Example: π → µ + ν decay. Take for the final-state wave function
\[
|\mu\,\nu\rangle = \sum_j |\mu(p_{\mu j})\rangle\, |\nu_j(p_{\nu j})\rangle
\]
For every value of the neutrino 4-momentum p_νj the corresponding muon 4-momentum p_µj is uniquely defined through p_νj + p_µj = p_π. The 4-momenta of the mass eigenstates ν_j are correlated with the 4-momenta of the accompanying muon (entanglement). Only p_π is fixed; p_µ and p_ν are allowed to vary.

Entanglement – contd.
Energy-momentum conservation is satisfied, yet the produced state contains a superposition of different neutrino mass eigenstates. But: because muon states of different 4-momenta are orthogonal, energy-momentum conservation would still make the interference (oscillatory) terms in P_αβ vanish – disentanglement is necessary.
Assumption: the muon is "measured", either through direct detection or through interaction with the environment, which leads to its localization ⇒ the necessary energy and momentum uncertainties are created.
– This completely misses the point that in reality the parent pion is always localized and so is described by a wave packet rather than by a plane wave. (Btw: without this localization the production coordinate would be completely undefined ⇒ the oscillations would be unobservable.)

Entanglement – contd.
The pion momentum has a spread σ_pπ around the mean momentum p_π ⇒ there is no strict correlation between the 4-momenta of ν and µ. For a given value p_νj the value of p_µ is no longer uniquely determined by p_νj + p_µ = p_π; it can take any value within a range of width ∼ σ_pπ. In other words, instead of p_νj + p_µj = p_π we now have p_νj + p_µj = p_πj, where p_πj is no longer uniquely fixed.
E.g., in the case when the 4-momenta of the components of the muon accompanying different ν_j are the same, p_µ1 = p_µ2 = p_µ3 ≡ p_µ: energy-momentum conservation is satisfied if |p_νj − p_νk| ≲ σ_pπ. No entanglement takes place!
NB: |p_νj − p_νk| ≲ σ_pπ is just the condition of coherent neutrino production/detection (necessary for the observability of ν oscillations). Due to the localization of the neutrino production and detection processes, kinematic entanglement is completely irrelevant to neutrino oscillations. The entanglement/disentanglement approach predicts no oscillations for ν's from π → µ + ν decay if the muon is not detected and does not interact with the environment; the wave packet approach predicts that the oscillations should occur. QM energy and momentum uncertainties play a crucial role! Energy and momentum uncertainties are in no contradiction with exact energy-momentum conservation! In describing the oscillations we have to deal with localized states which are not eigenstates of E and p but rather linear superpositions of their eigenstates.

QM energy and mom. uncertainties
Neutrino oscillations are a QM interference phenomenon ⇒ they owe their very existence to QM uncertainty relations. Coordinate-momentum and time-energy uncertainty relations are implicated in the oscillation phenomenon in a number of ways:
It is the E and p uncertainties of the emitted ν state that allow it to be a coherent superposition of states of well-defined and different mass.
Similarly, for neutrino detection to be coherent, the E and p uncertainties inherent in the detection process should be large enough to prevent a determination of the absorbed neutrino's mass in this process.
QM uncertainty relations determine the size of the neutrino wave packets ⇒ they are crucial to the issue of the loss of coherence due to wave packet separation.
They are important for understanding how the produced and detected neutrino states are disentangled from the accompanying particles.

When are neutrino oscillations observable?
Keyword: coherence. Neutrino flavour eigenstates ν_e, ν_µ and ν_τ are coherent superpositions of mass eigenstates ν_1, ν_2 and ν_3 ⇒ oscillations are only observable if
neutrino production and detection are coherent, and
coherence is not (irreversibly) lost during neutrino propagation.
Possible decoherence at production (detection): if by accurate E and p measurements one can tell (through E² = p² + m²) which mass eigenstate is emitted, the coherence is lost and the oscillations disappear! Full analogy with electron interference in double-slit experiments: if one can establish which slit the detected electron has passed through, the interference fringes are washed out.

When are neutrino oscillations observable?
Another source of decoherence: wave packet separation due to the difference of group velocities ∆v of different mass eigenstates. If coherence is lost, flavour transitions can still occur, but in a non-oscillatory way. E.g. for π → µν_i decay with a subsequent detection of ν_i with the emission of e:
\[
P \propto \sum_i P_{\rm prod}(\mu\,\nu_i)\,P_{\rm det}(e\,\nu_i) \propto \sum_i |U_{\mu i}|^2 |U_{e i}|^2
\]
– the same result as for averaged oscillations.

A consistent approach – WP formalism
The evolved produced state:
\[
|\nu_\alpha^{\rm fl}(\vec{x},t)\rangle = \sum_j U_{\alpha j}^*\,\Psi_j^S(\vec{x},t)\,|\nu_j^{\rm mass}\rangle
\]
The coordinate-space wave function of the jth mass eigenstate (a wave packet):
\[
\Psi_j^S(\vec{x},t) = \int \frac{d^3 p}{(2\pi)^3}\, f_j^S(\vec{p})\, e^{i\vec{p}\vec{x} - iE_j(p)t}
\]
The momentum distribution function f_j^S(p⃗) has a sharp maximum at p⃗ = P⃗ (width of the peak σ_p^P ≪ P). The detected state (centered at x⃗ = L⃗):
\[
|\nu_\beta(\vec{x})\rangle = \sum_k U_{\beta k}^*\,\Psi_k^D(\vec{x})\,|\nu_k^{\rm mass}\rangle,
\qquad
\Psi_k^D(\vec{x}) = \int \frac{d^3 p}{(2\pi)^3}\, f_k^D(\vec{p})\, e^{i\vec{p}(\vec{x}-\vec{L})}
\]

Oscillation probability
Transition amplitude:
\[
A_{\alpha\beta}(T,\vec{L}) = \langle \nu_\beta^{\rm fl} | \nu_\alpha^{\rm fl}(T,\vec{L})\rangle = \sum_j U_{\alpha j}^* U_{\beta j}\, A_j(T,\vec{L}),
\qquad
A_j(T,\vec{L}) = \int \frac{d^3 p}{(2\pi)^3}\, f_j^S(\vec{p})\, f_j^{D*}(\vec{p})\, e^{-iE_j(p)T + i\vec{p}\vec{L}}
\]
A_j is strongly suppressed unless |L⃗ − v⃗_j T| ≲ σ_x. E.g., for Gaussian wave packets:
\[
A_j(T,\vec{L}) \propto \exp\Big[-\frac{(\vec{L}-\vec{v}_j T)^2}{4\sigma_x^2}\Big],
\qquad \sigma_x^2 \equiv \sigma_{xP}^2 + \sigma_{xD}^2
\]
Oscillation probability:
\[
P(\nu_\alpha \to \nu_\beta; T,\vec{L}) = |A_{\alpha\beta}|^2 = \sum_{j,k} U_{\alpha j}^* U_{\beta j} U_{\alpha k} U_{\beta k}^*\, A_j(T,\vec{L})\, A_k^*(T,\vec{L})
\]

Oscillation probability
Neutrino emission and detection times are not measured (or not accurately measured) in most experiments ⇒ integration over T:
\[
P(\nu_\alpha \to \nu_\beta; L) = \int dT\, P(\nu_\alpha \to \nu_\beta; T,\vec{L}) = \sum_{j,k} U_{\alpha j}^* U_{\beta j} U_{\alpha k} U_{\beta k}^*\, \tilde{I}_{jk}\, e^{-i\frac{\Delta m^2_{jk}}{2\bar{P}} L}
\]
with
\[
\tilde{I}_{jk} = N \int dq\, f_j^S\big(r_k q - \tfrac{\Delta E_{jk}}{2v} + P_j\big)\, f_j^{D*}\big(r_k q - \tfrac{\Delta E_{jk}}{2v} + P_j\big)\, f_k^{S*}\big(r_j q + \tfrac{\Delta E_{jk}}{2v} + P_k\big)\, f_k^{D}\big(r_j q + \tfrac{\Delta E_{jk}}{2v} + P_k\big)\, e^{i\frac{\Delta v}{v} qL}
\]
where v ≡ (v_j + v_k)/2, ∆v ≡ v_k − v_j, r_{j,k} ≡ v_{j,k}/v and N ≡ 1/[2E_j(P) 2E_k(P) v].
For (∆v/v)σ_p L ≪ 1 (i.e. L ≪ l_coh = (v/∆v)σ_x), Ĩ_jk is approximately independent of L; in the opposite case Ĩ_jk is strongly suppressed. Ĩ_jk is also strongly suppressed unless ∆E_jk/v ≪ σ_p, i.e. ∆E_jk ≪ σ_E – the coherent production/detection condition.

Conservation of ⟨E⟩ and ⟨p⃗⟩
For the initially produced flavour state |ν_α⟩:
\[
\langle E_\alpha\rangle \equiv \langle\nu_\alpha|H|\nu_\alpha\rangle = \sum_j |U_{\alpha j}|^2 \langle E_j\rangle,
\qquad
\langle \vec{p}_\alpha\rangle \equiv \langle\nu_\alpha|\vec{p}|\nu_\alpha\rangle = \sum_j |U_{\alpha j}|^2 \langle \vec{p}_j\rangle
\]
For the evolved state |ν(x⃗,t)⟩ the E and p expectation values are
\[
\langle E\rangle \equiv \langle\nu|H|\nu\rangle = \langle E_\alpha\rangle,
\qquad
\langle \vec{p}\rangle \equiv \langle\nu|\vec{p}|\nu\rangle = \langle \vec{p}_\alpha\rangle
\]
⟨E⟩ and ⟨p⃗⟩ do not change in the course of the oscillations. In neutrino oscillations energy-momentum conservation manifests itself as conservation of the mean values of the neutrino energy and momentum.

Flavor independence of ⟨E⟩ and new physics
Ahluwalia & Schritt, arXiv:0911.2965: assume that the mean neutrino energies ⟨E_α⟩ measured in ν oscillation experiments are flavor independent: ⟨E_e⟩ = ⟨E_µ⟩ = ⟨E_τ⟩. Then either sterile neutrinos must exist, or neutrinos must have some new interactions beyond those of the standard model. (Flavour independence of ⟨E_α⟩ is inconsistent with what is known about the usual 3-flavour neutrino mixing.)
A 2-flavour example:
\[
\langle E_e\rangle = \langle E_1\rangle \cos^2\theta + \langle E_2\rangle \sin^2\theta,
\qquad
\langle E_\mu\rangle = \langle E_1\rangle \sin^2\theta + \langle E_2\rangle \cos^2\theta
\]
For m_1 ≠ m_2 one has ⟨E_1⟩ ≠ ⟨E_2⟩ ⇒ the condition ⟨E_e⟩ = ⟨E_µ⟩ would exclude all values of the mixing angle except θ = 45°, corresponding to maximal leptonic mixing!

Flavour independence of ⟨E⟩ – contd.
There is no physical reason to expect flavour independence of ⟨E_α⟩! The oscillated state |ν(t, L⃗)⟩ can be represented as
\[
|\nu(t,\vec{L})\rangle = A_e(t,\vec{L})|\nu_e\rangle + A_\mu(t,\vec{L})|\nu_\mu\rangle + A_\tau(t,\vec{L})|\nu_\tau\rangle,
\]
with A_α(t, L⃗) the probability amplitudes of finding the flavour states ν_α (α = e, µ, τ) at x = (t, L⃗). The expectation value ⟨E⟩ in the state |ν(t, L⃗)⟩ remains constant and equal to that in the initially produced flavour state. But this does not mean that each individual flavour component of the state has the same energy expectation value! The conjecture of Ahluwalia & Schritt actually contradicts the observability conditions for neutrino oscillations.

Flavour independence of ⟨E⟩ – contd.
An assumption of Ahluwalia & Schritt: the detection process does not distort the measured neutrino energy, because "by choosing an appropriate kinematical setting, this 'back reaction' can be made average to zero". But this condition would actually make neutrino oscillations unobservable! In practical terms the condition of "no back reaction" actually means |⟨E_α⟩ − ⟨E_β⟩| ≫ σ_E.
Notation: ⟨E_j⟩ ≡ Ē_j. Then
\[
\langle E_\alpha\rangle - \langle E_\beta\rangle = \sum_i \big(|U_{\alpha i}|^2 - |U_{\beta i}|^2\big)\bar{E}_i = \sum_i \big(|U_{\alpha i}|^2 - |U_{\beta i}|^2\big)\big(\bar{E}_i - \bar{E}_k\big)
\]
(Ē_k is the mean energy of any one of the neutrino mass eigenstates; unitarity of U is used.)
Taking into account |U_αi|² ≤ 1, from |⟨E_α⟩ − ⟨E_β⟩| ≫ σ_E it follows that |Ē_i − Ē_k| ≫ σ_E for some i ≠ k. But this is just the opposite of the coherent detection condition, which is a necessary condition for the observability of neutrino oscillations! The coherent detection condition ensures that the detection process cannot discriminate between different neutrino mass eigenstates. If it is violated, the neutrino cannot be detected as a coherent superposition of different mass eigenstates. The fact that the oscillations have been observed completely rules out the conjecture of Ahluwalia & Schritt.

Conclusions
Attempts at a simple-minded application of energy-momentum conservation to neutrino oscillations are inconsistent and lead to no oscillations at all.
Kinematic entanglement is irrelevant to neutrino oscillations.
Energy-momentum conservation manifests itself in ν oscillations as conservation of the expectation values of the ν energy and momentum in the course of the oscillations.
Neutrino oscillations are a QM interference phenomenon; they owe their existence to QM uncertainty relations. The coordinate-momentum and time-energy uncertainty relations play a crucial role.
QM uncertainties are in no conflict with energy-momentum conservation!

Backup slides

When are neutrino oscillations observable?
How are the oscillations destroyed? Suppose that by measuring the momenta and energies of the particles at neutrino production (or detection) we can determine the neutrino energy E and momentum p with uncertainties σ_E and σ_p. From E_i² = p_i² + m_i²:
\[
\sigma_{m^2} = \big[(2E\sigma_E)^2 + (2p\sigma_p)^2\big]^{1/2}
\]
If σ_{m²} < ∆m² = |m_i² − m_k²|, one can tell which mass eigenstate is emitted. σ_{m²} < ∆m² implies 2pσ_p < ∆m², or σ_p < ∆m²/2p ∼ 1/l_osc.
But: to measure p with the accuracy σ_p one needs to measure the momenta of the particles at production with (at least) the same accuracy ⇒ the uncertainty of their coordinates (and of the coordinate of the ν production point) will be σ_{x,prod} ≳ σ_p⁻¹ > l_osc. Oscillations washed out. Similarly for neutrino detection.
Natural necessary condition for coherence (observability of oscillations): L_source ≪ l_osc, L_det ≪ l_osc. No averaging of the oscillations in the source and detector. Satisfied with very large margins in most cases of practical interest.

Wave packet separation
Wave packets representing different mass-eigenstate components have different group velocities v_gi ⇒ after a time t_coh (the coherence time) they separate ⇒ the neutrinos stop oscillating! (Only the averaged effect is observable.) Coherence time and length:
\[
\Delta v \cdot t_{\rm coh} \simeq \sigma_x,
\qquad l_{\rm coh} \simeq v\, t_{\rm coh},
\qquad \Delta v \simeq \frac{\Delta m^2}{2E^2}
\quad\Rightarrow\quad
l_{\rm coh} \simeq \frac{v}{\Delta v}\,\sigma_x \simeq \frac{2E^2}{\Delta m^2}\,\sigma_x
\]
The standard formula for P_osc is obtained when the decoherence effects are negligible.

A manifestation of neutrino coherence
Even the non-observation of neutrino oscillations at distances L ≪ l_osc is a consequence of, and evidence for, the coherence of neutrino emission and detection!
Two-flavour example (e.g. for ν_e emission and detection):
\[
A_{\rm prod/det}(\nu_1) \sim \cos\theta,
\qquad
A_{\rm prod/det}(\nu_2) \sim \sin\theta
\]
\[
A(\nu_e \to \nu_e) = \sum_i A_{\rm prod}(\nu_i)\,A_{\rm det}(\nu_i) \sim \cos^2\theta + e^{-i\Delta\phi}\sin^2\theta
\]
The phase difference ∆φ vanishes at short L ⇒ P(ν_e → ν_e) = (cos²θ + sin²θ)² = 1.
If ν_1 and ν_2 were emitted and absorbed incoherently, one would have to sum probabilities rather than amplitudes:
\[
P(\nu_e \to \nu_e) \sim \sum_i |A_{\rm prod}(\nu_i)\,A_{\rm det}(\nu_i)|^2 \sim \cos^4\theta + \sin^4\theta < 1
\]
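The two-flavour coherence argument on the last slide is easy to check numerically. A minimal sketch comparing the coherent amplitude sum with the incoherent probability sum at short baseline (∆φ ≈ 0); the mixing angle θ below is chosen only for illustration:

```python
import cmath
import math

theta = 0.6  # illustrative two-flavour mixing angle, radians (not a measured value)
c2, s2 = math.cos(theta) ** 2, math.sin(theta) ** 2

# Coherent emission/detection: sum the amplitudes, then square.
# At L << l_osc the relative phase delta_phi vanishes (fully constructive).
delta_phi = 0.0
p_coherent = abs(c2 + s2 * cmath.exp(-1j * delta_phi)) ** 2

# Incoherent emission/detection: sum the probabilities of the mass eigenstates.
p_incoherent = c2 ** 2 + s2 ** 2

print(p_coherent)    # ~1: no flavour change at short baseline
print(p_incoherent)  # cos^4(theta) + sin^4(theta), strictly below 1
```

The gap between the two numbers, 2 cos²θ sin²θ, is exactly the interference term that coherence preserves and incoherent summation discards.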
{"url":"https://studyres.com/doc/17730306/neutrino-oscillations--energy-momentum-conservation-and","timestamp":"2024-11-10T18:32:28Z","content_type":"text/html","content_length":"99411","record_id":"<urn:uuid:2a0914df-c1d1-44a1-893e-4acfbc97ba07>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00234.warc.gz"}
15.1: Finding a Home for Irrational Numbers (5 minutes)
In this warm-up, students place numbers involving square roots in their approximate location on the number line. In the associated Algebra 1 lesson, students will examine irrational solutions to quadratic equations. This warm-up helps students get a better sense of the value of these numbers. Students use appropriate tools strategically (MP5) when they locate irrational numbers on a number line.
Student Facing
1. \(\sqrt{5}\)
2. \(\text{-}\sqrt{13}\)
3. \(3+\sqrt{2}\)
4. \(3-\sqrt{2}\)
Activity Synthesis
The purpose of this discussion is for students to understand that irrational numbers also have a position on the number line and to get a sense of their approximate location. Ask students,
• “The decimal value for a number like \(\sqrt{5}\) goes on forever and does not repeat. Does this mean the value is infinite? Does it mean the position on the number line moves?” (No, the value \(\sqrt{5}\) has a fixed position on the number line somewhere between 2 and 3. For comparison, the number \(\frac{1}{3}\) also has an infinite decimal expansion, but is exactly one-third of the way between 0 and 1. Similarly, \(\sqrt{5}\) has a certain position on the number line that does not move.)
• “How can you know that \(\sqrt{20}\) is somewhere between 4 and 5?” (I know that \(\sqrt{16} = 4\), \(\sqrt{25} = 5\), and \(\sqrt{16} < \sqrt{20} < \sqrt{25}\), so it must be between those values.)
15.2: Solving for Missing Sides (20 minutes)
In this activity, students use the Pythagorean Theorem to find the missing side of a right triangle. In the associated Algebra 1 lesson, students examine irrational solutions to quadratic equations. This concrete example gives students support for thinking about irrational values. Ask students what they remember about the connection between the lengths of the sides of a right triangle. If it does not come up in discussion, remind students of the Pythagorean Theorem.
Here are some examples of right triangles to show as needed.
Student Facing
For each triangle, use the Pythagorean Theorem to find the length of the missing side.
Activity Synthesis
The purpose of the discussion is to understand that irrational values have an actual value that can be seen as the length of a side. Select students to share their responses and reasoning. Ask students,
• “Use the first triangle to estimate the value of \(\sqrt{13}\) by comparing it to 2, 3, 4, and 5.” (It is greater than 3, since it is the hypotenuse of a triangle with a leg of length 3. It looks a little less than 4, since 2 of the sides of length 2 in that image seem longer than the hypotenuse.)
• “Use the fact that \(4 + 100 = 104\) to sketch a line that has length \(\sqrt{104}\).” (I can draw a right triangle with leg lengths 2 and 10; then the hypotenuse has a length of \(\sqrt{104}\).)
15.3: Solving with Square Roots (15 minutes)
In this activity, students solve quadratic equations of the form \((x+a)^2 = b\) and estimate the value of any irrational solutions. In the associated Algebra 1 lesson, students see irrational solutions for quadratic equations solved by completing the square. Practicing this portion of solving by completing the square can help students focus on the other parts of the method. Ask students, “What do you know about the solution to \(x^2 = 18\)?” Ensure that students discuss:
• there are two solutions, \(\pm \sqrt{18}\)
• its value is between \(\pm 4\) and \(\pm 5\)
• that \(\sqrt{18}\) represents an exact value rather than an approximate value such as 4.2426.
Tell students to represent their solutions as exact values such as \(\sqrt{18}\) rather than approximate values.
Student Facing
Solve each of these equations. Represent the solutions exactly. If the solution is not a whole number, what 2 whole numbers does each solution lie between? Be prepared to explain your reasoning.
1. \((x+1)^2 = 64\)
2. \((x-3)^2 - 4 = 0\)
3. \(x^2 = 10\)
4. \((x-2)^2 = 12\)
5. \((x+3)^2 = 24+4\)
Activity Synthesis
The purpose of the discussion is to understand the approximate values of irrational numbers by comparing them to integers. Display the first number line and select students to add their solutions to the image in their approximate positions. After the solutions have been added, display this number line. Ask students if they would like to move any of their solutions on this updated number line.
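The bracketing strategy used in both activities, trapping a square root between consecutive whole numbers by comparing perfect squares, can be sketched in a few lines of Python. The helper name below is made up for illustration:

```python
import math

def whole_number_bounds(n):
    """Return consecutive whole numbers (a, a + 1) with a < sqrt(n) < a + 1,
    found by comparing perfect squares (no decimal approximation needed).
    If n is itself a perfect square, both bounds equal sqrt(n)."""
    a = math.isqrt(n)  # largest whole number with a * a <= n
    return (a, a + 1) if a * a != n else (a, a)

# sqrt(13), the hypotenuse of the first triangle, lies between 3 and 4:
print(whole_number_bounds(13))   # (3, 4)

# Solving (x - 2)^2 = 12 exactly gives x = 2 + sqrt(12) (taking the positive root):
lo, hi = whole_number_bounds(12)  # sqrt(12) is between 3 and 4
print(2 + lo, 2 + hi)             # so 2 + sqrt(12) lies between 5 and 6
```

This mirrors the class discussion: the bounds come from `\(\sqrt{16} < \sqrt{20} < \sqrt{25}\)`-style comparisons of squares, not from a decimal such as 4.2426.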
{"url":"https://curriculum.illustrativemathematics.org/HS/teachers/4/7/15/index.html","timestamp":"2024-11-07T23:36:29Z","content_type":"text/html","content_length":"113375","record_id":"<urn:uuid:24b7a534-3fbf-4e7c-b94a-a49287d9208f>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00359.warc.gz"}
Database of Original & Non-Theoretical Uses of Topology
A Novel Method of Extracting Topological Features From Word Embeddings (2020)
Shafie Gholizadeh, Armin Seyeditabari, Wlodek Zadrozny
Abstract
In recent years, topological data analysis has been utilized for a wide range of problems to deal with high dimensional noisy data. While text representations are often high dimensional and noisy, there are only a few works on the application of topological data analysis in natural language processing. In this paper, we introduce a novel algorithm to extract topological features from word embedding representations of text that can be used for text classification. Working on word embeddings, topological data analysis can interpret the embedding high-dimensional space and discover the relations among different embedding dimensions. We will use persistent homology, the most commonly used tool from topological data analysis, for our experiment. Examining our topological algorithm on long textual documents, we will show that our defined topological features may outperform conventional text mining features.
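As a concrete illustration of the persistent homology machinery the abstract refers to (this is a generic sketch, not the authors' algorithm), here is a self-contained computation of 0-dimensional persistence for a point cloud: components are born at scale 0 and die when growing balls merge them, tracked with a union–find over edges sorted by length.

```python
import math
from itertools import combinations

def persistence_h0(points):
    """0-dimensional persistent homology of a point cloud: each connected
    component is born at scale 0 and dies at the edge length that merges it
    into another component; one component survives forever (death = inf)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # All pairwise edges, sorted by Euclidean length (the filtration).
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj      # two components merge at scale d
            deaths.append(d)     # ... so one of them dies here
    deaths.append(math.inf)      # the last surviving component
    return [(0.0, d) for d in sorted(deaths)]

# Two well-separated clusters: two short bars, one long bar, one infinite bar.
pts = [(0.0,), (0.1,), (5.0,), (5.1,)]
print(persistence_h0(pts))
```

The long finite bar (death near 4.9) reflects the two-cluster structure; in the paper's setting the points would be rows or columns of a word-embedding matrix rather than this toy example.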
EViews Help: Additional Topics
Additional Topics
Dealing with Estimation Problems
Since EViews uses nonlinear estimation algorithms to estimate ARMA models, all of the discussion in “Solving Estimation Problems” is applicable, especially the advice to try alternative starting values. There are a few other issues to consider that are specific to estimation of ARMA and ARFIMA models.
First, MA models are notoriously difficult to estimate. In particular, you should avoid high-order MA terms unless absolutely required for your model, as they are likely to cause estimation difficulties. For example, a single large autocorrelation spike at lag 57 in the correlogram does not necessarily require you to include an MA(57) term in your model unless you know there is something special happening every 57 periods. It is more likely that the spike in the correlogram is simply the product of one or more outliers in the series. By including many MA terms in your model, you lose degrees of freedom, and may sacrifice stability and reliability of your estimates.
If the underlying roots of the MA process have modulus close to one, you may encounter estimation difficulties, with EViews reporting that it cannot improve the sum-of-squares or that it failed to converge in the maximum number of iterations. This behavior may be a sign that you have over-differenced the data. You should check the correlogram of the series to determine whether you can re-estimate with one less round of differencing. Lastly, if you continue to have problems, you may wish to turn off MA backcasting.
For a discussion of how to estimate TSLS specifications with ARMA errors, see “Nonlinear Two-stage Least Squares”.
Nonlinear Models with ARMA errors
EViews will estimate nonlinear ordinary and two-stage least squares models with autoregressive error terms.
For details, see the discussion in “Nonlinear Least Squares”.
Weighted Models with ARMA errors
EViews does not offer built-in procedures to automatically estimate weighted models with ARMA error terms. You can, of course, always construct weighted series and then perform estimation using the weighted data and ARMA terms. Note that this procedure implies a very specific assumption about the properties of your data.
Two-Stage Regression Models with Serial Correlation
By combining two-stage least squares or two-stage nonlinear least squares with AR terms, you can estimate models where there is correlation between regressors and the innovations as well as serial correlation in the residuals. If the original regression model is linear, EViews uses the Marquardt algorithm to estimate the parameters of the transformed specification. If the original model is nonlinear, EViews uses Gauss-Newton to estimate the AR-corrected specification. For further details on the algorithms and related issues associated with the choice of instruments, see the discussion in “TSLS with AR errors”.
Nonlinear Models with ARMA Errors
EViews can estimate nonlinear regression models with ARMA errors. For example, suppose you wish to estimate the following nonlinear specification with an AR(2) error (reconstructed here from the command below):
cs_t = c(1) + gdp_t^c(2) + u_t,   u_t = c(3)*u_(t-1) + c(4)*u_(t-2) + e_t
Simply specify your model using EViews expressions, followed by an additive term describing the AR correction enclosed in square brackets. The AR term should contain a coefficient assignment for each AR lag, separated by commas:
cs = c(1) + gdp^c(2) + [ar(1)=c(3), ar(2)=c(4)]
EViews transforms this nonlinear model by differencing, and estimates the transformed nonlinear specification using a Gauss-Newton iterative procedure (see “Initializing the AR Errors”).
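The quasi-differencing transform described above can be illustrated outside EViews. This is a minimal sketch in Python, for the simpler linear AR(1) case, of a Cochrane–Orcutt-style iteration: estimate the regression, estimate rho from the residuals, regress on quasi-differenced data, and repeat. The data are synthetic and the routine is illustrative, not EViews' actual algorithm.

```python
import random

def ols_simple(x, y):
    """Closed-form OLS for y = a + b*x + e."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def cochrane_orcutt(x, y, iters=25):
    """Iterative AR(1) correction: estimate (a, b), estimate rho from the
    residuals, then re-estimate on quasi-differenced data y_t - rho*y_{t-1}."""
    a, b = ols_simple(x, y)
    rho = 0.0
    for _ in range(iters):
        u = [yi - a - b * xi for xi, yi in zip(x, y)]
        # rho from regressing u_t on u_{t-1} (no intercept)
        rho = sum(u[t] * u[t - 1] for t in range(1, len(u))) / \
              sum(ut * ut for ut in u[:-1])
        xs = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
        ys = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
        a_qd, b = ols_simple(xs, ys)
        a = a_qd / (1.0 - rho)   # the constant was scaled by (1 - rho)
    return a, b, rho

# Synthetic data: y = 1 + 2*x + u, with AR(1) errors u_t = 0.7*u_{t-1} + eps_t
random.seed(1)
x, y, u = [], [], 0.0
for t in range(500):
    u = 0.7 * u + random.gauss(0.0, 0.5)
    x.append(t / 100.0)
    y.append(1.0 + 2.0 * x[-1] + u)

a_hat, b_hat, rho_hat = cochrane_orcutt(x, y)
print(a_hat, b_hat, rho_hat)   # close to the true values 1, 2, 0.7
```

The same logic underlies the AR terms in the EViews specification: the regression is estimated on quasi-differenced data so that the transformed errors are (approximately) serially uncorrelated.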
[QSMS Monthly Seminar] Introduction to modular curves and modular forms
Date and time: Friday, November 27, 2:00 PM – 6:00 PM
Location: Building 27, Room 220
Title: Introduction to modular curves and modular forms
Speaker: Hwajong Yoo (Seoul National University)
We discuss the notion of modular curves and modular forms and various related problems in the context of number theory.
Title: Wall-crossing in Floer theory
Speaker: Hansol Hong (Yonsei University)
The wall-crossing phenomenon in Lagrangian Floer theory refers to a drastic change in holomorphic disk counting while the bounding Lagrangians vary continuously. In particular, wall-crossings for Lagrangian tori can be formulated in terms of cluster transformations, which brings cluster varieties as principal objects into the study of mirror symmetry. I will discuss various features of the wall-crossing in Floer theory, making use of concrete examples, and explain its combinatorial nature.
Frontiers in Quantum Computing: John Preskill
Download Preskill’s slides here: QuantHEP-Seminar-2020-10-John-Preskill
John Phillip Preskill (born January 19, 1953) is an American theoretical physicist and the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology, where he is also the Director of the Institute for Quantum Information and Matter. Preskill is a leading scientist in the field of quantum information science and quantum computation, and he is known for coining the term “quantum supremacy” in a 2012 paper. Preskill received his Ph.D. in physics from Harvard University in 1980. His graduate adviser at Harvard was Steven Weinberg. While still a graduate student, Preskill made a name for himself by publishing a paper on the cosmological production of superheavy magnetic monopoles in Grand Unified Theories. Since we do not observe any magnetic monopoles, this work pointed out serious flaws in the then-current cosmological models, a problem which was later addressed by Alan Guth and others by proposing the idea of cosmic inflation. Since 2000 he has been the Director of the Institute for Quantum Information at Caltech. In recent years most of his work has been in mathematical issues related to quantum computation and quantum information theory. Preskill has achieved some notoriety in the popular press as party to a number of bets involving fellow theoretical physicists Stephen Hawking and Kip Thorne. Hawking conceded the Thorne–Hawking–Preskill bet in 2004 and gave Preskill a copy of Total Baseball, The Ultimate Baseball Encyclopedia. Preskill was elected as a Fellow of the American Physical Society in 1991 and a member of the National Academy of Sciences in 2014.
1.2 Science and Experimentation
By the end of this section you will be able to:
• Describe why science is considered a discipline of philosophy.
• Summarize the four basic types of experiments.
• Apply the principles of experimental design in this course and in your daily life.
Thinking about science
The primary goal of this section is to help you think about the nature of science. You might be taking this course to fulfill an undergraduate requirement for a biology course with a lab. This course fulfills that requirement because we investigate the process behind using science as a way of learning about the natural world around us. If you’re starting down the path to becoming a plant scientist, understanding the nature of science will be essential for you in your career. Regardless of whether you’re going to pursue a career as a scientist, now is a good time to reflect on the nature of science, and to understand how scientific thinking can become a strategy for resolving many issues that you confront during daily life. Watch this video about connecting science and experimentation to real life:
Scientific inquiry
While “science” is a word commonly used in our culture, in popular use it is rarely spoken of as a philosophy. By identifying science as a philosophy we are taking an epistemic view, one focusing on how knowledge is acquired. At its core, science is a mode of inquiry: a way of acquiring new knowledge about the world around us and a strategy for understanding the inner workings of elements in that world. Scientists believe that if we follow the principles of this philosophy we will continue to expand our knowledge about how things work in the world around us. This systematic approach is called the “scientific method.” There are two key steps in the scientific method:
• Hypothesis building through reflective observation.
• Hypothesis testing through experimentation.
A “hypothesis” is a question or proposed explanation made on the basis of limited evidence and used as a starting point for experimentation. Experimentation is commonly equated with science—rightly so, because hypotheses are evaluated on the basis of evidence generated through experiments. Experimentation, however, isn’t the whole story. Science—including the development and testing of new hypotheses—is also a creative endeavor. Watch this video about scientific inquiry:
Scientific inquiry has generated a vast body of knowledge about the world around us. Your school science classes might have required you to memorize facts and relationships, and pay attention to detail. Sometimes such memorization leads students to believe that science is just an accumulation of facts rather than the process behind discovering all of that information. Scientific discovery builds on what is already known. Even the most accomplished scientists initially approach a problem by learning what is already known. Armed with that information, they then apply their own creativity to form new hypotheses about something they have observed, and design experiments to test those hypotheses. They also communicate their results publicly so that others can benefit from their work and have the opportunity to challenge conclusions. In this way, science builds on itself. The foundational knowledge you learn in science classes prepares you to develop and test hypotheses and to make new discoveries of your own. While a good memory may help you pass a science class, you will absorb a body of knowledge more effectively when you learn how facts fit and work together in systems rather than learning through the brute force of memorization. In this section we work from the point of view that science is a way of acquiring knowledge—a mode of inquiry—and that this mode of inquiry follows a process called the scientific method.
Those who follow the philosophy of science: • Use it to understand how the natural world works. • Start by learning what is already known. • Carefully observe the subjects of their scientific inquiry and look for details about form, function, and interaction with the environment. • Develop hypotheses about the inner workings of natural phenomena not yet understood. • Test their hypotheses by making observations, conducting experiments and collecting and evaluating evidence. • Communicate with others about their hypotheses, experiments, and the outcomes of their studies so that others can repeat, validate, and build upon their work. Although science is typically used to understand how the natural world works, it is also regularly applied to the development of new technologies that are based on these natural phenomena and to the solving of problems associated with the natural world. Putting the scientific method to work As noted, the scientific method relies on building hypotheses and then testing them through experimentation. In the lab section of this course you will develop hypotheses about the effects of various treatments on propagation success and then conduct experiments to test those hypotheses. Because experimentation is such a key component of the scientific method, we’ll spend time characterizing and examining four types of experimentation and explore whether they are part of the scientific method. While each is valuable when applied in the right circumstances, only one clearly follows each step of the scientific method to uncover new knowledge about the natural world. Types of experiments The types of experimentation we will cover are: • Demonstration • Evaluation • Exploration • Discovery Demonstration experiments Many experiments conducted in lab courses are demonstration experiments. Photo by Salish Sea Expeditions. 
CC BY-NC-ND 2.0 Demonstration experiments are a classic method used in educational settings to help students learn and understand known relationships already discovered by others. Learners will usually have had prior exposure to the relationships through preliminary observations, lectures, reading, and discussions, and will have some sense of what the experimental outcome might be. Good demonstration experiments actively involve the learner, who manipulates the experimental materials, applies the treatments, and observes the outcomes, then gathers, analyzes, and interprets the resulting data. Poor demonstration experiments, in contrast, make learners only passive witnesses to something done by an expert at the front of the classroom. In the plant propagation labs for this course, you will be actively engaged in demonstration experiments. Although you won’t be creating new knowledge, the knowledge will likely be new to you. The hands-on experience of conducting the experiments will help you to learn the concepts more effectively than if you only read a textbook or listened to a lecture. The techniques you learn and use in demonstration experiments often contribute to the learning experience as much as the relationships revealed at the experiment’s conclusion. Employing these techniques will help you gain an understanding of many biological functions, such as the production of adventitious roots and mechanisms for seed dispersal. While demonstration experiments are valuable for actively learning a body of scientific knowledge previously discovered and communicated by others, the experience is specifically orchestrated for teaching and learning, not for the discovery of new information. Yet since the knowledge is new to the learner, it can still bring the joy of personal discovery and a sense of accomplishment. In summary, demonstration experiments: • Are designed for teaching and learning. • Address relationships that may be new to you, but are otherwise known. 
• In their best forms, actively involve the learner.
• May emphasize experimental techniques, in addition to outcomes, as part of the learning experience.
• Are not the types of experiments that are at the core of practicing science as a way to uncover new knowledge.
Evaluation experiments
Evaluation experiments are designed to help us make decisions, and to choose from a number of options. They might, for instance, help us determine the efficacy of a new treatment relative to a known treatment, or decide on further experimentation. An evaluation experiment will highlight a compound, a technique, a piece of equipment, or an organism, and will include a control and/or other treatments for comparison. Evaluation experiments are common in horticultural and agronomic research, where the purpose of the experiment is to identify, for example, the best cultivar, production method, pest control, fertility regime, or light intensity for growing a crop. Correct experimental design is crucial for assuring that conclusions from the experiment are meaningful and credible. This field experiment is testing different living mulches between rows of strawberries. Photo by University of Minnesota West Central Research and Outreach Center. These experiments are typically used in the development of new technologies to identify the best method for the desired purpose (e.g., which pesticides are effective against the target insect, but not harmful to non-target insects). They are not used to discover new knowledge about how the world works, as they typically don’t advance our understanding of the natural world. The information from an evaluation experiment might, however, point the way to additional experimentation that does help us discover new knowledge. This is particularly true if the outcome of an evaluation experiment is unexpected or novel. In summary, evaluation experiments:
• Are used to help in decision-making.
• Help users choose a winner or determine efficacy relative to other alternatives.
• Are commonly used when evaluating and recommending horticultural production methods. • Can be useful in solving problems and developing technologies. • Require proper experimental design (e.g., comparison to a control) for credibility and meaningfulness. Exploration experiments Some scientists specialize in observing and cataloging nature, and some aggressively search for previously unknown phenomena. In the botanical realm, such scientists study the diversity of organisms within habitats, discover new species, or are in other ways very skilled in “seeing” nature. Explorer-scientists recognize and appreciate detail and can identify the enormous diversity among plants by comparing characteristics that might be overlooked by others. They may also have the capacity to recognize possible interrelationships among organisms and with habitats, making their work particularly important to science. They might notice, for instance, that a particular species of plant is commonly found in wet areas but not in dry, or that a particular vegetable tastes sweeter when grown at higher altitudes than when grown closer to sea level. They don’t confirm the cause of these relationships, but are the first to notice them. This scientist is collecting plants in Ecuador to identify unknown species and to determine relatedness to other plants. Photo by Dr. Eric Tepe, University of Cincinnati. Explorers’ observations are essential to stimulating the development of sound, testable hypotheses. The possible relationships they propose must be tested to determine whether those relationships actually exist, or are artifacts of other effects. Explorers help develop hypotheses, but the work of exploration, cataloging, and seeing possible relationships don’t prove or disprove the hypotheses or necessarily generate new knowledge about relationships. The work does, however, result in new information about the existence of the object or phenomenon itself. 
An exception is exploration done to test a hypothesis, such as a mission to test the hypothesis that a particular type of ecosystem is required for reproduction of a particular plant species. Scientists must resist jumping to conclusions based on exploration and observation alone. If you see two people together many times, for example, you might conclude that they are a romantic couple, when in fact they are brother and sister. Relationships hypothesized as a result of exploration and observation must be experimentally tested before they are accepted or rejected. Exploration experiments uncover new things, many of which can be exciting and eventually change our view of the world. While one of their greatest values is that they lead to the development of new and stronger hypotheses about how the world works, they do not go so far as to test those hypotheses or fully engage in the cycle of knowledge generation associated with the scientific method. Additional experiments based on this new information are required to put this new information in context and to advance our understanding of how the natural world works. In summary then, exploration experiments:
• Focus on detailed observation of organisms and habitats.
• Increase our knowledge of the natural world.
• Identify potential relationships that need to be tested.
• Are essential to sound and testable hypothesis-building.
Discovery experiments
Discovery experiments are central to the use of the scientific method in tasks ranging from problem solving to the discovery of new knowledge. They focus on uncovering new relationships and solving problems, follow the scientific method, test hypotheses and their predicted outcomes, and utilize a careful design in order to maintain meaningfulness and credibility. The similarity between the scientific method and Kolb’s Experiential Learning Cycle is not an accident.
The scientific method is a practical strategy based on how we sense and experience the world around us and used to solve problems encountered during those experiences. The scientific method is a great example of the experiential learning theory. Illustration by Emily Tepe. The diagram above illustrates a combination of the scientific method and Kolb’s four-step experiential learning, describing a cyclic process for solving problems that can be applied to disciplines as diverse as molecular biology, global warming, and even appliance repair. While you might initially think that appliance repair doesn’t belong in that list, the difference is one of application, not method. Though far removed from the esoteric scientific discoveries we associate with the scientific method, appliance repair follows the same steps. Appliances are often, and quite literally, boxes, where you don’t know what is going on inside. But what’s going on inside is knowable, and through that knowledge comes repair. We can apply the scientific method to everyday problems like figuring out why a washing machine isn’t working. Image used with permission © HomeTips.com. The learning/problem solving/scientific process could theoretically start anywhere in Kolb’s cycle. But it will likely start with a problem that needs to be solved, something you don’t understand but would like to know more about. You become aware that there is a problem or that you lack understanding because you have an experience where you observe something and then step back and say, “I wonder how that works,” or perhaps, “why is that broken?” Through observation you develop a sufficiently adequate description of the problem to start doing some research on what is already known. With a good description of the problem in hand, you can begin to review what is known through the work of others, and think about what might be going on in your situation and how your new understanding can be applied to the problem.
This is “reflective observation.” It isn’t just sitting back and thinking in a vacuum. You need raw material for your mind to work on, and that only comes through the tough task of gathering and engaging with the background information. There is a very important quiet phase in this process when you let your mind assemble and sort through ideas until alternatives begin to emerge that might lead to a solution. Talking with others and sharing ideas is an important part of this quiet phase. Sometimes the alternatives are no-brainers (blown fuse?), and sometimes they’re more creative (residue from the wrong detergent gunking up the water level sensor?). Regardless of their simplicity or complexity, these become hypotheses that need to be tested. The hypothesis-building stage includes both a statement of how something works or why it isn’t working, and predictions about what might happen if the hypothesis is true. In appliance repair, for example, the prediction will likely be that the appliance will function normally. In horticultural molecular biology, it might be that you will see accumulation of a particular type of fatty acid in the cotyledons. You put the hypothesis to the test by designing an experiment that assesses whether your predictions were right. If the outcome doesn’t match your prediction, you reject the hypothesis (the fuse was ok, so that wasn’t the problem). If the outcome does match your prediction, you tentatively accept the hypothesis pending further observation (when the fuse was replaced the washing machine worked again, so it might have been a blown fuse, but on the other hand maybe it was just because the motor had time to cool down). As with evaluation experimentation, experimental design is important in assuring that the conclusions from the experiment are meaningful and credible. Experimentation leads to new experiences and an incremental increase in knowledge, and then the cycle begins again.
In summary, discovery experiments: • Focus on uncovering new relationships and solving problems. • Follow scientific method. • Test hypotheses and their predicted outcomes. • Utilize a careful design in order to maintain meaningfulness and credibility. Of the four types of experiments, only the discovery experiments are core to the process of science in the narrow sense of being a way of acquiring new knowledge. The other three types of experimentation are still important; demonstration and evaluation experiments are valuable for learning and decision-making and for technology development, and exploration experiments are essential for developing testable hypotheses. But discovery experiments are core to science. Remember: the methodology of effective washing machine repair, when applied to what is unknown about the physical world, is the methodology of science. It’s not esoteric; it’s good appliance repair. You might argue that, when applied to a broken washing machine, a discovery experiment results in knowledge that is probably already known by those skilled in appliance repair, so it isn’t really new knowledge about how the world works. That’s a fair criticism. Use of the scientific method can result in new knowledge about how the world works, but whether it uncovers new knowledge depends on the object of experimentation. Experimental design The methods for designing experiments are carefully studied and often discipline-specific. Methods used in molecular biology, for instance, will be somewhat different from those used in chemistry or in field evaluations of horticultural plants. There are, however, some generalizations we can make about good experimental designs. Emphasize comparisons Experiments include more than just one treatment. “Treatment” refers to the factor that you are varying in your experiment—for example, different cultivars of tomato, different fertilizers, or different amounts of light. Experimental designs incorporate comparison of treatments. 
You usually compare the treatments to one another and often to a control, which is either the application of no treatment or the application of a customary or standard level of treatment. If you grow a particular type of tomato in your garden, and find that it produces tasty fruit, would you declare it to be the best tomato variety you could grow? Certainly not. You couldn’t even say with certainty that it was the best tomato variety you have ever grown (unless it is the only one you have grown). Next year, however, you could grow that tomato as your control, and grow two other varieties that your neighbors like, and compare fruit quality (appearance, flavor, yield, sugar content). You could then say something definitive about the three tomato varieties because you have compared them to each other after growing them next to each other in the same year and environment. Replicate treatments The same treatment is applied to more than one “experimental unit”—the object that receives the treatment. In the example above, the tomato plant is the experimental unit, and you would perhaps plant two or three seedlings of each tomato variety rather than just one. Think of a treatment as something like a fertilizer spread on a patch of land. The patch of land is the experimental unit, while the fertilizer is the treatment. By applying the treatment to more than one experimental unit you can estimate the variation you get when two experimental units are treated the same, and compare this to the variation when experimental units are given different treatments. If the treatments actually differ in their effectiveness, you would expect the variation between experimental units given different treatments to be much greater than the variation between those given the same treatment. This is one of the fundamental ways in which experiments are statistically analyzed and treatments declared significantly different or not. 
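The within- versus between-treatment comparison described here is the heart of the analysis of variance. A minimal sketch in Python, using made-up yield numbers (the varieties and values are hypothetical, purely for illustration):

```python
from statistics import mean

# Hypothetical tomato yields (kg) for three varieties, three plants each.
yields = {
    "variety_a": [4.1, 3.9, 4.3],
    "variety_b": [5.8, 6.1, 5.9],
    "variety_c": [4.0, 4.2, 3.8],
}

grand = mean(v for group in yields.values() for v in group)

# Between-treatment variation: how far each group mean is from the grand mean.
between = sum(len(g) * (mean(g) - grand) ** 2 for g in yields.values())

# Within-treatment variation: spread among units given the same treatment.
within = sum((v - mean(g)) ** 2 for g in yields.values() for v in g)

k = len(yields)                               # number of treatments
n = sum(len(g) for g in yields.values())      # number of experimental units
f_ratio = (between / (k - 1)) / (within / (n - k))
print(round(f_ratio, 1))   # a large ratio suggests real treatment differences
```

When units treated differently vary much more than units treated alike, as here, the ratio is large and the treatments are declared significantly different; replication is what makes the within-treatment estimate possible at all.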
Randomize treatments Once you know how many treatments you are going to apply, and how many replications you want, the product of these two quantities (# treatments × # replications) equals the number of experimental units you need. For instance, if you have three fertilizers you want to test, plus a control, you have four treatments. If you want three replications of each treatment, then you 4 treatments x 3 replications = 12 experimental units or patches of land where you will apply the fertilizers. The treatments will be randomly assigned to each experimental unit (patch of land). This is done using a random number table and is not just haphazard picking. Randomization helps minimize any bias you haven’t recognized in advance and controlled for in other ways. 1. What are two types of control treatments? 2. Does increasing the number of replications increase the number of treatments or the number of experimental units? 3. Can you think of an example of how randomization can protect against bias? Process of scientific inquiry; it builds on what is known by testing hypotheses. A very valuable method for actively learning the body of scientific knowledge that has been previously discovered and communicated by others; and it is specifically orchestrated for teaching and learning, not for the discovery of new information about the world around us. Typically used during the development of new technologies to identify the best products for the desired purpose (eg. which pesticides are effective against the target insect, but not harmful to non-target insects), but are not used to discover new knowledge about how the world works so they typically don't advance our understanding of the natural world. Used to pick a winner from among a number of options. Used to verify or regulate a scientific experiment by conducting a parallel experiment or by comparing with another standard A plant variety that has been produced in cultivation by selective breeding. 
The term comes from combining the words 'cultivated' and 'variety'.

Process of planning an experiment to test a hypothesis.

An embryonic leaf in seed-bearing plants, one or more of which are the first leaves to appear from a germinating seed.

When the same treatment is applied to more than one experimental unit.

Act of randomly assigning treatments to experimental units, using a random number table or computer-generated randomization, to help minimize any bias that has not been recognized in advance and controlled for in other ways.
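The assignment procedure described above (4 treatments × 3 replications = 12 experimental units, randomly assigned) can be sketched in a few lines of Python. Here `random.shuffle` stands in for the random number table, and the fertilizer names are made up for illustration.

```python
import random

treatments = ["control", "fertilizer_a", "fertilizer_b", "fertilizer_c"]
replications = 3

# 4 treatments x 3 replications = 12 experimental units (patches of land)
assignments = treatments * replications
random.shuffle(assignments)  # random assignment, not haphazard picking

for patch, treatment in enumerate(assignments, start=1):
    print(f"patch {patch}: {treatment}")
```

Each treatment still appears exactly three times; only the assignment of treatment to patch is random.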
High School / Advanced Statistics and Data Science I (ABC)

5.3 The Median vs. Mean as a Model

Having developed the idea that a single number can serve as a statistical model for a distribution, we now ask: which single number should we choose? We have been talking informally about choosing a number in the middle of a symmetric, normal-shaped distribution. But now we want to get more specific.

Recall that in the previous section we defined a statistical model as a function that produces a predicted score for each observation. Armed with this definition, we can now ask: what function could we use that would generate the same predicted value for all observations in a distribution?

Median and Mean: Two Possible Functions for Generating Model Predictions

If we were trying to pick a number to model the distribution of a categorical variable, we should pick the mode; really, there isn't much choice here.
If you are going to predict the value of a new observation on a categorical variable, the prediction will have to be one of the categories, and you will be wrong least often if you pick the most frequently observed category. For a quantitative variable, statisticians typically choose one of two numbers: the median or the mean.

The median is just the middle number of a distribution. Take the following distribution of five numbers:

5, 5, 5, 10, 20

The median is 5, meaning that if you sort all the numbers in order, the number in the middle is 5. You can see that the median is not affected by outliers. So, if you changed the 20 in this distribution to 20,000, the median would still be 5.

To calculate the mean of this distribution, we simply add up all the numbers in the sample, and then divide by the sample size, which is 5. So, the mean of this distribution is 9.

Both mean and median are indicators of where the middle of the distribution is, but they define "middle" in different ways: 5 and 9 represent very different points in this distribution.

In R, these and other statistics are very easy to find with the function favstats(). Create a variable called outcome and put in these numbers: 5, 5, 5, 10, 20. Then, run the favstats() function on the variable outcome.

require(coursekata)

outcome <- c(5, 5, 5, 10, 20)
favstats(outcome)

min Q1 median Q3 max mean       sd n missing
  5  5      5 10  20    9 6.519202 5       0

If our goal is just to find the single number that best characterizes a distribution, sometimes the median is better, and sometimes the mean is better.
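The same calculations can be cross-checked outside of R. This quick sketch uses Python's standard statistics module to reproduce the median (5) and mean (9) of the tiny distribution above, and to show that replacing the 20 with 20,000 moves the mean but not the median.

```python
import statistics

outcome = [5, 5, 5, 10, 20]
print(statistics.median(outcome))  # 5 -- the middle value once sorted
print(statistics.mean(outcome))    # 9 -- the sum (45) divided by n (5)

# The median is robust to outliers: change the 20 to 20,000
# and it does not budge, while the mean is dragged far upward.
extreme = [5, 5, 5, 10, 20000]
print(statistics.median(extreme))  # 5
print(statistics.mean(extreme))    # 4005
```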
If you are trying to choose one number that would best predict what the next randomly sampled value might be, the median might well be better than the mean for this distribution. With only five numbers, the fact that three of them are 5 leads us to believe that the next one might be 5 as well. On the other hand, we know nothing about the Data Generating Process (DGP) for these numbers. The fact that there are only five of them indicates that this distribution is probably not a good representation of the underlying population distribution. The population could be normal, or uniform, in which case the mean would be a better model than the median. The point is, we just don't know.

Realizing this limitation, let's look below at the distributions of several quantitative variables. For each variable, make a histogram and get the favstats(). Then decide which number you think would be a better model for the distribution: the median or the mean.

Variable 1: Students' Self-Predictions of GPA in the Fingers Data Frame

# make a histogram of GradePredict
# the second line adds more tick marks to the x-axis
gf_histogram(~ GradePredict, data = Fingers, color = "forestgreen") +
  scale_x_continuous(breaks = seq(2.0, 4.0, by = 0.1))

# get the favstats for GradePredict
favstats(~ GradePredict, data = Fingers)

Note that there are two ways of asking favstats() or gf_histogram() to retrieve a variable that is inside a data frame: by using the $, like this: favstats(Fingers$GradePredict); or by using a combination of ~ and data =, like this: favstats(~ GradePredict, data = Fingers). We prefer to use the latter version with the tilde (~) because it will work better with other functions we will learn later.

Variable 2: Thumb Lengths in the Fingers Data Frame

# make a histogram of Thumb
gf_histogram(~ Thumb, data = Fingers)

# get the favstats for Thumb
favstats(~ Thumb, data = Fingers)

Variable 3: Age of Housekeepers in the MindsetMatters Data Frame

# make a histogram of Age in the MindsetMatters data frame, with fill = "red"
gf_histogram(~ Age, data = MindsetMatters, fill = "red")

# get the favstats for Age
favstats(~ Age, data = MindsetMatters)

In general, the median may be a more meaningful summary of a distribution of data than the mean when the distribution is skewed one way or the other. In essence, this discounts the importance of the tail of the distribution, focusing more on the part of the distribution where most people score. The mean is a good summary when the distribution is more symmetrical. But if our goal is to create a statistical model of the population distribution, we almost always (especially in this course) will use the mean. We shall dig in a little to see why. But first, a brief detour to see how we can add the median and mean to a histogram.

Adding Median and Mean to Histograms

You already know the R code to make a histogram. Let's add a vertical line to show where the mean is. We know from favstats() that the mean is 9, so we can just add a vertical line that crosses the x-axis at 9. Let's color it blue.

gf_histogram(~ outcome) %>%
  gf_vline(xintercept = 9, color = "blue")

Try modifying this code to draw a purple line for the median of this tiny set of numbers. (The median is 5.)
outcome <- c(5, 5, 5, 10, 20)

# draw a vline representing the median in purple
gf_histogram(~ outcome) %>%
  gf_vline(xintercept = 5, color = "purple")

You can string these commands together (using %>%) to put both the mean and median lines onto a histogram. (This time, we used the mean() and median() functions instead of typing in the actual numbers.)

gf_histogram(~ outcome) %>%
  gf_vline(xintercept = mean(outcome), color = "blue") %>%
  gf_vline(xintercept = median(outcome), color = "purple")

Note that there is a related function called gf_hline() that will place a horizontal line on a plot (it takes yintercept as an argument).
LiveBench: An Overview

$\boxed{\text{\large L\normalsize ive\large B\normalsize ench}}$ is perhaps the least bad LLM benchmark. It's not too easy, it has a relatively diverse set of tasks, and it avoids the pitfall of models overfitting to it by updating the test set each month. It has an expansive collection of models and regularly adds new ones. It is also the benchmark that most reliably aligns with my subjective evaluation of each model.

This page is designed to make the contents of LiveBench more easily accessible. Each section contains some basic information about each task included in LiveBench, an example prompt for each one (and its answer), and a simple plot of how various models perform on that type of task. For each task, the model's score represents the percentage of questions in that task completed correctly.

Some of the examples are truncated or modified slightly to improve the formatting and readability of this article. For the exact contents of each prompt, please see the LiveBench datasets on Hugging Face.

I will update the plots on this page periodically and add new tasks when they are added to LiveBench. LiveBench has 6 categories and 18 tasks, with a total of 1000 questions.
Here's a summary:

| Task Name | Count | Description | Category |
|---|---|---|---|
| web_of_lies | 50 | Tedious logic puzzles about people who either lie or tell the truth | Reasoning |
| zebra_puzzle | 50 | Logic puzzles with abstract and open-ended directional relations | Reasoning |
| spatial | 50 | Spatial reasoning word problems about cutting 2D and 3D shapes | Reasoning |
| LCB_generation | 78 | 40 LeetCode problems and 38 AtCoder problems | Coding |
| coding_completion | 50 | Fill-in-the-blank LeetCode problems | Coding |
| math_comp | 96 | Problems from regional high school math competitions | Mathematics |
| olympiad | 36 | Problems from national and international math olympiads (USAMO, IMO) | Mathematics |
| AMPS_Hard | 100 | Procedurally-generated math problems in LaTeX: derivatives, integrals, completing the square, etc. | Mathematics |
| cta | 50 | Problems of picking the best name for a column from a list (based on its values) | Data Analysis |
| tablejoin | 50 | Problems of creating a mapping of columns with similar data (based on their values) | Data Analysis |
| tablereformat | 50 | Problems of converting an HTML table to JSON | Data Analysis |
| connections | 50 | NYT Connections problems (with varying difficulty) | Language |
| plot_unscrambling | 40 | Problems of putting sentences in an IMDB movie description in the correct order | Language |
| typos | 50 | Problems of fixing typos and spelling mistakes in text | Language |
| summarize | 50 | Tasks of summarizing the beginning of a Guardian article, with precise criteria | Instruction Following |
| paraphrase | 50 | Tasks of paraphrasing the beginning of a Guardian article, with precise criteria | Instruction Following |
| simplify | 50 | Tasks of simplifying the beginning of a Guardian article, with precise criteria | Instruction Following |
| story_generation | 50 | Tasks of writing a story about the beginning of a Guardian article, with precise criteria | Instruction Following |

Now, let's look at each task.

$\texttt{average}$ (unofficial)

This is simply an average of the model's scores across all tasks. All tasks contribute equally to the average.

$\texttt{AMPS\textunderscore Hard}$

100 procedurally-generated $\LaTeX$ math problems of several types, such as taking the derivative or integral of a function, completing the square, or factoring a polynomial. Inspired by the MATH and AMPS datasets.

Differentiate the following function: $ -2 x+\tan \left(\frac{9}{2}-\frac{17 x}{2}\right)+\frac{3}{2} $. Please put your final answer in a $\texttt{\textbackslash boxed\{\}}$.

Answer:

$ -\frac{17}{2} \sec ^2\left(\frac{1}{2} (9-17 x)\right)-2 $

$\texttt{coding\textunderscore completion}$

50 relatively recent LeetCode completion problems of Easy, Medium, and Hard difficulty, from LiveCodeBench. "Completion" means the model is given part of the solution and must write ONLY the part that follows to complete the solution. Sometimes the task includes just the class declaration or function declaration, sometimes there is a (partial) docstring, and sometimes it includes parts of the actual solution. LiveBench concatenates the provided partial solution with the model's answer, runs that code, and checks if it passes all the test cases.

Instructions: You are an expert Python programmer. You will be given a question (problem specification) and the first lines of a Python solution to this problem, and will write in Python the remaining lines of the program to produce a correct Python program that matches the specification and passes all tests. You will NOT return anything except for the second part of the program that you wrote.

Question: You are given an integer array nums and a positive integer k. Return the number of subarrays where the maximum element of nums appears at least k times in that subarray. A subarray is a contiguous sequence of elements within an array.
Example 1: nums = [1,3,2,3,3], k = 2. Output: 6. Explanation: the subarrays that contain the element 3 at least 2 times are: [1,3,2,3], [1,3,2,3,3], [3,2,3], [3,2,3,3], [2,3,3], [3,3]

Example 2: Output: 0. Explanation: No subarray contains the element 4 at least 3 times.

Constraints:
□ $ 1 \leq \texttt{nums.length} \leq 10^5 $
□ $ 1 \leq \texttt{nums[i]} \leq 10^6 $
□ $ 1 \leq k \leq 10^5 $

Format: You will use the following starter code to write the solution to the problem and enclose your code within delimiters.

class Solution(object):
    def countSubarrays(self, nums, k):
        """
        :type nums: List[int]
        :type k: int
        :rtype: int
        """
        mx = max(nums)
        result = left = cnt = 0
        for right in range(len(nums)):
            cnt += int(nums[right] == mx)
            while cnt == k:
                cnt -= int(nums[left] == mx)
                left += 1
            result += left
        return result

The answer above is just one of many correct responses.

$\texttt{connections}$

50 recent Connections problems from The New York Times. There are 3 difficulty levels:
• 15 of the tasks provide a list of 8 words to group.
• 15 of the tasks provide a list of 12 words to group.
• 20 of the tasks provide a list of 16 words to group.

In the model's answer, the order of the groups and the order of the words within each group do not matter; as long as the groups themselves are correct, the model will get full points.

You are given 8 words/phrases below. Find two groups of four items that share something in common. Here are a few examples of groups:
□ bass, flounder, salmon, trout (all four are fish)
□ ant, drill, island, opal (all four are two-word phrases that start with 'fire')
□ are, why, bee, queue (all four are homophones of letters)
□ sea, sister, sin, wonder (all four are members of a septet).

Categories will be more specific than e.g., '5-letter-words', 'names', or 'verbs'. There is exactly one solution.
Think step-by-step, and then give your answer in bold as a list of the 8 items separated by commas, ordered by group (for example, bass, flounder, salmon, trout, ant, drill, island, opal). If you don't know the answer, make your best guess. The items are: use, leverage, through, up, exploit, done, over, milk.

Answer:

exploit, leverage, milk, use, done, over, through, up

$\texttt{cta}$

As I understand it, this task is just selecting the best name for a column, given a list of column names to choose from. 50 problems. No chain-of-thought allowed.

Pick the column's class based on the provided column sample. Choose exactly one of the listed classes. Please respond only with the name of the class.

Column sample: [[1995], [1964], [1986], [2022], [1985]]

Classes: ['Maize yield' 'code country' 'Year' 'country']

$\texttt{LCB\textunderscore generation}$

78 coding problems: 38 from AtCoder, 40 from LeetCode. Similar to $\texttt{coding\textunderscore completion}$, except the model is tasked with writing the full solution instead of only part of the solution.

Instructions: You are an expert Python programmer. You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests. You will NOT return anything except for the program.

Print an arithmetic sequence with first term A, last term B, and common difference D. You are only given inputs for which such an arithmetic sequence exists. The input is given from Standard Input in the following format:

A B D

Print the terms of the arithmetic sequence with first term A, last term B, and common difference D, in order, separated by spaces.

Constraints:
□ $1 \leq A \leq B \leq 100$
□ $1 \leq D \leq 100$
□ There is an arithmetic sequence with first term A, last term B, and common difference D.
□ All input values are integers.

Sample Input/Output 1: $\texttt{3 9 2 → 3 5 7 9}$

The arithmetic sequence with first term 3, last term 9, and common difference 2 is $\texttt{(3,5,7,9)}$.
Sample Input/Output 2: $\texttt{10 10 1 → 10}$

The arithmetic sequence with first term 10, last term 10, and common difference 1 is $\texttt{(10)}$.

No ground-truth solution was provided; LiveBench runs the code and checks if it passes all the test cases.

$\texttt{math\textunderscore comp}$

96 challenging high school math competition problems from the AMC12 2023 (contributed 50), the SMC 2023 (contributed 17), and the AIME 2024 (contributed 29). It seems like the LiveBench authors threw in some strangely specific requirements for the answer formatting on a lot of these. I guess just to make the questions harder?

Real numbers $x$ and $y$ with $x,y>1$ satisfy $\log_x(y^x)=\log_y(x^{4y})=10.$ What is the value of $xy$? Please think step by step, and then display the answer at the very end of your response. The answer is an integer consisting of exactly 3 digits (including leading zeros), ranging from 000 to 999, inclusive. For example, the answer might be 068 or 972. If you cannot determine the correct answer, take your best guess. Remember to have the three digits as the last part of the response.

$\texttt{olympiad}$

This set of tasks includes 36 questions from the International Math Olympiad (contributed 12) and the United States of America Mathematical Olympiad (contributed 24). These questions are kind of a combination of matching, multiple choice, and fill in the blank: you are given a problem and a partially complete solution, and you just have to fill in a few blank expressions by picking from a list of expressions at the bottom, in the correct order.

While writing this I found a weird prompt for one of the problems. There was only one expression in the list, and only one expression slot you had to fill in, so it was obvious that the answer was 1 (you didn't even have to do any math).
I later found out this was done intentionally, and it is actually mentioned in the Appendix of the LiveBench paper:

We generate 3 hardness variants for each problem, masking out 10%, 50% and 80% of the equations in the proof. We evaluate by computing the edit distance between the ground truth ranking order and the model predicted ranking order. [NB: in preliminary testing we also evaluated using the accuracy metric and the model rankings remained nearly the same]. Models perform worse on IMO compared to USAMO, in line with expectations. We also looked at the performance as separated by question hardness. The scores are greatly affected by question hardness going from as high as 96.8 for the easiest questions (10% masked out, GPT-4o) to as low as 36 for the hardest (80% masked out). The full results are in Table 6 and Table 7.

If only I had read past the References…

Example (from USAMO):

You are given a question and its solution. The solution however has its formulae masked out using the tag <missing X> where X indicates the identifier for the missing tag. You are also given a list of formulae in latex in the format <expression Y> = <LaTeX code> where Y is the identifier for the formula. Your task is to match the formulae to the missing tags in the solution. Think step by step out loud as to what the answer should be. If you are not sure, give your best guess. Your answer should be in the form of a list of numbers, e.g., 5, 22, 3, …, corresponding to the expression identifiers that fill the missing parts. For example, if your answer starts as 5, 22, 3, …, then that means expression 5 fills <missing 1>, expression 22 fills <missing 2>, and expression 3 fills <missing 3>.

The question is: In an acute triangle $ABC$, let $M$ be the midpoint of $\overline{BC}$. Let $P$ be the foot of the perpendicular from $C$ to $AM$. Suppose that the circumcircle of triangle $ABP$ intersects line $BC$ at two distinct points $B$ and $Q$. Let $N$ be the midpoint of $\overline{AQ}$. Prove that $NB=NC$.
The solution is: Let $X$ be the foot from $A$ to $\overline{BC}$. By definition, <missing 3>. Thus, <missing 4>, and $\triangle BMP \sim \triangle AMQ$. From this, we have <missing 5>, as $MC=MB$. Thus, $M$ is also the midpoint of $XQ$. Now, <missing 6> if $N$ lies on the perpendicular bisector of $\overline{BC}$. As $N$ lies on the perpendicular bisector of $\overline{XQ}$, which is also the perpendicular bisector of <missing 7> (as $M$ is also the midpoint of $XQ$), we are done.

The formulae are:
□ <expression 1>: $\triangle BMP \sim \triangle AMQ$
□ <expression 2>: $\triangle AXM \sim \triangle MPC$
□ <expression 3>: $\overline{BC}$
□ <expression 4>: $\angle AXM = \angle MPC = 90^{\circ}$
□ <expression 5>: $\frac{MP}{MX} = \frac{MC}{MA} = \frac{MP}{MQ} = \frac{MA}{MB}$
□ <expression 6>: $NB = NC$
□ <expression 7>: $\triangle AXM \sim \triangle MPC$

Answer:

$7, 1, 4, 2, 5, 6, 3$

$\texttt{paraphrase}$

50 paraphrase tasks, all of which are based on articles from The Guardian. Each prompt has wacky requirements to try to throw the model off, since this is an instruction following benchmark.

The following are the beginning sentences of a news article from the Guardian.

OK, so a mysterious, cigar-shaped, 400m-long object is speeding through the solar system and astronomers are checking it for evidence of alien technology. So what do we do if it turns out that Oumuamua, as they have named it, is broadcasting extraterrestrial radio signals? John Chambers, Leeds. Post your answers – and new questions – below or email them to nq@theguardian.com

Please paraphrase based on the sentences provided. Answer with less than 274 words. Your response must have 1 sections. Mark the beginning of each section with Section X, such as: Section 1 [content of section 1] Section 2 [content of section 2] At the end of your response, please explicitly add a postscript starting with P.S.

No ground-truth answer is provided.
LiveBench runs checks on the output to verify that it meets the stated criteria in the prompt.

$\texttt{plot\textunderscore unscrambling}$

40 headache-inducing puzzles about unscrambling a set of sentences that describe the plot of a movie.

The following plot summary of a movie has had the sentences randomly reordered. Rewrite the plot summary with the sentences correctly ordered. Begin the plot summary with <PLOT_SUMMARY>. The sentences are:

While they both live alone, the two are friendly with one another – Clay sees Eloise as the only person who ever took care of him. He rents some space in a barn owned by retired teacher Eloise Parker, a widow who owns and lives in the house on the property. Derek assigns the company’s head of security, former CIA director Wallace Westwyld, to find a way to stop Adam. Verona learns that Eloise was scammed out of every penny she had. Devastated by the realization that she got scammed out of so much, Eloise shoots herself in the head. Upon learning the hard way that Adam is after them, Garnett explains the situation to the crew’s ringleader, 28-year-old tech executive Derek Danforth, who runs a Boston-based corporation called Danforth Enterprises. In Hampden, Massachusetts, Adam Clay is a beekeeper who has several hives of bees. But Wallace learns that Adam is a retired member of a classified program called the Beekeepers, whose members are tasked with fighting different forms of corruption, operating above and beyond governmental jurisdiction. The call center’s manager, Mickey Garnett, cons Eloise out of everything – including more than $2,000,000 that’s in the account of Safe Homes Foundation, a children’s charity whose account she manages. The Beekeepers are so efficient and well-trained that they make the military look like a joke. Adam is quickly cleared when Eloise’s death is ruled a suicide because there was no gunshot residue on Adam, and Eloise’s fingerprints were the only prints on the gun.
Verona apologizes to Adam for her accusation, and she tells him that the FBI cyber-crimes office has told her that the scammer crew that victimized Eloise has been operating for two years, but the FBI hasn’t been able to identify any of the scammers. Adam finds Eloise’s body in her house, and he’s immediately arrested by Eloise’s daughter, Boston-based FBI Special Agent Verona Parker, who hastily accuses Adam of shooting Eloise. She calls the number on the screen, and connects to a call center that’s located in Springfield, Massachusetts, not aware that it’s a phishing scam. Verona vows to find the scammers, but Adam, enraged by what happened, decides to hunt down the scammers himself, and make them pay for what they did to Eloise. Wallace realizes that Adam is a man who should be feared by people like Derek. Adam proves to be an unstoppable force, and Derek, Garnett, and the rest of the scammers have no idea what kind of scorched-Earth hell Adam is about to unleash on them. While checking some things on her laptop computer, Eloise sees a warning about two viruses in the system.

Answer:

In Hampden, Massachusetts, Adam Clay is a beekeeper who has several hives of bees. He rents some space in a barn owned by retired teacher Eloise Parker, a widow who owns and lives in the house on the property. While they both live alone, the two are friendly with one another – Clay sees Eloise as the only person who ever took care of him. While checking some things on her laptop computer, Eloise sees a warning about two viruses in the system. She calls the number on the screen, and connects to a call center that’s located in Springfield, Massachusetts, not aware that it’s a phishing scam. The call center’s manager, Mickey Garnett, cons Eloise out of everything – including more than $2,000,000 that’s in the account of Safe Homes Foundation, a children’s charity whose account she manages. Devastated by the realization that she got scammed out of so much, Eloise shoots herself in the head.
Adam finds Eloise’s body in her house, and he’s immediately arrested by Eloise’s daughter, Boston-based FBI Special Agent Verona Parker, who hastily accuses Adam of shooting Eloise. Verona learns that Eloise was scammed out of every penny she had. Adam is quickly cleared when Eloise’s death is ruled a suicide because there was no gunshot residue on Adam, and Eloise’s fingerprints were the only prints on the gun. Verona apologizes to Adam for her accusation, and she tells him that the FBI cyber-crimes office has told her that the scammer crew that victimized Eloise has been operating for two years, but the FBI hasn’t been able to identify any of the scammers. Verona vows to find the scammers, but Adam, enraged by what happened, decides to hunt down the scammers himself, and make them pay for what they did to Eloise. Upon learning the hard way that Adam is after them, Garnett explains the situation to the crew’s ringleader, 28-year-old tech executive Derek Danforth, who runs a Boston-based corporation called Danforth Enterprises. Derek assigns the company’s head of security, former CIA director Wallace Westwyld, to find a way to stop Adam. But Wallace learns that Adam is a retired member of a classified program called the Beekeepers, whose members are tasked with fighting different forms of corruption, operating above and beyond governmental jurisdiction. The Beekeepers are so efficient and well-trained that they make the military look like a joke. Wallace realizes that Adam is a man who should be feared by people like Derek. Adam proves to be an unstoppable force, and Derek, Garnett, and the rest of the scammers have no idea what kind of scorched-Earth hell Adam is about to unleash on them.

I honestly don’t even know how you’re supposed to do this one.

$\texttt{simplify}$

50 text simplification tasks, all of which are based on articles from The Guardian. Each prompt has wacky requirements to try to throw the model off, since this is an instruction following benchmark.
The following are the beginning sentences of a news article from the Guardian. Amsterdam has won the right to become the new host for the European Medicines Agency (EMA). In a nail-biting final round last night, the 19 European cities that had put in bids had been whittled down to Milan and Amsterdam, sharing an equal number of votes. A draw from a hat sealed it for Amsterdam. Moments later, the same scenario played out for the European Banking Authority (EBA), with Paris and Dublin going into a hat and Paris being drawn. And so it is settled. The EMA will move from London to Amsterdam after Brexit – taking with it nearly 900 jobs, a budget of €322m, and some 40,000 business visits every year, which support local hotels, restaurants, taxis and so on. Also likely to move with the EMA is the attendant industry that congregates around it for easy access to the regulator. It’s a substantial loss of finances, talent, infrastructure and influence. As the EMA leaves the UK, the question now becomes: does the UK leave the EMA? The EMA is the regulatory body for the single market for medicines, and the two are entwined. Please explain in simpler terms what this text means. Include keywords [‘branch’, ‘currency’, ‘object’, ‘request’, ‘yesterday’] in the response. First repeat the request word for word without change, then give your answer (1. do not say any words or characters before repeating the request; 2. the request you need to repeat does not include this sentence) No ground-truth answer is provided. LiveBench runs checks on the output to verify that it meets the stated criteria in the prompt. 50 word problems about making cuts through various shapes (in two and three dimensions) and determining the number / shapes of the remaining pieces. Suppose I have a physical, solid square with vertices ABCD and a physical, solid equilateral triangle with vertices EFG. 
I place both shapes on a plane and arrange them so that they are not overlapping at all, but F is touching A, and G is touching B. Then I make two cuts through ED and through DG. Then I separate all the pieces (e.g. so F is no longer touching A, and so on). How many pieces are there? Think step by step, and then put your answer in bold as a single integer (for example, 0). If you don’t know, guess. $\texttt{story\textunderscore generation}$ 50 story generation tasks, all of which are based on articles from The Guardian. Each prompt has wacky requirements to try to throw the model off, since this is an instruction following benchmark. The following are the beginning sentences of a news article from the Guardian. This is a bankrupt budget. Not in the strictly financial sense, though how much more threadbare core public services can become without collapsing and causing social mayhem the next few years will prove, if the government lasts. Even with faltering economic growth, public spending is to go on falling as a proportion of GDP. It’s bankrupt in ideas, in understanding, in preparedness to examine what has been happening to public services. Housing offers a glaring example. For all the bells and whistles in the budget, and some welcome augmentation of council powers, the government fails to make an obvious connexion. Building houses, allocating land, encouraging development, and policing the delinquency of private developers all imply an active and financially lubricated local government. Housing is and always will be about places, streets, brownfields – and public acceptance of schemes that will abut on their property or where they walk their dog. That’s what councillors do. Ace ideologue of the free market Oliver Letwin, of all people, can’t substitute. Please generate a story based on the sentences provided. Wrap your entire response with double quotation marks. Your answer must contain exactly 4 bullet points. 
Use the markdown bullet points such as: □ This is point 1. □ This is point 2 Finish your response with this exact phrase: Any other questions? No other words should follow this phrase. Your response must have 3 sections. Mark the beginning of each section with SECTION X, such as: SECTION 1 [content of section 1] SECTION 2 [content of section 2] No ground-truth answer is provided. LiveBench runs checks on the output to verify that it meets the stated criteria in the prompt. 50 summarization tasks, all of which are based on articles from The Guardian. Each prompt has wacky requirements to try to throw the model off, since this is an instruction following benchmark. The following are the beginning sentences of a news article from the Guardian. In July this year, everyone said that the World Cup final felt like a turning point. You don’t get 27,000 people to a women’s cricket match and not think that something extraordinary is going on. But the truth of turning points is that you can’t in the moment judge whether they’re real or perceived. It has taken the Women’s Ashes in Australia this past month to show the extent of the turn. Australia originally lagged behind England in embracing the game. The 2015 Ashes was played at intimate cricket grounds, selling out some matches with crowds in excess of 5000. The 2013-14 version in Australia was nowhere near that. Attendances at the Perth Test were in the low hundreds, while the Twenty20s were sparsely attended curtain-raisers for a meaningless men’s series. Olympiads stack up like sedimentary layers, and the difference from four years ago to now is extraordinary. The day-night Test match drew over 12,600 across its duration, while the three T20 matches drew a bit over or a bit under 4000 spectators apiece. Please summarize based on the sentences provided. Give two different responses. Responses and only responses should be separated by 6 asterisk symbols: $\texttt{******}$. No ground-truth answer is provided. 
LiveBench runs checks on the output to verify that it meets the stated criteria in the prompt. 50 problems about determining the best column mapping between two CSV tables. Table A has meaningful column names and data. Table B has meaningul data in at least some columns, but its column names are garbled/meaningless. The model has to decide an appropriate mapping from columns in A (not necessarily all of them) to columns in B, based on which columns have similar data. Please create a valid join mapping between CSV Table A and CSV Table B. Each column in A maps to 0 or 1 columns in B. Return your response as a Python dictionary, formatted as {col_nae_in_df_a : col_name_in_df_b}. Please return only the dictionary. CSV Table A: CSV Table B: {"year": "DOgXTTuHGbo", "zipcode": "gG+PnzOD1mw"} 50 problems of converting an HTML table to JSON. Please convert the Input Table from HTML format to JSON format. Please respond only with the table. Input Table: <table border="1" class="dataframe"> <tr style="text-align: right;"> <th>Inequality HDI</th> <td>North Macedonia</td> <td>Papua New Guinea</td> <td>Marshall Islands</td> Answer (prettified JSON): "111": { "Country": "Indonesia", "Inequality HDI": 2 "88": { "Country": "Azerbaijan", "Inequality HDI": 1 "4": { "Country": "Denmark", "Inequality HDI": 0 "83": { "Country": "North Macedonia", "Inequality HDI": 2 "17": { "Country": "Canada", "Inequality HDI": 0 "70": { "Country": "Palau", "Inequality HDI": 2 "153": { "Country": "Papua New Guinea", "Inequality HDI": 3 "115": { "Country": "Samoa", "Inequality HDI": 2 "101": { "Country": "Marshall Islands", "Inequality HDI": 2 "108": { "Country": "Lebanon", "Inequality HDI": 2 50 tasks of determining the intended version of given text, that is, removing all spelling errors and typos. Please output this exact text, with no changes at all except for fixing the misspellings. 
Please leave all other stylistic decisions like commas and US vs British spellings as in the original This paper is a complement of the modularity result of Bruinier, Howard, Kudla, Rapoport and Yang (BHKRY) for the special case $U(1,1)$ not considered ther. The main idea is to embed a $U(1, 1)$ Shimura curve to many $U(n-1, 1)$ Shimura varieties for big $n$, andd prove a precise pullback formula ofther generating series of arithmetic divisors. Afterwards, we uise the modularity result of BHKRY together iwth the existince of non-vanishing of clasical theta series at any given point inhten upper half plane to proovehten modularity result on $U(1, 1)$ Shimura curves. This paper is a complement of the modularity result of Bruinier, Howard, Kudla, Rapoport and Yang (BHKRY) for the special case $U(1,1)$ not considered there. The main idea is to embed a $U(1, 1)$ Shimura curve to many $U(n-1, 1)$ Shimura varieties for big $n$, and prove a precise pullback formula of the generating series of arithmetic divisors. Afterwards, we use the modularity result of BHKRY together with the existence of non-vanishing of classical theta series at any given point in the upper half plane to prove the modularity result on $U(1, 1)$ Shimura curves. $\texttt{web\textunderscore of\textunderscore lies\textunderscore v2}$ 50 headache-inducing (procedurally-generated, I assume) logic puzzles about people who either lie or tell the truth, revealed indirectly through their locations, and the claims of others, which may be truths or lies. In this question, assume each person either always tells the truth or always lies. The person at the theater says the person at the ice skating rink tells the truth. The person at the gym tells the truth. Beatriz is at the gym. The person at the ice skating rink thinks their friend is lying. Grace is at the campground. Hiroshi is at the theater. Emily is at the farm. The person at the campground says the person at the observatory lies. 
The person at the botanical garden tells the truth. The person at the cafe says the person at the campground lies. The person at the farm lies. Priya is at the park. Maya is at the library. The person at the ice skating rink says the person at the city hall tells the truth. Charlie is at the cafe. The person at the park tells the truth. The person at the ice skating rink saw a firetruck. The person at the beach says the person at the theater lies. Nadia is at the ice skating rink. Ethan is at the observatory. The person at the campground lies. Max is at the museum. Ayaan is at the hotel. Jake is at the city hall. Jaxon is at the skate park. Luna is at the beach. Kehinde is at the train station. The person at the campground saw a firetruck. The person at the museum says the person at the theater lies. Olivia is at the botanical garden. The person at the theater says the person at the train station lies. The person at the skate park lies. The person at the ice skating rink says the person at the library tells the truth. The person at the ice skating rink says the person at the campground tells the truth. The person at the hotel says the person at the ice skating rink lies. □ Does the person at the theater tell the truth? □ Does the person at the ice skating rink tell the truth? □ Does the person at the campground tell the truth? Think step by step, and then put your answer in bold as a list of three words, yes or no (for example, yes, no, yes). If you don’t know, guess. no, no, no Yes, all 50 of them are like this. $\texttt{zebra\textunderscore puzzle}$ 50 logic puzzles about determining an attribute of a person based on a series of relational statements. There are 3 people standing in a line numbered 1 through 3 in a left to right order. Each person has a set of attributes: Beverage, Transport, Food. 
The attributes have the following possible values: □ Beverage: cola, coffee, iced-tea □ Transport: motorbike, quad-bike, bike □ Food: tomato, zucchini, asparagus and exactly one person in the line has a given value for an attribute. Given the following premises about the line of people: □ the person that likes tomato is on the immediate right of the person who drinks cola □ the person who drinks coffee is somewhere to the right of the person who drinks iced-tea □ the person that travels by motorbike is not anywhere to the right of the person who drinks cola □ the person that travels by bike is on the immediate right of the person that travels by quad-bike □ the person that likes zucchini is on the far right Answer the following question: What food does the person that travels by motorbike like? Return your answer as a single word, in the following format: ***X***, where X is the answer.
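For a 3-person puzzle like the one above, the search space is tiny and can be brute-forced. Here is a sketch of such a solver — my own encoding of the five premises, not LiveBench's actual checker:

```python
from itertools import permutations

BEVERAGES = ("cola", "coffee", "iced-tea")
TRANSPORTS = ("motorbike", "quad-bike", "bike")
FOODS = ("tomato", "zucchini", "asparagus")

def solve():
    # try every assignment of attribute values to positions 0..2 (left to right)
    for bev in permutations(BEVERAGES):
        for tra in permutations(TRANSPORTS):
            for food in permutations(FOODS):
                # position of each attribute value (all 9 values are distinct)
                pos = {v: i for seq in (bev, tra, food) for i, v in enumerate(seq)}
                if pos["tomato"] != pos["cola"] + 1:      # premise 1
                    continue
                if pos["coffee"] <= pos["iced-tea"]:      # premise 2
                    continue
                if pos["motorbike"] > pos["cola"]:        # premise 3
                    continue
                if pos["bike"] != pos["quad-bike"] + 1:   # premise 4
                    continue
                if pos["zucchini"] != 2:                  # premise 5
                    continue
                return food[pos["motorbike"]]

print(solve())  # asparagus
```

Working it by hand gives the same answer: zucchini is at position 3, so cola/tomato must occupy positions 1 and 2, forcing the motorbike rider to position 1, who eats asparagus.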
168 – create naturally aligned partition points Comment 1 Michael Nolan 2020-02-13 13:50:47 GMT I had an idea about this last night. If it were acceptable that all the partitions be the same size, then something like this seems reasonable: c b c a c b c partition bits: abc Guaranteeing that all the partitions are the same size and naturally aligned is just a matter of ensuring that if a partition bit is set, then all previous partition bits must also be set. If we need to have different width partitions, something like this might work, though it's a bit messier: a b c d e f g partition bits: dbfaceg partition bits: dbacfeg My idea was that it'd be sort of like a serialized binary tree. To ensure that the partitions are naturally aligned, a given bit can only be set if its parent bits are set, which seems harder to check/enforce than the method above Comment 2 Luke Kenneth Casson Leighton 2020-02-13 14:19:47 GMT ok so the idea is, that the only options for partition sizes are: * 64 * 32,32 * 16,16,16,16 * 8,8,8,8,8,8,8,8 is that the idea? if so, this restricts us to not being able to run 32-bit arithmetic in one "lane" and 16-16 bit arithmetic in the other. the way that the 6600 engine and register file is to be arranged is that the register file is subdivided 32-HI, 32-LO times two. so, four ports, but only 32-bit-wide. if you want to do 64-bit arithmetic, you have to use *two* of those ports. also, each *byte* of the register file has its own write-enable line. this so that on vectorised instructions, if there are 32-bit instructions that happen to hit the 32-LO register port, the 32-*HI* port can be used for 32, 16-16, 8-8-8-8 *completely different* instructions that *HAPPEN* to occur (or are deliberately arranged to occur) on the exact same cycle and happen to be the exact same operation. 
now, whether these conditions turn out to be reasonable or not is another matter, hence why, yeah, it should be fine to consider this, and thus perhaps greatly simplify the partitioning. would we end up with a huge number of 32-bit-adds mixed in with 8-8-8-8 adds? i don't honestly know. Comment 3 Michael Nolan 2020-02-13 17:32:33 GMT (In reply to Luke Kenneth Casson Leighton from comment #2) > ok so the idea is, that the only options for partition sizes are: > * 64 > * 32,32 > * 16,16,16,16 > * 8,8,8,8,8,8,8,8 > is that the idea? > if so, this restricts us to not being able to run 32-bit arithmetic in one > "lane" and 16-16 bit arithmetic in the other. > this so that on vectorised instructions, if there are 32-bit instructions > that happen to hit the 32-LO register port, the 32-*HI* port can be used > for 32, 16-16, 8-8-8-8 *completely different* instructions that *HAPPEN* > to occur (or are deliberately arranged to occur) on the exact same cycle > and happen to be the exact same operation. > now, whether these conditions turn out to be reasonable or not is another > matter, hence why, yeah, it should be fine to consider this, and thus > perhaps greatly simplify the partitioning. > would we end up with a huge number of 32-bit-adds mixed in with 8-8-8-8 > adds? i don't honestly know. This would make scheduling a bit more complicated, but it might be beneficial to do this only for some modules. I don't think it'd make a huge difference for the adder or comparator to use an aligned partition, but it might simplify the shifter a good bit (because it eliminates a couple of the matrix entries). Comment 4 Luke Kenneth Casson Leighton 2020-02-13 17:55:18 GMT (In reply to Michael Nolan from comment #3) > (In reply to Luke Kenneth Casson Leighton from comment #2) > > ok so the idea is, that the only options for partition sizes are: > > > > * 64 > > * 32,32 > > * 16,16,16,16 > > * 8,8,8,8,8,8,8,8 > > > > is that the idea? 
> Yes > > > > if so, this restricts us to not being able to run 32-bit arithmetic in one > > "lane" and 16-16 bit arithmetic in the other. > > > > this so that on vectorised instructions, if there are 32-bit instructions > > that happen to hit the 32-LO register port, the 32-*HI* port can be used > > for 32, 16-16, 8-8-8-8 *completely different* instructions that *HAPPEN* > > to occur (or are deliberately arranged to occur) on the exact same cycle > > and happen to be the exact same operation. > > > > now, whether these conditions turn out to be reasonable or not is another > > matter, hence why, yeah, it should be fine to consider this, and thus > > perhaps greatly simplify the partitioning. > > > > would we end up with a huge number of 32-bit-adds mixed in with 8-8-8-8 > > adds? i don't honestly know. > This would make scheduling a bit more complicated, i'm anticipating it to be quite straightforward, by way of pushing the "predicate bits" directly into the regfile write-enable lines, and to breaking down operations into 32-bit "chunks". so, where a sequence of elements (say 16 bit) are to be ADDed, that will be "converted" into 2x 16-16 SIMD operations: one will go to HI-32 regfile, the other will go to LO-32 regfile. it's pretty straightforward. it'll be slightly wasteful where the vector length is not an exact multiple of 32-bits (3x8 for example) however as a first iteration i'm not that concerned. > but it might be beneficial to do this only for some modules. honestly it would complicate the decode phase, along these lines: "if operation == NOT_CAPABLE_OF_DYNAMIC_PARTITIONING { do something else }" whether that's ok compared to the complexity of the partitioned ALU ops? > I don't think it'd make a huge > difference for the adder or comparator to use an aligned partition, but it > might simplify the shifter a good bit (because it eliminates a couple of the > matrix entries). it does... however i think the Switch statement really has to go. 
if you run "proc" "opt" then "show top" on a 64-bit shifter, it's awful. the MUX chain is absolutely dreadful: each "switch" statement gets turned into a "if x == 0b00001 if x == 0b00010 if x == 0b000011"... with the results *chained* between each! by comparison, for the gt_combiner, the mux-chains aren't 64-bit long, they're only 7-long, because they're done on the *partition* gates, not per-permutation-of-all-values-of-partitions. you did manage to convert the "switch statement" from the original that i did, of eq_combiner, and i am confident that the same thing can be done here, based on the tables: the only thing being, each table (each column output, o0...o7) is computed independently, you can't share data *between* each column, and that's fine. Comment 5 Luke Kenneth Casson Leighton 2020-02-13 18:09:35 GMT okaaay i decoded those tables a bit, into "if" statements: if True: o3 = a0b0[31:24] if ~p0: o3 |= a1b0[23:16] if p0: o3 |= a1b1[23:16] if ~p0&p1: o3 |= a2b1[15:8] if p0&p1: o3 |= a2b0[15:8] if p1: o3 |= a2b2[15:8] if ~p0&~p1&~p2: o3 |= a3b0 if p0&~p1&~p2: o3 |= a3b1 if p1&~p2: o3 |= a3b2 if p2: o3 |= a3b3 that's quite easy to do as a parallel tree of ORs (see the treereduce function i created which should be moved to nmutil, really) aNbM is basically matrix[N][M] is that beginning to look a little clearer? i'll do the other tables as well
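Comment 1's natural-alignment rule ("if a partition bit is set, then all previous partition bits must also be set") amounts to requiring a thermometer code over the partition bits. A quick illustrative check in Python — not the nmigen implementation, just the rule itself:

```python
def naturally_aligned(pbits):
    # pbits listed most-significant split first, e.g. (a, b, c):
    # a set bit is legal only if every earlier bit is also set,
    # so the valid patterns are 000, 100, 110, 111 (thermometer codes)
    seen_clear = False
    for bit in pbits:
        if bit and seen_clear:
            return False
        if not bit:
            seen_clear = True
    return True

# the four legal configurations map onto the sizes listed in comment 2:
# (0,0,0) -> 64   (1,0,0) -> 32,32   (1,1,0) -> 16x4   (1,1,1) -> 8x8
```

This is the "easy to check/enforce" property comment 1 describes: in hardware it can be enforced by AND-gating each partition bit with its predecessor rather than validating arbitrary bit patterns.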
RD Sharma Class 9 Solutions Chapter 2 Exponents of Real Numbers Ex 2.2 These Solutions are part of RD Sharma Class 9 Solutions. Here we have given RD Sharma Class 9 Solutions Chapter 2 Exponents of Real Numbers Ex 2.2 Question 1. Assuming that x, y, z are positive real numbers, simplify each of the following: Question 2. Question 3. Prove that: Question 4. Show that: Question 5. Question 6. Question 7. Question 8. Question 9. Question 10. Find the values of x in each of the following: Question 11. If x = 2^(1/3) + 2^(2/3), show that x^3 - 6x = 6. Question 12. Determine (8x)^x, if 9^(x+2) = 240 + 9^x. Question 13. If 3^(x+1) = 9^(x-2), find the value of 2^(1+x). Question 14. If 3^(4x) = (81)^(-1) and 10^(1/y) = 0.0001, find the value of 2^(-x+4y). Question 15. If 5^(3x) = 125 and 10^y = 0.001, find x and y. Question 16. Solve the following equations: Question 17. Question 18. If a and b are different positive primes such that Question 19. If 2^x × 3^y × 5^z = 2160, find x, y and z. Hence, compute the value of 3^x × 2^(-y) × 5^(-z). Question 20. If 1176 = 2^a × 3^b × 7^c, find the values of a, b and c. Hence, compute the value of 2^a × 3^b × 7^(-c) as a fraction. Question 21. Question 22. Show that: Question 23. Hope the given RD Sharma Class 9 Solutions Chapter 2 Exponents of Real Numbers Ex 2.2 are helpful to complete your math homework. If you have any doubts, please comment below. Learn Insta tries to provide online math tutoring for you.
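The worked solutions on the original page were images and did not survive extraction. As an illustration of the method, here is a sketch of Question 19 — my own working, not the book's:

```latex
2160 = 2^{4}\cdot 3^{3}\cdot 5^{1}
\quad\Longrightarrow\quad x = 4,\; y = 3,\; z = 1,
\qquad
3^{x}\cdot 2^{-y}\cdot 5^{-z}
= \frac{3^{4}}{2^{3}\cdot 5}
= \frac{81}{40}.
```

Question 20 works the same way, starting from the prime factorisation 1176 = 2^3 × 3 × 7^2.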
The tetrahedron has the smallest number of faces of the five Platonic Solids, having only 4 faces. And in fact, four faces are the minimum requirement for a polyhedron. Other features of the tetrahedron • Each face is an equilateral triangle • Each vertex is the meeting of 3 faces There are 2 main ways to create a tetrahedron - using trigonometry to consider the angle between faces, or using a hexahedron (cube) as a base. Using trigonometry A tetrahedron is made from 4 equilateral triangles, hence, to start, we need a process to create an equilateral triangle. Previous experimentation has revealed that a simple code will create an equilateral triangle. REPEAT 3 [FD 2 RT 120] For our purposes, I have expanded this code to: 1. FD 1 MAKE "CA POS FD 1 RT 120 MAKE "A POS 2. FD 1 MAKE "AB POS FD 1 RT 120 MAKE "B POS 3. FD 1 MAKE "BC POS FD 1 RT 120 MAKE "C POS The next step in creating our tetrahedron is to create another equilateral triangle, with a common side to the existing triangle. Before we can place this triangle, however, we need to determine the angle at which these two faces should meet (hence the need for trigonometry in the construction). To determine the dihedral angle (the angle between the faces), consider the diagram at right. We need to determine the angle XYZ; to do this, we need to determine the lengths of sides XY and YZ (the side XZ is equal to the sides of the equilateral triangle). XY and YZ are both equal, and can be found by determining the distance from AB to C in our initial triangle (see diagram 2). If we let the length of one side equal x, then the distance from A to the midpoint AC is x/2, and we need to find the length from AC to B (this will become XY). Using Pythagoras, the length of XY is √(x² − (x/2)²) = (√3/2)x.
Using the cosine rule, we can determine that the angle XYZ is 70.528779°. Now that we have this angle, we can align our turtle by using the command RU 70.528779; the rest of the code then positions the turtle at the vertex, allowing the use of our previous equilateral triangle code 1. PU 2. SETPOS :CA TOWARDS :B 3. RU 70.528779 LT 90 FD 1 4. PD 5. RT 120 FD 2 RT 120 FD 2 RT 120 FD 2 This is repeated 3 times, changing the initial position for each side. The final product is shown here. Using a cube A non-algebraic method to create a tetrahedron is to begin with a cube (instructions for the construction of a cube can be found in the post on hexahedrons) We know that a tetrahedron is made from equilateral triangles, so we need to consider how to create an equilateral triangle within our cube. Another option would be to connect A to G, however, this would go through the centre of our cube, and is not appropriate. The next logical option would be to connect to either C, F or H, this would create a side diagonally across one of the faces. If we were to use 2 of these options, for example, connect AC and AF, to complete a triangle, we would connect CF, which is also the diagonal distance across a face of the cube, hence creating an equilateral triangle. Repeating this process we connect the vertices A, H, C and F. 1. FACE 2. SETPOS :A PD SETPOS :C SETPOS :F PU 3. SETPOS :A PD SETPOS :C SETPOS :H PU 4. SETPOS :F PD SETPOS :H SETPOS :C PU 5. SETPOS :A PD SETPOS :H SETPOS :F PU
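The dihedral angle quoted above (70.528779°, which is arccos(1/3)) can be re-derived numerically with the same cosine-rule setup. This check is mine, not part of the original Logo code; it normalises to edge length 1, so XY = YZ = √3/2 and XZ = 1:

```python
import math

# triangle XYZ from the dihedral-angle construction: XY and YZ are the
# heights of the equilateral faces (sqrt(3)/2 for edge length 1), XZ = 1
xy = yz = math.sqrt(3) / 2
xz = 1.0

# cosine rule: cos(XYZ) = (XY^2 + YZ^2 - XZ^2) / (2 * XY * YZ) = 1/3
cos_xyz = (xy**2 + yz**2 - xz**2) / (2 * xy * yz)
angle = math.degrees(math.acos(cos_xyz))
print(round(angle, 6))  # 70.528779
```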
Sample space is continuous or uncountably infinite. Write expressions for the events. | Filo
Question asked by Filo student
The sample space is continuous or uncountably infinite. Write expressions for the events:
a. At least one of the events occurs.
b. Only A occurs.
c. A and B occur but not C.
d. All three events occur.
e. None of A, B, C occurs.
f. Exactly one of A, B, C occurs.
Updated On: Oct 8, 2022 | Topic: Algebra | Subject: Mathematics | Class: Class 12
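The event symbols appear to have been lost when the page's math images were dropped. Assuming the three events are A, B and C (the standard form of this exercise), one way to write the requested expressions is:

```latex
\begin{aligned}
\text{a.}\quad & A \cup B \cup C \\
\text{b.}\quad & A \cap B^{c} \cap C^{c} \\
\text{c.}\quad & A \cap B \cap C^{c} \\
\text{d.}\quad & A \cap B \cap C \\
\text{e.}\quad & (A \cup B \cup C)^{c} \\
\text{f.}\quad & (A \cap B^{c} \cap C^{c}) \cup (A^{c} \cap B \cap C^{c}) \cup (A^{c} \cap B^{c} \cap C)
\end{aligned}
```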
The sides of a parallelogram are 12 cm and 8 cm. One of the diagonals is 10 cm long. If d is the length of the other diagonal, then which of the following is correct?
Ans :- Option D (d > 12)
Explanation :- If d1 and d2 are the lengths of the diagonals and a, b are the side lengths, the diagonals of a parallelogram satisfy d1² + d2² = 2(a² + b²). Given a = 12, b = 8 and d1 = 10: 100 + d² = 2(144 + 64) = 416, so d² = 316 and d = √316 ≈ 17.8, which is greater than 12. OPTION D is correct.
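The arithmetic behind answer D (the parallelogram law d1² + d2² = 2(a² + b²), with the values from the question) can be checked numerically:

```python
import math

# parallelogram law: d1^2 + d2^2 = 2*(a^2 + b^2), with a = 12, b = 8, d1 = 10
a, b, d1 = 12, 8, 10
d2 = math.sqrt(2 * (a**2 + b**2) - d1**2)  # sqrt(416 - 100) = sqrt(316)
print(round(d2, 3))  # 17.776, so d > 12
```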
DSA Crash Course [Part 4] - Is Prime Approach - Taro Video In this lesson, Alvin explores the strategy to solving the following interview problem: Write a function, is_prime, that takes in a number as an argument. The function should return a boolean indicating whether or not the given number is prime. A prime number is a number that is only divisible by two distinct numbers: 1 and itself. For example, 7 is a prime because it is only divisible by 1 and 7. For example, 6 is not a prime because it is divisible by 1, 2, 3, and 6. You can assume that the input number is a positive integer. is_prime(2) # -> True is_prime(3) # -> True is_prime(4) # -> False is_prime(5) # -> True is_prime(6) # -> False is_prime(7) # -> True is_prime(8) # -> False is_prime(2017) # -> True is_prime(2048) # -> False If you need additional support taking these DSA skills and actually applying them, take Alvin's complete data structures and algorithms course on Structy. You can try out the concepts yourself in their interactive code editor and learn advanced DSA patterns like stack exhaustive recursion. Use this link to get 20% off the entire Structy DSA learning experience. Follow Alvin on LinkedIn: https://www.linkedin.com/in/alvin-zablan-b73a92117/
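The lesson walks through the approach rather than reproducing final code here. A typical O(√n) trial-division solution consistent with the examples above might look like this — a sketch, not necessarily Alvin's exact code from the course:

```python
from math import isqrt

def is_prime(n):
    # trial division up to sqrt(n): any factor pair (a, b) with a * b == n
    # has min(a, b) <= sqrt(n), so checking divisors up to isqrt(n) suffices
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(2017))  # True
print(is_prime(2048))  # False
```

Checking only up to √n is the key improvement over naive trial division to n − 1: for n = 2017 it tests 43 candidate divisors instead of roughly 2000.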
X-FOSLS: Extended Finite Element Techniques for Crack Propagation and Fracture
“X-FOSLS” enhances standard finite elements to include the difficult and challenging cases in which singularities and pathologies in the solution cause standard techniques to fail. These methods rely on a consistent mathematical formulation of jump and internal boundary conditions, and are used in crack/fracture calculations and the analysis of the failure of interconnect lines in semiconductors.
Women in Number Theory by Research

- Abdellatif, Ramla — p-modular and p-adic Langlands programs (Ecole Normale Supérieure de Lyon; Université de Picardie Jules Verne, Amiens, France)
- Alfes-Neumann, Claudia — number theory, automorphic forms (University of Paderborn, Germany)
- Arenas, Àngela — Quadratic forms, automorphic forms and Hilbert modular forms (Universitat de Barcelona, Spain)
- Arnold-Roksandich — Mock modular and quantum modular forms (Oregon State University, USA)
- Aubert, Anne-Marie — (Institut de Mathématiques de Jussieu - Paris Rive Gauche, CNRS, Sorbonne Université, Université de Paris)
- Ballantine, Cristina M. — Number theory, representation theory and automorphic forms, algebraic combinatorics, visualization of complex functions (College of the Holy Cross, MA, USA)
- Bayer Isant, Pilar — Arithmetic geometry, modular curves, Galois representations (Universitat de Barcelona, Spain)
- Beneish, Lea — Arithmetic statistics of higher degree points on curves and applications of modular generating series (University of California, Berkeley)
- Borade, Neelima — Number theory and algebraic geometry (Princeton University)
- Bringmann, Kathrin — Number theory and combinatorics involving elliptic and Siegel modular forms, Maass forms, partitions, and mock theta functions (University of Cologne, Germany)
- Chan, Charlotte — Representation theory (University of Michigan, USA)
- Cojocaru, Alina Carmen — Number theory (University of Illinois at Chicago, USA)
- Davis, Rachel — Algebraic number theory, arithmetic geometry, cryptography, Galois representations (University of Wisconsin, USA)
- DeCelles, Amy — Spectral theory of automorphic forms (Bethel University in Mishawaka, IN, USA)
- Dever, Lindsay — Automorphic forms on hyperbolic 3-manifolds (Bryn Mawr College, PA, USA)
- Dijols, Sarah — Representations of reductive groups over local fields (University of Aix-Marseille, France)
- Eischen, Ellen — Automorphic forms (University of Oregon, USA)
- Emory, Melissa — Automorphic representation theory, representation theory of p-adic groups, number theory
- Feigon, Brooke — Number theory, automorphic forms, representation theory (The City College of New York, USA)
- Fintzen, Jessica — Algebraic number theory, representation theory, p-adic groups (University of Michigan, USA; University of Cambridge, UK; Institute for Advanced Study, USA)
- Folsom, Amanda — Analytic and algebraic number theory, modular forms, Maass forms, mock theta functions, combinatorics, q-series, Lie theory, Jacobi forms, modular units (Amherst College)
- Frechette, Sharon — Hypergeometric functions over finite fields, L-functions, modular forms (College of the Holy Cross, USA)
- Fuselier, Jenny — Hypergeometric functions over finite fields, modular forms, elliptic curves (High Point University, USA)
- Garthwaite, Sharon — Automorphic forms, partition generating functions (Bucknell University)
- Gerbelli-Gauthier — Automorphic representations, cohomology of arithmetic groups (IAS, USA and CRM, Canada)
- Graves, Hester — Euclidean ideal classes and modular forms (Queen's University, Ontario, Canada)
- Grundman, Helen G. — Many topics of research (Bryn Mawr College)
- Hahn, Heekyoung — Automorphic L-functions, relative trace formula, algebraic cycles on Shimura varieties, and representations of the classical groups (Duke University, USA)
- Hamieh, Alia — Algebraic number theory, automorphic forms, half-integral weight modular forms and L-functions (University of Lethbridge, Canada)
- Hsu, Catherine — Congruences between modular forms, Euclidean ideals (University of Bristol, UK)
- Hsu, Chi-Yun — Eigenvarieties, Euler systems (UCLA, USA)
- Jameson, Marie — Modular forms and q-series (University of Tennessee, USA)
- Taylor, Karen — (Bronx Community College, CUNY)
- Kezuka, Yukako — Number theory, Iwasawa theory, arithmetic geometry (Universität Regensburg, Germany)
- Khaqan, Maryam — Modular forms, mock modular forms, moonshine, elliptic curves (Emory University, United States)
- Klinger-Logan, Kim — Spectral theory of automorphic forms (University of Minnesota, USA)
- Kumari, Moni — (Tata Institute of Fundamental Research, Mumbai, India)
- Kundu, Debanjana — Iwasawa theory (University of Toronto)
- Lang, Jaclyn — Galois representations and modular forms (Temple University, Philadelphia)
- Lee, Min — Analytic number theory, especially L-functions, automorphic representations, Maass forms; harmonic analysis (Brown University, RI, USA)
- Li, Wenching Winnie — Number theory, automorphic forms, zeta functions, spectral graph theory, coding theory (Penn State University)
- Long, Ling — Finite index subgroups of the modular group and their modular forms (Louisiana State University, USA)
- Ludwig, Judith — Arithmetic geometry, p-adic Langlands program (Universität Bonn, Germany)
- Marzec, Jolanta — Modular and automorphic forms, L-functions (Kazimierz Wielki University, Bydgoszcz, Poland)
- Maurischat, Kathrin — Number theory, automorphic/modular forms, representation theory (Heidelberg University, Germany)
- Medvedovsky, Anna — Modular forms, Hecke algebras, Galois representations, mod-p phenomena (Boston University, USA)
- Ollivier, Rachel — Arithmetic and automorphic forms (University of British Columbia, BC, Canada)
- Pippich, Anna — Automorphic forms (University of Konstanz, Germany)
- Roy, Manami — Automorphic forms and elliptic curves (Fordham University, USA)
- Smajlovic, Lejla — Analytic number theory, L-functions, Selberg class (University of Sarajevo)
- Swisher, Holly — Number theory and combinatorics, including modular and mock modular forms, integer partitions, hypergeometric series (Oregon State University, USA)
- Tanabe, Naomi — Number theory, representation theory, automorphic forms, L-functions (Dartmouth College, USA)
- Thompson, Kate — Quadratic forms, modular forms, quaternion algebras (DePaul University, USA)
- Treneer, Stephanie — Modular forms, number theory, theory of partitions (Western Washington University, USA)
- Tretkoff, Paula — Number theory; geometry, classical and non-commutative (Texas A&M University, USA)
- Tu, Fang-Ting — Number theory: modular forms and hypergeometric functions (Louisiana State University, USA)
- Valentino, Maria — Iwasawa theory, Drinfeld modular forms, Beilinson-Bloch-Kato conjecture (University of Calabria, Italy)
- Varma, Ila — Arithmetic statistics and Galois representations (UC San Diego, USA)
- Vincent, Christelle — Modular forms, Drinfeld modules, Drinfeld modular forms and Drinfeld modular curves, curves and Jacobians with CM, and curves over fields of positive characteristic (University of Vermont, USA)
- Walling, Lynne — Modular forms (University of Bristol, UK)
- Wattanawanichkul, Nawapan — (University of Illinois, Urbana-Champaign)
- Wieczorek, Maggie — Modular forms and partition theory (University of Tennessee, USA)
- Xiao, Luciena — Shimura varieties and Langlands program (University of Helsinki, Finland)
{"url":"https://womeninnumbertheory.org/women-in-number-theory-by-research/","timestamp":"2024-11-02T22:14:02Z","content_type":"text/html","content_length":"116617","record_id":"<urn:uuid:52ce756c-2184-4edd-96f6-befedc33a57d>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00755.warc.gz"}
Drawing A Decagon

A decagon is a polygon with ten sides and ten interior angles; its name is derived from the Greek words "deka," meaning ten, and "gonia," meaning angle. Based on their sides, decagons are broadly classified into regular decagons and irregular decagons: a defining characteristic of a regular decagon is that all of its sides have equal lengths and all of its interior angles have equal measures. A decagon can also be convex or concave.

The sum of the interior angles of a decagon always amounts to a remarkable 1,440 degrees, so each interior angle of a regular decagon measures 144 degrees. Connecting the center of a regular decagon to each of its ten vertices divides the full circle of $360^\circ$ into ten equal central angles of $\frac{360^\circ}{10} = 36^\circ$. The perimeter of a regular decagon with side length s is simply 10 × s. Besides solving homework problems about the decagon's side length, area, and perimeter, an online decagon calculator can also compute the lengths of all the diagonals and radii.

A regular decagon can be inscribed inside a circle: all ten vertices lie on the circumcircle, and the circle and the decagon share the same center. To construct a regular decagon inside a circle using compass and ruler, first draw a line a bit larger than the wanted diameter of the circumcircle; draw this line lightly as a construction line. Mark a point on the line, which will act as the center, and draw the circumcircle around it. Then mark off ten equally spaced points around the circle (one every 36 degrees of central angle) and connect each neighboring pair with line segments to form the decagon. Finally, erase any unnecessary construction lines and you're done. This is essentially the same method used to construct a pentagon, but the top point of the pentagon is used as the midpoint for locating the decagon's remaining vertices.
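The angle and perimeter facts above are easy to check numerically. The sketch below (a minimal illustration, not part of the original page; the helper name `regular_polygon_vertices` is mine) computes the interior-angle sum, the central angle, the vertex coordinates of a regular decagon inscribed in a unit circle, and its side length:

```python
import math

def regular_polygon_vertices(n, radius=1.0):
    """Vertices of a regular n-gon inscribed in a circle, first vertex at the top."""
    return [(radius * math.cos(math.radians(90 + i * 360 / n)),
             radius * math.sin(math.radians(90 + i * 360 / n)))
            for i in range(n)]

n = 10
interior_sum = (n - 2) * 180        # (10 - 2) * 180 = 1440 degrees
interior_angle = interior_sum / n   # 144 degrees for a regular decagon
central_angle = 360 / n             # 36 degrees per central angle
vertices = regular_polygon_vertices(n)

# Side length of a regular decagon inscribed in a unit circle is 2*sin(18 deg).
side = math.dist(vertices[0], vertices[1])
perimeter = n * side                # perimeter = 10 * s
print(interior_sum, interior_angle, central_angle, round(side, 4))
# → 1440 144.0 36.0 0.618
```

The `0.618` side length is no accident: for a unit circumradius it equals 1/φ, which is why the decagon construction is so closely tied to the pentagon construction.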
{"url":"https://classifieds.independent.com/print/drawing-a-decagon.html","timestamp":"2024-11-13T22:54:56Z","content_type":"application/xhtml+xml","content_length":"23316","record_id":"<urn:uuid:3c306af6-ce22-4764-b703-7708a843d09e>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00383.warc.gz"}
Viola has to relocate for her job. She finds a townhome with an option to rent or buy. The conditions of each are shown below. Rent: Move-in costs of $2,380 and monthly payment of $845. Buy: Move-in costs of $5,260 and monthly payment of $785. Viola moves frequently due to her job, but she thinks that she will stay in the area for 4 years. Therefore, she decided to buy. Choose the best evaluation of Viola’s decision. a. Since the costs would be the same over the 4 year period, she will have made a good decision if the property value does not decrease. b. She made a fairly good decision. Buying the townhome will be cheaper over the 4 year period as long as she doesn’t have major repairs to make. c. She made a poor decision if the property value does not increase. Renting the townhome would be cheaper over the 4 year period. d. There is not enough information given to determine which option is best.
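As a quick check (this computation is mine, not part of the original problem page), the four-year totals can be computed directly; over 48 months the two options come out exactly equal, which is the fact that answer choice (a) hinges on:

```python
months = 4 * 12  # 4 years of monthly payments

rent_total = 2380 + 845 * months  # move-in cost + monthly rent
buy_total = 5260 + 785 * months   # move-in cost + monthly payment

print(rent_total, buy_total)
# → 42940 42940  (the two options cost the same over 4 years)
```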
{"url":"http://math4finance.com/general/viola-has-to-relocate-for-her-job-she-finds-a-townhome-with-an-option-to-rent-or-buy-the-conditions-of-each-are-shown-below-rent-move-in-costs-of-2-380-and-monthly-payment-of-845-buy-move-in-costs-of-5-260-and-monthly-payment-of-785-viola","timestamp":"2024-11-08T14:23:33Z","content_type":"text/html","content_length":"31508","record_id":"<urn:uuid:de5f1767-8582-4382-a4eb-26031aa8b4d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00380.warc.gz"}
Five Times Table Worksheet

Practice and improve your multiplication skills with our printable worksheets for the 5 times table. The 5 times table is one of the easiest to remember: each answer is 5 more than the last, and the answers always end in a 5 or a 0. It is usually taught after the tables of 1, 2 and 10. Focusing in on one set of facts at a time is really the best way to learn the multiplication table: the child can work on mastering each one before piling on more, and learning through a multiplication table enables faster calculation and improves mental arithmetic skills. Number tables are one of the most fundamental concepts taught to a student right at the start of their mathematics journey.

The worksheets offer various kinds of exercises:
- Multiplication tables with individual questions: each box multiplies the single number by every other number, with each question on one line.
- Colouring grids: students colour in all the multiples of 5 on a number grid, then use the grid to answer the accompanying math questions. One example is an A4 grid of numbers that reveals a picture of a space rocket when the multiples of 5 (up to 60) are coloured in.
- Matching exercises, where students draw lines from questions to answers.
- Jigsaw puzzles, speed tests, and 5 times table games, with a chart and a diploma to earn.

Some worksheets come with colors and pictures, making the learning experience more interesting, and some include an answer section with the entire table to show the kids whatever they missed. The sheets can be used as an independent extension or starter activity, might be kept in a tray and used as a gap filler, and are useful practice ahead of the multiplication check in Year 4. Everything is free to download as a PDF (one pack is 244.62 KB) and print, and a printable times tables quiz generator lets you select the times tables for each worksheet. This post also contains a 5-page printable packet that focuses on just the 5× tables — 8 free 5 times tables worksheets in all, suitable for 1st and 2nd grade students practising mental math.

Sample questions (fill in the answers):
5 × 1 = ___   5 × 9 = ___   5 × 15 = ___   5 × 0 = ___   5 × 3 = ___   5 × 12 = ___
5 × 8 = ___   5 × 6 = ___   5 × 10 = ___   5 × 5 = ___   5 × 2 = ___   5 × 4 = ___
5 × 7 = ___   5 × 14 = ___  5 × 11 = ___   5 × 13 = ___  5 × 18 = ___
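Worksheets like the ones described above are easy to generate programmatically. The short sketch below (an illustration only — it is not one of the downloadable PDFs, and the function name is mine) prints a randomized set of 5 times table questions and checks the "every answer ends in 0 or 5" pattern the worksheet teaches:

```python
import random

def five_times_worksheet(n_questions=12, max_factor=12, seed=0):
    """Return (questions, answers) for a randomized 5 times table worksheet."""
    rng = random.Random(seed)
    factors = rng.sample(range(0, max_factor + 1), n_questions)
    questions = [f"5 x {f} = ___" for f in factors]
    answers = [5 * f for f in factors]
    return questions, answers

questions, answers = five_times_worksheet()
for q in questions:
    print(q)

# Every multiple of 5 ends in 0 or 5 -- the pattern the worksheet teaches.
assert all(a % 10 in (0, 5) for a in answers)
```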
{"url":"https://tunxis.commnet.edu/view/five-times-table-worksheet.html","timestamp":"2024-11-14T10:14:46Z","content_type":"text/html","content_length":"34967","record_id":"<urn:uuid:ab143c95-03fe-4495-9d4f-2680ee57ac90>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00153.warc.gz"}
8th grade prealgebra worksheets

Related topics: converting mixed fractions to decimals | dummies mathematics | poem about intermediate algebra | radical expressions in your daily life | prentice algebra 1 classics algebra 1 online | simplifying square roots worksheet | year 8 math exam | answers for math homework book h12-11 | 5th grade algebra | simplifying radical expressions online solver | pre-algebra help | solving second order differential equations that involve exponents

Author: Finlec Volf
Posted: Friday 29th of Dec 14:50
Hi experts! Are there any online tools to learn about the concepts of 8th grade prealgebra worksheets? I didn't really get the chance to cover the entire content as yet. This is probably why I face problems while solving equations.

Author: oc_rana (Registered: 28.06.2002)
Posted: Saturday 30th of Dec 21:38
Hello friend. Let me tell you something: even tutors in this subject are sometimes weak in a particular topic. Mathematics is such a vast subject that it sometimes becomes impossible to excel at every topic with equal ease. If you are facing problems with 8th grade prealgebra worksheets, why don't you try Algebrator? This program has helped many colleagues of mine, and I have used it once as well. I was quite happy with it.

Author: Matdhejs (Registered: 08.03.2007, From: egypt, alexandria)
Posted: Monday 01st of Jan 08:14
I remember I faced similar problems with Cramer's rule and least common denominators. This Algebrator is truly a great piece of math software. It simply gives a step-by-step solution to any algebra problem that I copied from my homework on clicking Solve. I have been able to use the program through several courses: Algebra 2, Intermediate Algebra, and Algebra 1. I seriously recommend the program.

Author: DVH (Registered: 08.12.2001, From: The Netherlands)
Posted: Wednesday 03rd of Jan 10:16
I recommend trying out Algebrator. It not only assists you with your math problems, but also provides all the necessary steps in detail so that you can enhance your understanding of the subject.
{"url":"https://mathfraction.com/fraction-simplify/parallel-lines/8th-grade-prealgebra.html","timestamp":"2024-11-07T10:46:14Z","content_type":"text/html","content_length":"86621","record_id":"<urn:uuid:c4cc0ee4-1919-4af0-ab5d-e6224a449147>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00220.warc.gz"}
Pre-processing Time Series Data in Machine Learning

Time series data is found everywhere, and before we can perform any time series analysis, we must preprocess the data. Time series preprocessing techniques have a significant influence on data modeling.

Key takeaways from this blog

In this article, we will mainly discuss these points:
1. The definition of time-series data and its importance.
2. Preprocessing steps for time series data.
3. Structuring time-series data, imputing missing values, denoising features, and detecting outliers in the dataset.

To begin with, let's understand the definition of a time series: a time series is a sequence of evenly spaced observations recorded at a specific time interval. An example of a time series would be gold prices: here our observations are the gold price collected over a period of time at fixed intervals. The time unit could be minutes, hours, days, years, etc., but the time difference between any two consecutive samples will be the same.

In this article, we will see the common time-series preprocessing steps that should be carried out before diving into the data modeling part. Let's look at the common problems associated with time-series data.

Preprocessing Time Series Data

Time series data holds a lot of information, but generally it is not visible. The common problems associated with time series are unordered timestamps, missing values (or timestamps), outliers, and noise in the data. Of all these, handling missing values is the most difficult, because conventional imputation (a technique for handling missing data by replacing missing values so as to retain most of the information) is not directly applicable to time series data. To see this preprocessing on real data, we will use Kaggle's Air Passengers dataset, which can be downloaded from here.
Structuring Time Series

Time series data is generally found in unstructured formats: timestamps may be mixed up and not properly ordered. Also, the date-time column usually has the default string data type, and it is essential to convert it to a date-time data type before applying any operations to it. Let's implement this on our dataset:

```python
import pandas as pd

passenger = pd.read_csv('AirPassengers.csv')
passenger['Date'] = pd.to_datetime(passenger['Date'])
# Sort the observations chronologically by date.
passenger.sort_values(by=['Date'], inplace=True, ascending=True)
```

Missing Value Imputation in Time Series

Handling missing values in time series data is a challenging task. Conventional imputation techniques are not applicable to time-series data, since the sequence in which values are received matters. To address this problem, we have interpolation: a commonly used technique for time series missing value imputation that estimates a missing data point from the two surrounding known data points. This method is simple and most intuitive.
Interpolation has the following sub-methods:
• Linear interpolation
• Spline interpolation
• Time-based interpolation

Let's see what our data looks like before imputation:

```python
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure

figure(figsize=(12, 5), dpi=80, linewidth=10)
plt.plot(passenger['Date'], passenger['Passengers'])
plt.title('Air Passengers Raw Data with Missing Values')
plt.xlabel('Years', fontsize=14)
plt.ylabel('Number of Passengers', fontsize=14)
```

Now let's take a look at the imputations:

```python
passenger['Linear'] = passenger['Passengers'].interpolate(method='linear')
passenger['Spline order 3'] = passenger['Passengers'].interpolate(method='spline', order=3)
# Note: method='time' requires a DatetimeIndex, e.g. after passenger.set_index('Date').
passenger['Time'] = passenger['Passengers'].interpolate(method='time')

methods = ['Linear', 'Spline order 3', 'Time']
for method in methods:
    figure(figsize=(12, 4), dpi=80, linewidth=10)
    plt.plot(passenger['Date'], passenger[method])
    plt.title('Air Passengers Imputation using: ' + method)  # was the undefined name `types`
    plt.xlabel('Years', fontsize=14)
    plt.ylabel('Number of Passengers', fontsize=14)
```

All three methods give a reliable set of imputations. These imputations make the most sense when the missing-value window (the width of the missing stretch) is small; if several consecutive values are missing, it becomes harder for these methods to estimate them.

Denoising a Time Series

Noise in a time series can cause significant problems, and noise removal is highly recommended before building any model. The process of carefully minimizing noise is called denoising. The following methods are commonly used to remove noise from a time series:

Rolling mean. The rolling mean is simply the mean over a window of previous observations, where the window is a sequence of consecutive values from the time series. The mean is calculated for each successive window, which can greatly help in minimizing noise in time series data.
Let's apply the rolling mean on the Google stock price:

rolling_google = google_stock_price['Open'].rolling(20).mean()
plt.plot(google_stock_price['Date'], google_stock_price['Open'])
plt.plot(google_stock_price['Date'], rolling_google)
plt.ylabel('Stock Price')
plt.legend(['Open', 'Rolling Mean'])

The Fourier transform can help remove the noise by converting the time series data into the frequency domain, where we can filter out the noisy frequencies. Then, we can apply the inverse Fourier transform to obtain the filtered time series. Let's use the Fourier transform on the Google stock price:

# fft_denoiser is a user-defined helper (not shown here) that filters
# out frequency components below the given power threshold.
denoised_google_stock_price = fft_denoiser(value, 0.001, True)
plt.plot(time, google_stock_price['Open'][0:300])
plt.plot(time, denoised_google_stock_price)
plt.xlabel('Date', fontsize=13)
plt.ylabel('Stock Price', fontsize=13)
plt.legend(['Open', 'Denoised: 0.001'])

Outlier Detection in Time Series

An outlier in a time series refers to a sudden peak or drop in the trend line. We are not concerned with the factors causing the outliers, though there can certainly be multiple; we will confine ourselves to their detection. Let's take a look at the available methods for detecting outliers:

• Rolling Statistical Bound based approach

This method is the most intuitive and works for almost all kinds of time series. Here, upper and lower bounds are created based on specific statistical measures, like the mean and standard deviation, Z and T scores, or percentiles of the distribution. For instance, we can define our upper and lower bounds using the rolling mean plus or minus a multiple of the rolling standard deviation. Taking the mean and standard deviation of the whole series is not advisable for outlier detection since the bounds would be static in that case. The bounds should be created on a rolling basis, i.e., considering a continuous set of observations to create bounds and then shifting to another window. This method is highly effective and simple for outlier detection.
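The rolling-bound approach just described can be sketched with pandas. This is a minimal sketch, not code from the tutorial: the window size of 10 and the three-standard-deviation multiplier are illustrative choices, and the statistics are shifted by one step so that a spike does not inflate the bounds used to test itself.

```python
import pandas as pd

def rolling_bound_outliers(series, window=10, k=3.0):
    """Flag points outside rolling mean +/- k rolling standard deviations."""
    # Use the previous window's statistics (shift by one) so an outlier
    # does not widen the bounds that are used to evaluate it.
    mean = series.rolling(window).mean().shift(1)
    std = series.rolling(window).std().shift(1)
    upper = mean + k * std
    lower = mean - k * std
    return (series > upper) | (series < lower)

# A mostly flat series with one obvious spike at position 13.
prices = pd.Series([100, 101, 99, 100, 102, 101, 100, 99, 101, 100,
                    100, 101, 99, 150, 100, 101, 100, 99, 100, 101,
                    100, 99, 101, 100, 100])
outliers = rolling_bound_outliers(prices)
```

Only the spike is flagged; the first `window` observations have undefined bounds and are never flagged, which is the usual trade-off of rolling statistics.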
As the name suggests, Isolation Forest is a decision-tree-based machine learning algorithm for anomaly detection. It works by isolating the data points on a given set of features using the decision tree's partitions. In other words, it takes a sample out of the dataset and builds trees over that sample until each point is isolated. To isolate a data point, partitions are made by randomly selecting a split between the max and min values of a feature, until each point is isolated. Random partitioning of features creates shorter paths in the trees for anomalous data points, thus distinguishing them from the rest of the data.

K-means clustering is again an unsupervised machine learning algorithm frequently used to detect outliers in time series data. This algorithm looks at the data points in the dataset and groups similar data points into K clusters. Anomalies are distinguished by measuring the distance of a data point to its nearest centroid. If the distance is greater than a certain threshold value, the data point is marked as an anomaly. The K-means algorithm uses Euclidean distances for comparison.

Possible Interview Questions

If one mentions a time-series project on their CV, the interviewer may ask questions like these from this topic:

1. What are the ways to preprocess time-series data, and how do they differ from standard imputation methods?
2. What is meant by a time-series window?
3. Have you heard of the Isolation Forest method? If yes, can you explain how it works?
4. What is the Fourier transform, and why do we need it?
5. What are the different methods to correct missing values in time-series data?

In this tutorial, we looked at some common time series data preprocessing techniques. We started with ordering the time-series observations; then, we looked at various missing value imputation techniques.
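To make the Isolation Forest description concrete, here is a small scikit-learn sketch on synthetic data; the contamination value below is an illustrative assumption about the expected fraction of anomalies, not something fixed by the algorithm.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# 200 well-behaved two-dimensional points plus three obvious anomalies.
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
anomalies = np.array([[8.0, 8.0], [-9.0, 7.0], [10.0, -10.0]])
X = np.vstack([normal, anomalies])

# Each tree isolates points with random axis-aligned splits; points that
# end up on short paths are scored as anomalous.
forest = IsolationForest(n_estimators=100, contamination=0.02, random_state=0)
labels = forest.fit_predict(X)  # +1 for inliers, -1 for anomalies
```

The three injected points sit far from the main cluster, so they receive the shortest average path lengths and are labeled -1.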
We found that time-series imputations differ from conventional imputation techniques since we deal with an ordered set of observations. Further, we applied some noise removal techniques to the Google stock price dataset and finally discussed some outlier detection methods for time series. Using all these preprocessing steps ensures high-quality data, ready for building complex models. Enjoy Learning! Enjoy Pre-processing! Enjoy Algorithms!
NDA 1 2025 Exam Maths Trigonometry Class 4

A recent class on Trigonometry was held for students preparing for the National Defence Academy and Naval Academy (NDA-NA) Exam – Paper I (Maths), with a specific focus on the Properties of Triangles. The discussion covered crucial sub-topics like the sine rule, cosine rule, and the calculation of the area of a triangle, all of which play a significant role in mastering trigonometric problems related to geometry. In this blog, we'll review the essential elements discussed in the class, along with strategies to effectively prepare for these topics and score well in the NDA-NA exam.

Class Overview: Key Concepts in Properties of Triangles

The class was structured to introduce and reinforce key trigonometric principles related to triangles. This foundation is crucial for solving geometry-based trigonometric problems that are frequently asked in the NDA-NA exam. Here are the main sub-topics that were covered:

1. Properties of Triangles

A strong understanding of the properties of triangles is foundational for trigonometric problems. These properties not only help in solving triangles but also in connecting trigonometric formulas to geometric shapes. The class touched on:
• Classification of triangles based on sides and angles (such as equilateral, isosceles, and scalene triangles).
• The concept of interior and exterior angles, which is critical when applying trigonometric rules to triangles.
By understanding these fundamental properties, students were able to see how trigonometric functions are applied to calculate unknown angles and sides of triangles.

2. The Sine Rule

The sine rule (also known as the Law of Sines) connects the sides and angles of any triangle, making it a powerful tool for solving triangles where we know either two angles and a side or two sides and a non-included angle. This rule was highlighted as particularly useful when dealing with non-right triangles, which are common in the NDA-NA exam.
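For reference, the three standard results discussed in this class can be stated as follows, using the usual notation in which a, b, c denote the sides opposite angles A, B, C and R denotes the circumradius:

```latex
% Sine rule (Law of Sines)
\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2R

% Cosine rule (Law of Cosines)
c^{2} = a^{2} + b^{2} - 2ab\cos C

% Area of a triangle from two sides and the included angle
\text{Area} = \tfrac{1}{2}\,ab\sin C
```

A useful sanity check: the cosine rule reduces to the Pythagorean theorem when C = 90°.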
In the class, the instructor illustrated how to apply the sine rule step by step, encouraging students to practice problems to understand its applications fully.

3. The Cosine Rule

The cosine rule (Law of Cosines) was another focal point of the class. This rule is particularly helpful for finding an unknown side of a triangle when the other two sides and the included angle are known, or for finding an angle when all three sides are known. The instructor emphasized that the cosine rule is often necessary when the sine rule cannot be applied, especially for solving non-right-angled triangles. Several examples were solved, showing how this rule allows for the calculation of angles and sides that are otherwise difficult to determine.

4. Area of a Triangle

The class also focused on various ways to calculate the area of a triangle, especially using trigonometric principles. In addition to the common base-height formula, the class explored how to find the area using trigonometric functions when given certain angles and sides. Understanding how to calculate the area of a triangle using trigonometric properties is important for the NDA-NA exam, where questions often ask for the area in terms of angles and sides, rather than simple base-height values.

Strategies for Preparing Properties of Triangles for the NDA-NA Exam

The properties of triangles and their connection to trigonometry are crucial for the NDA-NA exam. Below are some effective strategies to ensure thorough preparation for this topic:

1. Master the Basics of Triangles

Before diving into advanced trigonometric applications, ensure that you have a solid understanding of the basic properties of triangles. Review different types of triangles, their angle properties, and how the sum of angles in a triangle behaves. These fundamentals will support your understanding when applying the sine and cosine rules.
Familiarize yourself with key concepts such as the exterior angle theorem, angle bisectors, and triangle inequality properties. These can sometimes serve as helpful tools when solving more complex trigonometric problems in triangles.

2. Practice the Sine and Cosine Rules Extensively

The sine and cosine rules are central to solving triangle problems in the NDA-NA exam. Make sure you understand the conditions under which each rule applies and practice applying them to a variety of problems. Start by solving basic problems where the rules are applied directly, then move on to more complex questions involving multiple steps or different triangles. This will improve your ability to quickly identify which rule to apply in a given scenario. One important tip shared during the class was the importance of interpreting the triangle correctly, especially in identifying the known values (angles or sides) and then deciding the appropriate rule to apply.

3. Work on Time Management

In exams like the NDA-NA, time is a critical factor. Solving triangle problems, especially when using trigonometric rules, can sometimes be time-consuming. The key to efficiency is practice. The more you practice, the quicker you'll become at recognizing which rule to use and how to proceed with the solution. The instructor also suggested focusing on shortcuts and techniques that can save time. For example, for certain common triangle configurations, memorizing specific trigonometric ratios and their applications can help speed up problem-solving.

4. Solve Previous Years' MCQs

A large part of the class discussion was focused on multiple-choice questions (MCQs) from previous NDA-NA exams. These questions provide insight into the types of problems you can expect, as well as the level of difficulty. Solving past exam papers is one of the best ways to prepare, as it familiarizes you with the exam format, boosts your confidence, and helps you manage your time effectively.
Make it a habit to practice at least 10 to 15 MCQs daily from previous years, especially those involving the properties of triangles, the sine and cosine rules, and the calculation of the area of triangles. This will give you a strong understanding of how to approach such questions in the actual exam.

5. Understand the Application of Formulas

Formulas are at the heart of solving trigonometric problems, but they should not be blindly memorized. It's important to understand why and how these formulas work. In the class, the instructor explained the logic behind the sine and cosine rules and how they relate to the properties of triangles. For the NDA-NA exam, it's crucial not only to know the formulas but also to understand their derivation and application. This understanding will help you in situations where the problem might not be straightforward but requires some critical thinking.

6. Focus on Visualization

Trigonometry is very much a visual subject. When dealing with triangles, always try to sketch the problem on paper. Even a rough sketch can provide valuable insights into the relationships between sides and angles. By visualizing the problem, you will find it easier to apply the correct rules and formulas. Make sure to practice drawing diagrams and labeling them correctly. This will help you organize the given information and prevent mistakes during the exam.

Conclusion: Mastering Trigonometric Properties of Triangles

The recent class on Trigonometric Properties of Triangles provided valuable insights into how to approach this topic for the NDA-NA exam. Understanding the sine rule, the cosine rule, and how to calculate the area of triangles using trigonometric methods is essential for solving a wide range of problems. By mastering these topics, focusing on efficient problem-solving strategies, and consistently practicing MCQs, students can significantly enhance their ability to handle trigonometric problems related to triangles.
Remember that time management, understanding formulas, and visualizing problems are key elements of success. With thorough preparation and regular practice, you can confidently approach the trigonometry section of the NDA-NA exam and maximize your performance. Good luck with your preparation!
American Mathematical Society
Cohomological finite generation for restricted Lie superalgebras and finite supergroup schemes
Represent. Theory 17 (2013), 469-507
DOI: https://doi.org/10.1090/S1088-4165-2013-00440-5
Published electronically: September 5, 2013

We prove that the cohomology ring of a finite-dimensional restricted Lie superalgebra over a field of characteristic $p > 2$ is a finitely-generated algebra. Our proof makes essential use of the explicit projective resolution of the trivial module constructed by J. Peter May for any graded restricted Lie algebra. We then prove that the cohomological finite generation problem for finite supergroup schemes over fields of odd characteristic reduces to the existence of certain conjectured universal extension classes for the general linear supergroup $GL(m|n)$ that are similar to the universal extension classes for $GL_n$ exhibited by Friedlander and Suslin.

References
• Irfan Bagci, Cohomology and support varieties for restricted Lie superalgebras, Algebr. Represent. Theory (2012).
• Petter Andreas Bergh and Steffen Oppermann, Cohomology of twisted tensor products, J. Algebra 320 (2008), no. 8, 3327–3338. MR 2450729, DOI 10.1016/j.jalgebra.2008.08.005
• Nicolas Bourbaki, Commutative algebra. Chapters 1–7, Elements of Mathematics (Berlin), Springer-Verlag, Berlin, 1998. Translated from the French; Reprint of the 1989 English translation. MR
• Nicolas Bourbaki, Algebra II. Chapters 4–7, Elements of Mathematics (Berlin), Springer-Verlag, Berlin, 2003. Translated from the 1981 French edition by P. M. Cohn and J. Howie; Reprint of the 1990 English edition [Springer, Berlin; MR1080964 (91h:00003)]. MR 1994218, DOI 10.1007/978-3-642-61698-3
• Jonathan Brundan and Alexander Kleshchev, Modular representations of the supergroup $Q(n)$. I, J. Algebra 260 (2003), no. 1, 64–98. Special issue celebrating the 80th birthday of Robert Steinberg.
MR 1973576, DOI 10.1016/S0021-8693(02)00620-8
• Henri Cartan and Samuel Eilenberg, Homological algebra, Princeton Landmarks in Mathematics, Princeton University Press, Princeton, NJ, 1999. With an appendix by David A. Buchsbaum; Reprint of the 1956 original. MR 1731415
• Pavel Etingof and Viktor Ostrik, Finite tensor categories, Mosc. Math. J. 4 (2004), no. 3, 627–654, 782–783 (English, with English and Russian summaries). MR 2119143, DOI 10.17323/
• Leonard Evens, The cohomology ring of a finite group, Trans. Amer. Math. Soc. 101 (1961), 224–239. MR 137742, DOI 10.1090/S0002-9947-1961-0137742-1
• Eric M. Friedlander and Brian J. Parshall, Cohomology of infinitesimal and discrete groups, Math. Ann. 273 (1986), no. 3, 353–374. MR 824427, DOI 10.1007/BF01450727
• Eric M. Friedlander and Brian J. Parshall, Support varieties for restricted Lie algebras, Invent. Math. 86 (1986), no. 3, 553–562. MR 860682, DOI 10.1007/BF01389268
• Eric M. Friedlander and Andrei Suslin, Cohomology of finite group schemes over a field, Invent. Math. 127 (1997), no. 2, 209–270. MR 1427618, DOI 10.1007/s002220050119
• Jens Carsten Jantzen, Representations of algebraic groups, 2nd ed., Mathematical Surveys and Monographs, vol. 107, American Mathematical Society, Providence, RI, 2003. MR 2015057
• Seok-Jin Kang and Jae-Hoon Kwon, Graded Lie superalgebras, supertrace formula, and orbit Lie superalgebras, Proc. London Math. Soc. (3) 81 (2000), no. 3, 675–724. MR 1781152, DOI 10.1112/
• Gongxiang Liu, Support varieties and representation types for basic classical Lie superalgebras, J. Algebra 362 (2012), 157–177. MR 2921636, DOI 10.1016/j.jalgebra.2012.04.010
• M. Mastnak, J. Pevtsova, P. Schauenburg, and S. Witherspoon, Cohomology of finite-dimensional pointed Hopf algebras, Proc. Lond. Math. Soc. (3) 100 (2010), no. 2, 377–404. MR 2595743, DOI
• Akira Masuoka, The fundamental correspondences in super affine groups and super formal groups, J. Pure Appl.
Algebra 202 (2005), no. 1-3, 284–312. MR 2163412, DOI 10.1016/j.jpaa.2005.02.010
• Akira Masuoka and Alexandr N. Zubkov, Quotient sheaves of algebraic supergroups are superschemes, J. Algebra 348 (2011), 135–170. MR 2852235, DOI 10.1016/j.jalgebra.2011.08.038
• J. P. May, The cohomology of restricted Lie algebras and of Hopf algebras, J. Algebra 3 (1966), 123–146. MR 193126, DOI 10.1016/0021-8693(66)90009-3
• John McCleary, A user's guide to spectral sequences, 2nd ed., Cambridge Studies in Advanced Mathematics, vol. 58, Cambridge University Press, Cambridge, 2001. MR 1793722
• Susan Montgomery, Hopf algebras and their actions on rings, CBMS Regional Conference Series in Mathematics, vol. 82, Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1993. MR 1243637, DOI 10.1090/cbms/082
• Stewart B. Priddy, Koszul resolutions, Trans. Amer. Math. Soc. 152 (1970), 39–60. MR 265437, DOI 10.1090/S0002-9947-1970-0265437-8
• Roberto La Scala and Alexandr N. Zubkov, Donkin-Koppinen filtration for general linear supergroups, Algebr. Represent. Theory 15 (2012), no. 5, 883–899. MR 2969281, DOI 10.1007/
• Bin Shu and Weiqiang Wang, Modular representations of the ortho-symplectic supergroups, Proc. Lond. Math. Soc. (3) 96 (2008), no. 1, 251–271. MR 2392322, DOI 10.1112/plms/pdm040
• B. B. Venkov, Cohomology algebras for some classifying spaces, Dokl. Akad. Nauk SSSR 127 (1959), 943–944 (Russian). MR 0108788
• William C. Waterhouse, Introduction to affine group schemes, Graduate Texts in Mathematics, vol. 66, Springer-Verlag, New York-Berlin, 1979. MR 547117, DOI 10.1007/978-1-4612-6217-6
• Dennis Bouke Westra, Superrings and supergroups, Ph.D. thesis, Universität Wien, October 2009.
• A. N. Zubkov, Affine quotients of supergroups, Transform. Groups 14 (2009), no. 3, 713–745.
MR 2534805, DOI 10.1007/s00031-009-9055-z

Bibliographic Information
• Christopher M. Drupieski
• Affiliation: Department of Mathematical Sciences, DePaul University, Chicago, Illinois 60614
• MR Author ID: 924956
• ORCID: 0000-0002-8250-1030
• Email: cdrupies@depaul.edu
• Received by editor(s): January 9, 2013
• Received by editor(s) in revised form: May 8, 2013
• Published electronically: September 5, 2013
• © Copyright 2013 American Mathematical Society. The copyright for this article reverts to public domain 28 years after publication.
• Journal: Represent. Theory 17 (2013), 469-507
• MSC (2010): Primary 17B56, 20G10; Secondary 17B55
• DOI: https://doi.org/10.1090/S1088-4165-2013-00440-5
• MathSciNet review: 3096330
The goal of this course is to familiarize students with the methodology for the analysis of equilibrium and efficiency in exchange economies, in production economies, and in environments with external effects. To reach this goal, students need to master a set of analytical tools, develop a set of skills, and reach an analytical proficiency, as described in the following: (i) Students will need to master the concepts of equilibrium and efficiency, will understand their use in economic analysis, and will comprehend how to apply them to analyze economic problems; (ii) In terms of specific abilities, students will be able to carry out formal analyses of economic problems; (iii) In terms of general abilities, students will develop their analytical ability and their ability to carry out critical analyses; (iv) Students will finally need to reach sufficient proficiency in the solution of complex problems.
Getting Started with multi-VAR
Zachary F. Fisher

The multi-VAR Framework

The \(\texttt{multivar}\) package (Fisher, 2021) is intended to provide an entry point to the multi-VAR algorithm (Fisher, Kim, Fredrickson and Pipiras, 2022). In short, multi-VAR is designed to model multivariate time series obtained from multiple individuals. This method is especially well-suited to time-series paradigms, such as intensive longitudinal data (ILD), where it is often unclear to what degree individuals differ in terms of their dynamic processes. If individuals share little in common, results from multi-VAR resemble what would be obtained from fitting separate models to each individual. If individuals are homogeneous, results resemble what would be obtained from pooling the data and fitting a single model to the sample. Most importantly, if the truth lies somewhere in between, with certain dynamics shared while others are idiosyncratic, results will reflect this and provide researchers with new tools for isolating generalizable dynamics. Although this vignette is not intended to provide a thorough introduction to multi-VAR, it introduces the \(\texttt{multivar}\) package functionality.

A Simulated multi-VAR Example

Simulated data is one way to gain insight into the problem the multi-VAR framework attempts to solve. Here, we consider the \(\texttt{dat_multivar_sim}\) dataset included in the \(\texttt{multivar}\) package. This dataset contains multivariate time series data for \(k = 9\) individuals with \(d = 10\) variables, collected at \(t = 100\) equidistant time points. The data was generated such that each individual's VAR(1) transition matrix has \(20\%\) nonzero entries. This means, for example, each individual has 20 nonzero directed relationships in their data-generating model.
The position of nonzero elements in each individual's transition matrix was selected randomly given the following constraints: \(2/3\) of each individual's paths are shared by all individuals, and \(1/3\) are unique to each individual. For each individual, coefficient values from \(\mathcal{U}(0.1, 0.9)\) were randomly drawn until the stability conditions for the VAR model were satisfied. The underlying idea is that multivariate time series arising from multiple units are often heterogeneous. These generated data reflect one example of data fitting this description.

Plot Simulated Data

We can visualize the transition matrices for the simulated data as follows.

Common Effects

To plot the effects common to each individual, use the option \(\texttt{plot_type = "common"}\).

Construct a multi-VAR Model

With data in hand, we can construct a multi-VAR model. The \(\texttt{data}\) argument should be a list containing the \(k\) multivariate time series from each individual. Each data matrix should be organized with variables as columns and time points as rows (e.g. \(T \times d\)). By default, \(\texttt{multivar}\) employs the adaptive LASSO penalty, which requires an initial estimate of the individual transition matrices. The default option is an initial weight matrix based on estimates from the individual-level LASSO. Additional details on these initial weights and the adaptive LASSO can be found in Fisher, Kim, Fredrickson and Pipiras (2022). As of version 1.0.0, the only cross-validation procedures available in \(\texttt{multivar}\) are rolling window cross-validation (RWCV) and blocked k-folds CV (blocked). The default is blocked with 5 folds. Additional cross-validation methods for multi-VAR are currently under development and should be available soon.

Plot Results

After performing cross-validation, results from the multi-VAR procedure can be visualized using the \(\texttt{plot_results()}\) function.
Below we show the estimated common and total effects.

Compare Simulated Data to Results

Finally, we can compare the results from the first three individuals to the data-generating transition matrices as follows.

Common Effects

plot_sim(dat_multivar_sim, plot_type = "common"),
plot_results(fit, plot_type = "common"),
ncol = 1

References

Fisher, Z. F. (2021). multivar: Penalized estimation and forecasting of multiple subject vector autoregressive (multi-VAR) models. R package version 1.0.0, https://CRAN.R-project.org/package=multivar

Fisher, Z. F., Kim, Y., Fredrickson, B., and Pipiras, V. (2022). Penalized Estimation and Forecasting of Multiple Subject Intensive Longitudinal Data. Psychometrika.
Using effective sample sizes to evaluate the efficiency of length samples collected by at-sea observers in the krill fishery in Subarea 48.1

Request access to meeting document
Document number:
Submitted by: Dr Dirk Welsford (Australia)
Approved by: Dr Dirk Welsford (Australia)

Catch at length is an important input into any stock assessment. Consequently, collecting length data from the catch is a task undertaken by all at-sea observers in CCAMLR fisheries. Although analyses in the past have looked at the optimal design of the observer program in terms of levels of coverage of vessels and hauls (e.g., Agnew et al., 2009; 2010), less attention has been focused on how many krill observers should measure (however, see Thannasekos et al. 2012). We used C1 effort, catch and observer data from Subarea 48.1, collected between 2010 and 2015, to characterise how many krill are measured by observers, and for how many hauls. We then simulated the impact of different haul-wise sample sizes on the ability to estimate mean length in a sample per SSMU × month combination (effective sample size). The median number of krill measured per haul was around 200 (range 0-652). However, haul-wise sample sizes down to 50 did not substantially reduce the effective sample size, whereas increasing the number of hauls sampled did substantially increase it. Therefore, we recommend that observers collect smaller samples (50) at the haul level, over a greater number of hauls, to allow better estimates of catch at length.
What are machine learning algorithms: A complete guide

Machine Learning Algorithms are the cornerstone of Artificial Intelligence (AI), enabling systems to learn from data and improve their performance over time. These algorithms allow computers to make predictions, perform classifications, and even identify patterns within large datasets, making them indispensable in today's data-driven world. At the heart of AI, Machine Learning Algorithms empower applications ranging from image and speech recognition to autonomous vehicles and personalized recommendations. Their importance lies in their ability to handle vast amounts of data and generate insights that would be impractical for humans to achieve manually. The efficiency and accuracy of these algorithms significantly enhance AI capabilities, pushing the boundaries of what machines can accomplish.

In this article, we will delve into the three main types of Machine Learning Algorithms: Supervised, Unsupervised, and Reinforcement Learning.
• Supervised Learning involves training a model on a labeled dataset, enabling it to make predictions or classifications based on new, unseen data. Examples include linear regression and support vector machines.
• Unsupervised Learning deals with unlabeled data, where the algorithm identifies patterns and structures within the data. Clustering algorithms like k-means and dimensionality reduction techniques like PCA are key examples.
• Reinforcement Learning focuses on training agents to make sequences of decisions by rewarding desired behaviors. Algorithms like Q-learning and deep reinforcement learning are commonly used in robotics and game playing.

Throughout this article, we will explore these types of Machine Learning Algorithms in detail, illustrating their applications and significance in advancing AI technologies.

Supervised Learning Algorithms

Supervised Learning is a fundamental subset of Machine Learning Algorithms where the model is trained on a labeled dataset.
This means that each training example is paired with an output label, allowing the algorithm to learn the relationship between the input data and the output. Supervised Learning is widely used for tasks such as classification and regression, where the goal is to predict a target variable based on input features.

Popular Supervised Learning Algorithms

Linear Regression

Linear Regression is a basic yet powerful algorithm used for regression tasks. It models the relationship between a dependent variable and one or more independent variables by fitting a linear equation to the observed data. The equation of a simple linear regression model is given by:

y = β0 + β1x + ϵ

where y is the dependent variable, x is the independent variable, β0 is the intercept, β1 is the slope, and ϵ is the error term.

Logistic Regression

Logistic Regression is used for binary classification tasks. It models the probability that a given input belongs to a particular class. Unlike Linear Regression, the output of Logistic Regression is a probability value between 0 and 1. The logistic function, also known as the sigmoid function, is defined as:

σ(z) = 1 / (1 + e^(−z))

This algorithm is effective for problems where the target variable is categorical.

Decision Trees

Decision Trees are non-parametric supervised learning algorithms used for both classification and regression tasks. They work by splitting the data into subsets based on the value of input features. This process is repeated recursively, resulting in a tree-like model of decisions. The key advantage of Decision Trees is their interpretability. A simple Decision Tree for a classification task might look like:

from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier()
model.fit(X_train, y_train)

Support Vector Machines (SVM)

Support Vector Machines are powerful for both classification and regression tasks. SVMs work by finding the hyperplane that best separates the classes in the feature space.
For non-linearly separable data, SVMs use kernel functions to project the data into a higher-dimensional space where a linear separator can be found. A basic implementation of SVM using a linear kernel is:

from sklearn.svm import SVC

model = SVC(kernel='linear')
model.fit(X_train, y_train)

Neural Networks are inspired by the structure and function of the human brain. They consist of layers of interconnected nodes, or neurons, that process data in a hierarchical manner. Neural Networks are particularly effective for complex tasks such as image and speech recognition. A simple Neural Network can be implemented using libraries like TensorFlow or PyTorch:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5)

Key Points to Consider When Choosing a Supervised Learning Algorithm

• Nature of the Problem: classification or regression? Choose algorithms accordingly. Logistic Regression and SVM are suitable for classification, whereas Linear Regression is used for regression tasks.
• Data Size and Quality: algorithms like Decision Trees and SVMs can overfit on small datasets, while Neural Networks require large datasets to perform well.
• Interpretability: Decision Trees and Linear Regression models are easier to interpret, while Neural Networks, though powerful, are often seen as black boxes.
• Computational Efficiency: some algorithms, like Neural Networks, are computationally intensive and may require specialized hardware (GPUs) for training, while simpler algorithms like Linear Regression are computationally far less demanding.

Supervised Learning Algorithms are a critical component of Machine Learning, providing robust solutions for various predictive modeling tasks.
By understanding the strengths and weaknesses of each algorithm, practitioners can make informed decisions to best address their specific needs.

Unsupervised Learning Algorithms

Unsupervised Learning is a subset of Machine Learning Algorithms that operates on datasets without labeled responses. The goal is to uncover hidden patterns or intrinsic structures within the data. Unlike Supervised Learning, there are no target variables to guide the learning process. Instead, these algorithms infer the natural organization of the data, making them essential for exploratory data analysis, anomaly detection, and data preprocessing.

Popular Unsupervised Learning Algorithms

Clustering algorithms partition data into distinct groups or clusters based on similarity. The aim is to ensure that data points within a cluster are more similar to each other than to those in other clusters. A widely used clustering algorithm is K-Means, which minimizes the variance within each cluster:

from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=3)
kmeans.fit(X)
labels = kmeans.labels_

Another popular method is Hierarchical Clustering, which builds a tree of clusters by recursively merging or splitting existing clusters based on a chosen metric.

Dimensionality Reduction techniques reduce the number of features in a dataset while retaining most of the information. This is crucial for visualizing high-dimensional data and improving the performance of other Machine Learning Algorithms. Principal Component Analysis (PCA) is a common dimensionality reduction technique that projects data onto a lower-dimensional space:

from sklearn.decomposition import PCA

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

Another technique is t-Distributed Stochastic Neighbor Embedding (t-SNE), which is particularly effective for visualizing high-dimensional data in two or three dimensions.
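To see what the PCA projection above does under the hood, here is a minimal sketch (not from the original article) that computes the same kind of two-component projection directly with NumPy via the singular value decomposition; the synthetic data, where most variance lies along one made-up direction, is for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 3-D data that mostly varies along one direction
t = rng.normal(0.0, 1.0, (200, 1))
X = t @ np.array([[3.0, 2.0, 1.0]]) + rng.normal(0.0, 0.1, (200, 3))

# PCA via SVD: center the data, decompose, project onto the top components
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:2].T   # analogous to PCA(n_components=2).fit_transform

# Fraction of total variance captured by the first principal component
explained = S[0] ** 2 / (S ** 2).sum()
```

Because the data were generated along a single direction plus small noise, `explained` comes out close to 1, which is exactly the situation where reducing dimensions loses little information.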
K-Nearest Neighbors (KNN)

Although often associated with Supervised Learning, K-Nearest Neighbors can also be used in an unsupervised context for clustering and anomaly detection. KNN operates by finding the K closest data points in the feature space and can be used to estimate the density of data points, aiding in identifying clusters and outliers.

Naive Bayes, while typically used for classification, can also be adapted for clustering in an unsupervised manner. This probabilistic algorithm is based on Bayes' theorem and assumes that features are conditionally independent given the class. It can be utilized to compute the likelihood of different cluster assignments for each data point.

Key Points to Consider When Choosing an Unsupervised Learning Algorithm

• Nature of the Data: different algorithms make different assumptions about its structure and distribution. K-Means, for example, assumes spherical clusters of roughly equal size, while hierarchical clustering does not have such constraints. High-dimensional data might benefit from dimensionality reduction before applying clustering algorithms.
• Scalability and Performance: algorithms like K-Means are computationally efficient and can handle large datasets, whereas hierarchical clustering can be more computationally intensive and may struggle with larger datasets. Consider the computational complexity and the time required for training, especially with large datasets.
• Interpretability: some algorithms, like PCA, provide straightforward interpretations by reducing dimensions, while others, like t-SNE, prioritize preserving local structures over interpretability.
• Objective and Application: for initial data exploration, methods like PCA and t-SNE are useful for gaining insights and visualizing the data structure.
• Anomaly Detection: KNN and density-based clustering algorithms such as DBSCAN are effective for identifying outliers in the data.

Unsupervised Learning Algorithms are powerful tools for discovering patterns and structures within unlabeled data. By carefully considering the nature of the data, scalability, interpretability, and specific objectives, practitioners can select the most appropriate algorithm to gain valuable insights and enhance the performance of their Machine Learning projects.

Reinforcement Learning Algorithms

Reinforcement Learning (RL) is a dynamic subset of Machine Learning Algorithms where agents learn to make decisions by interacting with an environment. The goal is to maximize cumulative rewards through a trial-and-error process driven by a feedback loop. Unlike Supervised and Unsupervised Learning, RL emphasizes learning from the consequences of actions rather than from predefined labeled examples.

Popular Reinforcement Learning Algorithms

Q-Learning is a model-free RL algorithm that seeks to learn the quality, or Q-values, of actions in given states. The Q-value represents the expected future rewards an agent can receive by taking a specific action in a specific state. The algorithm updates the Q-values using the Bellman equation:

Q(s, a) ← Q(s, a) + α [r + γ max_a′ Q(s′, a′) − Q(s, a)]

where s is the current state, a is the action taken, r is the reward received, s′ is the next state, α is the learning rate, and γ is the discount factor. A simple implementation in Python, assuming a Gym-style environment `env` and predefined `alpha`, `gamma`, `state_space`, `action_space`, and `total_episodes`, is:

import numpy as np

# Initialize Q-table
Q = np.zeros((state_space, action_space))

# Q-Learning algorithm
for episode in range(total_episodes):
    state = env.reset()
    done = False
    while not done:
        action = np.argmax(Q[state, :])  # purely greedy; practical code adds exploration
        next_state, reward, done, _ = env.step(action)
        Q[state, action] = Q[state, action] + alpha * (reward + gamma * np.max(Q[next_state, :]) - Q[state, action])
        state = next_state

Deep Q-Networks combine Q-Learning with deep neural networks to handle high-dimensional state spaces.
Instead of using a Q-table, DQNs use a neural network to approximate the Q-value function. This allows RL to be applied to complex problems such as playing video games or controlling robotic arms. A basic DQN architecture might include:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(24, input_shape=(state_space,), activation='relu'),
    tf.keras.layers.Dense(24, activation='relu'),
    tf.keras.layers.Dense(action_space, activation='linear')
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss='mse')

Policy Gradient Methods directly optimize the policy function, which maps states to actions, rather than learning the Q-value function. These methods are effective for high-dimensional action spaces and continuous control tasks. The policy is updated by computing gradients of the expected reward with respect to the policy parameters. One common algorithm is the REINFORCE algorithm:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(24, input_shape=(state_space,), activation='relu'),
    tf.keras.layers.Dense(24, activation='relu'),
    tf.keras.layers.Dense(action_space, activation='softmax')
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

def train_step(states, actions, rewards):
    with tf.GradientTape() as tape:
        probs = model(states)
        action_probs = tf.reduce_sum(probs * actions, axis=1)
        loss = -tf.reduce_mean(tf.math.log(action_probs) * rewards)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

Key Points to Consider When Choosing a Reinforcement Learning Algorithm

• Complexity of the Environment: for simple, discrete environments, Q-Learning might be sufficient. For more complex, continuous environments, DQNs or Policy Gradient Methods are more appropriate.
• State and Action Space: high-dimensional state spaces benefit from DQNs due to their ability to approximate Q-values with neural networks.
Continuous action spaces are better handled by Policy Gradient Methods.
• Sample Efficiency: algorithms like Q-Learning can be sample inefficient, requiring many interactions with the environment. DQNs and Policy Gradient Methods can be more sample efficient, particularly when combined with techniques like experience replay.
• Computational Resources: DQNs and Policy Gradient Methods typically require more computational power and memory due to their use of neural networks. Ensure that you have adequate resources for training these models.

Reinforcement Learning Algorithms are powerful tools for solving complex decision-making problems. By understanding the strengths and limitations of each algorithm, practitioners can select the most appropriate method to maximize performance in their specific applications.

Conclusion: Machine Learning Algorithms

Machine Learning Algorithms are at the core of the rapid advancements in Artificial Intelligence, transforming industries and driving innovation. From Supervised Learning algorithms like Linear Regression and Decision Trees, to Unsupervised Learning methods such as K-Means and PCA, and advanced Reinforcement Learning techniques like Q-Learning and Policy Gradients, each algorithm offers unique strengths tailored to specific tasks and challenges. These algorithms enable machines to learn from data, make predictions, discover patterns, and optimize decisions, paving the way for intelligent systems that can adapt and evolve. The diversity of Machine Learning Algorithms underscores their versatility in addressing a wide array of applications, from healthcare diagnostics and financial forecasting to autonomous vehicles and personalized recommendations.

As the field of AI continues to grow, it is essential for practitioners to explore and experiment with different algorithms to uncover their full potential. Leveraging the right Machine Learning Algorithm for a given use case can significantly enhance the performance and accuracy of AI models.
Embrace the diversity of these algorithms, and continue to innovate and push the boundaries of what is possible in Artificial Intelligence.
Using 2 different measures of the same entity for risk prediction

Hello everyone,

I am currently analyzing a dataset where measurements of the same entity (coronary blood flow) are taken in 2 states (with negligible temporal separation between the 2 measurements). The first is the "resting" state and the second is the "stress" state ("stress" here refers to a state of augmented coronary blood flow induced by the injection of a pharmaceutical substance). Typically, what is often done is that these 2 measures are "compressed" by taking their ratio (coronary flow ratio; CFR). The primary rationale justifying this is that the ratio roughly represents the "magnitude of coronary blood flow augmentation", and that it is this latter conceptual entity that "really" matters when it comes to prognosis. This implies that the underlying rest/stress values of coronary flow don't matter much once their ratio is known (which is the subject of some controversy).

After a bit of background reading (specifically: https://www.fharrell.com/post/errmed/, https://hbiostat.org/bbr/change, https://www.thespinejournalonline.com/article/S1529-9430(17)30233-4/abstract, https://stats.stackexchange.com/questions/51564/is-it-valid-to-use-a-difference-score-as-an-independent-variable-in-a-regression), the takeaway seems to be that the best analytic approach in such scenarios is to model the outcome as a smooth function of both covariates (e.g., using tensor splines, or restricted cubic splines for the main effects plus a restricted smooth interaction term).

My question is: Prof. Harrell states here (https://www.fharrell.com/post/errmed/) that, if both covariates are shown to be important, an additional goal would be to show how best to combine them.

A. I'm not sure how I can identify the best way of combining the two measures (demonstrating whether, for example, the ratio or difference is important).
Would one simply compare something like:

rcs(first_measurement, 5) + rcs(difference, 5)
rcs(first_measurement, 5) + rcs(ratio, 5)

in terms of their fit (e.g., using AIC)?

B. Conversely, I'm not sure how to tell that a simplifying construct (ratio/difference) is insufficient to capture the relationship. Would one simply compare:

rcs(first_measurement, 5) + rcs(second_measurement, 5) + rcs(first_measurement, 5) %ia% rcs(second_measurement, 5)
rcs(first_measurement, 5) + rcs(ratio, 5)

(or using rcs(difference, 5) instead of ratio) and see if the latter results in an unacceptable loss of model fit?

C. Is there a meaningful difference between using the second measurement or the ratio/difference if smooth terms with interaction are used? In other words, is there a meaningful difference between

rcs(first_measurement, 5) + rcs(second_measurement, 5) + rcs(first_measurement, 5) %ia% rcs(second_measurement, 5)
rcs(first_measurement, 5) + rcs(ratio, 5) + rcs(first_measurement, 5) %ia% rcs(ratio, 5)

(or using rcs(difference, 5) instead of ratio)?

This is the kind of problem that is really great to work on. Often I've wished that we could put a restriction on our models such that the two variables have the same shape of rcs transformation, like you can do for monotonic variables in the brms package. When there are no interactions you can do such an analysis by making the dataset twice as tall and using a cluster sandwich covariance estimator to adjust standard errors. But that's kind of awkward. The idea is to have a common shape but with a simple magnifier (an "external \beta") for the second predictor.

The way that clinical researchers tend to analyze such data makes an initial mistake that is quite common: assuming that change is more important than the stressed state.
If you use resting left ventricular ejection fraction (LVEF) and LVEF under maximum exercise to jointly predict time until cardiovascular event, you find that resting LVEF is irrelevant, i.e., that change from rest to exercise is almost solely noise. Likewise if in ICU patients you measure serum creatinine (SCr) on day 1 and on day 3, and predict survival from day 3 onwards, day 1 SCr is almost irrelevant, i.e., the change in SCr is a weak prognostic variable.

You might think of this strategy, which focuses on predicted cross-validation predictive discrimination by computing AIC on several models, letting the two measurements be denoted A and B:

1. log(B/A)
2. log(A) + log(B)
3. rcs(log(A)) + rcs(log(B))
4. rcs(log(A)) + rcs(log(B)) + rcs(log(A)) %ia% rcs(log(B))

Each rcs is meant to use 4 knots. Model (1) vs. (2) checks the adequacy of the ratio assuming linearity in all the logs. (3) vs. (2) checks linearity in the logs. (4) vs. (3) checks for interaction.

I hope you'll post what you find. My money is on A being fairly irrelevant.
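The logic of the (1)-vs-(2) comparison can be illustrated numerically. The sketch below is not from the thread and uses plain least squares in Python rather than rms::rcs; the data-generating model, in which the outcome depends only on log(B) so that the ratio discards information, is an assumption made purely for demonstration:

```python
import numpy as np

def aic_ols(X, y):
    """AIC for a Gaussian OLS fit: n*log(RSS/n) + 2*(p + 1)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    n, p = X.shape
    return n * np.log(rss / n) + 2 * (p + 1)

rng = np.random.default_rng(0)
n = 500
A = rng.lognormal(0.0, 0.3, n)                 # "resting" measurement
B = A * rng.lognormal(0.5, 0.3, n)             # "stress" measurement
y = 2.0 * np.log(B) + rng.normal(0.0, 0.5, n)  # outcome driven by B alone (assumed)

ones = np.ones(n)
aic_ratio = aic_ols(np.column_stack([ones, np.log(B / A)]), y)        # model (1)
aic_joint = aic_ols(np.column_stack([ones, np.log(A), np.log(B)]), y) # model (2)

print(aic_ratio, aic_joint)
```

Under this setup the joint model has a much lower (better) AIC than the ratio model, mirroring the point that compressing two measurements into a ratio can throw away most of the prognostic signal.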
992 Square Attometers to Gunthas Square Attometer [am2] Output 992 square attometers in ankanam is equal to 1.4830276574133e-34 992 square attometers in aana is equal to 3.1198828731559e-35 992 square attometers in acre is equal to 2.4512832171115e-37 992 square attometers in arpent is equal to 2.9015209233506e-37 992 square attometers in are is equal to 9.92e-36 992 square attometers in barn is equal to 0.00000992 992 square attometers in bigha [assam] is equal to 7.4151382870667e-37 992 square attometers in bigha [west bengal] is equal to 7.4151382870667e-37 992 square attometers in bigha [uttar pradesh] is equal to 3.9547404197689e-37 992 square attometers in bigha [madhya pradesh] is equal to 8.89816594448e-37 992 square attometers in bigha [rajasthan] is equal to 3.9220566146468e-37 992 square attometers in bigha [bihar] is equal to 3.9227770512035e-37 992 square attometers in bigha [gujrat] is equal to 6.1282134603857e-37 992 square attometers in bigha [himachal pradesh] is equal to 1.2256426920771e-36 992 square attometers in bigha [nepal] is equal to 1.4647186739885e-37 992 square attometers in biswa [uttar pradesh] is equal to 7.9094808395378e-36 992 square attometers in bovate is equal to 1.6533333333333e-38 992 square attometers in bunder is equal to 9.92e-38 992 square attometers in caballeria is equal to 2.2044444444444e-39 992 square attometers in caballeria [cuba] is equal to 7.3919523099851e-39 992 square attometers in caballeria [spain] is equal to 2.48e-39 992 square attometers in carreau is equal to 7.6899224806202e-38 992 square attometers in carucate is equal to 2.0411522633745e-39 992 square attometers in cawnie is equal to 1.837037037037e-37 992 square attometers in cent is equal to 2.4512832171115e-35 992 square attometers in centiare is equal to 9.92e-34 992 square attometers in circular foot is equal to 1.3595383718782e-32 992 square attometers in circular inch is equal to 1.9577352713767e-30 992 square attometers in cong is equal to 
9.92e-37 992 square attometers in cover is equal to 3.6767976278725e-37 992 square attometers in cuerda is equal to 2.5241730279898e-37 992 square attometers in chatak is equal to 2.3728442518613e-34 992 square attometers in decimal is equal to 2.4512832171115e-35 992 square attometers in dekare is equal to 9.9200065442283e-37 992 square attometers in dismil is equal to 2.4512832171115e-35 992 square attometers in dhur [tripura] is equal to 2.9660553148267e-33 992 square attometers in dhur [nepal] is equal to 5.8588746959539e-35 992 square attometers in dunam is equal to 9.92e-37 992 square attometers in drone is equal to 3.8620511911806e-38 992 square attometers in fanega is equal to 1.542768273717e-37 992 square attometers in farthingdale is equal to 9.802371541502e-37 992 square attometers in feddan is equal to 2.3798866185549e-37 992 square attometers in ganda is equal to 1.2358563811778e-35 992 square attometers in gaj is equal to 1.1864221259307e-33 992 square attometers in gajam is equal to 1.1864221259307e-33 992 square attometers in guntha is equal to 9.8051415366171e-36 992 square attometers in ghumaon is equal to 2.4512853841543e-37 992 square attometers in ground is equal to 4.44908297224e-36 992 square attometers in hacienda is equal to 1.1071428571429e-41 992 square attometers in hectare is equal to 9.92e-38 992 square attometers in hide is equal to 2.0411522633745e-39 992 square attometers in hout is equal to 6.9797058428269e-37 992 square attometers in hundred is equal to 2.0411522633745e-41 992 square attometers in jerib is equal to 4.9070768076177e-37 992 square attometers in jutro is equal to 1.7237185056473e-37 992 square attometers in katha [bangladesh] is equal to 1.4830276574133e-35 992 square attometers in kanal is equal to 1.9610283073234e-36 992 square attometers in kani is equal to 6.1792819058889e-37 992 square attometers in kara is equal to 4.9434255247111e-35 992 square attometers in kappland is equal to 6.4307014131985e-36 992 square 
attometers in killa is equal to 2.4512853841543e-37 992 square attometers in kranta is equal to 1.4830276574133e-34 992 square attometers in kuli is equal to 7.4151382870667e-35 992 square attometers in kuncham is equal to 2.4512853841543e-36 992 square attometers in lecha is equal to 7.4151382870667e-35 992 square attometers in labor is equal to 1.3838424809816e-39 992 square attometers in legua is equal to 5.5353699239263e-41 992 square attometers in manzana [argentina] is equal to 9.92e-38 992 square attometers in manzana [costa rica] is equal to 1.4193814244179e-37 992 square attometers in marla is equal to 3.9220566146468e-35 992 square attometers in morgen [germany] is equal to 3.968e-37 992 square attometers in morgen [south africa] is equal to 1.1579315979923e-37 992 square attometers in mu is equal to 1.48799999256e-36 992 square attometers in murabba is equal to 9.8051328684462e-39 992 square attometers in mutthi is equal to 7.9094808395378e-35 992 square attometers in ngarn is equal to 2.48e-36 992 square attometers in nali is equal to 4.9434255247111e-36 992 square attometers in oxgang is equal to 1.6533333333333e-38 992 square attometers in paisa is equal to 1.2479896135316e-34 992 square attometers in perche is equal to 2.9015209233506e-35 992 square attometers in parappu is equal to 3.9220531473785e-36 992 square attometers in pyong is equal to 3.0006049606776e-34 992 square attometers in rai is equal to 6.2e-37 992 square attometers in rood is equal to 9.8051415366171e-37 992 square attometers in ropani is equal to 1.9499267957224e-36 992 square attometers in satak is equal to 2.4512832171115e-35 992 square attometers in section is equal to 3.8301334127411e-40 992 square attometers in sitio is equal to 5.5111111111111e-41 992 square attometers in square is equal to 1.0677799133376e-34 992 square attometers in square angstrom is equal to 9.92e-14 992 square attometers in square astronomical units is equal to 4.432623519277e-56 992 square attometers 
in square bicron is equal to 9.92e-10 992 square attometers in square centimeter is equal to 9.92e-30 992 square attometers in square chain is equal to 2.4512753427152e-36 992 square attometers in square cubit is equal to 4.7456885037227e-33 992 square attometers in square decimeter is equal to 9.92e-32 992 square attometers in square dekameter is equal to 9.92e-36 992 square attometers in square digit is equal to 2.7335165781443e-30 992 square attometers in square exameter is equal to 9.92e-70 992 square attometers in square fathom is equal to 2.9660553148267e-34 992 square attometers in square femtometer is equal to 0.000992 992 square attometers in square fermi is equal to 0.000992 992 square attometers in square feet is equal to 1.0677799133376e-32 992 square attometers in square furlong is equal to 2.4512832171115e-38 992 square attometers in square gigameter is equal to 9.92e-52 992 square attometers in square hectometer is equal to 9.92e-38 992 square attometers in square inch is equal to 1.5376030752062e-30 992 square attometers in square league is equal to 4.2556868116523e-41 992 square attometers in square light year is equal to 1.1083118916874e-65 992 square attometers in square kilometer is equal to 9.92e-40 992 square attometers in square megameter is equal to 9.92e-46 992 square attometers in square meter is equal to 9.92e-34 992 square attometers in square microinch is equal to 1.537601718799e-18 992 square attometers in square micrometer is equal to 9.92e-22 992 square attometers in square micromicron is equal to 9.92e-10 992 square attometers in square micron is equal to 9.92e-22 992 square attometers in square mil is equal to 1.5376030752062e-24 992 square attometers in square mile is equal to 3.8301334127411e-40 992 square attometers in square millimeter is equal to 9.92e-28 992 square attometers in square nanometer is equal to 9.92e-16 992 square attometers in square nautical league is equal to 3.2135658089038e-41 992 square attometers in square 
nautical mile is equal to 2.8922066766613e-40 992 square attometers in square paris foot is equal to 9.4028436018957e-33 992 square attometers in square parsec is equal to 1.0418626395063e-66 992 square attometers in perch is equal to 3.9220566146468e-35 992 square attometers in square perche is equal to 1.9423541762444e-35 992 square attometers in square petameter is equal to 9.92e-64 992 square attometers in square picometer is equal to 9.92e-10 992 square attometers in square pole is equal to 3.9220566146468e-35 992 square attometers in square rod is equal to 3.922041517498e-35 992 square attometers in square terameter is equal to 9.92e-58 992 square attometers in square thou is equal to 1.5376030752062e-24 992 square attometers in square yard is equal to 1.1864221259307e-33 992 square attometers in square yoctometer is equal to 992000000000000 992 square attometers in square yottameter is equal to 9.92e-82 992 square attometers in stang is equal to 3.6618678479144e-37 992 square attometers in stremma is equal to 9.92e-37 992 square attometers in sarsai is equal to 3.5298509531822e-34 992 square attometers in tarea is equal to 1.5776081424936e-36 992 square attometers in tatami is equal to 6.0015729929215e-34 992 square attometers in tonde land is equal to 1.7984046410442e-37 992 square attometers in tsubo is equal to 3.0007864964608e-34 992 square attometers in township is equal to 1.0639250074269e-41 992 square attometers in tunnland is equal to 2.0095616238554e-37 992 square attometers in vaar is equal to 1.1864221259307e-33 992 square attometers in virgate is equal to 8.2666666666667e-39 992 square attometers in veli is equal to 1.2358563811778e-37 992 square attometers in pari is equal to 9.8051415366171e-38 992 square attometers in sangam is equal to 3.9220566146468e-37 992 square attometers in kottah [bangladesh] is equal to 1.4830276574133e-35 992 square attometers in gunta is equal to 9.8051415366171e-36 992 square attometers in point is equal to 
2.4513045173851e-35 992 square attometers in lourak is equal to 1.9610283073234e-37 992 square attometers in loukhai is equal to 7.8441132292937e-37 992 square attometers in loushal is equal to 1.5688226458587e-36 992 square attometers in tong is equal to 3.1376452917175e-36 992 square attometers in kuzhi is equal to 7.4151382870667e-35 992 square attometers in chadara is equal to 1.0677799133376e-34 992 square attometers in veesam is equal to 1.1864221259307e-33 992 square attometers in lacham is equal to 3.9220531473785e-36 992 square attometers in katha [nepal] is equal to 2.929437347977e-36 992 square attometers in katha [assam] is equal to 3.7075691435333e-36 992 square attometers in katha [bihar] is equal to 7.8455541024071e-36 992 square attometers in dhur [bihar] is equal to 1.5691108204814e-34 992 square attometers in dhurki is equal to 3.1382216409628e-33
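Every entry in the list above reduces to a single scale factor: an attometer is 1e-18 m, so a square attometer is 1e-36 m², and each target unit is a fixed number of square meters. A small sketch (the guntha and acre factors below are standard values, with one guntha taken as 1/40 acre):

```python
SQ_ATTOMETER_IN_M2 = 1e-36   # (1e-18 m) squared

# Square meters per target unit
UNIT_IN_M2 = {
    "square meter": 1.0,
    "acre": 4046.8564224,            # international acre
    "guntha": 4046.8564224 / 40.0,   # 1/40 acre (about 101.17 m^2)
}

def convert_sq_attometers(value, unit):
    """Convert a value in square attometers to the given unit."""
    return value * SQ_ATTOMETER_IN_M2 / UNIT_IN_M2[unit]

print(convert_sq_attometers(992, "square meter"))  # ~9.92e-34
print(convert_sq_attometers(992, "guntha"))        # ~9.805e-36, matching the table
```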
Multiplying Complex Numbers: (6 + 3i)(6 - 3i)

This article will explore how to multiply the complex numbers (6 + 3i) and (6 - 3i). We'll also discuss the significance of this particular multiplication and its relation to the concept of complex conjugates.

Understanding Complex Numbers

Complex numbers are expressed in the form a + bi, where 'a' and 'b' are real numbers, and 'i' is the imaginary unit, defined as the square root of -1.

Multiplying Complex Numbers

To multiply complex numbers, we use the distributive property, similar to multiplying binomials:

(6 + 3i)(6 - 3i) = 6(6 - 3i) + 3i(6 - 3i)

Expanding the terms:

= 36 - 18i + 18i - 9i²

Remember that i² = -1. Substituting this value:

= 36 - 9(-1) = 36 + 9 = 45

Conjugates and Their Significance

The complex numbers (6 + 3i) and (6 - 3i) are complex conjugates: the only difference between them is the sign of the imaginary term. Multiplying a complex number by its conjugate always results in a real number, because the imaginary terms cancel each other out, leaving only the real terms. This property is crucial in various mathematical operations involving complex numbers, particularly when dealing with division and finding the modulus of a complex number.

We have successfully multiplied the complex numbers (6 + 3i) and (6 - 3i), obtaining the result 45. This demonstrates the concept of complex conjugates and how their multiplication always yields a real number. This understanding is fundamental for working with complex numbers in various mathematical contexts.
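Python's built-in complex type (which writes the imaginary unit as j) can confirm the arithmetic above:

```python
z = 6 + 3j

# Multiplying by the conjugate: (6 + 3i)(6 - 3i)
product = z * z.conjugate()
print(product)        # (45+0j): the imaginary parts cancel, leaving a real number

# The same value is the squared modulus |z|^2 = 6^2 + 3^2
print(abs(z) ** 2)    # 45.0 up to floating-point rounding
```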
Date and Time: Monday 14 September, 3:00 p.m - 4.00 p.m. Google Meet Link: Speaker: Sumit Mishra, Emory University Title: Local-global principles for norms over semi-global fields. Abstract: Let K be a complete discretely valued field with the residue field \kappa. Let F be the function field of a smooth, projective, geometrically integral curve over K and \mathcal{X} be a regular proper model of F such that the reduced special fibre X is a union of regular curves with normal crossings. Suppose that the graph associated to \mathcal{X} is a tree (e.g. F = K(t)). Let L/F be a Galois extension of degree n such that n is coprime to \text{char}(\kappa). Suppose that \kappa is an algebraically closed field or a finite field containing a primitive n^{\rm th} root of unity. Then we show that the local-global principle holds for the norm one torus associated to the extension L/F with respect to discrete valuations on F, i.e., an element in F^{\times} is a norm from the extension L/F if and only if it is a norm from the extensions L\otimes_F F_\nu/F_\nu for all discrete valuations \nu of F.
Verge3D 360 Degree Rotation Angles - a procedure to produce rotation values over the full 360 degrees

Verge3D rotation values are a bit confusing and often make it difficult to determine whether a specific angle has been reached. The Verge3D rotational coordinate system runs from 0 to +180 and then from -180 back to 0 to complete a full revolution of 360 degrees. As a 3D artist this is not what I expect, and I often need values between 0 and 360 degrees.

The diagram above shows how the "get object rotation" puzzle provides the rotation values indicated in blue. The GetAngle procedure, which you can copy from the video or DOWNLOAD here, provides angle measurements from 0 to 360 degrees, which comes in very handy in many of my projects. This procedure not only gives you output from 0 to 360 degrees but also lets you count the number of turns an object has been rotated and lets you change the sign of the values to match the direction of the turn you desire.

The procedure has two parts: the calculation of the angle from 0 to 360, and then the determination of the direction of rotation. The first thing it does is store the object's angle from the "get object rotation" puzzle in a variable "retrieved_angle". Next we calculate the modulus; in Verge3D the modulus is calculated using the "remainder of" puzzle. This gives us the angle of the rotation from 0 to 360 degrees.

The next part calculates the number of complete revolutions, or turns. It has to determine which direction the object is rotating in order to decide whether to add or subtract the incremented value. We then determine which direction the rotation is occurring in: clockwise or counter-clockwise. The variable names are arbitrary and could have been left/right, positive/negative, or A/B; they simply record a state of direction. We test two conditions: whether the previous angle is negative and the retrieved angle is positive. This identifies a situation where the values are at a crossover point near 0. Depending on the conditions we can determine the direction the object is rotating in and increment or decrement the rotation count. We then set the previous angle to the retrieved angle and return the accumulated value. A multiply by -1 or 1 sets the sign of the accumulated value so that the values shift according to your desired direction of rotation.
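The same logic is easy to express outside the puzzle editor. Here is a minimal Python sketch of the GetAngle idea (the names and the 180-degree jump threshold are my own choices, not the actual puzzle variables):

```python
def to_360(angle):
    """Map Verge3D's -180..180 angle onto 0..360 using the modulus
    (Python's % already returns a value in [0, 360) for any input)."""
    return angle % 360

class TurnCounter:
    """Accumulate total rotation across the 0/360 seam."""
    def __init__(self):
        self.prev = None   # previous angle in 0..360
        self.turns = 0     # completed revolutions (signed)

    def update(self, retrieved_angle):
        a = to_360(retrieved_angle)
        if self.prev is not None:
            # A jump of more than 180 degrees between successive readings
            # means we crossed the 0/360 seam; its sign gives the direction.
            if a - self.prev > 180:
                self.turns -= 1       # wrapped going clockwise
            elif self.prev - a > 180:
                self.turns += 1       # wrapped going counter-clockwise
        self.prev = a
        return self.turns * 360 + a   # accumulated rotation

counter = TurnCounter()
for reading in (350, 10, 30):          # sweeping past 360 once
    total = counter.update(reading)
print(total)  # 390
```

Multiplying the returned value by -1, as in the puzzle version, flips the sign convention to match whichever direction of rotation you want to count as positive.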
Plaid CTF

The third crypto challenge of the Plaid CTF was a bunch of RSA triplets \( N : e : c \), with \( N \) the modulus, \( e \) the public exponent, and \( c \) the ciphertext.

The public exponents \( e \) are all pretty big, which doesn't mean anything in particular. If you look at RSA implementations you often see \( 3 \), \( 17 \), or other Fermat primes (\( 2^m + 1 \)) because they speed up calculations. But such small exponents are not forced on you, and it's really up to you to decide how big you want your public exponent to be. The hint here is that the public exponents were chosen at random. This is not good. When you choose a public exponent you should be careful: it has to be coprime with \( \varphi(N) \) so that it is invertible (that's why it is always odd), and its related private exponent \( d \) shouldn't be too small. Maybe one of these public keys is associated with a small private key?

I quickly try my code on a small VM but it takes too much time, and I give up. A few days after the CTF is over, I check some write-ups and I see that it was indeed a small-private-key problem. The funny thing is that they all used Wiener's attack to solve the challenge. Wiener's algorithm is pretty old and only works for private exponents \( d < N^{0.25} \). I thought I could give my code a second try, but this time using a more powerful machine. I use this implementation of Boneh and Durfee, which is pretty much Wiener's method but with lattices, and it works for higher values of \( d \). That means that if the private key had been bigger, these folks would not have found the solution. Boneh and Durfee's method finds private keys up to \( d < N^{0.292} \)!
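For comparison, the Wiener attack the other write-ups used fits in a few lines of plain Python. This is my own sketch, not the challenge code, and the numbers below are the classic textbook example rather than Plaid CTF data: a small \( d \) shows up as a convergent \( k/d \) of the continued fraction of \( e/N \).

```python
from math import isqrt

def convergents(n, d):
    """Yield the convergents (numerator, denominator) of the continued
    fraction expansion of n/d."""
    p1, p2 = 1, 0   # p_{k-1}, p_{k-2}
    q1, q2 = 0, 1   # q_{k-1}, q_{k-2}
    while d:
        a = n // d
        p1, p2 = a * p1 + p2, p1
        q1, q2 = a * q1 + q2, q1
        yield p1, q1
        n, d = d, n - a * d

def wiener(e, N):
    """Try to recover a small private exponent d (roughly d < N^0.25 / 3).
    If e*d = k*phi(N) + 1 with small d, then k/d is a convergent of e/N."""
    for k, d in convergents(e, N):
        if k == 0:
            continue
        if (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        # p and q are the roots of x^2 - s*x + N, with s = p + q = N - phi + 1.
        s = N - phi + 1
        disc = s * s - 4 * N
        if disc >= 0:
            root = isqrt(disc)
            if root * root == disc and (s + root) % 2 == 0:
                return d
    return None

# Textbook example: N = 90581 = 239 * 379, e = 17993, hidden d = 5.
print(wiener(17993, 90581))  # 5
```

Boneh and Durfee replace this continued-fraction search with a lattice reduction, which is what pushes the bound from \( N^{0.25} \) to \( N^{0.292} \).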
After running the code (on my new work machine) for 188 seconds (~3 minutes) I found the solution :) Here we can see that a solution was found at triplet #60, and that it took several attempts to figure out the correct lattice size (the values of \( m \) and \( t \)) so that a solution could be found whenever a private exponent \( d < N^{0.26} \) exists. The lattice basis is shown as a matrix (the ~ represents an unhelpful vector; to try getting rid of them you can use the research branch), and the solution is displayed.

Boneh and Durfee

Here is the code if you want to try it. I started with the hypothesis \( \delta = 0.26 \), which tests every RSA triplet for a private key \( d < N^{0.26} \). It worked, but if it hadn't I would have had to re-run the code for \( \delta = 0.27 \), \( 0.28 \), etc.

I set up the problem:

```python
# data is our set of RSA triplets
for index, triplet in enumerate(data):
    print "Testing triplet #", index
    N = triplet[0]
    e = triplet[1]
    # Problem put in equation
    P.<x,y> = PolynomialRing(ZZ)
    A = int((N+1)/2)
    pol = 1 + x * (A + y)
```

I leave the default values and set my hypothesis:

```python
delta = 0.26
X = 2*floor(N^delta)
Y = floor(N^(1/2))
```

I use strict = true so that the algorithm stops if a solution is not guaranteed to be found. Then I increase the values of \( m \) and \( t \) (which increases the size of our lattice) and try:

```python
solx = -1
m = 2
while solx == -1:
    m += 1
    t = int((1-2*delta) * m)  # optimization from Herrmann and May
    print "* m: ", m, "and t:", t
    solx, soly = boneh_durfee(pol, e, m, t, X, Y)
```

If no private key less than \( N^{\delta} \) exists, I try the next triplet. However, if a solution is found, I stop everything and display it. Remember our initial equation:

\[ e \cdot d = f(x, y) \]

And what we found are \( x \) and \( y \):

```python
if solx != 0:
    d = int(pol(solx, soly) / e)
    print "found the private exponent d!"
    print d
    m = power_mod(triplet[2], d, N)
    hex_string = "%x" % m
    import binascii
    print "the plaintext:", binascii.unhexlify(hex_string)
```

And that's it! If you don't really know about lattices, I bet it was hard to follow. But do not fear! I made a video explaining the basics and a survey of Coppersmith and Boneh & Durfee.
Electric Car

State dimension: 1. Differential states: 4. Discrete control functions: 1. Path constraints: 2. Interior point inequalities: 2. Interior point equalities: 5.

The Electric Car problem tries to find an optimal driving policy for an electric car. The goal is to use minimal energy to finish a given distance. As the car can be driven in two discrete modes, which cause either acceleration (and thereby consumption of energy) or recharging of the battery, the control variable $u(t)$ is supposed to be integer. Additionally, the model of the electric car itself contains nonlinearities. The problem is discussed in detail in [Sager2015].

Mathematical formulation

The mixed-integer optimal control problem is given by

$\begin{array}{llclr} \displaystyle \min_{x, u} & x_3(t_f) \\[1.5ex] \mbox{s.t.} & \dot{x}_0 & = & (V_{alim} u - R_m x_0 - K_m x_1) / L_m, \\ & \dot{x}_1 & = & \frac{K_r^2}{Mr^2} (K_m x_0 - \frac{r}{K_r} ( M g K_f + \frac{1}{2} \rho S C_x \frac{r^2}{K_r^2} x_1^2)), \\ & \dot{x}_2 & = & \frac{r}{K_r} x_1, \\ & \dot{x}_3 & = & V_{alim} u x_0 + R_{bat} x_0^2, \\[1.5ex] & x(t_0) &=& (0,0,0,0)^T, \\ & x(t_f) & \in & \mathcal{T} \subseteq \mathbb{R}^4,\\ & x_0(t) & \in & [-i_{max}, i_{max}], \\ & u(t) &\in& \{-1, 1\}. \end{array}$

Here the four differential states stand for the electrical current ($x_0$), the angular velocity ($x_1$), the position of the car ($x_2$), and the consumed energy ($x_3$). The objective function $x_3(t_f)$ is just a reformulation of the Lagrange-type objective function tracking the energy used over time. The following fixed values are used within the model.
Name | Symbol | Value | Unit
Coefficient of reduction | $K_r$ | 10 | [-]
Air density | $\rho$ | 1.293 | $kg/m^3$
Aerodynamic coefficient | $C_x$ | 0.4 | [-]
Area in the front of the vehicle | $S$ | 2 | $m^2$
Radius of the wheel | $r$ | 0.33 | $m$
Constant representing the friction of the wheels on the road | $K_f$ | 0.03 | [-]
Coefficient of the motor torque | $K_m$ | 0.27 | [-]
Inductor resistance | $R_m$ | 0.03 | $Ohm$
Inductance of the rotor | $L_m$ | 0.05 | $H$
Mass | $M$ | 250 | $kg$
Gravity constant | $g$ | 9.81 | $m/s^2$
Battery voltage | $V_{alim}$ | 150 | $V$
Resistance of the battery | $R_{bat}$ | 0.05 | $Ohm$

Reference Solutions

We look at the particular instance of the problem with $t_f = 10s$ and target set $\mathcal{T} = \mathbb{R} \times \mathbb{R} \times \{100\} \times \mathbb{R}$, in which the car needs to cover 100m in 10s. Figure 1 shows a plot of the differential states of the optimal trajectory of the relaxed problem (i.e., $u \in [-1,1]$ instead of $u \in \{-1,1\}$) for $N = 1000$, $N$ being the number of time discretization points. The current $x_0$ increases to its maximal value of 150A, stays there for a certain time, decreases to its minimal value of -150A, stays at this value, and eventually increases slightly. This behavior corresponds to the arc sequence bang, path-constrained, singular, path-constrained, bang and can also be observed in Figure 2, which shows the corresponding switching function and the optimal control. Note that the plots show data from the solution with the indirect approach. Applying the sum-up rounding strategy results in an integer-feasible chattering solution. The resulting primal states are shown in Figure 3. One observes the high-frequency zig-zagging of the current $x_0$ that results from the fast switches in the control. The direct and indirect approaches are local optimization techniques and only provide upper bounds for the relaxed problem and hence for the original problem. Here the indirect solution of the relaxed problem gives us a bound of $x_3(t_f) = 22777.2$.
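The dynamics above are easy to sanity-check numerically. The following is a rough forward-Euler sketch (my own code, not part of the mintOC model files) that applies the constant control $u = 1$ and ignores the current bound $i_{max}$, so it only illustrates the right-hand side of the ODEs, not an optimal or even feasible policy.

```python
# Model parameters from the table above.
K_r, rho, C_x, S, r = 10.0, 1.293, 0.4, 2.0, 0.33
K_f, K_m, R_m, L_m = 0.03, 0.27, 0.03, 0.05
M, g, V_alim, R_bat = 250.0, 9.81, 150.0, 0.05

def rhs(x, u):
    """Right-hand side of the four ODEs: current, angular velocity,
    position, and consumed energy."""
    x0, x1, x2, x3 = x
    dx0 = (V_alim * u - R_m * x0 - K_m * x1) / L_m
    drag = M * g * K_f + 0.5 * rho * S * C_x * (r**2 / K_r**2) * x1**2
    dx1 = (K_r**2 / (M * r**2)) * (K_m * x0 - (r / K_r) * drag)
    dx2 = (r / K_r) * x1
    dx3 = V_alim * u * x0 + R_bat * x0**2
    return (dx0, dx1, dx2, dx3)

def euler(x, u, dt, steps):
    """Explicit Euler integration with a constant control u."""
    for _ in range(steps):
        d = rhs(x, u)
        x = tuple(xi + dt * di for xi, di in zip(x, d))
    return x

# Drive with u = 1 for 10 s (the current bound i_max is ignored here).
x_final = euler((0.0, 0.0, 0.0, 0.0), 1.0, 1e-4, 100_000)
print("position after 10 s: %.1f m" % x_final[2])
```

With a step size of 1e-4 s the explicit Euler scheme is comfortably stable for these parameters; a production implementation would use an adaptive integrator and enforce the current and control constraints.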
Source Code

Model descriptions are available in

There are several alternative formulations and variants of the above problem, in particular

• fixed final velocity: $\mathcal{T} = \mathbb{R} \times \{ 50 \frac{K_r}{3.6r} \} \times \{100\} \times \mathbb{R}$, $t_f = 10s$
• bounded velocity: $\mathcal{T} = \mathbb{R} \times \mathbb{R} \times \{100\} \times \mathbb{R}$, $t_f = 10s$, $x_2(t) \leq 45 \frac{K_r}{3.6r} \; \forall t$
• fixed final velocity, bounded velocity, longer time horizon: $x_2(t) \leq 30 \frac{K_r}{3.6r} \; \forall t$, $\mathcal{T} = \mathbb{R} \times \{ 30 \frac{K_r}{3.6r} \} \times \{100\} \times \mathbb{R}$, $t_f = 15s$

[Sager2015] Sager, S.; M. Claeys; F. Messine (2015): Efficient upper and lower bounds for global mixed-integer optimal control. Journal of Global Optimization, 61, 721--743.
An e-variable with respect to a set of distributions $\mathcal{P}$ is a nonnegative random variable $E$ such that $\mathbb{E}_P[E] \le 1$ for all $P \in \mathcal{P}$. It is a measure of evidence against the null, somewhat analogous to p-values but defined in terms of expectations instead of probabilities. E-variables are immune to lots of the issues with p-values and enable optional continuation, post-hoc hypothesis testing (see e-values enable post-hoc hypothesis testing) and sometimes optional stopping.

A note on terminology: the random variable is called an e-variable, and the realized value of the random variable is called an e-value. This is similar to p-variables and p-values. Depending on the author and situation, we often simply refer to the e-variable as an e-value (just as we do with p-variables and p-values).

E-values are intimately tied to hypothesis testing. If we are testing the null $\mathcal{P}$, a large e-value can be used to reject the null at level $\alpha$ since, by Markov's inequality (basic inequalities: Markov's inequality), $P(E \ge 1/\alpha) \le \alpha$ for every $P \in \mathcal{P}$. Thus, designing e-values which will be large under the alternative is a fruitful strategy in hypothesis testing. This is the focus of game-theoretic hypothesis testing. (See also growth rate conditions in sequential testing, which investigates how to design e-values and e-processes, and testing by betting—simple vs simple, testing by betting—simple vs composite, testing by betting—composite vs composite to see e-values in practice.)

E-values have a natural interpretation in terms of game-theoretic probability (which leads to their applicability in hypothesis testing above). If we imagine paying 1 dollar for $E$, then under the null we expect not to gain any money.

E-values also have a sequential analogue, useful in sequential statistics and sequential hypothesis testing, called an e-process. An e-process is simply a stochastic process which is an e-value at each stopping time (and thus provides a level-$\alpha$ test at each stopping time, again by Markov's inequality).
The numeraire e-variable is a special (and in some sense optimal) e-value for testing composite alternatives.

The inverse of an e-value is a p-value: $p := 1/e$ is a valid p-value, again by Markov's inequality. Such p-values tend to be more conservative than classical p-values, however (this is the price to pay for avoiding some of the issues with p-values).

E-values solve the optional continuation problem, in the sense that the product of conditionally independent e-values remains an e-value. This is because such a product forms a test martingale. They also solve the optional stopping problem in the sense that e-processes handle optional stopping by construction, so if the e-value is secretly an e-process (e.g., if it is constructed with exponential inequalities or in the form of estimating means by betting) then this property carries over.
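As a toy illustration of these definitions (my own numbers and code, not from this note): the likelihood ratio of a biased-coin alternative against the fair-coin null has expectation exactly 1 under the null, so it is an e-variable, and Markov's inequality caps the probability that it exceeds $1/\alpha$ at $\alpha$.

```python
import random

def e_value(flips, q=0.6):
    """Likelihood ratio of 'bias q' vs the fair-coin null. Each factor has
    expectation q + (1 - q) = 1 under the null, so the product is an
    e-variable."""
    e = 1.0
    for heads in flips:
        e *= (q / 0.5) if heads else ((1 - q) / 0.5)
    return e

# Simulate under the null (a genuinely fair coin).
random.seed(0)
n_trials, n_flips = 10_000, 50
es = [e_value([random.random() < 0.5 for _ in range(n_flips)])
      for _ in range(n_trials)]

mean_e = sum(es) / n_trials                   # should hover around 1
reject = sum(e >= 20 for e in es) / n_trials  # threshold 1/alpha for alpha = 0.05
print(mean_e, reject)
```

Multiplying the e-values from two independent batches of flips gives another e-value, which is exactly the optional-continuation property described above.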
Re: st: RE: suggestion for missing()

From: Jeph Herrin <[email protected]>
To: [email protected]
Subject: Re: st: RE: suggestion for missing()
Date: Tue, 16 Sep 2008 08:15:41 -0400

Apologies to anyone who was confused or misled by my earlier message. I would still like to see a function -missing()- which takes a varlist.

Nick Cox wrote:

In case it gets lost, I'll stick in here a reminder that -dropmiss- exists to do what Jeph does in his examples. -search- for locations.

On the main point: I've wanted something like this more than once, so I sympathise. Whether this is really a good idea I don't know. It may look cosmetic, but it is rather a fundamental change to Stata's syntax, and it would introduce a diversity of allowable syntaxes when consistency is arguably a very good thing. If this were done, then it should be done consistently across similar functions such as -max()- and -min()- as well.

Jeph, however, I think introduces some red herrings here. Choice of terminology confuses the several intersecting issues. Some of the fault is Stata's, in that when -egen- was introduced the members of its family were called -egen- functions. I don't have a better name to suggest, but I think this similarity has been widely (although not deeply) confusing.

First off, note that despite similar names, functions and -egen- functions are really quite different beasts. Stata's functions are not that different from functions in many other languages, but -egen- functions are very idiosyncratic. The name really is exact: -egen- functions work __only__ with -egen-.

Jeph mentions -rowmiss()- and -rowtotal()- and calls them row operators. They are, strictly, -egen- functions. The fact that they are defined to work across rows, meaning strictly observations, is just that, a fact. -egen- functions could have any syntax for their argument that you wanted. Some syntaxes would seem perverse, but anything programmable is possible so long as it passes -egen-.

Jeph then goes on to talk about column operators, but here his informal use of terminology becomes, potentially, rather misleading. Operators in most languages, although certainly not all, seem to be distinguished from functions largely by whether they are implemented via special symbols (e.g. + - * | &) or via names. That is an accident of implementation which we could ponder, but, keeping to the point, let me just underline that when Jeph says column operators I think he means Stata functions, strict sense. Such functions are not designed to work with columns, meaning strictly variables, or indeed anything in particular. They are designed to work with anything that satisfies their syntax. Whether I say -missing(1,2,3,4)- or -missing(a[1], a[2], a[3], a[4])- or -missing(x, y, z)- is all one to -missing()- so long as the arguments fit the syntax. The results in context will differ because the rest of Stata is so smart, but I think -missing()- is just a mindless machine.

This is mostly just yet another plea to use Stata's terminology when discussing Stata!

[email protected]

Jeph Herrin wrote:

This is mostly a suggestion to StataCorp; perhaps it has been made or explained elsewhere. The function -missing()- is quite useful, but I'd like to propose that it be modified to take a -varlist- as argument. First, it would be even more useful if one could specify many variable names using shorthand. Eg, why not

        drop if missing(q1-q23)
        drop if missing(_all)

Second, this would be consistent with other row operators such as -rowmiss- & -rowtotal-, which take varlists. At least, it seems like that is the Stata convention - row operators take varlists, column operators take comma-separated lists. Perhaps I'm wrong on this, but it seems enough of a convention that I invariably try to stick a varlist in -missing()- anyway.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
No Title

p.363

1- a) If a, b >= 0, the inverse image of (a, b) under x^2 is the union of (-sqrt(b), -sqrt(a)) and (sqrt(a), sqrt(b)). If a, b <= 0, the preimage is empty. If a < 0 and b > 0, the preimage is (-sqrt(b), sqrt(b)).

b) If a < 0, b > 0, and |a| < b, then (a, b) maps to [0, b^2).

3- Since sqrt >= 0, we are interested in the preimage of [0, epsilon). This is [1, 1 + epsilon^2), which is the intersection of (e.g.) (0, 1 + epsilon^2) with I.

7- R \ {k} is open, hence its preimage is open, hence the preimage of {k} is closed. Alternatively, consider a convergent sequence in the preimage of {k}; by continuity the value of f at its limit point is k, hence the limit point is in the preimage, and the preimage is closed.

p.370

1- Condition (a) is satisfied because absolute values are taken. Condition (b) is satisfied because |.| is zero only if its argument is 0, and both components must be 0 for d_1 and d_infty. Condition (c) is satisfied because |.| is symmetric. Condition (d) is satisfied for d_1 because it holds componentwise; for d_infty it is satisfied because it holds for the component which provides the maximum on the LHS, and if the other component is larger on the RHS in either term, it only strengthens the inequality.

6- Let epsilon = 0.5; then |x_n - x| < epsilon only if x_n = x, hence for convergence there must be a K such that x_n = x for all n > K.

8- a) the interior of the square with corners (1,0), (0,1), (-1,0), (0,-1)
b) the interior of the square with corners (1,1), (1,-1), (-1,1), (-1,-1)

Russell Campbell, Sun Feb 22 1998
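A quick numerical spot-check of the answer to 1(a), for readers who want to convince themselves (my own code, not part of the assignment): for 0 <= a < b, a point x should satisfy x^2 in (a, b) exactly when x lies in (-sqrt(b), -sqrt(a)) or (sqrt(a), sqrt(b)).

```python
import math
import random

def preimage_formula_holds(a, b, samples=10_000):
    """Randomly sample x and check that x^2 in (a, b) is equivalent to
    x in (-sqrt(b), -sqrt(a)) U (sqrt(a), sqrt(b)), for 0 <= a < b."""
    random.seed(0)
    lo, hi = -math.sqrt(b) - 1, math.sqrt(b) + 1
    for _ in range(samples):
        x = random.uniform(lo, hi)
        in_preimage = a < x * x < b
        in_claimed_union = (-math.sqrt(b) < x < -math.sqrt(a)) or \
                           (math.sqrt(a) < x < math.sqrt(b))
        if in_preimage != in_claimed_union:
            return False
    return True

print(preimage_formula_holds(1.0, 4.0))  # True
```

This is only a sampling check, of course; the proof itself is the monotonicity of x^2 on each half-line.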
The Taub Faculty of Computer Science Lev Podshivalov (Pierre-and-Marie-Curie University, Paris, France) Sunday, 30.12.2012, 13:00 Hierarchical modeling is often applied in fields such as computer vision, computer graphics, computer-aided geometric design, cartography and virtual reality. The importance of this approach has increased in recent years due to the ability to create models with more details. Hierarchical modeling is useful in order to make storage, transmission, computation, and visualization of these models possible and more efficient. The most prevalent 3D hierarchical geometric data structure i... [ Full version ] Amos Korman (University of Paris Diderot) Wednesday, 26.12.2012, 12:30 In recent years, several works have demonstrated how the study of biology can benefit from an algorithmic perspective. In this talk I discuss a new approach for such a methodology based on combining theoretical tradeoff techniques with field experiments to obtain bounds on biological parameters. A proof of concept for this framework is provided by considering central search foraging strategies of desert ants, and obtaining theoretical tradeoffs between the search time and t... [ Full version ] Daniel Mossé (University of Pittsburgh) Wednesday, 26.12.2012, 11:30 The current trend to move from homogeneous to heterogeneous multi-core (HetCMP) systems promises further performance and energy-efficiency benefits. A typical HetCMP system includes two distinct types of cores, such as high performance sophisticated ("large") cores and simple low-power ("small") cores. In those heterogeneous platforms, execution phases of application threads that are CPU-intensive can take best advantage of large cores, whereas I/O or memory intensive execution ph... [ Full version ] Yoram Bresler (University of Illinois, Urbana-Champaign) Tuesday, 25.12.2012, 13:30 Compressive sensing (CS), also known as compressive sampling, has become widely popular in recent years. 
In the first part of the talk, we review the little known fact, that the invention of CS preceded the papers that popularized it by almost a decade. Spectrum-blind sampling (SBS), proposed by Bresler and Feng in the mid-90’s, and further developed into “image compression on the fly,” with Venkataramani, and Gastpar, is the first known compressed sensing technique. T... [ Full version ] Guillermo Sapiro (Duke University) Tuesday, 25.12.2012, 11:30 We consider the problem of finding a few representatives for a dataset, i.e., a subset of data points that efficiently describes the entire dataset. We assume that each data point can be expressed as a linear combination of the representatives and formulate the problem of finding the representatives as a sparse multiple measurement vector problem. In our formulation, both the dictionary and the measurements are given by the data matrix, and the unknown sparse codes select th... [ Full version ] Yael Vaya-Talmor (Bar-Ilan University) Monday, 24.12.2012, 18:30 The open source production process is fascinating in many respects: it produces high quality code outside the boundaries of the hierarchical organization, it defies the rules of many traditional software production models and it has many profound implications on social, political, and economic systems of the Internet era. I will discuss prior research work on the topic. For example, should open source serve as an example of how to develop systems of participation w... [ Full version ] Thursday, 20.12.2012, 12:30 Advisor-Avi Mendelson.Embedded real-time systems are continuously required to increase performance and to reduce power and energy consumption. Higher performance is needed in order to meet new demand of emerging applications. In order to meet such performance demand, modern processors implement complex enhancement mechanisms such as deeper pipeline, vector units, multi-cores and more. 
In real-time systems, energy consumption can be saved as long as the system guarantees meetin... [ Full version ] Wednesday, 19.12.2012, 13:00 Digital maps and devices with an integrated GPS receiver, such as smartphones, are now an integral part of our lives. We deal with two practical problems related to GPS trajectories and digital maps. The first problem is the classical problem of map-matching; namely, matching a given GPS trajectory, possibly noisy or sparse, to the sequence of actual roads traversed by the carrier of the GPS receiver. We provide a novel solution to this problem. Our algorithm is adapted to w... [ Full version ] Rotem Oshman (University of Toronto) Wednesday, 19.12.2012, 12:45 The advent of large-scale wireless networks has presented the distributed computing community with a new challenge: wireless networks are much more disorderly than traditional wired networks, and they are difficult to model and predict. In addition, they are subject to different design considerations: among other criteria, it is crucial to conserve power by reducing the amount of communication between network nodes. This communication constraint leads to interesting connections be... [ Full version ] Edward Bortnikov (Yahoo! Labs Israel) Wednesday, 19.12.2012, 11:30 Extremely slow, or straggler, tasks are a major performance bottleneck in map-reduce systems. Hadoop infrastructure makes an effort to both avoid them (through minimizing remote data accesses) and handle them in the runtime (through speculative execution). However, the mechanisms in place neither guarantee the avoidance of performance hotspots in task scheduling, nor provide any easy way to tune the timely detection of stragglers. We suggest a machine-learning approach... [ Full version ] Margarita Osadchy (Computer Science, Haifa University) Tuesday, 18.12.2012, 11:30 The majority of current methods in object classification use the one-against-rest training scheme. 
We argue that when applied to a large number of classes, this strategy is problematic: as the number of classes increases, the negative class becomes a very large and complicated collection of images. The resulting classification problem then becomes extremely unbalanced, and kernel SVM classifiers trained on such sets require long training time and are slow in prediction. ... [ Full version ] Denver Dash (Intel – ISTC for Embedded Computing) Thursday, 13.12.2012, 13:30 For decades, our culture has been fascinated with the concept of the “Star Trek computer”. An all-knowing entity which is available to query about almost anything relevant to the world around us, organize our daily lives, remind us when we are doing the wrong thing, help us when we are lost. Several key features of this type of system are: (1) We can interact with it via natural language and it will understand our words (speech2text), (2) it will understand what we want to kno... [ Full version ] Thursday, 13.12.2012, 12:30 Advisor-Roy Friedman.In order for P2P systems to be viable, users must be given incentives to donate resources. Such incentives can be in the form of tit-for-tat like mechanisms, in which a user is rewarded with better service for contributing resources to the system. Alternatively, such incentives can be economical, i.e., users get paid for their contribution. In particular, the latter can be achieved through a P2P advertisement mechanism. We believe that P2P advertisement me... [ Full version ] Thursday, 06.12.2012, 12:30 Due to the evolution of technology constraints, especially energy constraints which may lead to heterogeneous multicores, and the increasing number of transient faults and permanent defects, the design of defect-tolerant accelerators for heterogeneous multi-cores may become a major micro-architecture research issue. Most custom circuits are highly defect sensitive, a single transistor can wreck such circuits. 
On the contrary, neural networks (NNs) are inherently error-t... [ Full version ] Wednesday, 05.12.2012, 12:30 The CHILDES database is a large collection of child---adult spoken interactions in over 25 languages. Automatic annotation of these data faciliates research on child language development and acquisition by providing researchers with a large amount of accurate data. Recently, the English section of the CHILDES database was automatically annotated with labeled dependency relations in a state-of-the-art approach. We describe a similar endeavor, focusing on the Hebrew section of CHIL... [ Full version ] Shahar Dobzinski (Weizmann Institute of Science) Wednesday, 05.12.2012, 12:30 We generalize sealed bid auctions to accommodate combinatorial auctions. In a sealed bid combinatorial auction each bidder sends to the auctioneer, simultaneously with the others, a message that depends only on his own valuation. The auctioneer decides on the allocation based on these messages alone. The goal is to find an allocation of the items which maximizes the social welfare. In this simultaneous communication complexity model we ask: How much information each of the ... [ Full version ] Ofir Weber (Computer Science, Haifa University) Wednesday, 05.12.2012, 11:30 Conformal maps are widely used in geometry processing applications. They are smooth, preserve angles, and are locally injective by construction. However, conformal maps do not allow for boundary positions to be prescribed. A natural extension to the space of conformal maps is the richer space of quasiconformal maps of bounded conformal distortion. Extremal quasiconformal maps, that is, maps minimizing the maximal conformal distortion, have a number of appealing properties making t... [ Full version ] Yael Moses (CS, The Interdisciplinary Center, Herzliya) Tuesday, 04.12.2012, 11:30 Dynamic events such as family gatherings, concerts or sports events are often captured by a group of people. 
The set of still images obtained this way is rich in dynamic content but lacks accurate temporal information. We propose a method for *photo-sequencing* -- temporally ordering a set of still images taken asynchronously by a set of uncalibrated cameras. Photo-sequencing is an essential tool in analyzing (or visualizing) a dynamic scene captured by still images. ... [ Full version ] Monday, 03.12.2012, 18:30 On embedded systems, the Linux kernel doesn't have the BIOS to tell it what the hardware is like. On the other hand, the traditional solution of having the hardware information hardcoded in the kernel source is leading to an overpopulation of platform-specific hacks (read: a disaster). ... [ Full version ] Ymir Vigfusson (Reykjavik University) Monday, 03.12.2012, 13:30 Code breakers played an enormously crucial role in World War II. Alan Turing, the father of computer science, was at the center of allied code breaking operations and his breakthroughs made intelligence gathering not only possible but practical. This general audience talk, celebrating Alan Turing's Centenary, explains the notorious German Enigma code and how it was systematically cracked by the Allies. Bio Ymir Vigfusson is an Assistant Professor at the School of C... [ Full version ] Yong Joon Kim (Computer Science, Technion) Sunday, 02.12.2012, 13:00 NURBS is one of the typical way of representing geometric data and have been widely used in many areas such as computer graphics, CAGD, robotics. NURBS represents geometric data based on the mathematical form, and thus it requires relatively small memory space compared to the other representations. Compactness of geometric data is crucial in recent computing environments (Network, Mobile, Multi-Core) and NURBS is good candidate to be a geometric representation in such environments... 
[ Full version ] Danny Barash (CS, Ben-Gurion University) Thursday, 29.11.2012, 13:30 The inverse RNA folding problem for designing sequences that fold into a given RNA secondary structure was introduced in the early 1990's in Vienna. Using a coarse-grain tree graph representation of the RNA secondary structure, we extended the inverse RNA folding problem to include constraints such as thermodynamic stability and mutational robustness, deveoping a program called RNAexinv. Furthermore, we propose a fragment-based design approach of RNA sequences that can be u... [ Full version ] Amit Weinstein (Tel-Aviv University) Wednesday, 28.11.2012, 12:30 Given a function f: {0,1}^n \to {0,1}, the f-isomorphism testing problem requires a randomized algorithm to distinguish functions that are identical to f up to relabeling of the input variables from functions that are far from being so. An important open question in property testing is to determine for which functions f we can test f-isomorphism with a constant number of queries. Despite much recent attention to this question, essentially only two classes of functions were known t... [ Full version ] Wednesday, 28.11.2012, 11:30 Distributed storage systems have become a popular solution to large file storage and fast data access. In such systems, erasure-correcting codes are widely used to combat disk failures, where disks correspond to symbols in the code. Specifically, we study MDS (maximum distance separable) array codes which enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols an MDS code can sustain r erasures. For example, consider an MDS code that can co... [ Full version ] Amir Rosenfeld (Weizmann Institute of Science) Tuesday, 27.11.2012, 11:30 Effective segmentation prior to recognition has been shown to improve recognition performance. However, most segmentation algorithms adopt methods which are not explicitly linked to the goal of object recognition. 
Here we solve a related but slightly different problem in order to assist object recognition more directly - the extraction of a foreground mask, which identifies the locations of objects in the image. We propose a novel foreground/background segmentation algorithm ... [ Full version ] Prof. Yanos Sazeides (University of Cyprus) Tuesday, 27.11.2012, 10:00 The traditional performance-cost benefits enjoyed for decades from scaling of device area are challenged by the slowdown of voltage scaling and less reliable silicon primitives. These developments lead to pessimistic projections that it will be impossible to operate all on-chip resources, even at the minimum voltage for safe operation, due to power constraints, and the growing design and operational margins, used to provide silicon primitives with resiliency against variations, wi... [ Full version ] Monday, 26.11.2012, 08:00 Intel Development Center, Haifa, Israel The interaction between advanced compilation techniques, modern processor architectures, and associated tools continues to face new challenges and opportunities. Traditional demands to increase performance, reduce power consumption, and reduce time to market now apply to heterogeneous, virtualized and diverse user-experience environments. New programming environments such as OpenCL and C++ AMP enable programmers to better leverage data and task level parallelism, relying on effect... [ Full version ] Sunday, 25.11.2012, 12:30 Nowadays, multi- and many-core architectures are becoming the focus of affordable High Performance Computing (HPC). In this context, one of the programming models (and paradigms) that rapidly gains popularity is OpenCL, originally designed and developed to exploit heterogeneous (massive) parallelism. In this talk, I will briefly introduce our directions of research involving OpenCL, many-core architectures, and HPC, ranging from low-level memory access patterns analysis to...
[ Full version ] Wednesday, 21.11.2012, 15:30 A rich man has many children. Unfortunately, over the years they have grown apart and started to hate each other. The rich man seeks to give his inheritance to the largest group of his children that can cooperate. How can he do it? To answer this, we introduce and study the related notions of lossy chains and fractional secret sharing. Both of these concepts are motivated by the goal of controlling the amount of work required in order to solve a cryptographic puzzle, or access a share... [ Full version ] Wednesday, 21.11.2012, 14:00 A crucial step in the evolution of broadband wireless (cellular) networks is reducing the size of the cells and increasing their number. This target is usually obtained using cell sectorization, where the omni-directional antenna at each base station (BS) is replaced by 3 or 6 directional antennas. With respect to this evolution, the contribution of our work is two-fold. First, we propose a new protocol stack for a BS that governs multiple directional antennas. In the new stack th... [ Full version ] Nir Bitansky (Tel-Aviv University) Wednesday, 21.11.2012, 12:30 The introduction of a non-black-box simulation technique by Barak (FOCS 2001) has been a major landmark in cryptography, breaking the previous barriers of black-box impossibility. Barak’s techniques were subsequently extended and have given rise to various powerful applications. We present the first non-black-box simulation technique that does not rely on Barak’s technique. Our technique is based on essentially different tools: it does not invoke universal arguments, nor does...
[ Full version ] Adam Morrison (CS, Tel Aviv University) Wednesday, 21.11.2012, 11:30 We present the CBTree, a new counting-based self-adjusting sequential search tree that, like splay trees, moves more frequently accessed nodes closer to the root: After M operations on N items, Q of which access some item V, an operation on V traverses a path of length O(log M/Q) while performing few if any rotations. In contrast to the traditional self-adjusting splay tree in which each accessed item is moved to the root through a sequence of tree rotations, the CBTree... [ Full version ] Sunday, 18.11.2012, 13:00 Covering questions emerge in many disciplines and are closely related to the well known set-cover problem in computer science. Similarly, geometric covering is of great importance and yet has only been investigated in seemingly unrelated specific disciplines. Examples include the well known art-gallery problem, mold-design problems, inspection, security and surveillance problems. In this thesis, we present a single unified framework that can solve many of the above geometric c... [ Full version ] Wednesday, 14.11.2012, 12:30 We describe a bootstrapping algorithm able to learn from partially labeled data. We report the results of an empirical study for using this algorithm to improve performance of sentiment classification using up to 15 million unlabeled Amazon product reviews. Our experiments cover semi-supervised learning, domain adaptation and weakly supervised learning. In some cases our methods were able to reduce test error by more than half using such large amounts of data, and in all cases a s... [ Full version ] Tsvi Kopelowitz (Weizmann Institute of Science) Wednesday, 14.11.2012, 12:30 In the famous order maintenance problem, one wishes to maintain a dynamic list L of size n under insertions, deletions, and order queries. In an order query, one is given two nodes from L and must determine which node precedes the other in L.
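As a concrete (if naive) illustration of the order-query interface just defined, consider the following hypothetical sketch; it is not the data structure from the talk, and real order-maintenance structures answer queries in O(1) with amortized O(1) updates via relabeling, whereas this baseline is O(n) per operation:

```python
# Naive order maintenance: keep L as a Python list and answer an order
# query by comparing indices. Correct, but every operation is O(n);
# it only demonstrates the interface.
class OrderedList:
    def __init__(self):
        self.items = []

    def insert_after(self, x, new):
        """Insert `new` right after node `x` (x=None inserts at the front)."""
        i = 0 if x is None else self.items.index(x) + 1
        self.items.insert(i, new)

    def delete(self, x):
        self.items.remove(x)

    def precedes(self, u, v):
        """Order query: does u come before v in L?"""
        return self.items.index(u) < self.items.index(v)

L = OrderedList()
L.insert_after(None, 'a')
L.insert_after('a', 'c')
L.insert_after('a', 'b')   # list is now a, b, c
assert L.precedes('a', 'c') and L.precedes('b', 'c')
```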
In an extension to this problem, named the Predecessor search on Dynamic Subsets of an Ordered Dynamic List problem (POLP for short), it is also necessary to maintain dynamic subsets S_1... S_k \subset L, such that given some u in L it will ... [ Full version ] Michael Bronstein (University of Lugano, Switzerland) Tuesday, 13.11.2012, 11:30 Finding dense intrinsic correspondence between non-rigid shapes is a notoriously difficult problem with many important applications in computer graphics and pattern recognition. In the first part of the talk, I will present a novel sparse modeling approach to non-rigid shape matching using only the ability to detect repeatable regions. As the input to our algorithm, we are given only two sets of regions in two shapes; no descriptors are provided so the corresponde... [ Full version ] Monday, 12.11.2012, 09:00 IBM Research - Haifa, Israel This year we are honored to host Prof. David Parnas, who will give the main keynote on "Software Development – What's Missing?" Prof. Parnas, along with Prof. David Harel and others, will also participate in a panel discussion titled "Fixing Software Engineering," moderated by Prof. Amiram Yehudai. Prof. Parnas' talk at the seminar will be the first in a series of unique lectures in [ Full version ] Sunday, 11.11.2012, 11:30 Texture features have always been a key attribute in image recognition and classification. In this work we propose a pre-processing stage for enhancing the performance of widely used color texture recognition methods. One approach we investigated, Decorrelation Stretching, was employed historically for enhancing the interpretability of multi-channel satellite images. This is achieved by stretching the dynamic range of color data over its principal components. Another approach deco... 
[ Full version ] Wednesday, 07.11.2012, 12:30 A numerical framework for the simulation of electrokinetic migration of particles in an electrolyte solution due to the application of an external electric field is presented. The electrokinetic transport process is described by a system of nonlinear partial differential equations (PDE). A thin boundary layer forms around the particle due to strong electrostatic forces. The resulting scale disparity of the boundary layer is used to derive nonlinear effective boundary conditio... [ Full version ] Leonid Barenboim (Ben-Gurion University) Wednesday, 07.11.2012, 12:30 In a distributed message passing model a communication network is represented by an n-vertex graph whose vertices host processors, and edges serve as communication links. One of the most fundamental goals in this setting is breaking the symmetry in the network. Specifically, the tasks of computing vertex coloring, maximal matching, and maximal independent set are of great importance. In the mid-eighties several randomized distributed algorithms for these problems were devised. [L... [ Full version ] David Amzallag (Alcatel-Lucent) Wednesday, 07.11.2012, 11:30 Wireless networks are the most important assets in our industry, yet the problem is that we are building new networks the same way we built our previous networks. Capacity is planned in advance per service; handling peak traffic and adding new capacity takes time and investment of dedicated equipment. We have no elasticity as demand grows; on average the service is grossly underutilized, yet it is overutilized at specific times of day or in certain geographic locations, and we can't share... [ Full version ] Sabih Agbaria (CS, Technion) Tuesday, 06.11.2012, 12:30 Recent studies indicate that multiple patches to software are found in a hefty portion of resolved bugs.
It is also known that bugs that require multiple patches take longer to resolve, that their severity tends to be higher than the average and that they induce programmers to engage more in bug discussions. This work is concerned with the ability of programmers to predict that a bug will be of this sort, and in particular that it may require future patches and great... [ Full version ] Ymir Vigfusson (Reykjavik University) Monday, 05.11.2012, 13:30 Optimal use of computing resources is pivotal to minimize expenditures and improve competitiveness of companies. The computing technologies are continuously evolving and the users are adapting at an unparalleled pace, thus producing intriguing research questions. In the first half of the talk, we observe that Internet service providers (ISPs) face surging Internet loads associated with real-time streaming video and various forms of dynamically generated, short-lived cont... [ Full version ] Prof. Karol Mikula (Department of Mathematics, Slovak University of Technology, Bratislava, Slovakia) Thursday, 01.11.2012, 12:30 In the talk we present mathematical models and numerical methods which lead to early embryogenesis reconstruction and extraction of the cell lineage tree from the large-scale 4D image sequences. Robust and efficient finite volume schemes for solving nonlinear PDEs related to filtering, object detection and segmentation of 3D images were designed to that goal and studied mathematically. They were parallelized for massively parallel computer clusters and applied to ... [ Full version ] Shachar Lovett (Institute for Advanced Study, Princeton) Wednesday, 31.10.2012, 12:30 Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones in this area is the celebrated six standard deviations result of Spencer (AMS 1985): In any system of $n$ sets in a universe of size $n$, there always exists a coloring which achieves discrepancy $6\sqrt{n}$.
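For reference, the discrepancy notion behind Spencer's theorem can be written out explicitly; these are the standard definitions, spelled out here rather than taken from the abstract:

```latex
% A set system S_1, \dots, S_n over the universe [n] = \{1, \dots, n\};
% a coloring assigns a sign x_i \in \{-1, +1\} to each element i.
\operatorname{disc}(S_1, \dots, S_n)
  \;=\; \min_{x \in \{-1,+1\}^n} \;\max_{1 \le j \le n}
        \Bigl|\, \sum_{i \in S_j} x_i \,\Bigr|
% Spencer (1985): \operatorname{disc}(S_1, \dots, S_n) \le 6\sqrt{n},
% beating the O(\sqrt{n \log n}) bound that a uniformly random coloring gives.
```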
The original proof of Spencer was existential in nature, and did not give an efficient algorithm to find such a coloring. Recently, a breakthrough work of Bansal (FOC... [ Full version ] Jose F. Martinez (Cornell University) Tuesday, 30.10.2012, 13:30 As each technology generation brings additional transistors, the computer industry hopes to convert these into performance growth by stamping out a greater number of cores on a die. On the one hand, in many environments, that seems like a lot of hope. On the other hand, architecture researchers have grown almost allergic to "complex" alternatives, which history has shown can quickly fall off the cliff of diminishing returns. A fundamental hurdle to bettering architectu... [ Full version ] Zvi Devir (digitalrights.org.il) Monday, 29.10.2012, 18:30 The Digital Rights Movement stands between advanced technology and people's rights (in the broad sense). Technological advance provides us with new products and new means of interacting with our surroundings. It provides a customer with more flexibility and new ways of using a given product. However, it can be used to control and limit the usability of the product. For example, a customer buys a product, a certain book. Does it matter if the product is a digital book or an old-fashioned ...
[ Full version ] Amir Shpilka (CS, Technion) Wednesday, 24.10.2012, 12:30 We present several variants of the sunflower conjecture of Erdos and Rado and discuss the relations among them. We then show that two of these conjectures (if true) imply negative answers to questions of Coppersmith and Winograd and of Cohn et al regarding possible approaches for obtaining fast matrix multiplication algorithms. Specifically, we show that the Erdos-Rado sunflower conjecture (if true) implies a negative answer to the ``no three disjoint equivoluminous subsets'' ques... [ Full version ] Tuesday, 23.10.2012, 12:30 Automatic parallelization is a promising strategy to improve application performance in the multicore era. However, common programming practices such as the reuse of data structures introduce artificial constraints that obstruct automatic parallelization. Privatization relieves these constraints by replicating data structures, thus enabling scalable parallelization. Prior privatization schemes are limited to arrays and scalar variables because they are sensitive to the layout of d... [ Full version ] Vitaly Feldman (IBM Research Almaden) Wednesday, 17.10.2012, 12:30 We develop a framework for proving lower bounds on computational problems over distributions, including optimization and unsupervised learning. Our framework is based on defining a restricted class of algorithms, called statistical algorithms, that instead of accessing samples from the input distribution can only obtain an estimate of the expectation of any given function on a sample drawn randomly from the input distribution. Our definition captures many natura... [ Full version ] Gil Einziger (CS, Technion) Monday, 15.10.2012, 18:30 Kademlia is considered to be one of the most effective key-based routing protocols. It is nowadays implemented in many file-sharing peer-to-peer networks such as BitTorrent, KAD, and Gnutella.
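Kademlia's key-based routing rests on the XOR metric, under which the distance between two node/key IDs is their bitwise XOR interpreted as an integer; a minimal sketch follows, with hypothetical 4-bit IDs not taken from any of the systems named above:

```python
# Kademlia lookups repeatedly move toward nodes whose IDs are closer to
# the target key under the XOR metric.
def xor_distance(a: int, b: int) -> int:
    return a ^ b

def closest_nodes(target: int, nodes: list[int], k: int) -> list[int]:
    """Return the k known nodes closest to `target` under the XOR metric."""
    return sorted(nodes, key=lambda n: xor_distance(n, target))[:k]

nodes = [0b0001, 0b0110, 0b1100, 0b1011]
print(closest_nodes(0b0101, nodes, 2))  # the two nearest IDs to 0b0101
```

The XOR metric is a genuine metric (symmetric, zero only for identical IDs, satisfies the triangle inequality), which is what lets lookups converge in O(log n) hops in the real protocol.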
We present Kaleidoscope, a novel routing/caching scheme designed to significantly reduce the cost of Kademlia lookup operations. Kaleidoscope also improves the anonymity of nodes in the system.... [ Full version ] Tuesday, 09.10.2012, 16:30 Mark Twain famously said that ``the past does not repeat itself, but it rhymes.'' In the spirit of this reflection, we present novel algorithms and methods for leveraging large-scale digital histories and human knowledge mined from the Web to make real-time predictions about the likelihoods of future human and natural events of interest. The Web is a dynamic being, with constantly updating content, which is entangled with sophisticated user behaviors and interactions. Some o... [ Full version ] Wednesday, 19.09.2012, 13:00 Method overloading is a controversial language feature, especially in the context of Object Oriented languages, where its interaction with overriding may lead to confusing semantics. One of the main arguments against overloading is that it can be abused by assigning the same identity to conceptually different methods. This talk describes a study of the actual use of overloading in Java. To this end, we developed a taxonomy for classifying the use of overloading, and applie... [ Full version ] Jean Michel Morel (CMLA, École Normale Supérieure de Cachan, France) Wednesday, 19.09.2012, 11:30 All images have noise, but this noise may have undergone many distortions. Can we take any image, say, an old photograph, and denoise it? This requires a good denoising method and an accurate noise estimator, both working for "any" image and "any" noise.
I'll discuss both aspects, and particularly how to estimate a signal-dependent and scale-dependent noise. A prototype of the noise clinic is currently online at http://dev.ipol.im/~colom/ipol_demo/noise_clinic/ (... [ Full version ] Wednesday, 12.09.2012, 14:00 The field of Parameterized Complexity strives to solve intractable problems efficiently, via multivariate analysis of running time, as a function of both the input size n and a parameter k. Such analysis makes it possible to show that some of these problems are \emph{fixed parameter tractable} (FPT); in other words, they can be solved in time f(k) * n^O(1). The rationale behind this approach is the observation that many real-life inputs have small parameter values. In this work we stud... [ Full version ] Monday, 10.09.2012, 14:30 Emergence can be defined as the way complex systems and patterns arise out of a multiplicity of relatively simple interactions. Emergent behavior based systems are believed to be scalable and robust due to their non-reliance on a critical central element or fragile hierarchy. In the talk, two novel distributed algorithms based on emergent behavior will be presented and their scalability and robustness will be discussed.... [ Full version ] Orna Agmon Ben-Yehuda (CS, Technion) Monday, 03.09.2012, 18:30 Over the next few years, a new model of buying and selling cloud computing resources will evolve. Instead of providers exclusively selling server-equivalent virtual machines for relatively long periods of time (as done in today's IaaS clouds), they will increasingly sell individual resources (such as CPU, memory, and I/O resources) for a few seconds at a time. We term this nascent economic model of cloud computing the Resource-as-a-Service (RaaS) cloud, and we argue that its rise ...
[ Full version ] Sunday, 02.09.2012, 13:30 In complex search tasks that utilize information from several data sources, it is often required to pose several basic search queries, join the answers to these queries, where each answer is given as a ranked list of items, and return a ranked list of combinations. Example: A tourist Alice wants to find a festive event that is related to classical music, a good Italian restaurant and an underground station that contains an elevator, such that all three will be located within a ... [ Full version ] Zvi Gilboa (University of Virginia) Monday, 20.08.2012, 18:30 This lecture will discuss the challenges presented by Hebrew vowel marks (nikud) with respect to glyph-placement and kerning in (La)TeX and under Linux, as well as the various approaches to date to meet these challenges, and likewise the cultural aspects of their causes and implications.... [ Full version ] Doron Sieradzki (pczlaw.com) Monday, 06.08.2012, 18:30 In the last two decades, patenting software-related inventions in the US was a flourishing business. A recent ruling by a Federal Court and its affirmation in the Supreme Court of the US may seriously affect this. We will review the current legal status of software-related patents in the US in view of this ruling, as well as relate to the legal status in Europe and Israel. The recently enacted America Invents Act (AIA) will also be reviewed.... [ Full version ] Uri Itai (Applied Mathematics, Technion) Sunday, 05.08.2012, 13:00 Subdivision schemes are attractive methods for generating a smooth object from discrete data by repeating refinements. These schemes have many desirable properties such as fast convergence and smoothness of the generated objects. Therefore, subdivision schemes have gained popularity in recent years as an important tool in approximation theory, computer graphics, geometric design and computer aided design. I will start with a survey on basic subdivision schemes. Then, I will revie...
[ Full version ] Horesh Ben Shitrit (École Polytechnique Fédérale de Lausanne (EPFL), Switzerland) Tuesday, 31.07.2012, 11:30 At the CVLAB, EPFL, we have developed a multi-people tracking system. Our system is able to reliably track multiple people in a multi-camera setting. The obtained trajectories can be used for understanding individuals and group behavior. For example, we are currently involved in a project whose goal is to understand the behavior of basketball teams and players from multiple video cameras. Our system is composed of three core components: detection, identification and ... [ Full version ] Alon Efrat (CS, University of Arizona) Monday, 30.07.2012, 11:30 Nearest-neighbor queries, which ask for returning the nearest neighbor of a query point in a set of points, are important and widely studied in many fields because of a wide range of applications. In many of these applications, such as sensor databases, location based services, face recognition, and mobile data, the location of data is imprecise. We therefore study nearest neighbor queries in a probabilistic framework in which the location of each input point and/or query po... [ Full version ] Monday, 23.07.2012, 18:30 Git is well-known as the tool for collaborating in free software projects, and the Linux kernel in particular. What seems to be less known is how useful it is even in a single-developer scenario. This is a slideless, demonstration-based talk, walking through the practical work with git as a most valuable tool for keeping a solo project under control, and helping in real-life situations. Git-savvy participants are also welcome to share their experience. ... [ Full version ] Yotam Michael (EE, Technion) Tuesday, 17.07.2012, 11:30 Blind Source Separation (BSS) is a very applicable and well-studied problem.
Most studies of the BSS problem assume the system to be time/position invariant, an assumption which assists the mathematical study, but is not guaranteed for real world situations. We present a method applicable to the case of underdetermined time/position varying mixing systems, where the number of mixture observations is less than the number of sources and the system is changing with t... [ Full version ] Tuesday, 10.07.2012, 15:00 In the problem of secure multiparty computation (MPC) we have n parties who want to jointly evaluate a function f on their local inputs. An MPC protocol should allow the parties to correctly compute f while hiding the inputs from each other to the extent possible. An important complexity measure of MPC protocols is their round complexity. This talk will cover two questions related to the goal of minimizing the round complexity of MPC: * Are there general MPC protocols which onl... [ Full version ] Alon Zweig (CS and Engineering, The Hebrew University of Jerusalem) Tuesday, 10.07.2012, 11:30 We present a novel algorithm based on a cascade of regularization terms designed to induce implicit hierarchical sharing of information among related learning tasks. Our approach can be viewed as training and combining a set of diverse classifiers. Such a combination is known to improve accuracy. The diversity is achieved by inducing different levels of sharing among tasks. Our approach is designed for multi-task and multi-class learning scenarios. Enabling different levels ... [ Full version ] Monday, 09.07.2012, 18:30 Search engines available today are almost useless when you need to process Hebrew texts. Even effective open-source solutions like Lucene/Solr give up in despair when handed a Hebrew corpus to index. HebMorph is an open-source project with the ultimate goal of solving this problem, and in the best way possible.
In-depth understanding of how search engines work triggers ideas, ideas become software parts, and those in turn are experimented with various state of the a... [ Full version ] Tamir Tuller (Laboratory of Computational Systems Biology, Biomedical Engineering, Tel Aviv University, Tel Aviv) Wednesday, 04.07.2012, 15:30 Gene translation is a central process in all living organisms. Thus, developing a better understanding of this complex process and its computational modeling have ramifications for every biomedical discipline. We develop computational models of gene translation and employ evolutionary systems biology approaches to study how its efficiency is encoded in the transcripts. The talk will include details about our models and several very recent discoveries about the way evolution s... [ Full version ] Asaf Cidon (Stanford University) Wednesday, 04.07.2012, 11:30 Randomized node selection is widely used in large-scale, distributed storage systems to both load balance chunks of data across the cluster and select replica nodes to provide data durability. We argue that while randomized node selection is great for load balancing, it fails to protect data under a common failure scenario. We present MinCopysets, a simple, general-purpose and scalable replication technique to improve data durability while retaining the benefits of randomized load... [ Full version ] Mica Arie-Nachimson (Math, Weizmann Institute of Science) Tuesday, 03.07.2012, 11:30 Multiview structure recovery from a collection of images requires the recovery of the positions and orientations of the cameras relative to a global coordinate system. We present an approach to this problem that, given feature correspondences across pairs of images, uses the pairwise Essential Matrices to recover camera orientation by applying robust optimization using either spectral or semidefinite programming relaxations. Once the orientations are recovered we return to the...
[ Full version ] Andrei Sharf (Ben-Gurion University) Thursday, 28.06.2012, 11:30 Today's scanning technologies allow fast 3D scanning of urban scenes. Such rapid acquisition incurs imperfections: large regions remain missing, significant variation in sampling density is common, and the data is often corrupted with noise and outliers. Buildings often exhibit large scale repetitions and self-similarities. To consolidate the imperfect data, our key observation is that the same geometry, when scanned multiple times over reoccurrences of instances... [ Full version ] Wednesday, 27.06.2012, 16:00 This talk considers the problem of recognizing activities in surveillance video. Activities are high-level non-atomic semantic concepts which may have complex temporal structure. Activities are not easily identifiable using image features, but rather by the recognition of their composing events. Unfortunately, these composing events may only be observed up to a particular certainty. Many approaches to classification/recognition in computer vision rely on the availability of a larg... [ Full version ] Wednesday, 27.06.2012, 14:30 While it has long been recognized that genes are not randomly distributed along the genome, the degree to which the three dimensional (3D) structure of the genome influences the arrangement of genes has remained elusive. In particular, 'transcriptional factories' are thought to lead to the positioning of co-regulated genes in physical proximity; however, there have also been reports to the contrary. Here we present evidence that the co-localization of yeast genes regulated by the same... [ Full version ] Dahlia Malkhi (Microsoft Research, Silicon Valley) Wednesday, 27.06.2012, 11:30 CORFU* is a novel storage cluster design that pools a farm of flash units and exposes it to clients as a single, global shared-log.
CORFU utilizes flash to break the seeming tradeoff between consistency and performance, providing both strong consistency guarantees and high throughput. Our implementation is carried mostly by a client-side library, thus relieving the service from any IO bottlenecks; a CORFU cluster of ... [ Full version ] Wednesday, 27.06.2012, 11:30 We study the bandwidth allocation problem (BAP) and the storage allocation problem (SAP) in bounded degree trees. In BAP, we are given a tree and a set of tasks. Each task consists of a path in the tree, a bandwidth demand, and a weight. Our goal is to find a maximum weight subset S of tasks such that, for every edge e, the total bandwidth demand of the tasks in S whose paths contain e does not exceed the edge's capacity. In SAP it is also required that every task in the solution is g... [ Full version ] Raphael Sznitman (École Polytechnique Fédérale de Lausanne, Switzerland) Tuesday, 26.06.2012, 14:30 The problem of search is ubiquitous in computer vision, and perhaps most common in object detection and localization. While the last few decades have produced a wealth of methods to evaluate the presence of a target at a given location, the search problem remains particularly difficult. This is in large part because detection methods are far from perfect and because solutions with ever more challenging constraints are demanded. For example, more complex objects need to be found in i... [ Full version ] Camuel Gilyadov (LiteStack) Monday, 25.06.2012, 18:30 How cloud-friendly is traditional virtualization? As a matter of fact, all traditional virtualization technologies predate the cloud era. Traditional virtualization has a major advantage: it is backward compatible at the binary level, making it easy to run any existing pre-cloud application in the cloud. But what if backward compatibility is not a requirement? What about new post-cloud applications, that are developed specifically for the cloud? Is traditional virtualization still a good platfor...
[ Full version ] Klim Efremenko (Tel-Aviv University) Wednesday, 20.06.2012, 12:30 Locally Decodable Code (LDC) is a code that encodes a message in a way that one can decode any particular symbol of the message by reading only a constant number of locations, even if a constant fraction of the encoded message is adversarially corrupted. In this talk we will show a connection between LDCs and representation theory. We show that if there exists an irreducible representation (\rho, V) of a group G and q elements g_1,g_2,..., g_q in $G$ such that the... [ Full version ] Boris Ginzburg (Intel Institute of Computational Intelligence) Wednesday, 20.06.2012, 11:30 This talk will discuss some system issues related to sharing virtual memory between CPU and GPU, which brings new challenges: GPU page faults, flushing a TLB on the GPU, etc. We will also describe ESFIR - an SVM prototype, and XTHREAS - a new SVM programming model, which extends pthread semantics to the GPU. ... [ Full version ] Tuesday, 19.06.2012, 15:00 With the rise of the internet and its applications, it has become more and more common for unfamiliar parties to interact with each other. In order to help and encourage such interactions, trust management systems were introduced. Those systems try to alleviate mistrust by gathering and processing statistical information about previous events. This thesis describes TrustPack, a trust management framework that provides trust management as a service, i.e., TrustPack is separated fro... [ Full version ] Hanan Samet (CS, University of Maryland) Tuesday, 19.06.2012, 11:30 Faculty meeting room, 7th floor, Rabin bldg The popularity of web-based mapping services such as Google Earth/Maps and Microsoft Virtual Earth (Bing) has led to an increasing awareness
of the importance of location data and its incorporation into both web-based search applications and the databases that support them. In the past, attention to location data had been primarily limited to geographic information systems (GIS), where locations correspond to spatial objects and are usually specified geometrically.
However,... [ Full version ]

Wednesday, 13.06.2012, 14:30
It is sometimes required to run the same application program on different platforms, from smartphones to PCs. The application may have to employ different User Interfaces (UIs) on different platforms; e.g., a UI designed for a large PC screen may not fit into the small screen of a smartphone. A different version of the application may therefore be needed for each platform kind. Kantorowitz and Lyakas introduced the use of platform-independent semantic user interfaces. As a res... [ Full version ]

Wednesday, 13.06.2012, 14:30
Our world is known for its abundance of symmetric structures - in the animal kingdom, in astronomy, mathematics and chemistry, to name a few. The existence of symmetry in 3D shapes is of great interest when such applications as efficient storage, comparison and lookup are considered. Traditionally, only symmetries which are a composition of rotations and reflections were considered. These symmetries, termed extrinsic, have limited use in non-rigid shapes, as they a... [ Full version ]

Wednesday, 13.06.2012, 12:30
Supervised learning, one of the core models of machine learning, is concerned with inducing rules from a given sample of examples. Active learning is a variant of this model in which the learning algorithm can choose which examples to learn from. The performance of such active learners is measured by the "speed" of learning: the number of examples they consume in order to produce a competitive solution. In this talk, I will present a novel active learning algorithmic concept, and...
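The label-complexity gap that active learning studies, as in the talk above, shows up already in the simplest setting: learning a one-dimensional threshold classifier, where binary search over an unlabeled pool needs only logarithmically many label queries instead of linearly many random labels. The sketch below is a generic textbook illustration (the pool, the hidden threshold value, and all function names are invented for the example), not the algorithm from the talk.

```python
# Pool-based active learning of a 1-D threshold classifier.
# Labels: x >= theta -> 1, else 0. A passive learner needs ~1/eps random
# labels to locate theta within eps; the active learner below binary-searches
# the pool and needs only ~log2(pool size) label queries.
# (Illustrative sketch only; theta and all names are hypothetical.)

def label(x, theta=0.6180339887):   # hidden target threshold
    return 1 if x >= theta else 0

def active_learn_threshold(pool, query):
    """Binary-search the sorted pool, querying labels only where needed."""
    pool = sorted(pool)
    lo, hi = 0, len(pool) - 1       # assumes the last pool point is labeled 1
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if query(pool[mid]) == 1:   # threshold is at or below pool[mid]
            hi = mid
        else:
            lo = mid + 1
    return pool[lo], queries        # smallest pool point labeled 1

pool = [i / 1000.0 for i in range(1000)]
estimate, n_queries = active_learn_threshold(pool, label)
print(estimate, n_queries)          # 10 queries suffice for a pool of 1000
```

With 1000 candidate points the learner asks for only 10 labels, while a passive learner would need on the order of the full pool to pin the boundary down to the same resolution.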
[ Full version ]

Christoph Lenzen (Weizmann Institute of Science)
Wednesday, 13.06.2012, 11:30
The challenging task of Byzantine self-stabilizing pulse synchronization requires that, in the presence of a minority of nodes that are permanently maliciously faulty, the non-faulty nodes must start firing synchronized pulses in a regular fashion after a finite amount of time, regardless of the initial state of the system. We study this problem under full connectivity in a model where nodes have local clocks of unknown, but bounded drift, and messages are delayed for an unknown, ... [ Full version ]

Wednesday, 13.06.2012, 10:30
We address the problem of robot navigation using natural visual features in a planar environment. The algorithm gets as input a target, represented by an image captured from the target's pose, and a source image, captured from the starting position, and should make an accurate move towards the target, which can be decomposed into a rotation and a translation on the plane. This algorithm can be used as a step in a higher-level homing algorithm for visual navigation. In additio... [ Full version ]

Meirav Galun (CS, Applied Mathematics, Weizmann Institute of Science)
Tuesday, 12.06.2012, 11:30
Discrete energy minimization is a ubiquitous task in computer vision, yet it is NP-hard in most cases. In this work we propose a multiscale framework for coping with the NP-hardness of discrete optimization. Our approach utilizes algebraic multiscale principles to efficiently explore the discrete solution space, yielding improved results on challenging energies for which current methods provide unsatisfactory approximations. In contrast to popular multiscale methods in computer vi... [ Full version ]

Wednesday, 06.06.2012, 12:30
Let M be a low-rank matrix. For vectors x, y, define the bilinear form f(x,y) = x^T M y. We study the question of reconstructing M from evaluations of f.
Much of the previous work allowed randomized evaluations or a stronger query model (or both). We show how to efficiently reconstruct M from deterministically chosen queries to f, using an optimal number of 4nr measurements. This can be seen as a (noiseless) generalization of compressed sensing, and we make this connection formal by r... [ Full version ]

Yaron Lipman (Weizmann Institute of Science)
Tuesday, 05.06.2012, 11:30
In this talk we introduce generic convex spaces of bounded distortion piecewise linear mappings of triangular meshes. It is shown how common geometric processing objective functionals can be restricted to these new spaces, rather than to the entire space of piecewise linear mappings, to provide a bounded distortion version of popular algorithms... [ Full version ]

Mattan Erez (University of Texas at Austin)
Sunday, 03.06.2012, 15:30
A significant portion of the energy dissipated in modern integrated circuits is consumed by the overhead associated with timing guardbands that ensure reliable execution. Timing speculation, where the pipeline operates at an unsafe voltage with any rare errors detected and resolved by the architecture, has been demonstrated to significantly improve the energy-efficiency of scalar processor designs. Unfortunately, applying the same timing-speculative approach to wide-SIMD architect... [ Full version ]

Daniel Reem (IMPA, Rio de Janeiro, Brazil)
Sunday, 03.06.2012, 13:00
Although many algorithms for computing Euclidean Voronoi diagrams of point sites have been published, most of them are sequential in nature and hence pose inherent difficulties for computing the diagrams in parallel. We present a new algorithm which enables the (combinatorial) computation of each of the Voronoi cells independently of the others. The algorithm is significantly different from previous ones and some of the ideas related to it are in the spirit of ...
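The bilinear-form reconstruction problem above (recovering a low-rank M from queries to f(x,y) = x^T M y) has a trivial n^2-query baseline: probing f at pairs of standard basis vectors reads off M entrywise, since f(e_i, e_j) = M_ij. The sketch below shows only this baseline (with invented helper names), not the talk's 4nr-measurement algorithm.

```python
# Recovering M entrywise from an oracle for f(x, y) = x^T M y by querying
# at standard basis vectors: f(e_i, e_j) = M[i][j]. This is the trivial
# n^2-query baseline; the talk's algorithm needs only ~4nr deterministic
# queries for a rank-r matrix.

def bilinear_oracle(M):
    """Wrap a matrix as a black-box bilinear form f(x, y) = x^T M y."""
    def f(x, y):
        return sum(x[i] * M[i][j] * y[j]
                   for i in range(len(M)) for j in range(len(M[0])))
    return f

def reconstruct(f, n):
    e = lambda i: [1 if k == i else 0 for k in range(n)]   # standard basis
    return [[f(e(i), e(j)) for j in range(n)] for i in range(n)]

M = [[1, 2, 3], [2, 4, 6], [3, 6, 9]]   # a rank-1 example
f = bilinear_oracle(M)
assert reconstruct(f, 3) == M
```

The point of the talk's result is precisely that, when rank(M) = r is small, far fewer than these n^2 deterministic probes are needed.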
[ Full version ]

Thursday, 31.05.2012, 12:30
In the last two decades, program verification and testing have gone a long way from a concept to practical tools which can be applied to real software. In this work, we developed practical techniques for testing and verifying atomicity of composed concurrent operations. The techniques have been implemented in a tool (COLT) and successfully applied to uncover many bugs in real software. In fact, the Java library is currently being modified to avoid some of the bugs found by COLT. T... [ Full version ]

Wednesday, 30.05.2012, 16:30
Many monitoring tasks over distributed data streams can be formulated as a continuous query using a function that is defined over the global average of data vectors derived from the streams. The query will typically produce an alert when the value of the function crosses a predefined threshold. A fundamental problem in efficient scalable implementation of such threshold queries is that the data streams are distributed, sometimes over a wide geographical region. Moving a... [ Full version ]

Wednesday, 30.05.2012, 13:30
Many text processing tasks are based on estimating semantic relatedness between texts. For example, in information retrieval, relevancy of documents can be determined based on the semantic distance from the query. Recently, many algorithms have been developed for evaluating semantic relatedness based on a conceptual representation of the input texts. The concept spaces for these algorithms are based, in most cases, on large repositories of knowledge, such as Wikipedia and WordNet.... [ Full version ]

Ofer Neiman (Ben-Gurion University)
Wednesday, 30.05.2012, 12:30
Given a set of $n$ points in $\ell_{1}$, how many dimensions are needed to represent all pairwise distances within a specific distortion? This dimension-distortion tradeoff question is well understood for the $\ell_{2}$ norm, where $O((\log n)/\epsilon^{2})$ dimensions suffice to achieve $1+\epsilon$ distortion.
In sharp contrast, there is a significant gap between upper and lower bounds for dimension reduction in $\ell_{1}$. A recent result shows that distortion $1+\epsilon$ can ... [ Full version ]

Vadim Indelman (Robotics and Intelligent Machines (RIM) Center, College of Computing, Georgia Institute of Technology)
Wednesday, 30.05.2012, 11:30
Fast and reliable bundle adjustment is essential in many structure from motion (SfM) related applications such as mobile vision, augmented reality and robotics. In this talk an incremental and computationally efficient method for bundle adjustment will be presented. The method incorporates two key ideas to substantially reduce the involved computational cost, compared to a conventional bundle adjustment. First, the cost function is formulated in terms of multi-view constraints ins... [ Full version ]

Oren Laadan (CS, Columbia University)
Wednesday, 30.05.2012, 11:30
Smartphones are increasingly ubiquitous, and many users carry multiple phones to accommodate work, personal, and geographic mobility needs. We present Cells, a virtualization architecture for enabling multiple virtual smartphones to run simultaneously on the same physical cellphone in an isolated, secure manner. Cells introduces a usage model of having one foreground virtual phone and multiple background virtual phones. This model enables a new device namespace mechanism and novel... [ Full version ]

Tuesday, 29.05.2012, 11:30
We consider the following problem, which arises in many database and web-based applications: Given a set $P$ of $n$ points in a high-dimensional space $\mathbb{R}^d$ and a distance $r$, we want to report all pairs of points of $P$ at Euclidean distance at most $r$. We present two randomized algorithms, one based on randomly shifted grids, and the other on randomly shifted and rotated grids. The expected performance of both algorithms is of the form $C(d)(n+k)\log n$, ...
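The grid idea behind the close-pairs algorithms above can be illustrated with plain (unshifted, unrotated) bucketing in two dimensions: hash each point to a cell of side r and compare it only against points in the same or neighboring cells. This is a minimal sketch of the bucketing step with invented names; the randomly shifted and rotated grids of the talk are what yield the stated expected bounds.

```python
# Report all pairs at Euclidean distance <= r by hashing points into a grid
# with cells of side r and comparing only points in neighboring cells.
# Minimal 2-D sketch of the bucketing idea; the talk's algorithms add random
# shifts (and rotations) of the grid to obtain the stated expected bounds.
from collections import defaultdict
from itertools import product
from math import dist

def close_pairs(points, r):
    grid = defaultdict(list)
    for p in points:                    # cell coordinates via floor division
        grid[(int(p[0] // r), int(p[1] // r))].append(p)
    pairs = set()
    for (cx, cy), bucket in grid.items():
        for dx, dy in product((-1, 0, 1), repeat=2):   # cell + 8 neighbours
            for p in bucket:
                for q in grid.get((cx + dx, cy + dy), []):
                    if p < q and dist(p, q) <= r:      # each pair once
                        pairs.add((p, q))
    return pairs

pts = [(0.0, 0.0), (0.5, 0.0), (3.0, 3.0), (3.2, 3.1)]
print(sorted(close_pairs(pts, 1.0)))
# -> [((0.0, 0.0), (0.5, 0.0)), ((3.0, 3.0), (3.2, 3.1))]
```

Each point is compared only against points in a constant number of cells, which is what makes the output-sensitive running times of the form C(d)(n+k) log n plausible once the grid is randomized.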
[ Full version ]

Reuven Cohen (CS, Technion)
Wednesday, 23.05.2012, 14:30
Cellular networks are becoming more and more crucial to our daily lives and operators are seeking new technologies for increasing their bandwidth. Examples of such technologies are cell sectorization, fractional frequency reuse, and coordinated multipoint Tx/Rx. To take advantage of these new technologies, the scheduler logic at the base station needs to determine not only when to transmit each packet but also what modulation and coding scheme to use, which frequency reuse area's... [ Full version ]

Elad Haramaty (CS, Technion)
Wednesday, 23.05.2012, 12:30
We consider the problem of testing if a given function f is close to an n-variate degree d polynomial over the finite field of q elements. The natural, low-query test for this property would be to pick the smallest dimension t = t(q,d) ≈ d/q such that every function of degree greater than d reveals this feature on some t-dimensional affine subspace, and to test that f, when restricted to a random t-dimensional affine subspace, is a polynomial of degree at most d on this subspace. S... [ Full version ]

Micha Livne (CS, University of Toronto)
Tuesday, 22.05.2012, 11:30
This talk concerns the estimation of human attributes from 3D human pose and motion. We consider both physical attributes (e.g., gender and weight) and aspects of mental state (e.g., mood). This task is useful for man-machine communication, and it provides a natural benchmark for evaluating the performance of 3D pose tracking methods. Based on an extensive corpus of motion capture data, with physical and perceptual ground truth, we analyze the inference of subtle biologically-in... [ Full version ]

Ariel Gabizon (CS, Technion)
Wednesday, 16.05.2012, 12:30
Let F be the field of q elements, where q=p^l for prime p. Informally speaking, a polynomial source is a distribution over F^n sampled by low degree multivariate polynomials.
In this paper, we construct extractors for polynomial sources over fields of constant size q assuming p... [ Full version ]

Alex Shraer (Yahoo! Research)
Wednesday, 16.05.2012, 11:30
Dynamically changing (reconfiguring) the membership of a replicated distributed system while preserving data consistency and system availability is a challenging problem. In this talk I will discuss this problem in the context of Primary/Backup clusters and Apache ZooKeeper. ZooKeeper is an open source system which enables highly reliable distributed coordination. It is widely used in industry, for example in Yahoo!, Facebook, Twitter, VMware, Box, Cloudera, MapR, UBS, Goldman Sach... [ Full version ]

Monday, 14.05.2012, 18:30
This lecture aims to provide a brief introduction to the InfiniBand™ architecture and programming using RDMA verbs. InfiniBand is an open standard, used in HPC (supercomputing) and data center environments for high performance connectivity. This standard defines a complete fabric architecture, from the physical layer all the way to the programming API. The programming API, also known as RDMA verbs, allows for transparent memory operations over the network, tra... [ Full version ]

Shiri Chechik (Weizmann Institute of Science)
Wednesday, 09.05.2012, 12:30
A distance oracle is a data structure that provides fast answers to distance queries. Recently, the problem of designing distance oracles capable of answering restricted distance queries, that is, estimating distances on a subgraph avoiding some forbidden vertices, has attracted a lot of attention. In this talk, we will consider forbidden-set distance oracles for planar graphs. I’ll present an efficient compact distance oracle that is capable of handling any number of failures. ... [ Full version ]

Chen Avin (Ben Gurion University)
Wednesday, 09.05.2012, 11:30
We study self-optimizing networks and data structures.
The goal of this research line is the design of fully distributed algorithms which flexibly adapt the network to a dynamic environment such as changing demands or traffic patterns. The idea of self-adjusting networks is motivated by trends in today's Internet like large data centers and peer-to-peer networks. In this talk I will present the general model of the problem and some initial theoretical results on data ce... [ Full version ]

Daniel Glazner (Faculty of Mathematics and Computer Science, The Weizmann Institute of Science)
Tuesday, 08.05.2012, 11:30
We describe an approach to category-level detection and viewpoint estimation for rigid 3D objects from single 2D images. In contrast to many existing methods, we directly integrate 3D reasoning with an appearance-based voting architecture. Our method relies on a nonparametric representation of a joint distribution of shape and appearance of the object class. Our voting method employs a novel parametrization of the joint detection and viewpoint hypothesis space, allowing efficien... [ Full version ]

Monday, 07.05.2012, 18:30

Ankit Gupta (CS, Technion)
Wednesday, 02.05.2012, 12:30
What is an optimal formula computing a given multivariate polynomial $f$? In this work, we show that this question admits an efficient algorithmic solution in an average-case sense. Specifically, we consider the following situation. Let $\F$ be a field, $S \subseteq \F$ be a finite subset of field elements, $\vec{X} = (X_1, X_2, \ldots, X_n)$ be a tuple of formal variables and $\Delta \geq 0$ be an integer representing the product-depth of the hidden (unknown) formula. ... [ Full version ]

Alex Bronstein (School of Electrical Engineering, Tel Aviv University)
Tuesday, 01.05.2012, 11:30
SIFT-like local feature descriptors are ubiquitously employed in such computer vision applications as content-based retrieval, video analysis, copy detection, object recognition, photo-tourism, and 3D reconstruction from multiple views.
Feature descriptors can be designed to be invariant to certain classes of photometric and geometric transformations, in particular, affine and intensity scale transformations. However, real transformations that an image can undergo can only be app... [ Full version ]

Monday, 23.04.2012, 18:30
Back in the early years of the dynamic web, Perl was the de-facto choice of language. It was so ubiquitous that people associated it with CGI. Because of that, even today, Perl suffers from a bad image. Yet beneath the visible surface, there are strong forces leading to a much better future. In the last couple of years a new breed of frameworks has appeared in the Perl world. They are all based on the PSGI standard, which rhymes with WSGI and... [ Full version ]

Benny Applebaum (Tel-Aviv University)
Sunday, 22.04.2012, 13:00
Locally-computable pseudorandom generators (PRGs) map n random input bits into m>n pseudorandom bits such that each of the m outputs depends on a small number d of inputs. While it is known that such generators are likely to exist for the case of small sub-linear stretch m=n+n^{0.9}, it is less clear whether achieving larger stretch is possible. The existence of such PRGs, which was posed as an open question in previous works, has recently gained additional motivation due to sev... [ Full version ]

Tiberiu Popa (Computer Graphics Laboratory, ETH, Zurich)
Thursday, 19.04.2012, 11:15
With the recent development of auto-multiscopic 3D displays that provide a 3D experience without the use of glasses, and with the recent availability of inexpensive hybrid depth/color cameras such as the Kinect, which provide real-time geometric and texture information, we are a step closer to realizing unencumbered 3D teleconferencing systems. However, many challenges still remain to be solved. In this talk, I will present two teleconferencing software solutions bas... [ Full version ]

Vijay K.
Bhargava (University of British Columbia and President, IEEE Communications Society)
Wednesday, 18.04.2012, 11:30
In this talk, we present techniques to enable green communications in future generations of wireless systems that will rely on cooperation and cognition to meet the increasing demand for high data rates. So far, achieving high data rates has been the primary focus of research in cooperative and CR systems, without much consideration of energy efficiency. However, many of these techniques significantly increase system complexity and energy consumption. Escalating energy costs and environme... [ Full version ]

Thursday, 05.04.2012, 11:30
The main problem with contemporary Computed Tomography (CT) imaging is the high radiation dose absorbed by patients during screening. Reducing this dose may result in poor quality imaging when using the popular fast and direct reconstruction techniques. On the other hand, iterative methods powered by statistical models of the scan perform better in such cases, but are also very slow. To bridge the gap between these two solutions, various signal processing techniques tha... [ Full version ]

Wednesday, 04.04.2012, 14:00
Grid computing environments have become mission-critical components in research and industry, offering sophisticated solutions to exploit large computing and storage resources across multiple geographic locations and administrative domains. Usually, such grid resources are non-dedicated or opportunistic; as a consequence, users utilize the resources following a "best effort" approach. However, many real-world supercomputing applications, such as computational fluid dynam... [ Full version ]

Wednesday, 04.04.2012, 12:30
Most algebraic multigrid (AMG) methods define the coarse operators by applying the (Petrov-)Galerkin coarse approximation (GCA), where the sparsity pattern and operator complexity of the multigrid hierarchy are dictated by the multigrid transfer operators (prolongation and restriction).
Therefore, AMG algorithms must usually settle on some compromise between the quality of the transfer operators and the aggressiveness of the coarsening, which affect the complexity of the hierarchy... [ Full version ]

Roy Schwartz (CS, Technion)
Wednesday, 04.04.2012, 12:30
The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to combinatorial optimization, economics, and algorithmic game theory. A partial list of well-known problems captured by submodular maximization includes: Max-Cut, Max-DiCut, Max-k-Cover, Generalized-Assignment, several variants of Max-SAT and some welfare and scheduling problems. While cla... [ Full version ]

Eugene Kolker (Children's Hospital, University of Washington)
Tuesday, 03.04.2012, 13:30
Shelter seminar room, Biology, Technion
Access to high-quality data drives our ability to ask questions and find answers. Our lab has a number of projects based on this premise. Proteomics data from experimental research in diabetes has enabled the discovery of two promising beta cell regulators, Netrin and Sem3a. We have built MOPED, a human and model organisms’ protein expression database, to provide a comprehensive summary of publicly available proteomics data. With MOPED, users can browse, sort, query, visua...
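For the submodular maximization problems listed in the Roy Schwartz talk above, the classical baseline is the greedy algorithm, shown here for Max-k-Cover: repeatedly pick the set with the largest marginal coverage, which gives a (1 - 1/e)-approximation for monotone submodular objectives under a cardinality constraint (Nemhauser, Wolsey and Fisher). This is standard background with invented names, not the talk's contribution.

```python
# Classical greedy for Max-k-Cover, one of the submodular maximization
# problems listed above: repeatedly pick the set covering the most new
# elements. For monotone submodular objectives under a cardinality
# constraint, greedy guarantees a (1 - 1/e) approximation.
# (Standard background sketch, not the talk's results.)

def greedy_max_cover(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break                       # no marginal gain left
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_cover(sets, 2)
print(chosen, covered)                  # picks sets 2 then 0, covering all 7 elements
```

Submodularity is exactly what makes this myopic rule safe: the marginal gain of any set can only shrink as the covered region grows.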
[ Full version ]

Peter Meer (Electrical and Computer Engineering Department, Rutgers University)
Tuesday, 03.04.2012, 11:30
A new robust estimation algorithm, the generalized projection based M-estimator (gpbM), is proposed. The algorithm is general and can handle heteroscedastic data, where every point in the estimation has a different covariance. It does not require the user to specify any (scale) parameters, and can be applied to multiple linear constraints for single and multi-carrier problems. The gpbM has three distinct stages: scale estimation, robust model estimation and inlier/outlier dich... [ Full version ]

Razya Ladelsky (IBM R&D, Haifa)
Monday, 02.04.2012, 18:30
With the emergence of multicore architectures there is a growing need for automatic parallelization, which transforms sequential code into multi-threaded code. OpenMP defines language extensions to C, C++, and Fortran for implementing multi-threaded shared memory applications. Generation of such extensions by the compiler relieves programmers from the manual parallelization process. The OpenMP specification has been implemented in GCC, and is part of the standard release since version... [ Full version ]

Thursday, 29.03.2012, 12:30
We present a technique for automatically adding fine-grain locking to an abstract data type that is implemented using a dynamic forest, i.e., the data structures may be mutated, even to the point of violating forestness temporarily during the execution of a method of the ADT. Our automatic technique is based on Domination Locking, a novel locking protocol. Domination Locking is designed specifically for software concurrency control, and in particular is designed for object-oriente... [ Full version ]

Wednesday, 28.03.2012, 12:30
In cooperative storage caching, clients may access blocks directly from each other's caches.
Previous studies treated all the cooperating caches as a single pool, maximizing overall system performance at the price of possibly degraded performance for individual clients. In light of the popularity of many P2P mechanisms, we re-evaluate the concept of cooperative caching, considering selfish clients that cooperate only if they benefit from the interaction. This is the first st... [ Full version ]

Zohar Karnin (Yahoo! Research)
Wednesday, 28.03.2012, 12:30
This paper introduces the Furthest Hyperplane Problem (FHP), which is an unsupervised counterpart of Support Vector Machines. Given a set of n points in R^d, the objective is to produce the hyperplane (passing through the origin) which maximizes the separation margin, that is, the minimal distance between the hyperplane and any input point. To the best of our knowledge, this is the first paper achieving provable results regarding FHP. We provide both lower and upper bounds to this... [ Full version ]

Nadav Amit (CS, Technion)
Wednesday, 28.03.2012, 11:30
Machine virtualization, where many virtual machines run on a single physical machine, has become an increasingly popular area of research and development in the last decade. Despite the introduction of hardware support for machine virtualization in commodity servers, many workloads still suffer from degraded performance when run in virtual machines. This degradation can be particularly acute when an unmodified virtual machine -- a VM that is not aware it is running in a virtual en... [ Full version ]

Tom Leighton (Applied Math, MIT)
Tuesday, 27.03.2012, 11:00
Butler Auditorium, Mosad Neeman Building, Technion
In 1996, twelve years after making Aliyah to Israel, Danny Lewin graduated from the Technion and came to MIT to study algorithms.
Over the next few years, he wrote a prize-winning Master’s Thesis on Consistent Hashing and co-founded Akamai Technologies, which today accelerates the delivery of over 250,000 web sites, including all of the top media and commerce brands on the Web. In this talk, we will describe some of Danny’s early work on consistent hashing and how ... [ Full version ]

Monday, 26.03.2012, 18:30
Have you always dreamed of starting a start-up but didn't know how? Come meet, hear and ask those who made it big time!... [ Full version ]

Shahar Chen (CS, Technion)
Sunday, 25.03.2012, 16:30
Online learning and competitive analysis are two widely studied frameworks for online decision-making settings. Despite the frequent similarity of the problems they study, there are significant differences in their assumptions, goals and techniques, hindering a unified analysis and richer interplay between the two. In this work, we provide several contributions in this direction. We provide a single unified algorithm which, by parameter tuning, interpolates between optimal regr... [ Full version ]

Wednesday, 21.03.2012, 16:00
Semidefinite programming is a fundamental problem in convex programming with numerous applications. In the field of combinatorial optimization, many approximation algorithms that rely on SDP have been discovered in the past two decades, starting with the work of Goemans and Williamson on MAX-CUT. In the field of machine learning, solving SDPs is at the heart of many learning tasks such as learning a distance metric and matrix completion. In ML in particular, the amounts of data nowa... [ Full version ]

Wednesday, 21.03.2012, 14:30
Verb subcategorization frames determine the number and types of the syntactic arguments that verbs select, or subcategorize for. Typically, verbs can be associated with subcategorization frames that specify, for each subcategorized argument, information on the phrases that can realize this argument.
These can be noun phrases (in the case of direct objects), prepositional phrases with one or more specific prepositions, infinitival verb phrases, clauses introduced by a complemen... [ Full version ]

Dima Kozakov (Biomedical Engineering, Boston University)
Wednesday, 21.03.2012, 13:00
Seminar room, 4th floor, Emerson building, Technion
Sampling the energy landscape of macromolecular interactions is a challenging problem of computational biology. Classical approaches like Molecular Dynamics and Monte Carlo are not always optimal for these applications due to the large configurational space of the problem. In contrast, the Fast Fourier Transform (FFT) correlation sampling approach can exhaustively evaluate the energies of billions of macromolecular configurations on a grid, provided two limitations: the energy is des... [ Full version ]

Pavel Hrubes (Princeton University)
Wednesday, 21.03.2012, 12:30
Given two monotone polynomials f, g, their unifier is a pair of monotone polynomials u, v such that f=cu+v and g=u+cv, for some c>0. The problem I will discuss is: can we have monotone polynomials f, g which have a unifier and can be computed by a small monotone circuit, but every unifier of f, g requires a large monotone circuit? On one hand, the question is related to arithmetic circuit complexity, and on the other, to the complexity of proofs in subsystems of monoto... [ Full version ]

Ohad Ben-Shahar (Ben-Gurion University)
Tuesday, 20.03.2012, 11:30
Our visual attention is attracted by salient stimuli in our environment and affected by primitive features such as orientation, color, and motion. Perceptual saliency due to orientation contrast has been extensively demonstrated in behavioral experiments with humans and other primates and is commonly explained by the very particular functional organization of the primary visual cortex. We challenge this prevailing view by studying orientation-based visual saliency in two non-mamma...
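The unifier relation from the Hrubes talk above (f = cu + v and g = u + cv for some c > 0) is easy to experiment with on explicit polynomials; for c != 1 it can also be inverted exactly, since u = (cf - g)/(c^2 - 1) and v = (cg - f)/(c^2 - 1). The sketch below is bookkeeping over coefficient dictionaries with invented names, not the talk's circuit-complexity results.

```python
# The unifier relation: u, v unify f and g when f = c*u + v and g = u + c*v
# for some c > 0. Polynomials are represented as {monomial: coefficient}
# dicts (representation and names invented for this sketch). For c != 1 the
# relation inverts: u = (c*f - g)/(c^2 - 1), v = (c*g - f)/(c^2 - 1).
from fractions import Fraction

def lin(a, p, b, q):
    """Return the polynomial a*p + b*q, dropping zero coefficients."""
    out = {}
    for poly, scale in ((p, a), (q, b)):
        for mono, coef in poly.items():
            out[mono] = out.get(mono, 0) + scale * coef
    return {m: c for m, c in out.items() if c}

u = {'x*y': 1, 'x': 2}                  # u = xy + 2x
v = {'y': 3}                            # v = 3y
c = 2
f = lin(c, u, 1, v)                     # f = 2u + v  =  2xy + 4x + 3y
g = lin(1, u, c, v)                     # g = u + 2v  =  xy + 2x + 6y

d = Fraction(c * c - 1)                 # exact arithmetic for the inversion
assert lin(Fraction(c) / d, f, Fraction(-1) / d, g) == u   # recovers u
assert lin(Fraction(-1) / d, f, Fraction(c) / d, g) == v   # recovers v
```

The hardness question in the talk is of course not about this inversion, which is a linear change of basis, but about whether the unifier u, v can ever be forced to have much larger monotone circuits than f and g themselves.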
[ Full version ]

Victor Kaplansky (IBM R&D, Haifa)
Monday, 19.03.2012, 18:30
Reconfigurable computing can achieve higher performance than software at lower power consumption, while maintaining a higher level of flexibility than hardware. Reconfigurable devices, such as field-programmable gate arrays (FPGAs), contain an array of computational elements whose functionality is determined through multiple programmable configuration bits. The hardware can be re-programmed to implement an entirely different circuit. The focus of the research proj... [ Full version ]

Rick Eads (Agilent Technologies)
Monday, 19.03.2012, 15:30
PCI Express is an industry standard, and the most prominent interconnection architecture for I/O devices and other boards inside a computer. It is defined and updated by the PCI-SIG(R) industry standards body, originally formed in 1992 as the Peripheral Component Interconnect (PCI) special interest group (SIG). Agilent Technologies, a prominent developer and provider of test equipment at all levels yet not a competitor in the computer market, has played a key role in the advanceme... [ Full version ]

Monday, 19.03.2012, 10:30
Traditional models of bendable surfaces are based on the exact or approximate invariance to deformations that do not tear or stretch the shape, leaving intact an intrinsic geometry associated with it. Intrinsic geometries are typically defined by shortest paths, also known as geodesic distances, or by diffusion processes on the surface like diffusion distances. Both methods are implicitly derived from the metric induced by the ambient Euclidean space. Here, we depart from... [ Full version ]

Srivastan Ravi (TU Berlin/Deutsche Telekom Laboratories)
Thursday, 15.03.2012, 12:30
It seems to be generally accepted that designing correct and efficient concurrent software is a sophisticated task that can only be carried out by experts.
A crucial challenge then is to convert sequential code produced by a "mainstream" programmer into concurrent code. Various synchronization techniques may be used for this, e.g., locks or transactional memory, but what does it mean for the resulting concurrent implementation to be correct? And which synchronization primitives provide m... [ Full version ]

Wednesday, 14.03.2012, 16:30
This thesis presents quantum analogues of artificial neural networks. We analyze and compare their performance to known classical and previously proposed quantum models. First we propose a model for associative memory based on a modification of Grover's quantum search algorithm and prove that the capacity of the model is exponential in the number of bits. We present algorithms for pattern completion and correction and prove that the model does not suffer from spurious me... [ Full version ]

Wednesday, 14.03.2012, 15:00
Computing environments are becoming increasingly parallel, and it seems likely that we will see more cores on tomorrow's desktops and server platforms. In a highly parallel system, tracing garbage collectors may not scale well due to deep heap structures that hinder parallel tracing. In this work we start by analyzing which data structures make current Java benchmarks create deep heap shapes. It turns out that the problem is manifested mostly with benchmarks that employ queues and linke... [ Full version ]

Wednesday, 14.03.2012, 13:00
Threshold monitoring applications in distributed stream networks continuously monitor the global score of the network and alert whenever a given threshold is crossed. The network's global score is computed by applying a certain scoring function over the aggregated data derived from the network streams. However, the sheer volume and dynamic nature of the streams impose excessive communication overhead. Recently, the concept of local constraints has been presented, in which ...
[ Full version ] Ido Ben-Zvi (EE, Technion) Wednesday, 14.03.2012, 12:30 Coordinating the proper ordering of events across remote sites is a central task of distributed applications. In asynchronous systems, such coordination depends in an essential way upon message chains, as captured by Lamport's happened-before relation. The relation provides a useful approximation of causality, in the sense that in asynchronous systems two events can only be causally related if they are Lamport related. The talk will consider coordination and causali... [ Full version ] Michael G. Katze (University of Washington, Seattle) Wednesday, 14.03.2012, 11:30 After decades of research, vaccines against some of the greatest viral threats are still lacking and antiviral drugs remain few and slow in coming. These shortcomings point to the need for new approaches that go beyond traditional virology methods. High-throughput technologies and computational biology promise to deliver a much-needed boost to the field. My laboratory is using systems biology and computational approaches to understand and model integrated views of virus-ho... [ Full version ] Theo Ungerer (CS, University of Augsburg, Germany) Tuesday, 13.03.2012, 14:30 Providing higher performance than state-of-the-art embedded processors can deliver today will increase safety, comfort, number and quality of services, while also lowering emissions as well as fuel demands for automotive, avionic and automation applications. Engineers who design hard real-time embedded systems in such embedded domains express a need for several times the performance available today while keeping safety as a major criterion. A breakthrough in performance is expected ... [ Full version ] Monday, 12.03.2012, 14:00 Emerging computer architectures pose many new challenges for software development. First, as the number of computing elements constantly increases, the importance of scalability of parallel programs becomes more significant.
Second, accessing memory has become the principal bottleneck, while multi-CPU systems are based on NUMA architectures, where memory access from different chips is asymmetric. Therefore, it is important to design software with local data access, cache-friendl... [ Full version ] Wednesday, 07.03.2012, 14:30 In recent years, stochastic approximation approaches such as Stochastic Gradient Descent (SGD) and Stochastic Dual Averaging have become the optimization method of choice for many learning problems, including linear Support Vector Machines (SVMs). This is not surprising, since such methods yield optimal generalization guarantees with only a single pass over the data. They in a sense have an unbeatable runtime: the runtime to get to a desired generalization goal is the same as the ... [ Full version ] Tuesday, 06.03.2012, 14:00 Data centers are becoming the hosting platform for a wide spectrum of composite applications. In recent years, large investments have been made in massive data centers supporting cloud services, by companies such as eBay, Facebook, Google, Microsoft, and Yahoo!. With an increasing trend towards global and more communication intensive applications, the bandwidth usage within and between data centers is rapidly growing. The placement of the data used by these global applications pre... [ Full version ] Leah Bar (Mathematics, Tel Aviv University) Tuesday, 21.02.2012, 11:30 Sparse representation theory has been increasingly used in signal processing and machine learning. In this work we introduce a hierarchical sparse modeling approach which integrates information from the image patch level to derive a mid-level invariant image and pattern representation. The proposed framework is based on a hierarchical architecture of dictionary learning for sparse coding in a cortical (log-polar) space, combined with a novel pooling operator which incorporates th... 
[ Full version ] Monday, 20.02.2012, 18:30 LaTeX is a high-quality typesetting system; BEAMER is a LaTeX class for creating presentations; Hebrew is an esoteric west Semitic language we happen to speak. This talk will cover all three; by the end of the talk you will be able to create beautiful and comprehensive Hebrew presentations using Beamer in minutes. The talk assumes prior knowledge in Hebrew, but not in LaTeX or Beamer. We will cover basic concepts, commands and examples in depth, and finally add som... [ Full version ] Sunday, 19.02.2012, 11:30 We deal with the Regenerator Location Problem in optical networks. We are given a network G = (V, E), and a set Q of communication requests between pairs of terminals in V. We investigate two variations: one in which we are given a routing P of the requests in Q, and one in which we are required to also find the routing. In both cases, each path in P must contain a regenerator after every d edges in order to deal with loss of signal quality for some d > 0. The goal is to minimize ... [ Full version ] Wednesday, 08.02.2012, 13:00 A basic requirement in many distributed systems is the ability to detect objects whose score, according to a given function, exceeds some threshold. Since an object's data can be partitioned over various nodes, computing its global score requires collecting its data over the network. A main challenge is to perform threshold queries or monitoring with minimum network communication, i.e., without collecting the data from the nodes to a central location. In this talk I will present ... [ Full version ] Roman Manevich (UT Austin) Wednesday, 08.02.2012, 11:30 Efficient concurrent data structures are extremely important for obtaining good performance for most parallel programs. However, ensuring the correctness of concurrent data structure implementations can be very tricky because of concurrency bugs such as race conditions and deadlocks.
In systems that use optimistic parallel execution such as boosted transactional memory systems and the Galois system, the implementation of concurrent data structures is even more complex because data... [ Full version ] Moti Freiman (Computational Radiology Lab, Harvard Medical School) Tuesday, 07.02.2012, 11:30 Personalized treatment approaches which optimize drugs doses according to pre-treatment and early response-to-therapy evaluation hold the promise to improve treatment success rates and reduce severe adverse side-effects due to drugs toxicity in variety of pathologies. Reliable assessment of tissue microenvironment including cell proliferation, density and size and tissue perfusion as a biomarker for disease activity is a key necessity for personalized, response-based treatment reg... [ Full version ] Avi Mendelson (Microsoft and CS&EE, Technion) Monday, 06.02.2012, 18:30 C++ AMP (Accelerated Massive Parallelism) is a native programming model that contains elements that span the C++ programming language and its runtime library. The syntactic changes introduced by AMP are minimal, but additional restrictions are enforced to reflect the limitations of data parallel hardware. Data parallel algorithms are supported by the introduction of multi-dimensional array types, array operations on those types, indexing, asynchronous memory transfer, shared memor... [ Full version ] Wednesday, 01.02.2012, 11:00 Contemporary mobile devices are equipped with multiple wireless interfaces, such as WiFi, Bluetooth, WiMax, ZigBee, NFC, etc. All these technologies differ dramatically one from another in maximum transmission range, bandwidth and power demands. Among all subsystems operating inside mobile devices, wireless communication is known as being particularly power-hungry, accounting for 50-70% of the total power consumption in small handheld devices, such as smartphones, and for ... 
[ Full version ] Greg Shakhnarovich (TTI-Chicago) Tuesday, 31.01.2012, 11:30 Much effort has been directed at algorithms for obtaining the highest probability (MAP) configuration in a probabilistic (random field) model. In many situations, one could benefit from additional solutions with high probability. Current methods for computing additional most probable configurations produce solutions that tend to be very similar to the MAP solution and each other. This is often an undesirable property. I will describe an algorithm for the M-Best Mode problem,... [ Full version ] Gala Yadgar (CS, Technion) Thursday, 26.01.2012, 12:30 In cooperative storage caching, clients may access blocks directly from each other's caches. Previous studies treated all the cooperating caches as a single pool, maximizing overall system performance at the price of possibly degraded performance for individual clients. In light of the popularity of many P2P mechanisms, we re-evaluate the concept of cooperative caching, considering selfish clients that cooperate only if they benefit from the interaction. This is the first study ... [ Full version ] Wednesday, 25.01.2012, 12:30 Our research discusses multi-word expressions (MWE) such as "kick the bucket", "hot dog", "by and large", "look up", and "spill the beans". Identification of MWEs has been a hot subject of research in recent years. We present a method to identify MWEs in Hebrew using a new concept we term Language Isolation. We focus on dissecting (or "isolating") the morphological properties of words to discover potential MWEs and show that this method improves the alignment of multi-lingual (Span... [ Full version ] Eran Omri (Bar-Ilan University) Wednesday, 25.01.2012, 12:30 It is well known (cf. Impagliazzo and Luby [FOCS '89]) that the existence of almost all "interesting" cryptographic applications, i.e., ones that cannot hold information theoretically, implies one-way functions.
An important exception where the above implication is not known, however, is the case of coin-flipping protocols. Such protocols allow honest parties to mutually flip an unbiased coin, while guaranteeing that even a cheating (efficient) party cannot bias the outp... [ Full version ] Dana Segev (EE, Technion) Tuesday, 24.01.2012, 11:30 Audio denoising is a long-studied problem, with numerous algorithms and a wide accumulated knowledge. When the noise is non-stationary (like noise in a cocktail-party environment) and strong, this task becomes very difficult to handle. This paper considers such audio denoising problems, where the audio track is accompanied by a video. Furthermore, the disturbing source is generally not visible in the field of view, and its nature is unknown. Overcoming such unknown noise is ... [ Full version ] Sunday, 22.01.2012, 12:30 David Carmel, from IBM Research, Haifa, will talk about the IBM Watson project. The talk is given in the course "Introduction to Artificial Intelligence", but is open to the public.... [ Full version ] Niv Buchbinder (Open University) Wednesday, 18.01.2012, 12:30 The $k$-server problem is one of the most fundamental and extensively studied problems in online computation. Suppose there is an $n$-point metric space and $k$ servers are located at some of the points of the metric space. At each time step, an online algorithm is given a request at one of the points of the metric space, and this request is served by moving a server to the requested point (if there is no server there already). The cost of serving a request is defined to be the di... [ Full version ] Stacy Patterson (EE, Technion) Wednesday, 18.01.2012, 11:30 In the distributed average consensus problem, each node in a network has an initial value, and the objective is for all nodes to reach consensus at the average of these values using only communication with nearby nodes.
Distributed average consensus algorithms have a wide variety of applications, including distributed optimization, sensor fusion, load balancing, and autonomous vehicle formation control. This talk centers on the analysis of distributed averaging algorit... [ Full version ] Michal Jacob (Intel, Computer Vision Group) Tuesday, 17.01.2012, 11:30 Why do we perceive some elements in a visual scene, while others remain undetected? We compared fixations on detected vs. undetected items in the Identity Search Task (Jacob & Hochstein, 2009). Using a gaze-contingent technique, we further controlled the number of fixations on the target (Jacob & Hochstein, 2010). Results show that detected targets were fixated to a greater extent, and a backward dynamics alignment revealed a bifurcation point where the differential characte... [ Full version ] David Sainz (CS, Technion) Thursday, 12.01.2012, 12:30 Smart-phones are quickly becoming the most prevalent computing and communication devices. As the capabilities of these mobile phones improve as well as our reliance on them, backing up the data stored on the phone becomes vital. Typical cloud-based backup services assume a reliable high-bandwidth connection to the Internet. This assumption, however, is not practical in many places, which motivates the need for ad-hoc cloud services, and in particular, for a social storage service th... [ Full version ] Wednesday, 11.01.2012, 14:30 The complexity of several prominent graph polynomials, such as the chromatic polynomial and the matching polynomial, has been studied in the literature. In 2008 Makowsky raised a conjecture which generalizes complexity results for specific graph polynomials to an infinite class of graph polynomials which includes almost all of those in the literature. The conjecture states roughly that the evaluations of such graph polynomials are equivalent in terms of running-time complexity, ...
[ Full version ] Brent Waters (University of Texas) Wednesday, 11.01.2012, 12:30 I will present a new approach for creating chosen ciphertext secure encryption. The focal point of our work is a new abstraction that we call Detectable Chosen Ciphertext Security (DCCA). Intuitively, this notion is meant to capture systems that are not necessarily chosen ciphertext attack (CCA) secure, but where we can detect whether a certain query CT can be useful for decrypting (or distinguishing) a challenge ciphertext CT*. We show how to build chosen ... [ Full version ] Gabi Nakibly (National EW Research and Simulation Center) Wednesday, 11.01.2012, 11:30 Open Shortest Path First (OSPF) is the most popular interior gateway routing protocol on the Internet. Most known OSPF attacks that have been published in the past are based on falsifying the link state advertisement (LSA) of an attacker-controlled router. These attacks can only falsify a small portion of the routing domain's topology, hence their effect is usually limited. More powerful attacks are the ones that affect LSAs of other routers not controlled by the attacker. However... [ Full version ] Margarita Osadchy (CS, Haifa University) Tuesday, 10.01.2012, 11:30 We present a novel approach to pose estimation and model-based recognition of specular objects in difficult viewing conditions, such as low illumination, cluttered background, and large highlights and shadows that appear on the object of interest. In such challenging conditions conventional features are unreliable. We show that under the assumption of a dominant light source, specular highlights produced by a known object can be used to establish correspondence between its ... [ Full version ] Monday, 09.01.2012, 18:30 In the last two years a committee in the Standards Institution of Israel worked, on my initiative and with my active participation, on a revision for the standard of the Hebrew keyboard layout. Why did I want to change the layout?
Why is it good to do it through the Standards Institution? How to make people who represent Microsoft, Linux, and Apple agree about things? How is a keyboard layout edited in Linux and in Windows? How do you install a keyboard layout and how do you distr... [ Full version ] Zeev Dvir (Princeton University) Thursday, 05.01.2012, 12:30 We describe an explicit, simple, construction of large subsets of F^n, where F is a finite field, that have small intersection with every k-dimensional affine subspace. Interest in the explicit construction of such sets, termed 'subspace-evasive' sets, started in the work of Pudlak and Rodl (2004) who showed how such constructions over the binary field can be used to construct explicit Ramsey graphs. More recently, Guruswami (2011) showed that, over large finite fields (of ... [ Full version ] Wednesday, 04.01.2012, 13:30 Nuclear magnetic resonance (NMR) has proven to be a leading implementation of quantum information processors where each molecule in the sample constitutes a register of quantum bits (qubits). However, at room temperature, the qubits that are realized by nuclear spins (1/2) are in a highly mixed state: noisy or with high entropy. Source coding (compression) can cool some spins (reducing their Shannon entropy) while heating others, yet this closed-system technique is limited by Shan... [ Full version ] Ron Rothblum (Weizmann Institute for Science) Wednesday, 04.01.2012, 12:30 We show how to transform any additively homomorphic private-key encryption scheme that is compact, into a public-key encryption scheme. By compact we mean that the length of a homomorphically generated encryption is independent of the number of ciphertexts from which it was created. We do not require anything else on the distribution of homomorphically generated encryptions (in particular, we do not require them to be distributed like real ciphertexts). Our resul... 
[ Full version ] Rob Bisseling (Utrecht University) Wednesday, 04.01.2012, 11:30 Graph matching is the problem of matching nodes of a graph in pairs such that the largest number of pairs is created or the largest sum of edge weights is obtained. Greedy graph matching provides us with a fast way to coarsen a given graph during graph partitioning. Direct algorithms on the CPU which perform such greedy matchings are simple and fast, but offer few handholds for parallelisation. To remedy this, we introduce a fine-grained shared-memory parallel algorithm for greedy... [ Full version ] Erich Nahum (IBM T.J. Watson Research Center) Tuesday, 03.01.2012, 12:45 The Session Initiation Protocol (SIP) is widely used for controlling communication sessions such as voice and video calls over IP, video conferencing, streaming multimedia distribution, instant messaging, presence information, file transfer and online games. This work introduces several novel load balancing algorithms for distributing SIP requests to a cluster of SIP servers. Our load balancer improves both throughput and response time versus a single node, while exposing a single... [ Full version ] Daniel Glasner (Math & CS, The Weizmann Institute of Science) Tuesday, 03.01.2012, 11:30 We present an unsupervised, shape-based method for joint clustering of multiple image segmentations. Given two or more closely-related images, such as close frames in a video sequence or images of the same scene taken under different lighting conditions, our method generates a joint segmentation of the images. We introduce a novel contour-based representation that allows us to cast the shape-based joint clustering problem as a quadratic semi-assignment problem. Our score fun... [ Full version ] Ofer Rosenberg and Yaki Tebeka (AMD) Monday, 02.01.2012, 18:30 This is a series of 4 talks about GPGPUs, intended for the practical engineer: 1. Motivation, AMD's architecture 2.
OpenCL 3. Case studies, Dos and Don'ts 4. Tools and Profiling for Performance General Purpose GPU programming became a hot topic in the last few years, ranging from academic studies to being used by commercial software products. As an example, three out of the world's top 10 supercomputers (June 2011 list) contain GPUs. This s... [ Full version ]
This page computes a finite or infinite sum over the index n. The sum can be numerical or symbolic (for example a power series). To compute a sum, first enter the expression of its general term f(n), then choose the type of sum: an infinite series (with n starting from a given value) or a finite sum (with n running through a given range of integers), at a chosen numerical precision. Description: computes sums of series or finite sums of various kinds. This is the main site of WIMS (WWW Interactive Multipurpose Server): interactive exercises, online calculators and plotters, mathematical recreation and games.
Giuseppe Carleo École Polytechnique Fédérale de Lausanne Giuseppe Carleo is a computational quantum physicist, whose main focus is the development of advanced numerical algorithms to study challenging problems involving strongly interacting quantum systems. He is best known for the introduction of machine learning techniques to study both equilibrium and dynamical properties, based on neural-network representations of quantum states, as well as for the time-dependent variational Monte Carlo method. He earned a Ph.D. in Condensed Matter Theory from the International School for Advanced Studies (SISSA) in Italy in 2011. He held postdoctoral positions at the Institut d'Optique in France and ETH Zurich in Switzerland, where he also served as a lecturer in computational quantum physics. In 2018, he joined the Flatiron Institute in New York City at the Center for Computational Quantum Physics (CCQ), working as a Research Scientist and project leader, and also leading the development of the open-source project NetKet. Since September 2020 he has been an assistant professor at EPFL, in Switzerland, leading a research group focused on computational quantum science.
-indices-matter changing eliminator of eq

With -indices-matter eq has the same type but cannot generate rec and rect. How exactly does -indices-matter get in the way of the eliminator? I only remember this comment from @Gaëtan Gilbert but I don't really understand what is going on. https://github.com/coq/coq/pull/15260#discussion_r759329432

eq with indices matter looks like:

eq : forall [A : Type], A -> A -> Prop

eq without indices matter looks like:

eq : forall [A : Type], A -> A -> Prop

Where is the magic happening?

the difference is not the type nor the constructor, it's what is allowed in the eliminator
inductives can be defined in Prop either if they are small enough (then they get _rect) or by squashing
indices-matter forces squashing for eq

The "small enough" part being the singleton business? Are the small ones the same ones picked out by SProp? Acc etc.?

rules for non squashed Prop:
• all constructor arguments in Prop or SProp
• at most 1 constructor
• if indices matter, all indices in Prop or SProp
Acc is still Prop with indices matter (it has no indices thanks to non recursively uniform parameters)

if indices matter, all indices in Prop or SProp

Is it known that this causes problems when dropped?

it's the definition of indices matter

I was always under the impression that -indices-matter was really talking about universe levels not comparing between prop and other sorts.

I don't know what that's supposed to mean so I can't tell if it's correct

The reason we don't allow an indexed inductive to live in a universe lower than its indices is because we can get Russell-style problems, no?

My point is that prop is not an ordinary universe level

universe rules for inductives in general, let U be the sort of the inductive:
• all constructor arguments <= U (where the order is SProp <= Prop <= Set <= Type...)
• if indices matter, all indices <= U (same order)
• if U=SProp, 0 constructors OR (definitional UIP on and 1 constructor and no constructor arguments)
• if U=Prop, 0 or 1 constructors OR U impredicative and the inductive is squashed

strictly speaking indices-matter is designed for HoTT (where @paths A x y must be in same or higher universe as A)
since HoTT isn't supposed to use Prop, indices-matter could do anything on Prop and it doesn't matter since nobody knows what it means

Exactly, so the second condition might be a bit too strict. I wonder if it is consistent to relax it so that we don't compare between SProp,Prop and Set,Type.

too strict for what? it's consistent without axioms since no-indices-matter is consistent

so that we don't compare between SProp,Prop and Set,Type.

Not enabling -indices-matter is not exactly allowing any size for the indices however. For example, the following doesn't work (with uni poly):

Inductive Box (A : Type) : Set :=
| a : A -> Box A.

sorry, more correctly:

Inductive Box : forall (A : Type), Set :=
| a {A : Type} : A -> Box A.

I guess here the constructor size check is what is stopping this.

because of the constructor argument
univ poly does nothing on this

Gaëtan Gilbert said:
too strict for what? it's consistent without axioms since no-indices-matter is consistent

I'm looking at it from the perspective of enabling -indices-matter everywhere. But I always get confused since all the issues are Prop related which -indices-matter wasn't even designed for.
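The squashing effect on eq can be seen directly. A minimal sketch, assuming a session compiled with -indices-matter; the names eq' and eq'_rect are illustrative, not from the thread:

Inductive eq' (A : Type) (x : A) : A -> Prop :=
| eq_refl' : eq' A x x.

(* Without -indices-matter, Coq auto-generates eq'_rect (elimination into
   Type), because eq' passes the non-squashed-Prop rules above. With
   -indices-matter the index lives in A : Type, larger than Prop, so eq'
   is squashed and only elimination into Prop (eq'_ind) is available: *)
Fail Check eq'_rect.

Under -indices-matter the Fail succeeds (eq'_rect was never generated); in the default mode it would fail, since eq'_rect exists.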
there is a transformation from inductives with indices to inductives with non recursively uniform parameters + a special equality type
if the special equality type is in the same universe as its type argument you get indices-matter
if it's in Prop you get no-indices-matter with uip off
if it's in sprop you get no-indices-matter with uip on
so you can't have indices-matter with unsquashed eq in prop
(I should have remembered that instead of saying stuff about how hott doesn't care what we do with prop)

Is that transformation detailed anywhere?

it's pretty trivial, if you have

Inductive Ind params : indices -> U :=
  C : forall args, Ind params (f args)

you transform to

Inductive Ind params indices : U :=
  C : forall args, f args = indices -> Ind params indices.

(where params, indices, args and f are telescopes for full generality)

Right, I've seen this before actually. But I saw it in a different dress. It's the translation between indexed W-types and W-types? In fact, I even ported Jasper's formalization of it to the HoTT library: https://github.com/HoTT/HoTT/blob/master/theories/Types/IWType.v

There is no UIP involved in this translation however

mine doesn't use uip either
it's probably basically the same thing

However these IW-types inherit their "indices mattering" from the definition of the IW-type. Which is just

Inductive IW
  (I : Type) (** The indexing type *)
  (A : Type) (** The type of labels / constructors / data *)
  (B : A -> Type) (** The type of arities / arguments / children *)
  (i : A -> I) (** The index map (for labels) *)
  (j : forall x, B x -> I) (** The index map for arguments *)
  : I -> Type :=
| iw_sup (x : A) : (forall (y : B x), IW I A B i j (j x y)) -> IW I A B i j (i x).

I played around with similar stuff a while back but never submitted anywhere

So all of this doesn't really have to do with inductive types I suppose. It really is a property of whatever equality you have.
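A concrete toy instance of this transformation (illustrative names, not from the thread), taking the special equality to be Coq's Prop-valued eq, which per the above gives the no-indices-matter behaviour. The index of even becomes a non-recursively-uniform parameter of even', constrained by an equality, the same way Acc avoids indices:

Inductive even : nat -> Prop :=
| even_O : even 0
| even_SS : forall n, even n -> even (S (S n)).

(* Index turned into a (non recursively uniform) parameter plus an
   equality constraint; the recursive occurrence even' n instantiates
   the parameter differently, which Coq allows, as with Acc. *)
Inductive even' (m : nat) : Prop :=
| even'_O : 0 = m -> even' m
| even'_SS : forall n, even' n -> S (S n) = m -> even' m.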
InductiveS is a description of an inductive (with only recursive parameters); indT implements it with indices, indT' with non-recursive parameters.

Ali Caglayan said:
> So all of this doesn't really have to do with inductive types I suppose. It really is a property of whatever equality you have.

I guess it depends on whether you see equality as more primitive or as just a specific inductive.

Your formalization is very cool. Thank you very much for the interesting discussion.

I seem to recall an example of -indices-matter breaking things in the test-suite for non-Prop-related reasons:

    Variables (A B : Type) (x : A) (f : A -> B) (b : bool) (vT vF : A).
    Variant if_spec (not_b : Prop) : bool -> A -> Set :=
      | IfSpecTrue of b : if_spec not_b true vT
      | IfSpecFalse of not_b : if_spec not_b false vF.

From ssrbool.v. Not in the test-suite actually.

So A can be in a big universe but if_spec lands in Set.

What about it?

Last updated: Oct 13 2024 at 01:02 UTC
What is the equation of the line passing through $(3,7)$ and $(-8,12)$? | Socratic

1 Answer

It is important that you be more specific in your question. I am assuming you mean a straight-line graph.

$y = -\frac{5}{11} x + \frac{92}{11}$

You did not request the y and x intercepts, so they are not calculated.

The standard (std) form of this type of plot is $y = mx + c$.

Pre-amble: $y$ is the dependent variable, as it is the outcome of, and thus controlled by, what is on the right of the "=". $x$ is the independent variable, as it can take on any value you choose. $m$ is the gradient of the 'curve'. (Yes, it is mathematically correct to call a straight-line plot a curve; people do not tend to, though!)

We start by determining the gradient. We then substitute one of the given co-ordinates (ordered pairs) to find the value of the constant.

Find the gradient:

$m = \frac{\text{change in up or down}}{\text{change in along}} \to \frac{y_2 - y_1}{x_2 - x_1}$

Let $(x_1, y_1) = (3, 7)$, the left-most pair, chosen to be so as you listed them first!
Let $(x_2, y_2) = (-8, 12)$. Then

$m = \frac{y_2 - y_1}{x_2 - x_1} = \frac{12 - 7}{(-8) - 3} = \frac{5}{-11} = -\frac{5}{11}$

Find the constant: using $y = mx + c$, then for $(x_1, y_1)$ we have $y_1 = m x_1 + c$, so

$7 = \left(-\frac{5}{11}\right) 3 + c$

$c = 7 + \frac{15}{11} = 8\tfrac{4}{11} \text{ or } \frac{92}{11}$

Putting it all together, we now have what we need: the gradient and the constant. The equation is:

$y = -\frac{5}{11} x + \frac{92}{11}$

Fractions are precise; decimals are less so! Use fractions in preference unless specifically instructed otherwise!

Impact of this question: 2705 views around the world
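The arithmetic above can be double-checked with exact rational arithmetic; this is a sketch (the helper name `line_through` is mine, not from the answer):

```python
from fractions import Fraction

def line_through(p1, p2):
    """Gradient m and constant c of y = m*x + c through two points, exactly."""
    (x1, y1), (x2, y2) = p1, p2
    m = Fraction(y2 - y1, x2 - x1)  # change in y over change in x
    c = y1 - m * x1                 # from y1 = m*x1 + c
    return m, c

m, c = line_through((3, 7), (-8, 12))
print(m, c)  # -5/11 92/11
```

Using `Fraction` avoids the rounding that decimal arithmetic would introduce, in line with the answer's closing remark about fractions being precise.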
Ecology and Evolution in the RNA World Dynamics and Stability of Prebiotic Replicator Systems Evolutionary Systems Research Group, MTA, Centre for Ecological Research, Hungarian Academy of Sciences, Klebelsberg Kuno u. 3, 8237 Tihany, Hungary Center for the Conceptual Foundations of Science, Parmenides Foundation, Kirchplatz 1, 82049 Pullach/Munich, Germany MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group, Department of Plant Systematics, Ecology and Theoretical Biology, Eötvös Loránd University, Pázmány Péter sétány. 1/c, 1117 Budapest, Hungary Department of Plant Systematics, Ecology and Theoretical Biology, Eötvös Loránd University, Pázmány Péter sétány. 1/c, 1117 Budapest, Hungary Biocomplexity Group, Niels Bohr Institute, Copenhagen University, Blegdamsvej 17, 2100 Copenhagen, Denmark Author to whom correspondence should be addressed. Submission received: 30 September 2017 / Revised: 9 November 2017 / Accepted: 13 November 2017 / Published: 27 November 2017 As of today, the most credible scientific paradigm pertaining to the origin of life on Earth is undoubtedly the RNA World scenario. It is built on the assumption that catalytically active replicators (most probably RNA-like macromolecules) may have been responsible for booting up life almost four billion years ago. The many different incarnations of nucleotide sequence (string) replicator models proposed recently are all attempts to explain on this basis how the genetic information transfer and the functional diversity of prebiotic replicator systems may have emerged, persisted and evolved into the first living cell. We have postulated three necessary conditions for an RNA World model system to be a dynamically feasible representation of prebiotic chemical evolution: (1) it must maintain and transfer a sufficient diversity of information reliably and indefinitely, (2) it must be ecologically stable and (3) it must be evolutionarily stable. 
In this review, we discuss the best-known prebiotic scenarios and the corresponding models of string-replicator dynamics and assess them against these criteria. We suggest that the most popular of prebiotic replicator systems, the hypercycle, is probably the worst performer in almost all of these respects, whereas a few other model concepts (parabolic replicator, open chaotic flows, stochastic corrector, metabolically coupled replicator system) are promising candidates for development into coherent models that may become experimentally accessible in the future.

1. Introduction

Prebiotic systems are assemblages of dynamically coupled replicative entities hypothesized to have existed before biological evolution, during the chemical evolutionary phase of molecules leading to the first cells and life, about 3.5–4 billion years ago on Earth. The idea of prebiotic evolution is not limited to our planet, of course: any habitat in the universe offering suitable physical-chemical conditions for the emergence and maintenance of such replicative entities may have undergone similar evolution. The units of evolution in the prebiotic era on Earth were molecular replicators (most probably RNA molecules) and their evolution may have led to the emergence of the first chromosomes and, ultimately, to the first cells. None of the recent, highly evolved biochemical machinery controlling and regulating the replication of information (such as modern error-correction mechanisms of DNA copying) existed then. Therefore, some serious obstacles had to be overcome on the evolutionary route leading to the first individual cells and biological evolution. The first such problem that prebiotic systems may have faced was transgressing the information threshold, i.e., escaping Eigen's paradox. The paradox poses the following issue: the critical amount of information within a replicator system that is sufficient to keep it running through many generations is constantly ruined by mutational loss.
Since no reliable replication mechanism existed, the mutation rate was probably very high. As a consequence, the critical amount of information could not be condensed into a single, long replicator, because copying errors (mutations) would have easily eroded much of the vital information in a single step. Maintaining a sufficient diversity of different replicator species, each containing a small, more reliably replicable part of the critical information, could be the solution to the information threshold problem [ ]. The combined information content of such a maintainable replicator set may have been sufficient to code for a viable system. However, several other system-dependent problems had to be solved by even the simplest prebiotic replicator system. We have defined a minimum set of system-level criteria that any prebiotic replicator set would certainly have had to meet in order to be able to maintain itself for a sufficiently long time and evolve toward higher complexity:

• Ecological diversity—maintaining the coexistence of a sufficient number of different species (replicators, sequences, genotypes, etc.) in light of the Gause principle (see later), which poses a strict limit on the number of coexisting species based on the number of regulating factors.
• Ecological stability—maintaining dynamical stability in a given set of coexistent species against external perturbations.
• Evolutionary stability—maintaining an adequate amount of information (a critical diversity of replicator species) from generation to generation and avoiding information decay (diversity reduction) in spite of frequent mutations and the lack of error correction.
For replicators to be the units of open-ended [ ] evolutionary change, they have to be capable of unlimited heredity [ ], self-referentialism and the evolution of evolvability [ ], etc. (for a summary, see [ ]). In this paper, we focus our attention on the diversity and stability aspects of replicator communities as emphasized above, assuming that all other requirements are met by the constituent replicators. We will use the term "replicator" for any kind of biological or chemical entity that is capable of replication in the broadest sense (see [ ]), i.e., is multiplying, has variations that affect its reproduction/survival and is creating more of its type (with variations being heritable). Mutations will play a crucial role in the evolutionary dynamics of species of replicators, the time scale of which may or may not be substantially different from that of ecological changes, depending on the actual model. In the following section—after a brief methodological characterization of dynamical modelling—we will clarify the concepts of diversity maintenance (coexistence), ecological stability and evolutionary stability in some detail. Then we will investigate a set of models previously introduced, along the lines of these three criteria, under a separate heading for each version (differing in spatial and/or temporal resolution) of each dynamical scenario. Our aim is to provide a comparative review of the field's most important models. We will confine our focus to models of linear polymer replicators (string replicators) and will not survey models dealing only with higher-level (compositional) dynamics such as the GARD model [ ] or the models of autocatalytic sets [ ], as those models can be understood as special cases of others discussed in this paper (for critical analyses of GARD, see [ ]).

2. The Three Pillars of Prebiotics

Prebiotic systems are usually investigated by dynamical models. We will discuss some of the most thoroughly studied ones in turn.
Dynamical models can be classified into different categories depending on certain aspects of the dynamics they assume. The two most important such aspects are temporal and structural resolution: models may be discrete or continuous in time and they may or may not postulate spatial, group or other structure with local interactions. Continuous-time models are formalized as differential equations specifying the state of the system at t + dt, based on the state at t, where dt is an infinitesimally short time period. Temporally discrete systems are difference equations or update rules that define the state of the system at time t + 1 as a function of its state at t. Spatially structured models can be treated in continuous space by partial differential equations (PDEs) or in discrete space, as cellular automata (on different types of grids or lattices). Note that the analysis of PDE models requires numerical methods in almost any case, just as for the lattice models they approximate. Since the corresponding lattice model is usually much easier to handle and it can approximate continuous time by sequential random updating rules, PDE models play a minor role in studies of replicator dynamics.

2.1. Maintaining Diversity

Ideal populations of replicators not limited by external factors exhibit exponential growth. For any biological entity (or replicator), the number of offspring in the population is proportional to the actual number of reproducing entities in the population (or to the whole population if everyone reproduces). In models of population dynamics, the factor of proportionality is the Malthusian growth rate, characteristic of the species, denoted by $r$ ($r > 0$). Thus, the continuous-time dynamics of a population that grows without any internal or external limitation is the following:

$\dot{x}(t) = r x(t),$

where $x(t)$ is the amount (or concentration) of a replicator species at time $t$ and $\dot{x}(t)$ is its time derivative.
The solution of this differential equation is the well-known exponential growth formula $x(t) = x(0) e^{rt}$, defining the actual population size at any time $t$. Exponential growth would increase population size beyond all limits, whereas the growth of every natural population slows down and ultimately stops due to the exhaustion of the limiting resource (food, space, etc.). Such regulating factors are extremely important for the coexistence of different replicator (or biological) species (see later). Interesting dynamics arise when multiple different species are competing for the same resource. Assume two replicators with Malthusian growth rates $r_1$ and $r_2$. The ratio of their numbers at time $t$ is

$\frac{x_1(t)}{x_2(t)} = \frac{x_1(0)}{x_2(0)} e^{(r_1 - r_2) t},$

of which the limit at $t = \infty$ is

$\lim_{t \to \infty} \frac{x_1(t)}{x_2(t)} = \begin{cases} 0, & \text{if } r_1 < r_2 \\ \infty, & \text{if } r_1 > r_2 \\ \frac{x_1(0)}{x_2(0)}, & \text{if } r_1 = r_2 \end{cases}$

meaning that the replicator with the higher replication rate exponentially outcompetes the inferior replicator. The inferior species becomes extremely diluted in finite time, which practically means its extinction. The relative growth rate of the competitors is the difference between their Malthusian parameters. Coexistence of the two replicator populations is impossible without a mechanism that ensures that the two growth rates settle at exactly the same value. In case of any arbitrarily small difference between $r_1$ and $r_2$, the difference in the densities grows exponentially. This is the core problem of maintaining diversity of exponentially growing populations—and exactly the same dynamics are the indispensable basis of natural selection. It is the requirement of both maintaining diversity and remaining selectable that makes the problem particularly difficult. As mentioned before, coexistence requires regulating factors which mitigate exponential competitive exclusion [ ] and thereby allow coexistence.
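The contrast between exponential competitive exclusion and the sub-exponential ("parabolic") growth law discussed below can be illustrated with the closed-form solutions of the two growth equations. This is a sketch; the function names and rate values are mine:

```python
import math

def exponential(x0, r, t):
    """Closed-form solution of dx/dt = r*x."""
    return x0 * math.exp(r * t)

def parabolic(x0, r, t):
    """Closed-form solution of dx/dt = r*sqrt(x): x(t) = (sqrt(x0) + r*t/2)**2."""
    return (math.sqrt(x0) + r * t / 2) ** 2

r1, r2 = 1.0, 0.9  # hypothetical Malthusian rates differing by 10%
for t in (1.0, 10.0, 100.0):
    exp_ratio = exponential(1.0, r1, t) / exponential(1.0, r2, t)
    par_ratio = parabolic(1.0, r1, t) / parabolic(1.0, r2, t)
    print(t, exp_ratio, par_ratio)
# The exponential ratio diverges (competitive exclusion), while the
# parabolic ratio approaches the finite limit (r1/r2)**2 (coexistence).
```

Even a small rate difference makes the exponential ratio diverge, whereas under parabolic growth the slower species retains a finite share, which is why self-inhibited replication acts as a regulating factor.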
Gause's CCCC principle ("Complete Competitors Cannot Coexist") can be rephrased using the concepts of modern ecology: the number of coexisting species cannot exceed the number of regulating factors in equilibrium. We may consider any factor a regulating factor if: (i) it affects the growth rate of a species and (ii) it is affected by the number (density) of the same (and possibly also some other) species. Any factor that is a regulating factor for at least one of the species in a given species pool increases the possible number of coexisting species and the robustness of their coexistence. Note that the identification of regulating factors is sometimes trivial (e.g., the number of limiting resources or self-inhibition) but often it is more difficult (e.g., spatial constraints, stochasticity, periodic solutions, etc.). Furthermore, the determination of coexisting species (and hence their number) may also be complicated (as is the case for replicating nucleotide sequences, where a complementary pair of strands counts as a single replicator instead of two [ ]). The presence of different regulating factors increases the chance of coexistence by relaxing competition. This is obvious in the case of the self-inhibition of replicators, for which the dynamics takes the following form: $\dot{x}(t) = r x^{1/2}(t)$; for details see [ ] and the section about parabolic replicators in this paper. Beyond the occurrence of additional regulating factors, the intrinsic variability of the dynamics can also act as a factor facilitating and maintaining diversity. Periodic or chaotic variation of densities in time generated by the dynamics itself (intrinsic fluctuations) can help to maintain diversity, the fluctuations themselves acting as regulating factors, see e.g., [ ].

2.2.
Ecological Stability

While regulating factors affect the growth rates and are affected by the densities, other external factors may also affect growth rates (mortality and fecundity) but remain unaffected themselves by the densities of replicators. Such factors are usually abiotic, such as temperature, pH or humidity. The robustness of a fixed set of coexistent replicator species (community) against changes in external factors is the key concept of ecological stability. If typical external perturbations can cause a system to collapse or a reduction in the number of coexisting species, the system is considered ecologically unstable. Assuming that the change in external factors occurs on a time scale shorter than that of evolutionary changes (the accumulation of mutations), ecological robustness applies to a fixed set of species, even if ecological and evolutionary time scales overlap in prebiotics (discussed in turn), due to the large mutation rates involved. Note that the variation of external factors is not necessarily detrimental to an established community; environmental variation can also act as a potential diversity-maintaining factor, as is well known in ecology and discussed in the previous section.

2.3. Evolutionary Stability

Because of the high mutation rates in prebiotic scenarios, there is no clear distinction between "ecology" and "evolution" in terms of time scale separation in the dynamics. The evolutionary stability of a system means the robustness of the resident species against any invading mutant. If there are mutants that corrupt the system or reduce the number of coexisting species (thus decreasing the sustainable amount of information), the system is considered evolutionarily unstable; for a detailed analysis, see [ ]. Even though this aspect is often disregarded, any candidate model of a prebiotic system must meet the evolutionary stability criterion, otherwise it seriously underestimates the potential effects of mutations.
Evolutionary stability against deleterious mutants is at least as important as ecological stability, precisely because of the overlapping time scales. An indispensable aspect of "forward" evolutionary stability is evolvability: the propensity of the system to adopt new replicator species (possibly originating as mutants of existing ones or supplied from outside) if they are of any advantage in terms of the collective fitness of the system (see the corresponding group selection arguments later, in Section 3.2.3 and Section 3.2.4). All the models discussed in this paper will be scrutinized also in this respect, by assessing the probability of the given system producing and incorporating beneficial mutants.

3. Models of Prebiotic Systems

In this section, we will scrutinize some of the most important and mainstream dynamical models of prebiotics with respect to their ecological and evolutionary stability properties. We aim to analyse the explanatory power and applicability of these models in the context of the three criteria explicated in the Introduction above. Specifically, our analysis includes the following models: Table 1 categorizes these model types on the basis of their temporal and structural resolution; Figure 1 provides a "genealogy" of the models.

3.1. Models Assuming no Structure

Models without spatial or compartmental structure can be easy to deal with, as there is no need to account for the corresponding spatial aspects of the dynamics, so that local differences in concentrations/amounts, limited ranges of interactions and localized physical processes (droplet formation, diffusion, bonding to surfaces, vesicle division, etc.) can be drastically simplified or even omitted. Mean-field simulations are usually easy to approximate analytically. On the other hand, the lack of any structure means that these models have a limited ability to maintain diversity.

3.1.1.
Hypercycle (HC)

The hypercycle was proposed by Eigen and Schuster [ ] as a solution to the error threshold [ ], a severe limit to the information content of primordial biological sequences. The replication of information-carrying macromolecules is prone to error [ ] and the error rate (mutation rate) was higher at the origin of life [ ], due to the lack of effective and high-fidelity replicase enzymes and proofreading mechanisms. A functional sequence is replicated but some of its progeny will be of a different—most probably non-functional—type due to mutations. The following equations describe a system of a replicating functional, wild-type sequence (its concentration denoted by $x_W$) and all of its possible mutants lumped together (their total concentration denoted by $x_m$):

$\dot{x}_W = x_W [Q A_W - \Phi]$

$\dot{x}_m = x_m [A_m - \Phi] + (1 - Q) A_W x_W$

where $Q$ is the probability of faithful replication of a sequence; $A_W$ and $A_m$ are the replication rates (Malthusian growth rates) of the wild-type sequence and its mutants, respectively; and $\Phi$ is the outflow term that keeps the total concentration constant. It is evident that in such a system the wild type will go extinct if $Q A_W < A_m$. Coexistence, i.e., the survival of the wild type, is only possible if $Q A_W > A_m$. Mutation rates are often expressed in units of mutation/nucleotide/replication ($\mu$) instead of replication fidelity. Given a sequence of length $L$, the fidelity of replication is $Q = (1 - \mu)^L \approx e^{-\mu L}$. We can then arrive at the inequality of the error threshold, setting an upper limit to reliably replicable sequence lengths:

$L < \frac{\ln(A_W / A_m)}{\mu}$

Assuming that the per-nucleotide mutation rate is 1% [ ] (which is a realistic assumption for replication unaided by efficient enzymes) and that the wild type has a better replication rate than any mutant, at least conforming to the $\ln(A_W / A_m) = 1$ relationship, we find that a wild-type sequence of length 100, but not more, can be stably maintained.
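The threshold arithmetic can be checked numerically. This is a sketch; the helper name is mine:

```python
import math

def max_sequence_length(mu, superiority):
    """Error-threshold bound: L < ln(A_W / A_m) / mu, with superiority = A_W / A_m."""
    return math.log(superiority) / mu

# 1% per-nucleotide mutation rate and ln(A_W/A_m) = 1 give a ~100-nucleotide limit.
print(max_sequence_length(0.01, math.e))       # ~100 nucleotides
# A tenfold increase in superiority barely helps, since the bound is logarithmic.
print(max_sequence_length(0.01, 10 * math.e))  # ~330 nucleotides
```

The second call previews the point made below: because the bound depends only logarithmically on the superiority of the wild type, raising its replication rate buys very little extra sequence length.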
Note that since the threshold expression (Equation (6)) is proportional to the logarithm of the ratio of the functional and the non-functional replication rates, increasing the replication rate of the wild type does not increase the length of the maintainable sequence too much. This result yields Eigen's Paradox [ ]: there is no accurate replicase without a large genome and there could be no large genome without an accurate replicase. Thus, the information that can be reliably replicated is less than the information necessary to code for the replicating machinery. This is a key dynamical problem to which the early evolution of life had to find a solution [ ]. The hypercycle was devised to overcome Eigen's Paradox. If a single sequence cannot maintain enough information, then the necessary amount of information needs to be replicated in several sequences. Information stored in short sequences can be replicated reliably, whereas the same amount of information in a single sequence may be far above the error threshold, assuming the same mutation rate. However, the different sequences will inevitably compete with each other and, given the limited number of resources (monomers) and the lack of other regulatory constraints, only one (or as many as there are different resources) of the sequences will survive. Thus, a mechanism is required to establish cooperation among the sequences so that none of them outcompetes the others. In the hypercycle, each replicator (sequence) catalyses the replication of another sequence in the set. Each replicator catalyses the replication of only one other replicator and receives catalytic aid from only one other replicator, the interactions thus occurring in a circular topology.
For example, in a three-membered hypercycle R1 catalyses the replication of R2; R2 catalyses the replication of R3; and R3 catalyses the replication of R1 and closes the hypercycle (see Figure 2). Formally, the concentration of a replicator in an $n$-membered hypercycle can be written as

$\dot{x}_i = x_i [Q (A_i + K_i x_{i-1}) - \Phi]$

where $K_i x_{i-1}$ is the catalytic aid received from the previous member in the hypercycle, $i = 1 \dots n$ ($x_0 \equiv x_n$); all other symbols are as above. We need to stress here that members of the hypercycle catalyse the formation of the next member but they themselves are not converted to the next member (i.e., reactions are second order, of the form R1 + R2 → R1 + 2R2). There is a lingering misconception in the literature [ ] which results in the cyclic (first-order) production of certain molecules being called a hypercycle, which it is not. How efficient is a hypercycle at integrating information, i.e., how many functional sequences could coexist in it? The higher the number of coexistent sequences, the more information the system maintains. If $A_i = 0$ for all $i$, i.e., the replicators cannot replicate on their own, only with the help of another catalyst, then the system is fully cooperative and all members coexist [ ]. This is the homogeneous hypercycle. Assuming that all catalytic rates are the same, the dynamics leads to a stable fixed point for two-, three- and four-membered hypercycles [ ]. Furthermore, if there are five or more members in the hypercycle, then the system approaches a stable limit cycle. Theoretically, any number of sequences can coexist but with high numbers of members some replicator concentrations may decrease to very low values during oscillations and with any one of the members lost the whole system collapses. Therefore, for $n > 4$ the system is unstable. In the inhomogeneous hypercycle ($A_i > 0$) the members are also in competition and if the $A_i$ values are too large compared to the $K_i$ values, then one or more of the sequences can be lost [ ].
Again, hypercycles with $n \le 3$ members converge to a stable fixed point [ ] and ones with five or more members exhibit oscillatory behaviour (stable limit cycles) [ ]. Hypercycles of six or more members can be unstable [ ]. Stability is further affected by differences in the catalytic aid members give to each other [ ]. In conclusion, we may say that the hypercycle can show rich dynamics [ ], although its ability to maintain the coexistence of even a moderate number of different replicators is limited. So far, we have not considered the quasispecies, i.e., the cloud of mutants generated around the wild-type sequences in a hypercyclic system. We can lump all mutants together and follow their concentration $x_m$ in a way similar to that of Equation (5): $\dot{x}_m = x_m \left[ A_m - \Phi \right] + ( 1 - Q ) \sum_{i=1}^{n} \left( A_i x_i + K_i x_i x_{i-1} \right)$. Analysing the dynamics of hypercycles and the mutants of the master sequences uncovered a new threshold [ ]. A replication fidelity lower than the error threshold does not allow for the maintenance of a single long molecule, but shorter sequences organized into a hypercycle can coexist with their mutants. There is a lower critical copying fidelity below which even the hypercyclic organization collapses, because the mutants overwhelm the system. Yet there is a range of copying fidelities in which a single long molecule cannot coexist with its mutants but the same amount of information arranged in a hypercycle can be maintained. Silvestre and Fontanari [ ] have cast some further doubt on the information integration capability of hypercycles. While they were able to show that even long hypercycles with $n = 12$ can be maintained, the copying fidelity puts an upper limit on the number of sequences ($n$) that can coexist. They find that if all replication rates are the same, this limit follows directly from the copying fidelity. Thus—they argue—chopping up the information into many smaller bits does not help.
On the other hand, differences in replication and catalytic rates can ensure that information in many pieces can be maintained whereas a long chromosome cannot [ ]. The hypercycle as an organization is capable of information integration. The question now is whether it is capable of evolving toward increased information content. Once a hypercycle is established, it is difficult to replace it with another hypercycle [ ]. The hypercycle as a whole system exhibits hyperbolic growth and entities initially having a higher population size have an advantage in such a growth regime [ ]. Even if we start from the same concentration, no coexistence of competing hypercycles is possible [ ]. Catalytic species sometimes also inhibit some reactions. The hypercycle was therefore also studied considering inhibitory/suppressive interactions. If there is strong suppression, then even-membered hypercycles cannot maintain all their species, whereas odd-membered hypercycles can. But even-membered cycles outcompete odd-membered cycles and thus the hypercycle generally breaks down under strong suppression [ ]. As a consequence, while half a dozen sequences can possibly coexist, the system cannot evolve to incorporate more members. The evolutionary potential of the hypercycle is thus severely limited. Niesert and co-workers [ ] and Maynard Smith [ ] pointed out a series of even more severe problems with the hypercycle that arise if mutations are allowed. There are two kinds of mutation that can destroy the system. One mutation turns a regular member into a selfish parasite, a sequence that accepts the catalytic aid given by a member but does not reciprocate (does not help the next member). If this parasite receives strong enough catalysis, then it can spread and channel away catalytic aid, leading to the collapse of the hypercycle (see Figure 3, left panel). A second class of mutation can alter the specificity of the aid given to other members of the hypercycle.
If a new mutant arises that helps the replication of a member of the hypercycle other than the next one in the cycle, then a shortcut forms (see Figure 3, right panel). Such a shortcut parasite reduces the hypercycle to one that consists of fewer members than the original (i.e., it reduces overall diversity). A shorter hypercycle sharing members with a longer hypercycle can spread at the expense of the longer one. This represents evolution toward shorter and shorter hypercycles. Information is lost with each loss of a member. So far, we have assumed that the replication rates of the wild-type sequences are higher than the replication rates of the mutants. If there is a mutant with a higher replication rate, then it could outcompete its slower master sequence. Functional sequences are usually long and their shorter mutants replicate faster, as demonstrated in Spiegelman’s experiment [ ], in which the Qβ phage genome was replicated without selection for function. The functional sequence was lost by the fourth serial transfer. The sequence population evolved to a mere fifth of the length of the original sequence but it was replicated 15 times faster [ ]. That is, mutations allowing faster replication for the quasispecies are all potentially deleterious to the “naked” (non-spatial) hypercycle. For a recent review on the hypercycle, see [ ].

3.1.2. Parabolic Replicators (PR)

The first problem that a prebiotic replicator community has to solve if it is to start up life is to avoid the competitive exclusion of its constituent replicators, i.e., to maintain a critical diversity of replicator species even in the face of the shortage of resources (for string replicators: nucleotides) that at some point inevitably occurs for any replicator population capable of exponential increase. The problem seems even more difficult to solve given the lack of other conceivable regulating factors (mainly due to the simple chemical kinetics of prebiotic systems).
As we have shown in the previous section, the hypercycle, the first attempted solution to the coexistence problem, may fail as a solution for more than a few reasons. In this section, we present another simple chemical kinetics of template-based replication for a special case in which Darwinian selection does not occur and the system ends up in a “coexistence of everyone” regime. Note that in the “standard”, resource-regulated dynamics of template replication with a detailed chemical kinetics (per-base elongation of sequences) a complementary pair of sequences counts as a single replicator, not the solitary strands. Consequently, four different nucleotides can maintain the coexistence of four complementary pairs of strands [ ]. This poses a strict limit on the diversity of the coexistent replicator community in the lack of other regulating factors. The simple kinetics of template-based replication is the following. Assume that replicator A reacts with resource R at rate $K$ and produces another replicator A that remains associated with the original (AA), and that there is an association–dissociation process between double and single strands (with rates $k$ and $k'$, respectively): $A + R \xrightarrow{K} AA, \quad A + A \xrightarrow{k} AA, \quad AA \xrightarrow{k'} A + A$. For this type of dynamics to occur the self-association of molecules is needed, which is possible e.g., in the case of palindromic sequences. An important and chemically plausible restriction is the order of the rate constant values: $k' \ll k$, i.e., association is much more probable than dissociation. Note that the result of the replication is the complex AA, which is inert to replication; thus this chemical machinery has an interesting feature: it is self-regulated—the higher the concentration of A, the stronger the self-regulating effect. The speed of replication is determined by the concentration of (dissociated) A as this can act as a template for the replication.
As von Kiedrowski first pointed out in 1986 [ ], chemically embodied artificial replicators (modified hexanucleotides) behave according to this type of kinetics (see [ ] for a detailed analysis of the dynamics and [ ] for an overview of artificial self-replicators). This type of self-replication substantially alters the dynamics of the system; replicator concentrations undergo parabolic rather than exponential growth (cf. Equation (1) and see e.g., [ ]): $\dot{x} = r x^p$, where $x$ is the total concentration of A and AA, $p = 1/2$, $r = \rho K \sqrt{k'/(2k)}$ and $\rho$ denotes the concentration of R. In almost all experimentally investigated systems $p \approx 1/2$ ($p = 1$ corresponds to “standard” exponential dynamics; the $0 < p < 1$ interval corresponds to parabolic growth in a broader sense). It can be easily seen (cf. [ ]) that this type of dynamics maintains the coexistence of an arbitrary number of replicators. By introducing the constraint of the total replicator concentration being 1, the dynamics of $N$ different types of replicators with Malthusian parameters $r_i$ takes the following form: $\dot{x}_i = r_i x_i^p - x_i \sum_{j=1}^{N} r_j x_j^p$. After a simple rearrangement we get $\dot{x}_i = x_i^p \left( r_i - x_i^{1-p} \sum_{j=1}^{N} r_j x_j^p \right) > x_i^p \left( r_i - N r_{max} x_i^{1-p} \right)$, where $r_{max}$ denotes the largest Malthusian parameter and we used the following inequality: $\sum_{j=1}^{N} r_j x_j^p < \sum_{j=1}^{N} r_j < N r_{max}$. Obviously, any replicator has a positive growth rate if its concentration drops below a critical threshold: $x_i < \left( \frac{1}{N} \frac{r_i}{r_{max}} \right)^{\frac{1}{1-p}}$; thus, at least theoretically, the advantage of rarity warrants the survival of everybody, whenever the replicators are in a competitive situation. This result was extended by Varga and Szathmáry [ ], showing that there is a single internal and globally stable rest point of the system of Equation (11). It is instructive to compare the solutions of the dynamics of exponential growth (Equation (1), or $p = 1$ in Equation (10)) and parabolic growth (Equation (10) with $p = 1/2$) for two replicators.
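That the mass-action scheme above indeed yields square-root growth can be checked numerically. The sketch below (our own illustrative rate constants, chosen so that $k' \ll k$) integrates the three reactions and compares the total template concentration against the quasi-steady-state prediction $x(t) \approx (\sqrt{x(0)} + rt/2)^2$ with $r = \rho K \sqrt{k'/(2k)}$:

```python
import math

def replicate(K=1.0, k=100.0, kprime=1.0, rho=1.0, a0=0.01,
              t_end=100.0, dt=1e-3):
    """Mass-action kinetics of template self-replication:
    A + R -> AA (rate K), A + A -> AA (rate k), AA -> A + A (rate k').
    The resource concentration rho is held constant; rate values are
    illustrative, with dissociation much slower than association."""
    a, c = a0, 0.0                 # [A] (single strands) and [AA] (duplexes)
    steps = int(t_end / dt)
    x_half = None
    for s in range(steps):
        da = -K * rho * a - 2.0 * k * a * a + 2.0 * kprime * c
        dc = K * rho * a + k * a * a - kprime * c
        a, c = a + dt * da, c + dt * dc
        if s == steps // 2 - 1:
            x_half = a + 2.0 * c   # total template concentration at t_end/2
    return x_half, a + 2.0 * c

x50, x100 = replicate()
r = 1.0 * 1.0 * math.sqrt(1.0 / (2.0 * 100.0))   # rho*K*sqrt(k'/(2k))
x100_pred = (math.sqrt(0.01) + r * 100.0 / 2.0) ** 2
growth_ratio = x100 / x50     # close to 4 (t^2 scaling), not exponential
```

In the long run $x(t) \sim t^2$, so doubling the elapsed time roughly quadruples the concentration, in contrast to exponential amplification.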
While in exponential growth the ratio of the concentrations of the two replicators is an exponential function of time (resulting in competitive exclusion), in the parabolic case the ratio is: $\frac{x_1(t)}{x_2(t)} = \frac{\left( \sqrt{x_1(0)} + r_1 t/2 \right)^2}{\left( \sqrt{x_2(0)} + r_2 t/2 \right)^2} \rightarrow \lim_{t \to \infty} \frac{x_1(t)}{x_2(t)} = \frac{r_1^2}{r_2^2}$, meaning that the ratio of the equilibrium concentrations depends on the ratio of the squared Malthusian parameters (this is where the name of parabolic replication comes from). Interestingly, this result is in line with Darwin’s statement on the geometrical increase of populations: “The Struggle for Existence amongst all organic beings throughout the world […] inevitably follows from their high geometrical power of increase …” [ ]. Note that for a large number of replicator types ($N \gg 1$) the equilibrium concentrations may be very low, so that stochastic drift can drive some replicator species extinct from the community even if their growth rates are positive. Despite this effect and the chemical constraint of the self-association of replicators, the system seems to solve the problem of the maintenance of a critical diversity, because it is capable of storing a large amount of information (cf. Section 3.1.1, the Eigen model and the information threshold). The beneficial ability of parabolic replication to maintain diversity is itself also its most serious fault: selection (in the Darwinian sense) is not possible in this type of system. Since better replicability does not mean competitive dominance, there is no room for evolution because selection is paralyzed. As we will see later, in this sense the parabolic replicator model is analogous to the model of replicator dynamics in chaotic flows. The analysis can be extended by treating the dynamics of single and double strands separately. In this case the selection-free regime exists only above a critical total concentration [ ].
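Both regimes of Equation (11) can be compared in a direct simulation (two replicators with illustrative Malthusian parameters $r = (1, 2)$; forward-Euler integration):

```python
import numpy as np

def compete(p, r=(1.0, 2.0), t_end=300.0, dt=0.01):
    """Constrained growth x_i' = r_i x_i^p - x_i * sum_j r_j x_j^p:
    p = 1 is exponential (Darwinian) competition, p = 1/2 is parabolic."""
    x = np.array([0.5, 0.5])
    r = np.array(r)
    for _ in range(int(t_end / dt)):
        g = r * x ** p
        x = x + dt * (g - x * g.sum())
    return x

x_par = compete(0.5)   # parabolic: coexistence at frequencies ~ r_i^2
x_exp = compete(1.0)   # exponential: the slower replicator is excluded
```

For $p = 1/2$ the frequencies settle at $r_i^2 / \sum_j r_j^2$ (here 1/5 and 4/5), in line with the limit above, whereas for $p = 1$ the slower replicator is competitively excluded.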
The explanation is straightforward: at low concentrations, single strands do not frequently pair up to form double strands; thus, self-inhibition is weaker than cross-inhibition. The selection-free regime switches to a selective one upon assuming the (naturally present) exponential decay of single strands [ ]. Exponential decay is the most conservative assumption for the decay of atoms and molecules, including replicators, with the number of decaying replicators assumed to be proportional to the number of replicators present. The behaviour of the system may change if both single- and double-strand forms can decay exponentially (even if the decay rate of the double strand is much smaller than that of the single strands), as shown in [ ] for two competing replicators. In this case the outcome of competition depends on the parameters, mainly on the influx of the resource and the decay rates of single and double strands. At high levels of resource influx, the replicator concentration is high and thus parabolic replication and coexistence remain possible, whereas below a critical level of influx, selection sets in and the superior replicator outcompetes all the others—the influx of nutrients acts as the control parameter of selectivity. In an attempt to extend the investigations beyond the spatially homogeneous case described by Equation (10), template-directed replication was assumed to occur on a surface [ ]. Double strands bind to the surface more strongly than single strands, which in terms of decay corresponds to the assumption that single strands have a higher decay rate. A semi-analytic investigation of the corresponding model shows that two parabolic replicators competing for their building blocks on a mineral surface are subject to Darwinian selection under a wide range of parameter values.
Differential adherence to the surface guarantees different decay rates, while the different influx of nutrients (the control parameter of the different selective regimes) is due to different rates of resource supply. Product inhibition leads to parabolic replication in non-enzymatic (artificial) replicator systems; the resulting parabolic amplification switches off selection and consequently cannot be the mechanism of evolutionary dynamics. A few investigations have revealed that Darwinian selection can still operate under rather specific circumstances (separate dynamics for single and double strands, exponential decomposition of strands and surface-bound template-based replication). Yet, in its present state, parabolic replication seems to be of limited relevance to prebiotic evolutionary research, precisely because of its narrow scope for evolvability.

3.2. Models Assuming Structure

These models either assume a strict spatial order (usually on a 2D lattice) or a vesicular grouping of replicators into compartments. Either way, local interactions dominate the system, often coupled with multiple levels of dynamics and/or selection.

3.2.1. Spatial Hypercycle (SHC) and Compartmentalized Hypercycle (CHC)

There could be a way out of the evolutionary problem for the hypercycle, especially from the problem posed by fast-replicating mutants. If the hypercycle is localized on a surface or into compartments, then higher-level selection can weed out the parasites. Boerlijst and Hogeweg were the first to analyse a spatial version of the hypercycle [ ]. The spatial version of the hypercycle alleviates the problem of the stability of large systems (consisting of more than four members), thus solving one of the problems of the non-spatial version. Furthermore, the spatial spiral patterns that form convey some resistance to parasites. A pure parasite that appears after the formation of the spirals is ousted to the outer edges of the spiral, where it decays.
Parasites can kill a spiral if they are introduced exactly at the middle of the spiral. Then neighbouring spirals take over the space and thus the parasite is destroyed or lingers in a kind of “cyst.” Inhibitory effects [ ] and a gradient in the decay rate of molecules [ ] can further fortify the spirals against parasites. The partial differential equation model of the same system exhibits less spiral formation and is prone to parasites that kill the spirals [ ]. Spatial arrangement and compartmentalization [ ] seem to solve the problem of a fast-replicating parasite. Shortcut mutants, however, still outcompete longer sequences [ ]. The short cycles that cannot form spirals are at a selective disadvantage compared to those that can form spirals and thus exclude parasites. So, a shortcut mutant can spread and then the system becomes prone to parasites. Evolutionarily, the spatial hypercycle is as limited as the non-spatial one: once established, no novel, disjunct hypercycle can invade the system [ ]. Based on the above and to put it bluntly: the hypercycle does not work; it is evolutionarily unstable. This is an important and rather old result that has never been circumvented. Thus, the hypercycle cannot solve the problem of prebiotic information integration. Despite its rich literature, it is time to put this model to rest.

3.2.2. Coexistence in Open Chaotic Flow (OCF)

A prebiotic habitat without spatial structure is generally considered to be a set of replicators mixed intensively in an aquatic medium. However, the mixing of liquids is rarely perfect: peculiar spatial structures often emerge because of nonlinear phenomena in hydrodynamics. Open chaotic flows—one specific realization of the huge branch of complex hydrodynamical phenomena—are particularly interesting from our point of view.
A flow is considered to be open if there is a continuous flux into and out of the observed region of the fluid medium and the recirculation time is much longer than the lifetime of the advected particles [ ]. If the flow is time-dependent but non-turbulent, then advected particles (replicators, in our case) move chaotically through this observed region following complex trajectories. The whole bundle of possible trajectories then forms a fractal set and particles move along this fractal for a long time before they escape from the observed parcel of liquid (Figure 4). How do populations of replicators living and multiplying in an open chaotic flow behave? The dynamics of autocatalytic replicators in two-dimensional open chaotic flows has been derived in [ ] as $\dot{N} = -\kappa N + c N^{-\beta}$, where $N$ is the number of replicators along the fractal filament and $\kappa$ is the average escape rate from the observed region. The second term is the production, with the coefficient $c$ proportional to the autocatalytic reaction rate defined above; $\beta = (D - 1)/(2 - D)$ depends purely on the fractal dimension $D$ of the filaments. Since $1 < D < 2$ in two-dimensional flows, $\beta > 0$ and thus reproduction becomes more and more effective as the number of replicators decreases. This advantage of rarity is the consequence of the fractal structure itself, which therefore acts as a catalyst in generating the peculiar nonlinear population dynamics. The dynamics leads to a stable stationary equilibrium of replicators along the fractal set at $N^* = (c/\kappa)^{1/(1+\beta)}$. It is precisely this non-standard dynamics of replicators that leads to the advantage of local rarity, balancing the concentration differences of different replicators competing for the same limiting resource and thereby allowing for their coexistence. To approach this problem formally, a two-dimensional flow is modelled around a cylinder. For a wide range of inflow velocities there is a periodic vortex detachment in the wake of the cylinder.
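The advantage of rarity encoded in the $N^{-\beta}$ term is easy to see numerically. The sketch below integrates the single-population dynamics above; $c$ lumps the production constants and, like all other values here, is an illustrative choice ($D = 1.5$ gives $\beta = 1$):

```python
def fractal_replication(n0, D=1.5, kappa=0.25, c=1.0, t_end=100.0, dt=0.001):
    """N' = -kappa*N + c*N^(-beta) with beta = (D-1)/(2-D): escape from the
    mixing region plus production boosted at low N by the fractal geometry.
    Returns the final population and the analytic equilibrium N*."""
    beta = (D - 1.0) / (2.0 - D)
    n = n0
    for _ in range(int(t_end / dt)):
        n += dt * (-kappa * n + c * n ** (-beta))
    return n, (c / kappa) ** (1.0 / (1.0 + beta))

n_low, n_star = fractal_replication(0.1)    # start rare
n_high, _ = fractal_replication(10.0)       # start abundant
```

Starting from either a rare or an abundant population, $N$ relaxes to the same equilibrium $N^* = (c/\kappa)^{1/(1+\beta)}$.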
The flow is then time-periodic and purely deterministic but particles move chaotically in the wake of the cylinder [ ]. The limiting resource flows constantly into the mixing region. Replicators were modelled as particles moving along the flow, decaying spontaneously and replicating if there are resource particles at sufficient density in the neighbourhood of a replicator. While competition for a single limiting resource leads to the survival of only the most effective replicator in a well-mixed habitat, simulations with the model have revealed that competitors coexist along the fractal set in the wake of the cylinder (Figure 5). The results obtained by numerical simulations are reinforced by analysing the dynamics of competing replicators in open chaotic flows by mathematical means. The analysis makes use of the fact that there is only stretching and folding along the fractal set providing habitats for the competitors, so they are arranged into more or less parallel stripes. In the simplest case with two competitors this leads to the dynamics $\dot{x}_i = -\kappa x_i - q ( D - 1 ) \bar{\nu} x^{-\beta-1} x_i + q p_i(x_1/x_2) \nu_i x^{-\beta}$, where $x = x_1 + x_2$, $q$ is a geometric constant, $\nu_i$ is the speed of the reaction front for replicator $i$ ($i = 1, 2$), $p_i$ is the probability that replicator $i$ is at the boundary of the habitat stripe and the resource, and $\bar{\nu}$ is the average reaction speed [ ]. Due to dimensionality and symmetry reasons $p_1(x_1/x_2) = \frac{(x_1/x_2)^\alpha}{\omega + (x_1/x_2)^\alpha}, \quad p_2(x_1/x_2) = 1 - p_1(x_1/x_2)$, where $\alpha$ and $\omega$ are positive constants. The positive fixed point of this system is stable if $0 < \alpha < 1$. For $\alpha = 1$, $\omega = 1$, which is the case if mixing is complete, there is no coexistence. Similarly, for $\alpha > 1$ the initially more abundant competitor wins (overdominance) [ ]. Analysis of the individual-based (IB) model of this system pointed out that the dynamics really follow Equation (15) and, whenever coexistence is observed, the inequality $0 < \alpha < 1$ holds, just as the analysis forecasts.
Competition rules can be defined in different ways in the IB model. Interestingly, competitors could not coexist when some rules were applied, but these rules always imply $\alpha > 1$, as expected. (For $\alpha = 1$ either species 1 or 2 wins the competition, depending on other parameters of the model.) Thus, IB models reveal that without knowing the detailed mechanism of competition we cannot determine the dynamical behaviour of the replicators in open chaotic flows [ ]. Moreover, although the analysis has so far been completed only for the two-species model, it is straightforward to extend it to $m$ species competing along the fractal set, with $p_i = \omega_i x_i^\alpha / \sum_{j=1}^{m} \omega_j x_j^\alpha$. Using the method presented in [ ] it can be shown that any number of replicators coexist if $0 < \alpha < 1$; see Section 3.1.2. That is, the dynamics are formally equivalent to those of parabolic replication, although the subexponential term in the replication dynamics follows from imperfect mixing along the fractal set and not from the self-inhibition of replicators [ ].

3.2.3. Trait Group Model and Kin Selection (TGM)

The requirement of maintaining an above-minimal level of information in a replicative system can be translated into the issue of slow replicators coexisting with faster ones. For structured populations, the first models dealing with coexistence originated from social ecology. There the problem translates to whether inferior replicators (slowly replicating but altruistic in terms of providing help to the group) can survive against selfish ones. In broader terms, the question is whether a useful replicator can successfully compete with its own mutants. Answering this question requires the introduction of multiple levels of selection.
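A sketch of the two-competitor dynamics (with $p_i$ as above, the total concentration $x = x_1 + x_2$, and $\beta = (D-1)/(2-D)$) illustrates both regimes; all parameter values are our own illustrative choices:

```python
import numpy as np

def flow_competition(x0, alpha, nu=(1.0, 0.8), omega=(1.0, 1.0),
                     kappa=0.1, q=1.0, D=1.5, nubar=1.0,
                     t_end=300.0, dt=0.002):
    """Two replicators along the fractal set:
    x_i' = -kappa x_i - q (D-1) nubar x^(-beta-1) x_i + q p_i nu_i x^(-beta)
    with p_i = omega_i x_i^alpha / sum_j omega_j x_j^alpha."""
    beta = (D - 1.0) / (2.0 - D)
    x = np.array(x0, dtype=float)
    nu, omega = np.array(nu), np.array(omega)
    for _ in range(int(t_end / dt)):
        tot = x.sum()
        w = omega * x ** alpha
        p = w / w.sum()
        x = x + dt * (-kappa * x
                      - q * (D - 1.0) * nubar * tot ** (-beta - 1.0) * x
                      + q * p * nu * tot ** (-beta))
    return x

x_co = flow_competition([0.1, 1.0], alpha=0.5)   # 0 < alpha < 1: coexistence
x_ex = flow_competition([0.1, 1.0], alpha=1.5)   # alpha > 1: abundant type wins
```

For $\alpha = 0.5$ both competitors persist and, in this parameterization, their ratio converges to $(\nu_1/\nu_2)^{1/(1-\alpha)}$; for $\alpha = 1.5$ the initially more abundant (though slower) competitor excludes the rare one, the overdominance case.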
Wilson constructed his trait group model (TGM, structured deme model) [ ] to demonstrate that group selection at the supraindividual level can lead to the coexistence of altruistic individuals of inferior fitness with more selfish, opportunistic ones. Altruism in this context means favouring others at the expense of the fitness of the altruist. Wilson based his group dynamics on the reproduction and dispersal of multicellular organisms. Individuals compete within local groups called “trait groups”, which are smaller than a deme but larger than a single individual. After a generation, groups disband; individuals are dispersed, mixed and ultimately form new trait groups. The model effectively separates global genetic mixing at the deme level from local ecological interactions at the trait-group level. Ecological dynamics in the dispersal phase are different from those in the competitive, non-dispersal phase. This distinction effectively imposes structure on the population, with a new, higher level of selection in effect at the deme level. Higher-level success is indirectly linked to within-group selection governed by individual growth and replication rates, as traits affect the group’s fitness. Small group sizes, low migration rates and the rapid removal of compartments infected with the selfish replicator all favour group selection [ ]. Wilson’s model (see Figure 6) explicitly assumes two types (two alleles of a gene) in the population. Wilson proves that “altruistic” individuals (those decreasing their own fitness in exchange for helping others) can coexist with selfish ones or increase their frequency if the variance among groups is higher than random, i.e., if the groups are not random samples of the population (see [ ]).
The slightest deviation from perfect genetic mixing and random reassortment can provide the required non-random distribution (for example, relatives tending to stay together); hence there is no need for compartmentation (physical separation) and Wilson’s model simplifies to any kin selection model fulfilling Hamilton’s rule [ ], which requires a greater-than-zero relatedness for an altruistic trait to increase in frequency. In Wilson’s model, individuals do not replicate within trait groups, they only undergo selection, though this scenario can be replaced with replication and selection to comply with prebiotic replicators. Individuals sharing an altruistic trait correspond to cooperative replicators and individuals lacking this trait are selfish ones. The results are in general invariant to variable trait-group sizes. In the general case, an all-defector population is stable against invasion by co-operators [ ]. If, however, the defective replicator is in some sense dependent on the cooperative one, there is a wider scope for stable coexistence and an all-defector population may allow the invasion of co-operators. In a hypercyclic example (modelling defective interfering viruses, DIV), there are two outcomes: either the cooperative replicator wins, if it helps itself more than it helps the defective one, or they coexist and the co-operator cannot disappear [ ]. Since a defector can only replicate by coupling with a co-operator (an assumption specific to the DIV model), any group consisting only of defectors perishes, ultimately increasing the overall frequency of co-operators in the population. Hence the all-defector group is evolutionarily unstable and any stochastic process generating co-operators (like mutation) would lead to their successful invasion, regardless of variable trait-group sizes. The trait group model directly relates to other models of the field.
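The role of non-random assortment can be illustrated with a minimal pairwise version of the trait-group idea (our own toy parameterization, not Wilson's original model): altruists pay a cost $c$ to give a benefit $b$ to their partner, groups are pairs, and $r$ measures the excess probability of like-with-like pairing over random reassortment, so that altruism spreads exactly when $rb > c$ (Hamilton's rule):

```python
def altruist_update(p, r, b=0.3, c=0.1):
    """One generation of a pairwise trait-group model: altruists pay cost c
    to give benefit b to their partner; r is the probability of assortative
    (like-with-like) pairing on top of random pairing, r = 0 being fully
    random reassortment. Payoff values are illustrative."""
    w_a = 1.0 - c + b * (r + (1.0 - r) * p)   # expected altruist fitness
    w_s = 1.0 + b * (1.0 - r) * p             # expected selfish fitness
    wbar = p * w_a + (1.0 - p) * w_s
    return p * w_a / wbar                     # frequency after global mixing

def iterate(p0, r, gens=400):
    p = p0
    for _ in range(gens):
        p = altruist_update(p, r)
    return p

p_random = iterate(0.5, r=0.0)    # random groups: r*b < c, altruists lost
p_assort = iterate(0.5, r=0.5)    # r*b > c: altruists spread to fixation
```

With random reassortment ($r = 0$) altruists are eliminated; with $r = 0.5 > c/b$ they fix, even though their within-group disadvantage is identical in both cases.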
Increasing the diffusion rate in the MCRS (see Section 3.2.5) leads to the TGM, with the replicative phase taking place locally (due to surface binding) while genetic mixing is intense (due to diffusion; [ ]). If compartments divide instead of mixing globally (and replicators replicate independently within the compartments), then the SCM emerges (see Section 3.2.4). The problem with the TGM is precisely what Maynard Smith recognized: a trait group, due to global mixing, does not form a true unit of selection but simply realizes kin selection (for example, by locally reproducing organisms forming a kin). To tap the advantage of true group selection, one must maintain group structure continuously, so that selection at the group level can act against groups of inferior composition. This is what the stochastic corrector model realizes.

3.2.4. Stochastic Corrector Model (SCM)

The SCM was designed [ ] to remedy both the trait group model’s fuzzy compartmentation and lack of a proper higher-level unit of selection and the hypercycle’s frailty against mutants [ ]. The model implies the following steps (see Figure 7): different replicator types proliferate within vesicles (compartments, possibly “simple cells”). When the internal replicator concentration reaches a limit, the cell splits in two due to naturally emerging physical constraints within its assumed lipid boundary. If there is an optimal composition of selfish and altruistic replicators (i.e., the compartment containing them has the highest fitness at the group level), then it can be proven that this optimal composition will always be present in the equilibrium distribution of various compositions. For this to occur, the following assumptions must be met: • Template replicators replicate independently within vesicles (they are not hypercyclically coupled), competing for shared resources (nucleotides, enzymes, space).
• Replicators contribute to a common good (e.g., metabolism) such that they affect the selection of the whole group; thus compartment fitness (group replication rate) depends on composition. • Replicators are essential: a group can only replicate if both replicator types are present. • The redistribution of molecules during fission is not biased toward any replicator type but is random for each molecule; hence they follow a hypergeometric distribution in the offspring. • Compartment size is relatively small and fission happens before equilibrium is reached in the cells. • Replicator migration (or other transposon effects) from one compartment to another is negligible. The internal dynamics for the two replicator types $x_1$ and $x_2$ is $\dot{x}_1 = a x_1 ( x_1 x_2 )^{1/4} - d x_1 - \frac{x_1 ( x_1 + x_2 )}{K}$, $\dot{x}_2 = b x_2 ( x_1 x_2 )^{1/4} - d x_2 - \frac{x_2 ( x_1 + x_2 )}{K}$, where $a$ and $b$ are replication rate constants and the last term ensures competition. Both types’ degradation rate is $d$ and the common carrying capacity $K$ ensures that the internal population of a cell cannot grow to infinity. The exponent 1/4 (in fact, any exponent smaller than 1/2) ensures that in the limit both replicators go extinct (without group structure, of course); thus it acts as a worst-case assumption. If vesicles were split only after the internal equilibrium had settled, either both types or one of them would be extinct by the time of division according to the dynamics, ultimately leading to the whole population losing information. Due to the stochastic nature of replication and degradation (“demographic stochasticity”), compartment fission and molecule reallocation, the optimal combination reappears and the distribution of probabilities of the different combinations can be calculated at group-level equilibrium [ ]. Since replicators provide aspecific help to the group via a shared metabolism (as in the MCRS, see Section 3.2.5) rather than direct help to other replicators (as in the DIV model), they do not form a hypercycle.
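The corrector mechanism can be sketched with a toy stochastic simulation (our own minimal parameterization: two replicator types, type 1 replicating faster within compartments, fission at size $2N$ with random reassortment of molecules, and group-level selection in that only compartments containing both types can grow):

```python
import random

def stochastic_corrector(M=100, N=10, a=1.2, b=1.0, events=30000, seed=1):
    """Toy stochastic corrector: a population of M compartments, each holding
    two replicator types. Only compartments with both types present can grow
    (group-level selection); within a compartment, type 1 is picked to
    replicate with probability a*n1/(a*n1 + b*n2), so a > b favours type 1.
    A compartment splits at size 2N; each molecule goes to a random daughter,
    and one daughter replaces a random compartment to keep M constant."""
    random.seed(seed)
    comps = [[5, 5] for _ in range(M)]        # start from a mixed composition
    for _ in range(events):
        viable = [i for i, (n1, n2) in enumerate(comps) if n1 > 0 and n2 > 0]
        i = random.choice(viable)             # only mixed compartments grow
        n1, n2 = comps[i]
        if random.random() < a * n1 / (a * n1 + b * n2):
            comps[i][0] += 1                  # within-group selection favours 1
        else:
            comps[i][1] += 1
        if sum(comps[i]) == 2 * N:            # fission with random reassortment
            d1, d2 = [0, 0], [0, 0]
            for t in (0, 1):
                for _ in range(comps[i][t]):
                    (d1 if random.random() < 0.5 else d2)[t] += 1
            comps[i] = d1
            comps[random.randrange(M)] = d2   # daughter replaces a random cell
    tot1 = sum(c[0] for c in comps)
    tot2 = sum(c[1] for c in comps)
    return tot1, tot2

tot1, tot2 = stochastic_corrector()
```

Despite the within-compartment advantage of type 1, compartments that lose type 2 stop growing and are weeded out, so group-level selection keeps both types in the metapopulation.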
Szathmáry and Demeter have applied the quasispecies model to the various compartments rather than to individual molecules (assuming a finite number of compartment types [ ]), emphasizing that in this case compartmentalized groups of replicators follow their own internal dynamics depending on their internal states. It can be shown that the internal dynamics and compartment splitting lead to a dominant equilibrium quasispecies [ ] in which all compartment types coexist (and thus no replicator is lost [ ]). This condition is always met, as the stochastic replication and reassortment of molecules ensure that each compartment composition can turn into any other [ ]. In conclusion, independently replicating replicator types functionally complement each other within a compartment. Consequently, the compartment with the optimal composition of replicators has the best fitness. Stochasticity in replication and in reallocation during fission generates the necessary variation on which natural selection at the compartment level can act [ ]. The SCM effectively realizes group selection that guarantees replicator coexistence. Zintzaras and his co-workers compared compartmentalized hypercycles (CHC) with the SCM [ ]. They found that both compartment systems can integrate information successfully, though the SCM is able to operate under higher deleterious mutation rates and settles at a lower equilibrium mutational load than the CHC, which, however, reaches better average fitness values. The important caveat here is that the compartments only contained two-membered hypercycles. Scalability (maintaining more replicator types) obviously favours the SCM, as larger hypercycles are prone to shortcut mutations. Whether the fusion of compartments increases diversity and stabilizes the system in general is not yet clear, though some theoretical results indicate positive effects [ ].
Hubai and Kun, under more realistic assumptions, concluded that ~100 different genes could have survived in a simple protocell [ ]. In vitro realizations prove that (transient) compartmentalization is effective in maintaining a functional diversity of replicators within vesicles [ ]. It is worth noting that the SCM was the first model to explicitly assume all three subsystems of cellular organization and thus life, as defined by the Chemoton model [ ]: it deals with the competition of different information-carrying templates, while assuming an unspecified metabolism and a boundary subsystem that encloses the composition. A stochastic implementation of the chemoton model (approximating the SCM) proves that two different competing template replicators can coexist in a protocell [ ]. The SCM also realizes multilevel selection properly; hence, it models the result of a major evolutionary transition in which competing elements of a lower level of selection are successfully integrated at a higher level [ ].
3.2.5. Metabolically Coupled Replicator System (MCRS)
The Concept of a Metabolically Coupled RNA World
All the RNA-based models of prebiotic evolution are built on the assumption—which, at the time of writing, remains empirically unproven—that the template replication of the first RNA replicators was possible in the RNA World [ ], even if it was slow and unreliable at the beginning. This assumption is indispensable because, for the evolutionary machinery to swing into action, populations of self-replicating entities are a necessary condition [ ]. Thus far we know of no non-enzymatic RNA-replicating mechanism capable of copying reasonably long strands of RNA, and it seems improbable that one will ever be found, so it is straightforward to postulate that RNA replication must have been enzymatic from the outset.
Under the most likely prebiotic conditions, which surely did not provide the efficient peptide-based biochemical machinery of recent cells, the necessary enzymatic help could not have come from anywhere else but within the RNA World itself: RNA-dependent RNA polymerase ribozymes (or groups of ribozymes) that ignited prebiotic replicator evolution must have existed. However, recently synthesized RNA-dependent RNA polymerase (replicase) ribozymes [ ] are highly complex and quite long (longer than the maximal length allowed by Eigen's paradox). This is not surprising, given that RNA polymerization is a highly complex catalytic process that includes the ligation of nucleotides and the binding of template and copy strands, as well as their separation at the end of the process. Since ligation does not require a long ribozyme sequence [ ], it is template binding and daughter-strand separation that necessitate the help of more complex and longer RNA polymerase ribozymes. Such a ribozyme complex has a very low chance of assembling in a short time, even from the huge random RNA population that we may assume to have been produced by abiotic reactions in places such as near hydrothermal vents at the bottom of the prebiotic ocean [ ]. However, considering the enormous amount of time at the disposal of prebiotic attempts to boot up RNA replication from a huge initial RNA pool with a fast turnover of random sequences, the assumption that some slow and inaccurate ribozyme-aided RNA replication mechanism appeared by chance at some point and took the first evolutionary steps towards life does not seem far-fetched. Such a self-replicating ribozyme-replicase complex has not yet been discovered experimentally, but neither have we spent eons of time looking for it in a practically infinite sequence pool.
Wu and Higgs [ ] suggest a simple model for a self-inducing evolutionary mechanism capable of producing replicase ribozymes with relatively high catalytic activity, starting from a very inefficient "primordial" replicase. With the ribozyme-based replicase function in place, another necessary condition of self-replication must be met: a continuous supply of activated nucleotide monomers, in spite of the rapidly increasing monomer consumption by the exponentially growing RNA replicase population. Given that abiotic monomer production must have been very slow (if it occurred at all) without enzymes under prebiotic circumstances [ ], we must assume that metabolism was also catalysed and that the necessary catalytic help for monomer production came from within the random pool of RNA sequences as well. Any sequence that happened to have some catalytic activity capable of speeding up, at least to some extent, a metabolic reaction of the reaction network producing the monomers offered an indirect selective edge to the replicase, which, therefore, "adopted" the new sequence by giving it a replication advantage in exchange for the metabolic one received. Keeping the metabolic machinery running requires all the ribozymes of the system that contribute to the metabolic reaction network with their enzymatic activities to remain robustly coexistent. This is not an easy task for different species of replicators competing for the same limiting resource (the monomer pool) that they depend on for their replication. The ecological principle of competitive exclusion (the Gause principle, see Section 2.1) permits the survival of just a single replicator species on a single limiting resource [ ], but a single ribozyme obviously cannot catalyse all the chemical reactions of even a simple metabolic network.
The mutual dependence of each metabolically indispensable replicator species on the presence of all the others may seem to alleviate the exclusion principle, but it is easy to show that even the mandatory cooperation of the replicators is not sufficient for that to happen if the system is well mixed, without any local inhomogeneity permitted. With complete spatial homogeneity (and/or global mass interaction of the replicators and the metabolites they use and produce) assumed, the replicator system follows the simple mean-field dynamics
$\dot{x}_i = r_i x_i M(\mathbf{x}) - \Phi_i(\mathbf{x})$
where $\mathbf{x} = (x_1, \ldots, x_s)$ is the population density vector for the $s$ different, metabolically essential replicator species with replication rate vector $\mathbf{r} = (r_1, \ldots, r_s)$; $M(\mathbf{x})$ is the monomer supply provided by metabolism at ribozyme densities $\mathbf{x}$; and $\Phi(\mathbf{x})$ is an outflow vector ensuring that the total density $\sum_{i=1}^{s} x_i$ of all the essential ribozymes remains constant. In accordance with the assumption regarding the metabolic role of essential ribozymes, the metabolic function $M(\mathbf{x})$ must take the value 0 if any one of the $x_i$ is zero. A simple realization of this constraint is a metabolic function proportional to the geometric mean of the replicator densities: $M(\mathbf{x}) = c (\prod_{i=1}^{s} x_i)^{1/s}$, where $c$ is a positive constant. Since the metabolic effect of the actual monomer supply is the same for all replicator species at any particular moment, their relative (instantaneous) growth rates are determined by their (constant) growth parameters alone, i.e., the metabolic function has no effect on the ordering of the growth rates at any time. Therefore, the replicator with the highest growth parameter excludes all the others, in agreement with the Gause principle. The dynamics of the system are radically different, however, if the highly unrealistic assumption of complete global homogeneity postulated in the mean-field model is relaxed.
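The competitive exclusion argument can be checked numerically. A minimal Euler-integration sketch of the mean-field dynamics with illustrative rate constants (the outflow is realized as the dilution flux that keeps the total density constant):

```python
# Mean-field dynamics: x_i' = r_i*x_i*M(x) - x_i*phi(x), with the metabolic
# function M(x) proportional to the geometric mean of the densities.
s, c, dt = 3, 1.0, 0.001
r = [1.0, 1.1, 1.3]                      # replication rate constants
x = [1.0, 1.0, 1.0]

def M(x):
    prod = 1.0
    for xi in x:
        prod *= xi
    return c * prod ** (1.0 / s)

for _ in range(100_000):
    m = M(x)
    growth = [r[i] * x[i] * m for i in range(s)]
    phi = sum(growth) / sum(x)           # outflow keeping total density fixed
    x = [max(x[i] + dt * (growth[i] - phi * x[i]), 0.0) for i in range(s)]

# the replicator with the highest r takes over while the others dwindle,
# even though M(x) -> 0 as any of them vanishes
print(x[2] > x[1] > x[0])
```

The run confirms the Gause principle for this system: the fastest replicator approaches the total density while its metabolically indispensable partners are driven towards extinction.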
The growing family of the Metabolically Coupled Replicator System (MCRS) models offers a chemically and ecologically feasible spatial mechanism for the robust maintenance of a metabolically active set of different ribozymes attached to mineral surfaces, assuming that
• the replicase function is given: any RNA sequence is capable of producing a copy of itself by template replication if it has a sufficient concentration of monomers at its disposal.
• all the members of the metabolic replicator set are indispensable in running a simple metabolic reaction network (metabolism) producing the monomers; if any one replicator type is missing from the set, monomers are not produced at all and the system goes extinct.
• the replicators are attached to a 2D mineral surface on which their horizontal mobility is limited; replicators leaving the surface are lost to the system (replicator "death").
• nutrient compounds (external initial substrates of the metabolic reaction network) are supplied from the third spatial dimension in excess.
• the metabolites (substrates and products of the reactions that the replicators catalyse) are also attached to the surface, on which they may diffuse to a certain distance before either being detached from the surface and lost, or used in a metabolic reaction or in replication ( Figure 8 ).
• the metabolic contribution to the probability of a certain replicator being copied depends on the local ribozyme composition of its metabolic neighbourhood (i.e., within the distance d defining the metabolic neighbourhood of the focal replicator); only metabolically complete neighbourhoods (which have at least one copy of each essential ribozyme) allow for replication.
• the metabolically active set of ribozyme replicators may have enzymatically inactive parasites, i.e., replicators which do not contribute to monomer production but use the monomers produced by the cooperating replicators for their own replication.
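The key local rule, replication only in metabolically complete neighbourhoods, can be sketched as a simple lattice check. Grid contents, type labels and the neighbourhood radius d below are illustrative assumptions, not the published parameterization:

```python
# Toy check of the MCRS replication rule: a site's occupant can only be
# copied if its metabolic neighbourhood contains at least one copy of every
# essential ribozyme type.

ESSENTIAL = {"A", "B", "C"}               # metabolically essential ribozymes

def neighbourhood(grid, i, j, d=2):
    n = len(grid)
    return {
        grid[(i + di) % n][(j + dj) % n]  # toroidal boundary conditions
        for di in range(-d, d + 1)
        for dj in range(-d, d + 1)
        if abs(di) + abs(dj) <= d         # von Neumann (diamond) distance
    }

def can_replicate(grid, i, j):
    # only metabolically complete neighbourhoods allow replication
    return ESSENTIAL <= neighbourhood(grid, i, j)

grid = [                                  # "." empty, "P" parasite
    ["A", "B", ".", "P"],
    ["C", ".", "A", "."],
    [".", "P", "B", "C"],
    ["A", ".", ".", "B"],
]
parasites = [["P"] * 4 for _ in range(4)]
print(can_replicate(grid, 1, 1), can_replicate(parasites, 1, 1))
```

A parasite (P) is copied only while it sits inside a complete neighbourhood maintained by the cooperators, which is why its local takeover is self-limiting.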
Replicator Diversity and Ecological Stability in MCRS Models
Unlike the mean-field version, the stochastic (lattice) implementation of the spatially explicit MCRS model keeps all the metabolic replicators coexistent and shows robust ecological stability ( Figure 9 ) [ ]. Limited mobility and localized interactions of the replicators and the metabolites allow local group selection to operate: parts of the community lacking any one of the metabolic replicators are doomed to local extinction, pre-empting the habitat for invasion from nearby, metabolically complete neighbourhoods. The system is also resistant to its parasites in the sense that, even though parasitic replicators can invade the metabolically cooperating ribozyme community, they cannot destroy the cooperation altogether, because the damage they inflict on the system remains local and ephemeral. Wherever parasites take over locally, they stop monomer production and thus, indirectly, commit suicide by disrupting their own monomer supply and starving themselves to death. This result is in line with that of Szostak et al. [ ], whose model predicts parasite invasion in a replicase ribozyme population but without the parasites destroying the system. The coexistence of a replicase ribozyme and its parasitic quasispecies in a multilevel selection regime has also been demonstrated in [ ]. Over more than a decade of its development, the MCRS model family has proven to be ecologically robust against many different changes in its basic assumptions and structure.
Introducing trade-off relationships between replicator traits such as replicability and ribozyme activity [ ], assuming variable system sizes and metabolic neighbourhood sizes [ ], allowing ribozyme promiscuity (i.e., parallel or alternative enzyme activities of the same replicator) [ ], explicitly assigning the replicase function to an additional replicator species [ ] or allowing for phenotype-genotype separation in the complementary strands of the replicators [ ] did not change the general conclusion regarding the viability and resilience of the system. Open chaotic flows (see Section 3.2.2) can offer an ideal cradle for the MCRS as well. Stretching and folding move different replicators close to each other along parallel filaments. Károlyi et al. [ ] studied the MCRS in open chaotic flows and found that metabolically coupled replicators can indeed coexist in such habitats and are also robustly resistant against their parasites ( Figure 10 ). Diffusion is omitted from the presented models. Since one has to consider only molecular diffusion in the case of chaotic advection, this simplification is adequate. Molecular diffusion washes away the fine fractal structure only below a critical length scale, while the dynamical equations do not change qualitatively [ ]. Since particles move chaotically only for a finite time in open chaotic flows, replicators will not be washed out of the chaotic fractal if the time scale of replication is shorter than the time scale of moving along the fractal set. However, this condition is easily satisfied, for example, in the wakes of oceanic islands, where particles may be trapped for months or even years [ ]. We have demonstrated that the coexistence of competing replicators is not a problem in open chaotic flows but evolvability is. Similarly to parabolic replication, there is no selection in this habitat, so we have to assume other regions providing intense turbulent mixing where exponential dynamics still maintain selection [ ].
That is, some problems of the formation of early replicator communities can be alleviated in open chaotic flows, but this habitat alone is not a nostrum for all challenges. On the empirical side, open chaotic flows frequently occur in oceans, for example around islands or at hydrothermal vents. Recently, it has been shown numerically and experimentally [ ] that chaotic advection indeed accelerates surface reaction kinetics in the porous mineral substrates characteristic of sites near hydrothermal vents. Besides the problem of maintaining sufficient amounts of information, the other main challenge prebiotic systems had to face was maintaining the critical concentrations needed for reactions to occur at sufficient speeds. Particles accumulate along the fractal set in open chaotic flows, so the physics of such habitats effectively solves the problem. We emphasize here that no fine-tuning of the model is needed for this effect: an open flow is chaotic within a wide range of flow speeds. Changing the speed or direction of the flow does not modify its main physical character.
Evolutionary Stability and Evolvability in MCRS Models
Phenetic mutations. Enzymatic control of RNA strand separation during the replication process guarantees that the metabolic replicator community does not follow parabolic growth kinetics (cf. Section 3.1.2) in the MCRS, i.e., the populations of all replicator species have the capacity to increase exponentially, and this is a prerequisite for their evolvability (or, more precisely, selectability). Recent versions of the MCRS allow for mutational changes in the structures of all metabolically active replicators, so that evolutionary shifts in replicator traits can be simulated and their effects on coexistence and on the stability of the system as a whole can be analysed. In earlier models, only phenotypic changes in the most important replicator functions—replicability, rate of decay and metabolic (ribozyme) activity—had been considered.
The phenetic models [ ] defined reasonable yet arbitrary trade-off relationships among the three critical traits, assuming, for example, that a mutant replicator that is easier to copy (i.e., features a higher replicability) than its template is less likely to be as good a catalyst (i.e., it has weaker metabolic enzyme activity) and is probably more exposed to environmental effects, leading to faster decay or loss from the surface (i.e., its decay rate is higher)—all for the very same and, in these phenetic models still implicit, structural reason: a less compact, looser secondary structure. It can be shown in simulations that—even with rather strict phenotypic trade-off constraints enforced between different aspects of replicator performance—it is possible to evolve metabolic replicator sets with nearly optimal values of all three traits if a small "wobbling" is permitted in the trade-off relationships [ ].
Genetic mutations. The necessary level of wobbling may indeed be provided by the thermodynamics of RNA folding, as demonstrated in the latest versions of the MCRS [ ], in which the purely phenotypic, sequence-implicit approach has been relaxed through the assignment of actual nucleotide sequences and the corresponding secondary structures (the latter calculated by the ViennaRNA algorithm [ ] on the basis of free energy minimization) to all the replicators present or appearing by mutation in the system ( Figure 11 ). The three critical traits of each sequence on the lattice are direct, explicit functions of its primary and secondary structural (i.e., sequence and folding) features. The MCRS mechanism imposes selection on the variations of the resulting phenotypes.
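The role of the "wobble" is easy to demonstrate with a toy hill-climbing model of the replicability-activity trade-off. The trade-off line act = 1 − repl, the mutation step sizes and the selection on the product of the two traits are all illustrative assumptions, not the published model:

```python
import random

random.seed(0)

# Toy hill-climbing on the replicability/activity trade-off: mutants are
# drawn from the trade-off line act = 1 - repl, plus a "wobble" of at
# most +/- eps around that line.

def mutate(repl, eps):
    r = min(max(repl + random.uniform(-0.05, 0.05), 0.0), 1.0)
    a = min(max((1.0 - r) + random.uniform(-eps, eps), 0.0), 1.0)
    return r, a

def evolve(eps, steps=20_000):
    repl, act = 0.5, 0.5
    for _ in range(steps):
        r, a = mutate(repl, eps)
        if r * a > repl * act:        # selection on joint performance
            repl, act = r, a
    return repl * act

strict = evolve(eps=0.0)              # pinned at the bound r*(1 - r) <= 0.25
wobble = evolve(eps=0.05)             # climbs above the strict bound
print(wobble > strict)
```

With a strict trade-off the joint performance can never exceed the bound set by the line, while a small wobble lets selection ratchet both traits towards jointly better values, analogous to the effect described above for the phenetic MCRS models.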
The most surprising feature of the dynamics of the extended system is its extreme propensity for robust replicator coexistence through the evolution of different metabolic functionalities (i.e., distinct ribozyme activity patterns) embodied in replicators of different sequences but highly similar population dynamical and enzyme kinetic properties. For example, simulating the sequence-explicit MCRS with three necessary metabolic ribozyme activities (blue, red and green in Figure 12 ) and a potentially infinite pool of different parasitic sequences (grey in Figure 12 ), starting from a random sequence distribution with different initial replicator lengths, converges to a stationary distribution with highly similar densities, lengths and enzymatic activities in the evolved set of distinct metabolic replicator species ( Figure 12 ). Previous, purely ecological (i.e., non-evolving) MCRS models [ ] have shown that the metabolically coupled spatial replicator system is robustly coexistent even if the dynamically relevant parameters of the different replicator species are fixed at quite different values. The new sequence-explicit, evolving MCRS model automatically adds another layer of robustness to the dynamics by converging the dynamically and functionally important traits of the metabolic replicator species to quite similar values, which, of course, makes it easier to keep them coexistent. The ensuing almost-even density and activity distribution of the metabolic replicators is also advantageous for the efficiency of metabolism, which runs best in metabolic neighbourhoods consisting of equal copy numbers of the different ribozymes, since the metabolic function $M(\mathbf{x})$ is proportional to the geometric mean of the copy numbers. Obviously, mutations produce metabolically useless parasites in large numbers, but these are quickly eliminated by the local regulation mechanism explained in the previous section.
The ecological stability and the parasite resistance of the MCRS are spectacular over a substantial range of its parameter space, but the number of metabolically essential replicator species sustainable by its simple diffuse group selection mechanism is always limited [ ]. That is, if the first steps towards life had been taken as assumed in the MCRS scenario, the "chromosomization" of the RNA molecules and the separation of genetic and enzymatic functions must have occurred at a relatively early stage of replicator evolution, because the number of ribozymes necessary to catalyse an ever more complex metabolic reaction network is well above the few that the early types of MCRS could have kept coexistent to form a stable replicator community. Chromosomization and genetic/phenetic role separation into complementary RNA strands are already being studied using other models compatible with the MCRS concept [ ]; the development of future MCRS models will also take that direction.
4. Discussion
The origin of life on Earth is one of the ancient enigmatic questions that humankind has always asked. Beyond the multitude of metaphysical and philosophical answers provided by different civilizations in our history, we have not, until quite recently, seriously attempted to answer the question of origins using scientific methodology. There are a few strong reasons for this conspicuous delay in the scientific response to such a fundamental and ancient challenge. Life as we know it is a unique phenomenon, confined to our planet according to our present knowledge. We have no "independent experiments" pertaining to the origins of different forms of life from different points of the Universe at our disposal for comparison. For essentially the same reason, it seems impossible to come up with a proper definition of what life is: any such attempt at a definition suffers from being built upon ad hoc assumptions.
Yet another reason is the complete lack of a fossil record that could channel our speculations on what actually happened almost four billion years ago in the prebiotic ocean. All we can count on is our conviction that the laws of physics and chemistry are time invariant and that, therefore, we may be able to invent prebiotic scenarios that are in agreement with those eternal laws, so that their feasibility can hopefully be verified or falsified experimentally in the laboratory. This conviction governs our search for prebiotic system models satisfying the three conditions of diversity maintenance, ecological stability and evolutionary stability, which can be applied to each model candidate in the following order. The main criterion that has to be met is the ability of the model to maintain diversity. A system compliant with the diversity criterion can be ecologically stable or unstable. Provided that the model is ecologically stable, the next relevant question pertains to its evolutionary stability. An ecologically stable system that is also evolvable and stable against its characteristic parasites, i.e., one that meets all three criteria, may be a hopeful candidate for representing a possible prebiotic scenario. With even one pillar missing (including evolvability, which is a prerequisite of evolutionary stability), the model cannot be considered the basis of a realistic scenario. The most promising such prebiotic evolutionary scenario is that of the RNA World [ ], which has many different incarnations as regards their assumptions about the actual physical-chemical habitats of the ancient RNA populations, as well as the abstract structures of the dynamical models these assumptions imply. The models we have analysed and compared are directly relevant to prebiotic replicator dynamics, with explicit or implicit reference to the RNA World, but many of them have obvious relevance at higher levels of recent biological organization as well.
Even the simplest of chemical systems capable of evolutionary change must have featured exponential population growth—a capacity that is inevitably constrained in a finite world sooner or later. Out of a number of different competing entities, all capable of exponential growth, only a subset will survive due to the effect of competitive exclusion. If the entities compete for a single resource (or, in general, a single regulating factor), then there can be just a single survivor type, and thus diversity cannot be maintained. This is the basic problem of prebiotic evolution (and, in fact, of ecology and evolution at any level of organization) that has to be solved for a diverse system to be viable. This condition is met one way or another in all the models studied. There is no definite answer to the question of what kind of diversity (and the corresponding quantity of information) would have been necessary for a replicator system to operate a prebiotic system of sufficient complexity. What we can do is estimate the genome size of what is called a "minimal cell" on the basis of a top-down analysis [ ], but the result is still in the order of hundreds of genes. Of course, this huge information content is sufficient to operate the entire core machinery of recent bacterial life, which is certainly much more complex than what might have been the starting point of chemical evolution. With no clues about the beginning available in recent forms of life, we are forced to rely on models of simple replicator interactions in finding feasible prebiotic scenarios. The problem was first addressed in Eigen's quasispecies model (see Section 3.1.1), and the first quantitative solution offered to the diversity problem therein was the hypercycle (HC), whose ability to maintain diversity may be considerable (apart from its dynamical stability issues).
The same diversity-maintaining capacity is inherited by the spatial (SHC) and the vesicular (CHC) versions of the hypercycle. The ability to preserve replicator diversity is limitless in both the parabolic replicator (PR) and the open chaotic flow (OCF) models. Even though these two models may seem extremely dissimilar in their assumptions (complementary strands replicate in an unstructured environment in one, autocatalytic replicators in an open chaotic flow in the other), they yield the same dynamics for essentially the same reason: the sub-linear dependence of replicator growth rates on replicator densities and the ensuing general advantage of rarity for all replicator types. The toy versions of the MCRS model have a capacity to maintain diversity on the order of ten different replicators [ ], but the chemically more explicit—and dynamically more stable—versions have not yet been studied from this aspect. The effect of mitigating competitive exclusion has been verified in the stochastic corrector model (SCM) for up to a hundred different replicators [ ]. Quickly replicating parasites and a high mutation rate can still put a more stringent limit on the maintenance of diversity in the SCM. Obviously, the results summarized above reflect the present state of the art for all the model types discussed, one of whose common assumptions is the omission of explicit replication chemistry: they consider the production of the daughter strand of a string replicator as a single-step reaction, disregarding the dynamics of nucleotide insertions. Implementing more detailed dynamics (considering changes in nucleotide supply, possibly for each monomer species, the production of unfinished strands, etc.) may have a profound effect on the ability of each model type, which in their implicit versions can maintain unlimited (PR and OCF) or nearly unlimited (HC, SHC and CHC) diversity. Such studies are yet to be carried out.
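The shared dynamical reason, sub-linear growth, is easy to verify numerically: with per-capita growth rate $r_i x_i^{p-1}$ and p < 1, rarity is always advantageous, so every type survives. A sketch with p = 1/2 and deliberately very different rate constants (all values illustrative, with a dilution flux keeping the total density constant):

```python
# Sub-linear ("parabolic") growth: x_i' = r_i*x_i**p - x_i*phi, p < 1.
p, dt = 0.5, 0.001
r = [1.0, 2.0, 4.0]                  # 4-fold spread in rate constants
x = [1.0, 1.0, 1.0]

for _ in range(200_000):
    flux = sum(ri * xi ** p for ri, xi in zip(r, x))
    phi = flux / sum(x)              # dilution keeping total density constant
    x = [xi + dt * (ri * xi ** p - phi * xi) for ri, xi in zip(r, x)]

# all three coexist; the equilibrium shares scale as r_i**(1/(1-p)) = r_i**2
print(all(xi > 0.05 for xi in x))
```

Contrast this with the exponential case (p = 1), where the same dilution term would leave only the replicator with the highest r: the advantage of rarity, not the similarity of the rate constants, is what preserves diversity here.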
We note here that the sequence-explicit version of the MCRS does not lose any of the toy versions' capacity to maintain diversity; on the contrary, it has a good chance of increasing it, but this has yet to be tested. In accordance with the intimate cross-dependences among the three dynamical features that we consider the main criteria for evaluating models of prebiotic evolution, each model must be scrutinized from the viewpoint of its potential to preserve diversity in the face of the environmental changes characteristic of the supposed habitat of the replicators. In this context, we must consider environmental changes affecting the growth rates of the replicators (for example, in the form of additive mortality). Lacking individual boundaries and homeostatic regulation, resistance to such external effects must have been of profound dynamical importance in prebiotic replicator systems. Even if the "ecological" and "evolutionary" timescales were convoluted at the time, it must have been a necessary condition for any such system to be persistent that the composition of its species pool was dynamically stable. The hypercycle (HC) model is an underperformer in this respect: hypercyclically connected loops of more than five species show wide fluctuations in replicator density even in response to small environmental perturbations. This means a high risk that one of the momentarily low-density members of the cycle shrinks below a critical level and disappears and—due to the collective autocatalytic coupling—that the whole system collapses as a consequence. External disturbances destroy the mesoscale spatial symmetry of the spatial hypercycle (SHC) model, but the corollaries with respect to the diversity of the system have not yet been studied.
The SCM has not been analysed rigorously in this regard either, but we can guess that random disturbances cannot have a strong deleterious effect on a system that is kept alive by random assortment in the first place. The wrapped hypercycle (CHC) model has not been investigated for disturbance resistance either, but we expect it to inherit the probably weak response of the SCM. The disturbance responses of the parabolic replicator (PR) model and the open chaotic flow (OCF) model are in all probability similar to each other because of the dynamical homology of these systems, but a formal analysis of the models in this respect is yet to be accomplished. The MCRS models have shown robust ecological stability against changes in the replicator degradation rates (which correspond to environmental perturbations), applying both sequence-length-dependent and -independent decay rates [ ]. An obvious prior condition for a system to be evolutionarily stable is that it is evolvable. This criterion is not met by the parabolic replicator (PR) and the open chaotic flow (OCF) models in their present form, for the same dynamical reason that makes them capable of maintaining any level of diversity. It is, however, worth mentioning here that these approaches can be extended in ways allowing Darwinian selection to operate on them for at least some of the time, thus potentially rendering them evolvable. Lacking detailed studies of this problem, we cannot claim more in this regard at the moment. The hypercycle (HC) model has severely limited evolvability: first, it is highly improbable for mutation events to increase the number of dynamically coupled members in a hypercyclic loop; more importantly, such a new hypercycle cannot increase in frequency, because the hyperbolic growth law governing its dynamics favours the old system, which has a higher initial density. Therefore, the capacity of the HC to increase the information content it replicates is weak.
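The frequency-dependence argument against hypercycle evolvability can be sketched numerically: under hyperbolic growth $\dot{x}_i = k_i x_i^2 - x_i \Phi$, the outcome of competition depends on initial densities, so a rare newcomer loses even with a higher rate constant (parameter values illustrative):

```python
# Competition under hyperbolic growth x_i' = k_i*x_i**2 - x_i*phi, with phi
# chosen as the dilution flux keeping the total density constant.
def compete(k, x, dt=1e-4, steps=200_000):
    for _ in range(steps):
        g = [ki * xi * xi for ki, xi in zip(k, x)]
        phi = sum(g) / sum(x)              # shared dilution flux
        x = [max(xi + dt * (gi - phi * xi), 0.0) for gi, xi in zip(g, x)]
    return x

# the newcomer hypercycle has a 50% higher rate constant but starts rare
resident, newcomer = compete(k=[1.0, 1.5], x=[1.0, 0.05])
print(resident > newcomer)
```

The invasion condition here is k_new·x_new > k_old·x_old, so a mutant hypercycle starting from a low density cannot spread, in contrast with exponential growth, where the higher rate constant always wins eventually.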
In addition, it is highly sensitive to the occurrence of parasitic mutants: both selfish and shortcut parasites may destroy the autocatalytic loop. Most of these problems occur in the spatial version (SHC) as well, except for the sensitivity to selfish parasites which it resists; the evolvability and the shortcut parasite resistance of the SHC model are just as bad as those of the non-spatial (HC) version. The stochastic corrector model (SCM) is evolvable and evolutionarily stable, with evolution acting on two levels: among replicators within the same compartment and among the compartments of different replicator composition. (Note that the corresponding group selection mechanism acts on higher organizational levels as well, to which the SCM may, therefore, be also applicable. At these higher levels, the compartment boundaries are usually naturally given, unlike in prebiotic SCM systems which assume the compartments being present and capable of coordinated fission, without explaining their origin.) The compartmentalized hypercycle (CHC) model is resistant to both kinds of its potential parasites due to the stochastic correction effect of group-level selection but it is just as limited in its evolvability as the HC and SHC models are. The MCRS model meets all the conditions of evolvability and parasite resistance: it can adopt (a limited number of) new metabolic replicator species as long as they contribute to a more efficient metabolism. The new metabolic replicators may originate as mutants of parasitic sequences, which comprise quasispecies around the existing metabolic ribozymes but cannot destroy the system, thanks to the parasite control through metabolic efficiency within the diffuse group structure of metabolic neighbourhoods. 
Notice that the group selection mechanism works in the MCRS without assuming compartments of unexplained origin and, in fact, it also offers a plausible (but so far not implemented) scenario for the evolution of membrane-producing ribozymes by parasite adoption. Table 2 summarizes these results. In summary, the present state of the art of prebiotic string-replicator models suggests that the three most promising directions for modelling prebiotic ecology and evolution seem to be: • the stochastic corrector model (SCM), as long as the origin and the maintenance of compartments coupled to replicator population growth are explained; • the parabolic replicator (PR) model and its dynamical homologue, the open chaotic flow (OCF) model, with the future addition of a scenario for the appearance of a selection regime; and • the metabolically coupled replicator system (MCRS) model, which meets all the criteria for maintaining diversity and being robust both in the ecological and the evolutionary sense but only for a limited number of replicators as yet. In our opinion, the MCRS scenario seems to be the one that is built on the most plausible set of assumptions and offers the best perspectives for further research on replicator evolvability. However, a few words of caution are due at the end of this survey. Even if we find a plausible, sufficiently detailed, ecologically and evolutionarily stable scenario for the origin of life, proving that chemical evolution had followed a blueprint resembling—at least in the most important respects—that scenario in creating life from non-life 3.8 billion years ago seems almost impossible, mainly because we lack fossil evidence. Even the laboratory verification of the feasibility of any specific scenario is a remote possibility for now, given that our present models considering chemical details such as the kinetics and thermodynamics of the reactions involved are still incapable of being empirically instructive. 
What we currently have at hand is but a stepping stone to future research aimed at the distant target of once re-creating life in the lab. The research was funded by the National Research, Development and Innovation Office (NKFIH) under OTKA grant numbers 112788 (I.Z.), 120799 (B.K.), 124438 (B.K., T.C. and A.S.), K119347 (Á.K. and A.S.) and GINOP-2.3.2-15-2016-00057 (B.K., T.C., Á.K., A.S., I.S. and I.Z.). This work was carried out as part of EU COST action CM1304 “Emergence and Evolution of Complex Chemical Systems”. Author Contributions: All authors contributed equally. Conflicts of Interest: The authors declare no conflict of interest.
Figure 1. Genealogy of prebiotic replicator models. The simplest possible model for replicator dynamics is exponential growth, which does not allow coexistence as the fittest always wins. Since it is an idealistic case, all sorts of extensions and deviations from the basic model are intended to make prebiotic systems more realistic and more permissive in terms of coexistence, ultimately crossing the barrier beyond which a sufficient amount of information can be stably maintained on the evolutionary timescale for cellular life to emerge.
Figure 2. A 3-membered hypercycle. Each member (R_i) of the hypercycle can catalyse its own replication (A_i) and the replication of the next member in the cycle (K_{i+1}).
Figure 3. Evolutionary instability in the hypercycle. (a) A parasite (R_M) that enjoys catalysis from a member of the hypercycle (R_2) but does not take part in the hypercycle organization. (b) A shortcut mutation (red dotted arrow) which changes the specificity of the catalysis offered by a member of the hypercycle (R_2) so that it catalyses the replication of a member it should not catalyse. R_1, R_2 and R_4 now form a 3-membered hypercycle, which can replicate faster than the 4-membered hypercycle.
Figure 4. The motion of particles in an open chaotic flow. The blinking vortex-sink system is used for demonstration.
It models the outflow from a large bath tub with two sinks that are opened in an alternating manner. Crosses denote the sinks. (a) Diverging trajectories of two particles that initially are close to each other; in this example, they even leave the bath through different sinks. (b) A snapshot of particles distributed along a fractal set (chaotic saddle) in the open chaotic flow generated by the blinking vortex-sink system. (Based on [
Figure 5. The distribution of two replicators, B (red) and C (blue), competing for the same resource material (white) in the wake of a cylinder. The flow is from left to right. The inset in (a) shows the time-dependence of the population numbers and clearly indicates the approach to a steady state of coexistence. A blow-up of the region indicated by a rectangle in (a) is shown in (b). B’s replication rate is 4/3 of C’s, while the decay rates are the same for the two species. Coexistence of 35 species is observed in other simulations. (Based on [
Figure 6. Wilson’s trait group (or structured deme) model. (a) Individuals with different traits, black and white dots, form localized trait groups. (b) After ecological interactions and selection, (c) survivors are released to form a pool, where they can reproduce. (d) New groups form. (After [
Figure 7. The stochastic corrector model. The two replicator types are indicated with filled and empty small circles. Arrows indicate transitions as individual compartments grow and divide. Compartments highlighted in green (after division) contain the optimal composition of replicators. (After [
Figure 8. Basics of the MCRS algorithm. (1) Metabolic support of the four replicators in the von Neumann neighbourhood of an empty site (black). Red, green and blue symbols denote different, metabolically active replicator species; the yellow symbol stands for a parasitic replicator.
(2) The replicator taking the empty site by the next generation is determined by a random draw, with the empty site having a constant claim to remain empty and the claim of each adjacent replicator depending on its own replicability and the metabolic support it receives from within its own 3 × 3 metabolic neighbourhood. (3) Each replication step is followed by replicator diffusion, implemented as elementary steps of the Toffoli-Margolus algorithm [ ] at random positions of the lattice.
Figure 9. Persistence of the MCRS at different sizes of the metabolic and the replication neighbourhood. Neighbourhood sizes are given as side lengths of a square-shaped section of the lattice that is centred on the focal site; the von Neumann neighbourhood is marked separately. The values inside the table cells specify the numbers of persistent/extinct systems out of five replicate simulations; grayscale values are system densities in percentages of sites occupied by replicators within the lattice after 10,000 generations. Panel (a): D = 0; Panel (b): D = 1; Panel (c): D = 4; and Panel (d): D = 100. (Based on [
Figure 10. Initial distribution of the replicators, a snapshot and the time dynamics of the metabolic network under chaotic advection by an open flow. (a) Replicators are placed into separate stripes initially. Different species are denoted by different colours. (b) A snapshot of the spatial distribution of replicators at t = 10 in units of the flow’s period. (c) The population size is shown as a function of time, measured in units of the flow’s period. The size of the metabolic neighbourhood was σ = 10 for each competitor and their spontaneous decay rate was δ = 0.02. The replication constants were different for each species: 3 (red), 4 (green) and 5 (orange). The claim of an empty site to remain empty was 2. (Based on [
Figure 11.
The 2D secondary RNA structure is determined from the primary structure (nucleotide sequence) using the thermodynamic condition that the folded molecule should have the smallest possible free conformation energy. The conformation calculations are executed by the ViennaRNA algorithm.
Figure 12. Trait convergence in the sequence-explicit version of the MCRS. (a) Relative replicator frequencies, (b) (metabolic) ribozyme activities and (c) replicator lengths at the stationary states of the simulations, after two million generations, as functions of the selection pressure against sequence length (“length penalty”). Open triangles in panel ( ) are the proportion of surviving systems out of 100 parallel simulations; red, green and blue dots represent the three different metabolically active replicator types, and grey dots represent all the parasitic (metabolically inactive) replicators. Persistent MCRS systems are efficiently selected for convergence in all the fitness-related traits of the replicators. (Based on [
Table 1. Categorization of dynamical models with respect to their temporal and structural resolution. For details of the models see the main text and references. Note that unstructured replicator models in discrete time are generally lacking, as fully (i.e., in both space and time) continuous models are much easier to handle analytically.

Structure/Time                                                      Discrete Time   Continuous Time
Without structure (only global interactions)                        -               QS, HC, PR
With structure (global and local interactions), compartmentalized   SCM             CHC, TGM
With structure (global and local interactions), spatial             MCRS            SHC, CM

Table 2. An assessment of each model in the context of the three main criteria and their “evolvability”, the scope for the adoption of mutant replicators with a useful function into the system.
HC
  Diversity-maintaining ability: an arbitrary number of sequences can coexist if there is no population stochasticity; otherwise some species can be lost.
  Ecological stability: the cooperative nature of the organization ensures that, given high enough catalytic aid, the system is stable.
  Evolutionary stability: selfish parasites and short-cut mutants can destroy the system.
  Evolvability: No.
SHC
  Diversity-maintaining ability: due to the importance of local interactions, the number of potentially coexisting sequences is limited.
  Ecological stability: the cooperative nature of the organization ensures that, given high enough catalytic aid, the system is stable.
  Evolutionary stability: the organization is stable against selfish parasites, but short-cut mutants could still take over; the system still cannot evolve new hypercycle members.
  Evolvability: No.
CHC
  Diversity-maintaining ability: the number of sequences is limited due to the random assortment into daughter cells.
  Ecological stability: the cooperative nature of the organization ensures that, given high enough catalytic aid, the system is stable.
  Evolutionary stability: group selection can probably maintain existing diversity, but the system still cannot evolve new hypercycle members.
  Evolvability: No.
PR
  Diversity-maintaining ability: an arbitrary number of sequences can coexist at arbitrarily small concentrations.
  Ecological stability: the continuous advantage of rarity of any replicator ensures coexistence at any external parameter combination.
  Evolutionary stability: non-Darwinian regime; no classical selection; any number of new replicators can invade the community without outcompeting others.
  Evolvability: No.
SCM
  Diversity-maintaining ability: N/A.
  Ecological stability: N/A.
  Evolutionary stability: stochastic replication/degradation, random assortment during fission and shared metabolism.
  Evolvability: Yes.
OCF
  Diversity-maintaining ability: an arbitrary number of sequences can coexist at arbitrarily small concentrations, but populations can be locally dense (the concentration at the boundary of the fractal can be very high).
  Ecological stability: the continuous advantage of rarity for any replicator ensures coexistence at any external parameter combination.
  Evolutionary stability: non-Darwinian regime; no classical selection; any number of new replicators can invade the community without outcompeting others.
  Evolvability: No.
MCRS
  Diversity-maintaining ability: a limited number of ribozyme replicators coexist in a highly robust system.
  Ecological stability: the advantage of rarity due to the mandatory metabolic cooperation of all replicator species maintains stability across the parameter space.
  Evolutionary stability: Darwinian selection for fitness homogeneity; dynamical trait convergence with functional, sequence-dependent replicator functionality; parasite resistance.
  Evolvability: Yes (parasite “adoption” for useful functions; no need for membrane compartments).
TGM
  Diversity-maintaining ability: N/A.
  Ecological stability: N/A.
  Evolutionary stability: small compartment size, low diffusion rate and rapid extinction of inferior compartments keep selfish replicators coupled to cooperative ones.
  Evolvability: Yes.
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Szilágyi, A.; Zachar, I.; Scheuring, I.; Kun, Á.; Könnyű, B.; Czárán, T. Ecology and Evolution in the RNA World: Dynamics and Stability of Prebiotic Replicator Systems. Life 2017, 7(4), 48. https://doi.org/10.3390/life7040048
Advice about renormalizations in physics Here I would like to share my disappointment with the current state of affairs in theoretical physics and ask your advice. Some readers know that I am not happy about renormalizations in physics, and I would like to specify a little what bothers me. A quite good description is already given in the Feynman lectures, Chapter 28, devoted to the electromagnetic mass of the electron. As you may know, H. Lorentz tried to implement a new force in the successful phenomenological equation of motion of the electron $$m\ddot{\mathbf{r}}={\mathbf{F}}_{ext}({\mathbf{r}}, \dot{{\mathbf{r}}},t)\qquad(1)$$ to take into account a small "radiation reaction" effect (for the sake of energy-momentum conservation). His reasoning was simple: as the electron was affected by electromagnetic forces, one should insert into Eq. (1), in addition to ${\mathbf{F}}_{ext}$, the entire electron-sourced field too. Calculations showed, however, that it was mainly a self-induction force preventing the charge from changing its steady state - a motion with a constant velocity $\mathbf{v}=const$: $${\mathbf{F}}_{self}=-m_{em} \ddot{\mathbf{r}}, \;m_{em}\to\infty.\qquad(2)$$ This expression had the dimension of force and it was strongly geometry- or model-dependent; it was very big for a small electron. Unfortunately, researchers of that time considered a quantum of charge classically - as a classical distribution of the charge density, as if the quantum of charge were built up from a collection of infinitesimal charges. H. Poincaré added cohesive forces to provide the distribution stability (Poincaré stresses, see the Feynman lecture), but the nature of these forces was completely unknown. I am sure some people were trying to work out this direction, but another "direction" has won.
The self-induction contribution was simply discarded as wrong and the jerk "remainder" was tried instead: $${\mathbf{F}}_{self}=\frac{2e^2}{3c^3}\ddot{\mathbf{v}}.\qquad(3)$$ Note that this expression also has the dimension of force, but with a different functional dependence in terms of the electron variables. In order to "justify" discarding the self-induction force, they invented the notion of a bare (negative) mass $m_0$ that had existed in the successful Eq. (1) "before" taking the self-action into account: $$m_0 \ddot{ \mathbf{r} }= \mathbf{F}_{ext} - m_{em}\ddot{ \mathbf{r} } + \frac{2e^2}{3c^3}\ddot{\mathbf{v}}.\qquad (4)$$ Since in (1) it was the very physical mass (a phenomenological coefficient taken, for example, from mass-spectroscopic data), I find this invention unconvincing, to say the least. Indeed, a negative mass makes the particle move to the left when the force pulls it to the right. We never observed such a silly phenomenon (like that of a stupid goat) and we never wrote the corresponding equations. We cannot pretend that (1) describes such a wrong particle in an external field and that adding its self-action makes the equation right. Still, this is a mainstream position today. I do not buy it. I only see discarding, which is neither a mathematical calculation nor a physical effect, but cheating. The crude self-action idea failed. But what about the remainder? Fortunately, the jerk remainder is wrong too. This remainder cannot be used as it gives runaway solutions. Not a small radiation reaction, but a rapid self-acceleration. This whole self-action business can figuratively be represented as connecting an amplifier output to its input. It creates a feedback. First the feedback is strongly negative – no reaction to an external signal is possible anymore. After “repairing” this self-action, we get a strong positive feedback. Now we have a self-amplification whatever the external signal value is.
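The runaway character of the jerk term in (3) is easiest to see in the force-free case: with $\mathbf{F}_{ext}=0$, the "repaired" equation reduces to $\dot a = a/\tau$ with $\tau = 2e^2/3mc^3$, so any nonzero seed acceleration grows exponentially. A minimal numerical sketch in Python (dimensionless units; the parameter values are illustrative, not physical):

```python
# Force-free Abraham-Lorentz equation: m*a = m*tau*(da/dt)  =>  da/dt = a/tau.
# Any nonzero seed acceleration self-amplifies: the runaway solution.
tau = 0.1      # radiation-reaction time scale (illustrative value)
dt = 1e-4      # integration step
a = 1e-6       # tiny initial acceleration, e.g. numerical noise
v = 0.0
for _ in range(10000):          # integrate up to t = 1 = 10*tau
    v += a * dt                 # velocity grows along with the acceleration
    a += (a / tau) * dt         # forward-Euler step of da/dt = a/tau
print(a)                        # ~1e-6 * e^10, i.e. amplified by ~20000x
```

Reducing the step size only sharpens the result: the exponential blow-up is a property of the equation, not of the integrator.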
No good either, although (3) is formally used in a "proof" of energy-momentum conservation (on average). (Here I disagree with R. Feynman who thinks the jerk is OK.) One more attempt to get out from this impasse consisted in considering the jerk term contribution "perturbatively": H. Lorentz and others after him started to replace it with the external force time derivative: $$\ddot{\mathbf{v}}\to\frac{2e^2}{3m_e c^3}\dot{\mathbf{F}}_{ext}(\mathbf{r},\dot{\mathbf{r}},t).\qquad(6)$$ It worked for an oscillator and was a great triumph of Lorentz's construction (remember the natural width of a spectral line). But here again, I notice cheating once more: in the true iterative procedure we obtain a given function of time $\dot{\mathbf{F}}_{ext}\left(\mathbf{r}^{(0)}(t),\mathbf{v}^{(0)}(t),t\right)$ on the right-hand side rather than a term (6) expressed via unknown dynamical variables. For example, in an oscillator equation (used by Lorentz) $$\ddot{y}+ \omega^2 y= \frac{2e^2}{3mc^3}\dddot{y}\qquad (7)$$ the first perturbative term $\dot{F}_{ext}^{(0)}(t)\propto \dot{y}^{(0)}(t)$ is a known external periodic (resonance!) driving force, whereas the replacement term $\dot{F}_{ext}\propto \dot{y}$ is an unknown damping force (kind of a friction): $$\ddot{\tilde{y}}+ \gamma\,\dot{\tilde{y}}+ \omega^2 \tilde{y}= 0,\quad \gamma=\frac{2e^2\omega^2}{3mc^3}.\qquad (8)$$ A perturbative solution to (7), $y\approx y^{(0)} + y^{(1)}$ (the red line in Fig. 1), is different from the damped oscillator solution $\tilde y$ (the blue line in Fig. 2). The solution to the damped oscillator equation (8) is nonlinear in $\gamma$, nonlinear in a quite definite manner. It is not a self-action, but an interaction with something else.
This difference in equations is qualitative (conceptual) and it is quantitatively important in case of a strong radiation reaction force (like in quark-gluon interactions) and/or when $t\to\infty$ (I used in this example $y^{(0)}=\sin\omega t$ with $\omega=10$ and $\gamma=0.3$). I conclude therefore that a damped oscillator equation (8) is not a perturbative version of (7), but is another guesswork result tried and left finally in practice because of its physically more reasonable (although still approximate) behavior. (I guess H. Lorentz intuitively wanted a "friction" force rather than a resonance driving force, so he "opted" for Eq. (8)). Similarly, expression (6) is not a perturbative version of (3), but another (imperceptible) equation replacement (see F. Rohrlich's last paper, page 10, reference [3]). The term $\frac{2e^2}{3m_e c^3}\dot{\mathbf{F}}_{ext}(\mathbf{r},\dot{\mathbf{r}},t)$ is a third functional dependence tried for description of the radiation reaction force. I am sure there may be more. I think the radiation reaction force should in reality be expressed via some field variables, but it is another story. Hence, researchers have been trying to derive equations describing the radiation reaction force correctly, but they've failed. For practical (engineering) purposes they constructed (found by trying different functions) approximate equations like (8) that do not provide the exact energy-momentum conservation and do not follow from "principles" (no Lagrangian, no Noether theorem, etc.). We may not represent it as a continuous implementation of principles because it isn't so. Guessing equations, of course, is not forbidden, on the contrary, but this story shows how far away we have gone from the original idea of self-action. 
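The qualitative difference between (7) and (8) can be checked without plotting anything. Under the standard textbook approximations, the first-order perturbative solution of (7) carries a secular term, $y \approx (1-\gamma t/2)\sin\omega t$ (the resonant driving $-\gamma\omega\cos\omega t$ produces a linearly growing envelope), while the weak-damping solution of (8) decays as $e^{-\gamma t/2}$. A short Python sketch with the values used above ($\omega = 10$, $\gamma = 0.3$); the closed-form envelopes here are my own assumption from standard resonance and damping formulas, not taken from the original figures:

```python
import math

omega, gamma = 10.0, 0.3   # values quoted in the text's example

def y_perturbative(t):
    # First-order perturbative solution of (7): the resonant driving term
    # -gamma*omega*cos(omega*t) yields the secular envelope (1 - gamma*t/2).
    return (1.0 - 0.5 * gamma * t) * math.sin(omega * t)

def y_damped(t):
    # Weak-damping solution of (8): exponentially decaying envelope.
    return math.exp(-0.5 * gamma * t) * math.sin(omega * t)

t = 20.0
env_pert = abs(1.0 - 0.5 * gamma * t)   # envelope of the perturbative solution
env_damp = math.exp(-0.5 * gamma * t)   # envelope of the damped solution
print(env_pert, env_damp)               # ~2.0 (still growing) vs ~0.050 (shrinking)
```

At $t=20$ the perturbative envelope has grown past its initial amplitude while the damped one has shrunk by a factor of $e^{-3}$, which is exactly the divergence of the red and blue curves described above.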
It would not be such a harmful route if the smart mainstream guys did not raise every step of this zigzag guesswork into "the guiding principles" - relativistic and gauge invariance, as well as renormalizability restricting, according to the mainstream opinion, the form of interaction to $j\cdot A$. Nowadays too few researchers see these steps as a severe lack of basic understanding of what is going on. On the contrary, the mainstream ideology consists in dealing with the same wrong self-action mechanism patched with the same discarding prescription ("renormalization"), etc., but accompanied also with anthems to these "guiding principles" and to their inventors. I do not buy it. I understand people's desire to look smart - they grasped principles of Nature, but they look silly to me instead. Relativistic and gauge invariance, as equation properties, are borrowed from inexact CED equations, hence as "principles" they may not guarantee the correctness of theory completion. Relativistic and gauge invariance (equation properties) must be preserved, nobody argues, but making them "guiding principles" only leads to catastrophes, so (6) is not a triumph of "principles", but a lucky result of our difficult guesswork done against the misguiding principles. Principles do not think for us researchers. Thinking is our duty. Factually we need in (1) a small force like that in (6) or so, but our derivation gives (2). What we then do is a lumbering justification of replacements of automatically obtained bad functions with creatively constructed better ones. Although equations like (6) work satisfactorily in some range of forces, the lack of a mechanical equation with an exact radiation reaction force in CED shows that we have not reached our goal and those principles have let us physicists down. That is why we stick to renormalization as a last resort.
Those who think the "derivation" above is mathematically correct should not forget the forgotten cohesive forces, which have their own self-induction and radiation reaction contributions too. Deliberately excluding them makes the "derivations" above even more doubtful. Those who still believe in bare particles and their interactions, “discovered” by clever and insightful theorists despite bare stuff being non-observable, believe in miracles. One of the miracles (rubbish, to be exact) is the famous “absorption” of wrong corrections by wrong (bare) constants in the right theory (i.e., the constants themselves absorb corrections as a physical effect, without human intervention). I admit renormalization may sometimes "work", but I do not see any physics in it; on the contrary, I see a wrong physics here. I believe we may and must reformulate our theories. I was trying to explain the luck of the renormalization prescription and the usefulness of reformulation elsewhere, without success though. Hence, my question is: am I not fooling myself? OK, while I first looked at this question with a certain sympathy, the last paragraphs inserted now look a bit too much like an unjustified tirade against modern physics as a whole to me. None of the methods and principles mentioned negatively have been experimentally refuted; on the contrary, they are successful. The physics is still somewhat interesting, but the tone has become a bit too polemical... @Dilaton: The problems in constructing good equations started long, long ago and they were widely discussed during the last century. Experimental data are one thing, a theory is another. There may be many theories describing the same data to the same precision because any theory is an approximation. You imply that the theory is unique as soon as it agrees with data to some precision. It is not a serious claim. I hope the question at the end of your posting is genuine, and not just rhetorical.
I think you misunderstand what happens in renormalization. The goal there is not to arbitrarily and miraculously add or drop terms in a bare theory to make it work. Rather the goal is to describe a theory that is consistent at a certain relaxed level of mathematical rigor and has certain desirable features (such as locality in QFT). What looks like a mathematically ill-conceived ``derivation'' is in fact just a sloppy description of a perfectly admissible limiting construction. The theory of interest is represented as a limit of other theories that have additional parameters (bare parameters and cutoffs) and behave well mathematically, but that do not have the wanted property, which appears only in the limit. Only the limiting theory and its parameters are physically relevant, while the theories and parameters used in the approximation process are just mathematical crutches necessary to define the limiting theory. (This is like a Taylor approximation to the exponential. At finite order $n$, the resulting cutoff-exponential doesn't satisfy the differential equation defining the physical meaning; only the limit with cutoff $n\to\infty$ has a physical meaning.) Specifically, in theories involving mass renormalization, one represents the desired theory as a limit of a family of auxiliary theories involving a bare mass $m_0$ and a cutoff $\Lambda$. (In general, there are other coupling constants serving as bare parameters.) The cutoff makes the theory nonlocal, an unphysical feature. This is compensated by treating the bare mass $m_0$ also as unphysical. (``bare'' just translates to ``unphysical''.) The auxiliary theories are mathematically well-defined, and one can derive from them equations for observables that resemble the physical observables. However, due to the unphysical starting point, the physical content of the resulting equations is somewhat implicit. 
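The Taylor analogy above can be made concrete: the order-$n$ polynomial plays the role of a cutoff theory that violates the defining property $y'=y$ of the exponential, and the violation disappears only in the limit $n\to\infty$. A small Python check:

```python
import math

def exp_cutoff(x, n):
    # Order-n Taylor polynomial of exp(x): the "cutoff" approximation.
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x, n = 1.0, 5
# The exact derivative of the order-n polynomial is the order-(n-1) polynomial,
# so the "equation of motion" y' = y is violated by exactly the dropped top term:
violation = abs(exp_cutoff(x, n - 1) - exp_cutoff(x, n))   # = x**n / n! here
print(violation)                              # 1/120: finite cutoff breaks y' = y
print(abs(math.exp(x) - exp_cutoff(x, 30)))   # negligible: the limit restores it
```

As with the field-theoretic cutoff, no finite-$n$ polynomial has the physically defining property; only the limiting object does.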
In particular, constants have a physical meaning only if they are expressible in terms of intrinsic properties of dynamical observables (as frequencies, zeros, poles, residues, etc.). Such physical constants when computed at a particular cutoff $\Lambda$ are usually complicated functions $f(\Lambda,m_0)$ of the cutoff and the bare mass. If an expression defining a physical constant is evaluated at fixed bare mass and different cutoffs, it varies rapidly with the cutoff, which is an unphysical situation. However, if one takes an appropriate cutoff-dependent value $m_0(\Lambda)$ for the bare mass, one can keep $f(\Lambda,m_0(\Lambda))=f_{phys}$ constant, thus creating a physically realistic scenario. Assuming that $f$ is chosen to represent the mass, the physical mass in the cutoff theory is the one defined by $f(\Lambda,m_0(\Lambda))=m_{phys}$, not $m_0(\Lambda)$, which still is only a coefficient without a physical interpretation. It turns out that for renormalizable theories a small, finite number of such renormalization conditions are sufficient to determine a set of physical constants and of cutoff-dependent bare parameters (``running'' masses and other coupling constants) such that in terms of these physical constants, the field equations have (at least in an appropriate approximation) a good limit when the cutoff is removed, i.e., the limit $\Lambda\to\infty$ is taken. This is the necessary and sufficient condition for a renormalization scheme to ``work''. As regards QED, there are treatments that do not introduce any nonphysical terminology: The formulation of QED in Scharf's book ``Quantum Electrodynamics'' introduces nowhere bare particles, bare constants, or infinities. Scharf's book is mathematically rigorous throughout. He nowhere uses mathematically ill-defined formulas, but works with microlocal conditions appropriate to the behavior of the Green's functions.
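The renormalization condition described above can be mimicked with a deliberately simple toy model. Suppose the observable mass in the cutoff theory were $f(\Lambda, m_0) = m_0 + c\ln\Lambda$ (a made-up log-divergent "self-energy"; the function and constants are illustrative, not derived from QED). Choosing the running bare mass $m_0(\Lambda) = m_{phys} - c\ln\Lambda$ keeps the physical mass fixed while the bare mass runs off without bound:

```python
import math

c, m_phys = 0.5, 1.0          # toy coupling and target physical mass (illustrative)

def f(cutoff, m0):
    # Toy observable mass: bare mass plus a log-divergent correction.
    return m0 + c * math.log(cutoff)

def m0_running(cutoff):
    # Renormalization condition: pick m0 so that f stays pinned at m_phys.
    return m_phys - c * math.log(cutoff)

for cutoff in (1e2, 1e6, 1e12):
    # The physical mass is cutoff-independent even as m0 diverges to -infinity.
    print(cutoff, m0_running(cutoff), f(cutoff, m0_running(cutoff)))
```

The point of the toy is only the bookkeeping: the divergent, cutoff-dependent quantity is $m_0(\Lambda)$, which never appears in a physical prediction, while $f$ is held at its measured value at every cutoff.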
These enable him to solve recursively mathematically well-defined equations for the S-matrix by a formal power series ansatz, which is sufficient to obtain the traditional results. Your post makes an essential error when you say that ''in (1) it was the very physical mass''. If it were physical, one could deduce from the equation a way to determine it in terms of dynamical observables, but one cannot. Thus even in (1), $m$ is just a bare parameter without physical meaning. Your first comment shows that you do not understand what Scharf is doing. See http://www.physicsoverflow.org/20325/ As to Eq. (1), if one removes the external force (which is physically possible since it is external), one ends up with an unphysical bare particle (since without self-field), which shows that $m$ is a bare mass only. But also when one includes a self-field, one cannot interpret $m$ as a physical mass. The latter would have to be tested by the influence the particle has on an external measuring apparatus for measuring the mass, and doing the calculation would most likely show that the physical mass is something different from $m$. I haven't done the calculation, but the physical mass would be deduced from the lowest order of a gravitational form factor, and it is extremely unlikely that this should just be the exact coupling parameter. If you'd like to substantiate your claim that $m$ is the physical mass, you'd have to give proof by showing why a measurement gives this value. But you are immune to proof (just look at the amount of discussion generated in vain), so I discontinue this discussion. @VladimirKalitvianski Feynman died 24 years ago. Feynman didn't appreciate string theory either. Even Weinberg was uncomfortable with it. Stop using the authority of dead people to support your claims. By the way, I'm sure that Feynman wouldn't approve of "reformulation not renormalisation!"
either : ) From now on, I reply to your comments only if they contain no derisive remarks directed towards the mainstream approach. The remark about doctoring numbers is not mine, but Dirac's. Okay, my questions remain unanswered. This question is about the renormalization procedure applied to classical electrodynamics. In classical electrodynamics, renormalization is perhaps surprisingly more difficult and less consistent than in quantum electrodynamics. The main difficulty is that the classical field contribution to the mass of an electron cut off to radius R goes as $e^2/R$, which is linearly divergent. The classical field-mass becomes equal to the mass of the electron when R is in magnitude equal to the classical electron radius $e^2/m_e$ (in units where $c=1$). Making the classical electron smaller than this leads to a negative classical bare mass, and the unphysical bare pointlike electron limit produces negative-mass inconsistencies as a by-product. The basic inconsistency is that a negative-mass bare classical electron can accelerate to very fast velocities, making a larger and larger negative energy, at the same time radiating energy in the electromagnetic field, keeping the total energy fixed. These are the self-accelerating, exponentially blowing up solutions which come from naively integrating the equation of motion with third derivatives and no special constraints on the motion. Dirac's attempted solution to this was to reject the self-accelerating solutions by a teleological constraint: you demand that the solution to a third-order equation be well behaved asymptotically. This more or less produces physical behavior at ordinary time and distance scales.
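Dirac's teleological constraint can be written down explicitly for a step force. For $m(a - \tau\dot a) = F_0\,\theta(t)$, the unique solution that stays bounded as $t\to\infty$ is $a(t) = (F_0/m)e^{t/\tau}$ for $t<0$ and $a = F_0/m$ afterwards: the electron begins accelerating before the force arrives, with an acausal tail decaying on the scale $\tau$. A sketch in Python, with units chosen so that $\tau$, $m$ and $F_0$ are all 1:

```python
import math

tau, F0, m = 1.0, 1.0, 1.0     # units chosen so all scales are 1

def accel(t):
    # Dirac's non-runaway solution for the step force F(t) = F0 * theta(t):
    # exponential pre-acceleration before t = 0, plain Newtonian response after.
    # One can check a - tau*(da/dt) = F/m piecewise: both branches satisfy it.
    return (F0 / m) * math.exp(t / tau) if t < 0 else F0 / m

print(accel(-3.0))   # small but nonzero: the electron moves before the force hits
print(accel(-0.5))   # the acausal tail decays on the scale tau
print(accel(2.0))    # 1.0: ordinary F = m*a response after the step
```

Substituting either branch into $a - \tau\dot a$ reproduces $F(t)/m$ exactly, which is why rejecting the runaways forces the acausal branch on you.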
As you noticed in the body of your question, this is also automatically what happens when you treat the third-derivative radiation-reaction term perturbatively, because the perturbation series starts with solutions to the second-order Newtonian equations, and the perturbation series can be made to avoid the exponentially blowing up solutions. This is why the perturbation description hides the fundamental inconsistency of the classical theory. The Dirac approach, rejecting the self-accelerating solutions, gives physical motion, more or less, but it produces non-causal behavior--- there is pre-acceleration of the electron in the Dirac theory. This means that if a classical electromagnetic step-function wave is going to hit an electron, the electron responds a little bit before the wave hits; the acausal response is exponentially decaying in time with a scale equal to the classical electron radius. This is why the classical renormalization program ultimately fails at the classical electron radius: you simply need structure. The electron is just being artificially called a point in the Dirac approach; the pre-acceleration reveals it is really extended, and the scale for new structure demanded by classical physics is, as always, the classical electron radius. But for quantum electrodynamics, the classical analogy is misleading, at least for small values of the fine-structure constant. The first miracle of quantum electrodynamics is that the self-field of the electron, despite classical physical intuition that it should diverge linearly, is only logarithmically divergent. This was first demonstrated in the old perturbation theory days by Weisskopf, but it's obvious today in the relativistic perturbation theory--- the contribution of the self-field of the electron is from a diagram where you have an electron line with a photon going out and coming back to the same line, and the short-distance divergence is when the proper travel-time of the electron is short.
This diagram, when regulated, only diverges as the log of the cutoff, same as every other divergent one-loop diagram in QED. The modern covariant methods make the result look too easy; they end up hiding an important physical difference between quantum and classical physics. What's physically softening the classical linear divergence? The main reason, explained by Weisskopf, and made completely obvious in the formalism of Stueckelberg, Feynman, and Schwinger, is that the intermediate states for short times between photon emission and absorption involve a sum over both electron and positron states together, as for short propagation times you shouldn't separate relativistic electron from positron states. The contribution calculated artificially using only electron intermediate states between emission and absorption is linearly divergent, just as in the classical theory, and of course the same holds for the positron self-mass truncated to only positron intermediate states. But for the true relativistic field theory calculations, you have to add both contributions up, and the cutoff has to respect relativistic invariance, as in Pauli-Villars regularization, so it must be the same for both positrons and electrons. The contributions of positrons are opposite in sign to the contributions of the electrons, as the positron field is opposite in sign, and cancels the electron field. When the distances become short, the main classical linear divergence is cancelled away, leaving only the relativistically invariant log divergence, which is the short-distance renormalization mass correction considered in renormalizing quantum electrodynamics. Notice that this completely ignores the classical issue of the response of the self-field, which is important for distances larger than the Compton wavelength.
For those large distances, positron contributions can be naturally nonrelativistically separated from electron contributions, and contribute negligibly to the field, so that it turns into the classical point field for the electron. One physical interpretation is that the electron's charge is not concentrated at a point on any time-slice in quantum electrodynamics, but the back-and-forth path in time of the electron means that the charge on any one time slice has both electron and positron intersections, and the charge ends up fractally smeared out over a region comparable to the Compton wavelength, partially smoothing the charge and mollifying the divergence. The ratio of the classical electron radius, where the classical renormalization program breaks down, to the Compton wavelength $1/m_e$, where relativistic quantum renormalization kicks in, is the fine-structure constant by definition. The fine-structure constant is small, which means that the linearly divergent corrections to the self-mass are replaced by the log-divergent corrections before getting a chance to make a significant contribution to the self-mass of the electron, or any other pointlike charged particle (small, but not completely negligible: for pions, due to their Goldstone nature and small mass, the electromagnetic field explains the mass splitting, as understood in the 1960s). So the classical inconsistency is side-stepped for a while, just because the divergence is only logarithmic. But the problems obviously can't fully go away, because if you were to twiddle the parameters and make the fine-structure constant large, you would make the classical electron radius larger than the Compton wavelength, at which point the classical self-field surrounding an electron, even in the nonrelativistic limit, would have more energy than the electron itself, and the renormalization procedure would break down, requiring unphysical properties at a normal accessible scale.
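The ratio statement above can be checked directly with CODATA values (hard-coded here so the snippet is self-contained): the classical electron radius over the reduced Compton wavelength reproduces the fine-structure constant, since $r_e/\bar\lambda_C = e^2/(4\pi\epsilon_0\hbar c) = \alpha$.

```python
import math

# CODATA 2018 values (SI units)
e = 1.602176634e-19          # elementary charge, C
eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
m_e = 9.1093837015e-31       # electron mass, kg
c = 299792458.0              # speed of light, m/s
hbar = 1.054571817e-34       # reduced Planck constant, J*s

r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)   # classical electron radius
lam_C = hbar / (m_e * c)                          # reduced Compton wavelength

alpha = r_e / lam_C
print(alpha, 1 / alpha)      # the fine-structure constant, about 1/137
```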
But in the case of our universe, with a small fine-structure constant, the log running means that the problems don't kick in until an absurdly high energy scale. Still, they do kick in, and this is at the enormously large Landau pole energy, which is where the coupling of the electron runs big enough that the fine-structure constant is no longer small. This energy is larger than the Planck energy, so at this point we expect the field theory description to break down anyway, and be replaced by a proper fundamental gravitational theory, string theory or a variant. This part is old and well known, and explains the disconnect between the modern quantum renormalization program and the old classical renormalization program. The quantum renormalization program works over an enormous range, but because of the technical differences in the divergences, it simply doesn't answer the classical renormalization questions. But one can go further today, and consider different cases, where one can gain some insight into the renormalization program. There is a case where you do have a large value of the fine-structure constant: magnetic monopoles. The magnetic charge is inverse to the electric charge. The magnetic monopoles in, say, the Georgi-Glashow model (SU(2) gauge theory Higgsed to U(1) at long distances) can't be inconsistent, as the theory is asymptotically free at high energies. But in this case, the monopole is not pointlike; instead, it is an extended field configuration. The soliton size is determined from the Higgs field details, and when the Higgs field is physical, there is no renormalization paradox for monopoles either--- monopoles in consistent gauge field theories are simply solitons which are bigger than the classical monopole radius.
They do have a third-derivative radiation-reaction term in the equation of motion at long scales, but it isn't a paradox, because the pre-acceleration can be understood as a response of parts of the monopole to a field, and it isn't acausal. Runaway solutions can't occur, because there is an energy bound for a moving solution; there never is any negative energy density anywhere, unlike in the classically renormalized point particle theory, so you can't balance a negative mass moving faster and faster (and gaining negative energy) with a field gaining positive energy. It is interesting to consider the classical issues in the gravitational context, now that we have an understanding not only of the quantum renormalization program, but also of quantum gravity in the form of string theory, which solves the quantum renormalization problem for good. The classical long-distance version of string theory is general relativity, and in this case, the classical charged point particle analog is an electrogravitational solution, a black hole of mass m with charge e. In order for the black hole to be sensible, $m>e$, and in this case, the classical solution is classically well behaved, but it is not pointlike--- it has a radius of size m. The pointlike limit requires taking m to zero and therefore e to zero, and the electromagnetic self-energy e^2/m is not only less than m, it is less and less of a contribution to the mass as e gets smaller. In the case that you choose e large, it seems that the extremal black hole would get a negative "bare mass". But the behavior of the larger-than-extremal black hole is always perfectly causal. The apparent negative mass required here is nonsense; you are just ignoring the negative gravitational field energy--- the extremal black hole can be interpreted as the case where the total field energy, electromagnetic plus gravitational, is equal to the total mass.
In string theory, the quantum analogs of the extremal black hole solutions are the fundamental objects; these are the strings and branes. Here, there is an interesting reversal: the classical restriction for black holes, $m\ge e$, is reversed in string theory for the special case of the lightest charged particle. In this case, the requirement that a black hole can decay completely suggests strongly that the lightest charged particle must obey the opposite inequality, that is, that $m\le e$ for the lightest quantum. Hi Ron, I still have to read this but it always seemed to me that this question and that submission are closely related, so could this answer serve as a review there too? This answer is about the failure of classical renormalization, the old program of renormalizing classical electrodynamics. It's about the radiation reaction runaway solutions, and their interpretation, and why this failure doesn't appear in quantum electrodynamics until the Landau scale. It has no real relation to Kalitvianski's paper, which has a model, not this old classical nonsense, inside. Thank you, Ron, for your valuable answer! I am ready to believe in bare particles and their interactions, guessed correctly by insightful physicists. I used to think that the bare mass was a rug brought by some physicists to cover their shit. But now I am sure the physicists guessed the interaction correctly and discovered (predicted) bare particles as a by-product. Interactions change constants ('t Hooft)! How could I doubt it when it had been shown so many times in so many textbooks? I was blind. Now I see, I see the physics of very short distances. Although it cannot be seen in a microscope, I see it. It is in my mind. And if one day someone advances a crazy interaction force on the right side of Eq. (1) and the same crazy term on the left side of Eq.
(4) with (5)), I will believe in both - in the craziness of the bare particle and in the craziness of its interaction. Too bad these two terms cancel each other. I would love to see them in the calculation results. But, probably, it is their fate, their destiny - to disappear for good. Nevertheless they will stay in my mind. Because physicists are not crazy by definition, but insightful. No, frankly, Ron, do you think it is impossible to write down mathematically an equation system in CED with the right physics, including exact conservation of the energy-momentum? Tell me, is it impossible? Some projectile pushes the electron, the latter accelerates and radiates, and it is impossible to make ends meet in this simple problem? Another thing: I see bare particles exist in your world and it goes without saying. I thought we dealt with a physical particle in (1) and adding any correction to its mass was undesirable. You say in QED the correction is smaller, so QED is much better. Then why don't you leave it in the calculation results? Why does it disappear if it has some physics in it? We calculate, calculate, analyse the great physics of our calculation and in the end there stays nothing of it. To be precise, let us consider an external electron line. It is a Dirac bispinor, for short. What is the effect of taking into account all radiative corrections (self-energy terms) in it? What does it become? The same Dirac bispinor? With a lattice at the Planck scale, or any other cutoff at the Planck scale, the Dirac bispinor, after all radiative corrections, stays a bispinor with slightly different mass and slightly different charge. We leave the divergence in the calculation results! The logarithmically divergent renormalization of the charge and mass is not a mathematical trick; it doesn't just get rid of the infinities, they are still there, and they show up as physical effects in higher and higher energy scattering. The counterterm contribution stays in the results!
The infinity doesn't go away by subtraction; it still shows up in the physics. For example, when you scatter an electron from a photon at center of mass energy squared s, the deflection in QED is given by the Klein-Nishina formula, and if you do a best fit to find the electron charge e from the actual scattering in pure QED, you find a different value of e at each s: e grows logarithmically with s, and doesn't ever stop growing. As the energy gets larger, e also gets larger, and eventually, at unphysically large energies, the theory stops making sense, because e is greater than 1. This is the running coupling constant, and it is the original renormalization group of Stueckelberg and Petermann, Gell-Mann and Low: the subtraction point is arbitrary, so you can recenter the calculations at any energy E. There are no true bare particles in any modern formulation, only particles defined at a cut-off scale $\Lambda$. The cutoff can be mathematically well defined, for instance a lattice, so just imagine QED on a lattice. With a Planck-sized lattice, a small lattice coupling and a small lattice electron mass parameter, the long-wavelength solutions have a light spin-1/2 particle with only a somewhat different mass, and a somewhat different coupling. You then introduce a second scale, just for mathematical convenience; this is the subtraction scale, and you define all your calculations relative to the subtraction scale. Instead of asking how the mass and charge depend on the lattice, you can then ask how the mass and charge of the best-fit interactions depend on the subtraction scale. It's just for mathematical convenience, the subtraction scale is arbitrary, but the divergences in the theory show up as actual physical changes in the predicted scattering as you change the energy. This doesn't make sense when the subtraction scale approaches the lattice scale, but for long distances, you can define the dependence with no problem.
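The logarithmic growth of the coupling can be sketched with the standard one-loop running formula. This keeps only the electron vacuum-polarization contribution (other charged species, which matter at real collider energies, are ignored), so it is an illustration of the qualitative behavior, not a precision result:

```python
import math

alpha0 = 1 / 137.035999   # fine-structure constant at the electron mass scale
m_e = 0.511e-3            # electron mass, GeV

def alpha(Q):
    """One-loop running QED coupling at energy Q (GeV), electron loop only:
    alpha(Q) = alpha0 / (1 - (2*alpha0/(3*pi)) * ln(Q/m_e))."""
    return alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q / m_e))

# The effective coupling grows slowly (logarithmically) with energy...
print(1 / alpha(91.19))                 # effective 1/alpha at the Z mass

# ...and formally diverges at the one-loop Landau pole,
# Q_pole = m_e * exp(3*pi/(2*alpha0)), absurdly far above the Planck scale.
ln_pole = 3 * math.pi / (2 * alpha0)    # ln(Q_pole / m_e), about 646
print(ln_pole)
```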
If you ignore the lattice, and just consider the perturbation series (which now only depends on the subtraction scale instead of the lattice scale), the coupling as a function of the energy goes slowly to infinity as the energy gets large. This is a sign that you need to have something else happening at large energy, like a lattice, or strings, or whatever. The energy is sufficiently enormous in this case that nobody cares. The fundamental issue is that there are two different notions of particle: 1. Lagrangian particle: a field you path integrate over. 2. S-matrix particle: an energy eigenstate determined by 4-momentum, mass and spin. The Lagrangian particles are mathematical things; they are defined by doing a path-integral, by mathematical equations. The S-matrix particles are what you observe in experiments; they are defined by the eigenstates of the Hamiltonian, by the solutions to the equations. The perturbation theory at long distances, if you use a physical subtraction, is using the real physical particles to do the perturbation theory, and because it is ignoring the lattice, it is choosing the counterterms in the Lagrangian to zero out the effect of higher loops. But it's a mathematical trick; you don't have to use physical subtraction, the subtraction procedure is largely arbitrary and chosen for convenience, so you can use a different process, like dimensional regularization/minimal subtraction with a free-floating subtraction point, and then you are expanding in terms of particles whose interactions are gradually changing over many decades, approaching more or less the lattice value when the subtraction point approaches the lattice scale, at which point the whole renormalized perturbation theory breaks down. For QED with a not-too-insanely-big cutoff, the Lagrangian particles, the electron and the photon fields, are qualitatively of the same type as the physical S-matrix particles, except they have somewhat different values of the parameters.
The logarithmic dependence means that the change is not very big for ordinary values of the cutoff. The cutoff is a real thing--- you can make lattice QED, and then it's the lattice scale. You can take Pauli-Villars QED, and then it's the mass of the regulator. In QCD, it's exactly the opposite. The S-matrix spectrum is mesons and baryons, and the Lagrangian particles are quarks and gluons. So the bare particles are quarks, but they are not unconfined S-matrix particles; you don't see them in the physical spectrum. @RonMaimon: Thank you, Ron, you are a very generous and patient person! I understand what you are writing. Still, there are some points to clarify. The lattice is necessary for some sort of regularization in the standard version of QED. In this sense, it is like any other regularization and I do not mind - the standard QED needs it. As well as renormalization. Concerning using the Klein-Nishina formula for fitting the charge $e$ as a function of $s$: it is slightly strange, because once fitted, $e$ should remain constant. Normally we fix it with low-frequency photon scattering, the Thomson formula, to be exact. If we apply the Klein-Nishina formula for fitting $e$ at higher $s$, it may well happen that, due to inaccuracy of this formula, we transfer its inaccuracy to the charge, as if it indeed depended on $s$. I hope you meant something else here. I guess you speak of a bare charge, which is not constant, and I agree because it was devised so. Call it a charge on a lattice, whatever, the meaning is the same. One more thing. Once, I encountered a wrong perturbative series (Appendix 3). It was wrong because of my wrong perturbative treatment of the "interaction" (or "perturbation") operator. The wrongness started from the third order in powers of the small parameter. This small parameter could be determined from independent measurements and from exact calculations, i.e., from calculations without this error.
But imagine if we had no correct calculation results and tried to determine the value of this parameter from experiments alone, compared with the wrong series at our disposal. Then we would obtain a wrong value, and this value could then depend on something else instead of being a constant by definition. It is a very dangerous situation. You can always fit the parameter by making it "running" somehow, no problem, but there would be a "wrong physics" in it. I feel a little bit morally tired, seeing my low reputation here as a researcher. Probably, I will take a break.
Re: Issue with degree temperature units on bottom of fraction Feb 02, 2023 01:00 PM I am using Mathcad Prime 7 and I am having an issue where if I put US temperature units in the denominator of a fraction, it says the value must be a scalar or a matrix. This does not happen with Kelvin or Rankine units. I tried writing it different ways and that did not help either. I tried selecting it directly from the units list as opposed to typing it in. I tried converting the kelvin equivalent to Fahrenheit and still no luck. Does anyone know what is going on or how to get this to work?
What exactly is a gas water column? • The traditional measurement method is inches of water column ("wc"). Natural or propane are two options. There are about 28 inches of water column in 1 psi, so pressures expressed in wc are relatively small. • Pounds per square inch (psi) are the unit used to measure gas pressure in the distribution system. In natural gas, what is the water column? There are about 27.7 inches of w.c. in 1 psi. What exactly is a water column in inches? Inches of water column measure the pressure it takes to push a column of water up by that number of inches. Low-pressure gas systems are typically measured using it. What exactly is the water column? The water column is an oceanographic concept that describes the physical (temperature, salinity, light penetration) as well as chemical (pH, dissolved oxygen, and nutrient salts) properties of seawater at various depths for a specific geographical location. What exactly is a heater's water column? Gas pressure is measured through the water column. U-shaped tubes half filled with water were the first devices used to measure gas pressure. A gas line applied pressure to one side of the tube, while the other was left open. In a pound of gas, how many inches of water column do you have? If 1 pound per square inch of pressure equals 27.78 inches of water column, that is your conversion factor. For natural gas, how do you calculate the water column? Simply multiply the PSI measurement by 27.708 to obtain the measurement in WC to convert PSI to WC. If you multiply 5 PSI by 27.708, for example, it converts to about 138.54 inches of WC. What exactly is a water column test? The water column test determines the response and resistance of a specific material to water intrusion. Tests of the water column reveal leaks, which indicate a membrane fracture. How do you calculate the water column?
Consider the following equation to calculate hydrostatic pressure at the bottom of the container: P = SG • H. With H = 8 inches of water and SG = 1, P = 1 • 8 inches = 8 inches W.C. As a result, the hydrostatic pressure at the base is equivalent to 8 inches of water column. 1 inch of water column = 0.03613 PSI (27.68 inches of water column = 1 PSI). What exactly is water column mixing? Plankton communities are known to benefit from water-column mixing. The underlying mechanisms are well understood for shallow polymictic and deep stratified lakes, and they depend on the size and depth of the water body, nutrient status, and plankton community structure. What exactly does "7 inches water column" mean? An inch of water column is the amount of pressure required to raise a column of water 1 inch. There are about 27.7 inches of water column (wc) in 1 PSI of pressure. As a result, 7″ wc is about 1/4 PSI. This is the standard pressure at which household natural gas is delivered. What is the height of the water column? A water column is a different way of expressing pressure measurements. This measurement is defined as the pressure produced by a 1 inch by 1 inch column of water of a specified height. What is the function of a water column? An Oscillating Water Column works like a piston in a cylinder, in which a large amount of moving water acts as the piston. As a wave rises, air is forced out of the column, and new air is sucked in as the wave descends back down. The movement of air in the device turns the turbine at the top of the column. How many PSI does an inch of water column have? One inch of water column is equal to a pressure of about 1/28 pound per square inch (psi). A column of water 28 inches high, for example, produces pressure equal to 1 psi. What is the best way to read a water column manometer? To read a manometer, the readings on both sides of the water column are added together.
To put it another way, if the water column goes down 1” on the pressure side and rises 1” on the opposite side, it will equal 2” of water. A manometer reading with a water column difference of 2” corresponds to about 0.07226 PSI. What is the weight of a water column? That means a 1-inch-square column of water measuring 2.31 feet tall will weigh one pound. A one-foot column of water with a 1-inch-square cross-section weighs 0.433 pounds. What are the different parts of the water column? The deep sea water column is described as follows: epipelagic, from the surface to 200 meters below the surface; mesopelagic, from 200 to 1000 meters below the surface; and bathypelagic, below 1000 meters. What is the best way to calculate water column pressure? P = SG • H; with H = 8 inches of water and SG = 1, P = 1 • 8 inches = 8 inches W.C. As a result, the hydrostatic pressure at the base is equivalent to 8 inches of water column. 1 inch of water column = 0.03613 PSI (27.68 inches of water column = 1 PSI). At the bottom of this container, 8 inches WC = 0.289 PSI of hydrostatic pressure. Is the pressure on natural gas high or low? Natural gas is compressed to pressures ranging from 500 to 1400 pounds per square inch in transmission pipelines. What is the average house gas pressure? Depending on the number of homes or businesses served by the gas line leading to the house, the natural gas pressure ranges from about 1/4 to 60 psi. Large-volume pipelines used to transport gas from well fields to local utilities have pressures of up to 1,500 psi. How is the water level calculated? To calculate the water level, you must know how high the tank is, the radius of the tank, and an estimate of pi rounded to 3.14. After you've calculated the volume, you'll need to convert it into a liquid measurement, such as gallons. What exactly is a column? 1a: a vertical arrangement of items printed or written on a page, such as columns of numbers. The news article takes up three columns.
b: one of two or more vertical sections of a printed page separated by a rule or blank space. c: a vertically arranged accumulation or stack, such as stacked columns of paint cans. Is natural gas lighter than air? Natural gas is lighter than air and dissipates quickly into the atmosphere when it is released. Propane gas has a lot of similarities to natural gas and is also used as a fuel. The most significant distinction between propane and natural gas is that propane is HEAVIER than air. What happens if the gas pressure is too high? High gas pressure can harm your furnace as well, because it significantly raises the risk of the furnace overheating. When this happens, the excessive heat can cause damage to a variety of internal components.
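The unit conversions quoted throughout this FAQ can be collected into a small helper, using the factor of 0.03613 psi per inch of water column (about 27.7 in. w.c. per psi). The function names here are illustrative, not from the source:

```python
PSI_PER_INCH_WC = 0.03613      # 1 inch of water column expressed in psi

def psi_to_wc(psi):
    """Convert pressure in psi to inches of water column."""
    return psi / PSI_PER_INCH_WC

def wc_to_psi(inches_wc):
    """Convert inches of water column to pressure in psi."""
    return inches_wc * PSI_PER_INCH_WC

print(psi_to_wc(5))    # about 138.4 inches w.c., the 5-PSI example above
print(wc_to_psi(7))    # about 0.25 psi, typical household gas delivery
print(wc_to_psi(8))    # the 8-inch hydrostatic example: about 0.289 psi
```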
In quantum mechanics, a singlet state usually refers to a system in which all electrons are paired. The term 'singlet' originally meant a linked set of particles whose net angular momentum is zero, that is, whose overall spin quantum number $s=0$. As a result, there is only one spectral line of a singlet state. In contrast, a doublet state contains one unpaired electron and shows splitting of spectral lines into a doublet, and a triplet state has two unpaired electrons and shows threefold splitting of spectral lines. Examples of atoms in singlet, doublet, and triplet states. Singlets and the related spin concepts of doublets and triplets occur frequently in atomic physics and nuclear physics, where one often needs to determine the total spin of a collection of particles. Since the only observed fundamental particle with zero spin is the extremely inaccessible Higgs boson, singlets in everyday physics are necessarily composed of sets of particles whose individual spins are non-zero, e.g. 1/2 or 1. The origin of the term "singlet" is that bound quantum systems with zero net angular momentum emit photons within a single spectral line, as opposed to double lines (doublet state) or triple lines (triplet state).^[1] The number of spectral lines $n$ in this singlet-style terminology has a simple relationship to the spin quantum number: $n=2s+1$, and $s=(n-1)/2$. Singlet-style terminology is also used for systems whose mathematical properties are similar or identical to angular momentum spin states, even when traditional spin is not involved. In particular, the concept of isospin was developed early in the history of particle physics to address the remarkable similarities of protons and neutrons. Within atomic nuclei, protons and neutrons behave in many ways as if they were a single type of particle, the nucleon, with two states.
The proton-neutron pair thus by analogy was referred to as a doublet, and the hypothesized underlying nucleon was assigned a spin-like doublet quantum number $I=\tfrac{1}{2}$ to differentiate between those two states. Thus the neutron became a nucleon with isospin $I_3(n)=-\tfrac{1}{2}$, and the proton a nucleon with $I_3(p)=+\tfrac{1}{2}$. The isospin doublet notably shares the same SU(2) mathematical structure as the $s=\tfrac{1}{2}$ angular momentum doublet. It should be mentioned that this early particle physics focus on nucleons was subsequently replaced by the more fundamental quark model, in which protons and neutrons are interpreted as bound systems of three quarks each. The isospin analogy also applies to quarks, and is the source of the names up (as in "isospin up") and down (as in "isospin down") for the quarks found in protons and neutrons. While for angular momentum states the singlet-style terminology is seldom used beyond triplets (spin=1), it has proven historically useful for describing much larger particle groups and subgroups that share certain features and are distinguished from each other by quantum numbers beyond spin. An example of this broader use of singlet-style terminology is the nine-member "nonet" of the pseudoscalar mesons. The simplest possible angular momentum singlet is a set (bound or unbound) of two spin-1/2 (fermion) particles that are oriented so that their spin directions ("up" and "down") oppose each other; that is, they are antiparallel. The simplest possible bound particle pair capable of exhibiting the singlet state is positronium, which consists of an electron and positron (antielectron) bound by their opposite electric charges. The electron and positron in positronium can also have identical or parallel spin orientations, which results in an experimentally distinct form of positronium with a spin 1 or triplet state.
An unbound singlet consists of a pair of entities small enough to exhibit quantum behavior (e.g. particles, atoms, or small molecules), not necessarily of the same type, for which four conditions hold: 1. The spins of the two entities are of equal magnitude. 2. The current spin values of both entities originated within a single well-defined quantum event (wave function) at some earlier location in classical space and time. 3. The originating wave function relates the two entities in such a way that their net angular momentum must be zero, which in turn means that if and when they are detected experimentally, conservation of angular momentum will require their spins to be in full opposition (antiparallel). 4. Their spin states have remained unperturbed since the originating quantum event – which is equivalent to asserting that there exists no classical information (observation) of their status anywhere within the universe. Any spin value can be used for the pair, but the entanglement effect will be strongest both mathematically and experimentally if the spin magnitude is as small as possible, with the maximum possible effect occurring for entities with spin-1/2 (such as electrons and positrons). Early thought experiments for unbound singlets usually assumed the use of two antiparallel spin-1/2 electrons. However, actual experiments have tended to focus instead on using pairs of spin 1 photons. While the entanglement effect is somewhat less pronounced with such spin 1 particles, photons are easier to generate in correlated pairs and (usually) easier to keep in an unperturbed quantum state.
Mathematical representations
The ability of positronium to form both singlet and triplet states is described mathematically by saying that the product of two doublet representations (meaning the electron and positron, which are both spin-1/2 doublets) can be decomposed into the sum of an adjoint representation (the triplet or spin 1 state) and a trivial representation (the singlet or spin 0 state). While the particle interpretation of the positronium triplet and singlet states is arguably more intuitive, the mathematical description enables precise calculations of quantum states and probabilities. This greater mathematical precision for example makes it possible to assess how singlets and doublets behave under rotation operations. Since a spin-1/2 electron transforms as a doublet under rotation, its experimental response to rotation can be predicted by using the fundamental representation of that doublet, specifically the Lie group SU(2).^[2] Applying the operator $\vec{S}^2$ to the spin state of the electron thus will always result in $\hbar^2\left(\tfrac{1}{2}\right)\left(\tfrac{1}{2}+1\right)=\tfrac{3}{4}\hbar^2$, i.e. spin-1/2, since the spin-up and spin-down states are both eigenstates of the operator with the same eigenvalue. Similarly, for a system of two electrons, it is possible to measure the total spin by applying $\left(\vec{S}_1+\vec{S}_2\right)^2$, where $\vec{S}_1$ acts on electron 1 and $\vec{S}_2$ acts on electron 2. Since this system has two possible total spins, it also has two possible eigenvalues and corresponding eigenstates for the total spin operator, corresponding to the spin 0 and spin 1 states.
Singlets and entangled states
Particles in singlet states do not need to be locally bound to each other.
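The decomposition described above can be verified directly. This is a small numerical sketch (with $\hbar=1$): build the total spin operator $\left(\vec{S}_1+\vec{S}_2\right)^2$ for two spin-1/2 particles and check that the singlet combination has eigenvalue $s(s+1)=0$ while a triplet state has eigenvalue $s(s+1)=2$.

```python
import numpy as np

# Spin-1/2 operators (Pauli matrices divided by 2), hbar = 1
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

# Components of S_total = S_1 (x) I + I (x) S_2 on the 4-dim two-spin space
Stot = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
S2 = sum(S @ S for S in Stot)          # the operator (S_1 + S_2)^2

# Basis order |↑↑>, |↑↓>, |↓↑>, |↓↓>
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|↑↓> - |↓↑>)/sqrt(2)
triplet = np.array([1, 0, 0, 0])                  # |↑↑>

print(np.allclose(S2 @ singlet, 0))               # singlet: s(s+1) = 0
print(np.allclose(S2 @ triplet, 2 * triplet))     # triplet: s(s+1) = 2
```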
For example, when the spin states of two electrons are correlated by their emission from a single quantum event that conserves angular momentum, the resulting electrons remain in a shared singlet state even as their separation in space increases indefinitely over time, provided only that their angular momentum states remain unperturbed. In Dirac notation this distance-indifferent singlet state is usually represented as: $\frac{1}{\sqrt{2}}\left(\left|\uparrow\downarrow\right\rangle-\left|\downarrow\uparrow\right\rangle\right)$. The possibility of spatially extended unbound singlet states has considerable historical and even philosophical importance, since considering such states contributed importantly to the theoretical and experimental exploration and verification of what is now called quantum entanglement. Along with Podolsky and Rosen, Einstein proposed the EPR paradox thought experiment to help define his concerns with what he viewed as the non-locality of spatially separated entangled particles, using it in an argument that quantum mechanics was incomplete. In 1951 David Bohm formulated a version of the "paradox" using spin singlet states.^[3] The difficulty captured by the EPR-Bohm thought experiment was that by measuring a spatial component of the angular momentum of either of two particles that have been prepared in a spatially distributed singlet state, the quantum state of the remaining particle, conditioned on the measurement result obtained, appears to be "instantaneously" altered, even if the two particles have over time become separated by light years of distance. Decades later, John Stewart Bell, who was a strong advocate of Einstein's locality-first perspective, proved Bell's theorem and showed that it could be used to assess the existence or non-existence of singlet entanglement experimentally.
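The EPR-Bohm correlation can be made quantitative with a short check (a sketch, not from the original article; spins in units of $\hbar/2$): for the singlet, measuring along unit vectors a and b gives $\langle(\vec\sigma\cdot a)\otimes(\vec\sigma\cdot b)\rangle=-a\cdot b$, the cosine correlation that Bell's theorem shows no local hidden-variable model can reproduce at all angles.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def correlation(a, b):
    """<singlet| (sigma.a) (x) (sigma.b) |singlet> for unit vectors a, b."""
    singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|↑↓> - |↓↑>)/sqrt(2)
    sa = a[0] * sx + a[1] * sy + a[2] * sz
    sb = b[0] * sx + b[1] * sy + b[2] * sz
    return float(np.real(singlet @ np.kron(sa, sb) @ singlet))

z = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(np.pi / 3), 0.0, np.cos(np.pi / 3)])  # 60 degrees from z

print(correlation(z, z))   # parallel axes: perfect anticorrelation, -1
print(correlation(z, b))   # 60 degrees apart: -cos(60 deg) = -0.5
```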
The irony was that instead of disproving entanglement, as Bell had hoped, subsequent experiments established its reality. In fact, there now exist commercial quantum encryption devices whose operation depends fundamentally on the existence and behavior of spatially extended singlet states. A weaker form of Einstein's locality principle remains intact, which is this: Classical information cannot be transmitted faster than the speed of light c, not even by using quantum entanglement events. This form of locality is weaker than the notion of "Einstein locality" or "local realism" used in the EPR and Bell's theorem papers, but it is sufficient to prevent the emergence of causality paradoxes. References 1. ^ Griffiths, D.J. (1995). Introduction to Quantum Mechanics. Prentice Hall. p. 165. ISBN 9780131244054. 2. ^ Sakurai, J.J. (1985). Modern Quantum Mechanics. Addison Wesley. 3. ^ Bohm, D. (1951). Quantum Theory. Prentice-Hall, Englewood Cliffs. p. 29; Chapter 5, Section 3; Chapter 22, Section 19.
28.3: Length Contraction

Learning Objectives

By the end of this section, you will be able to:
• Describe proper length.
• Calculate length contraction.
• Explain why we don’t notice these effects at everyday scales.

Have you ever driven on a road that seems like it goes on forever? If you look ahead, you might say you have about 10 km left to go. Another traveler might say the road ahead looks like it’s about 15 km long. If you both measured the road, however, you would agree. Traveling at everyday speeds, the distance you both measure would be the same. You will read in this section, however, that this is not true at relativistic speeds. Close to the speed of light, distances measured are not the same when measured by different observers. Figure \(\PageIndex{1}\): People might describe distances differently, but at relativistic speeds, the distances really are different. (credit: Corey Leopold, Flickr) Proper Length One thing all observers agree upon is relative speed. Even though clocks measure different elapsed times for the same process, they still agree that relative speed, which is distance divided by elapsed time, is the same. This implies that distance, too, depends on the observer’s relative motion. If two observers see different times, then they must also see different distances for relative speed to be the same to each of them. The muon illustrates this concept. To an observer on the Earth, the muon travels at \(0.950c\) for \(7.05 \mu s\) from the time it is produced until it decays.
Thus it travels a distance \[L_{0} = v \Delta t = \left(0.950\right)\left(3.00 \times 10^{8} m/s\right)\left(7.05 \times 10^{-6} s\right) = 2.01 km \label{28.4.1}\] relative to the Earth. In the muon’s frame of reference, its lifetime is only \(2.20 \mu s\). It has enough time to travel only \[L = v \Delta t_{0} = \left(0.950\right)\left(3.00 \times 10^{8} m/s\right)\left(2.20 \times 10^{-6} s\right) = 0.627 km. \label{28.4.2}\] The distance between the same two events (production and decay of a muon) depends on who measures it and how they are moving relative to it. Proper length \(L_{0}\) is the distance between two points measured by an observer who is at rest relative to both of the points. The Earth-bound observer measures the proper length \(L_{0}\), because the points at which the muon is produced and decays are stationary relative to the Earth. To the muon, the Earth, air, and clouds are moving, and so the distance \(L\) it sees is not the proper length. Figure \(\PageIndex{2}\): (a) The Earth-bound observer sees the muon travel 2.01 km between clouds. (b) The muon sees itself travel the same path, but only a distance of 0.627 km. The Earth, air, and clouds are moving relative to the muon in its frame, and all appear to have smaller lengths along the direction of travel. Length Contraction To develop an equation relating distances measured by different observers, we note that the velocity relative to the Earth-bound observer in our muon example is given by \[v = \frac{L_{0}}{\Delta t}. \label{28.4.3}\] The time relative to the Earth-bound observer is \(\Delta t\), since the object being timed is moving relative to this observer. The velocity relative to the moving observer is given by \[v = \frac{L}{\Delta t_{0}}.\label{28.4.4}\] The moving observer travels with the muon and therefore observes the proper time \(\Delta t_{0}\). 
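These two distances can be checked with a few lines of arithmetic (a sketch assuming c = 3.00 × 10⁸ m/s, as in the text):

```python
import math

c = 3.00e8                  # speed of light, m/s
v = 0.950 * c

L0 = v * 7.05e-6            # Earth-frame distance (production to decay)
L = v * 2.20e-6             # distance covered in the muon's own lifetime

print(round(L0 / 1e3, 2), "km")   # 2.01 km
print(round(L / 1e3, 3), "km")    # 0.627 km

# Their ratio matches the contraction factor sqrt(1 - v^2/c^2) ~ 0.312
# (up to rounding of the quoted lifetimes)
print(round(L / L0, 3), round(math.sqrt(1 - (v / c) ** 2), 3))
```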
The two velocities are identical; thus, \[\frac{L_{0}}{\Delta t} = \frac{L}{\Delta t_{0}}.\label{28.4.5}\] We know that \(\Delta t = \gamma \Delta t_{0}\). Substituting this equation into the relationship above gives \[L = \frac{L_{0}}{\gamma}.\label{28.4.6}\] Substituting for \(\gamma\) gives an equation relating the distances measured by different observers. Length contraction \(L\) is the shortening of the measured length of an object moving relative to the observer’s frame. \[L = L_{0} \sqrt{1-\frac{v^{2}}{c^{2}}}.\label{28.4.7}\] If we measure the length of anything moving relative to our frame, we find its length \(L\) to be smaller than the proper length \(L_{0}\) that would be measured if the object were stationary. For example, in the muon’s reference frame, the distance between the points where it was produced and where it decayed is shorter. Those points are fixed relative to the Earth but moving relative to the muon. Clouds and other objects are also contracted along the direction of motion in the muon’s reference frame. Example \(\PageIndex{1}\): Calculating Length Contraction: The Distance between Stars Contracts when You Travel at High Velocity: Suppose an astronaut, such as the twin discussed in "Simultaneity and Time Dilation," travels so fast that \(\gamma = 30.00\). (a) She travels from the Earth to the nearest star system, Alpha Centauri, 4.300 light years (ly) away as measured by an Earth-bound observer. How far apart are the Earth and Alpha Centauri as measured by the astronaut? (b) In terms of \(c\), what is her velocity relative to the Earth? You may neglect the motion of the Earth relative to the Sun. (See Figure 3.) Figure \(\PageIndex{3}\): (a) The Earth-bound observer measures the proper distance between the Earth and the Alpha Centauri. (b) The astronaut observes a length contraction, since the Earth and the Alpha Centauri move relative to her ship.
She can travel this shorter distance in a smaller time (her proper time) without exceeding the speed of light. First note that a light year (ly) is a convenient unit of distance on an astronomical scale—it is the distance light travels in a year. For part (a), note that the 4.300 ly distance between Alpha Centauri and the Earth is the proper distance \(L_0\), because it is measured by an Earth-bound observer to whom both stars are (approximately) stationary. To the astronaut, the Earth and Alpha Centauri are moving by at the same velocity, and so the distance between them is the contracted length \(L\). In part (b), we are given \(\gamma\), and so we can find \(v\) by rearranging the definition of \(\gamma\) to express \(v\) in terms of \(c\). Solution for (a) 1. Identify the knowns: \(L_0 = 4.300 \, ly; \, \gamma = 30.00\) 2. Identify the unknown: \(L\) 3. Choose the appropriate equation: \(L = \frac{L_0}{\gamma}\) 4. Rearrange the equation to solve for the unknown: \[L = \dfrac{L_0}{\gamma}\] \[= \dfrac{4.300 \, ly}{30.00}\] \[= 0.1433 \, ly\] Solution for (b) 1. Identify the known: \(\gamma = 30.00\) 2. Identify the unknown: \(v\) in terms of \(c\) 3. Choose the appropriate equation: \(\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\) 4. Rearrange the equation to solve for the unknown: \[\gamma = \dfrac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\] \[ 30.00 = \dfrac{1}{\sqrt{1 - \frac{v^2}{c^2}}}\] Squaring both sides of the equation and rearranging terms gives: \[900.0 = \dfrac{1}{1 - \frac{v^2}{c^2}}\] so that \[1 - \dfrac{v^2}{c^2} = \dfrac{1}{900.0}\] and \[\dfrac{v^2}{c^2} = 1 - \dfrac{1}{900.0} = 0.99888....\] Taking the square root, we find \[\dfrac{v}{c} = 0.99944,\] which is rearranged to produce a value for the velocity \[v = 0.9994c.\] First, remember that you should not round off calculations until the final result is obtained, or you could get erroneous results.
This is especially true for special relativity calculations, where the differences might only be revealed after several decimal places. The relativistic effect is large here (\(\gamma = 30.00\)), and we see that \(v\) is approaching (not equaling) the speed of light. Since the distance as measured by the astronaut is so much smaller, the astronaut can travel it in much less time in her frame. People could be sent very large distances (thousands or even millions of light years) and age only a few years on the way if they traveled at extremely high velocities. But, like emigrants of centuries past, they would leave the Earth they know forever. Even if they returned, thousands to millions of years would have passed on the Earth, obliterating most of what now exists. There is also a more serious practical obstacle to traveling at such velocities; immensely greater energies than classical physics predicts would be needed to achieve such high velocities. This will be discussed in Relativistic Energy. Why don’t we notice length contraction in everyday life? The distance to the grocery shop does not seem to depend on whether we are moving or not. Examining the equation \(L = L_0\sqrt{1 - \frac{v^2}{c^2}}\), we see that at low velocities \((v \ll c)\) the lengths are nearly equal, the classical expectation. But length contraction is real, if not commonly experienced. For example, a charged particle, like an electron, traveling at relativistic velocity has electric field lines that are compressed along the direction of motion as seen by a stationary observer. (See Figure.) As the electron passes a detector, such as a coil of wire, its field interacts much more briefly, an effect observed at particle accelerators such as the 3 km long Stanford Linear Accelerator (SLAC). In fact, to an electron traveling down the beam pipe at SLAC, the accelerator and the Earth are all moving by and are length contracted.
The relativistic effect is so great that the accelerator is only 0.5 m long to the electron. It is actually easier to get the electron beam down the pipe, since the beam does not have to be as precisely aimed to get down a short pipe as it would down one 3 km long. This, again, is an experimental verification of the Special Theory of Relativity. Figure \(\PageIndex{4}\): The electric field lines of a high-velocity charged particle are compressed along the direction of motion by length contraction. This produces a different signal when the particle goes through a coil, an experimentally verified effect of length contraction. Exercise \(\PageIndex{1}\) A particle is traveling through the Earth’s atmosphere at a speed of \(0.750c\). To an Earth-bound observer, the distance it travels is 2.50 km. How far does the particle travel in the particle’s frame of reference? \[L = L_0\sqrt{1 - \dfrac{v^2}{c^2}} = (2.50 \, km)\sqrt{1 - \dfrac{(0.750c)^2}{c^2}} = 1.65 \, km\] • All observers agree upon relative speed. • Distance depends on an observer’s motion. Proper length \(L_0\) is the distance between two points measured by an observer who is at rest relative to both of the points. Earth-bound observers measure proper length when measuring the distance between two points that are stationary relative to the Earth.
• Length contraction \(L\) is the shortening of the measured length of an object moving relative to the observer’s frame: \[L = L_0 \sqrt{1 - \dfrac{v^2}{c^2}} = \dfrac{L_0}{\gamma}.\] proper length \(L_0\) the distance between two points measured by an observer who is at rest relative to both of the points; Earth-bound observers measure proper length when measuring the distance between two points that are stationary relative to the Earth length contraction \(L\) the shortening of the measured length of an object moving relative to the observer’s frame: \(L = L_0\sqrt{1 - \frac{v^2}{c^2}} = \frac{L_0}{\gamma}\) Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
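The worked example and the exercise above can be reproduced in a few lines (a sketch using only the values quoted in this section):

```python
import math

def gamma(beta):
    """Lorentz factor from beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def contracted(L0, beta):
    """L = L0 * sqrt(1 - v^2/c^2) = L0 / gamma."""
    return L0 / gamma(beta)

# Example: Earth -> Alpha Centauri at gamma = 30.00
g = 30.00
print(round(4.300 / g, 4))                      # 0.1433 ly
print(round(math.sqrt(1.0 - 1.0 / g ** 2), 4))  # v/c = 0.9994

# Exercise: particle at 0.750c over an Earth-frame distance of 2.50 km
print(round(contracted(2.50, 0.750), 2))        # 1.65 km
```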
Home - Power Maximiser This website may also be reached via www.powermaximizer.com, www.energymaximiser.com, www.energymaximizer.com. The Power Maximiser Principle 13 February 2022. A new UK patent application filed. Contains multiple inventions encompassing the most recent developments. Divisional applications anticipated. Go to Patents tab. * Blocking the propagation of 3f currents. * Transformerless injector. * Superposition of 1f and 3f power systems to give the desired waveform along the lines. * Combining triple harmonic injection with quadrature voltage injection. * Single unit starpoint 3f and quadrature 1f injection. * Transformer modification in retrofit situations to avoid transformer replacement. 6 June 2022. A new presentation added outlining benefits, challenges, solutions and theory. Go to Power and Energy tab. * Double-end driving is necessary for waveform shape preservation along the line. * Understanding and analysis of this is best achieved by identifying the arrangement as a 1f 3-phase power system with a superimposed 3f 1-phase system. * The 3f injectors emulate solid starpoint grounding. * VA requirements of the injectors are small. * Starpoint and series 3f injection can be combined with silicon-based quadrature voltage injection for line power increase as well as power steering. 1 September 2022. A new document looking at the benefits available from Third Harmonic Injection. Go to the Resources tab. * Choose between increasing the power rating of the transmission line or improving its efficiency. * Or some of both. * Discusses the role of δ in implementing the choice. * A possible alliance between third harmonic injection and quadrature voltage injection. 12 September 2022. A new document is published in "Resources"
illustrating how the third-harmonic-injection system can be viewed as a double system - a three-phase system operating at fundamental frequency and a single-phase system operating at triple frequency. The power flow due to the single-phase system is small but it facilitates a major increase in the power rating of the three-phase system. General Principles Adding 1/6^th third harmonic to the ac supply waveform (a sinusoid) permits the maximum theoretical power capability of a transmission system to be increased by 33%. Alternatively the power can be kept constant and voltage increased and current reduced, thereby lowering line losses due to resistance by up to 33%. Third harmonic injection gives some of the benefits of DC transmission while keeping the immense advantages of AC. As the waveforms above illustrate, third harmonic injection in a section of a transmission system allows that section to operate for most of each cycle at close to peak allowable voltage. Adding 1/6^th of third harmonic lowers the peak value of a sinusoidal waveform by a factor of 0.866. Restoring the combined waveform to the allowable peak value for the transmission line raises the fundamental voltage by a factor of 1.155. This principle has been widely used for many years to raise the output voltage in three-phase inverters. ("The use of harmonic distortion to increase the output of a three-phase PWM inverter". Houldsworth J.A. and Grant D.A. IEEE Transactions on Industry Applications, Sept 1984, pp 1224-1228). The P(max) of a power transmission line is proportional to V^2. Therefore the raising of the fundamental voltage theoretically permits P(max) to be increased by a factor of 1.33. The blue waveform shown above is the line-ground waveform before the addition of the third harmonic. The orange waveform is the line-ground voltage after the addition of the third harmonic and amplification by 1.155.
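The 0.866 and 1.155 factors quoted above are easy to verify numerically. The sketch below (an illustrative check, not from the site) samples sin θ + (1/6) sin 3θ over one cycle:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 100001)
injected = np.sin(theta) + np.sin(3.0 * theta) / 6.0  # add 1/6 third harmonic

peak = injected.max()
print(round(peak, 4))      # 0.866  -> peak lowered by sqrt(3)/2
print(round(1 / peak, 4))  # 1.1547 -> allowed boost to the fundamental
```

The maximum occurs at θ = 60° and 120° rather than 90°, which is why the waveform sits near the allowable peak for most of each half cycle.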
In conventional ac power systems which employ a pure sine wave, the line-to-line Voltage is limited by the peak permitted line-to-ground Voltage. By breaking that link by adding a third harmonic we allow the line-to-line Voltage to be increased by 15.5% and the power limit increased by up to 33%. For a pure sine wave, V(line-to-line) = Vpk(line-ground)/√2 x √3 = Vpk(line-ground) x 1.225 For a sine wave plus third harmonic, V(line-line) = Vpk(line-ground) x 1.155/√2 x √3 = Vpk(line-ground) x 1.414 A pure sine wave uses the maximum allowed line-ground Voltage once every half cycle. A sine wave plus third harmonic uses the maximum allowed line-ground Voltage twice every half cycle. Adding a third harmonic moves the performance of an ac line closer to that of a dc line. This will be the first time since Tesla that we have put something other than a basic sine wave on our ac power lines. The benefits are more power capacity and reduced losses. The third harmonic is stripped out as the power passes through the receiving end transformer – or if advantageous, it can be taken out later – or not at all depending on the end use. The simplest way to implement this principle is by connecting the sending end and receiving end transformer windings in star and by applying the triple frequency voltage waveform to each star point as shown below. The transmission system therefore operates to some extent at the triple frequency as well as the fundamental frequency. This creates challenges for the triple frequency injectors (labelled 3F in the circuit diagram).
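The line-to-line factors can be checked the same way (a sketch using exact values; note that 2/√3 · √3/√2 = √2, so the exact combined factor is 1.414):

```python
import math

boost = 2 / math.sqrt(3)              # fundamental boost from 1/6 third harmonic
pure = math.sqrt(3) / math.sqrt(2)    # V(line-line) / Vpk(line-ground), pure sine
with_thi = boost * pure               # same ratio with third harmonic injection

print(round(pure, 3), round(with_thi, 3))  # 1.225 1.414
print(round(boost ** 2, 2))                # P(max) ratio: 1.33
```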
We offer a solution to the challenges created by fundamental-frequency zero-sequence currents in the injectors (patent applied for). We also have a range of solutions for the challenges of integrating this technology into new or existing power transmission systems (patent applied for). See Power and Energy. Further patent activity is envisaged, relating to line injection of the third harmonic, control of the propagation of the third harmonic and special three-phase transformers for this application. See Patents. We would welcome expressions of interest from potential partners in developing this evolving technology. (See Contact Us).
In this article, the semi-analytical/numerical technique known as the homotopy analysis method (HAM) is employed to derive solutions for partial slip effects on the heat transfer of nanofluids over a stretching sheet. An accurate analytical solution is presented which depends on the Prandtl number, slip factor, Lewis number, Brownian motion number, and thermophoresis number. The variation of the reduced Nusselt and reduced Sherwood numbers with the Brownian motion number and the thermophoresis number, for various values of the Prandtl number, slip factor, and Lewis number, is presented in tabular and graphical forms. The results of the present article show that the flow velocity, the surface shear stress on the stretching sheet, and the reduced Nusselt and Sherwood numbers are strongly influenced by the slip parameter. It is found that the hydrodynamic boundary layer decreases and the thermal boundary layer increases with the slip parameter. A comparison of the present analysis is made with the previously existing literature and an appreciable agreement in the values is observed for the limiting case. PAPER SUBMITTED: 2014-04-24 PAPER REVISED: 2015-01-31 PAPER ACCEPTED: 2015-03-06 PUBLISHED ONLINE: 2015-04-04, VOLUME , ISSUE 1, PAGES [289 - 301]
Create a 'Triforce' array (Sierpiński Triangle) with a formula I've been studying 'pyramid' arrays lately and with a new Zelda title out, I was inspired to generate a left-justified 'Triforce' array with a formula. Sierpiński triangle - Wikipedia I took the approach of identifying and keeping the odd binomial coefficients and discarding evens. =LAMBDA(r,c,LET(n, INT(COMBIN(r - 1, c - 1)), IF(AND(r = 1, c = 1), 1, IF(c <= r, IF(ISODD(n), 1, ""), "")))) =LAMBDA(dim,MAKEARRAY(dim, dim, Sierpiński)) I'm interested in any other creative approaches to creating this array!
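The same keep-the-odd-binomials idea ports directly to other languages. A Python sketch for comparison (using `math.comb` in place of COMBIN/ISODD; by Lucas' theorem, `comb(r, c)` is odd exactly when `c & r == c`, so a bit test would also work):

```python
from math import comb

def triforce(dim):
    """Left-justified Sierpinski triangle: keep cells where C(r, c) is odd."""
    return ["".join("1" if comb(r, c) % 2 else " " for c in range(r + 1))
            for r in range(dim)]

for row in triforce(8):
    print(row)
```

With dim = 8 this prints the classic pattern, ending in a fully filled row since every entry of row 7 of Pascal's triangle is odd.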
Understanding the "Mask Input" of a device Hello all, I’ve been pretty successful in building a pre-shaped island with WM, but I still would like to understand the Mask Input of the devices better / more precisely. Would it be right to say that if m is the Mask Input, In1 the first (see below) input of a device and Out the “usual” output of the device, that m * In1 + (1-m) * Out yields the “real”, masked output of the device?? This is how I understood the Mask Input for devices that only have exactly one input, but I was wondering if the generalization to devices with more than one input is correctly made by saying that it’s the first input that enters the “real” output if m>0. (All the above assumes that Out is first computed from all inputs as expected, but without ever taking m into account.) Is that right? Partly? Many thanks in advance! Well, that is what happens(*)! For either single input or any-number input devices… That means that if m == “0”, any device which receives that mask will output the same as the first input… This may sound awkward, but it’s the only way to do it for the general case. If you want more control over what happens in the areas of m<1, then you should do it manually with a chooser after the device. Mask input works the same as connecting the first input and first output to the Chooser () - actually the correct formula would be [b]m*Out + (1-m)*In1[/b] Hi Fil, Well, that is what happens(*)! For either single input or any-number input devices.. Okay, thanks for clarifying this! If you want more control over what happens in the areas of m<1, then you should do it manually with a chooser after the device. Yes, I understand that. Originally, I was looking for a way to vary the size of the filter kernel of the Blur device across the heightmap, but now I realize that that probably is not (easily) possible. (*) - actually the correct formula would be [b]m*Out + (1-m)*In1[/b] Ups. ;)
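The corrected blend formula from the thread is easy to sanity-check outside World Machine. The sketch below uses hypothetical NumPy arrays standing in for heightfields (names and values are illustrative only, not World Machine code):

```python
import numpy as np

def masked_output(out, in1, m):
    """m*Out + (1-m)*In1: m == 1 keeps the device output,
    m == 0 passes the first input straight through."""
    return m * out + (1.0 - m) * in1

in1 = np.array([0.2, 0.5, 0.8])    # first input, e.g. the raw heightfield
out = np.array([0.6, 0.6, 0.6])    # device output, e.g. a blurred version
m = np.array([0.0, 0.5, 1.0])      # mask values
print(masked_output(out, in1, m))  # 0.2, 0.55, 0.6
```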
Draw the shear force and bending moment diagrams for beams • Thread starter DevonZA • Start date In summary, the homework equations state that downward forces are negative, while upward forces are positive. Moments about points are calculated as force multiplied by perpendicular distance. Anti clockwise moments are positive. Homework Statement Homework Equations Shear force is calculated at each point on the beam. Downward forces are negative, upward forces are positive. Moments about points are calculated as force multiplied by perpendicular distance. Clockwise moments are negative. Anti clockwise moments are positive. The Attempt at a Solution SF @ A = -6kN SF @ B = -4kN SF @ C = 12kN SF @ D = 12kN (same as C) BM@B=6kNx1+12kNx0.5 = 12kN CCW BM@C=6kNx1.5+4kNx0.5=11kN CCW BM@D=6kNx3+4kNx2-12kNx1.5=8kN CCW Answers given are: Vmax= -10kN (I can see this from the shear force diagram) Mmax = -11kN.m (point c?) Points of contra-flexure = C (not sure how this is calculated) In your shear force diagram, why does the line slope down from -6 to -10? Wouldn't that be for a uniformly applied load of 4kN along AB? DevonZA said: SF @ D = 12kN (same as C) How do you get that? DevonZA said: BM@B=6kNx1+12kNx0.5 = 12kN CCW This is not how to calculate bending moments. You should only consider forces on one side of the point. Which side does not matter in principle as long as you are consistent. Switching sides will just flip the sign. haruspex said: In your shear force diagram, why does the line slope down from -6 to -10? Wouldn't that be for a uniformly applied load of 4kN along AB? How do you get that? This is not how to calculate bending moments. You should only consider forces on one side of the point. Which side does not matter in principle as long as you are consistent. Switching sides will just flip the sign. Is the 4kN not added to the 6kN therefore giving us 10kN? The shear force at D I thought would be the same as at C because there are no further forces between C and D? 
Looking at the RHS for bending moments: BM@A=-4kNx1+12kNx1.5=14kN CCW BM@B=12kNx0.5 = 6kN CCW I am not sure how the bending moment diagram is supposed to look but I would assume something like this; DevonZA said: Like this: View attachment 139475 BM@B=6kNx1=6kN CCW BM@C=6kNx1.5+4kNx0.5=11kN CCW BM@D=6kNx3+4kNx2-12kNx1.5=8kN CCW I really don't know what the bending moment diagram should look like though? Your shear diagram and your moments for the individual points are correct. The bending moment diagram should be continuous - no steps. To do it properly you should consider a point between A and B, at distance x from A say, and calculate the bending moment there.
Then do likewise for a general point between B and C, etc. But knowing that for point loads it is all straight lines, you can cheat and just connect up the plotted individual points.

DevonZA replied:
Something like this:
Thanks again for your help, Haruspex.

FAQ: Draw the shear force and bending moment diagrams for beams

1. What is the purpose of drawing shear force and bending moment diagrams for beams?
The shear force and bending moment diagrams are important tools in structural analysis and design. They help engineers visualize and understand the internal forces and moments acting on a beam, which are crucial in determining its strength and stability.

2. How do you draw shear force and bending moment diagrams for beams?
First determine the reactions at the supports of the beam, then calculate the internal forces and moments at different points along the beam. These values can be found using equations and concepts from mechanics of materials. Once you have all the values, plot the diagrams following the usual conventions and appropriate scales.

3. What do the curves on the shear force and bending moment diagrams represent?
The shear force diagram shows the variation of shear force along the length of the beam, while the bending moment diagram shows the variation of bending moment. The curves on these diagrams represent the magnitude and direction of these forces and moments at different points on the beam.

4. How do you interpret the shear force and bending moment diagrams?
The shear force diagram helps identify the locations of maximum shear force and the points where the shear force changes direction; this information is important in determining the size and type of supports needed for the beam. The bending moment diagram helps identify the maximum bending moment and the points where the beam is in tension or compression, which is crucial in selecting the appropriate cross-sectional shape and size of the beam.

5. Are there any software programs or online tools available for drawing these diagrams?
Yes. Common ones include MATLAB, AutoCAD, and the SkyCiv Beam Calculator. These tools can save time and effort in calculating and plotting the diagrams, but it is important to have a good understanding of the underlying concepts and equations before using them.
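The left-side bending moment sums worked through in the thread can be sketched the same way. As with the load layout, the positions are an assumption inferred from the thread's numbers (the original beam figure is not shown): 6 kN down at A (x = 0), 4 kN down at B (x = 1.0 m), 12 kN up at C (x = 1.5 m), with D at x = 3.0 m.

```python
# Bending moment from the left (haruspex's recommended method):
# sum of (force * lever arm) over all loads to the left of the point.
# Assumed loads (position in m from A, force in kN; down negative):
loads = [(0.0, -6.0), (1.0, -4.0), (1.5, 12.0)]

def moment(x):
    """Bending moment at x from forces to the left of x."""
    return sum(f * (x - pos) for pos, f in loads if pos <= x)

# Magnitudes match DevonZA's corrected values: |M(B)| = 6, |M(C)| = 11,
# |M(D)| = 8 kN.m. The signs come out negative because downward loads are
# negative, consistent with the given answer Mmax = -11 kN.m at C.
print(moment(1.0), moment(1.5), moment(3.0))

# Between point loads the moment diagram is a straight line, so the value
# midway between C and D equals the average of the endpoint values:
assert abs(moment(2.25) - (moment(1.5) + moment(3.0)) / 2) < 1e-9
```

The final assertion illustrates haruspex's shortcut: with point loads only, the bending moment diagram between loads is linear, so plotting the values at A, B, C, and D and joining them with straight lines gives the full diagram.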