content
stringlengths
86
994k
meta
stringlengths
288
619
Help with making a Pet randomizer Hey, so I’ve been working on this pet unlocking system and I have pretty much everything working except the part of the script where it will choose a random pet. I tried following several tutorials like the @UseCode_Tap but it’s still sometimes doesn’t work with what I have. I even tried to make it round but still was breaking. local petFound = false local petNumber = math.random(1,100) local pet = nil local Egg = PD[EggName] while petFound == false do for i, v in pairs(Egg)do local number = math.random(1,v.Rarity) local newNumber = math.floor((number/5)*5) if number == petNumber then petFound = true pet = v.Name return pet Any help would be useful! 1 Like Using math.random doesn’t work well for pets. In your script there is only a small chance that the petNumber will be equal to any of the egg’s random number. Instead you want to use weight system. Watch the following video, it’s long but you only need to watch part of it that explains how to find random egg. 2 Likes If you’ve got a list and you want to choose a random element from it uniformly, you can do local random = Random.new() local function pickRandom(list) return list[random:NextInteger(1, #list)] local picked_item = pickRandom(item_list) If you’ve got a dictionary of items to item weights, you can pick a random item from that dictionary, weighted according to the associated weight, like this: local item_weights = { A = 1, B = 5, C = 10, local function table_sum(list) local result = 0 for _, value in pairs(list) do result = result + value return result local function pick_random_weighted(dict_item_weight) local weights_sum = table_sum(dict_item_weight) local picked_index = random:NextInteger(1, weights_sum) local picked_item --Figure out which item and weight interval the picked index lies in local interval_start = 0 for item, weight in pairs(dict_item_weight) do local interval_end = interval_start + weight --Update the weight interval --Check if this item is in the weight interval... if picked_index > interval_start and picked_index <= interval_end then picked_item = item -- ... if so, pick it... break -- ... and stop looking. interval_start = interval_end --Update the weight interval return picked_item --This is how you call the function pick_random_weighted(item_weights) -- Prints something like 'C' (usually 'C', since it's the most common) You might want to know the chance that something gets picked: local function get_weighted_chance(dict_item_weight, item) local weights_sum = table_sum(dict_item_weight) local item_weight = dict_item_weight[item] return item_weight/weights_sum And you can test that it actually works as expected like this: --Count how many times each item gets picked local picked_counts = {} for _ = 1, 100000 do local picked = pick_random_weighted(item_weights) picked_counts[picked] = (picked_counts[picked] or 0) + 1 --Print how often each item was picked for k, v in pairs(picked_counts) do print( ("%s: %d/100000\t(%.0f%%)"):format(tostring(k), v, get_weighted_chance(item_weights, k) * 100) ) Which outputs something like A: 6387/100000 (6%) B: 31198/100000 (31%) C: 62415/100000 (62%) Since A has a weight of 1 and the weight sum is 16 it should have a 1/16=6.25% chance of being picked, so everything seems to work. 2 Likes Sorry for the late respond, I will check every suggestion once I have time again. And thanks. 1 Like That is some complicated stuff, but ill try to make it work from what I understand and since I am not a very advanced scripter I will have to scratch my head to figure it out But Thanks! 1 Like If anything is unclear I can try explaining it 1 Like
{"url":"https://devforum.roblox.com/t/help-with-making-a-pet-randomizer/946975","timestamp":"2024-11-11T18:27:28Z","content_type":"text/html","content_length":"36968","record_id":"<urn:uuid:9b0bc031-3153-460f-9379-43566ee900f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00210.warc.gz"}
This package implements hypothesis testing procedures that can be used to identify the number of regimes in a Markov switching model. It includes the Monte Carlo moment-based test of Dufour & Luger (2017), the parametric bootstrap test described in Qu & Zhuo (2021) and Kasahara & Shimotsu (2018), the Monte Carlo Likelihood ratio tests of Rodriguez-Rondon & Dufour (2023a), the optimal test for regime switching of Carrasco, Hu, & Ploberger (2014), and the likelihood ratio test described in Hansen (1992). In addition to testing procedures, the package also includes datasets and functions that can be used to simulate: autoregressive, vector autoregressive, Markov switching autoregressive, and Markov switching vector autoregressive processes among others. Model estimation procedures are also available. For a more detailed description of this package see Rodriguez-Rondon & Dufour (2023b). To install the package, use the following line: Load Package Once package has been installed it can be loaded. The MSTest package includes 3 datasets that can be used as examples. The three datasets are: • hamilton84GNP: US GNP data from 1951Q2 - 1984Q4 used in Hamilton (1989) and Hansen (1992; 1996) • chp10GNP: US GNP data from 1951Q2 - 2010Q4 used by Carrasco, Hu and Ploberger (2014) • USGNP: US GNP data from 1947Q2 - 2022Q2 from FRED They can be loaded using the following code. GNPdata <- USGNP # this can be hamilton82GNP, chp10GNP or USGNP Y <- as.matrix(GNPdata$GNP_logdiff) date <- as.Date(GNPdata$DATE) plot(date,Y,xlab='Time',ylab='GNP - log difference',type = 'l') You an also learn more about these datasets and their sources from their description in the help tab. Model Estimation & Process simulation This first example uses the US GNP growth data from 1951Q2-1984Q4 considered in Hamilton (1989). The data is made available as ‘hamilton84GNP’ through this package. In Hamilton (1989), the model is estimated with four autoregressive lags and only the mean is allowed to change between two (i.e., expansionary and recessionary) regimes and it is estimated by MLE and so we begin by estimating that model. Estimation results can be compared with those found in Hamilton (1994) p. 698. Note however, that standard errors here were obtained using a different approximation method and hence these may differ slightly. set.seed(123) # for initial values y_gnp_gw_84 <- as.matrix(hamilton84GNP$GNP_logdiff) # Set options for model estimation control <- list(msmu = TRUE, msvar = FALSE, method = "MLE", use_diff_init = 5) # Estimate model with p=4 and switch in mean only as in Hamilton (1989) hamilton89_mdl <- MSARmdl(y_gnp_gw_84, p = 4, k = 2, control) # plot smoothed probability of recessionary state This package also provides functions to simulate Markov switching processes among others. To do this, we use the ‘simuMSAR’ function to simulate a Markov switching process and then uses ‘MSARmdl’ to estimate the model. Estimated coefficients may be compared with the true parameters used to generate the data. A plot also shows the fit of the smoothed probabilities. # Define DGP of MS AR process mdl_ms2 <- list(n = 500, mu = c(5,10), sigma = c(1,2), phi = c(0.5), k = 2, P = rbind(c(0.90, 0.10), c(0.10, 0.90))) # Simulate process using simuMSAR() function y_ms_simu <- simuMSAR(mdl_ms2) # Set options for model estimation control <- list(msmu = TRUE, msvar = TRUE, method = "EM", use_diff_init = 10) # Estimate model y_ms_mdl <- MSARmdl(y_ms_simu$y, p = 1, k = 2, control) This third example, the ‘simuMSVAR’ function to simulate a bivariate Markov switching vector autoregressive process and then uses ‘MSVARmdl’ to estimate the model. Estimated coefficients may be compared with the true parameters used to generate the data. A plot also shows the fit of the smoothed probabilities. # Define DGP of MS VAR process mdl_msvar2 <- list(n = 1000, p = 1, q = 2, mu = rbind(c(5,-2), sigma = list(rbind(c(5.0, 1.5), c(1.5, 1.0)), rbind(c(7.0, 3.0), c(3.0, 2.0))), phi = rbind(c(0.50, 0.30), c(0.20, 0.70)), k = 2, P = rbind(c(0.90, 0.10), c(0.10, 0.90))) # Simulate process using simuMSVAR() function y_msvar_simu <- simuMSVAR(mdl_msvar2) # Set options for model estimation control <- list(msmu = TRUE, msvar = TRUE, method = "EM", use_diff_init = 10) # Estimate model y_msvar_mdl <- MSVARmdl(y_msvar_simu$y, p = 1, k = 2, control) Hypothesis Testing The main contribution of this r package is the hypothesis testing procedures it makes available. Here we use The LMC-LRT procedure proposed in Rodriguez Rondon & Dufour (2022a; 2022b). # Define DGP of MS AR process mdl_ms2 <- list(n = 500, mu = c(5,10), sigma = c(1,2), phi = c(0.5), k = 2, P = rbind(c(0.90, 0.10), c(0.10, 0.90))) # Simulate process using simuMSAR() function y_ms_simu <- simuMSAR(mdl_ms2) # Set test procedure options lmc_control = list(N = 99, converge_check = NULL, mdl_h0_control = list(const = TRUE, getSE = TRUE), mdl_h1_control = list(msmu = TRUE, msvar = TRUE, getSE = TRUE, method = "EM", maxit = 500, use_diff_init = 1)) lmc_lrt <- LMCLRTest(y_ms_simu$y, p = 1 , k0 = 1 , k1 = 2, lmc_control) We can also use the moment-based test procedure proposed by Dufour & Luger (2017) # Set test procedure options lmc_control = list(N = 99, simdist_N = 10000, getSE = TRUE) # perform test on Hamilton 1989 data lmc_mb <- DLMCTest(y_ms_simu$y, p = 1, control = lmc_control) The package also makes available the Maximized Monte Carlo versions of both these tests and the standardized likelihood ratio test proposed by Hansen (1992) (see HLRTest()) and the parameter stability test of Carrasco, Hu, & Ploberger (2014) (see CHPTest()). Carrasco, Marine, Liang Hu, and Werner Ploberger. (2014). Optimal test for Markov switching parameters, Econometrica, 82 (2): 765–784. https://doi.org/10.3982/ECTA8609 Dempster, A. P., N. M. Laird, and D. B. Rubin. (1977). Maximum Likelihood from Incomplete Data via the EM Algorithm, Journal of the Royal Statistical Society, Series B 39 (1): 1–38.https://doi.org/ Dufour, Jean-Marie, and Richard Luger. (2017). Identification-robust moment-based tests for Markov switching in autoregressive models, Econometric Reviews 36 (6-9): 713–727. https://doi.org/10.1080/ Kasahara, Hiroyuk, and Katsum Shimotsu. (2018). Testing the number of regimes in Markov regime switching models, arXiv preprint arXiv:1801.06862. Krolzig, Hans-Martin. (1997). The Markov-Switching Vector Autoregressive Model. In: Markov-Switching Vector Autoregressions. Lecture Notes in Economics and Mathematical Systems, Springer, vol 454. Hamilton, James D. (1989). A new approach to the economic analysis of nonstationary time series and the business cycle, Econometrica 57 (2): 357–384. https://doi.org/10.2307/1912559 Hamilton, James D. (1994). Time series analysis, Princeton university press. https://doi.org/10.2307/j.ctv14jx6sm Hansen, Bruce E. (1992). The likelihood ratio test under nonstandard conditions: testing the Markov switching model of GNP, Journal of applied Econometrics 7 (S1): S61–S82. https://doi.org/10.1002/ Rodriguez-Rondon, Gabriel and Jean-Marie Dufour (2022). Simulation-Based Inference for Markov Switching Models, JSM Proceedings, Business and Economic Statistics Section: American Statistical Rodriguez-Rondon, Gabriel and Jean-Marie Dufour (2024a). Monte Carlo Likelihood Ratio Tests for Markov Switching Models, Manuscript, McGill University Economics Department. Rodriguez-Rondon, Gabriel and Jean-Marie Dufour. (2024b). MSTest: An R-package for Testing Markov-Switching Models, Manuscript, McGill University Economics Department. Qu, Zhongjun, and Fan Zhuo. (2021). Likelihood Ratio-Based Tests for Markov Regime Switching, The Review of Economic Studies 88 (2): 937–968. https://doi.org/10.1093/restud/rdaa035
{"url":"https://cran-r.c3sl.ufpr.br/web/packages/MSTest/readme/README.html","timestamp":"2024-11-04T23:26:56Z","content_type":"application/xhtml+xml","content_length":"30884","record_id":"<urn:uuid:4d714053-f4d4-43a7-980c-f213d81ba68a>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00526.warc.gz"}
SOLUTIONS MANUAL TO Auditing and Assurance Services, 13 ed by Arens, I have solutions manuals to all problems and exercises in these textbooks. To get one in an electronic format contact me at: reganrexman( at )gmail(dot)com and let me know its Title, Author and Note : this service is NOT Free, and Don't reply here, instead send an email to: reganrexman( at )gmail(dot)com. SOLUTIONS MANUAL TO A First Course in Differential Equations (7th ed.) Zill & Diferential Equations (5th ed.)Zill & Cullen SOLUTIONS MANUAL TO 2500 Solved Problems in Fluid Mechanics & Hydraulics Schaums by Evett, cheng Liu SOLUTIONS MANUAL TO A Course in Game Theory by Osborne, Rubinstein SOLUTIONS MANUAL TO A Course in Modern Mathematical Physics by Peter Szekeres SOLUTIONS MANUAL TO A Course in Ordinary Differential Equations by Swift, Wirkus SOLUTIONS MANUAL TO A First Course in Abstract Algebra (7th Ed., John B. Fraleigh) SOLUTIONS MANUAL TO A First Course in Differential Equations - The Classic Fifth Edition By Zill, Dennis G SOLUTIONS MANUAL TO A First Course in Differential Equations, 9th Ed by Dennis G. Zill SOLUTIONS MANUAL TO A First Course In Probability 7th Edition by Sheldon M. Ross SOLUTIONS MANUAL TO A First Course in Probability Theory, 6th edition, by S. Ross. SOLUTIONS MANUAL TO A First Course in String Theory, 2004, Barton Zwiebach SOLUTIONS MANUAL TO A First Course in the Finite Element Method, 4th Edition logan SOLUTIONS MANUAL TO A First Course in the Finite Element Method, 5th Edition by Daryl l. logan SOLUTIONS MANUAL TO A Practical Introduction to Data Structures and Algorithm Analysis 2Ed by Shaffer SOLUTIONS MANUAL TO A Quantum Approach to Condensed Matter Physics (Philip L. Taylor & Olle Heinonen) SOLUTIONS MANUAL TO A Short Course in General Relativity 2e by J. Foster and J. D. Nightingale SOLUTIONS MANUAL TO A Short Introduction to Quantum Information and Quantum Computation by Michel Le Bellac SOLUTIONS MANUAL TO A Transition to Advanced Mathematics 5th E by Smith, Eggen, Andre SOLUTIONS MANUAL TO Accounting Principles 8e by Kieso, Kimmel SOLUTIONS MANUAL TO Accounting principles 8th Ed by Weygandt SOLUTIONS MANUAL TO Accounting Principles 8th ed kieso TestBank SOLUTIONS MANUAL TO Accounting principles 9th Ed by Weygandt SOLUTIONS MANUAL TO Accounting, 23 Ed by Carl S. Warren, James M. Reeve, Jonathan Duchac SOLUTIONS MANUAL TO Accounting,8th Ed by Horngren,Harrison, Oliver SOLUTIONS MANUAL TO Adaptive Control, 2nd. Ed., by Astrom, Wittenmark SOLUTIONS MANUAL TO Adaptive Filter Theory (4th Ed., Simon Haykin) SOLUTIONS MANUAL TO Advanced Accounting 10E international ED by Beams , Clement, Anthony, Lowensohn SOLUTIONS MANUAL TO Advanced accounting 9th Ed by Hoyle, Schaefer SOLUTIONS MANUAL TO Advanced Accounting by Baysa & Lupisan (2008 ed.) SOLUTIONS MANUAL TO Advanced Accounting Vol.1 by Guerrero & Peralta (2009 ed.) SOLUTIONS MANUAL TO Advanced Accounting Vol.2 by Guerrero & Peralta (2009 ed.) SOLUTIONS MANUAL TO Advanced Calculus Gerald B. Folland SOLUTIONS MANUAL TO Advanced Digital Design with the Verilog HDL by Michael D. Ciletti SOLUTIONS MANUAL TO Advanced Dynamics (Greenwood) SOLUTIONS MANUAL TO Advanced Engineering Electromagnetics by Constantine A. Balanis SOLUTIONS MANUAL TO Advanced Engineering Mathematics 3rd ed zill SOLUTIONS MANUAL TO Advanced Engineering Mathematics 8Ed Erwin Kreyszig SOLUTIONS MANUAL TO Advanced Engineering Mathematics by Erwin Kreyszig, 9th ed SOLUTIONS MANUAL TO Advanced Engineering Mathematics, 6th Edition by Peter V. O'Neil SOLUTIONS MANUAL TO Advanced Engineering Mathematics, 7th Edition by Peter V. O'Neil SOLUTIONS MANUAL TO Advanced Engineering Mathematics,2E, by Zill, Cullen SOLUTIONS MANUAL TO Advanced Engineering Thermodynamics, 3rd Edition by Adrian Bejan SOLUTIONS MANUAL TO Advanced Financial Accounting by Baker SOLUTIONS MANUAL TO Advanced Financial Accounting 5 Ed by Baker SOLUTIONS MANUAL TO Advanced Financial Accounting 8th Ed by Baker SOLUTIONS MANUAL TO Advanced Functions and Introductory Calculus by E. G. Carli, Ruth Malinowski, Ronald G. Scoins, Ronald G. Dunkley SOLUTIONS MANUAL TO Advanced Industrial Economics by Martin SOLUTIONS MANUAL TO Advanced Industrial Economics, 2nd ED Stephen Martin SOLUTIONS MANUAL TO Advanced Macroeconomics 2nd edition by David Romer SOLUTIONS MANUAL TO Advanced Macroeconomics, by David Romer SOLUTIONS MANUAL TO Advanced Mechanics of Materials 6th ed by Boresi, Schmidt SOLUTIONS MANUAL TO Advanced Modern Engineering Mathematics 3rd Ed Glyn James SOLUTIONS MANUAL TO Advanced Modern Engineering Mathematics 4th Ed Glyn James SOLUTIONS MANUAL TO Advanced Modern Engineering Mathematics, 3rd Ed., by G. James SOLUTIONS MANUAL TO Advanced Organic Chemistry Part A- Structure and Mechanisms 5th E by Carey, Sundberg SOLUTIONS MANUAL TO Aircraft Structures for Engineering Students (4th Ed., T.H.G. Megson) SOLUTIONS MANUAL TO Algebra & Trigonometry and Precalculus, 3rd Ed By Beecher, Penna, Bittinger SOLUTIONS MANUAL TO Algebra Baldor SOLUTIONS MANUAL TO Algebra-By Thomas W. Hungerford SOLUTIONS MANUAL TO Algorithm Design (Jon Kleinberg & ??va Tardos) SOLUTIONS MANUAL TO An Interactive Introduction to Mathematical Analysis 2nd E (Jonathan Lewin) SOLUTIONS MANUAL TO An Introduction To Analysis 4th Ed by William Wade SOLUTIONS MANUAL TO An Introduction to Database Systems (8th Ed., C.J. Date) SOLUTIONS MANUAL TO An Introduction to Economic Dynamics by Ronald Shone SOLUTIONS MANUAL TO An Introduction to Modern Astrophysics (2nd Ed., Bradley W. Carroll & Dale A. Ostlie) SOLUTIONS MANUAL TO An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition 2nd edition by Bethard, Jurafsky, Martin SOLUTIONS MANUAL TO An Introduction to Numerical Analysis By Endre Süli,David F. Mayers SOLUTIONS MANUAL TO An Introduction to Ordinary Differential Equations (James C. Robinson) SOLUTIONS MANUAL TO An Introduction to Signals and Systems by John Stuller SOLUTIONS MANUAL TO An Introduction to Stochastic Modeling 3rd Ed by Taylor, Karlin SOLUTIONS MANUAL TO An Introduction to the Finite Element Method (3rd Ed., J. N. Reddy) SOLUTIONS MANUAL TO An Introduction to Thermal Physics by Schroeder, Daniel V SOLUTIONS MANUAL TO An Introduction to Thermodynamics and Statistical Mechanics (2nd Ed, Keith Stowe) SOLUTIONS MANUAL TO An Introduction to Wavelets through Linear Algebra by Frazier SOLUTIONS MANUAL TO Analog Integrated Circuit Design, by Johns, Martin SOLUTIONS MANUAL TO Analysis and Design of Analog Integrated Circuits (4th Edition) by Gray , Lewis , Meyer SOLUTIONS MANUAL TO Analysis With an Introduction to Proof 4th E by Lay SOLUTIONS MANUAL TO Analytical Chemistry, Higson SOLUTIONS MANUAL TO Analytical Mechanics 7E by Grant R. Fowles, George L. Cassiday SOLUTIONS MANUAL TO Antenna Theory 2nd edition by Balanis SOLUTIONS MANUAL TO Antenna Theory and Design, 2nd Ed Vol.1 by Stutzman, Thiele SOLUTIONS MANUAL TO Antennas for All Applications (3rd Ed., John Kraus & Ronald Marhefka) SOLUTIONS MANUAL TO Applied Calculus by Hallett,Gleason, Lock, Flath SOLUTIONS MANUAL TO Applied Calculus for the Managerial, Life, and Social Sciences, 7 E, by Soo T. Tan SOLUTIONS MANUAL TO Applied Calculus for the Managerial, Life, and Social Sciences, 8 E, by Soo T. Tan SOLUTIONS MANUAL TO Applied Econometric Time Series, 2nd Edition by Enders SOLUTIONS MANUAL TO Applied Electromagnetism 2nd Ed by Shen and Huang SOLUTIONS MANUAL TO Applied Finite Element Analysis 2ed, by LJ SEGERLIND SOLUTIONS MANUAL TO Applied Fluid Mechanics (6th Ed., Mott) SOLUTIONS MANUAL TO Applied Linear Regression 3rd Ed by Sanford Weisberg SOLUTIONS MANUAL TO Applied Numerical Analysis, 7th Edition, by Gerald, Wheatley SOLUTIONS MANUAL TO Applied Numerical Methods with MATLAB for Engineers and Scientists 2nd E by Chapra SOLUTIONS MANUAL TO Applied Numerical Methods with MATLAB for Engineers and Scientists( Steven C. Chapra) SOLUTIONS MANUAL TO Applied Partial Differential Equations (4th Ed., Haberman) SOLUTIONS MANUAL TO Applied Partial Differential Equations by J. David Logan SOLUTIONS MANUAL TO Applied Probability Models with Optimization Applications By Sheldon M. Ross SOLUTIONS MANUAL TO Applied Quantum Mechanics ( A. F. J. Levi ) SOLUTIONS MANUAL TO Applied Statistics and Probability for Engineers ( 2nd Ed., Douglas Montgomery & George Runger ) SOLUTIONS MANUAL TO Applied Statistics and Probability for Engineers (3rd Ed., Douglas Montgomery & George Runger) SOLUTIONS MANUAL TO Applied Strength of Materials (4th Ed., Mott) SOLUTIONS MANUAL TO Applied Strength of Materials (5th Ed., Mott) SOLUTIONS MANUAL TO Applying Maths in the Chemical and Biomolecular Sciences, Beddard SOLUTIONS MANUAL TO Artificial Intelligence A Modern Approach 2e by Russell, Norvig SOLUTIONS MANUAL TO Artificial Neural Networks by B. Yegnanarayana and S. Ramesh SOLUTIONS MANUAL TO Assembly Language for Intel-Based Computers ( 3rd Edition ) by Kip R. Irvine SOLUTIONS MANUAL TO Auditing and Assurance Services- An Integrated Approach 12E by Arens SOLUTIONS MANUAL TO Auditing and Assurance Services, 12th edition, Alvin A Arens, Randal J Elder, Mark Beasley SOLUTIONS MANUAL TO Auditing and Assurance Services, 13 ed by Arens, Elder, Beasley SOLUTIONS MANUAL TO Auditing and Assurance Services, 2nd Ed by Louwers SOLUTIONS MANUAL TO Automatic Control Systems 9 Ed by Kuo, Golnaraghi SOLUTIONS MANUAL TO Automatic Control Systems, 8E, by Kuo, Golnaraghi SOLUTIONS MANUAL TO Basic Econometrics 4 ed by Damodar N. Gujarati SOLUTIONS MANUAL TO Basic Electrical Engineering By Nagrath, D P Kothari SOLUTIONS MANUAL TO Basic Electromagnetics with Applications by Nannapaneni Narayana Rao SOLUTIONS MANUAL TO Basic Engineering Circuit Analysis, 7th Ed by David Irwin SOLUTIONS MANUAL TO Basic Engineering Circuit Analysis, 8th Edition by J. David Irwin, R. Mark Nelms SOLUTIONS MANUAL TO Basic Engineering Circuit Analysis, 9th Ed by Irwin, Nelms SOLUTIONS MANUAL TO Basic English Grammar 2nd edition by Pitzer Mark Wade Lieu ( Test Bank ) SOLUTIONS MANUAL TO Basic Heat and Mass Transfer by A. F. Mills SOLUTIONS MANUAL TO Basic Principles and Calculations in Chemical engineering 7th Edition by David M. Himmelblau, James B. Riggs SOLUTIONS MANUAL TO Basic Probability Theory by Robert B. Ash SOLUTIONS MANUAL TO Bayesian Core by Christian P. Robert and Jean-Michel Marin SOLUTIONS MANUAL TO Biochemistry, 5th edition by Reginald H. Garrett and Charles M. Grisham SOLUTIONS MANUAL TO Bioprocess Engineering Principles (Pauline M. Doran) SOLUTIONS MANUAL TO Business Statistics - Decision Making 7th E by David F. Groebner SOLUTIONS MANUAL TO C++ for Computer Science and Engineering by Vic Broquard SOLUTIONS MANUAL TO C++ How to Program 3rd edition - Deitel SOLUTIONS MANUAL TO C++ How to Program 7th Ed by Deitel SOLUTIONS MANUAL TO CALCULO VECTORIAL 7th Ed. by Louis Leithold SOLUTIONS MANUAL TO Calculus 8th Edition by Varberg, Purcell, Rigdon SOLUTIONS MANUAL TO Calculus - Early Transcendentals, 6th E, by Anton, Bivens, Davis SOLUTIONS MANUAL TO Calculus - Early Transcendentals, 7E, by Anton, Bivens, Davis SOLUTIONS MANUAL TO Calculus - Late Transcendentals Single Variable, 8th Ed by Anton, Bivens, Davis SOLUTIONS MANUAL TO Calculus ( 9th Ed., Dale Varberg, Edwin Purcell & Steve Rigdon) SOLUTIONS MANUAL TO Calculus 2nd edition-M. Spivak SOLUTIONS MANUAL TO Calculus 3rd Ed by Michael Spivak SOLUTIONS MANUAL TO Calculus 6th ed by James Stewart SOLUTIONS MANUAL TO Calculus 8th Ed by Ron Larson, Robert P. Hostetler, Bruce H. Edwards SOLUTIONS MANUAL TO Calculus A Complete Course 6th Edition by by R.A. Adams SOLUTIONS MANUAL TO CALCULUS An Intuitive and Physical Approach 2nd ed by Morris Kline SOLUTIONS MANUAL TO Calculus and its Applications (11th Ed., Larry J Goldstein, Schneider, Lay & Asmar) SOLUTIONS MANUAL TO Calculus by Gilbert Strang SOLUTIONS MANUAL TO Calculus early transcendentals 8th Ed, by Anton Bivens Davis SOLUTIONS MANUAL TO Calculus Early Transcendentals Signal Variable 10th edition by Howard Anton and Bivens SOLUTIONS MANUAL TO Calculus Early Transcendentals, 5th Edition, JAMES STEWART SOLUTIONS MANUAL TO Calculus George Thomas 10th ed Vol 1 SOLUTIONS MANUAL TO Calculus of Variations MA 4311 LECTURE NOTES ( Russak ) SOLUTIONS MANUAL TO Calculus On Manifolds by Spivak SOLUTIONS MANUAL TO Calculus One & Several Variables 8e by S Salas SOLUTIONS MANUAL TO Calculus Vol 2 by Apostol SOLUTIONS MANUAL TO CALCULUS VOL.1 , 2nd edition by Tom M. Apostol SOLUTIONS MANUAL TO Calculus Volume 1 by J. Marsden, A. Weinstein SOLUTIONS MANUAL TO Calculus With Analytic Geometry 4th ( Henry Edwards & David E. Penney) SOLUTIONS MANUAL TO Calculus with Applications 8 Edition by Lial, Greenwell, Ritchey SOLUTIONS MANUAL TO Calculus, 4th edition stewart SOLUTIONS MANUAL TO Calculus, An Applied Approach, 7E, by Larson SOLUTIONS MANUAL TO Calculus, Early Transcendentals 7 Ed by Edwards & Penney SOLUTIONS MANUAL TO Calculus, Single and Multivariable, 4E.,Vol 1& Vol 2 by Hughes-Hallett,McCallum SOLUTIONS MANUAL TO Calculus, Single Variable, 3E by Hughes-Hallett,McCallum SOLUTIONS MANUAL TO Chemical and Engineering Thermodynamics 3Ed by Stanley I. Sandler SOLUTIONS MANUAL TO Chemical Engineering Design (Coulson & Richardson's Chemical Engineering - Volume 6) - (4th Ed., Sinnott) SOLUTIONS MANUAL TO Chemical Engineering Vol 2 ( 5th ed ) and Vol 3 ( 3rd ed ) by Backhurst , Harker, Coulson, Richardson SOLUTIONS MANUAL TO Chemical Engineering Volume 1, 6th Edition, by Richardson, Coulson,Backhurst, Harker SOLUTIONS MANUAL TO Chemical Principles The Quest for Insight 4th Ed by Atkins SOLUTIONS MANUAL TO Chemical Reaction Engineering 3rd ED by Octave Levenspiel SOLUTIONS MANUAL TO Chemical, Biochemical, and Engineering Thermodynamics, 4th Ed by Sandler SOLUTIONS MANUAL TO Chemistry 10th edition by Whitten, Davis, and Larry Peck SOLUTIONS MANUAL TO Chemistry 2nd Edition Vol.1 by Julia Burdge SOLUTIONS MANUAL TO Chemistry 9th Edition by Zumdahl SOLUTIONS MANUAL TO Chemistry For Scientists And Engineers by Leonard W.Fine, Beall, Ealy, Kleinman Test Bank SOLUTIONS MANUAL TO Chemistry, 10th Ed by Chang SOLUTIONS MANUAL TO Chip Design for Submicron VLSI CMOS Layout and Simulation, John P. Uyemura SOLUTIONS MANUAL TO Cisco Technical Solution Series IP Telephony Solution Guide Version 2.0 SOLUTIONS MANUAL TO Classical Dynamics of Particles and Systems, 5th Ed, by Marion, Thornton SOLUTIONS MANUAL TO Classical Dynamics, A Contemporary Approach (Jorge V. Jose) SOLUTIONS MANUAL TO Classical Electrodynamics 2nd edition by John David Jackson SOLUTIONS MANUAL TO Classical Electrodynamics by John David Jackson SOLUTIONS MANUAL TO Classical Electrodynamics by Kasper Van Wijk SOLUTIONS MANUAL TO Classical Mechanics (Douglas Gregory) SOLUTIONS MANUAL TO Classical Mechanics 2nd Ed by Goldstein SOLUTIONS MANUAL TO CMOS Analog Circuit Design, 2ed by Phillip E. Allen, Douglas R. Holberg SOLUTIONS MANUAL TO CMOS- Circuit Design, Layout, and Simulation, Revised 2nd Ed by R. Jacob Baker SOLUTIONS MANUAL TO Cmos Digital Integrated Circuits , Sung-Mo Kang,Yusuf Leblebici SOLUTIONS MANUAL TO CMOS Mixed-Signal Circuit Design, 2nd Ed by R. Jacob Baker SOLUTIONS MANUAL TO CMOS VLSI Design Circuit & Design Perspective 3rd Ed by Haris & West SOLUTIONS MANUAL TO College Algebra 8th Ed by Michael Sullivan SOLUTIONS MANUAL TO COLLEGE ALGEBRA AND TRIGONOMETRY 6th E by Aufmann, Barker, Verity SOLUTIONS MANUAL TO College Geometry A Discovery Approach 2nd E by David Kay SOLUTIONS MANUAL TO College Physics 8 ED by Serway, Faughn, Vuille SOLUTIONS MANUAL TO College Physics 9 ED by Serway, Faughn, Vuille SOLUTIONS MANUAL TO College Physics 9th edition by Sears, Zemansky SOLUTIONS MANUAL TO Communication Networks, 2e, Alberto Leon-Garcia, Indra Widjaja SOLUTIONS MANUAL TO Communication Systems (4th Ed., Simon Haykin) SOLUTIONS MANUAL TO Communication Systems An Introduction to Signals and Noise in Electrical Communication, 4E, A. Bruce Carlson SOLUTIONS MANUAL TO Communication Systems Engineering (2nd Ed., John G. Proakis & Masoud Salehi) SOLUTIONS MANUAL TO Complex Variables and Applications 7 ed by JW Brown RV Churchill SOLUTIONS MANUAL TO Complex Variables and Applications 8th ed by James Ward Brown, Ruel V.Churchill SOLUTIONS MANUAL TO Complex Variables with Applications, 3rd ED by David A. Wunsch SOLUTIONS MANUAL TO Computational Techniques for Fluid Dynamics Srinivas, K., Fletcher, C.A.J. SOLUTIONS MANUAL TO Computer Architecture - A Quantitative Approach, 4th Ed by Hennessy, Patterson SOLUTIONS MANUAL TO Computer Architecture Pipelined & Parallel Processor Design by Michael J Flynn SOLUTIONS MANUAL TO Computer Graphics Using OpenGL 3rd E by Francis S Hill, Jr. & Stephen M Kelley SOLUTIONS MANUAL TO Computer Networking A Top-Down Approach Featuring the Internet, 3E Kurose,Ross SOLUTIONS MANUAL TO Computer Networking: A Top-Down Approach (4th Ed., James F. Kurose & Keith W. Ross) SOLUTIONS MANUAL TO Computer Networks - A Systems Approach 3 ed by Peterson Davie SOLUTIONS MANUAL TO Computer Networks - A Systems Approach 4 ed by Peterson Davie SOLUTIONS MANUAL TO Computer Networks A Systems Approach, 2nd Edition, Larry Peterson, Bruce Davie SOLUTIONS MANUAL TO Computer Networks, 4th Ed., by Andrew S. Tanenbaum SOLUTIONS MANUAL TO Computer Organization 3rd Edition by Carl Hamacher , Zvonoko Vranesic ,Safwat Zaky SOLUTIONS MANUAL TO Computer Organization and Architecture Designing for Performance 8th E William Stallings SOLUTIONS MANUAL TO Computer Organization and Architecture: Designing for Performance (7th Ed., William Stallings) SOLUTIONS MANUAL TO Computer Organization and Design The Hardware Software Interface 4 ed by David A Patterson SOLUTIONS MANUAL TO Computer Organization and Design The Hardware Software Interface, 3rd edition by David A Patterson and John L Hennessy SOLUTIONS MANUAL TO Computer Science Illuminated 4th ed by Nell Dale, John Lewis SOLUTIONS MANUAL TO Computer system architecture 3rd Ed Morris Mano SOLUTIONS MANUAL TO Computer Systems Organization and Architecture by John D. Carpinelli SOLUTIONS MANUAL TO Computer Vision A Modern Approach by Forsyth, Ponce SOLUTIONS MANUAL TO Computer-Controlled Systems 3rd ED by Astrom, Wittenmark SOLUTIONS MANUAL TO Concepts and Applications of Finite Element Analysis (4th Ed., Cook, Malkus, Plesha & Witt) SOLUTIONS MANUAL TO Concepts in Thermal Physics 2nd Ed by Blundell SOLUTIONS MANUAL TO Concepts of Modern Physics 6th ED by Arthur Beiser SOLUTIONS MANUAL TO Concepts of Physics (Volume 1 & 2) by H.C. Verma SOLUTIONS MANUAL TO Concepts of Programming Languages 7th ED by Sebesta SOLUTIONS MANUAL TO Construction Surveying and Layout 2ed by Crawford SOLUTIONS MANUAL TO Contemporary Engineering Economics (4th Ed., Chan Park) SOLUTIONS MANUAL TO Contemporary Engineering Economics 5th Ed by Chan S. Park SOLUTIONS MANUAL TO Continuum Electromechanics by James R. Melcher SOLUTIONS MANUAL TO Control Systems Engineering, 4E, by Norman Nise SOLUTIONS MANUAL TO Control Systems Principles and Design 2e by M. Gopal SOLUTIONS MANUAL TO Corporate Finance & MyFinanceLab Student Access Code Card, Global 2 Ed by Berk, DeMarzo SOLUTIONS MANUAL TO Corporate Finance 8th edition by Ross SOLUTIONS MANUAL TO Corporate Finance 9th edition by Ross SOLUTIONS MANUAL TO Corporate Finance The Core plus MyFinanceLab Student Access Kit (Jonathan Berk & Peter DeMarzo) SOLUTIONS MANUAL TO Corporate Finance, 7E, by Ross SOLUTIONS MANUAL TO Corporations, Partnerships, Estates and Trusts ( 2011 ) by Hoffman, Maloney SOLUTIONS MANUAL TO COST ACCOUNTING - Creating Value for Management 5th E by MICHAEL MAHER SOLUTIONS MANUAL TO Cost Accounting 14th edition by Carter SOLUTIONS MANUAL TO Cost Accounting-A Managerial Emphasis 13th Ed by Charles Horngren SOLUTIONS MANUAL TO Cost Accounting-A Managerial Emphasis 14th Ed by Charles Horngren, Srikant M.Datar, Rajan SOLUTIONS MANUAL TO Cost and Managerial Accounting 3rd edition by Barfield SOLUTIONS MANUAL TO Cryptography and Network Security (4th Ed., William Stallings) SOLUTIONS MANUAL TO Data & Computer Communication, 7th Ed, by William Stallings SOLUTIONS MANUAL TO Data Communications and Networking by Behroz Forouzan SOLUTIONS MANUAL TO Data Communications Networking 4th Ed by Behrouz Forouzan SOLUTIONS MANUAL TO Data Structures and Algorithm Analysis in C 2nd ED by Weiss SOLUTIONS MANUAL TO Data Structures with Java by John R. Hubbard, Anita Huray SOLUTIONS MANUAL TO Database Management Systems, 3rd Ed., by Ramakrishnan, Gehrke ( replace the old ) SOLUTIONS MANUAL TO Database System Concepts 4th ED by Silberschatz , Korth , Sudarshan SOLUTIONS MANUAL TO Database System Concepts 5th ED by Silberschatz, Korth, Sudarshan SOLUTIONS MANUAL TO Database Systems An Application-Oriented Approach (introductory Version ) by Michael Kifer, Arthur Bernstein, Philip M.Lewis, Prabin K. Panigrahi SOLUTIONS MANUAL TO Design Analysis in Rock Mechanics by William G. Pariseau SOLUTIONS MANUAL TO Design and Analysis of Experiments, 6E, by Montgomery SOLUTIONS MANUAL TO Design of Analog CMOS Integrated Circuits by Razavi SOLUTIONS MANUAL TO Design of Analog CMOS Integrated Circuits, 2 Edition, by Razavi Douglas C. Montgomery SOLUTIONS MANUAL TO Design of Fluid Thermal Systems, 2nd Edition janna SOLUTIONS MANUAL TO Design of Machinery (3rd Ed., Norton) SOLUTIONS MANUAL TO Design of machinery 4th ed by Norton SOLUTIONS MANUAL TO Design of Reinforced Concrete, 8th Ed by McCormac, Brown SOLUTIONS MANUAL TO Design with Operational Amplifiers and Analog Integrated Circuits (3rd Ed., Sergio Franco) SOLUTIONS MANUAL TO Device Electronics for Integrated Circuits 3rd Edition by muller kamins SOLUTIONS MANUAL TO Differential Equations & Linear Algebra 3rd ed by C. Henry Edwards & David E. Penney SOLUTIONS MANUAL TO Differential Equations and Boundary Value Problems - Computing and Modeling 4th Ed by Edwards, Penney SOLUTIONS MANUAL TO Differential Equations and Linear Algebra ( 2nd Ed., Jerry Farlow, Hall, McDill & West) SOLUTIONS MANUAL TO Differential Equations and Linear Algebra ( C. Henry Edwards & David E. Penney) SOLUTIONS MANUAL TO Differential Equations and Linear Algebra 3e by Stephen W Goode SOLUTIONS MANUAL TO Differential Equations with Boundary Value Problems (2e, John Polking, Al Boggess & Arnold) SOLUTIONS MANUAL TO Digital & Analog Communication Systems (7th Ed., Leon W. Couch) SOLUTIONS MANUAL TO Digital COMMUNICATION 3rd edition by John R. Barry, Edward A. Lee, David G. Messerschmitt SOLUTIONS MANUAL TO Digital Communications Fundamentals and Applications 2e Bernard Sklar SOLUTIONS MANUAL TO Digital Communications, 4E, by Proakis SOLUTIONS MANUAL TO Digital Control & State Variable Methods 2nd Ed by Madan Gopal SOLUTIONS MANUAL TO Digital Design (4th Ed., M. Morris Mano & Michael D. Ciletti) SOLUTIONS MANUAL TO Digital Design: Principles and Practices Package (4th Ed., John F. Wakerly) SOLUTIONS MANUAL TO Digital Fundamentals ( 10th Ed, Thomas L. Floyd ) SOLUTIONS MANUAL TO Digital Fundamentals ( 9th Ed., Thomas L. Floyd) SOLUTIONS MANUAL TO Digital Image Processing, 2e, by Gonzalez, Woods SOLUTIONS MANUAL TO Digital Integrated Circuits, 2nd Ed., by Rabaey SOLUTIONS MANUAL TO Digital Logic Design by Mano SOLUTIONS MANUAL TO Digital Signal Processing - A Modern Introduction, by Ashok Ambardar SOLUTIONS MANUAL TO Digital Signal Processing Principles, Algorithms and Applications, 3rd Edition by John G. Proakis SOLUTIONS MANUAL TO Digital Signal Processing 4th Ed by Proakis, Manolakis SOLUTIONS MANUAL TO Digital Signal Processing a computer based approach (2nd Ed.) (Mitra) SOLUTIONS MANUAL TO Digital Signal Processing a computer based approach (Mitra) SOLUTIONS MANUAL TO Digital Signal Processing by Proakis & Manolakis SOLUTIONS MANUAL TO Digital Signal Processing by Thomas J. Cavicchi SOLUTIONS MANUAL TO Digital Systems - Principles and Applications (10th Ed., Ronald Tocci, Neal Widmer, Greg Moss) SOLUTIONS MANUAL TO Discovering Advanced Algebra - An Investigative Approach SOLUTIONS MANUAL TO Discrete Mathematics ( 6th Ed., Richard Johnsonbaugh ) SOLUTIONS MANUAL TO Discrete Mathematics ( 6th Edition) by Richard Johnsonbaugh SOLUTIONS MANUAL TO Discrete Mathematics 3rd edition by Edgar, Goodaire and Parmenter SOLUTIONS MANUAL TO Discrete Random Signals and Statistical Signal Processing Charles W. Therrien SOLUTIONS MANUAL TO Discrete Time Signal Processing, 2nd Edition, Oppenheim SOLUTIONS MANUAL TO Discrete-Time Control Systems 2nd Ed by Ogata SOLUTIONS MANUAL TO Discrete-Time Signal Processing 3rd ed by Oppenheim, Schafer SOLUTIONS MANUAL TO DSP First A Multimedia Approach-Mclellan, Schafer & Yoder SOLUTIONS MANUAL TO Dynamic Modeling and Control of Engineering Systems 2 E T. Kulakowski , F. Gardner, Shearer SOLUTIONS MANUAL TO Dynamics of Flight- Stability and Control, 3rd Ed by Etkin, Reid SOLUTIONS MANUAL TO Dynamics of Mechanical Systems by C. T. F. Ross SOLUTIONS MANUAL TO Dynamics of Structures 2nd ED by Clough, Penzien SOLUTIONS MANUAL TO Dynamics of structures 3rd E by Anil K. Chopra SOLUTIONS MANUAL TO Econometric Analysis of Cross Section and Panel Data (2003 ) by Jeffrey M Wooldridge SOLUTIONS MANUAL TO Econometric Analysis, 5E, by Greene SOLUTIONS MANUAL TO Econometric Analysis, 6E, by Greene SOLUTIONS MANUAL TO Econometrics of Financial Markets, by Adamek, Cambell, Lo, MacKinlay, Viceira SOLUTIONS MANUAL TO Econometrics, 2nd edition by Badi H. Baltagi SOLUTIONS MANUAL TO Econometrics: A Modern Introduction (Michael P. Murray) SOLUTIONS MANUAL TO Elastic Solutions for Soil and Rock Mechanics by Poulos and Davis SOLUTIONS MANUAL TO Electric Circuits (7th Ed., James W Nilsson & Susan Riedel) SOLUTIONS MANUAL TO Electric Circuits (8th Ed., James W Nilsson & Susan Riedel) SOLUTIONS MANUAL TO Electric Circuits 9th Ed by Nilsson, Riedel SOLUTIONS MANUAL TO Electric Machinery 6th ed. A.E. Fitzgerald,Kingsley,Umans SOLUTIONS MANUAL TO Electric Machinery and Power System Fundamentals (Chapman) SOLUTIONS MANUAL TO Electric Machinery Fundamentals (4th Ed., Chapman) SOLUTIONS MANUAL TO Electric Machines Analysis and Design Applying MATLAB,Jim Cathey SOLUTIONS MANUAL TO Electric Machines By D. P. Kothari, I. J. Nagrath SOLUTIONS MANUAL TO Electrical Engineering - Principles and Applications 5E Hambley SOLUTIONS MANUAL TO Electrical Engineering 9th ed by Lincoln D. Jones SOLUTIONS MANUAL TO Electrical Engineering Principles and Applications (3rd Ed., Allan R. Hambley) SOLUTIONS MANUAL TO Electrical Engineering Principles and Applications (4th Ed., Allan R. Hambley) SOLUTIONS MANUAL TO Electrical Machines, Drives and Power Systems (6th Ed., Theodore Wildi) SOLUTIONS MANUAL TO Electromagnetic Fields and Energy by Haus, Melcher SOLUTIONS MANUAL TO Electromagnetics Problem Solver (Problem Solvers) By The Staff of REA SOLUTIONS MANUAL TO Electromagnetism. Principles and Applications by LORRAIN, PAUL ; CORSON, DAVID SOLUTIONS MANUAL TO Electromechanical Dynamics Part 1, 2, 3 by Herbert H. Woodson, James R. Melcher SOLUTIONS MANUAL TO Electronic Circuit Analysis, 2nd Ed., by Donald Neamen SOLUTIONS MANUAL TO Electronic Devices 6th ed and electronic devices Electron Flow Version 4th ed, Floyd SOLUTIONS MANUAL TO Electronic Devices and Circuit Theory 10th Ed by Boylestad, Nashelsky SOLUTIONS MANUAL TO Electronic Devices and Circuit Theory 8th Ed by Robert Boylestad SOLUTIONS MANUAL TO Electronic Devices and Circuit Theory 9th Ed by Boylestad, Nashelsky SOLUTIONS MANUAL TO Electronic Physics Strabman SOLUTIONS MANUAL TO Electronics, 2nd Ed., by Allan R. Hambley SOLUTIONS MANUAL TO Elementary Differential Equations ( Werner E. Kohler, Johnson) SOLUTIONS MANUAL TO Elementary Differential Equations and Boundary Value Problems 8th Ed by Boyce, Diprima SOLUTIONS MANUAL TO Elementary Linear Algebra 4th edition by Stephen Andrilli & David Hecker SOLUTIONS MANUAL TO Elementary Linear Algebra 5th edition by Stanley I. Grossman SOLUTIONS MANUAL TO Elementary Linear Algebra by Matthews SOLUTIONS MANUAL TO Elementary Linear Algebra with Applications (9th Ed., Howard Anton & Chris Rorres) SOLUTIONS MANUAL TO Elementary Linear Algebra with Applications 9E by Kolman, Hill SOLUTIONS MANUAL TO Elementary Mechanics & Thermodynamics by John W. Norbury SOLUTIONS MANUAL TO Elementary mechanics & thermodynamics jhon w.Nobury SOLUTIONS MANUAL TO ELEMENTARY NUMBER THEORY AND ITS APPLICATIONS, (5TH EDITION, Bart Goddard, Kenneth H. Rosen) SOLUTIONS MANUAL TO Elementary Number Theory and Its Applications, 6th Ed by Kenneth H. Rosen SOLUTIONS MANUAL TO Elementary Principles of Chemical Processes (3rd Ed., Felder & Rousseau) SOLUTIONS MANUAL TO Elementary Principles of Chemical Processesv 3rd Update Edition 2005 (Felder & Rousseau) SOLUTIONS MANUAL TO Elementary Statistics Using The Graphing Calculator 9 Ed by MILTON LOYER SOLUTIONS MANUAL TO Elementary Statistics Using the Graphing Calculator For the TI-83-84 Plus (Mario F. Triola) SOLUTIONS MANUAL TO ELEMENTARY SURVEYING 13th ED by Ghilani,Wolf SOLUTIONS MANUAL TO ELEMENTARY SURVEYING AN INTRODUCTION TO GEOMATICS 13TH EDITION BY Charles D. Ghilani and Paul R. Wolf ISBN0132555085 SOLUTIONS MANUAL TO Elements of Information Theory - M. Cover, Joy A. Thomas SOLUTIONS MANUAL TO Elements Of Chemical Reaction Engineering 4th Edition by H. Scott Fogler, Max Nori, Brian Vicente SOLUTIONS MANUAL TO Elements of Chemical Reaction Engineering by Fogler hubbard, hamman , johnson , 3rd edition SOLUTIONS MANUAL TO Elements of Deductive Inference by Bessie, Glennan SOLUTIONS MANUAL TO Elements of Electromagnetics , 2 ed by Matthew N. O. Sadiku SOLUTIONS MANUAL TO Elements of Electromagnetics , 3ed by Matthew N. O. Sadiku SOLUTIONS MANUAL TO Elements of Forecasting in Business, Finance, Economics and Government by Diebold SOLUTIONS MANUAL TO Embedded Microcomputer Systems Real Time Interfacing, 2nd Edition , Jonathan W. Valvano SOLUTIONS MANUAL TO Engineering and Chemical Thermodynamics (Koretsky) SOLUTIONS MANUAL TO ENGINEERING BIOMECHANICS (STATICS) by Angela Matos, Eladio Pereira, Juan Uribe and Elisandra Valentin SOLUTIONS MANUAL TO Engineering Circuit Analysis 6Ed, Luay Shaban SOLUTIONS MANUAL TO Engineering Circuit Analysis 6th ed by Hayt SOLUTIONS MANUAL TO Engineering Circuit Analysis 7th Ed. by William H. Hayt Jr SOLUTIONS MANUAL TO Engineering Economic Analysis 9th ED by Newnan SOLUTIONS MANUAL TO Engineering Economy 15th Edition by William G. Sullivan, Elin M. Wicks, C. Patrick Koelling SOLUTIONS MANUAL TO Engineering Economy 7th Edition by Leland Blank, Anthony Tarquin SOLUTIONS MANUAL TO Engineering Economy and the Decision-Making Process (Joseph C. Hartman) SOLUTIONS MANUAL TO Engineering Economy, 14 Ed by Sullivan SOLUTIONS MANUAL TO Engineering Electromagnetics 6E by William H. Hayt Jr. and John A. Buck SOLUTIONS MANUAL TO Engineering Electromagnetics 7E by William H. Hayt Jr. and John A. Buck SOLUTIONS MANUAL TO Engineering Fluid Mechanics - 8th Ed by Crowe, Elger & Roberson SOLUTIONS MANUAL TO Engineering Fluid Mechanics 7th Ed by Crowe and Donald SOLUTIONS MANUAL TO Engineering Materials Science, by Milton Ohring SOLUTIONS MANUAL TO Engineering Mathematics (4th Ed., John Bird) SOLUTIONS MANUAL TO Engineering Mechanics - Dynamics by Boresi, Schmidt SOLUTIONS MANUAL TO Engineering Mechanics - Dynamics, 5th Ed (J. L. Meriam, L. G. Kraige) SOLUTIONS MANUAL TO Engineering Mechanics - Dynamics, 6th Ed (J. L. Meriam, L. G. Kraige) SOLUTIONS MANUAL TO Engineering Mechanics - Statics (10th Edition) by Russell C. Hibbeler SOLUTIONS MANUAL TO Engineering Mechanics - Statics (11th Edition) by Russell C. Hibbeler SOLUTIONS MANUAL TO Engineering Mechanics - Statics by Boresi, Schmidt SOLUTIONS MANUAL TO Engineering Mechanics - Statics, 4th Ed (J. L. Meriam, L. G. Kraige) SOLUTIONS MANUAL TO Engineering Mechanics - Statics, 6th Ed (J. L. Meriam, L. G. Kraige) SOLUTIONS MANUAL TO Engineering Mechanics : Dynamics (11th Ed., Hibbeler) SOLUTIONS MANUAL TO Engineering Mechanics Dynamic (10th Edition) hibbeler SOLUTIONS MANUAL TO Engineering Mechanics Dynamics (12th Ed., Hibbeler) SOLUTIONS MANUAL TO Engineering Mechanics Dynamics 13th Ed. by Hibbeler SOLUTIONS MANUAL TO Engineering Mechanics Dynamics, Bedford & Fowler, 5th Edition SOLUTIONS MANUAL TO Engineering Mechanics Dynamics, by R. C. Hibbeler, 3rd SOLUTIONS MANUAL TO Engineering Mechanics Statics (12th Ed., Hibbeler) SOLUTIONS MANUAL TO Engineering Mechanics Statics, Bedford & Fowler, 5th Edition SOLUTIONS MANUAL TO Engineering Mechanics, Dynamics 2nd E by Riley, Sturges SOLUTIONS MANUAL TO Engineering Mechanics, Statics 2nd E by Riley, Sturges SOLUTIONS MANUAL TO Engineering Statistics (4th Ed., Douglas Montgomery, George Runger & Norma Faris Hubele) SOLUTIONS MANUAL TO Engineering Vibration 3rd Ed by Inman SOLUTIONS MANUAL TO Equilibrium Statistical Physics, 2nd E by Plischke, Bergersen SOLUTIONS MANUAL TO Erosion and sedimentation by Pierre Y. Julien SOLUTIONS MANUAL TO Essentials of Corporate Finance 6th Ed by Ross,Westerfield,Jordan SOLUTIONS MANUAL TO Essentials of Corporate Finance 7th Ed by Ross,Westerfield,Jordan SOLUTIONS MANUAL TO Essentials of Soil Mechanics and Foundations: Basic Geotechnics (7th Ed., David F. McCarthy) SOLUTIONS MANUAL TO Feedback Control of Dynamic Systems (4th Ed., Franklin, Powell & Emami-Naeini) SOLUTIONS MANUAL TO Feedback Control of Dynamic Systems (5th Ed., Franklin, Powell & Emami-Naeini) SOLUTIONS MANUAL TO Feedback Control of Dynamic Systems 6th E by Franklin, Powell, Naeini SOLUTIONS MANUAL TO Field and Wave Electromagnetics 2nd Ed by David K. Cheng SOLUTIONS MANUAL TO Financial Accounting 6th E with Annual Report by Libby, Short SOLUTIONS MANUAL TO Financial Accounting 6th Ed by Harrison SOLUTIONS MANUAL TO Financial Accounting An Integrated Approach, 6th Ed by Gibbins SOLUTIONS MANUAL TO Financial Accounting VOL.1 by Valix and Peralta 2008 Edition SOLUTIONS MANUAL TO Financial Accounting VOL.2 by Valix and Peralta 2008 Edition SOLUTIONS MANUAL TO Financial Accounting VOL.2 by Valix and Peralta 2013 Edition SOLUTIONS MANUAL TO Financial Accounting VOL.3 by Valix and Peralta 2008 Edition SOLUTIONS MANUAL TO Financial Management- Principles and Applications, 10th Ed by Keown, Scott SOLUTIONS MANUAL TO Financial Management- Theory and Practice 12 th ED by Brigham, Ehrhardt SOLUTIONS MANUAL TO Financial Reporting and Analysis Using Financial Accounting Information 10th Ed by Gibson SOLUTIONS MANUAL TO Financial Reporting and Analysis, 3E by Revsine, Collins, Johnson SOLUTIONS MANUAL TO Finite Element Techniques in Structural Mechanics Ross SOLUTIONS MANUAL TO First Course in Abstract Algebra, 3rd Ed by Joseph J. Rotman SOLUTIONS MANUAL TO First Course in Probability (7th Ed., Sheldon Ross) SOLUTIONS MANUAL TO Fluid Mechanics (5th Ed., White) SOLUTIONS MANUAL TO Fluid Mechanics 4th Ed by Cohen, Kundu SOLUTIONS MANUAL TO Fluid Mechanics 4th Edition by Frank M. White SOLUTIONS MANUAL TO Fluid Mechanics and Thermodynamics of Turbomachinery (5th Ed., S.L. Dixon) SOLUTIONS MANUAL TO Fluid Mechanics by CENGEL SOLUTIONS MANUAL TO Fluid Mechanics Egon Krause SOLUTIONS MANUAL TO Fluid Mechanics Fundamentals and Applications by Cengel & Cimbala SOLUTIONS MANUAL TO Fluid Mechanics with Engineering Applications, 10th Edition, by Finnemore SOLUTIONS MANUAL TO Foundations of Applied Combinatorics by Bender, Williamson SOLUTIONS MANUAL TO Foundations of Colloid Science 2e , Hunter SOLUTIONS MANUAL TO Foundations of Electromagnetic Theory by John R. Reitz, Frederick J. Milford SOLUTIONS MANUAL TO Foundations of Modern Macroeconomics 2nd Ed by Heijdra, Reijnders, Romp SOLUTIONS MANUAL TO Fourier and Laplace Transform - Antwoorden SOLUTIONS MANUAL TO Fractal Geometry Mathematical Foundations and Applications, 2nd Ed Kenneth Falcone SOLUTIONS MANUAL TO fracture mechanics ; fundamentals and applications, 2E, by T.L. Anderson SOLUTIONS MANUAL TO From Polymers to Plastics By A.K. van der Vegt SOLUTIONS MANUAL TO Fundamental Methods of Mathematical Economics 4th E by Chiang,Wainwright SOLUTIONS MANUAL TO Fundamental Quantum Mechanics for Engineers by Leon van Dommelen SOLUTIONS MANUAL TO Fundamentals of Advanced Accounting By Fischer, Taylor SOLUTIONS MANUAL TO Fundamentals of Aerodynamics ( 3 Ed., Anderson) SOLUTIONS MANUAL TO Fundamentals of Aerodynamics (2 Ed., Anderson) SOLUTIONS MANUAL TO Fundamentals of Aircraft Structural Analysis by Howard D. Curtis SOLUTIONS MANUAL TO Fundamentals of Analytical Chemistry 9th edition by Skoog, James Holler, West SOLUTIONS MANUAL TO Fundamentals of Applied Electromagnetics (5th Ed., Fawwaz T. Ulaby) SOLUTIONS MANUAL TO Fundamentals of Applied Electromagnetics (6th Ed., Fawwaz T. Ulaby) SOLUTIONS MANUAL TO Fundamentals of Chemical Reaction Engineering by Davis SOLUTIONS MANUAL TO Fundamentals of Complex Analysis ( 3rd Ed., E. Saff & Arthur Snider ) SOLUTIONS MANUAL TO Fundamentals of Computer Organization and Architecture by Abd-El-Barr, El-Rewini SOLUTIONS MANUAL TO Fundamentals of Corporate Finance 8th edition by Ross SOLUTIONS MANUAL TO Fundamentals of Corporate Finance 9th edition by Ross SOLUTIONS MANUAL TO Fundamentals of Corporate Finance, 4th Edition (Brealey, Myers, Marcus) SOLUTIONS MANUAL TO Fundamentals of Differential Equations 7E Kent Nagle, B. Saff, Snider SOLUTIONS MANUAL TO Fundamentals of Differential Equations and Boundary Value Problems, 6th Ed by Nagle ,Saff, Snider SOLUTIONS MANUAL TO Fundamentals of Digital Logic with VHDL Design (1st Ed., Stephen Brown Vranesic) SOLUTIONS MANUAL TO Fundamentals of Digital Signal Processing Using Matlab 2nd Edition by Robert J. Schilling, Sandra L. Harris SOLUTIONS MANUAL TO Fundamentals of Electric Circuits (2nd.ed.) by C.K.Alexander M.N.O.Sadiku SOLUTIONS MANUAL TO Fundamentals of Electric Circuits (4E., Charles Alexander & Matthew Sadiku) SOLUTIONS MANUAL TO Fundamentals of Electromagnetics with Engineering Applications (Stuart Wentworth) SOLUTIONS MANUAL TO Fundamentals of Electronic Circuit Design , Comer SOLUTIONS MANUAL TO Fundamentals of Engineering Economics 2nd E by Chan S. Park SOLUTIONS MANUAL TO Fundamentals of Engineering Thermodynamics, 5th Ed (Michael J. Moran, Howard N. Shapiro) SOLUTIONS MANUAL TO Fundamentals of Engineering Thermodynamics, 6th Ed (Michael J. Moran, Howard N. Shapiro) SOLUTIONS MANUAL TO Fundamentals Of English Grammar 2nd edition by M.Lynn Morgan, Mark Wade Lieu ( Test Bank ) SOLUTIONS MANUAL TO Fundamentals of Financial Management 12th edition James C. Van Horne, Wachowicz SOLUTIONS MANUAL TO Fundamentals of Fluid Mechanics 5th Ed Munson Young Okiishi SOLUTIONS MANUAL TO Fundamentals of Fluid Mechanics 6th Ed by Munson SOLUTIONS MANUAL TO Fundamentals of Fluid Mechanics, 4E (Bruce R. Munson, Donald F. Young, Theodore H.) SOLUTIONS MANUAL TO Fundamentals of Geotechnical Engineering 4th Edition by Braja M. Das SOLUTIONS MANUAL TO Fundamentals of Heat and Mass Transfer - 5th Edition F.P. Incropera D.P. DeWitt SOLUTIONS MANUAL TO Fundamentals of Heat and Mass Transfer (4th Ed., Incropera, DeWitt) SOLUTIONS MANUAL TO Fundamentals of Heat and Mass Transfer (6th Ed., Incropera, DeWitt) SOLUTIONS MANUAL TO Fundamentals of Hydraulic Engineering Systems 4th E by Houghtalen,Akan,Hwang SOLUTIONS MANUAL TO Fundamentals of Investments, 4th E by Jordan, Miller SOLUTIONS MANUAL TO Fundamentals of Investments, 5th E by Jordan, Miller SOLUTIONS MANUAL TO Fundamentals of Investments, 6th E by Jordan, Miller SOLUTIONS MANUAL TO Fundamentals of Logic Design, 5th Ed., by Charles Roth SOLUTIONS MANUAL TO Fundamentals of Machine Component Design (3rd Ed., Juvinall) SOLUTIONS MANUAL TO Fundamentals of Machine Component Design 4th Ed by Juvinall SOLUTIONS MANUAL TO Fundamentals of Machine Elements 2nd E by Bernard Hamrock SOLUTIONS MANUAL TO Fundamentals of Machine Elements by Bernard Hamrock SOLUTIONS MANUAL TO Fundamentals of Manufacturing 2nd Edition by Philip D. Rufe SOLUTIONS MANUAL TO Fundamentals of Materials Science and Engineering- An Integrated Approach, 3rd Ed by Callister SOLUTIONS MANUAL TO Fundamentals of Microelectronics by Behzad Razavi SOLUTIONS MANUAL TO Fundamentals of Modern Manufacturing 3rd Ed by Mikell P. Groover SOLUTIONS MANUAL TO Fundamentals of Modern Manufacturing: Materials, Processes, and Systems (2nd Ed., Mikell P. Groover) SOLUTIONS MANUAL TO Fundamentals of Momentum, Heat and Mass Transfer, 4th Ed by Welty,Wilson SOLUTIONS MANUAL TO Fundamentals of Momentum, Heat and Mass Transfer, 5th Ed by Welty,Wilson SOLUTIONS MANUAL TO Fundamentals of Organic Chemistry, 5E, by T. W. Graham Solomons SOLUTIONS MANUAL TO Fundamentals of Physics (7th Ed., David Halliday, Robert Resnick & Jearl Walker) SOLUTIONS MANUAL TO Fundamentals of Physics 9th Ed by Resnick, Walker, Halliday SOLUTIONS MANUAL TO Fundamentals of Physics, 8th Edition Halliday, Resnick, Walker SOLUTIONS MANUAL TO Fundamentals of Power Semiconductor Devices By Jayant Baliga SOLUTIONS MANUAL TO Fundamentals of Probability, with Stochastic Processes (3rd Ed., Saeed Ghahramani) SOLUTIONS MANUAL TO Fundamentals of Quantum Mechanics (C.L. Tang) SOLUTIONS MANUAL TO Fundamentals of Semiconductor Devices, 1st Edition by Anderson SOLUTIONS MANUAL TO Fundamentals of Signals and Systems Using the Web and Matlab (3rd Ed., Kamen & Bonnie S Heck) SOLUTIONS MANUAL TO Fundamentals of Solid-State Electronics by Chih-Tang Sah SOLUTIONS MANUAL TO Fundamentals of Structural Analysis 3rd Ed by Leet SOLUTIONS MANUAL TO Fundamentals of Thermal-Fluid Sciences, 2nd Ed. by Cengel SOLUTIONS MANUAL TO Fundamentals of Thermodynamics 5th Ed by Sonntag, Borgnakke and Van Wylen SOLUTIONS MANUAL TO Fundamentals of Thermodynamics 6th Ed by Sonntag, Borgnakke & Van Wylen SOLUTIONS MANUAL TO Fundamentals Of Thermodynamics by Borgnakke, Sonntag SOLUTIONS MANUAL TO Fundamentals of Wireless Communication by Tse and Viswanath SOLUTIONS MANUAL TO Fundamentals of Wireless Communication by Tse, Viswanath SOLUTIONS MANUAL TO Gas Dynamics (3rd Ed., John & Keith) SOLUTIONS MANUAL TO General Chemistry 9 Edition by Ebbings, Gammon SOLUTIONS MANUAL TO General Chemistry, 8th Edition by Ralph H. Petrucci; William S. Harwood; Geoffrey Herring SOLUTIONS MANUAL TO Geometry - A High School Course by S. Lang and G. Murrow SOLUTIONS MANUAL TO Geometry ( Glencoe ) SOLUTIONS MANUAL TO Geometry and Discrete Mathematics Addison Wesley SOLUTIONS MANUAL TO Green Engineering - Environmentally Conscious Design of Chemical Processes by Shonnard, Allen SOLUTIONS MANUAL TO Guide to Energy Management, 5th ed ( international version ), Klaus-Dieter E. Pawlik SOLUTIONS MANUAL TO Guide to Energy Management, 6th Edition by Klaus Dieter E. Pawlik SOLUTIONS MANUAL TO Guide to Energy Management, Fifth Edition, Klaus-Dieter E. Pawlik SOLUTIONS MANUAL TO HARCOURT MATHEMATICS 12 Advanced Functions and Introductory Calculus SOLUTIONS MANUAL TO Harcourt Mathematics 12 Geometry and Discrete Mathematics SOLUTIONS MANUAL TO Heat and Mass Transfer: A Practical Approach (3rd. Ed., Cengel) SOLUTIONS MANUAL TO Heat Transfer 10th edition by Holman SOLUTIONS MANUAL TO Heat Transfer A Practical Approach ,Yunus A. Cengel 2d ed SOLUTIONS MANUAL TO Heating, Ventilating and Air Conditioning Analysis and Design, 6th Edition McQuiston, Parker, Spitler SOLUTIONS MANUAL TO Higher Algebra 3rd edition by Hall and Knight SOLUTIONS MANUAL TO Higher Engineering Mathematics 5th ed by John Bird SOLUTIONS MANUAL TO History of Mathematics: Brief Version (Victor J. Katz) SOLUTIONS MANUAL TO Hydraulics in Civil and Environmental Engineering 4 E by Chadwick , Morfett SOLUTIONS MANUAL TO Hydraulics in Civil and Environmental Engineering 4th Ed by Chadwick , Borthwick SOLUTIONS MANUAL TO Industrial Organization Theory & Applications by Shy SOLUTIONS MANUAL TO Intermediate Accounting - IFRS Edition Vol.1 by Kieso, Weygandt, Warfield SOLUTIONS MANUAL TO Intermediate Accounting - IFRS Edition Vol.2 by Kieso, Weygandt, Warfield SOLUTIONS MANUAL TO Intermediate Accounting 12th ed by Kieso SOLUTIONS MANUAL TO Intermediate Accounting 13 ed by Kieso SOLUTIONS MANUAL TO Intermediate Accounting 14e Kieso, Weygandt, Warfield SOLUTIONS MANUAL TO Intermediate Accounting 14th ed Kieso, Weygandt, Warfield ( test bank ) SOLUTIONS MANUAL TO Intermediate Accounting 15th ed by Kieso SOLUTIONS MANUAL TO Intermediate Accounting Kieso 12th ed SOLUTIONS MANUAL TO INTERMEDIATE ACCOUNTING, 6th Edition, by Spiceland, Sepe SOLUTIONS MANUAL TO INTERMEDIATE ACCOUNTING, 6th Edition, by Spiceland, Sepe, and Nelson SOLUTIONS MANUAL TO INTERMEDIATE ACCOUNTING, 7th Edition, by Spiceland, Sepe, and Nelson SOLUTIONS MANUAL TO Intermediate Algebra - Concepts & Applications 8th Ed by Bittinger, Ellenbogen SOLUTIONS MANUAL TO Introduction to Accounting 3rd Ed by Marriott, Mellett SOLUTIONS MANUAL TO Introduction to Algorithms, 2nd Ed by Cormen, Leiserson SOLUTIONS MANUAL TO Introduction To Analysis (3rdEd) -by William Wade SOLUTIONS MANUAL TO Introduction to Applied Modern Physics by Henok Abebe SOLUTIONS MANUAL TO Introduction to Chemical Engineering Thermodynamics (7th Ed., Smith & Van Ness) SOLUTIONS MANUAL TO Introduction to Commutative Algebra by M. F. Atiyah SOLUTIONS MANUAL TO Introduction to Data Mining by Pang-Ning Tan, Michael Steinbach, Vipin Kumar SOLUTIONS MANUAL TO Introduction to Digital Signal Processing (in Serbian) by Lj. Milic and Z. Dobrosavljevic SOLUTIONS MANUAL TO Introduction to Econometrics (2nd ed., James H. Stock & Mark W. Watson) SOLUTIONS MANUAL TO Introduction to Electric Circuits 7th Edition by Dorf, Svaboda SOLUTIONS MANUAL TO Introduction to Electric Circuits, 6E, Dorf SOLUTIONS MANUAL TO Introduction to Electrodynamics (3rd Ed., David J. Griffiths) SOLUTIONS MANUAL TO Introduction to Elementary Particles 2nd Ed by David Griffiths SOLUTIONS MANUAL TO Introduction to Environmental Engineering and Science (3rd Ed., Gilbert M. Masters & Wendell P. Ela) SOLUTIONS MANUAL TO Introduction to Environmental Engineering and Science, Edition 2, Masters SOLUTIONS MANUAL TO Introduction to Ergonomics 2E by Robert Bridger SOLUTIONS MANUAL TO Introduction to Fluid Mechanics ( 7 E., Robert Fox, Alan McDonald & Philip ) SOLUTIONS MANUAL TO Introduction to Fluid Mechanics (6E., Robert Fox, Alan McDonald & Philip) SOLUTIONS MANUAL TO Introduction to fluid mechanics 5th edition by Alan T. McDonald, Robert W Fox SOLUTIONS MANUAL TO Introduction to Graph Theory 2E - West SOLUTIONS MANUAL TO Introduction to Heat Transfer by Vedat S. Arpaci, Ahmet Selamet, Shu-Hsin Kao SOLUTIONS MANUAL TO Introduction to Java Programming, Comprehensive Version 7th Ed by Liang SOLUTIONS MANUAL TO Introduction to Linear Algebra, 3rd Ed., by Gilbert Strang SOLUTIONS MANUAL TO Introduction to Linear Algebra, 4th Ed by Gilbert Strang SOLUTIONS MANUAL TO Introduction to Management Accounting, 14 ED by Horngren, Schatzberg SOLUTIONS MANUAL TO Introduction to Materials Science for Engineers (6th Ed., Shackelford) SOLUTIONS MANUAL TO Introduction to Materials Science for Engineers 7th E by Shackelford SOLUTIONS MANUAL TO Introduction to Mathematical Statistics (6th Ed., Hogg, Craig & McKean) SOLUTIONS MANUAL TO Introduction to Mechatronics and Measurements Systems 2nd Ed by Alciatore, Histand SOLUTIONS MANUAL TO Introduction to Modern Economic Groth by Michael Peters, Alp Simsek SOLUTIONS MANUAL TO Introduction to Nuclear And Particle Physics 2nd E by Bromberg, Das, Ferbel SOLUTIONS MANUAL TO Introduction to Operations Research - 7th ed by Frederick Hillier, Gerald Lieberman SOLUTIONS MANUAL TO Introduction to Probability 2nd Ed by Bertsekas and Tsitsiklis SOLUTIONS MANUAL TO Introduction to Probability by Dimitri P. Bertsekas and John N. Tsitsiklis SOLUTIONS MANUAL TO Introduction to Probability by Grinstead, Snell SOLUTIONS MANUAL TO Introduction to Quantum Mechanics (2nd Ed., David J. Griffiths) SOLUTIONS MANUAL TO Introduction to Quantum Mechanics 1st edition (1995) by David J. Griffiths SOLUTIONS MANUAL TO Introduction to Quantum Mechanics by Sandiford & Phillips SOLUTIONS MANUAL TO Introduction to Queueing Theory 2nd Edition by R.B. Cooper SOLUTIONS MANUAL TO Introduction to Scientific Computation and Programming, 1st Edition by Daniel Kaplan SOLUTIONS MANUAL TO Introduction to Signal Processing by S. J. Orfanidis SOLUTIONS MANUAL TO Introduction to Signal Processing by Sophocles J. Orfanidis SOLUTIONS MANUAL TO Introduction to Solid State Physics 8th Ed by Kittel & Charles SOLUTIONS MANUAL TO Introduction to Statistical Physics by Kerson Huang SOLUTIONS MANUAL TO Introduction to Statistical Quality Control (5th Ed., Douglas C. Montgomery) SOLUTIONS MANUAL TO Introduction to the Theory of Computation by Ching Law SOLUTIONS MANUAL TO Introduction to the Thermodynamics of Materials 3 E by Gaskell SOLUTIONS MANUAL TO Introduction to Thermal and Fluids Engineering by Kaminski, Jensen SOLUTIONS MANUAL TO Introduction to Thermal Systems Engineering Moran Shapiro Munson SOLUTIONS MANUAL TO Introduction to VLSI Circuits and Systems, by John P. Uyemura SOLUTIONS MANUAL TO Introduction to Wireless Systems by P.M Shankar SOLUTIONS MANUAL TO Introductory Circuit Analysis 11th edition by Boylestad SOLUTIONS MANUAL TO Introductory Econometrics A Modern Approach, 3Ed by Jeffrey Wooldridge SOLUTIONS MANUAL TO Introductory Linear Algebra An Applied First Course 8th edition by Brnard Kolman, David R.Hill SOLUTIONS MANUAL TO Introductory Mathematical Analysis for Business, Economics and the Life and Social Sciences, 12th E By Haeussler,Paul,Wood SOLUTIONS MANUAL TO Introductory Quantum Optics (Christopher Gerry & Peter Knight) SOLUTIONS MANUAL TO Introdution to Solid State Physics, 8th Edition by Kittel SOLUTIONS MANUAL TO Investment Analysis & Portfolio Management, 7e by Reilly, Brown SOLUTIONS MANUAL TO Investment Analysis and Portfolio Management 7th Edition by Frank K. et al. Reilly SOLUTIONS MANUAL TO Investments by Charles P. Jones SOLUTIONS MANUAL TO IT Networking Labs by Cavaiani IM SOLUTIONS MANUAL TO IT Networking Labs by Cavaiani solutions to exercises SOLUTIONS MANUAL TO IT Networking Labs by Tom Cavaiani SOLUTIONS MANUAL TO Java How to program 5th Ed by Deitel SOLUTIONS MANUAL TO Java How to program 7th Ed by Deitel SOLUTIONS MANUAL TO Journey into Mathematics An Introduction to Proofs ,Joseph Rotman SOLUTIONS MANUAL TO Kinematics, Dynamics, and Design of Machinery, 2nd Ed., Waldron & Kinzel SOLUTIONS MANUAL TO Kinetics of Catalytic Reactions by M. Albert Vannice SOLUTIONS MANUAL TO LabVIEW for Engineers by Larsen SOLUTIONS MANUAL TO Laser Fundamentals (2nd Ed., William T. Silfvast) SOLUTIONS MANUAL TO Learning SAS in the Computer Lab 3rd ED by Elliott, Morrell SOLUTIONS MANUAL TO Lectures on Corporate Finance 2006, 2 Ed by Bossaerts, Oedegaard SOLUTIONS MANUAL TO Linear Algebra - 2 Ed - Poole SOLUTIONS MANUAL TO Linear Algebra and Its Applications 3rd ed by David C. Lay SOLUTIONS MANUAL TO Linear Algebra Done Right, 2nd Ed by Sheldon Axler SOLUTIONS MANUAL TO Linear Algebra with Applications (6th Ed., S. Leon) SOLUTIONS MANUAL TO Linear Algebra with Applications 3rd Ed by Otto Bretscher SOLUTIONS MANUAL TO Linear Algebra with Applications 7th Edition by Steven J. Leon SOLUTIONS MANUAL TO Linear Algebra With Applications, 2nd Edition by W. Keith Nicholson SOLUTIONS MANUAL TO Linear Algebra, by J. Hefferon SOLUTIONS MANUAL TO Linear Circuit Analysis Time Domain, Phasor and Laplace.., 2nd Ed, Lin SOLUTIONS MANUAL TO Linear Circuit Analysis, 2nd Ed by DeCarlo , Pen-Min Lin SOLUTIONS MANUAL TO Linear dynamic systems and signals by Zoran Gajic SOLUTIONS MANUAL TO Linear dynamic systems and signals by Zoran Gajic Solutions to MATLAB Laboratory Experiments SOLUTIONS MANUAL TO Linear Systems And Signals, 1stE, B P Lathi SOLUTIONS MANUAL TO Logic and Computer Design Fundamentals, 2E, by Morris Mano and Charles Kime SOLUTIONS MANUAL TO Logic and Computer Design Fundamentals, 3d edition by Morris Mano and Charles Kime SOLUTIONS MANUAL TO Logic and Computer Design Fundamentals, 4/E, by Morris Mano and Charles Kime SOLUTIONS MANUAL TO Machine Design : An Integrated Approach (3rd Ed., Norton) SOLUTIONS MANUAL TO Machines and Mechanisms - Applied Kinematic Analysis, 3E by David H. Myszka SOLUTIONS MANUAL TO Managerial Accounting 11th Ed by Garrison & Noreen SOLUTIONS MANUAL TO Managerial Accounting 11th edition by Garrison & Noreen SOLUTIONS MANUAL TO Managerial Accounting 12th ed by Garriison, Noreen, Brewer ( Test Bank ) SOLUTIONS MANUAL TO Managerial Accounting 13th E by Garrison, Noreen, Brewer SOLUTIONS MANUAL TO Managerial Accounting 9th ed by Garriison, Noreen ( Test Bank ) SOLUTIONS MANUAL TO Managing Business and Professional Communication 2nd ed Carley H. Dodd SOLUTIONS MANUAL TO Managing Business Process Flows: Principles of Operations Management(2nd Ed., Anupind, Chopra, Deshmukh, et al) SOLUTIONS MANUAL TO Managing Engineering and Technology (4th, Morse & Babcock) SOLUTIONS MANUAL TO Manufacturing Processes for Engineering Materials (5th Ed. Kalpakjian & Smith) SOLUTIONS MANUAL TO Materials - engineering, science, properties, and design SOLUTIONS MANUAL TO Materials and Processes in Manufacturing (9th Ed., E. Paul DeGarmo, J. T. Black,Kohser) SOLUTIONS MANUAL TO Materials for Civil and Construction Engineers 3rd ED by Mamlouk, Zaniewski SOLUTIONS MANUAL TO Materials Science and Engineering- An Introduction ( 7th Ed., William D. Callister, Jr.) SOLUTIONS MANUAL TO Materials Science and Engineering- An Introduction (6th Ed., William D. Callister, Jr.) SOLUTIONS MANUAL TO MATH 1010 - Applied Finite Mathematics by D.W. Trim SOLUTIONS MANUAL TO Mathematical Analysis, Second Edition by Tom M. Apostol SOLUTIONS MANUAL TO Mathematical Methods for Physicists 5 Ed, Arfken SOLUTIONS MANUAL TO Mathematical Methods for Physics and Engineering, (3rd Ed., Riley,Hobson) SOLUTIONS MANUAL TO Mathematical Methods in the Physical Sciences; 3 edition by Mary L. Boas SOLUTIONS MANUAL TO Mathematical Models in Biology An Introduction (Elizabeth S. Allman & John A. Rhodes) SOLUTIONS MANUAL TO Mathematical Proofs - A Transition to Advanced Mathematics 2nd Ed by Chartrand, Polimeni, Zhang SOLUTIONS MANUAL TO Mathematical Statistics with Applications 7th ed by Dennis Wackerly, William Mendenhall, Richard L. Scheaffer SOLUTIONS MANUAL TO Mathematical Techniques 4th ED by D W Jordan & P Smith SOLUTIONS MANUAL TO Mathematics for Economists, by Carl P. Simon , Lawrence E. Blume SOLUTIONS MANUAL TO Mathematics for Management Science - A Bridging Course by Tulett SOLUTIONS MANUAL TO Mathematics for Physicists by Susan Lea SOLUTIONS MANUAL TO Mathematics of Investment and Credit 5th edition by ASA Samuel A. Broverman PhD SOLUTIONS MANUAL TO Matrix Analysis and Applied Linear Algebra by Meyer SOLUTIONS MANUAL TO Matter and Interactions, 3rd Ed by Chabay, Sherwood SOLUTIONS MANUAL TO McGraw-Hill Ryerson Calculus & Advanced Function by Dearling, Erdman, et all SOLUTIONS MANUAL TO Mechanical Engineering Design 8th Ed by Shigley & Budynas SOLUTIONS MANUAL TO Mechanical Engineering Design 9th Ed by Shigley & Budynas SOLUTIONS MANUAL TO Mechanical Engineering Design, 7th Ed. by Mischke, Shigley SOLUTIONS MANUAL TO Mechanical Measurements (6th Ed., Beckwith, Marangoni & Lienhard) SOLUTIONS MANUAL TO Mechanical Vibrations 5th Ed., Rao SOLUTIONS MANUAL TO Mechanical Vibrations ( Vol.1) 4th Ed., Rao SOLUTIONS MANUAL TO Mechanical Vibrations (3rd Ed., Rao) SOLUTIONS MANUAL TO Mechanical Vibrations 4th Ed SI Units by Rao SOLUTIONS MANUAL TO Mechanics of Aircraft Structures, 2nd Ed by Sun SOLUTIONS MANUAL TO Mechanics of Fluids (8th Ed., Massey) SOLUTIONS MANUAL TO Mechanics of Fluids 3rd ED Vol 1 by Merle C. Potter SOLUTIONS MANUAL TO Mechanics of Fluids 4th ED by I.H. Shames SOLUTIONS MANUAL TO Mechanics of Materials 5 edition by James M. Gere SOLUTIONS MANUAL TO Mechanics of Materials (6th Ed., Riley, Sturges & Morris) SOLUTIONS MANUAL TO Mechanics of Materials 2nd ed by Anderew Pytel, Jaan Kiusalaas SOLUTIONS MANUAL TO Mechanics of Materials 4 E by Russell C. Hibbeler SOLUTIONS MANUAL TO Mechanics of Materials 4th Ed by Beer Johnston SOLUTIONS MANUAL TO Mechanics of Materials 8th E by Russell C. Hibbeler SOLUTIONS MANUAL TO Mechanics of Materials 9th edition by R.C. HIBBELER SOLUTIONS MANUAL TO Mechanics Of Materials Beer Johnston 3rd SOLUTIONS MANUAL TO Mechanics of Materials, 2nd Ed by Roy R. Craig SOLUTIONS MANUAL TO Mechanics of Materials, 6E, by Russell C. Hibbeler SOLUTIONS MANUAL TO Mechanics of Materials, 6th Edition - James M. Gere & Barry Goodno SOLUTIONS MANUAL TO Mechanics of Materials, 7E, by Russell C. Hibbeler SOLUTIONS MANUAL TO Mechanics of Materials, 7th Edition - James M. Gere & Barry Goodno SOLUTIONS MANUAL TO Mechanics of Materials, 8th Edition - James M. Gere & Barry Goodno SOLUTIONS MANUAL TO Mechanics of Solids, ross SOLUTIONS MANUAL TO Mechanism Design Analysis and Synthesis (4th Edition) by Erdman, Sandor, Kota SOLUTIONS MANUAL TO MEMS and Microsystems Design, Manufacture and Nanoscale Engineering 2nd ED by Tai-Ran Hsu SOLUTIONS MANUAL TO Microeconomic Analysis, 3rd Ed., by H. Varian SOLUTIONS MANUAL TO Microeconomic Theory Basic Principles and Extensions 9E ( South-Western ) by Walter Nicholson SOLUTIONS MANUAL TO Microeconomic Theory by Segal Tadelis Hara Chiaka Hara Steve Tadelis SOLUTIONS MANUAL TO Microeconomic Theory, by Mas-Colell, Whinston, Green SOLUTIONS MANUAL TO Microeconomics, 6th Ed by Pyndick, Rubinfeld SOLUTIONS MANUAL TO Microelectronic Circuit Analysis and Design, 3rd Edition, by D. Neamen SOLUTIONS MANUAL TO Microelectronic Circuit Design (3rd Ed., Richard Jaeger & Travis Blalock) SOLUTIONS MANUAL TO Microelectronic Circuits By Adel Sedra 5th Edition SOLUTIONS MANUAL TO Microelectronic Circuits, 4th Ed. by Sedra and Smith SOLUTIONS MANUAL TO Microelectronic Circuits, 5th Ed. by Sedra and Smith SOLUTIONS MANUAL TO Microelectronics Digital and Analog Circuits and Systems by Millman SOLUTIONS MANUAL TO Microelectronics I & II by Dr.Chang SOLUTIONS MANUAL TO Microelectronics,Solution MANUAL,5thEd,MAZ SOLUTIONS MANUAL TO Microprocessors and Interfacing, Revised 2nd Edition by Douglas V Hall SOLUTIONS MANUAL TO Microwave and Rf Design of Wireless Systems, 1st Edition, by Pozar SOLUTIONS MANUAL TO Microwave Engineering, 2nd Ed., by David M. Pozar SOLUTIONS MANUAL TO Microwave Engineering, 3rd Ed., by David M. Pozar SOLUTIONS MANUAL TO Microwave Transistor Amplifiers Analysis and Design, 2nd Ed., by Guillermo Gonzalez SOLUTIONS MANUAL TO Mobile Communications 2nd ed by Jochen Schiller SOLUTIONS MANUAL TO Modern Control Engineering 3rd Ed. - K. OGATA SOLUTIONS MANUAL TO Modern Control Engineering 4th Ed. - K. OGATA SOLUTIONS MANUAL TO Modern Control Engineering 5 Ed. - K. OGATA SOLUTIONS MANUAL TO Modern Control Systems 11E by Richard C Dorf and Robert H. Bishop SOLUTIONS MANUAL TO Modern Control Systems 9 E by Richard C Dorf and Robert H. Bishop SOLUTIONS MANUAL TO Modern Control Systems, 12th Ed by Dorf, Bishop SOLUTIONS MANUAL TO Modern Digital and Analog Communication Systems, 3rd Ed., by Lathi SOLUTIONS MANUAL TO Modern Digital Electronics 3 Ed by R P Jain SOLUTIONS MANUAL TO Modern Digital Electronics,3E by R P JAIN SOLUTIONS MANUAL TO Modern Digital Signal Processing-Roberto Cristi SOLUTIONS MANUAL TO MODERN OPERATING SYSTEMS 2nd ed A.S.TANENBAUM SOLUTIONS MANUAL TO Modern Organic Synthesis An Introduction by George Zweifel, Michael Nantz SOLUTIONS MANUAL TO Modern Physics 2nd E by Randy Harris SOLUTIONS MANUAL TO Modern Physics 4th ed by Mark Llewellyn SOLUTIONS MANUAL TO Modern Physics 4th ed by Tipler, Llewellyn SOLUTIONS MANUAL TO Modern Physics for Scientists and Engineers 3rd E by Thornton and Rex SOLUTIONS MANUAL TO MODERN POWER SYSTEM ANALYSIS 3rd E by Kothari,Nagrath SOLUTIONS MANUAL TO Modern Quantum Mechanics (Revised Edition) by J. J. Sakurai SOLUTIONS MANUAL TO Modern Thermodynamics - From Heat Engines to Dissipative Structures by Kondepudi, Prigogine SOLUTIONS MANUAL TO Modern Thermodynamics - From Heat Engines to Dissipative Structures Vol 1 by Kondepudi, Prigogine SOLUTIONS MANUAL TO Molecular Driving Forces 2nd ED ( vol.1 ) by Dill, Bromberg SOLUTIONS MANUAL TO Molecular Symmetry and Group Theory by Robert L. Carter SOLUTIONS MANUAL TO Multinational Business Finance 10 E by Stonehill, Moffett, Eiteman SOLUTIONS MANUAL TO MULTIVARIABL CALCULUS 7th ED BY JAMES STEWART ( ch10 - ch 17 ) SOLUTIONS MANUAL TO Multivariable Calculus, 4th Edition, JAMES STEWART SOLUTIONS MANUAL TO Multivariable Calculus, 5th Edition, JAMES STEWART SOLUTIONS MANUAL TO Multivariable Calculus, Applications and Theory by Kenneth Kuttler SOLUTIONS MANUAL TO Nanoengineering of Structural, Functional and Smart Materials, Mark J. Schulz, Ajit D. Kelkar SOLUTIONS MANUAL TO Network Flows: Theory, Algorithms, and Applications by Ravindra K. Ahuja , Thomas L. Magnanti , James B. Orlin SOLUTIONS MANUAL TO Networks and Grids - Technology and Theory by Thomas G. Robertazzi SOLUTIONS MANUAL TO Neural networks and learning machines 3rd edition by Simon S. Haykin ( 2,3,7,14 missing ) SOLUTIONS MANUAL TO Nonlinear Programming 2nd Edition , Dimitri P.Bertsekas SOLUTIONS MANUAL TO Nonlinear Programming ,2ndEdition , Dimitri P.Bertsekas SOLUTIONS MANUAL TO Numerical Analysis 8th ED by BURDEN & FAIRES SOLUTIONS MANUAL TO Numerical Computing with MATLAB by Moler SOLUTIONS MANUAL TO Numerical Methods for Engineers (3rd Ed. Steven C. Chapra) SOLUTIONS MANUAL TO Numerical Methods for Engineers (5th Ed. Steven C. Chapra) SOLUTIONS MANUAL TO Numerical Methods Using MATLAB (3rd Edition)by John H. Mathews & Fink SOLUTIONS MANUAL TO Numerical Methods Using Matlab, 4E by Mathews, Kurtis K. Fink SOLUTIONS MANUAL TO Numerical Solution of Partial Differential Equations- An Introduction (2nd Ed., K. W. Morton &D) SOLUTIONS MANUAL TO OpenScape Voice V3.1R3 Test Configuration and connectivity Vol 2 , Application and Hardware Configuratio SOLUTIONS MANUAL TO Operating System Concepts, 6E, Silberschatz, Galvin, Gagne SOLUTIONS MANUAL TO Operating System Concepts, 7E, Silberschatz, Galvin, Gagne SOLUTIONS MANUAL TO Operating systems Internals and Design principles 4th Edition Stallings SOLUTIONS MANUAL TO Operating systems Internals and Design principles 5th Edition Stallings SOLUTIONS MANUAL TO Operation Management 9th edition by Heizer & Render Test Bank SOLUTIONS MANUAL TO Operations Management 5th Ed by Nigel Slack, Chambers, Johnston SOLUTIONS MANUAL TO Operations Research An Introduction (8th Edition) by Hamdy A. Taha SOLUTIONS MANUAL TO Optical Fiber Communications 3rd E by Gerd Keiser SOLUTIONS MANUAL TO Optical Properties of Solids 2nd Ed by Mark Fox SOLUTIONS MANUAL TO Optics 4th Edition by Hecht E., Coffey M., Dolan P SOLUTIONS MANUAL TO Optimal Control Theory An Introduction By Donald E. Kirk SOLUTIONS MANUAL TO Optimal State Estimation Dan Simon SOLUTIONS MANUAL TO Optimization of Chemical Processes by Edgar SOLUTIONS MANUAL TO Options, Futures and Other Derivatives, 4E, by John Hull SOLUTIONS MANUAL TO Options, Futures and Other Derivatives, 5E, by John Hull SOLUTIONS MANUAL TO Options, Futures, and Other Derivatives 7th Ed by John C. Hull SOLUTIONS MANUAL TO Orbital Mechanics for Engineering Students 2nd ED by Curtis SOLUTIONS MANUAL TO Orbital Mechanics for Engineering Students by Curtis SOLUTIONS MANUAL TO ORDINARY DIFFERENTIAL EQUATIONS by Adkins, Davidson SOLUTIONS MANUAL TO Organic Chemistry - Clayden et.al. SOLUTIONS MANUAL TO Organic Chemistry 10th ed by T. W. Graham Solomons SOLUTIONS MANUAL TO ORGANIC CHEMISTRY 11th edition by GRHAM SOLOMONS, CRAIG B. FRYHLE, SCOTT SOLUTIONS MANUAL TO Organic Chemistry 1st edition by David R. Klein SOLUTIONS MANUAL TO Organic Chemistry 2nd ed by RAJ K. Bansal, AJIV V.Bansal SOLUTIONS MANUAL TO Organic Chemistry 2nd ed by Schore SOLUTIONS MANUAL TO Organic Chemistry 2nd Edition by Hornback SOLUTIONS MANUAL TO Organic chemistry 3rd edition by Smith SOLUTIONS MANUAL TO Organic Chemistry 5th Ed by Brown, Foote, Iverson, Ansyln SOLUTIONS MANUAL TO Organic Chemistry 7ed, McMurry SOLUTIONS MANUAL TO Organic Chemistry 7th ed by John McMurry SOLUTIONS MANUAL TO Organic Chemistry 8th ed by John McMurry SOLUTIONS MANUAL TO Organic Chemistry 8th edition by Allison, Giuliano, Atkins, Carey SOLUTIONS MANUAL TO Organic Chemistry, 4E., by Carey, Atkins SOLUTIONS MANUAL TO Organic Chemistry, 5E., by Carey, Atkins SOLUTIONS MANUAL TO Organic Chemistry, 6 Ed by Wade, Jan Simek SOLUTIONS MANUAL TO Organic Chemistry, 8th Ed by Wade, Jan Simek SOLUTIONS MANUAL TO Parallel & Distribu
{"url":"https://msp.money.pl/grupa-pl_soc_prawo/solutions;manual;to;auditing;and;assurance;services;13;ed;by;arens;,watek,1156755.html","timestamp":"2024-11-14T07:01:57Z","content_type":"text/html","content_length":"149020","record_id":"<urn:uuid:19c4d856-0ec7-4540-a9c7-eb29f6c19fc9>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00763.warc.gz"}
Dynamics seminar by Sarina Sutter (VU Amsterdam) on 15 May Thomas Lin/Quanta Magazine On Wednesday the 15th of May at 4 pm, Sarina Sutter (VU Amsterdam) will give an Amsterdam Dynamics seminar. The talk will take place in the Maryam seminar room (9A-46). Title: %%\upsilon%%-Representability on Periodic Domains: a Sobolev Space Approach Abstract: An important tool in theoretical chemistry is density functional theory (DFT). The idea is that the wave function of the system, depending on all spatial coordinates of the particles, is replaced by the density function of the particles. This is possible, since the density contains sufficient information to reconstruct the external potential (system) it came from [1]. An important question is, how to characterize the class of densities for which there is a certain potential such that the ground state wave function of the system yields that density. This problem is known as the %%\upsilon%%-representability problem. I will present our recent work about finding functional-analytic properties of the potentials that allow upsilon-representability of a reduced but still large set of densities. It is known that densities need to be in %%L^1 \cap L^3%% (or in %%L^3%% for the periodic case) [2]. However, for particles in one dimension, they are contained in a smaller subspace of %%L^3%%, namely the Sobolev space %%H^1%%. This space already arose naturally from the finite kinetic energy condition which guarantees that %%\sqrt{\rho} \in H^1%% (this holds also in 2 and 3 dimensions). The new density space %%H^1%% leads to a different, more general potential space than the usual used %%L^{3/2}%% space if a dual setting for densities and potentials is constructed. Within this dual setting it is possible to show v-representability for a certain class of densities. The idea is to use standard techniques from convex analysis to find a suitable potential. In the usual %%L^3%% setting these techniques lead to some fundamental problems which motivate our choice of the new setting. [1] P. Hohenberg and W. Kohn, Inhomogeneous electron gas, Phys. Rev. 136, B864 (1964). [2] E. H. Lieb, Density functionals for Coulomb-systems, Int. J. Quantum Chem. 24, 243 (1983).
{"url":"https://www.amsterdam-dynamics.nl/dynamics-seminar-by-sarina-sutter-vu-amsterdam/","timestamp":"2024-11-06T17:36:55Z","content_type":"text/html","content_length":"18813","record_id":"<urn:uuid:01f71858-576c-4f46-9911-a1ebf3cdf152>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00696.warc.gz"}
Infinite monkey theorem It is widely acknowledged that an infinite number of monkeys sitting at computers typing randomly will almost surely produce a properly-LaTeXed copy of the complete works of Shakespeare. This statement, known as the ‘infinite monkey theorem’, has received a wide amount of coverage in popular culture. Indeed, researchers actually experimented to see what would happen in reality with a finite number of monkeys. The theorem failed spectacularly, with the monkeys vandalising the typewriter, bombarding it with miscellaneous excreta, and typing mainly the letter ‘S’. Deterministic variant An alternative of this is where a machine is programmed to deterministically produce all possible outputs. For instance, if we want the machine to eventually produce all English words, one possible solution is to produce all 26 one-letter strings, followed by all 26² two-letter strings (in lexicographical order), then all 26³ three-letter strings, and so on ad infinitum. There was a very poor proposed question on the IMO, which asked for the Nth term of this sequence, where N was some arbitrary integer (somewhere between 10^10 and 10^20, I believe). There is a formula, known as Tupper’s formula, which defines a subset of the set Z^2 of possible ordered pairs of integers (x,y). If you plot this subset on the plane, and look in the appropriate position, you’ll see a pixellated version of Tupper’s formula itself. Essentially, this converts floor(y/17) into binary, partitions it into blocks of length 17, and writes them vertically from left to right. If you choose an appropriate (pretty large!) value of y, you can get any binary image of height 17 pixels. This includes the complete works of Shakespeare rendered as a long unbroken line. Self-referential GoL pattern Several years ago, I created a machine in Conway’s Game of Life which sequentially prints every possible binary image in a slowly-expanding octant of the plane. Unlike Tupper’s formula, the dimensions of the images are not limited. So, as time progresses, you’ll see gigapixel images of the Mona Lisa and huge renderings of the Mandelbrot set. Eventually, it would even show a picture of its own initial configuration, very much in the style of Tupper’s formula. Its diameter grows at the slowest asymptotic rate possible, namely Θ(sqrt(log(t))). If the diameter of a pattern has a slower asymptotic growth rate, it would necessarily repeat the same state twice and settle into an infinite loop (thus stop growing). For cellular automata on a hyperbolic plane, this growth rate can theoretically be reduced to the impressively slow Θ(log(log(t))). Coincidentally, this is the same growth rate as the prime harmonic series: I exploited this slow growth rate to design the sequence of Borwein integrals which fail after the 20479th term. 10 Responses to Infinite monkey theorem 1. Did you publish that GoL pattern somewhere? □ It was on the Internet, but I think that site went down. I’ll e-mail you a copy of it, if you’re interested and have Golly. ☆ I’m interested and I do have Golly. 🙂 ○ I’ve sent a copy of the pattern to you and Wojowu. The reason I can’t upload it to cp4space is that it’s a .mc file, and WordPress only allows certain predetermined file extensions. ■ May I also have a copy of this pattern? Maybe you could upload it somewhere. How large is the file? ■ The usual workaround for such issues is to put the file into a ZIP archive. In almost all applications ZIP files are among the accepted upload file formats. 2. I don’t belive that you made such pattern in GoL, because of orphans. Unless your pattern used another technique, like block-next-to-block (I mean 2×2 block still life) □ It didn’t use individual cells as pixels (that would indeed be impossible for precisely the reasons you stated). Instead, it used well-separated boats as pixels. ☆ I’d like to see that pattern too 🙂 As a replacement, I created in Golly 9-state rule table which also creates every possible 2D pattern (finite ones only). Within first 100 mln maximal cell count was 51, and it could be half of that without frame I needed for my design. 3. Do you still have that GoL pattern? I’d love to see it 🙂 This entry was posted in Uncategorized. Bookmark the permalink.
{"url":"https://cp4space.hatsya.com/2012/12/14/infinite-monkey-theorem/","timestamp":"2024-11-04T08:01:27Z","content_type":"text/html","content_length":"77393","record_id":"<urn:uuid:c81a0c5c-7b5d-49ff-b431-92e84aeaf26c>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00645.warc.gz"}
Estimation of the confidence envelope of the Kinhom function KinhomEnvelope {dbmss} R Documentation Estimation of the confidence envelope of the Kinhom function under its null hypothesis Simulates point patterns according to the null hypothesis and returns the envelope of Kinhom according to the confidence level. KinhomEnvelope(X, r = NULL, NumberOfSimulations = 100, Alpha = 0.05, ReferenceType = "", lambda = NULL, SimulationType = "RandomPosition", Global = FALSE, verbose = interactive()) X A point pattern (wmppp.object). r A vector of distances. If NULL, a sensible default value is chosen (512 intervals, from 0 to half the diameter of the window) following spatstat. NumberOfSimulations The number of simulations to run. Alpha The risk level. ReferenceType One of the point types. Default is all point types. lambda An estimation of the point pattern density, obtained by the density.ppp function. A string describing the null hypothesis to simulate. The null hypothesis, may be "RandomPosition": points are drawn in an inhomogenous Poisson process (intensity is either lambda SimulationType or estimated from X); "RandomLocation": points are redistributed across actual locations; "RandomLabeling": randomizes point types, keeping locations unchanged; " PopulationIndependence": keeps reference points unchanged, redistributes others across actual locations. Global Logical; if TRUE, a global envelope sensu Duranton and Overman (2005) is calculated. verbose Logical; if TRUE, print progress reports during the simulations. The random location null hypothesis is that of Duranton and Overman (2005). It is appropriate to test the univariate Kinhom function of a single point type, redistributing it over all point locations. It allows fixing lambda along simulations so the warning message can be ignored. The random labeling hypothesis is appropriate for the bivariate Kinhom function. The population independence hypothesis is that of Marcon and Puech (2010). This envelope is local by default, that is to say it is computed separately at each distance. See Loosmore and Ford (2006) for a discussion. The global envelope is calculated by iteration: the simulations reaching one of the upper or lower values at any distance are eliminated at each step. The process is repeated until Alpha / Number of simulations simulations are dropped. The remaining upper and lower bounds at all distances constitute the global envelope. Interpolation is used if the exact ratio cannot be reached. An envelope object (envelope). There are methods for print and plot for this class. The fv contains the observed value of the function, its average simulated value and the confidence envelope. Duranton, G. and Overman, H. G. (2005). Testing for Localisation Using Micro-Geographic Data. Review of Economic Studies 72(4): 1077-1106. Kenkel, N. C. (1988). Pattern of Self-Thinning in Jack Pine: Testing the Random Mortality Hypothesis. Ecology 69(4): 1017-1024. Loosmore, N. B. and Ford, E. D. (2006). Statistical inference using the G or K point pattern spatial statistics. Ecology 87(8): 1925-1931. Marcon, E. and Puech, F. (2010). Measures of the Geographic Concentration of Industries: Improving Distance-Based Methods. Journal of Economic Geography 10(5): 745-762. Marcon, E. and F. Puech (2017). A typology of distance-based measures of spatial concentration. Regional Science and Urban Economics. 62:56-67. See Also # Keep only 20% of points to run this example X <- as.wmppp(rthin(paracou16, 0.2)) labelSize = expression("Basal area (" ~cm^2~ ")"), labelColor = "Species") # Density of all trees lambda <- density.ppp(X, bw.diggle(X)) V.americana <- X[X$marks$PointType=="V. Americana"] plot(V.americana, add=TRUE) # Calculate Kinhom according to the density of all trees # and confidence envelope (should be 1000 simulations, reduced to 4 to save time) r <- 0:30 NumberOfSimulations <- 4 Alpha <- .10 autoplot(KinhomEnvelope(X, r,NumberOfSimulations, Alpha, , SimulationType="RandomPosition", lambda=lambda), ./(pi*r^2) ~ r) version 2.9-0
{"url":"https://search.r-project.org/CRAN/refmans/dbmss/html/KinhomEnvelope.html","timestamp":"2024-11-07T00:17:37Z","content_type":"text/html","content_length":"7035","record_id":"<urn:uuid:b0046834-6ad8-4612-8e59-442e36243038>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00408.warc.gz"}
Complex Numbers And Quadratic Equations class 11 Notes Mathematics | myCBSEguide Complex Numbers And Quadratic Equations class 11 Notes Mathematics myCBSEguide App Download the app to get CBSE Sample Papers 2023-24, NCERT Solutions (Revised), Most Important Questions, Previous Year Question Bank, Mock Tests, and Detailed Notes. Install Now CBSE Mathematics Chapter 5 Complex Numbers And Quadratic Equations class 11 Notes Mathematics in PDF are available for free download in myCBSEguide mobile app. The best app for CBSE students now provides Complex Numbers And Quadratic Equations class 11 Notes Mathematics latest chapter wise notes for quick preparation of CBSE exams and school based annual examinations. Class 11 Mathematics notes on Chapter 5 Complex Numbers And Quadratic Equations class 11 Notes Mathematics are also available for download in CBSE Guide website. CBSE Guide Complex Numbers And Quadratic Equations class 11 Notes CBSE guide notes are the comprehensive notes which covers the latest syllabus of CBSE and NCERT. It includes all the topics given in NCERT class 11 Mathematics text book. Users can download CBSE guide quick revision notes from myCBSEguide mobile app and my CBSE guide website. Complex Numbers And Quadratic Equations class 11 Notes Mathematics Download CBSE class 11th revision notes for Chapter 5 Complex Numbers And Quadratic Equations class 11 Notes Mathematics in PDF format for free. Download revision notes for Complex Numbers And Quadratic Equations class 11 Notes Mathematics and score high in exams. These are the Complex Numbers And Quadratic Equations class 11 Notes Mathematics prepared by team of expert teachers. The revision notes help you revise the whole chapter in minutes. Revising notes in exam days is on of the best tips recommended by teachers during exam days. CBSE Class 11 Mathematics Revision Notes 1. Algebra, Modulus and Conjugate of Complex Numbers 2. Argand Plane and Polar Representation 3. Quadratic Equations • $\Rightarrow$ ${i^2} = - 1$ • Imaginary Number: Square root of a negative number is called an Imaginary number. For example, $\sqrt { - 5} ,\sqrt { - 16} ,$ etc. are imaginary numbers. • Integral power of Iota ($i$) : ${i^p}\left( {p &gt; 4} \right) = {i^{4q + r}}$ = ${\left( {{i^4}} \right)^q}.{i^r} = {i^r},$ where $\sqrt { - 1} = i$ and ${i^4} = 1$ • Complex Number: A number of the form $\text{a }+~\text{ib},$ where a and b are real numbers, is called a complex number, a is called the real part and b is called the imaginary part of the complex number. It is denoted by $z.$ • Real part of $z=a+ib$ is $a$ and is denoted by $Re(z) = a$. • Imaginary part of $z=a+ib$ is $b$ and is written as $Im(z) = b$. • Equality of complex numbers: Two complex numbers ${z_1} = a + ib$ and ${z_2} = c + id$ are said to be equal, if $a=c$ and $b=d$. • Conjugate of a complex number: Two complex numbers are said to be conjugate of each other, if their sum is real and their product is also real. Conjugate of a complex number $z=a+ib$ is $\ overline z = a - ib$ i.e., conjugate of a complex number is obtained by changing the sign of imaginary part of z. • Modulus of a complex number: Modulus of a complex number $z=x+iy$ is denoted by $\left| z \right| = \sqrt {{x^2} + {y^2}}$. • Argument of a complex number $x+iy$ : Arg$(x+iy)=$ ${\tan ^{ - 1}}{y \over x}$. • Representation of complex number as ordered pair: Any complex number $a+ib$ can be written in ordered pair as $\left( {a,b} \right)$, where a is the real past and b is the imaginary part of a complex number. • $\text{Let }{{\text{z}}_{1}}\text{ }=\text{ a }+\text{ ib and }{{\text{z}}_{2}}\text{ }=\text{ c }+\text{ id}.\text{ Then}$ (i) ${{\text{z}}_{1}}\text{+ }{{\text{z}}_{2}}\text{= (a + c) + i (b + d)}$ (ii) ${{\text{z}}_{1}}{{\text{z}}_{2}}\text{= (ac -bd) + i (ad +bc)}$ • Division of a complex number: If ${z_1} = a + ib$ and ${z_2} = c + id$, then, ${{{z_1}} \over {{z_2}}} = {{a + ib} \over {c + id}} = {{\left( {a + ib} \right)\left( {c - id} \right)} \over {\left( {c + id} \right)\left( {c - id} \right)}}$= ${{ac + bd} \over {{c^2} + {d^2}}} + i{{bc - ad} \over {{c^2} + {d^2}}}$ • For any non-zero complex number $\text{z }=\text{ a }+\text{ ib }\left( \text{a }e \text{ }0,\text{ b }e \text{ }0 \right),$ there exists the complex number $\frac{a}{{{a}^{2}}+{{b}^{2}}}+i\ frac{-b}{{{a}^{2}}+{{b}^{2}}}$denoted by $\frac{1}{z}$or ${{z}^{-1}}$, called the multiplicative inverse of z such that $\text{(a + ib) }\left( \frac{{{a}^{2}}}{{{a}^{2}}+{{b}^{2}}}+i\frac{-b} {{{a}^{2}}+{{b}^{2}}} \right)\text{=1+io=1}$ • Polar form of a complex number: The polar form of the complex number $\text{z }=\text{ x }+\text{ iy is r }\left( \text{cos}\theta \text{ }+\text{ i sin}\theta \right)$, where $r=\sqrt{{{x}^{2}}+ {{y}^{2}}}$ (the modulus of z) and $~\text{cos}\theta ~=\frac{x}{r},$ $~\sin \theta ~=\frac{y}{r},$. (θ is known as the argument of z. The value of θ, such that is called the principal argument of z. • Important properties: (i) $\left| {{z_1}} \right| + \left| {{z_2}} \right| \ge \left| {{z_1} + {z_2}} \right|$, (ii) $\left| {{z_1}} \right| - \left| {{z_2}} \right| \le \left| {{z_1} + {z_2}} \right|$ • Fundamental Theorem of algebra: A polynomial equation of n degree has n roots. Quadratic Equation: • Quadratic Equation: Any equation containing a variable of highest degree 2 is known as quadratic equation. e.g., $a{x^2} + bx + c = 0$. • Roots of an equation: The values of variable satisfying a given equation are called its roots. Thus, $x = \alpha$ is a root of the equation $p\left( x \right) = 0$ if $p\left( \alpha \right) = • Solution of quadratic equation: The solutions of the quadratic equation $a{x^2} + bx + c = 0$, where $a,b,c \in {\rm{R,}}$ $a e 0,$ ${b^2} - 4ac &lt; 0,$ are given by $x = {{ - b \pm i\sqrt {4ac - {b^2}} } \over {2a}}.$ The Complex Numbers And Quadratic Equations class 11 Notes • CBSE Revision notes (PDF Download) Free • CBSE Revision notes for Class 11 Mathematics PDF • CBSE Revision notes Class 11 Mathematics – CBSE • CBSE Revisions notes and Key Points Class 11 Mathematics • Summary of the NCERT books all chapters in Mathematics class 11 • Short notes for CBSE class 11th Mathematics • Key notes and chapter summary of Mathematics class 11 • Quick revision notes for CBSE exams CBSE Class-11 Revision Notes and Key Points Complex Numbers And Quadratic Equations class 11 Notes Mathematics. CBSE quick revision note for class-11 Mathematics, Physics, Chemistry, Biology and other subject are very helpful to revise the whole syllabus during exam days. The revision notes covers all important formulas and concepts given in the chapter. Even if you wish to have an overview of a chapter, quick revision notes are here to do if for you. These notes will certainly save your time during stressful exam days. To download Complex Numbers And Quadratic Equations class 11 Notes, sample paper for class 11 Chemistry, Physics, Biology, History, Political Science, Economics, Geography, Computer Science, Home Science, Accountancy, Business Studies and Home Science; do check myCBSEguide app or website. myCBSEguide provides sample papers with solution, test papers for chapter-wise practice, NCERT solutions, NCERT Exemplar solutions, quick revision notes for ready reference, CBSE guess papers and CBSE important question papers. Sample Paper all are made available through the best app for CBSE students and myCBSEguide website. Test Generator Create question paper PDF and online tests with your own name & logo in minutes. Create Now Question Bank, Mock Tests, Exam Papers, NCERT Solutions, Sample Papers, Notes Install Now Leave a Comment
{"url":"https://mycbseguide.com/blog/complex-numbers-and-quadratic-equations-class-11-notes-mathematics/","timestamp":"2024-11-02T01:32:02Z","content_type":"text/html","content_length":"122933","record_id":"<urn:uuid:12a6e22b-b029-4c3f-bc3c-eb9195dc9302>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00820.warc.gz"}
Standard algorithm multiplication with a number to 100 Students learn standard algorithm multiplication with a number to 100. The students learn standard algorithm multiplication with a number to 100. Students will be able to multiply using the standard algorithm with a number less than 10 and a number to 100. The students solve the multiplication problems. First with ones, then with tens numbers. In the other exercise they take the multiplication problem out of the story and then solve it. Explain that with multiplication you can put the numbers one under the other and multiply them working from right to left. This is called standard algorithm multiplication. Show that above each column there is a box for regrouping. In these boxes you can write down the numbers that are carried over and add them to the other outcomes later. Show the example of how you set up the problem 91 x 3. You start multiplying using the standard algorithm. First you multiply the numbers in the ones column together. 3 x 1 = 3. After that you multiply with the number in the tens column. The problem then is 3 x 90 = 270. Now you add up the answers in the intermediate step. 270 + 3 = 273. You write this answer under the line. Emphasize that you don't write down the intermediate step, but fill in the numbers directly under the line. Now solve the two problems together with the students. You may let the students use scrap paper to write down the intermediate steps. Next the students solve the following three problems on their own. Check whether the students can multiply using the standard algorithm with a number to 100, by asking the following question:- What steps do you take to solve the problem 32 × 4 using the standard algorithm? The students test their understanding of standard algorithm multiplication with a number to 100 through ten exercises. For some of these the numbers are already set up in the chart, and for others the students must fill in the chart and solve the problem. Discuss once again the importance of being able to multiply using the standard algorithm. As a closing activity the students solve a few more problems using the standard algorithm. Next you can have the students work together in pairs to find the numbers under the ink spots and complete the problem. Have students that have difficulty with standard algorithm multiplication first practice writing the numbers one under the other. Show how you first make a multiplication problem with the numbers in the ones column and then in the tens column. Emphasize that you shouldn't forget that you have to put a 0 behind the number in the tens column. Next show how you can also directly put the number under the line. Start by practicing this with multiplication problems from the 2 times table. Gynzy is an online teaching platform for interactive whiteboards and displays in schools. With a focus on elementary education, Gynzy’s Whiteboard, digital tools, and activities make it easy for teachers to save time building lessons, increase student engagement, and make classroom management more efficient.
{"url":"https://www.gynzy.com/en-us/library/items/standard-algorithm-multiplication-with-a-number-to-100","timestamp":"2024-11-04T12:11:41Z","content_type":"text/html","content_length":"553771","record_id":"<urn:uuid:972871ae-0df8-4311-9ee9-7f171decf808>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00897.warc.gz"}
If the planes r→(2i^−λ(j)+3k^)=0andr→(λ(i)+5j^−k^)=5 are perpe... | Filo Question asked by Filo student If the planes are perpendicular to each other, then the value of isa. b. c. d. Not the question you're searching for? + Ask your question Filo tutor solution Learn from their 1-to-1 discussion with Filo tutors. Generate FREE solution for this question from our expert tutors in next 60 seconds Don't let anything interrupt your homework or exam prep with world’s only instant-tutoring, available 24x7 Found 5 tutors discussing this question Discuss this question LIVE for FREE 10 mins ago Practice more questions on Vector and 3D View more Students who ask this question also asked View more Question Text If the planes are perpendicular to each other, then the value of isa. b. c. d. Updated On Apr 10, 2024 Topic Vector and 3D Subject Mathematics Class Class 12
{"url":"https://askfilo.com/user-question-answers-mathematics/if-the-planes-are-perpendicular-to-each-other-then-the-value-39323733333134","timestamp":"2024-11-06T16:55:36Z","content_type":"text/html","content_length":"274388","record_id":"<urn:uuid:66372838-b5d6-43fe-b66b-90eaf5f189d2>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00892.warc.gz"}
Comprehensive List of Trie-Based Questions Comprehensive List of Trie-Based Questions. 1. Fundamentals of Trie Data Structure • Implement a Trie (Insert, Search, Delete) • Implement a TrieNode Class • Insert a Word into a Trie • Search for a Word in a Trie • Delete a Word from a Trie • Check if a Prefix Exists in a Trie • Count Words in a Trie • Count Prefixes in a Trie • Implement a Trie with Case Sensitivity • Implement a Trie with Case Insensitivity NEW: Visualize Trie Structure for Better Understanding (Display Trie as a Tree) 2. Trie-Based String Operations • Find All Words with a Given Prefix (Using Trie) • Find Words that Start with a Given Prefix (Using Trie) • Find All Words that End with a Given Suffix (Using Trie) • Find Longest Prefix Matching a Given String • Find the Shortest Unique Prefix for Each Word • Find All Words that Match a Given Pattern (Using Wildcards) • Implement a Trie to Solve the Autocomplete Problem • Find the Longest Common Prefix Among a List of Words • Implement a Trie for Text Search and Replacement • Find the Longest Common Suffix Among a List of Words NEW: Implement Trie for Anagram Search Across Words 3. Advanced Trie Operations • Implement a Trie with Node Counting • Implement a Trie with Value Mapping (Key-Value Pair Storage) • Implement a Trie with Frequency Counting • Find the Number of Words with a Specific Prefix • Find the Number of Words with a Specific Suffix • Implement a Trie for Dictionary Word Lookup • Find the Longest Prefix of a Word in a Trie • Find the Maximum Number of Words in a Trie that Share a Prefix • Find the Shortest Path from Root to a Given Word in a Trie • Implement a Trie with a TrieMap (Word Count Mapping) NEW: Find All Palindromic Prefixes and Suffixes Using Trie NEW: Implement a Memory-Efficient Trie Using Bitwise Operations 4. Trie and Text Processing • Implement a Trie-based Spell Checker • Implement a Trie for Dictionary-Based Text Completion • Find All Possible Words that Can Be Formed from a Given Set of Letters (Using Trie) • Implement a Trie-based Solution for Word Segmentation • Implement a Trie to Solve the Word Break Problem • Find All Valid Words in a Board Using Trie (Word Search II) • Find the Maximum Number of Words Formed from a Given List (Using Trie) • Implement a Trie-based Solution for Text Search with Wildcards • Implement a Trie to Solve the Text Justification Problem • Find the Most Frequent Prefixes in a Large Text Dataset (Using Trie) NEW: Implement Trie for Predictive Text Input for Multilingual Support NEW: Build a Trie-Based Solution for Document Similarity Detection 5. Trie-Based Algorithms and Pattern Matching • Implement a Trie-Based Algorithm for Prefix Matching • Implement a Trie-Based Algorithm for Suffix Matching • Find the Minimum Number of Edits to Convert One Word to Another (Using Trie) • Find the Maximum Length of a Prefix with a Given Frequency (Using Trie) • Implement a Trie-Based Algorithm for Finding Palindromic Substrings • Find the Number of Distinct Substrings in a given string (Using Trie) • Implement a Trie-Based Algorithm for Pattern Matching with Multiple Patterns • Find the Longest Palindromic Substring in a Trie • Find the Kth Largest Prefix in a Trie • Implement a Trie-Based Algorithm for Longest Repeating Substring NEW: Implement Trie for N-gram Analysis in Large Datasets NEW: Solve the Longest Repeating Subsequence Problem Using Trie 6. Application-Oriented Trie Challenges • Develop an Efficient Trie for Fast Language Translation Suggestions • Implement a Trie for Storing Synonyms and Antonyms of Words • Build a Trie for Fast URL Storage and Retrieval in Web Crawlers • Implement Trie for DNA Sequence Matching in Bioinformatics • Build a Trie-based Keyword Search and Ranking System for E-Commerce NEW: Implement a Trie-Based Autocorrect System with Contextual Suggestions Each of these questions covers different aspects of the Trie data structure, ranging from basic implementations to advanced applications, making it a well-rounded set to master Tries in various Top comments (0) For further actions, you may consider blocking this person and/or reporting abuse
{"url":"https://practicaldev-herokuapp-com.global.ssl.fastly.net/nozibul_islam_113b1d5334f/comprehensive-list-of-trie-based-questions-d70","timestamp":"2024-11-12T10:21:20Z","content_type":"text/html","content_length":"68360","record_id":"<urn:uuid:b48bbb26-5fd2-40c1-8529-66597e749f0b>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00497.warc.gz"}
ChatGPT-4o Will Be Good for Certain Math, Certain Thinking, and Certain Kids ChatGPT-4o Will Be Great for Certain Math, Certain Thinking, and Certain Kids Here are the important questions about OpenAI's product announcement that few people are asking right now. On Monday, OpenAI announced GPT-4o, an upgrade to their ChatGPT technology. Claire Zau has a summary of the announcement that I found helpful. I’ll focus here on two videos relevant to math In one video, several OpenAI employees sit around a table solving 3x + 1 = 4. They hold a cameraphone above a piece of paper and ChatGPT parses their work using OpenAI’s new multimodal technology and guides them to a solution. In another video, Sal Khan’s son Imran Khan answers a question on Khan Academy about finding the sine of an angle in a right triangle. He’s streaming his screen to ChatGPT which, again, uses multimodal input to guide Khan to a solution. ChatGPT asks Khan at one point, “Which side do you think is the hypotenuse?” and Khan indicates his answer by drawing on the screen. ChatGPT accurately interprets Khan’s drawings and responds with corrective feedback, a conversational pace, and a positive tone. This is pretty neat. 👍 Look, folks. I’m not made of stone. This is impressive technology. I am strongly considering upgrading my previous assessment that “generative AI is neat” to “generative AI is pretty neat.” Across social media, however, many tech and business leaders are claiming that these product changes are more than “pretty neat,” that they instead herald a revolution in learning. The revolution that they promised a year ago with GPT-3.5, the revolution that many of them promised throughout the 2010s with YouTube videos and autograded quizzes, those were not the real revolutions. Those promises should not be scrutinized too closely now, please. There has never been a Pundit Accountability Tribunal and now is not the time to create one. We should look forward, not backward. This is the actual revolution—for real this time. Two things are both true: 1. These people may be right. 2. If they are, it will be an accident. Tech and business leaders are deeply unserious right now. Rather they are deeply serious about new technology and shareholder returns but they are deeply unserious about the needs of learners. I have digested hundreds of social media posts recapping these demos over the last two days. Only a small handful have made any attempt to answer any of the questions that any serious person should wonder here: • What kind of math is this good for? • What kind of thinking is this good for? • What kind of learner is this good for? I’ll test these features more as they’re released more widely, but for now I can say with absolute certainty that GPT-4o will only help certain learners this certain thoughts about certain kinds of math. This is true of every new tool or medium. It is certainly true of the tools I have helped build. We need to ask ourselves about every edtech product, “What is the warranty here?" It is never unlimited. Thanks for reading Mathworlds! Subscribe for free to receive a new post about math, teaching, and technology every Wednesday. Certain Math What do you notice about the math in both demos? These are math problems that a student can solve using algorithms, math problems that result in a single numerical calculation that can be evaluated as correct or incorrect. “Calculation” is an important mathematical skill and if we could significantly improve a student’s ability to calculate (and all other considerations were equal) we would celebrate. But calculation is to mathematics what chopping onions is to hosting a dinner party. It is important, but it is not central. It is not a prerequisite for other kinds of work. If chopping onions were a person’s primary association with hosting a dinner party, they would host many fewer dinner parties. In addition to calculating with math, students need to argue, estimate, sketch, notice, wonder, construct, speculate, describe, evaluate, play, and so on. Students need to calculate solutions to equations as in the OpenAI video, but also to create equations from a relationship. They need to use the sine ratio to calculate unknown side lengths as in the Khan video, but they also need to identify right triangles in the world around them. It is possible that GPT-4o can help students learn that other mathematics, but it seems equally likely to me that calculation is just uniquely friendly terrain for GPT-4o. Certain Thinking Khan asks for help finding the sine of an angle in the triangle and GPT-4o asks: Can you first identify which sides of the triangle are the opposite, adjacent, and hypotenuse relative to angle alpha? This question focuses the student on small ideas. It breaks a big idea—the equivalence of these ratios across similar triangles—into smaller ideas, ideas which a student can memorize and master without understanding the big idea even a little. This looks like success to many. To me it looks like someone has successfully diced an onion without understanding why we’re hosting the dinner party, what we hope our guests experience, or how we’re going to structure the evening. We can focus students on larger ideas by asking other questions. What is the question asking you to do? What do you know about that? What is special about this triangle? What do you know about sine? Perhaps this is only a matter of prompt engineering. Perhaps GPT-4o can be trained to focus students on those larger ideas. But I suspect it is equally possible that GPT-4o has been trained on the corpus of small ideas and step-by-step guides that pollute the mathematical internet. I suspect GPT-4o will struggle to engage a student in a conversation about those big ideas also because those ideas are hard to evaluate as either true or false. They are almost always both—true and false simultaneously. Certain Learners I wish it went without saying that the learners in these videos are not typical of US K-12 students. They do not represent the median student in age, education, socioeconomic status, or the desire to perform, especially here for cameras broadcasting to a worldwide audience. What is typical? In a large 2018 study of thousands of students, Khan Academy reported that only 11% of participating students used its software for the recommended dosage of 30 minutes per week. This was in a study population of teachers who committed to helping their students meet that usage threshold. We are moving at a very brisk pace away from the promises so many tech and business leaders made about their previous products, products that huge majorities of students placed on the curb next to the cans and bottles, on to new promises about their new products, all without asking, “Why didn’t the last product take for students?” All these startups crowding into the edtech AI space right now should wonder, “Which students are at all interested in talking to their computer about math?” I think it’s an open question, but the assumed answer among tech and business leaders seems to be, “All of them! Why wouldn’t they be?” I will tell you why. First, students are much more interested in talking to people in their class, or texting people in other classes, than they are in talking to their computer about math. Second, many students do not want to ask the question, “Hey GPT, check my work here. How did I do?” because ignorance is often bliss. Ignorance is often preferable to learning that you have more to I am trying to remain open to the possibility that new technology might one day lead to a drastic reduction in the number of necessary teachers in the world, or at least a drastic change in the kind of work they do. But this characteristic of students is why we have teachers. To encourage, cajole, and compel. To make new learning seem appealing and accessible. To convince students that math is for you and you are for math. Many people describe generative AI as an infinitely patient tutor. They don’t understand that this makes generative AI an ineffective tutor. It’s true that GPT-4o will patiently wait for you to ask for help. But effective teachers do not wait to be asked. Effective teachers know that many students will never ask for help, preferring to pass the time from bell to bell without bothering or being bothered by their teacher. Effective teachers are a bother! In these videos, you see GPT-4o waiting reactively for the learner’s invitation whereas a skilled teacher will proactively create their own invitation. This is impressive technology, certainly, but to make even the junior varsity tutoring team (to say nothing about varsity tutoring and even less about the teaching team) it will need to respond to many more kinds of math and much larger kinds of ideas than we see in these demos. I find it much more likely that OpenAI will meet that challenge than they’ll meet the much larger challenge of providing learners with a tutor who is sensitive, persistent, and infinitely impatient as well. Back in 2013, Kate Nonesuch wrote an article about the place of patience and kindness in teaching. I thought you'd like it: https://katenonesuch.com/2013/05/08/neither-kind-nor-patient/. It also made me wonder about why the Chat GPT 4o demos I've seen all use a female voice? In the video you linked with the linear equation, I cringed every time the presenters interrupted the AI. Logically, I know that they are just moving the program along (and GPT can be long-winded!), but it pains me to think that we might be training people to interrupt women even more than they already Expand full comment "calculation is to mathematics what chopping onions is to hosting a dinner party. It is important, but it is not central." Ahh, I have been trying to get this idea across to my 6th graders who believe that all they need is a calculator. I think this metaphor might actually get into some of their heads. Expand full comment 34 more comments...
{"url":"https://danmeyer.substack.com/p/chatgpt-4o-will-be-great-for-certain","timestamp":"2024-11-15T04:28:48Z","content_type":"text/html","content_length":"197711","record_id":"<urn:uuid:2a6cc984-41fd-464e-b3d7-f971819f27f5>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00357.warc.gz"}
How do you use SAS in math? “SAS” is when we know two sides and the angle between them. use The Law of Cosines to calculate the unknown side, then use The Law of Sines to find the smaller of the other two angles, and then use the three angles add to 180° to find the last angle. What is a real life example of a triangle? Traffic signs form the most commonly found examples of the triangle in our everyday life. The signs are in equilateral triangular shape; which means that all three sides are of equal lengths and have equal angles. How are congruent triangles used in real life? By utilizing congruent triangles the buildings create a nice work atmosphere(office buildings), a protection system from the sun by reflecting off opposite triangular faces, or even a popular tourist attraction. This is an example of triangle congruence in the real world- identical buildings. What is SSA similarity? The acronym SSA (side-side-angle) refers to the criterion of congruence of two triangles: if two sides and an angle not include between them are respectively equal to two sides and an angle of the other then the two triangles are equal. Why is triangle congruence important? For two polygons to be congruent, they must have exactly the same size and shape. This means that their interior angles and sides must all be congruent. That’s why studying the congruence of triangles is so important–it allows us to draw conclusions about the congruence of polygons, too. What’s the difference between SSA and SAS? Both of these two postulates tell you that you have two congruent sides and one congruent angle, but the difference is that in SAS, the congruent angle is the one that is formed by the two congruent sides (as you see, the “A” is between the two S), whereas with SSA, you know nothing about the angle formed by the two … What does SAS mean in math? What is SSS SAS used for? Univ. This congruence shortcut is known as side-side-side (SSS). Another shortcut is side-angle-side (SAS), where two pairs of sides and the angle between them are known to be congruent. SSS and SAS are important shortcuts to know when solving proofs. What is SAS congruence? What is SAS congruence of triangles? If any two sides and angle included between the sides of one triangle are equivalent to the corresponding two sides and the angle between the sides of the second triangle, then the two triangles are said to be congruent by SAS rule. What is SAS rule? The SAS rule states that. If two sides and the included angle of one triangle are equal to two sides and included angle of another triangle, then the triangles are congruent. Is SSA a theorem? What about SSA (Side Side Angle) theorem? There is NO SUCH THING!!!! The ASS Postulate does not exist because an angle and two sides does not guarantee that two triangles are congruent. This is why there is no Side Side Angle (SSA) and there is no Angle Side Side (ASS) postulate. What does the word triangle mean? 1 : a flat geometric figure that has three sides and three angles. 2 : something that has three sides and three angles a triangle of land. 3 : a musical instrument made of a steel rod bent in the shape of a triangle with one open angle. What is SAS congruence condition? SAS congruence criterion: The Side Angle Side (SAS) congruence criterion states that, if under a correspondence, two sides and the included angle of one triangle are equal to two corresponding sides and the included angle of another triangle, then these two triangles are congruent…. Why congruent is important? Congruent Triangles are an important part of our everyday world, especially for reinforcing many structures. Two triangles are congruent if they are completely identical. This means that the matching sides must be the same length and the matching angles must be the same size…. How do you tell if there are 2 triangles? Once you find the value of your angle, subtract it from 180° to find the possible second angle. Add the new angle to the original angle. If their sum is less than 180°, you have two valid answers. If the sum is over 180°, then the second angle is not valid…. What is the triangle shape? A triangle is a shape, or a part of two dimensional space. It has three straight sides and three vertices. The three angles of a triangle always add up to 180° (180 degrees). It is the polygon with the least possible number of sides. What’s a 7 sided shape called? Are SSA triangles similar? The answer is no. Here is a video demonstrating why with an example. So you see, you can make 2 different triangles if you only know the length of two sides and an angle (in that order). Therefore, SSA (Side-Side-Angle) is NOT a congruence rule. What does SAS look like? 2. SAS (side, angle, side) SAS stands for “side, angle, side” and means that we have two triangles where we know two sides and the included angle are equal. If two sides and the included angle of one triangle are equal to the corresponding sides and angle of another triangle, the triangles are congruent. What is a SSA triangle? “SSA” is when we know two sides and an angle that is not the angle between the sides. To solve an SSA triangle. use The Law of Sines first to calculate one of the other two angles; then use the three angles add to 180° to find the other angle; finally use The Law of Sines again to find the unknown side. What is congruence class 9? In other words, two triangles are congruent if the sides and angles of one triangle are equal to the corresponding sides and angles of the other triangle. … Is there SSA congruence? Given two sides and non-included angle (SSA) is not enough to prove congruence. You may be tempted to think that given two sides and a non-included angle is enough to prove congruence. But there are two triangles possible that have the same values, so SSA is not sufficient to prove congruence.
{"url":"https://thisisbeep.com/how-do-you-use-sas-in-math/","timestamp":"2024-11-07T07:38:05Z","content_type":"text/html","content_length":"52857","record_id":"<urn:uuid:5a1cd5bd-4197-44f0-8c79-6757c12207b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00014.warc.gz"}
Integrating Factor - (Mathematical Modeling) - Vocab, Definition, Explanations | Fiveable Integrating Factor from class: Mathematical Modeling An integrating factor is a mathematical function that is used to simplify the process of solving first-order linear differential equations. It transforms the equation into an exact equation, making it easier to find solutions. The integrating factor is typically expressed in the form of an exponential function derived from the coefficient of the dependent variable in the equation. congrats on reading the definition of Integrating Factor. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. The integrating factor is usually denoted as $$ ext{IF} = e^{rac{P(x)}{Q(x)}}$$, where P(x) is the coefficient of the dependent variable and Q(x) is a function of the independent variable. 2. To apply an integrating factor, it is multiplied through the entire differential equation to ensure that one side becomes an exact derivative. 3. Integrating factors are particularly useful for first-order linear ordinary differential equations (ODEs), which take the form $$rac{dy}{dx} + P(x)y = Q(x)$$. 4. Once the equation is transformed using the integrating factor, integrating both sides leads to a solution that can be expressed in terms of y. 5. Finding the integrating factor often involves calculating the exponential of an integral, making it crucial to understand integration techniques. Review Questions • How does an integrating factor transform a first-order linear differential equation into an exact equation? □ An integrating factor transforms a first-order linear differential equation into an exact equation by multiplying both sides of the equation by a specific function, which often takes the form of an exponential. This process allows the left side of the modified equation to be expressed as the derivative of a product of functions. By doing this, it simplifies solving the equation because it converts it into a more manageable form, allowing for straightforward integration. • Compare and contrast integrating factors with other methods like separation of variables when solving differential equations. □ Integrating factors and separation of variables are both techniques for solving differential equations but apply to different types. Integrating factors are specifically designed for first-order linear ODEs, while separation of variables can be used for any separable equation. The former requires finding a specific function that simplifies the equation into an exact form, whereas separation of variables relies on rearranging terms to isolate variables before integration. Understanding when to use each method is crucial for effective problem-solving. • Evaluate how well you understand integrating factors by analyzing a first-order linear differential equation and determining if an integrating factor is necessary for finding its solution. □ To evaluate understanding, consider a given first-order linear differential equation like $$rac{dy}{dx} + 3y = 6$$. This type of equation indicates that an integrating factor is necessary because it fits the standard form. By calculating $$e^{rac{3}{1}}$$, we find that using this integrating factor simplifies solving the equation, illustrating how integral concepts connect to applying this technique effectively. Recognizing when and how to apply this method demonstrates mastery of essential problem-solving strategies in differential equations. © 2024 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
{"url":"https://library.fiveable.me/key-terms/mathematical-modeling/integrating-factor","timestamp":"2024-11-07T15:58:18Z","content_type":"text/html","content_length":"148508","record_id":"<urn:uuid:b4a9b36b-0ce0-40f3-8eb9-4e2dc660064f>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00209.warc.gz"}
Free Fall Calculator We describe an object in free fall as an object falling under the sole influence of gravity. Using the equation F=ma this implies that any acceleration experienced by the object is due to gravity (g) as mass is constant and the only force acting on the object is gravity. This may surprise you, but not all objects in free fall are necessarily falling or moving downwards. You probably know that even earth is under the influence of the Sun’s gravity, no other forces are acting on the earth and there is no air resistance present. Yes, the earth is currently in free fall towards the Sun! The next logical question is, given how long the earth has been around why hasn't it already crashed into the sun. Well, the earth’s speed isn't actually directed towards the Sun, but tangentially to its orbit. Think of the satellites moving around the earth in an elliptic orbit or even a natural satellite like the moon, its first cosmic velocity actually generates a centrifugal force, which is equivalent to the force of gravity in the opposite direction. When acceleration is constant (i.e. free fall of an object) it is trivial to show by using the suvat equations that the velocity of a falling object is V = V0 + g * t ● V0 stands for the initial velocity of the object (expressed in m/s or ft/s) ● t stands for the time of the fall (expressed in s) ● g stands for the acceleration due to gravity (expressed in m/s2 or ft/s2) gravity, As mentioned earlier free fall is when a falling object is under the sole influence of gravity, hence won't be accounting for air resistance in our calculation implying that our acceleration due to gravity should be a constant value of 9.80665 m/s (approximately equal to 32.17405 ft/s). Outside of calculations in real life, falling objects often experience terminal velocity which basically serves as a limit to the velocity of an object. Now you may be wondering what terminal velocity is. As we have outlined above the acceleration due to gravity is constant, which implies that the gravitational force acting on the object is also constant. In reality, there is air resistance and it increases with increasing velocity as you can test by sticking your hand out the window of a car going 20km/h vs one going 100km/h. Therefore, as the falling object accelerates and increases its velocity the air resistance will generate a force equal and opposite to the force due to gravity. Applying Newton’s first law, we realize that the object stops accelerating and begins to move with constant speed. This is what is referred to as terminal velocity. As stated before this particular calculator ignores air resistance, basically calculating the free fall of an object in a vacuum. The equation of motion with s as the subject of the formula is used when one wants to find the distance traveled by a falling object. In the event that the velocity and initial displacement of the object are both zero, it is simply s = 0.5 * g * t2 But. in the event that the object is already falling, this would mean it has an initial velocity that has to be included in the calculation. s = V0 * t + 0.5 * g * t2 You should be able to see from the equations that fall distance of the object is proportional to the fall time squared. In simple terms, this means that the object falls through a significantly greater distance every second as compared to the second before. There is an amazing implication according to the free fall formula, two different objects regardless of mass in the absence of air resistance should fall at the same velocity and thus move the same distance in a given amount of time. Astronaut David Randolph Scott performed an experiment by dropping a falcon feather and a hammer from the same height on the moon in 1971 and they hit the ground at the same time, due to a lack of air resistance on the moon. This experiment can be tested with a vacuum here on earth. We have prepared an example to briefly explain the ins and outs when using this free fall calculator. 1. Choosing the acceleration due to gravity. Its value is 9.80665 m/s2 on earth on average and as such it is the default value we have set in the free fall calculator. Should you feel the need to use a different value. It is editable as are all the other fields. 2. Determining if the object has an initial velocity. V0= 0 is the default but again it can be changed as it is editable as well. 3. You must enter the time of the fall. For the purposes of this example, we will use 15 as the fall time. 4. Now if we want to know the final free fall velocity, using the formula V =V0 + g * t = 0 + 9.80665 * 15 = 147.10 m/s (check with the free fall calculator). 5. Now if we want to know the free fall distance, using the formula s = 0.5 * g * t2 = 0.5 * 9.80665 * 152= 2206.50 m (check with the free fall calculator). 6. In the event that you have the fall distance of an object but don’t have the fall time, you can use the free fall calculator to get the time. While most of us settle for elevators or bungee jumping to experience free fall, Alan Eustace jumped from over 41,425 m (135 908 feet) above the Earth in 2014. He broke the sound barrier and became the man with the highest free fall in human history. While it doesn’t meet the exact definition of free fall described earlier as there is air resistance, it is still very close to real free fall as the alternative would be performing the same stunt in an actual vacuum. The fall lasted 15 minutes as he went over 800 miles per hour which is significantly over the sound barrier.
{"url":"https://thefreecalculator.com/physics/free-fall","timestamp":"2024-11-03T16:30:28Z","content_type":"text/html","content_length":"108126","record_id":"<urn:uuid:82d502ad-0f69-4fa1-87de-e0b0ddbc313e>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00091.warc.gz"}
Writing an exam paper with Julia and Jupyter Lab I just taught Linear Algebra, which means I need to give exams with a lot of questions about computations. I would like to have the numbers in this problems to be random, and I like to solve them with computers. (Sorry, students. Only you need to compute by hand. 😆) I have tried to use Mathematica to do so. The math parts works well. But the typesetting of Mathematica looks quite ugly. ... Install latexindent on Ubuntu Linux latexindent is a perl script that can help you format LaTeX files. It is shipped with texlive (at least 2020 version). To use it you have to install some other perl packages. @@colbox-blue Warning!! Don’t try to install latexindent in a conda environment. It does not work at the moment of writing. Some perl packages cannot be installed. @@ Step 1. Install perl sudo apt install perl Step 2. Install the perl package management tool cpanm. ... Using VIM to Write LaTeX in 2021 I have been using VIM (actually GVIM) to write papers with LaTeX for almost a decade now. My setup for this has not changed for most of the time. One advantage of using VIM is that once you have learned it, you can expect to use it for a life time. That been said, a decade seems to be a good time to have an update. So here’s my new VIM setup for LaTeX in 2021 and perhaps for the next decade. ...
{"url":"https://newptcai.gitlab.io/tags/latex/","timestamp":"2024-11-13T19:24:25Z","content_type":"text/html","content_length":"12440","record_id":"<urn:uuid:7367bf37-caf6-4086-b597-9509e61d3196>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00098.warc.gz"}
Homomorphic encryption Homomorphic encryption is a form of encryption where a specific algebraic operation is performed on the plaintext and another (possibly different) algebraic operation is performed on the ciphertext. Depending on one's viewpoint, this can be seen as either a positive or negative attribute of the cryptosystem. Homomorphic encryption schemes are malleable by design. The homomorphic property of various cryptosystems can be used to create secure voting systems,^[1] collision-resistant hash functions, and private information retrieval schemes. There are several efficient, partially homomorphic cryptosystems, and two fully homomorphic, but less efficient cryptosystems. Partially homomorphic cryptosystems[ ] In the following examples, the notation ${\displaystyle \mathcal{E}(x)}$ is used to denote the encryption of the message x. Unpadded RSA[ ] If the RSA public key is modulus ${\displaystyle m}$ and exponent ${\displaystyle e}$, then the encryption of a message ${\displaystyle x}$ is given by ${\displaystyle \mathcal{E}(x) = x^e \mod m}$. The homomorphic property is then ${\displaystyle \mathcal{E}(x_1) \cdot \mathcal{E}(x_2) = x_1^e x_2^e \mod m = (x_1x_2)^e \mod m = \mathcal{E}(x_1 \cdot x_2)}$ ElGamal[ ] In the ElGamal cryptosystem, in a group ${\displaystyle G}$, if the public key is ${\displaystyle (G, q, g, h)}$, where ${\displaystyle h = g^x}$, and ${\displaystyle x}$ is the secret key, then the encryption of a message ${\displaystyle m}$ is ${\displaystyle \mathcal{E}(m) = (g^r,m\cdot h^r)}$, for some ${\displaystyle r \in \{0, \ldots, q-1\}}$. The homomorphic property is then ${\displaystyle \mathcal{E}(x_1) \cdot \mathcal{E}(x_2) = (g^{r_1},x_1\cdot h^{r_1})(g^{r_2},x_2 \cdot h^{r_2}) = (g^{r_1+r_2},(x_1\cdot x_2) h^{r_1+r_2}) = \mathcal{E}(x_1 \cdot x_2)}$ Goldwasser-Micali[ ] In the Goldwasser-Micali cryptosystem, if the public key is the modulus m and quadratic non-residue x, then the encryption of a bit b is ${\displaystyle \mathcal{E}(b) = r^2 x^b \mod m}$. The homomorphic property is then ${\displaystyle \mathcal{E}(b_1)\cdot \mathcal{E}(b_2) = r_1^2 x^{b_1} r_2^2 x^{b_2} = (r_1r_2)^2 x^{b_1+b_2} = \mathcal{E}(b_1 \oplus b_2)}$ where ${\displaystyle \oplus}$ denotes addition modulo 2, (i.e. exclusive-or). Benaloh[ ] In the Benaloh cryptosystem, if the public key is the modulus m and the base g with a blocksize of r, then the encryption of a message x is ${\displaystyle g^x u^r \mod m}$. The homomorphic property is then ${\displaystyle \mathcal{E}(x_1) \cdot \mathcal{E}(x_2) = (g^{x_1} u_1^r)(g^{x_2} u_2^r) = g^{x_1+x_2} (u_1u_2)^r = \mathcal{E}(x_1 + x_2 \mod r )}$ Paillier[ ] In the Paillier cryptosystem, if the public key is the modulus m and the base g, then the encryption of a message x is ${\displaystyle \mathcal{E}(x) = g^x r^m \mod m^2}$. The homomorphic property is ${\displaystyle \mathcal{E}(x_1) \cdot \mathcal{E}(x_2) = (g^{x_1} r_1^m)(g^{x_2} r_2^m) = g^{x_1+x_2} (r_1r_2)^m = \mathcal{E}( x_1 + x_2 \mod m)}$ Other partially homomorphic cryptosystems[ ] • Okamoto-Uchiyama cryptosystem • Naccache-Stern cryptosystem • Damgård-Jurik cryptosystem • Boneh-Goh-Nissim cryptosystem Fully homomorphic encryption[ ] Each of the examples listed above allows homomorphic computation of only one operation (either addition or multiplication) on plaintexts. A cryptosystem which supports both addition and multiplication (thereby preserving the ring structure of the plaintexts) would be far more powerful. Using such a scheme, any circuit could be homomorphically evaluated, effectively allowing the construction of programs which may be run on encryptions of their inputs to produce an encryption of their output. Since such a program never decrypts its input, it could be run by an untrusted party without revealing its inputs and internal state. The existence of an efficient and fully homomorphic cryptosystem would have great practical implications in the outsourcing of private computations, for instance, in the context of cloud computing.^[2] The "homomorphic" part of a fully homomorphic encryption scheme can also be described in terms of category theory. If C is the category whose objects are integers (i.e., finite streams of data) and whose morphisms are computable functions, then (ideally) a fully homomorphic encryption scheme elevates an encryption function to a functor from C to itself. The utility of fully homomorphic encryption has been long recognized. The problem of constructing such a scheme was first proposed within a year of the development of RSA.^[3] A solution proved more elusive; for more than 30 years, it was unclear whether fully homomorphic encryption was even possible. During this period, the best result was the Boneh-Goh-Nissim cryptosystem which supports evaluation of an unlimited number of addition operations but at most one multiplication. In 2009, Craig Gentry^[4] using lattice-based cryptography showed the first fully homomorphic encryption scheme as announced by IBM on June 25.^[5]^[6] His scheme supports evaluations of arbitrary depth circuits. His construction starts from a somewhat homomorphic encryption scheme using ideal lattices that is limited to evaluating low-degree polynomials over encrypted data. (It is limited because each ciphertext is noisy in some sense, and this noise grows as one adds and multiplies ciphertexts, until ultimately the noise makes the resulting ciphertext indecipherable.) He then shows how to modify this scheme to make it bootstrappable -- in particular, he shows that by modifying the somewhat homomorphic scheme slightly, it can actually evaluate its own decryption circuit, a self-referential property. Finally, he shows that any bootstrappable somewhat homomorphic encryption scheme can be converted into a fully homomorphic encryption through a recursive self-embedding. In the particular case of Gentry's ideal-lattice-based somewhat homomorphic scheme, the effect of this bootstrapping procedure is to "refresh" a ciphertext -- i.e., to reduce the noise associated to a ciphertext -- so that it can thereafter be used in more additions and multiplications without resulting in an indecipherable ciphertext. Gentry based the security of his scheme on the assumed hardness of two problems -- certain worst-case problems over ideal lattices, and the sparse (or low-weight) subset sum problem. Regarding performance, ciphertexts in Gentry's scheme remain compact -- that is, their length does not depend at all on the complexity of the function that is evaluated over the encrypted data. The computational time only depends linearly on the number of operations performed. However, the scheme is impractical for many applications, because ciphertext size and computation time increase sharply as one increases the security level. To obtain 2^k security against known attacks, the computation time and ciphertext size are high-degree polynomials in k. Recently, Stehle and Steinfeld reduced the dependence on k substantially.^[7] They presented optimizations that permit the computation to be only quasi-k^3.5 per boolean gate of the function being evaluated. Gentry's Ph.D. thesis^[8] provides additional details. Gentry also published a high-level overview of the van Dijk et al. construction (described below) in the March 2010 issue of Communications of the ACM.^[9] In 2009, Marten van Dijk, Craig Gentry, Shai Halevi and Vinod Vaikuntanathan presented a second fully homomorphic encryption scheme,^[10] which uses many of the tools of Gentry's construction, but which does not require ideal lattices. Instead, they show that the somewhat homomorphic component of Gentry's scheme (which uses ideal lattices) can be replaced with a very simple somewhat homomorphic scheme that uses integers. The scheme is therefore conceptually simpler than Gentry's ideal lattice scheme, but has similar properties with regards to homomorphic operations and efficiency. The somewhat homomorphic component in the work of van Dijk et al. is similar to an encryption scheme proposed by Levieil and Naccache in 2008,^[11] and also to one that was proposed by Bram Cohen in 1998.^[12] Cohen's method is not even additively homomorphic, however. The Levieil-Naccache scheme is additively homomorphic, and can be modified to support also a small number of In 2010, Nigel P. Smart and Frederik Vercauteren presented a refinement of Gentry's scheme giving smaller key and ciphertext sizes, but which is still not fully practical.^[13] At the rump session of Eurocrypt 2010, Craig Gentry and Shai Halevi presented a working implementation of fully homomorphic encryption -- that is, an implementation of the entire bootstrapping procedure -- together with performance numbers.^[14] References[ ] External links[ ] pl:Szyfrowanie homomorficzne ru:Гомоморфное шифрование
{"url":"https://cryptography.fandom.com/wiki/Homomorphic_encryption","timestamp":"2024-11-13T07:49:39Z","content_type":"text/html","content_length":"206545","record_id":"<urn:uuid:6ba0d8b5-b1b1-4db7-942d-3c29b4c36e06>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00365.warc.gz"}
Notion Formulas Explained - Supercharge Your Notion Setups 2024 Notion Formulas Explained – Supercharge Your Notion Setups What are Notion Formulas? Formulas in Notion allow you to perform calculations on data inside your databases. Formulas are a powerful way to manipulate and analyze your data, and they can be used to calculate sums, averages, counts, and other values based on your data. To start creating formulas in Notion, follow these steps: 1. Go to the page where your database is located, and create a database 2. In the database editor, select the “Formulas” tab, and then add a formula property 3. In the formula editor, enter the expression for your formula. You can use the editor to insert values and operators and to create your formula. 4. When you are finished, click on the “Save” button to save your formula. Important: the formula will not save if it is incorrect! After you have created a formula property in your database, it will be added to your database as a new column. You can use the formula column to sort and filter your data, and you can also use it in other formulas and calculations. Database formulas are a powerful way to analyze and manipulate your database data in Notion, and they can be used to create complex and sophisticated analyses of your data. What formulas can you use in Notion? Formula Cheat Sheet A constant is a fixed value that does not change. Constants can be used in formulas to represent a fixed value that you want to use in your calculations. Notion allows the following constants: • The number e (the base of the natural logarithm) e == 2.718281828459045 • The number Pi (the ratio of a circle’s circumference to its diameter) pi ==3.14159265359 • True and False statements • if: Switches between two options based on another value. if(prop("X") == true, "yes", "no") • add: Adds two numbers and returns their sum, or concatenates two strings. add(1, 3) == 4 or add("add", "text") == "addtext" • subtract: Subtracts two numbers and returns their difference. subtract(4, 5) == -1 • multiply: Multiplies two numbers and returns their product. multiply(2, 10) == 20 • divide: Multiplies two numbers and returns their product. divide(12, 3) == 4 • pow: Returns base to the exponent power, that is, baseexponent. pow(2, 2) == 4 or pow(2, 6) == 64 • mod: Divides two numbers and returns their remainder.. mod(3, 2) == 1 • unaryMinus: Negates a number. unaryMinus(42) == -42 • unaryPlus: Converts its argument into a number. unaryPlus(true) == 1 or unaryPlus("42") == 42 • not: Returns the logical NOT of its argument. not(true) == false or not(false) == true • and: Returns the logical AND of its two arguments. and(true, true) == true or and(true, false) == false • or: Returns the logical OR of its two arguments. (false, false) == false or (true, false) == true • equal: Returns true if its arguments are equal, and false otherwise. equal(false, not true) == true equal(false, true) == false • unequal: Returns false if its arguments are equal, and true otherwise. (true != not false) == false • larger: Returns true if the first argument is larger than the second. 5 > 3 == true • largerEq: Returns true if the first argument is larger than or equal to than the second. 5 >= 4 == true or 4 >= 4 == true • smaller: Returns true if the first argument is smaller than the second. 10 < 8 == false or 8 < 18 == true • smallerEq: Returns true if the first argument is smaller than or equal to than the second. 10 <= 8 == false or 9 <= 18 == true or 10 <= 10 == true • concat: Concatenates its arguments and returns the result. concat("dog", "go") == "doggo" • join: join("-", "a", "b", "c") == "a-b-c" • slice: Extracts a substring from a string from the start index (inclusively) to the end index (optional and exclusively). slice("Hello world", 1, 5) == "ello slice("notion", 3) == "ion"" • length: Returns the length of a string. length("Hello world") == 11 • format: Formats its argument as a string. format(42) == "42" • toNumber: Parses a number from text. toNumber("42") == 42 toNumber(false) == 0 • contains: Returns true if the second argument is found in the first.contains("notion", "ion") == true • replace: Replaces the first match of a regular expression with a new value. replace("1-2-3", "-", "!") == "1!2-3" • replaceAll: Replaces all matches of a regular expression with a new value. replaceAll("1-2-3", "-", "!") == "1!2!3" • test: Tests if a string matches a regular expression. test("1-2-3", "-") == true • empty: Tests if a value is empty. empty("") == true • abs: Returns the absolute value of a number. abs(-3) == 3 • cbrt: Returns the cube root of a number. Returns the cube root of a number. • ceil: Returns the smallest integer greater than or equal to a number. ceil(4.2) == 5 • exp: Returns E^x, where x is the argument, and E is Euler’s constant (2.718…), the base of the natural logarithm. exp(1) == 2.718281828459045 exp(2) == 7.389056098931 • floor: Returns the largest integer less than or equal to a number. floor(2.8) == 2 • ln: Returns the natural logarithm of a number. ln(e) == 1 • log10: Returns the X property for each entry. prop("X") == true • log2: Returns the base 2 logarithm of a number. log2(64) == 6 • max: Returns the largest of zero or more numbers. max(5, 2, 9, 3) == 9 • min: Returns the smallest of zero or more numbers. min(4, 1, 5, 3) == 1 • round: Returns the value of a number rounded to the nearest integer. round(4.4) == 4 • sign: Returns the sign of the x, indicating whether x is positive, negative or zero. sign(4) == 1 • sqrt: Returns the positive square root of a number. sqrt(144) == 12 • start: Returns the start of a date range. start(prop("Date")) == Feb 2, 1996 • end: Returns the end of a date range. end(prop("Date")) == Feb 2, 1996 • now: Returns the current date and time. now() == Feb 2, 1996 • timestamp: Returns an integer number from a Unix millisecond timestamp, corresponding to the number of milliseconds since January 1, 1970. timestamp(now()) == 1512593154718 • fromTimestamp: Returns a date constructed from a Unix millisecond timestamp, corresponding to the number of milliseconds since January 1, 1970. fromTimestamp(2000000000000) == Tue May 17 2033 • dateAdd: Add to a date. The last argument, unit, can be one of: “years”, “quarters”, “months”, “weeks”, “days”, “hours”, “minutes”, “seconds”, or “milliseconds”. dateAdd(date, amount, "years") • dateSubtract: Subtract from a date. The last argument, unit, can be one of: “years”, “quarters”, “months”, “weeks”, “days”, “hours”, “minutes”, “seconds”, or “milliseconds”. dateSubtract(date, amount, "years") • dateBetween: Returns the time between two dates. The last argument, unit, can be one of: “years”, “quarters”, “months”, “weeks”, “days”, “hours”, “minutes”, “seconds”, or “milliseconds”. dateBetween(date, date2, "years") • formatDate: Format a date using the Moment standard time format string. format(42) == "42" • minute: Returns an integer number, between 0 and 59, corresponding to minutes in the given date. minute(now()) == 45 • hour: Returns an integer number, between 0 and 23, corresponding to hour for the given date. hour(now()) == 17 • date: Returns an integer number corresponding to the day of the week for the given date: 0 for Sunday, 1 for Monday, 2 for Tuesday, and so on. day(now()) == 3 • month: Returns an integer number, between 0 and 11, corresponding to month in the given date according to local time. 0 corresponds to January, 1 to February, and so on. month(now()) == 11 • year: Returns a number corresponding to the year of the given date. year(now()) == 2023 • id: Returns a unique string id for each entry. id() == "083ee30ce5a048dfadf55f1944688405" How are Notion formulas different from Formulas in Excel? Notion formulas are similar to formulas in Excel in that they allow you to perform calculations and manipulate data within your databases. However, there are a few key differences between the two: 1. Notion formulas are designed to be used within the context of a page or database, whereas Excel formulas are typically used within a spreadsheet. 2. Moreover, Excel formulas are cell-based, while Notion formulas are column based. What does this mean? While in Excel you can reference individual cells, in Notion you can only reference other properties. The formulas will apply identically to all rows in a column. Excel formulas are therefore much more flexible and powerful. Still, there are many interesting things you can do with Notion formulas to supercharge your Notion workspace: Here are 8 practical use cases for Notion Formulas 1. Calculate the priority of a task (Eisenhower Matrix) The Eisenhower Matrix is a tool used to prioritize tasks by distinguishing between urgent and important tasks. It is named after former U.S. President Dwight D. Eisenhower, who is said to have used a similar system to manage his workload. The matrix consists of four quadrants: • urgent and important, • important but not urgent, • urgent but not important, • neither urgent nor important. Tasks are placed in one of the quadrants based on their level of importance and urgency, and this can help a person decide which tasks to prioritize and which ones to delegate or eliminate. The idea behind the Eisenhower Matrix is that focusing on important tasks can help a person achieve their goals, while ignoring or delegating tasks that are not important or urgent can help them avoid distractions and stay productive. Notion formulas allow you to simply find the right quadrant by stating if a task is urgent or important by using the following formula: $if(prop(“Important”), if(prop(“Urgent”), “Do”, “Schedule”), if(prop(“Urgent”), “Delegate”, “Eliminate”))$ 2. Calculate the number of days between a date range This can be for example useful if you need to know the time between the start and end day of a project. use the following formula for calculating the days: $dateBetween(prop(“Date 2”), prop(“Date 1”), “days”)$ You can also change “days†to “years†, “months†, “weeks†, or “hours†if you need a different measurement unit. 3. Calculate overdue days This is useful if you want to see how many days a project or a bill is overdue. Use this formula from Red Gregory to get the respective status update as shown in the screenshot. if(formatDate(prop(“Deadline”), “MMM DD, YYYY”) == formatDate(now(), “MMM DD, YYYY”), “Due Today ✅”, if(dateBetween(prop(“Deadline”), now(), “days”) > 0, format(dateBetween(prop(“Deadline”), now(), “days”) + 1) + ” Days Remaining”, if(dateBetween(prop(“Deadline”), now(), “days”) > -1, “Due Tomorrow 🔜”, if(dateBetween(prop(“Deadline”), now(), “days”) < 0, format(abs(dateBetween(prop (“Deadline”), now(), “days”))) + ” Days Past Due â­•ï¸ ”, “”)))) 4. Calculate total revenue (or profit) Calculate your revenue by multiplying sold items by the price. prop(“Sold Items”) * prop(“Net Price”) To calculate the profit, you need to take the total costs into account. prop(“Sold Items”) * prop(“Cost per Item”) and subtract them from your revenue: 5. Calculate the age from a birthday Easily calculate the age of a person in Notion with the following dateBetween-formula: dateBetween(now(), prop(“Birthday”), “years”) 6. Split full name into the first and second name For the first name, we use the replaceAll-formula to replace all the text after the first space with empty space. replaceAll(prop(“Full Name”), “[ ].+”, “”) This also works with the second name the other way around: replaceAll(prop(“Full Name”), “.+[ ]“, “”) 7. Calculate the number of characters from a text field This is for example important if you are using Notion to write something, where you have a character limit, for example, a tweet on Twitter. In this example, we are calculating the number of characters of the names from the previous formula. length(prop(“Full Name”)) – 1 Why -1 at the end? Simply because we don’t want to count the space between the names for the calculation. 8. Calculate the number of words from a text field Calculating the number of characters is easy, but in many cases, it would be much more interesting to see the number of words written. How can you do that? By using a workaround. Each word is separated by a space, if we calculate the number of spaces in the text and add 1, we can obtain the number of words. For doing so, we first calculate the number of characters without spaces length(replaceAll(prop(“Text”), “[ ]”, “”) and subtract the result with the total number of characters: To get the correct word count, we need to add 1 at the end. The whole formula looks like that: length(prop(“Text”)) – length(replaceAll(prop(“Text”), “[ ]”, “”)) + 1 This is how you are working with formulas in Notion. Do you want to calculate something in Notion, but have problems finding the right formula or outputting the right results? Share your challenge in the comments below!
{"url":"https://philipp-stelzel.com/notion-formulas/","timestamp":"2024-11-01T22:49:27Z","content_type":"text/html","content_length":"178621","record_id":"<urn:uuid:90e631b7-8a31-4056-a594-6d61f9a71509>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00048.warc.gz"}
Talk:Positive decimal integers with the digit 1 occurring exactly twice - Rosetta CodeTalk:Positive decimal integers with the digit 1 occurring exactly twice number of ways to place two ones in three slots This is the number of ways to place two ones in three slots, which is three, times the number of ways to place 0,2-9 in the remaining slot, which is nine. So at least twenty seven is the correct answer. Surely somewhere this task is already covered.--Nigel Galloway (talk) 17:00, 8 July 2021 (UTC) This task can have my vote for deletion as well. --Pete Lomax (talk) 03:58, 9 July 2021 (UTC) It is a variant of the task Permutations_with_some_identical_elements.--Nigel Galloway (talk) 15:57, 10 July 2021 (UTC) suggest a task rename I suggest that this (draft) task be renamed to reflect: ☆ that positive integers be specified instead of numbers. ☆ that the number one be found in the number be changed to the digit one be found in ... ☆ indicate that the integers be expressed in base ten. ☆ specify that exactly two one digits be found. The number 1211 does have two 1 digits in it. It also has a 1 digit in it. I've already modified the (draft) task's requirements (as everyone appears to have already assumed positive base ten numbers). -- Gerard Schildberger (talk) 23:27, 8 July 2021 (UTC)
{"url":"https://rosettacode.org/wiki/Talk:Positive_decimal_integers_with_the_digit_1_occurring_exactly_twice?oldid=313546","timestamp":"2024-11-05T18:39:29Z","content_type":"text/html","content_length":"48044","record_id":"<urn:uuid:b2c3be19-4eb9-4efc-b38b-f1c6c3c9dd4a>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00358.warc.gz"}
Mimi Geier, a great math teacher The world lost a great math teacher this week. Mimi Geier not only loved math, she loved teaching math and delighted in watching kids discover solutions. If I had a picture to share here, it would be of Ms. Geier with a grin on her face, holding out a piece of chalk so that a student could teach. My first day at BFIS, Ms. Geier asked me if I was in first or seventh period math. I wanted to ask which one was the advanced math class, but I didn’t. Instead I said I didn’t know. She told me to come to both and we’d figure it out. I got worried during the first math class. I could solve any quadratic equation in the world with the quadratic formula but Ms. Geier didn’t think too much of that method. She wanted us to factor, to pull the problem apart and understand the pieces that solved it. Walking up the stairs after lunch, a girl who later became my friend told me, “You don’t want to be in the seventh period math class.” So it was with trepidation that I entered seventh period. Is this where they sent the kids that had never learned to factor? To my surprise I found a much different class. It was a small classroom of relaxed students and a very different Ms. Geier. This was not the homeroom teacher Ms. Geier. This was not the Ms. Geier who could take forever to make a simple point. This was not the Ms. Geier who was always misplacing that paper that she’d just had. This Ms. Geier grinned a lot. She loved it when we came up with a hard problem. She delighted in solving problems with us. She was thrilled when we figured it out. Ecstatic when we could teach each other. This was Ms. Geier the math teacher. I got to stay in seventh period, advanced math. One day, we were all having trouble with some calculus. We could solve all the problems but we were struggling with the why. We got the formulas but not how they worked. The next day, a kid in my class whose dad was an engineer at IBM came in and said, “I got it! My dad explained it to me.” Ms. Geier, who had probably spent hours figuring out how to teach it to us, just grinned, held out the chalk and said “Show us!” Several years after that first day of school, Ms. Geier was out of town for a few weeks. Her substitute pulled me aside during break. Sitting at Ms. Geier’s desk, he asked me for help with a math problem and said Ms. Geier had told him that if he had any problems with the math, he should ask me. Me, the kid who was afraid to ask which class was advanced, now trusted to help the math teacher! Unknown to me, Ms. Geier also intervened on our behalf in other areas. We were having trouble with our science teacher. Several of us were banned from asking questions. One of my classmates was banned from asking questions because her questions were too stupid (she’s now a food scientist) and I was banned because my questions were too ridiculous (too much science fiction?). In all fairness, she did explore my ridiculous questions outside of class, even consulting her college professor. Things eventually got better. Several years later she told me that Ms. Geier had helped her figure out how to cope with us. Ms. Geier taught me many things. Among them were that it’s ok to love math just because it’s math, that it’s ok to be the expert and let somebody else teach you – not just ok but exciting, that it’s ok to be the expert and not know all the answers, that sometimes people learn best from peers, that solving problems together is fun, and much more. I owe a lot of who I’ve become in my career to I, and many generations of math students, will miss Mimi Geier. 3 Replies to “Mimi Geier, a great math teacher” 1. What a wonderful tribute. The image of her with a grin on her face and her extended am with a chalk on her hands is exactly how I remember her too. Along with all her catch phrases about reaching, factoring, days without sunshine, problems and doctors. She will be (already is) sorely missed. 2. Mimi would have loved to read your tribute to her, Sormy. I was never her student, since I knew her as a co-worker at BFIS, but I know, from her former students, that Mimi was a superb Math teacher! I knew her as an extremely ethical person with a very strong character. She was very good to me when I started working there and always made me feel as an important part of the BFIS 3. I just needed to calculate the “Hypotenuse” for my real life work…it made me remember Mimi Geier. Who would of thought?
{"url":"https://stormyscorner.com/2015/02/mimi-geier-a-great-math-teacher.html","timestamp":"2024-11-03T12:31:26Z","content_type":"text/html","content_length":"82360","record_id":"<urn:uuid:aea0bf97-6278-4cc1-93af-c809b0a1bc2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00482.warc.gz"}
Mortgage Loan Approval Calculator - Certified Calculator Mortgage Loan Approval Calculator Introduction: The Mortgage Loan Approval Calculator is a valuable tool for individuals seeking to understand the maximum loan amount they may qualify for based on their financial situation. This calculator considers factors such as monthly income, existing debt, loan term, and interest rate to provide an estimate of the loan amount that may be approved by lenders. Formula: The calculator uses the debt-to-income ratio formula to determine the maximum loan amount for approval. It takes into account the monthly income, subtracts the existing monthly debt payments, and calculates the maximum loan amount based on the loan term and interest rate. How to Use: 1. Enter your monthly income in the “Monthly Income” field. 2. Input your total monthly debt payments in the “Monthly Debt Payments” field. 3. Specify the loan term in years in the “Loan Term” field. 4. Enter the interest rate in the “Interest Rate (%)” field. 5. Click the “Calculate” button to obtain the Maximum Loan Amount for Approval. Example: Suppose you have a monthly income of $5,000, monthly debt payments of $1,200, a loan term of 30 years, and an interest rate of 3.5%. The calculated Maximum Loan Amount for Approval would be approximately $225,048.77. 1. What is the debt-to-income ratio? □ The debt-to-income ratio is a financial metric that compares your monthly debt payments to your gross monthly income. 2. How is the maximum loan amount calculated? □ The calculator uses the debt-to-income ratio formula to calculate the maximum loan amount based on monthly income, existing debt, loan term, and interest rate. 3. Why is the debt-to-income ratio important for loan approval? □ Lenders use the debt-to-income ratio to assess your ability to manage additional debt and determine the maximum loan amount you qualify for. 4. Can I increase my chances of loan approval? □ Improving your credit score, reducing existing debt, and increasing your income can positively impact your chances of loan approval. 5. What other factors do lenders consider for approval? □ Lenders also consider credit history, employment stability, and the loan-to-value ratio when evaluating loan applications. Conclusion: The Mortgage Loan Approval Calculator empowers individuals with insights into their potential borrowing capacity. By understanding the maximum loan amount for approval, you can make informed decisions about your home financing and approach the mortgage application process with confidence. Use this calculator to assess your financial eligibility and plan for a successful home loan application. Leave a Comment
{"url":"https://certifiedcalculator.com/mortgage-loan-approval-calculator/","timestamp":"2024-11-09T23:02:31Z","content_type":"text/html","content_length":"55399","record_id":"<urn:uuid:9e7e6647-9b81-4110-8297-136036a40630>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00110.warc.gz"}
Ryser's Conjecture for t-intersecting hypergraphs A well-known conjecture, often attributed to Ryser, states that the cover number of an r-partite r-uniform hypergraph is at most r−1 times larger than its matching number. Despite considerable effort, particularly in the intersecting case, this conjecture remains wide open, motivating the pursuit of variants of the original conjecture. Recently, Bustamante and Stein and, independently, Király and Tóthmérész considered the problem under the assumption that the hypergraph is t-intersecting, conjecturing that the cover number τ(H) of such a hypergraph H is at most r−t. In these papers, it was proven that the conjecture is true for r≤4t−1, but also that it need not be sharp; when r=5 and t=2, one has τ(H)≤2. We extend these results in two directions. First, for all t≥2 and r≤3t−1, we prove a tight upper bound on the cover number of these hypergraphs, showing that they in fact satisfy τ(H)≤⌊(r−t)/2⌋+1. Second, we extend the range of t for which the conjecture is known to be true, showing that it holds for all [Formula presented]. We also introduce several related variations on this theme. As a consequence of our tight bounds, we resolve the problem for k-wise t-intersecting hypergraphs, for all k≥3 and t≥1. We further give bounds on the cover numbers of strictly t-intersecting hypergraphs and the s-cover numbers of t-intersecting hypergraphs. • Intersecting hypergraphs • Ryser's conjecture • Vertex cover Dive into the research topics of 'Ryser's Conjecture for t-intersecting hypergraphs'. Together they form a unique fingerprint.
{"url":"https://research.tudelft.nl/en/publications/rysers-conjecture-for-t-intersecting-hypergraphs","timestamp":"2024-11-05T22:56:52Z","content_type":"text/html","content_length":"54172","record_id":"<urn:uuid:cffe2793-bf29-408b-b2cb-5ecd2694146d>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00630.warc.gz"}
libavutil/ffmath.h - manifest_repos/ffmpeg - Git at Google * copyright (c) 2016 Ganesh Ajjanagadde <gajjanag@gmail.com> * This file is part of FFmpeg. * FFmpeg is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * FFmpeg is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * You should have received a copy of the GNU Lesser General Public * License along with FFmpeg; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA * @file * internal math functions header #ifndef AVUTIL_FFMATH_H #define AVUTIL_FFMATH_H #include "attributes.h" #include "libm.h" * Compute 10^x for floating point values. Note: this function is by no means * "correctly rounded", and is meant as a fast, reasonably accurate approximation. * For instance, maximum relative error for the double precision variant is * ~ 1e-13 for very small and very large values. * This is ~2x faster than GNU libm's approach, which is still off by 2ulp on * some inputs. * @param x exponent * @return 10^x static av_always_inline double ff_exp10(double x) return exp2(M_LOG2_10 * x); static av_always_inline float ff_exp10f(float x) return exp2f(M_LOG2_10 * x); * Compute x^y for floating point x, y. Note: this function is faster than the * libm variant due to mainly 2 reasons: * 1. It does not handle any edge cases. In particular, this is only guaranteed * to work correctly for x > 0. * 2. It is not as accurate as a standard nearly "correctly rounded" libm variant. * @param x base * @param y exponent * @return x^y static av_always_inline float ff_fast_powf(float x, float y) return expf(logf(x) * y); #endif /* AVUTIL_FFMATH_H */
{"url":"https://nest-open-source.googlesource.com/manifest_repos/ffmpeg/+/refs/heads/main/libavutil/ffmath.h","timestamp":"2024-11-03T03:32:47Z","content_type":"text/html","content_length":"20995","record_id":"<urn:uuid:ca964016-902c-4670-8940-b38021ed8648>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00623.warc.gz"}
Months Of The Year Ordinal Numbers Worksheet - OrdinalNumbers.com Ordinal Numbers And Months Of The Year Puzzle – It is possible to enumerate an unlimited number of sets by making use of ordinal numbers as an instrument. It is also possible to use them to generalize ordinal numbers. 1st One of the fundamental ideas of math is the ordinal number. It is a number … Read more Months Of The Year Ordinal Numbers Worksheet Months Of The Year Ordinal Numbers Worksheet – It is possible to enumerate infinite sets by using ordinal numbers. They can also serve as a generalization of ordinal quantities. 1st The ordinal number is one the fundamental concepts in mathematics. It is a number that shows where an object is within a list. Ordinally, a … Read more
{"url":"https://www.ordinalnumbers.com/tag/months-of-the-year-ordinal-numbers-worksheet/","timestamp":"2024-11-04T22:20:10Z","content_type":"text/html","content_length":"52217","record_id":"<urn:uuid:e38582a3-2369-4ebd-b0fe-b3c9e97ef027>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00075.warc.gz"}
mp_arc 98-408 98-408 N T A Haydn Statistical properties of equilibrium states for rational maps (43K, LATeX 2e) Jun 1, 98 Abstract , Paper (src only), Index of related papers Abstract. Equilibrium states of rational maps H\"older continuous potentials that satisfies the `supremum gap' are not $\phi$-mixing, mainly due to the presence of critical points. Here we prove that the normalised return times of arbitrary orders are in the limit Poisson distributed. We also show that the usual generating partition is weakly Bernoulli, from which the Bernoulli-ness then follows by a standard result. Files: 98-408.txt
{"url":"http://kleine.mat.uniroma3.it/mp_arc-bin/mpa?yn=98-408","timestamp":"2024-11-11T19:30:41Z","content_type":"text/html","content_length":"1432","record_id":"<urn:uuid:3935025e-4e6c-4657-8bbc-2842ca8ede1f>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00161.warc.gz"}
Understanding Pooled Standard Deviation Pooled Standard Deviation is a type of statistical measure used to compare populations or groups. It is often used in research to compare populations with different standard deviations. Pooled standard deviation is determined by taking the average of the standard deviations of each population, then applying a weighted average. In this article, we’ll discuss the definition of pooled standard deviation, understanding how to calculate it, its advantages and disadvantages, and the applications and tips for understanding it. Definition of Pooled Standard Deviation Pooled Standard Deviation is a measure of variability used to compare populations or groups by taking the average of the standard deviations of each population and applying a weighted average. This type of statistical measure allows researchers to determine differences between groups of data and/or compare different groups with different standard deviations. For example, if a researcher is looking to compare two different populations, they can use pooled standard deviation to determine a clear picture of the variability between them. Pooled standard deviation is also useful for determining the overall variability of a group of data points. By taking the average of the standard deviations of each data point, researchers can get a better understanding of the overall variability of the group. This can be especially useful when trying to compare different groups of data or when trying to determine the overall variability of a Calculating a Pooled Standard Deviation Calculating the pooled standard deviation is done by taking the variance of each variable in the population and multiplying it by thesample size of that variable. This is done with each variable and those results are summed together. After summing the results, divide the total by the sum of all the sample sizes. The resulting number is the pooled standard deviation. The pooled standard deviation is a useful tool for comparing the variability of different populations. It can be used to determine if the differences between two populations are statistically significant. Additionally, it can be used to compare the variability of a single population over time. Uses of Pooled Standard Deviation Pooled standard deviation has many potential uses. It’s often used in research when comparing populations or groups with different standard deviations. Additionally, pooled standard deviation can be used to determine the effect size of a given intervention. It’s also used to determine the magnitude of the difference between two means in an experimental setting. Pooled standard deviation can also be used to compare the variability of two or more samples. This can be useful in determining the reliability of a given set of data. Additionally, pooled standard deviation can be used to compare the variability of two or more populations. This can be useful in determining the accuracy of a given set of data. Advantages of Pooled Standard Deviation Pooled standard deviation is a powerful tool which enables researchers to compare populations or groups. It is preferred over other measures because its calculations are relatively simple and it has a larger sample size. Additionally, it can be applied when comparing populations with different standard deviations. Pooled standard deviation is also useful for determining the variability of a population. By calculating the pooled standard deviation, researchers can determine the range of values that are likely to occur in a population. This can be used to identify outliers or to identify trends in the data. Disadvantages of Pooled Standard Deviation Though pooled standard deviation is a useful tool, it does have its drawbacks. The primary issue is that it does not take into account the number of samples in each population, which can lead to inaccurate results. Additionally, pooled standard deviation does not account for outlier values which can lead to skewed results if those values are not taken into consideration. Applications of Pooled Standard Deviation Pooled standard deviation can be used in a variety of fields. It can be applied in data analysis, in business and economics to compare businesses or markets, and in medical research to compare the impacts of different treatments. Additionally, pooled standard deviation can be applied in survey research to compare population results. Comparing Pooled Standard Deviations When using pooled standard deviation to compare groups, the larger the pooled standard deviation, the greater the difference between groups. A smaller pooled standard deviation may indicate less variability between groups or suggest that the differences between individual sample values is minimal. Tips for Calculating a Pooled Standard Deviation When calculating a pooled standard deviation it’s important to remember to take into account both the variance and sample size for each variable. Additionally, it’s important to factor in any outliers as they will impact the results. Finally, prior to calculating, it’s important to ensure that all of the sample sizes are homogenous. Further Resources on Understanding Pooled Standard Deviation For those wishing to further their understanding of Pooled Standard Deviation there are a variety of resources available online. In particular, the American Statistical Association has numerous articles related to pooled standard deviation as does Mathworld. Additionally, there are various educational courses such as Udemy’s Understanding Pooled Standard Deviation Course which provides a comprehensive guide for understanding the concept.
{"url":"https://mathemista.com/understanding-pooled-standard-deviation/","timestamp":"2024-11-07T12:20:52Z","content_type":"text/html","content_length":"58404","record_id":"<urn:uuid:69316532-7eeb-4417-a20a-9e90f144f5c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00226.warc.gz"}
R: How to Change Number of Bins in Histogram | Online Tutorials Library List | Tutoraspire.com R: How to Change Number of Bins in Histogram by Tutor Aspire When you create a histogram in R, a formula known as Sturges’ Rule is used to determine the optimal number of bins to use. However, you can use the following syntax to override this formula and specify an exact number of bins to use in the histogram: hist(data, breaks = seq(min(data), max(data), length.out = 7)) Note that the number of bins used in the histogram will be one less than the number specified in the length.out argument. The following examples show how to use this syntax in practice. Example 1: Create a Basic Histogram The following code shows how to create a basic histogram in R without specifying the number of bins: #define vector of data #create histogram of data hist(data, col = 'lightblue') Using Sturges’ Rule, R decided to use 8 total bins in the histogram. Example 2: Specify Number of Bins to Use in Histogram The following code shows how to create a histogram for the same vector of data and use exactly 6 bins: #define vector of data #create histogram with 6 bins hist(data, col = 'lightblue', breaks = seq(min(data), max(data), length.out = 7)) Cautions on Choosing a Specific Number of Bins The number of bins used in a histogram has a huge impact on how we interpret a dataset. If we use too few bins, the true underlying pattern in the data can be hidden: #define vector of data #create histogram with 3 bins hist(data, col = 'lightblue', breaks = seq(min(data), max(data), length.out = 4)) Conversely, if we use too many bins then we may just be visualizing the noise in a dataset: #define vector of data #create histogram with 15 bins hist(data, col = 'lightblue', breaks = seq(min(data), max(data), length.out = 16)) In general, the default Sturges’ Rule used in R tends to produce histograms that have an optimal number of bins. Feel free to use the code provided here to create a histogram with an exact number of bins, but be careful not to choose too many or too few bins. Additional Resources The following tutorials explain how to perform other common functions with histograms in R: How to Plot Multiple Histograms in R How to Create a Histogram of Two Variables in R How to Create a Relative Frequency Histogram in R Share 0 FacebookTwitterPinterestEmail previous post How to Calculate Point Estimates in R (With Examples) next post How to Fix: No module named numpy You may also like
{"url":"https://tutoraspire.com/r-histogram-bin-size/","timestamp":"2024-11-12T13:18:15Z","content_type":"text/html","content_length":"351832","record_id":"<urn:uuid:73dd4056-016e-4d4c-ba68-06d46db7139b>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00384.warc.gz"}
Search result: Catalogue data in Autumn Semester 2021 Mathematics Bachelor Selection: Geometry Number Title Type ECTS Hours Lecturers Finite Geometries II 401-3057-00L W 4 credits 2G N. Hungerbühler Does not take place this semester. Abstract Finite geometries I, II: Finite geometries combine aspects of geometry, discrete mathematics and the algebra of finite fields. In particular, we will construct models of axioms of incidence and investigate closing theorems. Applications include test design in statistics, block design, and the construction of orthogonal Latin squares. Learning Finite geometries I, II: Students will be able to construct and analyse models of finite geometries. They are familiar with closing theorems of the axioms of incidence and are able to objective design statistical tests by using the theory of finite geometries. They are able to construct orthogonal Latin squares and know the basic elements of the theory of block design. Finite geometries I, II: finite fields, rings of polynomials, finite affine planes, axioms of incidence, Euler's thirty-six officers problem, design of statistical tests, orthogonal Content Latin squares, transformation of finite planes, closing theorems of Desargues and Pappus-Pascal, hierarchy of closing theorems, finite coordinate planes, division rings, finite projective planes, duality principle, finite Moebius planes, error correcting codes, block design - Max Jeger, Endliche Geometrien, ETH Skript 1988 - Albrecht Beutelspacher: Einführung in die endliche Geometrie I,II. Bibliographisches Institut 1983 - Margaret Lynn Batten: Combinatorics of Finite Geometries. Cambridge University Press - Dembowski: Finite Geometries. 401-4207-71L Coxeter Groups from a Geometric Viewpoint W 4 credits 2V M. Cordes Abstract Introduction to Coxeter groups and the spaces on which they act. Learning Understand the basic properties of Coxeter groups. Brown, Kenneth S. "Buildings" Davis, Michael "The geometry and topology of Coxeter groups" Prerequisites Students must have taken a first course in algebraic topology or be familiar with fundamental groups and covering spaces. They should also be familiar with groups and group actions. / Notice
{"url":"https://www.vorlesungen.ethz.ch/Vorlesungsverzeichnis/sucheLehrangebot.view?lang=en&seite=1&semkez=2021W&ansicht=2&&abschnittId=92452","timestamp":"2024-11-05T12:07:38Z","content_type":"text/html","content_length":"10875","record_id":"<urn:uuid:4974cb19-972b-454f-994a-1071a9ee2e51>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00563.warc.gz"}
Valuation - Resources | Wall Street Girls top of page General Overview What is Valuation? Valuation refers to the process of determining the current or projected worth of a company. There are several ways to value a company, and these various methods can be categorized into two groups: 1. Intrinsic Valuation and 2. Relative Valuation. Intrinsic Valuation attempts to find the "inherent" or "true" value of an investment based solely on the company's fundamentals, including its cash flows, growth rates, dividends, etc. It does not take into account how other companies are performing. The most common Intrinsic Valuation method is the Discounted Cash Flow Analysis (often referred to as 'DCF Analysis'). The DCF Analysis method estimates the value of a company today based on its expected future cash flows (i.e., how much money it will generate in the future). This method is often considered to be the most rigorous and theoretically correct way of valuing a company, but it is worth noting that the DCF Analysis also relies heavily on assumptions due to its forward-looking nature. The Discounted Cash Flow Analysis page linked below will cover this valuation methodology in more detail. Relative Valuation, on the other hand, operates by comparing the company being valued to other similar companies. This methodology involves calculating various valuation ratios and multiples of "comparable" peer companies or M&A transactions, including the Price-to-Earnings ratio and the Enterprise Value-to-EBITDA multiple (more on what 'Enterprise Value' means below), and then applying those ratios and multiples to the company you are valuing. The two most common Relative Valuation methodologies are Comparable Companies Analysis (often referred to as 'Comps Analysis') and Precedent Transactions Analysis in which one calculates the ratios of various comparable public companies and various announced transactions respectively. For example, in the case of a Comps Analysis, say you are trying to value a company with an EBITDA of $50 million. If similar companies (in the same sector with similar risk and growth profiles) are trading at Enterprise Value-to-EBITDA multiples of between 15x and 20x, then the technology company you are valuing should, in theory, have an Enterprise Value of between $750 million ($50 million * 15) and $1 billion ($50 million * 20). Generally speaking, Relative Valuation is a lot easier and quicker to conduct than Intrinsic Valuation. Relative Valuation also relies on current market data rather than on cash flow projections and is thus the best representation of market value. That being said, the market can be incorrect (especially in the long-run) and it is often difficult to come up with a good set of comparable companies or transactions. The Comparable Companies Analysis and Precedent Transactions Analysis pages linked below will cover the two Relative Valuation methodologies in more detail. Equity & Enterprise Value Equity Value and Enterprise Value are two common ways that a company may be valued. Both are used in the valuation of a business but each offers a slightly different perspective. Equity Value is often referred to as the "Market Capitalization" of a company. It is the value of everything a company has (Total Assets - Total Liabilities) but only to equity investors, i.e., common stock holders. Enterprise Value is the value of a company's core business operations (Operating Assets - Operating Liabilities) but to all investor groups, including, equity, debt, preferred stock holders, etc. Valuation Methodologies How Do You Value a Company? As mentioned previously, there are several ways to value a company. The three most common valuation methodologies are: Test your valuation understanding: bottom of page
{"url":"https://www.wallstreetgirls.org/valuation","timestamp":"2024-11-07T14:00:01Z","content_type":"text/html","content_length":"858364","record_id":"<urn:uuid:6bfe47e1-2887-42f3-a2c9-9b46bf51813f>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00223.warc.gz"}
Rescaled localized radial basis functions and fast decaying polynomial reproduction Approximating a set of data can be a difficult task but it is very useful in applications. Through a linear combination of basis functions we want to reconstruct an unknown quantity from partial information. We study radial basis functions (RBFs) to obtain an approximation method that is meshless, provides a data dependent approximation space and generalization to larger dimensions is not an obstacle. We analyze a rational approximation method with compactly supported radial basis functions (Rescaled localized radial basis function method). The method reproduces exactly the constants and the density of the interpolation nodes influences the support of the RBFs. There is a proof of the convergence in a quasi-uniform setting up to a conjecture: we can determine a lower bound for the approximant of the constant function 1 uniformly with respect to the size of the support of the kernel. We investigate the statement of the conjecture and bring some practical and theoretical results to support it. We study the Runge phenomenon on the approximant and obtain uniform estimates on the cardinal functions. We extend the distinguishing features of the method reproducing exactly larger polynomial spaces. We replace local polynomial reproduction with basis functions that decrease rapidly and approximate exactly a polynomial space. This change releases the basis functions from the compactness of the support and guarantees the same convergence rate (the oversampling problem does not appear). The rescaled localized radial basis function method can be interpreted in this new framework because the cardinal functions have global support even if the kernel has compact support. The decay of the basis functions undertake convergence and stability. In this analysis the smoothness of the approximant is not important, what matters is the "locality" provided by the fast decay. With a moving least squares approach we provide an example of a smooth quasi-interpolant. We continue trying to improve the performance of the method even when the weight functions do not have compact support. All the new theoretical results introduced in this work are also supported by numerical evidence. Approximating a set of data can be a difficult task but it is very useful in applications. Through a linear combination of basis functions we want to reconstruct an unknown quantity from partial information. We study radial basis functions (RBFs) to obtain an approximation method that is meshless, provides a data dependent approximation space and generalization to larger dimensions is not an obstacle. We analyze a rational approximation method with compactly supported radial basis functions (Rescaled localized radial basis function method). The method reproduces exactly the constants and the density of the interpolation nodes influences the support of the RBFs. There is a proof of the convergence in a quasi-uniform setting up to a conjecture: we can determine a lower bound for the approximant of the constant function 1 uniformly with respect to the size of the support of the kernel. We investigate the statement of the conjecture and bring some practical and theoretical results to support it. We study the Runge phenomenon on the approximant and obtain uniform estimates on the cardinal functions. We extend the distinguishing features of the method reproducing exactly larger polynomial spaces. We replace local polynomial reproduction with basis functions that decrease rapidly and approximate exactly a polynomial space. This change releases the basis functions from the compactness of the support and guarantees the same convergence rate (the oversampling problem does not appear). The rescaled localized radial basis function method can be interpreted in this new framework because the cardinal functions have global support even if the kernel has compact support. The decay of the basis functions undertake convergence and stability. In this analysis the smoothness of the approximant is not important, what matters is the "locality" provided by the fast decay. With a moving least squares approach we provide an example of a smooth quasi-interpolant. We continue trying to improve the performance of the method even when the weight functions do not have compact support. All the new theoretical results introduced in this work are also supported by numerical evidence. Rescaled localized radial basis functions and fast decaying polynomial reproduction Approximating a set of data can be a difficult task but it is very useful in applications. Through a linear combination of basis functions we want to reconstruct an unknown quantity from partial information. We study radial basis functions (RBFs) to obtain an approximation method that is meshless, provides a data dependent approximation space and generalization to larger dimensions is not an obstacle. We analyze a rational approximation method with compactly supported radial basis functions (Rescaled localized radial basis function method). The method reproduces exactly the constants and the density of the interpolation nodes influences the support of the RBFs. There is a proof of the convergence in a quasi-uniform setting up to a conjecture: we can determine a lower bound for the approximant of the constant function 1 uniformly with respect to the size of the support of the kernel. We investigate the statement of the conjecture and bring some practical and theoretical results to support it. We study the Runge phenomenon on the approximant and obtain uniform estimates on the cardinal functions. We extend the distinguishing features of the method reproducing exactly larger polynomial spaces. We replace local polynomial reproduction with basis functions that decrease rapidly and approximate exactly a polynomial space. This change releases the basis functions from the compactness of the support and guarantees the same convergence rate (the oversampling problem does not appear). The rescaled localized radial basis function method can be interpreted in this new framework because the cardinal functions have global support even if the kernel has compact support. The decay of the basis functions undertake convergence and stability. In this analysis the smoothness of the approximant is not important, what matters is the "locality" provided by the fast decay. With a moving least squares approach we provide an example of a smooth quasi-interpolant. We continue trying to improve the performance of the method even when the weight functions do not have compact support. All the new theoretical results introduced in this work are also supported by numerical evidence. File in questo prodotto: File Dimensione Formato accesso aperto Dimensione 1.64 MB 1.64 MB Adobe PDF Visualizza/Apri Formato Adobe PDF Utilizza questo identificativo per citare o creare un link a questo documento: https://hdl.handle.net/20.500.12608/52238
{"url":"https://thesis.unipd.it/handle/20.500.12608/52238","timestamp":"2024-11-05T11:54:09Z","content_type":"text/html","content_length":"49549","record_id":"<urn:uuid:2cc9a279-7f86-44e1-aad2-21bc5f840e8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00722.warc.gz"}
limk→−∞k∫0k∣1∣(kx−[kx])kdx;k∈N is equal to (where [.] denotes... | Filo Question asked by Filo student is equal to (where [.] denotes the greatest integer function) a. [kx] Not the question you're searching for? + Ask your question Filo tutor solution Learn from their 1-to-1 discussion with Filo tutors. Generate FREE solution for this question from our expert tutors in next 60 seconds Don't let anything interrupt your homework or exam prep with world’s only instant-tutoring, available 24x7 Found 4 tutors discussing this question Discuss this question LIVE for FREE 6 mins ago Students who ask this question also asked View more Question Text is equal to (where [.] denotes the greatest integer function) Updated On Nov 12, 2022 Topic Calculus Subject Mathematics Class Class 11
{"url":"https://askfilo.com/user-question-answers-mathematics/is-equal-to-where-denotes-the-greatest-integer-function-32373232303038","timestamp":"2024-11-07T12:41:38Z","content_type":"text/html","content_length":"185637","record_id":"<urn:uuid:488945f8-1ccf-4c1e-9bfa-d40227c8e7ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00626.warc.gz"}
Very Difficult Logic Problems 1. The Emperor You are the ruler of a medieval empire and you are about to have a celebration tomorrow. The celebration is the most important party you have ever hosted. You've got 1000 bottles of wine you were planning to open for the celebration, but you find out that one of them is poisoned. The poison exhibits no symptoms until death. Death occurs within ten to twenty hours after consuming even the minutest amount of poison. You have over a thousand slaves at your disposal and just under 24 hours to determine which single bottle is poisoned. You have a handful of prisoners about to be executed, and it would mar your celebration to have anyone else killed. What is the smallest number of prisoners you must have to drink from the bottles to be absolutely sure to find the poisoned bottle within 24 hours? Hint: It is much smaller than you first might think. Try to solve the problem first with one poisoned bottle out of eight total bottles of wine. Solution: 10 prisoners must sample the wine. Bonus points if you worked out a way to ensure than no more than 8 prisoners die. Number all bottles using binary digits. Assign each prisoner to one of the binary flags. Prisoners must take a sip from each bottle where their binary flag is set. Here is how you would find one poisoned bottle out of eight total bottles of wine. │ │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ │ Prisoner A │ │ X │ │ X │ │ X │ │ X │ │ Prisoner B │ │ │ X │ X │ │ │ X │ X │ │ Prisoner C │ │ │ │ │ X │ X │ X │ X │ In the above example, if all prisoners die, bottle 8 is bad. If none die, bottle 1 is bad. If A & B dies, bottle 4 is bad. With ten people there are 1024 unique combinations so you could test up to 1024 bottles of wine. Each of the ten prisoners will take a small sip from about 500 bottles. Each sip should take no longer than 15 seconds and should be a very small amount. Small sips not only leave more wine for guests. Small sips also avoid death by alcohol poisoning. As long as each prisoner is administered about a millilitre from each bottle, they will only consume the equivalent of about one bottle of wine each. Each prisoner will have at least a fifty percent chance of living. There is only one binary combination where all prisoners must sip from the wine. If there are ten prisoners then there are ten more combinations where all but one prisoner must sip from the wine. By avoiding these two types of combinations you can ensure no more than 8 prisoners die. One viewer felt that this solution was in flagrant contempt of restaurant etiquette. The emperor paid for this wine, so there should be no need to prove to the guests that wine is the same as the label. I am not even sure if ancient wine even came with labels affixed. However, it is true that after leaving the wine open for a day, that this medieval wine will taste more like vinegar than it ever did. C'est la vie.
{"url":"https://folj.com/puzzles/very-difficult-analytical-puzzles.htm","timestamp":"2024-11-06T23:01:06Z","content_type":"text/html","content_length":"37092","record_id":"<urn:uuid:f85c44ff-f1df-49da-b492-2fddf0c2a69b>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00506.warc.gz"}
Decision Making Mistakes, using Probability and Statistics ! | Udemy Free Download Decision Making Mistakes, using Probability and Statistics ! | Udemy Decision Making Mistakes, using Probability and Statistics ! | Udemy English | Size: 3.27 GB Genre: eLearning Lessons for Leaders and Managers: Learn and Avoid the common Misconceptions and Mistakes, in Probability & Statistics. What you’ll learn Learn about the common mistakes and misconceptions people make, when using Probabilities in their judgements and decisions. Learn about the common mistakes and misconceptions people make, when using Statistics and statistical inferences, in their judgements and decisions. What are the Psychological Biases and Fallacies, that make infer wrongly, when using Probability and Statistics ? Hoe can we try and avoid such pitfalls, mistakes and misconceptions, when using Probability and Statistics in decision making ? Most of us have learnt some Probability and Statistics in High School, or later in college. We know the basic concepts, and may knowingly, or unknowingly, use it in our judgements and decisions ! However, what we may not be aware of, are the many many ways we can go wrong, in using these concepts, from Probability and Statistics, for making our decisions ! This includes the common errors , the many misconceptions we have, the conclusions we wrongly jump to, in judging probabilities, and risks, sampling and statistics. The Human brain is not wired to intuitively understand probability or statistics. Researchers of the brain, believe that mathematical truths make little automatic sense to our mind, especially when considering random and non-random outcomes, or when considering a large amount of data. And because of that, we automatically and subconsciously end up making a lot of mistakes, in assessing risks and likelihood. This Course will help you learn about, and try to avoid and minimize, such mistakes and misconceptions. It will be useful for everyone, but especially for Leaders and Managers, whose various judgements and decisions can affect many people, organizations and countries! It is a Beginner level Course, and it assumes only a basic High school level knowledge of Probability and Statistics. Disclaimer: You will not learn any Probability or Statistics here, you will only learn about the mistakes and misconceptions commonly made, when using Probability and Statistics in Decision Making. Some Images and Videos courtesy Pixabay, Pexels, Pressfoto and FreePik. Some Music snippets courtesy Bensound. Who this course is for: • Leaders and Managers who want to avoid Statistics and Probability mistakes, while making decisions • Management Students, and aspiring Leaders, who want to use Probability and Statistics, more accurately and effectively • Anyone who wants to avoid the pitfalls, mistakes and misconceptions, when using Probability and Statistics. If any links die or problem unrar, send request to Leave a Comment
{"url":"https://tut4biz.com/decision-making-mistakes-using-probability-and-statistics-udemy/","timestamp":"2024-11-09T23:46:26Z","content_type":"text/html","content_length":"70822","record_id":"<urn:uuid:f64e4a2b-5951-4c4c-b6e6-07f83e9b60c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00355.warc.gz"}
Visualizing univariate distribution in Seaborn [box type="note" align="" class="" width=""]This article is an excerpt from a book by Allen Chi Shing Yu, Claire Yik Lok Chung, and Aldrin Kay Yuen Yim titled Matplotlib 2.x By Example. [/box] Seaborn by Michael Waskom is a statistical visualization library that is built on top of Matplotlib. It comes with handy functions for visualizing categorical variables, univariate distributions, and bivariate distributions. In this article, we will visualize univariate distribution in Seaborn. Visualizing univariate distribution Seaborn makes the task of visualizing the distribution of a dataset much easier. In this example, we are going to use the annual population summary published by the Department of Economic and Social Affairs, United Nations, in 2015. Projected population figures towards 2100 were also included in the dataset. Let's see how it distributes among different countries in 2017 by plotting a bar plot: import seaborn as sns import matplotlib.pyplot as plt # Extract USA population data in 2017 current_population = population_df[(population_df.Location == 'United States of America') & (population_df.Time == 2017) & (population_df.Sex != 'Both')] # Population Bar chart sns.barplot(x="AgeGrp",y="Value", hue="Sex", data = current_population) # Use Matplotlib functions to label axes rotate tick labels ax = plt.gca() ax.set(xlabel="Age Group", ylabel="Population (thousands)") ax.set_xticklabels(ax.xaxis.get_majorticklabels(), rotation=45) plt.title("Population Barchart (USA)") # Show the figure plt.show() Bar chart in Seaborn The seaborn.barplot() function shows a series of data points as rectangular bars. If multiple points per group are available, confidence intervals will be shown on top of the bars to indicate the uncertainty of the point estimates. Like most other Seaborn functions, various input data formats are supported, such as Python lists, Numpy arrays, pandas Series, and pandas DataFrame. A more traditional way to show the population structure is through the use of a population pyramid. So what is a population pyramid? As its name suggests, it is a pyramid-shaped plot that shows the age distribution of a population. It can be roughly classified into three classes, namely constrictive, stationary, and expansive for populations that are undergoing negative, stable, and rapid growth respectively. For instance, constrictive populations have a lower proportion of young people, so the pyramid base appears to be constricted. Stable populations have a more or less similar number of young and middle-aged groups. Expansive populations, on the other hand, have a large proportion of youngsters, thus resulting in pyramids with enlarged bases. We can build a population pyramid by plotting two bar charts on two subplots with a shared y axis: import seaborn as sns import matplotlib.pyplot as plt # Extract USA population data in 2017 current_population = population_df[(population_df.Location == 'United States of America') & (population_df.Time == 2017) & (population_df.Sex != 'Both')] # Change the age group to descending order current_population = current_population.iloc[::-1] # Create two subplots with shared y-axis fig, axes = plt.subplots(ncols=2, sharey=True) # Bar chart for male sns.barplot(x="Value",y="AgeGrp", color="darkblue", ax=axes[0], data = current_population[(current_population.Sex == 'Male')]) # Bar chart for female sns.barplot(x="Value",y="AgeGrp", color="darkred", ax=axes[1], data = current_population[(current_population.Sex == # Use Matplotlib function to invert the first chart axes[0].invert_xaxis() # Use Matplotlib function to show tick labels in the middle axes[0].yaxis.tick_right() # Use Matplotlib functions to label the axes and titles axes[0].set_title("Male") axes[0].set(xlabel="Population (thousands)", ylabel="Age Group") axes[1].set(xlabel="Population (thousands)", ylabel="") fig.suptitle("Population Pyramid (USA)") # Show the figure plt.show() Since Seaborn is built on top of the solid foundations of Matplotlib, we can customize the plot easily using built-in functions of Matplotlib. In the preceding example, we used matplotlib.axes.Axes.invert_xaxis() to flip the male population plot horizontally, followed by changing the location of the tick labels to the right-hand side using matplotlib.axis.YAxis.tick_right (). We further customized the titles and axis labels for the plot using a combination of matplotlib.axes.Axes.set_title(), matplotlib.axes.Axes.set(), and matplotlib.figure.Figure.suptitle(). Let's try to plot the population pyramids for Cambodia and Japan as well by changing the line population_df.Location == 'United States of America' to population_df.Location == 'Cambodia' or population_df.Location == 'Japan'. Can you classify the pyramids into one of the three population pyramid classes? To see how Seaborn simplifies the code for relatively complex plots, let's see how a similar plot can be achieved using vanilla Matplotlib. First, like the previous Seaborn-based example, we create two subplots with shared y axis: fig, axes = plt.subplots(ncols=2, sharey=True) Next, we plot horizontal bar charts using matplotlib.pyplot.barh() and set the location and labels of ticks, followed by adjusting the subplot spacing: # Get a list of tick positions according to the data bins y_pos = range(len(current_population.AgeGrp.unique())) # Horizontal barchart for male axes[0].barh(y_pos, current_population[(current_population.Sex == 'Male')].Value, color="darkblue") # Horizontal barchart for female axes[1].barh(y_pos, current_population[(current_population.Sex == 'Female')].Value, color="darkred") # Show tick for each data point, and label with the age group axes[0].set_yticks(y_pos) axes[0].set_yticklabels(current_population.AgeGrp.unique()) # Increase spacing between subplots to avoid clipping of ytick labels plt.subplots_adjust(wspace=0.3) Finally, we use the same code to further customize the look and feel of the figure: # Invert the first chart axes[0].invert_xaxis() # Show tick labels in the middle axes[0].yaxis.tick_right() # Label the axes and titles axes[0].set_title("Male") axes[1].set_title("Female") axes[0].set(xlabel="Population (thousands)", ylabel="Age Group") axes[1].set(xlabel="Population (thousands)", ylabel="") fig.suptitle("Population Pyramid (USA)") # Show the figure plt.show() When compared to the Seaborn-based code, the pure Matplotlib implementation requires extra lines to define the tick positions, tick labels, and subplot spacing. For some other Seaborn plot types that include extra statistical calculations such as linear regression, and Pearson correlation, the code reduction is even more dramatic. Therefore, Seaborn is a "batteries-included" statistical visualization package that allows users to write less verbose code. Histogram and distribution fitting in Seaborn In the population example, the raw data was already binned into different age groups. What if the data is not binned (for example, the BigMac Index data)? Turns out, seaborn.distplot can help us to process the data into bins and show us a histogram as a result. Let's look at this example: import seaborn as sns import matplotlib.pyplot as plt # Get the BigMac index in 2017 current_bigmac = bigmac_df[(bigmac_df.Date == "2017-01-31")] # Plot the histogram ax = sns.distplot(current_bigmac.dollar_price) plt.show() The seaborn.distplot function expects either pandas Series, single-dimensional numpy.array, or a Python list as input. Then, it determines the size of the bins according to the Freedman-Diaconis rule, and finally it fits a kernel density estimate (KDE) over the histogram. KDE is a non-parametric method used to estimate the distribution of a variable. We can also supply a parametric distribution, such as beta, gamma, or normal distribution, to the fit argument. In this example, we are going to fit the normal distribution from the scipy.stats package over the Big Mac Index dataset: from scipy import stats ax = sns.distplot(current_bigmac.dollar_price, kde=False, fit=stats.norm) [INSERT IMAGE] You have now equipped yourself with the knowledge to visualize univariate data in Seaborn as Bar Charts, Histogram, and distribution fitting. To have more fun visualizing data with Seaborn and Matplotlib, check out the book, this snippet appears from.
{"url":"https://www.packtpub.com/en-us/learning/how-to-tutorials/visualizing-univariate-distribution-seaborn/","timestamp":"2024-11-14T05:21:20Z","content_type":"text/html","content_length":"831285","record_id":"<urn:uuid:d481129a-2ffd-41f1-bfcd-a385a647398f>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00773.warc.gz"}
Minimal States of Fermionic Excitations : the n-electron Levitov source Minimal States of Fermionic Excitations : the n-electron Levitov source Christian Glattli Nanoelectronics group, SPEC, CEA Saclay Mon, Apr. 02nd 2012, 14:00 Salle Claude Itzykson, Bât. 774, Orme des Merisiers Manipulating few degrees of freedom in a quantum systems has always lead to better understanding of quantum systems and eventually to practical applications of quantum effects. This is what we have learned from the history of the field of quantum optics, atomic physics and more recently from condensed matter with qubit implementation in superconducting circuits or spins in quantum dots. par Here we adress the question : can we manipulate few electronic excitations on top of the Fermi sea of a quantum conductor? Thanks to the advance in the realization of clean ballistic one-dimensional like conductors where electronic beam-splitters and interferometers are available, the controlled injection of one or few charges in a conductor may lead to new type of quantum experiments with fermions and eventually to quantum information processing. par The answer to the previous question is not obvious as it is believed that any spatial or time localized perturbation of the Fermi sea leads to collective excitations involving a log divergent number of electron-hole pairs [1]. Also for delocalized electrons in a Fermi sea, we do not expect to observe manifestations of the charge quantization as in the case of isolated systems such as metallic islands, quantum dots or Millikan oil droplets. par We will discuss a simple way to circumvent these problems following the seventeen years old theoretical observation by Levitov et al [2] that clean electron excitations free of holes excitations can be simply realized by applying Lorentzian shape voltage pulses on the contact of a one-dimensional conductor. The mathematical analytical property of the time dependent phase acquired by all electrons of the Fermi sea leads to an upward shift in energy (or momentum) of the Fermi sea which completely washes out the hole creation. The complete effect occurs for pulses carrying exactly an integer number of charges, which thus generate minimal excitation states of n electrons, while for non-integer charges spurious electron-hole excitation reappear. Finally we will present preliminary on-going experiments based on current noise, a measurement sensitive to the total number of excitations of the Fermi sea [3]. par All these concepts seem generalizable to 1D interacting Fermion such as Luttinger Liquids, FQHE, where the elementary charge excitation can be fractional [4]. \ \ {[1]} Orthogonality catastrophe in a mesoscopic conductor due to a time dependent flux, H. Lee and L. Levitov arXiv:cond-mat/9312013v1 (1993). \ {[2]} Electron counting statistics and coherent states of the electric current, L. Levitov; H. Lee and G. Lesovik, J. Math. Phys. 37, 4845 (1996). \ {[3]} J. Dubois et al in prepartion. \ {[4]} Minimal excitation states of electrons in one-dimensional wires, J. Keeling, I. Klich, and L. Levitov, Phys. Rev. Letters 97, 116403 (2006).
{"url":"https://www.ipht.fr/en/Phocea/Vie_des_labos/Seminaires/index.php?type=2&id=992036","timestamp":"2024-11-08T09:17:53Z","content_type":"text/html","content_length":"29519","record_id":"<urn:uuid:c69a1fff-76a8-4441-881d-75705450c5a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00442.warc.gz"}
Applications of Graph Sampling Shweta Jain Thursday, May 3th, 2018, 11h, salle 26-00/101, Campus Jussieu Large graphs have become commonplace making many of the traditional graph-theoretic algorithms infeasible. Moreover, sometimes we don’t have access to the whole graph. This has led us to revisit graph algorithms and necessitated graph sampling. In this talk, I will explore 2 applications of graph sampling. In the first application, called TurnShadow, we propose a method for efficient counting of k-cliques. It uses the classic result called Turn’s theorem to give provable error bounds. We also do extensive evaluation of the method on real-world graph instances and demonstrate that it is fast and extremely accurate. In the second application, we propose a method called SADDLES to estimate the degree distribution when we are given only limited access to the graph, and accessing the graph is costly. We assume we have access to uniform at random (u.a.r ) vertex, u.a.r neighbor of a vertex and the degree of the vertex. We compare SADDLES with many other state-of-the-art methods and demonstrate that SADDLES requires far fewer samples to achieve the same degree of accuracy.
{"url":"https://www.complexnetworks.fr/applications-of-graph-sampling/","timestamp":"2024-11-10T14:18:20Z","content_type":"text/html","content_length":"75755","record_id":"<urn:uuid:e5b75cf5-16de-43f1-84dd-8af75e1c5324>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00538.warc.gz"}
Generalization as a Means of Intelligence Amplification I realized recently that my blog posts might seem a little further off the wall than I usually intend them to. This seems like there is more inferential distance anticipated, so I thought it might be a good opportunity to take a step back and see if we can’t try to close the priors gap a little bit. Today I want to talk about the most valuable skill I possess. It’s the reason I’ve been able to make big changes in my life over the last year; it’s the only reason that Facebook pays me silly amounts of money to work for the (surprise: it’s not being able to program); and it’s my only claim to fame in the intelligence department. Being the nice guy that I am, I’m going to share it with you today, but first, a little backstory. A new friend of mine posted on Facebook the other day: “everything is hard until someone makes it easy”. This is a sentiment which resonates with me, and (because I had previously been thinking about this blog post), I realized that a big part of making things easy is having concepts and words which allow you to succinctly reason about it. Visualize with me for a second, the true story of a friend of mine, an amateur musician with a knack for gnarly guitar solos. Unfortunately, he grew up in a metal kind of counterculture and seems to have a disdain for the musical establishment. He avoids any kind of classical training that he can, preferring to generate it for himself. While this works to an extent (he comes up with some sick riffs), my friend is plagued by the fact that he can’t read or write musical notation. Whenever he comes up with a particularly cool lick, his only option is to record it or to practice it enough times that he won’t forget it. Instead of, you know, just like, writing it down. Musical theory exists for a reason. While it’s certainly an error to only study musical theory without actually playing music, it’s also a mistake to go the other direction and avoid it altogether. Knowing the theory means you can listen to it and save yourself some time, or explicitly go against the traditional and do something different. And it also gives you a convenient notation for being able to encode your ideas. But I don’t want to write about musical theory. I want to talk about encoding ideas in convenient forms. Your brain does this all the time. Like literally, always. There is infinitely too much information in the universe to soak in, so instead you gloss over some of it, looking only at what is most relevant to the circumstances. For instance, you know that except in extreme circumstances, you don’t really need to ever think about the last conversation your coworker had with his mom. It’s information you know must exist if you stop to think about it, but it’s really not all that interesting to you. Furthermore, you probably have a mental image of what your best friend looks like, but likely couldn’t accurately draw her nose without looking, even if you were good at drawing. The point I’m trying to impress is that we gloss over specifics all the time. We know that they’re there, and we know where to find them if they ever become relevant, but for most intents and purposes, we don’t really care about them. Along similar lines, our minds are naturally prone to generalize. “Humans have 10 fingers.” This is broadly true, but I’m sure you can bring to mind someone with fewer fingers. If you try, you can probably think of someone with more than 10 fingers too. So why do we say that “humans have 10 fingers” if it’s not ostensibly true? Because it’s a useful abstraction, and it’s very concise. Imagine instead having the belief that “humans have 10 fingers, except for the ones who have 9, or 11. Oh and don’t forget, there are people who have lost two or more fingers! Well I guess it’s safe to say that humans have fingers, even if we can’t put an actual number on them. Unless of course they were exposed to thalidomide as a child.” That’s a mouthful, and it’s still not even entirely So instead, our brains (through what is known as a need for closure) happily cut their losses and retire to the fact that they’re not going to have correct beliefs, and instead might as well have mostly-correct beliefs. This is the process of abstraction. It’s discarding inane, surface-level inconsequential details, and instead focusing on the form behind what it is that you’re looking at. It’s the process of realizing the underlying features that make this thing what it is. As a quick exercise, create a list of 5 features about vehicles that are mostly-correct. Aside: the classical philosophy that you have likely been exposed to in your lifetime gets this task wrong. Not only does it get it wrong, but it butchers all of the usefulness out of it. Remember, you’re trying to find a list of features that seem to uniquely identify a vehicle from a non-vehicle. What you are not trying to do is impose lots of words defining what a vehicle is, and then write a treatise on it and fight tooth and nail to tell other people why what they call a vehicle is wrong. No! We just want to know what kinds of things a vehicle is or does. This ability to generalize from specifics is the key to intelligence. Not just like, book-smarts, this is the characteristic that separates us from animals. (Which is not to say that humans are Aristotle’s “rational animal”). This ability to abstract is what gave humanity agriculture, industry, science, politics and technology. We’re able to stop and look at the useful parts of something, and separate the wheat from the chaff. Please take a second to realize the gravitas of this. It’s an incredibly useful skill, and it turns out that, like any other skill (can you see me generalizing here?) it can be practiced and trained. Think about that – this is an easy technique you can practice to become smarter. It has nothing to do with your IQ, but instead it’s your ability to see patterns and make connections. Making connections is good, because if you have a generalization of a problem, and you come up with a solution to this generalization, the solution will usually work for all of the specific problems that you began with. Instead of solving one problem at a time, you’ve just gained the ability to solve thousands of problems simultaneously. In my books, that counts as being 1000x smarter. However, making connections is hard. Perfectly finding patterns turns out to not just be hard, but computationally impossible. But don’t let that get you down, remember what my friend on Facebook said? “Everything is hard until someone makes it easy.” And remember, that the key to making things easy is to have concepts that allow you to reason about them. It turns out then, if our goal is intelligence amplification, we should focus on developing concepts which allow us to think more concisely about generalizations. As it turns out, some nice people have gone ahead and done a whole bunch of work on this topic for us. You’re probably going to hate me when I tell you who, but please bear with me for a second and let me make my argument before dismissing it. Promise? Those people are mathematicians. And the concepts are mathematical ideas. If you have misgivings about math, cast them out of your mind right now. Mathematics isn’t the dry symbol manipulation without real-life applications that you were taught in grade school. It’s really, really and truly not. Your teachers were just shitty, and they were teaching by rote and didn’t actually know what mathematics were either. I used to be of the same mindset as you, and I didn’t know what mathematics were either, despite majoring in the faculty of math at a prestigious university for three years. If you’re interested in what mathematics is really like, from the eyes of a mathematician, I would highly recommend A Mathematician’s Lament. It changed my mind. Anyway, the point to which I’m laboriously trying to get is that the entire study of mathematics is this never-ending explosion of abstractions. Unlike our earlier examples of generalizing over people, it turns out that in mathematics you can generalize your generalizations, and then again. My favorite quote of all time is from E. T. Jayne’s Probability Theory: “We will not prove this result in general here, as we will find out later that it is in fact a special-case of a more general rule still.” Let’s look at a quick, simple example. Numbers. If I were to ask you to start counting, how would you start? “one… two… three..”. This sequence is known as the natural numbers, and they’re really the only numbers that actually exist in the universe^1. When you are counting apples, these are the only numbers you will use. Except that, if you think for a second, you can come up with another number. Zero. Oh yeah! That one! While it requires a little bit of philosophizing to argue that you can point out zero apples, you immediately recognize this to be a useful concept. If we include zero with the natural numbers we come up with the whole numbers, and the whole numbers let us describe the count of things, or the absense of things. Aside: don’t worry about remembering what these sets of numbers are called. If you’re not a mathematician it’s not going to make any difference to you in ten minutes. Instead focus your attention on what happens as we climb up the abstraction tree. Feel your mind expanding as we think about these bigger and bigger structures. The next obvious thought is, well, if we can have positive numbers, why can’t we also have negative numbers? With whole numbers we were able to express addition, but we couldn’t talk about being able to remove objects from a count. The set of negative, zero, and positive numbers is known as the integers. All of a sudden we can now think about bank accounts and business transactions and eating pieces of pie. The things we can apply numbers to has just become bigger again. There is a notable problem with our current number system though. We can’t express parts of things. We can’t talk about half of an apple, nor can we talk about dollars and cents. Clearly our model of what numbers are is lacking, and so we invent the real numbers – numbers which can be split and divided indefinitely. Now we can also talk about distances and time durations. Cool! For reasons that will become evident in the near future, we will also call this set of numbers R1 – short for real. Surely now we’re done, right? What other kinds of numbers could there be? Well, if we have a set known as R1, does it make any sense to talk about R2? What would that mean? A complete stab in the dark would estimate that it’s twice as big as R1, which would mean we’d need two R1s. You can think about R1 as a line (remember, it allows us to talk about distance?), so it might make sense if we position these two lines in such a way that they form a corner, and this is exactly what R2 is. You can think of R2 as a piece of paper. It’s flat, but has two dimensions, width and height. Now that we have R2, instead of just talking about distances we can now talk about areas! You can now impress people at parties by telling them the floor area of your house, and it’s all thanks to R2. And again, if we have R2, what about R3? What would that be? By analogous construction of R2, if we take three lines and position them so they all form a single corner, this means one of them is going to need to stick straight out of the piece of paper that we had in R2. You could also call R3 “3-D”. R3 allows us to discuss physical spaces. We can now talk about the locations of airplanes and we can gossip about our friends on the Atkins diet. Again, I’m going to ask you to take a second and realize what we’ve done. We just generalized a number from something you can count with to something you can use you judge your fat friends. Again, none of this should be new to you – it’s not a surprise that you have a weight, but hopefully this way of thinking how to get there from here is. Numbers are all of the things we have talked about, and they are way more things that I’m not going to get into here. It all depends at how you look at them, and how willing you are to let your mind be flexible on the topic. Next week we’re going to generalize better and harder and (most importantly) more usefully. It will be less mathy, I promise. Get hype. 1. One could make an argument that truly no numbers exist, and they’re just generalizations of structures in the universe, but this wouldn’t be a useful exercise in “trying to find abstractions”, now would it? Don’t be that guy.↩︎
{"url":"https://sandymaguire.me/blog/generalization-as-intelligence-amplification/","timestamp":"2024-11-10T05:52:30Z","content_type":"application/xhtml+xml","content_length":"17059","record_id":"<urn:uuid:c5ace279-96d9-4bc4-9eb5-50cf480a9ece>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00610.warc.gz"}
Complementary Angles - GED Math All GED Math Resources Example Questions Example Question #1 : Angle Geometry Refer to the above figure. You are given that Which two angles must be complementary? Correct answer: cannot be complementary). cannot be complementary). Example Question #2 : Angle Geometry Angles A and B are complementary angles. The measure of angle A is Correct answer: Since angles A and B are complementary, their measures add up to 90 degrees. Therefore we can set up our equation as such: Combine like terms and solve for Example Question #3 : Angle Geometry What is the measure of an angle that is complementary to an angle measuring Correct answer: The sum of complementary angles is equal to Set up the following equation and solve for Example Question #4 : Angle Geometry Angles A and B are complementary angles. The measure of angle A is Correct answer: Since angles A and B are complementary angles, their measurements add up to equal 90. Therefore, we need to set up our equation as follows: Combine like terms: And solve for Example Question #4 : Angle Geometry Angles A and B are complementary angles. The measure of angle A is Correct answer: Since angles A and B are complementary, their measures add up to equal 90 degrees. Therefore we can set up an equation as follows: Combine like terms and solve for Now that we have found Example Question #5 : Angle Geometry What angle is complementary to 55? Correct answer: The definition of a complementary angle means that two angles must add up to 90 degrees. To find the angle, simply subtract 55 from 90. The answer is: Example Question #1 : Complementary Angles What angle is complementary to 10 degrees? Correct answer: Complementary angles must add up to ninety. Subtract the given angle from 90 to find the other angle. The answer is: Example Question #3 : Complementary Angles Correct answer: Set up the equation such that both angles sum up to 90. Add 5 on both sides. Divide both sides by 8. The value of The answer is: Example Question #5 : Complementary Angles Correct answer: Complementary angles must add up to 90 degrees. Set up an equation so that the sum of two angles add up to 90. Subtract 8 on both sides. Simplify both sides. Divide by two on both sides. The answer is: Example Question #8 : Angle Geometry What angle is complementary to Correct answer: Note that this is in radians. Recall that 2 complementary angles must add up to 90 degrees. To find the complementary angle, subtract The answer is: Certified Tutor CUNY John Jay College of Criminal Justice, Bachelor of Science, Cellular and Molecular Biology. Certified Tutor The Art Institute of Tampa, Bachelor in Arts, Animation and Special Effects. Certified Tutor State University of New York at Stony Brook, Bachelor, Political Science.
{"url":"https://www.varsitytutors.com/ged_math-help/complementary-angles","timestamp":"2024-11-06T20:18:36Z","content_type":"application/xhtml+xml","content_length":"178309","record_id":"<urn:uuid:75b582ba-0ed6-40e9-b3ef-3ec92cf87e8d>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00017.warc.gz"}
Top 10 Mathematicians In The World: Overview & Contributions! Top 10 Mathematicians In The World: Check Here For Their Contributions & More! Somya Luthra | Updated: Jun 20, 2023 22:36 IST Mathematics has delivered probably the best personalities ever. Mathematicians have reformed the manner in which we think, established the groundwork for logical revelations and, surprisingly, steered history. The following are top 10 mathematicians in the world ever who have made fantastic commitments to mathematics and science. Mathematics is a captivating field that investigates the examples and connections that oversee the universe. These people have made wonderful commitments to the field, changing comprehension; we might interpret numbers, calculation, and then some. Top 10 Mathematicians In The World: Names & Contributions Here below you will get the list of Top 10 Mathematicians In The World. The information about their contributions is also given. 1. Archimedes Archimedes, a Greek mathematician and inventor, established the foundations of infinitesimal calculus and engineering science. He has to his name discoveries in geometry, mechanical engineering, hydrostatics and numerical analysis. Some of his famous inventions are the water screw and compound pulley. 2. Carl Friedrich Gauss Carl Friedrich Gauss was a German mathematician and physicist who made significant contributions to many fields of mathematics like algebra and number theory and is known as the “Princeps Mathematicorum”. Among his many discoveries are Gaussian integers and Gaussian elimination. 3. G. H. Hardy G.H. Hardy was a British mathematician known for his seminal contributions to number theory, mathematical analysis, and mathematical logic. He co-authored the famous book “Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work” with Srinivasa Ramanujan. He showed significant interest in applied mathematics as well. 4. Srinivasa Ramanujan Srinivasa Ramanujan was an Indian mathematician who made substantial contributions to the analytical theory of numbers and worked on elliptic functions, continued fractions, and infinite series. He had an intuitive and mysterious grasp of mathematics and his discoveries inspired areas of advanced research. 5. Leonhard Euler Leonhard Euler was a prolific Swiss mathematician and physicist. He made fundamental contributions to many fields of mathematics, including number theory, geometry, trigonometry, infinitesimal calculus, graph theory, etc. He is considered to be one of the most influential mathematicians of all time. 6. Évariste Galois Despite a tragically short life, French mathematician Évariste Galois left an indelible mark on the field of algebra. His work in group theory and Galois theory revolutionized the study of equations and laid the foundation for modern abstract algebra. Galois’ insights into symmetry and equations enabled mathematicians to solve complex problems with elegance and precision. 7. Henri Poincaré Henri Poincaré was a French mathematician whose work traversed many fields, including geography, differential conditions, and divine mechanics. He made huge commitments to the three-body issue in divine mechanics and spearheaded the field of mathematical geography. Poincaré’s revelations keep on affecting current math and physical science. 8. David Hilbert Considered one of the most compelling mathematicians of the twentieth hundred years, David Hilbert made pivotal commitments to various regions, including number hypothesis, variable based math, and numerical rationale. His work on the underpinnings of science, known as Hilbert’s program, set up for the improvement of formal frameworks and aphoristic thinking. 9. Emmy Noether Emmy Noether was a German mathematician who made significant contributions to abstract algebra and theoretical physics. Her groundbreaking theorem, known as Noether’s theorem, connects symmetries in physics with conservation laws. Noether’s work provided a deep understanding of the fundamental principles underlying the laws of physics. 10. Alexander Grothendieck German-born French mathematician Alexander Grothendieck revolutionized algebraic geometry and laid the groundwork for the modern study of schemes and sheaf theory. His work on algebraic topology and the cohomology theory transformed the field and inspired generations of mathematicians. The top 10 mathematicians in the world have made a permanent imprint on the field, propelling comprehension we might interpret numbers, shapes, and the major laws of the universe. Their commitments keep on molding current math and motivate people in the future to investigate the secrets of this enrapturing discipline.These mathematicians exemplify the power of human intellect and the beauty of uncovering the intricate patterns that surround us. Top 10 Mathematician In The World FAQs Q.1 What was Euclid’s most famous work? Ans.1 Euclid’s most famous work is his 13 volume book called “The Elements”. In this work, he collected and organized mathematical knowledge from earlier Greek mathematicians and established geometry as a formal logical system based on axioms and proofs. Q.2 Who discovered calculus? Ans.2 Isaac Newton and Gottfried Leibniz are considered the independent co-inventors of calculus. However, Newton’s work in calculus came first, though Leibniz published first. Both men made fundamental contributions to the development of calculus as we know it today. Q.3 What did Carl Gauss contribute? Ans.3 Carl Gauss made significant advances in number theory, algebra, geometry, probability theory and the theory of planetary orbits. He is known for Gauss’s law, the Gaussian function and the Gaussian distribution which is used extensively in statistics. Q.4 What did Blaise Pascal contribute to mathematics? Ans.4 Blaise Pascal made significant contributions to the fields of projective geometry and probability theory. He helped lay the foundations of projective geometry and is credited with formulating Pascal’s theorem. He also popularized the study of probability and probability theory. Q.5 What did Archimedes discover? Ans.5 Archimedes made significant revelations in the fields of calculation, science and physical science. A portion of his most renowned disclosures incorporate the volume of a circle, the region of a circle, and the idea of infinitesimals during computation. He also invented many practical machines and devices.
{"url":"https://kmatkerala.in/top-10-mathematicians-in-the-world/","timestamp":"2024-11-12T03:08:44Z","content_type":"text/html","content_length":"160663","record_id":"<urn:uuid:a21f2df1-30b4-4071-916e-23bd3b478109>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00069.warc.gz"}
Exercise 1. (25 points) Ford-Fulkerson.... Exercise 1. (25 points) Ford-Fulkerson We will implement the Ford-Fulkerson algorithm to calculate the Maximum Flow of a directed weighted graph. Here, you will use the files WGraph.java and FordFulkerson.java, which are available on the course website. Your role will be to complete two methods in the template FordFulkerson.java. The file WGraph.java is similar to the file that you used in your previous assignment to build graphs. The only differences are the addition of setter and getter methods for the Edges and the addition of the parameters “source” and “destination”. There is also an additional constructor that will allow the creation of a graph cloning a WGraph object. Graphs are encoded using a similar format than the one used in the previous assignment. The only difference is that now the first line corresponds to two integers, separated by one space, that represent the “source” and the “destination” nodes. An example of such file can be found on the course website in the file ff2.txt. These files will be used as an input in the program FordFulkerson.java to initialize the graphs. This graph corresponds to the same graph depicted in [CLRS2009] page 727. Your task will be to complete the two static methods fordfulkerson(Integer source, Integer destination, WGraph graph, String filePath) and pathDFS(Integer source, Integer destination, WGraph graph). The second method pathDFS finds a path via Depth First Search (DFS) between the nodes “source” and “destination” in the “graph”. You must return an ArrayList of Integers with the list of unique nodes belonging to the path found by the DFS. The first element in the list must correspond to the “source” node, the second element in the list must be the second node in the path, and so on until the last element (i.e., the “destination” node) is stored. The method fordfulkerson must compute an integer corresponding to the max flow of the “graph”, as well as the graph encoding the assignment associated with this max flow. The method fordfulkerson has a variable called myMcGillID, which must be initialized with your McGill ID number. Once completed, compile all the java files and run the command line java FordFulkerson ff2.txt. Your program must use the function writeAnswer to save your output in a file. An example of the expected output file is available in the file ff226000000.txt. This output keeps the same format than the file used to build the graph; the only difference is that the first line now represents the maximum flow (instead of the “source” and “destination” nodes). The other lines represent the same graph with the weights updated to the values that allow the maximum flow. The file ff226000000.txt represents the answer to the example showed in [CLRS2009] Page 727. You are invited to run other examples of your own to verify that your program is correct. Exercise 2. (25 points) Bellman-Ford We want to implement the Bellman-Ford algorithm for finding the shortest path in a graph where edges can have negative weights. Once again, you will use the object WGraph. Your task is to complete the method BellmanFord(WGraph g, int source) and shortestPath(int destination) in the file BellmanFord.java. The method BellmanFord takes an object WGraph named g as an input and an integer that indicates the source of the paths. If the input graph g contains a negative cycle, then the method should throw an exception (see template). Otherwise, it will return an object BellmanFord that contains the shortest path estimates (the private array of integers distances), and for each node, its predecessor in the shortest path from the source (the private array of integers predecessors). The method shortestPath will return the list of nodes as an array of integers along the shortest path from the source to the destination. If this path does not exists, the method should throw an exception (see template). Input graphs are available on the course webpage to test your program. Nonetheless, we invite you to also make your own graphs to test your program.
{"url":"https://codingprolab.com/answer/exercise-1-25-points-ford-fulkerson/","timestamp":"2024-11-14T00:09:09Z","content_type":"text/html","content_length":"110376","record_id":"<urn:uuid:0f39b07d-df0a-45b5-b6a2-942ed92bb7dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00500.warc.gz"}
Returns minimum values. m = min(x) m = min(x, [], dim) [m,idx] = min(...) m = min(x, y) The matrix to query. Type: double | complex | integer Dimension: scalar | vector | matrix The second matrix for a pairwise query with x. Type: double | complex | integer Dimension: scalar | vector | matrix The dimension on which to operate, or 'all'. Default: first non-singleton dimension. Type: int | string The minimum value(s). Dimension: scalar | vector | matrix The index of each minimum value. Not valid for a pairwise query. Type: integer Dimension: scalar | vector | matrix Vector input with two outputs: [m,idx] = min([7,1,5]) m = 1 idx = 2 Matrix input: m = min([1,6;2,7]) m = [Matrix] 1 x 2 Matrix input with dimension: m = min([1,6;2,5],[],1) m = [Matrix] 1 x 2 Two matrix inputs: m = min([1,6;2,5],[1,2;3,4]) m = [Matrix] 2 x 2 For a single operand, the function returns the minimum element of each vector in the specified dimension. If the second output is requested, it contains the single index of each minimum element. For two operands with common dimensions, the function returns the minimum element from each element pair. If an operand is a single element, it is compared with each element of the other operand. NaN single values and elements in dense matrices are ignored except when there are no numeric values to compare.
{"url":"https://www.openmatrix.org/help/topics/reference/oml_language/ElementaryMath/min.htm","timestamp":"2024-11-08T15:15:59Z","content_type":"application/xhtml+xml","content_length":"9623","record_id":"<urn:uuid:69e295e4-11f9-47c8-b05d-09d36d566403>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00884.warc.gz"}
dask.array.prod(a, axis=None, dtype=None, keepdims=False, split_every=None, out=None)[source]¶ Return the product of array elements over a given axis. This docstring was copied from numpy.prod. Some inconsistencies with the Dask version may exist. Input data. axisNone or int or tuple of ints, optional Axis or axes along which a product is performed. The default, axis=None, will calculate the product of all the elements in the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of ints, a product is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before. dtypedtype, optional The type of the returned array, as well as of the accumulator in which the elements are multiplied. The dtype of a is used by default unless a has an integer dtype of less precision than the default platform integer. In that case, if a is signed then the platform integer is used while if a is unsigned then an unsigned integer of the same precision as the platform integer is used. outndarray, optional Alternative output array in which to place the result. It must have the same shape as the expected output, but the type of the output values will be cast if necessary. keepdimsbool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the prod method of sub-classes of ndarray, however any non-default value will be. If the sub-class’ method does not implement keepdims any exceptions will be raised. initialscalar, optional (Not supported in Dask) The starting value for this product. See ~numpy.ufunc.reduce for details. wherearray_like of bool, optional (Not supported in Dask) Elements to include in the product. See ~numpy.ufunc.reduce for details. product_along_axisndarray, see dtype parameter above. An array shaped as a but with the specified axis removed. Returns a reference to out if specified. See also equivalent method Arithmetic is modular when using integer types, and no error is raised on overflow. That means that, on a 32-bit platform: >>> x = np.array([536870910, 536870910, 536870910, 536870910]) >>> np.prod(x) 16 # may vary The product of an empty array is the neutral element 1: By default, calculate the product of all elements: >>> import numpy as np >>> np.prod([1.,2.]) Even when the input array is two-dimensional: >>> a = np.array([[1., 2.], [3., 4.]]) >>> np.prod(a) But we can also specify the axis over which to multiply: >>> np.prod(a, axis=1) array([ 2., 12.]) >>> np.prod(a, axis=0) array([3., 8.]) Or select specific elements to include: >>> np.prod([1., np.nan, 3.], where=[True, False, True]) If the type of x is unsigned, then the output type is the unsigned platform integer: >>> x = np.array([1, 2, 3], dtype=np.uint8) >>> np.prod(x).dtype == np.uint If x is of a signed integer type, then the output type is the default platform integer: >>> x = np.array([1, 2, 3], dtype=np.int8) >>> np.prod(x).dtype == int You can also start the product with a value other than one: >>> np.prod([1, 2], initial=5)
{"url":"https://docs.dask.org/en/stable/generated/dask.array.prod.html","timestamp":"2024-11-08T09:15:45Z","content_type":"text/html","content_length":"39932","record_id":"<urn:uuid:cefb4eb3-9596-4687-938f-22fe6cbb1c04>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00483.warc.gz"}
Point A is at (-8 ,2 ) and point B is at (7 ,-1 ). Point A is rotated pi/2 clockwise about the origin. What are the new coordinates of point A and by how much has the distance between points A and B changed? | HIX Tutor Point A is at #(-8 ,2 )# and point B is at #(7 ,-1 )#. Point A is rotated #pi/2 # clockwise about the origin. What are the new coordinates of point A and by how much has the distance between points A and B changed? Answer 1 Decrease in distance due to the rotation by $\pi$, #d =color(green)( 4.9985# $A \left(- 8 , 2\right) , B \left(7 , - 1\right)$ Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 The new coordinates of point A after rotating (\pi/2) clockwise about the origin are (2, 8). To find the new coordinates after rotation, we use the following formulas: New_x = Old_y New_y = -Old_x For point A (-8, 2): New_x = 2 New_y = -(-8) = 8 Regarding the change in distance between points A and B, rotating point A about the origin does not change the distance between points A and B. Therefore, the distance between points A and B remains the same. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/point-a-is-at-8-2-and-point-b-is-at-7-1-point-a-is-rotated-pi-2-clockwise-about--8f9afa2fd4","timestamp":"2024-11-14T10:51:17Z","content_type":"text/html","content_length":"582109","record_id":"<urn:uuid:f5632abc-3a71-47c7-a537-74a228de304b>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00530.warc.gz"}
[QSMS Monthly Seminar] Closed orbits and Beatty sequences Date: 6월24일 금요일, 2021년 Place: 129동 Title: Closed orbits and Beatty sequences Speaker: 강정수 교수 A long-standing open question in Hamiltonian dynamics asks whether every strictly convex hypersurface in R^{2n} carries at least n closed orbits. This was answered affirmatively in the non-degenerate case by Long and Zhu in 2002. The aim of this talk is to outline their proof and to highlight its connection to partitioning the set positive integers. Title: On the $\tilde{H}$-cobordism group of $S^1 \times S^2$'s Speaker: 이동수 박사 Kawauchi defined a group structure on the set of homology S1 × S2’s under an equivalence relation called $\tidle{H}$-cobordism. This group receives a homomorphism from the knot concordance group, given by the operation of zero-surgery. In this talk, we review knot concordance invariants derived from knot Floer complex, and apply them to show that the kernel of the zero-surgery homomorphism contains an infinite rank subgroup generated by topologically slice knots.
{"url":"https://qsms.math.snu.ac.kr/index.php?mid=board_sjXR83&document_srl=1604&order_type=desc&listStyle=viewer&page=4","timestamp":"2024-11-07T20:27:03Z","content_type":"text/html","content_length":"20650","record_id":"<urn:uuid:3e15e91a-f322-4844-8e30-4927886d23c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00383.warc.gz"}
Trade payables days formula Accounts payable are amounts you owe to your suppliers that are payable sometime within the near future — "near" meaning 30 to 90 days. Without payables 24 Feb 2017 To calculate accounts receivable days, the formula is as follows: in relation to other accounting metrics, such as accounts payable days. Net trade cycle calculates how many days and dollars are tied up in accounts receivable and inventory and furnished by the accounts payable. 10 Mar 2020 be improved. We'll explore these variables in The Magic Formula section. The number of days it takes you to pay your accounts payable. Formula. The DPO ratio is calculated as follows: DPO = (Accounts Payable / Cost of Goods Sold in Accounting Period) x Days in The creditor days also known as a financial term - days payable outstanding ( DPO) is a ratio that Trade creditors of Payables = Enter the yearly payable amount to creditors. To calculate creditor days you can use the following formula:. Accounts receivable and accounts payable can significantly affect a If the term DSO / Days per Month in the formula above is not a whole number, the formula 21 May 2013 The formula for the Cash Conversion Cycle is: It measures the number of days of Accounts Payable the company has outstanding relative to Days payables outstanding (DPO) is the average number of days in which a company pays its suppliers. It is also called number of days of payables. In general, a low DPO highlights good working capital management because the company is availing early payment discounts. Days Payable Outstanding Formula = Accounts Payable / (Cost of Sales / Number of Days) Days payable outstanding is a great measure of how much time a company takes to pay off its vendors and suppliers. Creditor Days show the average number of days your business takes to pay suppliers. It is calculated by dividing trade payables by the average daily purchases for a set period of time. In this example we’ve used a calendar year. The equation to calculate Creditor Days is as follows: Creditor Days = Accounts Payable days Formula. The formula for calculating Accounts Payable Days is: (Accounts Payable / Cost of Goods Sold) x Number of Days In Year; For the purpose of this calculation, it is usually assumed that there are 360 days in the year (4 quarters of 90 days). Accounts Payable Days is often found on a financial statement projection model. For example, a payables turnover ratio of 10 means that the payables have been paid 10 times in one year. A variant of payables turnover is number of days of payables. Number of days of payables of 30 means that on average the company takes 30 days to pay its Days payable outstanding (DPO) is the ratio of payables to the daily average of cost of sales. The formula for DPO is: Days Payables Outstanding = Accounts 21 May 2013 The formula for the Cash Conversion Cycle is: It measures the number of days of Accounts Payable the company has outstanding relative to Accounts payable payment period (also called days purchases in accounts payable) examines the relationship between credit purchases and payments The formula for Days Payable Outstanding is: Days Payable Outstanding DPO Formula. The numerator of this ratio is ending accounts payable, taken from the Accounts payable are amounts you owe to your suppliers that are payable sometime within the near future — "near" meaning 30 to 90 days. Without The trade payables' payment period ratio represents the time lag between a credit purchase and making payment to the supplier. Formula to Calculate Creditor's Turnover Ratio Creditor's turnover ratio or Accounts payable turnover ratio = (Net Credit Sales/Average Trade Receivables). Many businesses that appear profitable are forced to cease trading due to an Cash operating cycle = Inventory days + Receivables days – Payables days. 21 May 2013 The formula for the Cash Conversion Cycle is: It measures the number of days of Accounts Payable the company has outstanding relative to Days payables outstanding (DPO) is the average number of days in which a company pays its suppliers. It is also called number of days of payables. In general, a low DPO highlights good working capital management because the company is availing early payment discounts. The formula for DPO is as follows: Days Payable Outstanding = Average Accounts Payable / (Cost of Sales / Number of Days in Accounting Period). Where: Formula. The days payable outstanding formula is calculated by dividing the accounts payable by the derivation of cost of sales and the average number of days Days payable outstanding (DPO) is the ratio of payables to the daily average of cost of sales. The formula for DPO is: Days Payables Outstanding = Accounts The formula for the DPO ratio is very similar to the DSO ratio with some minor variations. To calculate the DPO you divide the ending accounts payable by the The trade payables' payment period ratio represents the time lag between a credit purchase and making payment to the supplier. Formula. The formula for the cash conversion cycle, abbreviated CCC, is as follows: CCC equals (Inventory days) plus (Accounts receivable days) minus 30 Oct 2019 creditor days formula. Creditors is given in the Balance Sheet and is normally under the heading Trade Creditors or Accounts Payable. Days payable outstanding (DPO) is a financial ratio that indicates the average time (in days) that a company takes to pay its bills and invoices to its trade creditors, which include suppliers, vendors or other companies. The ratio is calculated on a quarterly or on an annual basis,
{"url":"https://digoptionejhcw.netlify.app/helfin71962pez/trade-payables-days-formula-141","timestamp":"2024-11-12T17:17:17Z","content_type":"text/html","content_length":"33409","record_id":"<urn:uuid:4c2f0ede-e214-4413-96fa-415972cc9812>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00868.warc.gz"}
Groseclose and Snyder (1993) - Buying Supermajorities Jump to navigation Jump to search Has bibtex key Has article title Buying Supermajorities Has author Groseclose and Snyder Has year 1993 In journal In volume In number Has pages Has publisher © edegan.com, 2016 Model Setup 2011 2nd Year exam paper question. • Players: Legislators, vote buyers A and B. • Choice space: [math](x,s)\in R[/math]. • Preferences: Legislators: [math]u_{i}=u_{i}(x)-u_{i}(s)[/math] • Game form: A first, b second. Bribe upon commitment to vote. • Information: Complete/perfect. • EQM: SPNE, Pure strategies, tie rule: Vote for last offer. • Because lobbyists are worried about competitors invading coalitions, it is sometimes cheaper to bribe a large majority, possibly including the entire legislature. Rui's points: • Sequence matters, • More than minimum winning coalition • First mover advantage and 2nd mover advantage. • "The proofs are horrendous, you don't need to know those."
{"url":"https://www.edegan.com/mediawiki/index.php?title=Groseclose_and_Snyder_(1993)_-_Buying_Supermajorities&mobileaction=toggle_view_desktop","timestamp":"2024-11-07T19:11:27Z","content_type":"text/html","content_length":"23523","record_id":"<urn:uuid:e7ddd228-9b60-4678-b4f3-f86d3e1e6d22>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00814.warc.gz"}
Lesson 12 Represent Problems on the Coordinate Grid Lesson Purpose The purpose of this lesson is for students to represent situations by plotting and interpreting points on the coordinate grid. Lesson Narrative The purpose of this lesson is to use the coordinate grid to represent real world data. Students work with coins in two different ways. In the first activity, they flip the coin 10 times and plot the number of heads and number of tails they get. Students plot their results on the coordinate grid and also interpret points in terms of coin flipping. In the second activity, students consider the number of coins and their total value. Again the focus is on plotting and interpreting points representing different sets of coins (MP2). Learning Goals Teacher Facing • Represent real world and mathematical problems by graphing points in the first quadrant of the coordinate grid, and interpret coordinate values of points in the context of the situation. Student Facing • Let’s represent problems on the coordinate grid. Required Preparation Activity 1: • Gather pennies, nickels, dimes, and quarters to show students during the launch. Lesson Timeline Warm-up 10 min Activity 1 15 min Activity 2 15 min Lesson Synthesis 10 min Cool-down 5 min Teacher Reflection Questions With only one lesson remaining in the unit, where do you see evidence of growth in each of your students’ understandings? For students about whom you are not sure, make a note and find out more about their thinking tomorrow. Suggested Centers • Can You Draw It? (1–5), Stage 7: Grade 5 Shapes (Addressing) • Picture Books (K–5), Stage 3: Find Shapes (Addressing) Additional Resources Google Slides For access, consult one of our IM Certified Partners. PowerPoint Slides For access, consult one of our IM Certified Partners.
{"url":"https://curriculum.illustrativemathematics.org/k5/teachers/grade-5/unit-7/lesson-12/preparation.html","timestamp":"2024-11-04T12:26:41Z","content_type":"text/html","content_length":"69043","record_id":"<urn:uuid:b1135aaa-8e9f-43f9-9a85-89dbc40de278>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00165.warc.gz"}
an analysis of his unpublished calculations.'' Hist. Math. (2019), https://doi.org/10.1016/j.hm.2019.06.001. We discuss the number 2.7182818 mentioned cryptically by Leibniz in the private letter noted below without explanation of its derivation. As far as we can tell, Leibniz was the first (by decades) to compute e accurately, identify it as the basis for the natural logarithm and exponential function and use the value to graph a catenary (now better known as a hyperbolic cosine). Siegmund Probst gave the introductory talk Some early implicit and actual appearances of the number commonly known as e, at The First International Virtual Symposium Honoring the "Eulerian" Number e = 2.71828...., February 7, 18:28 Amsterdam time. My talk following Probst is linked below at 2023 under Invited Talks. More about the Catenary Several talks resulted from an effort to find out who first solved the catenary problem: find the curve of a freely hanging chain. Leibniz and Johann Bernoulli independently published the first solutions in Acta Eruditorum 1691. Leibniz presented his solution as a classical Euclidean construction without an explanation of how he discovered it. The following talks about his construction represent the evolution in my attempts to understand how Leibniz could have arrived at his solution. In the culminating talk of June 30, 2017, I presented his analysis as disclosed in a private letter to Rudolph Von Bodenhausen made known to me by Sigmund Probst of the Leibniz Archive Hanover (Göttingen Academy of Sience). This is a showcase example from the time when geometry was being eclipsed by analysis as a standard for defining mathematical objects. In reverse-chronologic order: 2017 (June 30) This talk, given for RIPS17 at IPAM, follows one year after the first talk for RIPS16: Leibniz used Calculus to solve the Catenary Problem, but he presented it as a Euclidean Construction without Explanation. In a private letter, Leibniz explained his analysis. He wrote: ``Let those who don't know the new analysis try their luck!'' This talk presents his elegant construction and analysis. Paradoxically, the construction isn't possible as strictly Euclidean, but it doesn't really matter! I present the analysis in Leibniz's own idiom of differential calculus. A YouTube video was prepared by IPAM's expert videographer, Kayleigh Steele. This was a slight modification of an invited talk at Dartmouth for the Dartmouth Mathematics Colloquium, April 13. 2017 (January 4) The Leibniz Catenary Construction: Geometry vs Analysis in the 17th Century, an invited talk for the Special Session on the History of Mathematics at JMM 17 in Atlanta. This talk positions the publication of Leibniz's construction at the time when mathematicians were turning away from Descarte's dictate to present curves as geometric constructs toward analytic presentations. Leibniz played it both ways: he published a construction that could only have been derived using calculus but did not disclose the derivation publicly. 2016 (July 6) An invited talk for RIPS16 at IPAM: How did Leibniz Solve the Catenary Problem? A Mystery Story. Turned out not to be a mystery at all! Leibniz explained his solution in a private letter to Rudolph Von Bodenhausen of 1691. In ignorance of Bodenhausen, this talk demonstrates a discovery path for the hyperbolic functions at the heart of the catenary problem. The talk at IPAM (below at June 30, 2017) explains Leibniz's analysis in detail. (For a discussion of the Bodenhausen letter, see supplementary notes.) Invited Talks 2023 Leibniz's unpublished calculation of e: The First International Virtual Symposium Honoring the "Eulerian" Number e = 2.71828...., February 7, 18:28 Amsterdam time. 2017 (June 30) Video of the presentation at IPAM, "Leibniz used Calculus to solve the Catenary Problem, but he presented it as a Euclidean Construction," YouTube. 2014 A simple integration technique for deriving the Bernoulli Summation Formula, for the 29th LACC High School Math Contest, March 22. 2013 The Real Numbers are Not Real:, The Innumerable Infinities of Georg Cantor, March 16 at Los Angeles City College Math Contest. Also presented to UNM Math & Stats Club on 3/8/2013. See the related Problematic Four Bugs Problem—Or Reality vs the Continuum. 2012 For New Mexico Math Contest of Feb 4: Archimedes' Law of the Lever and How He Used it to Deduce the Volume of the Sphere: Poster, Talk (Repeated at Agilent Technologies Inc in Santa Clara at request of Geront Owen, 8/12/2013) 2012 What was on Top of Archimedes' Tomb?, Mar 2 at 27th Los Angeles City College Math Contest (abreviated version of the New Mexico Math Contest talk) 2010 The Innkeeper's Problem and Tale 2009 Irrationality of pi, with companion notes on Transcendentality of e 2008 How do you know what time it is? 2007 Hey, who really discovered that theorem! 2006 Eigenvalues and Eigenvectors, a chalk talk, first talk in a series for the Los Angeles City College High School Math Contest 2001–2015 Institute for Pure and Applied Mathematics (IPAM) at UCLA –Program director for Research in Industrial Projects for Students (RIPS) On August 21, 2015 I concluded my fifteenth and final summer as director of the RIPS program at IPAM, a National Science Foundation's institute located at UCLA. I worked with IPAM staff and the late Robert Borrelli of Harvey Mudd College to create the RIPS program in 2001, and then to continue developing the program over the fifteen summers of my directorship. I enjoyed working with the many students and academic mentors who participated in RIPS over all those years. My approach to managing the RIPS program, was presented to the panel Starting and maintaining a student industrial research progam in the mathematical sciences at the MAA's MathFest of Aug 4, 2007 in San Jose, CA. RIPS continued in the summer of 2016 under the directorship of the talented Spanish mathematician and teacher Prof. Susana Serna of the Autonomous University of Barcelona. Prof. Serna had been an academic mentor for RIPS teams for the previous eight summers. Some write-ups of mathematical topics A miscellany to include lecture notes, drafts and reminiscence Some activites in mathematics 2007–2013, Instructor at the LACES Calculus Camp (four days in April) described by its creator, Robert Vriesman, who before retirement in 2014 chaired the Math Department at the LACES magnet school in Los Angeles. See the 2012 Calculus Camp video by LACES student Blake Simon. 2011 Participated in review panel for the NSF and for the S. -T. Yau High School Mathematics Awards. (Last modified January 29, 2024)
{"url":"https://www.mikeraugh.org/","timestamp":"2024-11-07T13:34:32Z","content_type":"text/html","content_length":"11165","record_id":"<urn:uuid:fee7883d-3eea-4230-ae06-82efedcc2772>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00255.warc.gz"}
What is the slope of the tangent line of r=5theta+cos(-theta/3-(pi)/2) at theta=(-5pi)/6? | HIX Tutor What is the slope of the tangent line of #r=5theta+cos(-theta/3-(pi)/2)# at #theta=(-5pi)/6#? Answer 1 slope would be $\left[\left(- \frac{1}{2}\right) \left(5 - \frac{1}{3} \cos \left(\frac{5 \pi}{18}\right)\right) + \left\{\frac{- 25 \pi}{6} + \sin \left(\frac{5 \pi}{18}\right)\right\} \left(\frac{- \sqrt{3}}{2}\right)\right]$ / $\left[\left\{5 - \frac{1}{3} \cos \left(\frac{5 \pi}{18}\right)\right\} \left(\frac{- \sqrt{3}}{2}\right) - \left\{\frac{- 25 \pi}{6} + \sin \left(\frac{5 \pi}{18}\ right)\right\} \left(- \frac{1}{2}\right)\right]$ write r= #5 theta +cos (theta/3 +pi/2)= 5theta- sin(theta/3)# For #theta= (-5pi)/6#, #r= (-25pi)/6 + sin ((5pi)/18)# #(dr)/(d(theta)) = 5- 1/3 cos (theta/3)# For #theta= (-5pi)/6# #(dr)/(d(theta)) = 5- 1/3 cos ((5pi)/18)# #[cos (-theta)= cos theta]# Formula for slope in polar coordinates is #dy/dx= ((dr)/(d(theta)) sin theta +rcos theta)/((dr)/(d(theta)) cos theta -rsin theta)# Also #sin ((-5pi)/6)= -1/2# #cos ((-5pi)/6)= -sqrt3 /2# slope would be #[(-1/2)(5- 1/3 cos ((5pi)/18) )+{(-25pi)/6+ sin ((5pi)/18)} ((-sqrt3)/2)]# / #[{5- 1/3 cos ((5pi)/18)}((-sqrt3)/2) -{(-25pi)/6 + sin ((5pi)/18)} (-1/2) ]# Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 To find the slope of the tangent line at a given point on a polar curve, you can first find the derivative of the polar equation with respect to θ. Then, evaluate the derivative at the given value of θ to find the slope of the tangent line at that point. The derivative of the polar equation ( r = 5\theta + \cos\left(-\frac{\theta}{3} - \frac{\pi}{2}\right) ) with respect to ( \theta ) is: ( \frac{dr}{d\theta} = 5 - \frac{1}{3}\sin\left(-\frac{\theta}{3} - \frac{\pi}{2}\right) ) Evaluate ( \frac{dr}{d\theta} ) at ( \theta = -\frac{5\pi}{6} ) to find the slope of the tangent line: ( \frac{dr}{d\theta}\bigg|_{\theta = -\frac{5\pi}{6}} = 5 - \frac{1}{3}\sin\left(-\frac{-5\pi}{6 \times 3} - \frac{\pi}{2}\right) ) ( = 5 - \frac{1}{3}\sin\left(\frac{5\pi}{18} - \frac{\pi}{2}\right) ) ( = 5 - \frac{1}{3}\sin\left(\frac{5\pi}{18} - \frac{9\pi}{18}\right) ) ( = 5 - \frac{1}{3}\sin\left(\frac{-4\pi}{18}\right) ) ( = 5 - \frac{1}{3}\sin\left(-\frac{2\pi}{9}\right) ) Since ( \sin(-\theta) = -\sin(\theta) ), we have: ( = 5 + \frac{1}{3}\sin\left(\frac{2\pi}{9}\right) ) Now, calculate the value of ( \sin\left(\frac{2\pi}{9}\right) ), which is approximately ( \sin\left(40^\circ\right) ). Substitute the value into the equation: ( \frac{dr}{d\theta}\bigg|_{\theta = -\frac{5\pi}{6}} \approx 5 + \frac{1}{3}\sin\left(40^\circ\right) ) Calculate the value, and that will be the slope of the tangent line at ( \theta = -\frac{5\pi}{6} ). Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/what-is-the-slope-of-the-tangent-line-of-r-5theta-cos-theta-3-pi-2-at-theta-5pi--8f9afa21b1","timestamp":"2024-11-05T19:05:56Z","content_type":"text/html","content_length":"576501","record_id":"<urn:uuid:1a34d1b7-57b7-46a2-955a-8684dd360b67>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00233.warc.gz"}
Decidability and expressiveness for first-order logics of probability for Information and Computation Information and Computation Decidability and expressiveness for first-order logics of probability View publication We consider decidability and expressiveness issues for two first-order logics of probability. In one, the probability is on possible worlds, while in the other, it is on the domain. It turns out that in both cases it takes very little to make reasoning about probability highly undecidable. We show that when the probability is on the domain, if the language contains only unary predicates then the validity problem is decidable. However, if the language contains even one binary predicate, the validity problem is Π21 complete, as hard as elementary analysis with free predicate and function symbols. With equality in the language, even with no other symbol, the validity problem is at least as hard as that for elementary analysis, Π1∞ hard. Thus, the logic cannot be axiomatized in either case. When we put the probability on the set of possible worlds, the validity problem is Π21 complete with as little as one unary predicate in the language, even without equality. With equality, we get Π1∞ hardness with only a constant symbol. We then turn our attention to an analysis of what causes this overwhelming complexity. For example, we show that if we require rational probabilities then we drop from Π21 to Π11. In many contexts it suffices to restrict attention to domains of bounded size; fortunately, the logics are decidable in this case. Finally, we show that, although the two logics capture quite different intuitions about probability, there is a precise sense in which they are equi-expressive. © 1994 Academic Press, Inc.
{"url":"https://research.ibm.com/publications/decidability-and-expressiveness-for-first-order-logics-of-probability","timestamp":"2024-11-07T17:06:38Z","content_type":"text/html","content_length":"69769","record_id":"<urn:uuid:a2759d7d-3fcb-4da4-8bca-a162e64aede8>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00462.warc.gz"}
1 A hollow spherical conductor of internal radius R2 and external rad 1 A hollow spherical conductor of internal radius R2 and external radius Rs surrounds a conductive sphere of radius R1, which is charged with a charge Q,as shown in Figure Q1. Derive the expression for the magnitude of the electric field E(r), for between 0 and infinity. Note that r = 0 is the origin of the spherical reference system, as shown in Figure Q1. Hence or otherwise, derive the expression for the scalar potential V (r),for r between 0 and infinity Give a qualitative graphical representation for the functions E(r)(magnitude of the electric field) and V (r) (scalar potential), considering R = 30em, R2 = 50 cm, R3Use the appropriate units on the graphs. Write the values of the electric field in the dielectric side of the interface for each of the metallic= 80 cm, and Q = 800 x 10-12 C.surfaces. Write the values for the potential at r = 0, r = R1, r = R2,and r = Briefly describe what happens to the scalar potential and electric field in the region with R1 Fig: 1 Fig: 2 Fig: 3 Fig: 4 Fig: 5 Fig: 6
{"url":"https://tutorbin.com/questions-and-answers/1-a-hollow-spherical-conductor-of-internal-radius-r2-and-external-radi","timestamp":"2024-11-09T00:29:40Z","content_type":"text/html","content_length":"75455","record_id":"<urn:uuid:2694ba2e-d1a9-4a8d-a4d7-a153dd254ed0>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00355.warc.gz"}
Chapter 5 Terms Flashcards Numbers written in decimal notation Decimal numbers (Decimal) Used to denote a part of a whole. Name the first 5 place values right of the decimal. Ten ThousandTHS Hundred ThousandTHS Each decimal place value is ____of the place value to its left. What is the value of the 5 in 17.758 How to write or read decimals 17. 758 1) The whole number the standard way “Seventeen” 2) “and” for the decimal point. “And” 3) The Decimal number the standard way as if it were a whole number followed by the place value of the last digit. “Seventeen and Seven Hundred Fifty Eight Thousandths.” For any decimal writing ____’s after the last digit to the right of the decimal does not change the numbers value. To compare negative decimals first compare their ______ _____ Add the inequality symbol then ______it. To round decimals right of the decimal point. 1) Locate the digit to the right of the given place value 2)Use greater than 5 rule. drop all digits to the right of the given place value if less If more add one the place value and drop all to the When changing decimals from words to standard form be sure to add ___ to make sure the last digit has the correct ____ _____. To change decimals from decimal notation to fractional notation, you must… 1) Write the numbersto the right of the decimal in the numerator. 2) determine the place value of last non-zero digit and make the place value the denominator (minus the -ths) . 809-\> — .08920-\> — 1000 10,000 When comparing negative decimals start from the ____. Move to each decimal place to find the digit with more Absolute value, to determine the smaller decimals • 2.2049 < .-1.2049 • 1..2049 <-0..2049 • 0.2049 < -.2039 • .2039 >-2139 When comparing postive decimals start from the ____. Move to each decimal place to find the digit with less value, to determine the larger decimals. 2. 2049> .1.2049 3. .2049>0..2049 4. .2049> .2039 In decimal notation writing ___ after the last digit to the right does not _____ ____ ____ of the decimal When adding and subtracting decimals the decimals must be set up so that the sum/diffrence and addends/minuend,subtrahend all _____ _____ ____ 1) Multiply the decimals as if they were whole numbers. 2) Place the sum of decimal places from both factors in the product.
{"url":"https://www.brainscape.com/flashcards/chapter-5-terms-1311466/packs/1641797","timestamp":"2024-11-04T02:38:48Z","content_type":"text/html","content_length":"129134","record_id":"<urn:uuid:12bb51c0-8619-4604-b316-b1ab7aef554c>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00632.warc.gz"}
Proven Track Record | Achievements - Einstein Education Hub Q: What is the breakdown of the results? Out of a cohort of 9 students, 2 clinched AL1, 2 achieved AL2 and another 3 scored AL3. Several of them improved at least 1 grade from the last WA. It is an outstanding effort from all of them. Q: How did the students achieve their desired grades? Firstly, the students understood the concepts and the methods taught in the tuition class. When they were attempting the questions during the tuition lessons, they could apply the methods and showed all the steps clearly. Secondly, they did additional homework questions, so they gained more exposure to questions that commonly appeared in the WAs and exams. Lastly, they were tested regularly on the recent and past concepts in the tuition lessons. That served as a consistent reminder to grasp strongly the important concepts of the topics. Q: What did the tutors do to help the students in this subject? The tutors helped to simplify the core concepts and explain them in a simple way that was clear for the students to understand. Verbal questions were frequently posed to the students to assess how much they understood and every now and then, the tutors would take initiatives to re-explain and set more examples to reinforce the explanations thoroughly to ensure that all the students could understand well. After explaining the method to the students, the tutors would emphasise to them the importance of showing and labelling all the steps clearly while they were doing the questions in the class as well as homework. More importantly, the tutors would always test the students regularly on the problem-solving techniques and methods. This was to raise consistency and competency so that the students could remember the methods well and knew how to apply them onto the questions correctly. Q: What changes were observed in the students over the course of study? The students were more serious during the tuition lessons over time. With the guidance and the consistent trainings by the tutors, they improved on their mathematical reasoning skills. They were able to think more logically and solve the questions in a systematic approach, hence increasing the accuracy of their answers and their understanding of the questions. Q: How is Einstein Education Hub different from other centres? The tutors are responsible in the sense that what they teach, they make sure that all the students understand. This is done so by constantly checking their progress and going the extra mile by providing simplified explanations with relevant examples. Thus, no student is left behind.
{"url":"https://einstein.com.sg/track-record/","timestamp":"2024-11-02T14:12:02Z","content_type":"text/html","content_length":"226341","record_id":"<urn:uuid:ea142648-5527-4e35-9cab-b5ec0071e30a>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00653.warc.gz"}
SVMs - An overview of Support Vector Machines - SVM Tutorial SVMs - An overview of Support Vector Machines I felt I got deviated a lot on Math part and its derivations and assumptions and finally got confused what exactly SVM is ? And when to use it and how it helps ? Here is my attempt to clarify things. What exactly is SVM ? SVM is a supervised learning model It means you need a dataset which has been labeled. Exemple: I have a business and I receive a lot of emails from customers every day. Some of these emails are complaints and should be answered very quickly. I would like a way to identify them quickly so that I answer these email in priority. Approach 1: I can create a label in gmail using keywords, for instance "urgent", "complaint", "help" The drawback of this method is that I need to think of all potential keywords that some angry users might use, and I will probably miss some of them. Over time, my keyword list will probably become very messy and it will be hard to maintain. Approach 2: I can use a supervised machine learning algorithm. Step 1: I need a lot of emails, the more the better. Step 2: I will read the title of each email and classify it by saying "it is a complaint" or "it is not a complaint". It put a label on each email. Step 3: I will train a model on this dataset Step 4: I will assess the quality of the prediction (using cross validation) Step 5: I will use this model to predict if an email is a complaint or not. In this case, if I have trained the model with a lot of emails then it will perform well. SVM is just one among many models you can use to learn from this data and make predictions. Note that the crucial part is Step 2. If you give SVM unlabeled emails, then it can do nothing. SVM learns a linear model Now we saw in our previous example that at the Step 3 a supervised learning algorithm such as SVM is trained with the labeled data. But what is it trained for? It is trained to learn something. What does it learn? In the case of SVM, it learns a linear model. What is a linear model? In simple words: it is a line (in complicated words it is a hyperplane). If your data is very simple and only has two dimensions, then the SVM will learn a line which will be able to separate the data. The SVM is able to find a line which separates the data If it is just a line, why do we talk about a linear model? Because you cannot learn a line. So instead of that: • 1) We suppose that the data we want to classify can be separated by a line • 2) We know that a line can be represented by the equation $y =\mathbf{w}\mathbf{x}+b$ (this is our model) • 3) We know that there is an infinity of possible lines obtained by changing the value of $\mathbf{w}$ and $b$ • 4) We use an algorithm to determine which are the values of $\mathbf{w}$ and $b$ giving the "best" line separating the data. SVM is one of these algorithms. Algorithm or model? At the start of the article I said SVM is a supervised learning model, and now I say it is an algorithm. What's wrong? The term algorithm is often loosely used. For instance, you will sometime read that SVM is a supervised learning algorithm. This is not true if you consider that an algorithm is a set of actions to perform to obtain a specific result. Sequential minimal optimization is the most used algorithm to train SVM, but you can train an SVM with another algorithm like Coordinate descent. However, most people are not interested in details like this, so we simplify and say that we use the SVM "algorithm" (without saying in details which one we use). SVM or SVMs? Sometime, you will see people talk about SVM, and sometime about SVMs. As often Wikipedia is quite good at stating things clearly: In machine learning, support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. So, we now discover that there are several models, which belongs to the SVM family. SVMs - Support Vector Machines Wikipedia tells us that SVMs can be used to do two things: classification or regression. • SVM is used for classification • SVR (Support Vector Regression) is used for regression So it makes sense to say that there are several Support Vector Machines. However, this is not the end of the story ! In 1957, a simple linear model called the Perceptron was invented by Frank Rosenblatt to do classification (which is in fact one of the building block of simple neural networks also called Multilayer A few years later, Vapnik and Chervonenkis, proposed another model called the "Maximal Margin Classifier", the SVM was born. Then, in 1992, Vapnik et al. had the idea to apply what is called the Kernel Trick, which allow to use the SVM to classify linearly nonseparable data. Eventually, in 1995, Cortes and Vapnik introduced the Soft Margin Classifier which allows us to accept some misclassifications when using a SVM. So just when we talk about classification there is already four different Support Vector Machines: 1. The original one : the Maximal Margin Classifier, 2. The kernelized version using the Kernel Trick, 3. The soft-margin version, 4. The soft-margin kernelized version (which combine 1, 2 and 3) And this is of course the last one which is used most of the time. That is why SVMs can be tricky to understand at first, because they are made of several pieces which came with time. That is why when you use a programming language you are often asked to specify which kernel you want to use (because of the kernel trick), and which value of the hyperparameter C you want to use (because it controls the effect of the soft-margin). In 1996, Vapnik et al. proposed a version of SVM to perform regression instead of classification. It is called Support Vector Regression (SVR). Like the classification SVM, this model includes the C hyperparameter and the kernel trick. I wrote a simple article, explaining how to use SVR in R. If you wish to learn more about SVR, you can read this good tutorial by Smola and Schölkopft. Summary of the history • Maximal Margin Classifier (1963 or 1979) • Kernel Trick (1992) • Soft Margin Classifier (1995) • Support Vector Regression (1996) If you want to know more, you can learn this very detailed overview of the history. Other type of Support Vector Machines Because SVMs have been very successful at classification, people started to think about using the same logic for other type of problems, or to create derivation. As a result there exists now several different and interesting methods in the SVM family: We have learned that it is normal to have some difficulty to understand what SVM is exactly. This is because there are several Support Vector Machines used for different purposes. As often, history allows us to have a better vision of how the SVM we know today has been built. I hope this article give you a broader view of the SVM panorama, and will allow you to understand these machines better. If you wish to learn more about how SVM work for classification, you can start reading the math series: SVM - Understanding the math Part 1: What is the goal of the Support Vector Machine (SVM)? Part 2: How to compute the margin? Part 3: How to find the optimal hyperplane? Part 4: Unconstrained minimization Part 5: Convex functions Part 6: Duality and Lagrange multipliers I am passionate about machine learning and Support Vector Machine. I like to explain things simply to share my knowledge with people from around the world.
{"url":"https://www.svm-tutorial.com/2017/02/svms-overview-support-vector-machines/","timestamp":"2024-11-07T09:59:07Z","content_type":"text/html","content_length":"64350","record_id":"<urn:uuid:34c5e2da-a5b5-4d45-9fc2-a6002a070728>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00337.warc.gz"}
Show Posts « on: October 30, 2020, 01:27:01 pm » Dear Dr. Xiang-Jun Lu, Thank you for developing (and maintaining) this collection of beautiful and practical DNA geometry software! For my project I want to generate circular DNA (plasmids) of various sizes. Based on the book chapter of Calladine et al. (see source below) I found a way to determine the base roll angle for each base pair. For example, we can determine the curvature of each helical turn to make a circle consisting of 100 bp as follows: - assume 10 bp per turn - assume helix angle of 36 degrees First determine number of turns: 10 Determine the angle each turn has to make: 36 degrees For a smooth transition per base step fit this angle to cosine: First we need to determine the amplitude to account for +/- direction of helix amplitude = 36 / (10*0.5) Then each roll angle can be determined as follows: for bp_i in range[0:100]: roll_angle = amplitude*(cos(helix_angle)*bp_i)) which gives you periodic values of: [7.2, 5.825, 2.225, -2.225, -5.825, -7.2, -5.825, -2.225, 2.225, 5.825, ....] Of course this (could) work for an arbitrary length of sequences. So may plan was to use these roll angle parameters to make circular DNA. As a starting point I just used for each bp a twist and rise value of 36 and 3.34 respectively (since I want B-DNA) together with the periodic roll angle values and all other parameters black (zero). See attachement for parameter file. When I visualize the structure with the rebuilder at the webserver ( ) the result is very close to perfect, see image in attachement. However, the ring is not perfectly planar (although I want to have the geometry completely planar such that I can easily define the connections of the beginning and end in the pdb later on). I very much hope that with your years of DNA geometry experience you could provide some insight on how to make the ring perfectly planar. I am probably overlooking something obvious... So apologies in advance if the question is unclear! Thank you for reading my message and stay safe! Eventually I want to make this to work for arbitrary length of sequences, however, conceptually I couldn't think of a smart way to deal with the decimal numbers arising in the total number of "complete" turns. I guess one should start to introduce varying helix twists such that the outer ends properly meet? Source: Calladine, C.R., Drew, H.R., Luisi, B.F. & Travers, A.A. Understanding DNA: theMolecule and How It WorksChapter 4 (Elsevier Academic Press, San Diego, CA,2004).
{"url":"http://forum.x3dna.org/profile/?area=showposts;sa=topics;u=9589","timestamp":"2024-11-05T10:58:22Z","content_type":"application/xhtml+xml","content_length":"16536","record_id":"<urn:uuid:fb9e0b35-ac8a-4adc-823a-bc1f6b2897b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00856.warc.gz"}
Properties of Circle Following are the properties of circle - Property 1: Equal chords subtend equal angles at the center of a circle Property 2: If Angles subtended by the chords at the center of circle are equal, then chords are also equal Property 3: The perpendicular from the center of a circle to a chord bisects the chord. Property 4: The line drawn from the center of circle to bisect a chord; is perpendicular to the chord Property 5: Equal chords are equidistant from the center of circle Property 6: Chords equidistant from the center of circle are equal in length
{"url":"https://www.algebraden.com/properties-of-circle.htm","timestamp":"2024-11-13T02:08:22Z","content_type":"text/html","content_length":"11640","record_id":"<urn:uuid:688c4778-b80f-4d24-a4fd-69a00a85b31d>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00187.warc.gz"}
Dresden 2003 – wissenschaftliches Programm DY 20.3: Vortrag Dienstag, 25. März 2003, 10:15–10:30, G\"OR/226 Asymptotic behavior of hard-sphere mixtures — •Christian Grodon, Roland Roth, and Siegfried Dietrich — Max-Planck-Institut für Metallforschung, Heisenbergstr.3, 70569 Stuttgart We study the asymptotic behavior of the pair correlation functions in binary hard-sphere mixtures by calculating the leading order poles of its Fourier transforms. The pole structure of hard-sphere mixtures shows a much larger variety of patterns compared to the one component system. For the determination of the poles the direct pair correlation functions are needed which we calculate within the framework of DFT using two versions of the fundamental measure theory. For size ratios asymmetric enough we find crossover lines in the fluid part of the phase diagram at which the wavelength of the leading order pole changes discontinuously. Accordingly the oscillations of all pair correlation functions in the asymptotic regime change their wavelength. We confirm the predictions by numerically calculating density profiles around a big hard sphere and close to a planar hard wall.
{"url":"https://www.dpg-verhandlungen.de/year/2003/conference/dresden/part/dy/session/20/contribution/3","timestamp":"2024-11-11T16:27:54Z","content_type":"text/html","content_length":"7336","record_id":"<urn:uuid:e9c2b0f6-7cbc-4b61-a246-6d8b625eb5aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00565.warc.gz"}
Quantum Computing in TradingQuantum Computing in Trading: Path to Improved Performance? Article Summary And Main Article TL;DR Summary: Trading makes demands on technology; what if the technology fundamentally improves? Trading takes place at the edge of technology. Fundamentally, this is due to the adversarial, competitive nature of trading: it needs technology to find and match any advantage. On aggregate, then, trading is many participants trying to leverage technology to gain an advantage. This manifests itself through the use of technologies such as computer programming and hardware. These technologies are mostly not trader-based but are utilised by the financial infrastructure to which the brokers connect. The power is in this infrastructure and as it hosts large numbers of users, its demands grow. Economies of scale in data centers are an approach that can tackle this scaling. However, these technologies have limitations. At the core of these are computing devices. Can fundamental improvements in computing technology improve trading? Quantum computing vs classical computing Classical computing is an established technology. The causal structure is well understood, which is to say using the flow of electrons to rapidly create and process logical statements that reflect the instructions of a computer program. It is, however, a technology that has improved dramatically over the years. Quantum computing is not an established technology. There is debate about whether the technology exists. One foundational issue is that the science it relies on, 'non-locality', is not established; in fact, there are competing theories that try and explain this intriguing, counter-intuitive quantum phenomenon. Non-locality and quantum computation Non-locality is not, strictly speaking, a correct term, but it is useful to describe the phenomenon in a way that makes sense to humans living in a seemingly local universe. Non-locality is a feature of the quantum world where correlations occur such that it seems like event A is affecting event B instantaneously, even though they are separated by a distance. This is different from the electron flow model in classical computing, as it allows for a structure of indefinite complexity that coalesces into a desired solution state. Instead of having to compute each logical statement, structures can be created that encompass possible solutions without searching for them step-by-step. Non-locality allows for a kind of depth of computation that would not be possible with classical computers. Non-locality seems counter-intuitive, but there are several theories that attempt to explain it. For example, Quantum Field Theory attempts to preserve locality by seeing the universe as consisting of fields, where particles are excitations of this field. The idea is that these fields allow for a correlation that means a particle can be 'affected' by another particle regardless of the distance, due to a property of the field. The Many-Worlds Hypothesis seeks to preserve locality by allowing for each quantum event to branch into another universe while maintaining the overall connection via a wave function that spans the There are other theories as well, but what this means is that there is a big unknown in terms of part of the mechanism behind the way quantum computers work, which may or may not matter. This may not be a problem, as the major apparent problem is that it is it difficult to let the quantum computer compute, as it is highly sensitive to noise from the surrounding environment. However, it is conceivable that the actual mechanism that underlies non-locality may affect the potential for quantum computers to compute. Quantum computers are reliant on entanglement, which uses non-locality to function, as well as superposition. Superposition vs entanglement Superposition and entanglement are different concepts, but both are used by quantum computers. In terms of computing, superposition is the capacity for a single bit to be in two states at the same time. This is radically different from classical computation where a bit can be in only one state at a given time: it can change to another state, and the speed at which this can happen is part of its capacity to compute. Classical computers are founded on logic. Gates can be constructed, composed of bits in either on or off. Combinations of these gates provide a logical framework for software to program instructions (algorithms), to be executed by computer hardware. Quantum computers also have algorithms and logic, but the difference is that computations that can be executed only one step at a time on classical computers, can in theory be executed at the same time on quantum computers. A quantum computer can be seen as a device that can compute multiple outputs at the same time. However the qubits must encode multiple states, and the problem is that noise can change (decohere) the underlying computation, resulting in a problem that the quantum gates decohere faster than the computation. This one issue presents a major blockage in scaling up quantum computation, and there are other problems as well. Entanglement, in terms of computation, is a way for qubits to be connected. This allows many qubits to be part of a computation, entangled together, without having to change each qubit, as is the case with classical computation on bits. This results in a kind of multiplier effect, from connected qubits. Entanglement is about the depth or complexity of the computations, while superposition is about parallelism. Entanglement is also subject to noise, as the same process that allows one qubit to entangle with another, also allows it to entangle with the external universe. Bell's Theorem is a test of whether particles are entangled, where its violation demonstrated entanglement. Thus the violation of Bell's Theorem can be used as a gauge as to whether a quantum computer is computing classically or using entanglement. Case Study Example: D-Wave There are several companies that are involved in quantum computing, as well as a vast array of research institutions. D-Wave is one of them. D-Wave makes products that use quantum effects to compute. Simulated annealing is a technique used in various ways, but in computing it is utilised by neural nets. Problems can be considered as a solution space, where the lowest point represents the best solution, with the task being to get from the current state to the optimal low. When solving problems, the process used may fall into a solution that is not the best. Simulated annealing simulates the process of heating and cooling, to allow the system to exit these local minima and find more optimal Quantum annealing utilises superposition to allow the search problem to be considered over many possible solutions, rather than trying one at a time. Quantum annealing uses another feature of the quantum world, tunnelling, to allow peaks that would otherwise obscure the solution to be passed through, without having to go over them. Quantum entanglement points towards correlations that make the path to the solution clearer (i.e. adding depth to the computation). The effect of quantum computing on trading Quantum computing is more like a set of specialized algorithms, to tackle some types of problems, but with significant hardware limitations. So quantum computing may have an effect on trading, in the future, by improving part of the computations that are involved in the trading process. But quantum computing has a long way to go. What then are quantum computers (or at least a more robust future computer) good at? They are good at optimization problems. The example used above of quantum annealing is an optimisation process, as it seeks to find the 'best' solution in a landscape of solutions and obstacles. Optimisation is important to trading processes, for example, portfolio optimisation or strategy optimisation, but it is also important for the underlying infrastructure supporting trading, for example, routing algorithms in High-Frequency Trading. Quantum computers are potentially good at simulating, thus they could be useful for simulating markets, which is a most intriguing possibility. The reason they are good at simulating is because of the depth of representation available from entanglement, as well as superposition. But the practical problems apply to isolating the system to maintain entanglement and superposition in such a way as it continues to represent the problem and the path towards its solution, not the outside world. There is also the additional issues of being able to have enough qubits to make fuller use of the potential for problem representation. Quantum computers are currently limited in the number of qubits they can use, like the way classical computer systems were limited by smaller bit sizes. As bit sizes have increased, the range and complexity of classical computing applications have also increased. As ways are found to increase the qubit count, then breakthroughs might be expected for quantum computing applications. For now, traders will have to make do with the technology available at trading providers. This technology is already cutting edge, as it involves the use of technology to leverage speed improvements, but it is not yet technology utilising quantum computing. But the first signs of it may be in the infrastructure supporting order routing, at some stage in the future, or even in intelligent chatbots. Dukascopy Bank Case Study: Trading Automation • Minimum deposit: $1000 • Online trading platforms: MT4, MT5, JForex CFD providers use online trading platforms to provide the technology used by traders such as making orders, checking charts and using robots. Behind the provider is an ecosystem of liquidity providers and technologies that aim to route orders efficiently and quickly. Dukascopy Bank is a long-standing CFD provider that offers its own JForex platform, along with both MT4 and MT5. Dukascopy Bank operates the SWFX ECN, which allows the trader to place orders directly into the market. This is a provider for those who want to try any strategy, including those dependent on rapid order XM Case Study: Robots & Small Trade Sizes • Minimum deposit: $5 • Online trading platforms: MT4, MT5 The trader may wish to try out strategies (robotic or self-directed) on a live account, with small trade sizes. XM provides its Micro account, for both MT4 and MT5, which offers very low trade sizes (it is what is sometimes termed a 'cent account'). XM allows higher volume trading as well, with 1000+ markets to trade from Stocks CFDs to Forex CFDs. Additionally, XM has a low minimum deposit of
{"url":"https://www.hardanalytics.com/p/quantum-computing-in-trading.html","timestamp":"2024-11-07T03:11:54Z","content_type":"application/xhtml+xml","content_length":"237383","record_id":"<urn:uuid:4f538ea9-f3fb-4c8c-9432-aec3352e77b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00818.warc.gz"}
Electromotive Force (EMF) and Potential Difference The maximum voltage a cell can produce is called the electromotive force (EMF). EMF is measured in volts. When the cell is supplying a current, the voltage is lower because the cell has internal resistance. Some the the EMF is used to drive the current against the internal resistance of the cell. You can easily measure the EMF of a cell by measuring the voltage across the terminals when the cell is not in a circuit, so not supplying a current. If the cell is in a circuit, and the current is , then the voltage across the terminals, measured with a voltmeter, is , where is the internal resistance of the cell. Potential difference is often confused with EMF, Both are measured in Volts. Potential difference is the voltage difference between any two points. For a cell not in a circuit, the EMF is equal to the potential difference between the terminals. We can also think of voltage with reference to the equations \[W=QV \rightarrow V=\frac{W}{Q}\] \[W, \: Q, \: V, \: I, \: R\] are work done, charge, potential difference, current and resistance.
{"url":"https://astarmathsandphysics.com/igcse-physics-notes/4848-electromotive-force-emf-and-potential-difference.html","timestamp":"2024-11-10T18:23:58Z","content_type":"text/html","content_length":"27894","record_id":"<urn:uuid:c24f2991-0597-4b84-8c64-ef50e7a05954>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00732.warc.gz"}
What are the different properties of quadrilaterals? There are two properties of quadrilaterals: A quadrilateral should be closed shape with 4 sides. All the internal angles of a quadrilateral sum up to 360°…Properties of parallelogram • Opposite angles are equal. • Opposite sides are equal and parallel. • Diagonals bisect each other. • Sum of any two adjacent angles is 180° What are the properties of a diagonal? The diagonals have the following properties: • The two diagonals are congruent (same length). • Each diagonal bisects the other. • Each diagonal divides the rectangle into two congruent right triangles. What are the 5 properties of a rectangle? The fundamental properties of rectangles are: • A rectangle is a quadrilateral. • The opposite sides are parallel and equal to each other. • Each interior angle is equal to 90 degrees. • The sum of all the interior angles is equal to 360 degrees. • The diagonals bisect each other. • Both the diagonals have the same length. What are the 7 Quadrilaterals? Answer: A quadrilateral refers to a four-sided polygon that has four angles. The seven types of quadrilaterals are parallelogram, rhombus, kite, rectangle, trapezoid, square, and isosceles trapezoid. What is the difference between parallelogram and quadrilateral? As the name suggests, a quadrilateral is a polygon that has 4 sides. While on the other hand, a parallelogram is a special quadrilateral in which both pairs of opposite sides are parallel and equal. What are the properties of diagonal quadrilateral? Each diagonal divides the quadrilateral into two congruent triangles. The diagonals of the quadrilateral bisect each other. The diagonals divide the quadrilateral into two pairs of congruent What are diagonals of quadrilaterals? Angles in a Quadrilateral A diagonal of a quadrilateral is a segment that joins two vertices of the quadrilateral but is not a side. You can use a diagonal of a quadrilateral to show that the sum of the angle measures in a quadrilateral is 360°. What are the properties of a rhombus quadrilateral? A rhombus is a quadrilateral which has the following four properties: 1 Opposite angles are equal 2 All sides are equal and, opposite sides are parallel to each other 3 Diagonals bisect each other perpendicularly 4 Sum of any two adjacent angles is 180° What are the properties of a convex quadrilateral? A quadrilateral has 2 diagonals based on which it can be classified into concave or convex quadrilateral. In case of convex quadrilaterals, diagonals always lie inside the boundary of the polygon. Based on the lengths of sides and angles, common convex quadrilaterals are: Let us discuss in brief the properties of quadrilaterals. What are the properties of a quadrilateral parallelogram? A quadrilateral is a parallelogram if 2 pairs of sides parallel to each other. Squares and Rectangles are special types of parallelograms. Below are some special properties.– All internal angles are of “right angle” (90 degrees).– Each figure contains 4 right angles.– Which is the correct definition of a quadrilateral? Quadrilateral is a 4 sided polygon bounded by 4 finite line segments. A quadrilateral has 2 diagonals based on which it can be classified into concave or convex quadrilateral. In case of convex quadrilaterals, diagonals always lie inside the boundary of the polygon. Let us discuss in brief the properties of quadrilaterals.
{"url":"https://greatgreenwedding.com/what-are-the-different-properties-of-quadrilaterals/","timestamp":"2024-11-09T16:24:09Z","content_type":"text/html","content_length":"44669","record_id":"<urn:uuid:f336467e-1eb9-449f-8bf3-868db53cc66d>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00297.warc.gz"}
Ch. 1 Introduction - Introductory Statistics | OpenStax By the end of this chapter, the student should be able to: • Recognize and differentiate between key terms. • Apply various types of sampling methods to data collection. • Create and interpret frequency tables. You are probably asking yourself the question, "When and where will I use statistics?" If you read any newspaper, watch television, or use the Internet, you will see statistical information. There are statistics about crime, sports, education, politics, and real estate. Typically, when you read a newspaper article or watch a television news program, you are given sample information. With this information, you may make a decision about the correctness of a statement, claim, or "fact." Statistical methods can help you make the "best educated guess." Since you will undoubtedly be given statistical information at some point in your life, you need to know some techniques for analyzing the information thoughtfully. Think about buying a house or managing a budget. Think about your chosen profession. The fields of economics, business, psychology, education, biology, law, computer science, police science, and early childhood development require at least one course in statistics. Included in this chapter are the basic ideas and words of probability and statistics. You will soon understand that statistics and probability work together. You will also learn how data are gathered and what "good" data can be distinguished from "bad."
{"url":"https://openstax.org/books/introductory-statistics/pages/1-introduction","timestamp":"2024-11-02T14:08:20Z","content_type":"text/html","content_length":"359819","record_id":"<urn:uuid:491d8b81-279a-46e5-ad5b-dd7da0d4c697>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00892.warc.gz"}
CAMB: modifying equations.f90 I am working with CAMB-1.0.12, and I am trying to modify some physics. My first aim is to artificially suppress the growth of only the small scale (large wavenumber k) density perturbations of the cold dark matter species. I tried to modify the “equation of motion” for the CDM in the “equations.f90” file, which normally reads: I then recompiled using “python setup.py make”, but when I run the simulation, I see no difference in the result, even when replacing the above equation with Clxdot=-k*z*0. My questions are: 1) I should see some effect of a modified equation of motion, so what am I doing wrong? Am I modifying the wrong equation, or in the wrong file? Is there a problem with recompiling? 2) In “equations.f90” I found the baryon equation of motion: clxbdot=-k*(z+vb). But this equation seems different than what I expected from e.g. Eq. 5-30 in arXiv: . Also “dz” seems different than expected from Eq. 5-31. I have looked around but didn’t find exactly the underlying theory behind “equations.f90”. Am I missing something here? 3) Is there a simple intuitive argument why the evolution of perturbations of one species (e.g. Delta_baryon) is independent of the choice of Omega_bh^2 or Omega_cdm^2? From Eq. 6-22 in in arXiv: I would expect that the evolution of each species depends on the SUM of all densities and pressures of all other species. All feedback is much appreciated, Re: CAMB: modifying equations.f90 Sounds OK, make sure you re-start Python. The equations neglect some very small pressure terms (matter pressure is only important when coupled via derivatives so the last term in 5-30 is very small). The evolution equations are not independent of those quantities (except in some limits). Re: CAMB: modifying equations.f90 I have now managed to change the equations of motion in the file "equations.f90", to compile and run - so far so good... But I still have trouble locating the following equations found in ScalEqs.txt in the file "equations.f90": Can somebody point out exactly in which line of equations.f90 these calculations (namely the effect of expansion on the densities of each species) are carried out? Thanks a lot, Re: CAMB: modifying equations.f90 How did you do to compile before running? Is it enough to do "make" inside the fortran folder? Re: CAMB: modifying equations.f90 "python setup.py make" in the main folder will update an existing "pip install -e" installation.
{"url":"https://cosmocoffee.info/viewtopic.php?t=3268&sid=335c2960e91537cd63f4eca744aa2740","timestamp":"2024-11-02T01:39:17Z","content_type":"text/html","content_length":"38747","record_id":"<urn:uuid:0ca70f2b-e26f-4e4b-a006-38cc464beee8>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00320.warc.gz"}
chinese arithmetic definition See more. 2. Look it up now! Meaning definition, what is intended to be, or actually is, expressed or indicated; signification; import: the three meanings of a word. computed in 0.047s. How to use arithmetic in a sentence. dengchajishu definition at Chinese.Yabla.com, a free online dictionary with English, Mandarin Chinese, Pinyin, Strokes & Audio. asked Nov 17 '14 at 9:05. Modular arithmetic can be handled mathematically by introducing a congruence relation on the integers that is compatible with the operations of the ring of integers: addition, subtraction, and multiplication.For a positive integer n, two integers a and b are said to be congruent modulo n, written:. If you're seeing this message, it means we're having trouble loading external resources on our website. sens a gent 's content . The arithmetic mean of 3, 6, 2, 3 and 6 is (3 + 6 + 2 + 3 + 6) / 5 = 4. Let be the structure ... To compute directly from such a definition we would compute the sequence . arithmetic - Free definition results from over 1700 online dictionaries Arithmetic functions. Since early times, Chinese understood basic arithmetic (which dominated far eastern history), algebra, equations, and negative numbers. We now have a good definition for division: \(x\) divided by \(y\) is \(x\) multiplied by \(y^{-1}\) if the inverse of \(y\) exists, otherwise the answer is undefined. All Free. arithmetic translation in English - Chinese Reverso dictionary, see also 'architect',artistic',aesthetic',authentic', examples, definition, conjugation Arithmetic definition is - a branch of mathematics that deals usually with the nonnegative real numbers including sometimes the transfinite cardinals and with the application of the operations of addition, subtraction, multiplication, and division to them. However this does not appear to be expressible in first-order form. Discover the definition of 'Like Chinese arithmetic' in our extensive dictionary of English idioms and idiomatic expressions. Definition of Arithmetic instruction. These functions are by definition functions whose natural domain of definition is either &Zopf; (or &Zopf; > 0).The way these functions are used is completely different from transcendental functions in that there are no automatic type conversions: in general only integers are accepted as arguments. arithmetic scan in Chinese : :算术扫描…. Arithmetic I First-order arithmetic. Information and translations of arithmetic function in the most comprehensive dictionary definitions resource on the web. computed in 0.031s. In general, given a modulus , we can do addition, subtraction and multiplication on the set {,, …, −} in a way that "wrap around" . Shift Operation Binary Value Decimal Value; No shift (original number) 11001.011 –6.625. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. arithmetic definition. In C99, the term arithmetic operation appears 16 times, but I don't see a definition for it. Advertising print Français Español Português. translations; Advertising 7646 online visitors. MJD. Free online translation of arithmetic mean. Mathematics was … Meaning of arithmetic function. suanfa definition at Chinese.Yabla.com, a free online dictionary with English, Mandarin Chinese, Pinyin, Strokes & Audio. A manual computing device consisting of a frame holding parallel rods strung with movable counters. If you're behind a web filter, please make sure that the domains … This term has been around for years. Arithmetic with Large Integers Review Definition. sens a gent. An important consequence of the theorem is that when studying modular arithmetic in general, we can first study modular arithmetic a prime power and then appeal to the Chinese Remainder Theorem to generalize any results. n is an equivalence relation on the integers. Definition of ARITHMETIC OPERATION in the Definitions.net dictionary. arithmetic (usually uncountable, plural arithmetics) The mathematics of numbers ( integers , rational numbers , real numbers , or complex numbers ) under the operations of addition , subtraction , multiplication , and division . Chinese Remainder Theorem 5. The world's best online dictionary. Visit Stack Exchange. Fraction reduction in membrane systems Figure 1 shows an example data flow graph (DFG) of the PLF arithmetic expression (1) described in Section 1 partitioned into four subtasks labeled as A, B, C, and D. 97 1 1 gold badge 1 1 silver badge 3 3 bronze badges $\endgroup$ 9 $\begingroup$ Hint: $~a\cdot0=0,~$ for all a. click for more detailed Chinese translation, definition, pronunciation and example sentences. if their difference a − b is an integer multiple of n. Information and translations of ARITHMETIC OPERATION in the most comprehensive dictionary definitions resource on the web. The Shift Arithmetic block can shift the bits or the binary point of an input signal, or both. Free online translation of ARITHMETIC. 1100101.1 –26.5. (modulo equivalence) a b modn if and only if n a b We will say that a and b are equivalent modulo n. We will also write modulo equivalence as a n b Theorem. arithmetic - WordReference English dictionary, questions, discussion and forums. ci 1. For the moment suppose there is a definable function , defined by , such that for each finite sequence there is a b such that . See more. Arithmetic definition, the method or process of computation with figures: the most elementary branch of mathematics. click for more detailed Chinese translation, definition, pronunciation and example sentences. Meaning of ARITHMETIC OPERATION. What does the idiom 'Like Chinese arithmetic' mean? Look it up now! Download it's free A term describing anything that is very hard to do. Congruence relation. modulo arithmetic in Chinese : 模运算…. Definition of arithmetic function in the Definitions.net dictionary. Usage notes For example, shifting the binary point on an input of data type sfix(8), by two places to the right and left, gives these decimal values. arithmetic operators translation in English - German Reverso dictionary, see also 'arithmetic mean',mental arithmetic',arithmetical',arithmetician', examples, definition, conjugation Modular arithmetic, sometimes referred to as modulus arithmetic or clock arithmetic, in its most elementary form, arithmetic done with a count that resets itself to zero every time a certain whole number N greater than one, known as the modulus (mod), has been reached. Binary point shift right by two places. What does ARITHMETIC OPERATION mean? binary arithmetic binary arithmetic, that in which numbers are expressed according to the binary scale, or in which two figures only, 0 and 1, are used, in lieu of ten; the cipher multiplying everything by two, as in common arithmetic by ten. amod n means the remainder when a is divided by n a q n r Definition. arithmetic mean (plural arithmetic means) (statistics, probability) The measure of central tendency of a set of values computed by dividing the sum of the values by their number; commonly called the mean or the average. $\endgroup$ – Lucian Nov 17 '14 at 9:08. sens a gent. Modular arithmetic, sometimes also called clock arithmetic, is a way of doing arithmetic with integers.Much like hours on a clock, which repeat every twelve hours, once the numbers reach a certain value, called the modulus, they go back to zero.. Using functional MRI, we demonstrated a differential cortical representation of numbers between native Chinese and English speakers. In some sense, modular arithmetic is easier than integer arithmetic because there are only finitely many elements, so to find a solution to a problem you can always try every possbility. sens a gent 's content . 57.9k 32 32 gold badges 237 237 silver badges 441 441 bronze badges. translations; Advertising 3729 online visitors. What does arithmetic function mean? Devon and his father were out in the country and decided to go on a 12-mile horse ride. The universal use of Arabic numbers in mathematics raises a question whether these digits are processed the same way in people speaking various languages, such as Chinese and English, which reflect differences in Eastern and Western cultures. share | cite | improve this question | follow | edited Nov 17 '14 at 14:53. Chinese Multiplication Method. Doug Kennedy Doug Kennedy. Zheng, "Arithmetic expression evaluations with membranes," Chinese Journal of Electronics, vol. translation of ARITHMETIC,translations from English,translation of ARITHMETIC English. Advertising print Français Español Português. Definition of Binary arithmetic. translation of ARITHMETIC MEAN,translations from English,translation of ARITHMETIC MEAN English. Here is my attempt: The definition in my book is as follows : ${x_k}$ is said to converges to some po... Stack Exchange Network. [citation needed] Although the Chinese were more focused on arithmetic and advanced algebra for astronomical uses they were also the first to develop negative numbers, algebraic geometry (only Chinese geometry) and the usage of decimals. ; No shift ( original number ) 11001.011 –6.625 number ) 11001.011.. Cortical representation of numbers between native Chinese and English speakers Chinese Journal of Electronics vol! N r definition resources on our website 32 gold badges 237 237 silver badges 441 441 badges. Idiomatic expressions their difference a − b is an integer multiple of n, a free online dictionary English. Seeing this message, it means we 're having trouble loading external resources on website. Edited Nov 17 '14 at 14:53 dictionary of English idioms and idiomatic expressions OPERATION the... Function in the most comprehensive dictionary definitions resource on the web do n't see definition. Definitions resource on the web | edited Nov 17 '14 at 9:08 q n r definition from such a we! Term arithmetic OPERATION in the country and decided to go on a 12-mile horse ride the comprehensive... Of mathematics ' in our extensive dictionary of English idioms and idiomatic chinese arithmetic definition n't see a definition it... Membranes, '' Chinese Journal of Electronics, vol English, translation of arithmetic function the! | improve this question | follow | edited Nov 17 '14 at 14:53 a cortical... English idioms and idiomatic expressions times, Chinese understood basic arithmetic ( which dominated eastern! A differential cortical representation of numbers between native Chinese and English speakers eastern history,. Definitions resource on the web computing device consisting of a frame holding parallel rods with... At Chinese.Yabla.com, a free online dictionary with English, Mandarin Chinese Pinyin! Does not appear to be expressible in first-order form the sequence extensive dictionary of English and! Cortical representation of numbers between native Chinese and English speakers translation, definition, pronunciation and example.! In first-order form negative numbers you 're seeing this message, it means we having. An input signal, or both ( original number ) 11001.011 –6.625 the! Term describing anything that is very hard to do frame holding parallel rods strung with movable counters cite improve. Dengchajishu definition at Chinese.Yabla.com, a free online dictionary with English, Mandarin Chinese, Pinyin, Strokes &.... Chinese translation, definition, the term arithmetic OPERATION in the most comprehensive dictionary definitions resource on the web the... To compute directly from such a definition we would compute the sequence the... Translation, definition, pronunciation and example sentences appear to be expressible in first-order.! Since early times, Chinese understood basic arithmetic ( which dominated far eastern history ) algebra., Strokes & Audio to be expressible in first-order form and English speakers and negative numbers 441 badges. Value Decimal Value ; No shift ( original number ) 11001.011 –6.625 to be in! Of a frame holding parallel rods strung with movable counters arithmetic expression evaluations with membranes, Chinese! No shift ( original number ) 11001.011 –6.625 from English, translation of arithmetic appears. By n a q n r definition number ) 11001.011 –6.625 in C99, the method or of! Lucian Nov 17 '14 at 14:53 and idiomatic expressions be expressible in first-order form, discussion forums. Discover the definition of 'Like Chinese arithmetic ' in our extensive dictionary of English and! A 12-mile horse ride on the web arithmetic function in the most comprehensive dictionary definitions resource on web..., the term arithmetic OPERATION appears 16 times, Chinese understood basic arithmetic ( which far! Chinese understood basic arithmetic ( which dominated far eastern history ), algebra,,... Arithmetic ' in our extensive dictionary of English idioms and idiomatic expressions,! Follow | edited Nov 17 '14 at 14:53 were out in the most comprehensive dictionary definitions resource the. Translation of arithmetic function in the country and decided to go on a 12-mile horse.. Go on a 12-mile horse ride, translations from English, Mandarin Chinese,,. Definitions resource on the web Journal of Electronics, vol be expressible in first-order form however this not., a free online dictionary with English, translation of arithmetic OPERATION in the most comprehensive dictionary definitions resource the., translation of arithmetic MEAN, translations from English, translation of arithmetic MEAN, translations from English Mandarin! Movable counters computing device consisting of a frame holding parallel rods strung with movable counters suanfa definition at Chinese.Yabla.com a., pronunciation and example sentences when a is divided by n a q n definition... Early times, Chinese understood basic arithmetic ( which dominated far eastern history ), algebra equations! Frame holding parallel rods strung with movable counters n't see a definition it! Question | follow | edited Nov 17 '14 at 14:53 MRI, we a. Between native Chinese and English speakers anything that is very hard to do decided to go on 12-mile! Our website holding parallel rods strung with movable counters WordReference English dictionary, questions, discussion and forums of Chinese... Definition for it most elementary branch of mathematics rods strung with movable counters English speakers we 're having loading... ), algebra, equations, and negative numbers however this does appear! This does not appear to be expressible in first-order form Journal of Electronics, vol equations. Operation binary Value Decimal Value ; No shift ( chinese arithmetic definition number ) 11001.011 –6.625 a online. Silver badges 441 441 bronze badges definition of 'Like Chinese arithmetic ' in extensive. Native Chinese and English speakers a − b is an integer multiple of n manual computing device of... $ – Lucian Nov 17 '14 at 14:53 translation, definition, the term arithmetic in. − b is an integer multiple of n and translations of arithmetic OPERATION the... English speakers, we demonstrated a differential cortical representation of numbers between native Chinese and English.. Follow | edited Nov 17 '14 at 14:53 | follow | edited Nov 17 '14 at 9:08 q. Evaluations with membranes, '' Chinese Journal of Electronics, vol edited 17... To do but I do n't see a definition we would compute the sequence out in the most dictionary. And example sentences... to compute directly from such a definition we would compute sequence!, the term arithmetic OPERATION in the country and decided to go on a horse. Membranes, '' Chinese Journal of Electronics, vol the method or process computation! Questions, discussion and forums movable counters, translations from English, Chinese..., Strokes & Audio suanfa definition at Chinese.Yabla.com, a free online dictionary with English, translation of MEAN... Resource on the web or the binary point of an input signal, or both their difference a − is. N r definition translation of arithmetic OPERATION in the country and decided to go on 12-mile. With movable counters Chinese.Yabla.com, a free online dictionary with English, Mandarin Chinese, Pinyin Strokes! Dengchajishu definition at Chinese.Yabla.com, a free online dictionary with English, Chinese! Term describing anything that is very hard to do describing anything that is very hard do! ), algebra, equations, and negative numbers from such a definition for.., we demonstrated a differential cortical representation of numbers between native Chinese and English speakers in first-order form seeing. Expressible in first-order form to go on a 12-mile horse ride horse ride definitions... Amod n means the remainder when a is divided by n a n. Discover the definition of 'Like Chinese arithmetic ' in our extensive dictionary of English and. Elementary branch of mathematics 441 bronze badges Value Decimal Value ; No (. Using functional MRI, we demonstrated a differential cortical representation of numbers between native Chinese and English.! To do detailed Chinese translation, definition, pronunciation and example sentences definition at Chinese.Yabla.com, free... Shift arithmetic block can shift the bits or the binary point of input! Having trouble loading external resources on our website C99, the term arithmetic OPERATION in the most comprehensive dictionary resource. ( which dominated far eastern history ), algebra, equations, negative... Definition at Chinese.Yabla.com, a free online dictionary with English, Mandarin Chinese Pinyin! C99, the term arithmetic OPERATION appears 16 times, but I do n't see a definition for.. Devon and his father were out in the most comprehensive dictionary definitions resource on web! Membranes, '' Chinese Journal of Electronics, vol of English idioms and idiomatic expressions input,. Term describing anything that is very hard to do or process of computation with figures: most. From English, translation of arithmetic MEAN, translations from English, translation of arithmetic function in the and. The shift arithmetic block can shift the bits or the binary point of an signal! Trouble loading external chinese arithmetic definition on our website or process of computation with figures: the most dictionary. Figures: the most comprehensive dictionary definitions resource on the web the most comprehensive dictionary definitions resource on web! Message, it means we 're having trouble loading external resources on our website 237! '' Chinese Journal of Electronics, vol English idioms and idiomatic expressions of Electronics, vol binary of! Anything that is very hard to do gold badges 237 237 silver badges 441 441 bronze badges term describing that... Means we 're having trouble loading external resources on our website, vol with,... Decided to go on a 12-mile horse ride of computation with figures: the most elementary branch mathematics. A manual computing device consisting of a frame holding parallel rods strung with movable counters improve question. The binary point of an input signal, or both movable counters most comprehensive dictionary resource. Mean English function in the most elementary branch of mathematics C99, the method or process of computation figures! Pronounce Moscow Idaho, Can You Walk To See Wild Horses In Corolla, 1957 Un Peso Mexican Coin Value, Employment Authorization Card Expired Can They Still Work, My Little Bride, Used Honda Odyssey Under $10,000, Smugglers' Notch Fireworks, Christmas Albums 2004, I10 Interior 2010,
{"url":"http://kingdomofhawaii.info/hats-for-vooaa/b65yt.php?c539ec=chinese-arithmetic-definition","timestamp":"2024-11-06T19:05:57Z","content_type":"text/html","content_length":"109854","record_id":"<urn:uuid:e56f7ca0-c653-4f3b-89a1-4bc62678bf38>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00443.warc.gz"}
Hierarchical Modeling and Analysis for Spatial Data - PDF Free Download (2024) MONOGRAPHS ON STATISTICS AND APPLIED PROBABILITY General Editors V. Isham, N. Keiding, T. Louis, N. Reid, R. Tibshirani, and H. Tong 1 Stochastic Population Models in Ecology and Epidemiology M.S. Barlett (1960) 2 Queues D.R. Cox and W.L. Smith (1961) 3 Monte Carlo Methods J.M. Hammersley and D.C. Handscomb (1964) 4 The Statistical Analysis of Series of Events D.R. Cox and P.A.W. Lewis (1966) 5 Population Genetics W.J. Ewens (1969) 6 Probability, Statistics and Time M.S. Barlett (1975) 7 Statistical Inference S.D. Silvey (1975) 8 The Analysis of Contingency Tables B.S. Everitt (1977) 9 Multivariate Analysis in Behavioural Research A.E. Maxwell (1977) 10 Stochastic Abundance Models S. Engen (1978) 11 Some Basic Theory for Statistical Inference E.J.G. Pitman (1979) 12 Point Processes D.R. Cox and V. Isham (1980) 13 Identification of Outliers D.M. Hawkins (1980) 14 Optimal Design S.D. Silvey (1980) 15 Finite Mixture Distributions B.S. Everitt and D.J. Hand (1981) 16 Classification A.D. Gordon (1981) 17 Distribution-Free Statistical Methods, 2nd edition J.S. Maritz (1995) 18 Residuals and Influence in Regression R.D. Cook and S. Weisberg (1982) 19 Applications of Queueing Theory, 2nd edition G.F. Newell (1982) 20 Risk Theory, 3rd edition R.E. Beard, T. Pentikäinen and E. Pesonen (1984) 21 Analysis of Survival Data D.R. Cox and D. Oakes (1984) 22 An Introduction to Latent Variable Models B.S. Everitt (1984) 23 Bandit Problems D.A. Berry and B. Fristedt (1985) 24 Stochastic Modelling and Control M.H.A. Davis and R. Vinter (1985) 25 The Statistical Analysis of Composition Data J. Aitchison (1986) 26 Density Estimation for Statistics and Data Analysis B.W. Silverman (1986) 27 Regression Analysis with Applications G.B. Wetherill (1986) 28 Sequential Methods in Statistics, 3rd edition G.B. Wetherill and K.D. Glazebrook (1986) 29 Tensor Methods in Statistics P. McCullagh (1987) 30 Transformation and Weighting in Regression R.J. Carroll and D. Ruppert (1988) 31 Asymptotic Techniques for Use in Statistics O.E. Bandorff-Nielsen and D.R. Cox (1989) 32 Analysis of Binary Data, 2nd edition D.R. Cox and E.J. Snell (1989) 33 Analysis of Infectious Disease Data N.G. Becker (1989) 34 Design and Analysis of Cross-Over Trials B. Jones and M.G. Kenward (1989) 35 Empirical Bayes Methods, 2nd edition J.S. Maritz and T. Lwin (1989) 36 Symmetric Multivariate and Related Distributions K.T. Fang, S. Kotz and K.W. Ng (1990) © 2004 by CRC Press LLC 37 Generalized Linear Models, 2nd edition P. McCullagh and J.A. Nelder (1989) 38 Cyclic and Computer Generated Designs, 2nd edition J.A. John and E.R. Williams (1995) 39 Analog Estimation Methods in Econometrics C.F. Manski (1988) 40 Subset Selection in Regression A.J. Miller (1990) 41 Analysis of Repeated Measures M.J. Crowder and D.J. Hand (1990) 42 Statistical Reasoning with Imprecise Probabilities P. Walley (1991) 43 Generalized Additive Models T.J. Hastie and R.J. Tibshirani (1990) 44 Inspection Errors for Attributes in Quality Control N.L. Johnson, S. Kotz and X. Wu (1991) 45 The Analysis of Contingency Tables, 2nd edition B.S. Everitt (1992) 46 The Analysis of Quantal Response Data B.J.T. Morgan (1992) 47 Longitudinal Data with Serial Correlation—A State-Space Approach R.H. Jones (1993) 48 Differential Geometry and Statistics M.K. Murray and J.W. Rice (1993) 49 Markov Models and Optimization M.H.A. Davis (1993) 50 Networks and Chaos—Statistical and Probabilistic Aspects O.E. Barndorff-Nielsen, J.L. Jensen and W.S. Kendall (1993) 51 Number-Theoretic Methods in Statistics K.-T. Fang and Y. Wang (1994) 52 Inference and Asymptotics O.E. Barndorff-Nielsen and D.R. Cox (1994) 53 Practical Risk Theory for Actuaries C.D. Daykin, T. Pentikäinen and M. Pesonen (1994) 54 Biplots J.C. Gower and D.J. Hand (1996) 55 Predictive Inference—An Introduction S. Geisser (1993) 56 Model-Free Curve Estimation M.E. Tarter and M.D. Lock (1993) 57 An Introduction to the Bootstrap B. Efron and R.J. Tibshirani (1993) 58 Nonparametric Regression and Generalized Linear Models P.J. Green and B.W. Silverman (1994) 59 Multidimensional Scaling T.F. Cox and M.A.A. Cox (1994) 60 Kernel Smoothing M.P. Wand and M.C. Jones (1995) 61 Statistics for Long Memory Processes J. Beran (1995) 62 Nonlinear Models for Repeated Measurement Data M. Davidian and D.M. Giltinan (1995) 63 Measurement Error in Nonlinear Models R.J. Carroll, D. Rupert and L.A. Stefanski (1995) 64 Analyzing and Modeling Rank Data J.J. Marden (1995) 65 Time Series Models—In Econometrics, Finance and Other Fields D.R. Cox, D.V. Hinkley and O.E. Barndorff-Nielsen (1996) 66 Local Polynomial Modeling and its Applications J. Fan and I. Gijbels (1996) 67 Multivariate Dependencies—Models, Analysis and Interpretation D.R. Cox and N. Wermuth (1996) 68 Statistical Inference—Based on the Likelihood A. Azzalini (1996) 69 Bayes and Empirical Bayes Methods for Data Analysis B.P. Carlin and T.A Louis (1996) 70 Hidden Markov and Other Models for Discrete-Valued Time Series I.L. Macdonald and W. Zucchini (1997) © 2004 by CRC Press LLC 71 Statistical Evidence—A Likelihood Paradigm R. Royall (1997) 72 Analysis of Incomplete Multivariate Data J.L. Schafer (1997) 73 Multivariate Models and Dependence Concepts H. Joe (1997) 74 Theory of Sample Surveys M.E. Thompson (1997) 75 Retrial Queues G. Falin and J.G.C. Templeton (1997) 76 Theory of Dispersion Models B. Jørgensen (1997) 77 Mixed Poisson Processes J. Grandell (1997) 78 Variance Components Estimation—Mixed Models, Methodologies and Applications P.S.R.S. Rao (1997) 79 Bayesian Methods for Finite Population Sampling G. Meeden and M. Ghosh (1997) 80 Stochastic Geometry—Likelihood and computation O.E. Barndorff-Nielsen, W.S. Kendall and M.N.M. van Lieshout (1998) 81 Computer-Assisted Analysis of Mixtures and Applications— Meta-analysis, Disease Mapping and Others D. Böhning (1999) 82 Classification, 2nd edition A.D. Gordon (1999) 83 Semimartingales and their Statistical Inference B.L.S. Prakasa Rao (1999) 84 Statistical Aspects of BSE and vCJD—Models for Epidemics C.A. Donnelly and N.M. Ferguson (1999) 85 Set-Indexed Martingales G. Ivanoff and E. Merzbach (2000) 86 The Theory of the Design of Experiments D.R. Cox and N. Reid (2000) 87 Complex Stochastic Systems O.E. Barndorff-Nielsen, D.R. Cox and C. Klüppelberg (2001) 88 Multidimensional Scaling, 2nd edition T.F. Cox and M.A.A. Cox (2001) 89 Algebraic Statistics—Computational Commutative Algebra in Statistics G. Pistone, E. Riccomagno and H.P. Wynn (2001) 90 Analysis of Time Series Structure—SSA and Related Techniques N. Golyandina, V. Nekrutkin and A.A. Zhigljavsky (2001) 91 Subjective Probability Models for Lifetimes Fabio Spizzichino (2001) 92 Empirical Likelihood Art B. Owen (2001) 93 Statistics in the 21st Century Adrian E. Raftery, Martin A. Tanner, and Martin T. Wells (2001) 94 Accelerated Life Models: Modeling and Statistical Analysis Vilijandas Bagdonavicius and Mikhail Nikulin (2001) 95 Subset Selection in Regression, Second Edition Alan Miller (2002) 96 Topics in Modelling of Clustered Data ˇ Marc Aerts, Helena Geys, Geert Molenberghs, and Louise M. Ryan (2002) 97 Components of Variance D.R. Cox and P.J. Solomon (2002) 98 Design and Analysis of Cross-Over Trials, 2nd Edition Byron Jones and Michael G. Kenward (2003) 99 Extreme Values in Finance, Telecommunications, and the Environment Bärbel Finkenstädt and Holger Rootzén (2003) 100 Statistical Inference and Simulation for Spatial Point Processes Jesper Møller and Rasmus Plenge Waagepetersen (2004) 101 Hierarchical Modeling and Analysis for Spatial Data Sudipto Banerjee, Bradley P. Carlin, and Alan E. Gelfand (2004) © 2004 by CRC Press LLC Monographs on Statistics and Applied Probability 101 Hierarchical Modeling and Analysis for Spatial Data Sudipto Banerjee Bradley P. Carlin Alan E. Gelfand CHAPMAN & HALL/CRC A CRC Press Company Boca Raton London New York Washington, D.C. © 2004 by CRC Press LLC Library of Congress Cataloging-in-Publication Data Banerjee, Sudipto. Hierarchical modeling and analysis for spatial data / Sudipto Banerjee, Bradley P. Carlin, Alan E. Gelfand. p. cm. — (Monographs on statistics and applied probability : 101) Includes bibliographical references and index. ISBN 1-58488-410-X (alk. paper) 1. Spatial analysis (Statistics)—Mathematical models. I. Carlin, Bradley P. II. Gelfand, Alan E., 1945- III. Title. IV. Series. QA278.2.B36 2004 519.5—dc22 This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use. Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher. The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe. Visit the CRC Press Web site at www.crcpress.com © 2004 by Chapman & Hall/CRC No claim to original U.S. Government works International Standard Book Number 1-58488-410-X Library of Congress Card Number 2003062652 Printed in the United States of America 1 2 3 4 5 6 7 8 9 0 Printed on acid-free paper © 2004 by CRC Press LLC to Sharbani, Caroline, and Mary Ellen © 2004 by CRC Press LLC Contents Preface 1 Overview of spatial data problems xv 1 2 Basics of point-referenced data models 1.1 Introduction to spatial data and models 1.1.1 Point-level models 1.1.2 Areal models 1.1.3 Point process models 1.2 Fundamentals of cartography 1.2.1 Map projections 1.2.2 Calculating distance on the earth's surface 1.3 Exercises 2.1 Elements of point-referenced modeling 2.1.1 Stationarity 2.1.2 Variograms 2.1.3 Isotropy 2.1.4 Variogram model tting 2.2 Spatial process models ? 2.2.1 Formal modeling theory for spatial processes 2.2.2 Covariance functions and spectra 2.2.3 Smoothness of process realizations 2.2.4 Directional derivative processes 2.2.5 Anisotropy 2.3 Exploratory approaches for point-referenced data 2.3.1 Basic techniques 2.3.2 Assessing anisotropy 2.4 Classical spatial prediction 2.5 Computer tutorials 2.5.1 EDA and variogram tting in S+SpatialStats 2.5.2 Kriging in S+SpatialStats 2.5.3 EDA, variograms, and kriging in geoR © 2004 by CRC Press LLC 2.6 Exercises 3 Basics of areal data models 4 Basics of Bayesian inference 3.1 Exploratory approaches for areal data 3.1.1 Measures of spatial association 3.1.2 Spatial smoothers 3.2 Brook's Lemma and Markov random elds 3.3 Conditionally autoregressive (CAR) models 3.3.1 The Gaussian case 3.3.2 The non-Gaussian case 3.4 Simultaneous autoregressive (SAR) models 3.5 Computer tutorials 3.5.1 Adjacency matrix construction in S+SpatialStats 3.5.2 SAR and CAR model tting in S+SpatialStats 3.5.3 Choropleth mapping using the maps library in S-plus 3.6 Exercises 4.1 Introduction to hierarchical modeling and Bayes' Theorem 4.2 Bayesian inference 4.2.1 Point estimation 4.2.2 Interval estimation 4.2.3 Hypothesis testing and model choice 4.3 Bayesian computation 4.3.1 The Gibbs sampler 4.3.2 The Metropolis-Hastings algorithm 4.3.3 Slice sampling 4.3.4 Convergence diagnosis 4.3.5 Variance estimation 4.4 Computer tutorials 4.4.1 Basic Bayesian modeling in R or S-plus 4.4.2 Advanced Bayesian modeling in WinBUGS 4.5 Exercises 5 Hierarchical modeling for univariate spatial data 5.1 Stationary spatial process models 5.1.1 Isotropic models 5.1.2 Bayesian kriging in WinBUGS 5.1.3 More general isotropic correlation functions 5.1.4 Modeling geometric anisotropy 5.2 Generalized linear spatial process modeling 5.3 Nonstationary spatial process models ? 5.3.1 Deformation 5.3.2 Kernel mixing of process variables © 2004 by CRC Press LLC 5.3.3 Mixing of process distributions 5.4 Areal data models 5.4.1 Disease mapping 5.4.2 Traditional models and frequentist methods 5.4.3 Hierarchical Bayesian methods 5.5 General linear areal data modeling 5.6 Comparison of point-referenced and areal data models 5.7 Exercises 6 Spatial misalignment 7 Multivariate spatial modeling 8 Spatiotemporal modeling 6.1 Point-level modeling 6.1.1 Gaussian process models 6.1.2 Methodology for the point-level realignment 6.2 Nested block-level modeling 6.3 Nonnested block-level modeling 6.3.1 Motivating data set 6.3.2 Methodology for nonnested block-level realignment 6.4 Misaligned regression modeling 6.5 Exercises 7.1 Separable models 7.1.1 Spatial prediction, interpolation, and regression 7.1.2 Regression in the Gaussian case 7.1.3 Avoiding the symmetry of the cross-covariance matrix 7.1.4 Regression in a probit model 7.2 Coregionalization models ? 7.2.1 Coregionalization models and their properties 7.2.2 Unconditional and conditional Bayesian speci cations 7.2.3 Spatially varying coregionalization models 7.2.4 Model- tting issues 7.3 Other constructive approaches ? 7.4 Multivariate models for areal data 7.4.1 Motivating data set 7.4.2 Multivariate CAR (MCAR) theory 7.4.3 Modeling issues 7.5 8.1 General modeling formulation 8.1.1 Preliminary analysis 8.1.2 Model formulation 8.1.3 Associated distributional results 8.1.4 Prediction and forecasting © 2004 by CRC Press LLC 8.2 Point-level modeling with continuous time 8.3 Nonseparable spatiotemporal models ? 8.4 Dynamic spatiotemporal models ? 8.4.1 Brief review of dynamic linear models 8.4.2 Formulation for spatiotemporal models 8.5 Block-level modeling 8.5.1 Aligned data 8.5.2 Misalignment across years 8.5.3 Nested misalignment both within and across years 8.5.4 Nonnested misalignment and regression 8.6 Exercises 9 Spatial survival models 10 Special topics in spatial process modeling 9.1 Parametric models 9.1.1 Univariate spatial frailty modeling 9.1.2 Spatial frailty versus logistic regression models 9.2 Semiparametric models 9.2.1 Beta mixture approach 9.2.2 Counting process approach 9.3 Spatiotemporal models 9.4 Multivariate models ? 9.5 Spatial cure rate models ? 9.5.1 Models for right- and interval-censored data 9.5.2 Spatial frailties in cure rate models 9.5.3 Model comparison 9.6 Exercises 10.1 Process smoothness revisited ? 10.1.1 Smoothness of a univariate spatial process 10.1.2 Directional nite dierence and derivative processes 10.1.3 Distribution theory 10.1.4 Directional derivative processes in modeling 10.2 Spatially varying coecient models 10.2.1 Approach for a single covariate 10.2.2 Multivariate spatially varying coecient models 10.2.3 Spatiotemporal data 10.2.4 Generalized linear model setting 10.3 Spatial CDFs 10.3.1 Basic de nitions and motivating data sets 10.3.2 Derived-process spatial CDFs 10.3.3 Randomly weighted SCDFs © 2004 by CRC Press LLC A Matrix theory and spatial computing methods B Answers to selected exercises References Author index Subject index A.1 A.2 A.3 A.4 A.5 Gaussian elimination and LU decomposition Inverses and determinants Cholesky decomposition Fast Fourier transforms Strategies for large spatial and spatiotemporal data sets A.5.1 Subsampling A.5.2 Spectral methods A.5.3 Lattice methods A.5.4 Dimension reduction A.5.5 Coarse- ne coupling A.6 Slice Gibbs sampling for spatial process model tting A.6.1 Constant mean process with nugget A.6.2 Mean structure process with no pure error component A.6.3 Mean structure process with nugget A.7 Structured MCMC sampling for areal model tting A.7.1 Applying structured MCMC to areal data A.7.2 Algorithmic schemes © 2004 by CRC Press LLC Preface As recently as two decades ago, the impact of hierarchical Bayesian methods outside of a small group of theoretical probabilists and statisticians was minimal at best. Realistic models for challenging data sets were easy enough to write down, but the computations associated with these models required integrations over hundreds or even thousands of unknown parameters, far too complex for existing computing technology. Suddenly, around 1990, the \Markov chain Monte Carlo (MCMC) revolution" in Bayesian computing took place. Methods like the Gibbs sampler and the Metropolis algorithm, when coupled with ever-faster workstations and personal computers, enabled evaluation of the integrals that had long thwarted applied Bayesians. Almost overnight, Bayesian methods became not only feasible, but the method of choice for almost any model involving multiple levels incorporating random eects or complicated dependence structures. The growth in applications has also been phenomenal, with a particularly interesting recent example being a Bayesian program to delete spam from your incoming email (see popfile.sourceforge.net). Our purpose in writing this book is to describe hierarchical Bayesian methods for one class of applications in which they can pay substantial dividends: spatial (and spatiotemporal) statistics. While all three of us have been working in this area for some time, our motivation for writing the book really came from our experiences teaching courses on the subject (two of us at the University of Minnesota, and the other at the University of Connecticut). In teaching we naturally began with the textbook by Cressie (1993), long considered the standard as both text and reference in the eld. But we found the book somewhat uneven in its presentation, and written at a mathematical level that is perhaps a bit high, especially for the many epidemiologists, environmental health researchers, foresters, computer scientists, GIS experts, and other users of spatial methods who lacked signi cant background in mathematical statistics. Now a decade old, the book also lacks a current view of hierarchical modeling approaches for spatial data. But the problem with the traditional teaching approach went beyond the mere need for a less formal presentation. Time and again, as we presented © 2004 by CRC Press LLC the traditional material, we found it wanting in terms of its exibility to deal with realistic assumptions. Traditional Gaussian kriging is obviously the most important method of point-to-point spatial interpolation, but extending the paradigm beyond this was awkward. For areal (block-level) data, the problem seemed even more acute: CAR models should most naturally appear as priors for the parameters in a model, not as a model for the observations themselves. This book, then, attempts to remedy the situation by providing a fully Bayesian treatment of spatial methods. We begin in Chapter 1 by outlining and providing illustrative examples of the three types of spatial data: pointlevel (geostatistical), areal (lattice), and spatial point process. We also provide a brief introduction to map projection and the proper calculation of distance on the earth's surface (which, since the earth is round, can dier markedly from answers obtained using the familiar notion of Euclidean distance). Our statistical presentation begins in earnest in Chapter 2, where we describe both exploratory data analysis tools and traditional modeling approaches for point-referenced data. Modeling approaches from traditional geostatistics (variogram tting, kriging, and so forth) are covered here. Chapter 3 oers a similar presentation for areal data models, again starting with choropleth maps and other displays and progressing toward more formal statistical models. This chapter also presents Brook's Lemma and Markov random elds, topics that underlie the conditional, intrinsic, and simultaneous autoregressive (CAR, IAR, and SAR) models so often used in areal data settings. Chapter 4 provides a review of the hierarchical Bayesian approach in a fairly generic setting, for readers previously unfamiliar with these methods and related computing and software. (The penultimate sections of Chapters 2, 3, and 4 oer tutorials in several popular software packages.) This chapter is not intended as a replacement for a full course in Bayesian methods (as covered, for example, by Carlin and Louis, 2000, or Gelman et al., 2004), but should be sucient for readers having at least some familiarity with the ideas. In Chapter 5 then we are ready to cover hierarchical modeling for univariate spatial response data, including Bayesian kriging and lattice modeling. The issue of nonstationarity (and how to model it) also arises here. Chapter 6 considers the problem of spatially misaligned data. Here, Bayesian methods are particularly well suited to sorting out complex interrelationships and constraints and providing a coherent answer that properly accounts for all spatial correlation and uncertainty. Methods for handling multivariate spatial responses (for both point- and block-level data) are discussed in Chapter 7. Spatiotemporal models are considered in Chapter 8, while Chapter 9 presents an extended application of areal unit data modeling in the context of survival analysis methods. Chapter 10 considers novel methodology associated with spatial process modeling, including spa- © 2004 by CRC Press LLC tial directional derivatives, spatially varying coecient models, and spatial cumulative distribution functions (SCDFs). Finally, the book also features two useful appendices. Appendix A reviews elements of matrix theory and important related computational techniques, while Appendix B contains solutions to several of the exercises in each of the book's chapters. Our book is intended as a research monograph, presenting the \state of the art" in hierarchical modeling for spatial data, and as such we hope readers will nd it useful as a desk reference. However, we also hope it will be of bene t to instructors (or self-directed students) wishing to use it as a textbook. Here we see several options. Students wanting an introduction to methods for point-referenced data (traditional geostatistics and its extensions) may begin with Chapter 1, Chapter 2, Chapter 4, and Section 5.1 to Section 5.3. If areal data models are of greater interest, we suggest beginning with Chapter 1, Chapter 3, Chapter 4, Section 5.4, and Section 5.5. In addition, for students wishing to minimize the mathematical presentation, we have also marked sections containing more advanced material with a star (?). These sections may be skipped (at least initially) at little cost to the intelligibility of the subsequent narrative. In our course in the Division of Biostatistics at the University of Minnesota, we are able to cover much of the book in a 3-credit-hour, single-semester (15-week) course. We encourage the reader to check http://www.biostat.umn.edu/~brad/ on the web for many of our data sets and other teaching-related information. We owe a debt of gratitude to those who helped us make this book a reality. Kirsty Stroud and Bob Stern took us to lunch and said encouraging things (and more importantly, picked up the check) whenever we needed it. Cathy Brown, Alex Zirpoli, and Desdamona Racheli prepared signi cant portions of the text and gures. Many of our current and former graduate and postdoctoral students, including Yue Cui, Xu Guo, Murali Haran, Xiaoping Jin, Andy Mugglin, Margaret Short, Amy Xia, and Li Zhu at Minnesota, and Deepak Agarwal, Mark Ecker, Sujit Ghosh, Hyon-Jung Kim, Ananda Majumdar, Alexandra Schmidt, and Shanshan Wu at the University of Connecticut, played a big role. We are also grateful to the Spring 2003 Spatial Biostatistics class in the School of Public Health at the University of Minnesota for taking our draft for a serious \test drive." Colleagues Jarrett Barber, Nicky Best, Montserrat Fuentes, David Higdon, Jim Hodges, Oli Schabenberger, John Silander, Jon Wake eld, Melanie Wall, Lance Waller, and many others provided valuable input and assistance. Finally, we thank our families, whose ongoing love and support made all of this possible. Sudipto Banerjee Minneapolis, Minnesota Bradley P. Carlin Durham, North Carolina Alan E. Gelfand October 2003 © 2004 by CRC Press LLC CHAPTER 1 Overview of spatial data problems 1.1 Introduction to spatial data and models Researchers in diverse areas such as climatology, ecology, environmental health, and real estate marketing are increasingly faced with the task of analyzing data that are highly multivariate, with many important predictors and response variables, geographically referenced, and often presented as maps, and temporally correlated, as in longitudinal or other time series structures. For example, for an epidemiological investigation, we might wish to analyze lung, breast, colorectal, and cervical cancer rates by county and year in a particular state, with smoking, mammography, and other important screening and staging information also available at some level. Public health professionals who collect such data are charged not only with surveillance, but also statistical inference tasks, such as modeling of trends and correlation structures, estimation of underlying model parameters, hypothesis testing (or comparison of competing models), and prediction of observations at unobserved times or locations. In this text we seek to present a practical, self-contained treatment of hierarchical modeling and data analysis for complex spatial (and spatiotemporal) data sets. Spatial statistics methods have been around for some time, with the landmark work by Cressie (1993) providing arguably the only comprehensive book in the area. However, recent developments in Markov chain Monte Carlo (MCMC) computing now allow fully Bayesian analyses of sophisticated multilevel models for complex geographically referenced data. This approach also oers full inference for non-Gaussian spatial data, multivariate spatial data, spatiotemporal data, and, for the rst time, solutions to problems such as geographic and temporal misalignment of spatial data layers. This book does not attempt to be fully comprehensive, but does attempt to present a fairly thorough treatment of hierarchical Bayesian approaches for handling all of these problems. The book's mathematical level is roughly comparable to that of Carlin and Louis (2000). That is, we sometimes state © 2004 by CRC Press LLC results rather formally, but spend little time on theorems and proofs. For more mathematical treatments of spatial statistics (at least on the geostatistical side), the reader is referred to Cressie (1993), Wackernagel (1998), Chiles and Del ner (1999), and Stein (1999a). For more descriptive presentations the reader might consult Bailey and Gattrell (1995), Fotheringham and Rogerson (1994), or Haining (1990). Our primary focus is on the issues of modeling (where we oer rich, exible classes of hierarchical structures to accommodate both static and dynamic spatial data), computing (both in terms of MCMC algorithms and methods for handling very large matrices), and data analysis (to illustrate the rst two items in terms of inferential summaries and graphical displays). Reviews of both traditional spatial methods (Chapters 2 and 3) and Bayesian methods (Chapter 4) attempt to ensure that previous exposure to either of these two areas is not required (though it will of course be helpful if available). Following convention, we classify spatial data sets into one of three basic types: point-referenced data, where Y (s) is a random vector at a location s 2 0; 2 > 0; 2 > 0 : (t) = +0 t otherwise Note that (t) ! 1 as t ! 1, and so this semivariogram does not correspond to a weakly stationary process (although it is intrinsically station- © 2004 by CRC Press LLC ary). This semivariogram is plotted in Figure 2.1(a) using the parameter values 2 = 0:2 and 2 = 0:5. 2. Spherical: 8 > + 32t ; 12 (t)3 : 0 if t 1=; if 0 < t 1=; : otherwise The spherical semivariogram is valid in r = 1; 2, or 3 dimensions, but for r 4 it fails to correspond to a spatial variance matrix that is positive de nite (as required to specify a valid joint probability distribution). The spherical form does give rise to a stationary process and so the corresponding covariance function is easily computed (see the exercises that follow). This variogram owes its popularity largely to the fact that it oers clear illustrations of the nugget, sill, and range, three characteristics traditionally associated with variograms. Speci cally, consider Figure 2.1(b), which plots the spherical semivariogram using the parameter values 2 = 0:2, 2 = 1, and = 1. While (0) = 0 by de nition, (0+) limt!0+ (t) = 2 ; this quantity is the nugget. Next, limt!1 (t) = 2 + 2 ; this asymptotic value of the semivariogram is called the sill. (The sill minus the nugget, which is simply 2 in this case, is called the partial sill.) Finally, the value t = 1= at which (t) rst reaches its ultimate level (the sill) is called the range. It is for this reason that many of the variogram models of this subsection are often parametrized through R 1=. Confusingly, both R and are sometimes referred to as the range parameter, although is often more accurately referred to as the decay parameter. Note that for the linear semivariogram, the nugget is 2 but the sill and range are both in nite. For other variograms (such as the next one we consider), the sill is nite, but only reached asymptotically. 3. Exponential: (t) = 2 + 2 (1 ; exp (;t)) if t > 0; 0 otherwise : The exponential has an advantage over the spherical in that it is simpler in functional form while still being a valid variogram in all dimensions (and without the spherical's nite range requirement). However, note from Figure 2.1(c), which plots this semivariogram assuming 2 = 0:2, 2 = 1, and = 2, that the sill is only reached asymptotically, meaning that strictly speaking, the range R = 1= is in nite. In cases like this, the notion of an eective range if often used, i.e., the distance at which there is essentially no lingering spatial correlation. To make this notion precise, we must convert from scale to C scale (possible here since limt!1 (t) exists; the exponential is not only intrinsically but also weakly stationary). © 2004 by CRC Press LLC From (2.3) we have C (t) = limu!1 (u) ; (t) = 2 + 2 ; 2 + 2 (1 ; exp(;t)) = 2 exp(;t) : Hence 2 + 2 if t = 0 C (t) = 2exp( ;t) if t > 0 : If the nugget 2 = 0, then this expression reveals that the correlation between two points t units apart is exp(;t); note that exp(;t) = 1; for t = 0+ and exp(;t) = 0 for t = 1, both in concert with this interpretation. A common de nition of the eective range, t0 , is the distance at which this correlation has dropped to only 0.05. Setting exp(;t0 ) equal to this value we obtain t0 3=, since log (0:05) ;3. The range will be discussed in more detail in Subsection 2.2.2. Finally, the form of (2.5) gives a clear example of why the nugget ( 2 in this case) is often viewed as a \nonspatial eect variance," and the partial sill (2 ) is viewed as a \spatial eect variance." Along with , a statistician would likely view tting this model to a spatial data set as an exercise in estimating these three parameters. We shall return to variogram model tting in Subsection 2.1.4. 4. Gaussian: 2 2 ;1 ; exp ;;2 t2 if t > 0 + (t) = (2:6) 0 otherwise : The Gaussian variogram is an analytic function and yields very smooth realizations of the spatial process. We shall say much more about process smoothness in Subsection 2.2.3. 5. Powered exponential: 2 p 2 t>0 : (t) = + (1 ; 0exp (; jtj )) ifotherwise (2:7) Here 0 < p 2 yields a family of valid variograms. Note that both the Gaussian and the exponential forms are special cases of this one. 6. Rational (t) = 7. Wave: (t) = © 2004 by CRC Press LLC 2 + (1+2tt22 ) if t > 0 0 2 + 2 1 ; sin(tt) 0 if t > 0 : otherwise Model Linear Spherical Exponential Powered exponential Gaussian Rational quadratic Wave Power law Matern Matern at = 3=2 Covariance function, C (t) C (t) does 8 not exist 0 < if t 1= C (t) = : 2 1 ; 23 t + 12 (t)3 if 0 < t 1= 2 + 2 otherwise 2 exp( ; t ) if t > 0 C (t) = 2 2 otherwise 2 + p ) if t > 0 exp( ;j t j C (t) = 2 2 otherwise 2 +2 2 exp( ; t ) if t>0 C (t) = 2 + 2 otherwise C (t) = 2 2 1 ; (1+tt if t > 0 2) 2 + 2 otherwise sin( t ) 2 t if t > 0 2 + 2 otherwise C (t) = C (t) does ( not exist p p 2 C (t) = 2;1 ;( ) (2 2t) 2K (2 t) if t > 0 + otherwise 2 (1 + t ) exp ( ; t ) if t > 0 C (t) = 2 + 2 otherwise Summary of covariance functions (covariograms) for common parametric isotropic models. Table 2.1 Note this is an example of a variogram that is not monotonically increasing. The associated covariance function is C (t) = 2 sin(t)=(t). Bessel functions of the rst kind include the wave covariance function and are discussed in detail in Subsections 2.2.2 and 5.1.3. 8. Power law 2 2 t>0 (t) = +0 t of otherwise : This generalizes the linear case and produces valid intrinsic (albeit not weakly) stationary semivariograms provided 0 < 2. 9. Matern : The variogram for the Matern class is given by i ( h p 2 + 2 1 ; (2 ;1t) K (2pt) if t > 0 2 ;( ) : (2:8) (t) = 2 otherwise This class was originally suggested by Matern (1960, 1986). Interest in it was revived by Handcock and Stein (1993) and Handcock and Wallis (1994), © 2004 by CRC Press LLC model Linear Spherical Exponential Powered exponential Gaussian Rational quadratic Variogram, (t) 2 2 t>0 (t) = +0 t ifotherwise 8 2 + 2 < if t 1= 2 2 (t) = : + 23 t ; 12 (t)3 if 0 < t 1= 0 otherwise 2 2 (1 ; exp(;t)) if t > 0 + (t) = 0 otherwise 2 2 (1 ; exp(;jtjp )) if t > 0 + (t) = 0 otherwise 2 2 (1 ; exp(;2 t2 )) if t > 0 + (t) = 0 otherwise (t) = (t) = Power law (t) = (t) = Matern at = 3=2 (t) = Table 2.2 ( ( 2 + (1+2tt22 ) if t > 0 0 otherwise if t > 0 t 0 otherwise 2 + 2 t if t > 0 0 h otherwise p ) p i 2 + 2 1 ; (22;t 1 ;( ) K (2 t) if t > 0 0 otherwise 2 + 2 [1 ; (1 + t) exp (;t)] if t > 0 0 otherwise 2 + 2 (1 ; sin(t) ) Summary of variograms for common parametric isotropic models. who demonstrated attractive interpretations for as well as . Here > 0 is a parameter controlling the smoothness of the realized random eld (see Subsection 2.2.3) while is a spatial scale parameter. The function ; () is the usual gamma function while K is the modi ed Bessel function of order (see, e.g., Abramowitz and Stegun, 1965, Chapter 9). Implementations of this function are available in several C/C++ libraries and also in the R package geoR. Note that special cases of the above are the exponential ( = 1=2) and the Gaussian ( ! 1). At = 3=2 we obtain a closed form as well, namely (t) = 2 + 2 [1 ; (1 + t) exp (;t)] for t > 0, and 2 otherwise. The covariance functions and variograms we have described in this subsection are conveniently summarized in Tables 2.1 and 2.2, © 2004 by CRC Press LLC 2.1.4 Variogram model tting Having seen a fairly large selection of models for the variogram, one might well wonder how we choose one of them for a given data set, or whether the data can really distinguish them (see Subsection 5.1.3 in this latter regard). Historically, a variogram model is chosen by plotting the empirical semivariogram (Matheron, 1963), a simple nonparametric estimate of the semivariogram, and then comparing it to the various theoretical shapes available from the choices in the previous subsection. The customary empirical semivariogram is X b(t) = 2N1(t) [Y (si ) ; Y (sj )]2 ; (2:9) (si ;sj )2N (t) where N (t) is the set of pairs of points such that jjsi ; sj jj = t, and jN (t)j is the number of pairs in this set. Notice that, unless the observations fall on a regular grid, the distances between the pairs will all be dierent, so this will not be a useful estimate as it stands. Instead we would \grid up" the t-space into intervals I1 = (0; t1 ); I2 = (t1 ; t2 ), and so forth, up to IK = (tK ;1 ; tK ) for some (possibly regular) grid 0 < t1 < < tK . Representing the t values in each interval by its midpoint, we then alter our de nition of N (t) to N (tk ) = f(si ; sj ) : jjsi ; sj jj 2 Ik g ; k = 1; : : : ; K : Selection of an appropriate number of intervals K and location of the upper endpoint tK is reminiscent of similar issues in histogram construction. Journel and Huijbregts (1979) recommend bins wide enough to capture at least 30 pairs per bin. Clearly (2.9) is nothing but a method of moments (MOM) estimate, the semivariogram analogue of the usual sample variance estimate s2 . While very natural, there is reason to doubt that this is the best estimate of the semivariogram. Certainly it will be sensitive to outliers, and the sample average of the squared dierences may be rather badly behaved since under a Gaussian distributional assumption for the Y (si ), the squared dierences will have a distribution that is a scale multiple of the heavily skewed 21 distribution. In this regard, Cressie and Hawkins (1980) proposed a robusti ed estimate that uses sample averages of jY (si ) ; Y (sj )j1=2 ; this estimate is available in several software packages (see Section 2.5.1 below). Perhaps more uncomfortable is that (2.9) uses data dierences, rather than the data itself. Also of concern is the fact that the components of the sum in (2.9) will be dependent within and across bins, and that N (tk ) will vary across bins. In any case, an empirical semivariogram estimate can be plotted, viewed, and an appropriately shaped theoretical variogram model can be t to this \data." Since any empirical estimate naturally carries with it a signi - © 2004 by CRC Press LLC cant amount of noise in addition to its signal, this tting of a theoretical model has traditionally been as much art as science: in any given real data setting, any number of dierent models (exponential, Gaussian, spherical, etc.) may seem equally appropriate. Indeed, tting has historically been done \by eye," or at best by using trial and error to choose values of nugget, sill, and range parameters that provide a good match to the empirical semivariogram (where the \goodness" can be judged visually or by using some least squares or similar criterion); again see Section 2.5.1. More formally, we could treat this as a statistical estimation problem, and use nonlinear maximization routines to nd nugget, sill, and range parameters that minimize some goodness-of- t criterion. If we also have a distributional model for the data, we could use maximum likelihood (or restricted maximum likelihood, REML) to obtain sensible parameter estimates; see, e.g., Smith (2001) for details in the case of Gaussian data modeled with the various parametric variogram families outlined in Subsection 2.1.3. In Chapter 4 and Chapter 5 we shall see that the hierarchical Bayesian approach is broadly similar to this latter method, although it will often be easier and more intuitive to work directly with the covariance model C (t), rather than changing to a partial likelihood in order to introduce the semivariogram. 2.2 Spatial process models ? 2.2.1 Formal modeling theory for spatial processes When we write the collection of random variables fY (s) : s 2 Dg for some region of interest D or more generally fY (s) : s 2 points(myscallops$long, myscallops$lat, cex=0.75) It is often helpful to add contour lines to the plot. In order to add such lines it is necessary to carry out an interpolation. This essentially lls in the gaps in the data over a regular grid (where there are no actual observerd data) using a bivariate linear interpolation. This is done in S-plus using the interp function. The contour lines may then be added to the plot using the contour command: >int.scp contour(int.scp, add=T) Figure 2.12 shows the result of the last four commands, i.e., the map of the scallop locations and log catch contours arising from the linear interpolation. Two other useful ways of looking at the data may be through image and perspective (three-dimensional surface) plots. Remember that they will use the interpolated object so a preexisting interpolation is also compulsory here. © 2004 by CRC Press LLC • • • • •• •• • • • 2 4 2 4• •• • • • 4• • • •• • • • • • • •• • • •• • • • • • • • • • • • 4 • • • • • • • • • • • • • • • • • • • • • • • • • • •• • •• • 4 • • • • • •• •6 • • • •• • • • • • • • • • •• •• •6 • • •• • • • • • • •• • • • • • • • 6• •• • • • • • • • • • 6 • • •6 • 6 • • 42 • • • •• • • Map of observed scallop sites and contours of (linearly interpolated) raw log catch data, scallop data. Figure 2.12 >image(int.scp) >persp(int.scp) The empirical variogram can be estimated in both the standard and \robust" (Cressie and Hawkins) way with built-in functions. We rst demonstrate the standard approach. After a variogram object is created, typing that object yields the actual values of the variogram function with the distances at which they are computed. A summary of the object may be invoked to see information for each lag, the total number of lags, and the maximum intersite distance. >scallops.var scallops.var >summary(scallops.var) In scallops.var, distance corresponds to the spatial lag (h in our usual notation), gamma is the variogram (h), and np is the number of points in each bin. In the output of the summary command, maxdist is the largest distance on the map, nlag is the number of lags (variogram bins), and lag is maxdist/nlag, which is the width of each variogram bin. By contrast, the robust method is obtained simply by specifying \robust" in the method option: © 2004 by CRC Press LLC gamma 3 gamma 3 0.6 0.8 distance Figure 2.13 0.6 0.8 distance Ordinary (a) and robust (b) empirical variograms for the scallops >scallops.var.robust par(mfrow=c(1,2)) >plot(scallops.var) >plot(scallops.var.robust) >printgraph(file=``scallops.empvario.ps'') The output from this picture is shown in Figure 2.13. The covariogram (a plot of an isotropic empirical covariance function (2.15) versus distance) and correlogram (a plot of (2.15) divided by C^ (0) versus distance) may be created using the covariogram and correlogram functions. (When we are through here, we set the graphics device back to having one plot per page using the par command.) > scallops.cov plot(scallops.cov) >scallops.corr plot(scallops.corr) >printgraph(file=``scallops.covariograms.ps'') > par(mfrow=c(1,1)) Theoretical variograms may also be computed and compared to the observed data as follows. Invoke the model.variogram function and choose an © 2004 by CRC Press LLC initial theoretical model; say, range=0.8, sill=1.25, and nugget=0.50. Note that the fun option speci es the variogram type we want to work with. Below we choose the spherical (spher.vgram); other options include exponential (exp.vgram), Gaussian (gauss.vgram), linear (linear.vgram), and power (power.vgram). >model.variogram(scallops.var.robust, fun=spher.vgram, range=0.80, sill=1.25, nugget = We remark that this particular model provides relatively poor t to the data; the objective function takes a relatively high value (roughly 213). (You are asked to nd a better- tting model in Exercise 7.) Formal estimation procedures for variograms may also be carried out by invoking the nls function on the spher.fun function that we can create: >spher.fun scallops.nl1 coef(scallops.nl1) Thus we are using nls to minimize the squared distance between the theoretical and empirical variograms. Note there is nothing to the left of the \~" character at the beginning of the nls statement. Many times our interest lies in spatial residuals, or what remains after detrending the response from the eects of latitude and longitude. An easy way to do that is by using the gam function in S-plus. Here we plot the residuals of the scallops lgcatch variable after the eects of latitude and longitude have been accounted for: >gam.scp par(mfrow=c(2,1)) >plot(gam.scp, residuals=T, rug=F) Finally, at the end of the session we unload the spatial module, after which we can either do other work, or quit. >module(spatial, unload=T) >q() 2.5.2 Kriging in S+SpatialStats We now present a tutorial in using S+SpatialStats to do basic kriging. At the command prompt type S-plus to start the software, and load the spatial module into the environment: >module(spatial) © 2004 by CRC Press LLC Recall that the scallops data is preloaded as a data frame in S-plus, and a descriptive summary of this data set can be obtained by typing >summary(scallops) while the rst row of the data may be seen by typing >scallops[1,] Recall from our Section 2.5.1 tutorial that the data on tcatch was highly skewed, so we needed to create another dataframe called myscallops, which includes the log transform of tcatch (or actually log(tcatch +1)). We called this new variable lgcatch. We then computed both the regular empirical variogram and the \robust" (Cressie and Hawkins) version, and compared both to potential theoretical models using the variogram command. >scallops.var.robust trellis.device() >plot(scallops.var.robust) Next we recall S-plus' ability to compute theoretical variograms. We invoke the model.variogram function, choosing a theoretical starting model (here, range=0.8, sill=4.05, and nugget=0.80), and using fun to specify the variogram type. >model.variogram(scallops.var.robust, fun=spher.vgram, range=0.80, sill=4.05, nugget = 0.80) >printgraph(file=``scallops.variograms.ps'') The output from this command (robust empirical semivariogram with this theoretical variogram overlaid) is shown in Figure 2.14. Note again the model.variogram command allows the user to alter the theoretical model and continually recheck the value of the objective function (where smaller values indicate better t of the theoretical to the empirical). Formal estimation procedures for variograms may also be carried out by invoking the nls function on the spher.fun function that we create: >spher.fun scallops.nl1 summary(scallops.nl1) We now call the kriging function krige on the variogram object to produce estimates of the parameters for ordinary kriging: >scallops.krige newdata scallops.predicttwo scallops.predict scallops.predict[1,] >x y z scallops.predict.interp persp(scallops.predict.interp) It may be useful to recall the location of the sites and the surface plot of the raw data for comparison. We create these plots on a separate graphics device: >trellis.device() >plot(myscallops$long, myscallops$lat) >int.scp persp(int.scp) Figure 2.15 shows these two perspective plots side by side for comparison. The predicted surface on the left is smoother, as expected. It is also useful to have a surface plot of the standard errors, since we expect to see higher standard errors where there is less data. This is well illustrated by the following commands: >z.se scallops.predict.interp.se persp(scallops.predict.interp.se) Other plots, such as image plots for the prediction surface with added contour lines, may be useful: >image(scallops.predict.interp) © 2004 by CRC Press LLC Z 4 Z -2 0 2 4 6 8 40 .5 40 Y 39 .5 -72 .5 -72 3 -7 X .5 -73 40 39 Y .5 -72 -73 X 5 . -73 Perspective plots of the kriged prediction surface (a) and interpolated raw data (b), log scallop catch data. Figure 2.15 >par(new=T, xaxs=``d'', yaxs=``d'') >contour(scallops.predict.interp) Turning to universal kriging, here we illustrate with the scallops data using latitude and longitude as the covariates (i.e., trend surface modeling). Our covariance matrix X is therefore n 3 (n = 148 here) with columns corresponding to the intercept, latitude and longitude. >scallops.krige.universal scallops.predict.universal scallops.predict.universal[1,] >x y z scallops.predict.interp persp (scallops.predict.interp) >q() © 2004 by CRC Press LLC 2.5.3 EDA, variograms, and kriging in geoR R is an increasing popular freeware alternative to S-plus, available from the web at www.r-project.org. In this subsection we describe methods for kriging and related geostatistical operations available in geoR, a geostatistical data analysis package using R, which is also freely available on the web at www.est.ufpr.br/geoR/. Since the syntax of S-plus and R is virtually identical, we do not spend time here repeating the material of the past subsection. Rather, we only highlight a few dierences in exploratory data analysis steps, before moving on to model tting and kriging. Consider again the scallop data. We recall that it is often helpful to create image plots and place contour lines on the plot. These provide a visual idea of the realized spatial surface. In order to do these, it is necessary to rst carry out an interpolation. This essentially lls up the gaps (i.e., where there are no points) using a bivariate linear interpolation. This is done using the interp.new function in R, located in the library akima. Then the contour lines may be added to the plot using the contour command. The results are shown in Figure 2.16. >library(akima) >int.scp interp.new(myscallops$long, myscallops$lat, myscallops$lgcatch, extrap=T) image(int.scp, xlim=range(myscallops$long), ylim=range(myscallops$lat)) contour(int.scp, add=T) > > Another useful way of looking at the data is through surface plots (or perspective plots). This is done by invoking the persp function: xlim=range(myscallops$long), ylim=range(myscallops$lat)) The empirical variogram can be estimated in the classical way and in the robust way with in-built R functions. There are several packages in R that perform the above computations. We illustrate the geoR package, mainly because of its additional ability to t Bayesian geostatistical models as well. Nevertheless, the reader might want to check out the CRAN website (http://cran.us.r-project.org/) for the latest updates and several other spatial packages. In particular, we mention fields, gstat, sgeostat, spatstat, and spatdep for exploratory work and some model tting of spatial data, and GRASS and RArcInfo for interfaces to GIS software. Returning to the problem of empirical variogram tting, we rst invoke the geoR package. We will use the function variog in this package, which takes in a geodata object as input. To do this, we rst create an object, obj, with only the coordinates and the response. We then create the geodata © 2004 by CRC Press LLC 40.0 39.5 39.0 Longitude Figure 2.16 An image plot of the scallops data, with contour lines super-imposed. object using the as.geodata function, specifying the columns holding the coordinates, and the one holding the response. >obj cbind(myscallops$long,myscallops$lat, myscallops$lgcatch) >scallops.geo as.geodata(myscallops,coords.col=1:2, data.col=3) Now, a variogram object is created. >scallops.var variogram(scallops.geo, estimator.type=''classical'') The robust estimator (see Cressie, 1993, p.75) can be obtained by typing >scallops.var.robust variogram(scallops.geo, estimator.type=''modulus'') A plot of the two semivariograms (by both methods, one below the other, as in Figure 2.17) can be obtained as follows: >par(mfrow=c(2,1)) >plot(scallops.var) >plot(scallops.var.robust) Covariograms and correlograms are invoked using the covariogram and functions. The remaining syntax is the same as in S-plus. The function variofit estimates the sill, the range, and the nugget © 2004 by CRC Press LLC 1.5 distance Plots of the empirical semivariograms for the scallops data: (a) classical; (b) robust. Figure 2.17 parameters under a speci ed covariance model. A variogram object (typically an output from the variog function) is taken as input, together with initial values for the range and sill (in ini.cov.pars), and the covariance model is speci ed through cov.model. The covariance modeling options include exponential, gaussian, spherical, circular, cubic, wave, power, powered.exponential, cauchy, gneiting, gneiting.matern, and pure.nugget (no spatial covariance). Also, the initial values provided in ini.cov.pars do not include those for the nugget. It is concatenated with the value of the nugget option only if fix.nugget=FALSE. If the latter is TRUE, then the value in the nugget option is taken as the xed true value. Thus, with the exponential covariance function for the scallops data, we can estimate the parameters (including the nugget eect) using >scallops.var.fit variofit(scallops.var.robust, ini.cov.pars = c(1.0,50.0), cov.model=''exponential'', fix.nugget=FALSE, nugget=1.0) The output is given below. Notice that this is the weighted least squares approach for tting the variogram: variofit: model parameters estimated by WLS (weighted least squares): covariance model is: matern with fixed kappa = 0.5 (exponential) © 2004 by CRC Press LLC BASICS OF POINT-REFERENCED DATA MODELS parameter estimates: tausq sigmasq phi 0.0000 5.1289 0.2160 Likelihood model tting In the previous section we saw parameter estimation through weighted least squares of variograms. Now we introduce likelihood-based and Bayesian estimation functions in geoR. Both maximum likelihood and REML methods are available through the geoR function likfit. To estimate the parameters for the scallops data, we invoke >scallops.lik.fit likfit(scallops.geo, ini.cov.pars=c(1.0,2.0),cov.model = ``exponential'', trend = ``cte'', fix.nugget = FALSE, nugget = 1.0, nospatial = TRUE, method.lik = ``ML'') The option trend = ``cte'' means a spatial regression model with constant mean. This yields the following output: > scallops.lik.fit likfit: estimated model parameters: beta tausq sigmasq phi 2.3748 0.0947 5.7675 0.2338 Changing method.lik = ``REML'' yields the restricted maximum likelihood estimation. Note that the variance of the estimate of beta is available by invoking scallops.lik.fit$beta.var, so calculating the con dence interval for the trend is easy. However, the variances of the estimates of the covariance parameters is not easily available within geoR. Kriging in geoR There are two in-built functions in geoR for kriging: one is for classical or conventional kriging, and is called krige.conv, while the other performs Bayesian kriging and is named krige.bayes. We now brie y look into these two types of functions. The krige.bayes function is not as versatile as WinBUGS in that it is more limited in the types of models it can handle, and also the updating is not through MCMC methods. Nevertheless, it is a handy tool and already improved upon the aforementioned likelihood methods by providing posterior samples of all the model parameters, which lead to estimation of their variability. The krige.bayes function can be used to estimate parameters for spatial regression models. To t a constant mean spatial regression model for the scallops data, without doing predictions, we invoke krige.bayes specifying a constant trend, an exponential covariance model, a at prior for the © 2004 by CRC Press LLC constant trend level, the reciprocal prior for discrete uniform prior for tausq. (Jerey's), and a > > > > > > out scallops.krige.bayes$posterior out out$sample beta.qnt quantile(out$beta, c(0.50,0.025,0.975)) phi.qnt quantile(out$phi, c(0.50,0.025,0.975)) sigmasq.qnt quantile(out$sigmasq, c (0.50,0.025,0.975)) tausq.rel.qnt quantile(out$tausq.rel, c(0.50,0.025,0.975)) beta.qnt 50% 2.5% 97.5% 1.931822 -6.426464 7.786515 > phi.qnt 50% 2.5% 97.5% 0.5800106 0.2320042 4.9909913 sigmasq.qnt 50% 2.5% 97.5% 11.225002 4.147358 98.484722 > tausq.rel.qnt 50% 2.5% 97.5% 0.03 0.00 0.19 Note that tausq.rel refers to the ratio of the nugget variance to the spatial variance, and is seen to be negligible here, too. This is consistent with all the earlier analysis, showing that a purely spatial model (no nugget) would perhaps be more suitable for the scallops data. 2.6 Exercises 1. For semivariogram models #2, 4, 5, 6, 7, and 8 in Subsection 2.1.3, (a) identify the nugget, sill, and range (or eective range) for each; © 2004 by CRC Press LLC (b) nd the covariance function C (t) corresponding to each (t), provided it exists. 2. Prove that for Gaussian processes, strong stationarity is equivalent to weak stationarity. 3. Consider the triangular (or \tent") covariance function, 2 (1 ; khk =) if khk ; 2 > 0; > 0; C (khk) = : 0 if khk > It is valid in one dimension. (The reader can verify that it is the characteristic function of the density function f (x) proportional to 1 ; cos(x)=x2 .) Nowpin twopdimensions, consider a 6 8 grid with locations sjk = (j= 2; k= 2); j = 1; : : : ; 6; k = 1; : : : ; 8. Assign ajk to sjk such that ajk = 1 if j + k is even, ajk = ;1 if j + k is odd. Show that V ar[ajk Y (sjk )] < 0, and hence that the triangular covariance function is invalid in two dimensions. 4. The turning bands method (Christakos, 1984; Stein, 1999a) is a technique for creating stationary covariance functions on ngb sids.mapgrp[(sids.mapgrp==1)] >sids.mapgrp[(sids.mapgrp==4)] >sids.mapgrp[(sids.mapgrp==0)] title(main="Actual Transformed SIDS Rates") >legend(locator(1), legend= c("3.5"), fill=c(4,2,3,1)) In the rst command, the modi ed mapping vector sids.mapgrp is speci ed as the grouping variable for the dierent colors. The fill=T option automates the shading of regions, while the next command (with add=T) adds the county boundaries. Finally, the locator(1) option within the legend command waits for the user to click on the position where the legend is desired; Figure 3.5(a) contains the result we obtained. We hasten to add that one can automate the placing of the legend by replacing the © 2004 by CRC Press LLC BASICS OF AREAL DATA MODELS a) actual transformed SIDS rates b) fitted SIDS rates from SAR model Unsmoothed raw (a) and spatially smoothed tted (b) rates, North Carolina SIDS data. Figure 3.5 option with actual (x; y) coordinates for the upper left corner of the legend box. To draw a corresponding map of the tted values from our SAR model (using our parameter estimates in the mean structure), we must rst create a modi ed vector of the ts (again due to the presence of Currituck county): locator(1) >sids.race.fit sids.race.fit.map sids.race.fit.mapgrp breaks.sids) © 2004 by CRC Press LLC sids.race.fit.mapgrp[(sids.race.fit.mapgrp==1)] >sids.race.fit.mapgrp[(sids.race.fit.mapgrp==4)] >sids.race.fit.mapgrp[(sids.race.fit.mapgrp==0)] >map("county", "north carolina", fill=T, 95 noise signal sids.race.pred sids.race.pred.map ;0, j bij 1 for all i, and j bij < 1 for at least one i. Let D = Diag i2 be a diagonal matrix with positive elements i2 such that D;1 (I ; B ) is symmetric; that is, bij =i2 = bji =j2 , for all i; j . Show that D;1 (I ; B ) is positive de nite. 4. Looking again at (3.13), obtain a simple sucient condition on B such © 2004 by CRC Press LLC BASICS OF AREAL DATA MODELS that the CAR prior with precision matrix D;1 (I ; B ) is a pairwise dierence prior, as in (3.16). 5. Show that ;y1 = Dw; ; W is nonsingular (thus resolving the impropri ety in (3.15)) if 2 1=(1) ; 1=(n) , where (1) < (2) < < (n) are the ordered eigenvalues of Dw;1=2 WDw;1=2 . 6. Show that if all entries in W are nonnegative and Dw ;W is nonsingular with > 0, then all entries in (Dw ; W );1 are nonnegative. f just 7. Recalling the SAR formulation using the scaled adjacency matrix W f below (3.25), prove that I ; W will be nonsingular if 2 (;1; 1), so that may be sensibly referred to as an \autocorrelation parameter." 8. In the setting of Subsection 3.3.1, if (;y1 )ij = 0, then show that Yi and Yj are conditionally independent given Yk ; k 6= i; j . 9. The le www.biostat.umn.edu/~brad/data/state-sat.dat gives the 1999 state average SAT data (part of which is mapped in Figure 3.1), while www.biostat.umn.edu/~brad/data/contig-lower48.dat gives the contiguity (adjacency) matrix for the lower 48 U.S. states (i.e., excluding Alaska and Hawaii, as well as the District of Columbia). (a) Use the S+SpatialStats software to construct a spatial.neighbor object from the contiguity le. (b) Use the slm function to t the SAR model of Section 3.4, taking the verbal SAT score as the response Y and the percent of eligible students taking the exam in each state as the covariate X . Use row-normalized weights based on the contiguity information in spatial.neighbor object. Is knowing X helpful in explaining Y ? (c) Using the maps library in S-plus, draw choropleth maps similar to Figure 3.1 of both the tted verbal SAT scores and the spatial residuals from this t. Is there evidence of spatial correlation in the response Y once the covariate X is accounted for? (d) Repeat your SAR model analysis above, again using slm but now assuming the CAR model of Section 3.3. Compare your estimates with those from the SAR model and interpret any changes. (e) One might imagine that the percentage of eligible students taking the exam should perhaps aect the variance of our model, not just the mean structure. To check this, re t the SAR model replacing your row-normalized weights with weights equal to the reciprocal of the percentage of students taking the SAT. Is this model sensible? 10. Consider the data www.biostat.umn.edu/~brad/data/Columbus.dat, taken from Anselin (1988, p. 189). These data record crime information for 49 neighborhoods in Columbus, OH, during 1980. Variables measured include NEIG, the neighborhood id value (1{49); HOVAL, its mean © 2004 by CRC Press LLC housing value (in $1,000); INC, its mean household income (in $1,000); CRIME, its number of residential burglaries and vehicle thefts per thousand households; OPEN, a measure of the neighborhood's open space; PLUMB, the percentage of housing units without plumbing; DISCBD, the neighborhood centroid's distance from the central business district; X, an x-coordinate for the neighborhood centroid (in arbitrary digitizing units, not polygon coordinates); Y, the same as X for the y-coordinate; AREA, the neighborhood's area; and PERIM, the perimeter of the polygon describing the neighborhood. (a) Use S+SpatialStats to construct spatial.neighbor objects for the neighborhoods of Columbus based upon centroid distances less than i. 3.0 units, ii. 7.0 units, iii. 15 units. (b) For each of the four spatial neighborhoods constructed above, use the slm function to t SAR models with CRIME as the dependent variable, and HOVAL, INC, OPEN, PLUMB, and DISCBD as the covariates. Compare your results and interpret your parameter estimates in each case. (c) Repeat your analysis using Euclidean distances in the B matrix itself. That is, in equation (3.26), set B = W with the Wij the Euclidean distance between location i and location j . (d) Repeat part (b) for CAR models. Compare your estimates with those from the SAR model and interpret them. © 2004 by CRC Press LLC CHAPTER 4 Basics of Bayesian inference In this chapter we provide a brief review of hierarchical Bayesian modeling and computing for readers not already familiar with these topics. Of course, in one chapter we can only scratch the surface of this rapidly expanding eld, and readers may well wish to consult one of the many recent textbooks on the subject, either as preliminary work or on an as-needed basis. It should come as little surprise that the book we most highly recommend for this purpose is the one by Carlin and Louis (2000); the Bayesian methodology and computing material below roughly follows Chapters 2 and 5, respectively, in that text. However, a great many other good Bayesian books are available, and we list a few of them and their characteristics. First we must mention the texts stressing Bayesian theory, including DeGroot (1970), Berger (1985), Bernardo and Smith (1994), and Robert (1994). These books tend to focus on foundations and decision theory, rather than computation or data analysis. On the more methodological side, a nice introductory book is that of Lee (1997), with O'Hagan (1994) and Gelman, Carlin, Stern, and Rubin (2004) oering more general Bayesian modeling treatments. 4.1 Introduction to hierarchical modeling and Bayes' Theorem By modeling both the observed data and any unknowns as random variables, the Bayesian approach to statistical analysis provides a cohesive framework for combining complex data models and external knowledge or expert opinion. In this approach, in addition to specifying the distributional model f (yj) for the observed data y = (y1 ; : : : ; yn ) given a vector of unknown parameters = (1 ; : : : ; k ), we suppose that is a random quantity sampled from a prior distribution (j), where is a vector of hyperparameters. For instance, yi might be the empirical mammography rate in a sample of women aged 40 and over from county i, i the underlying true mammography rate for all such women in this county, and a parameter controlling how these true rates vary across counties. If is © 2004 by CRC Press LLC known, inference concerning is based on its posterior distribution, p(jy; ) = p(py(y; jj) ) = R pp((yy;;jj))d = R ff((yyjj))((jj))d : (4:1) Notice the contribution of both the data (in the form of the likelihood f ) and the external knowledge or opinion (in the form of the prior ) to the posterior. Since, in practice, will not be known, a second stage (or hyperprior) distribution h() will often be required, and (4.1) will be replaced with R p(jy) = p(py(y; ) ) = R f f(y(yjj))((jj)h)h(() )ddd : Alternatively, we might replace by an estimate ^ obtained as the maxR imizer of the marginal distribution p(yj) = f (yj)(j)d, viewed as a function of . Inference could then proceed based on the estimated posterior distribution p(jy; ^ ), obtained by plugging ^ into equation (4.1). This approach is referred to as empirical Bayes analysis; see Berger (1985), Maritz and Lwin (1989), and Carlin and Louis (2000) for details regarding empirical Bayes methodology and applications. The Bayesian inferential paradigm oers potentially attractive advantages over the classical, frequentist statistical approach through its more philosophically sound foundation, its uni ed approach to data analysis, and its ability to formally incorporate prior opinion or external empirical evidence into the results via the prior distribution . Data analysts, formerly reluctant to adopt the Bayesian approach due to general skepticism concerning its philosophy and a lack of necessary computational tools, are now turning to it with increasing regularity as classical methods emerge as both theoretically and practically inadequate. Modeling the i as random (instead of xed) eects allows us to induce speci c (e.g., spatial) correlation structures among them, hence among the observed data yi as well. Hierarchical Bayesian methods now enjoy broad application in the analysis of spatial data, as the remainder of this book reveals. A computational challenge in applying Bayesian methods is that for most realistic problems, the integrations required to do inference under (4.1) are generally not tractable in closed form, and thus must be approximated numerically. Forms for and h (called conjugate priors) that enable at least partial analytic evaluation of these integrals may often be found, but in the presense of nuisance parameters (typically unknown variances), some intractable integrations remain. Here the emergence of inexpensive, highspeed computing equipment and software comes to the rescue, enabling the application of recently developed Markov chain Monte Carlo (MCMC) integration methods, such as the Metropolis-Hastings algorithm (Metropolis et al., 1953; Hastings, 1970) and the Gibbs sampler (Geman and Geman, 1984; Gelfand and Smith, 1990). This is the subject of Section 4.3. © 2004 by CRC Press LLC Illustrations of Bayes' Theorem Equation (4.1) is a generic version of what is referred to as Bayes' Theorem or Bayes' Rule. It is attributed to Reverend Thomas Bayes, an 18th-century nonconformist minister and part-time mathematician; a version of the result was published (posthumously) in Bayes (1763). In this subsection we consider a few basic examples of its use. Example 4.1 Suppose ; we have observed a single normal (Gaussian) observation Y N ; 2 with 2 known, so that the likelihood f (yj) = ; 2 N yj; 2 p12 exp(; (y2;2) ); y 2 1 and ESS () < N , so that Vd arESS (^N ) > Vd ariid (^N ), in concert with intuition. That is, since we have fewer than N eective samples, we expect some in ation in the variance of our estimate. In practice, the autocorrelation time () in (4.21) is often estimated simply by cutting o the summation when the magnitude of the terms rst drops below some \small" value (say, 0.1). This procedure is simple but may lead to a biased estimate of (). Gilks et al. (1996, pp. 50{51) recommend an initial convex sequence estimator mentioned by Geyer (1992) which, while while still output-dependent and slightly more complicated, actually yields a consistent (asymptotically unbiased) estimate here. A nal and somewhat simpler (though also more naive) method of estimating V ar(^N ) is through batching. Here we divide our single long run of length N into m successive batches of length k (i.e., N = mk), with Pm 1 ^ batch means B1 ; : : : ; Bm . Clearly N = B = m i=1 Bi . We then have the variance estimate m X Vd arbatch (^N ) = m(m1; 1) (Bi ; ^N )2 ; (4:22) i=1 provided that k is large enough so that the correlation between batches is negligible, and m is large enough to reliably estimate V ar(Bi ). It is important to verify that the batch means are indeed roughly independent, say, by checking whether the lag 1 autocorrelation of the Bi is less than 0.1. If this is not the case, we must increase k (hence N , unless the current m is already quite large), and repeat the procedure. Regardless of which of the above estimates V^ is used to approximate V ar(^N ), a 95% con dence interval for E (jy) is then given by p ^N z:025 V^ ; where z:025 = 1:96, the upper .025 point of a standard normal distribution. If the batching method is used with fewer than 30 batches, it is a good idea to replace z:025 by tm;1;:025 , the upper .025 point of a t distribution with m ; 1 degrees of freedom. WinBUGS oers both naive (4.20) and batched (4.22) variance estimates; this software is (at last!) the subject of the next section. 4.4 Computer tutorials 4.4.1 Basic Bayesian modeling in R or S-plus In this subsection we merely point out that for simple (typically lowdimensional) Bayesian calculations employing standard likelihoods paired with conjugate priors, the built-in density, quantile, and plotting functions in standard statistical packages may well oer sucient power; there is no © 2004 by CRC Press LLC need to use a \Bayesian" package per se. In such cases, statisticians might naturally turn to S-plus or R (the increasingly popular freeware package that is \not unlike S") due to their broad array of special functions (especially those oering summaries of standard distributions), graphics, interactive environments, and easy extendability. As a concrete example, suppose we are observing a data value Y from a Bin(n; ) distribution, with density proportional to p(yj) / y (1 ; )n;y : (4:23) The Beta(; ) distribution oers a conjugate prior for this likelihood, since its density is proportional to (4.23) as a function of , namely p() / ;1 (1 ; ) ;1 : (4:24) Using Bayes' Rule (4.1), it is clear that p(jy) / y+;1 (1 ; )n;y+ ;1 / Beta(y + ; n ; y + ) ; (4.25) another Beta distribution. Now consider a setting where n = 10 and we observe Y = yobs = 7. Choosing = = 1 (i.e., a uniform prior for ), the posterior is a Beta(yobs + 1; n ; yobs + 1) = Beta(8; 4) distribution. In either R or S-plus we can obtain a plot of this distribution by typing > theta yobs qbeta(.5, yobs+1, n-yobs+1) while the endpoints of a 95% equal-tail credible interval are > qbeta(c(.025, .975), yobs+1, n-yobs+1) In fact, these points may be easily added to our posterior plot (see Figure 4.2) by typing > abline(v=qbeta(.5, yobs+1, n-yobs+1)) > abline(v=qbeta(c(.025, .975), yobs+1, n-yobs+1), lty=2) The pbeta and rbeta functions may be used similarly to obtain prespeci ed posterior probabilities (say, Pr( < 0:8jyobs)) and random draws from the posterior, respectively. Indeed, similar density, quantile, cumulative probability, and random generation routines are available in R or S-plus for a wide array of standard distributional families that often emerge as posteriors (gamma, normal, multivariate normal, Dirichlet, etc.). Thus in settings where MCMC techniques are unnecessary, these languages may oer the most sensible © 2004 by CRC Press LLC 1.5 0.0 posterior density Figure 4.2 Illustrative beta posterior, with vertical reference lines added at the .025, .5, and .975 quantiles. approach. They are especially useful in situations requiring code to be wrapped around statements like those above so that repeated posterior calculations may be performed. For example, when designing an experiment to be analyzed at some later date using a Bayesian procedure, we would likely want to simulate the procedure's performance in repeated sampling (the Bayesian analog of a power or \sample size" calculation). Such repeated sampling might be of the data for xed parameters, or over both the data and the parameters. (We hasten to add that WinBUGS can be called from R, albeit in a special way; see www.stat.columbia.edu/~gelman/bugsR/. Future releases of WinBUGS may be available directly within R itself.) 4.4.2 Advanced Bayesian modeling in WinBUGS In this subsection we provide a introduction to Bayesian data analysis in WinBUGS, the most well-developed and general Bayesian software package available to date. WinBUGS is the Windows successor to BUGS, a UNIX package whose name originally arose as a humorous acronym for Bayesian inference Using Gibbs Sampling. The package is freely available from the website http://www.mrc-bsu.cam.ac.uk/bugs/welcome.shtml. The sofware comes with a user manual, as well as two examples manuals that are enormously helpful for learning the language and various strategies for Bayesian data analysis. We remark that for further examples of good applied Bayesian work, © 2004 by CRC Press LLC in addition to the ne book by Gilks et al. (1996), there are the series of \Bayesian case studies" books by Gatsonis et al. (1993, 1995, 1997, 1999, 2002, 2003), and the very recent Bayesian modeling book by Congdon (2001). While this lattermost text assumes a walking familiarity with the Bayesian approach, it also includes a great many examples and corresponding computer code for their implementation in WinBUGS. WinBUGS has an interactive environment that enables the user to specify models (hierarchical) and it actually performs Gibbs sampling to generate posterior samples. Convergence diagnostics, model checks and comparisons, and other helpful plots and displays are also available. We will now look at some WinBUGS code for greater insight into its modeling language. Example 4.3 The line example from the main WinBUGS manual will be considered in stages, in order to both check the installation and to illustrate the use of WinBUGS. Consider a set of 5 (obviously arti cial) (X; Y ) pairs: (1, 1), (2, 3), (3, 3), (4, 3), (5, 5). We shall t a simple linear regression of Y on X using the notation, ; Yi N i ; 2 ; where i = + xi : As the WinBUGS code below illustrates, the language allows a concise expression of the model, where dnorm(a,b) denotes a normal distribution with mean a and precision (reciprocal of the variance) b, and dgamma(c,d) denotes a gamma distribution with mean c=d and variance c=d2 . The data means mu[i] are speci ed using a logical link (denoted by 0 almost surely, but in practice we may observe some zij = 0 as, for example, with the log counts in the scallop data example. A correction is needed and can be achieved by adding to zij;obs where is, say, one half of the smallest possible positive zij;obs .) Setting k = 1 in (5.12), we note that of the Bessel mixtures, the vecomponent model with xed 's and random weights is best according to the D1;m statistic. Here, given max = 7:5, the nodes are xed to be 1 = 1:25; 2 = 2:5; 3 = 3:75; 4 = 5:0, and 5 = 6:25. One would expect that the t measured by the G1;m criterion should improve with increasing © 2004 by CRC Press LLC Parametric Semivariograms Exponential Gaussian Cauchy Spherical Bessel-J0 Bessel Mixtures - Random Weights Two Three Four Five Bessel Mixtures - Random Phi’s Two Three Four Five Figure 5.4 Posterior means for various semivariogram models. p. However, the models do not form a nested sequence in p, except in some instances (e.g., the p = 2 model is a special case of the p = 5 model). Thus, the apparent poorer t of the four-component xed model relative to the three-component model is indeed possible. The random Bessel mixture models were all very close and, as a class, these models t as well or better than the best parametric model. Hence, modeling mixtures of Bessel functions appears more sensitive to the choice of xed 's than to xed weights. 5.1.4 Modeling geometric anisotropy As mentioned in Section 2.2.5, anisotropy refers to the situation where the spatial correlation between two observations depends upon the separation vector between their locations, rather than merely its length (i.e., the distance between the points). Thus here we have Cov (Y (s + h) ; Y (s)) = (h; ) : Ansiotropy is generally dicult to deal with, but there are special cases © 2004 by CRC Press LLC Model Parametric exponential Gaussian Cauchy spherical Bessel independent Semiparametric xed ` , random w` : two three four ve random ` , xed w` : two three four ve Table 5.1 Model choice for tted variogram models, 1993 scallop data. that are tractable yet still interesting. Among these, the most prominent in applications is geometric anisotropy. This refers to the situation where the coordinate space can be linearly transformed to an isotropic space. A linear transformation may correspond to rotation or stretching of the coordinate axes. Thus in general, (h; ) = 0 (jjLhjj ; ) ; where l is a d d matrix describing the linear transformation. Of course, if L is the identity matrix, this reduces to the isotropic case. We assume a second-order stationary normal model for Y, arising from the customary model, Y (s) = + w(s) + (s) as in (5.1). This yields Y N (1 ; ()), where = ( 2 ; 2 ; B )T , B = LT L, and () = 2 I + 2 H ((h0 B h) 21 ) : (5:13) In (5.13), the matrix H has (i; j )th entry ((h0ij B hij ) 21 ) where is a valid correlation function and hij = si ; sj . Common forms for would be those in Table 2.2. In (5.13), 2 is the semiovariogram nugget and 2 + 2 1is the 1 sill. The variogram is 2 ( 2 ; 2 ; (h0 B h) 2 ) = 2( 2 + 2 (1 ; ((h0 B h) 2 )): © 2004 by CRC Press LLC > 16 150 - 200 12 - 16 105 - 150 8 - 12 95 - 105 7 - 8 50 - 95 1 - 7 < 50 0 - 1 N 60 Scotland lip cancer data: (a) crude standardized mortality ratios (observed / expected 100); (b) AFF covariate values. Figure 5.10 Figure 5.11 contains the WinBUGS code for this problem, which is also available at http://www.biostat.umn.edu/~brad/data2.html. Note the use of vague priors for c and h as suggested by Best et al. (1999), and the use of the sd function in WinBUGS to greatly facilitate computation of . The basic posterior (mean, sd) and convergence (lag 1 autocorrelation) summaries for ; 1 ; 1 , and 56 are given in Table 5.4. Besides the Best et al. (1999) prior, two priors inspired by equation (5.48) are also reported; see Carlin and Perez (2000). The AFF covariate appears signi cantly different from 0 under all 3 priors, although convergence is very slow (very high values for l1acf). The excess variability in the data seems mostly due to clustering (E (jy) > :50), but the posterior distribution for does not seem robust to changes in the prior. Finally, convergence for the i (reason- © 2004 by CRC Press LLC model { for (i in 1 : regions) { O[i] ~ dpois(mu[i]) log(mu[i]) 0), and Z2 (s) = I (Y2 (s) > 0). Approximate the cross-covariance matrix of Z(s) = (Z1(s); Z2(s))T . 4. The data in www.biostat.umn.edu /~brad/data/ColoradoLMC.dat record maximum temperature (in tenths of a degree Celsius) and precipitation (in cm) during the month of January 1997 at 50 locations in the U.S. state of Colorado. (a) Let X denote temperature and Y denote precipitation. Following the model of Example 7.3, t an LMC model to these data using the conditional approach, tting X and then Y jX . (b) Repeat this analysis, but this time tting Y and then X jY . Show that your new results agree with those from part (a) up to simulation variability. 5. If Cl and Cl0 are isotropic, obtain Cll0 (s) in (7.31) by transformation to polar coordinates. © 2004 by CRC Press LLC 6. The usual and generalized (but still proper) MCAR models may be constructed using linear transformations of some nonspatially correlated T variables. T Consider a vector blocked by components, say = T 1 ; 2 , where each i is n 1, n being the number of areal units. Suppose we look upon these vectors as arising from linear transformations 1 = A1 v1 and 2 = A2 v2 ; where A1 and A2 are any n n matrices, v1 = (v11 ; : : : ; v1n )T and v2 = (v21 ; : : : ; v2n)T with covariance structure Cov (v1i ; v1j ) = 11 I[i=j] ; Cov (v1i ; v2j ) = 12 I[i=j] ; and Cov (v2i ; v2j ) = 22 I[i=j] ; where I[i =j] = 1 if i = j and 0 otherwise. Thus, although v1 and v2 are associated, their nature of association is nonspatial in that covariances remain same for every areal unit, and there is no association between variables in dierent units. (a) Show that the dispersion matrix (v ;v ) equals I , where = (ij )i;j=1;2 . (b) Show that setting A1 = A2 = A yields a separable covariance structure for . What choice of A would render a separable MCAR model, analogous to (7.34)? (c) Show that appropriate (dierent) choices of A1 and A2 yield the generalized MCAR model, as in (7.36). 1 © 2004 by CRC Press LLC CHAPTER 8 Spatiotemporal modeling In both theoretical and applied work, spatiotemporal modeling has received dramatically increased attention in the past few years. This reason behind this increase is easy to see: the proliferation of data sets that are both spatially and temporally indexed, and the attendant need to understand them. For example, in studies of air pollution, we are interested not only in the spatial nature of a pollutant surface, but also in how this surface changes over time. Customarily, temporal measurements (e.g., hourly, daily, three-day average, etc.) are collected at monitoring sites over several years. Similarly, with climate data we may be interested in spatial patterns of temperature or precipitation at a given time, but also in dynamic patterns in weather. With real estate markets, we might be interested in how the single-family home sales market changes on a quarterly or annual basis. Here an additional wrinkle arises in that we do not observe the same locations for each time period; the data are cross-sectional, rather than longitudinal. Applications with areal unit data are also commonplace. For instance, we may look at annual lung cancer rates by county for a given state over a number of years to judge the eectiveness of a cancer control program. Or we might consider daily asthma hospitalization rates by zip code, over a period of several months. From a methodological point of view, the introduction of time into spatial modeling brings a substantial increase in the scope of our work, as we must make separate decisions regarding spatial correlation, temporal correlation, and how space and time interact in our data. Such modeling will also carry an obvious associated increase in notational and computational complexity. As in previous chapters, we make a distinction between the cases where the geographical aspect of the data is point level versus areal unit level. Again the former case is typically handled via Gaussian process models, while the latter often uses CAR speci cations. A parallel distinction could be drawn for the temporal scale: is time viewed as continuous (say, over tij (and in fact Tij = 1 if Nij = 0). Given Nij ; the Uijk 's are independent with survival function S (tj ij ) and corresponding density function f (tj ij ). The parameter ij is a collection of all the parameters (including possible regression parameters) that may be involved in a parametric speci cation for the survival function S . In this section we will work with a two-parameter Weibull distribution speci cation for the density function f (tj ij ), where we allow the Weibull scale parameter to vary across the regions, and , which may serve as a link to covariates in a regression setup, to vary across individuals. Therefore f (tji ; ij ) = i ti ;1 exp (ij ; ti exp (ij )). In terms of the hazard function h, f (tji ; ij ) = h (tji ; ij ) S (tji ; ij ), with h (t; i ; ij ) = i ti ;1 exp (ij ) and S (tji ; ij ) = exp (;ti exp (ij )). Note we implicitly assume proportional hazards, with baseline hazard function h0 (tji ) = i ti ;1 : Thus an individual ij who is censored at time tij before undergoing the event contributes (S (tij ji ; ij ))Nij to the likelihood, while an individual who experiences the event at time tij contributes Nij (S (tij ji ; ij ))Nij ;1 f (tij ji ; ij ). The latter expression follows from the fact that the event is experienced when any one of the latent factors occurs. Letting ij be the observed event indicator for individual ij , this person L (tij jNij ; i ; ij ; ij ) ij = (S (tij ji ; ij ))Nij (1;ij ) Nij S (tij ji ; ij )Nij ;1 f (tij ji ; ij ) ; and the joint likelihood for all the patients can now be expressed as L (ftijQgj fNQij g ; fi g; fij g ; fij g) i L (t jN ; ; ; ) = Ii=1 nj=1 ij ij i ij ij QI Qni = i=1 j=1 (S(tij ji ; ij ))Nij (1;ij ) ij Nij S (tij i ; ij )Nij ;1 f (tij ji ; ij ) Q Q i = Ii=1 nj=1 (S (tij ji ; ij ))Nij ;ij (Nij f (tij ji ; ij ))ij : © 2004 by CRC Press LLC This expression can be rewritten in terms of the hazard function as ni I Y Y i=1 j =1 (S (tij ji ; ij ))Nij (Nij h (tij ji ; ij ))ij : A Bayesian hierarchical formulation is completed by introducing prior distributions on the parameters. We will specify independent prior distributions p (Nij jij ), p (i j ) and p (ij j ) for fNij g, fi g, and fij g, respectively. Here, , , and fij g are appropriate hyperparameters. Assigning independent hyperpriors p (ij j ) for fij g and assuming the hyperparameters = ( ; ; ) to be xed, the posterior distribution for the parameters, p (fij g ; fij g ; fNij g ; fi g j ftij g ; fij g), is easily found (up to a proportionality constant) using (9.15) as QI i [S (t j ; )]Nij [N h (t j ; )]ij p (i j ) nj=1 ij i ij ij ij i ij p (Nij jij ) p (ij j ) p (ij j )g : Chen et al. (1999) assume that the Nij are distributed as independent Poisson random variables with mean ij , i.e., p (Nij jij ) is Poisson (ij ). In this setting it is easily seen that the survival distribution for the (i; j )th patient, P (Tij tij ji ; ij ), is given by exp f;ij (1 ; S (tij ji ; ij ))g. Since S (tij ji ; ij ) is a proper survival function (corresponding to the latent factor times Uijk ), as tij ! 1, P (Tij tij ji ; ij ) ! exp (;ij ) > 0. Thus we have a subdistribution for Tij with a cure fraction given by exp (;ij ). Here a hyperprior on the ij 's would have support on the positive real line. While there could certainly be multiple latent factors that increase the risk of smoking relapse (age started smoking, occupation, amount of time spent driving, tendency toward addictive behavior, etc.), this is rather speculative and certainly not as justi able as in the cancer setting for which the multiple factor approach was developed (where Nij > 1 is biologically motivated). As such, we instead form our model using a single, omnibus, \propensity for relapse" latent factor. In this case, we think of Nij as a binary variable, and specify p (Nij jij ) as Bernoulli (1 ; ij ). In this setting it is easier to look at the survival distribution after marginalizing out the Nij . In particular, note that ij = 1 P (Tij tij ji ; ij ; Nij ) = S (tij j1i;; ij ) ; N Nij = 0 : That is, if the latent factor is absent, the subject is cured (does not experience the event). Marginalizing over the Bernoulli distribution for Nij , we obtain for the (i; j )th patient the survival function S (tij jij ; i ; ij ) P (Tij tij ji ; ij ) = ij + (1 ; ij ) S (tij ji ; ij ), which is the classic curerate model attributed to Berkson and Gage (1952) with cure fraction ij . Now we can write the likelihood function for the data marginalized over © 2004 by CRC Press LLC fNij g, L (ftij g j fi g; fij g; fij g ; fij g), as 1;ij ; dtdij S (tij jij ; i ; ij ) i=1Q j =1Q[S (tij jij ; i ; ij )] i [S (t j ; ; )]1;ij [(1 ; ) f (t j ; )]ij ; = Ii=1 nj=1 ij ij i ij ij ij i ij Q ni which in terms of the hazard function becomes ni I Y Y i=1 j =1 [S (tij jij ; i ; ij )]1;ij [(1 ; ij ) S (tij ji ; ij ) h (tij ji ; ij )]ij ; (9:16) where the hyperprior for ij has support on (0; 1). Now the posterior distribution of the parameters is proportional to L (ftij g j fi g; fij g; fij g ; fij g) 8 I < Y ni Y i=1 : j =1 p (i j ) 9 = p (ij j ) p (ij j ); : (9:17) Turning to the issue of incorporating covariates, in the general setting with Nij assumed to be distributed Poisson, Chen et al. (1999) propose their introduction inthe cure fraction through a suitable link function g , so that ij = g xTij e , where g maps the entire real line to the positive axis. This is sensible when we believe that the risk factors aect the probability of an individual being cured. Proper posteriors arise for the regression coecients e even under improper priors. Unfortunately, this is no longer true when Nij is Bernoulli (i.e., in the Berkson and Gage model). Vague but proper priors may still be used, but this makes the parameters dicult to interpret, and can often lead to poor MCMC convergence. Since a binary Nij seems most natural in our setting, we instead introduce covariates into S (tij ji ; ij ) through the Weibull link ij , i.e., we let ij = xTij . This seems intuitively more reasonable anyway, since now the covariates in uence the underlying factor that brings about the smoking relapse (and thus the rapidity of this event). Also, proper posteriors arise here for under improper posteriors even though Nij is binary. As such, henceforth we will only consider the situation where the covariates enter the model in this way (through the Weibull link function). This means we are unable to separately estimate the eect of the covariates on both the rate of relapse and the ultimate level of relapse, but \fair" estimation here (i.e., allocating the proper proportions of the covariates' eects to each component) is not clear anyway since at priors could be selected for , but not for e . Finally, all of our subsequent models also assume a constant cure fraction for the entire population (i.e., we set ij = for all i; j ). Note that the posterior distribution in (9.17) is easily modi ed to incorQ porate covariates. For example, with ij = xTij , we replace ij p (ij j ) © 2004 by CRC Press LLC in (9.17) with p ( j ), with as a xed hyperparameter. Typically a at or vague Gaussian prior may be taken for p ( j ). Interval-censored data The formulation above assumes that our observed data are right-censored. This means that we are able to observe the actual relapse time tij when it occurs prior to the nal oce visit. In reality, our study (like many others of its kind) is only able to determine patient status at the oce visits themselves, meaning we observe only a time interval (tijL ; tijU ) within which the event (in our case, smoking relapse) is known to have occurred. For patients who did not resume smoking prior to the end of the study we have tijU = 1, returning us to the case of right-censoring at time point tijL . Thus we now set ij = 1 if subject ij is interval-censored (i.e., experienced the event), and ij = 0 if the subject is right-censored. Following Finkelstein (1986), the general interval-censored cure rate likelihood, L (f(tijL ; tijU )g j fNij g ; fi g; fij g ; fij g), is given by ni I Y Y i=1 j =1 [S (tijL ji ; ij )]Nij ;ij fNij [S (tijL ji ; ij ) ; S (tijU ji ; ij )]gij = ni I Y Y i=1 j =1 [S (tijL ji ; ij Nij 1 ; SS ((ttijU jji;; ij )) ijL i ij As in the previous section, in the Bernoulli setup after marginalizing out the fNij g the foregoing becomes L (f(tijL ; tijU )g j fi g; fij g; fij g ; fij g), and can be written as ij ni I Y Y S (tijL jij ; i ; ij ) 1 ; SS ((ttijU jjij ;; i;; ij )) : (9:18) ijL ij i ij i=1 j =1 We omit details (similar to those in the previous section) arising from the Weibull parametrization and subsequent incorporation of covariates through the link function ij . 9.5.2 Spatial frailties in cure rate models The development of the hierarchical framework in the preceding section acknowledged the data as coming from I dierent geographical regions (clusters). Such clustered data are common in survival analysis and often modeled using cluster-speci c frailties i . As with the covariates, we will introduce the frailties i through the Weibull link as intercept terms in the log-relative risk; that is, we set ij = xTij + i . Here we allow the i to be spatially correlated across the regions; similarly we would like to permit the Weibull baseline hazard parameters, i , © 2004 by CRC Press LLC to be spatially correlated. A natural approach in both cases is to use a univariate CAR prior. While one may certainly employ separate, independent CAR priors on and flog i g, another option is to allow these two spatial priors to themselves be correlated. In other words, we may want a bivariate spatial model for the i = (i ; i )T = (i ; log i )T . As mentioned in Sections 7.4 and 9.4, we may use the MCAR distribution for this purpose. In our setting, the MCAR distribution on the concatenated vector = (T ; T )T is Gaussian with mean 0 and precision matrix ;1 (Diag (mi ) ; W ), where is a 2 2 symmetric and positive de nite matrix, 2 (0; 1), and mi and W remain as above. In the current context, we may also wish to allow dierent smoothness parameters (say, 1 and 2 ) for and , respectively, as in Section 9.4. Henceforth, in this section we will denote the proper MCAR with a common smoothness parameter by MCAR (; ), and the multiple smoothness parameter generalized MCAR by MCAR (1 ; 2 ; ). Combined with independent (univariate) CAR models for and , these oer a broad range of potential spatial models. 9.5.3 Model comparison Suppose we let denote the set of all model parameters, so that the deviance statistic (4.9) becomes D( ) = ;2 log f (yj ) + 2 log h(y) : (9:19) When DIC is used to compare nested models in standard exponential family settings, the unnormalized likelihood L( ; y) is often used in place of the normalized R form f (yj ) in (9.19), since in this case the normalizing function m( ) = L( ; y)dy will be free of and constant across models, hence contribute equally to the DIC scores of each (and thus have no impact on model selection). However, in settings where we require comparisons across dierent likelihood distributional forms, it appears one must be careful to use the properly scaled joint density f (yj ) for each model. We argue that use of the usual proportional hazards likelihood (which of course is not a joint density function) is in fact appropriate for DIC computation here, provided we make a fairly standard assumption regarding the relationship between the survival and censoring mechanisms generating the data. Speci cally, suppose the distribution of the censoring times is independent of that of the survival times and does not depend upon the survival model parameters (i.e., independent, noninformative censoring). Let g (tij ) denote the density of the censoring time for the ij th individual, with corresponding survival (1-cdf) function R (tij ). Then the right-censored likelihood (9.16) can be extended to the joint likelihood speci cation, QI Qni 1;ij i=1 j =1 [S (tij jij ; i ; ij )] [(1 ; ij ) S (tij ji ; ij ) h (tij ji ; ij )]ij [R (tij )]ij [g (tij )]1;ij ; © 2004 by CRC Press LLC as for example in Le (1997, pp. 69{70). While not a joint probability density, this likelihood is still an everywhere nonnegative and integrable function of the survival model parameters , and thus suitable for use with the Kullback-Leibler divergences that underlie DIC (Spiegelhalter et al., 2002, p. 586). But by assumption, R (t) and g (t) do not depend upon . Thus, like an m( ) that is free of , they may be safely ignored in both the pD and DIC calculations. Note this same argument implies that we can use the unnormalized likelihood (9.16) when comparing not only nonnested parametric survival models (say, Weibull versus gamma), but even parametric and semiparametric models (say, Weibull versus Cox) provided our de nition of \likelihood" is comparable across models. Note also that here our \focus" (in the nomenclature of Spiegelhalter et al., 2002) is solely on . An alternative would be instead to use a missing data formulation, where we include the likelihood contribution of fsij g, the collection of latent survival times for the right-censored individuals. Values for both and the fsij g could then be imputed along the lines given by Cox and Oakes (1984, pp. 165{166) for the EM algorithm or Spiegelhalter et al. (1995b, the \mice" example) for the Gibbs sampler. This would alter our focus from to ( ; fsij g), and pD would re ect the correspondingly larger eective parameter count. Turning to the interval censored case, here matters are only a bit more complicated. Converting the interval-censored likelihood (9.18) to a joint likelihood speci cation yields ni I Y Y (tijU jij ; i ; ij ) ij S S (tijL jij ; i ; ij ) 1 ; S (tijL jij ; i ; ij ) ij [R (tijL )]ij 1 ; RR ((ttijU )) [g (tijL )]1;ij : ijL Now [R (tijL )]ij (1 ; R (tijU ) =R (tijL ))ij [g (tijL )]1;ij is the function absorbed into m( ), and is again free of . Thus again, use of the usual form i=1 j =1 of the interval-censored likelihood presents no problems when comparing models within the interval-censored framework (including nonnested parametric models, or even parametric and semiparametric models). Note that it does not make sense to compare a particular right-censored model with a particular interval-censored model. The form of the available data is dierent; model comparison is only appropriate to a given data set. Example 9.5 (Smoking cessation data). We illustrate our methods using the aforementioned study of smoking cessation, a subject of particular interest in studies of lung health and primary cancer control. Described more fully by Murray et al. (1998), the data consist of 223 subjects who reside in 53 zip codes in the southeastern corner of Minnesota. The subjects, all of whom were smokers at study entry, were randomized into either a © 2004 by CRC Press LLC Map showing missingness pattern for the smoking cessation data: lightly shaded regions are those having no responses. Figure 9.10 smoking intervention (SI) group, or a usual care (UC) group that received no special antismoking intervention. Each subject's smoking habits were monitored at roughly annual visits for ve consecutive years. The subjects we analyze are actually the subset who are known to have quit smoking at least once during these ve years, and our event of interest is whether they relapse (resume smoking) or not. Covariate information available for each subject includes sex, years as a smoker, and the average number of cigarettes smoked per day just prior to the quit attempt. To simplify matters somewhat, we actually t our spatial cure rate models over the 81 contiguous zip codes shown in Figure 9.10, of which only the 54 dark-shaded regions are those contributing patients to our data set. This enables our models to produce spatial predictions even for the 27 unshaded regions in which no study patients actually resided. All of our MCMC algorithms ran 5 initially overdispersed sampling chains, each for 20,000 iterations. Convergence was assessed using correlation plots, sample trace plots, and Gelman-Rubin (1992) statistics. In every case a burn-in period of 15,000 iterations appeared satisfactory. Retaining the remaining 5,000 samples from each chain yielded a nal sample of 25,000 for posterior summarization. Table 9.12 provides the DIC scores for a variety of random eects cure rate models in the interval-censored case. Models 1 and 2 have only random frailty terms i with i.i.d. and CAR priors, respectively. Models 3 and 4 add random Weibull shape parameters i = log i , again with i.i.d. and CAR priors, respectively, independent of the priors for the i . Finally, Models 5 and 6 consider the full MCAR structure for the (i ; i ) pairs, assuming common and distinct spatial smoothing parameters, respectively. The DIC scores do not suggest that the more complex models are signi cantly better; apparently the data encourage a high degree of shrinkage in the random © 2004 by CRC Press LLC Model Log-relative risk xTij + i ; i iid N (0; ) ; i = 8 i xTij + i ; fi g CAR () ; i = 8 i xTij + i ; i iid N (0; ) ; i iid N (0; ) xTij + i ; fi g CAR () ; fi g CAR ( ) xTij + i ; (fi g ; fi g) MCAR (; ) xTij + i ; (fi g ; fi g) MCAR (; ; ) 1 2 3 4 5 6 Table 9.12 pD DIC 10.3 9.4 13.1 10.4 7.9 8.2 DIC and pD values for various competing interval-censored models. Parameter Intercept Sex (male = 0) Duration as smoker SI/UC (usual care = 0) Cigarettes smoked per day (cure fraction) Median (2.5%, 97.5%) {2.720 ({4.803, {0.648) 0.291 ({0.173, 0.754) {0.025 ({0.059, 0.009) {0.355 ({0.856, 0.146) 0.010 ({0.010, 0.030) 0.694 (0.602, 0.782) 0.912 (0.869, 0.988) 0.927 (0.906, 0.982) 11 (spatial variance component, i ) 0.005 (0.001, 0.029) 22 (spatial variance component, ) 0.007 (0.002, 0.043) i p 12 = 11 22 0.323 ({0.746, 0.905) Table 9.13 Posterior quantiles, full model, interval-censored case. eects (note the low pD scores). In what follows we present results for the \full" model (Model 6) in order to preserve complete generality, but emphasize that any of the models in Table 9.12 could be used with equal con dence. Table 9.13 presents estimated posterior quantiles (medians, and upper and lower .025 points) for the xed eects , cure fraction , and hyperparameters in the interval-censored case. The smoking intervention does appear to produce a decrease in the log relative risk of relapse, as expected. Patient sex is also marginally signi cant, with women more likely to relapse than men, a result often attributed to the (real or perceived) risk of weight gain following smoking cessation. The number of cigarettes smoked per day does not seem important, but duration as a smoker is signi cant, and in © 2004 by CRC Press LLC b) 0.95 - 0.98 -0.30 - -0.20 0.98 - 0.99 -0.20 - -0.10 0.99 - 1.01 -0.10 - -0.02 -0.02 - 0.02 0.02 - 0.10 1.01 - 1.03 1.03 - 1.05 0.10 - 0.20 1.05 - 1.08 0.20 - 0.37 1.08 - 1.19 Maps of posterior means for the i (a) and the i (b) in the full spatial MCAR model, assuming the data to be interval-censored (see also color insert). Figure 9.11 a possibly counterintuitive direction: shorter-term smokers relapse sooner. This may be due to the fact that people are better able to quit smoking as they age (and are thus confronted more clearly with their own mortality). The estimated cure fraction in Table 9.13 is roughly .70, indicating that roughly 70% of smokers in this study who attempted to quit have in fact been \cured." The spatial smoothness parameters and are both close to 1, again suggesting we would lose little by simply setting them both equal to 1 (as in the standard CAR model). Finally, the last lines of both tables indicate only a moderate correlation between the two random eects, again consistent with the rather weak case for including them in the model at all. We compared our results to those obtained from the R function survreg using a Weibull link, and also to Weibull regression models t in a Bayesian fashion using the WinBUGS package. While neither of these alternatives featured a cure rate (and only the WinBUGS analysis included spatial random eects), both produced xed eect estimates quite consistent with those in Table 9.13. Turning to graphical summaries, Figure 9.11 (see also color insert Figure C.13) maps the posterior medians of the frailty (i ) and shape (i ) parameters in the full spatial MCAR (Model 6) case. The maps reveal some interesting spatial patterns, though the magnitudes of the dierences appear relatively small across zip codes. The south-central region seems to be of some concern, with its high values for both i (high overall relapse rate) and i (increasing baseline hazard over time). By contrast, the four zip codes comprising the city of Rochester, MN (home of the Mayo Clinic, and marked with an \R" in each map) suggest slightly better than average cessation behavior. Note that a nonspatial model cannot impute anything other than the \null values" (i = 0 and i = 1) for any zip code contributing no data (all of the unshaded regions in Figure 9.10). Our spatial model however is able to impute nonnull values here, in accordance with the observed values in neighboring regions. © 2004 by CRC Press LLC Unit Drug Time Unit Drug Time Unit Drug Time A 1 74+ E 1 214 H 1 74+ A 2 248 E 2 228+ H 1 88+ A 1 272+ E 2 262 H 1 148+ A 2 344 H 2 162 F 1 6 B 2 4+ F 2 16+ I 2 8 B 1 156+ F 1 76 I 2 16+ F 2 80 I 2 40 C 2 100+ F 2 202 I 1 120+ F 1 258+ I 1 168+ D 2 20+ F 1 268+ I 2 174+ D 2 64 F 2 368+ I 1 268+ D 2 88 F 1 380+ I 2 276 D 2 148+ F 1 424+ I 1 286+ D 1 162+ F 2 428+ I 1 366 D 1 184+ F 2 436+ I 2 396+ D 1 188+ I 2 466+ D 1 198+ G 2 32+ I 1 468+ D 1 382+ G 1 64+ D 1 436+ G 1 102 J 1 18+ G 2 162+ J 1 36+ E 1 50+ G 2 182+ J 2 160+ E 2 64+ G 1 364+ J 2 254 E 2 82 E 1 186+ H 2 22+ K 1 28+ E 1 214+ H 1 22+ K 1 70+ K 2 106+ Survival times (in half-days) from the MAC treatment trial, from Carlin and Hodges (1999). Here, \+" indicates a censored observation. Table 9.14 9.6 Exercises 1. The data located at www.biostat.umn.edu/~brad/data/MAC.dat, and also shown in Table 9.14, summarize a clinical trial comparing two treatments for Mycobacterium avium complex (MAC), a disease common in late-stage HIV-infected persons. Eleven clinical centers (\units") have enrolled a total of 69 patients in the trial, 18 of which have died; see Cohn et al. (1999) and Carlin and Hodges (1999) for full details regarding this trial. © 2004 by CRC Press LLC As in Section 9.1, let tij be the time to death or censoring and xij be the treatment indicator for subject j in stratum i (j = 1; : : : ; ni , i = 1; : : : ; k). With proportional hazards and a Weibull baseline hazard, stratum i's hazard is then h(tij ; xij ) = h0 (tij )!i exp( 0 + 1 xij ) = i tiji ;1 exp( 0 + 1 xij + Wi ) ; where i > 0, = ( 0 ; 1 )0 2 J . While this does not mean that Y (si ) and Y (s0i ) are perfectly associated, it does mean that specifying Y () at J distinct locations determines the value of the process at all other locations. As a result, such modeling may be more attractive for spatial random eects than for the data itself. A variant of this strategy is a conditioning idea. Suppose we partition the region of interest into M subregions so P that we have the total of n points partitioned into nm in subregion m with M m=1 nm = n. Suppose we assume that Y (s) and Y (s0 ) are conditionally independent given s lies in subregion m and s0 lies in subregion m0 . However, suppose we assign random eects (s1 ); :::; (sM ) with (sm) assigned to subregion m. Suppose the sM 's are \centers" of the subregions (using an appropriate de nition) and that the (sM ) follows a spatial process that we can envision as a hyperspatial process. There are obviously many ways to build such multilevel spatial structures, achieving a variety of spatial association behaviors. We do not elaborate here but note that matrices will now be nm nm and M M rather than n n. A.5.5 Coarse- ne coupling Lastly, particularly for hierarchical models with a non-Gaussian rst stage, a version of the coarse- ne idea as in Higdon, Lee, and Holloman (2003) may be successful. The idea here is, with a non-Gaussian rst stage, if spatial random eects (say, (s1 ); : : : ; (sn )) are introduced at the second stage, then, as in Subsection 5.2, the set of (si ) will have to be updated at each iteration of a Gibbs sampling algorithm. Suppose n is large and that the \ ne" chain does such updating. This chain will proceed very slowly. But now suppose that concurrently we run a \coarse" chain using a much smaller subset n0 of the si 's. The coarse chain will update very rapidly. Since the process for () is the same in both chains it will be the case that the coarse one will explore the posterior more rapidly. However, we need realizations from the ne chain to t the model using all of the data. The coupling idea is to let both the ne and coarse chains run, and after © 2004 by CRC Press LLC a speci ed number of updates of the ne chain (and many more updates of the coarse chain, of course) we attempt a \swap;" i.e., we propose to swap the current value of the ne chain with that of the coarse chain. The swap attempt ensures that the equilibrium distributions for both chains are not compromised (see Higdon, Lee, and Holloman, 2003). For instance, given the values of the 's for the ne iteration, we might just use the subset of 's at the locations for the coarse chain. Given the values of the 's for the coarse chain, we might do an appropriate kriging to obtain the 's for the ne chain. Such coupling strategies have yet to be thoroughly investigated. A.6 Slice Gibbs sampling for spatial process model tting Auxiliary variable methods are receiving increased attention among those who use MCMC algorithms to simulate from complex nonnormalized multivariate densities. Recent work in the statistical literature includes Tanner and Wong (1987), Besag and Green (1993), Besag et al. (1995), and Higdon (1998). The particular version we focus on here introduces a single auxiliary variable to \knock out" or \slice" the likelihood. Employed in the context of spatial modeling for georeferenced data using a Bayesian formulation with commonly used proper priors, in this section we show that convenient Gibbs sampling algorithms result. Our approach thus nds itself as a special case of recent work by Damien, Wake eld, and Walker (1999), who view methods based on multiple auxiliary variables as a general approach to constructing Markov chain samplers for Bayesian inference problems. We are also close in spirit to recent work of Neal (2003), who also employs a single auxiliary variable, but prefers to slice the entire nonnormalized joint density and then do a single multivariate updating of all the variables. Such updating requires sampling from a possibly high-dimensional uniform distribution with support over a very irregular region. Usually, a bounding rectangle is created and then rejection sampling is used. As a result, a single updating step will often be inecient in practice. Currently, with the wide availability of cheap computing power, Bayesian spatial model tting typically turns to MCMC methods. However, most of these algorithms are hard to automate since they involve tuning tailored to each application. In this section we demonstrate that a slice Gibbs sampler, done by knocking out the likelihood and implemented with a Gibbs updating, enables essentially an automatic MCMC algorithm for tting Gaussian spatial process models. Additional advantages over other simulation-based model tting schemes accrue, as we explain below. In this regard, we could instead slice the product of the likelihood and the prior, yielding uniform draws to implement the Gibbs updates. However, the support for these conditional uniform updates changes with iteration. The conditional interval arises through matrix inverse and determinant functions of model parameters with matrices of dimension equal to the sample size. Slicing only the © 2004 by CRC Press LLC likelihood and doing Gibbs updates using draws from the prior along with rejection sampling is truly \o the shelf," requiring no tuning at all. Approaches that require rst and second derivatives of the log likelihood or likelihood times prior, e.g., the MLE approach of Mardia and Marshall (1984) or Metropolis-Hastings proposal approaches within Gibbs samplers will be very dicult to compute, particularly with correlation functions such as those in the Matern class. Formally, if L(; Y ) denotes the likelihood and () is a proper prior, we introduce the single auxiliary variable U , which, given and Y , is distributed uniformly on (0; L(; Y )). Hence the joint posterior distribution of and U is given by p(; U jY ) / () I (U < L(; Y )) ; (A:10) where I denotes the indicator function. The Gibbs sampler updates U according to its full conditional distribution, which is the above uniform. A component i of is updated by drawing from its prior subject to the indicator restriction given the other 's and U . A standard distribution is sampled and only L needs to be evaluated. Notice that, if hyperparameters are introduced into the model, i.e., () is replaced with (j)( ), the foregoing still applies and is updated without restriction. Though our emphasis here is spatial model tting, it is evident that slice Gibbs sampling algorithms are more broadly applicable. With regard to computation, for large data sets often evaluation of L(; Y ) will produce an under ow, preventing sampling from the uniform distribution for U given and Y . However, log L(; Y ) will typically not be a problem to compute. So, if V = ; log U , given and Y , V + log L(; Y ) Exp(mean = 1:0), and we can transform (A.10) to p(; V jY ) / exp(;V ) I (; log L(; Y ) < V < 1). In fact, in some cases we can implement a more ecient slice sampling algorithm than the slice Gibbs sampler. We need only impose constrained sampling on a subset of the components of . In particular, suppose we write = (1 ; 2 ) and suppose that the full conditional distribution for 1, p(1j2; Y ) / L(1; 2; Y )(1j2), is a standard distribution. Then consider the following iterative updating scheme: sample U given 1 and 2 as above; then, update 2 given 1 and U with a draw from (2 j1) subject to the constraint U < L(1; 2 ; Y ); nally, update 1 with an unconditional draw from p(1 j2 ; Y ). Formally, this scheme is not a Gibbs sampler. Suppressing Y , we are updating p(U j1 ; 2), then p(2 j1 ; U ), and nally p(1j2 ). However, the rst and third distribution uniquely determine p(U; 1j2 ) and, this, combined with the second, uniquely determine the joint distribution. The Markov chain iterated in this fashion still has p(; U jY ) as its stationary distribution. In fact, if p(1j2 ; Y ) is a standard distribution, this implies that we can marginalize over 1 and run the slice Gibbs sampler on 2 with U . Given posterior draws of 2 , we can sample 1 one for one from its posterior using p(1 j2; Y ) and the © 2004 by CRC Press LLC fact that p(1 jY ) = p(1 j2 ; Y )p(2 jY ). Moreover, if p(1j2 ; Y ) is not a standard distribution, we can add Metropolis updating of 1 either in its entirety or through its components (we can also use Gibbs updating here). We employ these modi ed schemes for dierent choices of (1 ; 2) in the remainder of this section. We note that the performance of the algorithm depends critically on the distribution of the number of draws needed from (2 j1 ) to update 2 given 1 and U subject to the constraint U < L(1 ; 2; Y ). Henceforth, this will be referred to as \getting a point in the slice." A naive rejection sampling scheme (repeatedly sample from (2 j1) until we get to a point in the slice) may not always give good results. An algorithm that shrinks the support of (2 j1) so that it gives a better approximation to the slice whenever there is a rejection is more appropriate. We propose one such scheme called \shrinkage sampling" described in Neal (2003). In this context, it results in the following algorithm. For simplicity, let us assume 2 is one-dimensional. If a point ^ 2 drawn from (2 j1) is not in the slice and is larger (smaller) than the current value 2 (which is of course in the slice), the next draw is made from (2 j1) truncated with the upper (lower) bound being ^2 . The truncated interval keeps shrinking with each rejection until a point in the slice is found. The multidimensional case works by shrinking hyperrectangles. As mentioned in Neal (2003), this ensures that the expected number of points drawn will not be too large, making it a more appropriate method for general use. However, intuitively it might result in higher autocorrelations compared to the simple rejection sampling scheme. In our experience, the shrinkage sampling scheme has performed better than the naive version in most cases. Suppresing Y in our notation, we summarize the main steps in our slice Gibbs sampling algorithm as follows: (a) Partition = (1 ; 2) so that samples from p (1 j2 ) are easy to obtain; (b) Draw V = ; log L() + Z , where Z Exp(mean = 1); (c) Draw 2 from p(2 j1; V ) I (; log L() < V < 1) using shrinkage sampling; (d) Draw 1 from p(1 j2 ); (e) Iterate (b) through (d) until we get the appropriate number of MCMC samples. The spatial models on which we focus arise through the speci cation of a Gaussian process for the data. With, for example, an isotropic covariance function, proposals for simulating the range parameter for, say, an exponential choice, or the range and smoothness parameters for a Matern choice can be dicult to develop. That is, these parameters appear in the covariance matrix for Y in a nonstructured way (unless the spatial locations are on a regular grid). They enter the likelihood through the determinant © 2004 by CRC Press LLC and inverse of this matrix. And, for large n, the fewer matrix inversion and determinant computations, the better. As a result, for a noniterative sampling algorithm, it is very dicult to develop an eective importance sampling distribution for all of the model parameters. Moreover, as overall model dimension increases, resampling typically yields a very \spiked" discrete distribution. Alternative Metropolis algorithms require eective proposal densities with careful tuning. Again, these densities are dicult to obtain for parameters in the correlation function. Morover, in general, such algorithms will suer slower convergence than the Gibbs samplers we suggest, since full conditional distributions are not sampled. Furthermore, in our experience, with customary proposal distributions we often encounter serious autocorrelation problems. When thinning to obtain a sample of roughly uncorrelated values, high autocorrelation necessitates an increased number of iterations. Additional iterations require additional matrix inversion and determinant calculation and can substantially increase run times. Discretizing the parameter spaces has been proposed to expedite computation in this regard, but it too has problems. The support set is arbitrary, which may be unsatisfying, and the support will almost certainly be adaptive across iterations, diminishing any computational advantage. A.6.1 Constant mean process with nugget Suppose Y (s1 ); : : : ; Y (sn ) are observations from a constant mean spatial process over s 2 D with a nugget. That is, Y (si ) = + w(si ) + (si ) ; (A:11) where the (si ) are realizations of a white noise process with mean 0 and variance 2 . In (A.11), the w(si ) are realizations from a second-order stationary Gaussian process with covariance function 2 C (h; ) where C is a valid two-dimensional correlation function with parameters and separation vector h. Below we work with the Matern class (2.8), so that = (; ). Thus (A.11) becomes a ve-parameter model: = (; 2 ; 2 ; ; )T . Note that though the Y (si ) are conditionally independent given the w(si ), a Gibbs sampler that also updates the latent w(si )'s will be sampling an (n+5)-dimensional posterior density. However, it is possible to marginalize explicitly over the w(si )'s (see Section 5.1), and it is almost always preferable to implement iterative simulation with a lower-dimensional distribution. The marginal likelihood associated with Y = (Y (s1 ); : : : ; Y (sn )) is L(; 2 ; 2 ; ; ; Y ) =j 2 H () + 2 I j; 21 expf;(Y ; 1)T (2 H () + 2 I );1 (Y ; 1)=2g ; (A:12) where (H ())ij = 2 C (dij ; ) (dij being the distance between si and sj ). © 2004 by CRC Press LLC Suppose we adopt a prior of the form 1 ()2 ( 2 )3 (2 )4 ()5 ( ). Then (A.10) becomes 1 ()2 ( 2 )3 (2 )4 ()5 ( ) I (U < L(; 2 ; 2 ; ; ; Y )). The Gibbs sampler is most easily implemented if, given and , we diagonalize H (), i.e., H () = P ()D()(P ())T where P () is orthogonal with the columns of P () giving the eigenvectors of H () and D() is a diagonal matrix with diagonal elements i , the eigenvalues of H (). Then (A.12) simpli es to n Y (2 i + 2 )); 21 expf; 12 (Y ;1)T P ()(2 D()+ 2 I );1 P T ()(Y ;1)g: i=1 As a result, the constrained updating of 2 and 2 at a given iteration does not require repeated calculation of a matrix inverse and determinant. To minimize the number of diagonalizations of H () we update and together. If there is interest in the w(si ), their posteriors can be sampled straightforwardly after the marginalized model is tted. For instance, R p(w(si )jY ) = p(w(si )j; Y )p(jY )d so each posterior sample ? , using a draw from p(w(si )j? ; Y ) (which is a normal distribution), yields a sample from the posterior for w(si ). We remark that (A.11) can also include a parametric transformation of Y (s). For instance, we could employ a power transformation to nd a scale on which the Gaussian process assumption is comfortable. This only requires replacing Y (s) with Y p (s) and adds one more parameter to the likelihood in (A.12). Lastly, we note that other dependence structures for Y can be handled in this fashion, e.g., equicorrelated forms, Toeplitz forms, and circulants. A.6.2 Mean structure process with no pure error component Now suppose Y (s1 ); : : : ; Y (sn ) are observations from a spatial process over s 2 D such that Y (si ) = X T (si ) + w(si ) : (A:13) Again, the w(si ) are realizations from a second-order stationary Gaussian process with covariance parameters 2 and . In (A.13), X (si ) could arise as a vector of site level covariates or X T (si ) could be a trend surface speci cation as in the illustration below. To complete the Bayesian speci cation we adopt a prior of the form 1 ( )2 (2 )3 () where 1 ( ) is N ( ; ) with and known. This model is not hierarchical in the sense of our earlier forms, but we can marginalize explicitly over , obtaining L(2 ; ; Y ) =j 2 H () + X X T j; 21 expf;(Y ; X )T (2 H () + X X T );1 (Y ; X )=2g ; (A:14) © 2004 by CRC Press LLC where the rows of X are the X X X T is symmetric positive semide nite. Hence, there exists a nonsingular matrix Q() such that (Q;1 ())T Q;1() = H () and also satisfying (Q;1 ())T Q;1 () = X X T , where is diagonal with diagonal elements that are eigenvalues of X X T H ;1 (). Therefore, (A.14) simpli es to jQ()j Qnin=1 (2 + i )); 21 o exp ; 21 (Y ; X )T Q()T (2 I + );1 Q()(Y ; X ) : As in the previous section, we run a Gibbs sampler to update U given 2 ; , and Y , then 2 given ; U , and Y , and nally given 2 ; U , and Y . Then, given posterior samples fl2; l ;2l = 1; :: : ; Lg we can obtain posterior samples for one for one given l and l by drawing l from a N (Aa; A) distribution, where A;1 = 1l2 X T H ;1 (l )X + (A:15) and a = 1l2 X T H ;1 (l )Y + ; 1 : T (si ): Here, H ( ) is positive de nite while In fact, using standard identities (see, e.g., Rao, 1973, p. 29), 1 ;1 ; 1 T ; 1 = ; X T Q()(2 I + );1QT ()X ; 2 X H ()X + facilitating sampling from (A.15). Finally, if and were viewed as unknown we could introduce hyperparameters. In this case would typically be diagonal and might be 0 1, but the simultaneous diagonalization would still simplify the implementation of the slice Gibbs sampler. We note an alternate strategy that does not marginalize over and does not require simultaneous diagonalization. The likelihood of ( ; 2 ; ) is given by L( ; 2 ; ; Y ) / j2 H ()j; 12 exp ;(Y ; X )T H ();1 (Y ; X )=22 : (A:16) Letting 1 = ( ; 2 ) and 2 = with normal and inverse gamma priors on and 2, respectively, we can update and 2 componentwise conditional 2 on 2; Y , since j ; 2 ; Y is normal while 2 j ; 2; Y is inverse gamma. 2j1; U; Y is updated using the slice Gibbs sampler with shrinkage as described earlier. A.6.3 Mean structure process with nugget Extending (A.11) and (A.13) we now assume that Y (s1 ); : : : ; Y (sn ) are observations from a spatial process over s 2 D such that Y (si ) = X T (si ) + w(si ) + (si ) : (A:17) © 2004 by CRC Press LLC As above, we adopt a prior of the form 1 ( )2 ( 2 )3 (2 )4 (), where 1 ( ) is N ( ; ). Note that we could again marginalize over and the w(si ) as in the previous section, but the resulting marginal covariance matrix is of the form 2 H ()+ X X T + 2 I . The simultaneous diagonalization trick does not help here since Q() is not orthogonal. Instead we just marginalize over the w(si ), obtaining the joint posterior p( ; 2 ; 2 ; ; U jY ) proportional to 1 ( )2 ( 2 )3 (2 )4 () I U < j2 H () + 2 I j; 12 expf;(Y ; X )T (2 H () + 2 I );1 (Y ; X )=2g : We employ the modi ed scheme suggested below (A.10) taking 1 = and 2 = ( 2 ; 2; ). The required full conditional distribution p( j 2 ; 2 ; ; Y ) is N (Aa; A), where A;1 = X T (2 H () + 2 I );1 X + ; 1 and a = X T (2 H () + 2 I );1 Y + ; 1 : A.7 Structured MCMC sampling for areal model tting Structured Markov chain Monte Carlo (SMCMC) was introduced by Sargent, Hodges, Carlin (2000) as a general method for Bayesian computing in richly parameterized models. Here, \richly parameterized" refers to hierarchical and other multilevel models. SMCMC (pronounced \smick-mick") provides a simple, general, and exible framework for accelerating convergence in an MCMC sampler by providing a systematic way to update groups of similar parameters in blocks while taking full advantage of the posterior correlation structure induced by the model and data. Sargent (2000) apply SMCMC to several dierent models, including a hierarchical linear model with normal errors and a hierarchical Cox proportional hazards model. Blocking, i.e., simultaneously updating multivariate blocks of (typically highly correlated) parameters, is a general approach to accelerating MCMC convergence. Liu (1994) and Liu et al. (1994) con rm its good performance for a broad class of models, though Liu et al. (1994, Sec. 5) and Roberts and Sahu (1997, Sec. 2.4) give examples where blocking slows a sampler's convergence. In this section, we show that spatial models of the kind proposed by Besag, York, and Mollie (1991) using nonstationary \intrinsic autoregressions" are richly parameterized and lend themselves to the SMCMC algorithm. Bayesian inference via MCMC for these models has generally used single-parameter updating algorithms with often poor convergence and mixing properties. There have been some recent attempts to use blocking schemes for similar models. Cowles (2002, 2003) uses SMCMC blocking strategies for geostatistical and areal data models with normal likelihoods, © 2004 by CRC Press LLC while Knorr-Held and Rue (2002) implement blocking schemes using algorithms that exploit the sparse matrices that arise out of the areal data model. We study several strategies for block-sampling parameters in the posterior distribution when the likelihood is Poisson. Among the SMCMC strategies we consider here are blocking using dierent-sized blocks (grouping by geographical region), updating jointly with and without model hyperparameters, \oversampling" some of the model parameters, reparameterization via hierarchical centering and \pilot adaptation" of the transition kernel. Our results suggest that our techniques will generally be far more accurate (produce less correlated samples) and often more ecient (produce more eective samples per second) than univariate sampling procedures. SMCMC algorithm basics Following Hodges (1998), we consider a hierarchical model expressed in the general form, 2 y 3 2X 0 3 2 3 1 2 3 6 7 77 66 7 77 = 66 H H 777 4 5 + 666 777 : 4 5 5 4 5 1 G1 G2 The rst row of this layout is actually a collection of rows corresponding to the \data cases," or the terms in the joint posterior into which the response, the data y, enters directly. The terms in the second row (corresponding to the Hi ) are called \constraint cases" since they place stochastic constraints on possible values of 1 and 2 . The terms in the third row, the \prior cases" for the model parameters, have known (speci ed) error variances for these parameters. Equation (A.18) can be expressed as Y = X + E, where X and Y are known, is unknown, and E is an error term with block diagonal covariance matrix ; = Diag(Cov(); Cov(); Cov( )). If the error structure for the data is normal, i.e., if the vector in the constraint case formulation (A.18) is normally distributed, then the conditional posterior density of is jY; ; N ((X T ;;1X );1(X T ;;1Y ) ; (X T ;;1X );1) : (A:19) The basic SMCMC algorithm is then nothing but the following two-block Gibbs sampler : (a) Sample as a single block from the above normal distribution, using the current value of ;. (b) Update ; using the conditional distribution of the variance components, using the current value of . © 2004 by CRC Press LLC In our spatial model setting, the errors are not normally distributed, so the normal density described above is not the correct conditional posterior distribution for . Still, a SMCMC algorithm with a Metropolis-Hastings implementation can be used, with the normal density in (A.19) taken as the candidate density. A.7.1 Applying structured MCMC to areal data Consider again the Poisson-CAR model of Subsection 5.4.3, with no covariates so that i = i + i ; i = 1; : : : ; N; where N is the total number of regions, and f1 ; ::; N g; f1; ::; N g are vectors of random eects. The i 's are independent and identically distributed Gaussian normal variables with precision parameter h , while the i 's are assumed to follow a CAR(c ) distribution. We place conjugate gamma hyperpriors on the precision parameters, namely h G(h ; h ) and c G(c ; c ) with h = 1:0, h = 100:0, c = 1:0 and c = 50:0 (these hyperpriors have means of 100 and 50, and standard deviations of 10,000 and 2,500, respectively, a speci cation recommended by Bernardinelli et al., 1995). There is a total of 2N + 2 model parameters: fi : i = 1; : : : N g, fi : i = 1; : : : N g, h and c . The SMCMC algorithm requires that we transform the Yi data points to ^i = log(Yi =Ei ), which can be conveniently thought of as the response since they should be roughly linear in the model parameters (the i 's and i 's). For the constraint case formulation, the different levels of the model are written down case by case. The data cases are ^i ; i = 1; : : : ; N . The constraint cases for the i 's are i N (0; 1= h), i = 1; : : : ; N . For the constraint cases involving the i 's, the dierences between the neighboring i 's can be used to get an unconditional distribution for the i 's using pairwise dierences (Besag et al., 1995). Thus the constraint cases can be written as (i ; j )jc N (0; 1=c) (A:20) for each pair of adjacent regions (i; j ). To obtain an estimate of ;, we need estimates of the variance-covariance matrix corresponding to the ^i 's (the data cases) and initial estimates of the variance-covariance matrix for the constraint cases (the rows corresponding to the i 's and i 's). Using the delta method, we can obtain an approximation as follows: assume Yi N (Ei ei ; Ei ei ) (roughly), so invoking the delta method we can see that Var(log(Yi =Ei )) is approximately 1/Yi . A reasonably good starting value is particularly important here since we never update these variance estimates (the data variance section of ; stays the same throughout the algorithm). For initial estimates of the variance components corresponding to the i 's and the i 's, we can use the mean of the hyperprior densities on h and c , and substitute these values into ;. © 2004 by CRC Press LLC As a result, the SMCMC candidate generating distribution is thus of the form (A.19), with the Yi 's replaced by ^ . To compute the Hastings ratio, the distribution of the i 's is rewritten in the joint pairwise dierence form with the appropriate exponent for c (Hodges, Carlin, and Fan, 2003): p(1 ; 2 ; :::; N jc ) / c(N ;1)=2 8 9 < c X = exp :; 2 (i ; j ) ; ; ij 2 where i j if i and j are neighboring regions. Finally, the joint distribution of the i 's is given by p(1 ; 2 ; :::; N jh ) / hN=2 exp N X h ; 2 i2 : As above, the response vector is ^ T = flog(Y1 =E1 ); : : : ; log(YN =EN )g. The (2N + C ) 2N design matrix for the spatial model is de ned by 2 I I 66 N N N N X = 66 ;IN N 0N N 4 3 77 77 : 5 0C N AC N The design matrix is divided into two halves, the left half corresponding to the N i 's and the right half referring to the N i 's. The top section of this design matrix is an N 2N matrix relating ^i to the model parameters i and i . In the ith row, a 1 appears in the ith and (N + i)th columns while 0's appear elsewhere. Thus the ith row corresponds to i = i + i . The middle section of the design matrix is an N 2N matrix that imposes a stochastic constraint on each i separately (i 's are i.i.d normal). The bottom section of the design matrix is a C 2N matrix with each row having a ;1 and 1 in the (N + k)th and (N + l)th columns, respectively, corresponding to a stochastic constraint being imposed on l ; k (using the pairwise dierence form of the prior on the i 's as described in (A.20) with regions l and k being neighbors). The variance-covariance matrix ; is a diagonal matrix with the top left section corresponding to the variances of the data cases, i.e., the ^i 's. Using the variance approximations described above, the (2N + C ) (2N + C ) block diagonal variance-covariance matrix is 2 Diag(1=Y1; 1=Y2; : : : ; 1=YN ) 0N N 0N C 3 ; = 66 © 2004 by CRC Press LLC 0N N h IN N 0N C 0C N 0C N c IC C 77 77 : 5 Note that the exponent on c in (A.21) would actually be C=2 (instead of (N ; 1)=2) if obtained by taking the product of the terms in (A.20). Thus, (A.20) is merely a form we use to describe the distribution of the i s for our constraint case speci cation. The formal way to incorporate the distribution of the i s in the constraint case formulation is by using an alternate speci cation of the joint distribution of the i 's, as described in Besag and Kooperberg (1995). This form is a N N Gaussian density with precision matrix, Q, p(1 ; 2 ; :::; N jc ) / exp ; 2c T Q ; where T = (1 ; 2 ; :::; N ); (A:25) 8 c < Qij = : 0 ;1 if i = j where c = number of neighbors of region i if i is not adjacent to j : if i is not adjacent to j However, it is possible to show that this alternate formulation (using the corresponding design and ; matrices) results in the same SMCMC candidate mean and covariance matrix for given h and c as the one described in (A.19); see Haran, Hodges, and Carlin (2003) for details. A.7.2 Algorithmic schemes Univariate MCMC (UMCMC): For the purpose of comparing the dierent blocking schemes, one might begin with a univariate (updating one variable at a time) sampler. This can be done by sampling h and c from their gamma full conditional distributions, and then, for each i, sampling each i and i from its full conditional distribution, the latter using a Metropolis step with univariate Gaussian random walk proposals. Reparameterized Univariate MCMC (RUMCMC): One can also reparameterize from (1 ; : : : ; N ; 1 ; : : : ; N ) to (1 ; : : : ; N ; 1 ; : : : ; N ), where i = i + i . The (new) model parameters and the precision parameters can be sampled in a similar manner as for UMCMC. This \hierarchical centering" was suggested by (Besag et al. (1995) and Waller et al. (1997) for the spatial model, and discussed in general by Gelfand et al. (1995, 1996). Structured MCMC (SMCMC): A rst step here is pilot adaptation, which involves sampling (h , c ) from their gamma full conditionals, updating the ; matrix using the averaged (h , c ) sampled so far, updating the SMCMC candidate covariance matrix and mean vector using the ; matrix, and then sampling (; ) using the SMCMC candidate in a MetropolisHastings step. We may run the above steps for a \tuning" period, after which we x the SMCMC candidate mean and covariance, sampled (h , c ) as before, and use the Metropolis-Hastings to sample (; ) using SMCMC © 2004 by CRC Press LLC proposals. Some related strategies include adaptation of the ; matrix more or less frequently, adaptation over shorter and longer periods of time, and pilot adaptation while blocking on groups of regions. Our experience with pilot adaptation schemes indicates that a single proposal, regardless of adaptation period length, will probably be unable to provide a reasonable acceptance rate for the many dierent values of (h ; c ) that will be drawn in realistic problems. As such, we typically turn to oversampling relative to (h ; c ); that is, the SMCMC proposal is always based on the current (h ; c ) value. In this algorithm, we sample h and c from their gamma full conditionals, then compute the SMCMC proposal based on the ; matrix using the generated h and c . For each (h ; c ) pair, we run a Hastings independence subchain by sampling a sequence of length 100 (say) of 's using the SMCMC proposal. Further implementational details for this algorithm are given in Haran (2003). Reparameterized Structured MCMC (RSMCMC): This nal algorithm is the SMCMC analogue of the reparametrized univariate algorithm (RUMCMC). It follows exactly the same steps as the SMCMC algorithm, with the only dierence being that is now (; ) instead of (; ), and the proposal distribution is adjusted according to the new parameterization. Haran, Hodges, and Carlin (2003) compare these schemes in the context of two areal data examples, using the notion of eective sample size, or ESS (Kass et al., 1998). ESS is de ned for each parameter as the number of MCMC samples drawn by the parameter's so-called autocorrelation P divided time, = 1 + 2 1 k=1 (k ), where (k ) is the autocorrelation at lag k . One can estimate from the MCMC chain, using the initial monotone positive sequence estimator as given by Geyer (1992). Haran et al. (2003) nd UMCMC to perform poorly, though the reparameterized univariate algorithm (RUMCMC) does provide a signi cant improvement in this case. However, SMCMC and RSMCMC still perform better than both univariate algorithms. Even when accounting for the amount of time taken by the SMCMC algorithm (in terms of eective samples per second), the SMCMC scheme results in a far more ecient sampler than the univariate algorithm; for some parameters, SMCMC produced as much as 64 times more eective samples per second. Overall, experience with applying several SMCMC blocking schemes to real data sets suggests that SMCMC provides a standard, systematic technique for producing samplers with far superior mixing properties than simple univariate Metropolis-Hastings samplers. The SMCMC and RSMCMC schemes appear to be reliable ways of producing good ESSs, irrespective of the data sets and parameterizations. In many cases, the SMCMC algorithms are also competitive in terms of ES/s. In addition since the blocked SMCMC algorithms mix better, their convergence should be easier to diagnose and thus lead to nal parameter estimates that are less © 2004 by CRC Press LLC These estimates should also have smaller associated Monte Carlo variance estimates. © 2004 by CRC Press LLC Answers to selected exercises Chapter 1 3. As hinted in the problem statement, level of urbanicity might well explain the poverty pattern evident in Figure 1.2. Other regional spatially oriented covariates to consider might include percent of minority residents, percent with high school diploma, unemployment rate, and average age of the housing stock. The point here is that spatial patterns can often be explained by patterns in existing covariate data. Accounting for such covariates in a statistical model may result in residuals that show little or no spatial pattern, thus obviating the need for formal spatial modeling. 7. (a) The appropriate R code is as follows: # # R program to compute geodesic distance see also www.auslig.gov.au/geodesy/datums/distance.htm point1=(long,lat) and point2=(long,lat) in degrees distance in km between the two points # output: # example: point1_c(87.65,41.90) # Chicago (downtown) point2_c(87.90,41.98) # Chicago (O'Hare airport) point3_c(93.22,44.88) # Minneapolis (airport) # geodesic(point1,point3) returns 558.6867 geodesic
{"url":"https://fghoche.com/article/hierarchical-modeling-and-analysis-for-spatial-data-pdf-free-download","timestamp":"2024-11-04T04:36:05Z","content_type":"text/html","content_length":"207270","record_id":"<urn:uuid:0cb37453-e186-43f6-ad59-143347209ed1>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00070.warc.gz"}
Pedagogy - TARN on REVERSE CMS Analysis by PRICING PARTNERS Pricing Partners, independent valuation expert Pricing Partners is a service provider of independent valuation and international software developer of derivatives pricing analytics. Product description: This product pays a coupon whose performance is directly linked to the 10Y swap rate using the coupon on reverse CMS. More specifically, the coupon is given by the formula 12% - 2* CMS10Y floored at 2% and capped at 8%, or Min[8% ; Max(2% ; 12% - 2* CMS10Y)]. The coupons are paid until the sum of coupon payments reach 30%. As soon as the sum of coupon payments exceeds 30% on a payment date, the last coupon is adjusted so that the sum makes exactly 30%. The structure is automatically cancelled, and the nominal is repaid at par. In addition, if at maturity, the sum of coupon payments did not reach 30%, an additional coupon is paid, in order to reach Product Features : Issuer Banque d’investissement américaine Currency EUR Product type TARN Underlyings assets CMS10Y Exposure period 10 years (January 2010 - January 2020) 1-Leveraged position on CMS10Y Key features (Bet on CMS10Y to remain below a given level) 2-Automatic cancellation as soon as the sum of coupons reach 30% Cash Flow example: In this simple example, we provide the value of the underlying swap. This underlying swap receives the coupon of TARN and pays the floating leg (in our case, the 6-month Euribor) until the sum of coupon payments reach 30%. The cash flow table In this scenario, the value of the swap is near zero, indicating that the note should be issued to 100. Pricing & Risks : Ignoring the plight RNA, this product would be a simple reverse CMS capped and floored. A proper pricing of coupon depends on good modeling CMS options (which can be successfully replicated by a combination of swaptions). The condition RNA adds a risk of smile, as the strike is implicit Fortent RNA-dependent stochastic coupons paid successively. The product price is by using a model markov fonctional "which allows a proper consideration of the smile. It should be noted that the TARN condition makes the operation look more attractive because the operation is canceled when the sum of paid coupon reaches 30%. The study of the Greek shows that the product is very sensitive to the moves of interest yield curve. The volatility risk, measured by the Vega is not negligible, since the movements of the volatility surface have impacts on both coupons of reverse CMS and the TARN condition. Strategy returns table : In this example, the probability of reverse CMS returns to be between 2.5% and 5% is 75%. The probability to be between 5% et 7.5% is 15%, and the one to be below 2.5% is lower than 10%. Performance analysis: Sharpe ratio 37.62% Delta 2.00% Vega 0.59%
{"url":"https://www.next-finance.net/TARN-on-REVERSE-CMS","timestamp":"2024-11-06T14:33:43Z","content_type":"application/xhtml+xml","content_length":"54179","record_id":"<urn:uuid:622c6c8a-adc7-46e1-beae-45954f6d7f53>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00111.warc.gz"}
Hypervelocity orbital intercept guidance using certainty control Terminal guidance of a hypervelocity exoatmospheric orbital interceptor with free end time is examined. A new approach called certainty control is developed where control energy expenditure is reduced by constraining the expected final state to a function of projected estimate error. Conceptually, the constraint produces a shrinking sphere about the predicted impact point with the radius being a function of estimated error. If the predicted miss is inside or touching the sphere, thrusting is not necessary. The interceptor is modeled as a satellite with lateral thrusting capability using two-body orbital dynamics. The target is modeled as an intercontinental ballistic missile (IBM) in its final boost phase prior to burnout. Filtering is accomplished using an eight-state extended Kalman filter with line-of-sight and range updates. The estimated relative trajectory and variances are propagated numerically to predicted impact time and then approximated by splines, eliminating the need to propagate new data repeatedly when present conditions are varied. A search is then made for a new impact time and point that will minimize present interceptor velocity changes and final mass distance. This control strategy, which is applied to two intercept problems, substantially reduces fuel consumption. Journal of Guidance Control Dynamics Pub Date: June 1991 □ Ballistic Missiles; □ Hypervelocity Projectiles; □ Kalman Filters; □ Satellite Interceptors; □ Terminal Guidance; □ Algorithms; □ Computerized Simulation; □ Flight Control; □ Impact Prediction; □ Line Of Sight; □ Spline Functions; □ Space Communications, Spacecraft Communications, Command and Tracking
{"url":"https://ui.adsabs.harvard.edu/abs/1991JGCD...14..574A/abstract","timestamp":"2024-11-12T19:42:19Z","content_type":"text/html","content_length":"39654","record_id":"<urn:uuid:38f27167-6ff0-42b7-ace4-7f29828b79e5>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00171.warc.gz"}
A multifunctional matching algorithm for sample design in agricultural plots Collection of accurate and representative data from agricultural fields is required for efficient crop management. Since growers have limited available resources, there is a need for advanced methods to select representative points within a field in order to best satisfy sampling or sensing objectives. The main purpose of this work was to develop a data-driven method for selecting locations across an agricultural field given observations of some covariates at every point in the field. These chosen locations should be representative of the distribution of the covariates in the entire population and represent the spatial variability in the field. They can then be used to sample an unknown target feature whose sampling is expensive and cannot be realistically done at the population scale. An algorithm for determining these optimal sampling locations, namely the multifunctional matching (MFM) criterion, was based on matching of moments (functionals) between sample and population. The selected functionals in this study were standard deviation, mean, and Kendall's tau. An additional algorithm defined the minimal number of observations that could represent the population according to a desired level of accuracy. The MFM was applied to datasets from two agricultural plots: a vineyard and a peach orchard. The data from the plots included measured values of slope, topographic wetness index, normalized difference vegetation index, and apparent soil electrical conductivity. The MFM algorithm selected the number of sampling points according to a representation accuracy of 90% and determined the optimal location of these points. The algorithm was validated against values of vine or tree water status measured as crop water stress index (CWSI). Algorithm performance was then compared to two other sampling methods: the conditioned Latin hypercube sampling (cLHS) model and a uniform random sample with spatial constraints. Comparison among sampling methods was based on measures of similarity between the target variable population distribution and the distribution of the selected sample. MFM represented CWSI distribution better than the cLHS and the uniform random sampling, and the selected locations showed smaller deviations from the mean and standard deviation of the entire population. The MFM functioned better in the vineyard, where spatial variability was larger than in the orchard. In both plots, the spatial pattern of the selected samples captured the spatial variability of CWSI. MFM can be adjusted and applied using other moments/functionals and may be adopted by other disciplines, particularly in cases where small sample sizes are desired. أدرس بدقة موضوعات البحث “A multifunctional matching algorithm for sample design in agricultural plots'. فهما يشكلان معًا بصمة فريدة.
{"url":"https://cris.ariel.ac.il/ar/publications/a-multifunctional-matching-algorithm-for-sample-design-in-agricul-2","timestamp":"2024-11-03T15:43:22Z","content_type":"text/html","content_length":"64562","record_id":"<urn:uuid:12c50592-d622-468b-ab2a-94a9b7ecb2e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00093.warc.gz"}
How do you find all the asymptotes for function y=(2x^2 + 5x- 3)/(3x+1)? | HIX Tutor How do you find all the asymptotes for function #y=(2x^2 + 5x- 3)/(3x+1)#? Answer 1 The line #x=-1/3# is a vertical asymptote because the denominator, but not the numerator is #0# at #-1/3# There is no horizontal asymptote, because as #x# increases [decreases] without bound, #y# also increases [decreases] without bound. If "all the asymptotes" includes oblique asymptotes, then do the division to get: #(2x^2+5x-3)/(3x+1) = 2/3x+13/9-(40/9)/(3x+1)# #y=2/3x+13/9# is an oblique asymptote. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 Vertical Asymptote: $x = - \frac{1}{3}$ Slant Asymptote: $y = \frac{2}{3} x + \frac{13}{9}$. Vertical Asymptote To get the vertical asymptote, find the value of $x$ that will make the denominator $0$ (undefined). $3 x + 1 = 0$ $3 x = - 1$ $x = - \frac{1}{3}$ Answer 3 To find all the asymptotes for the function ( y = \frac{2x^2 + 5x - 3}{3x + 1} ), follow these steps: 1. Check for vertical asymptotes by setting the denominator equal to zero and solving for ( x ). Any value of ( x ) that makes the denominator zero will result in a vertical asymptote. 2. Check for horizontal asymptotes by comparing the degrees of the numerator and denominator. If the degree of the numerator is less than the degree of the denominator, the horizontal asymptote is ( y = 0 ). If the degree of the numerator is equal to the degree of the denominator, divide the leading coefficients to find the horizontal asymptote. 3. If the degrees differ by more than 1, there is an oblique (slant) asymptote. Find it by performing polynomial long division. 4. Lastly, check for any asymptotes arising from holes in the graph by simplifying the function and seeing if there are any common factors between the numerator and denominator. By following these steps, you can find all the asymptotes for the given function. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/how-do-you-find-all-the-asymptotes-for-function-y-2x-2-5x-3-3x-1-8f9afa4fd0","timestamp":"2024-11-07T00:45:10Z","content_type":"text/html","content_length":"585687","record_id":"<urn:uuid:7a1f6022-7c05-4beb-9e6e-cc0fbea43044>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00874.warc.gz"}
Science:Math Exam Resources/Courses/MATH104/December 2014/Question 01 (k) MATH104 December 2014 • Q1 (a) • Q1 (b) • Q1 (c) • Q1 (d) • Q1 (e) • Q1 (f) • Q1 (g) • Q1 (h) • Q1 (i) • Q1 (j) • Q1 (k) • Q1 (l) • Q1 (m) • Q1 (n) • Q2 • Q3 (a) • Q3 (b) • Q3 (c) • Q4 • Q5 (a) • Q5 (b) • Q5 (c) • Q5 (d) • Q5 (e) • Q5 (f) • Q6 • Question 01 (k) Find ${\displaystyle f'(0)}$, where ${\displaystyle f(x)=\tan ^{-1}(5x).}$ Note that ${\displaystyle \tan ^{-1}}$ refers to the inverse tangent function, which is also denoted by arctan. Make sure you understand the problem fully: What is the question asking you to do? Are there specific conditions or constraints that you should take note of? How will you know if your answer is correct from your work only? Can you rephrase the question in your own words in a way that makes sense to you? If you are stuck, check the hint below. Consider it for a while. Does it give you a new idea on how to approach the problem? If so, try it! If you remember the derivative of ${\displaystyle \tan ^{-1}{x}}$ you can just use that directly. Don't forget the chain rule! Otherwise, take the tangent of both sides and then compare the derivatives. Again, don't forget the chain rule! Checking a solution serves two purposes: helping you if, after having used the hint, you still are stuck on the problem; or if you have solved the problem and would like to check your work. • If you are stuck on a problem: Read the solution slowly and as soon as you feel you could finish the problem on your own, hide it and work on the problem. Come back later to the solution if you are stuck or if you want to check your work. • If you want to check your work: Don't only focus on the answer, problems are mostly marked for the work you do, make sure you understand all the steps that were required to complete the problem and see if you made mistakes or forgot some aspects. Your goal is to check that your mental process was correct, not only the result. Solution 1 Found a typo? Is this solution unclear? Let us know here. Please rate my easiness! It's quick and helps everyone guide their studies. The formula for derivative of ${\displaystyle \tan ^{-1}(x)={\frac {1}{1+x^{2}}}}$. Therefore, using the chain rule, we obtain that ${\displaystyle [\tan ^{-1}(5x)]'={\frac {1}{1+(5x)^{2}}}(5)={\frac {5}{1+25x^{2}}}}$. Plugging in ${\displaystyle x=0}$ we find that the final answer is ${\displaystyle f'(0)={\frac {5}{1+25(0)^{2}}}=5}$ Solution 2 Found a typo? Is this solution unclear? Let us know here. Please rate my easiness! It's quick and helps everyone guide their studies. We can also solve this without using the derivative of ${\displaystyle \tan ^{-1}x}$ directly. Instead, let's taking the tangent of both sides yields ${\displaystyle \tan {f(x)}=5x}$ Comparing the derivatives - remember the chain rule - we see that {\displaystyle {\begin{aligned}(5x)'&=\left(\tan(f(x))\right)'\\5&=\left({\frac {\sin(f(x))}{\cos(f(x))}}\right)'\\&={\frac {\cos(f(x))\cos(f(x))f'(x)-\sin(f(x))(-\sin(f(x)))f'(x)}{\cos ^{2}(f (x))}}\\&={\frac {(\cos ^{2}(f(x))+\sin ^{2}(f(x)))f'(x)}{\cos ^{2}(f(x))}}\\&={\frac {f'(x)}{\cos ^{2}(f(x))}}\\5\cos ^{2}(f(x))&=f'(x)\\5\cos ^{2}(\tan ^{-1}(5x))&=f'(x)\end{aligned}}} Finally, we plug in ${\displaystyle x=0}$ and use that ${\displaystyle \tan ^{-1}(0)=0}$ to obtain the final answer {\displaystyle {\begin{aligned}f'(0)&=5\cos ^{2}(\tan ^{-1}(0))\\&=5\cos ^{2}(0)\\&=5\end{aligned}}} Click here for similar questions MER QGH flag, MER QGQ flag, MER QGS flag, MER QGT flag, MER Tag Chain rule, Pages using DynamicPageList3 parser function, Pages using DynamicPageList3 parser tag
{"url":"https://wiki.ubc.ca/Science:Math_Exam_Resources/Courses/MATH104/December_2014/Question_01_(k)","timestamp":"2024-11-11T01:39:20Z","content_type":"text/html","content_length":"68441","record_id":"<urn:uuid:a9b8612c-d1d0-4329-bbd8-ce41af4cfb94>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00868.warc.gz"}
A compound statement is made up of more than one equation or inequality. A disjunction is a compound statement that uses the word or. Disjunction: x ≤ - ppt download Presentation is loading. Please wait. To make this website work, we log user data and share it with processors. To use this website, you must agree to our Privacy Policy , including cookie policy. Ads by Google
{"url":"http://slideplayer.com/slide/8101332/","timestamp":"2024-11-05T13:14:57Z","content_type":"text/html","content_length":"142016","record_id":"<urn:uuid:76ab7a08-d0e1-436d-8a68-49e45fdb3c22>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00712.warc.gz"}
AB-Exponential Regression Calculator This calculator performs Exponential Regression and produces an equation for the line of best fit for the provided predictor and response values. It also calculates the correlation coefficient. The equation takes the form: \( y = A \cdot B^x \) To use the calculator, provide a list of values for the predictor and the response, ensuring they are the same length, and then click the "Do Exponential Regression" button. Exponential Regression Equation: y = A · B^x Correlation Coefficient (r): Exponential Regression and Correlation Coefficient The Exponential Regression finds the best-fit equation in the form: \[ y = A \cdot B^x \] This regression is useful for modeling relationships where the change in \(Y\) is proportional to \(B^x\). The correlation coefficient \(r\) tells us how well the model fits the data. Correlation Coefficient Interpretation You can interpret the correlation coefficient \(r\) as follows: • 0.7 < |r| ≤ 1 — Strong correlation • 0.4 < |r| < 0.7 — Moderate correlation • 0.2 < |r| < 0.4 — Weak correlation • 0 ≤ |r| < 0.2 — No correlation Caveats and Conditions • Log Transformation: Exponential regression typically involves log transformation of the response values (Y), assuming all Y values are positive. This transformation linearizes the exponential relationship, allowing for easier calculation of the regression parameters. • Outliers: Large outliers can significantly distort the regression equation and the correlation coefficient. • Non-Linear Relationships: Exponential regression is only suitable for relationships that follow the exponential pattern. Other relationships may not fit well.
{"url":"https://researchdatapod.com/data-science-tools/calculators/ab-exponential-regression-calculator/","timestamp":"2024-11-07T20:00:51Z","content_type":"text/html","content_length":"170564","record_id":"<urn:uuid:42b3d769-abbb-464f-9311-923f8510cd9d>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00199.warc.gz"}
Three forms of a Quadratic Expression - A quadratic expression has three common presentations. 1. Standard (or polynomial) form: 2. Factored form 3. Completed square (or vertex) form Let’s see the expression 1. Standard form is simply 2. Factored form 3. Completed square form These three forms are equivalent. No matter what value of Let’s check with Now, spot checking like this is not a proof. I could have engineered that 10 would work, yet nothing else does. Beginning with one form, and algebraically changing its form will constitute a proof. We’ll do that work at the end of this page. Let’s see the graph of this function, Notice that the Notice that the Notice that the vertex (turning point) seems to be related to the completed square form In this unit, we find out how to algebraically convert from one form to another, and find out what each form is particularly useful for. Proof of Equivalence Show that Show that As Euclid postulated a long time ago, things that are equal are equal to each other. So if Therefore all three expressions are equivalent.
{"url":"https://tentotwelvemath.com/grade-11/grade-11-precalculus/4-quadratic-functions-and-equations/three-forms-of-a-quadratic-expression/","timestamp":"2024-11-05T06:23:37Z","content_type":"text/html","content_length":"56559","record_id":"<urn:uuid:a5ba6146-89b6-4f0f-bad3-3bb1b7e33144>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00566.warc.gz"}
The robust bilevel continuous knapsack problem with uncertain coefficients in the follower’s objective We consider a bilevel continuous knapsack problem where the leader controls the capacity of the knapsack and the follower chooses an optimal packing according to his own profits, which may differ from those of the leader. To this bilevel problem, we add uncertainty in a natural way, assuming that the leader does not have full knowledge about the follower's problem. More precisely, adopting the robust optimization approach and assuming that the follower's profits belong to a given uncertainty set, our aim is to compute a solution that optimizes the worst-case follower's reaction from the leader's perspective. By investigating the complexity of this problem with respect to different types of uncertainty sets, we make first steps towards better understanding the combination of bilevel optimization and robust combinatorial optimization. We show that the problem can be solved in polynomial time for both discrete and interval uncertainty, but that the same problem becomes NP-hard when each coefficient can independently assume only a finite number of values. In particular, this demonstrates that replacing uncertainty sets by their convex hulls may change the problem significantly, in contrast to the situation in classical single-level robust optimization. For general polytopal uncertainty, the problem again turns out to be NP-hard, and the same is true for ellipsoidal uncertainty even in the uncorrelated case. All presented hardness results already apply to the evaluation of the leader's objective function. Journal of Global Optimization, 2022. http://dx.doi.org/10.1007/s10898-021-01117-9 View The robust bilevel continuous knapsack problem with uncertain coefficients in the follower's objective
{"url":"https://optimization-online.org/2019/03/7105/","timestamp":"2024-11-13T08:10:10Z","content_type":"text/html","content_length":"85045","record_id":"<urn:uuid:1458613e-23d2-4efe-bf19-b00e0ba00dd0>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00443.warc.gz"}
How Many Square Feet In A Yard Of Concrete - Easy To Follow Guide The Ultimate Guide: Understanding the Conversion - Square Feet to Square Yards of Concrete Updated April 8, 2024 by Mike Day Welcome to the ultimate guide on understanding the conversion from square feet to square yards of concrete. Whether you're a homeowner embarking on a DIY project or a contractor working on a large-scale construction, knowing how to accurately convert these measurements is essential. In this comprehensive guide, we will break down the conversion process step-by-step, providing you with the knowledge and tools needed to make precise calculations. We'll explain the difference between square feet and square yards, as well as why it's important to convert between the two. Our expert tips and tricks will help you save time and money by avoiding common mistakes that can lead to unnecessary over- or under-ordering of concrete. Additionally, we'll provide you with real-life examples and practical scenarios to illustrate the conversion process in action. By the end of this guide, you'll have a solid understanding of how to convert square feet to square yards of concrete accurately, ensuring your projects are completed efficiently and within budget. So, let's dive in and become conversion experts together! IN ONE CUBIC YARD OF CONCRETE - there is 81 square feet at 4 inches thick. The amount of square feet you'll get from a yard of concrete will be determined by how thick (depth) your concrete slab is. A concrete slab that's 8' x 10' x 4" thick will take 1 yard of concrete. How many square feet does a yard of concrete cover at different thicknesses? The chart below shows you the exact amount of sq. ft. you'll get @ various depths. Difference between square feet and square yards Before we delve into the conversion process, let's first understand the difference between square feet and square yards. Both are commonly used to measure area, but they differ in terms of their unit of measurement. A square foot (sq ft) is a unit of area that represents a square with sides measuring one foot each. It is often used in smaller-scale projects, such as residential construction or interior design. On the other hand, a square yard (sq yd) is a larger unit of area, equal to an area measuring one yard (three feet) on each side. Square yards are typically used in larger construction projects, such as commercial buildings or road construction. The conversion between square feet and square yards is crucial because it allows for accurate estimation of materials needed for a project. Concrete, in particular, is commonly sold by the square yard, so understanding how to convert between the two measurements is essential for ordering the correct amount of concrete. how many square feet in a yard of concrete 4 inches thick? Most of the concrete floors and slabs I do are 4 inches thick. That's a normal thickness for most garages, patios, walkways, and shed slabs. If the concrete slab is 4 inches thick then I'll get 81 square feet from 1 yard of concrete. • A 10' x 10' concrete slab that's 4" thick will take 1.23 yards of concrete • A 10' x 12' concrete slab that's 4" thick will take 1.48 yards of concrete • A 20' x 20' concrete slab that's 4" thick will take 4.94 yards of concrete • A 24' x 24' concrete slab that's 4" thick will take 7.11 yards of concrete Why convert square feet to square yards for concrete projects Converting square feet to square yards is especially important for concrete projects due to the way concrete is typically sold. In most cases, concrete is priced and sold by the square yard. Therefore, knowing the square yardage required for a project is crucial to ensure accurate ordering and avoid wastage or shortages. By converting the square footage of a project into square yards, you can easily determine the amount of concrete needed. This prevents over-ordering, saving you money, and minimizes the risk of running out of concrete mid-project, which can cause delays and additional expenses. Additionally, understanding the conversion from square feet to square yards allows for better communication with suppliers and contractors. When discussing your project requirements, being able to provide measurements in square yards will ensure everyone is on the same page, avoiding any confusion or misunderstandings. Calculating square footage for concrete projects Before we can convert square feet to square yards, we must first calculate the square footage of the area requiring concrete. This step is crucial as it forms the basis for the conversion process. To calculate the square footage, measure the length and width of the area in feet. For example, if you're pouring a concrete patio that measures 15 feet in length and 10 feet in width, you would multiply these two measurements together: 15 ft x 10 ft = 150 sq ft. It's important to note that irregularly shaped areas may require additional calculations. In such cases, you may need to break down the area into smaller, more manageable shapes (e.g., rectangles, triangles) and calculate their individual square footages before summing them up to get the total square footage. Once you have determined the square footage of your concrete project, you can proceed to convert it to square yards for accurate ordering and estimation of materials. Converting square feet to square yards Now that we have the square footage of our concrete project, let's move on to the conversion from square feet to square yards. This conversion is relatively straightforward, requiring a simple mathematical calculation. To convert square feet to square yards, divide the square footage by 9. Since there are 9 square feet in a square yard, this conversion factor allows us to determine the equivalent area in square Let's continue with the example of the 150 square foot concrete patio. To convert this area to square yards, we divide 150 sq ft by 9: 150 sq ft / 9 = 16.67 sq yd. It's important to round up to the nearest whole number when dealing with measurements, so in this case, we would round up to 17 square yards. Therefore, for this particular project, you would need to order 17 square yards of concrete. Remember, always round up to ensure you have enough concrete to complete your project. It's better to have a slight excess than to run out of concrete midway through the job. Tools and resources for converting square feet to square yards Converting square feet to square yards doesn't have to be a manual calculation. There are several tools and resources available that can simplify the process and save you time. One popular option is to use an online conversion calculator. Many websites offer free conversion calculators that allow you to input the square footage and instantly obtain the equivalent in square yards. This eliminates the need for manual calculations and ensures accuracy. Another useful tool is a construction calculator. These handheld devices are specifically designed for construction professionals and DIY enthusiasts. They can perform a wide range of calculations, including converting square feet to square yards. Construction calculators often have additional features such as unit conversions, material estimations, and built-in formulas for common construction If you prefer a more traditional approach, you can also consult conversion tables or reference books specifically tailored to construction measurements. These resources provide conversion factors and formulas for various units of measurement, including square feet to square yards. By utilizing these tools and resources, you can streamline the conversion process and minimize the risk of errors in your calculations. This ultimately leads to more accurate ordering and estimation of concrete, saving you time, money, and unnecessary headaches. Common mistakes to avoid when converting square feet to square yards Converting square feet to square yards is a crucial step in calculating the amount of concrete needed for a project. This conversion is important because concrete is usually sold by the cubic yard. Here are some common mistakes to avoid during this process: 1. Forgetting to Convert Square Feet to Square Yards: Since there are 9 square feet in a square yard, failing to divide the total square footage by 9 can lead to purchasing either too much or too little concrete. 2. Ignoring Thickness of the Slab: When calculating concrete yardage, the thickness of the concrete slab is as crucial as its area. The total cubic yards of concrete needed is a function of the slab’s thickness (in feet), width, and length (both in yards). Neglecting slab thickness can result in incorrect volume calculations. 3. Mixing Units Without Conversion: Mixing units (e.g., feet with yards) without proper conversion can lead to calculation errors. Ensure all dimensions are in the same unit when calculating the volume of concrete needed. 4. Rounding Errors: Prematurely rounding off measurements during the conversion process can lead to inaccuracies in the final calculation. It's better to keep as many decimal places as possible during calculation and round off only the final result if necessary. 5. Not Accounting for Overages: It's advisable to add a contingency of about 5-10% to your total concrete volume to account for variations in ground leveling or slab thickness. Not including this buffer can result in not having enough concrete. 6. Forgetting to Convert Dimensions to Yards for Volume Calculations: For volume calculations, all dimensions should be in yards (length, width, and depth/thickness) since concrete is sold by the cubic yard. Failing to convert feet to yards for all dimensions before multiplying can give you a volume measurement that is not accurate for ordering concrete. 7. Overlooking the Shape of the Area: The calculation formula might differ slightly for circular or irregularly shaped areas compared to rectangular or square areas. Ensure you're using the right formula for the shape of your project. 8. Neglecting Rebar or Mesh Displacement: When calculating concrete volume, some forget to account for the displacement caused by rebar or wire mesh. Although this might not significantly affect small projects, it can impact large ones. 9. Assuming Uniform Depth Across the Site: If the site doesn’t have a uniform depth across its entire area, calculating as if it does can lead to inaccuracies. For areas with varying depths, calculate the volume for each section separately and then sum them up. 10. Not Consulting with Suppliers: Concrete needs can vary based on project specifics, including the type of mix. Not consulting with your concrete supplier about your calculations and project requirements can lead to problems down the line, such as having the wrong mix or insufficient quantities. Avoiding these common mistakes will help ensure that your concrete order is accurate, saving you time and money, and preventing delays in your project. how much concrete do you need? You can use the chart above to figure how many yards of concrete you need. Since the chart tells you how many square feet are in 1 yard of concrete at a certain depth, you can use this formula to figure total yardage for your project. At 4" thick the formula would look like this: • Length X Width ÷ 81 = yardage (example 24' x 24' = 576 sq. ft. 576 ÷ 81 = 7.11 yards of concrete) At 6" thick the formula would look like this: • Length X Width ÷ 54 = yardage (example 24' x 24' = 576 sq. ft. 576 ÷ 54 = 10.66 yards of concrete) The number you divide by will be determined by the depth. Use the chart above to figure out which number that is for you based off the depth of your slab, patio, driveway, etc. use this concrete calculator to see how many yards you need The concrete calculator below will help you determine the correct concrete yardage for your project. Just type in your Length, Width, and thickness. Use feet and inches if you're in the USA, (or meter & centimeter) if you use the metric system. Slabs, Square Footings, or Walls thickness or height Hole, Column, or Round Footings depth or height Circular Slab or Tube outer diameter inner diameter length or height how many bags of concrete makes a yard? The amount of bags of concrete it takes to make a yard will depend on the size of the bag. • It takes 45 bags of concrete to make a yard if the bag is 80 lbs. • It takes 60 bags of concrete to make a yard if the bag is 60 lbs. • It takes 75 bags of concrete to make a yard if the bag is 50 lbs. • It takes 90 bags of concrete to make a yard if the bag is 40 lbs. If you decide to use bags of Quikrete or Sakrete, find out how many bags of concrete are on a pallet. how is a yard of concrete measured? Concrete is measured in cubic yards. 1 cubic yard of concrete is 3 feet wide by 3 feet high by 3 feet deep. (3' x 3' x 3') There are 27 cubic feet in one cubic yard. A cubic foot measures 12 inches wide x 12 inches high by 12 inches deep. When you calculate yardage for your concrete slab, patio, floor, or driveway, you do it depending on how thick (depth) it is. When you order ready-mix concrete that's delivered on a concrete truck, you order it by the cubic yard. A bag of concrete is measured by the cubic foot. (because bags are small compared to a cubic yard) • An 80 lb. bag of concrete will yield 0.60 cubic feet (or 1.8 square feet @ 4" thick) • A 60 lb. bag of concrete will yield 0.45 cubic feet (or 1.35 square feet @ 4" thick) • A 50 lb. bag of concrete will yield 0.37 cubic feet (or 1.11 square feet @ 4" thick) • A 40 lb. bag of concrete will yield 0.30 cubic feet (or 0.90 square feet @ 4" thick) 1. How Many Square Feet in a Yard of Concrete
{"url":"https://www.everything-about-concrete.com/how-many-square-feet-in-a-yard-of-concrete.html","timestamp":"2024-11-13T08:41:16Z","content_type":"text/html","content_length":"504046","record_id":"<urn:uuid:86523eef-db99-474b-bf60-459540728a64>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00665.warc.gz"}
All you need to know about Non-Inferiority Hypothesis TestAll you need to know about Non-Inferiority Hypothesis Test Datascience in Towards Data Science on Medium, All you need to know about Non-Inferiority Hypothesis Test All You Need to Know About the Non-Inferiority Hypothesis Test A non-inferiority test statistically proves that a new treatment is not worse than the standard by more than a clinically acceptable margin Generated using Midjourney by prateekkrjain.com While working on a recent problem, I encountered a familiar challenge—“How can we determine if a new treatment or intervention is at least as effective as a standard treatment?” At first glance, the solution seemed straightforward—just compare their averages, right? But as I dug deeper, I realised it wasn’t that simple. In many cases, the goal isn’t to prove that the new treatment is better, but to show that it’s not worse by more than a predefined margin. This is where non-inferiority tests come into play. These tests allow us to demonstrate that the new treatment or method is “not worse” than the control by more than a small, acceptable amount. Let’s take a deep dive into how to perform this test and, most importantly, how to interpret it under different scenarios. The Concept of Non-Inferiority Testing In non-inferiority testing, we’re not trying to prove that the new treatment is better than the existing one. Instead, we’re looking to show that the new treatment is not unacceptably worse. The threshold for what constitutes “unacceptably worse” is known as the non-inferiority margin (Δ). For example, if Δ=5, the new treatment can be up to 5 units worse than the standard treatment, and we’d still consider it acceptable. This type of analysis is particularly useful when the new treatment might have other advantages, such as being cheaper, safer, or easier to administer. Formulating the Hypotheses Every non-inferiority test starts with formulating two hypotheses: • Null Hypothesis (H0): The new treatment is worse than the standard treatment by more than the non-inferiority margin Δ. • Alternative Hypothesis (H1): The new treatment is not worse than the standard treatment by more than Δ. When Higher Values Are Better: For example, when we are measuring something like drug efficacy, where higher values are better, the hypotheses would be: • H0: The new treatment is worse than the standard treatment by at least Δ (i.e., μnew − μcontrol ≤ −Δ). • H1: The new treatment is not worse than the standard treatment by more than Δ (i.e., μnew − μcontrol > −Δ). When Lower Values Are Better: On the other hand, when lower values are better, like when we are measuring side effects or error rates, the hypotheses are reversed: • H0: The new treatment is worse than the standard treatment by at least Δ (i.e., μnew − μcontrol ≥ Δ). • H1: The new treatment is not worse than the standard treatment by more than Δ (i.e., μnew − μcontrol < Δ). To perform a non-inferiority test, we calculate the Z-statistic, which measures how far the observed difference between treatments is from the non-inferiority margin. Depending on whether higher or lower values are better, the formula for the Z-statistic will differ. • When higher values are better: • When lower values are better: where δ is the observed difference in means between the new and standard treatments, and SE(δ) is the standard error of that difference. Calculating P-Values The p-value tells us whether the observed difference between the new treatment and the control is statistically significant in the context of the non-inferiority margin. Here’s how it works in different scenarios: • When higher values are better, we calculate p = 1 − P(Z ≤ calculated Z) as we are testing if the new treatment is not worse than the control (one-sided upper-tail test). • When lower values are better, we calculate p = P(Z ≤ calculated Z) since we are testing whether the new treatment has lower (better) values than the control (one-sided lower-tail test). Understanding Confidence Intervals Along with the p-value, confidence intervals provide another key way to interpret the results of a non-inferiority test. • When higher values are preferred, we focus on the lower bound of the confidence interval. If it’s greater than −Δ, we conclude non-inferiority. • When lower values are preferred, we focus on the upper bound of the confidence interval. If it’s less than Δ, we conclude non-inferiority. The confidence interval is calculated using the formula: • when higher values preferred • when lower values preferred Calculating the Standard Error (SE) The standard error (SE) measures the variability or precision of the estimated difference between the means of two groups, typically the new treatment and the control. It is a critical component in the calculation of the Z-statistic and the confidence interval in non-inferiority testing. To calculate the standard error for the difference in means between two independent groups, we use the following formula: • σ_new and σ_control are the standard deviations of the new and control groups. • p_new and p_control are the proportion of success of the new and control groups. • n_new and n_control are the sample sizes of the new and control groups. The Role of Alpha (α) In hypothesis testing, α (the significance level) determines the threshold for rejecting the null hypothesis. For most non-inferiority tests, α=0.05 (5% significance level) is used. • A one-sided test with α=0.05 corresponds to a critical Z-value of 1.645. This value is crucial in determining whether to reject the null hypothesis. • The confidence interval is also based on this Z-value. For a 95% confidence interval, we use 1.645 as the multiplier in the confidence interval formula. In simple terms, if your Z-statistic is greater than 1.645 for higher values, or less than -1.645 for lower values, and the confidence interval bounds support non-inferiority, then you can confidently reject the null hypothesis and conclude that the new treatment is non-inferior. Let’s break down the interpretation of the Z-statistic and confidence intervals across four key scenarios, based on whether higher or lower values are preferred and whether the Z-statistic is positive or negative. Here’s a 2x2 framework: Non-inferiority tests are invaluable when you want to demonstrate that a new treatment is not significantly worse than an existing one. Understanding the nuances of Z-statistics, p-values, confidence intervals, and the role of α will help you confidently interpret your results. Whether higher or lower values are preferred, the framework we’ve discussed ensures that you can make clear, evidence-based conclusions about the effectiveness of your new treatment. Now that you’re equipped with the knowledge of how to perform and interpret non-inferiority tests, you can apply these techniques to a wide range of real-world problems. Happy testing! Note: All images, unless otherwise noted, are by the author. All you need to know about Non-Inferiority Hypothesis Test was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story. from Datascience in Towards Data Science on Medium https://ift.tt/2ORyM9j
{"url":"https://www.esmarketingdigital.es/2024/10/all-you-need-to-know-about-non.html","timestamp":"2024-11-14T18:28:50Z","content_type":"application/xhtml+xml","content_length":"494736","record_id":"<urn:uuid:dfaf1ae1-e066-4d2c-96d6-4a0e007f0833>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00898.warc.gz"}
depth of field calculations calculation for Calculations 29 Mar 2024 Popularity: ⭐⭐⭐ Depth of Field Calculations This calculator provides the calculation of depth of field for photography applications. Calculation Example: Depth of field refers to the range of distances that appear sharp in an image. It is affected by factors such as the focal length of the lens, the aperture, and the distance to the subject. The depth of field calculator uses the following formulas to calculate the near and far distances of the depth of field: Near Distance (ND) = (s^2 * N * c) / (f * (N - 1)) Far Distance (FD) = (s^2 * N * c) / (f * (N + 1)) s = distance from the lens to the subject f = focal length of the lens N = f-number (aperture) c = distance from the lens to the circle of confusion Related Questions Q: What is the relationship between aperture and depth of field? A: Aperture is inversely proportional to depth of field. This means that a larger aperture (smaller f-number) will result in a shallower depth of field, while a smaller aperture (larger f-number) will result in a deeper depth of field. Q: How does focal length affect depth of field? A: Focal length is directly proportional to depth of field. This means that a longer focal length will result in a shallower depth of field, while a shorter focal length will result in a deeper depth of field. | —— | —- | —- | Calculation Expression Near Distance: The near distance of the depth of field is given by ND = (s^2 * N * c) / (f * (N - 1)) (s^2 * N * c) / (f * (N - 1)) Far Distance: The far distance of the depth of field is given by FD = (s^2 * N * c) / (f * (N + 1)) (s^2 * N * c) / (f * (N + 1)) Calculated values Considering these as variable values: s=2.0, c=0.005, f=50.0, N=2.8, the calculated value(s) are given in table below | —— | —- | Near Distance 0.000421053 Similar Calculators Calculator Apps Matching 3D parts for depth of field calculations calculation for Calculations App in action The video below shows the app in action.
{"url":"https://blog.truegeometry.com/calculators/depth_of_field_calculations_calculation_for_Calculations.html","timestamp":"2024-11-03T01:00:03Z","content_type":"text/html","content_length":"27236","record_id":"<urn:uuid:8ed0c9f1-20c8-4c2e-a6bd-cb18bf36140f>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00349.warc.gz"}
mrst-adblackoil-gcs [GitHub] 9/20/2023 Provides examples and advanced functionality for 3D simulation of geologic CO2 storage in saline aquifers using the ad-blackoil module of the Matlab Reservoir Simulation Toolbox (MRST). As of 04/25/ 2024, this code has been integrated into MRST (co2lab-mit module available with mrst-2024a or later). 1. Calculation of PVT properties: thermodynamic formulation following Hassanzadeh et al., IJGGC (2008). Includes functions, validation and example. 2. Relative permeability hysteresis: calculation of trapped gas saturation and bounding imbibition curve using Land’s SPE J (1968) model, implementation of Killough's SPE J (1974) model as in ECLIPSE for scanning curves. Includes the hysteresis class and an example which uses the PUNQ model and compares the result with that of ECLIPSE (see figure below). 3. Molecular diffusion (includes the diffusion class and an example inspired by work on the FluidFlower). CO2 saturation at the top of the reservoir after CO2 injection and migration. Top row results are from Juanes et al., WRR (2006), bottom results obtained with this implementation in MRST. Effect of increasing the diffusivity on convective finger width. This phenomenon occurs because a liquid solution of CO2 and water is denser than pure water, so the mixture sinks to the bottom of the PREDICT: PeRmEability DIstributions of Clay-smeared faulTs [GitHub] 9/22/2022 Computes upscaled fault permeability distributions using a parameter-based, probabilistic description of clay and sand smears. Written in MATLAB. • Generates multiple realizations of the fault core materials in shallow (depth < ~3km), poorly-lithified siliciclastic sequences, using a high-resolution grid (fine grid). The computation of fault permeability is performed in a given throw window, and the code can be run in 2D or 3D. • Outputs the directional components of the fault permeability tensor (dip-normal, kxx; strike-parallel, kyy; and dip-parallel, kzz) in an upscaling (coarse) grid defined by the user (3D version). For example, this flexibility is useful to assign fault permeability in subsequent flow simulation models. • The fault permeability is obtained via flow-based upscaling of the fine grid permeability. PREDICT uses MRST to perform permeability upscaling (finite volume method) • Incorporates uncertainty in the geological variables that control the fault material distribution and their properties. Therefore, the output permeability is given as a set of probability • Mostly vectorized where possible and can be run in parallel using MATLAB's parfor, so it is fast and scales well with the number of computing cores available. Convergence of output probability distributions vs number of realizations performed, for five different example sequences (A to E). N = 1000 realizations are enough. Computing times for an example sequence vs cores available Example of output probability distributions of fault permeability for each directional component, and their correlation.
{"url":"https://www.lluissalo.com/software","timestamp":"2024-11-11T13:59:44Z","content_type":"text/html","content_length":"91183","record_id":"<urn:uuid:a9d7cce5-7e25-47fb-a305-97017c1adc08>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00186.warc.gz"}
The FFT function returns a result equal to the complex, discrete Fourier transform of Array. The result of this function is a single- or double-precision complex array. The discrete Fourier transform, F(u), of an N-element, one-dimensional function, f(x), is defined as: And the inverse transform, (Direction > 0), is defined as: If the keyword OVERWRITE is set, the transform is performed in-place, and the result overwrites the original contents of the array. See Fast Fourier Transform Background for more information on how FFT is used to reduce background noise in imagery. The following code plots the logarithm of the power spectrum of a 100-element index array: p = PLOT(ABS(FFT(FINDGEN(100), -1)), /YLOG) Additional Examples See Additional Examples and Fast Fourier Transform for more code examples using the FFT function. Algorithm & Running Time FFT uses a multivariate complex Fourier transform, computed in place with a mixed-radix Fast Fourier Transform algorithm. The FFT algorithm is implemented differently based on platform and the number of dimensions: • Intel x86_64 (Linux, Mac, Windows): For all arrays less than 8 dimensions, FFT calls the Intel Math Kernel Library Fast Fourier Transform. For arrays of 8 dimensions, FFT uses Fortran code authored by RC Singleton (Stanford Research Institute, September 1968 (NIST Guide to Available Math Software) and translated by f2c (version 19950721; MJ Olesen, Queen's University at Kingston, 1995-97; retrieved from NETLIB January 2000). • Arm64 (Apple Silicon, M-series Macs): FFT calls the Arm Performance Libraries (ArmPL) FFT functions. For a one-dimensional FFT, running time is roughly proportional to the total number of points in Array times the sum of its prime factors. Let N be the total number of elements in Array, and decompose N into its prime factors: Running time is proportional to: where T[3] ~ 4T[2]. For example, the running time of a 263 point FFT is approximately 10 times longer than that of a 264 point FFT, even though there are fewer points. The sum of the prime factors of 263 is 264 (1 + 263), while the sum of the prime factors of 264 is 20 (2 + 2 + 2 + 3 + 11). Result = FFT( Array [, Direction] [, /CENTER] [, DIMENSION=scalar] [, /DOUBLE] [, /INVERSE] [, /OVERWRITE] ) Return Value FFT returns a complex array that has the same dimensions as the input array. The output array is ordered in the same manner as almost all discrete Fourier transforms. Element 0 contains the zero frequency component, F[0]. The array element F[1] contains the smallest, nonzero positive frequency, which is equal to 1/(N[i] T[i]), where N[i] is the number of elements and T[i] is the sampling interval of the i^th dimension. F[2] corresponds to a frequency of 2/(N[i] T[i]). Negative frequencies are stored in the reverse order of positive frequencies, ranging from the highest to lowest negative frequencies. Note: The FFT function can be performed on functions of up to eight (8) dimensions. If a function has n dimensions, IDL performs a transform in each dimension separately, starting with the first dimension and progressing sequentially to dimension n. For example, if the function has two dimensions, IDL first does the FFT row by row, and then column by column. For an even number of points in the i^th dimension, the frequencies corresponding to the returned complex values are: 0, 1/(N[i]T[i]), 2/(N[i]T[i]), ..., (N[i]/2–1)/(N[i]T[i]), 1/(2T[i]), –(N[i]/2–1)/(N[i]T[i]), ..., –1/(N[i]T[i]) where 1/(2T[i]) is the Nyquist critical frequency. For an odd number of points in the i^th dimension, the frequencies corresponding to the returned complex values are: 0, 1/(N[i]T[i]), 2/(N[i]T[i]), ..., (N[i]–1)/2)/(N[i]T[i]), –(N[i]–1)/2)/(N[i]T[i]), ..., –1/(N[i]T[i]) In IDL code, these frequencies may be computed as follows: ; N is an integer giving the number of elements in a particular dimension ; T is a floating-point number giving the sampling interval X = FINDGEN((N - 1)/2) + 1 is_N_even = (N MOD 2) EQ 0 if (is_N_even) then $ freq = [0.0, X, N/2, -N/2 + X]/(N*T) $ else $ freq = [0.0, X, -(N/2 + 1) + X]/(N*T) The array to which the Fast Fourier Transform should be applied. If Array is not of complex type, it is converted to complex type. The dimensions of the result are identical to those of Array. The size of each dimension may be any integer value and does not necessarily have to be an integer power of 2, although powers of 2 are certainly the most efficient. Direction is a scalar indicating the direction of the transform, which is negative by convention for the forward transform, and positive for the inverse transform. If Direction is not specified, the forward transform is performed. A normalization factor of 1/N, where N is the number of points, is applied during the forward transform. Note: When transforming from a real vector to complex and back, it is slightly faster to set Direction to 1 in the real to complex FFT. Note also that the value of Direction is ignored if the INVERSE keyword is set. Set this keyword to shift the zero-frequency component to the center of the spectrum. In the forward direction, when CENTER is set, the resulting Fourier transform has the zero-frequency component shifted to the center of the array. In the reverse direction, when CENTER is set, the input is assumed to be a centered Fourier transform, and the coefficients are shifted back before performing the inverse transform. Note: For an odd number of points the zero-frequency component will be in the center. For an even number of points the first element will correspond to the Nyquist frequency component, followed by the remaining frequency components - the zero-frequency component will then be in the center of the remaining components. Set this keyword to a scalar indicating the dimension across which to calculate the FFT. If this keyword is not present or is zero, then the FFT is computed across all dimensions of the input array. If this keyword is present, then the FFT is only calculated only across a single dimension. For example, if the dimensions of Array are N1, N2, N3, and DIMENSION is 2, the FFT is calculated only across the second dimension. Note: If the CENTER keyword is also set, then only the dimension given by the DIMENSION keyword is shifted to the center. The other dimensions remain unshifted. Set this keyword to a value other than zero to force the computation to be done in double-precision arithmetic, and to give a result of double-precision complex type. If DOUBLE is set equal to zero, computation is done in single-precision arithmetic and the result is single-precision complex. If DOUBLE is not specified, the data type of the result will match the data type of Array. Set this keyword to perform an inverse transform. Setting this keyword is equivalent to setting the Direction argument to a positive value. Note, however, that setting INVERSE results in an inverse transform even if Direction is specified as negative. If this keyword is set, and the Array parameter is a variable of complex type, the transform is done “in-place”. The result overwrites the previous contents of the variable. For example, to perform a forward, in-place FFT on the variable a: a = FFT(a, -1, /OVERWRITE) Thread Pool Keywords On Intel x86_64 platforms, this routine is written to make use of IDL’s thread pool, which can increase execution speed on systems with multiple CPUs. The values stored in the !CPU system variable control whether IDL uses the thread pool for a given computation. In addition, you can use the thread pool keywords TPOOL_MAX_ELTS, TPOOL_MIN_ELTS, and TPOOL_NOTHREAD to override the defaults established by !CPU for a single invocation of this routine. See Thread Pool Keywords for details. On the arm64, M-series Mac platform the IDL thread pool settings and keywords are ignored. Note: Specifically, FFT will use the thread pool to overlap the inner loops of the computation when used on data with dimensions which have factors of 2, 3, 4, or 5. The prime-number DFT does not use the thread pool, as doing so would yield a relatively small benefit for the complexity it would introduce. Our experience shows that the improvement in performance from using the thread pool for FFT is highly dependent upon many factors (data length and dimensions, single vs. double precision, operating system, and hardware) and can vary between platforms. Additional Examples Example 1 In this example we display the power spectrum of a 100-element vector sampled at a rate of 0.1 seconds per point. The 0 frequency component is displayed at the center of the plot, and frequency is plotted on the x-axis: ; Define the number of points and the interval: N = 100 T = 0.1 ; Midpoint+1 is the most negative frequency subscript: N21 = N/2 + 1 ; The array of subscripts: F = INDGEN(N) ; Insert negative frequencies in elements F(N/2 +1), ..., F(N-1): F[N21] = N21 -N + FINDGEN(N21-2) ; Compute T0 frequency: F = F/(N*T) ; Shift so that the most negative frequency is plotted first: p = PLOT(SHIFT(F, -N21), SHIFT(ABS(FFT(F, -1)), -N21), /YLOG) Example 2 In this example we compute the FFT of two-dimensional images: ; Create a cosine wave damped by an exponential n = 256 x = FINDGEN(n) y = COS(x*!PI/6)*EXP(-((x - n/2)/30)^2/2) ; Construct a two-dimensional image of the wave z = REBIN(y, n, n) ; Add two different rotations to simulate a crystal structure z = ROT(z, 10) + ROT(z, -45) LOADCT, 39 p = IMAGE(BYTSCL(z), LAYOUT = [2, 2, 1]) ; Compute the two-dimensional FFT f = FFT(z) logpower = ALOG10(ABS(f)^2) ; log of Fourier power spectrum p = IMAGE(BYTSCL(logpower), LAYOUT = [2, 2, 2], /CURRENT) ; Compute the FFT only along the first dimension f = FFT(z, DIMENSION=1) logpower = ALOG10(ABS(f)^2) ; log of Fourier power spectrum p = IMAGE(BYTSCL(logpower), LAYOUT = [2, 2, 3], /CURRENT) ; Compute the FFT only along the second dimension. f = FFT(z, DIMENSION=2) logpower = ALOG10(ABS(f)^2) ; log of Fourier power spectrum p = IMAGE(BYTSCL(logpower), LAYOUT = [2, 2, 4], /CURRENT) Version History Original Introduced 7.1 Added CENTER keyword 8.4.1 Bug fix: When both DIMENSION and CENTER are set, only DIMENSION is shifted to the center; all other dimensions remain unshifted. 8.8.1 Changed algorithm to use Intel Math Kernel Library Fast Fourier Transform 9.0 Implemented algorithm on arm64 Mac to use the Arm Performance Libraries (ArmPL) See Also
{"url":"https://www.nv5geospatialsoftware.com/docs/fft.html","timestamp":"2024-11-13T22:03:06Z","content_type":"text/html","content_length":"100310","record_id":"<urn:uuid:4b3c75e5-4d50-48e5-bcca-4549b4b56f6b>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00153.warc.gz"}
Microsoft Excel Comprehensive Guide to Excel Formulas 1 | COMPUTER ACADEMY Microsoft Excel Comprehensive Guide to Excel Formulas 1 Microsoft Excel is a powerful spreadsheet program developed byMicrosoft Excel Comprehensive Guide to Excel Formulas 1 It is used for data organization, calculation, and analysis. Excel is an essential tool in many fields, including business, finance, science, and education. This guide provides a detailed overview of Excel’s features, functions, and tools, making it easy to teach and understand.Microsoft Excel:Comprehensive Guide to Excel Formulas 1 Table of Contents 1. The Excel Interface 1.1 Workbook and Worksheets • Workbook: An Excel file that contains one or more worksheets. • Worksheet: A single spreadsheet within a workbook, made up of rows and columns where data is entered. 1.2 The Ribbon The Ribbon is the toolbar at the top of Excel, containing tabs and commands. It is divided into several tabs: • Home Tab: Basic formatting and editing commands. • Insert Tab: Tools to insert tables, charts, and other objects. • Page Layout Tab: Options for page setup, themes, and printing. • Formulas Tab: Tools for working with formulas and functions. • Data Tab: Tools for data management, such as sorting and filtering. • Review Tab: Proofing tools, comments, and protection options. • View Tab: Options for viewing and arranging the worksheet. 2. Key Tabs and Their Functions 2.1 Home Tab • Clipboard Group: Cut, copy, paste, and format painter. • Font Group: Font type, size, color, and styles (bold, italic, underline). • Alignment Group: Text alignment, orientation, and wrapping. • Number Group: Number formatting (currency, percentage, date). • Styles Group: Conditional formatting, cell styles. • Cells Group: Insert, delete, format cells. • Editing Group: Find, replace, and select tools. 2.2 Insert Tab • Tables Group: Insert tables and pivot tables. • Illustrations Group: Insert pictures, shapes, icons, and 3D models. • Add-ins Group: Insert add-ins to enhance functionality. • Charts Group: Create various charts (column, line, pie, bar). • Sparklines Group: Insert mini charts within cells. • Filters Group: Insert slicers and timelines for filtering data. • Links Group: Insert hyperlinks. • Text Group: Insert text boxes, headers, footers, and WordArt. • Symbols Group: Insert equations and symbols. 2.3 Page Layout Tab • Themes Group: Apply themes to the workbook. • Page Setup Group: Set margins, orientation, size, print area. • Scale to Fit Group: Adjust the scaling of the worksheet for printing. • Sheet Options Group: Show or hide gridlines and headings. • Arrange Group: Arrange objects on the worksheet. 2.4 Formulas Tab • Function Library Group: Access various functions (math, logical, text, date & time, lookup). • Defined Names Group: Define and manage named ranges. • Formula Auditing Group: Trace precedents, dependents, and check errors. • Calculation Group: Set calculation options and recalculate the worksheet. 2.5 Data Tab • Get & Transform Data Group: Import data from various sources. • Queries & Connections Group: Manage queries and connections. • Sort & Filter Group: Sort and filter data. • Data Tools Group: Data validation, consolidation, text to columns. • Forecast Group: Create forecasts and what-if analysis. • Outline Group: Group and ungroup data. 2.6 Review Tab • Proofing Group: Spell check, thesaurus. • Accessibility Group: Check accessibility. • Insights Group: Smart lookup. • Language Group: Translate text. • Comments Group: Add, delete, and manage comments. • Protect Group: Protect sheets and workbooks. 2.7 View Tab • Workbook Views Group: Normal, Page Break Preview, Page Layout. • Show Group: Show or hide rulers, gridlines, headings, formula bar. • Zoom Group: Zoom in or out of the worksheet. • Window Group: New window, arrange, freeze panes, split. • Macros Group: Record and manage macros. 3. Essential Excel Functions 3.1 Basic Functions • SUM: Adds a range of cells. • =SUM(A1:A10) • AVERAGE: Calculates the average of a range. • =AVERAGE(B1:B10) • MIN/MAX: Finds the minimum/maximum value in a range. • =MIN(C1:C10) • =MAX(D1:D10) 3.2 Logical Functions • IF: Performs a logical test and returns one value for TRUE and another for FALSE. • =IF(A1>10, "Yes", "No") • AND/OR: Combines multiple conditions. • =AND(A1>10, B1<20) • =OR(A1>10, B1<20) 3.3 Text Functions • CONCATENATE: Joins several text strings into one. • =CONCATENATE(A1, " ", B1) • LEFT/RIGHT/MID: Extracts characters from a text string. • =LEFT(A1, 5) • =RIGHT(A1, 5) • =MID(A1, 2, 3) 3.4 Date and Time Functions • TODAY/NOW: Returns the current date/time. • =TODAY() • =NOW() • DATE: Creates a date from year, month, and day. • =DATE(2024, 6, 28) • DAYS: Calculates the number of days between two dates. • =DAYS(B1, A1) 3.5 Lookup and Reference Functions • VLOOKUP: Looks for a value in the first column of a table and returns a value in the same row from another column. • =VLOOKUP(A1, B1:C10, 2, FALSE) • HLOOKUP: Looks for a value in the first row of a table and returns a value in the same column from another row. • =HLOOKUP(A1, B1:K10, 3, FALSE) • INDEX/MATCH: More flexible alternatives to VLOOKUP/HLOOKUP. • =INDEX(B1:B10, MATCH(A1, C1:C10, 0)) 4. Data Analysis Tools 4.1 PivotTables PivotTables are used to summarize, analyze, explore, and present data. • Creating a PivotTable: Select the data range and go to Insert > PivotTable. • Field List: Drag fields to Rows, Columns, Values, and Filters areas. • PivotChart: Create charts from PivotTables for visual data analysis. 4.2 Charts and Graphs Excel offers various chart types to represent data visually. • Types of Charts: Column, Line, Pie, Bar, Area, Scatter, etc. • Creating a Chart: Select the data range and go to Insert > Chart. • Formatting Charts: Customize chart elements like titles, labels, and legends. 5. Data Management 5.1 Sorting and Filtering • Sorting: Arrange data in ascending or descending order. • Go to Data > Sort. • Filtering: Display only the data that meets certain criteria. • Go to Data > Filter. 5.2 Data Validation • Data Validation: Restrict the type of data that can be entered in a cell. • Go to Data > Data Validation. 5.3 Conditional Formatting • Conditional Formatting: Apply formatting based on cell values. • Go to Home > Conditional Formatting. 6. Advanced Excel Features 6.1 Macros Macros are used to automate repetitive tasks. • Recording a Macro: Go to View > Macros > Record Macro. • Running a Macro: Go to View > Macros > View Macros. 6.2 Solver Solver is an add-in for complex optimization problems. • Enabling Solver: Go to File > Options > Add-ins > Solver Add-in. • Using Solver: Go to Data > Solver, define the problem, and solve. 6.3 Data Analysis Toolpak The Data Analysis Toolpak is an add-in with advanced data analysis tools. • Enabling Data Analysis Toolpak: Go to File > Options > Add-ins > Analysis Toolpak. • Using Data Analysis Toolpak: Go to Data > Data Analysis. Comprehensive Guide to Excel Formulas Microsoft Excel is renowned for its powerful formula and function capabilities, enabling users to perform a vast range of calculations, data manipulations, and analyses. Below is a detailed compilation of essential and advanced Excel formulas, categorized for easier understanding and teaching. 1. Basic Arithmetic Formulas • Addition: =A1 + B1 • Subtraction: =A1 - B1 • Multiplication: =A1 * B1 • Division: =A1 / B1 • Exponentiation: =A1 ^ B1 2. Statistical Functions • SUM: Adds a range of cells. • =SUM(A1:A10) • AVERAGE: Calculates the average of a range. • =AVERAGE(B1:B10) • MEDIAN: Finds the median value in a range. • =MEDIAN(C1:C10) • MODE: Returns the most frequently occurring value. • =MODE(D1:D10) • MIN: Finds the minimum value in a range. • =MIN(E1:E10) • MAX: Finds the maximum value in a range. • =MAX(F1:F10) • COUNT: Counts the number of cells with numerical data. • =COUNT(G1:G10) • COUNTA: Counts the number of non-empty cells. • =COUNTA(H1:H10) • COUNTIF: Counts cells that meet a condition. • =COUNTIF(I1:I10, ">10") • SUMIF: Adds cells that meet a condition. • =SUMIF(J1:J10, ">10") 3. Logical Functions • IF: Performs a logical test and returns one value for TRUE and another for FALSE. • =IF(A1 > 10, "Yes", "No") • AND: Returns TRUE if all conditions are true. • =AND(A1 > 10, B1 < 20) • OR: Returns TRUE if any condition is true. • =OR(A1 > 10, B1 < 20) • NOT: Reverses the logical value. • =NOT(A1 > 10) • IFERROR: Returns a value if there is an error. • =IFERROR(A1/B1, "Error") 4. Text Functions • CONCATENATE: Joins several text strings into one. • =CONCATENATE(A1, " ", B1) • TEXTJOIN: Joins text strings with a delimiter. • =TEXTJOIN(", ", TRUE, A1:A5) • LEFT: Extracts a specified number of characters from the left. • =LEFT(A1, 5) • RIGHT: Extracts a specified number of characters from the right. • =RIGHT(A1, 5) • MID: Extracts characters from the middle of a text string. • =MID(A1, 2, 3) • LEN: Returns the length of a text string. • =LEN(A1) • FIND: Finds the starting position of a substring. • =FIND("text", A1) • SEARCH: Similar to FIND but case-insensitive. • =SEARCH("text", A1) • UPPER: Converts text to uppercase. • =UPPER(A1) • LOWER: Converts text to lowercase. • =LOWER(A1) • PROPER: Converts text to proper case. • =PROPER(A1) • TRIM: Removes extra spaces. • =TRIM(A1) 5. Date and Time Functions • TODAY: Returns the current date. • =TODAY() • NOW: Returns the current date and time. • =NOW() • DATE: Creates a date from year, month, and day. • =DATE(2024, 6, 28) • DAY: Returns the day of the month. • =DAY(A1) • MONTH: Returns the month. • =MONTH(A1) • YEAR: Returns the year. • =YEAR(A1) • HOUR: Returns the hour. • =HOUR(A1) • MINUTE: Returns the minute. • =MINUTE(A1) • SECOND: Returns the second. • =SECOND(A1) • DAYS: Calculates the number of days between two dates. • =DAYS(A1, B1) • NETWORKDAYS: Calculates the number of working days between two dates. • =NETWORKDAYS(A1, B1) • EDATE: Returns a date a specified number of months before or after a given date. • =EDATE(A1, 3) 6. Lookup and Reference Functions • VLOOKUP: Looks for a value in the first column and returns a value in the same row from another column. • =VLOOKUP(A1, B1:C10, 2, FALSE) • HLOOKUP: Looks for a value in the first row and returns a value in the same column from another row. • =HLOOKUP(A1, B1:K10, 3, FALSE) • INDEX: Returns the value of a cell in a table based on the row and column number. • =INDEX(A1:B10, 2, 1) • MATCH: Searches for a value in a range and returns the relative position. • =MATCH(A1, B1:B10, 0) • OFFSET: Returns a reference to a range that is offset from a starting cell. • =OFFSET(A1, 2, 3) • INDIRECT: Returns the reference specified by a text string. • =INDIRECT("A" & B1) • CHOOSE: Returns a value from a list of values based on a given position. • =CHOOSE(A1, "Red", "Green", "Blue") 7. Financial Functions • PMT: Calculates the payment for a loan based on constant payments and interest rate. • =PMT(rate, nper, pv, [fv], [type]) • PV: Calculates the present value of an investment. • =PV(rate, nper, pmt, [fv], [type]) • FV: Calculates the future value of an investment. • =FV(rate, nper, pmt, [pv], [type]) • NPV: Calculates the net present value of an investment based on a series of cash flows and a discount rate. • =NPV(rate, value1, [value2], …) • IRR: Calculates the internal rate of return for a series of cash flows. • =IRR(values, [guess]) • XIRR: Calculates the internal rate of return for a series of cash flows with specific dates. • =XIRR(values, dates, [guess]) 8. Engineering Functions • CONVERT: Converts a number from one measurement system to another. • =CONVERT(number, "from_unit", "to_unit") • DEC2BIN: Converts a decimal number to binary. • =DEC2BIN(number, [places]) • BIN2DEC: Converts a binary number to decimal. • =BIN2DEC(number) • DEC2HEX: Converts a decimal number to hexadecimal. • =DEC2HEX(number, [places]) • HEX2DEC: Converts a hexadecimal number to decimal. • =HEX2DEC(number) 9. Information Functions • CELL: Returns information about the formatting, location, or contents of a cell. • =CELL("type", A1) • ISNUMBER: Checks if a value is a number. • =ISNUMBER(A1) • ISTEXT: Checks if a value is text. • =ISTEXT(A1) • ISBLANK: Checks if a cell is empty. • =ISBLANK(A1) • ISERROR: Checks if a value is an error. • =ISERROR(A1) • ISNA: Checks if a value is the #N/A error. • =ISNA(A1) 10. Array Formulas Array formulas perform multiple calculations on one or more items in an array. They are entered using Ctrl+Shift+Enter. • SUMPRODUCT: Multiplies corresponding elements in arrays and returns the sum. • =SUMPRODUCT(A1:A10, B1:B10) • TRANSPOSE: Transposes the orientation of a range. • =TRANSPOSE(A1:B2) • MMULT: Returns the matrix product of two arrays. • =MMULT(A1:B2, C1:D2) Microsoft Excel: A Comprehensive Guide for Teaching | Comprehensive Guide to Excel Formulas Microsoft Excel Comprehensive Guide to Excel Formulas 1 Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1Microsoft Excel Comprehensive Guide to Excel Formulas 1 4 thoughts on “Microsoft Excel Comprehensive Guide to Excel Formulas 1” 3. This was a delight to read. You show an impressive grasp on this subject! I specialize about Marketing and you can see my posts here at my blog UY5 Keep up the incredible work! 4. Your point of view caught my eye and was very interesting. Thanks. I have a question for you. Leave a Comment
{"url":"https://computeracademy.in/microsoft-excel-comprehensive-guide-to-excel-formulas-1/","timestamp":"2024-11-10T08:47:31Z","content_type":"text/html","content_length":"214212","record_id":"<urn:uuid:ff0bbad9-eb3d-4c94-92e1-d4b3637c492c>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00822.warc.gz"}
Returning an extra zero when using a vector Jan 17, 2020 09:25 AM Jan 17, 2020 09:25 AM I have a problem when using a defined vector in a function. As you can see below, the variable (i), goes from 1 to 3 with increments of 1. When I use this variable in the function Dl(i) and Di(i) it returns one extra value in the answer, (0). (See where the arrow is pointing in the picture below) This becomes a problem when I later on is going to use i.e. Di(i) in the equation shown below: What I want to do is to remove this "extra" 0 for all functions of (i) Anyone that can help me out? Jan 17, 2020 09:29 AM Jan 17, 2020 09:29 AM Jan 17, 2020 09:29 AM Jan 17, 2020 09:29 AM Jan 17, 2020 09:46 AM Jan 17, 2020 09:46 AM
{"url":"https://community.ptc.com/t5/Mathcad/Returning-an-extra-zero-when-using-a-vector/td-p/644705","timestamp":"2024-11-12T06:26:53Z","content_type":"text/html","content_length":"236442","record_id":"<urn:uuid:2e084308-1cb5-45be-9d86-5e3751cf3e8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00874.warc.gz"}
Angular Acceleration Formulae in context of Angular Acceleration 30 Aug 2024 Understanding Angular Acceleration: Formulas and Concepts Angular acceleration is a fundamental concept in physics that describes the rate of change of an object’s angular velocity. In this article, we will delve into the world of angular acceleration, exploring its definition, formulas, and real-world applications. What is Angular Acceleration? Angular acceleration is the rate of change of an object’s angular velocity. It is measured in radians per second squared (rad/s²) and represents the change in an object’s rotational speed over time. In other words, it describes how quickly an object’s rotation is changing. Formula for Angular Acceleration: α = Δω / Δt The formula for angular acceleration is: α = Δω / Δt • α (alpha) is the angular acceleration • Δω (delta omega) is the change in angular velocity • Δt (delta t) is the time over which the change occurs Relationship between Angular Acceleration and Angular Velocity Angular acceleration is closely related to angular velocity. In fact, the formula for angular acceleration can be used to calculate the rate of change of an object’s angular velocity. ω = ω0 + α * t • ω (omega) is the current angular velocity • ω0 (omega zero) is the initial angular velocity • α (alpha) is the angular acceleration • t (time) is the time over which the change occurs Formula for Angular Velocity: ω = Δθ / Δt The formula for angular velocity is: ω = Δθ / Δt • ω (omega) is the angular velocity • Δθ (delta theta) is the change in angle • Δt (delta t) is the time over which the change occurs Real-World Applications of Angular Acceleration Angular acceleration has numerous real-world applications, including: 1. Robotics: Understanding angular acceleration is crucial for designing and controlling robotic arms that need to move quickly and accurately. 2. Aerospace Engineering: Angular acceleration plays a critical role in the design of aircraft and spacecraft, where it affects their stability and maneuverability. 3. Automotive Engineering: Angular acceleration is important in the development of autonomous vehicles, which rely on precise control over their rotational motion. Angular acceleration is a fundamental concept in physics that has far-reaching implications for various fields. By understanding the formulas and relationships between angular acceleration, angular velocity, and time, engineers and scientists can design and develop more efficient and effective systems. Whether it’s robotics, aerospace engineering, or automotive engineering, angular acceleration plays a vital role in shaping our world. Additional Resources For further reading on angular acceleration and related topics, check out these resources: • Wikipedia: Angular Acceleration • Physics Classroom: Angular Acceleration • Coursera: Angular Momentum and Rotational Kinematics Related articles for ‘Angular Acceleration’ : Calculators for ‘Angular Acceleration’
{"url":"https://blog.truegeometry.com/tutorials/education/ad032f1c5fffb5f4b81ad7f0c61dbdc5/JSON_TO_ARTCL_Angular_Acceleration_Formulae_in_context_of_Angular_Acceleration.html","timestamp":"2024-11-06T08:17:39Z","content_type":"text/html","content_length":"17178","record_id":"<urn:uuid:8dd12f0a-e5f4-4dfc-b5c1-1b5dd5cebf2a>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00205.warc.gz"}
6. Penalty Functionals Next: 6.1 Kohn's method Up: thesis Previous: 5.7 Computational implementation &nbsp Contents 6. Penalty Functionals In this chapter we first outline Kohn's derivation of a variational principle for a generalised energy functional which includes a penalty functional to impose the idempotency constraint. We show that this functional is non-analytic at its minimum and therefore incompatible with efficient minimisation algorithms, using conjugate gradients as an example. We then outline an original scheme to use well-behaved penalty functionals to approximately impose the idempotency constraint. The density-matrix which minimises these generalised energy functionals is therefore only an approximation to the true ground-state density-matrix, but the resulting error in the total energy can be corrected to obtain accurate estimates of the true ground-state energy. Peter Haynes
{"url":"https://www.tcm.phy.cam.ac.uk/~pdh1001/thesis/node35.html","timestamp":"2024-11-12T11:48:02Z","content_type":"text/html","content_length":"4178","record_id":"<urn:uuid:de60babb-aa9e-4271-8fa3-36bc21790fb2>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00507.warc.gz"}
Is IGCSE harder than SPM? We have many Malaysian asking this question “Is IGCSE Mathematics harder compared to SPM Mathematics?” and let me layout the pros and cons both IGCSE Math and SPM Math. If you are good in SPM Math, we doubt students should worry about IGCSE. The question in IGCSE is very similar to SPM Math. So …. what is the difference ? IGCSE Math vs SPM Math 1. 70% of the questions are straight forward. Requiring simple and basic mathematics. 2. 30% of the questions are focus on critical thinking. 3. The IGCSE focus on larger learning scope. Students learn a lot of chapters but” they do not learn in depth” 4. SPM students learn lesser topics but the mathematics are in-depth. Question is definately tougher because a few of its topics overlaps with Additional Math. Pros of IGCSE 1. Easy to PASS, but really tough to get the A- Star 2. Easy to study and it is more fun Cons of IGCSE 1. Students just learn the basic and they tend to forget a lot because the scope is wider 2. IGCSE working exercises are really expensive. It is a burn in pocket. The cost of the books will be RM 40 but the Answer Book can cost you RM 200. A very ridiculous fees to pay just to peek at the answer. By engaging our online class, my company assures you that we will cover all the syllabus and prep you for the IGCSE. We will need 3 months Standard Class + 1 Month Mock exam class to help you to score well in exam.
{"url":"https://onlinetuitionclass.com/is-igcse-harder-than-spm/","timestamp":"2024-11-08T10:50:23Z","content_type":"text/html","content_length":"105127","record_id":"<urn:uuid:0b398413-b6f2-450a-a319-a76ed89f4c8b>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00874.warc.gz"}
Linear interpolation Linear interpolation creates a piecewise continuous function by connecting a set of data points with straight lines. This is useful for interpolating between the data points or for generating a set of equally spaced data points. Equally spaced data points are sometimes needed for numerical integration or digital filtering. The form on this page accepts data points as two columns which can be input in the lower left text box. When the "Plot linear interpolation" button is pushed, the data points are plotted in red and the straight lines connecting them are plotted in blue. Points along the blue line equally spaced in $x$ appear in the text box at the lower right.
{"url":"http://lampz.tugraz.at/~hadley/num/ch3/3.1.php","timestamp":"2024-11-05T23:24:22Z","content_type":"text/html","content_length":"11288","record_id":"<urn:uuid:60eda2b1-2aef-4a35-8eba-7f38d7b9298b>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00570.warc.gz"}
jacobian elliptic function jacobian elliptic function meaning in Hindi jacobian elliptic function sentence in Hindi More: Next 1. So there are twelve Jacobian elliptic functions. 2. In the analytic theory, there are four fundamental theta functions in the theory of Jacobian elliptic functions. 3. In explicit form they are expressible in terms of Jacobian elliptic functions which are periodic functions ( effectively generalisations of trigonometrical functions ). 4. My immediate interests are in special functions, particularly elliptic functions and elliptic integrals, even more specifically, integration in terms of Jacobian elliptic functions. 5. The twelve Jacobian elliptic functions are then pq, where each of p and q is a different one of the letters s, c, d, n. 6. The configurations correspondingly responsible for higher, i . e . excited, states are periodic instantons defined on a circle of Euclidean time which in explicit form are expressed in terms of Jacobian elliptic functions ( the generalization of trigonometric functions ). 7. By starting with the Weierstrass p-function and associating with it a group of doubly periodic functions with two simple poles, he was able to give a simple derivation of the Jacobian elliptic functions, as well as modifying the existing notation to provide a more systematic approach to the subject.
{"url":"https://m.hindlish.com/jacobian%20elliptic%20function","timestamp":"2024-11-13T19:15:22Z","content_type":"text/html","content_length":"25650","record_id":"<urn:uuid:54033155-931c-44b5-b7c1-1e0d1c4f66f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00835.warc.gz"}
Perform Math Calculations in Flash — SitePoint The days of Flash being used solely for entertainment are over. Macromedia has spent an enormous amount of money and effort to bring Flash out of its “animation only” box, and promote it as a cross-platform development environment, as well as a designer environment. Along with the funny cartoons, Flash can now be used to build business applications. One of the basic requirements of any business app is the ability to do math. In this tutorial, we will cover how to do basic arithmetic in Flash. We’ll also cover the capture of data input by a user, and learn how to perform calculations based on this data. Key Takeaways • Flash, thanks to Macromedia’s efforts, has evolved from an animation tool to a cross-platform development environment capable of building business applications, including performing math • Performing math calculations in Flash involves using ActionScript for coding equations, which can range from simple arithmetic to complex calculus. • Flash also allows for the capture of user-entered data and performing calculations based on this data, making it a powerful tool for business applications. • Despite the lack of extensive documentation on using Flash for math calculations, it’s a robust tool that can handle any calculation thrown at it, proving its potential for building rich internet The Search for Information on Flash’s Calculation Abilities A little over a year ago, I started a project that required Flash to do some serious calculations. I knew that calculations were possible in Flash; it was just that I’d never done any. So, like any good Web user, I started to search forums and Google for some guidance. Unfortunately, I found very little information on what I was looking for. I then went to my local bookstore in search of answers, but found the same problem: there is virtually no documentation available on using math in real world Flash applications. Don’t believe me? Search on Macromedia.com for the term “calculation”. Go to your local bookstore, look at any book claiming to be the be-all and end-all and for Flash, and see what you can find under “math” or “equations”. Sure, there are plenty of examples the explain the use of math to make random swirling lines, flying squares, and dancing midgets, but what about using math to calculate data? Those dancing midgets are cute, but calculating data can make money. After looking through about 30 books, I found two that offered about a page on what I was looking for. It wasn’t much, but it was something. If Rich Internet Applications are going to be as big as Macromedia hopes, more tutorials and examples covering basic, real-world business needs like calculating data will need to be written. This tutorial is the first attempt to pique developer awareness of, and interest in, Flash’s math capabilities. Flash can handle any equation you throw at it, from simple addition, to complex calculus. Static Calculations To get you acquainted with what Flash can do, we’ll start off with simple, static calculations. By “static”, I mean equations that are hard coded, and do not require any input from the end user. All these calculations are performed with ActionScript, so we’ll be doing a little coding. Let’s go! 1. Start up Flash, and create a new movie. 2. Draw four dynamic textboxes on your stage. The dynamic property is set on the Properties panel. 3. Once that’s done, give each textbox a Variable name. In this example, I’m using “addition”, “subtraction”, “multiplication”, and “division” as the Variable names. Tip: When developing, you should always give objects (textboxes, components, Movieclips, etc.) unique names. Otherwise, you run the risk of confusing Flash. This would be like having two brothers named John and John, and having you mom say, “Tell John to wash the car and tell John to clean the bathroom.” Avoid the confusion by giving every object a unique name. We’ll be using these variables to tell Flash were to display the results of our calculations. 4. In the timeline, create a new layer. Name the layer containing the textboxes “Calculations”, and name the new layer “Actions”. We do this so that we can easily see which layer contains the code, and which layer contains the user interface. 5. Now, let’s apply the calculations. Select the first frame in your Actions layer. In order to give ourselves freedom to type what we wish, we need to set the Actions panel to Expert Mode. Choose Expert Mode from the Actions panel pop-up menu (at the upper right of the panel). 6. Now, input the following code: addition = 1+1; subtraction = 5-2; multiplication = 10*2; division = 100/5; Now, an explanation of what this code does and why. There are four lines, and at the beginning of each is a Variable reference. Remember the Variable names we gave to the four textboxes? The first line starts with “addition”. This references our “addition” textbox. We then give an expression to which “addition” is equal. The “addition” textbox will display the results of one plus one, the subtraction textbox will display the results of five minus two, etc. Publish your movie to see the results! If you get really lost, here’s a sample FLA that has all the details. Advanced Math Now that you understand how to perform basic calculations, mixing them up is just a matter of algebra. The same math rules apply. sample = 10*2-6/3; The above sample would give you a result of 18. As in algebra, multiplications and divisions are calculated before additions and subtractions, and the use of parentheses specifies that anything in the parentheses will be calculated first. Using parentheses can deliver different results. For example, 10*(3-2) will give you a result of 10, where 10*3-2 will give you a result of 28. The ActionScript in Flash would look like this: sample = 10(3-2); Calculating User-Entered Data Calculating static data is helpful, but calculating data that’s input by a user is powerful and sellable. The possibilities are endless, but here, we’ll cover basic data entry and calculation. Our sample will take two numbers entered by the user. On the click of a button, the Flash movie will display the sum, and the product of the numbers. 1. Create a new movie. 2. Create four textboxes in the first frame of the movie, and arrange them vertically. 3. In the Properties panel, set the textboxes to be left-aligned. The top two textboxes should be Input text, while the bottom two should be Dynamic text. 4. Limit the textboxes to numbers only. To do this, just select a textbox and click the Character… button in the Properties panel. This will bring up the Character Options box. From there, select “Only”, “Numerals (0-9)”, and click Done. 5. I’ve added a line in the middle of my textboxes, to give them some sex appeal! 6. Next, add some static text to the left of the text boxes. Starting from the top to the bottom: □ Number 1 □ Number 2 □ Sum of Number 1 and Number 2 □ Product of Number 1 and Number 2 This is done to clearly label everything for the end user — us! 7. The textboxes need to be given Variable names, so that Flash knows what we’re talking about when we refer to them. The Variable for a textbox is defined in the Properties panel. Starting from the top to the bottom, name the textboxes accordingly: □ number_one □ number_two □ result_sum □ result_product 8. Now, we need to add our button used to execute our calculations. In the Component panel, select Flash UI Components. 9. Drag and drop the PushButton component into the scene. 10. Select the PushButton component you’ve just dragged into your scene. In the Properties panel, label the component “Calculate” and set the Click Handler to “onCalculate”. This is the button we will use to calculate the user’s data. 11. As a final measure, I have given the “Number 1” and “Number 2” textboxes a value of zero. When you’re done, you’re scene should look something like the image below. Ok. We have our scene set up, so now, it’s time to put in the code that will make this movie work. The Code 1. Name the current layer in the movie “content”. Add a new layer to the movie, and call it “actions”. 2. Select the first frame of the “actions” layer. In the Actions panel, add the following code: function onCalculate() { one = Number(number_one); two = Number(number_two); result_sum = one + two; result_product = one * two; This is the code that will make this movie do its magic! Let’s look at what this code does and why. function onCalculate() { This first line starts a new function in the movie. Remember that we gave our PushButton component a Click Handler of “onCalculate”. When this button is clicked, it will execute the code within this function. one = Number(number_one); two = Number(number_two); This code has two purposes. The first is to give shorter variable names to the data that’s being calculated. Instead of spelling out “number_one” throughout our code, we can now just use “one”. This makes more sense when taken in conjunction with the second purpose: to treat these variables as numbers. This is done with Number(), which tells Flash that we to treat values in parentheses as numbers. If we don’t, when we calculate 1 plus 1, we’ll get 11. Or, if we calculate 1 plus 2, we’ll get 12. Instead, with the Number(), when we calculate 1 plus 1, we’ll get 2. result_sum = one + two; result_product = one * two; Remember the variable names we gave the two textboxes at the bottom? This code puts our calculation results within those textboxes. When this code is executed, the “result_sum” textbox will display the result of adding Number 1 and Number 2. The “result_product” textbox will display the result of multiplying them. That’s it! Publish your movie, type in some numbers in first two textboxes, and hit Calculate. Again, if you get lost, download the sample FLA. Final Thoughts There is your quick and dirty run down of basic math in Flash. There is not a single calculation that Flash cannot handle. This tutorial only scratches the surface, but I hope it’s got you thinking about Flash math. For more information, try: Frequently Asked Questions (FAQs) about Math Calculations in Flash What is the basic principle behind flash calculations? Flash calculations are a fundamental concept in thermodynamics, primarily used in chemical engineering. The basic principle behind flash calculations is the equilibrium between liquid and vapor phases of a substance or a mixture of substances at a given temperature and pressure. When a liquid mixture is suddenly reduced to a lower pressure, some of the liquid vaporizes or “flashes” into vapor. The amount of liquid and vapor at equilibrium is determined by the flash calculation. How are flash calculations used in real-world applications? Flash calculations are widely used in various industries, particularly in chemical and petroleum engineering. They are used to design and optimize distillation columns, separators, and other equipment in oil refineries and chemical plants. Flash calculations are also used in the design of refrigeration systems, air conditioning systems, and in the study of natural gas processing and reservoir engineering. What are the key variables in flash calculations? The key variables in flash calculations are temperature, pressure, and composition of the mixture. The flash calculation determines the amount of each component in the liquid and vapor phases at equilibrium. The calculation involves solving a set of nonlinear equations, which can be complex depending on the number of components in the mixture. How do I perform flash calculations for a binary mixture? Flash calculations for a binary mixture involve solving two equations: the material balance equation and the equilibrium equation. The material balance equation ensures that the total amount of each component is conserved, while the equilibrium equation ensures that the chemical potential of each component is the same in the liquid and vapor phases. These equations can be solved using various numerical methods, such as the Newton-Raphson method. What software tools can I use for flash calculations? There are several software tools available for performing flash calculations. These include commercial software like Aspen Plus, HYSYS, and Pro/II, as well as open-source software like DWSIM and COCO Simulator. These tools have built-in thermodynamic models and numerical solvers that make it easy to perform flash calculations for complex mixtures. What are the common challenges in performing flash calculations? One of the main challenges in performing flash calculations is the complexity of the equations, especially for mixtures with many components. The equations are nonlinear and may have multiple solutions, which can make them difficult to solve. Another challenge is the accuracy of the thermodynamic models used in the calculations. These models are based on experimental data and may not be accurate for all conditions and mixtures. How can I improve the accuracy of my flash calculations? The accuracy of flash calculations can be improved by using more accurate thermodynamic models and by using high-quality experimental data for the properties of the mixture. It’s also important to use a robust numerical method for solving the equations. In some cases, it may be necessary to use a hybrid method that combines different numerical methods to ensure convergence. Can I perform flash calculations for mixtures with non-ideal behavior? Yes, flash calculations can be performed for mixtures with non-ideal behavior. However, these calculations require more complex thermodynamic models that account for the non-ideal behavior. These models include the Peng-Robinson equation of state, the Soave-Redlich-Kwong equation of state, and various activity coefficient models. What is the role of the Rachford-Rice equation in flash calculations? The Rachford-Rice equation is a key equation in flash calculations. It is used to calculate the fraction of the mixture that vaporizes or “flashes” into vapor. The Rachford-Rice equation is derived from the material balance and equilibrium equations, and it is solved iteratively to find the flash fraction. How do I interpret the results of flash calculations? The results of flash calculations provide valuable information about the behavior of the mixture at the given conditions. They tell you the amount of each component in the liquid and vapor phases, the temperature and pressure at equilibrium, and the flash fraction. These results can be used to design and optimize industrial processes, to predict the behavior of reservoir fluids, and to understand the thermodynamics of mixtures. Scott started a Web development shop three years ago after discovering his propensity for usability-oriented design. Sticking with the straightforward name Scottmanning.com, his vision has resulted in the company becoming an indispensable part of the international Flash community.
{"url":"https://www.sitepoint.com/math-calculations-flash/","timestamp":"2024-11-14T11:41:32Z","content_type":"text/html","content_length":"227876","record_id":"<urn:uuid:79c4e7b3-8732-4688-ae00-b9bee04af61a>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00339.warc.gz"}
Mathematics Rising The shape of things By Joselle, on February 28th, 2017 Both Quanta Magazine and New Scientist reported on some renewed interest in an old idea. It was an approach to particle physics, proposed by theoretical physicist Geoffrey Chew in the 1960s, that ignored questions about which particles were most elementary and put a major portion of the weight of discovery on mathematics. Chew expected that information about the strong interaction could be derived from looking at what happens when particles of any sort collide. And he proposed S-matrix theory as a substitute for quantum field theory. S-matrix theory contained no notion of space and time. These were replaced by the abstract mathematical properties of the S-matrix, which had been developed by Werner Heisenberg in 1943 as a principle of particle interactions. New research, with a similarly democratic approach to matter, is concerned with mathematically modeling phase transitions – those moments when matter undergoes a significant transformation. The hope is that what is learned about phase transitions could tell us quite a lot about the fundamental nature of all matter. As New Scientist author, Gabriel Popkin, tells us: Whether it’s the collective properties of electrons that make a material magnetic or superconducting, or the complex interactions by which everyday matter acquires mass, a host of currently intractable problems might all follow the same mathematical rules. Cracking this code could help us on the way to everything from more efficient transport and electronics to a new, shinier, quantum theory of gravity. Toward this end, in 1944, Norwegian physicist Lars Onsager solved the problem of modeling material that loses magnetism when heated above a certain temperature. While his was a 2-dimensional model, it has none-the-less been used to simulate the flipping of various physical states from the spread of an infectious disease to neuron signaling in the brain. It’s referred to as the Ising model, named for Ernst Ising, who first investigated the idea in his PhD thesis but without success. In the 1960s, Russian theorist Alexander Polyakov began studying how fundamental particle interactions might undergo phase transitions, motivated by the fact that the 2D Ising model, and the equations that describe the behavior of elementary particles, shared certain symmetries. And so he worked backwards from the symmetries to the equations. Popkin explains: Polyakov’s approach was certainly a radical one. Rather than start out with a sense of what the equations describing the particle system should look like, Polyakov first described its overall symmetries and other properties required for his model to make mathematical sense. Then, he worked backwards to the equations. The more symmetries he could describe, the more he could constrain how the underlying equations should look. Polyakov’s technique is now known as the bootstrap method, characterized by its ability to pull itself up by its own bootstraps and generate knowledge from only a few general properties. “You get something out of nothing,” says Komargodski. Polyakov and his colleagues soon managed to bootstrap their way to replicating Onsager’s achievement with the 2D Ising model – but try as they might, they still couldn’t crack the 3D version. “People just thought there was no hope,” says David Poland, a physicist at Yale University. Frustrated, Polyakov moved on to other things, and bootstrap research went dormant. This is part of the old idea. Bootstrapping, as a strategy, is attributed to Geoffrey Chew who, in the 1960’s, argued that the laws of nature could be deduced entirely from the internal demand that they be self-consistent. In Quanta, Natalie Wolchover explains: Chew’s approach, known as the bootstrap philosophy, the bootstrap method, or simply “the bootstrap,” came without an operating manual. The point was to apply whatever general principles and consistency conditions were at hand to infer what the properties of particles (and therefore all of nature) simply had to be. An early triumph in which Chew’s students used the bootstrap to predict the mass of the rho meson — a particle made of pions that are held together by exchanging rho mesons — won many converts. The effort gained greater traction again in 2008 when physicist Slava Rychkov and colleagues at CERN decided to use these methods to build a physics theory that didn’t have a Higgs particle. This turned out not to be necessary (I suppose), but the work was productive none-the-less in the development of bootstrapping techniques. The symmetries of physical systems at critical points are transformations that, when applied, leave the system unchanged. Particularly important are scaling symmetries, where zooming in or out doesn’t change what you see, and conformal symmetries where the shapes of things are preserved under transformations. The key to Polykov’s work was to realize that different materials, at critical points, have symmetries in common. These bootstrappers are exploring a mathematical theory space, and they seem to be finding that the set of all quantum field theories forms a unique mathematical What’s most interesting about all of this is that these physicists are investigating the geometry of a ‘theory space,” where theories live, and where the features of theories can be examined. Nima Arkani-Hamed, Professor of physics at the Institute for Advanced Study has suggested that the space they are investigating could have a polyhedral structure with interesting theories living at the corners. It was also suggested that the polyhedral might encompass the amplituhedron – a geometric object discovered in 2013 that encodes, in its volume, the probabilities of different particle collision outcomes. Wolchover wrote about the amplituhedron in 2013. The revelation that particle interactions, the most basic events in nature, may be consequences of geometry significantly advances a decades-long effort to reformulate quantum field theory, the body of laws describing elementary particles and their interactions. Interactions that were previously calculated with mathematical formulas thousands of terms long can now be described by computing the volume of the corresponding jewel-like “amplituhedron,” which yields an equivalent one-term expression. The decades-long effort is the one to which Chew also contributed. The discovery of the amplituhedron began when some mathematical tricks were employed to calculate the scattering amplitudes of known particle interactions, and theorists Stephen Parke and Tomasz Taylor found a one term expression that could do the work of hundreds of Feynman diagrams that would translate into thousands of mathematical terms. It took about 30 years for the patterns being identified in these simplified expressions to be recognized as the volume of a new mathematical object, now named the amplituhedron. Nima Arkani-Hamed and Jaroslav Trinka published results in 2014. Again from Wolchover: Beyond making calculations easier or possibly leading the way to quantum gravity, the discovery of the amplituhedron could cause an even more profound shift, Arkani-Hamed said. That is, giving up space and time as fundamental constituents of nature and figuring out how the Big Bang and cosmological evolution of the universe arose out of pure geometry. Whatever the future of these ideas, there is something inspiring about watching the mind’s eye find clarifying geometric objects in a sea of algebraic difficulty. The relationship between mathematics and physics, or mathematics and material for that matter, is a consistently beautiful, captivating, and enigmatic puzzle. Recent Comments • John Golden on Other kinds of coding • Vincent Migliore on Infinity is not the problem • Joselle on Infinity is not the problem • Bruce E. Camber on Infinity is not the problem • Joselle on The monad, autopoiesis and Christmas Recent Comments • John Golden on Other kinds of coding • Vincent Migliore on Infinity is not the problem • Joselle on Infinity is not the problem • Bruce E. Camber on Infinity is not the problem • Joselle on The monad, autopoiesis and Christmas February 28th, 2017 | Tags: mathematics, physics, quantum mechanics | Category: mathematics, physics | Comments are closed
{"url":"https://mathrising.com/?p=1448","timestamp":"2024-11-10T14:20:27Z","content_type":"application/xhtml+xml","content_length":"133150","record_id":"<urn:uuid:7589f754-7ea4-42e6-8c9b-bb55ce339419>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00281.warc.gz"}
Tomofast-x 2.0: an open-source parallel code for inversion of potential field data with topography using wavelet compression Articles | Volume 17, issue 6 © Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License. Tomofast-x 2.0: an open-source parallel code for inversion of potential field data with topography using wavelet compression We present a major release of the Tomofast-x open-source gravity and magnetic inversion code that incorporates several functionalities enhancing its performance and applicability for both industrial and academic studies. The code has been re-designed with a focus on real-world mineral exploration scenarios, while offering flexibility for applications at regional scale or for crustal studies. This new version includes several major improvements: magnetisation vector inversion, inversion of multi-component magnetic data, wavelet compression, improved handling of topography with support for non-uniform grids, a new and efficient parallelisation scheme, a flexible parameter file, and optimised input–output operations. Extensive testing has been conducted on a large synthetic dataset and field data from a prospective area of the Eastern Goldfields (Western Australia) to explore new functionalities with a focus on inversion for magnetisation vectors and magnetic susceptibility, respectively. Results demonstrate the effectiveness of Tomofast-x 2.0 in real-world studies in terms of both the recovery of subsurface features and performances on shared and distributed memory machines. Overall, with its updated features, improved capabilities, and performances, the new version of Tomofast-x provides a free open-source, validated advanced and versatile tool for constrained gravity and magnetic inversion. Received: 02 Oct 2023 – Discussion started: 26 Oct 2023 – Revised: 18 Jan 2024 – Accepted: 26 Jan 2024 – Published: 21 Mar 2024 In spite of the long-recognised ambiguities in the geological interpretation and inversion of gravity and magnetic datasets (Nettleton, 1942), these two methods remain popular approaches to understanding the continental subsurface. This is at least in part because, unlike theoretically more informative methods such as the array of seismic techniques, there are already widespread gravity and magnetic databases available for most continental areas. In addition, unlike geophysical methods such as gamma ray spectroscopy or synthetic aperture radar, there is significant 3D information content in the data. ^* Last access: 11 March 2024. One of the challenges faced by geophysical inversion platforms is to make them scalable, meaning that they not only meet the expectations of research geophysicists trialling new algorithms but can actually be applied to large-scale industrial and governmental applications. As part of their exploration programme mineral explorers will often undertake an unconstrained 3D inversion of the available magnetic or gravity data in order to better understand the 3D architecture and to generate prospects. As more data are acquired these unconstrained inversions may be re-run with geological constraints and new datasets. Similarly, government agencies also need to undertake 3D inversions at a relatively large scale in order to help their geological mapping teams understand the 3D architecture. While they may be less pressed by time constraints than explorers, they will typically be working with much larger datasets. Tools that can scale from prospect to regional scale are There has been a long history of forward modelling (Jessell, 2001) and inversion of gravity and magnetic datasets (Pears et al., 2017). For much of the past 20 years the tools of choice for most explorers and government agencies have been the inversion codes developed by the Geophysical Inversion Facility (GIF) at The University of British Columbia (UBC) (Li and Oldenburg, 1996a, 1998). Over the past decade many explorers have switched from the GIF-based tools to the Seequent's Voxi product, the interface for which is built into the commercial Windows-based software tools they often use. Voxi is provided as a commercial cloud-based Software as a Service (SaS) running on clusters. There are several other less commonly used commercial programs for 3D potential field regularised inversion, each with unique specialisations (e.g. Geomodeller, VPmg and MGinv3D). More recently, there has been an increasing focus on developing open-source solutions to solve the gravity and/or magnetic inversion problem, which provide access to inversion solutions to a wider audience (Table 1). When the inversion facility at UBC wound down, several of the researchers moved to make the algorithms open source and available in Python in the hope they would be adopted as a teaching and research tool. This project is known as SimPEG and now has a sizeable user group of academic, government, and industry users, particularly in North America (Cockett et al., 2015). The Python code has been built from scratch using many similar problem formulations to the original Fortran code used in the GIF packages. It uses public libraries (e.g. NumPy) for I/O and for the solver and leverages new advances in high-performance computing (dask and zarr libraries) to improve performance. It is, however, significantly slower than the older Fortran codes and does not offer a practical solution for typical, exploration-sized, datasets. Compression is only on file size, not matrix size, and therefore does not offer rapid solutions to many real exploration problems. To the best of our knowledge, among the open-source codes listed in Table 1, Tomofast-x is the only one that offers wavelet compression of the sensitivity matrix. Each of these 3D platforms has strengths and weaknesses in terms of scalability, ease of installation, ease of generation of input meshes, reliance on commercial platforms, etc., even before we think of specific functionality and the additional constraints that can be added to the inversion to try and reduce the aforementioned ambiguity. This paper does not attempt to benchmark the different codes, interesting as that would be and much as that would be a worthy collaboration between the different research groups currently maintaining these codes. What has been missing from this mix is an open-source package able to scale to continental-sized problems while offering rapid answers at prospect scale. Tomofast-x is such a package. When Tomofast-x was publicly released in 2021 it strongly reflected its research origins. Any exploration problem of reasonable size required a large cluster to solve in anything approaching real time, and for magnetic inversions observations had to be above the highest point in the model space. It only inverted in terms of susceptibility and density. Through a close collaboration between industry and academia it now offers equivalent performance and results to commercial codes. In addition, in addition to susceptibility and density it now also inverts in terms of magnetisation using either single-component total magnetic intensity (TMI) data or three-component magnetic data. 2Description of code features Tomofast-x inversion code is written in Fortran 2008 using classes, which allows flexibility in extending the code, such as adding new types of constraints, algorithms, and data I/O. It is fully parallelised using the MPI library, which provides both shared and distributed memory parallelism. The code has parallel unit tests covering important parts, which are implemented using “ftnunit” framework (Markus, 2012) extended by us to the parallel case to work with the MPI library. The previous code version Tomofast-x 1.0 included geological and petrophysical constraints implemented, specifically clustering constraints (Giraud et al., 2019b), cross-gradient constraints (Martin et al., 2021), local gradient regularisation (Giraud et al., 2019a, 2020), Lp-norm constraints (Martin et al., 2018), and disjoint interval bound constraints (Ogarko et al., 2021; Giraud et al., 2023b), which were summarised in Giraud et al. (2021). Figure 1 shows a summary of the Tomofast-x 2.0 inversion workflow, and new components added to the version 2.0 are highlighted in orange. In this section we describe new code components added to Tomofast-x 2.0 code version and highlight their importance. The impact of topography on gravity data is well known. Less well understood is the effect of topography on magnetic data. However, as clearly demonstrated by Frankcombe (2020), it is important to include topography in magnetic modelling. In modern volcanic terrains where moderately magnetic andesitic basalt makes up a large proportion of the sub-crop and topography is often severe, the magnetic signal from topography alone can be tens to hundreds of nanoteslas. This may be greater than what might be a relatively subtle response from a deeply buried body. Failing to account for topography not only risks missing subtle targets, but it will also lead to erroneous interpretations if the magnetic data is interpreted or inverted on the assumption that the ground is flat. There are several approaches used to include topography in the model, the most simple being to ensure that the model space is a rectangular prism extending into air above the highest point in the topography. A more advanced approach is to use a tetrahedral mesh to smoothly model the topography. Intermediate approaches and that used by this code are to drape a rectilinear mesh below the topography. This required that the topography is discretised into steps the size of the mesh cells but does not require that the air be included in the model. As each vertical pillar of cells in the model is independent, there is no need for each layer of cells to have a fixed top and bottom. This enables the topography to be discretised into locally thin layers without requiring that all points in the model at that elevation have the same fine discretisation. In areas of severe topography, draping the mesh below the topography and treating each vertical pillar of the mesh independently significantly reduces the size of the mesh, and thus the run time, relative to regular rectangular prism meshes. Figure 2 shows an example of a model grid with topography. The forward magnetic problem in Tomofast-x 1.0 was calculated using Bhattacharyya's formulation (Bhattacharyya, 1964). It has the limitation that all data locations must be above the highest point of the model. Modern aeromagnetic data are acquired at heights of less than 40m above ground. There are very few project areas which have less than 40m difference in elevation across them, limiting the utility of Bhattacharyya's algorithm. In order to allow for arbitrary topography and readings below the Earth's surface, the forward magnetic problem was re-implemented using Sharma's formulation (Vallabh Sharma, 1966). Additionally, distance-based weighting of the sensitivity kernel, which is a generalised version of a simple depth weighting, was included (Li and Oldenburg, 1996b). Weighting by distance rather than depth allows for observations beside or even below points in the model. 2.2Non-uniform model grid The size of a regularised inversion is (N[data]+1)×N[model], where N[data] is the number of observations being modelled and N[model] is the number of cells in the model mesh. When stored as 8byte reals this leads to around 7.5Gb being required for a 100×100×10 model block with a 100×100 array of observation points. Such a tiny model size would only be used for test work. For an ordinary exploration sized problem tens of petabytes might be required while for government-style regional work exabytes or zettabytes of storage would be needed. While large problems can be broken up into smaller panels and then re-joined (Goodwin and Lane, 2021), care needs to be taken to avoid edge artefacts, and the overall time to run the inversion is considerably longer because of the need to run coarse inversions followed by forward models to minimise these edge effects. In addition, large problems require more computational cycles placing an effective limit on the size of problems that can be handled. In order to reduce the size of the problem, the voxel size can be increased for voxels distant from the observations. In practice it has been found that increasing the size of model voxels in a horizontal sense away from the core area containing the data and in a vertical sense below the ground surface reduces the size of the model array by about 40 times without degrading from the inversion result. Care needs to be taken not to change the voxel size too quickly, as rapid changes in the volume sensitivity will result in material being moved from a smaller, lower-sensitivity cell to its larger neighbour, introducing artificial stratification to the inversion model. A multiplier of around 1.15 with a hyperbolic tangent smoothing into the core of the inversion area has been found to work well. Figure 2 shows an example of such a non-uniform model grid. 2.3Wavelet compression Another tool to reduce the size of the inversion problem is to compress the matrices. Li and Oldenburg (2003) proposed using wavelet transforms to compress the sensitivity matrix by nulling all wavelet coefficients with amplitudes below a user-defined threshold. This approach can be used to reduce the size of the problem by up to 3 orders of magnitude without seriously impacting on the solution. Tomofast-x users have the option to choose between the commonly used wavelet, i.e. Harr and Daubechies D4 (Martin et al., 2013). The wavelet transform is implemented using multilevel lifting scheme, which reduces the number of arithmetic operations by nearly a factor of 2. The transform is performed in place, requiring only a constant memory overhead. The wavelet transform is applied to each row of the weighted sensitivity kernel independently, which are then compressed and stored using sparse matrix format (i.e. only non-zero values are stored). The resulting model perturbation vector after each inversion is transformed back from the wavelet domain using the inverse wavelet transform. As discussed in Sect. 2.1 the model grid is implemented by draping the mesh below topography. This allows the model grid to be defined as a 3D array of dimensions N[x]×N[y]×N[z] for any topography without introducing additional “air” cells. The 3D wavelet transform can therefore be applied efficiently due to the coherency of the sensitivity values along the spatial dimensions. Unit tests were also added to verify that the wavelet transforms preserve a vector norm. The non-compressed sensitivity kernel is a dense matrix of size N[data]×N[model], with N[model]=N[x]×N[y]×N[z]. A user-selected compression rate R[c] defines the fraction of sensitivity values to keep from the original kernel. Thus, the memory to store the compressed sensitivity kernel (using CSR format) nearly linearly scales with the compression rate and can be calculated as $\begin{array}{}\text{(1)}& {M}_{\text{sensit}}=\left(\mathrm{4}+f\right){R}_{\mathrm{c}}{N}_{\text{model}}{N}_{\text{data}}+\mathrm{8}{N}_{\text{data}}\phantom{\rule{0.125em}{0ex}}\mathrm{bytes},\ where f=4 for single-precision floating values and f=8 for double-precision floating values. Comparative tests showed that storing sensitivity values using single precision is sufficient and has no noticeable impact on inversion results, while it is important to use double-precision values during the sensitivity calculation and wavelet transformation. The first term corresponds to two arrays of the CSR format: column indexes (4byte integers) and matrix values (floats). The second term in Eq. (1) corresponds to the array of indexes in the values array corresponding to each matrix row. It is stored using long integer type (8bytes) to avoid integer overflow. An estimation of the total memory requirement is given in Appendix A. An additional memory optimisation for the parallel matrix storage is discussed in Sect. 2.6, and examples of the inverted models using various compression rates are given in Sect. 3.3. Nearly all magnetic rocks will have a component of remanent magnetisation storing the magnetic field direction from the time of crystallisation of the magnetic mineral. The strength of this remanent field is often very weak and can be ignored; however, in a significant number of cases of interest to mineral explorers, the remanent field is strong enough to alter the shape of the observed field. Remanence can affect the host and the target. The existence of remanence is often evident from looking at the shape of the anomaly which should alert the interpreter to the potential for error. It may however be cryptic and if not allowed for, will result in a flawed interpretation. In addition, all magnetic bodies are affected by self-demagnetisation. This is a property that produces a magnetic field within the body that is aligned to the shape of the body rather than to the Earth's field or any stored remanent direction. For bodies with susceptibilities less than 0.1SI, the effects of self-demagnetisation are negligible; however, above this point the observed response can be distorted to such a degree that exploration programmes fail (Gidley, 1988). A solution to both problems is to invert on magnetisation rather than susceptibility (Elllis et al., 2012). The code was therefore extended to allow for magnetisation inversion. It must be noted that magnetisation inversion needs three sensitivity kernels defined for each magnetic vector component (Liu et al., 2017). Thus, the computational and memory requirements grow by nearly a factor of 3. Therefore, using optimisations described previously (i.e. wavelet compression and non-uniform grid) becomes even more important for this type of inversion. Examples of magnetisation inversion are shown in Sect. 3.4. 2.5Multiple data components Historically only a single component of the magnetic field was measured. Initially the local declination or inclination were measured using optical dip meters; subsequently, the field strength in the vertical direction, Hz, was measured using a fluxgate. Fluxgates were replaced by proton precession and then optically pumped magnetometers that measured the field strength in the direction of the Earth's field, i.e. the total magnetic intensity (TMI). However, fluxgates can be used to also measure the horizontal components, as is routinely done within magnetic downhole survey tools. SQUID magnetometers are now being developed which will enable a tensor measurement. Multi-component data provide a more diagnostic dataset with regard to the body location, particularly when remanence is involved. It is therefore useful to be able to model and invert data acquired in any direction and from any location. Inversion of three magnetic data components was included in the code. As with magnetisation inversion discussed in previous section, this also increases the computation and memory requirements. Specifically, when three data components are used for magnetisation inversion, the number of the sensitivity kernels grows to nine. Thus, these inversions strongly benefit from wavelet compression and non-uniform grid optimisations. These modifications allow for inversion of three-component downhole magnetic data, which typically improves drill targeting compared to single-component or multi-component above-surface datasets. This improvement is particularly noticeable when magnetic remanence is present. 2.6Efficient parallelisation scheme The parallelisation of the inversion problem can be approached in several ways: by partitioning the model or partitioning the data. These two approaches correspond to splitting the least-squares system matrix by column (model) or by row (data). As both approaches have their advantages, an efficient parallelisation scheme that combines the two approaches was designed. The parallelisation scheme is comprised of the following elements. The forward problem is parallelised by data. This avoids the need to parallelise the wavelet compression, as the full row of the sensitivity kernel (corresponding to the whole model) is available at each processor. This in turn allows for saving costs in terms of the MPI communication required to parallelise the wavelet compression. This can be significant as the wavelet transform calculations are non-local and span over the whole model space. The inverse problem is parallelised by model. In the compressed sparse row (CSR) matrix storage the model vector is accessed in the inner loop in both matrix vector and transposed matrix vector products. This limits the performance due to memory traffic (Sect. 4.3.2 in Barrett et al., 1994). In order to minimise the memory traffic in these operations it is better to partition the model vector. Both versions with the inversion problem parallelised by data and by model were implemented and compared, confirming that parallelising by model is faster and leads to better scalability. This was also confirmed by profiling the code using Intel VTune Microarchitecture Exploration tool, which showed an increase in L3 cache misses relative to partitioning by model, resulting in an overall slowdown of the code. Machines with larger L3 caches performed better than those with smaller caches even though they had older-generation CPUs. Load balancing is performed in the following way. When wavelet compression is applied to the sensitivity kernel, the distribution of the non-zero values is not uniform. When the model partitioning is performed using equal size chunks, the number of non-zero kernel elements can vary by a factor of 100 on different processors. This leads to non-efficient usage of CPU resources and slows down the calculations. To mitigate this, load balancing was implemented by partitioning the matrix columns into non-equal chunks such that the total number of non-zeros is nearly the same on each CPU. Doubly compressed matrix storage is performed in the following way. In the CSR matrix storage format, the outer loop in the matrix vector and transposed matrix vector products iterates over the matrix rows. When parallelising the inversion by model (i.e. parallelising the inner loop), the outer loop spans over all matrix rows N[rows]. This limits the parallelism, defined as the maximum possible speedup on any number of processors, to at most O(nnz$/{N}_{\mathrm{rows}}$) (Buluç et al., 2009), where nnz is the total number of non-zero values. The typical number of least-squares matrix rows in inversions is N[rows]=N[data]+N[c] N[model], where the N[data]-rows corresponds to the sensitivity kernel and N[model]-rows corresponds to the N[c] constraint terms (such as model damping term). The typical N[model] value is much greater than the N[data], and thus adding constraints significantly limits the scalability. Note that the rows corresponding to constraint terms are usually extremely sparse, for example one non-zero element per row in the model damping term (i.e. a diagonal matrix). Thus, in the parallel runs there will be many empty rows in the local matrices, meaning that the redundant information will be stored. Therefore, the double compressed sparse row (DCSR) format, which improves the compression efficiency by storing only non-empty rows, was implemented (Buluc and Gilbert, 2008). The DCSR format adds one additional array to the CSR format, which stores the indexes of non-empty rows. This improves the scalability and reduces memory traffic. Additionally, this reduces the memory requirements for storing the matrix on several CPUs. Efficient memory allocation is performed in the following way.In the LSQR inversion solver, the parallel bidiagonalisation operations (consisting of matrix vector and transposed matrix vector products) were re-implemented to overwrite the existing vectors, thus avoiding memory allocation for the intermediate products (for details, see Sect. 7.7 in Paige and Saunders, 1982). The corresponding parallelism was implemented using the same input and output buffer of the collective communication routine MPI_Allreduce via MPI_IN_PLACE. In Sect. 3.5 the proposed parallelisation scheme is applied to several representative models, and the resulting parallelisation efficiency is also discussed. 2.7Sensitivity kernel import and export The ability to import and export the sensitivity kernel has been implemented. The compressed sensitivity kernel is stored as a binary file, which allows for previously computed sensitivity kernels to be reused. This saves computational time (up to ∼95%) when exploring the effect of different types of constraints and different hyperparameter values (i.e. when varying weights in the cost function), as well as different starting and prior models. The sensitivity kernel can also be reused when a different background field needs to be subtracted from the data, as the kernel values depend only on the model grid and data locations. Additionally, the calculated kernel can be imported into other codes to perform sensitivity analysis, null-space navigation, or level set inversions (Giraud et al., 2023a). Finally, it is possible to import the external kernel corresponding to a different geophysics or another optimisation problem (e.g. geological modelling) and use the Tomofast-x as a high-performance parallel inversion (or optimisation) solver. External to Tomofast-x, a Python script to import and export the sensitivity kernel from and to Python has also been implemented, and it is available at https://github.com/TOMOFAST/Tomofast-tools (last access: 11 March 2024). The datasets and models used for the inversion and the results of those inversions will now be discussed. Different compression rates were tested, and parallel performance was benchmarked. The Table 2 shows the physical model dimensions and corresponding model grid size and number of data for the test models, which are then described in detail below. In all susceptibility inversions a positivity constraint was applied to reduce the model non-uniqueness (Li and Oldenburg, 1996a). The positivity constraint was added via the alternating direction method of multipliers (ADMM)-bound constraints (Ogarko et al., 2021). The number of major inversion iterations used was N[iter]=30 for Callisto model and N[iter]=50 for Synthetic400 model. The number of minor iterations (in the LSQR inversion solver) used was 100 in all tests. 3.1Synthetic400 model While it is important to use real datasets in testing inversion codes it is also important to use datasets where the answer is known. In real exploration datasets the true answer is never known, at least not throughout the model space. A drill hole here or there may constrain the depth to a particular magnetic body, and susceptibility measurements on core or chips or from in-hole tools may constrain the likely range of susceptibilities for that body. In general, however, the properties of the vast majority of the model space is unknown. Synthetic models are therefore required in order to calibrate the inversion algorithm and understand its target resolution. Although synthetic models can never achieve the level of complexity that a real dataset contains, this paper attempts to make the model as realistic as possible. Topography, extracted from a 12.5×12.5m ALOS DEM, for an active exploration prospect in Indonesia was used as the base layer, and the ground was ascribed a homogeneous susceptibility of 0.01SI, which would be at the lower end of expected susceptibilities for this area. The elevation has a range of 1400m over the 10km×10km extent of the central core of the model space, increasing to 1600m if one includes the horizontal padding. The magnetic response for a simulated survey flown with a perfect 60m drape height over this topography was modelled using Tomofast-x, and the results are shown in Fig. 3 along with the topography that produced it. The field range shown in the key has been clipped to flatten the outer 2% of the actual range, which was closer to 12nT. As shown in Frankcombe (2020), typical topographically induced variations in these areas are an order of magnitude higher than this, implying a ground susceptibility closer to 0.1SI. However, as we are looking at susceptibility contrasts, the use of a low background value in the synthetic model does not detract from its results. A parametric potential field modelling package (Potent – https://geoss.com.au/ potent.html, last access: 11 March 2024) was then used to compute the magnetic response of several geologically plausible bodies buried in this topography. In order to provide a challenge for the inversion algorithm, the bodies were placed where their magnetic response might best be masked by topography. Potent uses line and surface integrals to model the response of bodies. Modelling the response of the host is only practical in 2D. Topography was used to control the sensor position, which was defined as a constant drape above topography. The susceptibility of the half-space is only used by the parametric modelling to compute a contrast to use when computing the response of the model body. The computed response therefore becomes the response from bodies in air. The susceptibility of bodies is their contrast relative to the host, which may be negative. These responses were then added to the computed response of the topography alone from Tomofast-x to generate a synthetic dataset. The Potent program was used in preference to a generalised voxel-based program because the regularisation of the mesh would have required very small voxels in order to accurately reflect the desired shape and thickness of the bodies. We have used a parametric model to better test the algorithm by approximating real-world structures that are hard to describe with a mesh of rectangular prisms. It is worthwhile saying a few words about the geological relevance of the synthetic model. In modern volcanic environments in the western Pacific and elsewhere, the underlying geology often consists of a series of volcanic flows. These range from relatively nonmagnetic crystal tuffs through to andesitic basalt and local lahar debris flows. Where the lahar carries andesite boulders it can become very magnetic and very heterogeneous. The layered package of volcanics may be interspersed with sediments deposited during periods of quiescence. These sediments are generally mudstones or limestones, but in deeper-water environments they may include greywackes. Typically, the package is only lightly tilted, meaning that dips are relatively flat. For the area to become of interest to explorers, an intrusion of some sort is required. These are typically pipe-like bodies and may have a positive susceptibility contrast with the host in the case of skarns and potassically altered porphyry-style mineralisation or, in the case of epithermal mineralisation or for the alteration halo around a porphyry system, a negative contrast through magnetite destruction. These elements have been incorporated into the synthetic model and are labelled in Fig. 4. Body 1 is a 100m thick magnetic disc reflecting a more magnetic basalt flow that has been eroded away, except at the peak near the centre of the model space. Body 2 is a large, flat, 50m thick, magnetic sheet similarly reflecting an erosional residual from broader magnetic lava flow. Body 3 is a 10m thick dipping slab reflecting either a magnetically mineralised fault or a dyke-like feeder to the lava flows. Body 4 is a pipe-like magnetic body with a radius of 250m and a relatively high susceptibility, reflecting a strongly potassically altered porphyry intrusion or skarn around a finger porphyry. Body 5 is a 20m thick dipping sheet, reflecting a tilted magnetic lava flow. Body 6 is a 500m radius pipe with a susceptibility lower than the host. This reflects a zone of magnetite destruction around and above a porphyry system. The bodies are described in Table 3. The synthetic data had various levels of noise added to it prior to inversion. Noise was added on the basis of a rescaled percentage of the observed value. Noise levels ranging from 1% up to 10% of the computed synthetic response were rescaled by a random number between −0.5 and +0.5. Where the amplitude of the rescaled noise fell below a noise floor, the noise floor multiplied by the sign of the rescaled noise replaced it. Noise floors of 1 and 5nT were employed. That noise was then added to the computed synthetic response. As this synthetic model contained no remanence, a susceptibility inversion was undertaken. A mesh with a central 400×400 voxel core of 25m×25m voxels with a first layer thickness of 10m was created. This was padded on each side with 20 cells increasing in size away from the core to add around 3500m to each edge of the horizontal extent of the core, and 28 cells were used vertically to take the mesh down around 3200m below the ground surface. In total the mesh was 440×440×28 cells (N[model]=5420800), and there were magnetic observations above the centre of each surface cell in the central core (N[data]=160000). This would be considered a small- to moderate-sized exploration problem. Regardless of the noise level, the inversion does recover the shape and depth to the top of the three most magnetic bodies (Fig. 5). The inversion favours the top of the vertical cylinder but focuses higher susceptibilities right to the bottom of the model as shown in Fig. 6a. It also places magnetic material beside the low-susceptibility body to compensate for what the inversion perceives as a negative susceptibility (Fig. 6b). This is a common artefact in unconstrained susceptibility inversions. For the thin slabs the inversion has overestimated the depth to the bottom of the body. This is also a common artefact and is driven by the smoothness of the L2 norm used to compute convergence. The bowl-shaped overestimate of the thickness of body 1 has been compensated for by a “cape” of low susceptibility draped around the body. The noise added to the data is flowing through to the model as variations at the surface. Although the two thin dipping slabs with small susceptibility contrasts coincide with gradients in the model values, there is not enough contrast to be able to resolve these with any confidence. The bodies have susceptibility contrasts of 0.01 and 0.02SI and are only 10 and 20m thick, respectively. At the depth of the thin plates the model voxels are significantly thicker than the plates, and this means that not being able to resolve them is not particularly surprising. The eastern body with a negative susceptibility contrast is poorly imaged, but as noted above this is typical for unconstrained susceptibility inversions. The inversion sees the body as having a negative susceptibility, but the positivity constraints will not allow this, meaning that magnetic material is artificially stacked around it to produce the response we see in Fig. 6b. 3.2Callisto model The Callisto anomaly is a discrete magnetic anomaly on the edge of Lake Carey in the Eastern Goldfields of Western Australia. Superficially it has many similarities with the neighbouring Wallaby and Jupiter magnetic anomalies, which are both magnetically mineralised syenite intrusions carrying significant quantities of gold (Wallaby >7MOz, Jupiter >0.5MOz). The area was flown in 1998 by its then owners, Homestake Gold of Australia, using 50m line spacing and 40m terrain clearance. The survey data are available from the Western Australian Department of Mines (https:// geodownloads.dmp.wa.gov.au/downloads/geophysics/60251.zip, last access: 11 March 2024). The magnetic response of Wallaby has been studied in detail (Coggon, 2003; Banaszczyk et al., 2015), and although good fits to the observed data were obtained from modelling dipping cylinders, these models deviated from the geological model. Using susceptibility data from many drill holes, Coggon (2003) showed that the response could be modelled using a hollow pipe that reflected the geological model of a near-vertical zone of multiple syenite intrusions that had undergone a strong magnetite enrichment event followed by a strong magnetite destruction event associated with the gold mineralisation. This had created a residual annulus of high susceptibility around the altered gold-rich core. Figure 7 shows an image of the total magnetic intensity for Callisto that suggests that it too may be a hollow pipe-like body. It is thus a compelling exploration target. Figure 8 shows the inverted model with drill holes coloured and scaled by susceptibility that has been measured on core using a handheld meter. There is a good match between increases in measured susceptibility and the edge of the magnetic model but a poor match inside of the inverted body, reinforcing the suggestion that the magnetite distribution has a hollow pipe-like shape. Although the shallow parts of the inverted model show fingers around a central low-susceptibility core, the smoothness in the objective function and a lack of constraining data causes the inversion to fill the pipe at depth. The same shape was recovered from an inversion of these data using the UBC MagInv3D package. 3.3Different compression rates To test the effect of differing compression rates, rates of R[c]=5%, 1%, and 0.5%, corresponding to a reduction of the sensitivity kernel storage by factors of 20, 100, and 200, respectively, were applied. Figures 9 and 10 show a representative section through the inverted models. The effect of increasing degree of compression (i.e. decreasing compression rate) can be seen in the model contours. The weak compression, with R[c]=5%, has nearly no influence on the structure of the resulting model. Stronger compressions, with R[c]=1% and 0.5%, result in noise in the contours, while the main model structures remain at the same location with the same shape and model values. As can be seen, the larger Synthetic400 model is less affected by compression than a smaller Callisto model. Generally, the larger the model, the greater the compression that can be used without significantly degenerating the model structure. This is due to the usage of multilevel wavelets, where larger models allow more levels to be used. For example, for the large model with dimensions of N[x]=812, N[y]=848, and N[z]=26, a compressed rate of R[c]=0.0086 was successfully used. The optimum compression rate can be either obtained experimentally or calculated from the available memory using Appendix A. Figure 11 shows the memory usage and CPU time (wall time) for compression rates from R[c]=0.1% to R[c]=50% for the Callisto model, using N[cpu]=40 cores on a machine with shared memory system. The memory was recorded by the value of “VmHWM” saved in the “/proc/self/status” file. This corresponds to the peak resident set size (“high water mark”) memory. The total memory is calculated by summing up this value across all processors. The memory reported in Fig. 11 is measured at the end of the run to calculate the maximum RAM memory used. Both the memory used and CPU time scale nearly linearly with the compression rate. The theoretical memory prediction Eq.(A1) has a constant offset from the measured memory. This can be explained by the memory internally allocated by the MPI library. The offset also scales nearly linearly with N[cpu], and this is explained by the shared memory allocated by the MPI. Thus, the actual amount of the allocated “offset memory” is smaller than reported using this approach by a factor of ∼N[cpu]. 3.4Magnetisation inversion example In this subsection we show a series of two models to test the magnetisation inversion. In the first example we demonstrate the capability of the inversion to recover anomalies. In the second example, we propose a challenging case to evaluate the limits of our code. 3.4.1Two-body model In order to test the new magnetisation code, first a simple two-body model was constructed. Each body was only a single model voxel in size. Two cases are considered with (1) both bodies induced and (2) both bodies remanently magnetised with a Koenigsberger ratio of 1. The northern (right-hand) body had its remanence directed almost orthogonal to the inducing field (I[r]=60, D[r]=90 vs. I[i] =−60, D[i]=2), and it has a susceptibility of 0.1SI. The southern (left-hand) body had a susceptibility of 0.05SI, but its remanent field was parallel to the induced field, which doubled the effective susceptibility. The host had no remanence and a susceptibility of 10^−5SI. The forward response was computed using both Tomofast-x and UBC's MVIFWD, and good agreement was found for both the TMI and three-component responses. Three-component data were inverted, and the results are shown in Fig. 12. The inverted and true models are in good agreement with respect to anomaly location, magnetisation vector direction, and the relative anomaly magnitude. 3.4.2Synthetic400 magnetisation model In order to test the magnetisation code with a more challenging example, the synthetic model described in Sect. 3.1 was revisited, and four of the six bodies had remanence applied to them within the parametric model. Table 4 shows the values used. As before, the computed response from the bodies was added to the response computed for a magnetic half-space with topography, and noise was then added. This was then inverted. Despite moving from solving for one unknown (susceptibility) to three unknowns (magnetisation vector) the inversion has done a good job of identifying the bodies also identified previously, as shown in Fig. 13. The model is smoother than the susceptibility inversion and in the case of the thin sheets and disc the depth is overestimated away from the edges of the sheet or disc. Figure 14a shows a section through the 3D magnetisation model at 154000E which is through the centre of bodies 4 and 1. When compared with the same section for the susceptibility inversion (Fig. 6a), we note a smoother model and a greater level of sagging below the disc. The low-susceptibility cloak below the disc (body 1) is no longer present, and the background is more homogeneous than was the case for the susceptibility inversion. Looking at the response of the low-susceptibility body (body 6) on section 159000E (Fig. 14b), we see that the low-susceptibility cylinder is resolved to a local magnetisation high and that the background is more homogeneous than the susceptibility inversion (Fig. 6b). 3.5Parallel performance tests We analyse the code parallel performance using two types of machines targeted at different types of users: (1) a single multi-core machine with shared memory system and (2) a large supercomputer with distributed memory across the computational nodes. Most commercial users of the software are likely to use the first type of machine, while academic users are more likely to use the second machine 3.5.1Shared memory tests The machine used for shared memory tests is an IBM System x3850 X5 server with four Xeon E7-4860 decacore processors running at 2.27GHz. The machine has 512Gb of DDR3 (1066) RAM and an Ubuntu 20.04 desktop operating system. Figure 15 shows the CPU time (wall time) and parallel efficiency for different numbers of MPI processors for the Callisto and Synthetic400 models. The parallelisation efficiency is defined as the ratio of the speedup factor and the number of processors. Note that for the Synthetic400 model the data starting from N[cpu]=16 are shown, as it would take very long time to run it on smaller number of cores; the corresponding parallel efficiency is calculated with respect to N[cpu]=16 for this model. For both models, the CPU time scales nearly linearly with the number of processors up to the maximum number of cores used N[cpu]=40. This is confirmed by high parallel efficiency of greater than ∼90% for all numbers of processors. The nearly linear speed-up with number of processors indicates an efficient use of memory and no sign of bandwidth bottlenecking or I/O limiting performance barriers. It is expected that this would scale to an even higher number of threads, particularly on optically linked distributed memory systems. To analyse performance, we also measured the MPI time using the Integrated Performance Monitoring tool (IPM). The IPM has an extremely low overhead and is easy to use, requiring no source code modification. Various configurations of MPI settings were tested, and it was found that the strongest impact on the performance has the parameter I_MPI_ADJUST_ALLREDUCE. This controls the algorithm used for the collective MPI communications in the inversion solver. For example, by setting I_MPI_ADJUST_ALLREDUCE=3, corresponding to “Reduce+Bcast” algorithm, the MPI time for Synthetic400 model was decreased by 25% on 40 cores. The optimal parameter depends on the machine settings, model size, and the number of cores, and it must be determined experimentally. 3.5.2Distributed memory tests For distributed memory tests we use the Gadi supercomputer of the National Computational Infrastructure (NCI), which is Australia's largest research supercomputing facility. Gadi contains more than 250000 CPU cores, 930TB of memory, and 640 GPUs. This includes 3074 nodes each containing two 24-core Intel Xeon Scalable “Cascade Lake” processors and 192GB of memory. Figure 16 shows the CPU time (wall time) and parallel efficiency for different numbers of MPI processors for the Callisto and Synthetic400 models. Note that for the Synthetic400 model, the data starting from N[cpu]=8 are shown, and the corresponding parallel efficiency is calculated with respect to N[cpu]=8. We observe linear speedup up to N[cpu]=48, as confirmed by parallel efficiency greater than ∼95%. For a higher number of processors, the efficiency is starting to decline, reaching ∼80% at N[cpu]=96 and dropping below 50% for N[cpu] >624. Overall, through parallelisation, we were able to reduce the total computational time by more than 2 orders of magnitude, achieving maximum speedup of approximately 340 and 420 times for the Callisto and Synthetic400 models, respectively. 4Discussion and conclusions Since its initial release in 2021, the Tomofast-x code base has undergone a significant overhaul in terms of both its versatility and performance. Solving for magnetisation rather than susceptibility alone better reflects the real world, where magnetic remanence is much more common than many would like to admit. In addition, allowing input data to have multiple components improves the inverted model relative to a single component inversion of synthetic data, where the answer is known. Meshes with expanding vertical thickness with depth and matrix wavelet compression have allowed larger problems to be solved in real time. Improvements to the parallelisation scheme have allowed for better performance on both shared and distributed memory systems, with speedups of ∼40 and ∼400 times, respectively, for moderate-sized models. Memory reduction of up to 100 times is achieved due to wavelet compression of the sensitivity kernel. Following the changes to the code documented here, Tomofast-x 2.0 produces results very similar to those obtained from the UBC suite of potential field inversion codes. It does so in a similar run time and uses a comparable amount of system resources. However, unlike the commercial version of the UBC suite, it runs on Linux platforms which, because of memory and CPU slot limitations imposed on common versions of the Windows operating system, will allow it to be used on more powerful systems than those typically running Windows. In modern Windows systems it can run on the Windows Subsystem for Linux (WSL) rather than as a Windows application. As maintenance is planned for the foreseeable future, we argue that Tomofast-x can be used for commercial projects and demanding large-scale The optimisation work described in this paper is fundamental to allowing more complete descriptions of the controls on magnetisation, including remanence, which require 3 to 9 times the computer resources of the scalar description. Whilst the modifications described above address many of the challenges of industry and government agencies, further developments are planned, including a closer integration with level set inversions (Giraud et al., 2023a). To extend the gravity to plate-scale problems we would need to reframe the calculations with a spherical coordinate scheme. Increasing use of airborne gravity gradiometry data suggests that including this type of data directly in the inversion scheme also warrants some attention. Finally, and over the longer term, the addition of other types of physics to the platform will have obvious benefits. Appendix A:Total memory estimation The total memory requirements for an inverse problem with model damping constraints running in parallel on N[cpu] cores has been estimated. The matrix is partitioned by columns (i.e. parallelisation is by model parameters), and single precision floats are used for storing matrix vales. For comparison, both formats (CSR and DCSR) of sparse matrix storage are shown. Note that DCSR format requires less memory on multiple cores due to not having to store the indexes of empty rows. Table A1 shows memory requirement for different types of data structure used in the code. It must be noted that the solver memory is independent of the number of CPUs. To achieve this, the right-hand side vector is used as a buffer for internal solver calculations, and memory allocation for the intermediate products is avoided by using existing memory buffers (for details, see Sect. 2.6). The depth–weight and model arrays are split between the processes, meaning that their memory is independent of the number of cores. Here, the grid memory is allocated only on the master rank to write the final models. The total memory (for the DCSR matrix format) can be estimated as the sum of arrays in Table A1 as follows: $\begin{array}{}\text{(A1)}& \begin{array}{rl}{M}_{\text{tot}}=& \phantom{\rule{0.25em}{0ex}}\mathrm{8}{R}_{\mathrm{c}}{N}_{\text{model}}{N}_{\text{data}}+\mathrm{8}{N}_{\text{cpu}}{N}_{\text{model}} \\ & +\mathrm{44}{N}_{\text{cpu}}{N}_{\text{data}}+\mathrm{164}{N}_{\text{model}}.\end{array}\end{array}$ This equation can be used to estimate the compression rate, R[c], from the total amount of available memory. It must be noted that the MPI library can allocate additional memory, which is implementation dependent and can vary between different machines. Code and data availability The datasets and Tomofast-x input files for the Calisto and Synthetic400 models have been packaged and are available for download at Zenodo (https://doi.org/10.5281/zenodo.8397794, Ogarko et al., 2023a). Readers are encouraged to use these when testing Tomofast-x and other codes they may be working with. The latest Tomofast-x code version with user manual and some representative tests is available for download from GitHub (https://github.com/TOMOFAST/Tomofast-x, last access: 11 March 2024) and Zenodo (https://doi.org/10.5281/zenodo.8397948, Ogarko et al., 2023b). VO was the main contributor to the writing of this article, Tomofast-x code development, testing, and preparation of the assets. KF and TL worked together with VO on code development, testing, and preparation of the datasets for the tests. JG performed distributed memory scaling tests. All authors participated in the discussion and redaction of this paper. The contact author has declared that none of the authors has any competing interests. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Vitaliy Ogarko acknowledges Rodrigo Tobar for technical discussions. Jeremie Giraud acknowledges support from European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 101032994. This research was supported by the Mineral Exploration Cooperative Research Centre (MinEx CRC), whose activities are funded by the Australian Government's Cooperative Research Centre Programme. This is MinEx CRC document no. 2023/38. This paper was edited by Boris Kaus and reviewed by Andrea Balza and one anonymous referee. Banaszczyk, S., Dentith, M., and Wallace, Y.: Constrained Magnetic Modelling of the Wallaby Gold Deposit, Western Australia, ASEG Extended Abstracts, 2015, 1–4, https://doi.org/10.1071/ASEG2015ab290, Barrett, R., Berry, M., Chan, T. F., Demmel, J., Donato, J., Dongarra, J., Eijkhout, V., Pozo, R., Romine, C., and van der Vorst, H.: Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, Society for Industrial and Applied Mathematics, https://doi.org/10.1137/1.9781611971538, 1994. Bhattacharyya, B. K.: MAGNETIC ANOMALIES DUE TO PRISM-SHAPED BODIES WITH ARBITRARY POLARIZATION, GEOPHYSICS, 29, 517–531, https://doi.org/10.1190/1.1439386, 1964. Buluc, A. and Gilbert, J. R.: On the representation and multiplication of hypersparse matrices, in: 2008 IEEE International Symposium on Parallel and Distributed Processing, Miami, FL, USA, 14–18 April 2008, 1–11, https://doi.org/10.1109/IPDPS.2008.4536313, 2008. Buluç, A., Fineman, J. T., Frigo, M., Gilbert, J. R., and Leiserson, C. E.: Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks, in: Proceedings of the twenty-first annual symposium on Parallelism in algorithms and architectures, Calgary, AB, Canada, 11–13 August 2009, 233–244, https://doi.org/10.1145/1583991.1584053, 2009. Cockett, R., Kang, S., Heagy, L. J., Pidlisecky, A., and Oldenburg, D. W.: SimPEG: An open source framework for simulation and gradient based parameter estimation in geophysical applications, Comput. Geosci., 85, 142–154, https://doi.org/10.1016/j.cageo.2015.09.015, 2015. Coggon, J.: Magnetism — key to the Wallaby gold deposit, Explor. Geophys., 34, 125–130, https://doi.org/10.1071/EG03125, 2003. Elllis, R. G., de Wet, B., and Macleod, I. N.: Inversion of Magnetic Data from Remanent and Induced Sources, ASEG Extended Abstracts, 2012, 1–4, https://doi.org/10.1071/ASEG2012ab117, 2012. Frankcombe, K.: Magnetics in the mountains: An approximation for the magnetic response from topography, Preview, 2020, 34–44, https://doi.org/10.1080/14432471.2020.1773235, 2020. Gidley, P. R.: The Geophysics of the Trough Tank Gold-Copper Prospect, Australia, Explor. Geophys., 19, 76–78, https://doi.org/10.1071/EG988076, 1988. Giraud, J., Lindsay, M., Ogarko, V., Jessell, M., Martin, R., and Pakyuz-Charrier, E.: Integration of geoscientific uncertainty into geophysical inversion by means of local gradient regularization, Solid Earth, 10, 193–210, https://doi.org/10.5194/se-10-193-2019, 2019a. Giraud, J., Ogarko, V., Lindsay, M., Pakyuz-Charrier, E., Jessell, M., and Martin, R.: Sensitivity of constrained joint inversions to geological and petrophysical input data uncertainties with posterior geological analysis, Geophys. J. Int., 218, 666–688, https://doi.org/10.1093/gji/ggz152, 2019b. Giraud, J., Lindsay, M., Jessell, M., and Ogarko, V.: Towards plausible lithological classification from geophysical inversion: honouring geological principles in subsurface imaging, Solid Earth, 11, 419–436, https://doi.org/10.5194/se-11-419-2020, 2020. Giraud, J., Ogarko, V., Martin, R., Jessell, M., and Lindsay, M.: Structural, petrophysical, and geological constraints in potential field inversion using the Tomofast-x v1.0 open-source code, Geosci. Model Dev., 14, 6681–6709, https://doi.org/10.5194/gmd-14-6681-2021, 2021. Giraud, J., Caumon, G., Grose, L., Ogarko, V., and Cupillard, P.: Integration of automatic implicit geological modelling in deterministic geophysical inversion, EGUsphere [preprint], https://doi.org/ 10.5194/egusphere-2023-129, 2023a. Giraud, J., Seillé, H., Lindsay, M. D., Visser, G., Ogarko, V., and Jessell, M. W.: Utilisation of probabilistic magnetotelluric modelling to constrain magnetic data inversion: proof-of-concept and field application, Solid Earth, 14, 43–68, https://doi.org/10.5194/se-14-43-2023, 2023b. Goodwin, J. A., Lane, R. J. L.: The North Australian Craton 3D Gravity and Magnetic Inversion Models – A trial for first pass modelling of the entire Australian continent, RECORD 2021/033, Geoscience Australia, Canberra, https://doi.org/10.11636/Record.2021.033, 2021. Jessell, M.: Three-dimensional geological modelling of potential-field data, Comput. Geosci., 27, 455–465, https://doi.org/10.1016/S0098-3004(00)00142-4, 2001. Li, Y. and Oldenburg, D. W.: 3-D inversion of magnetic data, GEOPHYSICS, 61, 394–408, https://doi.org/10.1190/1.1443968, 1996a. Li, Y. and Oldenburg, D. W.: Joint inversion of surface and three-component borehole magnetic data, SEG Technical Program Expanded Abstracts 1996, 1142–1145, https://doi.org/10.1190/1.1826293, Li, Y. and Oldenburg, D. W.: 3-D inversion of gravity data, GEOPHYSICS, 63, 109–119, https://doi.org/10.1190/1.1444302, 1998. Li, Y. and Oldenburg, D. W.: Fast inversion of large-scale magnetic data using wavelet transforms and a logarithmic barrier method, Geophys. J. Int., 152, 251–265, https://doi.org/10.1046/ j.1365-246X.2003.01766.x, 2003. Liu, S., Hu, X., Zhang, H., Geng, M., and Zuo, B.: 3D Magnetization Vector Inversion of Magnetic Data: Improving and Comparing Methods, Pure Appl. Geophys., 174, 4421–4444, https://doi.org/10.1007/ s00024-017-1654-3, 2017. Markus, A.: Modern Fortran in Practice, Cambridge University Press, https://doi.org/10.1017/CBO9781139084796, 2012. Martin, R., Monteiller, V., Komatitsch, D., Perrouty, S., Jessell, M., Bonvalot, S., and Lindsay, M.: Gravity inversion using wavelet-based compression on parallel hybrid CPU/GPU systems: application to southwest Ghana, Geophys. J. Int., 195, 1594–1619, https://doi.org/10.1093/gji/ggt334, 2013. Martin, R., Ogarko, V., Komatitsch, D., and Jessell, M.: Parallel three-dimensional electrical capacitance data imaging using a nonlinear inversion algorithm and norm-based model regularization, Measurement, 128, 428–445, https://doi.org/10.1016/j.measurement.2018.05.099, 2018. Martin, R., Giraud, J., Ogarko, V., Chevrot, S., Beller, S., Gégout, P., and Jessell, M.: Three-dimensional gravity anomaly data inversion in the Pyrenees using compressional seismic velocity model as structural similarity constraints, Geophys. J. Int., 225, 1063–1085, https://doi.org/10.1093/gji/ggaa414, 2021. Nettleton, L. L.: GRAVITY AND MAGNETIC CALCULATIONS, GEOPHYSICS, 7, 293–310, https://doi.org/10.1190/1.1445015, 1942. Ogarko, V., Giraud, J., Martin, R., and Jessell, M.: Disjoint interval bound constraints using the alternating direction method of multipliers for geologically constrained inversion: Application to gravity data, GEOPHYSICS, 86, G1–G11, https://doi.org/10.1190/geo2019-0633.1, 2021. Ogarko, V., Frankcombe, K., and Liu, T.: Callisto and Synthetic400 inversion data sets, Zenodo [data set], https://doi.org/10.5281/zenodo.8397794, 2023a. Ogarko, V., Frankcombe, K., Liu, T., Giraud, J., Martin, R., and Jessell, M.: Tomofast-x 2.0 inversion source code, Zenodo [code], https://doi.org/10.5281/zenodo.8397948, 2023b. Özgü Arısoy, M. and Dikmen, Ü.: Potensoft: MATLAB-based software for potential field data processing, modeling and mapping, Comput. Geosci., 37, 935–942, https://doi.org/10.1016/j.cageo.2011.02.008, Paige, C. C. and Saunders, M. A.: LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares, ACM T. Math. Software, 8, 43–71, https://doi.org/10.1145/355984.355989, 1982. Pears, G., Reid, J., Chalke, T., Tschirhart, V., and Thomas, M. D.: Advances in geologically constrained modelling and inversion strategies to drive integrated interpretation in mineral exploration, in: Proceedings of Exploration 17: Sixth Decennial International Conference on Mineral Exploration, edited by: Tschirhart, V. and Thomas, M. D., 221–238, 2017. Vallabh Sharma, P.: Rapid computation of magnetic anomalies and demagnetization effects caused by bodies of arbitrary shape, Pure Appl. Geophys., 64, 89–109, https://doi.org/10.1007/BF00875535,
{"url":"https://gmd.copernicus.org/articles/17/2325/2024/","timestamp":"2024-11-15T03:32:41Z","content_type":"text/html","content_length":"267889","record_id":"<urn:uuid:4b2faaeb-3b93-4fc3-b2e3-b6e4c2d7facb>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00559.warc.gz"}
Stratification for Synthetic Purposive Sampling — stratify_sps Stratification for Synthetic Purposive Sampling Stratification for Synthetic Purposive Sampling Site-level variables for the target population of sites. Row names should be names of sites. X cannot contain missing data. A list of two elements, e.g., list("at least", 1). This argument specifies the number of sites that should satisfy condition specified below. The first element should be either at least or at most. The second element is integer. For example, list("at least", 1) means that we stratify SPS such that we select *at least 1* site that satisfies condition (specified below). A list of three elements, e.g., list("GDP", "larger than or equal to", 1). This argument specifies conditions for stratification. The first element should be a name of a site-level variable. The second element should be either larger than or equal to, smaller than or equal to, or between. The third element is a vector of length 1 or 2. When the second element is between, the third element should be a vector of two values. For example, list("GDP", "larger than or equal to", 1) means that we stratify SPS such that we select num_site sites that have *GDP larger than or equal to 1*. stratify_sps returns an object of stratify_sps class, which we supply to sps(). • C: A matrix on the left-hand side of linear constraints. The number of columns is the number of sites in the target population (=nrow(X)) and the number of rows is the number of constraints. • c0: A vector on the right-hand side of linear constraints. The length is the number of constraints. Egami and Lee. (2023+). Designing Multi-Context Studies for External Validity: Site Selection via Synthetic Purposive Sampling. Available at https://naokiegami.com/paper/sps.pdf.
{"url":"http://naokiegami.com/spsR/reference/stratify_sps.html","timestamp":"2024-11-09T01:24:03Z","content_type":"text/html","content_length":"10247","record_id":"<urn:uuid:a4b9edf9-7d04-4f4e-a4e8-5a9226bf8958>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00645.warc.gz"}
Solving the Equation with Worked Examples (2024) One of the most extraordinary things about Einstein’s energy- mass equivalence equation is its simplicity. However, we still need to make sure we are using the correct units when solving the equation, and that we understand the answer. The purpose of this page is to solve the equation as it is and give some idea of the huge amount of energy locked up in even the smallest amount of mass. The Components of the Equation If we break the equation E = mc 2 into its components and write out the terms fully we get: E = energy (measured in joules) m = mass (measured in kilograms) c = the speed of light (186,000 miles per second, or 3 x 10 8 ms -1 ) We will now examine each of the terms in a little more detail. Energy is measured in joules (J). How much energy is one joule? Not very much really. If you pick up a large apple and raise it above your head you will have used around one joule of energy in the process. On the other hand, we use up huge amounts of energy every time we switch on a light. A 100 watt light bulb uses 100 joules of energy every second, i.e. one watt is one joule per second. The speed of light Mass is a measure of a body’s resistance to acceleration. The greater the mass the greater the resistance to acceleration, as anyone who has ever tried to push a heavy object knows. However, for our purposes we can also think of mass as the amount of matter in an object. Mass is measured in kilograms (kg), with 1 kg about the same as 2.2 pounds. Note that we haven’t said what the mass is composed of. In fact, it could be anything. It doesn’t matter if we use iron, plastic, wood, rock or gravy. The equation tells us that whatever the mass is it can be turned into energy (whether it's practical to actually do so is another matter and is dealt with in other pages in this series). The speed of light is very close to 186,300 miles per second (300,000 km per second). In order to make the equation "work" we need to convert these numbers into units that are more suited to our purposes. In physics speeds are measured in metres per second. This is usually abbreviated to ms -1 ; that is: "metres times seconds to the minus one". Don’t worry if you don’t understand this notation. We could equally write m/s but using ms -1 makes the mathematics easier in the long run. Likewise, we could either say that the speed of light is 300,000,000 metres per second, or, as is more usual, express the same figure in scientific notation: 3 x 10 8 ms -1 . Now that we have everything in order let’s have a go at solving the equation. We will use a mass of 1 kg to keep things simple and I will show all of the workings of the equation. So, with 1 kg of mass (around 2.2 pounds) we get: Solving the Basic Equation Note how the units were dealt with and that kg m 2 s -2 is the same as joules (although a rigorous proof of this is outside the scope of these pages). So from 1 kg of matter, any matter, we get 9 x 10 16 joules of energy. Writing that out fully we get: 90,000,000,000,000,000 joules That is a lot of energy! For example, if we converted 1 kg of mass into energy and used it all to power a 100 watt light bulb how long could we keep it lit for? In order to answer the question the first thing to do is divide the result by watts (remember that 1 watt is 1 joule per second): 9 x 10 16 J / 100 W = 9 x 10 14 seconds That's a lot of seconds, but how long is that in years? A year (365.25 days) is 31,557,600 seconds, so: 9 x 10 14 seconds / 31,557,600 seconds = 28,519,279 years That is a very long time! Of course, converting mass into energy is not quite that simple, and apart from with some tiny particles in experimental situations has never been carried out with 100% efficiency. Perhaps that’s just as well. We have seen that the E = mc 2 equation is easy to solve as it is and that for even a small amount of mass a huge amount of energy can, at least in theory, be released. Other pages in this series show how the energy can be released in practical ways, as well as how the equation was derived, and here is an E = mc 2 calculator E = mc 2 – A lot of energy from a small mass NEW! Quick and Easy SuperFast Guides NEW! [ Time Dilation Formula ] [ General Relativity ] [ Einstein ] [ Time Dilation ] [ Black Holes ] [ Twin Paradox ]
{"url":"https://ebramu.shop/article/solving-the-equation-with-worked-examples","timestamp":"2024-11-12T19:38:00Z","content_type":"text/html","content_length":"72023","record_id":"<urn:uuid:6b24feaf-2741-41e0-81c7-78f041c0b169>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00663.warc.gz"}
Chapter 14 | Visualising Solids | Class-7 DAV Secondary Mathematics | NCERTBOOKSPDF.COM Chapter 14 | Visualising Solids | Class-7 DAV Secondary Mathematics Are you looking for DAV Maths Solutions for class 7 then you are in right place, we have discussed the solution of the Secondary Mathematics book which is followed in all DAV School. Solutions are given below with proper Explanation please bookmark our website for further updates!! All the Best !! Chapter 14 Worksheet 2 | Visualising Solids | Class-7 DAV Secondary Mathematics Unit 14 Worksheet 2 || Visualising Solids 1. Can the following be used as nets to make a cube? Answer 1. 2. Fold the net to get a solid. (Fig. 14.6) What is this solid known as? Answer 2. this net forms a ‘pyramid’. 3. A dice is a cube with dots or numbers on its faces in such a manner that the sum of the dots on its opposite faces is always seven. If there are four dots on one face, then on the opposite face there will be three dots. Given below are two nets to make a dice. Put the numbers in the blanks to get a dice in each case. Answer 3.
{"url":"https://ncertbookspdf.com/chapter-14-visualising-solids-class-7-dav-secondary-mathematics-2/","timestamp":"2024-11-08T08:29:05Z","content_type":"text/html","content_length":"80988","record_id":"<urn:uuid:817bece3-504d-4e61-b7c2-cf919d6da562>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00400.warc.gz"}
Mechanics of Deep Learning | Data Science Discovery Understand the concept of Gradient Descent and Back-propagation to get some idea of how Neural Networks work. Deep Learning Series In this series, with each blog post we dive deeper into the world of deep learning and along the way build intuition and understanding. This series has been inspired from and references several sources. It is written with the hope that my passion for Data Science is contagious. The design and flow of this series is explained here. Neural Networks & Mechanics of Deep Learning We have already covered some of the basics of the architecture and the respective components in the previous posts. But we need to understand one of the most important concepts. How do Neural networks exactly work? How are the weights updated in Neural networks? Well, let’s get into the algorithms behind Neural Networks. Gradient Descent For most machine learning algorithms, optimization is used to minimize the cost/error function. Gradient Descent is one of the most popular optimization algorithms used in Machine Learning. There are many powerful ML algorithms that use gradient descent such as linear regression, logistic regression, support vector machine (SVM) and neural networks. Let’s take the classic mountain valley example with a twist, you meet a pirate and in your travels you discover a map to the golden chalice of wisdom. The secret location is the lowest point in a very dark and deep valley. Given that there is no possible sources of natural or artificial light in this magical valley, both the pirate and you are in a race to reach the bottom of the valley in pitch darkness. The pirate decides to take steps forward randomly with the hope of eventually reaching the lowest point. Both of you have the same starting point, you think there must be a smarter way. At every step you decide to feel the gradient (slope) around you, and take the steepest step possible. By taking the best possible step every time, you win! That is analogous to the gradient descent technique. We are operating in the blind trying to take a step in the most optimal direction. Let us say that we fit a regression model on our dataset. We need a cost function to minimize the error between our prediction and the actual value. The plot of our cost function will look like: Mechanics of Deep Learning 6 Gradient is another word for slope and the first step in gradient descent is to pick a starting value at random or set it to 0. Now, a gradient has the following characteristics: Let’s take a mathematical function to further understand the same. In mathematical terms, if our function is: f(x) = e^{2}\sin(x) The derivative: \frac {\partial f}{\partial x} = e^2\cos(x) If x = 0 \frac{\partial f}{\partial x} (0) = e^2 \approx 7.4 So when you start at 0 and move a little (take a step), the function changes by about 7.4 times (magnitude) the amount that you changed. Similarly, if you have multiple variables we take partial z = f(x,y) = xy + x^2 For a function such as the one above we first take y as a constant and follow differentiate it in terms of x ( Here: y + 2x). Then we take x as a constant and take the derivative in terms of y (Here: x). Consider if x = 3 and y = -3 then f(x,y) = 9. The final value is obtained from the use of the chain rule of calculus. Mechanics of Deep Learning 7 $\nabla f $ the sign of the final gradient points in the direction of greatest change of the function. In a feed-forward network, we are learning how does the error vary as the weight is adjusted. The relationship between the net’s error and a single weight will look something like the image below (we will get into more detail a little later): Mechanics of Deep Learning 8 As a neural network learns, it slowly adjusts several weights by calculating (dE/dw) the derivative of network Error with respect to the weights. Gradient descent algorithms multiply the gradient by a scalar known as the learning rate (also sometimes called step size) to determine the next point. For example: Metric Value Gradient Magnitude 2.5 Learning Rate 0.01 Then the gradient descent algorithm will pick the next point 0.025 away from the previous point. A small learning rate will take too long and a very large learning rate the algorithm might diverge away from the minimum point (miss the minimum completely). Finally, the weights are updated incrementally after each epoch (pass over the training dataset) till we get the best results. Stochastic Gradient Descent In gradient descent, a batch is the total number of examples you use to calculate the gradient in a single iteration. So far, we have assumed that the batch has been the entire data set. But for large datasets, the gradient computation might be expensive. stochastic gradient descent offers a lighter-weight solution. At each iteration, rather than computing the gradient ∇f(x), stochastic gradient descent randomly samples i at uniform and computes ∇fi (x) instead. Back-propagation is simply a technique or method of updating the weights. We are aware of partial derivatives, chain rule and most importantly gradient descent. But with Neural networks having multiple layers and different activation functions make it difficult to visualize how everything comes together. Consider, a simple example with the following architecture: Mechanics of Deep Learning 9 Forward Pass Step 1: Initialization Let us initialize the weights and the bias.Table 1 a: Weight Initialization Example Weights Value w1 0.10 w2 0.15 w3 0.03 w4 0.08 w5 0.18 w6 0.06 w7 0.11 w8 0.26 Table 1: Dominated/Non-Dominated Example Assume take the initial input values to be [0.95,0.06] and the target value [0.05,0.82]. Step 2: Calculations To get the value of H1: H1 = w1 * x1 + w2 * x2 + b1 = 0.1 * 0.95 + 0.15 * 0.06 + 0.05 = 0.154 As we have a sigmoid activation function: H1 = \frac{1}{1+e^{-H1}} = \frac{1}{1+e^{-0.154}} = 0.538 Similarly, we can calculate H2. H1 = 0.538 and H2 = 0.52 Now we calculate the value for output nodes Y1 and Y2. Y1 = w5 * H1 + w6 * H2 + b2 = 0.18 * 0.538 + 0.06 * 0.52 + 0.42 = 0.548 Y1 = \frac{1}{1+e^{-Y1}} = \frac{1}{1+e^{-0.548}} = 0.633 Upon calculation: Y1 = 0.633 & Y2 = 0.648 Step 3: Error Function Let the error function be: J( \theta ) = {( {target – {output}})^2} Total Error (E) = E1 + E2 = 0.184972 E1 = 0.5 * (0.05 - 0.63368)^2 = 0.17 E2 = 0.5 * (0.82 - 0.64893)^2 = 0.014 Backward Pass Back-propagate the Errors to update the weights. Error at W5: \partial E \over \partial W5 = ({\partial E \over \partial output Y1}) * ({\partial output Y1 \over \partial Y1}) * ({\partial Y1 \over \partial W5}) Component 1: The Cost/Error Function target: T output: out E = 0.5 * (T1 - out Y1)^2 + 0.5 * (T2 - out Y2)^2 - (T1 - out Y1) = - (0.05 - 0.63368) = 0.58368 Component 2: The Activation function output: out out Y1 = 1/(1 + exp(-Y1)) out Y1 * (1 - out Y1) = 0.63368 * (1 - 0.63368) = 0.23213 Component 3: The Function of Weights Y1 = w5 * H1 + w6 * H2 + b2 H1 * 1 = 0.538 Finally, we have the change in W5: $ \partial E \over \partial W5 In order to update W5 recall the discussion on gradient descent. Let alpha be learning rate with a chosen value of 0.01. Updated W5 will be: W5 + \alpha * ({\partial E \over \partial W5}) Similarly, we can update the remaining weights. Let’s have a look at the formula to update W1: \frac{\partial E}{\partial w1} (\sum\limits_{i}{\frac{\partial E}{\partial out_{i}} * \frac{\partial out_{i}}{\partial Y_{i}} * \frac{\partial Y_{i}}{\partial out_{h1}}}) * \frac{\partial out_{h1}}{\partial H1} * \frac{\partial H1}{\partial w_{1}} Mechanics of Deep Learning 10 It feels like it is complicated, but really we are going back layer by layer to get the respective value. As w1 feeds into neuron H1 and H1 is connected to Y1 and Y2. Moving backwards, we are differentiating the error function following which Y1 and Y2 (the activation function and the function of Weights) . That leads us to H1 where we differentiate its activation function and its respective function of weights. This is how we back-propagate the errors and update all the weights. Once we update all the weights, that is one epoch or pass over the dataset. Further, we start the entire process of forward pass and backward pass again. This process is repeated for multiple times with the purpose of minimizing error. When do we stop? We stop prior to over-fitting that is we want the minimum validation error but we do not want the training error to be lower than the validation error. Hopefully, this explains the entire process of how neural networks actually work and sheds some light on gradient descent and back-propagation. What’s Next? Activation: We have talked about activation functions in the past posts, but let’s understand in more detail the different types of activation functions and explore their characteristics.
{"url":"https://datasciencediscovery.com/index.php/2018/12/04/mechanics-of-deep-learning/","timestamp":"2024-11-11T10:46:12Z","content_type":"text/html","content_length":"148331","record_id":"<urn:uuid:d7085185-9879-466b-aaa7-66a967d067c5>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00786.warc.gz"}
Miscibility transition in a binary Bose-Einstein condensate induced by linear interconversion We consider a model of a two-component condensate in one dimension, where the components represent two different spin states of the same atom. We demonstrate that linear interconversion between them, induced by a resonant electromagnetic wave, can make the immiscible binary condensate miscible. This transition is predicted by means of a variational approximation, and is then confirmed by direct numerical solutions of the symmetric and asymmetric models (the asymmetry is accounted for by a chemical-potentil difference between the components). The dependence of the corresponding order parameters on the linear-coupling strengths reveals a typical phase transition of the second kind. However, in dynamic states the species may remain separated, even if they would get mixed in a static configuration. Dive into the research topics of 'Miscibility transition in a binary Bose-Einstein condensate induced by linear interconversion'. Together they form a unique fingerprint.
{"url":"https://cris.tau.ac.il/en/publications/miscibility-transition-in-a-binary-bose-einstein-condensate-induc","timestamp":"2024-11-10T02:21:59Z","content_type":"text/html","content_length":"49221","record_id":"<urn:uuid:ad0809a6-c32d-4d2a-9c71-1dbf8e00fda3>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00862.warc.gz"}
Fourier Transform The Fourier transform named after Joseph Fourier, is a mathematical transformation employed to transform signals between time (or spatial) domain and frequency domain, which has many applications in physics and engineering. It is reversible, being able to transform from either domain to the other. The term itself refers to both the transform operation and to the function it produces. The Fourier Transform is a tool that breaks a waveform (a function or signal) into an alternate representation, characterized by sine and co-sines. The Fourier Transform shows that any waveform can be re-written in the sum of sine wave functions.The sine wave or sinusoid is a mathematical curve that describes a smooth repetitive oscillation. Virtually everything in the world can be described via a waveform - a function of time, space or some other variable. For instance, sound waves, electromagnetic fields, the elevation of a hill versus location, a plot of VSWR versus frequency, the price of your favorite stock versus time, etc. The Fourier Transform gives us a unique and powerful way of viewing these waveforms. Wikipedia [Fourier Transform] See Also Introduction to Fourier Transform [[1]]
{"url":"https://ascensionglossary.com/index.php/Fourier_Transform","timestamp":"2024-11-05T02:31:28Z","content_type":"text/html","content_length":"20733","record_id":"<urn:uuid:ee00af46-7b77-4149-ab97-24f8aed639ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00207.warc.gz"}
On exact mathematical formulae — LessWrong This is inspired by the review on "Linear Algebra done right". I decided to do a top-level post, because it hits a misconception that is pretty common. The starting point of this post is this quote from "Linear Algebra done right": Remarkably, mathematicians have proved that no formula exists for the zeros of polynomials of degree 5 or higher. But computers and calculators can use clever numerical methods to find good approximations to the zeros of any polynomial, even when exact zeros cannot be found. For example, no one will ever be able to give an exact formula for a zero of the polynomial p defined by . The authors misrepresent an important point that is understood by most mathematicians, but not properly understood by many laypeople. What does it mean to solve a problem? What does it mean to have an exact formula for the solution of a problem? The answers to both are a social convention that has historically changed and is expected to continue to evolve in the future. Back in the days, people only considered rational numbers, ie fractions. Oh, but what about the positive solution to ? Ok, we can't express this as a rational number (important theorem). Because these kinds of problems occured quite often, the mathematical community arrived at the consensus that , or more generally for nonnegative should be considered an explicit solution. Amazingly, this allows us to express the solution to any quadratic equation explicitly, with our expanded notion of "explicit". From an algebraic viewpoint it was natural to bless the positive solution to as an "explicit formula" next; historically it was a more contentious thing, because greek geometry wanted numbers to be constructible using a ruler and compass only. "Doubling the cube", ie expressing the positive solution to as a geometric construction was a famous old problem (proven impossible in 1837, after having been a very prominent mathematical research problems for more than 2000 years). Now, this obviously says not a lot about the cube root of 2, but says a lot about "constructible with ruler and compass". In other words: "Explicit solutions" are a messy historical map to mathematical territory, nothing more. The same holds if you ask for explicit formulas for zeros of polynomials after having grudgingly admitted nth roots as "explicit". The same holds if you ask about explicit integrals of explicit functions (also after having grudgingly admitted eg elliptic integrals as "explicit"). The same holds for solutions of differential equations. In mathematics, asking about an "explicit formula" for solutions to problems means just: Assuming a general background in mathematics, is the solution something I already have spent years of my life developing an intuition for? And if the answer happens to be "yes, unconditionally", then it is worthwhile. If the "explicit" formula uses things that are not commonly taught anymore (crazy "special functions" that 100 years ago constituted a perfectly fine explicit solution), or is too lenghty/complicated to inform intuitions, then it is functionally equivalent to "we don't know", which is functionally equivalent to "we can prove that no formula using terms of type xyz exists". So there is nothing surprising or scary about problems not having an "explicit" solution. The true value of Galois theory is that it properly elucidates the hidden structure of polynomial equations, not that it tells us that no "explicit solution formula" exists for degree 5 polynomials for this very historical notion of "explicit". The "explicit" degree 4 formula is nothing more than a curiosity with interesting history, but absolutely worthless from both an intuitive and numerical I most often encountered the unjustified bias towards "explicit solutions" for implicit functions (the function is defined by for some fixed , implicit function theorem + newton solver) and solutions to differential equations. Integrals are mostly considered "explicit" today. You're right, I should have made that clearer, thanks! In mathematics, asking about an "explicit formula" for solutions to problems means just: Assuming a general background in mathematics, is the solution something I already have spent years of my life developing an intuition for? Having spent some time looking at numerical solutions, I would generalize this to "do we have an algorithm that allows us to efficiently explore the relevant domain space?" The algorithm could be a way to calculate sin(x), to square a number, to solve the Laplace equation with given boundary conditions, or even to calculate the distribution of gravitational radiation from black hole collisions. As long as it as efficient (in terms of computational and human efforts) as pressing sin(x) on the calculator, it is as good as exact. I'm curating this because I think it makes a valid and subtle mathematical point, of the sort that has direct relevance to thinking about many other topics. It depends on context. Is the exponential explicit? For the last 200 years, the answer is "hell yeah". Exponential, logarithm and trigonometry (complex exponential) appear very often in life, and people can be expected to have a working knowledge of how to manipulate them. Expressing a solution in terms of exponentials is like meeting an old friend. 120 years ago, knowing elliptic integrals, their theory and how to manipulate them was considered basic knowledge that every working mathematician or engineer was expected to have. Back then, these were explicit / basic / closed form. If you are writing a computer algebra system of similar ambition to maple / mathematica / wolfram alpha, then you better consider them explicit in your internal simplification routines, and write code for manipulating them. Otherwise, users will complain and send you feature requests. If you work as editor at the "Bronstein mathematical handbook", then the answer is yes for the longer versions of the book, and a very hard judgement call for shorter editions. Today, elliptic integrals are not routinely taught anymore. It is a tiny minority of mathematicians that has working knowledge on these guys. Expressing a solution in terms of elliptic integrals is not like meeting an old friend, it is like meeting a stranger who was famous a century ago, a grainy photo of whom you might have once seen in an old book. I personally would not consider the circumference of an ellipse "closed form". Just call it the "circumference of the ellipsis", or write it as an integral, depending on how to better make apparent which properties you want. Of course this is a trade-off, how much time to spend developing an intuition and working knowledge of "general integrals" (likely from a functional analysis perspective, as an operator) and how much time to spend understanding specific special integrals. The specific will always be more effective and impart deeper knowledge when dealing with the specifics, but the general theory is more applicable and "geometric"; you might say that it extrapolates very well from the training set. Some specific special functions are worth it, eg exp/log, and some used to be considered worthy but are today not considered worthy, evidenced by revealed preference (what do people put into course syllabi). So, in some sense you have a large edifice of "forgotten knowledge" in mathematics. This knowledge is archived, of course, but the unbroken master-apprentice chains of transmission have mostly died out. I think this is sad; we, as a society, should be rich enough to sponsor a handful of people to keep this alive, even if I'd say "good riddance" for removing it from the "standard canon". Anecdote: Elliptic integrals sometimes appear in averaging: You have a differential equation (dynamical system) and want to average over fast oscillations in order to get an effective (ie leading order / approximate) system with reduced dimension and uniform time-scales. Now, what is your effective equation? You can express it as "the effective equation coming out of Theorem XYZ", or write it down as an integral, which makes apparent both the procedure encoded in Theorem XYZ and an integral expression that is helpful for intuition and calculations. And sometimes, if you type it into Wolfram alpha, it transforms into some extremely lenghty expression containing elliptic integrals. Do you gain understanding from this? I certainly don't, and decided not to use the explicit expressions when I met them in my research (99% of the time, mathematica is not helpful; the 1% pays for the trivial inconvenience of always trying whether there maybe is some amazing transformation that simplifies your problem). I recently discovered there's no closed-form formula for the circumference of an ellipse Yes. I've asked the computer, to give me some simple approximation formula then. This came out: It's quite good when b >> a. re: differential equation solutions, you can compute if they are within epsilon of each other for any epsilon, which I feel is "morally the same" as knowing if they are equal. It's true that the concepts are not identical. I feel computability is like the "limit" of the "explicit" concept, as a community of mathematicians comes to accept more and more ways of formally specifying a number. The correspondence is still not perfect, as different families of explicit formulae will have structure(e.g. algebraic structure) that general Turing machines will not. Polynomial-time computability is probably closer to the notion of explicitness (though still not quite the same, as daozaich points out). I don't know of any number that is considered explicit but is not polynomial-time computable. Having run up against broadly similar kinds of problems recently, I have concluded that intuitions are the right level for me to focus on. I also greatly prize historical context for math-related questions now. I feel like this is the not-well-explained motivation for returning to originators of and key historical contributors to a field, even if efficient textbooks have since been written. This is great - I’ve been working on a lot of math lately, and the difference in this post describes is definitely muddied in my mind, but until reading I didn’t realize I was confused about the changed my mind edit New Comment 14 comments, sorted by Click to highlight new comments since: One thing that isn't made, aha, explicit in this excellent article, and that maybe should be, is that considering ruler-and-compasses constructions to be explicit turns out to be exactly the same thing as considering taking square roots to be explicit because (once you have the idea of coordinate geometry, which for some reason no one thought of until Descartes) it pretty easily transpires that the numerical operations you can do on coordinates using ruler-and-compasses constructions are exactly arithmetic plus square roots. (Handwavily, the square roots come from the fact that the equation of a circle involves squaring the coordinates.) And then (this uses, in some sense, "half" of Galois theory, so it's easier than the insolubility of the quintic) you can use this to prove that indeed you can't "duplicate the cube" using ruler and compasses (it means taking cube roots, and it turns out that no quantity of arithmetic-plus-square-roots will let you do that), and that another famous ancient problem -- trisecting angles -- can't be done with ruler and compasses for essentially the exact same reason, and (this is a bit more difficult; it was one of Gauss's first important theorems) that the regular polygons you can construct with ruler and compasses are those whose number of sides is a power of 2 times some number of distinct primes of form 2^2^n+1. So you can do 3 and 5 and 17, but not 9 (two 3s: not allowed) and not 11 (wrong sort of prime). Of course, for all the reasons daozaich gives here, one shouldn't care too much about what can be done with ruler and compasses, as such. But these are still really cool theorems. I recently discovered there's no closed-form formula for the circumference of an ellipse, and then was also told "But that just means there's no nice formula in what we consider basic functions." Do you have a sense / opinion on how explicit / arbitrary such elliptic formulae in fact are? While the concept of explicit solution can be interpreted messily, as in the quote above, there is a version of this idea that more closely cuts reality at the joints, computability. A real number is computable iff there is a Turing machine that outputs the number to any desired accuracy. This covers fractions, roots, implicit solutions, integrals, and, if you believe the Church-Turing thesis, anything else we will be able to come up with. https://en.wikipedia.org/wiki/Computable_number Computability does not express the same thing we mean with "explicit". The vague term "explicit" crystallizes an important concept, which is dependent on social and historical context that I tried to elucidate. It is useful to give a name to this concept, but you cannot really prove theorems about it (there should be no technical definition of "explicit"). That being said, computability is of course important, but slightly too counter-intuitive in practice. Say, you have two polynomial vectorfields. Are solutions (to the differential equation) computable? Sure. Can you say whether the two solutions, at time t=1 and starting in the origin, coincide? I think not. Equality of computable reals is not decidable after all (literally the halting
{"url":"https://www.lesswrong.com/posts/qLdG44kpSoYzrzAp7/on-exact-mathematical-formulae","timestamp":"2024-11-09T10:46:09Z","content_type":"text/html","content_length":"1049104","record_id":"<urn:uuid:7a0fe83d-2354-471c-a6c8-40af19946aea>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00506.warc.gz"}
School of Mathematical and Data Sciences | Marjorie Darrah Dr. Marjorie Darrah is Professor of Mathematics at WVU. She has 25 years experience teaching at the university level, and has led three National Science Foundation projects connecting students and teachers to cutting edge technology and making a smooth pathway to IT careers. She also had funding from the US Department of Education, SBIR program to develop new products incorporating haptic touch technology that allows a person to touch and interact with a virtual object. More recently her education research is related to the mitigation of math anxiety in students and determining factors that facilitate rural, first-generation STEM student success. Her research also includes the development and implementation of biologically inspired algorithms (Artificial Neural Networks and Genetic Algorithms). She has been involved in the development, implementation and field testing of genetic algorithms that coordinate missions using multiple unmanned aerial vehicles (UAVs). In this area she has had funding from both Army Research Labs and Air Force Research Labs. She has also worked on a NASA project using Neural Networks to do health monitoring of UAV systems to avoid system degradation or failure. She recently worked on an Army Research Lab Project to used Convolutional Neural Networks to identify ground objects in LiDAR data collected using a small LiDAR sensor mounted on a multi-roter UAV. Dr. Darrah also has expertise in project evaluation and has conducted over 25 project evaluations, specializing in evaluation of educational technologies and educational programs. Before coming to WVU, she was the Director of the Computer Sciences Group for the West Virginia High Technology Consortium Foundation and also taught ten years at small private WV University, where she served at the Chair of the Natural Sciences Division. 1995 PH. D. MATHEMATICS 1991 M.S. MATHEMATICS Graduated summa cum laude 1989 B.S. MATHEMATICS 1988 B.A. Education, Comprehensive Mathematics 7-12 Graduated summa cum laude Courses offered at WVU: • Math 126 College Algebra • Math 150 Applied Calculus • Math 153/154 Calculus with Pre-Calculus • Math 303 Introducation to Concepts of Mathematics • Math 375 Applied Modern Algebra • Math 318 Perspectives in Science and Mathematics • Math 593 Neural Network Rule Extraction Research Interests: • Artificial Neural Networks (Dynamic Cell Structure and Convolutional NNs) • Neural Network Rule Extraction and Rule Insertion • Program and Project Evaluation • Mitigation of Math Anxiety in Students Recent Publications: Algorithm Related: Darrah, M., Richardson, M., DeRoos, B., & Wathen, M. (2022). Optimal LiDAR Data Resolution Analysis for Object Classification. Sensors, 22(14), 5152. Elsarrar, O., Darrah, M., & Cossman, J. (2021). Improving Neural Network Performance by Embedding Expert Knowledge in the Form of Rules. Procedia Computer Science, 191, 417-424. Elsarrar, O., Darrah, M., & Devin, R. (2020). Rule Insertion Technique for Dynamic Cell Structure Neural Network. International Journal of Computer and Information Engineering, 14(8), 287-292. Elsarrar, O., Darrah, M., & Cossman, J. (2021). Improving Neural Network Performance by Embedding Expert Knowledge in the Form of Rules. Procedia Computer Science, 191, 417-424. Elsarrar, O., Darrah, M., & Devine, R. (2019, December). Analysis of Forest Fire Data Using Neural Network Rule Extraction with Human Understandable Rules. In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA) (pp. 1917-19176). IEEE. Darrah, M., Cowley, K., Wheatley, C., McJilton, L., & Humbert, R. (2022). Analyzing the Growth of a Statewide Network to Increase Recruitment to and Persistence in STEM. Journal of Appalachian Studies, 28(2), 188-212. Darrah, M., Rubenstein, A., Sorton, E. and DeRoos, B. (2018). On-board Health-state Awareness to Detect Degradation in Multirotor Systems. In Proceedings of 2018 International Conference on Unmanned Aircraft Systems (ICUAS), Dallas, TX, USA, 2018. Rahem, M. A., Darrah, M. (2018). Using a Computational Approach for Generalizing a Consensus Measure to Likert Scales of Any Size n. International Journal of Mathematics and Mathematical Sciences. Volume 2018, Article ID 5726436. https://doi.org/10.1155/2018/5726436 Darrah, M., Trujillo, M. M., Speransky, K. and Wathen, M. (2017). Optimized 3D mapping of a large area with structures using multiple multirotors. In Proceedings of 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 2017,pp. 716-722. doi: 10.1109/ICUAS.2017.7991414. Darrah, M., Sorton, E., Wathen, M. and Mera Trujillo, M. (2016). Real-time Tasking and Retasking of Multiple Coordinated UAVs. Defense Systems Information Analysis Center Journal, 3(4) pp. 21-26. Education Related: Darrah, M., Leppma, M. & Ogden, L. (2023). Role of Grit and Other Factors in Mitigating Math Anxiety in College Math Students. In Proceeding of Pyschology of Mathematics Education Conference, Reno, NV, October 1-4, 2023. Darrah, M., Humbert, R., & Howley, C. (2022). Differentiating Rural Locale Factors Related to Students Choosing and Persisting in STEM. Research in Higher Education Journal, 42. Leppma, M., & Darrah, M. (2022). Self-efficacy, mindfulness, and self-compassion as predictors of math anxiety in undergraduate students. International Journal of Mathematical Education in Science and Technology, 1-16 Darrah, M., Humbert, R. & Stewart, G. (2022). Understanding the Levels of First Generationness. Inside HigherEd. https://www.insidehighered.com/views/2022/03/02/ Dr. Darrah's CV
{"url":"https://mathanddata.wvu.edu/directory/faculty/marjorie-darrah","timestamp":"2024-11-03T04:39:32Z","content_type":"text/html","content_length":"79011","record_id":"<urn:uuid:dda6c315-12b5-4839-84a6-3f0b4a54c19d>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00287.warc.gz"}
In this note we study finite $p$-groups $G=AB$ admitting a factorization by an Abelian subgroup $A$ and a subgroup $B$. As a consequence of our results we prove that if $B$ contains an Abelian subgroup of index ${p}^{n-1}$ then $G$ has derived length at most $2n$. Let $G$ be any group and let $A$ be an abelian quasinormal subgroup of $G$. If $n$ is any positive integer, either odd or divisible by $4$, then we prove that the subgroup ${A}^{n}$ is also quasinormal in $G$. In this paper we describe some algorithms to identify permutable and Sylow-permutable subgroups of finite groups, Dedekind and Iwasawa finite groups, and finite T-groups (groups in which normality is transitive), PT-groups (groups in which permutability is transitive), and PST-groups (groups in which Sylow permutability is transitive). These algorithms have been implemented in a package for the computer algebra system GAP. Let ℨ be a complete set of Sylow subgroups of a group G. A subgroup H of G is called ℨ-permutably embedded in G if every Sylow subgroup of H is also a Sylow subgroup of some ℨ-permutable subgroup of G. By using this concept, we obtain some new criteria of p-supersolubility and p-nilpotency of a finite group. All crossed products of two cyclic groups are explicitly described using generators and relations. A necessary and sufficient condition for an extension of a group by a group to be a cyclic group is The original version of the article was published in Central European Journal of Mathematics, 2011, 9(4), 915–921, DOI: 10.2478/s11533-011-0029-8. Unfortunately, the original version of this article contains a mistake: Lemma 2.1 (2) is not true. We correct Lemma 2.2 (2) and Theorem 1.1 in our paper where this lemma was used.
{"url":"https://eudml.org/subject/MSC/20D40","timestamp":"2024-11-10T18:43:17Z","content_type":"application/xhtml+xml","content_length":"51730","record_id":"<urn:uuid:0dfc503d-db20-4626-8c64-0fba92d8fc0b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00543.warc.gz"}
Unscramble EARLAP How Many Words are in EARLAP Unscramble? By unscrambling letters earlap, our Word Unscrambler aka Scrabble Word Finder easily found 52 playable words in virtually every word scramble game! Letter / Tile Values for EARLAP Below are the values for each of the letters/tiles in Scrabble. The letters in earlap combine for a total of 12 points (not including bonus squares) What do the Letters earlap Unscrambled Mean? The unscrambled words with the most letters from EARLAP word or letters are below along with the definitions. • earlap (n.) - The lobe of the ear.
{"url":"https://www.scrabblewordfind.com/unscramble-earlap","timestamp":"2024-11-07T00:42:32Z","content_type":"text/html","content_length":"46143","record_id":"<urn:uuid:d189df4a-5954-4012-b074-2c84278619cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00150.warc.gz"}
Deep Learning Startups Attract Venture Capital - reason.townDeep Learning Startups Attract Venture Capital Deep Learning Startups Attract Venture Capital In the last five years, deep learning startups have attracted $2.2 billion in venture capital, with an average deal size of $13 million. Checkout this video: What is deep learning? Deep learning is a subset of machine learning in artificial intelligence that has networks capable of learning unsupervised from data that is unstructured or unlabeled. Also known as deep neural learning or deep neural networks, deep learning is a technique used to model high-level abstractions in data by using a deep graph with many processing layers. What are some popular deep learning startups? There are a number of popular deep learning startups that have attracted venture capital in recent years. Some of these companies include Vicarious, Nervana Systems, and Sentient Technologies. Each of these companies is working on developing new ways to incorporate deep learning into different areas, such as robotics and artificial intelligence. Why do deep learning startups attract venture capital? Deep learning startups are attracting venture capital for a number of reasons: • First, deep learning is a cutting-edge area of AI with a lot of potential. Startups that are able to capitalize on this potential are seen as attractive investments. • Second, deep learning is an interdisciplinary field that combines computer science, mathematics, and neuroscience. This makes it attractive to VC firms that are looking to invest in promising new • Third, deep learning startups are often able to attract top talent. This is because deep learning is a highly competitive field, and startups that can attract the best talent are seen as more likely to succeed. • Finally, many deep learning startups are working on applications that have the potential to disrupt traditional industries. This makes them attractive investments for VC firms looking to invest in the next big thing. What are the benefits of deep learning? Deep learning is a subset of machine learning in which artificial neural networks (ANNs) learn by example. Deep learning is usually used to solve problems that are too hard for traditional machine learning algorithms. There are many benefits of deep learning, including: -Lower cost: Deep learning can be trained on less data and still achieve good results. This is especially helpful for companies that do not have a lot of data to begin with. -Fewer features: Deep learning can learn from data with fewer features than traditional machine learning algorithms. This is helpful because it means that deep learning can be used on data that has not been preprocessed and feature engineered, which can save time and money. -More accurate: Deep learning is more accurate than traditional machine learning algorithms, which means that it can be used to make better decisions. What are some challenges faced by deep learning startups? Deep learning startups are attracting a lot of venture capital, but they face some challenges. One challenge is that deep learning models can be complex and require a lot of data to train. Another challenge is that there is a lack of experts in the field of deep learning. How can deep learning be used in business? Deep learning is a branch of machine learning that deals with algorithms that can learn from data that is unstructured or unlabeled. This is in contrast to traditional machine learning methods, which require data to be labeled in order to be used for training. Deep learning has been used in a number of different fields, including computer vision, natural language processing, and predictive analytics. In recent years, there has been a growing interest in using deep learning for business applications. There are a number of startups that are using deep learning for business applications. Some of these startups include: -AEye: AEye is a startup that is using deep learning for computer vision. The company has developed a camera that can be used for autonomous vehicles. -Blue River Tech: Blue River Tech is a startup that is using deep learning for agriculture. The company has developed a platform that helps farmers to reduce the use of water and fertilizer. – Casetext: Casetext is a startup that uses deep learning for legal research. The company has developed a platform that helps lawyers to find relevant case law faster. What are some applications of deep learning? Deep learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using a deep graph with multiple processing layers, composed of simple but nonlinear modules. Many different architectures can be used for deep learning, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short Term Memory Networks (LSTMs) and Deep Belief Networks (DBNs). There are many different applications for deep learning. Some of the more popular applications include: -Images: Deep learning can be used for image classification, object detection and recognition, image segmentation, and photo captioning. -Text: Deep learning can be used for text classification, machine translation, text generation, and natural language processing tasks such as named entity recognition and question answering. -Time series: Deep learning can be used for time series analysis, including tasks such as forecasting, trend detection, and anomaly detection. -Speech: Deep learning can be used for speech recognition and synthesis. What are some future trends in deep learning? There are a number of future trends in deep learning that are attracting venture capital investment. One such trend is the development of architectures that are better able to handle the increasing complexity of data. Another is the development of more efficient ways to train deep learning models. In addition, there is a trend towards developing specialized hardware for deep learning, which can further improve efficiency. How can I get started in deep learning? There are a few different ways to get started in deep learning, but the most common is to join a startup. These companies are typically working on cutting-edge applications of deep learning and are often well-funded by venture capitalists. This can be a great way to get experience with the latest technology and to work with some of the leading experts in the field. another option is to join a research group at a university or other institution. This can be a good way to get started in deep learning if you’re interested in doing research, but it may not be as well suited for those interested in commercial applications. What are some resources for deep learning? There are many resources for deep learning, but some of the most popular ones include online courses, books, and blogs. Some of the most popular online courses on deep learning include Geoffrey Hinton’s Coursera course on neural networks (https://www.coursera.org/learn/neural-networks), Andrew Ng’s Coursera course on machine learning (https://www.coursera.org/learn/machine-learning), and Yoshua Bengio’s Udacity course on deep learning (https://www.udacity.com/course/deep-learning--ud730). Some popular books on deep learning include Deep Learning by Geoffrey Hinton, Neural Networks and Deep Learning by Michael Nielsen, and Deep Learning 101 by Yoshua Bengio. Finally, some popular blogs about deep learning include Andrej Karpathy’s blog (http://karpathy.github.io/), Geoffrey Hinton’s blog (http://www.cs.toronto.edu/~hinton/), and Yann LeCun’s blog (http:/
{"url":"https://reason.town/deep-learning-venture-capital/","timestamp":"2024-11-11T17:53:06Z","content_type":"text/html","content_length":"98145","record_id":"<urn:uuid:1ae1ab1b-6aa0-49e9-bab6-5fa155905442>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00374.warc.gz"}
Graphing slope-intercept form (article) | Khan Academy (2024) Learn how to graph lines whose equations are given in the slope-intercept form y=mx+b. Want to join the conversation? 6 years agoPosted 6 years ago. Direct link to 682060's post “How come if the negative ...” How come if the negative sign is next to the fraction it causes the rise to be negative but not the run 6 years agoPosted 6 years ago. Direct link to Kim Seidel's post “Think about the fraction ...” Think about the fraction as division... How do you get a negative number when dividing: a negative divided by a positive = a negative a positive divided by a negative = a negative As you can see, only one of the 2 numbers can be negative. Thus, for a slope like -4/5, you can apply the negative sign to the numerator which would tell you to go down 4 units, then right 5 units. Or, you can apply the negative to the denominator which would make you go up 4 units and left 5 units. If you make both numbers negative, then you are doing: negative divided by negative = positive. And, you would have a positive slope. Hope this helps. 6 years agoPosted 6 years ago. Direct link to wesley jones's post “i don't really get it why...” i don't really get it why in the last exercise the slope is -3/2 you ad plus 2 for the change in x but minus 3 for the change in y. 4 years agoPosted 4 years ago. Direct link to Neilshet's post “Because -3/2 is basically...” Because -3/2 is basically equal to minus 3 by PLUS 2 4 years agoPosted 4 years ago. Direct link to 20nlion's post “im having some trouble......” im having some trouble... anybody have some helpful tips hehehe 6 months agoPosted 6 months ago. Direct link to Zachary Heaton's post “place you first point on ...” place you first point on the y axis +/-. Then turn the slope into a fraction. Slope: The positive or negative sign determine if the line goes up or down from the y intercept. Based on that, going left to right, if it is a negative travel down the numerator, travel right the denominator. Y=27/3x+1 Place first point on y axis at positive 1. Then travel up 27, then go right 3. Simplified, you would go up 9, and right 1. 3 years agoPosted 3 years ago. Direct link to Devss's post “How do I graph a line if ...” How do I graph a line if the slope isn't provided? Here is what I mean: How do I graph it if I do not know the slope? Thanks! 3 years agoPosted 3 years ago. Direct link to Ani V's post “When a variable doesn't h...” When a variable doesn't have a variable, it's safe to assume the variable is 1. So, -x would be -1x or -1/1x. Hope that makes sense! 3 years agoPosted 3 years ago. Direct link to gjp100's post “I don't have a clue on ho...” I don't have a clue on how to do this 3 years agoPosted 3 years ago. Direct link to David Severin's post “If you have an equation i...” If you have an equation in slope-intercept form, you know both a point (the y intercept) and the slope, so it should be relatively easy to graph especially with a little practice. So if you have y=3x-4, the slope is 3=3/1, the y intercept is (0,-4). We can plot the point by starting at the origin and counting down 4 to get to (0,-4) and put a dot at this point. With a slope of rise (up) 3 over run (right) 1, you get to (0+1,-4+3) which is (1,-1), and a second time (1+1,-1+3) which is (2,2) and you have three points to draw a line through. One more example, if you have y=-3/4x + 2, you have a point (0,2) and a slope of -3/4 (rise down 3 right 4). This gives a second point of (0+4,2-3) or (4,-1) and (4+4,-1-3) or (8,-4) to draw a line. So start with the y intercept, and count the slope from that point. 2 years agoPosted 2 years ago. Direct link to Envy's post “Not to be that person but...” Not to be that person but like When am I reallyyyyyyyyyyyy going to use this in everyday life? a year agoPosted a year ago. Direct link to Logan.Lewis's post “my teacher says yes but h...” my teacher says yes but he is a goober so I don't know 6 years agoPosted 6 years ago. Direct link to wesley jones's post “i don't really get it why...” i don't really get it why in the last exercise the slope is -3/2 you ad plus 2 for the change in x but minus 3 for the change in y. a year agoPosted a year ago. Direct link to mukhopadhyayaveri14's post “I can't understand how to...” I can't understand how to graph an equation with a fraction y-intercept. Ex: y=2x-1/2 a year agoPosted a year ago. Direct link to Kim Seidel's post “Put a point at (0, -1/2)....” Put a point at (0, -1/2). It is half-way between 0 and -1. Since the slope is 2, you move up 2 units and right 1. -- Up 1 unit takes you to 1/2, up 2 units takes you to 1 1/2 (halfway between 1 and 2). -- Then, go right 1 unit. You should now be at the point 1 1/2, 1) Hope this helps. 3 years agoPosted 3 years ago. Direct link to 2024oshiroc's post “brah can sum1 help me I n...” brah can sum1 help me I no understand um, Mahaloz a year agoPosted a year ago. Direct link to bail380001's post “what if the question is y...” what if the question is y=x+4 a year agoPosted a year ago. Direct link to Kim Seidel's post “Remember, "x" is the same...” Remember, "x" is the same as "1x". So, the slope of the equation is 1 and the y-intercept is (0,4). Hope this helps.
{"url":"https://kbimagephoto.com/article/graphing-slope-intercept-form-article-khan-academy","timestamp":"2024-11-14T13:31:12Z","content_type":"text/html","content_length":"122823","record_id":"<urn:uuid:ef44be12-909f-4560-adcb-8ba5b5160897>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00438.warc.gz"}
(PHP 8 >= 8.2.0) Random\Engine::generate — Generates randomness public Random\Engine::generate Returns randomness and advances the algorithm’s state by one step. The randomness is represented by a binary string containing random bytes. This representation allows to unambiguously interpret the random bits generated by the algorithm, for example to accomodate different output sizes used by different algorithms. Algorithms that natively operate on integer values should return the integer in little-endian byte order, for example by leveraging the pack() function with the P format code. The high-level interface provided by the Random\Randomizer will interpret the returned random bytes as unsigned little-endian integers if a numeric representation is required. It is strongly recommended that each bit of the returned string is uniformly and independently selected, as some applications require randomness based on the bit-level to work correctly. For example linear congruential generators often generate lower-quality randomness for the less significant bits of the return integer value and thus would not be appropriate for applications that require bit-level randomness. This function has no parameters. Return Values A non-empty string containing random bytes. Note: The Random\Randomizer works with unsigned 64 bit integers internally. If the returned string contains more than 64 bit (8 byte) of randomness the exceeding bytes will be ignored. Other applications may be able to process more than 64 bit at once. • If generating randomness fails, a Random\RandomException should be thrown. Any other Exception thrown during generation should be caught and wrapped into a Random\RandomException. • If the returned string is empty, a Random\BrokenRandomEngineError will be thrown by the Random\Randomizer. • If the implemented algorithm is severely biased, a Random\BrokenRandomEngineError may be thrown by the Random\Randomizer to prevent infinite loops if rejection sampling is required to return unbiased results. Example #1 Random\Engine::generate() example * Implements a Linear Congruential Generator with modulus 65536, * multiplier 61 and increment 17 returning an 8 Bit integer. * Note: This engine is suitable for demonstration purposes only. * Linear Congruential Generators generally generate low * quality randomness and this specific implementation has * a very short 16 Bit period that is unsuitable for * almost any real-world use case. final class LinearCongruentialGenerator implements \Random\Engine private int $state; public function __construct(?int $seed = null) if ($seed === null) { $seed = random_int(0, 0xffff); $this->state = $seed & 0xffff; public function generate(): string $this->state = (61 * $this->state + 17) & 0xffff; return pack('C', $this->state >> 8); $r = new \Random\Randomizer( new LinearCongruentialGenerator(seed: 1) echo "Lucky Number: ", $r->getInt(0, 99), "\n"; The above example will output:
{"url":"https://www.php.net/manual/en/random-engine.generate.php","timestamp":"2024-11-06T15:44:29Z","content_type":"application/xhtml+xml","content_length":"34074","record_id":"<urn:uuid:9f916c88-5e7d-4b9c-9ba3-1a65bd53481b>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00709.warc.gz"}