https://byjus.com/questions/what-is-instantaneous-rate-of-reaction/
# What Is Instantaneous Rate Of Reaction?

There are two ways to express reaction rates:

• Average rate of reaction
• Instantaneous rate of reaction

The instantaneous rate of reaction is defined as the change in concentration taking place over an infinitesimally small interval of time. It is usually expressed either as a limit or as a derivative. The instantaneous reaction rate is given as:

$\lim_{\Delta t\rightarrow 0}\frac{\Delta [\text{concentration}]}{\Delta t}$

There are two ways to determine the instantaneous reaction rate:

• By using experimental data and finding the slope of the tangent on the concentration-time graph.
• By using experimental data to compute average rates over successively smaller intervals, which converge on the slope of the tangent.

Consider a concentration-time graph for a reaction forming product P. The instantaneous rate of reaction is obtained from the slope of the tangent.
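As a rough illustration of the tangent-slope idea, here is a small numerical sketch (Python with NumPy; the concentration values are made up): the instantaneous rate at each time point is approximated by the local slope of the concentration-time data.

```python
# Numerical sketch (made-up concentration data): the instantaneous rate is the
# slope of the concentration-time curve, approximated here by finite differences.
import numpy as np

t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])       # time, s
conc = np.array([1.00, 0.74, 0.55, 0.41, 0.30])   # [reactant], mol/L (hypothetical)

rate = -np.gradient(conc, t)     # rate of disappearance, mol/(L*s)
print(rate)                      # instantaneous rate at each time point
print(rate[2])                   # e.g. the rate at t = 20 s (tangent slope there)
```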
2020-10-30 10:56:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8032112717628479, "perplexity": 529.4344070622756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107910204.90/warc/CC-MAIN-20201030093118-20201030123118-00351.warc.gz"}
https://zbmath.org/authors/?q=ai%3Aliu.chengshi
## Liu, Chengshi

Author ID: liu.chengshi
Published as: Liu, Cheng-Shi; Liu, Chengshi; Liu, Cheng-shi; Liu, Cheng Shi
Documents Indexed: 78 Publications since 1969; 1 Contribution as Editor
Co-Authors: 2 Co-Authors with 3 Joint Publications; 95 Co-Co-Authors

### Co-Authors

23 single-authored; 5 Gladd, N. T.; 2 An, Z. G.; 2 Chan, V. S.; 2 Drake, J. F.; 2 Du, Xinghua; 2 Grebogi, Celso; 2 Wang, Chunyan; 1 Aamodt, R. E.; 1 Akbarian, A.; 1 Azfar, F.; 1 Ball, Alan A.; 1 Bondeson, Anders; 1 Bousson, N.; 1 Boyd, David A.; 1 Bürger, Raimund; 1 Cheng, R. T.; 1 Cohen, Bruce I.; 1 Davidson, Ronald C.; 1 Fernández, Juan Pablo; 1 Geldmacher, R. C.; 1 Giurgiu, G.; 1 Gómez-Ceballos, G.; 1 Goren, Yaron J.; 1 Guzdar, Parvez N.; 1 Hagger, Raffael; 1 Kreps, M.; 1 Kuhr, Tomas; 1 Lau, Y. Y.; 1 Lehmberg, R. H.; 1 Loth, Eric; 1 Marklin, G.; 1 Morlock, J.; 1 Myra, J. R.; 1 Nicholson, Dwight R.; 1 Nolte, Loren W.; 1 Oakes, L.; 1 Paulini, M.; 1 Pueschel, E.; 1 Raabe, Dierk; 1 Rosenbluth, Marshall N.; 1 Rowe, Glenn W. A.; 1 Sanuki, Heiji; 1 Schmidt, Andreas; 1 Shanthraj, Pratheek; 1 Shum, Heung-Yeung; 1 Ştefan, Viorel R. A.; 1 Stipcich, G.; 1 Sturgess, C. E. N.; 1 Sutomo, W.; 1 Svendsen, Bob; 1 Taskinen, Jari; 1 Virtanen, Jani A.; 1 Weiland, Jürgen; 1 Wendland, Wolfgang L.; 1 Yau, Shing-Tung; 1 Zhang, Junhua

### Serials

24 Physics of Fluids; 4 Chaos, Solitons and Fractals; 4 Communications in Theoretical Physics; 3 Acta Physica Sinica; 3 Communications in Nonlinear Science and Numerical Simulation; 2 Modern Physics Letters A; 2 Modern Physics Letters B; 2 Journal of Applied Mechanics; 2 Journal of Computational Physics; 2 Journal of Nanjing University. Mathematical Biquarterly; 1 AIAA Journal; 1 Computer Methods in Applied Mechanics and Engineering; 1 Computer Physics Communications; 1 International Journal of Mechanical Sciences; 1 International Journal of Multiphase Flow; 1 International Journal of Solids and Structures; 1 Journal of Mathematical Analysis and Applications; 1 Reports on Mathematical Physics; 1 Applied Mathematics and Computation; 1 IEEE Transactions on Communications; 1 Integral Equations and Operator Theory; 1 Applied Mathematics Letters; 1 Applied Mathematical Modelling; 1 International Journal of Computer Mathematics; 1 Neural, Parallel & Scientific Computations; 1 Pure and Applied Mathematics; 1 Engineering Analysis with Boundary Elements; 1 Mathematical Problems in Engineering; 1 Nonlinear Dynamics; 1 Far East Journal of Applied Mathematics; 1 Journal of Mathematical Study; 1 Journal of High Energy Physics; 1 Physica Scripta; 1 The ANZIAM Journal; 1 Far East Journal of Dynamical Systems; 1 Foundations of Physics; 1 Applied Mathematical and Computational Sciences

### Fields

30 Fluid mechanics (76-XX); 17 Partial differential equations (35-XX); 8 Numerical analysis (65-XX); 7 Mechanics of deformable solids (74-XX); 6 Quantum theory (81-XX); 5 Real functions (26-XX); 4 Ordinary differential equations (34-XX); 3 Dynamical systems and ergodic theory (37-XX); 3 Information and communication theory, circuits (94-XX); 2 Measure and integration (28-XX); 2 Operator theory (47-XX); 2 Probability theory and stochastic processes (60-XX); 2 Computer science (68-XX); 2 Statistical mechanics, structure of matter (82-XX); 2 Geophysics (86-XX); 1 General and overarching topics; collections (00-XX); 1 Combinatorics (05-XX); 1 Functional analysis (46-XX); 1 Calculus of variations and optimal control; optimization (49-XX); 1 Mechanics of particles and systems (70-XX); 1 Operations research, mathematical programming (90-XX); 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX)

### Citations contained in zbMATH Open

40 Publications have been cited 547 times in 124 Documents.

Preconditioned multigrid methods for unsteady incompressible flows. Zbl 0908.76064 Liu, C.; Zheng, X.; Sung, C. H. 1998
Integrability of nonlinear Hamiltonian systems by inverse scattering method. Zbl 1063.37559 Chen, H. H.; Lee, Y. C.; Liu, C. S. 1979
Applications of complete discrimination system for polynomial for classifications of traveling wave solutions to nonlinear differential equations. Zbl 1205.35262 Liu, Cheng-Shi 2010
Counterexamples on Jumarie's two basic fractional calculus formulae. Zbl 1331.26010 Liu, Cheng-shi 2015
Trial equation method and its applications to nonlinear evolution equations. Zbl 1202.35059 Liu, Cheng Shi 2005
The essence of the homotopy analysis method. Zbl 1194.34010 Liu, Cheng-Shi 2010
Representations and classification of traveling wave solutions to sinh-Gordon equation. Zbl 1392.35063 Liu, Cheng-Shi 2008
Classification of all single travelling wave solutions to Calogero-Degasperis-Focas equation. Zbl 1267.35065 Liu, Cheng-Shi 2007
Exponential function rational expansion method for nonlinear differential-difference equations. Zbl 1197.35243 Liu, Cheng-Shi 2009
Nonlinear wave and soliton propagation in media with arbitrary inhomogeneities. Zbl 0414.76016 Chen, H. H.; Liu, C. S. 1978
Existence and stability for mathematical models of sedimentation-consolidation processes in several space dimensions. Zbl 0995.35050 Bürger, R.; Liu, C.; Wendland, W. L. 2001
Using the trial equation method to obtain exact solutions for two kinds of KdV equations with variable coefficients. Zbl 1202.35236 Liu, Cheng Shi 2005
Solution of ODE $$u''+ p(u)(u')^2+q(u)=0$$ and applications to classifications of all single travelling wave solutions to some nonlinear mathematical physics equations. Zbl 1392.35064 Liu, Cheng-Shi 2008
The essence of the generalized Taylor theorem as the foundation of the homotopy analysis method. Zbl 1221.65207 Liu, Cheng-Shi 2011
Counterexamples on Jumarie's three basic fractional calculus formulae for non-differentiable continuous functions. Zbl 1390.26011 Liu, Cheng-shi 2018
High order finite difference and multigrid methods for spatially evolving instability in a planar channel. Zbl 0769.76038 Liu, C.; Liu, Z. 1993
Trial equation method based on symmetry and applications to nonlinear equations arising in mathematical physics. Zbl 1215.35142 Liu, Cheng-Shi 2011
Inventory competition in a dual-channel supply chain with delivery lead time consideration. Zbl 1443.90081 Yang, J. Q.; Zhang, X. M.; Fu, H. Y.; Liu, C. 2017
Comparison of a general series expansion method and the homotopy analysis method. Zbl 1195.65145 Liu, Cheng-Shi; Liu, Yang 2010
Canonical-like transformation method and exact solutions to a class of diffusion equations. Zbl 1198.81094 Liu, Cheng-Shi 2009
An improved interpolating element-free Galerkin method based on nonsingular weight functions. Zbl 1407.74091 Sun, F. X.; Liu, C.; Cheng, Y. M. 2014
The renormalization method based on the Taylor expansion and applications for asymptotic analysis. Zbl 1375.35082 Liu, Cheng-Shi 2017
Finite-element modelling of deformation and spread in slab rolling. Zbl 0604.73046 Liu, C.; Hartley, P.; Sturgess, C. E. N.; Rowe, G. W. 1987
Tilting instability of a cylindrical spheromak. Zbl 0469.76119 Bondeson, A.; Marklin, G.; An, Z. G.; Chen, H. H.; Lee, Y. C.; Liu, C. S. 1981
Self-modulation of ion Bernstein waves. Zbl 0447.76092 Myra, J. R.; Liu, C. S. 1980
On the local fractional derivative of everywhere non-differentiable continuous functions on intervals. Zbl 1473.26007 Liu, Cheng-shi 2017
New trial equation methods and exact solutions to some nonlinear mathematical physical equations. Zbl 1194.34005 Liu, Cheng-Shi 2010
Computational isotropic-workhardening rate-independent elastoplasticity. Zbl 1110.74592 Mukherjee, S.; Liu, C.-S. 2003
Stability of shear flow in a magnetized plasma. Zbl 0429.76028 Lau, Y. Y.; Liu, C. S. 1980
Convective cell formation and anomalous diffusion due to electromagnetic drift wave turbulence. Zbl 0471.76122 Weiland, J.; Sanuki, Heiji; Liu, C. S. 1981
How many first integrals imply integrability in infinite-dimensional Hamilton system. Zbl 1236.37038 Liu, Cheng-Shi 2011
Lattice gluon propagator in the Landau gauge: a study using anisotropic lattices. Zbl 1175.81202 Gong, M.; Chen, Y.; Meng, G.; Liu, C. 2009
Implication of entropy flow for the development of a system as suggested by the life cycle of a hurricane. Zbl 1195.86009 Liu, C.; Luo, Z.; Liu, Y.; Yu, H.; Zhou, X.; Wang, D.; Ma, L.; Xu, H. 2010
Nonsymmetric entropy and maximum nonsymmetric entropy principle. Zbl 1198.94065 Liu, Cheng-Shi 2009
Ornstein-Uhlenbeck process, Cauchy process, and Ornstein-Uhlenbeck-Cauchy process on a circle. Zbl 1308.60094 Liu, Cheng-Shi 2013
Application of a new algebraic dynamical algorithm to cylindrical nonlinear Schrödinger equation. Zbl 1376.37110 Wang, Chun-Yan; Liu, Cheng-Shi 2014
Hierarchical shape modeling for automatic face localization. Zbl 1039.68675 Liu, C.; Shum, H.-Y.; Zhang, C. 2002
Exactly solving some typical Riemann-Liouville fractional models by a general method of separation of variables. Zbl 1451.26010 Liu, Cheng-Shi 2020
High-order mixed weighted compact and non-compact scheme for shock and small length scale interaction. Zbl 1278.76049 Stipcich, G.; Fu, H.; Liu, C. 2013
Dynamic steady-state stress field in a web during slitting. Zbl 1111.74522 Liu, C.; Lu, H.; Huang, Y. 2005

### Cited by 209 Authors

8 Ekici, Mehmet; 7 Liu, Chengshi; 7 Sönmezoğlu, Abdullah; 5 Mirzazadeh, Mohammad; 5 Pandir, Yusuf; 5 Tarasov, Vasily E.; 4 Gepreel, Khaled A.; 4 Gurefe, Yusuf; 4 Xia, Yonghui; 3 Eslami, Mostafa; 3 Kai, Yue; 3 Misirli, Emine E.; 3 Zhou, Qin; 2 Abbasbandy, Saeid; 2 Bai, Yuzhen; 2 Baskonus, Haci Mehmet; 2 Biswas, Anjan; 2 Bulut, Hasan; 2 Cao, Damin; 2 Chenaghlou, Alireza; 2 Dai, Chaoqing; 2 Dong, Huanhe; 2 Guo, Baoyong; 2 Gupta, Rajesh Kumar; 2 Hayat, Tasawar; 2 Injrou, Sami; 2 Jabbari, Azizeh; 2 Kheiri, Hossein; 2 Machado, José António Tenreiro; 2 Mohammadi, Vahid; 2 Nofal, Taher A.; 2 Odibat, Zaid M.; 2 Ozyapici, Ali; 2 Raza, Nauman; 2 Shehata, Abdel Rahman M.; 2 Shivanian, Elyas; 2 Singla, Komal; 2 van Gorder, Robert Ashton; 2 Wang, Yueyue; 2 Wu, Gangzhou; 2 Yıldırım, Ahmet; 2 Zhang, Bei; 2 Zhu, Wenjing; 1 Afzal, Usman; 1 Aghaei, Sohrab; 1 Ahmed, Naveed; 1 Akram, Mohammad; 1 Akturk, Tolga; 1 Akuamoah, Saviour Worlanyo; 1 Al-Thobaiti, Ali A.; 1 Alderremy, A. A.; 1 Alhakim, Abdulaziz; 1 Alhothuali, Mohammed Shabab; 1 Alotaibi, Fawziah M.; 1 Al-saedi, Ahmed Eid Salem; 1 Aly, Shaban A. H.; 1 Ashraf, M. Bilal; 1 Atanackovic, Teodor M.; 1 Ayimah, John Coker; 1 Băleanu, Dumitru I.; 1 Baliarsingh, Pinakadhar; 1 Bashar, Md Habibul; 1 Bataineh, Ahmad Sami; 1 Bekir, Ahmet; 1 Belić, Milivoj R.; 1 Bilgehan, Bulent; 1 Bira, Bibekananda; 1 Bonsi, Prosper Obed; 1 Chen, Huaitang; 1 Chen, Shuangqing; 1 Chen, Yangquan; 1 Cheng, Yanjun; 1 Coley, Alan Albert; 1 Contreras-Reyes, Javier E.; 1 Cresson, Jacky; 1 Curato, Gianbiagio; 1 Dai, Dong-Yan; 1 Das, Subir K.; 1 de Oliveira, Edmundo Capelas; 1 Deeb, Nadia; 1 Deng, Xijun; 1 Deniz, Sinan; 1 Ding, Jian; 1 Du, Lijuan; 1 Du, Qing; 1 Duran, Durgun; 1 Dutta, Hemen; 1 El-Tawil, Magdy A.; 1 Fakhar, Kamran; 1 Fan, Huiling; 1 Fardi, Mojtaba; 1 Foyjonnesa; 1 Fu, Maozhun; 1 Gao, Wenjie; 1 García-Portugués, Eduardo; 1 Gatheral, Jim; 1 Ghasemi, Mehdi; 1 Ghosh, Pradyumna; 1 Guo, Lihong; 1 Hamelryck, Thomas; ...and 109 more Authors

### Cited in 60 Serials

15 Applied Mathematics and Computation; 11 Nonlinear Dynamics; 8 Communications in Nonlinear Science and Numerical Simulation; 5 Advances in Difference Equations; 4 Chaos, Solitons and Fractals; 4 International Journal of Numerical Methods for Heat & Fluid Flow; 4 Abstract and Applied Analysis; 4 Fractional Calculus & Applied Analysis; 3 Applied Mathematics Letters; 2 Journal of Mathematical Physics; 2 Mathematical Methods in the Applied Sciences; 2 Physica A; 2 Physics Letters. A; 2 Numerical Algorithms; 2 Mathematical Problems in Engineering; 2 Discrete Dynamics in Nature and Society; 2 Qualitative Theory of Dynamical Systems; 2 Nonlinear Analysis. Modelling and Control; 2 Journal of Applied Mathematics; 2 Advances in Mathematical Physics; 2 International Journal of Differential Equations; 2 Analysis and Mathematical Physics; 2 Journal of Applied Analysis and Computation; 2 International Journal of Applied and Computational Mathematics; 1 General Relativity and Gravitation; 1 Journal of Computational Physics; 1 Reports on Mathematical Physics; 1 Calcolo; 1 Journal of Computational and Applied Mathematics; 1 Mathematics and Computers in Simulation; 1 Quaestiones Mathematicae; 1 Applied Mathematics and Mechanics. (English Edition); 1 Mathematical and Computer Modelling; 1 Japan Journal of Industrial and Applied Mathematics; 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering; 1 Russian Journal of Mathematical Physics; 1 Computational and Applied Mathematics; 1 Turkish Journal of Mathematics; 1 Fractals; 1 Complexity; 1 Journal of Discrete Mathematical Sciences & Cryptography; 1 International Journal of Nonlinear Sciences and Numerical Simulation; 1 Nonlinear Analysis. Real World Applications; 1 Iranian Journal of Science and Technology. Transaction A: Science; 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series; 1 Entropy; 1 International Journal of Geometric Methods in Modern Physics; 1 The Australian Journal of Mathematical Analysis and Applications; 1 Foundations of Physics; 1 Waves in Random and Complex Media; 1 Inverse Problems in Science and Engineering; 1 Advances in High Energy Physics; 1 International Journal of Biomathematics; 1 Communications in Theoretical Physics; 1 Advances in Differential Equations and Control Processes; 1 Statistics and Computing; 1 SeMA Journal; 1 ISRN Computational Mathematics; 1 Fractional Differential Calculus; 1 Journal of Mathematics

### Cited in 32 Fields

83 Partial differential equations (35-XX); 31 Ordinary differential equations (34-XX); 18 Numerical analysis (65-XX); 16 Real functions (26-XX); 8 Fluid mechanics (76-XX); 6 Probability theory and stochastic processes (60-XX); 6 Quantum theory (81-XX); 5 Dynamical systems and ergodic theory (37-XX); 4 Special functions (33-XX); 4 Difference and functional equations (39-XX); 4 Biology and other natural sciences (92-XX); 3 Optics, electromagnetic theory (78-XX); 3 Statistical mechanics, structure of matter (82-XX); 3 Game theory, economics, finance, and other social and behavioral sciences (91-XX); 2 Approximations and expansions (41-XX); 2 Harmonic analysis on Euclidean spaces (42-XX); 2 Mechanics of particles and systems (70-XX); 2 Mechanics of deformable solids (74-XX); 2 Classical thermodynamics, heat transfer (80-XX); 2 Relativity and gravitational theory (83-XX); 1 History and biography (01-XX); 1 Linear and multilinear algebra; matrix theory (15-XX); 1 Topological groups, Lie groups (22-XX); 1 Functions of a complex variable (30-XX); 1 Integral transforms, operational calculus (44-XX); 1 Integral equations (45-XX); 1 Calculus of variations and optimal control; optimization (49-XX); 1 Differential geometry (53-XX); 1 Statistics (62-XX); 1 Computer science (68-XX); 1 Geophysics (86-XX); 1 Information and communication theory, circuits (94-XX)
2022-06-25 22:59:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6165146827697754, "perplexity": 7477.956471261421}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036176.7/warc/CC-MAIN-20220625220543-20220626010543-00657.warc.gz"}
https://datascience.stackexchange.com/questions/25099/how-can-i-calculate-kernel-matrix-k-for-clustering-based-kernel-principal-compon
# How can I calculate Kernel matrix K for clustering based Kernel Principal Component Analysis?

In practice, a large data set leads to a large K, and storing K may become a problem. One way to deal with this is to perform clustering on the dataset and populate the kernel with the means of those clusters, for speed and reduced storage. How can I calculate the kernel using clustering?

Foremost, we must understand what clustering is. It is an unsupervised algorithm, applicable to scenarios where the target class is unknown, so essentially it is a data preprocessing algorithm. Continuing further, cluster analysis helps in detecting patterns in data. To detect patterns, one must ensure the sample consists of statistically significant variables. To find such significant variables, we need to perform various data preprocessing tasks: detection and treatment of outliers, missing values, correlated variables, and excess dimensionality. After performing these steps, the noise in the data will have been removed, revealing the signal. Alternatively, one can perform principal component analysis (PCA) to detect the signal. This signal can then aid in determining the true patterns, or clusters, in the data.

Now, coming to the second part of the question: you can use the kernlab package in R to calculate the kernel. Perhaps the related posts linked here can help you further.
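For illustration, here is a hedged sketch in Python with scikit-learn of the cluster-based idea from the question (the thread itself points to R's kernlab): cluster the data with k-means, then build the much smaller kernel matrix on the cluster means. All sizes and parameter values here are illustrative.

```python
# Sketch: approximate a large kernel matrix by building it on k-means
# cluster centers instead of all N points (assumes scikit-learn).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))       # large synthetic dataset

k = 100                                # number of clusters << number of points
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
centers = km.cluster_centers_

K_small = rbf_kernel(centers, gamma=0.5)   # k x k instead of 10000 x 10000
print(K_small.shape)                       # (100, 100)
```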
2020-09-25 03:30:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5394355058670044, "perplexity": 600.089288250327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400221980.49/warc/CC-MAIN-20200925021647-20200925051647-00254.warc.gz"}
https://math.stackexchange.com/questions/3121503/sample-space-of-a-fair-coin
# Sample Space of a Fair Coin

A coin is tossed until, for the first time, the same result appears twice in succession. Define a sample space for this experiment.

The solution in the back of the book is: $$\{x_1x_2...x_n: n \ge 1, x_i \in \{H, T\}; x_i \ne x_{i+1}, 1 \le i \le n-2; x_{n-1}= x_n\}$$

I don't have a clue how this result was achieved. Besides knowing that the sample space of a single toss of a fair coin is $$\{H, T\}$$, I am completely lost.

• Think about a sequence of flips that would result in a success here, e.g. HTHTHH or HTT. What do all of these have in common? – TSF Feb 21 '19 at 16:19
• $n\geq1$ in it should be changed into $n\geq2$. – drhab Feb 21 '19 at 16:28

It is just a listing of the possible outcomes.

$$HH$$ and $$TT$$ if $$2$$ tosses are needed. This with $$P(\{HH\})=P(\{TT\})=\frac14$$.

$$THH$$ and $$HTT$$ if $$3$$ tosses are needed. This with $$P(\{THH\})=P(\{HTT\})=\frac18$$.

$$HTHH$$ and $$THTT$$ if $$4$$ tosses are needed. This with $$P(\{HTHH\})=P(\{THTT\})=\frac1{16}$$.

Et cetera. So $$\Omega=\{HH,TT,THH,HTT,HTHH,THTT,\cdots\}$$ as outcome-space, and the $$\sigma$$-algebra on it is $$\wp(\Omega)$$.

It's a listing of the first $$n$$ coin flips, given as $$x_i$$, with the restriction that the only consecutive tosses with the same outcome are $$x_{n-1}$$ and $$x_n$$.
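As a sanity check on these probabilities, here is a quick simulation sketch (Python; not part of the original answer):

```python
# Quick simulation of the experiment: flip until the same result appears
# twice in succession, then record the whole sequence.
import random
from collections import Counter

def run():
    seq = [random.choice("HT"), random.choice("HT")]
    while seq[-1] != seq[-2]:
        seq.append(random.choice("HT"))
    return "".join(seq)

counts = Counter(run() for _ in range(100_000))
for outcome in ("HH", "TT", "THH", "HTT", "HTHH", "THTT"):
    print(outcome, counts[outcome] / 100_000)   # ~0.25, 0.25, 0.125, 0.125, ...
```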
2020-02-19 04:27:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7340595722198486, "perplexity": 286.88722827378183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144027.33/warc/CC-MAIN-20200219030731-20200219060731-00293.warc.gz"}
https://stats.stackexchange.com/questions/305505/using-micro-average-vs-macro-average-vs-normal-versions-of-precision-and-recal/305507
# Using micro average vs. macro average vs. normal versions of precision and recall for a binary classifier

I have a logistic regression recommender model built on my data where I tried to predict one of two outcomes for each row. Let's call them success and fail. I'm using the cross_val_score function of SKLearn to do so.

I was planning on just using precision and recall as performance measures, but saw the "micro" and "macro" options and decided to read up more and try those as well, but I'm confused by the results. For reference, I looked at this blog post, this SO question, and this paper. While I think I understand the general concept, I'm confused by the results I'm getting, and which one is ideal for my use case. One important piece of background is that I have fewer success cases than fail cases: success makes up 25% of my dataset.

My understanding is that micro is a value that's closer to the performance of the model on the larger class (in this case, fail), while macro is for the smaller one (success). This makes sense. I'm trying to predict successes in this case, so the latter should be the better choice. Here are the parts I don't get:

a) How do each of these compare to just regular precision / recall, in terms of representing the larger or smaller classes? Which of the three options is optimal for a case like this where I want to surface the smaller class but not the large one?

b) In some folds of my CV, I get 0 as the regular precision value, but none are 0 for micro and macro. How is that possible? If I get some true positives in the individual classes, shouldn't I always have some in the overall dataset as well? In general, the regular precision and recall end up significantly lower than both types of averages.

• Are you building a binary classifier (i.e. only two output classes)? Because then you can just use the regular precision and recall values. I like to create a visualization like this for binary classifiers: scikit-yb.org/en/latest/api/classifier/threshold.html – Dan Oct 26 '18 at 10:21

## 1 Answer

These micro and macro averaging techniques are typically made for situations with more than 2 classes. With two classes you would only compute precision, recall, F1 or whatever you are interested in (you tell us! We cannot know), almost always with regard to your minority class (since it almost always is the one you're interested in).

If these averaging techniques use arithmetic means (on the micro or macro level), the other non-0 value will make the average larger than 0, as in $(0 + 5)/2=2.5$. If they used geometric averages (which is very uncommon), then $\sqrt{0\times5}=0$.
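To make the options concrete, here is a small sketch with scikit-learn's precision_score on made-up labels; note that for binary single-label data, micro-averaged precision reduces to overall accuracy.

```python
# Toy illustration (labels are made up): how 'binary', 'macro' and 'micro'
# averaging differ for a two-class problem.
from sklearn.metrics import precision_score

y_true = [1, 0, 0, 0, 1, 0, 0, 1]   # 1 = success (minority class)
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]

# 'binary' scores only the positive class, which is usually what you want
# when the goal is to surface the minority class
print(precision_score(y_true, y_pred, average="binary"))   # 2/3

# 'macro' averages the per-class precisions equally
print(precision_score(y_true, y_pred, average="macro"))    # (2/3 + 4/5) / 2

# 'micro' pools all decisions; here it equals the overall accuracy
print(precision_score(y_true, y_pred, average="micro"))    # 6/8
```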
2019-10-16 12:43:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7965669631958008, "perplexity": 615.036655332715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668569.22/warc/CC-MAIN-20191016113040-20191016140540-00123.warc.gz"}
https://www.gamedev.net/forums/topic/394313-scaling-and-hierarchy/
# Scaling and hierarchy

## Recommended Posts

If I have a sibling/child hierarchy of meshes where every mesh has translation, rotation and scaling attributes, must the scaling transformation be applied to the children or not? For now I combine translation, scaling and rotation into one transform matrix so I do not need to recompute the matrix every time. The Update() method is something like this:

    // not real code, just a simplified version
    void Mesh::Update(D3DXMATRIX *World)
    {
        // combine the local transform with the parent's world transform
        CombinedTransform = Transform * (*World);

        if (FirstChild != NULL)
            FirstChild->Update(&CombinedTransform);  // children inherit the combined transform

        if (FirstSibling != NULL)
            FirstSibling->Update(World);             // siblings share the parent's transform
    }

The problem occurs when I have scaling on some mesh that has a FirstChild. If scaling applies to the children then it is OK; if not...? If scaling should not apply, how do I write Update()?

Thanks, Zaharije Pasalic
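For illustration only (not from the thread): a minimal NumPy sketch, in 2D and in column-vector convention (so parent @ child plays the role of Transform * World above), of the two composition choices: passing down the full combined matrix, so children inherit scale, versus passing down a scale-free parent matrix. All names here are made up.

```python
# Sketch: composing parent/child transforms with and without inheriting
# the parent's scale (2D homogeneous matrices for brevity).
import numpy as np

def trs(tx, ty, angle, scale):
    """Build a 3x3 matrix: scale, then rotate, then translate."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c * scale, -s * scale, tx],
                     [s * scale,  c * scale, ty],
                     [0.0,        0.0,       1.0]])

parent = trs(5.0, 0.0, 0.0, 2.0)          # parent scaled by 2
child_local = trs(1.0, 0.0, 0.0, 1.0)     # child offset by 1 in parent space

# Variant 1: one combined matrix, children inherit the parent's scale
child_world = parent @ child_local

# Variant 2: strip the scale before passing the matrix down the hierarchy
parent_no_scale = trs(5.0, 0.0, 0.0, 1.0)
child_world_ns = parent_no_scale @ child_local

origin = np.array([0.0, 0.0, 1.0])
print(child_world @ origin)      # [7. 0. 1.]: the child's offset was scaled too
print(child_world_ns @ origin)   # [6. 0. 1.]: offset unaffected by parent scale
```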
2017-10-23 21:01:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4132515490055084, "perplexity": 7924.056100998545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826642.70/warc/CC-MAIN-20171023202120-20171023222120-00299.warc.gz"}
https://cs.stackexchange.com/questions/112499/hamiltonian-non-intersecting-path-in-plane
# Hamiltonian non-intersecting path in the plane

$$N$$ points are located in the 2D plane. Some pairs of the points are connected by line segments. What is the complexity of deciding the existence of a Hamiltonian non-intersecting path? What if we consider special classes of graphs?

• Does, e.g., this answer your question? – dkaeae Aug 6 at 15:29
• Thanks for your comment. Yes, it shows that finding a Hamiltonian path in a planar graph is NP-complete. The next question would be: what if the points are inside a polygon, and two points are connected to each other exactly when the line segment between them lies completely inside the polygon? – inaderi Aug 7 at 6:05
2019-10-22 13:40:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7527979612350464, "perplexity": 331.22455712348057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822098.86/warc/CC-MAIN-20191022132135-20191022155635-00081.warc.gz"}
https://rpg.stackexchange.com/questions/70802/how-can-i-test-whether-a-die-is-fair
# How can I test whether a die is fair?

I have a d20 that seems to be, well, remarkably lucky.* How can I determine whether it's really just luck, or whether the die is in fact unfairly biased?

*) Well, I don't, really. This is actually a spin-off from this question, which is specifically about determining whether a die is loaded. This one is intended as a more general question about how to detect any kind of bias in dice, since we apparently don't have one yet. I've posted my own answer below, but feel free to add more.

• There are numerous related threads on stats.SE e.g. this search. Some of the (fine) points made in the answer here are echoed in this answer for example Nov 8, 2015 at 1:29
• There's some relevant plots in this answer which may also be of some interest. Nov 8, 2015 at 1:56
• This question may be somewhat of a duplicate to this question. Nov 13, 2015 at 3:38
• @Sandwich: I spun it off from that question, because that one's asking about a specific technique, and also specifically about detecting whether a die is loaded (i.e. having a misplaced center of mass) rather than about all kinds of bias in general. Nov 13, 2015 at 3:49

## What kinds of bias can dice have?

Lots of kinds, actually. Perhaps the most common accidentally occurring types of bias are:

• "Shaved" dice, which are not quite symmetrical, but slightly wider or narrower on one axis than on others. A shaved d6 with, say, the 1–6 axis longer than the others will roll those sides less often, making it "less swingy" than a fair d6 should be (but leaving the average roll unchanged). The name comes from cheaters actually shaving or sanding down dice to flatten them, but cheap dice may have this kind of bias simply due to being poorly made. Other similar biases due to asymmetric shape are also possible, especially in dice with many sides.
• Uneven (concave / convex) faces may be more or less likely to "stick" to the table, favoring or disfavoring the opposite side. The precise effect may depend on the table material, and on how the dice are rolled. Again, cheap plastic dice can easily have this kind of bias, e.g. due to the plastic shrinking unevenly as it cools after molding. Uneven edges can also create bias, particularly if the edge is asymmetric (i.e. sharper on one side).
• Actual "loaded" dice, i.e. dice with a center of gravity offset from their geometric center, may occur accidentally due to either bubbles trapped inside the plastic or, more commonly, simply due to the embossed numbers on the sides of the die affecting the balance. In fact, almost all dice, with the exception of high-quality casino dice deliberately balanced to avoid this kind of bias, will likely have it to some small extent.

## How do I find out whether a die is fair?

Obviously, you need to roll it. Preferably, you should do this the same way, on the same kind of table, as you'd use in a game; while truly fair dice should be fair on any surface, some types of bias may show up only on some surfaces. Keep rolling the same die, and count how many times each side comes up. If you've got a friend to help you, you can have them tally up the rolls as you call them out, so you don't have to switch between rolling and marking the results all the time. Once your arm gets tired of rolling dice, switch roles.

### How many times do you need to roll?

For the type of statistical test described below (Pearson's $\chi^2$ test), a common rule of thumb is to have at least five times as many rolls as there are sides on the die.
Thus, for a d20, you need at least 100 rolls for the test to be valid. (There are other statistical tests that can be used with fewer rolls, but they require slightly more complicated math.) Obviously, more rolls won't hurt if you have the patience for it, and the more rolls you tally up, the better the test will detect subtle biases.

(Note: If you've, say, bought a large bunch of cheap d6's for rolling large dice pools, it can be OK to just roll them all together and tally up the number of times each face comes up. Sure, this way you won't detect if one of the dice is, say, slightly more likely to roll a 6, while another one is slightly less likely to roll it, but you'll still detect any systematic biases due to, say, all the dice being unsymmetrical the same way.)

### OK, I've rolled the die 100 times. Now what?

Now it's time to do some math.

1. First, look up the tally of how many times each side came up. Below, I'll call the number of times side 1 came up $n_1$, the number of times side 2 came up $n_2$, and so on up to $n_{20}$ for a d20. I'll also use $N$ to denote the total number of rolls, i.e. $N = n_1 + n_2 + \dots + n_{20}$.
2. Next, calculate the expected number of times each side should have come up for a fair die, i.e. the total number of rolls divided by the number of sides.1 (It's OK for this to be a fractional number.) Call this number $n_{\exp}$. For example, for $N = 100$ rolls of a d20, $n_{\exp} = \frac{N}{20} = 5$.
3. Now, for each side $k$ (from 1 to 20, for a d20), calculate the difference between the actual and the expected count of times the side came up, square it (i.e. multiply it by itself), and divide it by the expected count. That is, calculate: $$\chi^2_k = \frac{ \left( n_k - n_{\exp} \right) ^2}{n_{\exp}}$$ for each possible number $k$ of your die (i.e. from $k = 1$ to $k = 20$, for a d20).2
4. Finally, add up all the results from the previous step to obtain the test statistic $$\chi^2 = \chi^2_1 + \chi^2_2 + \dots + \chi^2_{20} = \sum_{k=1}^{20} \frac{ \left( n_k - n_{\exp} \right) ^2}{n_{\exp}}.$$

### OK, I've got this $\chi^2$ figure. What do I do with it?

The $\chi^2$ value you've calculated is a measure of how biased the die appears to be, based on the numbers you've rolled with it. But what counts as a reasonable value of $\chi^2$, and where's the threshold at which you should start getting suspicious? For that, you either need to do some more math, or, more easily, just look it up in a table.

To use the table, you first need to know how many "degrees of freedom" our test has. That's simpler than it sounds: for a $d$-sided die, the test has $\nu = d - 1$ degrees of freedom (i.e. $\nu = 19$ for a d20).3 This will tell you which row in the table to look at. In the linked table, row 19 looks like this:

             Probability less than the critical value
    ν        0.90     0.95     0.975    0.99     0.999
    ----------------------------------------------------------
    19       27.204   30.144   32.852   36.191   43.820

What does this mean? Well, it means that, if the die is actually fair, then $\chi^2$ will be less than 27.204 in 90% of all tests, less than 30.144 in 95% of all tests, and so on. Only once in a thousand tests will a fair d20 actually produce a $\chi^2$ value higher than 43.820.
Thus, by comparing $\chi^2$ to the critical values in the table, you can estimate how likely it is to be biased.4 If $\chi^2 \le 27$, the die probably has no bias, or at least you haven't counted enough rolls to detect it; around $\chi^2 \ge 30$ or so, you might want to be concerned, and maybe set the die aside for further testing; if $\chi^2 \ge 40$, you can declare the die biased with pretty high confidence.

Note that the chi-squared test does not say anything about how the die is biased: a die that, say, rolls 10 more often and 11 less often than it should is just as likely to fail the test as one that rolls 20 more often and 1 less often. Of course, if the chi-squared test does detect bias, you can just look at the tally counts yourself to see which ones occur more often than you'd expect.

Ps. For convenience, here are the table rows for a few other commonly used types of dice:5

    Upper-tail critical values of χ² distribution with ν degrees of freedom (source: NIST)

             Probability less than the critical value
    ν        0.90     0.95     0.975    0.99     0.999
    ----------------------------------------------------------
    1 (d2)   2.706    3.841    5.024    6.635    10.828
    2 (d3)   4.605    5.991    7.378    9.210    13.816
    3 (d4)   6.251    7.815    9.348    11.345   16.266
    5 (d6)   9.236    11.070   12.833   15.086   20.515
    7 (d8)   12.017   14.067   16.013   18.475   24.322
    9 (d10)  14.684   16.919   19.023   21.666   27.877
    11 (d12) 17.275   19.675   21.920   24.725   31.264
    19 (d20) 27.204   30.144   32.852   36.191   43.820

Footnotes:

1) For an ordinary fair die, the expected number of times each side comes up is obviously the same, but we could use the chi-squared test also for dice that we don't expect to roll each number equally often (like, say, dice where the same number appears several times). In that case, we'd just have a different $n_{\exp}$ for each possible roll of the die.

2) I'm not aware of a conventional symbol for these intermediate values, but $\chi^2_k$ seems like a reasonable choice, given both that they add up to the test statistic $\chi^2$, and that each of them is the square of an (approximately) normally distributed random variable, and thus is itself $\chi^2$-distributed. Your favorite statistics text, if it bothers to give them a symbol at all, may use something else.

3) The number of degrees of freedom is essentially the number of values in our measurements that can vary independently. Here, we're measuring 20 values, $n_1$ to $n_{20}$, but they're not quite independent: we know that $n_1 + n_2 + \dots + n_{20} = N$, so once we know 19 of the values, we can calculate the last one based on the other 19. Hence, 19 degrees of freedom.

4) Note that the numbers in the table header give the probability that a perfectly fair die will produce a $\chi^2$ value below the critical value in that column. This is not the same as the probability that a die with $\chi^2$ less than the critical value is fair, or that a die with $\chi^2$ higher than the critical value is biased; to calculate those probabilities, you'd first have to know the a priori frequency of bias among your dice. Indeed, in some sense, these questions are not even meaningful to ask: truly fair dice only exist in the platonic realm of ideas, and every real die almost certainly has some bias, if you measure it carefully enough. Thus, in a sense, any claim that a given die is fair is false; all we can really say is that it's close enough to fair that we can't tell the difference.

5) A "d2" is, of course, a coin. Use the "d3" column ($\nu = 2$) e.g. for Fudge dice.
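For convenience, the whole procedure above can be scripted. A minimal sketch in Python using scipy.stats (the tally counts here are hypothetical):

```python
# A minimal scripted version of the test described above.
import numpy as np
from scipy.stats import chisquare, chi2

tallies = np.array([3, 5, 4, 6, 5, 7, 4, 5, 6, 4,
                    5, 6, 3, 5, 4, 6, 5, 7, 4, 6])   # 100 rolls of a d20
N, sides = tallies.sum(), len(tallies)
n_exp = N / sides                      # expected count per side for a fair die

# Manual computation, exactly as in steps 3 and 4
chi2_stat = ((tallies - n_exp) ** 2 / n_exp).sum()

# scipy.stats.chisquare assumes a uniform expectation by default
stat, p_value = chisquare(tallies)
assert np.isclose(stat, chi2_stat)

# Critical value for nu = 19 degrees of freedom at the 95% level (~30.144)
critical_95 = chi2.ppf(0.95, df=sides - 1)
print(f"chi^2 = {chi2_stat:.3f}, p = {p_value:.3f}, suspicious: {chi2_stat > critical_95}")
```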
Addendum: So, just how many rolls do we need to actually detect biased dice? Well, I did some quick simulation tests, using an extremely biased virtual d20 that never rolls a 1, and rolls 20 twice as often as it should. Using the different $\chi^2$ thresholds given in the table above, and various numbers of test rolls, from the minimum of 100 up to 400, here's the fraction of runs on which the $\chi^2$ value exceeded the threshold:

    Probability of
    passing a fair die | 0.90    0.95    0.975   0.99    0.999
    Rolls              +-----------------------------------------------
                       | Probability of detecting the bias
    100                | 0.50    0.37    0.26    0.17    0.054
    200                | 0.89    0.80    0.69    0.55    0.28
    300                | 0.9932  0.972   0.938   0.87    0.62
    400                | 0.9999  0.9992  0.9961  0.985   0.88

In each case, the probability of falsely detecting bias in a fair die is essentially independent of the number of rolls; this is a deliberate feature of the $\chi^2$ test. The probability of correctly detecting the biased die, however, increases significantly with more rolls.

From the table above, we can see that 100 rolls (the minimum number for the $\chi^2$ test to even be valid) is way too little to detect even such an egregious bias: even if we set the $\chi^2$ threshold so low that we end up rejecting 10% of all fair dice, we still catch only about 50% of the biased ones, and it only gets worse as we increase the threshold. On the other hand, with 400 rolls, things look a lot better: setting the threshold at $\chi^2 \le 36.191$, 99% of all fair dice will pass this test, while about 98.5% of all the biased dice in this test will fail it. (Of course, we're still talking about very strongly biased dice here; more subtle bias will be harder to detect.)

OK, but surely a die that never rolls 1 should be easy to spot? After all, with a fair d20, the probability of rolling 100 times and never seeing a 1 is only $\left(\frac{19}{20}\right)^{100} \approx 0.006$. Shouldn't that be plenty of reason to consider the die biased? What gives?

Well, one reason why the $\chi^2$ test seems so ineffective here is that it's looking for any kind of bias. Sure, if we rolled a d20 a hundred times, and never saw a 1, we might be justifiably suspicious. But what if we never saw a 7, or a 15, or any of the other possible rolls? Would those also be reason to call the die biased? Well, it turns out that, even though the probability of never rolling a 1 in 100 rolls on a d20 is only about 0.6%, the probability of never rolling some number is about 20 times that, or about 12%. So if we rejected all 20-sided dice that never rolled some number in 100 rolls, we'd end up rejecting about 12% of all fair dice, too. And, of course, there also are many other kinds of possible biases that the $\chi^2$ test will also detect; thus, with just 100 rolls, it's actually quite likely to detect some bias even in a d20 that's perfectly fair, and so we need to set the threshold value quite high to compensate.

If we were only interested in bias affecting the most extreme rolls (1 and 20), we could modify the $\chi^2$ test to e.g. lump all the rolls between 2 and 19 into a single category, with $n_{\exp} = \frac{18 \times N}{20}$, and use the $\chi^2$ threshold for two degrees of freedom (since we now have only three possible outcomes: 1, 20, or something else). Such a modified $\chi^2$ test is a lot better at detecting this particular form of bias, with more than half of the biased dice failing the test at the 1% false positive rate even with just 100 rolls, and over 99.99% of them failing it with 200 rolls.
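Here is a rough reconstruction (a Python/SciPy sketch of my own, not the author's original simulation code) of that modified three-category test applied to the same biased d20:

```python
# Sketch: how often does the modified three-category test catch the biased d20?
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(42)
p_biased = np.full(20, 1 / 20)
p_biased[0], p_biased[19] = 0.0, 2 / 20     # never rolls 1, rolls 20 twice as often

def modified_test_detects(rolls, critical=9.210):   # nu = 2 cutoff at the 0.99 level
    ones = np.count_nonzero(rolls == 1)
    twenties = np.count_nonzero(rolls == 20)
    n = len(rolls)
    f_obs = [ones, twenties, n - ones - twenties]
    f_exp = [n / 20, n / 20, 18 * n / 20]
    stat, _ = chisquare(f_obs, f_exp)
    return stat > critical

trials = 2000
hits = sum(modified_test_detects(rng.choice(np.arange(1, 21), size=100, p=p_biased))
           for _ in range(trials))
print(hits / trials)   # comfortably above one half, matching the claim above
```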
Of course, the price we pay for that extra discriminatory power is that this modified test will be completely oblivious to most other kinds of bias: for example, it will happily pass a die that never rolls a 2, and that rolls 19 twice as often as it should.

• Thanks for your answer. I found this really interesting and would not have learned this if it were not for you. Nov 7, 2015 at 4:38
• @IlmariKaronen do you have any bibliographical reference for "a common rule of thumb is to have at least five times as many rolls as there are sides on the die"? Jan 25, 2017 at 12:59
• @AlessandroJacopson Any reference on the assumptions of a chi-square goodness-of-fit test would suffice. The common rule is that each expected count should be at least 5, and rolling the die five times for each side on the die makes each expected count exactly five. Dec 20, 2017 at 18:33
• this is a fantastic and super helpful answer; we're using it to test the fairness of four sets of d4, d6, d12, and d20 for my son's science project. May 12, 2019 at 9:52
• Would this be a one or two tailed test? Nov 14, 2020 at 10:21

A quick way to see if there are irregularities in the weight is the golf ball test.

1. Float the die in a glass of water, and wait for the water to settle.
2. Note which face is up.
3. Agitate the water, causing the die to tumble.
4. Wait for the water to still again.
5. If the same face is up, the dice isn't balanced.

To detect irregularities of form, you'll probably need a micrometer.

• That is indeed a useful and very sensitive test, as described in the other question I linked to above. However, there are types of bias that it will not detect, such as a d6 being wider or narrower along one axis than along the others. Nov 13, 2015 at 3:51
• This answer fails basic statistical rigor. As written currently, judgment by a single data point is suggested. The chance for a fair die to fail this test is unreasonably high. Nov 13, 2015 at 13:27
• This is the PRACTICAL answer that can actually be used at the table. Nov 13, 2015 at 19:50
• Doesn't work for dice denser than saturated salt water. Oct 3, 2018 at 15:05
• "If the same face is up, the dice isn't balanced." -- huh? a d6 has a 1-in-6 chance of showing the same face again. A test like this may be more sensitive to imbalance, and so will reveal bias in fewer trials. But you still need some significant number of trials. Comparing just two trials will accuse a fair die of being biased 1/6th of the time. Dec 5, 2019 at 3:04
2022-08-08 19:42:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6109433174133301, "perplexity": 493.36476517998017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570871.10/warc/CC-MAIN-20220808183040-20220808213040-00047.warc.gz"}
https://docs.deepmodeling.com/projects/deepmd/en/master/model/train-hybrid.html
# 3.6. Descriptor "hybrid"

This descriptor hybridizes multiple descriptors to form a new one. For example, given a list of descriptors denoted by $$\mathcal D_1$$, $$\mathcal D_2$$, …, $$\mathcal D_N$$, the hybrid descriptor is the concatenation of the list, i.e. $$\mathcal D = (\mathcal D_1, \mathcal D_2, \cdots, \mathcal D_N)$$.

To use the descriptor in DeePMD-kit, one first sets the type to hybrid, then provides the definitions of the descriptors as the items of the list:

    "descriptor": {
        "type": "hybrid",
        "list": [
            {
                "type": "se_e2_a",
                ...
            },
            {
                "type": "se_e2_r",
                ...
            }
        ]
    },

A complete training input script for this example can be found in the directory $deepmd_source_dir/examples/water/hybrid/input.json
2022-10-02 06:22:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6259853839874268, "perplexity": 2999.2851030940055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00128.warc.gz"}
https://discourse.julialang.org/t/best-generic-method-to-solve-homogeneous-system/43261
# Best generic method to solve homogeneous system

What is the best way to solve the homogeneous system $A\vec{x}=\vec{0}$ for either a dense or sparse array (or even a GPU array)? For a dense A, nullspace(A) works, and for a sparse array, eigs(A, nev=1, which=:SM) appears to work fine. Is there a better, generic method somewhere? I looked at IterativeSolvers with $b=\vec{0}$, but that doesn't seem to return the right answer. Note that until now I had been using NLsolve, but that seems quite wasteful considering that the lu factorization is needlessly recomputed at each step.

One simple answer is to compute and store the lu factorization.

With NLsolve? How would that work?
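For comparison, the two suggestions above have direct analogues outside Julia; a minimal cross-language sketch in Python/SciPy (illustrative only, since the thread itself is about Julia):

```python
# The null vector of a rank-deficient matrix, dense and via the SVD.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])          # rank 2, so a one-dimensional null space

x = null_space(A)                     # analogous to Julia's nullspace(A)
print(np.allclose(A @ x, 0))          # True

# Equivalently, take the right singular vector of the smallest singular value;
# for a large sparse A one would use an iterative routine instead of a full
# SVD (e.g. scipy.sparse.linalg.svds, the analogue of eigs with which=:SM).
_, s, vt = np.linalg.svd(A)
print(s[-1], np.allclose(A @ vt[-1], 0))
```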
2022-08-17 04:01:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5022424459457397, "perplexity": 1661.351970705741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.95/warc/CC-MAIN-20220817032054-20220817062054-00797.warc.gz"}
http://mathhelpforum.com/advanced-algebra/127027-composing-linear-transformations-print.html
# Composing Linear Transformations • February 3rd 2010, 03:36 PM rhfish Composing Linear Transformations Hi, this is not an assignment question (nothing I have to hand in anyway), but I am a bit confused about how to compose this: S: p(x) -> p(x + 1) R: p(x) -> (x-1)p(x) Goal is to get the rule for RS: p(x) R(S(p(x))) = R(p(x+1)) but, is it equal (x + 1 -1)p(x + 1) = xp(x+1) or (x - 1)p(x+1). i.e. is the x in (x-1) whatever is in the brackets, or always x? I am seeing reasoning for both answers, and am confused. • February 4th 2010, 04:58 AM HallsofIvy Quote: Originally Posted by rhfish Hi, this is not an assignment question (nothing I have to hand in anyway), but I am a bit confused about how to compose this: S: p(x) -> p(x + 1) R: p(x) -> (x-1)p(x) Goal is to get the rule for RS: p(x) R(S(p(x))) = R(p(x+1)) but, is it equal (x + 1 -1)p(x + 1) = xp(x+1) or (x - 1)p(x+1). i.e. is the x in (x-1) whatever is in the brackets, or always x? I am seeing reasoning for both answers, and am confused. R(whatever)= (x-1)*whatever. If "whatever" is p(x+1), then R(p(x+1))= (x-1)p(x+1). • February 4th 2010, 12:07 PM Roam Quote: Originally Posted by rhfish Hi, this is not an assignment question (nothing I have to hand in anyway), but I am a bit confused about how to compose this: S: p(x) -> p(x + 1) R: p(x) -> (x-1)p(x) Goal is to get the rule for RS: p(x) R(S(p(x))) = R(p(x+1)) but, is it equal (x + 1 -1)p(x + 1) = xp(x+1) or (x - 1)p(x+1). i.e. is the x in (x-1) whatever is in the brackets, or always x? I am seeing reasoning for both answers, and am confused. Yes, (x-1) will do. The composition of R with S is $(R \circ S) (x) = R(S(x))=R(p(x+1))$, therefore you will have $(x-1)P(x+1)$. • February 4th 2010, 05:45 PM rhfish Thank you, I guess what confused me is that p(x) -> (x-1)p(x) would suggest that the x in (x-1) should be whatever is in the brackets of p. Since p has "x + 1" in the brackets after the previous transformation, I wanted it to carry over. I can see what you guys are saying, but I am confused why it doesn't carry over. I know your answer is correct, but I don't know why it's making my brain feel weird. Maybe to phrase it better... S: p(x) -> p(x + 1) R: p(y) -> (y - 1)p(y) So after S y actually = x + 1... sigh. I feel very stupid.
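A concrete instance may help (this example is ours, not from the thread): take $p(x) = x^2$. Then $S(p)$ is the polynomial $q$ with $q(x) = p(x+1) = (x+1)^2$, and applying $R$ to $q$ gives $(x-1)q(x) = (x-1)(x+1)^2$. The factor $(x-1)$ is fixed by the definition of $R$; the argument of $p$ changes under $S$, but $R$ always multiplies its input polynomial by $(x-1)$ in the ambient variable $x$.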
2014-03-14 15:49:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7208951115608215, "perplexity": 1901.0313635215293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678693548/warc/CC-MAIN-20140313024453-00004-ip-10-183-142-35.ec2.internal.warc.gz"}
https://mathhelpboards.com/threads/are-the-vectors-linearly-independent.26775/
# Are the vectors linearly independent? #### mathmari ##### Well-known member MHB Site Helper Hey!! We have that the vectors $\vec{v},\vec{w}, \vec{u}$ are linearly independent. I want to check if the pairs • $\vec{v}, \vec{v}+\vec{w}$ • $\vec{v}+\vec{u}$, $\vec{w}+\vec{u}$ • $\vec{v}+\vec{w}$, $\vec{v}-\vec{w}$ are linearly independent or not. Since $\vec{v}, \vec{w}, \vec{u}$ are linearly independent it holds that $\lambda_1\vec{v}+\lambda_2\vec{w}+\lambda_3\vec{u}=0 \Rightarrow \lambda_1=\lambda_2=\lambda_3=0$ ($\star$). We have the following: • $\vec{v}, \vec{v}+\vec{w}$ : $\alpha_1\vec{v}+\alpha_2(\vec{v}+\vec{w})=0 \Rightarrow (\alpha_1+\alpha_2)\vec{v}+\alpha_2\vec{w}=0$ How can we continue here? • $\vec{v}+\vec{u}$, $\vec{w}+\vec{u}$ : $\alpha_1(\vec{v}+\vec{u})+\alpha_2(\vec{w}+\vec{u})=0 \Rightarrow \alpha_1\vec{v}+(\alpha_1+\alpha_2)\vec{u}+\alpha_2\vec{w}=0$ From ($\star$) it follows that $\alpha_1=\alpha_1+\alpha_2=\alpha_2=0\Rightarrow \alpha_1=\alpha_2=0$ and so this means that the vectors $\vec{v}+\vec{u}$ and $\vec{w}+\vec{u}$ are linearly independent. • $\vec{v}+\vec{w}$, $\vec{v}-\vec{w}$ : $\alpha_1(\vec{v}+\vec{w})+\alpha_2(\vec{v}-\vec{w})=0\Rightarrow (\alpha_1+\alpha_2)\vec{v}+(\alpha_1-\alpha_2)\vec{w}=0$ How can we continue here? #### Klaas van Aarsen ##### MHB Seeker Staff member Since $\vec{v}, \vec{w}, \vec{u}$ are linearly independent it holds that $\lambda_1\vec{v}+\lambda_2\vec{w}+\lambda_3\vec{u}=0 \Rightarrow \lambda_1=\lambda_2=\lambda_3=0$ ($\star$). We have the following: • $\vec{v}, \vec{v}+\vec{w}$ : $\alpha_1\vec{v}+\alpha_2(\vec{v}+\vec{w})=0 \Rightarrow (\alpha_1+\alpha_2)\vec{v}+\alpha_2\vec{w}=0$ How can we continue here? Hey mathmari !! Let's try with a proof by contradiction. Suppose they are not linearly independent. Then there must be $\alpha_1,\alpha_2$ such that $\alpha_1\ne 0$ and/or $\alpha_2\ne 0$. Let $\lambda_1 = \alpha_1+\alpha_2$ and $\lambda_2=\alpha_2$. What if we substitute them in your expression for independence of $\vec{v}, \vec{w}, \vec{u}$? Last edited: #### mathmari ##### Well-known member MHB Site Helper Hey mathmari !! Let's try with a proof by contradiction. Suppose they are not linearly independent. Then there must be $\alpha_1,\alpha_2$ such that $\alpha_1+\alpha_2\ne 0$ and/or $\alpha_2\ne 0$. Let $\lambda_1 = \alpha_1+\alpha_2$ and $\lambda_2=\alpha_2$. What if we substitute them in your expression for independence of $\vec{v}, \vec{w}, \vec{u}$? That would mean that $\lambda_1$ and/or $\lambda_2$ is non-zero, which is a contradiction, correct? #### Klaas van Aarsen ##### MHB Seeker Staff member That would mean that $\lambda_1$ and/or $\lambda_2$ is non-zero, which is a contradiction, correct? Yep. Btw, I made a mistake before. It should be $\alpha_1\ne 0$ and/or $\alpha_2\ne 0$. #### mathmari ##### Well-known member MHB Site Helper Let $\lambda_1 = \alpha_1+\alpha_2$ and $\lambda_2=\alpha_2$. I got stuck right now. Why can we just take these $\lambda$'s ? #### HallsofIvy ##### Well-known member MHB Math Helper Hey!! We have that the vectors $\vec{v},\vec{w}, \vec{u}$ are linearly independent. I want to check if the pairs • $\vec{v}, \vec{v}+\vec{w}$ • $\vec{v}+\vec{u}$, $\vec{w}+\vec{u}$ • $\vec{v}+\vec{w}$, $\vec{v}-\vec{w}$ are linearly independent or not. Since $\vec{v}, \vec{w}, \vec{u}$ are linearly independent it holds that $\lambda_1\vec{v}+\lambda_2\vec{w}+\lambda_3\vec{u}=0 \Rightarrow \lambda_1=\lambda_2=\lambda_3=0$ ($\star$).
We have the following: • $\vec{v}, \vec{v}+\vec{w}$ : $\alpha_1\vec{v}+\alpha_2(\vec{v}+\vec{w})=0 \Rightarrow (\alpha_1+\alpha_2)\vec{v}+\alpha_2\vec{w}=0$ How can we continue here? Didn't you just say that the fact that $\vec{v}$ and $\vec{w}$ are independent requires that $\alpha_1+ \alpha_2= 0$ and $\alpha_2= 0$? • $\vec{v}+\vec{u}$, $\vec{w}+\vec{u}$ : $\alpha_1(\vec{v}+\vec{u})+\alpha_2(\vec{w}+\vec{u})=0 \Rightarrow \alpha_1\vec{v}+(\alpha_1+\alpha_2)\vec{u}+\alpha_2\vec{w}=0$ From ($\star$) it follows that $\alpha_1=\alpha_1+\alpha_2=\alpha_2=0\Rightarrow \alpha_1=\alpha_2=0$ and so this means that the vectors $\vec{v}+\vec{u}$ and $\vec{w}+\vec{u}$ are linearly independent. • $\vec{v}+\vec{w}$, $\vec{v}-\vec{w}$ : $\alpha_1(\vec{v}+\vec{w})+\alpha_2(\vec{v}-\vec{w})=0\Rightarrow (\alpha_1+\alpha_2)\vec{v}+(\alpha_1-\alpha_2)\vec{w}=0$ How can we continue here? Last edited by a moderator: #### Klaas van Aarsen ##### MHB Seeker Staff member I got stuck right now. Why can we just take these $\lambda$'s ? We can. It's just that we want to prove that $\vec v$ and $\vec v + \vec w$ are linearly independent. To do so, we need to prove that $a_1\vec v + a_2 (\vec v + \vec w)=0 \implies a_1=a_2=0$. So for the proof by contradiction we assume that $a_1\ne 0$ and/or $a_2\ne 0$. Now we can pick those lambda's and continue... #### mathmari ##### Well-known member MHB Site Helper Didn't you just say that the fact that $\vec{v}$ and $\vec{w}$ are independent requires that $\alpha_1+ \alpha_2= 0$ and $\alpha_2= 0$? Do you mean the following? Let $\alpha_1\vec{v}+\alpha_2(\vec{v}+\vec{w})=0 \Rightarrow (\alpha_1+\alpha_2)\vec{v}+\alpha_2\vec{w}=0$. Since $\vec{v}$, $\vec{w}$ and $\vec{u}$ are linearly independent, then $\vec{v}$ and $\vec{w}$ are also linearly independent and this means that $\alpha_1+\alpha_2=\alpha_2=0 \Rightarrow \alpha_1=\alpha_2=0$. - - - Updated - - - We can. It's just that we want to prove that $\vec v$ and $\vec v + \vec w$ are linearly independent. To do so, we need to prove that $a_1\vec v + a_2 (\vec v + \vec w)=0 \implies a_1=a_2=0$. So for the proof by contradiction we assume that $a_1\ne 0$ and/or $a_2\ne 0$. Now we can pick those lambda's and continue... We suppose that $\vec{v}$ and $\vec{v}+\vec{w}$ are linearly dependent. Then at $\alpha_1\vec{v}+\alpha_2(\vec{v}+\vec{w})=0$ we have that $\alpha_1\neq 0$ and/or $\alpha_2\neq 0$. From the above equation we have that $(\alpha_1+\alpha_2)\vec{v}+\alpha_2\vec{w}=0$. Since $\vec{v}, \vec{w}, \vec{u}$ are linearly independent it holds that $\lambda_1\vec{v}+\lambda_2\vec{w}+\lambda_3\vec{u}=0 \Rightarrow \lambda_1=\lambda_2=\lambda_3=0$. Do you mean that we define these $\lambda$'s ? #### Klaas van Aarsen ##### MHB Seeker Staff member We suppose that $\vec{v}$ and $\vec{v}+\vec{w}$ are linearly dependent. Then at $\alpha_1\vec{v}+\alpha_2(\vec{v}+\vec{w})=0$ we have that $\alpha_1\neq 0$ and/or $\alpha_2\neq 0$. From the above equation we have that $(\alpha_1+\alpha_2)\vec{v}+\alpha_2\vec{w}=0$. Since $\vec{v}, \vec{w}, \vec{u}$ are linearly independent it holds that $\lambda_1\vec{v}+\lambda_2\vec{w}+\lambda_3\vec{u}=0 \Rightarrow \lambda_1=\lambda_2=\lambda_3=0$. Do you mean that we define these $\lambda$'s ? Yep.
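For completeness (this step is not in the thread itself), the two open bullets close the same way. For $\vec{v}, \vec{v}+\vec{w}$: from $(\alpha_1+\alpha_2)\vec{v}+\alpha_2\vec{w}=0$ and the independence of $\vec v, \vec w$ we get $\alpha_1+\alpha_2=0$ and $\alpha_2=0$, hence $\alpha_1=\alpha_2=0$. For $\vec{v}+\vec{w}, \vec{v}-\vec{w}$: $(\alpha_1+\alpha_2)\vec{v}+(\alpha_1-\alpha_2)\vec{w}=0$ forces $\alpha_1+\alpha_2=0$ and $\alpha_1-\alpha_2=0$; adding and subtracting these gives $2\alpha_1=0$ and $2\alpha_2=0$, so $\alpha_1=\alpha_2=0$ and both pairs are linearly independent.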
2020-11-25 04:51:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9583696126937866, "perplexity": 530.2431102473702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141181179.12/warc/CC-MAIN-20201125041943-20201125071943-00607.warc.gz"}
https://www.mathportal.org/formulas/analytic-geometry/circle.php
# Math formulas: Circle ### Equation of a circle In an $x-y$ coordinate system, the circle with center $(a, b)$ and radius $r$ is the set of all points $(x, y)$ such that: $$(x-a)^2 + (y-b)^2 = r^2$$ Circle centered at the origin: $$x^2 + y^2 = r^2$$ Parametric equations \begin{aligned} x &= a + r\,\cos t \\ y &= b + r\,\sin t \end{aligned} where $t$ is a parametric variable. In polar coordinates the equation of a circle with center $(r_0, \phi)$ and radius $a$ is: $$r^2 - 2\cdot r \cdot r_0 \cdot \cos(\Theta - \phi) + r_0^2 = a^2$$ ### Area of a circle $$A = r^2\pi$$ ### Circumference of a circle $$C = \pi \cdot d = 2\cdot \pi \cdot r$$ ### Theorems: (Chord theorem) The chord theorem states that if two chords, $CD$ and $EF$, intersect at $G$, then: $$CG \cdot GD = EG \cdot GF$$ (Tangent-secant theorem) If a tangent from an external point $D$ meets the circle at $C$ and a secant from the external point $D$ meets the circle at $G$ and $E$ respectively, then $$DC^2 = DG \cdot DE$$ (Secant - secant theorem) If two secants, $DG$ and $DE$, also cut the circle at $H$ and $F$ respectively, then: $$DH \cdot DG = DF \cdot DE$$ (Tangent chord property) The angle between a tangent and chord is equal to the subtended angle on the opposite side of the chord.
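A quick worked instance of the center–radius form (the numbers are chosen here for illustration): the circle with center $(2, -1)$ and radius $3$ is $$(x-2)^2 + (y+1)^2 = 9,$$ and setting $a = b = 0$ recovers the origin-centered form $x^2 + y^2 = r^2$.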
2020-11-25 13:09:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9909922480583191, "perplexity": 598.4773878176854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141182794.28/warc/CC-MAIN-20201125125427-20201125155427-00491.warc.gz"}
http://randomagent.wordpress.com/2010/08/03/stochastictransformer/
## Stochastic processes as monad transformers I have difficulty understanding functional programming concepts that I can’t put to some very simple and natural use (natural for me, of course). I need to find the perfect simple example to implement to finally understand something. And I’m not a computer scientist, so things like parsers and compilers have very little appeal to me (probably because I don’t understand them…). I’m a physicist, so this drives me to look for physical problems that can be implemented in Haskell so I can understand some concepts. Monad transformers still elude me. But I think I finally got the perfect subject where I can understand them: stochastic processes! First some book keeping: > import Control.Monad.State > import Control.Monad.Random Now, stochastic processes have characteristics related to two different monads. On one hand, they are dynamical processes, and the way to implement dynamics in Haskell is with state monads. For example, if I want to iterate the logistic map: $\displaystyle x_{t+1} = \alpha x_t\left(1-x_t\right)$ I could do the following: > f :: Double -> Double > f x = 4*x*(1-x) > logistic :: State Double Double > logistic = do x0 <- get > let x1 = f x0 > put x1 > return x1 > runLogistic :: Int -> Double -> [Double] > runLogistic n x0 = evalState (replicateM n logistic) x0 Running this on ghci would give you, for example: *Main> runLogistic 5 0.2 [0.6400000000000001,0.9215999999999999,0.28901376000000045, 0.8219392261226504,0.5854205387341] So we can make the loose correspondence: dynamical system  ↔  state monad. On the other hand, stochastic processes are compositions of random variables, and this is done with the Rand monad (found in Control.Monad.Random). As an example, the Box-Muller formula tells us that, if I have two independent random variables x and y, distributed uniformly in the [0, 1] interval, then, the expression: $\displaystyle \sqrt{-2\log(x)}\cos(2\pi y)$ will be normally distributed. We can write then (there’s a catch here: x and y are not independent, but sampled from the same pseudo-random number generator, with the same seed… this can be solved, but it would only complicate the example): > boxmuller :: Double -> Double -> Double > boxmuller x y = sqrt(-2*log x)*cos(2*pi*y) > normal :: Rand StdGen Double > normal = do x <- getRandom > y <- getRandom > return $ boxmuller x y > normals n = replicateM n normal > gen = mkStdGen 0 Running this function we get what we need: *Main> (evalRand $ normals 5) gen [0.1600255836730147,0.1575360140445035,-1.595627933129274, -0.18196791439834512,-1.082222285056746] So what is a stochastic process? In very rough terms: it is a dynamical system with random variables. So we need a way to make the Rand monad talk nicely with the State monad. The way to do this is to use a monad transformer, in this case, the StateT transformer. Monad transformers allow you to combine the functionalities of two different monads. In the case of the StateT monads, they allow you to add a state to any other monad you want. In our case, we want to wrap the Rand monad inside a StateT transformer and work with things of type: foo :: StateT s (Rand StdGen) r This type represents a monad that can store a state of type s, like the State monad, and can generate random variables of type r, like the Rand monad. In general we would have a type foo2 :: (MonadTrans t, Monad m) => t m a In this case, t = StateT s and m = Rand StdGen.
The class MonadTrans is defined in Control.Monad.Trans, and provides the function: lift :: (MonadTrans t, Monad m) => m a -> t m a In this case, t m is itself a monad, and can be treated like one through the code. It works like this: inside a do expression you can use the lift function to access the inner monad. Things called with lift will operate in the inner monad. Things called without lift will operate in the outer monad. So, suppose we want to simulate this very simple process: $\displaystyle x_{t+1} = x_{t} + \eta_t$ where ηt is drawn from a normal distribution. We would do: > randomWalk :: StateT Double (Rand StdGen) Double > randomWalk = do eta <- lift normal > x <- get > let x' = x + eta > put x' > return x' > runWalk :: Int -> Double -> StdGen -> [Double] > runWalk n x0 gen = evalRand (evalStateT (replicateM n randomWalk) x0) gen The evalStateT function is just evalState adapted to run a StateT monad. Note that replicateM has to go inside evalStateT, so that the state threads from step to step instead of being reset to x0 each time. Running this on ghci we get the cumulative sums of the normal samples above: *Main> runWalk 5 0.0 gen [0.1600255836730147,0.3175615977175182,-1.2780663354117558, -1.460034249810101,-2.542256534866847] This is what we can accomplish: we can easily operate simultaneously with functions that expect a state monad, like put and get, we can unwrap things with <- from the inner Rand monad by using lift, and we can return things to the state monad. We could have any monad inside the StateT transformer. For example, we could have another State monad. Here is a fancy implementation of the Fibonacci sequence using a State monad (that stores the last but one value in the sequence as its internal state) inside a StateT transformer (that stores the last value of the sequence): > fancyFib :: StateT Int (State Int) Int > fancyFib = do old <- lift get > new <- get > let new' = new + old > old' = new > lift $ put old' > put new' > return new > fancyFibs :: Int -> StateT Int (State Int) [Int] > fancyFibs n = replicateM n fancyFib And we can run this to get: *Main> evalState (evalStateT (fancyFibs 10) 1) 0 [1,1,2,3,5,8,13,21,34,55] Final note: I expect this post to be usable as a literate haskell script. If you can’t run it, please let me know. • Eloi Pereira  On August 12, 2011 at 03:20 Nice post. Since my background is on control theory I also like to think of the State Monad as computations of dynamical systems. About your final note on literate Haskell, I believe I found two typos in the Logistic example: > let x1 = f x should be > let x1 = f x0 and the type assertion > runLogistic :: State Double [Double] should be > runLogistic :: Int -> Double -> [Double] • jimstuttard  On June 1, 2012 at 08:20 Thanks for this post. I’m trying to write some stochastic code and cannot better your box-muller implementation. I can’t get runLogistic to type check with ghc-7.4.1 at all. Any ideas? • Rafael S. Calsaverini  On June 1, 2012 at 08:24 Hi Jim, I think I found the errors. Thanks for pointing it out. I’ll fix the code now. There are the problems pointed out by Eloi above, and also there’s an extra ‘s’ on “fancyFibs n = replicateM n fancyFibs“. This bold ‘s’ should not be there. Also, if you copy the text from here directly to your text editor, there’s an alignment problem due to some misplaced characters. I’ll try to fix that too. • jim Stuttard  On June 2, 2012 at 04:29 Hi Rafael, thanks for the prompt reply. I caught the fibS.
I’d like to integrate something like your code into the azimuthproject.org where we have a simple interactive stochastic bistability climate model. http://www.adgie.f9.co.uk/azimuth/stochastic-resonance/Javascript/StochasticResonanceEuler.html. The azimuth project was started by John Baez for scientists and engineers interested in global problems. We need programmers. I hope you don’t mind the plug.. • Rafael S. Calsaverini  On June 3, 2012 at 13:13 Hi Jim, absolutely! Use this code in whatever is useful for you. The project seems to be quite interesting. I’ll take a look. Thanks for pointing it out.
2014-04-24 13:35:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5224205255508423, "perplexity": 2553.3351946160733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.openfoam.com/documentation/guides/latest/api/edgeHashes_8H_source.html
The open source CFD toolbox
edgeHashes.H

/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | www.openfoam.com
     \\/     M anipulation  |
-------------------------------------------------------------------------------
    Copyright (C) 2017 OpenCFD Ltd.
-------------------------------------------------------------------------------
License
    This file is part of OpenFOAM.

    OpenFOAM is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
    for more details.

    You should have received a copy of the GNU General Public License
    along with OpenFOAM. If not, see <http://www.gnu.org/licenses/>.

Typedef
    Foam::edgeHashSet

Description
    A HashSet for an edge.
    The hashing on an edge is commutative.

\*---------------------------------------------------------------------------*/

#ifndef edgeHashes_H
#define edgeHashes_H

#include "edge.H"
#include "EdgeMap.H"
#include "HashSet.H"

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

namespace Foam
{
    // Alternative:
    // template<class T>
    // using EdgeMap = HashTable<T, edge, Hash<edge>>;

    //- A HashSet with edge for its key.
    typedef HashSet<edge, Hash<edge>> edgeHashSet;

}

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

#endif

// ************************************************************************* //
2023-02-02 22:10:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9592929482460022, "perplexity": 1517.3148925659555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.18/warc/CC-MAIN-20230202200542-20230202230542-00085.warc.gz"}
https://collegephysicsanswers.com/openstax-solutions/what-angle-light-inside-crown-glass-completely-polarized-when-reflected-water-0
Question: At what angle is light inside crown glass completely polarized when reflected from water, as in a fish tank? Answer: $41.2^\circ$
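A short worked solution (assuming the textbook's standard indices of refraction, $n_{\text{crown glass}} = 1.52$ and $n_{\text{water}} = 1.333$): reflected light is completely polarized at Brewster's angle, $$\theta_b = \tan^{-1}\left(\frac{n_2}{n_1}\right) = \tan^{-1}\left(\frac{1.333}{1.52}\right) \approx 41.2^\circ,$$ where the light travels from glass ($n_1$) into water ($n_2$).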
2022-06-27 17:50:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2055787742137909, "perplexity": 6191.124647968487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103337962.22/warc/CC-MAIN-20220627164834-20220627194834-00741.warc.gz"}
http://www.eurasip.org/Proceedings/Eusipco/Eusipco2004/defevent/html/abstract/a1064.htm
MSE-RATIO REGRET ESTIMATION WITH BOUNDED DATA UNCERTAINTIES (ThuAmOR2) Author(s) : Yonina Eldar (Technion, Israel) Abstract : We consider the problem of robust estimation of a deterministic bounded parameter vector $\mathbf{x}$ in a linear model. While in an earlier work, we proposed a minimax estimation approach in which we seek the estimator that minimizes the worst-case mean-squared error (MSE) {\em difference regret} over all bounded vectors $\mathbf{x}$, here we consider an alternative approach, in which we seek the estimator that minimizes the worst-case MSE {\em ratio regret}, namely, the worst-case {\it ratio} between the MSE attainable using a linear estimator ignorant of $\mathbf{x}$, and the minimum MSE attainable using a linear estimator that knows $\mathbf{x}$. The rationale behind this approach is that the value of the difference regret may not adequately reflect the estimator performance, since even a large regret should be considered insignificant if the value of the optimal MSE is relatively large.
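In symbols (our notation, not necessarily the paper's), the two criteria contrasted above are roughly $$R_{\rm diff}(\hat{\mathbf{x}}) = \sup_{\mathbf{x} \in \mathcal{B}} \left[ \mathrm{MSE}(\hat{\mathbf{x}}, \mathbf{x}) - \mathrm{MSE}^{\rm opt}(\mathbf{x}) \right], \qquad R_{\rm ratio}(\hat{\mathbf{x}}) = \sup_{\mathbf{x} \in \mathcal{B}} \frac{\mathrm{MSE}(\hat{\mathbf{x}}, \mathbf{x})}{\mathrm{MSE}^{\rm opt}(\mathbf{x})},$$ where $\mathcal{B}$ is the set of bounded parameter vectors and $\mathrm{MSE}^{\rm opt}(\mathbf{x})$ is the minimum MSE attainable by a linear estimator that knows $\mathbf{x}$.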
2022-07-01 22:35:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9387586712837219, "perplexity": 673.940433544133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103947269.55/warc/CC-MAIN-20220701220150-20220702010150-00404.warc.gz"}
https://www.groundai.com/project/observational-constraint-on-heavy-element-production-in-inhomogeneous-big-bang-nucleosynthesis/
Observational Constraint on Heavy Element Production in Inhomogeneous Big Bang Nucleosynthesis # Observational Constraint on Heavy Element Production in Inhomogeneous Big Bang Nucleosynthesis Riou Nakamura 111E-mail: riou@phys.kyush-u.ac.jp Department of Physics, Faculty of Sciences, Kyushu University, Fukuoka 812-8581, Japan    Masa-aki Hashimoto Department of Physics, Faculty of Sciences, Kyushu University, Fukuoka 812-8581, Japan    Shin-ichiro Fujimoto Department of Control and Information Systems Engineering, Kumamoto National College of Technology, Kumamoto 861-1102, Japan    Nobuya Nishimura National Astronomical Observatory, 2-21-1, Osawa, Mitaka, Tokyo 181-8588, Japan    Katsuhiko Sato Institute for the Physics and Mathematics of the Universe, The University of Tokyo, Chiba, 277-8568, Japan School of Science and Engineering, Meisei University, Tokyo 191-8506, Japan July 1, 2019 ###### Abstract Based on a scenario of the inhomogeneous big-bang nucleosynthesis (IBBN), we investigate the detailed nucleosynthesis that includes the production of heavy elements beyond Li. From the observational constraints on the light elements He and D for the baryon-to-photon ratio given by WMAP, possible regions are found on the plane of the volume fraction of the high-density region against the density ratio between the high- and low-density regions. In these allowed regions, we have confirmed that heavy elements beyond Fe can be produced appreciably, where - and/or -process elements are produced well simultaneously compared to the solar system abundances. We suggest that recent observational signals such as He overabundance in globular clusters and high metallicity abundances in quasars could be partly due to the results of IBBN. Possible implications are given for the formation of the first generation stars. ###### pacs: 26.35.+c, 98.80.Ft, 13.60.Rj ## I Introduction Big bang nucleosynthesis has been investigated mainly in the context of the standard cosmological model (SBBN), where the origins of the light elements He, D, and Li have been discussed in detail Iocco:2008va (). While observations of He are still under debate, with an uncertainty of 20-30 % in the abundance Luridiana2003 (); OliveSkillman04 (); Izotov:2007ed (), those of D severely constrain the possible range of the abundance production in the early universe Kirkman2003 (); OMeara2006 (); Pettini2008 (). Contrary to the above standard BBN, heavy element nucleosynthesis beyond the mass number was proposed more than twenty years ago IBBN0 (); IBBN1 (); TerasawaSato89 (); Alcock1987 (); 2zone (); Jedamzik1994 (); Matsuura:2004ss (), where the model is called the inhomogeneous BBN (IBBN). This model relies on an inhomogeneity of baryon concentrations that could be induced by baryogenesis (e.g. Ref. Matsuura:2004ss ()) or some phase transitions such as the QCD or electro-weak phase transition Alcock1987 (); Fuller1988 (); IBBN_QCD () during the expansion of the universe. Although a large-scale inhomogeneity is inhibited by many observations WMAP3 (); WMAP5 (), a small-scale one has been advocated within the present accuracy of the observations. Therefore, it remains a possibility for IBBN to occur to some degree during the early era. On the other hand, the Wilkinson Microwave Anisotropy Probe (WMAP) has derived critical parameters concerning the cosmology, of which the present baryon-to-photon ratio is determined to be  WMAP5 (). This value is almost consistent with that obtained from the observation of D.
Therefore, considering the uncertainty of the He abundance, we can fix the ratio in the discussion of the nucleosynthesis in the early universe. If the present ratio is determined, BBN can be performed along that line in the thermodynamical history with use of the nuclear reaction network. On the other hand, peculiar observations of abundances for heavy elements and/or He could be understood within IBBN. For example, the quasar metallicity of C, N, and Si could be explained by IBBN Juarez2009 (). Furthermore, from recent observations of globular clusters, the possibility of an inhomogeneous helium distribution has been pointed out Moriya2010 (), where separate groups of different main sequences in the blue band of low-mass stars are ascribed to high primordial helium abundances compared to the standard value Bedin2004 (); Piotto2007 (). Despite a negative opinion against IBBN due to insufficient consideration of the scale of the inhomogeneity Rauther2006 (), Matsuura et al. have found that heavy element synthesis for both - and -processes is possible if Matsuura2005 (), where they have also shown that the high regions are compatible with the observations of the light elements He and D Matsuura2007 (). However, their analysis is limited to a parameter of a specific baryon number concentration. Therefore, it is necessary to constrain the possible regions from available observations in the wide parameter space that describes IBBN. In §II, we review the adopted model of IBBN Matsuura2007 (). Constraints on the critical parameters of IBBN due to light element observations are shown in §III, and the production of possible heavy element nucleosynthesis is presented in §IV. Finally, §V is devoted to the summary and discussion.

## II Cosmological Model

We adopt the two-zone model for the inhomogeneous BBN, where the early universe is assumed to have high- and low-baryon-density regions IBBN1 () under the background temperature. For simplicity we ignore the diffusion effects before and during the primordial nucleosynthesis, where the plausibility will be discussed in §V. After the epoch of BBN, all the elements are assumed to be mixed homogeneously. Let us define $n_{\rm ave}$, $n_{\rm high}$, and $n_{\rm low}$ as the averaged, high, and low baryon number densities, and $f_v$ as the volume fraction of the high-baryon-density region. $X_i^{\rm ave}$, $X_i^{\rm high}$, and $X_i^{\rm low}$ are the mass fractions of each element in the averaged-, high-, and low-density regions, respectively. Then, the basic relations are written as follows:

$$n_{\rm ave} = f_v\, n_{\rm high} + (1 - f_v)\, n_{\rm low}, \qquad (1)$$

$$n_{\rm ave} X_i^{\rm ave} = f_v\, n_{\rm high} X_i^{\rm high} + (1 - f_v)\, n_{\rm low} X_i^{\rm low}. \qquad (2)$$

Here we assume the baryon fluctuation to be isothermal, as was done in previous studies (e.g., Refs. TerasawaSato89 (); Alcock1987 (); Fuller1988 ()). Under that assumption, since the baryon-to-photon ratio is defined through the photon number density as in standard BBN, $\eta = n_b/n_\gamma$, Eqs. (1) and (2) are rewritten as follows:

$$\eta_{\rm ave} = f_v\, \eta_{\rm high} + (1 - f_v)\, \eta_{\rm low}, \qquad (3)$$

$$\eta_{\rm ave} X_i^{\rm ave} = f_v\, X_i^{\rm high} \eta_{\rm high} + (1 - f_v)\, X_i^{\rm low} \eta_{\rm low}, \qquad (4)$$

where the $\eta$'s with subscripts are the baryon-to-photon ratios in each region. In the present paper, we fix $\eta_{\rm ave}$ from the cosmic microwave background observations WMAP3 (); WMAP5 (). $\eta_{\rm high}$ and $\eta_{\rm low}$ are obtained from both $\eta_{\rm ave}$ and the density ratio between the high- and low-density regions, $R \equiv n_{\rm high}/n_{\rm low}$. To calculate the evolution of the universe, we solve the following Friedmann equation,

$$\left(\frac{\dot{x}}{x}\right)^2 = \frac{8\pi G}{3}\,\rho, \qquad (5)$$

where $x$ is the cosmic scale factor and $G$ is the gravitational constant. The total energy density is the sum of the decomposed parts: $\rho = \rho_\gamma + \rho_{e^\pm} + \rho_\nu + \rho_b$.
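As an illustration of how the two-zone relations are used in practice, here is a minimal sketch (ours, not the authors' code) that recovers the zone baryon-to-photon ratios from $(\eta_{\rm ave}, f_v, R)$, assuming $R = \eta_{\rm high}/\eta_{\rm low}$ (which follows from $R \equiv n_{\rm high}/n_{\rm low}$ for an isothermal fluctuation); the numerical inputs are illustrative:

```python
def two_zone_etas(eta_ave, f_v, R):
    """Solve eta_ave = f_v*eta_high + (1 - f_v)*eta_low, Eq. (3),
    together with eta_high = R * eta_low."""
    eta_low = eta_ave / (f_v * R + (1.0 - f_v))
    eta_high = R * eta_low
    return eta_high, eta_low

def average_mass_fraction(X_high, X_low, eta_high, eta_low, eta_ave, f_v):
    """Eq. (4): zone abundances weighted by the baryon content of each zone."""
    return (f_v * X_high * eta_high + (1.0 - f_v) * X_low * eta_low) / eta_ave

# Illustrative values only (eta_ave of the order fixed by the CMB).
eta_high, eta_low = two_zone_etas(6.2e-10, f_v=1e-6, R=1e6)
```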
Here the subscripts $\gamma$, $e^\pm$, $\nu$, and $b$ indicate photons, electrons/positrons, neutrinos, and baryons, respectively. We note that $\rho_b$ is the average value of the baryon density obtained from Eq. (1). The energy conservation law is used to get the time evolution of the temperature and the baryon density,

$$\frac{d}{dt}\left(\rho x^3\right) + p\,\frac{d}{dt}\left(x^3\right) = 0, \qquad (6)$$

where $p$ is the pressure of the fluid.

## III Constraints from light-element observations

In this section, we calculate the nucleosynthesis in the high- and low-density regions with use of the BBN code Hashimoto1985 (), which includes 24 nuclei from neutron to O. We adopt the reaction rates of NACRE NACRE (), the neutron lifetime sec Hagiwara:2002fs (), and take account of the number of species of the massless neutrinos. Figure 1 illustrates the light element synthesis in the high- and low-density regions with and that correspond to and . In the low-density region the evolution of the elements is almost the same as that of standard BBN. In the high-density region, while He is more abundant than in the low-density region, Li (or Be) is much less produced. This implies that heavier nuclei such as O, hardly synthesized in SBBN, are synthesized in the high-density region. For , the heavier elements can be synthesized in the high-density regions, as discussed in Ref. Jedamzik1994 (). For , the contribution of the low-density region can be neglected, and therefore, to be consistent with the observations of the light elements, we need to impose the condition . Now, we put constraints on $f_v$ and $R$ by comparing the average values of He and D obtained from Eq. (4) with the following observational values. First we adopt the primordial He abundance reported in Ref. OliveSkillman04 ():

$$0.232 \leq Y_p \leq 0.258. \qquad (7)$$

Next, we take the primordial abundance from the D/H observation reported in Ref. PDG2008 (),

$$\mathrm{D/H} = (2.84 \pm 0.26) \times 10^{-5}, \qquad (8)$$

where the systematic error given in Ref. OMeara2006 () is adopted. Figure 2 illustrates the constraints on the plane from the above light-element observations with contours of constant . The solid and dashed lines indicate the upper limits from Eqs. (7) and (8), respectively. As a result, we can obtain approximately the following relations between $R$ and $f_v$:

$$R \leq \begin{cases} 0.26 \times f_v^{-0.96} & \text{for } f_v > 3.2\times10^{-6}, \\ 1.20 \times f_v^{-0.83} & \text{for } f_v \leq 3.2\times10^{-6}. \end{cases} \qquad (9)$$

As shown in Figure 2, we can find allowed regions which include the very high-density region such as . Matsuura et al. Matsuura2007 () defined a parameter $a$ of the baryon number concentration in the high-density region instead of the two parameters $f_v$ and $R$ that are needed to solve Eqs. (3) and (4):

$$f_v\, \eta_{\rm high} : (1 - f_v)\, \eta_{\rm low} = a : (1 - a).$$

However, they have only examined the case of and , where for . Our constraints in Eq. (9) correspond to . Since we have fixed the value of $\eta_{\rm ave}$, we can obtain physically more reasonable regions on the $f_v$–$R$ plane. Naturally, as $\eta_{\rm high}$ takes larger values, nuclei heavier than Li are synthesized more and more. Then we can estimate the amount of total CNO elements in the allowed region. Figure 3 illustrates the contours of the summation of the average values of the heavier nuclei (), which correspond to Fig. 2 and are drawn using the constraint from the He and D/H observations. As a consequence, we get the upper limit of total mass fractions for the heavier nuclei as follows: . We should note that abundance flows proceed beyond the CNO elements thanks to the larger network for high $\eta$ values, as shown in Table 2 of the following section.

## IV Heavy element Production

In the previous section, we have obtained the amount of CNO elements produced in the two-zone IBBN model.
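To make Eq. (9) easy to read off, a short script (ours, for illustration only) evaluates the allowed upper limit on $R$ for a few volume fractions:

```python
def R_max(f_v):
    """Upper limit on the density ratio R from Eq. (9)."""
    if f_v > 3.2e-6:
        return 0.26 * f_v**-0.96
    return 1.20 * f_v**-0.83

for f_v in (1e-8, 3.2e-6, 1e-4):
    print(f"f_v = {f_v:.1e}:  R <= {R_max(f_v):.3e}")
```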
However, it is not enough to examine the nuclear production beyond because the baryon density in the high-density region becomes so high that elements beyond the CNO isotopes can be produced Wagoner1967 (); 2zone (); Matsuura:2004ss (); Matsuura2005 (). In this section, we investigate the heavy element nucleosynthesis in the high-density region considering the constraints shown in Fig. 2. The temperature and density evolutions are the same as used in the previous section. The abundance change is calculated with a large nuclear reaction network, which includes 4463 nuclei from neutron and proton to Americium (Z = 95 and A = 292). Nuclear data, such as reaction rates, nuclear masses, and partition functions, are the same as used in fujimoto (), except for the neutron-proton interaction; we take the weak interaction rates between n and p from the Kawano code Kawano (), which is adequate for the high temperature epoch of  K. We note that the mass fractions of He and D obtained with the large network are consistent with those in §III within an accuracy of a few percent. As seen in Fig. 3, heavy elements of are produced nearly along the upper limit of . Therefore, to examine the efficiency of the heavy element production, we select five models with the following parameters: , and corresponding to , , , , and . The adopted parameters are indicated by filled squares in Fig. 2. Figure 4 shows the results of nucleosynthesis in the high-density regions with and . For , the nucleosynthesis paths are classified with the mass number Matsuura2005 (). For nuclei of mass number , proton captures are very active compared to the neutron capture of  K and the path moves to the proton-rich side, which begins by breaking out of the hot CNO cycle. For nuclei of , the path goes across the stable nuclei from the proton- to the neutron-rich side, since the temperature decreases and the number of seed nuclei of the neutron capture process increases significantly. Concerning heavier nuclei of , neutron captures become much more efficient. In Figure 4(a), we see the time evolution of the abundances of Gd and Eu for the mass number 159. First Tb (stable -element) is synthesized, and later Gd and Eu are synthesized through the neutron captures. After sec, Eu decays to nuclei by way of Eu Gd Tb, where the lifetimes of Eu and Gd are min and h, respectively. This neutron capture process is not similar to the canonical process, since the nuclear processes proceed under the condition of a high abundance of protons. For , the reactions first proceed along the stable line, because triple-$\alpha$ reactions and other particle-induced reactions are very effective. Subsequently, the reactions directly proceed to the proton-rich region through rapid proton captures. As shown in Fig. 4(b), Sn, which is a proton-rich nucleus, is synthesized. After that, the stable nucleus Cd is synthesized by way of Sn In Cd, where the lifetimes of Sn and In are min and min, respectively. In addition, we notice the production of the radioactive nuclei Ni and Co, where Ni is produced at early times, just after the formation of He. Usually, nuclei such as Ni and Co are produced in supernova explosions, which are assumed to be events after the first star formation (e.g. Ref. Hashimoto1995 ()). In the IBBN model, however, this production can be found to occur in the extremely high density region as primary elements without supernova events in the early universe. To explain the differences in the nuclear reactions which depend on the baryon density, we focus on the neutron abundances.
Figure 5 shows the evolutions of the neutron abundances in the SBBN and IBBN models. For , the neutron abundance decreases rapidly at 10 sec due to the formation of He and Ni. Thus, the neutron abundance is not high enough to induce the neutron captures that produce heavy nuclei of . On the other hand, the neutron abundance tends to remain even at high temperature for the lower value of . We can see the case of , where there remain enough neutrons for the neutron capture reactions to occur. Thus the neutron capture process producing heavy elements of can become active. The time scale of the decrease in the neutron abundance drastically changes the flow of the abundance production. Figures 6 and 7 show the flows for . Before the significant decrease in the neutron abundances before sec, the nucleosynthesis already proceeds along the stable line by way of neutron-induced reactions (Fig. 6). At that time, the nuclear reactions are stuck around with , since it takes time to synthesize heavier nuclei because Nd () and Sm () have some stable isotopes. As time goes on, neutron captures of these nuclei start, where the neutron captures proceed significantly and -elements can be synthesized. After the depletion of neutrons ( sec), nuclei around the neutron numbers are produced through proton-induced reactions such as Sm (Fig. 7). Final results ( K) of the nucleosynthesis calculations are shown in Tables 1 and 2. Table 1 shows the abundances of the light elements, He, D, and Li, in the high- and low-density regions with their average values. Abundances of the low-density side (the third and sixth columns) are obtained from the calculation by the BBN code used in §III, because abundance flows beyond are negligible. Table 2 shows the amounts of heavy elements. When we calculate the average values, we set the heavy-element abundances to zero on the low-density side. For , a lot of nuclei of are synthesized whose amounts are comparable to that of Li. The produced elements in this case include both -element (i.e. Ba) and -elements (for instance, Ce and Nd), since moderate amounts of neutrons remain, as shown in Fig. 5. For , there are few -elements, while both -elements (i.e. Kr and Y) and -elements (i.e. Se and Kr) are synthesized, as in the case of supernova explosions. Although heavy nuclei of are not synthesized appreciably, those of are produced well owing to the explosive nucleosynthesis under the high density circumstances (). The most abundant element is found to be Ni, whose production value is much larger than the estimated upper limit of the total mass fraction (shown in Fig. 3) derived from the BBN code calculations. This is because our BBN code used in §III includes the elements up to and the actual abundance flow proceeds to much heavier elements. Figure 8 shows the abundances averaged between the high- and low-density regions using Eq. (4) compared with the solar system abundances Anders1989 (). For , the abundance productions of are comparable to the solar values. For , those of have been synthesized well. In the case of , there are two outstanding peaks; one is around and the other can be found around . The abundance patterns are very different from those of the solar system, because IBBN occurs under the condition of significant abundances of both neutrons and protons.
## V Summary and Discussion We have investigated the consistency between inhomogeneous BBN and the observations of the He and D/H abundances under the standard cosmological model with the baryon-to-photon ratio determined by WMAP. We have adopted the two-zone model, where the universe has high- and low-baryon-density regions at the BBN epoch. First, we have calculated the light element nucleosynthesis using the BBN code having 24 nuclei for the high- and low-density regions. We have assumed that the diffusion effect is negligible. There are significant differences in the time evolution of the light elements between the high- and low-density regions; in the high-density region, the nucleosynthesis begins faster and He is more abundant than in the low-density region, as shown in Figure 4. From the He and D/H observations, we can put severe constraints on the two parameters of the two-zone model: the volume fraction of the high-density region and the density ratio between the two regions, where we have assumed that abundances in the two regions are mixed homogeneously. Second, using the allowed parameters constrained from the light element observations, we calculate the nucleosynthesis that includes 4463 nuclei in the high-density regions. Qualitatively, the results of the nucleosynthesis are the same as those in Ref. Matsuura2005 (). In the present results, we showed that - and -elements are synthesized simultaneously in the high-density region. Such a curious site of nucleosynthesis has never been known in previous studies of nucleosynthesis. As a result, we have obtained the average values of the mass fractions from the nucleosynthesis in the high-density and low-density regions. The total averaged mass fractions beyond the light elements are constrained to be (for ) and (for ). We find that the average mass fractions in IBBN amount to as much as the solar system abundances. As seen from Fig. 8, there are over-produced elements around (for ) and (for ). This seems to conflict with the chemical evolution of the universe. However, we show only the results of the upper bounds on the diagram. Since and are free parameters, over-production can be avoided by the adjustment of and/or . Figure 9 illustrates the mass fraction in with various sets. It is shown that the abundance pattern can be lower than the solar system abundance. Although we showed here only the result of the case, it is possible to avoid producing over-abundance for other parameters, and . If we put constraints on the plane from the heavy element observations, the limits on those parameters should become tighter. In our calculation, the radioactive nuclei are produced abundantly in the high-density region. In particular, we should note that Ni decays into Fe (Ni Co Fe), where the existence of Fe surely affects the process of the formation of the first generation stars. Therefore, it may also be necessary for IBBN to be constrained from star formation scenarios, because the opacity change due to IBBN will affect them. Recent observational signals of over-abundances of He mass fractions in globular clusters could motivate the IBBN scenario toward detailed modeling. The over-abundances of He are suggested to be in the range of where estimated from the H-R diagram of the blue Main-Sequence of NGC2808 in Ref. Piotto2007 (). If the origin of He in globular clusters is due to IBBN, must be greater than in some regions during the epoch of BBN. Then, the averaging procedure could be constrained from more detailed observations of abundances.
Since the history of changes in abundances has been investigated in detail through the chemical evolution of galaxies Anderson2009 (), further plausible constraints on the averaging process should be studied in the next step. In our study, we ignore the diffusion effects. However, it is shown that the diffusion affects the primordial nucleosynthesis significantly IBBN1 (). Matsuura et al. Matsuura2007 () have estimated the size of the high-baryon-density island to be  m –  m at the BBN epoch. The upper bound is obtained from the maximum angular resolution of the CMB, and the lower one is from the analysis of the comoving diffusion length of neutrons and protons given in Ref. IBBN0 (). In our case, we can estimate the scale of the high-density region and the effects of the diffusion from . The neutron diffusion effects can be discussed with use of the results obtained in the previous section by comparing the scale of the high-density region with the diffusion length. The present value of the Hubble length is  m. We may estimate the scale of the high-density region from the Hubble length multiplied by . From the ranges of the volume fraction adopted in §IV, , we obtain the scale of the high-density regions at the present epoch as  m  m. We can estimate the scale at redshift from the relation . As a result, we expect at the BBN era  m  m. We can say that the nucleon diffusion effects can be neglected because the diffusion length is much smaller than . On the other hand, the high-density region is expected to be smaller than  m. It appears problematic that the upper bound of is larger than the value as far as our two-zone model is concerned. However, the high-density island cannot be observed directly, since we assume that the high- and low-density regions become homogeneous after the nucleosynthesis. Finally, distances between high-density regions are difficult to derive without specific models beyond the two-zone model. We plan to calculate the nucleosynthesis with the diffusion of abundances and/or a more plausible averaging process included. ###### Acknowledgements. This work has been supported in part by a Grant-in-Aid for Scientific Research (18540279, 19104006, 21540272) of the Ministry of Education, Culture, Sports, Science and Technology of Japan, and in part by a grant for Basic Science Research Projects from the Sumitomo Foundation (No. 080933). ## References • (1) G. Steigman, Ann. Rev. Nucl. Part. Sci. 57, 463 (2007); F. Iocco, G. Mangano, G. Miele, O. Pisanti and P. D. Serpico, Phys. Rept. 472, 1 (2009) • (2) V. Luridiana, A. Peimbert, M. Peimbert, & M. Cervino, Astrophys. J. 592, 846 (2003) • (3) Y. I. Izotov, T. X. Thuan and G. Stasinska, Astrophys. J. 662, 15 (2007) • (4) Olive & Skillman, Astrophys. J., 617, 29–40 (2004) • (5) D. Kirkman, D. Tytler, N. Suzuki, J. M. O’Meara and D. Lubin, Astrophys. J. Suppl. 149, 1 (2003) [arXiv:astro-ph/0302006]. • (6) M. Pettini, B. J. Zych, M. T. Murphy, A. Lewis, & C. C. Steidel, Mon. Not. R. Astron. Soc. 391, 1499, (2008) • (7) J. M. O’Meara, S. Burles, J. X. Prochaska, G. E. Prochter, R. A. Bernstein and K. M. Burgess, Astrophys. J. 649, L61 (2006) • (8) C. Alcock, G.M. Fuller, and G.J. Mathews, Astrophys. J. 320, 439 (1987) • (9) N. Terasawa and K. Sato, Phys. Rev. D 39, 2893 (1989) • (10) K. Jedamzik, and J.B. Rehm, Phys. Rev. D64, 023510 (2001) [astro-ph/0101292]; T. Rauscher, H. Applegate, J. Cowan, F. Thielmann, and M. Wiescher, Astrophys. J. 429, 499 (1994). • (11) J. H. Applegate, C. J. Hogan, and R. J. Scherrer, Phys. Rev. D35, 1151 (1987) • (12) R. M.
Malaney and W. A. Fowler, Astrophys. J 333, 14 (1988); J. H. Applegate, C. J. Hogan, R. J. Scherrer, Astrophys. J. 329, 572 (1988); N. Terasawa and K. Sato, Astrophys. J. 362, L.47 (1990); D. Thomas, D. N. Schramm, K.A. Olive, G. J. Mathews, B. S. Meyer, and B. D. Fields, Astrophys. J. 430, 291 (1994); • (13) K. Jedamzik, G. M. Fuller, G. J. Mathews, and T. Kajino, Astrophys. J. 422, 423 (1994); • (14) S. Matsuura, A. D. Dolgov, S. Nagataki and K. Sato, Prog. Theor. Phys. 112, 971 (2004) • (15) G. M. Fuller, G. J. Mathews and C. R. Alcock, Phys. Rev. D 37, 1380 (1988); • (16) H. Kurki-Suonio and R. A. Matzner, Phys.Rev. D39, 1046 (1989); H. Kurki-Suonio and R. A. Matzner, Phys.Rev. D42, 1047 (1990); • (17) C.L. Bennett, et al., Astrophys. J. Suppl. 148, 1 (2003) D. N. Spergel et al., Astrophys. J. Suppl. 170, 377 (2007) J. Dunkley et al. Astrophys. J. Suppl. 180, 306 (2009) • (18) E. Komatsu et al., arXiv:1001.4538 [astro-ph.CO]. • (19) Y. Juarez, R. Maiolino, R. Mujica, M. Pedani, S. Marinoni, T. Nagao, A. Marconi, & E. Oliva, Astron. & Astrophys., 494, L25, (2009) • (20) L. R. Bedin et al., Astrophys. J., 605, L125 (2004); • (21) G. Piotto et al., Astrophys. J., 661 L53, (2007) • (22) T. Rauscher, Phys. Rev. D 75, 068301 (2007) • (23) S. Matsuura, S. I. Fujimoto, S. Nishimura, M. A. Hashimoto and K. Sato, Phys. Rev. D 72, 123505 (2005) • (24) S. Matsuura, S. I. Fujimoto, M. A. Hashimoto and K. Sato, Phys. Rev. D 75, 068302 (2007). • (25) K. Jedamzik [astro-ph/9911242]. • (26) S. Fujimoto, M. Hashimoto, O. Koike, K. Arai, & R. Matsuba, Astrophys. J. 585, 418 (2003), O. Koike, M. Hashimoto, R. Kuromizu, & S. Fujimoto, Astrophys. J. 603, 592 (2004), S. Fujimoto, M. Hashimoto, K. Arai, & R. Matsuba, Astrophys. J., 614, 847 (2004), S. Nishimura, K. Kotake, M. Hashimoto, S. Yamada, N. Nishimura, S. Fujimoto and K. Sato, Astrophys. J. 642, 410 (2006). • (27) M. Hashimoto & K. Arai, Physics Reports of Kumamoto University, 7, 47, (1985). • (28) B. Fields and S. Sarkar, arXiv:astro-ph/0601514. • (29) L. Kawano, FERMILAB-Pub-92/04-A • (30) E. Anders and N. Grevesse, Geochim. Cosmochim. Acta 53, 197 (1989). • (31) C. Angulo, M. Arnould, M. Rayet, P. Descouvemont, D. Baye, C. Leclercq-Willain, A. Coc, S. Barhoumi, P. Aguer, C. Rolfs, et al., Nuclear Physics A 656, 3 (1999). • (32) M. Hashimoto, Progress of Theoretical Physics, 94, 663, (1995). • (33) K. Hagiwara et al. [Particle Data Group], Phys. Rev. D 66, 010001 (2002). • (34) R. V. Wagoner, W. A. Fowler, & F. Hoyle, Astrophys. J., 148, 3 (1967) • (35) M. E. Anderson, J. N. Bregman, S. C. Butler and C. R. Mullis, Astrophys. J. 698, 317 (2009) • (36) T. Moriya and T. Shigeyama, Phys. Rev. D 81, 043004 (2010)
2020-07-04 15:18:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8160554766654968, "perplexity": 1600.637444192677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886178.40/warc/CC-MAIN-20200704135515-20200704165515-00007.warc.gz"}
https://hal.archives-ouvertes.fr/hal-00347811/en/
# Choosing a penalty for model selection in heteroscedastic regression

WILLOW - Models of visual object recognition and scene understanding, DI-ENS - Département d'informatique de l'École normale supérieure, ENS Paris - École normale supérieure - Paris, Inria Paris-Rocquencourt, CNRS - Centre National de la Recherche Scientifique : UMR8548

Abstract: We consider the problem of choosing between several models in least-squares regression with heteroscedastic data. We prove that any penalization procedure is suboptimal when the penalty is a function of the dimension of the model, at least for some typical heteroscedastic model selection problems. In particular, Mallows' Cp is suboptimal in this framework. On the contrary, optimal model selection is possible with data-driven penalties such as resampling or $V$-fold penalties. Therefore, it is worth estimating the shape of the penalty from data, even at the price of a higher computational cost. Simulation experiments illustrate the existence of a trade-off between statistical accuracy and computational complexity. As a conclusion, we sketch some rules for choosing a penalty in least-squares regression, depending on what is known about possible variations of the noise level.

Document type: Preprint / working paper (2010). Cited literature: 27 references.
Contributor: Sylvain Arlot. Submitted on: Thursday, June 3, 2010 - 7:24:45 PM. Last modification on: Thursday, January 11, 2018 - 6:23:04 AM. Document(s) archived on: Thursday, September 23, 2010 - 12:56:52 PM.

### Files

shape.pdf - files produced by the author(s)

### Identifiers

- HAL Id: hal-00347811, version 2
- ARXIV: 0812.3141

### Citation

Sylvain Arlot. Choosing a penalty for model selection in heteroscedastic regression. 2010. 〈hal-00347811v2〉
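For intuition, here is a minimal Python sketch (not from the paper) of the kind of dimension-based penalty the abstract argues is suboptimal under heteroscedasticity, a Mallows' Cp-style criterion; the `models`, `X`, `y` and `sigma2_hat` names are hypothetical stand-ins:

```python
import numpy as np

def cp_criterion(y, y_hat, dim, sigma2):
    """Mallows' Cp-style penalized least-squares criterion:
    empirical risk plus a penalty depending only on the model dimension."""
    n = len(y)
    return np.mean((y - y_hat) ** 2) + 2.0 * sigma2 * dim / n

# Model selection: pick the candidate minimizing the criterion.
# best = min(models, key=lambda m: cp_criterion(y, m.predict(X), m.dim, sigma2_hat))
```

A data-driven penalty (resampling or V-fold) would replace the `2 * sigma2 * dim / n` term with an estimate computed from the data, which is what the paper recommends when the noise level may vary.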
2018-03-20 00:19:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3886052072048187, "perplexity": 2248.5919136668135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647244.44/warc/CC-MAIN-20180319234034-20180320014034-00510.warc.gz"}
https://compas.dev/compas/dev/api/generated/compas.data.is_sequence_of_int.html
# is_sequence_of_int

compas.data.is_sequence_of_int(items) [source]

Verify that the sequence contains only integers.

Parameters:
items (sequence) – The sequence of items.

Returns:
bool – True if all items in the sequence are integers, False otherwise.
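A usage sketch (not taken from the official docs, but consistent with the signature above):

```python
>>> from compas.data import is_sequence_of_int
>>> is_sequence_of_int([1, 2, 3])
True
>>> is_sequence_of_int([1, 2.0, 3])  # a float disqualifies the sequence
False
```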
2022-10-06 19:47:12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8462598323822021, "perplexity": 10787.33827851266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00652.warc.gz"}
https://quantumcomputing.stackexchange.com/tags/error-correction/hot?filter=week
# Tag Info

## Hot answers tagged error-correction

2 I don't think that this will be possible on current real quantum hardware. An alternative would be to run it on a simulator with a realistic noise model. This means that the circuit will be run in a non-ideal environment, and so should incur errors similar to those it would incur if it were executed on a real device. This tutorial teaches you how to build a noise model ...

1 You can simply control $X$ gates with qubits $q3$ and $q4$. You DO NOT have to measure them first and then use classical bits for controlling. The reason is that in quantum computing, controlling some qubit with other qubits or with their measured results in a classical register is the same. Hence, you can implement the algorithm on a real quantum computer.

1 AFAIK, this is impossible on IBM's current hardware. See this github issue: https://github.com/Qiskit/qiskit-textbook/issues/119
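To make the second answer concrete, here is a minimal Qiskit sketch (an illustration, not from the thread; the qubit indices mirror the $q3$/$q4$ example): controlling $X$ gates directly on qubits avoids the measure-then-classically-control step entirely.

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(5)
# ... the encoding / syndrome-extraction part of the circuit goes here ...

# Quantum-controlled corrections: no intermediate measurement required.
qc.cx(3, 0)  # X on qubit 0, controlled on qubit 3
qc.cx(4, 1)  # X on qubit 1, controlled on qubit 4

print(qc.draw())
```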
2020-01-20 15:19:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3126530051231384, "perplexity": 797.8484697560692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598800.30/warc/CC-MAIN-20200120135447-20200120164447-00219.warc.gz"}
https://www.alignmentforum.org/posts/w8QBmgQwb83vDMXoz/dynamic-inconsistency-of-the-inaction-and-initial-state
Vika has been posting about various baseline choices for impact measures. In this post, I'll argue that the stepwise inaction baseline is dynamically inconsistent/time-inconsistent. Informally, what this means is that an agent will have different preferences from its future self.

# Losses from time-inconsistency

Why is time-inconsistency bad? It's because it allows money-pump situations: the environment can extract free reward from the agent, to no advantage to that agent. Or, put more formally:

• An agent is time-inconsistent between an earlier time and a later time if, at the earlier time, it would pay a positive amount of reward to constrain its possible choices at the later time.

Outside of anthropics and game theory, we expect our agents to be time-consistent.

# Time inconsistency example

Consider the following example: The robot can move in all four directions (north, south, east, west) and can also take a noop action. It gets a reward of $1$ for standing on the blue button for the first time. Using attainable utility preservation, the penalty function is defined by an auxiliary set of reward functions; here, this just consists of the reward function that gives $1$ for standing on the red button for the first time. Therefore, with discount rate $\gamma$, if the robot moves from a point $d$ steps away from the red button to one $d'$ steps away, it gets a penalty[1] of $|\gamma^d - \gamma^{d'}|$ - the difference between the expected red-button rewards for an optimiser in both positions.

## Two paths

It's pretty clear there are two potentially optimal paths the robot can take: going straight to the blue button (higher reward, but higher penalty), or taking the long way round (lower reward, but lower penalty). Fortunately, when summing up the penalties, you sum terms like $|\gamma^a - \gamma^b|$, so a lot of the terms cancel. Thus for the short route, the reward is $\gamma^8$ (distance of eight to the blue button) and the penalty is $\gamma^3 - \gamma^7$ (closest to the red button: 3 squares, furthest: 7 squares). For the long route, the reward is $\gamma^{46}$ (distance forty-six to the blue button) and the penalty is $\gamma^7 - \gamma^{30}$ (closest to the red button: 7 squares, furthest: 30 squares).

## Take the third option

Now suppose that the robot, as a free action, could block itself from choosing $S$ (south) for the next several turns. When choosing that action, it gets a one-time penalty[2]; and now it can walk directly to the blue button without incurring any extra penalties, and then stay there, without ever getting any further penalty. This gives a total reward-penalty value which is higher than either of the reward-penalties of the previous section. Under suitable parameter values, this is also higher than just staying in place. Hence the robot will, if it can, act to constrain its own future actions and thus be time-inconsistent.

# The initial state and the initial inaction baseline

This approach also works with the initial inaction baseline - and with the initial state baseline. For those baselines, the counterfactual robot just stays put. So, when the robot is a distance $d$ from the red button, its penalty is the gap between the attainable red-button reward there and at its starting square. Then the robot would reduce its penalty if it could ensure that it could not reach the red button within less than seven turns; if it can operate on its action set to do so, at trivial cost, it would.

# Counterfactual constraint

In most cases, if an agent is time-inconsistent and acts to constrain its future self, it does so to prevent the future self from taking some actions. But here, note that the future self would never take the proscribed actions: the robot has no interest in going south to the red button.
Here the robot is constraining its future counterfactual actions, not the future actions that it would ever want to take.

1. If using an inaction rollout of length $n$, just multiply that penalty by the corresponding discount factor. ↩︎
2. The one-time penalty comes from the optimal policy for reaching the red button under this restriction: go to the square above the red button, wait till $S$ is available again, then go $S$. ↩︎

Planned summary for the Alignment Newsletter:

In a fixed, stationary environment, we would like our agents to be time-consistent: that is, they should not have a positive incentive to restrict their future choices. However, impact measures like <@AUP@>(@Towards a New Impact Measure@) calculate impact by looking at what the agent could have done otherwise. As a result, the agent has an incentive to change what this counterfactual is, in order to reduce the penalty it receives, and it might accomplish this by restricting its future choices. This is demonstrated concretely with a gridworld example.

Planned opinion:

It's worth noting that measures like AUP do create a Markovian reward function, which typically leads to time-consistent agents. The reason that this doesn't apply here is that we're assuming the restriction of future choices is "external" to the environment and formalism, but nonetheless affects the penalty. If we instead have this restriction "inside" the environment, then we will need to include a state variable specifying whether the action set is restricted or not. In that case, the impact measure would create a reward function that depends on that state variable. So another way of stating the problem is that if you add the ability to restrict future actions to the environment, then the impact penalty leads to a reward function that depends on whether the action set is restricted, which intuitively we don't want. (This point is also made in this followup post.)

Good, cheers!

Nice post! I think this notion of time-inconsistency points to a key problem in impact measurement, and if we could solve it (without backtracking on other problems, like interference/offsetting), we would be a lot closer to dealing with subagent issues. I think the other baselines can also induce time-inconsistent behavior, for the same reason: if reaching the main goal has a side effect of allowing the agent to better achieve the auxiliary goal (compared to the starting state / inaction / stepwise inaction), the agent is willing to pay a small amount to restrict its later capabilities. Sometimes this is even a good thing - the agent might "pay" by increasing its power in a very specialized and narrow manner, instead of gaining power in general, and we want that.

Here are some technical quibbles which don't affect the conclusion (yay).

> If using an inaction rollout of length $n$, just multiply that penalty by the corresponding discount factor.

I don't think so - the inaction rollout formulation (as I think of it) compares the optimal value after taking the action and waiting for $n$ steps, with the optimal value after $n$ steps of waiting. There's no additional discount there.

> Fortunately, when summing up the penalties, you sum terms like $|\gamma^a - \gamma^b|$, so a lot of the terms cancel.

Why do the absolute values cancel?

> Why do the absolute values cancel?

Because $\gamma^a \geq \gamma^b$ whenever $a \leq b$, so you can remove the absolute values.
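To make the telescoping arithmetic above concrete, here is a toy Python sketch. It relies on my reconstruction of the stripped expressions: the stepwise penalty between squares $d$ and $d'$ steps from the red button is taken to be $|\gamma^d - \gamma^{d'}|$, and the discount value is a hypothetical stand-in.

```python
GAMMA = 0.9  # hypothetical discount rate; the post's actual value was lost

def red_value(d):
    """Attainable value of the auxiliary 'red button' reward, d steps away."""
    return GAMMA ** d

def path_penalty(distances):
    """Summed stepwise penalties along a path; telescopes to the gap between
    the closest and furthest approach when the distance changes monotonically."""
    return sum(abs(red_value(a) - red_value(b))
               for a, b in zip(distances, distances[1:]))

# Short route: the distance to the red button shrinks from 7 to 3 squares;
# the sum of stepwise terms equals the single telescoped difference.
print(path_penalty([7, 6, 5, 4, 3]))
print(red_value(3) - red_value(7))
```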
2021-01-27 14:36:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8033379912376404, "perplexity": 1197.3643420879796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704824728.92/warc/CC-MAIN-20210127121330-20210127151330-00282.warc.gz"}
http://mathematica.stackexchange.com/questions/47739/a-math-function-similar-to-latticereduce-for-finding-linear-independent-basis?answertab=votes
# A math function similar to "LatticeReduce" for finding a linearly independent basis

We would like to ask about a math function similar to "LatticeReduce" for finding a linearly independent basis. The input is a list of vectors $M=\{v_1,v_2,v_3,\dots\}$, with the following property: there is a subset $O=\{w_1,w_2,w_3,\dots\}\subset M$ such that $O$ is a basis, i.e. linearly independent and spanning $M$, $$v_i=\sum_a c_{ia} w_a.$$ Moreover, the coefficients $c_{ia}$ are non-negative integers. An explicit example is the following: $$M=\left( \begin{array}{ccccc} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 & 1 \\ 1 & 1 & -1 & 1 & -1 \\ 1 & 1 & 1 & -1 & -1 \\ 2 & -2 & 0 & 0 & 0 \\ 2 & 2 & 2 & 0 & 0 \\ 2 & 2 & 0 & 2 & 0 \\ 2 & 2 & 0 & 0 & 2 \\ 2 & 2 & -2 & 0 & 0 \\ 2 & 2 & 0 & -2 & 0 \\ 2 & 2 & 0 & 0 & -2 \end{array} \right),\quad O=\left( \begin{array}{ccccc} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 & 1 \\ 1 & 1 & -1 & 1 & -1 \\ 1 & 1 & 1 & -1 & -1 \\ 2 & -2 & 0 & 0 & 0 \end{array} \right),$$ where $O$ is the first 5 rows of $M$ and $$M[[6]]=O[[1]]+O[[4]],\, M[[7]]=O[[1]]+O[[3]],\,\dots$$ In Mathematica form, the example is m = {{1, 1, 1, 1, 1}, {1, 1, -1, -1, 1}, {1, 1, -1, 1, -1}, {1, 1, 1, -1, -1}, {2, -2, 0, 0, 0}, {2, 2, 2, 0, 0}, {2, 2, 0, 2, 0}, {2, 2, 0, 0, 2}, {2, 2, -2, 0, 0}, {2, 2, 0, -2, 0}, {2, 2, 0, 0, -2}}; May we know how to write a function/program such that, given such an $M$ as input, it will output the desired $O$? For this example the built-in function LatticeReduce works well, but we are not sure it always works.

- (1) It would be helpful if your "example" were comprised of cut-and-pastable Mathematica input. (2) What is it about LatticeReduce that you are not sure always works? (3) Are you looking for a basis over Q or over Z? If over Q, just use RowReduce. – Daniel Lichtblau May 13 '14 at 14:33
Also there is LatticeReduce[DeleteCases[HermiteDecomposition[m][[2]], {0 ..}]] – Daniel Lichtblau May 13 '14 at 16:00
I should say I'm looking for a basis over N. LatticeReduce gives the basis over Z. – Tian Lan May 14 '14 at 2:29
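As a quick numerical cross-check of the example (a sketch in Python rather than Mathematica): solve for the coefficients of each row of $M$ over the candidate basis $O$ and verify that they are all non-negative integers.

```python
import numpy as np

M = np.array([[1, 1, 1, 1, 1], [1, 1, -1, -1, 1], [1, 1, -1, 1, -1],
              [1, 1, 1, -1, -1], [2, -2, 0, 0, 0], [2, 2, 2, 0, 0],
              [2, 2, 0, 2, 0], [2, 2, 0, 0, 2], [2, 2, -2, 0, 0],
              [2, 2, 0, -2, 0], [2, 2, 0, 0, -2]], dtype=float)
O = M[:5]  # candidate basis: the first five rows

# Solve O^T c = v for each row v of M, i.e. v = sum_a c_a * w_a.
C = np.linalg.solve(O.T, M.T).T
ok = np.allclose(C, np.rint(C)) and (np.rint(C) >= -1e-9).all()
print(np.rint(C).astype(int))
print("all coefficients are non-negative integers:", ok)
```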
2016-04-29 06:09:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6753426194190979, "perplexity": 537.2323134713207}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860110764.59/warc/CC-MAIN-20160428161510-00171-ip-10-239-7-51.ec2.internal.warc.gz"}
https://leanprover-community.github.io/archive/stream/113488-general/topic/archlinux.html
## Stream: general

### Topic: archlinux

#### Kevin Buzzard (Oct 12 2020 at 18:21):

There seem to be 5 lean packages on aur.archlinux.org (this just tripped someone up at Imperial). e.g. https://aur.archlinux.org/packages/lean-community/ (and all the other Lean packages it conflicts with). Is that really maintained by the Lean community?

#### Julian Berman (Oct 12 2020 at 18:25):

The most popular (and I guess correct) one says it's maintained by/in https://github.com/ouuan/AUR-packages

#### Frédéric Dupuis (Oct 13 2020 at 02:45):

It might be good to include a warning somewhere to advise against installing Lean via package managers -- it's very tempting and almost certainly leads to trouble.

#### Frédéric Dupuis (Oct 13 2020 at 02:47):

I even went as far as writing a PKGBUILD for mathlib at some point before I realized the error of my ways.

#### Johan Commelin (Oct 13 2020 at 05:40):

@Frédéric Dupuis You mean that you don't run pacman -Syu 5 times per day? :rofl:

#### Edward Ayers (Nov 25 2020 at 12:31):

Frédéric Dupuis said:

I even went as far as writing a PKGBUILD for mathlib at some point before I realized the error of my ways.

What were the issues?

#### Frédéric Dupuis (Nov 25 2020 at 12:44):

mathlib just moves too fast for a PKGBUILD to be convenient; using leanproject is much better adapted to the task. This way, you can have a separate copy of mathlib for every project that you can update independently, instead of a system-wide install. Besides, nowadays I only work directly on mathlib, so it would make even less sense.

#### Edward Ayers (Nov 25 2020 at 13:50):

I see. Maybe a pkgbuild that installs elan and leanproject would make sense

#### Edward Ayers (Nov 25 2020 at 13:51):

I really wish that elan, leanproject and leanpkg were the same thing

#### Patrick Massot (Nov 25 2020 at 13:58):

Merging leanproject and leanpkg would be trivial. I didn't see the point of redoing leanpkg inside leanproject, but that would be very easy. Merging this with elan would take a bit more work (but of course there is no obstruction in principle).

#### Edward Ayers (Nov 25 2020 at 14:13):

Yeah something for lean4 maybe

Last updated: May 12 2021 at 23:13 UTC
2021-05-13 00:12:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5127403140068054, "perplexity": 8429.669210447533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991413.30/warc/CC-MAIN-20210512224016-20210513014016-00208.warc.gz"}
https://socratic.org/questions/what-is-the-equation-of-the-line-passing-through-the-points-4-5-and-2-7
# What is the equation of the line passing through the points (4, -5) and (2,-7)? Aug 2, 2018 $y = x - 9$ #### Explanation: First you need to find the gradient of the line $m = \frac{- 5 + 7}{4 - 2} = \frac{2}{2} = 1$ Then, using the point gradient formula $y - {y}_{1} = m \left(x - {x}_{1}\right)$ where $\left({x}_{1} , {y}_{1}\right)$ can be either point and $m$ is the gradient So, I'm using the point $\left(4 , - 5\right)$ $\left(y + 5\right) = 1 \left(x - 4\right)$ $y + 5 = x - 4$ $y = x - 9$
2021-03-05 00:52:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.725858747959137, "perplexity": 373.38776668480824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369553.75/warc/CC-MAIN-20210304235759-20210305025759-00138.warc.gz"}
https://gmatclub.com/forum/nth-term-of-an-ap-consisting-of-only-positive-integers-is-denoted-by-279964.html
# nth term of an AP, consisting of only positive integers, is denoted by $a_n$

Difficulty: 55% (hard). Question stats: 43% correct (avg 01:02), 57% wrong (avg 02:00), based on 21 sessions.

The $$n^{th}$$ term of an AP, consisting of only positive integers, is denoted by $$a_n$$. It is given that $$a_3 = 4$$ and $$a_{n-2} = 9$$. What is the value of $$(a_n + 2n + 2)$$?

A. 31
B. 32
C. 33
D. 34
E. 35

Posted answer (26 Oct 2018):

Mean of the AP = (9 + 4)/2 = 6.5, since $$a_3$$ and $$a_{n-2}$$ are equidistant from the two ends of the progression, so $$a_3 + a_{n-2} = a_1 + a_n$$.
Sum of the first and last terms = 13.

Case 1: first term 1, last term 12, common difference = 1. This isn't possible, because the 3rd term would be 3, whereas the question says it's 4.

Case 2: first term 2, last term 11, common difference = 1. This is possible. Number of terms = 10 = n. Last term = 11, so 11 + 20 + 2 = 33. Option C.

Reply (27 Oct 2018): This is a very poor and unclear question!
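A quick Python sanity check of the Case 2 solution above:

```python
# Case 2: first term 2, common difference 1, n = 10 terms
ap = [2 + k for k in range(10)]   # the AP: 2, 3, ..., 11

assert ap[3 - 1] == 4             # a_3 = 4 (third term, 1-indexed)
assert ap[(10 - 2) - 1] == 9      # a_{n-2} = a_8 = 9
print(ap[-1] + 2 * 10 + 2)        # a_n + 2n + 2 = 33  -> option C
```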
2018-11-13 04:50:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7447525858879089, "perplexity": 5973.97670276522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741219.9/warc/CC-MAIN-20181113041552-20181113063552-00149.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/measures-central-tendency-mean-median-mode-raw-arrayed-data-from-following-cumulative-frequency-table-find-median-lower-quartile-upper-quaetile_30649
# Solution - From the Following Cumulative Frequency Table, Find: Median, Lower Quartile, Upper Quartile - ICSE Class 10 - Mathematics

Concept: Measures of Central Tendency - Mean, Median, Mode for Raw and Arrayed Data

#### Question

From the following cumulative frequency table, find: the median, the lower quartile, and the upper quartile.

Marks (less than): 10, 20, 30, 40, 50, 60, 70, 80, 90, 100
Cumulative frequency: 5, 24, 37, 40, 42, 48, 70, 77, 79, 80

#### Solution

Number of terms = 80, so the median is the 40^(th) term.

Through the 40^(th)-term mark, draw a line parallel to the x-axis which meets the ogive at A. From A, draw a perpendicular to the x-axis, which meets it at B. The value of B is the median = 40.

Lower quartile (Q_1) = 20^(th) term = 18

Upper quartile (Q_3) = 60^(th) term = 66
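The graph readings can be cross-checked numerically by linearly interpolating the ogive (a sketch; the solution itself reads the values off the curve):

```python
marks = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
cum_f = [5, 24, 37, 40, 42, 48, 70, 77, 79, 80]

def ogive_value(target):
    """Mark at which the cumulative frequency reaches `target`,
    by linear interpolation between the plotted points."""
    prev_m, prev_c = 0, 0
    for m, c in zip(marks, cum_f):
        if c >= target:
            return prev_m + (target - prev_c) * (m - prev_m) / (c - prev_c)
        prev_m, prev_c = m, c

print(ogive_value(40))  # median -> 40.0
print(ogive_value(20))  # Q1     -> ~17.9, read as 18 on the graph
print(ogive_value(60))  # Q3     -> ~65.5, read as 66 on the graph
```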
2018-12-18 18:05:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32166123390197754, "perplexity": 1915.5761501181094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829542.89/warc/CC-MAIN-20181218164121-20181218190121-00040.warc.gz"}
http://media.vorarlberg.com/fridge-shelves-kgucgv/page.php?5d2ac7=ph-of-weak-acid-and-strong-base-formula
# pH of Weak Acid and Strong Base Formula

The most important factor in calculating pH is the degree of ionization (dissociation) of the acid or base. Under the Brønsted-Lowry theory, acids are proton donors and bases are proton acceptors: in an acid-base equilibrium, a proton (H+ ion) is transferred from the acid to the base. The dissociation of a strong acid, HA → H+ + A−, is effectively complete except in its most concentrated solutions; for reference, H2SO4 is sulfuric acid, HNO3 is nitric acid, HClO4 is perchloric acid, HCl is hydrochloric acid and HI is hydriodic acid - all strong acids - while citric acid (H3C6H5O7) is weak. Not all acids and bases are strong: a 1.00 M HCl solution is about 99% ionized, whereas a 1.00 M HF solution is only about 0.42% ionized. An example of a weak acid is acetic acid (ethanoic acid), and an example of a weak base is ammonia. Strong acids therefore have a much lower pH than weak acids of the same concentration, and strong bases a much higher pH than weak bases.

## pH of a strong acid

Working out the pH of a strong acid is easy: since it is virtually 100% ionised, just work out the hydrogen-ion concentration and take the negative logarithm. For example, the pH of a 2.00 M solution of a strong acid is −log(2.00) = −0.30, and the pH of a 0.030 M solution of nitric acid is −log(0.030) ≈ 1.52. When a base is added to an acidic solution, the pH of the solution increases.

## pH of a weak acid

Finding the pH of a weak acid is more complicated, because the acid does not fully dissociate into its ions. There are two main methods of solving for the hydrogen-ion concentration. In the first, an ICE table is set up with the variable x signifying the change in concentration due to ionization; the Ka expression is then solved for x, which equals [H+] (the "five-percent rule" can be used to check whether dropping x from the denominator is justified). Example: for a 2.00 M solution of nitrous acid (HNO2, Ka = 4.5 × 10−4), x is significantly less than 2.00, so the −x in the denominator can be dropped; solving gives [H+] ≈ 0.030 M and pH ≈ 1.52. The higher pH of the 2.00 M nitrous acid compared with a 2.00 M strong acid (−0.30) is consistent with it being a weak acid. The second method is the closed-form approximation

pH = ½(pKa − log C).

Increasing dilution increases ionization and hence the pH. For a mixture of a strong acid (concentration C1) and a weak acid (concentration C2, degree of dissociation α), the hydrogen-ion concentration is [H+] = C1 + C2·α.

## pH of a weak base

The procedure for a weak base (BOH) is similar to that for a weak acid, except that the variable x represents the hydroxide-ion concentration: the Kb expression (the base dissociation constant, which indicates the strength of the base) is solved for [OH−], its negative logarithm gives the pOH, and subtracting from 14 gives the pH; this uses the relation pH + pOH = 14. Ammonia is a typical weak base: it does not itself contain hydroxide ions, but it reacts with water to produce ammonium ions and hydroxide ions. The reaction is reversible, and at any one time about 99% of the ammonia is still present as ammonia molecules. By contrast, a strong base such as KOH dissociates completely into its ions in aqueous solution; typical KOH solutions have a pH of roughly 10 to 13, with the exact value depending on concentration.

## Salts and hydrolysis

Let BA represent such a salt. Three cases arise:

- Salt of a weak acid and a strong base: the anion hydrolyses, A− + H2O ⇌ OH− + HA, so the solution is basic (pH > 7). The hydrolysis constant and the degree of hydrolysis characterize the extent of this reaction.
- Salt of a strong acid and a weak base: the solution is acidic, pH < 7.
- Salt of a weak acid and a weak base, e.g. ammonium acetate (CH3COONH4): both ions hydrolyse - NH4+ and CH3COO− react respectively with the OH− and H+ furnished by water to form NH4OH (a weak base) and CH3COOH (acetic acid).

## Titrating a weak acid with a strong base

When solving a titration problem with a weak acid and a strong base, there are certain values you want to attain: the initial pH, the pH after adding a small amount of base, the pH at half-neutralization, the pH at the equivalence point, and the pH after adding excess base. The equivalence point is where the moles of added base exactly equal the moles of acid. Because the weak acid does not fully dissociate, the pH shifts less near the equivalence point than in a strong acid-strong base titration, and the equivalence point occurs at a pH within the pH range of the stronger solution - here somewhere between 7 and 10. Buffers work on the same principle: a strong acid added to a buffer reacts with the weak base in the buffer to form a weak acid, which produces few H+ ions in solution and therefore causes only a little change in pH.

## Everyday examples

Hardware stores sell "muriatic acid", a 6 M solution of hydrochloric acid, to clean bricks and concrete, while grocery stores sell vinegar, a 1 M solution of acetic acid (CH3CO2H). Although both substances are acids, you wouldn't use muriatic acid in salad dressing, and vinegar is ineffective for cleaning bricks or concrete. Bees are beautiful creatures that carry pollen from one plant to another and help plants flourish, but they can also be troublesome when they sting you. One first-aid treatment is to apply a paste of baking soda (sodium bicarbonate) to the stung area: this weak base helps with the itching and swelling that accompany the sting. For people who are allergic to bee venom, a sting can be a serious, life-threatening problem.

Ka and Kb values have been determined for a great many acids and bases, as shown in Tables 21.5 and 21.6; these can be used to calculate the pH of any solution of a weak acid or base whose ionization constant is known. Practice calculations: http://www.sciencegeek.net/APchemistry/APtaters/pHcalculations.htm. Image credit: http://commons.wikimedia.org/wiki/File:Bees_pollenating_basil.jpg; source: http://www.ck12.org/book/CK-12-Chemistry-Concepts-Intermediate/
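A small Python sketch of the weak-acid calculation above (assuming the approximation [H+] ≈ √(Ka·C) holds, i.e. the five-percent rule is satisfied):

```python
import math

def weak_acid_ph(Ka, C):
    """pH of a weak monoprotic acid, assuming the ionized fraction x is
    small relative to C so that [H+] ~ sqrt(Ka * C)."""
    return -math.log10(math.sqrt(Ka * C))

print(weak_acid_ph(4.5e-4, 2.00))   # 2.00 M nitrous acid          -> ~1.52
print(-math.log10(0.030))           # 0.030 M nitric acid (strong) -> ~1.52
```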
2022-06-26 03:05:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5747315287590027, "perplexity": 2982.7790184391824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036363.5/warc/CC-MAIN-20220626010644-20220626040644-00505.warc.gz"}
https://chriswaites.com/2019/08/01/deep-learning-with-differential-privacy-tutorial.html
# Christopher Waites

Computer Science @ Stanford University 🌲 learning, privacy, ethics, and generative modeling. Fortunate to have been advised by Rachel Cummings.

GitHub Profile

1 August 2019

# An Introduction to Differentially Private Deep Learning

by Chris Waites

In this post, we review the definition of differential privacy, the task of deep learning and how it pertains to privacy, differentially private stochastic gradient descent, and current results in the general area.

# Differential Privacy

Finding a definition which fully satisfies one's intuitive understanding of privacy is surprisingly tricky. With a little thought, there seems to be a paradox in what we're trying to achieve: we wish to publish the results of some statistical analysis on a dataset, but we also wish for those results to convey no information about the rows of which the dataset is composed. Truly achieving this is intractable, by the Fundamental Law of Information Recovery, which states roughly that reasonably accurate answers to too many statistical queries will always allow an adversary to learn essentially complete information about the data underlying a dataset. That is, for an analysis to be useful it must necessarily convey some amount of information about the items contained within the dataset it pertains to. Hence, a reconsideration of what we wish to achieve in "preserving privacy" is called for.

One such definition is Differential Privacy, proposed by Cynthia Dwork. On a high level, the idea behind differential privacy is that a randomized algorithm which performs some statistical task on subsets of a dataset "preserves privacy" if it behaves approximately the same regardless of the inclusion or exclusion of any individual in the subset it acts on. That is, the data of each entry is hidden in the sense that the behavior of the algorithm closely resembles every possible case where the entry had not been included. Formally this is expressed as follows (source): a randomized mechanism $\mathcal{M}$ is $(\varepsilon, \delta)$-differentially private if, for every pair of datasets $D, D'$ differing in a single entry and every set of outcomes $S$,

$\Pr[\mathcal{M}(D) \in S] \leq e^{\varepsilon} \Pr[\mathcal{M}(D') \in S] + \delta$

Hence privacy is a property which an algorithm acting on a database must achieve; it is not a property of a dataset itself, as is the case in other formalizations such as k-anonymity.

# Deep Learning

Deep learning is currently one of the most predominant forms of statistical analysis and has been shown to be remarkably effective for a variety of tasks. Deep neural networks, in their standard form, define a function composed of a sequence of layers, where each layer represents an operation to be performed on the output of the previous layer. Typically the goal associated with such models is to find the set of parameters which map a set of inputs to a set of outputs in a way which minimizes some function, referred to as the error (or loss) function. A popular method for finding such parameters is stochastic gradient descent.

When conducting stochastic gradient descent, one iteratively updates the parameters of the model by sampling an individual input-output pair from the dataset and partially applying its values to the error function, so that the gradient of the error with respect to the parameters of the model can be computed. One then updates the parameters of the model in the direction opposite to the gradient, in turn minimizing the error function with respect to that example.
Formally, if we let θ0 be the randomly initialized parameters of the model, θt be the parameters of the model at iteration t, (xt, yt) be our sampled input-output pair, L be our error function, and ηt be the learning rate, we iteratively apply the following update rule:

$\theta_{t+1} = \theta_t - \eta_t \nabla_{\theta_t} L(\theta_t, x_t, y_t)$

In practice, though, it is more common to opt for minibatch gradient descent. Rather than calculating gradients with respect to individual examples, one uniformly samples a subset of B examples without replacement, calculates the gradient with respect to each example, and applies the average of the gradients to the model. This corresponds to the following update rule:

$\theta_{t+1} = \theta_t - \frac{\eta_t}{|B|} \sum_{i = 1}^{|B|} \nabla_{\theta_t} L(\theta_t, x_{t, i}, y_{t, i})$

# Differentially Private Stochastic Gradient Descent

Abadi et al. in Deep Learning with Differential Privacy detail the differentially private stochastic gradient descent (DPSGD) algorithm, which makes traditional SGD differentially private. To describe it, we need to introduce a number of augmentations to the standard SGD procedure.

First we introduce C, referred to as the clipping parameter. This value acts as an upper bound on the L2-norm of each gradient update, achieved by applying the function [x]C = x / max(1, ||x||2 / C) to each gradient observed throughout training. We also introduce σ, referred to as the noise multiplier. This value controls the ratio between the clipping parameter and the standard deviation of the Gaussian noise applied to each gradient update after clipping.

Second, we have to augment our typical method for sampling examples from the dataset. Typical sampling in practice is achieved by shuffling the dataset at hand and running through partitions of size B, such that each example is viewed by the model exactly once per epoch. In the standard model of DPSGD, by contrast, each minibatch corresponds to a sample in which each example has probability B / N of being included. Hence the minibatch has expected size B, but not necessarily actual size B. Learning via DPSGD using the former sampling method with tight privacy guarantees is currently an open problem.

With all this in mind, the DPSGD update rule becomes the following:

$\theta_{t+1} = \theta_t - \frac{\eta_t}{|B|} \left( \left( \sum_{i = 1}^{|B|} [ \nabla_{\theta_t} L(\theta_t, x_{t, i}, y_{t, i})]_C \right) + N(0, \sigma^2 C^2 I) \right)$

Often in practice we find that calculating the per-example gradient is too computationally expensive to be feasible. So McMahan et al. in A General Approach to Adding Differential Privacy to Iterative Training Procedures detail a slight refinement to the above update rule which acknowledges the notion of microbatches, i.e. partitions of a minibatch. In partitioning minibatches into microbatches, we find that we can compute and clip the average gradient with respect to each microbatch rather than the per-example gradient, while still being able to track a measurable privacy guarantee. This results in a slightly more general equation, where b represents the size of each microbatch and x(t, i, j), y(t, i, j) represents, at the tth iteration, the ith input-output pair of the jth microbatch.
Often in practice we find that calculating the per-example gradient is too computationally expensive to be feasible. So McMahan et al. in A General Approach to Adding Differential Privacy to Iterative Training Procedures detail a slight refinement of the above update rule which introduces the notion of microbatches, i.e. partitions of a minibatch. In partitioning minibatches into microbatches, we can compute and clip the average gradient with respect to each microbatch rather than each individual example, while still being able to track a measurable privacy guarantee. This results in a slightly more general equation, where b represents the size of each microbatch and $x_{t, i, j}, y_{t, i, j}$ represents, at the t-th iteration, the i-th input-output pair of the j-th microbatch: $\theta_{t+1} = \theta_t - \frac{\eta_t b}{B} \left( \left( \sum_{j = 1}^{B / b} \left[ \frac{1}{b} \sum_{i = 1}^{b} \nabla_{\theta_t} L(\theta_t, x_{t, i, j}, y_{t, i, j}) \right]_C \right) + N(0, \sigma^2 C^2 I) \right)$

One will notice that the per-example update rule simply corresponds to the special case of the revised update rule where b = 1. In addition, the standard deviation of the noise ultimately applied to the parameters is smaller when we let b be small, meaning we get an increase in utility at the expense of additional incurred runtime in a standard context.

In order to calculate the privacy loss corresponding to k executions of the above update rule, Abadi et al. detail the moments accountant as a method to report privacy loss over time. A full deep dive into the foundations backing the moments accountant is likely outside the scope of this post, but on a high level it can be thought of as a black box which takes in values characterizing your training loop (sampling probabilities, number of minibatches, delta, etc.) and outputs epsilon. Importantly, their method yields much tighter bounds on the privacy loss than what is reported via the strong composition theorem. If you're interested in learning more, the algorithm was originally introduced in Abadi et al. and has a corresponding implementation within TensorFlow Privacy.

A final observation concerning DPSGD is that several variants of and modifications to the update rule have been proposed, often centered around more intelligent strategies for managing how clipping bounds and Gaussian noise are applied to gradients. For example, as long as you're careful and make the correct privacy considerations, you can vary the clipping bound by iteration via adaptive clipping, which may alleviate the practical difficulty of selecting a good clipping value.

# Related Work

Given DPSGD as a generic privacy-preserving primitive which makes relatively few assumptions about the training context, it has been applied to a number of interesting learning tasks. For example, Xie et al. in Differentially Private Generative Adversarial Network detail an augmentation of traditional Generative Adversarial Networks (GANs) which makes their training process differentially private. The tl;dr is to train the discriminator in a differentially private manner via DPSGD and leave the generator training untouched: the only information the generator is ever exposed to are the outputs of the discriminator, and hence updating the generator can be considered a form of post-processing which incurs no additional privacy loss. With this, they were able to train models which could generate synthetic data while achieving explicit privacy guarantees, from MNIST to discrete medical data.

Private Aggregation of Teacher Ensembles, or PATE, is an alternative to DPSGD for conducting differentially private learning. The key idea is, rather than training a single strong model which captures a complex criterion in a differentially private manner, to train a set of weaker, non-private models on disjoint partitions of the data and then perform a noisy aggregation of their predictions. Overall the method has been shown to be quite effective, at the expense of some assumptions about the training procedure. There has even been an application of this technique to GANs via PATE-GAN.
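To give a flavor of PATE's aggregation step, here is a small numpy sketch of noisy voting across teachers. It is a simplified illustration of the idea only (the published method chooses the noise scale with care and tracks the privacy cost of every query); the names and the Laplace parameter below are our own choices.

```python
import numpy as np

def noisy_aggregate(teacher_preds, num_classes, gamma, rng):
    # Tally how many teachers predicted each class,
    votes = np.bincount(teacher_preds, minlength=num_classes)
    # perturb the counts with Laplace noise of scale 1/gamma,
    noisy_votes = votes + rng.laplace(scale=1.0 / gamma, size=num_classes)
    # and release only the argmax, never the raw counts.
    return int(np.argmax(noisy_votes))

# 100 teachers voting over 10 classes on a single student query.
rng = np.random.default_rng(0)
teacher_preds = rng.integers(0, 10, size=100)
print(noisy_aggregate(teacher_preds, num_classes=10, gamma=0.1, rng=rng))
```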
There have also been a number of attempts to rigorously associate differential privacy with resistance to overfitting. Recall that in making a training algorithm differentially private via DPSGD, we are asserting that the probability distribution of the eventual parameters of the model is not too different from the alternative reality in which any given datapoint did not exist in the dataset. One could argue that this indifference is, at the very least, in the direction of a sensible formalization of what it means for a model to resist overfitting. It is also reassuring that, in order to attain this property, we do things like clipping and adding noise to gradients, which are not unreasonable tactics for limiting overfitting in practice. Some work has been done on formalizing the connection between differential privacy and overfitting, for example Carlini et al. in The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks.

# Conclusion

Hopefully this post has served as an enlightening introduction to differentially private deep learning and the algorithms we have at our disposal. To read more, a comprehensive introduction to differential privacy in general can be found here, and as it pertains to machine learning in particular, I would suggest this post by Nicolas Papernot. In addition, if you find any of the statements within this post to be misleading or incorrect, please reach out and let me know so that any errors can be remedied.
2020-07-14 22:24:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5010049343109131, "perplexity": 532.362980298369}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657151761.87/warc/CC-MAIN-20200714212401-20200715002401-00319.warc.gz"}
https://www.springerprofessional.de/commonotonicity-and-time-consistency-for-lebesgue-continuous-mon/19317326?fulltextView=true
30.06.2021 | Issue 3/2021 | Open Access

# Commonotonicity and time-consistency for Lebesgue-continuous monetary utility functions

Journal: Finance and Stochastics > Issue 3/2021

Author: Freddy Delbaen

## 1 Introduction and notation

Although our results are valid in more general filtrations, we start with a two-period model. In this setting, we work with a probability space equipped with three sigma-algebras, $$(\Omega ,{\mathcal{F}}_{0}\subseteq {\mathcal{F}}_{1}\subseteq {\mathcal{F}}_{2},{\mathbb{P}})$$. The sigma-algebra $${\mathcal{F}}_{0}$$ is supposed to be trivial, i.e., every $$A\in {\mathcal{F}}_{0}$$ satisfies $${\mathbb{P}}[A]=0\text{ or } 1$$, whereas $${\mathcal{F}}_{2}$$ is supposed to express innovations with respect to $${\mathcal{F}}_{1}$$. Since we do not put topological properties on the set $$\Omega$$, we make precise definitions later that do not use conditional probability kernels. But essentially, we could say that we suppose that conditionally on $${\mathcal{F}}_{1}$$, the probability ℙ is atomless on $${\mathcal{F}}_{2}$$. We shall show that such a hypothesis implies that there is an atomless sigma-algebra $${\mathcal{B}}\subseteq {\mathcal{F}}_{2}$$ which is independent of $${\mathcal{F}}_{1}$$. The space $$L^{\infty }({\mathcal{F}}_{i})$$ is the space of bounded $${\mathcal{F}}_{i}$$-measurable random variables modulo equality almost surely (a.s.).

We say that two random variables $$\xi ,\eta$$ are commonotonic (see footnote 1) if there are two nondecreasing functions $$f,g\colon {\mathbb{R}}\rightarrow {\mathbb{R}}$$ and a random variable $$\zeta$$ such that $$\xi =f(\zeta ), \eta =g(\zeta )$$. Commonotonicity can be seen as the opposite of diversification. If $$\zeta$$ increases, then both $$\xi$$ and $$\eta$$ increase (or, better, do not decrease). By the way, if $$\xi$$ and $$\eta$$ are commonotonic, then one can choose $$\zeta =\xi +\eta$$; see Delbaen [7, Chap. 2.4]. It can be shown that in this case one can choose representatives – still denoted by $$(\xi ,\eta )$$ – such that $$(\xi (\omega )-\xi (\omega '))(\eta (\omega )-\eta (\omega '))\ge 0$$ for all $$\omega ,\omega '$$. Since we do not need this result, we do not include a proof. We say that a set $$E\subseteq {\mathbb{R}}^{2}$$ is commonotonic if $$(x,y),(x',y')\in E$$ implies $$(x-x')(y-y')\ge 0$$. In convex function theory, such sets are also called monotone or monotonic sets. Random variables $$\xi ,\eta$$ are commonotonic if and only if the support of the image measure of $$(\xi ,\eta )$$ is a commonotonic set.
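As a quick illustration of the definition (a toy example of ours, for orientation only): for any random variable $$\zeta$$, the pair $$\xi =\zeta$$, $$\eta =\zeta ^{3}$$ is commonotonic, both being nondecreasing functions of $$\zeta$$, and indeed $$(\xi (\omega )-\xi (\omega '))(\eta (\omega )-\eta (\omega '))\ge 0$$ for all $$\omega ,\omega '$$. In contrast, $$\xi =\zeta$$, $$\eta =-\zeta$$ is not commonotonic unless $$\zeta$$ is a.s. constant: the support of the image measure lies on a strictly decreasing line, and for two distinct points on that line we get $$(x-x')(y-y')=-(x-x')^{2}<0$$.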
The present paper deals with time-consistent utility functions. This means that for $$0\le i< j\le 2$$, there are functions $$u_{i,j}\colon L^{\infty }({\mathcal{F}}_{j})\rightarrow L^{\infty }({\mathcal{F}}_{i})$$ such that we have $$u_{0,2}=u_{0,1}\circ u_{1,2}$$. These utility functions satisfy the following properties; see [7, Chap. 11] for more information on the relation between these properties:

1) $$u_{i,j}\colon L^{\infty }({\mathcal{F}}_{j})\rightarrow L^{\infty }({\mathcal{F}}_{i})$$, and if $$\xi \ge 0$$, then also $$u_{i,j}(\xi )\ge 0$$, and $$u_{i,j}(0)=0$$.

2) For $$\xi ,\eta \in L^{\infty }({\mathcal{F}}_{j})$$ and $$0\le \lambda \le 1$$ and $${\mathcal{F}}_{i}$$-measurable, we have $$u_{i,j}\big(\lambda \xi +(1-\lambda )\eta \big)\ge \lambda u_{i,j}(\xi )+(1-\lambda ) u_{i,j}(\eta ).$$

3) Since commonotonicity implies (as is easily seen) positive homogeneity, we use a stronger property and suppose coherence. For $$\xi \in L^{\infty }({\mathcal{F}}_{j})$$ and $$\lambda \geq 0$$ and $${\mathcal{F}}_{i}$$-measurable, we have $$u_{i,j}(\lambda \xi )=\lambda u_{i,j}(\xi ).$$

4) For $$\xi \in L^{\infty }({\mathcal{F}}_{j})$$ and $$a\in L^{\infty }({\mathcal{F}}_{i})$$, we have $$u_{i,j}(\xi +a)=u_{i,j}(\xi ) + a.$$

5) We need Lebesgue-continuity, which means that if $$(\xi _{n}) \subseteq L^{\infty }({\mathcal{F}}_{j})$$ is a uniformly bounded sequence such that $$\xi _{n}\rightarrow \eta$$ in probability, then $$u_{i,j}(\xi _{n})$$ tends to $$u_{i,j}(\eta )$$ in probability.

6) The Lebesgue property is stronger than the Fatou property, which says that for a sequence $$(\xi _{n}) \subseteq L^{\infty }$$ such that a.s. $$\xi _{n}\downarrow \eta \in L^{\infty }$$, we have $$u_{i,j}(\xi _{n})\rightarrow u_{i,j}(\eta )$$ a.s.

The utility functions we need are coherent and hence we can use their dual representation; see Delbaen [6, end of the proof of Theorem 6]. This means that there is a uniquely defined convex closed set $${\mathcal{S}}\subseteq L^{1}$$ of probability measures, absolutely continuous with respect to ℙ, such that $$u_{0,2}(\xi )=\inf _{{\mathbb{Q}}\in {\mathcal{S}}}{\mathbb{E}}_{\mathbb{Q}}[\xi ].$$ The set $${\mathcal{S}}$$ is viewed as a subset of $$L^{1}$$ via the Radon–Nikodým theorem. Lebesgue-continuity is equivalent to the weak compactness of $${\mathcal{S}}$$.

We suppose that our utility functions are relevant, i.e., for each $$A$$ with $${\mathbb{P}}[A]>0$$, we have $$u(-\mathbf {1}_{A})<0$$; see [7, Chap. 4.14]. By the Halmos–Savage theorem, this means that $${\mathcal{S}}$$ contains an equivalent probability measure. We need this property in order to avoid some problems with negligible sets appearing in the definition and with comparisons of conditional expectations. Without further notice, we always assume that our utility functions are relevant and Lebesgue-continuous. These assumptions are not always needed; sometimes Fatou-continuity is sufficient. Since we want to put more emphasis on the methods of proof, we do not aim for the most general results.

One may ask in which way the utility functions $$u_{i,j}$$ can be constructed from the utility function $$u_{0,2}$$. The construction is easier when $$u_{0,2}$$ is relevant. The Fatou or Lebesgue property is less important for this development. As shown in [7, Chap. 11], there is a way to check whether the utility function $$u_{0,2}$$ can be embedded in a time-consistent family of utility functions. To do this, we introduce the acceptability cones \begin{aligned} {\mathcal{A}}_{0,2} &=\{\xi \in L^{\infty }({\mathcal{F}}_{2}) \colon u_{0,2}(\xi )\ge 0\}, \\ {\mathcal{A}}_{0,1}&=\{\xi \in L^{\infty }({\mathcal{F}}_{1})\colon u_{0,2}(\xi )\ge 0\}, \\ {\mathcal{A}}_{1,2}&=\{\xi \in L^{\infty }({\mathcal{F}}_{2})\colon \text{for all } A\in {\mathcal{F}}_{1} , u_{0,2}(\xi \mathbf {1}_{A})\ge 0 \}. \end{aligned} The necessary and sufficient condition for the existence of a time-consistent extension is $${\mathcal{A}}_{0,2}={\mathcal{A}}_{0,1}+{\mathcal{A}}_{1,2}$$.
If this is fulfilled, we put $$u_{1,2}(\xi )=\mathop{\mathrm{ess\,inf}}\{\eta \in L^{\infty }({\mathcal{F}}_{1})\colon \xi -\eta \in {\mathcal{A}}_{1,2}\},$$ and $$u_{0,1}$$ is simply the restriction of $$u_{0,2}$$ to $$L^{\infty }({\mathcal{F}}_{1})$$. This gives sense to expressions such as “$$u_{0,2}$$ is time-consistent”.

Already in the case where the utility functions are expected value and conditional expectations, the main theorem leads to the following result. (The notion “conditionally atomless” will be explained and analysed in the next section.)

Theorem 1.1 If $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$, then for any couple $$(f,g)$$ of $${\mathcal{F}}_{1}$$-measurable finite-valued random variables, there is a commonotonic couple $$(\xi ,\eta )$$ of $${\mathcal{F}}_{2}$$-measurable random variables such that (in an extended sense, made precise later) $$f={\mathbb{E}}[\xi \, | \, {\mathcal{F}}_{1}], g={\mathbb{E}}[\eta \, | \, {\mathcal{F}}_{1}]$$. Furthermore, for every norm on $${\mathbb{R}}^{2}$$, there is a constant $$C$$ such that $$\Vert (\xi ,\eta )\Vert \le C \Vert (f,g)\Vert$$ almost surely.

Both concepts, time-consistency and commonotonicity, are important in the theory of risk evaluation. The concept of time-consistency (and -inconsistency) was introduced and investigated by Koopmans [12]. The role of commonotonicity found its way into insurance and is present in several papers. The use of Choquet integration as a premium principle was emphasised by Denneberg [9], who was inspired by the pioneering work of Yaari [21]. Schmeidler proved the relation between commonotonic principles, convex games and Choquet integration [14]. Modern uses can be found for instance in Wang et al. [17] and Wang [18]. For more references and different proofs of these results, we refer to [7, Chap. 7]. Although commonotonicity seems to be a desirable property, there might be some difficulties when insurance contracts are priced in this way; see Castagnoli et al. [5] for some unexpected consequences. The concept of risk measures (up to sign changes, monetary utility functions) was introduced in Artzner et al. [1, 2].

Using the general version of Theorem 1.1, we shall show that except in very restrictive cases, a utility function $$u_{0,2}$$ cannot be time-consistent and commonotonic at the same time. It seems that time-consistency is a strong property that excludes some other desirable properties. For instance, in Kupper and Schachermayer [11] it is shown that in a filtration with innovations (comparable to the requirement of being conditionally atomless), utility functions that are time-consistent and law-determined are necessarily of entropic type. We refer to [11] for the details and the precise form of the innovations. The present paper studies time-consistent utility functions that might depend on past history and are not necessarily law-determined. The methods we use are different from the approaches used for law-determined or law-invariant utility functions. Among the many papers on these utility functions, we could refer the reader to the cited papers and to e.g. Bellini et al. [3], Bellini et al. [4], Wang and Ziegel [19], Weber [20] and Ziegel [22].
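For orientation, let us restate in our own words the classical fact behind several of the papers just cited (Schmeidler [14]; see also [7, Chap. 7]): a monotone functional $$u$$ on $$L^{\infty }$$ that is additive on commonotonic pairs is a Choquet integral with respect to the capacity $$v(A)=u(\mathbf {1}_{A})$$, i.e., for bounded $$\xi \ge 0$$, $$u(\xi )=\int _{0}^{\infty }v(\{\xi \ge x\})\,dx,$$ and general bounded $$\xi$$ are handled by shifting with a constant, using $$u(\xi +c)=u(\xi )+c$$. The distortions appearing in Remark 6.4 below are exactly of this form.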
## 2 Atomless extension of sigma-algebras

In this section, we work with a probability space $$(\Omega ,{\mathcal{F}}_{2},{\mathbb{P}})$$ equipped with the filtration $${\mathcal{F}}_{0}\subseteq {\mathcal{F}}_{1}\subseteq {\mathcal{F}}_{2}$$.

Definition 2.1 We say that $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$ if for every $$A\in {\mathcal{F}}_{2}$$, there exists a set $$B\subseteq A$$, $$B\in {\mathcal{F}}_{2}$$, such that $$0< {\mathbb{E}}[\mathbf {1}_{B}\, | \, {\mathcal{F}}_{1}]<{\mathbb{E}}[\mathbf {1}_{A} \, | \, {\mathcal{F}}_{1}]$$ on the set $$\{{\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]>0\}$$.

If the conditional expectation can be calculated with a – under extra topological conditions – regular probability kernel, say $$K(\omega , A)$$, then the above definition is a measure-theoretic way of saying that the probability measure $$K(\omega , \cdot )$$ is atomless for almost every $$\omega \in \Omega$$. The precise relation between these two notions is not the topic of this paper; see Delbaen [8] for the details.

Theorem 2.2 $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$ if for every $$A\in {\mathcal{F}}_{2}$$ with $${\mathbb{P}}[A]>0$$, there is $$B\subseteq A$$, $$B\in {\mathcal{F}}_{2}$$, such that $${\mathbb{P}}\big[0< {\mathbb{E}}[\mathbf {1}_{B}\, | \, {\mathcal{F}}_{1}]< {\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]\big]>0.$$

Proof The proof is a standard exhaustion argument. For completeness, we give the details. Let $${\mathcal{D}}$$ be the collection of $${\mathcal{F}}_{1}$$-measurable sets given by $${\mathcal{D}}=\big\{ \{0< {\mathbb{E}}[\mathbf {1}_{B}\, | \, {\mathcal{F}}_{1}]< {\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]\}\colon B\subseteq A, B \in {\mathcal{F}}_{2} \big\} .$$ We show that there is a biggest set in $${\mathcal{D}}$$ and that this must then equal $$\{{\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]>0\}$$. To show that there is a biggest set in $${\mathcal{D}}$$, it is sufficient to show that $${\mathcal{D}}$$ is stable under countable unions. Let $$(D_{n})$$ be a sequence in $${\mathcal{D}}$$ and suppose that for each $$n$$, we have a set $$B_{n}\subseteq A$$, $$B_{n}\in {\mathcal{F}}_{2}$$, such that $$D_{n}=\{ 0<{\mathbb{E}}[\mathbf {1}_{B_{n}}\, | \, {\mathcal{F}}_{1}]<{\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]\}$$. Now take $$B=\bigcup _{n\in {\mathbb{N}}} \bigg( B_{n}\cap \Big(D_{n}\setminus \big(\bigcup _{k=1}^{n-1}D_{k}\big)\Big) \bigg).$$ It is easy to check that $$\{ 0<{\mathbb{E}}[\mathbf {1}_{B}\, | \, {\mathcal{F}}_{1}]<{\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]\}=\bigcup _{n\in {\mathbb{N}}} D_{n}$$ and therefore $$\bigcup _{n\in {\mathbb{N}}} D_{n}\in {\mathcal{D}}$$.

Let now $$D=\{ 0<{\mathbb{E}}[\mathbf {1}_{B}\, | \, {\mathcal{F}}_{1}]<{\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]\}$$ be a maximum in $${\mathcal{D}}$$. Suppose that $${\mathbb{P}}[\{{\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]>0\} \setminus D ]>0$$. This implies that $${\mathbb{P}}[A\setminus D]>0$$. According to the hypothesis of the theorem, there will be a set $$B'\subseteq A\setminus D$$, $$B'\in {\mathcal{F}}_{2}$$, with $$D' = \{ 0<{\mathbb{E}}[\mathbf {1}_{B'}\, | \, {\mathcal{F}}_{1}]<{\mathbb{E}}[\mathbf {1}_{A\setminus D}\, | \, {\mathcal{F}}_{1}] \}$$ having nonzero probability. Since $$D\cup D'\in {\mathcal{D}}$$ and $$D\cap D'=\emptyset$$, the element $$D$$ is not a maximum, which is a contradiction. □
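Before the main result, a standard example may serve as orientation (our own spelling-out; the same space reappears in Remark 6.4 below). Take $$\Omega =[0,1]\times [0,1]$$ with the product of the Borel sigma-algebras as $${\mathcal{F}}_{2}$$ and the product Lebesgue measure as ℙ, and let $${\mathcal{F}}_{1}$$ be generated by the first coordinate. The sigma-algebra ℬ generated by the second coordinate is atomless and independent of $${\mathcal{F}}_{1}$$, so by the “if” part of the theorem below, $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$: conditionally on the first coordinate, the second coordinate remains uniformly distributed.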
The main result of this section is the following.

Theorem 2.3 $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$ if and only if there exists an atomless sigma-algebra $${\mathcal{B}}\subseteq {\mathcal{F}}_{2}$$ that is independent of $${\mathcal{F}}_{1}$$.

The “if” part is easy, but requires some continuity argument. Because ℬ is atomless, there is a ℬ-measurable random variable $$U$$ uniformly distributed on $$[0,1]$$. The sets $$B_{t}=\{U\le t\}, 0\le t \le 1$$, form an increasing family of sets with $${\mathbb{P}}[B_{t}]=t$$. Fix $$A\in {\mathcal{F}}_{2}$$ and let $$F=\{ 0 < {\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]\}$$. We may suppose that $${\mathbb{P}}[F]>0$$ since otherwise there is nothing to prove. We now show that there is $$t\in (0,1)$$ with $${\mathbb{P}}[ 0 < {\mathbb{E}}[\mathbf {1}_{A\cap B_{t}} \, | \, {\mathcal{F}}_{1}]< {\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}] ] > 0$$; by Theorem 2.2, this then implies that $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$.

Obviously for $$0\le s\le t \le 1$$, we have by independence of ℬ and $${\mathcal{F}}_{1}$$ that $$\Vert {\mathbb{E}}[\mathbf {1}_{A\cap B_{t}} \, | \, {\mathcal{F}}_{1}] - {\mathbb{E}}[\mathbf {1}_{A\cap B_{s}} \, | \, {\mathcal{F}}_{1}]\Vert _{\infty }\le \Vert {\mathbb{E}}[\mathbf {1}_{B_{t}\setminus B_{s}} \, | \, {\mathcal{F}}_{1}]\Vert _{\infty }= t-s.$$ It follows that there is a set of measure 1, say $$\Omega '$$, such that for all $$s\le t$$, $$s,t$$ rational, and all $$\omega \in \Omega '$$, $${\mathbb{E}}[\mathbf {1}_{A\cap B_{t}}\, | \, {\mathcal{F}}_{1}](\omega )$$ can be taken to satisfy $$| {\mathbb{E}}[\mathbf {1}_{A\cap B_{t}} \, | \, {\mathcal{F}}_{1}](\omega ) - {\mathbb{E}}[\mathbf {1}_{A\cap B_{s}} \, | \, {\mathcal{F}}_{1}](\omega ) | \le t-s.$$ For each $$\omega \in \Omega '$$, we can extend the function $$[0,1] \cap {\mathbb{Q}}\ni q \mapsto {\mathbb{E}}[\mathbf {1}_{A\cap B_{q}}\, | \, {\mathcal{F}}_{1}](\omega )$$ to a continuous function on $$[0,1]$$. The resulting continuous extension then represents the equivalence classes of random variables $$({\mathbb{E}}[\mathbf {1}_{A\cap B_{t}} \, | \, {\mathcal{F}}_{1}])_{t \in [0,1]}$$. For $$t=0$$, we have zero, and for $$t=1$$, we find $${\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]$$. Because the trajectories are continuous for $$\omega \in \Omega '$$, a simple application of Fubini’s theorem shows that the real-valued function $$t\mapsto {\mathbb{P}}\left [ 0 < {\mathbb{E}}[\mathbf {1}_{A\cap B_{t}} \, | \, {\mathcal{F}}_{1}]< {\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}] \right ]$$ becomes strictly positive for some $$t$$. With some extra work – done later – one can even show that there is $$G\subseteq A$$ such that $${\mathbb{E}}[\mathbf {1}_{G}\, | \, {\mathcal{F}}_{1}]= (1/2){\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]$$.

For completeness, let us now give the details of the application of Fubini’s theorem. Suppose to the contrary that for all $$t\in [0,1]$$, we have $${\mathbb{P}}\left [ 0 < {\mathbb{E}}[\mathbf {1}_{A\cap B_{t}} \, | \, {\mathcal{F}}_{1}]< {\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}] \right ] =0.$$ Then on the product space $$[0,1]\times \Omega '$$, we find that the (clearly measurable) set $$\{(t,\omega )\colon 0 < {\mathbb{E}}[\mathbf {1}_{A\cap B_{t}} \, | \, {\mathcal{F}}_{1}](\omega )< {\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}] (\omega )\}$$ has $$(m\times {\mathbb{P}})$$-measure zero ($$m$$ denotes Lebesgue measure).
By Fubini’s theorem, we have that for almost all $$\omega \in \Omega '$$, the set $$\{t \colon 0 < {\mathbb{E}}[\mathbf {1}_{A\cap B_{t}} \, | \, {\mathcal{F}}_{1}](\omega )< {\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}] (\omega ) \}$$ must have Lebesgue measure zero. However, for $$\omega \in \Omega '$$ with $${\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}](\omega )>0$$ – a set of positive probability – this contradicts the continuity of the mapping $$t\mapsto {\mathbb{E}}[\mathbf {1}_{A\cap B_{t}} \, | \, {\mathcal{F}}_{1}](\omega ).$$

The proof of the “only if” part is broken down into several steps stated in the lemmas that follow. Without further notice, we always suppose that $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$.

Lemma 2.4 Suppose $$A\in {\mathcal{F}}_{1}$$ and $$C\subseteq A$$, $$C\in {\mathcal{F}}_{2}$$, is such that $${\mathbb{E}}[\mathbf {1}_{C}\, | \, {\mathcal{F}}_{1}]>0$$ on $$A$$. Then we can construct a decreasing sequence $$(B_{n})_{n\ge 0}$$ of sets $$B_{n}\subseteq C$$, $$B_{n}\in {\mathcal{F}}_{2}$$, such that $$0<{\mathbb{E}}[\mathbf {1}_{B_{n}}\, | \, {\mathcal{F}}_{1}]\le 2^{-n}$$ on $$A$$.

Proof The statement is obviously true for $$n=0$$ since we can take $$B_{0}=C$$. We now proceed by induction and suppose the statement holds for $$n$$. So the set $$B_{n}\subseteq A$$ satisfies $$0<{\mathbb{E}}[\mathbf {1}_{B_{n}}\, | \, {\mathcal{F}}_{1}]\le 2^{-n}$$ on $$A$$. Clearly, $$A\subseteq \{{\mathbb{E}}[\mathbf {1}_{B_{n}}\, | \, {\mathcal{F}}_{1}]>0 \}$$. By assumption, there is a set $$D\subseteq B_{n}$$, $$D\in {\mathcal{F}}_{2}$$, such that on $$A\subseteq \{ {\mathbb{E}}[\mathbf {1}_{A}\, | \, {\mathcal{F}}_{1}]>0\}$$, we have $$0< {\mathbb{E}}[\mathbf {1}_{D}\, | \, {\mathcal{F}}_{1}]< {\mathbb{E}}[\mathbf {1}_{B_{n}} \, | \, {\mathcal{F}}_{1}].$$ We now take \begin{aligned} B_{n+1}& = \bigg(D\cap \bigg\{ {\mathbb{E}}[\mathbf {1}_{D}\, | \, {\mathcal{F}}_{1}]\le \frac{1}{2}{\mathbb{E}}[\mathbf {1}_{B_{n}} | {\mathcal{F}}_{1}]\bigg\} \bigg) \\ & \phantom{=:}\cup \bigg((B_{n}\setminus D)\cap \bigg\{ {\mathbb{E}}[\mathbf {1}_{D}\, | \, {\mathcal{F}}_{1}]> \frac{1}{2}{\mathbb{E}}[\mathbf {1}_{B_{n}} | {\mathcal{F}}_{1}]\bigg\} \bigg). \end{aligned} The set $$B_{n+1}$$ satisfies the requirements. □

Lemma 2.5 Let $$C\in {\mathcal{F}}_{2}$$ and let $$h\colon \Omega \rightarrow [0,1]$$ be $${\mathcal{F}}_{1}$$-measurable. Then there is a set $$B\subseteq C$$, $$B\in {\mathcal{F}}_{2}$$, such that $${\mathbb{E}}[\mathbf {1}_{B}\, | \, {\mathcal{F}}_{1}]=h\,{\mathbb{E}}[\mathbf {1}_{C}\, | \, {\mathcal{F}}_{1}]$$.

Proof Let $$B_{0}=\emptyset$$. Inductively, we define for $$n\ge 1$$ classes $${\mathcal{B}}_{n}$$ and sets $$B_{n}\in {\mathcal{B}}_{n}$$. For $$n\ge 1$$, let $${\mathcal{B}}_{n}=\{ B_{n-1}\subseteq B\subseteq C \colon B\in {\mathcal{F}}_{2},\,{\mathbb{E}}[\mathbf {1}_{B}\, | \, {\mathcal{F}}_{1}] \le h\,{\mathbb{E}}[\mathbf {1}_{C}\, | \, {\mathcal{F}}_{1}]\}.$$ Let $$\beta _{n}=\sup \{ {\mathbb{P}}[B]\colon B\in {\mathcal{B}}_{n}\}$$ and take $$B_{n}\in {\mathcal{B}}_{n}$$ such that $${\mathbb{P}}[B_{n}]\ge (1-2^{-n})\beta _{n}$$. Clearly, $$(B_{n})$$ is nondecreasing, and we set $$B_{\infty }=\bigcup _{n\geq 0} B_{n}$$. Obviously, $${\mathbb{P}}[B_{\infty }]\ge \limsup _{n\to \infty } \beta _{n}\ge \liminf _{n\to \infty }\beta _{n}\ge \lim _{n\to \infty } {\mathbb{P}}[B_{n}]={\mathbb{P}}[B_{\infty }].$$ We claim that $${\mathbb{E}}[\mathbf {1}_{B_{\infty }}\, | \, {\mathcal{F}}_{1}]=h\,{\mathbb{E}}[\mathbf {1}_{C}\, | \, {\mathcal{F}}_{1}]$$.
We have $${\mathbb{E}}[\mathbf {1}_{B_{\infty }}\, | \, {\mathcal{F}}_{1}]\le h\,{\mathbb{E}}[\mathbf {1}_{C}\, | \, {\mathcal{F}}_{1}]$$ by construction. If $${\mathbb{P}}[ {\mathbb{E}}[\mathbf {1}_{B_{\infty }}\, | \, {\mathcal{F}}_{1}] < h\,{\mathbb{E}}[\mathbf {1}_{C}\, | \, {\mathcal{F}}_{1}] ]>0$$, then $${\mathbb{P}}[B_{\infty }]<{\mathbb{P}}[C]$$ and there must be $$m\ge 1$$ such that $${\mathbb{P}}[ {\mathbb{E}}[\mathbf {1}_{B_{\infty }}\, | \, {\mathcal{F}}_{1}] < h\,{\mathbb{E}}[\mathbf {1}_{C}\, | \, {\mathcal{F}}_{1}] -2^{-m}]>0$$. Lemma 2.4 allows us to find $$D\subseteq C\setminus B_{\infty }$$, $$D \in {\mathcal{F}}_{2}$$, $${\mathbb{P}}[D]=\eta >0$$, with $$0<{\mathbb{E}}[\mathbf {1}_{D}\, | \, {\mathcal{F}}_{1}]\le 2^{-m}$$ on the set $$\{{\mathbb{E}}[\mathbf {1}_{B_{\infty }}\, | \, {\mathcal{F}}_{1}] < h\,{\mathbb{E}}[\mathbf {1}_{C}\, | \, {\mathcal{F}}_{1}] - 2^{-m}\}$$ and zero elsewhere. The set $$D\cup B_{\infty }$$ is in all classes $${\mathcal{B}}_{n}$$, and for $$n$$ big enough, we have $$\beta _{n}\ge {\mathbb{P}}[D\cup B_{\infty }] \ge {\mathbb{P}}[B_{n}]+\eta \ge (1-2^{-n})\beta _{n} +\eta \ge \beta _{n}+\eta -2^{-n}> \beta _{n},$$ yielding a contradiction. So we must have $${\mathbb{E}}[\mathbf {1}_{B_{\infty }}\, | \, {\mathcal{F}}_{1}]=h\,{\mathbb{E}}[\mathbf {1}_{C}\, | \, {\mathcal{F}}_{1}]$$. □

Remark 2.6 Lemma 2.5 is a variant of Sierpiński’s theorem [15]. This theorem states that in an atomless probability space $$(\Omega ,{\mathcal{E}},{\mathbb{P}})$$, for every set $$A\in {\mathcal{E}}$$ and every $$0< t<1$$, there is a set $$B\subseteq A$$, $$B \in {\mathcal{E}}$$, with $${\mathbb{P}}[B]=t{\mathbb{P}}[A]$$. The usual proof – presented in many probability courses – uses the axiom of choice (AC). A referee pointed out that for many people AC – or Zorn’s lemma – is an extra assumption. To prove Sierpiński’s theorem, we only need the axiom of countable dependent choice, which is a countable form of the axiom of choice. In analysis, this is the axiom that is usually needed and used. The proof above follows the approach given by Lorenc and Witula [13].

Lemma 2.7 There is an increasing family $$(B_{t})_{t\in [0,1]}$$ of sets such that $${\mathbb{E}}[\mathbf {1}_{B_{t}}\, | \, {\mathcal{F}}_{1}]=t$$. The sigma-algebra generated by the family $$(B_{t})$$ is independent of $${\mathcal{F}}_{1}$$. The system $$(B_{t})$$ can also be described as $$B_{t}=\{U\le t\}$$, where $$U$$ is a random variable that is independent of $${\mathcal{F}}_{1}$$ and uniformly distributed on $$[0,1]$$.

Proof The proof is a repeated use of Lemma 2.5, where we take $$h=1/2$$. We start with $$B_{0}=\emptyset , B_{1}=\Omega$$. Suppose that for the dyadic numbers $$k 2^{-n}$$, $$k=0,\ldots , 2^{n}$$, the sets are already defined. Then we consider the set $$B_{(k+1)2^{-n}}\setminus B_{k2^{-n}}$$ and apply Lemma 2.5 with $$h=1/2$$. We get a set $$D\subseteq B_{(k+1)2^{-n}}\setminus B_{k2^{-n}}$$, $$D \in {\mathcal{F}}_{2}$$, with $${\mathbb{E}}[\mathbf {1}_{D}\, | \, {\mathcal{F}}_{1}]=2^{-(n+1)}$$. We then define $$B_{(2k+1)2^{-(n+1)}}=B_{k2^{-n}}\cup D$$. For non-dyadic numbers $$t$$, we find a sequence $$(d_{n})$$ of dyadic numbers such that $$d_{n}\uparrow t$$. Then we define $$B_{t}=\bigcup _{n\in {\mathbb{N}}} B_{d_{n}}$$. This completes the construction.
Since the system $$(B_{t})$$ is trivially stable under intersections, the relation $${\mathbb{E}}[\mathbf {1}_{B_{t}}\, | \, {\mathcal{F}}_{1}]=t$$ shows that the sigma-algebra ℬ generated by $$(B_{t})$$ is independent of $${\mathcal{F}}_{1}$$. The construction of $$U$$ is standard. At level $$n$$, we put $$U_{n}=\sum _{k=1}^{2^{n}} k2^{-n}\mathbf {1}_{B_{k2^{-n} }\setminus B_{(k-1)2^{-n}}}$$. Then $$(U_{n})$$ decreases to a random variable $$U$$ that satisfies the needed properties. The proof of Theorem 2.3 is now completed. □

Remark 2.8 Suppose that for the probability ℙ, there is an atomless sigma-algebra $${\mathcal{B}}\subseteq {\mathcal{F}}_{2}$$ that is independent of $${\mathcal{F}}_{1}$$. Suppose now that $${\mathbb{Q}}\approx {\mathbb{P}}$$ is an equivalent probability measure. Clearly, the definition of being conditionally atomless is invariant under equivalent measure changes. Hence there is an atomless sigma-algebra $${\mathcal{B}}'\subseteq {\mathcal{F}}_{2}$$ that is independent of $${\mathcal{F}}_{1}$$ for the probability ℚ. Proving this directly does not seem easy.

The following proposition is Lemma 2.5, where we take $$C=\Omega$$. For didactic reasons, we give another proof that directly uses the existence of an independent sigma-algebra. We use the same assumptions and notations as in Theorem 2.3.

Proposition 2.9 For every $${\mathcal{F}}_{1}$$-measurable function $$h\colon \Omega \rightarrow [0,1]$$, there is a set $$B_{h}\in {\mathcal{F}}_{2}$$ such that $${\mathbb{E}}[\mathbf {1}_{B_{h}}\, | \, {\mathcal{F}}_{1}]=h$$.

Proof The idea is to use the set $$B_{t}$$ on the set $$\{h=t\}$$, i.e., $$B=\bigcup _{t}(\{h=t\}\cap B_{t})$$. However, because the set of real numbers is uncountable, this definition is not good enough to obtain a set in $${\mathcal{F}}_{2}$$. So we need a trick. Let $$\phi$$ be the mapping $$\phi \colon (\Omega ,{\mathcal{F}}_{2})\rightarrow (\Omega ,{\mathcal{F}}_{1})\times (\Omega ,{\mathcal{B}}), \qquad \phi (\omega )=(\omega ,\omega ).$$ This mapping is obviously measurable and, because of independence, the image measure is the product measure. We also define $$h_{1}(\omega ,\omega ')=h(\omega )$$ and $$U_{2}(\omega ,\omega ')=U(\omega ')$$. For $$A\in {\mathcal{F}}_{1}$$, we set $$A_{1}=A\times \Omega$$. We define $$B_{h}=\{U\le h\}=\phi ^{-1}\{U_{2}\le h_{1}\}$$. We now verify that $${\mathbb{E}}[\mathbf {1}_{B_{h}}\, | \, {\mathcal{F}}_{1}]=h$$. To do this, we calculate for a set $$A\in {\mathcal{F}}_{1}$$ the probability \begin{aligned} {\mathbb{P}}[B_{h}\cap A] =& ({\mathbb{P}}\times {\mathbb{P}})[\{U_{2} \le h_{1}\}\cap A_{1}] \\ =& \int {\mathbb{P}}[d\omega ']\int {\mathbb{P}}[d\omega ] \, \mathbf {1}_{\{U_{2}\le h_{1}\} }(\omega ,\omega ')\mathbf {1}_{A_{1}}(\omega ,\omega ') \\ =& \int {\mathbb{P}}[d\omega ']\,{\mathbb{P}}[\{h\ge U(\omega ')\} \cap A] \\ =& \int _{0}^{1} dt\, {\mathbb{P}}[\{h\ge t\}\cap A] \\ =& {\mathbb{E}}[h\mathbf {1}_{A}], \end{aligned} showing $${\mathbb{E}}[\mathbf {1}_{B_{h}}\, | \, {\mathcal{F}}_{1}]=h$$. □

Remark 2.10 Proposition 2.9 is not actually needed. We need the stronger version where the conditional expectation is replaced by the utility function $$u_{1,2}$$. To prove this stronger version, we use a slightly different approach. However, if we are only interested in conditional expectations, the above proof might be of some didactic interest.

Remark 2.11 After the first version of this paper was made available, we got the remark that the paper of Shen et al. [16] contains similar concepts and results (see footnote 2).
In their notation, they work with a measurable space $$(\Omega ,{\mathcal{A}})$$ on which they have a finite number of probability measures $${\mathbb{Q}}_{1},\ldots ,{\mathbb{Q}}_{n}$$. Their paper also considers an infinite number of measures, but to clarify the relation between their paper and our approach, we only consider a finite number of measures. They introduce

Definition 2.12 The set $$({\mathbb{Q}}_{1},\ldots ,{\mathbb{Q}}_{n})$$ is conditionally atomless if there exist a dominating measure ℚ (i.e., $${\mathbb{Q}}_{k}\ll {\mathbb{Q}}$$ for each $$k\le n$$) as well as a continuously distributed random variable $$X$$ (for the measure ℚ) such that the vector of Radon–Nikodým derivatives $$(\frac{d{\mathbb{Q}}_{k}}{d{\mathbb{Q}}})_{k=1,\dots ,n}$$ is independent of $$X$$.

They then prove the following result.

Proposition 2.13 The following are equivalent:

1) $$({\mathbb{Q}}_{1},\ldots ,{\mathbb{Q}}_{n})$$ is conditionally atomless.

2) In the definition, we can take $${\mathbb{Q}}=\frac{1}{n}({\mathbb{Q}}_{1}+\cdots +{\mathbb{Q}}_{n})$$.

3) $$X$$ can be taken as uniformly distributed over $$[0,1]$$.

There are several differences with our approach. There is the technical difference that [16] suppose the existence of a continuously distributed random variable $$X$$. In doing so, they avoid the technical points between the more conceptual definition using conditional expectations and the construction of a suitable sigma-algebra with a uniformly distributed random variable. A further difference is that they use a dominating measure that later can be taken as the mean of $$({\mathbb{Q}}_{1},\ldots ,{\mathbb{Q}}_{n})$$. Of course, their result together with the results here shows that the definition of $$({\mathbb{Q}}_{1},\ldots ,{\mathbb{Q}}_{n})$$ being conditionally atomless is equivalent to the statement that for the measure $${\mathbb{Q}}_{0}=\frac{1}{n}({\mathbb{Q}}_{1}+\cdots +{\mathbb{Q}}_{n})$$, the sigma-algebra $${\mathcal{A}}$$ is atomless conditionally to the sigma-algebra generated by the Radon–Nikodým derivatives $$(\frac{d{\mathbb{Q}}_{k}}{d{\mathbb{Q}}_{0}})_{k=1,\dots ,n}$$. In [16], it is also shown that one can take any strictly positive convex combination of the measures $$({\mathbb{Q}}_{1},\ldots ,{\mathbb{Q}}_{n})$$. Below we show that the sigma-algebra $${\mathcal{A}}$$ in some sense has a minimality property, a result that clarifies the relation between the two approaches. Before doing so, let us recall two easy results from introductory probability theory.

Exercise 2.14 For a probability space $$(\Omega ,{\mathcal{A}},{\mathbb{Q}})$$, set $${\mathcal{N}}=\{N\in {\mathcal{A}}: {\mathbb{Q}}[N]=0\}$$. Suppose that a sub-sigma-algebra $${\mathcal{F}}\subseteq {\mathcal{A}}$$ is given and that $${\mathcal{G}}$$, with $${\mathcal{F}}\subseteq {\mathcal{G}}$$, is another sub-sigma-algebra which is included in the sigma-algebra generated by ℱ and $${\mathcal{N}}$$. Then for each $$\xi \in L^{1}(\Omega ,{\mathcal{A}},{\mathbb{Q}})$$, $${\mathbb{E}}_{\mathbb{Q}}[\xi \, | \, {\mathcal{F}}]={\mathbb{E}}_{\mathbb{Q}}[\xi \, | \, {\mathcal{G}}] \qquad \mbox{a.s.}$$

Exercise 2.15 With the notation in Exercise 2.14, let $$F\colon \Omega \rightarrow {\mathbb{R}}^{n}$$ and $$F'\colon \Omega \rightarrow {\mathbb{R}}^{n}$$ be two random vectors that are equal a.s. Let ℱ be generated by $$F$$ and $${\mathcal{G}}$$ by $$F'$$. Then ℱ and $${\mathcal{G}}$$ are equal up to sets in $${\mathcal{N}}$$.
More precisely, $${\mathcal{G}}$$ is contained in the sigma-algebra generated by ℱ and $${\mathcal{N}}$$ (and of course vice versa), i.e., $$\sigma ({\mathcal{F}},{\mathcal{N}})=\sigma ({\mathcal{G}},{\mathcal{N}})$$.

Proposition 2.16 Let $${\mathbb{Q}}_{1},\dots ,{\mathbb{Q}}_{n}$$ be probability measures on a measurable space $$(\Omega ,{\mathcal{A}})$$. Let $${\mathbb{Q}}_{0}=\sum _{k=1}^{n} \lambda _{k} {\mathbb{Q}}_{k}$$ be a convex combination of these measures with each $$\lambda _{k} > 0$$. Let $$f_{k}$$ denote an $${\mathcal{A}}$$-measurable version of $$\frac{d{\mathbb{Q}}_{k}}{d{\mathbb{Q}}_{0}}$$. Let ℚ be another dominating measure, with $$g_{k}$$ an $${\mathcal{A}}$$-measurable version of $$\frac{d{\mathbb{Q}}_{k}}{d{\mathbb{Q}}}$$. Let $${\mathcal{N}}= \{N\in {\mathcal{A}}\colon {\mathbb{Q}}_{0}[N]=0\}$$. Let ℱ be generated by $$f_{k},k=1,\dots , n$$, and let $${\mathcal{G}}$$ be generated by $$g_{k},k=1,\ldots , n$$. Then $${\mathcal{F}}\subseteq \sigma ({\mathcal{G}},{\mathcal{N}})$$.

Proof Clearly, $${\mathbb{Q}}_{0}\ll {\mathbb{Q}}$$; so let $$h=\frac{d{\mathbb{Q}}_{0}}{d{\mathbb{Q}}}$$. It is now immediate that $$g_{k}= f_{k} h$$ ℚ-a.s. To see this, observe that the values of $$f_{k}$$ on $$\{h=0\}$$ do not matter. The functions $$g_{k}$$ and $$h$$ are $${\mathcal{G}}$$-measurable since $$h$$ can be taken as $$h=\sum _{k=1}^{n} \lambda _{k} g_{k}$$. Then we define $$f_{k}'= \frac{g_{k}}{h}$$ on $$\{h>0\}$$ and $$f_{k}'=0$$ on $$\{h=0\}$$. This choice shows that the $$f_{k}'$$ are $${\mathcal{G}}$$-measurable. It is immediate that $$f_{k}=f_{k}'$$ $${\mathbb{Q}}_{0}$$-a.s. The result now follows. □

From Proposition 2.16, it follows that the sigma-algebra ℱ, augmented with the class $${\mathcal{N}}$$, is the same for all strictly positive convex combinations. This shows that in the definition of atomless conditionally to ℱ, we can also add the nullsets $${\mathcal{N}}$$ to ℱ. To check that $${\mathcal{A}}$$ is atomless conditionally to a sigma-algebra ℱ, it is clear that the smaller ℱ, the easier it is to satisfy the condition. In our opinion, the above clarifies the relation between this paper and [16].

## 3 A continuity result

Let us recall the standing assumptions: $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$, and $$U$$ is independent of $${\mathcal{F}}_{1}$$ and uniformly distributed on $$[0,1]$$. Further, the utility function $$u_{1,2}\colon L^{\infty }({\mathcal{F}}_{2})\rightarrow L^{\infty }({\mathcal{F}}_{1})$$ is coherent and Lebesgue-continuous. For each mapping $$h\colon \Omega \rightarrow [0,1]$$ that is $${\mathcal{F}}_{1}$$-measurable, we put $$\phi (h)=u_{1,2}(\mathbf {1}_{\{U\le h\}})$$. Clearly, $$\phi$$ takes values in the space $$L^{\infty }({\mathcal{F}}_{1})$$. We have the following continuity result.

Proposition 3.1 If $$h_{n}\downarrow h$$ or $$h_{n}\uparrow h$$, then $$\phi (h_{n})\rightarrow \phi (h)$$.

Proof If $$h_{n}\downarrow h$$, then $$\mathbf {1}_{\{U\le h_{n}\}}\downarrow \mathbf {1}_{\{U\le h\}}$$ and the Fatou property gives the desired result. For the upward convergence, we must be more careful. Because $$U$$ has a continuous distribution function and is independent of $${\mathcal{F}}_{1}$$, we conclude that $${\mathbb{P}}[U=h]=0$$ and hence $$\mathbf {1}_{\{U\le h_{n}\}}\uparrow \mathbf {1}_{\{U\le h\}}$$ a.s. The Lebesgue property then allows us to conclude. □
Theorem 3.2 If $$h\colon \Omega \rightarrow [0,1]$$ is $${\mathcal{F}}_{1}$$-measurable, there is an $${\mathcal{F}}_{1}$$-measurable function $$g\colon \Omega \rightarrow [0,1]$$ such that the set $$B_{g}=\{U\le g\}$$ satisfies $$u_{1,2}(\mathbf {1}_{B_{g}})=h$$.

Proof The statement can be rewritten as $$\phi (g)=h$$. Let us introduce the class $${\mathcal{G}}=\{ g\colon g\text{ is } {\mathcal{F}}_{1} \text{-measurable and }u_{1,2}(\mathbf {1}_{B_{g}})=\phi (g)\ge h \}.$$ Then $${\mathcal{G}}$$ is nonempty since $$1\in {\mathcal{G}}$$. Furthermore, $${\mathcal{G}}$$ is stable under taking minima. Indeed, take $$g_{1},g_{2}\in {\mathcal{G}}$$ and put $$g=g_{1}\mathbf {1}_{A}+g_{2}\mathbf {1}_{A^{c}}$$, where $$A=\{g_{1}< g_{2}\}$$. Since $$u_{1,2}(\mathbf {1}_{B_{g}})=\mathbf {1}_{A} u_{1,2}(\mathbf {1}_{B_{g_{1}}}) + \mathbf {1}_{A^{c}}u_{1,2}(\mathbf {1}_{B_{g_{2}}})\ge h$$, we have $$g\in {\mathcal{G}}$$. Let now $$g_{n}\downarrow g$$, where $$(g_{n}) \subseteq {\mathcal{G}}$$ and $${\mathbb{E}}[g_{n}]\downarrow \inf \{{\mathbb{E}}[g'] : g'\in {\mathcal{G}}\}$$. The continuity for decreasing sequences then shows that $$g\in {\mathcal{G}}$$. The previous lines are enough to show that $${\mathcal{G}}$$ has a minimum.

Let $$g$$ be the smallest function in $${\mathcal{G}}$$. We claim that the continuity for increasing sequences (the Lebesgue property) implies that actually $$u_{1,2}(\mathbf {1}_{B_{g}})=h$$. Indeed, suppose to the contrary that the set $$\{u_{1,2}(\mathbf {1}_{B_{g}})>h\}$$ has nonzero measure. This assumption trivially implies that $${\mathbb{P}}[g>0]>0$$. Take now a sequence $$g_{n}\uparrow g$$ such that on $$\{g>0\}$$, we have $$g_{n}< g$$. By Proposition 3.1, $$u_{1,2}(\mathbf {1}_{B_{g_{n}}})\uparrow u_{1,2}(\mathbf {1}_{B_{g}})$$. Hence there must exist $$n$$ such that $$A_{n}=\{u_{1,2}(\mathbf {1}_{B_{g_{n}}})>h\}$$ has nonzero measure. On $$A_{n}$$, we have $$g_{n}>0$$, hence also $$g>0$$, and therefore also $$g_{n}< g$$. Put now $$g'=g_{n}\mathbf {1}_{A_{n}}+g\mathbf {1}_{A_{n}^{c}}$$. We have $${\mathbb{E}}[g']<{\mathbb{E}}[g]$$, but also $$g'\in {\mathcal{G}}$$, which is a contradiction to the minimality of $$g$$. □

Remark 3.3 Although “intuitively clear”, the continuity of the process $$t \mapsto u_{1,2}(\mathbf {1}_{B_{t}})$$ is not an easy result. First of all, we are working with random variables identified under the equivalence a.s. That means that we must first select or construct measurable functions instead of classes of measurable functions. Then we must show that with respect to $$t$$, these outcomes are continuous. The general theory of stochastic processes gives us the necessary tools to achieve this goal. We do not really need these finer results, so if you do not belong to the amateurs of the general theory of stochastic processes à la Dellacherie and Meyer [10], the remark can be skipped.

First we construct a process $$\alpha (t,\omega )$$. For each rational point $$q\in [0,1]$$, we select an $${\mathcal{F}}_{1}$$-measurable function $$\alpha '(q)$$ that represents $$u_{1,2}(\mathbf {1}_{B_{q}})$$. Because of monotonicity we can – if needed – change these selections on a set of zero measure to make sure that a.s., the mapping $${\mathbb{Q}}\cap [0,1]\rightarrow {\mathbb{R}}, q\mapsto \alpha '(q)$$ is increasing. For each $$t\in [0,1]$$, we now define $$\alpha (t)= \inf _{q\ \text{rational,}\ q\ge t}\alpha '(q)$$.
The functions $$\alpha (t)$$ are of course $${\mathcal{F}}_{1}$$-measurable and represent $$u_{1,2}(\mathbf {1}_{B_{t}})$$ by the Fatou property. We may also suppose that $$\alpha (0)=0,\alpha (1)=1$$ a.s. It is clear that $$\alpha$$ is a.s. nondecreasing in $$t$$ and right-continuous. This means there is a set (independent of $$t$$) such that on this set, $$t\mapsto \alpha (t,\omega )$$ is right-continuous and nondecreasing. We claim that the function $$\alpha$$ also satisfies $$\alpha (h)=u_{1,2}(\mathbf {1}_{\{U\le h\}})=\phi (h)$$ for each $${\mathcal{F}}_{1}$$-measurable function $$h\colon \Omega \rightarrow [0,1]$$. To avoid misunderstandings, the random variable $$\alpha (h)$$ is defined as $$\alpha (h)(\omega )=\alpha (h(\omega ),\omega )$$. Such a notation is common in stochastic process theory. The above property of $$\alpha$$ is easy to verify for elementary functions $$h$$, and the general statement trivially follows by approximating $$h$$ from above by elementary functions.

Let us give the details. For an elementary function $$h=\sum _{k=1}^{K} t_{k}\mathbf {1}_{A_{k}}$$ (the sets $$A_{k}$$ are disjoint and in $${\mathcal{F}}_{1}$$), we have \begin{aligned} \alpha (h) =& \sum _{k=1}^{K} \alpha (t_{k})\mathbf {1}_{A_{k}} \\ =&\sum _{k=1}^{K} u_{1,2}(\mathbf {1}_{B_{t_{k}}})\mathbf {1}_{A_{k}} \\ =&\sum _{k=1}^{K} u_{1,2}(\mathbf {1}_{B_{t_{k}}}\mathbf {1}_{A_{k}})\mathbf {1}_{A_{k}} \\ =&\sum _{k=1}^{K} u_{1,2} ( \mathbf {1}_{B_{t_{k}}\cap A_{k}})\mathbf {1}_{A_{k}} \\ =&\sum _{k=1}^{K} u_{1,2}\bigg(\Big(\sum _{\ell =1}^{K} \mathbf {1}_{B_{t_{\ell }}\cap A_{\ell }}\Big)\mathbf {1}_{A_{k}} \bigg)\mathbf {1}_{A_{k}} \\ =&\sum _{k=1}^{K} u_{1,2}(\mathbf {1}_{\{U\le h\}}\mathbf {1}_{A_{k}})\mathbf {1}_{A_{k}} \\ =& u_{1,2}(\mathbf {1}_{\{U\le h\}})=\phi (h). \end{aligned} As indicated above, the Fatou property then completes the proof by using right-continuity. Indeed, let $$h\colon \Omega \rightarrow [0,1]$$ be $${\mathcal{F}}_{1}$$-measurable and $$h_{n} \downarrow h$$ a sequence of elementary functions that are $${\mathcal{F}}_{1}$$-measurable. Since $$\mathbf {1}_{\{U\le h_{n}\}} \downarrow \mathbf {1}_{\{U\le h\}}$$, the Fatou property and the right-continuity of $$t \mapsto \alpha (t)$$ give us $$\phi (h)=u_{1,2}(\mathbf {1}_{\{U\le h\}})$$.

The proof of the left-continuity can be done by using ideas from the general theory of stochastic processes. For $$\varepsilon >0$$, we define $$r=\inf \Big\{ t \colon \lim _{s\rightarrow t, \, s< t}\alpha (s)\le \alpha (t)-\varepsilon \Big\} \wedge 1.$$ Observe that $$r>0$$ by construction. Suppose now that at the point $$r$$, the probability that $$\alpha$$ has a jump of size at least $$\varepsilon$$ is nonzero. Take $$r_{n}\uparrow r$$, $$r_{n}< r$$. The continuity result in Proposition 3.1 gives us that $$\alpha (r_{n})\uparrow \alpha (r)$$, which is a contradiction to $$\alpha$$ having a jump. So for almost every $$\omega \in \Omega$$, $$\alpha (\cdot ,\omega )$$ has no jumps of size at least $$\varepsilon$$. Since $$\varepsilon$$ was arbitrary, the a.s. continuity of the process $$\alpha$$ is proved.

## 4 Some special commonotonic set

In this section, we define a special norm on $${\mathbb{R}}^{2}$$. Part of its unit sphere will then be used as a commonotonic set. The reader could make some drawings to help visualise the constructions. The construction is done in several steps.
The first step consists in taking the curve obtained as the concatenation of the straight line segments that join the points $$(-4,-4)\rightarrow (-4,-2)\rightarrow (0,0)\rightarrow (4,2) \rightarrow (4,4).$$ The convex hull of this set is a parallelogram $$P_{0}$$, with parallel vertical sides given by the line segments $$(-4,-4)\rightarrow (-4,-2)\qquad \text{and}\qquad (4,2) \rightarrow (4,4).$$ The set $$P_{0}$$ will be used as the unit ball of a norm on $${\mathbb{R}}^{2}$$. More precisely, we use the Minkowski functional $$\Vert (x,y)\Vert := \inf \{ \alpha >0: (x,y)\in \alpha P_{0}\}.$$

Note that every point of $$P_{0}$$ is a convex combination of points taken on the vertical sides. An easy and continuous way to obtain such a convex combination goes as follows. Through a point in $$P_{0}$$, take a line parallel to the “skew” sides of $$P_{0}$$ and see where it intersects the vertical sides. Elementary calculations give us that for $$(x,y)\in P_{0}$$, we may write $$(x,y)=(1-\lambda _{0})(u^{0}_{1},u^{0}_{2})+\lambda _{0}(v^{0}_{1},v^{0}_{2})$$ with $$u^{0}, v^{0}$$ on the vertical sides of $$P_{0}$$ and $$0 \leq \lambda _{0} \leq 1$$, or more explicitly $$(x,y)=\frac{4-x}{8}\left (-4,y-3-\frac{3x}{4}\right )+\frac{4+x}{8} \left (4,y+3-\frac{3x}{4}\right ).$$ (The two points are the intersections of the line of slope $$3/4$$ through $$(x,y)$$ with the vertical lines $$x=\pm 4$$; one checks directly that the weights sum to 1 and that both coordinates combine to give back $$(x,y)$$.)

For each $$n\in {\mathbb{Z}}$$, we now define $$P_{n}=2^{n}P_{0}$$ and, similarly as for $$n=0$$, we define $$\lambda _{n}$$, $$(u^{n}_{1},u^{n}_{2})$$, $$(v^{n}_{1},v^{n}_{2})$$. These functions are obviously continuous. The set $$E$$ consists of all the vertical segments with the origin added. It forms a commonotonic set. This follows from the equality $$E=\{(0,0)\}\cup \bigcup _{n\in {\mathbb{Z}}}\Big( 2^{n}\big( [(-4,-4),(-4,-2)] \cup [(4,2),(4,4)] \big) \Big).$$

We now construct functions $$\Lambda , U, V$$ on $${\mathbb{R}}^{2}$$ as follows. For $$(x,y)\in P_{n}\setminus P_{n-1}$$, we define $$\Lambda (x,y)= \lambda _{n}(x,y)$$, $$U(x,y)=u^{n}(x,y)$$, $$V(x,y)=v^{n}(x,y)$$. At $$(0,0)$$, we put $$\Lambda (0,0)=1$$, $$U(0,0)=(0,0)=V(0,0)$$. These functions are no longer continuous, but they are certainly Borel-measurable. They satisfy the following properties:

1) $$\Lambda \colon {\mathbb{R}}^{2}\rightarrow [0,1]$$.

2) $$U \colon {\mathbb{R}}^{2}\rightarrow E$$, $$V\colon {\mathbb{R}}^{2}\rightarrow E$$.

3) We have $$\Vert U(x,y)\Vert \le 2 \Vert (x,y)\Vert$$ and $$\Vert V(x,y)\Vert \le 2 \Vert (x,y)\Vert$$. Indeed, for $$(x,y)\in P_{n}\setminus P_{n-1}$$, we have $$2^{n}=\Vert U(x,y)\Vert \ge \Vert (x,y)\Vert \ge 2^{n-1}$$, and the same holds for $$V$$.

4) For all $$(x,y)\in {\mathbb{R}}^{2}$$, $$(x,y)=(1-\Lambda (x,y))U(x,y)+\Lambda (x,y)V(x,y)$$.

5) The coordinates $$V_{1}(x,y)-U_{1}(x,y)$$ and $$V_{2}(x,y)-U_{2}(x,y)$$ of $$V-U$$ are nonnegative.

## 5 The main result

We start by giving an extension of the usual definition of conditional expectation.

Definition 5.1 We say that an $${\mathcal{F}}_{2}$$-measurable random variable $$\xi$$ has an extended conditional expectation with respect to $${\mathcal{F}}_{1}$$ if there is a countable $${\mathcal{F}}_{1}$$-measurable partition $$(A_{n})$$ such that each $$\mathbf {1}_{A_{n}}\xi$$ is integrable. The conditional expectation is then defined as $$\sum _{n} {\mathbb{E}}[\mathbf {1}_{A_{n}}\xi \, | \, {\mathcal{F}}_{1}]$$.

The reader can check that the existence and definition of an extended conditional expectation are independent of the choice of the $${\mathcal{F}}_{1}$$-measurable partition. We sometimes drop the word “extended”.
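As a simple illustration of Definition 5.1 (our own toy example, not from the paper): let $$f$$ be $${\mathcal{F}}_{1}$$-measurable and finite-valued but possibly unbounded, let $$\zeta \in L^{\infty }({\mathcal{F}}_{2})$$, and put $$\xi =f\zeta$$. With the $${\mathcal{F}}_{1}$$-measurable partition $$A_{n}=\{n-1\le |f|< n\}$$, $$n\ge 1$$, each $$\mathbf {1}_{A_{n}}\xi$$ is bounded, hence integrable, and the extended conditional expectation is $$\sum _{n} {\mathbb{E}}[\mathbf {1}_{A_{n}}f\zeta \, | \, {\mathcal{F}}_{1}]=\sum _{n} \mathbf {1}_{A_{n}}f\,{\mathbb{E}}[\zeta \, | \, {\mathcal{F}}_{1}]=f\,{\mathbb{E}}[\zeta \, | \, {\mathcal{F}}_{1}]$$. This is precisely the situation in the next theorem, where the $${\mathcal{F}}_{1}$$-measurable factors $$U(f,g), V(f,g)$$ need not be bounded.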
Again we suppose that $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$ and that the utility function $$u_{1,2}$$ is Lebesgue-continuous. Before giving the main result of the paper, we first prove a special case.

Theorem 5.2 For every couple $$(f,g)$$ of $${\mathcal{F}}_{1}$$-measurable finite-valued random variables, there is a commonotonic couple $$(\xi ,\eta )$$ of $${\mathcal{F}}_{2}$$-measurable random variables such that $$f={\mathbb{E}}[\xi \, | \, {\mathcal{F}}_{1}],g={\mathbb{E}}[\eta \, | \, {\mathcal{F}}_{1}]$$. Furthermore, $$\Vert (\xi ,\eta )\Vert \le 2 \Vert (f,g)\Vert$$ almost surely.

Proof The proof is almost given in the previous sections. Let $$(f,g)\colon \Omega \rightarrow {\mathbb{R}}^{2}$$ be $${\mathcal{F}}_{1}$$-measurable. Using the functions $$\Lambda ,U,V$$ of Sect. 4, we can then write $$(f,g)=\Lambda (f,g) V(f,g)+\big(1-\Lambda (f,g)\big)U(f,g).$$ Because $$\Lambda (f,g)\colon \Omega \rightarrow [0,1]$$ is $${\mathcal{F}}_{1}$$-measurable and $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$, there is, by Proposition 2.9, a set $$B \in {\mathcal{F}}_{2}$$ such that $${\mathbb{E}}[\mathbf {1}_{B}\, | \, {\mathcal{F}}_{1}]=\Lambda (f,g)$$. The random variables $$(\xi ,\eta )$$ are now defined as $$\xi =\mathbf {1}_{B} V_{1}(f,g)+\mathbf {1}_{B^{c}}U_{1}(f,g), \qquad \eta = \mathbf {1}_{B} V_{2}(f,g)+\mathbf {1}_{B^{c}}U_{2}(f,g),$$ or in other words $$(\xi ,\eta )=\mathbf {1}_{B} V(f,g) + \mathbf {1}_{B^{c}}U(f,g).$$ Both random variables have extended conditional expectations, and because $$U(f,g)$$, $$V(f,g)$$ are $${\mathcal{F}}_{1}$$-measurable, we get $$(f,g)={\mathbb{E}}[(\xi ,\eta )\, | \, {\mathcal{F}}_{1}]$$. Because $$(\xi ,\eta )$$ takes its values in the commonotonic set $$E$$ from Sect. 4, we get that $$\xi$$ and $$\eta$$ are commonotonic. The estimate of the norms follows from the estimates for $$U$$ and $$V$$. □

Corollary 5.3 The random variable $$(\xi ,\eta )$$ has the same integrability properties as the couple $$(f,g)$$. In particular, if $$(f,g)$$ is bounded, the couple $$(\xi ,\eta )$$ is bounded.

Remark 5.4 If one wants to use another norm than the Minkowski functional of $$P_{0}$$, one must adapt the constant. Because all norms on $${\mathbb{R}}^{2}$$ are equivalent, this is an exercise in linear algebra. We did not try to find the best estimates for e.g. the Euclidean norm, where a rough calculation gave $$10\sqrt{2}$$. This problem would require finding a better commonotonic set than the one used above.

The next theorem is an improvement of the preceding result in the sense that we replace the conditional expectation by a more general utility function. The proof follows the same lines.

Theorem 5.5 For every couple $$(f,g)$$ of $${\mathcal{F}}_{1}$$-measurable bounded random variables, there is a commonotonic couple $$(\xi ,\eta )$$ of $${\mathcal{F}}_{2}$$-measurable random variables such that $$f=u_{1,2}(\xi ),g=u_{1,2}(\eta )$$. Furthermore, $$\Vert (\xi ,\eta )\Vert \le 2 \Vert (f,g)\Vert$$ almost surely.

Proof We use the same notation $$(\Lambda ,U,V)$$ as in the previous proof. But this time we take a set $$B$$ such that $$u_{1,2}(\mathbf {1}_{B})=\Lambda (f,g)$$, which is possible by Theorem 3.2.
Again we define $$(\xi ,\eta )=\mathbf {1}_{B} V(f,g) + \mathbf {1}_{B^{c}}U(f,g)=U(f,g)+\mathbf {1}_{B} \big(V(f,g)-U(f,g)\big).$$ We then have \begin{aligned} u_{1,2}(\xi ) =&u_{1,2}\Big(U_{1}(f,g)+\mathbf {1}_{B}\big(V_{1}(f,g)-U_{1}(f,g) \big)\Big) \\ =&U_{1}(f,g)+u_{1,2}(\mathbf {1}_{B})\big(V_{1}(f,g)-U_{1}(f,g)\big) \\ =&U_{1}(f,g)+\Lambda (f,g)\big(V_{1}(f,g)-U_{1}(f,g)\big)=f, \end{aligned} and similarly for $$g$$ and the second coordinate. Note that we can apply the positive homogeneity of $$u_{1,2}$$ because $$V_{1}(f,g)-U_{1}(f,g)\ge 0$$. □

Remark 5.6 If $$(f,g)$$ is only finite-valued, we can write $$(f,g)=\mathbf {1}_{\{(f,g)=(0,0)\}}(f,g)+\sum _{n\in {\mathbb{Z}}}\mathbf {1}_{\{(f,g) \in P_{n}\setminus P_{n-1}\}}(f,g),$$ and this is a sum of bounded random variables. For each $$n$$, we can define $$\xi _{n},\eta _{n}$$ as in Theorem 5.5. These random variables are zero outside $$\{(f,g)\in P_{n}\setminus P_{n-1}\}$$, and hence the sum $$(\xi ,\eta )=\sum _{n\in {\mathbb{Z}}}(\xi _{n},\eta _{n})$$ is well defined. We could then extend $$u_{1,2}$$ as we did for conditional expectations. Finally, we get $$u_{1,2}(\xi )=f,u_{1,2}(\eta )=g$$. This extension is important when the utility functions are defined on e.g. Orlicz or Riesz spaces. Important for such extensions is the pointwise (almost sure) estimate $$\Vert (\xi ,\eta )\Vert \le 2\Vert (f,g)\Vert$$.

## 6 Commonotonicity and time-consistency

In this section, we use the same hypothesis on the filtration $$({\mathcal{F}}_{0},{\mathcal{F}}_{1},{\mathcal{F}}_{2})$$. In particular, we suppose that $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$. We start with a monetary coherent utility function $$u_{0,2}\colon L^{\infty }({\mathcal{F}}_{2})\rightarrow {\mathbb{R}}$$. We suppose – as in the rest of the paper – that $$u_{0,2}$$ is relevant.

Theorem 6.1 Suppose that

1) $${\mathcal{F}}_{2}$$ is atomless conditionally to $${\mathcal{F}}_{1}$$;

2) $$u_{0,2}$$ is coherent and relevant;

3) $$u_{0,2}$$ is time-consistent;

4) $$u_{0,2}$$ is commonotonic, i.e., if the random variables $$\xi ,\eta \in L^{\infty }({\mathcal{F}}_{2})$$ are commonotonic, then $$u_{0,2}(\xi +\eta )=u_{0,2}(\xi )+u_{0,2}(\eta )$$;

5) $$u_{0,2}$$ is Lebesgue-continuous.

Then there is a probability $${\mathbb{Q}}\approx {\mathbb{P}}$$ such that $$u_{0,1}(f)={\mathbb{E}}_{\mathbb{Q}}[f]$$ for all $$f\in L^{\infty }({\mathcal{F}}_{1})$$.

Proof According to Theorem 5.5, for each $$f,g\in L^{\infty }({\mathcal{F}}_{1})$$, there are commonotonic $$\xi ,\eta \in L^{\infty }({\mathcal{F}}_{2})$$ with $$u_{1,2}(\xi )=f,\, u_{1,2}(\eta )=g$$ and $$u_{1,2}(\xi +\eta )=f+g$$ (the construction there yields the last equality as well, by the same computation applied to $$\xi +\eta$$). We then have $$u_{0,1}(f)=u_{0,1}(u_{1,2}(\xi ))=u_{0,2}(\xi )$$ and similarly for $$g$$. The combination with commonotonicity then gives \begin{aligned} u_{0,1}(f+g) =&u_{0,1}\big(u_{1,2}(\xi +\eta )\big) \\ =&u_{0,2}(\xi +\eta ) \\ =&u_{0,2}(\xi )+u_{0,2}(\eta ) \\ =&u_{0,1}\big(u_{1,2}(\xi )\big)+u_{0,1}\big(u_{1,2}(\eta )\big) \\ =&u_{0,1}(f)+u_{0,1}(g). \end{aligned} This shows that $$u_{0,1}$$ is additive (therefore linear) and hence given by a finitely additive probability measure. But Lebesgue-continuity implies that this measure, say ℚ, must be sigma-additive and absolutely continuous with respect to ℙ. Because $$u_{0,2}$$ and hence $$u_{0,1}$$ are relevant, we must have $${\mathbb{Q}}\approx {\mathbb{P}}$$. □
□ Remark 6.2 For general commonotonic $$\xi ,\eta$$ (not just for those used in the proof of Theorem 6.1), we can now prove that $$u_{1,2}(\xi +\eta )=u_{1,2}(\xi )+u_{1,2}(\eta )$$. We already know that $$u_{1,2}(\xi +\eta )\ge u_{1,2}(\xi )+u_{1,2}(\eta )$$. If $${\mathbb{Q}}[u_{1,2}(\xi +\eta )> u_{1,2}(\xi )+u_{1,2}(\eta )] >0$$, then we get \begin{aligned} u_{0,2}(\xi +\eta ) =&u_{0,1}\big(u_{1,2}(\xi +\eta )\big) \\ =&{\mathbb{E}}_{\mathbb{Q}}[u_{1,2}(\xi +\eta ) ] \\ >& {\mathbb{E}}_{\mathbb{Q}}[u_{1,2}(\xi )] +{\mathbb{E}}_{\mathbb{Q}}[u_{1,2}( \eta ) ] \\ =& u_{0,1}\big(u_{1,2}(\xi )\big)+u_{0,1}\big(u_{1,2}(\eta )\big) \\ =& u_{0,2}(\xi )+u_{0,2}(\eta ), \end{aligned} which is a contradiction to $$u_{0,2}(\xi +\eta )=u_{0,2}(\xi )+u_{0,2}(\eta )$$. The strict inequality in the third line follows from the fact that $$u_{0,1}$$ is the expectation with respect to the equivalent probability measure ℚ. Remark 6.3 If the assumption of relevance is dropped, we must start with a time-consistent system of utility functions $$u_{0,2},u_{0,1},u_{1,2}$$. In that case, we only obtain $${\mathbb{Q}}\ll {\mathbb{P}}$$, and the result of Remark 6.2 only holds ℚ-a.s. Remark 6.4 There is no reason that $$u_{0,2}$$ is additive on $$L^{\infty }({\mathcal{F}}_{2})$$, as the following example shows. We take $$\Omega =[0,1]\times [0,1]$$, $${\mathcal{F}}_{2}$$ is the product sigma-algebra of the Borel sigma-algebras on $$[0,1]$$, and the measure ℙ is the product measure of the usual Lebesgue measures. $${\mathcal{F}}_{0}$$ is the trivial sigma-algebra and $${\mathcal{F}}_{1}$$ is generated by the first coordinate mapping. For $$\xi \in L^{\infty }({\mathcal{F}}_{2}),\xi \ge 0$$, we define $$u_{0,2}(\xi )=\int _{0}^{1}d\alpha \int _{0}^{\infty }dx \big({\mathbb{P}}[\xi (\alpha ,\cdot )\ge x]\big)^{1+\alpha }.$$ For $$0\le \xi \in L^{\infty }({\mathcal{F}}_{2})$$, the utility function $$u_{1,2}$$ is then given by $$u_{1,2}(\xi )(\alpha )=\int _{0}^{\infty }\big({\mathbb{P}}[\xi (\alpha , \cdot )>x]\big)^{1+\alpha }\,dx.$$ Such expressions are known as distortions or Choquet integrals. They are standard examples of commonotonic utility functions; see [7, Chap. 7]. We need a bit less than commonotonicity; in fact, we only need for $$\xi ,\eta$$ that $$u_{1,2}(\xi +\eta )=u_{1,2}(\xi )+u_{1,2}(\eta )$$ as soon as for each $$\alpha$$, the random variables $$\xi (\alpha ,\cdot ),\eta (\alpha ,\cdot )$$ are commonotonic. To see that $$u_{0,2}$$ is not linear, let us calculate the outcomes for $$\xi (\alpha ,y)=\mathbf {1}_{[0,1/2]}(y)$$ and $$\eta (\alpha ,y)=\mathbf {1}_{[1/2,1]}(y)$$. For both random variables, we find the value $$\frac{1}{4\log 2}$$: indeed, $${\mathbb{P}}[\xi (\alpha ,\cdot )\ge x]=1/2$$ for $$0<x\le 1$$ and zero for $$x>1$$, so that $$u_{0,2}(\xi )=\int _{0}^{1}(1/2)^{1+\alpha }\,d\alpha =\frac{1}{2}\int _{0}^{1}2^{-\alpha }\,d\alpha =\frac{1}{4\log 2}$$, and similarly for $$\eta$$. These two values do not sum up to $$u_{0,2}(\xi +\eta )=u_{0,2}(1)=1$$. ## 7 A continuous-time result In this section, we use a filtration indexed by the time interval $$[0,T]$$. This filtration $$\left ({\mathcal{F}}_{t}\right )_{0\le t\le T}$$ does not necessarily fulfil the usual assumptions. The only assumption is that $${\mathcal{F}}_{T}$$ is generated by $$\bigcup _{0\le t< T}{\mathcal{F}}_{t}$$. We also suppose that we are given a family $$u_{t,s},0\le t\le s\le T$$, $$u_{t,s}\colon L^{\infty }({\mathcal{F}}_{s})\rightarrow L^{\infty }({\mathcal{F}}_{t})$$, of coherent utility functions. We assume the following time-consistency: for $$t\le s\le v$$, we have $$u_{t,v}=u_{t,s}\circ u_{s,v}$$.
Theorem 7.1 With the notation introduced in this section, we suppose that for all $$0\le t < T$$, the sigma-algebra $${\mathcal{F}}_{T}$$ is atomless conditionally to $${\mathcal{F}}_{t}$$. If $$u_{0,T}$$ is relevant, Lebesgue-continuous and commonotonic, there is a probability $${\mathbb{Q}}\approx {\mathbb{P}}$$ such that $$u_{0,T}(\xi )={\mathbb{E}}_{\mathbb{Q}}[\xi ]$$ for all $$\xi \in L^{\infty }({\mathcal{F}}_{T})$$. Proof The results of Sect. 6 show that on each $$L^{\infty }({\mathcal{F}}_{t})$$, the utility function $$u_{0,T}$$ is linear. The utility function $$u_{0,T}$$ is therefore linear on the vector space $$\bigcup _{t< T}L^{\infty }({\mathcal{F}}_{t})$$. This space is sequentially dense in $$L^{\infty }({\mathcal{F}}_{T})$$ for the Mackey topology (simply use the martingale convergence theorem). Because of Lebesgue-continuity, the utility function $$u_{0,T}$$ is therefore linear on $$L^{\infty }({\mathcal{F}}_{T})$$. It is thus given by a probability measure $${\mathbb{Q}}\ll {\mathbb{P}}$$. But since the utility function is relevant, we find that $${\mathbb{Q}}\approx {\mathbb{P}}$$. □ Remark 7.2 The previous results can be applied to most filtrations used in finance and insurance. This is for instance true for filtrations coming from a Brownian motion in one or several dimensions, filtrations generated by most Lévy processes, and so on. In other words, commonotonicity and time-consistency are not good friends. ## Acknowledgements This research was done while the author was visiting Tokyo Metropolitan University in October and November 2018. We thank the staff of TMU for the many fruitful discussions, and in particular we thank Prof. Adachi for many critical remarks. We also thank Prof. T. Yamada and Prof. K. Takaoka for fruitful discussions while the author was visiting Hitotsubashi University, Kunitachi, Tokyo, in November and December 2019. ## Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Footnotes 1 When using the prefix co, coming from Latin, English grammar suggests that you double the consonants l, m, n, r. 2 We thank Ruodu Wang for pointing out these relations and for subsequent discussions on the topic.
http://math.stackexchange.com/questions/187730/efficiently-estimate-a-2d-integral-from-irregularly-sampled-limited-data
# Efficiently estimate a 2D integral from irregularly sampled, limited data I have measured data of the following form: $f(3.2, 2.5) = 10$ $f(3.7, 2.6) = 9$ $f(3.1, 2.8) = 9.1$ (etc)... That is, I know $f(x, y)$ for certain irregularly spaced values of $x$ and $y$. I want to estimate the integral $\int f(x, y) dx dy$. Is there a standard method to estimate this integral? Details: I cannot make additional measurements, I have to give my best estimate with the measurements at hand. I do not need especially high accuracy; the data is somewhat noisy anyway. A fast solution would be very helpful, since I will eventually need to repeat this estimation for millions or billions of inputs. If there happens to be a Python solution, that would be excellent. EDIT: I should mention that $f(x, y)$ is only nonzero in the local neighborhood that I'm sampling. For some fixed value $a$, if $x^2 + y^2 > a$, then $f(x, y) = 0$. - Are the points $(x_i,y_i)$ fixed and only the values of $f(x_i,y_i)$ vary, or do both vary? –  Rahul Aug 28 '12 at 3:18 Both will vary. –  Andrew Aug 28 '12 at 3:49 Some ideas: a) If it makes sense to assume that the data points are roughly uniformly distributed in the integration region, a very quick estimate would be the average of the function values times the total area. b) You could triangulate the set of data points and give each point the weight of one third of the areas of all triangles it participates in. The problem is that you have to somehow deal with the part of the integration region that's outside the convex hull of the data points – you could add external points and either estimate their function values or distribute their weight in the external triangles onto the internal points. c) You could weight the points according to the areas of their Voronoi cells. d) You could randomly generate points uniformly distributed in the region of integration and use the function value of the closest data point; this would be a Monte Carlo version of c) in case you don't want to bother with computing the Voronoi diagram. - Very nice! This is the type of help I'm looking for. I'll try these out. Don't suppose you know of any computer packages that perform these functions? –  Andrew Aug 28 '12 at 6:32 @Andrew: Sorry, I don't, but I suspect others here do. –  joriki Aug 28 '12 at 6:40 @Andrew, joriki: I wouldn't recommend option (b) because it can be discontinuous with respect to the point locations. Option (c) is quite nice, though option (a) is almost certainly best if the uniformity assumption holds. To compute Voronoi diagrams, you could try the Qhull library, which has a Python binding called Delny. –  Rahul Aug 28 '12 at 7:13 Ok, looks like the bounty didn't attract much attention. Thanks for your help, joriki. –  Andrew Sep 5 '12 at 20:09 @Andrew: You're welcome! –  joriki Sep 5 '12 at 21:06
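For anyone landing here later, here is a minimal NumPy/SciPy sketch of options (a) and (d) from the answer above. The function and variable names are mine, and it assumes the data points roughly cover the disk $x^2+y^2 \le a$ where $f$ is nonzero:

```python
import numpy as np
from scipy.spatial import cKDTree

def integral_mean_times_area(values, area):
    # Option (a): average sampled value times the area of the region.
    return np.mean(values) * area

def integral_nearest_mc(points, values, a, n_mc=10_000, seed=0):
    # Option (d): Monte Carlo, using the value of the nearest data point.
    rng = np.random.default_rng(seed)
    r = np.sqrt(a)                                 # f vanishes outside x^2 + y^2 <= a
    xy = rng.uniform(-r, r, size=(2 * n_mc, 2))    # oversample a bounding square...
    xy = xy[(xy ** 2).sum(axis=1) <= a][:n_mc]     # ...and keep points inside the disk
    _, idx = cKDTree(points).query(xy)             # nearest data point for each sample
    return values[idx].mean() * (np.pi * a)        # mean nearest value times disk area

pts = np.array([[3.2, 2.5], [3.7, 2.6], [3.1, 2.8]])
vals = np.array([10.0, 9.0, 9.1])
print(integral_nearest_mc(pts, vals, a=25.0))
```

Option (c) could be built along the same lines with `scipy.spatial.Voronoi`, at the cost of clipping the cells to the integration region.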
http://www.numdam.org/articles/10.1016/j.anihpc.2013.02.001/
Nonlinear equations for fractional Laplacians, I: Regularity, maximum principles, and Hamiltonian estimates

Annales de l'I.H.P. Analyse non linéaire, Tome 31 (2014) no. 1, pp. 23-53.

This is the first of two articles dealing with the equation $(-\Delta)^{s}v=f(v)$ in $\mathbb{R}^{n}$, with $s\in(0,1)$, where $(-\Delta)^{s}$ stands for the fractional Laplacian — the infinitesimal generator of a Lévy process. This equation can be realized as a local linear degenerate elliptic equation in $\mathbb{R}^{n+1}_{+}$ together with a nonlinear Neumann boundary condition on $\partial\mathbb{R}^{n+1}_{+}=\mathbb{R}^{n}$. In this first article, we establish necessary conditions on the nonlinearity $f$ to admit certain types of solutions, with special interest in bounded increasing solutions in all of $\mathbb{R}$. These necessary conditions (which will be proven in a follow-up paper to be also sufficient for the existence of a bounded increasing solution) are derived from an equality and an estimate involving a Hamiltonian — in the spirit of a result of Modica for the Laplacian. Our proofs are uniform as $s\uparrow 1$, establishing in the limit the corresponding known results for the Laplacian. In addition, we study regularity issues, as well as maximum and Harnack principles associated to the equation.

Cabré, Xavier; Sire, Yannick. Nonlinear equations for fractional Laplacians, I: Regularity, maximum principles, and Hamiltonian estimates. Annales de l'I.H.P. Analyse non linéaire, Tome 31 (2014) no. 1, pp. 23-53. doi: 10.1016/j.anihpc.2013.02.001.
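The local realization mentioned in the abstract is of Caffarelli–Silvestre extension type. As a sketch (the normalizing constant, written $d_{s}>0$ here, depends only on $s$ and its precise value is an assumption of this summary), one studies

$$\begin{cases} \operatorname{div}\big(y^{1-2s}\,\nabla U\big)=0 & \text{in } \mathbb{R}^{n+1}_{+}=\mathbb{R}^{n}\times(0,\infty),\\ U(x,0)=v(x) & \text{on } \mathbb{R}^{n},\\ -\,d_{s}\,\lim_{y\downarrow 0}\, y^{1-2s}\,\partial_{y}U(x,y)=f\big(v(x)\big) & \text{on } \mathbb{R}^{n}, \end{cases}$$

so that the nonlocal equation $(-\Delta)^{s}v=f(v)$ becomes a local, but degenerate, elliptic problem with a nonlinear Neumann condition.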
http://www.sciforums.com/threads/ethanol-fuel-of-the-future.10588/
# Ethanol Fuel of the Future

Discussion in 'Earth Science' started by Success_Machine, Sep 2, 2002.

1. ### Success_Machine (Impossible? I can do that) Registered Senior Member Messages: 365

FAQ: ------------------------------------------------------- Is there enough agricultural residue in Canada to support a commercial bioethanol industry? There are substantial quantities of straw and other crop residues already produced in Canada. In the Western provinces of Manitoba, Saskatchewan, and Alberta alone, annual production of straw is about 40 million tonnes. If 1/3 of this material was used to make fuel, the nation could replace 10% of its gasoline usage. -------------------------------------------------------- This quote was taken directly from the Iogen Corporation website. Now consider this: 93 percent of cars on the road have just one person riding in them. One- and two-seat commuter cars would have inherently lighter construction, and if you could reduce the mass of these vehicles by 50 percent you would instantly double their fuel economy. This is a straight & simple law of physics: kinetic energy varies directly with the mass of the object. Interestingly, I don't think it would be difficult to reduce the mass of commuter vehicles by 4-fold, from the 3000-pound vehicles on the road today to just 750 pounds, resulting in a 4-fold increase in fuel economy. Theoretically one could reduce the mass of a single-seat vehicle to perhaps 100 kilograms and still retain adequate mobility for a single person, resulting in a 12-fold increase in fuel economy. So I ask you then, could Canada fuel its transportation industry with ethanol? To wit: ... And fuel cells have double the energy efficiency of internal combustion engines. Can't wait till they're available. I'm more convinced than ever that I'm on the right track with bioethanol.

3. ### Success_Machine (Impossible? I can do that) Registered Senior Member Messages: 365

Refutation of Anti-Ethanol Research by Cornell Scientist. Here is the research that nearly destroyed consumer confidence in ethanol as an alternative to petroleum, with my comments in braces [ ]: =========================================== Ethanol fuel from corn faulted as 'unsustainable subsidized food burning' in analysis by Cornell scientist FOR RELEASE: Aug. 6, 2001 Contact: Roger Segelken Office: 607-255-9736 E-Mail: hrs2@cornell.edu ITHACA, N.Y. -- Neither increases in government subsidies to corn-based ethanol fuel nor hikes in the price of petroleum can overcome what one Cornell University agricultural scientist calls a fundamental input-yield problem: It takes more energy to make ethanol from grain than the combustion of ethanol produces. At a time when ethanol-gasoline mixtures (gasohol) are touted as the American answer to fossil fuel shortages by corn producers, food processors and some lawmakers, Cornell's David Pimentel takes a longer range view. "Abusing our precious croplands to grow corn for an energy-inefficient process that yields low-grade automobile fuel amounts to unsustainable, subsidized food burning," says the Cornell professor in the College of Agriculture and Life Sciences. Pimentel, who chaired a U.S. Department of Energy panel that investigated the energetics, economics and environmental aspects of ethanol production several years ago, subsequently conducted a detailed analysis of the corn-to-car fuel process. His findings will be published in September 2001 in the forthcoming Encyclopedia of Physical Sciences and Technology.
Among his findings are: o An acre of U.S. corn yields about 7,110 pounds of corn for processing into 328 gallons of ethanol. But planting, growing and harvesting that much corn requires about 140 gallons of fossil fuels and costs $347 per acre, according to Pimentel's analysis. Thus, even before corn is converted to ethanol, the feedstock costs $1.05 per gallon of ethanol. [The same acre of land produces 64,000 pounds of plant fiber, cellulose, hemicellulose, and lignin which can be converted to fermentable sugars to produce far more ethanol than just from corn starch.] o The energy economics get worse at the processing plants, where the grain is crushed and fermented. As many as three distillation steps are needed to separate the 8 percent ethanol from the 92 percent water. Additional treatment and energy are required to produce the 99.8 percent pure ethanol for mixing with gasoline. o Adding up the energy costs of corn production and its conversion to ethanol, 131,000 BTUs are needed to make 1 gallon of ethanol. One gallon of ethanol has an energy value of only 77,000 BTU. "Put another way," Pimentel says, "about 70 percent more energy is required to produce ethanol than the energy that actually is in ethanol. Every time you make 1 gallon of ethanol, there is a net energy loss of 54,000 BTU." [Ethanol of 180 proof, or 90% ethanol mixed with 10% water, is an excellent automotive fuel. It has slightly lower energy content compared to gasoline, but it will clean your engine of soot and residue from other petroleum-based fuels, and your engine will last 2-3 times longer. However, US and Canadian laws say that ethanol must be denatured, or rendered unfit for human consumption, to be sold as automotive fuel, or else extra taxes upwards of $22 per liter are applied. Denaturing ethanol is usually accomplished by mixing in 15% gasoline. Distilled ethanol up to 99.8 percent pure is needed if one intends to mix it with gasoline. The water must be removed because while water mixes with ethanol, and ethanol mixes with gasoline, water does not mix with gasoline. Removing the water to achieve such extreme purity by distillation, just so that 15% gasoline can be mixed in, is extremely wasteful of energy. Changing the law to allow non-denatured ethanol to be used as fuel without extra taxes being applied would be the best solution. If not, then distillation can be avoided by using hygroscopic materials that absorb water, or by using polyvinyl alcohol membranes that remove water by osmosis, while consuming little or no energy, albeit in a more time-consuming manner.] o Ethanol from corn costs about $1.74 per gallon to produce, compared with about 95 cents to produce a gallon of gasoline. "That helps explain why fossil fuels -- not ethanol -- are used to produce ethanol," Pimentel says. "The growers and processors can't afford to burn ethanol to make ethanol. U.S. drivers couldn't afford it, either, if it weren't for government subsidies to artificially lower the price." [Using all the methods available, one can achieve at least a 25% net energy gain. These methods include using cellulose as a feedstock, acid and enzyme hydrolysis to reduce it to sugars, and saving energy on the distillation end by either changing the law or by using low-energy water separation methods.] o Most economic analyses of corn-to-ethanol production overlook the costs of environmental damages, which Pimentel says should add another 23 cents per gallon. "Corn production in the U.S.
erodes soil about 12 times faster than the soil can be reformed, and irrigating corn mines groundwater 25 percent faster than the natural recharge rate of ground water. The environmental system in which corn is being produced is being rapidly degraded. Corn should not be considered a renewable resource for ethanol energy production, especially when human food is being converted into ethanol." [Corn should be eaten as food. Non-food crops can be used to produce ethanol, such as grasses, softwood & hardwood lumber residues & sawdust, even recycled cardboard.] o The approximately $1 billion a year in current federal and state subsidies (mainly to large corporations) for ethanol production are not the only costs to consumers, the Cornell scientist observes. Subsidized corn results in higher prices for meat, milk and eggs because about 70 percent of corn grain is fed to livestock and poultry in the United States. Increasing ethanol production would further inflate corn prices, Pimentel says, noting: "In addition to paying tax dollars for ethanol subsidies, consumers would be paying significantly higher food prices in the marketplace." [There are plenty of alternatives to corn, as noted above.] Nickels and dimes aside, some drivers still would rather see their cars fueled by farms in the Midwest than by oil wells in the Middle East, Pimentel acknowledges, so he calculated the amount of corn needed to power an automobile: o The average U.S. automobile, traveling 10,000 miles a year on pure ethanol (not a gasoline-ethanol mix) would need about 852 gallons of the corn-based fuel. This would take 11 acres to grow, based on net ethanol production. This is the same amount of cropland required to feed seven Americans. o If all the automobiles in the United States were fueled with 100 percent ethanol, a total of about 97 percent of U.S. land area would be needed to grow the corn feedstock. Corn would cover nearly the total land area of the United States. [If cellulose were used widely to produce ethanol, rather than just corn starch, then a 10-fold reduction in the amount of land area needed would be immediately realized. At the same time a diversification of feedstocks would be available: not just corn, but all plant types, whether food crops or non-food grasses and other biomass. Furthermore, 93% of cars on the road have only one person riding in them. Reducing the mass of these cars by 7-fold would proportionally increase their fuel economy by the same amount. Taken together, these two factors could reduce the land area needed to 1/70th of that estimated by the Cornell scientist.] ======================================== In addition to my own comments, and in a detailed analysis of Pimentel's research, Dr. Michael S. Graboski of the Colorado School of Mines says Pimentel's findings are based on out-of-date statistics and are contradicted by a recent US Department of Agriculture (USDA) study. Last edited: Sep 5, 2002

5. ### Success_Machine (Impossible? I can do that) Registered Senior Member Messages: 365

The point is: Cellulose is used rather than starch. Corn is the traditional feedstock for ethanol production. The starch is fermented by yeast in a well-known process. But even corn has less than 10% starch by mass. Almost all plants are composed of up to 98 percent CELLULOSE by mass. In fact cellulose is the most abundant substance produced by living things on earth. Now Iogen Corporation, in Ottawa, Canada, has perfected a process to convert cellulose into ethanol.
Dubbed "Bioethanol" this will produce ethanol from non-edible parts of plants, and in quantities to supply automotive fuel on a national scale. Iogen Corporation has the only demonstration-scale bioethanol plant in the world. From their website we have some idea of the basic process: - Steam explosion pre-treatment makes cellulose vulnerable to enzymatic hydrolysis. - Either sulfuric acid hydrolysis or enzymatic hydrolysis can be used to convert pre-treated cellulose to fermentable sugars. - The sugar is fermented into ethanol using yeast producing an ethanol-water mixture. - The ethanol-water mixture is distilled in a fractionating column to produce up to 96% pure ethanol. - Vapor phase molecular sieve system is then used to produce anhydrous ethanol up to 99.9% pure. - Anhydrous ethanol can then be mixed up with gasoline up to 10% without any modification to the engine or carburator, and is warrentied by all automakers. Coincidentally if this were done on a national scale it would surpass the goals of the Kyoto Protocol to reduce greenhouse gas emissions by 6 percent, simply because bioethanol is a zero net producer of CO2. - Waste lignin is produced from hydrolysis, which can be burned as boiler fuel to generate electricity, or to produce heat to aid the distillation process. Alternatively it can undergo gasification, and anaerobic bacteria Clostridium ljungdahlii are used to convert the CO, CO2, and H2 into ethanol in a bioreactor. Other products derived from lignin include livestock feed, soil fertilizer, conductive inks, conductive polymers, biodegradable plastics, high-temperature conductive adhesives, pH & moisture sensors, anti-static coatings for clean-room garments and packaging applications, anti-corrosion coatings, non-linear optical coatings, smart windows, radar-invisible stealth coatings, light-emitting diodes, transistors, electronics, the list goes on... - Waste glycerin from the fermentation process can be used as compost fertilizer, livestock feed, boiler fuel, cosmetics, medicinal products, dental products, soap, dynamite, food & beverages, polyether polyols for tobacco industry, alkyd resins, suppository, humectant and emollient, moisturizers, hair care products, toothpaste, sports beverages use glycerol to prevent dehydration, lubricants, epoxy resins, paper, drying foliage with glycerin, fabric softeners, cellophane packaging material, phenol resin cementing compound, nonionic surfactant extracted from glycerin, triglyceride that substitutes for fat in food products, the list goes on... ============================================== Canada to ratify Kyoto before year-end ============================================== The goal was to cut greenhouse gas emissions by 6 percent by 2008-2010. Bioethanol is a zero net producer of greenhouse gases such that converting automotive fuels to E10 (10% ethanol, 90% gasoline) would achieve the goals of Kyoto. And according to the FAQ section of the Iogen website, Canada could replace 10% of gasoline production by using 1/3rd of its supply of wheat straw from the provinces of Alberta, Saskatchewan, and Manitoba. Straw does get used as livestock feed, but Canada has several other provinces that produce straw as well, so I don't think we will have much trouble meeting the Kyoto requirements. 7. ### Success_MachineImpossible? 
Registered Senior Member Messages: 365

My idea: Kelp Forests

"Kelp forests grow in cold, nutrient rich ocean water, and are one of the most biologically productive habitats in the marine environment." My idea: Approximately 70% of the surface of the Earth is covered by ocean. Mesh screens suspended 15-40 meters below the surface of the water by floating buoys would provide a surface for root attachments for kelp forests over a far larger area than just coastal regions, and in deep water where sunlight cannot reach the seafloor. Kelp can grow 30 cm per day, which can provide a huge feedstock for cellulose-based bioethanol production without using any traditional farmland. Indeed, kelp forests grow best in cold Canadian waters. Any excess production can be used as fertilizer for land-based food crops, or as food for adjacent fish farms. During storms when kelp forests could be damaged or uprooted, the rafts could be designed to sink deeper, where waves on the surface would not affect them. A few hours later when the storm has passed, the raft would again float to the surface. Kelp is great because it has its own floatation sacs, and as the kelp forest grows and gains weight the raft will not require reinforcements to support the growth. A skimmer, similar to the equipment that is used to clean up oil spills, could be used to harvest the kelp. As it is pulled aboard it can be drawn through heavy rollers where it is crushed and the water content drained back into the ocean. What is left is dry plant material that can be further processed into bioethanol. The USDA Food Nutrient Database provides the following data: Seaweed, Kelp, Raw (amounts per 100 gram sample) ------------------ Energy = 43 Kcal Water = 81.58 g Protein = 1.68 g Lipids = 0.56 g Carbohydrates = 9.57 g Fiber = 1.3 g Ash = 6.61 g Refuse = 0 Once you squeeze out all the water, you are basically left with dry plant material that is about 50% carbohydrates and 7 percent fiber. This is a respectable feedstock for bioethanol - if not from the cellulose, then from the carbohydrate content. Last edited: Sep 5, 2002
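As a quick check of the dry-matter arithmetic behind those percentages, a small Python sketch using only the USDA figures quoted above (per 100 g of raw kelp):

```python
water = 81.58          # grams of water per 100 g raw kelp (USDA figure above)
carbs, fiber = 9.57, 1.3

dry = 100.0 - water    # 18.42 g of dry matter remains after pressing
print(round(100 * carbs / dry, 1))  # ~52.0 -> roughly "50% carbohydrates"
print(round(100 * fiber / dry, 1))  # ~7.1  -> roughly "7 percent fiber"
```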
https://forum.snap.berkeley.edu/t/sprite-quality-degradation-help/1726
When creating a sprite in Snap I have to shrink it because it is too large upon being imported onto the stage. The problem is that when it is shrunk it begins to deteriorate in terms of its quality and quickly becomes a pixelated mess that can no longer be read. Is there any way to shrink the sprite so that it retains its sharpness?

If your costume is an svg file you can scale it losslessly. If you have to start with a bitmap format such as .png or .jpeg, you might try doing the shrinking ahead of time, in a professional graphics program such as Photoshop or the Gimp, rather than in Snap!. Those programs will let you tweak a bunch of options about what algorithm to use, and some will look better than others.

For what it's worth – I would just like to add that I have noticed (some) relatively poor image scaling in SNAP as it can often go a bit 'soft' - so I do try to avoid any import of low quality images (particularly 'web' based images) and use the SVG (Scalable Vector Graphics) format, which is already built into SNAP!, rather than try to draw them in SNAP! – which is far from ideal on the quality front. I use an external graphics editor called INKSCAPE - which will both 'import' and 'export' in the SVG format, which SNAP can import very easily as a 'drag and drop'. I also have found the 'png' format is better suited to a lot of SNAP!, which is the second-best option. You can then export directly from INKSCAPE to SNAP and vice versa; all the SNAP! costumes, for example, are better when working in VECTOR graphic (SVG) formats. I use LINUX myself but there is a Mac and Windows version for those interested in importing Vector Graphics into SNAP! Hope this helps somewhat! I use this solution all the time due to that very issue. I also tend to import (PNG) at a larger size, then 'scale' the image down using 'set size' to (say) 50% and use that. Inkscape can be found online here: https://inkscape.org/ Pip

Thanks. I like Inkscape, too. Someday we're going to replace all the bitmap costumes... but don't hold your breath.

On a slightly different note, how does drag and drop actually work? I can't figure out how to trigger it. The right click menu is normal, so I usually just use import with a png or svg.

Are you asking, how does the browser know what to do with a file you've dragged onto the window? I have no idea, except that you must have to register file types when you load the page because as soon as you're hovering over the browser window the file icon gets that little + sign over it (in MacOS anyway).

I'm asking how to do it. Do I run firefox https://snap.berkeley.edu/snap/snap.html -g ?

I'm sorry, I'm still not understanding what you're asking. The answer to the question you seem to be asking is "you move the mouse over the thing you want to drag and drop, hold down the button, move the mouse over the Snap! window, and let go" but I'm sure you know that already.
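To follow the earlier suggestion of shrinking ahead of time in an external program, here is a minimal Python/Pillow sketch (one of many tools that will do this; the filenames, the scale factor, and the helper name are placeholders):

```python
from PIL import Image  # pip install Pillow

def preshrink(src, dst, scale=0.25):
    """Downscale a bitmap once, with a high-quality filter, before importing it into Snap!."""
    im = Image.open(src)
    size = (max(1, round(im.width * scale)), max(1, round(im.height * scale)))
    # LANCZOS is Pillow's highest-quality downsampling filter.
    im.resize(size, Image.LANCZOS).save(dst)

preshrink("sprite.png", "sprite_small.png")
```

Importing the pre-shrunk file at (or near) its final size avoids asking Snap! to resample it again on the stage.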
https://wikimili.com/en/Deep_learning
# Deep learning

Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised. [1] [2] [3]

Deep learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, machine vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance. [4] [5] [6]

Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains. Specifically, neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analog. [7] [8] [9]

The adjective "deep" in deep learning comes from the use of multiple layers in the network. Early work showed that a linear perceptron cannot be a universal classifier, but that a network with a nonpolynomial activation function and one hidden layer of unbounded width can be. Deep learning is a modern variation which is concerned with an unbounded number of layers of bounded size, which permits practical application and optimized implementation, while retaining theoretical universality under mild conditions. In deep learning the layers are also permitted to be heterogeneous and to deviate widely from biologically informed connectionist models, for the sake of efficiency, trainability and understandability, whence the "structured" part.

## Definition

Deep learning is a class of machine learning algorithms that [11] (pp. 199–200) uses multiple layers to progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits or letters or faces.

## Overview

Most modern deep learning models are based on artificial neural networks, specifically convolutional neural networks (CNNs), although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines. [12]

In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place in which level on its own. (Of course, this does not completely eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.) [1] [13]

The word "deep" in "deep learning" refers to the number of layers through which the data is transformed.
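As a rough illustration of data flowing through successive layers, here is a three-layer forward pass in NumPy (a sketch only: the weights are random placeholders rather than a trained model, and the layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

x = rng.normal(size=(1, 64))          # raw input, e.g. flattened pixel intensities
W1 = rng.normal(size=(64, 32)) * 0.1  # placeholder weights; learned in practice
W2 = rng.normal(size=(32, 16)) * 0.1
W3 = rng.normal(size=(16, 10)) * 0.1

h1 = relu(x @ W1)   # first layer: low-level features ("edges")
h2 = relu(h1 @ W2)  # second layer: compositions of those features
y = h2 @ W3         # final layer: task-level outputs, e.g. class scores
print(y.shape)      # (1, 10)
```

In a real network the weights are learned from data, which is what gives each layer its meaningful features.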
More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited. [2] No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than 2. A CAP of depth 2 has been shown to be a universal approximator in the sense that it can emulate any function. [14] Beyond that, more layers do not add to the function-approximation ability of the network. Deep models (CAP > 2) are able to extract better features than shallow models, and hence extra layers help in learning the features effectively.

Deep learning architectures can be constructed with a greedy layer-by-layer method. [15] Deep learning helps to disentangle these abstractions and pick out which features improve performance. [1]

For supervised learning tasks, deep learning methods eliminate feature engineering by translating the data into compact intermediate representations akin to principal components, and derive layered structures that remove redundancy in representation.

Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors [16] and deep belief networks. [1] [17]

## Interpretations

Deep neural networks are generally interpreted in terms of the universal approximation theorem [18] [19] [20] [21] [22] or probabilistic inference. [11] [12] [1] [2] [17] [23]

The classic universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions. [18] [19] [20] [21] In 1989, the first proof was published by George Cybenko for sigmoid activation functions [18] and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik. [19] Recent work showed that universal approximation also holds for non-bounded activation functions such as the rectified linear unit. [24]

The universal approximation theorem for deep neural networks concerns the capacity of networks with bounded width whose depth is allowed to grow. Lu et al. [22] proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; if the width is smaller than or equal to the input dimension, then a deep neural network is not a universal approximator.

The probabilistic interpretation [23] derives from the field of machine learning. It features inference, [11] [12] [1] [2] [17] [23] as well as the optimization concepts of training and testing, related to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function. [23] The probabilistic interpretation led to the introduction of dropout as a regularizer in neural networks. [25]
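A minimal sketch of (inverted) dropout at training time, in NumPy (the drop probability, shapes, and helper name are illustrative, not from any particular paper):

```python
import numpy as np

def dropout(h, p_drop=0.5, rng=None):
    # Randomly zero each unit with probability p_drop, then rescale the
    # survivors so the expected activation matches test-time behavior.
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)

h = np.ones((2, 4))
print(dropout(h))  # roughly half the entries are 0.0, the rest are 2.0
```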
The probabilistic interpretation was introduced by researchers including Hopfield, Widrow and Narendra and popularized in surveys such as the one by Bishop. [26]

## History

The first general, working learning algorithm for supervised, deep, feedforward, multilayer perceptrons was published by Alexey Ivakhnenko and Lapa in 1967. [27] A 1971 paper described a deep network with eight layers trained by the group method of data handling. [28] Other deep learning working architectures, specifically those built for computer vision, began with the Neocognitron introduced by Kunihiko Fukushima in 1980. [29]

The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986, [30] [16] and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons. [31] [32]

In 1989, Yann LeCun et al. applied the standard backpropagation algorithm, which had been around as the reverse mode of automatic differentiation since 1970, [33] [34] [35] [36] to a deep neural network with the purpose of recognizing handwritten ZIP codes on mail. While the algorithm worked, training required 3 days. [37]

By 1991 such systems were used for recognizing isolated 2-D hand-written digits, while recognizing 3-D objects was done by matching 2-D images with a handcrafted 3-D object model. Weng et al. suggested that a human brain does not use a monolithic 3-D object model, and in 1992 they published Cresceptron, [38] [39] [40] a method for performing 3-D object recognition in cluttered scenes. Because it directly used natural images, Cresceptron started the beginning of general-purpose visual learning for natural 3D worlds. Cresceptron is a cascade of layers similar to Neocognitron. But while Neocognitron required a human programmer to hand-merge features, Cresceptron learned an open number of features in each layer without supervision, where each feature is represented by a convolution kernel. Cresceptron segmented each learned object from a cluttered scene through back-analysis through the network. Max pooling, now often adopted by deep neural networks (e.g. ImageNet tests), was first used in Cresceptron to reduce the position resolution by a factor of (2x2) to 1 through the cascade for better generalization.

In 1994, André de Carvalho, together with Mike Fairhurst and David Bisset, published experimental results of a multi-layer boolean neural network, also known as a weightless neural network, composed of a 3-layer self-organising feature extraction neural network module (SOFT) followed by a multi-layer classification neural network module (GSN), which were independently trained. Each layer in the feature extraction module extracted features with growing complexity regarding the previous layer. [41]

In 1995, Brendan Frey demonstrated that it was possible to train (over two days) a network containing six fully connected layers and several hundred hidden units using the wake-sleep algorithm, co-developed with Peter Dayan and Hinton. [42] Many factors contribute to the slow speed, including the vanishing gradient problem analyzed in 1991 by Sepp Hochreiter. [43] [44]

Simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) were a popular choice in the 1990s and 2000s, because of the computational cost of artificial neural networks (ANNs) and a lack of understanding of how the brain wires its biological networks.
Both shallow and deep learning (e.g., recurrent nets) of ANNs have been explored for many years. [45] [46] [47] These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. [48] Key difficulties have been analyzed, including gradient diminishing [43] and weak temporal correlation structure in neural predictive models. [49] [50] Additional difficulties were the lack of training data and limited computing power. Most speech recognition researchers moved away from neural nets to pursue generative modeling. An exception was at SRI International in the late 1990s. Funded by the US government's NSA and DARPA, SRI studied deep neural networks in speech and speaker recognition. The speaker recognition team led by Larry Heck reported significant success with deep neural networks in speech processing in the 1998 National Institute of Standards and Technology Speaker Recognition evaluation. [51] The SRI deep neural network was then deployed in the Nuance Verifier, representing the first major industrial application of deep learning. [52] The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of deep autoencoder on the "raw" spectrogram or linear filter-bank features in the late 1990s, [52] showing its superiority over the Mel-Cepstral features that contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results. [53] Many aspects of speech recognition were taken over by a deep learning method called long short-term memory (LSTM), a recurrent neural network published by Hochreiter and Schmidhuber in 1997. [54] LSTM RNNs avoid the vanishing gradient problem and can learn "Very Deep Learning" tasks [2] that require memories of events that happened thousands of discrete time steps before, which is important for speech. In 2003, LSTM started to become competitive with traditional speech recognizers on certain tasks. [55] Later it was combined with connectionist temporal classification (CTC) [56] in stacks of LSTM RNNs. [57] In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which they made available through Google Voice Search. [58] In 2006, publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero and Teh [59] [60] [61] showed how a many-layered feedforward neural network could be effectively pre-trained one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then fine-tuning it using supervised backpropagation. [62] The papers referred to learning for deep belief nets. Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as a range of large-vocabulary speech recognition tasks have steadily improved. [63] [64] [65] Convolutional neural networks (CNNs) were superseded for ASR by CTC [56] for LSTM. [54] [58] [66] [67] [68] [69] [70] but are more successful in computer vision. The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun. 
[71] Industrial applications of deep learning to large-scale speech recognition started around 2010. The 2009 NIPS Workshop on Deep Learning for Speech Recognition [72] was motivated by the limitations of deep generative models of speech, and the possibility that given more capable hardware and large-scale data sets that deep neural nets (DNN) might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. [73] However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than then-state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) and also than more-advanced generative model-based systems. [63] [74] The nature of the recognition errors produced by the two types of systems was characteristically different, [75] [72] offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems. [11] [76] [77] Analysis around 2009–2010, contrasting the GMM (and other generative speech models) vs. DNN models, stimulated early industrial investment in deep learning for speech recognition, [75] [72] eventually leading to pervasive and dominant use in that industry. That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models. [63] [75] [73] [78] In 2010, researchers extended deep learning from TIMIT to large vocabulary speech recognition, by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees. [79] [80] [81] [76] Advances in hardware have driven renewed interest in deep learning. In 2009, Nvidia was involved in what was called the “big bang” of deep learning, “as deep-learning neural networks were trained with Nvidia graphics processing units (GPUs).” [82] That year, Google Brain used Nvidia GPUs to create capable DNNs. While there, Andrew Ng determined that GPUs could increase the speed of deep-learning systems by about 100 times. [83] In particular, GPUs are well-suited for the matrix/vector computations involved in machine learning. [84] [85] [86] GPUs speed up training algorithms by orders of magnitude, reducing running times from weeks to days. [87] [88] Further, specialized hardware and algorithm optimizations can be used for efficient processing of deep learning models. [89] ### Deep learning revolution In 2012, a team led by George E. Dahl won the "Merck Molecular Activity Challenge" using multi-task deep neural networks to predict the biomolecular target of one drug. [90] [91] In 2014, Hochreiter's group used deep learning to detect off-target and toxic effects of environmental chemicals in nutrients, household products and drugs and won the "Tox21 Data Challenge" of NIH, FDA and NCATS. [92] [93] [94] Significant additional impacts in image or object recognition were felt from 2011 to 2012. Although CNNs trained by backpropagation had been around for decades, and GPU implementations of NNs for years, including CNNs, fast implementations of CNNs on GPUs were needed to progress on computer vision. [84] [86] [37] [95] [2] In 2011, this approach achieved for the first time superhuman performance in a visual pattern recognition contest. 
Also in 2011, the GPU-based CNN approach won the ICDAR Chinese handwriting contest, and in May 2012, it won the ISBI image segmentation contest. [96] Until 2011, CNNs did not play a major role at computer vision conferences, but in June 2012, a paper by Ciresan et al. at the leading conference CVPR [4] showed how max-pooling CNNs on GPU can dramatically improve many vision benchmark records. In October 2012, a similar system by Krizhevsky et al. [5] won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. In November 2012, Ciresan et al.'s system also won the ICPR contest on analysis of large medical images for cancer detection, and in the following year also the MICCAI Grand Challenge on the same topic. [97] In 2013 and 2014, the error rate on the ImageNet task using deep learning was further reduced, following a similar trend in large-scale speech recognition. The Wolfram Image Identification project publicized these improvements. [98] Image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs. [99] [100] [101] [102]

Some researchers state that the October 2012 ImageNet victory anchored the start of a "deep learning revolution" that has transformed the AI industry. [103] In March 2019, Yoshua Bengio, Geoffrey Hinton and Yann LeCun were awarded the Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.

## Neural networks

### Artificial neural networks

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that they send downstream. Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times. The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.
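The signal flow just described (weighted inputs summed and squashed into a value between 0 and 1) reduces to a few lines of code. This is a minimal NumPy sketch, with arbitrary sizes and a sigmoid activation assumed.

```python
import numpy as np

def layer_forward(inputs, weights, bias):
    """One layer of artificial neurons: a weighted sum of the incoming
    signals, followed by a squashing activation that keeps each neuron's
    output between 0 and 1."""
    z = inputs @ weights + bias        # linear combination per neuron
    return 1.0 / (1.0 + np.exp(-z))    # sigmoid activation

rng = np.random.default_rng(0)
x = rng.random(4)                      # signals from 4 upstream neurons
W = rng.standard_normal((4, 3))        # connection weights to 3 neurons
b = np.zeros(3)
print(layer_forward(x, W, b))          # 3 downstream signals, each in (0, 1)
```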
Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis. As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing "Go" [104] ).

### Deep neural networks

A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers. [12] [2] The DNN finds the correct mathematical manipulation to turn the input into the output, whether it be a linear relationship or a non-linear relationship. The network moves through the layers calculating the probability of each output. For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer, and complex DNNs have many layers, hence the name "deep" networks.

DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives. [105] The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network. [12] Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures unless they have been evaluated on the same data sets.

DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights. [106] That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data.

Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. [107] [108] [109] [110] [111] Long short-term memory is particularly effective for this use. [54] [112] Convolutional deep neural networks (CNNs) are used in computer vision. [113] CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR). [70]

#### Challenges

As with ANNs, many issues can arise with naively trained DNNs. Two common issues are overfitting and computation time. DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data. Regularization methods such as Ivakhnenko's unit pruning, [28] weight decay ($\ell_2$-regularization) or sparsity ($\ell_1$-regularization) can be applied during training to combat overfitting. [114]
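To make the $\ell_2$ and $\ell_1$ penalties concrete, the sketch below adds both terms to an ordinary training loss; PyTorch is assumed, and the model, data, and penalty coefficients are illustrative only. In practice, the $\ell_2$ case is usually requested via an optimizer's weight_decay argument instead.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))  # placeholder batch

lam_l2, lam_l1 = 1e-4, 1e-5  # illustrative penalty strengths
loss = criterion(model(x), y)
for p in model.parameters():
    loss = loss + lam_l2 * p.pow(2).sum()  # weight decay: discourages large weights
    loss = loss + lam_l1 * p.abs().sum()   # sparsity: pushes weights toward zero
loss.backward()  # gradients now include both regularization terms
```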
Alternatively, dropout regularization randomly omits units from the hidden layers during training. This helps to exclude rare dependencies. [115] Finally, data can be augmented via methods such as cropping and rotating, so that smaller training sets can be increased in size to reduce the chances of overfitting. [116]

DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), the learning rate, and initial weights. Sweeping through the parameter space for optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such as batching (computing the gradient on several training examples at once rather than on individual examples), [117] speed up computation. The large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for the matrix and vector computations. [118] [119]

Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. CMAC does not require learning rates or randomized initial weights. The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved. [120] [121]

## Hardware

Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer. [122] By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI. [123] OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months. [124] [125]

## Applications

### Automatic speech recognition

Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks [2] that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates [112] is competitive with traditional speech recognizers on certain tasks. [55]

The initial success in speech recognition was based on small-scale recognition tasks based on TIMIT. The data set contains 630 speakers from eight major dialects of American English, where each speaker reads 10 sentences. [126] Its small size lets many configurations be tried. More importantly, the TIMIT task concerns phone-sequence recognition, which, unlike word-sequence recognition, allows weak phone bigram language models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. The error rates listed below, including these early results and measured as percent phone error rates (PER), have been summarized since 1991.
| Method | Percent phone error rate (PER) (%) |
| --- | --- |
| Randomly Initialized RNN [127] | 26.1 |
| Bayesian Triphone GMM-HMM | 25.6 |
| Hidden Trajectory (Generative) Model | 24.8 |
| Monophone Randomly Initialized DNN | 23.4 |
| Monophone DBN-DNN | 22.4 |
| Triphone GMM-HMM with BMMI Training | 21.7 |
| Monophone DBN-DNN on fbank | 20.7 |
| Convolutional DNN [128] | 20.0 |
| Convolutional DNN w. Heterogeneous Pooling | 18.7 |
| Ensemble DNN/CNN/RNN [129] | 18.3 |
| Bidirectional LSTM | 17.8 |
| Hierarchical Convolutional Deep Maxout Network [130] | 16.5 |

The debut of DNNs for speaker recognition in the late 1990s and speech recognition around 2009–2011, and of LSTM around 2003–2007, accelerated progress in eight major areas: [11] [78] [76]

• Scale-up/out and accelerated DNN training and decoding
• Sequence discriminative training
• Feature processing by deep models with solid understanding of the underlying mechanisms
• Adaptation of DNNs and related deep models
• Multi-task and transfer learning by DNNs and related deep models
• CNNs and how to design them to best exploit domain knowledge of speech
• RNNs and their rich LSTM variants
• Other types of deep models, including tensor-based models and integrated deep generative/discriminative models

All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products, etc.) are based on deep learning. [11] [131] [132]

### Electromyography (EMG) recognition

Electromyography (EMG) signals have been used extensively in the identification of user intention to potentially control assistive devices such as smart wheelchairs, exoskeletons, and prosthetic devices. In the past, feedforward dense neural networks were used for this purpose. Researchers then mapped EMG signals to spectrograms and used them as inputs to deep convolutional neural networks. More recently, end-to-end deep learning has been used to map raw signals directly to an identification of user intention. [133]

### Image recognition

A common evaluation set for image classification is the MNIST database data set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available. [134]

Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011. [135] Deep learning-trained vehicles now interpret 360° camera views. [136] Another example is Facial Dysmorphology Novel Analysis (FDNA), used to analyze cases of human malformation connected to a large database of genetic syndromes.

### Visual art processing

Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) Neural Style Transfer - capturing the style of a given artwork and applying it in a visually pleasing manner to an arbitrary photograph or video, and c) generating striking imagery based on random visual input fields. [137] [138]

### Natural language processing

Neural networks have been used for implementing language models since the early 2000s. [107] LSTM helped to improve machine translation and language modeling. [108] [109] [110] Other key techniques in this field are negative sampling [139] and word embedding.
Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN. [140] Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing. [140] Deep neural architectures provide the best results for constituency parsing, [141] sentiment analysis, [142] information retrieval, [143] [144] spoken language understanding, [145] machine translation, [108] [146] contextual entity linking, [146] writing style recognition, [147] text classification and others. [148]

Recent developments generalize word embedding to sentence embedding. Google Translate (GT) uses a large end-to-end long short-term memory network. [149] [150] [151] [152] [153] [154] Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system "learns from millions of examples." [150] It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages. [150] The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations". [150] [155] GT uses English as an intermediate between most language pairs. [155]

### Drug discovery and toxicology

A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects. [156] [157] Research has explored use of deep learning to predict the biomolecular targets, [90] [91] off-targets, and toxic effects of environmental chemicals in nutrients, household products and drugs. [92] [93] [94]

AtomNet is a deep learning system for structure-based rational drug design. [158] AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus [159] and multiple sclerosis. [160] [161] In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice. [162] [163]

### Customer relationship management

Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value. [164]

### Recommendation systems

Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. [165] [166] Multi-view deep learning has been applied for learning user preferences from multiple domains. [167] The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.

### Bioinformatics

An autoencoder ANN was used in bioinformatics to predict gene ontology annotations and gene-function relationships. [168] In medical informatics, deep learning was used to predict sleep quality based on data from wearables [169] and to predict health complications from electronic health record data. [170]
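As a minimal sketch of the autoencoder idea used in the bioinformatics work above, the following PyTorch module compresses an input vector to a small latent code and reconstructs it; the feature and latent dimensions and the random training batch are placeholder assumptions, not details of the cited studies.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features=200, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 200)  # placeholder batch, e.g. one feature vector per gene
loss = nn.functional.mse_loss(model(x), x)  # reconstruction error
loss.backward()
optimizer.step()
# After training, model.encoder(x) yields compact features for downstream
# annotation or prediction tasks.
```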
### Medical image analysis

Deep learning has been shown to produce competitive results in medical applications such as cancer cell classification, lesion detection, organ segmentation and image enhancement. [171] [172]

### Mobile advertising

Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server. [173] Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.

### Image restoration

Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, inpainting, and film colorization. [174] These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration", [175] which trains on an image dataset, and Deep Image Prior, which trains on the image that needs restoration.

### Financial fraud detection

Deep learning is being successfully applied to financial fraud detection and anti-money laundering. "Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g. anomaly detection. [176]

### Military

The United States Department of Defense applied deep learning to train robots in new tasks through observation. [177]

## Relation to human cognitive and brain development

Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s. [178] [179] [180] [181] These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave of nerve growth factor) support a self-organization somewhat analogous to that of the neural networks utilized in deep learning models. Like the neocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment) and then passes its output (and possibly the original input) to other layers. This process yields a self-organizing stack of transducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ... different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature." [182]

A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of the backpropagation algorithm have been proposed in order to increase its processing realism. [183] [184] Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality.
[185] [186] In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex. [187] Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons [188] [189] and neural populations. [190] Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system, [191] both at the single-unit [192] and at the population [193] levels.

## Commercial activity

Facebook's AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them. [194]

Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only pixels as data input. In 2015, they demonstrated their AlphaGo system, which learned the game of Go well enough to beat a professional Go player. [195] [196] [197] Google Translate uses a neural network to translate between more than 100 languages.

In 2015, Blippar demonstrated a mobile augmented reality application that uses deep learning to recognize objects in real time. [198] In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories. [199]

In 2008, [200] researchers at The University of Texas at Austin (UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor. [177] Building on TAMER, a new algorithm called Deep TAMER was introduced in 2018 during a collaboration between the U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation. [177] Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as “good job” and “bad job.” [201]

## Criticism and comment

Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science.

### Theory

A main criticism concerns the lack of theory surrounding some methods. [202] Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence, is less clear. [citation needed] (E.g., does it converge? If so, how fast? What is it approximating?) Deep learning methods are often looked at as a black box, with most confirmations done empirically rather than theoretically. [203]

Others point out that deep learning should be looked at as a step towards realizing strong AI, not as an all-encompassing solution. Despite the power of deep learning methods, they still lack much of the functionality needed to realize this goal entirely. Research psychologist Gary Marcus noted: "Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (...)
have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems, like Watson (...) use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning." [204]

In further reference to the idea that artistic sensitivity might inhere within relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20–30 layers) neural networks attempting to discern, within essentially random data, the images on which they were trained [205] demonstrates a visual appeal: the original research notice received well over 1,000 comments and was the subject of what was for a time the most frequently accessed article on The Guardian's website. [206]

### Errors

Some deep learning architectures display problematic behaviors, [207] such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images [208] and misclassifying minuscule perturbations of correctly classified images. [209] Goertzel hypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-component artificial general intelligence (AGI) architectures. [207] These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar [210] decompositions of observed entities and events. [207] Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules, and is a basic goal of both human language acquisition [211] and artificial intelligence (AI). [212]

### Cyber threat

As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception. [213] By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such a manipulation is termed an “adversarial attack.” [214]

In 2016, researchers used one ANN to doctor images in trial-and-error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images then photographed successfully tricked an image classification system. [215] One defense is reverse image search, in which a possible fake image is submitted to a site such as TinEye that can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken. [216]

Another group showed that certain psychedelic spectacles could fool a facial recognition system into thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017, researchers added stickers to stop signs and caused an ANN to misclassify them. [215]
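Adversarial perturbations of the kind described above are commonly demonstrated with the fast gradient sign method (FGSM), in which each input pixel is nudged slightly in the direction that most increases the classifier's loss. The sketch below is a minimal PyTorch illustration under that assumption; it is a generic example, not the specific attack used in any of the cited studies.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.01):
    """Fast gradient sign method: shift every pixel by +/- eps in the
    direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixel values in a valid range

# Usage sketch: `model` is any image classifier; for small eps the perturbed
# image is typically indistinguishable from the original to a human observer.
# adversarial = fgsm_attack(model, image_batch, true_labels)
```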
ANNs can, however, be further trained to detect attempts at deception, potentially leading attackers and defenders into an arms race similar to the kind that already defines the malware defense industry. ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target. [215] Another group demonstrated that certain sounds could make the Google Now voice command system open a particular web address that would download malware. [215] In “data poisoning,” false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery. [215]

### Reliance on human microwork

Most deep learning systems rely on training and verification data that is generated and/or annotated by humans. It has been argued in media philosophy that not only low-paid clickwork (e.g. on Amazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of human microwork that are often not recognized as such. [217] The philosopher Rainer Mühlhoff distinguishes five types of "machinic capture" of human microwork to generate training data: (1) gamification (the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g. CAPTCHAs for image recognition or click-tracking on Google search results pages), (3) exploitation of social motivations (e.g. tagging faces on Facebook to obtain labeled facial images), (4) information mining (e.g. by leveraging quantified-self devices such as activity trackers) and (5) clickwork. [217]

Mühlhoff argues that in most commercial end-user applications of deep learning, such as Facebook's face recognition system, the need for training data does not stop once an ANN is trained. Rather, there is a continued demand for human-generated verification data to constantly calibrate and update the ANN. For this purpose, Facebook introduced the feature that once a user is automatically recognized in an image, they receive a notification. They can choose whether or not they want to be publicly labeled on the image, or tell Facebook that it is not them in the picture. [218] This user interface is a mechanism to generate "a constant stream of verification data" [217] to further train the network in real time. As Mühlhoff argues, the involvement of human users to generate training and verification data is so typical for most commercial end-user applications of deep learning that such systems may be referred to as "human-aided artificial intelligence". [217]
## References

1. Bengio, Y.; Courville, A.; Vincent, P. (2013). "Representation Learning: A Review and New Perspectives". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (8): 1798–1828. arXiv:. doi:10.1109/tpami.2013.50. PMID 23787338. S2CID 393948.
2. Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural Networks. 61: 85–117. arXiv:. doi:10.1016/j.neunet.2014.09.003. PMID 25462637. S2CID 11715509.
3. Bengio, Yoshua; LeCun, Yann; Hinton, Geoffrey (2015). "Deep Learning". Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID 26017442. S2CID 3074096.
4. Ciresan, D.; Meier, U.; Schmidhuber, J. (2012). "Multi-column deep neural networks for image classification". 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 3642–3649. arXiv:. doi:10.1109/cvpr.2012.6248110. ISBN 978-1-4673-1228-8. S2CID 2161592.
5. Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey (2012). "ImageNet Classification with Deep Convolutional Neural Networks" (PDF). NIPS 2012: Neural Information Processing Systems, Lake Tahoe, Nevada.
6. "Google's AlphaGo AI wins three-match series against the world's best Go player". TechCrunch. 25 May 2017.
7. Marblestone, Adam H.; Wayne, Greg; Kording, Konrad P. (2016). "Toward an Integration of Deep Learning and Neuroscience". Frontiers in Computational Neuroscience. 10: 94. arXiv:. Bibcode:2016arXiv160603813M. doi:10.3389/fncom.2016.00094. PMC. PMID 27683554. S2CID 1994856.
8. Olshausen, B. A. (1996). "Emergence of simple-cell receptive field properties by learning a sparse code for natural images". Nature. 381 (6583): 607–609. Bibcode:1996Natur.381..607O. doi:10.1038/381607a0. PMID 8637596. S2CID 4358477.
9. Bengio, Yoshua; Lee, Dong-Hyun; Bornschein, Jorg; Mesnard, Thomas; Lin, Zhouhan (2015-02-13). "Towards Biologically Plausible Deep Learning". arXiv: [cs.LG].
10. Schulz, Hannes; Behnke, Sven (2012-11-01). "Deep Learning". KI - Künstliche Intelligenz. 26 (4): 357–363. doi:10.1007/s13218-012-0198-z. ISSN 1610-1987. S2CID 220523562.
11. Deng, L.; Yu, D. (2014).
"Deep Learning: Methods and Applications" (PDF). Foundations and Trends in Signal Processing. 7 (3–4): 1–199. doi:10.1561/2000000039. 12. Bengio, Yoshua (2009). "Learning Deep Architectures for AI" (PDF). Foundations and Trends in Machine Learning. 2 (1): 1–127. CiteSeerX  . doi:10.1561/2200000006. Archived from the original (PDF) on 2016-03-04. Retrieved 2015-09-03. 13. LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (28 May 2015). "Deep learning". Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID   26017442. S2CID   3074096. 14. Shigeki, Sugiyama (2019-04-12). Human Behavior and Another Kind in Consciousness: Emerging Research and Opportunities: Emerging Research and Opportunities. IGI Global. ISBN   978-1-5225-8218-2. 15. Bengio, Yoshua; Lamblin, Pascal; Popovici, Dan; Larochelle, Hugo (2007). Greedy layer-wise training of deep networks (PDF). Advances in neural information processing systems. pp. 153–160. 16. Schmidhuber, Jürgen (2015). "Deep Learning". Scholarpedia. 10 (11): 32832. Bibcode:2015SchpJ..1032832S. doi:. 17. Hinton, G.E. (2009). "Deep belief networks". Scholarpedia. 4 (5): 5947. Bibcode:2009SchpJ...4.5947H. doi:. 18. Cybenko (1989). "Approximations by superpositions of sigmoidal functions" (PDF). Mathematics of Control, Signals, and Systems . 2 (4): 303–314. doi:10.1007/bf02551274. S2CID   3958369. Archived from the original (PDF) on 2015-10-10. 19. Hornik, Kurt (1991). "Approximation Capabilities of Multilayer Feedforward Networks". Neural Networks. 4 (2): 251–257. doi:10.1016/0893-6080(91)90009-t. 20. Haykin, Simon S. (1999). Neural Networks: A Comprehensive Foundation. Prentice Hall. ISBN   978-0-13-273350-2. 21. Hassoun, Mohamad H. (1995). Fundamentals of Artificial Neural Networks. MIT Press. p. 48. ISBN   978-0-262-08239-6. 22. Lu, Z., Pu, H., Wang, F., Hu, Z., & Wang, L. (2017). The Expressive Power of Neural Networks: A View from the Width. Neural Information Processing Systems, 6231-6239. 23. Murphy, Kevin P. (24 August 2012). Machine Learning: A Probabilistic Perspective. MIT Press. ISBN   978-0-262-01802-9. 24. Sonoda, Sho; Murata, Noboru (2017). "Neural network with unbounded activation functions is universal approximator". Applied and Computational Harmonic Analysis. 43 (2): 233–268. arXiv:. doi:10.1016/j.acha.2015.12.005. S2CID   12149203. 25. Hinton, G. E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R.R. (2012). "Improving neural networks by preventing co-adaptation of feature detectors". arXiv: [math.LG]. 26. Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning (PDF). Springer. ISBN   978-0-387-31073-2. 27. Ivakhnenko, A. G.; Lapa, V. G. (1967). Cybernetics and Forecasting Techniques. American Elsevier Publishing Co. ISBN   978-0-444-00020-0. 28. Ivakhnenko, Alexey (1971). "Polynomial theory of complex systems" (PDF). IEEE Transactions on Systems, Man and Cybernetics. SMC-1 (4): 364–378. doi:10.1109/TSMC.1971.4308320. 29. Fukushima, K. (1980). "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position". Biol. Cybern. 36 (4): 193–202. doi:10.1007/bf00344251. PMID   7370364. S2CID   206775608. 30. Rina Dechter (1986). Learning while searching in constraint-satisfaction problems. University of California, Computer Science Department, Cognitive Systems Laboratory.Online 31. Igor Aizenberg, Naum N. Aizenberg, Joos P.L. Vandewalle (2000). Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications. 
Springer Science & Business Media. 32. Co-evolving recurrent neurons learn deep memory POMDPs. Proc. GECCO, Washington, D. C., pp. 1795-1802, ACM Press, New York, NY, USA, 2005. 33. Seppo Linnainmaa (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 6-7. 34. Griewank, Andreas (2012). "Who Invented the Reverse Mode of Differentiation?" (PDF). Documenta Mathematica (Extra Volume ISMP): 389–400. Archived from the original (PDF) on 2017-07-21. Retrieved 2017-06-11. 35. Werbos, P. (1974). "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences". Harvard University. Retrieved 12 June 2017. 36. Werbos, Paul (1982). "Applications of advances in nonlinear sensitivity analysis" (PDF). System modeling and optimization. Springer. pp. 762–770. 37. LeCun et al., "Backpropagation Applied to Handwritten Zip Code Recognition," Neural Computation, 1, pp. 541–551, 1989. 38. J. Weng, N. Ahuja and T. S. Huang, "Cresceptron: a self-organizing neural network which grows adaptively," Proc. International Joint Conference on Neural Networks, Baltimore, Maryland, vol I, pp. 576-581, June, 1992. 39. J. Weng, N. Ahuja and T. S. Huang, "Learning recognition and segmentation of 3-D objects from 2-D images," Proc. 4th International Conf. Computer Vision, Berlin, Germany, pp. 121-128, May, 1993. 40. J. Weng, N. Ahuja and T. S. Huang, "Learning recognition and segmentation using the Cresceptron," International Journal of Computer Vision, vol. 25, no. 2, pp. 105-139, Nov. 1997. 41. de Carvalho, Andre C. L. F.; Fairhurst, Mike C.; Bisset, David (1994-08-08). "An integrated Boolean neural network for pattern classification". Pattern Recognition Letters. 15 (8): 807–813. doi:10.1016/0167-8655(94)90009-4. 42. Hinton, Geoffrey E.; Dayan, Peter; Frey, Brendan J.; Neal, Radford (1995-05-26). "The wake-sleep algorithm for unsupervised neural networks". Science. 268 (5214): 1158–1161. Bibcode:1995Sci...268.1158H. doi:10.1126/science.7761831. PMID   7761831. 43. S. Hochreiter., "Untersuchungen zu dynamischen neuronalen Netzen," Diploma thesis. Institut f. Informatik, Technische Univ. Munich. Advisor: J. Schmidhuber, 1991. 44. Hochreiter, S.; et al. (15 January 2001). "Gradient flow in recurrent nets: the difficulty of learning long-term dependencies". In Kolen, John F.; Kremer, Stefan C. (eds.). A Field Guide to Dynamical Recurrent Networks. John Wiley & Sons. ISBN   978-0-7803-5369-5. 45. Morgan, Nelson; Bourlard, Hervé; Renals, Steve; Cohen, Michael; Franco, Horacio (1993-08-01). "Hybrid neural network/hidden markov model systems for continuous speech recognition". International Journal of Pattern Recognition and Artificial Intelligence. 07 (4): 899–916. doi:10.1142/s0218001493000455. ISSN   0218-0014. 46. Robinson, T. (1992). "A real-time recurrent error propagation network word recognition system". ICASSP. Icassp'92: 617–620. ISBN   9780780305328. 47. Waibel, A.; Hanazawa, T.; Hinton, G.; Shikano, K.; Lang, K. J. (March 1989). "Phoneme recognition using time-delay neural networks" (PDF). IEEE Transactions on Acoustics, Speech, and Signal Processing. 37 (3): 328–339. doi:10.1109/29.21701. hdl:10338.dmlcz/135496. ISSN   0096-3518. 48. Baker, J.; Deng, Li; Glass, Jim; Khudanpur, S.; Lee, C.-H.; Morgan, N.; O'Shaughnessy, D. (2009). "Research Developments and Directions in Speech Recognition and Understanding, Part 1". IEEE Signal Processing Magazine. 26 (3): 75–80. 
Bibcode:2009ISPM...26...75B. doi:10.1109/msp.2009.932166. S2CID   357467. 49. Bengio, Y. (1991). "Artificial Neural Networks and their Application to Speech/Sequence Recognition". McGill University Ph.D. thesis. 50. Deng, L.; Hassanein, K.; Elmasry, M. (1994). "Analysis of correlation structure for a neural predictive model with applications to speech recognition". Neural Networks. 7 (2): 331–339. doi:10.1016/0893-6080(94)90027-2. 51. Doddington, G.; Przybocki, M.; Martin, A.; Reynolds, D. (2000). "The NIST speaker recognition evaluation ± Overview, methodology, systems, results, perspective". Speech Communication. 31 (2): 225–254. doi:10.1016/S0167-6393(99)00080-1. 52. Heck, L.; Konig, Y.; Sonmez, M.; Weintraub, M. (2000). "Robustness to Telephone Handset Distortion in Speaker Recognition by Discriminative Feature Design". Speech Communication. 31 (2): 181–192. doi:10.1016/s0167-6393(99)00077-1. 53. "Acoustic Modeling with Deep Neural Networks Using Raw Time Signal for LVCSR (PDF Download Available)". ResearchGate. Retrieved 2017-06-14. 54. Hochreiter, Sepp; Schmidhuber, Jürgen (1997-11-01). "Long Short-Term Memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. ISSN   0899-7667. PMID   9377276. S2CID   1915014. 55. Graves, Alex; Eck, Douglas; Beringer, Nicole; Schmidhuber, Jürgen (2003). "Biologically Plausible Speech Recognition with LSTM Neural Nets" (PDF). 1st Intl. Workshop on Biologically Inspired Approaches to Advanced Information Technology, Bio-ADIT 2004, Lausanne, Switzerland. pp. 175–184. 56. Graves, Alex; Fernández, Santiago; Gomez, Faustino (2006). "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks". Proceedings of the International Conference on Machine Learning, ICML 2006: 369–376. CiteSeerX  . 57. Santiago Fernandez, Alex Graves, and Jürgen Schmidhuber (2007). An application of recurrent neural networks to discriminative keyword spotting. Proceedings of ICANN (2), pp. 220–229. 58. Sak, Haşim; Senior, Andrew; Rao, Kanishka; Beaufays, Françoise; Schalkwyk, Johan (September 2015). "Google voice search: faster and more accurate". 59. Hinton, Geoffrey E. (2007-10-01). "Learning multiple layers of representation". Trends in Cognitive Sciences. 11 (10): 428–434. doi:10.1016/j.tics.2007.09.004. ISSN   1364-6613. PMID   17921042. S2CID   15066318. 60. Hinton, G. E.; Osindero, S.; Teh, Y. W. (2006). "A Fast Learning Algorithm for Deep Belief Nets" (PDF). Neural Computation . 18 (7): 1527–1554. doi:10.1162/neco.2006.18.7.1527. PMID   16764513. S2CID   2309950. 61. Bengio, Yoshua (2012). "Practical recommendations for gradient-based training of deep architectures". arXiv: [cs.LG]. 62. G. E. Hinton., "Learning multiple layers of representation," Trends in Cognitive Sciences, 11, pp. 428–434, 2007. 63. Hinton, G.; Deng, L.; Yu, D.; Dahl, G.; Mohamed, A.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.; Kingsbury, B. (2012). "Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups". IEEE Signal Processing Magazine. 29 (6): 82–97. Bibcode:2012ISPM...29...82H. doi:10.1109/msp.2012.2205597. S2CID   206485943. 64. Deng, Li; Hinton, Geoffrey; Kingsbury, Brian (1 May 2013). "New types of deep neural network learning for speech recognition and related applications: An overview". Microsoft Research. CiteSeerX   via research.microsoft.com. 65. 
2020-10-27 00:55:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5960193872451782, "perplexity": 9981.422385476966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107892710.59/warc/CC-MAIN-20201026234045-20201027024045-00254.warc.gz"}
https://stat.ethz.ch/CRAN/web/packages/optimCheck/vignettes/optimCheck.html
# Quick Tour of Package optimCheck

## Introduction

The optimCheck package provides a set of tools to check that the output of an optimization algorithm is indeed at a local mode of the objective function. The tools include both visual and numerical checks, the latter serving to automate formalized unit tests with, e.g., the R packages testthat or RUnit. A brief overview of the package functionality is illustrated with the following example. Let $Q(\boldsymbol{x}) = \boldsymbol{x}'\boldsymbol{A}\boldsymbol{x} - 2 \boldsymbol{b}'\boldsymbol{x}$ denote a quadratic objective function in $\boldsymbol{x} \in \mathbb{R}^d$. If $\boldsymbol{A}_{d \times d}$ is a positive-definite matrix, then the unique minimum of $Q(\boldsymbol{x})$ is $\hat{\boldsymbol{x}} = \boldsymbol{A}^{-1}\boldsymbol{b}$. Let us now ignore this information and try to minimize $Q(\boldsymbol{x})$ using R's simplest built-in mode-finding routine, provided by the R function stats::optim(). In its simplest configuration, stats::optim() requires only the objective function and a starting value $\boldsymbol{x}_0$ to initialize the mode-finding procedure. Let's consider a difficult setting for stats::optim(), with a relatively large $d = 12$ and a starting value $\boldsymbol{x}_0$ which is far from the optimal value $\hat{\boldsymbol{x}}$.

```r
d <- 12 # dimension of optimization problem

# create the objective function: Q(x) = x'Ax - 2b'x
A <- crossprod(matrix(rnorm(d^2), d, d)) # positive definite matrix
b <- rnorm(d)
objfun <- function(x) crossprod(x, A %*% x)[1] - 2 * crossprod(b, x)[1]

xhat <- solve(A, b) # analytic solution

# numerical mode-finding using optim
xfit <- optim(fn = objfun,                 # objective function
              par = xhat * 5,              # initial value is far from the solution
              control = list(maxit = 1e5)) # very large max. number of iterations
```

### Visual Checks with optim_proj()

Like most solvers, stats::optim() utilizes various criteria to determine whether its algorithm has converged, which can be assessed with the following command:

```r
# any value other than 0 means optim failed to converge
xfit$convergence
## [1] 0
```

Here stats::optim() reports that its algorithm has converged. Now let's check this visually with optimCheck using projection plots. That is, let $\tilde{\boldsymbol{x}}$ denote the potential optimum of $Q(\boldsymbol{x})$. Then for each $i = 1,\ldots,d$, we plot $Q_i(x_i) = Q(x_i, \tilde{\boldsymbol{x}}_{-i}), \qquad \tilde{\boldsymbol{x}}_{-i} = (\tilde x_1, \ldots, \tilde x_{i-1}, \tilde x_{i+1}, \ldots, \tilde x_d).$ In other words, projection plot $i$ varies only $x_i$, while holding all other elements of $\boldsymbol{x}$ fixed at the value of the potential solution $\tilde{\boldsymbol{x}}$. These plots are produced with the optimCheck function optim_proj():

```r
require(optimCheck) # load package
## Loading required package: optimCheck

# projection plots
xnames <- parse(text = paste0("x[", 1:d, "]")) # variable names
oproj <- optim_proj(fun = objfun,     # objective function
                    xsol = xfit$par,  # potential solution
                    maximize = FALSE, # indicates that a local minimum is sought
                    xrng = .5,        # range of projection plot: x_i +/- .5*|x_i|
                    xnames = xnames)
```

In each of the projection plots, the potential solution $\tilde x_i$ is plotted in red. The xrng argument to optim_proj() specifies the plotting range. Among various ways of doing this, perhaps the simplest is a single scalar value indicating that each plot should span $x_i \pm \mathtt{xrng} \cdot |x_i|$.
Thus we can see from these plots that stats::optim() was sometimes up to 10% away from the local mode of the projection plots.

### Quantification of Projection Plots

Projection plots are a powerful method of assessing the convergence of mode-finding routines to a local mode. While great for interactive testing, plots are not well-suited to automated unit testing as part of an R package development process. To this end, optimCheck provides a means of quantifying the result of a call to optim_proj(). Indeed, a call to optim_proj() returns an object of class optproj with the following elements:

```r
sapply(oproj, function(x) dim(as.matrix(x)))
##      xsol ysol maximize xproj yproj
## [1,]   12    1        1   100   100
## [2,]    1    1        1    12    12
```

As described in the function documentation, xproj and yproj are matrices of which each column contains the $x$-axis and $y$-axis coordinates of the points contained in each projection plot. The summary() method for optproj objects converts these to absolute and relative errors in both the potential solution and the objective function. The print() method conveniently displays these results:

```r
oproj # same print method as summary(oproj)
##
## 'optim_proj' check on 12-variable minimization problem.
##
## Top 5 relative errors in potential solution:
##
##         xsol D=xopt-xsol R=D/|xsol|
## x7   0.58760   -0.163200    -0.2778
## x10 -1.84200   -0.344100    -0.1869
## x3  -0.70570    0.096230     0.1364
## x1   4.19900   -0.445300    -0.1061
## x6  -0.07315   -0.007758    -0.1061
```

The documentation for summary.optproj() describes the various calculations it provides. Perhaps the most useful of these are the elementwise absolute and relative differences between the potential solution $\tilde{\boldsymbol{x}}$ and $\hat{\boldsymbol{x}}_\mathrm{proj}$, the vector of optimal 1D solutions in each projection plot. For convenience, these can be extracted with the diff() method:

```r
diff(oproj) # equivalent to summary(oproj)$xdiff
##              abs         rel
## x1  -0.445321028 -0.10606061
## x2  -0.215858067 -0.01515152
## x3   0.096231561  0.13636364
## x4   0.143242614  0.02525253
## x5  -0.310081433 -0.03535354
## x6  -0.007758386 -0.10606061
## x7  -0.163226011 -0.27777778
## x8  -0.052715576 -0.01515152
## x9  -0.326153891 -0.06565657
## x10 -0.344134419 -0.18686869
## x11 -0.343453576 -0.02525253
## x12 -0.087700503 -0.01515152

# here's exactly what these are
xsol <- summary(oproj)$xsol # candidate solution
xopt <- summary(oproj)$xopt # optimal solution in each projection plot
xdiff <- cbind(abs = xopt-xsol, rel = (xopt-xsol)/abs(xsol))
range(xdiff - diff(oproj))
## [1] 0 0
```

Thus it is proposed that a combination of summary() and diff() methods for projection plots can be useful for constructing automated unit tests. In this case, plotting itself can be disabled by passing optim_proj() the argument plot = FALSE. See the optimCheck/tests folder for testthat examples featuring:

- Logistic Regression (the stats::glm() function).
- Quantile Regression (the quantreg::rq() function in quantreg).
- Multivariate normal mixtures (mclust::emEEE() in mclust).

You can run these tests with the command

```r
testthat::test_package("optimCheck", reporter = "progress")
```

## optim_refit(): A Numerical Alternative to Projection Plots

There are some situations in which the numerical quantification of projection plots leaves something to be desired: generating all projection plots requires N = 2 * npts * length(xsol) evaluations of the objective function (where the default value is npts = 100), which can belabor the process of automated unit testing.
A different test for mode-finding routines is to recalculate the optimal solution with a "very good" starting point: the current potential solution. This is the so-called "refined optimization" – or refit – strategy. The optim_refit() function refines the optimization with a call to R's built-in general-purpose optimizer: the function stats::optim(). In particular, it selects the default Nelder-Mead simplex method with a simplified parameter interface. As seen in the unit tests above, the refit checks are 2-3 times faster than their projection plot counterparts. Consider now the example of refining the original stats::optim() solution to the quadratic objective function:

```r
orefit <- optim_refit(fun = objfun,     # objective function
                      xsol = xfit$par,  # potential solution
                      maximize = FALSE) # indicates that a local minimum is sought
## Warning in optim_refit(fun = objfun, xsol = xfit$par, maximize = FALSE):
## Iteration limit maxit has been reached.

summary(orefit) # same print method as orefit
##
## 'optim_refit' check on 12-variable minimization problem.
##
## Top 5 relative errors in potential solution:
##
##          xsol D=xopt-xsol R=D/|xsol|
## [1,]  0.58760     -1.5230     -2.592
## [2,] -0.70570      1.5430      2.186
## [3,] -1.84200      3.0410      1.651
## [4,] -0.07315     -0.1021     -1.396
## [5,]  4.19900     -4.2820     -1.020
```

Thus we can see that the first and second run of stats::optim() are quite different. Of course, this does not mean that the refit solution produced by stats::optim() is a local mode:

```r
# projection plots with refined solution
optim_proj(xsol = orefit$xopt, fun = objfun, xrng = .5, maximize = FALSE)
```

Indeed, the default stats::optim() method is only accurate when initialized close to the optimal solution. Therefore, one may wish to run the refit test with a different optimizer. This can be done externally to optim_refit(), prior to passing the refit solution to the function via its argument xopt. This is illustrated below using stats::optim()'s gradient-based quasi-Newton method:

```r
# gradient of the objective function
objgrad <- function(x) 2 * drop(A %*% x - b)

# mode-finding using quasi-Newton method
xfit2 <- optim(fn = objfun,    # objective function
               par = xfit$par, # initial value (first optim fit)
               method = "BFGS")

# external refit test with optimizer of choice
orefit2 <- optim_refit(fun = objfun,
                       xsol = xfit$par,  # initial value (first optim fit)
                       xopt = xfit2$par, # refit value (2nd fit with quasi-Newton method)
                       maximize = FALSE)

# projection plot test on refit solution
optim_proj(xsol = orefit2$xopt, fun = objfun, xrng = .5,
           maximize = FALSE, plot = FALSE)
##
## 'optim_proj' check on 12-variable minimization problem.
##
## Top 5 relative errors in potential solution:
##
##       xsol D=xopt-xsol R=D/|xsol|
## x1  0.5008   -0.002530  -0.005051
## x2  3.0600   -0.015450  -0.005051
## x3 -0.3098    0.001564   0.005051
## x4 -1.4320    0.007233   0.005051
## x5  1.5770    0.007967   0.005051
```

## Future Work: Constrained Optimization

Many constrained statistical optimization problems seek a "sparse" solution, i.e., one for which some of the elements of the optimal solution are equal to zero. In such cases, the relative difference between potential and optimal solution is an unreliable metric. A working proposal is to flag these "true zeros" in optim_proj() and optim_refit(), so as to add a 1 to the relative difference denominators. Other suggestions on this and optimCheck in general are welcome.
2022-05-20 05:32:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5822363495826721, "perplexity": 3626.870085933225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531352.50/warc/CC-MAIN-20220520030533-20220520060533-00783.warc.gz"}
https://book.declaredesign.org/preamble.html
# 1 Preamble

This book introduces a new way of thinking about research designs in the social sciences. Our hope is that this approach will make designing research studies easier – easier to produce strong research designs, but also easier to share designs and build on the designs of others. The core idea is the MIDA framework, in which a research design is characterized by four elements: a model, an inquiry, a data strategy, and an answer strategy. We have to understand each of the four on their own and also how they interrelate. The design encodes your beliefs about the world, it describes your questions, and it lays out how you go about answering those questions, both in terms of what data you collect and how you analyze it. In strong designs, choices made in the model and inquiry are reflected in the data and answer strategies, and vice versa. We think of designs as objects that can be interrogated. Each of the four design elements can be "declared" in computer code and – if done right – the information provided is enough to "diagnose" the quality of the design through computer simulation. Researchers can then select the best design for their purposes by "redesigning" over alternative, feasible designs. This way of thinking pays dividends at multiple points in the research design lifecycle: brainstorming an idea, planning the design, implementing it, and integrating the results into the broader research literature. The declaration, diagnosis, and redesign process informs choices made from the beginning to the end of a research project.

## 1.1 How to read this book

We had multiple audiences in mind when writing this book. First, we're thinking of the set of people who could benefit from a high-level introduction to these ideas. If we only had 30 minutes with a person to try and get them to understand what our book is about, we would give them Part I. We're thinking of beginners, people who are new to the practice of research design and who are embarking on their first empirical projects. The MIDA framework introduced in Part I accommodates many different empirical approaches: qualitative and quantitative, descriptive and causal, observational and experimental. Beginners starting out in any of these traditions can use our framework to consider how the design elements in those approaches fit together. We're also thinking of researchers-in-training: graduate students in seminar courses where the main purpose is to read papers and discuss how well the empirics match the theory. These discussions can sometimes be a jumble of miscellaneous complaints, but our framework can focus attention on the most relevant concerns. What, exactly, is the inquiry? Is it the right one to be posing, and does the design do a good job of generating answers to it? We're also thinking of funders and decision-makers, who often wish to assess research not in terms of its results but its design. Our approach provides a way of defining the design and diagnosing its quality. Part II is more involved. We provide the mathematical foundations of the MIDA framework. We walk through each component of a research design in detail, describe the finer points of design diagnosis, and explain how to carry out a redesign. Part II will resonate with several audiences of applied researchers both inside and outside of academia. We imagine it could be assigned early in a graduate course on research design in any of the social sciences.
Data scientists and monitoring and evaluation professionals will find value in our framework for learning about research designs. Scholars will find value in declaring, diagnosing, and redesigning designs whether they are implementing randomized trials, multi-method archival studies, or calibrating structural theories with data. In Part III, we apply the general framework to specific research designs. The result is a library of common designs. Many empirical research designs are included in the library, but not all. The set of entries covers a large portion of what we see in current empirical practice across social sciences, but it is not meant to be exhaustive. We don't expect that any readers will read straight through the design library, but will instead pick and choose depending on their interests. We are thinking of three kinds of uses for entries in the design library. Collectively, the design entries serve to illustrate the fundamental principles of design. The entries clarify the variety of ways in which models, inquiries, data strategies, and answer strategies can be connected and show how high-level principles operate in common ways across very different designs. The second use is pedagogical. The library entries provide hands-on illustrations of designs in action. A researcher interested in understanding the "regression discontinuity design," for example, can quickly see a complete implementation and learn under what conditions the standard design performs well or poorly. They can also compare the suitability of one type of design against another for a given problem. We emphasize that these descriptions of different designs provide entry points but they are not exhaustive, so we refer the reader to the most up-to-date methodological treatments of the topic. The third use is as a starter kit to help readers get going on designs of their own. Each entry includes code for a basic design that can be fine-tuned to capture the specificities of particular research settings. The last section of the book describes in detail how our framework can help at each step of the research process. Each of these sections should be readable for anyone who has read Part I. The entry on preanalysis plans, for example, can be assigned in an experiments course as guidance for students filing their first preanalysis plan. The entry on research ethics could be shared among coauthors at the start of a project. The entry on writing a research paper could be assigned to college seniors trying to finish their essays on time.

## 1.2 How to work this book

We will often describe research designs not just in words, but in computer code. If you want to work through the code and exercises, fantastic. This path requires investment in R, the tidyverse, and the DeclareDesign software package. Chapter 4 helps get you started. We think working through the code is very rewarding, but we understand that there is a learning curve. You could, of course, tackle the declaration, diagnosis, and redesign processes using bespoke simulations in any computer language you like, but it is easier in DeclareDesign because the software guides you to articulate each of the four design elements. If you want nothing to do with the code, you can skip all the code and exercises and just focus on the text. We have written the book so that understanding of the code is not required in order to understand research design concepts.
## 1.3 What this book will not do

This is a research design book, not a statistics textbook, nor a cookbook with recipes applicable to all situations. We will not derive estimators, we will provide no guarantees of the general optimality of designs, and we will present no mathematical proofs. Nor will we provide all the answers to all the practical questions you might have about your design. What we do offer is a language to express research designs. We can help you learn that language so you can describe your own design in it. Once you can declare your design in this language, you can diagnose it, and then improve it through redesign.
2021-06-19 10:31:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26085978746414185, "perplexity": 804.9738686644863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487647232.60/warc/CC-MAIN-20210619081502-20210619111502-00135.warc.gz"}
https://opensees.berkeley.edu/wiki/index.php/PySimple1_Material
# PySimple1 Material

This command is used to construct a PySimple1 uniaxial material object:

uniaxialMaterial PySimple1 $matTag $soilType $pult $Y50 $Cd <$c>

$matTag: integer tag identifying material.
$soilType: soilType = 1: backbone of p-y curve approximates Matlock (1970) soft clay relation; soilType = 2: backbone of p-y curve approximates API (1993) sand relation.
$pult: ultimate capacity of the p-y material. Note that "p" or "pult" are distributed loads [force per length of pile] in common design equations, but are both loads for this uniaxialMaterial [i.e., distributed load times the tributary length of the pile].
$Y50: displacement at which 50% of pult is mobilized in monotonic loading.
$Cd: variable that sets the drag resistance within a fully-mobilized gap as Cd*pult.
$c: the viscous damping term (dashpot) on the far-field (elastic) component of the displacement rate (velocity) (optional; default = 0.0). Nonzero c values are used to represent radiation damping effects.

NOTES: In general the HHT algorithm is preferred over a Newmark algorithm when using this material. This is due to the numerical oscillations that can develop with viscous damping forces under transient loading with certain solution algorithms and damping ratios.

EQUATIONS and EXAMPLE RESPONSES: The equations describing PySimple1 behavior are described in Boulanger, R. W., Curras, C. J., Kutter, B. L., Wilson, D. W., and Abghari, A. (1999). "Seismic soil-pile-structure interaction experiments and analyses." Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 125(9): 750-759. Only minor changes have been made in its implementation for OpenSees. The nonlinear $p$-$y$ behavior is conceptualized as consisting of elastic ($p$-$y^e$), plastic ($p$-$y^p$), and gap ($p$-$y^g$) components in series. Radiation damping is modeled by a dashpot on the "far-field" elastic component ($p$-$y^e$) of the displacement rate. The gap component consists of a nonlinear closure spring ($p^c$-$y^g$) in parallel with a nonlinear drag spring ($p^d$-$y^g$). Note that $y = y^e + y^p + y^g$, and that $p = p^d + p^c$. The plastic component has an initial range of rigid behavior between $-C_r p_\text{ult} < p < C_r p_\text{ult}$ with $C_r$ = the ratio of $p/p_\text{ult}$ when plastic yielding first occurs in virgin loading. The rigid range of $p$, which is initially $2 C_r p_\text{ult}$, translates with plastic yielding (kinematic hardening). The rigid range of $p$ can be constrained to maintain a minimum size on both the positive and negative loading sides (e.g., 25% of $p_\text{ult}$), and this is accomplished by allowing the rigid range to expand or contract as necessary. Beyond the rigid range, loading of the plastic ($p$-$y^p$) component is described by:

$p = p_{\text{ult}} - (p_{\text{ult}} - p_o) \left[\frac{c\, y_{50}}{c\, y_{50} + |y^p - y^p_o|} \right]^n$

where $p_\text{ult}$ = the ultimate resistance of the $p$-$y$ material in the current loading direction, $p_o = p$ at the start of the current plastic loading cycle, $y^p_o = y^p$ at the start of the current plastic loading cycle, $c$ = constant to control the tangent modulus at the start of plastic yielding, and $n$ = an exponent to control sharpness of the $p$-$y^p$ curve. The closure ($p^c$-$y^g$) spring is described by:

$p^c = 1.8 p_{\text{ult}} \left[\frac{y_{50}}{y_{50} + 50(y_o^{+} - y^g)} - \frac{y_{50}}{y_{50} + 50(y_o^{-} - y^g)} \right]$

where $y_o^+$ = memory term for the positive side of the gap, $y_o^-$ = memory term for the negative side of the gap.
The initial values of $y_o^+$ and $y_o^-$ were set as $y_{50}/100$ and $-y_{50}/100$, respectively. The factor of 1.8 brings $p^c$ up to $p_\text{ult}$ during virgin loading to $y_o^+$ (or $y_o^-$). Gap enlargement follows logic similar to that of Matlock et al. (1978). The gap grows on the positive side when the plastic deformation occurs on the negative loading side. Consequently, the $y_o^+$ value equals the opposite value of the largest past negative value of $y^p + y^g + 1.5 y_{50}$, where the $1.5 y_{50}$ represents some rebounding of the gap. Similarly, the $y_o^-$ value equals the opposite value of the largest past positive value of $y^p + y^g - 1.5 y_{50}$. This closure spring allows for a smooth transition in the load-displacement behavior as the gap opens or closes. The nonlinear drag ($p^d$-$y^g$) spring is described by:

$p^d = C_d p_{\text{ult}} - (C_d p_{\text{ult}} - p^d_o) \left[\frac{y_{50}}{y_{50} + 2|y^g - y^g_o|} \right]^n$

where $C_d$ = ratio of the maximum drag force to the ultimate resistance of the p-y material, $p^d_o = p^d$ at the start of the current loading cycle, and $y^g_o = y^g$ at the start of the current loading cycle. The flexibility of the above equations can be used to approximate different p-y backbone relations. Matlock's (1970) recommended backbone for soft clay is closely approximated using $c = 10$, $n = 5$, and $C_r = 0.35$. API's (1993) recommended backbone for drained sand is closely approximated using $c = 0.5$, $n = 2$, and $C_r = 0.2$. PySimple1 is currently implemented to allow use of these two default sets of values. Values of $p_\text{ult}$, $y_{50}$, and $C_d$ must then be specified to define the $p$-$y$ material behavior.

Viscous damping on the far-field (elastic) component of the p-y material is included for approximating radiation damping. For implementation in OpenSees the viscous damper is placed across the entire material, but the viscous force is calculated as proportional to the component of velocity (or displacement) that developed in the far-field elastic component of the material. For example, this correctly causes the damper force to become zero during load increments across a fully formed gap. In addition, the total force across the p-y material is restricted to pult in magnitude so that the viscous damper cannot cause the total force to exceed the near-field soil capacity. Users should also be familiar with numerical oscillations that can develop in viscous damper forces under transient loading with certain solution algorithms and damping ratios. In general, an HHT algorithm is preferred over a Newmark algorithm for reducing such oscillations in materials like PySimple1.

EXAMPLE: (a minimal sketch is given after the references below)

REFERENCES: Boulanger, R. W., Curras, C. J., Kutter, B. L., Wilson, D. W., and Abghari, A. (1999). "Seismic soil-pile-structure interaction experiments and analyses." Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 125(9): 750-759.

Code Developed by: Ross Boulanger, UC Davis
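Since the EXAMPLE section of this page is empty, here is a minimal, hypothetical OpenSeesPy sketch of defining a PySimple1 material (OpenSeesPy mirrors the Tcl command's positional arguments); every numerical value below is an illustrative assumption, not a recommendation:

```python
# Hypothetical sketch of defining a PySimple1 material via OpenSeesPy;
# all parameter values are assumed for illustration only.
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 2)  # a model builder must exist first

matTag   = 1      # material tag (assumed)
soilType = 1      # 1 = Matlock (1970) soft clay backbone
pult     = 50.0   # ultimate capacity [force], assumed value
Y50      = 0.005  # displacement at 50% of pult [length], assumed value
Cd       = 0.3    # drag resistance ratio within the gap, assumed value
c        = 0.0    # far-field dashpot coefficient (the documented default)

ops.uniaxialMaterial('PySimple1', matTag, soilType, pult, Y50, Cd, c)
```

The equivalent Tcl form, following the command syntax documented at the top of this entry, would be `uniaxialMaterial PySimple1 1 1 50.0 0.005 0.3 0.0`.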
2021-04-22 11:38:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7340830564498901, "perplexity": 1973.6074026180556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00390.warc.gz"}
https://mathhelpboards.com/threads/problem-of-the-week-37-february-11th-2013.3374/
Problem of the Week #37 - February 11th, 2013

Chris L T521 (Staff member):

Here's this week's problem.

-----

Problem: A transition probability matrix $\mathbf{P}$ is said to be doubly stochastic if the sum over each column equals one; that is, $\sum_i P_{i,j}=1,\qquad\forall j.$ If such a chain is irreducible and aperiodic and consists of $M+1$ states $0,1,\ldots,M$, show that the limiting probabilities are given by $\pi_j=\frac{1}{M+1},\quad j=0,1,\ldots,M.$

-----

Chris L T521 (Staff member):

No one answered this week's question. You can find my solution below.

To show that this is true, we show that $\pi_j=\frac{1}{M+1}$ satisfies the system of equations $\pi_j=\sum\limits_{i=0}^M\pi_iP_{ij}$ and $\sum\limits_{j=0}^M\pi_j=1$. Supposing that $\pi_j=\frac{1}{M+1}$, we see that $\sum\limits_{j=0}^M\pi_j=\frac{1}{M+1}\sum\limits_{j=0}^M 1=\frac{1}{M+1}(M+1)=1$ and, since each column of $\mathbf{P}$ sums to one, $\sum\limits_{i=0}^M\pi_iP_{ij}=\frac{1}{M+1}\sum\limits_{i=0}^M P_{ij}=\frac{1}{M+1}=\pi_j.$ Thus $\pi_j=\frac{1}{M+1}$ satisfies both equations; and since an irreducible, aperiodic finite chain has a unique stationary distribution, which equals its limiting distribution, the limiting probabilities must be $\pi_j=\frac{1}{M+1}$.
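A quick numerical sanity check of this result (not part of the original thread; the 4×4 matrix below is an arbitrary doubly stochastic example with all entries positive, hence irreducible and aperiodic):

```python
# Powers of a doubly stochastic transition matrix converge row-wise to the
# uniform distribution 1/(M+1); the matrix below is an assumed example.
import numpy as np

M = 3  # states 0, 1, ..., M
P = np.array([[0.4, 0.3, 0.2, 0.1],   # every row and every column sums to 1
              [0.3, 0.4, 0.1, 0.2],
              [0.2, 0.1, 0.4, 0.3],
              [0.1, 0.2, 0.3, 0.4]])

Pn = np.linalg.matrix_power(P, 100)
print(Pn[0])          # each row approaches [0.25, 0.25, 0.25, 0.25]
print(1.0 / (M + 1))  # = 0.25, the claimed limiting probability
```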
2021-06-20 07:39:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8082894086837769, "perplexity": 562.6880578559047}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487658814.62/warc/CC-MAIN-20210620054240-20210620084240-00330.warc.gz"}
http://lnit.cheveux-extensions.fr/2d-heat-equation-examples.html
# 2d Heat Equation Examples

Dirichlet & Heat Problems in Polar Coordinates Section 13. Equilibrium statistical mechanics on the other hand provides us with the tools to derive such equations of state theoretically, even though it has not much to say about the actual processes, like. The initial temperature is T0, so u(x, y, t = 0) = T0. A Simple Finite Volume Solver For Matlab File Exchange. Project: Heat Equation. 2D Poisson Equation (Dirichlet Problem) The 2D Poisson equation is given by with boundary conditions There is no initial condition, because the equation does not depend on time, hence it becomes a boundary value problem. Hancock Fall 2006 1 2D and 3D Heat Equation Ref: Myint-U & Debnath §2. As matlab programs, these would run more quickly if they were compiled using the matlab compiler and then run within matlab. In each file, the first column is the serial number. 1 Heat on an insulated wire. 30: Mar 26, Thursday: Elasticity formulation in 3D and its 2D idealizations. 4 Thorsten W. Site Pages. Fick's Law, then, our partial differential equation becomes: C_t = ∇·[D∇C] + q, which is (5. Dirichlet & Heat Problems in Polar Coordinates Section 13. 2d linear Partial Differential Equation Solver using finite differences. This Demonstration solves this partial differential equation (a two-dimensional heat equation) using the method of lines in the domain, subject to the following Dirichlet boundary conditions (BC) and initial condition (IC). Boundary Conditions provide information for some, but not all, neighbors. Motion in one dimension, in other words linear motion, and projectile motion are the subtitles of kinematics; they are also called 1D and 2D kinematics. Herman November 3, 2014 1 Introduction The heat equation can be solved using separation of variables. 2d heat transfer - implicit finite difference method. Based on his theory, he derived the Langmuir Equation, which depicted a relationship between the number of active sites of the surface undergoing adsorption and pressure. Visit Stack Exchange. 4 graduate hours. Note: In our current programs we use a mesh consisting of only triangles. The 2D heat equation. Homogeneous Dirichlet boundary conditions. Steady state solutions. Laplace's equation. In the 2D case, we see that steady states must solve ∇²u = u_xx + u_yy = 0. They satisfy u_t = 0. The paper is organized as follows. RIGID-ROTOR MODELS AND ANGULAR MOMENTUM EIGENSTATES OUTLINE Homework Questions Attached SECT TOPIC 1. The programs are released under the GNU General Public License. Solving the heat equation: to solve an IBVP problem for the heat equation in two dimensions, u_t = c²(u_xx + u_yy): 1. See WikiPages to learn about editing the wiki pages, and go to Help FreeCAD to learn about other ways in which you can contribute. See Draft ShapeString for an example of a well documented tool. See Category:Command Reference for all commands. For a time dependent differential equation of the second order (two time derivatives) the initial values for t = 0, i.e., u(x,0) and u_t(x,0), are generally required. The heat equation, the variable limits, the Robin boundary conditions, and the initial condition are defined as:. The reynolds number in this problem is approximately 20. 2, 2012 • Many examples here are taken from the textbook. The centre plane is taken as the origin for x and the slab extends to +L on the right and −L on the left. 15 ANNA UNIVERSITY CHENNAI : : CHENNAI – 600 025 AFFILIATED INSTITUTIONS B. The heat and wave equations in 2D and 3D 18.
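Several snippets above concern the steady-state (Laplace) limit, where ∇²u = u_xx + u_yy = 0 on a rectangle. A minimal Python sketch of the classic Jacobi relaxation for this problem may help make it concrete (this is not taken from any of the quoted sources; grid size, boundary values, and the tolerance are assumptions):

```python
# Jacobi iteration for the steady-state 2D Laplace equation u_xx + u_yy = 0
# on a square grid with Dirichlet boundaries (assumed boundary values).
import numpy as np

N = 40
u = np.zeros((N, N))
u[0, :] = 100.0  # top edge held at 100; the remaining edges held at 0

for it in range(5000):
    # each interior node is replaced by the average of its four neighbours
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    if np.max(np.abs(u_new - u)) < 1e-6:  # simple convergence check
        u = u_new
        break
    u = u_new

print(it, u[N // 2, N // 2])  # iterations used and the centre temperature
```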
Use Fourier Series to Find Coefficients: the only problem remaining is to somehow pick the constants b_n so that the initial condition u(x,0) = f(x) is satisfied (for the standard sine-series solution on a rod of length L, orthogonality gives b_n = (2/L) ∫₀ᴸ f(x) sin(nπx/L) dx). Conservation laws, scaling, dynamic similarity, laminar and turbulent convection, internal and external convection, external natural convection and natural convection in enclosures, convection with change of phase, convection in porous media, and mass transfer including phase change and heterogeneous reactions. FD2D_HEAT_STEADY is a FORTRAN77 program which solves the steady state (time independent) heat equation in a 2D rectangular region. Example: A 2×2 square plate with c = 1/3 is heated in such a way that the temperature in the lower half is 50, while the temperature in the. Heat diffusion on a Plate (2D finite difference): heat transfer, heat flux, diffusion; these physical phenomena occur with magma rising to the surface or in geothermal areas. For an elliptic partial differential equation, we approximate the white noise term using piece-wise constant functions and show that it will also hold for the stochastic heat equation. Another example is Schramm-Loewner evolution (SLE). Green's Function Solution of Elliptic Problems in n. [Figure 1: hyperbolic functions sinh and cosh.] I'd need to solve a heat equation in a 2D domain (basically a rectangle with insulating lateral edges and two temperatures at the top and at bottom) and the rectangle is formed of three different materials overlayered. The heat equation is the prototypical example of a parabolic partial differential equation. 11: Tue Nov 22: Assignment 4 due: m818as04. Partial Differential Equation Toolbox makes it easy to set up your simulation. Finally, we will study the Laplace equation, which is an example of an elliptic PDE. Also, the serial numbers are not stored in the files. Convective heat transfer, often referred to simply as convection, is the transfer of heat from one place to another by the movement of fluids. 1 Example 1. Louise Olsen-Kettle, The University of Queensland 9. m generates the mesh and creates the above four files. Such ideas are seen in university mathematics, physics and engineering courses. Diffusion In 1d And 2d File Exchange Matlab Central. Heat conduction problem in two dimensions. The third term would in two dimensions be an approximation to the heat radiated away to the surroundings. Consider the ODE. This is a linear homogeneous ODE and can be solved using standard methods. If you just want the spreadsheet, click here, but please read the rest of this post so you understand how the spreadsheet is implemented. Dirichlet, Neumann, and mixed. Drum vibrations, heat flow, the quantum nature of matter, and the dynamics of competing species are just a few real-world examples involving advanced differential equations. 7 shows the physical configuration, the heat transfer paths and the thermal resistance circuit. 19 Numerical Methods for Solving PDEs: numerical methods for solving different types of PDEs reflect the different character of the problems.
Reading heat maps is faster and more intuitive than getting usable information out of columns of figures. In this paper, we use the homotopy analysis method (HAM) to solve 2D heat conduction equations. As an aside on terminology: a polynomial such as (x − 3)² is considered to have two roots, both equal to 3 — this is what is meant by multiplicity.

The heat equation: a Python implementation. By making some assumptions, one can simulate the flow of heat through an ideal rod; assume that the sides of the rod are insulated so that heat energy neither enters nor leaves the rod through its sides. Laplace's equation is classified as an elliptic system. Problem overview — given: the initial temperature in a 2D plate and boundary conditions along the boundaries of the plate; we will solve the heat equation with three different sets of boundary conditions. The aim is to solve for the steady-state temperature distribution through a rectangular body, by dividing it up into nodes and solving the necessary equations in two dimensions only. A related model is the 2D heat diffusion equation with Neumann boundary conditions. In situations with heat flow, the temperature throughout the medium will generally not be uniform, for which the usual principles of equilibrium thermodynamics do not apply.

The Euler equations solved for inviscid flow are presented in Section 8. For meshing, you can use 2D triangular or 3D tetrahedral elements, or import mesh data from existing meshes of complex geometries. On the mathematics of PDEs and the wave equation: there are many more examples beyond the heat equation. Equations 2 and 3 differ only in notation and in the complexity of the reaction term, which comes from the physical modelling of heat transfer phenomena. One paper considers the geometric heat differential equation as a 3D mesh model for shape description and presentation in a content-based image retrieval (CBIR) system with decision-support abilities. Let V be any smooth subdomain in which there is no source or sink. In the previous Lectures 17 and 18 we introduced the Fourier transform and inverse Fourier transform, established some of their properties, and calculated some Fourier transforms. Plane-wave solutions for a 4D wave equation illustrate the superposition principle. Xsimula FEA solves 2D heat transfer problems in multiple materials with linear or non-linear properties. Coming back to the integral (weak) form of Poisson's equation in the weighted residual method: it should be noted that not every choice of trial functions yields the required integrals. Run the 2D examples Lshape, crack, and Kellogg in iFEM and read the code to learn the implementation.
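For the insulated-rod setting just described, separation of variables gives the classical series solution; this is a sketch in which k denotes the diffusivity and the b_n are the Fourier coefficients from the previous section.

```latex
% Solution of u_t = k u_{xx} on 0 < x < L with u(0,t) = u(L,t) = 0:
u(x,t) = \sum_{n=1}^{\infty} b_n \,
         e^{-k\left(\frac{n\pi}{L}\right)^{2} t}
         \sin\frac{n\pi x}{L}.
```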
Using Laplace's equation as the governing equation for the steady-state solution of a 2D heat problem, the "temperature" u should decrease from the top right corner to the lower left corner of the domain. The heat equation u_t = Δu, where Δ is the Laplace operator, appears macroscopically as the consequence of the conservation of energy and Fourier's law. A classical exercise: find u(x, t) if the initial temperature is f(x) throughout and the ends x = 0 and x = L are insulated. In a metal rod with non-uniform temperature, heat (thermal energy) is transferred from hot to cold parts; the mode of transfer by vibrating atoms and free electrons in solids is called conduction of heat. Solutions to Laplace's equation are called harmonic functions.

This code employs a finite difference scheme to solve the 2-D heat equation; a common practical question is how to solve the heat equation in 2D form with mixed boundary conditions (convection terms) in MATLAB. We will show how to set up Chebyshev grid points in both Cartesian and cylindrical systems. The simplest model problem is u_t = u_xx for 0 < x < 1, t > 0. For solving the 2D heat equation we have two algorithms: explicit (Euler) and implicit (Crank–Nicolson). We discuss two partial differential equations, the wave and heat equations, with applications to the study of physics, and also treat the 2D heat or Laplace equation in an annulus or wedge (pie-shaped) region.

A typical course outline: general introduction to PDEs, examples, and applications; derivation of conservation laws; the linear advection equation and diffusion; the one-dimensional heat equation; boundary conditions (Dirichlet, Neumann, Robin) and their physical interpretation; the equilibrium temperature distribution; and the heat equation in 2D and 3D. In a thermal analysis example, material properties like thermal conductivity and boundary conditions including convection, fixed temperature, and heat flux can be applied using only a few lines of code; the heat equation, the variable limits, the Robin boundary conditions, and the initial condition are all defined explicitly. To determine the coefficients E_mn from the initial condition, one sets the time variable to zero in the solution T(x, z, t). See also lecture notes on numerical modeling of Earth systems (an introduction to computational methods with focus on solid-Earth applications of continuum mechanics) and plots of the stability regions for BDF methods.
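Alongside the explicit scheme sketched earlier, here is a minimal Crank–Nicolson sketch for the 1D model problem u_t = u_xx; it is unconditionally stable, so the time step is not tied to the grid spacing. The values below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_banded

# Crank-Nicolson for u_t = u_xx on 0 < x < 1 with u(0,t) = u(1,t) = 0.
n, dt, steps = 99, 1e-3, 500
h = 1.0 / (n + 1)
r = dt / (2.0 * h**2)
x = np.linspace(h, 1.0 - h, n)
u = np.sin(np.pi * x)              # initial condition f(x) = sin(pi x)

# Banded form of the tridiagonal matrix (I - r*T), T the second-difference stencil.
ab = np.zeros((3, n))
ab[0, 1:] = -r                     # superdiagonal
ab[1, :] = 1.0 + 2.0 * r           # diagonal
ab[2, :-1] = -r                    # subdiagonal

for _ in range(steps):
    rhs = u + r * (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u)
    rhs[0] = u[0] + r * (u[1] - 2.0 * u[0])        # boundary rows: u = 0 outside
    rhs[-1] = u[-1] + r * (u[-2] - 2.0 * u[-1])
    u = solve_banded((1, 1), ab, rhs)

print(u.max())   # compare with the exact decay exp(-pi^2 * t)
```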
Further examples: the heat equation with periodic boundary conditions in 2D, and a solver using fixed Dirichlet boundary conditions with a prescribed initial temperature in all nodes, iterating until the steady state is reached within a tolerance value selected in the code (Section 2 briefly reviews the HAM). Two-dimensional problems also arise in reactor physics: consider a uniform reactor (multiplying system) in the shape of a finite cylinder of physical radius R and height H, for which the diffusion equation is solved. Choosing an implicit method allows a reasonable time step while still obtaining a precise solution; one such code is designed to solve the heat equation in a 2D plate.

The heat equation can also be solved by separation of variables in cylindrical coordinates, and heat transfer through a slab maintained at different temperatures on the opposite faces is the standard 1D example. In this section, we present the technique known as finite differences and apply it to solve the one-dimensional heat equation: choose a spatial step size Δx = (b − a)/N (N an integer) and a time step size Δt, draw a set of horizontal and vertical lines across the domain D, and get all intersection points (x_j, t_n), or simply (j, n). The wave equation on the real line with given initial data, and Fortran codes for the heat equation, provide further practice. Many models from across the sciences, engineering, and finance have nonlinear terms or several independent variables.

Let u(x, t) denote the temperature at position x and time t in a long, thin rod of length ℓ that runs from x = 0 to x = ℓ. The condition under which two-dimensional heat conduction can be solved by separation of variables is that the governing equation must be linear homogeneous and no more than one boundary condition may be nonhomogeneous. A problem is ill-posed when the solution (if it exists) does not depend continuously on the data. A contrasting demo problem is the 2D linear wave equation — a hyperbolic PDE involving second time derivatives — and one can also work with 2D functionals.
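The steady-state plate problems above reduce to Laplace's equation, which the following minimal Jacobi-iteration sketch solves on the unit square. The boundary values mimic the plate example (one wall heated along half its length to 50, the rest at 0), and the tolerance is an illustrative choice.

```python
import numpy as np

# Jacobi iteration for Laplace's equation u_xx + u_yy = 0 on the unit square.
n = 50
u = np.zeros((n + 2, n + 2))
u[0, : (n + 2) // 2] = 50.0        # one wall heated along half its length

for _ in range(20_000):
    new = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    if np.max(np.abs(new - u[1:-1, 1:-1])) < 1e-6:   # stop at steady state
        break
    u[1:-1, 1:-1] = new

print(u[n // 2, n // 2])           # temperature at the centre
```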
These are the steady-state solutions: they solve Laplace's equation ∇²u = 0. In a finite-difference treatment we approximate temporal and spatial derivatives separately. The classical linear PDEs can be written compactly as the wave equation ∂²u/∂t² − c²∇²u = 0, the heat equation ∂u/∂t − k∇²u = 0, and Laplace's equation ∇²u = 0 (see 18.303 Linear Partial Differential Equations, Matthew J. Hancock, and the derivation in Strauss, Section 1.3, covering the 1D heat equation and the 2D heat equation in Cartesian and polar coordinates). Since the heat equation is linear (and homogeneous), a linear combination of two (or more) solutions is again a solution: if u1, u2, … are solutions of u_t = k u_xx, then so is c1 u1 + c2 u2 + ⋯ for any choice of constants c1, c2, ….

The general 1D form of the heat equation is accompanied by initial and boundary conditions in order for the equation to have a unique solution. (In blow-up analysis, note that for p = 3 the equation has the same scaling symmetries as the Navier–Stokes equations; one generally assumes ν = 1 for simplicity, as this does not change anything in the argument, and the cited result limits the rate of blow up.) We solve Laplace's equation in 2D on a 1 × 1 square; to ensure accurate simulation results, you can inspect the mesh quality and perform refinement. Note that when heat transfer is present in a compressible analysis, viscous dissipation, pressure work, and kinetic energy terms are calculated. I'm going to illustrate a simple one-dimensional heat flow example, followed by a two-dimensional heat flow example, all programmed into Excel. Most people who have tried to teach finite elements agree that, traditionally, most education in finite elements is given in separate courses. How is a differential equation different from a regular one? The solution is a function (or a class of functions), not a number. In many instances, two- or three-dimensional conduction problems may be rapidly solved by utilizing existing solutions to the heat diffusion equation, for example the heat equation in a 2D rectangle. Using the Laplace operator, the heat equation can be simplified and generalized to similar equations over spaces of an arbitrary number of dimensions, and there is a toolbox of rules for working with 2D Fourier transforms in polar coordinates.

Further worked material includes finite-difference heat equation examples with NumPy (heat transfer with explicit Euler stepping, second-order linear diffusion, and 1D diffusion), which become the steady diffusion equation with chemical reaction when a reaction term is present. We now revisit the transient heat equation, this time with sources and sinks, as an example of a two-dimensional finite-difference problem. The 1D heat equation is the parabolic prototype — one of the most basic examples of a PDE. One of the main goals of one example is to show how to express a PDE defined in a cylindrical system in a Cartesian form that the Partial Differential Equation Toolbox can handle. Note that ghost cells are initialized at the beginning of the code to the constant value of the edge of the grid. For an exact solution, a function w = f(x, t), when substituted into the respective equation, would satisfy it identically along with all of the associated initial and boundary conditions. For the heat or diffusion equation U_t = U_xx, the classification coefficients are A = 1 and B = C = 0 (parabolic). Another standard exercise solves the heat equation on a bar of length L and then on a thin circular ring. As a model problem for general parabolic equations, one mainly considers the heat equation and studies the corresponding finite difference and finite element methods; each of these examples illustrates behavior that is typical for the whole class. Consider, for instance, the 1-D steady-state heat conduction equation with internal heat generation. Heat transfer behaviors are classified into heat conduction, heat convection, and heat radiation. Finally, an analytical solution can be developed for the heat conduction–convection equation.
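For the internal-heat-generation case just mentioned, the governing ODE and its solution for the slab with both faces held at T_s are sketched below; constant conductivity k and a uniform generation rate q̇ are assumed, using the ±L slab convention from earlier.

```latex
% 1D steady conduction with uniform internal generation \dot{q}:
k\,\frac{d^{2}T}{dx^{2}} + \dot q = 0,
\qquad
T(x) = T_s + \frac{\dot q}{2k}\left(L^{2} - x^{2}\right),
\quad -L \le x \le L,\; T(\pm L) = T_s .
```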
In this module we will examine solutions to a simple second-order linear partial differential equation: the one-dimensional heat equation. For example, the heat required to increase the temperature of half a kilogram of water by 3 degrees Celsius can be determined using the formula Q = mcΔT. In the separation-of-variables step one obtains an equation of the form (1/Y) d²Y/dy² = k². A general-purpose program can numerically solve the heat transfer equation from user inputs and boundary conditions. Consider an arbitrary 3D subregion V of R³ (V ⊆ R³), with temperature u(x, t) defined at all points x = (x, y, z) ∈ V. Examples of elliptic PDEs include the Laplace equation (or Poisson in general) and the Helmholtz equation in 1D and 2D.

One in-class activity concerns the temperature u(x, y, t) in a thin rectangle of dimensions x ∈ [0, a], y ∈ [0, b], which is initially all held at temperature T. For contrast, turbulence is a kind of fluid motion which is: unsteady and highly irregular in space and time; three-dimensional (even if the mean flow is only 2D); always rotational and at high Reynolds numbers; dissipative (energy is converted into heat due to viscous stresses); and strongly diffusive (rapid mixing). A "source" or "forcing" term may appear in the equation itself (we usually say "source term" for the heat equation and "forcing term" with the wave equation), giving u_t = ∇²u + Q(x, t) for a given function Q.

Chapter 2 covers the formulation of FEM for one-dimensional problems; solving the 2D transient heat equation by the Crank–Nicolson method is a common exercise. In one CFD example, DEFINE_PROFILE is used to generate profiles for the velocity, turbulent kinetic energy, and dissipation rate for a 2D fully developed duct flow; another analysis treats unsteady-state heat transfer in a hollow cylinder using the finite volume method with a half control volume. In the case of one-dimensional equations, the steady-state equation is a second-order ordinary differential equation. Physically, dye will move from higher concentration to lower. For the steady-state temperature in a circular plate with boundary data f(θ), we write the heat equation with the Laplace operator in polar coordinates. The physical region and the boundary conditions are suggested by a diagram, and a standard closing example is heat conduction in a large plane wall.
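As a worked instance of that formula, using the standard specific heat of water, c ≈ 4186 J/(kg·K):

```latex
Q = m\,c\,\Delta T
  = (0.5\ \mathrm{kg}) \times \bigl(4186\ \mathrm{J\,kg^{-1}K^{-1}}\bigr) \times (3\ \mathrm{K})
  \approx 6.28\ \mathrm{kJ}.
```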
There is also a Plotly graphing library for MATLAB for creating interactive charts in the browser. Below we provide two derivations of the heat equation, u_t − k u_xx = 0 with k > 0. FD1D_HEAT_IMPLICIT is a FORTRAN90 program which solves the time-dependent 1D heat equation, using the finite difference method in space and an implicit version of the method of lines to handle integration in time. Numerical examples for the nonlinear convection–diffusion equations in two and three space dimensions (2D/3D) not only support the theoretical results but also reveal superconvergence of third order. A simple 2D heat equation example for testing multigrid methods is available (mnucci32/multigrid). By quantizing the vibrational modes and summing over them, Debye was able to find an expression for the energy as a function of temperature and derive an expression for the specific heat of the solid.

When thermal energy moves from one place to another, it is called heat, Q. The equation for conservation of mass, or continuity equation, complements the energy balance. The wave equation, heat equation, and Laplace's equation are the typical homogeneous partial differential equations. Convection is usually the dominant form of heat transfer in liquids and gases, though the three types of heat transfer are conduction, convection, and radiation. In the similarity-solution approach, the factor D in the denominator of η is there to make the ratio dimensionless; η therefore has no units, and its function F(η) takes on a universal character. First-order hyperbolic PDEs and the wave equation (a second-order hyperbolic PDE) round out the classification, and the qualitative mechanism by which Maxwell's equations give rise to propagating electromagnetic fields belongs to the same hyperbolic family. Euler–Lagrange equations can also be written for 2D functionals. We further consider a 2D problem on the unit square with a known exact solution, finite difference methods for 2D and 3D wave equations, and an introduction to the local discontinuous Galerkin method, which produces a block matrix equation by separating the stochastic heat equation into two first-order parts. A key observation on the structure of the MHD equations allows one to get around the difficulties due to the lack of full Laplacian magnetic diffusion. You can use the standard formulas below to convert from one temperature scale to another.
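The original list of conversion formulas was lost in extraction; the standard Celsius/Fahrenheit/Kelvin conversions are presumably what was intended:

```latex
T_F = \tfrac{9}{5}\,T_C + 32, \qquad
T_C = \tfrac{5}{9}\,(T_F - 32), \qquad
T_K = T_C + 273.15 .
```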
Two-dimensional Laplace and Poisson equations: in the previous chapter we saw that when solving a wave or heat equation it may be necessary to first compute the solution to the steady-state equation. FlexPDE, for example, solves for the x- and y-velocities of a fluid with fixed pressures applied at the ends of a channel. In 2D (x, z) space, the conduction equation with anisotropic conductivity reads ρ c_p ∂T/∂t = ∂/∂x(k_x ∂T/∂x) + ∂/∂z(k_z ∂T/∂z) + Q. A sample code, heat2d, solves the two-dimensional heat equation on a square with Dirichlet boundary conditions. Together with the heat conduction equation, the wave equation is sometimes grouped under the "evolution equations" because their solutions "evolve", or change, with passing time; nonhomogeneous heat equations are treated in Math 201, Lecture 34. Other example applications include bending of a beam due to piezoelectric effects (ex_piezoelectric1) and implementing explicit and implicit numerical methods for the parabolic equation, each illustrated with an example (see also the two-dimensional heat equation with finite differences in the USC geodynamics notes).

In probability theory, the heat equation is connected with the study of Brownian motion via the Fokker–Planck equation. The CFD benchmarking project is created as a large collection of CFD benchmark configurations that are known from the literature. Based on his theory of adsorption, Langmuir derived an equation depicting a relationship between the number of active sites of the surface undergoing adsorption and the pressure. Common pitfalls include an incorrect solution of the 2D unsteady heat equation with a Neumann condition. Thermal conductivity — definition, equation, and calculation — underpins all of these models, and application of the time-dependent Green's function and Fourier transforms yields the solution of the bioheat equation. Some techniques that work elsewhere can no longer be applied to the 2D heat operator. A further nonlinear example is to solve the Cahn–Hilliard equation.
Logically, the data generated come from the left-hand side of the formula, so the result is a one-dimensional matrix; heat maps built from such data still need to be interpreted. The generic global system of linear equations for one-dimensional steady-state heat conduction can be written in matrix form.
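To make that matrix form concrete, here is a minimal sketch assembling and solving the tridiagonal system for 1D steady conduction between two fixed end temperatures; the values are illustrative.

```python
import numpy as np

# 1D steady-state conduction -k T'' = 0 on n interior nodes between
# fixed end temperatures T_left and T_right (second-difference stencil).
n, T_left, T_right = 9, 100.0, 0.0
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.zeros(n)
b[0], b[-1] = T_left, T_right    # boundary temperatures enter the right-hand side

T = np.linalg.solve(A, b)
print(T)                         # linear profile from T_left to T_right
```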
2020-04-03 10:50:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7159023880958557, "perplexity": 631.8771623877495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510846.12/warc/CC-MAIN-20200403092656-20200403122656-00069.warc.gz"}
https://www.gradesaver.com/textbooks/math/other-math/thinking-mathematically-6th-edition/chapter-5-number-theory-and-the-real-number-system-5-1-number-theory-prime-and-composite-numbers-exercise-set-5-1-page-256/67
# Chapter 5 - Number Theory and the Real Number System - 5.1 Number Theory: Prime and Composite Numbers - Exercise Set 5.1: 67 $4,560$ #### Work Step by Step Factor each number completely to obtain: $\begin{array}{llll} 240 &= 10(24) &= 2(5)\cdot(2)(2)(2)(3) &= 2(2)(2)(2)(3)(5) \\ 285 &= 15(19) &= 3(5)(19) & \end{array}$ Thus, the least common multiple (LCM) of the two numbers is: LCM $= 2(2)(2)(2)(3)(5)(19) = 4,560$.
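A quick cross-check of this result in Python, using the identity lcm(a, b) · gcd(a, b) = a · b:

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    # lcm(a, b) * gcd(a, b) == a * b for positive integers
    return a * b // gcd(a, b)

print(lcm(240, 285))   # 4560, matching 2*2*2*2*3*5*19
```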
2018-07-22 13:20:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5846607685089111, "perplexity": 995.8664246282707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593223.90/warc/CC-MAIN-20180722120017-20180722140017-00172.warc.gz"}
http://mathhelpforum.com/calculus/90031-function-two-variables-continuity.html
Math Help - Function in two variables - continuity 1. Function in two variables - continuity Hello, still struggling with these proofs; some help would be appreciated. Can the function $f(x,y) = \frac{\sin x \sin^3 y}{1 - \cos(x^2+y^2)}$ be defined at $(0,0)$ in such a way that it becomes continuous there? Prove your answer. Regards, 2. Originally posted by Robb [question as above]. No, it can't, because $\lim_{(x,y)\to(0,0)} f(x,y)$ doesn't exist: $\lim_{x\to 0} f(x,x) = \frac{1}{2}$ but $\lim_{x\to 0} f(x,-x) = -\frac{1}{2}.$
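To see where the two path limits come from, expand numerator and denominator near the origin; this is a sketch using low-order Taylor expansions.

```latex
% Along y = \pm x:
\sin x \,\sin^3(\pm x) = \pm x^4 + O(x^6),
\qquad
1 - \cos(2x^2) = \tfrac{1}{2}\,(2x^2)^2 + O(x^8) = 2x^4 + O(x^8),
% hence
f(x,\pm x) = \frac{\pm x^4 + O(x^6)}{2x^4 + O(x^8)}
\;\longrightarrow\; \pm\tfrac{1}{2}
\quad\text{as } x \to 0.
```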
2015-03-04 07:06:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9023052453994751, "perplexity": 664.1439055999122}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463458.93/warc/CC-MAIN-20150226074103-00125-ip-10-28-5-156.ec2.internal.warc.gz"}
https://www.econstor.eu/handle/10419/113046
Please use this identifier to cite or link to this item: http://hdl.handle.net/10419/113046 Title: # Don't Kill the Goose that Lays the Golden Eggs: Strategic Delay in Project Completion Authors: Katolnik, Svetlana; Schöndube, Jens Robert Year of Publication: 2015 Series/Report no.: Beiträge zur Jahrestagung des Vereins für Socialpolitik 2015: Ökonomische Entwicklung - Theorie und Politik - Session: Contracts A12-V2 Abstract: It's puzzling that most projects fail to complete within the predetermined timeframe given that timing considerations rank among the major goals in project management. We argue that when managers can extract private benefits from working on a project, project delay becomes optimal. We introduce a continuous-time framework for project management activities that incorporates this feature. A manager's unobserved effort cumulatively increases the project's success probability, but decreases the expected duration of the project and with it the expected flow of on-the-job benefits. A strict deadline limits incentives for effort delay, but also decreases the probability that the project will be terminated in due time. In this trade-off, the optimal deadline balances the increase in expected project value against the expected increase in project duration and costs. Because the manager does not want to "kill the golden goose" prematurely, he always prefers a stricter deadline compared to the principal. As a result, project completion is threatened by both effort provision over time *and* contractual agreements on time. JEL: D82 M52 M55 Document Type: Conference Paper Files in This Item: File Size 359.49 kB
2017-05-25 12:49:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27533894777297974, "perplexity": 6038.5415970235845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608067.23/warc/CC-MAIN-20170525121448-20170525141448-00225.warc.gz"}
https://www.khronos.org/registry/vulkan/specs/1.0/man/html/VkViewport.html
## C Specification The VkViewport structure is defined as: typedef struct VkViewport { float x; float y; float width; float height; float minDepth; float maxDepth; } VkViewport; ## Members • x and y are the viewport's upper left corner (x, y). • width and height are the viewport's width and height, respectively. • minDepth and maxDepth are the depth range for the viewport. It is valid for minDepth to be greater than or equal to maxDepth. ## Description The framebuffer depth coordinate zf may be represented using either a fixed-point or floating-point representation. However, a floating-point representation must be used if the depth/stencil attachment has a floating-point depth component. If an m-bit fixed-point representation is used, we assume that it represents each value $$\frac{k}{2^m - 1}$$, where k ∈ { 0, 1, …, 2m−1 }, as k (e.g. 1.0 is represented in binary as a string of all ones). The viewport parameters shown in the above equations are found from these values as: ox = x + width/2, oy = y + height/2, oz = minDepth, px = width, py = height, pz = maxDepth − minDepth. The width and height of the implementation-dependent maximum viewport dimensions must be greater than or equal to the width and height of the largest image which can be created and attached to a framebuffer. The floating-point viewport bounds are represented with an implementation-dependent precision. Valid Usage • width must be greater than 0.0 • width must be less than or equal to VkPhysicalDeviceLimits::maxViewportDimensions[0] • height must be greater than 0.0 • The absolute value of height must be less than or equal to VkPhysicalDeviceLimits::maxViewportDimensions[1] • x must be greater than or equal to viewportBoundsRange[0] • (x + width) must be less than or equal to viewportBoundsRange[1] • y must be greater than or equal to viewportBoundsRange[0] • (y + height) must be less than or equal to viewportBoundsRange[1] • minDepth must be between 0.0 and 1.0, inclusive • maxDepth must be between 0.0 and 1.0, inclusive
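As an illustration of how these parameters are used, here is a small Python sketch applying the viewport transform to a vertex in normalized device coordinates. The framebuffer-coordinate formulas (x_f = (p_x/2)·x_d + o_x, and analogously for y and z) are my assumption from the surrounding spec context — the "above equations" this page refers to — not quoted from this page itself.

```python
def viewport_transform(xd, yd, zd, viewport):
    """Map normalized device coordinates to framebuffer coordinates.

    `viewport` mirrors the VkViewport members: x, y, width, height,
    minDepth, maxDepth. The o/p parameters follow the definitions above.
    """
    ox = viewport["x"] + viewport["width"] / 2.0
    oy = viewport["y"] + viewport["height"] / 2.0
    oz = viewport["minDepth"]
    px, py = viewport["width"], viewport["height"]
    pz = viewport["maxDepth"] - viewport["minDepth"]
    return (px / 2.0 * xd + ox, py / 2.0 * yd + oy, pz * zd + oz)

# Example: full-screen 800x600 viewport, default depth range [0, 1].
vp = {"x": 0.0, "y": 0.0, "width": 800.0, "height": 600.0,
      "minDepth": 0.0, "maxDepth": 1.0}
print(viewport_transform(0.0, 0.0, 0.5, vp))   # centre of the viewport
```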
2018-01-16 15:05:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6928262114524841, "perplexity": 1672.2676741932216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886437.0/warc/CC-MAIN-20180116144951-20180116164951-00668.warc.gz"}
http://mathoverflow.net/feeds/question/71476
Bounded Linear Functionals and sets of measure zero (MathOverflow question 71476, asked by Ramesh Kadambi, 2011-07-28). I am teaching myself measure theory from Bartle's "The Elements of Integration and Lebesgue Measure". In order to prove the Riesz representation theorem he defines a set function $\lambda(E) = G(1_E)$, where $G$ is a bounded linear functional and $1_E$ is the usual characteristic function. In order to show that $\lambda$ is absolutely continuous with respect to $\mu$, he claims that $\lambda$ defined as before is zero on a set of measure zero. We are working in a measure space $(X, \sigma(X), \mu)$ and $G$ is a bounded linear functional on $L_1(X, \sigma(X), \mu)$. The question is: why is $\lambda(M) = G(1_M)$ zero if $M$ is a set of measure zero? I am unable to figure out how this comes about from the fact I know about linear functionals, $G(af + bg) = aG(f) + bG(g)$ for $f, g \in L_1$.
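The question is left unanswered in this feed snapshot; the one-line resolution, added here as a sketch, is that $1_M$ is the zero element of $L_1$ when $\mu(M) = 0$, so linearity (plus boundedness) forces $G(1_M) = 0$:

```latex
\mu(M) = 0 \;\Longrightarrow\; 1_M = 0 \text{ a.e.}
\;\Longrightarrow\; \|1_M\|_{L_1} = \int_X 1_M \, d\mu = \mu(M) = 0,
\quad\text{so}\quad
|\lambda(M)| = |G(1_M)| \le \|G\| \, \|1_M\|_{L_1} = 0 .
```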
2013-06-19 16:36:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9143369793891907, "perplexity": 136.31856746280687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708882773/warc/CC-MAIN-20130516125442-00059-ip-10-60-113-184.ec2.internal.warc.gz"}
https://deepai.org/publication/a-nonlinear-diffusion-method-for-semi-supervised-learning-on-hypergraphs
# A nonlinear diffusion method for semi-supervised learning on hypergraphs

Hypergraphs are a common model for multiway relationships in data, and hypergraph semi-supervised learning is the problem of assigning labels to all nodes in a hypergraph, given labels on just a few nodes. Diffusions and label spreading are classical techniques for semi-supervised learning in the graph setting, and there are some standard ways to extend them to hypergraphs. However, these methods are linear models, and do not offer an obvious way of incorporating node features for making predictions. Here, we develop a nonlinear diffusion process on hypergraphs that spreads both features and labels following the hypergraph structure, which can be interpreted as a hypergraph equilibrium network. Even though the process is nonlinear, we show global convergence to a unique limiting point for a broad class of nonlinearities, which is the global optimum of an interpretable, regularized semi-supervised learning loss function. The limiting point serves as a node embedding from which we make predictions with a linear model. Our approach is much more accurate than several hypergraph neural networks, and also takes less time to train.

## 1 Introduction

In graph-based semi-supervised learning (SSL), one has labels at a small number of nodes, and the goal is to predict labels at the remaining nodes. Diffusions, label spreading, and label propagation are classical techniques for this problem, where known labels are diffused, spread, or propagated over the edges in a graph [41, 43]. These methods were originally developed for graphs where the set of nodes corresponds to a point cloud, and edges are similarity measures such as k-nearest neighbors; however, these methods can also be used with relational data such as social networks or co-purchasing [13, 20, 10, 17]. In the latter case, diffusions work because they capture the idea of homophily [27] or assortativity [29], where labels are smooth over the graph.
While graphs are a widely-used model for relational data, many complex systems and datasets are actually described by higher-order relationships that go beyond pairwise interactions [5, 4, 34]. For instance, co-authorship often involves groups of more than two authors, people in social networks gather in small groups and not just pairs, and emails can have several recipients. A hypergraph is a standard representation for such data, where a hyperedge can connect any number of nodes. Directly modeling these higher-order interactions has led to improvements in a number of machine learning problems [42, 6, 22, 23, 39, 32, 2]. Along this line, there are a number of diffusion or label spreading techniques for semi-supervised learning on hypergraphs [42, 14, 40, 21, 24, 37, 35], which are also built on principles of similarity or assortativity. However, these methods are designed for cases where only labels are available, and do not take advantage of rich features or metadata associated with hypergraphs that are potentially useful for making accurate predictions. For instance, coauthorship or email data could have rich textual information. Hypergraph neural networks (HNNs) are one popular approach for combining both features and network structure for SSL [39, 12, 11]. The hidden layers of HNNs combine the features of neighboring nodes with neural networks and learn the model parameters fitting the available labeled nodes. While combining features according to the hypergraph structure is a key idea, HNNs do not take advantage of the fact that connected nodes likely share similar labels; moreover, they can be expensive to train. In contrast, diffusion-like methods work precisely because of homophily and are typically fast. In the simple case of graphs, combining these two ideas has led to several recent advances [19, 15, 16].

Here, we combine the ideas of HNNs and diffusions for SSL on hypergraphs with a method that simultaneously diffuses both labels and features according to the hypergraph structure. In addition to incorporating features, our new diffusion can incorporate a broad class of nonlinearities to increase the modeling capability, which is critical to the architectures of both graph and hypergraph neural networks. Our nonlinear diffusion can be interpreted as a forward model of a simple deep equilibrium network [3] with infinitely many layers. The limiting point of the process provides an embedding at each node, which can then be combined with a simpler model such as multinomial logistic regression to make predictions at each node. Remarkably, even though our model is nonlinear, we can still prove a number of theoretical properties about the diffusion process. In particular, we show that the limiting point of the process is unique and provide a simple, globally convergent iterative algorithm for computing it. Furthermore, we show that this limiting point is the global optimum of an interpretable optimization formulation of SSL, similar to the linear case of graphs, where the objective function is a combination of a squared loss term and a Laplacian-like regularization term. From this perspective, the limiting point is both close to the known labels and features at each node and also smooth with respect to the hypergraph, as measured by nonlinear aggregation functions of values at nodes on hyperedges.
Empirically, we find that using the limiting point of our nonlinear hypergraph diffusion as features for a linear model outperforms state-of-the-art HNNs and other diffusions on several real-world datasets. Including the final-layer embedding of HNNs as additional features in this linear model does not improve accuracy.

## 2 Problem set-up

We consider the multi-class semi-supervised classification problem on a hypergraph, in which we are given nodes with features and hyperedges connecting them. A small number of node labels are available and the goal is to assign labels to the remaining set of nodes. Here we introduce some notation. Let $H = (V, E)$ be a hypergraph where $V$ is the set of nodes and $E$ the set of hyperedges. Each hyperedge $e \in E$ has an associated positive weight $w(e)$. In our setting every node can belong to an arbitrary number of hyperedges. Let $\delta_i$ denote the (hyper)degree of node $i$, i.e., the weighted number of hyperedges node $i$ participates in, $\delta_i = \sum_{e : i \in e} w(e)$, and let $D = \mathrm{Diag}(\delta_1, \dots, \delta_n)$ be the diagonal matrix of the node degrees. Throughout we assume no isolated nodes, i.e. $\delta_i > 0$ for all $i$. This is a standard assumption, as one can always add self loops or remove isolated vertices. We will represent $d$-dimensional features on the nodes in $V$ by a matrix $X \in \mathbb{R}^{n \times d}$, where row $x_i$ is the feature vector of node $i$. Suppose each node belongs to one of $c$ classes, denoted $\{1, \dots, c\}$, and we know the labels of a (small) training subset $T \subseteq V$ of the nodes. We denote by $Y \in \mathbb{R}^{n \times c}$ the input-labels matrix of the nodes, with entries $Y_{ij} = (y_i)_j = 1$ if node $i$ belongs to class $j$, and $0$ otherwise. Since we know the labels only for the nodes in $T$, all the rows with $i \notin T$ are fully zero, while the rows with $i \in T$ have exactly one nonzero entry.

## 3 Background and related work on hypergraph semi-supervised learning

Here, we review basic ideas in hypergraph neural networks (HNNs) and hypergraph label spreading (HLS), which will contextualize the methods we develop in the next section.

### 3.1 Neural network approaches

Graph (convolutional) neural networks are a broadly adopted method for semi-supervised learning on graphs. Several generalizations to hypergraphs have been proposed, and we summarize the most fundamental ideas here. When $|e| = 2$ for all $e \in E$, the hypergraph is a standard graph $G = (V, E)$. The basic formulation of a graph convolutional network (GCN) [18] is based on a first-order approximation of the convolution operator on graph signals [26]. This approximation boils down to a mapping $F \mapsto \bar{A} F$, where $\Delta = I - \bar{A}$ is the (possibly rescaled) normalized Laplacian matrix of the graph $G$, $A$ is the adjacency matrix, and $\bar{A} = D^{-1/2} A D^{-1/2}$ is the normalized adjacency matrix. The forward model for a two-layer GCN is then $Z = \mathrm{softmax}(F) = \mathrm{softmax}\big(\bar{A}\, \sigma(\bar{A} X \Theta^{(1)})\, \Theta^{(2)}\big)$, where $X$ is the matrix of the graph signals (the node features), $\Theta^{(1)}, \Theta^{(2)}$ are the input-to-hidden and hidden-to-output weight matrices of the network, and $\sigma$ is a nonlinear activation function (typically, $\mathrm{ReLU}$). Here, the graph convolutional filter combines features across nodes that are well connected in the input graph. For multi-class semi-supervised learning problems, the weights are then trained by minimizing the cross-entropy loss $-\sum_{i \in T} \sum_{j=1}^{c} Y_{ij} \ln Z_{ij}$ over the training set of known labels $T$. Several hypergraph variations of this neural network model have been proposed for the more general case $|e| > 2$. A common strategy is to consider a hypergraph Laplacian and define an analogous convolutional filter. One simple case is to define $\Delta$ as the Laplacian of the clique expansion graph of $H$ [1, 42], where the hypergraph is mapped to a graph on the same set of nodes by adding a clique among the nodes of each hyperedge.
This is the approach used in HGNN [12], and other variants use mediators instead of cliques in the hypergraph-to-graph reduction [8]. HyperGCN [39] is based on the nonlinear hypergraph Laplacian [25, 9]. This model uses a GCN on a graph $G_F$ that depends on the features, where $(i, j)$ is an edge of $G_F$ if and only if $i$ and $j$ maximize the signal difference $\|f_i - f_j\|$ within some hyperedge $e$. The convolutional filter is then defined in terms of the normalized Laplacian $A$ of $G_F$, resulting in the two-layer HyperGCN network $F^{(1)} = \sigma(A X \Theta^{(1)})$, $Z = \mathrm{softmax}(F) = \mathrm{softmax}(A F^{(1)} \Theta^{(2)})$.

### 3.2 Laplacian regularization and label spreading

Semi-supervised learning based on Laplacian-like regularization strategies was developed by [41] for graphs and then by [42] for hypergraphs. The main idea of these approaches is to obtain a classifier by minimizing the regularized square loss function $\min_F \ \ell_\Omega(F) = \| F - Y \|_2^2 + \lambda\, \Omega_H(F)$ (1), where $\Omega_H$ is a regularization term that takes the hypergraph structure into account. (Note that only labels — and not features — are used here.) In particular, if $f_i$ denotes the $i$-th row of $F$, the clique expansion approach of [42] defines $\Omega_H = \Omega_H^{L_2}$, with $\Omega_H^{L_2}(F) = \sum_{e \in E} \sum_{i, j \in e} \frac{w(e)}{|e|} \big\| \frac{f_i}{\sqrt{\delta_i}} - \frac{f_j}{\sqrt{\delta_j}} \big\|_2^2$, while the total variation on hypergraphs regularizer proposed by [14] is $\Omega_H = \Omega_H^{L_1}$, where $\Omega_H^{L_1}(F) = \sum_{e \in E} w(e) \max_{i, j \in e} \| f_i - f_j \|_1$. The graph construction in HyperGCN can be seen as a type of regularization based on this total variation approach.

These two choices of regularizing terms can be handled with different strategies. As $\Omega_H^{L_2}$ is quadratic, one can solve (1) via gradient descent to obtain the simple iterative method $F^{(k+1)} = \alpha \bar{A}_H F^{(k)} + (1 - \alpha) Y$ (2), where $\bar{A}_H$ is the normalized adjacency matrix of the clique-expanded graph of $H$. The sequence (2) converges to the global solution of (1) for any starting point, and the limit is entrywise nonnegative. This method is usually referred to as Hypergraph Label Spreading (HLS), as the iteration in (2) takes the initial labels and "spreads" or "diffuses" them throughout the vertices of the hypergraph $H$, following the edge structure of its clique-expanded graph. It is worth noting, in passing, that each step of (2) can also be interpreted as one layer of the forward model of a linear neural network (i.e., with no activation functions) with a bias term given by $(1 - \alpha) Y$. We will further discuss this analogy later on in Section 4.

The one-norm-based regularizer is related to the hypergraph $p$-Laplacian energy [7, 36] and has advantages for hyperedge-cut interpretations. The $\Omega_H^{L_1}$ term is convex but not differentiable, and computing the solution of (1) requires more sophisticated and computationally demanding optimization schemes [14, 40]. Unlike HLS in (2), this case cannot be easily interpreted as a label diffusion or as a linear forward network.

## 4 Nonlinear hypergraph diffusion

The guiding principle of both the hypergraph neural networks and the regularization approaches discussed above is that nodes that share connections are likely to also share the same label. This is encoded implicitly by the convolutional networks via the representation $F \mapsto \bar{A} F$ and explicitly by label spreading methods via the regularization term in (1). The neural network approaches typically require expensive training to find structure in the features, whereas HLS is a fast linear model that enforces smoothness of labels over the hypergraph. In this section, we propose HyperND, a new nonlinear hypergraph diffusion method that propagates both input node labels and feature embeddings through the hypergraph in a manner similar to (2).
The method is a simple "forward model" akin to (2), but allows for nonlinear activations, which increases modeling power and yields a type of hypergraph deep equilibrium network architecture. Recall that each node $i$ has a label-encoding vector $y_i$ ($y_i$ is the all-zero vector for initially unlabeled points $i \notin T$) and a feature vector $x_i$. Thus, each node in the hypergraph has an initial $(c + d)$-dimensional embedding, which forms an input matrix $U = [Y\ X]$ with rows $u_i = [y_i\ x_i]$. Our nonlinear diffusion process will result in a new embedding $F^*$, which we then use to train a logistic multi-class classifier $Z^* = \mathrm{softmax}(F^* \Theta)$ based on the known labels and their new embedding, by minimizing the cross-entropy loss $-\sum_{i \in T} \sum_j Y_{ij} \ln Z^*_{ij}$ (3). Unlike for HNNs, the optimization over $\Theta$ and the computation of $F^*$ are decoupled.

### 4.1 The model

Our proposed hypergraph-based diffusion map is a nonlinear generalization of the clique-expansion hypergraph Laplacian. Specifically, let $K$ denote the incidence matrix of $H$, whose rows correspond to nodes and columns to hyperedges: $K_{i,e} = 1$ if $i \in e$ and $0$ otherwise. To manage possible weights on hyperedges, we use a diagonal matrix defined by $W = \mathrm{Diag}(w(e_1), \dots, w(e_m))$. With this notation, the degree of node $i$ is equal to $\delta_i = (K W \mathbf{1})_i$, where $\mathbf{1}$ is a vector with one in every entry. For a standard graph, i.e., a hypergraph where all edges have exactly two nodes, $K W K^\top = A + D$, where $A$ is the adjacency matrix of the graph and $D$ is the diagonal matrix of the weighted node degrees. Similarly, for a general hypergraph $H$, we have the identity $K W K^\top = A_H + D$, where $A_H$ is the adjacency matrix of the clique-expansion graph associated with $H$. Then

$D^{-1/2} K W K^\top D^{-1/2} = \bar{A}_H + I \qquad (4)$

is the clique-expansion hypergraph normalized adjacency [42] that can be used as a hypergraph convolutional filter [12]. Here, we propose a diffusion map which is similar to (4) but defines a nonlinear hypergraph convolutional filter:

$\Phi(F) = D^{-1/2} K W \sigma\big(K^\top \varrho(D^{-1/2} F)\big), \qquad (5)$

where $\sigma$ and $\varrho$ are diagonal maps (that is, $\sigma(F)_{ij} = \sigma(F_{ij})$ for some real function $\sigma$, and $\varrho$ is similar). Note that when $\sigma$ and $\varrho$ are the identity maps, $\Phi$ reduces to the clique expansion (4), and that any neural network activation function is a diagonal mapping. Our proposed hypergraph semi-supervised classifier uses the normalized fixed point of the nonlinear diffusion process

$F^{(k+1)} = \alpha \Phi(F^{(k)}) + (1 - \alpha) U. \qquad (6)$

Similarly to (2), each step of (6) can be interpreted as one layer of the forward model of a simple hypergraph neural network, which only uses the convolutional filter and has no weights. Thus, the limit point

$F^* = \alpha \Phi(F^*) + (1 - \alpha) U \qquad (7)$

corresponds to a simplified hypergraph convolutional network with infinitely many layers. Networks with infinitely many layers are sometimes called deep equilibrium networks [3], and one of the most challenging questions for this type of network is whether the limit point exists and is unique [38]. Our main theoretical result shows that, under mild assumptions on $\sigma$ and $\varrho$, a unique fixed point always exists, provided we look for it on a suitable projective slice of the form $\{F : \varphi(F) = 1\}$, where $\varphi$ is a homogeneous scaling function, such as a norm (we will specify a particular $\varphi$ later). In what follows, we use the notation $F \geq 0$ (resp. $F > 0$) to indicate that $F$ has nonnegative (resp. positive) entries.

Theorem 4.1. Let $\Phi$ be a homogeneous of degree $p > 0$, positive and order-preserving mapping, i.e., 1. $\Phi(\lambda F) = \lambda^p \Phi(F)$ for all $F \geq 0$ and all $\lambda > 0$, 2. $\Phi(F) > 0$ if $F > 0$, and 3. $\Phi(F) \leq \Phi(G)$ if $F \leq G$. Let $U$ be an entrywise positive input embedding, let $\alpha \in (0, 1)$, and let $\varphi$ be a real-valued, positive and one-homogeneous function, i.e., $\varphi(F) > 0$ for all $F \geq 0$ with $F \neq 0$, and $\varphi(\lambda F) = \lambda\, \varphi(F)$ for all $F$ and all $\lambda > 0$.
###### Theorem 4.1.

Let $\Phi$ be a homogeneous of degree $p\le 1$, positive and order-preserving mapping, i.e.,

1. $\Phi(\lambda F)=\lambda^p\,\Phi(F)$ for all $F\ge 0$ and all $\lambda>0$,
2. $\Phi(F)>0$ if $F>0$, and
3. $\Phi(F)\le\Phi(G)$ if $F\le G$.

Let $U$ be an entrywise positive input embedding, let $0<\alpha<1$, and let $\varphi$ be a real-valued, positive and one-homogeneous function, i.e., $\varphi(F)>0$ for all $F\ge 0$ with $F\ne 0$, and $\varphi(\lambda F)=\lambda\,\varphi(F)$ for all $F$ and all $\lambda>0$. The sequence

$$\begin{cases}\tilde F^{(k)}=\alpha\,\Phi(F^{(k)})+(1-\alpha)\,U\\ F^{(k+1)}=\tilde F^{(k)}/\varphi(\tilde F^{(k)})\end{cases} \qquad (8)$$

converges to the unique nonnegative fixed point $F^*$ such that

$$F^*=\alpha\,\Phi(F^*)+(1-\alpha)\,U,\qquad \varphi(F^*)=1$$

for any starting point with nonnegative entries. Moreover, $F^*$ is entrywise positive.

###### Proof.

Consider the iteration in (8). As $\Phi$ is $p$-homogeneous with $p\le 1$ and $\varphi$ is one-homogeneous, the $i$-th component of $\Phi$ is bounded on the slice $\{F:\varphi(F)=1\}$, i.e., there exists a constant $M_i$ such that

$$\max_{F:\,\varphi(F)=1}\Phi(F)_i=\max_F\frac{\Phi(F)_i}{\varphi(F)^p}\le M_i.$$

Thus, we have $\Phi(F)\le M$ entrywise, for all $F$ such that $\varphi(F)=1$. As a consequence, since $U$ is entrywise positive, there exists a $\gamma>0$ such that $\alpha\Phi(F)+(1-\alpha)U\ge\gamma$ entrywise, for all $F$ such that $\varphi(F)=1$. The thesis thus follows from Theorem 3.1 in [35]. ∎

Following (5), the $i$-th row of $\Phi(F)$ is

$$\Phi(F)_{i,:}=\frac{1}{\sqrt{\delta_i}}\sum_{e:\,i\in e}w(e)\,\sigma\Big(\sum_{j\in e}\varrho\Big(\frac{f_j}{\sqrt{\delta_j}}\Big)\Big),$$

which highlights how $\sigma$ and $\varrho$ combine features and labels along each hyperedge. This operation creates a $(c+d)$-dimensional hyperedge embedding, which we denote by $\mu_e$:

$$\mu_e(F)=\sigma\big(K^\top\varrho(F)\big)_{e,:},$$

so that the embedding of hyperedge $e$ for an input $F$ is $\mu_e(D^{-1/2}F)$. Thus, each step of (6) or, equivalently, each of the infinitely many layers of the deep equilibrium model (7), mixes the combined label and feature node embeddings along the hyperedges, as illustrated in Figure 1.

In addition to guarantees on existence and uniqueness, if we choose the slice appropriately, then our equilibrium model also minimizes a regularized loss function of the form (1), with regularization term

$$\Omega_H^*(F)=\sum_{i\in V}\sum_{e:\,i\in e}w(e)\,\Big\|(D^{-1/2}F)_{i,:}-\tfrac12\,\mu_e(D^{-1/2}F)\Big\|^2.$$

This is characterized by the following result.

###### Theorem 4.2.

Under the same assumptions of Theorem 4.1, suppose $\Phi$ and $\varphi$ are defined as

$$\Phi(F)=D^{-1/2}KW\,\sigma\big(K^\top\varrho(D^{-1/2}F)\big) \qquad (9)$$

$$\varphi(F)=\frac12\sqrt{\sum_{i\in V}\sum_{e:\,i\in e}w(e)\,\big\|\mu_e(D^{-1/2}F)\big\|_2^2} \qquad (10)$$

where $\sigma$ and $\varrho$ are diagonal mappings. If $\Phi$ is differentiable and one-homogeneous, and if $U$ is initially scaled so that $\varphi(U)=1$, then the limit $F^*$ of (8) is the global optimum of

$$\min_{F\in\mathbb R^{n\times(c+d)}}\ \ell_\Omega(F)\qquad\text{subject to}\qquad F\ge 0,\ \varphi(F)=1,$$

where

$$\ell_\Omega(F)=\Big\|F-\frac{U}{\varphi(U)}\Big\|^2+\lambda\,\Omega_H^*(F), \qquad (11)$$

and $\lambda=\alpha/(1-\alpha)$.

###### Proof.

Note that, as $\mu_e(D^{-1/2}F)$ is one-homogeneous in $F$ and entrywise positive for all $F>0$, the function

$$\varphi(F)=\frac12\sqrt{\sum_{i\in V}\sum_{e:\,i\in e}w(e)\,\big\|\mu_e(D^{-1/2}F)\big\|_2^2}$$

is positive and one-homogeneous. Thus, by Theorem 4.1 the iteration (8) converges to the unique fixed point in $\{F\ge 0:\varphi(F)=1\}$, for any nonnegative starting point. We show below that this is also the only point where the gradient of the constrained objective vanishes. Let us denote by $S(F)$ the matrix of the hyperedge embeddings, with rows $S(F)_{e,:}=\mu_e(F)$. We have

$$\begin{aligned}
\Omega_H^*(D^{1/2}F)&=\sum_i\sum_{e:\,i\in e}w(e)\sum_j\Big(F_{ij}-\tfrac12 S(F)_{ej}\Big)^2\\
&=\sum_i\sum_{e:\,i\in e}w(e)\sum_j\big(F_{ij}^2-F_{ij}S(F)_{ej}\big)+\tfrac14\sum_i\sum_{e:\,i\in e}\sum_j w(e)\,S(F)_{ej}^2\\
&=\sum_i\sum_j F_{ij}^2\,\delta_i-F_{ij}B(F)_{ij}+\varphi(D^{1/2}F)^2\\
&=\big\langle F,\,DF-B(F)\big\rangle+\varphi(D^{1/2}F)^2
\end{aligned}$$

where $B(F)=KW\,S(F)$. Therefore we get

$$\Omega_H^*(F)-\varphi(F)^2=\big\langle F,\,F-D^{-1/2}B(D^{-1/2}F)\big\rangle=\big\langle F,\,F-\Phi(F)\big\rangle.$$

As $\Phi$ is one-homogeneous and differentiable, by the Euler theorem for homogeneous functions we have that

$$\frac{d}{dF}\big\{\Omega_H^*(F)-\varphi(F)^2\big\}=\frac{d}{dF}\big\langle F,\,F-\Phi(F)\big\rangle=2\big(F-\Phi(F)\big).$$

Thus,

$$\frac{d}{dF}\big\{\ell_\Omega(F)-\lambda\,\varphi(F)^2\big\}=2\big(F-U/\varphi(U)\big)+2\lambda\big(F-\Phi(F)\big)=2\big((1+\lambda)F-\lambda\,\Phi(F)-U/\varphi(U)\big),$$

which shows that the gradient vanishes at a point $F^*$ if and only if $F^*$ is a fixed point

$$F^*=\frac{\lambda}{1+\lambda}\,\Phi(F^*)+\frac{1}{1+\lambda}\,\frac{U}{\varphi(U)},$$

which coincides with (7) for $\alpha=\lambda/(1+\lambda)$ and $U$ scaled so that $\varphi(U)=1$. Finally, as the two losses $\ell_\Omega(F)$ and $\ell_\Omega(F)-\lambda\varphi(F)^2$ have the same minimizers on the slice $\{F:\varphi(F)=1\}$, we conclude. ∎

For example, if we choose

$$\varrho(F)=F^p,\qquad \sigma(F)=2\,\big(D_E^{-1}F\big)^{1/p}, \qquad (12)$$

where the powers are taken entrywise and $D_E$ denotes the diagonal matrix with diagonal entries $|e_1|,\dots,|e_m|$, then the assumptions of both Theorems 4.1 and 4.2 are satisfied and, for every $e\in E$, we have

$$\tfrac12\,\mu_e(D^{-1/2}F)=\Big(\frac{1}{|e|}\sum_{i\in e}\Big(\frac{f_i}{\sqrt{\delta_i}}\Big)^p\Big)^{1/p}=:\mathrm{mean}_p\Big\{\frac{f_i}{\sqrt{\delta_i}},\ i\in e\Big\}.$$

In other words, $\tfrac12\mu_e(D^{-1/2}F)$ is the $p$-power mean of the normalized feature vectors of all the nodes in the hyperedge $e$. Using a power mean for the nonlinear functions $\sigma$ and $\varrho$ yields a natural hypergraph consistency interpretation of the diffusion process in (6). Specifically, the regularization term becomes

$$\sum_{i\in V}\sum_{e:\,i\in e}w(e)\,\Big\|\frac{f_i}{\sqrt{\delta_i}}-\mathrm{mean}_p\Big\{\frac{f_j}{\sqrt{\delta_j}},\ j\in e\Big\}\Big\|^2.$$

Thus, the embedding that minimizes $\ell_\Omega$ in (11) is such that each node embedding must be similar to the $p$-power mean of the node embeddings of the other vertices in the same hyperedge.
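Putting (8) and (12) together, here is a minimal NumPy sketch of the resulting diffusion. The variable names, the fixed iteration budget, and the dense-matrix representation are our simplifications; a practical implementation would use sparse matrices and a convergence test.

```python
import numpy as np

def hypernd(K, w, U, p=2.0, alpha=0.9, iters=100):
    """Normalized nonlinear diffusion (8) with the power-mean maps (12).
    K: (n, m) incidence matrix; w: (m,) hyperedge weights; U: entrywise positive input."""
    delta = K @ w                              # node degrees
    D_is = np.diag(1.0 / np.sqrt(delta))       # D^{-1/2}
    sizes = K.sum(axis=0)                      # hyperedge sizes |e|

    def mu(F_hat):                             # rows: mu_e(F_hat) = sigma(K^T rho(F_hat))_e
        return 2.0 * ((K.T @ F_hat**p) / sizes[:, None]) ** (1.0 / p)

    def Phi(F):                                # nonlinear filter (5)/(9)
        return D_is @ (K @ (np.diag(w) @ mu(D_is @ F)))

    def phi(F):                                # scaling function (10)
        S = mu(D_is @ F)
        return 0.5 * np.sqrt(((K * w) @ (S**2).sum(axis=1)).sum())

    F = U / phi(U)
    for _ in range(iters):
        F_tilde = alpha * Phi(F) + (1 - alpha) * U
        F = F_tilde / phi(F_tilde)
    return F
```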
### 4.2 Algorithm details

A seemingly difficult requirement for our main theoretical results is that we need an entrywise positive input embedding $U$. However, this turns out not to be that stringent in practice. If $X\ge 0$, i.e., we have nonnegative node features, we can easily obtain a positive embedding by performing an initial label-smoothing-type step [28, 33], where we choose a small $\varepsilon>0$ and let

$$U_\varepsilon=(1-\varepsilon)\,[Y\ X]+\varepsilon\,\mathbf 1\mathbf 1^\top. \qquad (13)$$

Note that nonnegative input features are not uncommon. For instance, bag-of-words, one-hot encodings, and binary features in general are all nonnegative. In fact, for all of the real-world datasets we consider in our experiments, the features are nonnegative. Similarly, if some of the input features have negative values (e.g., features coming from a word embedding), one can perform other preprocessing manipulations (such as a simple shift of the feature embedding) to obtain the required positivity.

Once the new node embedding $F^*$ is computed, we use it to infer the labels of the non-labeled datapoints via cross-entropy minimization. The pseudocode of the classification procedure is shown in Algorithm 1. Similar to standard LS, the parameter $\alpha$ in Algorithm 1 yields a convex combination of the diffusion mapping $\Phi$ and the "bias" $U$, allowing us to tune the contribution given by the homophily along the hyperedges against the one provided by the input features and labels. Moreover, in view of Theorem 4.2, the parameter $\alpha$ quantifies the strength of the regularization coefficient $\lambda=\alpha/(1-\alpha)$, which allows us to tune the contribution of the regularization term $\Omega_H^*$ over the data-fitting term in (11).

We also point out that, since HyperND is a forward model, it can be implemented efficiently. The cost of each iteration of (8) is dominated by the cost of the two matrix-vector products with the matrices $K$ and $K^\top$, both of which only require a single pass over the input data and can be parallelized with standard techniques. Therefore, HyperND scales linearly with the number and size of the hyperedges, i.e., its computational cost is linear in the size of the data.
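For completeness, here is a sketch of the preprocessing step (13) feeding into the diffusion sketched above; the value of `eps` is an arbitrary illustrative choice, not the setting used in our experiments, and `hypernd` refers to our sketch from Section 4.1.

```python
import numpy as np

def smooth_input(Y, X, eps=0.01):
    """Label-smoothing-type step (13): returns an entrywise positive input embedding.
    eps is illustrative; any small positive value makes U_eps > 0 when Y, X >= 0."""
    U = np.hstack([Y, X])
    return (1 - eps) * U + eps * np.ones_like(U)

# Assumed end-to-end usage (names are ours):
# U = smooth_input(Y, X)
# F_star = hypernd(K, w, U, p=2.0, alpha=0.9)
```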
## 5 Experiments

We now evaluate our method on several real-world hypergraph datasets. The datasets we use are co-citation and co-authorship hypergraphs: Cora co-authorship, Cora co-citation, Citeseer, Pubmed [31] and DBLP [30]. Table 1 reports summary statistics of these datasets. All nodes in the datasets are documents, features are given by the content of the abstract, and hyperedge connections are based on either co-citation or co-authorship. The task for each dataset is to predict the topic to which a document belongs (multi-class classification). We compare our method to five baselines, based on the discussion in Section 3.2.

• TV: This is a confidence-interval subgradient-based method from [40] for the total variation regularization approach of [14]. This method consistently outperforms other label spreading techniques, such as the original PDHG strategy of [14] and the HLS method [42].

• MLP: This is a standard supervised approach, where we train a multilayer perceptron with the features and labels of the training nodes, ignoring the hypergraph.

• MLP+: This is the MLP baseline with a regularization term added to the objective, given by the Laplacian energy associated with the hypergraph Laplacian based on mediators [8].

• HGNN: This is the hypergraph neural network model of [12], which uses the clique-expansion Laplacian [42, 1] for the hypergraph convolutional filter.

• HyperGCN: This is the hypergraph convolutional network model proposed in [39]; see also Section 3. In that paper the authors propose three different variations of this architecture (1-HyperGCN, FastHyperGCN, HyperGCN). In our results we report the best performance across these three models.

Table 2 shows the size of the training set for each dataset and compares the accuracy (mean ± standard deviation) of HyperND, with $\varrho$ and $\sigma$ as in (12), against the different baselines. For each dataset, we use five trials with different samples of the training nodes $T$. All of the algorithms that we use have hyperparameters. For the baselines, we use either default hyperparameters or the reported tuned hyperparameters [18, 39]. For all of the neural-network-based models, we use two layers and 200 training epochs, following [39] and [12]. For our method, we run 5-fold cross-validation with label-balanced 50/50 splits to choose the diffusion parameter $\alpha$. We use the value of $\alpha$ that gives the best mean accuracy over the five folds. As all the datasets we use here have nonnegative features, we preprocess the input via label smoothing as in (13) with a small $\varepsilon$. Our experiments have shown that different choices of $\varepsilon$ do not alter the classification performance of the algorithm. Due to its simple regularization interpretation, we choose $\varrho$ and $\sigma$ to be the $p$-mean maps considered in (12), for various values of $p$. When varying $p$, we change the nonlinear activation functions that define the final embedding in (7). Moreover, in order to highlight the role of $p$ in the performance of the algorithm, we show in Figure 2 the mean accuracy over 10 runs of HyperND, for all of the considered values of $p$ (with no cross-validation).

Our proposed nonlinear diffusion method performs the best overall, with different choices of $p$ yielding better performance on different datasets. In nearly all of the cases, the performance gaps are quite substantial. For example, on the Cora co-citation dataset we achieve nearly 83% accuracy, a value that none of the baselines reaches. Moreover, HyperND scales linearly with the number of nonzero elements in $K$, i.e., the number of hyperedges and their sizes. Thus, it is typically cheap to compute (similar to standard hypergraph LS) and is overall faster to train than a two-layer HyperGCN. Training times are reported in Table 3, where we compare mean execution times over ten runs for our HyperND vs HyperGCN. For HyperND, we show the mean execution time over the five choices of $p$ shown in Table 2.
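Before turning to embedding quality, here is a minimal NumPy sketch of the downstream step: fitting $\Theta$ by gradient descent on the cross-entropy (3) over the diffused embedding. The learning rate and epoch count are illustrative choices of ours, not the settings used in the experiments.

```python
import numpy as np

def train_softmax(F_star, Y, train_idx, lr=0.1, epochs=200):
    """Fit Theta by minimizing the cross-entropy (3) on the labeled (training) nodes."""
    Theta = np.zeros((F_star.shape[1], Y.shape[1]))
    Ft, Yt = F_star[train_idx], Y[train_idx]
    for _ in range(epochs):
        logits = Ft @ Theta
        logits -= logits.max(axis=1, keepdims=True)     # numerical stability
        Z = np.exp(logits)
        Z /= Z.sum(axis=1, keepdims=True)               # softmax, as in Z* = softmax(F* Theta)
        Theta -= lr * Ft.T @ (Z - Yt) / len(train_idx)  # gradient of (3)
    return Theta

# Predicted classes for all nodes: (F_star @ Theta).argmax(axis=1)
```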
As mentioned earlier, the diffusion map (6) can be seen as one layer of a forward neural network model, and the limit point $F^*$ is the node embedding resulting from an equilibrium model, i.e., a forward message-passing network with infinitely many layers. This model yields a new feature-based representation $F^*$, similar to the last-layer embedding of any neural network approach. A natural question is whether or not $F^*$ is actually a better embedding. To this end, we consider four node embeddings and train a classifier via cross-entropy minimization of (3), optimizing $\Theta$. Specifically, we consider the following:

1. (E1) The limit point of a nonlinear "purely label" spreading iteration, obtained by using only the label part of the input in (6). The limit point is the fixed point of (7). By Thm. 4.2, this embedding is a Laplacian regularization method analogous to HLS.

2. (E2) The embedding generated by HyperGCN before the softmax layer.

3. (E3) The limit point (7) of our HyperND. This is the embedding used for the results in Table 2 and Figure 2.

4. (E4) The combination of the representations of our HyperND method and HyperGCN.

Figure 3 shows the accuracy for these embeddings with various values of $p$ for the $p$-mean in HyperND. The best performance is obtained by the two embeddings that contain our learned features $F^*$: (E3) and (E4). In particular, while (E4) includes the final-layer embedding of HyperGCN, it does not improve accuracy over (E3).

## 6 Conclusion

Graph neural networks and hypergraph label spreading are two distinct techniques with different advantages for semi-supervised learning with higher-order relational data. We have developed a method (HyperND) that takes the best from both approaches: feature-based learning, modeling flexibility, label-based regularization, and computational speed. More specifically, HyperND is a nonlinear diffusion that can be interpreted as a deep equilibrium network. Importantly, we can prove that the diffusion converges to a unique fixed point, and we have an algorithm that can compute this fixed point. Furthermore, the fixed point can be interpreted as the global minimizer of an interpretable regularized loss function. Overall, HyperND outperforms neural network and label spreading methods, and we also find evidence that our method learns embeddings that contain information complementary to what is contained in the representations learned by neural network methods.

## Acknowledgments

This research was supported in part by ARO Award W911NF19-1-0057, ARO MURI, NSF Award DMS-1830274, and JP Morgan Chase & Co.

## References

• [1] Sameer Agarwal, Kristin Branson, and Serge Belongie. Higher order learning with graphs. In Proceedings of the 23rd International Conference on Machine Learning, pages 17–24, 2006.
• [2] Francesca Arrigo, Desmond J Higham, and Francesco Tudisco. A framework for second-order eigenvector centralities and clustering coefficients. Proceedings of the Royal Society A, 476(2236):20190724, 2020.
• [3] Shaojie Bai, J Zico Kolter, and Vladlen Koltun. Deep equilibrium models. Advances in Neural Information Processing Systems, 32:690–701, 2019.
• [4] Federico Battiston, Giulia Cencetti, Iacopo Iacopini, Vito Latora, Maxime Lucas, Alice Patania, Jean-Gabriel Young, and Giovanni Petri. Networks beyond pairwise interactions: Structure and dynamics. Physics Reports, 874:1–92, 2020.
• [5] Austin R Benson, Rediet Abebe, Michael T Schaub, Ali Jadbabaie, and Jon Kleinberg. Simplicial closure and higher-order link prediction. Proceedings of the National Academy of Sciences, 115(48):E11221–E11230, 2018.
• [6] Austin R Benson, David F Gleich, and Jure Leskovec. Higher-order organization of complex networks. Science, 353(6295):163–166, 2016.
• [7] Thomas Bühler and Matthias Hein. Spectral clustering based on the graph p-Laplacian. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 81–88, 2009.
• [8] T-H Hubert Chan and Zhibin Liang. Generalizing the hypergraph Laplacian via a diffusion process with mediators. Theoretical Computer Science, 806:416–428, 2020.
• [9] T-H Hubert Chan, Anand Louis, Zhihao Gavin Tang, and Chenzi Zhang. Spectral properties of hypergraph Laplacian and approximation algorithms. Journal of the ACM, 65(3):1–48, 2018.
• [10] Alex Chin, Yatong Chen, Kristen M. Altenburger, and Johan Ugander. Decoupled smoothing on graphs. In The World Wide Web Conference, pages 263–272, 2019.
• [11] Yihe Dong, Will Sawin, and Yoshua Bengio. HNHN: Hypergraph networks with hyperedge neurons.
arXiv preprint arXiv:2006.12278, 2020.
• [12] Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. Hypergraph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3558–3565, 2019.
• [13] David F Gleich and Michael W Mahoney. Using local spectral methods to robustify graph-based learning algorithms. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 359–368, 2015.
• [14] Matthias Hein, Simon Setzer, Leonardo Jost, and Syama Sundar Rangapuram. The total variation on hypergraphs - learning on hypergraphs revisited. In Advances in Neural Information Processing Systems, pages 2427–2435, 2013.
• [15] Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, and Austin R Benson. Combining label propagation and simple models out-performs graph neural networks. arXiv preprint arXiv:2010.13993, 2020.
• [16] Junteng Jia and Austin R Benson. A unifying generative model for graph learning algorithms: Label propagation, graph convolutions, and combinations. arXiv preprint arXiv:2101.07730, 2021.
• [17] Da-Cheng Juan, Chun-Ta Lu, Zhen Li, Futang Peng, Aleksei Timofeev, Yi-Ting Chen, Yaxi Gao, Tom Duerig, Andrew Tomkins, and Sujith Ravi. Ultra fine-grained image semantic embedding. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 277–285, 2020.
• [18] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
• [19] Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized PageRank. In International Conference on Learning Representations, 2018.
• [20] Rasmus Kyng, Anup Rao, Sushant Sachdeva, and Daniel A Spielman. Algorithms for Lipschitz learning on graphs. In Conference on Learning Theory, pages 1190–1223, 2015.
• [21] Pan Li, Niao He, and Olgica Milenkovic. Quadratic decomposable submodular function minimization: Theory and practice. Journal of Machine Learning Research, 21(106):1–49, 2020.
• [22] Pan Li and Olgica Milenkovic. Inhomogeneous hypergraph clustering with applications. Advances in Neural Information Processing Systems, 2017:2309–2319, 2017.
• [23] Pan Li and Olgica Milenkovic. Submodular hypergraphs: p-Laplacians, Cheeger inequalities and spectral clustering. In International Conference on Machine Learning, pages 3014–3023. PMLR, 2018.
• [24] Meng Liu, Nate Veldt, Haoyu Song, Pan Li, and David F Gleich. Strongly local hypergraph diffusions for clustering and semi-supervised learning. arXiv preprint arXiv:2011.07752, 2020.
• [25] Anand Louis. Hypergraph Markov operators, eigenvalues and approximation algorithms. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 713–722, 2015.
• [26] Stéphane Mallat. A wavelet tour of signal processing. Elsevier, 1999.
• [27] Miller McPherson, Lynn Smith-Lovin, and James M Cook. Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1):415–444, 2001.
• [28] Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. When does label smoothing help? In Advances in Neural Information Processing Systems, pages 4696–4705, 2019.
• [29] Mark EJ Newman. Assortative mixing in networks. Physical Review Letters, 89(20):208701, 2002.
• [30] Ryan Rossi and Nesreen Ahmed. The network data repository with interactive graph analytics and visualization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015.
• [31] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93–93, 2008.
• [32] Balasubramaniam Srinivasan, Da Zheng, and George Karypis. Learning over families of sets – hypergraph representation learning for higher order tasks. arXiv preprint arXiv:2101.07773, 2021.
• [33] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
• [34] Leo Torres, Ann S. Blevins, Danielle S. Bassett, and Tina Eliassi-Rad. The why, how, and when of representations for complex systems. Technical report, arXiv:2006.02870v1, 2020.
• [35] Francesco Tudisco, Austin R Benson, and Konstantin Prokopchik. Nonlinear higher-order label spreading. In Proceedings of the Web Conference, 2021.
• [36] Francesco Tudisco and Matthias Hein. A nodal domain theorem and a higher-order Cheeger inequality for the graph p-Laplacian. Journal of Spectral Theory, 8(3):883–909, 2018.
• [37] Nate Veldt, Austin R Benson, and Jon Kleinberg. Minimizing localized ratio cut objectives in hypergraphs. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1708–1718, 2020.
• [38] Ezra Winston and J Zico Kolter. Monotone operator equilibrium networks. arXiv preprint arXiv:2006.08591, 2020.
• [39] Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, and Partha Talukdar. HyperGCN: A new method for training graph convolutional networks on hypergraphs. Advances in Neural Information Processing Systems, 32:1511–1522, 2019.
• [40] Chenzi Zhang, Shuguang Hu, Zhihao Gavin Tang, and T-H Hubert Chan. Re-revisiting learning on hypergraphs: confidence interval and subgradient method. In International Conference on Machine Learning, pages 4026–4034. PMLR, 2017.
• [41] Dengyong Zhou, Olivier Bousquet, Thomas N Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems, pages 321–328, 2004.
• [42] Dengyong Zhou, Jiayuan Huang, and Bernhard Schölkopf. Learning with hypergraphs: Clustering, classification, and embedding. In Advances in Neural Information Processing Systems, pages 1601–1608, 2007.
• [43] Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 912–919, 2003.
2021-06-21 09:40:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8197603821754456, "perplexity": 1478.5979163050636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00479.warc.gz"}
https://math.hws.edu/eking/CalculusII/math131.html
# MATH 131 - Spring 2020

Calculus II

Professor: Erika L.C. King
Email: eking@hws.edu
Office: Lansing 304
Phone: (315) 781-3355
Class: MWF 1:30-2:30pm in Coxe 7
Lab: Th 10:30am-Noon in Gulick 2000
Office Hours: M: 10:00-11:30am, T: 9:45-11:15am, W: 2:45-3:45pm, Th: 4:00-5:00pm, and by appointment
Math Intern Hours with Sam LeGro at the IC: Su: 4:00-6:00pm; and in Lansing 310: Su: 7:00-10:00pm, M-Th 2:00-5:00pm and 7:30-10:30pm

Course Syllabus for Section 1
Course Homework Guidelines

### WEEK 9: March 23 - March 27

During Spring Break I will be working to determine how best to organize our class for remote learning. I encourage you to read your emails closely, and to contact me with any questions, concerns or suggestions you have. If you have issues with internet accessibility, please let me know what those are. While this will not be ideal, we will work together to make the best learning experience we can. Feel free to start chatting on that chat group you formed at the end of class on March 13th. Make sure everyone in the group is receiving the information I am sending out and posting, just in case it is ending up in someone's spam box! Look for an email from me no later than Sunday, March 22nd detailing our strategy for proceeding. At this point, you should plan to be available for class at the usual time on Monday, March 23 (1:30pm EST), perhaps via Zoom, but again, I will determine that in the next week. I may contact you via email in the middle of spring break to ask questions about some options and will appreciate responses as soon as it is possible for you to give them. While it will be helpful for all of us to stay on track with material and assignments, we will need to be a bit more flexible with some deadlines. Be in contact with me about your needs as the semester progresses. For this next week, you do not need to work on material for this course (although it may be fun and a good distraction for you!); concentrate more on planning for how we will work for the rest of the semester and more importantly on your health (physical and emotional) and your family. Again, I encourage you to come to me with any questions, concerns or suggestions you have. Thank you for your patience with me as I learn new tools like Canvas and Zoom. Also thank you for being a great class; that gives me hope that we can still have a good rest of the semester in whatever form(s) it takes!

### WEEK 8: March 9 - March 13

Our second exam, covering Sections 6.1-6.7, will be on Thursday, March 12th in Gulick 2000, our normal lab space. Make yourself a review sheet for these sections and then compare it to the one I post early in week 8! Homework due Monday, March 9: • Review Friday's class lecture notes on work pumping fluids. Check that you agree with the final answer of the first example that we did and work on the second example. Write down any questions you have and bring them to class. (A short worked pumping setup appears just after this list.) • Review the groupwork that we were doing in class. Recall that we worked on Section 6.7: 25, 27, 28 and 29. Be sure that you have completed all of them and that they make sense. Check your final answer with the back of the text. • Work practice exercises on work pumping fluids on WeBWorK with the Section67Part2 assignment. This is due Monday at 1:00pm online. I encourage you to do MORE practice problems from the text!!! Remember that answers to odd problems are in the back! • Note that there is NO reading assignment due on Monday! Due to our exam on Thursday, there will be NO Main Exercises assignment due on Wednesday!
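Here is a short worked setup of a typical pumping problem, in the spirit of the ones from class (the tank and the numbers below are made up for illustration, not taken from our examples). Suppose a cylindrical tank of radius 2 m and height 5 m is full of water ($\rho g \approx 9800$ N/m$^3$) and we pump all the water over the top rim. Slice horizontally at height $y$: the slice has volume $\pi(2)^2\,dy$ and must be lifted a distance $5-y$, so the total work is $$W=\int_0^5 \rho g\,\pi(2)^2(5-y)\,dy=4\pi\rho g\Big[5y-\frac{y^2}{2}\Big]_0^5=50\pi\rho g\approx 1.5\times 10^6 \text{ joules}.$$ Notice the familiar pattern: partition, approximate each piece, then sum and take a limit to get the integral.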
On Wednesday, March 11th at 5:00pm in Napier 201, there will be a colloquium given by Catalina Garcia Tomas, Kaitlyn Geraghty, Connor Parrow, and Yifei Tao about their trip to the Nebraska Conference for Undergraduate Women in Mathematics. Refreshments will be served at 4:45pm. I hope you can make it! Check out this answer key for the Week 6 Lab! Make sure your answers were correct and that these ideas make sense to you! Look for what makes a solution complete. Homework due Wednesday, March 11: • Review the groupwork that we were doing in Monday's class. Recall that in groups we worked on Section 6.7: 37 and 42, as well as two problems from the handout. Be sure that you have completed all of them and that they make sense. • Review Monday's lecture notes on integration by parts. Come to class ready to focus on a couple of examples and ideas before reviewing for the exam. Remember the better focused we are at the beginning the more quickly we can move on to review! • Work practice exercises on surface area on WeBWorK with the Section6.7Part3 assignment. This is due Wednesday at 1:00pm online. Just two questions! • Read Section 8.3 (pages 532-536). Then complete the Reading Assignment for that section on this handout. How do we integrate functions that are products of powers of trigonometric functions? In this section you will find tools for how to solve such integrals! Don't be shy of the trigonometric functions! We find ways to use some old formula friends to rewrite these integrals to look like easier algebraic functions! Great puzzles! Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Wednesday at 1:30pm. Copies of the handout can be found outside my office. • Carefully read this Review Sheet. Work on the suggested problems on that sheet and/or use odd problems from any of the sections we have done to practice (note that I have posted suggested problems for most sections). Make sure you have worked through all the lab problems and reviewed the answer keys to check your work and solutions. Remember that you can go back to redo WeBWorK problems too! Bring questions to class on Wednesday! • Note due to our exam on Thursday, there will be NO Main Exercises assignment due on Wednesday! Check out this answer key for the Week 7 Lab! Make sure your answers were correct and that these ideas make sense to you! Look for what makes a solution complete. Homework due Thursday, March 12: • Finish preparing for Exam 2! Have confidence in your abilities!!! • Arrive on time to lab in Gulick 2000. Spread out to take advantage of the space to think! Homework due Friday, March 13: • Reread Sections 8.1 and 8.2 (pages 520-529). See if the examples make more sense after the examples we did on Wednesday. Do the examples in Section 8.2 follow "Hey u! Look I Ate The Egg"? • Work practice exercises on integration by parts on WeBWorK with the Section82Part1 assignment. This is due Friday at 1:00pm online. There are only two problems here. They ask you to choose $u$ and $dv$ first, so you can be sure whether or not you have chosen correctly. Remember that experimenting is good! • Then try two more integration by parts practice exercises on WeBWorK with the Section82Part2 assignment. This is due Friday at 1:00pm online. Keep experimenting! 
• Note that there is NO reading assignment due on Friday! ### WEEK 7: March 2 - March 6 Homework due Monday, March 2: • My office hours are rather lonely! It would be great to see more of you there! • Review Friday's class lecture notes on arc length. Write down any questions you have and bring them to class or office hours! • Review the groupwork that we were doing in class. Recall that we worked on 16, 17, 41, 43 and 51 from Section 6.4. We used the shell method for all of these (and also disk for 51). Be sure that you have completed all of them and that they make sense. Remember you can check your answers to the odd problems in the text. • Want more practice with volume questions? Work on these questions from Section 6.4 and check your answers to most of these in the back of the text: 21, 23, 37, 39, 42, 45 and 47 (pages 442-443). • As we work through the applications in this chapter, remember to think about whether your final answer makes sense. Answers to volume problems (and area problems AND arc length problems) should always be positive! Thus if you obtain a negative answer, you know there is something incorrect. If you are working a problem on an exam and figure this out but do not have time to go back and fix it, be sure to note that you know there is an issue! • Work practice exercises on Section 6.4 and 6.5 on WeBWorK with the Section64Part2and65 assignment. The first three problems are practice with volumes (think carefully about whether you want to use shell or disk method for each), the fourth is a matching problem to practice visualizing setting up volumes (note that you only get TWO attempts on this problem!), and the last question is on arc length. This is due Monday at 1:00pm online. • Work more practice exercises on arc length on WeBWorK with the Section65Part2 assignment. Note that the first two questions only ask you to set up the integral, but not evaluate the integral. This is due Monday at 1:00pm online. • Read Section 6.7 (pages 465-473). Then complete the Reading Assignment for that section on this handout. In this section we investigate some physics related applications. Mainly we will investigate how we determine work when the force applied is variable. Guess what? We still start the derivation of the formulas for these applications by partitioning our interval into subintervals and by looking at smaller pieces that we sum together to estimate the whole! Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Monday at 1:30pm. Copies of the handout can be found outside my office. • We still have a week and a half before our next exam, but don't wait to ask questions if you have them! Please come to my office hours, visit the Math Intern, and send me emails whenever you find something you are unsure about. Also make sure you are comfortable graphing functions. Here is a list of functions we have been working with that you should know how to graph. Homework due Wednesday, March 4: • Review the groupwork that we were doing in class as well as Monday's class lecture notes on surface area. Recall that in groups we worked on Section 6.5: 10, 13, 14 and 15. Be sure that you have completed all of them and that they make sense. • Want more practice with arc length questions? 
Work on these questions from Section 6.5 and check your answers for most in the back of the text: 11, 16, 17, 31, 33 and 35 (page 456). • Work practice exercises on surface area on WeBWorK with the Section66 assignment. This is due Wednesday at 1:00pm online. • Complete the Main Exercises Assignment for Week 7 on this handout. Be sure to write your solution on the handout. Copies of the handout can be found outside my office. This is due Wednesday at 1:30pm. • Note that there is NO reading assignment due on Wednesday! (Whew! We had many days of one day per section!) • Remember to review ALL your work on Exam 1! Be sure to read my comments and rework any problems for which you did not receive full credit - even if you only missed one point! I will not be collecting exam rewrites, but you should do it and come see me for any questions you have. ALSO, take note of the things that you did do WELL!!! You ALL have things you did well in your exams! Homework due Friday, March 6: • Review the groupwork on surface area that we were doing in class as well as Wednesday's class lecture notes on spring compression and stretching. Recall that in group work we worked on Section 6.6: 17, 18, 31 and 33. Be sure that you have completed all of them and that they make sense. Check your final answer with the back of the text. • Work through the second example about stretching a spring on the handout. We will discuss this in lab on Thursday. • Want more practice with surface area questions? Work on these questions from Section 6.6 and check your answers in the back of the text: 7, 9, 21 and 23 (page 463). • Work practice exercises on work problems on WeBWorK with the Section67Part1 assignment. The first question is about finding work when you are given a force function. The other four are working with springs. Be sure to read the questions carefully! They are not all asking for the same thing and do not all start by giving you the same information! This is due Friday at 1:00pm online. • Note that Chapter 7 is a review, particularly the material in Section 7.1. I encourage you to read it if you feel rusty on logarithmic and exponential functions. • Read Sections 8.1 and 8.2 (pages 520-529). Then complete the Reading Assignment for those sections on this handout. Now we plunge into Chapter 8 to learn a new set of integration techniques to add to our tool box! First, in Section 8.1, think about ways we can use old tools (like completing the square and long division!) to help us rewrite integrands into something we know how to deal with. Then in Section 8.2 learn a new technique that is especially helpful in expanding the kinds of functions for which we can calculate volumes and find other values for different kinds of applications. Remember that each method is undoing something we did with differentiation. There is not a one-to-one correspondence between differentiation and integration techniques, but you should look for connections between the processes. Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Friday at 1:30pm. Copies of the handout can be found outside my office. • Our next exam is next week! Don't wait until next week to ask questions if you have them! 
Please come to my office hours, visit Sam, the Math Intern, and send me emails whenever you find something you are unsure about, especially if we have already discussed it in class. ### WEEK 6: February 24 - February 28 Homework due Monday, February 24: • Remember to bring in your picture if you forgot to bring it to your appointment! Last chance for full credit on that first assignment on Wednesday! • Review Friday's class lecture notes on the General Slicing Method and Disk Method. Make sure the examples make sense, and finish integrating the integrals we set up and check your final answers. Bring any questions you have with you to class on Monday. • Work practice exercises on Section 6.3 on WeBWorK with the Section63Part1 assignment. The first two problems are finding volumes using the General Slicing Method, and the last one is working with areas between curves. This is due Monday at 1:00pm online. • Then work two more practice exercises on WeBWorK with the Section63Part1b assignment. This is due Monday at 1:00pm online. • Read Section 6.4 (pages 439-447). Then complete the Reading Assignment for that section on this handout. In this section we will look at finding the volumes of objects not by using disks (flat cylinders) but by using tall cylinders (with prominent holes!). Sometimes, especially if our solid has a hole, using the Shell Method will be easier than using the Disk Method. Again, we still approach deriving a formula for the Shell Method with the same initial steps! You should almost be able to say them in your sleep! Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Monday at 1:30pm. Copies of the handout can be found outside my office. Homework due Wednesday, February 26: • Remember to bring in your picture if you forgot to bring it to your appointment! Last chance for full credit on that first assignment is today! • These exercises were supposed to be due on Monday. If you didn't do them yet, do them now: practice exercises on WeBWorK with the Section63Part1b assignment. This is due Wednesday at 1:00pm online. • Review Monday's class lecture examples and notes about the washer method and rotating around other axes besides the x and y-axes. • Review the groupwork that we were doing in class on Monday. Recall that we worked on the questions as shown on the Disk Method worksheet. Be sure that you have completed all of them and that they make sense. Bring questions to class or office hours. • Want more practice with volume questions? Work on these questions from Section 6.3 and check your answers in the back of the text: 11, 13, 17, 25, 37, 53, 57, 61 and 65 (pages 435-438). • Complete the Main Exercises Assignment for Week 6 on this handout. Be sure to write your solution on the handout. Copies of the handout can be found outside my office. This is due Wednesday at 1:30pm. • Read Section 6.5 (pages 451-455). Then complete the Reading Assignment for that section on this handout. How do you determine the distance between two points? What if you could not go "as the crow flies", but rather your path between the points was curved? In other words, how can we find the length of a curve, or the arc length of a segment of a function? Again, we still approach deriving a formula for Arc Length with essentially the same initial steps! 
Now your roommates should almost be able to say them in their sleep! Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Wednesday at 1:30pm. Copies of the handout can be found outside my office. • Work practice exercises on Section 6.3 on WeBWorK with the Section63Part2 assignment. This is due Wednesday at 1:00pm online. On Thursday, February 27th at 4:30pm in Napier 201, there will be a colloquium given by William Smith alum Avery Wickersham '19. She will be telling us about her experience studying abroad in the Budapest Semesters in Mathematics Education program. Refreshments will be served at 4:15pm. I hope you can make it! Because of the colloquium, my office hours on Thursday will be as soon as I get back to my office from my other class (likely before the usual 3:45pm start time) until about 4:20pm. If you cannot make that time and have questions, please be sure to contact me to make alternate arrangements. Homework due Friday, February 28: • Redo any problems you missed on the exam and come ask questions if anything is unclear. We will go over some of these in lab, but I encourage you to come to office hours! • Review your notes from Wednesday's class. In particular, make sure you see the difference between setting up an example in terms of the disk method and setting up in terms of the shell method. Make sure you agree with my results (actually do the integration and check the final answer I posted on the board) and that you understand the process for the problems. Let me know as soon as possible if you still have questions, either in class, in lab, in office hours, or via email. Do not wait to clear things up! • Review the groupwork that we were doing at the end of class on Wednesday. Recall that we worked on setting up the integrals for finding the volumes of the solids of revolution formed by rotating the region bounded by the curves y=ln(x), x=1 and y=3 about (a) the x-axis, (b) the y-axis, (c) y=4 and (d) x=-3. Try setting up the integrals for finding the volumes of these using the disk method. Can you integrate these either with shell or disk methods? • Work practice exercises on Section 6.4 on WeBWorK with the Section64Part1 assignment. The first two ask you to use the shell method, the third to use the disk method, and the last two you are able to choose. Sometimes the trickiest part is just setting up the diagram. Remember that disk/washer problems can be calculated by finding the volume of the larger solid, and then subtracting (in a separate integral) the volume of its hole. This is due Friday at 1:00pm online. ### WEEK 5: February 17 - February 21 Remember our first exam, covering sections 5.1-5.5, will be during lab on Thursday, February 20th in Gulick 2000, our normal lab space. Make yourself a review sheet for these sections and then compare it to the one I post early in week 5! Homework due Monday, February 17: • Review Friday's class lecture notes on position, displacement and distance traveled. Evaluate whether or not the material all makes sense and let me know if you still have questions. Review the group work we did as well. Recall that we worked on Section 6.1: 14, 15, 25, 32 and 45 as well as an extra u-substitution question. Be sure that you have completed all of them and that they make sense.
Check the answers to odd questions in the back of the text. • Work practice exercises on Section 6.1 and Review on WeBWorK with the Section61andReview assignment. Questions 1-5 are good review questions about when we need to break up the interval of integration, when we need to break up the integrand, and about u-substitution. This is due Monday at 1:00pm online. • Extra u-substitution Review!!! Work practice exercises on u-substitution on WeBWorK with the Section55Part2 assignment. This is due Wednesday at 1:00pm online. This assignment has a few more problems than usual, but I am hoping that there are some that are relatively straightforward and quick for you. This is good practice with u-substitution! Note that this is not due until WEDNESDAY, but you should at least start it now for extra practice for the exam! • Note that there is NO Reading Assignment due Monday. Due to our exam on Thursday, there will be NO Main Exercises assignment due on Wednesday! Remember that the Section61andReview assignment that was due on Monday has an extension until 10:00am on Tuesday! Make sure you have it finished. Please double-check that you have your textbook. There was a textbook left in our classroom after class on Friday and I am pretty sure it belongs to one of you! Homework due Wednesday, February 19: • Review Monday's class lecture notes. Integrate the integrals we set up in the area example and see if you get the correct answer. Make sure the set up both with respect to x and with respect to y makes sense. The set up for the integrals is going to be the most challenging part of the questions in chapter six. • Solve just two practice exercises on Section 6.2 on WeBWorK with the Section62Part1 assignment. This is due Wednesday at 1:00pm online. • Read Section 6.3 (pages 425-434). Then complete the Reading Assignment for that section on this handout. We have worked on finding the area under a curve and the area between two curves, but what if we are interested in three dimensions? How can we find the volume of an object that isn't just a cube, a sphere, a cylinder or another common solid for which we already have a straightforward formula? Amazingly, our process for building a formula is very similar to the process we have used for finding other kinds of formulas in previous sections. Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Wednesday at 1:30pm. Copies of the handout can be found outside my office. • Note due to our exam on Thursday, there will be NO Main Exercises assignment due on Wednesday! • Carefully read this Review Sheet. Work on the suggested problems on that sheet and/or use any odd problem from any of the sections we have done to practice. Make sure that you have completed all the labs AND that you have checked your solutions with the lab keys! Remember that you can go back to redo WeBWorK problems too! Bring questions to class on Wednesday! Part of our class will be review. Check out this answer key for the Week 3 Lab! Make sure your solutions were correct and that these ideas make sense to you! Look for what makes a solution complete. Check out this answer key for the Week 4 Lab! Make sure your solutions were correct and that these ideas make sense to you! Look for what makes a solution complete.
Homework due Thursday, February 20: • Finish preparing for Exam 1! (You have actually been preparing for this since week 1!) Have confidence in your abilities!!! • Arrive on time to lab in Gulick 2000. Spread out to take advantage of the space to think! Homework due Friday, February 21: • Review the group work problems on the handout that we worked on in Wednesday's class. Want extra practice? Try odd problems from Section 6.2 such as any of 9-63 odd. You can check the back of the book for answers! • Work practice exercises on Section 6.2 on WeBWorK with the Section62Part2 assignment. Practice graphing curves by hand! You should be able to graph all of these without a graphing calculator! This is due Friday at 1:00pm online. • Work two more practice exercises on Section 6.2 on WeBWorK with the Section62Part3 assignment. Practice graphing curves by hand! You should be able to graph all of these without a graphing calculator! This is due Friday at 1:00pm online. • Note that there is NO reading assignment due on Friday! ### WEEK 4: February 10 - February 14 Homework due Monday, February 10: • Another great class on Friday! I loved that you were focused and participating even as I accidentally went overtime! In the future, please feel free to clue me in to the time. ;-) • Review the groupwork that we were doing in class as well as Friday's class lecture notes on even and odd functions, and the introduction to the average value of a function. (A short worked average-value example appears just after this list.) Recall that we worked on Section 5.3: 40, 41, 42, 45, 47, 48, 61 and 107. Make sure you feel confident with these questions. For Exercise 107, be sure that you can provide a counterexample or a thorough explanation. Evaluate whether or not the material all makes sense and let me know as soon as possible if you still have questions. • Other questions in Section 5.3 that you could do for practice include: 49, 51, 55, 57, 59 and 62. Most of these are odd so you can check your answers in the back of your book! Other questions in your text would be great to do as well, this is just a starting place! • Work practice exercises on Section 5.4 and Review on WeBWorK with the Section5.4A assignment. Problem 1 contains questions on the new material in Section 5.4. The other problems require you to apply material in sections previous to 5.4. This is due Monday at 1:00pm online. • Read Section 6.1 (pages 403-410). Then complete the Reading Assignment for that section on this handout. Although we have a lot more to do to understand how to integrate most functions, we can still do a lot with the techniques we have so far. So in Chapter Six we take a break from learning techniques and look at applications of integration. In this first section we look at why we would want to integrate the absolute value of a function, as well as show how we can use integration to find out cell population or production costs (among a myriad of other things) at a future time. Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Monday at 1:30pm. Copies of the handout can be found outside my office. • Get started on your Main Exercises assignment posted below!
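Here is a quick worked average-value example (mine, not one from class): the average value of $f$ on $[a,b]$ is $\frac{1}{b-a}\int_a^b f(x)\,dx$, so for $f(x)=x^2$ on $[0,3]$ we get $$\frac{1}{3-0}\int_0^3 x^2\,dx=\frac{1}{3}\cdot\frac{x^3}{3}\Big|_0^3=\frac{1}{3}\cdot 9=3.$$ A good sanity check: the average, 3, lies between the minimum value 0 and the maximum value 9 of $f$ on the interval.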
Homework due Wednesday, February 12: • Make sure you bring your reading assignment for Section 5.5, which you received graded back on Monday, to class on Wednesday; I will be calling on some of you to share your answers. • Review the groupwork that we were doing in class as well as Monday's class lecture notes. Recall that we worked on Section 5.4: 16, 17, 23, 24, 32, 33 and 38. Be sure that you have completed all of them and that they make sense. Review how we derived the formula for average value of a function and see if you really believe the formula makes sense! If not, ask questions! • Other questions in Section 5.4 that you could do for practice include: 20, 21, 27, 37, 45, 49 and 51. Most of these are odd so you can check your answers in the back of your book! Other questions in your text would be great to do as well, this is just a starting place! • Work practice exercises on Section 5.4 and Review on WeBWorK with the Section5.4B assignment. This is due Wednesday at 1:00pm online. • Complete the Main Exercises Assignment for Week 4 on this handout. Be sure to write your solution on the handout. Copies of the handout can be found outside my office. This is due Wednesday at 1:30pm. • Note that there is NO Reading Assignment due Wednesday. Homework due Friday, February 14 (Happy Valentine's Day!): • Review your class notes on u-substitution and the groupwork problems on the handout that we worked on in Wednesday's class and Thursday's lab. Be sure that you have worked through each one and that each makes sense. Ask questions if they do not. • Want extra practice with u-substitution? Exercises in Section 5.5 that you could do for practice include: 45-73 odd. These are odd so you can check your answers in the back of your book! • Work practice exercises on Section 5.4 AND Section 5.5 on WeBWorK with the Sections54and55 assignment. Some of these questions are just about u-substitution. Other questions combine looking at the average value of a function with u-substitution. This is due Friday at 1:00pm online. • Read Section 6.2 (pages 416-420). Then complete the Reading Assignment for that section on this handout. What if we are interested in finding the area of a region that is not bound in part by the x-axis? What if the region is bound between two curves? In this section we learn how to find the area of regions that are bound in different ways. We will have to consider integrating with respect to y instead of x too! However, keep in mind that all the ideas in this section build off of what we started at the beginning of Chapter 5. Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Friday at 1:30pm. Copies of the handout can be found outside my office. • Check out this answer key for the Week 2 Lab! Make sure your solutions were correct and that these ideas make sense to you! Look for what makes a solution complete. ### WEEK 3: February 3 - February 7 Due to needing to visit another professor's class, I need to adjust my Monday office hours. They will be 9:30am-10:45am this Monday, February 3rd. If you have a question and cannot make this time, please email me. Homework due Monday, February 3: • Review the groupwork that we were doing in class as well as Friday's class lecture notes. 
Recall that we worked on Section 5.2: 41, 43, 46, 61, 62 and 63. Make sure you feel confident with these questions. Evaluate whether or not the material all makes sense and let me know as soon as possible if you still have questions. Also remember that you can go back and redo WeBWorK problems that have already been submitted. It will not change your grade, but you can check your answers. If you check the box "Show correct answers" when you are redoing past assignments, it will not only check your answer, but also show you what the correct answer is supposed to be. Cool! • Other questions in Section 5.2 that you could do for practice include: 29, 37, 44, 45, and 49. Most of these are odd so you can check your answers in the back of your book! • Work practice exercises on Section 5.2 on WeBWorK with TWO SHORT assignments: Section5.2PartC and Section5.2PartD. As usual, ask me if you have questions. This is due Monday at Noon online. • Note that there is NO Reading Assignment due Monday. Homework due Wednesday, February 5: • Review the groupwork that we were doing in class as well as Monday's class lecture notes. Recall that we worked on Section 5.2: 53, 55, 83 and 87, as well as a question from WeBWorK. Take some time to practice completing the square to make sure you remember how to do that. Be sure that you have completed all of them and that they make sense. • Complete the Main Exercises Assignment for Week 3 on this handout. Be sure to write your solution on the handout. Copies of the handout can be found outside my office. This is due Wednesday at 1:30pm. • Work practice exercises on Section 5.2 and Section 5.3 on WeBWorK with the Section53Part1 assignment. The first three problems are about the properties of integrals that we worked on in class on Friday and in group work on Monday. The second three are problems about Part 1 of the Fundamental Theorem of Calculus. Here you only have to enter the final answers, but be sure you know how to write out the process as well. This is due Wednesday at 1:00pm online. (Note: I am allowing you until 1pm for this. People must be on time for class for us to continue this due time.) • Read Section 5.4 (pages 381-385). Then complete the Reading Assignment for that section on this handout. In this section we will learn that if a function has the property that it is even or odd we can use it to our advantage when evaluating definite integrals. We will also learn how to determine the average value of a function and what that means. Lastly, we will look back at some old friends - the Mean Value Theorem and the Extreme Value Theorem - and see what they can tell us about integrals. Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Wednesday at 1:30pm. Copies of the Reading Assignment can be found in a box outside my office with the Main Exercises. Homework due Friday, February 7: • Review the groupwork exercises and notes from Wednesday's class and Thursday's lab. Recall that we worked on Section 5.3: 17, 76, 77, 80, 81 and 86. We also looked at a problem like 86 where both the upper and lower limits of integration were functions. Make sure you feel confident with these questions. Do more practice problems if necessary until you are confident with all the material!
Please let me know if you have questions on them. Note that the problems done in class do not necessarily cover all possible types of questions! Other questions in Section 5.3 that you could do for practice include: 73, 75, 79, 80 and 83. Check your answers in the back of your book for the odd ones! • In lab on Thursday we will prove the Fundamental Theorem of Calculus Part 2. Review the proof. Isn't it cool! Let me know if you have questions. Make sure you understand how the Mean Value Theorem was valuable to the proof. • Work practice exercises on Part 1 of the Fundamental Theorem of Calculus on WeBWorK with the Section53Part1b assignment. This is due Friday at 1:00pm online. There are only three problems in this set! • (You will probably want to wait until after lab to do these problems!) Also work practice exercises on Part 2 of the Fundamental Theorem of Calculus on WeBWorK with the Section53Part2 assignment. These are mostly working problems applying what we proved in lab on Thursday, the Fundamental Theorem of Calculus Part 2! Recall that the theorem said that to evaluate a definite integral we could just find an antiderivative of the integrand and then plug in the limits of integration. Wow! This is due Friday at 1:00pm online. • Read Section 5.5 (pages 388-395). Then complete the Reading Assignment for that section on this handout. Now that we have the Fundamental Theorem of Calculus in our tool box and know that we can use antidifferentiation to solve definite integrals, it is even more important for us to expand our ability to find antiderivatives. Up to this point, our ability to find antiderivatives has been limited to a small number of functions that fit into certain formulas. What happens if we have a composition of functions? This section teaches a strategy for integrating some functions that are really compositions of functions. Note that this will NOT help us with all compositions, but it will help us with many and we will use this technique A LOT. Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Friday at 1:30pm. • Check out this answer key for the Week 1 Lab! Make sure your solutions were correct and that these ideas make sense to you! Look for what makes a solution complete. ### WEEK 2: January 27 - January 31 Homework due Monday, January 27: • Finish working through the example that we were doing at the end of class. Bring any questions on it to class on Monday! • Finish working through the "Reviewing Calculus I" handout and the Week 1 Lab. Answer keys will be posted for you to check your work by Wednesday. I will not collect these but you are responsible for knowing all the material covered on it (hopefully just from previous experience in Calculus I!). • Work practice exercises on Section 5.1 and review on WeBWorK with the Section51Part1 assignment. The first four problems are about Section 5.1 and the last three are review exercises (i.e. Calculus I material). Be sure to read carefully. There are two problems for which you only have ONE attempt. Some problems have hints. Especially take heed for those for which you only have one attempt! This is due Monday at Noon. Homework due Wednesday, January 29: • Remember to bring in your picture if you forgot to bring it to your appointment! 
• Review the groupwork that we were doing in class as well as Monday's class lecture notes. Recall that we worked on Section 5.1: 27, 39 and 42. Be sure that you have completed all of them and that they make sense.
• Work practice exercises on sigma notation and some Riemann Sums from Section 5.1 on WeBWorK with the Section51Part2 assignment. Remember that it is recommended that you print out a hard copy of this before trying to submit the problems online. This is due Wednesday at Noon online.
• Complete the Main Exercises Assignment for Week 2 on this handout. Be sure to write your solution on the handout. Copies of the handout can be found outside my office under Monday's Reading Assignment. This is due Wednesday at 1:30pm.
• Note that there is NO Reading Assignment due Wednesday.

Homework due Friday, January 31:
• Remember to bring in your picture if you forgot to bring it to your appointment!
• Review the groupwork exercises and notes from Wednesday's class. Recall that we worked on Section 5.1: 48 (c) and (d), 49 (g), 50 (e) and 55. We also looked at two sums with indices starting at something other than 1. Make sure you feel confident with these questions. Do more practice problems if necessary until you are confident with all the material! Please let me know if you have questions on them. Note that the problems done in class do not necessarily cover all possible types of questions! Other questions in Section 5.1 that you could do for practice include: 59, 63, 65, 73 and 75. These are all odd so you can check your answers in the back of your book!
• Work practice exercises on Section 5.2 on WeBWorK with TWO SHORT assignments: Section5.2PartA and Section5.2PartB. This is due Friday at Noon online. Several of these problems are asking you to practice your Riemann sums and definition of the definite integral, either by setting them up or by thinking about what function would give such a sum. Beware: One of the problems has ONLY ONE attempt allowed and another has ONLY FIVE!
• Read Section 5.3 (pages 367-377). Then complete the Reading Assignment for that section on this handout. In this section we will learn about the area function. This builds up to the moment we all have been waiting for since the beginning of Calculus I: The Fundamental Theorem of Calculus!!! Be sure to work through the ideas of Example 2 very carefully. That example gives a good basis for why the Fundamental Theorem might make sense. Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Wednesday at the beginning of class. Copies of the Reading Assignment can be found in a box outside my office.
• Ready to check your work on the Reviewing Calculus worksheet from the first few days of class? Take a look at this answer key.

### WEEK 1: January 20 - January 24

Welcome to Calculus II!!!

Homework due Thursday, January 23:
Although I will not normally assign homework to be due on lab days, this week we will need to in order to get started and refresh our memories. Please complete the following:
• Be sure to bring the "Reviewing Calculus I" handout you received at the end of the first day of class with you on Thursday and Friday.
• Orient yourself to WeBWorK, where you will be completing assignments roughly three times a week.
Read these pages for instructions about syntax and an introduction to how the system works: WeBWorK Instructions and FAQs, WeBWorK Syntax and List of Functions.
• Fill out this autobiographical questionnaire. Print it two sided if possible. If not, staple it before submitting. Be sure to leave the top portion on the first side (above where you place your name) blank. This is due Thursday at 10:30am in lab.
• In class we discussed what a hypothesis is. Check out this website for a summary. Still have questions? Be sure to ask in office hours or in class!
• Read Section 5.1 (pages 338-346). Then complete the Reading Assignment for that section on this handout (this is the same as what I handed out in class). The geometric idea behind integration is area. The area beneath a constant function is just the area of a rectangle. But what if we want the area under a function that is not constant? What if the function is a curve? How do we find the area under that? In this section we will look at estimating the area under a curve using geometric shapes whose areas are easy to find. Be sure to complete the Quick Check Questions while you are reading and check your answers at the end of the section. (You need not write down the Quick Check Questions, but they are helpful to do. Answers to the Quick Check questions are at the end of the exercise set for each section.) This is due Thursday at 10:30am in lab.
• Practice using WeBWorK with the WeBWorKIntro assignment which can be accessed on the WeBWorK Home Page for Our Class. Details about logging into WeBWorK are in the WeBWorK Instructions and FAQs website as well as my greeting email. This is due Thursday at 4:30pm.

Homework due Friday, January 24:
• Read the syllabus and the salmon Homework Guidelines. We went through some of this in class, but you should read all the details and make sure you don't have any questions about either document. Also be sure to record the exam dates in your personal calendar/planner. Remember there are no make-ups.
• Familiarize yourself with this website. Note that there is a link at the top of the page to our syllabus and to the homework guidelines, should you lose the ones I handed out in class. The syllabus has a lot of vital information on it and you will likely want to refer back to it regularly. Also at the top of the page is a link to my grade scale. This will let you know what percentage you need to earn in order to obtain specific grades. In addition, there are links to our WeBWorK homepage and the websites with instructions and tips for WeBWorK.
• Review Calculus I material on WeBWorK with the Review assignment. This is due Friday at Noon.
• Be sure to bring the "Reviewing Calculus I" handout you received at the end of the first day of class with you on Friday. We will spend the first half of class working on review problems in groups.
• Note that there is NO Reading Assignment due Friday.

Hobart and William Smith Colleges: Department of Mathematics and Computer Science
Erika L.C. King
https://mathhelpboards.com/threads/equation-size-in-an-array.126/
# MathJaxEquation size in an array #### dwsmith ##### Well-known member I had to use an array to center and left align two different equations--a system of DEs. However, by using array, the equations are smaller than I would like. How can I increase the the equation size in the array. $$\displaystyle\begin{array}{lcl} \frac{dN_1}{dt} & = & r_1N_1\left(1 - \frac{N_1}{K_1} - b_{12}\frac{N_2}{K_1}\right)\\ \frac{dN_2}{dt} & = & r_2N_2\left(1 - b_{21}\frac{N_1}{K_2}\right) \end{array}$$ On MHB, the equations are a nice size in the array but on a LaTex document they aren't. I would like my document equations to be the size of the MHB equations in the array. #### masters ##### Active member I had to use an array to center and left align two different equations--a system of DEs. However, by using array, the equations are smaller than I would like. How can I increase the the equation size in the array. $$\displaystyle\begin{array}{lcl} \frac{dN_1}{dt} & = & r_1N_1\left(1 - \frac{N_1}{K_1} - b_{12}\frac{N_2}{K_1}\right)\\ \frac{dN_2}{dt} & = & r_2N_2\left(1 - b_{21}\frac{N_1}{K_2}\right) \end{array}$$ On MHB, the equations are a nice size in the array but on a LaTex document they aren't. I would like my document equations to be the size of the MHB equations in the array. Maybe put \huge in front of it.. $$\huge \displaystyle\begin{array}{lcl} \frac{dN_1}{dt} & = & r_1N_1\left(1 - \frac{N_1}{K_1} - b_{12}\frac{N_2}{K_1}\right)\\ \frac{dN_2}{dt} & = & r_2N_2\left(1 - b_{21}\frac{N_1}{K_2}\right) \end{array}$$ #### dwsmith ##### Well-known member Huge is bit too big. Are there any other qualifiers? I found \large which worked nicely. #### Ackbach ##### Indicium Physicus Staff member I had to use an array to center and left align two different equations--a system of DEs. However, by using array, the equations are smaller than I would like. How can I increase the the equation size in the array. $$\displaystyle\begin{array}{lcl} \frac{dN_1}{dt} & = & r_1N_1\left(1 - \frac{N_1}{K_1} - b_{12}\frac{N_2}{K_1}\right)\\ \frac{dN_2}{dt} & = & r_2N_2\left(1 - b_{21}\frac{N_1}{K_2}\right) \end{array}$$
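A side note on the underlying cause, for the LaTeX-document half of the question: the array environment sets each cell in text style, which is why fractions come out small, while the amsmath alignment environments keep display style. So instead of forcing a font size with \large, align* gives full-size fractions with the = signs lined up. A minimal sketch of a standalone document illustrating this alternative:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% align* keeps every line in display style, so the fractions
% stay full size and no font-size commands are needed
\begin{align*}
\frac{dN_1}{dt} &= r_1N_1\left(1 - \frac{N_1}{K_1} - b_{12}\frac{N_2}{K_1}\right)\\
\frac{dN_2}{dt} &= r_2N_2\left(1 - b_{21}\frac{N_1}{K_2}\right)
\end{align*}
\end{document}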
https://doc.sagemath.org/html/en/reference/combinat/sage/combinat/crystals/fully_commutative_stable_grothendieck.html
# Fully commutative stable Grothendieck crystal

AUTHORS:
• Jianping Pan (2020-08-31): initial version
• Wencin Poh (2020-08-31): initial version
• Anne Schilling (2020-08-31): initial version

class sage.combinat.crystals.fully_commutative_stable_grothendieck.DecreasingHeckeFactorization(parent, t)

Class of decreasing factorizations in the 0-Hecke monoid.

INPUT:
• t – decreasing factorization inputted as list of lists
• max_value – maximal value of entries

EXAMPLES:

sage: from sage.combinat.crystals.fully_commutative_stable_grothendieck import DecreasingHeckeFactorization
sage: t = [[3, 2], [], [2, 1]]
sage: h = DecreasingHeckeFactorization(t, 3); h
(3, 2)()(2, 1)
sage: h.excess
1
sage: h.factors
3
sage: h.max_value
3
sage: h.value
((3, 2), (), (2, 1))

sage: u = [[3, 2, 1], [3], [2, 1]]
sage: h = DecreasingHeckeFactorization(u); h
(3, 2, 1)(3)(2, 1)
sage: h.weight()
(2, 1, 3)
sage: h.parent()
Decreasing Hecke factorizations with 3 factors associated to [2, 1, 3, 2, 1] with excess 1

to_increasing_hecke_biword()

Return the associated increasing Hecke biword of self.

EXAMPLES:

sage: from sage.combinat.crystals.fully_commutative_stable_grothendieck import DecreasingHeckeFactorization
sage: t = [[2], [], [2, 1], [4, 3, 1]]
sage: h = DecreasingHeckeFactorization(t, 4)
sage: h.to_increasing_hecke_biword()
[[1, 1, 1, 2, 2, 4], [1, 3, 4, 1, 2, 2]]

to_word()

Return the word associated to self in the 0-Hecke monoid.

EXAMPLES:

sage: from sage.combinat.crystals.fully_commutative_stable_grothendieck import DecreasingHeckeFactorization
sage: t = [[2], [], [2, 1], [4, 3, 1]]
sage: h = DecreasingHeckeFactorization(t)
sage: h.to_word()
[2, 2, 1, 4, 3, 1]

weight()

Return the weight of self.

EXAMPLES:

sage: from sage.combinat.crystals.fully_commutative_stable_grothendieck import DecreasingHeckeFactorization
sage: t = [[2], [2, 1], [], [4, 3, 1]]
sage: h = DecreasingHeckeFactorization(t, 6)
sage: h.weight()
(3, 0, 2, 1)

class sage.combinat.crystals.fully_commutative_stable_grothendieck.DecreasingHeckeFactorizations(w, factors, excess)

Set of decreasing factorizations in the 0-Hecke monoid.

INPUT:
• w – an element in the symmetric group
• factors – the number of factors in the factorization
• excess – the total number of letters in the factorization minus the length of a reduced word for w

EXAMPLES:

sage: from sage.combinat.crystals.fully_commutative_stable_grothendieck import DecreasingHeckeFactorizations
sage: S = SymmetricGroup(3+1)
sage: w = S.from_reduced_word([1, 3, 2, 1])
sage: F = DecreasingHeckeFactorizations(w, 3, 3); F
Decreasing Hecke factorizations with 3 factors associated to [1, 3, 2, 1] with excess 3
sage: F.list()
[(3, 1)(3, 1)(3, 2, 1), (3, 1)(3, 2, 1)(2, 1), (3, 2, 1)(2, 1)(2, 1)]

Element

list()

Return list of all elements of self.

EXAMPLES:

sage: from sage.combinat.crystals.fully_commutative_stable_grothendieck import DecreasingHeckeFactorizations
sage: S = SymmetricGroup(3+1)
sage: w = S.from_reduced_word([1, 3, 2, 1])
sage: F = DecreasingHeckeFactorizations(w, 3, 3)
sage: F.list()
[(3, 1)(3, 1)(3, 2, 1), (3, 1)(3, 2, 1)(2, 1), (3, 2, 1)(2, 1)(2, 1)]

class sage.combinat.crystals.fully_commutative_stable_grothendieck.FullyCommutativeStableGrothendieckCrystal(w, factors, excess)

The crystal on fully commutative decreasing factorizations in the 0-Hecke monoid, as introduced by [MPPS2020].
INPUT:
• w – an element in the symmetric group or a (skew) shape
• factors – the number of factors in the factorization
• excess – the total number of letters in the factorization minus the length of a reduced word for w
• shape – (default: False) indicator for input w, True if w is entered as a (skew) shape and False otherwise.

EXAMPLES:

sage: S = SymmetricGroup(3+1)
sage: w = S.from_reduced_word([1, 3, 2])
sage: B = crystals.FullyCommutativeStableGrothendieck(w, 3, 2); B
Fully commutative stable Grothendieck crystal of type A_2 associated to [1, 3, 2] with excess 2
sage: B.list()
[(1)(3, 1)(3, 2), (3, 1)(1)(3, 2), (3, 1)(3, 1)(2), (3)(3, 1)(3, 2), (3, 1)(3)(3, 2), (3, 1)(3, 2)(2)]

We can also access the crystal by specifying a skew shape:

sage: crystals.FullyCommutativeStableGrothendieck([[2, 2], [1]], 4, 1, shape=True)
Fully commutative stable Grothendieck crystal of type A_3 associated to [2, 1, 3] with excess 1

We can compute the highest weight elements:

sage: hw = [w for w in B if w.is_highest_weight()]
sage: hw
[(1)(3, 1)(3, 2), (3)(3, 1)(3, 2)]
sage: hw[0].weight()
(2, 2, 1)

The crystal operators themselves move elements between adjacent factors:

sage: b = hw[0]; b
(1)(3, 1)(3, 2)
sage: b.f(2)
(3, 1)(1)(3, 2)

class Element(parent, t)

Create an instance self of element t. This method takes into account the constraints on the word, the number of factors, and the excess statistic associated to the parent class.

EXAMPLES:

sage: S = SymmetricGroup(3+1)
sage: w = S.from_reduced_word([1, 3, 2])
sage: B = crystals.FullyCommutativeStableGrothendieck(w, 3, 2)
sage: from sage.combinat.crystals.fully_commutative_stable_grothendieck import DecreasingHeckeFactorization
sage: h = DecreasingHeckeFactorization([[3, 1], [3], [3, 2]], 4)
sage: u = B(h); u.value
((3, 1), (3,), (3, 2))
sage: v = B([[3, 1], [3], [3, 2]]); v.value
((3, 1), (3,), (3, 2))

bracketing(i)

Remove all bracketed letters between the $$i$$-th and $$(i+1)$$-th entry.

EXAMPLES:

sage: S = SymmetricGroup(4+1)
sage: w = S.from_reduced_word([3, 2, 1, 4, 3])
sage: B = crystals.FullyCommutativeStableGrothendieck(w, 3, 2)
sage: h = B([[3], [4, 2, 1], [4, 3, 1]])
sage: h.bracketing(1)
[[], []]
sage: h.bracketing(2)
[[], [2, 1]]

e(i)

Return the action of $$e_i$$ on self using the rules described in [MPPS2020].

EXAMPLES:

sage: S = SymmetricGroup(4+1)
sage: w = S.from_reduced_word([2, 1, 4, 3, 2])
sage: B = crystals.FullyCommutativeStableGrothendieck(w, 4, 3)
sage: h = B([[4, 2], [4, 2, 1], [3, 2], [2]]); h
(4, 2)(4, 2, 1)(3, 2)(2)
sage: h.e(1)
(4, 2)(4, 2, 1)(3)(3, 2)
sage: h.e(2)
(4, 2)(2, 1)(4, 3, 2)(2)
sage: h.e(3)

f(i)

Return the action of $$f_i$$ on self using the rules described in [MPPS2020].

EXAMPLES:

sage: S = SymmetricGroup(4+1)
sage: w = S.from_reduced_word([3, 2, 1, 4, 3])
sage: B = crystals.FullyCommutativeStableGrothendieck(w, 4, 3)
sage: h = B([[3, 2], [2, 1], [4, 3], [3, 1]]); h
(3, 2)(2, 1)(4, 3)(3, 1)
sage: h.f(1)
(3, 2)(2, 1)(4, 3, 1)(3)
sage: h.f(2)
sage: h.f(3)
(3, 2, 1)(1)(4, 3)(3, 1)

module_generators()

Return generators for self as a crystal.

EXAMPLES:

sage: S = SymmetricGroup(3+1)
sage: w = S.from_reduced_word([1, 3, 2])
sage: B = crystals.FullyCommutativeStableGrothendieck(w, 3, 2)
sage: B.module_generators
((1)(3, 1)(3, 2), (3)(3, 1)(3, 2))
sage: C = crystals.FullyCommutativeStableGrothendieck(w, 4, 2)
sage: C.module_generators
(()(1)(3, 1)(3, 2), ()(3)(3, 1)(3, 2), (1)(1)(1)(3, 2), (1)(1)(3)(3, 2), (1)(3)(3)(3, 2))
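As a quick composite of the calls shown above (a usage sketch, not part of the reference documentation), the module generators can be recovered by filtering the crystal for its highest weight elements; per the examples above, for the crystal B this yields (1)(3, 1)(3, 2) and (3)(3, 1)(3, 2):

sage: S = SymmetricGroup(3+1)
sage: w = S.from_reduced_word([1, 3, 2])
sage: B = crystals.FullyCommutativeStableGrothendieck(w, 3, 2)
sage: hw = [b for b in B if b.is_highest_weight()]
sage: weights = [b.weight() for b in hw]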
https://www.gamedev.net/forums/topic/274849-planetary-ring-geometry/
# OpenGL Planetary Ring Geometry This topic is 5456 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts So far I have this: float ring_coord[][3] = {0, 0, -62.4, 0, 0, -62.4, 0, 0, -43.09, 0, 0, -43.09, 12.17, 0, -61.2, 12.17, 0, -61.2, 8.407, 0, -42.26, 8.407, 0, -42.26, 23.88, 0, -57.65, 23.88, 0, -57.65, 16.49, 0, -39.81, 16.49, 0, -39.81, 34.67, 0, -51.89, 34.67, 0, -51.89, 23.94, 0, -35.83, 23.94, 0, -35.83, 44.13, 0, -44.13, 44.13, 0, -44.13, 30.47, 0, -30.47, 30.47, 0, -30.47, 51.89, 0, -34.67, 51.89, 0, -34.67, 35.83, 0, -23.94, 35.83, 0, -23.94, 57.65, 0, -23.88, 57.65, 0, -23.88, 39.81, 0, -16.49, 39.81, 0, -16.49, 61.2, 0, -12.17, 61.2, 0, -12.17, 42.26, 0, -8.407, 42.26, 0, -8.407, 62.4, 0, 0, 62.4, 0, 0, 43.09, 0, 0, 43.09, 0, 0, 61.2, 0, 12.17, 61.2, 0, 12.17, 42.26, 0, 8.407, 42.26, 0, 8.407, 57.65, 0, 23.88, 57.65, 0, 23.88, 39.81, 0, 16.49, 39.81, 0, 16.49, 51.89, 0, 34.67, 51.89, 0, 34.67, 35.83, 0, 23.94, 35.83, 0, 23.94, 44.13, 0, 44.13, 44.13, 0, 44.13, 30.47, 0, 30.47, 30.47, 0, 30.47, 34.67, 0, 51.89, 34.67, 0, 51.89, 23.94, 0, 35.83, 23.94, 0, 35.83, 23.88, 0, 57.65, 23.88, 0, 57.65, 16.49, 0, 39.81, 16.49, 0, 39.81, 12.17, 0, 61.2, 12.17, 0, 61.2, 8.407, 0, 42.26, 8.407, 0, 42.26, 0, 0, 62.4, 0, 0, 62.4, 0, 0, 43.09, 0, 0, 43.09, -12.17, 0, 61.2, -12.17, 0, 61.2, -8.407, 0, 42.26, -8.407, 0, 42.26, -23.88, 0, 57.65, -23.88, 0, 57.65, -16.49, 0, 39.81, -16.49, 0, 39.81, -34.67, 0, 51.89, -34.67, 0, 51.89, -23.94, 0, 35.83, -23.94, 0, 35.83, -44.13, 0, 44.13, -44.13, 0, 44.13, -30.47, 0, 30.47, -30.47, 0, 30.47, -51.89, 0, 34.67, -51.89, 0, 34.67, -35.83, 0, 23.94, -35.83, 0, 23.94, -57.65, 0, 23.88, -57.65, 0, 23.88, -39.81, 0, 16.49, -39.81, 0, 16.49, -61.2, 0, 12.17, -61.2, 0, 12.17, -42.26, 0, 8.407, -42.26, 0, 8.407, -62.4, 0, 0, -62.4, 0, 0, -43.09, 0, 0, -43.09, 0, 0, -61.2, 0, -12.17, -61.2, 0, -12.17, -42.26, 0, -8.407, -42.26, 0, -8.407, -57.65, 0, -23.88, -57.65, 0, -23.88, -39.81, 0, -16.49, -39.81, 0, -16.49, -51.89, 0, -34.67, -51.89, 0, -34.67, -35.83, 0, -23.94, -35.83, 0, -23.94, -44.13, 0, -44.13, -44.13, 0, -44.13, -30.47, 0, -30.47, -30.47, 0, -30.47, -34.67, 0, -51.89, -34.67, 0, -51.89, -23.94, 0, -35.83, -23.94, 0, -35.83, -23.88, 0, -57.65, -23.88, 0, -57.65, -16.49, 0, -39.81, -16.49, 0, -39.81, -12.17, 0, -61.2, -12.17, 0, -61.2, -8.407, 0, -42.26, -8.407, 0, -42.26}; void ring(double x, double y, double z) { int i; Vector3f n; Vector3f vec[4]; glEnable(GL_DEPTH_TEST); glDisable(GL_LIGHTING); gl_BindMaterial2D( white_mat, NO_BLEND); gl_AmbientColor( white_mat ); gl_DiffuseColor( white_mat ); gl_SpecularColor( white_mat ); gl_EmissiveColor( white_mat ); gl_Shine( 0.0 ); glPushMatrix(); glTranslated(x,y,z); for(i=0; i<8; i+=4) { vec[0].x = ring_coord[0]; vec[0].y = ring_coord[1]; vec[0].z = ring_coord[2]; vec[1].x = ring_coord[i+1][0]; vec[1].y = ring_coord[i+1][1]; vec[1].z = ring_coord[i+1][2]; vec[2].x = ring_coord[i+2][0]; vec[2].y = ring_coord[i+2][1]; vec[2].z = ring_coord[i+2][2]; vec[3].x = ring_coord[i+3][0]; vec[3].y = ring_coord[i+3][1]; vec[3].z = ring_coord[i+3][2]; n = makeNormalCoord(vec[0],vec[1],vec[2]); glNormal3f(n.x, n.y, n.z); glVertex3f(vec[0].x,vec[0].y,vec[0].z); glVertex3f(vec[1].x,vec[1].y,vec[1].z); glVertex3f(vec[2].x,vec[2].y,vec[2].z); glVertex3f(vec[3].x,vec[3].y,vec[3].z); } glEnd(); glPopMatrix(); //glFlush(); glEnable(GL_LIGHTING); } I pulled the coordinates out of a WRL file so my guess is thier garbage and incompatable with OpenGL due to 
the fact nothing draws whatsoever not even a screwed up geometry...so I need to be pointed in the right direction of where I can find SOURCECODE for a ring preferably one that I can specify how many sides it has like in 3d studio max... btw this was taken from 3d studio's tube shape exported as a wrl file...I grabbed the coordinates from there thinking it would work...

##### Share on other sites

If you want planetary rings, a tube shape isn't really the way to go about it. What you want is a series of concentric circles, with increasing radius, made up of the same number of vertices. To generate a circle, just do this:

#define TWOPI 6.28318

void generateRing (Vector3 centre, float radius, int nPoints, Vector3 *points)
{
    // place nPoints vertices evenly around a circle in the xz plane
    for (int i = 0; i < nPoints; i ++)
    {
        points[i].x = centre.x + sin (i * TWOPI / nPoints) * radius;
        points[i].z = centre.z + cos (i * TWOPI / nPoints) * radius;
        points[i].y = centre.y;
    }
}

Then, to draw, something like this:

void drawRings (int nRings, int nRingPoints)
{
    Vector3 *inner = new Vector3[nRingPoints];
    Vector3 *outer = new Vector3[nRingPoints];
    unsigned char innerColour[3], outerColour[3];
    int j;

    glDisable (GL_LIGHTING);
    glBegin (GL_QUADS);

    float radius = 100.0f;
    Vector3 centre (0, 0, 0);

    for (j = 0; j < 3; j ++)
        innerColour[j] = rand () & 0xff;

    for (int i = 0; i < nRings; i ++)
    {
        // each band is a strip of quads between two concentric circles
        generateRing (centre, radius, nRingPoints, inner);
        generateRing (centre, radius + 10.0f, nRingPoints, outer);

        for (j = 0; j < 3; j ++)
            outerColour[j] = rand () & 0xff;

        for (j = 0; j < nRingPoints - 1; j ++)
        {
            glColor3ubv (innerColour);
            glVertex3f (inner[j].x, inner[j].y, inner[j].z);
            glVertex3f (inner[j + 1].x, inner[j + 1].y, inner[j + 1].z);
            glColor3ubv (outerColour);
            glVertex3f (outer[j + 1].x, outer[j + 1].y, outer[j + 1].z);
            glVertex3f (outer[j].x, outer[j].y, outer[j].z);
        }

        // close the band by joining the last points back to the first
        glColor3ubv (innerColour);
        glVertex3f (inner[nRingPoints - 1].x, inner[nRingPoints - 1].y, inner[nRingPoints - 1].z);
        glVertex3f (inner[0].x, inner[0].y, inner[0].z);
        glColor3ubv (outerColour);
        glVertex3f (outer[0].x, outer[0].y, outer[0].z);
        glVertex3f (outer[nRingPoints - 1].x, outer[nRingPoints - 1].y, outer[nRingPoints - 1].z);

        radius += 10.0f;
        for (j = 0; j < 3; j ++)
            innerColour[j] = outerColour[j];
    }

    glEnd ();
    delete [] inner;
    delete [] outer;
}

Off my head, so there may be typos.

[Edited by - Ajare on October 8, 2004 4:16:17 PM]

##### Share on other sites

i'd make a texture with transparency, and draw a quad. Or alternatively make a 1d texture with colors depending on radius, and then do something like

glBegin(GL_TRIANGLE_STRIP);
for(int n=0; n<=256; n++){   // run to 256 inclusive so the strip closes on itself
    glTexCoord1f(0.0);
    glVertex3f(ir*sin(n*2*pi/256.0), ir*cos(n*2*pi/256.0), 0);
    glTexCoord1f(1.0);
    glVertex3f(or*sin(n*2*pi/256.0), or*cos(n*2*pi/256.0), 0);
}
glEnd();

where ir is the inner radius, or is the outer radius, and the ring is in the xy plane. (And build that into a display list for faster redrawing.)

##### Share on other sites

That's a very easy, and certainly very quick way of doing it, but you'd need a *huge* texture map, considering that this has to be done on a planetary scale.

##### Share on other sites

It's all relative. Something on a planetary *scale*. The textures do not have to be huge, but you do have to be clever about how these are portrayed. ie. Viewed from a typical orbital position it would probably perform well and look good, but moving in close could generate fillrate problems. Post a screenie when you're done!
##### Share on other sites

Quote
Original post by Ajare
That's a very easy, and certainly very quick way of doing it, but you'd need a *huge* texture map, considering that this has to be done on a planetary scale.

0: "big textured quad" != "huge texture size". The texture may have a resolution of 1024x1024, and may be scaled to... say, 180k x 180k lightyears, if you like. (That's the size of a galaxy; 1 lightyear is about 9467020800000000 m.)
1: You can use a 1D texture to save memory. And as for fillrate, it can't in any case be worse than filling the entire screen once. Even cheap cards such as the GeForce FX 5200 can fill the screen in roughly 0.001 sec.
2: If you want it to look good at different scales you need to do it procedurally in a pixel shader or use some tricky LOD schemes, and I guess the OP will not do that now.

##### Share on other sites

Whoa I didn't expect all these responses...Thanks! Yeah I used a simple quad and got past this hurdle...then approached another wall...I dumped the WRL crap and moved to another format, ASE, which I hear is supposed to be real easy to figure out, and well, it looked easy enough, but I suspect I don't know all the little secrets and hidden rules that do not get mentioned in any of the 1000 posts I've scanned through in these forums...or possibly I overlooked something...dunno...

Anyways I tried to read the vertices from a simple ASE as a test and I think either I am reading them wrong or MAX is screwed in exporting them right... As a test I exported:

*MESH {
    *TIMEVALUE 0
    *MESH_NUMVERTEX 8
    *MESH_NUMFACES 12
    *MESH_VERTEX_LIST {
        *MESH_VERTEX 0 0.0000 -32.7798 0.0000
        *MESH_VERTEX 1 18.5239 -32.7798 0.0000
        *MESH_VERTEX 2 0.0000 0.0000 0.0000
        *MESH_VERTEX 3 18.5239 0.0000 0.0000
        *MESH_VERTEX 4 0.0000 -32.7798 30.0000
        *MESH_VERTEX 5 18.5239 -32.7798 30.0000
        *MESH_VERTEX 6 0.0000 0.0000 30.0000
        *MESH_VERTEX 7 18.5239 0.0000 30.0000

and converted it to a float array as such:

float box[8][3] = {0.0000, -32.7798, 0.0000,
                   18.5239, -32.7798, 0.0000,
                   0.0000, 0.0000, 0.0000,
                   18.5239, 0.0000, 0.0000,
                   0.0000, -32.7798, 30.0000,
                   18.5239, -32.7798, 30.0000,
                   0.0000, 0.0000, 30.0000,
                   18.5239, 0.0000, 30.0000};

and drew it like so:

void asteroid(float size, double x, double y, double z)
{
    int i;
    Vector3f n;
    float vec[10][3];

    glEnable(GL_DEPTH_TEST);
    glDisable(GL_LIGHTING);
    gl_BindMaterial2D( white_mat, NO_BLEND);
    gl_AmbientColor( white_mat );
    gl_DiffuseColor( white_mat );
    gl_SpecularColor( white_mat );
    gl_EmissiveColor( white_mat );
    gl_Shine( 0.0 );
    glPushMatrix();
    glTranslated(x,y,z);

    glEnableClientState (GL_VERTEX_ARRAY);
    glVertexPointer (3, GL_FLOAT, 0, box);
    // draws the 8 vertices in storage order as two quads
    glDrawArrays(GL_QUADS, 0, 8);
    glDisableClientState (GL_VERTEX_ARRAY);

    glPopMatrix();
    //glFlush();
    glEnable(GL_LIGHTING);
}

using GL_TRIANGLES or GL_WHATEVER gives me 1000% NOT a box!...I mean wtf this is supposed to be simple right? You read in the data and convert it to a format that OpenGL can read and draw it...If it's more complex than that f'it I'll use a library...if it's my code then what in the world do I use to properly display the data in an ASE file? This gets frustrating when I KNOW it looks simple and it takes me longer than a century to add 2+2...Must be lack of sleep *rolls eyes* Thanks!

##### Share on other sites

So I take it MAX is screwed on the export and my code is correct?
http://www.feat.engineering/review-predictive-modeling-process.html
3 A Review of the Predictive Modeling Process

Before diving into specific methodologies and techniques for modeling, there are necessary topics that should first be discussed and defined. These topics are fairly general with regards to empirical modeling and include: metrics for measuring performance for regression and classification problems, approaches for optimal data usage including data splitting and resampling, best practices for model tuning, and recommendations for comparing model performance.

There are two data sets used in this chapter to illustrate the techniques. First is the Ames housing price data first introduced in Chapter 1. The second data set focuses on the classification of a person's profession based on the information from an online dating site. These data are discussed in the next section.

3.1 Illustrative Example: OkCupid Profile Data

OkCupid is an online dating site that serves international users. Kim and Escobedo-Land (2015) describe a data set where over 50,000 profiles from the San Francisco area were made available and the data can be found in a GitHub repository. The data contains several types of variables:

• open text essays related to an individual's interests and personal descriptions,
• single choice type fields such as profession, diet, gender, body type, and education, and
• multiple choice fields such as languages spoken and fluency in programming languages.

In their original form, almost all of the raw data fields are discrete in nature; only age was numeric. The categorical predictors were converted to dummy variables (see Chapter 5) prior to fitting models (in this chapter at least). For the analyses of these data in this chapter, the open text data will be ignored but will be probed later (see Section 5.5). Of the 307 predictors that were used, there were clusters of variables for geographic location (i.e. town, $$p = 66$$), religious affiliation ($$p = 13$$), astrological sign ($$p = 18$$), children ($$p = 15$$), pets ($$p = 15$$), income ($$p = 12$$), education ($$p = 31$$), body type ($$p = 12$$), diet ($$p = 17$$), and over 50 variables related to spoken languages. For more information on this data set and how it was processed, see the book's GitHub repository.

For this demonstration, the goal will be to predict whether a person's profession is in the STEM fields (science, technology, engineering, and math). There is a severe class imbalance in these data; only 18.5% of profiles work in these areas. While the imbalance has a significant impact on the analysis, the illustration presented here will mostly side-step this issue by down-sampling the instances such that the number of profiles in each class is equal. See Chapter 16 of Kuhn and Johnson (2013) for a detailed description of techniques for dealing with infrequent classes.

3.2 Measuring Performance

While often overlooked, the metric used to assess the effectiveness of a model to predict the outcome is very important and can influence the conclusions. The metric we select to evaluate model performance depends on the outcome, and the subsections below describe the main statistics that are used.

3.2.1 Regression Metrics

When the outcome is a number, the most common metric is the root mean squared error (RMSE). To calculate this value, we first build a model and then use this model to predict the outcome. The residuals are the difference between the observed outcome and predicted outcome values.
To get the RMSE for a model, we compute the average of the squared residuals, then we take the square root of this value. Taking the square root puts the metric back into the original measurement units. We can think of RMSE as the average distance from a sample's observed value to its predicted value. Simply put, the lower the RMSE, the better a model can predict samples' outcomes.

Another popular metric is the coefficient of determination, usually known as $$R^2$$. There are several formulas for computing this value (Kvalseth 1985), but the most conceptually simple one finds the standard correlation between the observed and predicted values (a.k.a. $$R$$) and squares it. The benefit of this number is, for linear models, it has a straightforward interpretation: $$R^2$$ is the proportion of the total variability in the outcome that can be explained by the model. A value near 1.0 indicates an almost perfect fit while values near zero result from a model where the predictions have no linear association with the outcome. One other advantage of this number is that it makes comparisons between different outcomes easy since it is unitless.

Unfortunately, $$R^2$$ can be a deceiving metric. The main problem is that it is a measure of correlation and not accuracy. When assessing the predictive ability of a model, we need to know how well the observed and predicted values agree. It is possible, and not unusual, that a model could produce predicted values that have a strong linear relationship with the observed values but the predicted values do not conform to the 45 degree line of agreement. One example of this phenomenon occurs when a model under-predicts at one extreme of the outcome and over-predicts at the other extreme of the outcome. Tree-based ensemble methods (e.g. random forest, boosted trees, etc.) are notorious for these kinds of predictions. A second problem with using $$R^2$$ as a performance metric is that it can show very optimistic results when the outcome has large variance. Finally, $$R^2$$ can be misleading if there are a handful of outcome values that are far away from the overall scatter of the observed and predicted values. In this case the handful of points can artificially increase $$R^2$$.

To illustrate the problems with $$R^2$$, let's look at the results of one particular model of the Chicago train ridership data. For this model $$R^2$$ was estimated to be 0.92; at face value we may conclude that this is an extremely good model. However, the high value is mostly due to the inherent nature of the ridership numbers which are high during the workweek and correspondingly low on the weekends. The bimodal nature of the outcome inflates the outcome variance and, in turn, the $$R^2$$. We can see the impacts of the bimodal outcome in Figure 3.1 (a). Part (b) of the figure displays a histogram of the residuals, some of which are greater than 10K rides. The RMSE for this model is 2269 rides, which is somewhat large relative to the observed ridership values.

A second illustration of the problem of using $$R^2$$ can be seen by examining the blue and black lines in Figure 3.1(a). The blue line is the linear regression fit between the observed and predicted values, while the black line represents the line of agreement. Here we can see that the model under-predicts the smaller observed values (left) and over-predicts the larger observed values (right). In this case, the offset is not huge but it does illustrate how the RMSE and $$R^2$$ metrics can produce discordant results.
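To make the contrast concrete, the short sketch below (an illustration in Python with invented numbers, not the Chicago data) computes both statistics. The second set of predictions is perfectly correlated with the outcome but biased away from the line of agreement, so it earns a perfect $$R^2$$ while the RMSE reveals the problem:

import numpy as np

# invented outcomes and two hypothetical sets of predictions
obs = np.array([10.0, 12.0, 15.0, 18.0, 20.0, 25.0])
pred_close = obs + np.array([0.5, -0.4, 0.3, -0.6, 0.2, -0.1])  # near the 45-degree line
pred_offset = 0.5 * obs + 12.0                                  # correlated but biased

def rmse(y, yhat):
    # root mean squared error: average the squared residuals, then take the square root
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r2(y, yhat):
    # squared Pearson correlation between observed and predicted values
    return float(np.corrcoef(y, yhat)[0, 1] ** 2)

print(rmse(obs, pred_close), r2(obs, pred_close))    # small RMSE, R^2 near 1
print(rmse(obs, pred_offset), r2(obs, pred_offset))  # R^2 = 1.0, yet RMSE around 4.4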
For these reasons, we advise using RMSE instead of $$R^2$$.

To address the problem that the correlation coefficient is overly optimistic when the data illustrates correlation but not agreement, Lawrence and Lin (1989) developed the concordance correlation coefficient (CCC). This metric provides a measure of correlation relative to the line of agreement and is defined as the product of the usual correlation coefficient and a measure of bias from the line of agreement. The bias coefficient ranges from 0 to 1, where a value of 1 indicates that the data falls on the line of agreement. The further the data deviates from the line of agreement, the smaller the bias coefficient. Therefore, the CCC can be thought of as a penalized version of the correlation coefficient. The penalty will apply if the data exhibits poor correlation between the observed and predicted values or if the relationship between the observed and predicted values is far from the line of agreement.

Both RMSE and $$R^2$$ are very sensitive to extreme values because each is based on the squared value of the individual samples' residuals. Therefore a sample with a large residual will have an inordinately large effect on the resulting summary measure. In general, this type of sample makes the model performance metric appear worse than it would be without the sample. Depending on the problem at hand, this characteristic is not necessarily a vice but could be a virtue. For example, if the goal of the modeling problem is to rank-order new data points (e.g. the highest spending customers), then the size of the residual is not an issue so long as the most extreme values are predicted to be the most extreme. However, it is more often the case that we are interested in predicting the actual response value rather than just the rank. In this case, we need metrics that are not skewed by one or just a handful of extreme values.

The field of robustness was developed to study the effects of extreme values (i.e. outliers) on commonly used statistical metrics and to derive alternative metrics that achieved the same purpose but were less sensitive or insensitive to the impact of outliers (Hampel et al. 1972). As a broad description, robust techniques seek to find numerical summaries for the majority of the data. To lessen the impact of extreme values, robust approaches down-weight the extreme samples or they transform the original values in a way that brings the extreme samples closer to the majority of the data. Rank-ordering the samples is one type of transformation that reduces the impact of extreme values. In the hypothetical case of predicting customers' spending, rank correlation might be a better choice of metric for the model since it measures how well the predictions rank order with their true values. This statistic computes the ranks of the data (e.g. 1, 2, etc.) and computes the standard correlation statistic from these values. Other robust measures for regression are the median absolute deviation (MAD) (Rousseeuw and Croux 1993) and the absolute error.

3.2.2 Classification Metrics

Table 3.1: A confusion matrix for an OkCupid model. The columns are the true classes and the rows correspond to the predictions.

            stem   other
  stem      5231    9261
  other     1936   22381

When the outcome is a discrete set of values (i.e. qualitative data), there are two different types of performance metrics that can be utilized. The first type described below is based on qualitative class prediction (e.g.
stem or other) while the second type uses the predicted class probabilities to measure model effectiveness (e.g. Pr[stem] = 0.254).

Given a set of predicted classes, the first step in understanding how well the model is working is to create a confusion matrix which is a simple cross-tabulation of the observed and predicted classes. For the OkCupid data, a simple logistic regression model was built using the predictor set mentioned above and Table 3.1 shows the resulting confusion matrix. The samples that were correctly predicted sit on the diagonal of the table. The STEM profiles mistakenly predicted as non-STEM are shown in the bottom left of the table (n = 1936) while the non-STEM profiles that were erroneously predicted are in the upper right cell (n = 9261).

The most widely utilized metric is classification accuracy which is simply the proportion of the outcomes that were correctly predicted. In this example, the accuracy is 0.71 = (5231 + 22381)/(5231 + 9261 + 1936 + 22381). There is an implicit tendency to assess model performance by comparing the observed accuracy value to 1/C, where C is the number of classes. In this case, 0.71 is much greater than 0.5. However, this comparison should be made only when there are nearly the same number of samples in each class. When there is an imbalance between the classes as there is in this data, accuracy can be a quite deceiving measure of model performance since a value of 0.82 can be achieved by predicting all profiles as non-STEM.

As an alternative to accuracy, another statistic called Cohen's Kappa (Agresti 2012) can be used to account for class imbalances. This metric normalizes the error rate to what would be expected by chance. Kappa takes on values between -1 and 1 where a value of 1 indicates complete concordance between the observed and predicted values (and thus perfect accuracy). A value of -1 is complete discordance and is rarely seen. Values near zero indicate that there is no relationship between the model predictions and the true results. The Kappa statistic can also be generalized to problems that have more than two groups.

A visualization technique that can be used for confusion matrices is the mosaic plot (see Figure 3.3). In these plots, each cell of the table is represented as a rectangle whose area is proportional to the number of values in the cell. These plots can be rendered in a number of different ways and for tables of many sizes. See Friendly and Meyer (2015) for more examples.

There are also specialized sets of classification metrics when the outcome has two classes. To use them, one of the class values must be designated as the event of interest. This is somewhat subjective. In some cases, this value might be the worst case scenario (i.e. death) but the designated event should be the value that one is most interested in predicting.

The first paradigm of classification metrics focuses on false positives and false negatives and is most useful when there is interest in comparing the two types of errors. The metric sensitivity is simply the proportion of the events that were predicted correctly and is the true positive rate in the data. For our example,

$sensitivity = \frac{\text{# truly STEM predicted correctly}}{\text{# truly STEM}} = 5,231/7,167 = 0.73$

The false positive rate is associated with the specificity, which is

$specificity = \frac{\text{# truly non-STEM predicted correctly}}{\text{# truly non-STEM}} = 22,381/31,642 = 0.707$

The false positive rate is 1 - specificity (0.293 in this example).
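The statistics quoted above can be verified directly from the cell counts of Table 3.1; the short sketch below (an added illustration) also computes Cohen's Kappa from the usual observed-versus-expected agreement formula:

# cell counts from Table 3.1 (rows = predicted class, columns = true class)
tp = 5231    # truly STEM, predicted STEM
fp = 9261    # truly other, predicted STEM
fn = 1936    # truly STEM, predicted other
tn = 22381   # truly other, predicted other
n = tp + fp + fn + tn

accuracy = (tp + tn) / n        # about 0.71
sensitivity = tp / (tp + fn)    # about 0.73, the true positive rate
specificity = tn / (tn + fp)    # about 0.707, the true negative rate

# Cohen's Kappa: agreement beyond what the row and column margins imply by chance
p_observed = accuracy
p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (p_observed - p_expected) / (1 - p_expected)   # roughly 0.31 here

print(round(accuracy, 3), round(sensitivity, 3),
      round(specificity, 3), round(kappa, 3))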
The other paradigm for the two class system is rooted in the field of information retrieval where the goal is to find the events. In this case, the metrics commonly used are precision and recall. Recall is equivalent to sensitivity and focuses on the number of true events found by the model. Precision is the proportion of events that are predicted correctly out of the total number of predicted events, or

$precision = \frac{\text{# truly STEM predicted correctly}}{\text{# predicted STEM}} = 5,231/14,492 = 0.361$

One facet of sensitivity, specificity, and precision that is worth understanding is that they are conditional statistics. For example, sensitivity reflects the probability that an event is correctly predicted given that a sample is truly an event. The latter part of this sentence shows the conditional nature of the metric. Of course, the true class is usually unknown and, if it were known, a model would not be needed. In any case, if Y denotes the true class and P denotes the prediction, we could write sensitivity as Pr[P = STEM|Y = STEM].

The question that one really wants to know is "if my value was predicted to be an event, what are the chances that it truly is an event?" or Pr[Y = STEM|P = STEM]. Thankfully, the field of Bayesian analysis (McElreath 2016) has an answer to this question. In this context, Bayes' Rule states that

$Pr[Y|P] = \frac{Pr[Y] \times Pr[P|Y]}{Pr[P]} = \frac{Prior \times Likelihood}{Evidence}$

Sensitivity (or specificity, depending on one's point of view) is the "likelihood" part of this equation. The prior probability, or prevalence, is the overall rate that we see events in the wild (which may be different from what was observed in our training set). Usually, one would specify the overall event rate before data are collected and use it in the computations to determine the unconditional statistics. For sensitivity, its unconditional analog is called the positive predictive value (PPV):

$PPV = \frac{sensitivity \times prevalence}{(sensitivity\times prevalence) + ((1-specificity)\times (1-prevalence))}$

The negative predictive value (NPV) is the analog to specificity and can be computed as

$NPV = \frac{specificity \times (1-prevalence)}{((1-sensitivity)\times prevalence) + (specificity\times (1-prevalence))}$

See DG Altman and Bland (1994b) for a clear and concise discussion of these measures. Also, simplified versions of these formulas are often shown for these statistics that assume the prevalence to be 0.50. These formulas, while correct when the prevalence is 0.50, can produce very misleading results if the prevalence is different from this number.

For the OkCupid data, the difference between the sensitivity and PPV is:

• sensitivity: if the profile is truly STEM, what is the probability that it is correctly predicted?
• PPV: if the profile was predicted as STEM, what is the probability that it is STEM?

The positive and negative predictive values are not often used to measure performance. This is partly due to the nature of the prevalence. If the outcome is not well understood, it is very difficult to provide a value (even when asking experts). When there is a sufficient amount of data, the prevalence is typically estimated by the proportion of the outcome data that correspond to the event of interest. Also, in other situations, the prevalence may depend on certain factors. For example, the proportion of STEM profiles in the San Francisco area can be estimated from the training set to be 0.18.
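These formulas translate directly into code; the sketch below (an added illustration) plugs in the sensitivity and specificity computed earlier, with the prevalence left as an argument so that different assumed rates can be tried:

def ppv(sens, spec, prev):
    # positive predictive value: Pr[truly an event | predicted to be an event]
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    # negative predictive value: Pr[truly a non-event | predicted to be a non-event]
    return spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))

sens = 5231 / 7167      # about 0.73, from the confusion matrix
spec = 22381 / 31642    # about 0.707

# with the training set prevalence of STEM profiles (about 0.18),
# this prints roughly 0.36 and 0.92
print(round(ppv(sens, spec, 0.1847), 2), round(npv(sens, spec, 0.1847), 2))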
Using this value as the prevalence, our estimates are PPV = 0.36 and NPV = 0.92. The PPV is significantly smaller than the sensitivity due to the model missing almost 27% of the true STEM profiles and the fact that the overall likelihood of being in the STEM fields is already fairly low. The prevalence of people in STEM professions in San Francisco is likely to be larger than in other parts of the country. If we thought that the overall STEM prevalence in the United States were about 5%, then our estimates would change to PPV = 0.12 and NPV = 0.98. These computations only differ by the prevalence estimates and demonstrate how the smaller prevalence affects the unconditional probabilities of our results.

The metrics discussed so far depend on having a hard prediction (e.g. STEM or other). Most classification models can produce class probabilities as soft predictions that can be converted to a definitive class by choosing the class with the largest probability. There are a number of metrics that can be created using the probabilities. For a two-class problem, an example metric is the binomial log-likelihood statistic. To illustrate this statistic, let $$i$$ represent the index of the samples where $$i=1, 2, \ldots, n$$, and let $$j$$ represent the numeric index of the outcome classes where $$j=1, 2$$. Next, we will use $$y_{ij}$$ to represent the indicator of the true class of the $$i^{th}$$ sample. That is, $$y_{ij} = 1$$ if the $$i^{th}$$ sample is in the $$j^{th}$$ class and 0 otherwise. Finally, let $$p_{ij}$$ represent the predicted probability of the $$i^{th}$$ sample in the $$j^{th}$$ class. Then the log-likelihood is calculated as

$\log \ell = \sum_{i=1}^n \sum_{j=1}^C y_{ij} \log(p_{ij}),$

where $$C$$ = 2 for the two-class problem. In general, we want to maximize the log-likelihood. This value will be maximized if all samples are predicted with high probability to be in the correct class.

Two other metrics that are commonly computed on class probabilities are the Gini criterion (Breiman et al. 1984)

$G = \sum_{i=1}^n \sum_{j \ne j'} p_{ij} p_{ij'}$

and entropy ($$H$$) (MacKay 2003):

$H = -\sum_{i=1}^n \sum_{j=1}^C p_{ij} \log_2p_{ij}$

Unlike the log-likelihood statistic, both of these metrics are measures of variance or impurity in the class probabilities and should be minimized.

Table 3.2: A comparison of typical probability-based measures used for classification models. The calculations presented here assume that Class 1 is the true class. The "Good" model has the highest log-likelihood and the lowest Gini and Entropy. However, Gini and Entropy have the same values for the "Good" and "Bad" model, which illustrates a problem with these metrics.

                         Probabilities        Statistics
                       Class 1   Class 2   Log-Likelihood   Gini   Entropy
  Equivocal Model        0.5       0.5        -0.693        0.25    1.000
  Good Model             0.8       0.2        -0.223        0.16    0.722
  Bad Model              0.2       0.8        -1.609        0.16    0.722

Of these three metrics, it is important to note that the likelihood statistic is the only one to use the true class information. Because of this, it penalizes poor models in a supervised manner. The Gini and entropy statistics would only penalize models that are equivocal (i.e. produce roughly equal class probabilities). For example, Table 3.2 shows a two-class example. If the true outcome was the first class, the model results shown in the second row would be best. The likelihood statistic only takes into account the column called "Class 1" since that is the only column where $$y_{ij} = 1$$.
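The entries of Table 3.2 can be reproduced in a few lines; in the sketch below (an added illustration) the Gini statistic multiplies the probabilities over each unordered pair of classes once, which is the convention that matches the tabulated values:

import math

models = {"Equivocal": (0.5, 0.5), "Good": (0.8, 0.2), "Bad": (0.2, 0.8)}

for name, (p1, p2) in models.items():
    log_lik = math.log(p1)       # the true class is Class 1, so only p1 contributes
    gini = p1 * p2               # one product per unordered pair of classes
    entropy = -(p1 * math.log2(p1) + p2 * math.log2(p2))
    print(f"{name:9s} {log_lik:7.3f} {gini:5.2f} {entropy:6.3f}")

# Equivocal  -0.693  0.25  1.000
# Good       -0.223  0.16  0.722
# Bad        -1.609  0.16  0.722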
In terms of the likelihood statistic, the equivocal model does better than the model that confidently predicts the wrong class. When considering Gini and entropy, the equivocal model does worst while the good and bad models are equivalent.

When there are two classes, one advantage that the log-likelihood has over metrics based on a hard prediction is that it sidesteps the issue of the appropriateness of the probability cutoff. For example, when discussing accuracy, sensitivity, specificity, and other measures there is the implicit assumption that the probability cutoff used to go from a soft prediction to a hard prediction is valid. This can often not be the case, especially when the data have a severe class imbalance. Consider the OkCupid data and the logistic regression model that was previously discussed. The class probability estimates that were used to make the definitive predictions contained in Table 3.1 are shown in Figure 3.2 where the top panel contains the profiles that were truly STEM and the bottom panel has the class probability distribution for the other profiles. The common 50% cutoff was used to create the original table of observed by predicted classes. Table 3.1 can also be visualized using a mosaic plot such as the one shown in Figure 3.3(b) where the sizes of the blocks are proportional to the amount of data in each cell.

What would happen to this table if we were more permissive about the level of evidence needed to call a profile STEM? Instead of using a 50% cutoff, we might lower the threshold for the event to 20%. In this instance, we would call more profiles as being STEM overall. This might raise sensitivity since the true STEM profiles are more likely to be correctly predicted, but the cost is to increase the number of false positives. The mosaic plot for this confusion matrix is shown in Figure 3.3(a) where the blue block in the upper left becomes larger but there is also an increase in the red block in the lower right. In doing so, we increase the sensitivity from 0.73 to 0.95 but at the cost of specificity dropping from 0.71 to 0.29. Increasing the level of evidence needed to predict a STEM profile to 80% has the opposite effect, as shown in Figure 3.3(c). Here, specificity improves but sensitivity is undermined.

The question then becomes "what probability cutoff should be used?" This depends on a number of things, including which error (false positive or false negative) hurts the most. However, if both types of errors are equally bad, there may be cutoffs that do better than the default. The receiver operating characteristic (ROC) (DG Altman and Bland 1994a) curve can be used to alleviate this issue. It considers all possible cutoffs and tracks the changes in sensitivity and specificity. The curve is constructed by plotting the false positive rate (1 - specificity) versus the true positive rate. The ROC curve for the OkCupid data is shown in Figure 3.4(a). The best model is one that hugs the y-axis and directly proceeds to the upper left corner (where neither type of error is made) while a completely ineffective model's curve would track along the diagonal line shown in grey. This curve allows the user to do two important tasks. First, an appropriate cutoff can be determined based on one's expectations regarding the importance of either sensitivity or specificity. This cutoff can then be used to make the qualitative predictions. Secondly, and perhaps more importantly, it allows a model to be assessed without having to identify the best cutoff.
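The construction of the curve is simple enough to sketch by hand; the code below (an added illustration on simulated scores, not the OkCupid model) sweeps the cutoff from 1 down to 0, records the true and false positive rates at each step, and integrates with the trapezoidal rule to get the area under the curve:

import numpy as np

rng = np.random.default_rng(42)
# simulated class probabilities: events tend to score higher than non-events
scores = np.concatenate([rng.beta(4, 2, 200), rng.beta(2, 4, 800)])
truth = np.concatenate([np.ones(200), np.zeros(800)])

cutoffs = np.linspace(1, 0, 201)   # from strict to permissive
tpr = np.array([(scores[truth == 1] >= c).mean() for c in cutoffs])
fpr = np.array([(scores[truth == 0] >= c).mean() for c in cutoffs])

# trapezoidal rule over the (fpr, tpr) points gives the AUC
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
print(round(float(auc), 3))        # around 0.8 for these simulated scores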
Commonly, the area under the ROC curve is used to evaluate models. If the best model immediately proceeds to the upper left corner, the area under this curve would be one while the poor model would produce an AUC in the neighborhood of 0.50. Caution should be used though since two curves for two different models may cross; this indicates that there are areas where one model does better than the other. Used as a summary measure, the AUC annihilates any subtleties that can be seen in the curves. For the curve in Figure 3.4(a), the AUC was 0.79, indicating a moderately good fit.

From the information retrieval point of view, the precision-recall curve is more appropriate (Christopher, Prabhakar, and Hinrich 2008). This is similar to the ROC curve in that the two statistics are calculated over every possible cutoff in the data. For the OkCupid data, the curve is shown in Figure 3.4(b). A poor model would result in a precision-recall curve that is in the vicinity of the horizontal grey line that is at the value of the observed prevalence (0.18 here). The area under the curve is used again here to summarize and the best possible value is again 1.0. The area under this curve is 0.482.

During the initial phase of model building, a good strategy for data sets with two classes is to focus on the AUC statistics from these curves instead of metrics based on hard class predictions. Once a reasonable model is found, the ROC or precision-recall curves can be carefully examined to find a reasonable cutoff for the data and then qualitative prediction metrics can be used.

3.2.3 Context-Specific Metrics

While the metrics discussed previously can be used to develop effective models, they may not answer the underlying question of interest. As an example, consider a scenario where we have collected data on customer characteristics and whether or not the customers clicked on an ad. Our goal may be to relate customer characteristics to the probability of a customer clicking on an ad. Several of the metrics described above would enable us to assess model performance if this was the goal. Alternatively, we may be more interested in answering "how much money will my company make if this model is used to predict who will click on an ad?" In another context, we may be interested in building a model to answer the question "what is my expected profit when the model is used to determine if this customer will repay a loan?"

These questions are very context specific and do not directly fit into the previously described metrics. Take the loan example. If a loan is requested for $M, can we compute the expected profit (or loss)? Let's assume that our model is created on an appropriate data set and can produce a class probability $$P_r$$ that the loan will be paid on time. Given these quantities, the interest rate, fees, and other known factors, the gross return on the loan can be computed for each data point and the average can then be used to optimize the model. Therefore, we should let the question of interest lead us to an appropriate metric for assessing a model's ability to answer the question. We may be able to use common, existing metrics. Or we may need to develop custom metrics for our context. See Chapter 16 of Kuhn and Johnson (2013) for additional discussion.

3.3 Data Splitting

One of the first decisions to make when starting a modeling project is how to utilize the existing data. One common technique is to split the data into two groups typically referred to as the training and testing sets.
The training set is used to develop models and feature sets; it is the substrate for estimating parameters, comparing models, and all of the other activities required to reach a final model. The test set is used only at the conclusion of these activities to obtain a final, unbiased assessment of the model's performance. It is critical that the test set not be used prior to this point. Looking at its results will bias the outcomes, since the testing data will have become part of the model development process.

How much data should be set aside for testing? It is extremely difficult to give a uniform guideline. The proportion of data can be driven by many factors, including the size of the original pool of samples and the total number of predictors. With a large pool of samples, the criticality of this decision is reduced once "enough" samples are included in the training set. Also, in this case, alternatives to a simple initial split of the data might be a good idea; see Section 3.4.6 below for additional details. The ratio of the number of samples ($$n$$) to the number of predictors ($$p$$) is important to consider, too. We will have much more flexibility in splitting the data when $$n$$ is much greater than $$p$$. However, when $$n$$ is less than $$p$$, we can run into modeling difficulties even if $$n$$ is seemingly large.

There are a number of ways to split the data into training and testing sets. The most common approach is to use some version of random sampling. Completely random sampling is a straightforward strategy to implement. However, this approach can be problematic when the outcome classes are not evenly distributed. A less risky splitting strategy is to use a stratified random sample based on the outcome. For classification models, this is accomplished by selecting samples at random within each class. This approach ensures that the frequency distribution of the outcome is approximately equal within the training and test sets. When the outcome is numeric, artificial strata can be constructed based on the quartiles of the data. For example, in the Ames housing price data, the quartiles of the outcome distribution would break the data into four artificial groups containing roughly 230 houses. The training/test split would then be conducted within these four groups, and the four training set portions pooled together (and the same for the test set).

Non-random sampling can also be used when there is a good reason. One such case is when there is an important temporal aspect to the data. Here, it may be prudent to use the most recent data as the test set. This is the approach used in the Chicago transit data discussed in Section 4.1.

3.4 Resampling

As previously discussed, there are times when there is the need to understand the effectiveness of the model without resorting to the test set. Simply repredicting the training set is problematic, so a procedure is needed to get an appraisal using the training set. Resampling methods will be used for this purpose. Resampling methods can generate different versions of our training set that can be used to simulate how well models would perform on new data. These techniques differ in terms of how the resampled versions of the data are created and how many iterations of the simulation process are conducted. In each case, a resampling scheme generates a subset of the data to be used for modeling and another that is used for measuring performance.
Here, we will refer to the former as the "analysis set" and the latter as the "assessment set". They are roughly analogous to the training and test sets described at the beginning of the chapter24. A graphic of an example data hierarchy with three resamples is shown in Figure 3.5.

Fig. 3.5: A diagram of typical data usage with three resamples of the training data.

There are a number of different flavors of resampling that will be described in the next four sections.

3.4.1 V-Fold Cross-Validation and Its Variants

Simple V-fold cross-validation creates V different versions of the original training set that have approximately the same size. Each of the V assessment sets contains 1/V of the training set, and each excludes a different set of data points. The analysis sets contain the remainder (typically called the "folds"). Suppose V = 10; then there are 10 different versions of 90% of the data and also 10 versions of the remaining 10% for each corresponding resample.

To use V-fold cross-validation, a model is created on the first fold and the corresponding assessment set is predicted by the model. The assessment set is summarized using the chosen performance measures (e.g., RMSE, the area under the ROC curve) and these statistics are saved. This process proceeds in a round-robin fashion so that, in the end, there are V estimates of performance for the model, each calculated on a different assessment set. The cross-validation estimate of performance is computed by averaging the V individual metrics.

When the outcome is categorical, stratified splitting techniques can also be applied here to make sure that the analysis and assessment sets produce the same frequency distribution of the outcome. Again, this is a good idea when the classes are imbalanced but is unlikely to be problematic otherwise.

For example, for the OkCupid data, stratified 10-fold cross-validation was used. The training set consists of 38,809 profiles and each of the 10 assessment sets contains 3,881 different profiles. The area under the ROC curve was used to measure performance of the logistic regression model previously mentioned. The 10 areas under the curve ranged from 0.778 to 0.804 and their average value was 0.789. Without using the test set, we can use this statistic to forecast how this model would perform on new data.

As will be discussed in Section 3.4.5, resampling methods have different characteristics. One downside to basic V-fold cross-validation is that it is noisier than other resampling schemes. One way to compensate for this is to conduct repeated V-fold cross-validation. If R repeats are used, V resamples are created R separate times and, in the end, RV resamples are averaged to estimate performance. Since more data are being averaged, the standard error of the final average would decrease by a factor of $$\sqrt{R}$$ (using a Gaussian approximation25). Again, the noisiness of this procedure is relative and, as one might expect, is driven by the amount of data in the assessment set. For the OkCupid data, the area under the ROC curve was computed from 3,881 profiles and is likely to yield sufficiently precise estimates (even if we only expect about 716 of them to be STEM profiles).

The assessment sets can be used for model validation and diagnosis. Table 3.1 and Figure 3.2 use these holdout predictions to visualize model performance. Also, Section 4.4 has a more extensive description of how the assessment data sets can be used to drive improvements to models.
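A bare-bones sketch of this V-fold loop, in Python with simulated data (the book's analyses are in R), might look like the following:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + rng.normal(scale=2.0, size=1000) > 1.5).astype(int)

aucs = []
splitter = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
for analysis_idx, assess_idx in splitter.split(X, y):
    # Fit on the analysis set, evaluate on the held-out assessment set.
    model = LogisticRegression().fit(X[analysis_idx], y[analysis_idx])
    prob = model.predict_proba(X[assess_idx])[:, 1]
    aucs.append(roc_auc_score(y[assess_idx], prob))

print("per-fold AUCs:", np.round(aucs, 3))
print("cross-validation estimate:", round(np.mean(aucs), 3))
```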
One other variation, leave-one-out cross-validation, has V equal to the size of the training set. This is a somewhat deprecated technique and may only be useful when the training set size is extremely small (Shao 1993).

Figure 3.6 shows a diagram of 10-fold cross-validation for a hypothetical data set with 20 training set samples. For each resample, two different training set data points are held out for the assessment set. Note that the assessment sets are mutually exclusive and contain different instances.

3.4.2 Monte Carlo Cross-Validation

V-fold cross-validation produces V sets of splits with mutually exclusive assessment sets. Monte Carlo resampling produces splits that are likely to contain overlap. For each resample, a random sample is taken with $$\pi$$ proportion of the training set going into the analysis set and the remaining samples allocated to the assessment set. Like the previous procedure, a model is created on the analysis set and the assessment set is used to evaluate the model. This splitting procedure is conducted B times and the average of the B results is used to estimate future performance. B is chosen to be large enough so that the average of the B values has an acceptable amount of precision.

Figure 3.6 also shows Monte Carlo cross-validation with 10 resamples and $$\pi = 0.90$$. Note that, unlike 10-fold cross-validation, some of the same data points are used in different assessment sets.

3.4.3 The Bootstrap

A bootstrap resample of the data is defined to be a simple random sample that is the same size as the training set, where the data are sampled with replacement (Davison and Hinkley 1997). This means that when a bootstrap resample is created, there is a 63.2% chance that any given training set member is included in the bootstrap sample at least once. The bootstrap resample is used as the analysis set, and the assessment set, sometimes known as the out-of-bag sample, consists of the members of the training set not included in the bootstrap sample. As before, bootstrap sampling is conducted B times and the same modeling/evaluation procedure is followed to produce a bootstrap estimate of performance that is the mean of the B results.

Figure 3.7 shows an illustration of ten bootstrap samples created from a 20-sample data set. The colors show that several training set points are selected multiple times for the analysis set. The assessment set would consist of the rows that have no color.

3.4.4 Rolling Origin Forecasting

This procedure is specific to time-series data or any data set with a strong temporal component (Hyndman and Athanasopoulos 2013). If there are seasonal or other long-term trends in the data, random splitting of data between the analysis and assessment sets may disrupt the model's ability to estimate these patterns.

In this scheme, the first analysis set consists of the first M training set points, assuming that the training set is ordered by time or another temporal component. The assessment set would consist of the next N training set samples. The second resample keeps the data set sizes the same but starts the splitting process at the second training set member. The splitting scheme proceeds until there is no more data to produce the same data set sizes. Supposing that this results in B splits of the data, the same process is used for modeling and evaluation and, in the end, there are B estimates of performance generated from each of the assessment sets.
A simple method for estimating performance is again to average these B results. However, it should be understood that, since there is significant overlap in the rolling assessment sets, the B samples themselves constitute a time series and might also display seasonal or other temporal effects.

Figure 3.8 shows this type of resampling where 10 data points are used for analysis and the subsequent two training set samples are used for assessment.

There are a number of variations of the procedure:

- The analysis set need not be the same size. It can cumulatively grow as the moving window proceeds along the training set. In other words, the first analysis set would contain M data points, the second M + 1, and so on. This is the approach taken with the Chicago train data modeling and is described in Chapter 4.
- The splitting procedure could skip iterations to produce fewer resamples. For example, in the Chicago data, there are daily measurements from 2001 to 2016. Incrementing by one day would produce an excessive value of B. For these data, 13 samples were skipped so that the splitting window moves in two-week blocks instead of by individual day.
- If the training data are unevenly sampled, the same procedure can be used but moves over time increments rather than data set row increments. For example, the window could move over 12-hour periods for the analysis sets and 2-hour periods for the assessment sets.

This resampling method differs from the previous ones in at least two ways: the splits are not random, and the assessment set is not the remainder of the training set once the analysis set is removed.

3.4.5 Variance and Bias in Resampling

In Section 1.2.5, variance and bias properties of models were discussed. Resampling methods have the same properties, but their effects manifest in different ways.

Variance is more straightforward to conceptualize. If you were to conduct 10-fold cross-validation many times on the same data set, the variation in the resampling scheme could be measured by determining the spread of the resulting averages. This variation could be compared to that of a different scheme repeated the same number of times (and so on) to get relative comparisons of the amount of noise in each scheme.

Bias is the ability of a particular resampling scheme to hit the true underlying performance parameter (which we will never truly know). Generally speaking, as the amount of data in the analysis set shrinks, the resampling estimate's bias increases. In other words, the bias in 10-fold cross-validation is smaller than the bias in 5-fold cross-validation. For Monte Carlo resampling, this obviously depends on the value of $$\pi$$. However, through simulations, one can see that 10-fold cross-validation has less bias than Monte Carlo cross-validation when $$\pi = 0.10$$ and B = 10 are used. Leave-one-out cross-validation would have very low bias since its analysis set is only one sample smaller than the training set.

Figure 3.9 contains a graphical representation of variance and bias in resampling schemes, where the curves represent the distribution of the resampling statistics if the same procedure were conducted on the same data set many times. Four possible variance/bias cases are represented. We will assume that the model metric being measured here is better when the value is large (such as $$R^2$$ or sensitivity) and that the true value is represented by the green vertical line.
The upper right panel demonstrates a pessimistic bias, since the values tend to be smaller than the true value, while the panel below it in the lower right shows a resampling scheme that has relatively low variance and whose distribution is centered on the true value (so that it is nearly unbiased).

In general, for a fixed training set size and number of resamples, simple V-fold cross-validation is generally believed to be the noisiest of the methods discussed here and the bootstrap the least variable26. The bootstrap is understood to be the most biased (since about 36.8% of the training set is selected for assessment) and its bias is generally pessimistic (i.e., likely to show worse model performance than the true underlying value). There have been a few attempts at correcting the bootstrap's bias, such as Efron (1983) and Efron and Tibshirani (1997). While V-fold cross-validation does have inflated variance, its bias is fairly low when V is 10 or more. When the training set is not large, we recommend using five or so repeats of 10-fold cross-validation, depending on the required precision, the training set size, and other factors.

3.4.6 What Should Be Included Inside of Resampling?

In the preceding descriptions of resampling schemes, we have said that the analysis set is used to "build the model". This is somewhat of a simplification. In order for any resampling scheme to produce performance estimates that generalize to new data, it must contain all the steps in the modeling process that could significantly affect the model's effectiveness. For example, in Section 1.1, a transformation procedure was used to modify the predictor variables, and this resulted in an improvement in performance. During resampling, this step should be included in the resampling loop. Other preprocessing or filtering steps (such as PCA signal extraction, predictor correlation filters, and feature selection methods) must be part of the resampling process in order to understand how well the model is doing and to measure when the modeling process begins to overfit.

There are some operations that can be exempted. For example, in Chapter 2, only a handful of patients had missing values, and these were imputed using the median. For such a small modification, we did not include these steps inside of resampling. In general though, imputation can have a substantial impact on performance and its variability should be propagated into the resample results. Centering and scaling can also be exempted from resampling, all other things being equal.

As another example, the OkCupid training data were downsampled so that the class proportions were equal. This is a substantial data processing step and it is important to propagate the effects of this procedure through the resampling results. For this reason, the downsampling procedure is executed on the analysis set of every resample and then again on the entire training set when the final model is created.

One other aspect of resampling is related to the concept of information leakage, in which the test set data are used (directly or indirectly) during the training process. This can lead to overly optimistic results that do not replicate on future data points, and it can occur in subtle ways. For example, suppose a data set of 120 samples has a single predictor and is sequential in nature, such as a time series.
If the predictor is noisy, one method for preprocessing the data is to apply a moving average to the predictor data and replace the original data with the smoothed values. Suppose a 3-point average is used and that the first and last data points retain their original values. Our inclination might be to smooth the whole sequence of 120 data points with the moving average and then split the data into training and test sets (say, with the first 100 rows in training). The subtle issue here is that the 100th data point, the last in the training set, uses the first data point in the test set to compute its 3-point average. Often the most recent data are most related to the next value in the time series. Therefore, including the most recent point will likely bias model performance.

To provide a solid methodology, one should develop the list of preprocessing techniques, estimate them using only the training data points, and then apply the techniques to future data (including the test set). Arguably, the moving average issue cited above is most likely minor in terms of consequences, but it illustrates how easily the test data can creep into the modeling process. The correct approach for this preprocessing technique would be to split the data first and then apply the moving average smoothers to the training and test sets independently.

Another, more overt, path to information leakage can sometimes be seen in machine learning competitions where the training and test set data are given at the same time. While the test set data often have the outcome data blinded, it is possible to "train to the test" by only using the training set samples that are most similar to the test set data. This may very well improve the model's performance scores for this particular test set but would ruin the model for predicting on a broader data set.

Finally, with large amounts of data, alternative data usage schemes might be a good idea. Instead of a simple training/test split, multiple splits can be created for specific purposes. For example, a specific split of the data could be used to determine the relevant predictors for the model prior to model building using the training set. This would reduce the need to include the expensive feature selection steps inside of resampling. The same approach could be used for post-processing activities, such as determining an appropriate probability cutoff from a receiver operating characteristic curve.

3.5 Tuning Parameters and Overfitting

Many models include parameters that, while important, cannot be directly estimated from the data. These tuning parameters (sometimes called hyperparameters27) are important since they often control the complexity of the model and thus also affect the variance-bias trade-off. As an example, the K-nearest neighbors model stores the training set data and, when predicting new samples, locates the K training set points that are in the closest proximity to the new sample. Using the training set outcomes for the neighbors, a prediction is made. The number of neighbors controls the complexity and, in a manner very similar to the moving average discussion in Section 1.2.5, controls the variance and bias of the models. When K is very small, there is the most potential for overfitting, since only a few values are used for prediction and these are most susceptible to changes in the data. However, if K is too large, too many potentially irrelevant data points are used for prediction, resulting in an underfit model.
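A toy illustration of this effect, using a simulated one-predictor regression problem (a Python sketch only, not the Ames analysis below): very small K chases the noise, while very large K averages over irrelevant points and washes out the signal.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)

# A noisy one-predictor problem with a known underlying function, sin(x).
x = np.sort(rng.uniform(0, 10, 200)).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(scale=0.4, size=200)
x_new = np.array([[5.0]])

for k in (1, 5, 50, 150):
    pred = KNeighborsRegressor(n_neighbors=k).fit(x, y).predict(x_new)
    print(f"K={k:3d}  prediction at x=5: {pred[0]: .3f}  (truth ~ {np.sin(5.0):.3f})")
```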
Fig. 3.10: A house in Ames in the test set (blue) along with its five closest neighbors from the training set (red). The housing prices from the training set are used to estimate the price of the test set house.

To illustrate this, consider Figure 3.10, where a single test set sample is shown as the blue circle and its five closest (geographic) neighbors from the training set are shown in red. The test set sample's sale price is $282.5K and the neighbors' prices, from closest to farthest, are: $320.0K, $248.5K, $286.5K, $274.0K, and $320.0K. Using K = 1, the model would miss the true house price by $37.5K. This illustrates the concept of overfitting introduced in Section 1.2.1: the model is too aggressively using patterns in the training set to make predictions on new data points. For this model, increasing the number of neighbors might help alleviate the issue. Averaging all K = 5 points to make a prediction substantially cuts the error, to $7.3K. This illustrates, for this model, the effect that the tuning parameter can have on the quality of the models. In some models, there can be more than one tuning parameter. Again for the nearest neighbors model, a different distance metric could be used, as well as different schemes for weighting the neighbors so that more distant points have less of an effect on the prediction.

To make sure that proper values of the tuning parameters are used, some sort of search procedure is required, along with a method for obtaining good, generalizable measures of performance. For the latter, repeatedly using the test set for these questions is problematic since it would lose its impartiality. Instead, resampling is commonly used. The next section describes a few methods for determining optimal values of these types of parameters.

3.6 Model Optimization and Tuning

The search for the best tuning parameter values can be done in many ways, but most methods fall into two main categories: those that predefine which values to evaluate and those that incrementally determine the values. In the first case, the most well known procedure is grid search. Here, a set of candidate tuning parameter values is specified and then evaluated. In some cases, the model will have more than one tuning parameter, and in this case a candidate parameter combination is multidimensional. We recommend using resampling to evaluate each distinct parameter value combination to get good estimates of how well each candidate performs. Once the results are calculated, the "best" tuning parameter combination is chosen and the final model is fit to the entire training set with this value. The best combination can be determined in various ways, but the most common approach is to pick the candidate with the best results.

As an example, a simple $$K$$-nearest neighbors model requires the number of neighbors to be specified. For the OkCupid data, this model will be used to predict whether a profile belongs to a STEM profession. In this context, the predictors contain many different types of profile characteristics, so that a "nearest neighbor" is really a similar profile based on many characteristics. We will predefine the candidate set to be $$K = 1, 3, \ldots, 201$$. When combined with the same 10-fold cross-validation process, a total of 1,010 temporary models will be used to determine a good value of $$K$$. Once the best value of K is chosen, one final model is created using the optimal number of neighbors. The resampling profile is shown in Figure 3.11.
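The mechanics of such a grid search can be sketched as follows, in Python with simulated data and a coarser grid (purely illustrative; the book's analysis is in R):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=2.0, size=1000) > 0).astype(int)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=3)
grid = range(1, 202, 10)   # a coarser grid than the text's K = 1, 3, ..., 201

# Resample every candidate value of K and keep the average AUC.
results = {
    k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y,
                       cv=cv, scoring="roc_auc").mean()
    for k in grid
}
best_k = max(results, key=results.get)
print("best K:", best_k, " resampled AUC:", round(results[best_k], 3))
```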
Each black point in Figure 3.11 is the average of performance for ten different models, each estimated using a distinct 90% of the training set. The configuration with the largest area under the ROC curve used 197 neighbors, with a corresponding AUC of 0.757. Figure 3.11 also shows the individual resampling profiles for each of the ten resamples. The reduced amount of variation in these data is mostly due to the size of the training set28.

When there are many tuning parameters associated with a model, there are several ways to proceed. First, a multidimensional grid search can be conducted, where a grid of candidate parameter combinations is evaluated. In some cases, this can be very inefficient. Another approach is to define a range of possible values for each parameter and to randomly sample the multidimensional space enough times to cover a reasonable amount of it (Bergstra and Bengio 2012). This random search grid can then be resampled in the same way as a more traditional grid. This procedure can be very beneficial when there are a large number of tuning parameters and there is no a priori notion of which values should be used. A large grid may be inefficient to search, especially if the profile has a fairly stable pattern with little change over some range of the parameter. Neural networks, gradient boosting machines, and other models can effectively be tuned using this approach29.

To illustrate this procedure, the OkCupid data were used once again. A single-layer, feed-forward neural network was used to model the probability of being in the STEM field using the same predictors as the previous two models. This model is an extremely flexible nonlinear classification system with many tuning parameters. See Goodfellow, Bengio, and Courville (2016) for an excellent primer on neural networks and deep learning models. The main tuning parameters for the model are:

- The number of hidden units. This parameter controls the complexity of the neural network. Larger values enable higher performance but also increase the risk of overfitting. For these data, the number of units in the hidden layer was randomly selected to be between 2 and 20.
- The activation function. The nonlinear function set by this parameter links different parts of the network. Three different functions were used: traditional sigmoidal curves, tanh, and rectified linear units (ReLU).
- The dropout rate. This is the rate at which coefficients are randomly set to zero during model training and is meant to attenuate overfitting (Srivastava et al. 2014). Rates between 0 and 80% were considered.

The fitting procedure for neural network coefficients can be very numerically challenging. There are usually a large number of coefficients to estimate, and there is a significant risk of finding a local optimum. Here, we use a gradient-based optimization method called RMSProp30 to fit the model. This is a modern algorithm for finding coefficient values, and there are several model tuning parameters for this procedure31:

- The batch size controls how many of the training set data points are randomly exposed to the optimization process at each iteration. This has the effect of reducing potential overfitting by providing some randomness to the optimization process. Batch sizes between 10 and 40K were considered.
- The learning rate parameter controls the rate of descent during the parameter estimation iterations; these values were constrained to be between zero and one.
- A decay rate that decreases the learning rate over time (ranging between zero and one).
- The root mean square gradient scaling factor ($$\rho$$), which controls how much the gradient is normalized by recent values of the squared gradient. Smaller values of this parameter give more emphasis to recent gradients. The range of this parameter was set to be [0.0, 1.0].

For this model, 20 different seven-dimensional tuning parameter combinations were created randomly, using uniform distributions to sample within the ranges above. Each of these settings was evaluated using the same 10-fold cross-validation splits used previously. The resampled ROC values varied significantly between the candidate parameter values. The best setting, shown in the first row of Table 3.3, had a corresponding area under the ROC curve of 0.785.

Table 3.3: The settings and results for a random search of the neural network parameter space.

| Units | Dropout | Batch Size | Learn Rate | Grad. Scaling | Decay | Act. Fun. | ROC |
|------:|--------:|-----------:|-----------:|--------------:|---------:|:----------|------:|
| 7 | 0.3368 | 11348 | 0.00385 | 0.55811 | 1.16e-04 | sigmoid | 0.785 |
| 5 | 0.4023 | 36070 | 0.04142 | 0.95232 | 3.84e-02 | sigmoid | 0.781 |
| 6 | 0.4720 | 38651 | 0.02634 | 0.80618 | 4.94e-05 | sigmoid | 0.779 |
| 12 | 0.6619 | 38206 | 0.25837 | 0.63325 | 3.09e-02 | sigmoid | 0.777 |
| 7 | 0.3918 | 17235 | 0.01681 | 0.36270 | 2.90e-04 | tanh | 0.773 |
| 10 | 0.4979 | 19103 | 0.04818 | 0.83560 | 1.92e-03 | relu | 0.772 |
| 3 | 0.1190 | 16369 | 0.22210 | 0.22683 | 4.02e-02 | relu | 0.769 |
| 2 | 0.3797 | 22255 | 0.10597 | 0.93841 | 4.27e-05 | sigmoid | 0.768 |
| 7 | 0.6139 | 38198 | 0.30864 | 0.68575 | 1.67e-03 | sigmoid | 0.756 |
| 4 | 0.0694 | 18167 | 0.45844 | 0.94679 | 2.97e-03 | tanh | 0.751 |
| 20 | 0.0497 | 29465 | 0.40072 | 0.49598 | 7.07e-02 | sigmoid | 0.747 |
| 10 | 0.1466 | 33139 | 0.44443 | 0.72107 | 6.22e-05 | tanh | 0.736 |
| 17 | 0.1068 | 17953 | 0.43256 | 0.87800 | 1.99e-04 | tanh | 0.721 |
| 13 | 0.5570 | 13558 | 0.13159 | 0.20389 | 5.96e-05 | relu | 0.717 |
| 14 | 0.6279 | 33800 | 0.18082 | 0.33286 | 1.08e-05 | tanh | 0.697 |
| 11 | 0.5909 | 32417 | 0.61446 | 0.63142 | 3.08e-04 | tanh | 0.665 |
| 14 | 0.3866 | 30514 | 0.92724 | 0.38651 | 3.36e-03 | tanh | 0.664 |
| 10 | 0.4234 | 34655 | 0.49455 | 0.25216 | 3.05e-04 | relu | 0.630 |
| 7 | 0.6183 | 30606 | 0.82481 | 0.71944 | 2.61e-03 | sigmoid | 0.616 |
| 2 | 0.0903 | 33439 | 0.63991 | 0.00398 | 1.92e-03 | tanh | 0.597 |

As previously mentioned, grid search and random search methods have the tuning parameters specified in advance, and the search does not adapt to look for novel values. There are other approaches that do. For example, there are many nonlinear search methods, such as the Nelder-Mead simplex search procedure, simulated annealing, and genetic algorithms, that can be employed (Chong and Żak 2008). These methods conduct very thorough searches of the grid space but tend to be computationally expensive. One reason for this is that each evaluation of a new parameter combination requires a good estimate of performance to guide the search. Resampling is one of the best methods for doing this.

Another approach to searching the space is called Bayesian optimization (Mockus 1994). Here, an initial pool of samples is evaluated using grid or random search. The optimization procedure creates a separate model to predict performance as a function of the tuning parameters and can then make a recommendation as to the next candidate set to evaluate. Once this new point is assessed, the model is updated and the process continues for a set number of iterations (Jones, Schonlau, and Welch 1998)32.

One final point about the interpretation of resampling results is that, by choosing the best settings based on the results and representing the model's performance using these values, there is the risk of optimization bias. Depending on the problem, this bias might over-estimate the model's true performance.
There are nested resampling procedures that can be used to mitigate these biases. See Boulesteix and Strobl (2009) for more information.

3.7 Comparing Models Using the Training Set

When multiple models are in contention, there is often the need to have formal evaluations between them to understand whether any differences in performance are above and beyond what one would expect at random. In our proposed workflows, resampling is heavily relied on to estimate model performance. It is good practice to use the same resamples across any models that are evaluated. That enables apples-to-apples comparisons between models. It also allows formal comparisons to be made between models prior to the involvement of the test set.

Consider the logistic regression and neural network models created for the OkCupid data. How do they compare? Since the two models used the same resamples to fit and assess the models, this leads to a set of 10 paired comparisons. Table 3.4 shows the ROC results per resample and their paired differences. The correlation between these two sets of values is 0.25, indicating that there is likely to be a resample-to-resample effect in the results.

Table 3.4: Matched resampling results (ROC estimates) for two models for predicting the OkCupid data. The ROC metric was used to tune each model. Because each model uses the same resampling sets, we can formally compare the performance between the models.

| Resample | Logistic Regression | Neural Network | Difference |
|:---------|--------------------:|---------------:|-----------:|
| Fold 1 | 0.798 | 0.774 | -0.024 |
| Fold 2 | 0.778 | 0.777 | -0.001 |
| Fold 3 | 0.790 | 0.793 | 0.003 |
| Fold 4 | 0.795 | 0.798 | 0.003 |
| Fold 5 | 0.797 | 0.780 | -0.017 |
| Fold 6 | 0.780 | 0.790 | 0.009 |
| Fold 7 | 0.790 | 0.778 | -0.012 |
| Fold 8 | 0.784 | 0.774 | -0.010 |
| Fold 9 | 0.795 | 0.793 | -0.002 |
| Fold 10 | 0.796 | 0.795 | -0.001 |

Given this set of paired differences, formal statistical inference can be done to compare models. A simple approach would be to consider a paired t-test between the two models or, equivalently, an ordinary one-sample t-test on the differences. The estimated difference in the ROC values is -0.005, with a 95% confidence interval of (-0.013, 0.002). There does not appear to be any evidence of a real performance difference between these models. This approach was previously used in Section 2.3, where the potential variables for predicting stroke outcome were ranked by their improvement in the area under the ROC curve above and beyond the null model.

This comparative approach can be used to compare models or different approaches for the same model (e.g., preprocessing differences or feature sets). The value of this technique is two-fold:

1. It prevents the test set from being used during the model development process, and
2. Many evaluations of an external data set are used to assess the differences.

The second point is most important. By using multiple differences, the variability in the performance statistics can be measured. While a single static test set has its advantages, it is a single realization of performance for a model, and we have no idea of the precision of this value.

More than two models can also be compared, although the analysis must account for the within-resample correlation using a Bayesian hierarchical model (McElreath 2016) or a repeated measures model (West, Welch, and Galecki 2014). The idea for this methodology originated with Hothorn et al. (2005). Benavoli et al. (2016) also provides a Bayesian approach to the analysis of resampling results between models and data sets.

3.8 Computing

R programs for reproducing these analyses can be found at https://github.com/topepo/TBD.
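Using the per-fold values from Table 3.4, the paired analysis described above can be reproduced in a few lines (a Python sketch; the book's own code is in R). The computed interval matches the one reported in the text up to rounding:

```python
import numpy as np
from scipy import stats

# Per-fold ROC AUC values from Table 3.4 (same 10 resamples for each model).
logistic = np.array([0.798, 0.778, 0.790, 0.795, 0.797,
                     0.780, 0.790, 0.784, 0.795, 0.796])
neural   = np.array([0.774, 0.777, 0.793, 0.798, 0.780,
                     0.790, 0.778, 0.774, 0.793, 0.795])

diff = neural - logistic
t, p = stats.ttest_rel(neural, logistic)   # paired t-test on matched folds
ci = stats.t.interval(0.95, df=len(diff) - 1,
                      loc=diff.mean(), scale=stats.sem(diff))
print(f"mean difference: {diff.mean():+.3f}")
print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f}),  p = {p:.2f}")
```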
Kim, Albert Y., and Adriana Escobedo-Land. 2015. "OkCupid Data for Introductory Statistics and Data Science Courses." Journal of Statistics Education 23 (2): 1–25.
Kuhn, Max, and Kjell Johnson. 2013. Applied Predictive Modeling. New York: Springer.
Kvalseth, T. 1985. "Cautionary Note About $$R^2$$." American Statistician 39 (4): 279–85.
Lin, Lawrence I-Kuei. 1989. "A Concordance Correlation Coefficient to Evaluate Reproducibility." Biometrics 45 (1): 255–68.
Andrews, D. F., P. J. Bickel, F. R. Hampel, P. J. Huber, W. H. Rogers, and J. W. Tukey. 1972. Robust Estimates of Location. Princeton, NJ: Princeton University Press.
Rousseeuw, Peter J., and Christophe Croux. 1993. "Alternatives to the Median Absolute Deviation." Journal of the American Statistical Association 88 (424): 1273–83.
Agresti, Alan. 2012. Categorical Data Analysis. Wiley-Interscience.
Friendly, Michael, and David Meyer. 2015. Discrete Data Analysis with R: Visualization and Modeling Techniques for Categorical and Count Data. CRC Press.
McElreath, R. 2016. Statistical Rethinking: A Bayesian Course with Examples in R and Stan. Boca Raton: Chapman & Hall/CRC.
Altman, D. G., and J. M. Bland. 1994b. "Statistics Notes: Diagnostic Tests 2: Predictive Values." British Medical Journal 309 (6947): 102.
Breiman, L., J. Friedman, R. Olshen, and C. Stone. 1984. Classification and Regression Trees. New York: Chapman & Hall.
MacKay, David J. C. 2003. Information Theory, Inference and Learning Algorithms. Cambridge University Press.
Altman, D. G., and J. M. Bland. 1994a. "Diagnostic Tests 3: Receiver Operating Characteristic Plots." British Medical Journal 309 (6948): 188.
Manning, Christopher D., Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press.
Shao, Jun. 1993. "Linear Model Selection by Cross-Validation." Journal of the American Statistical Association 88 (422): 486–94.
Davison, Anthony Christopher, and David Victor Hinkley. 1997. Bootstrap Methods and Their Application. Cambridge University Press.
Hyndman, R., and G. Athanasopoulos. 2013. Forecasting: Principles and Practice. OTexts.
Efron, B. 1983. "Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation." Journal of the American Statistical Association, 316–31.
Efron, B., and R. Tibshirani. 1997. "Improvements on Cross-Validation: The 632+ Bootstrap Method." Journal of the American Statistical Association, 548–60.
Bergstra, James, and Yoshua Bengio. 2012. "Random Search for Hyper-Parameter Optimization." Journal of Machine Learning Research 13: 281–305.
Goodfellow, I., Y. Bengio, and A. Courville. 2016. Deep Learning. MIT Press.
Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. "Dropout: A Simple Way to Prevent Neural Networks from Overfitting." Journal of Machine Learning Research 15: 1929–58.
Chong, Edwin K. P., and Stanislaw H. Żak. 2008. "Global Search Algorithms." In An Introduction to Optimization, 267–95. John Wiley & Sons.
Mockus, Jonas. 1994. "Application of Bayesian Approach to Numerical Methods of Global and Stochastic Optimization." Journal of Global Optimization 4 (4): 347–65.
Jones, Donald R., Matthias Schonlau, and William J. Welch. 1998. "Efficient Global Optimization of Expensive Black-Box Functions." Journal of Global Optimization 13 (4): 455–92.
Boulesteix, Anne-Laure, and Carolin Strobl. 2009. "Optimal Classifier Selection and Negative Bias in Error Rate Estimation: An Empirical Study on High-Dimensional Prediction." BMC Medical Research Methodology 9 (1): 85.
West, Brady T., Kathleen B. Welch, and Andrzej T. Galecki. 2014. Linear Mixed Models: A Practical Guide Using Statistical Software. CRC Press.
Hothorn, T., F. Leisch, A. Zeileis, and K. Hornik. 2005. "The Design and Analysis of Benchmark Experiments." Journal of Computational and Graphical Statistics 14 (3): 675–99.
Benavoli, Alessio, Giorgio Corani, Janez Demsar, and Marco Zaffalon. 2016. "Time for a Change: A Tutorial for Comparing Multiple Classifiers Through Bayesian Analysis." arXiv.org.

15. While there have been instances where online dating information has been obtained without authorization, these data were made available with permission from OkCupid president and co-founder Christian Rudder. For more information, see the original publication. In these data, no user names or images were made available.
16. Note that these values were not obtained by simply re-predicting the data set. The values in this table are from the set of "assessment" sets generated during the cross-validation procedure defined in Section 3.4.1.
17. Values close to -1 are rarely seen in predictive modeling since the models are seeking to find predicted values that are similar to the observed values. We have found that a predictive model that has difficulty finding a relationship between the predictors and the response has a kappa value slightly below or near 0.
18. The formulas that follow can be naturally extended to more than two classes, where $$C$$ represents the total number of classes.
19. In fact, the Gini statistic is equivalent to the binomial variance when there are two classes.
20. While not helpful for comparing models, these two statistics are widely used in the process of creating decision trees. See Breiman et al. (1984) and Quinlan (1993) for examples. In that context, these metrics enable tree-based algorithms to create effective models.
21. See Chapter 16 of Kuhn and Johnson (2013).
22. As with the confusion matrix in Table 3.1, these data were created during 10-fold cross-validation.
23. There are other types of subsets, such as a validation set, but these are not explored here.
24. In fact, many people use the terms "training" and "testing" to describe the splits of the data produced during resampling. We avoid that here because 1) resampling is only ever conducted on the training set samples and 2) the terminology can confuse people since the same term would be used for different versions of the original data.
25. This is based on the idea that the standard error of the mean has $$\sqrt{R}$$ in the denominator.
26. This is a general trend; there are many factors that affect these comparisons.
27. Not to be confused with the hyperparameters of a prior distribution in Bayesian analysis.
28. This is generally not the case in other data sets; visualizing the individual profiles can often show an excessive amount of noise from resample to resample that is averaged out in the end.
29. However, a regular grid may be more efficient. For some models, there are optimizations that can be used to compute the results for a candidate parameter set without refitting the model; the nature of random search cancels this benefit.
30. RMSProp is a general optimization method that uses gradients.
The details are beyond the scope of this book, but more information can be found in Goodfellow, Bengio, and Courville (2016) and at https://en.wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp.
31. Many of these parameters are fairly arcane for those not well acquainted with modern derivative-based optimization. Chapter 8 of Goodfellow, Bengio, and Courville (2016) has a substantial discussion of these methods.
32. The GitHub repository http://bit.ly/2yjzB5V has example code for nonlinear search methods and Bayesian optimization of tuning parameters.
http://mathoverflow.net/questions/54153/computational-cost-of-converting-between-3-manifold-presentations
# Computational cost of converting between 3-manifold presentations

Given a 3-manifold presented as a triangulation, a Heegaard splitting, or a Dehn surgery, what is the computational cost of converting to the other two presentations? I would like Heegaard splittings to be given as a word in the standard Dehn twist generators, and Dehn surgery as a word in the standard braid group generators.

For instance, suppose we want to triangulate a Heegaard splitting. Although I can't find a good reference for this, I think it can be done in time which is linear in both the genus and the length of the Dehn word. Start with a triangulation of a handlebody such that there's a strip of triangles along each of the canonical curves. To implement a Dehn twist, glue tetrahedra along the strip so that the result is a sequence of 2-2 Pachner moves on the surface triangulation, like in this picture:

What about the other directions (e.g., converting a triangulation into a Dehn surgery)? Can they also be done efficiently?

edit: to whatever extent it may matter, I am primarily interested in simplicial triangulations, i.e., a single edge may not form a loop.

-

Just a small remark - your picture is not yet complete. You started with an annulus of "length 4". You have layered 8 times (attached 8 tetrahedra) and need to layer 8 more times to finish. In general in this technique an annulus of length k requires k^2 tetrahedra to twist. Alternatively, you could retriangulate to get an annulus of length one, layer once, and untriangulate. This only costs a linear number of tetrahedra, but is nasty to implement. – Sam Nead Feb 3 '11 at 16:45

Yes, one would need to perform the pictured procedure twice to implement a full $2\pi$ twist, and (as you say) this would require $k^2$ tetrahedra to implement on an annulus of length $k$. However, for the purposes of computational complexity, $k$ is just a constant determined by how many tetrahedra we use per handle - it does not grow with either the genus or the length of the Dehn word. – Gorjan Alagic Feb 3 '11 at 19:29

I believe all these translations are in principle easy. The challenge is in implementing them cleanly and efficiently; the translations can be annoying and confusing.

As you describe, to go from a Heegaard splitting to a triangulation, it's just a matter of a sequence of Pachner moves. If you allow (as is usually sensible to do) non-simplicial triangulations, where edges are allowed to form a loop in the manifold, then you can use one-vertex triangulations of the surfaces. There is only a finite set of these up to isomorphism, and the Pachner moves correspond to the one-skeleton of an equivariant cell division of Teichmüller space of the surface with a distinguished point (the vertex). For a finitely generated group, the translation from one set of generators to another has linear cost. The same principle works here, for any fixed genus: it's a translation from generators for the mapping class group to a set of generators for a mapping class groupoid generated by the Pachner moves. (Lee Mosher in particular has studied this correspondence in detail.) The linearity still holds, or at least nearly holds (this depends on the details of definitions), when you consider surfaces of every genus together, if you use Dehn twists around a system of curves where each curve only meets a bounded number of other curves (as is the usual convention).
If you allow ideal triangulations for the manifold minus some finite collection of curves, you can do even better: the number of simplices needed is linear in the number of powers of Dehn twists using standard generators.

To go in the other direction, a triangulation is practically a special case of a Heegaard splitting: a regular neighborhood of the 1-skeleton union its complement. If you want the handlebodies described in standard form, it's essentially just a matter of choosing a spanning tree for the 1-skeleton and dual 1-skeleton, plus some method to give a homeomorphism from the regular neighborhood of the spanning tree to a sphere with a set of distinguished points.

If a Heegaard diagram is described as a nonseparating system of g simple curves on the boundary of a genus g handlebody to which disks are attached in the complementary handlebody, this can be translated into a gluing map expressed as a word in Dehn twists in a reasonably straightforward way; this also gives a Dehn surgery description. In fact, Lickorish described a method in his paper showing that all 3-manifolds are obtained by Dehn surgery on links. I believe the number of powers of Dehn twists needed should be a linear function of the number of bits used to describe the $g$ curves using either train tracks or normal curve coordinates.

-

Lickorish's procedure for constructing a surgery presentation from a Heegaard splitting isn't polynomial-time, at least not as described by Lickorish. It's been a while since I looked at his argument, but I believe it's an extremely inefficient argument -- likely doubly exponential if your start-up data is a Heegaard diagram (i.e., the surface automorphism not yet written as a product of Dehn twists). – Ryan Budney Feb 3 '11 at 15:28

@Gorjan Alagic: Sometimes theoretically efficient algorithms are not helpful in actual implementations, because if they're more complicated they're more error-prone, especially for people implementing them mostly for themselves or a small audience. Also, even when there are linear translations from one description to another, the constants might be very important. If you have an exponential-type search you want to do, then even adding a few more simplices can make the difference between feasible and infeasible. – Bill Thurston Feb 3 '11 at 17:51

@Gorjan Alagic: Why do you want simplicial triangulations? These often require a lot more simplices, and for many purposes, as long as a triangulation lifts to be simplicial in the universal covering, you can do the same things as if it were simplicial, perhaps with a little bookkeeping with the fundamental group. – Bill Thurston Feb 3 '11 at 17:53

@Ryan Budney: I didn't actually remember how complex Lickorish's description was; thanks for the comment. But curve simplification can be done efficiently, using methods like those discussed in "3-manifold knot genus is NP-complete" by Agol, Hass and Thurston. I implemented this in Mathematica for curves on surfaces to make sure I understood it correctly when we were writing the paper. – Bill Thurston Feb 3 '11 at 18:10

I've been discussing this with my son Dylan; cf. his paper with Costantino, "3-Manifolds Efficiently Bound 4-Manifolds", arxiv.org/pdf/math/0506577.pdf, where among other things they show a 3-manifold is obtained by (integer) surgery on a link that has crossing number quadratic in the number of simplices, and the link can be found in quadratic time.
It doesn't seem obvious how to give a polynomial algorithm that produces a Heegaard splitting described as a gluing by a word in the standard generators for the mapping class group, but we think it's likely doable. Perhaps more later. – Bill Thurston Feb 5 '11 at 16:29

You might find this thread interesting: "How expensive is knowledge? Knots, links, 3- and 4-manifold algorithms." I believe it's expected there should be a polynomial-time algorithm to go from a triangulation to a surgery diagram or Heegaard splitting -- in particular, see the Thurston and Costantino reference in the above thread. I've been hoping to eventually flesh that out and implement it in Regina, but I haven't had the time yet.

For triangulating 3-manifolds given by Heegaard splittings, this was done by Schleimer. See his webpage: http://www.warwick.ac.uk/~masgar/Maths/twister.html; C code is available as well. I believe he does something quite comparable to what you describe, using layered triangulations.

-

Thanks for the comment, and the link to Schleimer's work! – Gorjan Alagic Feb 3 '11 at 19:33
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=135&t=42274&p=145044
## delta G knot

$\Delta G^{\circ}= \Delta H^{\circ} - T \Delta S^{\circ}$

$\Delta G^{\circ}= -RT\ln K$

$\Delta G^{\circ}= \sum \Delta G_{f}^{\circ}(products) - \sum \Delta G_{f}^{\circ}(reactants)$

**Samantha Chung 4I:** What is the difference between delta G knot and delta G?

**michelle:** Like the corresponding concept for enthalpy, delta G knot is the change in G under standard conditions (1 atm or 1 M). Delta G is the change in free energy of the process (reaction) under whatever the actual conditions are.

**Xinyi Zeng 4C:** Just to add on, delta G (without the knot) is calculated from delta H - T * delta S, while delta G knot is calculated from delta H knot - T * delta S knot. You can use delta G's equation to predict the effect of temperature on the feasibility of a reaction, sometimes by just knowing the signs of delta H and delta S for the reaction, not under standard conditions.

**katie_sutton1B:** Delta G knot indicates standard conditions, while regular delta G is the change in free energy under other conditions.

**Sophia_Kiessling_2L:** G knot is the G value under standard conditions.
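A quick numerical illustration of how the first equation behaves (a Python sketch with hypothetical delta H° and delta S° values, chosen only to show the arithmetic and the temperature dependence mentioned above):

```python
# Hypothetical reaction values, for illustration only:
dH = -92.2e3    # delta H in J/mol
dS = -198.7     # delta S in J/(mol K)

# delta G = delta H - T * delta S; the sign flips as T increases
# because both delta H and delta S are negative here.
for T in (298.15, 500.0, 800.0):
    dG = dH - T * dS
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:6.1f} K   delta G = {dG/1000:+8.1f} kJ/mol   ({verdict})")
```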
http://thevirtuosi.blogspot.com/2011/01/falling-ice.html
## Thursday, January 27, 2011

### Falling Ice

It's been a while since I posted anything, much to my shame. Hopefully this post marks a change in that streak. Today I'm going to consider a very practical application of all this physics stuff. One of my housemates parks his car on the side of the house, with the front of the car facing the house. Living in Ithaca, NY, we've had the usual cold and snowy weather, like the rest of the northeast USA this winter. Yet, early last week, we had some unusually warm weather, in the 30s (Fahrenheit). A few days later, my housemate went out to his car and discovered that falling chunks of ice had broken his windshield! Now, to be clear here, I'm not talking about icicles; I'm talking about large, block-like chunks. My best guess is that during the warm days, snow on the roof turned into chunks of ice and slid off the roof. The question I'm going to try to answer today is: how far from the house could these chunks possibly land? Put another way, how far from the house would we have to park our cars to not risk broken windshields from falling ice?

The First Attempt

We'll start with the simplest assumptions we can think of. First, we'll assume that there is no friction on the ice block as it slides down the roof. We'll also assume there's no air resistance slowing down the ice in the air. The maximum range will be given by a block of ice sliding from the top of the roof. Taking the height of the peak of the roof as h, relative to the edge of the roof, we can write down the magnitude of the velocity of the ice chunk when it reaches the edge of the roof. We start by setting the change in gravitational potential energy equal to the change in kinetic energy. Recalling the form for both of these,

$PE=mgh$

$KE=\tfrac{1}{2}mv^2$

we can set these equal and solve for v,

$mgh = \tfrac{1}{2}mv^2$

so

$|v|=\sqrt{2gh}$

This should be a familiar expression to anyone who went through introductory mechanics. Now, given that the roof is at an angle theta, we can write down the x (horizontal) and y (vertical) components of velocity,

$v_x=|v|\cos\theta$

$v_y=-|v|\sin \theta$

where I've introduced a minus sign in the y component of velocity to indicate that the ice chunk is falling.

Now that we have the velocity, we have to call upon some more kinematics. To figure out how far the ice flies, we have to know how long it is in the air. So we start by considering the vertical motion. The distance traveled by an object with an initial velocity, v_0, and a constant acceleration, a, is given by

$\Delta y=\tfrac{1}{2}at^2+v_0t$

In our case, the distance traveled is the height of the first two floors of my house. The acceleration is that of gravity, g, and the initial velocity is the y component of velocity we found above. We'd like to find the time it takes to travel this distance. We have to be a little careful with our minus signs: by our convention the acceleration is in the negative direction, and the change in position is negative. Working all of that out, and plugging in our known values, we get

$\tfrac{1}{2}gt^2+|v|t\sin \theta - l =0$

where l is the height of the house.
We can solve this for t, finding

$t=\frac{-|v|\sin \theta + \sqrt{(|v|\sin \theta)^2+2gl}}{g}$

The horizontal distance traveled is simply the horizontal velocity times the time,

$x=\frac{|v|\cos\theta}{g}\left(-|v|\sin \theta + \sqrt{(|v|\sin \theta)^2+2gl}\right)$

a result that you may recognize as the 'projectile range formula' (particularly if I brought the minus on the $|v|\sin\theta$ term into the sine, indicating that I'm firing at a negative angle, that is, downwards). Having found that result, let's plug in our velocity, and then some numbers.  First,

$x=\frac{\sqrt{2gh}\cos\theta}{g}\left(-\sqrt{2gh}\sin \theta + \sqrt{2gh\sin^2 \theta+2gl}\right)$

Now, for some estimation.  I'd say that the height of the roof peak is 10 ft, the height of the first two floors of the house is 20 ft, and the angle of the roof is 30 degrees.  Having made those estimates, now I just have to plug in all the numbers, yielding

$x=5.2~\mathrm{m}=17~\mathrm{ft}$

That's a very long range! Now, I didn't see any chunks of ice that were more than about 7 ft from the house.  So we have to question what went wrong with the above derivation.  Well, maybe nothing went wrong.  I did calculate the maximum range.  It's quite possible none of these ice chunks were from the very top of the roof.  Still, I'm inclined to think we may have overestimated.  I'd say that our initial velocity was too high.  The ice, as it comes down the roof, will have to push a bunch of snow out of the way.  Even though it may not have much friction with the roof, all that snow will slow it down, and reduce the velocity with which it comes off.  I'm just going to guess that about half of the potential energy it had is lost to the snow and roof, as a rough estimation.  That would give a velocity

$|v| = \sqrt{gh}$

and a maximum distance of

$x= 4~\mathrm{m} = 13~\mathrm{ft}$

which is closer to what I observed.
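For anyone who wants to check the arithmetic, here's a minimal Python sketch of the range formula above (my own check, not from the original post), using the same estimates of a 10 ft peak, 20 ft drop, and 30 degree roof; it reproduces the ~5.2 m and ~4 m figures for the full- and half-energy launch speeds:

```python
import numpy as np

g = 9.8                 # m/s^2
h = 10 * 0.3048         # m, height of the roof peak (10 ft)
l = 20 * 0.3048         # m, height of the first two floors (20 ft)
theta = np.radians(30)  # roof angle

# Launch speeds: no energy loss, and half the potential energy lost to snow
for v in (np.sqrt(2*g*h), np.sqrt(g*h)):
    x = (v*np.cos(theta)/g) * (-v*np.sin(theta)
                               + np.sqrt((v*np.sin(theta))**2 + 2*g*l))
    print("launch speed %.1f m/s -> range %.1f m" % (v, x))
```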
The Second Attempt

I'm still not completely satisfied with the previous work: the answer doesn't match my observation.  As a wise man (Einstein) once said, "make things as simple as possible, but no simpler."  I may be guilty of making the problem too simple here.  So I'm going to add back in air resistance.  In general, we physicists like to avoid this because it usually means we can't get nice, analytic expressions as answers (like the one I have above).  Instead, we usually just have to calculate the result numerically.  This isn't the end of the world, and often it is actually a bit easier, but it's not as pretty looking.  Still, to satisfy myself, and you, gentle reader, I will step into that realm.

We start by writing down the force on our block of ice once it is falling.  We've got gravity, and air resistance.  Thus

$\vec{F}=-mg\hat{y}-bv^2\hat{v}$

I've input a drag force that goes as $v^2$, and is in the opposite direction of $\hat{v}$.  The 'v direction' is a cop-out, because I didn't want to do the explicit direction, so let's fix that.  We'll have x and y components, and we note that the magnitude of v times the direction of v is the velocity vector.  So,

$\vec{F}=-mg\hat{y}-bvv_x\hat{x}-bvv_y\hat{y}$

Breaking this up into components we get

$a_x=-\frac{bv}{m}v_x$

$a_y=-g-\frac{bv}{m}v_y$

This is as far as we can take this work analytically. I'll say a little more about the coefficient b. This depends on the exact size and shape of the object, as well as the medium it is moving through. I'm going to use

$b=.4\rho A$

because that's what we used for hay bales in my classical mechanics class years ago. Here, $\rho$ is the density of air, and A is the surface area of the object. I would estimate that the large face of the ice chunk is roughly one square foot, or .1 m^2. I'd estimate the mass of the ice was around 2 kg. Now, for some magic. I've put all of this into Mathematica, and asked it to solve this numerically. First we have the plot for the full initial velocity,

$v=\sqrt{2gh}$

The solid line is with air resistance, the dashed line without air resistance.  The plot shows vertical vs. horizontal distance, and the units are meters.

Next we have the plot for the half initial velocity,

$v=\sqrt{gh}$

The solid line is with air resistance, the dashed line without air resistance.  The plot shows vertical vs. horizontal distance, and the units are meters.

As you can see from the plots, in neither case does it make a large difference, about .2 m.

The Third Round

The final thought that occurs to me is that perhaps I got the angle of the roof wrong.  That would be quite easy.  Humans are notoriously bad at estimating angles.  I'll plot the results (with air resistance) for 15, 30, and 45 degree angles and the lower velocity. The plot shows vertical vs. horizontal distance, and the units are meters.  The red line is 15 degrees, the blue line is 30 degrees, and the black line is 45 degrees.

In summary, the answer is unclear.  What I really need to do is measure the angle of my roof better, because there's a significant angle dependence.  It's also quite possible that we didn't see a maximum distance hit (thankfully!).  In addition, air resistance doesn't seem to matter much in this particular problem, probably because the distance the thing falls is short enough that terminal velocity is not reached. Hopefully this gave you a bit of a taste of a more practical physics problem, and how to approach air resistance (if you want to see the Mathematica code, let me know).  The lesson here seems to be either don't park too close to roofs, or have insurance for your windshield!
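Since the Mathematica notebook isn't posted, here's a rough Python equivalent of the numerical part (a sketch of mine, not the original code), integrating the two component equations above with simple Euler steps and the estimates b = 0.4ρA, A ≈ 0.1 m², m ≈ 2 kg:

```python
import numpy as np

g = 9.8             # m/s^2
rho = 1.2           # kg/m^3, density of air
A = 0.1             # m^2, large face of the ice chunk
m = 2.0             # kg, mass of the chunk
b = 0.4 * rho * A   # drag coefficient in F_drag = b v^2
h = 10 * 0.3048     # m, roof peak height
l = 20 * 0.3048     # m, height of the house
theta = np.radians(30)

v0 = np.sqrt(g * h)  # half-energy launch speed
vx, vy = v0*np.cos(theta), -v0*np.sin(theta)
x, y, dt = 0.0, 0.0, 1e-4

# Step forward until the chunk has fallen the height of the house
while y > -l:
    v = np.hypot(vx, vy)
    ax = -(b*v/m) * vx       # a_x = -(b v / m) v_x
    ay = -g - (b*v/m) * vy   # a_y = -g - (b v / m) v_y
    vx, vy = vx + ax*dt, vy + ay*dt
    x, y = x + vx*dt, y + vy*dt

print("range with drag: %.2f m" % x)  # comes out a bit short of the drag-free 4.1 m
```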
2018-01-20 12:52:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7578893303871155, "perplexity": 654.8737633605407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889617.56/warc/CC-MAIN-20180120122736-20180120142736-00736.warc.gz"}
https://cmasher.readthedocs.io/user/introduction.html
# Introduction

## Description

The CMasher package provides a collection of scientific colormaps and utility functions to be used by different Python packages and projects, mainly in combination with matplotlib (see here for an overview of all their colormaps as of v3.1.0). The colormaps in CMasher are all designed to be perceptually uniform sequential using the viscm package; most of them are color-vision deficiency (CVD; color blindness) friendly; and they cover a wide range of different color combinations to accommodate most applications. It offers several alternatives to commonly used colormaps, like chroma and rainforest for jet; sunburst for hot; neutral for binary; and fusion and redshift for coolwarm.

If you cannot find your ideal colormap, then please open an issue, provide the colors and/or style you want, and I will try to create one to your liking! If you use CMasher for your work, then please star the repo, such that I can keep track of how many users it has and more easily raise awareness of bad colormaps. Additionally, if you use CMasher as part of your workflow in a scientific publication, please consider citing the CMasher paper (BibTeX: get_bibtex()).

## Colormap overview

Below is an overview of all the colormaps that are currently in CMasher (made with the create_cmap_overview() function).

Fig. 1 Overview of all colormaps in CMasher.

## How to install

CMasher can be easily installed directly from PyPI with:

```
$ pip install cmasher
```

or from conda-forge with:

```
$ conda install -c conda-forge cmasher  # If conda-forge is not set up as a channel
$ conda install cmasher                 # If conda-forge is set up as a channel
```

If required, one can also clone the repository and install CMasher manually:

```
$ git clone https://github.com/1313e/CMasher
$ cd CMasher
$ pip install .
```

CMasher can now be imported as a package with import cmasher as cmr.

## Example use

The colormaps shown above can be accessed by simply importing CMasher. This makes them available in the cmasher module, in addition to registering them in matplotlib's cm module (with added 'cmr.' prefix to avoid name clashes). So, for example, if one were to use the rainforest colormap, this could be done with:

```python
# Import CMasher to register colormaps
import cmasher as cmr

# Import packages for plotting
import matplotlib.pyplot as plt
import numpy as np

# Access rainforest colormap through CMasher or MPL
cmap = cmr.rainforest                  # CMasher
cmap = plt.get_cmap('cmr.rainforest')  # MPL

# Generate some data to plot
x = np.random.rand(100)
y = np.random.rand(100)
z = x**2 + y**2

# Make scatter plot of data with colormap
plt.scatter(x, y, c=z, cmap=cmap, s=300)
plt.show()
```

See Usage for more use-cases.
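As a small addendum, the two utility functions named above can also be called directly once CMasher is imported; a minimal sketch (exact output depends on the installed version):

```python
import cmasher as cmr

cmr.get_bibtex()            # prints the BibTeX entry for citing CMasher
cmr.create_cmap_overview()  # draws an overview figure like Fig. 1
```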
2020-10-23 10:49:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24976585805416107, "perplexity": 6842.782195241238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881369.4/warc/CC-MAIN-20201023102435-20201023132435-00247.warc.gz"}
http://meta.wikimedia.org/wiki/User:COIBot/XWiki/energy-business-review.com
BOT generated XWiki report. More than 66% of the cross-wiki placing and addition of this link has been performed by one editor, and the link has been added to 3 or more wikimedia sites.

## Contents

##### COIBot rules

• Link is not on the blacklist.
• User is not on the blacklist.
• Link is not on the whitelist.
• Link would be caught by rule \benergy-business-review\.com on the monitor list (Link has been added to more than 5 wikipedias by Gioto).

##### Entry

Log entry for the Spam blacklist:

\benergy-business-review\.com\b # ADMINNAME # see [[User:COIBot/XWiki/energy-business-review.com]]

##### Discussion

Request Status - Stale

See COIBot report for more details. --COIBot 04:39, 13 August 2008 (UTC)

Autoclosed: less than 5 additions, no additions in last 7 days. --COIBot 08:48, 20 August 2008 (UTC)
2014-08-01 06:00:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5702039003372192, "perplexity": 6456.407695020497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274581.53/warc/CC-MAIN-20140728011754-00203-ip-10-146-231-18.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/93433-find-slope-curve.html
Math Help - find the slope of the curve

1. find the slope of the curve

The equation sin xy = y defines y implicitly as a function of x. Find the slope y'(pi/3, 1/2) at the point x = pi/3, y = 1/2. I know that y' = (y cos(xy))/(1 - x cos(xy)), but I just don't know how to solve the problem when I plug in x and y to find the slope. ty

2. Originally Posted by vtong

The equation sin xy = y defines y implicitly as a function of x. Find the slope y'(pi/3, 1/2) at the point x = pi/3, y = 1/2. I know that y' = (y cos(xy))/(1 - x cos(xy)), but I just don't know how to solve the problem when I plug in x and y to find the slope. ty

y = sin(xy), so $y'=\frac{y\cos{xy}}{1-x\cos{xy}}$, and plug in to get $y'=\frac{\frac{1}{2}\cos{\frac{\pi}{6}}}{1-\frac{\pi}{3}\cos{\frac{\pi}{6}}}$, which is the slope of your function.

3. Originally Posted by vtong

The equation sin xy = y defines y implicitly as a function of x. Find the slope y'(pi/3, 1/2) at the point x = pi/3, y = 1/2. I know that y' = (y cos(xy))/(1 - x cos(xy)), but I just don't know how to solve the problem when I plug in x and y to find the slope.

i.e. Plug in $\frac{\pi}{3}$ wherever you see an x, and $\frac{1}{2}$ wherever you see a y. Note: $\cos\frac{\pi}{6}=\frac{\sqrt{3}}{2}$.
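For completeness, the formula in the thread follows from implicit differentiation: differentiating both sides of $y=\sin(xy)$ gives $y' = \cos(xy)\,(y + xy')$, which rearranges to $y'=\frac{y\cos{xy}}{1-x\cos{xy}}$. Here's a quick sympy sketch (illustrative, not from the thread) that confirms the derivative and evaluates the slope, about 4.65:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Treat the relation as F(x, y) = sin(x*y) - y = 0; then y' = -F_x / F_y
F = sp.sin(x*y) - y
dydx = sp.simplify(-sp.diff(F, x) / sp.diff(F, y))
print(dydx)  # y*cos(x*y)/(1 - x*cos(x*y)), up to equivalent rearrangement

# Evaluate the slope at x = pi/3, y = 1/2
slope = dydx.subs({x: sp.pi/3, y: sp.Rational(1, 2)})
print(slope, "=", float(slope))  # about 4.65
```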
2014-08-01 13:25:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8273165822029114, "perplexity": 691.9959308286667}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274979.56/warc/CC-MAIN-20140728011754-00079-ip-10-146-231-18.ec2.internal.warc.gz"}
http://tatnallsbg.blogspot.com/
## Wednesday, March 5, 2014

### SBG and Exam Scores

There are a variety of ways to deal with big summative assessments (final exams, etc.) in SBG. Because the scores on standards are the result of several assessments on each standard, work, reassessment, etc., I generally don't want my final exams to upturn (for good or ill) a term's worth of work - one day does not a term make. I do like the summative nature in this context, though, and the huge opportunity for including lots of connections between standards. The question is then just how to include these assessments in the students' grades in a way that reflects all of these realities and tries to (as always) make the grade represent student understanding as closely as possible.

For a while now, I've been counting the total grade from the standards as 80% of the term grade and the exam as 20%. You could adjust the ratio in a variety of ways, trying to give the exam 'teeth' or to not over-weight a single snapshot on a single day, but that's not the interesting part.

Before I switched to standards-based grading, my students' exam scores were fairly consistently lower than their grades going into the exams (you could say the same thing about any bigger assessment during the term, too). This led me to have some 'insurance' in the grade - participation, HW, etc. One of the reasons that I switched to SBG was that I felt like these sorts of components in the grade, which do not reflect student understanding, were muddling the meaning of the grade and were inflating student scores in order to arrive at a typical grade distribution.

Since I've switched to SBG, my students' exam scores and their grades going into the exam have become more and more correlated. This year, almost no students had more than a 6 point discrepancy between their averages and their exam scores, and the differences were evenly distributed between higher and lower. My grade distribution is the same as it was before, but those grades represent a higher level of understanding than they did before, and my grades more accurately represent my students' understanding.

As I handed exams back today, some students were clearly nervous, asking the usual questions: "how were they?", "were the exams good?", etc. I reflexively started with some sort of answer, but then I just said it: "they correlated very closely with your grades going into the exams. ...do you know why?"

First student answer: "because that's our level of understanding!"

That's all that I've ever wanted for a grading scheme. Well, that and giving actionable feedback, communicating learning as a priority, and motivating a drive for improvement.

## Monday, December 2, 2013

### Independent Friction Labs and Another Capstone Project

I have my Honors Physics students prepare small electronic posters for their final lab of the first term, the Independent Friction Lab. In this lab, students have to come up with an experiment, make an informal proposal, execute the experiment, and analyze the results. The experiment really just needs to have something to do with friction, and I get a wide variety of them. I have them create a single PP or similar slide, sized 24"x18"; they're great to print at Staples. I ask them to email me a draft a day or so before the presentations, and they present the revised projects.
Here are a few of this year's experiments:

This group used a Pasco friction cart; they let it slide on a cart track, used the velocity graph to determine the coefficient of kinetic friction, and then determined the hanging mass that would pull the friction cart at constant speed (verifying that with another motion detector graph).

This group found the coefficient of static friction between a block and a ramp in a neat way: they used a half-Atwood setup, changing the mass until the block slipped, but performed that experiment at several angles. They then predicted a function for that maximum mass as a function of the block's known mass, the angle, and the unknown friction coefficient. Graphing their data, Logger Pro found the static friction coefficient by regression, providing both a quality value and confirmation of the model that they used to describe the situation.

These students compared the effective coefficients of friction for a ball rolling (without slipping) and the same ball under backspin (backspin persists until it turns around). They're essentially determining the coefficients of rolling and kinetic friction, showing that the kinetic friction coefficient's much larger.

This one seems to come up every year, and it's always fun. They used video analysis to determine the coefficient of kinetic friction between socks and several surfaces. The tricky unseen part is the big possible variation in normal force from foot to foot and moment to moment.

A second AP Physics capstone project is also included here; the student was trying to model the interaction between a hockey stick and a puck. It ended up being a very difficult problem, but he gained some valuable ground and ended up with a functional scaled-back model.

Student work:

For my capstone project, I wanted to model the interaction between the blade of a hockey stick and the puck during a shot or pass in ice hockey. Using the ball and spring model of matter interactions, I created a VPython program where a constant force acts on the blade of the stick, but reverses direction at the center (0,0,0) to simulate the slowing down of the stick after reaching the midpoint, where the x component of the force on the stick would be at its maximum. The force on the puck, however, does not follow the same constant pattern. Since materials act like springs with minuscule stretches, the force on the puck oscillates during the entire blade-puck interaction time, even though the oscillation and resulting compression of the blade would be impossible to see with the naked eye. While this is not a perfect model, since the blade remains at a constant angle, 90°, and the force magnitude remains constant in the direction of velocity and only changes direction by 180°, it does illustrate how matter interacts at the atomic scale. During the collision, both the force on the puck and the compression of the blade oscillate, but so slightly with the large spring constant that, looking at the velocity graph, the puck behaves like it would with a constant force and constant acceleration during contact.
Screenshot at the moment of collision

Graphs of the "spring" compression, force exerted on the puck, and velocity of the puck as functions of time

```python
from __future__ import division
from visual.graph import *
from visual import *

# create objects
h = .025
puck = cylinder(pos=(1,0,0), radius=0.038, height=h, axis=(0,h,0),
                mass=.17, velocity=vector(0,0,0))
l = .76
R = vector(0, ((2*l)**2 - puck.radius**2)**.5, 0)
l = .02
beginpos = vector(puck.pos.x + puck.radius + l/2, puck.height, 0)
stick = box(pos=beginpos, length=l, height=.076, width=0.3175,
            material=materials.wood, velocity=vector(0,0,0), mass=.7)
# axis=norm(R-beginpos)*.076, k=500000
# R = vector(0, ((2*l)**2 - stick.pos.x**2)**.5, 0)
scene.autoscale = False

# create forces
Fdirection = vector(-1,0,0)  # norm(vector(-stick.axis.y, stick.axis.x, 0))
Fmag = 200
k = 10000  # 310575  # 414172.6
r = puck.pos + vector(puck.radius, puck.height/2, 0) - stick.pos
s = stick.length/2 - (stick.pos.x - puck.pos.x - puck.radius)
# stick.height/2 - (r.mag)*cos(arctan(abs(stick.axis.y/stick.axis.x)))
Fp = vector(k*s*Fdirection)
Fs = Fmag*Fdirection

# graphs
gd = gdisplay(x=0, y=0, width=600, height=150, title='Fp vs. t', xtitle='t (s)',
              ytitle='Fp (N)', foreground=color.black, background=color.white,
              xmax=.25, xmin=0, ymax=100, ymin=-100)
Fpg = gcurve(color=color.red, gdisplay=gd)
gd2 = gdisplay(x=0, y=0, width=700, height=150, title='compression vs. t',
               xtitle='t (s)', ytitle='Compression (m)', foreground=color.black,
               background=color.white, xmax=.25, xmin=0, ymax=.01, ymin=-.005)
sg = gcurve(color=color.green, gdisplay=gd2)
gd3 = gdisplay(x=0, y=0, width=600, height=150, title='v vs. t', xtitle='t (s)',
               ytitle='Puck Velocity (m/s)', foreground=color.black,
               background=color.white, xmax=.25, xmin=0, ymax=0, ymin=-30)
vg = gcurve(color=color.blue, gdisplay=gd3)
print s

# create loop
t = 0
dt = .00001
while t < .25:
    rate(10000)
    if s > 0 and stick.pos.x > 0:
        # stick
        stick.velocity.x = stick.velocity.x + Fs.x/stick.mass*dt
        stick.pos.x = stick.pos.x + stick.velocity.x*dt
        # R = vector(0, ((2*l)**2 - stick.pos.x**2)**.5, 0)
        # stick.axis = norm(R - stick.pos)*stick.length
        # puck
        puck.velocity.x = puck.velocity.x + Fp.x/puck.mass*dt
        puck.pos.x = puck.pos.x + puck.velocity.x*dt
        # s
        r = puck.pos + vector(puck.radius, puck.height/2, 0) - stick.pos
        s = stick.length/2 - (stick.pos.x - puck.pos.x - puck.radius)
        Fdirection = vector(-1,0,0)
        Fp = vector(k*s*Fdirection)
        Fs = Fmag*Fdirection - Fp
    elif stick.pos.x > 0:
        # stick
        stick.velocity.x = stick.velocity.x + Fs.x/stick.mass*dt
        stick.pos.x = stick.pos.x + stick.velocity.x*dt
        # puck
        puck.velocity.x = puck.velocity.x + Fp.x/puck.mass*dt
        puck.pos.x = puck.pos.x + puck.velocity.x*dt
        # s
        r = puck.pos + vector(puck.radius, puck.height/2, 0) - stick.pos
        s = stick.length/2 - (stick.pos.x - puck.pos.x - puck.radius)
        Fdirection = vector(-1,0,0)
        Fp = vector(k*s*Fdirection)
        Fs = Fmag*Fdirection
    elif stick.pos.x < 0:
        # NOTE: this branch and the loop tail were garbled in the source text;
        # they are reconstructed by assumption to mirror the branches above,
        # with the driving force reversed past the midpoint
        Fdirection = vector(1,0,0)
        Fp = vector(k*s*-Fdirection)
        Fs = Fmag*Fdirection
        stick.velocity.x = stick.velocity.x + Fs.x/stick.mass*dt
        stick.pos.x = stick.pos.x + stick.velocity.x*dt
        puck.velocity.x = puck.velocity.x + Fp.x/puck.mass*dt
        puck.pos.x = puck.pos.x + puck.velocity.x*dt
        r = puck.pos + vector(puck.radius, puck.height/2, 0) - stick.pos
        s = stick.length/2 - (stick.pos.x - puck.pos.x - puck.radius)
    if s < 0:
        Fp.x = 0
    if stick.velocity.x > 0:
        stick.velocity.x = 0
    Fpg.plot(pos=(t, Fp.x))
    sg.plot(pos=(t, s))
    vg.plot(pos=(t, puck.velocity.x))
    t = t + dt
```

## Wednesday, November 13, 2013

### Capstones! Gravitational Slingshot edition

My AP class does capstone projects at the end of each term. It's a short independent project, having to do with anything from the term, which they execute and present in such a way that we can post them here. Here's the first of the crop of this fall's capstones: a VPython project simulating a gravity assist.

Student work:

The goal of my Capstone project was to create a program in VPython which simulates the gravitational slingshot used by satellites such as Voyager I or Cassini.  This method, officially called "gravity assist", is used by space programs such as NASA to send probes to distant targets without draining resources, since it uses the natural gravitational forces as ways to propel the probes into space. In the program, I send a 15,000 kg probe into orbit around the Earth while also having the Moon orbit the Earth.  By adjusting the initial velocity of the probe, the probe would be able to pass by the Moon and use the Moon's gravitational force to "slingshot" it off to a "target" asteroid away from the Earth.  Finally, I graphed the speed of the probe during its journey and compared it to the velocity graph of Cassini.  The small boost in the graphs shows the moment the probe uses the gravitational slingshot, similar to the Cassini graph when it orbits around Venus.

Images:

Cassini's speed graph (wikipedia)

The program, after the probe has made it to the target

v graph from the program, showing the boost

Cassini graph citation: "Cassini's Speed Related to the Sun." Chart. Wikipedia. Wikimedia, n.d. Web. 12 Nov. 2013.

Code (syntax highlighting finally works!):

```python
from __future__ import division
from visual.graph import *
from visual import *

# Richie Lou
# Gravitational Slingshot - CAPSTONE
# OBJECTIVE: to use a gravitational force to send a space shuttle from Earth's
# orbit to a target asteroid by using the moon as a gravitational slingshot

# Create Shuttle
Shuttle = box(pos=(6.4e7,0,0), length=72.8, width=108.5, height=20,
              color=color.red, make_trail=True)
Shuttle.m = 15000              # kg
Shuttle.v = vector(0,-3350,0)  # m/s

# Create Earth
Earth = sphere(pos=vector(0,0,0), radius=6.4e6, material=materials.BlueMarble)
Earth.m = 6e24  # kg

# Create Moon
Moon = sphere(pos=vector(0,4e8,0), radius=1.75e6, color=color.white,
              make_trail=True)
Moon.m = 7e22              # kg
Moon.v = vector(1050,0,0)  # m/s

# Create Target Asteroid
Target = sphere(pos=vector(-1.40837e9, 1.42004e9, 0), radius=7e6,
                color=color.green)

# Create Initial Conditions
G = 6.67e-11  # N*(m/kg)^2, Gravitational Constant
R = Shuttle.pos - Moon.pos    # m
r = Shuttle.pos - Earth.pos   # m
M = Moon.pos - Earth.pos      # m
F = Shuttle.pos - Target.pos  # m
FnetShuttle = -(G*Earth.m*Shuttle.m*r)/(mag(r)**3) - (G*Moon.m*Shuttle.m*R/(mag(R)**3))  # N
FnetMoon = -(G*Earth.m*Moon.m*M)/(mag(M)**3) + (G*Moon.m*Shuttle.m*R/(mag(R)**3))        # N
deltat = 50  # s
t = 0        # s

# Graph Velocity
gdisplay(x=0, y=0, width=600, height=150, title="velocity vs. time", xtitle="t",
         ytitle="velocity (m/s)", foreground=color.black, background=color.white)
g = gcurve(color=color.red)

# Animate Orbit
while mag(R) > 1.75e6 and mag(r) > 6.4e6 and mag(F) > 7e6:
    Shuttle.pos = Shuttle.pos + Shuttle.v*deltat            # m, position update
    Shuttle.v = Shuttle.v + (FnetShuttle/Shuttle.m)*deltat  # m/s, velocity update
    Moon.pos = Moon.pos + Moon.v*deltat                     # m, position update
    Moon.v = Moon.v + (FnetMoon/Moon.m)*deltat              # m/s, velocity update
    R = Shuttle.pos - Moon.pos    # m
    r = Shuttle.pos - Earth.pos   # m
    M = Moon.pos - Earth.pos      # m
    F = Shuttle.pos - Target.pos  # m
    FnetShuttle = -(G*Earth.m*Shuttle.m*r)/(mag(r)**3) - (G*Moon.m*Shuttle.m*R/(mag(R)**3))  # N, force update
    FnetMoon = -(G*Earth.m*Moon.m*M)/(mag(M)**3) + (G*Moon.m*Shuttle.m*R/(mag(R)**3))        # N, force update
    t = t + deltat  # s, time update
    rate(1e100)
    g.plot(pos=(t, mag(Shuttle.v)))

print t/8.64e4, "days"
print mag(Shuttle.v), "m/s"
```
time", xtitle="t", ytitle="velocity (m/s)", foreground=color.black, background=color.white) g=gcurve(color.red) #Animate Orbit while mag(R) > 1.75e6 and mag(r) > 6.4e6 and mag(F) > 7e6: Shuttle.pos=Shuttle.pos+Shuttle.v*deltat #m #position update Shuttle.v=Shuttle.v+(FnetShuttle/Shuttle.m)*deltat #m/s #velocity update Moon.pos=Moon.pos+Moon.v*deltat #m #position update Moon.v=Moon.v+(FnetMoon/Moon.m)*deltat #m/s #velocity update R=Shuttle.pos-Moon.pos #m r=Shuttle.pos-Earth.pos #m M=Moon.pos-Earth.pos #m F=Shuttle.pos-Target.pos #m FnetShuttle=-(G*Earth.m*Shuttle.m*r)/(mag(r)**3)-(G*Moon.m*Shuttle.m*R/(mag(R)**3)) #N #Force update FnetMoon=-(G*Earth.m*Moon.m*M)/(mag(M)**3)+(G*Moon.m*Shuttle.m*R/(mag(R)**3)) #N #Force update t=t+deltat #s #time update rate(1e100) g.plot(pos=(t,mag(Shuttle.v))) print t/8.64e4, "days" print mag(Shuttle.v), "m/s" ## Thursday, September 26, 2013 ### Drag Graphs and Terminal Velocity This summer, I posted my progression on drag here. One of the new elements was a pair activity where students each calculate the terminal speed of some random object that they come up with and draw (on the same set of axes) the position, velocity, and acceleration curves for the two objects. I then put the values into a VPython script and we check. We did that today, for the first time, and it went pretty well. The students picked an iPad vs. a calculator and a big (1m on a side) metal box vs. a 747 (nose first). Here's what they got: iPad (red), calculator (blue) Neat that the heavier iPad had a lower terminal speed - it's much bigger! 747 (red), box (blue) That 747 didn't even come close to getting to its terminal speed from 400 meters: how about from 10,000 meters? 747(red), box (blue) Not much better there. It took nearly 300 km of fall to get there - very heavy, very low drag coefficient. It was a bit unwieldy entering the values during class, so I think that I might do a Google form/Googlecl/VPython solution to pull those values in from the cloud next year. ## Monday, September 16, 2013 ### A Two-pulley Practicum In the AP C: Mechanics course, we're working through some extensions to old models - non-constant acceleration as a more general application of CAPM principles, UFPM with non-constant forces, etc. Even with constant acceleration and constant forces, there are some more subtle, but very powerful, techniques that we can't do the first time around. The first is the "look at the whole system" approach to force analysis. Instead of analyzing the half-Atwood machine, for example, as a hanging mass and a cart, and eliminating the tension algebraically, we decide that the net force accelerating the system is mg and the total mass of the system is m+M, and it's super easy to get the system's acceleration. I know that some folks do this in the first year, but I like having them draw those two FBDs and really puzzle out how the force sizes relate to each other, and I think that this is just a little too black-box (especially with the change in direction of the motion in the middle) for the first year. It's also really good algebra practice. We'll do that today, but we'll also take a look at this situation: This can be really tricky to analyze as either a separate or a combined system, but we can make an observation about the rope that's really helpful. 
## Monday, September 16, 2013

### A Two-pulley Practicum

In the AP C: Mechanics course, we're working through some extensions to old models - non-constant acceleration as a more general application of CAPM principles, UFPM with non-constant forces, etc. Even with constant acceleration and constant forces, there are some more subtle, but very powerful, techniques that we can't do the first time around. The first is the "look at the whole system" approach to force analysis. Instead of analyzing the half-Atwood machine, for example, as a hanging mass and a cart, and eliminating the tension algebraically, we decide that the net force accelerating the system is mg and the total mass of the system is m+M, and it's super easy to get the system's acceleration. I know that some folks do this in the first year, but I like having them draw those two FBDs and really puzzle out how the force sizes relate to each other, and I think that this is just a little too black-box (especially with the change in direction of the motion in the middle) for the first year. It's also really good algebra practice. We'll do that today, but we'll also take a look at this situation:

This can be really tricky to analyze as either a separate or a combined system, but we can make an observation about the rope that's really helpful. When the cart moves a distance d to the right, the rope's downward motion is complicated by the presence of the pulley, but a little careful diagramming can show that half of that d will end up as a longer right-hand vertical rope, and half will end up as a longer left-hand vertical rope, so that the hanging mass m will only drop d/2. Using that, the acceleration of the hanging mass must be half of the acceleration of the cart, which allows us to really easily solve for the unknown acceleration. (Concretely, with cart mass M and hanging mass m, Newton's second law gives T = Ma for the cart and mg - 2T = m(a/2) for the hanging mass, since two rope segments support it; together these give a = 2mg/(4M + m).) This worked out very well as a practicum for me, using Pasco track, carts, and superpulleys (with 100 g clamped into the jaws of the hanging one). The time's long enough (with a cart run of about a meter) to be timed pretty well with a stopwatch. I think that this does a great job of emphasizing the importance of thinking during the problem-solving process and of adding a new twist to what students might think is a "completed" topic.

## Friday, September 13, 2013

### Follow-up to the Buggy Lab

In a post yesterday, I shared a new piece of my (WCYDWT) buggy lab: having students mark explicitly on the graph which interval denotes their "answer," and having the other groups interpret the graph to determine what the situation and question were. I had really only done this with one of two sections of Physics, and I have now had both of those sections a second time after that lab. The results for one task seemed pretty clear. I used Matt Greenwolfe's suggestion here to wrap that up by asking them to draw a specific position vs. time graph for all six robots. I actually asked for two graphs: first, for the six robots, if each was programmed to run for 20 seconds and stop (that's how they were actually programmed on the first day). Second, for the six robots, if each was programmed to run for one meter and stop.

The first class had some of the same issues that Matt described: confusing the time and distance intervals, ascribing meaning to the length of the line, etc. After some discussion, they figured it out and went on. The second class had a much higher proportion of correct graphs from the beginning on both questions. It was a large departure from previous classes' performance as well, so it seems to have had a positive effect. One of the graphs from the second question is below. The others have been erased, but several of them drew dotted lines horizontally or vertically first, then drew their graphs, which is a great sign.

## Wednesday, September 11, 2013

### Standards for the Year

We had a great discussion on standards-based grading at the Global Physics Department tonight. Here are my standards for the year for each course, in response to a request from that conversation.

AP Physics:
• Term 1 (Momentum Principle)
• Term 2 (Energy Principle)
• Term 3 (Angular Momentum Principle)

Honors Physics:
• Term 1 (Motion, Forces)
• Term 2 (UCM, Gravitation, momentum)
• Term 3 (Energy, Oscillations, Static Electricity, DC Circuits)

Physics:
• Term 1 (Motion, Forces)
• Term 2 (Oscillations, Waves)
• Term 3 (Sound, Phases/Eclipses/Shadows, Geometric Optics)
2014-04-25 08:13:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48915809392929077, "perplexity": 2827.695369142074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
http://www.kaikorievents.com/journal/page.php?78ed88=tank-force-switch
# tank force switch

Question about the working area of a Vitali cover. Prove that bd(A) = cl(A) \ A°. If it is, is it the only boundary of $\Bbb{R}$?

Example: The interval consisting of the set of all real numbers, (−∞, ∞), has no boundary points.

Lemma 2: Every real number is a boundary point of the set of rational numbers Q.

Topology of the Real Numbers. (c) If for all δ > 0, (x−δ, x+δ) contains a point of A distinct from x, then x is a limit point of A.

The fact that real Cauchy sequences have a limit is an equivalent way to formulate the completeness of R. By contrast, the rational numbers Q are not complete.

If A is a subset of R^n, then a boundary point of A is, by definition, a point x of R^n such that every open ball about x contains both points of A and of R^n\A. Equivalently, the boundary of a set is the closure with the interior points removed:

\begin{align} \quad \partial A = \overline{A} \cap (X \setminus \mathrm{int}(A)) \end{align}

A point x₀ is exterior to S if x₀ is in the interior of S^c (the complement of S).

Class boundaries are the numbers used to separate classes. It must be noted that the upper class boundary of one class and the lower class boundary of the subsequent class are the same.

The closed interval [1, 3] has boundary points, the endpoints 1 and 3, whereas the open interval (1, 3) has no boundary points (the boundary points 1 and 3 are outside the interval). In the standard topology on $\Bbb R$, a boundary point of $\Bbb R$ would have to satisfy $B(x, \epsilon) \cap \emptyset \neq \emptyset$; no $x \in \Bbb R$ can satisfy this, so the boundary of $\Bbb R$ is $\emptyset$. Therefore the boundary is indeed the empty set, as you said.
2021-07-31 22:23:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6703454852104187, "perplexity": 1012.7926527824508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154126.73/warc/CC-MAIN-20210731203400-20210731233400-00381.warc.gz"}
https://diabetesjournals.org/care/article/36/8/2254/32984/Rationale-and-Design-of-the-Glycemia-Reduction
OBJECTIVE

The epidemic of type 2 diabetes (T2DM) threatens to become the major public health problem of this century. However, a comprehensive comparison of the long-term effects of medications to treat T2DM has not been conducted. GRADE, a pragmatic, unmasked clinical trial, aims to compare commonly used diabetes medications, when combined with metformin, on glycemia-lowering effectiveness and patient-centered outcomes.

RESEARCH DESIGN AND METHODS

GRADE was designed with support from a U34 planning grant from the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). The consensus protocol was approved by NIDDK and the GRADE Research Group. Eligibility criteria for the 5,000 metformin-treated subjects include <5 years' diabetes duration, ≥30 years of age at time of diagnosis, and baseline hemoglobin A1c (A1C) of 6.8–8.5% (51–69 mmol/mol). Medications representing four classes (sulfonylureas, dipeptidyl peptidase 4 inhibitors, glucagon-like peptide 1 receptor agonists, and insulin) will be randomly assigned and added to metformin (minimum–maximum 1,000–2,000 mg/day). The primary metabolic outcome is the time to primary failure, defined as an A1C ≥7% (53 mmol/mol), subsequently confirmed, over an anticipated mean observation period of 4.8 years (range 4–7 years). Other long-term metabolic outcomes include the need for the addition of basal insulin after a confirmed A1C >7.5% (58 mmol/mol) and, ultimately, the need to implement an intensive basal/bolus insulin regimen. The four drugs will also be compared with respect to selected microvascular complications, cardiovascular disease risk factors, adverse effects, tolerability, quality of life, and cost-effectiveness.

CONCLUSIONS

GRADE will compare the long-term effectiveness of major glycemia-lowering medications and provide guidance to clinicians about the most appropriate medications to treat T2DM. GRADE begins recruitment at 37 centers in the U.S. in 2013.

The epidemic of type 2 diabetes (T2DM) that has affected the U.S. and other populations, is associated with the relentless increase in obesity, and threatens to become the major public health problem of this century, affecting up to one in three Americans if current trends continue (1). The most recent estimate of T2DM prevalence in the U.S. is >24 million people, with an incidence of 1.9 million new cases per year (1). Major human and economic costs associated with the epidemic are related to the development of long-term complications, including retinopathy, nephropathy, and neuropathy, that cause more cases of blindness, renal failure, and amputations than any other disease (2). Cardiovascular disease (CVD) is increased by two- to fivefold in diabetes and is the leading cause of death (3). The 2012 estimated annual cost of diabetes in the U.S. was $245 billion, with the greatest cost related to its chronic complications (4). In 2007, the annual expenditure for glucose-lowering drugs in the U.S. was $13 billion, almost doubling since 2001 (5). The estimate in 2012 was >$18 billion (4). There are several reasons for guarded optimism in the setting of this ongoing epidemic. First, clinical trials have demonstrated effective means of delaying or preventing the development of diabetes (6–8). If these interventions were implemented successfully, they could decrease the annual incidence of diabetes substantially.
Second, high-quality clinical trials have shown that lowering A1C to ∼7% (53 mmol/mol), especially early after diagnosis, can substantially reduce the long-term complications that are characteristic of diabetes (9–11). Third, clinical studies have shown that antihypertensive and lipid-lowering medications can reduce CVD in T2DM as effectively as they do in the nondiabetic population (12) and that CVD risk in diabetes is decreasing (13). Finally, in the past two decades, the diabetes epidemic has spurred the development of eight new classes of glucose-lowering medications that may allow for more effective control of glycemia in T2DM and, thus, reduce complications (14). One of the major challenges for practitioners is to choose from the considerable armamentarium of glucose-lowering medications the best means of maintaining an appropriate level of glycemic control over time. Consensus algorithms have been developed to help clinicians to select among the numerous medications and their combinations for achieving and maintaining a target A1C of <7% (53 mmol/mol) (15–17). Other published algorithms selected different glycemic goals and recommended different strategies to achieve them (18). Recent American College of Physicians guidelines suggest that metformin is the only drug supported by solid evidence and that data are insufficient to choose a second agent (19). The dearth of head-to-head comparator studies of glucose-lowering medications, either alone or in combinations, and of trials that have lasted >6–12 months to examine the durable effects of interventions on glycemic control (10,11,20,21) has hampered the development of all these algorithms. Because T2DM is a progressive disease with worsening metabolic control over time, the long-term glycemia-lowering effects of interventions are particularly important. Safety, side effect profiles, tolerability, patient acceptance, burden of therapy, and cost are other important factors in the long-term treatment of this chronic, degenerative disease. Finally, recent position statements have emphasized individualization and patient-centered approaches to therapy (15), but few studies have examined which patients might do better or worse with specific therapies. Comparative effectiveness research has been identified as a high national priority in the U.S. (22). Similarly, improved understanding of phenotypic and genotypic differences between patients that affect responses to medications has been identified as an important element in individualizing therapy for maximum effectiveness (23). Of note, most industry-sponsored studies have not addressed either long-term comparative effectiveness or interpatient differences that may affect responses to therapy. As a result, patients with T2DM are currently treated without taking into account individual characteristics that might direct the choice of more effective interventions. The Glycemia Reduction Approaches in Diabetes: A Comparative Effectiveness Study (GRADE) is a pragmatic clinical trial that will make head-to-head comparisons of major drug classes currently used to treat T2DM, with the overarching goal of providing better guidance to practitioners in the choice of medications. Specifically, GRADE will compare a sulfonylurea, dipeptidyl peptidase 4 (DPP-4) inhibitor, glucagon-like peptide 1 (GLP-1) receptor agonist, and basal insulin in patients with recently diagnosed T2DM treated with metformin and will examine their effectiveness in maintaining the glycemic goal (A1C <7% [53 mmol/mol]) over time.
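As a side note on units: the paired values quoted throughout (e.g., 7% and 53 mmol/mol) follow the standard NGSP-to-IFCC master equation, NGSP(%) = 0.09148 × IFCC(mmol/mol) + 2.152. A small illustrative Python helper (mine, not part of the GRADE protocol):

```python
def a1c_percent_to_mmol_per_mol(a1c_percent):
    """Convert an NGSP A1C in percent to IFCC units (mmol/mol).

    Inverts the master equation NGSP = 0.09148 * IFCC + 2.152.
    """
    return (a1c_percent - 2.152) / 0.09148

for pct in (6.8, 7.0, 7.5, 8.5):
    print("%.1f%% -> %.0f mmol/mol" % (pct, a1c_percent_to_mmol_per_mol(pct)))
# 6.8% -> 51, 7.0% -> 53, 7.5% -> 58, 8.5% -> 69 (the thresholds cited here)
```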
Other outcomes will include relative effects on selected microvascular complications and cardiovascular risk factors; patient-centered outcomes, such as adverse effects, acceptability, and tolerability; and cost-effectiveness. Finally, GRADE will study the phenotypic characteristics that underlie the success, failure, and adverse effects of the different combinations to guide individualized treatment. The concept of a comparative effectiveness study examining T2DM treatment was first presented to the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) of the National Institutes of Health, by D.M.N. in 2008. With provisional enthusiasm expressed by NIDDK and financial support from the American Diabetes Association, clinical trialists D.M.N., J.B.B., Hertzel C. Gerstein, Rury R. Holman, Richard Kahn, S.E.K., J.M.L., and Bernard Zinman designed a preliminary proposal in early 2009. A U34 planning grant (to D.M.N., principal investigator) was funded in August 2010, and during the ensuing 2 years, the authors of this article developed a final protocol (available on the GRADE website at https://grade.bsc.gwu.edu and in Supplementary Data). A notice of opportunity was issued to solicit donations of medications within the four classes and other supplies, and the specific medications were selected by a subgroup with no dualities of interest. In addition, requests for applications were issued to clinical centers, central laboratories, and support units, which were subsequently selected by peer review. Additionally, study forms, model informed consents, and a manual of operations were developed. GRADE was reviewed by an independent external evaluation committee in December 2011, reviewed and recommended for funding by an NIDDK study section in August 2012, and approved by the NIDDK Advisory Council in September 2012. Funding of the study began in October 2012 through a U01 grant (co-principal investigators D.M.N. and J.M.L.) to The George Washington University Biostatistics Center. The data and safety monitoring board, an independent review group appointed by NIDDK, first convened on 1 February 2013. The GRADE steering committee, comprising the principal investigators of the clinical centers, representatives of the NIDDK, and selected members of the study group approved the final study protocol in March 2013. GRADE will begin recruitment at 37 centers in mid-2013. 
### Major specific aims

The relative effects of four commonly used glucose-lowering medications with different mechanisms of action when added to metformin will be compared for the following:

• Maintenance of metabolic control, defined as the time to primary failure with an A1C ≥7.0% (53 mmol/mol), confirmed, while receiving maximally tolerated doses of both metformin (up to 2,000 mg/day) and the assigned medication;
• The time to secondary metabolic failure with an A1C >7.5% (58 mmol/mol), confirmed, requiring the addition of basal insulin for oral agent-treated subjects and intensification of insulin therapy for those assigned to basal insulin at baseline;
• The time to tertiary metabolic failure with an A1C >7.5% (58 mmol/mol), confirmed, requiring implementation of intensive insulin therapy with basal plus rapid-acting insulin, while treated with metformin, the assigned study medication, and basal insulin, among those not originally assigned to basal insulin;
• Cumulative incidence of diabetes complications, such as microalbuminuria; and
• Other metabolic outcomes, adverse effects, and effects on CVD risk factors, quality of life, tolerability, and cost-effectiveness.

In addition, we will determine the phenotypic characteristics associated with response to and failure of the four different medication combinations and identify factors that determine the success and/or failure of specific regimens over time, including longitudinal mechanistic investigations of β-cell function.

### Design

GRADE will be a pragmatic, parallel-group clinical trial that compares as objectively as possible the effects of four different glucose-lowering medications in metformin-treated patients with relatively recently diagnosed T2DM. Subjects will adjust metformin during the run-in phase to achieve a maximum tolerated dose of up to 2,000 mg/day, with at least 1,000 mg/day required for eligibility (Fig. 1). The trial is unmasked for practical reasons because it will compare oral agents and injectable medications.

Figure 1: Study design.

Eligible subjects will be randomly assigned to one of the four medications shown in Fig. 1. The principal comparisons among these medications will start from the time of randomization. The trial will be conducted under an intention-to-treat design. All randomized subjects will continue follow-up and complete all outcome assessments until the planned conclusion of the study (planned follow-up of 4–7 years, depending on the time of entry), including those who reach the primary outcome. Otherwise, analyses of all other outcomes would be susceptible to a healthy-survivor effect, because the only subjects evaluated in the out years would be those who had not yet experienced primary failure of the assigned regimen. To encourage retention in the study over time and to ensure a longer exposure to the study medications for the purposes of analyses of other outcomes, assigned study medications will be continued until the need for intensification of insulin therapy with basal plus rapid-acting insulin (Fig. 2).

Figure 2: Metabolic outcomes and subsequent therapy.

GRADE was designed entirely by the planning group (the authors) with input from an NIDDK-appointed external evaluation committee and the investigators. No pharmaceutical manufacturers contributed to the planning or design, nor will they participate in the conduct of GRADE.
Medication and supply manufacturers were approached to donate product only after the medications and supplies had been selected by members of the planning group without any dualities of interest.

### Study population and recruitment

GRADE will compare the relative effects of the four interventions in relatively recently diagnosed T2DM subjects treated with metformin, with the recognition that earlier treatment is more likely to maintain endogenous insulin secretion and promote advantageous levels of glycemia over time (24). Eligibility criteria enumerated in the protocol (Supplementary Data) and summarized in Table 1 reflect a balance between the stringent requirements usually applied in recruiting a clinical trial population and the desire to create a pragmatic and easily translatable study.

Table 1: Summary of major eligibility criteria*

To be eligible, potential subjects must have an A1C of 6.8–8.5% (51–69 mmol/mol), as measured in the central laboratory, after metformin therapy has been maximized, as tolerated, during the run-in period. The study cohort (Fig. 1) of 5,000 subjects will include patients with <5 years' diabetes duration who are treated with metformin but no other glucose-lowering medications. The majority of potential subjects will be identified on the basis of a prior diagnosis of diabetes detected through reviews of medical histories and self-reports, aided by the use of electronic medical records and other databases. GRADE will aim to recruit as much representation as possible from racial and ethnic minority groups that are disproportionately affected by T2DM and a substantial fraction (>20%) of subjects who are ≥60 years of age. Recruitment and implementation of the GRADE protocol will take place at 37 clinical centers, which were selected by peer review through an open competition process. The GRADE clinical centers are distributed throughout the U.S. (Supplementary Data) and were selected in part because of their ability to recruit a diverse population of research subjects, including patients ≥60 years of age. Each clinical center will enroll 150 eligible subjects to reach the study-wide total enrollment of 5,000 subjects over a period of ∼3 years.

### Interventions

#### Rationale.

Metformin was selected as the foundation therapy according to the same rationale used in most of the recently developed consensus algorithms (15–18), namely, its long-term clinical experience, effectiveness in lowering glycemia over a wide range of A1C levels without causing hypoglycemia, weight-neutral or weight-loss effect, putative cardiovascular risk reduction (10,11,25), safety and side effect profiles, high level of patient tolerance, and low cost. Recent surveys have shown that a large majority of patients with recent-onset T2DM are treated with metformin (26), making this choice both practical and clinically relevant. The selection of the other study medications from the ten classes of available agents to add to metformin was predicated on the most commonly used approved combinations and on the availability of preliminary data supporting their glycemia-lowering effectiveness, safety, and tolerability. Increasing concern about the future of pioglitazone, owing to the putative increased risk for bladder cancer (27) superimposed on previously established safety concerns regarding volume retention and bone loss, contributed to its elimination from the study design.
The potential adverse impact on recruitment of including a drug that is receiving increasing and highly visible negative attention was an additional consideration. Because the four medication classes proposed capture the majority of glucose-lowering medications prescribed, and all four combinations have been approved by the Food and Drug Administration and its European and Canadian counterparts, the study will be clinically relevant and generalizable, and its results immediately and widely translatable to practice.

#### Medications.

We selected specific agents within the four classes as dictated by their specific attributes. All have been studied (28–31) and are approved by the Food and Drug Administration in their proposed initial combinations. The criteria by which specific agents were chosen within classes, by members of the planning group without any dualities of interest, included differences between the agents in the following: glycemia lowering, published side effect profiles, effects on CVD risk factors, clinical experience, ease of administration, and acceptability. In cases where there were no appreciable or substantive differences between agents within a class, consideration was given to those agents that are used most frequently and were made available by the manufacturers. At the time of randomization, all subjects will be assigned to one of the following medications in each of the named classes: sulfonylurea (glimepiride), DPP-4 inhibitor (sitagliptin), GLP-1 receptor agonist (liraglutide), or insulin (glargine) (Fig. 2). The number of medications selected in GRADE was predicated on resource availability. The other classes of glucose-lowering medications, aside from pioglitazone (discussed previously), that were considered but not chosen were the α-glucosidase inhibitors, the nonsulfonylurea sulfonylurea-receptor agonists, the rapid-acting insulins, the bile acid sequestrant colesevelam, and the dopamine agonist bromocriptine. They were not selected for a number of reasons, including potential safety concerns, limited clinical use and experience in recent-onset T2DM, relatively low efficacy, poor tolerability, and frequent side effects. No agents in the most recent class of glucose-lowering medications, the SGLT-2 inhibitors, had been approved during the planning phase of GRADE; moreover, none of them had sufficient clinical use or experience to be acceptable for the study.

#### Diabetes management strategy.

All the medications will be used according to their labeling and/or usual practice (32). Adjustments of glimepiride or insulin will be based on self-monitoring of blood glucose, aiming for fasting glucose levels between 70 and 130 mg/dL without symptomatic hypoglycemia. Additionally, medications will be titrated, up to the maximally tolerated dose, to achieve A1C values <7.0% (53 mmol/mol) (Table 2).

Table 2: Initiation and adjustment of assigned study medications

GRADE staff at each clinical center will assume responsibility for the glycemic management of subjects according to the GRADE protocol and will communicate this arrangement to the primary-care providers. Of note, GRADE staff will not be responsible for routine surveillance for diabetes complications or for the treatment of other cardiovascular risk factors; however, clinically relevant physical examination and laboratory results will be communicated to subjects' care providers to aid clinical management.
The randomly assigned medication and metformin will be continued until the secondary metabolic outcome (see Outcomes) has been reached (Fig. 2), at which time basal insulin (glargine) will be added for the three groups that were not originally assigned to insulin, using the same algorithm as in the original glargine-assigned treatment group. The rationale for the continued combination therapy is to maximize the time spent receiving the assigned treatment and to enable further study of which combinations may delay further metabolic worsening to the need for insulin intensification—the tertiary metabolic outcome. Moreover, the use of three agents has become increasingly popular in routine clinical practice. For the group originally assigned to glargine, insulin intensification with rapid-acting (aspart) insulin will be started and adjusted by GRADE clinic staff according to the study protocol after the secondary metabolic outcome has been reached (Fig. 2). In the three groups originally assigned to a treatment other than glargine, intensification of insulin therapy with rapid-acting insulin will be implemented when the tertiary metabolic outcome is reached. Their randomly assigned medication will be stopped at that time.

#### Self-monitoring of blood glucose.

Subjects assigned to insulin or sulfonylurea will, for safety reasons (to prevent hypoglycemia), self-monitor blood glucose levels on a specified schedule and adjust doses to achieve glucose goals according to usual-care recommendations (32). Self-monitoring of blood glucose levels will also be recommended, for safety reasons, for all subjects with symptoms suggestive of hypoglycemia or hyperglycemia or during intercurrent illness likely to affect glucose control.

### Outcomes

#### Metabolic outcomes.

The primary outcome is the time to primary metabolic failure of the randomly assigned treatment, defined as the time to an initial A1C ≥7% (53 mmol/mol), subsequently confirmed at the next quarterly visit, while the subject is being treated with maximally tolerated doses of both metformin and the second randomly assigned medication. If the second (confirmatory) A1C is <7% (53 mmol/mol), then the primary outcome has not yet been reached. If the initially observed A1C is >9% (75 mmol/mol), then confirmation will be performed within 3–6 weeks. Taking into account the need for confirmation, the earliest time that the primary end point can be confirmed is 6 months after randomization for subjects whose A1C at 3 months is ≥7%, and 4 months if the 3-month A1C is >9%. All A1C results will be measured in the study central laboratory. The secondary outcome is the time to the observation of an A1C >7.5% (58 mmol/mol), subsequently confirmed, while the subject is treated with the originally assigned medications and metformin. For the three groups originally assigned to medications other than insulin, the tertiary outcome is the time to an A1C >7.5% (58 mmol/mol), confirmed as previously described, while the subject is receiving metformin, the originally assigned medication, and basal insulin. Each of the three metabolic outcomes will be counted regardless of adherence to assigned medications, according to the principles of intention-to-treat analysis.

#### Other outcomes.

A full list of the GRADE outcomes is included in the protocol (Supplementary Data).
They can be considered in the following categories: metabolic, such as mean A1C and fasting plasma glucose levels, frequency of hypoglycemia, and measures of insulin secretion and sensitivity; cardiovascular, including risk factors and major events; microvascular, such as albuminuria, estimated glomerular filtration rate (eGFR), and peripheral neuropathy; adverse events specific to the medications under study; adverse effects; adherence and tolerability with respect to metformin and the assigned medications, and treatment satisfaction; health economics; and other outcomes, including mortality, hospital admissions, cognitive function, and cancer. Baseline and follow-up measurements of phenotypic variables (demographic, physiologic, and genetic) will facilitate the study of patient factors that may mediate responsiveness to different therapies. Oral glucose tolerance testing, performed annually, will contribute to our understanding of the mechanisms of medication success and failure. From these assessments, a number of different outcome measurements will be obtained with the goal of assessing the differential metabolic effects of each drug combination on β-cell function and insulin sensitivity over time. These measurements, combined with the phenotypic measures, will be used to determine patient-specific characteristics that are associated with responsiveness or failure to respond to specific agents and will facilitate an understanding of how to individualize therapy.

### Statistical analyses and power calculations

All analyses will compare the randomly assigned treatment groups under the intention-to-treat principle, with use of the treatment as assigned to each subject and all available data from all subjects.

#### Primary outcome.

The cumulative incidence of the primary outcome within each treatment group will be estimated with a modified, discrete-time Kaplan-Meier estimate, allowing for periodic outcome assessments (33). Differences between groups will be tested, and relative risk estimates obtained, from a Cox proportional hazards model for discrete-time observations adjusted for the baseline A1C (33). A single overall omnibus test at the 0.05 significance level will be conducted, as well as significance tests and relative risk (hazard ratio) estimates for each of the six pairwise drug-group comparisons, with P values adjusted with the Holm closed sequential multiple-testing procedure (34). If the proportional hazards assumption does not hold, inferences (CIs and P values) will be obtained from robust information sandwich estimates of SEs (35).

#### Other outcomes.

Similar analyses will be applied to other secondary discrete time-to-event outcomes, such as the time to secondary metabolic failure or to microalbuminuria based on 6-monthly albumin:creatinine ratio measurements. For time-to-event outcomes measured nearly continuously, such as the number of days to a cardiovascular event, this strategy will use the corresponding methods for continuous-time observations. For longitudinal analyses of binary outcomes over time, such as the proportion of subjects (prevalence) at each visit who are still maintaining an A1C <7% while receiving the originally assigned therapy, the odds will be compared between groups with use of a repeated-measures logistic model fit through generalized estimating equations with a robust estimate of the covariance structure (35).
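As an aside, the Holm step-down procedure (34) used for the six pairwise drug-group comparisons is simple enough to sketch. The Python fragment below is illustrative only; the p-values in it are made-up placeholders, not study data.

```python
# Holm step-down adjustment for six pairwise group comparisons.
# The p-values below are hypothetical placeholders, not trial results.

def holm_adjust(p_values):
    """Return Holm-adjusted p-values in the original order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # The k-th smallest p-value is multiplied by (m - k),
        # and a running maximum enforces monotonicity.
        candidate = min(1.0, (m - rank) * p_values[i])
        running_max = max(running_max, candidate)
        adjusted[i] = running_max
    return adjusted

pairwise_p = [0.001, 0.004, 0.019, 0.030, 0.090, 0.700]  # placeholders
print(holm_adjust(pairwise_p))
# A comparison is declared significant when its adjusted p-value is <0.05,
# which is equivalent to testing the k-th smallest raw p against 0.05/(m - k).
```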
Longitudinal analyses of quantitative outcomes over time (e.g., A1C) will use a longitudinal normal-errors repeated-measures model for the estimation of group mean levels over time (36). For longitudinal assessments of the rate of change of an outcome over time, such as the slope of the decline in eGFR, a random-effects (random-coefficient) model will be used to estimate the mean slope within each treatment group, allowing for random variation of slopes among subjects (36). Comparisons of rates of events (e.g., hypoglycemia) will use Poisson regression models with robust information sandwich variance estimates (33).

#### Composite outcomes.

A multivariate one-sided (or one-directional) test of stochastic ordering will be conducted to compare differences between groups for multiple outcomes simultaneously, such as A1C, weight, and hypoglycemia. The O'Brien mean rank score test (37) will be applied to the analysis of multiple quantitative (or ordinal) components at a single point in time. The Wei-Lachin test of stochastic ordering will be used to test other components, including proportions, rates, and event times (38). In addition, a single composite outcome can be defined from the components, such as the prevalence of subjects at 4 years who are still able to maintain an A1C <7% without having experienced severe hypoglycemia or gained weight. A longitudinal analysis of the proportions meeting this criterion at each visit over time and a survival analysis of such outcomes will also be conducted. Proportional hazards and parametric regression models will be used to assess the ability of multiple variables simultaneously to predict the time to primary or secondary failure.

#### Subgroup and stratified analyses.

Analyses will also assess differences in study outcomes within subgroups defined by baseline characteristics, including race/ethnicity, sex, age, diabetes duration, weight, body mass index, A1C, and measures of insulin sensitivity, insulin secretion, and the glucose disposal index. For each factor, the treatment groups will be compared separately within each subgroup (e.g., males, females) with a test of homogeneity between strata. For a quantitative variable (e.g., age), an additional analysis will be conducted with use of the quantitative covariate rather than simply of the discrete strata.

### Sample size and power

With recruitment over 3 years and a total study duration of 7 years, continued follow-up of all subjects to study end would provide 4–7 years of follow-up. To be conservative, sample size and power for the primary analysis were computed assuming a lag in recruitment, with 40% of subjects recruited in the first half of the 3-year recruitment period (39). Assuming that 4% of subjects per year are lost to follow-up before reaching the primary outcome, the average follow-up time would be 4.8 years, with ∼15% of subjects lost to follow-up overall.

#### Primary outcome.

On the basis of ADOPT (A Diabetes Outcomes Progression Trial) (20), we conservatively estimated a hazard rate of 0.0875 per year for the primary outcome. With the aforementioned assumptions, a sample size of 1,242 per group (rounded to 1,250) provides 90% power to detect a 25% difference in risk at a significance level of 0.00833, adjusting for six pairwise tests.

#### Secondary outcomes—microalbuminuria and clinical CVD.

The hazard rate of onset of microalbuminuria is projected to be ∼0.04 per year in whichever group has the higher event rate (40).
The 5,000 subjects provide 88% power with a hazard rate of 0.04 per year, or 92% with 0.045 per year, to detect a 33% difference in the risk of microalbuminuria between any pair of groups. In the ADOPT study (20), the incidence of major atherosclerotic cardiovascular events was 0.76% per year, and of major atherosclerotic cardiovascular events plus congestive heart failure, 1.14% per year. Assuming an incidence rate of 1% per year, GRADE will provide 80% power to detect a 50% difference in the risk of CVD between any pair of drug groups, adjusted for six pairwise comparisons.

### Conclusions

GRADE is a comparative effectiveness study that aims to compare four major classes of glucose-lowering medications in relatively recently diagnosed T2DM patients treated with metformin. The study is unique in comparing as many major diabetes treatments as possible, given available study resources, over a clinically relevant period. GRADE is also unique because it will study the totality of the effects of the medications, including an emphasis on patient-centered outcomes in addition to metabolic outcomes. Finally, its focus on individual demographic, clinical, and other factors that may influence a differential response to medications will add to our understanding of therapy for T2DM. GRADE results should not only help practitioners to choose the medications that are most appropriate with regard to metabolic control and patient-oriented outcomes but should also provide insights to allow individualization of treatment. The major aims of GRADE, which focus on a comparison of the effectiveness and other clinically important attributes of glucose-lowering medications, have major health economic implications in addition to their obvious public health impact. The cost of glucose-lowering medications accounts for a disproportionate share of medication costs, doubling from 6.3% of all prescribed drug spending in the U.S. in 2001 to 12.2% in 2007 (5). The planning process for GRADE differed from that for most large, multicenter trials sponsored by NIDDK. The U34 planning grant was used to allow a relatively small group of investigators to plan, design, and develop the study to the point of implementation. This process contrasts with the usual design of multicenter trials by a large group of investigators selected on the basis of their response to a request for applications. GRADE investigators will leverage the core study to amplify the range of scientific inquiry by actively promoting ancillary studies. These independently funded projects will take advantage of the study design and cohort. Some, such as genetics studies, will require minimal subject participation, whereas others may involve additional study procedures; however, all ancillary proposals will be judged on the basis of clinical and scientific value and of the burden to the subjects and centers.

Clinical trial reg. no. NCT01794143, clinicaltrials.gov. See accompanying commentary, p. 2146.

* A complete list of the GRADE Study Research Group investigators can be found in the Supplementary Data online.

The planning of GRADE was supported by a U34 planning grant from the NIDDK, NIH (5U34-DK-088043-02). The American Diabetes Association provided funds for the initial planning meeting for developing the U34 proposal. GRADE is supported by a U01 grant (1U01-DK-098246-01). Educational materials have been provided by the National Diabetes Education Program.
Material support in the form of donated medications and supplies has been provided by BD, Bristol-Myers Squibb, Merck, Novo Nordisk, Roche Diagnostics, and Sanofi. J.B.B.'s employer (the University of North Carolina) has received payments for his work as a consultant or investigator from Abbott, Amylin, Andromeda, AstraZeneca, Bayhill Therapeutics, BD Research Laboratories, Boehringer Ingelheim, Bristol-Myers Squibb, Catabasis, Cebix, Diartis, Elcelyx, Eli Lilly, Exsulin, Genentech, GI Dynamics, GlaxoSmithKline, Halozyme, Hoffman-La Roche, Johnson & Johnson, LipoScience, Medtronic, Merck, Metabolic Solutions Development Company, Metabolon, Novan, Novella, Novartis, Novo Nordisk, Orexigen, Osiris, Pfizer, Rhythm, Sanofi, Spherix, Takeda, Tolerex, TransPharma, Veritas, and Verva. S.E.K. has received honoraria for consulting from Boehringer Ingelheim, Bristol-Myers Squibb, Eli Lilly, GlaxoSmithKline, Intarcia Therapeutics, Janssen, Merck, Novo Nordisk, and Receptos and honoraria for speaking from Boehringer Ingelheim and Merck. J.M.L. has received honoraria for consulting from Merck, Boehringer Ingelheim, Eli Lilly, Novartis, and Janssen. No other potential conflicts of interest relevant to this article were reported.

D.M.N., J.B.B., S.E.K., H.K.-S., M.E.L., M.S., D.W., and J.M.L. each contributed to the design, writing, and critical review of the manuscript. D.M.N. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

### References

1. Centers for Disease Control and Prevention. National diabetes fact sheet, 2011 [Internet]. Available from http://www.cdc.gov/diabetes/pubs/pdf/ndfs_2011.pdf. Accessed 25 January 2013
2. Nathan DM. Long-term complications of diabetes mellitus. N Engl J Med 1993;328:1676–1685
3. Campbell PT, Newton CC, Patel AV, Jacobs EJ, Gapstur SM. Diabetes and cause-specific mortality in a prospective cohort of one million U.S. adults. Diabetes Care 2012;35:1835–1844
4. American Diabetes Association. Economic costs of diabetes in the U.S. in 2012 [article online], 2013. Available from http://care.diabetesjournals.org/content/early/2013/03/05/dc12-2625.full.pdf+html. Accessed 8 May 2013
5. Alexander GC, Sehgal NL, Moloney RM, Stafford RS. National trends in treatment of type 2 diabetes mellitus, 1994–2007. Arch Intern Med 2008;168:2088–2094
6. Knowler WC, Barrett-Connor E, Fowler SE, et al; Diabetes Prevention Program Research Group. Reduction in the incidence of type 2 diabetes with lifestyle intervention or metformin. N Engl J Med 2002;346:393–403
7. Tuomilehto J, Lindström J, Eriksson JG, et al; Finnish Diabetes Prevention Study Group. Prevention of type 2 diabetes mellitus by changes in lifestyle among subjects with impaired glucose tolerance. N Engl J Med 2001;344:1343–1350
8. DeFronzo RA, Tripathy D, Schwenke DC, et al; ACT NOW Study. Pioglitazone for diabetes prevention in impaired glucose tolerance. N Engl J Med 2011;364:1104–1115
9. The Diabetes Control and Complications Trial Research Group. The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. N Engl J Med 1993;329:977–986
10. UK Prospective Diabetes Study (UKPDS) Group. Effect of intensive blood-glucose control with metformin on complications in overweight patients with type 2 diabetes (UKPDS 34). Lancet 1998;352:854–865
11. Holman RR, Paul SK, Bethel MA, Matthews DR, Neil HA. 10-year follow-up of intensive glucose control in type 2 diabetes. N Engl J Med 2008;359:1577–1589
12. Nathan DM. Clinical review 146: the impact of clinical trials on the treatment of diabetes mellitus. J Clin Endocrinol Metab 2002;87:1929–1937
13. Preis SR, Hwang SJ, et al. Trends in all-cause and cardiovascular disease mortality among women and men with and without diabetes mellitus in the Framingham Heart Study, 1950 to 2005. Circulation 2009;119:1728–1735
14. Nathan DM. Finding new treatments for diabetes—how many, how fast... how good? N Engl J Med 2007;356:437–440
15. Inzucchi SE, Bergenstal RM, Buse JB, et al; European Association for the Study of Diabetes (EASD). Management of hyperglycemia in type 2 diabetes: a patient-centered approach: position statement of the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD). Diabetes Care 2012;35:1364–1379
16. Nathan DM, Buse JB, Davidson MB, et al; American Diabetes Association; European Association for the Study of Diabetes. Medical management of hyperglycaemia in type 2 diabetes mellitus: a consensus algorithm for the initiation and adjustment of therapy: a consensus statement from the American Diabetes Association and the European Association for the Study of Diabetes. Diabetologia 2009;52:17–30
17. Canadian Diabetes Association Clinical Practice Guidelines Expert Committee. Pharmacologic management of type 2 diabetes. Can J Diabetes 2008;32(Suppl. 1):S53–S61
18. Rodbard HW, Jellinger PS, Davidson JA, et al. Statement by an American Association of Clinical Endocrinologists/American College of Endocrinology consensus panel on type 2 diabetes mellitus: an algorithm for glycemic control. Endocr Pract 2009;15:540–559
19. Qaseem A, Humphrey LL, Sweet DE, Starkey M, Shekelle P; Clinical Guidelines Committee of the American College of Physicians. Oral pharmacologic treatment of type 2 diabetes mellitus: a clinical practice guideline from the American College of Physicians. Ann Intern Med 2012;156:218–231
20. Kahn SE, Haffner SM, Heise MA, et al. Glycemic durability of rosiglitazone, metformin, or glyburide monotherapy. N Engl J Med 2006;355:2427–2443
21. Ryan DH, Espeland MA, Foster GD, et al. Look AHEAD (Action for Health in Diabetes): design and methods for a clinical trial of weight loss for the prevention of cardiovascular disease in type 2 diabetes. Control Clin Trials 2003;24:610–628
22. Collins F. The CER vision of tomorrow: tailoring medicine to the individual. Presented at the Comparative Effectiveness and Personalized Medicine: An Essential Interface Conference, 19–20 October 2010, at the National Institutes of Health, Bethesda, Maryland
23. Smith RJ, Nathan DM, Arslanian SA, Groop L, Rizza RA, Rotter JI. Individualizing therapies in type 2 diabetes mellitus based on patient characteristics: what we know and what we need to know. J Clin Endocrinol Metab 2010;95:1566–1574
24. Colagiuri S, Cull CA, Holman RR; UKPDS Group. Are lower fasting plasma glucose levels at diagnosis of type 2 diabetes associated with improved outcomes?: U.K. Prospective Diabetes Study 61. Diabetes Care 2002;25:1410–1417
25. Kooy A, de Jager J, Lehert P, et al. Long-term effects of metformin on metabolism and microvascular and macrovascular disease in patients with type 2 diabetes mellitus. Arch Intern Med 2009;169:616–625
26. Desai NR, Shrank WH, Fischer MA, et al. Patterns of medication initiation in newly diagnosed diabetes mellitus: quality and cost implications. Am J Med 2012;125:302.e1–302.e7
27. Piccinni C, Motola D, Marchesini G, Poluzzi E. Assessing the association of pioglitazone use and bladder cancer through drug adverse event reporting. Diabetes Care 2011;34:1369–1371
28. Nauck M, Frid A, Hermansen K, et al. Long-term efficacy and safety comparison of liraglutide, glimepiride and placebo, all in combination with metformin in type 2 diabetes: 2-year results from the LEAD-2 study. Diabetes Obes Metab 2013;15:204–212
29. Goldstein BJ, Feinglos MN, Lunceford JK, Johnson J, Williams-Herman DE; Sitagliptin 036 Study Group. Effect of initial combination therapy with sitagliptin, a dipeptidyl peptidase-4 inhibitor, and metformin on glycemic control in patients with type 2 diabetes. Diabetes Care 2007;30:1979–1987
30. Buse JB, Rosenstock J, Sesti G, et al. Liraglutide once a day versus exenatide twice a day for type 2 diabetes: a 26-week randomised, parallel-group, multinational, open-label trial (LEAD-6). Lancet 2009;374:39–47
31. Gerstein HC, Yale JF, Harris SB, Issa M, Stewart JA, Dempsey E. A randomized trial of adding insulin glargine vs. avoidance of insulin in people with type 2 diabetes on either no oral glucose-lowering agents or submaximal doses of metformin and/or sulphonylureas. The Canadian INSIGHT (Implementing New Strategies with Insulin Glargine for Hyperglycaemia Treatment) Study. Diabet Med 2006;23:736–742
32. American Diabetes Association. Standards of medical care in diabetes—2013. Diabetes Care 2013;36(Suppl. 1):S11–S66
33. Lachin JM. Biostatistical Methods: The Assessment of Relative Risks. 2nd ed. New York, John Wiley and Sons, 2011
34. Holm S. A simple sequentially rejective multiple test procedure. Scand J Stat 1979;6:65–70
35. Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika 1986;73:13–22
36. Demidenko E. Mixed Models: Theory and Applications. New York, John Wiley and Sons, 2004
37. O'Brien PC. Procedures for comparing samples with multiple endpoints. Biometrics 1984;40:1079–1087
38. Lachin JM. Some large-sample distribution-free estimators and tests for multivariate partially incomplete data from two populations. Stat Med 1992;11:1151–1170
39. Lachin JM, Foulkes MA. Evaluation of sample size and power for analyses of survival with allowance for nonuniform patient entry, losses to follow-up, noncompliance, and stratification. Biometrics 1986;42:507–519
40. Lachin JM, Viberti G, Zinman B, et al.
http://openstudy.com/updates/4dc6a93da5918b0b76a38420
## anonymous (5 years ago)

The linear velocity of Earth's moon is about 2300 mph. If the average distance from the center of the Earth to the center of the Moon is 240,000 miles, how long does it take the Moon to make one revolution about the Earth? Assume the orbit is circular, so the distance covered in one revolution is the circumference, $2\pi r$.
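A worked solution (my own sketch, using the rounded values given in the problem): the period is the circumference divided by the speed.

```python
import math

v = 2300        # linear speed of the Moon, mph
r = 240_000     # Earth-Moon distance, miles

circumference = 2 * math.pi * r   # distance of one revolution, miles
t_hours = circumference / v       # period in hours
t_days = t_hours / 24

print(f"{circumference:.3e} miles")                # ~1.508e6 miles
print(f"{t_hours:.0f} hours = {t_days:.1f} days")  # ~656 hours, about 27.3 days
```

The answer, roughly 27.3 days, matches the Moon's actual sidereal period, which is a good sanity check on the given numbers.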
https://sharpe-maths.uk/index.php?main=topic&id=30
This section will look at some techniques for expanding and simplifying algebraic expressions. In the first instance, we will be looking at the distributive law, which is defined mathematically as:

$a\cdot(b+c)=a\cdot b+a\cdot c$

We can easily demonstrate that this is true, though proving it is, like so much mathematics, beyond the scope of this section.

$2(5+7) = 2 \times 12 = 24$

$2(5+7) = 2 \times 5 + 2 \times 7 = 24$

This will lead us into factorising, which is the same process in reverse. This is what we call an inverse operation.
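As a quick illustration (a sketch using the SymPy computer-algebra library, which the original page does not mention), expanding applies the distributive law and factorising undoes it:

```python
from sympy import symbols, expand, factor

x = symbols('x')

# Expanding applies the distributive law: a*(b + c) = a*b + a*c.
print(expand(3 * (x + 7)))   # 3*x + 21

# Factorising is the inverse operation: it undoes the expansion.
print(factor(3 * x + 21))    # 3*(x + 7)

# The numeric demonstration from the text: 2*(5 + 7) = 2*5 + 2*7 = 24.
assert 2 * (5 + 7) == 2 * 5 + 2 * 7 == 24
```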
http://sci-gems.math.bas.bg/jspui/handle/10525/550
Please use this identifier to cite or link to this item: http://hdl.handle.net/10525/550

Title: Regular and Other Kinds of Extensions of Topological Spaces
Authors: Dimov, G.
Keywords: Regular; Regular Closed; Compact; Locally Compact; Completely Regular; CE-Regular; Extensions; SR- (R-, RC-, EF-) Proximities; Nearness Spaces; OCE- (CE-) Regular Spaces
Issue Date: 1998
Publisher: Institute of Mathematics and Informatics, Bulgarian Academy of Sciences
Citation: Serdica Mathematical Journal, Vol. 24, No. 1 (1998), pp. 99–126

Abstract: In this paper the notion of SR-proximity is introduced, and by virtue of it some new proximity-type descriptions of the ordered sets of all (up to equivalence) regular, resp. completely regular, resp. locally compact extensions of a topological space are obtained. New proofs of the Smirnov Compactification Theorem [31] and of the Harris Theorem on regular-closed extensions [17, Thm. H] are given. It is shown that the notion of SR-proximity is a generalization of the notions of RC-proximity [17] and Efremovič proximity [15]. Moreover, there is a natural way of arriving at both these notions starting from the SR-proximities. A characterization (in the spirit of M. Lodato [23, 24]) of the proximity relations induced by the regular extensions is given. It is proved that the injectively ordered set of all (up to equivalence) regular extensions of X in which X is 2-combinatorially embedded has a largest element (κX, κ). A construction of κX is proposed. A new class of regular spaces, called CE-regular spaces, is introduced; the class of all OCE-regular spaces of J. Porter and C. Votaw [29] (and, hence, the class of all regular-closed spaces) is its proper subclass. The CE-regular extensions of the regular spaces are studied. It is shown that SR-proximities can be interpreted as bases (or generators) of the subtopological regular nearness spaces of H. Bentley and H. Herrlich [4].

Description: This work was partially supported by the National Foundation for Scientific Researches at the Bulgarian Ministry of Education and Science under contract no. MM-427/94.

ISSN: 1310-6600
Appears in Collections: Volume 24, Number 1
https://math.stackexchange.com/questions/2404526/incomplete-metric-space-or-normed-space-with-only-one-non-convergent-cauchy-sequ
# Incomplete metric space or normed space with only one non-convergent Cauchy sequence

Does there exist an incomplete metric space with exactly one non-convergent Cauchy sequence? What about normed spaces? If $(x_n)_{n=1}^\infty$ is a non-convergent Cauchy sequence in a normed space, then $(\alpha x_n)_{n=1}^\infty$ is also Cauchy and non-convergent for every non-zero scalar $\alpha$. Is it possible for a non-Banach space to have only one non-convergent Cauchy sequence, up to multiplication by a scalar?

Follow-up comments by the asker (who lacked the reputation to comment directly): @Marios Gestas: I know that you can always find many convergent Cauchy sequences; I asked whether it is possible to find a space with only one non-convergent Cauchy sequence. @Joey Zou: I completely missed the fact that you can take subsequences. It seems so obvious now. @EugenR: This immediately answered my follow-up question: it seems that there also doesn't exist a normed space in which all non-convergent Cauchy sequences are subsequences or permutations of each other. I upvoted your answer.

Answer (EugenR): If $(x_n)_n$ is nonconvergent, then for any sequence $(a_n)_n$ in $\mathbb{R}$ such that $a_n \to 0$ as $n\to\infty$, the new sequence $$\bigl((1+a_n)x_n\bigr)_n$$ is also nonconvergent.
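For completeness, here is the one-line estimate behind that example (my addition, not part of the original thread). Since Cauchy sequences are bounded,

$$\|(1+a_n)x_n - x_n\| = |a_n|\,\|x_n\| \le |a_n| \sup_m \|x_m\| \longrightarrow 0 \quad (n\to\infty),$$

so $\bigl((1+a_n)x_n\bigr)_n$ is Cauchy and converges if and only if $(x_n)_n$ does; in particular it is nonconvergent whenever $(x_n)_n$ is.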
https://forum.poshenloh.com/topic/794/some-notes-about-this/2
• I think that when you have a problem that says something like "How many numbers, from 1 to a, when multiplied by b, have a remainder of c (mod d)", the answer will always be a/d. I think this is true because there will always be a number e (mod d) that, when multiplied by b, has a remainder of c (mod d). The number e will occur once every d numbers, so it will occur a/d times in the numbers 1 to a. Now if a is not a multiple of d, it might happen a/d times rounded down or up. This depends on the remainder when a is divided by d: if the remainder is less than e then it is a/d rounded down, and if the remainder is at least e then it is a/d rounded up.

• Hey @tidyboar, great observations you're making! I agree with a lot of what you're saying. The thing is, all of that works when $d$ is prime. For example, how many numbers from $1$ to $5$, when multiplied by $3$, are $\equiv 1 \pmod{6}$? When you make the table, you'll realize that all of the numbers after being multiplied are either $0 \pmod{6}$ or $3 \pmod{6}$, so actually none of the numbers are $1 \pmod{6}$. The reason why this doesn't work is that $3$ is a factor of $6$! And actually, you can remake this problem with any even number and you'll find that there are a lot of exceptions. BUT, what you said is actually really useful!! Number theory often involves prime numbers, so everything you figured out will save you a lot of time when actually doing problems. Way to go for understanding everything!
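Both claims are easy to check by brute force (a sketch of my own; the function name is made up):

```python
def count_solutions(a, b, c, d):
    """Count k in 1..a with (b * k) % d == c."""
    return sum(1 for k in range(1, a + 1) if (b * k) % d == c)

# Prime modulus: b is invertible mod d, so exactly one residue class e works,
# and the count is a/d rounded up or down depending on a % d.
# Here 3*k = 1 (mod 7) forces k = 5 (mod 7), giving 14 hits in 1..100.
print(count_solutions(100, 3, 1, 7))   # 14, i.e. 100/7 rounded down (100 % 7 = 2 < 5)

# The counterexample from the reply: 3 is a factor of 6, so 3*k mod 6
# only ever hits 0 or 3, and no k satisfies 3*k = 1 (mod 6).
print(count_solutions(5, 3, 1, 6))     # 0
```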
http://mathoverflow.net/feeds/question/82491
# A limit from an Erdős paper (MathOverflow)

Question (Bob): I need help to prove that, for $N = \big\lfloor \frac{1}{2}n\log(n)+cn \big\rfloor$ with $c \in \mathbb R$ and $0 \leq k \leq n$:

$$\lim_{n\rightarrow +\infty} \dbinom{n}{k} \frac{\dbinom{\binom{n - k}{2}}{N} }{\dbinom{\binom{n}{2} }{N}} = \frac{e^{-2kc}}{k!}$$

Answer (Jacques Carette): First note that $\binom{m}{2} = \frac{m(m-1)}{2}$ and use that to get rid of the nested binomials. Also, the floor function will not (here) make any difference, so ignore it. Then convert all binomials to their $\Gamma$ equivalents, and use Stirling's formula for each term. The next step is the messiest, as you'll have a lot of arithmetic to perform on the result, which will give you the answer. This is sufficiently mechanical that, using Maple, I can quickly derive that $$\frac{e^{-2kc}}{k!} + \frac{-\frac{1}{2}e^{-2kc}((4c+k+1)\ln{n}+2kc+4c^2+\ln^2{n}-1+2c+k)}{n (k-1)!}+O(n^{-2})$$ Of course, that second term might not be quite right, since the previously ignored floor function might here contribute; I have not checked that.

Follow-up (Bob): Thank you for your calculation. In fact I wanted to generalize the method to obtain the following limit: for $N' = \lfloor n^2 \log(n)+cn^2 \rfloor$ with $c \in \mathbb R$ and $0 \leq k \leq n$, $$\lim_{n\rightarrow +\infty} \dbinom{n}{k} \frac{\dbinom{3 \binom{n-k}{3} }{N'} }{\dbinom{3 \binom{n}{3} }{N'}}$$ but I think it's a little hard without Maple, so if you could give me the value of this limit with your method, it would help me a lot.

Answer (Brendan McKay): Note that the limit is not in general correct if $k$ is a function of $n$. I'll assume you meant us to assume it is constant or very slowly growing. You don't need a computer. Just remember this one: $$\binom{M}{t} = \frac{M^t}{t!} \exp\biggl( -\frac{t(t-1)}{2M} + O(t^3/M^2)\biggr),$$ as $M\to\infty$. The variable $t$ can be a function of $M$ provided $t^3/M^2$ is bounded. You can prove this using Stirling's formula, but it is easier to just take the logarithm of both sides and use the Taylor expansion of the logarithm. Apply this to the three binomials in your problem and simplify. This will also tell you how fast $k$ can increase before the limit changes. For your second problem $t^3/M^2$ doesn't go to zero, so you need the next term inside the exponential, which is $$-\frac{t(t-1)(2t-1)}{12M^2}$$ and the error term is then $O(t^4/M^3)$. If you don't care about precise error terms, put this together and infer that whenever $t=o(M^{3/4})$, $$\binom{M}{t} = \frac{M^t}{t!} \exp\biggl( -\frac{t^2}{2M} -\frac{t^3}{6M^2} + o(1)\biggr).$$
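A quick numeric sanity check of the stated limit (my own sketch, not part of the thread; it evaluates the ratio via log-Gamma to avoid astronomically large integers):

```python
from math import lgamma, log, exp, factorial

def log_binom(m, t):
    # log of C(m, t) via the Gamma function
    return lgamma(m + 1) - lgamma(t + 1) - lgamma(m - t + 1)

def ratio(n, k, c):
    N = int(0.5 * n * log(n) + c * n)
    return exp(log_binom(n, k)
               + log_binom((n - k) * (n - k - 1) // 2, N)
               - log_binom(n * (n - 1) // 2, N))

k, c = 2, 0.5
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, ratio(n, k, c))
print("limit:", exp(-2 * k * c) / factorial(k))  # e^{-2kc}/k!
```

Consistent with Carette's expansion, the error shrinks roughly like $\ln^2 n / n$, so the printed values drift toward the limit as $n$ grows.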
https://tex.stackexchange.com/questions/26351/create-a-table-with-two-parts-with-different-tabular-features?noredirect=1
# Create a table with two parts with different tabular features

I'm trying to build a table that has two panels with different numbers of columns. For example:

Table 1: An interesting table

Panel A: Some stuff
First name  Last name  Product
Bubba       Gump       Shrimp
Steve       Jobs       Happiness

Panel B: Other stuff
School   State
Harvard  MA
Yale     CT
Brown    RI

I would like the 3 columns of Panel A and the 2 columns of Panel B to fill the horizontal space of the table. I imagined that using two different tabular environments within one table environment would work, but it doesn't. I also found the subfigure package, but I think that only lets you stack tables horizontally, not vertically. Any ideas? Thanks!

• The subfigure package lets you stack tables vertically, for example by placing a \\ or \par between the \subfigures. – user2574, Aug 23 '11
• I wouldn't spread the columns across the entire \textwidth; that looks terrible. Just compare the output of Stefan's solution (not spread) with Werner's (spread). Using the booktabs package is a good idea, though. – doncherry, Aug 23 '11

Answer (Stefan): Within a table environment, you can use different tabular environments, of different types and with a different number of columns. Here's an example with sub-captions:

```latex
\documentclass{article}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{tabularx}
\begin{document}
\begin{table}
  \centering
  \caption{An interesting table}
  \subcaption*{Panel A: Some stuff}
  \begin{tabular}{lcr}
    First name & Last name & Product \\
    Bubba      & Gump      & Shrimp  \\
    Steve      & Jobs      & Happiness
  \end{tabular}
  \bigskip
  \subcaption*{Panel B: Other stuff}
  \begin{tabular}{ll}
    School  & State \\
    Harvard & MA    \\
    Yale    & CT    \\
    Brown   & RI
  \end{tabular}
\end{table}
\end{document}
```

Here I used the subcaption package. A good alternative is the subfig package. However, the subfigure package is obsolete.

Answer (Werner): With liberal use of the \multicolumn{.}{.}{...} command, you can get away with spreading the table across the entire \linewidth:

```latex
\documentclass{article}
\usepackage{array,booktabs}
\begin{document}
\begin{table}[ht]
  \centering
  \caption{An interesting table}
  \label{tbl:interesting}
  \begin{tabular}{*{6}{p{.16\linewidth}}}
    \multicolumn{6}{c}{Panel A: Some stuff} \\
    \toprule
    \multicolumn{2}{p{.33\linewidth}}{First name} &
    \multicolumn{2}{p{.33\linewidth}}{Last name} &
    \multicolumn{2}{p{.33\linewidth}}{Product} \\
    \midrule
    \multicolumn{2}{l}{Bubba} & \multicolumn{2}{l}{Gump} & \multicolumn{2}{l}{Shrimp} \\
    \multicolumn{2}{l}{Steve} & \multicolumn{2}{l}{Jobs} & \multicolumn{2}{l}{Happiness} \\
    \bottomrule \\
    \multicolumn{6}{c}{Panel B: Other stuff} \\
    \toprule
    \multicolumn{3}{p{.49\linewidth}}{School} &
    \multicolumn{3}{p{.49\linewidth}}{State} \\
    \midrule
    \multicolumn{3}{l}{Harvard} & \multicolumn{3}{l}{MA} \\
    \multicolumn{3}{l}{Yale}    & \multicolumn{3}{l}{CT} \\
    \multicolumn{3}{l}{Brown}   & \multicolumn{3}{l}{RI} \\
    \bottomrule
  \end{tabular}
\end{table}
\end{document}
```

Since the two panels are contained in the same tabular, they span the same width. The above uses the booktabs package for the presentation of the tabular environments. However, it is not strictly needed; if you drop it, also replace the \toprule, \midrule and \bottomrule rules with \hline or another preference.
Alternatively, you could also use the tabularx package to spread columns across a specific width:

```latex
\documentclass{article}
\usepackage{booktabs,tabularx}
\begin{document}
\begin{table}[ht]
  \centering
  \caption{An interesting table}
  \label{tbl:interesting}
  \medskip
  \begin{tabularx}{\linewidth}{XXX}
    \multicolumn{3}{c}{Panel A: Some stuff} \\
    \toprule
    First name & Last name & Product \\
    \midrule
    Bubba & Gump & Shrimp \\
    Steve & Jobs & Happiness \\
    \bottomrule
  \end{tabularx}
  \bigskip
  \begin{tabularx}{\linewidth}{XX}
    \multicolumn{2}{c}{Panel B: Other stuff} \\
    \toprule
    School & State \\
    \midrule
    Harvard & MA \\
    Yale & CT \\
    Brown & RI \\
    \bottomrule
  \end{tabularx}
\end{table}
\end{document}
```

• How did you get that value: \begin{tabular}{*{6}{p{.16\linewidth}}}? 1/6 = 0.166, so it cannot be that. – ghx, Sep 17 '19
• @ghx: I wanted to set the tabular within the \linewidth, and for 6 columns to fit, I used 1/6 of \linewidth for each, or 0.16\linewidth. – Werner
• Thanks, I understand that, but isn't 1/6 = 0.1667 (rounded)? So I wonder how this ends up precise. – ghx
• @ghx: Yes, 1/6 ≈ 0.1667, but in *{6}{p{.16\linewidth}} there are 6 columns and 12 \tabcolseps that also need to be considered: one on either side of the tabular's outer columns, and 2 \tabcolseps between each column pair (there are 5 of those). I rounded 1/6 down to 0.16 to allow for these to also fit within the \linewidth. Similarly for .33\linewidth when dealing with 3 columns and .49\linewidth for 2 columns. – Werner

Answer (Peter Grill): You can use the \multicolumn command (built into LaTeX's tabular; no extra package is needed) to have data span multiple columns.

```latex
\documentclass{article}
\begin{document}
\begin{tabular}{lll}
\multicolumn{3}{c}{Panel A: Some stuff}\\
First name & Last name & Product\\
Bubba      & Gump      & Shrimp\\
Steve      & Jobs      & Happiness\\
\\
\multicolumn{3}{c}{Panel B: Other stuff}\\
School  & State\\
Harvard & MA\\
Yale    & CT\\
Brown   & RI\\
\end{tabular}
\end{document}
```

• Thanks, but I think I need a different approach. I want the 2 columns in panel B to span just as much space as the 3 columns in panel A. This table has a 3rd (blank) column in panel B, which I am trying not to have. – itzy
• You can use the \multicolumn macro to adjust that. For example, you can use \multicolumn{2}{c}{State} to have that column span the two columns. If I am still not understanding, please provide a more detailed example. – Peter Grill

Answer: Maybe like this?

```latex
\begin{tabular}{lll}
\hline
\multicolumn{3}{c}{Panel A: Some stuff} \\ \hline
First name & Last name & Product \\ \hline
Bubba & Gump & Shrimp \\ \hline
Steve & Jobs & Happiness \\ \hline
\multicolumn{3}{c}{Panel B: Other stuff} \\ \hline
School & \multicolumn{2}{l}{State} \\ \hline
Harvard & \multicolumn{2}{l}{MA} \\ \hline
Yale & \multicolumn{2}{l}{CT} \\ \hline
Brown & \multicolumn{2}{l}{RI} \\ \hline
\end{tabular}
```
https://passcod.name/print.html
# Introduction My name is Félix Saparelli. I’m a software engineer from New Zealand. This website is in the shape of a (md)book! It’s both a blog, an archive, a list of projects, and various information about and around me. Have a look :) ## Social media • Twitter — My main social venue • GitHub — Ruby, Rust, Node, Web, and other things • NZ NaNo Discord — Local writer community ## Other sites I keep a few other places alive: # Avatars Here is an archive of the avatars I have used. Since 2015, I commission my avatars from artists with a very loose mandate, letting them do whatever they want within as few parameters as I need. Generally, no detail beyond sizing was requested, although the exact brief varied for each artist. All are copyrighted. Do not reuse. ## Bast Lighthouse Commissioned from PepperRaccoon in , and delayed from publication until whereupon I gave it a little introduction. ## Cybear Commissioned from Tayruu in , and completed in . ## Māhina Commissioned from Huriana Kopeke-Te Aho in . ## Sasha Commissioned from Sarah Lund in , completed in , and delayed from publication until to give the Cup Cat a reasonable amount of time. ## Cup Cat Commissioned from Azu in . ## Snowl Herder Commissioned from Eoghan Kerrigan in , and received in . This was an experiment in commissioning larger pieces and selecting a crop myself for the avatar itself. While I’m pleased with the results, I’m not sure whether I’ll continue with this format going forwards. You can see the larger artwork by clicking on the avatar image. ## Framing Commissioned from Luke in . ## ACLU sketch Obtained as a donation reward for the ACLU. Only worn on Twitter for the month of . ## Into Space Commissioned from Alison Graham in . ## Nekudotayim Three Commissioned from Daniel Silva in . ## Hearty Hug Commissioned from Sam Orchard in . Commissioned from Anne Szabla in . ## Sailor whale Commissioned from Sara Goetter in . In , Tailsteak made a variant. It was never actually used as an avatar. ## Beaver with a jetpack Commissioned from Mason Williams a.k.a. Tailsteak in . ## Léa and me In , I changed my avatar to include my significant other of the time. This was taken on the Dune du Pyla in France. ## Close-up of me From onwards, I used an extreme close-up of my face. This was my longest-lasting avatar and probably still remains in use in some accounts I haven’t bothered cleaning up. ## Blue screen of code Around and onwards (I wasn’t on the internet much at that time), I used this as an avatar. I probably lifted it from Google Images. # Cryptographic keys I have three kinds of keys at the moment: good old PGP/GPG keys that are mostly used to sign git commits, minisign keys that are used to sign software (being phased out), and sigstore keys that are used to sign software (being phased in). ## GPG keys These have an expiration date. I initially did 1-year keys, but that was too much trouble, so in 2015 I decided to use 10-year keys, possibly with more short-lived subkeys. 
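In GnuPG terms, that setup looks roughly like this (a sketch of the general approach with placeholder expiries, not the exact commands used):

```console
$ gpg --quick-generate-key 'Félix Saparelli (:passcod) <felix@passcod.name>' rsa4096 cert 10y
$ gpg --quick-add-key <fingerprint> rsa4096 sign 2y
```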
The keys are also available on public keyservers, e.g.: ### Current key: passcod06 (2015–2025) pub 4096R/E44FC474 2015-04-11 [expires: 2025-04-08] key C24C ED9C 5790 0009 12F3 BAB4 B948 C4BA E44F C474 uid Félix Saparelli (:passcod) <felix@passcod.name> ### passcod05 (2014–2015) pub 4096R/AE1ED85D 2014-03-27 [expires: 2015-03-27] key E49C 3114 2E3D 10A4 69F0 86DC 6B09 4637 AE1E D85D uid Félix Saparelli (:passcod) <felix@passcod.name> ### passcod04 (2013–2014) pub 4096R/3C51B6EB 2013-03-27 [expired: 2014-03-27] key 0417 E9C8 3281 CB17 E7CB B0EA AE48 6FBE 3C51 B6EB uid Felix Saparelli (:passcod) <me@passcod.name> ### passcod03 (2012–2013) pub 4096R/C2C15214 2012-09-26 [expired: 2013-03-25] key FE31 5C83 9FC5 0618 A49B AEE3 8487 3386 C2C1 5214 uid Felix Saparelli (:passcod) <me@passcod.net> ## Minisign keys In minisign format, used for signing software binaries. ### Software untrusted comment: minisign public key: 2264BBE425DA952E RWQuldol5LtkIrx0khfo4Z7Y8SixwG2K8OagJSvsJNBcuLgB2oVNJFFv ## Sigstore keys In sigstore/cosign format, used for signing artifacts (software binary releases, container images, etc). Eventually this will disappear as keys move to be ephemeral and generated against my identity, but in the meantime you can use this key to verify artifacts, along these lines: $cosign verify \ -key https://passcod.name/info/keys/cosign.01.pub \ ghcr.io/org/repo:version_target.ext ### Cosign.01 -----BEGIN PUBLIC KEY----- MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE3LYhdTwREhG9zVKc2aI3FzR6oHto XRYiZtQGxtlbsUMacCHdvvBmTSEg6Zsf9jflNU0slFKExLX/z+zZHykmpg== -----END PUBLIC KEY----- # Social media policies ## Accounts ### Small I have a small twitter account at @LockedFelix, which you can request access to if you’re a friend or if you’re a “trusted” / “close” mutual. I talk of work and more personal issues and stuff that can’t or shouldn’t be in full public view and occasionally threads on topics I don’t want to get lots of attention on (harassment-wise). This small does not contain sexual/NSFW imagery or content; on rare occasions I’m much more likely to be horny on main. ## Anti-virality measures Tweets will be deleted if they reach: • 1000 likes, or • 500 retweets, or • 100 quote-tweets, or • 50 replies; • unless I deem the tweet to be public-interest enough to be worth keeping up. ## Twitter authentication Inspired by 0xabad1dea If I ever lose control of my Twitter account, I will authenticate the new one, or prove my having recovered the old one, by decoding the hash 8a073e9fcc222c7fd80a0212c53600b340e5dabfb378334b81efb151b89e148d published at https://twitter.com/passcod/status/705952798844715008, which is also now here and in the git history. ## Disclaimer Views expressed on social media or elsewhere are, unless stated, mine. As opposed to being anyone else’s, like my employer, any organisation I may be affiliated with, or any other person. Similarly, endorsement or agreement is not implied. Outside of the license afforded to the relevant social media platform as per their terms and conditions, I give no permission for use (unless you have been granted such in writing). In particular, my content may not be used for commercial gain by any entity, any non-commercial or promotional use by cryptocurrency or NFT endeavours of any kind, nor the propagation of fascist ideology (not that I expect these to have a strong respect of laws that inconvenience them). ## Blocks I am a fairly prolific blocker on Twitter. 
I may also block people on other platforms, but I don’t really use other platforms as I do this one. In this section: See also my musings on automated unblocking. ### Current block reasons #### On Twitter • I don’t like you. • You’re an asshole to someone I like. • You are or support or have voted for a white supremacist, fascist, or other people who are existential threats to me or my friends. • You proselytise. Specifically but not limited to religion, cults, parareligion, or cryptocoin. • NFT bullshit. • You’re a transphobe, a sexist, a homophobe of various stripes, an unrepentant racist, etc. • You’re a troll. I don’t mean shitposting. • You’re such a “good” shitposter that you’re indistinguishable from a troll. • You promoted a tweet I saw. There are exceptions if I genuinely liked the tweet. • You are frequently engaged in bad-faith argumentating with people I follow. • You had a viral tweet and posted those “sparkly lights” or whatever shitty promotion underneath. • You’re a COVID, vaccine, climate change, etc denier. • You’re a brand or mainstream american news site. • You’re one of those services that “unroll” tweet threads. Please don’t use these. All they do is take people’s content and host it elsewhere, where the person has no control, and then they put their own ads on it, so they literally make money off people’s content without their consent. They also bypass blocks (i.e. someone I’ve blocked can read my tweets on a thread unroll). • You’re one of these services that provide a link to download videos off tweets. These are marginally better than the above, and some provide direct links which isn’t so bad. However many do still rehost, bypass, and serve ads. I can’t tell the difference (and it might change later) so I block outright. Use a website that does that without using the reply-and-tag bots, that’s fine. ### Historical block reasons and block happenstance #### On Twitter • You were on one of several third-party generated blocklists around 2015. These are the majority of wrongful blocks, as they were quite overzealous and of course the scale was very large. • You followed some particularly nasty people in 2018-2019, when I ran some block scripts against their followers. • You blocked me. I used to operate under a policy of mutual-blocking, that is, if I noticed you blocked me, I would block back. I’m not doing that anymore, and should have reversed all of these, but it’s possible I missed some. #### On Spotify These would be current if blocking artists was still a thing. • You’re an artist who is racist, homophobic, or unapologetically sexist. • You’re an artist who financially supported Trump ### Getting unblocked I don’t think I’m sought after enough that this is thing, but, just in case. • If we have a mutual in common, you can maybe try to go through them. If they’re okay with it. Don’t fucking harass my friends in trying to get unblocked from me, though, that’s only going to get you blocked harder. You can send me mail. You can contact me on other platforms. You have to be aware, though, that there’s a very high chance I’ll take any such actions extremely suspiciously, even as I outline them on here. Maybe if you really want to (for what reason? I’m not very interesting), and it’s a last resort. • Specifically on Twitter: the most likely way to get unblocked is to just keep on having interesting conversations with my friends, and chances are I’ll happen upon them, see the block, figure it was a false-positive, and unblock you. 
# Trademark As of August 2018, “passcod” is a trademark I own personally. An official, registered, paid for and in the books, proper trademark. This was done for several reasons, three really: 1. Because I wanted to see if it was possible, and it was cheap enough, and didn’t require much effort at all, so might as well try it. 2. Because very early in the year, someone approached me about the name, and I couldn’t really tell if it was a scam or not, so I took it at face value just in case, did a lot of reading, asked a lawyer, and told them politely what was mine and what they could do with the rest. 3. Because “passcod” is my name, and that means something. But while there are protections around my legal name simply by virtue of being mine, there is no such thing around this name of mine. I wanted to permanently tie the name to myself (where “permanently” is “10 years, renewable”), and have it recognised as such by the law and the land. I thought of making a little guidebook document thing that outlines what you can do with this trademark without explicitely asking me, but all the rules are kinda fuzzy because of it being a person-name and not a thing-name. So the deal goes like this: • “passcod” is me. When you use the name, you’re referring to me. In consequent, you can use the name to refer to me. But: • If you do something I don’t like with the name, I have the right to tell you off. If you persist, I have the right, and indeed the obligation to tell you off in stronger, legal, cease-and-desist type terms. • No contract or license or legal document gives you any further right to the name unless I give you authorisation (in writing) to do so. If I do for anyone, this line will be amended. • This list may be appended to or modified at any time without notice. This is not meant to be a change that has effects on what you do, unless you were planning to be a dick about it. It’s about making it very clear that something is my name. # Fiction I used to write poetry and shorts. I haven’t in some time. A lot of this is “cringey young adult” writing, perhaps, but I’m still fond of it. However, a bunch of it is extremely dark. I also write some fanfiction over on AO3: https://archiveofourown.org/users/passcod. # Poetry I used to write poetry. I haven’t in some time. A lot of this is “cringey young adult poetry,” perhaps, but I’m still fond of it. However, a bunch of it is extremely dark. Contents, with warnings when applicable: # Only One Left 28 July 2009 The door is closed. Dusk fades to night. I on a stool, eyes on embers. I remember. Crowded valleys Golden mountains They arrived Howls Tears Blood Crumpled hope I remember My eyes cloud I drop. # Mid-loss 8 October 2010 Dark. Mid-Dawn Softly I walk forward. In the first light of the waking sun. My mind still and clear and white and black. Yet I step up down and go to the usual things of the day. All that time Life is what I think about. Love is what I ache about. Pain is what my heart is. Death is where my heart goes. The sun is falling Wait for me -- Mid-Dusk -- Emily # Aftermath 7 December 2012 Oh sweet, sweet morning light you remind me I have to sleep sometime. I have to sleep. But not at night. Night reminds me of her hair brushing my face at night. 
It’s been five months and even my friends have given up on me; I am high all day from the coffee I take to keep awake and down all night from the memories I recall to keep her alive in here in my head in my mind; to keep her features from fading, her voice from quieting, her scent from flying away, the feeling of her fingers on my skin, her grumpiness in the morning, her smile turning up my day. Oh sweet, sweet morning light you remind me of that dawn again. When she stopped living. Just as the sun came up. Next to me smiling as she lost her last battle. # one and two 8 August 2013 tragically once upon a time two people far away one is missed two is missed forbidden hidden away in the dark two sets of three small words repeated over and over to oneself thinking of the other thinking of the one thinking of the heart the skin the eyes the face the hands the pain of one and two not being together still in the dark repeating “I miss you” “I love you” forever until one and two are together again. # Grow 11 August 2013 (to be read with a calligraphy pen or nib in your hand) Grow. A word. A verb. Germanic. Feel it. Slow. Old. Young. Now I’ll tell you how put it on a page: we’ll start with the o. Take your pen just so, thin first, from the center and thick on the up, thin at the top and thick on the down, thin at the low, finish sharply. The r is a small-cap, nothing special, you’ve done these before. Same with the w, but lowercase. Now the G. The G is special. You have to put yourself into it. Stare at the space before the letters, draw it in your mind— no, not like that, more majestic— give it respect; that G has held you all your life, even before you were made, it will hold you and your children, respect it, make it king, but not arrogant, as it’s old— older than the oldest tree, older than the eldest rock— and it’s wise, so wise… Yes. Now you feel it. Take your pen. Close your eyes. Follow my hand. There you are. At the start. Keep ‘em closed. Breath in. Breath out. Breath a half. Breath. And throw forth your arm. # Slowly 24 November 2013 words drip from your lips my heart grows colder until i run away hating and loving you # Alya 7 August 2014 Three Over and over Two Forever and once One Void and nothingness Zero. Alya told me to repeat this every time I was scared. Before I went to sleep. Before I started my day. Before I fell away into the night. The words don’t mean much, and it’s not the order or the rhythm that’s soothing. The mantra is just something to concentrate on, to repeat, into nothingness and void, once and forever, over and over, until all that remains in the world is me and you. That’s the first part, the first line of the mantra that Alya taught me: “You and me”. But I removed it when she went. It’s too hard getting my heart broken twice, when she went and when I didn’t, so getting my heart broken every time I get scared is impossible. Zero one two three. # so many alone at night 16 August 2014 so many alone at night wish for one to hold tightly to squeeze to take to kiss sweetly and feel one’s arms around themselves *heartbeat* # Up Up And Away 9 February 2014 Let me fall so we can fly so we can glide over the clouds. Let me fall so we can fly so we can float over the sky. Let me fall so we can fly past anything you’ve ever seen. Let me fall so we can fly so we can laugh so we can cry. Let me fall… One day I was walking under the rain. Thinking thoughts that shouldn’t be. There’s no point, that’s no life, let’s. Just. Fall. Here. Alone. Yet. Alive. 
I am the king of my own place. The master of my castle. I have the right of life and death. Over its very walls and floors. You picked me up you held me close you pressed me against you. So let me fall so we can be so we can breath so we can feel the other close. Let me fall so we can live so we can die so we can build up our home. Let me fall so we can talk so we can yell so we can fight about nothing. Let me fall so we can fly so we can smile so we can feel the wind go past. Oh, let me fall into your arms. # Sleep 19 March 2014 Darkness Darkness all around and light but not light; red orange dark but less dark than darkness. Lines of darkness dancing across. Deformed shapes of fire running in circles. Black dark black light all around; red orange brights. Everything fades. Feeling eternally sleepy yet curious at the lack of heat I open my eyes to see a cloud has obscured the sun’s rays through leaves and branches. Summer breeze, lack of sun... I get up. # Hold 29 April 2015 Hold steadfast, friend Hold against hordes against the very wind rolling off mountain peaks against the rain the storm thunder against me and you against the world against all Hold, friend, hold onto the sunshine bright in the valley onto the morning after the evening before the night during onto me and you onto your wits onto your life Hold, friend, hold me right here you right there they over there and all further away Hold, friend, hold your breath your words thoughts tongue your sleepless nights your presence your being your self Hold, friend, hold don't forget love hate pain joy the things you find important the people to smile to cry to live Hold steadfast friend in this world hold. # Sorry 24 February 2015 I just wanted It seems hopeless. It seems like the world’s an ugly place. Ever since you told me, ever since I knew, ever since we lived. wanted to make you smile Do you remember the beach? The sand, the trees, the water, salty in our wounds. It stung, but it cleansed. Do you remember the night? The moon, the stars, the glow, faces smiling through it all. We were never tired, getting to sleep thinking we’d wake early and continue, instead missing breakfast by hours. It didn’t bother us. smile over pieces of my soul I want to go to the moon and back. Visit all the moons in this system; go beyond; go far. Pick and place a single pinch of moon-dust from each in little glass boxes, meet up anew one day, you’ll barely grasp a faded memory of me, and gift you the whole set just so that, maybe, with a bit of luck, you’ll remember this from my soul that I lost a long time ago. # Right over there 26 January 2015 I am in the great unknown. Slowly flying away from nowhere to be seen to where the wind will carry me. Happy, I think. There is a bird beside me. It sings softly, as softly as a bird can, a melody I’ll forget soon. I am in the great unknown. You’ll get there too, one day, whoever you are. # The Road to Hell 12 August 2015 We are walking on the road to hell we are walking on the road to hell. We come from the worlds behind us, we come from the worlds of people, we come from a thousand paradises. We are walking on the road to hell fleeing from your tyranny, your hate, your words, your guns, your utopia. We are fleeing proudly, finally tired of living with you, near you, for you. Enjoy your paradise, for we will not be coming back, not return, not ever. 
We were your dark suns, your bright suns, your voices of happiness, your shining lights dancing to the sound of a hundred drums, your castles in your minds, your every thought and dream, your precious precious colour in your black and white world. We were the black and the white and the everything in between, and the every tint. You built your worlds for us, for you, bastions of freedom, fortresses of beauty. You closed the roads, collapsed the gates, showed us the door yet kept us behind bars. You said you loved us while beating on our homes with your rams of steel, you said you would help us while screaming you abhored us. You built palaces of fear, you built theme parks dedicated to making you feel justfied. We were like the wind, breathing life through windmills, and you were a giant Don Quixote, raving madly about the threat we posed you. We are walking on the road to hell You can stop us no longer We are walking away Away from you Away forever Away. # Self 6 June 2016 Selflove, selfhate, selfwhatever I am a professional selfhater I do, every day, my regulation 24 hours And some more, for my own pleasures Every day, a little more, without fail. Selflove, selfhate, what does it fucking matter I am a professional selfhater Every morning, every evening Every afternoon, and from nine to noon It’s my life, my death perhaps, it’s my very own tale. In the association of professional selfhaters Selflove, selfhate, all together, all betters All infinite shades of black and white Shining down; a rainbow of moonlight Every day, every night, every one, every all. Until one day perhaps — it’s the dream — none at all. # Tree 13 November 2016 Are you, are you Coming up with me They say they will be They say up, will we be Brave enough, brave to follow To leave, to flee, to go If we split, to leave me To mine and only Fate. When the proponents of tyranny come to us and ask forgiveness: “I’m sorry.” “I didn’t know it would get so bad.” We will tell them: “Too late. Too little. The toll has already rung.” We will not forgive. We will not forget. We will weep for the fallen. We will endure. What am I supposed to say to young people coming to me wanting to leave this sad world because at least now it would be on their terms? They are at our doors. They are our neighbours. They are our family. They are taking us. They are not returning us. They are denying us. They are ignoring us. I don’t know which hurts more: their blows, their betrayal, their torture, their murder, or their indifference. Are we out of sight? Are we out of night? Our world has already tumbled Our life, for years now, has bled Burn our secrets Burn our hearts Burn our love Seal your lips forever, little one Or forget you ever were different. Rejoin your friends, embrace their joy, bury your truth deep inside yourself. Learn to smile again, learn to smile the scars away, forget yourself little one, at least you’ll live another day. It’s the end of the line It’s the end of our time It’s the very last step. Too tired, too broken we have stopped running. Our final wish: to go soaring into the skies to fade away into the night Our blood running free A red sunset A tide coming out My soul drying out. (...) 
You told us to run You told us to be free You kept our secrets You were broken for us Still you did not speak We thank you We thank you We thank you all We will remember you We’ve built monuments We’re telling stories At long last, with tyranny defeated We stand here honouring you Who made it all possible But didn’t get to see the end We weep for the fallen We run into the sky We smile true to our soul We remember who we are We restore our hearts We return to ourselves We heal the scars We dream at night We weep for the fallen The toll has rung one final time. # Unlit 14 October 2016 “You shouldn’t be driving.” “I haven’t had a drink.” “You know what I mean.” “Yeah.” I drove on, Sani besides me. Stopped at the lights, fumbled with my phone, trying to put music on. Light turns green. Grip the wheel with one hand, shut the phone with the other, make an awkward run into the avenue. No cars around still, but more homes and driveways. I turn into my lane, streetlight shining down on me and the empty passenger seat. # One summer morning 9 February 2017 Breathing the fresh air in the middle of light rain and the smell of wet fog I resolve to go walk up again and visit the mountain, my friend. # Smile 12 September 2018 You think you are out there exploring the limits. The limits of the possible, the limits of the feasible, the limits of the thinkable. The limits of the imaginable. And we smile, when we see you, children that you are, afloat on top of this lake you call an ocean. You are flotillas and armadas and solo navigators and skilled amateurs slashing these waters. You are working, in unison or in competition or in solitaire independence, to perfect the art: in as little movement, in some set distances, in tiny vessels, and even without sails. And we smile, when we see you, children. We work beneath the surface. We work above the skies. We crawl around the bottom of the seas. We move earth, and air, and fish. We summon thunder. We work. Days, weeks, decades, lifetimes. We know of the monsters. You scoff at the stories. Even when provided with pictures, you imagine small beasts, that could not possibly be a threat except perhaps when alone and unprepared. We have been alone. And the monsters we know, they are taller than we are. In deep, where they live, we learn to be quiet. To do our work, as grandiose and extravagant as it must be, quietly in the shadows, so as not to awaken these hell-beasts of your nightmares. We build defenses. Intricate and complex to the initiated, they look repetitive and primitive to the neophyte. We like them. They give us comfort. We protect you, without you ever knowing. A ripple on the water where none should have been is a failure of our task. When we are at our best, you do not see us. The tools you use and improve and congratulate yourselves for bringing further than ever, we have as base part of our kit. For the work, though, diversity is everything. Where you might sharpen your words on an ever-growing collection of still lives, we battle in the deep, we love, we hate, we die, we bear, we spawn, we solve, we ponder. We do not resent you not knowing. It is the work. It is the life. It is innocence worth protecting. You write us in your legends. In your myths. In your stories. In your rumours. Never quite real, to you, we are. Yet, we love you. And smile. # Short Prose I used to write shorts, in the same vein as my poetry. I haven’t in some time. A lot of this is “cringey young adult bullshit,” perhaps, but I’m still fond of some of it. 
However, a bunch of it is extremely dark. Contents, with warnings when applicable: # The Box 28 October 2008 The box was simple. It was hand-crafted, seemingly by inexperienced hands. The lid was not attached to it, except for two mould-eaten strings, passed through rough holes in the wood. There were carvings on its top: words now unredable, names, a heart, crossed out by later cuts. Several layers of time were printed there; past drawings, gone relationships, ancient memories. When I opened the box, it was as if I’d opened an old book. The same odour, the mark of time. The same resistance, this reticence of giving up secrets. Dust spiraled under my breath. When it fell back, the content of the box, those pieces of childhood, were finally exposed to my sight. They seemed still defiant, as if daring me to uncover their stories. A pair of little scissors and cotton string were the first things. The seemed to have been placed there in a rush, just before moving out of home. Pence coins, saved and never used. Several badges from the scouts. Love letters, with Xs in faded ink. Had they been received and kept? Or were they still waiting to be posted? Further in the shade were deeper souvenirs. Diving into the swirls of dust, my eye caught on toys and objects only precious here. Glass marbles, scratched and marked, reflecting older games. Short pencils, used by force of fingers, sharpened with quick strokes of a knife. The knife itself, its steel blade half-open, shining still, having won its fight against time and rust. A little vial, empty now. Two little stones, black jewels in the dark. Going back to the source, the beginning: the last objects, first put, unveiled from time. A sepia photograph of a baby. A name engraved in a bracelet. A petite cloth doll, of a rabbit. First fallen tooth, neatly labeled. Finally, a little card with a name and these words: “If by any luck you found this box, could you please return it to me.” I looked up the name. Ten minutes later, when a heavy door opened and a grandmother looked at me, somewhat intrigued, I asked: “Is this yours?” # A Man 13 February 2011 He was lit up faintly. Standing in a room of golden proportions (which is not saying a lot), he was one stood man (which is). The only lamp, a seemingly old neon, hanging short from its chains, shone darkly above none. None but a five-feeted glass plane, upon which glossy pages were desperately eager to tan; but alas! not one ever did lift their covers. The man had been looked upon countless times, even through his short while standing. Always from the top, which might be explained by his stature, or might not. His hair, though, was all but feature-less. A few curls only stood out from otherwise straight, short mop. His visage was no different: sharp edges and soft skin were its only characteristics. Immaculate white collar. Iridescent black blazer. No tie. Dark pants, which pockets concealed fine hands (five of them, two of flesh). Polished shoes. Neat. Above his socks, through the shadow, a lighter strip revealed — no, confirmed — what one could have mistaken for tan earlier. Yes indeed; the man was a lone, black white cream, wolf. Behind blue eyes, the man was impatient. His feet were hurting. Had someone spoke to him at that very instant (as the next it was gone), he would have answered softly, non-committally; in his mind, however, his voice clear and his tone dry, he would have snapped, glaring. But etiquette ruled over him. 
Earlier, he had risen for that same reason from his seat and let another take it, damning both the old dear for entering, and the owner for the number of chairs. Her feeble thanks had irritated him, and he had not replied. He now damned both himself for this lack of respect, and his long gone ancestors (only from his mother’s side) for having instated and enforced this ridicule heap of codes and laws. A sparrow, love, and a brightness in his mind entered through the far door and kissed him quickly and sweetly. Arm in arm, they left. # ali.vei.nth.eni.ght 10 August 2015 Your cat, your glasses, and a giant butterfly walk into a room. You realise you’re the bartender. They ask for coffee. “With almond milk”, the cat adds. “You don’t want the aftermath of cow milk on your tile.” You turn around, sight a half-empty bottle of vodka and loose tea leaves. It does not appear to be a nightmare. Roll 3d17s mod 8, no, not those ones, the blue and yellow-polka-dots ones, for your blood-alcohol level in ppm. You have 9¾ moves left before full fatigue hits. All other members of your party are Permanently Vanished. Consider the GM (that’s me) to be omnipotent (when sober) and having a fondness for chocolates (white). As a reminder, you have Steel Skin Level 9 applied, a full Spell Book, and Magic Points to last you until Armageddon comes… which should be in 25 minutes, give or take a couple dozen seconds, according to today’s schedule. Proceed. # Regulus 16 August 2016 We lived forever, and would live forever. So we had concluded. We didn’t know, of course, because none can predict the future. But we had attempted everything, and still, we lived. We had watched, in the meantime, people come and go, countries come and go, peoples come and go. Some of us helped, to make them come, or to make them go. We clashed, albeit rarely one against another; such conflicts rarely went how we wished. It would not be accurate to say we warred: we made war, but we didn’t hurt ourselves. Rather, those we lead and guided were hurt, and made hurt, and did hurt. It took us a century to meet again, all together. And we stayed together for almost a decade, casting off from the world, metaphorically. We made peace. The world outside raged on. In all reunions going forward, we regretted, all of us, for that first moment of selfishness. It had taken us a century to make peace among ourselves, at little cost to us, given we would live forever, and at excruciating cost to everyone else, given that they didn’t. It took us, to our eternal shame, one other century to not only repair the damage we had sown, but to help all of them, all mortal people, to make peace. Now, they lived longer. We split in four: some worked on prosperity, to make the world the best we could, not for us, but for them; some worked on space, to explore and move there, to make inhabitable, to help prosperity; some worked on knowledge, and foremost, on understanding why we lived, to perhaps reverse it, or to perhaps share it; some worked on the past, to find why and whom made us live forever, to study their history, to help them remember it, such that both us and them would endeavour not to repeat it, or at least not the bad parts. As of this day, three score centuries on, before I leave for the stars, uncertain to come back, we have not finished our work. Humanity has almost forgotten us, all according to plan. They live, they think, forever — we know otherwise, but still, we hope. Soon I embark. You who reads this, tell me: how is Earth? 
— Found in the Sechura Desert, Earth, in the hand of The Owlman, 9387 CE. P.S. Shortly after leaving, I made a discovery that let me finish my part of our research: The Understanding. I came back here to leave you copies of my notes and conclusions. Go find Jiang Maze, last I heard they were living in the Phæthontis quadrangle on Mars. They are in charge of the relevant part of The Past, and will find these notes interesting, as well as provide you, most probably, with a cup of the finest tea in the Sol system, as dusk falls. — Found in a decaying ATHENA ΑΘΕ telescope, in L2 orbit, Earth, 9411 CE. I hope you’ve found the previous stashes useful, although there was something missing, an unexplored thread in my research that will prove critical both in the understanding and application of, at least, the last five stashes, and, perhaps, a few more. While waiting on this rock, I have also branched off into improvements which affect me directly, and will field test them soon. If you’re interested in a potential 3% efficiency increase in FTL Type C drives — and let’s be honest, who isn’t? — come find my next stop. If you can. Nyctimene is bringing me away from the hubbub of the main elliptic plane of the Sol system, perfect for a quiet and discreet jump. How was Maze’s tea? — Found on 2150 Nyctimene, Sol system, 9436 CE. I thought I would spend an entire orbit here, on Dagon. I only lasted 400 years before fancy made me take flight again. I have to say, as impressive as it was to approach the Eye of Sauron, only to settle within its edge, it was nothing compared to centuries of perpetual meteor showers. As usual, you’ll find my notes and research here. There’s a lot; take your time. I endeavoured to keep linguistic drift low, but I cannot control for how the common tongue will have evolved while I was alone here, so I wish you luck, dear reader. I have left a clock zeroed at my time of departure, for your information. It should be precise for two millennia, beyond that, well. You’d be lagging anyway. — Found in a shielded structure on Dagon, Fomalhaut system, 9520 CE. As mentioned in my last few notes, being around people once again has quite well increased research avenues. Not only that, but I went and got up to date with the current state of everything, and I even sent a few messages to my colleagues of old, which prompted an impromptu reunion with those that happened to be nearby. As a result, this stash is special! Sure, you’ll find the usual, but there’s also a record of said reunion, and notes on those I met, along with historical observations, recollections, and even archives dating back all the way to my first millennium. I do not have the hubris to write conclusions and analyses of that data, that and my colleagues might just take offence, but perhaps you or yours will want a stab at the matter. At my great surprise, although in hindsight it was inevitable, I found I liked interacting with people, especially those younger than me (there’s a lot more, and they’re not all people who have tried to kill me at some point; it was a long time ago, and we’ve all moved past it — so we say). A bright new world this is, after being alone so long. So, at my great surprise I started getting attached to a few. As I write this, I know I’m leaving; they know I’m leaving. And yet I also know I might come back, as my missing them will gnaw at me. 
My destination is Antares, perhaps detouring via Betelgeuse, there’s a very interesting research group drawing parallels between native species of the two systems, hinting at another species being capable of space travel — now that would be the greatest discovery, the crowning of a career, wouldn’t it? When you stop by, do make contact with my local fellow ancients, they’re intrigued; and your news from Maze are the freshest they’ll get. Bring them some tea, too, you know the one. Does it still have the scent of the moon? — Found in a beach mansion on Aldébaran-4, Aldebaran system, 9566 CE. This is the last note there is. As of writing this, I have finished my era of research and study and travels. Well, maybe not travels. I have made friends here, and on Aldébaran-4, and around Betelgeuse, and elsewhere. I’ll have to keep traveling, if only to visit them all. But my pursuit of knowledge and understanding is well and truly over. I have, long ago, finished my portion of the work assigned to me, and more. You’ll find next to this the last details and conclusions regarding that and all other threads I was exploring. I find myself wondering if I’ll go back to Sol; probably not soon, but it’s a possibility. One I would have never imagined when I started on this trip, this exile, if you would. I have lived a very long while, and I have regretted a great many events; it is only recently that I have been able to finally face them, to forgive myself in a way; it is only recently that I have opened up about them to another. Perhaps, had I continued on my lonesome, you would eventually have heard these grievances against myself… but these notes feel much too impersonal now. I cannot help but feel incoming loss at stopping there, yet it is smaller than the relief, and smaller still than the joy at being able to dedicate more of my life to my friends, my precious people. You should know what I look like, nowadays. I’ve included a picture, probably not the first you’ve seen, though, as I’m sure you’ll have talked with my fellows on previous stops. But if you see smaller versions of me around, don’t blink — say hi. I’ve at last managed to acquire some decent tea. You will not find any further note. Don’t try. Leave this to be my final one. I have been following your progress, lately, as your search coincided with my travels. This last stash should satisfy you for a little while, Otto, and then it’s your turn. A millennium of research and travel, two centuries of chase. Go forth and make it all your own, from now on. And thank you. — Found in the Rehua, a citadel-station orbiting Antares, 9570 CE. # Regulus (notes and author commentary) The story of the narrator starts one or two centuries beyond our present time. It is in accordance of my own beliefs that this kind of technology (to be able to live a long time, if not forever, and to have decent space travel methods, to start with) will be available and ready in our world, in reality, at most around this kind of timeframe. From there, the narrator says they wasted a century before getting to work, and that this note was written after “three score centuries” or 6000 years, have passed. So the story begins at least after 8316 CE. The last note puts it at 8570 CE, but durations are purposefully left fuzzy to allow for wording and the difficulty to measure time accurately over space travel, FTL, and times beyond anything we, humans of the 21st century, can reasonably imagine. 
The story for the “reader,” who is not the reader of this short, but a character named Otto who finds this note and then begins a search for the narrator (presumably to tell them how Earth is?), begins in 9387 CE. The search lasts 200 years, during which the narrator leads Otto on a merry chase of stashes of research and notes across the Universe. We, the reader of the short, only get to read a few of the notes, as hinted in the Nyctimene entry; furthermore, there are also the stacks of research results and notes provided along each entry; it would not be realistic for just those few clues contained in the entries to be sufficient to find the next one. But those ones put together have an interesting thread. The Sechura desert is better known as the place where the Nazca lines are. The Owlman is a Nazca figure representing a person with an owl’s face, arm raised up. ATHENA is an actual X-Ray space telescope that is being built and will be launched in a few years. ATHENA ΑΘΕ is probably one of its far, far successor. This is happening 62–72 centuries in the future, after all. The ATHENA telescope, ours, is going to be in L2 orbit; obviously this latter one, version ΑΘΕ, stayed there, too. “ΑΘΕ” was an inscription on Athenian drachmas in Ancient Greece, accompanied by a depiction of an owl. The Phæthontis quadrangle is an area of Mars. Phaeton was a demigod who, one day, drove the Sun’s carriage across the skies. “as dusk falls” is a reference to the philosopher Hegel, who noted that “the owl of Minerva spreads its wings only with the falling of the dusk.” Jiang Maze’s name does not mean anything and is not particularly significant; I pulled it from a box of tea and a poster in my brother’s room. Jiang is the family name. They use the pronoun “they” as written in the text. Speaking of pronouns, notice how none of the characters have a defined gender? The two names, Jiang and Otto, are neutral in this regard. What gender did you give the characters, in your mind, while reading? 2150 Nyctimene is an asteroid (or minor planet, to be precise) in our solar system, which I have dubbed Sol system in this short, discovered in 1977 bla bla bla, none of that is relevant. It’s located quite a ways from Earth, on an orbit inclined from the elliptic; in my story, that provides a place that is accessible from other planets by conventional propulsion, and then brings your vessel away from the influence and business of the main plane of our system, so that you may engage your FTL without causing nor getting undue interference. To get back to the reference thread, Nyctimene was the daughter of Epopeus, King of Lespos. Pursued by her father, she was rescued by Athena who turned her into an owl, the very owl depicted on the drachmas. The Owl of Athena, or the Owl of Minerva, as called by the Romans, was, by the way, a symbol of knowledge and study. Dagon is the name of an exoplanet orbiting the star Fomalhaut. It is situated within an immense disc of debris floating around the star. Photos of that disc of debris and the star show a distinctly fuzzier band: a dust ring. A view from the side of that disc makes it look elliptical. All in all, from here, it kinda looks like an eye, with a massive ball of fire in the middle of it. Fomalhaut is about twice as large as the sun. 
For the imagery, we have to go back to Epopeus: his name came from ἔποψ, better known as the hoopoe, also nicknamed “the watcher.” Then we abandon Greek mythology and go to the Persians, to whom the star was one of the four “Royal stars,” and more precisely, was named Hastorang, of the winter solstice, “the Watcher of the North.” (That’s not meant as a reference to a very popular TV series of the moment, it’s entirely coincidence. I needed a star with a planet that referred to the Nyctimene somehow, and I’d already made plans for two other Royal stars.) The planet is located near that dust ring mentioned before, which completes the explanation for the Eye of Sauron mention. The shielded structure is because the planet is moving through the disc of debris, impacts are frequent, so a shield is necessary; and in turn, the shield would generate a sky of continuous showers of burning meteor remains (the shield would destroy said meteors instead of bouncing them or repelling them; I felt that was more realistic compared to the fairly fantastic shields present in some other space-bound sci-fi.) Finally, the planet has a very long orbit, spanning 2000 years. Aldebaran is another of the Royal stars. Its current name means “The Follower,” in Arabic. In Hindu, it is called “the mansion Rohini.” For the location in this story, I went with a reference to the excellent BD “Les Mondes d’Aldébaran,” which inspired me during my teenage years. One of the first scenes in that story depicts a very long and beautiful beach, on the planet Aldébaran-4, fourth and only livable planet in the Aldebaran system. Antares is also a Royal star. It’s a red supergiant nearly 900 times the size of the sun. In Ancient Greece, the citadel or acropolis of a city-state contained its royal palace. Antares has had various names in history, like: “the Lord of the Seed,” “The King,” “Jyeshthā” (“the Eldest”). In Māori, it is called “Rehua” and regarded as the chief of all the stars. The title, Regulus, refers to the fourth and last Royal star, and perhaps the next destination of the narrator, who knows. # My mountain of Knowledge 3 January 2017 When the ship of my early childhood approached the land of life, knowledge is what I saw and what I started seeking. My journey began on the coast, where I learned to walk, and then I looked up, and saw all that I could climb. The slope was steep, the rock slippery, the path barely walked. And when I reached up to the summit I had glimpsed as a child, I found that it had merely been the place where earth meets the bottom of the clouds, and that my mountain continued upwards, and that its true summit I could never see at all. But this did not discourage me. And as I turned around and gazed over all I could see from here, and all that I had achieved in this climb, I knew that I would continue, forever if need be, in my quest upon the mountain of Knowledge. # Finally, I knew 24 January 2017 I hope, for your sake, that he forgives me. It was many moons before I understood this seeming contradiction. But it was too late; he was gone, and I was alas forever forsaken. 
# Technicals

# A little elegant state machine with Async Generators

2 February 2018

Today at work I made this up:

```js
async function* init_process (steps) {
  for (let step of steps) {
    while (true) {
      try {
        await step.run()
        break
      } catch (error) {
        handle_error({ step, error })
        yield
      }
    }
  }
}
```

What this does is it takes a list of steps, which are async tasks (in our case a request and some processing), runs through them, and if there is an error at some point it hands back to the caller... and then the caller can choose to retry the failed step and go on. All in 10 lines of code.

Beyond brevity, what I like about this code is that as long as you know the behaviour of an async generator, of break inside a loop, of a try-catch — which are all, with the possible exception of the async generator, fairly elemental language structures — you can understand what this little machine does simply by running through it line by line, iteration by iteration.

Here's how you'd use this:

```js
// load the steps, do some prep work...

// Prepare the little machine
const process = init_process(steps)

// Hook up the retry button
$('.retry-button').click(() => process.next())

// Start it up
process.next()
```

And that's it! Let's run through this a bit:

1. `async function* init_process (steps) {`

   This is an Async Generator that takes a list of steps. Generators, and Async Generators, get their arguments and then start frozen. They don't do any processing until you first call .next(). An Async Generator is just a Generator! All it does special is that you can use await inside it, and if you want the results of what it yields, you have to await those. (But we don't use that here so you don't even need to keep that in mind.) There's no extra magic.

2. `for (let step of steps) {`

   We're going to iterate through all the steps, one at a time.

3. `while (true) {`

   This is the first "aha!" moment. To make it possible to retry the current, failed, step, we start an infinite loop. If we have a success, we can break out of it, dropping back into... the for loop, and thus continuing onto the next step. If we have a failure, we don't break out, and the while loop will naturally start that step over.

4. `try { await step.run(); break`

   We try the step.run(), and then we break. Because of the way exceptions work, break will only run if nothing was thrown. That is, if step.run() ended successfully.

5. `catch (error) { handle_error({ step, error })`

   We want to immediately handle the error. We could yield the error and let the caller handle it, but this way there's no need for an extra wrapping function: we can just call process.next() to start and resume the machine, without needing to care about its output.

6. `yield`

   The piece of magic that brings it all together. If and when we get to that, we freeze the generator state and hand back execution to the caller. It's now up to it to tell the little machine to continue, and it can do that at any time. There's no need for complex state management, of preserving and restoring progress: the language itself is taking care of it.

7. Outside: `process.next()` (the first time)

   Recall that the Generator starts frozen (see 1). The first thing we do is call next(), and that unfreezes the machine. It starts processing steps, and eventually will either get to the end, or stop at an error.

8. To retry: `process.next()`

   When we hit a snag, handle_error() does its job of telling the user and figuring out problems... and then it can choose to display a retry button, something like the sketch below.
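   For instance (a hypothetical sketch, not part of the machine above — `step.name` and `show_retry_button()` are stand-ins for whatever your steps and UI actually provide):

   ```js
   // a possible handle_error: report the failure, then surface the retry
   // control; clicking it calls process.next(), which resumes the machine
   function handle_error ({ step, error }) {
     console.error(`step "${step.name}" failed:`, error)
     show_retry_button()
   }
   ```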
   Or maybe it will want to automatically retry a step if it deems it safe to do so. Or maybe the error was very bad, and it just wants to abort. It can do all these things, and it can take its time: the little machine will wait patiently until it's told to get going again.

And that's all there is to it!

# Dhall: not quite it

8 August 2020

Last month I dove into Dhall after discovering it more than a year ago but never having the excuse to really get stuck into it. Last month I got two different opportunities, and used it for both: an autoinstall template for Ubuntu 20.04, and a Kubernetes config template for very similar containers that each needed a pod and a service just to vary one or two settings. While Dhall is good at what it does, despite many rough edges, as I progressed I realised it's really not what I want.

## What Dhall does well

**Types.** Dhall is fully typed in the Haskell fashion, and every Dhall document is an expression, which contains types or values, or trees of types and trees of values. Dhall does structural typing, so two types `A = { an: Integer }` and `B = { an: Integer }` are actually the same type.

**Safety.** Dhall is strictly non-Turing-complete, and most notably is guaranteed to complete (no halting problem here). Functions have no side effects, and while input can be done from files, the environment, and remote URLs (!), output is only through whatever structure is left at the end.

**Reproducibility.** Dhall includes concepts and tools that can enforce the integrity of imports, and verify that one expression is equivalent to another, such that you can refactor how that expression is constructed and authoritatively assert that your refactor is correct.

**Library.** As an established project, there are libraries that are built up for various projects, such as for Kubernetes manifests, Terraform, GitHub actions, OpenSSL, Ansible... additionally, the built-in function and keyword set is very small, so everything is accessible, inspectable, etc.

## Where I found it lacking

**Errors.** Good erroring is hard, I'll acknowledge. Dhall erroring isn't terrible... but it's often obscure and misled me many times. Dhall often stops at the first error, which might be a consequence or a symptom of the actual mistake, and gaining that visibility is hard.

**Familiarity** and layperson-friendliness. Is basically zero. Dhall errors require familiarity with Dhall errors: they're not very approachable unless you're already familiar with them. Dhall itself is foreign at times, and some of its syntax quirks are downright baffling (in one pet peeve, it bills itself as whitespace insensitive, but what it really means is that as long as whitespace is in the right place, it doesn't care what that whitespace is... but `a(b c)` is still different to `a (b c)`, to hilariously-hard-to-debug effects.) While I can use Ruby, Rust, and advanced Bash in work projects, I would never use Dhall because it would add more barriers than it adds value.

**Inconsistency.** For a language with a tiny built-in library, it's quite surprising. Everything in Dhall is an expression... except some things that look like expressions but aren't (like the merge keyword). The whitespace thing. Imports get an optional argument for a checksum, something that nothing else can do (no optional or default arguments, though the record pattern approximates some of it). Some things are keywords, some things are symbols, and some things are nothing at all, with little rhyme or reason. It makes it hard to develop intuition.
**Information loss.** There's a bug open for at least three years where the formatting tools of Dhall will silently erase all comments except those at the top of a file. Dhall is also bad at respecting ordering. This is surprising for a configuration tool: while the consuming application might not care, order can be very important for humans. Some tools may even interpret ordering, for example overriding earlier identical keys in a JSON map, or keeping the first one, and re-ordering may actually change meaning.

**Inference.** Because Dhall does structural typing with named and anonymous members, and because it has no generics, there are many situations where it knows the type of something, but will refuse to compile unless you make it explicit, which can be very repetitive and/or require refactorings to put a name on a previously-anonymous type.

**Inheritance or extensibility.** While I like the lack of class-based inheritance in programming languages like Rust and instead embrace the wrapping and trait and composition types concepts, configuration is a different space. It's not uncommon for a configuration schema to have a general shape for a stanza that is specialised in many different variants, and representing that in Dhall is painful, repetitive, or both.

**Translation.** Somewhat related or an alternative to the above. Dhall makes it easy to create type-friendly structures, but offers little to translate those structures back into what the actual consumer expects. This ranges from key/value translation, where a Dhall-idiomatic spelling would be `StorageKind` but the configuration requires `storage-kind`, to flattening, where you could express a structure as an `Action<Specifics>` where `Action` has a type and id, and `Specifics` is an enum/union for `AddPartition` or `WriteFilesystem`, but the required structure has type and ids and all specific properties on the same level, to different translations for different outputs.

**Postel's Law.** Or robustness principle. The one that goes "Be conservative in what you do, be liberal in what you accept from others." Dhall is conservative in what it does, certainly, and also very strict in what it accepts. This would not be so much a problem if the tooling/erroring was better: JSON can also be said to be strict on the input, and tooling exists that will point to where the error is quite precisely; YAML can be said to be quite lax, and may silently do the wrong thing. Dhall, however, doesn't improve one way or the other.

# Exploration of Wasm

17 March 2020

## Background

I've been dabbling with Wasm for several years, but only really started going at it in the past month, and for the purposes of this post, for the past two weeks. I had a bad idea and I've been working to make it real.

I'm not coming from the JS-and-Wasm perspective. Some of the things here might be relevant, but here I'm mostly talking from the point of view of writing a Wasm-engine-powered integration, not writing Wasm for the web, and not particularly writing Wasm at all even.

For those who don't know me, I work (as a preference) primarily in Rust, and I work (for money) primarily in PHP, JS, Ruby, Linux, etc. Currently I'm in the telecommunication industry in New Zealand.

## The wasm text and bytecode format

One very interesting thing that I like about wasm is that the text format, and to a certain extent the bytecode, is an s-expression. Instructions are the usual stack machine as seen e.g. in assembly. But the structure is all s-expressions.
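To illustrate (my own minimal sketch, not an example from the post; this `$addOne` is the same shape as the one called in a snippet further down):

```wat
;; the whole module is one nested s-expression
(module
  (func $addOne (param i32) (result i32)
    (i32.add (local.get 0) (i32.const 1)))
  (export "addOne" (func $addOne)))
```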
Perhaps that's surprising and interesting to me because I'm not intimately familiar with other binary library and executable formats... fasterthanlime's ELF exploration is still on my to-read list.

The standard wasm tools come with wat2wasm and wasm2wat, which translate between the bytecode (wasm) and text (wat) formats. wat2wasm will produce simple yet nice errors if you write wrong wat.

My preferred way of writing small wasm programs is to write the wat directly instead of using a language on top. I am fairly comfortable with stack languages (I have a lingering fondness for dc), and a lot of the work involves interacting with wasm structure more than it does the behaviour of a module. To write larger programs, especially those dealing with allocations, I use Rust with wee_alloc, optionally in no_std mode. I do not use wasm Rust frameworks such as wasm-pack or wasm-bindgen. I have tried AssemblyScript, I am not interested in C and family, and that's pretty much the extent of my options, as most everything else either embeds an entire runtime, or is too high-level, too eldritch, wildly annoying, or unfamiliar.

Even more useful is wat's ability to write stack instructions in s-expressions... or not, as the need may be. For example, this:

```wat
i32.const 31
call $addOne
i32.const 8
i32.mul
```

Can equally (and more clearly) be written:

```wat
(i32.mul
  (call $addOne (i32.const 31))
  (i32.const 8))
```

Strictly more verbose, but helpful where following along with stack notation can be confusing.

## The wasm module system

There is an asymmetry in the module system that... makes sense to anyone who's used language-level module systems, but might not be immediately obvious when approaching this in the context of dynamic libraries.

There are four types of exports and imports: functions (the bread and butter), globals (i.e. constants and statics, but see later), memories (generally only one), and tables (for dispatch and the like, which I don't deal with much). While engines do support all types, as per spec, languages targeting Wasm often only support functions well. It's not uncommon to initially start with an integration that expects an exported global, only to then change it to a function that's read on init and documented to need a constant output, because some desired language doesn't support making wasm globals.

Wasm has the potential concept of multiple linear memories, and of exportable and importable memories. Currently, the spec only supports one memory, which can either be defined in the module or imported (defined elsewhere, including in some other module). In theory and/or experiments, most languages also only support a single memory, or only support additional memories as addressable blobs of data. C & co, with manual memory management, can in theory allocate anywhere, and so may be better off... Rust's AllocRef nightly feature shows promise to be able to specify the allocator for some data, and therefore to configure multiple allocators, each targeted at a different memory. However, that will require multiple-memory support at the (spec and then) language level in the first place. For now, designing integrations to handle more than one memory is not required, but it is a good future-proofing step.

Exports are straightforward: each export has a name and maps to some entry in the module's index spaces. Once you compile a module from bytecode you can look up all of its exports and get the indices for the names. This is important later.

Imports have two-level names: a namespace and a name.
The idea is for integrations to both be able to provide multiple libraries of imports without clashes, and to support plugging one module's exports directly into another module's imports, presumably namespaced under the first module's name, version, some random string, etc. In practice there are two namespaces worth knowing about: env is the de-facto default namespace, and js is the de-facto namespace for web APIs.

In Rust, to specify the import namespace (which defaults to env), you need the #[link(wasm_import_module = "foo")] attribute on the extern block, like so:

```rust
#[link(wasm_import_module = "foo")]
extern {
    fn trace(ptr: i32, len: i32);
    fn debug(ptr: i32, len: i32);
    fn info(ptr: i32, len: i32);
    fn warn(ptr: i32, len: i32);
    fn error(ptr: i32, len: i32);
}
```

## Function calls

In the wasmer runtime, which is what I have the most experience with, there are two contexts to call exported functions in: on an Instance, that is, once a compiled module is instantiated (we'll come back to that), and from a Ctx, that is, from inside an imported function call. The first is highly ergonomic, the other not very (this will probably improve going forward, there's no reason not to).

```rust
let func: Func<(i32, i32)> = instance.func("foo_functer")?;
let res = func.call(42, 43)?;
```

To call from a Ctx, the best way currently is to pre-emptively (before instantiating) obtain the indices of the exported functions you want to call from the compiled module, and then call into the Ctx using those indices:

```rust
// after compiling, with a Module
let export_index = module
    .info()
    .exports
    .get("foo_functer")
    .unwrap();

let func_index = if let ExportIndex::Func(func_index) = export_index {
    unsafe { std::mem::transmute(*func_index) }
} else {
    panic!("aaah");
};

// inside an imported function, with a Ctx
let foo = 42;
let fun = 43;
let res = ctx.call_with_table_index(
    func_index,
    &[WasmValue::I32(foo as _), WasmValue::I32(fun as _)],
)?;
```

## Multi-value

Something that is not obvious at first glance is that multi-value return in wasm is comparatively young and not very well supported, which presents nasty surprises when trying to use it in all but the most trivial cases. Multi-value [return] is when wasm functions support multiple return values instead of just one:

```wat
(func $readTwoI32s (param $offset i32) (result i32 i32)
  (i32.load (local.get $offset))
  (i32.load (i32.add (local.get $offset) (i32.const 4)))
)
```

To compile that with wat2wasm, you need the --enable-multi-value flag, which should have been a... flag... that this wasn't quite as well supported as the current spec made it out to be. However, wasmer supports multi-value like a champ, both for calling exports:

```rust
let func: Func<(i32), (i32, i32)> = instance.func("read_two_i32s")?;
let (one, two) = func.call(0)?;
```

and for defining imports:

```rust
imports! {
    "env" => {
        "get_two_i64s" => func!(|| -> (i64, i64) {
            (41, 42)
        }),
    },
};
```

That initially lulled me into a false sense of security, and I went about designing APIs using multi-value and testing them with multi-value hand-written wat. All seemed great! Then I tried using Rust to write wasm modules that used my APIs, and everything fell apart, because Rust does not support multi-value for Wasm... and lies to you when you try using it.

See, Rust uses some kind of "C-like" ABI to do the codegen for its imports and exports in its wasm support, such that if you write this:

```rust
extern {
    fn get_two_i64s() -> (i64, i64);
}
```

with multi-value you might expect this wasm:

```wat
(func (export "get_two_i64s") (result i64 i64))
```

but what you actually get is this:

```wat
(func (export "get_two_i64s") (param i32))
```

Uhhh???
What Rust actually generates is a function that would look like this:

```rust
extern {
    fn get_two_i64s(pointer_to_write_to: u32);
}
```

which you'd then call like:

```rust
let mut buf: [i64; 2] = [0; 2];
unsafe { get_two_i64s(buf.as_mut_ptr() as u32); }
let [a, b] = buf;
```

So now both sides have to know that get_two_i64s expects to write two i64s contiguously somewhere in memory you specify, and then you retrieve that.

The wasmrust "framework" does support multi-value. It doesn't magically activate a hidden rustc flag to enable multi-value codegen, though: it post-processes the wasm, looks for "things that look like they're multi-value functions", and writes them a wrapper that is multi-value, leaving the originals in place so you can use both styles. What the actual fuck. I'm sure it works great with the limited API style that wasmrust's bindgen macros write out, and I'm sure it was a lot easier to do this than to add multi-value support to rustc, but it sure seems like a huge kludge.

Anyway, so: multi-value is sexy, but don't even bother with it.

## Instantiation and the start section

Wasm modules can contain a start section, which can absolutely not be thought of like a main function in C and Rust: code that runs directly, without being called via an exported function. The start section is run during the instantiation sequence. If there's no start section, it's not called, simple as that.

Now, wasm people will insist that the start section is a compiler detail that should absolutely not be used by common plebeians or for programs and such; that it's useless anyway because it runs before "the module" and "exports" are available; and that implicitly exported functions rely on the start having been run, so you really shouldn't use this for anything... Anyway, you can't generate it. And fair enough. I'm sure they know their stuff and they have good reasons.

However. The instantiation process for Wasm is precisely defined. After this process, the module is ready for use. Wonderful. The start section is called as the very last step of the instantiation process. So while the official advice is to have some export named, e.g., main or something, and then have the runtime call this export straight away, if you want to deliberately flout the guidelines, you probably can. You can totally use the instantiation of a module as a kind of glorified function call. It's most certainly a bad idea... but you can. Given that nothing will generate this for you, you'll need to post-process the wasm to add the start section in yourself. A small price to pay.

(Seriously, though: don't. It's all fun and games until nasal daemons eat your laundry, and again, nothing supports this.)

## Types

People usually start with that, but it's kind of an implementation detail in most cases, and then they leave it at that... there are some good bits there, though. As a recap, Wasm at the moment has 2×2 scalar types: ints and floats, both in 32 and 64 bit widths, plus one 128-bit vector type for SIMD (when supported). To start with, you can't pass 128-bit integers in using v128. Good try!

The wasm pointer size is 32 bits. Period. There's effectively no wasm64 at this point, even though it's specced and mentioned in a few places. If you're writing an integration and need to store or deal with pointers from inside wasm, don't bother with usize and perhaps-fallible casts: use u32, and cast up to usize when needed (e.g. when indexing into memories).
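For illustration, here's a minimal sketch of that pattern. GuestPtr and read_byte are names I'm making up for this example; they don't come from any wasm crate:

```rust
/// A pointer into guest (wasm) memory: always 32 bits,
/// regardless of the host's pointer width.
/// (Hypothetical helper, for illustration only.)
#[derive(Clone, Copy, Debug)]
struct GuestPtr(u32);

impl GuestPtr {
    /// Widen to a host usize only at the point of use.
    fn as_index(self) -> usize {
        self.0 as usize
    }
}

/// Read one byte of guest memory, doing the cast at the very edge.
fn read_byte(memory: &[u8], ptr: GuestPtr) -> u8 {
    memory[ptr.as_index()]
}

fn main() {
    let memory = vec![0u8; 64];
    println!("{}", read_byte(&memory, GuestPtr(7)));
}
```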
Then pop this up in your code somewhere to be overkill in making sure that cast is always safe:

```rust
#[cfg(not(any(target_pointer_width = "32", target_pointer_width = "64")))]
compile_error!("only 32 and 64 bit pointers are supported");
```

As for unsigned and smaller-width integers, engines have no magical support: it's all convention between the two sides. u8 and i16 and u32 are cast to 8, 16, or 32 bits, padded out, given to wasm as an "i32", and then the inner module re-interprets the bits as the right type... if it wants to. Again, it's all convention. Make sure everything is documented, because if you pass -2079915776 (i32) and meant 2215051520 (u32), well, who could have known?

## There may be more

...and I'm adding on as I go.

# Rust crimes: Enum ints

1. cursed thought: rust enums are expressive enough that you dont really need "built in" types. you can express everything with just enums... ~Boxy
2. does this mean u8 can move to a crate? ~Kate
3. Please Kate, this is my worst nightmare. I have dependency minimization brain along with use-the-smallest-int-possible brain. They are terminal and often co-morbid.
4. dw we can't actually do this as there's no way to disambiguate the values.

Okay, so, what does this mean? Well, in Rust you can wildcard import names into your scope:

```rust
#![allow(unused)]
fn main() {
    use std::sync::atomic::*;

    let a = AtomicU8::new(0);
    a.store(1, Ordering::Relaxed);
}
```

And sometimes different things have the same name:

```rust
#![allow(unused)]
fn main() {
    use std::cmp::*;

    assert_eq!(max(1, 2).cmp(&3), Ordering::Less);
}
```

So if you try to wildcard import names where there's an overlap…

```rust
#![allow(unused_imports)]
use std::sync::atomic::*;
use std::cmp::*;

fn main() {}
```

```
$ rustc wild.rs
[Exit: 0]
```

Huh. Oh, right, you have to actually use something that's ambiguous:

```rust
#![allow(unused_imports)]
use std::sync::atomic::*;
use std::cmp::*;

fn main() {
    dbg!(Ordering::Relaxed);
}
```

And now you get an error:

```
$ rustc wild.rs
error[E0659]: `Ordering` is ambiguous (glob import vs glob import in the same module)
 --> wild.rs:7:8
  |
7 |     dbg!(Ordering::Relaxed);
  |          ^^^^^^^^ ambiguous name
  |
note: `Ordering` could refer to the enum imported here
 --> wild.rs:3:5
  |
3 | use std::sync::atomic::*;
  |     ^^^^^^^^^^^^^^^^^^^^
  = help: consider adding an explicit import of `Ordering` to disambiguate
note: `Ordering` could also refer to the enum imported here
 --> wild.rs:4:5
  |
4 | use std::cmp::*;
  |     ^^^^^^^^^^^
  = help: consider adding an explicit import of `Ordering` to disambiguate

error: aborting due to previous error

For more information about this error, try `rustc --explain E0659`.
[Exit: 1]
```

So, if you were to try to make integers be an external crate that you wildcard-imported into scope, it could potentially look like this:

```rust
use ints::u8::*;

fn main() {
    assert_eq!(1 + 2, 3);
}
```

That would work, but as soon as you try to use multiple integer widths:

```rust
use ints::u8::*;
use ints::u16::*;

fn main() {
    assert_eq!(1 + 2, 3);
}
```

you'd run into issues, because both ints::u8 and ints::u16 contain 1, 2, 3…

Also, currently the integer primitives in Rust would totally clash:

```rust
use u2::*;

enum u2 { 0, 1, 2, 3 }

fn main() {
    assert_eq!(0, 0);
}
```

```
$ rustc nothing-suspicious-here.rs
error: expected identifier, found `0`
 --> nothing-suspicious-here.rs:3:11
  |
3 | enum u2 { 0, 1, 2, 3 }
  |           ^ expected identifier

error: aborting due to previous error
[Exit: 1]
```

Right, it doesn't even let us out of the gate, because identifiers cannot be digits. Hmm, maybe we can add an innocent-looking suffix there to bypass that silly restriction?
```rust
use u2::*;

enum u2 { 0_u2, 1_u2, 2_u2, 3_u2 }

fn main() {
    assert_eq!(0_u2, 0_u2);
}
```

```
$ rustc nothing-suspicious-here.rs
error: expected identifier, found `0_u2`
 --> nothing-suspicious-here.rs:3:11
  |
3 | enum u2 { 0_u2, 1_u2, 2_u2, 3_u2 }
  |           ^^^^ expected identifier

error: aborting due to previous error
```

Denied. Looks like we can't do it.

But what if we wanted to look as if we'd side-stepped the issue and made crated integers work? Well, first we need to figure out this identifier thing. Who even decides what identifiers can look like?! The Rust Reference does:

> An identifier is any nonempty Unicode string of the following form: either a character with property XID_Start followed by any number of characters with property XID_Continue, or an underscore followed by at least one character with property XID_Continue.

Alright. So there's a restricted set of Unicode characters that can start an identifier, and numbers aren't in that set. But can we find something discreet enough that is XID_Start? Why yes. Yes we can: enter the Halfwidth Hangul Filler. This character is XID_Start, and (provided you have Hangul fonts) renders as… either a blank space, or nothing at all. Does it work?

```rust
#[derive(Debug)]
enum Foo { ᅠBar }

fn main() {
    println!("{:?}", format!("{:?}", Foo::ᅠBar));
    println!("{:?}", format!("{:?}", Foo::ᅠBar).as_bytes());
}
```

```
$ rustc notacrime.rs
warning: identifier contains uncommon Unicode codepoints
 --> notacrime.rs:5:12
  |
5 | enum Foo { ᅠBar }
  |            ^^^^
  |
  = note: `#[warn(uncommon_codepoints)]` on by default

warning: 1 warning emitted
[Exit: 0]

$ ./notacrime
"ᅠBar"
[239, 190, 160, 66, 97, 114]
[Exit: 0]
```

Right. First, we can't have Rust ruin the game so quickly, so we want to suppress that pesky warning about uncommon codepoints, which points directly at our deception:

```rust
#![allow(uncommon_codepoints)]

#[derive(Debug)]
enum Foo { ᅠBar }

fn main() {
    println!("{:?}", format!("{:?}", Foo::ᅠBar));
    println!("{:?}", format!("{:?}", Foo::ᅠBar).as_bytes());
}
```

```
$ rustc notacrime.rs
[Exit: 0]

$ ./notacrime
"ᅠBar"
[239, 190, 160, 66, 97, 114]
[Exit: 0]
```

Much better. So, we're printing the Debug representation of that Bar variant, which starts with the Hangul character we found, and the debug representation of the slice of bytes which underlie that string. The bytes, in hex, are:

[EF, BE, A0, 42, 61, 72]

0x42 0x61 0x72 are Unicode for B, a, and r, so our Hangul character must be 0xEF 0xBE 0xA0! Indeed, that's the UTF-8 representation of 0xFFA0. So, we've got something that is a valid start of an identifier, and (fonts willing) is completely transparent.
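(If you want to double-check a character's identifier credentials yourself, the unicode-xid crate exposes exactly these two properties. A quick sketch of mine, not from the original post:)

```rust
// Sketch: querying the Halfwidth Hangul Filler's identifier properties.
// Requires the `unicode-xid` crate as a dependency.
use unicode_xid::UnicodeXID;

fn main() {
    let filler = '\u{FFA0}';
    // Can it start an identifier? (This is the property we're abusing.)
    println!("XID_Start: {}", filler.is_xid_start());
    // Can it appear in the rest of an identifier?
    println!("XID_Continue: {}", filler.is_xid_continue());
}
```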
Let's try this again:

```rust
#![allow(uncommon_codepoints)]

use u2::*;

enum u2 { ᅠ0, ᅠ1, ᅠ2, ᅠ3 }

fn main() {
    assert_eq!(ᅠ0, ᅠ0);
}
```

```
$ rustc not-technically-illegal.rs
warning: type `u2` should have an upper camel case name
 --> not-technically-illegal.rs:4:6
  |
4 | enum u2 { ᅠ0, ᅠ1, ᅠ2, ᅠ3 }
  |      ^^ help: convert the identifier to upper camel case (notice the capitalization): `U2`
  |
  = note: `#[warn(non_camel_case_types)]` on by default

error[E0369]: binary operation `==` cannot be applied to type `u2`
 --> not-technically-illegal.rs:8:1
  |
8 | assert_eq!(ᅠ0, ᅠ0);
  | ^^^^^^^^^^^^^^^^^^^
  | |
  | u2
  | u2
  |
  = note: an implementation of `std::cmp::PartialEq` might be missing for `u2`
  = note: this error originates in the macro `assert_eq` (in Nightly builds, run with -Z macro-backtrace for more info)

error[E0277]: `u2` doesn't implement `Debug`
 --> not-technically-illegal.rs:8:1
  |
8 | assert_eq!(ᅠ0, ᅠ0);
  | ^^^^^^^^^^^^^^^^^^^ `u2` cannot be formatted using `{:?}`
  |
  = help: the trait `Debug` is not implemented for `u2`
  = note: add `#[derive(Debug)]` to `u2` or manually `impl Debug for u2`
  = note: this error originates in the macro `assert_eq` (in Nightly builds, run with -Z macro-backtrace for more info)
[Exit: 1]
```

Whoa there. Okay, so we're going to rename our enum to uppercase, and add some derived traits:

```rust
#![allow(uncommon_codepoints)]

use U2::*;

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum U2 { ᅠ0, ᅠ1, ᅠ2, ᅠ3 }

fn main() {
    assert_eq!(ᅠ0, ᅠ0);
}
```

```
$ rustc not-technically-illegal.rs
warning: variant is never constructed: `ᅠ1`
 --> not-technically-illegal.rs:6:15
  |
6 | enum U2 { ᅠ0, ᅠ1, ᅠ2, ᅠ3 }
  |               ^^
  |
  = note: `#[warn(dead_code)]` on by default

warning: variant is never constructed: `ᅠ2`
 --> not-technically-illegal.rs:6:19
  |
6 | enum U2 { ᅠ0, ᅠ1, ᅠ2, ᅠ3 }
  |                   ^^

warning: variant is never constructed: `ᅠ3`
 --> not-technically-illegal.rs:6:23
  |
6 | enum U2 { ᅠ0, ᅠ1, ᅠ2, ᅠ3 }
  |                       ^^

warning: 3 warnings emitted
[Exit: 0]
```

Well, it succeeded, but let's suppress those warnings as well:

```rust
#![allow(uncommon_codepoints)]

use U2::*;

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
#[allow(dead_code)]
enum U2 { ᅠ0, ᅠ1, ᅠ2, ᅠ3 }

fn main() {
    assert_eq!(ᅠ0, ᅠ0);
}
```

```
$ rustc not-technically-illegal.rs
[Exit: 0]

$ ./not-technically-illegal
[Exit: 0]
```

Excellent. Now, we're not going to get far with a 2-bit int, but writing out all the variants of a wider integer is going to get old fast. So let's make a generator for our Rust crimes:

```rust
#![allow(unused)]
fn main() {
    println!("enum U8 {{ {} }}",
        (0..256).map(|n| format!("\u{FFA0}{}", n)).collect::<Vec<_>>().join(", ")
    );
}
```

The output is very long and it's only going to get longer, so from now on you can run these yourself with the little ⏵ play icon on the code listing. Let's just go ahead and add all the other decoration we've established to that little generator, but do something a little more interesting: addition.

```rust
fn main() {
    println!("#![allow(uncommon_codepoints)]\n\n");
    println!(
        "use U8::*; #[derive(Clone, Copy, Debug, PartialEq, Eq)] enum U8 {{ {} }}",
        (0..256).map(|n| format!("\u{FFA0}{}", n)).collect::<Vec<_>>().join(", ")
    );
    println!("fn main() {{ dbg!(\u{FFA0}1 + \u{FFA0}2); }}");
}
```

```
$ rustc crime-scene.rs && ./crime-scene > crime.rs && rustc crime.rs && ./crime
error[E0369]: cannot add `U8` to `U8`
 --> crime.rs:9:13
  |
9 |     dbg!(ᅠ1 + ᅠ2);
  |          -- ^ -- U8
  |          |
  |          U8
  |
  = note: an implementation of `std::ops::Add` might be missing for `U8`

error: aborting due to previous error

For more information about this error, try `rustc --explain E0369`.
[Exit: 1]
```

Now what? Ah, right, we haven't defined how addition works for our new integer type.
Let's do that:

```rust
fn main() {
    println!("#![allow(uncommon_codepoints)]\n\n");
    println!(
        "use U8::*; #[derive(Clone, Copy, Debug, PartialEq, Eq)] #[allow(dead_code)] enum U8 {{ {} }}",
        (0..256).map(|n| format!("\u{FFA0}{}", n)).collect::<Vec<_>>().join(", ")
    );
    println!("fn main() {{ dbg!(\u{FFA0}1 + \u{FFA0}2); }}");
    println!("use std::ops::Add; impl Add for U8 {{ type Output = Self; fn add(self, other: Self) -> Self {{ U8::from(u8::from(self) + u8::from(other)) }} }}");
}
```

...what? Right, I've skipped a few things. So, it may be technically possible to define addition without making any reference to Rust's core integer types, but that seems very out of scope for an article which is already pretty long. Instead, we're going to implement arithmetic by converting our custom enum integers to their corresponding native ints, doing the maths, and then going back.

How do we convert? Well, going from our type to a primitive is pretty simple:

```rust
fn main() {
    println!("#![allow(uncommon_codepoints)]\n\n");
    println!(
        "use U8::*; #[derive(Clone, Copy, Debug, PartialEq, Eq)] #[allow(dead_code)] enum U8 {{ {} }}",
        (0..256).map(|n| format!("\u{FFA0}{}", n)).collect::<Vec<_>>().join(", ")
    );
    println!("fn main() {{ dbg!(u8::from(\u{FFA0}0)); }}");
    println!("impl From<U8> for u8 {{ fn from(n: U8) -> Self {{ n as _ }} }}");
}
```

Going back, however, requires a few more pieces:

```rust
fn main() {
    println!("#![allow(uncommon_codepoints)]\n\n");
    println!(
        "use U8::*; #[derive(Clone, Copy, Debug, PartialEq, Eq)] #[allow(dead_code)]
#[repr(u8)] // <===================== this thing
enum U8 {{ {} }}",
        (0..256).map(|n| format!("\u{FFA0}{}", n)).collect::<Vec<_>>().join(", ")
    );
    println!("fn main() {{ dbg!(U8::from(0)); }}");
    println!("impl From<U8> for u8 {{ fn from(n: U8) -> Self {{ n as _ }} }} impl From<u8> for U8 {{ fn from(n: u8) -> Self {{ unsafe {{ std::mem::transmute(n) }} }} }}");
}
```

UNSAFE?!?!?

Well, not quite. Say we have an enum with four variants. We can safely convert it to a number, because the compiler knows statically which variant corresponds to which number. However, we can't safely go the other way all the time, because what if we try to convert 32 into that enum? There's no 33rd variant, so the program may crash, or worse. In our case, though, we know that there are exactly 256 variants, as many values as there are in a u8, so we can assure the compiler that yes, we know what we're about, please transmute. And we tell the compiler that the enum must fit in, and have the same layout as, a u8 with the repr annotation, which gives us peace of mind while transmuting: an optimisation isn't going to come along and mess up our assumptions.

Now that we can go back and forth between U8 and u8, we can get back to implementing addition:

```rust
fn main() {
    println!("#![allow(uncommon_codepoints)]\n\n");
    println!(
        "use U8::*; #[derive(Clone, Copy, Debug, PartialEq, Eq)] #[allow(dead_code)] #[repr(u8)] enum U8 {{ {} }}",
        (0..256).map(|n| format!("\u{FFA0}{}", n)).collect::<Vec<_>>().join(", ")
    );
    println!("fn main() {{ dbg!(\u{FFA0}1 + \u{FFA0}2); }}");
    println!("impl From<U8> for u8 {{ fn from(n: U8) -> Self {{ n as _ }} }} impl From<u8> for U8 {{ fn from(n: u8) -> Self {{ unsafe {{ std::mem::transmute(n) }} }} }} use std::ops::Add; impl Add for U8 {{ type Output = Self; fn add(self, other: Self) -> Self {{ U8::from(u8::from(self) + u8::from(other)) }} }}");
}
```

```
$ rustc crime-scene.rs && ./crime-scene > crime.rs && rustc crime.rs && ./crime
[crime.rs:9] ᅠ1 + ᅠ2 = ᅠ3
[Exit: 0]
```

It works!
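(For reference, here's roughly what that generated code amounts to, hand-written for the little U2 from earlier instead of 256 variants of scrollback. This is my own rendition, not the actual generator output; since U2 only covers part of a u8's range, it needs an assert that the full U8 doesn't:)

```rust
#![allow(uncommon_codepoints)]

use U2::*;

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
#[allow(dead_code)]
#[repr(u8)]
enum U2 { ᅠ0, ᅠ1, ᅠ2, ᅠ3 }

impl From<U2> for u8 {
    fn from(n: U2) -> Self { n as _ }
}

impl From<u8> for U2 {
    fn from(n: u8) -> Self {
        // Sound only because every u8 value 0..=3 maps to a variant.
        assert!(n <= 3);
        unsafe { std::mem::transmute(n) }
    }
}

use std::ops::Add;
impl Add for U2 {
    type Output = Self;
    fn add(self, other: Self) -> Self {
        U2::from(u8::from(self) + u8::from(other))
    }
}

fn main() {
    assert_eq!(ᅠ1 + ᅠ2, ᅠ3);
}
```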
In the same vein, we can implement -, /, and *:

```rust
fn main() {
    println!("#![allow(uncommon_codepoints)]\n\n");
    println!(
        "use U8::*; #[derive(Clone, Copy, Debug, PartialEq, Eq)] #[allow(dead_code)] #[repr(u8)] enum U8 {{ {} }}",
        (0..256).map(|n| format!("\u{FFA0}{}", n)).collect::<Vec<_>>().join(", ")
    );
    println!("fn main() {{ dbg!(\u{FFA0}1 + \u{FFA0}2 * \u{FFA0}3 / \u{FFA0}4); }}");
    println!("impl From<U8> for u8 {{ fn from(n: U8) -> Self {{ n as _ }} }} impl From<u8> for U8 {{ fn from(n: u8) -> Self {{ unsafe {{ std::mem::transmute(n) }} }} }} use std::ops::Add; impl Add for U8 {{ type Output = Self; fn add(self, other: Self) -> Self {{ U8::from(u8::from(self) + u8::from(other)) }} }} use std::ops::Sub; impl Sub for U8 {{ type Output = Self; fn sub(self, other: Self) -> Self {{ U8::from(u8::from(self) - u8::from(other)) }} }} use std::ops::Div; impl Div for U8 {{ type Output = Self; fn div(self, other: Self) -> Self {{ U8::from(u8::from(self) / u8::from(other)) }} }} use std::ops::Mul; impl Mul for U8 {{ type Output = Self; fn mul(self, other: Self) -> Self {{ U8::from(u8::from(self) * u8::from(other)) }} }}");
}
```

With that, we can implement something a little less trivial than base arithmetic:

```rust
fn main() {
    println!("#![allow(uncommon_codepoints)]\n\n");
    println!(
        "use U8::*; #[derive(Clone, Copy, Debug, PartialEq, Eq)] #[allow(dead_code)] #[repr(u8)] enum U8 {{ {} }}",
        (0..256).map(|n| format!("\u{FFA0}{}", n)).collect::<Vec<_>>().join(", ")
    );
    println!("fn fibonacci(n: U8) -> U8 {{ match n {{ \u{FFA0}0 => \u{FFA0}1, \u{FFA0}1 => \u{FFA0}1, _ => fibonacci(n - \u{FFA0}1) + fibonacci(n - \u{FFA0}2), }} }} fn main() {{ dbg!(fibonacci(\u{FFA0}8)); }}");
    println!("impl From<U8> for u8 {{ fn from(n: U8) -> Self {{ n as _ }} }} impl From<u8> for U8 {{ fn from(n: u8) -> Self {{ unsafe {{ std::mem::transmute(n) }} }} }} use std::ops::Add; impl Add for U8 {{ type Output = Self; fn add(self, other: Self) -> Self {{ U8::from(u8::from(self) + u8::from(other)) }} }} use std::ops::Sub; impl Sub for U8 {{ type Output = Self; fn sub(self, other: Self) -> Self {{ U8::from(u8::from(self) - u8::from(other)) }} }} use std::ops::Div; impl Div for U8 {{ type Output = Self; fn div(self, other: Self) -> Self {{ U8::from(u8::from(self) / u8::from(other)) }} }} use std::ops::Mul; impl Mul for U8 {{ type Output = Self; fn mul(self, other: Self) -> Self {{ U8::from(u8::from(self) * u8::from(other)) }} }}");
}
```

```
$ rustc crime-scene.rs && ./crime-scene > crime.rs && rustc crime.rs && ./crime
[crime.rs:17] fibonacci(ᅠ8) = ᅠ34
[Exit: 0]
```

Alright, so we've got maths on a single, non-primitive, enum-based integer type. Can we add another type and sidestep the ambiguity issue? Yes, by adding another Hangul Filler as prefix!
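(A quick sanity check of mine that stacking fillers really does produce distinct identifiers:)

```rust
#![allow(uncommon_codepoints)]

fn main() {
    // One filler vs two fillers in front of the same "zero":
    // two different identifiers, as far as Rust is concerned.
    let ᅠzero = 0;
    let ᅠᅠzero = 1;
    assert_ne!(ᅠzero, ᅠᅠzero);
}
```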
First, let's move some of our machinery into functions, so we're a bit more generic when generating:

```rust
fn main() {
    println!("#![allow(uncommon_codepoints)] use U8::*; fn fibonacci(n: U8) -> U8 {{ match n {{ \u{FFA0}0 => \u{FFA0}1, \u{FFA0}1 => \u{FFA0}1, _ => fibonacci(n - \u{FFA0}1) + fibonacci(n - \u{FFA0}2), }} }} fn main() {{ dbg!(fibonacci(\u{FFA0}8)); }}");
    define_enum("U8", "u8", "\u{FFA0}", 0..256);
}

fn define_enum(name: &str, repr: &str, prefix: &str, range: std::ops::Range<usize>) {
    println!(
        "#[derive(Clone, Copy, Debug, PartialEq, Eq)] #[allow(dead_code)] #[repr({repr})] enum {name} {{ {def} }}",
        name=name, repr=repr,
        def=range.map(|n| format!("{}{}", prefix, n)).collect::<Vec<_>>().join(", "),
    );
    println!(
        "impl From<{name}> for {repr} {{ fn from(n: {name}) -> Self {{ n as _ }} }} impl From<{repr}> for {name} {{ fn from(n: {repr}) -> Self {{ unsafe {{ std::mem::transmute(n) }} }} }} impl std::ops::Add for {name} {{ type Output = Self; fn add(self, other: Self) -> Self {{ {name}::from({repr}::from(self) + {repr}::from(other)) }} }} impl std::ops::Sub for {name} {{ type Output = Self; fn sub(self, other: Self) -> Self {{ {name}::from({repr}::from(self) - {repr}::from(other)) }} }} impl std::ops::Div for {name} {{ type Output = Self; fn div(self, other: Self) -> Self {{ {name}::from({repr}::from(self) / {repr}::from(other)) }} }} impl std::ops::Mul for {name} {{ type Output = Self; fn mul(self, other: Self) -> Self {{ {name}::from({repr}::from(self) * {repr}::from(other)) }} }}",
        name=name, repr=repr,
    );
}
```

So now we can define another enum integer:

```rust
define_enum("U16", "u16", "\u{FFA0}\u{FFA0}", 0..65536);
```

If you try to compile this, you're going to hit a limitation of the compiler: it gets very, very slow to compile... whoa, seven hundred kilobytes of source? Right, so that's a lot. In the interest of keeping this demo-able, let's define our own type to be a little smaller. Let's say we want a U12, which goes from 0 to 4095, backed by a u16:

```rust
fn main() {
    println!("#![allow(uncommon_codepoints)] use U8::*; use U12::*; fn fibonacci(n: U8) -> U8 {{ match n {{ \u{FFA0}0 => \u{FFA0}1, \u{FFA0}1 => \u{FFA0}1, _ => fibonacci(n - \u{FFA0}1) + fibonacci(n - \u{FFA0}2), }} }} fn main() {{ dbg!(fibonacci(\u{FFA0}8)); }}");
    define_enum("U8", "u8", "\u{FFA0}", 0..256);
    define_enum("U12", "u16", "\u{FFA0}\u{FFA0}", 0..4096);
}

fn define_enum(name: &str, repr: &str, prefix: &str, range: std::ops::Range<usize>) {
    println!(
        "#[derive(Clone, Copy, Debug, PartialEq, Eq)] #[allow(dead_code)] #[repr({repr})] enum {name} {{ {def} }}",
        name=name, repr=repr,
        def=range.clone().map(|n| format!("{}{}", prefix, n)).collect::<Vec<_>>().join(", "),
    );
    println!(
        "impl From<{name}> for {repr} {{ fn from(n: {name}) -> Self {{ n as _ }} }} impl From<{repr}> for {name} {{ fn from(n: {repr}) -> Self {{ assert!(n <= {max}); unsafe {{ std::mem::transmute(n) }} }} }} impl std::ops::Add for {name} {{ type Output = Self; fn add(self, other: Self) -> Self {{ {name}::from({repr}::from(self) + {repr}::from(other)) }} }} impl std::ops::Sub for {name} {{ type Output = Self; fn sub(self, other: Self) -> Self {{ {name}::from({repr}::from(self) - {repr}::from(other)) }} }} impl std::ops::Div for {name} {{ type Output = Self; fn div(self, other: Self) -> Self {{ {name}::from({repr}::from(self) / {repr}::from(other)) }} }} impl std::ops::Mul for {name} {{ type Output = Self; fn mul(self, other: Self) -> Self {{ {name}::from({repr}::from(self) * {repr}::from(other)) }} }}",
        name=name, repr=repr,
        max=range.last().unwrap(),
    );
}
```

Notice we add an assert to the transmuting conversion, as now we can't guarantee at compile time that the entire possible range of a u16 will fit in a U12, so we check the value at runtime, just to be safe.

```
$ rustc crime-scene.rs && ./crime-scene > crime.rs && rustc crime.rs && ./crime
warning: unused import: `U12::*`
 --> crime.rs:4:9
  |
4 | use U12::*;
  |     ^^^^^^
  |
  = note: `#[warn(unused_imports)]` on by default

warning: 1 warning emitted

[crime.rs:15] fibonacci(ᅠ8) = ᅠ34
[Exit: 0]
```

We get a warning, but notice how we're allowed to wildcard-import both sets of numbers (because of course, from Rust's point of view, they're different identifiers). With bigger ints, we can go for bigger maths:

```rust
fn main() {
    println!("#![allow(uncommon_codepoints)] use U8::*; use U12::*; fn fibonacci(n: U8) -> U8 {{ match n {{ \u{FFA0}0 => \u{FFA0}1, \u{FFA0}1 => \u{FFA0}1, _ => fibonacci(n - \u{FFA0}1) + fibonacci(n - \u{FFA0}2), }} }} fn factorial(n: U12) -> U12 {{ match n {{ \u{FFA0}\u{FFA0}0 | \u{FFA0}\u{FFA0}1 => \u{FFA0}\u{FFA0}1, _ => factorial(n - \u{FFA0}\u{FFA0}1) * n }} }} fn main() {{ dbg!(fibonacci(\u{FFA0}8)); dbg!(factorial(\u{FFA0}\u{FFA0}6)); }}");
    define_enum("U8", "u8", "\u{FFA0}", 0..256);
    define_enum("U12", "u16", "\u{FFA0}\u{FFA0}", 0..4096);
}

fn define_enum(name: &str, repr: &str, prefix: &str, range: std::ops::Range<usize>) {
    println!(
        "#[derive(Clone, Copy, Debug, PartialEq, Eq)] #[allow(dead_code)] #[repr({repr})] enum {name} {{ {def} }}",
        name=name, repr=repr,
        def=range.clone().map(|n| format!("{}{}", prefix, n)).collect::<Vec<_>>().join(", "),
    );
    println!(
        "impl From<{name}> for {repr} {{ fn from(n: {name}) -> Self {{ n as _ }} }} impl From<{repr}> for {name} {{ fn from(n: {repr}) -> Self {{ assert!(n <= {max}); unsafe {{ std::mem::transmute(n) }} }} }} impl std::ops::Add for {name} {{ type Output = Self; fn add(self, other: Self) -> Self {{ {name}::from({repr}::from(self) + {repr}::from(other)) }} }} impl std::ops::Sub for {name} {{ type Output = Self; fn sub(self, other: Self) -> Self {{ {name}::from({repr}::from(self) - {repr}::from(other)) }} }} impl std::ops::Div for {name} {{ type Output = Self; fn div(self, other: Self) -> Self {{ {name}::from({repr}::from(self) / {repr}::from(other)) }} }} impl std::ops::Mul for {name} {{ type Output = Self; fn mul(self, other: Self) -> Self {{ {name}::from({repr}::from(self) * {repr}::from(other)) }} }}",
        name=name, repr=repr,
        max=range.last().unwrap(),
    );
}
```

```
$ rustc crime-scene.rs && ./crime-scene > crime.rs && rustc crime.rs && ./crime
[crime.rs:22] fibonacci(ᅠ8) = ᅠ34
[crime.rs:23] factorial(ᅠᅠ6) = ᅠᅠ720
[Exit: 0]
```

All that's left to do is take the generated crime, stick it in a playground, add some whitespace to confuse unsuspecting visitors, and… sit back to enjoy the profits: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f6d61baf31e9a9f4f97f07d334f56f12

### Update: safe math
Reddit delivered, with a successor-based implementation of maths that does not rely on transmutation. It does rely on generating another large match for each enum. I also had to modify the code a little to make it compile, and I used a wrapping successor function to simplify the implementation.

```rust
fn main() {
    println!("#![allow(uncommon_codepoints)] #![deny(unsafe_code)] use U7::*; use U13::*; fn fibonacci(n: U7) -> U7 {{ match n {{ \u{FFA0}0 => \u{FFA0}1, \u{FFA0}1 => \u{FFA0}1, _ => fibonacci(n - \u{FFA0}1) + fibonacci(n - \u{FFA0}2), }} }} fn factorial(n: U13) -> U13 {{ match n {{ \u{FFA0}\u{FFA0}0 | \u{FFA0}\u{FFA0}1 => \u{FFA0}\u{FFA0}1, _ => factorial(n - \u{FFA0}\u{FFA0}1) * n }} }} fn main() {{ dbg!(fibonacci(\u{FFA0}8)); dbg!(factorial(\u{FFA0}\u{FFA0}6)); }}");
    define_enum(7, "\u{FFA0}");
    define_enum(13, "\u{FFA0}\u{FFA0}");
}

fn define_enum(width: u32, prefix: &str) {
    let name = format!("U{}", width);
    let max = 2_usize.pow(width);
    let range = 0..max;
    let succrange = range.clone().rev();

    println!(
        "#[derive(Clone, Copy, Debug, PartialEq, Eq)] #[allow(dead_code)] enum {name} {{ {def} }} impl {name} {{ fn successor(self) -> Self {{ match self {{ {prefix}{max} => {prefix}0, {succ} }} }} }}",
        name=name,
        def=range.clone().map(|n| format!("{}{}", prefix, n)).collect::<Vec<_>>().join(", "),
        succ=succrange.clone().zip(succrange.clone().skip(1)).map(|(n, m)| format!("{p}{m} => {p}{n}", p=prefix, m=m, n=n)).collect::<Vec<_>>().join(", "),
        max=range.clone().last().unwrap(),
        prefix=prefix,
    );

    println!(
        "impl std::cmp::PartialOrd for {name} {{ fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {{ if self == other {{ return Some(std::cmp::Ordering::Equal); }} let mut x = *self; let mut y = *other; Some(loop {{ if x == {prefix}{max} {{ break std::cmp::Ordering::Greater; }} if y == {prefix}{max} {{ break std::cmp::Ordering::Less; }} x = x.successor(); y = y.successor(); }}) }} }} impl std::ops::Add for {name} {{ type Output = Self; fn add(mut self, y: Self) -> Self::Output {{ let mut n = {prefix}0; while n != y {{ self = self.successor(); n = n.successor(); }} self }} }} impl std::ops::Sub for {name} {{ type Output = Self; fn sub(self, mut y: Self) -> Self::Output {{ let mut n = {prefix}0; while self != y {{ y = y.successor(); n = n.successor(); }} n }} }} impl std::ops::Mul for {name} {{ type Output = Self; fn mul(self, y: Self) -> Self::Output {{ let mut n = {prefix}0; let mut res = {prefix}0; while n != y {{ n = n.successor(); res = res + self; }} res }} }}",
        name=name, prefix=prefix,
        max=range.last().unwrap(),
    );
}
```

```
$ rustc crime-sux.rs && ./crime-sux > crime.rs && rustc crime.rs && ./crime
[crime.rs:23] fibonacci(ᅠ8) = ᅠ34
[crime.rs:24] factorial(ᅠᅠ6) = ᅠᅠ720
[Exit: 0]
```

No unsafe! ("Hello, I would like a U7 and a U13...")

# No Time for Chrono

October 2021

TL;DR: Time 0.1 and 0.2 have a security notice about the use of the localtime_r libc function, and Chrono has a related notice, both issued in November 2020. While Time has a mitigation in place, Chrono doesn't, and doesn't plan to. The security issue is very specific and can be mitigated through your own efforts; there's also some controversy over whether it is an issue at all. The Time crate has evolved a lot in the past few years, and its 0.3 release has a lot of the APIs that Chrono is used for; thus it is possible for many, but not all, Chrono users to switch to Time 0.3, and this could have some additional benefits.

## The security issue

Unix-like operating systems may segfault due to dereferencing a dangling pointer in specific circumstances.
This requires an environment variable to be set in a different thread than the one calling the affected functions. This may occur without the user's knowledge, notably in a third-party library. Non-Unix targets (including Windows and wasm) are unaffected.

### localtime_r and setenv

From the manpage, edited for length:

> The localtime_r() function converts the calendar time timep to broken-down time representation, expressed relative to the user's specified timezone. The function acts as if it called tzset(3) and provides information about the current timezone, the difference between Coordinated Universal Time (UTC) and local standard time in seconds, and whether daylight savings time rules apply during some part of the year.

Meanwhile, setenv:

> …adds the variable name to the environment with the value value

The issue occurs when the environment of a program is modified (chiefly with setenv) at the same time that localtime_r is used; in those cases, a segfault may be observed.

localtime_r is a fairly complex beast, which interfaces with the system's timezone files and settings to provide localtime information and conversion. It is a very useful function, used pretty much everywhere that needs to interact with local/UTC time and timezones. Handling timezones is widely regarded as all programmers' bane; replacing this function with one that does not have this behaviour is a potentially massive endeavour.

Over on IRLO, Tony Arcieri provides a summary of the Rust side of this issue.

Time has mitigated the issue by removing the calls to localtime_r, returning errors in 0.2 and 0.3 unless a custom cfg is passed to the compiler. That does mean that if you do want that information, you're out of luck unless you (as an application) or all your end-users (if you're a library) enable that custom configuration.

### The counter view

Rich Felker (author of musl) has another view. He argues that the issue is not in calling the localtime_r function, but in modifying the environment. The environment ought to be immutable, and it is somewhat well known in other large projects that this footgun exists: the issue is even known to the Rust project, with a documentation PR in 2015(!) adding cautionary language to the std::env::set_var function.

I don't have nearly the same amount of knowledge on this issue, but for the record, and despite the sections below, I do agree with the view that the environment should be considered read-only. Perhaps a clippy lint could be added.

## Replacing Chrono

Regardless of the previous discussion, there are other issues around the usage of Chrono.

### Chronic pains

Its last release as of writing, 0.4.19, was more than a year ago. Issues are piling up. It's still on edition 2015 (which, to be clear, isn't really an issue, but more of an indicator). It could just be that the crate is considered finished (the docs do describe it as "feature-complete"). Or it may be that the maintainers have mostly checked out. (No fault on the maintainers! I've done the same with Notify, once.)

If you're fine with this, and you're confident that you (and your dependencies) aren't writing to the environment, then you can keep on using Chrono.
There is, however, a viable alternative now:

### Time 0.3

Time's 0.3 release adds many APIs, which cover a large amount of the surface that Chrono is used for:

• No-alloc mode
• The Month type
• Calendar/Ordinal/ISO/Julian conversions
• Large dates (beyond +/- 9999 years)
• Parsing and serde support

There are also some features which are only supported by newer Time, not by Chrono:

• const functions
• the datetime! macro for constructing datetimes at compile-time
• Serialising non-ISO8601 representations
• Random dates/times
• QuickCheck support

Therefore, you can now reasonably replace Chrono with Time! (In the future I hope to provide a "quick migration guide" here. For now, it's left to the reader!)

# Gaston Lagaffe

This week I discovered that one of my all-time favourite comics, Gaston Lagaffe, was never translated into English. Ever.

I find that really sad, because Gaston was, and still is, a comic I read when I was growing up, and I wish more people I live with (in a global sense, as in my local community) and work with and laugh with… had also read it. But more than that, it's because Gaston is a geek. No, even better: he's the evolution of the high-school geek into adulthood. And, to me, he's the answer to the cry for help and representation of geeks in high school.

You see, I can't help but compare with American comics, especially American comics that are (stereotypically, perhaps) read and adored by American/English-speaking geeks while growing up. I don't think they do a very good job of creating a positive geek attitude to life. Maybe that's just my flawed perspective as someone who didn't grow up with them, though.

Spiderman leads a double life: he's a despised science geek and a revered superhero. The tensions between those two identities are already well-addressed, but I think they're tensions because Peter Parker doesn't want to be a despised geek. This is critical. He might grow beyond it later, but from the very introduction of the character, we have a geek who hates that he's a geek, and craves to be "normal" and have the attention of (take your pick) love interests and the general public.

Superman is also an irrelevant and socially awkward Clark Kent, who (surprise) has trouble getting the attention of his love interest. Both as the superhero and as the non-super, he has this notion of "fighting for the little guy." Where Spiderman is a direct incarnation of geeks and their dreams, Superman is more of a defender of the weak.

Jack Kirby on the X-Men, emphasis mine:

> What would you do with mutants who were just plain boys and girls and certainly not dangerous? You school them. You develop their skills. So I gave them a teacher, Professor X. Of course, it was the natural thing to do, instead of disorienting or alienating people who were different from us, I made the X-Men part of the human race, which they were.

With few exceptions, the narrative as it applies to geeks here is largely negative. Sure, it represents geeks and allows a veneer of legitimacy, but the only way to be acknowledged by whomever one wants (girls, boys, authority, the public at large…) is to become someone else. Yet we geeks of the real world can't do that. We don't have superpowers, alien origins, or life-enhancing serums. And try as we might, actually getting to be acknowledged and accepted in society requires a lot of work and time and practice and experience. And it requires failure.

This is what Gaston represents, to me. Let me introduce you.
This is Gaston: he's an office junior in a busy publication house. He's clumsy, lazy, easy-going, socially awkward, and an unrecognised genius. This is one of his minor inventions, a radio-controlled lawn mower.

He also probably has some ADHD symptoms, or at least that thing where you have a million ideas and can't focus on what you're supposed to be doing. So he has a radical solution: he goes and creates those ideas. He follows the threads and does stuff. And yet, Gaston is hopelessly outside society's proper functioning.

He isn't a role model: this isn't someone you want to emulate in every way. He's not successful, he doesn't have the recognition of his peers, and he's still despised by "normals". But this is also precisely why I think he is a better representation for geeks. You see, he doesn't want these things. He's not successful by society's standards, but he has fun and creates a bunch of things, and clearly believes he's doing pretty well by his own standards. He doesn't have the recognition of his peers, but he has good friends who are more or less as quirky as he is, and accept him for who he is. Even those who initially despise him come to a grudging respect, and even occasional admiration.

For most of the series, he has a romantic aspiration with another character, but, until the very end, it's an asexual relationship. And it's not just that our antihero is too socially awkward to initiate sex: no, both of them are happy in the relationship they share.

But most importantly, Gaston fails. A lot. This is a funny comic strip, and many of the gags feature Gaston's inventions, ideas, setups, and other creations going wrong, either at his or others' expense. He fails and fails, and yet never fails to get back up and go on. Gaston is never angsty nor brooding; he might not be a regular member of society, but he doesn't yearn for it either. Gaston is happy in life, without denying who he is, without aspiring to be someone else, without needing extraordinary powers to accomplish any of this. He's just himself, and he's just fine.

And, really, that's all I've ever wanted.

# Known Unknowns: a configuration language

8 August 2020

Basically, I want a configuration pre-processor that is fully typed, but has a concept of "holes" or "unknowns". As you compile your configuration, the compiler tells you which unknowns it still needs to finish the process. Furthermore, you can instruct it to "expand" the source as far as it can go with what it has, and leave the unknowns in place. You can then store or pass on this result to some other component or system. This is essentially full-document currying: you're filling in all the variables you have, and until every variable is filled in, the result is still a function of further inputs.

Why would this be useful? Well, think of a configuration like Ansible or Terraform, where some variables might be remote to your local system, or be dependent on context. You could write a network configuration, for example, that needs the name of the main interface to really proceed. You write the config within the typesystem, which enforces at the type level things like providing either a static IP xor DHCP=true. You compile this config, and the compiler expands the typesystem out to a Netplan-config-shaped intermediate form, and tells you its known unknowns, in this case the interface name. You can manually check over the config to see that it's what you meant.
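(In Rust terms, the core of the idea is that every leaf of the document is either a resolved value or a named hole. This is purely my own sketch of the concept; no such language exists:)

```rust
// A config leaf is either known, or a named hole that
// some later stage must fill in.
#[derive(Debug, Clone)]
enum Leaf {
    Known(String),
    Unknown { hole: &'static str },
}

impl Leaf {
    // Filling a hole is the "currying" step: each stage resolves
    // what it can and passes the rest along untouched.
    fn fill(self, name: &str, value: &str) -> Leaf {
        match self {
            Leaf::Unknown { hole } if hole == name => Leaf::Known(value.to_string()),
            other => other,
        }
    }
}

fn main() {
    let iface = Leaf::Unknown { hole: "main_interface" };
    // A later stage (a dry run, a human, another department) fills it.
    let iface = iface.fill("main_interface", "eth0");
    println!("{:?}", iface); // Known("eth0")
}
```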
You can then give that to Ansible for a dry run, which will go and fetch the interface name from the running system (an operation with no side effects, so available in a dry run), and complete the config. With no unknowns left, only a Netplan config remains. You can still manually check over the final output before the "wet" run applies it.

Do that on a wider scale, and you get a powerful system that, instead of throwing an error if something is missing, tells you what is missing, and also provides useful output even with the missing bits. You can have as many steps as needed: pregenerate a large config, use it to derive where to look for missing data, ask a human for input for more unknowns, fetch more data from places, and only once everything is filled in can you apply it. You can install configs with known unknowns on a system, so long as the consuming application will be able to fill those in.

You can "pass" a config along different departments, with each filling in their bits. For example, make a request for a new virtual machine, which you require to have some memory, some disk, and a specific OS. Pass that on to Network, which assigns an IP, a subnet, a virtual LAN; pass it on to Storage, which reserves block storage and fills in which datastore it needs to be in; pass it on to Approval, which reviews and adds authentication; pass it back to the requester, who adds their SSH keys and checks it against what they passed on originally for any modifications.

You can go a bit deeper in the analysis, and figure out which parts of a document depend on unknowns, and which don't. You can have unknowns that are partially filled in by other unknowns. You can do dead-unknown elimination, and get warnings when a configuration would not use an unknown in the final output. You could even have partial "wet" runs that do as much as possible until they find an unknown, and because the unknowns can be known beforehand, statically, you can skip over them instead of stopping at the first one. You could run simulations by filling unknowns with fake values and seeing how that behaves. That could be really powerful for making even more advanced dry runs.

This is a fairly nebulous idea at this point, but I feel it would be a lot more useful than a programming typesystem applied to configuration, which requires that all types and holes are resolved ahead of time. Another project on the pile...

# On the difficulty of automated unblocking

## Unblocking the wrongfully-blocked

…is something I'd love to do, but the difficulty is identifying who is really, truly wrongly blocked. For a sense of scale: at the last accurate count, in 2019, I had well over 150 000 blocks on Twitter. I would estimate having about 200 000 now. Reviewing a block is a forensic process, because, as I'll explain, you cannot just rely on metadata like who follows someone or their bio. Let's use an extremely optimistic estimate of an average of two minutes per wrongful block, and ten seconds per rightful block. If everyone I've blocked should remain blocked, hand-reviewing the entire list to assure myself of that would take 24 days of non-stop work. When I say non-stop, I mean it would take over 550 hours. If even two percent of my blocks are wrongful, that inflates to over 670 hours. It's an enormous amount of work.

But surely I can automate a large part of it. Reader, I tried. Over several months. It turned out that my block list is extremely varied. Or rather, the people on it are varied.
Or rather, the way the people on it present to me, the information I have available about them, is extremely varied. I don't have much: I have their username, display name, bio, location and website fields, and maybe a few of their tweets if I write some scripts. I can put all that in a database, so I can process the large amount of information all of this adds up to. And then what?

There are some terms in bios I can outright block on, right? Like MAGA. Right? Right? R i g h t ?

Well, fuck. (For the record, five links above contained text such as "I will block MAGA trolls", and one is someone literally named Madumaga.) And I don't even follow any of these people, this is just 30 seconds of searching.

Okay, maybe exclude the ones that mention MAGA and block, or MAGA and troll, or some other combinations, surely that would work, right?

Lol, fuck.

Okay, but surely someone who only has the word MAGA and not any of these other ones is a real one, right?

LMAO GTFOH (For the record, the first two are genuine MAGA people who block anti-MAGA "trolls", and the last one is a genuine person whose bio speaks, without using any of the obvious keywords, of having previously been banned due to harassment by MAGA people.)

And this is a comparatively simple one. Try to do the same for transphobes, and it's a quagmire. But hold on, maybe I should do the inverse, and identify people who are clearly not transphobes. Like, maybe someone who has pronouns in their bio and identifies as trans is fairly easy to search for? And surely they would be on the "unblock" candidate list?

Boy oh boy. For one thing, truscums are a thing. For another, intersectionality cuts both ways. See, it's perfectly possible to have trans racists, gay sexists, lesbian transphobes, feminist anti-vaxxers. Assholes often have layers. But also, everyone can be an asshole. There is not a single group of people whose grouping attribute makes them free of the potential to be an asshole. Even if there was, there would be no reliable way to identify that group from their bio, or who they follow, or who they're followed by, or even by scanning some of their tweets. Don't believe me? Simple thought experiment: say there is such a marker. What is the first harasser who comes across it gonna do?

## Forgiveness

And then there's forgiveness. There's a service run by fellow kiwi Tom Eastman called Secateur, which lets you block people temporarily (and optionally all their followers; it's the "block script" I mention in one of the historical block reasons above). The idea is this: people change. So blocking them now might be good, but they might see the error of their ways, or listen to the right person, or learn some damn empathy somehow, and now they're not someone you want to block anymore, right? Maybe. There are three reasons why I don't use it, or something like it:

1. It's another tool. I block a lot. I block quickly; I don't take two minutes to think about it. I block on tweetdeck, I block on mobile. It's never going to be a good fit for me to have to open a separate app, copy or type a username across, and hit go. That's too much work. 99% of the time,† I just want to block and move on.

2. I don't think automated forgiveness is the way to go. Because of all the wrongly-blocked, I am also fairly fast at unblocking if someone who is blocked appears in a conversation with people I trust, and a browse of their tweets doesn't bring up any red flags. The idea there is: if you're block-worthy, surely I'll block you again.
   So, if you've changed, and you intersect my life, chances are I'll just unblock you outright. But if you haven't, I don't actually want the app to unblock you. And there's no way for the app to tell.

3. Blocking isn't just for me. Blocking someone means they can't see my tweets, and that includes replies to people, among other interactions. Blocking someone stops them interacting with me, but it also:
   • stops them interacting with others through me
   • stops me accidentally retweeting one of their viral tweets later on
   • helps protect people I interact with who haven't blocked them
   • protects people I interact with who have, or who are private. Have you ever read a twitter conversation where one side is public and the other is a private account? You can still figure out a lot from one side, right? That's what I'm talking about.

†: I occasionally use the megablock.xyz app as a one-off for especially egregious cases.

## Alternatives

Maybe I should just unblock everyone, do a great big reset, and then restart my block list from nil, confident that from now on I'll only block the deserving. I don't think so. I believe that my block list truly makes my twitter experience better, and that resetting it would make it materially worse. Maybe not immediately, but certainly in the longer term.

Overall, the cold final part of it is that I believe the cost is worth it: that whatever small percentage of people I have wrongly blocked is not significant enough to detract from the amount of good and comfort that having the rest rightfully blocked grants me.

# On Invoking Deities

I hold a particular belief that I… don't really talk about, because I also have a deep loathing for all forms of proselytising (the concept of "software evangelism" makes me uncomfortable, especially when that is a titled position at a company someone can be hired for. Ick.). It is that non-consensually invoking deities is, at best, rude. Specifically:

• Invoking one's own deity, or a spiritual presence one worships or follows, is an invitation for that being to take notice of you, and of your surroundings. Doing so in the presence and to the face of someone who is not expecting that kind of attention is at best rude, even if that person happens to follow your same set of beliefs. If that person follows a different one, it's even more distasteful, at best.
• Invoking someone else's deity or spiritual presence, when you do not subscribe to them yourself, is also rude and potentially an affront.
• Asking for a blessing from your patron spirits onto someone who has not consented to being brought to the attention of said spirit could well be considered an indirect attack. Asking for a blessing from deities you don't even subscribe to is just plain weird to do directly, though soliciting attention from them via people who do subscribe is acceptable (so long as they consent to it).

Yes, this all applies even though I am an atheist. One could think that as an atheist I wouldn't care about "imaginary beings paying imaginary attention to me," but I suppose I am very slightly on the agnostic side for this: in the view that "we can't be sure," I would say it is prudent to not randomly invoke gods and goddesses for no fucking reason.

Anyway, that is to say: I understand it is a very ingrained tic in the English language to do so all the bloody time, but if you can at all refrain, please do not say "oh my ---" at me. Especially if you don't actually have religious beliefs, that just feels super weird to me. Why.
Less controversially perhaps, saying "--- bless you" is also off. Saying just "bless you" is… marginally okay, only in that it can probably be read as "I bless you" or something: like you're bestowing the blessing yourself, not invoking some extra-universal entity to pull on the fabric of spacetime to arrange things my way. If you want to be all proper about it, you could ask for permission; it's not something much done these days, but it could stand to make a comeback.

As an aside, I'm somewhat confused by the habit of censoring the name of your own deity while still calling on them all the time. Like, maybe if you have a religious dictate not to openly refer to your deity, and you're interpreting that as an injunction to very lightly censor the word "god" while still keeping the same frequency of saying "omg, OMG OMGGGG, OH MY ███"… that feels rather counter-productive? Excising the word from your language seems like a much better way to follow such a dictate. But what do I know, I'm (in this) only an atheist.

# Recipes

## Thermomix recipes

I have a Thermomix TM31. For the uninitiated, it's an all-in-one chopper, mixer, heating and cooking and steaming electric pot. It's great. These are custom recipes, from the mundane to the special.

# No-effort Dairy Cream

Makes 6 portions.

## Ingredients

• 1L milk
• 65g maïzena (fine corn flour)
• 150g sugar
• Optionally, 50g to 200g liquid cream, as you want
• Flavour. E.g.:
  • Coffee in some soluble form
  • Spices: vanilla, cinnamon, clove, or to taste
  • Orange blossom water
  • 30ml amaretto, advocaat, coffee liqueur

## Method

1. Place milk, maïzena, sugar, and vanilla in the Thermomix, set to 90℃, 18 minutes, speed 4.
2. After 10 minutes, reduce speed to 2. No need to set a timer (unless you want to): you'll hear the noise change as the texture shifts!
3. Add the flavour and the optional liquid cream, and give it a whirl at speed 3 for at least 30 seconds.
4. Leave it running while you prep your ramekins or bolinettes, then pour.
5. Clean your Thermomix immediately (better now than later, trust me).
6. Once your creams have cooled down enough, fridge all but one. Eat that one warm. You know you want to. After two hours they'll be good eating, for you or yours!

# Semi-steamed Pasta

This is my go-to when I don't want to cook but need to eat, or when I need a pasta base for something else but have no space on the stove. You can add a bouillon cube or OWO or similar to the water if you want to use it afterwards to make a sauce: it will then infuse flavour into the pasta.

N.B. This is a one-portion recipe. However, you can probably swing two smaller portions of pasta. You don't want to fill the basket more than 2/3rds at the most: pasta increases in size as it cooks, and will either overflow or not cook well.

## Ingredients

• 1 portion of pasta
• 1L water
• Salt to taste

## Method

1. Fill the bowl to the 1L mark with cold water. You can use just-boiled water instead: remove 9 minutes from the time.
2. Insert the steam basket. The water should come to about 1cm above the bottom of the basket.
3. Measure one portion of the pasta of your choice, and drop it into the basket.
4. Add half a teaspoon of salt on top. You can also season with pepper or other spices.
5. Set at Varoma/steam temperature, 22 minutes, speed 1, cap off.
6. Once it's done, use the spatula tool's hook to remove the basket, and serve.
## Explanation

The water, when boiling, takes significantly more space due to the bubbles, and this is exacerbated by the many nucleation sites on the basket and the pasta, as well as pasta water’s increased tendency to foam. In the Thermomix’s constrained bowl, that translates into a much-raised water level.

When the water is below 100℃, that doesn’t happen, so the pasta is untouched (except for the bottom 1cm). Once it starts boiling, the water covers the pasta and cooks it. That way you preserve the requirement to only add pasta to boiling water, without having a manual step in the middle.

A litre of water takes about 8-9 minutes to get to boiling in the Thermomix. Using a kettle to boil the water is faster, but adds a step; additionally, the bowl being much colder than the water means it takes about a minute to start boiling again.

When the timer stops, so does the heat, and thus so does the boil. During cooking, about 300mL of water evaporate, so when it’s done, all the water recedes below the basket, essentially auto-draining the pasta. Finally, the pasta’s starch also drains off immediately, so the pasta won’t stick together (as much) if left alone for a bit (for example, if you’re cooking a sauce to go with it). And the starchy water left behind is extra-concentrated, due to cooking with so little water: perfect to add to a sauce.

## Extra

You can add one or two eggs, whole, on top of the pasta. They will hard-boil while cooking.

You can try adding other things in, but beware! If what you add significantly changes how the water behaves when boiling, you can end up undercooking or overflowing. For example, oil or butter result in severely decreased boil volume, and the water won’t cover all the pasta, so only the bottom tier will cook.

# Certainly

September 2018

## Pitch

A simple certificate tool. Mostly just binding to OpenSSL and creating/inspecting certificates with the least user interaction possible. There are other tools that take a similar approach but they either do too little (only self-signed certs, no self-CA-signed ones) or too much (install themselves by default into various system certificate stores). Plus it was an interesting way to learn more about the OpenSSL APIs and certificates.

## Future

I want to finish the Rustls version.

# Hashmap Entry Insert

September 2020 —

## Pitch

Someone in the community discord wanted a feature and I was like “hey, why not try to implement it, that could be fun” (based on an existing but obsoleted PR).

```rust
impl<'a, K, V> Entry<'a, K, V> {
    pub fn insert(self, value: V) -> OccupiedEntry<'a, K, V> {…}
}
```

## Outcome

Currently implemented behind the #![feature(entry_insert)] flag.
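For illustration, here’s a minimal usage sketch on a nightly toolchain. The map, key, and values are made up; this is not code from the PR:

```rust
// Minimal sketch of Entry::insert on nightly (made-up map and values).
#![feature(entry_insert)]
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<&str, u32> = HashMap::new();
    // Unlike or_insert, this always writes the value, and hands back an
    // OccupiedEntry so the slot can keep being worked on in place.
    let mut entry = map.entry("counter").insert(1);
    *entry.get_mut() += 10;
    assert_eq!(map["counter"], 11);
}
```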
## Future

The library team has approved it for stabilisation, pending some minor changes. I will get to that… soonish.

# Japanese

Since August 2021

## Pitch

Learning Japanese. Unlike my prior try at learning Irish, this time I’m looking at a language where I have several friends who speak it, where a lot of the media I consume is in Japanese (at the source, anyway), and where I already have a little vocabulary from said media and cultural influences.

# Lah Kara

October 2021

## Pitch

A RISC-V mini-ITX immersion-cooled Linux workstation.

### Compute

- SiFive Unmatched: 4+1 RISC-V cores at 1.5GHz, 16GB RAM, in Mini-ITX form factor.
- Intel AX-200 Wifi 6 / Bluetooth 5 adapter.
- 1TB NVMe M.2 SSD. I’ve got a double-sided high performance one from a previous build, and if it’s not compatible I’ll get something like a Samsung 970 Evo.
- RX 570 4GB graphics.

The goal here is not high performance (not a gaming rig), but something that can handle a few screens and only needs a PCIe Gen 3 interface.

### Cooling

- Wide-gap slant-fin heat sinks to replace the OEM fin array (PCIe chip) and fan (CPU).
- Extension cabling for USB and Ethernet.
- I’m undecided whether I’ll strip the fans off the graphics card and attempt to immerse it, or if I’ll get a riser/extension to bring it outside.

#### Immersion fluid (oil)

Requirements:

- Boiling and smoke points above 120°C
- Liquid at room temperature
- Long shelf life (doesn’t go rancid quickly)

Nice to haves:

- Fairly clear or transparent (for aesthetics)
- Edible / non-toxic
- Doesn’t stink!

The “long shelf life” requirement basically reduces the possibility set to:

- Mineral oil (petroleum, colorless, odorless, non-edible, but not touch-toxic, quite cheap)
- Ben oil (golden, edible, fairly soft smell, 200°C smoke point, very expensive)

So as much as I’d love having a veggie oil computer I’ll probably go with mineral.

### Case

I’m thinking of making a custom acrylic/copper/solid wood box that would be roughly the size of a classic mini-ITX case. A copper side would be the thermal interface, and it could be sealed to make it portable and safe from spills (not permanently glued, with a rubber seal or something), so it can sit on the topside of the desk looking all cool!

- Base: solid wood (Macrocarpa)
- Glass: 3mm acrylic. This puts a further thermal restriction as it will melt at 80°C
- Haven’t decided for the back. If I can find a large extruded aluminium heat-sink-like plate, that would be ideal, but otherwise I’ll probably reuse and machine some existing computer case steel

### Power

NJ450-SXL from Silverstone. This is a small form factor fanless modular 450W. Selecting factors:

- 450W is more than enough, I could do with 300. But a detailed review of this one shows it’s most efficient at 50% load, which should be my average load on this system. That works out nicely.
- It’s fanless, but not only that, it’s the only PSU I found which was completely enclosed without a grate. It’s not waterproof or anything, I’m not gonna go stick it in oil, but it looks really good, certainly good enough to sit outside the case.
- It’s modular, which will help integrate it into the case design, and will certainly make for a clean look given I only need the 24-pin and one 6+2 PCIe feed.

### Naming

I name all my devices in a Star Wars theme. Workstations (desktop, laptops) after minor characters, phones and tablets after vehicles, and IoT / utility devices (vacuum, printer, routers, various SBC things, etc) after droids. My wifi network is named after a star system. I’m a nerd, sue me.

Lah Kara is from the new Visions series.

### OS

Either FreeBSD or Ubuntu (or both?).

#### FreeBSD

Tier 2 platform support, many packages fail to build.

#### Ubuntu

Unknown support but they have images and repos.

# Sassbot

A Discord bot for a local writer/nanowrimo group.

## Rogare (Ruby)

May 2014 to December 2020

### Pitch

It’s a custom Ruby bot with its own custom framework. Originally it grew from an IRC bot using the Cinch framework, but when the community moved to Discord the framework got adapted, and then refactored.

The bot has a low-key sassy attitude, provides a bunch of common small tools (like dice, random pickers and choosers, writing prompts, etc), but also has a pretty good “wordwar” implementation, and its prized jewel: a name generator seeded from some 150 000 actual names from various sources.
### Outcome

Served us well, now at rest: https://github.com/storily/rogare.

## Garrīre (Rust, Serenity)

July 2019 to April 2020

### Pitch

I wanted to get locales in there as well as voice, and the Ruby implementation just wasn’t solid enough for this kind of thing. There’s also a lot of cruft in Rogare I’d like to avoid, bringing over only the Good Parts.

### Outcome

https://github.com/storily/garrire/tree/serenity

Superseded by the Accord-based implementation.

## Garrīre (PHP, Ruby, Rust)

Since September 2020

### Pitch

With Accord, the bot is polyglot, so parts can be written in whatever makes the most sense for it:

- Top level routing: Nginx.
- PHP for most commands. Cool features: every request, i.e. every command run, is isolated; the standard library is large and the ecosystem very mature; changes are live instantly.
- Static help files generator: Ruby.
- !calc command: Rhai, via Rust, via PHP FFI.

### Outcome

https://github.com/storily/garrire

In production.

## Future

If Accord moves to a gRPC model, Sassbot will of course follow.

# Splash

Since July 2018

## Pitch

A reimplementation of the RF propagation model “ITM”. Taking the original Fortran source and notes and memos etc and translating that into Rust, then re-referencing everything to the documents, changing all function and variable names to things that are legible and make sense, and adding lots of high-quality inline documentation. The end goal is to have a safe implementation that can be read entirely standalone to understand what the algorithm is doing.

The secondary goal is to reimplement a subset of what the SPLAT! RF program does but more modern, e.g. taking in GDAL data as terrain input, outputting things in a more standard format, supporting arbitrary topography resolutions rather than hardcoding only two modes, and being parallelisable.

## Outcome

Still at it: https://github.com/passcod/splash

# Watchexec family

Since August 2014

## Pitch

The watchexec family of Rust crates:

- Watchexec the library, to build programs that (by and large) execute commands in response to filesystem events.
- Watchexec the CLI tool, a general purpose tool which does this.
- Cargo Watch CLI tool, a Cargo extension which does this for Rust projects.
- ClearScreen, a library for clearing the terminal screen (which is not as simple as you’d think).
- Command Group, a library to launch programs in a process group (unix) or under job control (windows).

## Current work

A large refactor/rewrite of the Watchexec library to support many long-desired features and future development: https://github.com/watchexec/watchexec/issues/205

# Accord

July 2020 to June 2021

## Pitch

A Discord middleware for writing Discord bots. The principle is to eat the Discord API and output it as structured requests to a local HTTP server. The server can then respond, generating events in return that are translated and sent back up to Discord.

The advantage lies in using the HTTP server stack, which means notably that the Accord target could be an nginx server, which could then direct different requests to different backends, add caching, load-balancing, a/b and canary deployments, etc... without having to write that common code (as it’s built in to nginx). Furthermore, client implementations become restartable with no downtime, it’s possible to write parts of servers in different languages and stacks, and everything speaks an extremely common protocol, which means there are tonnes of tooling.
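To make the shape of that concrete, here’s a hedged sketch of the kind of backend such a gateway could route to. It’s only an illustration of the pattern: the port, paths, and JSON shapes here are invented, not Accord’s actual wire format.

```rust
// Hypothetical backend for an Accord-style gateway, using only std.
// The gateway POSTs one HTTP request per Discord event; the backend
// replies with a payload the gateway turns back into Discord actions.
// Request/response shapes below are assumptions, not Accord's API.
use std::io::{Read, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8264")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        let mut buf = [0u8; 8192];
        // Naive single read; fine for a sketch, not for production.
        let n = stream.read(&mut buf)?;
        // A real backend (or nginx in front) would parse the path and
        // headers to route per-command; here we just log the event.
        eprintln!("event from gateway:\n{}", String::from_utf8_lossy(&buf[..n]));
        // Reply with an (invented) action list for the gateway to apply.
        let body = r#"[{"type":"message","content":"pong"}]"#;
        write!(
            stream,
            "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nContent-Length: {}\r\n\r\n{}",
            body.len(),
            body,
        )?;
    }
    Ok(())
}
```

Anything that speaks HTTP can sit in that position, which is the point: swap the sketch for nginx plus PHP and you get the Garrīre setup described above.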
## Outcome

It works: https://github.com/passcod/accord/

## Future

The concept of having an intermediary gateway worked well. The advantages in regard to routing, tooling, load-balancing, and having a common stack were borne out.

However, HTTP is the wrong medium for this. Particularly missing is bidirectionality, especially in the form of requests initiated on the “client” / bot side, rather than always responding to events from Discord. Further, the headers / body data dichotomy is awkward. Headers are available for proxies to route with, but are a flat, untyped list of key-values. Information may be duplicated. It’s also hard to establish compatibility.

A future exploration space with much of the same advantages but resolving the issue of bidi and helping with the data aspect could be to use gRPC instead of plain HTTP.

# Armstrong

November 2018 to early 2020

## Pitch

A job system for a wonderful world.

This was about building a job system that offers rich workflow potential and excellent monitoring/auditability while also being very lightweight. Scalability and partition-tolerance are secondary concerns at the moment. Inspired by Gearman, but aiming to be a superset.

Several tries: abandoned.

# Caretaker

May 2020 to February 2021

## Pitch

An “addon” contribution system for open software projects that trades higher trust for less effort on the maintainers’ part, allowing projects to survive and grow even if the creator is mostly checked out.

## Outcome

Revised down from a fully fleshed-out system to a one-page description of the caretaker process, which can be included in a project as-is.

Its mechanism has never been activated, to date. I didn’t really anticipate anything else: in the end this is more an “insurance” document in case I burn out or lose interest in open source again.

# Car Rainbow

April 2019

## Pitch

Adorning my car with a line of rainbow.

The original idea was to add a decoration that would distract from the various scratches and DIY repairs on my car. I started by calculating the length I’d need, then designing and ordering about five metres of rainbow “bumper” stickers in 30cm lengths. Then I mocked up the shape of the line with masking tape, and sat on it for a few days. In the end I went with a diagonal line rather than horizontal. It needed lots of adjustments to look straight from a distance, while dealing with the reality of curved surfaces. Once I received the stickers, careful application took three afternoons. Done!

## Future

I’m still thinking as to what, if anything, I’d want to do on the front/back of the car.

# ct

December 2019 to June 2020

## Pitch

ls and cat in one tool. It uses exa to provide ls-like functionality, and if it’s passed a file it prints it out instead, so it’s both cat and ls in a single two-letter tool, which makes it more convenient when exploring a filesystem, instead of having to edit the head of the line to swap tools.
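The core dispatch is simple enough to sketch in a few lines. This is an illustration of the idea rather than ct’s actual source, and it shells out to plain ls where ct uses exa:

```rust
// Napkin sketch of the ct idea: print files, list directories.
// Not ct's real code; delegates to ls where ct embeds exa.
use std::{env, fs, process::Command};

fn main() {
    let target = env::args().nth(1).unwrap_or_else(|| ".".into());
    match fs::metadata(&target) {
        Ok(meta) if meta.is_file() => {
            // cat behaviour: dump the file to stdout
            print!("{}", fs::read_to_string(&target).unwrap_or_default());
        }
        Ok(_) => {
            // ls behaviour: hand the directory to a lister
            Command::new("ls").arg(&target).status().expect("failed to run ls");
        }
        Err(err) => eprintln!("ct: {target}: {err}"),
    }
}
```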
# defnew

May 2020

## Pitch

A toolkit for manipulating C-ABI-like data structures and interacting with libc... on the shell!

How? Copious amounts of bad Rust... I’m not going for pretty, as this is a tool that I wanted for me, and I was not really going for public consumption.

## Outcome

Abandoned: https://github.com/passcod/defnew

# Earthquake Sky

October 2018 through November 2020

## Pitch

An experimental art piece representing a map of all earthquakes around Aotearoa like a star atlas.

The substrate is an A0 piece of white cardboard/foamboard. On it I’m placing hundreds of dots of varying sizes using clear nail polish, in the rough locations of all earthquakes since 1900 (as much data as is available) in the land and sea around Aotearoa. Once done, I will splash blue inks over the board, then wash lightly. The result should be that where the polish is, the ink slides off, resulting in a dark blue background with white dots peeking out.

Ultimately, this is expected to be an experience both near and afar, with the polish providing a tactile feel.

## Outcome

Mostly abandoned in its current form, especially since the Gisborne and Kermadec quakes in 2021, which rewrote the map. I made an ink-based test print in November 2020, which also highlighted I could be doing this in much better quality... something to think about.

# Figleaf

November 2019 to November 2020

## Pitch

“Crackfic-taken-seriously in the form of an open software project readme,” the configuration library.

It started as a redesign of config-rs. Then I started adding stuff. Then I abandoned all pretense of this being at all realistic and started just going wild. Then I figured I might as well document it all so thoroughly and consistently it could actually plausibly have an implementation. That’s where I’m at. When done I’ll see if I can put it up on AO3 somehow.

## Outcome

Abandoned: https://github.com/passcod/figleaf.

# Glass Hanger

October 2020

## Pitch

A little bit of custom furniture to support four restaurant-style glass hanger racks without damaging anything in my rented apartment.

## Outcome

I designed it over a Saturday, went to buy the supplies that evening, built and assembled it the next day, then spent two evenings painting it.

# IntPath

June 2018 to May 2020

## Pitch

A new library for interned path segments and paths with structural sharing.

String interning is actually a few different programming techniques, which can go from static string interning, where a bunch of strings that will be used by the application are constants — generally this is on a larger and more systematic scale than defining a bunch of consts — to language-level (or VM-level) interning, like Ruby symbols which only exist once in the program, to fully-dynamic interning with a little runtime. IntPath is that last one. Or rather, IntSeg is that last one.

IntSeg is based on a global concurrent hash map, which is initialised on program start. An IntSeg, an interned (path) segment, is an OsString that is stored uniquely but weakly inside the map, and where every “actual” copy is actually a strong pointer to that interned value. So every segment is only stored once, no matter where it’s created or accessed from, and unused segments take up no (actually some, but very little) space.

Then IntPaths are built on top of that as, essentially, a Vec<IntSeg>. That makes paths /a/b/c and /a/b/d use up only 4 path segments, instead of 6. And 100 absolute paths with a common prefix of 10 segments and a unique suffix of 1 segment will only use 110 segments total. Additionally, segments don’t care about their position in the string, so the path /a/a/a/a/a uses one segment.

There’s more, but that’s the general idea.
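As a rough sketch of that mechanism (illustrative names, not IntPath’s real API), the weak-map interner looks something like this:

```rust
// Illustrative sketch of the IntSeg idea: a global map of weak pointers,
// so each distinct segment is stored once. Not IntPath's actual API.
use std::collections::HashMap;
use std::ffi::OsString;
use std::sync::{Arc, Mutex, OnceLock, Weak};

fn interner() -> &'static Mutex<HashMap<OsString, Weak<OsString>>> {
    static MAP: OnceLock<Mutex<HashMap<OsString, Weak<OsString>>>> = OnceLock::new();
    MAP.get_or_init(|| Mutex::new(HashMap::new()))
}

/// Intern a segment: reuse the live allocation if there is one,
/// otherwise store a fresh one weakly and hand back a strong Arc.
fn intern(seg: impl Into<OsString>) -> Arc<OsString> {
    let key = seg.into();
    let mut map = interner().lock().unwrap();
    if let Some(existing) = map.get(&key).and_then(Weak::upgrade) {
        return existing;
    }
    let strong = Arc::new(key.clone());
    map.insert(key, Arc::downgrade(&strong));
    strong
}

fn main() {
    // "usr" is stored once, however many handles point at it...
    let a = intern("usr");
    let b = intern("usr");
    assert!(Arc::ptr_eq(&a, &b));
    // ...and an IntPath is then essentially a Vec of such segments.
    let path = vec![intern("usr"), intern("share"), intern("doc")];
    assert_eq!(path.len(), 3);
}
```

Note that in this sketch the map keeps its (tiny) entry even after all strong handles drop, which matches the “actually some, but very little” space note above.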
## Outcome

https://github.com/passcod/intpath

Idea discontinued once I stopped being involved with Notify, and thus mostly lost the motivation/reason for this.

# Irish

July 2018 to September 2019

Learning Irish.

## Outcome

Mostly gave up. I could understand simple Irish by the end, but I’m simply not exposed to enough Irish to really persevere.

# Keycasting

March 2018 to June 2020

## Pitch

My house key is uncopyable by local locksmiths. I tried to make my own.

I experimented using epoxy casting in silicone moulds. In the end it wasn’t strong enough for daily use, but I learned lots and even managed to produce something okay.

Then I envisioned trying metal casting. I was very much not doing my own metal casting, thank you. Instead the idea was to hire a foundry for a very small run of a very small piece. From cursory research, I ideally wanted to cast some copies in beeswax, send that to a foundry that can pour Nickel Silver (or brass as a fallback), and then get my keys!

In the end, I ordered key blanks that matched my key as best I could tell, and convinced a local locksmith to do a special cut.

## Outcome

I now have more than enough keys and the means to get more.

# Mead

August 2018

## Pitch

Mead is a pretty simple recipe, really: take honey, add water, optionally add something to help yeasting (I used pears), add six to twenty-four months of time. This batch followed a much earlier attempt in 2014.

After 364 days... I reopened all bottles, tested taste and carbonation, and recombined/split the lot over four batches, adding varying amounts of EC1118. I vented the bottles twice a day, then once a day, for a week, and finally stored them back for another round.

## Outcome

I have lots of pretty good mead! I also have no idea how alcoholic they are. Whoops?

# Notify

August 2014 to September 2019

## Pitch

A Rust library crate to abstract the various system-specific native filesystem-watching APIs into a single cross-platform interface.

## Outcome

I abandoned Notify and gave it over to a new team.

# Omelette

December 2018 to June 2019

## Pitch

A suite of personal archiving tools for tweets.

## Outcome

Lost interest: https://github.com/passcod/omelette.

# Pinn

April 2019

## Pitch

A browser extension companion to track and manage the fanfic I read. The idea so far is to track both what I want to read, what I’m reading, following, etc, all with a minimal amount of interaction. Simply visiting a fanfic page should track it, and some work with the Intersection Observer API should yield a pretty accurate detection of actively reading a fic vs just having it open or looking through it without really reading.

## Outcome

Would still like to do it but it is heavily deprioritised.

# Reasonable

March to June 2019

## Pitch

An everything-encrypted database and service of reasons for actions on twitter.

This is both about my frustration with not having reasons for following, or blocking, or not following, or muting, etc... accounts on Twitter, and a challenge to design and write a service with the most amount of encryption while still being able to offer something that works.

Really, the philosophy is that I don’t want to be able, as an admin operator, to read any of the reasons, or see any of the people affected by the reasons, or even, if possible, access the social media usernames of the people using the service, at least not without some willful modifications to the service, or a trace on the process, something like that. Someone obtaining an entire copy of the database should not be able to determine anything beyond the number of users, and the number of reasons. Someone obtaining that plus either the source or the binary of the service should also have that same amount of access.

That means layering and chaining, and careful design thought, especially as putting all user data in one encrypted blob is not going to work efficiently with potentially tens or hundreds of thousands of reasons per user. Making data queryable without looking inside is an interesting challenge.

## Outcome

In the end, I decided that instead of a public service, I would add tools to Omelette. The cryptographic challenge is interesting, but the best way to keep data private is to not actually put it on someone else’s cloud, really. (Then I lost interest in Omelette, but that’s another story.)

# Reupholstering chairs

April to July 2019

## Pitch

Putting some new fabric on the seat cushions.

I bought six chairs for pretty cheap, mostly because they’re dirty and uncleanable. But the chairs themselves are okay, if not very good quality (probably made from recycled wood originally). So I’m going to reupholster them.

## Outcome

I sit on them, they’re good.

# Storq

June 2019 to June 2020

## Pitch

A store for queues. A low-level store focused around medium-scale queues with some added advanced features. Nothing too terribly exciting, but it was being written to enable more stuff higher up.

## Outcome

Abandoned: https://github.com/passcod/storq

# Swp

May to June 2019

## Pitch

A utility to swap two files atomically, or as close as. On Linux 3.15+, the renameat2 call. On macOS, the exchangedata call. On Windows that supports Transactional NTFS, that. On WSL2 (potentially), the renameat2 call. On other platforms, a fallback method or two.
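On Linux, the whole trick fits in one syscall. Here’s a hedged sketch using renameat2 with RENAME_EXCHANGE through the libc crate; an illustration of the mechanism, not swp’s actual code:

```rust
// Linux-only sketch of an atomic two-file swap via renameat2(2) with
// RENAME_EXCHANGE (kernel 3.15+). Requires the libc crate. Not swp's code.
use std::ffi::CString;
use std::io;

fn swap(a: &str, b: &str) -> io::Result<()> {
    let (a, b) = (CString::new(a)?, CString::new(b)?);
    let rc = unsafe {
        libc::renameat2(
            libc::AT_FDCWD, a.as_ptr(), // first path, relative to cwd
            libc::AT_FDCWD, b.as_ptr(), // second path, relative to cwd
            libc::RENAME_EXCHANGE,      // swap instead of replace
        )
    };
    if rc == 0 { Ok(()) } else { Err(io::Error::last_os_error()) }
}

fn main() -> io::Result<()> {
    std::fs::write("left", "1")?;
    std::fs::write("right", "2")?;
    swap("left", "right")?;
    // Both paths still exist; their contents traded places atomically.
    assert_eq!(std::fs::read_to_string("left")?, "2");
    Ok(())
}
```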
## Outcome

https://github.com/passcod/swp

I decided there are other tools and projects that do that, and I’m not going to spend more time on it.

# Trebuchet

March to October 2019

## Pitch

A deploy tool / experiment using Btrfs volume archives.

The actual deploy tool is a client/server/agent suite with some neat features but aiming to be extremely light on resources, at least compared to many other tools I’ve tried. But the core underpinning, the main idea, comes from playing around with btrfs volumes/snapshots and the export/import commands on the tool. Essentially you can ask for an export of a volume, and you can ask for an export of a volume based on another volume; then on import, which can be on a completely different filesystem/disk/machine, as long as you have that same base volume, you can restore. So you can have filesystem-level diff archives on demand, with no awareness from the application being built/deployed, and everything is already in standard linux tools. It “just” needs some wrapping and managing.

## Outcome

https://github.com/passcod/trebuchet

The btrfs stuff added too many constraints; abandoned.

# What

March 2019 — September 2021

## Pitch

A small tool to show off what I’m working on. It’s a Node script that creates a small static site from a TOML file. On push of its git repo, a Travis build kicks off, and pushes the result to its gh-pages branch, which updates the site.

## Outcome

Did its job, but the format was a bit annoying. In the end, I stopped updating it, and eventually migrated all of it here, in the section you’re reading this in.

# Mostly regular: 2017-2019

In December 2016 I decided to blog more, and the way I chose to achieve that was to give general updates across everything I was doing on a regular basis. In 2017 I produced one such update every month, plus one at the end of the year. However, this effort faltered as it went on, until I completely gave it up in early 2020.

These monthly updates were, in a way, cheaper and more informal than writing blog posts for projects, and more durable than tweets. Their contents are kind of “micro updates” on software development, planning, life updates, reading, writing, other craft projects... all sorts.

# 2016

Partly inspired by txanatan’s weekly roundup that just came out right now, I’ve decided to start blogging a little bit more!

The first thing I noticed while opening my blog folder is how little I’ve done so in the past two years:

```
2015-dec-01  2015-oct-12  2015-sep-03  2016-aug-28  2016-dec-10
2016-feb-13  2016-jul-10  2016-jun-06  2016-nov-13  2016-sep-23
```

That’s it! Exactly 10 posts, counting this one, for the entire 2015–2016 period… awful. So, a New Year’s resolution is to try to write more.

But, you see, I don’t believe in having New Year’s resolutions take effect in the new year. That’s just asking for trouble. Instead, I spend the last month or so of a year reflecting and figuring out what to do better, then start implementing the changes right then. That way, all I have to do the next year is keep on doing them, i.e. the hard part.

Last year, I took the decision to have my eyes fixed permanently around this time, and then the big day happened early this year. Guess what: it’s been great. (Okay, this time last year was also when I broke up with my then-SO, so it wasn’t all happy times, but lemons, lemonade, etc.)

So, end-of-year resolutions, then:

- I want to improve my accent/speech. I’ve identified that not being confident with my English pronunciation is actually a fairly big source of anxiety in accomplishing some tasks. I’m considered fluent, but I do have an accent and sometimes that makes it hard for people to understand me, especially if they’re not used to it or distracted or, and this is the critical bit, when there’s no body language or lip reading to help me. That is, I’m much better at speaking with people face to face than on the phone or even over VoIP. So I’ve contacted a speech therapist. I don’t know yet whether this is something that will actually happen, because it’s quite dependent on the right specialist being here in Whangarei and having time in their schedule, but I’m hoping something will be going on sometime next year.
- I want to lose some weight and be more fit. I’m not, by any stretch of the imagination, in very good physical shape. I’m not a total wreck; I can do an hour of moderately strenuous activity and be fine about it. I do walk five or ten kms every so often. In mostly flat country. With no weight on my shoulders. Yeah, so not great. Recently I did about 3–4 hours of kayaking down an estuary and back up, and it killed me. I did it, but for two days afterwards I hurt so bad all over that it was keeping me awake. So I’ve started hiking up the forest behind home. Twice a week, probably will be increasing that as I go. It’s a 30-minute hike there and back with about 50m altitude difference. It’s a start. My goal is to do the Tongariro Crossing once again — I did it last in 2011, and it erupted since, so I’d like to see how it changed.
- I want to write more. Not only in this blog, but also fiction. I give a fair push every year during the NaNoWriMo, but it’s not really my mode of writing. I like to take my time, outline things, work out the direction and the details and all the little back references into the story. I’ve got a few outlines and a bunch of story starts. And I want to write monthly updates of what I’ve been doing.
Weekly is a bit short, given I do most of my other-than-work stuff on weekends, but monthly could work. So I’ve started this blog post. And I have that one story I think I really like on a front burner. We’ll see how it goes.

Now, as to what I’ve been doing in the past month or so:

## Notify

I’ve released version 3 of my Notify library, and started planning for the next big refactor. There are a few features I’d really like to get in that require a completely different architecture than is there right now, and I’d also like to improve the testability of the entire thing. Notably, I’d like to be able to use several backend APIs simultaneously, something that I believe would be fairly unique among other notify libraries and tools.

I’m also inspired by the (general) way nftables is done: instead of providing interfaces that provide what I think people want, allow them to write filters that are run in the engine and allow them to get exactly what they mean without me having too large an API surface.

But really, while I have interesting plans, I don’t have enough time to work on it. It took me a full week to release 3.0.1 with some needed fixes. I’m terribly grateful to the other contributors who’ve made this library much stronger than it was originally. I had no idea, when I started, that this would become my most starred project to date, by far. It’s been pretty cool.

## Conlang

I’ve created a script that I now call “Legola”. It’s the latest iteration in my efforts to create a script that can be read the same way no matter the orientation or direction of the page. Consider Latin script (the one you use reading these words): put it upside down, right to left, or even mirror it and it becomes much harder to read. Sure, you can train yourself to read in all directions nonetheless, but still.

This time, I went for something extremely compact: a single glyph is an entire word. It’s based on a line: it starts somewhere (this is clearly marked) and then as you follow it, it turns this way or that way, or encounters some obstacles. A counter-clockwise turn is the sound ‘ey’. A clockwise turn is the sound ‘oh’. Two turns one after the other is ‘ah’. A bar crossing the line is the consonant ‘L’. Two small parallel strokes on either side of the line mark the consonant ‘F’. And it goes on.

It’s inspired not just by my own previous efforts, but also by two other constructed scripts: the Kelen Ceremonial Interlace Alphabet, and Hangul (the Korean alphabet). Kelen provided the “line” concept, and Hangul is I believe one of the only scripts that combine several subglyphs to form larger glyphs in 2D space. I’ll do a longer blog post on it when I’m done hammering it all out.

## Hardware

I’ve acquired an HC-SR04 ultrasonic sensor, took out the Tessel 2 I’d ordered and finally gotten a few months back, and have started on wiring them up together with the intention of creating a gauge for our water tank, so we can accurately and easily track our water levels. So far I’m stuck in the actual detecting-the-signal phase, or is it the is-this-wiring-diagram-right phase? I can’t recall.

## Etc

I’ve done other things but I think that’s the big stuff; the rest is mostly work-related so does not really belong here. Not a bad month!

# January 2017

## New Keyboard

I finally caved and pre-ordered a Keyboard.io Model 01. This is an incredible-looking keyboard, carved out of solid wood, with an ergonomic split configuration, individually-sculpted keycaps, and all the trimmings. It’s a treat.
I’ve been lusting over it for over a year, following their production newsletter. I’ve finally decided just before the new year (the “deadline” in the tweet refers to them giving away an extra keycap set if pre-ordered before the end of 2016) that I deserve to spend a bit of money on myself, so let’s go. They’re hoping for a delivery around Q1 2017; I’m hoping to have it before May.

## New laptop?

Not for me, yet. But my brother is looking into an upgrade before starting his second year at Uni (he got into med!) and was especially interested in the Surface Pro. One hitch: they’re very expensive.

# October 2017

## Night-mode improvement

Got an idea as I was thinking about putting glow-in-the-dark stars up my ceiling, and it turned out to be a great success at little effort! If you switch this blog to Night mode by clicking the tiny button in the bottom left corner, you’ll see what I mean.

That was done by finding a high-res picture of the night sky, then modifying curves to eliminate all the variation in black and dark blues, and converting all black to alpha. Then some touch-up work was done to eliminate as much as possible the blue and red halos around some galaxies that were visible on a white background. Finally, I used GIMP’s seamless tiling algorithm tool to generate the tile and layered it using CSS.

## Writing

### This blog

After doing some quick calculations, I found out that if I write about three thousand more words in this blog before the end of the year, I’ll have written more in this blog this year than all prior years combined! Now that’s a pretty fun goal.

I achieved this handily when I did experiments around transcripts later on.

### Naema

Writing has been slower than usual, which is perhaps pushing back the end date. I have been having difficulties reconciling Leia’s character with how I portray her in the one interaction that’s really important in this fic, my plans for the story warring with who she is verily. I think I’ve got it now, but Leia is definitely the hardest character I’ve had to write about.

I’ve been writing and thinking a lot more about Thabeskan culture and history, as that is an important part of the interactions between my two protagonists. That may have caused a slight change of structure and that in turn may push back completion, but oh well. So long as it improves the story…

## Fatso

This month, Fatso also declared they were shutting down. Fatso has been a stalwart of these past years at this house, and it brought us a vast and diverse set of films even when we couldn’t afford downloading or streaming movies in the early times when the internet was capped and even slower than it is now. We’ll miss it, even though we weren’t watching as many DVDs recently, having diversified the sources of films we were getting. However, Fatso was the only way to get non-mainstream or foreign films, especially French (with original soundtrack) as well as many niche films from Latin America and the Middle East. Those are lost to us for now, unless and until we find another way.

So thank you Fatso! We loved you.

## Cogitare

After much gnashing of teeth (I swear a lot when I’m coding), I’ve finally put Cogitare together in a way that works decently well and looks good. It’s usable now, even though there are a number of issues with the UI that I want to fix.
During the next week or two, I’m going to finish that, write an import + admin interface, hook up the IRC bot, then prepare for NaNoWriMo by loading up a bunch of prompts and seeds and deleting the awful from the ones I have currently.

# November 2017

## Music

After a couple Spotify Discovers that didn’t really turn up anything I liked, I decided to start my annual “pick a playlist from all the stuff I listened to this year” early. That involves listening to all 500 songs I saved throughout the previous 10 months carefully enough to decide whether they make the cut or nah (which took two weeks), and then going through the resulting set to carve a theme that makes the mess a whole and remove the extra. I’m still going through that part.

This year it’s looking like a mix and weave around tunes of the fabulous Norah Jones, beautiful contemporary instrumental pieces by Swedish composers August Wilhelmsson and Martin Gauffin, Lorde’s latest, and the enchanting and engaged vocals of Vienna Teng. It’ll probably reach slightly less than 200 tracks.

As usual, I pick a name that is both a celestial reference and has subtle personal significance. This year it’s Atarau.

## Rogare

With hours to spare, I completed the final piece of Cogitare and preparation for this year’s NaNoWriMo IRC chat room. Rogare is old made new, a simple bot framework to support our writers. The name generator is ready and faster than ever, more than a hundred prompts and writing seeds are filed into Dicere ready for consumption through website and !prompt. And at the end of November I’ll pack it back up, not completely hidden away, but safe and secure and regularly tested so it’ll be ready to be dusted off for next year. Onwards!

## Deckhack

I was invited to the Deckhack community, a happy ronde of devs and subversors who build Better Tweetdeck and other related projects.

## Rotonde

I briefly started a Rotonde client but it didn’t go anywhere, mostly because all my other priorities and projects reared up and yelled that I didn’t have time.

## NaNoWriMo

The first ten days of NaNoWriMo have been going a lot better than I thought (I reached 25k words on the 9th!) and it has been very nice to reconnect with the NZ community. I’ll probably drive down to Auckland in the second part of November to take part in in-person events, too.

# December 2017

## Writing

I published Chosen Names, a Star Wars crackfic about ridiculous amounts of titles, as inspired by a tweet.

I wrote 30k words for my NaNoWriMo, all on the Star Wars fanfic that I’ve been calling Naema or Dawnverse or “In the Pale Darkness of Dawn”. For a nano thing, it failed to reach 50k. But for a story thing, I fleshed out a lot more of the immediate background of the characters, and solidified many ideas I’ve had recently. The story is now written as several volumes instead of one plus some side stories.

The story starts with Jeira, last Keeper of History of her family line, thus holder of one of the Keys to the Thabeskan archives, and wife to Beca, an offworlder who studied to be a historian in the Archiva Republica. When the Empire arrives, the family is forced into survival as their jobs become illegal, and they become part of a network of resistance spanning the planet and beyond.

We then look to Patra, current Clan Head of the Fardi, looking both to maintain the Clan’s standing and responsibility in Thabeska, and to change its mission from profit in smuggling back to being the protectors of the small world, and regain a more active hand in its politics and direction.
When the Empire interrupts those plans by replacing the officials of one of the most remote Republic worlds with its own, the Clan starts a dangerous double game of staying in the good graces of the Imperial Major ‘in charge’, and keeping the population as safe as they can while also planning for their independence. Patra Fardi’s youngest niece, Hedala, is Force-sensitive although she doesn’t know it. What she knows is she has keen instincts and she’s always better off, in small ways and in large, when she follows them, even when they’re very odd.

Jeira’s and Beca’s daughter Naema was left alone when her parents suddenly disappeared one night. She’s had to survive on her own, and she’s focused all her efforts on escaping Thabeska. One day, she finds an opportunity and steals away on a Fardi shuttle. When they land in Rinn, Hedala stays behind, citing feeling off — which the crew assumes is land-sickness: not uncommon in first-time travelers after a long passage. Naema proceeds to steal the shuttle, unknowingly kidnapping the important child! From adversity grows a reluctant partnership (and perhaps more!) as they try to find their way.

It’s pretty fun to write, and I do plan on publishing it at some point. However, I’d like to have an entire work finished before I do that. I know better than to commit myself to deadlines I know I’ll blow, but I expect to have at least the stories of Jeira and Patra out there before mid-2018.

## Cooking

Sauce I made up when looking for a softer, creamy, tangy alternative to Sriracha for a salmon dish:

- 3 tbsp Paprika
- Chili (powder or flakes or whole peppers chopped up) to taste
- optionally, 2-3 drops of Tabasco
- 3 tbsp White Vinegar
- 3-5 cloves of Garlic, crushed/pasted
- 1 tsp soft brown Sugar
- a generous pinch of Salt

All that should make a wet coarse paste. Then, to make it creamy:

- 2 heaped tsp Mayonnaise (non-sweet if possible)

Mix until smooth, store in the fridge.

# January 2018

## Writing

I decided to put the Dawnverse on hold for now. I’ve got several stories I want to tell within it, but I pretty much burned myself off it. I’ll return to it at some point. One thing in particular is that it feels a lot more original than fanfic at this point, and I want to explore that. But not now.

I started a Time Loop Star Wars fic around the New Jedi Order massacre. It’s simple and requires a lot less thought and I’m writing it as it comes with somewhat fuzzy plans for storylines, and it’s a lot of fun (as well as being a lot of drama). Depending on how long it runs, it might become my “medium-sized creative project” for the year. The year is long still, though.

# March 2018

## Constellationd

I’ve been writing a prototype/MVP for a server agent that uses an on-demand cluster-wide consensus algorithm rather than a Raft-like always-on consensus of only a few master nodes plus a bunch of clients.

I have set myself fairly strict resource limits so that it can be set up easily from one to many hundred machines, and use the least amount of memory it comfortably can so it can be deployed to 512MB VMs (or even those tiny 256MB ARM servers that some hosts in Europe offer) without eating half the machine’s RAM. The goal is to be able to monitor and manage any number of machines, or even to use it locally for development, testing, or tooling purposes.

Beyond the software itself, the project is a way to get a lot more comfortable with Tokio.rs as well as learn more about networking, distributed setups, and protocol design + parsing.
The repo is here: https://github.com/passcod/constellationd

# April 2018

## April Fools

This year I did something of my own. In early March, at work, while pairing, we typo’d PHP’s ?? Null Coalesce operator into ???. That was pretty hilarious. We couldn’t stop ourselves from imagining what such a symbol could mean. One idea stuck, as being both on point, completely overpowered, and yet utterly practical.

As April was coming up, I floated a ridiculous idea: what if we proposed it for real? Ultimately, work keeps us busy, so we don’t really have time to spend on frivolous bullshit like that. Instead, I quietly worked at it in spare hours.

As part of my initial research, I found that PHP has a fairly well-delimited process for proposing changes, and it all starts with a posting on the mailing list to gauge interest. I couldn’t do that! It wouldn’t be an April Fools if I had to disclose it prior to April. Also, I didn’t think it would be nice to annoy the PHP developers with this. People on Twitter (or Slack, or wherever you came from) at least are already wasting their time.

So I skipped that step, and wrote up a PHP RFC as seriously as I could. I included motivations, discussions of why existing features weren’t enough, use cases, examples, sample code, comparisons to other languages. I even checked out PHP’s source and coded up a very nasty hack of an implementation of the feature. And I added tests.

This was meant to be a complete joke, something that could not exist in a thousand years, and yet I fell into my own trap. As I wrote it, as I described it, as I defended it, I came to appreciate just how useful the feature might be. I still don’t think it will ever get accepted, and certainly not with its ??? syntax. Even if I see many places I could use it, it’s still very outlandish.

But for you, dear reader, if you haven’t come here from the proposal itself, I leave you with a proposal that captured my attention for the best part of a month. Here is ???: The Exception Coalesce Operator.

## Keyboard practice

Practicing every 2-3 days (sometimes every day) for 20 minutes on Keybr, I clocked up 5 hours of practice, which got me to acceptable speed over half the keyboard (lowercase letters only at this stage). So guesstimating, that should mean that in another 5 hours or so I’ll have the entire alphabet down, and I’ll be able to switch uppercase on, then symbols. After that, I’ll start using the keyboard for minor works. With a bit of perseverance I hope to be using my Keyboard.io at work by July/August.

# May 2018

## Birthday

I turned a quarter century, which... actually it sounded pretty cool in my head and even aloud but now it just sounds o l d. I’m still the youngest at work, somehow. Even counting the “intern”. I can only assume that won’t last.

I got myself a reMarkable, of which I had high expectations, and it performs even better than I thought. I’ve been wanting a tool like this basically since forever, and I think it will fill a gap in my tooling that nothing I’d ever tried really came close to. From others, I got chocolate and alcohol and live Jazz, which was pretty good.

## Keyboard learning

I’m not practicing with very high regularity, but I’m managing a few sessions a week, which is progressing me at an okay pace. So far I’ve totalled about 8 hours of cumulative practice, which brought me to about two thirds of the lowercase keyboard at good accuracy and speed.
My test with keyboards and layout variants is whether I can confidently enter my passphrases into terminal prompts (which offer absolutely zero input feedback). Once I can achieve that, generally, I can use the keyboard for everyday tasks, and will thus get better much faster. I’m obviously not there yet.

# September 2018

## Medical

I got some dental surgery done at long last early this month, and the recovery was somewhat longer than I’d thought, which didn’t help with projects. I’m all good now, though! (And two teeth lighter.)

## Watchexec

This month I volunteered to take on the Watchexec project, to ensure that pull requests and issues and new features would be finally tackled. Matt Green, the project’s originator and principal author up til now, has recently gotten more busy at work and most importantly, become a first-time parent. Congratulations to him! That did mean that there was no more time for the project.

I spent a few evenings reviewing the entire source and looking through all issues and pull requests, then made my first release on Sunday 19th, and another one on Sunday 9th after a little more coding to smooth over old PRs. I hope to work through more of the issues as we go, but I am very aware that this is yet another project on top of my stack of already pending stuff, so I’m pacing myself with a lot of care.

## Notify

Notify advances slowly. This is mostly a matter of time, now: most of the framework is ready and awaits only filling up the gaps.

I am getting very interesting snapshots of futures 0.3 and the async/await support in Rust itself, which could mean improved ergonomics and better development. In the meantime, though, I’m keeping with what I have. The Rust inotify wrapper has recently gained a feature to use futures and Tokio Reform, which might make it a lot easier to integrate, although it may be that the access I require for the Backend is too advanced for the nicely-wrapped version. To be seen.

A wonderful contributor has spent some time upgrading all of the libraries in the v4 branch. This will likely be the last release with the old code!

## Splash

My mysteriously-named project is a foray into some really old code, and a documentation effort more than a coding effort. It’s a nice change of pace, even though it can be really intense… I like it for the different kind of work. Perhaps it will see a release this summer, but with everything else piled on, I’m making myself no promises.

## Certainly

Frustrated by the awful state of certificate tooling, especially for such common things as generating self-signed certificates, I made a small tool that makes the whole thing as easy and simple as possible! It especially excels at multidomain certificates, and has an extra feature to be able to create a local CA and sign certificates that way. All with minimal fuss and no ambiguity:

```
$ certainly test.com
Writing test.com.key
Writing test.com.crt
```

I also took what I learned from Watchexec and Cargo Watch, and set up prebuilt binaries for Linux, Windows, macOS, as well as a Debian deb... for ease of use!

## 10 minutes

My Star Wars fanfic is stalled! ...no, it’s not. It’s just that with everything else, I thought I would get the chance to work on the chapter last month, but I didn’t. So it’s still getting pushed away to the next opportunity.

# November 2018

## Sassbot

Not doing NaNoWriMo this year, but I’m still maintaining the bot! (And what a wonderful thing to be coding in Ruby again!)
This year it got some sweet upgrades, including a new database backend (to store richer information and perform more powerful queries), a bunch more sources for its !name command, and an adaptation to Discord because the channel moved.

## Art

I’ve started on a cool art project mapping all earthquakes around New Zealand from 1900 til now (early data is a bit imprecise, but it works out) using some interesting techniques for the rendering. I hope to be done during the month.

Specifically, after doing the data crunching, I transposed the key positions onto an A2 piece of white solid cardboard / foamboardish material. Then I painted each dot (hundreds of dots so far, at about ⅓ done) to approximate size in clear nail polish resin. The idea is then to splash ink onto the sheet, and the ink should slide off the resin while adhering to the cardboard. In the end, I hope to have an earthquake map in a form reminiscent of a night sky star map, with some interesting random flow effects from the ink, and a tactile/glossy feel to its surface. We’ll see how it goes.

## Laser

I’ve acquired a small 80x80mm 2W laser engraver. It should arrive closer to the end of the month, and I’m excited to try new projects and experiments with it!

## Casting

I’ve done some preparatory experiments for clear resin casting using cork molds for fast prototyping. I’ve also got glow-in-the-dark powder, graphite, stone powder, metal powder... all possibilities for engineered materials of various capabilities and styles. Not quite sure where I’m going with this, but it could be interesting.

## End-of-year sec cons

I’m going to both purplecon and kiwicon! Actually I’m more excited about purplecon, but I have a kiwicon ticket so why not. Hopefully I’ll be seeing friends there too!

# December 2018

## Sassbot

I ended up writing 400 words of fic and working nigh-on continuously on the bot, adding myriad improvements and setting things up for the future.

The first big thing was that the community moved from IRC to Discord, so I had to get the bot ported over asap! That took me two nights, with the most essential functionality available that first night. Initially the move seemed uncertain, such that there may have been a split or dual-usage, but as Discord got more familiar it emerged as the clear favourite. Following that, the bot was initially ported to be available on both media at once, but IRC support was dropped fairly quickly, lifting a huge burden off development and testing time.

During the month an incident happened that put us in contact with the Australian community, especially its resident bot and developer, Winnie and Dawnbug. That may spark off interesting collaborative arrangements.

Its todo list and scope have grown, and there is more to come!

## Laser

The engraver still hasn’t arrived. I’ve been in contact with the importer, who’s had some supplier trouble. It miiight make it before Yule season overwhelms the post, or else in January.

## Sec cons

Purplecon was highly informative and very well put together. Unsure if it will continue, but I would definitely go back, as the more... sensible... part of the week.

Kiwicon was an experience. It’s definitely the fun conference. I had lots to bring back to everyone, including new Te Reo words for papa, insight into the security aspects of tech we’d been looking at for work, some swag for colleagues, and the knowledge I do okay in big conferencing crowds for myself. That last one will come in handy for next year’s Worldcon!

# 2019

Also see the reading logs.
# January 2019

## Monthly updates are now actually monthly

The 10th-day thing got old. It was an artifact of when I actually started this monthly thing, but the confusion of writing about half of the past month and half of the current was really odd. So now updates are published on the last day of the month, at 20 o’clock local.

## Monthly updates may not actually be monthly anymore

Two years ago when I did those every month, I did make an effort to get something done every month. Last year it was a bit lackluster. This year, I’m basically throwing in the towel. Updates will happen, but they might not happen every month. Or maybe they will. Habits are hard to break, and I’m not trying to break this one, so we’ll see how it goes.

## Armstrong

I think I’m ready to start talking about this. Armstrong is a project that has been brewing for the past year or so. The concept comes, as often with new projects, out of frustration and want.

Armstrong is a job system. Or a job framework. Maybe a job queue. Task queue. Orchestrator. Scheduler. It is an evolution of Gearman, adding features rather than removing them (there are many extra-minimal job queues out there), and keeping to the patterns that make Gearman strong in my opinion.

There is a laundry list of features, but one I want to describe here is the priority system.

Priorities in Gearman work on the basis of three FIFO queues per worker, where “worker” means a particular type of task that’s handled by a pool of worker clients. Those three queues are high, normal, low. Jobs, by default, go into the normal queue, but you can decide which queue they go to explicitly. The normal queue is only processed when there are no high jobs, and so on. Within a queue, jobs are processed in order.

This leads to patterns and rules being established in userspace to ensure good system behaviour. For example, pushing to the high queue is strictly controlled, so that it remains empty most of the time, except for when something urgent comes through. Otherwise it’s all too easy to need an urgent compute and have its jobs just queue up politely riiiight at the back.

Armstrong’s priorities work on the basis of a single giant dynamic queue, not only for each worker, but for all workers all at once. Priority, at the interface, is expressed in time expectation. One can say:

- I don’t care when this job runs, I just want it done sometime.
- I need this job to run within the hour.
- These tasks all need to finish within 500 milliseconds.

Armstrong then figures out how to achieve that. The most obvious option is to re-order the queue based on this timing metric, but that’s not all it can do.

Firstly, as a bit of background, Armstrong keeps records of what jobs ran when, where, for how long, etc. This is for audit, debugging, monitoring. It’s also for making priority and scheduling decisions. Given a job description, the system can query this record and derive the expected duration of the job, for example by averaging the last few runs. Without you having to say so, it knows how long jobs take, and can re-order the queue based on that. Even better, it knows how long jobs take on different hosts, so it can schedule more efficiently, again without prompting.

Secondly, on an opt-in basis, Armstrong workers may be interruptible. This may be achieved through custom logic, or it can be achieved using kernel interfaces, like SIGSTOP, SIGCONT, or SuspendProcess().
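As a rough illustration of that kernel-interface route (not Armstrong’s actual code; the “job” is a stand-in sleep, and it needs the libc crate):

```rust
// Sketch of pause/resume via SIGSTOP/SIGCONT on Unix. Not Armstrong's
// code: the job is a stand-in sleep, and errors are mostly ignored.
use std::process::Command;
use std::{thread, time::Duration};

fn main() -> std::io::Result<()> {
    // Stand-in for a long-running, opted-in interruptible job.
    let mut job = Command::new("sleep").arg("30").spawn()?;
    let pid = job.id() as libc::pid_t;

    thread::sleep(Duration::from_secs(1));
    // Freeze the job so an urgent one can take over the machine...
    unsafe { libc::kill(pid, libc::SIGSTOP) };
    println!("job {pid} paused; urgent work would run here");
    thread::sleep(Duration::from_secs(2));
    // ...then let it pick up exactly where it left off.
    unsafe { libc::kill(pid, libc::SIGCONT) };
    println!("job {pid} resumed");
    job.wait()?; // a real scheduler would keep tracking it instead
    Ok(())
}
```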
If an urgent job comes through and whatever is currently running will not finish in time, Armstrong interrupts the running jobs, schedules and runs the urgent ones, then un-pauses the originals.

This combination of granular priority, record querying, and control superpowers makes efficient compute use easy and eliminates hours of setup and discussions. And through this one feature, you can already see other aspects of the system. More later!

Armstrong is still in development, and no parts of it are yet available.

# February 2019

## Notify

There’s a bunch of work happening on Notify at the moment. I’ve basically taken it out of being “frozen” and back into maintainership. I do not want to spend any significant amount of time on it, but I welcome anyone else to develop features, fixes, improvements, etc, and I’m doing the actual maintainer thing of review, advice, guidance, merging things, running rustfmt, keeping CI going, and generally keeping it an active project once again.

A lot of Big Names use Notify now, and it’s honestly cool. It was mostly background radiation for a long time, like people used it and some people contributed features, but it was very quiet, very passive consumership. Now there’s a lot of good energy and good people doing great work on it, and I’m in a much better place psychologically, so I think it will regain some shine!

As part of this sorta soft reboot, I went through the entire commit log and backfilled the entire changelog down to the very first release, which was fun (if a bit menially tedious). Notify started pre-Rust-1.0, and that must seem like quite a strange period for people who haven’t lived it, or even for me, now. The past is weird!

## Kai

A project I’ve been wanting to do for ages, but it’s also deceptively big, and really I just put down very minimal bones and some notes and not much else.

The idea started as a “flexible curry recipe generator.” Basically input what ingredients you have, and it walks you through it. But then I started wanting to integrate a lot of other things, most importantly something radical: a slider of “how much time do you want to spend here.” So the recipe would have all those parts that are timed and possibly ranged as well, and would cut down the recipe steps and time based on what kind of timeframe you’re looking at.

Sure, caramelising onions makes for deliciousness, but you can skip it, and in fact you can skip the onions entirely if pressed for time or if you don’t have onions right this moment. And cutting onions as fine as possible is an excellent way to caramelise them to perfection, but it also takes ages, even more without practice, so just a rough chop will do nicely if you just have to get to it. And that’s just onions! There’s so much more.

The slider would be at the very top, and it would grey out meat/veggie choices that you just can’t do with that kind of time. Carrots. Potatoes. Some meats. Etc etc etc. You should still have a fair amount of choice and diversity at the extreme-low of the scale. There are some really quick ways to cook a spicy meal that looks like a curry.

But I didn’t want to limit myself to just that, so I called it ‘Kai.’

## Purple Sky

Yay another project! Actually this one is fairly old, and I’ve been slowly working at it. I’m ramping up research and prototyping, and some people know what this is, but I’m also keeping it way under wraps for now. This is something that might actually turn into something profitable! So that’s exciting.
# March 2019

## Keycasting

I have been spending most of the month trying to cast a key in epoxy. It's an interesting project to be sure, even if I'm doubtful as to its actual success. Learning lots, making mistakes, making discoveries.

## Housing

I also moved on the first of the month. I now live in a much larger place than before, although still by my lonesome! (I'm not complaining, I like it, it's why I chose it.) It's got no carpet anywhere, fake-wood laminate plank and dark tile floors, two storeys, ample parking space, in the middle of town but also in the middle of greenery (although none I have to tend to), and honestly I like it a lot. The rent is a bit high, but it's not outrageous, either.

Consequently I've spent a while finding furniture and making things work. Stuff is expensive! Sheesh. I did get some good deals, and with a bit of DIY and low-key restoration I think I'll get a nice setup out of it all. And it keeps my hands busy!

## Work

After 10 months of work we've finally released a new customer website. Won't really say much about it here, but it's good to be done, even if we'll keep on working on features. Relief and accomplishment. We worked pretty hard to get it ready and launched before April, as no-one wants to seriously release a new product around April Fools! Google can do its thing, we'll keep to not trying to surprise and upset our customers (yes I'm still bitter about the Inbox thing).

## Notify

I've been preparing for a breaking release of the main branch. In the last few days I've finalised the feature list of what I want in there; I'm taking the opportunity to bundle a bunch of changes into it. I'm aiming for a mid-April release, but plans do go awry around this project, so who knows.

This will be a good way to test how fast adoption is. I'm expecting a few months of earnest transition, followed by a long tail... maybe by the end of the year it will be over, but I'm not holding my breath.

## Armstrong & Reasonable

On hold for now.

## Trebuchet

A new little project on the same proto-framework as I made for Armstrong. I want to use a few features of Btrfs around volumes to make a lightweight deployment & artifact system. Aimed at small deployments of a few machines or projects, for small teams or individuals. A lot of stuff in the space is made for super large scale, and doesn't make sense — despite hype — otherwise.

# April 2019

## What

What is a new little tool I made this month, that tracks what I'm working on at the moment. These monthly updates often have summaries of things I've started or finished or want to talk about, but they don't have everything and they don't really do a good job of showing continuity or breaks or anything. What is not limited to fiction or software work, and has a space within each entry for "media" links, i.e. Twitter threads about the activity. That also acts as a kind of archive and historical reference. So here you go! Watch me work away: what.passcod.name.

## Pinn

I've started work on a browser extension to make reading, stats-ing, and managing fanfics easier! I hope to be done sometime next month, and am working on it in between working on...

## Notify 5.0

Not the Notify 5.0 vaporware release I've been talking about for the past two years, but an actual, made of real code, Notify release! It's a big refactoring that will fix a bunch of issues and open some cool possibilities... but honestly? At the core it's not even about that.
This work and this release are about me getting re-acquainted with the codebase as it stands today, and bringing it forward to the standard I want it to be at now. I didn't know how most of the library worked until a few weeks ago. Obviously that made supporting and maintaining it hard. So now I've got a much more solid knowledge and I'm improving that every day.

This is both hard to admit, to myself too, and it's not something I feel comfortable sharing on or near the project itself (not very inspiring!), but I feel okay sharing it here, and maybe I'll work up to putting it out to a wider audience over time, too.

# September 2019

## Abandoning Notify

I made the decision to abandon Notify. I wrote lots about it, both on Github and on Twitter, but the shortest justification is this:

[Notify] sparks negative joy, so I'm Marie-Kondo-ing it out.

Turns out however that abandoning such a large piece of software and my life isn't simple. I'm not taking it back, ever at all if I can manage it, but getting to the point of it being off my hands completely is not at all straightforward. I do hope it gets there soonish, though, because otherwise quite a few crates and part of the ecosystem will stop working.

I also hope I'm making this transition less painful than other abandonments I've seen in the OSS space before, though. My goal is to reclaim my own time and joy, not to say fuck you to any part of Rust or cause misery to anyone.

## Storq

Hot on the heels of getting way more time, I got some motivation back! Who knew. Anyway, I started working on Storq again. This is a rename of a project initially called Q, which spawned from a project called Gearbox, which spawned from another project also called Gearbox but not written in Rust. This all started from a musing on making Gearman better, and so Storq is... not that. Or at least, not recognisably that.

Storq is a construction on top of the Sled embedded advanced key-value store, for the purpose of dynamic work queues with arbitrary ordering controlled by application-provided functions. At some point after many design drafts and documents I figured that the Gearman queue model was a good idea, but not robust or versatile enough; Storq is the attempt to create a new bedrock for a work processor system inspired by Gearman. (A toy illustration of the key-ordering idea is at the end of this update.)

## Splash

I've also picked back up Splash, my ongoing HF radio propagation tooling endeavour, starting with a thoroughly-documented implementation of the ITM.

## Holiday

I went on vacation for four weeks to Europe and came back home early this month. It was pretty great!
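As promised in the Storq section above, here is a toy illustration of the bedrock idea. None of this is Storq's actual code or API; it only shows the property that makes sled attractive for this: keys are kept sorted, so if the application encodes its ordering into the key (here, a simple priority number), "give me the next job" falls out of the keyspace for free:

```rust
// Toy sketch, not Storq: a priority queue on sled's sorted keyspace.
use sled::{Db, IVec};

fn push(db: &Db, priority: u64, id: u64, payload: &[u8]) -> sled::Result<()> {
    // Big-endian bytes sort the same way as the numbers they encode.
    let mut key = priority.to_be_bytes().to_vec();
    key.extend_from_slice(&id.to_be_bytes());
    db.insert(key, payload)?;
    Ok(())
}

fn pop(db: &Db) -> sled::Result<Option<(IVec, IVec)>> {
    // First key = smallest priority number = most urgent job.
    match db.iter().next() {
        Some(entry) => {
            let (key, value) = entry?;
            db.remove(&key)?;
            Ok(Some((key, value)))
        }
        None => Ok(None),
    }
}

fn main() -> sled::Result<()> {
    let db = sled::open("storq-toy")?;
    push(&db, 9, 1, b"sometime job")?;
    push(&db, 1, 2, b"urgent job")?;
    let (_, first) = pop(&db)?.unwrap();
    assert_eq!(&first[..], b"urgent job");
    Ok(())
}
```

A real system would of course let the application supply the ordering function instead of hard-coding a number, but the sorted-key trick is the part worth showing.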
# December 2019

## Donations

This year I decided to forgo donating to organisations. This had many reasons, most of which boiled down to most organisations sending me about a ream of paper each in "pwease donate" advertising throughout this year, which I gather doesn't come cheap, so at my support level I estimate they spent at least half of my dollars on asking me for more dollars, which is ridiculous. I'm sure they also do good work, I'm just not sure they particularly need my cash to do that. One organisation that is the exception, and the sole organisation in my donation list this year, is PAPA, and I gave them $300 sometime back in May.

So here I was wondering what I should do. One thing I've always wanted to do since I started doing this regularly, but was never really sure how to, is direct donation, i.e. "just giving people money." So this year, starting in November, I went and actually tried that.

I figured out the parameters and made a tweet:

Friends, it's donation season, point me to (New Zealand) queer and non-binary people in need of help. I'll fund (at my discretion, t&c apply, standard disclaimer, etc etc) until I reach this year's donation budget.

(Budget not specified because not completely nailed down, but is somewhere around $1500.)

I didn't really get many returns, but then I trawled givealittle for matching people and donated there too. By December 3rd I had spent about ⅔ of the budget and exhausted my search to this point, so I resolved to take two weeks before trying again.

On December 20th, JK Rowling made an extremely TERFy tweet and I was kinda mad. I was seeing a lot of condemnation from allies but not a lot of action. It's kinda hard to do anything to Rowling, though. Instead, I had this idea:

In honour of JK Rowling… /s I'm opening this up to non-NZ, trans people in priority. I have about $500 left in budget. Anything left at the end of the year will go to RY. If you're in need, raise your hand (DM me your paypal or something); if you know someone in need, point me.

There were a lot more people answering, and I was able to help all of them. So that was a huge success. In total, I distributed more than $1500 to more than twenty people, many of them trans, many of them in Aotearoa.

Perhaps in an economic, macro-level kinda thing, donating all $1500 to RY would have helped more people, but it's harder to see. And of course, none of the people I gave money to are registered charities, so I won't get any tax rebate for the 2019 tax year. That's alright. I mostly want to help people, and this year I did that by distributing what is to me a little bit of cash to people for whom it is a lot.

We'll see what next year brings.

Starting from 2016 I kept track of the large amounts of fanfiction I read, and occasionally original fiction, non-fiction, comics, BD, etc. For fanfiction, I also kept track of the words (for ongoing works, "at time of writing", marked "atow" in the listings), aka the length of the work, mostly because it was trivial to do so. It's much less trivial for other form factors, so I didn't keep track.

Historically, these reading archives were part of "monthly updates," but during the last migration of this website, I split all of it out. See the Progress Updates section for the rest of it.

# 2016: summary

From the future (Sep 2021): that year I started keeping records, but they weren't detailed or really organised. However, in July I'd set out the list of criteria that I would be using initially to select fics to read. Almost all of these have gone, hah, save for the religion and "get good fast" ones.

# July 2016

For the past two months and a few weeks, I had given myself the challenge to read all fics on this list of Harry Potter fanfics, provided they met my criteria and I hadn't already read them, of course. My criteria went thus:

1. The fic has to be complete, unless it was updated recently.

That was to avoid abandoned fics. I was in this for many reasons, one of which was the pleasure I get from reading good fics, and after the first few abandoned fics I decided I couldn't bear the disappointment anymore. If I had been reading just a few of them, maybe I would have; but this was a hundred fairly long fanfics — I couldn't cope with potentially dozens of unfinished plots, of having to leave characters I had gotten attached to without ever a chance of seeing the conclusion of their story.

2. The fic has to be good, fast.
I set a hard limit at five chapters before I would give up. I don't think I ever hit it. This was a list of the top 100 fanfics, so not many were truly bad, but those that were clearly announced themselves.

3. The fic has to be in English.

Obviously, if I can't read it… Nevermind that I can read French fluently, I don't particularly like reading fanfics in French. I also don't like reading programming documentation in French, and when I communicate with French developers, it is most often in English. This is a personal thing. Only one fic hit that rule, but I had made it in advance as I knew it was a possibility.

4. No Dramione. As in, the Draco/Hermione romantic relationship pairing.

This was more subtle than that: with very few exceptions, I outright ignored all fanfics that kept pretty much everything the same, but then forced the characters into the ship. I don't mind when Dramione, or something near it, actually happens organically, as a true part of the fic. For example, HPMoR, which I do regard as pretty much the Harry Potter fanfic, has a tentative romance and definite, if at first grudging, friendship between Draco and Hermione. But we get there through circumstance that makes sense, and is not forced just to get the ship there, or worse, the characters are put together in an abusive or highly dysfunctional relationship that's glamoured up by the author. Ugh. Ew. Urgh. Special mention, though, for The Nietzsche Classes, which is a great perversion of the genre, given the reader has the right (twisted) mind.

5. No sanctimonious religious bullshit.

If you think atheists don't actually have morals because morals are exclusive to people who believe in a giant cuddly Zeus lookalike who loves everyone so much he has them kill each other for no fucking reason floating in the sky… yeah, not a fan. Or, actually, as long as you keep that shit to yourself, all good. But don't put it in your fanfic like you're a preacher yelling at the gay kids that they're all going to hell because, y'know, only straight love is allowed past the gates of Paradise. There are fics like that in the list. Fuck 'em.

6. In fact, no bigotry.

That's an interesting criterion in a verse that has a lot of the plot and underlying tensions about blood bigotry. But there is a vast difference between bigotry, of any kind, being explored in the text and the author trying to push their own bigotry through fanfic. There are two very blatant opposites in this regard in the list: one had a protagonist and winning side and even the losing side, in parts, be very much convinced and fervent and true in their belief that Muggles were vermin… but the author wasn't a bigot. Or if they were, by some turn of luck, their fic at least didn't read like that. It read like a well-crafted exploration of a concept, with realistic characters who were strong in their beliefs, even though these were objectionable and horrible and wrong, and similarly for the other side.

The other fic… was very much racist. The characters on the "wrong" side of the issue (according to the author) were weak and their personalities were systematically twisted to fit the narrative; while the bigoted rhetoric was dripping from even the narrator's parts, not just the characters' thoughts. There was, totally unsurprisingly, also classic skin and ethnic racism, and homo/trans-phobias. A smorgasbord of barely-subtle hate delivered in the wrappings and trappings of a universe I love. It's a bitter experience.
Entries I found deserving were listed publicly on my Pinboard, along with quick commentary. I tried to keep the comments spoiler-free in general, but of course that's difficult to do in this context. I estimated the total amount of prose I'd read, just counting those HP fanfics, at 15-20 million words. But who's counting?

Now, after a suitably long (ha ha, who am I kidding, I like fanfic too much to leave it for long) break, I am starting the very same thing with the Naruto list, from the same source. Same criteria, except of course the Dramione clause — I think the Naruto verse doesn't really have such a contentious pairing. Naruto/Orochimaru comes to mind, but is not really any different from Harry/Voldemort, and there are good fanfics with that. I think perhaps a Mizuki/Naruto ship would qualify.

One difference between the two verses is that shinobi have little compunction against killing their foe, or dying trying, which means that if two people who hate each other meet, it often ends up bloody, rather than them not being able to do much other than shouting at or bullying each other — and potentially-fatal encounters are not very conducive to romance, of any kind. This isn't Hollywood, kids, you're not going to have people who say they hate each other fuck each other silly after five minutes on screen because of *hand waving* pent-up feelings and borderline porn.

We'll see what comes of it.

# December 2016

## Fanfiction

I'm still on a Harry Potter fanfiction bender. Just in the last month, I enjoyed:

• Presque Toujours Pur
• Wand, Knife, and Silence
• Petrification Proliferation
• Harry Potter and the Champion's Champion
• A Second Chance at Life
• Do Not Meddle in the Affairs of Wizards
• Harry Potter and the Gift of Memories
• Harry Potter and the Summer of Change
• Angry Harry and the Seven
• Harry Potter and the Four Heirs
• Harry Potter: Three to Backstep
• Harry Potter and the Sun Source
• Harry Potter and the Invincible Technomage
• In the Mind of a Scientist
• Harry Potter and the Four Founders
• Magicks of the Arcane
• What We're Fighting For
• The Power (by DarthBill)
• Bungle in the Jungle: A Harry Potter Adventure

and I'm currently enjoying the sequel of that last one. It's not looking like I'll slow down much, either!

# 2017: summary

From the future (Sep 2021): that year was the first I started keeping detailed reading records, and I gradually went from individual commentary / review to a simple listing, which format has perdured to this day.

## Fanfic

Words of fanfic read per month, based on the monthly update posts. Thus the figure is low-balled, as the posts don't include fics I read but didn't like, nor the updates for incomplete works I follow. And it of course doesn't include all I read that wasn't fanfic.

• December 2016 → January 2017: 5,399k
• January → February: 6,416k
• February → March: 2,964k
• March → April: 2,199k
• April → May: 3,325k
• May → June: 2,567k
• June → July: 1,969k
• July → August: 992k
• August → September: 1,727k
• September → October: 1,868k
• October → November: 876k
• November → December: 1,382k

(Total: 31,684k)

# January 2017

## Tourist

There have been two chapters of Tourist this period, so naturally I've commented with my impressions on both: [1], [2]. Tourist is Saf's current serial novel about AI and asexuality and depression and it's great! Not only do I love her writing, I love the themes and the way she explores them.
There are a few scenes which evoke a very particular feeling I've experienced, and it's fascinating that Saf is able to make me remember what that feeling is like just through her words. The writing was perhaps a little hesitant at first, but it's been getting better and better every chapter. The latest chapter is excellent.

## Fanfiction

All in reading order within their sections. Word counts are rounded to nearest 1 or 5k.

I explored the Harry Potter fanficdom, nearly all exclusively rated M. Contains lemons. Content warning for sexual assault in at least half the fics.

• {HP} Prodigy. Pretty good. I liked the modern take. A bit too prodigious perhaps, but that's what the title announced so who's complaining? Waaay crazy story. Do not read in more than one sitting, because there is SO MUCH SHIT HAPPENING and you'd end up horribly confused. Great stuff. {135k words}

• {HP} 893. Great story, good writing. Unsure how accurate the Japanese/Yakuza details are (but it's gotta be better than Rowling's own attempts at representing ~~asian stereotypes~~ characters), and the large amount of Japanese words and phrases embedded in the text makes it somewhat tougher to read than usual. {360k words}

• Short and sweet. Graphic at times. Cathartic ending and skillful munchkin lawyering. The last author note illustrates well the utter ridiculousness and sheer contrarian nature of some of the cretinous parts of fandom. {78k words}

• {HP/Dresden} The Denarian Trilogy: Renegade, Knight, and Lord. Good writing, great humour, manageable gore and satisfying battle scenes. Oh, and let's not forget the lore and world-building of epic proportions. The finale in Lord was exactly what was needed. There are supposedly continuations, but do yourself a favour and ignore both Variation and Apocalypse. They're unfinished, apparently abandoned, and reek of Bad Sequel Syndrome. The only worthwhile point of note is that indeed (rot13) Nznaqn Pnecragre unq n puvyq sebz ure rapbhagre jvgu Uneel. Fur'f anzrq Yvyl Pnecragre naq vf nccneragyl cerggl phgr. And with that, the last plot point of the series is concluded. {235k + 190k + 245k words}

• {HP} The Firebird Trilogy: Son, Song, and Fury. Despite claiming, as is usual, that all belongs to Rowling, this fic really only uses the characters and some general elements of plot. The universe is one of the most original I have ever read. It is an utterly different world, and extremely well built and detailed. Not content to be defined over a few departure points, it is rich with several dozen centuries of history, with a particular focus, due to the story, around Europe, the British Isles, and the Americas. But this world is dark and cruel and bleak, and that itself is a terrible understatement. Content warnings for sexual and otherwise abuse, exploitation, horror... and yet that feels like too little said. It is a very good work. It surpasses many original, published, novels I have read in the genre. But while it hooked me and wouldn't let me go as I devoured it, each further installment shook me deeper. Even with an active imagination and few of the 'moral' blockers on thought most of my peers have, this world and its inhabitants shocked me in their depravity and casual evil, but most importantly, in the way it showed every single cause and character as believing they were in the right, even while performing and perpetuating wickedness, resulting in institutionalised malevolence in every situation and at every level of society.
Even the heroes, the protagonists, the good ones, those trying to quite literally save the world from itself, are merely questionably good most of the time. It is a dark, dark work, but it is very, very good. It'll haunt me for a while. If you at all can, read it. {170k + 150k + 170k words}

• {Naruto} Life in Konoha's ANBU. Interesting format, with distinct arcs corresponding to Naruto's missions instead of a single continuity of plot. It makes the whole thing more approachable, and sets clear expectations around the progression of a particular writing stint. Unfinished, but not abandoned; there are long-term plot lines still in suspense (and I'm not talking about the canon plot lurking in the background, rather about the original plot lines that make it all interesting). {370k words at time of writing}

• {HP} To Be Loved. Interesting prompt, and although the latter plots were a bit simplistic, I liked the politics, as well as the occasional insight into Dumbledore's thinking. {95k words}

• {HP} The Bonds of Blood. I really like Darth Marrs' writing. The emotions and complexity of every character are well-rendered, the plots are well-rounded, and the suspense is heart-wrenching. Yes, I have shed some tears and my heart has hurt. It's not the best fic on this list, and it was a short fun read, despite all the feels, but it was pretty good. {190k words}

• Yeah, yeah, yet another marriage contract story. What can I say? I like romance, and this is one of the most popular and least sappy romance genres in this fandom. I found the basis for the story interesting, and the resolution swift, if perhaps a bit anti-climactic. But then again, the defeat of Voldemort was never the focus of the plot. The fic brushed on some thoughts I had while reading, namely on the influence a decade of near-worship of Harry, without a real person to mellow it, would have had on society, particularly regarding standards of appearance and, to a lesser extent, behaviour. There was a strange consistent corruption to the spelling of some words, almost as if the text had been OCR'd: notably, various 'I' or 'i' letters had been replaced with '1', but not all of them. {165k words}

• Jbern rarely disappoints. This was a fun story, but not a Humour story. Typical plot of Individual!Harry going against Dumbledore, but various atypical plot elements, mostly around making typically-always-good characters have questionable morals and make less-than-Light decisions, and making typically-always-evil characters be borderline good but forced into a bad situation. It adds an interesting realism. Few bad points, but one in particular (rot13): va gung lrf, gurer vf n cybg-rkphfr sbe ebznapr gb tb snfgre bapr Uneel naq Fhfna ner vaibyirq, ohg orsber gurer'f abg. Fb gung cneg bs gur fgbel srryf n ovg gbb snfg sbe gung ernfba. Good discussions and reasoning in dialogue; it makes a change from both naïve or overcomplex-in-hopes-of-sounding-smart plans, and from narration simply omitting planning and showing only the results. Including planning that actually feels intelligent is a great addition to any story. {340k words}

• {HP} Pride of Time. This was really really good. I love Hermione time travel stories. I love when all the characters have their own flame, their own passions and thoughts and reasons and defects. I also really love stories that were planned out before they were written, and written before they were published, as they are invariably of better quality, simply due to having been revised and cross-written.
Contains graphic sex, but gentle and hot and furious and real. Some scenes reminded me of sex I've had, see, that's how good and real it was. Also contains instances of graphic violence, but not gore, and not deeply disturbing. It covers war (both of them) and terror and torture, but it's not meant to shock and disgust. If you read nothing else on this list, I do recommend this one. {555k words}

• This list is getting way too long, but fortunately we're two days before this update goes out, and the next few fics I have queued are crackfics or small-length fics, so they should all make it into the short format list below. Well-written, and with a delightful amount of lesbian, gay, and bi characters, all presented positively, with little prejudice in this regard all around. Also remarkably good at letting everyone be happy with their various religions, while calling out cultural misappropriation. No trans or enbies that I could find, but we're getting there. I've noticed several possible references to HPMoR; generally I chalk those up to coincidence because the fics are HPMoR-antecedent, but this one is mid-to-late 2016, so it fits, and the references are multiple and fairly clear. Happy-ending fic, even more so than the other happy-ending fics I've read recently; this is positively Disney-esque in its epilogue. Although it's certainly not Disney-esque in its contents, with sex à gogo, lots of innuendo, etc. There's also violence and abuse present, this isn't an all-rainbows fest. One persistent bad point: the cavalier way horrific fates are dished onto some antagonists. Sure, it might seem like karmic retribution, but cheerfully finishing off Malfoy by having him be taken by human traffickers intent on forcing him into prostitution still leaves a bad taste in the mouth. {520k words}

Also enjoyed, but no lengthy comment:

# February 2017

## Tourist

Chapter Seven made me really happy and I felt more bubbly than usual writing its comment. Perhaps it was the ice-cream.

## Fanfiction

All in reading order within their sections. Word counts are rounded to nearest 1 or 5k.

I explored the Harry Potter fanficdom, fics rated at least T, often M. Contains occasional lemons. Fairly large proportion of Haphne.

• {HP/SW Legends} The Katarn Side. A very nice tale. While I enjoyed the story, I spent a lot of it using the material as both inspiration and warning, given I was writing my very own Star Wars fanfic at the same time. It convinced me not to write a Star Wars × Harry Potter crossover, as I do not have nearly enough knowledge about the Star Wars universe to pull it off. But it also gave me pointers at how to interpret Jedi lore and dogma, a style guide when writing droid speech, and a remedial course in basic Galactic tenets. I particularly liked the tonne of potential the crossover's universe has… many stories could have been written within, but this one was finished and done, and the rest is up to us readers' imagination. Rated T. {135k words}

• {SW Legends} The Last Jedi. Rated T but should have been M. Probably. This is Dune, but in the Star Wars universe, with more sex and less Bene Gesserit. It does a good job of showing the horror of the amount of lives lost during space conflict, when billions of lives cry out in the Force, and it has an actual effect on those sensitive.
In the original six, there is a whole three minutes spent on the destruction of a planet and millions of souls; two Jedi briefly look up and say, almost deadpan, that they felt two billion souls die… and then they go back to what they were doing, barely affected. In TFA, Starkiller Base destroys five planets — there are no estimates I could find on the population of the system, but let's put it in the high dozen billions, maybe even half a trillion — and there is even less effect on the Force-sensitive. Like, come on! Billions have just died. At least show the Force-touched having a little emotion here, maybe.

So, in The Last Jedi, our protagonist Tobin, as a youngling, witnesses the killing of "merely" thousands and feels it in the Force, and it disturbs him so greatly that he has to isolate himself and cry for hours at a time. A trained Jedi Knight is shown as losing control of her emotions in violent manner. Even a less-monstrously-sensitive Miraluka is strained and isolates herself, even though she has lived through dozens or hundreds of such events already and could be thought to be inured against the horror.

Anyway, as you may see, I liked this fic a lot. {185k words}

• {HP} Innocent, Initiate, Identity, Impose. Good story. The plots are thought-out and I really appreciate the insight into other characters. The entire Chamber of Secrets episode, in particular, is as much about Ginny and her struggle against Tom as it is about Harry and Sirius and friends… and then after that, the experience doesn't just completely not matter: there are credible post-trauma effects, and the people around don't just forget what they've been through either. Last volume is unfinished at time of writing, but updated recently. I'm hoping for a Harry/Ginny ship but not holding my breath: there's a lot of potential still. {495k + 175k + 145k + 72k words}

• Oh, that was beautifully done. It is really a work you have to read to experience. A summary would not do it justice. A description could not hope to cover its walks and turns. This is what people mean when they say you have to edit down such that only what is needed remains. It all comes together in its final chapter, triggering understanding over all the previous ones, but still leaving us with that uncertainty Hermione alluded to at the Grimmauld party: that you cannot really be sure it was all his work, or not. {90k words}

• {HP} Stepping Back. Ongoing. By the same author as Honour Thy Blood, and reusing the original personalities and names of its supporting cast, which is an amazing idea as I get to rediscover and appreciate them anew, in a much better environment and not just in flashbacks, as paintings, or in history scenes. Yes, this is a time travel fic. I like the way it's written, and although it's not perfectly beta'd (there are homophones and awkward phrasings) it's much better than HTB at that. Followed and looking forward to new chapters. {100k words at time of writing}

• {HP} Days to Come. It's adult, it's sweet, it's funny, it's lively. Not lively as in it's joyful and bounding and jumping around happily, but lively as in it's about life. You know that quote about how grown-up fiction is about English professors wondering who to fuck, and YA fiction is about overthrowing the government and picking up the pieces? This is grown-up YA fiction. The government has been overthrown, the villains have been defeated, and there's still the shit attitudes in society that caused it all in the first place — but we're working on that. There's peace.
But it's the picking up of pieces that remains to be done. Recovering from all you've lived, and figuring out where you're going. Treating all the fucked up stuff you've done and that's still in your head. Talking to people. Getting angry and pissed off and breaking up and making up and it all not being the literal end of the world. Living life. And slowly getting there. {137k words}

Also enjoyed, but no lengthy comment:

I read a lot of Bobmin's stories around the middle of January, to honour his rich contribution to the fandoms. These are all excellent:

# March 2017

## Fanfiction

All in reading order within their sections. Word counts are rounded to nearest 1 or 5k.

I finished (for the moment!) my exploration of the Harry Potter fanficdom, fics rated at least T, often M.

• Has complex plots without being over-complicated, and makes very good use of foreshadowing and of including events several chapters before they're used, without any flashback at all. Probably the best use of actually-explained time travel mechanisms I have ever read since HPMoR, and it probably surpasses that, even: used hundreds of times, over a year thrice folded on itself… our protagonist uses time travel as best she can, and all such plots are explained and/or can be reasoned out, unlike in HPMoR where we are told Dumbledore+Snape+Potter-Evans-Verres make plans and schemas and diagrams but there's only one or two times the time travel is actually shown and explained. And even with such a tool, Valeria (our OC protagonist) doesn't depend entirely on it to accomplish her plans! Manipulation, advance planning, munchkinning, and other techniques all combine together in an explosive mix, with no less than three different plots going on at the same time… at any given time. And that's just the first book! The second book is ongoing, and I am loving it. {274k + 131k words atow}

• This is brilliant. Also fun and humourous and just the right amount of flirty. Long chapters, lots of action: physical, verbal, and otherwise. There's a very hot dominant sex scene right at the start of the second chapter, preceded (at the end of the first) by a bunch of realistic in-public affection and… more. It's honestly refreshing, after decidedly adult fics that nevertheless elide any actual action, and a wide corpus of fics that deal with more or less horny teenagers instead of mature (cough as if cough) young adults. Ongoing. {60k words atow}

• {HP/SW Canon} I Still Haven't Found What I'm Looking For. I love the perspective into and out of all the characters, especially Ahsoka and Aayla. The analysis and conflicting opinions of power structures within the Galaxy, by an outsider that still has considerable experience that is more or less relatable, and by insider people both accepted and excluded from those same structures, yield interesting discussions packed with details. The "too many assumptions" thing resonated particularly with me, as it is something I struggle with when interacting with some people I know (they make assumptions that are often wrong or incomplete and draw conclusions from that, without ever pausing to ask themselves or others about them, and end up making wild accusations that are hard to defend against on the spot, as I need to think about what their assumptions could be for them to end up there; and they then take that hesitation as admission of guilt, instead of confusion or trying to organise thoughts. The entire exercise is very frustrating).
There are spectacular scenes of magic and action, the forces involved are of epic proportions, and yet humour, both by characters and by story, is pervasive. It's a good mix. Ongoing. {315k words atow}

• {HP} Behind Blue Eyes. It starts like any other teenager romance… well, the troubled ones anyway. Girl runs away from home, boy runs away from life, they meet and fall into friendship first, and then into love. But the likeness stops here. Because this is a Harry Potter story, and that means heartbreak, it means danger, it means terror, it means conflict and drama and emotions running high in all directions. It means magic, and all the problems that brings with it. It means love. And so Behind Blue Eyes finishes like few other adult romances… with pain and loss and success and that particular kind of happiness that is like the flame of a phoenix: strong and bright, reborn many times, and fiercely everlasting. {440k words}

Also enjoyed, but no lengthy comment:

# April 2017

## Original Fiction

These are the original serial fictions I follow and have read one or more chapters of this month. Genre is in brackets.

I also read Aftermath: Life Debt and Empire's End. Hmmmph.

## Fanfiction

This month, taking a bit of a break, and only keeping up with stories I follow. The ones listed below are not all I follow, but those I started before listing them in these Monthly Updates, and that I think deserve a mention.

I managed to read a few new stories after all:

And then I started earnestly into writing my fanfic, and predictably, I got to reading lots more Star Wars fics. The official pretext for this is "gotta find voices and get into the feel for writing my characters" but of course the plots and cuties are why I stay.

But seriously, encountering characters I'm writing and have been thinking about for the past few months in other fanfics is pretty cool. It really lets me figure out the edges of my characters, and think about behaviour/personality for more than just the obvious by comparing against what I'm reading. And when something really clashes, I have to think and mentally point out details that justify why I feel like the characterisation is wrong.

For example, there's a few fics-with-Rey below that have her as a sociable lovable figure who trusts people easily, like, what??? No. She's been a scavenger for fifteen years, she's canonically been betrayed by people she thought were her peers fairly early on, she lived alone for ages, she was exploited by adults around her until she got strong enough to fuck off on her own, and one of the only beings who halfway protected her and gave her the opportunity to earn for the past ten years called the Empire on her at the drop of a hat (this is portrayed somewhat differently in the novelisations, but while the details change a bit, the broad strokes remain the same). She's not going to be a bubbly extrovert now. What is wrong with you.

# May 2017

## Fanfiction

I'm going on a strong Star Wars (all eras combined) bender. I avoid Kylo/Rey (extra ew) and Kylo/Hux fics because ew, and I like time travel fix-it fics. Mostly I've been impressed at the depth and breadth of the verses crafted, and the large currency held by Queer-abundant fics. It "makes sense" to have genderqueer and varied sexualities in a universe where there are several dozen species cohabiting with each other among a few thousand worlds (sometimes the scale of it all boggles the mind), but it's heartwarming every time.
I'll certainly be adding a bunch of references and homages to some of the best fics out there into my fic, mostly in passing references to names and places. If you've been in the fanficdom, you should be able to pick out a few!

Something that's been interesting to notice is fics and authors that were influenced by the Re-Entry fanfic epic. You see, in Canon (and Legends) Coruscant has a 24-hour day. In the Re-Entry fanon, though, Coruscant has a 26-hour day. Well, in many non-Re-Entry fics that should AU from Canon or Legends, and that state it as such, you get mentions of "25th hour" or "there isn't more than 26 hours in the day" while in the Coruscanti Jedi Temple (or elsewhere on planet)!

Exceptional fics within those I read this month:

• {SW Legends} The entire Re-Entry and Journey of the Whills corpus. An epic masterpiece. {50k + 1k + 11k + 37k + 18k + 30k + 32k + 27k + 25k + 23k + 25k + 53k + 55k + 2k + 76k + 16k + 28k + 21k + 24k + 7k + 1k + 1k + 17k + 33k + 3k + 6k + 10k + 33k + 9k + 10k + 6k + 36k + 14k + 6k + 7k + 5k + 8k + 10k + 11k + 13k + 12k + 7k + 7k + 10k + 9k + 19k + 8k + 13k + 13k + 12k + 8k + 13k + 11k + 15k + 12k + 7k + 10k + 9k + 20k + 18k + 11k + 21k + 26k + 20k + 21k + 24k + 27k + 33k + 3k + 79k + 22k + 24k + 33k + 2k + 11k words atow}

• Excellent fic, both in the romance/ship side and in the PTSD side. Notably, it actually gets that Rey comes from a desert world where water is precious and food was scarce, and that she is used to working and not used to having food be guaranteed. Similarly, that Finn was a trooper in an extremely harsh and conditioned army, where even trivial-for-bacta medical issues were more often than not resolved with blaster-aided disappearances. Also great at non-binary and queer genders and sexualities in a wider universe, without making it be a super minor discreet story element (like some novels do, like they want to have it but also are afraid to bring it front and center). {26k + 26k + 4k words atow}

• {SW} heart in a headlock. Well-written, really, and has good emotions… I would have liked Rogue One to go more like this, I think. The depiction of Leia in this fic helped me re-evaluate a plot point that always bothered me in my fic, and that I'm going to change now. {53k words atow}

• Has a fairly unique narrative style and form that I'm going to liberally borrow from, and not just because it seems tailor-made for my style and cadence of writing. I had been despairing of the glueing work I'd have to do to make my fic work, and the rewriting of scenes to comply with One Narrative, but this is much better. Story-wise, this is well done: in plot, which is organised and yet still surprising; in style, which is "crack that takes itself seriously" and manages that perfectly; in humour, wordplay and plot irony and situational alike; and in character development, as well as characters stubbornly refusing to change, for good and bad. {107k words}

• {LotR} Sansûkh. Another incredible epic fanfic. I really like both the concept and the execution, and have healthy respect for the huge amounts of research and outright creation that must have gone into it. The character growth shown and told for everyone involved is first-rate. The use of Sindarin and Khuzdul is appropriate and not overbearing, which is very well done indeed. And the language style itself, that manner of speaking… I love that. (Also the massive amounts of recursive fanart displayed throughout is both heartening and frankly quite cute.) AND DID I MENTION the vast diversity of cast!
Gay/bi are merely common, colour is varied and fine, neuro-atypicals are well-represented, and a/bi/trans-gender characters! That's right, with a plural! Not just two, either. Now that's greatness. {518k words atow}

# June 2017

## Comics

Found and binged Wilde Life, about a kind? lost? man running from his old life to a tiny town in Oklahoma... only to find himself embroiled in the affairs of a typical teenager who's also a werewolf, at least three witches, talking animals, asshole bears, and the odd ghost or two. It's beautifully drawn, cute and amusing and seriously good.

## Fanfiction

Apparently I'm trying to read the entire Star Wars fanficdom on AO3. Or at least, those that catch my eye in the listings. Still firmly avoiding certain eww pairings. Briefly dipped into Thor fics as well. A lot of (delicious) NSFW.

# July 2017

## Fanfiction

Works I recommend will now be bolded, and I've gone back to all other Monthly Updates to highlight recommendations there, too.

# August 2017

## Fanfiction

As usual, only "good enough to be listed" newly-read fics are, well, listed, and not everything I've read during the month. Especially recommended fics are in bold.

• {HP} the family potter. {10k words}
• {HP} There May Be Some Collateral Damage. {61k words}
• {HP/Buffy} It's All Relative on the Hellmouth. {112k words}
• {LotR/HP} The Shadow of Angmar. {154k words atow}
• {HP} The Black Prince. {139k words atow}
• {HP} A Long Journey Home. The premise is the most interesting I've read in a while, even in the already crowded genre of time travel, and even counting time travel closing-the-loop fics of the same order of magnitude on the time scale. (By the way, another very good fic which comes close to the premise and scale is one recommended last month, the Of a Linear Circle series.) However, A Long Journey Home shines by its radically different story-telling: a sequence of extended vignettes in the long life of one Jasmine Potter, showing many stories and histories, each standing alone with grace and yet much richer for its part in the whole. The theories and explorations its characters go through, both personally and of magic, are rendered beautifully and yet without glamouring or wonderwashing, and without the praise-science-preach-science approach of 'conventional' rationalist fiction. And yes, this is rationalist fiction, but embedded in great writing as an integral part of the work, instead of using the work as a mere transport. {203k words atow}
• {HP} Applied Cultural Anthropology, or How I Learned to Stop Worrying and Love the Cruciatus. {162k words atow}
• {GoT} Blackfish Out Of Water. {97k words atow}
• {SW} Ad Utrumque Paratus. {19k words atow}
• {SW} people think the strangest things. {4k words}
• {SW} Lost Reflections. {31k words atow}

# September 2017

## Fanfiction

As usual, only "good enough to be listed" newly-read fics are, well, listed, and not everything I've read during the month. Especially recommended fics are in bold.

# November 2017

## Fanfiction

This month is a big slow down. I didn't really read much, but focused more on various other things. Also in mid-October, my brother and his S.O. came to visit, and they're both incredible and funny and, long story short, I didn't really have any time to read. After that, NaNoWriMo started.

# 2018: summary

## Fanfic

Words of fanfic read per month, based on the monthly update posts. Thus the figure is low-balled, as the posts don't include fics I read but didn't like, nor the updates for incomplete works I follow.
And it of course doesn't include all I read that wasn't fanfic (yet, but I'm working on that for next year!). After doing some sampling during the year, I have measured that I read about 120k words of fanfic updates per week.

• December 2017 → January 2018: 2,204k
• January → February: 2,090k
• February → March: 62k
• March → April: 779k
• April → May: 1,099k
• May → June: 2,400k
• June → July: 3,486k
• July → August: 232k
• August → September: 1,546k
• September → October: 1,146k
• October → November: 677k
• November → December: 112k

(Total: 22 million words, down from 32 last year.)

# January 2018

## Fanfiction

I made a commitment to not start reading as many fanfics this year, instead going for (an overabundance of) physical books.

From the future (Sep 2021): yeeah... that didn't pan out.

# March 2018

## Fanfiction

I've almost completely stopped reading new fanfic. These two slipped through:

# July 2018

## Fanfiction

A lot of good stuff this month.

# September 2018

## Fanfiction

I finished my binge of the H/Hr tag. (I didn't read all stories within, but did go through all 1300 of them and select those I felt would be good for reading... that took a few months.)

# November 2018

## Fanfiction

I think I've finally closed off the fanfic-devouring void in my heart. Now it's more like a trickle. I'm slowly finishing off a list of good fics I'd saved up "for later" and I'm not adding any, so eventually I'll get to Fanfic Zero.

# December 2018

## Fanfiction

A bunch of Marvel and Daredevil adventure and romance fiction this time, as well as some SW/HP crossovers. As I get closer to Fanfic Zero, I'm also ramping up Actual Book reading. More on that in the final update of the year.

# 2019: summary

## Fanfic

Words of fanfic read per month, based on the monthly update posts. Thus the figure is low-balled, as the posts don't include fics I read but didn't like, nor the updates for incomplete works I follow. This year, instead of updating mid-month, I updated at the end of each month, so the month's total is really for that month, rather than staggered.

• December 2018 → January 2019: 1,536k
• February: 306k
• March: 3,302k
• April: 1,102k
• May: 446k
• June: 667k
• July: 580k
• August: 322k
• September: 402k
• October: 1,764k
• November: 2,341k
• December: 365k

(Total: 13 million words, down from 22 last year.)

# February 2019

## Russian Doll

Yes, the Netflix show. A short note, but I absolutely loved this.

## Fiction

So what I did this month is finish all my fics in instance, and then I went through an entire collection and opened all fics that interested me in tabs, and then spent the rest of the month working my way through "making my browser usable again", which was kinda completely unnecessary, but also I read some good fics!

# April 2019

## Fanfiction

Some high-quality fic this month!

# July 2019

## Fanfiction

Also see this Twitter thread where I essentially live-tweet the fic browsing process. If I have fun doing it, I might keep at it and re-link the month's first in each update.

I've also been reading and voting on this year's Hugo Awards slate. I haven't read everything, unfortunately, but I had a lot of fun and tears with the shorter works. They were hard to rank!

# 2020: summary

From the future (Sep 2021): COVID–19 disrupted my habits, though they were already falling apart in regard to these posts being at all regular.

• January 2020: 316k
• February: 342k
• March → June: 2,789k
• July: 1,812k
• August → December: 5,633k

(Total: 11 million words, down from 13 last year.)
# 2021: summary

Year is still in progress, but I'm looking to do two updates only.

• January 2021 → May: 7,672k
• June → December: ???k

(Total: ?? million words, up from 11 last year.)

# First semester 2021

## Fanfiction

This is every fic I've read (and liked, as per usual) from January through mid-May 2021.

# Second semester 2021

## Fanfiction

This is every fic I've read (and liked, as per usual) from mid-May through November 2021.

# Deprecated content

There are three categories of so-called deprecated content:

1. (Mostly technical) writing that's just very outdated
2. (Mostly personal) writing that I don't hold to anymore

Both of these are available here, with the disclaimer inherent in the description. However:

3. Content I dislike so much or that is so irrelevant it's no longer published here

That still exists in the git history, though.

Prior to today, Saturday 13th February 2016, all my works were, by default, released in the Public Domain under the Creative Commons CC0 1.0. This is no longer the case. While many of my works still remain in the public domain, and a large amount are under permissive licenses, new works are not automatically released. All versions of the Blanket License are cancelled, nulled, voided, revoked.

My mind on licensing has changed since that day years ago when I created my blanket license. As such, this.

# Modern systems languages

6 December 2013

A year or so ago, I started getting interested in two new, modern systems programming languages: Rust and Go. After playing around a bit with both, I chose Go to program a few things I had floating around at the time: plong, a pseudo-p2p server which is essentially a WebSocket relay and signaling server; poële, a very simple queue / task processor using Redis; and double-map, a new kind of data structure which solves the problem of slow hashmap search for specific scenarios.

Enthused by my success with a fast, modern, typed, compiled, systems language, I started having ideas of more ambitious things: imaku-latte was about a novel DE/WM for X11, thesides was about an EC2-like service for micro tasks and scripts. More recently, aldaron is a tree editor, and pippo is my long-standing attempt at designing an ever-better DBMS (current features include graphs, a content-addressable low-level block store, powerful abstraction capabilities, transparent versioning and COW, stream writes, and being truly distributed).

thesides died for various reasons. The others didn't, really; they were aborted and/or infinitely delayed because Go is too high level. It doesn't allow easy low-level memory manipulation, the C interface is annoying both ways, and there are other nitpicks. I still like Go, but I think I'm going to try out Rust for these projects where low-level access, where a real C replacement, is needed. We'll see how that goes.

# Soft limits and meaningful content cuts

8 November 2013

On Quora, and many other websites, long answers are cut off with a "(more)" button/link which immediately displays the rest of the answer in-place. The cut-off threshold seems to be a hard limit on words or characters.

Sometimes it works: [screenshot]

Sometimes it seems a bit silly to just hide this little content: [screenshot]

I think there is a way to make this better:

• Use height, not length. Especially in this context! Length only has meaning if it is directly correlated to height, as is the case with blocks of text without breaks. Images make the height grow by a huge factor, as a link only a few dozen characters long can add hundreds of pixels.

• Use a soft limit.
Instead of cutting at a hard, say, 800px, put an error margin on there, say ±100px. Thus, if the content is 850px high, don't cut it off, but if it is 1000px high, do.

• Make cuts more meaningful. Given the above rule, consider this: the cut-off is at 800±100px, and the content height is 905px. In this case, the cut-off would be at 800px, leaving a measly 105px below the fold. Avoid situations like this by moving the cut-off to make below-the-fold content more meaningful, e.g. at least 250px. Here, we would have a final cut-off at 655px.

• "But computing height is difficult server-side, and we don't want to do it client-side!" No it's not. You don't need to render the page to calculate height. Yes, rendering is more precise, but you can estimate height fairly easily, especially if you have good control over your styles.

For text: you know what your font-size is. You have a good amount of content so you can easily compute (once!) the average number of words per line. You can easily count the number of line breaks and line rules. Thus you can quickly estimate the total text height.

For images: either take the same route and compute (once) the average height of images, or compute it per-image (maybe you host images, and create thumbs + metadata to be able to optimise loading times; in that case you could pull the image height straight from your own service).

Combining both, you can obtain an estimate of the content height of any given article or text, and apply cut-offs then. And of course, the results can be cached!

(Screenshots are from this Quora article, which you should really have a look at, at least just for the beautiful images of the world.)
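As a postscript, the whole scheme in one place, as a sketch. It's written as Rust, but read it as pseudocode; all the constants are illustrative and would be tuned against real styles and content:

```rust
// Illustrative sketch only: tune the constants to your own corpus.
const LINE_HEIGHT_PX: u32 = 24;     // font-size × line-height
const AVG_WORDS_PER_LINE: u32 = 14; // computed once over the content

/// Rough estimate: words flow into lines, breaks add lines, images add
/// their own (known or averaged) heights.
fn estimate_height(words: u32, line_breaks: u32, image_heights: &[u32]) -> u32 {
    let text_lines = words / AVG_WORDS_PER_LINE + line_breaks;
    text_lines * LINE_HEIGHT_PX + image_heights.iter().sum::<u32>()
}

/// Where to cut, if at all. `None` means show everything.
fn cut_off(height: u32, limit: u32, margin: u32, min_below: u32) -> Option<u32> {
    // Soft limit: within the margin, don't bother cutting.
    if height <= limit + margin {
        return None;
    }
    // Meaningful cuts: if a cut at `limit` would leave too little below
    // the fold, move the cut up so at least `min_below` remains.
    let below = height - limit;
    if below < min_below {
        Some(height - min_below)
    } else {
        Some(limit)
    }
}

fn main() {
    // The worked example from the article: 905px of content against an
    // 800±100px limit, keeping at least 250px below the fold → cut at 655px.
    assert_eq!(cut_off(905, 800, 100, 250), Some(655));
    assert_eq!(cut_off(850, 800, 100, 250), None);
}
```

Feeding the worked example from above through it (height 905, limit 800, margin 100, minimum 250) yields a cut at 655px, same as computed by hand.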
# Bruteforcing the Devil

5 June 2014

Preface, of a sort, by mako himself:

Once, there was a circle of artists and wizards. The convocation called itself #Merveilles. For a long time, most of the members of the cadre could be distinguished only by their names. The most connected among them took on varied faces of animals, but they were only a few.

One day, a faun arrived at the convocation. Lamenting that so many of its members looked exactly the same, it decided to use its wild generative majykks to weave a spell that would illuminate all members of the circle equally. By the name of the subject, the majic would produce a number. By that number, it would resolve a face. And so it did. The faun's enchantment brought joy to the circle, and for a time, it was good.

Many moons later, the faun, reviewing its work, realized that the m4g1kz it had cast had an imperfection. It was struck with a vision of the eventual coming of a strange being. By the name of that devil, the mædʒIkz would produce

6 6 6

From 666, it would resolve no face at all

Hearing the prognostication, a few set out in search of the name of the devil who hath no face, so that they may take it for their own.

This is where our story begins...

On the 3rd of June — it was Tuesday — mako gave me an interesting challenge. He'd been trying for a while now to get me into a particular community, but this is what actually brought me over:

• { mako }: If you ever decide to introduce yourself to #merveilles, find a name that hashes to 666 according to the following

```js
String.prototype.hashCode = function(){
    var hash = 0, i, char;
    if (this.length == 0) return hash;
    for (var i = 0, l = this.length; i < l; i++) {
        char = this.charCodeAt(i);
        hash = ((hash<<5)-hash)+char;
        hash |= 0; // Convert to 32bit integer
    }
    return hash;
};
```

• { mako }: It'll resolve to a BLANK ICON. Nobody else will have one. It will be eerie and awesome.

Huh. Well, okay. I queried for the exact parameters:

• It has to be between 1 and 30 characters.
• It has to be ascii. Actually, the exact alphabet is `qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890_-[]{}^|\`.
• It has to hash down to exactly 666.

After briefly considering a reverse-engineering approach, I created a quick prototype in NodeJS to bruteforce the hash using sequential strings. I set it to run, but noticed pretty quickly (i.e. my desktop environment crashed) that it was using too much memory:

• { passcod }: I've brute-forced the hash up to 4chars and ~50% of 5chars, and node ate up nearly 2Gb of swap in under a minute. Now I'm considering rewording it in rust to have memory guarantees ;)

After looking around the web (turns out that googling "NodeJS uses too much memory" returns a lot of garbage), I found a partial solution:

• { passcod }: I'm forcing gc in node every 1000 tries, and it's slowed down the memory uptake, but it's still rising and about 1% (of 4G) every 20s :(

Clearly that wasn't the right way. Still, I persevered, switching my method to use "the bogo approach": generating random strings of random length. The memory usage was still insane, but I hoped to have enough data in a short while to do some analysing:

• { passcod }: Right, after 2 million hashes taken from random strings of random(1..20) length, the only ones that have three characters {sic, I meant digits} are single chars. These obviously don't go as high as 666.
• { passcod }: I conclude it's either rare or impossible
• { mako }: I wont feel comfortable until we can conclude it's impossible. As far as we know, the devil is still out there, hiding.

At this point, mako decided to write one in C++. Meanwhile, I wondered about outside-the-square solutions: maybe this supported UTF-8! Nope:

• { mako }: No unicode allowed.

Oh well. Dinner awaited. That evening, I got back to mako finding C++ less than yielding:

• { mako }: I almost just caused an access violation.
• { mako }: Already blowing my fingers off.
• { mako }: Arg, exposing raw pointers. I'm so out of practice.

Out of curiosity and a stray "There must be a better way" thought, I started implementing it in Rust. Ninety minutes later, I had this:

```rust
mod random {
    use std::string::String;
    use std::rand;
    use std::rand::Rng;

    fn chr() -> Option<&u8> {
        let alphabet = bytes!("qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890_-[]{}^|\\");
        // pick a random byte out of the alphabet
        rand::task_rng().choose(alphabet)
    }

    pub fn string(len: uint) -> String {
        let mut result = String::with_capacity(len);
        for _ in range(0, len) {
            result.push_char(match chr() {
                Some(c) => *c,
                None => 48 as u8
            } as char);
        }
        result
    }
}

fn hash_code(input: &String) -> i32 {
    let mut hash: i32 = 0;
    if input.len() == 0 { 0 } else {
        for c in input.as_slice().chars() {
            hash = ((hash << 5) - hash) + (c as i32);
            hash |= 0;
        }
        hash
    }
}

fn main() {
    let strg = random::string(4);
    let hash = hash_code(&strg);
    println!("{}: {}", strg, hash)
}
```

• { passcod }: Mostly written because when you gave your C++, I realised I couldn't read C++

I started by benchmarking it against the Node version, and found that it was 3x faster unoptimised, ~10x optimised, and had constant memory usage. A definite improvement!

• { passcod }: 200,000 iter/s

I attempted to make it multi-threaded, but benched it to be 2-3x slower (probably my fault, not Rust's). It didn't matter, though, as the bogo approach meant I could run two instances of the program with no semantic difference from doing it with threads.
It was getting late, so I used a combination of the t Twitter CLI and bash scripting to let it run on my Linode VPS during the night and have it tweet me if it found anything. And then it was sleep tiem.
In the morning, I woke up to two things.
1. First, I got onto Hangouts and found via mako that “Wally”, a #merveilles member (I think?), had jumped on board and was using a multi-threaded approach on peh’s 8-core machine. Things were getting serious. I enquired about his hashing speed (I admit I was starting to be quite proud of myself at this moment), and mentioned that at the rate I was going, I would have computed about 4 billion hashes up ‘til then.
2. I checked Twitter about 15 minutes later, and discovered that my agitation was, really, unwarranted:
• { passcod }: I’ve got it
• { passcod }: I’ve got TWO
• { mako }: Dear god.
They didn’t look good, though, ascii barf more than anything: 8XKf2WAkny|CFAZi_vQn and LcBqgVVPOSEkdIB7BZlVO.
• { mako }: OH. We could even use my godname generator.
Mako’s current occupation is developing a “platform for encouraging writers to explore a new format for fiction and collaborative world-building.” The mentioned godname generator creates a random, pronounceable name with a music that is quite particular, like “sonoellus” or “tsaleusos” or “thoruh” or “seposh”. It uses weightings for each letter, which is fine for one-off name creation, but too time-consuming for my brute-forcing purposes, so I discarded that part. I measured my implementation to be generating 64k names per second on my machine, which wasn’t too bad. Slower than pure random, but that was expected.
Meanwhile, I decided to go have a look on #merveilles, and used the second name I got as a nick. I got mistaken for a bot! and just as I was about to give some pointers as to why my name was so weird, the connection dropped:
• LcBqgVVPOSEkdIB7BZlVO has quit (Client Quit)
< Preston > NOOO the mystery
Wednesday afternoon, “Wally” gave up. That was a bit of a relief, although the competition had been interesting.
Each one of these icons was produced using an int32 seed. The 666 seed produces a blank icon.
Wednesday evening, I had a good implementation but previous results told me to only expect an answer after five days of computation. I wondered about alternatives:
• { passcod }: I’m half tempted to buy a few hours of highcpu AWS compute power and get it done nowish instead
After a bit of research, I decided to just go for it. I set myself a $50 spending limit, which gave me about 24 hours of compute on an instance with 32 virtual cores, each about as powerful as one of my Linode’s cores. I set it to run with 31 merhash programs running in parallel:
And went comatose again.
• { mako }: That was the best sleep I’ve had in a while.
• { passcod }: Well, godname 666 is not looking good.
• { mako }: Nothing ?
• { passcod }: I got a cumulative 16 billion godnames generated and hashed, and nothing.
Uh oh. It turned out several things had gone wrong:
• I had set the random length generator wrong, and generated only 30-char-long godnames.
• There was a bug in my implementation where the godname algorithm would get stuck in an infinite loop producing nothing.
• My result collecting technique was sub-par.
After getting back from class, I set upon fixing those, which took about an hour, and I now had about 10 remaining hours of compute time. I also made a change to improve the yield:
• { passcod }: Also I’ve modified it so each instance appends any result it finds to a file and continues running.
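That change was tiny; a sketch of it in current Rust, with an illustrative file name:

```rust
use std::fs::OpenOptions;
use std::io::Write;

// On a hit, append the name to a results file and keep going,
// instead of printing one result and exiting.
fn record_hit(name: &str) -> std::io::Result<()> {
    let mut out = OpenOptions::new()
        .create(true)
        .append(true)
        .open("devil-names.txt")?;
    writeln!(out, "{}", name)
}
```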
I also created two versions of the program: one with godnames, the other with a reduced-alphabet random string generator, in the hopes of still getting readable names. I set both to run on 15 cores each. And I waited.
After just over 4 billion hashes, I got my first devil name: shoruzorhorheugogeuzudeazaeon. Go on, try to pronounce it. Isn’t it absolutely hilarious? SUCCESS!
Even better: another, shorter, sweeter name appeared just six minutes later: heapaepaemnunea. SUCCESS!
After this, I got two more: somureamorumnaemeurheuzeuon and vqsjsawoqgygbrziydpkyyinfmfvw. SUCCESS!
As I speak, I have computed another 12 billion hashes, or a grand total of about 35 billion, and no more names have appeared. I’m now going to shut it down, close my wallet, and use heapaepaemnunea as my #merveilles nom de plume. It’s been fun!
…and I can now assume the face of the Devil.
# Lighting up Rust with Neon
5 March 2017
Frustrated by the lack of easy-to-use web frameworks that support Rust stable, I decided to use a stack that I know and love, and that has an absolutely humongous amount of middleware that would do everything I wanted: Node.js. But I wasn’t about to throw all the Rust code I’d written away! Fortunately, that’s where Neon comes in. Neon is a library and build tool that makes it drop-dead easy to expose a Rust API as a Node.js module. Once I had that binding working and tested, it was a breeze to use it in an Express server. Nevertheless, I hit a few gotchas, patterns that weren’t obvious at first:
## Hooking up a Neon class
The Neon documentation is a bit lacking right now. It’s still Rust documentation, which is hands down the best auto-generated documentation I’ve used; in fact, it being so good is the reason why I use it a lot. Even without taking the time to write great documentation, the auto-generated, no-effort, there-by-default parts are a boon to explore and figure out a Rust API. Still, for this I had to look at the neon tests for example code. Then I derived a pattern that I use for all such classes:
If I have a Neon class JsFoo declared in jsfoo.rs:

```rust
declare_types! {
    pub class JsFoo for Foo {
        init(call) {
            // use one argument...
        }
    }
}
```

I’d put this at the bottom of the file (making sure to have the right number of arguments — that caught me out once or twice):

```rust
pub fn new(call: Call) -> JsResult<JsFoo> {
    let mut scope = call.scope;
    let args = call.arguments;

    // pass through one argument
    let arg0 = args.require(scope, 0)?;

    let foo_class = JsFoo::class(scope)?;
    let foo_ctor = foo_class.constructor(scope)?;
    foo_ctor.construct(scope, vec![arg0])
}
```

And then in lib.rs, to hook it up to the module, it’s just a simple:

```rust
mod foo;

register_module!(m, {
    m.export("foo", foo::new)?;
    // the other exports...
    Ok(())
});
```

## Constructing Neon classes with Rust data
There’s a fairly common situation I ran into: I had a method on a Neon class or a function on the module where I wanted to return another Neon class instance, filled with data generated in the Rust code. In pure Rust, there’s typically several ways to construct a struct. But in JS, there’s just the one constructor, and in Neon it’s even worse: there’s just the one constructor, and it can only take JS types as inputs. The first thing I thought of was to modify the underlying Rust type directly.
So down I went reading through Neon source code, trying to figure out how I could either replace the Rust value of a constructed Neon class… or implement an entirely new constructor, by hand, that would build the class but with Rust data instead of JS data. Turns out, this first one was the right idea… but the wrong, over-complicated approach.
This pattern has two sides:
1. I have to make sure that my Neon class constructor is cheap, has no side effects, and does not depend on anything else than what I pass in. I had at some point a constructor that would do disk I/O based on paths passed in as arguments. That’s a no-go. I replaced it with a constructor that only built up the underlying Rust type without doing anything else, and a load() function that would do the I/O and spit out a modified class instance using this very pattern.
2. I have to wrap the target type in a tuple struct. That tuple struct needs to have its field marked pub, and that’s what I target the Neon class at.

```rust
struct WrapFoo(pub Foo);

declare_types! {
    pub class JsFoo for WrapFoo {
        init(call) { ... }
    }
}
```

With those two things done, the remaining bit is simple, especially combined with the previous pattern:

```rust
fn load(call: Call) -> JsResult<JsList> {
    let scope = call.scope;
    let args = call.arguments;
    let base = args.require(scope, 0)?.check::<JsString>()?.value();
    // ... load `posts` from `base` here (elided from this excerpt) ...

    let farg = vec![JsArray::new(scope, 0)];

    // Look at the jslist::new! That's the pattern shown just before,
    // here used to construct a Neon class within a Rust function.
    let mut list = JsFunction::new(scope, jslist::new)?
        .call(scope, JsNull::new(), farg)?
        .check::<JsList>()?;

    // Here's the important bit!
    // See how the tuple struct wrapping allows you to replace the
    // underlying Rust value? That's the entire trick!
    list.grab(|list| list.0 = List::new(posts));

    Ok(list)
}
```

### Hiding the wrapping type
When I was writing tests for my binding, I found that typeof new List() would return 'WrapList'… not what I want! I’d rather expose the “nice” name of the struct. So, instead of the above, I bound the actual Rust struct to a different name, and named the wrapping struct as the original name, like this:

```rust
use list::List as RustList;

struct List(pub RustList);

declare_types! {
    pub class JsList for List {
        init(call) { ... }
    }
}
```

and now this works: typeof new List() === 'List'.
## Making a JsArray
This is much more straightforward, but I kept hitting it and then having to either figure it out from the compiler messages and documentation all over again, or refer to previous code. Occasionally I want to create a JsArray. But there’s no easy JsArray::from_vec(). I have to create a JsArray with a size, then fill it up with values, taking care to set the right indices. And there’s also a lot of boilerplate to make sure to use the correct variant of the .set() method, the one with two parameters instead of three.

```rust
// Object contains the .set() method on JsArray.
use neon::js::{JsArray, Object};
// Required to use .deref_mut().
use std::ops::DerefMut;

// This is assuming we're starting with a Vec<Handle> named vec.
// If that's not the case, adjust the JsArray length and the .set().
let mut array: Handle<JsArray> = JsArray::new(scope, vec.len() as u32);

// The extra scoping block is necessary to avoid mut/immut clashing.
{
    // Here's where we borrow mutably, this is necessary to get access
    // to the underlying JsArray from the Handle, as the JsArray has
    // the 2-parameter `.set(key: Key, value: Value)` method.
    let raw_array = array.deref_mut();

    // We have to do our own indexing.
    let mut i: u32 = 0;
    for val in vec {
        // Setting an array value might fail! So we have to handle that.
        raw_array.set(i, val)?;
        i += 1;
    }
}

// Here's where we borrow immutably, as well as return the right type.
Ok(array.as_value(scope))
```

That’s that for now!
# SIDEB
18 November 2014
(The title is from the trucker radio in City of Angles. SIDEB for sidebar, where you can have semi-private or off-topic conversations.)
Two days ago today, I had a dream. It was mundane, simple, just a casual dream except for this one quality. I was driving along, going just a bit above the tolerance. Reckless, I know. I remember thinking that in the dream. I remember seeing a cop car in the rear-view mirror, feeling scared. I remember seeing red in my forward vision while still watching the cop car in the mirror, switching focus, it’s a red light, slowing down, left lane, cop car in the right lane, next to me, not even done slowing down when it turns back to green and we go our separate ways. Cops turn right, I move on straight ahead. I remember feeling relief. I don’t remember where I was going, what the road was, where I had come from, what time it was. It was day, sure, and there were probably clouds in the sky, because I don’t remember any shadows, either. Also, at the next intersection, I looked out the window, saw a blur, and woke up. It was a dream. It was a vivid dream.
Today, as I was thinking of this prose, I was driving. I was coming up to an intersection, going right, and there was a cop car in front of me, going left. And I remembered that scene, and I remembered being a bit scared, and I thought it was funny that everything was reversed. And I didn’t realise it had been a dream, that that scene I’d remembered, those feelings that had been conjured… they were from a dream. It didn’t happen. This realisation was what made me think of and write this.
It’s not the first time this has happened. In fact, it’s a pretty frequent occurrence in my life. See, I’ve always dreamed differently. Up until a few years ago, in 2009 to be precise, I didn’t know it. I thought everyone had dreams like I had. Lucid dreams where I knew I was dreaming. Controllable dreams where I could control any or even all aspects of the dream. Vivid dreams which could be so realistic as to be indistinguishable from reality. Or any combination of the above.
That dream I told above is pretty innocuous. It’s a common scene. Nothing wrong with remembering it, even if I don’t realise immediately, or at all, that it didn’t happen. That I caught it at all is pretty rare, actually. Dreamed memories that banal usually just come and go undetected, until I remember having remembered them sometime later when I’m feeling introspective and realise then. But there are scarier ones. More bothersome ones.
Before 2009, I didn’t know about the term ‘lucid dream’. I didn’t know only a small percentage of the population had them, and an even smaller portion had them regularly or often enough that they could play with them. To me, it was normal. I’ve been dreaming ‘normally’, or lucid, or vivid, or in control, or any two, or all three, I’ve been dreaming like that since forever. My earliest memory of a dream was at the age of four, and I estimate it to have scored about 4 on the lucid scale, and 8 on the vivid one. No idea about the control.
Since 2009, I have experimented with my dreaming, and introspected about its influence on my life. I didn’t immediately realise the problem with my vivid dreams.
In 2011, first year of university, one of the reasons I failed it, a reason I never voiced before, is that I spent the entire year dreaming. I was away from home, away from obligations, and classes seemed easy enough at the start, so I started sleeping in. I fucked my sleep cycle so bad during that year that it never recovered. I stayed awake for huge lengths of time, trying to find out the limits. There’s a saying that you die if you stay awake for more than 100 hours. On at least two occasions, I stayed awake for between 120 and 170 hours, I kinda lost track at those points. I’m still alive.
About every four weeks, I had lucid dreams. Sometimes just one, but often every night for a week. And then I dreamed non-lucid for three, or stayed awake, or something. I had really weird abstract dreams, I had dreams that spanned minutes, hours, days, weeks, years, decades. The longest in-dream time I ever experienced, without it being just a time-skip, was four hundred years. The shortest time was five seconds, going in slow-mo towards a car crash, but also really fast, and dreaming the perception of everybody on and around the scene. It’s one of the things my brain does sometimes: on good days, when I’m really tired, just before going to sleep, I can occasionally play every single instrument in a symphonic orchestra at the same time. It’s beautiful, and I control it, or at least I have that illusion of control, I’ve never been able to determine which was which.
When I was, I think, from what I pieced together from memories and discussions and fragments, about 7, in the summer holidays, probably July or August, I had a particular experience. I was on holiday on an island, nothing fancy, just a small island to the south or west of France, with a few apartment blocks and a quay with those big metal things boats put their ropes on. I befriended a girl and we’d run around on the quays and at one point we fished from the quay, without a rod or anything, just a fishing line and some bait. Worms. I remember looking at the girl putting a worm on a hook and throwing it in, feeling a bit disgusted and fascinated at the same time. I remember having dinner with the girl and her parents. I remember trivial things like which storey their apartment was on and washing my hands before eating. I don’t remember the girl’s name. Something starting with an M, and an A, and an O, maybe Margo but that’s just a name from my brain right now, it’s not from the memory of that experience. I remember staying on the island and playing with that girl for a week or so.
When I was 7, I was quite shy and my head was often in the clouds. To those who knew me then, that is probably a significant understatement. I thought my parents knew I’d gone on holiday, I mean, it was logical, I was 7, they had to have been there with me or sent me there with someone or something. Maybe we’d sailed there? Maybe the girl’s parents were friends with my parents? But it didn’t matter to my 7 year old mind. I just assumed they knew, and because I assumed they’d been there, or that we’d at least driven back home together, I didn’t even think of asking or saying anything about it. The next week I was probably back to going to a friend’s house for a sleepover and playing and spouting enough obviously-straight-out-of-my-imagination babble that anything about fishing with a girl on an island was lost in the mingle.
I asked several years later. Almost a decade. It was one of those, “Do you remember…” but Mom didn’t remember. I gave more and more details but no.
Mom didn’t remember anything like that happening. She told me of places I did go, when I was 6 or 7 or 8, places I went to, places we went to, people they were friends with that had children I met… none of it matched. So I was very confused, but I chalked it up to random weirdness and didn’t think more of it for another few years. It was before 2009, before I learned about the term ‘lucid dream’.
In 2012 that memory of those holidays, and the confused exchange later on, resurfaced and I understood. I understood that I sometimes had vivid dreams that were so vivid they were, to my brain, real. When I remembered them later on, if I didn’t also remember the memories came from a dream, I would assume they were real. In the fourteen years since I had that experience, I had remembered it countless times, without realising it was a dream. That the girl, the holiday, the place, the quay, the fishing, the emotions I felt, all of these things affected me… they had an impact on part of my life… and they never existed.
Ever since that first realisation, I have been on alert, looking hard at memories that pop up and trying to make sure that I know where they come from. Dream or reality. Real or not. Happened or didn’t. I have also been looking through my pre-2012 memories and doing the same thing. I catch a few.
Whenever I know for sure that one memory is from a dream, I tag it so. I bring up the memory and make sure the word FAKE is watermarked everywhere on it. Because then when it pops up in my consciousness stream, so does the tag, and I know not to trust it. It doesn’t mean I disown memories, try to forget them. Dreams are nice. But the damage of thinking that something happened when it didn’t, ever, is potentially too great. I’ve been lucky so far, and I’m on the lookout now, but countless fake memories slip through the cracks.
One last vivid dream for this prose: I once dreamed of doing various everyday things, including reading a series of tweets by saf. And because I didn’t catch that, because it was such a small part in the whole, I didn’t notice anything unusual going on when I remembered them later on and integrated the information they provided into my model, my profile, my persona of saf. Several months later, she said something that contradicted that piece of information in a fundamental way. And I was shocked, and I introspected, and I found that dream and realised what happened.
But imagine if it wasn’t reading tweets. Imagine dreaming that you got some clue that indicated your significant other was cheating on you. And you went ahead and didn’t realise it and integrated it and one day you’re snappish and irritated at something or other and you feel vindictive and you say things you can’t take back and you accuse and your love life comes apart in five minutes because of something that never actually happened. Imagine finding out afterwards. (That scenario can’t actually happen to me because of other reasons I’m not going to explain right now, but that was just a simple-to-understand example of the magnitude of how things can go wrong because of this.)
So I try to be careful. And most of the time it seems to be fine. Some things go through the cracks, I’ve never and probably will never be able to find out just what, just how much, but I live on.
My name is Félix Saparelli, and I’m a vivid dreamer.
# A thought on fanfiction
1 April 2017
Fanfiction is a study in what works and what doesn’t, and how something that really seems like it shouldn’t work actually does if you do it in a particular way. You get a lot of that in Ship Fanfiction, where most of the canon plot is background, and the focus of the fic is the particular Ship it’s exploring. There’s a LOT of Dramione out there that is really terrible. Clearly that ship is a bad idea. But if you just switch a thing or two around, dim some canon details, brighten others, change the levels of the picture it’s painting… and then write well, you can actually make it appealing and have it not only work, but also be quite genuine in its execution.
(A side note: Harry Potter fanficdom is by far the largest, so despite that universe not being as good as others, and in fact being quite flawed, it’s where there are a lot of different examples of the patterns and interests of fanfic.)
Another thing goes back to fanfic being social: fanfic is often meta. Not just in the “this fanfic refers or hints or has homages to other fanfics” way. But because of the way previous fanfic has already explored many different concepts, a fanfic writer will have a general sense of fanfic tropes. There’s stuff to avoid, there’s stuff that’s already been done a thousand times so it can be safe to use if you need to have a filler, there’s stuff that’s generally said to be impossible to do, so you attempt it anyway because you want to see if you can do it. There’s community writing challenges, where you get a concept and play with it, or try to write it in.
Fanfic authors, most of the time, are aware of the context. They’re aware of the tropes. They’re aware of the trends. They’re aware of Murphy’s Law. And they can play with that. Fiction (not fanfic) writers tend to not do that. Except for a particular genre of comedy, which is somehow exempt from that limitation.
To use one of the classic examples: if, in a fanfic, someone states “I won’t die,” there is a more-than-even chance that the speaker will, or will come close to it, just because the words were said. But the writer knows that. And you, as a reader, know that the writer knows that. So now, the expectation is that the words being there causing the action is too obvious. So it won’t happen. Or there’ll be a twist in some way. Or maybe it happens, but only off-screen. Fanfic writers, more often than other writers, subvert tropes, instead of only writing them.
(And that’s an argument for non-fanfiction writers reading more fanfiction: so they then are inspired to break tropes and make chaos, to not always write the same whitewashed hero and conventional romances, to not always have Bad Romance and Good Romance, but something in between, or something that transcends it, or something that perverts it. And there are writers doing that, sure, but they’re too few and far between, I think.)
I see it a bit like the classical composers, who (according to Dad) would compose something and then send it to their contemporaries and be like “look at what I did, can you improve on that?” and the recipients would create Variations on a Theme, and tease out patterns that are interesting in their own right, and then compose something around that and send it back.
Or maybe take a minor voice and make it major, take that movement that tells the story of the calm and long lamentations of the Winter Lady and make it instead about the fast wailing of a scorned lover, carve out the pattern the cellos are playing and bring it to the fore into a concerto, completely eliminate the melody and see what stands out… Essentially doing nothing but recursive fanmusic, and then what little would get out to the public would sound “so original, what a delight”… “here, have a bunch of money to keep doing what you’re doing”.
Another thing: while there is fanfic that keeps to the tone of canon, that is actually a particular style, a challenge in fact. How to write something that is yours while also following down to the beat and down to the semitone the cadence and key of the original? The rest of fanfiction doesn’t follow canon, and just… plays in the sandbox left by the original creator. It puts concrete in the sand and erects impossible spires, it attempts to break out of the walls, and it even invites itself in other sandboxes, bringing some of its sand and some of its tools over to a completely new universe. But it’s all a lot easier to get started and get writing and get exploring than building an entirely new sandbox by yourself. It’s a lot less emotionally taxing, and that translates as freedom to do a lot more.
Not that it means fanfic is always easy to write, I say as I add notes to my detailed timeline of my Star Wars AU, which helpfully indicates, in an intricate weaving of lines and colours, which parts are taken from canon, which are mine, which refer to what other fanfiction, which are from Legends, which are well-defined, which are up in the air. You can plan as little or as much as you want, there’s no worldbuilding nor character development necessary — only if you want to do it. And I like that about the medium.
This post was several responses I posted in Merveilles, then edited together and published properly here.
# This is not a coming out
11 October 2014
When I try to determine how I feel about my gender in words, I go to the Genderqueer Wikipedia page. The article itself has a list of five ways someone who is genderqueer may identify; it also has a nice aside listing every genderkind known to Wikipedia, and it references this list of Facebook genders. Generally speaking, I do not identify with any of these, including the genderqueer, genderfluid, nonconforming, questioning, variant, neither, male, female, or pangender kinds. The only label that possibly applies is “other” but that’s not helpful at all.
I have no gender-related dysphoria. I am physically male and totally comfortable that way. Mostly, I do not care about gender. My gender, I mean. When I prod the ‘gender’ field in my brain’s information table, I get an E_NULL_POINTER error.
If you want to use gendered or non-gendered pronouns of any kind or shape to refer to me, please go right ahead. Similarly with gendered grammars. The only pronoun I may take offense at, in English (and other languages if equivalent, but I don’t know them) is “it”, unless it’s part of a metaphor or something. There’s just one rule, and it’s not even gender-related: make it consistent. Otherwise it may be confusing.
Now stop reading my silly blog and go support people who actually need it!
# Guy-Adjacent
4 December 2017
I think I’ve finally figured out a name for my gender identity thing that feels okay. Not super great. But… okay.
I’m not enby. Like, I’m technically non-binary, agender something, but also like no.
I’m not taking this label for myself. I feel like I would be taking space away from others if I used that. I don’t have most of the issues and struggles I see and witness other non-binaries go through. I’m not discriminated against in any real way. I don’t have gender-related dysphoria, or maybe just a tiny bit that I’m not completely sure is really about gender at all. It’s hard to tell.
So it feels like you’re all this giant awesome fucked up family and I’m the one well-adjusted person in the middle of it.
I don’t particularly like the word guy? But it works here in a way “man-adjacent” doesn’t.
I’m not a guy. I’m… adjacent. Hi.
# My Words
13 September 2018
When I wrote Guy-Adjacent, I positioned myself as not wanting to intrude on a space I didn’t particularly think I should occupy. Most of the sentiments expressed there remain the same: I am neither oppressed nor dysphoric, my mental health is... pretty good. However, my approach to words has evolved.
(As an aside, I also think that placing myself, an NT, healthy, non-dysphoric person, in these spaces I belong to may indeed help in “normalising” them.)
I’m very much not a label absolutist. Indeed, I now take great pains to avoid using the term label here, or around this topic in general. I prefer saying “words”. “Words” is meant here as words that are you. It’s not about labelling, it’s not about the outside world, it’s not about absolutes, not about purity.
Words have two sides, and one constant:
• Words are about how you feel about them. You understand them in a certain way. Their etymology, their use in your personal history, the way they sound, their spelling, all these things are aspects of how they make you feel.
• Words are about how others understand them. They understand them in some certain ways. These might differ from the first. They might associate things to them. They might have a connotation. “Others” is a very wide group, and you should be careful both to exclude from your consideration those whose feelings you don’t care for (because of hate, because of irrelevance, because of distance), and to include those whose feelings you don’t necessarily consider outright but who may have a claim against the word (because of kinship, because of culture, because of history).
• Words change. Their spelling, their grammar, their meaning. Slow or fast. Sometimes both at once. Many times at different rates in different contexts.
All those things are equal influences, all as important as each other.
“Guy-adjacent” is still somewhat okay as a word. It’s one way to describe me. Here are others.
• “French”. I was born there, I speak the language, I do identify with my people. In some ways, and not in others, but I do. I have an accent when I speak English. I have culture and mannerisms and ways of thought which are very French. It is absolutely a word of mine.
• “New Zealander”. I am a citizen of this country. I love it. I chose it, in multiple ways. I don’t really claim “Kiwi”, but I acknowledge it. I have an accent when I speak English. And French.
• “Solitaire”. I like being alone. I’m okay with “introvert” from some people, especially other introverts, but I neither claim the word nor recognise it always. I recharge by staying away from people, and I shy away from too much socialising, but I do enjoy company, and I do go and brave crowds every so often. I get limited panics when I’m under too much exposure to others for too long: this is fairly frequent after conferences, or even during, for long ones.
I will happily not talk to anyone else for an entire week, if I’m given the opportunity (solo hiking trips being pretty much the only time nowadays — social media counts as talking!).
• “Pansexual”. With some preferences towards some aspects. Also “bisexual”, which was my previous preference over “pan” — words and thoughts change! I reject “sapiosexual” (the connotations are strongly against it, and I very much don’t get horny only for brains, whether physical or metaphorical). I’d say that the potential is there within the general population, and I do get attracted to some and not others, but the divide is neither on gender nor body traits. If I figure it out decisively one day I’ll pick a better word.
• “Demi-romantic”. Not “demi-sexual”. Not “aromantic”. I get crushes. I do fall in love. It doesn’t happen always, but most importantly for this word: I don’t feel like it’s something missing. “Bachelor” is a word I’ll acquiesce to, but won’t use for myself unless it’s right for the context. (I only fall within some of its senses, anyway. My uncle was a bachelor until he found his wife; I will remain demi-romantic even if I do find companionship.)
• “Non-binary”. I am that. See the first section of this post, and the previous article, for how I am not like some others who use that word. But I definitely am. Like the one below, I knew I was non-binary before I knew it was a thing.
• “Poly(am)”. Not out of the absolute need to be with more than one person (see demi above), although that varies with context, but out of the complete lack of any feeling regarding being “mono”. I do not care for it, in all meanings of the phrase. That too, I knew about myself before I knew it had a word.
# Check all applicable labels
23 June 2020
This is the third in a series of posts going back to 2017 where I attempt to describe the labels that apply to me. I know some people liked the first, “Guy-Adjacent”, a lot and resonated with some of the things in it, but it doesn’t match me at all anymore. It’s there, though.
In the preface of “My Words,” I self-describe as neither being oppressed nor dysphoric, having pretty good mental health, being NT, being healthy. That is, to a certain extent, still true. It is also not completely accurate, and, I have come to realise, wasn’t even when I wrote the previous post. Self-discovery is a thing. Ceasing denial is also a thing.
I am not NT (neurotypical). I, however, feel quite inappropriate in claiming the ND (neurodivergent) label. I don’t feel like I belong in either category, which could be annoying if I cared a lot about categories or labels, but I don’t really. I’m not presently interested in getting diagnosed in any particular direction, either. But I do recognise I am not NT, and that’s a step. I think.
I am not the picture of health. I have an invisible, undiagnosed chronic illness or condition, which after much questioning, sounds pretty similar to, if an incredibly mild version of, a named condition. But maybe I think that because several friends do have that named condition, properly diagnosed, and I recognise myself in my friends. I would never want to make their struggles and hardship about me, though, so I’m not going to associate myself with it.
I’m also very wary of something that sounds very much like a general “we have no idea, let’s describe the symptoms and call it a thing”, which I’m sure is helpful from a medical and also community point of view, but is super despairing from the perspective of someone wondering about what they have and if it’s ever fixable, and with a dislike of doctors.
I probably need therapy. And medical care. I don’t know if I’ll ever work up the nerve to push past my past experiences and get there, which sounds very silly because I KNOW I HAVE THE PROBLEM, WHY WON’T I DO SOMETHING ABOUT IT.
Speaking of mental health, I don’t have impostor syndrome anymore. Wooo. It only took several years. I still have a latent fear of rejection. But I’m pretty confident in my skills now, and most importantly, I’m pretty confident in the skills I don’t have. I think it helps tremendously, from a privileged position, to admit this vulnerability or ‘weakness’ to myself.
But enough about health. One other very important change from the previous post is to stop trying to avoid the word “label.” I will never like it much, but in the last post I made a rather obvious mistake, in that I described that “words” are about communication, and failed to realise that the word “label” is also about communication. “Words” is very unspecific. “Labels” might feel icky and have interpretations I’d rather avoid, but it certainly has specific shared meaning, and that helps tremendously in communicating what I mean.
So the specific things I said in the previous post are still true: labels are for me, labels are for you, labels change, labels are not absolute. Labels are useful to communicate along a shared understanding, but it’s still extremely important to me that labels are only about this communication and sharing, and are not utterly accurate, finite, bounded, prescriptive descriptions of me and who I am.
Let’s get the easy stuff out of the way: I am French, I am Pākehā, a New Zealander, sometimes a kiwi depending on context. I am technically European, but don’t really have it as an identity, and certainly do not feel like an NZ European, whatever that means. I am European in a Europe context, but not in an Aotearoa context.
I am kind of an introvert, but still neither like the word nor the implied dichotomy. I’ll claim solitaire, not the card game (I like French tarot, if you must know), but I prefer avoiding crowds and large social occasions. I’ve tried, I’ve been pushed this way, that way... doesn’t work.
I am gendermeh with boy flavour. I actively dislike “guy” as a gendered marker and as a group noun, but as a general address I’m not bothered. I don’t give a fuck about masculinity, or femininity for that matter. I’m very happy being me, and everything else can go. I am technically non-binary, and will claim that label, but I am getting increasingly disillusioned with the discourse around it; specifically, I feel that making it a sort of “third gender” is very reductive and misses the entire point. I neither have a strong gender identity, nor care to describe it. To make a programming joke: I am both weakly gendered and statically gendered, but I don’t fit in the type system.
I am cis in that I am not trans. That’s pretty simple. I don’t strongly identify as cis, but I am also not trans, and the least I can do for that community is claim this, in support.
Sexuality-wise, I’ll still claim both bisexuality and pansexuality, leaning towards one or the other depending on context.
But I’m not sexual or romantic enough to really identify with any particular label. I’m not ace, and I’m not aro, but I’m not very ro either. I am neither poly- nor mono-amorous, but I’ll go with poly for lack of anything better.
This is roughly in the order I care about. And that’s pretty much it.
https://deepai.org/publication/backdoor-decomposable-monotone-circuits-and-their-propagation-complete-encodings
# Backdoor Decomposable Monotone Circuits and their Propagation Complete Encodings
We describe a compilation language of backdoor decomposable monotone circuits (BDMCs) which generalizes several concepts appearing in the literature, e.g. DNNFs and backdoor trees. A BDMC sentence is a monotone circuit which satisfies the decomposability property (such as in DNNF) in which the inputs (or leaves) are associated with CNF encodings of some functions. We consider two versions of BDMCs. In the case of PC-BDMCs the encodings in the leaves are propagation complete encodings and in the case of URC-BDMCs the encodings in the leaves are unit refutation complete encodings of the respective functions. We show that a representation of a boolean function with a PC-BDMC can be transformed into a propagation complete encoding of the same function whose size is polynomial in the size of the input PC-BDMC sentence. We obtain a similar result in the case of URC-BDMCs. We also relate the size of PC-BDMCs to the size of DNNFs and backdoor trees.
## 1 Introduction
We describe a compilation language for representing boolean functions which can be viewed as a generalization of several concepts appearing in the literature. A boolean function is represented using a structure consisting of a monotone circuit satisfying the decomposability property and whose inputs, called leaves, are associated with propagation complete (PC) or unit refutation complete (URC) encodings of some simpler functions. We call this structure a backdoor decomposable monotone circuit (BDMC), because it is a generalization of backdoor trees introduced in [25]. This structure generalizes also other concepts appearing in the literature. A DNNF [9] and a disjunction of URC encodings [3] are both special cases of BDMCs. We distinguish two versions, a PC-BDMC which has PC encodings in the leaves and a URC-BDMC which has URC encodings in the leaves. If we consider circuits with only one node, we obtain that PC-BDMC sentences generalize PC formulas introduced in [4] and URC-BDMC sentences generalize URC formulas introduced in [11]. Since the sizes of URC and PC encodings are polynomially related by Theorem 1 in [2], the same is true for URC-BDMCs and PC-BDMCs.
The main result of this paper is that we can compile a PC-BDMC or URC-BDMC into a PC or URC encoding of size polynomial with respect to the total size of the input BDMC. Combining the results of [5] or [7] with the fact that both DNNFs and PC encodings are special cases of PC-BDMCs, we obtain that the language of PC-BDMCs is strictly more succinct than the language of DNNFs. We also present an example of a CNF formula such that every backdoor tree with respect to the base class of renamable Horn formulas has exponential size, although the function can be represented by a DNNF sentence, and hence also by a URC-BDMC or PC-BDMC sentence, of linear size.
A smooth DNNF $D$ can be compiled into a propagation complete encoding of linear size with respect to the size of $D$ by techniques described in [16, 12]. We generalize this result to a more general structure, where the leaves contain URC or PC encodings instead of single literals and smoothness is not required. On the other hand, the method of the transformation is different from the method used in [16, 12] and the size of the output is not bounded by a linear function of the size of the input, although it is still polynomial.
The authors of [3] studied properties of unit refutation complete encodings and proved, in particular, that the disjunction closure can be computed in polynomial time for unit refutation complete encodings. Our result generalizes this in two directions. We describe a polynomial time transformation of an arbitrary URC-BDMC sentence, which is a more general structure built on top of a collection of URC encodings than a disjunction, into a single URC encoding. Moreover, our approach generalizes to PC-BDMCs and propagation complete encodings instead of unit refutation complete encodings. Similarly as in [3], our construction uses a Tseitin transformation of the BDMC in the first step and then simulates the unit propagation under conditions represented by additional literals. In particular, for a URC-BDMC sentence that is a disjunction of URC encodings, the construction is essentially the same as in [3] up to the naming of the variables.
Consider a boolean function $f$ which is represented by a CNF $\varphi$. Since backdoor trees are a special case of BDMCs, our result implies that $f$ has a PC encoding whose size can be bounded in terms of the size of a smallest backdoor tree of $\varphi$ with base classes of Horn, renamable Horn CNF formulas, or 2-CNF formulas.
Let us consider a CNF $\varphi$ representing a boolean function $f$. It is known that the size of a DNNF representing $f$ can be parameterized by the incidence treewidth of $\varphi$ ([6, 23]). It follows by the construction described in [12] that the size of a PC encoding of $f$ can be parameterized by the incidence treewidth of $\varphi$ as well. The previous paragraph implies that the size of a PC encoding can be parameterized also by the size of a backdoor tree with some of the base classes listed above.
## 2 Definitions and Notation
In this section we recall definitions and notation used throughout the text.
### 2.1 CNF Encoding
We work with formulas in conjunctive normal form (CNF formulas). Namely, a literal is a variable $x$ (positive literal) or its negation $\neg x$ (negative literal). If $x$ is a variable, then we denote $\mathrm{lit}(x) = \{x, \neg x\}$. If $\mathbf{x}$ is a vector of variables, then we denote by $\mathrm{lit}(\mathbf{x})$ the union of $\mathrm{lit}(x)$ over $x \in \mathbf{x}$. For simplicity, we write $x \in \mathbf{x}$ if $x$ is a variable that occurs in $\mathbf{x}$, so $\mathbf{x}$ is considered as a set here, although the order of the variables in $\mathbf{x}$ is important. Given a literal $l$, the term $\mathrm{var}(l)$ denotes the variable in the literal $l$, that is, $\mathrm{var}(l) = x$ for $l \in \{x, \neg x\}$.
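For example, for a vector of variables $\mathbf{x} = (x_1, x_2)$ the notation above gives
$$\mathrm{lit}(\mathbf{x}) = \{x_1, \neg x_1, x_2, \neg x_2\}, \qquad \mathrm{var}(x_1) = \mathrm{var}(\neg x_1) = x_1.$$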
A clause is a disjunction of a set of literals which does not contain a complementary pair of literals. A formula is in conjunctive normal form (CNF) if it is a conjunction of a set of clauses. A $k$-CNF formula consists only of clauses of length at most $k$. We treat a clause as a set of literals and a CNF formula as a set of clauses. In particular, $|C|$ denotes the number of literals in a clause $C$ and $|\varphi|$ denotes the number of clauses in a CNF formula $\varphi$. We denote $\|\varphi\| = \sum_{C \in \varphi} |C|$ the length of a CNF formula $\varphi$.
A clause is Horn if it contains at most one positive literal, it is definite Horn if it contains exactly one positive literal. A definite Horn clause $\neg x_1 \vee \cdots \vee \neg x_k \vee y$ represents the implication $x_1 \wedge \cdots \wedge x_k \to y$ and we use both kinds of notation interchangeably. The set of variables in the assumption of a definite Horn clause is called its source set, the variable in the conclusion is called its target. A (definite) Horn CNF formula consists only of (definite) Horn clauses.
Consider a CNF formula $\varphi$ on variables $\mathbf{x} = (x_1, \dots, x_n)$ and a boolean vector $\mathbf{a} \in \{0,1\}^n$. The renaming of $\varphi$ according to $\mathbf{a}$ is defined as the formula obtained from $\varphi$ by replacing each occurrence of a literal $l$ with $\mathrm{var}(l) = x_i$, such that $a_i = 1$, by $\neg l$. We say that $\varphi$ is renamable Horn if there is a renaming of $\varphi$ which is a Horn formula. Such a renaming can be found in linear time [14, 18] by reducing the problem to the satisfiability of a specific 2-CNF formula.
A partial assignment $\alpha$ of variables $\mathbf{x}$ is a subset of $\mathrm{lit}(\mathbf{x})$ that does not contain a complementary pair of literals, so we have $|\alpha \cap \mathrm{lit}(x)| \le 1$ for each $x \in \mathbf{x}$. By $\varphi(\alpha)$ we denote the formula obtained from $\varphi$ by the partial setting of the variables defined by $\alpha$. We identify a set of literals $\alpha$ (in particular a partial assignment) with the conjunction of these literals if $\alpha$ is used in a formula such as $\varphi \wedge \alpha$. If $\mathbf{x}$ is a vector of variables, a mapping $\mathbf{a} : \mathbf{x} \to \{0,1\}$ is called a full assignment of values to $\mathbf{x}$. We identify a full assignment with the vector of its values and it can be viewed as a special case of a partial assignment, however, in some cases we need to differentiate between these two notions.
In this paper we consider encodings of boolean functions defined as follows.
###### Definition 2.1 (Encoding).
Let $f(\mathbf{x})$ be a boolean function on variables $\mathbf{x} = (x_1, \dots, x_n)$. Let $\varphi(\mathbf{x}, \mathbf{y})$ be a CNF formula on variables $\mathbf{x}$ and $\mathbf{y}$, where $\mathbf{y} = (y_1, \dots, y_m)$. We call $\varphi$ a CNF encoding of $f$ if for every $\mathbf{a} \in \{0,1\}^n$ we have
$$f(\mathbf{a}) = 1 \iff (\exists \mathbf{b} \in \{0,1\}^m)\ \varphi(\mathbf{a}, \mathbf{b}) = 1. \qquad (1)$$
The variables in $\mathbf{x}$ and $\mathbf{y}$ are called input variables and auxiliary variables, respectively.
### 2.2 Propagation and Unit Refutation Complete Encodings
We are interested in encodings which are propagation complete or at least unit refutation complete. These notions rely on unit resolution which is a special case of general resolution. We say that two clauses $C_1$ and $C_2$ are resolvable if there is exactly one literal $l$ such that $l \in C_1$ and $\neg l \in C_2$. The resolvent of these clauses is then defined as $(C_1 \setminus \{l\}) \cup (C_2 \setminus \{\neg l\})$. If one of $C_1$ and $C_2$ is a unit clause, we say that the resolvent is derived by unit resolution from $C_1$ and $C_2$. We say that a clause $C$ can be derived from $\varphi$ by unit resolution (or unit propagation) if $C$ can be derived from $\varphi$ by a series of unit resolutions. We denote this fact with $\varphi \vdash_1 C$.
The notion of propagation complete CNF formulas was introduced in [4] as a generalization of unit refutation complete CNF formulas introduced in [11]. We use the following more general notions of propagation complete and unit refutation complete encodings. Let us point out that unit refutation complete encodings are denoted by URC-C in [3].
###### Definition 2.2.
Let $f(\mathbf{x})$ be a boolean function on variables $\mathbf{x}$. Let $\varphi(\mathbf{x}, \mathbf{y})$ be a CNF encoding of $f$ with input variables $\mathbf{x}$ and auxiliary variables $\mathbf{y}$.
• We say that $\varphi$ is a unit refutation complete encoding (URC encoding) of $f$ if the following equivalence holds for every partial assignment $\alpha \subseteq \mathrm{lit}(\mathbf{x})$:
$$f(\mathbf{x}) \wedge \alpha \models \bot \iff \varphi \wedge \alpha \vdash_1 \bot \qquad (2)$$
• We say that $\varphi$ is a propagation complete encoding (PC encoding) of $f$ if for every partial assignment $\alpha \subseteq \mathrm{lit}(\mathbf{x})$ and for each $h \in \mathrm{lit}(\mathbf{x})$, such that
$$f(\mathbf{x}) \wedge \alpha \models h \qquad (3)$$
we have
$$\varphi \wedge \alpha \vdash_1 h \quad \text{or} \quad \varphi \wedge \alpha \vdash_1 \bot. \qquad (4)$$
Note that the definition of a propagation complete encoding is less restrictive than requiring that the formula $\varphi$ is propagation complete as defined in [4]. The difference is that in a PC encoding we only consider literals on input variables as assumptions and consequences in (4). The definition of a propagation complete formula [4] assumes that $f$ is the function represented by $\varphi$, so we do not distinguish input and auxiliary variables, and the implication from (3) to (4) is required for the literals on all the variables.
It was shown in [1] that a prime 2-CNF formula is always propagation complete, thus the same holds for 2-CNF encodings. On the other hand, Horn and renamable Horn formulas are unit refutation complete [11].
In some cases it is advantageous to have a PC or URC encoding in $k$-CNF for a fixed constant $k$. Given a CNF encoding of a boolean function, we obtain a $k$-CNF encoding of the same function by the standard technique to transform a CNF into a $k$-CNF. Namely, we split the long clauses and use new variables to link the parts together. It is not hard to see that if this technique is applied to a PC or URC encoding, we obtain a PC or URC encoding, respectively. In order to refer to this property later, we formulate it as a lemma.
###### Lemma 2.3.
Let $\varphi(\mathbf{x}, \mathbf{y})$ be a CNF encoding of a function $f$ and let $k \ge 3$ be a constant. Then there is a $k$-CNF encoding $\varphi'$ of $f$ whose number of clauses, number of auxiliary variables, and length are all linear in $\|\varphi\|$. Moreover, if $\varphi$ is a PC (or URC resp.) encoding of $f$, then so is $\varphi'$.
### 2.3 DNNF
Let us briefly recall the notion of DNNF [9].
###### Definition 2.4.
A sentence in NNF is a rooted, directed acyclic graph (DAG) where each leaf node is labeled with $0$, $1$, or a literal $l \in \mathrm{lit}(\mathbf{x})$, where $\mathbf{x}$ is a set of input variables. Each internal node is labeled with $\wedge$ or $\vee$ and can have arbitrarily many children.
Assume $D$ is an NNF with input variables $\mathbf{x}$ and nodes $v_1, \dots, v_N$. We always assume that the inputs of a gate precede it in the list of nodes. Hence, if $v_i$ is an input to $v_j$, then $i < j$. For every $i$, let $\mathrm{var}(v_i)$ denote the set of input variables from which the node $v_i$ is reachable by a directed path. Each node $v_i$ represents a function on variables $\mathrm{var}(v_i)$. Given this notation we can now define the language of DNNF sentences as follows.
###### Definition 2.5.
We say that an NNF is decomposable (DNNF) if every AND gate $v$ with inputs $v_{i_1}, \dots, v_{i_k}$ satisfies that $\mathrm{var}(v_{i_1}), \dots, \mathrm{var}(v_{i_k})$ are pairwise disjoint. In other words, the inputs to each AND gate have pairwise disjoint sets of variables they depend on syntactically.
### 2.4 Backdoor Trees
We first recall the concept of backdoor sets introduced in [27, 26]. As a base class for a backdoor set we consider a class $\mathcal{C}$ of CNF formulas for which the satisfiability and the membership problem can be solved in polynomial time. Let $\mathcal{C}$ be a base class and let $\varphi$ be a CNF formula on variables $\mathbf{x}$. Then a set $B \subseteq \mathbf{x}$ is a strong $\mathcal{C}$-backdoor set of $\varphi$ if for every full assignment $\alpha$ of the variables in $B$ we have that $\varphi(\alpha)$ is a formula in $\mathcal{C}$. Finding smallest strong Horn-backdoor sets and strong 2-CNF-backdoor sets is fixed-parameter tractable with respect to the size of a smallest backdoor set [21]. Other classes of CNF formulas were considered as base classes in the literature, let us mention backdoors to a heterogeneous class of SAT [13]. Backdoor sets were generalized in [25] to backdoor trees.
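As a small example of these notions, consider the CNF formula $\varphi = (x \vee y) \wedge (\neg x \vee \neg y)$, which is not Horn due to the clause $(x \vee y)$. The set $B = \{x\}$ is a strong Horn-backdoor set of $\varphi$, since both restrictions
$$\varphi(x{=}0) = (y), \qquad \varphi(x{=}1) = (\neg y)$$
are Horn formulas. The corresponding backdoor tree branches on the variable $x$ at the root and has two leaves.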
A splitting tree is a rooted binary tree where every node which is not a leaf has exactly two child nodes. Each non-leaf vertex $t$ is labeled with a variable $x$ and the two edges leaving $t$ are labeled with $x = 0$ and $x = 1$. No variable appears more than once on a single path from the root to a leaf. The notion of a splitting tree is closely related to the notion of a decision tree which is obtained by assigning constants to the leaves of a splitting tree and represents a boolean function in a natural way. A splitting tree $T$ is a representation of a set of restrictions of a formula $\varphi$ in such a way that each leaf $t$ of $T$ is assigned the formula $\varphi(\alpha)$, where $\alpha$ is the partial assignment defined by the labels of edges on the path from the root to $t$.
Assume $\varphi$ is a CNF formula on variables $\mathbf{x}$. A $\mathcal{C}$-backdoor tree of $\varphi$ is a splitting tree $T$ on a set of variables $B \subseteq \mathbf{x}$ which satisfies that for every leaf $t$ the formula $\varphi(\alpha)$ belongs to the class $\mathcal{C}$, where $\alpha$ is the partial assignment associated with the leaf $t$. We denote $\mathrm{leaves}(T)$ the number of leaves in $T$. The size of $T$ is defined in [25] so that it is comparable with the sizes of backdoor sets. In particular, it was observed in [25] that if $s$ is the size of a smallest $\mathcal{C}$-backdoor set of a CNF $\varphi$, then the number of leaves in a smallest $\mathcal{C}$-backdoor tree $T$ of $\varphi$ satisfies $s + 1 \le \mathrm{leaves}(T) \le 2^s$, with a corresponding bound on the size of $T$. It was shown in [25] that finding a $\mathcal{C}$-backdoor tree of a given size is fixed-parameter tractable for the classes of Horn and 2-CNF formulas.
## 3 Backdoor Decomposable Monotone Circuits
In this section we introduce a language of backdoor decomposable monotone circuits (BDMC) which consists of sentences formed by a combination of a decomposable monotone circuit with CNF formulas from a suitable base class at the leaves. For presenting the transformation of a BDMC into a PC or URC encoding, we use the base classes of PC or URC encodings which are the largest possible for the presented proofs. These classes admit a polynomial time satisfiability test. However, the corresponding membership tests are co-NP-complete, since it is co-NP-complete to check if a formula is URC [8, 17] or PC [1]. For this reason, when the complexity of algorithms searching for a BDMC for a given function is in consideration, we assume that a suitable subclass with a polynomial time membership test is used. However, this is outside the scope of this paper, although we prove some results concerning renamable Horn BDMCs in this section.
###### Definition 3.1 (Backdoor Decomposable Monotone Circuit).
Let $\mathcal{C}$ be a base class of CNF encodings. A sentence in the language of backdoor decomposable monotone circuits with respect to the base class $\mathcal{C}$ ($\mathcal{C}$-BDMC) is a triple consisting of a rooted DAG $D$, a labeling of its internal nodes, and a leaf labeling function $\rho$, where:
• $D$ is a rooted, directed acyclic graph with the set of nodes $v_1, \dots, v_N$.
• Each internal node of $D$ is labeled with $\wedge$ or $\vee$ and can have arbitrarily many children.
• $\rho$ is a function assigning to each leaf $v_i$ of $D$ a CNF encoding $\varphi_i(\mathbf{x}_i, \mathbf{y}_i)$ from the class $\mathcal{C}$ of a function $f_i(\mathbf{x}_i)$, where $v_1, \dots, v_L$ are the leaves of $D$.
• For each node $v_j$, let $\mathbf{x}_j$ be the union of $\mathbf{x}_i$ for all leaves $v_i$ such that there is a path from $v_j$ to $v_i$. In particular, if $v_j$ is a gate with inputs $v_{i_1}, \dots, v_{i_k}$, then $\mathbf{x}_j = \mathbf{x}_{i_1} \cup \cdots \cup \mathbf{x}_{i_k}$.
• Each node $v_j$ represents a function $f_j$. For inner nodes, the function is given by first evaluating the leaves and then the circuit rooted at $v_j$. Since the function $f_j$ depends only on the variables in $\mathbf{x}_j$, it can also be written as $f_j(\mathbf{x}_j)$, if needed.
• The function represented by $D$ is the function represented by its root.
• Nodes labeled by $\wedge$ satisfy the decomposability property: if $v_j = \bigwedge_{i \in I} v_i$ for a set of indices $I$, then the sets of variables $\mathbf{x}_i$, $i \in I$, are pairwise disjoint.
Given a BDMC with nodes $v_1, \dots, v_N$, we always assume that the children of a node precede it in the list. In particular, if $v_i$ is a child of $v_j$, then $i < j$.
Node $v_N$ is then the root of $D$. Given two different leaves $v_i$ and $v_j$ with associated CNF encodings $\varphi_i(\mathbf{x}_i, \mathbf{y}_i)$ and $\varphi_j(\mathbf{x}_j, \mathbf{y}_j)$, we always assume that $\mathbf{y}_i \cap \mathbf{y}_j = \emptyset$, i.e. the sets of auxiliary variables of encodings in different leaves are pairwise disjoint. We can make this assumption without loss of generality as it can be achieved by renaming the auxiliary variables, if it is not the case.
In Section 4 we consider the language $\mathcal{C}$-BDMC with $\mathcal{C}$ equal to the class of PC encodings and in Section 5 we consider the language $\mathcal{C}$-BDMC with $\mathcal{C}$ equal to the class of URC encodings. Note that a decision node in a splitting tree can be rewritten as a disjunction of two decomposable conjunctions. Consequently, backdoor trees with respect to any base class $\mathcal{C}$ form a special case of $\mathcal{C}$-BDMCs. On the other hand, Theorem 3.5 implies that if $\mathcal{C}$ is the class of renamable Horn formulas, then the size of a $\mathcal{C}$-BDMC can be exponentially smaller than the size of a $\mathcal{C}$-backdoor tree.
By the results of [5] and [7], there are classes of monotone CNF formulas whose DNNF size is exponential in the size of the formula. In particular, [7] presents a class with the above property consisting of monotone 3-CNF formulas and [5] presents a class consisting of monotone 2-CNF formulas. In both cases, the proof of existence of the corresponding class is non-constructive. Every irredundant monotone CNF is in prime implicate form, which means that it is formed by all the prime implicates of the represented function. Such a formula is clearly propagation complete, see [1] for more detail. Together with the known fact that PC encodings are at least as succinct as DNNFs, the lower bounds on DNNF size from [5] and [7] imply the following.
###### Corollary 3.2.
The language of PC encodings is strictly more succinct than the language of DNNF sentences.
The language of PC-BDMCs and also the language of 2-CNF-BDMCs contains the language of DNNFs as a subset consisting of BDMCs with the literals in the leaves. Hence, the lower bound on DNNF size from [5] implies also the following.
###### Corollary 3.3.
The language of PC-BDMCs and even the language of 2-CNF-BDMCs is strictly more succinct than the language of DNNF sentences.
Let us also point out that Theorem 4.6 proven below implies the following.
###### Proposition 3.4.
The languages of PC encodings and of PC-BDMC sentences are equally succinct.
###### Proof.
PC encodings are a special case of PC-BDMCs with one node. The opposite direction follows from Theorem 4.6. ∎
One of the reasons for introducing the language of URC-BDMC sentences is that it provides an alternative way of compilation of a CNF, if the splitting process used for the compilation into a DNNF leads to too large a structure. If the target structure is a URC-BDMC, then a branch can be closed not only if it leads to a literal or a constant, but also if it leads to a Horn or renamable Horn formula. This can be recognized in polynomial time and if all the leaves of the obtained structure satisfy this, we have an instance of a URC-BDMC and it can be compiled into a URC or PC encoding instead of a DNNF by the results of Sections 4 and 5.
Let us consider $\mathcal{C}$-BDMCs, where $\mathcal{C}$ is the class of renamable Horn formulas, and let us compare their succinctness with that of the backdoor trees with respect to the base class $\mathcal{C}$. When using a backdoor tree as a representation of a function, the whole structure consists of the backdoor tree itself and the original formula.
However, since we prove a lower bound on the size of the representation and the original formula has polynomial size in the number of the variables, it is sufficient to formulate the bound in terms of the number of the leaves of the backdoor tree. Theorem 3.5 below can be viewed as a stronger version of the second part of Proposition 9 in [25] reformulated for comparing renamable Horn BDMCs to renamable Horn backdoor trees. In the proof, we use the same construction as the authors of [25]; however, the obtained lower bound is larger.

###### Theorem 3.5.

For every n divisible by 3, there is a Boolean function f of n variables with the following properties:

• f is expressible by a CNF formula of size O(n),
• f is expressible by a renamable Horn BDMC and even a DNNF of size O(n),
• for every CNF formula representing f, every backdoor tree for it with respect to the base class of renamable Horn formulas has at least 2^{n/3} leaves.

###### Proof.

We use the same construction as the one which is used in the proof of Proposition 9 in [25]. Given n = 3m, define for each i = 1, …, m

ψ_i = (a_i ∨ b_i ∨ c_i)(¬a_i ∨ ¬b_i ∨ ¬c_i)

and let us consider the function on the variables a_1, b_1, c_1, …, a_m, b_m, c_m defined by

ψ = ⋀_{i=1}^{m} ψ_i.

For any i, it can be easily checked that the function represented by ψ_i is not renamable Horn; however, it can be expressed by a DNF of constant size. If ψ_i is replaced by this DNF for each i, the formula becomes a DNNF of size O(n) for ψ and it can be interpreted also as a renamable Horn BDMC for ψ of size O(n) with the literals in the leaves.

Let us prove that any renamable Horn backdoor tree of any CNF formula equivalent to ψ contains at least 2^{n/3} nodes. Consider a backdoor tree T with respect to ψ which has renamable Horn formulas in the leaves. We prove that every leaf of T is visited by at most 3^m satisfying assignments of ψ. Since ψ has 6^m satisfying assignments, the tree has at least 6^m / 3^m = 2^m = 2^{n/3} leaves. Consider a leaf u with an associated partial assignment α. One can prove that either α changes ψ_i to the zero function for at least one index i, or α fixes at least one variable in ψ_i for every i. In the first case, the leaf is not visited by any satisfying assignment of ψ. In the second case, the leaf is visited by a set S of satisfying assignments each of which is a combination of satisfying assignments of ψ_i for each i. Moreover, all the elements of S can be obtained by selecting for each i at most 3 different satisfying assignments of ψ_i consistent with α and considering all of the combinations of these assignments. It follows that |S| ≤ 3^m as required. ∎

## 4 PC Encoding of a PC-BDMC

In this section we describe a construction of a PC encoding of a function which is represented by a PC-BDMC. The construction uses the following two elements: formulas in the leaves are encoded using a variant of the well-known dual rail encoding; then we use the Tseitin encoding to propagate values of literals from the leaves to the root.

Consider a PC-BDMC D representing a function f(x). We use also the additional notation introduced in Definition 3.1. In Section 4.1, we introduce meta-variables used in the construction. In Section 4.2, we describe the dual rail encoding in the form which we use. In Section 4.3 we describe the construction of a PC encoding of a given PC-BDMC. Finally, in Section 4.4 we estimate the size of a PC encoding obtained by the construction.

### 4.1 Meta-variables

The well-known dual rail encoding uses new variables representing the literals on the variables of the original encoding. In addition to this, we associate a special variable with the contradiction. These new variables will be called meta-variables and denoted as follows.
The meta-variable associated with a literal l will be denoted ⟦l⟧, the meta-variable associated with the contradiction will be denoted ⟦⊥⟧, and the set of the meta-variables corresponding to a vector of variables x will be denoted

meta(x) = {⟦l⟧ ∣ l ∈ lit(x) ∪ {⊥}}.

In the next subsection, we describe the dual rail encoding using the meta-variables in this form. For notational convenience, we extend the above notation also to sets of literals that are meant as a conjunction, especially to partial assignments. If α is a set of literals, then ⟦α⟧ denotes the set of meta-variables associated with the literals in α, thus

⟦α⟧ = {⟦l⟧ ∣ l ∈ α}.

If ⟦α⟧ is used in a formula such as ψ ∧ ⟦α⟧, we identify this set of literals with the conjunction of them, similarly as α is interpreted in φ ∧ α.

In order to construct a PC encoding from a PC-BDMC in Section 4.3, we first construct a definite Horn formula representing derivations of literals on the input and auxiliary variables. These derivations have to be done separately in each node of D. Hence, besides the meta-variables described above, we use also copies of the meta-variables in each of the nodes of D, denoted as follows. For every node v_i and every literal l, we denote by ⟦l⟧_i the meta-variable associated with l in node v_i. For every leaf v_i, we moreover consider meta-variables associated with the literals on the auxiliary variables y_i. Using this notation, the set of auxiliary variables used in the construction is as follows:

z = {⟦l⟧_i ∣ 1 ≤ i ≤ L, l ∈ lit(x_i ∪ y_i) ∪ {⊥}} ∪ {⟦l⟧_i ∣ L < i ≤ N, l ∈ lit(x_i) ∪ {⊥}}

### 4.2 Dual rail encoding

The construction of the formula in Section 4.3 starts with forming the well-known dual rail encoding [2, 3, 15, 19] for the formulas in the leaves. The dual rail encoding transforms an encoding of a function into a Horn formula simulating the unit propagation in the original encoding. The dual rail encoding presented in (6) below represents unit resolution in a general formula φ using definite Horn clauses on the meta-variables meta(x). More precisely, the first type of Horn clauses represents a derivation of a literal from a clause of φ and the negations of all the remaining literals in this clause as a single step. The second type of Horn clauses represents the derivation of a contradiction from two complementary literals. Unit propagation in φ can also derive a contradiction using a clause from φ and the negations of all the literals in it. We do not include Horn clauses representing this, since the formula (6) is used only if all clauses in φ are non-empty. In this case, the direct derivation of the contradiction can be replaced by deriving one of the literals in the clause and, together with its complement, we obtain a contradiction in the next step.

###### Definition 4.1 (Dual rail encoding).

Let φ be an arbitrary CNF formula. If φ contains the empty clause, then DR(φ) = ⟦⊥⟧. Otherwise, the dual rail encoding DR(φ) is the definite Horn formula on the meta-variables meta(x) defined as follows:

DR(φ) = ⋀_{C∈φ} ⋀_{l∈C} ( ⋀_{e∈C∖{l}} ⟦¬e⟧ → ⟦l⟧ ) ∧ ⋀_{x∈x} ( ⟦x⟧ ∧ ⟦¬x⟧ → ⟦⊥⟧ ).  (6)

Written as a CNF we get

DR(φ) = ⋀_{C∈φ} ⋀_{l∈C} ( ⋁_{e∈C∖{l}} ¬⟦¬e⟧ ∨ ⟦l⟧ ) ∧ ⋀_{x∈x} ( ¬⟦x⟧ ∨ ¬⟦¬x⟧ ∨ ⟦⊥⟧ ).  (7)

The following lemma captures the basic property of the dual rail encoding. We omit the proof, since it is well-known, although different authors use different notation for the variables representing the literals, and the contradiction is frequently represented by an empty set and not by a specific literal. An application of the dual rail encoding with an explicit representation of the contradiction can be found, for example, in the first part of the proof of Theorem 1 in [2]. The notation in [2] relates to the notation in this paper by a direct renaming of the variables representing the positive literal, the negative literal, and the contradiction.

###### Lemma 4.2.
Let φ be a CNF not containing the empty clause and let α be a set of literals on its variables. Then for every l ∈ lit(x) ∪ {⊥} we have

φ ∧ α ⊢1 l ⟺ DR(φ) ∧ ⟦α⟧ ⊢1 ⟦l⟧  (8)

We use the dual rail encoding of the PC encoding φ_i(x_i, y_i) associated with a leaf v_i of a PC-BDMC using the meta-variables specific to the node v_i. To differentiate between dual rail encodings associated with different leaves, we introduce the following notation: we denote by DR(v_i, φ_i(x_i, y_i)) the dual rail encoding of formula φ_i which uses the meta-variables ⟦l⟧_i in place of ⟦l⟧ for every literal l.

### 4.3 Constructing the Encoding

Table 1 describes a set of Horn clauses which together form a Horn formula ψ(meta(x), z). To simplify the presentation of the clauses of group 1, we use shortcuts as described in the table. Formula ψ is a definite Horn formula. We use this formula to derive positive literals with unit propagation when presented with only positive literals in the assumption. Such a form of unit propagation is also called forward chaining and we sometimes use this notion when we want to express that the unit propagation is used in the above sense. By the following theorem, proven at the end of this subsection, the formula ψ derives the literals implied by f(x) using forward chaining.

###### Theorem 4.3.

For every partial assignment α and every l ∈ lit(x) ∪ {⊥}, we have

f(x) ∧ α ⊨ l ⟺ ψ(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦l⟧  (9)

Given a Horn formula ψ satisfying the equivalence (9), we can form a PC encoding of f(x) by simply substituting the meta-variables in meta(x) with the respective literals or constants, based on the following proposition.

###### Lemma 4.4.

Let φ(x, z) be obtained from ψ(meta(x), z) by substituting the meta-variable ⟦l⟧ with l for all l ∈ lit(x) and the meta-variable ⟦⊥⟧ with the constant 0. Then φ(x, z) is a PC encoding of f(x).

###### Proof.

By Theorem 4.3, ψ satisfies the equivalence (9). First, assume a full assignment a of the variables x such that f(a) = 1, and let us prove that the formula φ(a, z) is satisfiable. Let α be the set of literals on the variables from x satisfied by a. Since f(x) ∧ α ⊭ ⊥, we have by (9) that

ψ(meta(x), z) ∧ ⟦α⟧ ⊬1 ⟦⊥⟧.

It follows that ψ(meta(x), z) ∧ ⟦α⟧ does not derive ⟦¬e⟧ for any e ∈ α. Indeed, assume ψ(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦¬e⟧ for some e ∈ α. By (9) we get f(x) ∧ α ⊨ ¬e. Since e ∈ α, clearly f(x) ∧ α ⊨ e and thus together we have f(x) ∧ α ⊨ ⊥, which is a contradiction.

Consider the assignment of values to the variables meta(x) ∪ z obtained by setting to 1 all the variables derived by forward chaining in the formula ψ(meta(x), z) ∧ ⟦α⟧ and setting to 0 all the remaining variables. Clearly, we have ⟦⊥⟧ = 0. Moreover, for every e ∈ α, we have ⟦e⟧ = 1 and ⟦¬e⟧ = 0. It follows that we can construct an assignment of the variables x ∪ z that agrees with a on the variables x and gives the variables of z the values chosen above. In particular, it extends a. Every clause of ψ is satisfied by the chosen values of the meta-variables. Let us consider the following cases.

• If a clause C is satisfied by a literal on a variable from z, this literal is unchanged by the substitution and satisfies also the corresponding clause of φ(x, z).
• If C is satisfied by the literal ¬⟦⊥⟧, the clause is removed from φ(x, z) by the substitution.
• If C is satisfied by a literal ⟦e⟧, where e ∈ lit(x), then f(x) ∧ α ⊨ e by (9), so e is satisfied by a, and the corresponding clause of φ(x, z) contains e.
• If C is satisfied by a literal ¬⟦e⟧, where e ∈ lit(x), then e ∉ α (otherwise ⟦e⟧ would be derived), hence ¬e is satisfied by a, and the corresponding clause of φ(x, z) contains ¬e.

It follows that φ(a, z) is satisfiable. In order to prove that φ(x, z) is an encoding of f(x), it remains to prove that it is unsatisfiable if f(a) = 0. This is a consequence of the propagation completeness proven below.

Let α be a partial assignment and l a literal such that φ(x, z) ∧ α ⊨ l. By (9) we have

ψ(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦l⟧  (10)

We prove that either

φ(x, z) ∧ α ⊢1 l  (11)

or

φ(x, z) ∧ α ⊢1 ⊥  (12)

by the following argument. Let us fix a minimal forward chaining derivation of either ⟦l⟧ or ⟦⊥⟧ from ψ(meta(x), z) ∧ ⟦α⟧. Let g_1, …, g_p be the sequence of positive literals on the meta-variables in the order given by the fixed derivation. In particular, g_p is the derived literal and g_i ≠ g_p for i < p. Let g′_1, …, g′_p be obtained from g_1, …, g_p by the substitution given by the assumption, i.e. if g_i = ⟦e⟧ for some e ∈ lit(x), then g′_i = e, otherwise g′_i = g_i. In particular, g′_p is either l or ⊥.
Let us prove by induction over i that either for all i

φ(x, z) ∧ α ⊢1 g′_i  (13)

implying (11), or we obtain (12) directly. Let C_i be the Horn clause of ψ used to derive g_i. Note that by the choice of the derivation, C_i does not contain ⟦⊥⟧ in its tail. Let C′_i be the set of literals obtained from C_i by the substitution from the assumption. If C_i contains ⟦⊥⟧ in its head, then the head becomes 0 and is skipped in C′_i. If C′_i contains complementary literals e and ¬e, then one can verify by case inspection that e ∈ lit(x), C_i contains the negative literals ¬⟦e⟧ and ¬⟦¬e⟧, and both of the literals ⟦e⟧ and ⟦¬e⟧ occur in the sequence g_1, …, g_{i−1}. In this case, the corresponding literals g′ are e and ¬e and we obtain (12) by unit propagation from them. If C′_i does not contain complementary literals, it is a clause of φ(x, z). The clause C′_i is a Horn clause with the head g′_i and if its tail is non-empty, it contains negations of some literals g′_j with indices j < i. By the induction hypothesis (13), the literals g′_j can be derived from the formula φ(x, z) ∧ α before g′_i, and unit propagation using C′_i derives g′_i. Note that g′_i is either a literal included in C′_i or ⊥ (in case the head was skipped), and besides it, C′_i contains only negations of previously derived literals. Altogether, we obtain (13) or (12), implying (11) or (12). It follows that φ(x, z) is a PC encoding of f(x). ∎

The main step of the proof of Theorem 4.3 is the following lemma proven by induction.

###### Lemma 4.5.

Let ψ0 be the subformula of ψ which is formed only by the clauses in the corresponding groups of Table 1. Then for every j, 1 ≤ j ≤ N, every partial assignment α, and every l ∈ lit(x_j) ∪ {⊥}, we have

f_j(x) ∧ α ⊨ l  (14)

if and only if

ψ0(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦l⟧_j  (15)

###### Proof.

Let us first assume that v_j is a leaf, i.e. 1 ≤ j ≤ L. Assume first (14). Since φ_j(x_j, y_j) is a PC encoding of f_j, we have by (4) that φ_j ∧ α ⊢1 l or φ_j ∧ α ⊢1 ⊥. By Lemma 4.2 and the clauses in the corresponding groups of Table 1 we get that (15) holds. Assume now (15). Due to the acyclicity of D we have that (15) implies that only the clauses of the groups of Table 1 associated with leaf v_j are needed in the derivation of ⟦l⟧_j. The clauses of the group propagating the contradiction can only be used to propagate ⟦⊥⟧_j to ⟦l⟧_j for each literal l on the variables x_j. Thus, we have

DR(v_j, φ_j(x_j, y_j)) ∧ ⋀_{l ∈ lit(x_j)} (⟦⊥⟧_j → ⟦l⟧_j) ∧ ⋀_{e ∈ α} ⟦e⟧_j ⊢1 ⟦l⟧_j.

It follows that DR(v_j, φ_j(x_j, y_j)) ∧ ⟦α⟧_j ⊢1 ⟦l⟧_j or DR(v_j, φ_j(x_j, y_j)) ∧ ⟦α⟧_j ⊢1 ⟦⊥⟧_j. By Lemma 4.2 we get that φ_j ∧ α ⊢1 l or φ_j ∧ α ⊢1 ⊥. Since φ_j is a PC encoding of f_j, we get that f_j(x) ∧ α ⊨ l as required.

Let us now assume that v_j is an ∧-node and let us assume that the equivalence between (14) and (15) holds for the nodes preceding v_j. Assume first that (14) holds for v_j. Since D is decomposable, it follows that f_{j′}(x) ∧ α ⊨ l for some input v_{j′} of v_j. If actually l is a literal on the variables of v_{j′}, we get by the induction hypothesis that ψ0(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦l⟧_{j′}. Using the clauses of the corresponding groups of Table 1 we get (15). Otherwise we have that l is not a literal on the variables of v_{j′} and f_{j′}(x) ∧ α ⊨ ⊥. By the induction hypothesis we get that ψ0(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦⊥⟧_{j′}. Using the appropriate clause of Table 1 we get (15). Let us now assume (15). The only clauses which can be used to derive ⟦l⟧_j are in the groups of Table 1 associated with v_j. By inspecting these clauses we get that there is an input v_{j′} satisfying that ψ0(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦l⟧_{j′} or ψ0(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦⊥⟧_{j′}. By the induction hypothesis we get that f_{j′}(x) ∧ α ⊨ l. Thus f_j(x) ∧ α ⊨ l as well and we get (14).

Finally, let us assume that v_j is an ∨-node and let us assume that the equivalence between (14) and (15) holds for the nodes preceding v_j. Assume first that (14) holds for v_j. It follows that f_{j′}(x) ∧ α ⊨ l for every input v_{j′} of v_j. By the induction hypothesis, and using the convention that ⟦l⟧_{j′} stands for ⟦⊥⟧_{j′} in case l is not a literal on the variables of v_{j′}, we get that ψ0(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦l⟧_{j′} for every input v_{j′}. Using the corresponding clause of Table 1 we get (15). Let us now assume (15). The variable ⟦l⟧_j is derived by using the corresponding clause of Table 1. It follows that ψ0(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦l⟧_{j′} for every input v_{j′}. By the induction hypothesis we have that f_{j′}(x) ∧ α ⊨ l for every input v_{j′} and thus f_j(x) ∧ α ⊨ l as well and we have (14). ∎

We are now ready to show Theorem 4.3.

###### Proof of Theorem 4.3.

As in Lemma 4.5, we shall denote by ψ0 the subformula of ψ which is formed only by the clauses in the corresponding groups of Table 1. Let us first assume that f(x) ∧ α ⊨ l. Since f = f_N, we get by Lemma 4.5 that ψ0(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦l⟧_N. Using the appropriate clause from Table 1 we get that ψ(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦l⟧.
Let us on the other hand assume that ψ(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦l⟧ and let us show that f(x) ∧ α ⊨ l. For this purpose, consider the following set of literals β:

β = {e ∈ lit(x) ∪ {⊥} ∣ ψ0(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦e⟧_N}

Considering the fact that f = f_N, we get by Lemma 4.5 that

β = {e ∈ lit(x) ∪ {⊥} ∣ f(x) ∧ α ⊨ e}.

In particular, α ⊆ β, and if ⊥ ∈ β, then β contains all the literals. Denote by F_β the set of meta-variables derived by forward chaining from ψ(meta(x), z) ∧ ⟦β⟧. Using the fact that ⟦e⟧ is not the target of any of the clauses in ψ0, we have that ⟦e⟧ ∈ F_β if and only if e ∈ β or ⟦e⟧_N ∈ F_β, which is equivalent to f(x) ∧ β ⊨ e. Clearly, for every e we have that f(x) ∧ α ⊨ e if and only if f(x) ∧ β ⊨ e. Together with Lemma 4.5 we thus have for every e ∈ lit(x) ∪ {⊥} that the following four conditions are equivalent:

⟦e⟧ ∈ F_β
f(x) ∧ α ⊨ e
f(x) ∧ β ⊨ e
⟦e⟧_N ∈ F_β

This implies that if the source set of a clause of Table 1 is contained in F_β, then also its target is in F_β. It follows that F_β is the set of literals derived from ψ(meta(x), z) ∧ ⟦α⟧. Together with the assumption ψ(meta(x), z) ∧ ⟦α⟧ ⊢1 ⟦l⟧, we obtain ⟦l⟧ ∈ F_β. By the equivalences above, f(x) ∧ α ⊨ l as required. ∎

### 4.4 Size Estimate

The main result of this section is contained in the following theorem.

###### Theorem 4.6.

Let D be a PC-BDMC sentence representing a function f with input variables x. Assume that D has N nodes with L leaves and E edges. Let us denote by φ_i(x_i, y_i) the PC encoding of the function f_i associated with a leaf v_i. Let us further denote by m the total number of auxiliary variables, by S the total length of all PC encodings associated with the leaves of D, and by ℓ the maximum length of a clause in any of the encodings associated with the leaves of D. Then f has a PC encoding φ(x, z) satisfying

|z| = O(m + nN),  (16)
|φ| = O(S + nE), and  (17)
∥φ∥ = O(ℓS + nE).  (18)

###### Proof.

If D consists of a single node v_1, then v_1 is the root and the only leaf of D. In this case φ_1(x_1, y_1) is a PC encoding of f(x). The size and length of this encoding are both upper bounded by S and the number of auxiliary variables is at most m, thus we get (16), (17), and (18).
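To make the dual rail encoding of Definition 4.1 concrete, here is a minimal Python sketch of building the Horn clauses of DR(φ) for a CNF given in DIMACS-style integer literals. The string representation of meta-variables and the function names are my own choices for illustration, not notation from the paper.

def dual_rail(cnf, variables):
    """Return the definite Horn clauses of DR(phi) from eq. (6),
    each as a pair (body, head) of meta-variables.

    cnf: list of non-empty clauses, each a list of nonzero ints
         (DIMACS-style literals); variables: positive ints.
    """
    def m(lit):                      # the meta-variable [[lit]]
        return f"[[{lit}]]"

    horn = []
    for clause in cnf:
        assert clause, "eq. (6) assumes all clauses are non-empty"
        for l in clause:
            # /\_{e in C \ {l}} [[~e]] -> [[l]]
            horn.append(([m(-e) for e in clause if e != l], m(l)))
    for x in variables:
        # [[x]] /\ [[~x]] -> [[bot]]
        horn.append(([m(x), m(-x)], "[[bot]]"))
    return horn

# For phi = (x1 v x2) /\ (~x1 v x2), unit propagation from x1 derives x2;
# correspondingly, forward chaining from [[1]] derives [[2]] in DR(phi).
print(dual_rail([[1, 2], [-1, 2]], [1, 2]))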
2021-12-01 00:14:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9245340824127197, "perplexity": 657.8926302015409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359082.76/warc/CC-MAIN-20211130232232-20211201022232-00544.warc.gz"}
http://math.stackexchange.com/questions/326849/definition-of-conditional-expectation-independence
# Definition of conditional expectation/independence

Conditional probability and conditional independence are unique almost surely, but relative to what: the conditioning field or the underlying field? More precisely, consider the case of conditional independence. Let $\left(\Omega, \mathcal{A}, P\right)$ be a probability space, let $\mathcal{B}$ be a sub-$\sigma$-algebra of $\mathcal{A}$ and let $D,E\in\mathcal{A}$. Then by definition (see, e.g., Kallenberg (1995), p. 86) $D,E$ are conditionally independent given $\mathcal{B}$ iff $$P\left(\left.D\cap E\right|\mathcal{B}\right)=P\left(\left.D\right|\mathcal{B}\right)P\left(\left.E\right|\mathcal{B}\right)\quad\mathrm{a.s.}$$ But does "a.s." mean "up to a null set $F\in\mathcal{A}$" or "up to a null set $F\in\mathcal{B}$"?

• I answered this on the other page. The answer is: BOTH, since $P(D\cap E\mid\mathcal B)$ and $P(D\mid\mathcal B)P(E\mid\mathcal B)$ are $\mathcal B$-measurable. – Did Mar 10 '13 at 21:55
• @Did: I see now. Thanks. – Evan Aad Mar 10 '13 at 22:28
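Spelling out the accepted comment as a display (the wording is mine): both sides of the defining identity are $\mathcal B$-measurable functions, hence the exceptional set itself lies in $\mathcal B$,
$$F=\bigl\{\omega\in\Omega : P(D\cap E\mid\mathcal B)(\omega)\neq P(D\mid\mathcal B)(\omega)\,P(E\mid\mathcal B)(\omega)\bigr\}\in\mathcal B,\qquad P(F)=0,$$
so a $P$-null set in $\mathcal A$ witnessing the "a.s." can always be chosen in $\mathcal B$ as well.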
2014-09-23 20:40:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9772956967353821, "perplexity": 292.24010095612743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657139669.58/warc/CC-MAIN-20140914011219-00290-ip-10-234-18-248.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-statistics/222811-marginal-distribution.html
# Math Help - Marginal distribution

1. ## solved

Hey, I am doing a homework exercise and I think I'm getting the wrong answer. I have to work out a marginal distribution from a joint distribution, so I use the definition of the marginal distribution for the continuous case: $f_Y(y)=\int_{X}f_{X,Y}(x,y)\, dx$. When I evaluate my integral it comes out infinite; could this be right? Are there other ways to do it?

The problem gives two random variables X and Y: X has density function $\frac{-x^2}{75}-\frac{6x}{75} + \frac{8}{15}$ when 0<=x<=3, and Y is distributed uniformly between half of X and two times X. To find the joint distribution I used the formula $f_{X,Y}(x,y) = f_{Y|X}(y|x)\,f_{X}(x)$, where $f_{Y|X}(y|x)$ is the density of Y when x is chosen and $f_{X}(x)$ is the density of X. The integral I'm evaluating to get the marginal distribution is
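As a sketch of setting up that integral with sympy, taking the stated densities at face value: the uniform conditional density is $f_{Y|X}(y|x)=\frac{1}{2x-x/2}=\frac{2}{3x}$ on $x/2\le y\le 2x$, and the limits below cover only the branch $2y \le 3$.

import sympy as sp

x, y = sp.symbols("x y", positive=True)

f_X = -x**2/sp.Integer(75) - 6*x/sp.Integer(75) + sp.Rational(8, 15)  # 0 <= x <= 3
f_Y_given_X = 2/(3*x)                   # Y ~ Uniform(x/2, 2x)
f_XY = f_Y_given_X * f_X                # joint, non-zero for x/2 <= y <= 2x

# The joint is non-zero where y/2 <= x <= 2y (and 0 <= x <= 3); for 2y <= 3:
f_Y_small = sp.integrate(f_XY, (x, y/2, 2*y))
print(sp.simplify(f_Y_small))           # comes out finite for 0 < y <= 3/2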
2015-04-19 00:07:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9317036271095276, "perplexity": 308.52231331936184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246636255.43/warc/CC-MAIN-20150417045716-00241-ip-10-235-10-82.ec2.internal.warc.gz"}
http://hartleymath.com/versatilemath
## A free college textbook.

Versatile Mathematics is a textbook designed for an introductory survey of mathematical applications. It is completely free and open for use and modification under a Creative Commons CC-BY-SA license.

Presentation shown at 2016 AMATYC Conference

This book is aimed at a freshman- or sophomore-level college class intended for students who are not math or engineering students, but rather taking an introductory survey course to fulfill a mathematics requirement. We wrote it specifically for such a course at the community college level, which students enter after fulfilling basic algebra requirements. Aside from that, there is really very little prerequisite knowledge required to be able to follow the text.

There are currently seven chapters:

1. Financial Mathematics
2. Growth Models
3. Statistics
4. Probability
5. Linear Programming
6. Logic
7. Set Theory

It is still a work in progress, and we will continue to add new chapters.

## Features

### Example Videos

Every example in the text has an accompanying video that can be accessed by clicking on the Example box in the margin.

### Try It Now

Clicking on the words "Try It" in the margin will lead to an interactive web page where students can try a problem similar to the example in the text and receive immediate feedback on their work.

### Free Online Homework

MyOpenMath provides free, algorithmically-generated homework for every problem in the text.

There is a growing community of like-minded educators who have decided to reduce the burden of textbook costs on our students by creating and freely sharing high-quality materials. Several members of the mathematics department at Frederick Community College in Frederick, Maryland joined this community, building on the work of others by remixing and adding to what they wrote, and the result is this textbook. We believe that knowledge does not belong to any one of us, so our job is to share it rather than hoard it. We wrote this book to accomplish that purpose.

The project was headed by Josiah Hartley, who also wrote the exercises on MyOpenMath. Chapter authors include

• Josiah Hartley
• Erum Marfani
• Val Lochman
• Evan Evans
• Dina Yagodich

and the rest of the math department at FCC provided help in reviewing and editing the text. Special thanks to Greg Coldren and Pei Taverner, in addition to those listed above, who also helped review chapters other than their own. Larry Huff created the Storyline modules that correspond to the "Try It" exercises, for which we are indebted to him.

## Thanks

We used Math in Society, another open textbook written for a similar course, as our starting point. Many of the examples, exercises, and explanations were copied from that book, and we owe the author, David Lippman, a tremendous debt of gratitude for showing us that a project like this could be done, and giving us so much to work with. He also has done a fantastic job of designing MyOpenMath, which we use for free online homework. We also used material from the OpenStax College Introductory Statistics textbook, an open peer-reviewed text.

We'd also like to thank the math department at FCC for their unflagging support, especially our department chair, Gary Hull, who not only provided us with backing and encouragement, but also gave us a booklet that he had spent many hours writing for the same course.
The administration at FCC also provided support in the form of a summer grant to write the first six chapters, so we'd like to thank them as well.
2017-12-12 17:51:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2541619539260864, "perplexity": 1728.1151018510577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517845.16/warc/CC-MAIN-20171212173259-20171212193259-00679.warc.gz"}
https://forum.kodi.tv/showthread.php?tid=363079&page=5
After scraping a movie and selecting images, Ember displays "Verifying Movie Details ..." and it really takes a long time. Ember 1.9 was much, much faster.

If you change a title, the title isn't updated in the movie list. A manual reload of the entry is always needed.

I have a possible bug and a question:
1. When I Re(Scrape)/Ctrl+I a new movie and it pops up to select the movie, it finds the correct one but I am unable to click OK. I have to change the tab, then select the movie and click OK.
2. Is it possible to list all the studios so we can choose? 1.9 could show all the studios so you could pick one; 1.10 only gets the first studio from IMDB (usually alphabetically).

(2021-06-22, 18:45)DanCooper Wrote:
(2021-06-22, 18:06)ZeDeX Wrote: Unfortunately, disabling the antivirus or adding EMM to the exceptions did not help.
If no scraper is listed in the settings it is an antivirus problem. The AV blocks the loading of the DLLs or already blocks the DLLs from being copied by the installer.

Hi Dan! Same issue here. If I apply the decimal hotfix for IMDB, all scrapers disappear. When I clean install 1.10 everything is back to normal (except of course the IMDB ratings). Tried it with the AV inactive and with AV exemptions; it doesn't seem to be the cause. Has anyone else experienced the same issue after applying the hotfix?

(2021-06-26, 15:50)Anomen Wrote: Has anyone else experienced the same issue after applying the hotfix?

Turning off the AV does not help. I even made a virtual machine from the latest image (Win10_21H1_Polish_x64.iso), turned off Windows Defender, and there are still no scraper settings after using the hotfix.

Did you guys check that it's not Windows itself blocking the dll? Right click on the file and choose Properties to see if that's the case. It doesn't give any warnings before it does that, it just happens sometimes.

(2021-06-26, 17:16)Boulder Wrote: Did you guys check that it's not Windows itself blocking the dll? Right click on the file and choose Properties to see if that's the case.

Man, you are great! I never thought it was that simple. Eh ... Windows ...

Does anyone have a working PowerShell script that starts Ember with multiple command line arguments? I'm struggling to get the script to work with spaces correctly.

Video Source options. Hi, I do not set or detect the video source by file name etc., but have noticed that if I want to set it manually I only have 2 options: https://imgur.com/CfYATNu I do have the default sources with Video Source Mapping, which contains the other formats (dvd, vhs, sdtv, etc.). Is some detection going on with the file even though I make no reference in the file/folder name, just MovieTitle (Year)? Or why can I not see the other formats in the drop down list? Thanks and cheers. Confusion is just a state of mind.
(2021-07-02, 08:13)macel Wrote: Does anyone have a working PowerShell script that starts Ember with multiple command line arguments? I'm struggling to get the script to work with spaces correctly.

Here is my script. I prefer using variables as I find it easier, but you can add what is in the arguments variable directly.

Quote:
$EmberArguments = '-nowindow -profile "Default" -scanfolder "\\path\to\tvshow" -scrapetvshows newauto all'
$EmberExe = "C:\Ember Media Manager\Ember Media Manager.exe"
# Ember Command
Start-Process -FilePath $EmberExe -ArgumentList $EmberArguments -Wait -NoNewWindow

(2021-07-02, 09:26)FlashPan Wrote: Is some detection going on with the file even though I make no reference in the file/folder name, just MovieTitle (Year)? Or why can I not see the other formats in the drop down list?

The drop down list in the Edit dialog is a current query of the existing values in the database and is independent of the values that are available in the mapping settings. This is why the mapping values only appear in the list when a film with this video source is actually available in the database.

(2021-07-02, 16:11)DanCooper Wrote: The drop down list in the Edit dialog is a current query of the existing values in the database [...]

Thank you Dan. Cheers. Confusion is just a state of mind.

Hi, just updated to 1.10 with hotfix. A couple of issues:
1. I've noticed that the "Map Video Source by File Extension" under "Settings - Miscellaneous" is broken again. You can add items to it, but they are gone after a restart of EMM.
2. If you create a Movieset, you cannot immediately open it to edit it, i.e. start to add movies. It just flashes on screen as it tries to open, then nothing. If you edit a movie that will go in that new movieset, the movieset name is in the available list, so you can add it in. Once one movie is added this way, the movieset will open for editing as normal.

Getting a bunch of errors:

Quote: 2021-07-04 16:46:33.9078,EmberAPI.Scanner,EmberAPI.Scanner.IsValidDir,5,INFO,"[Sanner] [IsValidDir] [NotValidDirIs] Path ""<path>\extrafanart"" has been skipped (path name is ""extrafanart"")",

The <path> is my edit. I make set posters! (I also take requests)

(2021-07-04, 16:51)Dragen Wrote: The <path> is my edit.

There are differences between an error, warning, trace and info ...
2021-09-28 00:36:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20478610694408417, "perplexity": 3231.6347676069886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058589.72/warc/CC-MAIN-20210928002254-20210928032254-00116.warc.gz"}
https://cstheory.stackexchange.com/questions/12245/name-this-list-of-lists-data-structure
Name this list-of-lists data structure

Is there a canonical name for the following data structure for lists of lists? Suppose we have got a list of length $Z$ of finite lists $[a_0,\dots,a_n], [b_0,\dots,b_m], [c_0,\dots,c_o], \dots$ of the same data type, but with variable length. Then we can represent them in the following way in one single data structure. Let $P = [0,p_1,p_2,\dots,p_Z]$ be a list of integers, and let $Q = [a_0,\dots,a_n,b_0,\dots,b_m,c_0,\dots,c_o,d_1,\dots ]$ be the concatenation of all list entries. We demand that for all indices $0 \leq z < Z$ we have that the $z$-th list is given by the entries of Q with indices $q$, $P[z] \leq q < P[z+1]$. Note that $P[Z]$ is the total number of elements listed. An instance of this idea is the compressed sparse rows format for sparse matrices http://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_.28CSR_or_CRS.29 I would like to know a proper name for the general idea.

• This data structure is too straightforward to have its own Proper Name. I would probably call it a "flattened array of arrays", but I literally just made that up. – Jeffε Aug 7 '12 at 23:58
• Well, linked lists are straight-forward, too. – shuhalo Aug 8 '12 at 1:05
• I would not be surprised if this data structure does not have any standard name. It is straightforward as JɛffE said, and it is not super-important like linked lists. – Tsuyoshi Ito Aug 8 '12 at 2:59
• @JɛffE: "flattened arrays" is actually the standard name for it, and they are important. They come up a lot in the theory of nested data-parallel languages -- when automatically parallelizing a program, it's very useful to get rid of indirections (both to improve cache behavior, and to make dividing the work easier). So there's a whole line of work on "flattening transformations" -- taking programs written with nested or recursive data structures and replacing them with flat data structures. – Neel Krishnaswami Aug 8 '12 at 5:10
• @Neel, you can post your comment as an answer. :) – Kaveh Aug 8 '12 at 7:28
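For what it's worth, a minimal Python sketch of the construction, using the names P and Q from the question:

def flatten(lists):
    """Build (P, Q): P[z]..P[z+1] delimits the z-th list inside Q."""
    P, Q = [0], []
    for lst in lists:
        Q.extend(lst)
        P.append(len(Q))
    return P, Q

def get(P, Q, z):
    """Return the z-th list, for 0 <= z < len(P) - 1."""
    return Q[P[z]:P[z + 1]]

P, Q = flatten([[7, 8, 9], [1, 2], [5]])
assert P == [0, 3, 5, 6] and get(P, Q, 1) == [1, 2]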
2020-01-27 05:35:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5880975127220154, "perplexity": 638.9335418161374}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694908.82/warc/CC-MAIN-20200127051112-20200127081112-00357.warc.gz"}
http://letslearnnepal.com/class-12/chemistry/physical-chemistry/chemical-kinetics/rate-of-reaction-and-factors-affecting-rate-of-reaction/
Rate of Reaction and Factors Affecting Rate of Reaction

Rate of reaction: The rate of reaction can be defined as the decrease in concentration of reactants or the increase in concentration of products per unit time.

Let us consider a general reaction in which a reactant A gives a product B:

$$\text{A} \rightarrow \text{B}$$

The rate of reaction in terms of the decrease in concentration of the reactant during the time interval ${\Delta}t$ is given by:

$$\text{Rate} = -\frac{\Delta [\text{A}]}{\Delta t}$$

The rate of reaction in terms of the increase in concentration of the product during the interval of time is given by:

$$\text{Rate} = +\frac{\Delta [\text{B}]}{\Delta t}$$

Average rate and instantaneous rate: The average rate of reaction is defined as the rate of reaction over the time interval ${\Delta}t$, whereas the instantaneous rate of reaction is defined as the rate of reaction at a particular instant. For illustration, let us consider the reaction between hydrogen peroxide (H2O2) and potassium iodide (KI):

$$\ce{2H2O2 + KI -> 2H2O + O2 + KI}$$

The rate of this reaction can be followed by monitoring the increase in concentration of iodine or the decrease in concentration of hydrogen peroxide:

$$\text{Rate} = +\frac{\Delta [\text{I}_2]}{\Delta t} = -\frac{\Delta [\text{H}_2\text{O}_2]}{\Delta t}$$

This equation gives only the average rate of reaction. For the instantaneous rate of reaction at a particular time, ${\Delta}t$ should be infinitely small, tending to zero. The instantaneous rate of reaction is therefore expressed mathematically as

$$\text{Rate} = \frac{\text{d}x}{\text{d}t}$$

where dt is a vanishingly small interval of time and dx is the change in concentration during the time interval dt.

Factors affecting the rate of a chemical reaction: The major factors which affect the rate of a chemical reaction are as follows:

1. Nature of reactants: The rate of a chemical reaction is affected by the nature of the reacting substances. For example, inorganic (ionic) reactions are very fast, whereas organic reactions are slow in nature.
2. Concentration of reactants: The rate of reaction increases with the increase in concentration of the reactants. A higher concentration increases the number of collisions between the molecules, and the increase in effective collisions enhances the rate of reaction.
3. Surface area: The rate of reaction increases with the increase in surface area. Lumps of limestone react slowly with dilute hydrochloric acid (dil. HCl), but powdered limestone reacts faster, because more molecules come in contact with the acid, i.e. the exposed area is larger.
4. Temperature: With the increase in temperature, the rate of a chemical reaction also increases. In most cases, the rate roughly doubles for every 10 °C rise in temperature. When the temperature is increased, the kinetic energy of the molecules increases, which increases the effective collisions between the molecules and hence enhances the rate of reaction.
5. Use of catalyst: The rate of reaction is also affected by the addition of a third substance called a catalyst. For example, the decomposition of H2O2 takes place faster when a catalyst is added.
6. Effect of radiation: The rate of a photochemical reaction is affected by radiation.
For example: The reaction of methane and chlorine takes place slowly in the absence of sunlight, but in the presence of sunlight the reaction rate increases.
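As a numerical illustration of average versus instantaneous rate, the short Python sketch below estimates both with finite differences; the concentration readings are hypothetical, chosen only to demonstrate the calculation.

import numpy as np

t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])      # time in s
c = np.array([1.00, 0.80, 0.64, 0.51, 0.41])     # hypothetical [A] in mol/L

# Average rate over the whole interval: -delta[A]/delta t
avg_rate = -(c[-1] - c[0]) / (t[-1] - t[0])

# Instantaneous rate at t = 20 s, approximated by a central difference
inst_rate = -(c[3] - c[1]) / (t[3] - t[1])

print(f"average rate           = {avg_rate:.4f} mol L^-1 s^-1")
print(f"instantaneous (t=20 s) = {inst_rate:.4f} mol L^-1 s^-1")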
2018-10-21 21:01:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7089645862579346, "perplexity": 867.3279795292581}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514355.90/warc/CC-MAIN-20181021203102-20181021224602-00189.warc.gz"}
http://eprints.iisc.ernet.in/18624/
# A novel method to control oxygen stoichiometry and thermoelectric properties in $(RE)BaCo_2O_{5+\delta}$

Dasgupta, T and Sumithra, S and Umarji, AM (2008) A novel method to control oxygen stoichiometry and thermoelectric properties in $(RE)BaCo_2O_{5+\delta}$. In: Bulletin of Materials Science, 31 (6). pp. 859-862.

Preview PDF 1.pdf - Published Version

Official URL: http://www.ias.ac.in/matersci/bmsnov2008/859.pdf

## Abstract

Rare earth cobaltites of the type $(RE)BaCo_2O_{5+\delta}$ (RE = Y, Gd, Eu and Nd) were synthesized by the solid state technique. A novel, fast quenching technique was used to tune the oxygen content in these compounds. Room temperature Seebeck and electrical resistivity measurements were used to infer the oxygen content. A maximum in $S$ and $\rho$ was observed for all the compositions when the $\delta$ value was close to 0.5.

Item Type: Journal Article
Copyright of this article belongs to Indian Academy of Sciences.
Oxides; chemical synthesis; X-ray diffraction; thermogravimetric analysis; electrical properties.
Division of Chemical Sciences > Materials Research Centre
06 Nov 2009 05:09
19 Sep 2010 05:24
http://eprints.iisc.ernet.in/id/eprint/18624
2015-02-28 14:12:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47246837615966797, "perplexity": 7696.9922555414405}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461988.0/warc/CC-MAIN-20150226074101-00238-ip-10-28-5-156.ec2.internal.warc.gz"}
http://www2.macaulay2.com/Macaulay2/doc/Macaulay2-1.19/share/doc/Macaulay2/Macaulay2Doc/html/_generic__Skew__Matrix.html
# genericSkewMatrix -- make a generic skew symmetric matrix of variables

## Synopsis

• Usage: genericSkewMatrix(R,r,n)
• Inputs:
  • R, a ring
  • r, a ring element, which is a variable in the ring R (this input is optional)
  • n, an integer
• Outputs:
  • a skew symmetric matrix with n rows whose entries above the diagonal are the variables of R starting with r

## Description

A square matrix M is skew symmetric if transpose(M) + M == 0.

i1 : R = ZZ[a..z];

i2 : M = genericSkewMatrix(R,a,3)

o2 = | 0  a  b |
     | -a 0  c |
     | -b -c 0 |

o2 : Matrix R^3 <--- R^3

i3 : transpose(M) + M == 0

o3 = true

i4 : genericSkewMatrix(R,d,5)

o4 = | 0  d  e  f  g |
     | -d 0  h  i  j |
     | -e -h 0  k  l |
     | -f -i -k 0  m |
     | -g -j -l -m 0 |

o4 : Matrix R^5 <--- R^5

Omitting the input r is the same as having r be the first variable in R.

i5 : genericSkewMatrix(R,3)

o5 = | 0  a  b |
     | -a 0  c |
     | -b -c 0 |

o5 : Matrix R^3 <--- R^3

i6 : genericSkewMatrix(R,5)

o6 = | 0  a  b  c  d |
     | -a 0  e  f  g |
     | -b -e 0  h  i |
     | -c -f -h 0  j |
     | -d -g -i -j 0 |

o6 : Matrix R^5 <--- R^5
2023-02-03 20:16:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2597130537033081, "perplexity": 1059.84687440233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500074.73/warc/CC-MAIN-20230203185547-20230203215547-00233.warc.gz"}
https://datascienceassn.org/content/bounds-quantum-evolution-complexity-lattice-cryptography
# Bounds on quantum evolution complexity via lattice cryptography

October, 2022

Abstract

We address the difference between integrable and chaotic motion in quantum theory as manifested by the complexity of the corresponding evolution operators. Complexity is understood here as the shortest geodesic distance between the time-dependent evolution operator and the origin within the group of unitaries. (An appropriate 'complexity metric' must be used that takes into account the relative difficulty of performing 'nonlocal' operations that act on many degrees of freedom at once.) While simply formulated and geometrically attractive, this notion of complexity is numerically intractable save for toy models with Hilbert spaces of very low dimensions. To bypass this difficulty, we trade the exact definition in terms of geodesics for an upper bound on complexity, obtained by minimizing the distance over an explicitly prescribed infinite set of curves, rather than over all possible curves. Identifying this upper bound turns out to be equivalent to the closest vector problem (CVP) previously studied in integer optimization theory, in particular, in relation to lattice-based cryptography. Effective approximate algorithms are hence provided by the existing mathematical considerations, and they can be utilized in our analysis of the upper bounds on quantum evolution complexity. The resulting algorithmically implemented complexity bound systematically assigns lower values to integrable than to chaotic systems, as we demonstrate by explicit numerical work for Hilbert spaces of dimensions up to ~10^4.
2022-11-26 18:56:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6820294857025146, "perplexity": 603.3072586852355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708046.99/warc/CC-MAIN-20221126180719-20221126210719-00862.warc.gz"}
https://www.transtutors.com/questions/87-lake-torrens-boating-company-is-interested-in-replacing-a-moulding-machine-with-a-4328624.htm
# 87) Lake Torrens Boating Company is interested in replacing a moulding machine with a new improved..

87) Lake Torrens Boating Company is interested in replacing a moulding machine with a new improved model. The old machine has a salvage value of $20 000 now and a predicted salvage value of $4000 in six years, if rebuilt. If the old machine is kept, it must be rebuilt in one year at a predicted cost of $40 000. The new machine costs $160 000 and has a predicted salvage value of $24 000 at the end of six years. If purchased, the new machine will allow cash savings of $40 000 for each of the first three years, and $20 000 for each year of its remaining six-year life.

Required: What is the net present value of purchasing the new machine if the company has a required rate of return of 14%?

88) Book & Bible Bookstore desires to buy a new coding machine to help control book inventories. The machine sells for $36 586 and requires working capital of $4000. Its estimated useful life is five years and it will have a salvage value of $4000. Recovery of working capital will be $4000 at the end of its useful life. Annual cash savings from the purchase of the machine will be $10 000.

Required:
a. Compute the net present value at a 14% required rate of return.
b. Compute the internal rate of return.
c. Determine the payback period of the investment.

Ankita G, Solution 87:

New machine cost: $160,000
Old machine salvage value: $20,000
Incremental investment: $140,000

Estimated saving in rebuilt cost: $40,000

New machine salvage value: $24,000
Old machine salvage value: $4,000
Incremental salvage value: $20,000

Year | Incremental capital cost | Rebuilt cost | Incremental cashflow | Total cashflow | PV factor @ 14% | PV
0 | $(140,000) | | | $(140,000) | 1.0000 | $(140,000.00)
1 | | $40,000 | $20,000 | $60,000 | 0.8772 | $52,631.58
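A quick way to check the arithmetic is to discount the incremental cash flows directly. A sketch in Python follows; the quoted solution table stops at year 1, so the rows for years 2-6 are my reading of the problem statement, not part of the quoted solution.

def npv(rate, cashflows):
    """cashflows[t] = net incremental cash flow at end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in cashflows.items())

cashflows = {
    0: -140_000,           # new machine cost less old machine salvage (from the table)
    1: 60_000,             # year-1 total from the table
    2: 40_000, 3: 40_000,  # assumed: cash savings, years 2-3
    4: 20_000, 5: 20_000,  # assumed: cash savings, years 4-5
    6: 20_000 + 20_000,    # assumed: year-6 savings plus incremental salvage
}
print(round(npv(0.14, cashflows), 2))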
2020-11-30 23:16:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2749258279800415, "perplexity": 7736.319093941546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141515751.74/warc/CC-MAIN-20201130222609-20201201012609-00489.warc.gz"}
http://math.stackexchange.com/questions/838000/formula-for-a-surface-of-revolution
# Formula for a surface of revolution

The curve $y=\sqrt{x^2+1}, 0\leqslant{x}\leqslant{\sqrt{2}}$, which is part of the upper branch of the hyperbola $y^2-x^2=1$, is revolved about the x-axis to generate a surface. Find the area of the surface.

My plan is first to calculate the equation of the surface, then use a surface integral to calculate its surface area. Then what is the equation of this surface? Thanks.

In the plane, $y$ is the distance of the point $(x, y)$ from the $x$-axis. When you revolve the curve about the $x$-axis, this distance should remain the same for every point $(x, y, z)$ on the surface. That is, the distance of $(x, y, z)$ from the $x$-axis should be equal to the distance of the generating point $(x, y)$ from the $x$-axis. Originally, $y^2 = x^2 + 1$, $0 \le x \le \sqrt 2$. Now, therefore: $$\boxed{y^2 + z^2 = x^2 + 1,\ 0 \le x \le \sqrt 2}$$ Another way to think of it is that each point $(x, y)$ on the curve generates a circle on the surface with radius equal to the height $y$ of the point. I hope that you know of formulas that allow you to directly calculate the surface area (and volume) of revolution, rather than finding the equation of the surface and then using integration on that.

There is a standard formula for the area of a surface of revolution obtained by rotating $y=f(x)$ about the $x$-axis, from $x=a$ to $x=b$. It says that the area is $$\int_a^b 2\pi f(x)\,ds,$$ where $ds=\sqrt{1+(f'(x))^2}\,dx$. In our case, $f(x)=\sqrt{x^2+1}$ and therefore $f'(x)=\frac{x}{\sqrt{x^2+1}}$.

Remarks: $1.$ The idea behind the formula is that we look at the little bit of area swept out by the part of the curve from $x$ to $x+dx$. The arclength of this little bit of curve is approximately $ds=\sqrt{1+(f'(x))^2}\,dx$. So this part of the curve sweeps out a "ribbon" with radius $y=f(x)$ and width $ds$. The approximate area of the ribbon is then $2\pi f(x)\,ds$. We "add up" these ribbon areas from $x=a$ to $x=b$.

$2.$ The integration will not be immediate. You can let $\sqrt{2}\,x=\tan\theta$ or $\sqrt{2}\,x=\sinh t$.
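For completeness, the integral can be pushed through; a worked evaluation under the substitution suggested above (the arithmetic here is mine, so worth double-checking). First,
$$2\pi f(x)\,ds=2\pi\sqrt{x^2+1}\,\sqrt{\frac{2x^2+1}{x^2+1}}\,dx=2\pi\sqrt{2x^2+1}\,dx,$$
and then, with $u=\sqrt{2}\,x$,
$$\text{Area}=2\pi\int_0^{\sqrt 2}\sqrt{2x^2+1}\,dx=\pi\left(\sqrt{10}+\frac{\ln(2+\sqrt 5)}{\sqrt 2}\right)\approx 13.14.$$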
2015-04-19 04:41:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9652965068817139, "perplexity": 113.47356740729539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246637445.19/warc/CC-MAIN-20150417045717-00130-ip-10-235-10-82.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/577933/images-with-the-eps-gz-extension-can-no-longer-be-included
# Images with the 'eps.gz' extension can no longer be included

The following snippet of code used to work under Linux Mint 19, but fails under Linux Mint 20:

\documentclass{article}
\usepackage{graphicx}
\begin{document}
\includegraphics{test.eps.gz}
\end{document}

Command: latex test.tex

Log output:

This is pdfTeX, Version 3.14159265-2.6-1.40.20 (TeX Live 2019/Debian) (preloaded format=latex)
restricted \write18 enabled.
entering extended mode
(./test.tex
LaTeX2e <2020-02-02> patch level 2
L3 programming layer <2020-02-14>
(/usr/share/texlive/texmf-dist/tex/latex/base/article.cls
Document Class: article 2019/12/20 v1.4l Standard LaTeX document class
(/usr/share/texlive/texmf-dist/tex/latex/base/size10.clo))
(/usr/share/texlive/texmf-dist/tex/latex/graphics/graphicx.sty
(/usr/share/texlive/texmf-dist/tex/latex/graphics/keyval.sty)
(/usr/share/texlive/texmf-dist/tex/latex/graphics/graphics.sty
(/usr/share/texlive/texmf-dist/tex/latex/graphics/trig.sty)
(/usr/share/texlive/texmf-dist/tex/latex/graphics-cfg/graphics.cfg)
(/usr/share/texlive/texmf-dist/tex/latex/graphics-def/dvips.def)))
(/usr/share/texlive/texmf-dist/tex/latex/l3backend/l3backend-dvips.def)
No file test.aux.
! TeX capacity exceeded, sorry [input stack size=5000].
\Gin@ext ->\Gin@ext .gz
l.6 \includegraphics{test.eps.gz}
No pages of output.
Transcript written on test.log.

Under previous LaTeX versions, the image would be properly included or properly reported as missing, i.e.:

! LaTeX Error: File `test.eps' not found.

Have I been doing something wrong all along or is this a new bug that should be properly reported to the development team?

• if I remember right, even without the error it won't work, as the rule for dvips to unpack the gz no longer works (due to new security settings which disable the backtick syntax). So better unpack the file before using it. – Ulrike Fischer Jan 7 at 13:32
• @UlrikeFischer Thank you for the insight about the new security settings, I was not aware of that. Maybe you have a reference to a document where this decision is written up in more detail? I will be sure to use the unarchived images in my future work, but I would also like to find a simple workaround if possible to accommodate the already existing projects. – Vilkas Jan 7 at 14:46
• You could probably write a \DeclareGraphicsRule to do the conversion in-place. – Marijn Jan 7 at 14:48
• In fact an example of exactly your use case is given on latexref.xyz/_005cDeclareGraphicsRule.html at the end of the page. – Marijn Jan 7 at 14:49
• @Marijn I was referring to the gunzip part: this no longer works. You can't do the conversion in-place any longer unless you enable --shell-escape and use some other command. – Ulrike Fischer Jan 7 at 14:56

This is supposed to work, and if you use

\documentclass{article}
\usepackage{graphicx}
\begin{document}
x \special{PSfile="test.eps.gz"} y
\end{document}

with latex, dvips, ps2pdf you should find that dvips does uncompress the image. What doesn't work there is that latex will not leave the correct space. The simplest workaround is to uncompress the eps (if you are using pdflatex rather than latex+dvips then it is anyway better to pre-convert to PDF).

The original somewhat ancient documentation is misleading here, as the DeclareGraphicsRule examples were written at a time when dvips allowed you to run any command via the backtick syntax. This has not been allowed for years (decades, probably), but dvips does have a built-in rule for .gz files and will uncompress them itself.
This was working, but seems to have broken, probably relating to changes made to allow spaces and accented letters in file names. I'll look later to see if there is a more correct fix.

The graphics rule in the current dvips.def for .eps.gz does not use the gunzip decompression step; it just passes the filename directly to dvips:

\@namedef{Gin@rule@.eps.gz}#1{{eps}{.eps.bb}{#1}}

• Thank you. I can confirm that the \special approach works. However, since I use a system that scans the tex file for dependencies and generates some of the images on the fly, I will most likely migrate to using unarchived eps files as suggested by you (and previously by @UlrikeFischer). Should I still properly report this issue to the LaTeX developers (i.e. on GitHub), or can this be considered reported, given that you are an active contributor to the project? – Vilkas Jan 8 at 14:30
• @Vilkas yes, I wasn't suggesting you use the \special, just that the basic functionality is there and we broke it sometime in the 2019 timeframe. If that \special had not worked, then the uncompress functionality would have gone from the driver, in which case I could do nothing other than document that fact. I will try to look this weekend at a fix. If I don't do it, then you could drop an issue in GitHub so we don't forget, but you have already reached the main developers :-) – David Carlisle Jan 8 at 15:07
• Great, thank you. :) – Vilkas Jan 8 at 15:20

This is a LaTeX bug which has [supposedly] been fixed just a few days ago (May 2021); see here the answer by Frank Mittelbach, in TeX Live 2021. It will probably take quite some time before this [fixed] release makes it into our Linux distributions. For example, it is not [even] in the unstable Debian Sid yet.
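For already existing projects, the pre-decompression route suggested above might look like the following sketch (the file name is a placeholder; the gunzip step happens once in the shell, outside TeX, and -k keeps the compressed original where supported):

```latex
% Run once before compiling:  gunzip -k test.eps.gz
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\includegraphics{test.eps}% reference the uncompressed EPS directly
\end{document}
```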
2021-06-13 18:16:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6478821039199829, "perplexity": 2659.7951372872076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610196.46/warc/CC-MAIN-20210613161945-20210613191945-00541.warc.gz"}
https://beta.mxnet.io/api/gluon/_autogen/mxnet.gluon.contrib.rnn.LSTMPCell.html
# mxnet.gluon.contrib.rnn.LSTMPCell

class mxnet.gluon.contrib.rnn.LSTMPCell(hidden_size, projection_size, i2h_weight_initializer=None, h2h_weight_initializer=None, h2r_weight_initializer=None, i2h_bias_initializer='zeros', h2h_bias_initializer='zeros', input_size=0, prefix=None, params=None)[source]

Long-Short Term Memory Projected (LSTMP) network cell. (https://arxiv.org/abs/1402.1128)

Each call computes the following function:

$\begin{split}\begin{array}{ll} i_t = \mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{ri} r_{(t-1)} + b_{ri}) \\ f_t = \mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{rf} r_{(t-1)} + b_{rf}) \\ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{rc} r_{(t-1)} + b_{rg}) \\ o_t = \mathrm{sigmoid}(W_{io} x_t + b_{io} + W_{ro} r_{(t-1)} + b_{ro}) \\ c_t = f_t * c_{(t-1)} + i_t * g_t \\ h_t = o_t * \tanh(c_t) \\ r_t = W_{hr} h_t \end{array}\end{split}$

where $$r_t$$ is the projected recurrent activation at time t, $$h_t$$ is the hidden state at time t, $$c_t$$ is the cell state at time t, $$x_t$$ is the input at time t, and $$i_t$$, $$f_t$$, $$g_t$$, $$o_t$$ are the input, forget, cell, and out gates, respectively.

Parameters:

- hidden_size (int) – Number of units in cell state symbol.
- projection_size (int) – Number of units in output symbol.
- i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the linear transformation of the inputs.
- h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the linear transformation of the hidden state.
- h2r_weight_initializer (str or Initializer) – Initializer for the projection weights matrix, used for the linear transformation of the recurrent state.
- i2h_bias_initializer (str or Initializer, default 'lstmbias') – Initializer for the bias vector. By default, bias for the forget gate is initialized to 1 while all other biases are initialized to zero.
- h2h_bias_initializer (str or Initializer) – Initializer for the bias vector.
- prefix (str, default 'lstmp_') – Prefix for name of Blocks (and name of weight if params is None).
- params (Parameter or None) – Container for weight sharing between cells. Created if None.

Inputs:

- data: input tensor with shape (batch_size, input_size).
- states: a list of two initial recurrent state tensors, with shape (batch_size, projection_size) and (batch_size, hidden_size) respectively.

Outputs:

- out: output tensor with shape (batch_size, num_hidden).
- next_states: a list of two output recurrent state tensors. Each has the same shape as states.

__init__(hidden_size, projection_size[, …]) – Initialize self. See help(type(self)) for accurate signature.

Methods:

- apply(fn) – Applies fn recursively to every child block as well as self.
- begin_state([batch_size, func]) – Initial state for this cell.
- cast(dtype) – Cast this Block to use another data type.
- collect_params([select]) – Returns a ParameterDict containing this Block's and all of its children's Parameters (default); can also return a selected ParameterDict matching given regular expressions.
- export(path[, epoch]) – Export HybridBlock to json format that can be loaded by SymbolBlock.imports, mxnet.mod.Module or the C++ interface.
- forward(inputs, states) – Unrolls the recurrent cell for one time step.
- hybrid_forward(F, inputs, states, …) – Overrides to construct symbolic graph for this Block.
- hybridize([active]) – Activates or deactivates HybridBlocks recursively.
- infer_shape(*args) – Infers shape of Parameters from inputs.
- infer_type(*args) – Infers data type of Parameters from inputs.
- initialize([init, ctx, verbose, force_reinit]) – Initializes Parameters of this Block and its children.
- load_parameters(filename[, ctx, …]) – Load parameters from file previously saved by save_parameters.
- load_params(filename[, ctx, allow_missing, …]) – [Deprecated] Please use load_parameters.
- name_scope() – Returns a name space object managing a child Block and parameter names.
- register_child(block[, name]) – Registers block as a child of self.
- register_forward_hook(hook) – Registers a forward hook on the block.
- register_forward_pre_hook(hook) – Registers a forward pre-hook on the block.
- reset() – Reset before re-using the cell for another graph.
- save_parameters(filename) – Save parameters to file.
- save_params(filename) – [Deprecated] Please use save_parameters.
- state_info([batch_size]) – Shape and layout information of states.
- summary(*inputs) – Print the summary of the model's output and parameters.
- unroll(length, inputs[, begin_state, …]) – Unrolls an RNN cell across time steps.

Attributes:

- name – Name of this Block, without '_' in the end.
- params – Returns this Block's parameter dictionary (does not include its children's parameters).
- prefix – Prefix of this Block.
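A minimal usage sketch of this cell (not from the docs above; the sizes are arbitrary examples chosen for illustration):

```python
from mxnet import nd
from mxnet.gluon.contrib.rnn import LSTMPCell

# Hypothetical sizes: 16-dim inputs, 64 cell units, projected down to 32.
cell = LSTMPCell(hidden_size=64, projection_size=32)
cell.initialize()

x = nd.random.uniform(shape=(8, 16))       # (batch_size, input_size)
states = cell.begin_state(batch_size=8)    # [(8, 32) projected state, (8, 64) cell state]

out, next_states = cell(x, states)         # one time step
print(out.shape)                           # (8, 32): the projected output
```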
2019-02-16 11:43:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26230505108833313, "perplexity": 8299.249170864903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480272.15/warc/CC-MAIN-20190216105514-20190216131514-00108.warc.gz"}
https://stats.stackexchange.com/questions/82526/fisher-information-of-a-statistic
# Fisher information of a statistic

I have a random sample $(X_1, X_2,...,X_n)$ and an estimator $\bar{X_n}=\sum_{i=1}^{n} X_i$. I need to compute the Fisher information of $\bar{X_n}$. The Fisher information is defined as $-E\left(\frac{d^2}{d\theta^2}\log L\right)$, where $L$ is the likelihood function. My question is: to compute the Fisher information of the estimator (NOT the random sample, but a function of the random sample), should we take the likelihood function of the random sample or the likelihood function of the distribution?

• Could you tell us what you mean by a "likelihood function of [a] distribution"? – whuber Jan 16 '14 at 22:57
• Well, the likelihood function of the random sample would be the joint distribution of the n random variables. The likelihood function of the estimator would be the probability of observing a specific value of the estimator... Jan 16 '14 at 23:34
• OK, thanks for the clarification. Consider what the difference in $\frac{d^2}{d\theta^2}\log L$ would be between the two likelihoods. – whuber Jan 16 '14 at 23:40
• They are not always the same! Consider $f_{X_i}(x_i,\theta)=\exp[-(x_i-\theta)]\cdot I_{(\theta,+\infty)} (x_i)$. The maximum likelihood estimator would be $\min(X_i)$, and the two likelihood functions would be $L(X_1,...,X_n)=\exp(n\theta-\sum x_i)\cdot \prod I_{(\theta,+\infty)} (x_i)$ and $L(\min(X_i))=n \cdot \exp(-n\cdot \min(X_i)+n \theta)\cdot I_{(0,+\infty)} (\min(X_i)-\theta)$. Jan 17 '14 at 9:51
• I noticed that the sample mean is missing the factor 1/n. Jan 8 '17 at 16:50

There is no Fisher information of the estimator, just the Fisher information of a random sample $\theta$. In Wikipedia, it says:

In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter $\theta$ upon which the probability of X depends.

So it is true that Fisher information is a kind of connection between two random variables, rather than a property of some estimator, which is a function of X.

• $\theta$ is the parameter, not a sample. The Fisher information is available before you have any samples. Feb 7 '16 at 23:35

I'm pretty sure that you've got some terminology mixed up. Fisher's information is a function of the data, just like an estimator such as $\bar{X}_{n}$, that gives you an idea of how much information about the parameter of interest is contained in the sample you've acquired. You can compute Fisher's information at an estimator (this is usually done because the F.I. depends on the unknown parameter being estimated), and we typically use the plug-in estimator consisting of the F.I. evaluated at the MLE.

• I don't think Fisher information is a function of the data. Feb 7 '16 at 23:31
• Random variables are integrated out by expectation as we calculate the Fisher information. So it's not a function of the data; it's rather a function of the parameter. Aug 21 '16 at 13:57
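A concrete worked case (my own example, not from the thread): for an i.i.d. sample $X_1,\dots,X_n \sim N(\theta,\sigma^2)$ with $\sigma^2$ known, the information in the sample mean equals the information in the whole sample, because $\bar{X}_n$ is sufficient for $\theta$:

$$\bar{X}_n \sim N\!\left(\theta, \frac{\sigma^2}{n}\right), \qquad \log L(\theta;\bar{x}) = -\frac{n(\bar{x}-\theta)^2}{2\sigma^2} + \text{const}, \qquad -E\!\left[\frac{d^2}{d\theta^2}\log L\right] = \frac{n}{\sigma^2},$$

which matches the sample information $n \cdot \frac{1}{\sigma^2}$. For a statistic that is not sufficient, the Fisher information can only be smaller than that of the full sample.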
2021-10-26 07:50:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8448746800422668, "perplexity": 226.66975008148168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587854.13/warc/CC-MAIN-20211026072759-20211026102759-00357.warc.gz"}
https://physics.stackexchange.com/questions/608489/understanding-tensor-and-covariance
# Understanding tensor and covariance I'm really struggling to understand the use of tensors when we want to have a covariant equation. From what I understand, if we write an equation using tensors only, then the physics behind it will be independent of the choice of the coordinate system. For example, written in a covariant form, the Maxwell's equations are: \begin{aligned}&{\frac {\partial F^{\alpha \beta }}{\partial x^{\alpha }}}=\mu _{0}J^{\beta }\\&{\frac {\partial G^{\alpha \beta }}{\partial x^{\alpha }}}=0\end{aligned} Maxwell's equations are not covariant under Galilean transformations. But since they can be written as tensorial equations, shouldn't they also be covariant under Galilean transformations (or any change of coordinate system)? Is it because there is a partial derivative? Then, why does Wikipedia say that these equations are manifestly-covariant? In this case, any equation written only with "propers tensors" (so no partial derivative for example) will be covariant under any choice of frame transformation (Galilean or Lorentz)? Is it the presence of a derivative that determines if an equation is covariant under Galilean or Lorentz transformation? As you have (impressively!) already guessed, the presence of derivatives is typically the main problem. To get something generally covariant, you need to replace derivatives $$\partial_\alpha$$ by covariant derivatives $$\nabla_\alpha$$. However, the derivatives are actually not the problem in your particular case. Covariant derivatives come into play on a curved spacetime or when dealing with nonlinear coordinate transformations, neither of which you have. I usually don't think about "Galilean covariance", so don't trust me 100%, but I think the equations you write down will indeed be preserved under a Galilean transformation. What will break is the relationship between $$F$$ and $$G$$, namely the equation $$G^{\alpha\beta} = \frac{1}{2} \epsilon^{\alpha\beta\gamma\delta} F_{\gamma\delta}.$$ The $$\epsilon$$ tensor is defined explicitly in terms of the spacetime metric $$g_{\alpha\beta}$$, which is a concept that doesn't make sense in Galilean spacetime, so this equation cannot be preserved by Galilean transformations.
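For reference, the covariant derivative mentioned in the answer is the standard general-relativity one; acting on a vector field it reads (textbook definition, not from the answer itself):

$$\nabla_\alpha V^\beta = \partial_\alpha V^\beta + \Gamma^\beta_{\ \alpha\gamma} V^\gamma,$$

where $\Gamma^\beta_{\ \alpha\gamma}$ are the Christoffel symbols of the metric. On flat spacetime in linear (inertial) coordinates the $\Gamma$'s vanish and $\nabla_\alpha$ reduces to $\partial_\alpha$, which is why the Maxwell equations above need no extra terms under Lorentz transformations.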
2021-07-24 03:34:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9251066446304321, "perplexity": 391.4478167346789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150129.50/warc/CC-MAIN-20210724032221-20210724062221-00288.warc.gz"}
https://blender.stackexchange.com/questions/146714/removing-all-material-slots-in-one-go
# Removing all material slots in one go

I need some help in removing all material slots in one go using a python script. I am able to use this, but would prefer to have all of them removed:

bpy.context.object.active_material_index = 0
bpy.ops.object.material_slot_remove()
bpy.context.object.active_material_index = 1
bpy.ops.object.material_slot_remove()
bpy.context.object.active_material_index = 2
bpy.ops.object.material_slot_remove()

• You'll find some good answers here Jul 29 '19 at 17:07

import bpy  # the Blender Python API

obj = bpy.context.object
# Remove one slot per iteration; iterating over a fixed range avoids
# mutating obj.material_slots while looping over it.
for _ in range(len(obj.material_slots)):
    obj.active_material_index = 0          # select the top material slot
    bpy.ops.object.material_slot_remove()  # delete it

You can set the index of the list to 0, iterate through all slots and override the context of material_slot_remove():

for obj in bpy.context.selected_editable_objects:
    obj.active_material_index = 0
    for i in range(len(obj.material_slots)):
        bpy.ops.object.material_slot_remove({'object': obj})

• Can you elaborate the part of overriding the context? Apr 19 '20 at 17:09
• Further reading: poll() failed, context incorrect? @MikeW Apr 19 '20 at 18:40
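A data-API alternative that avoids operators entirely (a sketch, assuming the object carries its material slots on its data, e.g. a mesh; this approach is not from the answers above):

```python
import bpy

obj = bpy.context.object
# Clearing the material list on the object's data drops all slots at once,
# with no dependence on the operator's context.
obj.data.materials.clear()
```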
2022-01-16 21:20:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20286612212657928, "perplexity": 3282.8921947864105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300244.42/warc/CC-MAIN-20220116210734-20220117000734-00284.warc.gz"}
https://zbmath.org/?q=an:1339.05365
## The measurable Kesten theorem. (English) Zbl 1339.05365

Summary: We give an explicit bound on the spectral radius in terms of the densities of short cycles in finite $$d$$-regular graphs. It follows that a finite $$d$$-regular Ramanujan graph $$G$$ contains a negligible number of cycles of size less than $$c\log\log| G|$$. We prove that infinite $$d$$-regular Ramanujan unimodular random graphs are trees. Through Benjamini-Schramm convergence this leads to the following rigidity result. If most eigenvalues of a $$d$$-regular finite graph $$G$$ fall in the Alon-Boppana region, then the eigenvalue distribution of $$G$$ is close to the spectral measure of the $$d$$-regular tree. In particular, $$G$$ contains few short cycles. In contrast, we show that $$d$$-regular unimodular random graphs with maximal growth are not necessarily trees.

### MSC:

05C80 Random graphs (graph-theoretic aspects)
05C05 Trees
05C38 Paths and cycles
05C50 Graphs and linear algebra (matrices, eigenvalues, etc.)
60G50 Sums of independent random variables; random walks
82C41 Dynamics of random walks, random surfaces, lattice animals, etc. in time-dependent statistical mechanics
2022-08-11 18:03:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8274983763694763, "perplexity": 2351.2771561824907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571483.70/warc/CC-MAIN-20220811164257-20220811194257-00098.warc.gz"}
https://www.postonline.co.uk/post/news/1202472/monte-carlo-mourning
# Monte Carlo in mourning

No-one will forget this year's Monte Carlo Rendez-Vous. The preliminary negotiations for 1 January reinsurance renewals were taking place as usual in the Cafe de Paris and the lobbies of the hotels
2020-10-19 23:47:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22886282205581665, "perplexity": 7800.751529434252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107867463.6/warc/CC-MAIN-20201019232613-20201020022613-00692.warc.gz"}
https://www.yaclass.in/p/mathematics/class-7/data-handling-1486/arithmetic-mean-1952/re-a9ccfdfd-38c0-494b-b9e3-137559bb817b
### Exercise condition: 4

The average (arithmetic mean) of $$A, B$$ and $$C$$ is 6. If $$D$$ is 1, then what is the average of $$A, B, C$$ and $$D$$?
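A quick worked solution (mine, not shown on the page): since the mean of the three values is 6, their sum is $$3 \cdot 6 = 18$$; adding $$D = 1$$ gives

$$\frac{A+B+C+D}{4} = \frac{18+1}{4} = \frac{19}{4} = 4.75.$$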
2020-02-26 06:55:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3834666311740875, "perplexity": 622.9295518254025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146187.93/warc/CC-MAIN-20200226054316-20200226084316-00236.warc.gz"}
https://lobste.rs/s/koksui/how_create_presentations_with_beamer
opensource.com

• I feel like it's worth mentioning that some people find using Beamer a bit of a curse. Nothing makes a presentation less engaging than piles of equations, tiny source code, and bullet points, but that's precisely what Beamer makes easy to add. I think some of the javascript libraries for presentations are a better fit, as they make it easy to embed videos, animations and transitions that guide the eye to what matters. Unless you need to be able to send someone a pdf of the presentation, I'd hesitate to recommend using this library without large amounts of discipline.

  • I think what's going on here is that too many people have been sitting in university rooms listening to boring lecturers giving excruciating presentations made with Beamer and filled with hundreds of bullet points. Not that I'm the biggest Beamer expert out there, but I use it for all my slides and I think the results are pretty good. Animations, videos and transitions can be abused exactly like bullet points. In an effort to escape the boring-lecturer effect, we should be careful not to err on the side of entertainment and produce presentations filled with animated gifs and almost zero content (I've seen many of those too, lately).

  • Unless you want to print the slides..?

  • At university this has become quite popular. Instead of lecture notes we just have densely populated Beamer presentations, which seem good neither to read during a lecture nor to learn from afterwards. I think it's a pity that many of the more interactive features of Beamer beyond \pause are just forgotten, ignoring seemingly all principles of good presentation-making.

    • I'm not sure even the advanced features really help. I think what matters is what a tool makes easy to do.

    • This is why I despise Beamer. Also it is a pain to use compared to alternatives.

  • I totally agree! I have used reveal.js with pleasure and success, though I used only a bare minimum of the features, as I find most stuff in presentation software distractions, not attractions.

  • What javascript libraries do you have in mind? I'm a heavy (disciplined) Beamer user and, like @ema, think I produce quality slides, but I am curious about other tools for programmatic presentation generation.

    • Truthfully, these days I use reveal.js with Jupyter notebooks (https://github.com/damianavila/RISE). I've used deck.js, reveal.js and eagle.js. Aside from needing to futz with npm, these have all been perfectly adequate. Thanks to MathJax, I can still put in an equation if it's needed. For some of them you can even use pandoc to generate the html directly from markdown: https://pandoc.org/demos.html. Like I said, if you are disciplined, Beamer can work really great. For me what counts is what the tools encourage you to do and not to do. From that standpoint, a lot of tools would have trouble outdoing sent.

• That's interesting to know. I'm in the process of converting my workshop slides from PowerPoint to Beamer.
Most of the slides are either code, short definitions, or diagrams, and I wanted to be able to easily find/replace my slides. They’re there to frame the live coding sections, so hopefully the plainness won’t be too much of a problem.
2019-06-27 10:24:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27350154519081116, "perplexity": 1634.3033827308955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001089.83/warc/CC-MAIN-20190627095649-20190627121649-00553.warc.gz"}
https://nrich.maths.org/811
### Some(?) of the Parts

A circle touches the lines OA, OB and AB, where OA and OB are perpendicular. Show that the diameter of the circle is equal to the perimeter of the triangle.

A 1 metre cube has one face on the ground and one face against a wall. A 4 metre ladder leans against the wall and just touches the cube. How high is the top of the ladder above the ground?

### At a Glance

The area of a regular pentagon looks about twice as big as the pentangle star drawn within it. Is it?

# Matter of Scale

##### Age 14 to 16 Challenge Level

Take any right-angled triangle with side lengths $a, b$ and $c$. Make two enlargements of the triangle, by scale factors $a$ and $b$. Rotate these triangles and fit them together to make a third triangle.

Prove that the resulting triangle is an enlargement of the original triangle. What is the scale factor of enlargement of the resulting triangle?

Use what you have discovered about the side lengths of the resulting triangle to come up with a proof of Pythagoras' Theorem.
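One way the pieces fit, sketched in symbols (a worked outline, not NRICH's published solution): let the original triangle have legs $a, b$ and hypotenuse $c$. The two enlargements have sides

$$\text{scale by } a:\ (a^2,\ ab,\ ac), \qquad \text{scale by } b:\ (ab,\ b^2,\ bc).$$

Joining the two copies along their common side of length $ab$, with the $a^2$- and $b^2$-sides collinear, produces a triangle with sides $ac$, $bc$ and $a^2+b^2$. Its sides $ac$ and $bc$ are $c$ times the original's $a$ and $b$, so it is the original triangle enlarged by scale factor $c$, and comparing the third sides gives

$$a^2 + b^2 = c \cdot c = c^2.$$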
2021-02-27 22:24:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3665894567966461, "perplexity": 504.1820826136173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359497.20/warc/CC-MAIN-20210227204637-20210227234637-00566.warc.gz"}
https://www.physics.uoguelph.ca/biophysics-problem-16
# Biophysics Problem 16

The great swimmer, Mark McSpitswater, dives from a diving board $\mathrm{15.0\; m}$ above the water's surface. His initial velocity is $2.00\; \mathrm{m/s}$ at an angle of $30.0^\circ$ up from the horizontal.

(a) How long does it take him to hit the water? (Recall that $x = (-b \pm \sqrt{b^2 - 4ac})/2a$ is the solution to $ax^2 + bx + c = 0$.)

#### First Step

Choose the origin at the diving board. Be careful of signs. Consider only motion in the $y$ direction.

#### Calculations

$v_{0y} = 2 \sin 30^\circ = 1\; \mathrm{m/s}$

$y = v_{0y}t + \tfrac{1}{2}a_yt^2$

With the origin at the diving board, $a_y = -9.8\; \mathrm{m/s^2}$ and $y = -15\; \mathrm{m}$:

$-15 = t - \tfrac{1}{2}(9.8)t^2$

$t^2 - 0.2041t - 3.061 = 0$

Using the solution to the quadratic equation: $t = 0.10205 \pm 1.7525$. Only the $+$ solution is valid, so $t = 1.85\; \mathrm{s}$.
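A quick consistency check on the answer (my own verification, not part of the original solution): substituting $t = 1.85\;\mathrm{s}$ back into the displacement equation,

$$y = (1)(1.85) - 4.9(1.85)^2 = 1.85 - 16.77 \approx -14.9\;\mathrm{m} \approx -15\;\mathrm{m},$$

so the diver indeed reaches the water at that time.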
2021-12-08 18:34:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32463741302490234, "perplexity": 2175.8514955296378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363520.30/warc/CC-MAIN-20211208175210-20211208205210-00329.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/the-intensity-produced-long-cylindrical-light-source-small-distance-r-source-proportional-light-process-photometry_67967
# The Intensity Produced by a Long Cylindrical Light Source at a Small Distance r from the Source Is Proportional to - Physics

MCQ Fill in the Blanks

The intensity produced by a long cylindrical light source at a small distance r from the source is proportional to _________ .

#### Options

• $\frac{1}{r^2}$
• $\frac{1}{r^3}$
• $\frac{1}{r}$
• None of these

#### Solution

$\frac{1}{r}$

Let us consider two coaxial cylindrical surfaces at distances r and r' from the axis. Let areas dA and dA' subtend the solid angle dω at the central axis. The heights of the area elements are the same, equal to dy. Let the breadth of dA be dx and that of dA' be dx'.

From the arcs,

$dx = r\,d\theta, \qquad dx' = r'\,d\theta$

Now,

$dA = dx\,dy = r\,d\theta\ dy$

$dA' = dx'\,dy = r'\,d\theta\ dy$

$\frac{dA}{dA'} = \frac{r}{r'} \;\Rightarrow\; \frac{dA}{r} = \frac{dA'}{r'} = d\omega$

The luminous flux going through the solid angle dω is dF = I dω. Now,

$dF = I\frac{dA}{r}$

If the surfaces are inclined at an angle $\alpha$,

$dF = I\frac{dA\cos\alpha}{r}$

Now, illuminance is defined as

$E = \frac{dF}{dA} = \frac{I\cos\alpha}{r}$

$\Rightarrow E \propto \frac{1}{r}$

Concept: Light Process and Photometry

#### APPEARS IN

HC Verma Class 11, Class 12 Concepts of Physics Vol. 1, Chapter 22 Photometry, MCQ Q 7, Page 454
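A complementary sanity check (a standard flux argument, not from the page): for a long line source emitting total flux $F$ over length $L$, the flux crosses a coaxial cylinder of radius $r$ whose lateral area is $2\pi r L$, so

$$E = \frac{F}{2\pi r L} \propto \frac{1}{r},$$

in contrast with the $1/r^2$ falloff of a point source, whose flux spreads over a sphere of area $4\pi r^2$.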
2023-03-27 23:24:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7937029600143433, "perplexity": 2764.9889031219022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948708.2/warc/CC-MAIN-20230327220742-20230328010742-00016.warc.gz"}
https://math.stackexchange.com/questions/2057729/prove-or-disprove-if-r-and-s-are-partial-order-relations-on-a-set-a-then-r-c
# Prove or disprove: if R and S are partial order relations on a set A, then $R \cup S$ is a partial order relation on A

Proof. We must show $R \cup S$ is reflexive, antisymmetric and transitive.

Let $a \in A$ be arbitrary. Since R and S are partial order relations, they are reflexive, so $(a,a) \in R$ or $(a,a) \in S$. Hence $(a,a) \in R \cup S$, and $R \cup S$ is reflexive.

Let $a,b \in A$ and suppose $(a,b),(b,a) \in R \cup S$. Then $(a,b),(b,a) \in R$ or $(a,b),(b,a) \in S$. Since R and S are antisymmetric, we have $a = b$ both for $(a,b),(b,a) \in R$ and for $(a,b),(b,a) \in S$. Therefore $R \cup S$ is antisymmetric.

Let $a,b,c \in A$ and suppose $(a,b),(b,c) \in R \cup S$. Then $(a,b),(b,c) \in R$ or $(a,b),(b,c) \in S$. Since R and S are transitive, $(a,b),(b,c) \in R$ implies $(a,c) \in R$; similarly, $(a,b),(b,c) \in S$ implies $(a,c) \in S$. Thus $(a,c) \in R$ or $(a,c) \in S$, so $(a,c) \in R \cup S$ and $R \cup S$ is transitive.

Since $R \cup S$ is reflexive, antisymmetric and transitive, $R \cup S$ is a partial order relation. $\blacksquare$

I can't figure out a counterexample to show this does not hold, but if R and S are equivalence relations, then $R \cup S$ is not necessarily an equivalence relation. I think this is because a partial order relation is antisymmetric rather than symmetric: if $xRy$ and $yRx$, then $x = y$. Like, if A = {1,2,3}, then R must be {(1,1),(2,2),(3,3)}; if R = {(1,1),(2,2),(3,3),(1,2)}, then it's wrong, as antisymmetry does not hold.

My question is: is there a counterexample showing this should be a disproof, or is my proof the right approach?

The example $A=\{1,2,3\}$, $R=\{(1,1),(2,2),(3,3),(1,2)\}$ and $S=\{(1,1),(2,2),(3,3),(2,3)\}$ works, as $R\cup S$ is not transitive anymore.

Your proof goes wrong in the transitivity step, where you conclude from $(a,b),(c,d) \in R\cup S$ that $(a,b),(c,d) \in R$ or $(a,b),(c,d)\in S$. It may well be that $(a,b) \in R$ and $(c,d)\in S$, while $(c,d)\not\in R$ and $(a,b)\not\in S$.

Actually, your proof is wrong in some other steps as well. For reflexivity you have $(a,a)\in R$ as well as $(a,a) \in S$, hence $(a,a) \in R \cup S$. In the antisymmetry part you make the same (wrong) conclusion as in the transitivity part.

• I am a little bit confused. R and S are partial order relations, so they should be antisymmetric; with the element (1,2) in R, how does R hold for antisymmetry? – bagMan Dec 13 '16 at 22:15
• $(1,2) \in R$, but $(2,1)\not \in R$. The property antisymmetric here reads: if $(1,2) \in R$ and $(2,1) \in R$, then $1=2$. But $(2,1)$ is not in $R$, hence the premise is false. The statement "If $A$, then $B$" is true if $A$ is false or $B$ is true. – JSchoone Dec 13 '16 at 22:19
• OMG, I didn't realize that! Thank you. The statement will be true if the "if" condition is false. – bagMan Dec 13 '16 at 22:31
• You're very welcome. (: – JSchoone Dec 13 '16 at 22:37

Your proof is wrong. Suppose $A=\{*,\circ\}$, $R=\{(*,*),(*,\circ),(\circ,\circ)\}$ and $S=\{(*,*),(\circ,*),(\circ,\circ)\}$. Also, you seem to work with a total instead of a partial order.

The problem is that your arguments for antisymmetry and transitivity of $R\cup S$ are incorrect.

HINT: These questions should help you to see what's wrong with your arguments and where to look for counterexamples.

• For antisymmetry, what if $\langle a,b\rangle\in R\setminus S$ and $\langle b,a\rangle\in S\setminus R$? Can that happen?
Is there any guarantee then that $a=b$?

• For transitivity, what if $\langle a,b\rangle\in R$ and $\langle b,c\rangle\in S\setminus R$? Can that happen? Is there any guarantee then that $\langle a,c\rangle \in R\cup S$?

• The first situation could happen, but still: if $(a,b) \in R$ and $(b,a) \in S$, we can still get a = b (by antisymmetry of R and S). And for transitivity, I think it could be a problem if $(a,b) \in R$ and $(b,c) \in S$. Meanwhile, I am a bit confused by the counterexample given in the other post: if (1,2) is an element of R, how does R hold for antisymmetry? I suppose in order to be antisymmetric, if xRy and yRx, then x = y. – bagMan Dec 13 '16 at 22:21
• @bagMan: It can happen that $a=b$, but that's not the point. The point is that it doesn't have to happen. Let $R$ be the relation $\le$ on $\Bbb Z$, and let $S$ be the relation $\ge$; $R$ and $S$ are both partial orders on $\Bbb Z$, and $\langle 1,2\rangle\in R\subseteq R\cup S$, and $\langle 2,1\rangle\in S\subseteq R\cup S$, but $1\ne 2$. – Brian M. Scott Dec 13 '16 at 22:25

You goofed in your second step. You could have $(a,b) \in R$ and $(b,a)\in S$, in which case $(a,b)$ and $(b,a)$ are both in $R\cup S$.

In fact, this gives a way to construct a counterexample. Let $A = \{1,2,3\}$, let $R$ be $\{1<2,1<3\}$ and let $S$ be $\{1<2,3<1,3<2\}$. Both are partial order relations. Yet their union contains both $1<3$ and $3<1$.

There is (at least) one logical error in your proof of antisymmetry: $(a,b),(b,a) \in R \cup S$ doesn't imply that $(a,b),(b,a) \in R$ or $(a,b),(b,a) \in S$. You could have $(a,b) \in R$ and $(b,a) \in S$. This is what is highlighted in the counterexamples provided in the other responses.
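A tiny script to check the transitivity counterexample mechanically (my own sketch, not from the thread):

```python
# Counterexample from the first answer: A = {1,2,3},
# R adds the pair (1,2) and S adds the pair (2,3) on top of the diagonal.
A = {1, 2, 3}
diag = {(a, a) for a in A}
R = diag | {(1, 2)}
S = diag | {(2, 3)}
U = R | S

# U is transitive iff for every chain (a,b), (b,c) in U, (a,c) is also in U.
transitive = all(
    (a, c) in U
    for (a, b) in U
    for (b2, c) in U
    if b == b2
)
print(transitive)  # False: (1,2) and (2,3) are in U, but (1,3) is not
```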
2019-09-22 20:50:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9165763854980469, "perplexity": 143.67194690164104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575674.3/warc/CC-MAIN-20190922201055-20190922223055-00084.warc.gz"}
http://mathhelpforum.com/trigonometry/166644-unit-circle-right-angle-problem-print.html
Unit circle and right angle problem?

• December 20th 2010, 11:38 AM
homeylova223

For cot(-180 degrees), would you add 360 to get positive 180, then take cot(180) to get 1/0, i.e. undefined?

My second problem: if tan of an angle is 1.3, what is the value of cot of the same angle? I am not sure how to do this. Can anyone help me please?

• December 20th 2010, 12:00 PM
homeylova223

Also, I am trying to find the tan of 150 degrees, so I have to divide (1/2) by (-square root 3/2). Can anyone also show me how to do this?

• December 20th 2010, 12:15 PM

A clockwise movement of $180^\circ$ brings us to the same position on the unit circle as a counterclockwise rotation of $180^\circ$:

$\sin\left[180^\circ\right]=\sin\left[-180^\circ\right];\;\;\;\cos\left[180^\circ\right]=\cos\left[-180^\circ\right]$

Hence $\cot\left[180^\circ\right]=\cot\left[-180^\circ\right]$.

$\displaystyle\ \cot x=\frac{1}{\tan x}=\frac{\cos x}{\sin x}$

For $x=-180^\circ$, $\sin\left[-180^\circ\right]=0$, hence the cotangent of that angle is undefined.

$\displaystyle\ \tan\theta=1.3\Rightarrow\ \cot\theta=\frac{1}{\tan\theta}=\frac{1}{1.3}$

For cotangent, you only need to know that it is the reciprocal of tangent.

$150^\circ=180^\circ-30^\circ$

$\displaystyle\ \tan\left[150^\circ\right]=\frac{\sin\left[150^\circ\right]}{\cos\left[150^\circ\right]}$

In the unit circle, $\sin\theta=\sin\left[180^\circ-\theta\right]$ and $\cos\theta=-\cos\left[180^\circ-\theta\right]$.

Therefore

$\displaystyle\frac{\sin\left[150^\circ\right]}{\cos\left[150^\circ\right]}=\frac{\sin\left[30^\circ\right]}{-\cos\left[30^\circ\right]}=\frac{\left(\frac{1}{2}\right)}{-\left(\frac{\sqrt{3}}{2}\right)}=-\frac{1}{\sqrt{3}}$
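For completeness, the two numerical answers can be finished off as follows (standard algebra, not in the original posts):

$\tan\left[150^\circ\right] = -\frac{1}{\sqrt{3}} = -\frac{\sqrt{3}}{3} \approx -0.577, \qquad \cot\theta = \frac{1}{1.3} \approx 0.769$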
2014-03-14 15:20:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5760965347290039, "perplexity": 1940.5543285803692}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678693350/warc/CC-MAIN-20140313024453-00006-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.sarthaks.com/2777146/the-addition-of-open-loop-zero-pulls-the-root-loci-towards
# The addition of open loop zero pulls the root-loci towards:

1. The left, and therefore the system becomes more stable
2. The right, and therefore the system becomes unstable
3. The imaginary axis, and therefore the system becomes marginally stable
4. The left, and therefore the system becomes unstable

Correct Answer - Option 1: The left, and therefore the system becomes more stable

Effect of addition of zeroes:

1. The system becomes more stable.
2. The angle of asymptotes increases, and the root locus shifts slightly more towards the left side of the s-plane.
3. Relative stability improves.
4. The range of K for stability increases.
5. The system becomes less oscillatory.
6. The breakaway point shifts towards the left in the s-plane.
7. The damping factor increases.

Effect of addition of poles:

1. The operating range of K decreases.
2. Relative stability reduces.
3. The system becomes more oscillatory.
4. The breakaway point shifts towards the imaginary axis.
5. The damping factor decreases.
2023-03-27 23:34:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8148042559623718, "perplexity": 12429.288048301676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948708.2/warc/CC-MAIN-20230327220742-20230328010742-00207.warc.gz"}
https://gmatclub.com/forum/m31-199588.html
# M31-19

Math Expert Bunuel (09 Jun 2015):

What is the range of all the roots of $$|x^2 - 2| = x$$?

A. 4
B. 3
C. 2
D. 1
E. 0

Official Solution:

First of all, notice that since $$x$$ is equal to an absolute value of some number ($$|x^2 - 2|$$), $$x$$ cannot be negative.

Next, $$|x^2 - 2| = x$$ means that either $$x^2 - 2 = x$$ or $$-(x^2 - 2) = x$$.

The first equation gives $$x = -1$$ or $$x = 2$$. Since $$x$$ cannot be negative, we are left with only $$x = 2$$.

The second equation gives $$x = -2$$ or $$x = 1$$. Again, since $$x$$ cannot be negative, we are left with only $$x = 1$$.

The range = {largest} - {smallest} = 2 - 1 = 1.

Manager FightToSurvive (30 Jan 2016):

Hi, could you kindly explain the part below (quoting the solution)?

"First of all notice that since x is equal to an absolute value of some number (|x^2 - 2|), then x cannot be negative."

Also, for the first equation I get: x^2 - 2 = x => x^2 - x = 2 => x(x-1) = 2 => x = 2 or x - 1 = 2 (x = 3). Since x cannot be negative, we are left with x = 2 (should be 3). The second equation gives x = -2 or x = 1; since x cannot be negative, we are left with x = 1. IMO the range should be 3 - 1 = 2.
IMO it should be 3-1 = 2 _________________ Kindly press the Kudos to appreciate my post !! Math Expert Joined: 02 Sep 2009 Posts: 53063 ### Show Tags 31 Jan 2016, 04:26 FightToSurvive wrote: Bunuel wrote: Official Solution: What is the range of all the roots of $$|x^2 - 2| = x$$? A. 4 B. 3 C. 2 D. 1 E. 0 First of all notice that since $$x$$ is equal to an absolute value of some number ($$|x^2 - 2|$$), then $$x$$ cannot be negative. Next, $$|x^2 - 2| = x$$ means that either $$x^2 - 2 = x$$ or $$-(x^2 - 2) = x$$. First equation gives $$x = -1$$ or $$x = 2$$. Since $$x$$ cannot be negative, we are left with only $$x = 2$$. Second equation gives $$x = -2$$ or $$x = 1$$. Again, since $$x$$ cannot be negative, we are left with only $$x = 1$$. The range = {largest} - {smallest} = 2 - 1 = 1. Hi, Could you kindly explain the below in red: First of all notice that since x is equal to an absolute value of some number (|x 2 −2| ), then x cannot be negative. Next, |x 2 −2|=x means that either x 2 −2=x or −(x 2 −2)=x . First equation gives x=−1 or x=2 x^2 - 2 = x => x^2 - x = 2 => x(x-1) = 2 => x = 2 or x-1 = 2 (x=3) . Since x cannot be negative, we are left with only x=2 (should be 3) . Second equation gives x=−2 or x=1 . Again, since x cannot be negative, we are left with only x=1 . The range = {largest} - {smallest} = 2 - 1 = 1. IMO it should be 3-1 = 2 Substitute x=3 into x^2 - 2 = x. Does it hold? _________________ Intern Joined: 03 Oct 2014 Posts: 3 ### Show Tags 03 Mar 2016, 04:01 Hi Bunuel, is it correct to solve like usual the abs value function for both the cases and then put alltogether in a system? In this case, I have that everything is above the root should be >= 0, |x^2-2|>=0. Then I solve the two cases: 1. x^2-2 for x^2>=2 I obtain: x<=-1 x>=2 (only x>=2 is acceptable 2.- x^2+2 for x^2<=2 I obtain: x<=-1 -2<x<=1 acceptable range Now if I pu all togheter to have a solution that is >=0: x<-2 and 1<x<2 so I take only the second range? Correct me on my reasoning . Thanks Intern Joined: 03 Oct 2014 Posts: 3 ### Show Tags 03 Mar 2016, 04:03 1 FightToSurvive wrote: Bunuel wrote: Official Solution: What is the range of all the roots of $$|x^2 - 2| = x$$? A. 4 B. 3 C. 2 D. 1 E. 0 First of all notice that since $$x$$ is equal to an absolute value of some number ($$|x^2 - 2|$$), then $$x$$ cannot be negative. Next, $$|x^2 - 2| = x$$ means that either $$x^2 - 2 = x$$ or $$-(x^2 - 2) = x$$. First equation gives $$x = -1$$ or $$x = 2$$. Since $$x$$ cannot be negative, we are left with only $$x = 2$$. Second equation gives $$x = -2$$ or $$x = 1$$. Again, since $$x$$ cannot be negative, we are left with only $$x = 1$$. The range = {largest} - {smallest} = 2 - 1 = 1. Hi, Could you kindly explain the below in red: First of all notice that since x is equal to an absolute value of some number (|x 2 −2| ), then x cannot be negative. Next, |x 2 −2|=x means that either x 2 −2=x or −(x 2 −2)=x . First equation gives x=−1 or x=2 x^2 - 2 = x => x^2 - x = 2 => x(x-1) = 2 => x = 2 or x-1 = 2 (x=3) . Since x cannot be negative, we are left with only x=2 (should be 3) . Second equation gives x=−2 or x=1 . Again, since x cannot be negative, we are left with only x=1 . The range = {largest} - {smallest} = 2 - 1 = 1. IMO it should be 3-1 = 2 Hi Bunuel, is it correct to solve like usual the abs value function for both the cases and then put alltogether in a system? In this case, I have that everything is above the root should be >= 0, |x^2-2|>=0. Then I solve the two cases: 1. 
Current Student, Joined: 14 Oct 2014. Posted 23 Jun 2016.

I think this is a high-quality question.

Intern (jcuchet), Joined: 31 Oct 2016. Posted 12 Nov 2016.

I believe a small correction is needed in the explanation:

Scenario I: for x^2 - 2 > 0, i.e. x > sqrt(2), the roots are x = 2 and x = -1; the only acceptable solution in this scenario is x = 2.
Scenario II: for x^2 - 2 < 0, i.e. -sqrt(2) < x < sqrt(2), the roots are x = -2 and x = 1; the acceptable solution in this scenario is x = 1.

Therefore the range is x_max - x_min = 2 - 1 = 1. Thanks for all the great work.

Math Expert (Bunuel). Posted 12 Nov 2016.

What correction?
Intern (jcuchet). Posted 12 Nov 2016.

Scenario II holds for -sqrt(2) < x < sqrt(2), not just for x < 0. Therefore, under Scenario II, a negative solution would be allowable between -sqrt(2) and 0.

Math Expert (Bunuel). Posted 12 Nov 2016.

No, that's not correct. x cannot be negative under any scenario. This is explained in the first sentence of the solution: since $$x$$ is equal to the absolute value of some number ($$|x^2 - 2|$$), $$x$$ cannot be negative.

Intern (nelliegu), Joined: 10 Dec 2015. Posted 26 Nov 2016.

Hi, could you please explain how the first equation gives x = -1 or x = 2? I got x = 2 and x = 3. Thank you very much.

Math Expert (Bunuel). Posted 27 Nov 2016.

x^2 - x - 2 = 0
(x - 2)(x + 1) = 0
x = 2 or x = -1.

Intern (nelliegu). Posted 27 Nov 2016.

Of course! Thank you very much, Bunuel.

Manager (gupta87), Joined: 19 Jul 2016. Posted 28 Jan 2017.

Hi Bunuel, why are we not considering the negative roots here?

Math Expert (Bunuel). Posted 29 Jan 2017.

This is explained in the first sentence of the solution: since $$x$$ is equal to the absolute value of some number, $$x$$ cannot be negative.

Retired Moderator (msk0657), Joined: 26 Nov 2012. Posted 01 Jun 2017.

Bunuel, can you please share the conditions under which "x equal to the absolute value of some number" is positive or negative? I didn't find this point in our math book.

Math Expert (Bunuel). Posted 01 Jun 2017.

x = |some expression| in any case means that x cannot be negative; it can be 0 or positive.

Director, Joined: 21 Mar 2016. Posted 02 Jun 2017.

The question can also be solved another way: square both sides of the equation. The resulting equation is x^4 - 5x^2 + 4 = 0. Solving for x^2 gives 4 and 1 as roots, which implies x = +/-2 or x = +/-1. Range = 2 - 1 = 1. Answer D.

SVP (Mo2men), Joined: 26 Mar 2013. Posted 03 Jun 2017.

Bunuel, I used the highlighted part of the solution above as a basis for squaring both sides and arrived at the same solutions presented above. Is my basis valid or not? Thanks.
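An editorial note on the squaring approach (not a reply from the thread): squaring both sides is valid provided each candidate root is checked against the original equation, since squaring can introduce extraneous roots; here the observation that x >= 0 performs exactly that filtering. A small SymPy sketch:

```python
from sympy import Abs, solve, symbols

x = symbols('x', real=True)

# Squaring |x^2 - 2| = x gives x^4 - 5x^2 + 4 = 0, i.e. (x^2 - 1)(x^2 - 4) = 0.
squared_roots = solve(x**4 - 5*x**2 + 4, x)
print(squared_roots)   # [-2, -1, 1, 2]

# Squaring may introduce extraneous roots, so keep only those that satisfy
# the original equation (equivalently, the non-negative ones here).
valid = [r for r in squared_roots if Abs(r**2 - 2) == r]
print(valid)           # [1, 2] -> range is 2 - 1 = 1
```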
2019-02-21 22:09:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8377848267555237, "perplexity": 2000.865511192056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247511174.69/warc/CC-MAIN-20190221213219-20190221235219-00072.warc.gz"}
https://www.physicsforums.com/threads/separation-of-nanotube.713026/
# Separation of Nanotubes

Is anyone familiar with the separation of nanotubes produced by electric discharge?

UltrafastPED (Gold Member): Can you be more specific? I've applied high voltages to densified CNT structures in a vacuum.

chemisttree (Homework Helper, Gold Member): These fellows have developed several techniques for purification of carbon nanotubes: http://eosl.gtri.gatech.edu/Default.aspx?tabid=117

arauca: Sorry, I don't know which fellows.

Office_Shredder (Staff Emeritus, Gold Member): arauca, the word "these" in his post is a link you can click.

arauca: What does algebra have to do with my question?

Borek (Mentor): Curiouser and curiouser! As far as I can tell, the link that chemisttree posted is: http://eosl.gtri.gatech.edu/Default.aspx?tabid=117
2021-05-08 15:45:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8034389615058899, "perplexity": 9506.377726519295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988882.94/warc/CC-MAIN-20210508151721-20210508181721-00541.warc.gz"}
https://cs.stackexchange.com/questions/133304/write-a-pseudo-code-for-a-graph-algorithm
# Write pseudocode for a graph algorithm

Given a DAG $$G=(V, E)$$ and a function $$f(v)$$ which maps every vertex to a unique number from 1 to $$|V|$$, I need to write pseudocode for an algorithm that finds, for every $$v\in V$$, the minimal value of $$f(u)$$ among all vertices $$u$$ that are reachable from $$v$$, and saves it as an attribute of $$v$$. The time complexity of the algorithm needs to be $$O(V+E)$$ (assuming that the time complexity of $$f(v)$$ is $$\Theta(1)$$).

I thought about using DFS (or a variation of it) and/or topological sort, but I don't know how to use them to solve this problem. In addition, I need to think about an algorithm that gets an undirected graph and the function $$f(v)$$ and calculates the same thing for every vertex, and I don't know how to do that either.

• For a connected undirected graph, all vertices are reachable from every other vertex. So every vertex would have the same minimum reachable value, right? Dec 13 '20 at 12:07

Yes, you can use topological sort. Suppose topological sorting of the vertices gives you the sequence $$v_{1}, \dotsc, v_{n}$$ such that there is no edge of the form $$(v_{j},v_{i})$$ for any $$i < j$$, i.e., every edge goes from a lower index to a higher index. The following pseudocode computes the minimum reachable $$f(u)$$ value for each vertex $$v_{i} \in V$$:

    fun():
        int min_val_reachable[n]      // minimum reachable f(u) value for each vertex
        for i = n down to 1:
            min_val_reachable[i] = f(v_i)            // v_i is reachable from itself
            for each vertex v_j in adj_list[v_i]:    // out-neighbours have index j > i
                if min_val_reachable[i] > min_val_reachable[j]:
                    min_val_reachable[i] = min_val_reachable[j]
        return min_val_reachable

The time complexity of the topological sort is $$O(|V| + |E|)$$, and the time complexity of the above procedure is also $$O(|V| + |E|)$$; thus the overall complexity is $$O(|V| + |E|)$$.

You can prove the correctness of the above procedure by induction as follows:

Hypothesis: after $$t$$ iterations of the outer loop, min_val_reachable is correct for every vertex from $$v_{n}$$ down to $$v_{n-t+1}$$.

Base case: for $$t = 1$$, it is easy to see that min_val_reachable[$$n$$] = $$f(v_n)$$, since $$v_{n}$$ does not have any out-neighbour (all edges go to higher indices) and $$v_{n}$$ is reachable from itself. The induction step is also simple; hope you can figure out the details yourself.
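A runnable Python version of the pseudocode above (an illustration added for this write-up; vertex labels 0..n-1 and a precomputed topological order are assumed):

```python
def min_reachable(n, adj, f, order):
    """adj[v]: out-neighbours of v; f[v]: value of v; order: a topological order."""
    best = [None] * n
    # Walk the topological order backwards, so every out-neighbour of v
    # is finished before v itself is processed.
    for v in reversed(order):
        best[v] = f[v]              # v is reachable from itself
        for u in adj[v]:
            if best[u] < best[v]:
                best[v] = best[u]
    return best

# Tiny example DAG: 0 -> 1, 1 -> 2, 0 -> 2, with f = [3, 1, 2].
print(min_reachable(3, [[1, 2], [2], []], [3, 1, 2], order=[0, 1, 2]))  # [1, 1, 2]
```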
Another answer approaches the same problem through a recurrence. Given a vertex $$v$$, let $$F(v)$$ be the minimum value $$f(u)$$ among all nodes $$u$$ reachable from $$v$$ in the input DAG $$G=(V,E)$$. Notice that a vertex $$u$$ is reachable from $$v$$ if and only if $$u=v$$ or $$u$$ is reachable from some out-neighbor $$w$$ of $$v$$. Then we can write:

$$F(v) = \min\{ f(v), \min_{(v,w) \in E} F(w) \},$$

where the minimum over an empty range is $$+\infty$$.

Let $$v_1, \dots, v_n$$ be the vertices of $$G$$ in reverse topological order, and notice that the equation above for $$F(v_i)$$ only depends on $$f(v_i)$$ and on the values $$F(v_j)$$ with $$j < i$$. If we compute $$F(v_1), F(v_2), \dots, F(v_n)$$ in this order, then we only need time proportional to the out-degree $$\delta_i$$ of each $$v_i$$; more precisely, we spend time $$O(1 + \delta_i)$$ to compute $$F(v_i)$$. Since $$\sum_{i=1}^n (1 + \delta_i) = |V| + |E|$$, the overall time complexity is also $$O(|V| + |E|)$$.

If the graph $$G$$ is not a DAG, the same approach works once you preprocess $$G$$ by contracting each strongly connected component $$C$$ into a single vertex $$v_C$$ having $$f(v_C) = \min_{u \in C} f(u)$$. This preprocessing requires time $$O(|V|+|E|)$$, since this is the time required to compute the strongly connected components of $$G$$ (which form a partition of $$V$$). In particular, this captures the case where $$G$$ is an undirected graph, since that is equivalent to solving the problem on the directed version of $$G$$: just replace each undirected edge $$\{u, v\}$$ with the pair of directed edges $$(u,v)$$ and $$(v,u)$$.
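For the undirected case specifically, the comment under the question already gives the key observation: within a connected component, every vertex reaches every other, so all of them share the component's minimum f-value. A short illustrative Python sketch:

```python
from collections import deque

def min_reachable_undirected(n, adj, f):
    """adj[v]: neighbours of v in an undirected graph; f[v]: value of v."""
    best = [None] * n
    for s in range(n):
        if best[s] is not None:      # s already assigned in an earlier component
            continue
        # Collect the connected component containing s with a BFS.
        component, queue, seen = [s], deque([s]), {s}
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    component.append(u)
                    queue.append(u)
        # Every vertex in the component gets the component-wide minimum.
        m = min(f[v] for v in component)
        for v in component:
            best[v] = m
    return best

# Two components: {0, 1} and {2}, with f = [3, 1, 2].
print(min_reachable_undirected(3, [[1], [0], []], [3, 1, 2]))  # [1, 1, 2]
```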
2022-01-20 18:14:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 67, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8026245832443237, "perplexity": 267.0398851065495}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302355.97/warc/CC-MAIN-20220120160411-20220120190411-00477.warc.gz"}
https://www.npf.ws/sudbury-mass-czlolfi/equivalence-point-formula-c26b47
What is the equivalence point? An equivalence point is a special equilibrium state at which chemically equivalent quantities of acid and base have been mixed: [acid]_T = [base]_T, where the square brackets with the subscript "T" denote the total molar amount of acid or base. For a neutralisation titration, this is the point at which the moles of H+ equal the moles of OH-. This does not necessarily imply a 1:1 molar ratio of acid to base, merely that the ratio is the same as in the balanced equation: if the equation has a 1:2 ratio and the two solutions have the same concentration, you would have to use twice the volume of hydrochloric acid to reach the equivalence point. When all of a weak acid has been neutralized by a strong base, the resulting solution is essentially equivalent to a solution of the conjugate base of that weak acid.

A motivating example: suppose an elderly woman has died and the police suspect she may have been poisoned using her own heart medication. Since she was supposed to be taking the medicine, your job is not to determine whether it is in her blood, but to figure out how much of it is present; then you will know if she took the normal amount or an overdose. Titration answers exactly this kind of question: a reaction in which we know the exact amount of one reactant (the titrant) is used to find the unknown amount of another (the analyte).

The balanced equation is sort of a recipe that tells us how much stuff reacts together; what we actually have is not determined by it, just as a cookie recipe does not determine what is in your pantry. Suppose a reaction consumes 2 moles of D for every 3 moles of E, and we have 5.0 moles of each. Then 5.0 moles D * (3 moles E / 2 moles D) = 7.5 moles E would be required to react with all the D, while 5.0 moles E * (2 moles D / 3 moles E) = 3.3 moles D would be required to react with all the E. Reactant D is therefore in excess, and 7.5 moles E (required) - 5.0 moles E (initially present) = 2.5 moles E are still needed to reach the equivalence point, at which none of the reactants is in excess: you have exactly the amount needed and no more. A small sketch of this bookkeeping follows.
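A minimal Python sketch of this stoichiometric bookkeeping (the 2:3 D-to-E ratio and the 5.0 mol amounts come from the example above; the function name is illustrative):

```python
def moles_to_reach_equivalence(have_d, have_e, coef_d=2, coef_e=3):
    """Return (extra D, extra E) needed to make the mixture stoichiometric."""
    need_e = have_d * coef_e / coef_d   # E required to consume all of D
    need_d = have_e * coef_d / coef_e   # D required to consume all of E
    return max(need_d - have_d, 0.0), max(need_e - have_e, 0.0)

# 5.0 mol of each reactant: D is in excess, 2.5 more mol of E are needed.
print(moles_to_reach_equivalence(5.0, 5.0))   # (0.0, 2.5)
```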
Equivalence point vs. endpoint. The equivalence (or stoichiometric) point should be distinguished from the titration endpoint. The endpoint is the point at which the indicator changes its color; an indicator is a molecule that changes color under certain circumstances (or produces some other detectable physical change in the solution) and is chosen so that this change occurs close to the equivalence point of the reaction. The two points are therefore not exactly the same: the endpoint is an experimental estimate of the equivalence point. In a well-planned titration, the endpoint occurs very close to the equivalence point, so scientists use the endpoint to estimate when the equivalence point occurred. The titration experiment is usually conducted several times carefully, and the volume of solution delivered from the burette (the titre) is recorded each time. When the titration curve is analyzed numerically, one selects the smoothing factor of the spline that shows the most accurate interpolation of the stoichiometric points on the derivative curves.
Identifying an unknown concentration. A student has a 25.00 mL sample of an unknown monoprotic acid, HX. She titrates with 0.1255 M NaOH, and it requires 42.23 mL to reach the equivalence point. The typical units for measuring the amount of chemicals are moles, so first convert the volume of titrant and compute the moles of NaOH delivered:

42.23 mL * (1 L / 1000 mL) = 0.04223 L
0.1255 M NaOH * 0.04223 L = 0.005300 mol NaOH

Since the mole ratio of NaOH to HX is 1:1, 0.005300 mol HX are present. Since we are asked for concentration, we need the moles per liter of HX:

25.00 mL * (1 L / 1000 mL) = 0.02500 L
0.005300 mol HX / 0.02500 L = 0.2120 M HX
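The same calculation as a small Python helper (an illustration; the numbers are those of the worked example, and the 1:1 mole ratio is passed explicitly so other stoichiometries can be handled too):

```python
def analyte_molarity(titrant_molarity, titrant_ml, analyte_ml, mole_ratio=1.0):
    """Molarity of the analyte from titration data (volumes in mL)."""
    moles_titrant = titrant_molarity * titrant_ml / 1000.0
    moles_analyte = moles_titrant * mole_ratio      # 1:1 for NaOH + HX
    return moles_analyte / (analyte_ml / 1000.0)

# 42.23 mL of 0.1255 M NaOH neutralizes 25.00 mL of the unknown acid HX.
print(round(analyte_molarity(0.1255, 42.23, 25.00), 4))   # 0.212, i.e. 0.2120 M HX
```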
The pH at the equivalence point. The resulting solution at the equivalence point has a pH that depends on the relative strengths of the acid and the base, and you can estimate it with a few rules. A strong acid titrated by a strong base gives a neutral solution at the equivalence point. A strong acid reacting with a weak base forms an acidic (pH < 7) solution: for the titration of methylamine with a strong acid, for example, the net ionic equation is CH3NH2 + H+ -> CH3NH3+, and the pH at the equivalence point works out to 5.86. Conversely, a weak acid titrated by a strong base gives a basic (pH > 7) solution: if a 0.2 M solution of acetic acid is titrated to the equivalence point by adding an equal volume of 0.2 M NaOH, the resulting solution is exactly the same as if you had prepared a 0.1 M solution of sodium acetate.

Equivalence points also arise in oxidation-reduction titrations. For a reaction of the form m_A A_Red + m_B B_Ox = m_A A_Ox + m_B B_Red, where the Red and Ox indices denote the reduced and oxidized forms, a more or less general formula for the equivalence point potential can be derived. A survey of thirty-four quantitative analysis textbooks, at both the elementary and advanced levels, revealed that no general equation for the equivalence point potential in oxidation-reduction titrations had been described; in four of these texts the calculation was completely ignored. The equivalence point potential is not always a simple weighted average of the standard potentials: for the thiosulfate system, it is about 180 mV more positive (assuming 0.01 M initial thiosulfate concentration and negligible volume change, e.g., titration with 0.1 M I3-) than the potential calculated from the weighted average of the E°'s.
The halfway point and polyprotic acids. The halfway point is the point where the volume of titrant added is half of what it will be at the equivalence point; many exercises ask for the pH at the halfway point and at the equivalence point of a given titration. A diprotic acid is an acid that yields two H+ ions per acid molecule; examples of diprotic acids are sulfuric acid, H2SO4, and carbonic acid, H2CO3. A diprotic acid dissociates in water in two stages, so its titration curve shows two equivalence points. At the first equivalence point of a diprotic titration curve, the pH is the average of the pKa's for that diprotic acid: pH = (pKa1 + pKa2)/2, and for a triprotic acid pH = (pKa2 + pKa3)/2 at the second equivalence point. Note that the pH change around the second equivalence point is much smaller than around the first one; if the equivalence point is not well defined, the titration curve may appear to be that of a monoprotic acid. An inflection point has to be present in the titration curve in order to locate the equivalence point, and it has been reported that the inflection point deviates [2] from the equivalence point depending on the strength of the acid or base. In speciation terms, the first equivalence point of phosphoric acid is where [H3PO4] approaches zero; this occurs when [H2PO4-] is a maximum, so equivalence points can also be identified in the fraction plot.
Equivalence points of the carbonate system. What are the equivalence points (pH values) of the following three solutions: H2CO3, NaHCO3, and Na2CO3? These solutions refer to a total carbonate amount (DIC) of 1 mM. In all three cases we start with pure water (button New) and use the reaction tool (button Reac). For the first reaction, select the reactant "H2CO3" and enter "1 mmol/L"; with a click on Start, the resulting pH is displayed. Repeating this procedure for the other two cases yields the three equivalence points (for 25 °C) at DIC = 1 mM. Note: you will obtain exactly the same pH values (equivalence points) when NaHCO3 is replaced by KHCO3, and Na2CO3 by K2CO3; that is because NaOH and KOH are both strong bases with the same impact on the carbonate system.

DIC Variations. The equivalence points will change for other DIC values. To demonstrate this effect, we perform the same calculations for 13 different values of DIC between 10^-12 and 10^1 M (in 13 logarithmic steps) and plot the results. (CO2 here abbreviates the composite carbonic acid H2CO3*, which is the sum of dissolved CO2 and true H2CO3.)
Worked problems on the equivalence point:

1. Calculate the pH at the equivalence point for the titration of 1.0 M ethylamine, C2H5NH2, by 1.0 M perchloric acid, HClO4 (pKb for C2H5NH2 = 3.25).
2. A 50.0 mL 0.10 M solution of NH3 is titrated with a 0.5 M HClO4 solution; calculate the volume of acid required to reach the equivalence point, and the pH after 60.0 mL of HClO4 has been added.
3. A 25.0 mL sample of 0.150 M hydrazoic acid (Ka = 4.50 x 10^-4) is titrated with a 0.150 M NaOH solution; calculate the pH at the halfway point and at the equivalence point.
4. A volume of 60.0 mL of a 0.160 M HNO3 solution is titrated with 0.770 M KOH; calculate the volume of KOH required to reach the equivalence point.
5. Calculate the pH at the equivalence point in the titration of 0.1 M acetic acid (pKa = 4.76) with 0.1 M sodium hydroxide.

For the last problem: in the titration of a weak acid with a strong base, the pH at the equivalence point is determined by hydrolysis of the weak acid's salt. That means we have to find the pKb of the conjugate base, calculate the concentration of OH- starting from there, and then use the pH = 14 - pOH formula. The answer is 8.73: the pH at the equivalence point is higher than 7 because a weak acid is being titrated with a strong base. The sketch below carries out this calculation.
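A minimal Python version of that hydrolysis calculation (dilution at the equivalence point is included; the usual weak-base approximation [OH-] = sqrt(Kb * C) is assumed):

```python
import math

def ph_at_equivalence(ka, acid_molarity, acid_ml, base_molarity):
    """pH at the equivalence point of a weak acid titrated with a strong base."""
    moles_acid = acid_molarity * acid_ml / 1000.0
    base_ml = 1000.0 * moles_acid / base_molarity     # base volume to equivalence
    salt_molarity = moles_acid / ((acid_ml + base_ml) / 1000.0)
    kb = 1e-14 / ka                                   # Kb of the conjugate base
    oh = math.sqrt(kb * salt_molarity)                # weak-base approximation
    return 14.0 + math.log10(oh)                      # pH = 14 - pOH

# 0.1 M acetic acid (pKa = 4.76) titrated with 0.1 M NaOH:
print(round(ph_at_equivalence(10**-4.76, 0.1, 25.0, 0.1), 2))   # 8.73
```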
Titration curves. The calculator below plots the theoretical titration curve for the acid-base titration of monoprotic acids and bases. It calculates how the curve should look with a known molarity of the titrand, a known molarity of the titrant and, in some cases, a known ionization constant of the titrand (the acid dissociation constant for weak acids or the base dissociation constant for weak bases). Visualize the curve as you enter data point by point during the titration in the laboratory, or afterwards. The program is also helpful for other tasks, like determining the amount of acid or base required to neutralize a sample, preparing or displacing the pH of a buffer, finding when a visual indicator changes color, or finding the isoelectric point of amino acids.
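A bare-bones sketch of how such a theoretical curve can be generated for the simplest case, a strong acid titrated with a strong base (values chosen for illustration: 25 mL of 0.1 M acid, 0.1 M base):

```python
import math

def strong_strong_ph(acid_m, acid_ml, base_m, base_ml):
    """pH of a strong acid + strong base mixture at a given titrant volume."""
    moles_h = acid_m * acid_ml / 1000.0
    moles_oh = base_m * base_ml / 1000.0
    total_l = (acid_ml + base_ml) / 1000.0
    if moles_h > moles_oh:       # before the equivalence point: excess H+
        return -math.log10((moles_h - moles_oh) / total_l)
    if moles_oh > moles_h:       # beyond the equivalence point: excess OH-
        return 14.0 + math.log10((moles_oh - moles_h) / total_l)
    return 7.0                   # at the equivalence point (25 degrees C)

# The steep pH jump around 25 mL marks the equivalence point.
for v in (0.0, 12.5, 24.0, 25.0, 26.0, 50.0):
    print(v, round(strong_strong_ph(0.1, 25.0, 0.1, v), 2))
```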
2021-05-12 21:32:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4824540913105011, "perplexity": 2868.915680931546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00323.warc.gz"}
http://docs.itascacg.com/flac3d700/common/sel/doc/manual/sel_manual/hybrid/fish/sel.hybrid_intrinsics/fish_sel.hybrid.rupture.dowel.html
struct.hybrid.rupture.dowel

Syntax

b := struct.hybrid.rupture.dowel(p)

Get the shear rupture state for the dowels associated with this hybrid bolt element. true indicates rupture due to shear strain exceeding the strain limit for ANY of the dowels associated with this element.

Returns: b - shear rupture state of the dowel(s)
Arguments: p - a pointer to a hybrid bolt element
2022-05-17 10:03:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2395017147064209, "perplexity": 5650.958570086154}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00383.warc.gz"}