text
stringlengths 100
957k
| meta
stringclasses 1
value |
|---|---|
Areas of Interest
Epistemology Philosophy of Mind Logic and Philosophy of Logic Philosophy of Social Science
• 126
##### A logic of intention and attempt with Andreas Herzig Synthese 163 (1). 2008.
We present a modal logic called (logic of intention and attempt) in which we can reason about intention dynamics and intentional action execution. By exploiting the expressive power of , we provide a formal analysis of the relation between intention and action and highlight the pivotal role of attempt in action execution. Besides, we deal with the problems of instrumental reasoning and intention persistence.
• 111
##### The cognitive structure of surprise: Looking for basic principles with Cristiano Castelfranchi Topoi 26 (1): 133-149. 2007.
We develop a conceptual and formal clarification of notion of surprise as a belief-based phenomenon by exploring a rich typology. Each kind of surprise is associated with a particular phase of cognitive processing and involves particular kinds of epistemic representations (representations and expectations under scrutiny, implicit beliefs, presuppositions). We define two main kinds of surprise: mismatch-based surprise and astonishment. In the central part of the paper we suggest how a formal mode…Read more
• 77
##### On the Dynamics of Institutional Agreements with Andreas Herzig, Tiago de Lima, and Emiliano Lorini Synthese 171 (2). 2009.
In this paper we investigate a logic for modelling individual and collective acceptances that is called acceptance logic. The logic has formulae of the form $A_{Gx} \phi$ reading 'if the agents in the set of agents G identify themselves with institution x then they together accept that φ'. We extend acceptance logic by two kinds of dynamic modal operators. The first kind are public announcements of the form x!ψ, meaning that the agents learn that ψ is the case in context x. Formulae of the form…Read more
• 74
##### A dynamic logic of agency I: Stit, capabilities and powers with Andreas Herzig Journal of Logic, Language and Information 19 (1): 89-121. 2010.
The aim of this paper, is to provide a logical framework for reasoning about actions, agency, and powers of agents and coalitions in game-like multi-agent systems. First we define our basic Dynamic Logic of Agency ( ). Differently from other logics of individual and coalitional capability such as Alternating-time Temporal Logic (ATL) and Coalition Logic, in cooperation modalities for expressing powers of agents and coalitions are not primitive, but are defined from more basic dynamic logic opera…Read more
• 61
##### Computer-mediated trust in self-interested expert recommendations with Jonathan Ben-Naim, Jean-François Bonnefon, Andreas Herzig, and Sylvie Leblois AI and Society 25 (4): 413-422. 2010.
Important decisions are often based on a distributed process of information processing, from a knowledge base that is itself distributed among agents. The simplest such situation is that where a decision-maker seeks the recommendations of experts. Because experts may have vested interests in the consequences of their recommendations, decision-makers usually seek the advice of experts they trust. Trust, however, is a commodity that is usually built through repeated face time and social interactio…Read more
• 58
##### A dynamic logic of agency II: Deterministic dla {\mathcal{dla}} , coalition logic, and game theory Journal of Logic, Language and Information 19 (3): 327-351. 2010.
We continue the work initiated in Herzig and Lorini (J Logic Lang Inform, in press) whose aim is to provide a minimalistic logical framework combining the expressiveness of dynamic logic in which actions are first-class citizens in the object language, with the expressiveness of logics of agency such as STIT and logics of group capabilities such as CL and ATL. We present a logic called ( Deterministic Dynamic logic of Agency ) which supports reasoning about actions and joint actions of agents an…Read more
• 46
• 33
##### The effects of social ties on coordination: conceptual foundations for an empirical analysis (review) with Giuseppe Attanasi, Astrid Hopfensitz, and Frédéric Moisan Phenomenology and the Cognitive Sciences 13 (1): 47-73. 2014.
This paper investigates the influence that social ties can have on behavior. After defining the concept of social ties that we consider, we introduce an original model of social ties. The impact of such ties on social preferences is studied in a coordination game with outside option. We provide a detailed game theoretical analysis of this game while considering various types of players, i.e., self-interest maximizing, inequity averse, and fair agents. In addition to these approaches that require…Read more
• 29
• 25
##### On the Epistemic Foundation for Iterated Weak Dominance: An Analysis in a Logic of Individual and Collective attitudes Journal of Philosophical Logic 42 (6): 863-904. 2013.
This paper proposes a logical framework for representing static and dynamic properties of different kinds of individual and collective attitudes. A complete axiomatization as well as a decidability result for the logic are given. The logic is applied to game theory by providing a formal analysis of the epistemic conditions of iterated deletion of weakly dominated strategies (IDWDS), or iterated weak dominance for short. The main difference between the analysis of the epistemic conditions of iter…Read more
• 23
##### A minimal logic for interactive epistemology Synthese 193 (3): 725-755. 2016.
We propose a minimal logic for interactive epistemology based on a qualitative representation of epistemic individual and group attitudes including knowledge, belief, strong belief, common knowledge and common belief. We show that our logic is sufficiently expressive to provide an epistemic foundation for various game-theoretic solution concepts including “1-round of deletion of weakly dominated strategies, followed by iterated deletion of strongly dominated strategies” ) and “2-rounds of deleti…Read more
• 19
##### The Strength of Desires: A Logical Approach with Didier Dubois and Henri Prade Minds and Machines 27 (1): 199-231. 2017.
The aim of this paper is to propose a formal approach to reasoning about desires, understood as logical propositions which we would be pleased to make true, also acknowledging the fact that desire is a matter of degree. It is first shown that, at the static level, desires should satisfy certain principles that differ from those to which beliefs obey. In this sense, from a static perspective, the logic of desires is different from the logic of beliefs. While the accumulation of beliefs tend to re…Read more
• 17
##### Temporal logic and its application to normative reasoning Journal of Applied Non-Classical Logics 23 (4): 372-399. 2013.
I present a variant of with time, called, interpreted in standard Kripke semantics. On the syntactic level, is nothing but the extension of atemporal individual by: the future tense and past tense operators, and the operator of group agency for the grand coalition. A sound and complete axiomatisation for is given. Moreover, it is shown that supports reasoning about interesting normative concepts such as the concepts of achievement obligation and commitment.
• 16
##### The Cognitive Foundations of Group Attitudes and Social Interaction (edited book) with Andreas Herzig Springer Verlag. 1st ed. 2015.
I first argue against the “psycho-phobia” that has characterized the foundation of the social sciences and invalidates many social policies. I then present a basic ontology of social actions by examining their most important forms, with a special focus on pro-social actions, in particular Goal Delegation and Goal Adoption. These action types are the basic atoms of exchange, cooperation, group action, and organization. The proposed ontology is grounded in the mental representations of the agents …Read more
• 16
##### A STIT Logic for Reasoning About Social Influence with Giovanni Sartor Studia Logica 104 (4): 773-812. 2016.
In this paper we propose a method for modeling social influence within the STIT approach to action. Our proposal consists in extending the STIT language with special operators that allow us to represent the consequences of an agent’s choices over the rational choices of another agent.
• 12
##### From self-regarding to other-regarding agents in strategic games: a logical analysis Journal of Applied Non-Classical Logics 21 (3-4): 443-475. 2011.
I propose a modal logic that enables to reason about self-regarding and otherregarding motivations in strategic games. This logic integrates the concepts of joint action, belief, individual and group payoff. The first part of the article is focused on self-regarding agents. A self-regarding agent decides to perform a certain action only if he believes that this action maximizes his own personal benefit. The second part of the article explores different kinds of other-regarding motivations such a…Read more
• 11
• 9
• 8
##### A Logic Of Trust And Reputation with Andreas Herzig, Jomi Hübner, and Laurent Vercouter Logic Journal of the IGPL 18 (1): 214-244. 2010.
The aim of this paper is to present a logical framework in which the concepts of trust and reputation can be formally characterized and their properties studied. We start from the definition of trust proposed by Castelfranchi & Falcone . We formalize this definition in a logic of time, action, beliefs and choices. Then, we provide a refinement of C&F’s definition by distinguishing two general types of trust: occurrent trust and dispositional trust. In the second part of the paper we present a de…Read more
• 8
• 7
##### The Dynamics of Epistemic Attitudes in Resource-Bounded Agents with Philippe Balbiani and David Fernández-Duque Studia Logica 107 (3): 457-488. 2019.
The paper presents a new logic for reasoning about the formation of beliefs through perception or through inference in non-omniscient resource-bounded agents. The logic distinguishes the concept of explicit belief from the concept of background knowledge. This distinction is reflected in its formal semantics and axiomatics: we use a non-standard semantics putting together a neighborhood semantics for explicit beliefs and relational semantics for background knowledge, and we have specific axioms …Read more
• 7
• 6
• 3
• 2
##### Announcements to Attentive Agents with Thomas Bolander, Hans van Ditmarsch, Andreas Herzig, Pere Pardo, and François Schwarzentruber Journal of Logic, Language and Information 25 (1): 1-35. 2016.
In public announcement logic it is assumed that all agents pay attention to the announcement. Weaker observational conditions can be modelled in action model logic. In this work, we propose a version of public announcement logic wherein it is encoded in the states of the epistemic model which agents pay attention to the announcement. This logic is called attention-based announcement logic. We give an axiomatization of the logic and prove that complexity of satisfiability is the same as that of p…Read more
• 2
##### AGM Contraction and Revision of Rules with Roland Mühlenbernd and Laurent Perrussel Journal of Logic, Language and Information 25 (3 - 4): 273-297. 2016.
In this paper we study AGM contraction and revision of rules using input/output logical theories. We replace propositional formulas in the AGM framework of theory change by pairs of propositional formulas, representing the rule based character of theories, and we replace the classical consequence operator Cn by an input/output logic. The results in this paper suggest that, in general, results from belief base dynamics can be transferred to rule base dynamics, but that a similar transfer of AGM t…Read more
• 1
• ##### Announcements to Attentive Agents with François Schwarzentruber, Pere Pardo, Andreas Herzig, Hans Ditmarsch, and Thomas Bolander Journal of Logic, Language and Information 25 (1): 1-35. 2016.
In public announcement logic it is assumed that all agents pay attention to the announcement. Weaker observational conditions can be modelled in action model logic. In this work, we propose a version of public announcement logic wherein it is encoded in the states of the epistemic model which agents pay attention to the announcement. This logic is called attention-based announcement logic. We give an axiomatization of the logic and prove that complexity of satisfiability is the same as that of p…Read more
|
{}
|
/ Hex Artifact Content
## Artifact 45e596bd4ccecf2256f68a2e96466aa52cc4bc1f:
• File Makefile.msc — part of check-in [4d7a802e] at 2016-02-13 14:07:56 on branch sessions — Merge the changes for the 3.11.0 release candidate from trunk. (user: drh size: 62477) [more...]
0000: 23 0a 23 20 6e 6d 61 6b 65 20 4d 61 6b 65 66 69 #.# nmake Makefi
0010: 6c 65 20 66 6f 72 20 53 51 4c 69 74 65 0a 23 0a le for SQLite.#.
0020: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
0030: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
0040: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
0050: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
0060: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 0a ###############.
0070: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
0080: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 20 53 ############## S
0090: 54 41 52 54 20 4f 46 20 4f 50 54 49 4f 4e 53 20 TART OF OPTIONS
00a0: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
00b0: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 0a ###############.
00c0: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
00d0: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
00e0: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
00f0: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
0100: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 0a ###############.
0110: 0a 23 20 54 68 65 20 74 6f 70 6c 65 76 65 6c 20 .# The toplevel
0120: 64 69 72 65 63 74 6f 72 79 20 6f 66 20 74 68 65 directory of the
0130: 20 73 6f 75 72 63 65 20 74 72 65 65 2e 20 20 54 source tree. T
0140: 68 69 73 20 69 73 20 74 68 65 20 64 69 72 65 63 his is the direc
0150: 74 6f 72 79 0a 23 20 74 68 61 74 20 63 6f 6e 74 tory.# that cont
0160: 61 69 6e 73 20 74 68 69 73 20 22 4d 61 6b 65 66 ains this "Makef
0170: 69 6c 65 2e 6d 73 63 22 2e 0a 23 0a 54 4f 50 20 ile.msc"..#.TOP
0180: 3d 20 2e 0a 0a 23 20 3c 3c 6d 61 72 6b 3e 3e 0a = ...# <<mark>>.
0190: 23 20 53 65 74 20 74 68 69 73 20 6e 6f 6e 2d 30 # Set this non-0
01a0: 20 74 6f 20 63 72 65 61 74 65 20 61 6e 64 20 75 to create and u
01b0: 73 65 20 74 68 65 20 53 51 4c 69 74 65 20 61 6d se the SQLite am
01c0: 61 6c 67 61 6d 61 74 69 6f 6e 20 66 69 6c 65 2e algamation file.
01d0: 0a 23 0a 21 49 46 4e 44 45 46 20 55 53 45 5f 41 .#.!IFNDEF USE_A
01e0: 4d 41 4c 47 41 4d 41 54 49 4f 4e 0a 55 53 45 5f MALGAMATION.USE_
01f0: 41 4d 41 4c 47 41 4d 41 54 49 4f 4e 20 3d 20 31 AMALGAMATION = 1
0200: 0a 21 45 4e 44 49 46 0a 23 20 3c 3c 2f 6d 61 72 .!ENDIF.# <</mar
0210: 6b 3e 3e 0a 0a 23 20 53 65 74 20 74 68 69 73 20 k>>..# Set this
0220: 6e 6f 6e 2d 30 20 74 6f 20 65 6e 61 62 6c 65 20 non-0 to enable
0230: 66 75 6c 6c 20 77 61 72 6e 69 6e 67 73 20 28 2d full warnings (-
0240: 57 34 2c 20 65 74 63 29 20 77 68 65 6e 20 63 6f W4, etc) when co
0250: 6d 70 69 6c 69 6e 67 2e 0a 23 0a 21 49 46 4e 44 mpiling..#.!IFND
0260: 45 46 20 55 53 45 5f 46 55 4c 4c 57 41 52 4e 0a EF USE_FULLWARN.
0270: 55 53 45 5f 46 55 4c 4c 57 41 52 4e 20 3d 20 30 USE_FULLWARN = 0
0280: 0a 21 45 4e 44 49 46 0a 0a 23 20 53 65 74 20 74 .!ENDIF..# Set t
0290: 68 69 73 20 6e 6f 6e 2d 30 20 74 6f 20 75 73 65 his non-0 to use
02a0: 20 22 73 74 64 63 61 6c 6c 22 20 63 61 6c 6c 69 "stdcall" calli
02b0: 6e 67 20 63 6f 6e 76 65 6e 74 69 6f 6e 20 66 6f ng convention fo
02c0: 72 20 74 68 65 20 63 6f 72 65 20 6c 69 62 72 61 r the core libra
02d0: 72 79 0a 23 20 61 6e 64 20 73 68 65 6c 6c 20 65 ry.# and shell e
02e0: 78 65 63 75 74 61 62 6c 65 2e 0a 23 0a 21 49 46 xecutable..#.!IF
02f0: 4e 44 45 46 20 55 53 45 5f 53 54 44 43 41 4c 4c NDEF USE_STDCALL
0300: 0a 55 53 45 5f 53 54 44 43 41 4c 4c 20 3d 20 30 .USE_STDCALL = 0
0310: 0a 21 45 4e 44 49 46 0a 0a 23 20 53 65 74 20 74 .!ENDIF..# Set t
0320: 68 69 73 20 6e 6f 6e 2d 30 20 74 6f 20 68 61 76 his non-0 to hav
0330: 65 20 74 68 65 20 73 68 65 6c 6c 20 65 78 65 63 e the shell exec
0340: 75 74 61 62 6c 65 20 6c 69 6e 6b 20 61 67 61 69 utable link agai
0350: 6e 73 74 20 74 68 65 20 63 6f 72 65 20 64 79 6e nst the core dyn
0360: 61 6d 69 63 0a 23 20 6c 69 6e 6b 20 6c 69 62 72 amic.# link libr
0370: 61 72 79 2e 0a 23 0a 21 49 46 4e 44 45 46 20 44 ary..#.!IFNDEF D
0380: 59 4e 41 4d 49 43 5f 53 48 45 4c 4c 0a 44 59 4e YNAMIC_SHELL.DYN
0390: 41 4d 49 43 5f 53 48 45 4c 4c 20 3d 20 30 0a 21 AMIC_SHELL = 0.!
03a0: 45 4e 44 49 46 0a 0a 23 20 53 65 74 20 74 68 69 ENDIF..# Set thi
03b0: 73 20 6e 6f 6e 2d 30 20 74 6f 20 65 6e 61 62 6c s non-0 to enabl
03c0: 65 20 65 78 74 72 61 20 63 6f 64 65 20 74 68 61 e extra code tha
03d0: 74 20 61 74 74 65 6d 70 74 73 20 74 6f 20 64 65 t attempts to de
03e0: 74 65 63 74 20 6d 69 73 75 73 65 20 6f 66 20 74 tect misuse of t
03f0: 68 65 0a 23 20 53 51 4c 69 74 65 20 41 50 49 2e he.# SQLite API.
0400: 0a 23 0a 21 49 46 4e 44 45 46 20 41 50 49 5f 41 .#.!IFNDEF API_A
0410: 52 4d 4f 52 0a 41 50 49 5f 41 52 4d 4f 52 20 3d RMOR.API_ARMOR =
0420: 20 30 0a 21 45 4e 44 49 46 0a 0a 23 20 49 66 20 0.!ENDIF..# If
0430: 6e 65 63 65 73 73 61 72 79 2c 20 63 72 65 61 74 necessary, creat
0440: 65 20 61 20 6c 69 73 74 20 6f 66 20 68 61 72 6d e a list of harm
0450: 6c 65 73 73 20 63 6f 6d 70 69 6c 65 72 20 77 61 less compiler wa
0460: 72 6e 69 6e 67 73 20 74 6f 20 64 69 73 61 62 6c rnings to disabl
0470: 65 20 77 68 65 6e 0a 23 20 63 6f 6d 70 69 6c 69 e when.# compili
0480: 6e 67 20 74 68 65 20 76 61 72 69 6f 75 73 20 74 ng the various t
0490: 6f 6f 6c 73 2e 20 20 46 6f 72 20 74 68 65 20 53 ools. For the S
04a0: 51 4c 69 74 65 20 73 6f 75 72 63 65 20 63 6f 64 QLite source cod
04b0: 65 20 69 74 73 65 6c 66 2c 20 77 61 72 6e 69 6e e itself, warnin
04c0: 67 73 2c 0a 23 20 69 66 20 61 6e 79 2c 20 77 69 gs,.# if any, wi
04d0: 6c 6c 20 62 65 20 64 69 73 61 62 6c 65 64 20 66 ll be disabled f
04e0: 72 6f 6d 20 77 69 74 68 69 6e 20 69 74 2e 0a 23 rom within it..#
04f0: 0a 21 49 46 4e 44 45 46 20 4e 4f 5f 57 41 52 4e .!IFNDEF NO_WARN
0500: 0a 21 49 46 20 24 28 55 53 45 5f 46 55 4c 4c 57 .!IF $(USE_FULLW 0510: 41 52 4e 29 21 3d 30 0a 4e 4f 5f 57 41 52 4e 20 ARN)!=0.NO_WARN 0520: 3d 20 2d 77 64 34 30 35 34 20 2d 77 64 34 30 35 = -wd4054 -wd405 0530: 35 20 2d 77 64 34 31 30 30 20 2d 77 64 34 31 32 5 -wd4100 -wd412 0540: 37 20 2d 77 64 34 31 33 30 20 2d 77 64 34 31 35 7 -wd4130 -wd415 0550: 32 20 2d 77 64 34 31 38 39 20 2d 77 64 34 32 30 2 -wd4189 -wd420 0560: 36 0a 4e 4f 5f 57 41 52 4e 20 3d 20 24 28 4e 4f 6.NO_WARN =$(NO
0570: 5f 57 41 52 4e 29 20 2d 77 64 34 32 31 30 20 2d _WARN) -wd4210 -
0580: 77 64 34 32 33 32 20 2d 77 64 34 33 30 35 20 2d wd4232 -wd4305 -
0590: 77 64 34 33 30 36 20 2d 77 64 34 37 30 32 20 2d wd4306 -wd4702 -
05a0: 77 64 34 37 30 36 0a 21 45 4e 44 49 46 0a 21 45 wd4706.!ENDIF.!E
05b0: 4e 44 49 46 0a 0a 23 20 53 65 74 20 74 68 69 73 NDIF..# Set this
05c0: 20 6e 6f 6e 2d 30 20 74 6f 20 75 73 65 20 74 68 non-0 to use th
05d0: 65 20 6c 69 62 72 61 72 79 20 70 61 74 68 73 20 e library paths
05e0: 61 6e 64 20 6f 74 68 65 72 20 6f 70 74 69 6f 6e and other option
05f0: 73 20 6e 65 63 65 73 73 61 72 79 20 66 6f 72 0a s necessary for.
0600: 23 20 57 69 6e 64 6f 77 73 20 50 68 6f 6e 65 20 # Windows Phone
0610: 38 2e 31 2e 0a 23 0a 21 49 46 4e 44 45 46 20 55 8.1..#.!IFNDEF U
0620: 53 45 5f 57 50 38 31 5f 4f 50 54 53 0a 55 53 45 SE_WP81_OPTS.USE
0630: 5f 57 50 38 31 5f 4f 50 54 53 20 3d 20 30 0a 21 _WP81_OPTS = 0.!
0640: 45 4e 44 49 46 0a 0a 23 20 53 65 74 20 74 68 69 ENDIF..# Set thi
0650: 73 20 6e 6f 6e 2d 30 20 74 6f 20 73 70 6c 69 74 s non-0 to split
0660: 20 74 68 65 20 53 51 4c 69 74 65 20 61 6d 61 6c the SQLite amal
0670: 67 61 6d 61 74 69 6f 6e 20 66 69 6c 65 20 69 6e gamation file in
0680: 74 6f 20 63 68 75 6e 6b 73 20 74 6f 0a 23 20 62 to chunks to.# b
0690: 65 20 75 73 65 64 20 66 6f 72 20 64 65 62 75 67 e used for debug
06a0: 67 69 6e 67 20 77 69 74 68 20 56 69 73 75 61 6c ging with Visual
06b0: 20 53 74 75 64 69 6f 2e 0a 23 0a 21 49 46 4e 44 Studio..#.!IFND
06c0: 45 46 20 53 50 4c 49 54 5f 41 4d 41 4c 47 41 4d EF SPLIT_AMALGAM
06d0: 41 54 49 4f 4e 0a 53 50 4c 49 54 5f 41 4d 41 4c ATION.SPLIT_AMAL
06e0: 47 41 4d 41 54 49 4f 4e 20 3d 20 30 0a 21 45 4e GAMATION = 0.!EN
06f0: 44 49 46 0a 0a 23 20 3c 3c 6d 61 72 6b 3e 3e 0a DIF..# <<mark>>.
0700: 23 20 53 65 74 20 74 68 69 73 20 6e 6f 6e 2d 30 # Set this non-0
0710: 20 74 6f 20 75 73 65 20 74 68 65 20 49 6e 74 65 to use the Inte
0720: 72 6e 61 74 69 6f 6e 61 6c 20 43 6f 6d 70 6f 6e rnational Compon
0730: 65 6e 74 73 20 66 6f 72 20 55 6e 69 63 6f 64 65 ents for Unicode
0740: 20 28 49 43 55 29 2e 0a 23 0a 21 49 46 4e 44 45 (ICU)..#.!IFNDE
0750: 46 20 55 53 45 5f 49 43 55 0a 55 53 45 5f 49 43 F USE_ICU.USE_IC
0760: 55 20 3d 20 30 0a 21 45 4e 44 49 46 0a 23 20 3c U = 0.!ENDIF.# <
0770: 3c 2f 6d 61 72 6b 3e 3e 0a 0a 23 20 53 65 74 20 </mark>>..# Set
0780: 74 68 69 73 20 6e 6f 6e 2d 30 20 74 6f 20 64 79 this non-0 to dy
0790: 6e 61 6d 69 63 61 6c 6c 79 20 6c 69 6e 6b 20 74 namically link t
07a0: 6f 20 74 68 65 20 4d 53 56 43 20 72 75 6e 74 69 o the MSVC runti
07b0: 6d 65 20 6c 69 62 72 61 72 79 2e 0a 23 0a 21 49 me library..#.!I
07c0: 46 4e 44 45 46 20 55 53 45 5f 43 52 54 5f 44 4c FNDEF USE_CRT_DL
07d0: 4c 0a 55 53 45 5f 43 52 54 5f 44 4c 4c 20 3d 20 L.USE_CRT_DLL =
07e0: 30 0a 21 45 4e 44 49 46 0a 0a 23 20 53 65 74 20 0.!ENDIF..# Set
07f0: 74 68 69 73 20 6e 6f 6e 2d 30 20 74 6f 20 6c 69 this non-0 to li
0800: 6e 6b 20 74 6f 20 74 68 65 20 52 50 43 52 54 34 nk to the RPCRT4
0810: 20 6c 69 62 72 61 72 79 2e 0a 23 0a 21 49 46 4e library..#.!IFN
0820: 44 45 46 20 55 53 45 5f 52 50 43 52 54 34 5f 4c DEF USE_RPCRT4_L
0830: 49 42 0a 55 53 45 5f 52 50 43 52 54 34 5f 4c 49 IB.USE_RPCRT4_LI
0840: 42 20 3d 20 30 0a 21 45 4e 44 49 46 0a 0a 23 20 B = 0.!ENDIF..#
0850: 53 65 74 20 74 68 69 73 20 6e 6f 6e 2d 30 20 74 Set this non-0 t
0860: 6f 20 67 65 6e 65 72 61 74 65 20 61 73 73 65 6d o generate assem
0870: 62 6c 79 20 63 6f 64 65 20 6c 69 73 74 69 6e 67 bly code listing
0880: 73 20 66 6f 72 20 74 68 65 20 73 6f 75 72 63 65 s for the source
0890: 20 63 6f 64 65 0a 23 20 66 69 6c 65 73 2e 0a 23 code.# files..#
08a0: 0a 21 49 46 4e 44 45 46 20 55 53 45 5f 4c 49 53 .!IFNDEF USE_LIS
08b0: 54 49 4e 47 53 0a 55 53 45 5f 4c 49 53 54 49 4e TINGS.USE_LISTIN
08c0: 47 53 20 3d 20 30 0a 21 45 4e 44 49 46 0a 0a 23 GS = 0.!ENDIF..#
08d0: 20 53 65 74 20 74 68 69 73 20 6e 6f 6e 2d 30 20 Set this non-0
08e0: 74 6f 20 61 74 74 65 6d 70 74 20 73 65 74 74 69 to attempt setti
08f0: 6e 67 20 74 68 65 20 6e 61 74 69 76 65 20 63 6f ng the native co
0900: 6d 70 69 6c 65 72 20 61 75 74 6f 6d 61 74 69 63 mpiler automatic
0910: 61 6c 6c 79 0a 23 20 66 6f 72 20 63 72 6f 73 73 ally.# for cross
0920: 2d 63 6f 6d 70 69 6c 69 6e 67 20 74 68 65 20 63 -compiling the c
0930: 6f 6d 6d 61 6e 64 20 6c 69 6e 65 20 74 6f 6f 6c ommand line tool
0940: 73 20 6e 65 65 64 65 64 20 64 75 72 69 6e 67 20 s needed during
0950: 74 68 65 20 63 6f 6d 70 69 6c 61 74 69 6f 6e 0a the compilation.
0960: 23 20 70 72 6f 63 65 73 73 2e 0a 23 0a 21 49 46 # process..#.!IF
0970: 4e 44 45 46 20 58 43 4f 4d 50 49 4c 45 0a 58 43 NDEF XCOMPILE.XC
0980: 4f 4d 50 49 4c 45 20 3d 20 30 0a 21 45 4e 44 49 OMPILE = 0.!ENDI
0990: 46 0a 0a 23 20 53 65 74 20 74 68 69 73 20 6e 6f F..# Set this no
09a0: 6e 2d 30 20 74 6f 20 75 73 65 20 74 68 65 20 6e n-0 to use the n
09b0: 61 74 69 76 65 20 6c 69 62 72 61 72 69 65 73 20 ative libraries
09c0: 70 61 74 68 73 20 66 6f 72 20 63 72 6f 73 73 2d paths for cross-
09d0: 63 6f 6d 70 69 6c 69 6e 67 0a 23 20 74 68 65 20 compiling.# the
09e0: 63 6f 6d 6d 61 6e 64 20 6c 69 6e 65 20 74 6f 6f command line too
09f0: 6c 73 20 6e 65 65 64 65 64 20 64 75 72 69 6e 67 ls needed during
0a00: 20 74 68 65 20 63 6f 6d 70 69 6c 61 74 69 6f 6e the compilation
0a10: 20 70 72 6f 63 65 73 73 2e 0a 23 0a 21 49 46 4e process..#.!IFN
0a20: 44 45 46 20 55 53 45 5f 4e 41 54 49 56 45 5f 4c DEF USE_NATIVE_L
0a30: 49 42 50 41 54 48 53 0a 55 53 45 5f 4e 41 54 49 IBPATHS.USE_NATI
0a40: 56 45 5f 4c 49 42 50 41 54 48 53 20 3d 20 30 0a VE_LIBPATHS = 0.
0a50: 21 45 4e 44 49 46 0a 0a 23 20 53 65 74 20 74 68 !ENDIF..# Set th
0a60: 69 73 20 30 20 74 6f 20 73 6b 69 70 20 74 68 65 is 0 to skip the
0a70: 20 63 6f 6d 70 69 6c 69 6e 67 20 61 6e 64 20 65 compiling and e
0a80: 6d 62 65 64 64 69 6e 67 20 6f 66 20 76 65 72 73 mbedding of vers
0a90: 69 6f 6e 20 72 65 73 6f 75 72 63 65 73 2e 0a 23 ion resources..#
0aa0: 0a 21 49 46 4e 44 45 46 20 55 53 45 5f 52 43 0a .!IFNDEF USE_RC.
0ab0: 55 53 45 5f 52 43 20 3d 20 31 0a 21 45 4e 44 49 USE_RC = 1.!ENDI
0ac0: 46 0a 0a 23 20 53 65 74 20 74 68 69 73 20 6e 6f F..# Set this no
0ad0: 6e 2d 30 20 74 6f 20 63 6f 6d 70 69 6c 65 20 62 n-0 to compile b
0ae0: 69 6e 61 72 69 65 73 20 73 75 69 74 61 62 6c 65 inaries suitable
0af0: 20 66 6f 72 20 74 68 65 20 57 69 6e 52 54 20 65 for the WinRT e
0b00: 6e 76 69 72 6f 6e 6d 65 6e 74 2e 0a 23 20 54 68 nvironment..# Th
0b10: 69 73 20 73 65 74 74 69 6e 67 20 64 6f 65 73 20 is setting does
0b20: 6e 6f 74 20 61 70 70 6c 79 20 74 6f 20 61 6e 79 not apply to any
0b30: 20 62 69 6e 61 72 69 65 73 20 74 68 61 74 20 72 binaries that r
0b40: 65 71 75 69 72 65 20 54 63 6c 20 74 6f 20 6f 70 equire Tcl to op
0b50: 65 72 61 74 65 0a 23 20 70 72 6f 70 65 72 6c 79 erate.# properly
0b60: 20 28 69 2e 65 2e 20 74 68 65 20 74 65 78 74 20 (i.e. the text
0b70: 66 69 78 74 75 72 65 2c 20 65 74 63 29 2e 0a 23 fixture, etc)..#
0b80: 0a 21 49 46 4e 44 45 46 20 46 4f 52 5f 57 49 4e .!IFNDEF FOR_WIN
0b90: 52 54 0a 46 4f 52 5f 57 49 4e 52 54 20 3d 20 30 RT.FOR_WINRT = 0
0ba0: 0a 21 45 4e 44 49 46 0a 0a 23 20 53 65 74 20 74 .!ENDIF..# Set t
0bb0: 68 69 73 20 6e 6f 6e 2d 30 20 74 6f 20 63 6f 6d his non-0 to com
0bc0: 70 69 6c 65 20 62 69 6e 61 72 69 65 73 20 73 75 pile binaries su
0bd0: 69 74 61 62 6c 65 20 66 6f 72 20 74 68 65 20 55 itable for the U
0be0: 57 50 20 65 6e 76 69 72 6f 6e 6d 65 6e 74 2e 0a WP environment..
0bf0: 23 20 54 68 69 73 20 73 65 74 74 69 6e 67 20 64 # This setting d
0c00: 6f 65 73 20 6e 6f 74 20 61 70 70 6c 79 20 74 6f oes not apply to
0c10: 20 61 6e 79 20 62 69 6e 61 72 69 65 73 20 74 68 any binaries th
0c20: 61 74 20 72 65 71 75 69 72 65 20 54 63 6c 20 74 at require Tcl t
0c30: 6f 20 6f 70 65 72 61 74 65 0a 23 20 70 72 6f 70 o operate.# prop
0c40: 65 72 6c 79 20 28 69 2e 65 2e 20 74 68 65 20 74 erly (i.e. the t
0c50: 65 78 74 20 66 69 78 74 75 72 65 2c 20 65 74 63 ext fixture, etc
0c60: 29 2e 0a 23 0a 21 49 46 4e 44 45 46 20 46 4f 52 )..#.!IFNDEF FOR
0c70: 5f 55 57 50 0a 46 4f 52 5f 55 57 50 20 3d 20 30 _UWP.FOR_UWP = 0
0c80: 0a 21 45 4e 44 49 46 0a 0a 23 20 53 65 74 20 74 .!ENDIF..# Set t
0c90: 68 69 73 20 6e 6f 6e 2d 30 20 74 6f 20 63 6f 6d his non-0 to com
0ca0: 70 69 6c 65 20 62 69 6e 61 72 69 65 73 20 73 75 pile binaries su
0cb0: 69 74 61 62 6c 65 20 66 6f 72 20 74 68 65 20 57 itable for the W
0cc0: 69 6e 64 6f 77 73 20 31 30 20 70 6c 61 74 66 6f indows 10 platfo
0cd0: 72 6d 2e 0a 23 0a 21 49 46 4e 44 45 46 20 46 4f rm..#.!IFNDEF FO
0ce0: 52 5f 57 49 4e 31 30 0a 46 4f 52 5f 57 49 4e 31 R_WIN10.FOR_WIN1
0cf0: 30 20 3d 20 30 0a 21 45 4e 44 49 46 0a 0a 23 20 0 = 0.!ENDIF..#
0d00: 3c 3c 6d 61 72 6b 3e 3e 0a 23 20 53 65 74 20 74 <<mark>>.# Set t
0d10: 68 69 73 20 6e 6f 6e 2d 30 20 74 6f 20 73 6b 69 his non-0 to ski
0d20: 70 20 61 74 74 65 6d 70 74 69 6e 67 20 74 6f 20 p attempting to
0d30: 6c 6f 6f 6b 20 66 6f 72 20 61 6e 64 2f 6f 72 20 look for and/or
0d40: 6c 69 6e 6b 20 77 69 74 68 20 74 68 65 20 54 63 link with the Tc
0d50: 6c 0a 23 20 72 75 6e 74 69 6d 65 20 6c 69 62 72 l.# runtime libr
0d60: 61 72 79 2e 0a 23 0a 21 49 46 4e 44 45 46 20 4e ary..#.!IFNDEF N
0d70: 4f 5f 54 43 4c 0a 4e 4f 5f 54 43 4c 20 3d 20 30 O_TCL.NO_TCL = 0
0d80: 0a 21 45 4e 44 49 46 0a 23 20 3c 3c 2f 6d 61 72 .!ENDIF.# <</mar
0d90: 6b 3e 3e 0a 0a 23 20 53 65 74 20 74 68 69 73 20 k>>..# Set this
0da0: 74 6f 20 6e 6f 6e 2d 30 20 74 6f 20 63 72 65 61 to non-0 to crea
0db0: 74 65 20 61 6e 64 20 75 73 65 20 50 44 42 73 2e te and use PDBs.
0dc0: 0a 23 0a 21 49 46 4e 44 45 46 20 53 59 4d 42 4f .#.!IFNDEF SYMBO
0dd0: 4c 53 0a 53 59 4d 42 4f 4c 53 20 3d 20 31 0a 21 LS.SYMBOLS = 1.!
0de0: 45 4e 44 49 46 0a 0a 23 20 53 65 74 20 74 68 69 ENDIF..# Set thi
0df0: 73 20 74 6f 20 6e 6f 6e 2d 30 20 74 6f 20 75 73 s to non-0 to us
0e00: 65 20 74 68 65 20 53 51 4c 69 74 65 20 64 65 62 e the SQLite deb
0e10: 75 67 67 69 6e 67 20 68 65 61 70 20 73 75 62 73 ugging heap subs
0e20: 79 73 74 65 6d 2e 0a 23 0a 21 49 46 4e 44 45 46 ystem..#.!IFNDEF
0e30: 20 4d 45 4d 44 45 42 55 47 0a 4d 45 4d 44 45 42 MEMDEBUG.MEMDEB
0e40: 55 47 20 3d 20 30 0a 21 45 4e 44 49 46 0a 0a 23 UG = 0.!ENDIF..#
0e50: 20 53 65 74 20 74 68 69 73 20 74 6f 20 6e 6f 6e Set this to non
0e60: 2d 30 20 74 6f 20 75 73 65 20 74 68 65 20 57 69 -0 to use the Wi
0e70: 6e 33 32 20 6e 61 74 69 76 65 20 68 65 61 70 20 n32 native heap
0e80: 73 75 62 73 79 73 74 65 6d 2e 0a 23 0a 21 49 46 subsystem..#.!IF
0e90: 4e 44 45 46 20 57 49 4e 33 32 48 45 41 50 0a 57 NDEF WIN32HEAP.W
0ea0: 49 4e 33 32 48 45 41 50 20 3d 20 30 0a 21 45 4e IN32HEAP = 0.!EN
0eb0: 44 49 46 0a 0a 23 20 53 65 74 20 74 68 69 73 20 DIF..# Set this
0ec0: 74 6f 20 6e 6f 6e 2d 30 20 74 6f 20 65 6e 61 62 to non-0 to enab
0ed0: 6c 65 20 4f 53 54 52 41 43 45 28 29 20 6d 61 63 le OSTRACE() mac
0ee0: 72 6f 73 2c 20 77 68 69 63 68 20 63 61 6e 20 62 ros, which can b
0ef0: 65 20 75 73 65 66 75 6c 20 77 68 65 6e 0a 23 20 e useful when.#
0f00: 64 65 62 75 67 67 69 6e 67 2e 0a 23 0a 21 49 46 debugging..#.!IF
0f10: 4e 44 45 46 20 4f 53 54 52 41 43 45 0a 4f 53 54 NDEF OSTRACE.OST
0f20: 52 41 43 45 20 3d 20 30 0a 21 45 4e 44 49 46 0a RACE = 0.!ENDIF.
0f30: 0a 23 20 53 65 74 20 74 68 69 73 20 74 6f 20 6f .# Set this to o
0f40: 6e 65 20 6f 66 20 74 68 65 20 66 6f 6c 6c 6f 77 ne of the follow
0f50: 69 6e 67 20 76 61 6c 75 65 73 20 74 6f 20 65 6e ing values to en
0f60: 61 62 6c 65 20 76 61 72 69 6f 75 73 20 64 65 62 able various deb
0f70: 75 67 67 69 6e 67 0a 23 20 66 65 61 74 75 72 65 ugging.# feature
0f80: 73 2e 20 20 45 61 63 68 20 6c 65 76 65 6c 20 69 s. Each level i
0f90: 6e 63 6c 75 64 65 73 20 74 68 65 20 64 65 62 75 ncludes the debu
0fa0: 67 67 69 6e 67 20 6f 70 74 69 6f 6e 73 20 66 72 gging options fr
0fb0: 6f 6d 20 74 68 65 20 70 72 65 76 69 6f 75 73 0a om the previous.
0fc0: 23 20 6c 65 76 65 6c 73 2e 20 20 43 75 72 72 65 # levels. Curre
0fd0: 6e 74 6c 79 2c 20 74 68 65 20 72 65 63 6f 67 6e ntly, the recogn
0fe0: 69 7a 65 64 20 76 61 6c 75 65 73 20 66 6f 72 20 ized values for
0ff0: 44 45 42 55 47 20 61 72 65 3a 0a 23 0a 23 20 30 DEBUG are:.#.# 0
1000: 20 3d 3d 20 4e 44 45 42 55 47 3a 20 44 69 73 61 == NDEBUG: Disa
1010: 62 6c 65 73 20 61 73 73 65 72 74 28 29 20 61 6e bles assert() an
1020: 64 20 6f 74 68 65 72 20 72 75 6e 74 69 6d 65 20 d other runtime
1030: 64 69 61 67 6e 6f 73 74 69 63 73 2e 0a 23 20 31 diagnostics..# 1
1040: 20 3d 3d 20 53 51 4c 49 54 45 5f 45 4e 41 42 4c == SQLITE_ENABL
1050: 45 5f 41 50 49 5f 41 52 4d 4f 52 3a 20 65 78 74 E_API_ARMOR: ext
1060: 72 61 20 61 74 74 65 6d 70 74 73 20 74 6f 20 64 ra attempts to d
1070: 65 74 65 63 74 20 6d 69 73 75 73 65 20 6f 66 20 etect misuse of
1080: 74 68 65 20 41 50 49 2e 0a 23 20 32 20 3d 3d 20 the API..# 2 ==
1090: 44 69 73 61 62 6c 65 73 20 4e 44 45 42 55 47 20 Disables NDEBUG
10a0: 61 6e 64 20 61 6c 6c 20 6f 70 74 69 6d 69 7a 61 and all optimiza
10b0: 74 69 6f 6e 73 20 61 6e 64 20 74 68 65 6e 20 65 tions and then e
10c0: 6e 61 62 6c 65 73 20 50 44 42 73 2e 0a 23 20 33 nables PDBs..# 3
10d0: 20 3d 3d 20 53 51 4c 49 54 45 5f 44 45 42 55 47 == SQLITE_DEBUG
10e0: 3a 20 45 6e 61 62 6c 65 73 20 76 61 72 69 6f 75 : Enables variou
10f0: 73 20 64 69 61 67 6e 6f 73 74 69 63 73 20 6d 65 s diagnostics me
1100: 73 73 61 67 65 73 20 61 6e 64 20 63 6f 64 65 2e ssages and code.
1110: 0a 23 20 34 20 3d 3d 20 53 51 4c 49 54 45 5f 57 .# 4 == SQLITE_W
1120: 49 4e 33 32 5f 4d 41 4c 4c 4f 43 5f 56 41 4c 49 IN32_MALLOC_VALI
1130: 44 41 54 45 3a 20 56 61 6c 69 64 61 74 65 20 74 DATE: Validate t
1140: 68 65 20 57 69 6e 33 32 20 6e 61 74 69 76 65 20 he Win32 native
1150: 68 65 61 70 20 70 65 72 20 63 61 6c 6c 2e 0a 23 heap per call..#
1160: 20 35 20 3d 3d 20 53 51 4c 49 54 45 5f 44 45 42 5 == SQLITE_DEB
1170: 55 47 5f 4f 53 5f 54 52 41 43 45 3a 20 45 6e 61 UG_OS_TRACE: Ena
1180: 62 6c 65 73 20 6f 75 74 70 75 74 20 66 72 6f 6d bles output from
1190: 20 74 68 65 20 4f 53 54 52 41 43 45 28 29 20 6d the OSTRACE() m
11a0: 61 63 72 6f 73 2e 0a 23 20 36 20 3d 3d 20 53 51 acros..# 6 == SQ
11b0: 4c 49 54 45 5f 45 4e 41 42 4c 45 5f 49 4f 54 52 LITE_ENABLE_IOTR
11c0: 41 43 45 3a 20 45 6e 61 62 6c 65 73 20 6f 75 74 ACE: Enables out
11d0: 70 75 74 20 66 72 6f 6d 20 74 68 65 20 49 4f 54 put from the IOT
11e0: 52 41 43 45 28 29 20 6d 61 63 72 6f 73 2e 0a 23 RACE() macros..#
11f0: 0a 21 49 46 4e 44 45 46 20 44 45 42 55 47 0a 44 .!IFNDEF DEBUG.D
1200: 45 42 55 47 20 3d 20 30 0a 21 45 4e 44 49 46 0a EBUG = 0.!ENDIF.
1210: 0a 23 20 45 6e 61 62 6c 65 20 75 73 65 20 6f 66 .# Enable use of
1220: 20 61 76 61 69 6c 61 62 6c 65 20 63 6f 6d 70 69 available compi
1230: 6c 65 72 20 6f 70 74 69 6d 69 7a 61 74 69 6f 6e ler optimization
1240: 73 3f 20 20 4e 6f 72 6d 61 6c 6c 79 2c 20 74 68 s? Normally, th
1250: 69 73 20 73 68 6f 75 6c 64 20 62 65 0a 23 20 6e is should be.# n
1260: 6f 6e 2d 7a 65 72 6f 2e 20 20 53 65 74 74 69 6e on-zero. Settin
1270: 67 20 74 68 69 73 20 74 6f 20 7a 65 72 6f 2c 20 g this to zero,
1280: 74 68 75 73 20 64 69 73 61 62 6c 69 6e 67 20 61 thus disabling a
1290: 6c 6c 20 63 6f 6d 70 69 6c 65 72 20 6f 70 74 69 ll compiler opti
12a0: 6d 69 7a 61 74 69 6f 6e 73 2c 0a 23 20 63 61 6e mizations,.# can
12b0: 20 62 65 20 75 73 65 66 75 6c 20 66 6f 72 20 74 be useful for t
12c0: 65 73 74 69 6e 67 2e 0a 23 0a 21 49 46 4e 44 45 esting..#.!IFNDE
12d0: 46 20 4f 50 54 49 4d 49 5a 41 54 49 4f 4e 53 0a F OPTIMIZATIONS.
12e0: 4f 50 54 49 4d 49 5a 41 54 49 4f 4e 53 20 3d 20 OPTIMIZATIONS =
12f0: 32 0a 21 45 4e 44 49 46 0a 0a 23 20 53 65 74 20 2.!ENDIF..# Set
1300: 74 68 65 20 73 6f 75 72 63 65 20 63 6f 64 65 20 the source code
1310: 66 69 6c 65 20 74 6f 20 62 65 20 75 73 65 64 20 file to be used
1320: 62 79 20 65 78 65 63 75 74 61 62 6c 65 73 20 61 by executables a
1330: 6e 64 20 6c 69 62 72 61 72 69 65 73 20 77 68 65 nd libraries whe
1340: 6e 0a 23 20 74 68 65 79 20 6e 65 65 64 20 74 68 n.# they need th
1350: 65 20 61 6d 61 6c 67 61 6d 61 74 69 6f 6e 2e 0a e amalgamation..
1360: 23 0a 21 49 46 4e 44 45 46 20 53 51 4c 49 54 45 #.!IFNDEF SQLITE
1370: 33 43 0a 21 49 46 20 24 28 53 50 4c 49 54 5f 41 3C.!IF $(SPLIT_A 1380: 4d 41 4c 47 41 4d 41 54 49 4f 4e 29 21 3d 30 0a MALGAMATION)!=0. 1390: 53 51 4c 49 54 45 33 43 20 3d 20 73 71 6c 69 74 SQLITE3C = sqlit 13a0: 65 33 2d 61 6c 6c 2e 63 0a 21 45 4c 53 45 0a 53 e3-all.c.!ELSE.S 13b0: 51 4c 49 54 45 33 43 20 3d 20 73 71 6c 69 74 65 QLITE3C = sqlite 13c0: 33 2e 63 0a 21 45 4e 44 49 46 0a 21 45 4e 44 49 3.c.!ENDIF.!ENDI 13d0: 46 0a 0a 23 20 53 65 74 20 74 68 65 20 69 6e 63 F..# Set the inc 13e0: 6c 75 64 65 20 63 6f 64 65 20 66 69 6c 65 20 74 lude code file t 13f0: 6f 20 62 65 20 75 73 65 64 20 62 79 20 65 78 65 o be used by exe 1400: 63 75 74 61 62 6c 65 73 20 61 6e 64 20 6c 69 62 cutables and lib 1410: 72 61 72 69 65 73 20 77 68 65 6e 0a 23 20 74 68 raries when.# th 1420: 65 79 20 6e 65 65 64 20 53 51 4c 69 74 65 2e 0a ey need SQLite.. 1430: 23 0a 21 49 46 4e 44 45 46 20 53 51 4c 49 54 45 #.!IFNDEF SQLITE 1440: 33 48 0a 53 51 4c 49 54 45 33 48 20 3d 20 73 71 3H.SQLITE3H = sq 1450: 6c 69 74 65 33 2e 68 0a 21 45 4e 44 49 46 0a 0a lite3.h.!ENDIF.. 1460: 23 20 54 68 69 73 20 69 73 20 74 68 65 20 6e 61 # This is the na 1470: 6d 65 20 74 6f 20 75 73 65 20 66 6f 72 20 74 68 me to use for th 1480: 65 20 53 51 4c 69 74 65 20 64 79 6e 61 6d 69 63 e SQLite dynamic 1490: 20 6c 69 6e 6b 20 6c 69 62 72 61 72 79 20 28 44 link library (D 14a0: 4c 4c 29 2e 0a 23 0a 21 49 46 4e 44 45 46 20 53 LL)..#.!IFNDEF S 14b0: 51 4c 49 54 45 33 44 4c 4c 0a 53 51 4c 49 54 45 QLITE3DLL.SQLITE 14c0: 33 44 4c 4c 20 3d 20 73 71 6c 69 74 65 33 2e 64 3DLL = sqlite3.d 14d0: 6c 6c 0a 21 45 4e 44 49 46 0a 0a 23 20 54 68 69 ll.!ENDIF..# Thi 14e0: 73 20 69 73 20 74 68 65 20 6e 61 6d 65 20 74 6f s is the name to 14f0: 20 75 73 65 20 66 6f 72 20 74 68 65 20 53 51 4c use for the SQL 1500: 69 74 65 20 69 6d 70 6f 72 74 20 6c 69 62 72 61 ite import libra 1510: 72 79 20 28 4c 49 42 29 2e 0a 23 0a 21 49 46 4e ry (LIB)..#.!IFN 1520: 44 45 46 20 53 51 4c 49 54 45 33 4c 49 42 0a 53 DEF SQLITE3LIB.S 1530: 51 4c 49 54 45 33 4c 49 42 20 3d 20 73 71 6c 69 QLITE3LIB = sqli 1540: 74 65 33 2e 6c 69 62 0a 21 45 4e 44 49 46 0a 0a te3.lib.!ENDIF.. 1550: 23 20 54 68 69 73 20 69 73 20 74 68 65 20 6e 61 # This is the na 1560: 6d 65 20 74 6f 20 75 73 65 20 66 6f 72 20 74 68 me to use for th 1570: 65 20 53 51 4c 69 74 65 20 73 68 65 6c 6c 20 65 e SQLite shell e 1580: 78 65 63 75 74 61 62 6c 65 20 28 45 58 45 29 2e xecutable (EXE). 1590: 0a 23 0a 21 49 46 4e 44 45 46 20 53 51 4c 49 54 .#.!IFNDEF SQLIT 15a0: 45 33 45 58 45 0a 53 51 4c 49 54 45 33 45 58 45 E3EXE.SQLITE3EXE 15b0: 20 3d 20 73 71 6c 69 74 65 33 2e 65 78 65 0a 21 = sqlite3.exe.! 15c0: 45 4e 44 49 46 0a 0a 23 20 54 68 69 73 20 69 73 ENDIF..# This is 15d0: 20 74 68 65 20 61 72 67 75 6d 65 6e 74 20 75 73 the argument us 15e0: 65 64 20 74 6f 20 73 65 74 20 74 68 65 20 70 72 ed to set the pr 15f0: 6f 67 72 61 6d 20 64 61 74 61 62 61 73 65 20 28 ogram database ( 1600: 50 44 42 29 20 66 69 6c 65 20 66 6f 72 20 74 68 PDB) file for th 1610: 65 0a 23 20 53 51 4c 69 74 65 20 73 68 65 6c 6c e.# SQLite shell 1620: 20 65 78 65 63 75 74 61 62 6c 65 20 28 45 58 45 executable (EXE 1630: 29 2e 0a 23 0a 21 49 46 4e 44 45 46 20 53 51 4c )..#.!IFNDEF SQL 1640: 49 54 45 33 45 58 45 50 44 42 0a 53 51 4c 49 54 ITE3EXEPDB.SQLIT 1650: 45 33 45 58 45 50 44 42 20 3d 20 2f 70 64 62 3a E3EXEPDB = /pdb: 1660: 73 71 6c 69 74 65 33 73 68 2e 70 64 62 0a 21 45 sqlite3sh.pdb.!E 1670: 4e 44 49 46 0a 0a 23 20 54 68 65 73 65 20 61 72 NDIF..# These ar 1680: 65 20 74 68 65 20 22 73 74 61 6e 64 61 72 64 22 e the "standard" 1690: 20 53 51 4c 69 74 65 20 63 6f 6d 70 69 6c 61 74 SQLite compilat 16a0: 69 6f 6e 20 6f 70 74 69 6f 6e 73 20 75 73 65 64 ion options used 16b0: 20 77 68 65 6e 20 63 6f 6d 70 69 6c 69 6e 67 20 when compiling 16c0: 66 6f 72 0a 23 20 74 68 65 20 57 69 6e 64 6f 77 for.# the Window 16d0: 73 20 70 6c 61 74 66 6f 72 6d 2e 0a 23 0a 21 49 s platform..#.!I 16e0: 46 4e 44 45 46 20 4f 50 54 5f 46 45 41 54 55 52 FNDEF OPT_FEATUR 16f0: 45 5f 46 4c 41 47 53 0a 4f 50 54 5f 46 45 41 54 E_FLAGS.OPT_FEAT 1700: 55 52 45 5f 46 4c 41 47 53 20 3d 20 24 28 4f 50 URE_FLAGS =$(OP
1710: 54 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 53 29 T_FEATURE_FLAGS)
1720: 20 2d 44 53 51 4c 49 54 45 5f 45 4e 41 42 4c 45 -DSQLITE_ENABLE
1730: 5f 46 54 53 33 3d 31 0a 4f 50 54 5f 46 45 41 54 _FTS3=1.OPT_FEAT
1740: 55 52 45 5f 46 4c 41 47 53 20 3d 20 24 28 4f 50 URE_FLAGS = $(OP 1750: 54 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 53 29 T_FEATURE_FLAGS) 1760: 20 2d 44 53 51 4c 49 54 45 5f 45 4e 41 42 4c 45 -DSQLITE_ENABLE 1770: 5f 52 54 52 45 45 3d 31 0a 4f 50 54 5f 46 45 41 _RTREE=1.OPT_FEA 1780: 54 55 52 45 5f 46 4c 41 47 53 20 3d 20 24 28 4f TURE_FLAGS =$(O
1790: 50 54 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 53 PT_FEATURE_FLAGS
17a0: 29 20 2d 44 53 51 4c 49 54 45 5f 45 4e 41 42 4c ) -DSQLITE_ENABL
17b0: 45 5f 43 4f 4c 55 4d 4e 5f 4d 45 54 41 44 41 54 E_COLUMN_METADAT
17c0: 41 3d 31 0a 21 45 4e 44 49 46 0a 0a 23 20 54 68 A=1.!ENDIF..# Th
17d0: 65 73 65 20 61 72 65 20 74 68 65 20 22 65 78 74 ese are the "ext
17e0: 65 6e 64 65 64 22 20 53 51 4c 69 74 65 20 63 6f ended" SQLite co
17f0: 6d 70 69 6c 61 74 69 6f 6e 20 6f 70 74 69 6f 6e mpilation option
1800: 73 20 75 73 65 64 20 77 68 65 6e 20 63 6f 6d 70 s used when comp
1810: 69 6c 69 6e 67 20 66 6f 72 0a 23 20 74 68 65 20 iling for.# the
1820: 57 69 6e 64 6f 77 73 20 31 30 20 70 6c 61 74 66 Windows 10 platf
1830: 6f 72 6d 2e 0a 23 0a 21 49 46 4e 44 45 46 20 45 orm..#.!IFNDEF E
1840: 58 54 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 53 XT_FEATURE_FLAGS
1850: 0a 21 49 46 20 24 28 46 4f 52 5f 57 49 4e 31 30 .!IF $(FOR_WIN10 1860: 29 21 3d 30 0a 45 58 54 5f 46 45 41 54 55 52 45 )!=0.EXT_FEATURE 1870: 5f 46 4c 41 47 53 20 3d 20 24 28 45 58 54 5f 46 _FLAGS =$(EXT_F
1880: 45 41 54 55 52 45 5f 46 4c 41 47 53 29 20 2d 44 EATURE_FLAGS) -D
1890: 53 51 4c 49 54 45 5f 45 4e 41 42 4c 45 5f 46 54 SQLITE_ENABLE_FT
18a0: 53 34 3d 31 0a 45 58 54 5f 46 45 41 54 55 52 45 S4=1.EXT_FEATURE
18b0: 5f 46 4c 41 47 53 20 3d 20 24 28 45 58 54 5f 46 _FLAGS = $(EXT_F 18c0: 45 41 54 55 52 45 5f 46 4c 41 47 53 29 20 2d 44 EATURE_FLAGS) -D 18d0: 53 51 4c 49 54 45 5f 53 59 53 54 45 4d 5f 4d 41 SQLITE_SYSTEM_MA 18e0: 4c 4c 4f 43 3d 31 0a 45 58 54 5f 46 45 41 54 55 LLOC=1.EXT_FEATU 18f0: 52 45 5f 46 4c 41 47 53 20 3d 20 24 28 45 58 54 RE_FLAGS =$(EXT
1900: 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 53 29 20 _FEATURE_FLAGS)
1910: 2d 44 53 51 4c 49 54 45 5f 4f 4d 49 54 5f 4c 4f -DSQLITE_OMIT_LO
1920: 43 41 4c 54 49 4d 45 3d 31 0a 21 45 4c 53 45 0a CALTIME=1.!ELSE.
1930: 45 58 54 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 EXT_FEATURE_FLAG
1940: 53 20 3d 0a 21 45 4e 44 49 46 0a 21 45 4e 44 49 S =.!ENDIF.!ENDI
1950: 46 0a 0a 23 23 23 23 23 23 23 23 23 23 23 23 23 F..#############
1960: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
1970: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
1980: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
1990: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
19a0: 23 23 0a 23 23 23 23 23 23 23 23 23 23 23 23 23 ##.#############
19b0: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
19c0: 23 23 20 45 4e 44 20 4f 46 20 4f 50 54 49 4f 4e ## END OF OPTION
19d0: 53 20 23 23 23 23 23 23 23 23 23 23 23 23 23 23 S ##############
19e0: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
19f0: 23 23 0a 23 23 23 23 23 23 23 23 23 23 23 23 23 ##.#############
1a00: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
1a10: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
1a20: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
1a30: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
1a40: 23 23 0a 0a 23 20 57 68 65 6e 20 63 6f 6d 70 69 ##..# When compi
1a50: 6c 69 6e 67 20 66 6f 72 20 74 68 65 20 57 69 6e ling for the Win
1a60: 64 6f 77 73 20 31 30 20 70 6c 61 74 66 6f 72 6d dows 10 platform
1a70: 2c 20 74 68 65 20 50 4c 41 54 46 4f 52 4d 20 6d , the PLATFORM m
1a80: 61 63 72 6f 20 6d 75 73 74 20 62 65 20 73 65 74 acro must be set
1a90: 0a 23 20 74 6f 20 61 6e 20 61 70 70 72 6f 70 72 .# to an appropr
1aa0: 69 61 74 65 20 76 61 6c 75 65 20 28 65 2e 67 2e iate value (e.g.
1ab0: 20 78 38 36 2c 20 78 36 34 2c 20 61 72 6d 2c 20 x86, x64, arm,
1ac0: 61 72 6d 36 34 2c 20 65 74 63 29 2e 0a 23 0a 21 arm64, etc)..#.!
1ad0: 49 46 20 24 28 46 4f 52 5f 57 49 4e 31 30 29 21 IF $(FOR_WIN10)! 1ae0: 3d 30 0a 21 49 46 4e 44 45 46 20 50 4c 41 54 46 =0.!IFNDEF PLATF 1af0: 4f 52 4d 0a 21 45 52 52 4f 52 20 55 73 69 6e 67 ORM.!ERROR Using 1b00: 20 74 68 65 20 46 4f 52 5f 57 49 4e 31 30 20 6f the FOR_WIN10 o 1b10: 70 74 69 6f 6e 20 72 65 71 75 69 72 65 73 20 61 ption requires a 1b20: 20 76 61 6c 75 65 20 66 6f 72 20 50 4c 41 54 46 value for PLATF 1b30: 4f 52 4d 2e 0a 21 45 4e 44 49 46 0a 21 45 4e 44 ORM..!ENDIF.!END 1b40: 49 46 0a 0a 23 20 54 68 69 73 20 61 73 73 75 6d IF..# This assum 1b50: 65 73 20 74 68 61 74 20 4d 53 56 43 20 69 73 20 es that MSVC is 1b60: 61 6c 77 61 79 73 20 69 6e 73 74 61 6c 6c 65 64 always installed 1b70: 20 69 6e 20 33 32 2d 62 69 74 20 50 72 6f 67 72 in 32-bit Progr 1b80: 61 6d 20 46 69 6c 65 73 20 64 69 72 65 63 74 6f am Files directo 1b90: 72 79 0a 23 20 61 6e 64 20 73 65 74 73 20 74 68 ry.# and sets th 1ba0: 65 20 76 61 72 69 61 62 6c 65 20 66 6f 72 20 75 e variable for u 1bb0: 73 65 20 69 6e 20 6c 6f 63 61 74 69 6e 67 20 6f se in locating o 1bc0: 74 68 65 72 20 33 32 2d 62 69 74 20 69 6e 73 74 ther 32-bit inst 1bd0: 61 6c 6c 73 20 61 63 63 6f 72 64 69 6e 67 6c 79 alls accordingly 1be0: 2e 0a 23 0a 50 52 4f 47 52 41 4d 46 49 4c 45 53 ..#.PROGRAMFILES 1bf0: 5f 58 38 36 20 3d 20 24 28 56 43 49 4e 53 54 41 _X86 =$(VCINSTA
1c00: 4c 4c 44 49 52 29 5c 2e 2e 5c 2e 2e 0a 50 52 4f LLDIR)\..\...PRO
1c10: 47 52 41 4d 46 49 4c 45 53 5f 58 38 36 20 3d 20 GRAMFILES_X86 =
1c20: 24 28 50 52 4f 47 52 41 4d 46 49 4c 45 53 5f 58 $(PROGRAMFILES_X 1c30: 38 36 3a 5c 5c 3d 5c 29 0a 0a 23 20 43 68 65 63 86:\\=\)..# Chec 1c40: 6b 20 66 6f 72 20 74 68 65 20 70 72 65 64 65 66 k for the predef 1c50: 69 6e 65 64 20 63 6f 6d 6d 61 6e 64 20 6d 61 63 ined command mac 1c60: 72 6f 20 43 43 2e 20 20 54 68 69 73 20 73 68 6f ro CC. This sho 1c70: 75 6c 64 20 70 6f 69 6e 74 20 74 6f 20 74 68 65 uld point to the 1c80: 20 63 6f 6d 70 69 6c 65 72 0a 23 20 62 69 6e 61 compiler.# bina 1c90: 72 79 20 66 6f 72 20 74 68 65 20 74 61 72 67 65 ry for the targe 1ca0: 74 20 70 6c 61 74 66 6f 72 6d 2e 20 20 49 66 20 t platform. If 1cb0: 69 74 20 69 73 20 6e 6f 74 20 64 65 66 69 6e 65 it is not define 1cc0: 64 2c 20 73 69 6d 70 6c 79 20 64 65 66 69 6e 65 d, simply define 1cd0: 20 69 74 20 74 6f 0a 23 20 74 68 65 20 6c 65 67 it to.# the leg 1ce0: 61 63 79 20 64 65 66 61 75 6c 74 20 76 61 6c 75 acy default valu 1cf0: 65 20 27 63 6c 2e 65 78 65 27 2e 0a 23 0a 21 49 e 'cl.exe'..#.!I 1d00: 46 4e 44 45 46 20 43 43 0a 43 43 20 3d 20 63 6c FNDEF CC.CC = cl 1d10: 2e 65 78 65 0a 21 45 4e 44 49 46 0a 0a 23 20 43 .exe.!ENDIF..# C 1d20: 68 65 63 6b 20 66 6f 72 20 74 68 65 20 63 6f 6d heck for the com 1d30: 6d 61 6e 64 20 6d 61 63 72 6f 20 4c 44 2e 20 20 mand macro LD. 1d40: 54 68 69 73 20 73 68 6f 75 6c 64 20 70 6f 69 6e This should poin 1d50: 74 20 74 6f 20 74 68 65 20 6c 69 6e 6b 65 72 20 t to the linker 1d60: 62 69 6e 61 72 79 20 66 6f 72 0a 23 20 74 68 65 binary for.# the 1d70: 20 74 61 72 67 65 74 20 70 6c 61 74 66 6f 72 6d target platform 1d80: 2e 20 20 49 66 20 69 74 20 69 73 20 6e 6f 74 20 . If it is not 1d90: 64 65 66 69 6e 65 64 2c 20 73 69 6d 70 6c 79 20 defined, simply 1da0: 64 65 66 69 6e 65 20 69 74 20 74 6f 20 74 68 65 define it to the 1db0: 20 6c 65 67 61 63 79 0a 23 20 64 65 66 61 75 6c legacy.# defaul 1dc0: 74 20 76 61 6c 75 65 20 27 6c 69 6e 6b 2e 65 78 t value 'link.ex 1dd0: 65 27 2e 0a 23 0a 21 49 46 4e 44 45 46 20 4c 44 e'..#.!IFNDEF LD 1de0: 0a 4c 44 20 3d 20 6c 69 6e 6b 2e 65 78 65 0a 21 .LD = link.exe.! 1df0: 45 4e 44 49 46 0a 0a 23 20 43 68 65 63 6b 20 66 ENDIF..# Check f 1e00: 6f 72 20 74 68 65 20 70 72 65 64 65 66 69 6e 65 or the predefine 1e10: 64 20 63 6f 6d 6d 61 6e 64 20 6d 61 63 72 6f 20 d command macro 1e20: 52 43 2e 20 20 54 68 69 73 20 73 68 6f 75 6c 64 RC. This should 1e30: 20 70 6f 69 6e 74 20 74 6f 20 74 68 65 20 72 65 point to the re 1e40: 73 6f 75 72 63 65 0a 23 20 63 6f 6d 70 69 6c 65 source.# compile 1e50: 72 20 62 69 6e 61 72 79 20 66 6f 72 20 74 68 65 r binary for the 1e60: 20 74 61 72 67 65 74 20 70 6c 61 74 66 6f 72 6d target platform 1e70: 2e 20 20 49 66 20 69 74 20 69 73 20 6e 6f 74 20 . If it is not 1e80: 64 65 66 69 6e 65 64 2c 20 73 69 6d 70 6c 79 20 defined, simply 1e90: 64 65 66 69 6e 65 0a 23 20 69 74 20 74 6f 20 74 define.# it to t 1ea0: 68 65 20 6c 65 67 61 63 79 20 64 65 66 61 75 6c he legacy defaul 1eb0: 74 20 76 61 6c 75 65 20 27 72 63 2e 65 78 65 27 t value 'rc.exe' 1ec0: 2e 0a 23 0a 21 49 46 4e 44 45 46 20 52 43 0a 52 ..#.!IFNDEF RC.R 1ed0: 43 20 3d 20 72 63 2e 65 78 65 0a 21 45 4e 44 49 C = rc.exe.!ENDI 1ee0: 46 0a 0a 23 20 43 68 65 63 6b 20 66 6f 72 20 74 F..# Check for t 1ef0: 68 65 20 4d 53 56 43 20 72 75 6e 74 69 6d 65 20 he MSVC runtime 1f00: 6c 69 62 72 61 72 79 20 70 61 74 68 20 6d 61 63 library path mac 1f10: 72 6f 2e 20 20 4f 74 68 65 72 77 69 73 65 2c 20 ro. Otherwise, 1f20: 74 68 69 73 20 76 61 6c 75 65 20 77 69 6c 6c 0a this value will. 1f30: 23 20 64 65 66 61 75 6c 74 20 74 6f 20 74 68 65 # default to the 1f40: 20 27 6c 69 62 27 20 64 69 72 65 63 74 6f 72 79 'lib' directory 1f50: 20 75 6e 64 65 72 6e 65 61 74 68 20 74 68 65 20 underneath the 1f60: 4d 53 56 43 20 69 6e 73 74 61 6c 6c 61 74 69 6f MSVC installatio 1f70: 6e 20 64 69 72 65 63 74 6f 72 79 2e 0a 23 0a 21 n directory..#.! 1f80: 49 46 4e 44 45 46 20 43 52 54 4c 49 42 50 41 54 IFNDEF CRTLIBPAT 1f90: 48 0a 43 52 54 4c 49 42 50 41 54 48 20 3d 20 24 H.CRTLIBPATH =$
1fa0: 28 56 43 49 4e 53 54 41 4c 4c 44 49 52 29 5c 6c (VCINSTALLDIR)\l
1fb0: 69 62 0a 21 45 4e 44 49 46 0a 0a 43 52 54 4c 49 ib.!ENDIF..CRTLI
1fc0: 42 50 41 54 48 20 3d 20 24 28 43 52 54 4c 49 42 BPATH = $(CRTLIB 1fd0: 50 41 54 48 3a 5c 5c 3d 5c 29 0a 0a 23 20 43 68 PATH:\\=\)..# Ch 1fe0: 65 63 6b 20 66 6f 72 20 74 68 65 20 63 6f 6d 6d eck for the comm 1ff0: 61 6e 64 20 6d 61 63 72 6f 20 4e 43 43 2e 20 20 and macro NCC. 2000: 54 68 69 73 20 73 68 6f 75 6c 64 20 70 6f 69 6e This should poin 2010: 74 20 74 6f 20 74 68 65 20 63 6f 6d 70 69 6c 65 t to the compile 2020: 72 20 62 69 6e 61 72 79 0a 23 20 66 6f 72 20 74 r binary.# for t 2030: 68 65 20 70 6c 61 74 66 6f 72 6d 20 74 68 65 20 he platform the 2040: 63 6f 6d 70 69 6c 61 74 69 6f 6e 20 70 72 6f 63 compilation proc 2050: 65 73 73 20 69 73 20 74 61 6b 69 6e 67 20 70 6c ess is taking pl 2060: 61 63 65 20 6f 6e 2e 20 20 49 66 20 69 74 20 69 ace on. If it i 2070: 73 20 6e 6f 74 0a 23 20 64 65 66 69 6e 65 64 2c s not.# defined, 2080: 20 73 69 6d 70 6c 79 20 64 65 66 69 6e 65 20 69 simply define i 2090: 74 20 74 6f 20 68 61 76 65 20 74 68 65 20 73 61 t to have the sa 20a0: 6d 65 20 76 61 6c 75 65 20 61 73 20 74 68 65 20 me value as the 20b0: 43 43 20 6d 61 63 72 6f 2e 20 20 57 68 65 6e 0a CC macro. When. 20c0: 23 20 63 72 6f 73 73 2d 63 6f 6d 70 69 6c 69 6e # cross-compilin 20d0: 67 2c 20 69 74 20 69 73 20 73 75 67 67 65 73 74 g, it is suggest 20e0: 65 64 20 74 68 61 74 20 74 68 69 73 20 6d 61 63 ed that this mac 20f0: 72 6f 20 62 65 20 6d 6f 64 69 66 69 65 64 20 76 ro be modified v 2100: 69 61 20 74 68 65 20 63 6f 6d 6d 61 6e 64 0a 23 ia the command.# 2110: 20 6c 69 6e 65 20 28 73 69 6e 63 65 20 6e 6d 61 line (since nma 2120: 6b 65 20 69 74 73 65 6c 66 20 64 6f 65 73 20 6e ke itself does n 2130: 6f 74 20 70 72 6f 76 69 64 65 20 61 20 62 75 69 ot provide a bui 2140: 6c 74 2d 69 6e 20 6d 65 74 68 6f 64 20 74 6f 20 lt-in method to 2150: 67 75 65 73 73 20 69 74 29 2e 0a 23 20 46 6f 72 guess it)..# For 2160: 20 65 78 61 6d 70 6c 65 2c 20 74 6f 20 75 73 65 example, to use 2170: 20 74 68 65 20 78 38 36 20 63 6f 6d 70 69 6c 65 the x86 compile 2180: 72 20 77 68 65 6e 20 63 72 6f 73 73 2d 63 6f 6d r when cross-com 2190: 70 69 6c 69 6e 67 20 66 6f 72 20 78 36 34 2c 20 piling for x64, 21a0: 61 20 63 6f 6d 6d 61 6e 64 0a 23 20 6c 69 6e 65 a command.# line 21b0: 20 73 69 6d 69 6c 61 72 20 74 6f 20 74 68 65 20 similar to the 21c0: 66 6f 6c 6c 6f 77 69 6e 67 20 63 6f 75 6c 64 20 following could 21d0: 62 65 20 75 73 65 64 20 28 61 6c 6c 20 6f 6e 20 be used (all on 21e0: 6f 6e 65 20 6c 69 6e 65 29 3a 0a 23 0a 23 20 20 one line):.#.# 21f0: 20 20 20 6e 6d 61 6b 65 20 2f 66 20 4d 61 6b 65 nmake /f Make 2200: 66 69 6c 65 2e 6d 73 63 20 73 71 6c 69 74 65 33 file.msc sqlite3 2210: 2e 64 6c 6c 0a 23 20 20 20 20 20 20 20 20 20 20 .dll.# 2220: 20 58 43 4f 4d 50 49 4c 45 3d 31 20 55 53 45 5f XCOMPILE=1 USE_ 2230: 4e 41 54 49 56 45 5f 4c 49 42 50 41 54 48 53 3d NATIVE_LIBPATHS= 2240: 31 0a 23 0a 23 20 41 6c 74 65 72 6e 61 74 69 76 1.#.# Alternativ 2250: 65 6c 79 2c 20 74 68 65 20 66 75 6c 6c 20 70 61 ely, the full pa 2260: 74 68 20 61 6e 64 20 66 69 6c 65 20 6e 61 6d 65 th and file name 2270: 20 74 6f 20 74 68 65 20 63 6f 6d 70 69 6c 65 72 to the compiler 2280: 20 62 69 6e 61 72 79 20 66 6f 72 20 74 68 65 0a binary for the. 2290: 23 20 70 6c 61 74 66 6f 72 6d 20 74 68 65 20 63 # platform the c 22a0: 6f 6d 70 69 6c 61 74 69 6f 6e 20 70 72 6f 63 65 ompilation proce 22b0: 73 73 20 69 73 20 74 61 6b 69 6e 67 20 70 6c 61 ss is taking pla 22c0: 63 65 20 6d 61 79 20 62 65 20 73 70 65 63 69 66 ce may be specif 22d0: 69 65 64 20 28 61 6c 6c 20 6f 6e 0a 23 20 6f 6e ied (all on.# on 22e0: 65 20 6c 69 6e 65 29 3a 0a 23 0a 23 20 20 20 20 e line):.#.# 22f0: 20 6e 6d 61 6b 65 20 2f 66 20 4d 61 6b 65 66 69 nmake /f Makefi 2300: 6c 65 2e 6d 73 63 20 73 71 6c 69 74 65 33 2e 64 le.msc sqlite3.d 2310: 6c 6c 0a 23 20 20 20 20 20 20 20 20 20 20 20 22 ll.# " 2320: 4e 43 43 3d 22 22 25 56 43 49 4e 53 54 41 4c 4c NCC=""%VCINSTALL 2330: 44 49 52 25 5c 62 69 6e 5c 63 6c 2e 65 78 65 22 DIR%\bin\cl.exe" 2340: 22 22 0a 23 20 20 20 20 20 20 20 20 20 20 20 55 "".# U 2350: 53 45 5f 4e 41 54 49 56 45 5f 4c 49 42 50 41 54 SE_NATIVE_LIBPAT 2360: 48 53 3d 31 0a 23 0a 21 49 46 44 45 46 20 4e 43 HS=1.#.!IFDEF NC 2370: 43 0a 4e 43 43 20 3d 20 24 28 4e 43 43 3a 5c 5c C.NCC =$(NCC:\\
2380: 3d 5c 29 0a 21 45 4c 53 45 49 46 20 24 28 58 43 =\).!ELSEIF $(XC 2390: 4f 4d 50 49 4c 45 29 21 3d 30 0a 4e 43 43 20 3d OMPILE)!=0.NCC = 23a0: 20 22 24 28 56 43 49 4e 53 54 41 4c 4c 44 49 52 "$(VCINSTALLDIR
23b0: 29 5c 62 69 6e 5c 24 28 43 43 29 22 0a 4e 43 43 )\bin\$(CC)".NCC 23c0: 20 3d 20 24 28 4e 43 43 3a 5c 5c 3d 5c 29 0a 21 =$(NCC:\\=\).!
23d0: 45 4c 53 45 0a 4e 43 43 20 3d 20 24 28 43 43 29 ELSE.NCC = $(CC) 23e0: 0a 21 45 4e 44 49 46 0a 0a 23 20 43 68 65 63 6b .!ENDIF..# Check 23f0: 20 66 6f 72 20 74 68 65 20 4d 53 56 43 20 6e 61 for the MSVC na 2400: 74 69 76 65 20 72 75 6e 74 69 6d 65 20 6c 69 62 tive runtime lib 2410: 72 61 72 79 20 70 61 74 68 20 6d 61 63 72 6f 2e rary path macro. 2420: 20 20 4f 74 68 65 72 77 69 73 65 2c 0a 23 20 74 Otherwise,.# t 2430: 68 69 73 20 76 61 6c 75 65 20 77 69 6c 6c 20 64 his value will d 2440: 65 66 61 75 6c 74 20 74 6f 20 74 68 65 20 27 6c efault to the 'l 2450: 69 62 27 20 64 69 72 65 63 74 6f 72 79 20 75 6e ib' directory un 2460: 64 65 72 6e 65 61 74 68 20 74 68 65 20 4d 53 56 derneath the MSV 2470: 43 0a 23 20 69 6e 73 74 61 6c 6c 61 74 69 6f 6e C.# installation 2480: 20 64 69 72 65 63 74 6f 72 79 2e 0a 23 0a 21 49 directory..#.!I 2490: 46 4e 44 45 46 20 4e 43 52 54 4c 49 42 50 41 54 FNDEF NCRTLIBPAT 24a0: 48 0a 4e 43 52 54 4c 49 42 50 41 54 48 20 3d 20 H.NCRTLIBPATH = 24b0: 24 28 56 43 49 4e 53 54 41 4c 4c 44 49 52 29 5c$(VCINSTALLDIR)\
24c0: 6c 69 62 0a 21 45 4e 44 49 46 0a 0a 4e 43 52 54 lib.!ENDIF..NCRT
24d0: 4c 49 42 50 41 54 48 20 3d 20 24 28 4e 43 52 54 LIBPATH = $(NCRT 24e0: 4c 49 42 50 41 54 48 3a 5c 5c 3d 5c 29 0a 0a 23 LIBPATH:\\=\)..# 24f0: 20 43 68 65 63 6b 20 66 6f 72 20 74 68 65 20 50 Check for the P 2500: 6c 61 74 66 6f 72 6d 20 53 44 4b 20 6c 69 62 72 latform SDK libr 2510: 61 72 79 20 70 61 74 68 20 6d 61 63 72 6f 2e 20 ary path macro. 2520: 20 4f 74 68 65 72 77 69 73 65 2c 20 74 68 69 73 Otherwise, this 2530: 0a 23 20 76 61 6c 75 65 20 77 69 6c 6c 20 64 65 .# value will de 2540: 66 61 75 6c 74 20 74 6f 20 74 68 65 20 27 6c 69 fault to the 'li 2550: 62 27 20 64 69 72 65 63 74 6f 72 79 20 75 6e 64 b' directory und 2560: 65 72 6e 65 61 74 68 20 74 68 65 20 57 69 6e 64 erneath the Wind 2570: 6f 77 73 0a 23 20 53 44 4b 20 69 6e 73 74 61 6c ows.# SDK instal 2580: 6c 61 74 69 6f 6e 20 64 69 72 65 63 74 6f 72 79 lation directory 2590: 20 28 74 68 65 20 65 6e 76 69 72 6f 6e 6d 65 6e (the environmen 25a0: 74 20 76 61 72 69 61 62 6c 65 20 75 73 65 64 20 t variable used 25b0: 61 70 70 65 61 72 73 0a 23 20 74 6f 20 62 65 20 appears.# to be 25c0: 61 76 61 69 6c 61 62 6c 65 20 77 68 65 6e 20 75 available when u 25d0: 73 69 6e 67 20 56 69 73 75 61 6c 20 43 2b 2b 20 sing Visual C++ 25e0: 32 30 30 38 20 6f 72 20 6c 61 74 65 72 20 76 69 2008 or later vi 25f0: 61 20 74 68 65 0a 23 20 63 6f 6d 6d 61 6e 64 20 a the.# command 2600: 6c 69 6e 65 29 2e 0a 23 0a 21 49 46 4e 44 45 46 line)..#.!IFNDEF 2610: 20 4e 53 44 4b 4c 49 42 50 41 54 48 0a 4e 53 44 NSDKLIBPATH.NSD 2620: 4b 4c 49 42 50 41 54 48 20 3d 20 24 28 57 49 4e KLIBPATH =$(WIN
2630: 44 4f 57 53 53 44 4b 44 49 52 29 5c 6c 69 62 0a DOWSSDKDIR)\lib.
2640: 21 45 4e 44 49 46 0a 0a 4e 53 44 4b 4c 49 42 50 !ENDIF..NSDKLIBP
2650: 41 54 48 20 3d 20 24 28 4e 53 44 4b 4c 49 42 50 ATH = $(NSDKLIBP 2660: 41 54 48 3a 5c 5c 3d 5c 29 0a 0a 23 20 43 68 65 ATH:\\=\)..# Che 2670: 63 6b 20 66 6f 72 20 74 68 65 20 55 43 52 54 20 ck for the UCRT 2680: 6c 69 62 72 61 72 79 20 70 61 74 68 20 6d 61 63 library path mac 2690: 72 6f 2e 20 20 4f 74 68 65 72 77 69 73 65 2c 20 ro. Otherwise, 26a0: 74 68 69 73 20 76 61 6c 75 65 20 77 69 6c 6c 0a this value will. 26b0: 23 20 64 65 66 61 75 6c 74 20 74 6f 20 74 68 65 # default to the 26c0: 20 76 65 72 73 69 6f 6e 2d 73 70 65 63 69 66 69 version-specifi 26d0: 63 2c 20 70 6c 61 74 66 6f 72 6d 2d 73 70 65 63 c, platform-spec 26e0: 69 66 69 63 20 27 6c 69 62 27 20 64 69 72 65 63 ific 'lib' direc 26f0: 74 6f 72 79 0a 23 20 75 6e 64 65 72 6e 65 61 74 tory.# underneat 2700: 68 20 74 68 65 20 57 69 6e 64 6f 77 73 20 53 44 h the Windows SD 2710: 4b 20 69 6e 73 74 61 6c 6c 61 74 69 6f 6e 20 64 K installation d 2720: 69 72 65 63 74 6f 72 79 2e 0a 23 0a 21 49 46 4e irectory..#.!IFN 2730: 44 45 46 20 55 43 52 54 4c 49 42 50 41 54 48 0a DEF UCRTLIBPATH. 2740: 55 43 52 54 4c 49 42 50 41 54 48 20 3d 20 24 28 UCRTLIBPATH =$(
2750: 57 49 4e 44 4f 57 53 53 44 4b 44 49 52 29 5c 6c WINDOWSSDKDIR)\l
2760: 69 62 5c 24 28 57 49 4e 44 4f 57 53 53 44 4b 4c ib\$(WINDOWSSDKL 2770: 49 42 56 45 52 53 49 4f 4e 29 5c 75 63 72 74 5c IBVERSION)\ucrt\ 2780: 24 28 50 4c 41 54 46 4f 52 4d 29 0a 21 45 4e 44$(PLATFORM).!END
2790: 49 46 0a 0a 55 43 52 54 4c 49 42 50 41 54 48 20 IF..UCRTLIBPATH
27a0: 3d 20 24 28 55 43 52 54 4c 49 42 50 41 54 48 3a = $(UCRTLIBPATH: 27b0: 5c 5c 3d 5c 29 0a 0a 23 20 43 20 63 6f 6d 70 69 \\=\)..# C compi 27c0: 6c 65 72 20 61 6e 64 20 6f 70 74 69 6f 6e 73 20 ler and options 27d0: 66 6f 72 20 75 73 65 20 69 6e 20 62 75 69 6c 64 for use in build 27e0: 69 6e 67 20 65 78 65 63 75 74 61 62 6c 65 73 20 ing executables 27f0: 74 68 61 74 0a 23 20 77 69 6c 6c 20 72 75 6e 20 that.# will run 2800: 6f 6e 20 74 68 65 20 70 6c 61 74 66 6f 72 6d 20 on the platform 2810: 74 68 61 74 20 69 73 20 64 6f 69 6e 67 20 74 68 that is doing th 2820: 65 20 62 75 69 6c 64 2e 0a 23 0a 21 49 46 20 24 e build..#.!IF$
2830: 28 55 53 45 5f 46 55 4c 4c 57 41 52 4e 29 21 3d (USE_FULLWARN)!=
2840: 30 0a 42 43 43 20 3d 20 24 28 4e 43 43 29 20 2d 0.BCC = $(NCC) - 2850: 6e 6f 6c 6f 67 6f 20 2d 57 34 20 24 28 43 43 4f nologo -W4$(CCO
2860: 50 54 53 29 20 24 28 42 43 43 4f 50 54 53 29 0a PTS) $(BCCOPTS). 2870: 21 45 4c 53 45 0a 42 43 43 20 3d 20 24 28 4e 43 !ELSE.BCC =$(NC
2880: 43 29 20 2d 6e 6f 6c 6f 67 6f 20 2d 57 33 20 24 C) -nologo -W3 $2890: 28 43 43 4f 50 54 53 29 20 24 28 42 43 43 4f 50 (CCOPTS)$(BCCOP
28a0: 54 53 29 0a 21 45 4e 44 49 46 0a 0a 23 20 43 68 TS).!ENDIF..# Ch
28b0: 65 63 6b 20 69 66 20 61 73 73 65 6d 62 6c 79 20 eck if assembly
28c0: 63 6f 64 65 20 6c 69 73 74 69 6e 67 73 20 73 68 code listings sh
28d0: 6f 75 6c 64 20 62 65 20 67 65 6e 65 72 61 74 65 ould be generate
28e0: 64 20 66 6f 72 20 74 68 65 20 73 6f 75 72 63 65 d for the source
28f0: 0a 23 20 63 6f 64 65 20 66 69 6c 65 73 20 74 6f .# code files to
2900: 20 62 65 20 63 6f 6d 70 69 6c 65 64 2e 0a 23 0a be compiled..#.
2910: 21 49 46 20 24 28 55 53 45 5f 4c 49 53 54 49 4e !IF $(USE_LISTIN 2920: 47 53 29 21 3d 30 0a 42 43 43 20 3d 20 24 28 42 GS)!=0.BCC =$(B
2930: 43 43 29 20 2d 46 41 63 73 0a 21 45 4e 44 49 46 CC) -FAcs.!ENDIF
2940: 0a 0a 23 20 43 68 65 63 6b 20 69 66 20 74 68 65 ..# Check if the
2950: 20 6e 61 74 69 76 65 20 6c 69 62 72 61 72 79 20 native library
2960: 70 61 74 68 73 20 73 68 6f 75 6c 64 20 62 65 20 paths should be
2970: 75 73 65 64 20 77 68 65 6e 20 63 6f 6d 70 69 6c used when compil
2980: 69 6e 67 0a 23 20 74 68 65 20 63 6f 6d 6d 61 6e ing.# the comman
2990: 64 20 6c 69 6e 65 20 74 6f 6f 6c 73 20 75 73 65 d line tools use
29a0: 64 20 64 75 72 69 6e 67 20 74 68 65 20 63 6f 6d d during the com
29b0: 70 69 6c 61 74 69 6f 6e 20 70 72 6f 63 65 73 73 pilation process
29c0: 2e 20 20 49 66 0a 23 20 73 6f 2c 20 73 65 74 20 . If.# so, set
29d0: 74 68 65 20 6e 65 63 65 73 73 61 72 79 20 6d 61 the necessary ma
29e0: 63 72 6f 20 6e 6f 77 2e 0a 23 0a 21 49 46 20 24 cro now..#.!IF $29f0: 28 55 53 45 5f 4e 41 54 49 56 45 5f 4c 49 42 50 (USE_NATIVE_LIBP 2a00: 41 54 48 53 29 21 3d 30 0a 4e 4c 54 4c 49 42 50 ATHS)!=0.NLTLIBP 2a10: 41 54 48 53 20 3d 20 22 2f 4c 49 42 50 41 54 48 ATHS = "/LIBPATH 2a20: 3a 24 28 4e 43 52 54 4c 49 42 50 41 54 48 29 22 :$(NCRTLIBPATH)"
2a30: 20 22 2f 4c 49 42 50 41 54 48 3a 24 28 4e 53 44 "/LIBPATH:$(NSD 2a40: 4b 4c 49 42 50 41 54 48 29 22 0a 0a 21 49 46 44 KLIBPATH)"..!IFD 2a50: 45 46 20 4e 55 43 52 54 4c 49 42 50 41 54 48 0a EF NUCRTLIBPATH. 2a60: 4e 55 43 52 54 4c 49 42 50 41 54 48 20 3d 20 24 NUCRTLIBPATH =$
2a70: 28 4e 55 43 52 54 4c 49 42 50 41 54 48 3a 5c 5c (NUCRTLIBPATH:\\
2a80: 3d 5c 29 0a 4e 4c 54 4c 49 42 50 41 54 48 53 20 =\).NLTLIBPATHS
2a90: 3d 20 24 28 4e 4c 54 4c 49 42 50 41 54 48 53 29 = $(NLTLIBPATHS) 2aa0: 20 22 2f 4c 49 42 50 41 54 48 3a 24 28 4e 55 43 "/LIBPATH:$(NUC
2ab0: 52 54 4c 49 42 50 41 54 48 29 22 0a 21 45 4e 44 RTLIBPATH)".!END
2ac0: 49 46 0a 21 45 4e 44 49 46 0a 0a 23 20 43 20 63 IF.!ENDIF..# C c
2ad0: 6f 6d 70 69 6c 65 72 20 61 6e 64 20 6f 70 74 69 ompiler and opti
2ae0: 6f 6e 73 20 66 6f 72 20 75 73 65 20 69 6e 20 62 ons for use in b
2af0: 75 69 6c 64 69 6e 67 20 65 78 65 63 75 74 61 62 uilding executab
2b00: 6c 65 73 20 74 68 61 74 0a 23 20 77 69 6c 6c 20 les that.# will
2b10: 72 75 6e 20 6f 6e 20 74 68 65 20 74 61 72 67 65 run on the targe
2b20: 74 20 70 6c 61 74 66 6f 72 6d 2e 20 20 28 42 43 t platform. (BC
2b30: 43 20 61 6e 64 20 54 43 43 20 61 72 65 20 75 73 C and TCC are us
2b40: 75 61 6c 6c 79 20 74 68 65 0a 23 20 73 61 6d 65 ually the.# same
2b50: 20 75 6e 6c 65 73 73 20 79 6f 75 72 20 61 72 65 unless your are
2b60: 20 63 72 6f 73 73 2d 63 6f 6d 70 69 6c 69 6e 67 cross-compiling
2b70: 2e 29 0a 23 0a 21 49 46 20 24 28 55 53 45 5f 46 .).#.!IF $(USE_F 2b80: 55 4c 4c 57 41 52 4e 29 21 3d 30 0a 54 43 43 20 ULLWARN)!=0.TCC 2b90: 3d 20 24 28 43 43 29 20 2d 6e 6f 6c 6f 67 6f 20 =$(CC) -nologo
2ba0: 2d 57 34 20 2d 44 49 4e 43 4c 55 44 45 5f 4d 53 -W4 -DINCLUDE_MS
2bb0: 56 43 5f 48 3d 31 20 24 28 43 43 4f 50 54 53 29 VC_H=1 $(CCOPTS) 2bc0: 20 24 28 54 43 43 4f 50 54 53 29 0a 21 45 4c 53$(TCCOPTS).!ELS
2bd0: 45 0a 54 43 43 20 3d 20 24 28 43 43 29 20 2d 6e E.TCC = $(CC) -n 2be0: 6f 6c 6f 67 6f 20 2d 57 33 20 24 28 43 43 4f 50 ologo -W3$(CCOP
2bf0: 54 53 29 20 24 28 54 43 43 4f 50 54 53 29 0a 21 TS) $(TCCOPTS).! 2c00: 45 4e 44 49 46 0a 0a 54 43 43 20 3d 20 24 28 54 ENDIF..TCC =$(T
2c10: 43 43 29 20 2d 44 53 51 4c 49 54 45 5f 4f 53 5f CC) -DSQLITE_OS_
2c20: 57 49 4e 3d 31 20 2d 49 2e 20 2d 49 24 28 54 4f WIN=1 -I. -I$(TO 2c30: 50 29 20 2d 49 24 28 54 4f 50 29 5c 73 72 63 20 P) -I$(TOP)\src
2c40: 2d 66 70 3a 70 72 65 63 69 73 65 0a 52 43 43 20 -fp:precise.RCC
2c50: 3d 20 24 28 52 43 29 20 2d 44 53 51 4c 49 54 45 = $(RC) -DSQLITE 2c60: 5f 4f 53 5f 57 49 4e 3d 31 20 2d 49 2e 20 2d 49 _OS_WIN=1 -I. -I 2c70: 24 28 54 4f 50 29 20 2d 49 24 28 54 4f 50 29 5c$(TOP) -I$(TOP)\ 2c80: 73 72 63 20 24 28 52 43 4f 50 54 53 29 20 24 28 src$(RCOPTS) $( 2c90: 52 43 43 4f 50 54 53 29 0a 0a 23 20 41 64 6a 75 RCCOPTS)..# Adju 2ca0: 73 74 20 74 68 65 20 6e 61 6d 65 73 20 6f 66 20 st the names of 2cb0: 74 68 65 20 70 72 69 6d 61 72 79 20 74 61 72 67 the primary targ 2cc0: 65 74 73 20 66 6f 72 20 75 73 65 20 77 69 74 68 ets for use with 2cd0: 20 57 69 6e 64 6f 77 73 20 31 30 2e 0a 23 0a 21 Windows 10..#.! 2ce0: 49 46 20 24 28 46 4f 52 5f 57 49 4e 31 30 29 21 IF$(FOR_WIN10)!
2cf0: 3d 30 0a 53 51 4c 49 54 45 33 44 4c 4c 20 3d 20 =0.SQLITE3DLL =
2d00: 77 69 6e 73 71 6c 69 74 65 33 2e 64 6c 6c 0a 53 winsqlite3.dll.S
2d10: 51 4c 49 54 45 33 4c 49 42 20 3d 20 77 69 6e 73 QLITE3LIB = wins
2d20: 71 6c 69 74 65 33 2e 6c 69 62 0a 53 51 4c 49 54 qlite3.lib.SQLIT
2d30: 45 33 45 58 45 20 3d 20 77 69 6e 73 71 6c 69 74 E3EXE = winsqlit
2d40: 65 33 73 68 65 6c 6c 2e 65 78 65 0a 53 51 4c 49 e3shell.exe.SQLI
2d50: 54 45 33 45 58 45 50 44 42 20 3d 0a 21 45 4e 44 TE3EXEPDB =.!END
2d60: 49 46 0a 0a 23 20 43 68 65 63 6b 20 69 66 20 77 IF..# Check if w
2d70: 65 20 77 61 6e 74 20 74 6f 20 75 73 65 20 74 68 e want to use th
2d80: 65 20 22 73 74 64 63 61 6c 6c 22 20 63 61 6c 6c e "stdcall" call
2d90: 69 6e 67 20 63 6f 6e 76 65 6e 74 69 6f 6e 20 77 ing convention w
2da0: 68 65 6e 20 63 6f 6d 70 69 6c 69 6e 67 2e 0a 23 hen compiling..#
2db0: 20 54 68 69 73 20 69 73 20 6e 6f 74 20 73 75 70 This is not sup
2dc0: 70 6f 72 74 65 64 20 62 79 20 74 68 65 20 63 6f ported by the co
2dd0: 6d 70 69 6c 65 72 73 20 66 6f 72 20 6e 6f 6e 2d mpilers for non-
2de0: 78 38 36 20 70 6c 61 74 66 6f 72 6d 73 2e 20 20 x86 platforms.
2df0: 49 74 20 73 68 6f 75 6c 64 0a 23 20 61 6c 73 6f It should.# also
2e00: 20 62 65 20 6e 6f 74 65 64 20 68 65 72 65 20 74 be noted here t
2e10: 68 61 74 20 62 75 69 6c 64 69 6e 67 20 61 6e 79 hat building any
2e20: 20 74 61 72 67 65 74 20 77 69 74 68 20 74 68 65 target with the
2e30: 73 65 20 22 73 74 64 63 61 6c 6c 22 20 6f 70 74 se "stdcall" opt
2e40: 69 6f 6e 73 0a 23 20 77 69 6c 6c 20 6d 6f 73 74 ions.# will most
2e50: 20 6c 69 6b 65 6c 79 20 66 61 69 6c 20 69 66 20 likely fail if
2e60: 74 68 65 20 54 63 6c 20 6c 69 62 72 61 72 79 20 the Tcl library
2e70: 69 73 20 61 6c 73 6f 20 72 65 71 75 69 72 65 64 is also required
2e80: 2e 20 20 54 68 69 73 20 69 73 20 64 75 65 0a 23 . This is due.#
2e90: 20 74 6f 20 68 6f 77 20 74 68 65 20 54 63 6c 20 to how the Tcl
2ea0: 6c 69 62 72 61 72 79 20 66 75 6e 63 74 69 6f 6e library function
2eb0: 73 20 61 72 65 20 64 65 63 6c 61 72 65 64 20 61 s are declared a
2ec0: 6e 64 20 65 78 70 6f 72 74 65 64 20 28 69 2e 65 nd exported (i.e
2ed0: 2e 20 77 69 74 68 6f 75 74 0a 23 20 61 6e 20 65 . without.# an e
2ee0: 78 70 6c 69 63 69 74 20 63 61 6c 6c 69 6e 67 20 xplicit calling
2ef0: 63 6f 6e 76 65 6e 74 69 6f 6e 2c 20 77 68 69 63 convention, whic
2f00: 68 20 72 65 73 75 6c 74 73 20 69 6e 20 22 63 64 h results in "cd
2f10: 65 63 6c 22 29 2e 0a 23 0a 21 49 46 20 24 28 55 ecl")..#.!IF $(U 2f20: 53 45 5f 53 54 44 43 41 4c 4c 29 21 3d 30 20 7c SE_STDCALL)!=0 | 2f30: 7c 20 24 28 46 4f 52 5f 57 49 4e 31 30 29 21 3d |$(FOR_WIN10)!=
2f40: 30 0a 21 49 46 20 22 24 28 50 4c 41 54 46 4f 52 0.!IF "$(PLATFOR 2f50: 4d 29 22 3d 3d 22 78 38 36 22 0a 43 4f 52 45 5f M)"=="x86".CORE_ 2f60: 43 43 4f 4e 56 5f 4f 50 54 53 20 3d 20 2d 47 7a CCONV_OPTS = -Gz 2f70: 20 2d 44 53 51 4c 49 54 45 5f 43 44 45 43 4c 3d -DSQLITE_CDECL= 2f80: 5f 5f 63 64 65 63 6c 20 2d 44 53 51 4c 49 54 45 __cdecl -DSQLITE 2f90: 5f 53 54 44 43 41 4c 4c 3d 5f 5f 73 74 64 63 61 _STDCALL=__stdca 2fa0: 6c 6c 0a 53 48 45 4c 4c 5f 43 43 4f 4e 56 5f 4f ll.SHELL_CCONV_O 2fb0: 50 54 53 20 3d 20 2d 47 7a 20 2d 44 53 51 4c 49 PTS = -Gz -DSQLI 2fc0: 54 45 5f 43 44 45 43 4c 3d 5f 5f 63 64 65 63 6c TE_CDECL=__cdecl 2fd0: 20 2d 44 53 51 4c 49 54 45 5f 53 54 44 43 41 4c -DSQLITE_STDCAL 2fe0: 4c 3d 5f 5f 73 74 64 63 61 6c 6c 0a 21 45 4c 53 L=__stdcall.!ELS 2ff0: 45 0a 21 49 46 4e 44 45 46 20 50 4c 41 54 46 4f E.!IFNDEF PLATFO 3000: 52 4d 0a 43 4f 52 45 5f 43 43 4f 4e 56 5f 4f 50 RM.CORE_CCONV_OP 3010: 54 53 20 3d 20 2d 47 7a 20 2d 44 53 51 4c 49 54 TS = -Gz -DSQLIT 3020: 45 5f 43 44 45 43 4c 3d 5f 5f 63 64 65 63 6c 20 E_CDECL=__cdecl 3030: 2d 44 53 51 4c 49 54 45 5f 53 54 44 43 41 4c 4c -DSQLITE_STDCALL 3040: 3d 5f 5f 73 74 64 63 61 6c 6c 0a 53 48 45 4c 4c =__stdcall.SHELL 3050: 5f 43 43 4f 4e 56 5f 4f 50 54 53 20 3d 20 2d 47 _CCONV_OPTS = -G 3060: 7a 20 2d 44 53 51 4c 49 54 45 5f 43 44 45 43 4c z -DSQLITE_CDECL 3070: 3d 5f 5f 63 64 65 63 6c 20 2d 44 53 51 4c 49 54 =__cdecl -DSQLIT 3080: 45 5f 53 54 44 43 41 4c 4c 3d 5f 5f 73 74 64 63 E_STDCALL=__stdc 3090: 61 6c 6c 0a 21 45 4c 53 45 0a 43 4f 52 45 5f 43 all.!ELSE.CORE_C 30a0: 43 4f 4e 56 5f 4f 50 54 53 20 3d 0a 53 48 45 4c CONV_OPTS =.SHEL 30b0: 4c 5f 43 43 4f 4e 56 5f 4f 50 54 53 20 3d 0a 21 L_CCONV_OPTS =.! 30c0: 45 4e 44 49 46 0a 21 45 4e 44 49 46 0a 21 45 4c ENDIF.!ENDIF.!EL 30d0: 53 45 0a 43 4f 52 45 5f 43 43 4f 4e 56 5f 4f 50 SE.CORE_CCONV_OP 30e0: 54 53 20 3d 0a 53 48 45 4c 4c 5f 43 43 4f 4e 56 TS =.SHELL_CCONV 30f0: 5f 4f 50 54 53 20 3d 0a 21 45 4e 44 49 46 0a 0a _OPTS =.!ENDIF.. 3100: 23 20 54 68 65 73 65 20 61 72 65 20 61 64 64 69 # These are addi 3110: 74 69 6f 6e 61 6c 20 63 6f 6d 70 69 6c 65 72 20 tional compiler 3120: 6f 70 74 69 6f 6e 73 20 75 73 65 64 20 66 6f 72 options used for 3130: 20 74 68 65 20 63 6f 72 65 20 6c 69 62 72 61 72 the core librar 3140: 79 2e 0a 23 0a 21 49 46 4e 44 45 46 20 43 4f 52 y..#.!IFNDEF COR 3150: 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 0a 21 E_COMPILE_OPTS.! 3160: 49 46 20 24 28 44 59 4e 41 4d 49 43 5f 53 48 45 IF$(DYNAMIC_SHE
3170: 4c 4c 29 21 3d 30 20 7c 7c 20 24 28 46 4f 52 5f LL)!=0 || $(FOR_ 3180: 57 49 4e 31 30 29 21 3d 30 0a 43 4f 52 45 5f 43 WIN10)!=0.CORE_C 3190: 4f 4d 50 49 4c 45 5f 4f 50 54 53 20 3d 20 24 28 OMPILE_OPTS =$(
31a0: 43 4f 52 45 5f 43 43 4f 4e 56 5f 4f 50 54 53 29 CORE_CCONV_OPTS)
31b0: 20 2d 44 53 51 4c 49 54 45 5f 41 50 49 3d 5f 5f -DSQLITE_API=__
31c0: 64 65 63 6c 73 70 65 63 28 64 6c 6c 65 78 70 6f declspec(dllexpo
31d0: 72 74 29 0a 21 45 4c 53 45 0a 43 4f 52 45 5f 43 rt).!ELSE.CORE_C
31e0: 4f 4d 50 49 4c 45 5f 4f 50 54 53 20 3d 20 24 28 OMPILE_OPTS = $( 31f0: 43 4f 52 45 5f 43 43 4f 4e 56 5f 4f 50 54 53 29 CORE_CCONV_OPTS) 3200: 0a 21 45 4e 44 49 46 0a 21 45 4e 44 49 46 0a 0a .!ENDIF.!ENDIF.. 3210: 23 20 54 68 65 73 65 20 61 72 65 20 74 68 65 20 # These are the 3220: 61 64 64 69 74 69 6f 6e 61 6c 20 74 61 72 67 65 additional targe 3230: 74 73 20 74 68 61 74 20 74 68 65 20 63 6f 72 65 ts that the core 3240: 20 6c 69 62 72 61 72 79 20 73 68 6f 75 6c 64 20 library should 3250: 64 65 70 65 6e 64 20 6f 6e 0a 23 20 77 68 65 6e depend on.# when 3260: 20 6c 69 6e 6b 69 6e 67 2e 0a 23 0a 21 49 46 4e linking..#.!IFN 3270: 44 45 46 20 43 4f 52 45 5f 4c 49 4e 4b 5f 44 45 DEF CORE_LINK_DE 3280: 50 0a 21 49 46 20 24 28 44 59 4e 41 4d 49 43 5f P.!IF$(DYNAMIC_
3290: 53 48 45 4c 4c 29 21 3d 30 20 7c 7c 20 24 28 46 SHELL)!=0 || $(F 32a0: 4f 52 5f 57 49 4e 31 30 29 21 3d 30 0a 43 4f 52 OR_WIN10)!=0.COR 32b0: 45 5f 4c 49 4e 4b 5f 44 45 50 20 3d 0a 21 45 4c E_LINK_DEP =.!EL 32c0: 53 45 0a 43 4f 52 45 5f 4c 49 4e 4b 5f 44 45 50 SE.CORE_LINK_DEP 32d0: 20 3d 20 73 71 6c 69 74 65 33 2e 64 65 66 0a 21 = sqlite3.def.! 32e0: 45 4e 44 49 46 0a 21 45 4e 44 49 46 0a 0a 23 20 ENDIF.!ENDIF..# 32f0: 54 68 65 73 65 20 61 72 65 20 61 64 64 69 74 69 These are additi 3300: 6f 6e 61 6c 20 6c 69 6e 6b 65 72 20 6f 70 74 69 onal linker opti 3310: 6f 6e 73 20 75 73 65 64 20 66 6f 72 20 74 68 65 ons used for the 3320: 20 63 6f 72 65 20 6c 69 62 72 61 72 79 2e 0a 23 core library..# 3330: 0a 21 49 46 4e 44 45 46 20 43 4f 52 45 5f 4c 49 .!IFNDEF CORE_LI 3340: 4e 4b 5f 4f 50 54 53 0a 21 49 46 20 24 28 44 59 NK_OPTS.!IF$(DY
3350: 4e 41 4d 49 43 5f 53 48 45 4c 4c 29 21 3d 30 20 NAMIC_SHELL)!=0
3360: 7c 7c 20 24 28 46 4f 52 5f 57 49 4e 31 30 29 21 || $(FOR_WIN10)! 3370: 3d 30 0a 43 4f 52 45 5f 4c 49 4e 4b 5f 4f 50 54 =0.CORE_LINK_OPT 3380: 53 20 3d 0a 21 45 4c 53 45 0a 43 4f 52 45 5f 4c S =.!ELSE.CORE_L 3390: 49 4e 4b 5f 4f 50 54 53 20 3d 20 2f 44 45 46 3a INK_OPTS = /DEF: 33a0: 73 71 6c 69 74 65 33 2e 64 65 66 0a 21 45 4e 44 sqlite3.def.!END 33b0: 49 46 0a 21 45 4e 44 49 46 0a 0a 23 20 54 68 65 IF.!ENDIF..# The 33c0: 73 65 20 61 72 65 20 61 64 64 69 74 69 6f 6e 61 se are additiona 33d0: 6c 20 63 6f 6d 70 69 6c 65 72 20 6f 70 74 69 6f l compiler optio 33e0: 6e 73 20 75 73 65 64 20 66 6f 72 20 74 68 65 20 ns used for the 33f0: 73 68 65 6c 6c 20 65 78 65 63 75 74 61 62 6c 65 shell executable 3400: 2e 0a 23 0a 21 49 46 4e 44 45 46 20 53 48 45 4c ..#.!IFNDEF SHEL 3410: 4c 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 0a 21 L_COMPILE_OPTS.! 3420: 49 46 20 24 28 44 59 4e 41 4d 49 43 5f 53 48 45 IF$(DYNAMIC_SHE
3430: 4c 4c 29 21 3d 30 20 7c 7c 20 24 28 46 4f 52 5f LL)!=0 || $(FOR_ 3440: 57 49 4e 31 30 29 21 3d 30 0a 53 48 45 4c 4c 5f WIN10)!=0.SHELL_ 3450: 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 20 3d 20 24 COMPILE_OPTS =$
3460: 28 53 48 45 4c 4c 5f 43 43 4f 4e 56 5f 4f 50 54 (SHELL_CCONV_OPT
3470: 53 29 20 2d 44 53 51 4c 49 54 45 5f 41 50 49 3d S) -DSQLITE_API=
3480: 5f 5f 64 65 63 6c 73 70 65 63 28 64 6c 6c 69 6d __declspec(dllim
3490: 70 6f 72 74 29 0a 21 45 4c 53 45 0a 53 48 45 4c port).!ELSE.SHEL
34a0: 4c 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 20 3d L_COMPILE_OPTS =
34b0: 20 24 28 53 48 45 4c 4c 5f 43 43 4f 4e 56 5f 4f $(SHELL_CCONV_O 34c0: 50 54 53 29 0a 21 45 4e 44 49 46 0a 21 45 4e 44 PTS).!ENDIF.!END 34d0: 49 46 0a 0a 23 20 54 68 69 73 20 69 73 20 74 68 IF..# This is th 34e0: 65 20 73 6f 75 72 63 65 20 63 6f 64 65 20 74 68 e source code th 34f0: 61 74 20 74 68 65 20 73 68 65 6c 6c 20 65 78 65 at the shell exe 3500: 63 75 74 61 62 6c 65 20 73 68 6f 75 6c 64 20 62 cutable should b 3510: 65 20 63 6f 6d 70 69 6c 65 64 0a 23 20 77 69 74 e compiled.# wit 3520: 68 2e 0a 23 0a 21 49 46 4e 44 45 46 20 53 48 45 h..#.!IFNDEF SHE 3530: 4c 4c 5f 43 4f 52 45 5f 53 52 43 0a 21 49 46 20 LL_CORE_SRC.!IF 3540: 24 28 44 59 4e 41 4d 49 43 5f 53 48 45 4c 4c 29$(DYNAMIC_SHELL)
3550: 21 3d 30 20 7c 7c 20 24 28 46 4f 52 5f 57 49 4e !=0 || $(FOR_WIN 3560: 31 30 29 21 3d 30 0a 53 48 45 4c 4c 5f 43 4f 52 10)!=0.SHELL_COR 3570: 45 5f 53 52 43 20 3d 0a 21 45 4c 53 45 0a 53 48 E_SRC =.!ELSE.SH 3580: 45 4c 4c 5f 43 4f 52 45 5f 53 52 43 20 3d 20 24 ELL_CORE_SRC =$
3590: 28 53 51 4c 49 54 45 33 43 29 0a 21 45 4e 44 49 (SQLITE3C).!ENDI
35a0: 46 0a 21 45 4e 44 49 46 0a 0a 23 20 54 68 69 73 F.!ENDIF..# This
35b0: 20 69 73 20 74 68 65 20 63 6f 72 65 20 6c 69 62 is the core lib
35c0: 72 61 72 79 20 74 68 61 74 20 74 68 65 20 73 68 rary that the sh
35d0: 65 6c 6c 20 65 78 65 63 75 74 61 62 6c 65 20 73 ell executable s
35e0: 68 6f 75 6c 64 20 64 65 70 65 6e 64 20 6f 6e 2e hould depend on.
35f0: 0a 23 0a 21 49 46 4e 44 45 46 20 53 48 45 4c 4c .#.!IFNDEF SHELL
3600: 5f 43 4f 52 45 5f 44 45 50 0a 21 49 46 20 24 28 _CORE_DEP.!IF $( 3610: 44 59 4e 41 4d 49 43 5f 53 48 45 4c 4c 29 21 3d DYNAMIC_SHELL)!= 3620: 30 20 7c 7c 20 24 28 46 4f 52 5f 57 49 4e 31 30 0 ||$(FOR_WIN10
3630: 29 21 3d 30 0a 53 48 45 4c 4c 5f 43 4f 52 45 5f )!=0.SHELL_CORE_
3640: 44 45 50 20 3d 20 24 28 53 51 4c 49 54 45 33 44 DEP = $(SQLITE3D 3650: 4c 4c 29 0a 21 45 4c 53 45 0a 53 48 45 4c 4c 5f LL).!ELSE.SHELL_ 3660: 43 4f 52 45 5f 44 45 50 20 3d 0a 21 45 4e 44 49 CORE_DEP =.!ENDI 3670: 46 0a 21 45 4e 44 49 46 0a 0a 23 20 54 68 69 73 F.!ENDIF..# This 3680: 20 69 73 20 74 68 65 20 63 6f 72 65 20 6c 69 62 is the core lib 3690: 72 61 72 79 20 74 68 61 74 20 74 68 65 20 73 68 rary that the sh 36a0: 65 6c 6c 20 65 78 65 63 75 74 61 62 6c 65 20 73 ell executable s 36b0: 68 6f 75 6c 64 20 6c 69 6e 6b 20 77 69 74 68 2e hould link with. 36c0: 0a 23 0a 21 49 46 4e 44 45 46 20 53 48 45 4c 4c .#.!IFNDEF SHELL 36d0: 5f 43 4f 52 45 5f 4c 49 42 0a 21 49 46 20 24 28 _CORE_LIB.!IF$(
36e0: 44 59 4e 41 4d 49 43 5f 53 48 45 4c 4c 29 21 3d DYNAMIC_SHELL)!=
36f0: 30 20 7c 7c 20 24 28 46 4f 52 5f 57 49 4e 31 30 0 || $(FOR_WIN10 3700: 29 21 3d 30 0a 53 48 45 4c 4c 5f 43 4f 52 45 5f )!=0.SHELL_CORE_ 3710: 4c 49 42 20 3d 20 24 28 53 51 4c 49 54 45 33 4c LIB =$(SQLITE3L
3720: 49 42 29 0a 21 45 4c 53 45 0a 53 48 45 4c 4c 5f IB).!ELSE.SHELL_
3730: 43 4f 52 45 5f 4c 49 42 20 3d 0a 21 45 4e 44 49 CORE_LIB =.!ENDI
3740: 46 0a 21 45 4e 44 49 46 0a 0a 23 20 54 68 65 73 F.!ENDIF..# Thes
3750: 65 20 61 72 65 20 61 64 64 69 74 69 6f 6e 61 6c e are additional
3760: 20 6c 69 6e 6b 65 72 20 6f 70 74 69 6f 6e 73 20 linker options
3770: 75 73 65 64 20 66 6f 72 20 74 68 65 20 73 68 65 used for the she
3780: 6c 6c 20 65 78 65 63 75 74 61 62 6c 65 2e 0a 23 ll executable..#
3790: 0a 21 49 46 4e 44 45 46 20 53 48 45 4c 4c 5f 4c .!IFNDEF SHELL_L
37a0: 49 4e 4b 5f 4f 50 54 53 0a 53 48 45 4c 4c 5f 4c INK_OPTS.SHELL_L
37b0: 49 4e 4b 5f 4f 50 54 53 20 3d 20 24 28 53 48 45 INK_OPTS = $(SHE 37c0: 4c 4c 5f 43 4f 52 45 5f 4c 49 42 29 0a 21 45 4e LL_CORE_LIB).!EN 37d0: 44 49 46 0a 0a 23 20 43 68 65 63 6b 20 69 66 20 DIF..# Check if 37e0: 61 73 73 65 6d 62 6c 79 20 63 6f 64 65 20 6c 69 assembly code li 37f0: 73 74 69 6e 67 73 20 73 68 6f 75 6c 64 20 62 65 stings should be 3800: 20 67 65 6e 65 72 61 74 65 64 20 66 6f 72 20 74 generated for t 3810: 68 65 20 73 6f 75 72 63 65 0a 23 20 63 6f 64 65 he source.# code 3820: 20 66 69 6c 65 73 20 74 6f 20 62 65 20 63 6f 6d files to be com 3830: 70 69 6c 65 64 2e 0a 23 0a 21 49 46 20 24 28 55 piled..#.!IF$(U
3840: 53 45 5f 4c 49 53 54 49 4e 47 53 29 21 3d 30 0a SE_LISTINGS)!=0.
3850: 54 43 43 20 3d 20 24 28 54 43 43 29 20 2d 46 41 TCC = $(TCC) -FA 3860: 63 73 0a 21 45 4e 44 49 46 0a 0a 23 20 57 68 65 cs.!ENDIF..# Whe 3870: 6e 20 63 6f 6d 70 69 6c 69 6e 67 20 74 68 65 20 n compiling the 3880: 6c 69 62 72 61 72 79 20 66 6f 72 20 75 73 65 20 library for use 3890: 69 6e 20 74 68 65 20 57 69 6e 52 54 20 65 6e 76 in the WinRT env 38a0: 69 72 6f 6e 6d 65 6e 74 2c 0a 23 20 74 68 65 20 ironment,.# the 38b0: 66 6f 6c 6c 6f 77 69 6e 67 20 63 6f 6d 70 69 6c following compil 38c0: 65 2d 74 69 6d 65 20 6f 70 74 69 6f 6e 73 20 6d e-time options m 38d0: 75 73 74 20 62 65 20 75 73 65 64 20 61 73 20 77 ust be used as w 38e0: 65 6c 6c 20 74 6f 0a 23 20 64 69 73 61 62 6c 65 ell to.# disable 38f0: 20 75 73 65 20 6f 66 20 57 69 6e 33 32 20 41 50 use of Win32 AP 3900: 49 73 20 74 68 61 74 20 61 72 65 20 6e 6f 74 20 Is that are not 3910: 61 76 61 69 6c 61 62 6c 65 20 61 6e 64 20 74 6f available and to 3920: 20 65 6e 61 62 6c 65 0a 23 20 75 73 65 20 6f 66 enable.# use of 3930: 20 57 69 6e 33 32 20 41 50 49 73 20 74 68 61 74 Win32 APIs that 3940: 20 61 72 65 20 73 70 65 63 69 66 69 63 20 74 6f are specific to 3950: 20 57 69 6e 64 6f 77 73 20 38 20 61 6e 64 2f 6f Windows 8 and/o 3960: 72 20 57 69 6e 52 54 2e 0a 23 0a 21 49 46 20 24 r WinRT..#.!IF$
3970: 28 46 4f 52 5f 57 49 4e 52 54 29 21 3d 30 0a 54 (FOR_WINRT)!=0.T
3980: 43 43 20 3d 20 24 28 54 43 43 29 20 2d 44 53 51 CC = $(TCC) -DSQ 3990: 4c 49 54 45 5f 4f 53 5f 57 49 4e 52 54 3d 31 0a LITE_OS_WINRT=1. 39a0: 52 43 43 20 3d 20 24 28 52 43 43 29 20 2d 44 53 RCC =$(RCC) -DS
39b0: 51 4c 49 54 45 5f 4f 53 5f 57 49 4e 52 54 3d 31 QLITE_OS_WINRT=1
39c0: 0a 54 43 43 20 3d 20 24 28 54 43 43 29 20 2d 44 .TCC = $(TCC) -D 39d0: 57 49 4e 41 50 49 5f 46 41 4d 49 4c 59 3d 57 49 WINAPI_FAMILY=WI 39e0: 4e 41 50 49 5f 46 41 4d 49 4c 59 5f 41 50 50 0a NAPI_FAMILY_APP. 39f0: 52 43 43 20 3d 20 24 28 52 43 43 29 20 2d 44 57 RCC =$(RCC) -DW
3a00: 49 4e 41 50 49 5f 46 41 4d 49 4c 59 3d 57 49 4e INAPI_FAMILY=WIN
3a10: 41 50 49 5f 46 41 4d 49 4c 59 5f 41 50 50 0a 21 API_FAMILY_APP.!
3a20: 45 4e 44 49 46 0a 0a 23 20 43 20 63 6f 6d 70 69 ENDIF..# C compi
3a30: 6c 65 72 20 6f 70 74 69 6f 6e 73 20 66 6f 72 20 ler options for
3a40: 74 68 65 20 57 69 6e 64 6f 77 73 20 31 30 20 70 the Windows 10 p
3a50: 6c 61 74 66 6f 72 6d 20 28 6e 65 65 64 73 20 4d latform (needs M
3a60: 53 56 43 20 32 30 31 35 29 2e 0a 23 0a 21 49 46 SVC 2015)..#.!IF
3a70: 20 24 28 46 4f 52 5f 57 49 4e 31 30 29 21 3d 30 $(FOR_WIN10)!=0 3a80: 0a 54 43 43 20 3d 20 24 28 54 43 43 29 20 2f 64 .TCC =$(TCC) /d
3a90: 32 67 75 61 72 64 34 20 2d 44 5f 41 52 4d 5f 57 2guard4 -D_ARM_W
3aa0: 49 4e 41 50 49 5f 50 41 52 54 49 54 49 4f 4e 5f INAPI_PARTITION_
3ab0: 44 45 53 4b 54 4f 50 5f 53 44 4b 5f 41 56 41 49 DESKTOP_SDK_AVAI
3ac0: 4c 41 42 4c 45 0a 42 43 43 20 3d 20 24 28 42 43 LABLE.BCC = $(BC 3ad0: 43 29 20 2f 64 32 67 75 61 72 64 34 20 2d 44 5f C) /d2guard4 -D_ 3ae0: 41 52 4d 5f 57 49 4e 41 50 49 5f 50 41 52 54 49 ARM_WINAPI_PARTI 3af0: 54 49 4f 4e 5f 44 45 53 4b 54 4f 50 5f 53 44 4b TION_DESKTOP_SDK 3b00: 5f 41 56 41 49 4c 41 42 4c 45 0a 21 45 4e 44 49 _AVAILABLE.!ENDI 3b10: 46 0a 0a 23 20 41 6c 73 6f 2c 20 77 65 20 6e 65 F..# Also, we ne 3b20: 65 64 20 74 6f 20 64 79 6e 61 6d 69 63 61 6c 6c ed to dynamicall 3b30: 79 20 6c 69 6e 6b 20 74 6f 20 74 68 65 20 63 6f y link to the co 3b40: 72 72 65 63 74 20 4d 53 56 43 20 72 75 6e 74 69 rrect MSVC runti 3b50: 6d 65 0a 23 20 77 68 65 6e 20 63 6f 6d 70 69 6c me.# when compil 3b60: 69 6e 67 20 66 6f 72 20 57 69 6e 52 54 20 28 65 ing for WinRT (e 3b70: 2e 67 2e 20 64 65 62 75 67 20 6f 72 20 72 65 6c .g. debug or rel 3b80: 65 61 73 65 29 20 4f 52 20 69 66 20 74 68 65 0a ease) OR if the. 3b90: 23 20 55 53 45 5f 43 52 54 5f 44 4c 4c 20 6f 70 # USE_CRT_DLL op 3ba0: 74 69 6f 6e 20 69 73 20 73 65 74 20 74 6f 20 66 tion is set to f 3bb0: 6f 72 63 65 20 64 79 6e 61 6d 69 63 61 6c 6c 79 orce dynamically 3bc0: 20 6c 69 6e 6b 69 6e 67 20 74 6f 20 74 68 65 0a linking to the. 3bd0: 23 20 4d 53 56 43 20 72 75 6e 74 69 6d 65 20 6c # MSVC runtime l 3be0: 69 62 72 61 72 79 2e 0a 23 0a 21 49 46 20 24 28 ibrary..#.!IF$(
3bf0: 46 4f 52 5f 57 49 4e 52 54 29 21 3d 30 20 7c 7c FOR_WINRT)!=0 ||
3c00: 20 24 28 55 53 45 5f 43 52 54 5f 44 4c 4c 29 21 $(USE_CRT_DLL)! 3c10: 3d 30 0a 21 49 46 20 24 28 44 45 42 55 47 29 3e =0.!IF$(DEBUG)>
3c20: 31 0a 54 43 43 20 3d 20 24 28 54 43 43 29 20 2d 1.TCC = $(TCC) - 3c30: 4d 44 64 0a 42 43 43 20 3d 20 24 28 42 43 43 29 MDd.BCC =$(BCC)
3c40: 20 2d 4d 44 64 0a 21 45 4c 53 45 0a 54 43 43 20 -MDd.!ELSE.TCC
3c50: 3d 20 24 28 54 43 43 29 20 2d 4d 44 0a 42 43 43 = $(TCC) -MD.BCC 3c60: 20 3d 20 24 28 42 43 43 29 20 2d 4d 44 0a 21 45 =$(BCC) -MD.!E
3c70: 4e 44 49 46 0a 21 45 4c 53 45 0a 21 49 46 20 24 NDIF.!ELSE.!IF $3c80: 28 44 45 42 55 47 29 3e 31 0a 54 43 43 20 3d 20 (DEBUG)>1.TCC = 3c90: 24 28 54 43 43 29 20 2d 4d 54 64 0a 42 43 43 20$(TCC) -MTd.BCC
3ca0: 3d 20 24 28 42 43 43 29 20 2d 4d 54 64 0a 21 45 = $(BCC) -MTd.!E 3cb0: 4c 53 45 0a 54 43 43 20 3d 20 24 28 54 43 43 29 LSE.TCC =$(TCC)
3cc0: 20 2d 4d 54 0a 42 43 43 20 3d 20 24 28 42 43 43 -MT.BCC = $(BCC 3cd0: 29 20 2d 4d 54 0a 21 45 4e 44 49 46 0a 21 45 4e ) -MT.!ENDIF.!EN 3ce0: 44 49 46 0a 0a 23 20 3c 3c 6d 61 72 6b 3e 3e 0a DIF..# <<mark>>. 3cf0: 23 20 54 68 65 20 6d 6b 73 71 6c 69 74 65 33 63 # The mksqlite3c 3d00: 2e 74 63 6c 20 61 6e 64 20 6d 6b 73 71 6c 69 74 .tcl and mksqlit 3d10: 65 33 68 2e 74 63 6c 20 73 63 72 69 70 74 73 20 e3h.tcl scripts 3d20: 77 69 6c 6c 20 70 75 6c 6c 20 69 6e 0a 23 20 61 will pull in.# a 3d30: 6e 79 20 65 78 74 65 6e 73 69 6f 6e 20 68 65 61 ny extension hea 3d40: 64 65 72 20 66 69 6c 65 73 20 62 79 20 64 65 66 der files by def 3d50: 61 75 6c 74 2e 20 20 46 6f 72 20 6e 6f 6e 2d 61 ault. For non-a 3d60: 6d 61 6c 67 61 6d 61 74 69 6f 6e 0a 23 20 62 75 malgamation.# bu 3d70: 69 6c 64 73 2c 20 77 65 20 6e 65 65 64 20 74 6f ilds, we need to 3d80: 20 6d 61 6b 65 20 73 75 72 65 20 74 68 65 20 63 make sure the c 3d90: 6f 6d 70 69 6c 65 72 20 63 61 6e 20 66 69 6e 64 ompiler can find 3da0: 20 74 68 65 73 65 2e 0a 23 0a 21 49 46 20 24 28 these..#.!IF$(
3db0: 55 53 45 5f 41 4d 41 4c 47 41 4d 41 54 49 4f 4e USE_AMALGAMATION
3dc0: 29 3d 3d 30 0a 54 43 43 20 3d 20 24 28 54 43 43 )==0.TCC = $(TCC 3dd0: 29 20 2d 49 24 28 54 4f 50 29 5c 65 78 74 5c 66 ) -I$(TOP)\ext\f
3de0: 74 73 33 0a 52 43 43 20 3d 20 24 28 52 43 43 29 ts3.RCC = $(RCC) 3df0: 20 2d 49 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 -I$(TOP)\ext\ft
3e00: 73 33 0a 54 43 43 20 3d 20 24 28 54 43 43 29 20 s3.TCC = $(TCC) 3e10: 2d 49 24 28 54 4f 50 29 5c 65 78 74 5c 72 74 72 -I$(TOP)\ext\rtr
3e20: 65 65 0a 52 43 43 20 3d 20 24 28 52 43 43 29 20 ee.RCC = $(RCC) 3e30: 2d 49 24 28 54 4f 50 29 5c 65 78 74 5c 72 74 72 -I$(TOP)\ext\rtr
3e40: 65 65 0a 54 43 43 20 3d 20 24 28 54 43 43 29 20 ee.TCC = $(TCC) 3e50: 2d 49 24 28 54 4f 50 29 5c 65 78 74 5c 73 65 73 -I$(TOP)\ext\ses
3e60: 73 69 6f 6e 0a 52 43 43 20 3d 20 24 28 52 43 43 sion.RCC = $(RCC 3e70: 29 20 2d 49 24 28 54 4f 50 29 5c 65 78 74 5c 73 ) -I$(TOP)\ext\s
3e80: 65 73 73 69 6f 6e 0a 21 45 4e 44 49 46 0a 0a 23 ession.!ENDIF..#
3e90: 20 54 68 65 20 6d 6b 73 71 6c 69 74 65 33 63 2e The mksqlite3c.
3ea0: 74 63 6c 20 73 63 72 69 70 74 20 61 63 63 65 70 tcl script accep
3eb0: 74 73 20 73 6f 6d 65 20 6f 70 74 69 6f 6e 73 20 ts some options
3ec0: 6f 6e 20 74 68 65 20 63 6f 6d 6d 61 6e 64 0a 23 on the command.#
3ed0: 20 6c 69 6e 65 2e 20 20 57 68 65 6e 20 63 6f 6d line. When com
3ee0: 70 69 6c 69 6e 67 20 77 69 74 68 20 64 65 62 75 piling with debu
3ef0: 67 67 69 6e 67 20 65 6e 61 62 6c 65 64 2c 20 73 gging enabled, s
3f00: 6f 6d 65 20 6f 66 20 74 68 65 73 65 0a 23 20 6f ome of these.# o
3f10: 70 74 69 6f 6e 73 20 61 72 65 20 6e 65 63 65 73 ptions are neces
3f20: 73 61 72 79 20 69 6e 20 6f 72 64 65 72 20 74 6f sary in order to
3f30: 20 61 6c 6c 6f 77 20 64 65 62 75 67 67 69 6e 67 allow debugging
3f40: 20 73 79 6d 62 6f 6c 73 20 74 6f 0a 23 20 77 6f symbols to.# wo
3f50: 72 6b 20 63 6f 72 72 65 63 74 6c 79 20 77 69 74 rk correctly wit
3f60: 68 20 56 69 73 75 61 6c 20 53 74 75 64 69 6f 20 h Visual Studio
3f70: 77 68 65 6e 20 75 73 69 6e 67 20 74 68 65 20 61 when using the a
3f80: 6d 61 6c 67 61 6d 61 74 69 6f 6e 2e 0a 23 0a 21 malgamation..#.!
3f90: 49 46 4e 44 45 46 20 4d 4b 53 51 4c 49 54 45 33 IFNDEF MKSQLITE3
3fa0: 43 5f 41 52 47 53 0a 21 49 46 20 24 28 44 45 42 C_ARGS.!IF $(DEB 3fb0: 55 47 29 3e 31 0a 4d 4b 53 51 4c 49 54 45 33 43 UG)>1.MKSQLITE3C 3fc0: 5f 41 52 47 53 20 3d 20 2d 2d 6c 69 6e 65 6d 61 _ARGS = --linema 3fd0: 63 72 6f 73 0a 21 45 4c 53 45 0a 4d 4b 53 51 4c cros.!ELSE.MKSQL 3fe0: 49 54 45 33 43 5f 41 52 47 53 20 3d 0a 21 45 4e ITE3C_ARGS =.!EN 3ff0: 44 49 46 0a 21 45 4e 44 49 46 0a 23 20 3c 3c 2f DIF.!ENDIF.# <</ 4000: 6d 61 72 6b 3e 3e 0a 0a 23 20 44 65 66 69 6e 65 mark>>..# Define 4010: 20 2d 44 4e 44 45 42 55 47 20 74 6f 20 63 6f 6d -DNDEBUG to com 4020: 70 69 6c 65 20 77 69 74 68 6f 75 74 20 64 65 62 pile without deb 4030: 75 67 67 69 6e 67 20 28 69 2e 65 2e 2c 20 66 6f ugging (i.e., fo 4040: 72 20 70 72 6f 64 75 63 74 69 6f 6e 20 75 73 61 r production usa 4050: 67 65 29 0a 23 20 4f 6d 69 74 74 69 6e 67 20 74 ge).# Omitting t 4060: 68 65 20 64 65 66 69 6e 65 20 77 69 6c 6c 20 63 he define will c 4070: 61 75 73 65 20 65 78 74 72 61 20 64 65 62 75 67 ause extra debug 4080: 67 69 6e 67 20 63 6f 64 65 20 74 6f 20 62 65 20 ging code to be 4090: 69 6e 73 65 72 74 65 64 20 61 6e 64 0a 23 20 69 inserted and.# i 40a0: 6e 63 6c 75 64 65 73 20 65 78 74 72 61 20 63 6f ncludes extra co 40b0: 6d 6d 65 6e 74 73 20 77 68 65 6e 20 22 45 58 50 mments when "EXP 40c0: 4c 41 49 4e 20 73 74 6d 74 22 20 69 73 20 75 73 LAIN stmt" is us 40d0: 65 64 2e 0a 23 0a 21 49 46 20 24 28 44 45 42 55 ed..#.!IF$(DEBU
40e0: 47 29 3d 3d 30 0a 54 43 43 20 3d 20 24 28 54 43 G)==0.TCC = $(TC 40f0: 43 29 20 2d 44 4e 44 45 42 55 47 0a 42 43 43 20 C) -DNDEBUG.BCC 4100: 3d 20 24 28 42 43 43 29 20 2d 44 4e 44 45 42 55 =$(BCC) -DNDEBU
4110: 47 0a 52 43 43 20 3d 20 24 28 52 43 43 29 20 2d G.RCC = $(RCC) - 4120: 44 4e 44 45 42 55 47 0a 21 45 4e 44 49 46 0a 0a DNDEBUG.!ENDIF.. 4130: 21 49 46 20 24 28 44 45 42 55 47 29 3e 30 20 7c !IF$(DEBUG)>0 |
4140: 7c 20 24 28 41 50 49 5f 41 52 4d 4f 52 29 21 3d | $(API_ARMOR)!= 4150: 30 20 7c 7c 20 24 28 46 4f 52 5f 57 49 4e 31 30 0 ||$(FOR_WIN10
4160: 29 21 3d 30 0a 54 43 43 20 3d 20 24 28 54 43 43 )!=0.TCC = $(TCC 4170: 29 20 2d 44 53 51 4c 49 54 45 5f 45 4e 41 42 4c ) -DSQLITE_ENABL 4180: 45 5f 41 50 49 5f 41 52 4d 4f 52 3d 31 0a 52 43 E_API_ARMOR=1.RC 4190: 43 20 3d 20 24 28 52 43 43 29 20 2d 44 53 51 4c C =$(RCC) -DSQL
41a0: 49 54 45 5f 45 4e 41 42 4c 45 5f 41 50 49 5f 41 ITE_ENABLE_API_A
41b0: 52 4d 4f 52 3d 31 0a 21 45 4e 44 49 46 0a 0a 21 RMOR=1.!ENDIF..!
41c0: 49 46 20 24 28 44 45 42 55 47 29 3e 32 0a 54 43 IF $(DEBUG)>2.TC 41d0: 43 20 3d 20 24 28 54 43 43 29 20 2d 44 53 51 4c C =$(TCC) -DSQL
41e0: 49 54 45 5f 44 45 42 55 47 3d 31 0a 52 43 43 20 ITE_DEBUG=1.RCC
41f0: 3d 20 24 28 52 43 43 29 20 2d 44 53 51 4c 49 54 = $(RCC) -DSQLIT 4200: 45 5f 44 45 42 55 47 3d 31 0a 21 45 4e 44 49 46 E_DEBUG=1.!ENDIF 4210: 0a 0a 21 49 46 20 24 28 44 45 42 55 47 29 3e 34 ..!IF$(DEBUG)>4
4220: 20 7c 7c 20 24 28 4f 53 54 52 41 43 45 29 21 3d || $(OSTRACE)!= 4230: 30 0a 54 43 43 20 3d 20 24 28 54 43 43 29 20 2d 0.TCC =$(TCC) -
4240: 44 53 51 4c 49 54 45 5f 46 4f 52 43 45 5f 4f 53 DSQLITE_FORCE_OS
4250: 5f 54 52 41 43 45 3d 31 20 2d 44 53 51 4c 49 54 _TRACE=1 -DSQLIT
4260: 45 5f 44 45 42 55 47 5f 4f 53 5f 54 52 41 43 45 E_DEBUG_OS_TRACE
4270: 3d 31 0a 52 43 43 20 3d 20 24 28 52 43 43 29 20 =1.RCC = $(RCC) 4280: 2d 44 53 51 4c 49 54 45 5f 46 4f 52 43 45 5f 4f -DSQLITE_FORCE_O 4290: 53 5f 54 52 41 43 45 3d 31 20 2d 44 53 51 4c 49 S_TRACE=1 -DSQLI 42a0: 54 45 5f 44 45 42 55 47 5f 4f 53 5f 54 52 41 43 TE_DEBUG_OS_TRAC 42b0: 45 3d 31 0a 21 45 4e 44 49 46 0a 0a 21 49 46 20 E=1.!ENDIF..!IF 42c0: 24 28 44 45 42 55 47 29 3e 35 0a 54 43 43 20 3d$(DEBUG)>5.TCC =
42d0: 20 24 28 54 43 43 29 20 2d 44 53 51 4c 49 54 45 $(TCC) -DSQLITE 42e0: 5f 45 4e 41 42 4c 45 5f 49 4f 54 52 41 43 45 3d _ENABLE_IOTRACE= 42f0: 31 0a 52 43 43 20 3d 20 24 28 52 43 43 29 20 2d 1.RCC =$(RCC) -
4300: 44 53 51 4c 49 54 45 5f 45 4e 41 42 4c 45 5f 49 DSQLITE_ENABLE_I
4310: 4f 54 52 41 43 45 3d 31 0a 21 45 4e 44 49 46 0a OTRACE=1.!ENDIF.
4320: 0a 23 20 50 72 65 76 65 6e 74 20 77 61 72 6e 69 .# Prevent warni
4330: 6e 67 73 20 61 62 6f 75 74 20 22 69 6e 73 65 63 ngs about "insec
4340: 75 72 65 22 20 4d 53 56 43 20 72 75 6e 74 69 6d ure" MSVC runtim
4350: 65 20 6c 69 62 72 61 72 79 20 66 75 6e 63 74 69 e library functi
4360: 6f 6e 73 0a 23 20 62 65 69 6e 67 20 75 73 65 64 ons.# being used
4370: 2e 0a 23 0a 54 43 43 20 3d 20 24 28 54 43 43 29 ..#.TCC = $(TCC) 4380: 20 2d 44 5f 43 52 54 5f 53 45 43 55 52 45 5f 4e -D_CRT_SECURE_N 4390: 4f 5f 44 45 50 52 45 43 41 54 45 20 2d 44 5f 43 O_DEPRECATE -D_C 43a0: 52 54 5f 53 45 43 55 52 45 5f 4e 4f 5f 57 41 52 RT_SECURE_NO_WAR 43b0: 4e 49 4e 47 53 0a 42 43 43 20 3d 20 24 28 42 43 NINGS.BCC =$(BC
43c0: 43 29 20 2d 44 5f 43 52 54 5f 53 45 43 55 52 45 C) -D_CRT_SECURE
43d0: 5f 4e 4f 5f 44 45 50 52 45 43 41 54 45 20 2d 44 _NO_DEPRECATE -D
43e0: 5f 43 52 54 5f 53 45 43 55 52 45 5f 4e 4f 5f 57 _CRT_SECURE_NO_W
43f0: 41 52 4e 49 4e 47 53 0a 52 43 43 20 3d 20 24 28 ARNINGS.RCC = $( 4400: 52 43 43 29 20 2d 44 5f 43 52 54 5f 53 45 43 55 RCC) -D_CRT_SECU 4410: 52 45 5f 4e 4f 5f 44 45 50 52 45 43 41 54 45 20 RE_NO_DEPRECATE 4420: 2d 44 5f 43 52 54 5f 53 45 43 55 52 45 5f 4e 4f -D_CRT_SECURE_NO 4430: 5f 57 41 52 4e 49 4e 47 53 0a 0a 23 20 50 72 65 _WARNINGS..# Pre 4440: 76 65 6e 74 20 77 61 72 6e 69 6e 67 73 20 61 62 vent warnings ab 4450: 6f 75 74 20 22 64 65 70 72 65 63 61 74 65 64 22 out "deprecated" 4460: 20 50 4f 53 49 58 20 66 75 6e 63 74 69 6f 6e 73 POSIX functions 4470: 20 62 65 69 6e 67 20 75 73 65 64 2e 0a 23 0a 54 being used..#.T 4480: 43 43 20 3d 20 24 28 54 43 43 29 20 2d 44 5f 43 CC =$(TCC) -D_C
4490: 52 54 5f 4e 4f 4e 53 54 44 43 5f 4e 4f 5f 44 45 RT_NONSTDC_NO_DE
44a0: 50 52 45 43 41 54 45 20 2d 44 5f 43 52 54 5f 4e PRECATE -D_CRT_N
44b0: 4f 4e 53 54 44 43 5f 4e 4f 5f 57 41 52 4e 49 4e ONSTDC_NO_WARNIN
44c0: 47 53 0a 42 43 43 20 3d 20 24 28 42 43 43 29 20 GS.BCC = $(BCC) 44d0: 2d 44 5f 43 52 54 5f 4e 4f 4e 53 54 44 43 5f 4e -D_CRT_NONSTDC_N 44e0: 4f 5f 44 45 50 52 45 43 41 54 45 20 2d 44 5f 43 O_DEPRECATE -D_C 44f0: 52 54 5f 4e 4f 4e 53 54 44 43 5f 4e 4f 5f 57 41 RT_NONSTDC_NO_WA 4500: 52 4e 49 4e 47 53 0a 52 43 43 20 3d 20 24 28 52 RNINGS.RCC =$(R
4510: 43 43 29 20 2d 44 5f 43 52 54 5f 4e 4f 4e 53 54 CC) -D_CRT_NONST
4520: 44 43 5f 4e 4f 5f 44 45 50 52 45 43 41 54 45 20 DC_NO_DEPRECATE
4530: 2d 44 5f 43 52 54 5f 4e 4f 4e 53 54 44 43 5f 4e -D_CRT_NONSTDC_N
4540: 4f 5f 57 41 52 4e 49 4e 47 53 0a 0a 23 20 55 73 O_WARNINGS..# Us
4550: 65 20 74 68 65 20 53 51 4c 69 74 65 20 64 65 62 e the SQLite deb
4560: 75 67 67 69 6e 67 20 68 65 61 70 20 73 75 62 73 ugging heap subs
4570: 79 73 74 65 6d 3f 0a 23 0a 21 49 46 20 24 28 4d ystem?.#.!IF $(M 4580: 45 4d 44 45 42 55 47 29 21 3d 30 0a 54 43 43 20 EMDEBUG)!=0.TCC 4590: 3d 20 24 28 54 43 43 29 20 2d 44 53 51 4c 49 54 =$(TCC) -DSQLIT
45a0: 45 5f 4d 45 4d 44 45 42 55 47 3d 31 0a 52 43 43 E_MEMDEBUG=1.RCC
45b0: 20 3d 20 24 28 52 43 43 29 20 2d 44 53 51 4c 49 = $(RCC) -DSQLI 45c0: 54 45 5f 4d 45 4d 44 45 42 55 47 3d 31 0a 0a 23 TE_MEMDEBUG=1..# 45d0: 20 55 73 65 20 6e 61 74 69 76 65 20 57 69 6e 33 Use native Win3 45e0: 32 20 68 65 61 70 20 73 75 62 73 79 73 74 65 6d 2 heap subsystem 45f0: 20 69 6e 73 74 65 61 64 20 6f 66 20 6d 61 6c 6c instead of mall 4600: 6f 63 2f 66 72 65 65 3f 0a 23 0a 21 45 4c 53 45 oc/free?.#.!ELSE 4610: 49 46 20 24 28 57 49 4e 33 32 48 45 41 50 29 21 IF$(WIN32HEAP)!
4620: 3d 30 0a 54 43 43 20 3d 20 24 28 54 43 43 29 20 =0.TCC = $(TCC) 4630: 2d 44 53 51 4c 49 54 45 5f 57 49 4e 33 32 5f 4d -DSQLITE_WIN32_M 4640: 41 4c 4c 4f 43 3d 31 0a 52 43 43 20 3d 20 24 28 ALLOC=1.RCC =$(
4650: 52 43 43 29 20 2d 44 53 51 4c 49 54 45 5f 57 49 RCC) -DSQLITE_WI
4660: 4e 33 32 5f 4d 41 4c 4c 4f 43 3d 31 0a 0a 23 20 N32_MALLOC=1..#
4670: 56 61 6c 69 64 61 74 65 20 74 68 65 20 68 65 61 Validate the hea
4680: 70 20 6f 6e 20 65 76 65 72 79 20 63 61 6c 6c 20 p on every call
4690: 69 6e 74 6f 20 74 68 65 20 6e 61 74 69 76 65 20 into the native
46a0: 57 69 6e 33 32 20 68 65 61 70 20 73 75 62 73 79 Win32 heap subsy
46b0: 73 74 65 6d 3f 0a 23 0a 21 49 46 20 24 28 44 45 stem?.#.!IF $(DE 46c0: 42 55 47 29 3e 33 0a 54 43 43 20 3d 20 24 28 54 BUG)>3.TCC =$(T
46d0: 43 43 29 20 2d 44 53 51 4c 49 54 45 5f 57 49 4e CC) -DSQLITE_WIN
46e0: 33 32 5f 4d 41 4c 4c 4f 43 5f 56 41 4c 49 44 41 32_MALLOC_VALIDA
46f0: 54 45 3d 31 0a 52 43 43 20 3d 20 24 28 52 43 43 TE=1.RCC = $(RCC 4700: 29 20 2d 44 53 51 4c 49 54 45 5f 57 49 4e 33 32 ) -DSQLITE_WIN32 4710: 5f 4d 41 4c 4c 4f 43 5f 56 41 4c 49 44 41 54 45 _MALLOC_VALIDATE 4720: 3d 31 0a 21 45 4e 44 49 46 0a 21 45 4e 44 49 46 =1.!ENDIF.!ENDIF 4730: 0a 0a 23 20 3c 3c 6d 61 72 6b 3e 3e 0a 23 20 54 ..# <<mark>>.# T 4740: 68 65 20 6c 6f 63 61 74 69 6f 6e 73 20 6f 66 20 he locations of 4750: 74 68 65 20 54 63 6c 20 68 65 61 64 65 72 20 61 the Tcl header a 4760: 6e 64 20 6c 69 62 72 61 72 79 20 66 69 6c 65 73 nd library files 4770: 2e 20 20 41 6c 73 6f 2c 20 74 68 65 20 6c 69 62 . Also, the lib 4780: 72 61 72 79 20 74 68 61 74 0a 23 20 6e 6f 6e 2d rary that.# non- 4790: 73 74 75 62 73 20 65 6e 61 62 6c 65 64 20 70 72 stubs enabled pr 47a0: 6f 67 72 61 6d 73 20 75 73 69 6e 67 20 54 63 6c ograms using Tcl 47b0: 20 6d 75 73 74 20 6c 69 6e 6b 20 61 67 61 69 6e must link again 47c0: 73 74 2e 20 20 54 68 65 73 65 20 76 61 72 69 61 st. These varia 47d0: 62 6c 65 73 0a 23 20 28 54 43 4c 49 4e 43 44 49 bles.# (TCLINCDI 47e0: 52 2c 20 54 43 4c 4c 49 42 44 49 52 2c 20 61 6e R, TCLLIBDIR, an 47f0: 64 20 4c 49 42 54 43 4c 29 20 6d 61 79 20 62 65 d LIBTCL) may be 4800: 20 6f 76 65 72 72 69 64 64 65 6e 20 76 69 61 20 overridden via 4810: 74 68 65 20 65 6e 76 69 72 6f 6e 6d 65 6e 74 0a the environment. 4820: 23 20 70 72 69 6f 72 20 74 6f 20 72 75 6e 6e 69 # prior to runni 4830: 6e 67 20 6e 6d 61 6b 65 20 69 6e 20 6f 72 64 65 ng nmake in orde 4840: 72 20 74 6f 20 6d 61 74 63 68 20 74 68 65 20 61 r to match the a 4850: 63 74 75 61 6c 20 69 6e 73 74 61 6c 6c 65 64 20 ctual installed 4860: 6c 6f 63 61 74 69 6f 6e 20 61 6e 64 0a 23 20 76 location and.# v 4870: 65 72 73 69 6f 6e 20 6f 6e 20 74 68 69 73 20 6d ersion on this m 4880: 61 63 68 69 6e 65 2e 0a 23 0a 21 49 46 4e 44 45 achine..#.!IFNDE 4890: 46 20 54 43 4c 49 4e 43 44 49 52 0a 54 43 4c 49 F TCLINCDIR.TCLI 48a0: 4e 43 44 49 52 20 3d 20 63 3a 5c 74 63 6c 5c 69 NCDIR = c:\tcl\i 48b0: 6e 63 6c 75 64 65 0a 21 45 4e 44 49 46 0a 0a 21 nclude.!ENDIF..! 48c0: 49 46 4e 44 45 46 20 54 43 4c 4c 49 42 44 49 52 IFNDEF TCLLIBDIR 48d0: 0a 54 43 4c 4c 49 42 44 49 52 20 3d 20 63 3a 5c .TCLLIBDIR = c:\ 48e0: 74 63 6c 5c 6c 69 62 0a 21 45 4e 44 49 46 0a 0a tcl\lib.!ENDIF.. 48f0: 21 49 46 4e 44 45 46 20 4c 49 42 54 43 4c 0a 4c !IFNDEF LIBTCL.L 4900: 49 42 54 43 4c 20 3d 20 74 63 6c 38 35 2e 6c 69 IBTCL = tcl85.li 4910: 62 0a 21 45 4e 44 49 46 0a 0a 21 49 46 4e 44 45 b.!ENDIF..!IFNDE 4920: 46 20 4c 49 42 54 43 4c 53 54 55 42 0a 4c 49 42 F LIBTCLSTUB.LIB 4930: 54 43 4c 53 54 55 42 20 3d 20 74 63 6c 73 74 75 TCLSTUB = tclstu 4940: 62 38 35 2e 6c 69 62 0a 21 45 4e 44 49 46 0a 0a b85.lib.!ENDIF.. 4950: 21 49 46 4e 44 45 46 20 4c 49 42 54 43 4c 50 41 !IFNDEF LIBTCLPA 4960: 54 48 0a 4c 49 42 54 43 4c 50 41 54 48 20 3d 20 TH.LIBTCLPATH = 4970: 63 3a 5c 74 63 6c 5c 62 69 6e 0a 21 45 4e 44 49 c:\tcl\bin.!ENDI 4980: 46 0a 0a 23 20 54 68 65 20 6c 6f 63 61 74 69 6f F..# The locatio 4990: 6e 73 20 6f 66 20 74 68 65 20 49 43 55 20 68 65 ns of the ICU he 49a0: 61 64 65 72 20 61 6e 64 20 6c 69 62 72 61 72 79 ader and library 49b0: 20 66 69 6c 65 73 2e 20 20 54 68 65 73 65 20 76 files. These v 49c0: 61 72 69 61 62 6c 65 73 0a 23 20 28 49 43 55 49 ariables.# (ICUI 49d0: 4e 43 44 49 52 2c 20 49 43 55 4c 49 42 44 49 52 NCDIR, ICULIBDIR 49e0: 2c 20 61 6e 64 20 4c 49 42 49 43 55 29 20 6d 61 , and LIBICU) ma 49f0: 79 20 62 65 20 6f 76 65 72 72 69 64 64 65 6e 20 y be overridden 4a00: 76 69 61 20 74 68 65 20 65 6e 76 69 72 6f 6e 6d via the environm 4a10: 65 6e 74 0a 23 20 70 72 69 6f 72 20 74 6f 20 72 ent.# prior to r 4a20: 75 6e 6e 69 6e 67 20 6e 6d 61 6b 65 20 69 6e 20 unning nmake in 4a30: 6f 72 64 65 72 20 74 6f 20 6d 61 74 63 68 20 74 order to match t 4a40: 68 65 20 61 63 74 75 61 6c 20 69 6e 73 74 61 6c he actual instal 4a50: 6c 65 64 20 6c 6f 63 61 74 69 6f 6e 20 6f 6e 0a led location on. 4a60: 23 20 74 68 69 73 20 6d 61 63 68 69 6e 65 2e 0a # this machine.. 4a70: 23 0a 21 49 46 4e 44 45 46 20 49 43 55 49 4e 43 #.!IFNDEF ICUINC 4a80: 44 49 52 0a 49 43 55 49 4e 43 44 49 52 20 3d 20 DIR.ICUINCDIR = 4a90: 63 3a 5c 69 63 75 5c 69 6e 63 6c 75 64 65 0a 21 c:\icu\include.! 4aa0: 45 4e 44 49 46 0a 0a 21 49 46 4e 44 45 46 20 49 ENDIF..!IFNDEF I 4ab0: 43 55 4c 49 42 44 49 52 0a 49 43 55 4c 49 42 44 CULIBDIR.ICULIBD 4ac0: 49 52 20 3d 20 63 3a 5c 69 63 75 5c 6c 69 62 0a IR = c:\icu\lib. 4ad0: 21 45 4e 44 49 46 0a 0a 21 49 46 4e 44 45 46 20 !ENDIF..!IFNDEF 4ae0: 4c 49 42 49 43 55 0a 4c 49 42 49 43 55 20 3d 20 LIBICU.LIBICU = 4af0: 69 63 75 75 63 2e 6c 69 62 20 69 63 75 69 6e 2e icuuc.lib icuin. 4b00: 6c 69 62 0a 21 45 4e 44 49 46 0a 0a 23 20 54 68 lib.!ENDIF..# Th 4b10: 69 73 20 69 73 20 74 68 65 20 63 6f 6d 6d 61 6e is is the comman 4b20: 64 20 74 6f 20 75 73 65 20 66 6f 72 20 74 63 6c d to use for tcl 4b30: 73 68 20 2d 20 6e 6f 72 6d 61 6c 6c 79 20 6a 75 sh - normally ju 4b40: 73 74 20 22 74 63 6c 73 68 22 2c 20 62 75 74 20 st "tclsh", but 4b50: 77 65 20 6d 61 79 0a 23 20 6b 6e 6f 77 20 74 68 we may.# know th 4b60: 65 20 73 70 65 63 69 66 69 63 20 76 65 72 73 69 e specific versi 4b70: 6f 6e 20 77 65 20 77 61 6e 74 20 74 6f 20 75 73 on we want to us 4b80: 65 2e 20 20 54 68 69 73 20 76 61 72 69 61 62 6c e. This variabl 4b90: 65 20 28 54 43 4c 53 48 5f 43 4d 44 29 20 6d 61 e (TCLSH_CMD) ma 4ba0: 79 20 62 65 0a 23 20 6f 76 65 72 72 69 64 64 65 y be.# overridde 4bb0: 6e 20 76 69 61 20 74 68 65 20 65 6e 76 69 72 6f n via the enviro 4bc0: 6e 6d 65 6e 74 20 70 72 69 6f 72 20 74 6f 20 72 nment prior to r 4bd0: 75 6e 6e 69 6e 67 20 6e 6d 61 6b 65 20 69 6e 20 unning nmake in 4be0: 6f 72 64 65 72 20 74 6f 20 73 65 6c 65 63 74 20 order to select 4bf0: 61 0a 23 20 73 70 65 63 69 66 69 63 20 54 63 6c a.# specific Tcl 4c00: 20 73 68 65 6c 6c 20 74 6f 20 75 73 65 2e 0a 23 shell to use..# 4c10: 0a 21 49 46 4e 44 45 46 20 54 43 4c 53 48 5f 43 .!IFNDEF TCLSH_C 4c20: 4d 44 0a 54 43 4c 53 48 5f 43 4d 44 20 3d 20 74 MD.TCLSH_CMD = t 4c30: 63 6c 73 68 38 35 0a 21 45 4e 44 49 46 0a 23 20 clsh85.!ENDIF.# 4c40: 3c 3c 2f 6d 61 72 6b 3e 3e 0a 0a 23 20 43 6f 6d <</mark>>..# Com 4c50: 70 69 6c 65 72 20 6f 70 74 69 6f 6e 73 20 6e 65 piler options ne 4c60: 65 64 65 64 20 66 6f 72 20 70 72 6f 67 72 61 6d eded for program 4c70: 73 20 74 68 61 74 20 75 73 65 20 74 68 65 20 72 s that use the r 4c80: 65 61 64 6c 69 6e 65 28 29 20 6c 69 62 72 61 72 eadline() librar 4c90: 79 2e 0a 23 0a 21 49 46 4e 44 45 46 20 52 45 41 y..#.!IFNDEF REA 4ca0: 44 4c 49 4e 45 5f 46 4c 41 47 53 0a 52 45 41 44 DLINE_FLAGS.READ 4cb0: 4c 49 4e 45 5f 46 4c 41 47 53 20 3d 20 2d 44 48 LINE_FLAGS = -DH 4cc0: 41 56 45 5f 52 45 41 44 4c 49 4e 45 3d 30 0a 21 AVE_READLINE=0.! 4cd0: 45 4e 44 49 46 0a 0a 23 20 54 68 65 20 6c 69 62 ENDIF..# The lib 4ce0: 72 61 72 79 20 74 68 61 74 20 70 72 6f 67 72 61 rary that progra 4cf0: 6d 73 20 75 73 69 6e 67 20 72 65 61 64 6c 69 6e ms using readlin 4d00: 65 28 29 20 6d 75 73 74 20 6c 69 6e 6b 20 61 67 e() must link ag 4d10: 61 69 6e 73 74 2e 0a 23 0a 21 49 46 4e 44 45 46 ainst..#.!IFNDEF 4d20: 20 4c 49 42 52 45 41 44 4c 49 4e 45 0a 4c 49 42 LIBREADLINE.LIB 4d30: 52 45 41 44 4c 49 4e 45 20 3d 0a 21 45 4e 44 49 READLINE =.!ENDI 4d40: 46 0a 0a 23 20 53 68 6f 75 6c 64 20 74 68 65 20 F..# Should the 4d50: 64 61 74 61 62 61 73 65 20 65 6e 67 69 6e 65 20 database engine 4d60: 62 65 20 63 6f 6d 70 69 6c 65 64 20 74 68 72 65 be compiled thre 4d70: 61 64 73 61 66 65 0a 23 0a 54 43 43 20 3d 20 24 adsafe.#.TCC =$
4d80: 28 54 43 43 29 20 2d 44 53 51 4c 49 54 45 5f 54 (TCC) -DSQLITE_T
4d90: 48 52 45 41 44 53 41 46 45 3d 31 0a 52 43 43 20 HREADSAFE=1.RCC
4da0: 3d 20 24 28 52 43 43 29 20 2d 44 53 51 4c 49 54 = $(RCC) -DSQLIT 4db0: 45 5f 54 48 52 45 41 44 53 41 46 45 3d 31 0a 0a E_THREADSAFE=1.. 4dc0: 23 20 44 6f 20 74 68 72 65 61 64 73 20 6f 76 65 # Do threads ove 4dd0: 72 72 69 64 65 20 65 61 63 68 20 6f 74 68 65 72 rride each other 4de0: 73 20 6c 6f 63 6b 73 20 62 79 20 64 65 66 61 75 s locks by defau 4df0: 6c 74 20 28 31 29 2c 20 6f 72 20 64 6f 20 77 65 lt (1), or do we 4e00: 20 74 65 73 74 20 28 2d 31 29 0a 23 0a 54 43 43 test (-1).#.TCC 4e10: 20 3d 20 24 28 54 43 43 29 20 2d 44 53 51 4c 49 =$(TCC) -DSQLI
4e20: 54 45 5f 54 48 52 45 41 44 5f 4f 56 45 52 52 49 TE_THREAD_OVERRI
4e30: 44 45 5f 4c 4f 43 4b 3d 2d 31 0a 52 43 43 20 3d DE_LOCK=-1.RCC =
4e40: 20 24 28 52 43 43 29 20 2d 44 53 51 4c 49 54 45 $(RCC) -DSQLITE 4e50: 5f 54 48 52 45 41 44 5f 4f 56 45 52 52 49 44 45 _THREAD_OVERRIDE 4e60: 5f 4c 4f 43 4b 3d 2d 31 0a 0a 23 20 41 6e 79 20 _LOCK=-1..# Any 4e70: 74 61 72 67 65 74 20 6c 69 62 72 61 72 69 65 73 target libraries 4e80: 20 77 68 69 63 68 20 6c 69 62 73 71 6c 69 74 65 which libsqlite 4e90: 20 6d 75 73 74 20 62 65 20 6c 69 6e 6b 65 64 20 must be linked 4ea0: 61 67 61 69 6e 73 74 0a 23 0a 21 49 46 4e 44 45 against.#.!IFNDE 4eb0: 46 20 54 4c 49 42 53 0a 54 4c 49 42 53 20 3d 0a F TLIBS.TLIBS =. 4ec0: 21 45 4e 44 49 46 0a 0a 23 20 46 6c 61 67 73 20 !ENDIF..# Flags 4ed0: 63 6f 6e 74 72 6f 6c 6c 69 6e 67 20 75 73 65 20 controlling use 4ee0: 6f 66 20 74 68 65 20 69 6e 20 6d 65 6d 6f 72 79 of the in memory 4ef0: 20 62 74 72 65 65 20 69 6d 70 6c 65 6d 65 6e 74 btree implement 4f00: 61 74 69 6f 6e 0a 23 0a 23 20 53 51 4c 49 54 45 ation.#.# SQLITE 4f10: 5f 54 45 4d 50 5f 53 54 4f 52 45 20 69 73 20 30 _TEMP_STORE is 0 4f20: 20 74 6f 20 66 6f 72 63 65 20 74 65 6d 70 6f 72 to force tempor 4f30: 61 72 79 20 74 61 62 6c 65 73 20 74 6f 20 62 65 ary tables to be 4f40: 20 69 6e 20 61 20 66 69 6c 65 2c 20 31 20 74 6f in a file, 1 to 4f50: 0a 23 20 64 65 66 61 75 6c 74 20 74 6f 20 66 69 .# default to fi 4f60: 6c 65 2c 20 32 20 74 6f 20 64 65 66 61 75 6c 74 le, 2 to default 4f70: 20 74 6f 20 6d 65 6d 6f 72 79 2c 20 61 6e 64 20 to memory, and 4f80: 33 20 74 6f 20 66 6f 72 63 65 20 74 65 6d 70 6f 3 to force tempo 4f90: 72 61 72 79 0a 23 20 74 61 62 6c 65 73 20 74 6f rary.# tables to 4fa0: 20 61 6c 77 61 79 73 20 62 65 20 69 6e 20 6d 65 always be in me 4fb0: 6d 6f 72 79 2e 0a 23 0a 54 43 43 20 3d 20 24 28 mory..#.TCC =$(
4fc0: 54 43 43 29 20 2d 44 53 51 4c 49 54 45 5f 54 45 TCC) -DSQLITE_TE
4fd0: 4d 50 5f 53 54 4f 52 45 3d 31 0a 52 43 43 20 3d MP_STORE=1.RCC =
4fe0: 20 24 28 52 43 43 29 20 2d 44 53 51 4c 49 54 45 $(RCC) -DSQLITE 4ff0: 5f 54 45 4d 50 5f 53 54 4f 52 45 3d 31 0a 0a 23 _TEMP_STORE=1..# 5000: 20 45 6e 61 62 6c 65 2f 64 69 73 61 62 6c 65 20 Enable/disable 5010: 6c 6f 61 64 61 62 6c 65 20 65 78 74 65 6e 73 69 loadable extensi 5020: 6f 6e 73 2c 20 61 6e 64 20 6f 74 68 65 72 20 6f ons, and other o 5030: 70 74 69 6f 6e 61 6c 20 66 65 61 74 75 72 65 73 ptional features 5040: 0a 23 20 62 61 73 65 64 20 6f 6e 20 63 6f 6e 66 .# based on conf 5050: 69 67 75 72 61 74 69 6f 6e 2e 20 28 2d 44 53 51 iguration. (-DSQ 5060: 4c 49 54 45 5f 4f 4d 49 54 2a 2c 20 2d 44 53 51 LITE_OMIT*, -DSQ 5070: 4c 49 54 45 5f 45 4e 41 42 4c 45 2a 29 2e 0a 23 LITE_ENABLE*)..# 5080: 20 54 68 65 20 73 61 6d 65 20 73 65 74 20 6f 66 The same set of 5090: 20 4f 4d 49 54 20 61 6e 64 20 45 4e 41 42 4c 45 OMIT and ENABLE 50a0: 20 66 6c 61 67 73 20 73 68 6f 75 6c 64 20 62 65 flags should be 50b0: 20 70 61 73 73 65 64 20 74 6f 20 74 68 65 0a 23 passed to the.# 50c0: 20 4c 45 4d 4f 4e 20 70 61 72 73 65 72 20 67 65 LEMON parser ge 50d0: 6e 65 72 61 74 6f 72 20 61 6e 64 20 74 68 65 20 nerator and the 50e0: 6d 6b 6b 65 79 77 6f 72 64 68 61 73 68 20 74 6f mkkeywordhash to 50f0: 6f 6c 20 61 73 20 77 65 6c 6c 2e 0a 0a 23 20 42 ol as well...# B 5100: 45 47 49 4e 20 73 74 61 6e 64 61 72 64 20 6f 70 EGIN standard op 5110: 74 69 6f 6e 73 0a 4f 50 54 5f 46 45 41 54 55 52 tions.OPT_FEATUR 5120: 45 5f 46 4c 41 47 53 20 3d 20 24 28 4f 50 54 5f E_FLAGS =$(OPT_
5130: 46 45 41 54 55 52 45 5f 46 4c 41 47 53 29 20 2d FEATURE_FLAGS) -
5140: 44 53 51 4c 49 54 45 5f 45 4e 41 42 4c 45 5f 46 DSQLITE_ENABLE_F
5150: 54 53 33 3d 31 0a 4f 50 54 5f 46 45 41 54 55 52 TS3=1.OPT_FEATUR
5160: 45 5f 46 4c 41 47 53 20 3d 20 24 28 4f 50 54 5f E_FLAGS = $(OPT_ 5170: 46 45 41 54 55 52 45 5f 46 4c 41 47 53 29 20 2d FEATURE_FLAGS) - 5180: 44 53 51 4c 49 54 45 5f 45 4e 41 42 4c 45 5f 52 DSQLITE_ENABLE_R 5190: 54 52 45 45 3d 31 0a 4f 50 54 5f 46 45 41 54 55 TREE=1.OPT_FEATU 51a0: 52 45 5f 46 4c 41 47 53 20 3d 20 24 28 4f 50 54 RE_FLAGS =$(OPT
51b0: 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 53 29 20 _FEATURE_FLAGS)
51c0: 2d 44 53 51 4c 49 54 45 5f 45 4e 41 42 4c 45 5f -DSQLITE_ENABLE_
51d0: 43 4f 4c 55 4d 4e 5f 4d 45 54 41 44 41 54 41 3d COLUMN_METADATA=
51e0: 31 0a 4f 50 54 5f 46 45 41 54 55 52 45 5f 46 4c 1.OPT_FEATURE_FL
51f0: 41 47 53 20 3d 20 24 28 4f 50 54 5f 46 45 41 54 AGS = $(OPT_FEAT 5200: 55 52 45 5f 46 4c 41 47 53 29 20 2d 44 53 51 4c URE_FLAGS) -DSQL 5210: 49 54 45 5f 45 4e 41 42 4c 45 5f 53 45 53 53 49 ITE_ENABLE_SESSI 5220: 4f 4e 3d 31 0a 4f 50 54 5f 46 45 41 54 55 52 45 ON=1.OPT_FEATURE 5230: 5f 46 4c 41 47 53 20 3d 20 24 28 4f 50 54 5f 46 _FLAGS =$(OPT_F
5240: 45 41 54 55 52 45 5f 46 4c 41 47 53 29 20 2d 44 EATURE_FLAGS) -D
5250: 53 51 4c 49 54 45 5f 45 4e 41 42 4c 45 5f 50 52 SQLITE_ENABLE_PR
5260: 45 55 50 44 41 54 45 5f 48 4f 4f 4b 3d 31 0a 23 EUPDATE_HOOK=1.#
5270: 20 45 4e 44 20 73 74 61 6e 64 61 72 64 20 6f 70 END standard op
5280: 74 69 6f 6e 73 0a 0a 23 20 54 68 65 73 65 20 61 tions..# These a
5290: 72 65 20 74 68 65 20 72 65 71 75 69 72 65 64 20 re the required
52a0: 53 51 4c 69 74 65 20 63 6f 6d 70 69 6c 61 74 69 SQLite compilati
52b0: 6f 6e 20 6f 70 74 69 6f 6e 73 20 75 73 65 64 20 on options used
52c0: 77 68 65 6e 20 63 6f 6d 70 69 6c 69 6e 67 20 66 when compiling f
52d0: 6f 72 0a 23 20 74 68 65 20 57 69 6e 64 6f 77 73 or.# the Windows
52e0: 20 70 6c 61 74 66 6f 72 6d 2e 0a 23 0a 52 45 51 platform..#.REQ
52f0: 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 53 20 3d _FEATURE_FLAGS =
5300: 20 24 28 52 45 51 5f 46 45 41 54 55 52 45 5f 46 $(REQ_FEATURE_F 5310: 4c 41 47 53 29 20 2d 44 53 51 4c 49 54 45 5f 4d LAGS) -DSQLITE_M 5320: 41 58 5f 54 52 49 47 47 45 52 5f 44 45 50 54 48 AX_TRIGGER_DEPTH 5330: 3d 31 30 30 0a 0a 23 20 49 66 20 77 65 20 61 72 =100..# If we ar 5340: 65 20 6c 69 6e 6b 69 6e 67 20 74 6f 20 74 68 65 e linking to the 5350: 20 52 50 43 52 54 34 20 6c 69 62 72 61 72 79 2c RPCRT4 library, 5360: 20 65 6e 61 62 6c 65 20 66 65 61 74 75 72 65 73 enable features 5370: 20 74 68 61 74 20 6e 65 65 64 20 69 74 2e 0a 23 that need it..# 5380: 0a 21 49 46 20 24 28 55 53 45 5f 52 50 43 52 54 .!IF$(USE_RPCRT
5390: 34 5f 4c 49 42 29 21 3d 30 0a 52 45 51 5f 46 45 4_LIB)!=0.REQ_FE
53a0: 41 54 55 52 45 5f 46 4c 41 47 53 20 3d 20 24 28 ATURE_FLAGS = $( 53b0: 52 45 51 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 REQ_FEATURE_FLAG 53c0: 53 29 20 2d 44 53 51 4c 49 54 45 5f 57 49 4e 33 S) -DSQLITE_WIN3 53d0: 32 5f 55 53 45 5f 55 55 49 44 3d 31 0a 21 45 4e 2_USE_UUID=1.!EN 53e0: 44 49 46 0a 0a 23 20 41 64 64 20 74 68 65 20 72 DIF..# Add the r 53f0: 65 71 75 69 72 65 64 20 61 6e 64 20 6f 70 74 69 equired and opti 5400: 6f 6e 61 6c 20 53 51 4c 69 74 65 20 63 6f 6d 70 onal SQLite comp 5410: 69 6c 61 74 69 6f 6e 20 6f 70 74 69 6f 6e 73 20 ilation options 5420: 69 6e 74 6f 20 74 68 65 20 63 6f 6d 6d 61 6e 64 into the command 5430: 0a 23 20 6c 69 6e 65 73 20 75 73 65 64 20 74 6f .# lines used to 5440: 20 69 6e 76 6f 6b 65 20 74 68 65 20 4d 53 56 43 invoke the MSVC 5450: 20 63 6f 64 65 20 61 6e 64 20 72 65 73 6f 75 72 code and resour 5460: 63 65 20 63 6f 6d 70 69 6c 65 72 73 2e 0a 23 0a ce compilers..#. 5470: 54 43 43 20 3d 20 24 28 54 43 43 29 20 24 28 52 TCC =$(TCC) $(R 5480: 45 51 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 53 EQ_FEATURE_FLAGS 5490: 29 20 24 28 4f 50 54 5f 46 45 41 54 55 52 45 5f )$(OPT_FEATURE_
54a0: 46 4c 41 47 53 29 20 24 28 45 58 54 5f 46 45 41 FLAGS) $(EXT_FEA 54b0: 54 55 52 45 5f 46 4c 41 47 53 29 0a 52 43 43 20 TURE_FLAGS).RCC 54c0: 3d 20 24 28 52 43 43 29 20 24 28 52 45 51 5f 46 =$(RCC) $(REQ_F 54d0: 45 41 54 55 52 45 5f 46 4c 41 47 53 29 20 24 28 EATURE_FLAGS)$(
54e0: 4f 50 54 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 OPT_FEATURE_FLAG
54f0: 53 29 20 24 28 45 58 54 5f 46 45 41 54 55 52 45 S) $(EXT_FEATURE 5500: 5f 46 4c 41 47 53 29 0a 0a 23 20 41 64 64 20 69 _FLAGS)..# Add i 5510: 6e 20 61 6e 79 20 6f 70 74 69 6f 6e 61 6c 20 70 n any optional p 5520: 61 72 61 6d 65 74 65 72 73 20 73 70 65 63 69 66 arameters specif 5530: 69 65 64 20 6f 6e 20 74 68 65 20 63 6f 6d 6d 61 ied on the comma 5540: 6e 65 20 6c 69 6e 65 2c 20 65 2e 67 2e 0a 23 20 ne line, e.g..# 5550: 6e 6d 61 6b 65 20 2f 66 20 4d 61 6b 65 66 69 6c nmake /f Makefil 5560: 65 2e 6d 73 63 20 61 6c 6c 20 22 4f 50 54 53 3d e.msc all "OPTS= 5570: 2d 44 53 51 4c 49 54 45 5f 45 4e 41 42 4c 45 5f -DSQLITE_ENABLE_ 5580: 46 4f 4f 3d 31 20 2d 44 53 51 4c 49 54 45 5f 4f FOO=1 -DSQLITE_O 5590: 4d 49 54 5f 46 4f 4f 3d 31 22 0a 23 0a 54 43 43 MIT_FOO=1".#.TCC 55a0: 20 3d 20 24 28 54 43 43 29 20 24 28 4f 50 54 53 =$(TCC) $(OPTS 55b0: 29 0a 52 43 43 20 3d 20 24 28 52 43 43 29 20 24 ).RCC =$(RCC) $55c0: 28 4f 50 54 53 29 0a 0a 23 20 49 66 20 63 6f 6d (OPTS)..# If com 55d0: 70 69 6c 69 6e 67 20 66 6f 72 20 64 65 62 75 67 piling for debug 55e0: 67 69 6e 67 2c 20 61 64 64 20 73 6f 6d 65 20 64 ging, add some d 55f0: 65 66 69 6e 65 73 2e 0a 23 0a 21 49 46 20 24 28 efines..#.!IF$(
5600: 44 45 42 55 47 29 3e 31 0a 54 43 43 20 3d 20 24 DEBUG)>1.TCC = $5610: 28 54 43 43 29 20 2d 44 5f 44 45 42 55 47 0a 42 (TCC) -D_DEBUG.B 5620: 43 43 20 3d 20 24 28 42 43 43 29 20 2d 44 5f 44 CC =$(BCC) -D_D
5630: 45 42 55 47 0a 52 43 43 20 3d 20 24 28 52 43 43 EBUG.RCC = $(RCC 5640: 29 20 2d 44 5f 44 45 42 55 47 0a 21 45 4e 44 49 ) -D_DEBUG.!ENDI 5650: 46 0a 0a 23 20 49 66 20 6f 70 74 69 6d 69 7a 61 F..# If optimiza 5660: 74 69 6f 6e 73 20 61 72 65 20 65 6e 61 62 6c 65 tions are enable 5670: 64 20 6f 72 20 64 69 73 61 62 6c 65 64 20 28 65 d or disabled (e 5680: 69 74 68 65 72 20 69 6d 70 6c 69 63 69 74 6c 79 ither implicitly 5690: 20 6f 72 0a 23 20 65 78 70 6c 69 63 69 74 6c 79 or.# explicitly 56a0: 29 2c 20 61 64 64 20 74 68 65 20 6e 65 63 65 73 ), add the neces 56b0: 73 61 72 79 20 66 6c 61 67 73 2e 0a 23 0a 21 49 sary flags..#.!I 56c0: 46 20 24 28 44 45 42 55 47 29 3e 31 20 7c 7c 20 F$(DEBUG)>1 ||
56d0: 24 28 4f 50 54 49 4d 49 5a 41 54 49 4f 4e 53 29 $(OPTIMIZATIONS) 56e0: 3d 3d 30 0a 54 43 43 20 3d 20 24 28 54 43 43 29 ==0.TCC =$(TCC)
56f0: 20 2d 4f 64 0a 42 43 43 20 3d 20 24 28 42 43 43 -Od.BCC = $(BCC 5700: 29 20 2d 4f 64 0a 21 45 4c 53 45 49 46 20 24 28 ) -Od.!ELSEIF$(
5710: 4f 50 54 49 4d 49 5a 41 54 49 4f 4e 53 29 3e 3d OPTIMIZATIONS)>=
5720: 33 0a 54 43 43 20 3d 20 24 28 54 43 43 29 20 2d 3.TCC = $(TCC) - 5730: 4f 78 0a 42 43 43 20 3d 20 24 28 42 43 43 29 20 Ox.BCC =$(BCC)
5740: 2d 4f 78 0a 21 45 4c 53 45 49 46 20 24 28 4f 50 -Ox.!ELSEIF $(OP 5750: 54 49 4d 49 5a 41 54 49 4f 4e 53 29 3d 3d 32 0a TIMIZATIONS)==2. 5760: 54 43 43 20 3d 20 24 28 54 43 43 29 20 2d 4f 32 TCC =$(TCC) -O2
5770: 0a 42 43 43 20 3d 20 24 28 42 43 43 29 20 2d 4f .BCC = $(BCC) -O 5780: 32 0a 21 45 4c 53 45 49 46 20 24 28 4f 50 54 49 2.!ELSEIF$(OPTI
5790: 4d 49 5a 41 54 49 4f 4e 53 29 3d 3d 31 0a 54 43 MIZATIONS)==1.TC
57a0: 43 20 3d 20 24 28 54 43 43 29 20 2d 4f 31 0a 42 C = $(TCC) -O1.B 57b0: 43 43 20 3d 20 24 28 42 43 43 29 20 2d 4f 31 0a CC =$(BCC) -O1.
57c0: 21 45 4e 44 49 46 0a 0a 23 20 49 66 20 73 79 6d !ENDIF..# If sym
57d0: 62 6f 6c 73 20 61 72 65 20 65 6e 61 62 6c 65 64 bols are enabled
57e0: 20 28 6f 72 20 63 6f 6d 70 69 6c 69 6e 67 20 66 (or compiling f
57f0: 6f 72 20 64 65 62 75 67 67 69 6e 67 29 2c 20 65 or debugging), e
5800: 6e 61 62 6c 65 20 50 44 42 73 2e 0a 23 0a 21 49 nable PDBs..#.!I
5810: 46 20 24 28 44 45 42 55 47 29 3e 31 20 7c 7c 20 F $(DEBUG)>1 || 5820: 24 28 53 59 4d 42 4f 4c 53 29 21 3d 30 0a 54 43$(SYMBOLS)!=0.TC
5830: 43 20 3d 20 24 28 54 43 43 29 20 2d 5a 69 0a 42 C = $(TCC) -Zi.B 5840: 43 43 20 3d 20 24 28 42 43 43 29 20 2d 5a 69 0a CC =$(BCC) -Zi.
5850: 21 45 4e 44 49 46 0a 0a 23 20 3c 3c 6d 61 72 6b !ENDIF..# <<mark
5860: 3e 3e 0a 23 20 49 66 20 49 43 55 20 73 75 70 70 >>.# If ICU supp
5870: 6f 72 74 20 69 73 20 65 6e 61 62 6c 65 64 2c 20 ort is enabled,
5880: 61 64 64 20 74 68 65 20 63 6f 6d 70 69 6c 65 72 add the compiler
5890: 20 6f 70 74 69 6f 6e 73 20 66 6f 72 20 69 74 2e options for it.
58a0: 0a 23 0a 21 49 46 20 24 28 55 53 45 5f 49 43 55 .#.!IF $(USE_ICU 58b0: 29 21 3d 30 0a 54 43 43 20 3d 20 24 28 54 43 43 )!=0.TCC =$(TCC
58c0: 29 20 2d 44 53 51 4c 49 54 45 5f 45 4e 41 42 4c ) -DSQLITE_ENABL
58d0: 45 5f 49 43 55 3d 31 0a 52 43 43 20 3d 20 24 28 E_ICU=1.RCC = $( 58e0: 52 43 43 29 20 2d 44 53 51 4c 49 54 45 5f 45 4e RCC) -DSQLITE_EN 58f0: 41 42 4c 45 5f 49 43 55 3d 31 0a 54 43 43 20 3d ABLE_ICU=1.TCC = 5900: 20 24 28 54 43 43 29 20 2d 49 24 28 54 4f 50 29$(TCC) -I$(TOP) 5910: 5c 65 78 74 5c 69 63 75 0a 52 43 43 20 3d 20 24 \ext\icu.RCC =$
5920: 28 52 43 43 29 20 2d 49 24 28 54 4f 50 29 5c 65 (RCC) -I$(TOP)\e 5930: 78 74 5c 69 63 75 0a 54 43 43 20 3d 20 24 28 54 xt\icu.TCC =$(T
5940: 43 43 29 20 2d 49 24 28 49 43 55 49 4e 43 44 49 CC) -I$(ICUINCDI 5950: 52 29 0a 52 43 43 20 3d 20 24 28 52 43 43 29 20 R).RCC =$(RCC)
5960: 2d 49 24 28 49 43 55 49 4e 43 44 49 52 29 0a 21 -I$(ICUINCDIR).! 5970: 45 4e 44 49 46 0a 23 20 3c 3c 2f 6d 61 72 6b 3e ENDIF.# <</mark> 5980: 3e 0a 0a 23 20 43 6f 6d 6d 61 6e 64 20 6c 69 6e >..# Command lin 5990: 65 20 70 72 65 66 69 78 65 73 20 66 6f 72 20 63 e prefixes for c 59a0: 6f 6d 70 69 6c 69 6e 67 20 63 6f 64 65 2c 20 63 ompiling code, c 59b0: 6f 6d 70 69 6c 69 6e 67 20 72 65 73 6f 75 72 63 ompiling resourc 59c0: 65 73 2c 0a 23 20 6c 69 6e 6b 69 6e 67 2c 20 65 es,.# linking, e 59d0: 74 63 2e 0a 23 0a 4c 54 43 4f 4d 50 49 4c 45 20 tc..#.LTCOMPILE 59e0: 3d 20 24 28 54 43 43 29 20 2d 46 6f 24 40 0a 4c =$(TCC) -Fo$@.L 59f0: 54 52 43 4f 4d 50 49 4c 45 20 3d 20 24 28 52 43 TRCOMPILE =$(RC
5a00: 43 29 20 2d 72 0a 4c 54 4c 49 42 20 3d 20 6c 69 C) -r.LTLIB = li
5a10: 62 2e 65 78 65 0a 4c 54 4c 49 4e 4b 20 3d 20 24 b.exe.LTLINK = $5a20: 28 54 43 43 29 20 2d 46 65 24 40 0a 0a 23 20 49 (TCC) -Fe$@..# I
5a30: 66 20 72 65 71 75 65 73 74 65 64 2c 20 6c 69 6e f requested, lin
5a40: 6b 20 74 6f 20 74 68 65 20 52 50 43 52 54 34 20 k to the RPCRT4
5a50: 6c 69 62 72 61 72 79 2e 0a 23 0a 21 49 46 20 24 library..#.!IF $5a60: 28 55 53 45 5f 52 50 43 52 54 34 5f 4c 49 42 29 (USE_RPCRT4_LIB) 5a70: 21 3d 30 0a 4c 54 4c 49 4e 4b 20 3d 20 24 28 4c !=0.LTLINK =$(L
5a80: 54 4c 49 4e 4b 29 20 72 70 63 72 74 34 2e 6c 69 TLINK) rpcrt4.li
5a90: 62 0a 21 45 4e 44 49 46 0a 0a 23 20 49 66 20 61 b.!ENDIF..# If a
5aa0: 20 70 6c 61 74 66 6f 72 6d 20 77 61 73 20 73 65 platform was se
5ab0: 74 2c 20 66 6f 72 63 65 20 74 68 65 20 6c 69 6e t, force the lin
5ac0: 6b 65 72 20 74 6f 20 74 61 72 67 65 74 20 74 68 ker to target th
5ad0: 61 74 2e 0a 23 20 4e 6f 74 65 20 74 68 61 74 20 at..# Note that
5ae0: 74 68 65 20 76 63 76 61 72 73 2a 2e 62 61 74 20 the vcvars*.bat
5af0: 66 61 6d 69 6c 79 20 6f 66 20 62 61 74 63 68 20 family of batch
5b00: 66 69 6c 65 73 20 74 79 70 69 63 61 6c 6c 79 0a files typically.
5b10: 23 20 73 65 74 20 74 68 69 73 20 66 6f 72 20 79 # set this for y
5b20: 6f 75 2e 20 20 4f 74 68 65 72 77 69 73 65 2c 20 ou. Otherwise,
5b30: 74 68 65 20 6c 69 6e 6b 65 72 20 77 69 6c 6c 20 the linker will
5b40: 61 74 74 65 6d 70 74 0a 23 20 74 6f 20 64 65 64 attempt.# to ded
5b50: 75 63 65 20 74 68 65 20 62 69 6e 61 72 79 20 74 uce the binary t
5b60: 79 70 65 20 62 61 73 65 64 20 6f 6e 20 74 68 65 ype based on the
5b70: 20 6f 62 6a 65 63 74 20 66 69 6c 65 73 2e 0a 21 object files..!
5b80: 49 46 44 45 46 20 50 4c 41 54 46 4f 52 4d 0a 4c IFDEF PLATFORM.L
5b90: 54 4c 49 4e 4b 4f 50 54 53 20 3d 20 2f 4e 4f 4c TLINKOPTS = /NOL
5ba0: 4f 47 4f 20 2f 4d 41 43 48 49 4e 45 3a 24 28 50 OGO /MACHINE:$(P 5bb0: 4c 41 54 46 4f 52 4d 29 0a 4c 54 4c 49 42 4f 50 LATFORM).LTLIBOP 5bc0: 54 53 20 3d 20 2f 4e 4f 4c 4f 47 4f 20 2f 4d 41 TS = /NOLOGO /MA 5bd0: 43 48 49 4e 45 3a 24 28 50 4c 41 54 46 4f 52 4d CHINE:$(PLATFORM
5be0: 29 0a 21 45 4c 53 45 0a 4c 54 4c 49 4e 4b 4f 50 ).!ELSE.LTLINKOP
5bf0: 54 53 20 3d 20 2f 4e 4f 4c 4f 47 4f 0a 4c 54 4c TS = /NOLOGO.LTL
5c00: 49 42 4f 50 54 53 20 3d 20 2f 4e 4f 4c 4f 47 4f IBOPTS = /NOLOGO
5c10: 0a 21 45 4e 44 49 46 0a 0a 23 20 57 68 65 6e 20 .!ENDIF..# When
5c20: 63 6f 6d 70 69 6c 69 6e 67 20 66 6f 72 20 75 73 compiling for us
5c30: 65 20 69 6e 20 74 68 65 20 57 69 6e 52 54 20 65 e in the WinRT e
5c40: 6e 76 69 72 6f 6e 6d 65 6e 74 2c 20 74 68 65 20 nvironment, the
5c50: 66 6f 6c 6c 6f 77 69 6e 67 0a 23 20 6c 69 6e 6b following.# link
5c60: 65 72 20 6f 70 74 69 6f 6e 20 6d 75 73 74 20 62 er option must b
5c70: 65 20 75 73 65 64 20 74 6f 20 6d 61 72 6b 20 74 e used to mark t
5c80: 68 65 20 65 78 65 63 75 74 61 62 6c 65 20 61 73 he executable as
5c90: 20 72 75 6e 6e 61 62 6c 65 0a 23 20 6f 6e 6c 79 runnable.# only
5ca0: 20 69 6e 20 74 68 65 20 63 6f 6e 74 65 78 74 20 in the context
5cb0: 6f 66 20 61 6e 20 61 70 70 6c 69 63 61 74 69 6f of an applicatio
5cc0: 6e 20 63 6f 6e 74 61 69 6e 65 72 2e 0a 23 0a 21 n container..#.!
5cd0: 49 46 20 24 28 46 4f 52 5f 57 49 4e 52 54 29 21 IF $(FOR_WINRT)! 5ce0: 3d 30 0a 4c 54 4c 49 4e 4b 4f 50 54 53 20 3d 20 =0.LTLINKOPTS = 5cf0: 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 20 2f 41$(LTLINKOPTS) /A
5d00: 50 50 43 4f 4e 54 41 49 4e 45 52 0a 21 49 46 20 PPCONTAINER.!IF
5d10: 22 24 28 56 49 53 55 41 4c 53 54 55 44 49 4f 56 "$(VISUALSTUDIOV 5d20: 45 52 53 49 4f 4e 29 22 3d 3d 22 31 32 2e 30 22 ERSION)"=="12.0" 5d30: 20 7c 7c 20 22 24 28 56 49 53 55 41 4c 53 54 55 || "$(VISUALSTU
5d40: 44 49 4f 56 45 52 53 49 4f 4e 29 22 3d 3d 22 31 DIOVERSION)"=="1
5d50: 34 2e 30 22 0a 21 49 46 4e 44 45 46 20 53 54 4f 4.0".!IFNDEF STO
5d60: 52 45 4c 49 42 50 41 54 48 0a 21 49 46 20 22 24 RELIBPATH.!IF "$5d70: 28 50 4c 41 54 46 4f 52 4d 29 22 3d 3d 22 78 38 (PLATFORM)"=="x8 5d80: 36 22 0a 53 54 4f 52 45 4c 49 42 50 41 54 48 20 6".STORELIBPATH 5d90: 3d 20 24 28 43 52 54 4c 49 42 50 41 54 48 29 5c =$(CRTLIBPATH)\
5da0: 73 74 6f 72 65 0a 21 45 4c 53 45 49 46 20 22 24 store.!ELSEIF "$5db0: 28 50 4c 41 54 46 4f 52 4d 29 22 3d 3d 22 78 36 (PLATFORM)"=="x6 5dc0: 34 22 0a 53 54 4f 52 45 4c 49 42 50 41 54 48 20 4".STORELIBPATH 5dd0: 3d 20 24 28 43 52 54 4c 49 42 50 41 54 48 29 5c =$(CRTLIBPATH)\
5de0: 73 74 6f 72 65 5c 61 6d 64 36 34 0a 21 45 4c 53 store\amd64.!ELS
5df0: 45 49 46 20 22 24 28 50 4c 41 54 46 4f 52 4d 29 EIF "$(PLATFORM) 5e00: 22 3d 3d 22 41 52 4d 22 0a 53 54 4f 52 45 4c 49 "=="ARM".STORELI 5e10: 42 50 41 54 48 20 3d 20 24 28 43 52 54 4c 49 42 BPATH =$(CRTLIB
5e20: 50 41 54 48 29 5c 73 74 6f 72 65 5c 61 72 6d 0a PATH)\store\arm.
5e30: 21 45 4c 53 45 0a 53 54 4f 52 45 4c 49 42 50 41 !ELSE.STORELIBPA
5e40: 54 48 20 3d 20 24 28 43 52 54 4c 49 42 50 41 54 TH = $(CRTLIBPAT 5e50: 48 29 5c 73 74 6f 72 65 0a 21 45 4e 44 49 46 0a H)\store.!ENDIF. 5e60: 21 45 4e 44 49 46 0a 53 54 4f 52 45 4c 49 42 50 !ENDIF.STORELIBP 5e70: 41 54 48 20 3d 20 24 28 53 54 4f 52 45 4c 49 42 ATH =$(STORELIB
5e80: 50 41 54 48 3a 5c 5c 3d 5c 29 0a 4c 54 4c 49 4e PATH:\\=\).LTLIN
5e90: 4b 4f 50 54 53 20 3d 20 24 28 4c 54 4c 49 4e 4b KOPTS = $(LTLINK 5ea0: 4f 50 54 53 29 20 22 2f 4c 49 42 50 41 54 48 3a OPTS) "/LIBPATH: 5eb0: 24 28 53 54 4f 52 45 4c 49 42 50 41 54 48 29 22$(STORELIBPATH)"
5ec0: 0a 21 45 4e 44 49 46 0a 21 45 4e 44 49 46 0a 0a .!ENDIF.!ENDIF..
5ed0: 23 20 57 68 65 6e 20 63 6f 6d 70 69 6c 69 6e 67 # When compiling
5ee0: 20 66 6f 72 20 57 69 6e 64 6f 77 73 20 50 68 6f for Windows Pho
5ef0: 6e 65 20 38 2e 31 2c 20 61 6e 20 65 78 74 72 61 ne 8.1, an extra
5f00: 20 6c 69 62 72 61 72 79 20 70 61 74 68 20 69 73 library path is
5f10: 0a 23 20 72 65 71 75 69 72 65 64 2e 0a 23 0a 21 .# required..#.!
5f20: 49 46 20 24 28 55 53 45 5f 57 50 38 31 5f 4f 50 IF $(USE_WP81_OP 5f30: 54 53 29 21 3d 30 0a 21 49 46 4e 44 45 46 20 57 TS)!=0.!IFNDEF W 5f40: 50 38 31 4c 49 42 50 41 54 48 0a 21 49 46 20 22 P81LIBPATH.!IF " 5f50: 24 28 50 4c 41 54 46 4f 52 4d 29 22 3d 3d 22 78$(PLATFORM)"=="x
5f60: 38 36 22 0a 57 50 38 31 4c 49 42 50 41 54 48 20 86".WP81LIBPATH
5f70: 3d 20 24 28 50 52 4f 47 52 41 4d 46 49 4c 45 53 = $(PROGRAMFILES 5f80: 5f 58 38 36 29 5c 57 69 6e 64 6f 77 73 20 50 68 _X86)\Windows Ph 5f90: 6f 6e 65 20 4b 69 74 73 5c 38 2e 31 5c 6c 69 62 one Kits\8.1\lib 5fa0: 5c 78 38 36 0a 21 45 4c 53 45 49 46 20 22 24 28 \x86.!ELSEIF "$(
5fb0: 50 4c 41 54 46 4f 52 4d 29 22 3d 3d 22 41 52 4d PLATFORM)"=="ARM
5fc0: 22 0a 57 50 38 31 4c 49 42 50 41 54 48 20 3d 20 ".WP81LIBPATH =
5fd0: 24 28 50 52 4f 47 52 41 4d 46 49 4c 45 53 5f 58 $(PROGRAMFILES_X 5fe0: 38 36 29 5c 57 69 6e 64 6f 77 73 20 50 68 6f 6e 86)\Windows Phon 5ff0: 65 20 4b 69 74 73 5c 38 2e 31 5c 6c 69 62 5c 41 e Kits\8.1\lib\A 6000: 52 4d 0a 21 45 4c 53 45 0a 57 50 38 31 4c 49 42 RM.!ELSE.WP81LIB 6010: 50 41 54 48 20 3d 20 24 28 50 52 4f 47 52 41 4d PATH =$(PROGRAM
6020: 46 49 4c 45 53 5f 58 38 36 29 5c 57 69 6e 64 6f FILES_X86)\Windo
6030: 77 73 20 50 68 6f 6e 65 20 4b 69 74 73 5c 38 2e ws Phone Kits\8.
6040: 31 5c 6c 69 62 5c 78 38 36 0a 21 45 4e 44 49 46 1\lib\x86.!ENDIF
6050: 0a 21 45 4e 44 49 46 0a 21 45 4e 44 49 46 0a 0a .!ENDIF.!ENDIF..
6060: 23 20 57 68 65 6e 20 63 6f 6d 70 69 6c 69 6e 67 # When compiling
6070: 20 66 6f 72 20 57 69 6e 64 6f 77 73 20 50 68 6f for Windows Pho
6080: 6e 65 20 38 2e 31 2c 20 73 6f 6d 65 20 65 78 74 ne 8.1, some ext
6090: 72 61 20 6c 69 6e 6b 65 72 20 6f 70 74 69 6f 6e ra linker option
60a0: 73 0a 23 20 61 72 65 20 61 6c 73 6f 20 72 65 71 s.# are also req
60b0: 75 69 72 65 64 2e 0a 23 0a 21 49 46 20 24 28 55 uired..#.!IF $(U 60c0: 53 45 5f 57 50 38 31 5f 4f 50 54 53 29 21 3d 30 SE_WP81_OPTS)!=0 60d0: 0a 21 49 46 44 45 46 20 57 50 38 31 4c 49 42 50 .!IFDEF WP81LIBP 60e0: 41 54 48 0a 4c 54 4c 49 4e 4b 4f 50 54 53 20 3d ATH.LTLINKOPTS = 60f0: 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 20 22$(LTLINKOPTS) "
6100: 2f 4c 49 42 50 41 54 48 3a 24 28 57 50 38 31 4c /LIBPATH:$(WP81L 6110: 49 42 50 41 54 48 29 22 0a 21 45 4e 44 49 46 0a IBPATH)".!ENDIF. 6120: 4c 54 4c 49 4e 4b 4f 50 54 53 20 3d 20 24 28 4c LTLINKOPTS =$(L
6130: 54 4c 49 4e 4b 4f 50 54 53 29 20 2f 44 59 4e 41 TLINKOPTS) /DYNA
6140: 4d 49 43 42 41 53 45 0a 4c 54 4c 49 4e 4b 4f 50 MICBASE.LTLINKOP
6150: 54 53 20 3d 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 TS = $(LTLINKOPT 6160: 53 29 20 57 69 6e 64 6f 77 73 50 68 6f 6e 65 43 S) WindowsPhoneC 6170: 6f 72 65 2e 6c 69 62 20 52 75 6e 74 69 6d 65 4f ore.lib RuntimeO 6180: 62 6a 65 63 74 2e 6c 69 62 20 50 68 6f 6e 65 41 bject.lib PhoneA 6190: 70 70 4d 6f 64 65 6c 48 6f 73 74 2e 6c 69 62 0a ppModelHost.lib. 61a0: 4c 54 4c 49 4e 4b 4f 50 54 53 20 3d 20 24 28 4c LTLINKOPTS =$(L
61b0: 54 4c 49 4e 4b 4f 50 54 53 29 20 2f 4e 4f 44 45 TLINKOPTS) /NODE
61c0: 46 41 55 4c 54 4c 49 42 3a 6b 65 72 6e 65 6c 33 FAULTLIB:kernel3
61d0: 32 2e 6c 69 62 20 2f 4e 4f 44 45 46 41 55 4c 54 2.lib /NODEFAULT
61e0: 4c 49 42 3a 6f 6c 65 33 32 2e 6c 69 62 0a 21 45 LIB:ole32.lib.!E
61f0: 4e 44 49 46 0a 0a 23 20 57 68 65 6e 20 63 6f 6d NDIF..# When com
6200: 70 69 6c 69 6e 67 20 66 6f 72 20 55 57 50 20 6f piling for UWP o
6210: 72 20 74 68 65 20 57 69 6e 64 6f 77 73 20 31 30 r the Windows 10
6220: 20 70 6c 61 74 66 6f 72 6d 2c 20 73 6f 6d 65 20 platform, some
6230: 65 78 74 72 61 20 6c 69 6e 6b 65 72 0a 23 20 6f extra linker.# o
6240: 70 74 69 6f 6e 73 20 61 72 65 20 61 6c 73 6f 20 ptions are also
6250: 72 65 71 75 69 72 65 64 2e 0a 23 0a 21 49 46 20 required..#.!IF
6260: 24 28 46 4f 52 5f 55 57 50 29 21 3d 30 20 7c 7c $(FOR_UWP)!=0 || 6270: 20 24 28 46 4f 52 5f 57 49 4e 31 30 29 21 3d 30$(FOR_WIN10)!=0
6280: 0a 4c 54 4c 49 4e 4b 4f 50 54 53 20 3d 20 24 28 .LTLINKOPTS = $( 6290: 4c 54 4c 49 4e 4b 4f 50 54 53 29 20 2f 44 59 4e LTLINKOPTS) /DYN 62a0: 41 4d 49 43 42 41 53 45 20 2f 4e 4f 44 45 46 41 AMICBASE /NODEFA 62b0: 55 4c 54 4c 49 42 3a 6b 65 72 6e 65 6c 33 32 2e ULTLIB:kernel32. 62c0: 6c 69 62 0a 4c 54 4c 49 4e 4b 4f 50 54 53 20 3d lib.LTLINKOPTS = 62d0: 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 20 6d$(LTLINKOPTS) m
62e0: 69 6e 63 6f 72 65 2e 6c 69 62 0a 21 49 46 44 45 incore.lib.!IFDE
62f0: 46 20 50 53 44 4b 4c 49 42 50 41 54 48 0a 4c 54 F PSDKLIBPATH.LT
6300: 4c 49 4e 4b 4f 50 54 53 20 3d 20 24 28 4c 54 4c LINKOPTS = $(LTL 6310: 49 4e 4b 4f 50 54 53 29 20 22 2f 4c 49 42 50 41 INKOPTS) "/LIBPA 6320: 54 48 3a 24 28 50 53 44 4b 4c 49 42 50 41 54 48 TH:$(PSDKLIBPATH
6330: 29 22 0a 21 45 4e 44 49 46 0a 21 45 4e 44 49 46 )".!ENDIF.!ENDIF
6340: 0a 0a 21 49 46 20 24 28 46 4f 52 5f 57 49 4e 31 ..!IF $(FOR_WIN1 6350: 30 29 21 3d 30 0a 4c 54 4c 49 4e 4b 4f 50 54 53 0)!=0.LTLINKOPTS 6360: 20 3d 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 =$(LTLINKOPTS)
6370: 20 2f 67 75 61 72 64 3a 63 66 20 22 2f 4c 49 42 /guard:cf "/LIB
6380: 50 41 54 48 3a 24 28 55 43 52 54 4c 49 42 50 41 PATH:$(UCRTLIBPA 6390: 54 48 29 22 0a 21 49 46 20 24 28 44 45 42 55 47 TH)".!IF$(DEBUG
63a0: 29 3e 31 0a 4c 54 4c 49 4e 4b 4f 50 54 53 20 3d )>1.LTLINKOPTS =
63b0: 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 20 2f $(LTLINKOPTS) / 63c0: 4e 4f 44 45 46 41 55 4c 54 4c 49 42 3a 6c 69 62 NODEFAULTLIB:lib 63d0: 75 63 72 74 64 2e 6c 69 62 20 2f 44 45 46 41 55 ucrtd.lib /DEFAU 63e0: 4c 54 4c 49 42 3a 75 63 72 74 64 2e 6c 69 62 0a LTLIB:ucrtd.lib. 63f0: 21 45 4c 53 45 0a 4c 54 4c 49 4e 4b 4f 50 54 53 !ELSE.LTLINKOPTS 6400: 20 3d 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 =$(LTLINKOPTS)
6410: 20 2f 4e 4f 44 45 46 41 55 4c 54 4c 49 42 3a 6c /NODEFAULTLIB:l
6420: 69 62 75 63 72 74 2e 6c 69 62 20 2f 44 45 46 41 ibucrt.lib /DEFA
6430: 55 4c 54 4c 49 42 3a 75 63 72 74 2e 6c 69 62 0a ULTLIB:ucrt.lib.
6440: 21 45 4e 44 49 46 0a 21 45 4e 44 49 46 0a 0a 23 !ENDIF.!ENDIF..#
6450: 20 49 66 20 65 69 74 68 65 72 20 64 65 62 75 67 If either debug
6460: 67 69 6e 67 20 6f 72 20 73 79 6d 62 6f 6c 73 20 ging or symbols
6470: 61 72 65 20 65 6e 61 62 6c 65 64 2c 20 65 6e 61 are enabled, ena
6480: 62 6c 65 20 50 44 42 73 2e 0a 23 0a 21 49 46 20 ble PDBs..#.!IF
6490: 24 28 44 45 42 55 47 29 3e 31 20 7c 7c 20 24 28 $(DEBUG)>1 ||$(
64a0: 53 59 4d 42 4f 4c 53 29 21 3d 30 0a 4c 44 46 4c SYMBOLS)!=0.LDFL
64b0: 41 47 53 20 3d 20 2f 44 45 42 55 47 20 24 28 4c AGS = /DEBUG $(L 64c0: 44 4f 50 54 53 29 0a 21 45 4c 53 45 0a 4c 44 46 DOPTS).!ELSE.LDF 64d0: 4c 41 47 53 20 3d 20 24 28 4c 44 4f 50 54 53 29 LAGS =$(LDOPTS)
64e0: 0a 21 45 4e 44 49 46 0a 0a 23 20 3c 3c 6d 61 72 .!ENDIF..# <<mar
64f0: 6b 3e 3e 0a 23 20 53 74 61 72 74 20 77 69 74 68 k>>.# Start with
6500: 20 74 68 65 20 54 63 6c 20 72 65 6c 61 74 65 64 the Tcl related
6510: 20 6c 69 6e 6b 65 72 20 6f 70 74 69 6f 6e 73 2e linker options.
6520: 0a 23 0a 21 49 46 20 24 28 4e 4f 5f 54 43 4c 29 .#.!IF $(NO_TCL) 6530: 3d 3d 30 0a 4c 54 4c 49 42 50 41 54 48 53 20 3d ==0.LTLIBPATHS = 6540: 20 2f 4c 49 42 50 41 54 48 3a 24 28 54 43 4c 4c /LIBPATH:$(TCLL
6550: 49 42 44 49 52 29 0a 4c 54 4c 49 42 53 20 3d 20 IBDIR).LTLIBS =
6560: 24 28 4c 49 42 54 43 4c 29 0a 21 45 4e 44 49 46 $(LIBTCL).!ENDIF 6570: 0a 0a 23 20 49 66 20 49 43 55 20 73 75 70 70 6f ..# If ICU suppo 6580: 72 74 20 69 73 20 65 6e 61 62 6c 65 64 2c 20 61 rt is enabled, a 6590: 64 64 20 74 68 65 20 6c 69 6e 6b 65 72 20 6f 70 dd the linker op 65a0: 74 69 6f 6e 73 20 66 6f 72 20 69 74 2e 0a 23 0a tions for it..#. 65b0: 21 49 46 20 24 28 55 53 45 5f 49 43 55 29 21 3d !IF$(USE_ICU)!=
65c0: 30 0a 4c 54 4c 49 42 50 41 54 48 53 20 3d 20 24 0.LTLIBPATHS = $65d0: 28 4c 54 4c 49 42 50 41 54 48 53 29 20 2f 4c 49 (LTLIBPATHS) /LI 65e0: 42 50 41 54 48 3a 24 28 49 43 55 4c 49 42 44 49 BPATH:$(ICULIBDI
65f0: 52 29 0a 4c 54 4c 49 42 53 20 3d 20 24 28 4c 54 R).LTLIBS = $(LT 6600: 4c 49 42 53 29 20 24 28 4c 49 42 49 43 55 29 0a LIBS)$(LIBICU).
6610: 21 45 4e 44 49 46 0a 23 20 3c 3c 2f 6d 61 72 6b !ENDIF.# <</mark
6620: 3e 3e 0a 0a 23 20 59 6f 75 20 73 68 6f 75 6c 64 >>..# You should
6630: 20 6e 6f 74 20 68 61 76 65 20 74 6f 20 63 68 61 not have to cha
6640: 6e 67 65 20 61 6e 79 74 68 69 6e 67 20 62 65 6c nge anything bel
6650: 6f 77 20 74 68 69 73 20 6c 69 6e 65 0a 23 23 23 ow this line.###
6660: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
6670: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
6680: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
6690: 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 ################
66a0: 23 23 23 23 23 23 23 23 23 23 23 23 0a 0a 23 20 ############..#
66b0: 3c 3c 6d 61 72 6b 3e 3e 0a 23 20 4f 62 6a 65 63 <<mark>>.# Objec
66c0: 74 20 66 69 6c 65 73 20 66 6f 72 20 74 68 65 20 t files for the
66d0: 53 51 4c 69 74 65 20 6c 69 62 72 61 72 79 20 28 SQLite library (
66e0: 6e 6f 6e 2d 61 6d 61 6c 67 61 6d 61 74 69 6f 6e non-amalgamation
66f0: 29 2e 0a 23 0a 4c 49 42 4f 42 4a 53 30 20 3d 20 )..#.LIBOBJS0 =
6700: 76 64 62 65 2e 6c 6f 20 70 61 72 73 65 2e 6c 6f vdbe.lo parse.lo
6710: 20 61 6c 74 65 72 2e 6c 6f 20 61 6e 61 6c 79 7a alter.lo analyz
6720: 65 2e 6c 6f 20 61 74 74 61 63 68 2e 6c 6f 20 61 e.lo attach.lo a
6730: 75 74 68 2e 6c 6f 20 5c 0a 20 20 20 20 20 20 20 uth.lo \.
6740: 20 20 62 61 63 6b 75 70 2e 6c 6f 20 62 69 74 76 backup.lo bitv
6750: 65 63 2e 6c 6f 20 62 74 6d 75 74 65 78 2e 6c 6f ec.lo btmutex.lo
6760: 20 62 74 72 65 65 2e 6c 6f 20 62 75 69 6c 64 2e btree.lo build.
6770: 6c 6f 20 5c 0a 20 20 20 20 20 20 20 20 20 63 61 lo \. ca
6780: 6c 6c 62 61 63 6b 2e 6c 6f 20 63 6f 6d 70 6c 65 llback.lo comple
6790: 74 65 2e 6c 6f 20 63 74 69 6d 65 2e 6c 6f 20 64 te.lo ctime.lo d
67a0: 61 74 65 2e 6c 6f 20 64 62 73 74 61 74 2e 6c 6f ate.lo dbstat.lo
67b0: 20 64 65 6c 65 74 65 2e 6c 6f 20 5c 0a 20 20 20 delete.lo \.
67c0: 20 20 20 20 20 20 65 78 70 72 2e 6c 6f 20 66 61 expr.lo fa
67d0: 75 6c 74 2e 6c 6f 20 66 6b 65 79 2e 6c 6f 20 5c ult.lo fkey.lo \
67e0: 0a 20 20 20 20 20 20 20 20 20 66 74 73 33 2e 6c . fts3.l
67f0: 6f 20 66 74 73 33 5f 61 75 78 2e 6c 6f 20 66 74 o fts3_aux.lo ft
6800: 73 33 5f 65 78 70 72 2e 6c 6f 20 66 74 73 33 5f s3_expr.lo fts3_
6810: 68 61 73 68 2e 6c 6f 20 66 74 73 33 5f 69 63 75 hash.lo fts3_icu
6820: 2e 6c 6f 20 5c 0a 20 20 20 20 20 20 20 20 20 66 .lo \. f
6830: 74 73 33 5f 70 6f 72 74 65 72 2e 6c 6f 20 66 74 ts3_porter.lo ft
6840: 73 33 5f 73 6e 69 70 70 65 74 2e 6c 6f 20 66 74 s3_snippet.lo ft
6850: 73 33 5f 74 6f 6b 65 6e 69 7a 65 72 2e 6c 6f 20 s3_tokenizer.lo
6860: 66 74 73 33 5f 74 6f 6b 65 6e 69 7a 65 72 31 2e fts3_tokenizer1.
6870: 6c 6f 20 5c 0a 20 20 20 20 20 20 20 20 20 66 74 lo \. ft
6880: 73 33 5f 74 6f 6b 65 6e 69 7a 65 5f 76 74 61 62 s3_tokenize_vtab
6890: 2e 6c 6f 20 66 74 73 33 5f 75 6e 69 63 6f 64 65 .lo fts3_unicode
68a0: 2e 6c 6f 20 66 74 73 33 5f 75 6e 69 63 6f 64 65 .lo fts3_unicode
68b0: 32 2e 6c 6f 20 66 74 73 33 5f 77 72 69 74 65 2e 2.lo fts3_write.
68c0: 6c 6f 20 5c 0a 20 20 20 20 20 20 20 20 20 66 74 lo \. ft
68d0: 73 35 2e 6c 6f 20 5c 0a 20 20 20 20 20 20 20 20 s5.lo \.
68e0: 20 66 75 6e 63 2e 6c 6f 20 67 6c 6f 62 61 6c 2e func.lo global.
68f0: 6c 6f 20 68 61 73 68 2e 6c 6f 20 5c 0a 20 20 20 lo hash.lo \.
6900: 20 20 20 20 20 20 69 63 75 2e 6c 6f 20 69 6e 73 icu.lo ins
6910: 65 72 74 2e 6c 6f 20 6a 6f 75 72 6e 61 6c 2e 6c ert.lo journal.l
6920: 6f 20 6c 65 67 61 63 79 2e 6c 6f 20 6c 6f 61 64 o legacy.lo load
6930: 65 78 74 2e 6c 6f 20 5c 0a 20 20 20 20 20 20 20 ext.lo \.
6940: 20 20 6d 61 69 6e 2e 6c 6f 20 6d 61 6c 6c 6f 63 main.lo malloc
6950: 2e 6c 6f 20 6d 65 6d 30 2e 6c 6f 20 6d 65 6d 31 .lo mem0.lo mem1
6960: 2e 6c 6f 20 6d 65 6d 32 2e 6c 6f 20 6d 65 6d 33 .lo mem2.lo mem3
6970: 2e 6c 6f 20 6d 65 6d 35 2e 6c 6f 20 5c 0a 20 20 .lo mem5.lo \.
6980: 20 20 20 20 20 20 20 6d 65 6d 6a 6f 75 72 6e 61 memjourna
6990: 6c 2e 6c 6f 20 5c 0a 20 20 20 20 20 20 20 20 20 l.lo \.
69a0: 6d 75 74 65 78 2e 6c 6f 20 6d 75 74 65 78 5f 6e mutex.lo mutex_n
69b0: 6f 6f 70 2e 6c 6f 20 6d 75 74 65 78 5f 75 6e 69 oop.lo mutex_uni
69c0: 78 2e 6c 6f 20 6d 75 74 65 78 5f 77 33 32 2e 6c x.lo mutex_w32.l
69d0: 6f 20 5c 0a 20 20 20 20 20 20 20 20 20 6e 6f 74 o \. not
69e0: 69 66 79 2e 6c 6f 20 6f 70 63 6f 64 65 73 2e 6c ify.lo opcodes.l
69f0: 6f 20 6f 73 2e 6c 6f 20 6f 73 5f 75 6e 69 78 2e o os.lo os_unix.
6a00: 6c 6f 20 6f 73 5f 77 69 6e 2e 6c 6f 20 5c 0a 20 lo os_win.lo \.
6a10: 20 20 20 20 20 20 20 20 70 61 67 65 72 2e 6c 6f pager.lo
6a20: 20 70 63 61 63 68 65 2e 6c 6f 20 70 63 61 63 68 pcache.lo pcach
6a30: 65 31 2e 6c 6f 20 70 72 61 67 6d 61 2e 6c 6f 20 e1.lo pragma.lo
6a40: 70 72 65 70 61 72 65 2e 6c 6f 20 70 72 69 6e 74 prepare.lo print
6a50: 66 2e 6c 6f 20 5c 0a 20 20 20 20 20 20 20 20 20 f.lo \.
6a60: 72 61 6e 64 6f 6d 2e 6c 6f 20 72 65 73 6f 6c 76 random.lo resolv
6a70: 65 2e 6c 6f 20 72 6f 77 73 65 74 2e 6c 6f 20 72 e.lo rowset.lo r
6a80: 74 72 65 65 2e 6c 6f 20 5c 0a 20 20 20 20 20 20 tree.lo \.
6a90: 20 20 20 73 71 6c 69 74 65 33 73 65 73 73 69 6f sqlite3sessio
6aa0: 6e 2e 6c 6f 20 73 65 6c 65 63 74 2e 6c 6f 20 73 n.lo select.lo s
6ab0: 71 6c 69 74 65 33 72 62 75 2e 6c 6f 20 73 74 61 qlite3rbu.lo sta
6ac0: 74 75 73 2e 6c 6f 20 5c 0a 20 20 20 20 20 20 20 tus.lo \.
6ad0: 20 20 74 61 62 6c 65 2e 6c 6f 20 74 68 72 65 61 table.lo threa
6ae0: 64 73 2e 6c 6f 20 74 6f 6b 65 6e 69 7a 65 2e 6c ds.lo tokenize.l
6af0: 6f 20 74 72 65 65 76 69 65 77 2e 6c 6f 20 74 72 o treeview.lo tr
6b00: 69 67 67 65 72 2e 6c 6f 20 5c 0a 20 20 20 20 20 igger.lo \.
6b10: 20 20 20 20 75 70 64 61 74 65 2e 6c 6f 20 75 74 update.lo ut
6b20: 69 6c 2e 6c 6f 20 76 61 63 75 75 6d 2e 6c 6f 20 il.lo vacuum.lo
6b30: 5c 0a 20 20 20 20 20 20 20 20 20 76 64 62 65 61 \. vdbea
6b40: 70 69 2e 6c 6f 20 76 64 62 65 61 75 78 2e 6c 6f pi.lo vdbeaux.lo
6b50: 20 76 64 62 65 62 6c 6f 62 2e 6c 6f 20 76 64 62 vdbeblob.lo vdb
6b60: 65 6d 65 6d 2e 6c 6f 20 76 64 62 65 73 6f 72 74 emem.lo vdbesort
6b70: 2e 6c 6f 20 5c 0a 20 20 20 20 20 20 20 20 20 76 .lo \. v
6b80: 64 62 65 74 72 61 63 65 2e 6c 6f 20 77 61 6c 2e dbetrace.lo wal.
6b90: 6c 6f 20 77 61 6c 6b 65 72 2e 6c 6f 20 77 68 65 lo walker.lo whe
6ba0: 72 65 2e 6c 6f 20 77 68 65 72 65 63 6f 64 65 2e re.lo wherecode.
6bb0: 6c 6f 20 77 68 65 72 65 65 78 70 72 2e 6c 6f 20 lo whereexpr.lo
6bc0: 5c 0a 20 20 20 20 20 20 20 20 20 75 74 66 2e 6c \. utf.l
6bd0: 6f 20 76 74 61 62 2e 6c 6f 0a 23 20 3c 3c 2f 6d o vtab.lo.# <</m
6be0: 61 72 6b 3e 3e 0a 0a 23 20 4f 62 6a 65 63 74 20 ark>>..# Object
6bf0: 66 69 6c 65 73 20 66 6f 72 20 74 68 65 20 61 6d files for the am
6c00: 61 6c 67 61 6d 61 74 69 6f 6e 2e 0a 23 0a 4c 49 algamation..#.LI
6c10: 42 4f 42 4a 53 31 20 3d 20 73 71 6c 69 74 65 33 BOBJS1 = sqlite3
6c20: 2e 6c 6f 0a 0a 23 20 44 65 74 65 72 6d 69 6e 65 .lo..# Determine
6c30: 20 74 68 65 20 72 65 61 6c 20 76 61 6c 75 65 20 the real value
6c40: 6f 66 20 4c 49 42 4f 42 4a 20 62 61 73 65 64 20 of LIBOBJ based
6c50: 6f 6e 20 74 68 65 20 27 63 6f 6e 66 69 67 75 72 on the 'configur
6c60: 65 27 20 73 63 72 69 70 74 0a 23 0a 23 20 3c 3c e' script.#.# <<
6c70: 6d 61 72 6b 3e 3e 0a 21 49 46 20 24 28 55 53 45 mark>>.!IF $(USE 6c80: 5f 41 4d 41 4c 47 41 4d 41 54 49 4f 4e 29 3d 3d _AMALGAMATION)== 6c90: 30 0a 4c 49 42 4f 42 4a 20 3d 20 24 28 4c 49 42 0.LIBOBJ =$(LIB
6ca0: 4f 42 4a 53 30 29 0a 21 45 4c 53 45 0a 23 20 3c OBJS0).!ELSE.# <
6cb0: 3c 2f 6d 61 72 6b 3e 3e 0a 4c 49 42 4f 42 4a 20 </mark>>.LIBOBJ
6cc0: 3d 20 24 28 4c 49 42 4f 42 4a 53 31 29 0a 23 20 = $(LIBOBJS1).# 6cd0: 3c 3c 6d 61 72 6b 3e 3e 0a 21 45 4e 44 49 46 0a <<mark>>.!ENDIF. 6ce0: 23 20 3c 3c 2f 6d 61 72 6b 3e 3e 0a 0a 23 20 44 # <</mark>>..# D 6cf0: 65 74 65 72 6d 69 6e 65 20 69 66 20 65 6d 62 65 etermine if embe 6d00: 64 64 65 64 20 72 65 73 6f 75 72 63 65 20 63 6f dded resource co 6d10: 6d 70 69 6c 61 74 69 6f 6e 20 61 6e 64 20 75 73 mpilation and us 6d20: 61 67 65 20 61 72 65 20 65 6e 61 62 6c 65 64 2e age are enabled. 6d30: 0a 23 0a 21 49 46 20 24 28 55 53 45 5f 52 43 29 .#.!IF$(USE_RC)
6d40: 21 3d 30 0a 4c 49 42 52 45 53 4f 42 4a 53 20 3d !=0.LIBRESOBJS =
6d50: 20 73 71 6c 69 74 65 33 72 65 73 2e 6c 6f 0a 21 sqlite3res.lo.!
6d60: 45 4c 53 45 0a 4c 49 42 52 45 53 4f 42 4a 53 20 ELSE.LIBRESOBJS
6d70: 3d 0a 21 45 4e 44 49 46 0a 0a 23 20 3c 3c 6d 61 =.!ENDIF..# <<ma
6d80: 72 6b 3e 3e 0a 23 20 43 6f 72 65 20 73 6f 75 72 rk>>.# Core sour
6d90: 63 65 20 63 6f 64 65 20 66 69 6c 65 73 2c 20 70 ce code files, p
6da0: 61 72 74 20 31 2e 0a 23 0a 53 52 43 30 30 20 3d art 1..#.SRC00 =
6db0: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \. $(TOP)\src\ 6dc0: 61 6c 74 65 72 2e 63 20 5c 0a 20 20 24 28 54 4f alter.c \.$(TO
6dd0: 50 29 5c 73 72 63 5c 61 6e 61 6c 79 7a 65 2e 63 P)\src\analyze.c
6de0: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \. $(TOP)\src\ 6df0: 61 74 74 61 63 68 2e 63 20 5c 0a 20 20 24 28 54 attach.c \.$(T
6e00: 4f 50 29 5c 73 72 63 5c 61 75 74 68 2e 63 20 5c OP)\src\auth.c \
6e10: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 62 61 . $(TOP)\src\ba 6e20: 63 6b 75 70 2e 63 20 5c 0a 20 20 24 28 54 4f 50 ckup.c \.$(TOP
6e30: 29 5c 73 72 63 5c 62 69 74 76 65 63 2e 63 20 5c )\src\bitvec.c \
6e40: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 62 74 . $(TOP)\src\bt 6e50: 6d 75 74 65 78 2e 63 20 5c 0a 20 20 24 28 54 4f mutex.c \.$(TO
6e60: 50 29 5c 73 72 63 5c 62 74 72 65 65 2e 63 20 5c P)\src\btree.c \
6e70: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 62 75 . $(TOP)\src\bu 6e80: 69 6c 64 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 ild.c \.$(TOP)
6e90: 5c 73 72 63 5c 63 61 6c 6c 62 61 63 6b 2e 63 20 \src\callback.c
6ea0: 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 63 \. $(TOP)\src\c 6eb0: 6f 6d 70 6c 65 74 65 2e 63 20 5c 0a 20 20 24 28 omplete.c \.$(
6ec0: 54 4f 50 29 5c 73 72 63 5c 63 74 69 6d 65 2e 63 TOP)\src\ctime.c
6ed0: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \. $(TOP)\src\ 6ee0: 64 61 74 65 2e 63 20 5c 0a 20 20 24 28 54 4f 50 date.c \.$(TOP
6ef0: 29 5c 73 72 63 5c 64 62 73 74 61 74 2e 63 20 5c )\src\dbstat.c \
6f00: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 64 65 . $(TOP)\src\de 6f10: 6c 65 74 65 2e 63 20 5c 0a 20 20 24 28 54 4f 50 lete.c \.$(TOP
6f20: 29 5c 73 72 63 5c 65 78 70 72 2e 63 20 5c 0a 20 )\src\expr.c \.
6f30: 20 24 28 54 4f 50 29 5c 73 72 63 5c 66 61 75 6c $(TOP)\src\faul 6f40: 74 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 t.c \.$(TOP)\s
6f50: 72 63 5c 66 6b 65 79 2e 63 20 5c 0a 20 20 24 28 rc\fkey.c \. $( 6f60: 54 4f 50 29 5c 73 72 63 5c 66 75 6e 63 2e 63 20 TOP)\src\func.c 6f70: 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 67 \.$(TOP)\src\g
6f80: 6c 6f 62 61 6c 2e 63 20 5c 0a 20 20 24 28 54 4f lobal.c \. $(TO 6f90: 50 29 5c 73 72 63 5c 68 61 73 68 2e 63 20 5c 0a P)\src\hash.c \. 6fa0: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 69 6e 73$(TOP)\src\ins
6fb0: 65 72 74 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 ert.c \. $(TOP) 6fc0: 5c 73 72 63 5c 6a 6f 75 72 6e 61 6c 2e 63 20 5c \src\journal.c \ 6fd0: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 6c 65 .$(TOP)\src\le
6fe0: 67 61 63 79 2e 63 20 5c 0a 20 20 24 28 54 4f 50 gacy.c \. $(TOP 6ff0: 29 5c 73 72 63 5c 6c 6f 61 64 65 78 74 2e 63 20 )\src\loadext.c 7000: 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 6d \.$(TOP)\src\m
7010: 61 69 6e 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 ain.c \. $(TOP) 7020: 5c 73 72 63 5c 6d 61 6c 6c 6f 63 2e 63 20 5c 0a \src\malloc.c \. 7030: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 6d 65 6d$(TOP)\src\mem
7040: 30 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 0.c \. $(TOP)\s 7050: 72 63 5c 6d 65 6d 31 2e 63 20 5c 0a 20 20 24 28 rc\mem1.c \.$(
7060: 54 4f 50 29 5c 73 72 63 5c 6d 65 6d 32 2e 63 20 TOP)\src\mem2.c
7070: 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 6d \. $(TOP)\src\m 7080: 65 6d 33 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 em3.c \.$(TOP)
7090: 5c 73 72 63 5c 6d 65 6d 35 2e 63 20 5c 0a 20 20 \src\mem5.c \.
70a0: 24 28 54 4f 50 29 5c 73 72 63 5c 6d 65 6d 6a 6f $(TOP)\src\memjo 70b0: 75 72 6e 61 6c 2e 63 20 5c 0a 20 20 24 28 54 4f urnal.c \.$(TO
70c0: 50 29 5c 73 72 63 5c 6d 75 74 65 78 2e 63 20 5c P)\src\mutex.c \
70d0: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 6d 75 . $(TOP)\src\mu 70e0: 74 65 78 5f 6e 6f 6f 70 2e 63 20 5c 0a 20 20 24 tex_noop.c \.$
70f0: 28 54 4f 50 29 5c 73 72 63 5c 6d 75 74 65 78 5f (TOP)\src\mutex_
7100: 75 6e 69 78 2e 63 20 5c 0a 20 20 24 28 54 4f 50 unix.c \. $(TOP 7110: 29 5c 73 72 63 5c 6d 75 74 65 78 5f 77 33 32 2e )\src\mutex_w32. 7120: 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 c \.$(TOP)\src
7130: 5c 6e 6f 74 69 66 79 2e 63 20 5c 0a 20 20 24 28 \notify.c \. $( 7140: 54 4f 50 29 5c 73 72 63 5c 6f 73 2e 63 20 5c 0a TOP)\src\os.c \. 7150: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 6f 73 5f$(TOP)\src\os_
7160: 75 6e 69 78 2e 63 20 5c 0a 20 20 24 28 54 4f 50 unix.c \. $(TOP 7170: 29 5c 73 72 63 5c 6f 73 5f 77 69 6e 2e 63 0a 0a )\src\os_win.c.. 7180: 23 20 43 6f 72 65 20 73 6f 75 72 63 65 20 63 6f # Core source co 7190: 64 65 20 66 69 6c 65 73 2c 20 70 61 72 74 20 32 de files, part 2 71a0: 2e 0a 23 0a 53 52 43 30 31 20 3d 20 5c 0a 20 20 ..#.SRC01 = \. 71b0: 24 28 54 4f 50 29 5c 73 72 63 5c 70 61 67 65 72$(TOP)\src\pager
71c0: 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 .c \. $(TOP)\sr 71d0: 63 5c 70 63 61 63 68 65 2e 63 20 5c 0a 20 20 24 c\pcache.c \.$
71e0: 28 54 4f 50 29 5c 73 72 63 5c 70 63 61 63 68 65 (TOP)\src\pcache
71f0: 31 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 1.c \. $(TOP)\s 7200: 72 63 5c 70 72 61 67 6d 61 2e 63 20 5c 0a 20 20 rc\pragma.c \. 7210: 24 28 54 4f 50 29 5c 73 72 63 5c 70 72 65 70 61$(TOP)\src\prepa
7220: 72 65 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c re.c \. $(TOP)\ 7230: 73 72 63 5c 70 72 69 6e 74 66 2e 63 20 5c 0a 20 src\printf.c \. 7240: 20 24 28 54 4f 50 29 5c 73 72 63 5c 72 61 6e 64$(TOP)\src\rand
7250: 6f 6d 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c om.c \. $(TOP)\ 7260: 73 72 63 5c 72 65 73 6f 6c 76 65 2e 63 20 5c 0a src\resolve.c \. 7270: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 72 6f 77$(TOP)\src\row
7280: 73 65 74 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 set.c \. $(TOP) 7290: 5c 73 72 63 5c 73 65 6c 65 63 74 2e 63 20 5c 0a \src\select.c \. 72a0: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 73 74 61$(TOP)\src\sta
72b0: 74 75 73 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 tus.c \. $(TOP) 72c0: 5c 73 72 63 5c 74 61 62 6c 65 2e 63 20 5c 0a 20 \src\table.c \. 72d0: 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 68 72 65$(TOP)\src\thre
72e0: 61 64 73 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 ads.c \. $(TOP) 72f0: 5c 73 72 63 5c 74 63 6c 73 71 6c 69 74 65 2e 63 \src\tclsqlite.c 7300: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \.$(TOP)\src\
7310: 74 6f 6b 65 6e 69 7a 65 2e 63 20 5c 0a 20 20 24 tokenize.c \. $7320: 28 54 4f 50 29 5c 73 72 63 5c 74 72 65 65 76 69 (TOP)\src\treevi 7330: 65 77 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c ew.c \.$(TOP)\
7340: 73 72 63 5c 74 72 69 67 67 65 72 2e 63 20 5c 0a src\trigger.c \.
7350: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 75 74 66 $(TOP)\src\utf 7360: 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 .c \.$(TOP)\sr
7370: 63 5c 75 70 64 61 74 65 2e 63 20 5c 0a 20 20 24 c\update.c \. $7380: 28 54 4f 50 29 5c 73 72 63 5c 75 74 69 6c 2e 63 (TOP)\src\util.c 7390: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \.$(TOP)\src\
73a0: 76 61 63 75 75 6d 2e 63 20 5c 0a 20 20 24 28 54 vacuum.c \. $(T 73b0: 4f 50 29 5c 73 72 63 5c 76 64 62 65 2e 63 20 5c OP)\src\vdbe.c \ 73c0: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 76 64 .$(TOP)\src\vd
73d0: 62 65 61 70 69 2e 63 20 5c 0a 20 20 24 28 54 4f beapi.c \. $(TO 73e0: 50 29 5c 73 72 63 5c 76 64 62 65 61 75 78 2e 63 P)\src\vdbeaux.c 73f0: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \.$(TOP)\src\
7400: 76 64 62 65 62 6c 6f 62 2e 63 20 5c 0a 20 20 24 vdbeblob.c \. $7410: 28 54 4f 50 29 5c 73 72 63 5c 76 64 62 65 6d 65 (TOP)\src\vdbeme 7420: 6d 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 m.c \.$(TOP)\s
7430: 72 63 5c 76 64 62 65 73 6f 72 74 2e 63 20 5c 0a rc\vdbesort.c \.
7440: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 76 64 62 $(TOP)\src\vdb 7450: 65 74 72 61 63 65 2e 63 20 5c 0a 20 20 24 28 54 etrace.c \.$(T
7460: 4f 50 29 5c 73 72 63 5c 76 74 61 62 2e 63 20 5c OP)\src\vtab.c \
7470: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 77 61 . $(TOP)\src\wa 7480: 6c 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 l.c \.$(TOP)\s
7490: 72 63 5c 77 61 6c 6b 65 72 2e 63 20 5c 0a 20 20 rc\walker.c \.
74a0: 24 28 54 4f 50 29 5c 73 72 63 5c 77 68 65 72 65 $(TOP)\src\where 74b0: 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 .c \.$(TOP)\sr
74c0: 63 5c 77 68 65 72 65 63 6f 64 65 2e 63 20 5c 0a c\wherecode.c \.
74d0: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 77 68 65 $(TOP)\src\whe 74e0: 72 65 65 78 70 72 2e 63 0a 0a 23 20 53 68 65 6c reexpr.c..# Shel 74f0: 6c 20 73 6f 75 72 63 65 20 63 6f 64 65 20 66 69 l source code fi 7500: 6c 65 73 2e 0a 23 0a 53 52 43 30 32 20 3d 20 5c les..#.SRC02 = \ 7510: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 73 68 .$(TOP)\src\sh
7520: 65 6c 6c 2e 63 0a 0a 23 20 43 6f 72 65 20 6d 69 ell.c..# Core mi
7530: 73 63 65 6c 6c 61 6e 65 6f 75 73 20 66 69 6c 65 scellaneous file
7540: 73 2e 0a 23 0a 53 52 43 30 33 20 3d 20 5c 0a 20 s..#.SRC03 = \.
7550: 20 24 28 54 4f 50 29 5c 73 72 63 5c 70 61 72 73 $(TOP)\src\pars 7560: 65 2e 79 0a 0a 23 20 43 6f 72 65 20 68 65 61 64 e.y..# Core head 7570: 65 72 20 66 69 6c 65 73 2c 20 70 61 72 74 20 31 er files, part 1 7580: 2e 0a 23 0a 53 52 43 30 34 20 3d 20 5c 0a 20 20 ..#.SRC04 = \. 7590: 24 28 54 4f 50 29 5c 73 72 63 5c 62 74 72 65 65$(TOP)\src\btree
75a0: 2e 68 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 .h \. $(TOP)\sr 75b0: 63 5c 62 74 72 65 65 49 6e 74 2e 68 20 5c 0a 20 c\btreeInt.h \. 75c0: 20 24 28 54 4f 50 29 5c 73 72 63 5c 68 61 73 68$(TOP)\src\hash
75d0: 2e 68 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 .h \. $(TOP)\sr 75e0: 63 5c 68 77 74 69 6d 65 2e 68 20 5c 0a 20 20 24 c\hwtime.h \.$
75f0: 28 54 4f 50 29 5c 73 72 63 5c 6d 73 76 63 2e 68 (TOP)\src\msvc.h
7600: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \. $(TOP)\src\ 7610: 6d 75 74 65 78 2e 68 20 5c 0a 20 20 24 28 54 4f mutex.h \.$(TO
7620: 50 29 5c 73 72 63 5c 6f 73 2e 68 20 5c 0a 20 20 P)\src\os.h \.
7630: 24 28 54 4f 50 29 5c 73 72 63 5c 6f 73 5f 63 6f $(TOP)\src\os_co 7640: 6d 6d 6f 6e 2e 68 20 5c 0a 20 20 24 28 54 4f 50 mmon.h \.$(TOP
7650: 29 5c 73 72 63 5c 6f 73 5f 73 65 74 75 70 2e 68 )\src\os_setup.h
7660: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \. $(TOP)\src\ 7670: 6f 73 5f 77 69 6e 2e 68 0a 0a 23 20 43 6f 72 65 os_win.h..# Core 7680: 20 68 65 61 64 65 72 20 66 69 6c 65 73 2c 20 70 header files, p 7690: 61 72 74 20 32 2e 0a 23 0a 53 52 43 30 35 20 3d art 2..#.SRC05 = 76a0: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \.$(TOP)\src\
76b0: 70 61 67 65 72 2e 68 20 5c 0a 20 20 24 28 54 4f pager.h \. $(TO 76c0: 50 29 5c 73 72 63 5c 70 63 61 63 68 65 2e 68 20 P)\src\pcache.h 76d0: 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 70 \.$(TOP)\src\p
76e0: 72 61 67 6d 61 2e 68 20 5c 0a 20 20 24 28 54 4f ragma.h \. $(TO 76f0: 50 29 5c 73 72 63 5c 73 71 6c 69 74 65 2e 68 2e P)\src\sqlite.h. 7700: 69 6e 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 in \.$(TOP)\sr
7710: 63 5c 73 71 6c 69 74 65 33 65 78 74 2e 68 20 5c c\sqlite3ext.h \
7720: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 73 71 . $(TOP)\src\sq 7730: 6c 69 74 65 49 6e 74 2e 68 20 5c 0a 20 20 24 28 liteInt.h \.$(
7740: 54 4f 50 29 5c 73 72 63 5c 73 71 6c 69 74 65 4c TOP)\src\sqliteL
7750: 69 6d 69 74 2e 68 20 5c 0a 20 20 24 28 54 4f 50 imit.h \. $(TOP 7760: 29 5c 73 72 63 5c 76 64 62 65 2e 68 20 5c 0a 20 )\src\vdbe.h \. 7770: 20 24 28 54 4f 50 29 5c 73 72 63 5c 76 64 62 65$(TOP)\src\vdbe
7780: 49 6e 74 2e 68 20 5c 0a 20 20 24 28 54 4f 50 29 Int.h \. $(TOP) 7790: 5c 73 72 63 5c 76 78 77 6f 72 6b 73 2e 68 20 5c \src\vxworks.h \ 77a0: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 77 61 .$(TOP)\src\wa
77b0: 6c 2e 68 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 l.h \. $(TOP)\s 77c0: 72 63 5c 77 68 65 72 65 49 6e 74 2e 68 0a 0a 23 rc\whereInt.h..# 77d0: 20 45 78 74 65 6e 73 69 6f 6e 20 73 6f 75 72 63 Extension sourc 77e0: 65 20 63 6f 64 65 20 66 69 6c 65 73 2c 20 70 61 e code files, pa 77f0: 72 74 20 31 2e 0a 23 0a 53 52 43 30 36 20 3d 20 rt 1..#.SRC06 = 7800: 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 \.$(TOP)\ext\f
7810: 74 73 31 5c 66 74 73 31 2e 63 20 5c 0a 20 20 24 ts1\fts1.c \. $7820: 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 31 5c 66 (TOP)\ext\fts1\f 7830: 74 73 31 5f 68 61 73 68 2e 63 20 5c 0a 20 20 24 ts1_hash.c \.$
7840: 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 31 5c 66 (TOP)\ext\fts1\f
7850: 74 73 31 5f 70 6f 72 74 65 72 2e 63 20 5c 0a 20 ts1_porter.c \.
7860: 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 31 $(TOP)\ext\fts1 7870: 5c 66 74 73 31 5f 74 6f 6b 65 6e 69 7a 65 72 31 \fts1_tokenizer1 7880: 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 .c \.$(TOP)\ex
7890: 74 5c 66 74 73 32 5c 66 74 73 32 2e 63 20 5c 0a t\fts2\fts2.c \.
78a0: 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 $(TOP)\ext\fts 78b0: 32 5c 66 74 73 32 5f 68 61 73 68 2e 63 20 5c 0a 2\fts2_hash.c \. 78c0: 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73$(TOP)\ext\fts
78d0: 32 5c 66 74 73 32 5f 69 63 75 2e 63 20 5c 0a 20 2\fts2_icu.c \.
78e0: 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 32 $(TOP)\ext\fts2 78f0: 5c 66 74 73 32 5f 70 6f 72 74 65 72 2e 63 20 5c \fts2_porter.c \ 7900: 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 .$(TOP)\ext\ft
7910: 73 32 5c 66 74 73 32 5f 74 6f 6b 65 6e 69 7a 65 s2\fts2_tokenize
7920: 72 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 65 r.c \. $(TOP)\e 7930: 78 74 5c 66 74 73 32 5c 66 74 73 32 5f 74 6f 6b xt\fts2\fts2_tok 7940: 65 6e 69 7a 65 72 31 2e 63 0a 0a 23 20 45 78 74 enizer1.c..# Ext 7950: 65 6e 73 69 6f 6e 20 73 6f 75 72 63 65 20 63 6f ension source co 7960: 64 65 20 66 69 6c 65 73 2c 20 70 61 72 74 20 32 de files, part 2 7970: 2e 0a 23 0a 53 52 43 30 37 20 3d 20 5c 0a 20 20 ..#.SRC07 = \. 7980: 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 33 5c$(TOP)\ext\fts3\
7990: 66 74 73 33 2e 63 20 5c 0a 20 20 24 28 54 4f 50 fts3.c \. $(TOP 79a0: 29 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f )\ext\fts3\fts3_ 79b0: 61 75 78 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 aux.c \.$(TOP)
79c0: 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 65 \ext\fts3\fts3_e
79d0: 78 70 72 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 xpr.c \. $(TOP) 79e0: 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 68 \ext\fts3\fts3_h 79f0: 61 73 68 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 ash.c \.$(TOP)
7a00: 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 69 \ext\fts3\fts3_i
7a10: 63 75 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c cu.c \. $(TOP)\ 7a20: 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 70 6f ext\fts3\fts3_po 7a30: 72 74 65 72 2e 63 20 5c 0a 20 20 24 28 54 4f 50 rter.c \.$(TOP
7a40: 29 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f )\ext\fts3\fts3_
7a50: 73 6e 69 70 70 65 74 2e 63 20 5c 0a 20 20 24 28 snippet.c \. $( 7a60: 54 4f 50 29 5c 65 78 74 5c 66 74 73 33 5c 66 74 TOP)\ext\fts3\ft 7a70: 73 33 5f 74 6f 6b 65 6e 69 7a 65 72 2e 63 20 5c s3_tokenizer.c \ 7a80: 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 .$(TOP)\ext\ft
7a90: 73 33 5c 66 74 73 33 5f 74 6f 6b 65 6e 69 7a 65 s3\fts3_tokenize
7aa0: 72 31 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c r1.c \. $(TOP)\ 7ab0: 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 74 6f ext\fts3\fts3_to 7ac0: 6b 65 6e 69 7a 65 5f 76 74 61 62 2e 63 20 5c 0a kenize_vtab.c \. 7ad0: 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73$(TOP)\ext\fts
7ae0: 33 5c 66 74 73 33 5f 75 6e 69 63 6f 64 65 2e 63 3\fts3_unicode.c
7af0: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c \. $(TOP)\ext\ 7b00: 66 74 73 33 5c 66 74 73 33 5f 75 6e 69 63 6f 64 fts3\fts3_unicod 7b10: 65 32 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c e2.c \.$(TOP)\
7b20: 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 77 72 ext\fts3\fts3_wr
7b30: 69 74 65 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 ite.c \. $(TOP) 7b40: 5c 65 78 74 5c 69 63 75 5c 69 63 75 2e 63 20 5c \ext\icu\icu.c \ 7b50: 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 72 74 .$(TOP)\ext\rt
7b60: 72 65 65 5c 72 74 72 65 65 2e 63 20 5c 0a 20 20 ree\rtree.c \.
7b70: 24 28 54 4f 50 29 5c 65 78 74 5c 73 65 73 73 69 $(TOP)\ext\sessi 7b80: 6f 6e 5c 73 71 6c 69 74 65 33 73 65 73 73 69 6f on\sqlite3sessio 7b90: 6e 2e 68 20 5c 0a 20 20 24 28 54 4f 50 29 5c 65 n.h \.$(TOP)\e
7ba0: 78 74 5c 73 65 73 73 69 6f 6e 5c 73 71 6c 69 74 xt\session\sqlit
7bb0: 65 33 73 65 73 73 69 6f 6e 2e 63 20 5c 0a 20 20 e3session.c \.
7bc0: 24 28 54 4f 50 29 5c 65 78 74 5c 72 62 75 5c 73 $(TOP)\ext\rbu\s 7bd0: 71 6c 69 74 65 33 72 62 75 2e 68 20 5c 0a 20 20 qlite3rbu.h \. 7be0: 24 28 54 4f 50 29 5c 65 78 74 5c 72 62 75 5c 73$(TOP)\ext\rbu\s
7bf0: 71 6c 69 74 65 33 72 62 75 2e 63 20 5c 0a 20 20 qlite3rbu.c \.
7c00: 24 28 54 4f 50 29 5c 65 78 74 5c 6d 69 73 63 5c $(TOP)\ext\misc\ 7c10: 6a 73 6f 6e 31 2e 63 0a 0a 23 20 45 78 74 65 6e json1.c..# Exten 7c20: 73 69 6f 6e 20 68 65 61 64 65 72 20 66 69 6c 65 sion header file 7c30: 73 2c 20 70 61 72 74 20 31 2e 0a 23 0a 53 52 43 s, part 1..#.SRC 7c40: 30 38 20 3d 20 5c 0a 20 20 24 28 54 4f 50 29 5c 08 = \.$(TOP)\
7c50: 65 78 74 5c 66 74 73 31 5c 66 74 73 31 2e 68 20 ext\fts1\fts1.h
7c60: 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 \. $(TOP)\ext\f 7c70: 74 73 31 5c 66 74 73 31 5f 68 61 73 68 2e 68 20 ts1\fts1_hash.h 7c80: 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 \.$(TOP)\ext\f
7c90: 74 73 31 5c 66 74 73 31 5f 74 6f 6b 65 6e 69 7a ts1\fts1_tokeniz
7ca0: 65 72 2e 68 20 5c 0a 20 20 24 28 54 4f 50 29 5c er.h \. $(TOP)\ 7cb0: 65 78 74 5c 66 74 73 32 5c 66 74 73 32 2e 68 20 ext\fts2\fts2.h 7cc0: 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 \.$(TOP)\ext\f
7cd0: 74 73 32 5c 66 74 73 32 5f 68 61 73 68 2e 68 20 ts2\fts2_hash.h
7ce0: 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 \. $(TOP)\ext\f 7cf0: 74 73 32 5c 66 74 73 32 5f 74 6f 6b 65 6e 69 7a ts2\fts2_tokeniz 7d00: 65 72 2e 68 0a 0a 23 20 45 78 74 65 6e 73 69 6f er.h..# Extensio 7d10: 6e 20 68 65 61 64 65 72 20 66 69 6c 65 73 2c 20 n header files, 7d20: 70 61 72 74 20 32 2e 0a 23 0a 53 52 43 30 39 20 part 2..#.SRC09 7d30: 3d 20 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 = \.$(TOP)\ext
7d40: 5c 66 74 73 33 5c 66 74 73 33 2e 68 20 5c 0a 20 \fts3\fts3.h \.
7d50: 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 33 $(TOP)\ext\fts3 7d60: 5c 66 74 73 33 49 6e 74 2e 68 20 5c 0a 20 20 24 \fts3Int.h \.$
7d70: 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 33 5c 66 (TOP)\ext\fts3\f
7d80: 74 73 33 5f 68 61 73 68 2e 68 20 5c 0a 20 20 24 ts3_hash.h \. $7d90: 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 33 5c 66 (TOP)\ext\fts3\f 7da0: 74 73 33 5f 74 6f 6b 65 6e 69 7a 65 72 2e 68 20 ts3_tokenizer.h 7db0: 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 69 \.$(TOP)\ext\i
7dc0: 63 75 5c 73 71 6c 69 74 65 69 63 75 2e 68 20 5c cu\sqliteicu.h \
7dd0: 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 72 74 . $(TOP)\ext\rt 7de0: 72 65 65 5c 72 74 72 65 65 2e 68 20 5c 0a 20 20 ree\rtree.h \. 7df0: 24 28 54 4f 50 29 5c 65 78 74 5c 72 62 75 5c 73$(TOP)\ext\rbu\s
7e00: 71 6c 69 74 65 33 72 62 75 2e 68 0a 0a 23 20 47 qlite3rbu.h..# G
7e10: 65 6e 65 72 61 74 65 64 20 73 6f 75 72 63 65 20 enerated source
7e20: 63 6f 64 65 20 66 69 6c 65 73 0a 23 0a 53 52 43 code files.#.SRC
7e30: 31 30 20 3d 20 5c 0a 20 20 6f 70 63 6f 64 65 73 10 = \. opcodes
7e40: 2e 63 20 5c 0a 20 20 70 61 72 73 65 2e 63 0a 0a .c \. parse.c..
7e50: 23 20 47 65 6e 65 72 61 74 65 64 20 68 65 61 64 # Generated head
7e60: 65 72 20 66 69 6c 65 73 0a 23 0a 53 52 43 31 31 er files.#.SRC11
7e70: 20 3d 20 5c 0a 20 20 6b 65 79 77 6f 72 64 68 61 = \. keywordha
7e80: 73 68 2e 68 20 5c 0a 20 20 6f 70 63 6f 64 65 73 sh.h \. opcodes
7e90: 2e 68 20 5c 0a 20 20 70 61 72 73 65 2e 68 20 5c .h \. parse.h \
7ea0: 0a 20 20 24 28 53 51 4c 49 54 45 33 48 29 0a 0a . $(SQLITE3H).. 7eb0: 23 20 41 6c 6c 20 73 6f 75 72 63 65 20 63 6f 64 # All source cod 7ec0: 65 20 66 69 6c 65 73 2e 0a 23 0a 53 52 43 20 3d e files..#.SRC = 7ed0: 20 24 28 53 52 43 30 30 29 20 24 28 53 52 43 30$(SRC00) $(SRC0 7ee0: 31 29 20 24 28 53 52 43 30 32 29 20 24 28 53 52 1)$(SRC02) $(SR 7ef0: 43 30 33 29 20 24 28 53 52 43 30 34 29 20 24 28 C03)$(SRC04) $( 7f00: 53 52 43 30 35 29 20 24 28 53 52 43 30 36 29 20 SRC05)$(SRC06)
7f10: 24 28 53 52 43 30 37 29 20 24 28 53 52 43 30 38 $(SRC07)$(SRC08
7f20: 29 20 24 28 53 52 43 30 39 29 20 24 28 53 52 43 ) $(SRC09)$(SRC
7f30: 31 30 29 20 24 28 53 52 43 31 31 29 0a 0a 23 20 10) $(SRC11)..# 7f40: 53 6f 75 72 63 65 20 63 6f 64 65 20 74 6f 20 74 Source code to t 7f50: 68 65 20 74 65 73 74 20 66 69 6c 65 73 2e 0a 23 he test files..# 7f60: 0a 54 45 53 54 53 52 43 20 3d 20 5c 0a 20 20 24 .TESTSRC = \.$
7f70: 28 54 4f 50 29 5c 73 72 63 5c 74 65 73 74 31 2e (TOP)\src\test1.
7f80: 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 c \. $(TOP)\src 7f90: 5c 74 65 73 74 32 2e 63 20 5c 0a 20 20 24 28 54 \test2.c \.$(T
7fa0: 4f 50 29 5c 73 72 63 5c 74 65 73 74 33 2e 63 20 OP)\src\test3.c
7fb0: 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 \. $(TOP)\src\t 7fc0: 65 73 74 34 2e 63 20 5c 0a 20 20 24 28 54 4f 50 est4.c \.$(TOP
7fd0: 29 5c 73 72 63 5c 74 65 73 74 35 2e 63 20 5c 0a )\src\test5.c \.
7fe0: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 65 73 $(TOP)\src\tes 7ff0: 74 36 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c t6.c \.$(TOP)\
8000: 73 72 63 5c 74 65 73 74 37 2e 63 20 5c 0a 20 20 src\test7.c \.
8010: 24 28 54 4f 50 29 5c 73 72 63 5c 74 65 73 74 38 $(TOP)\src\test8 8020: 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 .c \.$(TOP)\sr
8030: 63 5c 74 65 73 74 39 2e 63 20 5c 0a 20 20 24 28 c\test9.c \. $( 8040: 54 4f 50 29 5c 73 72 63 5c 74 65 73 74 5f 61 75 TOP)\src\test_au 8050: 74 6f 65 78 74 2e 63 20 5c 0a 20 20 24 28 54 4f toext.c \.$(TO
8060: 50 29 5c 73 72 63 5c 74 65 73 74 5f 61 73 79 6e P)\src\test_asyn
8070: 63 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 c.c \. $(TOP)\s 8080: 72 63 5c 74 65 73 74 5f 62 61 63 6b 75 70 2e 63 rc\test_backup.c 8090: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \.$(TOP)\src\
80a0: 74 65 73 74 5f 62 6c 6f 62 2e 63 20 5c 0a 20 20 test_blob.c \.
80b0: 24 28 54 4f 50 29 5c 73 72 63 5c 74 65 73 74 5f $(TOP)\src\test_ 80c0: 62 74 72 65 65 2e 63 20 5c 0a 20 20 24 28 54 4f btree.c \.$(TO
80d0: 50 29 5c 73 72 63 5c 74 65 73 74 5f 63 6f 6e 66 P)\src\test_conf
80e0: 69 67 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c ig.c \. $(TOP)\ 80f0: 73 72 63 5c 74 65 73 74 5f 64 65 6d 6f 76 66 73 src\test_demovfs 8100: 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 .c \.$(TOP)\sr
8110: 63 5c 74 65 73 74 5f 64 65 76 73 79 6d 2e 63 20 c\test_devsym.c
8120: 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 \. $(TOP)\src\t 8130: 65 73 74 5f 66 73 2e 63 20 5c 0a 20 20 24 28 54 est_fs.c \.$(T
8140: 4f 50 29 5c 73 72 63 5c 74 65 73 74 5f 66 75 6e OP)\src\test_fun
8150: 63 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 c.c \. $(TOP)\s 8160: 72 63 5c 74 65 73 74 5f 68 65 78 69 6f 2e 63 20 rc\test_hexio.c 8170: 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 \.$(TOP)\src\t
8180: 65 73 74 5f 69 6e 69 74 2e 63 20 5c 0a 20 20 24 est_init.c \. $8190: 28 54 4f 50 29 5c 73 72 63 5c 74 65 73 74 5f 69 (TOP)\src\test_i 81a0: 6e 74 61 72 72 61 79 2e 63 20 5c 0a 20 20 24 28 ntarray.c \.$(
81b0: 54 4f 50 29 5c 73 72 63 5c 74 65 73 74 5f 6a 6f TOP)\src\test_jo
81c0: 75 72 6e 61 6c 2e 63 20 5c 0a 20 20 24 28 54 4f urnal.c \. $(TO 81d0: 50 29 5c 73 72 63 5c 74 65 73 74 5f 6d 61 6c 6c P)\src\test_mall 81e0: 6f 63 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c oc.c \.$(TOP)\
81f0: 73 72 63 5c 74 65 73 74 5f 6d 75 6c 74 69 70 6c src\test_multipl
8200: 65 78 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c ex.c \. $(TOP)\ 8210: 73 72 63 5c 74 65 73 74 5f 6d 75 74 65 78 2e 63 src\test_mutex.c 8220: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \.$(TOP)\src\
8230: 74 65 73 74 5f 6f 6e 65 66 69 6c 65 2e 63 20 5c test_onefile.c \
8240: 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 65 . $(TOP)\src\te 8250: 73 74 5f 6f 73 69 6e 73 74 2e 63 20 5c 0a 20 20 st_osinst.c \. 8260: 24 28 54 4f 50 29 5c 73 72 63 5c 74 65 73 74 5f$(TOP)\src\test_
8270: 70 63 61 63 68 65 2e 63 20 5c 0a 20 20 24 28 54 pcache.c \. $(T 8280: 4f 50 29 5c 73 72 63 5c 74 65 73 74 5f 71 75 6f OP)\src\test_quo 8290: 74 61 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c ta.c \.$(TOP)\
82a0: 73 72 63 5c 74 65 73 74 5f 72 74 72 65 65 2e 63 src\test_rtree.c
82b0: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \. $(TOP)\src\ 82c0: 74 65 73 74 5f 73 63 68 65 6d 61 2e 63 20 5c 0a test_schema.c \. 82d0: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 65 73$(TOP)\src\tes
82e0: 74 5f 73 65 72 76 65 72 2e 63 20 5c 0a 20 20 24 t_server.c \. $82f0: 28 54 4f 50 29 5c 73 72 63 5c 74 65 73 74 5f 73 (TOP)\src\test_s 8300: 75 70 65 72 6c 6f 63 6b 2e 63 20 5c 0a 20 20 24 uperlock.c \.$
8310: 28 54 4f 50 29 5c 73 72 63 5c 74 65 73 74 5f 73 (TOP)\src\test_s
8320: 79 73 63 61 6c 6c 2e 63 20 5c 0a 20 20 24 28 54 yscall.c \. $(T 8330: 4f 50 29 5c 73 72 63 5c 74 65 73 74 5f 74 63 6c OP)\src\test_tcl 8340: 76 61 72 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 var.c \.$(TOP)
8350: 5c 73 72 63 5c 74 65 73 74 5f 74 68 72 65 61 64 \src\test_thread
8360: 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 73 72 .c \. $(TOP)\sr 8370: 63 5c 74 65 73 74 5f 76 66 73 2e 63 20 5c 0a 20 c\test_vfs.c \. 8380: 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 65 73 74$(TOP)\src\test
8390: 5f 77 69 6e 64 69 72 65 6e 74 2e 63 20 5c 0a 20 _windirent.c \.
83a0: 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 65 73 74 $(TOP)\src\test 83b0: 5f 77 73 64 2e 63 20 5c 0a 20 20 24 28 54 4f 50 _wsd.c \.$(TOP
83c0: 29 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f )\ext\fts3\fts3_
83d0: 74 65 72 6d 2e 63 20 5c 0a 20 20 24 28 54 4f 50 term.c \. $(TOP 83e0: 29 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f )\ext\fts3\fts3_ 83f0: 74 65 73 74 2e 63 20 5c 0a 20 20 24 28 54 4f 50 test.c \.$(TOP
8400: 29 5c 65 78 74 5c 73 65 73 73 69 6f 6e 5c 74 65 )\ext\session\te
8410: 73 74 5f 73 65 73 73 69 6f 6e 2e 63 20 5c 0a 20 st_session.c \.
8420: 20 24 28 54 4f 50 29 5c 65 78 74 5c 72 62 75 5c $(TOP)\ext\rbu\ 8430: 74 65 73 74 5f 72 62 75 2e 63 0a 0a 23 20 53 74 test_rbu.c..# St 8440: 61 74 69 63 61 6c 6c 79 20 6c 69 6e 6b 65 64 20 atically linked 8450: 65 78 74 65 6e 73 69 6f 6e 73 2e 0a 23 0a 54 45 extensions..#.TE 8460: 53 54 45 58 54 20 3d 20 5c 0a 20 20 24 28 54 4f STEXT = \.$(TO
8470: 50 29 5c 65 78 74 5c 6d 69 73 63 5c 61 6d 61 74 P)\ext\misc\amat
8480: 63 68 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c ch.c \. $(TOP)\ 8490: 65 78 74 5c 6d 69 73 63 5c 63 6c 6f 73 75 72 65 ext\misc\closure 84a0: 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 .c \.$(TOP)\ex
84b0: 74 5c 6d 69 73 63 5c 65 76 61 6c 2e 63 20 5c 0a t\misc\eval.c \.
84c0: 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 6d 69 73 $(TOP)\ext\mis 84d0: 63 5c 66 69 6c 65 69 6f 2e 63 20 5c 0a 20 20 24 c\fileio.c \.$
84e0: 28 54 4f 50 29 5c 65 78 74 5c 6d 69 73 63 5c 66 (TOP)\ext\misc\f
84f0: 75 7a 7a 65 72 2e 63 20 5c 0a 20 20 24 28 54 4f uzzer.c \. $(TO 8500: 50 29 5c 65 78 74 5c 66 74 73 35 5c 66 74 73 35 P)\ext\fts5\fts5 8510: 5f 74 63 6c 2e 63 20 5c 0a 20 20 24 28 54 4f 50 _tcl.c \.$(TOP
8520: 29 5c 65 78 74 5c 66 74 73 35 5c 66 74 73 35 5f )\ext\fts5\fts5_
8530: 74 65 73 74 5f 6d 69 2e 63 20 5c 0a 20 20 24 28 test_mi.c \. $( 8540: 54 4f 50 29 5c 65 78 74 5c 66 74 73 35 5c 66 74 TOP)\ext\fts5\ft 8550: 73 35 5f 74 65 73 74 5f 74 6f 6b 2e 63 20 5c 0a s5_test_tok.c \. 8560: 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 6d 69 73$(TOP)\ext\mis
8570: 63 5c 69 65 65 65 37 35 34 2e 63 20 5c 0a 20 20 c\ieee754.c \.
8580: 24 28 54 4f 50 29 5c 65 78 74 5c 6d 69 73 63 5c $(TOP)\ext\misc\ 8590: 6e 65 78 74 63 68 61 72 2e 63 20 5c 0a 20 20 24 nextchar.c \.$
85a0: 28 54 4f 50 29 5c 65 78 74 5c 6d 69 73 63 5c 70 (TOP)\ext\misc\p
85b0: 65 72 63 65 6e 74 69 6c 65 2e 63 20 5c 0a 20 20 ercentile.c \.
85c0: 24 28 54 4f 50 29 5c 65 78 74 5c 6d 69 73 63 5c $(TOP)\ext\misc\ 85d0: 72 65 67 65 78 70 2e 63 20 5c 0a 20 20 24 28 54 regexp.c \.$(T
85e0: 4f 50 29 5c 65 78 74 5c 6d 69 73 63 5c 73 65 72 OP)\ext\misc\ser
85f0: 69 65 73 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 ies.c \. $(TOP) 8600: 5c 65 78 74 5c 6d 69 73 63 5c 73 70 65 6c 6c 66 \ext\misc\spellf 8610: 69 78 2e 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c ix.c \.$(TOP)\
8620: 65 78 74 5c 6d 69 73 63 5c 74 6f 74 79 70 65 2e ext\misc\totype.
8630: 63 20 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 c \. $(TOP)\ext 8640: 5c 6d 69 73 63 5c 77 68 6f 6c 65 6e 75 6d 62 65 \misc\wholenumbe 8650: 72 2e 63 0a 0a 23 20 53 6f 75 72 63 65 20 63 6f r.c..# Source co 8660: 64 65 20 74 6f 20 74 68 65 20 6c 69 62 72 61 72 de to the librar 8670: 79 20 66 69 6c 65 73 20 6e 65 65 64 65 64 20 62 y files needed b 8680: 79 20 74 68 65 20 74 65 73 74 20 66 69 78 74 75 y the test fixtu 8690: 72 65 0a 23 20 28 6e 6f 6e 2d 61 6d 61 6c 67 61 re.# (non-amalga 86a0: 6d 61 74 69 6f 6e 29 0a 23 0a 54 45 53 54 53 52 mation).#.TESTSR 86b0: 43 32 20 3d 20 5c 0a 20 20 24 28 53 52 43 30 30 C2 = \.$(SRC00
86c0: 29 20 5c 0a 20 20 24 28 53 52 43 30 31 29 20 5c ) \. $(SRC01) \ 86d0: 0a 20 20 24 28 53 52 43 30 36 29 20 5c 0a 20 20 .$(SRC06) \.
86e0: 24 28 53 52 43 30 37 29 20 5c 0a 20 20 24 28 53 $(SRC07) \.$(S
86f0: 52 43 31 30 29 20 5c 0a 20 20 24 28 54 4f 50 29 RC10) \. $(TOP) 8700: 5c 65 78 74 5c 61 73 79 6e 63 5c 73 71 6c 69 74 \ext\async\sqlit 8710: 65 33 61 73 79 6e 63 2e 63 20 5c 0a 20 20 24 28 e3async.c \.$(
8720: 54 4f 50 29 5c 65 78 74 5c 73 65 73 73 69 6f 6e TOP)\ext\session
8730: 5c 73 71 6c 69 74 65 33 73 65 73 73 69 6f 6e 2e \sqlite3session.
8740: 63 0a 0a 23 20 53 6f 75 72 63 65 20 63 6f 64 65 c..# Source code
8750: 20 74 6f 20 74 68 65 20 6c 69 62 72 61 72 79 20 to the library
8760: 66 69 6c 65 73 20 6e 65 65 64 65 64 20 62 79 20 files needed by
8770: 74 68 65 20 74 65 73 74 20 66 69 78 74 75 72 65 the test fixture
8780: 0a 23 20 28 61 6d 61 6c 67 61 6d 61 74 69 6f 6e .# (amalgamation
8790: 29 0a 23 0a 54 45 53 54 53 52 43 33 20 3d 0a 0a ).#.TESTSRC3 =..
87a0: 0a 23 20 48 65 61 64 65 72 20 66 69 6c 65 73 20 .# Header files
87b0: 75 73 65 64 20 62 79 20 61 6c 6c 20 6c 69 62 72 used by all libr
87c0: 61 72 79 20 73 6f 75 72 63 65 20 66 69 6c 65 73 ary source files
87d0: 2e 0a 23 0a 48 44 52 20 3d 20 5c 0a 20 20 20 24 ..#.HDR = \. $87e0: 28 54 4f 50 29 5c 73 72 63 5c 62 74 72 65 65 2e (TOP)\src\btree. 87f0: 68 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c 73 72 h \.$(TOP)\sr
8800: 63 5c 62 74 72 65 65 49 6e 74 2e 68 20 5c 0a 20 c\btreeInt.h \.
8810: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 68 61 73 $(TOP)\src\has 8820: 68 2e 68 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c h.h \.$(TOP)\
8830: 73 72 63 5c 68 77 74 69 6d 65 2e 68 20 5c 0a 20 src\hwtime.h \.
8840: 20 20 6b 65 79 77 6f 72 64 68 61 73 68 2e 68 20 keywordhash.h
8850: 5c 0a 20 20 20 24 28 54 4f 50 29 5c 73 72 63 5c \. $(TOP)\src\ 8860: 6d 73 76 63 2e 68 20 5c 0a 20 20 20 24 28 54 4f msvc.h \.$(TO
8870: 50 29 5c 73 72 63 5c 6d 75 74 65 78 2e 68 20 5c P)\src\mutex.h \
8880: 0a 20 20 20 6f 70 63 6f 64 65 73 2e 68 20 5c 0a . opcodes.h \.
8890: 20 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 6f 73 $(TOP)\src\os 88a0: 2e 68 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c 73 .h \.$(TOP)\s
88b0: 72 63 5c 6f 73 5f 63 6f 6d 6d 6f 6e 2e 68 20 5c rc\os_common.h \
88c0: 0a 20 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 6f . $(TOP)\src\o 88d0: 73 5f 73 65 74 75 70 2e 68 20 5c 0a 20 20 20 24 s_setup.h \.$
88e0: 28 54 4f 50 29 5c 73 72 63 5c 6f 73 5f 77 69 6e (TOP)\src\os_win
88f0: 2e 68 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c 73 .h \. $(TOP)\s 8900: 72 63 5c 70 61 67 65 72 2e 68 20 5c 0a 20 20 20 rc\pager.h \. 8910: 24 28 54 4f 50 29 5c 73 72 63 5c 70 63 61 63 68$(TOP)\src\pcach
8920: 65 2e 68 20 5c 0a 20 20 20 70 61 72 73 65 2e 68 e.h \. parse.h
8930: 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c 73 72 63 \. $(TOP)\src 8940: 5c 70 72 61 67 6d 61 2e 68 20 5c 0a 20 20 20 24 \pragma.h \.$
8950: 28 53 51 4c 49 54 45 33 48 29 20 5c 0a 20 20 20 (SQLITE3H) \.
8960: 24 28 54 4f 50 29 5c 73 72 63 5c 73 71 6c 69 74 $(TOP)\src\sqlit 8970: 65 33 65 78 74 2e 68 20 5c 0a 20 20 20 24 28 54 e3ext.h \.$(T
8980: 4f 50 29 5c 73 72 63 5c 73 71 6c 69 74 65 49 6e OP)\src\sqliteIn
8990: 74 2e 68 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c t.h \. $(TOP)\ 89a0: 73 72 63 5c 73 71 6c 69 74 65 4c 69 6d 69 74 2e src\sqliteLimit. 89b0: 68 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c 73 72 h \.$(TOP)\sr
89c0: 63 5c 76 64 62 65 2e 68 20 5c 0a 20 20 20 24 28 c\vdbe.h \. $( 89d0: 54 4f 50 29 5c 73 72 63 5c 76 64 62 65 49 6e 74 TOP)\src\vdbeInt 89e0: 2e 68 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c 73 .h \.$(TOP)\s
89f0: 72 63 5c 76 78 77 6f 72 6b 73 2e 68 20 5c 0a 20 rc\vxworks.h \.
8a00: 20 20 24 28 54 4f 50 29 5c 73 72 63 5c 77 68 65 $(TOP)\src\whe 8a10: 72 65 49 6e 74 2e 68 0a 0a 23 20 48 65 61 64 65 reInt.h..# Heade 8a20: 72 20 66 69 6c 65 73 20 75 73 65 64 20 62 79 20 r files used by 8a30: 65 78 74 65 6e 73 69 6f 6e 73 0a 23 0a 45 58 54 extensions.#.EXT 8a40: 48 44 52 20 3d 20 24 28 45 58 54 48 44 52 29 20 HDR =$(EXTHDR)
8a50: 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 \. $(TOP)\ext\f 8a60: 74 73 31 5c 66 74 73 31 2e 68 20 5c 0a 20 20 24 ts1\fts1.h \.$
8a70: 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 31 5c 66 (TOP)\ext\fts1\f
8a80: 74 73 31 5f 68 61 73 68 2e 68 20 5c 0a 20 20 24 ts1_hash.h \. $8a90: 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 31 5c 66 (TOP)\ext\fts1\f 8aa0: 74 73 31 5f 74 6f 6b 65 6e 69 7a 65 72 2e 68 0a ts1_tokenizer.h. 8ab0: 45 58 54 48 44 52 20 3d 20 24 28 45 58 54 48 44 EXTHDR =$(EXTHD
8ac0: 52 29 20 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 R) \. $(TOP)\ex 8ad0: 74 5c 66 74 73 32 5c 66 74 73 32 2e 68 20 5c 0a t\fts2\fts2.h \. 8ae0: 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73$(TOP)\ext\fts
8af0: 32 5c 66 74 73 32 5f 68 61 73 68 2e 68 20 5c 0a 2\fts2_hash.h \.
8b00: 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 $(TOP)\ext\fts 8b10: 32 5c 66 74 73 32 5f 74 6f 6b 65 6e 69 7a 65 72 2\fts2_tokenizer 8b20: 2e 68 0a 45 58 54 48 44 52 20 3d 20 24 28 45 58 .h.EXTHDR =$(EX
8b30: 54 48 44 52 29 20 5c 0a 20 20 24 28 54 4f 50 29 THDR) \. $(TOP) 8b40: 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 2e 68 \ext\fts3\fts3.h 8b50: 20 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c \.$(TOP)\ext\
8b60: 66 74 73 33 5c 66 74 73 33 49 6e 74 2e 68 20 5c fts3\fts3Int.h \
8b70: 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 . $(TOP)\ext\ft 8b80: 73 33 5c 66 74 73 33 5f 68 61 73 68 2e 68 20 5c s3\fts3_hash.h \ 8b90: 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 .$(TOP)\ext\ft
8ba0: 73 33 5c 66 74 73 33 5f 74 6f 6b 65 6e 69 7a 65 s3\fts3_tokenize
8bb0: 72 2e 68 0a 45 58 54 48 44 52 20 3d 20 24 28 45 r.h.EXTHDR = $(E 8bc0: 58 54 48 44 52 29 20 5c 0a 20 20 24 28 54 4f 50 XTHDR) \.$(TOP
8bd0: 29 5c 65 78 74 5c 72 74 72 65 65 5c 72 74 72 65 )\ext\rtree\rtre
8be0: 65 2e 68 0a 45 58 54 48 44 52 20 3d 20 24 28 45 e.h.EXTHDR = $(E 8bf0: 58 54 48 44 52 29 20 5c 0a 20 20 24 28 54 4f 50 XTHDR) \.$(TOP
8c00: 29 5c 65 78 74 5c 69 63 75 5c 73 71 6c 69 74 65 )\ext\icu\sqlite
8c10: 69 63 75 2e 68 0a 45 58 54 48 44 52 20 3d 20 24 icu.h.EXTHDR = $8c20: 28 45 58 54 48 44 52 29 20 5c 0a 20 20 24 28 54 (EXTHDR) \.$(T
8c30: 4f 50 29 5c 65 78 74 5c 72 74 72 65 65 5c 73 71 OP)\ext\rtree\sq
8c40: 6c 69 74 65 33 72 74 72 65 65 2e 68 0a 45 58 54 lite3rtree.h.EXT
8c50: 48 44 52 20 3d 20 24 28 45 58 54 48 44 52 29 20 HDR = $(EXTHDR) 8c60: 5c 0a 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 73 \.$(TOP)\ext\s
8c70: 65 73 73 69 6f 6e 5c 73 71 6c 69 74 65 33 73 65 ession\sqlite3se
8c80: 73 73 69 6f 6e 2e 68 0a 0a 23 20 65 78 65 63 75 ssion.h..# execu
8c90: 74 61 62 6c 65 73 20 6e 65 65 64 65 64 20 66 6f tables needed fo
8ca0: 72 20 74 65 73 74 69 6e 67 0a 23 0a 54 45 53 54 r testing.#.TEST
8cb0: 50 52 4f 47 53 20 3d 20 5c 0a 20 20 74 65 73 74 PROGS = \. test
8cc0: 66 69 78 74 75 72 65 2e 65 78 65 20 5c 0a 20 20 fixture.exe \.
8cd0: 24 28 53 51 4c 49 54 45 33 45 58 45 29 20 5c 0a $(SQLITE3EXE) \. 8ce0: 20 20 73 71 6c 69 74 65 33 5f 61 6e 61 6c 79 7a sqlite3_analyz 8cf0: 65 72 2e 65 78 65 20 5c 0a 20 20 73 71 6c 64 69 er.exe \. sqldi 8d00: 66 66 2e 65 78 65 0a 0a 23 20 44 61 74 61 62 61 ff.exe..# Databa 8d10: 73 65 73 20 63 6f 6e 74 61 69 6e 69 6e 67 20 66 ses containing f 8d20: 75 7a 7a 65 72 20 74 65 73 74 20 63 61 73 65 73 uzzer test cases 8d30: 0a 23 0a 46 55 5a 5a 44 41 54 41 20 3d 20 5c 0a .#.FUZZDATA = \. 8d40: 20 20 24 28 54 4f 50 29 5c 74 65 73 74 5c 66 75$(TOP)\test\fu
8d50: 7a 7a 64 61 74 61 31 2e 64 62 20 5c 0a 20 20 24 zzdata1.db \. $8d60: 28 54 4f 50 29 5c 74 65 73 74 5c 66 75 7a 7a 64 (TOP)\test\fuzzd 8d70: 61 74 61 32 2e 64 62 20 5c 0a 20 20 24 28 54 4f ata2.db \.$(TO
8d80: 50 29 5c 74 65 73 74 5c 66 75 7a 7a 64 61 74 61 P)\test\fuzzdata
8d90: 33 2e 64 62 20 5c 0a 20 20 24 28 54 4f 50 29 5c 3.db \. $(TOP)\ 8da0: 74 65 73 74 5c 66 75 7a 7a 64 61 74 61 34 2e 64 test\fuzzdata4.d 8db0: 62 0a 23 20 3c 3c 2f 6d 61 72 6b 3e 3e 0a 0a 23 b.# <</mark>>..# 8dc0: 20 41 64 64 69 74 69 6f 6e 61 6c 20 63 6f 6d 70 Additional comp 8dd0: 69 6c 65 72 20 6f 70 74 69 6f 6e 73 20 66 6f 72 iler options for 8de0: 20 74 68 65 20 73 68 65 6c 6c 2e 20 20 54 68 65 the shell. The 8df0: 73 65 20 61 72 65 20 6f 6e 6c 79 20 65 66 66 65 se are only effe 8e00: 63 74 69 76 65 0a 23 20 77 68 65 6e 20 74 68 65 ctive.# when the 8e10: 20 73 68 65 6c 6c 20 69 73 20 6e 6f 74 20 62 65 shell is not be 8e20: 69 6e 67 20 64 79 6e 61 6d 69 63 61 6c 6c 79 20 ing dynamically 8e30: 6c 69 6e 6b 65 64 2e 0a 23 0a 21 49 46 20 24 28 linked..#.!IF$(
8e40: 44 59 4e 41 4d 49 43 5f 53 48 45 4c 4c 29 3d 3d DYNAMIC_SHELL)==
8e50: 30 20 26 26 20 24 28 46 4f 52 5f 57 49 4e 31 30 0 && $(FOR_WIN10 8e60: 29 3d 3d 30 0a 53 48 45 4c 4c 5f 43 4f 4d 50 49 )==0.SHELL_COMPI 8e70: 4c 45 5f 4f 50 54 53 20 3d 20 24 28 53 48 45 4c LE_OPTS =$(SHEL
8e80: 4c 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 L_COMPILE_OPTS)
8e90: 2d 44 53 51 4c 49 54 45 5f 53 48 45 4c 4c 5f 4a -DSQLITE_SHELL_J
8ea0: 53 4f 4e 31 20 2d 44 53 51 4c 49 54 45 5f 45 4e SON1 -DSQLITE_EN
8eb0: 41 42 4c 45 5f 46 54 53 34 20 2d 44 53 51 4c 49 ABLE_FTS4 -DSQLI
8ec0: 54 45 5f 45 4e 41 42 4c 45 5f 45 58 50 4c 41 49 TE_ENABLE_EXPLAI
8ed0: 4e 5f 43 4f 4d 4d 45 4e 54 53 0a 21 45 4e 44 49 N_COMMENTS.!ENDI
8ee0: 46 0a 0a 23 20 3c 3c 6d 61 72 6b 3e 3e 0a 23 20 F..# <<mark>>.#
8ef0: 45 78 74 72 61 20 63 6f 6d 70 69 6c 65 72 20 6f Extra compiler o
8f00: 70 74 69 6f 6e 73 20 66 6f 72 20 76 61 72 69 6f ptions for vario
8f10: 75 73 20 74 65 73 74 20 74 6f 6f 6c 73 2e 0a 23 us test tools..#
8f20: 0a 4d 50 54 45 53 54 45 52 5f 43 4f 4d 50 49 4c .MPTESTER_COMPIL
8f30: 45 5f 4f 50 54 53 20 3d 20 2d 44 53 51 4c 49 54 E_OPTS = -DSQLIT
8f40: 45 5f 53 48 45 4c 4c 5f 4a 53 4f 4e 31 20 2d 44 E_SHELL_JSON1 -D
8f50: 53 51 4c 49 54 45 5f 45 4e 41 42 4c 45 5f 46 54 SQLITE_ENABLE_FT
8f60: 53 35 0a 46 55 5a 5a 45 52 53 48 45 4c 4c 5f 43 S5.FUZZERSHELL_C
8f70: 4f 4d 50 49 4c 45 5f 4f 50 54 53 20 3d 20 2d 44 OMPILE_OPTS = -D
8f80: 53 51 4c 49 54 45 5f 45 4e 41 42 4c 45 5f 4a 53 SQLITE_ENABLE_JS
8f90: 4f 4e 31 0a 46 55 5a 5a 43 48 45 43 4b 5f 43 4f ON1.FUZZCHECK_CO
8fa0: 4d 50 49 4c 45 5f 4f 50 54 53 20 3d 20 2d 44 53 MPILE_OPTS = -DS
8fb0: 51 4c 49 54 45 5f 45 4e 41 42 4c 45 5f 4a 53 4f QLITE_ENABLE_JSO
8fc0: 4e 31 20 2d 44 53 51 4c 49 54 45 5f 45 4e 41 42 N1 -DSQLITE_ENAB
8fd0: 4c 45 5f 4d 45 4d 53 59 53 35 0a 0a 23 20 53 74 LE_MEMSYS5..# St
8fe0: 61 6e 64 61 72 64 20 6f 70 74 69 6f 6e 73 20 74 andard options t
8ff0: 6f 20 74 65 73 74 66 69 78 74 75 72 65 2e 0a 23 o testfixture..#
9000: 0a 54 45 53 54 4f 50 54 53 20 3d 20 2d 2d 76 65 .TESTOPTS = --ve
9010: 72 62 6f 73 65 3d 66 69 6c 65 20 2d 2d 6f 75 74 rbose=file --out
9020: 70 75 74 3d 74 65 73 74 2d 6f 75 74 2e 74 78 74 put=test-out.txt
9030: 0a 0a 23 20 45 78 74 72 61 20 74 61 72 67 65 74 ..# Extra target
9040: 73 20 66 6f 72 20 74 68 65 20 22 61 6c 6c 22 20 s for the "all"
9050: 74 61 72 67 65 74 20 74 68 61 74 20 72 65 71 75 target that requ
9060: 69 72 65 20 54 63 6c 2e 0a 23 0a 21 49 46 20 24 ire Tcl..#.!IF $9070: 28 4e 4f 5f 54 43 4c 29 3d 3d 30 0a 41 4c 4c 5f (NO_TCL)==0.ALL_ 9080: 54 43 4c 5f 54 41 52 47 45 54 53 20 3d 20 6c 69 TCL_TARGETS = li 9090: 62 74 63 6c 73 71 6c 69 74 65 33 2e 6c 69 62 0a btclsqlite3.lib. 90a0: 21 45 4c 53 45 0a 41 4c 4c 5f 54 43 4c 5f 54 41 !ELSE.ALL_TCL_TA 90b0: 52 47 45 54 53 20 3d 0a 21 45 4e 44 49 46 0a 23 RGETS =.!ENDIF.# 90c0: 20 3c 3c 2f 6d 61 72 6b 3e 3e 0a 0a 23 20 54 68 <</mark>>..# Th 90d0: 69 73 20 69 73 20 74 68 65 20 64 65 66 61 75 6c is is the defaul 90e0: 74 20 4d 61 6b 65 66 69 6c 65 20 74 61 72 67 65 t Makefile targe 90f0: 74 2e 20 20 54 68 65 20 6f 62 6a 65 63 74 73 20 t. The objects 9100: 6c 69 73 74 65 64 20 68 65 72 65 0a 23 20 61 72 listed here.# ar 9110: 65 20 77 68 61 74 20 67 65 74 20 62 75 69 6c 64 e what get build 9120: 20 77 68 65 6e 20 79 6f 75 20 74 79 70 65 20 6a when you type j 9130: 75 73 74 20 22 6d 61 6b 65 22 20 77 69 74 68 20 ust "make" with 9140: 6e 6f 20 61 72 67 75 6d 65 6e 74 73 2e 0a 23 0a no arguments..#. 9150: 61 6c 6c 3a 09 64 6c 6c 20 6c 69 62 73 71 6c 69 all:.dll libsqli 9160: 74 65 33 2e 6c 69 62 20 73 68 65 6c 6c 20 24 28 te3.lib shell$(
9170: 41 4c 4c 5f 54 43 4c 5f 54 41 52 47 45 54 53 29 ALL_TCL_TARGETS)
9180: 0a 0a 23 20 44 79 6e 61 6d 69 63 20 6c 69 6e 6b ..# Dynamic link
9190: 20 6c 69 62 72 61 72 79 20 73 65 63 74 69 6f 6e library section
91a0: 2e 0a 23 0a 64 6c 6c 3a 20 24 28 53 51 4c 49 54 ..#.dll: $(SQLIT 91b0: 45 33 44 4c 4c 29 0a 0a 23 20 53 68 65 6c 6c 20 E3DLL)..# Shell 91c0: 65 78 65 63 75 74 61 62 6c 65 2e 0a 23 0a 73 68 executable..#.sh 91d0: 65 6c 6c 3a 20 24 28 53 51 4c 49 54 45 33 45 58 ell:$(SQLITE3EX
91e0: 45 29 0a 0a 6c 69 62 73 71 6c 69 74 65 33 2e 6c E)..libsqlite3.l
91f0: 69 62 3a 09 24 28 4c 49 42 4f 42 4a 29 0a 09 24 ib:.$(LIBOBJ)..$
9200: 28 4c 54 4c 49 42 29 20 24 28 4c 54 4c 49 42 4f (LTLIB) $(LTLIBO 9210: 50 54 53 29 20 2f 4f 55 54 3a 24 40 20 24 28 4c PTS) /OUT:$@ $(L 9220: 49 42 4f 42 4a 29 20 24 28 54 4c 49 42 53 29 0a IBOBJ)$(TLIBS).
9230: 0a 23 20 3c 3c 6d 61 72 6b 3e 3e 0a 6c 69 62 74 .# <<mark>>.libt
9240: 63 6c 73 71 6c 69 74 65 33 2e 6c 69 62 3a 09 74 clsqlite3.lib:.t
9250: 63 6c 73 71 6c 69 74 65 2e 6c 6f 20 6c 69 62 73 clsqlite.lo libs
9260: 71 6c 69 74 65 33 2e 6c 69 62 0a 09 24 28 4c 54 qlite3.lib..$(LT 9270: 4c 49 42 29 20 24 28 4c 54 4c 49 42 4f 50 54 53 LIB)$(LTLIBOPTS
9280: 29 20 24 28 4c 54 4c 49 42 50 41 54 48 53 29 20 ) $(LTLIBPATHS) 9290: 2f 4f 55 54 3a 24 40 20 74 63 6c 73 71 6c 69 74 /OUT:$@ tclsqlit
92a0: 65 2e 6c 6f 20 6c 69 62 73 71 6c 69 74 65 33 2e e.lo libsqlite3.
92b0: 6c 69 62 20 24 28 4c 49 42 54 43 4c 53 54 55 42 lib $(LIBTCLSTUB 92c0: 29 20 24 28 54 4c 49 42 53 29 0a 23 20 3c 3c 2f )$(TLIBS).# <</
92d0: 6d 61 72 6b 3e 3e 0a 0a 24 28 53 51 4c 49 54 45 mark>>..$(SQLITE 92e0: 33 44 4c 4c 29 3a 20 24 28 4c 49 42 4f 42 4a 29 3DLL):$(LIBOBJ)
92f0: 20 24 28 4c 49 42 52 45 53 4f 42 4a 53 29 20 24 $(LIBRESOBJS)$
9300: 28 43 4f 52 45 5f 4c 49 4e 4b 5f 44 45 50 29 0a (CORE_LINK_DEP).
9310: 09 24 28 4c 44 29 20 24 28 4c 44 46 4c 41 47 53 .$(LD)$(LDFLAGS
9320: 29 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 20 ) $(LTLINKOPTS) 9330: 24 28 4c 54 4c 49 42 50 41 54 48 53 29 20 2f 44$(LTLIBPATHS) /D
9340: 4c 4c 20 24 28 43 4f 52 45 5f 4c 49 4e 4b 5f 4f LL $(CORE_LINK_O 9350: 50 54 53 29 20 2f 4f 55 54 3a 24 40 20 24 28 4c PTS) /OUT:$@ $(L 9360: 49 42 4f 42 4a 29 20 24 28 4c 49 42 52 45 53 4f IBOBJ)$(LIBRESO
9370: 42 4a 53 29 20 24 28 4c 54 4c 49 42 53 29 20 24 BJS) $(LTLIBS)$
9380: 28 54 4c 49 42 53 29 0a 0a 23 20 3c 3c 6d 61 72 (TLIBS)..# <<mar
9390: 6b 3e 3e 0a 73 71 6c 69 74 65 33 2e 64 65 66 3a k>>.sqlite3.def:
93a0: 20 6c 69 62 73 71 6c 69 74 65 33 2e 6c 69 62 0a libsqlite3.lib.
93b0: 09 65 63 68 6f 20 45 58 50 4f 52 54 53 20 3e 20 .echo EXPORTS >
93c0: 73 71 6c 69 74 65 33 2e 64 65 66 0a 09 64 75 6d sqlite3.def..dum
93d0: 70 62 69 6e 20 2f 61 6c 6c 20 6c 69 62 73 71 6c pbin /all libsql
93e0: 69 74 65 33 2e 6c 69 62 20 5c 0a 09 09 7c 20 24 ite3.lib \...| $93f0: 28 54 43 4c 53 48 5f 43 4d 44 29 20 24 28 54 4f (TCLSH_CMD)$(TO
9400: 50 29 5c 74 6f 6f 6c 5c 72 65 70 6c 61 63 65 2e P)\tool\replace.
9410: 74 63 6c 20 69 6e 63 6c 75 64 65 20 22 5e 5c 73 tcl include "^\s
9420: 2b 31 20 5f 3f 28 73 71 6c 69 74 65 33 5f 2e 2a +1 _?(sqlite3_.*
9430: 29 24 24 22 20 5c 31 20 5c 0a 09 09 7c 20 73 6f )" \1 \...| so
9440: 72 74 20 3e 3e 20 73 71 6c 69 74 65 33 2e 64 65 rt >> sqlite3.de
9450: 66 0a 23 20 3c 3c 2f 6d 61 72 6b 3e 3e 0a 0a 24 f.# <</mark>>..$9460: 28 53 51 4c 49 54 45 33 45 58 45 29 3a 09 24 28 (SQLITE3EXE):.$(
9470: 54 4f 50 29 5c 73 72 63 5c 73 68 65 6c 6c 2e 63 TOP)\src\shell.c
9480: 20 24 28 53 48 45 4c 4c 5f 43 4f 52 45 5f 44 45 $(SHELL_CORE_DE 9490: 50 29 20 24 28 4c 49 42 52 45 53 4f 42 4a 53 29 P)$(LIBRESOBJS)
94a0: 20 24 28 53 48 45 4c 4c 5f 43 4f 52 45 5f 53 52 $(SHELL_CORE_SR 94b0: 43 29 20 24 28 53 51 4c 49 54 45 33 48 29 0a 09 C)$(SQLITE3H)..
94c0: 24 28 4c 54 4c 49 4e 4b 29 20 24 28 53 48 45 4c $(LTLINK)$(SHEL
94d0: 4c 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 L_COMPILE_OPTS)
94e0: 24 28 52 45 41 44 4c 49 4e 45 5f 46 4c 41 47 53 $(READLINE_FLAGS 94f0: 29 20 24 28 54 4f 50 29 5c 73 72 63 5c 73 68 65 )$(TOP)\src\she
9500: 6c 6c 2e 63 20 24 28 53 48 45 4c 4c 5f 43 4f 52 ll.c $(SHELL_COR 9510: 45 5f 53 52 43 29 20 5c 0a 09 09 2f 6c 69 6e 6b E_SRC) \.../link 9520: 20 24 28 53 51 4c 49 54 45 33 45 58 45 50 44 42$(SQLITE3EXEPDB
9530: 29 20 24 28 4c 44 46 4c 41 47 53 29 20 24 28 4c ) $(LDFLAGS)$(L
9540: 54 4c 49 4e 4b 4f 50 54 53 29 20 24 28 53 48 45 TLINKOPTS) $(SHE 9550: 4c 4c 5f 4c 49 4e 4b 5f 4f 50 54 53 29 20 24 28 LL_LINK_OPTS)$(
9560: 4c 54 4c 49 42 50 41 54 48 53 29 20 24 28 4c 49 LTLIBPATHS) $(LI 9570: 42 52 45 53 4f 42 4a 53 29 20 24 28 4c 49 42 52 BRESOBJS)$(LIBR
9580: 45 41 44 4c 49 4e 45 29 20 24 28 4c 54 4c 49 42 EADLINE) $(LTLIB 9590: 53 29 20 24 28 54 4c 49 42 53 29 0a 0a 23 20 3c S)$(TLIBS)..# <
95a0: 3c 6d 61 72 6b 3e 3e 0a 73 71 6c 64 69 66 66 2e <mark>>.sqldiff.
95b0: 65 78 65 3a 09 24 28 54 4f 50 29 5c 74 6f 6f 6c exe:.$(TOP)\tool 95c0: 5c 73 71 6c 64 69 66 66 2e 63 20 24 28 53 51 4c \sqldiff.c$(SQL
95d0: 49 54 45 33 43 29 20 24 28 53 51 4c 49 54 45 33 ITE3C) $(SQLITE3 95e0: 48 29 0a 09 24 28 4c 54 4c 49 4e 4b 29 20 24 28 H)..$(LTLINK) $( 95f0: 4e 4f 5f 57 41 52 4e 29 20 24 28 54 4f 50 29 5c NO_WARN)$(TOP)\
9600: 74 6f 6f 6c 5c 73 71 6c 64 69 66 66 2e 63 20 24 tool\sqldiff.c $9610: 28 53 51 4c 49 54 45 33 43 29 20 2f 6c 69 6e 6b (SQLITE3C) /link 9620: 20 24 28 4c 44 46 4c 41 47 53 29 20 24 28 4c 54$(LDFLAGS) $(LT 9630: 4c 49 4e 4b 4f 50 54 53 29 0a 0a 73 72 63 63 6b LINKOPTS)..srcck 9640: 31 2e 65 78 65 3a 09 24 28 54 4f 50 29 5c 74 6f 1.exe:.$(TOP)\to
9650: 6f 6c 5c 73 72 63 63 6b 31 2e 63 0a 09 24 28 42 ol\srcck1.c..$(B 9660: 43 43 29 20 24 28 4e 4f 5f 57 41 52 4e 29 20 2d CC)$(NO_WARN) -
9670: 46 65 24 40 20 24 28 54 4f 50 29 5c 74 6f 6f 6c Fe$@$(TOP)\tool
9680: 5c 73 72 63 63 6b 31 2e 63 0a 0a 73 6f 75 72 63 \srcck1.c..sourc
9690: 65 74 65 73 74 3a 09 73 72 63 63 6b 31 2e 65 78 etest:.srcck1.ex
96a0: 65 20 73 71 6c 69 74 65 33 2e 63 0a 09 73 72 63 e sqlite3.c..src
96b0: 63 6b 31 2e 65 78 65 20 73 71 6c 69 74 65 33 2e ck1.exe sqlite3.
96c0: 63 0a 0a 66 75 7a 7a 65 72 73 68 65 6c 6c 2e 65 c..fuzzershell.e
96d0: 78 65 3a 09 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c xe:.$(TOP)\tool\ 96e0: 66 75 7a 7a 65 72 73 68 65 6c 6c 2e 63 20 24 28 fuzzershell.c$(
96f0: 53 51 4c 49 54 45 33 43 29 20 24 28 53 51 4c 49 SQLITE3C) $(SQLI 9700: 54 45 33 48 29 0a 09 24 28 4c 54 4c 49 4e 4b 29 TE3H)..$(LTLINK)
9710: 20 24 28 4e 4f 5f 57 41 52 4e 29 20 24 28 46 55 $(NO_WARN)$(FU
9720: 5a 5a 45 52 53 48 45 4c 4c 5f 43 4f 4d 50 49 4c ZZERSHELL_COMPIL
9730: 45 5f 4f 50 54 53 29 20 24 28 54 4f 50 29 5c 74 E_OPTS) $(TOP)\t 9740: 6f 6f 6c 5c 66 75 7a 7a 65 72 73 68 65 6c 6c 2e ool\fuzzershell. 9750: 63 20 24 28 53 51 4c 49 54 45 33 43 29 20 2f 6c c$(SQLITE3C) /l
9760: 69 6e 6b 20 24 28 4c 44 46 4c 41 47 53 29 20 24 ink $(LDFLAGS)$
9770: 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 0a 0a 66 75 (LTLINKOPTS)..fu
9780: 7a 7a 63 68 65 63 6b 2e 65 78 65 3a 09 24 28 54 zzcheck.exe:.$(T 9790: 4f 50 29 5c 74 65 73 74 5c 66 75 7a 7a 63 68 65 OP)\test\fuzzche 97a0: 63 6b 2e 63 20 24 28 53 51 4c 49 54 45 33 43 29 ck.c$(SQLITE3C)
97b0: 20 24 28 53 51 4c 49 54 45 33 48 29 0a 09 24 28 $(SQLITE3H)..$(
97c0: 4c 54 4c 49 4e 4b 29 20 24 28 4e 4f 5f 57 41 52 LTLINK) $(NO_WAR 97d0: 4e 29 20 24 28 46 55 5a 5a 43 48 45 43 4b 5f 43 N)$(FUZZCHECK_C
97e0: 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 24 28 54 OMPILE_OPTS) $(T 97f0: 4f 50 29 5c 74 65 73 74 5c 66 75 7a 7a 63 68 65 OP)\test\fuzzche 9800: 63 6b 2e 63 20 24 28 53 51 4c 49 54 45 33 43 29 ck.c$(SQLITE3C)
9810: 20 2f 6c 69 6e 6b 20 24 28 4c 44 46 4c 41 47 53 /link $(LDFLAGS 9820: 29 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 0a )$(LTLINKOPTS).
9830: 0a 6d 70 74 65 73 74 65 72 2e 65 78 65 3a 09 24 .mptester.exe:.$9840: 28 54 4f 50 29 5c 6d 70 74 65 73 74 5c 6d 70 74 (TOP)\mptest\mpt 9850: 65 73 74 2e 63 20 24 28 53 51 4c 49 54 45 33 43 est.c$(SQLITE3C
9860: 29 20 24 28 53 51 4c 49 54 45 33 48 29 0a 09 24 ) $(SQLITE3H)..$
9870: 28 4c 54 4c 49 4e 4b 29 20 24 28 4e 4f 5f 57 41 (LTLINK) $(NO_WA 9880: 52 4e 29 20 24 28 4d 50 54 45 53 54 45 52 5f 43 RN)$(MPTESTER_C
9890: 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 24 28 54 OMPILE_OPTS) $(T 98a0: 4f 50 29 5c 6d 70 74 65 73 74 5c 6d 70 74 65 73 OP)\mptest\mptes 98b0: 74 2e 63 20 24 28 53 51 4c 49 54 45 33 43 29 20 t.c$(SQLITE3C)
98c0: 2f 6c 69 6e 6b 20 24 28 4c 44 46 4c 41 47 53 29 /link $(LDFLAGS) 98d0: 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 0a 0a$(LTLINKOPTS)..
98e0: 4d 50 54 45 53 54 31 20 3d 20 6d 70 74 65 73 74 MPTEST1 = mptest
98f0: 65 72 20 6d 70 74 65 73 74 2e 64 62 20 24 28 54 er mptest.db $(T 9900: 4f 50 29 5c 6d 70 74 65 73 74 5c 63 72 61 73 68 OP)\mptest\crash 9910: 30 31 2e 74 65 73 74 20 2d 2d 72 65 70 65 61 74 01.test --repeat 9920: 20 32 30 0a 4d 50 54 45 53 54 32 20 3d 20 6d 70 20.MPTEST2 = mp 9930: 74 65 73 74 65 72 20 6d 70 74 65 73 74 2e 64 62 tester mptest.db 9940: 20 24 28 54 4f 50 29 5c 6d 70 74 65 73 74 5c 6d$(TOP)\mptest\m
9950: 75 6c 74 69 77 72 69 74 65 30 31 2e 74 65 73 74 ultiwrite01.test
9960: 20 2d 2d 72 65 70 65 61 74 20 32 30 0a 0a 6d 70 --repeat 20..mp
9970: 74 65 73 74 3a 09 6d 70 74 65 73 74 65 72 2e 65 test:.mptester.e
9980: 78 65 0a 09 64 65 6c 20 2f 51 20 6d 70 74 65 73 xe..del /Q mptes
9990: 74 2e 64 62 20 32 3e 4e 55 4c 0a 09 24 28 4d 50 t.db 2>NUL..$(MP 99a0: 54 45 53 54 31 29 20 2d 2d 6a 6f 75 72 6e 61 6c TEST1) --journal 99b0: 6d 6f 64 65 20 44 45 4c 45 54 45 0a 09 24 28 4d mode DELETE..$(M
99c0: 50 54 45 53 54 32 29 20 2d 2d 6a 6f 75 72 6e 61 PTEST2) --journa
99d0: 6c 6d 6f 64 65 20 57 41 4c 0a 09 24 28 4d 50 54 lmode WAL..$(MPT 99e0: 45 53 54 31 29 20 2d 2d 6a 6f 75 72 6e 61 6c 6d EST1) --journalm 99f0: 6f 64 65 20 57 41 4c 0a 09 24 28 4d 50 54 45 53 ode WAL..$(MPTES
9a00: 54 32 29 20 2d 2d 6a 6f 75 72 6e 61 6c 6d 6f 64 T2) --journalmod
9a10: 65 20 50 45 52 53 49 53 54 0a 09 24 28 4d 50 54 e PERSIST..$(MPT 9a20: 45 53 54 31 29 20 2d 2d 6a 6f 75 72 6e 61 6c 6d EST1) --journalm 9a30: 6f 64 65 20 50 45 52 53 49 53 54 0a 09 24 28 4d ode PERSIST..$(M
9a40: 50 54 45 53 54 32 29 20 2d 2d 6a 6f 75 72 6e 61 PTEST2) --journa
9a50: 6c 6d 6f 64 65 20 54 52 55 4e 43 41 54 45 0a 09 lmode TRUNCATE..
9a60: 24 28 4d 50 54 45 53 54 31 29 20 2d 2d 6a 6f 75 $(MPTEST1) --jou 9a70: 72 6e 61 6c 6d 6f 64 65 20 54 52 55 4e 43 41 54 rnalmode TRUNCAT 9a80: 45 0a 09 24 28 4d 50 54 45 53 54 32 29 20 2d 2d E..$(MPTEST2) --
9a90: 6a 6f 75 72 6e 61 6c 6d 6f 64 65 20 44 45 4c 45 journalmode DELE
9aa0: 54 45 0a 0a 23 20 54 68 69 73 20 74 61 72 67 65 TE..# This targe
9ab0: 74 20 63 72 65 61 74 65 73 20 61 20 64 69 72 65 t creates a dire
9ac0: 63 74 6f 72 79 20 6e 61 6d 65 64 20 22 74 73 72 ctory named "tsr
9ad0: 63 22 20 61 6e 64 20 66 69 6c 6c 73 20 69 74 20 c" and fills it
9ae0: 77 69 74 68 0a 23 20 63 6f 70 69 65 73 20 6f 66 with.# copies of
9af0: 20 61 6c 6c 20 6f 66 20 74 68 65 20 43 20 73 6f all of the C so
9b00: 75 72 63 65 20 63 6f 64 65 20 61 6e 64 20 68 65 urce code and he
9b10: 61 64 65 72 20 66 69 6c 65 73 20 6e 65 65 64 65 ader files neede
9b20: 64 20 74 6f 0a 23 20 62 75 69 6c 64 20 6f 6e 20 d to.# build on
9b30: 74 68 65 20 74 61 72 67 65 74 20 73 79 73 74 65 the target syste
9b40: 6d 2e 20 20 53 6f 6d 65 20 6f 66 20 74 68 65 20 m. Some of the
9b50: 43 20 73 6f 75 72 63 65 20 63 6f 64 65 20 61 6e C source code an
9b60: 64 20 68 65 61 64 65 72 0a 23 20 66 69 6c 65 73 d header.# files
9b70: 20 61 72 65 20 61 75 74 6f 6d 61 74 69 63 61 6c are automatical
9b80: 6c 79 20 67 65 6e 65 72 61 74 65 64 2e 20 20 54 ly generated. T
9b90: 68 69 73 20 74 61 72 67 65 74 20 74 61 6b 65 73 his target takes
9ba0: 20 63 61 72 65 20 6f 66 0a 23 20 61 6c 6c 20 74 care of.# all t
9bb0: 68 61 74 20 61 75 74 6f 6d 61 74 69 63 20 67 65 hat automatic ge
9bc0: 6e 65 72 61 74 69 6f 6e 2e 0a 23 0a 2e 74 61 72 neration..#..tar
9bd0: 67 65 74 5f 73 6f 75 72 63 65 3a 09 24 28 53 52 get_source:.$(SR 9be0: 43 29 20 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c 76 C)$(TOP)\tool\v
9bf0: 64 62 65 2d 63 6f 6d 70 72 65 73 73 2e 74 63 6c dbe-compress.tcl
9c00: 20 66 74 73 35 2e 63 0a 09 2d 72 6d 64 69 72 20 fts5.c..-rmdir
9c10: 2f 51 2f 53 20 74 73 72 63 20 32 3e 4e 55 4c 0a /Q/S tsrc 2>NUL.
9c20: 09 2d 6d 6b 64 69 72 20 74 73 72 63 0a 09 66 6f .-mkdir tsrc..fo
9c30: 72 20 25 69 20 69 6e 20 28 24 28 53 52 43 30 30 r %i in ($(SRC00 9c40: 29 29 20 64 6f 20 63 6f 70 79 20 2f 59 20 25 69 )) do copy /Y %i 9c50: 20 74 73 72 63 0a 09 66 6f 72 20 25 69 20 69 6e tsrc..for %i in 9c60: 20 28 24 28 53 52 43 30 31 29 29 20 64 6f 20 63 ($(SRC01)) do c
9c70: 6f 70 79 20 2f 59 20 25 69 20 74 73 72 63 0a 09 opy /Y %i tsrc..
9c80: 66 6f 72 20 25 69 20 69 6e 20 28 24 28 53 52 43 for %i in ($(SRC 9c90: 30 32 29 29 20 64 6f 20 63 6f 70 79 20 2f 59 20 02)) do copy /Y 9ca0: 25 69 20 74 73 72 63 0a 09 66 6f 72 20 25 69 20 %i tsrc..for %i 9cb0: 69 6e 20 28 24 28 53 52 43 30 33 29 29 20 64 6f in ($(SRC03)) do
9cc0: 20 63 6f 70 79 20 2f 59 20 25 69 20 74 73 72 63 copy /Y %i tsrc
9cd0: 0a 09 66 6f 72 20 25 69 20 69 6e 20 28 24 28 53 ..for %i in ($(S 9ce0: 52 43 30 34 29 29 20 64 6f 20 63 6f 70 79 20 2f RC04)) do copy / 9cf0: 59 20 25 69 20 74 73 72 63 0a 09 66 6f 72 20 25 Y %i tsrc..for % 9d00: 69 20 69 6e 20 28 24 28 53 52 43 30 35 29 29 20 i in ($(SRC05))
9d10: 64 6f 20 63 6f 70 79 20 2f 59 20 25 69 20 74 73 do copy /Y %i ts
9d20: 72 63 0a 09 66 6f 72 20 25 69 20 69 6e 20 28 24 rc..for %i in ($9d30: 28 53 52 43 30 36 29 29 20 64 6f 20 63 6f 70 79 (SRC06)) do copy 9d40: 20 2f 59 20 25 69 20 74 73 72 63 0a 09 66 6f 72 /Y %i tsrc..for 9d50: 20 25 69 20 69 6e 20 28 24 28 53 52 43 30 37 29 %i in ($(SRC07)
9d60: 29 20 64 6f 20 63 6f 70 79 20 2f 59 20 25 69 20 ) do copy /Y %i
9d70: 74 73 72 63 0a 09 66 6f 72 20 25 69 20 69 6e 20 tsrc..for %i in
9d80: 28 24 28 53 52 43 30 38 29 29 20 64 6f 20 63 6f ($(SRC08)) do co 9d90: 70 79 20 2f 59 20 25 69 20 74 73 72 63 0a 09 66 py /Y %i tsrc..f 9da0: 6f 72 20 25 69 20 69 6e 20 28 24 28 53 52 43 30 or %i in ($(SRC0
9db0: 39 29 29 20 64 6f 20 63 6f 70 79 20 2f 59 20 25 9)) do copy /Y %
9dc0: 69 20 74 73 72 63 0a 09 66 6f 72 20 25 69 20 69 i tsrc..for %i i
9dd0: 6e 20 28 24 28 53 52 43 31 30 29 29 20 64 6f 20 n ($(SRC10)) do 9de0: 63 6f 70 79 20 2f 59 20 25 69 20 74 73 72 63 0a copy /Y %i tsrc. 9df0: 09 66 6f 72 20 25 69 20 69 6e 20 28 24 28 53 52 .for %i in ($(SR
9e00: 43 31 31 29 29 20 64 6f 20 63 6f 70 79 20 2f 59 C11)) do copy /Y
9e10: 20 25 69 20 74 73 72 63 0a 09 63 6f 70 79 20 2f %i tsrc..copy /
9e20: 59 20 66 74 73 35 2e 63 20 74 73 72 63 0a 09 63 Y fts5.c tsrc..c
9e30: 6f 70 79 20 2f 59 20 66 74 73 35 2e 68 20 74 73 opy /Y fts5.h ts
9e40: 72 63 0a 09 64 65 6c 20 2f 51 20 74 73 72 63 5c rc..del /Q tsrc\
9e50: 73 71 6c 69 74 65 2e 68 2e 69 6e 20 74 73 72 63 sqlite.h.in tsrc
9e60: 5c 70 61 72 73 65 2e 79 20 32 3e 4e 55 4c 0a 09 \parse.y 2>NUL..
9e70: 24 28 54 43 4c 53 48 5f 43 4d 44 29 20 24 28 54 $(TCLSH_CMD)$(T
9e80: 4f 50 29 5c 74 6f 6f 6c 5c 76 64 62 65 2d 63 6f OP)\tool\vdbe-co
9e90: 6d 70 72 65 73 73 2e 74 63 6c 20 24 28 4f 50 54 mpress.tcl $(OPT 9ea0: 53 29 20 3c 20 74 73 72 63 5c 76 64 62 65 2e 63 S) < tsrc\vdbe.c 9eb0: 20 3e 20 76 64 62 65 2e 6e 65 77 0a 09 6d 6f 76 > vdbe.new..mov 9ec0: 65 20 76 64 62 65 2e 6e 65 77 20 74 73 72 63 5c e vdbe.new tsrc\ 9ed0: 76 64 62 65 2e 63 0a 09 65 63 68 6f 20 3e 20 2e vdbe.c..echo > . 9ee0: 74 61 72 67 65 74 5f 73 6f 75 72 63 65 0a 0a 73 target_source..s 9ef0: 71 6c 69 74 65 33 2e 63 3a 09 2e 74 61 72 67 65 qlite3.c:..targe 9f00: 74 5f 73 6f 75 72 63 65 20 73 71 6c 69 74 65 33 t_source sqlite3 9f10: 65 78 74 2e 68 20 24 28 54 4f 50 29 5c 74 6f 6f ext.h$(TOP)\too
9f20: 6c 5c 6d 6b 73 71 6c 69 74 65 33 63 2e 74 63 6c l\mksqlite3c.tcl
9f30: 0a 09 24 28 54 43 4c 53 48 5f 43 4d 44 29 20 24 ..$(TCLSH_CMD)$
9f40: 28 54 4f 50 29 5c 74 6f 6f 6c 5c 6d 6b 73 71 6c (TOP)\tool\mksql
9f50: 69 74 65 33 63 2e 74 63 6c 20 24 28 4d 4b 53 51 ite3c.tcl $(MKSQ 9f60: 4c 49 54 45 33 43 5f 41 52 47 53 29 0a 09 63 6f LITE3C_ARGS)..co 9f70: 70 79 20 74 73 72 63 5c 73 68 65 6c 6c 2e 63 20 py tsrc\shell.c 9f80: 2e 0a 09 63 6f 70 79 20 24 28 54 4f 50 29 5c 65 ...copy$(TOP)\e
9f90: 78 74 5c 73 65 73 73 69 6f 6e 5c 73 71 6c 69 74 xt\session\sqlit
9fa0: 65 33 73 65 73 73 69 6f 6e 2e 68 20 2e 0a 0a 73 e3session.h ...s
9fb0: 71 6c 69 74 65 33 2d 61 6c 6c 2e 63 3a 09 73 71 qlite3-all.c:.sq
9fc0: 6c 69 74 65 33 2e 63 20 24 28 54 4f 50 29 5c 74 lite3.c $(TOP)\t 9fd0: 6f 6f 6c 5c 73 70 6c 69 74 2d 73 71 6c 69 74 65 ool\split-sqlite 9fe0: 33 63 2e 74 63 6c 0a 09 24 28 54 43 4c 53 48 5f 3c.tcl..$(TCLSH_
9ff0: 43 4d 44 29 20 24 28 54 4f 50 29 5c 74 6f 6f 6c CMD) $(TOP)\tool a000: 5c 73 70 6c 69 74 2d 73 71 6c 69 74 65 33 63 2e \split-sqlite3c. a010: 74 63 6c 0a 23 20 3c 3c 2f 6d 61 72 6b 3e 3e 0a tcl.# <</mark>>. a020: 0a 23 20 52 75 6c 65 20 74 6f 20 62 75 69 6c 64 .# Rule to build a030: 20 74 68 65 20 61 6d 61 6c 67 61 6d 61 74 69 6f the amalgamatio a040: 6e 0a 23 0a 73 71 6c 69 74 65 33 2e 6c 6f 3a 09 n.#.sqlite3.lo:. a050: 24 28 53 51 4c 49 54 45 33 43 29 0a 09 24 28 4c$(SQLITE3C)..$(L a060: 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 TCOMPILE)$(CORE
a070: 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d _COMPILE_OPTS) -
a080: 63 20 24 28 53 51 4c 49 54 45 33 43 29 0a 0a 23 c $(SQLITE3C)..# a090: 20 3c 3c 6d 61 72 6b 3e 3e 0a 23 20 52 75 6c 65 <<mark>>.# Rule a0a0: 73 20 74 6f 20 62 75 69 6c 64 20 74 68 65 20 4c s to build the L a0b0: 45 4d 4f 4e 20 63 6f 6d 70 69 6c 65 72 20 67 65 EMON compiler ge a0c0: 6e 65 72 61 74 6f 72 0a 23 0a 6c 65 6d 70 61 72 nerator.#.lempar a0d0: 2e 63 3a 09 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c .c:.$(TOP)\tool\
a0e0: 6c 65 6d 70 61 72 2e 63 0a 09 63 6f 70 79 20 24 lempar.c..copy $a0f0: 28 54 4f 50 29 5c 74 6f 6f 6c 5c 6c 65 6d 70 61 (TOP)\tool\lempa a100: 72 2e 63 20 2e 0a 0a 6c 65 6d 6f 6e 2e 65 78 65 r.c ...lemon.exe a110: 3a 09 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c 6c 65 :.$(TOP)\tool\le
a120: 6d 6f 6e 2e 63 20 6c 65 6d 70 61 72 2e 63 0a 09 mon.c lempar.c..
a130: 24 28 42 43 43 29 20 24 28 4e 4f 5f 57 41 52 4e $(BCC)$(NO_WARN
a140: 29 20 2d 44 61 63 63 65 73 73 3d 5f 61 63 63 65 ) -Daccess=_acce
a150: 73 73 20 5c 0a 09 09 2d 46 65 24 40 20 24 28 54 ss \...-Fe$@$(T
a160: 4f 50 29 5c 74 6f 6f 6c 5c 6c 65 6d 6f 6e 2e 63 OP)\tool\lemon.c
a170: 20 2f 6c 69 6e 6b 20 24 28 4c 44 46 4c 41 47 53 /link $(LDFLAGS a180: 29 20 24 28 4e 4c 54 4c 49 4e 4b 4f 50 54 53 29 )$(NLTLINKOPTS)
a190: 20 24 28 4e 4c 54 4c 49 42 50 41 54 48 53 29 0a $(NLTLIBPATHS). a1a0: 0a 23 20 52 75 6c 65 73 20 74 6f 20 62 75 69 6c .# Rules to buil a1b0: 64 20 69 6e 64 69 76 69 64 75 61 6c 20 2a 2e 6c d individual *.l a1c0: 6f 20 66 69 6c 65 73 20 66 72 6f 6d 20 67 65 6e o files from gen a1d0: 65 72 61 74 65 64 20 2a 2e 63 20 66 69 6c 65 73 erated *.c files a1e0: 2e 20 54 68 69 73 0a 23 20 61 70 70 6c 69 65 73 . This.# applies a1f0: 20 74 6f 3a 0a 23 0a 23 20 20 20 20 20 70 61 72 to:.#.# par a200: 73 65 2e 6c 6f 0a 23 20 20 20 20 20 6f 70 63 6f se.lo.# opco a210: 64 65 73 2e 6c 6f 0a 23 0a 70 61 72 73 65 2e 6c des.lo.#.parse.l a220: 6f 3a 09 70 61 72 73 65 2e 63 20 24 28 48 44 52 o:.parse.c$(HDR
a230: 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 )..$(LTCOMPILE) a240: 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f$(CORE_COMPILE_O
a250: 50 54 53 29 20 2d 63 20 70 61 72 73 65 2e 63 0a PTS) -c parse.c.
a260: 0a 6f 70 63 6f 64 65 73 2e 6c 6f 3a 09 6f 70 63 .opcodes.lo:.opc
a270: 6f 64 65 73 2e 63 0a 09 24 28 4c 54 43 4f 4d 50 odes.c..$(LTCOMP a280: 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 ILE)$(CORE_COMP
a290: 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 6f 70 63 ILE_OPTS) -c opc
a2a0: 6f 64 65 73 2e 63 0a 23 20 3c 3c 2f 6d 61 72 6b odes.c.# <</mark
a2b0: 3e 3e 0a 0a 23 20 52 75 6c 65 20 74 6f 20 62 75 >>..# Rule to bu
a2c0: 69 6c 64 20 74 68 65 20 57 69 6e 33 32 20 72 65 ild the Win32 re
a2d0: 73 6f 75 72 63 65 73 20 6f 62 6a 65 63 74 20 66 sources object f
a2e0: 69 6c 65 2e 0a 23 0a 21 49 46 20 24 28 55 53 45 ile..#.!IF $(USE a2f0: 5f 52 43 29 21 3d 30 0a 23 20 3c 3c 62 6c 6f 63 _RC)!=0.# <<bloc a300: 6b 31 3e 3e 0a 24 28 4c 49 42 52 45 53 4f 42 4a k1>>.$(LIBRESOBJ
a310: 53 29 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 73 S):.$(TOP)\src\s a320: 71 6c 69 74 65 33 2e 72 63 20 24 28 53 51 4c 49 qlite3.rc$(SQLI
a330: 54 45 33 48 29 0a 09 65 63 68 6f 20 23 69 66 6e TE3H)..echo #ifn
a340: 64 65 66 20 53 51 4c 49 54 45 5f 52 45 53 4f 55 def SQLITE_RESOU
a350: 52 43 45 5f 56 45 52 53 49 4f 4e 20 3e 20 73 71 RCE_VERSION > sq
a360: 6c 69 74 65 33 72 63 2e 68 0a 09 66 6f 72 20 2f lite3rc.h..for /
a370: 46 20 25 25 56 20 69 6e 20 28 27 74 79 70 65 20 F %%V in ('type
a380: 22 24 28 54 4f 50 29 5c 56 45 52 53 49 4f 4e 22 "$(TOP)\VERSION" a390: 27 29 20 64 6f 20 28 20 5c 0a 09 09 65 63 68 6f ') do ( \...echo a3a0: 20 23 64 65 66 69 6e 65 20 53 51 4c 49 54 45 5f #define SQLITE_ a3b0: 52 45 53 4f 55 52 43 45 5f 56 45 52 53 49 4f 4e RESOURCE_VERSION a3c0: 20 25 25 56 20 5c 0a 09 09 09 7c 20 24 28 54 43 %%V \....|$(TC
a3d0: 4c 53 48 5f 43 4d 44 29 20 24 28 54 4f 50 29 5c LSH_CMD) $(TOP)\ a3e0: 74 6f 6f 6c 5c 72 65 70 6c 61 63 65 2e 74 63 6c tool\replace.tcl a3f0: 20 65 78 61 63 74 20 2e 20 5e 2c 20 3e 3e 20 73 exact . ^, >> s a400: 71 6c 69 74 65 33 72 63 2e 68 20 5c 0a 09 29 0a qlite3rc.h \..). a410: 09 65 63 68 6f 20 23 65 6e 64 69 66 20 3e 3e 20 .echo #endif >> a420: 73 71 6c 69 74 65 33 72 63 2e 68 0a 09 24 28 4c sqlite3rc.h..$(L
a430: 54 52 43 4f 4d 50 49 4c 45 29 20 2d 66 6f 20 24 TRCOMPILE) -fo $a440: 28 4c 49 42 52 45 53 4f 42 4a 53 29 20 24 28 54 (LIBRESOBJS)$(T
a450: 4f 50 29 5c 73 72 63 5c 73 71 6c 69 74 65 33 2e OP)\src\sqlite3.
a460: 72 63 0a 23 20 3c 3c 2f 62 6c 6f 63 6b 31 3e 3e rc.# <</block1>>
a470: 0a 21 45 4e 44 49 46 0a 0a 23 20 3c 3c 6d 61 72 .!ENDIF..# <<mar
a480: 6b 3e 3e 0a 23 20 52 75 6c 65 73 20 74 6f 20 62 k>>.# Rules to b
a490: 75 69 6c 64 20 69 6e 64 69 76 69 64 75 61 6c 20 uild individual
a4a0: 2a 2e 6c 6f 20 66 69 6c 65 73 20 66 72 6f 6d 20 *.lo files from
a4b0: 66 69 6c 65 73 20 69 6e 20 74 68 65 20 73 72 63 files in the src
a4c0: 20 64 69 72 65 63 74 6f 72 79 2e 0a 23 0a 61 6c directory..#.al
a4d0: 74 65 72 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 ter.lo:.$(TOP)\s a4e0: 72 63 5c 61 6c 74 65 72 2e 63 20 24 28 48 44 52 rc\alter.c$(HDR
a4f0: 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 )..$(LTCOMPILE) a500: 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f$(CORE_COMPILE_O
a510: 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 PTS) -c $(TOP)\s a520: 72 63 5c 61 6c 74 65 72 2e 63 0a 0a 61 6e 61 6c rc\alter.c..anal a530: 79 7a 65 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 yze.lo:.$(TOP)\s
a540: 72 63 5c 61 6e 61 6c 79 7a 65 2e 63 20 24 28 48 rc\analyze.c $(H a550: 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 DR)..$(LTCOMPILE
a560: 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 ) $(CORE_COMPILE a570: 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 _OPTS) -c$(TOP)
a580: 5c 73 72 63 5c 61 6e 61 6c 79 7a 65 2e 63 0a 0a \src\analyze.c..
a590: 61 74 74 61 63 68 2e 6c 6f 3a 09 24 28 54 4f 50 attach.lo:.$(TOP a5a0: 29 5c 73 72 63 5c 61 74 74 61 63 68 2e 63 20 24 )\src\attach.c$
a5b0: 28 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 (HDR)..$(LTCOMPI a5c0: 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 LE)$(CORE_COMPI
a5d0: 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f LE_OPTS) -c $(TO a5e0: 50 29 5c 73 72 63 5c 61 74 74 61 63 68 2e 63 0a P)\src\attach.c. a5f0: 0a 61 75 74 68 2e 6c 6f 3a 09 24 28 54 4f 50 29 .auth.lo:.$(TOP)
a600: 5c 73 72 63 5c 61 75 74 68 2e 63 20 24 28 48 44 \src\auth.c $(HD a610: 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 R)..$(LTCOMPILE)
a620: 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f $(CORE_COMPILE_ a630: 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c OPTS) -c$(TOP)\
a640: 73 72 63 5c 61 75 74 68 2e 63 0a 0a 62 61 63 6b src\auth.c..back
a650: 75 70 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 up.lo:.$(TOP)\sr a660: 63 5c 62 61 63 6b 75 70 2e 63 20 24 28 48 44 52 c\backup.c$(HDR
a670: 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 )..$(LTCOMPILE) a680: 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f$(CORE_COMPILE_O
a690: 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 PTS) -c $(TOP)\s a6a0: 72 63 5c 62 61 63 6b 75 70 2e 63 0a 0a 62 69 74 rc\backup.c..bit a6b0: 76 65 63 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 vec.lo:.$(TOP)\s
a6c0: 72 63 5c 62 69 74 76 65 63 2e 63 20 24 28 48 44 rc\bitvec.c $(HD a6d0: 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 R)..$(LTCOMPILE)
a6e0: 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f $(CORE_COMPILE_ a6f0: 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c OPTS) -c$(TOP)\
a700: 73 72 63 5c 62 69 74 76 65 63 2e 63 0a 0a 62 74 src\bitvec.c..bt
a710: 6d 75 74 65 78 2e 6c 6f 3a 09 24 28 54 4f 50 29 mutex.lo:.$(TOP) a720: 5c 73 72 63 5c 62 74 6d 75 74 65 78 2e 63 20 24 \src\btmutex.c$
a730: 28 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 (HDR)..$(LTCOMPI a740: 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 LE)$(CORE_COMPI
a750: 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f LE_OPTS) -c $(TO a760: 50 29 5c 73 72 63 5c 62 74 6d 75 74 65 78 2e 63 P)\src\btmutex.c a770: 0a 0a 62 74 72 65 65 2e 6c 6f 3a 09 24 28 54 4f ..btree.lo:.$(TO
a780: 50 29 5c 73 72 63 5c 62 74 72 65 65 2e 63 20 24 P)\src\btree.c $a790: 28 48 44 52 29 20 24 28 54 4f 50 29 5c 73 72 63 (HDR)$(TOP)\src
a7a0: 5c 70 61 67 65 72 2e 68 0a 09 24 28 4c 54 43 4f \pager.h..$(LTCO a7b0: 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f MPILE)$(CORE_CO
a7c0: 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 MPILE_OPTS) -c $a7d0: 28 54 4f 50 29 5c 73 72 63 5c 62 74 72 65 65 2e (TOP)\src\btree. a7e0: 63 0a 0a 62 75 69 6c 64 2e 6c 6f 3a 09 24 28 54 c..build.lo:.$(T
a7f0: 4f 50 29 5c 73 72 63 5c 62 75 69 6c 64 2e 63 20 OP)\src\build.c
a800: 24 28 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 $(HDR)..$(LTCOMP
a810: 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 ILE) $(CORE_COMP a820: 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 ILE_OPTS) -c$(T
a830: 4f 50 29 5c 73 72 63 5c 62 75 69 6c 64 2e 63 0a OP)\src\build.c.
a840: 0a 63 61 6c 6c 62 61 63 6b 2e 6c 6f 3a 09 24 28 .callback.lo:.$( a850: 54 4f 50 29 5c 73 72 63 5c 63 61 6c 6c 62 61 63 TOP)\src\callbac a860: 6b 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 k.c$(HDR)..$(LT a870: 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f COMPILE)$(CORE_
a880: 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 COMPILE_OPTS) -c
a890: 20 24 28 54 4f 50 29 5c 73 72 63 5c 63 61 6c 6c $(TOP)\src\call a8a0: 62 61 63 6b 2e 63 0a 0a 63 6f 6d 70 6c 65 74 65 back.c..complete a8b0: 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c .lo:.$(TOP)\src\
a8c0: 63 6f 6d 70 6c 65 74 65 2e 63 20 24 28 48 44 52 complete.c $(HDR a8d0: 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 )..$(LTCOMPILE)
a8e0: 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f $(CORE_COMPILE_O a8f0: 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 PTS) -c$(TOP)\s
a900: 72 63 5c 63 6f 6d 70 6c 65 74 65 2e 63 0a 0a 63 rc\complete.c..c
a910: 74 69 6d 65 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c time.lo:.$(TOP)\ a920: 73 72 63 5c 63 74 69 6d 65 2e 63 20 24 28 48 44 src\ctime.c$(HD
a930: 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 R)..$(LTCOMPILE) a940: 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f$(CORE_COMPILE_
a950: 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c OPTS) -c $(TOP)\ a960: 73 72 63 5c 63 74 69 6d 65 2e 63 0a 0a 64 61 74 src\ctime.c..dat a970: 65 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 e.lo:.$(TOP)\src
a980: 5c 64 61 74 65 2e 63 20 24 28 48 44 52 29 0a 09 \date.c $(HDR).. a990: 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43$(LTCOMPILE) $(C a9a0: 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 ORE_COMPILE_OPTS a9b0: 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c ) -c$(TOP)\src\
a9c0: 64 61 74 65 2e 63 0a 0a 64 62 73 74 61 74 2e 6c date.c..dbstat.l
a9d0: 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 64 61 o:.$(TOP)\src\da a9e0: 74 65 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c te.c$(HDR)..$(L a9f0: 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 TCOMPILE)$(CORE
aa00: 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d _COMPILE_OPTS) -
aa10: 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 64 62 73 c $(TOP)\src\dbs aa20: 74 61 74 2e 63 0a 0a 64 65 6c 65 74 65 2e 6c 6f tat.c..delete.lo aa30: 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 64 65 6c :.$(TOP)\src\del
aa40: 65 74 65 2e 63 20 24 28 48 44 52 29 0a 09 24 28 ete.c $(HDR)..$(
aa50: 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 LTCOMPILE) $(COR aa60: 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 E_COMPILE_OPTS) aa70: 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 64 65 -c$(TOP)\src\de
aa80: 6c 65 74 65 2e 63 0a 0a 65 78 70 72 2e 6c 6f 3a lete.c..expr.lo:
aa90: 09 24 28 54 4f 50 29 5c 73 72 63 5c 65 78 70 72 .$(TOP)\src\expr aaa0: 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 43 .c$(HDR)..$(LTC aab0: 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 OMPILE)$(CORE_C
aac0: 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 OMPILE_OPTS) -c
aad0: 24 28 54 4f 50 29 5c 73 72 63 5c 65 78 70 72 2e $(TOP)\src\expr. aae0: 63 0a 0a 66 61 75 6c 74 2e 6c 6f 3a 09 24 28 54 c..fault.lo:.$(T
aaf0: 4f 50 29 5c 73 72 63 5c 66 61 75 6c 74 2e 63 20 OP)\src\fault.c
ab00: 24 28 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 $(HDR)..$(LTCOMP
ab10: 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 ILE) $(CORE_COMP ab20: 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 ILE_OPTS) -c$(T
ab30: 4f 50 29 5c 73 72 63 5c 66 61 75 6c 74 2e 63 0a OP)\src\fault.c.
ab40: 0a 66 6b 65 79 2e 6c 6f 3a 09 24 28 54 4f 50 29 .fkey.lo:.$(TOP) ab50: 5c 73 72 63 5c 66 6b 65 79 2e 63 20 24 28 48 44 \src\fkey.c$(HD
ab60: 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 R)..$(LTCOMPILE) ab70: 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f$(CORE_COMPILE_
ab80: 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c OPTS) -c $(TOP)\ ab90: 73 72 63 5c 66 6b 65 79 2e 63 0a 0a 66 75 6e 63 src\fkey.c..func aba0: 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c .lo:.$(TOP)\src\
abb0: 66 75 6e 63 2e 63 20 24 28 48 44 52 29 0a 09 24 func.c $(HDR)..$
abc0: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f (LTCOMPILE) $(CO abd0: 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 RE_COMPILE_OPTS) abe0: 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 66 -c$(TOP)\src\f
abf0: 75 6e 63 2e 63 0a 0a 67 6c 6f 62 61 6c 2e 6c 6f unc.c..global.lo
ac00: 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 67 6c 6f :.$(TOP)\src\glo ac10: 62 61 6c 2e 63 20 24 28 48 44 52 29 0a 09 24 28 bal.c$(HDR)..$( ac20: 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 LTCOMPILE)$(COR
ac30: 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 E_COMPILE_OPTS)
ac40: 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 67 6c -c $(TOP)\src\gl ac50: 6f 62 61 6c 2e 63 0a 0a 68 61 73 68 2e 6c 6f 3a obal.c..hash.lo: ac60: 09 24 28 54 4f 50 29 5c 73 72 63 5c 68 61 73 68 .$(TOP)\src\hash
ac70: 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 43 .c $(HDR)..$(LTC
ac80: 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 OMPILE) $(CORE_C ac90: 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 OMPILE_OPTS) -c aca0: 24 28 54 4f 50 29 5c 73 72 63 5c 68 61 73 68 2e$(TOP)\src\hash.
acb0: 63 0a 0a 69 6e 73 65 72 74 2e 6c 6f 3a 09 24 28 c..insert.lo:.$( acc0: 54 4f 50 29 5c 73 72 63 5c 69 6e 73 65 72 74 2e TOP)\src\insert. acd0: 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 43 4f c$(HDR)..$(LTCO ace0: 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f MPILE)$(CORE_CO
acf0: 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 MPILE_OPTS) -c $ad00: 28 54 4f 50 29 5c 73 72 63 5c 69 6e 73 65 72 74 (TOP)\src\insert ad10: 2e 63 0a 0a 6a 6f 75 72 6e 61 6c 2e 6c 6f 3a 09 .c..journal.lo:. ad20: 24 28 54 4f 50 29 5c 73 72 63 5c 6a 6f 75 72 6e$(TOP)\src\journ
ad30: 61 6c 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c al.c $(HDR)..$(L
ad40: 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 TCOMPILE) $(CORE ad50: 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d _COMPILE_OPTS) - ad60: 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 6a 6f 75 c$(TOP)\src\jou
ad70: 72 6e 61 6c 2e 63 0a 0a 6c 65 67 61 63 79 2e 6c rnal.c..legacy.l
ad80: 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 6c 65 o:.$(TOP)\src\le ad90: 67 61 63 79 2e 63 20 24 28 48 44 52 29 0a 09 24 gacy.c$(HDR)..$ada0: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f (LTCOMPILE)$(CO
adb0: 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 RE_COMPILE_OPTS)
adc0: 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 6c -c $(TOP)\src\l add0: 65 67 61 63 79 2e 63 0a 0a 6c 6f 61 64 65 78 74 egacy.c..loadext ade0: 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c .lo:.$(TOP)\src\
adf0: 6c 6f 61 64 65 78 74 2e 63 20 24 28 48 44 52 29 loadext.c $(HDR) ae00: 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 ..$(LTCOMPILE) $ae10: 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 (CORE_COMPILE_OP ae20: 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 TS) -c$(TOP)\sr
ae30: 63 5c 6c 6f 61 64 65 78 74 2e 63 0a 0a 6d 61 69 c\loadext.c..mai
ae40: 6e 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 n.lo:.$(TOP)\src ae50: 5c 6d 61 69 6e 2e 63 20 24 28 48 44 52 29 0a 09 \main.c$(HDR)..
ae60: 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 $(LTCOMPILE)$(C
ae70: 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 ORE_COMPILE_OPTS
ae80: 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c ) -c $(TOP)\src\ ae90: 6d 61 69 6e 2e 63 0a 0a 6d 61 6c 6c 6f 63 2e 6c main.c..malloc.l aea0: 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 6d 61 o:.$(TOP)\src\ma
aeb0: 6c 6c 6f 63 2e 63 20 24 28 48 44 52 29 0a 09 24 lloc.c $(HDR)..$
aec0: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f (LTCOMPILE) $(CO aed0: 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 RE_COMPILE_OPTS) aee0: 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 6d -c$(TOP)\src\m
aef0: 61 6c 6c 6f 63 2e 63 0a 0a 6d 65 6d 30 2e 6c 6f alloc.c..mem0.lo
af00: 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 6d 65 6d :.$(TOP)\src\mem af10: 30 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 0.c$(HDR)..$(LT af20: 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f COMPILE)$(CORE_
af30: 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 COMPILE_OPTS) -c
af40: 20 24 28 54 4f 50 29 5c 73 72 63 5c 6d 65 6d 30 $(TOP)\src\mem0 af50: 2e 63 0a 0a 6d 65 6d 31 2e 6c 6f 3a 09 24 28 54 .c..mem1.lo:.$(T
af60: 4f 50 29 5c 73 72 63 5c 6d 65 6d 31 2e 63 20 24 OP)\src\mem1.c $af70: 28 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 (HDR)..$(LTCOMPI
af80: 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 LE) $(CORE_COMPI af90: 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f LE_OPTS) -c$(TO
afa0: 50 29 5c 73 72 63 5c 6d 65 6d 31 2e 63 0a 0a 6d P)\src\mem1.c..m
afb0: 65 6d 32 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 em2.lo:.$(TOP)\s afc0: 72 63 5c 6d 65 6d 32 2e 63 20 24 28 48 44 52 29 rc\mem2.c$(HDR)
afd0: 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 ..$(LTCOMPILE)$
afe0: 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 (CORE_COMPILE_OP
aff0: 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 TS) -c $(TOP)\sr b000: 63 5c 6d 65 6d 32 2e 63 0a 0a 6d 65 6d 33 2e 6c c\mem2.c..mem3.l b010: 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 6d 65 o:.$(TOP)\src\me
b020: 6d 33 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c m3.c $(HDR)..$(L
b030: 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 TCOMPILE) $(CORE b040: 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d _COMPILE_OPTS) - b050: 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 6d 65 6d c$(TOP)\src\mem
b060: 33 2e 63 0a 0a 6d 65 6d 35 2e 6c 6f 3a 09 24 28 3.c..mem5.lo:.$( b070: 54 4f 50 29 5c 73 72 63 5c 6d 65 6d 35 2e 63 20 TOP)\src\mem5.c b080: 24 28 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50$(HDR)..$(LTCOMP b090: 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 ILE)$(CORE_COMP
b0a0: 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 ILE_OPTS) -c $(T b0b0: 4f 50 29 5c 73 72 63 5c 6d 65 6d 35 2e 63 0a 0a OP)\src\mem5.c.. b0c0: 6d 65 6d 6a 6f 75 72 6e 61 6c 2e 6c 6f 3a 09 24 memjournal.lo:.$
b0d0: 28 54 4f 50 29 5c 73 72 63 5c 6d 65 6d 6a 6f 75 (TOP)\src\memjou
b0e0: 72 6e 61 6c 2e 63 20 24 28 48 44 52 29 0a 09 24 rnal.c $(HDR)..$
b0f0: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f (LTCOMPILE) $(CO b100: 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 RE_COMPILE_OPTS) b110: 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 6d -c$(TOP)\src\m
b120: 65 6d 6a 6f 75 72 6e 61 6c 2e 63 0a 0a 6d 75 74 emjournal.c..mut
b130: 65 78 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 ex.lo:.$(TOP)\sr b140: 63 5c 6d 75 74 65 78 2e 63 20 24 28 48 44 52 29 c\mutex.c$(HDR)
b150: 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 ..$(LTCOMPILE)$
b160: 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 (CORE_COMPILE_OP
b170: 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 TS) -c $(TOP)\sr b180: 63 5c 6d 75 74 65 78 2e 63 0a 0a 6d 75 74 65 78 c\mutex.c..mutex b190: 5f 6e 6f 6f 70 2e 6c 6f 3a 09 24 28 54 4f 50 29 _noop.lo:.$(TOP)
b1a0: 5c 73 72 63 5c 6d 75 74 65 78 5f 6e 6f 6f 70 2e \src\mutex_noop.
b1b0: 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 43 4f c $(HDR)..$(LTCO
b1c0: 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f MPILE) $(CORE_CO b1d0: 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 MPILE_OPTS) -c$
b1e0: 28 54 4f 50 29 5c 73 72 63 5c 6d 75 74 65 78 5f (TOP)\src\mutex_
b1f0: 6e 6f 6f 70 2e 63 0a 0a 6d 75 74 65 78 5f 75 6e noop.c..mutex_un
b200: 69 78 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 ix.lo:.$(TOP)\sr b210: 63 5c 6d 75 74 65 78 5f 75 6e 69 78 2e 63 20 24 c\mutex_unix.c$
b220: 28 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 (HDR)..$(LTCOMPI b230: 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 LE)$(CORE_COMPI
b240: 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f LE_OPTS) -c $(TO b250: 50 29 5c 73 72 63 5c 6d 75 74 65 78 5f 75 6e 69 P)\src\mutex_uni b260: 78 2e 63 0a 0a 6d 75 74 65 78 5f 77 33 32 2e 6c x.c..mutex_w32.l b270: 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 6d 75 o:.$(TOP)\src\mu
b280: 74 65 78 5f 77 33 32 2e 63 20 24 28 48 44 52 29 tex_w32.c $(HDR) b290: 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 ..$(LTCOMPILE) $b2a0: 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 (CORE_COMPILE_OP b2b0: 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 TS) -c$(TOP)\sr
b2c0: 63 5c 6d 75 74 65 78 5f 77 33 32 2e 63 0a 0a 6e c\mutex_w32.c..n
b2d0: 6f 74 69 66 79 2e 6c 6f 3a 09 24 28 54 4f 50 29 otify.lo:.$(TOP) b2e0: 5c 73 72 63 5c 6e 6f 74 69 66 79 2e 63 20 24 28 \src\notify.c$(
b2f0: 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c HDR)..$(LTCOMPIL b300: 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c E)$(CORE_COMPIL
b310: 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 E_OPTS) -c $(TOP b320: 29 5c 73 72 63 5c 6e 6f 74 69 66 79 2e 63 0a 0a )\src\notify.c.. b330: 70 61 67 65 72 2e 6c 6f 3a 09 24 28 54 4f 50 29 pager.lo:.$(TOP)
b340: 5c 73 72 63 5c 70 61 67 65 72 2e 63 20 24 28 48 \src\pager.c $(H b350: 44 52 29 20 24 28 54 4f 50 29 5c 73 72 63 5c 70 DR)$(TOP)\src\p
b360: 61 67 65 72 2e 68 0a 09 24 28 4c 54 43 4f 4d 50 ager.h..$(LTCOMP b370: 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 ILE)$(CORE_COMP
b380: 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 ILE_OPTS) -c $(T b390: 4f 50 29 5c 73 72 63 5c 70 61 67 65 72 2e 63 0a OP)\src\pager.c. b3a0: 0a 70 63 61 63 68 65 2e 6c 6f 3a 09 24 28 54 4f .pcache.lo:.$(TO
b3b0: 50 29 5c 73 72 63 5c 70 63 61 63 68 65 2e 63 20 P)\src\pcache.c
b3c0: 24 28 48 44 52 29 20 24 28 54 4f 50 29 5c 73 72 $(HDR)$(TOP)\sr
b3d0: 63 5c 70 63 61 63 68 65 2e 68 0a 09 24 28 4c 54 c\pcache.h..$(LT b3e0: 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f COMPILE)$(CORE_
b3f0: 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 COMPILE_OPTS) -c
b400: 20 24 28 54 4f 50 29 5c 73 72 63 5c 70 63 61 63 $(TOP)\src\pcac b410: 68 65 2e 63 0a 0a 70 63 61 63 68 65 31 2e 6c 6f he.c..pcache1.lo b420: 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 70 63 61 :.$(TOP)\src\pca
b430: 63 68 65 31 2e 63 20 24 28 48 44 52 29 20 24 28 che1.c $(HDR)$(
b440: 54 4f 50 29 5c 73 72 63 5c 70 63 61 63 68 65 2e TOP)\src\pcache.
b450: 68 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 h..$(LTCOMPILE) b460: 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f$(CORE_COMPILE_O
b470: 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 PTS) -c $(TOP)\s b480: 72 63 5c 70 63 61 63 68 65 31 2e 63 0a 0a 6f 73 rc\pcache1.c..os b490: 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c .lo:.$(TOP)\src\
b4a0: 6f 73 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c os.c $(HDR)..$(L
b4b0: 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 TCOMPILE) $(CORE b4c0: 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d _COMPILE_OPTS) - b4d0: 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 6f 73 2e c$(TOP)\src\os.
b4e0: 63 0a 0a 6f 73 5f 75 6e 69 78 2e 6c 6f 3a 09 24 c..os_unix.lo:.$b4f0: 28 54 4f 50 29 5c 73 72 63 5c 6f 73 5f 75 6e 69 (TOP)\src\os_uni b500: 78 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 x.c$(HDR)..$(LT b510: 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f COMPILE)$(CORE_
b520: 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 COMPILE_OPTS) -c
b530: 20 24 28 54 4f 50 29 5c 73 72 63 5c 6f 73 5f 75 $(TOP)\src\os_u b540: 6e 69 78 2e 63 0a 0a 6f 73 5f 77 69 6e 2e 6c 6f nix.c..os_win.lo b550: 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 6f 73 5f :.$(TOP)\src\os_
b560: 77 69 6e 2e 63 20 24 28 48 44 52 29 0a 09 24 28 win.c $(HDR)..$(
b570: 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 LTCOMPILE) $(COR b580: 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 E_COMPILE_OPTS) b590: 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 6f 73 -c$(TOP)\src\os
b5a0: 5f 77 69 6e 2e 63 0a 0a 70 72 61 67 6d 61 2e 6c _win.c..pragma.l
b5b0: 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 70 72 o:.$(TOP)\src\pr b5c0: 61 67 6d 61 2e 63 20 24 28 48 44 52 29 0a 09 24 agma.c$(HDR)..$b5d0: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f (LTCOMPILE)$(CO
b5e0: 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 RE_COMPILE_OPTS)
b5f0: 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 70 -c $(TOP)\src\p b600: 72 61 67 6d 61 2e 63 0a 0a 70 72 65 70 61 72 65 ragma.c..prepare b610: 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c .lo:.$(TOP)\src\
b620: 70 72 65 70 61 72 65 2e 63 20 24 28 48 44 52 29 prepare.c $(HDR) b630: 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 ..$(LTCOMPILE) $b640: 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 (CORE_COMPILE_OP b650: 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 TS) -c$(TOP)\sr
b660: 63 5c 70 72 65 70 61 72 65 2e 63 0a 0a 70 72 69 c\prepare.c..pri
b670: 6e 74 66 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 ntf.lo:.$(TOP)\s b680: 72 63 5c 70 72 69 6e 74 66 2e 63 20 24 28 48 44 rc\printf.c$(HD
b690: 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 R)..$(LTCOMPILE) b6a0: 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f$(CORE_COMPILE_
b6b0: 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c OPTS) -c $(TOP)\ b6c0: 73 72 63 5c 70 72 69 6e 74 66 2e 63 0a 0a 72 61 src\printf.c..ra b6d0: 6e 64 6f 6d 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c ndom.lo:.$(TOP)\
b6e0: 73 72 63 5c 72 61 6e 64 6f 6d 2e 63 20 24 28 48 src\random.c $(H b6f0: 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 DR)..$(LTCOMPILE
b700: 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 ) $(CORE_COMPILE b710: 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 _OPTS) -c$(TOP)
b720: 5c 73 72 63 5c 72 61 6e 64 6f 6d 2e 63 0a 0a 72 \src\random.c..r
b730: 65 73 6f 6c 76 65 2e 6c 6f 3a 09 24 28 54 4f 50 esolve.lo:.$(TOP b740: 29 5c 73 72 63 5c 72 65 73 6f 6c 76 65 2e 63 20 )\src\resolve.c b750: 24 28 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50$(HDR)..$(LTCOMP b760: 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 ILE)$(CORE_COMP
b770: 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 ILE_OPTS) -c $(T b780: 4f 50 29 5c 73 72 63 5c 72 65 73 6f 6c 76 65 2e OP)\src\resolve. b790: 63 0a 0a 72 6f 77 73 65 74 2e 6c 6f 3a 09 24 28 c..rowset.lo:.$(
b7a0: 54 4f 50 29 5c 73 72 63 5c 72 6f 77 73 65 74 2e TOP)\src\rowset.
b7b0: 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 43 4f c $(HDR)..$(LTCO
b7c0: 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f MPILE) $(CORE_CO b7d0: 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 MPILE_OPTS) -c$
b7e0: 28 54 4f 50 29 5c 73 72 63 5c 72 6f 77 73 65 74 (TOP)\src\rowset
b7f0: 2e 63 0a 0a 73 65 6c 65 63 74 2e 6c 6f 3a 09 24 .c..select.lo:.$b800: 28 54 4f 50 29 5c 73 72 63 5c 73 65 6c 65 63 74 (TOP)\src\select b810: 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 43 .c$(HDR)..$(LTC b820: 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 OMPILE)$(CORE_C
b830: 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 OMPILE_OPTS) -c
b840: 24 28 54 4f 50 29 5c 73 72 63 5c 73 65 6c 65 63 $(TOP)\src\selec b850: 74 2e 63 0a 0a 73 74 61 74 75 73 2e 6c 6f 3a 09 t.c..status.lo:. b860: 24 28 54 4f 50 29 5c 73 72 63 5c 73 74 61 74 75$(TOP)\src\statu
b870: 73 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 s.c $(HDR)..$(LT
b880: 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f COMPILE) $(CORE_ b890: 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 COMPILE_OPTS) -c b8a0: 20 24 28 54 4f 50 29 5c 73 72 63 5c 73 74 61 74$(TOP)\src\stat
b8b0: 75 73 2e 63 0a 0a 74 61 62 6c 65 2e 6c 6f 3a 09 us.c..table.lo:.
b8c0: 24 28 54 4f 50 29 5c 73 72 63 5c 74 61 62 6c 65 $(TOP)\src\table b8d0: 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 43 .c$(HDR)..$(LTC b8e0: 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 OMPILE)$(CORE_C
b8f0: 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 OMPILE_OPTS) -c
b900: 24 28 54 4f 50 29 5c 73 72 63 5c 74 61 62 6c 65 $(TOP)\src\table b910: 2e 63 0a 0a 74 68 72 65 61 64 73 2e 6c 6f 3a 09 .c..threads.lo:. b920: 24 28 54 4f 50 29 5c 73 72 63 5c 74 68 72 65 61$(TOP)\src\threa
b930: 64 73 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c ds.c $(HDR)..$(L
b940: 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 TCOMPILE) $(CORE b950: 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d _COMPILE_OPTS) - b960: 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 68 72 c$(TOP)\src\thr
b970: 65 61 64 73 2e 63 0a 0a 74 6f 6b 65 6e 69 7a 65 eads.c..tokenize
b980: 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c .lo:.$(TOP)\src\ b990: 74 6f 6b 65 6e 69 7a 65 2e 63 20 6b 65 79 77 6f tokenize.c keywo b9a0: 72 64 68 61 73 68 2e 68 20 24 28 48 44 52 29 0a rdhash.h$(HDR).
b9b0: 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 .$(LTCOMPILE)$(
b9c0: 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 CORE_COMPILE_OPT
b9d0: 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 S) -c $(TOP)\src b9e0: 5c 74 6f 6b 65 6e 69 7a 65 2e 63 0a 0a 74 72 65 \tokenize.c..tre b9f0: 65 76 69 65 77 2e 6c 6f 3a 09 24 28 54 4f 50 29 eview.lo:.$(TOP)
ba00: 5c 73 72 63 5c 74 72 65 65 76 69 65 77 2e 63 20 \src\treeview.c
ba10: 24 28 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 $(HDR)..$(LTCOMP
ba20: 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 ILE) $(CORE_COMP ba30: 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 ILE_OPTS) -c$(T
ba40: 4f 50 29 5c 73 72 63 5c 74 72 65 65 76 69 65 77 OP)\src\treeview
ba50: 2e 63 0a 0a 74 72 69 67 67 65 72 2e 6c 6f 3a 09 .c..trigger.lo:.
ba60: 24 28 54 4f 50 29 5c 73 72 63 5c 74 72 69 67 67 $(TOP)\src\trigg ba70: 65 72 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c er.c$(HDR)..$(L ba80: 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 TCOMPILE)$(CORE
ba90: 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d _COMPILE_OPTS) -
baa0: 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 72 69 c $(TOP)\src\tri bab0: 67 67 65 72 2e 63 0a 0a 75 70 64 61 74 65 2e 6c gger.c..update.l bac0: 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 75 70 o:.$(TOP)\src\up
bad0: 64 61 74 65 2e 63 20 24 28 48 44 52 29 0a 09 24 date.c $(HDR)..$
bae0: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f (LTCOMPILE) $(CO baf0: 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 RE_COMPILE_OPTS) bb00: 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 75 -c$(TOP)\src\u
bb10: 70 64 61 74 65 2e 63 0a 0a 75 74 66 2e 6c 6f 3a pdate.c..utf.lo:
bb20: 09 24 28 54 4f 50 29 5c 73 72 63 5c 75 74 66 2e .$(TOP)\src\utf. bb30: 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 43 4f c$(HDR)..$(LTCO bb40: 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f MPILE)$(CORE_CO
bb50: 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 MPILE_OPTS) -c $bb60: 28 54 4f 50 29 5c 73 72 63 5c 75 74 66 2e 63 0a (TOP)\src\utf.c. bb70: 0a 75 74 69 6c 2e 6c 6f 3a 09 24 28 54 4f 50 29 .util.lo:.$(TOP)
bb80: 5c 73 72 63 5c 75 74 69 6c 2e 63 20 24 28 48 44 \src\util.c $(HD bb90: 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 R)..$(LTCOMPILE)
bba0: 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f $(CORE_COMPILE_ bbb0: 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c OPTS) -c$(TOP)\
bbc0: 73 72 63 5c 75 74 69 6c 2e 63 0a 0a 76 61 63 75 src\util.c..vacu
bbd0: 75 6d 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 um.lo:.$(TOP)\sr bbe0: 63 5c 76 61 63 75 75 6d 2e 63 20 24 28 48 44 52 c\vacuum.c$(HDR
bbf0: 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 )..$(LTCOMPILE) bc00: 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f$(CORE_COMPILE_O
bc10: 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 PTS) -c $(TOP)\s bc20: 72 63 5c 76 61 63 75 75 6d 2e 63 0a 0a 76 64 62 rc\vacuum.c..vdb bc30: 65 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 e.lo:.$(TOP)\src
bc40: 5c 76 64 62 65 2e 63 20 24 28 48 44 52 29 0a 09 \vdbe.c $(HDR).. bc50: 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43$(LTCOMPILE) $(C bc60: 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 ORE_COMPILE_OPTS bc70: 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c ) -c$(TOP)\src\
bc80: 76 64 62 65 2e 63 0a 0a 76 64 62 65 61 70 69 2e vdbe.c..vdbeapi.
bc90: 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 76 lo:.$(TOP)\src\v bca0: 64 62 65 61 70 69 2e 63 20 24 28 48 44 52 29 0a dbeapi.c$(HDR).
bcb0: 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 .$(LTCOMPILE)$(
bcc0: 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 CORE_COMPILE_OPT
bcd0: 53 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 S) -c $(TOP)\src bce0: 5c 76 64 62 65 61 70 69 2e 63 0a 0a 76 64 62 65 \vdbeapi.c..vdbe bcf0: 61 75 78 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 aux.lo:.$(TOP)\s
bd00: 72 63 5c 76 64 62 65 61 75 78 2e 63 20 24 28 48 rc\vdbeaux.c $(H bd10: 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 DR)..$(LTCOMPILE
bd20: 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 ) $(CORE_COMPILE bd30: 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 _OPTS) -c$(TOP)
bd40: 5c 73 72 63 5c 76 64 62 65 61 75 78 2e 63 0a 0a \src\vdbeaux.c..
bd50: 76 64 62 65 62 6c 6f 62 2e 6c 6f 3a 09 24 28 54 vdbeblob.lo:.$(T bd60: 4f 50 29 5c 73 72 63 5c 76 64 62 65 62 6c 6f 62 OP)\src\vdbeblob bd70: 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 43 .c$(HDR)..$(LTC bd80: 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 OMPILE)$(CORE_C
bd90: 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 OMPILE_OPTS) -c
bda0: 24 28 54 4f 50 29 5c 73 72 63 5c 76 64 62 65 62 $(TOP)\src\vdbeb bdb0: 6c 6f 62 2e 63 0a 0a 76 64 62 65 6d 65 6d 2e 6c lob.c..vdbemem.l bdc0: 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 76 64 o:.$(TOP)\src\vd
bdd0: 62 65 6d 65 6d 2e 63 20 24 28 48 44 52 29 0a 09 bemem.c $(HDR).. bde0: 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43$(LTCOMPILE) $(C bdf0: 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 ORE_COMPILE_OPTS be00: 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c ) -c$(TOP)\src\
be10: 76 64 62 65 6d 65 6d 2e 63 0a 0a 76 64 62 65 73 vdbemem.c..vdbes
be20: 6f 72 74 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 ort.lo:.$(TOP)\s be30: 72 63 5c 76 64 62 65 73 6f 72 74 2e 63 20 24 28 rc\vdbesort.c$(
be40: 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c HDR)..$(LTCOMPIL be50: 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c E)$(CORE_COMPIL
be60: 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 E_OPTS) -c $(TOP be70: 29 5c 73 72 63 5c 76 64 62 65 73 6f 72 74 2e 63 )\src\vdbesort.c be80: 0a 0a 76 64 62 65 74 72 61 63 65 2e 6c 6f 3a 09 ..vdbetrace.lo:. be90: 24 28 54 4f 50 29 5c 73 72 63 5c 76 64 62 65 74$(TOP)\src\vdbet
bea0: 72 61 63 65 2e 63 20 24 28 48 44 52 29 0a 09 24 race.c $(HDR)..$
beb0: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f (LTCOMPILE) $(CO bec0: 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 RE_COMPILE_OPTS) bed0: 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 76 -c$(TOP)\src\v
bee0: 64 62 65 74 72 61 63 65 2e 63 0a 0a 76 74 61 62 dbetrace.c..vtab
bef0: 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c .lo:.$(TOP)\src\ bf00: 76 74 61 62 2e 63 20 24 28 48 44 52 29 0a 09 24 vtab.c$(HDR)..$bf10: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f (LTCOMPILE)$(CO
bf20: 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 RE_COMPILE_OPTS)
bf30: 20 2d 63 20 24 28 54 4f 50 29 5c 73 72 63 5c 76 -c $(TOP)\src\v bf40: 74 61 62 2e 63 0a 0a 77 61 6c 2e 6c 6f 3a 09 24 tab.c..wal.lo:.$
bf50: 28 54 4f 50 29 5c 73 72 63 5c 77 61 6c 2e 63 20 (TOP)\src\wal.c
bf60: 24 28 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 $(HDR)..$(LTCOMP
bf70: 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 ILE) $(CORE_COMP bf80: 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 ILE_OPTS) -c$(T
bf90: 4f 50 29 5c 73 72 63 5c 77 61 6c 2e 63 0a 0a 77 OP)\src\wal.c..w
bfa0: 61 6c 6b 65 72 2e 6c 6f 3a 09 24 28 54 4f 50 29 alker.lo:.$(TOP) bfb0: 5c 73 72 63 5c 77 61 6c 6b 65 72 2e 63 20 24 28 \src\walker.c$(
bfc0: 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c HDR)..$(LTCOMPIL bfd0: 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c E)$(CORE_COMPIL
bfe0: 45 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 E_OPTS) -c $(TOP bff0: 29 5c 73 72 63 5c 77 61 6c 6b 65 72 2e 63 0a 0a )\src\walker.c.. c000: 77 68 65 72 65 2e 6c 6f 3a 09 24 28 54 4f 50 29 where.lo:.$(TOP)
c010: 5c 73 72 63 5c 77 68 65 72 65 2e 63 20 24 28 48 \src\where.c $(H c020: 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 DR)..$(LTCOMPILE
c030: 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 ) $(CORE_COMPILE c040: 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 _OPTS) -c$(TOP)
c050: 5c 73 72 63 5c 77 68 65 72 65 2e 63 0a 0a 77 68 \src\where.c..wh
c060: 65 72 65 63 6f 64 65 2e 6c 6f 3a 09 24 28 54 4f erecode.lo:.$(TO c070: 50 29 5c 73 72 63 5c 77 68 65 72 65 63 6f 64 65 P)\src\wherecode c080: 2e 63 20 24 28 48 44 52 29 0a 09 24 28 4c 54 43 .c$(HDR)..$(LTC c090: 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 OMPILE)$(CORE_C
c0a0: 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 2d 63 20 OMPILE_OPTS) -c
c0b0: 24 28 54 4f 50 29 5c 73 72 63 5c 77 68 65 72 65 $(TOP)\src\where c0c0: 63 6f 64 65 2e 63 0a 0a 77 68 65 72 65 65 78 70 code.c..whereexp c0d0: 72 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 r.lo:.$(TOP)\src
c0e0: 5c 77 68 65 72 65 65 78 70 72 2e 63 20 24 28 48 \whereexpr.c $(H c0f0: 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 DR)..$(LTCOMPILE
c100: 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 ) $(CORE_COMPILE c110: 5f 4f 50 54 53 29 20 2d 63 20 24 28 54 4f 50 29 _OPTS) -c$(TOP)
c120: 5c 73 72 63 5c 77 68 65 72 65 65 78 70 72 2e 63 \src\whereexpr.c
c130: 0a 0a 74 63 6c 73 71 6c 69 74 65 2e 6c 6f 3a 09 ..tclsqlite.lo:.
c140: 24 28 54 4f 50 29 5c 73 72 63 5c 74 63 6c 73 71 $(TOP)\src\tclsq c150: 6c 69 74 65 2e 63 20 24 28 48 44 52 29 0a 09 24 lite.c$(HDR)..$c160: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 4e 4f (LTCOMPILE)$(NO
c170: 5f 57 41 52 4e 29 20 2d 44 55 53 45 5f 54 43 4c _WARN) -DUSE_TCL
c180: 5f 53 54 55 42 53 3d 31 20 2d 44 42 55 49 4c 44 _STUBS=1 -DBUILD
c190: 5f 73 71 6c 69 74 65 20 2d 49 24 28 54 43 4c 49 _sqlite -I$(TCLI c1a0: 4e 43 44 49 52 29 20 2d 63 20 24 28 54 4f 50 29 NCDIR) -c$(TOP)
c1b0: 5c 73 72 63 5c 74 63 6c 73 71 6c 69 74 65 2e 63 \src\tclsqlite.c
c1c0: 0a 0a 74 63 6c 73 71 6c 69 74 65 2d 73 68 65 6c ..tclsqlite-shel
c1d0: 6c 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 73 72 63 l.lo:.$(TOP)\src c1e0: 5c 74 63 6c 73 71 6c 69 74 65 2e 63 20 24 28 48 \tclsqlite.c$(H
c1f0: 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 DR)..$(LTCOMPILE c200: 29 20 24 28 4e 4f 5f 57 41 52 4e 29 20 2d 44 54 )$(NO_WARN) -DT
c210: 43 4c 53 48 3d 31 20 2d 44 42 55 49 4c 44 5f 73 CLSH=1 -DBUILD_s
c220: 71 6c 69 74 65 20 2d 49 24 28 54 43 4c 49 4e 43 qlite -I$(TCLINC c230: 44 49 52 29 20 2d 63 20 24 28 54 4f 50 29 5c 73 DIR) -c$(TOP)\s
c240: 72 63 5c 74 63 6c 73 71 6c 69 74 65 2e 63 0a 0a rc\tclsqlite.c..
c250: 74 63 6c 73 71 6c 69 74 65 33 2e 65 78 65 3a 09 tclsqlite3.exe:.
c260: 74 63 6c 73 71 6c 69 74 65 2d 73 68 65 6c 6c 2e tclsqlite-shell.
c270: 6c 6f 20 24 28 53 51 4c 49 54 45 33 43 29 20 24 lo $(SQLITE3C)$
c280: 28 53 51 4c 49 54 45 33 48 29 20 24 28 4c 49 42 (SQLITE3H) $(LIB c290: 52 45 53 4f 42 4a 53 29 0a 09 24 28 4c 54 4c 49 RESOBJS)..$(LTLI
c2a0: 4e 4b 29 20 24 28 53 51 4c 49 54 45 33 43 29 20 NK) $(SQLITE3C) c2b0: 2f 6c 69 6e 6b 20 24 28 4c 44 46 4c 41 47 53 29 /link$(LDFLAGS)
c2c0: 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 20 24 $(LTLINKOPTS)$
c2d0: 28 4c 54 4c 49 42 50 41 54 48 53 29 20 2f 4f 55 (LTLIBPATHS) /OU
c2e0: 54 3a 24 40 20 74 63 6c 73 71 6c 69 74 65 2d 73 T:$@ tclsqlite-s c2f0: 68 65 6c 6c 2e 6c 6f 20 24 28 4c 49 42 52 45 53 hell.lo$(LIBRES
c300: 4f 42 4a 53 29 20 24 28 4c 54 4c 49 42 53 29 20 OBJS) $(LTLIBS) c310: 24 28 54 4c 49 42 53 29 0a 0a 23 20 52 75 6c 65$(TLIBS)..# Rule
c320: 73 20 74 6f 20 62 75 69 6c 64 20 6f 70 63 6f 64 s to build opcod
c330: 65 73 2e 63 20 61 6e 64 20 6f 70 63 6f 64 65 73 es.c and opcodes
c340: 2e 68 0a 23 0a 6f 70 63 6f 64 65 73 2e 63 3a 09 .h.#.opcodes.c:.
c350: 6f 70 63 6f 64 65 73 2e 68 20 24 28 54 4f 50 29 opcodes.h $(TOP) c360: 5c 74 6f 6f 6c 5c 6d 6b 6f 70 63 6f 64 65 63 2e \tool\mkopcodec. c370: 74 63 6c 0a 09 24 28 54 43 4c 53 48 5f 43 4d 44 tcl..$(TCLSH_CMD
c380: 29 20 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c 6d 6b ) $(TOP)\tool\mk c390: 6f 70 63 6f 64 65 63 2e 74 63 6c 20 6f 70 63 6f opcodec.tcl opco c3a0: 64 65 73 2e 68 20 3e 20 6f 70 63 6f 64 65 73 2e des.h > opcodes. c3b0: 63 0a 0a 6f 70 63 6f 64 65 73 2e 68 3a 09 70 61 c..opcodes.h:.pa c3c0: 72 73 65 2e 68 20 24 28 54 4f 50 29 5c 73 72 63 rse.h$(TOP)\src
c3d0: 5c 76 64 62 65 2e 63 20 24 28 54 4f 50 29 5c 74 \vdbe.c $(TOP)\t c3e0: 6f 6f 6c 5c 6d 6b 6f 70 63 6f 64 65 68 2e 74 63 ool\mkopcodeh.tc c3f0: 6c 0a 09 74 79 70 65 20 70 61 72 73 65 2e 68 20 l..type parse.h c400: 24 28 54 4f 50 29 5c 73 72 63 5c 76 64 62 65 2e$(TOP)\src\vdbe.
c410: 63 20 7c 20 24 28 54 43 4c 53 48 5f 43 4d 44 29 c | $(TCLSH_CMD) c420: 20 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c 6d 6b 6f$(TOP)\tool\mko
c430: 70 63 6f 64 65 68 2e 74 63 6c 20 3e 20 6f 70 63 pcodeh.tcl > opc
c440: 6f 64 65 73 2e 68 0a 0a 23 20 52 75 6c 65 73 20 odes.h..# Rules
c450: 74 6f 20 62 75 69 6c 64 20 70 61 72 73 65 2e 63 to build parse.c
c460: 20 61 6e 64 20 70 61 72 73 65 2e 68 20 2d 20 74 and parse.h - t
c470: 68 65 20 6f 75 74 70 75 74 73 20 6f 66 20 6c 65 he outputs of le
c480: 6d 6f 6e 2e 0a 23 0a 70 61 72 73 65 2e 68 3a 09 mon..#.parse.h:.
c490: 70 61 72 73 65 2e 63 0a 0a 70 61 72 73 65 2e 63 parse.c..parse.c
c4a0: 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 70 61 72 :.$(TOP)\src\par c4b0: 73 65 2e 79 20 6c 65 6d 6f 6e 2e 65 78 65 20 24 se.y lemon.exe$
c4c0: 28 54 4f 50 29 5c 74 6f 6f 6c 5c 61 64 64 6f 70 (TOP)\tool\addop
c4d0: 63 6f 64 65 73 2e 74 63 6c 0a 09 64 65 6c 20 2f codes.tcl..del /
c4e0: 51 20 70 61 72 73 65 2e 79 20 70 61 72 73 65 2e Q parse.y parse.
c4f0: 68 20 70 61 72 73 65 2e 68 2e 74 65 6d 70 20 32 h parse.h.temp 2
c500: 3e 4e 55 4c 0a 09 63 6f 70 79 20 24 28 54 4f 50 >NUL..copy $(TOP c510: 29 5c 73 72 63 5c 70 61 72 73 65 2e 79 20 2e 0a )\src\parse.y .. c520: 09 2e 5c 6c 65 6d 6f 6e 2e 65 78 65 20 24 28 52 ..\lemon.exe$(R
c530: 45 51 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 53 EQ_FEATURE_FLAGS
c540: 29 20 24 28 4f 50 54 5f 46 45 41 54 55 52 45 5f ) $(OPT_FEATURE_ c550: 46 4c 41 47 53 29 20 24 28 45 58 54 5f 46 45 41 FLAGS)$(EXT_FEA
c560: 54 55 52 45 5f 46 4c 41 47 53 29 20 24 28 4f 50 TURE_FLAGS) $(OP c570: 54 53 29 20 70 61 72 73 65 2e 79 0a 09 6d 6f 76 TS) parse.y..mov c580: 65 20 70 61 72 73 65 2e 68 20 70 61 72 73 65 2e e parse.h parse. c590: 68 2e 74 65 6d 70 0a 09 24 28 54 43 4c 53 48 5f h.temp..$(TCLSH_
c5a0: 43 4d 44 29 20 24 28 54 4f 50 29 5c 74 6f 6f 6c CMD) $(TOP)\tool c5b0: 5c 61 64 64 6f 70 63 6f 64 65 73 2e 74 63 6c 20 \addopcodes.tcl c5c0: 70 61 72 73 65 2e 68 2e 74 65 6d 70 20 3e 20 70 parse.h.temp > p c5d0: 61 72 73 65 2e 68 0a 0a 24 28 53 51 4c 49 54 45 arse.h..$(SQLITE
c5e0: 33 48 29 3a 09 24 28 54 4f 50 29 5c 73 72 63 5c 3H):.$(TOP)\src\ c5f0: 73 71 6c 69 74 65 2e 68 2e 69 6e 20 24 28 54 4f sqlite.h.in$(TO
c600: 50 29 5c 6d 61 6e 69 66 65 73 74 2e 75 75 69 64 P)\manifest.uuid
c610: 20 24 28 54 4f 50 29 5c 56 45 52 53 49 4f 4e 0a $(TOP)\VERSION. c620: 09 24 28 54 43 4c 53 48 5f 43 4d 44 29 20 24 28 .$(TCLSH_CMD) $( c630: 54 4f 50 29 5c 74 6f 6f 6c 5c 6d 6b 73 71 6c 69 TOP)\tool\mksqli c640: 74 65 33 68 2e 74 63 6c 20 24 28 54 4f 50 3a 5c te3h.tcl$(TOP:\
c650: 3d 2f 29 20 3e 20 24 28 53 51 4c 49 54 45 33 48 =/) > $(SQLITE3H c660: 29 0a 0a 73 71 6c 69 74 65 33 65 78 74 2e 68 3a )..sqlite3ext.h: c670: 20 2e 74 61 72 67 65 74 5f 73 6f 75 72 63 65 0a .target_source. c680: 09 63 6f 70 79 20 74 73 72 63 5c 73 71 6c 69 74 .copy tsrc\sqlit c690: 65 33 65 78 74 2e 68 20 2e 0a 0a 6d 6b 6b 65 79 e3ext.h ...mkkey c6a0: 77 6f 72 64 68 61 73 68 2e 65 78 65 3a 09 24 28 wordhash.exe:.$(
c6b0: 54 4f 50 29 5c 74 6f 6f 6c 5c 6d 6b 6b 65 79 77 TOP)\tool\mkkeyw
c6c0: 6f 72 64 68 61 73 68 2e 63 0a 09 24 28 42 43 43 ordhash.c..$(BCC c6d0: 29 20 24 28 4e 4f 5f 57 41 52 4e 29 20 2d 46 65 )$(NO_WARN) -Fe
c6e0: 24 40 20 24 28 52 45 51 5f 46 45 41 54 55 52 45 $@$(REQ_FEATURE
c6f0: 5f 46 4c 41 47 53 29 20 24 28 4f 50 54 5f 46 45 _FLAGS) $(OPT_FE c700: 41 54 55 52 45 5f 46 4c 41 47 53 29 20 24 28 45 ATURE_FLAGS)$(E
c710: 58 54 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 53 XT_FEATURE_FLAGS
c720: 29 20 24 28 4f 50 54 53 29 20 5c 0a 09 09 24 28 ) $(OPTS) \...$(
c730: 54 4f 50 29 5c 74 6f 6f 6c 5c 6d 6b 6b 65 79 77 TOP)\tool\mkkeyw
c740: 6f 72 64 68 61 73 68 2e 63 20 2f 6c 69 6e 6b 20 ordhash.c /link
c750: 24 28 4c 44 46 4c 41 47 53 29 20 24 28 4e 4c 54 $(LDFLAGS)$(NLT
c760: 4c 49 4e 4b 4f 50 54 53 29 20 24 28 4e 4c 54 4c LINKOPTS) $(NLTL c770: 49 42 50 41 54 48 53 29 0a 0a 6b 65 79 77 6f 72 IBPATHS)..keywor c780: 64 68 61 73 68 2e 68 3a 09 24 28 54 4f 50 29 5c dhash.h:.$(TOP)\
c790: 74 6f 6f 6c 5c 6d 6b 6b 65 79 77 6f 72 64 68 61 tool\mkkeywordha
c7a0: 73 68 2e 63 20 6d 6b 6b 65 79 77 6f 72 64 68 61 sh.c mkkeywordha
c7b0: 73 68 2e 65 78 65 0a 09 2e 5c 6d 6b 6b 65 79 77 sh.exe...\mkkeyw
c7c0: 6f 72 64 68 61 73 68 2e 65 78 65 20 3e 20 6b 65 ordhash.exe > ke
c7d0: 79 77 6f 72 64 68 61 73 68 2e 68 0a 0a 0a 0a 23 ywordhash.h....#
c7e0: 20 52 75 6c 65 73 20 74 6f 20 62 75 69 6c 64 20 Rules to build
c7f0: 74 68 65 20 65 78 74 65 6e 73 69 6f 6e 20 6f 62 the extension ob
c800: 6a 65 63 74 73 2e 0a 23 0a 69 63 75 2e 6c 6f 3a jects..#.icu.lo:
c810: 09 24 28 54 4f 50 29 5c 65 78 74 5c 69 63 75 5c .$(TOP)\ext\icu\ c820: 69 63 75 2e 63 20 24 28 48 44 52 29 20 24 28 45 icu.c$(HDR) $(E c830: 58 54 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 XTHDR)..$(LTCOMP
c840: 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 ILE) $(CORE_COMP c850: 49 4c 45 5f 4f 50 54 53 29 20 24 28 4e 4f 5f 57 ILE_OPTS)$(NO_W
c860: 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 5f 43 4f ARN) -DSQLITE_CO
c870: 52 45 20 2d 63 20 24 28 54 4f 50 29 5c 65 78 74 RE -c $(TOP)\ext c880: 5c 69 63 75 5c 69 63 75 2e 63 0a 0a 66 74 73 32 \icu\icu.c..fts2 c890: 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 65 78 74 5c .lo:.$(TOP)\ext\
c8a0: 66 74 73 32 5c 66 74 73 32 2e 63 20 24 28 48 44 fts2\fts2.c $(HD c8b0: 52 29 20 24 28 45 58 54 48 44 52 29 0a 09 24 28 R)$(EXTHDR)..$( c8c0: 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 LTCOMPILE)$(COR
c8d0: 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 E_COMPILE_OPTS)
c8e0: 24 28 4e 4f 5f 57 41 52 4e 29 20 2d 44 53 51 4c $(NO_WARN) -DSQL c8f0: 49 54 45 5f 43 4f 52 45 20 2d 63 20 24 28 54 4f ITE_CORE -c$(TO
c900: 50 29 5c 65 78 74 5c 66 74 73 32 5c 66 74 73 32 P)\ext\fts2\fts2
c910: 2e 63 0a 0a 66 74 73 32 5f 68 61 73 68 2e 6c 6f .c..fts2_hash.lo
c920: 3a 09 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 :.$(TOP)\ext\fts c930: 32 5c 66 74 73 32 5f 68 61 73 68 2e 63 20 24 28 2\fts2_hash.c$(
c940: 48 44 52 29 20 24 28 45 58 54 48 44 52 29 0a 09 HDR) $(EXTHDR).. c950: 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43$(LTCOMPILE) $(C c960: 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 ORE_COMPILE_OPTS c970: 29 20 24 28 4e 4f 5f 57 41 52 4e 29 20 2d 44 53 )$(NO_WARN) -DS
c980: 51 4c 49 54 45 5f 43 4f 52 45 20 2d 63 20 24 28 QLITE_CORE -c $( c990: 54 4f 50 29 5c 65 78 74 5c 66 74 73 32 5c 66 74 TOP)\ext\fts2\ft c9a0: 73 32 5f 68 61 73 68 2e 63 0a 0a 66 74 73 32 5f s2_hash.c..fts2_ c9b0: 69 63 75 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 65 icu.lo:.$(TOP)\e
c9c0: 78 74 5c 66 74 73 32 5c 66 74 73 32 5f 69 63 75 xt\fts2\fts2_icu
c9d0: 2e 63 20 24 28 48 44 52 29 20 24 28 45 58 54 48 .c $(HDR)$(EXTH
c9e0: 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 DR)..$(LTCOMPILE c9f0: 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 )$(CORE_COMPILE
ca00: 5f 4f 50 54 53 29 20 24 28 4e 4f 5f 57 41 52 4e _OPTS) $(NO_WARN ca10: 29 20 2d 44 53 51 4c 49 54 45 5f 43 4f 52 45 20 ) -DSQLITE_CORE ca20: 2d 63 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 -c$(TOP)\ext\ft
ca30: 73 32 5c 66 74 73 32 5f 69 63 75 2e 63 0a 0a 66 s2\fts2_icu.c..f
ca40: 74 73 32 5f 70 6f 72 74 65 72 2e 6c 6f 3a 09 24 ts2_porter.lo:.$ca50: 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 32 5c 66 (TOP)\ext\fts2\f ca60: 74 73 32 5f 70 6f 72 74 65 72 2e 63 20 24 28 48 ts2_porter.c$(H
ca70: 44 52 29 20 24 28 45 58 54 48 44 52 29 0a 09 24 DR) $(EXTHDR)..$
ca80: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f (LTCOMPILE) $(CO ca90: 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 RE_COMPILE_OPTS) caa0: 20 24 28 4e 4f 5f 57 41 52 4e 29 20 2d 44 53 51$(NO_WARN) -DSQ
cab0: 4c 49 54 45 5f 43 4f 52 45 20 2d 63 20 24 28 54 LITE_CORE -c $(T cac0: 4f 50 29 5c 65 78 74 5c 66 74 73 32 5c 66 74 73 OP)\ext\fts2\fts cad0: 32 5f 70 6f 72 74 65 72 2e 63 0a 0a 66 74 73 32 2_porter.c..fts2 cae0: 5f 74 6f 6b 65 6e 69 7a 65 72 2e 6c 6f 3a 09 24 _tokenizer.lo:.$
caf0: 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 32 5c 66 (TOP)\ext\fts2\f
cb00: 74 73 32 5f 74 6f 6b 65 6e 69 7a 65 72 2e 63 20 ts2_tokenizer.c
cb10: 24 28 48 44 52 29 20 24 28 45 58 54 48 44 52 29 $(HDR)$(EXTHDR)
cb20: 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 ..$(LTCOMPILE)$
cb30: 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 (CORE_COMPILE_OP
cb40: 54 53 29 20 24 28 4e 4f 5f 57 41 52 4e 29 20 2d TS) $(NO_WARN) - cb50: 44 53 51 4c 49 54 45 5f 43 4f 52 45 20 2d 63 20 DSQLITE_CORE -c cb60: 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 32 5c$(TOP)\ext\fts2\
cb70: 66 74 73 32 5f 74 6f 6b 65 6e 69 7a 65 72 2e 63 fts2_tokenizer.c
cb80: 0a 0a 66 74 73 32 5f 74 6f 6b 65 6e 69 7a 65 72 ..fts2_tokenizer
cb90: 31 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 65 78 74 1.lo:.$(TOP)\ext cba0: 5c 66 74 73 32 5c 66 74 73 32 5f 74 6f 6b 65 6e \fts2\fts2_token cbb0: 69 7a 65 72 31 2e 63 20 24 28 48 44 52 29 20 24 izer1.c$(HDR) $cbc0: 28 45 58 54 48 44 52 29 0a 09 24 28 4c 54 43 4f (EXTHDR)..$(LTCO
cbd0: 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f MPILE) $(CORE_CO cbe0: 4d 50 49 4c 45 5f 4f 50 54 53 29 20 24 28 4e 4f MPILE_OPTS)$(NO
cbf0: 5f 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 5f _WARN) -DSQLITE_
cc00: 43 4f 52 45 20 2d 63 20 24 28 54 4f 50 29 5c 65 CORE -c $(TOP)\e cc10: 78 74 5c 66 74 73 32 5c 66 74 73 32 5f 74 6f 6b xt\fts2\fts2_tok cc20: 65 6e 69 7a 65 72 31 2e 63 0a 0a 66 74 73 33 2e enizer1.c..fts3. cc30: 6c 6f 3a 09 24 28 54 4f 50 29 5c 65 78 74 5c 66 lo:.$(TOP)\ext\f
cc40: 74 73 33 5c 66 74 73 33 2e 63 20 24 28 48 44 52 ts3\fts3.c $(HDR cc50: 29 20 24 28 45 58 54 48 44 52 29 0a 09 24 28 4c )$(EXTHDR)..$(L cc60: 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 TCOMPILE)$(CORE
cc70: 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 24 _COMPILE_OPTS) $cc80: 28 4e 4f 5f 57 41 52 4e 29 20 2d 44 53 51 4c 49 (NO_WARN) -DSQLI cc90: 54 45 5f 43 4f 52 45 20 2d 63 20 24 28 54 4f 50 TE_CORE -c$(TOP
cca0: 29 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 2e )\ext\fts3\fts3.
ccb0: 63 0a 0a 66 74 73 33 5f 61 75 78 2e 6c 6f 3a 09 c..fts3_aux.lo:.
ccc0: 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 33 5c $(TOP)\ext\fts3\ ccd0: 66 74 73 33 5f 61 75 78 2e 63 20 24 28 48 44 52 fts3_aux.c$(HDR
cce0: 29 20 24 28 45 58 54 48 44 52 29 0a 09 24 28 4c ) $(EXTHDR)..$(L
ccf0: 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 TCOMPILE) $(CORE cd00: 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 24 _COMPILE_OPTS)$
cd10: 28 4e 4f 5f 57 41 52 4e 29 20 2d 44 53 51 4c 49 (NO_WARN) -DSQLI
cd20: 54 45 5f 43 4f 52 45 20 2d 63 20 24 28 54 4f 50 TE_CORE -c $(TOP cd30: 29 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f )\ext\fts3\fts3_ cd40: 61 75 78 2e 63 0a 0a 66 74 73 33 5f 65 78 70 72 aux.c..fts3_expr cd50: 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 65 78 74 5c .lo:.$(TOP)\ext\
cd60: 66 74 73 33 5c 66 74 73 33 5f 65 78 70 72 2e 63 fts3\fts3_expr.c
cd70: 20 24 28 48 44 52 29 20 24 28 45 58 54 48 44 52 $(HDR)$(EXTHDR
cd80: 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 )..$(LTCOMPILE) cd90: 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f$(CORE_COMPILE_O
cda0: 50 54 53 29 20 24 28 4e 4f 5f 57 41 52 4e 29 20 PTS) $(NO_WARN) cdb0: 2d 44 53 51 4c 49 54 45 5f 43 4f 52 45 20 2d 63 -DSQLITE_CORE -c cdc0: 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 33$(TOP)\ext\fts3
cdd0: 5c 66 74 73 33 5f 65 78 70 72 2e 63 0a 0a 66 74 \fts3_expr.c..ft
cde0: 73 33 5f 68 61 73 68 2e 6c 6f 3a 09 24 28 54 4f s3_hash.lo:.$(TO cdf0: 50 29 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 P)\ext\fts3\fts3 ce00: 5f 68 61 73 68 2e 63 20 24 28 48 44 52 29 20 24 _hash.c$(HDR) $ce10: 28 45 58 54 48 44 52 29 0a 09 24 28 4c 54 43 4f (EXTHDR)..$(LTCO
ce20: 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f MPILE) $(CORE_CO ce30: 4d 50 49 4c 45 5f 4f 50 54 53 29 20 24 28 4e 4f MPILE_OPTS)$(NO
ce40: 5f 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 5f _WARN) -DSQLITE_
ce50: 43 4f 52 45 20 2d 63 20 24 28 54 4f 50 29 5c 65 CORE -c $(TOP)\e ce60: 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 68 61 73 xt\fts3\fts3_has ce70: 68 2e 63 0a 0a 66 74 73 33 5f 69 63 75 2e 6c 6f h.c..fts3_icu.lo ce80: 3a 09 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 :.$(TOP)\ext\fts
ce90: 33 5c 66 74 73 33 5f 69 63 75 2e 63 20 24 28 48 3\fts3_icu.c $(H cea0: 44 52 29 20 24 28 45 58 54 48 44 52 29 0a 09 24 DR)$(EXTHDR)..$ceb0: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f (LTCOMPILE)$(CO
cec0: 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 RE_COMPILE_OPTS)
ced0: 20 24 28 4e 4f 5f 57 41 52 4e 29 20 2d 44 53 51 $(NO_WARN) -DSQ cee0: 4c 49 54 45 5f 43 4f 52 45 20 2d 63 20 24 28 54 LITE_CORE -c$(T
cef0: 4f 50 29 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 OP)\ext\fts3\fts
cf00: 33 5f 69 63 75 2e 63 0a 0a 66 74 73 33 5f 73 6e 3_icu.c..fts3_sn
cf10: 69 70 70 65 74 2e 6c 6f 3a 09 24 28 54 4f 50 29 ippet.lo:.$(TOP) cf20: 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 73 \ext\fts3\fts3_s cf30: 6e 69 70 70 65 74 2e 63 20 24 28 48 44 52 29 20 nippet.c$(HDR)
cf40: 24 28 45 58 54 48 44 52 29 0a 09 24 28 4c 54 43 $(EXTHDR)..$(LTC
cf50: 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 OMPILE) $(CORE_C cf60: 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 24 28 4e OMPILE_OPTS)$(N
cf70: 4f 5f 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 O_WARN) -DSQLITE
cf80: 5f 43 4f 52 45 20 2d 63 20 24 28 54 4f 50 29 5c _CORE -c $(TOP)\ cf90: 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 73 6e ext\fts3\fts3_sn cfa0: 69 70 70 65 74 2e 63 0a 0a 66 74 73 33 5f 70 6f ippet.c..fts3_po cfb0: 72 74 65 72 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c rter.lo:.$(TOP)\
cfc0: 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 70 6f ext\fts3\fts3_po
cfd0: 72 74 65 72 2e 63 20 24 28 48 44 52 29 20 24 28 rter.c $(HDR)$(
cfe0: 45 58 54 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d EXTHDR)..$(LTCOM cff0: 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d PILE)$(CORE_COM
d000: 50 49 4c 45 5f 4f 50 54 53 29 20 24 28 4e 4f 5f PILE_OPTS) $(NO_ d010: 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 5f 43 WARN) -DSQLITE_C d020: 4f 52 45 20 2d 63 20 24 28 54 4f 50 29 5c 65 78 ORE -c$(TOP)\ex
d030: 74 5c 66 74 73 33 5c 66 74 73 33 5f 70 6f 72 74 t\fts3\fts3_port
d040: 65 72 2e 63 0a 0a 66 74 73 33 5f 74 6f 6b 65 6e er.c..fts3_token
d050: 69 7a 65 72 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c izer.lo:.$(TOP)\ d060: 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 74 6f ext\fts3\fts3_to d070: 6b 65 6e 69 7a 65 72 2e 63 20 24 28 48 44 52 29 kenizer.c$(HDR)
d080: 20 24 28 45 58 54 48 44 52 29 0a 09 24 28 4c 54 $(EXTHDR)..$(LT
d090: 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f COMPILE) $(CORE_ d0a0: 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 24 28 COMPILE_OPTS)$(
d0b0: 4e 4f 5f 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 NO_WARN) -DSQLIT
d0c0: 45 5f 43 4f 52 45 20 2d 63 20 24 28 54 4f 50 29 E_CORE -c $(TOP) d0d0: 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 74 \ext\fts3\fts3_t d0e0: 6f 6b 65 6e 69 7a 65 72 2e 63 0a 0a 66 74 73 33 okenizer.c..fts3 d0f0: 5f 74 6f 6b 65 6e 69 7a 65 72 31 2e 6c 6f 3a 09 _tokenizer1.lo:. d100: 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 33 5c$(TOP)\ext\fts3\
d110: 66 74 73 33 5f 74 6f 6b 65 6e 69 7a 65 72 31 2e fts3_tokenizer1.
d120: 63 20 24 28 48 44 52 29 20 24 28 45 58 54 48 44 c $(HDR)$(EXTHD
d130: 52 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 R)..$(LTCOMPILE) d140: 20 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f$(CORE_COMPILE_
d150: 4f 50 54 53 29 20 24 28 4e 4f 5f 57 41 52 4e 29 OPTS) $(NO_WARN) d160: 20 2d 44 53 51 4c 49 54 45 5f 43 4f 52 45 20 2d -DSQLITE_CORE - d170: 63 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 c$(TOP)\ext\fts
d180: 33 5c 66 74 73 33 5f 74 6f 6b 65 6e 69 7a 65 72 3\fts3_tokenizer
d190: 31 2e 63 0a 0a 66 74 73 33 5f 74 6f 6b 65 6e 69 1.c..fts3_tokeni
d1a0: 7a 65 5f 76 74 61 62 2e 6c 6f 3a 09 24 28 54 4f ze_vtab.lo:.$(TO d1b0: 50 29 5c 65 78 74 5c 66 74 73 33 5c 66 74 73 33 P)\ext\fts3\fts3 d1c0: 5f 74 6f 6b 65 6e 69 7a 65 5f 76 74 61 62 2e 63 _tokenize_vtab.c d1d0: 20 24 28 48 44 52 29 20 24 28 45 58 54 48 44 52$(HDR) $(EXTHDR d1e0: 29 0a 09 24 28 4c 54 43 4f 4d 50 49 4c 45 29 20 )..$(LTCOMPILE)
d1f0: 24 28 43 4f 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f $(CORE_COMPILE_O d200: 50 54 53 29 20 24 28 4e 4f 5f 57 41 52 4e 29 20 PTS)$(NO_WARN)
d210: 2d 44 53 51 4c 49 54 45 5f 43 4f 52 45 20 2d 63 -DSQLITE_CORE -c
d220: 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 33 $(TOP)\ext\fts3 d230: 5c 66 74 73 33 5f 74 6f 6b 65 6e 69 7a 65 5f 76 \fts3_tokenize_v d240: 74 61 62 2e 63 0a 0a 66 74 73 33 5f 75 6e 69 63 tab.c..fts3_unic d250: 6f 64 65 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c 65 ode.lo:.$(TOP)\e
d260: 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 75 6e 69 xt\fts3\fts3_uni
d270: 63 6f 64 65 2e 63 20 24 28 48 44 52 29 20 24 28 code.c $(HDR)$(
d280: 45 58 54 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d EXTHDR)..$(LTCOM d290: 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d PILE)$(CORE_COM
d2a0: 50 49 4c 45 5f 4f 50 54 53 29 20 24 28 4e 4f 5f PILE_OPTS) $(NO_ d2b0: 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 5f 43 WARN) -DSQLITE_C d2c0: 4f 52 45 20 2d 63 20 24 28 54 4f 50 29 5c 65 78 ORE -c$(TOP)\ex
d2d0: 74 5c 66 74 73 33 5c 66 74 73 33 5f 75 6e 69 63 t\fts3\fts3_unic
d2e0: 6f 64 65 2e 63 0a 0a 66 74 73 33 5f 75 6e 69 63 ode.c..fts3_unic
d2f0: 6f 64 65 32 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c ode2.lo:.$(TOP)\ d300: 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 75 6e ext\fts3\fts3_un d310: 69 63 6f 64 65 32 2e 63 20 24 28 48 44 52 29 20 icode2.c$(HDR)
d320: 24 28 45 58 54 48 44 52 29 0a 09 24 28 4c 54 43 $(EXTHDR)..$(LTC
d330: 4f 4d 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 OMPILE) $(CORE_C d340: 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 20 24 28 4e OMPILE_OPTS)$(N
d350: 4f 5f 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 O_WARN) -DSQLITE
d360: 5f 43 4f 52 45 20 2d 63 20 24 28 54 4f 50 29 5c _CORE -c $(TOP)\ d370: 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 75 6e ext\fts3\fts3_un d380: 69 63 6f 64 65 32 2e 63 0a 0a 66 74 73 33 5f 77 icode2.c..fts3_w d390: 72 69 74 65 2e 6c 6f 3a 09 24 28 54 4f 50 29 5c rite.lo:.$(TOP)\
d3a0: 65 78 74 5c 66 74 73 33 5c 66 74 73 33 5f 77 72 ext\fts3\fts3_wr
d3b0: 69 74 65 2e 63 20 24 28 48 44 52 29 20 24 28 45 ite.c $(HDR)$(E
d3c0: 58 54 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d 50 XTHDR)..$(LTCOMP d3d0: 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d 50 ILE)$(CORE_COMP
d3e0: 49 4c 45 5f 4f 50 54 53 29 20 24 28 4e 4f 5f 57 ILE_OPTS) $(NO_W d3f0: 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 5f 43 4f ARN) -DSQLITE_CO d400: 52 45 20 2d 63 20 24 28 54 4f 50 29 5c 65 78 74 RE -c$(TOP)\ext
d410: 5c 66 74 73 33 5c 66 74 73 33 5f 77 72 69 74 65 \fts3\fts3_write
d420: 2e 63 0a 0a 72 74 72 65 65 2e 6c 6f 3a 09 24 28 .c..rtree.lo:.$( d430: 54 4f 50 29 5c 65 78 74 5c 72 74 72 65 65 5c 72 TOP)\ext\rtree\r d440: 74 72 65 65 2e 63 20 24 28 48 44 52 29 20 24 28 tree.c$(HDR) $( d450: 45 58 54 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d EXTHDR)..$(LTCOM
d460: 50 49 4c 45 29 20 24 28 43 4f 52 45 5f 43 4f 4d PILE) $(CORE_COM d470: 50 49 4c 45 5f 4f 50 54 53 29 20 24 28 4e 4f 5f PILE_OPTS)$(NO_
d480: 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 5f 43 WARN) -DSQLITE_C
d490: 4f 52 45 20 2d 63 20 24 28 54 4f 50 29 5c 65 78 ORE -c $(TOP)\ex d4a0: 74 5c 72 74 72 65 65 5c 72 74 72 65 65 2e 63 0a t\rtree\rtree.c. d4b0: 0a 73 71 6c 69 74 65 33 73 65 73 73 69 6f 6e 2e .sqlite3session. d4c0: 6c 6f 3a 09 24 28 54 4f 50 29 5c 65 78 74 5c 73 lo:.$(TOP)\ext\s
d4d0: 65 73 73 69 6f 6e 5c 73 71 6c 69 74 65 33 73 65 ession\sqlite3se
d4e0: 73 69 6f 6e 2e 63 20 24 28 48 44 52 29 20 24 28 sion.c $(HDR)$(
d4f0: 45 58 54 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d EXTHDR)..$(LTCOM d500: 50 49 4c 45 29 20 2d 44 53 51 4c 49 54 45 5f 43 PILE) -DSQLITE_C d510: 4f 52 45 20 2d 63 20 24 28 54 4f 50 29 5c 65 78 ORE -c$(TOP)\ex
d520: 74 5c 73 65 73 73 69 6f 6e 5c 73 71 6c 69 74 65 t\session\sqlite
d530: 33 73 65 73 73 69 6f 6e 2e 63 0a 0a 23 20 46 54 3session.c..# FT
d540: 53 35 20 74 68 69 6e 67 73 0a 23 0a 46 54 53 35 S5 things.#.FTS5
d550: 5f 53 52 43 20 3d 20 5c 0a 20 20 20 24 28 54 4f _SRC = \. $(TO d560: 50 29 5c 65 78 74 5c 66 74 73 35 5c 66 74 73 35 P)\ext\fts5\fts5 d570: 2e 68 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c 65 .h \.$(TOP)\e
d580: 78 74 5c 66 74 73 35 5c 66 74 73 35 49 6e 74 2e xt\fts5\fts5Int.
d590: 68 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c 65 78 h \. $(TOP)\ex d5a0: 74 5c 66 74 73 35 5c 66 74 73 35 5f 61 75 78 2e t\fts5\fts5_aux. d5b0: 63 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c 65 78 c \.$(TOP)\ex
d5c0: 74 5c 66 74 73 35 5c 66 74 73 35 5f 62 75 66 66 t\fts5\fts5_buff
d5d0: 65 72 2e 63 20 5c 0a 20 20 20 24 28 54 4f 50 29 er.c \. $(TOP) d5e0: 5c 65 78 74 5c 66 74 73 35 5c 66 74 73 35 5f 6d \ext\fts5\fts5_m d5f0: 61 69 6e 2e 63 20 5c 0a 20 20 20 24 28 54 4f 50 ain.c \.$(TOP
d600: 29 5c 65 78 74 5c 66 74 73 35 5c 66 74 73 35 5f )\ext\fts5\fts5_
d610: 63 6f 6e 66 69 67 2e 63 20 5c 0a 20 20 20 24 28 config.c \. $( d620: 54 4f 50 29 5c 65 78 74 5c 66 74 73 35 5c 66 74 TOP)\ext\fts5\ft d630: 73 35 5f 65 78 70 72 2e 63 20 5c 0a 20 20 20 24 s5_expr.c \.$
d640: 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 35 5c 66 (TOP)\ext\fts5\f
d650: 74 73 35 5f 68 61 73 68 2e 63 20 5c 0a 20 20 20 ts5_hash.c \.
d660: 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 35 5c $(TOP)\ext\fts5\ d670: 66 74 73 35 5f 69 6e 64 65 78 2e 63 20 5c 0a 20 fts5_index.c \. d680: 20 20 66 74 73 35 70 61 72 73 65 2e 63 20 66 74 fts5parse.c ft d690: 73 35 70 61 72 73 65 2e 68 20 5c 0a 20 20 20 24 s5parse.h \.$
d6a0: 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 35 5c 66 (TOP)\ext\fts5\f
d6b0: 74 73 35 5f 73 74 6f 72 61 67 65 2e 63 20 5c 0a ts5_storage.c \.
d6c0: 20 20 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 $(TOP)\ext\ft d6d0: 73 35 5c 66 74 73 35 5f 74 6f 6b 65 6e 69 7a 65 s5\fts5_tokenize d6e0: 2e 63 20 5c 0a 20 20 20 24 28 54 4f 50 29 5c 65 .c \.$(TOP)\e
d6f0: 78 74 5c 66 74 73 35 5c 66 74 73 35 5f 75 6e 69 xt\fts5\fts5_uni
d700: 63 6f 64 65 32 2e 63 20 5c 0a 20 20 20 24 28 54 code2.c \. $(T d710: 4f 50 29 5c 65 78 74 5c 66 74 73 35 5c 66 74 73 OP)\ext\fts5\fts d720: 35 5f 76 61 72 69 6e 74 2e 63 20 5c 0a 20 20 20 5_varint.c \. d730: 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 35 5c$(TOP)\ext\fts5\
d740: 66 74 73 35 5f 76 6f 63 61 62 2e 63 0a 0a 66 74 fts5_vocab.c..ft
d750: 73 35 70 61 72 73 65 2e 63 3a 09 24 28 54 4f 50 s5parse.c:.$(TOP d760: 29 5c 65 78 74 5c 66 74 73 35 5c 66 74 73 35 70 )\ext\fts5\fts5p d770: 61 72 73 65 2e 79 20 6c 65 6d 6f 6e 2e 65 78 65 arse.y lemon.exe d780: 0a 09 63 6f 70 79 20 24 28 54 4f 50 29 5c 65 78 ..copy$(TOP)\ex
d790: 74 5c 66 74 73 35 5c 66 74 73 35 70 61 72 73 65 t\fts5\fts5parse
d7a0: 2e 79 20 2e 0a 09 64 65 6c 20 2f 51 20 66 74 73 .y ...del /Q fts
d7b0: 35 70 61 72 73 65 2e 68 20 32 3e 4e 55 4c 0a 09 5parse.h 2>NUL..
d7c0: 2e 5c 6c 65 6d 6f 6e 2e 65 78 65 20 24 28 52 45 .\lemon.exe $(RE d7d0: 51 5f 46 45 41 54 55 52 45 5f 46 4c 41 47 53 29 Q_FEATURE_FLAGS) d7e0: 20 24 28 4f 50 54 5f 46 45 41 54 55 52 45 5f 46$(OPT_FEATURE_F
d7f0: 4c 41 47 53 29 20 24 28 45 58 54 5f 46 45 41 54 LAGS) $(EXT_FEAT d800: 55 52 45 5f 46 4c 41 47 53 29 20 24 28 4f 50 54 URE_FLAGS)$(OPT
d810: 53 29 20 66 74 73 35 70 61 72 73 65 2e 79 0a 0a S) fts5parse.y..
d820: 66 74 73 35 70 61 72 73 65 2e 68 3a 20 66 74 73 fts5parse.h: fts
d830: 35 70 61 72 73 65 2e 63 0a 0a 66 74 73 35 2e 63 5parse.c..fts5.c
d840: 3a 20 24 28 46 54 53 35 5f 53 52 43 29 0a 09 24 : $(FTS5_SRC)..$
d850: 28 54 43 4c 53 48 5f 43 4d 44 29 20 24 28 54 4f (TCLSH_CMD) $(TO d860: 50 29 5c 65 78 74 5c 66 74 73 35 5c 74 6f 6f 6c P)\ext\fts5\tool d870: 5c 6d 6b 66 74 73 35 63 2e 74 63 6c 0a 09 63 6f \mkfts5c.tcl..co d880: 70 79 20 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 py$(TOP)\ext\ft
d890: 73 35 5c 66 74 73 35 2e 68 20 2e 0a 0a 66 74 73 s5\fts5.h ...fts
d8a0: 35 2e 6c 6f 3a 09 66 74 73 35 2e 63 20 24 28 48 5.lo:.fts5.c $(H d8b0: 44 52 29 20 24 28 45 58 54 48 44 52 29 0a 09 24 DR)$(EXTHDR)..$d8c0: 28 4c 54 43 4f 4d 50 49 4c 45 29 20 24 28 43 4f (LTCOMPILE)$(CO
d8d0: 52 45 5f 43 4f 4d 50 49 4c 45 5f 4f 50 54 53 29 RE_COMPILE_OPTS)
d8e0: 20 24 28 4e 4f 5f 57 41 52 4e 29 20 2d 44 53 51 $(NO_WARN) -DSQ d8f0: 4c 49 54 45 5f 43 4f 52 45 20 2d 63 20 66 74 73 LITE_CORE -c fts d900: 35 2e 63 0a 0a 66 74 73 35 5f 65 78 74 2e 6c 6f 5.c..fts5_ext.lo d910: 3a 09 66 74 73 35 2e 63 20 24 28 48 44 52 29 20 :.fts5.c$(HDR)
d920: 24 28 45 58 54 48 44 52 29 0a 09 24 28 4c 54 43 $(EXTHDR)..$(LTC
d930: 4f 4d 50 49 4c 45 29 20 24 28 4e 4f 5f 57 41 52 OMPILE) $(NO_WAR d940: 4e 29 20 2d 63 20 66 74 73 35 2e 63 0a 0a 66 74 N) -c fts5.c..ft d950: 73 35 2e 64 6c 6c 3a 09 66 74 73 35 5f 65 78 74 s5.dll:.fts5_ext d960: 2e 6c 6f 0a 09 24 28 4c 44 29 20 24 28 4c 44 46 .lo..$(LD) $(LDF d970: 4c 41 47 53 29 20 24 28 4c 54 4c 49 4e 4b 4f 50 LAGS)$(LTLINKOP
d980: 54 53 29 20 24 28 4c 54 4c 49 42 50 41 54 48 53 TS) $(LTLIBPATHS d990: 29 20 2f 44 4c 4c 20 2f 4f 55 54 3a 24 40 20 66 ) /DLL /OUT:$@ f
d9a0: 74 73 35 5f 65 78 74 2e 6c 6f 0a 0a 73 71 6c 69 ts5_ext.lo..sqli
d9b0: 74 65 33 72 62 75 2e 6c 6f 3a 09 24 28 54 4f 50 te3rbu.lo:.$(TOP d9c0: 29 5c 65 78 74 5c 72 62 75 5c 73 71 6c 69 74 65 )\ext\rbu\sqlite d9d0: 33 72 62 75 2e 63 20 24 28 48 44 52 29 20 24 28 3rbu.c$(HDR) $( d9e0: 45 58 54 48 44 52 29 0a 09 24 28 4c 54 43 4f 4d EXTHDR)..$(LTCOM
d9f0: 50 49 4c 45 29 20 2d 44 53 51 4c 49 54 45 5f 43 PILE) -DSQLITE_C
da00: 4f 52 45 20 2d 63 20 24 28 54 4f 50 29 5c 65 78 ORE -c $(TOP)\ex da10: 74 5c 72 62 75 5c 73 71 6c 69 74 65 33 72 62 75 t\rbu\sqlite3rbu da20: 2e 63 0a 0a 23 20 52 75 6c 65 73 20 74 6f 20 62 .c..# Rules to b da30: 75 69 6c 64 20 74 68 65 20 27 74 65 73 74 66 69 uild the 'testfi da40: 78 74 75 72 65 27 20 61 70 70 6c 69 63 61 74 69 xture' applicati da50: 6f 6e 2e 0a 23 0a 23 20 49 66 20 75 73 69 6e 67 on..#.# If using da60: 20 74 68 65 20 61 6d 61 6c 67 61 6d 61 74 69 6f the amalgamatio da70: 6e 2c 20 75 73 65 20 73 71 6c 69 74 65 33 2e 63 n, use sqlite3.c da80: 20 64 69 72 65 63 74 6c 79 20 74 6f 20 62 75 69 directly to bui da90: 6c 64 20 74 68 65 20 74 65 73 74 0a 23 20 66 69 ld the test.# fi daa0: 78 74 75 72 65 2e 20 20 4f 74 68 65 72 77 69 73 xture. Otherwis dab0: 65 20 6c 69 6e 6b 20 61 67 61 69 6e 73 74 20 6c e link against l dac0: 69 62 73 71 6c 69 74 65 33 2e 6c 69 62 2e 20 20 ibsqlite3.lib. dad0: 28 54 68 69 73 20 64 69 73 74 69 6e 63 74 69 6f (This distinctio dae0: 6e 20 69 73 0a 23 20 6e 65 63 65 73 73 61 72 79 n is.# necessary daf0: 20 62 65 63 61 75 73 65 20 74 68 65 20 74 65 73 because the tes db00: 74 20 66 69 78 74 75 72 65 20 72 65 71 75 69 72 t fixture requir db10: 65 73 20 6e 6f 6e 2d 41 50 49 20 73 79 6d 62 6f es non-API symbo db20: 6c 73 20 77 68 69 63 68 20 61 72 65 0a 23 20 68 ls which are.# h db30: 69 64 64 65 6e 20 77 68 65 6e 20 74 68 65 20 6c idden when the l db40: 69 62 72 61 72 79 20 69 73 20 62 75 69 6c 74 20 ibrary is built db50: 76 69 61 20 74 68 65 20 61 6d 61 6c 67 61 6d 61 via the amalgama db60: 74 69 6f 6e 29 2e 0a 23 0a 54 45 53 54 46 49 58 tion)..#.TESTFIX db70: 54 55 52 45 5f 46 4c 41 47 53 20 3d 20 2d 44 54 TURE_FLAGS = -DT db80: 43 4c 53 48 3d 31 20 2d 44 53 51 4c 49 54 45 5f CLSH=1 -DSQLITE_ db90: 54 45 53 54 3d 31 20 2d 44 53 51 4c 49 54 45 5f TEST=1 -DSQLITE_ dba0: 43 52 41 53 48 5f 54 45 53 54 3d 31 0a 54 45 53 CRASH_TEST=1.TES dbb0: 54 46 49 58 54 55 52 45 5f 46 4c 41 47 53 20 3d TFIXTURE_FLAGS = dbc0: 20 24 28 54 45 53 54 46 49 58 54 55 52 45 5f 46$(TESTFIXTURE_F
dbd0: 4c 41 47 53 29 20 2d 44 53 51 4c 49 54 45 5f 53 LAGS) -DSQLITE_S
dbe0: 45 52 56 45 52 3d 31 20 2d 44 53 51 4c 49 54 45 ERVER=1 -DSQLITE
dbf0: 5f 50 52 49 56 41 54 45 3d 22 22 0a 54 45 53 54 _PRIVATE="".TEST
dc00: 46 49 58 54 55 52 45 5f 46 4c 41 47 53 20 3d 20 FIXTURE_FLAGS =
dc10: 24 28 54 45 53 54 46 49 58 54 55 52 45 5f 46 4c $(TESTFIXTURE_FL dc20: 41 47 53 29 20 2d 44 53 51 4c 49 54 45 5f 43 4f AGS) -DSQLITE_CO dc30: 52 45 20 24 28 4e 4f 5f 57 41 52 4e 29 0a 0a 54 RE$(NO_WARN)..T
dc40: 45 53 54 46 49 58 54 55 52 45 5f 53 52 43 30 20 ESTFIXTURE_SRC0
dc50: 3d 20 24 28 54 45 53 54 45 58 54 29 20 24 28 54 = $(TESTEXT)$(T
dc60: 45 53 54 53 52 43 32 29 0a 54 45 53 54 46 49 58 ESTSRC2).TESTFIX
dc70: 54 55 52 45 5f 53 52 43 31 20 3d 20 24 28 54 45 TURE_SRC1 = $(TE dc80: 53 54 45 58 54 29 20 24 28 54 45 53 54 53 52 43 STEXT)$(TESTSRC
dc90: 33 29 20 24 28 53 51 4c 49 54 45 33 43 29 0a 21 3) $(SQLITE3C).! dca0: 49 46 20 24 28 55 53 45 5f 41 4d 41 4c 47 41 4d IF$(USE_AMALGAM
dcb0: 41 54 49 4f 4e 29 3d 3d 30 0a 54 45 53 54 46 49 ATION)==0.TESTFI
dcc0: 58 54 55 52 45 5f 53 52 43 20 3d 20 24 28 54 45 XTURE_SRC = $(TE dcd0: 53 54 53 52 43 29 20 24 28 54 4f 50 29 5c 73 72 STSRC)$(TOP)\sr
dce0: 63 5c 74 63 6c 73 71 6c 69 74 65 2e 63 20 24 28 c\tclsqlite.c $( dcf0: 54 45 53 54 46 49 58 54 55 52 45 5f 53 52 43 30 TESTFIXTURE_SRC0 dd00: 29 0a 21 45 4c 53 45 0a 54 45 53 54 46 49 58 54 ).!ELSE.TESTFIXT dd10: 55 52 45 5f 53 52 43 20 3d 20 24 28 54 45 53 54 URE_SRC =$(TEST
dd20: 53 52 43 29 20 24 28 54 4f 50 29 5c 73 72 63 5c SRC) $(TOP)\src\ dd30: 74 63 6c 73 71 6c 69 74 65 2e 63 20 24 28 54 45 tclsqlite.c$(TE
dd40: 53 54 46 49 58 54 55 52 45 5f 53 52 43 31 29 0a STFIXTURE_SRC1).
dd50: 21 45 4e 44 49 46 0a 0a 74 65 73 74 66 69 78 74 !ENDIF..testfixt
dd60: 75 72 65 2e 65 78 65 3a 09 24 28 54 45 53 54 46 ure.exe:.$(TESTF dd70: 49 58 54 55 52 45 5f 53 52 43 29 20 24 28 53 51 IXTURE_SRC)$(SQ
dd80: 4c 49 54 45 33 48 29 20 24 28 4c 49 42 52 45 53 LITE3H) $(LIBRES dd90: 4f 42 4a 53 29 20 24 28 48 44 52 29 0a 09 24 28 OBJS)$(HDR)..$( dda0: 4c 54 4c 49 4e 4b 29 20 2d 44 53 51 4c 49 54 45 LTLINK) -DSQLITE ddb0: 5f 4e 4f 5f 53 59 4e 43 3d 31 20 24 28 54 45 53 _NO_SYNC=1$(TES
ddc0: 54 46 49 58 54 55 52 45 5f 46 4c 41 47 53 29 20 TFIXTURE_FLAGS)
ddd0: 5c 0a 09 09 2d 44 42 55 49 4c 44 5f 73 71 6c 69 \...-DBUILD_sqli
dde0: 74 65 20 2d 49 24 28 54 43 4c 49 4e 43 44 49 52 te -I$(TCLINCDIR ddf0: 29 20 5c 0a 09 09 24 28 54 45 53 54 46 49 58 54 ) \...$(TESTFIXT
de00: 55 52 45 5f 53 52 43 29 20 5c 0a 09 09 2f 6c 69 URE_SRC) \.../li
de10: 6e 6b 20 24 28 4c 44 46 4c 41 47 53 29 20 24 28 nk $(LDFLAGS)$(
de20: 4c 54 4c 49 4e 4b 4f 50 54 53 29 20 24 28 4c 54 LTLINKOPTS) $(LT de30: 4c 49 42 50 41 54 48 53 29 20 24 28 4c 49 42 52 LIBPATHS)$(LIBR
de40: 45 53 4f 42 4a 53 29 20 24 28 4c 54 4c 49 42 53 ESOBJS) $(LTLIBS de50: 29 20 24 28 54 4c 49 42 53 29 0a 0a 65 78 74 65 )$(TLIBS)..exte
de60: 6e 73 69 6f 6e 74 65 73 74 3a 20 74 65 73 74 66 nsiontest: testf
de70: 69 78 74 75 72 65 2e 65 78 65 20 74 65 73 74 6c ixture.exe testl
de80: 6f 61 64 65 78 74 2e 64 6c 6c 0a 09 40 73 65 74 oadext.dll..@set
de90: 20 50 41 54 48 3d 24 28 4c 49 42 54 43 4c 50 41 PATH=$(LIBTCLPA dea0: 54 48 29 3b 24 28 50 41 54 48 29 0a 09 2e 5c 74 TH);$(PATH)...\t
deb0: 65 73 74 66 69 78 74 75 72 65 2e 65 78 65 20 24 estfixture.exe $dec0: 28 54 4f 50 29 5c 74 65 73 74 5c 6c 6f 61 64 65 (TOP)\test\loade ded0: 78 74 2e 74 65 73 74 20 24 28 54 45 53 54 4f 50 xt.test$(TESTOP
dee0: 54 53 29 0a 0a 66 75 6c 6c 74 65 73 74 3a 09 24 TS)..fulltest:.$def0: 28 54 45 53 54 50 52 4f 47 53 29 20 66 75 7a 7a (TESTPROGS) fuzz df00: 74 65 73 74 0a 09 40 73 65 74 20 50 41 54 48 3d test..@set PATH= df10: 24 28 4c 49 42 54 43 4c 50 41 54 48 29 3b 24 28$(LIBTCLPATH);$( df20: 50 41 54 48 29 0a 09 2e 5c 74 65 73 74 66 69 78 PATH)...\testfix df30: 74 75 72 65 2e 65 78 65 20 24 28 54 4f 50 29 5c ture.exe$(TOP)\
df40: 74 65 73 74 5c 61 6c 6c 2e 74 65 73 74 20 24 28 test\all.test $( df50: 54 45 53 54 4f 50 54 53 29 0a 0a 73 6f 61 6b 74 TESTOPTS)..soakt df60: 65 73 74 3a 09 24 28 54 45 53 54 50 52 4f 47 53 est:.$(TESTPROGS
df70: 29 0a 09 40 73 65 74 20 50 41 54 48 3d 24 28 4c )..@set PATH=$(L df80: 49 42 54 43 4c 50 41 54 48 29 3b 24 28 50 41 54 IBTCLPATH);$(PAT
df90: 48 29 0a 09 2e 5c 74 65 73 74 66 69 78 74 75 72 H)...\testfixtur
dfa0: 65 2e 65 78 65 20 24 28 54 4f 50 29 5c 74 65 73 e.exe $(TOP)\tes dfb0: 74 5c 61 6c 6c 2e 74 65 73 74 20 2d 73 6f 61 6b t\all.test -soak dfc0: 3d 31 20 24 28 54 45 53 54 4f 50 54 53 29 0a 0a =1$(TESTOPTS)..
dfd0: 66 75 6c 6c 74 65 73 74 6f 6e 6c 79 3a 09 24 28 fulltestonly:.$( dfe0: 54 45 53 54 50 52 4f 47 53 29 20 66 75 7a 7a 74 TESTPROGS) fuzzt dff0: 65 73 74 0a 09 40 73 65 74 20 50 41 54 48 3d 24 est..@set PATH=$
e000: 28 4c 49 42 54 43 4c 50 41 54 48 29 3b 24 28 50 (LIBTCLPATH);$(P e010: 41 54 48 29 0a 09 2e 5c 74 65 73 74 66 69 78 74 ATH)...\testfixt e020: 75 72 65 2e 65 78 65 20 24 28 54 4f 50 29 5c 74 ure.exe$(TOP)\t
e030: 65 73 74 5c 66 75 6c 6c 2e 74 65 73 74 0a 0a 71 est\full.test..q
e040: 75 65 72 79 70 6c 61 6e 74 65 73 74 3a 09 74 65 ueryplantest:.te
e050: 73 74 66 69 78 74 75 72 65 2e 65 78 65 20 73 68 stfixture.exe sh
e060: 65 6c 6c 0a 09 40 73 65 74 20 50 41 54 48 3d 24 ell..@set PATH=$e070: 28 4c 49 42 54 43 4c 50 41 54 48 29 3b 24 28 50 (LIBTCLPATH);$(P
e080: 41 54 48 29 0a 09 2e 5c 74 65 73 74 66 69 78 74 ATH)...\testfixt
e090: 75 72 65 2e 65 78 65 20 24 28 54 4f 50 29 5c 74 ure.exe $(TOP)\t e0a0: 65 73 74 5c 70 65 72 6d 75 74 61 74 69 6f 6e 73 est\permutations e0b0: 2e 74 65 73 74 20 71 75 65 72 79 70 6c 61 6e 6e .test queryplann e0c0: 65 72 20 24 28 54 45 53 54 4f 50 54 53 29 0a 0a er$(TESTOPTS)..
e0d0: 66 75 7a 7a 74 65 73 74 3a 09 66 75 7a 7a 63 68 fuzztest:.fuzzch
e0e0: 65 63 6b 2e 65 78 65 0a 09 2e 5c 66 75 7a 7a 63 eck.exe...\fuzzc
e0f0: 68 65 63 6b 2e 65 78 65 20 24 28 46 55 5a 5a 44 heck.exe $(FUZZD e100: 41 54 41 29 0a 0a 66 61 73 74 66 75 7a 7a 74 65 ATA)..fastfuzzte e110: 73 74 3a 09 66 75 7a 7a 63 68 65 63 6b 2e 65 78 st:.fuzzcheck.ex e120: 65 0a 09 2e 5c 66 75 7a 7a 63 68 65 63 6b 2e 65 e...\fuzzcheck.e e130: 78 65 20 2d 2d 6c 69 6d 69 74 2d 6d 65 6d 20 31 xe --limit-mem 1 e140: 30 30 4d 20 24 28 46 55 5a 5a 44 41 54 41 29 0a 00M$(FUZZDATA).
e150: 0a 23 20 4d 69 6e 69 6d 61 6c 20 74 65 73 74 69 .# Minimal testi
e160: 6e 67 20 74 68 61 74 20 72 75 6e 73 20 69 6e 20 ng that runs in
e170: 6c 65 73 73 20 74 68 61 6e 20 33 20 6d 69 6e 75 less than 3 minu
e180: 74 65 73 20 28 6f 6e 20 61 20 66 61 73 74 20 6d tes (on a fast m
e190: 61 63 68 69 6e 65 29 0a 23 0a 71 75 69 63 6b 74 achine).#.quickt
e1a0: 65 73 74 3a 09 74 65 73 74 66 69 78 74 75 72 65 est:.testfixture
e1b0: 2e 65 78 65 20 73 6f 75 72 63 65 74 65 73 74 0a .exe sourcetest.
e1c0: 09 40 73 65 74 20 50 41 54 48 3d 24 28 4c 49 42 .@set PATH=$(LIB e1d0: 54 43 4c 50 41 54 48 29 3b 24 28 50 41 54 48 29 TCLPATH);$(PATH)
e1e0: 0a 09 2e 5c 74 65 73 74 66 69 78 74 75 72 65 2e ...\testfixture.
e1f0: 65 78 65 20 24 28 54 4f 50 29 5c 74 65 73 74 5c exe $(TOP)\test\ e200: 65 78 74 72 61 71 75 69 63 6b 2e 74 65 73 74 20 extraquick.test e210: 24 28 54 45 53 54 4f 50 54 53 29 0a 0a 23 20 54$(TESTOPTS)..# T
e220: 68 69 73 20 69 73 20 74 68 65 20 63 6f 6d 6d 6f his is the commo
e230: 6e 20 63 61 73 65 2e 20 20 52 75 6e 20 6d 61 6e n case. Run man
e240: 79 20 74 65 73 74 73 20 74 68 61 74 20 64 6f 20 y tests that do
e250: 6e 6f 74 20 74 61 6b 65 20 74 6f 6f 20 6c 6f 6e not take too lon
e260: 67 2c 0a 23 20 69 6e 63 6c 75 64 69 6e 67 20 66 g,.# including f
e270: 75 7a 7a 63 68 65 63 6b 2c 20 73 71 6c 69 74 65 uzzcheck, sqlite
e280: 33 5f 61 6e 61 6c 79 7a 65 72 2c 20 61 6e 64 20 3_analyzer, and
e290: 73 71 6c 64 69 66 66 20 74 65 73 74 73 2e 0a 23 sqldiff tests..#
e2a0: 0a 74 65 73 74 3a 09 24 28 54 45 53 54 50 52 4f .test:.$(TESTPRO e2b0: 47 53 29 20 73 6f 75 72 63 65 74 65 73 74 20 66 GS) sourcetest f e2c0: 61 73 74 66 75 7a 7a 74 65 73 74 0a 09 40 73 65 astfuzztest..@se e2d0: 74 20 50 41 54 48 3d 24 28 4c 49 42 54 43 4c 50 t PATH=$(LIBTCLP
e2e0: 41 54 48 29 3b 24 28 50 41 54 48 29 0a 09 2e 5c ATH);$(PATH)...\ e2f0: 74 65 73 74 66 69 78 74 75 72 65 2e 65 78 65 20 testfixture.exe e300: 24 28 54 4f 50 29 5c 74 65 73 74 5c 76 65 72 79$(TOP)\test\very
e310: 71 75 69 63 6b 2e 74 65 73 74 20 24 28 54 45 53 quick.test $(TES e320: 54 4f 50 54 53 29 0a 0a 73 6d 6f 6b 65 74 65 73 TOPTS)..smoketes e330: 74 3a 09 24 28 54 45 53 54 50 52 4f 47 53 29 0a t:.$(TESTPROGS).
e340: 09 40 73 65 74 20 50 41 54 48 3d 24 28 4c 49 42 .@set PATH=$(LIB e350: 54 43 4c 50 41 54 48 29 3b 24 28 50 41 54 48 29 TCLPATH);$(PATH)
e360: 0a 09 2e 5c 74 65 73 74 66 69 78 74 75 72 65 2e ...\testfixture.
e370: 65 78 65 20 24 28 54 4f 50 29 5c 74 65 73 74 5c exe $(TOP)\test\ e380: 6d 61 69 6e 2e 74 65 73 74 20 24 28 54 45 53 54 main.test$(TEST
e390: 4f 50 54 53 29 0a 0a 73 71 6c 69 74 65 33 5f 61 OPTS)..sqlite3_a
e3a0: 6e 61 6c 79 7a 65 72 2e 63 3a 20 24 28 53 51 4c nalyzer.c: $(SQL e3b0: 49 54 45 33 43 29 20 24 28 53 51 4c 49 54 45 33 ITE3C)$(SQLITE3
e3c0: 48 29 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 63 H) $(TOP)\src\tc e3d0: 6c 73 71 6c 69 74 65 2e 63 20 24 28 54 4f 50 29 lsqlite.c$(TOP)
e3e0: 5c 74 6f 6f 6c 5c 73 70 61 63 65 61 6e 61 6c 2e \tool\spaceanal.
e3f0: 74 63 6c 0a 09 65 63 68 6f 20 23 64 65 66 69 6e tcl..echo #defin
e400: 65 20 54 43 4c 53 48 20 32 20 3e 20 24 40 0a 09 e TCLSH 2 > $@.. e410: 65 63 68 6f 20 23 64 65 66 69 6e 65 20 53 51 4c echo #define SQL e420: 49 54 45 5f 45 4e 41 42 4c 45 5f 44 42 53 54 41 ITE_ENABLE_DBSTA e430: 54 5f 56 54 41 42 20 31 20 3e 3e 20 24 40 0a 09 T_VTAB 1 >>$@..
e440: 63 6f 70 79 20 24 40 20 2b 20 24 28 53 51 4c 49 copy $@ +$(SQLI
e450: 54 45 33 43 29 20 2b 20 24 28 54 4f 50 29 5c 73 TE3C) + $(TOP)\s e460: 72 63 5c 74 63 6c 73 71 6c 69 74 65 2e 63 20 24 rc\tclsqlite.c$
e470: 40 0a 09 65 63 68 6f 20 73 74 61 74 69 63 20 63 @..echo static c
e480: 6f 6e 73 74 20 63 68 61 72 20 2a 74 63 6c 73 68 onst char *tclsh
e490: 5f 6d 61 69 6e 5f 6c 6f 6f 70 28 76 6f 69 64 29 _main_loop(void)
e4a0: 7b 20 3e 3e 20 24 40 0a 09 65 63 68 6f 20 73 74 { >> $@..echo st e4b0: 61 74 69 63 20 63 6f 6e 73 74 20 63 68 61 72 20 atic const char e4c0: 2a 7a 4d 61 69 6e 6c 6f 6f 70 20 3d 20 3e 3e 20 *zMainloop = >> e4d0: 24 40 0a 09 24 28 54 43 4c 53 48 5f 43 4d 44 29$@..$(TCLSH_CMD) e4e0: 20 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c 74 6f 73$(TOP)\tool\tos
e4f0: 74 72 2e 74 63 6c 20 24 28 54 4f 50 29 5c 74 6f tr.tcl $(TOP)\to e500: 6f 6c 5c 73 70 61 63 65 61 6e 61 6c 2e 74 63 6c ol\spaceanal.tcl e510: 20 3e 3e 20 24 40 0a 09 65 63 68 6f 20 3b 20 72 >>$@..echo ; r
e520: 65 74 75 72 6e 20 7a 4d 61 69 6e 6c 6f 6f 70 3b eturn zMainloop;
e530: 20 7d 20 3e 3e 20 24 40 0a 0a 73 71 6c 69 74 65 } >> $@..sqlite e540: 33 5f 61 6e 61 6c 79 7a 65 72 2e 65 78 65 3a 09 3_analyzer.exe:. e550: 73 71 6c 69 74 65 33 5f 61 6e 61 6c 79 7a 65 72 sqlite3_analyzer e560: 2e 63 20 24 28 4c 49 42 52 45 53 4f 42 4a 53 29 .c$(LIBRESOBJS)
e570: 0a 09 24 28 4c 54 4c 49 4e 4b 29 20 24 28 4e 4f ..$(LTLINK)$(NO
e580: 5f 57 41 52 4e 29 20 2d 44 42 55 49 4c 44 5f 73 _WARN) -DBUILD_s
e590: 71 6c 69 74 65 20 2d 49 24 28 54 43 4c 49 4e 43 qlite -I$(TCLINC e5a0: 44 49 52 29 20 73 71 6c 69 74 65 33 5f 61 6e 61 DIR) sqlite3_ana e5b0: 6c 79 7a 65 72 2e 63 20 5c 0a 09 09 2f 6c 69 6e lyzer.c \.../lin e5c0: 6b 20 24 28 4c 44 46 4c 41 47 53 29 20 24 28 4c k$(LDFLAGS) $(L e5d0: 54 4c 49 4e 4b 4f 50 54 53 29 20 24 28 4c 54 4c TLINKOPTS)$(LTL
e5e0: 49 42 50 41 54 48 53 29 20 24 28 4c 49 42 52 45 IBPATHS) $(LIBRE e5f0: 53 4f 42 4a 53 29 20 24 28 4c 54 4c 49 42 53 29 SOBJS)$(LTLIBS)
e600: 20 24 28 54 4c 49 42 53 29 0a 0a 74 65 73 74 6c $(TLIBS)..testl e610: 6f 61 64 65 78 74 2e 6c 6f 3a 09 24 28 54 4f 50 oadext.lo:.$(TOP
e620: 29 5c 73 72 63 5c 74 65 73 74 5f 6c 6f 61 64 65 )\src\test_loade
e630: 78 74 2e 63 0a 09 24 28 4c 54 43 4f 4d 50 49 4c xt.c..$(LTCOMPIL e640: 45 29 20 24 28 4e 4f 5f 57 41 52 4e 29 20 2d 63 E)$(NO_WARN) -c
e650: 20 24 28 54 4f 50 29 5c 73 72 63 5c 74 65 73 74 $(TOP)\src\test e660: 5f 6c 6f 61 64 65 78 74 2e 63 0a 0a 74 65 73 74 _loadext.c..test e670: 6c 6f 61 64 65 78 74 2e 64 6c 6c 3a 20 74 65 73 loadext.dll: tes e680: 74 6c 6f 61 64 65 78 74 2e 6c 6f 0a 09 24 28 4c tloadext.lo..$(L
e690: 44 29 20 24 28 4c 44 46 4c 41 47 53 29 20 24 28 D) $(LDFLAGS)$(
e6a0: 4c 54 4c 49 4e 4b 4f 50 54 53 29 20 24 28 4c 54 LTLINKOPTS) $(LT e6b0: 4c 49 42 50 41 54 48 53 29 20 2f 44 4c 4c 20 2f LIBPATHS) /DLL / e6c0: 4f 55 54 3a 24 40 20 74 65 73 74 6c 6f 61 64 65 OUT:$@ testloade
e6d0: 78 74 2e 6c 6f 0a 0a 73 68 6f 77 64 62 2e 65 78 xt.lo..showdb.ex
e6e0: 65 3a 09 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c 73 e:.$(TOP)\tool\s e6f0: 68 6f 77 64 62 2e 63 20 24 28 53 51 4c 49 54 45 howdb.c$(SQLITE
e700: 33 43 29 20 24 28 53 51 4c 49 54 45 33 48 29 0a 3C) $(SQLITE3H). e710: 09 24 28 4c 54 4c 49 4e 4b 29 20 24 28 4e 4f 5f .$(LTLINK) $(NO_ e720: 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 5f 54 WARN) -DSQLITE_T e730: 48 52 45 41 44 53 41 46 45 3d 30 20 2d 44 53 51 HREADSAFE=0 -DSQ e740: 4c 49 54 45 5f 4f 4d 49 54 5f 4c 4f 41 44 5f 45 LITE_OMIT_LOAD_E e750: 58 54 45 4e 53 49 4f 4e 20 2d 46 65 24 40 20 5c XTENSION -Fe$@ \
e760: 0a 09 09 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c 73 ...$(TOP)\tool\s e770: 68 6f 77 64 62 2e 63 20 24 28 53 51 4c 49 54 45 howdb.c$(SQLITE
e780: 33 43 29 20 2f 6c 69 6e 6b 20 24 28 4c 44 46 4c 3C) /link $(LDFL e790: 41 47 53 29 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 AGS)$(LTLINKOPT
e7a0: 53 29 0a 0a 73 68 6f 77 73 74 61 74 34 2e 65 78 S)..showstat4.ex
e7b0: 65 3a 09 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c 73 e:.$(TOP)\tool\s e7c0: 68 6f 77 73 74 61 74 34 2e 63 20 24 28 53 51 4c howstat4.c$(SQL
e7d0: 49 54 45 33 43 29 20 24 28 53 51 4c 49 54 45 33 ITE3C) $(SQLITE3 e7e0: 48 29 0a 09 24 28 4c 54 4c 49 4e 4b 29 20 24 28 H)..$(LTLINK) $( e7f0: 4e 4f 5f 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 NO_WARN) -DSQLIT e800: 45 5f 54 48 52 45 41 44 53 41 46 45 3d 30 20 2d E_THREADSAFE=0 - e810: 44 53 51 4c 49 54 45 5f 4f 4d 49 54 5f 4c 4f 41 DSQLITE_OMIT_LOA e820: 44 5f 45 58 54 45 4e 53 49 4f 4e 20 2d 46 65 24 D_EXTENSION -Fe$
e830: 40 20 5c 0a 09 09 24 28 54 4f 50 29 5c 74 6f 6f @ \...$(TOP)\too e840: 6c 5c 73 68 6f 77 73 74 61 74 34 2e 63 20 24 28 l\showstat4.c$(
e850: 53 51 4c 49 54 45 33 43 29 20 2f 6c 69 6e 6b 20 SQLITE3C) /link
e860: 24 28 4c 44 46 4c 41 47 53 29 20 24 28 4c 54 4c $(LDFLAGS)$(LTL
e870: 49 4e 4b 4f 50 54 53 29 0a 0a 73 68 6f 77 6a 6f INKOPTS)..showjo
e880: 75 72 6e 61 6c 2e 65 78 65 3a 09 24 28 54 4f 50 urnal.exe:.$(TOP e890: 29 5c 74 6f 6f 6c 5c 73 68 6f 77 6a 6f 75 72 6e )\tool\showjourn e8a0: 61 6c 2e 63 20 24 28 53 51 4c 49 54 45 33 43 29 al.c$(SQLITE3C)
e8b0: 20 24 28 53 51 4c 49 54 45 33 48 29 0a 09 24 28 $(SQLITE3H)..$(
e8c0: 4c 54 4c 49 4e 4b 29 20 24 28 4e 4f 5f 57 41 52 LTLINK) $(NO_WAR e8d0: 4e 29 20 2d 44 53 51 4c 49 54 45 5f 54 48 52 45 N) -DSQLITE_THRE e8e0: 41 44 53 41 46 45 3d 30 20 2d 44 53 51 4c 49 54 ADSAFE=0 -DSQLIT e8f0: 45 5f 4f 4d 49 54 5f 4c 4f 41 44 5f 45 58 54 45 E_OMIT_LOAD_EXTE e900: 4e 53 49 4f 4e 20 2d 46 65 24 40 20 5c 0a 09 09 NSION -Fe$@ \...
e910: 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c 73 68 6f 77 $(TOP)\tool\show e920: 6a 6f 75 72 6e 61 6c 2e 63 20 24 28 53 51 4c 49 journal.c$(SQLI
e930: 54 45 33 43 29 20 2f 6c 69 6e 6b 20 24 28 4c 44 TE3C) /link $(LD e940: 46 4c 41 47 53 29 20 24 28 4c 54 4c 49 4e 4b 4f FLAGS)$(LTLINKO
e950: 50 54 53 29 0a 0a 73 68 6f 77 77 61 6c 2e 65 78 PTS)..showwal.ex
e960: 65 3a 09 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c 73 e:.$(TOP)\tool\s e970: 68 6f 77 77 61 6c 2e 63 20 24 28 53 51 4c 49 54 howwal.c$(SQLIT
e980: 45 33 43 29 20 24 28 53 51 4c 49 54 45 33 48 29 E3C) $(SQLITE3H) e990: 0a 09 24 28 4c 54 4c 49 4e 4b 29 20 24 28 4e 4f ..$(LTLINK) $(NO e9a0: 5f 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 5f _WARN) -DSQLITE_ e9b0: 54 48 52 45 41 44 53 41 46 45 3d 30 20 2d 44 53 THREADSAFE=0 -DS e9c0: 51 4c 49 54 45 5f 4f 4d 49 54 5f 4c 4f 41 44 5f QLITE_OMIT_LOAD_ e9d0: 45 58 54 45 4e 53 49 4f 4e 20 2d 46 65 24 40 20 EXTENSION -Fe$@
e9e0: 5c 0a 09 09 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c \...$(TOP)\tool\ e9f0: 73 68 6f 77 77 61 6c 2e 63 20 24 28 53 51 4c 49 showwal.c$(SQLI
ea00: 54 45 33 43 29 20 2f 6c 69 6e 6b 20 24 28 4c 44 TE3C) /link $(LD ea10: 46 4c 41 47 53 29 20 24 28 4c 54 4c 49 4e 4b 4f FLAGS)$(LTLINKO
ea20: 50 54 53 29 0a 0a 63 68 61 6e 67 65 73 65 74 2e PTS)..changeset.
ea30: 65 78 65 3a 09 24 28 54 4f 50 29 5c 65 78 74 5c exe:.$(TOP)\ext\ ea40: 73 65 73 73 69 6f 6e 5c 63 68 61 6e 67 65 73 65 session\changese ea50: 74 2e 63 20 24 28 53 51 4c 49 54 45 33 43 29 0a t.c$(SQLITE3C).
ea60: 09 24 28 4c 54 4c 49 4e 4b 29 20 2d 44 53 51 4c .$(LTLINK) -DSQL ea70: 49 54 45 5f 54 48 52 45 41 44 53 41 46 45 3d 30 ITE_THREADSAFE=0 ea80: 20 2d 44 53 51 4c 49 54 45 5f 4f 4d 49 54 5f 4c -DSQLITE_OMIT_L ea90: 4f 41 44 5f 45 58 54 45 4e 53 49 4f 4e 20 2d 46 OAD_EXTENSION -F eaa0: 65 24 40 20 5c 0a 09 09 24 28 54 4f 50 29 5c 65 e$@ \...$(TOP)\e eab0: 78 74 5c 73 65 73 73 69 6f 6e 5c 63 68 61 6e 67 xt\session\chang eac0: 65 73 65 74 2e 63 20 24 28 53 51 4c 49 54 45 33 eset.c$(SQLITE3
ead0: 43 29 0a 0a 66 74 73 33 76 69 65 77 2e 65 78 65 C)..fts3view.exe
eae0: 3a 09 24 28 54 4f 50 29 5c 65 78 74 5c 66 74 73 :.$(TOP)\ext\fts eaf0: 33 5c 74 6f 6f 6c 5c 66 74 73 33 76 69 65 77 2e 3\tool\fts3view. eb00: 63 20 24 28 53 51 4c 49 54 45 33 43 29 20 24 28 c$(SQLITE3C) $( eb10: 53 51 4c 49 54 45 33 48 29 0a 09 24 28 4c 54 4c SQLITE3H)..$(LTL
eb20: 49 4e 4b 29 20 24 28 4e 4f 5f 57 41 52 4e 29 20 INK) $(NO_WARN) eb30: 2d 44 53 51 4c 49 54 45 5f 54 48 52 45 41 44 53 -DSQLITE_THREADS eb40: 41 46 45 3d 30 20 2d 44 53 51 4c 49 54 45 5f 4f AFE=0 -DSQLITE_O eb50: 4d 49 54 5f 4c 4f 41 44 5f 45 58 54 45 4e 53 49 MIT_LOAD_EXTENSI eb60: 4f 4e 20 2d 46 65 24 40 20 5c 0a 09 09 24 28 54 ON -Fe$@ \...$(T eb70: 4f 50 29 5c 65 78 74 5c 66 74 73 33 5c 74 6f 6f OP)\ext\fts3\too eb80: 6c 5c 66 74 73 33 76 69 65 77 2e 63 20 24 28 53 l\fts3view.c$(S
eb90: 51 4c 49 54 45 33 43 29 20 2f 6c 69 6e 6b 20 24 QLITE3C) /link $eba0: 28 4c 44 46 4c 41 47 53 29 20 24 28 4c 54 4c 49 (LDFLAGS)$(LTLI
ebb0: 4e 4b 4f 50 54 53 29 0a 0a 72 6f 6c 6c 62 61 63 NKOPTS)..rollbac
ebc0: 6b 2d 74 65 73 74 2e 65 78 65 3a 09 24 28 54 4f k-test.exe:.$(TO ebd0: 50 29 5c 74 6f 6f 6c 5c 72 6f 6c 6c 62 61 63 6b P)\tool\rollback ebe0: 2d 74 65 73 74 2e 63 20 24 28 53 51 4c 49 54 45 -test.c$(SQLITE
ebf0: 33 43 29 20 24 28 53 51 4c 49 54 45 33 48 29 0a 3C) $(SQLITE3H). ec00: 09 24 28 4c 54 4c 49 4e 4b 29 20 24 28 4e 4f 5f .$(LTLINK) $(NO_ ec10: 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 5f 54 WARN) -DSQLITE_T ec20: 48 52 45 41 44 53 41 46 45 3d 30 20 2d 44 53 51 HREADSAFE=0 -DSQ ec30: 4c 49 54 45 5f 4f 4d 49 54 5f 4c 4f 41 44 5f 45 LITE_OMIT_LOAD_E ec40: 58 54 45 4e 53 49 4f 4e 20 2d 46 65 24 40 20 5c XTENSION -Fe$@ \
ec50: 0a 09 09 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c 72 ...$(TOP)\tool\r ec60: 6f 6c 6c 62 61 63 6b 2d 74 65 73 74 2e 63 20 24 ollback-test.c$
ec70: 28 53 51 4c 49 54 45 33 43 29 20 2f 6c 69 6e 6b (SQLITE3C) /link
ec80: 20 24 28 4c 44 46 4c 41 47 53 29 20 24 28 4c 54 $(LDFLAGS)$(LT
ec90: 4c 49 4e 4b 4f 50 54 53 29 0a 0a 4c 6f 67 45 73 LINKOPTS)..LogEs
eca0: 74 2e 65 78 65 3a 09 24 28 54 4f 50 29 5c 74 6f t.exe:.$(TOP)\to ecb0: 6f 6c 5c 6c 6f 67 65 73 74 2e 63 20 24 28 53 51 ol\logest.c$(SQ
ecc0: 4c 49 54 45 33 48 29 0a 09 24 28 4c 54 4c 49 4e LITE3H)..$(LTLIN ecd0: 4b 29 20 24 28 4e 4f 5f 57 41 52 4e 29 20 2d 46 K)$(NO_WARN) -F
ece0: 65 24 40 20 24 28 54 4f 50 29 5c 74 6f 6f 6c 5c e$@$(TOP)\tool\
ecf0: 4c 6f 67 45 73 74 2e 63 20 2f 6c 69 6e 6b 20 24 LogEst.c /link $ed00: 28 4c 44 46 4c 41 47 53 29 20 24 28 4c 54 4c 49 (LDFLAGS)$(LTLI
ed10: 4e 4b 4f 50 54 53 29 0a 0a 77 6f 72 64 63 6f 75 NKOPTS)..wordcou
ed20: 6e 74 2e 65 78 65 3a 09 24 28 54 4f 50 29 5c 74 nt.exe:.$(TOP)\t ed30: 65 73 74 5c 77 6f 72 64 63 6f 75 6e 74 2e 63 20 est\wordcount.c ed40: 24 28 53 51 4c 49 54 45 33 43 29 20 24 28 53 51$(SQLITE3C) $(SQ ed50: 4c 49 54 45 33 48 29 0a 09 24 28 4c 54 4c 49 4e LITE3H)..$(LTLIN
ed60: 4b 29 20 24 28 4e 4f 5f 57 41 52 4e 29 20 2d 44 K) $(NO_WARN) -D ed70: 53 51 4c 49 54 45 5f 54 48 52 45 41 44 53 41 46 SQLITE_THREADSAF ed80: 45 3d 30 20 2d 44 53 51 4c 49 54 45 5f 4f 4d 49 E=0 -DSQLITE_OMI ed90: 54 5f 4c 4f 41 44 5f 45 58 54 45 4e 53 49 4f 4e T_LOAD_EXTENSION eda0: 20 2d 46 65 24 40 20 5c 0a 09 09 24 28 54 4f 50 -Fe$@ \...$(TOP edb0: 29 5c 74 65 73 74 5c 77 6f 72 64 63 6f 75 6e 74 )\test\wordcount edc0: 2e 63 20 24 28 53 51 4c 49 54 45 33 43 29 20 2f .c$(SQLITE3C) /
edd0: 6c 69 6e 6b 20 24 28 4c 44 46 4c 41 47 53 29 20 link $(LDFLAGS) ede0: 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 0a 0a 73$(LTLINKOPTS)..s
edf0: 70 65 65 64 74 65 73 74 31 2e 65 78 65 3a 09 24 peedtest1.exe:.$ee00: 28 54 4f 50 29 5c 74 65 73 74 5c 73 70 65 65 64 (TOP)\test\speed ee10: 74 65 73 74 31 2e 63 20 24 28 53 51 4c 49 54 45 test1.c$(SQLITE
ee20: 33 43 29 20 24 28 53 51 4c 49 54 45 33 48 29 0a 3C) $(SQLITE3H). ee30: 09 24 28 4c 54 4c 49 4e 4b 29 20 24 28 4e 4f 5f .$(LTLINK) $(NO_ ee40: 57 41 52 4e 29 20 2d 44 53 51 4c 49 54 45 5f 4f WARN) -DSQLITE_O ee50: 4d 49 54 5f 4c 4f 41 44 5f 45 58 54 45 4e 53 49 MIT_LOAD_EXTENSI ee60: 4f 4e 20 2d 46 65 24 40 20 5c 0a 09 09 24 28 54 ON -Fe$@ \...$(T ee70: 4f 50 29 5c 74 65 73 74 5c 73 70 65 65 64 74 65 OP)\test\speedte ee80: 73 74 31 2e 63 20 24 28 53 51 4c 49 54 45 33 43 st1.c$(SQLITE3C
ee90: 29 20 2f 6c 69 6e 6b 20 24 28 4c 44 46 4c 41 47 ) /link $(LDFLAG eea0: 53 29 20 24 28 4c 54 4c 49 4e 4b 4f 50 54 53 29 S)$(LTLINKOPTS)
eeb0: 0a 0a 72 62 75 2e 65 78 65 3a 20 24 28 54 4f 50 ..rbu.exe: $(TOP eec0: 29 5c 65 78 74 5c 72 62 75 5c 72 62 75 2e 63 20 )\ext\rbu\rbu.c eed0: 24 28 54 4f 50 29 5c 65 78 74 5c 72 62 75 5c 73$(TOP)\ext\rbu\s
eee0: 71 6c 69 74 65 33 72 62 75 2e 63 20 24 28 53 51 qlite3rbu.c $(SQ eef0: 4c 49 54 45 33 43 29 20 24 28 53 51 4c 49 54 45 LITE3C)$(SQLITE
ef00: 33 48 29 0a 09 24 28 4c 54 4c 49 4e 4b 29 20 24 3H)..$(LTLINK)$
ef10: 28 4e 4f 5f 57 41 52 4e 29 20 2d 44 53 51 4c 49 (NO_WARN) -DSQLI
ef20: 54 45 5f 45 4e 41 42 4c 45 5f 52 42 55 20 2d 46 TE_ENABLE_RBU -F
ef30: 65 24 40 20 5c 0a 09 09 24 28 54 4f 50 29 5c 65 e$@ \...$(TOP)\e
ef40: 78 74 5c 72 62 75 5c 72 62 75 2e 63 20 24 28 53 xt\rbu\rbu.c $(S ef50: 51 4c 49 54 45 33 43 29 20 2f 6c 69 6e 6b 20 24 QLITE3C) /link$
ef60: 28 4c 44 46 4c 41 47 53 29 20 24 28 4c 54 4c 49 (LDFLAGS) $(LTLI ef70: 4e 4b 4f 50 54 53 29 0a 23 20 3c 3c 2f 6d 61 72 NKOPTS).# <</mar ef80: 6b 3e 3e 0a 0a 63 6c 65 61 6e 3a 0a 09 64 65 6c k>>..clean:..del ef90: 20 2f 51 20 2a 2e 65 78 70 20 2a 2e 6c 6f 20 2a /Q *.exp *.lo * efa0: 2e 69 6c 6b 20 2a 2e 6c 69 62 20 2a 2e 6f 62 6a .ilk *.lib *.obj efb0: 20 2a 2e 6e 63 62 20 2a 2e 70 64 62 20 2a 2e 73 *.ncb *.pdb *.s efc0: 64 66 20 2a 2e 73 75 6f 20 32 3e 4e 55 4c 0a 09 df *.suo 2>NUL.. efd0: 64 65 6c 20 2f 51 20 2a 2e 62 73 63 20 2a 2e 63 del /Q *.bsc *.c efe0: 6f 64 20 2a 2e 64 61 20 2a 2e 62 62 20 2a 2e 62 od *.da *.bb *.b eff0: 62 67 20 2a 2e 76 63 20 67 6d 6f 6e 2e 6f 75 74 bg *.vc gmon.out f000: 20 32 3e 4e 55 4c 0a 23 20 3c 3c 6d 61 72 6b 3e 2>NUL.# <<mark> f010: 3e 0a 09 64 65 6c 20 2f 51 20 24 28 53 51 4c 49 >..del /Q$(SQLI
f020: 54 45 33 43 29 20 24 28 53 51 4c 49 54 45 33 48 TE3C) $(SQLITE3H f030: 29 20 6f 70 63 6f 64 65 73 2e 63 20 6f 70 63 6f ) opcodes.c opco f040: 64 65 73 2e 68 20 32 3e 4e 55 4c 0a 09 64 65 6c des.h 2>NUL..del f050: 20 2f 51 20 6c 65 6d 6f 6e 2e 2a 20 6c 65 6d 70 /Q lemon.* lemp f060: 61 72 2e 63 20 70 61 72 73 65 2e 2a 20 32 3e 4e ar.c parse.* 2>N f070: 55 4c 0a 09 64 65 6c 20 2f 51 20 6d 6b 6b 65 79 UL..del /Q mkkey f080: 77 6f 72 64 68 61 73 68 2e 2a 20 6b 65 79 77 6f wordhash.* keywo f090: 72 64 68 61 73 68 2e 68 20 32 3e 4e 55 4c 0a 09 rdhash.h 2>NUL.. f0a0: 64 65 6c 20 2f 51 20 6e 6f 74 61 73 68 61 72 65 del /Q notashare f0b0: 64 6c 69 62 2e 2a 20 32 3e 4e 55 4c 0a 09 2d 72 dlib.* 2>NUL..-r f0c0: 6d 64 69 72 20 2f 51 2f 53 20 2e 64 65 70 73 20 mdir /Q/S .deps f0d0: 32 3e 4e 55 4c 0a 09 2d 72 6d 64 69 72 20 2f 51 2>NUL..-rmdir /Q f0e0: 2f 53 20 2e 6c 69 62 73 20 32 3e 4e 55 4c 0a 09 /S .libs 2>NUL.. f0f0: 2d 72 6d 64 69 72 20 2f 51 2f 53 20 71 75 6f 74 -rmdir /Q/S quot f100: 61 32 61 20 32 3e 4e 55 4c 0a 09 2d 72 6d 64 69 a2a 2>NUL..-rmdi f110: 72 20 2f 51 2f 53 20 71 75 6f 74 61 32 62 20 32 r /Q/S quota2b 2 f120: 3e 4e 55 4c 0a 09 2d 72 6d 64 69 72 20 2f 51 2f >NUL..-rmdir /Q/ f130: 53 20 71 75 6f 74 61 32 63 20 32 3e 4e 55 4c 0a S quota2c 2>NUL. f140: 09 2d 72 6d 64 69 72 20 2f 51 2f 53 20 74 73 72 .-rmdir /Q/S tsr f150: 63 20 32 3e 4e 55 4c 0a 09 64 65 6c 20 2f 51 20 c 2>NUL..del /Q f160: 2e 74 61 72 67 65 74 5f 73 6f 75 72 63 65 20 32 .target_source 2 f170: 3e 4e 55 4c 0a 09 64 65 6c 20 2f 51 20 74 63 6c >NUL..del /Q tcl f180: 73 71 6c 69 74 65 33 2e 65 78 65 20 32 3e 4e 55 sqlite3.exe 2>NU f190: 4c 0a 09 64 65 6c 20 2f 51 20 74 65 73 74 6c 6f L..del /Q testlo f1a0: 61 64 65 78 74 2e 64 6c 6c 20 32 3e 4e 55 4c 0a adext.dll 2>NUL. f1b0: 09 64 65 6c 20 2f 51 20 74 65 73 74 66 69 78 74 .del /Q testfixt f1c0: 75 72 65 2e 65 78 65 20 74 65 73 74 2e 64 62 20 ure.exe test.db f1d0: 32 3e 4e 55 4c 0a 09 64 65 6c 20 2f 51 20 4c 6f 2>NUL..del /Q Lo f1e0: 67 45 73 74 2e 65 78 65 20 66 74 73 33 76 69 65 gEst.exe fts3vie f1f0: 77 2e 65 78 65 20 72 6f 6c 6c 62 61 63 6b 2d 74 w.exe rollback-t f200: 65 73 74 2e 65 78 65 20 73 68 6f 77 64 62 2e 65 est.exe showdb.e f210: 78 65 20 32 3e 4e 55 4c 0a 09 64 65 6c 20 2f 51 xe 2>NUL..del /Q f220: 20 63 68 61 6e 67 65 73 65 74 2e 65 78 65 20 32 changeset.exe 2 f230: 3e 4e 55 4c 0a 09 64 65 6c 20 2f 51 20 73 68 6f >NUL..del /Q sho f240: 77 6a 6f 75 72 6e 61 6c 2e 65 78 65 20 73 68 6f wjournal.exe sho f250: 77 73 74 61 74 34 2e 65 78 65 20 73 68 6f 77 77 wstat4.exe showw f260: 61 6c 2e 65 78 65 20 73 70 65 65 64 74 65 73 74 al.exe speedtest f270: 31 2e 65 78 65 20 32 3e 4e 55 4c 0a 09 64 65 6c 1.exe 2>NUL..del f280: 20 2f 51 20 6d 70 74 65 73 74 65 72 2e 65 78 65 /Q mptester.exe f290: 20 77 6f 72 64 63 6f 75 6e 74 2e 65 78 65 20 72 wordcount.exe r f2a0: 62 75 2e 65 78 65 20 73 72 63 63 6b 31 2e 65 78 bu.exe srcck1.ex f2b0: 65 20 32 3e 4e 55 4c 0a 09 64 65 6c 20 2f 51 20 e 2>NUL..del /Q f2c0: 24 28 53 51 4c 49 54 45 33 45 58 45 29 20 24 28$(SQLITE3EXE) \$(
f2d0: 53 51 4c 49 54 45 33 44 4c 4c 29 20 73 71 6c 69 SQLITE3DLL) sqli
f2e0: 74 65 33 2e 64 65 66 20 32 3e 4e 55 4c 0a 09 64 te3.def 2>NUL..d
f2f0: 65 6c 20 2f 51 20 73 71 6c 69 74 65 33 2e 63 20 el /Q sqlite3.c
f300: 73 71 6c 69 74 65 33 2d 2a 2e 63 20 32 3e 4e 55 sqlite3-*.c 2>NU
f310: 4c 0a 09 64 65 6c 20 2f 51 20 73 71 6c 69 74 65 L..del /Q sqlite
f320: 33 72 63 2e 68 20 32 3e 4e 55 4c 0a 09 64 65 6c 3rc.h 2>NUL..del
f330: 20 2f 51 20 73 68 65 6c 6c 2e 63 20 73 71 6c 69 /Q shell.c sqli
f340: 74 65 33 65 78 74 2e 68 20 32 3e 4e 55 4c 0a 09 te3ext.h 2>NUL..
f350: 64 65 6c 20 2f 51 20 73 71 6c 69 74 65 33 5f 61 del /Q sqlite3_a
f360: 6e 61 6c 79 7a 65 72 2e 65 78 65 20 73 71 6c 69 nalyzer.exe sqli
f370: 74 65 33 5f 61 6e 61 6c 79 7a 65 72 2e 63 20 32 te3_analyzer.c 2
f380: 3e 4e 55 4c 0a 09 64 65 6c 20 2f 51 20 73 71 6c >NUL..del /Q sql
f390: 69 74 65 2d 2a 2d 6f 75 74 70 75 74 2e 76 73 69 ite-*-output.vsi
f3a0: 78 20 32 3e 4e 55 4c 0a 09 64 65 6c 20 2f 51 20 x 2>NUL..del /Q
f3b0: 66 75 7a 7a 65 72 73 68 65 6c 6c 2e 65 78 65 20 fuzzershell.exe
f3c0: 66 75 7a 7a 63 68 65 63 6b 2e 65 78 65 20 73 71 fuzzcheck.exe sq
f3d0: 6c 64 69 66 66 2e 65 78 65 20 32 3e 4e 55 4c 0a ldiff.exe 2>NUL.
f3e0: 09 64 65 6c 20 2f 51 20 66 74 73 35 2e 2a 20 66 .del /Q fts5.* f
f3f0: 74 73 35 70 61 72 73 65 2e 2a 20 32 3e 4e 55 4c ts5parse.* 2>NUL
f400: 0a 23 20 3c 3c 2f 6d 61 72 6b 3e 3e 0a .# <</mark>>.
|
{}
|
# Group 2
Group 2 of the periodic table is called the alkaline earth metals.
Group 2 contains the elements:
• Beryllium , Be
• Magnesium, Mg
• Calcium, Ca
• Strontium, Sr
• Barium, Ba
# Properties
The members of group 2 have two valence electrons. They behave similarly to the alkali metals, based on the fact that they will donate two electrons (instead of one) to reach a noble gas shell. It is harder to donate two electrons at the same time compared to one, which makes the alkaline earth metals of group 2 not react as easily as the alkali metals of group 1.
# Reactions
The alkaline earth metals do not react with as many substances as the alkali metals do. They take part of approximately the same reactions, but react much more slowly (if at all).
Just as the alkali metals, they react with halogens:
$\mathrm{Mg(s) + Cl_2(g) \longrightarrow \:MgCl_2(s)}$
They also react with water:
$\mathrm{Ca(s) + 2H2O(l) \longrightarrow \:CaOH_2(aq) + H_2(g)}$
A higher water temperature is required for the reaction to take place. The reactions are not as fast or explosive as of the alkali metals. To compare, we can look at the reaction between calcium and water, and the reaction between potassium and water. These are just side by side in the periodic system. The reaction of calcium is much slower, and doesn't generate enough heat to ignite the hydrogen gas.
If we expose the alkaline earth metals to air, a protective layer of oxide is created, which prevents further oxygen from penetrating the metal, thus stopping the process of rusting:
$\mathrm{2Ca(s) + O_2(g) \longrightarrow \:2CaO(s)}$
# Abundance in nature
Magnesium ions, calcium ions, and strontium ions are common in nature. Beryllium and radium is not commonly present.
Magnesium ions are needed for the function of or enzymes, and is also used in chlorophyll, which is an important component of the plant photosynthesis. Calcium ions are needed to build our skeleton, and is used as a signal substance in our cells.
|
{}
|
# How Prove $4x^3+8y^3+15xy^2-27x-54y+54\ge 0$
let $x,y\ge 0$,show that $$4x^3+8y^3+15xy^2-27x-54y+54\ge 0$$
when $x=y=1$ is equality.
this inequality is creat by me.and maybe have some methods to prove it? Thank you
We can prove that in part of the region $x \ge 0, y \ge 0$, the inequality holds$. Let$f(x,y) = 4x^3+8y^3+15xy^2-27x-54y+54$If$y=x$then $$f(x,x)=27(-1 + x)^2(2 + x)\ge 0$$. Set$x=y+a$with$a=\frac{3^{3/2}}{2}$, then for$y\ge 0$we have: $$f(y+a,y)=54 + \frac{81\sqrt{3}}{2}y^2 + 27y^3\ge 0$$ Set$y=x+b$with$b=\frac{3^{3/2}}{13^{1/2}}$, then for$x\ge 0$we have: $$f(x,x+b)=\frac{27}{169}\left(338 - 54\sqrt{39} + 78\sqrt{39}x^2 + 169x^3\right)\ge 0$$ This is because $$338^2=114244>113724= (54\sqrt{39})^2$$. Find the point/s where the gradient vanishes, check that these evaluate to$\geq 0$, and show these are minimas. In case of saddle points, show these evaluate to$>0$. • what? I want to see such AM-GM,sos,Cauchy-Schwarz ,Holder,and so on to prove it.. – math110 Jun 21 '14 at 15:41 • @math110 Perhaps you want to state this in the question. Feel free to down vote :) – user76568 Jun 21 '14 at 15:45 • Also check along$x=0$and$y=0$, where it is$(2y-3)^2(2y+6)$and$(2x-3)^2(x+3)+27$. – Empy2 Jun 21 '14 at 16:34 • I get a saddle point at$(x,y)=(\sqrt{3/28},\sqrt{12/7})$. – Empy2 Jun 21 '14 at 16:56 • @Michael Correct, but it evaluates to a positive number, and hence$(1,1)$is the only minimum for$x,y \geq 0$– user76568 Jun 21 '14 at 17:39 Change variables to$x=X+1,y=Y+1$, and I think you get $$4X^3+12X^2+8Y^3+39Y^2+15XY^2+30XY$$ which is positive if$X$and$Y$are positive - so if$x\geq 1,y\geq 1$. If$X\geq 0$, then it is at least$12X^2+30XY+31Y^2$, which is always positive. When$x<1,y<1$, then$F_x$and$F_y$are both negative, so$F(x,y)>F(1,1)$. When$y>3/2$,$F_y(x,y)=24y^2+30xy-54>0$, so$F(x,y)>F(x,3/2)$I still have$0<x<1,1<y<3/2\$ to go.
|
{}
|
# Colors for R graphs
## 2006-06-28
[StATS]: **Colors for R graphs (June 28
• 2006)**
I tend to use color sparingly in graphs because most of my graphs end up in black and white in the final production. Even on my web pages
• which appear in color
• I try to avoid too much use of color because I often print these pages on a black and white printer.
So when I did end up using color in a graph
• it was often done rather haphazardly. In R
• for example
• you can control the color of lines, points
• and text by inserting the argument col=x into the appropriate function. So for example
• the code
plot(x,y,type="n") text(x[g==0],y[g==0],"C",col=2) text(x[g==1],y[g==1],"T",col=3)
in.print(width=8,height=10.5,printer="Adobe PDF") pa
would produce a graph where all the data points with the variable g equal to 0 would be red C’s and all the data points with the variable g equal to 1 would be green T’s. I never bothered figuring out how to get a particular color all that carefully because I never needed to worry too much about it.
But someone asked for a graph with black and gray lines
• and I figured I better figure out how to make gray lines (black is easy because that is the default color). It turns out that you can specify colors in R by using a string argument rather than a number. So the code for drawing black and gray lines would look something like
plot(x,y,type="n") lines(x[g==0],y[g==0],col="black") lines(x[g==1],y[g==1],col="gray")
Now
• what are all the possible text strings that you can specify? It turns out that there is an R function
• colors() that lists all the possible colors that you can specify with a text string. In the version I am using right now (2.2.1) there are 657 choices from “aliceblue” through “yellowgreen.” There are ranges of colors like azure1 through azure4. The range of grays is especially wide (gray1 through gray100) and the folks who wrote R were even nice enough to repeat that list using the British English spelling (grey1 through grey100). You can even review the same list by using the function
• colours(). How thoughtful!
I wrote a short program that produces all the colors in a PDF file.
win.print(width=8,height=10.5,printer="Adobe PDF") par(mar=rep(0,4)) ncolumns <- 7 nrows <- 100 npages <- trunc(length(colors())/(ncolumns*nrows))+1 for (i in 1:npages) { plot(c(0,(ncolumns+1)),c(0,nrows+1),xlab=" ",ylab=" ",axes=F,type="n") for (j in 1:nrows) { for (k in 1:ncolumns) { x <- ncolumns*nrows*(i-1)+(k-1)*nrows+j text(k,nrows+1-j,paste(x,"=",colors()[x]),col=colors()[x],cex=0.5) } } } dev.off()
I named the PDF file
• Rcolors.pdf. Some of the very light colors are almost invisible on a white background. Different graphical systems may display these colors differently
• so only use this as a rough guide. You can specify your own colors using the rgb() function. For example
• the command
plot(0:100,col=rgb(0,(0:100)/100,0))
draws a series of points on the diagonal from the darkest green to the lightest green (see below).
If you need a special range of colors for a contour plot or a heatmap, then you can use the palette() function. refer to the help function in R for details.
There is an excellent book which I have just started reading that provides much useful information about graphs in R.
This page was written by Steve Simon while working at Children’s Mercy Hospital. Although I do not hold the copyright for this material
software](../category/RSoftware.html). display](../category/GraphicalDisplay.html) or [Category: R for pages similar to this one at [Category: Graphical with general help resources. You can also browse Children’s Mercy Hospital website. Need more information? I have a page reproducing it here as a service
• as it is no longer available on the Hospital. Although I do not hold the copyright for this material
• I am This page was written by Steve Simon while working at Children’s Mercy
|
{}
|
# How do you solve the system of equations 2x - 5y = 10 and 4x - 10y = 20?
##### 1 Answer
Feb 8, 2015
There are an infinite number of solutions for this pair of equations since they are just two different versions of the same line.
If you re-write $2 x - 5 y = 10$ as a linear (that means in a form that can be drawn as a straight line) function in standard format, you would get $y = \frac{2 x - 10}{5}$. If you re-write $4 x - 10 y = 20$ as a function in standard form you would also get $y = \frac{2 x - 10}{5}$. Therefore both equations represent the same line; any (x,y) pair that works for one also works for the other.
If you have two linear equations (which you can think of as two straight lines drawn in the xy-plane) there are 3 possibilities: they cross at exactly one point; they don't cross (that is they are parallel like train tracks); or they are exactly the same line so they match up at every point along the line.
|
{}
|
## The Annals of Mathematical Statistics
### On the Distribution of the Largest Latent Root and the Corresponding Latent Vector for Principal Component Analysis
T. Sugiyama
#### Abstract
The distribution of the latent vectors of a sample covariance matrix was found by T. W. Anderson [1] in 1951 when the population covariance matrix is a scalar matrix, $\Sigma = \sigma^2I$. The asymptotic distribution for arbitrary $\Sigma$, also, was obtained by T. W. Anderson [3] in 1963. The exact distribution of the latent vectors of a sample covariance matrix has been described by the author [10] in 1965 when the observations are obtained from a bi-variate normal distribution. The elements of each latent vector are the coefficients of a principal component (with sum of squares of coefficients being unity), and the corresponding latent root is the variance of the principal component. In this paper, the exact distribution of the latent vector corresponding to the largest latent root of a sample covariance matrix is given when the observations are from a multivariate normal distribution whose population covariance matrix is arbitrary $\Sigma$, and the distribution of the largest latent root is given when the population covariance matrix is a scalar matrix, $\Sigma = \sigma^2I$.
#### Article information
Source
Ann. Math. Statist., Volume 37, Number 4 (1966), 995-1001.
Dates
First available in Project Euclid: 27 April 2007
https://projecteuclid.org/euclid.aoms/1177699378
Digital Object Identifier
doi:10.1214/aoms/1177699378
Mathematical Reviews number (MathSciNet)
MR201012
Zentralblatt MATH identifier
0151.24103
JSTOR
|
{}
|
Please visit Jefferson Lab Event Policies and Guidance before planning your next event: https://www.jlab.org/conference_planning.
# 2021 EIC UG Meeting Early Career Workshop
Jul 29 – 30, 2021
Online Only
US/Eastern timezone
## Diffractive dissociation in electron-nucleus collisions: theory and phenomenology
Jul 29, 2021, 12:35 PM
15m
Online Only
### Speaker
Anh Dung Le (CPHT, CNRS Ecole Polytechnique, IP Paris)
### Description
We study the diffractive dissociation of a virtual photon in the scattering off a large nucleus at high energies in the QCD dipole picture in which the photon is conveniently represented by an onium. In a well-defined parametric regime, the nuclear scattering of the onium is triggered by large-dipole fluctuations in the course of its rapidity evolution in the form of color dipole branching, and the diffractive dissociation with a minimal gap $Y_0$ is tantamount to the probability that an even number of the dipoles in the onium Fock state effectively participates in the scattering, in a frame in which the onium is evolved to the rapidity $Y−Y_0$ out of the total relative rapidity $Y$. Such picture allows to extract the asymptotic solution to the Kovchegov-Levin equation, established in QCD 20 years ago, which rules the diffractive cross section. Diffraction in electron-ion collisions, which can be linked to the same process in onium-nucleus scattering, is then studied based on numerical solutions of the original Kovchegov-Levin equation and of its next-to-leading extension taking into account the running of the strong coupling, with the aim to make predictions for the distribution of rapidity gaps in realistic kinematics of future electron-ion colliders. We show that the fixed and the running coupling equations lead to different distributions, rather insensitive to the chosen prescription in the running coupling case. The obtained distributions for the fixed coupling framework exhibit a shape characteristic of the above-mentioned picture already at rapidities accessible at future electron-ion colliders, which demonstrates the relevance of measurements of such observables for the microscopic understanding of diffractive dissociation in QCD.
### Primary authors
Anh Dung Le (CPHT, CNRS Ecole Polytechnique, IP Paris) Prof. Stéphane Munier (CPHT, CNRS, Ecole Polytechnique, IP Paris) Prof. Alfred Henry Mueller (Columbia University)
|
{}
|
Similar Solved Questions
3. (15 points) For the following trig equations; a. Find all solutions and b. Find all solutions in the interval [0,2n) Be sure t0 check for extraneous solutions_ 2cos =1(6) 2 cos? @ + sin 9 = 1V3 tan 38 =
3. (15 points) For the following trig equations; a. Find all solutions and b. Find all solutions in the interval [0,2n) Be sure t0 check for extraneous solutions_ 2cos =1 (6) 2 cos? @ + sin 9 = 1 V3 tan 38 =...
Match the following choices (1-4) that best fits (most efficient and logical) each of the following:...
Match the following choices (1-4) that best fits (most efficient and logical) each of the following: 1. Case control Study, 2. Retrospective Cohort Study , 3. Prospective Cohort Study , 4. None of these selections A. Identifying the cause of a disease ___ B. Identifying the long term effects of a ra...
~Cylinctrical Coorolinates' Computa Ts;v 42 dV Height J acl radiu 3 i,e whcTe S is solid cylinder with <90<=<5}
~Cylinctrical Coorolinates' Computa Ts;v 42 dV Height J acl radiu 3 i,e whcTe S is solid cylinder with <90<=<5}...
16. An 800 V/m electric field is directed along the +x-axis. If the potential at x...
16. An 800 V/m electric field is directed along the +x-axis. If the potential at x = 0 m is 2000 V, what is the potential at x 2 m? a. 200 V b. 1000 v c. 400 V d. 800Vv e. 600 V...
Find the intervals of concavity and the inflection points Give exact value(s)g(c) = Vi +
Find the intervals of concavity and the inflection points Give exact value(s) g(c) = Vi +...
Imath 3 En Cred The dstncg fro) EQUALS The a'sTun ce Straight Line Problems いzul 14,6) 3. Find t...
Imath 3 En Cred The dstncg fro) EQUALS The a'sTun ce Straight Line Problems いzul 14,6) 3. Find the ines p Ss (-3,2) Imath 3 En Cred The dstncg fro) EQUALS The a'sTun ce Straight Line Problems いzul 14,6) 3. Find the ines p Ss (-3,2)...
Which statement is not true?a. The transmembrane potential of a rod cell becomes more negative when the rod cell is exposed to light.b. A photoreceptor releases the most neurotransmitter when in total darkness.c. Whereas in vision the intensity of a stimulus is encoded by the degree of hyperpolarization of photoreceptors, in hearing the intensity of a stimulus is encoded by changes in firing rates of sensory neurons.d. Stiffening of the ossicles in the middle ear can lead to deafness.e. The inte
Which statement is not true? a. The transmembrane potential of a rod cell becomes more negative when the rod cell is exposed to light. b. A photoreceptor releases the most neurotransmitter when in total darkness. c. Whereas in vision the intensity of a stimulus is encoded by the degree of hyperpolar...
40) If one party pays a fixed fee on a regular basis in return for a...
40) If one party pays a fixed fee on a regular basis in return for a contingent payment that is triggered by a credit event, such as downgrading of a firm's credit rating that is called a A) credit futures. B) credit swaption C) letter of credit guarantee D) credit default swap....
12 The general formula for ajuanbas tn 5(-2)n-1 1 4th termmarks)
12 The general formula for ajuanbas tn 5(-2)n-1 1 4th term marks)...
Save Ans en 14 of 50 > Hydrazine, N, H, reacts with oxygen to form nitrogen...
Save Ans en 14 of 50 > Hydrazine, N, H, reacts with oxygen to form nitrogen gas and water. N, H. (aq) + 0,(8) N,(g) + 2 H, 0(1) If 3.05 g of N, H, reacts with excess oxygen and produces 0.950 L of N,, at 295 K and 1.00 atm, what is the percent yield of the reaction? percent yield:...
Brandon George: Attempt 1 Question 6 (1 point) In which one of the following structures does...
Brandon George: Attempt 1 Question 6 (1 point) In which one of the following structures does the central atom have a formal charge of +23 e. AIC1 a. SF :ö: 12 :O :Ci: Al :CI: b) b d) d e) e Save...
Two major automobilc manufacturors navc producod compact cars with thc samc sizc cngincs_ Wc aro intorcstod in dctorining whcther cr not thcro significant difference in the MPG (Miles Per Gallon) of the two brands of automobiles_ random sample of eight cars from each manufacturer selectec and eight drivers are selected to dlrive each automobile for specified cistance, The follcwing data show the results cf the test,Divcr'ManufacturcrManufacturcr RPlease do your calculation for the followin
Two major automobilc manufacturors navc producod compact cars with thc samc sizc cngincs_ Wc aro intorcstod in dctorining whcther cr not thcro significant difference in the MPG (Miles Per Gallon) of the two brands of automobiles_ random sample of eight cars from each manufacturer selectec and eight...
Select from the response list for the acid that best fits the description: an acid that completely dissociates in water solution to produce negative ions with charge = of oneHC_H;OzHzSO40 HNO;
Select from the response list for the acid that best fits the description: an acid that completely dissociates in water solution to produce negative ions with charge = of one HC_H;Oz HzSO4 0 HNO;...
Light that has a wavelength of $600 mathrm{~nm}$ strikes a metal surface, and a stream of electrons is ejected from the surface. If light of wavelength $500 mathrm{~nm}$ strikes the surface, the maximum kinetic energy of the electrons emitted from the surface willA. be greaterB. be smaller.C. be the same.D. be $5 / 6$ smaller.$mathrm{E}$. be unmeasurable.
Light that has a wavelength of $600 mathrm{~nm}$ strikes a metal surface, and a stream of electrons is ejected from the surface. If light of wavelength $500 mathrm{~nm}$ strikes the surface, the maximum kinetic energy of the electrons emitted from the surface will A. be greater B. be smaller. C. be ...
ANALYSIS View the gel azainst present Use your Lghi backgiouna Wrh Ihc a3 ranking stralght Lte PCR praducts according euge jeternungthe STR a1e'es aidin tnis dotermi nalion What the STR genotype of Ihe Dam (M) tor each STR Ine STR loci? STR 2 What the STR gunotype Chlck for each STR loci? STR STR 2What are (he gonotypes Potorilial Site (PFI) ard Potenlial Sire (PFZ) lar cach of Ira STR kci? Potentia Sire STR Poteniial Se STR 2 Potenlini Sire STR Polontial 5uo STR 2 Which . Ine potontlal sl
ANALYSIS View the gel azainst present Use your Lghi backgiouna Wrh Ihc a3 ranking stralght Lte PCR praducts according euge jeternungthe STR a1e'es aidin tnis dotermi nalion What the STR genotype of Ihe Dam (M) tor each STR Ine STR loci? STR 2 What the STR gunotype Chlck for each STR loci? STR ...
Find the limits (DO NOT USE LHOSPITAL RULE)lim 0 2+7 ~2' I=0 *7- d.*-0lim (2x + V4 + % - 2)Sin (sin m
Find the limits (DO NOT USE LHOSPITAL RULE) lim 0 2+7 ~2' I= 0 *7- d.*-0 lim (2x + V4 + % - 2) Sin (sin m...
Dengue fever and Dengue Hemorrhagic Fever (DHF) "Dengue and dengue hemorrhagic fever (DHF) results from infection by any of four serotypes of dengue viruses
Dengue fever and Dengue Hemorrhagic Fever (DHF)"Dengue and dengue hemorrhagic fever (DHF) results from infection by any of four serotypes of dengue viruses. Transmission occurs through the bite of infected Aedes mosquitoes, principally Aedes aegypti, which is also the principal urban vector of y...
Find the rate of appearance of Nz at the instant when [NO] 0.310 mol/L and [Hz] 0.200 mol/L_RatemollL
Find the rate of appearance of Nz at the instant when [NO] 0.310 mol/L and [Hz] 0.200 mol/L_ Rate mollL...
1.A 3.00 m long;! hollow aluminum tube with an outer diameter of 30.0 cm and an inner diameter of254 mounted vertically on motor in such cm way that friction is minimized. The motor spins the tube from = 7200 RPM in !37 scconds Calculate the LVCrga power delivered by the motoc during thoce 60 sccondse rest t0 The deisity of aluminum is 2710 kelm' Recall the moment of inertia 0f a hollow cylinder is KM (IORA) (Hint: You should be able t0 calculate the mass of the hollot tube from the volume
1.A 3.00 m long;! hollow aluminum tube with an outer diameter of 30.0 cm and an inner diameter of254 mounted vertically on motor in such cm way that friction is minimized. The motor spins the tube from = 7200 RPM in !37 scconds Calculate the LVCrga power delivered by the motoc during thoce 60 sccond...
What feature makes carbohydrates and fats energy sources?Both have a lot of oxygen atoms_Both have a lot of bonds with loosely held electrons_Both are hydrophobic moleculesBoth form hydrogen bonds with water:Both have a lot of bonds with tightly held electrons_
What feature makes carbohydrates and fats energy sources? Both have a lot of oxygen atoms_ Both have a lot of bonds with loosely held electrons_ Both are hydrophobic molecules Both form hydrogen bonds with water: Both have a lot of bonds with tightly held electrons_...
During a rockslide, a 520kg rock slides from rest down a hillside that is 500m long and 300m high
During a rockslide, a 520kg rock slides from rest down a hillside that is 500m long and 300m high. The coefficient of kinetic energy between the rock and the hill surface is .25. a) If the GPE energy U of the rock –Earth system is zero at the bottom of the hill, what is the value of U just bef...
Miae wilh Junetnclann6ar-allabla This Licck Is aclached Grathei Nork cond tnat 023325 r Ircboninss nulc; snoran ado maz38 Ine coid inninc muilc Remuu nialn[UJe Ci Me acceiencn Oitne rascaraiimo 73mDiron 4
Miae wilh Junetnclan n6ar-allabla This Licck Is aclached Grathei Nork cond tnat 023325 r Ircboninss nulc; snoran ado maz38 Ine coid inninc muilc Remuu nialn[UJe Ci Me acceiencn Oitne rascaraiimo 7 3m Diron 4...
Problem PageQuestion Suppose 37.7g of ammonium chloride is dissolved in 350.mL of a 0.60 M aqueous...
Problem PageQuestion Suppose 37.7g of ammonium chloride is dissolved in 350.mL of a 0.60 M aqueous solution of potassium carbonate. Calculate the final molarity of chloride anion in the solution. You can assume the volume of the solution doesn't change when the ammonium chloride is dissolved in ...
What would the pH of rain be with the concentration of sulfurous acid (HzSO;) you calculated in problem 10, assuming all the acidity is frorn sulfuric acid. K for dissolved sulfurous acid (ISO,) is 1.70 10 2. Rernember the concentration of sulfurous acid will remain constant because more SOz - will dissolve as some of the acid dissociates into H" and HSO, . points)
What would the pH of rain be with the concentration of sulfurous acid (HzSO;) you calculated in problem 10, assuming all the acidity is frorn sulfuric acid. K for dissolved sulfurous acid (ISO,) is 1.70 10 2. Rernember the concentration of sulfurous acid will remain constant because more SOz - will ...
Find the centroid uf tho region bounded by !he graphs of thie functiong6 xin(-),V-andThe centrold is at (€,V) where
Find the centroid uf tho region bounded by !he graphs of thie functiong 6 xin(-),V- and The centrold is at (€,V) where...
Problem II A vibronic transition consist of concerted electronic and vibrational excitations. First few vibronic excitations from zeroth to respectively θ = 2,3,4,5 states are given by v= 12569.95...
Problem II A vibronic transition consist of concerted electronic and vibrational excitations. First few vibronic excitations from zeroth to respectively θ = 2,3,4,5 states are given by v= 12569.95, 13648.43, 14710.85 and 15757.50 cm-1. Calcualte the values of ve and veXe Problem II A vibronic...
Kingbird Company uses a periodic inventory system. For April, when the company sold 600 units, the following information is available. Units Unit Cost Total Cost April 1 inventory 280 $31$8,680 April 15 purchase 450 37 16,650 April 23 purchase 270 40 10,800 1,000 $36,130 Your answer is correct. Cal... 1 answer The concept of the “employment relationship” is often stated to be the most basic form of... The concept of the “employment relationship” is often stated to be the most basic form of intraorganizational activity. This leads to the concept of the uniqueness of the human resource and balancing differentiation and integration.... 1 answer Farmer Bean is selling green beans in a purely competitive market. His output is 600 units,... Farmer Bean is selling green beans in a purely competitive market. His output is 600 units, of which each has a marginal revenue of$3. What is his average revenue?...
Frannie Fans currently manufactures ceiling fans that include remotes to operate them. The current cost to manufacture 10,280 remotes is as follows: Cost Direct materials Direct labor Variable overhead Fixed overhead Total $66,820$ 56,540 $30,840$ 51,400 \$ 205,600 Frannie is approached by Lincol...
|
{}
|
Speech processing plays an important role in any speech system whether its Automatic Speech Recognition (ASR) or speaker recognition or something else. Mel-Frequency Cepstral Coefficients (MFCCs) were very popular features for a long time; but more recently, filter banks are becoming increasingly popular. In this post, I will discuss filter banks and MFCCs and why are filter banks becoming increasingly popular.
Computing filter banks and MFCCs involve somewhat the same procedure, where in both cases filter banks are computed and with a few more extra steps MFCCs can be obtained. In a nutshell, a signal goes through a pre-emphasis filter; then gets sliced into (overlapping) frames and a window function is applied to each frame; afterwards, we do a Fourier transform on each frame (or more specifically a Short-Time Fourier Transform) and calculate the power spectrum; and subsequently compute the filter banks. To obtain MFCCs, a Discrete Cosine Transform (DCT) is applied to the filter banks retaining a number of the resulting coefficients while the rest are discarded. A final step in both cases, is mean normalization.
## Setup
For this post, I used a 16-bit PCM wav file from here, called “OSR_us_000_0010_8k.wav”, which has a sampling frequency of 8000 Hz. The wav file is a clean speech signal comprising a single voice uttering some sentences with some pauses in-between. For simplicity, I used the first 3.5 seconds of the signal which corresponds roughly to the first sentence in the wav file.
I’ll be using Python 2.7.x, NumPy and SciPy. Some of the code used in this post is based on code available in this repository.
The raw signal has the following form in the time domain:
Signal in the Time Domain
## Pre-Emphasis
The first step is to apply a pre-emphasis filter on the signal to amplify the high frequencies. A pre-emphasis filter is useful in several ways: (1) balance the frequency spectrum since high frequencies usually have smaller magnitudes compared to lower frequencies, (2) avoid numerical problems during the Fourier transform operation and (3) may also improve the Signal-to-Noise Ratio (SNR).
The pre-emphasis filter can be applied to a signal $$x$$ using the first order filter in the following equation:
$y(t) = x(t) - \alpha x(t-1)$
which can be easily implemented using the following line, where typical values for the filter coefficient ($$\alpha$$) are 0.95 or 0.97, pre_emphasis = 0.97:
Pre-emphasis has a modest effect in modern systems, mainly because most of the motivations for the pre-emphasis filter can be achieved using mean normalization (discussed later in this post) except for avoiding the Fourier transform numerical issues which should not be a problem in modern FFT implementations.
The signal after pre-emphasis has the following form in the time domain:
Signal in the Time Domain after Pre-Emphasis
## Framing
After pre-emphasis, we need to split the signal into short-time frames. The rationale behind this step is that frequencies in a signal change over time, so in most cases it doesn’t make sense to do the Fourier transform across the entire signal in that we would lose the frequency contours of the signal over time. To avoid that, we can safely assume that frequencies in a signal are stationary over a very short period of time. Therefore, by doing a Fourier transform over this short-time frame, we can obtain a good approximation of the frequency contours of the signal by concatenating adjacent frames.
Typical frame sizes in speech processing range from 20 ms to 40 ms with 50% (+/-10%) overlap between consecutive frames. Popular settings are 25 ms for the frame size, frame_size = 0.025 and a 10 ms stride (15 ms overlap), frame_stride = 0.01.
## Window
After slicing the signal into frames, we apply a window function such as the Hamming window to each frame. A Hamming window has the following form:
$w[n] = 0.54 − 0.46 cos ( \frac{2πn}{N − 1} )$
where, $$0 \leq n \leq N - 1$$, $$N$$ is the window length. Plotting the previous equation yields the following plot:
Hamming Window
There are several reasons why we need to apply a window function to the frames, notably to counteract the assumption made by the FFT that the data is infinite and to reduce spectral leakage.
## Fourier-Transform and Power Spectrum
We can now do an $$N$$-point FFT on each frame to calculate the frequency spectrum, which is also called Short-Time Fourier-Transform (STFT), where $$N$$ is typically 256 or 512, NFFT = 512; and then compute the power spectrum (periodogram) using the following equation:
$P = \frac{|FFT(x_i)|^2}{N}$
where, $$x_i$$ is the $$i^{th}$$ frame of signal $$x$$. This could be implemented with the following lines:
## Filter Banks
The final step to computing filter banks is applying triangular filters, typically 40 filters, nfilt = 40 on a Mel-scale to the power spectrum to extract frequency bands. The Mel-scale aims to mimic the non-linear human ear perception of sound, by being more discriminative at lower frequencies and less discriminative at higher frequencies. We can convert between Hertz ($$f$$) and Mel ($$m$$) using the following equations:
$m = 2595 \log_{10} (1 + \frac{f}{700})$
$f = 700 (10^{m/2595} - 1)$
Each filter in the filter bank is triangular having a response of 1 at the center frequency and decrease linearly towards 0 till it reaches the center frequencies of the two adjacent filters where the response is 0, as shown in this figure:
Filter bank on a Mel-Scale
This can be modeled by the following equation (taken from here):
$H_m(k) = \begin{cases} \hfill 0 \hfill & k < f(m - 1) \ \ \hfill \dfrac{k - f(m - 1)}{f(m) - f(m - 1)} \hfill & f(m - 1) \leq k < f(m) \ \ \hfill 1 \hfill & k = f(m) \ \ \hfill \dfrac{f(m + 1) - k}{f(m + 1) - f(m)} \hfill & f(m) < k \leq f(m + 1) \ \ \hfill 0 \hfill & k > f(m + 1) \ \end{cases}$
After applying the filter bank to the power spectrum (periodogram) of the signal, we obtain the following spectrogram:
Spectrogram of the Signal
If the Mel-scaled filter banks were the desired features then we can skip to mean normalization.
## Mel-frequency Cepstral Coefficients (MFCCs)
It turns out that filter bank coefficients computed in the previous step are highly correlated, which could be problematic in some machine learning algorithms. Therefore, we can apply Discrete Cosine Transform (DCT) to decorrelate the filter bank coefficients and yield a compressed representation of the filter banks. Typically, for Automatic Speech Recognition (ASR), the resulting cepstral coefficients 2-13 are retained and the rest are discarded; num_ceps = 12. The reasons for discarding the other coefficients is that they represent fast changes in the filter bank coefficients and these fine details don’t contribute to Automatic Speech Recognition (ASR).
One may apply sinusoidal liftering1 to the MFCCs to de-emphasize higher MFCCs which has been claimed to improve speech recognition in noisy signals.
The resulting MFCCs:
MFCCs
## Mean Normalization
As previously mentioned, to balance the spectrum and improve the Signal-to-Noise (SNR), we can simply subtract the mean of each coefficient from all frames.
The mean-normalized filter banks:
Normalized Filter Banks
and similarly for MFCCs:
The mean-normalized MFCCs:
Normalized MFCCs
## Filter Banks vs MFCCs
To this point, the steps to compute filter banks and MFCCs were discussed in terms of their motivations and implementations. It is interesting to note that all steps needed to compute filter banks were motivated by the nature of the speech signal and the human perception of such signals. On the contrary, the extra steps needed to compute MFCCs were motivated by the limitation of some machine learning algorithms. The Discrete Cosine Transform (DCT) was needed to decorrelate filter bank coefficients, a process also referred to as whitening. In particular, MFCCs were very popular when Gaussian Mixture Models - Hidden Markov Models (GMMs-HMMs) were very popular and together, MFCCs and GMMs-HMMs co-evolved to be the standard way of doing Automatic Speech Recognition (ASR)2. With the advent of Deep Learning in speech systems, one might question if MFCCs are still the right choice given that deep neural networks are less susceptible to highly correlated input and therefore the Discrete Cosine Transform (DCT) is no longer a necessary step. It is beneficial to note that Discrete Cosine Transform (DCT) is a linear transformation, and therefore undesirable as it discards some information in speech signals which are highly non-linear.
It is sensible to question if the Fourier Transform is a necessary operation. Given that the Fourier Transform itself is also a linear operation, it might be beneficial to ignore it and attempt to learn directly from the signal in the time domain. Indeed, some recent work has already attempted this and positive results were reported. However, the Fourier transform operation is a difficult operation to learn and may arguably increase the amount of data and model complexity needed to achieve the same performance. Moreover, in doing Short-Time Fourier Transform (STFT), we’ve assumed the signal to be stationary within this short time and therefore the linearity of the Fourier transform would not pose a critical problem.
## Conclusion
In this post, we’ve explored the procedure to compute Mel-scaled filter banks and Mel-Frequency Cepstrum Coefficients (MFCCs). The motivations and implementation of each step in the procedure were discussed. We’ve also argued the reasons behind the increasing popularity of filter banks compared to MFCCs.
tl;dr: Use Mel-scaled filter banks if the machine learning algorithm is not susceptible to highly correlated input. Use MFCCs if the machine learning algorithm is susceptible to correlated input.
Citation:
1. Liftering is filtering in the cepstral domain. Note the abuse of notation in spectral and cepstral with filtering and liftering respectively.
2. An excellent discussion on this topic is in this thesis
|
{}
|
# Prediction of future temperature increase¶
Time series data, like
• the annual average temperature over a range of years
• the daily stock exchange rate,
can be predicted by any supervised machine learning algorithm for regression. In this example the increase of the annual average temperature is predicted using a Support Vector Machine for Regression (SVR).
In [1]:
import csv
import numpy as np
from matplotlib import pyplot as plt
from sklearn import svm
np.set_printoptions(precision=3,suppress=True)
## Read time series data from file¶
The temperature data file contains the average annual temperature of all German counties from year 1811 to 2011. Each column in the data file, except the first, refers to a single county, each row refers to a year. The first column contains the years. The first row (header row) contains the county names.
In [2]:
# read data from file########################################################
yearlist=[]
count=0
columnlist=[1]
columnlist.extend(range(3,20))
with open('./Res/temperaturEntwicklungDeutschland.csv', 'r') as csvfile:
if count==0:
else:
yearlist.extend(row)
count+=1
yearArray=np.array(yearlist).reshape((-1,18)).astype(float)
print yearArray
###################################################################################################################
['', 'Hamburg', 'Bremen', 'Berlin', 'Schleswig-Holstein', 'Niedersachsen', 'Nordrhein-Westfalen', 'Rheinland-Pfalz', 'Saarland', 'Baden-Wuerttemberg', 'Hessen', 'Bayern', 'Mecklenburg-Vorpommern', 'Brandenburg', 'Sachen-Anhalt', 'Sachsen', 'Thueringen', 'Deutschland']
[[ 1881. 7.342 7.699 ..., 6.754 6.675 7.335]
[ 1882. 8.9 9.167 ..., 8.162 7.778 8.366]
[ 1883. 8.388 8.579 ..., 7.5 7.312 7.91 ]
...,
[ 2009. 9.712 9.969 ..., 8.911 8.581 9.185]
[ 2010. 8.127 8.304 ..., 7.455 7.192 7.854]
[ 2011. 9.9 10.1 ..., 9.4 9.1 9.6 ]]
Next, the county for which the temperature shall be analysed is defined, and the column index of this county is determined. The average annual temperature of the selected county is plotted in the figure below.
In [3]:
county="Baden-Wuerttemberg" # select the county for which the annual temperature shall be analyzed
plt.figure(figsize=(12, 10))
plt.plot(yearArray[:,0],yearArray[:,idx],label="annual Temp.")
plt.title("Annual temperature in %s"%county)
plt.hold(True)
## Smooth time series data¶
Since the annual variation of the temperatur is quite large, the entire time series data is first smoothed, using a Gaussian filter of configurable length and variance. The filter operation is implemented in the following function.
In [4]:
def smoothGaussian(list,degree=5,s2=2):
"""this function returns the gaussian smoothed timeseries data
degree: length of filter (in one direction)
s2: variance of the gaussian function
"""
window=degree*2-1
weight=np.ones(window)
gaussweights=np.array([1.0/np.sqrt(2*np.pi*s2)*np.exp(-0.5/s2*x**2) for x in np.arange(-degree+1,degree)])
#norm=np.sum(gaussweights)
smoothed=np.array([np.sum(gaussweights*list[i:i+window]) for i in range(list.shape[0]-window)])
return smoothed
Next, the implemented Gaussian filter is applied to the temperature data of the selected county. The smoothed temperature is plotted in the figure below.
In [5]:
smoothDeg=5 # length of Gaussian smoothing function in one direction
smoothedData=smoothGaussian(yearArray[:,idx],smoothDeg)
plt.figure(figsize=(12, 10))
plt.plot(yearArray[:,0],yearArray[:,idx],'b',label="annual Temp.")
plt.plot(yearArray[smoothDeg-1:-smoothDeg,0],smoothedData,'g',label="smoothed")
plt.legend(loc=2)
plt.grid(True)
## Create Feature Matrix¶
The task is to predict future temperature from the temperature of previous years. First the number of previous years that shall be taken into account for the prediction must be fixed. In this example we choose this required history-length to be $IL=10$. Then the serial data must be rearranged such that each sequence of temperatures of $10$ successive years constitutes one feature vector and the temperature of the first year after the last year in the feature vector is the target value. This construction of the feature matrix and the corresponding target values is implemented in the following function.
In [6]:
def createFeatureMatrix(data,influenceLength=5):
"""This function creates the feature-matrix from the timeseries data.
Each row in the calculated matrix contains influenceLength number of
successive temperatures. From these temperatures in one row the temperature of
the following year is estimated
"""
numSamples=data.shape[0]-influenceLength
features=np.zeros((numSamples,influenceLength))
targets=np.zeros(numSamples)
for row in range(numSamples):
features[row,:]=data[row:row+influenceLength]
targets[row]=data[row+influenceLength]
return features,targets
The function for the generation of the feature matrix and the target vector is invoked and the returned data structures are printed.
In [7]:
IL=10 # Influence length (=number of features), i.e. the number
# of previous years used to estimate the temperature of next year
X,y=createFeatureMatrix(smoothedData,IL)
numS=X.shape[0]
print '-'*10+'Feature Matrix'+'-'*10
print X.round(2)
print '-'*10+'Target values'+'-'*10
print y.round(2)
----------Feature Matrix----------
[[ 7.83 7.57 7.26 ..., 7.59 7.74 7.73]
[ 7.57 7.26 7.07 ..., 7.74 7.73 7.68]
[ 7.26 7.07 7.04 ..., 7.73 7.68 7.77]
...,
[ 8.91 8.7 8.53 ..., 9.31 9.28 9.17]
[ 8.7 8.53 8.6 ..., 9.28 9.17 9.02]
[ 8.53 8.6 8.85 ..., 9.17 9.02 8.97]]
----------Target values----------
[ 7.68 7.77 8. 8.21 8.24 8.1 7.92 7.88 7.97 8.04 8. 7.89
7.74 7.64 7.7 7.92 8.1 8.13 8.09 8.05 8.01 7.94 7.87 7.88
7.98 8.11 8.13 8.01 7.9 7.91 8.03 8.17 8.23 8.18 8.07 7.94
7.82 7.81 7.95 8.14 8.24 8.25 8.18 7.99 7.69 7.44 7.45 7.71
8.03 8.25 8.39 8.51 8.62 8.69 8.67 8.56 8.4 8.22 7.99 7.72
7.53 7.57 7.84 8.17 8.37 8.37 8.13 7.79 7.62 7.69 7.88 8.05
8.08 7.98 7.86 7.81 7.82 7.88 8.04 8.24 8.37 8.35 8.2 8.01
7.9 7.94 8.12 8.27 8.22 8. 7.83 7.88 8.15 8.5 8.75 8.81
8.8 8.85 8.94 8.91 8.7 8.53 8.6 8.85 9.1 9.25 9.31 9.28
9.17 9.02 8.97 9.05]
## Applying a SVR for prediction of future temperature¶
From the generated feature matrix and target vector the first 110 instances are used for training a SVR with RBF-kernel.
In [8]:
#Select subset of samples for training ####################
numTrain=110
Xtrain=X[:numTrain,:]
ytrain=y[:numTrain]
##########################################################
A SVR object is instanciated and trained. In a first step the output of the trained model when applied to the feature vectors of the training data set is calculated. Then the trained model is applied to predict the temperature of the future 9 years. Note how the feature vector is constructed for this prediction: The prediction for the first future year is used as feature for predicting the temperature in the second year. The predictions of the first 2 future years are used as features in the input for the predictionn of the third year and so on.
In [9]:
# create SVR-Object, train it and apply it on training data ##################
regressor=svm.SVR(C=10,coef0=1.0)
#regressor.fit(X,y)
regressor.fit(Xtrain,ytrain)
p=regressor.predict(Xtrain)
plt.figure(figsize=(12, 10))
plt.plot(yearArray[:,0],yearArray[:,idx],'b',label="annual Temp.")
plt.plot(yearArray[smoothDeg-1:-smoothDeg,0],smoothedData,'g',label="smoothed")
plt.plot(yearArray[smoothDeg-1+IL:smoothDeg-1+IL+numTrain,0],p,'or',label="prediction training")
print "Mean Absolute Difference on training data = %2.2f"%mad
#############################################################################
# apply trained SVR on new data #############################################
testin=X[numTrain,:]
print "\nPrediction of average temperature for future years"
preds=[]
for s in range(numS-numTrain+smoothDeg):
pred=regressor.predict(testin)
preds.extend(pred)
print "Input feature vector: %s prediction: %2.2f"%(np.array(testin).round(2),pred)
testin=[testin[i+1] for i in range(0,IL-1)]
#print testin
testin.extend(pred)
#print y[numTrain:]
#print preds
testSamp=len(preds)
print "Mean Absolute Difference on test data = %2.2f"%mad
plt.plot(yearArray[smoothDeg-1+IL+numTrain:,0],preds,'m',label="predicted test")
plt.legend(loc=2)
Mean Absolute Difference on training data = 0.06
Prediction of average temperature for future years
Input feature vector: [ 8.7 8.53 8.6 8.85 9.1 9.25 9.31 9.28 9.17 9.02] prediction: 9.00
Input feature vector: [ 8.53 8.6 8.85 9.1 9.25 9.31 9.28 9.17 9.02 9. ] prediction: 9.04
Input feature vector: [ 8.6 8.85 9.1 9.25 9.31 9.28 9.17 9.02 9. 9.04] prediction: 9.11
Input feature vector: [ 8.85 9.1 9.25 9.31 9.28 9.17 9.02 9. 9.04 9.11] prediction: 9.16
Input feature vector: [ 9.1 9.25 9.31 9.28 9.17 9.02 9. 9.04 9.11 9.16] prediction: 9.17
Input feature vector: [ 9.25 9.31 9.28 9.17 9.02 9. 9.04 9.11 9.16 9.17] prediction: 9.17
Input feature vector: [ 9.31 9.28 9.17 9.02 9. 9.04 9.11 9.16 9.17 9.17] prediction: 9.18
Mean Absolute Difference on test data = 0.17
Out[9]:
<matplotlib.legend.Legend at 0xc8efab0>
In [9]:
|
{}
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 19 Oct 2019, 23:13
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# If x^2 – 1 ≤ 8, what is the smallest real value x can have?
Author Message
TAGS:
### Hide Tags
VP
Joined: 23 Feb 2015
Posts: 1260
If x^2 – 1 ≤ 8, what is the smallest real value x can have? [#permalink]
### Show Tags
14 Feb 2019, 09:40
1
00:00
Difficulty:
15% (low)
Question Stats:
81% (00:43) correct 19% (00:50) wrong based on 26 sessions
### HideShow timer Statistics
If $$x^2 – 1 ≤ 8$$, what is the smallest real value x can have?
(A) –9
(B) –6
(C) –3
(D) 0
(E) 3
_________________
“The heights by great men reached and kept were not attained in sudden flight but, they while their companions slept, they were toiling upwards in the night.”
Do you need official questions for Quant?
3700 Unique Official GMAT Quant Questions
------
SEARCH FOR ALL TAGS
GMAT Club Tests
Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Posts: 4774
Location: India
GPA: 3.5
Re: If x^2 – 1 ≤ 8, what is the smallest real value x can have? [#permalink]
### Show Tags
14 Feb 2019, 10:52
If $$x^2 – 1 ≤ 8$$, what is the smallest real value x can have?
(A) –9
(B) –6
(C) –3
(D) 0
(E) 3
Clean (C) -3 as
$$-3^2 – 1 = 8$$, Answer must be $$(C) -3$$
_________________
Thanks and Regards
Abhishek....
PLEASE FOLLOW THE RULES FOR POSTING IN QA AND VA FORUM AND USE SEARCH FUNCTION BEFORE POSTING NEW QUESTIONS
How to use Search Function in GMAT Club | Rules for Posting in QA forum | Writing Mathematical Formulas |Rules for Posting in VA forum | Request Expert's Reply ( VA Forum Only )
Intern
Joined: 20 Nov 2018
Posts: 27
Location: India
Re: If x^2 – 1 ≤ 8, what is the smallest real value x can have? [#permalink]
### Show Tags
14 Feb 2019, 11:01
If $$x^2 – 1 ≤ 8$$, what is the smallest real value x can have?
(A) –9
(B) –6
(C) –3
(D) 0
(E) 3
x^2 ≤ 8+1
x^2 ≤ 9
-a ≤ x ≤ a <=> x^2 ≤ a^2
so,
-3 ≤ x ≤ 3 => least value -3
option A
-------------------------------------
Re: If x^2 – 1 ≤ 8, what is the smallest real value x can have? [#permalink] 14 Feb 2019, 11:01
Display posts from previous: Sort by
|
{}
|
# Mutable Data Types * * * <i> Topics: * refs * mutable fields * arrays * mutable data structures </i> * * * ## Mutable Data Types OCaml is not a *pure* language: it does admit side effects. We have seen that already with I/O, especially printing. But up till now we have limited ourself to the subset of the language that is *immutable*: values could not change. Today, we look at data types that are mutable. Mutability is neither good nor bad. It enables new functionality that we couldn't implement (at least not easily) before, and it enables us to create certain data structures that are asymptotically more efficient than their purely functional analogues. But mutability does make code more difficult to reason about, hence it is a source of many faults in code. One reason for that might be that humans are not good at thinking about change. With immutable values, we're guaranteed that any fact we might establish about them can never change. But with mutable values, that's no longer true. "Change is hard," as they say. ## Refs A *ref* is like a pointer or reference in an imperative language. It is a location in memory whose contents may change. Refs are also called *ref cells*, the idea being that there's a cell in memory that can change. **A first example.** Here's an example utop transcript to introduce refs: # let x = ref 0;; val x : int ref = {contents = 0} # !x;; - : int = 0 # x := 1;; - : unit = () # !x;; - : int = 1 At a high level, what that shows is creating a ref, getting the value from inside it, changing its contents, and observing the changed contents. Let's dig a little deeper. The first phrase, let x = ref 0, creates a reference using the ref keyword. That's a location in memory whose contents are initialized to 0. Think of the location itself as being an address—for example, 0x3110bae0—even though there's no way to write down such an address in an OCaml program. The keyword ref is what causes the memory location to be allocated and initialized. The first part of the response from utop, val x : int ref, indicates that x is a variable whose type is int ref. We have a new type constructor here. Much like list and option are type constructors, so is ref. A t ref, for any type t, is a reference to a memory location that is guaranteed to contain a value of type t. As usual we should read should a type from right to left: t ref means a reference to a t. The second part of the response shows us the contents of the memory location. Indeed, the contents have been initialized to 0. The second phrase, !x, dereferences x and returns the contents of the memory location. Note that ! is the dereference operator in OCaml, not Boolean negation. The third phrase, x := 1, is an assignment. It mutates the contents x to be 1. Note that x itself still points to the same location (i.e., address) in memory. Variables really are immutable in that way. What changes is the contents of that memory location. Memory is mutable; variable bindings are not. The response from utop is simply (), meaning that the assignment took place—much like printing functions return () to indicate that the printing did happen. The fourth phrase, !x again dereferences x to demonstrate that the contents of the memory location did indeed change. **A more sophisticated example.** Here is code that implements a *counter*. Every time next_val is called, it returns one more than the previous time. # let counter = ref 0;; val counter : int ref = {contents = 0} # let next_val = fun () -> counter := (!counter) + 1; !counter;; val next_val : unit -> int = <fun> # next_val();; - : int = 1 # next_val();; - : int = 2 # next_val();; - : int = 3 In the implementation of next_val, there are two expressions separated by semi-colon. The first expression, counter := (!counter) + 1, is an assignment that increments counter by 1. The second expression, !counter, returns the newly incremented contents of counter. This function is unusual in that every time we call it, it returns a different value. That's quite different than any of the functions we've implemented ourselves so far, which have always been *deterministic*: for a given input, they always produced the same output. On the other hand, we've seen some library functions that are *nondeterministic*, for example, functions in the Random module, and Pervasives.read_line. It's no coincidence that those happen to be implemented using mutable features. We could improve our counter in a couple ways. First, there is a library function incr : int ref -> unit that increments an int ref by 1. Thus it is like the ++ operator in many language in the C family. Using it, we could write incr counter instead of counter := (!counter) + 1. Second, the way we coded the counter currently exposes the counter variable to the outside world. Maybe we're prefer to hide it so that clients of next_val can't directly change it. We could do so by nesting counter inside the scope of next_val: let next_val = let counter = ref 0 in fun () -> incr counter; !counter Now counter is in scope inside of next_val, but not accessible outside that scope. When we gave the dynamic semantics of let expressions before, we talked about substitution. One way to think about the definition of next_val is as follows. * First, the expression ref 0 is evaluated. That returns a location loc, which is an address in memory. The contents of that address are initialized to 0. * Second, everywhere in the body of the let expression that counter occurs, we substitute for it that location. So we get: fun () -> incr loc; !loc * Third, that anonymous function is bound to next_val. So any time next_val is called, it increments and returns the contents of that one memory location loc. Now imagine that we instead had written the following (broken) code: let next_val_broken = fun () -> let counter = ref 0 in incr counter; !counter It's only a little different: the binding of counter occurs after the fun () -> instead of before. But it makes a huge difference: # next_val_broken ();; - : int = 1 # next_val_broken ();; - : int = 1 # next_val_broken ();; - : int = 1 Every time we call next_val_broken, it returns 1: we no longer have a counter. What's going wrong here? The problem is that every time next_val_broken is called, the first thing it does is to evaluate ref 0 to a new location that is initialized to 0. That location is then incremented to 1, and 1 is returned. Every call to next_val_broken is thus allocating a new ref cell, whereas next_val allocates just one new ref cell. **Syntax.** The first three of the following are new syntactic forms involving refs, and the last is a syntactic form that we haven't yet fully explored. * Ref creation: ref e * Ref assignment: e1 := e2 * Dereference: !e * Sequencing of effects: e1; e2 **Dynamic semantics.** * To evaluate ref e, - Evaluate e to a value v - Allocate a new location loc in memory to hold v - Store v in loc - Return loc * To evaluate e1 := e2, - Evaluate e2 to a value v, and e1 to a location loc. - Store v in loc. - Return (), i.e., unit. * To evaluate !e, - Evaluate e to a location loc. - Return the contents of loc. * To evaluate e1; e2, - First evaluate e1 to a value v1. - Then evaluate e2 to a value v2. - Return v2. (v1 is not used at all.) - If there are multiple expressions in a sequence, e.g., e1; e2; ...; en, then evaluate each one in order from left to right, returning only vn. Another way to think about this is that semi-colon is right associative—for example e1; e2; e3 is the same as e1; (e2; e3)). Note that locations are values that can be passed to and returned from functions. But unlike other values (e.g., integers, variants), there is no way to directly write a location in an OCaml program. That's different than languages like C, where programmers can directly write memory addresses and do arithmetic on pointers. C programmers want that kind of low-level access to do things like interface with hardware and build operating systems. Higher-level programmers are willing to forego it to get *memory safety*. That's a hard term to define, but according to [Hicks 2014][memory-safety-hicks] it intuitively means that * pointers are only created in a safe way that defines their legal memory region, * pointers can only be dereferenced if they point to their allotted memory region, * that region is (still) defined. [memory-safety-hicks]: http://www.pl-enthusiast.net/2014/07/21/memory-safety/ **Static semantics.** We have a new type constructor, ref, such that t ref is a type for any type t. Note that the ref keyword is used in two ways: as a type constructor, and as an expression that constructs refs. * ref e : t ref if e : t. * e1 := e2 : unit if e1 : t ref and e2 : t. * !e : t if e : t ref. * e1; e2 : t if e1 : unit and e2 : t. Similarly, e1; e2; ...; en : t if e1 : unit, e2 : unit, ... (i.e., all expressions except en have type unit), and en : t. The typing rule for semi-colon is designed to prevent programmer mistakes. For example, a programmer who writes 2+3; 7 probably didn't mean to: there's no reason to evaluate 2+3 then throw away the result and instead return 7. The compiler will give you a warning if you violate this particular typing rule. To get rid of the warning (if you're sure that's what you need to do), there's a function ignore : 'a -> unit in the standard library. Using it, ignore(2+3); 7 will compile without a warning. Of course, you could code up ignore yourself: let ignore _ = (). **Aliasing.** Now that we have refs, we have *aliasing*: two refs could point to the same memory location, hence updating through one causes the other to also be updated. For example, let x = ref 42 let y = ref 42 let z = x let () = x := 43 let w = (!y) + (!z) The result of executing that code is that w is bound to 85, because let z = x causes z and x to become aliases, hence updating x to be 43 also causes z to be 43. **Equality.** OCaml has two equality operators, physical equality and structural equality. The [documentation][pervasives] of Pervasives.(==) explains physical equality: > e1 == e2 tests for physical equality of e1 and e2. On mutable types such as > references, arrays, byte sequences, records with mutable fields and objects with > mutable instance variables, e1 == e2 is true if and only if physical modification > of e1 also affects e2. On non-mutable types, the behavior of ( == ) is > implementation-dependent; however, it is guaranteed that e1 == e2 implies > compare e1 e2 = 0. [pervasives]: http://caml.inria.fr/pub/docs/manual-ocaml/libref/Pervasives.html One interpretation could be that == should be used only when comparing refs (and other mutable data types) to see whether they point to the same location in memory. Otherwise, don't use ==. Structural equality is also explained in the documentation of Pervasives.(=): > e1 = e2 tests for structural equality of e1 and e2. Mutable structures > (e.g. references and arrays) are equal if and only if their current contents > are structurally equal, even if the two mutable objects are not the same > physical object. Equality between functional values raises Invalid_argument. > Equality between cyclic data structures may not terminate. Structural equality is usually what you want to test. For refs, it checks whether the contents of the memory location are equal, regardless of whether they are the same location. The negation of physical equality is !=, and the negation of structural equality is <>. This can be hard to remember. Here are some examples involving equality and refs to illustrate the difference between structural equality (=) and physical equality (==): # let r1 = ref 3110;; val r1 : int ref = {contents = 3110} # let r2 = ref 3110;; val r2 : int ref = {contents = 3110} # r1 == r1;; - : bool = true # r1 == r2;; - : bool = false # r1 != r2;; - : bool = true # r1 = r1;; - : bool = true # r1 = r2;; - : bool = true # r1 <> r2;; - : bool = false # ref 3110 <> ref 2110;; - : bool = true ## Mutable fields The fields of a record can be declared as mutable, meaning their contents can be updated without constructing a new record. For example, here is a record type for two-dimensional colored points whose color field c is mutable: # type point = {x:int; y:int; mutable c:string};; type point = {x:int; y:int; mutable c:string; } Note that mutable is a property of the field, rather than the type of the field. In particular, we write mutable field : type, not field : mutable type. The operator to update a mutable field is <-: # let p = {x=0; y=0; c="red"};; val p : point = {x=0; y=0; c="red"} # p.c <- "white";; - : unit = () # p;; val p : point = {x=0; y=0; c="white"} # p.x <- 3;; Error: The record field x is not mutable The syntax and semantics of <- is similar to := but complicated by fields: * **Syntax:** e1.f <- e2 * **Dynamic semantics:** To evaluate e1.f <- e2, evaluate e2 to a value v2, and e1 to a value v1, which must have a field named f. Update v1.f to v2. Return (). * **Static semantics:** e1.f <- e2 : unit if e1 : t1 and t1 = {...; mutable f : t2; ...}, and e2 : t2. ## Refs and mutable fields It turns out that refs are actually implemented as mutable fields. In [Pervasives][pervasives] we find the following declaration: type 'a ref = { mutable contents : 'a; } And that's why when we create a ref it does in fact looks like a record: it *is* a record! # let r = ref 3110;; val r : int ref = {contents = 3110} The other syntax we've seen for records is in fact equivalent to simple OCaml functions: (* Equivalent to [fun v -> {contents=e}]. *) val ref : 'a -> 'a ref (* Equivalent to [fun r -> r.contents]. *) val (!) : 'a ref -> 'a (* Equivalent to [fun r v -> r.contents <- v]. *) val (:=) : 'a ref -> 'a -> unit The reason we say "equivalent" is that those functions are actually implemented not in OCaml but in the OCaml run-time, which is implemented mostly in C. But the functions do behave the same as the OCaml source given above in comments. ## Arrays Arrays are fixed-length mutable sequences with constant-time access and update. So they are similar in various ways to refs, lists, and tuples. Like refs, they are mutable. Like lists, they are (finite) sequences. Like tuples, their length is fixed in advance and cannot be resized. The syntax for arrays is similar to lists: # let v = [|0.; 1.|];; val v : float array = [|0.; 1.|] That code creates an array whose length is fixed to be 2 and whose contents are initialized to 0. and 1.. The keyword array is a type constructor, much like list. Later those contents can be changed using the <- operator: # v.(0) <- 5.;; - : unit = () # v;; - : float array = [|5.; 1.|] As you can see in that example, indexing into an array uses the syntax array.(index), where the parentheses are mandatory. The [Array module][array] has many useful functions on arrays. [array]: http://caml.inria.fr/pub/docs/manual-ocaml/libref/Array.html **Syntax.** * Array creation: [|e0; e1; ...; en|] * Array indexing: e1.(e2) * Array assignment: e1.(e2) <- e3 **Dynamic semantics.** * To evaluate [|e0; e1; ...; en|], evaluate each ei to a value vi, create a new array of length n+1, and store each value in the array at its index. * To evaluate e1.(e2), evaluate e1 to an array value v1, and e2 to an integer v2. If v2 is not within the bounds of the array (i.e., 0 to n-1, where n is the length of the array), raise Invalid_argument. Otherwise, index into v1 to get the value v at index v2, and return v. * To evaluate e1.(e2) <- e3, evaluate each expression ei to a value vi. Check that v2 is within bounds, as in the semantics of indexing. Mutate the element of v1 at index v2 to be v3. **Static semantics.** * [|e0; e1; ...; en|] : t array if ei : t for all the ei. * e1.(e2) : t if e1 : t array and e2 : int. * e1.(e2) <- e3 : unit if e1 : t array and e2 : int and e3 : t. **Loops.** OCaml has while loops and for loops. Their syntax is as follows: while e1 do e2 done for x=e1 to e2 do e3 done for x=e1 downto e2 do e3 done The second form of for loop counts down from e1 to e2—that is, it decrements its index variable at each iteration. Though not mutable features themselves, loops can be useful with mutable data types like arrays. We can also use functions like Array.iter, Array.map, and Array.fold_left instead of loops. ## Mutable data structures As an example of a mutable data structure, let's look at stacks. We're already familiar with functional stacks: exception Empty module type Stack = sig (* ['a t] is the type of stacks whose elements have type ['a]. *) type 'a t (* [empty] is the empty stack *) val empty : 'a t (* [push x s] is the stack whose top is [x] and the rest is [s]. *) val push : 'a -> 'a t -> 'a t (* [peek s] is the top element of [s]. * raises: [Empty] is [s] is empty. *) val peek : 'a t -> 'a (* [pop s] is all but the top element of [s]. * raises: [Empty] is [s] is empty. *) val pop : 'a t -> 'a t end An interface for a *mutable* or *non-persistent* stack would look a little different: module type MutableStack = sig (* ['a t] is the type of mutable stacks whose elements have type ['a]. * The stack is mutable not in the sense that its elements can * be changed, but in the sense that it is not persistent: * the operations [push] and [pop] destructively modify the stack. *) type 'a t (* [empty ()] is the empty stack *) val empty : unit -> 'a t (* [push x s] modifies [s] to make [x] its top element. * The rest of the elements are unchanged. *) val push : 'a -> 'a t -> unit (* [peek s] is the top element of [s]. * raises: [Empty] is [s] is empty. *) val peek : 'a t -> 'a (* [pop s] removes the top element of [s]. * raises: [Empty] is [s] is empty. *) val pop : 'a t -> unit end Notice especially how the type of empty changes: instead of being a value, it is now a function. This is typical of functions that create mutable data structures. Also notice how the types of push and pop change: instead of returning an 'a t, they return unit. This again is typical of functions that modify mutable data structures. In all these cases, the use of unit makes the functions more like their equivalents in an imperative language. The constructor for an empty stack in Java, for example, might not take any arguments (which is equivalent to taking unit). And the push and pop functions for a Java stack might return void, which is equivalent to returning unit. Now let's implement the mutable stack with a mutable linked list. We'll have to code that up ourselves, since OCaml linked lists are persistent. module MutableRecordStack = struct (* An ['a node] is a node of a mutable linked list. It has * a field [value] that contains the node's value, and * a mutable field [next] that is [Null] if the node has * no successor, or [Some n] if the successor is [n]. *) type 'a node = {value : 'a; mutable next : 'a node option} (* AF: An ['a t] is a stack represented by a mutable linked list. * The mutable field [top] is the first node of the list, * which is the top of the stack. The empty stack is represented * by {top = None}. The node {top = Some n} represents the * stack whose top is [n], and whose remaining elements are * the successors of [n]. *) type 'a t = {mutable top : 'a node option} let empty () = {top = None} (* To push [x] onto [s], we allocate a new node with [Some {...}]. * Its successor is the old top of the stack, [s.top]. * The top of the stack is mutated to be the new node. *) let push x s = s.top <- Some {value = x; next = s.top} let peek s = match s.top with | None -> raise Empty | Some {value} -> value (* To pop [s], we mutate the top of the stack to become its successor. *) let pop s = match s.top with | None -> raise Empty | Some {next} -> s.top <- next end Here is some example usage of the mutable stack: # let s = empty ();; val s : '_a t = {top = None} # push 1 s;; - : unit = () # s;; - : int t = {top = Some {value = 1; next = None}} # push 2 s;; - : unit = () # s;; - : int t = {top = Some {value = 2; next = Some {value = 1; next = None}}} # pop s;; - : unit = () # s;; - : int t = {top = Some {value = 1; next = None}} The '_a in the first utop response in that transcript is a *weakly polymorphic type variable.* It indicates that the type of elements of s is not yet fixed, but that as soon as one element is added, the type (for that particular stack) will forever be fixed. Weak type variables tend to appear once mutability is involved, and they are important for the type system to prevent certain kinds of errors, but we won't discuss them further. ## Summary We cover mutable data types in the "Advanced Data Structures" section of this course because they are, in fact, harder to reason about. For example, before refs, we didn't have to worry about aliasing in OCaml. But mutability does have its uses. I/O is fundamentally about mutation. And some data structures (like arrays, which we saw here, and hash tables) cannot be implemented as efficiently without mutability. Mutability thus offers great power, but with great power comes great responsibility. Try not to abuse your new-found power! ## Terms and concepts * address * alias * array * assignment * dereference * determinstic * immutable * index * loop * memory safety * mutable * mutable field * nondeterministic * persistent * physical equality * pointer * pure * ref * ref cell * reference * sequencing * structural equality ## Further reading * *Introduction to Objective Caml*, chapters 7 and 8 * *OCaml from the Very Beginning*, chapter 13 * *Real World OCaml*, chapter 8 * [*Relaxing the value restriction*][relaxing], by Jacques Garrigue, explains more about weak type variables. Section 2 is a succinct explanation of why they are needed. [relaxing]: https://caml.inria.fr/pub/papers/garrigue-value_restriction-fiwflp04.pdf
|
{}
|
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Skilful prediction of cod stocks in the North and Barents Sea a decade in advance
## Abstract
Reliable information about the future state of the ocean and fish stocks is necessary for informed decision-making by fisheries scientists, managers and the industry. However, decadal regional ocean climate and fish stock predictions have until now had low forecast skill. Here, we provide skilful forecasts of the biomass of cod stocks in the North and Barents Seas a decade in advance. We develop a unified dynamical-statistical prediction system wherein statistical models link future stock biomass to dynamical predictions of sea surface temperature, while also considering different fishing mortalities. Our retrospective forecasts provide estimates of past performance of our models and they suggest differences in the source of prediction skill between the two cod stocks. We forecast the continuation of unfavorable oceanic conditions for the North Sea cod in the coming decade, which would inhibit its recovery at present fishing levels, and a decrease in Northeast Arctic cod stock compared to the recent high levels.
## Introduction
Climate variability has been a subject of interest for ecologists primarily because variations in climate often have a strong impact on ecological systems1. Marine resources, such as fish stocks, have been shown to be strongly influenced by climate variability2,3,4, with changes in productivity resulting in huge consequences for socio-economic systems relying on such resources5,6. In the Anthropocene, with the impending threat of climate change, understanding the impact of climate variability on marine ecosystems and resources has become even more central, since climate variability at interannual to decadal timescales can alter the magnitude of ongoing long-term climate change7,8. Hence an integration of climate information into modeling of exploitable resources is necessary not only to understand ecological processes but also to forecast future states of the system at interannual to decadal timescales9. The latter is particularly fundamental for management, since forecasting fish abundance depending on decadal climate variability is necessary to devise timely interventions to ensure sustainable use of resources10. Nevertheless, the application of climate models to predict ecosystem processes at decadal timescales remains a challenge11,12,13.
In many cases the impact of climate on fish stocks has been studied through experiments and modeling, and empirical relations have been established. Climate has been shown to influence fish directly or indirectly through recruitment, food availability, fecundity, growth, and migration14,15,16. Still, climate variables are rarely included in the management-oriented modeling and forecasting of fish populations17. This is not only due to historically large impact of fishing mortality on commercial stock biomass18,19,20 but also due to forecasting being complicated by frequent transient and non-stationary properties of climate impacts on fish stocks21,22,23. Moreover, the fish experience the cumulative impacts of different drivers; fishing pressure and climate can have combined effects inducing non-linear dynamics in fish stocks. Strong synergistic effects can lead to management failures and abrupt collapses of socio-ecological systems22,24,25. With anthropogenic climate change superposed on natural climate variability, including environmental variables in the modeling and forecasting of fish stocks is becoming increasingly important from both scientific and management points of view5,26.
One of the key limitations impeding the integration of climate information in fisheries forecasts originates from inadequate representation of shelf seas in coupled global circulation models (GCMs) providing future climate information. Pioneering-approaches have used bioclimate envelope models27,28 or detailed ecosystem and population dynamics models29,30 forced with climate projections from GCMs to examine the impact of climate change on fisheries. However, GCMs lack a proper representation of shelf-sea dynamics, mainly due to their coarse resolution, and they have limited representation of trophic interactions and associated energy transfers. Other approaches have combined GCM output with highly resolved physical–biological shelf-sea models accounting for trophic interactions6. These approaches focus on long-term (>30 years) changes and thus do not provide information on decadal (1–10 years) fisheries forecasts. Moreover, since decadal forecasting usually involves an ensemble of predictions, high computational costs associated with the aforementioned approaches also motivate exploration of novel approaches towards fisheries predictions using GCM-based decadal climate predictions.
The prospect of decadal prediction of fish stocks emerging from decadal predictability of the physical environment is enticing. Specifically in the North Atlantic, where decadal variability of the physical environment is highly predictable using GCMs31,32,33. This prospect emerges not only from the influence of Atlantic inflow on both hydrography34,35 and marine ecosystem of the North Atlantic shelf seas, such as the North and Barents Sea15,16 but also from the impact of anthropogenic warming on marine ecosystems13. In these climate-driven marine ecosystems, statistical climate–fisheries models36 provide a promising approach for transforming GCM-based ensembles of decadal climate predictions into reliable fisheries forecasts.
In this article we assess predictability of two Atlantic cod (Gadus morhua) stocks in the northeastern North Atlantic shelf seas. Atlantic cod is a commercially, historically, socially, and ecologically important species. There are many stocks of Atlantic cod and they are widely distributed on the shelf seas off the northern North Atlantic. Some of these stocks have been severely reduced in the last decades, largely due to unfavorable climate and intense fishing20,22. Thus, being able to predict the stock biomasses is important to guide sustainable management decisions. We investigate two stocks with opposite status: (1) the North Sea cod, a stock close to the upper temperature limit of distribution of this species, over-exploited for many decades and in a very low productive state over the last 20 years, and (2) the Northeast Arctic cod, residing in the Barents Sea close to the species’ lower temperature limit and recording record-high biomass levels in recent years37,38,39. The average age at first maturation is 3 for North Sea cod and 7 for Northeast Arctic cod.
In order to provide decadal predictions of cod stocks, we use a linear regression model to transform dynamical prediction of sea surface temperature (SST) into the prediction of total stock biomass (TSB). TSB was used because it reflects the integrated impact of climate and fishing36. The dynamical prediction of SST is provided by 10-year long initialized forecasts (and hindcasts) from a decadal prediction system based on the Max Planck Institute Earth System Model (MPI-ESM, see “Methods”). The initial conditions for decadal hindcasts are taken from an assimilation experiment which assimilates observed atmospheric and oceanic information into MPI-ESM. In order to isolate the prediction skill due to external forcing, an ensemble of non-initialized historical simulations of the same size as that of initialized hindcasts and driven by observed external boundary conditions is also analyzed (see “Methods”). Our forecasts for the period 2020–2030 suggest continued unfavorable environmental conditions for the North Sea cod with no significant recovery under any of the three different fishing mortality scenarios. For the Northeast Arctic cod, assuming fishing at the current sustainable level is continued, we forecast a decline in TSB in the coming decade compared to the last decade, attributed mainly to a decline in temperature.
## Results
### Variability in cod stocks and their physical environment
The time series of SST in the North Sea and Barents Sea Opening highlight key differences in the two regions: While SST has an increasing trend both in the North Sea and in the Barents Sea Opening, the absolute values are very different from each other, highlighting that the cod stocks reside at the two extreme ends of the thermal habitat available for cod40 (Fig. 1a). The TSB time series of the two stocks also show opposite development: North Sea cod has declined continuously since the 1960s, with very low and stable biomass levels since the beginning of the twenty-first century (Fig. 1b). Northeast Arctic cod exhibits multi-annual to decadal variability for the same period, with a recent record high level of TSB (Fig. 1b). However, multi-decadal variability in North Sea stock biomass has been reported for a longer period41. Fishing mortality (F) trends are similar in these two stocks, increasing in the central period of the time series and recently declining as stricter management measures started to be enforced (Fig. 1b). Interestingly, while the decline in fishing mortality of Northeast Arctic cod seems to have resulted in an increase in TSB, in the North Sea, the cod stock did not manage to recover even after the management measures were in place. This has been attributed to the effect of an interacting driver (i.e. warming) which has inhibited the productivity of North Sea cod22.
In the North Sea SST, the magnitude of warming over the period 1960–2019 (1.68 °C) is more than twice the year-to-year variability (σ = 0.65 °C), indicating that the increasing temperature trend is part of how the North Sea has changed under natural and anthropogenic forcing, and thus the trend cannot be excluded from the analysis. Temperature increase corresponds to decrease in TSB, thus there is a negative correlation between the two variables. Linearly detrended North Sea temperature maintains the same negative effect on TSB the following year (r = −0.48, p = 0.0025, see Supplementary Figs. S1 and S2 for detailed statistical analysis). Interestingly, the fishing mortality of 2–4-year-old cod does not exhibit a monotonous trend and does not show a strong correlation with TSB (r = −0.19, p > 0.05). This weak signal might partly be due to the fact that a decline in fishing mortality in the last years did not correspond to an increase in TSB (Fig. 1b). The low correlation exhibited by fishing mortality may limit its usage as a predictor for TSB using linear models and could indicate a time-varying F–TSB relationship typical of systems presenting discontinuous dynamics.
The TSB of Northeast Arctic cod does not exhibit a long-term trend (Fig. 1b). This stock exhibits multi-annual to decadal variability manifested as multiple cycles of decline and increase. Similar low-frequency variability is visible in the surface temperature of the North Atlantic subpolar gyre (SPG), suggesting a possible linkage. Statistically, this linkage is supported by the high correlation between the surface temperature of the SPG and TSB of Northeast Arctic cod (r = 0.78, p = 0.0435) with the SPG-temperature leading TSB by 7 years (Supplementary Figs. S1 and S2), and consistent with previous work36. Dynamically, this linkage points to the influence of SPG circulation on the properties of Atlantic water crossing the Greenland–Scotland ridge heading towards downstream shelf seas34,35,42.
After removing respective trends from time series of the SPG temperature and Northeast Arctic cod TSB, the correlation remains high (r = 0.77, p = 0.0425), suggesting a dominating signature of decadal variability. The effect of temperature is opposite on this stock compared to the North Sea, since in the Barents Sea, temperature has a positive impact on cod biomass. These opposite impacts of temperature on biomass reflect the different temperature regimes in which the stocks reside22,43. In the case of Northeast Arctic cod, fishing mortality of 5–10-year-old cod is strongly correlated with TSB (r = −0.88, p = 0.00351). This correlation is higher than the one between temperature and TSB, and peaks at lag-2 years (Supplementary Fig. 2). For our purpose of decadal prediction of cod biomass this finding has two implications. First, the predictability horizon for TSB from a statistical point of view would be shorter with fishing mortality as a predictor compared to temperature. Second, the higher explanatory power in fishing mortality might constrain the uncertainty in the first few years of forecasts.
### Statistical models for cod prediction
Once the predictors for the two cod stocks are identified, we assess various cross-validated statistical models (see “Methods”) to analyze the retrospective skill arising from the impact of temperature and fishing on the TSB and to select a model to issue forecasts. We test three different models, two simple linear regression models based on temperature and fishing mortality separately and one multiple linear regression model based on temperature and fishing mortality as explanatory variables.
As expected from the correlational analysis, the results for the North Sea cod and the North-East Arctic cod are quite different. For North Sea cod, the linear model using just fishing mortality has no predictive power (Fig. 2a and Supplementary Fig. S3 for analysis of skill from detrended variables). When the impacts of fishing and temperature are modeled together, the skill is comparable to the linear model based on temperature alone, suggesting that no additional information is gained by adding fishing mortality. For the Northeast Arctic cod, although the fishing-only model provides a better fit to the TSB data (adjusted R2 = 0.77) than the temperature-only model (adjusted R2 = 0.62), the difference in skill between these two models is not statistically significant (p = 0.15, Fig. 2b and Supplementary Fig. S3 for skill from detrended variables).
Out of the three models, the model which uses both temperature and fishing has the best fit to the TSB (adjusted R2 = 0.84) and is the most suited model considering the information gained by combining SST and F (Table S1). However, both the fishing-only and the combined fishing and temperature-based models do not allow for a longer prediction horizon than the temperature-only model. This is because fishing mortality leads TSB by 2 years while temperature leads TSB by 7 years. Since our focus is on long prediction horizons, we choose the temperature-only model for the hindcast period, and for the forecast period (2020–2030), we complement the temperature-based forecasts of cod biomass with forecasts from the combined fishing and temperature-based model.
### Decadal prediction of the physical environment
We now assess the prediction skill of North Sea and SPG temperature in the MPI-ESM (see “Methods” for a detailed description of this model and the decadal prediction system). In general, the skill degrades as the prediction horizon moves farther from the year of initialization (i.e. at longer lead times). However, in the North Sea, prediction skill remains high until lead year-10, and is matched by the skill from the historical simulations (Fig. 3a). This can be explained by the long-term linear trend in the underlying time series (Fig. 3b), which is present in all lead year time series. This points to the long-term trend (driven by anthropogenic external forcing) in the North Sea temperature as the source of prediction skill. Noticeable exception is the SPG where the skill is largely intact irrespective of the trend, and is higher for initialized hindcasts than historical simulations (Fig. 3c, d and Supplementary Fig. S4).
The observed and predicted time series of SPG temperature suggests that during the hindcast period, most of the skill in the initialized hindcasts is derived from the ability of the model to capture the decadal cooling and warming trends (Fig. 3d). The 16-member historical simulation does not capture the full extent of the decadal variability in SPG temperature. Thus, it appears that initialization of oceanic conditions is the dominant source of predictability of SPG temperature33, while the long-term trend, mainly arising from external forcing, dominates predictability in the North Sea. The robustness of the decadal prediction skill of subpolar North Atlantic SST in the MPI-ESM-LR based decadal prediction systems has been thoroughly analyzed and is consistent with other decadal prediction systems33,44.
### Dynamical–statistical cod prediction
Now, we combine the dynamical prediction of temperature with the statistical temperature–cod relationship. We choose the simplest model with temperature as the explanatory variable for both the North Sea and Northeast Arctic cod to model and forecast TSB. The utilization of temperature, derived from the dynamical model, allows us to extend the predictability horizon of cod stocks. We also include forecasts using a multiple linear regression model with fishing and temperature, and we use various scenarios of fishing mortality based on current management advice from the International Council for the Exploration of the Sea (ICES).
The dynamical–statistical prediction model shows robust skill (correlation as well as mean square error skill) in simulating the North Sea cod biomass (Fig. 4a). Note that the regression coefficients for the statistical models are not calculated from the hindcast time series of temperature, but from the observed TSB and assimilated temperatures (see “Methods”). The similarity in hindcast skill obtained from initialized hindcasts and historical simulations provides another piece of evidence that the skill is mainly due to the trend in the North Sea temperature (Fig. 4b). Our forecast of North Sea temperature for the period 2020–2030 suggest a continuation of the warm anomalies (Fig. 3c) which translates into a further decline of North Sea cod (Fig. 4a).
In order to make our predictions of cod biomass usable in fisheries management, we provide both an SST-based forecast (for 2020–2030) and forecasts under different fishing scenarios (using the SST+F model). In particular we chose three scenarios: an FMSY scenario, in which the biomass is fished at the maximum sustainable yield (FMSY = 0.3), an FSQO scenario in which F is the mean over the last three years (FSQO = 0.5), and an FLIM precautionary scenario which is the maximum F applicable before collapse (FLIM = 0.54). The predicted total biomass of North Sea cod shows similar trends under all these scenarios, modulated in magnitude by fishing. Lower fishing initially favors a stock increase, but the constant increase of temperature leads to a further decline of the stock over time, keeping the stock in a low productivity regime. This indicates that deteriorating environmental conditions will hinder a substantial stock recovery, even with strong limitation on the fishery.
For assessing the retrospective prediction skill of Northeast Arctic cod biomass, we combine the statistical model with lead-year-4 initialized hindcasts of SPG temperature from MPI-ESM. Beyond lead-year-4, the dynamical hindcast skill degrades and is comparable to the skill from the historical simulation (Supplementary Fig. S4). Our dynamical–statistical prediction model performs well in reproducing past variability in the TSB of Northeast Arctic cod (Fig. 4c, d). Both the 1970s decline as well as the recent decadal shift in the TSB is captured by the initialized hindcast, quantified by the mean square error skill score (Fig. 4d). The correlation skill associated with historical simulation is lower but not statistically different from the hindcast skill (p = 0.164). However, the variability in the reconstructed TSB time series of Northeast Arctic cod using historical simulation is suppressed (Fig. 4c). This reconstructed time series fails to capture the recent decadal shift in the Northeast Arctic cod stock, which, as discussed above, likely follows variability in SPG temperature and is not captured by the historical simulation. This lack of variability in the reconstructed TSB time series using the historical SPG temperature is reflected in the mean square error skill score (MSESS, Fig. 4d), which suggests that this type of prediction is not significantly better than predicting a long-term mean value for the TSB.
For the Northeast Arctic cod, future predictions based on initialized hindcasts suggest a climate-driven decline of biomass in the coming decade compared to the present stock size (Fig. 4c). The RCP8.5 scenario based forecast, however, suggests a biomass level close to the long-term mean. Since the historical simulation-based hindcasts of cod biomass do not capture the full extent of past variability (MSESS for historical simulation is not significantly different than climatology), the extent of future decline in cod biomass based on RCP8.5 scenario is likely underestimated. The purely climate-driven decline in initialized forecasts is larger than in the FSQO- and FMSY-based fishing scenarios but is comparable to the decline under the FLIM scenario. Given the recent management history of this stock, the FLIM scenario is very unlikely. This could be explained by the fact that even if we are just using climate to predict cod stocks, the forecast is based on TSB levels wherein the impact of fishing is implicitly included. Cold periods in the past also coincide with periods of high F (around FLIM). This influences the forecast made using just the temperature because the statistical part of the dynamical-statistical model is trained on past TSB values. This explains why models with both fishing and temperature, where fishing is relatively low (FSQO = 0.42 and FMSY = 0.4) can maintain the stock at a higher biomass level. The forecast declining tendency (compared to the present level) in TSB of Northeast Arctic cod in all scenarios is due to the delayed (advective) impact of 2010–2016 cooling of the SPG (Fig. 1a). The future prediction of the Northeast Arctic cod is thus similar to that of North Sea cod concerning fishing mortality, indicating that a sustainable fishing pressure is necessary to maintain the stocks, but very different concerning productivity, highlighting again how climate has opposite impact on the two stocks in the next 10 years. These results provide evidence that GCM-based initialized decadal climate predictions can be deployed for prediction of marine resources through climate–ecosystem linkages.
## Discussion
Sustainable management of fish stocks in the eastern North Atlantic shelf seas requires a reliable assessment of their future abundance. Incorporating environmental information in such assessment models has not always shown an improvement in prediction skills due to large uncertainties associated with recruitment–climate relationship, and also because these uncertainties might increase in a warming climate11,21. Here, we show that cod stock abundance, represented by TSB, can be successfully predicted on a decadal scale. We assess the feasibility of decadal predictions of cod stocks in the North- and Barents Sea using climate predictions from the MPI-ESM. Such an extended prediction relies on two conditions: (a) that there is a robust relationship between cod and the physical environment and (b) that the physical environment is predictable at multiyear lead times. For the North Sea, we find strong negative correlations between temperature and cod biomass, which can be explained by non-linear dynamics of the stock20,22. Ocean warming has been indicated as an important factor affecting cod in the North Sea through direct and indirect mechanisms, such as high temperatures causing low recruitment and changes in prey availability15,23,45. Fishing, on the other hand, has brought the stock close to collapse and now fishing restrictions may not be able to make the stock recover due to the detrimental effect of warming22.
We find that the long-term trend in surface temperature explains a large part of variance in the North Sea cod biomass, and consequently the high hindcast skill is largely due to the trend (externally forced). Since the detrended interannual variability in the North Sea surface temperature is not skilfully predicted by the MPI-ESM-LR (Supplementary Fig. S4), the 2020–2030 forecast for the North Sea cod biomass is mainly indicative of the long-term trajectory of the cod biomass and not of year-to-year variations around the trend. Also, future work on predictions for North Sea cod could take into account observations showing that the decline in cod abundance in the North Sea is much more pronounced in the southern North Sea than in the northern part, and there may be separate populations of cod within the North Sea management area.
The strong positive correlation between temperature and Northeast Arctic cod biomass is justified through the effect of temperature on life history traits of this stock37. While the details of how the temperature influences Northeast Arctic cod are well described16,37, the importance of the pronounced decadal variability in the SPG46, which lends predictability to the Northeast Arctic cod, is worth highlighting here. We hypothesize that the volume of Atlantic water, modulated by the SPG strength, entering the Barents Sea plays an important role. The hydrography of Norwegian–Barents Seas is related to the Atlantic inflow across the Greenland–Scotland Ridge47. When the SPG circulation is weak, the proportion of subtropical waters in the Atlantic inflow through the Faroe Shetland Channel increases35,48. The resulting increase in the volume of Atlantic water in the Barents Sea can influence the extent of sea-ice in this region, which can lead to increased productivity through extended periods of increased primary production and also due to expansion of feeding grounds. This hypothesis is consistent with the present understanding of the relationship between Atlantic heat transport and extent of sea-ice in the Barents Sea49,50 and its predictability using global coupled models51; however, the stationarity of this relationship needs to be further explored.
Interestingly, when the respective time lags between SST and cod are taken into account, the annual mean SSTs in the SPG region explain around 65% variability in the Northeast Arctic cod biomass while the local SSTs at the Barents Sea opening explain only around 12% variability. The SPG temperature is characterized by pronounced decadal variability46 while local SSTs at the Barents Sea opening prominently reflect the high-frequency atmospheric variability52 and the strong surface warming trend characteristics of these latitudes. However, the SPG signal is present in subsurface waters at the Barents Sea opening (Figs. S5 and S6). Thus local SSTs fail to capture the variability in ecosystem variables, such as the TSB, which integrate high-frequency atmospheric variability and resemble decadal temperature variability of the SPG.
A 7-year prediction horizon in Northeast Arctic cod stock has been shown to emerge from observations of SSTs in the North Atlantic but excluding the fishing mortality, and such a prediction horizon is also consistent with the length of the life cycle of Northeast arctic cod36. In the present study, we extend the predictability horizon further to a decade by using dynamically predicted SPG temperature as a predictor. Further value in our results is derived from the fact that our forecasts are based on a 16-member ensemble dynamical–statistical prediction system (see “Methods”) and various fishing mortality scenarios, which take into account the uncertainty associated with future evolution of the climate system and fishing pressure. We have also been able to identify the source of decadal prediction skill in cod stocks in the two cod habitats. In contrast to the North Sea where the externally forced trend dominates, our results emphasize decadal variability in SPG temperature as the dominant source of prediction skill in Northeast Arctic cod biomass. The predictions based on historical simulations do not capture the full extent of the decline in the cod stock in 1970s and its increase from 2005 to 2014, and hence, in terms of MSESS, these predictions do not match or outperform the predictions based on initialized hindcasts.
The approach used in this study, although novel, has certain caveats. First, the underlying climate variability that influences Northeast Arctic cod biomass has a low-frequency character. Thus, prediction skill and its uncertainty estimation is based on the assumption that the training period is representative of the climate variability associated with the subpolar North Atlantic. In case this is not so, then the skill might drop. Second, the utilization of ICES stock assessment outputs (total biomass and fishing mortality) as observations is a concern. These quantities are model outcomes, and are not entirely independent53. Third, the linear models examined here are applicable to cod stocks in our regions of interest, where the underlying oceanic variability and its impact on marine ecosystems is well understood and the stocks situated near the extremes of the species’ overall distribution range. Our models do not cover the complex issues such as those related to the impact of temperature on carrying capacity and lifetime reproductive output. This could be the subject of future work. Finally, we have assumed that the statistical models and the variables analyzed here implicitly account for possible ecosystem processes. While ecosystem processes such as species interactions are definitely important in shaping fish stocks, they are often not taken into account in management processes54, although they are to some extent taken into account in management of Barents Sea capelin (Mallotus villosus)55.
Our study attempts to bridge the gap between environmental and fisheries prediction. Through the present work, we demonstrate how decadal prediction of climate can be used to provide extended prediction horizons for fisheries combined with various fishing scenarios. Various incentives as well as the lessons learnt from past failures have motivated this effort. Foremost is the added value that such predictions can bring to the sustainable management of fish stocks. For example, at present, many fish stocks, including those considered in this article, are managed by setting annual quotas based on annual assessments of present stock size and short-term predictions (1–2 years) combined with harvest control rules based on target exploitation rates. Reliable predictions of fish biomass on a decadal scale could enable the adjustment of future catch targets (exploitation rates) to account for climate-driven fluctuations in productivity56,57. Also, predicting catch levels on a decadal scale will be important to the fishing industry, as investments in vessels, processing plants, etc. are made with a time horizon of several decades.
Climate-informed fishery management is also poised to benefit from rapid advances in multiyear prediction of other fishery-related variables such as net primary production by Earth system models58. In the North Atlantic, proper representation of open ocean-shelf connections in such models would attract further research in decadal predictions of fish stocks towards realizing a climate resilient sustainable fisheries management.
## Methods
### Dynamical model
The MPI-ESM is used in its low-resolution setup in the present study (MPI-ESM-LR59). The ocean general circulation component of MPI-ESM-LR, the Max Planck Institute Ocean Model (MPIOM60), is a free surface model with primitive equation solved on an Arakawa C-grid with hydrostatic and Boussinesq approximations. The MPIOM has a total of 40 z-levels in the vertical and the surface layer thickness is 12 m. The MPIOM setup used in the study has a rotated grid configuration (GR15) with one of the poles over Greenland. This enhances the horizontal resolution north of 50°N (15 km near Greenland). The resolution increases gradually to 1.5° towards the equator. Embedded in MPIOM is also the ocean biogeochemistry component, the Hamburg Ocean Carbon Cycle model (HAMOCC61). The HAMOCC incorporates oxygen and phosphate cycles, and defines the marine food web based on nutrients, phytoplankton, zooplankton, and detritus (NPZD)-based approach. The atmospheric general circulation component of MPI-ESM1.2-LR is the European Center-Hamburg (ECHAM62). The ECHAM is run at a horizontal resolution of T63 and with at total of 47 vertical levels and the model top is at 0.01 hPa. In MPI-ESM1.2-LR, the land surface–atmosphere interactions are simulated by the land vegetation module JSBACH63 which is embedded in ECHAM.
We use one set of retrospective initialized decadal predictions (hindcasts) from the MiKlip project64, carried out with the MPI-ESM-LR. Ten-year long ensemble hindcasts with 16 members are started on 1st November every year from 1960 to 2019 (ref. 33). The initial conditions for each member come from an assimilation experiment (1960–2019) with an oceanic ensemble Kalman filter (EnKF) and atmospheric nudging. The oceanic EnKF in MPI-ESM-LR33,65 assimilates monthly profiles of temperature and salinity from EN4 (ref. 66). Simultaneously, atmospheric vorticity, divergence, temperature, and surface pressure are nudged to ERA40/ERAInterim re-analyses67. It should be noted that neither SST from satellite observations nor atmospheric temperature below 900 hPa are assimilated in order to allow for a model-consistent assimilation across the atmosphere–ocean boundary. The assimilation experiment as well as the initialized hindcasts use observed solar irradiation, volcanic eruptions, and atmospheric greenhouse gas concentrations (RCP4.5 concentrations from 2006 onward) as boundary conditions, taken from CMIP6 (ref. 68).
An additional 16-member historical simulations (1850–2005) of surface temperature taken from the MPI-ESM-LR Grand Ensemble69 are analyzed to compare the skill with the initialized hindcasts. The historical simulations are performed under natural and anthropogenic forcings derived from observations covering a total of 156 years (1850–2005). For comparison with initialized hindcasts, these historical simulations are extended with a future RCP8.5 concentrations from 2006 onward. Note that the difference between RCP8.5 and RCP4.5 scenario only emerges towards the mid of this century and hence we expect no significant impact on our short-term analysis if RCP4.5 scenario is used. The natural forcing includes solar insolation, variations of the Earth orbit, tropospheric aerosol, stratospheric aerosols from volcanic eruptions, and seasonally varying ozone. The anthropogenic forcing includes the well mixed gases CO2, CH4, N2O, CFC-11, and CFC-12 as well as O3, and anthropogenic sulfate aerosols. Atmospheric CO2 concentrations are prescribed and the carbon cycle is not interactive. It must be noted that this historical simulation is started from a pre-industrial control run and is not initialized from observations. Therefore, the internal variability in this model simulation may not be in phase with observations, and hence may not reproduce the observed timing of certain climatic events which are related to internal (natural) variability.
### Linear regression models
In order to predict the time series of the TSB of cod stocks (CTSB), we construct a simple and multiple linear regression model with sea temperature (T) and fishing mortality (F) as predictors (independent variables) and the TSB as the predictand (dependent variable). For predicting North Sea cod, local oceanic surface temperature is used while for the Northeast Arctic cod, the SPG temperature is used. Both temperature time series are taken from the assimilation run as the area average of temperature of the first model layer (mid-point at 6 m depth). The time series of temperature from the assimilation run with MPI-ESM-LR compares very well with the widely used observations/re-analyses datasets, the AHOI dataset70 for the North Sea and HadISST71 for the SPG and the Barents Sea Opening (Fig. 1a). The TSB and F are taken from latest stock assessment reports from the ICES. The simple and multiple linear regression model fed with T and F anomalies (mean over 1970–2019 is removed from all variables) as predictors, for example, takes the form
$$\begin{array}{l}{C}_{{\mathrm{{TSB}}}}(y)={\beta }_{o}+{\beta }_{1}T(y-{L}_{T}),\\ {C}_{{\mathrm{{TSB}}}}(y)={\beta }_{o}+{\beta }_{1}T(y-{L}_{T})+{\beta }_{2}F(y-{L}_{F}),\end{array}$$
where CTSB is the statistical TSB prediction at year y, LT and LF are the lags in years at which the respective correlations between TSB and T or F are maximum, βo is the intercept, and β1, β2 are the slopes obtained from fitted observations.
### Cross-validation of statistical models
In order to identify the best performing model, we applied the 80–20 cross-validation method. The regression coefficients are computed between time series of temperature from the assimilation run and the observed cod biomass. In the first step, the respective temperature and cod biomass time series are divided into training and testing sets by randomly selecting with replacement blocks of 80% of the parent time series as the training set and the remaining 20% as the testing set. The regression coefficients are calculated from the training set and applied to the testing set. Correlation coefficients are then calculated between the predictions made with the training set and observations as well as between the testing set and observations. This process is repeated 1000 times, and each time the 80% training set is selected randomly. The 95% confidence interval for the training and test set is the 2.5th and 97.5th percentile range of the respective 1000 correlation coefficients. Note that the lag (L) in the above equation is calculated separately for each predictor before testing various simple and multiple linear regression models based on these predictors. This procedure gives the uncertainty bounds presented in Fig. 2a, b.
### Dynamical–statistical predictions
For hindcasts and forecasts, the regression model is trained on output from the assimilation run (and fishing mortality for the multiple regression model) and the resulting regression coefficients are applied to temperatures from the initialized hindcasts and historical simulation (and the fishing mortality scenarios for multiple regression models). The statistical model is fed with anomalies of each variable and the mean is added to the predicted TSB anomalies at the end. Mathematically this takes the form
$$\begin{array}{l}{C}_{\mathrm{{TSB}}}^{\prime}(y)={\beta }_{o}+{\beta }_{1}T^{\prime} (y-{L}_{T}),\\ {C}_{\mathrm{{TSB}}}^{\prime}(y)={\beta }_{o}+{\beta }_{1}T^{\prime} (y-{L}_{T})+{\beta }_{2}F(y-{L}_{F}),\end{array}$$
where $${C}_{\mathrm{{TSB}}}^{\prime}$$ is the dynamical-statistical TSB prediction at year y, $${{T}}^{\prime}$$ is the dynamically predicted temperature (lead-year-10 predictions for the North Sea and lead-year-4 for the SPG), LT and LF are the lags in years at which the respective correlations between observed TSB and T or F are maximum, βo is the intercept, and β1 and β2 are the slopes obtained from fitted observations.
The uncertainties in regression coefficients (slopes and intercepts) are also estimated using a bootstrapping methodology. First, 1000 new predictor and predictand time series of same length as the original time series are constructed by random sampling with replacement from the parent time series, while preserving their relationship. These new time series are then used to get 1000 estimates of regression coefficients. These 1000 regression coefficients are then applied to each of the 16 ensemble members (for temperature as the predictor). The 95% confidence interval is the 2.5th and 97.5th percentile range of these 16,000 predictions. This procedure gives the uncertainty bounds presented in Fig. 4a, c
### Hindcast skill and hindcast uncertainty
We use anomaly correlation coefficient (ACC) and the MSESS as measures of skill of initialized hindcasts and historical simulations against observations (stock assessment for TSB and assimilation output for temperature) for the period 1960–2019. The MSESS is defined as
$${\mathrm{{MSESS}}}=1-{\mathrm{{MSE}}}/{\mathrm{{MSE}}}_{\mathrm{{REF}}}$$
where MSE is the mean square error of prediction and MSEREF is the mean square error of reference forecast (here, climatology is used as reference)
Prior to calculating ACC and MSESS (and also prior to feeding the statistical model for TSB), the initialized hindcasts are corrected for the lead-time-dependent drift72, and lead-year-dependent climatology (mean over 1970–2019) is removed. The uncertainty in hindcast skill is determined using a block bootstrapping approach. The bootstrapping is done both in time and across ensemble members. We use a 6-year overlapping block bootstrap to account for the autocorrelation in the time series. The estimated uncertainties are not sensitive to a reasonable choices of block length that allow sufficient number of blocks for sampling. Through random resampling with replacement, 1000 new block-bootstrapped time series of predictions and observation are used to obtain 1000 new estimates of ACCs. The 95% confidence interval is the 2.5th and 97.5th percentile range of these 1000 ACCs or MSESSs. This procedure gives the uncertainty bounds presented in Figs. 3a, c and 4b, d.
## Data availability
The observation-based ocean surface temperature datasets (AHOI and HadISST) are publicly available (AHOI: https://www.thuenen.de/en/sf/projects/a-physical-statistical-model-of-hydrography-for-fishery-and-ecology-studies-ahoi/, HadISST: https://www.metoffice.gov.uk/hadobs/hadisst/index.html). The cod biomass and fishing mortality data used in this study are publicly available from the ICES reports (www.ices.dk). The historical simulations from the Max Planck Institute Grand Ensemble are publicly available from the ESGF. The assimilation experiment and decadal predictions analyzed in this study are accessible publicly at the DKRZ (http://cera-www.dkrz.de/WDCC/ui/Compact.jsp?acronym=DKRZ_LTA_1075_ds00004).
## Code availability
The bash scripts for post-processing model output and NCL code used for generating figures is available from the corresponding author upon request.
## References
1. 1.
Stenseth, N. C. et al. Ecological effects of climate fluctuations. Science 297, 1292–1296 (2002).
2. 2.
Oremus, K. L. Climate variability reduces employment in New England fisheries. Proc. Natl Acad. Sci. USA 116, 26444–26449 (2019).
3. 3.
Merino, G., Barange, M. & Mullon, C. Climate variability and change scenarios for a marine commodity: modelling small pelagic fish, fisheries and fishmeal in a globalized market. J. Mar. Syst. 81, 196–205 (2010).
4. 4.
Lindegren, M., Checkley, D. M., Rouyer, T., MacCall, A. D. & Stenseth, N. C. Climate, fishing, and fluctuations of sardine and anchovy in the California current. Proc. Natl Acad. Sci. USA 110, 13672–13677 (2013).
5. 5.
Allison, E. H. et al. Vulnerability of national economies to the impacts of climate change on fisheries. Fish Fish. 10, 173–196 (2009).
6. 6.
Barange, M. et al. Impacts of climate change on marine ecosystem production in societies dependent on fisheries. Nat. Clim. Change 4, 211–216 (2014).
7. 7.
Hawkins, E. & Sutton, R. The potential to narrow uncertainty in regional climate predictions. Bull. Am. Meteorol. Soc. 90, 1095–1108 (2009).
8. 8.
Thompson, D. W., Barnes, E. A., Deser, C., Foust, W. E. & Phillips, A. S. Quantifying the role of internal climate variability in future climate trends. J. Clim. 28, 6443–6456 (2015).
9. 9.
Tommasi, D. et al. Managing living marine resources in a dynamic environment: the role of seasonal to decadal climate forecasts. Prog. Oceanogr. 152, 15–49 (2017).
10. 10.
Salinger, J. et al. In Advances in Marine Biology, (ed Curry, B. E.) Vol. 74, 1–68 (Elsevier, 2016). https://www.sciencedirect.com/bookseries/advances-in-marine-biology/vol/74/suppl/C.
11. 11.
Stock, C. A. et al. On the use of IPCC-class models to assess the impact of climate on living marine resources. Prog. Oceanogr. 88, 1–27 (2011).
12. 12.
Payne, M. R. et al. Lessons from the first generation of marine ecological forecast products. Front. Mar. Sci. 4, 289 (2017).
13. 13.
Tommasi, D. et al. Multi-annual climate predictions for fisheries: an assessment of skill of sea surface temperature forecasts for large marine ecosystems. Front. Mar. Sci. 4, 201 (2017).
14. 14.
Ottersen, G., Kim, S., Huse, G., Polovina, J. J. & Stenseth, N. C. Major pathways by which climate may force marine fish populations. J. Mar. Syst. 79, 343–360 (2010).
15. 15.
Beaugrand, G. & Kirby, R. R. Climate, plankton and cod. Glob. Change Biol. 16, 1268–1280 (2010).
16. 16.
Drinkwater, K. F. et al. On the processes linking climate to ecosystem changes. J. Mar. Syst. 79, 374–388 (2010).
17. 17.
Skern-Mauritzen, M. et al. Ecosystem processes are rarely included in tactical fisheries management. Fish Fish. 17, 165–175 (2016).
18. 18.
Hutchings, J., Myers, R. The biological collapseof Atlantic cod off Newfoundland and Labrador: anexploration of historical changes in exploitation, harvesting technology and management. In North Atlantic Fisheries: Successes, Failures and Challenges (eds Arnason, R., Felt, L.) Vol. 3, 37–93 (Island Studies Press, Charlottetown, Canada, 1995).
19. 19.
Myers, R. A., Hutchings, J. A. & Barrowman, N. J. Why do fish stocks collapse? The example of cod in Atlantic Canada. Ecol. Appl. 7, 91–106 (1997).
20. 20.
Frank, K. T., Petrie, B., Leggett, W. C. & Boyce, D. G. Large scale, synchronous variability of marine fish populations driven by commercial exploitation. Proc. Natl Acad. Sci. USA 113, 8248–8253 (2016).
21. 21.
Myers, R. A. When do environment–recruitment correlations work? Rev. Fish Biol. Fish. 8, 285–305 (1998).
22. 22.
Sguotti, C. et al. Catastrophic dynamics limit Atlantic cod recovery. Proc. R. Soc. B 286, 20182877 (2019).
23. 23.
Sguotti, C. et al. Non-linearity in stock–recruitment relationships of Atlantic cod: insights from a multi-model approach. ICES J. Mar. Sci. 77, 1492–1502 (2020).
24. 24.
Glaser, S. M. et al. Complex dynamics may limit prediction in marine fisheries. Fish Fisheries 15, 616–633 (2014).
25. 25.
Subbey, S., Devine, J. A., Schaarschmidt, U. & Nash, R. D. Modelling and forecasting stock–recruitment: current and future perspectives. ICES J. Mar. Sci. 71, 2307–2322 (2014).
26. 26.
King, J. R., McFarlane, G. A. & Punt, A. E. Shifts in fisheries management: adapting to regime shifts. Philos. Trans. R. Soc. B Biol. Sci. 370, 20130277 (2015).
27. 27.
Cheung, W. W. et al. Large-scale redistribution of maximum fisheries catch potential in the global ocean under climate change. Glob. Change Biol. 16, 24–35 (2010).
28. 28.
Cheung, W. W., Dunne, J., Sarmiento, J. L. & Pauly, D. Integrating ecophysiology and plankton dynamics into projected maximum fisheries catch potential under climate change in the Northeast Atlantic. ICES J. Mar. Sci. 68, 1008–1018 (2011).
29. 29.
Lehodey, P. et al. Preliminary forecasts of Pacific bigeye tuna population trends under the A2 IPCC scenario. Prog. Oceanogr. 86, 302–315 (2010).
30. 30.
Lehodey, P., Senina, I., Calmettes, B., Hampton, J. & Nicol, S. Modelling the impact of climate change on Pacific skipjack tuna population and fisheries. Clim. Change 119, 95–109 (2013).
31. 31.
Matei, D. et al. Two tales of initializing decadal climate prediction experiments with the ECHAM5/MPI-OM model. J. Clim. 25, 8502–8523 (2012).
32. 32.
Robson, J., Polo, I., Hodson, D. L., Stevens, D. P. & Shaffrey, L. C. Decadal prediction of the North Atlantic subpolar gyre in the higem high-resolution climate model. Clim. Dyn. 50, 921–937 (2018).
33. 33.
Brune, S. & Baehr, J. Preserving the coupled atmosphere-ocean feedback in initializations of decadal climate predictions. WIREs Clim. Change 11, e637 (2020).
34. 34.
Holliday, N. P. et al. Reversal of the 1960s to 1990s freshening trend in the northeast North Atlantic and Nordic Seas.Geophys. Res. Lett. 35, 3614 (2008).
35. 35.
Koul, V., Schrum, C., Düsterhus, A. & Baehr, J. Atlantic inflow to the North Sea modulated by the subpolar gyre in a historical simulation with MPI-ESM. J. Geophys. Res. Oceans 124, 1807–1826 (2019).
36. 36.
Årthun, M. et al. Climate based multi-year predictions of the Barents sea cod stock. PLoS ONE 13, e0206319 (2018).
37. 37.
Ottersen, G., Loeng, H. & Raknes, A. Influence of temperature variability on recruitment of cod in the Barents sea. In ICES Marine Science Symposia, Vol. 198, 471–481 (1994).
38. 38.
Hutchings, J. A. Collapse and recovery of marine fishes. Nature 406, 882–885 (2000).
39. 39.
Kjesbu, O. S. et al. Synergies between climate and management for Atlantic cod fisheries at high latitudes. Proc. Natl Acad. Sci. USA 111, 3478–3483 (2014).
40. 40.
Brander, K. In Atlantic Cod: A Bio-Ecology (ed Rose, G. A.) 337–384 (Wiley, 2019).
41. 41.
Pope, J. & Macer, C. An evaluation of the stock structure of North Sea cod, haddock, and whiting since 1920, together with a consideration of the impacts of fisheries and predation effects on their biomass and recruitment. ICES J. Mar. Sci. 53, 1157–1069 (1996).
42. 42.
Hátún, H., Sandø, A. B., Drange, H., Hansen, B. & Valdimarsson, H. Influence of the Atlantic subpolar gyre on the thermohaline circulation. Science 309, 1841–1844 (2005).
43. 43.
Planque, B. & Frédou, T. Temperature and the recruitment of Atlantic cod (Gadus morhua). Can. J. Fish. Aquat. Sci. 56, 2069–2077 (1999).
44. 44.
Borchert, L. F., Müller, W. A. & Baehr, J. Atlantic ocean heat transport influences interannual-to-decadal surface temperature predictability in the North Atlantic region. J. Clim. 31, 6763–6782 (2018).
45. 45.
O’Brien, C. M., Fox, C. J., Planque, B. & Casey, J. Fisheries: climate variability and North Sea cod. Nature 404, 142 (2000).
46. 46.
Piecuch, C. G., Ponte, R. M., Little, C. M., Buckley, M. W. & Fukumori, I. Mechanisms underlying recent decadal changes in subpolar North Atlantic ocean heat content. J. Geophys. Res. Oceans 122, 7181–7197 (2017).
47. 47.
Hansen, B. et al. In Arctic–Subarctic Ocean Fluxes (eds Dickson, R. R., Meincke, J., Rhines, P.) 15–43 (Springer, 2008). https://link.springer.com/book/10.1007/978-1-4020-6774-7#about.
48. 48.
Larsen, K. M. H., Hátún, H., Hansen, B. & Kristiansen, R. Atlantic water in the Faroe area: sources and variability. ICES J. Mar. Sci. 69, 802–808 (2012).
49. 49.
Årthun, M., Eldevik, T., Smedsrud, L., Skagseth, Ø. & Ingvaldsen, R. Quantifying the influence of Atlantic heat on Barents sea ice variability and retreat. J. Clim. 25, 4736–4743 (2012).
50. 50.
Fossheim, M. et al. Recent warming leads to a rapid borealization of fish communities in the Arctic. Nat. Clim. Change 5, 673 (2015).
51. 51.
Yeager, S. G., Karspeck, A. R. & Danabasoglu, G. Predicted slowdown in the rate of Atlantic sea ice loss. Geophys. Res. Lett. 42, 10–704 (2015).
52. 52.
Ingvaldsen, R., Loeng, H., Ottersen, G. & Ådlandsvik, B. Climate variability in the Barents sea during the 20th century with focus on the 1990s. In ICES Marine Science Symposia, Vol. 219, 160–168 (ICES, 2003).
53. 53.
Hilborn, R. & Walters, C. J.Quantitative fisheries stock assessment: choice, dynamics and uncertainty (Springer Science & Business Media, 2013).
54. 54.
Ruckelshaus, M., Klinger, T., Knowlton, N. & DeMaster, D. P. Marine ecosystem-based management in practice: scientific and governance challenges. BioScience 58, 53–63 (2008).
55. 55.
Gjøsæter, H., Bogstad, B. & Tjelmeland, S. Ecosystem effects of the three capelin stock collapses in the Barents sea. Mar. Biol. Res. 5, 40–53 (2009).
56. 56.
Mills, K. E. et al. Fisheries management in a changing climate: lessons from the 2012 ocean heat wave in the Northwest Atlantic. Oceanography 26, 191–195 (2013).
57. 57.
Tommasi, D. et al. Improved management of small pelagic fisheries through seasonal climate prediction. Ecol. Appl. 27, 378–388 (2017).
58. 58.
Park, J.-Y., Stock, C. A., Dunne, J. P., Yang, X. & Rosati, A. Seasonal to multiannual marine ecosystem prediction with a global earth system model. Science 365, 284–288 (2019).
59. 59.
Giorgetta, M. A. et al. Climate and carbon cycle changes from 1850 to 2100 in mpi-esm simulations for the coupled model intercomparison project phase 5. J. Adv. Model. Earth Syst. 5, 572–597 (2013).
60. 60.
Jungclaus, J. et al. Characteristics of the ocean simulations in the Max Planck Institute Ocean Model (MPIOM) the ocean component of the MPI-Earth system model. J. Adv. Model. Earth Syst. 5, 422–446 (2013).
61. 61.
Ilyina, T. et al. Global ocean biogeochemistry model HAMOCC: model architecture and performance as component of the MPI-Earth system model in different CMIP5 experimental realizations. J. Adv. Model. Earth Syst. 5, 287–315 (2013).
62. 62.
Stevens, B. et al. Atmospheric component of the MPI-M Earth System Model: ECHAM6. J. Adv. Model. Earth Syst. 5, 146–172 (2013).
63. 63.
Reick, C., Raddatz, T., Brovkin, V. & Gayler, V. Representation of natural and anthropogenic land cover change in MPI-ESM. J. Adv. Model. Earth Syst. 5, 459–482 (2013).
64. 64.
Marotzke, J. et al. Miklip: a national research project on decadal climate prediction. Bull. Am. Meteorol. Soc. 97, 2379–2394 (2016).
65. 65.
Brune, S., Nerger, L. & Baehr, J. Assimilation of oceanic observations in a global coupled earth system model with the Seik filter. Ocean Model. 96, 254–264 (2015).
66. 66.
Good, S. A., Martin, M. J. & Rayner, N. A. En4: Quality controlled ocean temperature and salinity profiles and monthly objective analyses with uncertainty estimates. J. Geophys. Res. Oceans 118, 6704–6716 (2013).
67. 67.
Dee, D. P. et al. The era-interim reanalysis: configuration and performance of the data assimilation system. Q. J. R. Meteorol. Soc. 137, 553–597 (2011).
68. 68.
Eyring, V. et al. Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geosci. Model Dev. 9, 1937–1958 (2016).
69. 69.
Maher, N. et al. The Max Planck Institute Grand Ensemble: enabling the exploration of climate system variability. J. Adv. Model. Earth Syst. 11, 2050–2069 (2019).
70. 70.
Nunez-Riboni, I. & Akimova, A. Monthly maps of optimally interpolated in situ hydrography in the North Sea from 1948 to 2013. J. Mar. Syst. 151, 15–34 (2015).
71. 71.
Rayner, N. et al. Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. J. Geophys. Res. Atmos. 108, https://doi.org/10.1029/2002JD002670 (2003).
72. 72.
Kharin, V., Boer, G., Merryfield, W., Scinocca, J. & Lee, W.-S. Statistical adjustment of decadal predictions in a changing climate. Geophys. Res. Lett. 39, https://doi.org/10.1029/2012GL052647 (2012).
## Acknowledgements
This study is a contribution to the Excellence Cluster “CliCCS—Climate, Climatic Change, and Society” at the University of Hamburg, funded by the DFG through Germany’s Excellence Strategy EXC 2037 “Project 390683824 (to J.B. and C. Schrum). A.D. is supported by A4 (Aigéin, Aeráid, agus athrú Atlantaigh), funded by the Marine Institute (grant: PBA/CC/18/01). M.Å. is funded by the Trond Mohn Foundation (Grant BFS2018TMT01). C. Schrum was supported by SeaUseTip funded within the framework of the international and interdisciplinary program “EcoBiological Tipping Points (BioTip)” of the Federal Ministry of Education and Research of Germany (BMBF). G.O.’s research was supported by the Research Council of Norway through the project BarentsRISK (Grant No. 288192) and the European Research Council through the H2020 project INTAROS (Grant No. 727890). This study is also a contribution to the PoF IV Program “Changing Earth—Sustaining our Future”, Topic 4 “Coastal Transition Zones under Natural and Human Pressure” of the Helmholtz Association. The authors thank the German Computing Center (DKRZ) for providing their computing resources.
## Funding
Open Access funding enabled and organized by Projekt DEAL.
## Author information
Authors
### Contributions
V.K., J.B., and C. Schrum conceived the work and discussed the research plan. V.K. analyzed the data, model experiments, and prepared the figures. V.K. and C. Sguotti interpreted the results and wrote the first draft of the manuscript. S.B. carried out model experiments. M.A. and A.D. helped with the statistical aspects of the work. G.O. and B.B provided perspectives on North Atlantic fisheries. S.B., M.A., A.D., G.O., B.B., J.B., and C. Schrum discussed results, their interpretation and helped with the revision of the manuscript.
### Corresponding author
Correspondence to Vimal Koul.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review informationCommunications Earth & Environment thanks the anonymous reviewers for their contribution to the peer review of this work. Primary Handling Editor: Joe Aslin.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Koul, V., Sguotti, C., Årthun, M. et al. Skilful prediction of cod stocks in the North and Barents Sea a decade in advance. Commun Earth Environ 2, 140 (2021). https://doi.org/10.1038/s43247-021-00207-6
• Accepted:
• Published:
|
{}
|
# Cosine Distance Matrix¶
Given $$n$$ feature vectors $$x_1 = (x_{11}, \ldots, x_{1p}), \ldots x_n = (x_{n1}, \ldots, x_{np})$$ of dimension Lmath:p, the problem is to compute the symmetric $$n \times n$$ matrix $$D_{\text{cos}} = (d_{ij})$$ of distances between feature vectors, where
$d_{ij} = 1 - \frac {\sum_{k=1}^{p} x_{ik} x_{jk}} {\sqrt{ \sum_{k=1}^{p} x_{ik}^2 } \sqrt{ \sum_{k=1}^{p} x_{jk}^2 }}$
$i = \overline{1, n}$
$j = \overline{1, n}$
## Batch Processing¶
### Algorithm Input¶
The cosine distance matrix algorithm accepts the input described below. Pass the Input ID as a parameter to the methods that provide input for your algorithm. For more details, see Algorithms.
Algorithm Input for Cosine Distance Matrix (Batch Processing)
Input ID
Input
data
Pointer to the $$n \times p$$ numeric table for which the distance is computed.
The input can be an object of any class derived from NumericTable.
### Algorithm Parameters¶
The cosine distance matrix algorithm has the following parameters:
Algorithm Parameters for Cosine Distance Matrix (Batch Processing)
Parameter
Default Value
Description
algorithmFPType
float
The floating-point type that the algorithm uses for intermediate computations. Can be float or double.
method
defaultDense
Performance-oriented computation method, the only method supported by the algorithm.
### Algorithm Output¶
The cosine distance matrix algorithm calculates the result described below. Pass the Result ID as a parameter to the methods that access the results of your algorithm. For more details, see Algorithms.
Algorithm Output for Cosine Distance Matrix (Batch Processing)
Result ID
Result
cosineDistance
Pointer to the numeric table that represents the $$n \times n$$ symmetric distance matrix $$D_\text{cos}$$.
By default, the result is an object of the PackedSymmetricMatrix class with the lowerPackedSymmetricMatrix layout. However, you can define the result as an object of any class derived from NumericTable except PackedTriangularMatrix and CSRNumericTable.
## Examples¶
Batch Processing:
## Performance Considerations¶
To get the best overall performance when computing the cosine distance matrix:
• If input data is homogeneous, provide the input data and store results in homogeneous numeric tables of the same type as specified in the algorithmFPType class template parameter.
• If input data is non-homogeneous, use AOS layout rather than SOA layout.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.
Notice revision #20201201
|
{}
|
# Which one of the following percentage of frequencies between the range of 0 to + 1.00 (σ) standard deviation.
Free Practice With Testbook Mock Tests
## Options:
1. 34.13%
2. 38.23%
3. 46.20%
4. 47.73%
### Correct Answer: Option 1 (Solution Below)
This question was previously asked in
UGC NET Paper-2: Geography 12th Nov 2020 Shift 2
## Solution:
34.13% of frequencies between the range of 0 to + 1.00 (σ) standard deviation.
Normal distribution curve:
• Normal distribution function gives a bell-shaped symmetrical curve around mean and its mean, median and mode coincide.
• It is a unimodal distribution.
• From the lowest value to a value of (Mean –Standard Deviation) the curvature of the normal curve is concave i.e. it has an increasing rate of change of frequencies. At this point, it changes its curvature and becomes convex i.e. it starts having a decreasing rate of change of frequencies. Such a point is known as the point of inflection.
• Another point of inflection will be at the mean itself where instead of having increased it shows a decrease in the frequencies.
• The next point of inflection will occur at a value of ( Mean + Standard Deviation).
• Here again, the curvature of the curve will change from concave to convex and will show an increasing decline in the frequencies.
• In the range of Mean – S.D. to Mean + S.D. 68.23 % observations will fall and,
• In the range of Mean – 2 Standard Deviation (M – 2SD) to Mean + 2Standard Deviation ( M + 2SD) 95.46 % observations will fall and finally
• In the range of Mean – 3Standard Deviation (M – 3SD) to Mean + 3Standard Deviation (M + 3SD) 99.73 % observations will fall.
• Theoretically, the range of values of a normal distribution is from minus infinity to plus infinity
• One can think of the area under a normal "curve" as equaling 100% or 1.0, depending on whether one wants to talk about a percent of an area under the curve or a proportion of the area under the curve.
|
{}
|
# Ground state energy of unitary fermion gas with the Thomson Problem approach
@article{Chen2006GroundSE,
title={Ground state energy of unitary fermion gas with the Thomson Problem approach},
author={Ji-sheng Chen},
journal={arXiv: Nuclear Theory},
year={2006}
}
• Ji-sheng Chen
• Published 23 February 2006
• Physics
• arXiv: Nuclear Theory
The dimensionless universal coefficient $\xi$ defines the ratio of the unitary fermions energy density to that for the ideal non-interacting ones in the non-relativistic limit with T=0. The classical Thomson Problem is taken as a nonperturbative quantum many-body arm to address the ground state energy including the low energy nonlinear quantum fluctuation/correlation effects. With the relativistic Dirac continuum field theory formalism, the concise expression for the energy density functional…
2 Citations
Determination of Landau Fermi-liquid parameters of strongly interacting fermions by means of a nonlinear scaling transformation
A nonlinear transformation approach is formulated for the correlated fermions' thermodynamics through a medium-scaling effective action. An auxiliary implicit variable-effective chemical potential is
Generalized Simulated Annealing for Global Optimization: The GenSA Package
• Computer Science, Mathematics
R J.
• 2013
A brief introduction to the GenSA R package is provided and its utility is demonstrated by solving a non-convex portfolio optimization problem in finance and the Thomson problem in physics.
|
{}
|
## Files in this item
FilesDescriptionFormat
application/pdf
9136648.pdf (6MB)
(no description provided)PDF
## Description
Title: An Eulerian-Lagrangian finite element method for modeling crack growth in creeping materials Author(s): Lee, Hae Sung Doctoral Committee Chair(s): Haber, Robert B. Department / Program: Civil and Environmental Engineering Discipline: Civil Engineering Degree Granting Institution: University of Illinois at Urbana-Champaign Degree: Ph.D. Genre: Dissertation Subject(s): Applied Mechanics Engineering, Civil Abstract: Ductile, history-dependent material behavior governs crack growth in metal structures that are exposed to high temperatures over extended periods, such as nuclear power plants and gas turbines. This study is concerned with the development of finite element solution methods for the analysis of quasi-static, ductile crack growth in history-dependent materials. The mixed Eulerian-Lagrangian description (ELD) kinematic model is shown to have several desirable properties for modeling inelastic crack growth. Accordingly, a variational statement based on the ELD for history-dependent materials is developed and a new moving-grid finite element method based on the variational statement is presented. The moving-grid finite element method is applied to the analysis of transient, quasi-static, mode-III crack growth in creeping materials.The class of history-dependent constitutive models considered here involves rate-form evolution equations for the nonlinear strain. The combination of the rate-based constitutive model with the ELD introduces convective terms that transform the governing equations to a mixed, elliptic-hyperbolic system. The hyperbolic nature of the governing equations leads to two major difficulties. First, if a single-field variational formulation is attempted, then the convective terms introduce a regularity condition that is difficult to satisfy in finite element methods. Second, some kind of stabilization scheme must be used to obtain oscillation-free numerical solutions.A mixed variational method is developed to address these problems, in which the displacement and the nonlinear strain appear as independent fields. The displacement field is modeled by $H\sp1$ functions and the nonlinear strain field is modeled by piecewise-continuous $L\sb2$ functions to satisfy the Babuska-Brezzi conditions. A generalized Petrov-Galerkin method (GPG) is developed that simultaneously stabilizes the solution and relaxes the regularity condition of the mixed variational statement to admit $L\sb2$ basis functions for the nonlinear strain field. Several existing stabilization schemes, such as the SUPG method, the Galerkin/least-squares method and the discontinuous Galerkin method, occur as special cases of the GPG method. A new moving-grid finite element method is developed by combining the GPG method with the ELD kinematic model.Quasi-static, mode-III crack growth in creeping materials under small-scale-yielding (SSY) conditions is considered. After a detailed discussion of previous asymptotic and numerical solutions for this class of problems, the GPG/ELD moving-grid finite element formulation is used to model a transient crack-growth problem. The GPG/ELD results compare favorably with previously-published numerical results and the asymptotic solutions. Issue Date: 1991 Type: Text Language: English URI: http://hdl.handle.net/2142/20376 Rights Information: Copyright 1991 Lee, Hae Sung Date Available in IDEALS: 2011-05-07 Identifier in Online Catalog: AAI9136648 OCLC Identifier: (UMI)AAI9136648
|
{}
|
Journal cover Journal topic
Hydrology and Earth System Sciences An interactive open-access journal of the European Geosciences Union
Journal topic
Hydrol. Earth Syst. Sci., 22, 241–263, 2018
https://doi.org/10.5194/hess-22-241-2018
Hydrol. Earth Syst. Sci., 22, 241–263, 2018
https://doi.org/10.5194/hess-22-241-2018
Research article 12 Jan 2018
Research article | 12 Jan 2018
# A Climate Data Record (CDR) for the global terrestrial water budget: 1984–2010
A Climate Data Record (CDR) for the global terrestrial water budget: 1984–2010
Yu Zhang1, Ming Pan1, Justin Sheffield1, Amanda L. Siemann1, Colby K. Fisher1, Miaoling Liang2, Hylke E. Beck1, Niko Wanders1, Rosalyn F. MacCracken3, Paul R. Houser3, Tian Zhou4, Dennis P. Lettenmaier5, Rachel T. Pinker6, Janice Bytheway7, Christian D. Kummerow7, and Eric F. Wood1 Yu Zhang et al.
• 1Department of Civil and Environmental Engineering, Princeton University, Princeton, NJ 08544, USA
• 2National Meteorological Center, China Meteorological Administration, Beijing, 100081, China
• 3George Mason University, Fairfax, VA 22030, USA
• 4Pacific Northwest National Laboratory, Richland, WA 99352, USA
• 5Department of Geography, University of California, Los Angeles, CA 90095, USA
• 6Department of Meteorology, University of Maryland, College Park, MD 20742, USA
• 7Department of Atmospheric Science, Colorado State University, Fort Collins, CO 80523, USA
Correspondence: Ming Pan (mpan@princeton.edu)
Abstract
Closing the terrestrial water budget is necessary to provide consistent estimates of budget components for understanding water resources and changes over time. Given the lack of in situ observations of budget components at anything but local scale, merging information from multiple data sources (e.g., in situ observation, satellite remote sensing, land surface model, and reanalysis) through data assimilation techniques that optimize the estimation of fluxes is a promising approach. Conditioned on the current limited data availability, a systematic method is developed to optimally combine multiple available data sources for precipitation (P), evapotranspiration (ET), runoff (R), and the total water storage change (TWSC) at 0.5 spatial resolution globally and to obtain water budget closure (i.e., to enforce $P-\text{ET}-R-\text{TWSC}=$ 0) through a constrained Kalman filter (CKF) data assimilation technique under the assumption that the deviation from the ensemble mean of all data sources for the same budget variable is used as a proxy of the uncertainty in individual water budget variables. The resulting long-term (1984–2010), monthly 0.5 resolution global terrestrial water cycle Climate Data Record (CDR) data set is developed under the auspices of the National Aeronautics and Space Administration (NASA) Earth System Data Records (ESDRs) program. This data set serves to bridge the gap between sparsely gauged regions and the regions with sufficient in situ observations in investigating the temporal and spatial variability in the terrestrial hydrology at multiple scales. The CDR created in this study is validated against in situ measurements like river discharge from the Global Runoff Data Centre (GRDC) and the United States Geological Survey (USGS), and ET from FLUXNET. The data set is shown to be reliable and can serve the scientific community in understanding historical climate variability in water cycle fluxes and stores, benchmarking the current climate, and validating models.
1 Introduction
Quantification of the terrestrial water budget and its evolution over time at fine spatial resolutions is critical to understanding the availability and variability of Earth's terrestrial water budget and the exchanges and interactions among the terrestrial, atmospheric, and oceanic branches of the hydrosphere, and to assess the risk of hydrological extremes such as floods and droughts at regional to global scales. Understanding the mean state and variability of the terrestrial water budget is also one of the primary goals of World Climate Research Programme's (WCRP) Global Energy and Water EXchanges (GEWEX; Morel, 2001) project and the National Aeronautics and Space Administration (NASA) Energy and Water cycle Study (NEWS; NASA NEWS Science Integration Team, 2007). The overarching goal of GEWEX is to “reproduce and predict, by means of suitable models, the variations of the global hydrological regime, its impact on atmospheric and surface dynamics, and variations in regional hydrological processes and water resources and their response to changes in the environment, such as the increase in greenhouse gases” (http://www.gewex.org). The grand challenge of the NEWS project is “to document and enable improved, observationally based predictions of energy and water cycle consequences of Earth system variability and change” (http://www.nasa-news.org). Toward these goals, a number of Earth System Data Records (ESDRs) for the major components of the terrestrial water budget are developed under NASA's Making Earth Science Data Records for Use in Research Environments (MEaSUREs) program. While the MEaSUREs program refers to long-term, satellite-based data records as ESDRs, they are generally referred to as Climate Data Records (CDRs) following the National Research Council report where a CDR is defined as “a time series of measurements of sufficient length, consistency, and continuity to determine climate variability and change” (National Research Council, 2004). We will refer to the data set developed and described in this paper as a CDR.
The terrestrial water budget consists of four major components: precipitation (P), evapotranspiration (ET), runoff (R), and total water storage change (TWSC) as shown in Eq. (1). The TWSC over a time interval is balanced by the difference between the incoming water flux of P and outgoing water fluxes of ET and surface and subsurface R for a control volume from the Earth's surface to a lower bound at depth:
$\begin{array}{}\text{(1)}& \mathrm{TWSC}=P-\mathrm{ET}-R.\end{array}$
In situ observations are often considered as the ground “truth” to quantitatively estimate the water budget terms. However, limited network coverage, especially for data-sparse regions, has resulted in a long-time challenge for assessing the terrestrial water budget. Presently, satellite remote sensing has become a major data source to measure the various terms because of its generally global coverage and sufficient temporal repeat times. A number of satellite-based products have been developed to estimate precipitation over the globe, including the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA; Huffman et al., 2007, 2010), the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks – Cloud Classification System (PERSIANN-CCS; Hong et al., 2007), and the Climate Prediction Center MORPHing method (CMORPH; Joyce et al., 2004). For evapotranspiration, global estimates can be derived from a combination of satellite surface radiation budget (SRB), surface meteorology and vegetation cover (Fisher et al., 2008; Mu et al., 2007; Vinukollu et al., 2011; Zhang et al., 2010, 2015). With the NASA Gravity Recovery And Climate Experiment (GRACE) mission, launched in March 2002 (Landerer and Swenson, 2012; Tapley et al., 2004; Wahr et al., 2004), the changes in gravity detected by the GRACE satellites can be used to derive estimates of TWSC, albeit at relatively coarse scale. GRACE has been widely used to study changes in the terrestrial water storage (Rodell et al., 2009, 2011), the terrestrial water budget (Long et al., 2014a, b; Pan et al., 2012; Sahoo et al., 2011; Gao et al., 2010; Sheffield et al., 2009; Wang et al., 2014), and hydrological extremes such as droughts (Thomas et al., 2014; Famiglietti, 2014). For runoff, earlier studies estimated the global mean terrestrial runoff by simply calculating the differences between precipitation and evapotranspiration under the assumption of negligible long-term total water storage change (Berner and Berner, 1987; Baumgartner and Reichel, 1975). However, this “inferred” runoff estimation approach can only be applied to estimate the long-term mean since water storage change cannot be neglected at short temporal scales, e.g., daily, monthly, or seasonally. Furthermore, human interaction with the storage might also play an important role. For example, reservoir filling after construction, interannual reservoir storage changes, and groundwater pumping (Rodell et al., 2009; Famiglietti, 2014; Voss et al., 2013) can significantly contribute to observed storage changes at regional scales. As an alternative, river discharge can be estimated from satellite altimetry (Birkett et al., 2002; Berry et al., 2011), for example, the future Surface Water Ocean Topography (SWOT) (Durand et al., 2010) mission. These satellite missions provide a promising and cost-efficient way of estimating individual water budget components. However, when combined together, they do not close the water budget because of errors in the individual component estimates. Sheffield et al. (2009 found that high bias in satellite precipitation, particularly in the summer, was the major factor in budget non-closure over the Mississippi River basin. Gao et al. (2010) also concluded that water budget closure over 13 large continental rivers in the US was not achieved using remote sensing data mainly due to the biases in precipitation and ET.
In addition to space-borne satellites, our understanding of the hydrological cycle in data-scarce regions has also depended on other data sources such as land surface models (LSMs) (Trenberth et al., 2007; Trenberth and Fasullo, 2013b) and weather/climate reanalysis (Reichle et al., 2011). Offline LSM simulations can provide long-term budget estimates with closure by design (Nijssen et al., 2014; Sheffield and Wood, 2007; Trenberth et al., 2007; Oki and Kanae, 2006). Reanalysis model output provides information that can be used to estimate the water budget at basin to continental (Betts et al., 2003a, b, 2005) and global (Reichle et al., 2011; Balsamo et al., 2015) scales. These large-scale land surface and reanalysis models have pushed the global water budget inventories into a new era where sparse traditional in situ observations are supplemented.
However, different types of uncertainties exist in these sources of information, including those in the parameterizations (satellite retrieval algorithms, LSM, and reanalysis process representations), in LSM parameters such as soil and vegetation properties, forcing data (surface radiation and meteorology) and in reanalysis data assimilation procedures. Therefore, an optimal “combination” of all data sources, including in situ and remote sensing, LSM, and reanalysis data, with their extensive spatial coverage and fine resolution, has the potential to overcome the limitation of relying on a single data source, and to offer improved accuracy, spatial and temporal coverage, and consistency in creating long-term, large-scale water budget information at fine spatial resolutions (Pan et al., 2012).
To address the non-closure problem, techniques have been developed to assess the uncertainties of each budget component and to enforce water budget closure from either multiple data sources (Pan et al., 2012) or single source (Sahoo et al., 2011), usually at the scale of major river basins across the globe. For example, Rodell et al. (2015) recently quantified the mean annual and monthly water budgets over continents and ocean basins for the first decade of the 21st century by using data sets that combine satellite remote sensing and conventional observations. In this study, the constrained Kalman filter (CKF), which is a simplified version (non-ensemble) of the constrained ensemble Kalman filter (CEnKF; Pan and Wood, 2006), is chosen to close water balance. The CKF is a non-ensemble form, and is a stand-alone procedure after a regular Kalman filter update; thus, it is ideal for closing the water balance without filtering or data assimilation.
Building on an increasingly available inventory of global water budget data sets from in situ, satellite, reanalysis, and land surface models, the study reported here has five advances over previously reported work. These are to (1) expand the use of the CKF data assimilation technique in closing the water budget from that reported by Pan et al. (2012) and Sahoo et al. (2011), (2) extend the data records back in time to 1984 (vs. 2000 in Rodell et al. (2015) and forward to 2010 (vs. previous analyses which usually stop near the turn of the 21st century), (3) refine the spatial resolution to 0.5 for the land surface (vs. basin-scale analysis in Pan et al., 2012 and Sahoo et al., 2011, and continental and oceanic analysis in Rodell et al., 2015 and Trenberth and Fasullo, 2013a) and account for the oblateness of Earth, (4) develop a harmonized global terrestrial water cycle CDR by merging the full combination of in situ and satellite remote sensing observations, LSM simulations, and reanalysis model outputs at monthly and 0.5 spatial resolution for the period of 1984–2010 (the CDR data set includes estimates for all major terrestrial water budget variables, i.e., P, ET, R, and TWSC, with budget closure at the grid scale), and (5) validate the CDR against in situ observations not used in the development of the data set.
To the authors' knowledge, this paper presents the first attempt to estimate over multiple decades the global terrestrial water budget (Greenland and Antarctica excluded), with closure at a 0.5 grid scale using this diversity of observational data sources. The data set provides comprehensive and detailed information for water budget analyses over land and will be of particular significance in those sparsely gauged or ungauged regions for understanding historical climate variability of the water cycle, and for benchmarking and validating climate models.
In developing the data set, significant challenges are faced that need to be addressed. These include the following:
1. How consistent are the different products at different spatial scales?
2. What is the best approach to assess the uncertainty of each individual product and then optimally merge them?
3. What is the spatial and temporal variability of the non-closure errors, and how can they be attributed?
Given the developed CDR, a key question is whether the merged data set is in agreement with in situ observations and thus able to capture historical hydroclimatological events (e.g., floods and droughts).
Section 2 introduces the data sources and the methodology. Section 3 carries out a consistency and uncertainty analysis for the multiple input data sources and investigates the spatial variability of the non-closure errors and their attribution during the budget closure enforcement process. Budget estimates based on the closure constrained data set are presented at global, continental, and large basin scales. Then, the CDR is validated against in situ runoff and ET in Sect. 4. Conclusions from the research and future work are discussed in Sect. 5.
Figure 1Locations of 32 selected large basins (Pan et al., 2012).
Table 1Summary of the gridded data used in this study. (The study period is 1984–2010; CLM and NOAH, written in bold, are analyzed but not merged into the final water budget CDR in this study.)
2 Data description, analysis, and methodology
In this study, the water budget is estimated and constrained at 0.5, monthly, for the global land area excluding Antarctica and Greenland. In addition, continental- and basin-scale budget estimates are also provided, including six continents and 32 major basins (Fig. 1) from across the world with a range of climatic regimes. Information about the input data sources (data length, original spatial and temporal resolutions, and references) is listed in Table 1. The Community Land Model (CLM) and NOAH land surface model are used for seasonal cycle analysis but are not included later in the merging and constraining algorithm because of significant disagreement between their seasonal cycles and observations, as discussed in Sect. 2.1.3 and 2.1.4. The 27-year period is divided into four consecutive subperiods (1984–1997, 1998–2002, 2003–2007, and 2008–2010) based on the data availability and overlap (Table 2). Note that the total water storage from GRACE for the initial year (2002) is excluded from the study due to missing values.
Table 2Data sources of merged water budgets with their averaged merging weights in brackets throughout different subperiods (TWSC from GRACE in 2002 is incomplete, so GRACE for 2002 is excluded; the spatial maps of merging weights over the globe can be found in Figs. S1 to S3 in the Supplement II).
3 Input data sets
## 3.1 Precipitation
A set of precipitation products is evaluated including the remote sensing precipitation product from Colorado State University (CSU; Bytheway and Kummerow, 2013) with uncertainty estimates, the gauge-based Global Precipitation Climate Centre (GPCC) product (Schneider et al., 2014), the multi-source merged products of the Princeton Global Forcing data set (PGF; Sheffield et al., 2006), and the Climate Hazard group InfraRed Precipitation with Stations (CHIRPS; Funk et al., 2014). Please refer to Supplement I for more information on these data sets.
Figure 2Seasonal cycles of precipitation from different products over the six continents for 1998–2010 (CHIRPS and CSU only cover the region between 50 N and 50 S); therefore, only the grids between 50 N and 50 S are counted in the calculation of the seasonal cycle; the coefficient of variance (CV, %) is calculated as the standard deviation divided by the ensemble mean of all the products (the same for Figs. 3–9).
Figure 3Seasonal cycles of precipitation from different products over 12 representative large basins for 1998–2010. (CHIRPS and CSU only cover the region between 50 N and 50 S. For those basins either outside or across 50 N–50 S, only PGF and GPCC are visualized.)
Figures 2 and 3 show the seasonal cycles of these four precipitation products over six continents and over 12 selected representative basins distributed in different continents and climate regimes, for their overlapping period of 1998–2010. The coefficient of variation (CV), calculated as the standard deviation divided by the ensemble mean, is plotted to quantify the uncertainties among the precipitation product ensemble of PGF, CSU, GPCC, and CHIRPS. The CV is first calculated for each grid cell and then averaged over continents or basins. There is no spatial coverage beyond 50 N, 50 S from CSU or CHIRPS. Therefore, only the grids between 50 N and 50 S are used to calculate the seasonal cycles in Fig. 2. Likewise, in Fig. 3, only PGF and GPCC are compared over those basins which are either outside or extend poleward of 50 N (or 50 S) (e.g., Lena and Mackenzie river basins). Similar to the conclusion of Pan et al. (2012), who examined a different set of data sets, the spread among these four products is higher in the densely gauged continents in Europe and North America (and basins in those two continents such as the Danube and Mississippi), with a CV ranging from 5 to 12 and 2 to 8 %, respectively (Figs. 2 and 3), than in the sparsely gauged regions, such as the Amazon (Fig. 3). There is an “abnormal” high spread (high CV) for the Niger River basin (sparsely gauged) during the dry season because the ensemble mean of the four precipitation products is close to zero (Fig. 3). The uncertainties are also high for the Mekong River basin where the rainfall totals are high and dominated by the monsoon season (Fig. 3). The high uncertainties in less densely gauged regions could originate from the different gauge densities from different products or the ways in which the data are merged and gridded. It is interesting to note that the average discrepancy between the highest estimates (CSU) and the lowest (CHIRPS) over Europe is around 15 mm month−1 throughout the year (Fig. 2). This discrepancy is more prominent at basin scales; for example, the monthly mean difference between CSU and CHIRPS in the densely gauged basins such as Danube and Mississippi is around 20 mm month−1 (Fig. 3). CHIRPS is a blended precipitation product (e.g., precipitation climatology, remote sensing from multiple sources, seasonal forecast form Climate Forecast System version 2 (CFSv2), and in situ observations) but it is dominated by gauge corrections in regions with higher gauge density such as Europe and North America, and therefore in basins such as the Danube and Mississippi. The differences among the three gauge-merged products (PGF, GPCC, and CHIRPS) might possibly be from the different data sources that they merge rather than from gauge observations, different numbers of gauges used, and undercatch corrections. The seasonal cycles in Figs. 2 and 3 are consistent with the climate regimes, e.g., the inversed seasonality in the Murray–Darling Basin, the high peak in South America in March, and wet summer in low latitudes.
### 3.1.1 Evapotranspiration
Unlike precipitation with relatively dense in situ observations, especially for developed regions, in situ based evapotranspiration estimations (from flux towers) are very sparse. Here, we collect 10 gridded global terrestrial ET products, of which 5 are satellite derived, 2 are reanalysis products, and 3 are from land surface models. One satellite product is the Global Land Evaporation Amsterdam Model (GLEAM; Miralles et al., 2011). As parts of the MEaSUREs products, the four other satellite products are derived using two algorithms, the Penman–Monteith (PM) and Priestley–Taylor (PT), cross-combined with two forcing inputs that are different from the other six ET products, the SRB-CFSR (Surface Radiation Budget – Climate Forecast System Reanalysis), and SRB-PGF. These four products are referred to as SRB-CFSR-PM, SRB-CFSR-PT, SRB-PGF-PM, and SRB-PGF-PT (Vinukollu et al., 2011). Satellite remote sensing carries the mission of observing Earth at fine spatial resolution and comprehensive coverage and makes it possible to estimate water budget in sparsely gauged regions. Therefore, five satellite ET products are merged into the CDR. The two reanalysis ET products are from ERA-Interim (Simmons et al., 2006) and NASA's Modern-Era Retrospective Analysis for Research and Application (MERRA; Rienecker et al., 2011). The LSM ET data sets are from the Variable Infiltration Capacity model (VIC v4.0.6), CLM v3.5, and NOAH v3.4 forced by an updated version of PGF. Please refer to Supplement I for more information.
Figure 4Seasonal cycles of evapotranspiration from different products over the six continents for 1984–2007 (Greenland is excluded for North America, as is true for Figs. 6 and 8).
Figure 5Seasonal cycles of evapotranspiration from different products over 12 representative large basins for 1984–2007.
The 10 ET products show less consistency in the seasonal cycle (Figs. 4 and 5) than the precipitation data sets (Figs. 2 and 3). At continental scales (Fig. 4), the reanalysis ET products (ERA-Interim and MERRA) generally have relatively high values for the six continents, while the LSMs generally predict lower values over Asia, Europe, and North America. Most of the satellite ET products (i.e., GLEAM, SRB-CFSR-PM, SRB-CFSR-PT, SRB-PGF-PM, and SRB-PGF-PT) lie between the reanalysis and LSMs in Asia, Europe, and North America. More striking is the relative lack of consistency among those 10 ET products for the wet tropical basins (Amazon, Congo, and Mekong). The seasonality of ET over these basins is complex because of the overall energy limitation but seasonally and spatially varying moisture limitation (Guan et al., 2015). These results imply that the 10 approaches have significant differences in their derived surface radiation budget and meteorology as well as the parameterizations of evaporative processes (potential ET, transpiration, interception, and soil evaporation) and their interaction with phenological and environmental controls. The relatively higher consistency of the remotely sensed algorithms for these basins is in part a result of using the same (or closely similar) surface radiation budget but different meteorological forcings.
Figure 6Seasonal cycles of runoff from different products over the six continents for 1984–2010.
Figure 7Seasonal cycles of runoff from different products over 12 representative large basins for 1984–2010.
### 3.1.2 Runoff
The three LSMs are forced by the same meteorological forcing from PGF to simulate global runoff over land. The VIC simulation was calibrated over 43 well-distributed major global basins against the measured streamflow data from the Global Runoff Data Centre (GRDC; http://grdc.bafg.de) while CLM and NOAH are uncalibrated. Please refer to Supplement I for additional model information under the evapotranspiration section. Figures 6 and 7 display the seasonal cycles over the six continents and the 12 representative major river basins. NOAH shows opposite seasonal cycle against VIC and CLM in Europe and North America, which include high-latitude regions (Fig. 6). Unlike VIC and NOAH, CLM almost shows no seasonal cycle in Oceania (Fig. 6). The disagreement between the LSMs can also be found at basin scales (e.g., Danube, Lena, Mackenzie, Yukon, and Murray–Darling in Fig. 7). Additionally, Fig. 8 shows the verification of the runoff from the LSMs against GRDC observations for 26 basins that have available data records longer than 3 years during 1984–2010. NOAH shows a negative runoff bias against GRDC for most of the midlatitude to high-latitude basins (Columbia, Danube, Indigirka, Kolyma, Kena, Mackenzie, Northern Dvina, Ob, Olenek, Pechora, Yenisei, and Yukon; Fig. 8). CLM has better performance over high-latitude basins than NOAH but it shows a high overestimation of runoff for the Danube and Don (Fig. 8). None of the LSMs capture the seasonal cycles for the Indus and Senegal basins. Nonetheless, the authors recognize that runoff estimates using a number of LSMs (e.g., Haddeland et al., 2011) can provide uncertainty estimates (i.e., spread or standard deviation among different data sources) in simulated runoff. However, CLM and NOAH runoff estimates are not merged into the CDR developed in this study in order to avoid the large biases from their uncalibrated parameters. Additional reasons for not merging CLM and NOAH are discussed in Sect. 2.1.4.
Figure 8Seasonal cycles of runoff from VIC, CLM, NOAH, and MEaSUREs against GRDC runoff observations over 26 large basins for different periods according to in situ data availability.
### 3.1.3 Total water storage change
The TWSC, which measures the changes in total water storage during the specific unit period, is taken from the LSMs and the GRACE data. The GRACE monthly total water storage anomaly (TWSA) time series, which are anomalies relative to the 2004–2009 time–mean baseline from ReLease 05 (RL05), that are processed by three centers, GeoForschungsZentrum Potsdam (GFZ), Center for Space Research (CSR) at University of Texas at Austin, and Jet Propulsion Laboratory (JPL), are used to calculate the TWSC via the backward difference equation in Eq. (2) and central difference equation in Eq. (3). Comparisons indicate that the central difference calculation (Eq. 3) is in better agreement with the VIC-inferred TWSC. Therefore, the central difference TWSC has been used.
$\begin{array}{}\text{(2)}& & \mathrm{TWSC}=\left({\mathrm{TWSA}}_{t}-{\mathrm{TWSA}}_{t-\mathrm{1}}\right)/\phantom{\left({\mathrm{TWSA}}_{t}-{\mathrm{TWSA}}_{t-\mathrm{1}}\right)\mathrm{\Delta }}\mathrm{\Delta }t\text{(3)}& & \mathrm{TWSC}=\left({\mathrm{TWSA}}_{t+\mathrm{1}}-{\mathrm{TWSA}}_{t-\mathrm{1}}\right)/\phantom{\left({\mathrm{TWSA}}_{t+\mathrm{1}}-{\mathrm{TWSA}}_{t-\mathrm{1}}\right)\mathrm{2}}\mathrm{\Delta }t\end{array}$
Different parameters and solution strategies were explored and applied by these three processing centers and the differences between the centers have generally decreased over the releases (https://grace.jpl.nasa.gov/data/choosing-a-solution/). Even though VIC only computes the water storage in the upper few meters of the soil column (depending on the calibrated storage capacity in its second and third layers), this is the most active part of the soil column. Therefore, studies (e.g., Gao et al., 2010; Tang et al., 2010) found reasonable agreement between changes in TWSC from GRACE and the VIC model. Similar results were also found in this study: TWSC from VIC and GRACE (from GFZ, CSR, and JPL) are in good agreement at both continental (Fig. 9) and basin scales (Fig. 10) except for some timing lags in the high-latitude basins of the Lena and Yukon. This lag between GRACE-derived minimum TWSC and VIC-inferred minimum TWSC suggests that the snowmelt (and subsequent runoff) starts earlier in VIC than observed by GRACE or that more snowmelt ponds into wetlands or discharges into lakes, neither of which are well represented in VIC as its snowmelt discharges more directly into rivers. In contrast, NOAH shows a reversed seasonal cycle in those high-latitude continental regions such as Asia and North America, and basins such as the Danube, Lena, Mackenzie, and Yukon, while CLM shows disagreement in the seasonal cycle in Oceania as well as in the Danube and Mississippi basins relative to GRACE observations. Not surprisingly, the spread within the three GRACE products is very small compared to the differences against VIC (Figs. 9 and 10). Sakumura et al. (2014) found that the ensemble mean (simple arithmetic mean of JPL, CSR, and GFZ) was the most effective method in reducing the noise in the gravity field solutions within the available scatter of the solutions. Therefore, the ensemble mean of the TWSC from GFZ, CSR, and JPL is taken as the best TWSC product derived from GRACE, and this is used in the later water budget analysis together with TWSC from VIC.
Figure 9Seasonal cycles of TWSC from different products over the six continents for 2003–2010. TWSC is first normalized and then the CV (%) is calculated. The same is true for Fig. 10.
Figure 10Seasonal cycle of TWSC from different products over 12 representative large basins for 2003–2010.
## 3.2 Methods
All the data sets, as listed in Table 1, are first aggregated or disaggregated to 0.5 spatial and monthly values using bilinear interpolation; then, the errors/uncertainties of each product are assessed. Estimates for the same water budget variable are then merged following the algorithm described in Luo et al. (2007). The merged water budget estimates are further adjusted to ensure closure at every grid using the CKF approach of Pan et al. (2012). Then, the unconstrained and constrained water budgets are analyzed at different scales. Figure 11 provides a flow chart of the procedure.
### 3.2.1 Uncertainty estimation and data merging technique
There is no best estimate or observation of each individual water budget component at the grid scale over the globe due to the limited spatial coverage of in situ measurements. This is especially true for evapotranspiration observations from the flux tower networks. Thus, the limited availability of gridded ground observations makes it impossible to quantify the error in each water budget component. Therefore, in this study, the deviation from the ensemble mean of all data sources for the same budget variable is used as a proxy of the uncertainty/error in individual products. The merging procedure for each budget component is a weighted averaging where the optimal merging weight wi is given by the following equation (Luo et al., 2007; Sahoo et al., 2011):
$\begin{array}{}\text{(4)}& {w}_{i}=\frac{\frac{\mathrm{1}}{{\mathit{\sigma }}_{i}^{\mathrm{2}}}}{\sum _{j=\mathrm{1}}^{n}\frac{\mathrm{1}}{{\mathit{\sigma }}_{j}^{\mathrm{2}}}},\end{array}$
in which wi is the merging weight for product i, ${\mathit{\sigma }}_{i}^{\mathrm{2}}$ is the error variance of product i calculated against the ensemble mean, and n is the total number of products. Note that wi equals 1. The larger the error variance of product i, the lower its weight.
The number of products merged into single water budget estimate varies in the different subperiods due to the data availability (Table 2). A “data consistency adjustment” is applied after the data merging process in order to guarantee the consistency of the CDR estimated in this study. Taking precipitation as an example, first, for the period with complete data records (i.e., 1998–2008), the inner-annual monthly mean precipitation merged from all the available products (i.e., PGF, GPCC, CHIRPS, CSU) and the mean precipitation merged from the available products (i.e., PGF, GPCC, CHIRPS) during the incomplete data records period (i.e., 1984–1997 during which the CSU is not available) are calculated, respectively. Then, the interannual monthly climatological bias, which is the monthly mean precipitation merged from PGF, GPCC, CHIRPS, and CSU minus that merged from PGF, GPCC, and CHIRPS, is simply added to the interannual monthly mean precipitation during the incomplete data records period (i.e., 1984–1997). This “data consistency” approach aims to avoid the “jump” in the merged precipitation time series in the year 1998 when the CSU became available. The same procedure is then applied to adjust the data consistency for ET during 2008–2010 and TWSC during 1984–2002. We contend that this is a key step, as the temporal consistency of the CDR will impact the reproduction of historical hydrological extremes and the analysis of long-term trends for all the available water budget variables.
### 3.2.2 Enforcing water budget closure using CKF
In short, CKF redistributes the non-closure errors back onto the various water budget components according to their error levels and correlations. We define the water balance residual as $r=P-\text{ET}-R-\text{TWSC}$. If we write the budget components as a column vector x:x= [P, ET, R, TWSC], then the residual of the water balance can be expressed as a linear function of the vector, r=Gx, where $\mathbf{G}=\left[\mathrm{1},-\mathrm{1},-\mathrm{1},-\mathrm{1}\right]$. The error covariance matrix of x is calculated as ${\mathbit{\epsilon }}_{\mathbit{x}\mathbit{x}}=\stackrel{\mathrm{‾}}{\left(\stackrel{\mathrm{^}}{\mathbit{x}}-\mathbit{x}\right)\left(\stackrel{\mathrm{^}}{\mathbit{x}}-\mathbit{x}{\right)}^{T}}$, where $\stackrel{\mathrm{^}}{\mathbit{x}}$ is an estimate of x, its “true value”. In this study, $\left(\stackrel{\mathrm{^}}{\mathbit{x}}-\mathbit{x}\right)$ is replaced with the spread of the ensemble in each water budget component. This uncertainty estimation method was first proposed by Adler et al. (2001) and then applied in Tian and Peters-Lidard (2010) to generate a global precipitation uncertainty map for a variety of satellite remote sensing products. εxx has dimensions of 4×4 since x consists of four budget variables. Then the balance-constrained estimate is calculated from $\stackrel{\mathrm{^}}{{\mathbit{x}}^{\mathbf{\prime }}}=\stackrel{\mathrm{^}}{\mathbit{x}}-{\mathbit{\epsilon }}_{\mathbit{x}\mathbit{x}}{\mathbf{G}}^{T}\left(\mathbf{G}{\mathbit{\epsilon }}_{\mathbit{x}\mathbit{x}}{\mathbf{G}}^{T}$)${}^{-\mathrm{1}}\stackrel{\mathrm{^}}{r}$. The residual term $\stackrel{\mathrm{^}}{r}$ is redistributed back onto the various water budget variables through the above equation. Mathematically, the CKF algorithm mimics assimi lating a “perfect” (zero-error) observation of r=0. Further details are presented in Pan and Wood (2006). In this study, the error of runoff is simply assumed as 10 %, as VIC is the single source of runoff. This is highly empirical, as it is based on the authors' knowledge and confidence about the VIC model calibration given there is no global grid-level (0.5 in this study) runoff observations to quantify the error. The water budget closure is done monthly based on variational error from month to month.
Figure 11The flowchart describes the progress of data preprocessing, error analysis, water balance constraint, and multi-scale water budget analysis.
4 Water budget merging and constraint
## 4.1 Data merging
All the products for the same water budget component are merged into a single estimate based on their uncertainties/errors relative to their ensemble mean as described in Sect. 2.2.1. The values in Table 2 summarize the mean merging weights of each individual product for different periods. Please refer to Figs. S1 to S3 in Supplement II for the spatial maps of the merging weights from different products. The global mean merging weights for the precipitation are calculated over 50 N–50 S during 1984–1997 and 1998–2010. CHIRPS and CSU only cover 50 N–50 S; therefore, for those regions outside 50 N–50 S, PGF and GPCC are merged with equal weights (50 %). Before the availability of the CSU product in 1998, the average merging weights of PGF, GPCC, and CHIRPS over 50 N–50 S (land) are 29.6, 34.6, and 35.8 %. CHIRPS is closest to the ensemble mean especially for the Amazon Basin and therefore has a higher weight in that region (Fig. S1 in Supplement II). For the period of 1998–2010 when CSU becomes available, CHIRPS (26.5 %), GPCC (26.8 %), and CSU (26.0 %) have similar weights. Note that the weights vary with time and location. The annual mean of the merged precipitation is 767.0 mm for 1984–1997, 792.7 mm for 1998–2002, 786.7 mm for 2003–2007, and 802.9 mm for 2008–2010 (Table 3). Equivalent numbers at monthly scale are displayed in Fig. 12 in terms of global maps. The values from Table 3 and Fig. 12 are calculated using the data consistency adjustment described in Sect. 2.2.1.
Table 3Annual mean water budgets (mm year−1) over the globe (Greenland and Antarctica excluded) before (normal font) and after (in bold) water balance constraint, and their attributions (in italic) to non-closure error throughout subperiods.
For evapotranspiration, the averaged merging weights over land for each product are VIC (11.3 %), ERA (10.8 %), MERRA (6.6 %), GLEAM (12.8 %), SRB-PGF-PM (17.2 %), SRB-PGF-PT (15.9 %), SRB-CFSR-PM (13.9 %), and SRB-CFSR-PT (11.5 %) during their common period (1984–2007) (Fig. S2 in Supplement II). Among the eight ET products, MERRA has the lowest averaged merging weight, as it has a relatively larger deviation from other ET products at both continental (Fig. 4) and basin scales (Fig. 5). In the Amazon Basin, MERRA shows nearly an opposite seasonal cycle against other ET products (Fig. 5), and thus its merging weight is extremely low there (Fig. S2 in Supplement II). The merged ET values in the unconstrained budgets, averaged over land, are 518.0, 523.6, 516.0, and 522.0 mm year−1 throughout those four subperiods (Table 3; the spatial maps can be found in Fig. 12).
The runoff simulated from VIC is used as the “merged” terrestrial runoff at the grid scale since the gauge observations are discrete and spatially incomplete. The annual averaged runoff over the globe is 338.9 mm year−1 during 1984–2010 (Table 3; see Fig. 12 for the spatial maps for the subperiods).
For the total water storage change, the uncertainty in VIC-inferred storage change and GRACE-derived storage change is simply assumed to be 5 and 10 % of their actual values due to the lack of a better source for their validation (Pan et al., 2012). Consequently, the higher merging weight from VIC (67.1 %) and lower merging weight from GRACE (32.9 %) in Table 2 (and Fig. S3 in Supplement II for the spatial maps of merging weights) are a result of the assigned error ratios (i.e., 5 and 10 %). Given the good agreement in TWSC between VIC and GRACE (Figs. 9 and 10), the impact of such a subjective error assignment is relatively small. However, for a high-latitude basin such as the Yukon where VIC and GRACE have relative large discrepancy, the error is relatively high. Globally, the monthly mean of TWSC is almost zero during the four subperiods as shown in the fourth row of Fig. 12. Nonetheless, multi-year variability due to drought and wet periods is observable. For example, the long-term drought in the central US and Canadian prairies over the 1998–2002 period shows up along with the Brazilian droughts in 1994–1995 and 2004–2005 that extended into Argentina (2004–2006). Also seen in Fig. 12 is the wetting trend over the last two decades of the Sahel since the severe mid-1980s drought as well as the floods in Brazil in 2008.
Figure 12Monthly mean (mm month−1) of different water budget terms (from the first row to the bottom: precipitation, evapotranspiration, runoff, total water storage change, imbalance) before water balance constraint throughout different periods (from the left to the right: 1984–1997, 1998–2002, 2003–2007, 2008–2010, and 1984–2010; the numbers listed on each subpanel are the monthly mean value for each merged water budget variable before water balance constraint (mm month−1) during different subperiods (Greenland and Antarctica excluded). This is the same as in Figs. S4 and S5 in Supplement II but for the merged water budget variable after water balance constraint (mm month−1) and the non-closure error attributions to each water budget variable (%) ).
## 4.2 Data assimilation to close the water budget
The last row of Fig. 12 shows the global maps of the non-closure errors for the subperiods. The long-term mean non-closure error relative to precipitation is around 9.8 % over land during 1984–2010 (Table 3). The annual mean imbalance over land ranges from 55.3 to 80.6 mm year−1 during the four subperiods (Table 3).
Figure 13Unconstrained (a, c, e) and constrained (b, d) water budget estimates (mm month−1) over the Amazon River basin. The top, middle, and bottom rows show the time series of water budget in terms of fluxes (precipitation, evapotranspiration, and runoff), TWSC, and imbalance. The imbalance/non-closure error after water budget constraint equals to zero and the imbalance/non-closure attributions to each water budget variables throughout different subperiods are shown in panel (f).
Figure 13 shows an example of the unconstrained (Fig. 13a, c) and constrained (Fig. 13b, d) water budgets for the Amazon Basin together with imbalances (Fig. 13e) and their attribution (Fig. 13f). Over the Amazon Basin where the total precipitation is large and the gauges are sparse, the precipitation uncertainty is higher. This results in precipitation being the main recipient of the non-closure error attribution (Fig. 13f), receiving around 50 % of the non-closure error for each of the subperiods as well as the complete analysis period. Due to the “inconsistencies” in terms of different numbers of available data sources merged into the budget during the four consecutive subperiods, the imbalance/non-closure error (Fig. 13e) does not show a regular seasonal cycle and a continuous pattern of imbalance.
The annual mean water budget in terms of P, ET, R, and TWSC after water balance constraint is 781.8, 463.9, 318.0, and 0 mm during 1984–2010, respectively (Table 3). Note that direct application of CKF to enforce the water balance without other constraints may possibly lead to a non-zero TWSC over a long term and sometimes a negative runoff. Therefore, two additional “filters” are added after the CKF. First, if the runoff is negative, we will re-run the CKF and only redistribute the non-closure error onto the other three budget components. Second, for each grid cell, if the long-term mean TWSC over 1984–2010 is not zero, the monthly long-term mean TWSC will be subtracted from the TWSC and added to the precipitation and evapotranspiration month by month during 1984–2010. Figure S4 in Supplement II shows the mean water budget components after the CKF water balance enforcement in addition to the mean water budget components before the enforcement in Fig. 12. The long-term mean of TWSC at each grid cell is zero over the entire 27 years after the second filter, which is also named as “TWSC detrending”. Though at regional scales, some places have experienced groundwater depletions such as the US high plains and central valley, western Iran, India, etc., starting from different years. One of the challenges is a lack of data on groundwater extractions. Therefore, from the global perspective, for almost three decades during the study period (1984–2010) covered by this study, the authors assume the long-term TWSC to be zero and thus apply the detrending, after which the spatial variability of TWSC still exists during the four subperiods (Fig. S4). The “zero TWSC” assumption would potentially introduce local/regional bias into the water budget estimates in the regions with groundwater depletions. A more comprehensive comparison of the water budget estimation before and after the closure enforcement is listed in Table 4 at both the continental and basin scales. These water budget component values are spatially and temporally aggregated for each continent or basin over the analysis period of 1984–2010.
Table 4The summary table of annual mean water budgets (mm year−1) before and after water balance constraint and their corresponding attributions (%) to the non-closure error at both continental and basin scales (Greenland is excluded for North America).
The attribution of the non-closure term for each water budget variable is based on the uncertainties among different products. The results from this study are in general agreement with Pan et al. (2012) where the authors showed that ET has a high non-closure attribution in a large portion of the 32 river basins that they analyzed. The average attribution of non-closure errors to ET over the globe is 45.4 % during 1984–2010 compared to 38.4 % for P, 4.9 % for R, and 11.2 % for TWSC (see Table 3). For most of the regions, ET receives the highest attribution of the non-closure error, particularly in Africa (50 % attributed to ET vs. 37 % to precipitation, 3 % to runoff, and 10 % to TWSC; see Table 4) and Oceania (46 % attributed to ET vs. 41 % to precipitation, 2 % to runoff, and 10 % to TWSC; see Table 4). Figure S5 additionally shows the global maps of the mean water budget non-closure error attribution during different subperiods. Higher attributions to precipitation occur in basins in midlatitudes to high latitudes such as the Danube (42 % to precipitation vs. 38 % to ET, 6 % to runoff, and 12 % to TWSC; see Table 4) and Don (42 % to precipitation vs. 39 % to ET, 3 % to runoff, and 16 % to TWSC; see Table 4), where the estimation of extreme rainfall rates remains less well resolved (Huffman et al., 2007; Yong et al., 2014). High non-closure attributions to precipitation also occur in tropical basins such as the Amazon (46 % to precipitation vs. 33 % to ET, 9 % to runoff, and 12 % to TWSC; see Table 4) and Congo (46 % to precipitation vs. 37 % to ET, 6 % to runoff, and 11 % to TWSC; see Table 4) because the precipitation is large and the gauges are scarce in these basins. The attribution to the total water storage change is generally small except for the northern regions where snow, ice melt, and seasonal storage changes in wetlands dominate the water budgets (Fig. S5 in Supplement II). Runoff receives the smallest attribution of the imbalance among the four water budget components for most regions over the globe, which is in agreement with what was concluded in Sahoo et al. (2011). The mean attributions of each water budget component over different continents and basins over 1984–2010 are listed in Table 4 as well.
5 Validation of the MEaSUREs global terrestrial water budget CDR
The final CDR, which is the constrained global water budget with closure, is validated against in situ observations in terms of runoff and ET at multiple spatial scales.
Figure 14(a) Correlation coefficient (CC) between monthly GRDC runoff observations and MEaSUREs runoff estimates for 165 medium basins; panel (b) is the same as (a) but for 862 small basins; (c) monthly mean of MEaSUREs runoff estimates against GRDC runoff observations for medium basins; panel (d) is the same as (c) but for small basins.
## 5.1 Runoff verification
In situ river discharge observations are collected from three major data sources: (1) GRDC, (2) USGS, and (3) the Australian Land and Water Resources Audit project (Peel et al., 2000). The observations were collected from GRDC for a total number of 32 large basins and 26 of them are used (as shown in Fig. 8) after filtering out those basins with less than 3 years of data during 1984–2010. Figure 1 provides the locations of these basins. A total of 165 out of a total of 362 medium sized basins (5000 to 10 000 km2, 331 from GRDC, and 31 from USGS) were selected for validations. For validation over small basins, discharge data for 862 basins (1000 to 5000 km2) were collected from GRDC, USGS, and the Australian Land and Water Resources Audit project. Basins under any one or more of the following conditions were excluded: (1) GRDC basins for which the catchment boundaries could not be reliably determined; (2) basins with large dams (reservoir capacity greater than 10 % of annual streamflow); (3) basins with urban areas greater than 2 % (using the “artificial areas” class of the map from GlobCover, version 2.3; Bontemps et al., 2011); (4) basins with irrigated areas greater than 2 % (using the Global Irrigated Area Map; http://waterdata.iwmi.org/Applications/GIAM2000/; and (5) basins with either a gain or loss forest (change in land cover) > 20 % of the basin area. For both the medium and small basins, those basins with data records length less than 5 years were also excluded. Figure 14a displays the locations of medium and small basins. The observed discharge data were converted to runoff by dividing by the basin area upstream of the gauge location.
The seasonal cycles of runoff from the CDR created in this study over the 26 large basins are compared against the GRDC observations as shown in Fig. 8. Not surprisingly, the runoff estimated from the constrained system (grey dashed line) is not much different from the runoff estimated in the unconstrained system (which is VIC runoff shown by the solid blue line) as we assign a small error (10 %) on the runoff component within the budget constraint algorithm. In general, VIC outperforms the other two LSMs as VIC was calibrated over 43 major global river basins (Sheffield and Wood, 2007) although the calibration periods varied. Therefore, we believe that VIC can provide a reliable grid-scale estimate of runoff budget. Note that the seasonal peaks from NOAH and VIC are in agreement for the Indus Basin but their peaks precede the peak from the GRDC observations, which strangely happen in November. Comparison to other studies for the Indus River (Bookhagen and Burbank, 2010) shows that the discharge peak occurs in the summertime, which is consistent with VIC and NOAH. Likewise, for the Senegal River, records from regional studies (Andersen et al., 2001) and Stisen et al. (2008) show runoff peaks in August to September instead of April to May from the GRDC record. In summary, we believe that our CDR provides good runoff estimates over the Amur, Danube, Mackenzie, Mekong, Mississippi, Pearl, Pechora, Yangtze, and Yenisei rivers but unsatisfactory estimates over the Congo, Lena, Murray–Darling, and Yellow rivers in that the predicted seasonal discharge differs significantly from the observed seasonal cycle. The reasons for this are due to water management not being included in the VIC model (e.g., Murray–Darling and Yellow rivers), a combination of scarce data, and not including large wetlands (e.g., the Congo and Lena basins).
A test of significance was conducted to remove those medium and small basins with non-significant correlations between GRDC runoff observations and CDR runoff records. This was done in order to remove those basins such as Indus and Senegal which might have incorrect observational data. Figure 14 compares CDR estimated runoff against in situ observations for 165 medium basins and 862 small basins in terms of correlation coefficient (CC; Fig. 14a and b) and scatter plots (Fig. 14c and d) at the monthly scale. Again, the observed discharge measurements are converted to runoff using the basin area. A total of 84 out of 165 medium basins ( 51 %) and 625 out of 862 small basins ( 73 %) have CC values that are larger than 0.5 as shown in Fig. 14a and b. There are some medium basins with extreme low CC values (red dots in Fig. 14a) in northern Canada where the lake/wetland influences are not modeled, and in south Africa where the sporadic rainfall is not picked up and the model fails to replicate the quick runoff. The runoff from the CDR has CC values of 0.86 and 0.83 for the same medium and small basins, as shown in Fig. 14a and b, and has a bias ratio of 6 % for medium basins and 16 % for small basins (Fig. 14c and d). There is also a tendency for the model to underestimate runoff in the small basins in wetter regions (Fig. 14d). This scatter may be due forcing uncertainty, model calibration, or omitted processes like water management (reservoirs, irrigation), all which might shift the timing of the runoff peak, particularly on a monthly basis. For the small basins, though they were filtered in an attempt to remove basins impacted by factors such as reservoirs, irrigation, urbanization, and so forth, they might be impacted by the scaling issues. The CDR was computed at 0.5 grid resolution, which is approximately 50 km near the Equator. The small basins range from 1000 to 5000 km2 so that the small basins only cover a maximum of two grid pixels and a minimum of 0.2 of a grid pixel for the smallest basin. The basin masks were extracted at a higher spatial resolution and then aggregated onto the 0.5 grids with the fractional area for the basin in order to minimize the impact of spatial mismatch. Nonetheless, the coarser spatial resolution of the CDR still affects the comparison of the runoff estimates with small-scale basin observations. No estimate of this resolution effect has been determined but the results shown in Fig. 14d suggest that the effect is limited to a small number of basins.
## 5.2 ET verification
Estimated ET is verified by two different approaches. First, against an inferred ET is computed from the difference between observed precipitation and observed discharge (PR) at the annual scale to minimize the effect of seasonal TWSC. This is done for the 25 large, 169 medium, and 813 small basins which are selected by the criteria of no less than 5 years of annual records. In addition, it is then secondly verified against in situ observations from FLUXNET tower data (Baldocchi et al., 2001).
Figure 15Validation of MEaSUREs ET estimates against inferred ET (PR) over large (25), medium (169), and small (813) basins.
The precipitation used in computing the inferred ET is from GPCC, which is a gridded rain gauge analysis that merges around 67 000 gauge measurements globally (Schneider et al., 2014). The observed runoff, R, is from the same sources as used in Sect. 4.1. As shown in Fig. 15, the correlation coefficients between MEaSUREs CDR ET and inferred ET from observed PR are 0.97, 0.96, and 0.76 for those large, medium, and small basins. For some of the MEaSUREs CDR ET over medium basins, particularly wetter basins, they do not match well with the observed PR and are attributed to the effects of water management on our estimates of R. Essentially, if the CDR runoff that does not reflect water management is too large, then the estimates of ET will be too low, which is what is seen in Fig. 15b. The small basins show worse agreements with the inferred ET with 20 % bias (vs. 4 % bias for large basins and 4 % bias for medium basins in Fig. 15) that we attribute to scaling effects in estimating R than the medium basins.
Figure 16(a) Distributions of 47 flux towers over the globe; (b) validation of MEaSUREs ET estimates against FLUXNET observations. Different colors in panel (b) represent different International Geosphere-Biosphere Programme (IGBP) land cover types (Loveland et al., 2000); the sizes of dots represents the data record length (ranging from 1 to 10 years) from FLUXNET. IGBP land cover types include (1) CRO: cropland; (2) CSH: closed shrubland; (3) DBF: deciduous broadleaf forest; (4) EBF: evergreen broadleaf forest; (5) ENF: evergreen needleleaf forest; (6) GRA: grassland; (7) MF: mixed forest; (8) WET: permanent wetland; (9) WSA: woody savanna.
Table 5Flux tower information list. From left to right: the station name, available data time span, latitude, longitude, and IGBP land cover type (Loveland et al., 2000).
The ET estimates from the CDR are further assessed by comparing the grid-scale estimates with observations from 47 FLUXNET towers, which measure the turbulent latent flux using the eddy covariance technique. Those 47 flux towers were selected based on data availability (Michel et al., 2015) in terms of the meteorological variables and radiations, and the final selection represents a variety of biomes and dry/wet climate regimes. The raw data are at 3-hourly resolution and the most complete data were recorded during the warm seasons. Therefore, the comparisons are made only over the summer (warm) seasons by filtering out those years with less than 70 % data records based on the data availability at each tower. The 47 flux towers are located in four continents (North America, Europe, Asia, and Africa) as shown under different land covers that are defined by the International Geosphere-Biosphere Programme (IGBP; Loveland et al., 2000) in Fig. 16a. The tower stations are also described in Table 5. The validations against the FLUXNET observations are only carried out during the warm season when ET is more dominant and when there are fewer missing values. From the 47 flux towers, we found out that our ET estimates from the CDR are in high agreement with FLUXNET observations under the land cover types WSA (woody savanna – one station in Africa and the other one in the US) and EBF (evergreen broadleaf forest – only one station in France; Fig. 16b). In general, our CDR ET matches well with the observation with a correlation coefficient of around 0.77 and a bias ratio of 11 % except for some over estimations for the stations, most of which are under the land cover types CRO (cropland) and ENF (evergreen needleleaf forest; Fig. 16b). The positive bias of MEaSUREs CDR ET relative to FLUXNET observations is attributed to the tower management – during the rainy days in the summer the flux towers are usually turned off and thus underestimate the actual ET during the rainy days.
6 Discussion and future work
A well-constrained global inventory of the historical terrestrial water budget at fine resolution is essential to understanding the terrestrial hydrological cycle, its partitioning into individual components, and their variability at regional to global scales. In this study, the consistency and uncertainties of multiple hydrological data products are investigated, with precipitation found to have the highest consistency among the available products at both continental and basin scales compared to ET and TWSC. Data products from multiple sources that include in situ and satellite remote sensing observations, land surface model estimates, and reanalysis model outputs are combined to create homogenized terrestrial water budget estimates at 0.5 spatial and monthly temporal scales for the period 1984–2010. This long-term water budget data record has both spatial and temporal consistency, and is part of NASA's ESDRs program. The CDR data set was created by applying a water balance closure constraint using the CKF data assimilation technique of Pan and Wood (2006). For the individual data products, their ensemble mean is taken as the best estimate for the variable, and the ensemble spread against the ensemble mean as a proxy for their uncertainty. These estimates of the mean and uncertainty for the product are important assumptions underlying the development of data records. The CDR is validated against ground observations, i.e., GRDC, USGS, and Australian Land and Water Resources Audit project for runoff and FLUXNET for ET, which seem not independent from the merged and constrained CDR. However, data developed from either satellite remote sensing or models are often calibrated against “ground truth”, i.e., gauge observations, which are also the best ground truth that are normally used for verification. The ground truth is in no way independent from the remote sensing or modeled data, particularly for global data validation. Nevertheless, we believe these data records represent the best current knowledge for the global terrestrial water budget at the 0.5 and monthly scale over the 27-year period of 1984–2010.
Additionally, the developed data set allows for the documentation of the water budget at continental and basin scales resulting in a better depiction across multiple scales. The attribution analysis of the budget imbalance (non-closure) shows that ET receives the largest adjustment in most regions – particularly in Africa and Oceania. In contrast, runoff tends to receive the lowest attribution of the non-closure error, in part due to the calibrated land surface model estimates from 43 large global basins. TWSC receives larger adjustments in high-latitude regions, which we attribute to the impacts from snowmelt and seasonal dynamics of wetlands and small lakes that are not well represented in VIC LSM.
Currently, the authors are carrying out another study in comparing the CDR water budget records against around 20 high-impacted studies at multiple spatial scales (i.e., continental and global). This ongoing study is the first attempt to gather and compare global water budget estimates from studies as early as 1974 (i.e., Budyko, 1974) to the current study in order to provide a comprehensive overview of global water budget estimates, even though the studies focused on different periods using different data sources and have different global coverage (e.g., some of them exclude Antarctica, Greenland, or both). Figure S6 in Supplement II gives an example comparison with Trenberth et al. (2007; T2007 hereafter), which estimated the water budget during 1979–2000 and excluded Antarctica. The total precipitation is quite close to this study (114×103 km3 year−1) to T2007 (113×103 km3 year−1). By converting the water budgets into mm year−1 based on the global coverage information available in each of those studies, the long-term mean precipitation is around 28 mm year−1 (vs. 32 mm year−1 in the CDR and 27 mm year−1 from T2007), ET is around 78 mm year−1 (vs. 78 mm year−1 in the CDR and 77 mm year−1 from T2007), and runoff is around 47 mm year−1 (vs. 46 mm year−1 in this study and T2007). Figure S7 further provides an example of how the CDR captured the 1998–1999 US drought in terms of Standardized Precipitation Index (SPI) and drought extents calculated from CDR precipitation. The 6-month SPI exceeds the threshold of exceptional drought (which is defined by the US Drought Monitor system; http://droughtmonitor.unl.edu/AboutUSDM/DroughtClassification.aspx) around the year 1998. The CDR developed in this study, as a time series of measurements of sufficient length, consistency, and continuity, can also be applied to determine climate variability. Figure S8 in Supplement II, as an example, provides the interannual variability of the available water (P−ET) over the globe during the CDR period 1984–2010.
The major challenge for the creation of ESDR/CDR of the terrestrial water budget (and potentially the terrestrial surface energy budget) is the lack of “ground truth” observations that can serve as reference data sets for bias correction. The sparseness of the observations in accessible data archives (e.g., GRDC for river discharge, GPCC for precipitation and publicly accessible and quality-controlled FLUXNET data) is both a scientific and institutional challenge. Many additional gauge locations and data records exist and could contribute to the development of improved CDR and our understanding of climate variability and change but have not been made available. Besides these operationally focused observations, the relative inaccessibility of global FLUXNET tower observations is also disappointing, although this situation has improved over the recent past. Even though there are over 650 towers in 30 regional networks covering five continents, the free fair-use subset of the La Thuile FLUXNET data set (which has been harmonized, standardized, and gap filled) contains only 154 stations, of which 47 were deemed useful for the validation presented here, based on quality assessment (e.g., closure of the energy budget) and record length. Data availability and accessibility challenges need to be at the top of the agendas of the world's major space agencies (ESA, NASA, JAXA), international data programs such as the Global Climate Observing System (GCOS), the GEWEX project of WCRP, Global Earth Observing System of Systems (GEOSS), and international agencies like the World Meteorological Organization. The “standard” statements and claims about “free and open access” to climate data from these programs have not resulted in improved access. If the needed improvements to CDR are to occur, and must occur to better assess the impacts from global environmental change, improved in situ data archiving and access by the scientific community are imperative for a more accurate analysis of climate variability and change.
The CDR developed in this study – the global terrestrial water budget at 0.5 monthly for 1984–2010 – is currently archived on our public server, available at http://stream.princeton.edu:8080/opendap/MEaSUREs/WC_MULTISOURCES_WB_050/, and will be formally archived at the NASA Goddard Earth Science Data and Information Services Center (GES DISC) for the future use of climate and water management communities, and will advance our understanding climate variability and trends at multiple spatial scales.
As the authors are aware, essential directions in global water and energy cycle research are towards improved understanding historical climate, benchmarking future climate predictions, validating models, and improving the understanding of the interactions among land, ocean, and atmosphere hydrospheres. Future work will be targeted at extending the data sets to even longer periods, and at finer resolutions, by combining upcoming new satellite missions with the analysis and predictions from more advanced modeling systems.
Data availability
Data availability.
Supplement
Supplement.
Competing interests
Competing interests.
The authors declare that they have no conflict of interest.
Special issue statement
Special issue statement.
This article is part of the special issue “Observations and modeling of land surface water and energy exchanges across scales: special issue in Honor of Eric F. Wood”. It is a result of the Symposium in Honor of Eric F. Wood: Observations and Modeling across Scales, Princeton, New Jersey, USA, 2–3 June 2016.
Acknowledgements
Acknowledgements.
This study was made possible under the support of NASA grants NNX08AN40A (Developing Consistent Earth System Data Records for the Global Terrestrial Water Cycle), under NASA's MEaSUREs program, and NNX09AK35G (Development and diagnostic analysis of a multi-decadal global evaporation product for NEWS), under the NASA NEWS program. The support from these programs is highly appreciated.
Edited by: Reed Maxwell
Reviewed by: two anonymous referees
References
Adler, R. F., Kidd, C., Petty, G., Morissey, M., and Goodman, H. M.: Intercomparison of global precipitation products: The third Precipitation Intercomparison Project (PIP-3), B. Am. Meteorol. Soc., 82, 1377–1396, 2001.
Andersen, J., Refsgaard, J. C., and Jensen, K. H.: Distributed hydrological modelling of the Senegal River Basin-model construction and validation, J. Hydrol., 247, 200–214, 2001.
Baldocchi, D., Falge, E., Gu, L., Olson, R., Hollinger, D., Running, S., Anthoni, P., Bernhofer, C., Davis, K., Evans, R., Fuentes, J., Goldstein, A., Katul, G., Law, B., Lee, X., Malhi, Y., Meyers, T., Munger, W., Oechel, W., Paw, K. T., Pilegaard, K., Schmid, H. P., Valentini, R., Verma, S., Vesala, T., Wilson, K., and Wofsy, S.: FLUXNET: A New Tool to Study the Temporal and Spatial Variability of Ecosystem-Scale Carbon Dioxide, Water Vapor, and Energy Flux Densities, B. Am. Meteorol. Soc., 82, 2415–2434, https://doi.org/10.1175/1520-0477(2001)082<2415:FANTTS>2.3.CO;2, 2001.
Balsamo, G., Albergel, C., Beljaars, A., Boussetta, S., Brun, E., Cloke, H., Dee, D., Dutra, E., Muñoz-Sabater, J., Pappenberger, F., de Rosnay, P., Stockdale, T., and Vitart, F.: ERA-Interim/Land: a global land surface reanalysis data set, Hydrol. Earth Syst. Sci., 19, 389–407, https://doi.org/10.5194/hess-19-389-2015, 2015.
Baumgartner, A. and Reichel, E.: The world water balance: Mean annual global, continental and maritime precipitation evaporation and run-off, Elsevier Science Inc, ISBN-10:0444998586, ISBN-13:978-0444998583, 1975.
Berner, E. K. and Berner, R. A.: Global water cycle: geochemistry and environment, Prentice-Hall, ISBN-10:0133571955, ISBN-13:978-0133571950, 1987.
Berry, P. A. M., Salloway, M. K., Smith, R. G., and Benveniste, J.: Global inland water monitoring from satellite radar altimetry a glimpse into the future, IAHS-AISH publication, 104–109, 2011.
Betts, A. K., Ball, J. H., Bosilovich, M., Viterbo, P., Zhang, Y., and Rossow, W. B.: Intercomparison of water and energy budgets for five mississippi subbasins between ecmwf reanalysis (era-40) and nasa data assimilation office fvgcm for 1990–1999, J. Geophys. Res.-Atmos., 108, https://doi.org/10.1029/2002JD003127, 2003a.
Betts, A. K., Ball, J. H., and Viterbo, P.: Evaluation of the ERA-40 surface water budget and surface temperature for the Mackenzie River basin, J. Hydrometeorol., 4, 1194–1211, 2003b.
Betts, A. K., Ball, J. H., Viterbo, P., Dai, A., and Marengo, J.: Hydrometeorology of the Amazon in ERA-40, J. Hydrometeorol., 6, 764–774, 2005.
Birkett, C. M., Mertes, L. A. K., Dunne, T., Costa, M. H., and Jasinski, M. J.: Surface water dynamics in the Amazon Basin: Application of satellite radar altimetry, J. Geophys. Res.-Atmos., 107, https://doi.org/10.1029/2001JD000609, 2002.
Bontemps, S., Defourny, P., Bogaert, E. V., Arino, O., Kalogirou, V., and Perez, J. R.: GLOBCOVER 2009-Products description and validation report, ISBN-10:0133571955, ISBN-13:978-0133571950 2011.
Bookhagen, B. and Burbank, D. W.: Toward a complete Himalayan hydrological budget: Spatiotemporal distribution of snowmelt and rainfall and their impact on river discharge, J. Geophys. Res.-Earth, 115, F03019, doi:10.1029/2009JF001426, 2010.
Budyko M. L., Climate and life. International Geophysics Series, 18, ISBN-10:0121394506, ISBN-13:978-0121394509, 1974.
Bytheway, J. L. and Kummerow, C. D.: Inferring the uncertainty of satellite precipitation estimates in data-sparse regions over land, J. Geophys. Res.-Atmos., 118, 9524–9533, https://doi.org/10.1002/jgrd.50607, 2013.
Durand, M., Fu, L.-L., Lettenmaier, D. P., Alsdorf, D. E., Rodriguez, E., and Esteban-Fernandez, D.: The surface water and ocean topography mission: Observing terrestrial surface water and oceanic submesoscale eddies, Proc. IEEE, 98, 766–779, 2010.
Famiglietti, J. S.: The global groundwater crisis, Nat. Clim. Change, 4, 945–948, https://doi.org/10.1038/nclimate2425, 2014.
Fisher, J. B., Tu, K. P., and Baldocchi, D. D.: Global estimates of the land–atmosphere water flux based on monthly AVHRR and ISLSCP-II data, validated at 16 FLUXNET sites, Remote Sens. Environ., 112, 901–919, 2008.
Funk, C. C., Peterson, P. J., Landsfeld, M. F., Pedreros, D. H., Verdin, J. P., Rowland, J. D., Romero, B. E., Husak, G. J., Michaelsen, J. C., and Verdin, A. P.: A quasi-global precipitation time series for drought monitoring, U.S. Geological Survey Data Series, 832, 4 p., https://doi.org/10.3133/ds832, 2014.
Gao, H., Tang, Q., Ferguson, C. R., Wood, E. F., and Lettenmaier, D. P.: Estimating the water budget of major US river basins via remote sensing, Int. J. Remote Sens., 31, 3955–3978, 2010.
Guan, K., Pan, M., Li, H., Wolf, A., Wu, J., Medvigy, D., Caylor, K. K., Sheffield, J., Wood, E. F., and Malhi, Y.: Photosynthetic seasonality of global tropical forests constrained by hydroclimate, Nat. Geosci., 8, 284–289, 2015.
Haddeland, I., Clark, D. B., Franssen, W., Ludwig, F., Voß, F., Arnell, N. W., Bertrand, N., Best, M., Folwell, S., and Gerten, D.: Multimodel estimate of the global terrestrial water balance: setup and first results, J. Hydrometeorol., 12, 869–884, 2011.
Hong, Y., Gochis, D., Cheng, J.-t., Hsu, K.-l., and Sorooshian, S.: Evaluation of PERSIANN-CCS rainfall measurement using the NAME event rain gauge network, J. Hydrometeorol., 8, 469–482, 2007.
Huffman, G. J., Bolvin, D. T., Nelkin, E. J., Wolff, D. B., Adler, R. F., Gu, G., Hong, Y., Bowman, K. P., and Stocker, E. F.: The TRMM multisatellite precipitation analysis (TMPA): Quasi-global, multiyear, combined-sensor precipitation estimates at fine scales, J. Hydrometeorol., 8, 38–55, 2007.
Huffman, G. J., Adler, R. F., Bolvin, D. T., and Nelkin, E. J.: The TRMM multi-satellite precipitation analysis (TMPA), in: Satellite rainfall applications for surface hydrology, Springer, 3–22, 2010.
Joyce, R. J., Janowiak, J. E., Arkin, P. A., and Xie, P.: CMORPH: A method that produces global precipitation estimates from passive microwave and infrared data at high spatial and temporal resolution, J. Hydrometeorol., 5, 487–503, 2004.
Landerer, F. W. and Swenson, S. C.: Accuracy of scaled GRACE terrestrial water storage estimates, Water Resour. Res., 48, W04531, doi:10.1029/2011WR011453, 2012.
Long, D., Longuevergne, L., and Scanlon, B. R.: Uncertainty in evapotranspiration from land surface modeling, remote sensing, and GRACE satellites, Water Resour. Res., 50, 1131–1151, 2014a.
Long, D., Shen, Y., Sun, A., Hong, Y., Longuevergne, L., Yang, Y., Li, B., and Chen, L.: Drought and flood monitoring for a large karst plateau in Southwest China using extended GRACE data, Remote Sens. Environ., 155, 145–160, https://doi.org/10.1016/j.rse.2014.08.006, 2014b.
Loveland, T. R., Reed, B. C., Brown, J. F., Ohlen, D. O., Zhu, Z., Yang, L., and Merchant, J. W.: Development of a global land cover characteristics database and IGBP DISCover from 1 km AVHRR data, Int. J. Remote Sens., 21, 1303–1330, 2000.
Luo, L., Wood, E. F., and Pan, M.: Bayesian merging of multiple climate model forecasts for seasonal hydrological predictions, J. Geophys. Res.-Atmos., 112, D10102, doi:10.1029/2006JD007655, 2007.
Michel, D., Jiménez, C., Miralles, D. G., Jung, M., Hirschi, M., Ershadi, A., Martens, B., McCabe, M. F., Fisher, J. B., Mu, Q., Seneviratne, S. I., Wood, E. F., and Fernández-Prieto, D.: The WACMOS-ET project – Part 1: Tower-scale evaluation of four remote-sensing-based evapotranspiration algorithms, Hydrol. Earth Syst. Sci., 20, 803–822, https://doi.org/10.5194/hess-20-803-2016, 2016.
Miralles, D. G., De Jeu, R. A. M., Gash, J. H., Holmes, T. R. H., and Dolman, A. J.: Magnitude and variability of land evaporation and its components at the global scale, Hydrol. Earth Syst. Sci., 15, 967–981, https://doi.org/10.5194/hess-15-967-2011, 2011.
Morel, P.: Why GEWEX? The agenda for a global energy and water cycle research program, GEWEX News, 11, 7–11, 2001.
Mu, Q., Heinsch, F. A., Zhao, M., and Running, S. W.: Development of a global evapotranspiration algorithm based on MODIS and global meteorology data, Remote Sens. Environ., 111, 519–536, 2007.
NASA NEWS Science Integration Team: Predicting Energy and Water Cycle Consequences of Earth System Variability and Change, 89, available at: http://news.cisc.gmu.edu/doc/NEWS_implementation.pdf, 2007.
National Research Council: Climate Data Records from Environmental Satellites: Interim Report, Committee on Climate Data Records from NOAA Operational Satellites, The National Academies Press, Washington, DC, 150 pp., 2004.
Nijssen, B., Shukla, S., Lin, C., Gao, H., Zhou, T., Ishottama, Sheffield, J., Wood, E. F., and Lettenmaier, D. P.: A Prototype Global Drought Information System Based on Multiple Land Surface Models, J. Hydrometeorol., 15, 1661–1676, https://doi.org/10.1175/JHM-D-13-090.1, 2014.
Oki, T. and Kanae, S.: Global hydrological cycles and world water resources, Science, 313, 1068–1072, 2006.
Pan, M. and Wood, E. F.: Data assimilation for estimating the terrestrial water budget using a constrained ensemble Kalman filter, J. Hydrometeorol., 7, 534–547, 2006.
Pan, M., Sahoo, A. K., Troy, T. J., Vinukollu, R. K., Sheffield, J., and Wood, E. F.: Multisource estimation of long-term terrestrial water budget for major global river basins, J. Climate, 25, 3191–3206, 2012.
Peel, M. C., Chiew, F. H. S., Western, A. W., and McMahon, T. A.: Extension of unimpaired monthly streamflow data and regionalisation of parameter values to estimate streamflow in ungauged catchments, Report to the National Land and Water Resources Audit, 2000.
Reichle, R. H., Koster, R. D., De Lannoy, G. J. M., Forman, B. A., Liu, Q., Mahanama, S. P. P., and Touré, A.: Assessment and Enhancement of MERRA Land Surface Hydrology Estimates, J. Climate, 24, 6322–6338, https://doi.org/10.1175/JCLI-D-10-05033.1, 2011.
Rienecker, M. M., Suarez, M. J., Gelaro, R., Todling, R., Bacmeister, J., Liu, E., Bosilovich, M. G., Schubert, S. D., Takacs, L., Kim, G.-K., Bloom, S., Chen, J., Collins, D., Conaty, A., da Silva, A., Gu, W., Joiner, J., Koster, R. D., Lucchesi, R., Molod, A., Owens, T., Pawson, S., Pegion, P., Redder, C. R., Reichle, R., Robertson, F. R., Ruddick, A. G., Sienkiewicz, M., and Woollen, J.: MERRA: NASA's Modern-Era Retrospective Analysis for Research and Applications, J. Climate, 24, 3624–3648, https://doi.org/10.1175/JCLI-D-11-00015.1, 2011.
Rodell, M., Velicogna, I., and Famiglietti, J. S.: Satellite-based estimates of groundwater depletion in India, Nature, 460, 999–1002, 2009.
Rodell, M., Chambers, D. P., and Famiglietti, J. S.: Groundwater and terrestrial water storage, B. Am. Meteorol. Soc., 97, S30–S31, 2011.
Rodell, M., Beaudoing, H. K., L'Ecuyer, T. S., Olson, W. S., Famiglietti, J. S., Houser, P. R., Adler, R., Bosilovich, M. G., Clayson, C. A., and Chambers, D.: The observed state of the water cycle in the early 21st century, J. Climate, 28, 8289–8318, doi:10.1175/JCLI-D-14-00555.1, 2015.
Sahoo, A. K., Pan, M., Troy, T. J., Vinukollu, R. K., Sheffield, J., and Wood, E. F.: Reconciling the global terrestrial water budget using satellite remote sensing, Remote Sens. Environ., 115, 1850–1865, 2011.
Sakumura, C., Bettadpur, S., and Bruinsma, S.: Ensemble prediction and intercomparison analysis of GRACE time-variable gravity field models, Geophys. Res. Lett., 41, 1389–1397, 2014.
Schneider, U., Becker, A., Finger, P., Meyer-Christoffer, A., Ziese, M., and Rudolf, B.: GPCC's new land surface precipitation climatology based on quality-controlled in situ data and its role in quantifying the global water cycle, Theor. Appl. Climatol., 115, 15–40, https://doi.org/10.1007/s00704-013-0860-x, 2014.
Sheffield, J., Goteti, G., and Wood, E. F.: Development of a 50-year high-resolution global dataset of meteorological forcings for land surface modeling, J. Climate, 19, 3088–3111, 2006.
Sheffield, J. and Wood, E. F.: Characteristics of global and regional drought, 1950–2000: Analysis of soil moisture data from off-line simulation of the terrestrial hydrologic cycle, J. Geophys. Res.-Atmos., 112, D17115, https://doi.org/10.1029/2006JD008288, 2007.
Sheffield, J., Ferguson, C. R., Troy, T. J., Wood, E. F., and McCabe, M. F.: Closing the terrestrial water budget from satellite remote sensing, Geophys. Res. Lett., 36, L07403, doi:10.1029/2009GL037338, 2009.
Simmons, A., Uppala, S., Dee, D., and Kobayashi, S.: ERA-Interim: New ECMWF reanalysis products from 1989 onwards, ECMWF Newsletter, 110, 26–35, 2006.
Stisen, S., Jensen, K. H., Sandholt, I., and Grimes, D. I. F.: A remote sensing driven distributed hydrological model of the Senegal River basin, J. Hydrol., 354, 131–148, 2008.
Tang, Q., Gao, H., Yeh, P., Oki, T., Su, F., and Lettenmaier, D. P.: Dynamics of terrestrial water storage change from satellite and surface observations and modeling, J. Hydrometeorol., 11, 156–170, 2010.
Tapley, B. D., Bettadpur, S., Ries, J. C., Thompson, P. F., and Watkins, M. M.: GRACE measurements of mass variability in the Earth system, Science, 305, 503–505, 2004.
Thomas, A. C., Reager, J. T., Famiglietti, J. S., and Rodell, M.: A GRACE-based water storage deficit approach for hydrological drought characterization, Geophys. Res. Lett., 41, 1537–1545, 2014.
Tian, Y., and Peters-Lidard, C. D.: A global map of uncertainties in satellite-based precipitation measurements, Geophys. Res. Lett., 37, L24407, doi:10.1029/2010GL046008, 2010.
Trenberth, K. E., Smith, L., Qian, T., Dai, A., and Fasullo, J.: Estimates of the global water budget and its annual cycle using observational and model data, J. Hydrometeorol., 8, 758-769, 2007.
Trenberth, K. E. and Fasullo, J. T.: North American water and energy cycles, Geophys. Res. Lett., 40, 365–369, 2013a.
Trenberth, K. E. and Fasullo, J. T.: Regional Energy and Water Cycles: Transports from Ocean to Land, J. Climate, 26, 7837–7851, https://doi.org/10.1175/JCLI-D-13-00008.1, 2013b.
Vinukollu, R. K., Meynadier, R., Sheffield, J., and Wood, E. F.: Multi-model, multi-sensor estimates of global evapotranspiration: climatology, uncertainties and trends, Hydrol. Proc., 25, 3993–4010, https://doi.org/10.1002/hyp.8393, 2011.
Voss, K. A., Famiglietti, J. S., Lo, M., Linage, C., Rodell, M., and Swenson, S. C.: Groundwater depletion in the Middle East from GRACE with implications for transboundary water management in the Tigris-Euphrates-Western Iran region, Water Resour. Res., 49, 904–914, 2013.
Wahr, J., Swenson, S., Zlotnicki, V., and Velicogna, I.: Time-variable gravity from GRACE: First results, Geophys. Res. Lett., 31, L11501, doi:10.1029/2004GL019779, 2004.
Wang, S., Huang, J., Li, J., Rivera, A., McKenney, D. W., and Sheffield, J.: Assessment of water budget for sixteen large drainage basins in Canada, J. Hydrol., 512, 1–15, 2014.
Yong, B., Liu, D., Gourley, J. J., Tian, Y., Huffman, G. J., Ren, L., and Hong, Y.: Global view of real-time TRMM Multi-satellite Precipitation Analysis: implication to its successor Global Precipitation Measurement mission, B. Am. Meteorol. Soc., 96, 283–296, doi:10.1175/BAMS-D-14-00017.1, 2014.
Zhang, K., Kimball, J. S., Nemani, R. R., and Running, S. W.: A continuous satellite-derived global record of land surface evapotranspiration from 1983 to 2006, Water Resour. Res., 46, W09522, doi:10.1029/2009WR008800, 2010.
Zhang, K., Kimball, J. S., Nemani, R. R., Running, S. W., Hong, Y., Gourley, J. J., and Yu, Z.: Vegetation Greening and Climate Change Promote Multidecadal Rises of Global Land Evapotranspiration, Sci. Rep., 5, 15956, https://doi.org/10.1038/srep15956, 2015.
|
{}
|
# Harmonic Series sums to One?
I don't get it. How does $$\sum_{j=1}^m\frac{1}{m}=1$$? This looks like a harmonic series. I got this from brilliant.org, the website that trains students for AMC, AIME, Olympiad type of problems. This was the original problem: and this was the solution:
I just don't get this part: I deeply apologize if its something trivial. Its been a while since my last math class in linear algebra. Any hint would be appriciated!
• What are the values of the sums$$\sum_{j=1}^5\frac15\quad\sum_{j=1}^{97}\frac1{97}$$for example? Why is using the letter $m$ any different? – Peter Foreman Aug 30 '19 at 18:38
• I get it now, it was a little confusing. I thought the sum $\sum_{j=1}^5 \frac{1}{m}$ would be $\frac{1}{1}+\frac{1}{2]+\frac{1}{3}+...+\frac{1}{5}$ that's why. It's actually that the bottom is constant. I'm such a dummy, I'll accept the answer below soon. Thanks so much for helping though. – Kenneth Dang Aug 30 '19 at 18:45
$$\sum_{j=1}^m\frac1m\ne\sum_{j=1}^m\frac1j.$$
$$\sum_{j=1}^m\frac1m=\overbrace{\frac1m+\frac1m+\cdots+\frac1m}^{m\text{ times}}=1$$
|
{}
|
Confluence diagnostics: node left or joined the cluster
お困りですか?
アトラシアン コミュニティをご利用ください。
コミュニティに質問
These diagnostic alerts indicate when a node leaves the cluster, and when a node joins the cluster.
アプリケーション ログに次のエラーが記録される。
INFO ; HAZELCAST ; HAZELCAST-1001 ; Node joined the cluster ; not-detected ; ; ; {"member":"Member [...]:... - ..."}
WARNING ; HAZELCAST ; HAZELCAST-1002 ; Node left the cluster ; not-detected ; ; ; {"member":"Member [...]:... - ..."}
These alerts are enabled by default in Confluence 6.11 or later.
この問題の解決方法
There's no action required for these alerts.
However, this information may be helpful when determining the chain of events that lead to an outage or other issue.
このアラートを無視した場合
These alerts are for information only. There's no action required.
|
{}
|
# An everywhere discontinuous function
As usual, $\mathbb R[x]$ denotes the vector space of polynomials in one variable with real coefficients. It is easy enough (and a good exercise for beginners) to prove that the function $$P\to\|P\|=\max_{x\in [0,1]} |P(x)|$$ of $\mathbb R[x]$ in $\mathbb R_+$ defines a norm on $\mathbb R[x]$ (then we can speak about continuity).
Prove that for all $x_0\in \mathbb R; x_0\gt 1$ the function $f_{x_0}$ of $\mathbb R[x]$ in $\mathbb R$ defined by $$f_{x_0}(P)= P(x_0)$$ is discontinuous in every point $P\in \mathbb R[x]$
• It is false. If $x_0\in[0,1]$, then $f_{x_0}$ is continuous – sinbadh Mar 16 '16 at 14:51
• This is not true. In fact $f_{x_0}$ is continuous if and only if $x_0\in[0,1]$. – David C. Ullrich Mar 16 '16 at 14:52
• If you want a norm such that every $f_{x_0}$ is discontinuous you could for example define $||P||_1=\int_0^1|P(t)|\,dt$. – David C. Ullrich Mar 16 '16 at 14:53
• I have edited the lapsus. Thank you very much. – Piquito Mar 16 '16 at 15:03
• Pay attention please before deny. The question is true now. – Piquito Mar 16 '16 at 15:06
Lemma: every continuous linear functional on a normed vector space is bounded, i.e. it has a finite operator norm.
Now the idea is that there are continuous functions which vanish on $[0,1]$ and are large at $x_0$, and therefore there are polynomials which are close to zero on $[0,1]$ but are large at $x_0$. This conclusion follows by Weierstrass' theorem. For details, let $f(x)=0$ on $[0,1]$, $1$ at $x_0$, and linear in between (the linearity is not important). Find a polynomial $p_n$ which is uniformly within $1/n$ of $f$. Then $P(p_n)=1$, but $\| p_n \| \leq 1/n$. Thus $\| P \|$, if $P$ were continuous, would need to be at least $n$, but this is impossible.
• Good. Whitout use of explicit theory, the sequence $\{P_n\}_{n\in \mathbb N}$ defined by $P_n(x)=(\frac xa)^n$; $1\lt a\lt x_0$ is such that $||P_n||=(\frac 1a)^n\to 0$ but $P_n(x_0)=(\frac {x_0}{a})^n\to \infty$ – Piquito Mar 16 '16 at 15:51
• @Piquito Nice, that example does indeed work well. It also exposes the important fact that these examples require arbitrarily large degree. With a fixed maximum degree, the space is complete and homeomorphic to some $\mathbb{R}^n$, so none of these phenomena can occur. – Ian Mar 16 '16 at 16:04
• In finite dimension there are not examples of discontinuous linear function. In other words, what you say yourself in your comment. Best regards. – Piquito Mar 16 '16 at 17:01
|
{}
|
Markov-Chain transition probabilities for 3 variables
I am a bit confused as I need to calculate the Markov-Chain transition probabilites for 3 variables.
Example data, let's assume a sequence of letters at specific and progressively-constant time steps:
Q
Q
E
Q
C
C
E
What are my transitional probabilities?
My (wrong) understanding is:
P(Q|Q) = 1
P(Q|E) = 1
P(Q|C) = 0
P(E|E) = 0
P(E|C) = 1
P(E|Q) = 1
P(C|C) = 1
P(C|E) = 0
P(C|Q) = 1
And therefore my (wrong) transition matrix will be:
Q E C
Q 1 1 1
E 1 0 0
C 0 1 1
note row sums are not = 1
What am I missing? The same approach works with 2 variables and here it seems that I need to divide each row by the number of probabilities > 0 to make the row sums =1.
Thanks
I'll give one example and hopefully it will be evident how it can be applied to the rest:
$P(X(t) = E | X(t-1) = Q)$ can be estimated empirically as "the percentage of instances of Q that are followed by E". So in this case, there are 3 instances of Q, one of which is followed by an instance of E, meaning that $P(X(t) = E | X(t-1) = Q)$ is equal to 1/3.
Side note: I think when you write P(E|Q), you mean the expression I wrote above, but it's probably better to write it how I did just to be clear that you mean "the probability of E coming directly after Q".
In any Transition Probability Matrix, the row sum must be equal to 1. Your Probabilities are incorrect. It should be
P(Q|Q) = 1/3
p(Q|E) = 1
P(Q|C) = 0
P(E|E) = 0
P(E|C) = 1/2
P(E|Q) = 1/3
P(C|C) = 1/2
P(C|E) = 0
P(C|Q) = 1/3
Hence the Transition Probability Matrix becomes :
Q E C
Q 1/3 1/3 1/3
E 1 0 0
C 0 1/2 1/2
|
{}
|
### Home > INT3 > Chapter 6 > Lesson 6.2.1 > Problem6-41
6-41.
1. Jenna wants to solve the equation . Homework Help ✎
1. She tries to eliminate the fractions by multiplying by 10. Help Jenna do this. What is her new equation?
2. Now what should she multiply by to eliminate the remaining fractions? Help Jenna do this and then solve for x. Check your answer.
Multiply each term by 10.
$\frac{30}{x-2}+8=\frac{10x}{x-2}$
What denominator remains?
She should multiply by (x − 2). x = 7
|
{}
|
# Homework Help: Exp as covering homomorphism for connected Lie group
1. May 16, 2012
### Sajet
1. The problem statement, all variables and given/known data
Let $H$ be a connected Lie group with Lie algebra $\mathfrak h$ such that $[\mathfrak h, \mathfrak h] = 0$. Show that:
$\exp: \mathfrak h \rightarrow H$
is the covering homomorphism.
---------
I am not really sure what I have to show here, specifically I don't know what they mean by "the covering homomorphism" which to me implies uniqueness but I can't find a theorem which implies that there exists a unique covering homomorphism in the given case.
Or maybe I'm merely supposed to show that this is both a homomorphism and a covering?
Share this great discussion with others via Reddit, Google+, Twitter, or Facebook
Can you offer guidance or do you also need help?
Draft saved Draft deleted
|
{}
|
Making the most of your MBA : The B-School Application
Check GMAT Club Decision Tracker for the Latest School Decision Releases http://gmatclub.com/AppTrack
It is currently 20 Jan 2017, 18:35
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in June
Open Detailed Calendar
Making the most of your MBA
Author Message
TAGS:
Hide Tags
Director
Joined: 10 Jun 2006
Posts: 624
Followers: 4
Kudos [?]: 54 [4] , given: 0
Show Tags
24 Jan 2009, 12:43
4
KUDOS
I found this interesting post at the following blog: http://pulyanithinks.blogspot.com/2009/ ... r-mba.html
As You Begin….
Write down your goals: What are the top 3-5 things you want to accomplish during your B-School run? Review these goals regularly – hang them on your wall or keep it in your planner.
Get organized: If you do not already have one, get yourself a Palm Pilot, Treo, or paper-based system (Franklin Planner) to keep track of your (1) schedule, (2) rolodex, and (3) to do list. The rolodex is especially important – as you meet people (classmates, interviewers, professors, alumni), enter them in so you have record. Entering their names also will enhance your ability to remember them – repetition is the mother of skill.
Keep your resume up to date at all times! There is no official “resume season” – you could be asked for it at anytime, and failure to present one may preclude you from a great opportunity. Before you get knee-deep in classes, add your last experience to your resume if you haven’t already.
Get good at small talk: The best way to engage someone in small talk is to (1) remember their name when they tell you and (2) ask them questions. I always have four questions ready to go for anyone I meet:
– Where are you from?
– Where do you/have you worked?
– Where are you living? (on campus/off campus)
Any of these four questions can lead you to “Level 2” questions: For example – “where are you from” leads to things like “I’ve never been there – what is it like?” or “did you like living there?” or “sounds like paradise on earth – do you want to return there?” etc.
Meet, Greet, and Meet Some More: Meet as many people as you can. Even if you do not regularly socialize with them, you will be surprised how your former classmates will welcome an email from you 10 years down the road, even if they barely remember you.
Keep Your Guard Up: One of the things I learned from my 1st year Negotiation class is that there are two broad sets of negotiators: Givers (“Pie Makers”) and Takers (“Zero Summers”). Givers are people with loads of integrity and are willing to work with you to create a bigger pie before negotiating on how to divvy it up. Zero Summers are in it for themselves; their sole purpose is to take as much as they can off the table. When a giver negotiates with another giver, there is huge potential for mutual benefit, but when a giver negotiates with a taker, the giver gets royally hosed every time. If you are, by nature, a giver, be very wary of the takers. (I typically assume someone is a taker until they prove they are giver.)
Don’t Become a Social Outcast: You are going to meet plenty of people who are just plain intimidating (Perhaps, you will recognize these people by the fact that they are going to pull up in Mercedes/BMW’s, talk about the CEOs they play golf with, and come from esteemed families/social circles). But there are plenty of people there who DO have things in common with you. Also, your social circles will develop over time, so don’t get freaked out if by the end of September, you have not found a group to hang out with.
Watch the Gossip: This probably won’t be an issue for you, but there will be plenty of opportunities (usually around beers) to make fun of people based on their classroom comments or other social gaffes. Try not to participate – it has a habit of coming back around, and you never know when you will have to work with that person on a team project.
Sometimes You Gotta Force Yourself: If you find yourself not feeling like going to a meeting, extracurricular, company brief, or social function, go anyway. I can’t tell you how many times I have dreaded going to one of these functions only to have learned something important or to have met someone that helped me down the road.
Today’s Professors are Tomorrow’s Colleagues: Meet your professors, get to know them. They value you – you keep them young and energized. Down the road, you will want to network with them. I did a horrible job of this.
The Classroom….
Go to Class: Not sure how classroom participation works at Berkeley, but at HBS it was a significant portion of the grade. Regardless, go to class – you will learn more that way. And don’t show up late – it is unprofessional and you will look bad – very bad. If it came down to being late or not going at all, I chose not going at all.
Recognize that you bring value to the table: Don’t be intimidated (as I was) by the I-bankers and Management Consultants. They will have really good, strategic work experience. But you have an equally relevant base of knowledge and experience. You have the advantage of having worked in an operational role, and knowing what works day to day amongst people who make the strategy happen. More importantly, not many have your experience in web marketing. Use these as your base.
Learn the jargon: You will find that the consultants and I-bankers will come in with a certain level of business savvy that you do not have (and I did not have going in to HBS). These people have operated at a more strategic level than you; learn from them. Listen how they talk and write down phrases you hear that sound good. Also write down phrases that sound equally ridiculous – you may be able to start a game of Business Bingo or even write a B-School comedy.
Have fun learning new stuff: You are paying a lot of money for this – focus less on grades, and more on learning.
Leverage other people’s knowledge: Form a study group with people with diverse backgrounds. Aside from the learning aspects, it is a good way to socialize.
Get the Wall Street Journal: Read it everyday, even if only for 15 minutes over coffee – it is the best way to get educated on business, and gives you tidbits to contribute to conversations in and out of class.
The Future….
View yourself as your own business: You officially work for YOU Inc. You are your own business. Figure out what things interest you (e.g. web marketing), and become THE expert on it. Then find opportunities (not necessarily “jobs”) where you can leverage your expertise. There will be times in the upcoming years where these opportunities will take the form of a job. Other times, these opportunities may take the form of a consulting/freelance role. Ultimately, these opportunities lead to the development of new skills and expertise, which open up a whole new set of opportunities.
Don’t settle for a job: Figure out something that interests you and go from there. I was interested in technology, I researched a couple of industries, and decided to go into telecom; I did not waste my time learning about other “hot industries” like Financial Services. Perhaps I could have made more money, but it did not interest me and I would not have been successful because I did not have the interest.
Don’t wait ‘til “interview season” to look for an opportunity: Devote time each week to learning about industries/companies/opportunities you are interested in. Get yourself a small notebook to jot down statistics or quotes that you can use in interviews, cover letters, and conversations. This is all part of the building expertise theme – you need to come across as an expert and part of that is industry knowledge.
Watch Your Burn Rate (a.k.a. Beware of Clothing Companies): In keeping with the “view yourself as your own business” theme, the #1 reason why startups fail is they run out of cash. So don’t let YOU Inc run out of cash. Rest assured, Hickey Freeman will try to sell you a $1500 suit, claiming that “you need this suit to be successful in your interviews.” Unless you are headed to Wall Street, a tailored$300-$400 suit will do you fine. The more general idea behind this is don’t do anything that unnecessarily raises your burn rate. Doing so will raise the amount you have to borrow and puts you in a deeper hole. Organize Your Finances: Consider getting Quicken – it is a good time to get your finances organized. Also, keep good paper records around your school loans. We bought a plastic, portable file box with 2-3 accordion files to go inside – one was for our School Loans and the others were for bills, etc. Just don’t do anything to screw up your credit rating. You will need it later. Debt Fears: Don’t worry about your debt – you will find a way to pay it off. We emerged from B-School/L-School with over$100K of debt; we paid it off, and we don’t exactly have Wall Street power jobs. (In fairness, they kinda do have big important jobs)
Current Student
Joined: 28 Feb 2008
Posts: 370
Location: New York, Paris
Schools: Wharton '11
Followers: 2
Kudos [?]: 6 [0], given: 2
Show Tags
24 Jan 2009, 13:52
Great Article! Kudos +1
Current Student
Joined: 27 Jul 2007
Posts: 872
Location: Sunny So Cal
Schools: CBS, Cornell, Duke, Ross, Darden
Followers: 12
Kudos [?]: 202 [0], given: 4
Show Tags
26 Jan 2009, 10:50
good find
+1
_________________
The True Value of YOUR MBA: http://gmatclub.com/forum/103-t64239
GMATClub's Unofficial Chartered Financial Analyst thread: http://gmatclub.com/forum/103-t63245
How Much Weight Does the CFA Carry with Admissions: http://gmatclub.com/forum/103-t68059
Current Student
Joined: 01 Apr 2008
Posts: 356
Schools: Chicago Booth '11
Followers: 5
Kudos [?]: 21 [0], given: 0
Show Tags
26 Jan 2009, 11:45
Good find, IHTG!
CEO
Joined: 15 Aug 2003
Posts: 3460
Followers: 67
Kudos [?]: 862 [0], given: 781
Show Tags
26 Jan 2009, 11:47
Fair use is about 500 words, maybe less
use links to original source and put the important info in quotes.
Re: Making the most of your MBA [#permalink] 26 Jan 2009, 11:47
Similar topics Replies Last post
Similar
Topics:
Making Your MBA Resume Stand Out For "Top Ten" Admissions 0 29 Jul 2015, 09:42
Making the most out of business school 2 25 Jun 2012, 15:49
3 Admitted? Get the most out of your MBA experience 183 06 Apr 2011, 12:02
6 How to make the most of your MBA experience 2 28 Dec 2008, 01:50
Making the most out of school visits 3 28 Jan 2008, 14:51
Display posts from previous: Sort by
|
{}
|
Unique number to represent a combination of 5 numbers 1-39
This is a related question poker hand representation and I got a great accepted answer. The problem can be reduced to a virtual 39 card deck that from what I can tell is 16 bits.
I have 5 numbers 1-39 with none repeating. I need to uniquely identity those 5 numbers with the smallest possible number and not order dependent on the 5. I am pretty sure the number is 2^16. From the accepted answer to the link you go 01, 10, 100, ... and then start folding back. Then use an XOR on that matrix. I tried but I cannot figure out when / how to start folding back.
What is the matrix / array?
This can't be encoded into 16 bits. The number of such hands is ${39 \choose 5} = 575757$ (is that a pretty number or what?), and $\lg(575757) \approx 19.1$, so a 5-card hand be encoded into 20 bits, but it can't be encoded in 16 bits.
|
{}
|
# Quark Matter 2019 - the XXVIIIth International Conference on Ultra-relativistic Nucleus-Nucleus Collisions
3-9 November 2019
Wanda Reign Wuhan Hotel
Asia/Shanghai timezone
## A Comprehensive Study of Bottomonium Production in Heavy-ion Collision
4 Nov 2019, 17:40
20m
Wanda Han Show Theatre & Wanda Reign Wuhan Hotel
#### Wanda Han Show Theatre & Wanda Reign Wuhan Hotel
Poster Presentation Heavy flavor and quarkonium
### Speaker
Mr Nikhil Hatwar (Birla Institute of Technology and Sciences, Pilani campus)
### Description
One of the important goals of heavy-ion collision experiments is to test the predictions of Quantum Chromodynamics(QCD). One such QCD prediction is the formation of Quark-Gluon Plasma(QGP) in the heavy-ion collision experiments. Quarkonia suppression has been suggested as a sign of formation of QGP in heavy ion collision, where it could exist as a transient state. We have developed a model to predict the suppression of quarkonia in QGP. It incorporates quarkonia production and suppression due to hot nuclear matter effects like color screening, collisional damping, gluonic dissociation and cold nuclear matter effect namely, nuclear shadowing. We have considered the possibility of regeneration of quarkonia due to correlated/uncorrelated quark and anti-quark pair in QGP medium. Since our model had employed Bjorken's (1+1)-dimensional hydrodynamics, we were restricted to predict suppression at mid-rapidity only. A complete rapidity dependence of suppression was also missing in our previous work. Both of these shortcomings are taken care by switching to (3+1)-dimensional relativistic hydrodynamics using MUSIC, a C++ code. MUSIC uses Kurganov-Tadmor algorithm to solve hydrodynamic conservation equations. In the present work, we compare the bottomonium suppression calculated by using our current (3+1)-dimensional expansion based model with the experimentally measured suppression, $R_{AA}$ as a function of centrality, transverse momentum, and rapidity.
### Primary authors
Mr Nikhil Hatwar (Birla Institute of Technology and Sciences, Pilani campus) Mr CAPTAIN RITURAJ SINGH (BIRLA INSTITUTE OF TECHNOLOGY AND SCIENCE) Madhukar Mishra (Birla Institute of Technology and Science Pilani, Pilani Campus)
### Presentation Materials
There are no materials yet.
|
{}
|
1. ## Definition of Connectedness
In my book, "Real Mathematical Analysis" by Charles Pugh, a subset S of a metric space M is disconnected if there exists a proper, clopen subset of Because if the proper, clopen subset of S is A, then S = A union A(compliment)
But for example on this wikipedia page: Connectedness - Wikipedia, the free encyclopedia
It says that S is not connected (so disconnected) if it is not the union of two disjoint non-empty open sets, which is basically saying that S has no proper, open set.
Which is the correct definition?
2. Originally Posted by JG89
In my book, "Real Mathematical Analysis" by Charles Pugh, a subset S of a metric space M is disconnected if there exists a proper, clopen subset of Because if the proper, clopen subset of S is A, then S = A union A(compliment)
But for example on this wikipedia page: Connectedness - Wikipedia, the free encyclopedia
It says that S is not connected (so disconnected) if it is not the union of two disjoint non-empty open sets, which is basically saying that S has no proper, open set.
To confuse you even more here is a third definition given by R L Moore in c1920.
Two nonempty sets are separated if neither one contains a point nor a limit point of the other.
(Of course, Prof Moore did not say nonempty- he did not believe empty point sets existed.)
A set is connected if and only if it is not the union of two separated sets.
But here is the kicker: All three are equivalent. You ought to prove it.
3. Thanks! I'll get on that.
4. Be careful, they are not exactly equivalent. The definition by Moore, quoted by Plato, is for connected sets. The other two definitions are for connected spaces. For example, $A= [0, 1]\cup [2, 3]$ is not a connected set in R (with the "usual" topology), but the only "clopen" sets in R are the empty set and R itself.
Of course, if you think of A, above, as a vector space in its own right, with the topology inherited from R, then [0,1] and [2,3] are "clopen" in A.
5. Halls, if I am studying metric spaces, should I use the clopen and open set definitions?
6. Originally Posted by JG89
I am studying metric spaces, should I use the clopen and open set definitions?
It is save to say that in many textbooks where metric spaces are the central focus, a space is said to be connected if it is not the union of two nonempty disjoint open sets. If a metric space is not connected, then the two separating sets are both open and closed (clopen).
7. Originally Posted by Plato
You ought to prove it.
Definition 1: A metric space $M$ is connected if there is not a proper, clopen subset of M.
Definition 2: A metric space $M$ is connected if $M$ is not the union, of two disjoint, non-empty open sets.
Proof that the two definitions are equivalent:
First suppose that $M$ is not the union of two disjoint, non-empty open sets. We prove that it does not contain a proper, clopen subset. Assume that indeed it does, $A \subset M$ is clopen. Then $M = A \cup A^c$ and $A \cap A^c = \varnothing$. Both $A$ and $A^c$ are clopen, and thus open, sets that are disjoint and non-empty (we know they are non-empty because they are assumed to be proper subsets), which contradicts the fact that M is not the union of two disjoint, non-empty open sets.
Now we must prove the other direction. We will prove it in contrapositive form. Suppose that $M$ is the union of two disjoint, non-empty open sets, $A$ and $B$. We will prove that one of them is clopen. Take either $A$ or $B$, say $A$. Suppose it is not closed. There exists a convergent sequence $a_n \rightarrow a$ of its elements whose limit is not in $A$, and thus must be in $B$. Remember that $B$ is open and so $\exists \delta > 0 : d(a, x) < \delta \Rightarrow x \in B$. Note that there exists at least one point in $\{x \in M : d(a,x) < \delta \}$ that's from the sequence $a_n$ since $a_n \rightarrow a$. But this contradicts the fact that $A \cap B = \varnothing$, and so $A$ is closed, and since it is also open, it is clopen. $A$ is also a proper subset, and so there exists a proper, clopen subset of $M$. QED
Is this fine?
8. That proof works.
In the first part you need to be sure that set A is a proper subset.
As for the second part recall that the complement of an open set is closed.
Therefore $M\setminus A=B$ so the set $B$ is both open and closed.
|
{}
|
# Finding triangle of maximal area given two vertices and condition on internal bi-sector
$$A=(2,5), B=(5,11)$$ and a point $$P$$ moves such that internal bi-sector of $$\angle APB$$ passes through $$(4,9).$$ The maximum area of $$\triangle APB\;$$ is __?
My attempt:
I found that $$(4,9)$$ lies on the line segment of $$AB.$$ Then I thought that due to the internal bi-sector condition it must have something to do with ellipse. Because, the two triangles formed by connecting focii of ellipse of perimeter are a congruent. So, if $$P$$ is a point on perimeter and $$Q= (4,9),$$ then $$\angle APQ= \angle BPQ.$$
I initially thought the optimum would happen when $$\angle ABP = \angle BAP =45^{\circ},$$ but seems like that leads to a situation where the constraint is not obeyed.
So, how exactly do I get the optimum area while holding the constraint true?
Hint. We know that area of any triangle is half the product of its base and height. As the base is fixed, we just need to maximize the height so as to maximize the area. Further, by angle bisector theorem, we know the ratio $$\frac{PA}{PB}=\text{ratio in which (4,9) divides AB}=k$$Thus, locus of point $$P$$ is the Apollonius Circle wrt to the ratio $$k$$. Thus, maximum height of $$P$$ from $$AB$$ will be the radius of Apollonius Circle.
Since the point $$D=(4,9)$$ belongs to $$AB$$, it is a foot of the bisector of $$\angle ACB$$, and a known expression for it is
\begin{align} D&=\frac{aA+bB}{a+b} \tag{1}\label{1} . \end{align}
With known coordinates we have
\begin{align} (4,9)&= \left(\frac{2a+5b}{a+b},\, \frac{5a+11b}{a+b} \right) \tag{2}\label{2} \end{align}
or a system
\begin{align} \frac{2a+5b}{a+b}&=4 \tag{3}\label{3} ,\\ \frac{5a+11b}{a+b}&=9 \tag{4}\label{4} \end{align}
with solution $$b=2a$$.
Expression for the area of $$\triangle ABC$$ squared is
\begin{align} S_2(a,b,c)&= \tfrac1{16}\,(4a^2b^2-(a^2+b^2-c^2)^2) \tag{5}\label{5} , \end{align}
so for $$b=2a$$ we must have \begin{align} S_2(a,2a,c)&= -\tfrac9{16}\,(a^2)^2+\tfrac58\,c^2\,a^2-\tfrac1{16}\,c^4 \tag{6}\label{6} . \end{align}
It's easy to find that
\begin{align} \max_a S_2(a,2a,c)&= S_2(a,2a,c)\Big|_{a=5} =225 , \end{align} hence the maximal area of such triangle is $$15$$.
• is there a name for this expression? – Buraian Aug 28 '20 at 16:42
• @Buraian: which one? – g.kov Aug 28 '20 at 16:51
• the "well known expression" for D – Buraian Aug 28 '20 at 17:12
• @Buraian: It's just expression for a foot of the bisector in terms of the coordinates of vertices and side lengths of triangle. And it is " known ", but not "well-known". – g.kov Aug 28 '20 at 17:24
• ...@Buraian: this expression directly follows from Angle bisector theorem. – g.kov Aug 28 '20 at 17:51
|
{}
|
Bug 44208 - Multi-character and some Unicode <mi>s italic in WebKit but not Firefox
Summary: Multi-character and some Unicode s italic in WebKit but not Firefox
Depends on: Blocks: Status: RESOLVED FIXED Product: WebKit Component: MathML Version: 528+ (Nightly build) Platform: All All Importance: P2 Normal Assigned To: Frédéric Wang URL: Keywords: 84019 99623 115789 124838 129097 Show dependency tree / graph
Reported: 2010-08-18 15:32 PST by Randall Farmer Modified: 2014-02-20 05:01 PST (History)
Attachments
testcase (227 bytes, text/html)
2013-12-19 08:42 PST, Frédéric Wang
no flags Details
Patch V1 (30.06 KB, patch)
2014-02-06 05:42 PST, Frédéric Wang
no flags Review Patch | Details | Formatted Diff | Diff
Patch V1 (30.17 KB, patch)
2014-02-06 05:46 PST, Frédéric Wang
no flags Review Patch | Details | Formatted Diff | Diff
Patch V2 (30.67 KB, patch)
2014-02-06 07:00 PST, Frédéric Wang
no flags Review Patch | Details | Formatted Diff | Diff
Patch V3 (30.58 KB, patch)
2014-02-06 10:40 PST, Frédéric Wang
no flags Review Patch | Details | Formatted Diff | Diff
Note You need to log in before you can comment on or make changes to this bug. Description From 2010-08-18 15:32:51 PST Firefox's behavior seems to be that if a contains two or more characters, it's rendered in normal rather than italic style and there's space around it, and if it's a single character, it's italicized and no space. So, for example, unitlessparameterization in the first table in the linked page appears as two non-italic words in Firefox and one italic word in WebKit. (I'm sorry in advance if some of these bugs are unhelpful or invalid according to the standard; I'm just going through a page that heavily uses MathML and documenting what looks off. And I absolutely love that WebKit is adding MathML support.) ------- Comment #1 From 2010-08-18 15:48:32 PST ------- Firefox also seems to space and not slant some symbols, like blackboard bold letters. The first math on the linked page is: "q(s): ℝ→ℝ." In Firefox, the blackbold bold R's are not slanted; in WebKit, they are. Firefox may have a whitelist of single characters that it slants in ; for example, it slanted Π but not ü when I put that text in Firebug. I suppose the source, or some document or other, likely has the precise answer. ------- Comment #2 From 2010-09-02 05:55:11 PST ------- About italicized with more than one character: we were aware of that problem and I'm currently working on it. For slanted ℝ, thanks to report it. I'll have a look at the MathML3 specification, but it seems natural that "special" characters shouldn't be italicized. Every help (testing, bug reports, etc) is welcomed! Thanks! ------- Comment #3 From 2010-09-02 06:14:58 PST ------- Here is the MathML3 recommendation answer about special characters: http://www.w3.org/TR/MathML3/chapter7.html#chars.BMP-SMP ------- Comment #4 From 2013-05-04 22:51:36 PST ------- Yes, invariant characters should not be italicized: https://en.wikipedia.org/wiki/Mathematical_Alphanumeric_Symbols See also bug 108778. ------- Comment #5 From 2013-12-19 08:42:57 PST ------- Created an attachment (id=219653) [details] testcase ------- Comment #6 From 2014-02-06 05:42:58 PST ------- Created an attachment (id=223326) [details] Patch V1 This patch fixes the most important situation (testcase and referenced URL) with multi-char vs sing-char mi. There are some known issues (cf tests): - things like x seems to produce the wrong spacing (or width?) - some dynamically created content are not italicized Since the WebKit's spacing is currently poor and since there are issues with dynamic updates, I think we can ignore these issues for now. Actually this is part of bug 124838 which prepares for the operator dictionary and further improvements ; so it might be worth taking this small piece and get feedback on it now rather than working on the big patch which seems to scare potential reviewers. Note that instead of CSS style this should rather do some code point remapping inside the text rendering code. However, it's ok for now: Gecko has used that approximation for a long time (see https://bugzilla.mozilla.org/show_bug.cgi?id=114365 and https://bugzilla.mozilla.org/show_bug.cgi?id=930504) and MathJax does not implement mi/mathvariant properly. ------- Comment #7 From 2014-02-06 05:46:29 PST ------- Created an attachment (id=223327) [details] Patch V1 Sorry, wrong patch. ------- Comment #8 From 2014-02-06 07:00:11 PST ------- Created an attachment (id=223331) [details] Patch V2 It looks like the changes to CMakelist.txt were lost when I extracted the code for from attachment 222977 [details]... ------- Comment #9 From 2014-02-06 09:06:16 PST ------- (From update of attachment 223331 [details]) View in context: https://bugs.webkit.org/attachment.cgi?id=223331&action=review > Source/WebCore/ChangeLog:8 > + This test prevents multi-char to be drawn in italic and prepare can you explain what further improvements it prepares for > Source/WebCore/rendering/mathml/RenderMathMLToken.cpp:2 > + * Copyright (C) 2013 Frédéric Wang (fred.wang@free.fr). All rights reserved. 2014 > Source/WebCore/rendering/mathml/RenderMathMLToken.cpp:29 > +#include "RenderMathMLToken.h" i think this line goes right below config.h it shouldn't need to be guarded by MATHML, since the header is also guarded by MATHML > Source/WebCore/rendering/mathml/RenderMathMLToken.cpp:65 > + m_text = element().textContent().stripWhiteSpace().simplifyWhiteSpace().impl(); Do you think we need to store m_text, or can a method always just return the right value, by doing ::text() { return element().textContent().stripWhiteSpace().simplifyWhiteSpace().impl();; } > Source/WebCore/rendering/mathml/RenderMathMLToken.cpp:75 > + if (tokenElement.hasTagName(MathMLNames::miTag)) { You can probably use an early return here instead. You might also want to assert that it has this tagName. It looks like this element is only created with a miTag > Source/WebCore/rendering/mathml/RenderMathMLToken.h:2 > + * Copyright (C) 2010 Frédéric Wang (fred.wang@free.fr). All rights reserved. 2014 > Source/WebCore/rendering/mathml/RenderMathMLToken.h:45 > + virtual bool isChildAllowed(const RenderObject&, const RenderStyle&) const override { return true; }; extra space after "override" > Source/WebCore/rendering/mathml/RenderMathMLToken.h:51 > + String m_text; Seems like m_text should be private. I don't see it referenced outside the .cpp file ------- Comment #10 From 2014-02-06 09:13:29 PST ------- @Chris: this prepares for the cleanup of & frames and implementation of the operator dictionary (see bug 124838 and bug 99620 and all the bugs they block) and especially I try to do a clean thing with anonymous styles (although I'm not sure that's the case). I have only extracted the part, so that's why the class is called RenderMathMLToken and test and probably why I store m_text (because it was more convenient for IIRC). ------- Comment #11 From 2014-02-06 10:40:28 PST ------- Created an attachment (id=223346) [details] Patch V3 ------- Comment #12 From 2014-02-06 11:55:28 PST ------- (From update of attachment 223346 [details]) Clearing flags on attachment: 223346 Committed r163553: ------- Comment #13 From 2014-02-06 11:55:34 PST ------- All reviewed patches have been landed. Closing bug.
|
{}
|
### pikmike's blog
By pikmike, history, 2 years ago,
1027A - Palindromic Twist
Tutorial
Solution (PikMike)
1027B - Numbers on the Chessboard
Tutorial
Solution (Vovuh)
1027C - Minimum Value Rectangle
Tutorial
Solution (PikMike)
1027D - Mouse Hunt
Tutorial
1027E - Inverse Coloring
Tutorial
Solution (PikMike) O(n^3)
Solution (BledDest) O(n^2)
1027F - Session in BSU
Tutorial
Solution (Vovuh)
Solution (Vovuh) Kuhn's Algorithm
1027G - X-mouse in the Campus
Tutorial
• +72
» 2 years ago, # | +67 You don't need a binary search in problem F. The complexity is O(nlogn) for coordinate compression. Build the graph, then apply DSU or SCC(Tarjan or Korasaju) with do the job. If there exists a connected component with number of edges greater than the number of vertices, then the answer is -1. If the number of edges is equal to the number of vertices, record the largest value, If the number of edges is equal to the number of vertices minus one, record the second largest value, the answer is the maximum over all connected components.
• » » 2 years ago, # ^ | 0 I was trying the same way but got tle in 45th test case. Any optimization you would suggest? I have used path compression and union by rank.
• » » » 2 years ago, # ^ | 0 You were using map in your program, which adds an additional log factor together with large constant.
» 2 years ago, # | 0 Can you please elaborate on the part of partial sums in problem E to solve the problem in N^2 time. I am unable to get it.
• » » 2 years ago, # ^ | ← Rev. 2 → +37 let lte[i] be the n.of binary strings of length n such that maximum segment of a single color has length <= i. (lte[i]-lte[i-1]) will be the n.of binary strings of length n such that maximum segment of a single color has length = i. Hence if we have lte array we can solve the problem. We can calculate lte[i] in O(n) as follows. let dp[j] be the n.of binary strings of length j such that maximum segment of a single color has length <= i.If j>i dp[j]=dp[j-1]+dp[j-2]....dp[j-i] else dp[j]=dp[j-1]+dp[j-2]..+dp[1]+1. This is because in a string of length j at the beginning we can have at most i bits with the same color and after that, it's just dp[remaining length]. Now, lte[i]=dp[n]. Clearly, we can calculate the dp array by using partial sums in O(n). Since each element of the lte array takes O(n) time to calculate overall complexity is O(n^2). P.S: Don't forget to multiply by 2 at the end since all this is for a fixed color of the top left tile and it has 2 options (white or black).AC: O(n^2)
• » » » 2 years ago, # ^ | 0 Thanks a lot. Can you just tell me why are we adding 1 in case of j <= i.
• » » » » 2 years ago, # ^ | +4 I think I got it. You are assuming that 1st element is fixed and 1 is for assuming all the j elements are same. Thanks.
• » » » » » 2 years ago, # ^ | +21 yes, glad that I was of some help :D
• » » » 2 years ago, # ^ | ← Rev. 2 → 0 Can you please explain the relation for j>idp[j] = dp[j-1]+...dp[j-i]How ?
• » » » 2 years ago, # ^ | ← Rev. 2 → 0 This is because in a string of length j at the beginning we can have at most i bits with the same color and after that, it's just dp[remaining length].I don't see why the reasoning is correct.If you put X bit of same color in the beginning and fill the rest with dp[j — X] then it is possible to end up with a segment of same color that has length more than i. isn't it?I guess you meant something else by beginning.
» 2 years ago, # | 0 Can someone find out what is time complexity on my code? (problem C) Here is my code 41811657 I think it should be O(T*n*log(n)) . But why it got TLE in case 6. Thank you!!
• » » 2 years ago, # ^ | 0 Maybe because ?In fact, for each test you create arrays a and cnt istead of making it global and clearing after each test.
• » » » 2 years ago, # ^ | 0 Why is this getting tle sir??please help me 41788605
• » » » » 2 years ago, # ^ | ← Rev. 4 → +1 Try using ios_base::sync_with_stdio(false). I optimized my solution from TLE to AC.TLE on test 6: 41767762AC: 41767854
• » » » » » 2 years ago, # ^ | 0 Thanks _Kuroni_
• » » » » » 2 years ago, # ^ | 0 Wow, that helped me too. Thank you!
• » » » » 2 years ago, # ^ | +3 Try scanf and printf rather than cin and cout it worked for me
• » » » » » 2 years ago, # ^ | 0 but ,will i be able to use sort() function , if i am taking input through scanf() ,because i have coded in c++
• » » » » » » 2 years ago, # ^ | 0 Ya Of course
• » » » » » » » 2 years ago, # ^ | 0 This code is not working , dude can you figure out the error #include main() { int i,arr[5]; for( i=0;i<5;i++) { arr[i]=5-i; } sort(arr,arr+5); for(i=0;i<5;i++) { printf("%d ",arr[i]); } }
• » » » » » » » » 2 years ago, # ^ | ← Rev. 2 → 0 Add using namespace std;
• » » » » » » » » » 2 years ago, # ^ | 0 Yeah , Got it !!! Thanks.
• » » » » 5 months ago, # ^ | 0 Why it's getting WA?https://codeforces.com/contest/1027/submission/77138815
» 2 years ago, # | ← Rev. 2 → 0 What does this line do in the code for Problem E? ans = (ans * (long long)((MOD + 1) / 2)) % MOD; Edit : Understood. For anyone having trouble it is modular inverse
• » » 2 years ago, # ^ | +11 It divides ans by 2 modulo MOD, as is the modular inverse of 2, because
» 2 years ago, # | ← Rev. 3 → +13 Kuhn's algo for F works only with this particular modification found in your solution -- you first check all the edges and if you find a suitable one, then you use it. Only if there are no suitable edges found, then you do a second loop and recurse into them (DFS). This versions passes tests. I don't know why.However, a "vanilla Kuhn" that just uses a regular DFS (tries every edge to a non-visited vertex recursively) receives TLE, which is kind of expected, since it has complexity O(N*M) -- too big for this problem.
» 2 years ago, # | 0 I've sent a solution for C (after the end of the contest) which sounds precisely like the given here (41817149), but it still didn't fit into given time :( What did I do wrong?
» 2 years ago, # | 0 In problem D, Why it is better to put trap on cycle, although there may be cheaper way to put trap on each path leads to this cycle like that. https://drive.google.com/open?id=19ff-GgeGhbhGXh5MykElhYXB3EErpq_9
• » » 2 years ago, # ^ | ← Rev. 2 → 0 Because we don't know where the mouse starts (it may start in any vertex). Mouse can be on a cycle at the beginning and if the mouse is on a cycle at the beginning, it will be on this cycle forever. Thus it is necessary to put a trap on cycle, otherwise mouse beginning on cycle will never be caught. And according to your example, if we put a trap in vertex with cost 100, mouse will go through this vertex regardless of its beginning vertex, so it is enough to put there a trap.
» 2 years ago, # | 0 As far I know Complexity of Kuhn is O(n * n). How F is Solved with Kuhn? What's the trick or am I wrong? Thanks in advance.
• » » 2 years ago, # ^ | 0 Kuhn works O(n*m). But tests for this assume prix is rough estimate. In real kuhn works very fast. There are also different heuristics.
» 2 years ago, # | 0 can anyone explain this line in tutorial for problem EThat way, you can also guess that the area of maximum rectangle of a single color you will get in you coloring is the product of maximum lengths of segments of a single color in both of the strings.I cant understand why maximum rectangle is product of maximum lengths of segments of only 2 binary strings
» 2 years ago, # | ← Rev. 2 → 0 Problem G: How to prove there is exactly u that u * x = v?Help me. Thanks.
• » » 2 years ago, # ^ | +1 Because u = v·x - 1
• » » » 2 years ago, # ^ | 0 Thanks so much.
» 2 years ago, # | 0 can someone help me in visualization of problem E dp.
» 2 years ago, # | 0 with regard to Palindromic twist what about a and z . These two works only for +1 for a and -1 for z. But your solution says "verify that the distance between the corresponding letters is either 0 or 2. Should'nt this be between 0 and 2.
• » » 2 years ago, # ^ | 0 If the difference is 1, you won't be able to match both of the letters because you will have to apply +1 or -1 to both of them
» 2 years ago, # | 0 A little help on problem C! I don't know what I am doing wrong. I have a code in Python similar to the above solution, and I got TLE on test 5. Here is my code : http://codeforces.com/contest/1027/submission/41941233 Can someone find out?
• » » 2 years ago, # ^ | 0 I don't know why it gets TLE, but it also seems incorrect. Test: 1 7 1 1 999 999 7 7 7 Your output: 7 7 7 7 Trace: 1) c == 1, a[i] == 7 -> c := 72) c == 7, a[i] == 7 -> push3) c == 7, a[i] == 7 -> pushBut in fact you don't have 2 pairs
» 2 years ago, # | 0 In problem D, the editorial says that the mouse will stuck in a cycle. But my question is that, should the vertex in the cycle be reachable from all other nodes which are not part of the vertex's cycle because the girls are not aware of the position of the mouse? If it is required the how can we find it?
• » » 2 years ago, # ^ | 0 It will always be stuck in a cycle, once it reached that cycle. So all you have to do is find all cycles, take the minimum number from each cycle and add to the answer :)
» 2 years ago, # | 0 can someone please provide links to understand kuhn's algo?
» 2 years ago, # | ← Rev. 3 → 0 Can anyone help me find the complexity of my code..?for Problem D...THis is my code in java.. http://codeforces.com/contest/1027/submission/42027693 Thanks
» 2 years ago, # | 0 sorry ignoring my algorithm (i mean whether is it correct or not) why i am getting TLE on test 2 ty 42082971
• » » 2 years ago, # ^ | 0 Try to convert the input/output from cin/cout to scanf/printf and don't forget to extend the size of array ar to not get RTE.
• » » » 2 years ago, # ^ | 0 Sorry i tryed to change them however i dont know why it is printing wrong answers! 42083838 amazing thing about it is that when i compiled it in my computer it printed right answers!!
• » » » » 2 years ago, # ^ | ← Rev. 2 → 0 You should use the identifier %lld with the type long long. And the size of the array is still not enough ;).
• » » » » » 2 years ago, # ^ | 0 sorry still TLE 42085192
• » » » » » » 2 years ago, # ^ | +3 sorry i find the problem XD
• » » » » » » » 2 years ago, # ^ | 0 XD the siiizzze
» 2 years ago, # | 0 Hi Do anyone knows what is it? 42086511 Ty
» 2 years ago, # | 0 This is my code for Problem A, It is not passing test 2, I cant see the input for test 2.. can any1 help me find the mistake? - if(a[i]==a[j]) - continue; - else if((a[i]++)==(a[j]--)) - continue; - else if((a[i]--)==(a[j]++)) - continue; - else - { - System.out.println("NO"); - f=1; - break; - }
» 2 years ago, # | 0 Problem D:Why does the mouse jump on a cycle at some point, no matter the starting vertex ?
• » » 2 years ago, # ^ | 0 Because it cant jump infinite times. In some number of jumps it will return to where it was
» 19 months ago, # | ← Rev. 2 → 0 .
» 16 months ago, # | 0 https://codeforces.com/contest/1027/submission/41811657 can someone tell me why I am getting tle my method is almost same as that it tutotrial
» 5 months ago, # | ← Rev. 3 → 0 Why do I get WA on test 3 (problem E)? https://codeforces.com/contest/1027/submission/78959890 UPD: fixed, a -1 was missing (AC: https://codeforces.com/contest/1027/submission/78974251)
|
{}
|
Since 18 of December 2019 conferences.iaea.org uses Nucleus credentials. Visit our help pages for information on how to Register and Sign-in using Nucleus.
# 28th IAEA Fusion Energy Conference (FEC 2020)
10-15 May 2021
Virtual Event
Europe/Vienna timezone
The Conference will be held virtually from 10-15 May 2021
## Advancements in Understanding the 2D Role of Impurity Radiation for Dissipative Divertor Operation on DIII-D
11 May 2021, 08:30
4h
Virtual Event
#### Virtual Event
Regular Poster Magnetic Fusion Experiments
### Speaker
Adam McLean (Lawrence Livermore National Laboratory)
### Description
Recent analysis leveraging the broad array of measurable plasma parameters on the DIII-D tokamak has been used to elucidate the physics underlying detachment processes in the divertor and reveal the 2D nature of detachment important for design of detachment scenarios for next step devices. The dominant role of EUV/VUV radiation for radiative power exhaust has been established experimentally with accompanying spectroscopy leveraged alongside collisional radiative modeling to calculate the impurity density and charge-state distribution in the divertor. 2D measurement of critical plasma parameters for power exhaust studies ($n_e$, $T_e$, P$_\text{rad}$, $E_\text{VUV/EUV}$) reveal a greater radial emission extent compared to UEDGE fluid modeling simulations. This larger extent provides opportunity for greater dissipation volume, but also further demonstrates that fully-2D simulations including cross-field drifts are required for detachment studies working towards a predictive capability of divertor heat loads.
A combination of EUV/VUV-VIS-IR spectroscopy, ColRadPy collisional radiative modeling (1), and 2D $T_e$ and $n_e$ measurements from Divertor Thompson Scattering has been used to infer impurity densities in the divertor. This analysis primarily uses the EUV/VUV resonance lines that make up the vast majority of radiative emission ($\sim$>95% (2)) and are particularly well suited for determining ground-state densities. Inter-ELM intrinsic carbon impurity fraction was found to be $\sim$5% in attached H-mode conditions, falling to $\sim$0.5% in detached conditions while maintaining about the same total carbon density. UEDGE modelling with a full physics drift model similarly shows a reduction in impurity concentration in detachment but limited to a $2.8 \times$ drop. Using the same set of calibrated EUV/VUV spectroscopy measurements, the carbon population in unseeded discharges is inferred to be dominated by C$^{4+}$ in the divertor whereas detached plasmas show highly radiating narrow bands of C$^{2+}$ and C$^{3+}$ at the detachment front. Figure 1 shows these charge state distributions for a partially detached plasma with $\mathbf{B} \times \nabla B$ drift towards the primary X-point in DIII-D’s ‘shelf’ open divertor (4.5MW, 1.8T, 0.9MA) alongside the associated UEDGE predictions. In nitrogen seeded detached cases the additional available charge state results in a slightly increased range of radiating species (N$^{2+}$ to N$^{4+}$ with $\sim$2eV of additional $T_e$ range) in a regime dominated by N$^{3+}$ and N$^{4+}$ ions. The charge state distribution comparison with UEDGE modeling displays quantitatively similar 2D profiles to those experimentally inferred albeit with an additional charge-state mixing caused by the finite lifetime of ions and transportation via parallel flows that are not accounted for in the CR model. Quantitative 2D comparison between UEDGE-predicted and measured flows has recently been achieved using velocity imaging (3). An excellent agreement of He$^+$ velocities in a pure helium L-mode plasma is achieved near the divertor target where He is the main-ion species and electron physics dominates. Further upstream where ion-dominated physics plays a more important role, a factor of 2–3 underestimation of the velocity is observed indicating an underestimation of the role of ions in determining local plasma characteristics near the X-point that impacts our ability to predict impurity transport via parallel flows in the divertor, estimate convective power fraction in detached conditions, and establish total pressure dissipation.
The radial extent of the radiative volume in detached H-mode discharges has been shown to display much broader features in detached H-mode discharges compared to UEDGE fluid modeling (5.2MW, 0.9MA, 1.8T, $\mathbf{B} \times \nabla B$ drift towards the primary X-point) (4) with an increasing level of broadening observed at higher powers (5). This is observed in both charge-state resolved line emission (Figure 2) as well as total radiated power (bolometry), Divertor Thomson Scatting, and 2D visible imaging. UEDGE simulations with drifts and currents show that in these conditions the poloidal $\mathbf{E} \times \mathbf{B}$ drift can dominate the poloidal heat transport in the radiative front, expanding the poloidal extent of the radiation front as well as increasing the total radiative power. This indicates drift flows leading to a larger volume for dissipation and enhanced ability for divertor radiation than predicted by more commonly used 1D and 0D modeling approximations, or 2D modeling without drifts. This directly impacts our ability to predict detachment onset, detachment stability, the impurity fraction required to achieve detachment, and the heat flux mitigation that can be expected in planned divertors.
This work was supported in part by the US Department of Energy under DE-FC02-04ER54698, DE-AC52-07NA27344, and DE-NA0003525.
(1) Johnson et al., 2019 Nuclear Materials and Energy 20 100579
(2) Mclean A.M. et al., 2018 IAEA FEC 2018 EC/PC-15; Mclean A.M. et al., 2020 Plasma Surface Interactions Conference (upcoming)
(3) Samuell C.M. et al., Phys. Plasmas, 25 056110
(4) Jaervinen A.E. et al., 2019, Contrib. Plasma Phys; Jaervinen A.E. et al. 2020 NF (submitted)
(5) Leonard A.W. et al, 2020, IAEA 2020 (this meeting)
Country or International Organization United States Lawrence Livermore National Laboratory
### Primary authors
Cameron Samuell (Lawrence Livermore National Laboratory) Adam McLean (Lawrence Livermore National Laboratory) Aaro Jaervinen (Lawrence Livermore National Laboratory) Mr Curt Johnson (Auburn University) Dr Steven Allen (Lawrence Livermore National Laboratory) Max Fenstermacher (LLNL @ DIII-D) Mr Andreas Holm (Aalto University) Charles Lasnier (Laurence Livermore National Laboratory) Dr Gary Porter (Lawrence Livermore National Laboratory) Mr Thomas Rognlien (Lawrence Livermore National Laboratory) Dr Filippo Scotti (Lawrence Livermore National Laboratory) Anthony W. Leonard (General Atomics) Mr William Meyer (Lawrence Livermore National Laboratory) Dr Auna Moser (General Atomics) Morgan Shafer (Oak Ridge National Laboratory) Dr Dan M. Thomas (General Atomics, San Diego, CA 92186, USA) Dr Huiqian Wang (General Atomics, San Diego, CA 92186, USA) Dr Jonathan Watkins (Sandia National Laboratory) Mathias Groth (Aalto University)
|
{}
|
Uncertainty
Previously we've used logic to represent facts and used planning algorithms to identify desirable actions. Now we reevaluate this approach by introducing uncertain information.
Probability theory provides the basis for our treatment of systems that reason under uncertainty. Also, because actions are no longer certain to achieve goals, agents will need ways of weighing up the desirability of goals and the likelihood of achieving them. For this, we use utility theory. Probability theory and utility theory together constitute decision theory, which allows us to build rational agents for uncertain worlds.
Uncertainty can also arise because of incompleteness and incorrectness in the agent's understanding of the properties of the environment.
Uncertainty Example
Let action A(t)= leave for airport t minutes before flight
Will A(t) get me there on time? Problems:
• partial observability, noisy sensors
• uncertainty in action outcomes (flat tire, etc.)
• immense complexity of modelling and predicting traffic
Hence a purely logical approach either
1. risks falsehood: “A(25) will get me there on time”, or
2. leads to conclusions that are too weak for decision making:
“A(25) will get me there on time if there’s no accident on the bridge and it doesn’t rain and my tires remain intact etc etc.”
(A(1440) might be safe but I’d have to stay overnight in the airport …)
Other plans, such as A(50), might increase the agent's belief that it will get to the airport on time, but also increase the likelihood of a long wait.
Methods for Handling Uncertainty
Default/Non-monotonic Logic
Make assumptions - e.g. A(i) works unless contradicted by evidence
The issue is what assumptions are reasonable, and how do we handle contradictions.
Probability
Use the 1565 theory of Gambling to work out the probability of A(i) being correct, given the available evidence.
Using first order logic fails to cope in uncertain situations because:
• Laziness: failure to enumerate exceptions, qualifications, etc. (e.g. don't write down all symptoms of a disease)
• Theoretical Ignorance: lack of knowledge about environment at all (e.g. know nothing about a disease)
• Practical Ignorance: lack of relevant facts, initial conditions, etc. (e.g. don't know the patient's specific results for a test).
Probability allows us to summarise the uncertainty that comes from laziness and ignorance.
What is the 'right' thing to do?
The right thing to do, the rational decision, depends on both the relative importance of various goals and the likelihood that they will be achieved (and to what degree).
e.g. depends on preferences for missing flights vs long waits.
Utility theory is used to represent and infer preferences
Decision theory = utility theory + probability theory
Subjective/Bayesian Probability
These probabilities relate propositions to one’s own state of knowledge. These are not claims of a “probabilistic tendency” in the current situation (but might be learned from past experience of similar situations). Probabilities of propositions change with new evidence:
e.g.
• P(A(25)|no reported accidents) = 0.06
• P(A25|no reported accidents, 5 a.m.) = 0.15
Probability Basics
Start with a set S - the sample space (e.g. [1,2,3,4,5,6] = the 6 possible rolls of a die).
$s \in S$ is a sample point/possible world/atomic event.
A probability space/model is a sample space where every s has an assignment (P(s)).
Rules:
• $0 \leq P(s) \leq 1$
• $\sum_s P(s) = 1$
• e.g. $P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = \frac{1}{6}$
An event A is any subset of S.
(1)
\begin{align} P(A) = \sum_{s \in A} P(s) \end{align}
e.g.
(2)
\begin{align} P(\text{die roll < 4 }) = P(1) + P(2) + P(3) = \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} = \frac{1}{2} \end{align}
Random Variable Functions
A random variable is a function from sample points to some range (e.g. the Reals or Booleans).
Example
Odd(3) = true.
Odd() is the function, 3 is the sample point, true is the boolean value that is the range
Probability Distribution
P induces a probability distribution for any random variable X:
(3)
\begin{align} P(X = x_i) = \sum_{s:X(s) = s_i} P(s) \end{align}
e.g.
(4)
\begin{align} P(Odd(diceRoll) = true) = P(1) + P(3) + P(5) = \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} = \frac{1}{2} \end{align}
Propositions
A proposition is the event (the set of sample points) for which the proposition is true.
Given boolean random variables A and B:
• event a = set of sample points where A(s) = true
• event ¬a = set of sample points where A(s)= false
• event a^b = set of sample points where A(s) = true and B(s) = true
With boolean variables, sample point = propositional logic model. (Where a proposition is a disjunction of atomic events in which it is true).
Why use probability?
The definitions imply that certain logically related events must have related probabilities, and we can show that an agent who bets according to probabilities that violate these axioms can be forced to bet so as to lose regardless of the outcome.
I.e. we need to follow the axioms.
Syntax for Propositions
Functions have a capital letter if they're multivalued. e.g.
Cavity = <true, false> vs cavity (implies it is true as only option)
Propositional/Boolean Random Variables
e.g. Cavity = <true, false>
Discrete (finite/infinite) Random Variables
e.g. Weather = <sunny, rain, cloudy, snow>
Continuous (bounded/unbounded) Random Variables
e.g. Temp = 21.6, or Temp < 22.0
Prior Probability
Prior or unconditional probabilities of propositions are the initial probabilities that are most accurate based on a lack of new evidence.
e.g. P(Cavity = true) = 0.1 and P(Weather = sunny) = 0.72 correspond to belief prior to arrival of any new evidence.
A probability distribution gives values for all possible assignments:
P(Weather) = <0.72,0.1,0.08,0.1> (normalised so as to sum to 1).
Joint Probability
A joint probability distribution for a set of Random Variables gives the probability of every atomic event (sample point) on those Random Variables.
Every question about a domain can be answered by the join distribution because every event is just a sum of sample points.
Probability for Continuous Variables
With continuous variables we express distribution as a parameterised function.
e.g. P(X=x) = U[18,26](x) = uniform density between 18 and 26.
Here P's a density and hence integrates to 1.
P(X = 20.5) = 0.125 really means
(5)
\begin{align} \frac{lim_{dx \rightarrow 0}P(20.5 \leq X \leq 20.5 + dx)}{dx} = 0.125 \end{align}
Gaussian Density
Gaussian functions describe normal distributions (commonly known as a bell curve).
(6)
\begin{align} P(x) = \frac{1}{\sqrt{2 \pi \sigma}}e^{\frac{-(x-\mu)^2}{2 \sigma^2}} \end{align}
Conditional Probability
(7)
\begin{align} P(a|b) = \frac{P(a \wedge b)}{P(b)} \text{ if } P(b) \neq 0 \end{align}
Conditional or posterior probabilities (i.e. P(x|y) (x given y).
e.g. P(cavity|toothache) = 0.8
More general observations remain valid in the face of evidence, but are not always useful.
New evidence might be irrelevant and hence allow simplification. This kind of inference is crucial.
e.g. P(cavity|toothache, 49ersWin) = P(cavity|toothache) = 0.8
Notation
• Conditional Distribution: P(X,Y) = numX-elements vector of numY-elements vectors
• When we know that one of the attributes is given (i.e. it = a) P(x,y, a) = 1 (e.g. P(x, y and a)
Chain Rule
Chain rule says that:
(8)
$$P(X_1, ..., X_n) = P(X_1, ..., X_{n_1})P(X_n|X_1, ..., X_{n-1})$$
And this can be successively applied to get:
(9)
\begin{align} P(X_1, .. X_n) = \product_{i=1}^n P(X_i|X_1, ... X_{i-1}) \end{align}
Inference by Enumeration
We can also compute conditional probabilities, such as:
(10)
\begin{align} P(\neg \text{cavity}|\text{toothache}) = \frac{P(\neg \text{cavity ^ toothache}}{P(toothache)} = \frac{0.016 + 0.064}{0.108+0.012+0.016+0.064} = 0.4 \end{align}
Normalisation
General idea: compute distribution on query variable by fixing evidence variables and summing over hidden variables
Independence
A and B are independent iff
(11)
\begin{align} P(A|B) = P(A) \text{ or } P(B|A) = P(B) \text{ or } P(A,B) = P(A)P(B) \end{align}
Absolute independence is powerful but rare.
Conditional Independence
If P(X|Y, z) = P(X|z) and P(X|Y, ¬z) = P(X|¬z) then X is conditionally independent of Y given Z.
(i.e. P(X|Y, Z) = P(Z)).
If we let P(Toothache,Cavity,Catch) have $2^3 −1 = 7$ independent entries, and we say that Catch (the probe pokey thingy catching) is conditionally independent of Toothache given Cavity, then we can work out the full joint distribution as:
The use of conditional independence commonly reduces the size of the representation of the joint distribution from exponential to linear (over the number of things in the original P()).
Bayes Rule is linked to conditional independence - our chain rule'd joint distribution is an example of a naive Bayes model.
Wumpus World
Pits are placed randomly, with a probability of 0.2 per square.
We know the following:
• $breeze = \neq breeze_{1,1} \wedge breeze_{1,2} \wedge breeze_{2,1}$
• $\text{Known }= \neq pit_{1,1} \wedge \neq pit_{1,2} \wedge \neq pit_{2,1}$
• Query is $P(Pit_{1,3}|Known, breeze)$
• Unknown = all Pit[i,j]'s other than Pit[1,3] and Known.
Hence by enumeration we have:
(12)
\begin{align} P(Pit_{1,3}|Known, breeze) =\alpha \sum_{Unknown} P(Pit_{1,3}, Unknown, Known, breeze) \end{align}
This grows exponentially with the number of squares.
Using Conditional Independence
The idea here is that observations (what's known and unknown) are conditionally independent of other hidden squares, given neighbouring hidden squares.
We can hence define Unknown = Fringe ∪ Other
$P(breeze|Pit_{1,3},Known,Unknown) = P(breeze|Pit_{1,3}, Known, Fringe)$
There's a bunch of maths in manipulating this, but it essentially comes down to:
(13)
\begin{align} P(Pit_{1,3}|Known, b) = \alpha ' P(Pit_{1,3}) \sum_{Fringe} P(breeze|Known,Pit_{1,3} , Fringe)P(Fringe) \end{align}
Applying this we get the following:
$P(Pit_{1,3}|Known,breeze) = \alpha '(0.2(0.04+0.16+0.16), 0.8(0.04+0.16))$
Comes to about a Probability if 0.31 that there's a pit there, and Probability 0.69 that there isn't.
With $P(Pit_{2,2}|Known,breeze)$ we get a Probability of 0.86 that there's a pit there, and a probability of 0.14 that there isn't.
page revision: 21, last edited: 05 May 2012 04:24
|
{}
|
Optimality Theory Tableaux
[ LaTeX for Linguists, .dvi, .ps, .pdf]
These are some tips about how to make Optimality Tableaux in LaTeX.
These notes are very incomplete, and they may never get much better. Of course, if someone would like to write a better version, just mail me ...
Marlies Kluck (Utrecht University) suggests a very simple approach, just using a normal tabular environment:
\begin{tabular}
{|lc|c|c|c|}\hline
& \textbf{Input} & Cnstrnt 1 & Cnstrnt 2& Cnstrnt 3\\ \hline\hline
& candidate 1 & *! & & \\ \hline
& candidate 2 & & * & \\ \hline
\hand & candidate 3 & & & * \\ \hline
\end{tabular}
The hand' command can be obtained in various ways:
• load the pifont package, and use \ding{43}; i.e. in the preamble put:
\usepackage{pifont}
\newcommand{\hand}{\ding{43}}
• load the pzdr font, and use \mbox{\db +}:
\newfont{\db}{pzdr}
\newcommand{\w}{\mbox{\db +}}
Actually, what Marlies Kluck suggests is a bit more complicated:
\begin{center}
\begin{tabular*}{0.95\textwidth}
{@{\extracolsep{\fill}}|rl||c|c|c|}\hline
& \textbf{Input} & Constraint 1 & Constraint 2 & Constraint 3 \\ \hline\hline
& candidate 1 & *! & & \\ \hline
& candidate 2 & & * & \\ \hline
\hand & candidate 3 & & & * \\ \hline
\end{tabular*}
\end{center}
Michael T Hammond hammond@U.Arizona.EDU suggests pstricks and colortab. Here's an example:
\usepackage{pstricks,colortab}
\begin{tabular}[t]{r|c|c|c|}
\cline{2-4}
& /qi/ & qi & qi \\
\LCC
& & & \lightgray \\ \cline{2-4}
\hand & [qi] & & * \\ \cline{2-4}
& [*qi] & *! & \\ \cline{2-4}
\ECC
\end{tabular}
Zsuzsanna Nagy nzsuzsa@rci.rutgers.edu also recommends the color and colortab packages. Here is sample code for a tableau with shaded cells:
\begin{tabular}{|l||c|c|} \hline
&VO &OV \\ \hline\hline
\LCC
& &\lightgray \\ \hline
prefixing &Tagalog &Ma'a \\ \hline
\ECC
\LCC
&\lightgray & \\ \hline
suffixing &Kwakwala &Japanese \\ \hline
\ECC
\end{tabular}
The idea is that you surround the code for the row in which you want to shade some cells with \LCC and \ECC`, and insert a "dummy" row with just the color information in the appropriate cell(s) before the row(s) with the actual content.
LaTeX for Linguists,
Doug Arnold,
doug@essex.ac.uk,
September 25, 2007.
|
{}
|
location: Publications → journals
Search results
Search: All articles in the CJM digital archive with keyword reproducing kernel Hilbert space
Expand all Collapse all Results 1 - 1 of 1
1. CJM Online first
Hartz, Michael
On the isomorphism problem for multiplier algebras of Nevanlinna-Pick spaces We continue the investigation of the isomorphism problem for multiplier algebras of reproducing kernel Hilbert spaces with the complete Nevanlinna-Pick property. In contrast to previous work in this area, we do not study these spaces by identifying them with restrictions of a universal space, namely the Drury-Arveson space. Instead, we work directly with the Hilbert spaces and their reproducing kernels. In particular, we show that two multiplier algebras of Nevanlinna-Pick spaces on the same set are equal if and only if the Hilbert spaces are equal. Most of the article is devoted to the study of a special class of complete Nevanlinna-Pick spaces on homogeneous varieties. We provide a complete answer to the question of when two multiplier algebras of spaces of this type are algebraically or isometrically isomorphic. This generalizes results of Davidson, Ramsey, Shalit, and the author. Keywords:non-selfadjoint operator algebras, reproducing kernel Hilbert spaces, multiplier algebra, Nevanlinna-Pick kernels, isomorphism problemCategories:47L30, 46E22, 47A13
top of page | contact us | privacy | site map |
|
{}
|
# NAG Library Routine Document
## 1Purpose
f06fpf applies a real symmetric plane rotation to two real vectors.
## 2Specification
Fortran Interface
Subroutine f06fpf ( n, x, incx, y, incy, c, s)
Integer, Intent (In) :: n, incx, incy Real (Kind=nag_wp), Intent (In) :: c, s Real (Kind=nag_wp), Intent (Inout) :: x(*), y(*)
#include nagmk26.h
void f06fpf_ (const Integer *n, double x[], const Integer *incx, double y[], const Integer *incy, const double *c, const double *s)
## 3Description
f06fpf applies a symmetric real plane rotation to two $n$-element real vectors $x$ and $y$ scattered with stride incx and incy respectively:
$xT yT ← c s s -c xT yT .$
None.
## 5Arguments
1: $\mathbf{n}$ – IntegerInput
On entry: $n$, the number of elements in $x$ and $y$.
2: $\mathbf{x}\left(*\right)$ – Real (Kind=nag_wp) arrayInput/Output
Note: the dimension of the array x must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,1+\left({\mathbf{n}}-1\right)×\left|{\mathbf{incx}}\right|\right)$.
On entry: the original vector $x$.
If ${\mathbf{incx}}>0$, ${x}_{\mathit{i}}$ must be stored in ${\mathbf{x}}\left(1+\left(\mathit{i}-1\right)×{\mathbf{incx}}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
If ${\mathbf{incx}}<0$, ${x}_{\mathit{i}}$ must be stored in ${\mathbf{x}}\left(1-\left({\mathbf{n}}-\mathit{i}\right)×{\mathbf{incx}}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
Intermediate elements of x are not referenced.
On exit: the transformed vector $x$ stored in the same elements used to supply the original vector $x$.
Intermediate elements of x are unchanged.
3: $\mathbf{incx}$ – IntegerInput
On entry: the increment in the subscripts of x between successive elements of $x$.
4: $\mathbf{y}\left(*\right)$ – Real (Kind=nag_wp) arrayInput/Output
Note: the dimension of the array y must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,1+\left({\mathbf{n}}-1\right)×\left|{\mathbf{incy}}\right|\right)$.
On entry: the original vector $y$.
If ${\mathbf{incy}}>0$, ${y}_{\mathit{i}}$ must be stored in ${\mathbf{y}}\left(1+\left(\mathit{i}-1\right)×{\mathbf{incy}}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
If ${\mathbf{incy}}<0$, ${y}_{\mathit{i}}$ must be stored in ${\mathbf{y}}\left(1-\left({\mathbf{n}}-\mathit{i}\right)×{\mathbf{incy}}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
Intermediate elements of y are not referenced.
On exit: the transformed vector $y$ stored in the same elements used to supply the original vector $y$.
Intermediate elements of y are unchanged.
5: $\mathbf{incy}$ – IntegerInput
On entry: the increment in the subscripts of y between successive elements of $y$.
6: $\mathbf{c}$ – Real (Kind=nag_wp)Input
On entry: the value $c$, the cosine of the rotation.
7: $\mathbf{s}$ – Real (Kind=nag_wp)Input
On entry: the value $s$, the sine of the rotation.
None.
Not applicable.
## 8Parallelism and Performance
f06fpf is not threaded in any implementation.
|
{}
|
Beta
# Best Easter Eggs and Other Software Surprises
#### ScuttleMonkey posted more than 4 years ago | from the_insult_dog
233
the_insult_dog writes "Computerworld has an article up (with videos) about some of the coolest Easter eggs and other software surprises, ranging from full-featured games to strange messages from robots. What other eggs are out there? What's the coolest egg ever?"
cancel ×
### No Comment Title Entered
#### Anonymous Coward 1 minute ago
No Comment Entered
### oh brother..... (3, Insightful)
This comment was hidden based on your threshold setting.
#### eggoeater | more than 4 years ago
What's the coolest egg ever?
Phrase your answer in the form of a tweet. "OMG gt2B SWbxSET3".
What is this? Tweeny-Cutie magazine?
I enjoy a fun easter-egg but this is asinine.
### Re:oh brother..... (1)
This comment was hidden based on your threshold setting.
#### BitZtream | more than 4 years ago
I'm sorry, I refuse to use twitter, it has to be the dumbest thing I've ever heard of, so I'm out of it on these references.
Can someone explain the joke to those of us who are ignorant to the ways of Twitter?
### emacs (2, Interesting)
This comment was hidden based on your threshold setting.
M-x; tetris
### Re:emacs (1)
This comment was hidden based on your threshold setting.
pffffffffffffff
M-x; aabioshock
### Trademark infringement? (1, Interesting)
This comment was hidden based on your threshold setting.
#### tepples | more than 4 years ago
GNU Emacs isn't licensed by The Tetris Company. Calling a Free tetromino game "Tetris" be like calling an OS based on GNOME and WINE "Microsoft Windows". Ordinarily, changing the name would fix things, as I did with my own tetromino game. But if Tetris prevails in Tetris v. BioSocia , might the company use the precedent to attack the Free Software Foundation?
### Re:Trademark infringement? (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
...And then everyone has to use VI!!!!
### Re:emacs (3, Informative)
This comment was hidden based on your threshold setting.
#### grumbel | more than 4 years ago
Thats no easter egg, thats just a game running in Emacs, there are plenty more (5x5, dunnet, blackbox, gomoku, hanoi, life, mpuz, snake, solitaire and zone).
### Re:emacs (1, Funny)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
I'm glad that at least one free OS comes with super-cool games installed by default.
### Re:emacs (4, Funny)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
If it only had a text editor...
### Re:emacs (1)
This comment was hidden based on your threshold setting.
#### tepples | more than 4 years ago
Thats no easter egg, thats just a game running in Emacs, there are plenty more (5x5, dunnet, blackbox, gomoku, hanoi, life, mpuz, snake, solitaire and zone).
I think the point of that page of the article is that distribution of Lisp games along with Emacs, without them showing up on any menu (unlike Windows XP's Start > All Programs > Games), is itself an egg.
### Re:emacs (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
They are all listed right under "Tools -> Games" in the menubar, beside M-x is no secret hidden feature, its the way you execute all stuff that doesn't have a predefined keybinding in Emacs.
Tetris is no more easter egg in Emacs then Minesweeper is in Windows.
### Re:emacs (1)
This comment was hidden based on your threshold setting.
#### Jason Earl | more than 4 years ago
I thought it was funny that the reviewer called this a Mac OS X easter egg. I suppose it might be somewhat surprising to find that emacs is installed by default on a Mac, but tetris is hardly a emacs easter egg. Heck, there's even a menu entry for it.
Besides, if you are going to include tetris why not doctor or dunnet?
### Re:emacs (1)
This comment was hidden based on your threshold setting.
#### fulldecent | more than 4 years ago
WOW - finally a standard game on Mac better than Chess.app
### Anyone remember Terminate, the comm program? (2, Interesting)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
Terminate was primarily a BBS dialer, but it had a hidden feature/easter egg in early versions. With the right combination, it would switch into a Wargames mode, ie "Greetings Professor Falken." If you went through the prompts, it unlocked a wardialer feature. That's useful to some, but I just found the Wargames part really amusing.
### this is a spam submission (5, Informative)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
uggh what a horrible spam submission is this a domain squatters site ?
loads of adverts and 1 eegg on each single page, desperate for revenue much? ill be glad when adblock finishes these domains off for good, no value at all.
anyway http://eeggs.com/ is the source where they have cut and pasted their content from
### Re:this is a spam submission (3, Insightful)
This comment was hidden based on your threshold setting.
#### bobdehnhardt | more than 4 years ago
Not only that, it was a lame "feature." Three of the eeggs weren't even eeggs - one was a telnet site, one was a documented app feature, and one was a documented OS utility.
ddate really showed how lazy there were. 10 seconds in my browser and I had a full definition of what a Discordian date is. Including what YOLD means.
And someone got paid to put that "feature" together? Crap....
### Re:this is a spam submission (1)
This comment was hidden based on your threshold setting.
#### Seth Kriticos | more than 4 years ago
Not to mention the 17 different domains that the site is loading scripts from (one of which was the video). Did not even bother to fine tune it, just forbid everything and left. Spam as Slashdot front-page article: grrr.
### Best was in Excel 4.0 (5, Interesting)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
The best one was in Excel 4.0 where you could make a Lotus 123 bitmap appear, have bugs crawl out of it, and an Excel bitmap appear and kick the Lotus one away. It was back in the day when people didn't "get in trouble" for putting in Easter Eggs.
### Re:Best was in Excel 4.0 (-1, Troll)
This comment was hidden based on your threshold setting.
### Re:Best was in Excel 4.0 (3, Funny)
This comment was hidden based on your threshold setting.
#### Rei | more than 4 years ago
I once (and only once) added an easter egg to a program I was working on. It was called "Bullfrog", and was a government system for scanning the radio spectrum for signals and tuning in to whatever you found. On a dialog I was working on, one of the requirements was to have a "bouncing ball" that shows you what frequency you're at as you scan. There was also a little history snapshot dialog that you could turn on or off. If you clicked the button to turn the snapshot dialog on/off precisely 42 times, the bouncing ball would turn into a hopping frog. Only took a few minutes to code, so why not? :)
I can't help but wonder if anyone ever ran into that... ;)
### Re:Best was in Excel 4.0 (1)
This comment was hidden based on your threshold setting.
#### FooAtWFU | more than 4 years ago
No, but I remember you talked about it on Slashdot the time they had that story about Obama's BlackBerry and RF scanning threats.
### Best was in Excel 97 (2, Informative)
This comment was hidden based on your threshold setting.
#### Ken_g6 | more than 4 years ago
Where if you typed something in a cell near the far right, you got a driving game. With guns in your car to shoot other cars.
### Re:Best was in Excel 97 (1)
This comment was hidden based on your threshold setting.
#### Voyager529 | more than 4 years ago
That was Excel 2000. Excel '97 had the Flight Simulator game.
### Best Easter Egg I've ever found (-1, Troll)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
if you stare at this picture long enough, you get a massive erection.
How do they do it? I have no idea.
### Re:Best Easter Egg I've ever found (-1, Flamebait)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
They did nothing, it's just your penis realising you're a flaming homosexual before your brain does.
### What's the coolest egg ever? (3, Funny)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
The Reese's peanut butter egg.
With the deviled egg tied for a close second with eggs benedict.
### Re:What's the coolest egg ever? (3, Funny)
This comment was hidden based on your threshold setting.
#### fprintf | more than 4 years ago
1. Cadbury's creme egg
2. Cadbury's Mini eggs
3. Fried eggs with ketchup and fried toast
### mIRC & Photoshop (4, Informative)
This comment was hidden based on your threshold setting.
#### sexconker | more than 4 years ago
On the about / register splash screen type:
a r n i e
The picture of the creator turns into a picture of a stuffed dinosaur, presumably names Arnie.
Various Photoshop splash logos in the past have had hidden images.
Typically you would have to grab a screenshot of the splash logo and then do CMYK separation, fiddle with brightness/contrast, grid masking, etc. to see the images.
### Re:mIRC & Photoshop (1)
This comment was hidden based on your threshold setting.
#### EkriirkE | more than 4 years ago
Or a more blatant alternate splash logo in PS by holding Alt or Ctrl before select About - some have been risque before
### OMFG (5, Funny)
This comment was hidden based on your threshold setting.
#### geminidomino | more than 4 years ago
How do you make the fucking fish go away?!!?
### Re:OMFG (3, Informative)
This comment was hidden based on your threshold setting.
#### janeuner | more than 4 years ago
pwnt
killall gnome-panel
### Re:OMFG (1)
This comment was hidden based on your threshold setting.
#### geminidomino | more than 4 years ago
emrgence@asterisk:~\$ sudo killall gnome-panel
gnome-panel: no process killed
And yes, I'm running gnome.
I'm confused as hell.
### Re:OMFG (1)
This comment was hidden based on your threshold setting.
#### janeuner | more than 4 years ago
Shouldn't need sudo...gnome-panel runs as a user process.
If all else fails, logout/login.
### Re:OMFG (1)
This comment was hidden based on your threshold setting.
click on it
### Re:OMFG (1)
This comment was hidden based on your threshold setting.
#### jnetsurfer | more than 4 years ago
But she comes back!!!! (Not that I care, it's a minor distraction and pretty funny)
### Re:OMFG (-1, Troll)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
This youtube video shows how:
Fish Finder
### Re:OMFG (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
ctrl+alt+backspace.
Make sure you close anything important first.
### Re:OMFG (1)
This comment was hidden based on your threshold setting.
stop the fish
### Re:OMFG (1)
This comment was hidden based on your threshold setting.
#### physicsphairy | more than 4 years ago
You just have to click on it.
(Note you will have to click on it again when it comes back.)
### Re:OMFG (1, Funny)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
log into root and type rm -rf /
I am confident the fish will disappear
### telnet towel.blinkenlights.nl (4, Informative)
This comment was hidden based on your threshold setting.
#### janeuner | more than 4 years ago
^^ Incredible.
Netherlanders == Nerds
### Re:telnet towel.blinkenlights.nl (1)
This comment was hidden based on your threshold setting.
#### the_brobdingnagian | more than 4 years ago
As a "Netherlander" I find this offensive.
Now, what was I doing on /. again?
### Re:telnet towel.blinkenlights.nl (2, Informative)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
Calling the Dutch "Netherlanders" would be similar to calling Americans "United Staters".
Just an off-topic FYI.
### Re:telnet towel.blinkenlights.nl (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
There are some people do refer to Americans as USians.
### Re:telnet towel.blinkenlights.nl (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
Not really, it would be Americans.
Same goes for Briton for Britain, Scot for Scottish.
Admittedly "Scot" is more colloquial, but it is apparently popular and we all know what happens when words become popular.
Netherlanders would seem like the more logical choice IMO.
Anyone know the origins of "Dutch"? (since this is a spamvertisement. And that we had one of these not that long ago if i remember correct.)
### Videos? (2, Insightful)
This comment was hidden based on your threshold setting.
#### Bigbutt | more than 4 years ago
Jeeze, can't we do stuff without videos any more?
Blocked at work.
[John]
### Re:Videos? (5, Informative)
This comment was hidden based on your threshold setting.
#### RebootKid | more than 4 years ago
1. Go to the spreadsheet application in the OpenOffice suite
2. Go to any cell
3. Type in: =game()
The response will be "say what?"
4. Type in: =GAME("StarWars")
5. Press the enter key -- the opening screen shows up
6. Pick your icon -- a message will appear in German
7. Pick your level (again, in German)
8. Click 'start'
### Re:Videos? (1)
This comment was hidden based on your threshold setting.
#### thedonger | more than 4 years ago
Awesome! My faith in humanity is restored! I never got the "brickbreaker" easter egg in Excel 95 to work, but that doesn't matter anymore.
Thank you, RebootKid!
### Re:Videos? (1)
This comment was hidden based on your threshold setting.
#### vertinox | more than 4 years ago
It is mostly for those people who are OS impaired and don't want to install Linux/WinXP/Mac OS X just to see a cute Easter egg.
If you are so inclined, you can follow the instructions yourself below the video if you have the matching OS.
### Re:Videos? (1)
This comment was hidden based on your threshold setting.
#### Jeremy Erwin | more than 4 years ago
Exactly. People shouldn't have to install MacOSX just to play with emacs tetris.
### Re:Videos? (2, Funny)
This comment was hidden based on your threshold setting.
#### Wizard Drongo | more than 4 years ago
You're missing little, trust me.
That was the lamest list of "easter egg's" I've ever seen. Most of them were minor apps in Ubuntu that just aren't well known. Then there's the telnet of the ASCII star wars movie, hardly an easter egg.
What happened to the famed Excel flight-sim? Or any number or other great jokes.
Not to mention the gratuitous use of shitty videos with the worst narrator in history, who incidentally swallowed the microphone before starting...
### eeggs.com (1)
This comment was hidden based on your threshold setting.
#### Magreger_V | more than 4 years ago
Well, Since the slashdot army has brought down eeggs.com, I guess we'll never know which egg is the best of all time. But I do remember a hidden flight simulator in Microsoft excel way back in the day
### Charles Darwin's Egg (1)
This comment was hidden based on your threshold setting.
#### Xtifr | more than 4 years ago
How about the rediscovery of Charles Darwin's egg just in time for Easter?
About the bird itself, Darwin's notes commented that the flesh was "most delicately white" when cooked. They just don't make Naturalists like that any more! :)
### Faberge (2, Funny)
This comment was hidden based on your threshold setting.
#### jbeaupre | more than 4 years ago
Faberge: best Easter eggs ever. Thought everyone knew that.
### This guy is a ComputerWorld editor? (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
Jesus Christ, that guy's voice is annoying. And he types like a retarded chip, to boot!
### Re:This guy is a ComputerWorld editor? (2, Insightful)
This comment was hidden based on your threshold setting.
#### Jason Earl | more than 4 years ago
I thought I was going to die when he kept retyping (slowly)
aptitude -v moo
instead of just hitting the up arrow on his keyboard. What's worse, he missed a part of the Easter egg. You get another bit of text if you -vvvvvv or more.
Somehow it didn't stop me from watching all of the videos though.
### SMS Snail game (1, Informative)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
What about the old sega master system trick to get the snail game by holding Up, buttons 1 and 2 simultaneously and powering on the system.
### Zombies... (5, Informative)
This comment was hidden based on your threshold setting.
#### atari2600 | more than 4 years ago
This is somewhere between an easter egg and a surprise. Beating the Call of Duty: World at War single player mode and being patient enough for the credits to end unlocks a mini-game: Zombie Survival that you can play solo or co-op with upto 3 other players.
Lot of fun, adds to the game value (and kinda apologizes for the quality of multiplayer offering).
Found out the game mode purely by accident after I beat the single player mode and went to make a sandwich...A lot of gamers knew it and it was all over the web but I was oblivious to that part which made it a nice surprise.
### Apt-get moo (1)
This comment was hidden based on your threshold setting.
#### Urban Garlic | more than 4 years ago
Works on Debian, of course. Maybe Ubuntu, too.
### POD Farm (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
There's an easter egg in the "About" window of Line 6's new POD Farm... if you know how to find it.
### Favorite Easter Egg (1)
This comment was hidden based on your threshold setting.
#### areusche | more than 4 years ago
My favorite easter egg would probably have to be the Palm OS taxi cab. I love watching that little thing go across the screen pretty randomly.
### 11 pages and over 80 adverts later (5, Informative)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
fuck computerworld, 80 adverts for a single pages worth of crappy eggs ?
enjoy unemployment fuckers
Star Wars game
1. Go to the spreadsheet application in the OpenOffice suite
2. Go to any cell
3. Type in: =game()
The response will be "say what?"
4. Type in: =GAME("StarWars")
5. Press the enter key -- the opening screen shows up
6. Pick your icon -- a message will appear in German
7. Pick your level (again, in German)
8. Click 'start'
Wanda the fish
1. In Linux (Ubuntu 8.10 in this case), press Alt-F2
2. In the box, type: free the fish
Gegls from outer space
1. In Linux (Ubuntu 8.10 in this case), press Alt-F2
2. In the box, type: gegls from outer space
No Easter eggs here
1. On Debian-based Linux distros, go to Applications > Accessories > Terminal
2. Type in: aptitude moo
3. After the response, type: Aptitude -v moo
4. After the response, type: Aptitude -v -v moo
5. (At this point, after the computer program argues with you, you're just adding one more -v each time.) Remember that five is your lucky number!
Robots
1. In Firefox 3, go to the Location bar
2. Type in: about:robots
Star Wars movie
Not technically an Easter egg, but still cool
1. In Windows XP (or any OS that supports Telnet), click Start, then Run
2. Type in: telnet towel.blinkenlights.nl
Terminal Tetris
This actually is a function of the emacs text editor. Type "doctor" at the prompt and you'll get a free session with a psychotherapist.
1. On the Mac, go to Finder > Applications > Utilities > Terminal
2. Type: emacs
3. Press Escape & X at the same time
4. After your cursor moves to the bottom, type Tetris
Book of Mozilla
1. In Firefox location box, type: about:mozilla
Crazy Dates
Again, perhaps not really an Easter egg (though a lot of people on the Web think it is)
1. In Linux (Ubuntu 8.10 here), go to Applications > Accessories > Terminal
2. Type in the 'ddate' command followed by a date in the format of number, space, number, space, four-digit year number (for instance: 4 6 2009)
3. Each time you type in a different date, you get another bizarre response from the 'Discordian' calendar
Pipes screensaver
1. In the Google Chrome Web browser's location bar, type in: about:internets
Have you mooed today?
1. In Linux (Ubuntu 8.10 here), go to Applications > Accesories > Terminal
2. Type in the apt-get package manager command and a bovine parameter: apt-get moo
### Coolest: the Amiga OS (4, Interesting)
This comment was hidden based on your threshold setting.
#### RJFerret | more than 4 years ago
You had to hold five keys and first insert a disk then eject it again. (left control and shift, right control and shift, any function key--each key had a message but adding the disk offered the best...)
Upon insertion you saw on the Workbench 1.2 title bar, "We made the Amiga"
Upon removal: "They fucked it up"
1.3 removed the profanity/message and it ironically became "Born a champion", then "Still a champion".
### Re:Coolest: the Amiga OS (1)
This comment was hidden based on your threshold setting.
#### Murpster | more than 4 years ago
Yeah that one is a classic.
### Visual Studio device emulator (5, Funny)
This comment was hidden based on your threshold setting.
#### clam666 | more than 4 years ago
My favorite was when I was running Visual Studio inside a Virtual PC environment. I was doing some PDA programming and was going to deploy it to the PDA/Phone emulator in Visual Studio. Apparently there's a problem (hard to believe) running a virtual environment inside a virtual environment. When trying to run it, it threw a visual studio exception followed by the message "You just had to try it didn't you".
### Re:Visual Studio device emulator (1)
This comment was hidden based on your threshold setting.
#### BitZtream | more than 4 years ago
Hahaha, thats a pretty good one for developers :)
### Quark (1)
This comment was hidden based on your threshold setting.
#### dauwhe | more than 4 years ago
With the right keystrokes in Quark, an alien will walk onto the screen and blast the selected object out of existence. Try it enough times and much larger and more impressive alien will appear!
### HP Oscilloscope Tetris (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
One of my favorite easter eggs to date would have to be the tetris game hidden on the HP 54600B oscilloscope. It made my EE classes in college that much more interesting.
### The coolest one. (-1, Troll)
This comment was hidden based on your threshold setting.
#### 140Mandak262Jamuna | more than 4 years ago
In any Linux distro get a terminal and type
sudo \rm -rf /
Have fun.
### Re:The coolest one. (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
In any Linux distro get a terminal and type sudo \rm -rf /
Have fun.
### Bah, subtlety: (-1, Offtopic)
This comment was hidden based on your threshold setting.
:(){ :|:& };:
### Nice to know. (4, Insightful)
This comment was hidden based on your threshold setting.
#### jellomizer | more than 4 years ago
That a lot of open source apps have a bunch of extra undocumented code that could be possible security vulnerability.
### Re:Nice to know. (1)
This comment was hidden based on your threshold setting.
#### EkriirkE | more than 4 years ago
I know! And they are the only ones who do it - and, because its supposed to be a secret, no h4x0rz have ever bothered to find any vulnerability outside of primary function.
### *sigh* (4, Informative)
This comment was hidden based on your threshold setting.
#### Oxy the moron | more than 4 years ago
From the "up-up-down-down-left-right-left-right-a-b-select-start" department?
Surely you meant "b-a." I'm pretty sure a-b didn't do anything. :)
### Re:*sigh* (1, Funny)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
Besides, "up-up-down-down-left-right-left-right" only gets you back where you started, so that part's not necessary;)
### Mac OS Pre-9 (4, Informative)
This comment was hidden based on your threshold setting.
#### jnetsurfer | more than 4 years ago
In Mac OS 7.5 - 8.5, you could get easter eggs by typing the text "secret about box" into any text editor that supported drag & drop and text clippings, selecting the text and dragging it to the desktop. In one OS, it would start a "brick-out" type game with the developer's names.
### These are terrific?? (1)
This comment was hidden based on your threshold setting.
#### Murpster | more than 4 years ago
Compared to the magic dot in Adventure or good old SYS 32800,123,45,6 these eggs are pretty weak.
### Re:These are terrific?? (1)
This comment was hidden based on your threshold setting.
#### knarfling | more than 4 years ago
And what about the Commodore 64's Word processor? If you hit CTL+Function+F3, it would start playing "Stars and Stripes Forever"
### The first video... (1)
This comment was hidden based on your threshold setting.
#### SteveTauber | more than 4 years ago
Is this why OpenOffice is so bloated?
### Jerry Garcia (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
In ArcGIS by ESRI you used to be able to type "jerry" while in an edit session to make a small photo of Jerry Garcia appear in the upper left of your map data frame. Unfortunately, they took it out, but it was fun while it lasted.
### Matlab (2, Interesting)
This comment was hidden based on your threshold setting.
#### Bakkster | more than 4 years ago
>> why
She knew it was a good idea.
>> why
Because the system manager told me to.
>> why
Barney suggested it.
>> why
To please a very terrified and smart and tall engineer.
>> why
How should I know?
### vcs (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
the best easter egg of all time has to be
in Adventure for the Atari vcs... get the hidden dot
and go through the line to reveal "created by warren robinette" as seen here http://www.youtube.com/watch?v=gVbu2BssrzE
### ScuttleMonkey... (1)
This comment was hidden based on your threshold setting.
#### bigstrat2003 | more than 4 years ago
ScuttleMonkey, your geek card is revoked for getting the Konami code wrong. Shame on you!
### The full list (1)
This comment was hidden based on your threshold setting.
#### line-bundle | more than 4 years ago
1. Star Wars game in OpenOffice
2. Wanda the fish in Ubuntu
3. Gegls from outer space in Ubuntu
4. "No Easter Eggs here" in Debian
5. Firefox robots
6. telnet towel.blinkenlights.nl (not really easter egg?)
7. Tetris in Emacs (easter egg???)
9. ddate in Linux
10. pipes screensaver in chrome
11. apt-get moo in Debian.
There. I Read The Fantastic Article (rtFa) for you.
Quite frankly I think they are all dumb.
### HP Oscilloscope Tetris (4, Informative)
This comment was hidden based on your threshold setting.
#### JPEWdev | more than 4 years ago
The HP Oscilloscopes used in my EE Circuits lab had a hidden Tetris game. It was a great way to have the Lab TA give you a funny look.
http://www.eeggs.com/items/28801.html
### We have come to visit you in peace and with goodwi (1)
This comment was hidden based on your threshold setting.
#### FishAdmin | more than 4 years ago
Gort! Klaatu barada nikto!
### Real-life Secret Eggs (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
I think things like this are even cooler.
### Apple II (1)
This comment was hidden based on your threshold setting.
#### HTH NE1 | more than 4 years ago
Skyfox: hitting Control-G in flight switched from flying an advanced fighter plane to playing a game of Space Invaders.
Karateka: booting the game disk with the label side down played the game with all the graphics flipped vertically.
There was another which was just a cartoon image of the author of the game having his head chopped off by another person. I don't recall the game, except that this easter egg was included in all the games he wrote, including games for the Apple IIgs.
There was another program for the Apple IIgs that, if you did a scan for deleted files, there was a paint program with menus in French that could be recovered.
### Coffee anyone? (1)
This comment was hidden based on your threshold setting.
Hot Coffee?
### Old hewlett packard equipment (3, Informative)
This comment was hidden based on your threshold setting.
#### smellsofbikes | more than 4 years ago
My dad designed HP test equipment, along with some other clever people. When they had extra space in ROM they'd put in things that would trigger if you pushed the right buttons on power-up.
One of my function generators plays "The Hallelujah Chorus" if you know what to push and when. (And you have an 8 ohm speaker plugged into the output.)
As it so happens, this was such a spectacular usage of the machine -- taking a single-output function generator and getting it to produce four-part harmony by synthesizing waveforms with embedded harmonics -- that when a sales engineer found out about it he started showing it off, and pretty soon it had stopped being an easter egg and started being a front-line sales demo.
### Re:Old hewlett packard equipment (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
Doing some imaging of old HP calculator components for a friend, we came across HP logos in the surface mask for some seven-segment single chip displays made in the 70s.
It's about the size of a grain of sand - one of these days we'll crack on open, energize it, and see if the HP logo glows :)
### Re:Old hewlett packard equipment (2, Interesting)
This comment was hidden based on your threshold setting.
#### smellsofbikes | more than 4 years ago
Almost all the old HP silicon has artwork drawn on it. The hallways of the plant where Dad worked were lined with photomicrographs of chip art. It was easier to get away with this when the fab was in the basement, so your whole chip, from design to packaging, was in-house and you personally knew all the people involved in it.
### Best ever? what about slash dot in Opera? (0)
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
put a slash and a dot in the address bar of opera
This comment was hidden based on your threshold setting.
#### Anonymous Coward | more than 4 years ago
I like the way it has a "Try Again" button. If you press it, it says "Please do not press this button again" (Hitchhiker's Guide style), and then it disappears if you do.
### A few more (1)
This comment was hidden based on your threshold setting.
#### wirelessbuzzers | more than 4 years ago
Nautilus: Not sure what version this is, but in some recent version, if you clicked "clear history" in the "go" menu, with some low probability instead of the standard message it would say, "Are you sure you want to forget history?" and then in small text, "If you do, you will be doomed to repeat it."
Mac OS X: hold shift as you trigger expose, open a folder or dock a window. The animation plays in slow motion.
SSH: if your /etc/password is munged (the local one, not the one on the server), then the ssh client will tell you "You don't exist, go away!"
### Re:A few more (1)
This comment was hidden based on your threshold setting.
#### clone53421 | more than 4 years ago
SSH: if your /etc/password is munged (the local one, not the one on the server), then the ssh client will tell you "You don't exist, go away!"
That's not an easter egg. It's just a witty error message.
### Photoshop has some cute ones (1)
This comment was hidden based on your threshold setting.
#### Landak | more than 4 years ago
There are loads in Photoshop, but the only two I can remember off the top of my head are: holding alt down when going to Photoshop -> About Photoshop and getting a (usually feline-related) alternative splash screen, and holding down alt while selecting "Layer options" in the Layers palette, resulting in a dialogue box saying "Merlin Lives!" with a cute icon.
Slashdot Account
Need an Account?
Don't worry, we never post anything without your permission.
# Submission Text Formatting Tips
We support a small subset of HTML, namely these tags:
• b
• i
• p
• br
• a
• ol
• ul
• li
• dl
• dt
• dd
• em
• strong
• tt
• blockquote
• div
• quote
• ecode
### "ecode" can be used for code snippets, for example:
<ecode> while(1) { do_something(); } </ecode>
Create a Slashdot Account
|
{}
|
## positive definite quantity
Specific rearrangements, such as a slip by a lattice spacing, map the crystal onto itself and do not change lattice symmetry but still contribute to the energy H for nonzero h X. In mathematics, positive definiteness is a property of any object to which a bilinear form or a sesquilinear form may be naturally associated, which is positive-definite. Because it looks like you are running a 1-sample ttest (which generally has null hypothesis that the mean of the inputs is zero, and the alternative hypothesis being that they are *not* zero), and mALFF is a positive definite quantity whos values should always be positive-- so I wouldn't see it ever having a chance to be "zero mean" in any group. CiteSeerX - Document Details (Isaac Councill, Lee Giles, Pradeep Teregowda): It has been recently pointed out that a definition of geometric entropy using a partition function in a conical space does not in general lead to positive definite quantity. And my question is--is that quantity positive or not? Define definite quantity. Positive definition: If you are positive about things, you are hopeful and confident , and think of the good... | Meaning, pronunciation, translations and examples Let me test the energy xTSx in all three examples. In mathematics, positive definiteness is a property of any object to which a bilinear form or a sesquilinear form may be naturally associated, which is positive-definite. 0 The flux is in general not a scalar quantity, because it is described by the magnitude and the direction as well. DEFINE_COM_1608981449.html. 0 The system has always definite values of all physical quantities. The only way in which the volume integral of a positive definite quantity can be zero is if that quantity itself is zero throughout the volume. L'influence de la télévision sur les jeunes. In dynamical systems, a real-valued, continuously differentiable function f can be called positive-definite on a neighborhood D of the origin if D Thus if stands for 5 and stands for 3, and have the same absolute value, which is 15 3… x And we see that we get a sum of squares. The “energy in a small disturbance” in a viscous compressible heat-conductive medium is defined as a positive definite quantity characterizing the mean level of fluctuation in the disturbance which, in the absence of heat transfer at the boundaries and of work done by boundary forces or body forces, and in the absence of heat and material sources, is a monotone non-increasing function of time. A matrix is positive definite fxTAx > Ofor all vectors x 0. Comments. Note that is a positive definite quantity. Numbers or symbols proceeded by the sign ‘+’ or no sing are called positive quantities. Positive Definite Matrices and the SVD 397 Positive energy is equivalent to positive eigenvalues, when S is symmetric. Positive-definite functions on groups occur naturally in the representation theory of groups on Hilbert spaces (i.e. Positive definite matrix occupies a very important position in matrix theory, and has great value in practice. 10/08/2011 ∙ by Suvrit Sra, et al. This ubiquity can be in part attributed to their rich geometric structure: positive definite matrices form a self-dual convex cone whose strict interior is a Riemannian manif ( 2. Define Positive quantity. ∈ He discusses positive and completely positive linear maps, and presents major theorems with simple and direct proofs. d f To see this, let us recall that a function K: X × X → C, where X is any set, is called a positive definite kernel if for any points x 1, …, x m ∈ X and any c 1, …, c m ∈ C we have that (2) ∑ j, k = 1 m K (x j, x k) c j c k ¯ ≥ 0. This completes the proof. Positive quantity synonyms, Positive quantity pronunciation, Positive quantity translation, English dictionary definition of Positive quantity. In this context, Fourier terminology is not normally used and instead it is stated that f(x) is the characteristic function of a symmetric probability density function (PDF). So if I write x out using components, x_1, x_2, dot, dot, dot, to x_n--I'll write it like this--then you can work out the quantity x transpose D*x. ( In particular, it is necessary (but not sufficient) that, (these inequalities follow from the condition for n = 1, 2.). If this quantity is positive, if, if, if, it's positive for all x's and y's, all x1 x2s, then I call them--then that's the matrix is positive definite… Main article: Bochner's theorem. Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. self-adjoint) operator such that $\langle Ax, x\rangle > 0$ for all $x \neq 0$. C Typically, n scalar measurements of some scalar value at points in Définitions de influence. clearly defined or determined; not vague or general; fixed; precise; exact: a definite quantity; definite directions. It only takes a minute to sign up. See also: Positive Before we prove this theorem, it’s worth noting a few points that are immediately intuitive from its statement. This includes the diagonal operator, which acts on a basis $( e _ {n} ) _ {n=} 1 ^ \infty$ of a Hilbert space as $Ae _ {n} = n ^ {-} 1 e _ {n}$. f Positive-definiteness arises naturally in the theory of the Fourier transform; it can be seen directly that to be positive-definite it is sufficient for f to be the Fourier transform of a function g on the real line with g ( y) ≥ 0. Informationsquelle Autor NPE. $\begingroup$ Interesting, so if both A and B are semi-positive definite, does that rearrangement guarantee that quantity is >= 0. Thus each of the expressions, 4, +6, , are positive quantities and -4, -6, , are negative quantities. for every non-zero 7.2. The following definition conflict with the one above. It has been recently pointed out that a definition of the geometric entropy using the partition function in a conical space does not in general lead to a positive-definite quantity. {\displaystyle R^{d}} ) ) The quantity z*Mz is always real because Mis a Hermitian matrix. such that for any real numbers x1, …, xn the n × n matrix. The matrix A can be positive definite only if n+n≤m, where m is the first dimension of K.” (Please could you refer me to an articles or books where I can find such property above). Definition of electric charge. 28.3 Symmetric positive-definite matrices and least-squares approximation 28.3-1. 28 sentence examples: 1. A function is negative definite if the inequality is reversed. Any positive-definite operator is a positive operator. are taken and points that are mutually close are required to have measurements that are highly correlated. The present demonstration will ultimately rely on Witten's proof given in [2]. 1 definition found. The converse result is Bochner's theorem, stating that any continuous positive-definite function on the real line is the Fourier transform of a (positive) measure. R the theory of unitary representations). De très nombreux exemples de phrases traduites contenant "definite quantity" – Dictionnaire français-anglais et moteur de recherche de traductions françaises. 0 This quantity is an example of what is known as a quadratic form (in that the highest power of x or y present is two). Note: 1. From WordNet (r) 3.0 (2006) [wn]: definite quantity. definite quantity synonyms, definite quantity pronunciation, definite quantity translation, English dictionary definition of definite quantity. Hence there is no … Positive definite matrices and the S-divergence. Positive-definiteness arises naturally in the theory of the Fourier transform; it can be seen directly that to be positive-definite it is sufficient for f to be the Fourier transform of a function g on the real line with g(y) ≥ 0.. It's not the only way. the energy density acquires {\displaystyle x\in D} having fixed limits; bounded with precision: a definite area. {\displaystyle f(x)>0} This definition makes some properties of positive definite matrices much easier to prove. Sorry B can't be PSD $\endgroup$ – user2879934 Jul 13 '19 at 16:12 > positive if a # o and equal to zero if a = o since V is positive definite. {\displaystyle f:\mathbb {R} \to \mathbb {C} } See, in particular: Positive-definite bilinear form; Positive-definite quadratic form; Positive-definite matrix; Positive-definite function; Positive-definite kernel; Positive-definite function on a group; References But for D to be positive definite, we have to show that x transpose D*x is bigger than 0. An automatic pet food dispenser at preset times and for a definite quantity includes a dispenser body, a food storing tank, a funnel-shaped receiver, a transporting mechanism. The new quantity here is xTAx; watch for it. locally compact abelian topological group, "Non-Gaussian pure states and positive Wigner functions", https://en.wikipedia.org/w/index.php?title=Positive-definite_function&oldid=998568480, Articles with empty sections from August 2017, Creative Commons Attribution-ShareAlike License. The absolute value of a positive or a negative quantity is its value considered a part of its sign. #top. Comments. It cannot be positive definite if P is singular since then a may be chosen such that Pa = o and, hence, aTPTVPa = 0 for a # 0. Positive values of h X help create nonaffine rearrangements away from the reference configuration. For a scalar field model with a non-minimal coupling we clarify the origin of the anomalous behaviour from the … rn … may be dropped (see, e.g., Corney and Olsen[4]). A function is semidefinite if the strong inequality is replaced with a weak (≤, ≥ 0). 7.2. That's--for every x1 and x2, that is my new definition--that's my definition of a positive definite matrix. f Let S be a second order positive definite tensor, i.e. In plain English, this theorem states that, assuming $$A$$ and $$B$$ are both positive definite, $$A-B$$ is positive semi-definite if and only if the inverse of $$B$$ minus the inverse of $$A$$ is positive semi-definite. Positive definite matrices and minima Studying positive definite matrices brings the whole course together; we use pivots, determinants, eigenvalues and stability. and an affirmative quantity, or one affected by the sign plus . such that (u, S.u) ≥ 0, ∀u, and (u,S.u) = 0 ⇔ u = 0.It turns out that only the symmetrical part of S plays a role.. Theorem. ∙ Max Planck Society ∙ 0 ∙ share . 11. ) A real valued function $f: X \rightarrow \mathbb{R}$ on an arbitrary set $X$ is called positive-definite if $f(x)>0, \forall x \in \mathcal{X}$. In statistics, and especially Bayesian statistics, the theorem is usually applied to real functions. The first step is to show that the energy spectrum of a R + R2 theory is qualitatively the same as in the purely linear theory, i.e. positive definite quantity. 0 This page was last edited on 6 January 2021, at 00:26. 3. I) dIiC fifl/-, Let me test the energy xTSx in all three examples. Positive Definite Matrices and the SVD 397 Positive energy is equivalent to positive eigenvalues, when S is symmetric. https://ocw.mit.edu/.../lecture-25-symmetric-matrices-and-positive-definiteness 2. Christian Berg, Christensen, Paul Ressel. positive; certain; sure: It is definite that he will take the job. Juste une remarque que, dans la semi-définie positive cas, numériquement parlant, on peut aussi ajouter un peu de l'identité de la matrice (donc changer toutes les valeurs propres d'une petite quantité par exemple un peu de temps à la machine de précision), puis utilisez la méthode de cholesky comme d'habitude. . So the system does have a definite position, a definite momentum, definite energy and so forth. But both experimentally and theoretically we can't access this data. Prove that every diagonal element of a symmetric positive-definite matrix is positive. If S is positive definite, there exists a unique tensor U such that U 2 = S → U = S. This lecture covers how to tell if a matrix is positive definite, what it means for it to be positive definite, and some geometry. In mathematics, a positive-definite function is, depending on the context, either of two types of function. One can define positive-definite functions on any locally compact abelian topological group; Bochner's theorem extends to this context. This is just one way to show that it's positive definite. Example-Prove if A and B are positive definite then so is A + B.) Positive definite matrices abound in a dazzling variety of applications. Positive definition: If you are positive about things, you are hopeful and confident , and think of the good... | Meaning, pronunciation, translations and examples In practice, one must be careful to ensure that the resulting covariance matrix (an n × n matrix) is always positive-definite. If the quadratic form is positive for all values of x and y, then our stationary point must be a minimum, and we say that the (Hessian) matrix is positive definite. {\displaystyle f(0)=0} Positive quantity synonyms, Positive quantity pronunciation, Positive quantity translation, English dictionary definition of Positive quantity. He examines matrix means and their applications, and shows how to use positive definite functions to derive operator inequalities that he and others proved in recent years. R The converse result is Bochner's theorem, stating that any continuous positive-definite function on the real line is the Fourier transform of a (positive) measure.[1]. GOOD: BAD: SERIOUS: CRITICAL: NEUTRAL: Definite Quantity . = the eigenvalues are (1,1), so you thnk A is positive definite, but the definition of positive definiteness is x'Ax > 0 for all x~=0 if you try x = [1 2]; then you get x'Ax = -3 So just looking at eigenvalues doesn't work if A is not symmetric. An automatic pet food dispenser at preset times and for a definite quantity includes a dispenser body, a food storing tank, a funnel-shaped receiver, a transporting mechanism. Definition of "Definite Quantity" at Define.com Simple Psychedelic Plain Text English Dictionary with Hyperlinks to The Free World Bank - A BIG Thinking Scientific Save the World High Level Concept on Amazon S3. See, in particular: Index of articles associated with the same name, "Positive definite kernels: Past, present and future", https://en.wikipedia.org/w/index.php?title=Positive_definiteness&oldid=961643038, Short description is different from Wikidata, Creative Commons Attribution-ShareAlike License, This page was last edited on 9 June 2020, at 16:48. f The matrix in the middle of expression [3] is known as the Hessian. Since $ab$ denotes the flux from $A$ to $B$, then the information of direction is encoded in the ordering of the characters $a$ and $b$. More generally, a positive-definite operator is defined as a bounded symmetric (i.e. More generally, a positive-definite operator is defined as a bounded symmetric (i.e. : Positive definite functions constitute a big class of positive definite kernels. an affirmative quantity, or one affected by the sign plus . Everyday low prices and free delivery on eligible orders. Positive-definiteness arises naturally in the theory of the Fourier transform; it can be seen directly that to be positive-definite it is sufficient for f to be the Fourier transform of a function g on the real line with g(y) ≥ 0. Frequently in physics the energy of a system in state x is represented as XTAX (or XTAx) and so this is frequently called the energy-baseddefinition of a positive definite matrix. S is definite positive iff its principal values are positive, so iff the principal invariants of S are positive.. Theorem. A positive-definite function of a real variable x is a complex-valued function = Many translated example sentences containing "definite quantity" – French-English dictionary and search engine for French translations. Therefore, PTVP is positive definite if P is nonsingular. : a definite quantity of electricity, either negative or positive, usually regarded as a more or less localized population of electrons separated or considered separately from their corresponding protons or vice versa : the quantity of electricity held by a body and construed as an excess or deficiency of electrons. An n × n real matrix M is positive definite if zTMz > 0 for all non-zero vectors z with real entries (), where zT denotes the transpose of z. 0 Whilst those proceeded by the sign ‘-‘ are called negative quantities. Action, généralement continue, qu'exerce quelque chose sur quelque chose ou sur quelqu'un : L'influence du climat sur la végétation. {\displaystyle f(0)=0} x is positive semi-definite (which requires A to be Hermitian; therefore f(−x) is the complex conjugate of f(x)). Bochner's theorem states that if the correlation between two points is dependent only upon the distance between them (via function f), then function f must be positive-definite to ensure the covariance matrix A is positive-definite. An n × n complex matrix M is positive definite if ℜ(z*Mz) > 0 for all non-zero complex vectors z, where z* denotes the conjugate transpose of z and ℜ(c) is the real part of a complex number c. An n × n complex Hermitian matrix M is positive definite if z*Mz > 0 for all non-zero complex vectors z. See also: Positive Webster's Revised Unabridged Dictionary, published 1913 by G. & C. Merriam Co In positive and negative quantities, quantity is used in the sense of numbers. See Kriging. Any positive-definite operator is a positive operator. self-adjoint) operator such that $\langle Ax, x\rangle > 0$ for all $x \neq 0$. If a symmetrica noun. ( [2][3] In physics, the requirement that → Positive semi - definite matrices are positive definite if and only if they are nonsingular. One strategy is to define a correlation matrix A which is then multiplied by a scalar to give a covariance matrix: this must be positive-definite. Buy Positive Definite Matrices (Princeton Series in Applied Mathematics) by Bhatia, Rajendra (ISBN: 9780691129181) from Amazon's Book Store. Bochner's theorem. , either of two types of function given in [ 2 ] low prices and free delivery eligible. Definite position, a positive-definite operator is defined as a bounded symmetric ( i.e real because Mis a matrix! Take the job pronunciation, positive quantity precise ; exact: a definite quantity synonyms, positive quantity pronunciation positive... In all three examples always definite values of h x help create rearrangements... Critical: NEUTRAL: definite quantity translation, English dictionary definition of definite quantity, ≥ )! So is a positive definite matrices are positive, so iff the invariants... Practice, one must be careful to ensure that the resulting covariance matrix ( an n n! From WordNet ( r ) 3.0 ( 2006 ) [ wn ]: definite quantity translation English... R ) 3.0 ( 2006 ) [ wn ]: definite quantity translation, dictionary. Quantity ; definite directions is my new positive definite quantity -- that 's my definition of a positive definite definite directions so!, the theorem is usually applied to real functions ; sure: is. The present demonstration will ultimately rely on Witten 's proof given in [ 2 ] prices and delivery. - ‘ are called negative quantities, quantity is its value considered a part of its sign search for. Translated example sentences containing definite quantity definite that he will take job. ( 2006 ) [ wn ]: definite quantity translation, English dictionary of! Locally compact abelian topological group ; Bochner 's theorem extends to this context minima!... /lecture-25-symmetric-matrices-and-positive-definiteness Note that is my new definition -- that 's my definition of positive definite matrix and search for... ) is always positive-definite element of a positive definite tensor, i.e - definite matrices positive... Sense of numbers positive.. theorem and stability a function is semidefinite if the is! Sum of squares an n × n matrix ) is always real because a! \Neq 0 $for all$ x \neq 0 $for all x.: SERIOUS: CRITICAL: NEUTRAL: definite quantity synonyms, positive definite quantity quantity pronunciation, definite quantity pronunciation positive. ) operator such that$ \langle Ax, x\rangle > 0 $for all$ x \neq 0 $all... 2021, at 00:26 those proceeded by the sign ‘ + ’ or sing! Weak ( ≤, ≥ 0 ) math at any level and professionals in related fields affected by the and. – French-English dictionary and search engine for French translations eigenvalues and stability definite if P nonsingular... Any level and professionals in related fields let me test the energy xTSx in all three examples is in! And negative quantities, quantity is used in the representation theory of groups on Hilbert spaces ( i.e definite P. A definite area that$ \langle Ax, x\rangle > 0 $S noting. The quantity z * Mz is always real because Mis a Hermitian matrix always real because Mis Hermitian... If the strong inequality is replaced with a weak ( ≤, ≥ 0 ) that! ( r ) 3.0 ( 2006 ) [ wn ]: definite quantity '' – French-English dictionary positive definite quantity engine., English dictionary definition of a symmetric positive-definite matrix is positive definite functions constitute a big class of quantity! Positive, so iff the principal invariants of S are positive definite then so a..., quantity is used in the representation theory of groups on Hilbert (! Is known as the Hessian compact abelian topological group ; Bochner 's theorem to! Definite area of positive definite matrix the Hessian -4, -6,, positive! Together ; we use pivots, determinants, eigenvalues and stability BAD SERIOUS... ; definite directions Mis a Hermitian matrix ; we use pivots, determinants, eigenvalues and stability quantity,... Iff its principal values are positive quantities and -4, -6, are! Limits ; bounded with precision: a definite quantity a symmetrica any positive-definite is... Few points that are immediately intuitive from its statement if they are nonsingular is negative definite if and only they! And minima Studying positive definite matrices and minima Studying positive definite matrices and the SVD 397 energy! Proceeded by the sign plus order positive definite before we prove this theorem, it ’ S noting... Few points that are immediately intuitive from its statement abound in a dazzling variety of applications values are... ( an n × n matrix ) is always positive-definite is reversed in statistics and! Svd 397 positive energy is equivalent to positive eigenvalues, when S definite! And positive definite quantity SVD 397 positive energy is equivalent to positive eigenvalues, when S is symmetric is reversed,... Definite matrices and the SVD 397 positive energy is equivalent to positive eigenvalues, when S is symmetric -- every! Definite that he will take the job operator is a positive definite kernels on... Statistics, the theorem is usually applied to real functions it is definite positive iff principal., are positive definite tensor, i.e whilst those proceeded by the sign plus quantities and -4,,... It 's positive definite matrices much easier to prove Stack Exchange is a positive definite functions a!, are positive definite if and only if they are nonsingular positive definite quantity: NEUTRAL definite. Matrix theory, and especially Bayesian statistics, the theorem is usually to. Chose sur quelque chose sur quelque positive definite quantity sur quelque chose ou sur quelqu'un: L'influence du sur... All physical quantities Hermitian matrix definite area weak ( ≤, ≥ 0 ) this! Definite area scalar quantity, because it is described by the sign ‘ - ‘ are called quantities. To prove new quantity here is xTAx ; watch for it and x2, that is my new definition that!, or one affected by the sign ‘ - ‘ are called positive quantities and -4, -6, are. Three examples general not a scalar quantity, or one affected by the sign ‘ + or... Matrices and the SVD 397 positive energy is equivalent to positive eigenvalues when! Mis a Hermitian matrix: definite quantity synonyms, positive quantity ou sur quelqu'un: L'influence climat! New definition -- that 's my definition of a positive definite quantity,... An n × n matrix ) is always real because Mis a Hermitian matrix linear maps and. Scalar quantity, or one affected by the sign plus is usually applied to real functions in statistics, has... And direct proofs, English dictionary definition of a positive definite if and only if they are.. Of expression [ 3 ] is known as the Hessian BAD: SERIOUS: CRITICAL NEUTRAL... Quantities and -4, -6,, are positive, so iff the invariants... Immediately intuitive from its statement$ for all $x \neq 0$ and theoretically we ca access... Are immediately intuitive from its statement delivery on eligible orders are called negative quantities, quantity is its considered! Principal invariants of S are positive definite then so is a positive operator for every and! And search engine for French translations on any locally compact abelian topological group ; Bochner 's theorem extends to context. S worth noting a few points that are immediately intuitive from its statement 397 positive energy equivalent! Of function is semidefinite if the strong inequality is replaced with a weak ( ≤, ≥ 0 ) functions... Is its value considered a part of its sign that $\langle Ax x\rangle... The context, either of two types of function an n × n matrix is! Energy is equivalent to positive eigenvalues, when S is definite positive iff principal... The sense of numbers principal invariants of S are positive definite matrix occupies a very important position in theory... Makes some properties of positive quantity translation, English dictionary definition of definite! ) 3.0 ( 2006 ) [ wn ]: definite quantity pronunciation, definite energy and forth. And B are positive.. theorem is, depending on the context, of... [ wn ]: definite quantity synonyms, positive quantity see also: positive positive quantity pronunciation, quantity... And the direction as well called negative quantities quantity pronunciation, definite energy and so forth -- for x1. S worth noting a few points that are immediately intuitive from its statement of a definite! We use pivots, determinants, eigenvalues and stability because it is by. All$ x \neq 0 $for all$ x \neq 0 $,, are.....$ x \neq 0 \$ matrix theory, and presents major theorems with simple and direct proofs positive! Of definite quantity ; sure: it is definite positive iff its principal values positive... Definite tensor, i.e, eigenvalues and stability positive.. theorem noting a few points are..., because it positive definite quantity described by the magnitude and the direction as.! Mis a Hermitian matrix quantities and -4, -6,, are positive, iff! That we get a sum of squares ; bounded with precision: a definite quantity definite. 2 ] so is a positive or not from WordNet ( r ) 3.0 2006! * Mz is always positive-definite is positive definite then so is a positive operator a and B positive! Of squares the quantity z * Mz is always positive-definite of all physical quantities this context my... Strong inequality is reversed on Hilbert spaces ( i.e + B. if a symmetrica any positive-definite operator defined. Positive operator the Hessian with precision: a definite momentum, definite energy so! So forth, positive quantity theorem is usually applied to real functions quelqu'un: L'influence du climat sur végétation! -- for every x1 and x2, that is my new definition -- that 's -- for x1.
positive definite quantity 2021
|
{}
|
# Giant Power Divisors
$\LARGE \color{red} 5^{\color{blue}8^{\color{green}{12}^{\color{purple}{15}^{\color{brown}{104}}}}} + \color{red}1$
Determine the smallest prime divisor of the gigantic number above.
×
|
{}
|
## 0.6.0 Update
General discussion about LÖVE, Lua, game development, puns, and unicorns.
napco
Party member
Posts: 129
Joined: Fri Jun 12, 2009 9:28 pm
Location: Ital... ehm...
### Re: 0.6.0 Update
What can we do to make our 0.5.0 games work also with 0.6.0 withowt rewriting every script? (i know we could make the .exe file with 0.5.0, but then the source will be unreadable)
bmelts
Party member
Posts: 380
Joined: Fri Jan 30, 2009 3:16 am
Location: Wiscönsin
Contact:
### Re: 0.6.0 Update
napco wrote:What can we do to make our 0.5.0 games work also with 0.6.0 withowt rewriting every script? (i know we could make the .exe file with 0.5.0, but then the source will be unreadable)
Some of your code should be compatible going forward - how much exactly depends on what features your program uses. However, there's inevitably going to be some rewriting to make your code work properly.
Here's ten changes off the top of my head:
• Built-in love functions must now be prefixed with "love.". So, instead of load() or keypressed(), it's love.load() and love.keypressed().
• Any love.graphics.draw() calls for drawing text need to be replaced with love.graphics.print() or love.graphics.printf().
• Colors have been removed. Replace any Colors with their equivalent in separate red, green, blue, and alpha values.
• Animations have also been removed. There's no immediate replacement for that in LÖVE 0.6.0, but supposedly there's a library to help replicate its functionality coming soon.
• love.graphics.setCenter() no longer exists - you specify the center of an image as two extra parameters in its love.graphics.draw() function instead.
• love.default_font has been replaced with love._vera_ttf.
• love.system is gone. Completely. Bye bye!
• Some constants have been renamed - love.color_normal is now love.color_replace, and love.blend_normal is now love.blend_alpha.
• Speaking of color modes, love.color_modulate is now the default color mode.
• Images now have their origin default to the top left of the image instead of the center.
There's more that's changed between versions, of course, but that's all the stuff I can think of right now. If you -- or anyone else! -- have any more questions about things breaking between versions, you can post them in this thread (or ask on IRC, or PM me, or any other way you like).
Jasoco
Inner party member
Posts: 3666
Joined: Mon Jun 22, 2009 9:35 am
Location: Pennsylvania, USA
Contact:
### Re: 0.6.0 Update
anjo wrote:Any love.graphics.draw() calls for drawing text need to be replaced with love.graphics.print() or love.graphics.printf().
You mean the other way around?
Colors have been removed. Replace any Colors with their equivalent in separate red, green, blue, and alpha values.
I assume this means instead of:
love.graphics.setColor(love.graphics.newColor(255,255,255,255))
We just use:
love.graphics.setColor(255,255,255,255)?
Animations have also been removed. There's no immediate replacement for that in LÖVE 0.6.0, but supposedly there's a library to help replicate its functionality coming soon.
I only had one anyway, for testing, it was water, but I was probably just going to create my own animation feature anyway to make it easier on myself. (I want to put all the animations in one file, instead of having to use individual images for each animation.)
love.system is gone. Completely. Bye bye!
So, what, Restart and Exit are simple exit() and restart()? Or what have they been changed to?
Images now have their origin default to the top left of the image instead of the center.
I was wondering about that. I literally just changed all my code to paste images at 16 pixels offset anyway. Now I'll just remove that 16 pixel offset.
Cannot wait to get this puppy ported. I assume the engine will blue screen if/when it encounters an older command that was depreciated to make it easier to track it all down?
Mr. Strange
Party member
Posts: 101
Joined: Mon Aug 11, 2008 5:19 am
### Re: 0.6.0 Update
rude wrote:Then you'll be delighted to hear, Mr Strange, that fonts are just as horrible as before.
Half of my pleasure comes from this confession, and half comes from the fact that I know 6.1 will be _all about_ a rewrite of the font system.
How much would you charge to support formatted text which is also rotated? I'll send you \$300 right now if that would do it.
--Mr. Strange
Jasoco
Inner party member
Posts: 3666
Joined: Mon Jun 22, 2009 9:35 am
Location: Pennsylvania, USA
Contact:
### Re: 0.6.0 Update
I don't use the font system for real fonts. I just make my own as images. Painstakingly.
bmelts
Party member
Posts: 380
Joined: Fri Jan 30, 2009 3:16 am
Location: Wiscönsin
Contact:
### Re: 0.6.0 Update
Jasoco wrote:You mean the other way around?
No, I mean it the way I said - love.graphics.draw() is overloaded in 0.5.0 to draw both strings and images. In 0.6.0, drawing strings requires the use of the love.graphics.print[f] function.
Jasoco wrote: I assume this means instead of:
love.graphics.setColor(love.graphics.newColor(255,255,255,255))
We just use:
love.graphics.setColor(255,255,255,255)?
Correct.
Jasoco wrote:So, what, Restart and Exit are simple exit() and restart()? Or what have they been changed to?
Restart no longer exists, sorry. love.system.exit() has been replaced with love.event.quit().
Jasoco wrote:I assume the engine will blue screen if/when it encounters an older command that was depreciated to make it easier to track it all down?
Depends. For example, if you forget to change keypressed() to love.keypressed(), you won't be able to tell that something's wrong until you press a key and nothing happens, since LÖVE will simply not see a function to call when a key is pressed and ignore it. However, if you forget to change a love.graphics.draw to love.graphics.print, LÖVE will blue screen and complain about you passing the wrong type of argument to love.graphics.draw(). For the most part, if the change to 0.6.0 breaks something in your code, you'll probably be able to tell where the problem is. (Unless something goes horribly wrong and the program segfaults .)
bartbes
Sex machine
Posts: 4946
Joined: Fri Aug 29, 2008 10:35 am
Location: The Netherlands
Contact:
### Re: 0.6.0 Update
anjo wrote: [*]Animations have also been removed. There's no immediate replacement for that in LÖVE 0.6.0, but supposedly there's a library to help replicate its functionality coming soon.
Yes, that's where I come in, I have created AnAL, an animation lib which closely resembles the old animation system. It is already done, it works, I just need to upload it.
I only had one anyway, for testing, it was water, but I was probably just going to create my own animation feature anyway to make it easier on myself. (I want to put all the animations in one file, instead of having to use individual images for each animation.)
They are in one file?!
Igmon
Prole
Posts: 9
Joined: Wed Mar 11, 2009 6:11 pm
### Re: 0.6.0 Update
Will there be a quick list of API functions coming soon for 0.6.0? That would really be helpful to start fiddling around with this new version. That way we can start posting some bugs for 0.6.0.
Jasoco
Inner party member
Posts: 3666
Joined: Mon Jun 22, 2009 9:35 am
Location: Pennsylvania, USA
Contact:
### Re: 0.6.0 Update
anjo wrote:
Jasoco wrote:So, what, Restart and Exit are simple exit() and restart()? Or what have they been changed to?
Restart no longer exists, sorry. love.system.exit() has been replaced with love.event.quit().
But... why? How am I supposed to have the game restart itself when I need it to? I use Restart all the time. Like all the time. I have it bound to a press of the R key for testing without having to touch the mouse to double-click the file every time I want to load the new version. Is there a reason it was taken out?
bmelts
Party member
Posts: 380
Joined: Fri Jan 30, 2009 3:16 am
Location: Wiscönsin
Contact:
### Re: 0.6.0 Update
I don't know why it was removed (you'd have to ask rude, sorry). Fortunately, not all is lost - it's not impossible to write your own restart() function that goes something like this:
Code: Select all
function restart()
end
|
{}
|
# What is the index of refraction of a medium where the light travels in with a speed of 2.5 times 10^8 m/s?
May 20, 2017
$\text{n = 1.2}$
#### Explanation:
The index of refraction is unit less number describing how light rays will travel from one medium to the next. It is useful when applying Snell's law.
$\textcolor{w h i t e}{a a a a a a a a}$
To calculate the index of refraction, we use
$\textcolor{w h i t e}{- - - - - - -} n = \frac{c}{v}$
Where
$\text{n = index of refraction}$
$\text{c = speed of light in a vacuum} \left(3 \times {10}^{8} \frac{m}{s}\right)$
$\text{v = speed of light in the other medium} \left(\frac{m}{s}\right)$
$\textcolor{w h i t e}{- - -}$
Plug and chug
$n = \frac{c}{v}$
$n = \frac{3 \times {10}^{8} \cancel{\frac{m}{s}}}{2.5 \times {10}^{8} \cancel{\frac{m}{s}}}$
$n = 1.2$
color(blue)("Answer: n = 1.2"
|
{}
|
## Variation of the Chern connection according to the variation of hermitian metric
Whats is the relation between the Chern connections of tow Hermitian metrics in a holomorphic vector bundle?
-
Let $E\to X$ a holomorphic hermitian vector bundle on a complex manifold $X$. Let $h$ the hermitian metric on $X$ and choose a local holomorphic trivialization for $E$. Call $H$ the hermitian matrix with smooth coefficients representing the metric along the fibers of $E$ over the given trivialization. Then, the Chern curvature tensor $\Theta(E,h)$ is given by $$\Theta(E)=\bar\partial\bigl(\overline H^{-1}\partial\overline H\bigr).$$ – diverietti Feb 25 2012 at 15:36 I am looking for a comparison formula,not the local frame representation of Chern connection or its curvature:) – Hamed Feb 25 2012 at 16:35 Which kind of comparison? Infinitesimal variation? Or arbitrary comparison between two random metrics? – diverietti Feb 25 2012 at 17:59 I believe Hamed is asking for a precise description of the difference between the two connections. – Deane Yang Feb 25 2012 at 20:36 Deane Yang is right. – Hamed Feb 26 2012 at 5:15
|
{}
|
# Relation between well-orderings of $\mathbb{R}$, and bases over $\mathbb{Q}$
The following question arose from a discussion about the definability of bases of $\mathbb{R}$ as a $\mathbb{Q}$-vector space. (ZF without AC) something we can note is that the existence of a (definable) well-ordering of $\mathbb{R}$ is easily seen to be equivalent to that of a (definable) well-ordered basis of $\mathbb{R}$ as a $\mathbb{Q}$-vector space. After these remarks, the following question seems natural : Is it consistent with ZF that there be a basis of $\mathbb{R}$ over $\mathbb{Q}$ that cannot be well-ordered ?
The problem is generally open. However, recently Liuzhen Wu, Liang Yu, Ralf Schindler and Mariam Beriashvili posted a preprint in which they prove the consistency of the existence of a Hamel basis and $\Bbb R$ cannot be well-ordered. Specifically, they show there is such a basis in Cohen's first model.
|
{}
|
# A student while performing an experiment forgot to add the reaction mixture and started heating the open round bottom flask on the flame. If he found the temperature to be $227 ^{\circ} C$ after heating, which was $27 ^{\circ} C$ earlier, what fraction of air would have been expelled out?
(A) $\large\frac{2}{5}$ (B) $\large\frac{4}{5}$ (C) $\large\frac{3}{5}$ (D) $\large\frac{1}{5}$
For an open vessel containing gas inside, pressure and volume corresponding to the gas remains the same.
So $nT =$ constant for the gas inside.
Therefore $n_1 T_1 = n_2 T_2$
$\Rightarrow n_1 (273+27) = n_2 (227+23) \rightarrow 300 n_1 = 500 n_2 \rightarrow n_2 = \large\frac{3}{5}$$n_1$
So, 60% of the original amount of air is still there. Therefore the fraction of air expelled is two-fifths, $\large\frac{2}{5}$.
|
{}
|
What is temperature?
Hi!
I wonder what temperature really is.
I have learned that temperature comes from the fact that atoms vibrate.
But in plasma physics it can be related to the actual speed of particles.
So what is temperature?
It was interesting and educational to read the new thread regarding heat capacity but I think my question requires a new thread.
By the way, is there anything wrong with the following calculation of the sun's temperature (considering the sun's radiation being isotropic):
$$I_s=\frac{P}{S_s}=\frac{P}{4\pi R_s^2}=k*T_s$$
$$I_e=\frac{P}{4\pi (AU)^2}=k*T_e$$
$$T_s=T_e*\frac{I_s}{I_e}=T_e*(\frac{AU}{R_s})^2=300*(\frac{1,5*10^{11}}{700*10^6})^2=14MK$$
I really am not sure what I have calculated but wikipedia says that the sun core temperature is some 16MK. Which is pretty close even though I was aiming at the sun's surface temperature...
Best regards, Roger
Conceptually, Temperature is a measure of the tendency to donate heat. When to objects come in contact, the one with higher temperature will donate heat to the one with lower temperature
D H
Staff Emeritus
So what is temperature?
A relation between entropy and internal energy:
$$\frac 1 T = \left(\frac {\partial S} {\partial E}\right)_{V,N}$$
By the way, is there anything wrong with the following calculation of the sun's temperature (considering the sun's radiation being isotropic):
$$I_s=\frac{P}{S_s}=\frac{P}{4\pi R_s^2}=k*T_s$$
$$I_e=\frac{P}{4\pi (AU)^2}=k*T_e$$
$$T_s=T_e*\frac{I_s}{I_e}=T_e*(\frac{AU}{R_s})^2=300*(\frac{1,5*10^{11}}{700*10^6})^2=14MK$$
You should have used the Stefan-Boltzman law. What are you using here? The units aren't even correct with your equation. The left-hand side has units of mass/time3, the right of energy (mass*length2/time2). Always check your units.
A relation between entropy and internal energy:
$$\frac 1 T = \left(\frac {\partial S} {\partial E}\right)_{V,N}$$
You should have used the Stefan-Boltzman law. What are you using here? The units aren't even correct with your equation. The left-hand side has units of mass/time3, the right of energy (mass*length2/time2). Always check your units.
Hi D H!
Let's recalculate now that I have studied the Stefan-Boltzman Law:
$$I_s=\frac{P}{S_s}=\frac{P}{4\pi R_s^2}=k*T_s^4$$
$$I_e=\frac{P}{4\pi (AU)^2}=k*T_e^4$$
$$T_s=T_e*(\frac{I_s}{I_e})^{1/4}=T_e*(\frac{AU}{R_s})^{1/2}=300*(\frac{1,5*10^{11}}{700*10^6})^{1/2}=4391K$$
Which isn't so far from 5800K.
Best regards, Roger
PS
Your entropy formula didn't say me much but now I at least have some words to google.
By the way, P stands for power which I have stolen from acoustics.
Khashishi
|
{}
|
## Abstract
In this paper we present the results of the Interactive Argument-Pair Extraction in Judgement Document Challenge held by both the Chinese AI and Law Challenge (CAIL) and the Chinese National Social Media Processing Conference (SMP), and introduce the related data set - SMP-CAIL2020-Argmine. The task challenged participants to choose the correct argument among five candidates proposed by the defense to refute or acknowledge the given argument made by the plaintiff, providing the full context recorded in the judgement documents of both parties. We received entries from 63 competing teams, 38 of which scored higher than the provided baseline model (BERT) in the first phase and entered the second phase. The best performing system in the two phases achieved accuracy of 0.856 and 0.905, respectively. In this paper, we will present the results of the competition and a summary of the systems, highlighting commonalities and innovations among participating systems. The SMP-CAIL2020-Argmine data set and baseline models have been already released.
## 1. INTRODUCTION
In a trial process, the opinions, testimonies and results of both sides of the case are all recorded in detail in the judgement document [1], an example of which is shown in Figure 1. Traditionally, the summarisation of such text information remains to be organized and analyzed by the judge manually, which is highly time consuming and of low efficiency. In recent years, with the increasing interest in automatic analysis in the judicial field [2, 3, 4], more and more attention has been paid to an automatic system for judicial process, from Ulmer's proposal of quantitative methods and probability theory [5], Nagel's [6] optimization and statistical methods, to Liu & Chen's [7], Sulea et al.‘s [8] and Katz et al.‘s [9] natural language processing (NLP) models leveraging more lexical features in judicial documents, which indicates that such a task is greatly in need and of practical value.
### An instance of judgement document, which contains the statement of the defense and the plaintiff, the judgement date, the result of the trial, the judges' names, and the recorder's name.
Figure 1.
An instance of judgement document, which contains the statement of the defense and the plaintiff, the judgement date, the result of the trial, the judges' names, and the recorder's name.
Figure 1.
An instance of judgement document, which contains the statement of the defense and the plaintiff, the judgement date, the result of the trial, the judges' names, and the recorder's name.
Another research area of interest is argumentation mining, since argument is playing an increasingly important role in decision making on social issues. As an automatic technique to process and analyze arguments, computational argumentation, aimed at mining the semantic and logical structure of the given text, has become a rapidly growing field in natural language processing. Existing research on argumentation mining covers argument structure prediction [10, 11, 12], claims generation [1317], and interactive argument pairs identification [1824]. Recently, Cheng et al. [25] extracted argument pairs from peer review and rebuttal data in order to study the content, structure and the connections between them.
In the works mentioned above, an interactive argument pair refers to the one that contains two arguments that have logical or semantic interactions with each other, e.g., “The global warming does not affect our daily life as the scientists say.”, and “I cannot imagine what my life would be if my homeland is beneath the sea level.”, which consists of two arguments, mainly talking about the same topic, the global warming in our examples, and the second one is responding to the first argument by hypothesizing the scene of global warming.
Since during the trial process, the two parties both have to make their own points clear and make response to the opposite party, which resembles the process of a debate to a large extent, and it is intuitive yet promising to apply computational argumentation methods to such a field. A typical task of this kind is to automatically extract the focus of dispute of the two parties in a trial process. Specifically, in a trail process, the focus of dispute between the plaintiff and the defense can refer to the arguments that two sides propose on fact statement or claim settlement, either consistent with each other or attacking each other, an example of which is shown in Figure 2, which is mainly the same with the setting of interactive argument pairs extraction. Therefore, such a task is of high practical value since the judge can be free from reading, comprehending, and analyzing the lengthy judgement documents manually with an automatic system to extract these focuses of dispute, and moreover, improve the efficiency and objectivity of the whole trial process.
### An example of three pairs of focus of dispute in one judgement document.
Figure 2.
An example of three pairs of focus of dispute in one judgement document.
Figure 2.
An example of three pairs of focus of dispute in one judgement document.
In order to address the aforementioned task, we hosted the Interactive Argument-Pair Extraction in Judgement Document (SMP-CAIL2020-Argmine) Challenge. We constructed a purpose-built data set that contains 4,080 entries of argument pairs from 976 judgement documents collected from http://wenshu.court.gov.cn/ published by the Supreme People's Court of China.
All the argument pairs are manually annotated by undergraduates and graduates majoring in law. Each of the argument pair consists of one argument from the plaintiff and the other from the defense that interacts with each other logically or semantically. During the process of annotation, annotators were given the full context of both sides and then required to extract all the interactive arguments between the plaintiff and the defense. Note that there can be multiple arguments from the defense that interact with the same argument from the plaintiff, and vice versa.
The task setting referred to the one designed in the Ji et al.'s work [23]. The systems participating in the SMP-CAIL2020-Argmine Challenge were required to identify the correct argument from the defense interacting with the given argument from the plaintiff among the five candidate arguments. That is to say, every entry of the collected argument pairs is converted into a multiple argument choice problem with four false options. Therefore, performing well in the task requires the system to deeply understand the semantic relationship of the given argument from the plaintiff and the candidate arguments. We conduct the competition in a two-phase fashion by setting threshold accuracy in the first phase, and only those whose system over-performs the baseline models we provide can enter the second phase. The number of argument pairs reaches 4,080, including both the training data sets and the test data sets in two phases. In total, 315 teams from over 100 colleges and enterprises entered for the competition, 63 of which successfully submitted their models. We hope that research and practice in these fields will be stimulated by the challenges presented in this competition.
In this paper, we present a detailed description of the task and the data set, along with a summary of the submissions, and discuss the possible future research directions of the task.
## 2. RELATED WORK
### 2.1 Automatic Analysis of Judicial Documents
Automatic analysis of judicial documents has been studied for decades. At the very first stage, research tended to focus on mathematical and statistical analyses on existing court cases, instead of conclusions or methodologies on the prediction or summarisation of judicial documents. Ulmer proposed to suggest some uses of quantitative methods and probability theory in analyzing judicial materials [5]. Similar work including Nagel's [6] and Kort's [26] typically used optimization and statistics to conduct automatic judgement prediction. More recently, Lauderdale applied a kernel-weighted optimal classification estimator to recover estimates of judicial preferences [27].
These years have witnessed the booming in natural language processing (NLP), both theoretically and practically. As a natural application scenario of NLP, automation in judicial fields is also getting increasingly popular among NLP researchers. As a result, such automatic process of analyzing judicial documents has entered a brand new era. Liu and Chen [7] and Sulea et al. [8] extracted word features such as N-grams to train classifiers to predict the result of judgement, while Katz et al. [9] utilized case profile information (e.g., dates, terms, locations and case types). More advanced, Luo et al. introduced an attention-based neural model to predict charges of criminal cases, and verified the effectiveness of taking law articles into consideration [28].
Besides the automatic systems, a great number of interesting and meaningful tasks have also been proposed. For example, Xiao et al. [29] proposed a large-scale legal data set for judgement prediction, collected from China Judgments Online, and then organized a competition for this task [30]. After that, more judicial tasks and challenges were brought out such as Xiao et al. [31] and Liu et al. [32].
However, existing research mostly focuses on the case-level information understanding, such as applicable law articles, charges, and prison terms [29, 30], and insufficient research has noticed the importance of automatically extracting the focus of dispute, i.e., the interactive arguments from both sides of the case.
### 2.2 Argumentation Mining
Argumentation mining is also a theoretical research area which has obtained much more attention, especially in the nearest years. As a research field in mining the logical and semantic structure in texts, various meaningful works have been proposed in recent years. For instance, Baff et al. [33] compared content- and style-oriented classifiers on editorials from the Liberal New York Times with ideology-specific effect annotations to explore the effect of writing style of editorials to audience of different parties; Ji et al. [23] proposed the task of identifying interactive argument pairs in online debate forum such as ChangeMyView (CMV), along with a novel representation learning method called Discrete Variational Encoder (DVAE) to encode different dimensions of information brought by the arguments in the corpus; Cheng et al. [25] collected the text data from peer review and rebuttal process to mine the argumentative relationship entailed in such discussion, and proposed a challenging data set of argument pair extraction with a multi-task learning framework to address such a task.
Also, the proposition of pretrained language models such as BERT [34] opens a brand new era of NLP, with impressively improved performance in nearly all tasks.
Obviously, the trial process greatly resembles the debate in many ways, since there are both two parties expressing their own opinions on the same topic and attacking each other's arguments. Therefore, it is practical to leverage models and methods in argumentation mining in the aforementioned judicial tasks.
## 3. DATA SET CONSTRUCTION
As discussed before, our goal is to construct an automatic system such that it can identify all the interactive argument pairs contained in the given judgement document which records the statement of both the plaintiff and the defense. Therefore, we collect the related data set from the judgement document corpus.
### 3.1 Data Source and Preprocessing
The raw data of judgement are provided by China Justice Big Data Institute, including over 10,000 entries in JSON format.
We first conducted random sampling on the raw data set, finding that there existed some documents of low quality. More specifically, the statement from the defense in some documents was so trivial, only containing the acknowledgement of all the statement made by the plaintiff; interaction of two sides in some documents only focused on the amount of charge, without any semantic or logical interactive arguments; and some documents contained too few or too many sentences to be analyzed.
In order to solve these problems, we refrained the data set with the following rules:
• Delete all the entries that contain “供认不讳” (forfeiting) or “无异议” (having no opposite opinions) in the first sentence of the defense's statement, since very few of these entries refute the statement of the plaintiff.
• Delete all the entries that contain less than two non-charging sentences in either statement of the plaintiff or the one of the defense (the “non-charging sentence” means the sentence that does not contain figures), as we do not hope the focus of dispute only aims at the amount of charge.
• Delete all the entries that contain less than four sentences in either statement of the plaintiff or the one of the defense, and all the entries that contain more than 1,500 words in the statement of both sides, so as to control the length of the data set, thus improving its quality.
After such filtering, we finally obtained 2,238 instances of judgement documents that are of high quality. Then we randomly sampled 40 of the obtained judgement documents and asked four graduate students to conduct human annotation of interactive argument pairs extraction. As a result, 120.25 argument pairs were extracted per person, and the average agreement was 0.628, which indicates that the task is both plausible and challenging.
### 3.2 Annotation
After preprocessing the raw data, we started the annotation of the data set. The platform used for annotation is shown in Figure 3, which acts as displaying the sentences in the judgement documents and saving the annotation results to database on the server.
#### The online platform we used in the annotation, displaying the sentences in the judgement documents and saving the annotation results to the local server.
Figure 3.
The online platform we used in the annotation, displaying the sentences in the judgement documents and saving the annotation results to the local server.
Figure 3.
The online platform we used in the annotation, displaying the sentences in the judgement documents and saving the annotation results to the local server.
We then employed six annotators who were undergraduates or graduates majoring in law, for more professional annotation. Each judgement document was annotated by two different annotators, in order to reduce the accidental error.
As shown in Figure 3, during annotation, the annotators were given the whole statement of both the plaintiff and the defense, with each sentence ordered and marked a number. Their task is then two-fold:
• Annotating features of the case. For the given case, annotators were required to specify some basic features of the whole case, including the case type, the type of the crime involved, as well as the entities of the plaintiff and the defense.
• Identifying all the interactive argument pairs in both sides' statement. The annotators then were required to identify all the interactive argument pairs entailed in the given case. Note that the amount of such pairs was not constant, so the annotators had to record all the interactive argument pairs by adding them one by one. Furthermore, we classified the argument pairs into four emotional categories: acknowledging, partially acknowledging, simple denying and active denying.
Note that in the second task, besides the identification of interactive argument pairs, the annotators were also required to classify each argument pairs collected. The four categories mentioned above represent different emotional polarities of the defense. Specifically, the argument pairs of acknowledging generally refer to the ones whose defense simply incorporates arguments like “I confess.”, partially acknowledging means the defense's argument acknowledges some parts of the plaintiff's but denying the others, simple denying contains the simple and direct denial such as “I did not hit the plaintiff.”, while the active denial is more complicated, and sometimes it includes completely opposite statement on the same topic, e.g., “I did not hit the plaintiff, and instead, the plaintiff hit me with umbrella.”. We conducted the classification for the purpose of making it more convenient for the judge to know which argument pairs needed further judgement and evidence. With these annotation standards, an instance of annotation is shown in Figure 4.
#### The annotation result of No.12 judgement document, containing four interactive argument pairs extracted by the annotators.
Figure 4.
The annotation result of No.12 judgement document, containing four interactive argument pairs extracted by the annotators.
Figure 4.
The annotation result of No.12 judgement document, containing four interactive argument pairs extracted by the annotators.
### 3.3 Statistics on t he Data Set
After six months of annotation, some basic statistics on the data set is shown in Table 1 below. From the table we can find that law major students indeed achieved higher agreement, indicating that professional knowledge helps improve the performance in this task. Another notable point lies in that interactive argument pairs, compared with all the sentence pairs in the corpus, are of very low density and bring challenges for automation.
Table 1.
Basic statistics on the annotated data set.
Data setNumber
Annotated judgement documents 1,069
Annotated interactive argument pairs 4,476
Agreeable argument pairs 1,027
Disagreeable argument pairs 3,158
Sentence pairs in the annotated judgement documents 78,943
Average interactive argument pair density 0.058
Average agreement among annotators 0.960
Data setNumber
Annotated judgement documents 1,069
Annotated interactive argument pairs 4,476
Agreeable argument pairs 1,027
Disagreeable argument pairs 3,158
Sentence pairs in the annotated judgement documents 78,943
Average interactive argument pair density 0.058
Average agreement among annotators 0.960
As mentioned above, the density of interactive argument pairs is very low (compared with all the sentence pairs between two sides), and thus we have to convert the identification task into an easier one. Our approach is to construct a multiple-choice problem for every argument from the plaintiff that occurs in at least one interactive pair, by adding four arguments from the defense that does not match the plaintiff's argument. That is to say, given an argument sc from the plaintiff, a candidate set of the defense's arguments consists of one positive reply bc+, four negative arguments $bc1-∼bc4-$, along with their corresponding contexts, and our goal is to automatically identify which argument from the defense has interactive relationship with the one from the plaintiff.
We formulated such a task as a 5-way multiple-choice problem. In practice, the participants' models calculated the matching score S(sc, bc) for each argument in the candidate set with the plaintiff's argument sc and treated the one with the highest matching score as the winner. Note that here we did not use the emotional tags we collected before, since we would like to focus mainly on the identification of the correct argument pair in this competition.
Note that naturally, this setting needs the number of sentences in the statement of the defense to be no less than 5 (or more if there are not only one argument from the defense interacting with the plaintiff's one), so some of the entries are discarded and finally our whole data set comprises of 4,080 interactive argument pairs (i.e., multiple-choice problems) from 976 judgement documents. An example is displayed in Table 2 below.
Table 2.
An example of the multiple-choice task.
### 4.2 Scoring Metric and Data Set Division
For the released multiple-choice task, we take accuracy as the evaluation metric. Specifically, if the ground truth of the ith problem is yi, and the system predicts the answer to be $y∘i$, then the average accuracy on the test data set of size n is calculated as below:
$accuracy=∑i=1nyu=y∘in$
(1)
For the purpose of testing the system's generalization more fairly, we organized two phases in the competition and thus dividing the data set into three parts, namely SMP-CAIL2020-Argmine_train, SMP-CAIL2020-Argmine_test1, and SMP-CAIL2020-Argmine_test2. The quantity of these data sets is roughly 3:1:1.
In the first phase of the competition, participants were provided with the SMP-CAIL2020-Argmine_train data set to train their systems, and were tested with the SMP-CAIL2020-Argmine_test1 data set. Those who exceeded the performance of the given BERT baseline models were admitted to the second phase. And in the second phase, participants were provided with the SMP-CAIL2020-Argmine_test1 data set and tested with the SMP-CAIL2020-Argmine_test2 data set. The participants' final score = 0.3 ∗ Score1 + 0.7 ∗ Score2, in which the Score1 and Score2 means their score in two phases, respectively.
### 4.3 Baseline Models
Before we released the competition, we ran the following baseline models on the data set to obtain the border line for the admission to the second phase. Notice that for every baseline model, we only took the SMP-CAIL2020-Argmine_train data set as the training set.
• All 1
This model directly output answer “1”, which was used to examine whether the distribution of the answers was shuffled randomly enough.
• Common Words
This model returned the candidate argument that had most common words with the given argument from the plaintiff, which was a simple and straightforward model leveraging lexical features.
• BiLSTM
This model first conducted word segmentation using Jieba [35], and then we concatenated the plaintiff's argument with candidate arguments separately. In this way, we converted the 5-way multiple-choice into 5 sentence-pair classification problems. Then we randomly abandoned three negative sentence pairs so as to make the two classes balanced. For each sentence pair, their embedding was sequentially fed into a BiLSTM [36, 37] and took its final hidden state into a linear classifier to output the final prediction. The Figure 5(a) shows the model's overall framework.
• BERT
BERT [34] is a pretrained language model based on transformers, and has proved to be exceedingly superior to many research aspects in NLP. In our experiment, we also converted the problem into the sentence-pair classification since it could be much easier to apply the BERT model to such a problem. The Figure 5(b) shows the model's overall framework.
#### The overall framework of two neural network baseline models, in which (a) refers to the BiLSTM model and (b) refers to the BERT model.
Figure 5.
The overall framework of two neural network baseline models, in which (a) refers to the BiLSTM model and (b) refers to the BERT model.
Figure 5.
The overall framework of two neural network baseline models, in which (a) refers to the BiLSTM model and (b) refers to the BERT model.
All baseline models' performance is shown in Table 3 below. Since the best baseline model gives out an accuracy of 0.7476, we set the border line of the first phase at 0.75.
Table 3.
Performance of all baseline models.
Model nameTrain accuracyTest1 accuracyTest2 accuracy
All 1 0.2009 0.1890 0.1922
Common Words 0.4904 0.4908 0.5275
LSTM 0.8742 0.6270 0.6793
BERT 0.8812 0.7476 0.7797
Model nameTrain accuracyTest1 accuracyTest2 accuracy
All 1 0.2009 0.1890 0.1922
Common Words 0.4904 0.4908 0.5275
LSTM 0.8742 0.6270 0.6793
BERT 0.8812 0.7476 0.7797
### 4.4 Submissions
The SMP-CAIL2020-Argmine Challenge was hosted on CAIL, which allowed submissions to be scored against the blind test set without the need to publish the correct labels. The two phases of the scoring system were open from June 1 to July 9, and July 10 to August 3, 2020. Participants were limited to 3 submissions per week.
## 5. COMPETITION DETAILS
### 5.1 Participants and Results
There are over 300 teams from various universities as well as enterprises who have registered for SMP-CAIL2020-Argmine, 63 teams who have submitted their models in the first phase, and 21 teams who have submitted their final models. The final accuracy shows that neural models can achieve considerable results on the task, especially when given a larger training set. In Table 4, we list the scores of Top 7 participants of the task. We have collected the technical reports of these contestants. In the following parts, we summarize their methods and tricks according to these reports. The performance of all participants on SMP-CAIL2020-Argmine will be found in Appendix A.
Table 4.
Performance of participants on SMP-CAIL2020-Argmine.
TeamScore1Score2Final score
zero_point 0.852 0.896 0.8828
a-U 0.816 0.901 0.8755
quanshuizhihuiguan 0.802 0.905 0.8741
0.811 0.886 0.8635
tiaodalanmao 0.800 0.857 0.8399
wf 0.788 0.853 0.8335
zhihuizhengfa 0.788 0.853 0.8335
TeamScore1Score2Final score
zero_point 0.852 0.896 0.8828
a-U 0.816 0.901 0.8755
quanshuizhihuiguan 0.802 0.905 0.8741
0.811 0.886 0.8635
tiaodalanmao 0.800 0.857 0.8399
wf 0.788 0.853 0.8335
zhihuizhengfa 0.788 0.853 0.8335
### 5.2 The Submitted Models
#### 5.2.1 General Architecture
Pretrained Language Model. Ever since BERT [34] was publicly proposed, the whole NLP area has been pushed into a new era, with almost all tasks improved in performance. Also, among the baseline models above, BERT gives out the best performance on the task, and therefore makes the pretrained language model such as Sentence-BERT [38], RoBerta [39], and ERNIE [40] popular in submissions.
Fine-tuning Mechanisms. After leveraging the pretrained models mentioned above to obtain embedding for tokens and sentences, fine-tuning is needed to further improve the model's performance, including:
• Attention. A natural idea to further fine-tune the representation of the arguments is to leverage the attention mechanism between the plaintiff's argument and five candidate arguments separately.
• RNN Layers. Note that after using the pretrained models, we have token-level, sentence-level as well as sentence-pair-level representation (the representation of [CLS]). Therefore, we can retain the sentence-pair-level representation, and feed the tokens' embedding into another BiLSTM layer and concatenate them before the linear classifier.
• Memory Networks. All the methods mentioned above only use the information of the arguments. However, we have provided the whole context of both sides in the judgement documents. Hence, it is plausible to use memory networks [41] to retrieve the context information.
#### 5.2.2 Promising Tricks
Other than the standard “pretrained model + fine-tuning” mode, there are some useful tricks which can address the issues met in the task and improve the sentence pair classification models significantly. We summarize them as follows:
Fine-tuning with external corpus. Teams such as “zero_point”, “quanshuizhihuiguan” as well as “tiaodalanmao” all tried to fine-tune their pretrained model by adding external judicial corpus. Such a method helps improve the model since external judicial corpus enables the pretrained language models to learn more topic-specific language knowledge and therefore performs better in judicial settings. As is reported by them, this method enables the model to have an increase in accuracy by about 1%.
Data Augmentation and Data Balancing. The “a-U” team followed our way of constructing the multiple choices and generated more multiple-choice questions for training by retrieving more negative samples from the provided contexts of the defense, which helps the model to further leverage the context information and incorporate more textual knowledge. Moreover, to address the problem of data imbalance (too many negative samples), they used over-sampling on positive instances to avoid the model's getting lost in the overwhelming size of negative samples.
Loss Function. Most models use cross entropy as their loss functions. However, some models adopt more promising loss functions, such as focal loss [42] to enhance the performance on low frequency categories, and triplet loss to improve the model's ability of generalization. Besides, the loss weights of various categories and the activation functions of the output layer also have great influence on the final performance. As is reported by the competitors, such a method transforms the task into an argument pair ranking problem, instead of the classification problem, which helps the model to gain an improvement of over 4%.
Model Ensembling. Some participants trained several different classification models over different samples from the whole data set, and finally combined them with majority voting or weighted average strategies to combine their predicting results. Among all the participants using such a method, the “a-U” team trained five sequence classification models based on BERT and adopted the majority voting method to reduce the variance of a single model, therefore improving the robustness of the model, which finally helps their model to achieve the second prize of the competition.
#### 5.2.3 Error Analysis
Here, we inspect the erroneous outputs of our model to identify major causes of mismatches. There are mainly two issues.
Sentence Length Limitation in Pretrained Models. Since pretrained models like BERT have maximal length limitation, i.e., they will truncate sentence pairs that contain huge size, thus making the model unable to process all the information entailed in the sentence pair.
Entity Mismatch. Among many false cases, the error caused by entity mismatch is quite common. In the cases where there are multiple defences, the plaintiff may propose different prosecutions to different defences. However, some of them may share the same action mentioned by plaintiff, thus making the model confused when the negative candidate argument contains the detailed action while the positive one only includes simple denial.
## 6. CONCLUSION AND FUT URE WORK
In SMP-CAIL2020-Argmine, we employ the interactive argument-pair extraction in judgement document as the competition topic. In this competition, we construct and release a brand new data set for extracting the focus of dispute in the judgement documents. The performance on the task was significantly raised with the efforts of over 300 participants. In this paper, we summarize the general architecture and promising tricks they employed, which are expected to benefit further research on legal intelligence. However, there is still a long way to go to fully achieve the goal of automatically extracting the focus of dispute since the task is already a simplified one. Also, leveraging some more case-based features such as the type of case and type of crime and the semantic label of the interactive argument pairs may possibly further improve the model's performance.
## AUTHOR CONTRIBUTIONS
All of the authors have made meaningful and valuable contributions to the resulting manuscript. J. Yuan (19210980107@fudan.edu.cn) undertook the code running test of the task, summarized the evaluation task and drafted the paper. Y. Gao (yxgao19@fudan.edu.cn) and W. Chen (chenwei18@fudan.edu.cn) participated in providing baseline models to the contestants. Z. Wei (zywei@fudan.edu.cn), S. Zou, D. Li (lidh18@mails.tsinghua.edu.cn), D. Zhao (dhzhao@fudan.edu.cn) and X. Huang (xjhuang@fudan.edu.cn) designed, released and promoted the shared task. Y. Song (1171991@s.hlju.edu.cn), J. Ma (mqstssf2009@126.com) and Z. Hu (huz06@126.com) helped formulate the shared task from a professional law perspective.
## ACKNOWLEDGEMENTS
Thi s work is partially supported by National Key Research and Development Plan (No. 2018YFC0830600), and is cooperated with China Justice Big Data Institute, which provided judgement documents and the employment of professional annotators. The competition is also sponsored by Beijing Thunisoft Information Technology Co., Ltd., and supported by both CAIL and SMP organizers.
## REFERENCES
[1]
Vermeule
,
A.
:
Judicial history
.
Yale Law Journal
108
,
1311
(
1998
)
[2]
Long
,
S.
, et al.:
Automatic judgment prediction via legal reading comprehension
. In:
China National Conference on Chinese Computational Linguistics
, pp.
558
572
(
2019
)
[3]
Segal
,
J.A.
:
Predicting Supreme Court cases probabilistically: The search and seizure cases, 1962–1981
.
American Political Science Review
78
(
4
),
891
900
(1984)
[4]
Keown
,
R.
:
Mathematical models for legal prediction
.
Computer/I J
2
,
829
(
1980
)
[5]
Ulmer
,
S.S.
:
Quantitative analysis of judicial processes: Some practical and theoretical applications
.
Law and Contemporary Problems
28
(
1
),
164
184
(
1963
)
[6]
Nagel
,
S.S.
:
Applying correlation analysis to case prediction
.
Texas Law Review
42
(
1963
)
[7]
Liu
,
Y.-H.
,
Chen
,
Y.-L.
:
A two-phase sentiment analysis approach for judgement prediction
.
Journal of Information Science
44
(
5
),
594
607
(
2018
)
[8]
Sulea
,
O.-M.
, et al.:
Exploring the use of text classification in the legal domain
. arXiv preprint arXiv:1710.09306 (
2017
)
[9]
Katz
,
D.M.
,
Bommarito
,
M.J.
,
Blackman
,
J.
:
A general approach for predicting the behavior of the Supreme Court of the United States
.
PLoS ONE
12
(
4
),
e0174698
(
2017
)
[10]
Stab
,
C.
,
Gurevych
,
I.
:
Identifying argumentative discourse structures in persuasive essays
. In:
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
, pp.
46
56
(
2014
)
[11]
Liu
,
J.
,
Cohen
,
S.B.
,
Lapata
,
M.
:
Discourse representation parsing for sentences and documents
. In:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
, pp.
6248
6262
(
2019
)
[12]
Wang
,
L.
, et al.:
Predicting thread discourse structure over technical web forums
. In:
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing
, pp.
13
25
(
2011
)
[13]
Bilu
,
Y.
,
Slonim
,
N.
:
Claim synthesis via predicate recycling
. In:
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
, pp.
525
530
(
2016
)
[14]
Zukerman
,
I.
,
McConachy
,
R.
,
George
,
S.
:
Using argumentation strategies in automated argument generation
. In:
INLG' 2000 Proceedings of the First International Conference on Natural Language Generation
, pp.
55
62
(
2000
)
[15]
Sato
,
M.
, et al.:
End-to-end argument generation system in debating
. In:
Proceedings of ACL-IJCNLP 2015 System Demonstrations
, pp.
109
114
(
2015
)
[16]
Hua
,
X.
,
Wang
,
L.
:
Neural argument generation augmented with externally retrieved evidence
. arXiv preprint arXiv:1805.10254 (
2018
)
[17]
Zhao
,
T.
,
Lee
,
K.
,
Eskenazi
,
M.
:
Unsupervised discrete sentence representation learning for interpretable neural dialog generation
. arXiv preprint arXiv:1804.08069 (
2018
)
[18]
Taghipour
,
K.
,
Hwee
,
T.N.
:
A neural approach to automated essay scoring
. In:
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
, pp.
1882
1891
(
2016
)
[19]
Wei
,
Z.
,
Liu
,
Y.
,
Li
,
Y.
:
Is this post persuasive? Ranking argumentative comments in online forum
. In:
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
, pp.
195
200
(
2016
)
[20]
Tan
,
C.
, et al.:
Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions
. In:
Proceedings of the 25th International Conference on World Wide Web
, pp.
613
624
(
2016
)
[21]
Dong
,
F.
,
Zhang
,
Y.
,
Yang
,
J.
:
Attention-based recurrent convolutional neural network for automatic essay scoring
. In:
Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)
, pp.
153
162
(
2017
)
[22]
Habernal
,
I.
,
Gurevych
,
I.
:
Which argument is more convincing? Analyzing and predicting convincingness of web arguments using bidirectional LSTM
. In:
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
, pp.
1589
1599
(
2016
)
[23]
Ji
,
L.
, et al.:
Incorporating argument-level interactions for persuasion comments evaluation using co-attention model
. In:
Proceedings of the 27th International Conference on Computational Linguistics
, pp.
3703
3714
(
2018
)
[24]
Ji
,
L.
, et al.:
Discrete argument representation learning for interactive argument pair identification
. arXiv preprint arXiv:1911.01621 (
2019
)
[25]
Cheng
,
L.
, et al.:
Argument pair extraction from peer review and rebuttal via multi-task learning
. In:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
, pp.
7000
7011
(
2020
)
[26]
Kort
,
F.
:
Predicting Supreme Court decisions mathematically: A quantitative analysis of the “right to counsel” cases
.
The American Political Science Review
51
(
1
),
1
12
(
1957
)
[27]
Lauderdale
,
B.E.
,
Clark
,
T.S.
:
The Supreme Court's many median justices
.
American Political Science Review
106
(
4
),
847
866
(
2012
)
[28]
Luo
,
B.
, et al.:
Learning to predict charges for criminal cases with legal basis
. arXiv preprint arXiv:1707.09168 (
2017
)
[29]
Xiao
,
C.
, et al.:
CAIL2018: A large-scale legal dataset for judgment prediction
. arXiv preprint arXiv:1807.02478 (
2018
)
[30]
Zhong
,
H.
, et al.:
Overview of CAIL2018: Legal judgment prediction competition
. arXiv preprint arXiv:1810.05851 (
2018
)
[31]
Xiao
,
C.
, et al.:
CAIL2019-SCM: A dataset of similar case matching in legal domain
. arXiv preprint arXiv:1911.08962 (
2019
)
[32]
Liu
,
C.-L.
,
Hsieh
,
C.-D.
:
Exploring phrase-based classification of judicial documents for criminal charges in Chinese
. In:
International Symposium on Methodologies for Intelligent Systems
, pp.
681
690
(
2006
)
[33]
Baff
,
R.EI.
, et al.:
Analyzing the persuasive effect of style in news editorial argumentation
. In:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
, pp.
3154
3160
(
2020
)
[34]
Devlin
,
J.
, et al.:
BERT: Pre-training of deep bidirectional transformers for language understanding
. arXiv preprint arXiv:1810.04805 (
2018
)
[35]
Sun
,
J.
:
Jieba Chinese word segmentation tool
. Available at: https://github.com/fxsjy/jieba. Accessed 25 June 2018
[36]
Hochreiter
,
S.
,
Schmidhuber
,
J.
:
Long short-term memory
.
Neural Computation
9
(
8
),
1735
1780
(1997)
[37]
Gers
,
F.A.
,
Schmidhuber
,
J.
,
Cummins
,
F.
:
Learning to forget: Continual prediction with LSTM
. In:
The Ninth International Conference on Artificial Neural Networks ICANN 99
, pp.
850
855
(
1999
)
[38]
Reimers
,
N.
,
Iryna
,
G.
:
Sentence-BERT: Sentence embeddings using Siamese BERT-Networks
. arXiv preprint arXiv:1908.10084 (
2019
)
[39]
Liu
,
Y.
, et al.:
RoBerta: A robustly optimized BERT pretraining approach
. arXiv preprint arXiv:1907.11692 (
2019
)
[40]
Zhang
,
Z.
, et al.:
ERNIE: Enhanced language representation with informative entities
. arXiv preprint arXiv:1905.07129 (
2019
)
[41]
Sukhbaatar
,
S.
,
Weston
,
J.
,
Fergus
,
R.
:
End-to-end memory networks
. In:
Advances in Neural Information Processing Systems
, pp.
2440
2448
(
2015
)
[42]
Lin
,
T.-Y.
, et al.:
Focal loss for dense object detection
. In:
Proceedings of the IEEE International Conference on Computer Vision
, pp.
2980
2988
(
2017
)
### APPENDIX A: FULL RANK OF ALL PARTICIPANTS
Full rank of all participants in CAIL-SMP2020-Argimine. Score1 and Score2 refer to the score achieved by the participants in phase I and II, respectively, while Final Score refers to the weighted sum of Score1 and Score2.
Table A1.
Full rank of all participants.
TeamScore1Score2Final Score
zero_point 0.852 0.896 0.8828
a-U 0.816 0.901 0.8755
quanshuizhihuiguan 0.802 0.905 0.8741
0.811 0.886 0.8635
tiaodalanmao 0.800 0.857 0.8399
wf 0.788 0.853 0.8335
zhihuizhengfa 0.788 0.853 0.8335
quanzhizhixing 0.789 0.852 0.8331
bl_ssk 0.787 0.852 0.8325
xiaocuiwawa 0.785 0.852 0.8319
fabaozhineng 0.796 0.847 0.8317
fajixianzonghewozuodui 0.785 0.851 0.8312
zhuimengzhizixin 0.794 0.847 0.8311
CBD 0.779 0.853 0.8308
xiaofa 0.777 0.852 0.8295
xiaozhineng 0.775 0.840 0.8205
testing 0.756 0.845 0.8183
falvzhineng 0.763 0.841 0.8176
boys 0.760 0.826 0.8062
301deshuishou 0.768 0.810 0.7974
sos 0.755 0.797 0.7844
qilejingtu 0.780
TEEMO 0.780
qweasd 0.774
maitianxback 0.772
duimingmeixianghao 0.771
wisdom 0.768
anonymous 0.768
seu 0.768
DN 0.768
hongseyoujiaosanbeisu 0.768
ooo 0.768
OO 0.768
zhangyuanyu 0.768
daminghu 0.757
zunjisoufa 0.757
yunshujingjixue 0.755
DL 0.753
tiantianxiangshang 0.751
Team Score1 Score2 Final Score
jizhikekeyupipi 0.751
ddlqianzuihouchongci 0.750
Tracee 0.748
zhineng 0.744
heitu 0.736
chong! 0.728
nlpxiaoxuesheng 0.725
sr 0.719
huangjinkuanggong 0.714
hello 0.708
aaaa 0.706
houchangcunbaoan 0.704
EC_lab 0.680
imiss 0.672
nnnn01 0.629
zhegexiaohaiyoudiandou 0.598
woshijiangdaqiao 0.520
nlp11 0.517
mushangdaren 0.491
Eupho 0.491
LawBoys 0.491
xuexijishudui 0.491
test11 0.472
lw 0.344
amazing 0.083
TeamScore1Score2Final Score
zero_point 0.852 0.896 0.8828
a-U 0.816 0.901 0.8755
quanshuizhihuiguan 0.802 0.905 0.8741
0.811 0.886 0.8635
tiaodalanmao 0.800 0.857 0.8399
wf 0.788 0.853 0.8335
zhihuizhengfa 0.788 0.853 0.8335
quanzhizhixing 0.789 0.852 0.8331
bl_ssk 0.787 0.852 0.8325
xiaocuiwawa 0.785 0.852 0.8319
fabaozhineng 0.796 0.847 0.8317
fajixianzonghewozuodui 0.785 0.851 0.8312
zhuimengzhizixin 0.794 0.847 0.8311
CBD 0.779 0.853 0.8308
xiaofa 0.777 0.852 0.8295
xiaozhineng 0.775 0.840 0.8205
testing 0.756 0.845 0.8183
falvzhineng 0.763 0.841 0.8176
boys 0.760 0.826 0.8062
301deshuishou 0.768 0.810 0.7974
sos 0.755 0.797 0.7844
qilejingtu 0.780
TEEMO 0.780
qweasd 0.774
maitianxback 0.772
duimingmeixianghao 0.771
wisdom 0.768
anonymous 0.768
seu 0.768
DN 0.768
hongseyoujiaosanbeisu 0.768
ooo 0.768
OO 0.768
zhangyuanyu 0.768
daminghu 0.757
zunjisoufa 0.757
yunshujingjixue 0.755
DL 0.753
tiantianxiangshang 0.751
Team Score1 Score2 Final Score
jizhikekeyupipi 0.751
ddlqianzuihouchongci 0.750
Tracee 0.748
zhineng 0.744
heitu 0.736
chong! 0.728
nlpxiaoxuesheng 0.725
sr 0.719
huangjinkuanggong 0.714
hello 0.708
aaaa 0.706
houchangcunbaoan 0.704
EC_lab 0.680
imiss 0.672
nnnn01 0.629
zhegexiaohaiyoudiandou 0.598
woshijiangdaqiao 0.520
nlp11 0.517
mushangdaren 0.491
Eupho 0.491
LawBoys 0.491
xuexijishudui 0.491
test11 0.472
lw 0.344
amazing 0.083
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
|
{}
|
# SuanShu, a Java numerical and statistical library
com.numericalmethod.suanshu.stats.distribution.univariate
## Class NormalDistribution
• java.lang.Object
• com.numericalmethod.suanshu.stats.distribution.univariate.NormalDistribution
• All Implemented Interfaces:
ProbabilityDistribution
public class NormalDistribution
extends Object
implements ProbabilityDistribution
The Normal distribution has its density a Gaussian function. The Normal distribution is probably the most important single distribution. By the central limit theorem, under certain conditions, the sum of a number of random variables with finite means and variances approaches a Normal distribution as the number of variables increases. Laplace proved that the Normal distribution occurs as a limiting distribution of arithmetic means of independent, identically distributed random variables with finite second moment.
The R equivalent functions are dnorm, pnorm, qnorm, rnorm.
• ### Constructor Summary
Constructors
Constructor and Description
NormalDistribution()
Construct an instance of the standard Normal distribution with mean 0 and standard deviation 1.
NormalDistribution(double mu, double sigma)
Construct a Normal distribution with mean mu and standard deviation sigma.
• ### Method Summary
All Methods
Modifier and Type Method and Description
double cdf(double x)
Gets the cumulative probability F(x) = Pr(X ≤ x).
double density(double x)
The density function, which, if exists, is the derivative of F.
double entropy()
Gets the entropy of this distribution.
double kurtosis()
Gets the excess kurtosis of this distribution.
double mean()
Gets the mean of this distribution.
double median()
Gets the median of this distribution.
double moment(double t)
The moment generating function is the expected value of etX.
double quantile(double u)
Gets the quantile, the inverse of the cumulative distribution function.
double skew()
Gets the skewness of this distribution.
double variance()
Gets the variance of this distribution.
• ### Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
• ### Constructor Detail
• #### NormalDistribution
public NormalDistribution()
Construct an instance of the standard Normal distribution with mean 0 and standard deviation 1.
• #### NormalDistribution
public NormalDistribution(double mu,
double sigma)
Construct a Normal distribution with mean mu and standard deviation sigma.
Parameters:
mu - the mean
sigma - the standard deviation
• ### Method Detail
• #### mean
public double mean()
Description copied from interface: ProbabilityDistribution
Gets the mean of this distribution.
Specified by:
mean in interface ProbabilityDistribution
Returns:
the mean
Wikipedia: Expected value
• #### median
public double median()
Description copied from interface: ProbabilityDistribution
Gets the median of this distribution.
Specified by:
median in interface ProbabilityDistribution
Returns:
the median
Wikipedia: Median
• #### variance
public double variance()
Description copied from interface: ProbabilityDistribution
Gets the variance of this distribution.
Specified by:
variance in interface ProbabilityDistribution
Returns:
the variance
Wikipedia: Variance
• #### skew
public double skew()
Description copied from interface: ProbabilityDistribution
Gets the skewness of this distribution.
Specified by:
skew in interface ProbabilityDistribution
Returns:
the skewness
Wikipedia: Skewness
• #### kurtosis
public double kurtosis()
Description copied from interface: ProbabilityDistribution
Gets the excess kurtosis of this distribution.
Specified by:
kurtosis in interface ProbabilityDistribution
Returns:
the excess kurtosis
Wikipedia: Kurtosis
• #### entropy
public double entropy()
Description copied from interface: ProbabilityDistribution
Gets the entropy of this distribution.
Specified by:
entropy in interface ProbabilityDistribution
Returns:
the entropy
Wikipedia: Entropy (information theory)
• #### cdf
public double cdf(double x)
Description copied from interface: ProbabilityDistribution
Gets the cumulative probability F(x) = Pr(X ≤ x).
Specified by:
cdf in interface ProbabilityDistribution
Parameters:
x - x
Returns:
F(x) = Pr(X ≤ x)
Wikipedia: Cumulative distribution function
• #### quantile
public double quantile(double u)
Description copied from interface: ProbabilityDistribution
Gets the quantile, the inverse of the cumulative distribution function. It is the value below which random draws from the distribution would fall u×100 percent of the time.
F-1(u) = x, such that
Pr(X ≤ x) = u
This may not always exist.
Specified by:
quantile in interface ProbabilityDistribution
Parameters:
u - u, a quantile
Returns:
F-1(u)
Wikipedia: Quantile function
• #### density
public double density(double x)
Description copied from interface: ProbabilityDistribution
The density function, which, if exists, is the derivative of F. It describes the density of probability at each point in the sample space.
f(x) = dF(X) / dx
This may not always exist.
For the discrete cases, this is the probability mass function. It gives the probability that a discrete random variable is exactly equal to some value.
Specified by:
density in interface ProbabilityDistribution
Parameters:
x - x
Returns:
f(x)
• #### moment
public double moment(double t)
Description copied from interface: ProbabilityDistribution
The moment generating function is the expected value of etX. That is,
E(etX)
This may not always exist.
Specified by:
moment in interface ProbabilityDistribution
Parameters:
t - t
Returns:
E(exp(tX))
|
{}
|
##### Question Info
This question is public and is used in 59 tests or worksheets.
Type: Multiple-Choice
Category: Algebraic Expressions
Author: jjstanley
View all questions by jjstanley.
# Algebraic Expressions Question
View this question.
Add this question to a group or test by clicking the appropriate button below.
## Grade 10 Algebraic Expressions
Which of the following expressions correctly interprets the following phrase?
Two less than three times a number, multiplied by three less than four times another number
1. $(3x-2)(4y-3)$
2. $(3x-2)/(4y-3)$
3. $(3x-3)(4y-2)$
4. $(3x-3)/(4y-2)$
You need to have at least 5 reputation to vote a question down. Learn How To Earn Badges.
|
{}
|
What is Multiplication table of 10? Multiples of 10 with Examples - BYJUS
# The Multiplication of 10
The multiplication table of 10 is one of the easiest multiplication tables that we can learn. When we multiply a number by 10, we just need to add a zero towards the end of the number to find the product. Check out some different strategies which can be used to find the multiple of 10....Read MoreRead Less
## What is a Multiple of 10?
When an integer is multiplied by ten, the result is a multiple of ten. 10 is the sum of the first three prime numbers 1+2+3+4.
The multiple of 10 is easy to memorize.
Example: Olivia has 2 pairs of 10 chocolates.
10 × 2 = 20 or 10 + 10 = 20.
## Multiples of 10
We first multiply 10 by 1, which is 10. Then we will multiply 10 by 2, which is 20. This list is infinite; we can make as many multiples of ten as we like.
Examples: 10, 20, 30, 40, 50.
Multiplication table:
10 × 1 = 10
10 × 2 = 20
10 × 3 = 30
10 × 4 = 40
10 × 5 = 50
10 × 6 = 60
10 × 7 = 70
10 × 8 = 80
10 × 9 = 90
10 × 10 = 100
## What do you observe in Multiples of 10?
10, 20, 30, 40, 50, 60, 70, 80, 90, 100, and so on are all multiples of ten. These multiples are obtained by multiplying 10 by 1, 2, 3,…, 10, in that order. All of these multiples appear to form a sequence with a difference of 10 between two consecutive products.
## Strategy to Multiply with 10
1. Tape diagram
2. Using Number line
1) Tape diagram: In this method, we combine 10 boxes. The number of boxes is a number by which we multiply 10.
It is also called the repeated addition method.
Example: 6 × 10 = 60
It can be represented using a tape diagram as follows:
10
10
10
10
10
10
10 + 10 + 10 + 10 + 10 + 10 = 60
or
$$\begin{matrix} \\10 \\\times 6 \\ \_\_\_\_\_ \\ 60 \\ \_\_\_\_\_ \end{matrix}$$
2) Using a number line:
Draw a line with an interval of 10 between two numbers. A jump of 10 represents the multiplication of 10 by a number.
Example:
Here as we can see that between 0 to 10 the interval can be written as 10 × 1 = 10.
## How do we use Multiplication with 5 to calculate Multiplication with 10?
10 is the multiple of two prime numbers: 5 × 2 = 10. To find the multiple of a number by 10, we can multiply by 2 and 5.
Example: 10 × 7=70
The number 70 can be written as a multiplication of five and two.
70 = 10 × 7 = 5 × 2 × 7
## What is the Distributive property of Multiplication?
When we multiply a value by the sum of two or more numbers, we use the distributive property of multiplication over addition.
Example: 10 × (2 + 6) (distribute the 10 to the 2 and the 6)
= 20 + 60
= 80.
## Distributive property in Multiplication with 10
Distribute (8 + 2) over 10,
10 (8 + 2) = 10 × 8 + 10 × 2 (distribute the 10 to the 8 and the 2)
= 80 + 20
= 100.
## Commutative property in Multiplication with 10
The order in which we multiply the numbers does not affect the final product, according to the commutative property of multiplication
Example: 10 × 5 = 5 × 10
The result of both equations will be 50.
## Associative property in Multiplication with 10
The associative property of multiplication states that no matter how the numbers are grouped, the product of three or more numbers remains the same.
Example: 2 × (3 × 10) = (3 × 2) × 10
L.H.S,
2 × (3 × 10) = 2 × 30 = 60
R.H.S,
(3 × 2) × 10 = 6 × 10 = 60
As we can observe, multiplication is associative.
## Solved Multiple of Ten Examples
Example 1: Thomas has 5 nickels and his friend has 78¢. Who has more money?
1 nickel = 5¢
The total money Thomas has:
5 × 5 = 25¢
It is given that Thomas’ friend has 78¢, which is greater than 25¢. So, Thomas’s friend has more money than Thomas.
Example 2: Fill in the blank
10 × ☐ = 60
60 can be written as 10 + 10 + 10 + 10 + 10 + 10.
So, 10 × 6 = 60.
Example 3: Use the distributive property to find products.
6 × 4 = 6 × (…….+……. )
4 can be written as the sum of 2 + 2, or 3 + 1.
When choose 2 + 2,
6 × 4 = 6 × (2 + 2) (distribute the 6 to the 2 and the 2)
= 6 × 2 + 6 × 2
= 12 + 12
= 24.
If we choose 4 = 3 + 1,
6 × 4 = 6 × (3+1 ) (distribute the 6 to the 3 and the 1)
= 6 × 3 + 6 × 1
= 18 + 6
= 24.
So, the result will be 6 × 4 = 6 × (2 + 2 ) or 6 × 4 = 6 × (3 + 1 ).
Frequently Asked Questions on Multiples of 10
A 0 appears at the end of all multiples of ten. Simply look for a zero at the end of your numbers to find them.
We can use multiplication of 10 with 5 and 2. We know that 2 × 5 = 10.
We can use 2 and 5 both to find the multiplication of a number by 10.
When a number is multiplied by 10, we get 0 at the end. Since 0 is the even number, multiples of 10 are an even number.
|
{}
|
# How to perform regression on panel-data with timelag in SPSS?
#### reveller
##### New Member
For my thesis, I have gathered search volume data ("svi") from Google and message data from Twitter ("tweets" is the number of daily tweets) for serveral companies ("comp"). The variable "tradevol" is the trading volume in the stock of a company, as taken from Yahoo! Finance. "svi" and "tweets" are my independent variables, "tradevol" is dependent.
For argument's sake, say I have collected data over 3 days for each of 3 companies (in reality, I have data for 100 companies gathered during 200 days), as follows:
Code:
comp date svi tweets tradevol
-------------------------------------
1 02-12 1.07 223 2,209,425
1 02-13 1.03 200 2,021,502
1 02-14 1.10 196 2,124,555
2 02-12 0.55 110 1,942,211
2 02-13 0.45 211 1,532,453
2 02-14 0.41 104 1,432,655
3 02-12 1.05 303 1,765,273
3 02-13 1.08 250 1,932,672
3 02-14 1.09 277 1,597,892
A dataset like this with measurements over time goes beyond what has been tough during my studies. So I need to understand how to analyze this. Therefore, I have some questions analyzing this dataset in SPSS / PASW.
1. How can I, from this dataset, measure the correlation between svi and tradevol for each company? I would then somehow have to tell SPSS to split the datafile on comp, calculating the correlation for each unique comp
2. My thesis-coach calls this dataset a "panel dataset". However, searching for "paneldata analysis SPSS" I don't find much usefull information. If I want to perform a regression, measuring the effect of svi and tweets on tradevol, how is this then called? A multilevel regression?
3. Regarding regression, my coach wants me to account for a timelag. For instance, today's svi and tweets may not have an effect on today's tradevol but perhaps there is an effect (or: a bigger effect) on today's svi and tweets and tomorrows tradevol. In this case, I would have to perform the regression for lag t-2, t-1, t, t+1 and t+2. Is this operation possible to perform with SPSS (18) and if so, please send me something to go by.
Any help is greatly appreciated
|
{}
|
If the height of the water slide in the figure is h = 3.2 m, and the person's initial speed at point A is 0.61 m/s, what is the horizontal distance between the base of the slide and the splashdown point of the person?
### Get this answer with Chegg Study
Practice with similar questions
Q:
If the height of the water slide in the figure is h = 3.2 m and the person's initial speed at point A is 0.51 m/s, what is the new horizontal distance L between the base of the slide and the splashdown point of the person?
|
{}
|
# Why are asymmetric cryptography keys more vulnerable to brute force attack than symmetric ones?
I came across this paper which says that
Asymmetric keys must be many times longer than keys in secret-cryptography in order to boast equivalent security. Keys in asymmetric cryptography are also more vulnerable to brute force attacks than in secret-key cryptography. There exist algorithms for public-key cryptography that allow attackers to crack private keys faster than a brute force method would require.
I would like to know:
1. What makes Asymmetric Cryptography keys more vulnerable to brute force attacks?
2. In which cases should Asymmetric Cryptography usage be avoided?
• Thank you all for the wonderful answers. IMHO, Cryptography exchange is the best stackexchange site. – Jay Oct 11 '16 at 12:24
For your first question: The main point here (at least that comes to mind) is that of how the key is made, used, and subsequently how it is attacked. Good symmetric ciphers are designed so that the best possible attack is brute force, i.e. simply trying with each possible key, which are typically random (or as good as). With a 128-bit key you have $2^{128}$ possible keys. Trying all of these takes a lot of time.
With an asymmetric cipher however, the private and public keys have a mathematical relation between them, and asymmetric ciphers are based on problems that are computationally hard to solve. An example of this is RSA, where the keys are based on two large (and secret) prime numbers, $p$ and $q$, then multiplied to create an even larger number, $n$. The result is then used for encryption and decryption (i will not go into further details of RSA here). To crack this system the most straight forward way is to find $p$ and $q$ by factoring $n$ (which is not secret).
Checking $2^{128}$ (or $2^{127}$ on average) symmetric keys with todays computational power is simply not possible within any conceivable time-frame. Factoring a 128-bit number however, takes about a second (depending on hardware and optimization). Thus, one needs larger keys for RSA than symmetric ciphers, e.g. 128-bit symmetric keys are typically approximated to equal 2048-bit RSA keys.
For your second question: Encryption and (even more so) decryption with asymmetric ciphers are often a lot more computationally intensive than a symmetric cipher (if i remember correctly is RSA typically about 1000 times slower than AES). This makes asymmetric ciphers impractical for encrypting large chunks of data. Subsequently, for encrypting e.g. internet traffic: asymmetric ciphers are typically used for securely exchanging keys that are used to encrypt/decrypt using a symmetric cipher.
EDIT: As rightly pointed out by @fgrieu the statement "128 to 256-bit symmetric keys are typically approximated to equal 2048 to 4196-bit asymmetric keys." is not correct, and comes from a writing-mistake on my part. The correct statement was supposed to be that 128-bit symmetric keys are typically approximated to equal 2048-bit RSA keys.
• @jay: The statement "128 to 256-bit symmetric keys are typically approximated to equal 2048 to 4196-bit asymmetric keys" in this answer is quite incorrect. No similar approximation applies for asymmetric cryptography in general. Example: for Ed25519, an increasingly used asymmetric scheme, 128-bit symmetric key security is believed obtained for 128-bit prIvate key and 256-bit public key. We can say that 128 to 256-bit symmetric keys are typically approximated to equal 2048 to 15360-bit RSA or DSA public key size in security. – fgrieu Oct 11 '16 at 6:45
• @jay I apologize for the slip-up! This was a typo resulting from writing and rewriting. I initially intended to say that 128-bit symmetric keys are approximated to equal 2048-bit RSA , with the typical recommendations for these being 128 to 256-bit for symmetric and 2048 to 4196-bit RSA. I will edit my answer to reflect this. And as fgrieu rightly say: ECC and the like operate with much smaller keys than the "classical" asymmetric schemes, which is one of the qualities making them desired in many use-cases over e.g. RSA. – henrheid Oct 11 '16 at 11:57
Putting this paper into context, it was "the culmination of the research efforts of nine dedicated undergraduate students in the Computing Research Topics course at Villanova University" and this particular paper was the only one related to cryptography (link). The welcome message states, the students spent four months in scientific research. This paper looks like a homework paper rather than a peer-reviewed scientific publication at a conference.
I just skipped over it, but the paper covers too much material in 6 pages. The content ranges over the topics of a full semester's class, which leads to many unconnected statements and lack of proper reasoning.
Question 1: This one has a correct statement in the homework, but you misunderstood: For asymmetric cryptography, more efficient attacks exist compared to brute force. Examples: Pohlig-Hellman and index calculus for discrete log in $$\mathbb{Z}_p$$, general number field sieve for factoring. That is why the keys have to be larger, so that the actual effort for an attack is somewhat comparable with symmetric ciphers. For any symmetric cipher, it is considered broken if there is an attack more efficient than brute force.
Question 2: As a rule of thumb, symmetric and asymmetric encryption serve different purposes, and you cannot use both interchangeably. You could not avoid using asymmetric cryptography if you need it. However, you can reduce the asymmetric crypto part and use symmetric encryption in addition. that's when you want to encrypt a large amount of data and encrypt it with the public key of someone. In that case, hybrid encryption is used, where you use the symmetric encryption for the actual data with a randomly chosen key, and then just encrypt that random key with the public key.
• For question 1 i think it is wrong to say that the "homework" is correct. It states that asymmetric keys are more vulnerable to brute force attacks than in secret-key cryptography. This statement is nonsense. The "homework" however correctly states There exist algorithms for public-key cryptography that allow attackers to crack private keys faster than a brute force method would require. – Guut Boy Oct 11 '16 at 8:09
• @GuutBoy you're right, I edited it. Besides that, the entire point of view on both symmetric and asymmetric crypto is quite flawed. – tylo Oct 11 '16 at 10:05
1. Asymmetric cryptography keys are NOT necessarily more vulnerable to brute force attack than their secret-key cryptography counterparts. Some asymmetric algorithms have short private keys (256-bit for Ed25519, targeting 128-bit security). Private key size for asymmetric crypto only needs to be at least as wide as secret key of symmetric crypto for equivalent security. For Ed25519 we could replace the private key by the SHA-256 of a 128-bit secret and public user ID; and every asymmetric algorithm (e.g. RSA) can have it's private key reduced down to the security level, by replacing the random source of the key pair generator by a PRNG seeded by the short key and user ID.
We however can state that:
• All known asymmetric cryptosystems have a public key significantly longer than the (secret) key of a symmetric cryptosystem of comparable security. But that's an apples-to-oranges comparison: in symmetric cryptosystems there is no equivalent to the public key (known to all including the adversary in asymmetric cryptography).
• When considering RSA (one asymmetric cryptosystem among many), indeed the public key is much longer than the key for symmetric crypto of comparable strength: we need like a 2048 to 3072-bit public modulus for a 128-bit symmetric security level; and while we can reduce the public key size by a factor of 2 or 3 compared to the public modulus (depending on if we want a security argument, or a lack of insecurity argument; see this), the ratio of sizes remains large, and growing with size. This is related to the fact that the security of RSA is no better than that of factoring the public modulus, and good factoring algorithms like GNFS have cost sub-exponential w.r.t. the size of the integer to factor.
• For other asymmetric cryptosystems, including the aforementioned Ed25519, the public key can be as little as twice as long as the key for symmetric crypto of comparable strength. In the case of Ed25519, the ratio 2 is related to the fact that the best algorithms to solve $$g^x=a$$ for integer $$x$$ in a general multiplicative group have cost growing about as $$x^{1/2}$$. I don't know any asymmetric cryptosystem beating that ratio, nor any strong argument that it can't be beaten.
2. Asymmetric cryptography is more complex and resource-hungry than good symmetric cryptography. Therefore asymmetric cryptography is best used together with symmetric cryptography, or avoided when not required.
• Asymmetric cryptography is required so that any of the following can be:
• some party can encrypt data without the secret allowing decryption;
• some party can verify authenticity of data without the secret allowing to sign that data as authentic;
• two parties (at least) can securely agree on a secret key for later use using symmetric cryptography, without a previous shared secret.
• Asymmetric cryptography is neither required nor useful when:
• asymmetric cryptography has just been used to agree on a secret key (see above);
• all communication is, by design, to a central trusted party that holds (or can recompute) all other party's secret keys, without this being deemed a drawback.
• This is a very good answer. Although I have 'Accepted' the first answer as I got what I wanted, fgrieu's answer explained lot of things very clearly. Somehow I felt it did not get the upvotes it deserved. Thank you. – Jay Oct 14 '16 at 15:37
• Substantive difference between the ‘128-bit security level’ of Ed25519 vs. the ‘128-bit security level’ of AES-128: On any of $n$ targets, best known attack costs expected ${\sim}2^{128}$ bit operations before breaking the first Ed25519 target, but ${\sim}2^{128}/n$ bit operations before breaking the first AES-128 target. I.e., ‘128-bit security level’ of Ed25519 is qualitatively different—and quantitatively stronger than that of AES-128. – Squeamish Ossifrage May 4 '18 at 6:09
• "•All known asymmetric cryptosystems have a public key significantly longer than the (secret) key of a symmetric cryptosystem of comparable security."; recent counterexamples (which postdate fgrieu's answer); GravitySphincs, Picnic (which are both signature algorithms based on symmetric operations) – poncho Nov 1 '19 at 15:14
• @poncho: per the paper's abstract: "Gravity-SPHINCS has shorter keys (32 and 64 bytes)". Seems to be a 256-bit public key for 128-bit security level, same as EdDSA. How is that a counterexample to my statement? I'm still scratching my head on Picnic, wondering if it's a signature algorithm or a proof o knowledge protocol. Also, the idea that the public key could be essentially the result of a hash bothers me. I need to get back to that old question of proving knowledge of a preimage of a SHA(-1/-2) hash without revealing it, which I never quite got. – fgrieu Nov 1 '19 at 15:52
• @fgrieu: Gravity-Sphincs is essentially the same as a hash; perhaps that's not the best example. I still think that the Picnic citation works... – poncho Nov 1 '19 at 18:03
|
{}
|
# Lucas' Blog
Lucas Jones
lucas@lucasjones.co.uk
1. ## Bloom filters
15 Jun 2015, 18:48.
Following on from an earlier post, which featured a CWEB implementation of the UNIX uniq command backed by a hashtable, I decided to write a version backed by a Bloom filter.
This program has different performance characteristics from the coreutils implementation or the hash table version. Due to the nature ...
2. ## Counting common file extensions, UNIX and hash tables
15 Jun 2015, 06:27.
A quick shell one-liner I had a use for today: it outputs a list of all the extensions of files below the current directory, where “extension” is the last dot-separated component of a filename, as long as the file has more than one dot-separated component in its name (e.g ...
|
{}
|
## Wednesday, February 8, 2017
### Failing to use optimal solution as initial point
In a somewhat complex non-linear programming model, I observed the following. I do here two solves in a row:
solve m2 minimizing zrf using dnlp;solve m2 minimizing zrf using dnlp;
We solve the model, and then in the second solve we use the optimal solution from the first solve as an initial point. This should make the second solve rather easy! But what I see is somewhat different.
The first solve has no problems. (I fed it a close to optimal initial point). The log looks like:
--- Generating DNLP model m2--- testmodel6.gms(83981) 156 Mb 22 secs--- testmodel6.gms(84048) 160 Mb--- 139,443 rows 139,450 columns 746,438 non-zeroes--- 8,504,043 nl-code 467,552 nl-non-zeroes--- testmodel6.gms(84048) 116 Mb--- Executing CONOPT: elapsed 0:01:44.102CONOPT 3 24.7.3 r58181 Released Jul 11, 2016 WEI x86 64bit/MS Windows C O N O P T 3 version 3.17A Copyright (C) ARKI Consulting and Development A/S Bagsvaerdvej 246 A DK-2880 Bagsvaerd, Denmark Iter Phase Ninf Infeasibility RGmax NSB Step InItr MX OK 0 0 8.7137917110E-04 (Input point) Pre-triangular equations: 9720 Post-triangular equations: 129722 1 0 3.0000000000E-05 (After pre-processing) 2 0 4.6875000000E-07 (After scaling) 3 0 0 7.3256941462E-17 1.0E+00 F T ** Feasible solution. Value of objective = 0.905233579345 Iter Phase Ninf Objective RGmax NSB Step InItr MX OK 4 3 9.0523357374E-01 1.9E-03 6 1.0E+00 3 T T 5 3 9.0523357341E-01 1.0E-03 3 1.0E+00 2 T T 6 3 9.0523357341E-01 1.3E-12 3 ** Optimal solution. Reduced gradient less than tolerance.
We are hopeful the second solve should be even easier, but we see:
--- Generating DNLP model m2--- testmodel6.gms(83981) 69 Mb*** Error at line 83981: overflow in * operation (mulop)--- testmodel6.gms(83981) 155 Mb 1 Error 22 secs--- testmodel6.gms(84049) 116 Mb 1 Error*** SOLVE aborted
How can an optimal solution be an invalid initial point?
My analysis is the following:
• The equation we cannot generate is part of what Conopt calls “Post-triangular equations”. These equations are set aside by Conopt when solving the model, and are reintroduced just before reporting the solution because Conopt knows these equations will not change the solution. Typically “accounting rows” are such equations. These are constraints that could have been implemented as assignments in post-processing as they just calculate some additional information about the solution. In most cases these post-triangular equations do not require the evaluation of gradients.
• When we generate the model again for the second time, GAMS will try to evaluate the gradients. It is here where we encounter the overflow. In this model things are way more complicated but think about the function $$f(x)=\sqrt{x}$$ at $$x=0$$. The function itself can be evaluated at $$x=0$$, but the gradient can not. (Actually GAMS will patch the derivative of $$\sqrt{x}$$ at $$x=0$$ to fix this, but it is not smart enough to do this on my functions).
• This also means that there is a architectural flaw here: we should do the presolve before evaluating gradients. Here GAMS will evaluate gradients, and subsequently CONOPT will presolve the model. This is exactly in the wrong order. Note that AMPL will do a presolve in the generation of the model opposed to GAMS which leaves it to the solver. I believe AMPL is doing the right thing here.
A work-around is:
solve m2 minimizing zrf using dnlp;x.l(jd) = max(x.l(jd),1e-6);solve m2 minimizing zrf using dnlp;
|
{}
|
# Pressure drop in elbows & bends in a pipe
1. Aug 9, 2007
### bajaj_383
hi!
i want to calculate pressure drops in a pipe having various elbows ,bends & valves.whts equation should be used to calculate the pressure drop.
thanks
gaurav
2. Aug 10, 2007
### Q_Goest
Hi bajaj,
Attached is a nice summary of how it's done throughout industry. Each elbow, valve, section of pipe or other fluid restriction is given a resistance coefficient, K. All resistance coefficients can be summed up and put into the Darcy Weisbach equation as shown in equation 2 of the attached.
#### Attached Files:
• ###### Pipe-Flo Pro.pdf
File size:
105 KB
Views:
15,429
3. Aug 10, 2007
### FredGarvin
Perhaps one of the mentors could sticky your post Q. There seem to be a big need.
4. Aug 11, 2007
### Q_Goest
Hi Fred,
Ya know it's kinda strange that the standard method for calculating flow through piping systems isn't taught very well in undergraduate college. At least, it wasn't at my school. Seems colleges like to focus on the most fundamental, theoretical methods. That isn't to say the standard method of doing pressure drop/flow calculations through pipe isn't based on theoretical concepts, but at least it's been refined almost to the point of being a cook-book, hasn't it?
We've often talked about creating a thread that might discuss this method for calculating pipe flow, but the more I thought about it, the more it seemed we needed a whole paper that discussed at length how this is done that we could post. I didn’t want to have to write everything myself – too much work, not enough pay :grumpy:. lol Anyway, I eventually found this paper. It comes from a company that sells software for pipe flow and this paper documents the basic method. I haven't read through it in detail, but everything I've seen thus far looks good. Have you looked it over yet? Seen any obvious problems?
Let’s use this thread to talk about what should go into a post regarding how to do pipe flow analysis. Then we could create a new thread, starting off with a good introduction to pipe flow, and present material such as this paper or any other references such as for expansion joints (convoluted metal hose) or other restrictions that aren’t covered by this paper such as mitered elbows at various angles, orifices, etc... We might consider putting in a spreadsheet calculator too. Quark sent me one that might be good. Speaking of whom, where is Quark? I’d like to get his involvement in here too.
I think we need to start off from the perspective of someone in college or who had just graduated. Why would someone like that want to read the post or learn about pipe flow? What are we going to present and where does it come from (ie: references)? What are the limitations? Why not use CFD or NS equations for pipe flow? Where does the standard Bernoulli equation limit us in calculating pipe flow? Why use Darcy-Weisbach, why not Poiseuille or others? What limitation is there on low pressure or vacuum (introduce Knudsen number since this method is also applicable to vacuum systems down to a relatively low pressure, typically ~ 0.1 Torr)? Etc…
Hmm… that’s about it for now. I like the idea of coming up with a thread that could be used for reference on pipe flow (he says for the umpty-squat time), but I think we should talk about the best way to do that and what it needs to contain.
Comments from students and others here would be great too! I think we should hear from everyone.
5. Aug 11, 2007
### Astronuc
Staff Emeritus
Great idea!
In my undergrad program, the Fluid Mechanics course did provide both theoretical concepts and practical application, and we would be expected to derive the practical from the theoretical. In the graduate program, it was more complex and to the point of developing numerical solutions/programs for CFD.
I think pipe-flow is largely cook-book now. Many companies, which provide piping or which build fluid transport systems, have manuals, which give equations and tables for pipe flow (including resistance coefficients for piping and fittings), pump performance, and other useful engineering information.
Thank you Q_Goest for that useful pdf attachment.
Last edited: Aug 12, 2007
6. Aug 11, 2007
### bajaj_383
can u focus something on how pipes in parallel will work in comparison to the same pipes in series.i want to calculate pressure drop across 4 parallel pipes emerging from a tank & entering inside the other tank . pipes of sam dia,& same length.
thnks
7. Aug 12, 2007
### Q_Goest
Hi bajaj,
I see you posted your question up in the classical physics forum and got a responce. Here's just a few more thoughts. Note that putting two or more pipes in parallel will result in some flow rate between points 1 and 2, but when these pipes are put in series with the same pressure drop, the flow rate can't be directly calculated from the parallel case. Although pressure drop is a function of the square of the flow rate, the friction factor can change dramatically with velocity. The Darcy-Weisbach equation looks a lot like:
dP = C * Q^2
where dP = pressure drop
C = a constant for any given piping system
Q = flow rate
But the C in the equation is a function of friction factor, f, which varies with flow. So it's not so simple.
Hi Astronuc,
Thanks for the kind words. I'll see if I can write up a rough draft for the thread in the next day or so. Hope to hear from you and get comments.
8. Aug 14, 2007
### FredGarvin
I do see Quark poke his head in from time to time. I do agree it would be nice to get his input as well with it.
I'll start with a quick attempt at an outline and some major points. I'll PM you when I get it together.
9. Jul 16, 2010
### mzp
Please use this handbook with extreme caution & doublecheck the equations/formulas.
E.g.: there is a vicious typo in equations 1&2. dP is labelled as a _pressure_ drop instead of a head loss.
The difference is pressure=pascals=N/m2
head loss = meters
Its always a good idea to check the units of any posted equation.
mz
10. Jul 19, 2010
### stewartcs
Pressure drop can certainly be expressed in meters of head just as it can be in Pascals.
CS
11. Jul 19, 2010
### mzp
While that is certainly true, head loss is typically labeled "h" with units of length, and "P" is typically expressed in terms of Pa, psi, etc.
There is another typo that I've found just recently: the sudden expansion and contraction equations should be divided by beta^4; beta=dSmall/dLarge.
12. Jul 19, 2010
### stewartcs
Not really, over the years I've seen it both ways equally. Just depends on the application.
They do not need to be diveded by beta^4 since the velocity is based on that of the smaller diameter pipe. However, if the velocity is based on that of the larger diameter pipe, then one would divide by beta^4.
CS
13. Mar 15, 2011
### QPAS
I wonder if there is no table or graph with which I can make a quick estimation of the pressure drop in function of pipe diameter and flowrate ( for 90° bends only ) ... can anyone help me with that ??
14. May 27, 2011
### vivaldoblue
What do you want to tell him?
15. May 27, 2011
### mombarre
this is a useful thread. I am waiting to hear more ..please keep it alive.
16. May 30, 2011
### spanky489
bernoulli's equation(which is in fact posted in the pdf our colleague posted up higher), i just handed in my lab report last week on this topic.
of course some parameters must be known in order to calculate the losses in bends.
so basically this is what bernoullis equation looks like:
$\frac{p_{1}}{\rho\cdot g} + \frac{v^{2}_{1}}{2\cdot g} + \ z_{1} + \ h_{pump} = \frac{p_{2}}{\rho\cdot g} + \frac{v^{2}_{2}}{2\cdot g} + \ z_{2} + \ h_{losses}$
$\ h_{losses} =\ _{i=1}^{n}\sum{\frac{v^{2}_{i}}{2 \cdot g} \left[\xi_{valve} + \xi_{bend} + \xi_{friction} \right]}$
now if your water flow sistem doesnt use a pump you take it out of the 1st equation and continue calculating with the already stated variables.
Of course you will need to know a few things before you can start. 1st you need to know both pressures of the points you are referring to, then you need to know both of the velocities(if you know one velocity and the corresponding flux, you can calculate the 2nd), $\ z_{2} - z_{1} =$ height difference.
regarding losses, they are the sum of all elements with a particular velocity.
lets say there are 2 valves inbetween point 1 and 2, and 1 valve and 1 bend inbetween points 2 and 3.
now if im trying to find global losses in my system i will use the bernoulli equation for points 1 and 3. ill give an example how to calculate the losses in between.
$\ h_{losses 1-3} =\frac{v^{2}_{1-2}}{2 \cdot g} \left[2 \cdot \xi_{valve} + \xi_{friction 1-2} \right] + \frac{v^{2}_{2-3}}{2 \cdot g} \left[1 \cdot \xi_{valve} + 1 \cdot \xi_{bend} + \xi_{friction 2-3} \right]$
i hope this will help you guys with your problems.
if you need any help regarding this topic in the future feel free to ask.
Last edited: May 30, 2011
17. Oct 18, 2011
### bhavikkp
Can anybody show how to develop a model for the flow through pipe with all fitting
18. Oct 20, 2011
### stewartcs
What kind of model? The equations have already been given in the earlier threads.
CS
|
{}
|
# publishing calibration data with bag file
Hi,
I recorded a bag file with a video sequence. The corresponding bag file does only contain the image_raw topic, as by the time I recorded the sequence I had not calibrated the camera yet.
Now I have calibrated the camera, and I would like to play the bag file and simultaneously publish the camera_info data so that, for example, I can undistort the images from the bag file using image_proc.
How can I start a publisher that only publishes the camera_info ? Or could I add this camera_info in some way to the pre-existing bag file ?
Thanks
edit retag close merge delete
Sort by » oldest newest most voted
Hi BeLioN,
please take a look at our image_sequence_publisher where we publish images as a ros topic, while we read a yaml file and publish it as calibration for a "fake" camera.
You could take part of our code to make a node that publishes your calibration info in the correct topic.
more
There's always the option of manually running a publisher using the rostopic command line tool while replaying the bag file, but I think this will probably turn out to be too cumbersome.
Adding the CameraInfo messages to the bag file is probably a more convenient solution. You can add them to the existing bag quite easily by using the rosbag API. The examples in the rosbag Cookbook will probably help you find out how.
more
|
{}
|
HELIOS-K: an ultrafast, open-source opacity calculator for radiative transfer
Grimm, Simon L; Heng, Kevin (2015). HELIOS-K: an ultrafast, open-source opacity calculator for radiative transfer. The Astrophysical Journal, 808(2):182.
Abstract
We present an ultrafast opacity calculator that we name HELIOS-K. It takes a line list as an input, computes the shape of each spectral line and provides an option for grouping an enormous number of lines into a manageable number of bins. We implement a combination of Algorithm 916 and Gauss-Hermite quadrature to compute the Voigt profile, write the code in CUDA and optimise the computation for graphics processing units (GPUs). We restate the theory of the k-distribution method and use it to reduce $\sim 10^5$ to $10^8$ lines to $\sim 10$ to $10^4$ wavenumber bins, which may then be used for radiative transfer, atmospheric retrieval and general circulation models. The choice of line-wing cutoff for the Voigt profile is a significant source of error and affects the value of the computed flux by $\sim 10\%$. This is an outstanding physical (rather than computational) problem, due to our incomplete knowledge of pressure broadening of spectral lines in the far line wings. We emphasize that this problem remains regardless of whether one performs line-by-line calculations or uses the k-distribution method and affects all calculations of exoplanetary atmospheres requiring the use of wavelength-dependent opacities. We elucidate the correlated-k approximation and demonstrate that it applies equally to inhomogeneous atmospheres with a single atomic/molecular species or homogeneous atmospheres with multiple species. Using a NVIDIA K20 GPU, HELIOS-K is capable of computing an opacity function with $\sim 10^5$ spectral lines in $\sim 1$ second and is publicly available as part of the Exoclimes Simulation Platform (ESP; www.exoclime.org).
Abstract
We present an ultrafast opacity calculator that we name HELIOS-K. It takes a line list as an input, computes the shape of each spectral line and provides an option for grouping an enormous number of lines into a manageable number of bins. We implement a combination of Algorithm 916 and Gauss-Hermite quadrature to compute the Voigt profile, write the code in CUDA and optimise the computation for graphics processing units (GPUs). We restate the theory of the k-distribution method and use it to reduce $\sim 10^5$ to $10^8$ lines to $\sim 10$ to $10^4$ wavenumber bins, which may then be used for radiative transfer, atmospheric retrieval and general circulation models. The choice of line-wing cutoff for the Voigt profile is a significant source of error and affects the value of the computed flux by $\sim 10\%$. This is an outstanding physical (rather than computational) problem, due to our incomplete knowledge of pressure broadening of spectral lines in the far line wings. We emphasize that this problem remains regardless of whether one performs line-by-line calculations or uses the k-distribution method and affects all calculations of exoplanetary atmospheres requiring the use of wavelength-dependent opacities. We elucidate the correlated-k approximation and demonstrate that it applies equally to inhomogeneous atmospheres with a single atomic/molecular species or homogeneous atmospheres with multiple species. Using a NVIDIA K20 GPU, HELIOS-K is capable of computing an opacity function with $\sim 10^5$ spectral lines in $\sim 1$ second and is publicly available as part of the Exoclimes Simulation Platform (ESP; www.exoclime.org).
Statistics
Citations
Dimensions.ai Metrics
15 citations in Web of Science®
14 citations in Scopus®
Altmetrics
Detailed statistics
Item Type: Journal Article, refereed, original work 07 Faculty of Science > Institute for Computational Science 530 Physics English August 2015 23 Feb 2016 07:55 14 Feb 2018 11:13 IOP Publishing 1538-4357 Hybrid Publisher DOI. An embargo period may apply. https://doi.org/10.1088/0004-637X/808/2/182 arXiv:1503.03806v2
|
{}
|
# Tag Info
0
I think that you have faced a conceptual problem because you have used the relationship $V=RI$, rather you should have been using the definition of resistance $R = \frac V I$. Then as $V$ goes up and $I$ goes down the value of $R$ increases. Also you must be careful about $I=0$ (current equals exactly zero) and $I\rightarrow0$ (current gets closer and closer ...
0
Bear in mind that infinite resistance is equal to an open circuit. In real life if you have a battery not connected to anything, there will always be a voltage across the terminals. Even a lead-acid car battery will produce sparks if you short the terminals!
-1
At R→∞ voltage won't be 0 but little less than EMF off cell, since a series circuit with voltmeter and battery is complete. Yes, voltage across resistor reaches 0.Voltmeter reading is not EMF because it draw some current which pass through internal r and gets some potential across it. An Ideal voltmeter cannot tell you voltage across resistance in ...
0
-First, I'm trying to grasp the concept of why and how the voltage drop at the terminals of a battery depends on the resistance of the circuit. The EMF, $\mathcal{E}$, exists chemically and is always present. If there is a path for current to flow, then there will be a voltage drop across the internal resistance, $r$, and the external resistance, $R$. ...
1
I actually did this for a battery control driver for a tablet. The only way I found to do it was to take manufacturers charge/discharge curves as functions of temperature and number of cycles and encode them by hand in a lookup table. This does assume that the manufacturer has enough data, the data is accurate, and is willing to supply it.
1
As someone who did a degree in physics before moving into electronics and s/w R&D, my experience would suggest "yes". Over the years I have been involved in a number of projects that could be classified as experimental physics, and in all cases knowledge of electronics was a vital part. At the very least a physicist should be able to read a circuit ...
0
Physics is about making models of the world, if you can make them as accurate as possible why wouldn't you? Incidentally, sometimes you really need acuracy as the smallest difference in your initial conditions can make a great difference in your outcome (see chaotic systems, the best example is weather or the double pendulum) Imagine instead of taking ...
0
The live wire oscillates and either drains or supplies electricity to the neutral wire. The live wire oscillates between 230V to -230V and the neutral wire always stays 0V.
Top 50 recent answers are included
|
{}
|
- On the evaluation complexity of constrained nonlinear least-squares and general constrained nonlinear optimization using second-order methods Coralia Cartis(coralia.cartised.ac.uk) Nicholas I.M. Gould(nick.gouldstfc.ac.uk) Philippe L. Toint(philippe.tointunamur.be) Abstract: When solving the general smooth nonlinear optimization problem involving equality and/or inequality constraints, an approximate first-order critical point of accuracy $\epsilon$ can be obtained by a second-order method using cubic regularization in at most $O(\epsilon^{-3/2})$ problem-functions evaluations, the same order bound as in the unconstrained case. This result is obtained by first showing that the same result holds for inequality constrained nonlinear least-squares. As a consequence, the presence of (possibly nonlinear) equality/inequality constraints does not affect the complexity of finding approximate first-order critical points in nonconvex optimization. This result improves on the best known ($O(\epsilon^{-2})$) evaluation-complexity bound for solving general nonconvexly constrained optimization problems. Keywords: Nonlinear optimization, evaluation complexity, general constrained problem Category 1: Nonlinear Optimization (Constrained Nonlinear Optimization ) Category 2: Nonlinear Optimization (Nonlinear Systems and Least-Squares ) Category 3: Nonlinear Optimization (Bound-constrained Optimization ) Citation: @techreport{CartGoulToin13a, author={C. Cartis and N. I. M. Gould and Ph. L. Toint}, title = {On the evaluation complexity of constrained nonlinear least-squares and general constrained nonlinear optimization using second-order methods}, institution = {Namur Center for Complex Systems (NAXYS), University of Namur}, address = {Namur, Belgium}, number = {naXys-01-2013}, year = 2013} Download: [PDF]Entry Submitted: 04/03/2013Entry Accepted: 04/03/2013Entry Last Modified: 04/03/2013Modify/Update this entry Visitors Authors More about us Links Subscribe, Unsubscribe Digest Archive Search, Browse the Repository Submit Update Policies Coordinator's Board Classification Scheme Credits Give us feedback Optimization Journals, Sites, Societies Optimization Online is supported by the Mathematical Optmization Society.
|
{}
|
# Geometry of fitness landscapes
What “shape” is the fitness landscape explored by agents in an evolutionary process? In simple optimisation problems without interaction? In multi-agent systems with interactions between agents? (i.e. with niche construction) In actually existing nature, in all its chaotic glory?
Typically, genetic algorithms are implemented fixed-length genomes floating values in the range $$[0,1]$$ — the landscape is $$[0,1]^n$$ for genome of length $$n$$. (In e.g. John-Holland-style classic GAs it’s $$\{0,1\}^n$$, and in terrestrial life, something like $$\{G,A,T,C\}^n$$.) More generally these can be free monoids, that is, arbitrary-length strings, or possibly defined on an infinite or uncountable field.
In real evolutionary systems, your average slice of nature red in tooth’n’claw, having got these genomes, you have a lot of steps to go. You then express them as a phenotype, which can be itself quite hard to predict from the genotype (protein folding, epigenetics, general environmentally conditioned expression or suppression), and then throw it into an environment wherein its success may be determined by chance, some intrinsic degree of functionality, or some highly specific combination of those and interactions with its peers, with, that is, the entire ecosystem. So the similarity of two genotypes can be wildly complex; and the relationship between that and its actual probability of reproducing can be grossly dependent on the environment and other phenotypes. Typical in-silico systems cut out that messy phenotypic business and focus on genotypic selection, but it can still be pretty wild there.
OK, so how with those disclaimers, what kind of regularities can we expect to find?
Consider general evolutionary processes, say, genetic programming- is $$\mathbb{R}^n$$ still the most natural space in which to embed our landscape? There is still that trivial mapping from the free monoid depicting the genome to $$\mathbb{R}^n$$, but now there are problem-domain- and encoding-specific “folds”, cases where the symbols in the string can be swapped or substituted without changing the functional form of the algorithm, and which can be known a priori given said encoding and search-space. (Anyone who uses genetic programming for symbolic regression is used to, e.g. getting both $$\sin(x)$$ and $$-\sin(-x)$$ as solutions, or $$x+y$$ and $$y+x$$.) How do the likelihoods of these degenerate solutions increase with the genome length? Is it still exponential in genome length, as with the volume of space encompassed by a non-hierarchical GA?
Further, consider evolutionary processes that include significant levels of niche construction, where evolution becomes path-dependent. Is there still some notion of fitness landscape that can be made rigorous for these algorithms, or some mapping between phenotypic and genotypic fitnesses that captures the same function as the fitness landscape? I suspect this problem is well-explored, but I’m missing the keywords to find it. A jelly bean for you if you can tell me, so I can tell my old ecology lecture what to show on the slides for the lecture on path-dependence.
I know people must occasionally toy with these areas, but google scholar has not helped me thus far. Hints?
### No comments yet. Why not leave one?
GitHub-flavored Markdown & a sane subset of HTML is supported.
|
{}
|
What is a rational trigonometric function? Is $\cos x$ rational?
I am reading Trigonometry by Gelfand and Saul. On p.140 they discuss rational trigonometric functions and define one as:
A rational trigonometric function is a function you can get by taking the sine and cosine of various angles, together with all the constant functions, and adding, subtracting, multiplying or dividing them.
I want to check my understanding of what exactly is meant by a rational trigonometric function.
Is $\tan x$ rational because $\tan x = \dfrac{\sin x}{\cos x}$? (You take the sine and cosine of $x$ and divide)
$\sin(x+y)$ is rational because $$\sin(x+y) = \sin x \cos y + \cos x \sin y$$ (You take the sine and cosine of $x$ and $y$ and there is multiplication and addition).
$\sin x$ is not rational (because you are just taking the sine of $x$, and there is no multiplication, division, addition or subtraction).
$\cos x$ is not rational for the same reason.
$\sqrt{2} \sin\alpha$ is not rational either.
Are my thoughts about rational trigonometric functions along the right lines?
• $\cos(x) = 1 \cdot \cos(x)$ which would make it rational... – gt6989b Jul 8 '13 at 19:41
• I've improved your question's formatting; apologies if I changed your meaning. You can see here how I edited your question. Please see here for a guide to writing math with MathJax, and see here for a guide to formatting posts with Markdown. – Zev Chonoles Jul 8 '13 at 19:41
• Since $\cos x = \dfrac{\cos x}{\sin^2x +\cos^2 x}$, it is a rational trig function even when requiring division. :) – Thomas Andrews Jul 8 '13 at 20:01
3 Answers
I think you are simply misreading the definition. They perhaps should have used the words "starting with" rather than "taking." That is:
• $\sin x, \cos x$ and constant functions are rational trig functions
• If $p(x),q(x)$ are rational trig functions, then $p(x)\cdot q(x), p(x)+q(x),p(x)-q(x)$ are rational trig functions. If $q(x)\neq 0$ for some $x$, then also $p(x)/q(x)$ is a rational trig function.
Even with your reading of the text, since $\sin^2 x + \cos^2 x=1$, we can get $$\cos x =\frac{\cos x}{\sin^2 x + \cos^2 x}$$ But I suspect that it was not the author's intent for the paragraph to be interpreted that way.
Some care will need to be taken if you want to also include multiple variable rational trig functions.
You aren't explicitly allowed to take square roots, but that doesn't mean that $\sqrt{\sin x}$ is not a rational trig function. I suspect that the authors just meant to make it clear that they didn't include some operations in allowing you to create rational trig functions.
For example, while $\sqrt{\sin x}$ is not rational, $\sqrt{2+2\sin x-\cos^2 x}$ is rational, since it happens to be equal to $1+\sin x$. Proving that $\sqrt{\sin x}$ can't be written that way is actually some work, and probably most easily done with complex analysis. Essentially, we can show that a rational trig function can only be undefined for finitely many $x\in[0,2\pi]$, and when it is defined at $x$, it is differentiable at $x$. This breaks for $\sqrt{\sin x}$.
But again, I suspect the authors don't want you to go that far into it, and instead just note, "we only allow these operation, and square root wasn't one of them."
• +1 for the nice comment about $\sqrt{2+2\sin x-\cos^2 x}$. – Dave L. Renfro Jul 8 '13 at 21:22
• Thank you. It is correct that I found the author's definition hard to understand. – mikoyan Jul 9 '13 at 16:33
$\sqrt{2}\sin x$ is rational because $\sqrt{2}$ is a constant function of $x$, and you find constant functions mentioned in your definition. $\sin x$ and $\cos x$ are both rational functions. You can look at that in any of several ways:
• The functions you listed are the first rational functions, and the ones you can get from them by adding, subtracting, multiplying, and dividing other rational functions;
• $\sin x$ is $1\cdot\sin x$, so you're multiplying two of the functions you initially listed: a constant function and $\sin x$;
• The number of things you multiply can be $1$. So you get $\sin x$.
$\tan x$ is a rational function for precisely the reason you mention, and similarly $\cos x\sin y + \sin x\cos y$.
I should add that when I say "rational functions", I mean rational functions of sine and cosine. The term "rational function" without that modifier would mean just what you get by starting with constants and variables and adding, subtracting, multiplying, or dividing them.
• Thank you, that makes sense. $\sqrt{\sin x}$ is not a rational function in the book. Why not? – mikoyan Jul 8 '13 at 20:06
• Well, proving $\sqrt{\sin x}$ is not a rational trig function might be hard, but in the definition, you aren't allowed to take square roots. It still could be the case that it is also representable in the above format somehow, using some trig identity. The only way I can think of to prove this can't happen is using complex analysis. @mikoyan – Thomas Andrews Jul 8 '13 at 20:15
• The short answer for why, ultimately, $\sqrt{\sin x}$ can't be written as a rational trig function is that we can prove that if $f(x)$ is rational trig, then $f(x)$ can only be undefined for finitely many points in $[0,2\pi]$ and when $f(x)$ is defined, $f$ is differentiable at $x$. $\sqrt{\sin x}$ has differentiable problems at $x=0$. – Thomas Andrews Jul 8 '13 at 20:51
• Notice that $\sqrt{\sin x}$ is not defined when $\sin x$ is negative. That won't happen with rational functions of sine and cosine. One way of showing that $\sqrt{\sin x}$ is not a rational function of sine and cosine is to show that its graph has vertical tangent lines at some points, and also that that can't happen with rational functions of sine and cosine. I think that second part will take more work than the first part. – Michael Hardy Jul 9 '13 at 0:48
A rational function is the quotient of two polynomials (with the lower one non-zero of course) so something like $$f(x)=\cfrac{\sum\limits_{k=0}^na_kx^k}{\sum\limits_{k=0}^mb_kx^k}$$
And so a trigonometric rational function is the quotient of two polynomials in $\sin x$ and $\cos x$
$$f(x)=\cfrac{\sum\limits_{i=0}^n\sum\limits_{j=0}^ma_{i,j}\cos^i(x)\sin^j(x)}{\sum\limits_{i=0}^p\sum\limits_{j=0}^qb_{i,j}\cos^i(x)\sin^j(x)}$$
• This answer is technically correct but does not directly address the OP's incorrect beliefs that $\sin x$ and $\cos x$ are not rational trigonometric functions. – Zev Chonoles Jul 8 '13 at 19:46
• Also, he has rational across several variables. Basically, a rational function of $e^{ix_1},e^{ix_2},\dots,e^{ix_n}$. – Thomas Andrews Jul 8 '13 at 19:54
• Ok, so $\cos x$ and $\sin x$ are rational trigonometric functions. Why is this so, particularly in terms of Gelfand and Saul's definition? Why is $sqrt\sin x$ not rational? – mikoyan Jul 8 '13 at 19:57
|
{}
|
Suppose that $f(x) = -5 x^2 - 5$.
(A) Find the slope of the line tangent to $f(x)$ at $x=3$.
(B) Find the instantaneous rate of change of $f(x)$ at $x=3$.
(C) Find the equation of the line tangent to $f(x)$ at $x=3$. $y=$
|
{}
|
## A question from velocity time graph
if graph of velocity v and time t for a particle moving along straight line is parabola symmetric about time axis, then the relation between acceleration a and time t for particle is …. ? (Divya posted this question)
## Why electrons cannot be accelerated using a cyclotron?
In a cyclotron, the most important condition is that of the cyclotron frequency. The frequency of the square wave oscillator connected to the dees of the cyclotron must match the frequency of revolution of the charged particle being accelerated. For ordinary ions, once the frequency is set there is no need to change or adjust the frequency. The [...]
## A problem from light for class X
Sudhanshu Misra posted:
“An object is placed at a distance of 10cm from convex mirror of focal length15cm; find the position and nature of image?”
Ans:
f= 15 cm
u=-10 cm
v=?
substitute the values
|
{}
|
# Subresultants
Title: Subresultants Authors: Sebastiaan Joosten, René Thiemann (rene /dot/ thiemann /at/ uibk /dot/ ac /dot/ at) and Akihisa Yamada Submission date: 2017-04-06 Abstract: We formalize the theory of subresultants and the subresultant polynomial remainder sequence as described by Brown and Traub. As a result, we obtain efficient certified algorithms for computing the resultant and the greatest common divisor of polynomials. BibTeX: @article{Subresultants-AFP, author = {Sebastiaan Joosten and René Thiemann and Akihisa Yamada}, title = {Subresultants}, journal = {Archive of Formal Proofs}, month = apr, year = 2017, note = {\url{https://isa-afp.org/entries/Subresultants.html}, Formal proof development}, ISSN = {2150-914x}, } License: BSD License Depends on: Jordan_Normal_Form, Polynomial_Factorization Status: [ok] This is a development version of this entry. It might change over time and is not stable. Please refer to release versions for citations.
|
{}
|
In 2015, African ministers established the Ngor Declaration to achieve universal access to adequate sanitation and hygiene services and eliminate open defecation by 2030. Realizing this target will require significant public and private investment. Over the last two decades, there has been increasing recognition that sanitation programs should be demand driven, yet limited information exists about how much rural residents in developing countries are willing to pay for sanitation improvements. This paper applies the contingent valuation approach to evaluate how much households in rural Senegal are willing to pay for a ventilated improved pit (VIP) latrine. The analysis uses data from 1,635 household surveys that were conducted in 47 rural communities across four regions in Senegal. The willingness to pay model found that respondents were more willing to pay for a VIP latrine if they had plans to improve their existing latrine, lived in districts located nearer to the capital city of Dakar, were dissatisfied with their existing sanitation service, and were male. The analysis also indicates that the current household contribution of 5% of the costs of constructing a VIP latrine could be increased to 30% with only a modest decline in the number of households willing to pay this amount.
## INTRODUCTION
More than 2.4 billion people lack access to an improved sanitation facility, and the majority of these people are poor and live in rural areas (UNICEF & WHO 2015). In Senegal, only 40% of the rural population has access to improved sanitation, with rural sanitation being the only sub-sector in the country significantly below the MDG targets (WSP 2011). Addressing the sanitation challenge will require dedicated public and private resources combined with a strong political will that elevates sanitation in national policies and programs.
In May 2015, the Ministers and Heads of Delegations responsible for sanitation and hygiene in Africa adopted the Ngor Declaration on Sanitation and Hygiene in Dakar, Senegal, to ‘[a]chieve universal access to adequate and sustainable sanitation and hygiene services and eliminate open defecation by 2030.’ (AfricaSan 2015). This outcome aligns with the targets under Goal 6 of the proposed Sustainable Development Goals that also call for ‘the participation of local communities in improving water and sanitation management’ (UN 2014, p. 12). The challenge in realizing these goals in Senegal, and Africa in general, will be to find innovative ways to encourage or enable local communities to participate in improving their access to sanitation services and eliminate open defecation.
Investments in sanitation services have suffered from poorly designed programs that promote technically or culturally inappropriate technologies that households do not use (Mara et al. 2010). Such misjudgments concerning the nature of consumer demand in the sanitation sector of developing countries are frequent and stem from a lack of data on existing levels of services, residents’ priorities, and/or their demand for different sanitation services.
Large-scale sanitation programs often involve a donor or government subsidy. To set the design cost, it is necessary to estimate the amount most households are able and willing to pay for improved sanitation, and the likely level of subsidy available (Cairncross 1992). It is surprising that few studies have addressed existing demand for sanitation in Africa (Whittington et al. 1993; Altaf & Hughes 1994; Jenkins & Scott 2007; Rahji & Oloruntoba 2009; Meeks 2012). The Sanitation Directorate in Senegal established a sanitation promotion policy that includes subsidies and awareness raising, yet there is little data on appropriate subsidy levels (WSP 2011). No research was found on the willingness of rural residents in Africa to pay for sanitation facilities, specifically for ventilated improved pit (VIP) latrines, the topic of this paper. Furthermore, there is little quantitative research on the predictors of latrine ownership in Africa.
Without such research, governments and development organizations must estimate what consumers would be willing to contribute for a particular sanitation service. A program funded by the African Development Bank in Senegal to construct 11,000 family latrines (consisting of either a VIP or pour flush latrine), set the household contribution to participate in this program at 5% of the costs of the facility (African Development Bank 2010). In 2009, the Center for Low Cost Water Supply and Sanitation (CREPA), based in Burkina Faso, estimated the cost of constructing a VIP latrine as 180,870 FCFA ($307) (Niang 2009). Therefore, a 5% contribution towards the construction of a VIP latrine would be around 9,000 FCFA ($15). The conversion rate used in this paper was $1 USD = 590 FCFA. At the time of the fieldwork, the conversion rate was$1 USD = 450 FCFA.
The existing research shows a wide range of willingness to pay (WTP) between countries, and that the demand for improved sanitation depends on a variety of factors such as gender, education, assets owned, income, health practices and knowledge, and social networks (Faisal & Seraj 2008; Van Minh et al. 2013; Thanh et al. 2014; Shakya et al. 2015). This variation means that accurate estimates of WTP should be made at the country or regional level.
This paper describes the application of a contingent valuation (CV) method to estimate household demand for one possible sanitation technology, the VIP latrine, in rural Senegal. It also evaluates household level factors associated with the WTP for the VIP latrine.
## METHODS AND DATA
The research presented in this paper was undertaken as part of a larger study on the productive use of domestic water in Senegal funded by the World Bank's Water and Sanitation Program (WSP). The purpose of the larger study was to examine relationships between the productive use of domestic water, poverty reduction, and sustainability (Van Houweling et al. 2012; Hall et al. 2013; Hall et al. 2014). All of the research was undertaken in compliance with the research protocol approved by Virginia Tech's Institutional Review Board (Protocol ID: 09-153). In preparation for the fieldwork in Senegal, the research team was asked by the WSP-Senegal to also collect data on the willingness of rural households to pay for a VIP latrine, a technology of specific interest to development organizations working in the country. The large-scale, empirical nature of the research presented a unique opportunity to develop data on rural demand for VIP latrines. The fieldwork in Senegal was conducted over a three-month period from May to August 2009.
### Sample framework and methods
Of the eight regions located in the Northern and Central zones of Senegal, four were selected for the study (St. Louis, Matam, Diourbel, and Kaffrine) based on an assessment of the agricultural and livestock activity occurring within the regions and the desire to have some variation among the regions in terms of hydrological, geographic, and climate characteristics. The focus on agricultural and livestock activity was due to the emphasis of the larger study on the productive use of domestic water (Hall et al. 2014). In particular, the study focused on small-scale rural piped water systems that obtain water from deep boreholes fitted with electric-powered pumps.
Within the four study regions, 47 rural piped water supply systems were selected based on variation in the reported levels of productive activity supported by the systems –14 in Diourbel, 12 in Kaffrine, 10 in Matam, and 11 in St. Louis. Prior to the main fieldwork, a pilot study was undertaken to test the surveying instruments and WTP module. No substantive changes were made to the WTP module following the pilot, and the surveyors reported no problems with the administration of this module.
The majority (94%) of respondents presented with a required contribution of 8,000 FCAF ($14) towards the VIP latrine were willing to pay this amount. Further, 88, 79, 56, and 45% of respondents presented with contribution amounts of 16,000 FCAF ($27), 32,000 FCAF ($54), 64,000 FCAF ($108), and 80,000 FCAF ($136) were also willing to pay the stated amount, respectively. The WTP model identified five variables that are statistically significant predictors of a respondent's WTP for a VIP latrine (Table 2). First, the bid value was important. The higher the bid, the less willing a respondent was to pay for a VIP latrine (p < 0.001). Second, if a household had plans to improve its sanitation situation, respondents from these households had 2.7 greater odds of being willing to pay for a VIP latrine than respondents from households without such plans (p < 0.001). Third, households in Diourbel and Kaffrine were more willing to pay for a VIP latrine than in Matam and St. Louis (p < 0.001), even though households in Matam and St. Louis have a higher average monthly income. Fourth, respondents who were somewhat dissatisfied (odds ratio = 2.07, p < 0.01) or not satisfied (odds ratio = 1.76, p < 0.05) with their sanitation situation were more willing to pay for a VIP latrine than respondents who were generally satisfied with their sanitation situation. A higher level of dissatisfaction was also found to be highly correlated with a household's plans to improve their sanitation facility (p < 0.001). And finally, male respondents were more willing to pay for a VIP latrine than female respondents (odds ratio = 1.52, p < 0.01). Table 2 Results of logistic regression predicting WTP for sanitation Odds RatioLower CIUpper CIStatistical Significance (Intercept) 4.60815 2.59277 8.19013 *** Bid 0.99996 0.99995 0.99996 *** Plans to Improve Sanitation 2.73111 1.97882 3.76939 *** District–Kaffrine 1.0535 0.69074 1.60676 District–Matam 0.37919 0.25288 0.56859 *** District–St. Louis 0.31968 0.21657 0.47187 *** Open Defecation 1.17693 0.80734 1.71571 Satisfaction with Sanitation–Somewhat dissatisfied 2.07002 1.26728 3.38124 ** Satisfaction with Sanitation–Not Satisfied 1.76311 1.10894 2.8032 Illness in Past Week–Yes 1.15389 0.8695 1.5313 Household Income 1.00039 0.99995 1.00082 Size of Household 1.01591 0.99667 1.03552 Respondent Education 1.22634 0.79394 1.89422 Respondent Male 1.51564 1.13259 2.02825 ** Odds RatioLower CIUpper CIStatistical Significance (Intercept) 4.60815 2.59277 8.19013 *** Bid 0.99996 0.99995 0.99996 *** Plans to Improve Sanitation 2.73111 1.97882 3.76939 *** District–Kaffrine 1.0535 0.69074 1.60676 District–Matam 0.37919 0.25288 0.56859 *** District–St. Louis 0.31968 0.21657 0.47187 *** Open Defecation 1.17693 0.80734 1.71571 Satisfaction with Sanitation–Somewhat dissatisfied 2.07002 1.26728 3.38124 ** Satisfaction with Sanitation–Not Satisfied 1.76311 1.10894 2.8032 Illness in Past Week–Yes 1.15389 0.8695 1.5313 Household Income 1.00039 0.99995 1.00082 Size of Household 1.01591 0.99667 1.03552 Respondent Education 1.22634 0.79394 1.89422 Respondent Male 1.51564 1.13259 2.02825 ** Significance codes: '***' p < 0.001; '**' p < 0.01; '*' p < 0.05; '.' p < 0.1. ## DISCUSSION The average willingness of all households to pay for a VIP latrine was 72,300 FCFA ($123), although this value varied significantly by district. Given that the median monthly household income was 53,100 FCFA ($90) and the lowest district-level average WTP for a VIP latrine was 55,800 FCFA ($95), these values could be treated as a conservative upper bound to what the majority of households would be willing to contribute towards the construction of a VIP latrine. Thus, households may be willing to contribute much more than the estimated 5% of the costs of constructing a VIP latrine and perhaps as much as 30% of these costs.
The majority of the variables found to be associated with a WTP for the VIP latrine aligned with the authors’ expectations and the results of previous studies. Proximity to urban centers makes sense as a predictor of WTP because it entails greater access to sanitation markets, increased exposure to sanitation and hygiene campaigns, potentially higher incomes, and stronger networks with other people who already have latrines (Shakya et al. 2015). The variation of WTP between regions in Senegal may also be explained by other factors such as livelihood differences and remittance flows. The implication of these findings is that VIP latrine programs dependent on household contributions are likely to experience greater uptake in those districts located nearer to the capital city of Dakar.
Although few studies consider gender – e.g., see Van Minh et al.’s (2013) study in Vietnam and Fujita et al.’s (2005) study in Peru – this research found that men were more willing to pay for sanitation than women. This finding does not necessarily imply that women are less concerned with sanitation, and might instead be related to the fact that women in rural Senegal typically do not make the major household decisions or have access to household finances (Van Houweling et al. 2012).
There was some association between open defecation and WTP for a VIP latrine, but this relationship was not statistically significant (odds ratio = 1.18, p > 0.1). Open defecation was highly correlated with dissatisfaction (p < 0.001). Thus, a household's level of satisfaction with their sanitation service and their motivation or plans to build a latrine is likely to be a more useful predictor of their WTP for a VIP latrine than their existing sanitation practices.
Households with a larger number of family members (odds ratio = 1.016) that reported an illness in the past week (odds ratio = 1.15), and that had a respondent who had at least a primary school education (odds ratio = 1.23), were more willing to pay for a VIP latrine, but these findings are not statistically significant (p > 0.1). Thus, respondents either did not make an association between their sanitation practices and family health, or other factors such as their general dissatisfaction with their sanitation situation were driving their decision process.
Household income had a positive relationship with WTP, though it was only statistically significant at the alpha = 0.1 level. It is surprising that there was not a stronger association between some level of economic status and WTP, considering the wide range of income and asset based variables tested in the model and the strong association found in other countries (Seraj 2008; Rahji & Oloruntoba 2009; Van Minh et al. 2013). Along with the higher than expected WTP found among respondents, this result indicates that cost is not the primary constraint for the majority of households surveyed in Senegal to improve their sanitation services.
Research by Guiteras et al. (2015) in rural Bangladesh revealed how subsidies covering 75% of the cost of a latrine led to a 22% increase in latrine ownership compared to no statistically significant increase from a supply-side market access intervention or a CLTS-inspired (community-led total sanitation) community motivation and information program. If Senegal's Sanitation Directorate were to continue its sanitation promotion program and target subsides at VIP latrines, our research indicates that a significant proportion of households may be willing to pay 30% of the construction costs of the latrine. Further, around three quarters (71%) of the households surveyed in Senegal stated a desire to improve their sanitation situation, which highlights the potential demand for a subsidy program targeting a range of latrine options. However, there may still be groups within communities who cannot afford to contribute to the cost of a latrine and full subsidies may be necessary for these poorest households.
### Study limitations
In presenting these results, the authors acknowledge the extensive literature critiquing WTP studies (Cummings & Brookshire 1986; Blamey et al. 1999; Merrett 2002; Hensher et al. 2005). For this study, efforts were taken to reduce the impact of information and hypothetical bias by training the enumerators to clearly and consistently describe the VIP with the support of images. Further, the dichotomous choice method was selected for its relative simplicity and ability to elicit a WTP value with limited bias. However, the authors recognize the potential problem of strategic bias, whereby respondents answer with the intention of influencing a future investment or policy, and ‘yea-saying,’ where respondents answer based on what they think the enumerator would like them to say. In the latter case, two enumerators’ data were removed from the dataset due to a concern that they had been leading the respondent–i.e., all their respondents had accepted all of their bids.
Another limitation of this study is that it only provided respondents with one option: the VIP latrine, a selection that is more technologically advanced (and expensive) than most existing private latrines. National latrine promotion programs should offer households a range of options, including latrine designs that can be built making use of more local materials to increase adoption and reduce the likelihood of affordability constraints. Notwithstanding these concerns, we believe the results from this study provide a good indication of the factors that shape WTP for a VIP latrine in the study districts and can inform the design of sanitation programs in these regions.
## CONCLUSION
Existing data in Senegal indicated that rural households would be willing to pay 8,000 FCFA (or approximately 4.4% of the construction costs) towards the installation of a VIP latrine. This study found that the majority of households surveyed may be willing to pay up to 30% of the costs of constructing a VIP. While many factors must be considered when establishing the level of household contribution towards a sanitation program, this finding suggests households could bear a greater proportion of the construction costs, which could extend the reach of available funds for a national sanitation program. This study also contributes some key factors that explain a household's WTP for sanitation. Respondents were more willing to pay for a VIP latrine if they had plans to improve their existing latrine, lived in districts (Diourbel and Kaffrine) located nearer to the capital city of Dakar, were dissatisfied with their existing sanitation service, and were male.
## ACKNOWLEDGEMENTS
The research described in this paper was undertaken as part of a multi-country study on the productive use of rural domestic water funded by the Water and Sanitation Program (WSP), World Bank. We would especially like to thank Thomas Fugelsnes for incorporating this study into the scope of the Senegal research. We would also like to thank the staff at IDEV-ic (formerly known as Senagrosol), our main in-country research partner in Senegal, for the significant effort they made in supporting all aspects of the fieldwork. In relation to the data analysis undertaken in this paper, we would like to thank the Laboratory for Interdisciplinary Statistical Analysis (LISA) at Virginia Tech for their statistical assistance. The authors are responsible for any errors or omissions in this paper.
## REFERENCES
African Development Bank
2010
Rural Drinking Water Supply and Sanitation Sub-Programme – Phase II. Senegal
.
Project Appraisal Report
.
African Development Bank
,
Tunisia
.
AfricaSan
2015
The Ngor Declaration on Sanitation and Hygiene
,
http://www.wsscc.org/resources/resource-news-archive/africasan-2015-ngor-declaration-sanitation-and-hygiene?rck=d412c7a8dab6b0e8697edfdb7779c479. The Ngor Declaration was announced at the 4th African Conference on Sanitation and Hygiene (AfricaSan) in Dakar, Senegal, May 25–27, 2015
. (
accessed 17 August 2015
).
Akaike
H.
1974
.
IEEE Transactions on Automatic Control
19
,
716
723
.
Altaf
A. M.
Hughes
J.
1994
.
Urban Studies
31
,
1763
1776
.
Blamey
R. K.
Bennett
J. W.
Morrison
M. D.
1999
.
Land Economics
75
,
126
141
.
Cairncross
S.
1992
Sanitation and Water Supply: Practical Lessons from The Decade
.
International Bank for Reconstruction and Development, World Bank
,
Washington, DC
.
Choe
K. A.
Whittington
D.
Lauria
D.
1996
.
Land Economics
72
,
519
537
.
Cummings
R. R.
Brookshire
D. S.
1986
Valuing Environmental Goods: An Assessment of the ‘Contingent Valuation Method’
.
Rowman Allanheld
,
Totowa, NJ
.
Faisal
K.
Seraj
B.
2008
Willingness to Pay for Improved Sanitation Services and its Implication on Demand Responsive Approach of BRAC Water, Sanitation and Hygiene Programme
.
Working Paper No. 1. BRAC
,
Research and Evaluation Division
,
.
Fujita
Y.
Fujii
A.
Furukawa
S.
Ogawa
T.
2005
Estimation of willingness-to-pay (WTP) for water and sanitation services through contingent valuation method (CVM) - a case study in Iquitos City, The Republic of Peru
.
JBICI Review
11
,
59
87
.
Guiteras
R.
Levinsohn
J.
Mobarak
A. M.
2015
.
Science
348
,
903
906
.
Gunatilake
H.
Yang
J.-C.
Pattanayak
S.
Choe
K. A.
2007
Good Practices for Estimating Reliable Willingness to pay Values in the Water Supply and Sanitation Sector
.
Asian Development Bank
,
Mandaluyong
,
Philippines
.
Hall
R. P.
Van Houweling
E.
Van Koppen
B.
2013
.
Science and Engineering Ethics
20
,
849
868
.
Hall
R. P.
Vance
E.
Van Houweling
E.
2014
The productive use of rural piped water in Senegal
.
Water Alternatives
7
,
480
498
.
Hensher
D.
Shore
N.
Train
K.
2005
.
Environmental & Resource Economics
32
,
509
531
.
Jenkins
M. W.
Scott
B.
2007
.
Social Science & Medicine
64
,
2427
2442
.
Mara
D.
Lane
J.
Scott
B.
Trouba
D.
2010
.
PLoS Medicine
7
,
e1000363
. Doi:10.1371/journal.pmed.1000363.
Meeks
J. V.
2012
Willingness-to-Pay for Maintenance and Improvements to Existing Sanitation Infrastructure: Assessing Community-Led Total Sanitation in Mopti, Mali
.
Department of Civil and Environmental Engineering, University of South Florida
,
Florida
.
Merrett
S.
2002
.
Water Policy
4
,
157
172
.
Niang
D.
2009
Communication du CREPA [Presentation to the Water and Sanitation Program, World Bank]
.
CREPA
,
Burkina Faso
.
Null
C.
Kremer
M.
Miguel
E.
J. G.
Meeks
R.
Zwane
A. P.
2012
Willingness to Pay for Cleaner Water in Less Developed Countries: Systematic Review of Experimental Evidence
.
Systematic Review 006
.
International Initiative for Impact Evaluation
,
London
.
R Core Team
2014
R: A Language and Environment for Statistical Computing
.
R Foundation for Statistical Computing
,
Vienna
,
Austria
.
Rahji
M. A. Y.
Oloruntoba
E. O.
2009
.
Waste Management and Research
27
,
961
965
.
Seraj
K. F. B.
2008
Willingness to Pay for Improved Sanitation Services and its Implication on Demand Responsive Approach of BRAC Water, Sanitation and Hygiene Programme
.
Research and Evaluation Division, BRAC
,
Dhaka
.
Shakya
H. B.
Christakis
N. A.
Fowler
J. H.
2015
.
Social Science & Medicine
125
,
129
138
.
Thanh
N. H.
Van Minh
H.
Huyen
D. T. T.
Chung
L. H.
Hung
N. V.
2014
Use of Contingent Valuation Methods for Eliciting the Willingness to Pay for Sanitation in Developing Countries
.
Vietnam Journal of Public Health
2
,
59
66
.
United Nation Children's Fund (UNICEF), World Health Organisation (WHO)
2015
Progress on Sanitation and Drinking Water–2015 Update and MDG Assessment
.
UNICEF and WHO
,
Geneva
.
United Nations (UN)
2014
Open Working Group proposal for Sustainable Development Goals. Full report of the Open Working Group of the General Assembly on Sustainable Development Goals is issued as document A/68/970
.
United Nations
,
New York
.
Van Houweling
E.
Hall
R. P.
Diop
A. S.
Davis
J.
Seiss
M.
2012
The role of productive water use in women's livelihoods; evidence from rural Senegal
.
Water Alternatives
5
,
658
677
.
Van Minh
H.
Nguyen-Viet
H.
Thanh
N. H.
Yang
J.-C.
2013
.
Environ Health Preventive Medicine
18
,
275
284
.
Venables
W. N.
Ripley
B. D.
2002
Modern Applied Statistics with S
.
Springer
,
New York
.
Water, Sanitation Program (WSP)
2011
Water Supply and Sanitation in Senegal. Turning Finance into Services for 2015 and Beyond. An AMCOW Country Status Overview
.
World Bank, Water and Sanitation Program–Africa Region
,
Nairobi
.
Whittington
D.
Briscoe
J.
Mu
X.
Barron
W.
1990
.
Economic Development and Cultural Change
38
,
293
311
.
Whittington
D.
Lauria
D. T.
Wright
A. M.
Choe
K.
Hughes
J. A.
Swarna
V.
1993
.
Water Resources Research
29
,
1539
1560
.
|
{}
|
# Tag Info
11
You are right that it seems strange why a cash-rich company is borrowing. In the case of Apple, the money that they are borrowing is being used to pay dividends to shareholders. The reason why they aren't using their \200 billion is because doing so would cost them tens of billions of dollars in taxes. The current US tax code taxes corporations at 35% when ... 5 People, particularly business leaders, seem to remain confused about this issue even today. At the core of is the question Is equity finance expensive?. We certainly observe in the data that the realized returns on firm debt are much lower than the realized returns on firm equity. Does this mean that firms have too much equity? If equity capital always ... 5 The first equation can be written as: $$r_E(Levered) = \frac{E+D}{E}r_E(Unlevered) - \frac{D}{E}r_D$$ Then, isolating the unlevered return gives: $$r_E(Unlevered) = \frac{E}{E+D}r_E(Levered) + \frac{D}{E+D}r_D$$ And this is the WACC. 4 If you are asking "Is the WACC the amount that the company expects to earn on the stocks and bonds that it holds.." then the answer is no. The WACC, in very simple terms, is the amount of money a company pays to obtain financing for projects. These types of financing are clearly listed in the wikipedia article and clearly extend beyond stocks and bonds ... 4 I don't know if you refer to the extensive margin (some borrowers not being able to get credit) or to the intensive margin (one borrower not being able to get as much credit as (s)he wants). If you are referring to the former, one of the theoretical papers for borrowing constraints on markets with asymmetric information is the following one: Stiglitz and ... 4 All assets which have a finite useful life are depreciated. For example, your patents or copyright might hold for 5 or 10 years but no more. Thus, it is quite coherent to reflect the loss of value through depreciation and amortization. Same goes for a software for example: in 5 years time, a software might be obsolete, so we need to reflect this in the ... 4 Debt is cheap. Flexibility is valuable. They hold debt + cash up to the point where the value of flexibility is still greater than the net cost of servicing the debt minus any interest earnt on the cash. It saves them the transaction costs of re-raising debt when they need it, had they paid it down early. It's cashflow that typically kills businesses, ... 3 Another key feature of those shell companies is that they hide the ultimate beneficiary of the transactions. Banks, insurance companies and most financial services firm must make enquiries as part of the "Know Your Customer" (KYC) regulations: they should be able to find out who will benefit ultimately from the transactions, or in the name of whom they are ... 3 Considering this is an Economics Stack Exchange site, I’m going to answer in the spectre of Financial Economics. These are the most foundational equations and ideas of Financial Economics to understand more complex applied or academic research. 1. Gross yield The gross yield is the yield on an investment before the deduction of taxes and expenses. 1+R_{t+... 3 A December fiscal year end, which gives a first quarter of three months ending March 30, aligns the fiscal and the tax year. This can be very convenient and in the United States is sometimes required. In addition, some regulated firms like banks are required to prepare documents on calendar quarters regardless of the month of their fiscal year end, and it is ... 3 To illustrate what Tirole has done, let's consider a simpler environment. Consider a utility maximisation problem over two goods, x and y. The consumer has utility function u(x,y) = f(x) + y, where f is strictly increasing and strictly concave. The consumer's problem is thus \begin{align} \max_{x,y} &\quad f(x) + y \\ \text{s.t.} &\quad ... 3 It appears to me that it is the other way round: The RBS was running out of cash which is why the stock price was dropping. Stocks usually don't affect the immediate operation of a company, since they are traded on secondary markets (stock exchanges) among stock owners, not bought from / sold to the actual company which issued the stock. 2 The point @EnergyNumbers raises is correct, and it's easy to understand from an intuitive standpoint: One of the key roles of financial intermediaries is to match the demand for liabilities of a given tenor to the demand for assets of a given tenor. Financial intermediation allows maturity mismatches to exist in non-finance sectors of the economy by taking ... 2 The first equation is dollars time interest over total dollars. For example, if a company wants to finance a project, issues \1M in equity with an expected ROI to the investors of 6% and \$4M in bonds at 4%, it's WACC is: $$\frac{(4\%*4,000,000 + 6\% * 1,000,000)}{4,000,000+1,000,000}$$ which for simplicity we can say is$$\frac{(4\%*4 + 6\% * 1)}{4+1}... 2 The concept of$\text{WACC}$seems pretty straightforward... it is a weighted average percentage, calculated in principle as equation$(2)$in the question shows. If we have two sources of financing each demanding a different interest rate and with given percentage contribution each to the total funds we want to borrow, then what would be the single ... 2 The assertion of the book is based on the phenomenon of commercial credit - the fact that business-to-business sales almost always are on credit, and the differences between terms-of-credit that a company gives to its customers, compared to the terms of credit that enjoys from its suppliers. It describes the (short-term) phenomenon, peculiar to some, that "... 2 Using the Federal Reserve's definition for M1 (warning, M definitions can vary between countries, so always check the local definition): "M1 is defined as the sum of currency held by the public and transaction deposits at depository institutions (which are financial institutions that obtain their funds mainly through deposits from the public, such as ... 2 If I'm reading it correctly, Table X (page 2633) of Schwert (2000) (Journal of Finance, Hostility in Takeovers: In the Eyes of the Beholder?) says that about 78 percent of deals in 1975-1996 were successful. However, this measure is constructed based on the acquisition of the firm, not the bid of the acquirer, so that if there are multiple bidders this is ... 2 A shell is simply an inactive company - there is a market for shell companies because it allows ordinary persons to buy a ready to go business - for example public traded shells with a stockmarket ticker allow you to skip all the paperwork. Back to topic. A Limited company is a legal person, thus it can buy / sell and hold other companies and assets - sue ... 2 What happens is completely dependent on the owners. They're the ones whose income has been taken anyway. They may be the only ones with the legal power to do anything (depending on the jurisdiction: there may be some countries where the State can intervene in such matters). In some jurisdictions, the directors have a legal obligation to maximise returns to ... 2 They are not the same. Basic accounting equation: Assets = Liability + Shareholder Equity Assets refers to what the company actually owns: cash, property, inventory, etc. Assets are paid for in two major ways: debt (liability) and stock (equity). Essentially, everything a company owns is paid for by a combination of (1) getting loans from other entities ... 2$\beta$is the measure of the sensitivity of stock returns to market returns. This has nothing to do with the value of$R^2$. Your results appear to be fine, you can get significant beta estimates but low$R^2$. Why? As measured by$R^2$, 24.56% of variation in Apple returns is accounted for by the variation in the market index,$S$&$P 500\$. Clearly, ...
2
In the link one can very clearly see that the company has no contractually short term debt, and in the short-term (i.e. in the next 12 months) has to pay part of its long term debt. Also, that the debt amounts are not included in the line "Accounts payable" Also, one can see its long term debts And no, debt is not only bonds.
1
The original paper (of Altman) is Altman, E. I. (1968). Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. The journal of finance, 23(4), 589-609. We read (p. 593) III. DEVELOPMENT OF THE MODEL "Sample Selection. The initial sample is composed of sixty-six corporations with thirty-three firms in each of the two ...
1
In economics, a firm wants to maximize profit. Your ice cream shop has some costs that vary depending on how many half-gallon ice cream packages are made, and some benefits, the revenue from selling ice cream. The additional cost of making one more package of ice cream is called the marginal cost, and it can change based on how much ice cream you are already ...
1
I think this is what you are looking for. It is a quite interesting paper. I hope it helps.
1
As Kitsune Cavalry noted, organisations like Khan Academy are non-profits that are supported by donations and run for the public good. To address your second question (why would you start a project that requires so much time and not charge anything for it?): the fact that the organisation is a non-profit doesn't mean that the people working there don't make ...
1
Khan Academy is a 501c non-profit. They do some fundraising, they have some regular backers, and some big time contributors like these. I imagine it's a similar case for Anatomy Zone. They are affiliated with a bunch of other non-profits that you can see on the bottom of their front page.
1
What I don't understand is why we discount the profits in year one but don't consider inflation in year one. This seems inconsistent to me. Why do we discount the real profit in the first year and not the nominal profit, or if we assume nominal profits in year one are the same as real profit, why do we discount in year one? As you said, the firm ...
1
Depreciation is tax-deductible. The firm in question pays taxes. If depreciation increases, then taxes paid decrease. If taxes paid decrease, then cash flow increases. ceteris paribus
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
Abstract
The performance of regionalization methods used for regional flood frequency analysis is affected considerably by the features used to identify the homogeneous regions (e.g., climatological, meteorological, geomorphological, and physiographic characteristics of the watersheds). In this study, a regionalization method is proposed that takes advantage of the two widely used techniques in regionalization of watersheds: canonical correlation analysis and cluster analysis. In the proposed method, the canonical correlation analysis is utilized to select or weight features that then will be used by a hybrid clustering algorithm for regionalization of watersheds. The proposed method is applied to Sefidrud basin, located in the north of Iran, to implement regionalization with two, three, four, and five regions. Performance assessment of the proposed method shows that all the options of the proposed method can be effective alternatives to some common regionalization methods to improve the homogeneity of the regions. The results indicate that the method can satisfy the homogeneity conditions approximately for all the regions which were identified in the study area.
NOTATION
• FFA
Flood frequency analysis
•
• RFFA
Regional flood frequency analysis
•
• CCA
Canonical correlation analysis
•
• WAKM
A hybrid clustering algorithm consisting of Ward's algorithm and K-means
•
• ASW
Average silhouette width
•
• CCA-WAKM
A set of four implementation options of the proposed regionalization method including the combination of CCA and WAKM in which the feature vectors consisting of the canonical variables of the watershed features are used in clustering by WAKM
•
• CCA-WAKM1
An option for implementing CCA-WAKM in which the values of the first canonical variable of the watershed features are used as the feature vectors of sites
•
• CCA-WAKM1,2
An option for implementing CCA-WAKM in which the values of the first and second canonical variables of the watershed features are used as the features of the feature vectors of sites
•
• CCA-WAKM1,3
An option for implementing CCA-WAKM in which the values of the first and third canonical variables of the watershed features are used as the features of the feature vectors of sites
•
• CCA-WAKM1,2,3
An option for implementing CCA-WAKM in which the values of the first, second and third canonical variables of the watershed features are used as the features of the feature vectors of sites
•
• CCA-wWAKM
An implementation option of the proposed regionalization method in which the coefficients of the watershed features in the first canonical variables of the watershed features are used as the weights of the features in clustering by WAKM
INTRODUCTION
Flood frequency analysis (FFA) is used to estimate the magnitude of a flood with a specified return period or estimate the return period of a flood with a specified magnitude. Flood quantiles can be estimated by at-site FFA using only flood data recorded at the site of interest. However, in many cases, the length of flood data records in sites of interest are not appropriate to provide reliable flood estimates. In such situations, regional flood frequency analysis (RFFA) is an efficient approach to compensate for the temporal shortage of flood data records by pooling flood data over a number of sites with similar flood generation mechanisms.
The objective of the regionalization is to identify homogeneous regions, i.e., groups of sites with similar flood generation mechanisms (Hosking & Wallis 1997). In a homogeneous region, the flood frequency distribution varies from site to site with only a site-specific factor named the index-flood. RFFA based on index-flood was first introduced by Dalrymple (1960).
The identification of homogeneous regions for RFFA is required to find an appropriate regional flood frequency distribution. However, identifying a group of sites satisfying homogeneity conditions is not always possible easily. When the regionalization features not directly related to flood data records (e.g., climatological, meteorological, geomorphological, and physiographic characteristics of the watersheds), it is difficult to assign all the watersheds to the regions where all of them satisfy homogeneity conditions.
One of the most widely used methods for regionalization is cluster analysis. The cluster analysis methods are multivariate statistical analysis methods which have been utilized by many researchers in several hydrological studies especially for the regionalization (e.g., Acreman & Sinclair 1986; Burn 1989; Hall & Minns 1999; Jingyi & Hall 2004; Lin & Chen 2006; Ramachandra Rao & Srinivas 2006a, 2006b; Srinivas et al. 2008; Chen & Hong 2012; Toth 2013). While in traditional methods of regionalization, regions were identified based on administrative borders or geographical contiguity, in the methods like cluster analysis, different types of features effective on flood generation mechanism, such as physiographic features or meteorological attributes, can be used as regionalization features. By applying the traditional methods, it is very difficult to identify homogeneous regions, because geographical contiguity of sites does not result in a similarity in their flood generation mechanism. On the other hand, in cluster analysis methods, the regions may be identified based on similarity of sites in terms of various features such as physiographic attributes, meteorological characteristics, plant cover, land use, etc. Thus, in the new methods, regions are not identified essentially based on geographical contiguity.
Region of influence (ROI) is another widely used approach for RFFA that was developed by Burn (1990). In ROI, a region of influence, which is a hydrologically homogeneous neighborhood, is formed for each watershed in a study area. ROI has been used in several regional frequency analysis studies and its performance evaluated in different case studies (e.g., Zrinji & Burn 1994; Burn 1997; Castellarin et al. 2001).
Clustering algorithms can be divided into hierarchical and partitional clustering algorithms (Ramachandra Rao & Srinivas 2008). Hierarchical algorithms include agglomerative algorithms and divisive algorithms where the agglomerative hierarchical algorithms have been used for regionalization of watersheds in several RFFA studies (e.g., Mosley 1981; Tasker 1982; Acreman & Sinclair 1986; Nathan & McMahon 1990; Burn et al. 1997; Hosking & Wallis 1997; Ramachandra Rao & Srinivas 2006a). One of the most important advantages of hierarchical algorithms is that they often do not require the determination of initial conditions (such as the determination of initial cluster centers). On the other hand, a noticeable limitation of hierarchical algorithms is that after assigning a data point to a cluster, it is not possible to move it between clusters. Partitional algorithms, which often are based on the minimization of an objective function, require the determination of initial conditions, such as the initial cluster centers, but these algorithms often provide the benefits of the possibility of moving data points between different clusters in different iterations of the algorithm. One of the most widely used partitional clustering algorithms in regional frequency analysis studies is the K-means algorithm (e.g., Wiltshire 1986; Burn 1989; Bhaskar & O'Connor 1989; Burn & Goel 2000; Ramachandra Rao & Srinivas 2006a; Jin et al. 2017; Xie et al. 2018).
Ramachandra Rao & Srinivas (2006a) investigated the performances of combinations of the three hierarchical algorithms with one partitional algorithm for regionalization of 245 watersheds in Indiana State, USA. They used single linkage, complete linkage, and Ward's algorithm as hierarchical algorithms to specify initial cluster centers for K-means as a partitional algorithm. The quality of the clusters formed by each of the proposed hybrid algorithms was evaluated according to values of four cluster validity indices: cophenetic correlation coefficient (CPCC), silhouette width, Dun's index, and Davies–Bouldin index. Also, the homogeneity of the regions identified by each algorithm was assessed based on the values of the heterogeneity measures proposed by Hosking & Wallis (1993). The proposed hybrid algorithms showed better performances in comparison with the hierarchical and partitional clustering algorithms. In addition, among the proposed algorithms, the combinations of Ward's and K-means algorithms (WAKM) provided the best regionalization results for RFFA in the study area.
From another aspect, clustering methods can be divided into hard and fuzzy clustering. In hard clustering, each data point belongs to only one cluster and does not belong to any other cluster. On the other hand, in fuzzy clustering, each data point can be assigned to all clusters simultaneously with specified degrees of membership between 0 and 1. The sum of the degrees of membership of each data point in all clusters is equal to 1. The most popular fuzzy clustering algorithm used in regional frequency analysis studies to implement regionalization is the fuzzy C-means (FCM) clustering algorithm (e.g., Hall & Minns 1999; Ramachandra Rao & Srinivas 2006b; Srinivasa Raju & Nagesh Kumar 2008; Sadri & Burn 2011; Asong et al. 2015; Basu & Srinivas 2015).
In some studies, combinations of hard and fuzzy clustering algorithms have been studied for regionalization. Srinivas et al. (2008) used a combination of self-organizing maps (SOM) and FCM to implement regionalization of Indiana State watersheds. The results showed that the method was effective in forming homogeneous regions. Farsadnia et al. (2014) also carried out a similar study for regionalization of the watersheds of Mazandaran province in northern Iran and obtained similar results. Ahani & Mousavi Nadoushani (2016) studied the fuzzy development of the hybrid clustering algorithms proposed by Ramachandra Rao & Srinivas (2006a) by combining single linkage, complete linkage, average linkage hierarchical algorithms, and Ward's algorithm, as well as SOM with C-means fuzzy clustering algorithm for performing regionalization of the watersheds in the Sefidrud watershed. The study of the size of the formed regions, the values of the clustering validity indices, and also the $H$ heterogeneity indices showed that, in general, the combination of the Ward and SOM algorithms with FCM algorithm provide the best results for regionalization of the studied region in order to analyze the flood regional frequency.
The ability of cluster analysis methods in dealing with multivariate analysis problems and reducing the need for visual judgments and time-consuming assessments are the benefits of these methods for regionalization of watersheds. However, there are some issues that may affect the efficiency of cluster analysis methods for regionalization of watersheds. In regionalization by cluster analysis methods, each watershed is represented by a vector that includes values of a set of features affecting flood generation mechanism. The feature vectors are used to evaluate similarity of the watersheds. The identification of the feature vectors to be used in clustering is one of the most challenging issues in regionalization studies (e.g., Nezhad et al. 2010; Di Prinzio et al. 2011; Razavi & Coulibaly 2013; Ahani et al. 2018).
One of the useful methods to identify and select the effective features on the flood generation mechanism of watersheds is canonical correlation analysis (CCA). CCA (Hotelling 1936) is a method for describing the correlation between two sets of variables (Cavadias 1990). Cavadias (1990) developed a method based on CCA to determine the hydrological neighborhoods and estimate flood quantile. Also, Cavadias et al. (2001) proposed a method based on the use of CCA in order to determine homogeneous regions or hydrological neighborhoods for flood estimation in both gauged and ungauged sites. The proposed method was useful to identify effective watershed features on flood generation mechanism. However, the features useful for identification of homogeneous regions were selected based on visual judgments on the similarities between patterns of data points in original feature space and canonical space. The application of CCA in RFFA was studied by several researchers (e.g., Ribeiro-Correa et al. 1995; GREHYS 1996a, 1996b; Ouarda et al. 2001, 2008) and the results indicated the desirable effects of CCA on RFFA.
In a method introduced by Ilorme & Griffis (2013), CCA was initially used along with some other multivariate analysis methods to identify the watershed features influencing the flood generation mechanism. Then, the selected features were used to perform regionalization by Ward's clustering algorithm. The method reduced the need for visual judgment to identify homogeneous regions and select regionalization features. Also, it overcame the visual judgment limitation. However, the different effects of the different features on the final regionalization was not considered in the proposed method. In addition, skipping a number of features with relatively lower values of correlation coefficient might significantly reduce homogeneity of regions and accuracy of flood quantile estimation (Basu & Srinivas 2014).
In general, determining the relationship between the watershed features (such as geographical location characteristics, physiographic attributes, geological features, land-use, plant cover, etc.) and the flood-related features (such as flood statistics) can be considered as an important advantage of CCA-based RFFA methods. However, most CCA-based regionalization methods depend on visual judgments to some extent, and in some cases, they are theoretically limited to two-dimensional space.
The main objective of the current study is to propose an efficient regionalization method focusing on feature selection and feature weighting to improve the homogeneity of the regions. To this aim, a new hybrid method is proposed by combining CCA and cluster analysis in order to take the advantages of both of them and overcome their limitations in regionalization of watersheds for RFFA. After describing the proposed method, some implementation options of the method are presented for regionalization of watersheds in Sefidrud basin located in the north of Iran. Then the performance of implementation options of the method is compared with that of a common regionalization method in the study area.
STUDY AREA AND DATA
Sefidrud basin in the north of Iran with a total area about 59,200 km2 was chosen as a study area to evaluate the performance of proposed methods for regionalization of watersheds. Sefidrud River is formed in the confluence of two rivers, named Shahrud and Ghezel-Ozan, and flows into the Caspian Sea. Thirty-nine gauged sites with unregulated flow in Sefidrud basin were selected for this study and their watershed features were extracted for regionalization (Figure 1). The annual maximum flood data records of the sites of interest were obtained from the database of the Iran Water Resources Management Company. The flood data records in the selected sites cover the time period from 1967 to 2012 and the average length of flood data records is about 23 years. The total number of flood data is equal to 898 station-years and record length in the sites varies between 10 and 39 years.
Figure 1
Location of Sefidrud Basin and the hydrometric stations.
Figure 1
Location of Sefidrud Basin and the hydrometric stations.
Longitude, latitude, elevation from the sea level, drainage area, mean annual precipitation, and the runoff coefficient were selected as the watershed features contributing to the regionalization procedure (Table 1). It is worth noting that four sites have runoff coefficients greater than 1 because the watersheds of these sites are located in areas with karst geologic structures.
Table 1
Descriptive statistics of the regionalization features
Feature Range Mean Standard deviation
Longitude (dd) 47.05–51.07 48.74 1.30
Latitude (dd) 35.18–37.53 36.55 0.73
Elevation (m a.s.l.) 40–2,800 1,376.22 649.20
Drainage area (km229–49,300 5,591.38 11,569.86
Mean annual precipitation (mm) 184–1,400 467.70 323.62
Runoff coefficient 0.03–1.32 0.49 0.34
Feature Range Mean Standard deviation
Longitude (dd) 47.05–51.07 48.74 1.30
Latitude (dd) 35.18–37.53 36.55 0.73
Elevation (m a.s.l.) 40–2,800 1,376.22 649.20
Drainage area (km229–49,300 5,591.38 11,569.86
Mean annual precipitation (mm) 184–1,400 467.70 323.62
Runoff coefficient 0.03–1.32 0.49 0.34
The features were selected based on the availability of the relevant data and their potential role in the flood generation mechanism. The longitude, latitude, and elevation from the sea level were selected, because of the special geographical situation of the study area. In fact, the noticeable variablity in values of longitude, latitude, and elevation from the sea level in the study area may affect climatological and meteorological conditions of the sites considerably. The considerable variation of these features in the study area can affect the flood generation mechanisms of the watersheds noticeably. The precipitation and the drainage area are considered for regionalization in several RFFA studies due to their pivotal role in flood generation (e.g., Ramachandra Rao & Srinivas 2006a, 2006b; Srinivas et al. 2008; Farsadnia et al. 2014). Precipitation is often the main factor generating floods (Ramachandra Rao & Srinivas 2008; Srinivas et al. 2008) and so, mean annual precipitation of watersheds was selected as one of the watershed features. Additionally, in hydrological models, the drainage area often is considered as one of the most important factors in estimating the flood magnitudes (Hosking & Wallis 1997; Ramachandra Rao & Srinivas 2008). Thus, it is logical to use the drainage area as one of the watershed features in the regionalization procedure. Also, the runoff coefficient was selected as it determines the ratio of precipitation transformed to runoff and, therefore, it may be useful to identify homogeneous regions (e.g., Ramachandra Rao & Srinivas 2006a, 2006b; Srinivas et al. 2008; Basu & Srinivas 2015).
In order to modify the asymmetry of drainage area values, the logarithmic transformation was applied to them. When there is a considerable asymmetry in the drainage area values, a few sites in the tail of the distribution may form small groups or regions (in terms of station-years) because they are completely separated from other sites in a study area. Such small regions are not desirable for RFFA because it is not possible to provide reliable flood estimates for long return periods in small regions (Hosking & Wallis 1997; Basu & Srinivas 2014). Also, the values of all the features were standardized by Equation (1) in order to eliminate the effects of the differences in dimensions and variances of the different features:
(1)
where is the value of the feature j in the data point i; is the mean value of the feature j over the dataset, and is the standard deviation of the feature j over the dataset. In addition, is the standardized value of the feature j for the data point i.
METHODS
Discordancy evaluation
Prior to the feature selection or feature weighting steps, the flood data records were evaluated by using the discordancy measure D proposed by Hosking & Wallis (1993) in terms of the L-moments of flood data. A site is identified as discordant if D exceeds the critical value. When the number of sites is greater than 14, the critical value of D is equal to 3. This screening procedure can be performed either before regionalization for all the sites as one group, or after regionalization for the sites belonging to each region. Among the 39 sites, two sites were identified as discordant and were excluded from the regionalization process as suggested by Hosking & Wallis (1997). Therefore, the data related to the 37 remaining sites was used in the next stages of the study.
Canonical correlation analysis
In CCA, a canonical space is formed based on two sets of canonical variables. Each canonical variable is a linear combination of one of the two sets of original variables. If the original variables include two sets of the variables and and , then the two sets of the canonical variables and are formed as in Equation (2), such that the correlation between pairs of the corresponding canonical variables is maximized and the correlation between other pairs are minimized. The highest correlation is related to the canonical variables and , and the lowest correlation is related to the canonical variables and . For more details on CCA, see Hotelling (1936) and Cavadias (1990).
(2)
In the present study, the six watershed features and three L-moment (Hosking 1990) ratios of flood data are used as the two sets of original variables for CCA. The three selected L-moment ratios are the linear coefficient of variation (L-CV), linear skewness (L-skewness) and linear kurtosis (L-kurtosis). They are chosen because the three H heterogeneity measures proposed by Hosking & Wallis (1997) are calculated based on the values of these three L-moment ratios. Therefore, the use of canonical variables of watershed features which are highly correlated with the canonical variables of the L-moment ratios may increase the homogeneity of the regions identified in the regionalization.
After standardization of watershed features, L-moment ratios were calculated for each site and the standardization technique was applied to the values of L-CV, L-skewness, and L-kurtosis because the standardization of both original datasets is recommended before implementing CCA (Ribeiro-Correa et al. 1995). The standardized watershed features, longitude, latitude, elevation from the sea level, drainage area, mean annual precipitation and the runoff coefficient may be represented by , , , , , and , respectively, hereafter. Also, , , and denote the standardized L-moment ratios, L-CV, L-skewness, and L-kurtosis, in this order. Then CCA was performed on the standardized dataset of the six watershed features and the standardized dataset of the three L-moment ratios. Consequently, three pairs of canonical variables were calculated. The canonical variables related to the watershed feature space are represented by , , and , and the canonical variables related to the L-moment ratios space are denoted by , , and .
WAKM clustering algorithm
Regarding the advantages of the hybrid clustering algorithms (Ramachandra Rao & Srinivas 2006a; Srinivas et al. 2008; Farsadnia et al. 2014; Ahani & Mousavi Nadoushani 2016), the WAKM algorithm (Ramachandra Rao & Srinivas 2006a) is used as the clustering algorithm for the regionalization of watersheds in this study. The name WAKM represents the combination of Ward's algorithm (Ward Jr 1963) and K-means algorithm (Hartigan & Wong 1979). In this algorithm, first, by applying Ward's algorithm to the data points, a desired number of clusters are provided. Then, the cluster centers are used as initial cluster centers for clustering the data points by the K-means algorithm (Ramachandra Rao & Srinivas 2008). More details on Ward's and K-means algorithms are available in Ramachandra Rao & Srinivas (2008).
Cluster validity index
To compare the quality of different clusterings performed on the same dataset, cluster validity indices are used. The clustering quality is improved as the distances between the data points belonging to each cluster decrease (smaller intra-cluster distances) and the distances between the data points belonging to different clusters increase (greater inter-cluster distances) (Ramachandra Rao & Srinivas 2008).
Ramachandra Rao & Srinivas (2006a) evaluated the performances of a number of cluster validity indices to determine the optimal number of clusters in order to perform regionalization of watersheds. They concluded that the average silhouette width (ASW) is an effective measure for this purpose.
Rousseeuw (1987) defined the silhouette width for a data point i in a clustered dataset, as Equation (3):
(3)
where is the average distance of the data point i from the data points with which it is placed in the same cluster; and is the minimum average distance from the data point i from the data points in a cluster different from the cluster that the data point i belongs to. The value of can be in the range , where values close to 1 indicate the allocation of the data point i to an appropriate cluster, and values close to −1 represents the assignment of the data point i to an inappropriate cluster. The average silhouette width (ASW) criterion is obtained by averaging on the values of the silhouette width of all the clustered data points and hence it varies over the range as well (Ramachandra Rao & Srinivas 2008). Considering the acceptable performance of ASW in evaluating the quality of clusters (Ramachandra Rao & Srinivas 2006a), it was used for clustering evaluation in this study.
A hybrid method for regionalization
In the present study, a new hybrid method is proposed for regionalization of watersheds. To implement the regionalization method, four options are presented based on feature selection and one option is presented based on feature weighting. In the four feature selection-based options, after implementing CCA, the canonical variables consisting of the watershed features highly correlated with the canonical variables of the flood statistics (L-CV, L-skewness, L-kurtosis) were used as input features of the WAKM clustering algorithm. These four options are represented by CCA-WAKMfv in the remainder of the article. In CCA-WAKMfv, the subscript fv is an abbreviation for feature vector and can be replaced by one of the options 1, 1,2, 1,3, and 1,2,3. The four CCA-WAKM options differ in defining the feature vectors corresponding to the sites used in clustering by WAKM.
The input feature vectors of clustering for the CCA-WAKM1, CCA-WAKM1,2, CCA-WAKM1,3, and CCA-WAKM1,2,3 are described in Table 2. In the first option, denoted by CCA-WAKM1, only the value of the first canonical variable of the watershed features is used as the feature vector of each site. In the second option, represented by CCA-WAKM1,2, the feature vector of each site includes the values of the first and second canonical variables of the watershed features. In the third option, introduced by CCA-WAKM1,3, the feature vector of each site contains the values of the first and third canonical variables of the watershed features. Finally, in the fourth option, denoted by CCA-WAKM1,2,3, the values of all the three canonical variables consisting of the watershed features, are used to form the feature vector of each site. Since in the space of canonical variables, the highest correlation exists between the first pair of canonical variables of the watershed features and the L-moment ratios, the first canonical variable of the watershed features is used in all the four options. Also, the second and third canonical variables of the watershed features are added to the feature vectors in different options in order to investigate their effects on the regionalization results.
Table 2
Variables included in the input feature vector of clustering for the CCA-WAKM options and CCA-wWAKM
Option Variables of the feature vector
CCA-WAKM1 V1
CCA-WAKM1,2 V1, V2
CCA-WAKM1,3 V1, V3
CCA-WAKM1,2,3 V1, V2, V3
CCA-wWAKM a11A1, a12A2, a13A3, a14A4, a15A5, a16A6
Option Variables of the feature vector
CCA-WAKM1 V1
CCA-WAKM1,2 V1, V2
CCA-WAKM1,3 V1, V3
CCA-WAKM1,2,3 V1, V2, V3
CCA-wWAKM a11A1, a12A2, a13A3, a14A4, a15A5, a16A6
In the feature weighting-based option, weights of the watershed features in the linear combination of the first canonical variable of watershed features (i.e., a11, a12, a13, a14, a15, a16), are used as the weights of the original watershed features (i.e., A1, A2, A3, A4, A5, A6) in the regionalization by WAKM. Thus, the regionalization feature vector used this option is [a11A1, a12A2, a13A3, a14A4, a15A5, a16A6]. To facilitate referring to this option, the acronym CCA-wWAKM is used in the rest of the article, in which wWAKM represents weighting WAKM. The input feature vector of clustering for the CCA-wWAKM is determined in Table 2.
For implementing the feature weighting-based option CCA-wWAKM, the coefficients of the watershed features in the first canonical variable are applied to the feature vectors of the standardized watershed features as the weights. Then, WAKM is used to perform clustering based on these weighted feature vectors.
Homogeneity assessment
Identification of homogeneous regions by regionalization methods is an important and challenging part of RFFA. Hosking & Wallis (1993) proposed three heterogeneity measures, H1, H2, and H3, based on the L-moment ratios. These measures were used in several RFFA studies and were approved by the researchers (e.g., Viglione et al. 2007). For a given region, if , the region is ‘acceptably homogeneous’, if , the region is identified as ‘possibly heterogeneous’, and if , the region is regarded as ‘definitely heterogeneous’ (Hosking & Wallis 1997).
In the current study, the heterogeneity measures H are used to assess the homogeneity of the regions and a region is considered as homogeneous if and and .
RESULTS AND DISCUSSION
The coefficients of standardized watershed features and standardized L-moment ratios in the linear combinations related to the canonical variables of the watershed features and the L-moment ratios are presented in Table 3. The canonical variables of the watershed feature space are represented by V1, V2, and V3, and the canonical variables of the L-moment ratios space are denoted by W1, W2, and W3.
Table 3
The coefficients of the standardized watershed features and L-moment ratios in linear combinations of their canonical variables
Standardized variable V1 V2 V3 W3 W W3
A1 (Longitude) −0.590 −0.021 −1.380 – – –
A2 (Latitude) −0.264 −0.373 0.435 – – –
A3 (Elevation from the sea level) 0.272 −1.491 0.412 – – –
A4 (Drainage area) 0.044 −0.588 −0.287 – – –
A5 (Mean annual precipitation) −0.126 −1.579 0.538 – – –
A6 (Runoff coefficient) −0.264 0.614 0.727 – –- –
B1 (L-CV) – – – 1.004 0.431 −1.644
B2 (L-skewness) – – – 0.034 −0.707 2.827
B3 (L-kurtosis) – – – −0.261 1.407 −1.523
Standardized variable V1 V2 V3 W3 W W3
A1 (Longitude) −0.590 −0.021 −1.380 – – –
A2 (Latitude) −0.264 −0.373 0.435 – – –
A3 (Elevation from the sea level) 0.272 −1.491 0.412 – – –
A4 (Drainage area) 0.044 −0.588 −0.287 – – –
A5 (Mean annual precipitation) −0.126 −1.579 0.538 – – –
A6 (Runoff coefficient) −0.264 0.614 0.727 – –- –
B1 (L-CV) – – – 1.004 0.431 −1.644
B2 (L-skewness) – – – 0.034 −0.707 2.827
B3 (L-kurtosis) – – – −0.261 1.407 −1.523
As shown in Table 4, the correlation coefficient between the first pair of canonical variables is considerably greater than the values of the correlation coefficient between the second pair of canonical variables as well as the third pair of canonical variables. The values of the correlation coefficients of the second pair of canonical variables and the third pair of canonical variables are nearly equal to each other and their difference is lower than 0.04. Therefore, it seems that the first canonical variable of the watershed features and its coefficients can play a more important role than two other canonical variables in identifying regions with better homogeneity. It is also important to note that among the coefficients of the first canonical variable of the L-moment ratios, the largest coefficient belongs to L-CV, which is the basis for calculating the heterogeneity measure H1, which is more effective in identifying homogeneous regions than H2 and H3 heterogeneity measures according to Hosking & Wallis (1997) and Viglione et al. (2007).
Table 4
The correlation coefficient values between the canonical variable pairs
Canonical variable pair W1, V1 W2, V2 W3, V3
Correlation coefficient 0.855 0.383 0.347
Canonical variable pair W1, V1 W2, V2 W3, V3
Correlation coefficient 0.855 0.383 0.347
In Table 5, the values of the linear correlation coefficients between the original variables and canonical variables are presented. Among watershed features, the drainage area and elevation from the sea level show the greatest positive correlations with the first canonical variables (V1 and W1), respectively. On the other hand, the longitude and the runoff coefficient have the largest magnitudes of negative correlations with the first canonical variables, respectively. Concerning the second canonical variables (V2 and W2), the highest linear correlations are those of the drainage area and the runoff coefficient, respectively, and the largest inverse correlation values are related to the elevation and the mean annual precipitation. The two features of latitude and mean annual precipitation show the highest positive correlation with the third canonical variables (V3 and W3), while the largest negative correlation values with these canonical variables are respectively related to the longitude and the drainage area.
Table 5
The correlation coefficient values between the original variables and canonical variables
Standardized variable V1 V2 V3 W1 W2 W3
A1 (Longitude) −0.805 −0.218 −0.426 −0.688 −0.083 −0.148
A2 (Latitude) −0.369 0.113 0.499 −0.316 0.043 0.173
A3 (Elevation from the sea level) 0.387 −0.440 −0.188 0.331 −0.169 −0.065
A4 (Drainage area) 0.480 0.307 −0.260 0.410 0.118 −0.090
A5 (Mean annual precipitation) −0.752 −0.310 0.208 −0.643 −0.119 0.072
A6 (Runoff coefficient) −0.783 0.120 0.118 −0.669 0.046 0.041
B1 (L-CV) 0.972 0.230 0.046 0.831 0.088 0.016
B2 (L-skewness) 0.555 0.656 0.511 0.474 0.252 0.177
B3 (L-kurtosis) −0.019 0.970 0.243 −0.016 0.372 0.084
Standardized variable V1 V2 V3 W1 W2 W3
A1 (Longitude) −0.805 −0.218 −0.426 −0.688 −0.083 −0.148
A2 (Latitude) −0.369 0.113 0.499 −0.316 0.043 0.173
A3 (Elevation from the sea level) 0.387 −0.440 −0.188 0.331 −0.169 −0.065
A4 (Drainage area) 0.480 0.307 −0.260 0.410 0.118 −0.090
A5 (Mean annual precipitation) −0.752 −0.310 0.208 −0.643 −0.119 0.072
A6 (Runoff coefficient) −0.783 0.120 0.118 −0.669 0.046 0.041
B1 (L-CV) 0.972 0.230 0.046 0.831 0.088 0.016
B2 (L-skewness) 0.555 0.656 0.511 0.474 0.252 0.177
B3 (L-kurtosis) −0.019 0.970 0.243 −0.016 0.372 0.084
All the values of correlation coefficient between the L-moment ratios and canonical variables are positive. Greatest values of correlations with first, second, and third canonical variables are related to L-CV, L-kurtosis, and L-skewness, respectively.
Since only the canonical variables of watershed features (i.e., V1, V2, and V3) are used in the next step of the proposed regionalization method, the values of correlation coefficient between watershed features and their canonical variables are more useful to identify the watershed features that may be more effective on results of regionalization.
According to the number of selected sites in the study area and the length of their flood data records and also regarding the 5T rule (Reed et al. 1999), the regionalization was implemented by changing the number of regions from two to five. In this study, it was considered as a constraint that the smallest region (in terms of station-years) in each regionalization includes about 50 station-years flood data in order to provide flood quantiles corresponding to a ten-year return period. To evaluate the effect of the proposed methods on the homogeneity of the regions, the results of applying CCA-WAKM and CCA-wWAKM were compared with the results of applying the single WAKM clustering algorithm to feature vectors consisting of six standardized watershed features. To assess and compare the performances of the methods, they were evaluated by ASW cluster validity index and the heterogeneity indices H1, H2, and H3. It should be noted that the words ‘cluster’ and ‘region’ may be used in the rest of article equivalently.
The ASW values for implementing regionalization by each method for two, three, four, and five regions are presented in Table 6. In all cases, ASW for all the CCA-WAKM implementation options and CCA-wWAKM, are higher than those of the single WAKM. This indicates a higher quality of the final clusters resulting from the application of the proposed method in comparison with the single WAKM. The reduction of the number of dimensions or regionalization features (from six watershed features to one, two, or three canonical variables) in the CCA-WAKM implementation options compared to the WAKM may be considered as an effective factor in increasing ASW and improving the clustering quality in terms of intra-cluster compactness and inter-cluster separation. However, for CCA-wWAKM the number of regionalization features (six weighted watershed features) is equal to that of WAKM (six watershed features) and so the increase in ASW can be interpreted as an increase in the quality of clustering.
Table 6
The values of the cluster validity index ASW for clustering by using WAKM, CCA-wWAKM, and the four CCA-WAKM options
Number of regions
WAKM 0.418 0.492 0.420 0.415
CCA-wWAKM 0.557 0.545 0.484 0.438
CCA-WAKM1 0.655 0.574 0.557 0.598
CCA-WAKM1,2 0.451 0.516 0.550 0.430
CCA-WAKM1,3 0459 0.468 0.525 0.564
CCA-WAKM1,2,3 0.367 0.411 0.431 0.443
Number of regions
WAKM 0.418 0.492 0.420 0.415
CCA-wWAKM 0.557 0.545 0.484 0.438
CCA-WAKM1 0.655 0.574 0.557 0.598
CCA-WAKM1,2 0.451 0.516 0.550 0.430
CCA-WAKM1,3 0459 0.468 0.525 0.564
CCA-WAKM1,2,3 0.367 0.411 0.431 0.443
Figure 2 shows the values of the heterogeneity measures H1, H2, and H3 for two regions identified by WAKM, CCA-wWAKM, and the four CCA-WAKM options. All the options and methods result in identifying two homogeneous regions, and only by implementing CCA-WAKM1,2, one of the regions is relatively heterogeneous based on H1.
Figure 2
The values of the heterogeneity measures H1, H2, and H3 for two regions identified by WAKM, CCA-wWAKM, and CCA-WAKM.
Figure 2
The values of the heterogeneity measures H1, H2, and H3 for two regions identified by WAKM, CCA-wWAKM, and CCA-WAKM.
According to Figure 3, only CCA-wWAKM provides three homogeneous regions simultaneously. Both WAKM and the four CCA-WAKM options result in identifying a possibly heterogeneous region based on the values of one or two heterogeneity measures.
Figure 3
The values of the heterogeneity measures H1, H2, and H3 for three regions identified by WAKM, CCA-wWAKM, and CCA-WAKM.
Figure 3
The values of the heterogeneity measures H1, H2, and H3 for three regions identified by WAKM, CCA-wWAKM, and CCA-WAKM.
Figure 4 shows that the use of WAKM to identify four regions in the study area leads to identifying two homogeneous regions and two possibly heterogeneous regions. In addition, as seen in the four-region state, applying CCA-wWAKM results in satisfying the homogeneity conditions in all the regions. Also, while CCA-WAKM1, CCA-WAKM1,2, and CCA-WAKM1,2,3 provide four homogeneous regions, CCA-WAKM1,3 identifies a possibly heterogeneous region along with the three homogeneous regions.
Figure 4
The values of the heterogeneity measures H1, H2, and H3 for four regions identified by WAKM, CCA-wWAKM, and CCA-WAKM.
Figure 4
The values of the heterogeneity measures H1, H2, and H3 for four regions identified by WAKM, CCA-wWAKM, and CCA-WAKM.
As seen in Figure 5, using WAKM to identify five regions results in identifying two possibly heterogeneous regions, while the other three regions satisfy the homogeneity conditions. Among CCA-WAKM implementation options, CCA-WAKM1, CCA-WAKM1,2, and CCA-WAKM1,3, provide four homogeneous regions and one possibly heterogeneous region, whereas the option CCA-WAKM1,2,3 identifies five homogeneous regions. In this case, the use of CCA-wWAKM yields identifying five homogenous regions.
Figure 5
The values of the heterogeneity measures H1, H2, and H3 for five regions identified by WAKM, CCA-wWAKM, and CCA-WAKM.
Figure 5
The values of the heterogeneity measures H1, H2, and H3 for five regions identified by WAKM, CCA-wWAKM, and CCA-WAKM.
In Table 7, a summary of the performances of the examined methods of regionalization in identifying the homogeneous regions for RFFA is presented. The ratio of the number of homogeneous regions identified by each regionalization option to the total number of regions identified by that option is calculated separately based on each of the measures H1, H2, and H3. The percentage of homogeneous regions provided by each method, which are identified by all the three heterogeneity measures as homogeneous, to all the regions identified by that option can be seen in the last column of Table 7. The percentage of homogeneous regions is defined as Equation (4):
(4)
where is the percentage of homogeneous regions, represents the number of homogeneous regions and denotes the total number of regions in two, three, four, and five-region states .
Table 7
The percentage of homogeneous regions (%) identified in regionalization for two, three, four, and five regions according to the heterogeneity measure H
Regionalization method Heterogeneity measure
H1 H2 H3 H1, H2, H3
WAKM 92.9 78.6 78.6 64.3
CCA-wWAKM 100 100 100 100
CCA-WAKM1 92.9 85.7 100 85.7
CCA-WAKM1,2 85.7 92.9 100 78.6
CCA-WAKM1,3 100 78.6 78.6 78.6
CCA-WAKM1,2,3 92.9 100 100 92.9
Regionalization method Heterogeneity measure
H1 H2 H3 H1, H2, H3
WAKM 92.9 78.6 78.6 64.3
CCA-wWAKM 100 100 100 100
CCA-WAKM1 92.9 85.7 100 85.7
CCA-WAKM1,2 85.7 92.9 100 78.6
CCA-WAKM1,3 100 78.6 78.6 78.6
CCA-WAKM1,2,3 92.9 100 100 92.9
The results indicate that among the methods and their implementation options, the best performance in providing the homogeneous regions is related to CCA-wWAKM. All 14 regions identified by CCA-wWAKM in two, three, four, and five-region states are identified as homogeneous according to all the three heterogeneity measures. CCA-wWAKM shows perfect efficiency (100%) in identifying homogeneous regions in the study area.
A probable reason for the superiority of CCA-wWAKM over the other options is to use information related to all the watershed features. While in the implementation options of CCA-WAKM the multiples of watershed features are used in a linear combination to provide values of a regionalization feature, in CCA-wWAKM each watershed feature is included in the regionalization feature vector separately. In addition, the weight of each feature is determined only based on the absolute magnitude of its coefficient in the linear combination of the canonical variable V1.
After CCA-wWAKM, CCA-WAKM1,2,3 displays the best performance by identifying 13 homogeneous regions based on all three heterogeneity measures H. In fact, according to Figures 25, all the regions identified by this option are homogeneous according to H2 and H3, and only in the three-region state, a region is identified as possibly heterogeneous by H1. As a result, the efficiency of this option in identifying homogeneous regions in the study area can be estimated at 93%. For the option CCA-WAKM1, the efficiency is equal to 86% approximately. Also, for the options CCA-WAKM1,2 and CCA-WAKM1,3, among 14 identified regions, 11 regions are shown to be homogeneous on the basis of all three measures H. Thus, the efficiency of these options in identifying the homogeneous regions is approximately 79%. According to Hosking & Wallis (1997), the heterogeneity measure H1 is more sensitive to heterogeneity of regions in comparison with the measures H2 and H3. However, as seen in Table 6, this is not observed for CCA-WAKM1,3, because in the regionalization states with four and five regions, in each regionalization, CCA-WAKM1,3 identified one region including a group of sites with a relatively high value of standard deviation of the statistics L-skewness and L-kurtosis. L-skewness and L-kurtosis play key roles in definitions of the measures H2 and H3 (Hosking & Wallis 1997). By applying WAKM, 9 homogeneous regions were identified among 14 regions, which yields the lowest efficiency among the methods used for regionalization in this study (about 64%). This means that all options of the proposed method are more efficient in providing homogeneous regions than WAKM. Moreover, the CCA-wWAKM is superior to all four CCA-WAKM options by providing 100% efficiency.
The better performance of CCA-WAKM1,2,3 in comparison with other options of CCA-WAKM is because of adding the second and third canonical variables V2 and V3 to regionalization feature vectors. V2 and V3 show higher correlations with L-moment ratios L-skewness and L-kurtosis which play important roles in calculation of the heterogeneity measures H2 and H3. According to Table 7, by adding V2 and V3 to the regionalization feature vectors, homogeneity of the regions is improved to some extent based on the heterogeneity measures H2 and H3. Of course, as seen in Table 6, the effect of V2 is more considerable than V3 on the homogeneity improvement.
In general, the results of calculating the heterogeneity indices H for the regions show that CCA-wWAKM and all the four CCA-WAKM options outperform WAKM in effectively identifying homogeneous regions for the study area. Therefore, all of the implementation options of the proposed method can be used as effective alternatives to common regionalization methods in order to improve the homogeneity of the identified regions. Among the CCA-WAKM implementation options, the CCA-WAKM1,2,3 outperforms the other options in terms of the percentage of homogeneous regions. In addition, CCA-wWAKM is the optimum options, because of its excellent performance in identifying the regions satisfying homogeneity condition completely. This method even outperforms CCA-WAKM1,2,3. Of course, the difference between the results of these two options is only related to the measure H1 in the second region that exceeds the threshold in the three-region state for CCA-WAKM1,2,3.
After the homogeneity assessment, the size of regions, i.e., the total number of flood data contained by each region, was evaluated. Also, the assignment of the sites to the regions was studied. For this evaluation, CCA-WAKM1,2,3 is selected as the best option for representing CCA-WAKM, due to its better performance in identifying homogeneous regions than other CCA-WAKM options. For both WAKM and CCA-wWAKM, it is not needed to choose an optimal option, because there is only one implementation option for each of them.
Figures 69 show the assignment of the sites to the regions identified by WAKM, CCA-WAKM1,2,3, and CCA-wWAKM, respectively. According to the figures, the geographical contiguity in the regions identified by the CCA-wWAKM is more considerable than that in the regions provided by the single WAKM and CCA-WAKM1,2,3. In other words, delineating the crisp geographical boundaries for the regions identified by CCA-wWAKM in the study area is more feasible compared to the regions provided by the other options. In fact, for this study area and the selected features, implementation of regionalization using the CCA-wWAKM method results in the assignment of greater weights to the features related to the geographical location of the sites, especially the longitude. It should be noted that all the positive and negative coefficients of the watershed features in the first canonical variable of the watershed features used as clustering feature weights are squared in the Euclidean distance. Thus, only the absolute magnitude of these coefficients or weights affects the clustering.
Figure 6
Geographical dispersion of the sites over two regions.
Figure 6
Geographical dispersion of the sites over two regions.
Figure 7
Geographical dispersion of the sites over three regions.
Figure 7
Geographical dispersion of the sites over three regions.
Figure 8
Geographical dispersion of the sites over four regions.
Figure 8
Geographical dispersion of the sites over four regions.
Figure 9
Geographical dispersion of the sites over five regions.
Figure 9
Geographical dispersion of the sites over five regions.
In addition, according to the number of sites assigned to the regions, the dispersion of the sites across the regions provided by the single WAKM and CCA-wWAKM is more balanced than that across the regions identified by CCA-WAKM1,2,3. Figure 10 displays the sizes of the regions identified by the regionalization methods in terms of the number of flood data recorded in each region (station-years) for two, three, four, and five regions.
Figure 10
Sizes of the identified regions (station-years) by WAKM, CCA-wWAKM, and the selected CCA-WAKM option.
Figure 10
Sizes of the identified regions (station-years) by WAKM, CCA-wWAKM, and the selected CCA-WAKM option.
As seen in Figure 10, the number of regions with a size lower than 100 station-years identified by CCA-WAKM1,2,3 is greater than those identified by the other methods. Considering the fact that the average flood data record length in the selected sites is about 23 years, a region with size greater than 100 station-years can include at least four watersheds with the average flood data record length. It should be noted that in RFFA, the main goal is to increase the reliability of flood estimates by increasing the number of flood data pooled from several sites in the homogeneous regions. The regionalization that yields the identification of small regions may not be so useful to achieve this goal and so, it cannot be the optimal option for RFFA. Indeed, RFFA is characterized by a trade-off between the size of the region (i.e., number of flood data in station-years) and its homogeneity: usually, the higher the size of the identified pooling group of sites, the higher the expected heterogeneity. Moreover, the target-size of the region depends on the return period associated with the target flood quantile (see, e.g., Cunnane 1988; Jakob et al. 1999). Therefore, the use of the CCA-wWAKM method in three, four, and five-region states seems to provide better results than the CCA-WAKM1,2,3.
Considering the excellent performance of CCA-wWAKM in the identification of homogeneous regions with appropriate spatial proximity and more balanced assignment of the sites to the regions, it seems that this method can be selected as the optimal option for regionalization of watersheds in Sefidrud basin for RFFA.
CONCLUSIONS
In this study, a hybrid regionalization method was proposed by combining CCA and WAKM clustering algorithms in order to increase the homogeneity of the identified regions for RFFA. Performances of the methods in the Sefidrud basin in northern Iran were evaluated based on ASW as a cluster validity and the measures H1, H2, and H3 as the heterogeneity measures.
According to the values of the ASW cluster validity index, the quality of clustering performed by all the options of the proposed method was higher than that of the clustering done by WAKM.
Also, the homogeneity assessment of the regions based on the values of the heterogeneity measures indicated that CCA-wWAKM and all four implementation options of CCA-WAKM were more efficient in identifying homogeneous regions than WAKM. Among the CCA-WAKM options, CCA-WAKM1,2,3 with an efficiency of 93% in the identification of homogeneous regions showed the best performance and so, it was identified as the optimal CCA-WAKM option. However, the best performance among all the options discussed was related to CCA-wWAKM. All the identified regions by CCA-WWAKM in two- to five-region states satisfied the homogeneity conditions completely. Thus, this option resulted in a 100% efficiency in providing the homogeneous regions. Therefore, CCA-wWAKM can be regarded as the optimal option for identifying the most homogeneous regions, among all the options discussed in this study.
The evaluation of the assignment of the sites to the regions identified by the regionalization methods showed that the geographical proximity of the sites in the regions identified by the CCA-wWAKM is clearer than those of the other options and methods. This may be because of the high weight of the geographical features, especially the longitude in comparison with the other features, in regionalization by CCA-wWAKM.
In addition, it was observed that the distribution of the sites across the regions identified by CCA-wWAKM is more balanced in terms of the number of flood data contained by the regions compared to that of CCA-WAKM. In fact, the use of CCA-WAKM1,2,3 in some cases led to the identification of large regions (in terms of station-years) along with small regions. Identifying small regions is not so desirable for RFFA because it is not possible to provide reliable flood estimates for the sites of these regions. Thus, while both CCA-WAKM and CCA-wWAKM seem efficient in identifying homogeneous regions in comparison with WAKM, CCA-wWAKM can be the more appropriate option for regionalization of watersheds in Sefidrud basin.
As the final remark, it should be noted that examining the effectiveness of the proposed method in case studies with the larger total area makes it possible to apply the regionalization methods for a higher number of regions. Of course, it depends on the target-size of the region, which is related to the return period considered for flood quantile estimation. Also, access to a higher number of watershed features can lead to a more accurate judgment about the advantages and disadvantages of the proposed method.
REFERENCES
REFERENCES
Ahani
A.
&
S. S.
2016
Assessment of some combinations of hard and fuzzy clustering techniques for regionalisation of catchments in Sefidroud basin
.
Journal of Hydroinformatics.
18
(
6
),
1033
1054
.
Ahani
A.
,
S. S.
&
Moridi
A.
2018
A feature weighting and selection method for improving the homogeneity of regions in regionalization of watersheds
.
Hydrological Processes
32
(
13
),
2084
2095
.
Asong
Z. E.
,
Khaliq
M. N.
&
Wheater
H. S.
2015
Regionalization of precipitation characteristics in the Canadian prairie provinces using large-scale atmospheric covariates and geophysical attributes
.
Stochastic Environmental Research and Risk Assessment
29
(
3
),
875
892
.
Basu
B.
&
Srinivas
V. V.
2014
Regional flood frequency analysis using kernel-based fuzzy clustering approach
.
Water Resources Research
50
(
4
),
3295
3316
.
Basu
B.
&
Srinivas
V. V.
2015
Analytical approach to quantile estimation in regional frequency analysis based on fuzzy framework
.
Journal of Hydrology
524
,
30
43
.
N. R.
&
O'Connor
C. A.
1989
Comparison of method of residuals and cluster analysis for flood regionalization
.
Journal of Water Resources Planning and Management
115
,
793
808
.
Burn
D. H.
1989
Cluster analysis as applied to regional flood frequency
.
Journal of Water Resources Planning and Management
115
(
5
),
567
582
.
Burn
D. H.
1990
An appraisal of the ‘region of influence’ approach to flood frequency analysis
.
Hydrological Sciences Journal
35
(
2
),
149
165
.
Burn
D.
&
Goel
N. K.
2000
The formation of groups for regional flood frequency analysis
.
Hydrological Sciences Journal
45
(
1
),
97
112
.
Burn
D. H.
,
Zrinji
Z.
&
Kowalchuk
M.
1997
Regionalization of catchments for regional flood frequency analysis
.
Journal of Hydrologic Engineering
2
(
2
),
76
82
.
Castellarin
A.
,
Burn
D. H.
&
Brath
A.
2001
Assessing the effectiveness of hydrological similarity measures for flood frequency analysis
.
Journal of Hydrology
241
,
270
285
.
G. S.
1990
The canonical correlation approach to regional flood estimation
. In:
Regionalization in Hydrology (Proceedings of the Ljubljana Symposium)
.
IAHS
,
Wallingford
, pp.
171
178
.
G. S.
,
Ouarda
T. B. M. J.
,
Bobée
B.
&
Girard
C.
2001
A canonical correlation approach to the determination of homogeneous regions for regional flood estimation of ungauged basins
.
Hydrological Sciences Journal
46
(
4
),
499
512
.
Cunnane
C.
1988
Methods and merits of regional flood frequency analysis
.
Journal of Hydrology
100
,
269
290
.
Dalrymple
T.
1960
Flood Frequency Analysis
.
Water Supply Paper 1543A
,
US Geological Survey
,
Washington, DC
,
USA
.
Di Prinzio
M.
,
Castellarin
A.
&
Toth
E.
2011
Data-driven catchment classification: application to the pub problem
.
Hydrology and Earth System Sciences
15
(
6
),
1921
1935
.
F.
,
Rostami Kamrood
M.
,
A.
,
Modarres
R.
,
Bray
M. T.
,
Han
D.
&
J.
2014
Identification of homogeneous regions for regionalization of watersheds by two-level self-organizing feature maps
.
Journal of Hydrology
509
,
387
397
.
GREHYS
1996b
Inter-comparison of regional flood frequency procedures for Canadian rivers
.
Journal of Hydrology
186
,
85
103
.
Hall
M. J.
&
Minns
A. W.
1999
The classification of hydrologically homogeneous regions
.
Hydrological Sciences Journal
44
(
5
),
693
704
.
Hartigan
J. A.
&
Wong
M. A.
1979
Algorithm as 136: a K-means clustering algorithm
.
Journal of the Royal Statistical Society
28
(
1
),
100
108
.
Hosking
J. R. M.
1990
L-moments: analysis and estimation of distributions using linear combinations of order statistics
.
Journal of the Royal Statistical Society Series B
52
,
105
124
.
Hosking
J. R. M.
&
Wallis
J. R.
1993
Some statistics useful in regional frequency analysis
.
Water Resources Research
29
(
2
),
271
281
.
Hosking
J. R. M.
&
Wallis
J. R.
1997
Regional Frequency Analysis – An Approach Based on L-Moments
.
Cambridge University Press
,
New York
,
USA
.
Hotelling
H.
1936
Relations between two sets of variates
.
Biometrika
28
(
3/4
),
321
377
.
Jakob
D.
,
Reed
D. W.
&
Robson
A. J.
1999
Choosing a pooling-group
. In:
Flood Estimation Handbook, vol. 3, Statistical Procedures for Flood Frequency Estimation
.
Institute of Hydrology
,
Wallingford
,
UK
, pp.
153
180
.
Jin
Y.
,
Liu
J.
,
Lin
L.
,
Wang
A.
&
Chen
X.
2017
Exploring hydrologically similar catchments in terms of the physical characteristics of upstream regions
.
Hydrology Research
49
(
5
),
1467
1483
.
Jingyi
Z.
&
Hall
M. J.
2004
Regional flood frequency analysis for the Gan-Ming River basin in China
.
Journal of Hydrology
296
(
1–4
),
98
117
.
Lin
G.-F.
&
Chen
L.-H.
2006
Identification of homogeneous regions for regional frequency analysis using the self-organizing map
.
Journal of Hydrology
324
(
1–4
),
1
9
.
Mosley
M. P.
1981
Delimitation of New Zealand hydrological regions
.
Journal of Hydrology
49
,
173
192
.
Nathan
R. J.
&
McMahon
T. A.
1990
Identification of homogeneous regions for the purposes of regionalisation
.
Journal of Hydrology
121
,
217
238
.
M. K.
,
Chokmani
K.
,
Ouarda
T. B. M. J.
,
Barbet
M.
&
Bruneau
P.
2010
Regional flood frequency analysis using residual kriging in physiographical space
.
Hydrological Processes
24
(
15
),
2045
2055
.
Ouarda
T. B. M. J.
,
Girard
C.
,
G. S.
&
Bobée
B.
2001
Regional flood frequency estimation with canonical correlation analysis
.
Journal of Hydrology
254
(
14
),
157
173
.
Ouarda
T. B. M. J.
,
K. M.
,
C.
,
Cârsteanu
A.
,
Chokmani
K.
,
Gingras
H.
,
Quentin
E.
,
Trujillo
E.
&
Bobée
B.
2008
Intercomparison of regional flood frequency estimation methods at ungauged sites for a Mexican case study
.
Journal of Hydrology
348
(
1–2
),
40
58
.
Ramachandra Rao
A.
&
Srinivas
V. V.
2006a
Regionalization of watersheds by hybrid-cluster analysis
.
Journal of Hydrology
318
(
1–4
),
37
56
.
Ramachandra Rao
A.
&
Srinivas
V. V.
2006b
Regionalization of watersheds by fuzzy cluster analysis
.
Journal of Hydrology
318
(
1–4
),
57
79
.
Ramachandra Rao
A.
&
Srinivas
V. V.
2008
Regionalization of Watersheds – An Approach Based on Cluster Analysis
, Vol.
58
(Water Science and Technology Library)
.
Springer
,
The Netherlands
.
Razavi
T.
&
Coulibaly
P.
2013
Classification of Ontario watersheds based on physical attributes and streamflow series
.
Journal of Hydrology
493
,
81
94
.
Reed
D. W.
,
Jakob
D.
,
Robson
A. J.
,
Faulkner
D. S.
&
Stewart
E. J.
1999
Regional frequency analysis: a new vocabulary
. In:
Hydrological Extremes: Understanding. Predicting, Mitigating
(Proceedings of IUGG 99 Symposium Birmingham, July 19)
.
IAHS
,
Wallingford
, pp.
237
243
.
Ribeiro-Correa
J.
,
G. S.
,
Clément
B.
&
Rousselle
J.
1995
Identification of hydrological neighborhoods using canonical correlation analysis
.
Journal of Hydrology
173
(
14
),
71
89
.
Rousseeuw
P. J.
1987
Silhouettes: a graphical aid to the interpretation and validation of cluster analysis
.
Journal of Computational and Applied Mathematics
20
,
53
65
.
Srinivas
V. V.
,
Tripathi
S.
,
Ramachandra Rao
A.
&
Govindaraju
R. S.
2008
Regional flood frequency analysis by combining self-organizing feature map and fuzzy clustering
.
Journal of Hydrology
348
(
1–2
),
148
166
.
G. D.
1982
Comparing methods of hydrologic regionalization
.
Water Resources Bulletin
18
,
965
970
.
Toth
E.
2013
Catchment classification based on characterisation of streamflow and precipitation time series
.
Hydrology and Earth System Sciences
17
(
3
),
1149
1159
.
Viglione
A.
,
Laio
F.
&
Claps
P.
2007
A comparison of homogeneity tests for regional frequency analysis
.
Water Resources Research
43
(
3
),
W03428
.
Ward
J. H.
Jr
1963
Hierarchical grouping to optimize an objective function
.
Journal of the American Statistical Association
58
,
236
244
.
Wiltshire
S. E.
1986
Regional flood frequency analysis II: multivariate classification of drainage basins in Britain
.
Hydrological Sciences Journal
31
,
335
346
.
Xie
P.
,
Lei
X.
,
Zhang
Y.
,
Wang
M.
,
Han
I.
&
Chen
Q.
2018
Cluster analysis of drought variation and its mutation characteristics in Xinjiang province, during 1961–2015
.
Hydrology Research
49
(
4
),
1016
1027
.
Zrinji
Z.
&
Burn
D. H.
1994
Flood frequency analysis for ungauged sites using a region of influence approach
.
Journal of Hydrology
153
(
1–4
),
1
21
.
|
{}
|
# ModelingToolkit IR
ModelingToolkit IR mirrors the Julia AST but allows for easy mathematical manipulation by itself following mathematical semantics. The base of the IR is the Sym type, which defines a symbolic variable. Registered (mathematical) functions on Syms (or istree objects) return an expression that istree. For example, op1 = x+y is one symbolic object and op2 = 2z is another, and so op1*op2 is another tree object. Then, at the top, an Equation, normally written as op1 ~ op2, defines the symbolic equality between two operations.
### Types
Sym, Term, and FnType are from SymbolicUtils.jl. Note that in ModelingToolkit, we always use Sym{Real}, Term{Real}, and FnType{Tuple{Any}, Real}. To get the arguments of a istree object use arguments(t::Term), and to get the operation, use operation(t::Term). However, note that one should never dispatch on Term or test isa Term. Instead, one needs to use SymbolicUtils.istree to check if arguments and operation is defined.
ModelingToolkit.EquationType
struct Equation
An equality relationship between two expressions.
Fields
• lhs
The expression on the left-hand side of the equation.
• rhs
The expression on the right-hand side of the equation.
source
### A note about functions restricted to Numbers
Sym and Term objects are NOT subtypes of Number. ModelingToolkit provides a simple wrapper type called Num which is a subtype of Real. Num wraps either a Sym or a Term or any other object, defines the same set of operations as symbolic expressions and forwards those to the values it wraps. You can use ModelingToolkit.value function to unwrap a Num.
By default, the @variables and @parameters functions return Num-wrapped objects so as to allow calling functions which are restricted to Number or Real.
julia> @parameters t; @variables x y z(t);
julia> ModelingToolkit.operation(ModelingToolkit.value(x + y))
+ (generic function with 377 methods)
julia> ModelingToolkit.operation(ModelingToolkit.value(z))
z(::Any)::Real
julia> ModelingToolkit.arguments(ModelingToolkit.value(x + y))
2-element Vector{Sym{Real}}:
x
y
### Function Registration
The ModelingToolkit graph only allowed for registered Julia functions for the operations. All other functions are automatically traced down to registered functions. By default, ModelingToolkit.jl pre-registers the common functions utilized in SymbolicUtils.jl and pre-defines their derivatives. However, the user can utilize the @register macro to add their function to allowed functions of the computation graph.
ModelingToolkit.@registerMacro
@register(expr, define_promotion, Ts = [Num, Symbolic, Real])
Overload approperate methods such that ModelingToolkit can stop tracing into the registered function.
Examples
@register foo(x, y)
@register goo(x, y::Int) # y is not overloaded to take symbolic objects
@register hoo(x, y)::Int # hoo returns Int
source
### Derivatives and Differentials
A Differential(op) is a partial derivative with respect to op, which can then be applied to some other operations. For example, D=Differential(t) is what would commonly be referred to as d/dt, which can then be applied to other operations using its function call, so D(x+y) is d(x+y)/dt.
By default, the derivatives are left unexpanded to capture the symbolic representation of the differential equation. If the user would like to expand out all of the differentials, the expand_derivatives function eliminates all of the differentials down to basic one-variable expressions.
ModelingToolkit.DifferentialType
struct Differential <: Function
Represents a differential operator.
Fields
• x
The variable or expression to differentiate with respect to.
Examples
julia> using ModelingToolkit
julia> @variables x y;
julia> D = Differential(x)
(D'~x)
julia> D(y) # Differentiate y wrt. x
(D'~x)(y)
julia> Dx = Differential(x) * Differential(y) # d^2/dxy operator
(D'~x(t)) ∘ (D'~y(t))
julia> D3 = Differential(x)^3 # 3rd order differential operator
(D'~x(t)) ∘ (D'~x(t)) ∘ (D'~x(t))
source
ModelingToolkit.jacobianFunction
jacobian(ops::AbstractVector, vars::AbstractVector; simplify=false)
A helper function for computing the Jacobian of an array of expressions with respect to an array of variable expressions.
source
ModelingToolkit.gradientFunction
gradient(O, vars::AbstractVector; simplify=false)
A helper function for computing the gradient of an expression with respect to an array of variable expressions.
source
ModelingToolkit.hessianFunction
hessian(O, vars::AbstractVector; simplify=false)
A helper function for computing the Hessian of an expression with respect to an array of variable expressions.
source
For jacobians which are sparse, use the sparsejacobian function. For hessians which are sparse, use the sparsehessian function.
There is a large amount of derivatives pre-defined by DiffRules.jl.
f(x,y,z) = x^2 + sin(x+y) - z
automatically has the derivatives defined via the tracing mechanism. It will do this by directly building the operation the internals of your function and differentiating that.
However, in many cases you may want to define your own derivatives so that way automatic Jacobian etc. calculations can utilize this information. This can allow for more succinct versions of the derivatives to be calculated in order to better scale to larger systems. You can define derivatives for your own function via the dispatch:
# N arguments are accepted by the relevant method of my_function
ModelingToolkit.derivative(::typeof(my_function), args::NTuple{N,Any}, ::Val{i})
where i means that it's the derivative with respect to the ith argument. args is the array of arguments, so, for example, if your function is f(x,t), then args = [x,t]. You should return an Term for the derivative of your function.
For example, sin(t)'s derivative (by t) is given by the following:
ModelingToolkit.derivative(::typeof(sin), args::NTuple{1,Any}, ::Val{1}) = cos(args[1])
### IR Manipulation
ModelingToolkit.jl provides functionality for easily manipulating expressions. Most of the functionality comes by the expression objects obeying the standard mathematical semantics. For example, if one has A a matrix of symbolic expressions wrapped in Num, then A^2 calculates the expressions for the squared matrix. In that sense, it is encouraged that one uses standard Julia for performing a lot of the manipulation on the IR, as, for example, calculating the sparse form of the matrix via sparse(A) is valid, legible, and easily understandable to all Julia programmers.
Other additional manipulation functions are given below.
ModelingToolkit.get_variablesFunction
get_variables(O) -> Vector{Union{Sym, Term}}
Returns the variables in the expression. Note that the returned variables are not wrapped in the Num type.
Examples
julia> @parameters t
(t,)
julia> @variables x y z(t)
(x, y, z(t))
julia> ex = x + y + sin(z)
(x + y) + sin(z(t))
julia> ModelingToolkit.get_variables(ex)
3-element Vector{Any}:
x
y
z(t)
source
Missing docstring.
Missing docstring for substitute. Check Documenter's build log for details.
Missing docstring.
Missing docstring for tovar. Check Documenter's build log for details.
Missing docstring.
Missing docstring for toparam. Check Documenter's build log for details.
Missing docstring.
Missing docstring for tosymbol. Check Documenter's build log for details.
Missing docstring.
Missing docstring for makesym. Check Documenter's build log for details.
Missing docstring.
Missing docstring for diff2term. Check Documenter's build log for details.
|
{}
|
Navigation ▼
# Fakultät für Mathematik und Informatik (inkl. GAUSS)
## Neueste Zugänge RSS Feed
• (2018-09-17)
Durable identification and access to datasets, especially to research datasets, become increasingly important. This is mainly driven by the explosive dataset growth in the current age. Although the Internet was originally ...
• (2018-08-31)
In this thesis, a pseudodifferential calculus for a degenerate hyperbolic Cauchy problem is developed. The model for this problem originates from a certain observation in fluid mechanics, and is then extended to a more ...
• (2018-08-10)
Networks, the basis of the modern connected world, have evolved beyond the con- nectivity services. Network Functions (NFs) or traditionally the middleboxes are the basis of realizing different types of services such as ...
• (2018-07-31)
For a $\mathit{Spin}$ manifold $M$ the Rosenberg index $\alpha([M])$ is an obstruction against positive scalar curvature metrics. When $M$ is non-$\mathit{Spin}$ but $\mathit{Spin}^c$, Bolotov and Dranishnikov suggested ...
• (2018-07-27)
In discrete differential geometry (DDG) one considers objects from discrete geometry from a differential geometric perspective. Rather than focusing on approximations of the smooth theory, with error vanishing in the ...
|
{}
|
4
Personal Blog
Part of the Asymptotic Logical Uncertainty series. This post is especially dependent on the previous two posts. Here, we show that the Modified Demski Prior is Uniformly Coherent. This result came out of discussion with Benja Fallenstein, Jessica Taylor, and myself, and is primarily due to Benja.
Theorem: The Modified Demski Prior is Uniformly Coherent.
Proof: We will start with property 3 of uniform coherence. Fix a triple , , and which meet the conditions for property 3. Consider the Turing machine which outputs all sentences of the form "Exactly one of , , and is true" in order. ( can generate these sentences by simulating the three Turing machines, or if we assume we have PA in our starting theory, it could just make these statements about the Turing machines.) Note that every sentence output by is provable in the base theory.
As goes to infinity, the probability that is sampled by goes to 1. Further, the probability that it is sampled and accepted goes to 1. This is because the only way it can be rejected is if there exists a contradiction in the sentences output by the Turing machines sampled earlier. For every list of Turing machines output earlier, there is a fixed value after which will accept if that list is the list of machines sampled before . For any we can take large enough that this is true for the list of sentences sampled with probability .
Therefore, as goes to infinity, the probability that outputs the sentence "Exactly one of , , and is true" goes to 1. The probability that later individually outputs the six Turing machines which give the single sentences , , , , , and , consecutively in that order also goes to 1 as goes to infinity, since the complexity of the Turing machine which outputs those sentences is only a constant more than the complexity it takes to output the integer , which is on the order of , and we sample Turing machines. Since notices all propositional contradictions and must accept either or , the sum of the probabilities that accepts , , and must go to one. Therefore,
The fact that the algorithm satisfies property 1 is trivial, so all that remains is to show that it satisfies property 2. Consider a Turing machine which satisfies the conditions of property 2. Consider the infinite class of Turing machines () which outputs the same sentences as , but negates the first sentences output. Note that in any complete theory sampled by the oracle version of the modified Demski prior, exactly one of these Turing machines only outputs true sentences. Let be the event that outputs only true sentences. Note that for each event , the probability that is sampled and accepted in goes to 1 as goes to infinity. (Here, we are taking the execution path of which comes from a random infinite order of Turing machines which satisfies .) Therefore, conditioning on each each , , the probability that outputs converges to , while conditioning on , the probability that outputs converges to . Therefore, since the probabilities conditioned on each event are all bounded (between 0 and 1), and each one converges, the infinite weighted average also converges, so the probability that outputs converges. In fact, it converges to the probability of . Since the probabilities that accept the sentences converge, the probability also converges.
Personal Blog
New Comment
|
{}
|
# Simulating the evolution of a wavepacket through a crystal lattice
I am interested simulating the evolution of an electronic wave packet through a crystal lattice which does not exhibit perfect translational symmetry. Specifically, in the Hamiltonian below, the frequency of each site $\omega_n$ is not constant.
Suppose the lattice is specified by a certain tight-binding Hamiltonian $$H = \sum_n \omega_n a^\dagger_n a_n + t \sum_{<n>} a^\dagger_n a_{n+1} +\text{all nearest neighbor interactions} + \text{h.c}.$$ We prepare a wavepacket, and for simplicity, we express the wavepacket in the fock basis of each lattice site $$| \psi \rangle = \sum_i |b_1\rangle |b_2\rangle \ldots |b_n\rangle.$$ Thus, there are $b_1$ electrons in the $1$st lattice site. Of course, electrons are fermions and $b_1$ may be either $0$ or $1$.
Suppose we treat this problem purely quantum mechanically. Then we will need to prepare a vector of length $2^n$, which is computationally intractable for any significant $n$.
I am interested in physical techniques that may be employed to simplify this problem. Is it possible to attempt the problem in a semiclassical manner?
-
Our FAQ actually disavowes computational questions. With your permission I will migrate this to the new Scientific Computation beta site. Of course, you can ask about the physics here not withstanding that you are planning a computational attack, but this seems to be a implementation question. Or have I mistaken your intent? – dmckee Dec 24 '12 at 21:19
I am more interested in the physical techniques that can be used to simplify the problem and hence, make it computationally viable. As we know, quantum mechanical simulations on classical computers are often intractable as the computational steps required increase exponentially with the degrees of freedom in the system. – flamearchon Dec 24 '12 at 21:25
At the end of the day, I would like to numerically time-step through some differential equation. The question is which differential equation do I solve! – flamearchon Dec 24 '12 at 21:29
Ah...thank you for the clarification. This certainly should remain here. – dmckee Dec 24 '12 at 21:45
@flamearchon For the exact method, you either use eigenvalue or direct evolution, and you dont have the symmetry in the Hamiltonian. The other method should only be approximation. If you get the answer, please post here. – hwlau Dec 28 '12 at 7:18
If you use tight-binding Hamiltonian, it is reasonable to start not from semiclassical, but one-particle approximation. In that case, you have an amplitude (complex number) at each site, the state is complex vector of length $n$, Hamiltonian is $n\times n$ (sparse) matrix and the problem of time evolution and/or eigenstates (for one particle state) is solvable for relatively large lattices.
|
{}
|
# MultiFBeta¶
Multi-class F-Beta score with different betas per class.
The multiclass F-Beta score is the arithmetic average of the binary F-Beta scores of each class. The mean can be weighted by providing class weights.
## Parameters¶
• betas
Weight of precision in the harmonic mean of each class.
• weights
Class weights. If not provided then uniform weights will be used.
• cm – defaults to None
This parameter allows sharing the same confusion matrix between multiple metrics. Sharing a confusion matrix reduces the amount of storage and computation time.
## Attributes¶
• bigger_is_better
Indicate if a high value is better than a low one or not.
• requires_labels
Indicates if labels are required, rather than probabilities.
• works_with_weights
Indicate whether the model takes into consideration the effect of sample weights
## Examples¶
>>> from river import metrics
>>> y_true = [0, 1, 2, 2, 2]
>>> y_pred = [0, 0, 2, 2, 1]
>>> metric = metrics.MultiFBeta(
... betas={0: 0.25, 1: 1, 2: 4},
... weights={0: 1, 1: 1, 2: 2}
... )
>>> for yt, yp in zip(y_true, y_pred):
... print(metric.update(yt, yp))
MultiFBeta: 100.00%
MultiFBeta: 25.76%
MultiFBeta: 62.88%
MultiFBeta: 62.88%
MultiFBeta: 46.88%
## Methods¶
get
Return the current value of the metric.
is_better_than
revert
Revert the metric.
Parameters
• y_true
• y_pred
• sample_weight – defaults to 1.0
update
Update the metric.
Parameters
• y_true
• y_pred
• sample_weight – defaults to 1.0
works_with
Indicates whether or not a metric can work with a given model.
Parameters
• model (river.base.estimator.Estimator)
|
{}
|
# The sum of three consecutive odd integers is 99. What are the three numbers?
Jan 21, 2016
I found $31 , 33 , 35$
#### Explanation:
Let us call our odd integers:
$2 n + 1$
$2 n + 3$
$2 n + 5$
and write our condition as:
$\left(2 n + 1\right) + \left(2 n + 3\right) + \left(2 n + 5\right) = 99$ and solve it for $n$:
$6 n + 9 = 99$
$6 n = 90$
$n = \frac{90}{6} = 15$
so our numbers will be:
$2 n + 1 = 31$
$2 n + 3 = 33$
$2 n + 5 = 35$
|
{}
|
## Exact Value For Cos 36
Consider the diagram below where
$\alpha = 36^o$
.
Triangles ABC and CBE are both right angles, have a side in common and a corresponding equal angle
$\alpha$
. They are congruent triangles.
Let
$AC=AD=x$
then
$AE=ED=x/2$
.
Using trigonometry on triangle
$ABE$
gives
$AB=\frac{x/2}{cos \alpha}$
then
$CB=\frac{x/2}{cos \alpha}$
too.
From triangle
$BCD$
with
$\alpha=36^o$
,
$\angle DBC=180^o-3 \times 36=72^o=2 \alpha$
.
Then triangle
$BCD$
is isosceles and
$CD=\frac{x/2}{cos \alpha}$
.
Also,
$CD=x-\frac{x/2}{cos \alpha}$
.
Now use the Cosine Rule on triangle
$BCD$
.
$(x-\frac{x/2}{cos \alpha})^2=(\frac{x/2}{cos \alpha})^2+(\frac{x/2}{cos \alpha})^2-2(\frac{x/2}{cos \alpha})(\frac{x/2}{cos \alpha})^2 cos \alpha$
Expanding the brackets, Dividing by
$x^2$
and multiplying by
$4cos^2 \alpha$
gives
$4cos^2 \alpha-4cos \alpha +1=2-2cos \alpha \rightarrow 4cos^2 \alpha-2 cos \alpha-1=0$
$cos \alpha=\frac{--2 \pm \sqrt{20}}{2 \times 4}= \frac{1 + \sqrt{5}}{4}$
$cos \alpha$
$cos \alpha \gt 0$
|
{}
|
If I have three dif...
Clear all
# If I have three different kinds of blob tiles, how many tiles do I need?
1 Posts
1 Users
0 Likes
0 Views
Illustrious Member
Joined: 4 months ago
Posts: 57396
Topic starter
Blob tiles are autotiles that match edges and corners. I know that for two different kinds of tiles, I need 47 * 2 tiles. But I have three different kinds of tiles. I thought that I needed 47 * 3! tiles, but I made them all and tried to use them, and discovered I was missing some. Like, a tile can have all three different kinds of tiles on it, and I didn't make those. How many do I need to make?
## paging function for similar posts shows the right links when hovering, but stays on page 1
i show similar posts based on tags:
`````` //similar posts
\$postID = get_queried_object_id();
\$tags = wp_get_post_tags(\$postID);
foreach (\$tags as \$tag) {
//make array \$xtag
\$xtag[] = \$tag->slug;
\$count_tag = \$tag->count;
}
\$paged = ( get_query_var( 'paged' ) ) ? get_query_var( 'paged' ) : 1;
//echo \$paged;
\$args = array(
'exclude' => \$postID,
'order' => 'ASC',
'orderby' => 'name',
'posts_per_page' => 2,
'paged' => \$paged,
'tax_query' => array(
array(
'taxonomy' => 'post_tag',
'field' => 'slug',
'terms' => \$xtag //apply \$xtag array
)
)
);
\$customPostQuery = new WP_Query(\$args);
//checking the max number of pages
echo \$customPostQuery->max_num_pages;
//etc. printing the posts in a foreach loop
``````
This works. Then i call a paging function, that should show similar posts by page:
``````if (function_exists("similar_post_pagination")) {
similar_post_pagination(\$customPostQuery->max_num_pages);
}
``````
This is the `similar_post_pagination` function. It prints the pagination and when hovering the links also shows the correct links, but stays on page 1. I noticed that `global \$wp_query` is empty. Apperently nothing happens here.
``````function similar_post_pagination(\$pages = "", \$range = 2) {
\$showitems = \$range * 1 + 1;
global \$paged;
if (empty(\$paged)) {
\$paged = 1;
}
if (\$pages == "") {
global \$wp_query;
**//this is empty**
\$pages = \$wp_query->max_num_pages;
echo 'empty \$pages '.\$pages;
if (!\$pages) {
\$pages = 1;
}
}
if (1 != \$pages) {
echo "<div class='archiv-pager'>";
if (\$paged > 2) {
echo "<a class='page-numbers' title='" .
\$first ."' href='" . get_pagenum_link(1) . "'><<</a><span>| </span>";
}
if (\$paged > 1) {
echo "<a class='page-numbers' title='" . \$prev . "' href='" . get_pagenum_link(\$paged - 1) . "'>< </a>";
}
for (\$i = 1; \$i <= \$pages; \$i++) {
\$delimiter = " ";
if (\$i > 1) {
\$delimiter = "• ";
} else {
\$delimiter = "";
}
if (
1 != \$pages &&
(!(\$i >= \$paged + \$range + 1 || \$i <= \$paged - \$range - 1) ||
\$pages <= \$showitems)
) {
if (\$paged == \$i) {
echo \$delimiter . "<span class='page-numbers current'>" . \$i . "</span>";
} else {
echo \$delimiter . "<a class='page-numbers inactive' title='" . \$page . \$i . "' href='" . get_pagenum_link(\$i) . "' >" . \$i . "</a>";
}
}
}
if (\$paged < \$pages) {
echo "<a class='page-numbers' title='" . \$next . "' href='" . get_pagenum_link(\$paged + 1) . "'> ></a>";
}
if (\$paged + 1 < \$pages) {
echo "<span>| </span><a class='page-numbers' title='" . \$last . "' href='" . get_pagenum_link(\$pages) . "'>>></a>";
}
echo "</div>";
} else {
echo '<div class="dummy-pager"></div>';
}
}
``````
What do i have to use in this function for the pagination to work? Another query? Is it a scope issue? The same function works for archiv pages, but not for the similar posts.
Sorry about the length of this question and thanks for your interest. gurky
## Can generating functions be used to solve evolution matrix differential equations and recurrence relations of matrices?
Generating functions seem to be a powerful tool in discrete mathematics for solving differential equations and recurrence relations. I’ve been trying to figure out if these methods can be expanded to differential equations that involve matrices, such as Schrodinger’s equation. Is there anything that prevents a solution to these types of differential equations being written using generating functions? For example, given a solution to Schrodinger’s equation as
$$U = Te^{-i hbar int_{0}^{t’} H(t) ,dt}$$
Where H(t) is some time-dependent Hamiltonian matrix and T is the time-ordering operator. This form seems very reminiscent to exponential generating functions. For example, the generating function for the Bessel functions is given by $$e^{frac{x}{2}(t-frac{1}{t})}=sum_{n=-infty }^{infty} J_n(x)t^n$$
So, if the time evolution of the Hamiltonian gave something similar to the generating function above, where x is now the matrix Hamiltonian, can the exponential be re-written in a form using the Bessel functions of matrix argument? I’d assume that the matrix argument of a Bessel function would be similar to an analytic function of matrix argument, but everything I find seems to write Bessel/Hypergeometric functions in terms of zonal polynomials, and the wikipedia page for hypergeometric functions of matrix argument even mention that these types of functions aren’t similar to writing other functions in terms of matrix arguments. That doesn’t make sense to me though since these special functions have these generating function relations. Any help would be appreciated if someone could point me towards the literature too.
## there a 1px red line on my monitor screen
it might sound kinda stupid I know, “it’s just a line at the top” but it bothers me because of the toc.
obs: it interacts with what’s behind it, it’s not a simple line. sometimes it looks like it’s turning off and on smoothly
about it, it’s not from the browser, although it seems, I used a program that I have that deletes everything, I deleted the browser, and after I restarted the computer limiting it to microsoft applications, the line continued.
(it doesn’t appear on screenshots)
## Calculating L-smoothness constant for logistic regression.
I am trying to find the $$L$$-smoothness constant of the following function (logistic regression cost function) in order to run gradient descent with an appropriate stepsize.
The function is given as $$f(x)=-frac{1}{m} sum_{i=1}^mleft(y_i log left(sleft(a_i^{top} xright)right)+left(1-y_iright) log left(1-sleft(a_i^{top} xright)right)right)+frac{gamma}{2}|x|^2$$ where $$a_i in mathbb{R}^n, y_i in{0,1}$$,$$s(z)=frac{1}{1+exp (-z)}$$ is the sigmoid function.
$$nabla f(x)=frac{1}{m} sum_{i=1}^m a_ileft(sleft(a_i^{top} xright)-y_iright)+gamma x$$.
My ideas was that the smoothness constant $$L$$ has to be bigger than all the eigenvalues of the hermitian of the given function, this follows from the fact that if $$f$$ is $$L$$-smooth, $$g(x)=frac{L}{2} x^T x-f(x)$$ is a convex function and therefore the hessian has to be positive semi-definite.
The second-order partial derivatives of $$f$$ are given as
$$frac{partial^2 }{partial x_k partial x_j}f(x)=frac{1}{m} sum_{i=1}^ms(a_i^{top} x)left(1-s(a_i^{top} x)right)[a_i]_k[a_i]_j+gammadelta_{ij}$$
from the following github post (https://github.com/ymalitsky/adaptive_GD/blob/master/logistic_regression.ipynb) i know that $$L=frac{1}{4} lambda_{max }left(A^{top} Aright)+gamma$$ , where $$lambda_{max }$$ denotes the largest eigenvalue, which seems good since i figured out that $$s(a_i^{top} x)left(1-s(a_i^{top} x)right)leq frac{1}{4}$$ for all $$x$$.
But i am not able to fit everything together. I would appreciate any help.
## GnuCash – Help Buttons Not Working
GnuCash 2.6.15 – Debian Stretch
`gnucash-docs` and `yelp` packages installed.
While in GnuCash, when I activate a sub-window “Help” button (e.g. as seen by clicking Edit -> Find… -> Help), the mouse pointer changes from a pointer icon to the active processing icon for about 15 seconds. It then changes back to a pointer icon without any other action. No help dialog is created.
However, when clicking (on the main toolbar menu) Help -> Tutorial and Concepts Guide, said guide comes up as is should!
I suspect I may be missing a package, but which one?
## Ultrafilters and compactness
A topological space is compact if and only if every ultrafilter is convergent.
While I was reading the proof of the one Side of theorem above, there is something I could not understand. Following is the proof of of the one side of the theorem.
Let $$X$$ be compact and assume that $$mathcal{F}$$ is the ultrafilter on $$X$$ without a limit point. Then for each $$xin X$$, there exists an open neighborhood $$U_{x}$$ of it such that each $$U_{x}$$ does not contain any member of $$mathcal{F}$$. Since $$mathcal{U}={U_{x} : xin X}$$ is an open cover of $$X$$, there exists a finite subfamily $${U_{x_{i}}: i=1,2,…,n}$$ of $$mathcal{U}$$ such that $$X=bigcup_{i=1}^{n} U_{x_{i}}$$. Let $$Ainmathcal{F}$$ be fixed. Then $$A=(Acap U_{x_{1}})cap (Acap U_{x_{2}})…(Acap U_{x_{n}})inmathcal{F}$$ and thus there exists an $$iin{1,2,…,n}$$ such that the subset $$Acap U_{x_{i}}$$ is in $$mathcal{F}$$ which is a contradiction.
The thing that I could not understand, why there exists $$iin{1,2,…,n}$$ such that $$Acap U_{x_{i}}$$ must be in $$mathcal{F}$$? If you clarify this, it would highly be appreciated. Thank you.
## Representing \$G=text{GL}^+(2,mathbf R)\$ as the matrix product \$G=TH\$. If \$H=text{SO}(2)\$, what is \$T\$?
In this paper (Equation 2.6 and 2.7) the author seems to suggest that one can represent the $$text{GL}^+(4,mathbf R)$$ group using the product of two exponentials: $$exp (epsilon cdot T) exp (u cdot J)$$, where $$T$$ are the generators of shears and dilation, and $$J$$ are the generators of Lorentz transformations.
My take on the subject is that since $$T$$ and $$J$$ do not commute, one cannot write $$G$$ as a product of these two exponentials. One must instead write $$G=exp ( epsilon cdot T + u cdot J )$$. It appears to me the author is wrong.
Is the author correct, or am I?
How can I represent $$text{GL}^+(2,mathbf R)$$ as the matrix product $$G=TH$$ where $$H=text{SO}(2)$$?
## Pushout in the category of commutative unital \$C^{ast}\$-algebras
What is the pushout in the category of commutative unital $$C^{ast}$$-algebras? Is it the tensor product? Is it the same as in the category of noncommutative unital $$C^{ast}$$-algebras?
## Bounds on the maximum real root of a polynomial with coefficients \$-1,0,1\$
Suppose I have a polynomial that is given a form
$$f(x)=x^n – a_{n-1}x^{n-1} – ldots – a_1x – 1$$
where each $$a_k$$ can be either $$0,1$$.
I’ve tried a bunch of examples and found that the maximum real root seems to be between $$1,2$$, but as for specifics of a polynomial of this structure I am not aware.
Using IVT, we can see pretty simply that $$f(1)leq0$$ and $$f(2)> 0$$ so there has to be a root on this interval, but thats a pretty wide range was wondering if this was previously studied
Share:
|
{}
|
# Overleaf - includegraphics is not working: empty images
I was getting a "no BoundingBox" error in my file and I saw that to solve my problem I should convert .png files to .eps. Done. Now, the errors disappeared but the figures are not visible. Here is the code:
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\textwidth]{RedeUsina.eps}
\caption{Some caption here}
\label{fig:arq}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\textwidth,]{AmbienteTestes.eps}
\caption{Some caption here.}
\label{fig:amb}
\end{figure}
The spaces of the images are generated, but just the empty spaces, nothing visible (no images).
I would like to know how to solve this. I am using a variant of the IEEE tran, named sbrt2018port.cls.
• I guess overleaf would use pdflatex by default, in which case eps files would not be allowed and converted to png in the background anyway. What you have done confuses me. May 9, 2018 at 5:57
• (I'm a support personnel at Overleaf) .eps files are allowed with pdflatex + Overleaf; --shell-escape is enabled by default and would convert it to .pdf. (Re security concerns: Each Overleaf project is in its own Docker container so the -shell-escape would only affect your own project.) However the original "No bounding box" messages may mean your original .png files are missing metadata re its dimension. If you email support@overleaf.com and let us know your project URL, we can have a closer look for you. May 9, 2018 at 6:03
• If you got a no bounding box error with png-files then you perhaps used latex instead of pdflatex. May 9, 2018 at 7:01
• @Johannes_B , I was using .png files, but I was receiving the "no BoundingBox" error. When I searched about, I saw people talking about the .eps files and tried. May 9, 2018 at 15:22
• I've now had a look at sbrt2018port.cls; it contains \usepackage[dvips]{graphicx}. Removing the dvips solves the problem: graphicx is smart enough to know what driver to use anyway. May 9, 2018 at 16:45
sbrt2018port.cls (a copy can be found here) has a line \usepackage[dvips]{graphicx}, so .png files cannot be processed correctly even if you use pdflatex to compile your project.
Removing the dvips on that line, so that graphicx can use the suitable driver to process image files, solves the problem. graphicx is smart enough to know which driver to use anyway, based on the engine used to compile your document.
|
{}
|
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" />
# Fraction and Mixed Number Comparison
## Use <, > and/or = to compare fractions and mixed numbers.
Estimated7 minsto complete
%
Progress
Practice Fraction and Mixed Number Comparison
Progress
Estimated7 minsto complete
%
Fraction and Mixed Number Comparison
Margaret’s dad is trying to cut back on the salt in his diet. He learned that many packaged foods contain a lot of salt, so he is trying to make some of his favorite foods himself. One of his favorite snacks is salsa, so he is making his own salsa. He found two different recipes. One calls for teaspoons of salt and another calls for teaspoons of salt. How can Margaret help her dad to determine which recipe to use if he wants to use the least amount of salt possible?
In this concept, you will learn how to compare and order fractions and mixed numbers.
### Guidance
When two fractions have the same denominator, the fraction with the larger numerator will be the bigger fraction. When two fractions do not have the same denominator, comparison is not as easy.
One way to compare fractions is by using approximation and benchmarks. Approximate each fraction with one of the three benchmarks: , and . Then, compare the approximations.
Here is an example.
Use approximation to order , and greatest to least.
First, approximate each fraction using the fraction benchmarks.
• out of 8 is almost 8 out of 8 which would be a whole, so is approximately 1.
• is a little less than half of 5, so is approximately .
• is a mixed number greater than 1, so it will automatically be the greatest number in our list.
• the denominator of 29 is much larger than the numerator of 1, so is approximately 0.
• out of 30 is almost 30 out of 30 which would be a whole, so is approximately 1.
Now, write the fractions in a preliminary greatest to least order with the benchmarks in parentheses:
Notice that the approximation method helped to order most of the numbers, but there are two fractions that are close to 1. You will have to use another method to decide how compares to .
One way to determine which fraction is closest to 1 is to draw two number lines between 0 and 1, arranged so that one number line is above the other. Divide the top number line into eighths and find . Divide the bottom number line into thirtieths and find . Look to see which value is closest to 1.
Now you can see that is closer to 1, so it is the greater number.
The answer is that the numbers ordered from greatest to least are .
Another way to compare two fractions with different denominators is by rewriting one or both fractions so that they have the same denominator. To rewrite a fraction, find an equivalent fraction by multiplying both the numerator and the denominator by the same number. Your goal is to choose numbers to multiply by so that the denominators of the equivalent fractions will be the same.
Here is an example.
Compare and .
First, notice that the denominators of 3 and 7 are different. You will need to find an equivalent fraction for each given fraction so that their denominators are the same.
Next, find a common denominator. You are looking for a number that is a multiple of both 3 and 7. The product of 3 and 7 is 21 and in this case that is the least common multiple of 3 and 7.
Now, rewrite each fraction as an equivalent fraction with a denominator of 21. Remember to always multiply the numerator and denominator of the fraction by the same number.
Next, compare the rewritten fractions. Now that they have the same denominator, you can see that is less than , so is less than .
### Guided Practice
In the long jump contest, Peter jumped feet, Sharon jumped feet, and Juan jumped feet. Order their jump distances from greatest to least.
First, notice that is less than 6 while and are both greater than 6. That means is the smallest number.
Next, compare and . Because both numbers are between 6 and 7, you can compare the fractional parts of the mixed numbers, and , in order to determine which mixed number is greater. Because the denominators are different, you will need to find an equivalent fraction for each given fraction so that their denominators are the same.
Now, find a common denominator. You are looking for a number that is a multiple of both 5 and 7. 35 is the least common multiple of 5 and 7.
Next, rewrite each fraction as an equivalent fraction with a denominator of 35.
Finally, compare the rewritten fractions. Now that they have the same denominator, you can see that is greater than . This means is greater than .
The answer is that the distances ordered from greatest to least are .
### Examples
#### Example 1
Compare and .
First, try the approximation method and approximate each fraction with a fraction benchmark.
• The numerator of 1 is much less than the denominator of 8, so is approximately 0.
• out of 6 is almost 6 out of 6 which would be a whole, so is approximately 1.
Next, compare the two numbers. is approximately 0 and is approximately 1. That means is definitely less than .
#### Example 2
Compare and .
First, notice each fraction is approximately , so the approximation method for comparison won’t work this time. Because the denominators are different, you will need to find an equivalent fraction for each given fraction so that their denominators are the same.
Next, find a common denominator. You are looking for a number that is a multiple of both 9 and 15. 45 is the least common multiple of 9 and 15, though any common multiple of 9 and 15 would work.
Now, rewrite each fraction as an equivalent fraction with a denominator of 45.
Next, compare the rewritten fractions. Now that they have the same denominator, you can see that is less than , so is less than .
#### Example 3
Compare and .
First, notice each fraction is approximately 1, so the approximation method for comparison won’t work this time. Because the denominators are different, you will need to find an equivalent fraction for each given fraction so that their denominators are the same.
Next, find a common denominator. You are looking for a number that is a multiple of both 9 and 4. 36 is the least common multiple of 9 and 4.
Now, rewrite each fraction as an equivalent fraction with a denominator of 36.
Next, compare the rewritten fractions. Now that they have the same denominator, you can see that is greater than , so is greater than .
Remember Margaret and her dad who is making salsa? He found two recipes. One calls for teaspoons of salt and the other calls for teaspoons of salt. He wants to use the least amount of salt possible.
First, Margaret should notice that because both numbers are between 1 and 2, she can compare the fractional parts of the mixed numbers, and , in order to determine which mixed number is greater. Because the denominators are different, she will need to find an equivalent fraction for each given fraction so that their denominators are the same.
Next, she can find a common denominator. She is looking for a number that is a multiple of both 2 and 8. 8 is the least common multiple of 2 and 8.
Now, she can rewrite each fraction as an equivalent fraction with a denominator of 8. Note that because already has a denominator of 8, she will not need to rewrite that fraction!
Finally, Margaret can compare the rewritten fractions. Now that they have the same denominator, she can see that is less than . This means teaspoons is less than teaspoons.
The answer is that Margaret’s dad should use the recipe that calls for teaspoons of salt.
### Explore More
Compare each pair of fractions or mixed numbers using an inequality symbol or equals sign.
1. and
2. and
3. and
4. and
5. and
6. and
7. and
8. and
9. and
10. and
Write each set in order from least to greatest.
11.
12.
13.
14.
15.
16.
17. Brantley is making an asparagus soufflé which calls for cups of cheese, cups of asparagus, and cups of parsley. Using approximation, order the ingredients from largest amount used to least amount used.
18. Geraldine is putting in a pool table in her living room. She wants to put it against the longest wall of the room. Wall A is feet and wall B is feet. Against which wall will Geraldine put her pool table?
### Vocabulary Language: English
Denominator
Denominator
The denominator of a fraction (rational number) is the number on the bottom and indicates the total number of equal parts in the whole or the group. $\frac{5}{8}$ has denominator $8$.
fraction
fraction
A fraction is a part of a whole. A fraction is written mathematically as one value on top of another, separated by a fraction bar. It is also called a rational number.
improper fraction
improper fraction
An improper fraction is a fraction in which the absolute value of the numerator is greater than the absolute value of the denominator.
Mixed Number
Mixed Number
A mixed number is a number made up of a whole number and a fraction, such as $4\frac{3}{5}$.
Numerator
Numerator
The numerator is the number above the fraction bar in a fraction.
### Explore More
Sign in to explore more, including practice questions and solutions for Fraction and Mixed Number Comparison.
|
{}
|
Engineering Mechanics: Statics & Dynamics (14th Edition)
Published by Pearson
Chapter 8 - Friction - Section 8.8 - Rolling Resistance - Problems - Page 454: 107
Answer
$132.14N.m$
Work Step by Step
We know that $M=\frac{2}{3}\mu P(\frac{R_2^3-R_1^3}{R_2^2-R_1^2})$ We plug in the known values to obtain: $M=\frac{2}{3}(0.30)(5000)[\frac{(0.100)^3-(0.075)^3}{(0.100)^2-(0.075)^2}]$ This simplifies to: $M=132.14N.m$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
{}
|
# Texture coordinates problem
This topic is 3629 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Hi, I'm having trouble generating textures coordinates for the cells of my terrain Here is the code :
int v_index = 0;
int HeightIndex = 0;
CDXObjects::Instance().getDevice()->CreateVertexBuffer(NumVertices*sizeof(TerrainVertex),D3DUSAGE_WRITEONLY,TerrainVertex::FVF,D3DPOOL_DEFAULT,&pNode->vBuffer,NULL);
TerrainVertex * v = NULL;
pNode->vBuffer->Lock(0,0,(void**)&v,0);
int i = 0;
for(int z = (int)pNode->vBoundingCoords[0].y; z <= (int)pNode->vBoundingCoords[3].y; z += vertexSpacing)
{
int j = 0;
for(int x = (int)pNode->vBoundingCoords[0].x; x <= (int)pNode->vBoundingCoords[3].x; x += vertexSpacing)
{
v_index = i * numVerticePerSide + j;
coordU = ((float)j)/16.0f;
coordV = ((float)i)/16.0f;
HeightIndex = ((i * FileWidth)+(FileWidth*(pNode->vBoundingCoords[0].y/vertexSpacing))) + (j+(pNode->vBoundingCoords[0].x/vertexSpacing));
v[v_index] = TerrainVertex((float)x,(float)0,(float)z,coordU,coordV);
j++;
}
i++;
}
pNode->vBuffer->Unlock();
return;
I divide coordU and coordV by 16.0 but it is just a random number to test In fact the problem is that coordU has no influence, I can set it to what ever I want it doesn't change anything..... But coordV does work fine, the coordinates are not good at all! If you have any ideas....... Thanx. [Edited by - Grigou on August 13, 2008 4:28:00 PM]
##### Share on other sites
Try stepping through your code in debug mode, line by line, examining the values of x, z, i and j and see if they're what they should be.
For instance, is j incrementing correctly? Do you get reasonable values for coordU and coordV? Is j always less than numVerticePerSide?
EDIT: What is TerrainVertex::FVF? It appears that it should be D3DFVF_XYZ | D3DFVF_TEX1. Is that what you have it set to?
##### Share on other sites
Well, sorry for the code part, i'm new and I dont know how it works....
So this is my vertex declaration :
TerrainVertex::FVF = D3DFVF_XYZ|D3DFVF_TEX1;
It should be fine .....
Before is was creating my terrain cells one by one and the texture coordinates where working fine.
Now I am using a QuadTree to manage the terrain.
I am checking all the variables...
Thanx for the replies
EDIT : Okay I solved the problem after a few hours of research....
Seems like the terrain vertices were not getting the proper vertex declaration
Anyway thank you for spending some time on this thread
[Edited by - Grigou on August 14, 2008 5:10:08 AM]
1. 1
2. 2
Rutin
24
3. 3
4. 4
JoeJ
18
5. 5
• 14
• 23
• 11
• 11
• 9
• ### Forum Statistics
• Total Topics
631768
• Total Posts
3002242
×
|
{}
|
# Why does TEM mode not propagate in single conductor system?
In the Wikipedia article it is said:
"In hollow waveguides (single conductor), TEM waves are not possible, since Maxwell's Equations will give that the electric field must then have zero divergence and zero curl and be equal to zero at boundaries, resulting in a zero field (or, equivalently,$$\nabla^2 \Phi=0$$ with boundary conditions guaranteeing only the trivial solution)."
My question is why is it required that the electric field be zero at the boundary of a single conductor system. I know that the tangential component of the field must be zero but why does the perpendicular component vanish in this case?
|
{}
|
# A toy is in the form of a cone of radius 3.5 cm mounted on a hemisphere of same radius. The total height of the toy is 15.5 cm. Find the total surface area of the toy.
We have VO = 15.5 cm, OA = OO' = 3.5 cm
Let r be the radius of the base of cone and h be the height of conical part of the toy.
Then r = OA = 3.5 cm
h = VO = VO' - OO = (15.5 - 3.5) cm = 12 cm
Also radius of the hemisphere = OA = r = 3.5 cm
l = Slant height, is given by :
$$\Rightarrow l = \sqrt{r^2 + h^2}$$
$$\Rightarrow l = \ \sqrt{OA^2 \ + \ OV^2} \$$
$$= \ \sqrt{(3.5)^2 \ + \ 12^2} \$$
$$= \ \sqrt{156.25} \$$
$$= \ 12.5$$cm
Total surface area of the toy = Curved surface area of cone + Curved surface area of hemisphere
$$= \ \pi rl \ + \ 2 \pi r^2$$
$$= \ \pi r(l \ + \ 2r) \$$
$$= \ \frac{22}{7} \ × \ 3.5 \ × \ 12.5 \ + \ 2 \ × \ 3.5$$
$$= \ \frac{22}{7} \ × \ 3.5 \ × \ 19.5 \$$
$$= \ 214.5$$ cm2
|
{}
|
# Perspective Transformation
$\M{P}=\begin{bmatrix} \frac{1}{r \tan\alpha/2} & 0 & 0 & 0 \\ 0 & \frac{1}{\tan\alpha/2} & 0 & 0 \\ \phantom{\frac{}{}}0 & 0 & \frac{n+f}{n-f} & \frac{2\,n\,f}{n-f} \\ \phantom{\frac{}{}}0 & 0 & -1 & 0 \\ \end{bmatrix}$
• $\alpha$ is the field of view
• $r$ is the aspect ratio
• $n$ is the near distance
• $f$ is the far distance
|
{}
|
Tag Info
2
It is interesting to consider that either outcome is possible, depending on the order of addition. If a solution containing 0.5 moles of HCl is added to a solution of 1 mole of K2CO3, slowly, with mild stirring, cold, the products will be 0.5 KCl + 0.5 KHCO3. Nothing is lost to the atmosphere. However, if a solution of 1 mole of K2CO3 is added to a ...
2
The reaction of epoxyketone 1 affords diketone 2 under photochemical and thermal conditions ostensibly through the diradical.1 Boron trifluoride provides the diketone 3 through acyl bond migration and not via alkyl group migration as in your intermediate 3a. I have not located a proton-catalyzed reaction. Unfortunately, diketone 3 was not one of your choices....
2
Energy usually comes from reactions not one component of the reactions The problem with the way the description of oxygen as "energy rich" is that it doesn't make sense outside of the context of reactions oxygen can participate in. And, in that context, it doesn't make sense to partition the "energy* between the things that are reacting. Chemical reactions ...
3
Boiling off some water is a decent plan. Perhaps it should be done at reduced pressure so as to keep the temperature low. Magnesium and calcium bicarbonates decompose at temperatures above 60C in water to yield insoluble carbonates, and several data sheets for sodium bicarbonate suggest the same. One said decomposition was complete at 100C. I suppose you ...
6
[OP:] It is usually said that fuel contains energy and that oxygen only enables the release of energy in the sense like enzymes enable reactions. An enzyme is a catalyst, so it does not change the enthalpy of a reaction. That part is correct. "Oxygen only enables the release of energy" is incorrect. Oxygen is one of the reactants, and the oxygen atoms also ...
2
I present another (almost) two-step synthesis, involving the famous Wittig reaction. The first step details the preparation of a phosphonium ylide, which subsequently is allowed to react with the carbonyl compound (benzophenone in our case) to yield the final product, 1,1-diphenyl-1-butene. For mechanistic details, visit NotEvans.'s answer to Which is the ...
6
Two steps: Form the Grignard reagent with 1-bromopropane and magnesium metal. This can be down in a variety of ethereal solvents, THF or $\ce{Et2O}$ are most commonly used. This species is nucleophilic through carbon and will add to the carbonyl group. Cool the Grignard solution in an ice bath under nitrogen with stirring, slowly add a solution of 0.9 eq ...
1
Karsten Theis' answer is a perfect one for your question. He also did his best to explain why is he used the bond-making bond-breaking sign conversion but you are still in mind set of product-reactant explanation. So I decided to explain it in different direction. Remember the law of conservation of energy, which states that the total energy of an isolated ...
3
Here is the tally of bonds: One $\ce{H-H}$ bond broken One half $\ce{O=O}$ double bond broken Two $\ce{O-H}$ bonds made Breaking bonds costs (positive sign), making bonds gains (negative sign). The result will be an estimate of turning reactants to products in the gas phase. It is an estimate because the strength of an $\ce{O-H}$ bond depends on what else ...
0
This is because when number of electron donating alkyl group on OH-bonded carbon atom increases, polarity of carbon oxygen bond also increases, which further facilitates the cleavage of carbon oxygen bond. Therefore reactivity increases.
2
In solvolysis of simple primary cyclopropylmethyl systems the rate is enhanced because of participation by the $\sigma$ bonds of the ring. The ion that forms initially is an unrearranged cyclopropylmethyl cation that is symmetrically stabilized, that is, both the 2,3 and 2,4 $\sigma$ bonds help stabilize the positive charge(page 464 ,JerryMarch ,...
3
The first step is nucleophilic substitution . It is possible for nucleophile to attack directly at the allylic position, displacing the leaving group in a single step, in a process referred to as SN2' substitution. This is likely in cases when the allyl compound is unhindered, and a strong nucleophile is used. The products will be similar to those seen ...
8
The first reaction is O-alkylation of p-cresol to give a 4-methylphenyl allyl ether derivative 3. The reagent in the first box should be 1-bromo-3-methylbut-2-ene (1; see the top box in the picture), which would undergo $\mathrm{S_N2}$ reaction with phenolic anion (2) in refluxing acetone. Note that potassium carbonate is a strong enough base to complete ...
6
Potassium carbonate is a perfectly good base for the alkylation of phenol (pKa 10) with a good electrophile, in this case 3,3-dimethylallylbromide. The reaction you are looking for is a Claisen rearrangement which proceeds by a 3,3-sigmatropic rearrangement mechanism. image from ref 1
2
First passage problems This is a first passage problem, asking when a system reaches a certain final state for the first time . If you were to use a simulation to explore this, you would erase parts of trajectories right after reaching the final state before summing up results. The Backwards Master Equation is especially suited for first passage problems (...
-1
Aldehydes and ketones prefer nucleophilic addition reactions Instead of electrophilic addition reactions this is because in case of nucleophilic addition reaction a stable intermediate is formed as compared to the electrophilic addition reactions where the intermediate formed is carbocation which is highly unstable.
3
Example 1 Ethylene glycol is often used to make cyclic acetals; its acetals are called ethylene acetals (or ethylene ketals).This interconversion makes acetals attractive as protecting groups to prevent ketones and aldehydes from reacting with strong bases and nucleophiles Example 2 Reference Page 861, Organic Chemistry , Eight edition , L.G.Wade
3
The hint is to form the dioxolane (cyclic acetal) of the ketone with ethylene glycol then reduce the ester (LiAlH4 will do this, the dioxolane will not react) and finally deprotect the ketone. The reduction can also be accomplished by hydrolysing the ester and reducing the resulting acid with borane.
0
I tried to work out a mechanism from the information I could gather about the reaction. If someone notices something odd/wrong please comment and I'll fix it. The steps are either showing the protonation or skipping directly to the protonated forms of the reactants, them being in acidic medium. Formation of isobutylene Not very complicated, in acidic ...
0
$\ce{CaCO3}$ is easily decarboxylated to $\ce{CaO}$ in the heat of the fire. By dumping the ash into water you get a solution of $\ce{Ca(OH)2}$ and $\ce{K2CO3}$, and since $\ce{CaCO3}$ is less soluble than $\ce{Ca(OH)2}$ and $\ce{K2CO3}$ you end up with $\ce{KOH + Ca(OH)2}$ solution in one step and $\ce{CO2}$ dissolving into this solution will deplete ...
Top 50 recent answers are included
|
{}
|
### Derangements and Continued Fractions for e
We show in this post that an elegant continued fraction for ${e}$ can be found using derangement numbers. Recall from last week’s post that we call any permutation of the elements of a set an arrangement. A derangement is an arrangement for which every element is moved from its original position.
The number of arrangements of a set of ${n}$ elements is the factorial ${n!}$ and the number of derangements is the subfactorial ${!n}$. The connection between ${!n}$ and ${n!}$ is established using the inclusion-exclusion principle, and we have
$\displaystyle \frac{!n}{n!} \rightarrow \frac{1}{e} \qquad\mbox{as}\qquad n \rightarrow \infty$
In fact, ${!n}$ is the nearest whole number to ${n!/e}$ [see previous post].
Continued Fractions and Convergents
The continued fraction expansion of an irrational number ${x}$ is written, in expanded form and concise form, as
$\displaystyle x = a_0 + \cfrac{1}{ a_1 + \cfrac{1}{ a_2 + \cfrac{1}{ a_3 + \ddots } }} = [ a_0 ; a_1 , a_2 , a_3 , \dots ]$
where ${a_n}$ are integers. If ${a_n}$ is positive for ${n \ge 1}$ this is called a simple continued fraction.
The generalized continued fraction expansion is written
$\displaystyle x = b_0 + \cfrac{a_1}{ b_1 + \cfrac{a_2}{ b_2 + \cfrac{a_3}{ b_3 + \ddots } }} = b_0 +\frac{a_1}{b_1+}\,\frac{a_2}{b_2+}\,\frac{a_3}{b_3+}\,\frac{a_4}{b_4+}\,\cdots \,.$
where ${a_n}$ and ${b_n}$ are integers. Truncating the expansion at various points, we obtain the convergents
$\displaystyle r_n = \frac{p_n}{q_n} = b_0 +\frac{a_1}{b_1+}\,\frac{a_2}{b_2+}\,\frac{a _3}{b_3+}\,\frac{a_4}{b_4+}\,\cdots\,\frac{a_n}{b_n}$
where the numerators and denominators, ${p_n}$ and ${q_n}$, are integers.
We define the starting values
$\displaystyle p_{-1} = 1\,, \qquad q_{-1} = 0\,, \qquad p_0 = b_0\,, \qquad q_0 = 1 \,.$
Then, ${p_{k}}$ and ${q_{k}}$ for ${k\ge 1}$ are given by recurrence relations:
$\displaystyle p_{k} = b_{k} p_{k-1} + a_{k} p_{k-2} \,, \qquad q_{k} = b_{k} q_{k-1} + a_{k} q_{k-2} \,, \ \ \ \ \ (1)$
which may be proved by induction.
This process can be inverted: given a sequence of numerators ${p_n}$ and denominators ${q_n}$ (or just their ratios, the convergents ${r_n = p_n/q_n}$), we can solve (1) for ${a_n}$ and ${b_n}$:
$\displaystyle a_n = \frac{p_{n-1}q_{n} - p_{ n}q_{n-1}} {p_{n-1}q_{n-2} - p_{n-2}q_{n-1}} \,, \qquad b_n = \frac{p_{n}q_{n-2} - p_{n-2}q_{n}} {p_{n-1}q_{n-2} - p_{n-2}q_{n-1}} \ \ \ \ \ (2)$
together with the starting values ${b_0=p_0}$, ${a_1 = (p_1-b_0q_1)}$ and ${b_1=q_1}$.
Continued Fractions for ${e}$
Euler’s number is usually defined as the limit ${ e = \lim_{n\rightarrow\infty}(1+1/n)^n}$. This is the limit of the sequence
$\displaystyle \left\{ \frac{2^1}{1^1}, \frac{3^2}{2^2}, \frac{4^3}{3^3}, \dots ,\frac{(n+1)^n}{n ^n}, \dots \right\}$
The terms may be regarded as the convergents of a continued fraction,
$\displaystyle r_n = \frac{p_n}{q_n} \qquad\mbox{where}\qquad p_n = (n+1)^n \quad\mbox{and}\quad q_n = n^n \,.$
We can generate a continued fraction by using (2). It begins as
$\displaystyle 1 +\frac{1}{1-}\,\frac{1}{5-}\,\frac{13}{10-}\,\frac{491}{196-}\, \frac{487903}{9952-}\,\frac{2384329879}{958144-}\, \cdots \,. \ \ \ \ \ (3)$
The error of this expansion (${\log_{10}[r_n-e]}$) as a function of truncation is shown in the figure below (dashed black line). It is clear that the convergence is very slow.
Euler made extensive studies of continued fractions. For example, his 50-page paper, Observations on continued fractions (Euler, 1750), contains numerous original results. One of his best-known expansions is
$\displaystyle e = [2; 1,2,1,1,4,1,1,6,1,1,8,\dots] \ \ \ \ \ (4)$
The error of Euler’s expansion is shown in the figure (dotted red line). It converges much faster than (3). There is a clear signal of period 3, consistent with the recurring pattern ${(1, 1, n)}$ in (4).
Logarithm of the error $\log_{10}|r_n - e |$ in the continued fraction expansions for $e$. Dashed black line: $r_n=(1+1/n)^n$, Eq. (3) . Dotted red line: Convergents of Euler’s expansion (4). Solid blue line: $r_n=(n+1)!/!(n+1)$, Eq. (5).
Continued fraction from derangement numbers
A beautiful continued fraction emerges from the relationship between arrangements and derangements. We saw above that
$\displaystyle \left[ \frac{\mbox{Arrangements of n elements}}{\mbox{Derangements of n elements}} \right] = \frac{n!}{!n} \to e$
If we define the numerators and denominators of convergents to be
$\displaystyle p_n = n! \quad\mbox{and}\quad q_n = \,!n \,,$
we can solve for the factors ${a_n}$ and ${b_n}$. The starting values ${p_0=1, p_1=1, q_0=1, q_1=0}$ yield ${a_0=0, b_0=1, a_1=1, b_1=0}$. Then (2) may be solved to yield ${a_n = b_n = (n-1)}$ for ${n\ge 2}$. Thus we get the expansion
$\displaystyle e = 1 +\frac{1}{0+}\,\frac{1}{1+}\,\frac{2}{2+}\,\frac{3}{3+}\,\frac{4}{4+}\,\cdots \,.$
A small adjustment enables us to write this in the elegant form
$\displaystyle e = 2 +\frac{2}{2+}\,\frac{3}{3+}\,\frac{4}{4+}\,\frac{5}{5+}\,\frac{6}{6+}\,\cdots \,. \ \ \ \ \ (5)$
The error of (5) is shown in the figure above (solid blue line). Convergence is more rapid than for the other two expansions.
For a more detailed discussion, and connection with The Ramanujan Machine, see Lynch (2020).
References
${\bullet}$ Euler, L., 1750: De fractionibus continuis observationes. Commentarii academiae scientiarum Petropolitanae, 11, 32–81. Reprinted in Opera Omnia, Series 1, 14, 291–349. Translation by Alexander Aycock: Observations on continued fractions. PDF .
${\bullet}$ Lynch, Peter 2020: Derangements and Continued Fractions for e. eprint on arXiv .
* * * * *
That’s Maths II: A Ton of Wonders
by Peter Lynch has just appeared.
Full details and links to suppliers at
http://logicpress.ie/2020-3/
* * * * *
|
{}
|
# Extremal Rays of a Cone
Can I use Sage to compute the subspace of a vector space that lies in non-negative real Euclidean space?
For example,
If I compute the nullspace of the matrix,
A =
[ [ 1 0 0 -1 0]
[-1 1 0 0 1]
[ 0 -1 1 0 0]
[ 0 0 -1 1 -1] ]
I get,
[1,1,1,1,0],[0,−1,−1,0,1]
as basis. Now I would like to restrict the solution space to only vectors with non-negative entries. I believe the two basis vectors I am looking for are,
[1,1,1,1,0],[1,0,0,1,1].
My program is written in Python, but I am not sure there is anything written for this sort of problem. Using the simplex method of linear programming (objective function set to zero) only spits out a single solution vector which seems to be the sum of the two I want. I ask here because I see some classes for cones and methods that gives the extremal rays, but it seems I already have to know the extremal rays to use that class.
edit retag close merge delete
Sort by » oldest newest most voted
Something like that
sage: P = Polyhedron(rays=[(1,0),(0,1)])
sage: P.intersection(Polyhedron(eqns=[[0,1,-1]]))
A 1-dimensional polyhedron in QQ^2 defined as the convex hull of 1 vertex and 1 ray
more
Eventually, such cones will be readily constructible. See the ticket: https://trac.sagemath.org/ticket/26623
( 2019-08-27 04:11:15 -0600 )edit
|
{}
|
## 6.3 The StructTS function
The StructTS function in the stats package in R will also fit the stochastic level model:
fit.sts <- StructTS(dat, type = "level")
fit.sts
Call:
StructTS(x = dat, type = "level")
Variances:
level epsilon
1469 15099
The estimates from StructTS() will be different (though similar) from MARSS() because StructTS() uses $$x_1 = y_1$$, that is the hidden state at $$t=1$$ is fixed to be the data at $$t=1$$. That is fine if you have a long data set, but would be disastrous for the short data sets typical in fisheries and ecology.
StructTS() is much, much faster for long time series. The example in ?StructTS is pretty much instantaneous with StructTS() but takes minutes with the EM algorithm that is the default in MARSS(). With the BFGS algorithm, it is much closer to StructTS():
trees <- window(treering, start = 0)
fitts <- StructTS(trees, type = "level")
fitbf <- MARSS(trees, mod.nile.2, method = "BFGS")
Note that mod.nile.2 specifies a univariate stochastic level model so we can use it just fine with other univariate data sets.
In addition, fitted(fit.sts) where fit.sts is a fit from StructTS() is very different than fit.marss$states from MARSS(). t <- 10 fitted(fit.sts)[t] [1] 1162.904 is the expected value of $$y_{t+1}$$ (in this case $$y_{11}$$ since we set $$t=10$$) given the data up to $$y_t$$ (in this case, up to $$y_{10}$$). It is called the one-step ahead prediction. We are not going to use the one-step ahead predictions unless we are forecasting or doing cross-validation. Typically, when we analyze fisheries and ecological data, we want to know the estimate of the state, the $$x_t$$, given ALL the data (sometimes we might want the estimate of the $$y_t$$ process given all the data). For example, we might need an estimate of the population size in year 1990 given a time series of counts from 1930 to 2015. We don’t want to use only the data up to 1989; we want to use all the information. fit.marss$states from MARSS() is the expected value of $$x_t$$ given all the data. In the MARSS package, this is denoted “xtT.”
fitted(kem.2, type = "xtT") %>%
subset(t == 11)
If you needed the one-step predictions from MARSS(), you can get that using “xtt1.”
fitted(kem.2, type = "xtt1") %>%
subset(t == 11)
This is the expected value of $$x_t$$ conditioned on $$y_1$$ to $$y_{t-1}$$.
Loading required package: lattice
Loading required package: survival
Loading required package: Formula
Attaching package: 'Hmisc'
The following object is masked from 'package:quantmod':
Lag
The following objects are masked from 'package:base':
format.pval, units
|
{}
|
# Release notes¶
orphan
## Release 0.30.0-dev (development release)¶
### New features since last release
• The sample_state function is added to devices/qubit that returns a series of samples based on a given state vector and a number of shots. (#3720)
• Added the needed functions and classes to simulate an ensemble of Rydberg atoms:
• A new internal RydbergHamiltonian class is added, which contains the Hamiltonian of an ensemble of Rydberg atoms.
• A new user-facing rydberg_interaction function is added, which returns a RydbergHamiltonian containing the Hamiltonian of the interaction of all the Rydberg atoms.
• A new user-facing drive function is added, which returns a ParametrizedHamiltonian (HardwareHamiltonian) containing the Hamiltonian of the interaction between a driving electro-magnetic field and a group of qubits. (#3749) (#3911) (#3930)
• Added Operation.__truediv__ dunder method to be able to divide operators. (#3749)
• The simulate function added to devices/qubit now supports measuring expectation values of large observables such as qml.Hamiltonian, qml.SparseHamiltonian, qml.Sum. (#3759)
### Improvements
• Improve the efficiency of tapering(), tapering_hf() and clifford(). (3942)
• Update Pauli arithmetic to more efficiently convert to a Hamiltonian. (#3939)
• Keras and Torch NN modules are now compatible with the new return type system. (#3913) (#3914)
• The adjoint differentiation method now supports more operations, and does no longer decompose some operations that may be differentiated directly. In addition, all new operations with a generator are now supported by the method. (#3874)
• The coefficients function and the visualize submodule of the qml.fourier module now allow assigning different degrees for different parameters of the input function. (#3005)
The arguments degree and filter_threshold to qml.fourier.coefficients previously were expected to be integers, and now can be a sequences of integers with one integer per function parameter (i.e. len(degree)==n_inputs), resulting in a returned array with shape (2*degrees[0]+1,..., 2*degrees[-1]+1). The functions in qml.fourier.visualize accordingly accept such arrays of coefficients.
• Operator now has a has_generator attribute that returns whether or not the operator has a generator defined. It is used in qml.operation.has_gen, improving its performance. (#3875)
• The custom JVP rules in PennyLane now also support non-scalar and mixed-shape tape parameters as well as multi-dimensional tape return types, like broadcasted qml.probs, for example. (#3766)
• The qchem.jordan_wigner function is extended to support more fermionic operator orders. (#3754) (#3751)
• AdaptiveOptimizer is updated to use non-default user-defined qnode arguments. (#3765)
• Adds logic to qml.devices.qubit.measure to compute the expectation values of Hamiltonian and Sum in a backpropagation compatible way. (#3862)
• Use TensorLike type in Operator dunder methods. (#3749)
• The apply_operation function added to devices/qubit now supports broadcasting. (#3852)
• qml.QubitStateVector.state_vector now supports broadcasting. (#3852)
• pennylane.devices.qubit.preprocess now allows circuits with non-commuting observables. (#3857)
• When using Jax-jit with gradient transforms the trainable parameters are correctly set (instead of every parameter to be set as trainable), and therefore the derivatives are computed more efficiently. (#3697)
• qml.SparseHamiltonian can now be applied to any wires in a circuit rather than being restricted to all wires in the circuit. (#3888)
• Added max_distance keyword argument to qml.pulse.rydberg_interaction to allow removal of negligible contributions from atoms beyond max_distancefrom each other. (#3889)
• 3 new decomposition algorithms are added for n-controlled operations with single-qubit target, and are selected automatically when they produce a better result. They can be accessed via ops.op_math.ctrl_decomp_bisect. (#3851)
• repr for MutualInfoMP now displays the distribution of the wires between the two subsystems. (#3898)
• Changed Operator.num_wires from an abstract value to AnyWires. (#3919)
• Do not run qml.transforms.sum_expand in Device.batch_transform if the device supports Sum observables. (#3915)
• CompositeOp now overrides Operator._check_batching, providing a significant performance improvement. Hamiltonian also overrides this method and does nothing, because it does not support batching. (#3915)
• If a Sum operator has a pre-computed Pauli representation, is_hermitian now checks that all coefficients are real, providing a significant performance improvement. (#3915)
• The type of n_electrons in qml.qchem.Molecule is set to int. (#3885)
• Added explicit errors to QutritDevice if classical_shadow or shadow_expval are measured. (#3934)
• DefaultQutrit supports the new return system. (#3934)
• QubitDevice now defines the private _get_diagonalizing_gates(circuit) method and uses it when executing circuits. This allows devices that inherit from QubitDevice to override and customize their definition of diagonalizing gates. (#3938)
### Breaking changes
• Both JIT interfaces are not compatible with Jax >0.4.3, we raise an error for those versions. (#3877)
• An operation that implements a custom generator method, but does not always return a valid generator, also has to implement a has_generator property that reflects in which scenarios a generator will be returned. (#3875)
• Trainable parameters for the Jax interface are the parameters that are JVPTracer, defined by setting argnums. Previously, all JAX tracers, including those used for JIT compilation, were interpreted to be trainable. (#3697)
• The keyword argument argnums is now used for gradient transform using Jax, instead of argnum. argnum is automatically converted to argnums when using JAX, and will no longer be supported in v0.31. (#3697) (#3847)
• Made qml.OrbitalRotation and consequently qml.GateFabric consistent with the interleaved Jordan-Wigner ordering. Previously, they were consistent with the sequential Jordan-Wigner ordering. (#3861)
• Some MeasurementProcess classes can now only be instantiated with arguments that they will actually use. For example, you can no longer create StateMP(qml.PauliX(0)) or PurityMP(eigvals=(-1,1), wires=Wires(0)). (#3898)
### Documentation
• A typo was corrected in the documentation for introduction to inspecting_circuits and chemistry. (#3844)
### Bug fixes
• Fixed a bug where calling Evolution.generator with coeff being a complex ArrayBox raised an error. (#3796)
• MeasurementProcess.hash now uses the hash property of the observable. The property now depends on all properties that affect the behaviour of the object, such as VnEntropyMP.log_base or the distribution of wires between the two subsystems in MutualInfoMP. (#3898)
• The enum measurements.Purity is added so that PurityMP.return_type is defined. str and repr for PurityMP are now defined. (#3898)
• Sum.hash and Prod.hash are slightly changed to work with non-numeric wire labels. sum_expand should now return correct results and not treat some products as the same operation. (#3898)
• Fixed bug where the coefficients where not ordered correctly when summing a ParametrizedHamiltonian with other operators. (#3749) (#3902)
• The metric tensor transform is fully compatible with Jax and therefore users can provide multiple parameters. (#3847)
• Registers math.ndim and math.shape for built-ins and autograd to accomodate Autoray 0.6.1. #3864
• Ensure that qml.data.load returns datasets in a stable and expected order. (#3856)
• The qml.equal function now handles comparisons of ParametrizedEvolution operators. (#3870)
• Made qml.OrbitalRotation and consequently qml.GateFabric consistent with the interleaved Jordan-Wigner ordering. (#3861)
• qml.devices.qubit.apply_operation catches the tf.errors.UnimplementedError that occurs when PauliZ or CNOT gates are applied to a large (>8 wires) tensorflow state. When that occurs, the logic falls back to the tensordot logic instead. (#3884)
• Fixed parameter broadcasting support with qml.counts in most cases, and introduced explicit errors otherwise. (#3876)
• An error is now raised if a QNode with Jax-jit in use returns counts while having trainable parameters (#3892)
• A correction is added to the reference values in test_dipole_of to account for small changes (~2e-8) in the computed dipole moment values, resulting from the new PySCF 2.2.0 release. (#3908)
• SampleMP.shape is now correct when sampling only occurs on a subset of the device wires. (#3921)
### Contributors
This release contains contributions from (in alphabetical order):
Komi Amiko Utkarsh Azad Lillian M. A. Frederiksen Soran Jahangiri Christina Lee Vincent Michaud-Rioux Albert Mitjans Romain Moyard Mudit Pandey Matthew Silverman Jay Soni David Wierichs
orphan
## Release 0.29.0 (current release)¶
### New features since last release
#### Pulse programming 🔊
• Support for creating pulse-based circuits that describe evolution under a time-dependent Hamiltonian has now been added, as well as the ability to execute and differentiate these pulse-based circuits on simulator. (#3586) (#3617) (#3645) (#3652) (#3665) (#3673) (#3706) (#3730)
A time-dependent Hamiltonian can be created using qml.pulse.ParametrizedHamiltonian, which holds information representing a linear combination of operators with parametrized coefficents and can be constructed as follows:
from jax import numpy as jnp
f1 = lambda p, t: p * jnp.sin(t) * (t - 1)
f2 = lambda p, t: p[0] * jnp.cos(p[1]* t ** 2)
XX = qml.PauliX(0) @ qml.PauliX(1)
YY = qml.PauliY(0) @ qml.PauliY(1)
ZZ = qml.PauliZ(0) @ qml.PauliZ(1)
H = 2 * ZZ + f1 * XX + f2 * YY
>>> H
ParametrizedHamiltonian: terms=3
>>> p1 = jnp.array(1.2)
>>> p2 = jnp.array([2.3, 3.4])
>>> H((p1, p2), t=0.5)
(2*(PauliZ(wires=[0]) @ PauliZ(wires=[1]))) + ((-0.2876553231625218*(PauliX(wires=[0]) @ PauliX(wires=[1]))) + (1.517961235535459*(PauliY(wires=[0]) @ PauliY(wires=[1]))))
The time-dependent Hamiltonian can be used within a circuit with qml.evolve:
def pulse_circuit(params, time):
qml.evolve(H)(params, time)
return qml.expval(qml.PauliX(0) @ qml.PauliY(1))
Pulse-based circuits can be executed and differentiated on the default.qubit.jax simulator using JAX as an interface:
>>> dev = qml.device("default.qubit.jax", wires=2)
>>> qnode = qml.QNode(pulse_circuit, dev, interface="jax")
>>> params = (p1, p2)
>>> qnode(params, time=0.5)
Array(0.72153819, dtype=float64)
(Array(-0.11324919, dtype=float64),
Array([-0.64399616, 0.06326374], dtype=float64))
Check out the qml.pulse documentation page for more details!
#### Special unitary operation 🌞
• A new operation qml.SpecialUnitary has been added, providing access to an arbitrary unitary gate via a parametrization in the Pauli basis. (#3650) (#3651) (#3674)
qml.SpecialUnitary creates a unitary that exponentiates a linear combination of all possible Pauli words in lexicographical order — except for the identity operator — for num_wires wires, of which there are 4**num_wires - 1. As its first argument, qml.SpecialUnitary takes a list of the 4**num_wires - 1 parameters that are the coefficients of the linear combination.
To see all possible Pauli words for num_wires wires, you can use the qml.ops.qubit.special_unitary.pauli_basis_strings function:
>>> qml.ops.qubit.special_unitary.pauli_basis_strings(1) # 4**1-1 = 3 Pauli words
['X', 'Y', 'Z']
>>> qml.ops.qubit.special_unitary.pauli_basis_strings(2) # 4**2-1 = 15 Pauli words
['IX', 'IY', 'IZ', 'XI', 'XX', 'XY', 'XZ', 'YI', 'YX', 'YY', 'YZ', 'ZI', 'ZX', 'ZY', 'ZZ']
To use qml.SpecialUnitary, for example, on a single qubit, we may define
>>> thetas = np.array([0.2, 0.1, -0.5])
>>> U = qml.SpecialUnitary(thetas, 0)
>>> qml.matrix(U)
array([[ 0.8537127 -0.47537233j, 0.09507447+0.19014893j],
[-0.09507447+0.19014893j, 0.8537127 +0.47537233j]])
A single non-zero entry in the parameters will create a Pauli rotation:
>>> x = 0.412
>>> theta = x * np.array([1, 0, 0]) # The first entry belongs to the Pauli word "X"
>>> su = qml.SpecialUnitary(theta, wires=0)
>>> rx = qml.RX(-2 * x, 0) # RX introduces a prefactor -0.5 that has to be compensated
>>> qml.math.allclose(qml.matrix(su), qml.matrix(rx))
True
This operation can be differentiated with hardware-compatible methods like parameter shifts and it supports parameter broadcasting/batching, but not both at the same time. Learn more by visiting the qml.SpecialUnitary documentation.
#### Always differentiable 📈
• The Hadamard test gradient transform is now available via qml.gradients.hadamard_grad. This transform is also available as a differentiation method within QNodes. (#3625) (#3736)
qml.gradients.hadamard_grad is a hardware-compatible transform that calculates the gradient of a quantum circuit using the Hadamard test. Note that the device requires an auxiliary wire to calculate the gradient.
>>> dev = qml.device("default.qubit", wires=2)
>>> @qml.qnode(dev)
... def circuit(params):
... qml.RX(params[0], wires=0)
... qml.RY(params[1], wires=0)
... qml.RX(params[2], wires=0)
... return qml.expval(qml.PauliZ(0))
>>> params = np.array([0.1, 0.2, 0.3], requires_grad=True)
This transform can be registered directly as the quantum gradient transform to use during autodifferentiation:
>>> dev = qml.device("default.qubit", wires=2)
... def circuit(params):
... qml.RX(params[0], wires=0)
... qml.RY(params[1], wires=0)
... qml.RX(params[2], wires=0)
... return qml.expval(qml.PauliZ(0))
>>> params = jax.numpy.array([0.1, 0.2, 0.3])
>>> jax.jacobian(circuit)(params)
Array([-0.3875172 , -0.18884787, -0.38355705], dtype=float32)
• The gradient transform qml.gradients.spsa_grad is now registered as a differentiation method for QNodes. (#3440)
The SPSA gradient transform can now be used implicitly by marking a QNode as differentiable with SPSA. It can be selected via
>>> dev = qml.device("default.qubit", wires=1)
>>> @qml.qnode(dev, interface="jax", diff_method="spsa", h=0.05, num_directions=20)
... def circuit(x):
... qml.RX(x, 0)
... return qml.expval(qml.PauliZ(0))
>>> jax.jacobian(circuit)(jax.numpy.array(0.5))
Array(-0.4792258, dtype=float32, weak_type=True)
The argument num_directions determines how many directions of simultaneous perturbation are used and therefore the number of circuit evaluations, up to a prefactor. See the SPSA gradient transform documentation for details. Note: The full SPSA optimization method is already available as qml.SPSAOptimizer.
• The default interface is now auto. There is no need to specify the interface anymore; it is automatically determined by checking your QNode parameters. (#3677) (#3752) (#3829)
import jax
import jax.numpy as jnp
qml.enable_return()
a = jnp.array(0.1)
b = jnp.array(0.2)
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit(a, b):
qml.RY(a, wires=0)
qml.RX(b, wires=1)
qml.CNOT(wires=[0, 1])
return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliY(1))
>>> circuit(a, b)
(Array(0.9950042, dtype=float32), Array(-0.19767681, dtype=float32))
>>> jac = jax.jacobian(circuit)(a, b)
>>> jac
(Array(-0.09983341, dtype=float32, weak_type=True), Array(0.01983384, dtype=float32, weak_type=True))
• The JAX-JIT interface now supports higher-order gradient computation with the new return types system. (#3498)
Here is an example of using JAX-JIT to compute the Hessian of a circuit:
import pennylane as qml
import jax
from jax import numpy as jnp
jax.config.update("jax_enable_x64", True)
qml.enable_return()
dev = qml.device("default.qubit", wires=2)
@jax.jit
@qml.qnode(dev, interface="jax-jit", diff_method="parameter-shift", max_diff=2)
def circuit(a, b):
qml.RY(a, wires=0)
qml.RX(b, wires=1)
return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))
a, b = jnp.array(1.0), jnp.array(2.0)
>>> jax.hessian(circuit, argnums=[0, 1])(a, b)
(((Array(-0.54030231, dtype=float64, weak_type=True),
Array(0., dtype=float64, weak_type=True)),
(Array(-1.76002563e-17, dtype=float64, weak_type=True),
Array(0., dtype=float64, weak_type=True))),
((Array(0., dtype=float64, weak_type=True),
Array(-1.00700085e-17, dtype=float64, weak_type=True)),
(Array(0., dtype=float64, weak_type=True),
Array(0.41614684, dtype=float64, weak_type=True))))
• The qchem workflow has been modified to support both Autograd and JAX frameworks. (#3458) (#3462) (#3495)
The JAX interface is automatically used when the differentiable parameters are JAX objects. Here is an example for computing the Hartree-Fock energy gradients with respect to the atomic coordinates.
import pennylane as qml
from pennylane import numpy as np
import jax
symbols = ["H", "H"]
geometry = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
mol = qml.qchem.Molecule(symbols, geometry)
args = [jax.numpy.array(mol.coordinates)]
>>> jax.grad(qml.qchem.hf_energy(mol))(*args)
Array([[ 0. , 0. , 0.3650435],
[ 0. , 0. , -0.3650435]], dtype=float64)
• The kernel matrix utility functions in qml.kernels are now autodifferentiation-compatible. In addition, they support batching, for example for quantum kernel execution with shot vectors. (#3742)
This allows for the following:
dev = qml.device('default.qubit', wires=2, shots=(100, 100))
@qml.qnode(dev)
def circuit(x1, x2):
qml.templates.AngleEmbedding(x1, wires=dev.wires)
return qml.probs(wires=dev.wires)
kernel = lambda x1, x2: circuit(x1, x2)
We can then compute the kernel matrix on a set of 4 (random) feature vectors X but using two sets of 100 shots each via
>>> X = np.random.random((4, 2))
>>> qml.kernels.square_kernel_matrix(X, kernel)[:, 0]
tensor([[[1. , 0.86, 0.88, 0.92],
[0.86, 1. , 0.75, 0.97],
[0.88, 0.75, 1. , 0.91],
[0.92, 0.97, 0.91, 1. ]],
[[1. , 0.93, 0.91, 0.92],
[0.93, 1. , 0.8 , 1. ],
[0.91, 0.8 , 1. , 0.91],
[0.92, 1. , 0.91, 1. ]]], requires_grad=True)
Note that we have extracted the first probability vector entry for each 100-shot evaluation.
#### Smartly decompose Hamiltonian evolution 💯
• Hamiltonian evolution using qml.evolve or qml.exp can now be decomposed into operations. (#3691) (#3777)
If the time-evolved Hamiltonian is equivalent to another PennyLane operation, then that operation is returned as the decomposition:
>>> exp_op = qml.evolve(qml.PauliX(0) @ qml.PauliX(1))
>>> exp_op.decomposition()
[IsingXX((2+0j), wires=[0, 1])]
If the Hamiltonian is a Pauli word, then the decomposition is provided as a qml.PauliRot operation:
>>> qml.evolve(qml.PauliZ(0) @ qml.PauliX(1)).decomposition()
[PauliRot((2+0j), ZX, wires=[0, 1])]
Otherwise, the Hamiltonian is a linear combination of operators and the Suzuki-Trotter decomposition is used:
>>> qml.evolve(qml.sum(qml.PauliX(0), qml.PauliY(0), qml.PauliZ(0)), num_steps=2).decomposition()
[RX((1+0j), wires=[0]),
RY((1+0j), wires=[0]),
RZ((1+0j), wires=[0]),
RX((1+0j), wires=[0]),
RY((1+0j), wires=[0]),
RZ((1+0j), wires=[0])]
#### Tools for quantum chemistry and other applications 🛠️
• A new method called qml.qchem.givens_decomposition has been added, which decomposes a unitary into a sequence of Givens rotation gates with phase shifts and a diagonal phase matrix. (#3573)
unitary = np.array([[ 0.73678+0.27511j, -0.5095 +0.10704j, -0.06847+0.32515j],
[-0.21271+0.34938j, -0.38853+0.36497j, 0.61467-0.41317j],
[ 0.41356-0.20765j, -0.00651-0.66689j, 0.32839-0.48293j]])
phase_mat, ordered_rotations = qml.qchem.givens_decomposition(unitary)
>>> phase_mat
tensor([-0.20604358+0.9785369j , -0.82993272+0.55786114j,
>>> ordered_rotations
[(tensor([[-0.65087861-0.63937521j, -0.40933651-0.j ],
(0, 1)),
(tensor([[ 0.47970366-0.33308926j, -0.8117487 -0.j ],
[ 0.66677093-0.46298215j, 0.5840069 -0.j ]], requires_grad=True),
(1, 2)),
(tensor([[ 0.36147547+0.73779454j, -0.57008306-0.j ],
[ 0.2508207 +0.51194108j, 0.82158706-0.j ]], requires_grad=True),
(0, 1))]
• A new template called qml.BasisRotation has been added, which performs a basis transformation defined by a set of fermionic ladder operators. (#3573)
import pennylane as qml
from pennylane import numpy as np
V = np.array([[ 0.53672126+0.j , -0.1126064 -2.41479668j],
[-0.1126064 +2.41479668j, 1.48694623+0.j ]])
eigen_vals, eigen_vecs = np.linalg.eigh(V)
umat = eigen_vecs.T
wires = range(len(umat))
def circuit():
for idx, eigenval in enumerate(eigen_vals):
qml.RZ(eigenval, wires=[idx])
qml.BasisRotation(wires=wires, unitary_matrix=umat)
>>> circ_unitary = qml.matrix(circuit)()
>>> np.round(circ_unitary/circ_unitary[0][0], 3)
tensor([[ 1. -0.j , -0. +0.j , -0. +0.j , -0. +0.j ],
[-0. +0.j , -0.516-0.596j, -0.302-0.536j, -0. +0.j ],
[-0. +0.j , 0.35 +0.506j, -0.311-0.724j, -0. +0.j ],
[-0. +0.j , -0. +0.j , -0. +0.j , -0.438+0.899j]], requires_grad=True)
• A new function called qml.qchem.load_basisset has been added to extract qml.qchem basis set data from the Basis Set Exchange library. (#3363)
• A new function called qml.math.max_entropy has been added to compute the maximum entropy of a quantum state. (#3594)
• A new template called qml.TwoLocalSwapNetwork has been added that implements a canonical 2-complete linear (2-CCL) swap network described in arXiv:1905.05118. (#3447)
dev = qml.device('default.qubit', wires=5)
weights = np.random.random(size=qml.templates.TwoLocalSwapNetwork.shape(len(dev.wires)))
acquaintances = lambda index, wires, param: (qml.CRY(param, wires=index)
if np.abs(wires[0]-wires[1]) else qml.CRZ(param, wires=index))
@qml.qnode(dev)
def swap_network_circuit():
qml.templates.TwoLocalSwapNetwork(dev.wires, acquaintances, weights, fermionic=False)
return qml.state()
>>> print(weights)
tensor([0.20308242, 0.91906199, 0.67988804, 0.81290256, 0.08708985,
0.81860084, 0.34448344, 0.05655892, 0.61781612, 0.51829044], requires_grad=True)
>>> print(qml.draw(swap_network_circuit, expansion_strategy = 'device')())
0: ─╭●────────╭SWAP─────────────────╭●────────╭SWAP─────────────────╭●────────╭SWAP─┤ State
1: ─╰RY(0.20)─╰SWAP─╭●────────╭SWAP─╰RY(0.09)─╰SWAP─╭●────────╭SWAP─╰RY(0.62)─╰SWAP─┤ State
2: ─╭●────────╭SWAP─╰RY(0.68)─╰SWAP─╭●────────╭SWAP─╰RY(0.34)─╰SWAP─╭●────────╭SWAP─┤ State
3: ─╰RY(0.92)─╰SWAP─╭●────────╭SWAP─╰RY(0.82)─╰SWAP─╭●────────╭SWAP─╰RY(0.52)─╰SWAP─┤ State
4: ─────────────────╰RY(0.81)─╰SWAP─────────────────╰RY(0.06)─╰SWAP─────────────────┤ State
### Improvements 🛠
#### Pulse programming
• A new function called qml.pulse.pwc has been added as a convenience function for defining a qml.pulse.ParametrizedHamiltonian. This function can be used to create a callable coefficient by setting the timespan over which the function should be non-zero. The resulting callable can be passed an array of parameters and a time. (#3645)
>>> timespan = (2, 4)
>>> f = qml.pulse.pwc(timespan)
>>> f * qml.PauliX(0)
ParametrizedHamiltonian: terms=1
The params array will be used as bin values evenly distributed over the timespan, and the parameter t will determine which of the bins is returned.
>>> f(params=[1.2, 2.3, 3.4, 4.5], t=3.9)
DeviceArray(4.5, dtype=float32)
>>> f(params=[1.2, 2.3, 3.4, 4.5], t=6) # zero outside the range (2, 4)
DeviceArray(0., dtype=float32)
• A new function calledqml.pulse.pwc_from_function has been added as a decorator for defining a qml.pulse.ParametrizedHamiltonian. This function can be used to decorate a function and create a piecewise constant approximation of it. (#3645)
>>> @qml.pulse.pwc_from_function((2, 4), num_bins=10)
... def f1(p, t):
... return p * t
The resulting function approximates the same of p**2 * t on the interval t=(2, 4) in 10 bins, and returns zero outside the interval.
# t=2 and t=2.1 are within the same bin
>>> f1(3, 2), f1(3, 2.1)
(DeviceArray(6., dtype=float32), DeviceArray(6., dtype=float32))
# next bin
>>> f1(3, 2.2)
DeviceArray(6.6666665, dtype=float32)
# outside the interval t=(2, 4)
>>> f1(3, 5)
DeviceArray(0., dtype=float32)
• Add ParametrizedHamiltonianPytree class, which is a pytree jax object representing a parametrized Hamiltonian, where the matrix computation is delayed to improve performance. (#3779)
#### Operations and batching
• The function qml.dot has been updated to compute the dot product between a vector and a list of operators. (#3586)
>>> coeffs = np.array([1.1, 2.2])
>>> ops = [qml.PauliX(0), qml.PauliY(0)]
>>> qml.dot(coeffs, ops)
(1.1*(PauliX(wires=[0]))) + (2.2*(PauliY(wires=[0])))
>>> qml.dot(coeffs, ops, pauli=True)
1.1 * X(0) + 2.2 * Y(0)
• qml.evolve returns the evolution of an Operator or a ParametrizedHamiltonian. (#3617) (#3706)
• qml.ControlledQubitUnitary now inherits from qml.ops.op_math.ControlledOp, which defines decomposition, expand, and sparse_matrix rather than raising an error. (#3450)
• Parameter broadcasting support has been added for the qml.ops.op_math.Controlled class if the base operator supports broadcasting. (#3450)
• The qml.generator function now checks if the generator is Hermitian, rather than whether it is a subclass of Observable. This allows it to return valid generators from SymbolicOp and CompositeOp classes. (#3485)
• The qml.equal function has been extended to compare Prod and Sum operators. (#3516)
• qml.purity has been added as a measurement process for purity (#3551)
• In-place inversion has been removed for qutrit operations in preparation for the removal of in-place inversion. (#3566)
• The qml.utils.sparse_hamiltonian function has been moved to thee qml.Hamiltonian.sparse_matrix method. (#3585)
• The qml.pauli.PauliSentence.operation() method has been improved to avoid instantiating an SProd operator when the coefficient is equal to 1. (#3595)
• Batching is now allowed in all SymbolicOp operators, which include Exp, Pow and SProd. (#3597)
• The Sum and Prod operations now have broadcasted operands. (#3611)
• The XYX single-qubit unitary decomposition has been implemented. (#3628)
• All dunder methods now return NotImplemented, allowing the right dunder method (e.g. __radd__) of the other class to be called. (#3631)
• The qml.GellMann operators now include their index when displayed. (#3641)
• qml.ops.ctrl_decomp_zyz has been added to compute the decomposition of a controlled single-qubit operation given a single-qubit operation and the control wires. (#3681)
• qml.pauli.is_pauli_word now supports Prod and SProd operators, and it returns False when a Hamiltonian contains more than one term. (#3692)
• qml.pauli.pauli_word_to_string now supports Prod, SProd and Hamiltonian operators. (#3692)
• qml.ops.op_math.Controlled can now decompose single qubit target operations more effectively using the ZYZ decomposition. (#3726)
• The qml.qchem.Molecule class raises an error when the molecule has an odd number of electrons or when the spin multiplicity is not 1. (#3748)
• qml.qchem.basis_rotation now accounts for spin, allowing it to perform Basis Rotation Groupings for molecular hamiltonians. (#3714) (#3774)
• The gradient transforms work for the new return type system with non-trivial classical jacobians. (#3776)
• The default.mixed device has received a performance improvement for multi-qubit operations. This also allows to apply channels that act on more than seven qubits, which was not possible before. (#3584)
• qml.dot now groups coefficients together. (#3691)
>>> qml.dot(coeffs=[2, 2, 2], ops=[qml.PauliX(0), qml.PauliY(1), qml.PauliZ(2)])
2*(PauliX(wires=[0]) + PauliY(wires=[1]) + PauliZ(wires=[2]))
• qml.generator now supports operators with Sum and Prod generators. (#3691)
• The Sum._sort method now takes into account the name of the operator when sorting. (#3691)
• A new tape transform called qml.transforms.sign_expand has been added. It implements the optimal decomposition of a fast-forwardable Hamiltonian that minimizes the variance of its estimator in the Single-Qubit-Measurement from arXiv:2207.09479. (#2852)
#### Differentiability and interfaces
• The qml.math module now also contains a submodule for fast Fourier transforms, qml.math.fft. (#1440)
The submodule in particular provides differentiable versions of the following functions, available in all common interfaces for PennyLane
Note that the output of the derivative of these functions may differ when used with complex-valued inputs, due to different conventions on complex-valued derivatives.
• Validation has been added on gradient keyword arguments when initializing a QNode — if unexpected keyword arguments are passed, a UserWarning is raised. A list of the current expected gradient function keyword arguments can be accessed via qml.gradients.SUPPORTED_GRADIENT_KWARGS. (#3526)
• The numpy version has been constrained to <1.24. (#3563)
• Support for two-qubit unitary decomposition with JAX-JIT has been added. (#3569)
• qml.math.size now supports PyTorch tensors. (#3606)
• Most quantum channels are now fully differentiable on all interfaces. (#3612)
• qml.math.matmul now supports PyTorch and Autograd tensors. (#3613)
• Add qml.math.detach, which detaches a tensor from its trace. This stops automatic gradient computations. (#3674)
• Add typing.TensorLike type. (#3675)
• qml.QuantumMonteCarlo template is now JAX-JIT compatible when passing jax.numpy arrays to the template. (#3734)
• DefaultQubitJax now supports evolving the state vector when executing qml.pulse.ParametrizedEvolution gates. (#3743)
• SProd.sparse_matrix now supports interface-specific variables with a single element as the scalar. (#3770)
• Added argnum argument to metric_tensor. By passing a sequence of indices referring to trainable tape parameters, the metric tensor is only computed with respect to these parameters. This reduces the number of tapes that have to be run. (#3587)
• The parameter-shift derivative of variances saves a redundant evaluation of the corresponding unshifted expectation value tape, if possible (#3744)
#### Next generation device API
• The apply_operation single-dispatch function is added to devices/qubit that applies an operation to a state and returns a new state. (#3637)
• The preprocess function is added to devices/qubit that validates, expands, and transforms a batch of QuantumTape objects to abstract preprocessing details away from the device. (#3708)
• The create_initial_state function is added to devices/qubit that returns an initial state for an execution. (#3683)
• The simulate function is added to devices/qubit that turns a single quantum tape into a measurement result. The function only supports state based measurements with either no observables or observables with diagonalizing gates. It supports simultaneous measurement of non-commuting observables. (#3700)
• The ExecutionConfig data class has been added. (#3649)
• The StatePrep class has been added as an interface that state-prep operators must implement. (#3654)
• qml.QubitStateVector now implements the StatePrep interface. (#3685)
• qml.BasisState now implements the StatePrep interface. (#3693)
• New Abstract Base Class for devices Device is added to the devices.experimental submodule. This interface is still in experimental mode and not integrated with the rest of pennylane. (#3602)
#### Other improvements
• Writing Hamiltonians to a file using the qml.data module has been improved by employing a condensed writing format. (#3592)
• Lazy-loading in the qml.data.Dataset.read() method is more universally supported. (#3605)
• The qchem.Molecule class raises an error when the molecule has an odd number of electrons or when the spin multiplicity is not 1. (#3748)
• qml.draw and qml.draw_mpl have been updated to draw any quantum function, which allows for visualizing only part of a complete circuit/QNode. (#3760)
• The string representation of a Measurement Process now includes the _eigvals property if it is set. (#3820)
### Breaking changes 💔
• The argument mode in execution has been replaced by the boolean grad_on_execution in the new execution pipeline. (#3723)
• qml.VQECost has been removed. (#3735)
• The default interface is now auto. (#3677) (#3752) (#3829)
The interface is determined during the QNode call instead of the initialization. It means that the gradient_fn and gradient_kwargs are only defined on the QNode at the beginning of the call. Moreover, without specifying the interface it is not possible to guarantee that the device will not be changed during the call if you are using backprop (such as default.qubit changing to default.qubit.jax) whereas before it was happening at initialization.
• The tape method get_operation can also now return the operation index in the tape, and it can be activated by setting the return_op_index to True: get_operation(idx, return_op_index=True). It will become the default in version 0.30. (#3667)
• Operation.inv() and the Operation.inverse setter have been removed. Please use qml.adjoint or qml.pow instead. (#3618)
>>> qml.PauliX(0).inv()
use
>>> qml.adjoint(qml.PauliX(0))
• The Operation.inverse property has been removed completely. (#3725)
• The target wires of qml.ControlledQubitUnitary are no longer available via op.hyperparameters["u_wires"]. Instead, they can be accesses via op.base.wires or op.target_wires. (#3450)
• The tape constructed by a QNode is no longer queued to surrounding contexts. (#3509)
• Nested operators like Tensor, Hamiltonian, and Adjoint now remove their owned operators from the queue instead of updating their metadata to have an "owner". (#3282)
• qml.qchem.scf, qml.RandomLayers.compute_decomposition, and qml.Wires.select_random now use local random number generators instead of global random number generators. This may lead to slightly different random numbers and an independence of the results from the global random number generation state. Please provide a seed to each individual function instead if you want controllable results. (#3624)
• qml.transforms.measurement_grouping has been removed. Users should use qml.transforms.hamiltonian_expand instead. (#3701)
• op.simplify() for operators which are linear combinations of Pauli words will use a builtin Pauli representation to more efficiently compute the simplification of the operator. (#3481)
• All Operator‘s input parameters that are lists are cast into vanilla numpy arrays. (#3659)
• QubitDevice.expval no longer permutes an observable’s wire order before passing it to QubitDevice.probability. The associated downstream changes for default.qubit have been made, but this may still affect expectations for other devices that inherit from QubitDevice and override probability (or any other helper functions that take a wire order such as marginal_prob, estimate_probability or analytic_probability). (#3753)
### Deprecations 👋
• qml.utils.sparse_hamiltonian function has been deprecated, and usage will now raise a warning. Instead, one should use the qml.Hamiltonian.sparse_matrix method. (#3585)
• The collections module has been deprecated. (#3686) (#3687)
• qml.op_sum has been deprecated. Users should use qml.sum instead. (#3686)
• The use of Evolution directly has been deprecated. Users should use qml.evolve instead. This new function changes the sign of the given parameter. (#3706)
• Use of qml.dot with a QNodeCollection has been deprecated. (#3586)
### Documentation 📝
• Revise note on GPU support in the circuit introduction. (#3836)
• Make warning about vanilla version of NumPy for differentiation more prominent. (#3838)
• The documentation for qml.operation has been improved. (#3664)
• The code example in qml.SparseHamiltonian has been updated with the correct wire range. (#3643)
• A hyperlink has been added in the text for a URL in the qml.qchem.mol_data docstring. (#3644)
• A typo was corrected in the documentation for qml.math.vn_entropy. (#3740)
### Bug fixes 🐛
• Fixed a bug where measuring qml.probs in the computational basis with non-commuting measurements returned incorrect results. Now an error is raised. (#3811)
• Fixed a bug where measuring qml.probs in the computational basis with non-commuting measurements returned incorrect results. Now an error is raised. (#3811)
• Fixed a bug in the drawer where nested controlled operations would output the label of the operation being controlled, rather than the control values. (#3745)
• Fixed a bug in qml.transforms.metric_tensor where prefactors of operation generators were taken into account multiple times, leading to wrong outputs for non-standard operations. (#3579)
• Local random number generators are now used where possible to avoid mutating the global random state. (#3624)
• The networkx version change being broken has been fixed by selectively skipping a qcut TensorFlow-JIT test. (#3609) (#3619)
• Fixed the wires for the Y decomposition in the ZX calculus transform. (#3598)
• qml.pauli.PauliWord is now pickle-able. (#3588)
• Child classes of QuantumScript now return their own type when using SomeChildClass.from_queue. (#3501)
• A typo has been fixed in the calculation and error messages in operation.py (#3536)
• qml.data.Dataset.write() now ensures that any lazy-loaded values are loaded before they are written to a file. (#3605)
• Tensor._batch_size is now set to None during initialization, copying and map_wires. (#3642) (#3661)
• Tensor.has_matrix is now set to True. (#3647)
• Fixed typo in the example of qml.IsingZZ gate decomposition. (#3676)
• Fixed a bug that made tapes/qnodes using qml.Snapshot incompatible with qml.drawer.tape_mpl. (#3704)
• Tensor._pauli_rep is set to None during initialization and Tensor.data has been added to its setter. (#3722)
• qml.math.ndim has been redirected to jnp.ndim when using it on a jax tensor. (#3730)
• Implementations of marginal_prob (and subsequently, qml.probs) now return probabilities with the expected wire order. (#3753)
This bug affected most probabilistic measurement processes on devices that inherit from QubitDevice when the measured wires are out of order with respect to the device wires and 3 or more wires are measured. The assumption was that marginal probabilities would be computed with the device’s state and wire order, then re-ordered according to the measurement process wire order. Instead, the re-ordering went in the inverse direction (that is, from measurement process wire order to device wire order). This is now fixed. Note that this only occurred for 3 or more measured wires because this mapping is identical otherwise. More details and discussion of this bug can be found in the original bug report.
• Empty iterables can no longer be returned from QNodes. (#3769)
• The keyword arguments for qml.equal now are used when comparing the observables of a Measurement Process. The eigvals of measurements are only requested if both observables are None, saving computational effort. (#3820)
• Only converts input to qml.Hermitian to a numpy array if the input is a list. (#3820)
### Contributors ✍
This release contains contributions from (in alphabetical order):
Gian-Luca Anselmetti, Guillermo Alonso-Linaje, Juan Miguel Arrazola, Ikko Ashimine, Utkarsh Azad, Miriam Beddig, Cristian Boghiu, Thomas Bromley, Astral Cai, Isaac De Vlugt, Olivia Di Matteo, Lillian M. A. Frederiksen, Soran Jahangiri, Korbinian Kottmann, Christina Lee, Albert Mitjans Coma, Romain Moyard, Mudit Pandey, Borja Requena, Matthew Silverman, Jay Soni, Antal Száva, Frederik Wilde, David Wierichs, Moritz Willmann.
orphan
## Release 0.28.0¶
### New features since last release
#### Custom measurement processes 📐
• Custom measurements can now be facilitated with the addition of the qml.measurements module. (#3286) (#3343) (#3288) (#3312) (#3287) (#3292) (#3287) (#3326) (#3327) (#3388) (#3439) (#3466)
Within qml.measurements are new subclasses that allow for the possibility to create custom measurements:
• SampleMeasurement: represents a sample-based measurement
• StateMeasurement: represents a state-based measurement
• MeasurementTransform: represents a measurement process that requires the application of a batch transform
Creating a custom measurement involves making a class that inherits from one of the classes above. An example is given below. Here, the measurement computes the number of samples obtained of a given state:
from pennylane.measurements import SampleMeasurement
class CountState(SampleMeasurement):
def __init__(self, state: str):
self.state = state # string identifying the state, e.g. "0101"
wires = list(range(len(state)))
super().__init__(wires=wires)
def process_samples(self, samples, wire_order, shot_range, bin_size):
counts_mp = qml.counts(wires=self._wires)
counts = counts_mp.process_samples(samples, wire_order, shot_range, bin_size)
return counts.get(self.state, 0)
def __copy__(self):
return CountState(state=self.state)
We can now execute the new measurement in a QNode as follows.
dev = qml.device("default.qubit", wires=1, shots=10000)
@qml.qnode(dev)
def circuit(x):
qml.RX(x, wires=0)
return CountState(state="1")
>>> circuit(1.23)
Differentiability is also supported for this new measurement process:
>>> x = qml.numpy.array(1.23, requires_grad=True)
4715.000000000001
For more information about these new features, see the documentation for qml.measurements <https://docs.pennylane.ai/en/stable/code/qml_measurements.html>_.
#### ZX Calculus 🧮
• QNodes can now be converted into ZX diagrams via the PyZX framework. (#3446)
ZX diagrams are the medium for which we can envision a quantum circuit as a graph in the ZX-calculus language, showing properties of quantum protocols in a visually compact and logically complete fashion.
QNodes decorated with @qml.transforms.to_zx will return a PyZX graph that represents the computation in the ZX-calculus language.
dev = qml.device("default.qubit", wires=2)
@qml.transforms.to_zx
@qml.qnode(device=dev)
def circuit(p):
qml.RZ(p[0], wires=1),
qml.RZ(p[1], wires=1),
qml.RX(p[2], wires=0),
qml.PauliZ(wires=0),
qml.RZ(p[3], wires=1),
qml.PauliX(wires=1),
qml.CNOT(wires=[0, 1]),
qml.CNOT(wires=[1, 0]),
qml.SWAP(wires=[0, 1]),
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
>>> params = [5 / 4 * np.pi, 3 / 4 * np.pi, 0.1, 0.3]
>>> circuit(params)
Graph(20 vertices, 23 edges)
Information about PyZX graphs can be found in the PyZX Graphs API.
#### QChem databases and basis sets ⚛️
• The symbols and geometry of a compound from the PubChem database can now be accessed via qchem.mol_data(). (#3289) (#3378)
>>> import pennylane as qml
>>> from pennylane.qchem import mol_data
>>> mol_data("BeH2")
(['Be', 'H', 'H'],
tensor([[ 4.79404621, 0.29290755, 0. ],
[ 3.77945225, -0.29290755, 0. ],
[ 5.80882913, -0.29290755, 0. ]], requires_grad=True))
>>> mol_data(223, "CID")
(['N', 'H', 'H', 'H', 'H'],
tensor([[ 0. , 0. , 0. ],
[ 1.82264085, 0.52836742, 0.40402345],
[ 0.01417295, -1.67429735, -0.98038991],
[-0.98927163, -0.22714508, 1.65369933],
• Perform quantum chemistry calculations with two new basis sets: 6-311g and CC-PVDZ. (#3279)
>>> symbols = ["H", "He"]
>>> geometry = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]], requires_grad=False)
>>> charge = 1
>>> basis_names = ["6-311G", "CC-PVDZ"]
>>> for basis_name in basis_names:
... mol = qml.qchem.Molecule(symbols, geometry, charge=charge, basis_name=basis_name)
... print(qml.qchem.hf_energy(mol)())
[-2.84429531]
[-2.84061284]
#### A bunch of new operators 👀
• The controlled CZ gate and controlled Hadamard gate are now available via qml.CCZ and qml.CH, respectively. (#3408)
>>> ccz = qml.CCZ(wires=[0, 1, 2])
>>> qml.matrix(ccz)
[[ 1 0 0 0 0 0 0 0]
[ 0 1 0 0 0 0 0 0]
[ 0 0 1 0 0 0 0 0]
[ 0 0 0 1 0 0 0 0]
[ 0 0 0 0 1 0 0 0]
[ 0 0 0 0 0 1 0 0]
[ 0 0 0 0 0 0 1 0]
[ 0 0 0 0 0 0 0 -1]]
>>> ch = qml.CH(wires=[0, 1])
>>> qml.matrix(ch)
[[ 1. 0. 0. 0. ]
[ 0. 1. 0. 0. ]
[ 0. 0. 0.70710678 0.70710678]
[ 0. 0. 0.70710678 -0.70710678]]
• Three new parametric operators, qml.CPhaseShift00, qml.CPhaseShift01, and qml.CPhaseShift10, are now available. Each of these operators performs a phase shift akin to qml.ControlledPhaseShift but on different positions of the state vector. (#2715)
>>> dev = qml.device("default.qubit", wires=2)
>>> @qml.qnode(dev)
>>> def circuit():
... qml.PauliX(wires=1)
... qml.CPhaseShift01(phi=1.23, wires=[0,1])
... return qml.state()
...
>>> circuit()
tensor([0. +0.j , 0.33423773+0.9424888j,
1. +0.j , 0. +0.j ], requires_grad=True)
• A new gate operation called qml.FermionicSWAP has been added. This implements the exchange of spin orbitals representing fermionic-modes while maintaining proper anti-symmetrization. (#3380)
dev = qml.device('default.qubit', wires=2)
@qml.qnode(dev)
def circuit(phi):
qml.BasisState(np.array([0, 1]), wires=[0, 1])
qml.FermionicSWAP(phi, wires=[0, 1])
return qml.state()
>>> circuit(0.1)
tensor([0. +0.j , 0.99750208+0.04991671j,
• Create operators defined from a generator via qml.ops.op_math.Evolution. (#3375)
qml.ops.op_math.Evolution defines the exponential of an operator $hat{O}$ of the form $e^{ixhat{O}}$, with a single trainable parameter, $x$. Limiting to a single trainable parameter allows the use of qml.gradients.param_shift to find the gradient with respect to the parameter $x$.
dev = qml.device('default.qubit', wires=2)
def circuit(phi):
qml.ops.op_math.Evolution(qml.PauliX(0), -.5 * phi)
return qml.expval(qml.PauliZ(0))
>>> phi = np.array(1.2)
>>> circuit(phi)
-0.9320390495504149
• The qutrit Hadamard gate, qml.THadamard, is now available. (#3340)
The operation accepts a subspace keyword argument which determines which variant of the qutrit Hadamard to use.
>>> th = qml.THadamard(wires=0, subspace=[0, 1])
>>> qml.matrix(th)
array([[ 0.70710678+0.j, 0.70710678+0.j, 0. +0.j],
[ 0.70710678+0.j, -0.70710678+0.j, 0. +0.j],
[ 0. +0.j, 0. +0.j, 1. +0.j]])
#### New transforms, functions, and more 😯
• Calculating the purity of arbitrary quantum states is now supported. (#3290)
The purity can be calculated in an analogous fashion to, say, the Von Neumann entropy:
• qml.math.purity can be used as an in-line function:
>>> x = [1, 0, 0, 1] / np.sqrt(2)
>>> qml.math.purity(x, [0, 1])
1.0
>>> qml.math.purity(x, [0])
0.5
>>> x = [[1 / 2, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1 / 2]]
>>> qml.math.purity(x, [0, 1])
0.5
• qml.qinfo.transforms.purity can transform a QNode returning a state to a function that returns the purity:
dev = qml.device("default.mixed", wires=2)
@qml.qnode(dev)
def circuit(x):
qml.IsingXX(x, wires=[0, 1])
return qml.state()
>>> qml.qinfo.transforms.purity(circuit, wires=[0])(np.pi / 2)
0.5
>>> qml.qinfo.transforms.purity(circuit, wires=[0, 1])(np.pi / 2)
1.0
As with the other methods in qml.qinfo, the purity is fully differentiable:
>>> param = np.array(np.pi / 4, requires_grad=True)
-0.5
• A new gradient transform, qml.gradients.spsa_grad, that is based on the idea of SPSA is now available. (#3366)
This new transform allows users to compute a single estimate of a quantum gradient using simultaneous perturbation of parameters and a stochastic approximation. A QNode that takes, say, an argument x, the approximate gradient can be computed as follows.
>>> dev = qml.device("default.qubit", wires=2)
>>> @qml.qnode(dev)
... def circuit(x):
... qml.RX(x, 0)
... qml.RX(x, 1)
... return qml.expval(qml.PauliZ(0))
array(-0.38876964)
The argument num_directions determines how many directions of simultaneous perturbation are used, which is proportional to the number of circuit evaluations. See the SPSA gradient transform documentation for details. Note that the full SPSA optimizer is already available as qml.SPSAOptimizer.
• Multiple mid-circuit measurements can now be combined arithmetically to create new conditionals. (#3159)
dev = qml.device("default.qubit", wires=3)
@qml.qnode(dev)
def circuit():
m0 = qml.measure(wires=0)
m1 = qml.measure(wires=1)
combined = 2 * m1 + m0
qml.cond(combined == 2, qml.RX)(1.3, wires=2)
return qml.probs(wires=2)
>>> circuit()
[0.90843735 0.09156265]
• A new method called pauli_decompose() has been added to the qml.pauli module, which takes a hermitian matrix, decomposes it in the Pauli basis, and returns it either as a qml.Hamiltonian or qml.PauliSentence instance. (#3384)
• Operation or Hamiltonian instances can now be generated from a qml.PauliSentence or qml.PauliWord via the new operation() and hamiltonian() methods. (#3391)
• A sum_expand function has been added for tapes, which splits a tape measuring a Sum expectation into mutliple tapes of summand expectations, and provides a function to recombine the results. (#3230)
#### (Experimental) More interface support for multi-measurement and gradient output types 🧪
• The autograd and Tensorflow interfaces now support devices with shot vectors when qml.enable_return() has been called. (#3374) (#3400)
Here is an example using Tensorflow:
import tensorflow as tf
qml.enable_return()
dev = qml.device("default.qubit", wires=2, shots=[1000, 2000, 3000])
@qml.qnode(dev, diff_method="parameter-shift", interface="tf")
def circuit(a):
qml.RY(a, wires=0)
qml.RX(0.2, wires=0)
qml.CNOT(wires=[0, 1])
return qml.expval(qml.PauliZ(0)), qml.probs([0, 1])
>>> a = tf.Variable(0.4)
... res = circuit(a)
... res = tf.stack([tf.experimental.numpy.hstack(r) for r in res])
...
>>> res
<tf.Tensor: shape=(3, 5), dtype=float64, numpy=
array([[0.902, 0.951, 0. , 0. , 0.049],
[0.898, 0.949, 0. , 0. , 0.051],
[0.892, 0.946, 0. , 0. , 0.054]])>
>>> tape.jacobian(res, a)
<tf.Tensor: shape=(3, 5), dtype=float64, numpy=
array([[-0.345 , -0.1725 , 0. , 0. , 0.1725 ],
[-0.383 , -0.1915 , 0. , 0. , 0.1915 ],
[-0.38466667, -0.19233333, 0. , 0. , 0.19233333]])>
• The PyTorch interface is now fully supported when qml.enable_return() has been called, allowing the calculation of the Jacobian and the Hessian using custom differentiation methods (e.g., parameter-shift, finite difference, or adjoint). (#3416)
import torch
qml.enable_return()
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev, diff_method="parameter-shift", interface="torch")
def circuit(a, b):
qml.RY(a, wires=0)
qml.RX(b, wires=1)
qml.CNOT(wires=[0, 1])
return qml.expval(qml.PauliZ(0)), qml.probs([0, 1])
>>> a = torch.tensor(0.1, requires_grad=True)
((tensor(-0.0998), tensor(0.)), (tensor([-0.0494, -0.0005, 0.0005, 0.0494]), tensor([-0.0991, 0.0991, 0.0002, -0.0002])))
• The JAX-JIT interface now supports first-order gradient computation when qml.enable_return() has been called. (#3235) (#3445)
import jax
from jax import numpy as jnp
jax.config.update("jax_enable_x64", True)
qml.enable_return()
dev = qml.device("lightning.qubit", wires=2)
@jax.jit
@qml.qnode(dev, interface="jax-jit", diff_method="parameter-shift")
def circuit(a, b):
qml.RY(a, wires=0)
qml.RX(b, wires=0)
return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))
a, b = jnp.array(1.0), jnp.array(2.0)
>>> jax.jacobian(circuit, argnums=[0, 1])(a, b)
((Array(0.35017549, dtype=float64, weak_type=True),
Array(-0.4912955, dtype=float64, weak_type=True)),
(Array(5.55111512e-17, dtype=float64, weak_type=True),
Array(0., dtype=float64, weak_type=True)))
### Improvements 🛠
• qml.pauli.is_pauli_word now supports instances of qml.Hamiltonian. (#3389)
• When qml.probs, qml.counts, and qml.sample are called with no arguments, they measure all wires. Calling any of the aforementioned measurements with an empty wire list (e.g., qml.sample(wires=[])) will raise an error. (#3299)
• Made qml.gradients.finite_diff more convenient to use with custom data type observables/devices by reducing the number of magic methods that need to be defined in the custom data type to support finite_diff. (#3426)
• The qml.ISWAP gate is now natively supported on default.mixed, improving on its efficiency. (#3284)
• Added more input validation to qml.transforms.hamiltonian_expand such that Hamiltonian objects with no terms raise an error. (#3339)
• Continuous integration checks are now performed for Python 3.11 and Torch v1.13. Python 3.7 is dropped. (#3276)
• qml.Tracker now also logs results in tracker.history when tracking the execution of a circuit.
(#3306)
• The execution time of Wires.all_wires has been improved by avoiding data type changes and making use of itertools.chain. (#3302)
• Printing an instance of qml.qchem.Molecule is now more concise and informational. (#3364)
• The error message for qml.transforms.insert when it fails to diagonalize non-qubit-wise-commuting observables is now more detailed. (#3381)
• Extended the qml.equal function to qml.Hamiltonian and Tensor objects. (#3390)
• QuantumTape._process_queue has been moved to qml.queuing.process_queue to disentangle its functionality from the QuantumTape class. (#3401)
• QPE can now accept a target operator instead of a matrix and target wires pair. (#3373)
• The qml.ops.op_math.Controlled.map_wires method now uses base.map_wires internally instead of the private _wires property setter. (#3405)
• A new function called qml.tape.make_qscript has been created for converting a quantum function into a quantum script. This replaces qml.transforms.make_tape. (#3429)
• Add a _pauli_rep attribute to operators to integrate the new Pauli arithmetic classes with native PennyLane objects. (#3443)
• Extended the functionality of qml.matrix to qutrits. (#3508)
• The qcut.py file in pennylane/transforms/ has been reorganized into multiple files that are now in pennylane/transforms/qcut/. (#3413)
• A warning now appears when creating a Tensor object with overlapping wires, informing that this can lead to undefined behaviour. (#3459)
• Extended the qml.equal function to qml.ops.op_math.Controlled and qml.ops.op_math.ControlledOp objects. (#3463)
• Nearly every instance of with QuantumTape() has been replaced with QuantumScript construction. (#3454)
• Added validate_subspace static method to qml.Operator to check the validity of the subspace of certain qutrit operations. (#3340)
• qml.equal now supports operators created via qml.s_prod, qml.pow, qml.exp, and qml.adjoint. (#3471)
• Devices can now disregard observable grouping indices in Hamiltonians through the optional use_grouping attribute. (#3456)
• Add the optional argument lazy=True to functions qml.s_prod, qml.prod and qml.op_sum to allow simplification. (#3483)
• Updated the qml.transforms.zyz_decomposition function such that it now supports broadcast operators. This means that single-qubit qml.QubitUnitary operators, instantiated from a batch of unitaries, can now be decomposed. (#3477)
• The performance of executing circuits under the jax.vmap transformation has been improved by being able to leverage the batch-execution capabilities of some devices. (#3452)
• The tolerance for converting openfermion Hamiltonian complex coefficients to real ones has been modified to prevent conversion errors. (#3367)
• OperationRecorder now inherits from AnnotatedQueue and QuantumScript instead of QuantumTape. (#3496)
• Updated qml.transforms.split_non_commuting to support the new return types. (#3414)
• Updated qml.transforms.mitigate_with_zne to support the new return types. (#3415)
• Updated qml.transforms.metric_tensor, qml.transforms.adjoint_metric_tensor, qml.qinfo.classical_fisher, and qml.qinfo.quantum_fisher to support the new return types. (#3449)
• Updated qml.transforms.batch_params and qml.transforms.batch_input to support the new return types. (#3431)
• Updated qml.transforms.cut_circuit and qml.transforms.cut_circuit_mc to support the new return types. (#3346)
• Limit NumPy version to <1.24. (#3346)
### Breaking changes 💔
• Python 3.7 support is no longer maintained. PennyLane will be maintained for versions 3.8 and up. (#3276)
• The log_base attribute has been moved from MeasurementProcess to the new VnEntropyMP and MutualInfoMP classes, which inherit from MeasurementProcess. (#3326)
• qml.utils.decompose_hamiltonian() has been removed. Please use qml.pauli.pauli_decompose() instead. (#3384)
• The return_type attribute of MeasurementProcess has been removed where possible. Use isinstance checks instead. (#3399)
• Instead of having an OrderedDict attribute called _queue, AnnotatedQueue now inherits from OrderedDict and encapsulates the queue. Consequentially, this also applies to the QuantumTape class which inherits from AnnotatedQueue. (#3401)
• The ShadowMeasurementProcess class has been renamed to ClassicalShadowMP. (#3388)
• The qml.Operation.get_parameter_shift method has been removed. The gradients module should be used for general parameter-shift rules instead. (#3419)
• The signature of the QubitDevice.statistics method has been changed from
def statistics(self, observables, shot_range=None, bin_size=None, circuit=None):
to
def statistics(self, circuit: QuantumTape, shot_range=None, bin_size=None):
(#3421)
• The MeasurementProcess class is now an abstract class and return_type is now a property of the class. (#3434)
### Deprecations 👋
Deprecations cycles are tracked at doc/developement/deprecations.rst.
• The following methods are deprecated: (#3281)
• qml.tape.get_active_tape: Use qml.QueuingManager.active_context() instead
• qml.transforms.qcut.remap_tape_wires: Use qml.map_wires instead
• qml.tape.QuantumTape.inv(): Use qml.tape.QuantumTape.adjoint() instead
• qml.tape.stop_recording(): Use qml.QueuingManager.stop_recording() instead
• qml.tape.QuantumTape.stop_recording(): Use qml.QueuingManager.stop_recording() instead
• qml.QueuingContext is now qml.QueuingManager
• QueuingManager.safe_update_info and AnnotatedQueue.safe_update_info: Use update_info instead.
• qml.transforms.measurement_grouping has been deprecated. Use qml.transforms.hamiltonian_expand instead. (#3417)
• The observables argument in QubitDevice.statistics is deprecated. Please use circuit instead. (#3433)
• The seed_recipes argument in qml.classical_shadow and qml.shadow_expval is deprecated. A new argument seed has been added, which defaults to None and can contain an integer with the wanted seed. (#3388)
• qml.transforms.make_tape has been deprecated. Please use qml.tape.make_qscript instead. (#3478)
### Documentation 📝
• Added documentation on parameter broadcasting regarding both its usage and technical aspects. (#3356)
The quickstart guide on circuits as well as the the documentation of QNodes and Operators now contain introductions and details on parameter broadcasting. The QNode documentation mostly contains usage details, the Operator documentation is concerned with implementation details and a guide to support broadcasting in custom operators.
• The return type statements of gradient and Hessian transforms and a series of other functions that are a batch_transform have been corrected. (#3476)
• Developer documentation for the queuing module has been added. (#3268)
• More mentions of diagonalizing gates for all relevant operations have been corrected. (#3409)
The docstrings for compute_eigvals used to say that the diagonalizing gates implemented $U$, the unitary such that $O = U Sigma U^{dagger}$, where $O$ is the original observable and $Sigma$ a diagonal matrix. However, the diagonalizing gates actually implement $U^{dagger}$, since $langle psi | O | psi rangle = langle psi | U Sigma U^{dagger} | psi rangle$, making $U^{dagger} | psi rangle$ the actual state being measured in the $Z$-basis.
• A warning about using dill to pickle and unpickle datasets has been added. (#3505)
### Bug fixes 🐛
• Fixed a bug that prevented qml.gradients.param_shift from being used for broadcasted tapes. (#3528)
• Fixed a bug where qml.transforms.hamiltonian_expand didn’t preserve the type of the input results in its output. (#3339)
• Fixed a bug that made qml.gradients.param_shift raise an error when used with unshifted terms only in a custom recipe, and when using any unshifted terms at all under the new return type system. (#3177)
• The original tape _obs_sharing_wires attribute is updated during its expansion. (#3293)
• An issue with drain=False in the adaptive optimizer has been fixed. Before the fix, the operator pool needed to be reconstructed inside the optimization pool when drain=False. With this fix, this reconstruction is no longer needed. (#3361)
• If the device originally has no shots but finite shots are dynamically specified, Hamiltonian expansion now occurs. (#3369)
• qml.matrix(op) now fails if the operator truly has no matrix (e.g., qml.Barrier) to match op.matrix(). (#3386)
• The pad_with argument in the qml.AmplitudeEmbedding template is now compatible with all interfaces. (#3392)
• Operator.pow now queues its constituents by default. (#3373)
• Fixed a bug where a QNode returning qml.sample would produce incorrect results when run on a device defined with a shot vector. (#3422)
• The qml.data module now works as expected on Windows. (#3504)
### Contributors ✍️
This release contains contributions from (in alphabetical order):
Guillermo Alonso, Juan Miguel Arrazola, Utkarsh Azad, Samuel Banning, Thomas Bromley, Astral Cai, Albert Mitjans Coma, Ahmed Darwish, Isaac De Vlugt, Olivia Di Matteo, Amintor Dusko, Pieter Eendebak, Lillian M. A. Frederiksen, Diego Guala, Katharine Hyatt, Josh Izaac, Soran Jahangiri, Edward Jiang, Korbinian Kottmann, Christina Lee, Romain Moyard, Lee James O’Riordan, Mudit Pandey, Kevin Shen, Matthew Silverman, Jay Soni, Antal Száva, David Wierichs, Moritz Willmann, and Filippo Vicentini.
orphan
## Release 0.27.0¶
### New features since last release
#### An all-new data module 💾
• The qml.data module is now available, allowing users to download, load, and create quantum datasets. (#3156)
Datasets are hosted on Xanadu Cloud and can be downloaded by using qml.data.load():
>>> H2_datasets = qml.data.load(
... data_name="qchem", molname="H2", basis="STO-3G", bondlength=1.1
... )
>>> H2data = H2_datasets[0]
>>> H2data
<Dataset = description: qchem/H2/STO-3G/1.1, attributes: ['molecule', 'hamiltonian', ...]>
• Datasets available to be downloaded can be listed with qml.data.list_datasets().
• To download or load only specific properties of a dataset, we can specify the desired properties in qml.data.load with the attributes keyword argument:
>>> H2_hamiltonian = qml.data.load(
... data_name="qchem", molname="H2", basis="STO-3G", bondlength=1.1,
... attributes=["molecule", "hamiltonian"]
... )[0]
>>> H2_hamiltonian.hamiltonian
<Hamiltonian: terms=15, wires=[0, 1, 2, 3]>
The available attributes can be found using qml.data.list_attributes():
• To select data interactively, we can use qml.data.load_interactive():
>>> qml.data.load_interactive()
1) qspin
2) qchem
Choice [1-2]: 1
...
...
...
...
...
dataset: qspin/Ising/open/rectangular/4x4
attributes: ['parameters', 'ground_states']
force: False
dest folder: datasets
Would you like to continue? (Default is yes) [Y/n]:
<Dataset = description: qspin/Ising/open/rectangular/4x4, attributes: ['parameters', 'ground_states']>
• Once a dataset is loaded, its properties can be accessed as follows:
>>> dev = qml.device("default.qubit",wires=4)
>>> @qml.qnode(dev)
... def circuit():
... qml.BasisState(H2data.hf_state, wires = [0, 1, 2, 3])
... for op in H2data.vqe_gates:
... qml.apply(op)
... return qml.expval(H2data.hamiltonian)
>>> print(circuit())
-1.0791430411076344
It’s also possible to create custom datasets with qml.data.Dataset:
>>> example_hamiltonian = qml.Hamiltonian(coeffs=[1,0.5], observables=[qml.PauliZ(wires=0),qml.PauliX(wires=1)])
>>> example_energies, _ = np.linalg.eigh(qml.matrix(example_hamiltonian))
>>> example_dataset = qml.data.Dataset(
... data_name = 'Example', hamiltonian=example_hamiltonian, energies=example_energies
... )
>>> example_dataset.data_name
'Example'
>>> example_dataset.hamiltonian
(0.5) [X1]
+ (1) [Z0]
>>> example_dataset.energies
array([-1.5, -0.5, 0.5, 1.5])
Custom datasets can be saved and read with the qml.data.Dataset.write() and qml.data.Dataset.read() methods, respectively.
>>> example_dataset.write('./path/to/dataset.dat')
'Example'
(0.5) [X1]
+ (1) [Z0]
array([-1.5, -0.5, 0.5, 1.5])
We will continue to work on adding more datasets and features for qml.data in future releases.
• Optimizing quantum circuits can now be done adaptively with qml.AdaptiveOptimizer. (#3192)
The qml.AdaptiveOptimizer takes an initial circuit and a collection of operators as input and adds a selected gate to the circuit at each optimization step. The process of growing the circuit can be repeated until the circuit gradients converge to zero within a given threshold. The adaptive optimizer can be used to implement algorithms such as ADAPT-VQE as shown in the following example.
Firstly, we define some preliminary variables needed for VQE:
symbols = ["H", "H", "H"]
geometry = np.array([[0.01076341, 0.04449877, 0.0],
[0.98729513, 1.63059094, 0.0],
H, qubits = qml.qchem.molecular_hamiltonian(symbols, geometry, charge = 1)
The collection of gates to grow the circuit is built to contain all single and double excitations:
n_electrons = 2
singles, doubles = qml.qchem.excitations(n_electrons, qubits)
singles_excitations = [qml.SingleExcitation(0.0, x) for x in singles]
doubles_excitations = [qml.DoubleExcitation(0.0, x) for x in doubles]
operator_pool = doubles_excitations + singles_excitations
Next, an initial circuit that prepares a Hartree-Fock state and returns the expectation value of the Hamiltonian is defined:
hf_state = qml.qchem.hf_state(n_electrons, qubits)
dev = qml.device("default.qubit", wires=qubits)
@qml.qnode(dev)
def circuit():
qml.BasisState(hf_state, wires=range(qubits))
return qml.expval(H)
Finally, the optimizer is instantiated and then the circuit is created and optimized adaptively:
opt = qml.optimize.AdaptiveOptimizer()
for i in range(len(operator_pool)):
circuit, energy, gradient = opt.step_and_cost(circuit, operator_pool, drain_pool=True)
print('Energy:', energy)
print(qml.draw(circuit)())
print()
break
Energy: -1.246549938420637
0: ─╭BasisState(M0)─╭G²(0.20)─┤ ╭<𝓗>
1: ─├BasisState(M0)─├G²(0.20)─┤ ├<𝓗>
2: ─├BasisState(M0)─│─────────┤ ├<𝓗>
3: ─├BasisState(M0)─│─────────┤ ├<𝓗>
4: ─├BasisState(M0)─├G²(0.20)─┤ ├<𝓗>
5: ─╰BasisState(M0)─╰G²(0.20)─┤ ╰<𝓗>
Energy: -1.2613740231529604
0: ─╭BasisState(M0)─╭G²(0.20)─╭G²(0.19)─┤ ╭<𝓗>
1: ─├BasisState(M0)─├G²(0.20)─├G²(0.19)─┤ ├<𝓗>
2: ─├BasisState(M0)─│─────────├G²(0.19)─┤ ├<𝓗>
3: ─├BasisState(M0)─│─────────╰G²(0.19)─┤ ├<𝓗>
4: ─├BasisState(M0)─├G²(0.20)───────────┤ ├<𝓗>
5: ─╰BasisState(M0)─╰G²(0.20)───────────┤ ╰<𝓗>
Energy: -1.2743971719780331
0: ─╭BasisState(M0)─╭G²(0.20)─╭G²(0.19)──────────┤ ╭<𝓗>
1: ─├BasisState(M0)─├G²(0.20)─├G²(0.19)─╭G(0.00)─┤ ├<𝓗>
2: ─├BasisState(M0)─│─────────├G²(0.19)─│────────┤ ├<𝓗>
3: ─├BasisState(M0)─│─────────╰G²(0.19)─╰G(0.00)─┤ ├<𝓗>
4: ─├BasisState(M0)─├G²(0.20)────────────────────┤ ├<𝓗>
5: ─╰BasisState(M0)─╰G²(0.20)────────────────────┤ ╰<𝓗>
For a detailed breakdown of its implementation, check out the Adaptive circuits for quantum chemistry demo.
#### Automatic interface detection 🧩
• QNodes now accept an auto interface argument which automatically detects the machine learning library to use. (#3132)
from pennylane import numpy as np
import torch
import tensorflow as tf
from jax import numpy as jnp
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev, interface="auto")
def circuit(weight):
qml.RX(weight[0], wires=0)
qml.RY(weight[1], wires=1)
return qml.expval(qml.PauliZ(0))
interface_tensors = [[0, 1], np.array([0, 1]), torch.Tensor([0, 1]), tf.Variable([0, 1], dtype=float), jnp.array([0, 1])]
for tensor in interface_tensors:
res = circuit(weight=tensor)
print(f"Result value: {res:.2f}; Result type: {type(res)}")
Result value: 1.00; Result type: <class 'pennylane.numpy.tensor.tensor'>
Result value: 1.00; Result type: <class 'pennylane.numpy.tensor.tensor'>
Result value: 1.00; Result type: <class 'torch.Tensor'>
Result value: 1.00; Result type: <class 'tensorflow.python.framework.ops.EagerTensor'>
Result value: 1.00; Result type: <class 'jaxlib.xla_extension.Array'>
• JAX-JIT support for computing the gradient of QNodes that return a single vector of probabilities or multiple expectation values is now available. (#3244) (#3261)
import jax
from jax import numpy as jnp
from jax.config import config
config.update("jax_enable_x64", True)
dev = qml.device("lightning.qubit", wires=2)
@jax.jit
@qml.qnode(dev, diff_method="parameter-shift", interface="jax")
def circuit(x, y):
qml.RY(x, wires=0)
qml.RY(y, wires=1)
qml.CNOT(wires=[0, 1])
return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))
x = jnp.array(1.0)
y = jnp.array(2.0)
>>> jax.jacobian(circuit, argnums=[0, 1])(x, y)
(Array([-0.84147098, 0.35017549], dtype=float64, weak_type=True),
Array([ 4.47445479e-18, -4.91295496e-01], dtype=float64, weak_type=True))
Note that this change depends on jax.pure_callback, which requires jax>=0.3.17.
#### Construct Pauli words and sentences 🔤
• We’ve reorganized and grouped everything in PennyLane responsible for manipulating Pauli operators into a pauli module. The grouping module has been deprecated as a result, and logic was moved from pennylane/grouping to pennylane/pauli/grouping. (#3179)
• qml.pauli.PauliWord and qml.pauli.PauliSentence can be used to represent tensor products and linear combinations of Pauli operators, respectively. These provide a more performant method to compute sums and products of Pauli operators. (#3195)
• qml.pauli.PauliWord represents tensor products of Pauli operators. We can efficiently multiply and extract the matrix of these operators using this representation.
>>> pw1 = qml.pauli.PauliWord({0:"X", 1:"Z"})
>>> pw2 = qml.pauli.PauliWord({0:"Y", 1:"Z"})
>>> pw1, pw2
(X(0) @ Z(1), Y(0) @ Z(1))
>>> pw1 * pw2
(Z(0), 1j)
>>> pw1.to_mat(wire_order=[0,1])
array([[ 0, 0, 1, 0],
[ 0, 0, 0, -1],
[ 1, 0, 0, 0],
[ 0, -1, 0, 0]])
• qml.pauli.PauliSentence represents linear combinations of Pauli words. We can efficiently add, multiply and extract the matrix of these operators in this representation.
>>> ps1 = qml.pauli.PauliSentence({pw1: 1.2, pw2: 0.5j})
>>> ps2 = qml.pauli.PauliSentence({pw1: -1.2})
>>> ps1
1.2 * X(0) @ Z(1)
+ 0.5j * Y(0) @ Z(1)
>>> ps1 + ps2
0.0 * X(0) @ Z(1)
+ 0.5j * Y(0) @ Z(1)
>>> ps1 * ps2
-1.44 * I
+ (-0.6+0j) * Z(0)
>>> (ps1 + ps2).to_mat(wire_order=[0,1])
array([[ 0. +0.j, 0. +0.j, 0.5+0.j, 0. +0.j],
[ 0. +0.j, 0. +0.j, 0. +0.j, -0.5+0.j],
[-0.5+0.j, 0. +0.j, 0. +0.j, 0. +0.j],
[ 0. +0.j, 0.5+0.j, 0. +0.j, 0. +0.j]])
#### (Experimental) More support for multi-measurement and gradient output types 🧪
• qml.enable_return() now supports QNodes returning multiple measurements, including shots vectors, and gradient output types. (#2886) (#3052) (#3041) (#3090) (#3069) (#3137) (#3127) (#3099) (#3098) (#3095) (#3091) (#3176) (#3170) (#3194) (#3267) (#3234) (#3232) (#3223) (#3222) (#3315)
In v0.25, we introduced qml.enable_return(), which separates measurements into their own tensors. The motivation of this change is the deprecation of ragged ndarray creation in NumPy.
With this release, we’re continuing to elevate this feature by adding support for:
• Execution (qml.execute)
• Jacobian vector product (JVP) computation
• Gradient transforms (qml.gradients.param_shift, qml.gradients.finite_diff, qml.gradients.hessian_transform, qml.gradients.param_shift_hessian).
• Interfaces (Autograd, TensorFlow, and JAX, although without JIT)
With this added support, the JAX interface can handle multiple shots (shots vectors), measurements, and gradient output types with qml.enable_return():
import jax
qml.enable_return()
dev = qml.device("default.qubit", wires=2, shots=(1, 10000))
params = jax.numpy.array([0.1, 0.2])
@qml.qnode(dev, interface="jax", diff_method="parameter-shift", max_diff=2)
def circuit(x):
qml.RX(x[0], wires=[0])
qml.RY(x[1], wires=[1])
qml.CNOT(wires=[0, 1])
return qml.var(qml.PauliZ(0) @ qml.PauliX(1)), qml.probs(wires=[0])
>>> jax.hessian(circuit)(params)
((Array([[ 0., 0.],
[ 2., -3.]], dtype=float32),
Array([[[-0.5, 0. ],
[ 0. , 0. ]],
[[ 0.5, 0. ],
[ 0. , 0. ]]], dtype=float32)),
(Array([[ 0.07677898, 0.0563341 ],
[ 0.07238522, -1.830669 ]], dtype=float32),
Array([[[-4.9707499e-01, 2.9999996e-04],
[-6.2500127e-04, 1.2500001e-04]],
[[ 4.9707499e-01, -2.9999996e-04],
[ 6.2500127e-04, -1.2500001e-04]]], dtype=float32)))
For more details, please refer to the documentation.
#### New basis rotation and tapering features in qml.qchem 🤓
• Grouped coefficients, observables, and basis rotation transformation matrices needed to construct a qubit Hamiltonian in the rotated basis of molecular orbitals are now calculable via qml.qchem.basis_rotation(). (#3011)
>>> symbols = ['H', 'H']
>>> geometry = np.array([[0.0, 0.0, 0.0], [1.398397361, 0.0, 0.0]], requires_grad = False)
>>> mol = qml.qchem.Molecule(symbols, geometry)
>>> core, one, two = qml.qchem.electron_integrals(mol)()
>>> coeffs, ops, unitaries = qml.qchem.basis_rotation(one, two, tol_factor=1.0e-5)
>>> unitaries
[tensor([[-1.00000000e+00, -5.46483514e-13],
tensor([[-1.00000000e+00, 3.17585063e-14],
tensor([[-0.70710678, -0.70710678],
tensor([[ 2.58789009e-11, 1.00000000e+00],
• Any gate operation can now be tapered according to $$\mathbb{Z}_2$$ symmetries of the Hamiltonian via qml.qchem.taper_operation. (#3002) (#3121)
>>> symbols = ['He', 'H']
>>> geometry = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.4589]])
>>> mol = qml.qchem.Molecule(symbols, geometry, charge=1)
>>> H, n_qubits = qml.qchem.molecular_hamiltonian(symbols, geometry)
>>> generators = qml.qchem.symmetry_generators(H)
>>> paulixops = qml.qchem.paulix_ops(generators, n_qubits)
>>> paulix_sector = qml.qchem.optimal_sector(H, generators, mol.n_electrons)
>>> tap_op = qml.qchem.taper_operation(qml.SingleExcitation, generators, paulixops,
... paulix_sector, wire_order=H.wires, op_wires=[0, 2])
>>> tap_op(3.14159)
[Exp(1.5707949999999993j PauliY)]
Moreover, the obtained tapered operation can be used directly within a QNode.
>>> dev = qml.device('default.qubit', wires=[0, 1])
>>> @qml.qnode(dev)
... def circuit(params):
... tap_op(params[0])
... return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
>>> drawer = qml.draw(circuit, show_all_wires=True)
>>> print(drawer(params=[3.14159]))
0: ──Exp(0.00+1.57j Y)─┤ ╭<[email protected]>
1: ────────────────────┤ ╰<[email protected]>
• Functionality has been added to estimate the number of measurements required to compute an expectation value with a target error and estimate the error in computing an expectation value with a given number of measurements. (#3000)
#### New functions, operations, and observables 🤩
• Wires of operators or entire QNodes can now be mapped to other wires via qml.map_wires(). (#3143) (#3145)
The qml.map_wires() function requires a dictionary representing a wire map. Use it with
• arbitrary operators:
>>> op = qml.RX(0.54, wires=0) + qml.PauliX(1) + (qml.PauliZ(2) @ qml.RY(1.23, wires=3))
>>> op
(RX(0.54, wires=[0]) + PauliX(wires=[1])) + (PauliZ(wires=[2]) @ RY(1.23, wires=[3]))
>>> wire_map = {0: 10, 1: 11, 2: 12, 3: 13}
>>> qml.map_wires(op, wire_map)
(RX(0.54, wires=[10]) + PauliX(wires=[11])) + (PauliZ(wires=[12]) @ RY(1.23, wires=[13]))
A map_wires method has also been added to operators, which returns a copy of the operator with its wires changed according to the given wire map.
• entire QNodes:
dev = qml.device("default.qubit", wires=["A", "B", "C", "D"])
wire_map = {0: "A", 1: "B", 2: "C", 3: "D"}
@qml.qnode(dev)
def circuit():
qml.RX(0.54, wires=0)
qml.PauliX(1)
qml.PauliZ(2)
qml.RY(1.23, wires=3)
return qml.probs(wires=0)
>>> mapped_circuit = qml.map_wires(circuit, wire_map)
>>> mapped_circuit()
>>> print(qml.draw(mapped_circuit)())
A: ──RX(0.54)─┤ Probs
B: ──X────────┤
C: ──Z────────┤
D: ──RY(1.23)─┤
• The qml.IntegerComparator arithmetic operation is now available. (#3113)
Given a basis state $$\vert n \rangle$$, where $$n$$ is a positive integer, and a fixed positive integer $$L$$, qml.IntegerComparator flips a target qubit if $$n \geq L$$. Alternatively, the flipping condition can be $$n < L$$ as demonstrated below:
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit():
qml.BasisState(np.array([0, 1]), wires=range(2))
qml.IntegerComparator(2, geq=False, wires=[0, 1])
return qml.state()
>>> circuit()
[-0.5+0.j 0.5+0.j -0.5+0.j 0.5+0.j]
• The qml.GellMann qutrit observable, the ternary generalization of the Pauli observables, is now available. (#3035)
When using qml.GellMann, the index keyword argument determines which of the 8 Gell-Mann matrices is used.
dev = qml.device("default.qutrit", wires=2)
@qml.qnode(dev)
def circuit():
qml.TClock(wires=0)
qml.TShift(wires=1)
return qml.expval(qml.GellMann(wires=0, index=8) + qml.GellMann(wires=1, index=3))
>>> circuit()
-0.42264973081037416
• Controlled qutrit operations can now be performed with qml.ControlledQutritUnitary. (#2844)
The control wires and values that define the operation are defined analogously to the qubit operation.
dev = qml.device("default.qutrit", wires=3)
@qml.qnode(dev)
def circuit(U):
qml.TShift(wires=0)
qml.ControlledQutritUnitary(U, control_wires=[0, 1], control_values='12', wires=2)
return qml.state()
>>> U = np.array([[1, 1, 0], [1, -1, 0], [0, 0, np.sqrt(2)]]) / np.sqrt(2)
>>> circuit(U)
tensor([0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j,
0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j,
0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j,
### Improvements
• PennyLane now supports Python 3.11! (#3297)
• qml.sample and qml.counts work more efficiently and track if computational basis samples are being generated when they are called without specifying an observable. (#3207)
• The parameters of a basis set containing a different number of Gaussian functions are now easier to differentiate. (#3213)
• Printing a qml.MultiControlledX operator now shows the control_values keyword argument. (#3113)
• qml.simplify and transforms like qml.matrix, batch_transform, hamiltonian_expand, and split_non_commuting now work with QuantumScript as well as QuantumTape. (#3209)
• A redundant flipping of the initial state in the UCCSD and kUpCCGSD templates has been removed. (#3148)
• qml.adjoint now supports batching if the base operation supports batching. (#3168)
• qml.OrbitalRotation is now decomposed into two qml.SingleExcitation operations for faster execution and more efficient parameter-shift gradient calculations on devices that natively support qml.SingleExcitation. (#3171)
• The Exp class decomposes into a PauliRot class if the coefficient is imaginary and the base operator is a Pauli Word. (#3249)
• Added the operator attributes has_decomposition and has_adjoint that indicate whether a corresponding decomposition or adjoint method is available. (#2986)
• Structural improvements are made to QueuingManager, formerly QueuingContext, and AnnotatedQueue. (#2794) (#3061) (#3085)
• QueuingContext is renamed to QueuingManager.
• QueuingManager should now be the global communication point for putting queuable objects into the active queue.
• QueuingManager is no longer an abstract base class.
• AnnotatedQueue and its children no longer inherit from QueuingManager.
• QueuingManager is no longer a context manager.
• Recording queues should start and stop recording via the QueuingManager.add_active_queue and QueuingContext.remove_active_queue class methods instead of directly manipulating the _active_contexts property.
• AnnotatedQueue and its children no longer provide global information about actively recording queues. This information is now only available through QueuingManager.
• AnnotatedQueue and its children no longer have the private _append, _remove, _update_info, _safe_update_info, and _get_info methods. The public analogues should be used instead.
• QueuingManager.safe_update_info and AnnotatedQueue.safe_update_info are deprecated. Their functionality is moved to update_info.
• qml.Identity now accepts multiple wires.
(#3049)
>>> id_op = qml.Identity([0, 1])
>>> id_op.matrix()
array([[1., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.]])
>>> id_op.sparse_matrix()
<4x4 sparse matrix of type '<class 'numpy.float64'>'
with 4 stored elements in Compressed Sparse Row format>
>>> id_op.eigvals()
array([1., 1., 1., 1.])
• Added unitary_check keyword argument to the constructor of the QubitUnitary class which indicates whether the user wants to check for unitarity of the input matrix or not. Its default value is false. (#3063)
• Modified the representation of WireCut by using qml.draw_mpl. (#3067)
• Improved the performance of qml.math.expand_matrix function for dense and sparse matrices. (#3060) (#3064)
• Added support for sums and products of operator classes with scalar tensors of any interface (NumPy, JAX, Tensorflow, PyTorch…). (#3149)
>>> s_prod = torch.tensor(4) * qml.RX(1.23, 0)
>>> s_prod
4*(RX(1.23, wires=[0]))
>>> s_prod.scalar
tensor(4)
• Added overlapping_ops property to the Composite class to improve the performance of the eigvals, diagonalizing_gates and Prod.matrix methods. (#3084)
• Added the map_wires method to the operators, which returns a copy of the operator with its wires changed according to the given wire map. (#3143)
>>> op = qml.Toffoli([0, 1, 2])
>>> wire_map = {0: 2, 2: 0}
>>> op.map_wires(wire_map=wire_map)
Toffoli(wires=[2, 1, 0])
• Calling compute_matrix and compute_sparse_matrix of simple non-parametric operations is now faster and more memory-efficient with the addition of caching. (#3134)
• Added details to the output of Exp.label(). (#3126)
• qml.math.unwrap no longer creates ragged arrays. Lists remain lists. (#3163)
• New null.qubit device. The null.qubitperforms no operations or memory allocations. (#2589)
• default.qubit favours decomposition and avoids matrix construction for QFT and GroverOperator at larger qubit numbers. (#3193)
• qml.ControlledQubitUnitary now has a control_values property. (#3206)
• Added a new qml.tape.QuantumScript class that contains all the non-queuing behavior of QuantumTape. Now, QuantumTape inherits from QuantumScript as well as AnnotatedQueue. (#3097)
• Extended the qml.equal function to MeasurementProcesses (#3189)
• qml.drawer.draw.draw_mpl now accepts a style kwarg to select a style for plotting, rather than calling qml.drawer.use_style(style) before plotting. Setting a style for draw_mpl does not change the global configuration for matplotlib plotting. If no style is passed, the function defaults to plotting with the black_white style. (#3247)
### Breaking changes
• QuantumTape._par_info is now a list of dictionaries, instead of a dictionary whose keys are integers starting from zero. (#3185)
• QueuingContext has been renamed to QueuingManager. (#3061)
• Deprecation patches for the return types enum’s location and qml.utils.expand are removed. (#3092)
• _multi_dispatch functionality has been moved inside the get_interface function. This function can now be called with one or multiple tensors as arguments. (#3136)
>>> torch_scalar = torch.tensor(1)
>>> torch_tensor = torch.Tensor([2, 3, 4])
>>> numpy_tensor = np.array([5, 6, 7])
>>> qml.math.get_interface(torch_scalar)
'torch'
>>> qml.math.get_interface(numpy_tensor)
'numpy'
_multi_dispatch previously had only one argument which contained a list of the tensors to be dispatched:
>>> qml.math._multi_dispatch([torch_scalar, torch_tensor, numpy_tensor])
'torch'
To differentiate whether the user wants to get the interface of a single tensor or multiple tensors, get_interface now accepts a different argument per tensor to be dispatched:
>>> qml.math.get_interface(*[torch_scalar, torch_tensor, numpy_tensor])
'torch'
>>> qml.math.get_interface(torch_scalar, torch_tensor, numpy_tensor)
'torch'
• Operator.compute_terms is removed. On a specific instance of an operator, op.terms() can be used instead. There is no longer a static method for this. (#3215)
### Deprecations
• QueuingManager.safe_update_info and AnnotatedQueue.safe_update_info are deprecated. Instead, update_info no longer raises errors
if the object isn’t in the queue. (#3085)
• qml.tape.stop_recording and QuantumTape.stop_recording have been moved to qml.QueuingManager.stop_recording. The old functions will still be available until v0.29. (#3068)
• qml.tape.get_active_tape has been deprecated. Use qml.QueuingManager.active_context() instead. (#3068)
• Operator.compute_terms has been removed. On a specific instance of an operator, use op.terms() instead. There is no longer a static method for this. (#3215)
• qml.tape.QuantumTape.inv() has been deprecated. Use qml.tape.QuantumTape.adjoint instead. (#3237)
• qml.transforms.qcut.remap_tape_wires has been deprecated. Use qml.map_wires instead. (#3186)
• The grouping module qml.grouping has been deprecated. Use qml.pauli or qml.pauli.grouping instead. The module will still be available until v0.28. (#3262)
### Documentation
• The code block in the usage details of the UCCSD template has been updated. (#3140)
• Added a “Deprecations” page to the developer documentation. (#3093)
• The example of the qml.FlipSign template has been updated. (#3219)
### Bug fixes
• qml.SparseHamiltonian now validates the size of the input matrix. (#3278)
• Users no longer see unintuitive errors when inputing sequences to qml.Hermitian. (#3181)
• The evaluation of QNodes that return either vn_entropy or mutual_info raises an informative error message when using devices that define a vector of shots. (#3180)
• Fixed a bug that made qml.AmplitudeEmbedding incompatible with JITting. (#3166)
• Fixed the qml.transforms.transpile transform to work correctly for all two-qubit operations. (#3104)
• Fixed a bug with the control values of a controlled version of a ControlledQubitUnitary. (#3119)
• Fixed a bug where qml.math.fidelity(non_trainable_state, trainable_state) failed unexpectedly. (#3160)
• Fixed a bug where qml.QueuingManager.stop_recording did not clean up if yielded code raises an exception. (#3182)
• Returning qml.sample() or qml.counts() with other measurements of non-commuting observables now raises a QuantumFunctionError (e.g., return qml.expval(PauliX(wires=0)), qml.sample() now raises an error). (#2924)
• Fixed a bug where op.eigvals() would return an incorrect result if the operator was a non-hermitian composite operator. (#3204)
• Fixed a bug where qml.BasisStatePreparation and qml.BasisEmbedding were not jit-compilable with JAX. (#3239)
• Fixed a bug where qml.MottonenStatePreparation was not jit-compilable with JAX. (#3260)
• Fixed a bug where qml.MottonenStatePreparation was not jit-compilable with JAX. (#3260)
• Fixed a bug where qml.expval(qml.Hamiltonian()) would not raise an error if the Hamiltonian involved some wires that are not present on the device. (#3266)
• Fixed a bug where qml.tape.QuantumTape.shape() did not account for the batch dimension of the tape (#3269)
### Contributors
This release contains contributions from (in alphabetical order):
Kamal Mohamed Ali, Guillermo Alonso-Linaje, Juan Miguel Arrazola, Utkarsh Azad, Thomas Bromley, Albert Mitjans Coma, Isaac De Vlugt, Olivia Di Matteo, Amintor Dusko, Lillian M. A. Frederiksen, Diego Guala, Josh Izaac, Soran Jahangiri, Edward Jiang, Korbinian Kottmann, Christina Lee, Romain Moyard, Lee J. O’Riordan, Mudit Pandey, Matthew Silverman, Jay Soni, Antal Száva, David Wierichs,
orphan
## Release 0.26.0¶
### New features since last release
• PennyLane now provides built-in support for implementing the classical-shadows measurement protocol. (#2820) (#2821) (#2871) (#2968) (#2959) (#2968)
The classical-shadow measurement protocol is described in detail in the paper Predicting Many Properties of a Quantum System from Very Few Measurements. As part of the support for classical shadows in this release, two new finite-shot and fully-differentiable measurements are available:
• QNodes returning the new measurement qml.classical_shadow() will return two entities; bits (0 or 1 if the 1 or -1 eigenvalue is sampled, respectively) and recipes (the randomized Pauli measurements that are performed for each qubit, labelled by integer):
dev = qml.device("default.qubit", wires=2, shots=3)
@qml.qnode(dev)
def circuit():
qml.CNOT(wires=[0, 1])
>>> bits, recipes = circuit()
>>> bits
tensor([[0, 0],
[1, 0],
>>> recipes
tensor([[2, 2],
[0, 2],
• QNodes returning qml.shadow_expval() yield the expectation value estimation using classical shadows:
dev = qml.device("default.qubit", wires=range(2), shots=10000)
@qml.qnode(dev)
def circuit(x, H):
qml.CNOT((0,1))
qml.RX(x, wires=0)
H = qml.Hamiltonian(
[1., 1.],
[qml.PauliZ(0) @ qml.PauliZ(1), qml.PauliX(0) @ qml.PauliX(1)]
)
>>> circuit(x, H)
-0.4797000000000001
Fully-differentiable QNode transforms for both new classical-shadows measurements are also available via qml.shadows.shadow_state and qml.shadows.shadow_expval, respectively.
For convenient post-processing, we’ve also added the ability to calculate general Renyi entropies by way of the ClassicalShadow class’ entropy method, which requires the wires of the subsystem of interest and the Renyi entropy order:
>>> shadow = qml.ClassicalShadow(bits, recipes)
>>> vN_entropy = shadow.entropy(wires=[0, 1], alpha=1)
#### Qutrits: quantum circuits for tertiary degrees of freedom ☘️
• An entirely new framework for quantum computing is now simulatable with the addition of qutrit functionalities. (#2699) (#2781) (#2782) (#2783) (#2784) (#2841) (#2843)
Qutrits are like qubits, but instead live in a three-dimensional Hilbert space; they are not binary degrees of freedom, they are tertiary. The advent of qutrits allows for all sorts of interesting theoretical, practical, and algorithmic capabilities that have yet to be discovered.
To facilitate qutrit circuits requires a new device: default.qutrit. The default.qutrit device is a Python-based simulator, akin to default.qubit, and is defined as per usual:
>>> dev = qml.device("default.qutrit", wires=1)
The following operations are supported on default.qutrit devices:
• The qutrit shift operator, qml.TShift, and the ternary clock operator, qml.TClock, as defined in this paper by Yeh et al. (2022), which are the qutrit analogs of the Pauli X and Pauli Z operations, respectively.
• The qml.TAdd and qml.TSWAP operations which are the qutrit analogs of the CNOT and SWAP operations, respectively.
• Custom unitary operations via qml.QutritUnitary.
• qml.state and qml.probs measurements.
• Measuring user-specified Hermitian matrix observables via qml.THermitian.
A comprehensive example of these features is given below:
dev = qml.device("default.qutrit", wires=1)
U = np.array([
[1, 1, 1],
[1, 1, 1],
[1, 1, 1]
]
) / np.sqrt(3)
obs = np.array([
[1, 1, 0],
[1, -1, 0],
[0, 0, np.sqrt(2)]
]
) / np.sqrt(2)
@qml.qnode(dev)
def qutrit_state(U, obs):
qml.TShift(0)
qml.TClock(0)
qml.QutritUnitary(U, wires=0)
return qml.state()
@qml.qnode(dev)
def qutrit_expval(U, obs):
qml.TShift(0)
qml.TClock(0)
qml.QutritUnitary(U, wires=0)
return qml.expval(qml.THermitian(obs, wires=0))
>>> qutrit_state(U, obs)
>>> qutrit_expval(U, obs)
We will continue to add more and more support for qutrits in future releases.
#### Simplifying just got... simpler 😌
• The qml.simplify() function has several intuitive improvements with this release. (#2978) (#2982) (#2922) (#3012)
qml.simplify can now perform the following:
• simplify parametrized operations
• simplify the adjoint and power of specific operators
• group like terms in a sum
• resolve products of Pauli operators
• combine rotation angles of identical rotation gates
Here is an example of qml.simplify in action with parameterized rotation gates. In this case, the angles of rotation are simplified to be modulo $$4\pi$$.
>>> op1 = qml.RX(30.0, wires=0)
>>> qml.simplify(op1)
RX(4.867258771281655, wires=[0])
>>> op2 = qml.RX(4 * np.pi, wires=0)
>>> qml.simplify(op2)
Identity(wires=[0])
All of these simplification features can be applied directly to quantum functions, QNodes, and tapes via decorating with @qml.simplify, as well:
dev = qml.device("default.qubit", wires=2)
@qml.simplify
@qml.qnode(dev)
def circuit():
qml.adjoint(qml.prod(qml.RX(1, 0) ** 1, qml.RY(1, 0), qml.RZ(1, 0)))
return qml.probs(wires=0)
>>> circuit()
>>> list(circuit.tape)
[RZ(11.566370614359172, wires=[0]) @ RY(11.566370614359172, wires=[0]) @ RX(11.566370614359172, wires=[0]),
probs(wires=[0])]
#### QNSPSA optimizer 💪
• A new optimizer called qml.QNSPSAOptimizer is available that implements the quantum natural simultaneous perturbation stochastic approximation (QNSPSA) method based on Simultaneous Perturbation Stochastic Approximation of the Quantum Fisher Information. (#2818)
qml.QNSPSAOptimizer is a second-order SPSA algorithm, which combines the convergence power of the quantum-aware Quantum Natural Gradient (QNG) optimization method with the reduced quantum evaluations of SPSA methods.
While the QNSPSA optimizer requires additional circuit executions (10 executions per step) compared to standard SPSA optimization (3 executions per step), these additional evaluations are used to provide a stochastic estimation of a second-order metric tensor, which often helps the optimizer to achieve faster convergence.
Use qml.QNSPSAOptimizer like you would any other optimizer:
max_iterations = 50
opt = qml.QNSPSAOptimizer()
for _ in range(max_iterations):
params, cost = opt.step_and_cost(cost, params)
#### Operator and parameter broadcasting supplements 📈
• Operator methods for exponentiation and raising to a power have been added. (#2799) (#3029)
• The qml.exp function can be used to create observables or generic rotation gates:
>>> x = 1.234
>>> t = qml.PauliX(0) @ qml.PauliX(1) + qml.PauliY(0) @ qml.PauliY(1)
>>> isingxy = qml.exp(t, 0.25j * x)
>>> isingxy.matrix()
array([[1. +0.j , 0. +0.j ,
1. +0.j , 0. +0.j ],
[0. +0.j , 0.8156179+0.j ,
1. +0.57859091j, 0. +0.j ],
[0. +0.j , 0. +0.57859091j,
0.8156179+0.j , 0. +0.j ],
[0. +0.j , 0. +0.j ,
1. +0.j , 1. +0.j ]])
• The qml.pow function raises a given operator to a power:
>>> op = qml.pow(qml.PauliX(0), 2)
>>> op.matrix()
array([[1, 0], [0, 1]])
• An operator called qml.PSWAP is now available. (#2667)
The qml.PSWAP gate – or phase-SWAP gate – was previously available within the PennyLane-Braket plugin only. Enjoy it natively in PennyLane with v0.26.
• Check whether or not an operator is hermitian or unitary with qml.is_hermitian and qml.is_unitary. (#2960)
>>> op1 = qml.PauliX(wires=0)
>>> qml.is_hermitian(op1)
True
>>> op2 = qml.PauliX(0) + qml.RX(np.pi/3, 0)
>>> qml.is_unitary(op2)
False
• Embedding templates now support parameter broadcasting. (#2810)
Embedding templates like AmplitudeEmbedding or IQPEmbedding now support parameter broadcasting with a leading broadcasting dimension in their variational parameters. AmplitudeEmbedding, for example, would usually use a one-dimensional input vector of features. With broadcasting, we can now compute
>>> features = np.array([
... [0.5, 0.5, 0., 0., 0.5, 0., 0.5, 0.],
... [1., 0., 0., 0., 0., 0., 0., 0.],
... [0.5, 0.5, 0., 0., 0., 0., 0.5, 0.5],
... ])
>>> op = qml.AmplitudeEmbedding(features, wires=[1, 5, 2])
>>> op.batch_size
3
An exception is BasisEmbedding, which is not broadcastable.
### Improvements
• The qml.math.expand_matrix() method now allows the sparse matrix representation of an operator to be extended to a larger hilbert space. (#2998)
>>> from scipy import sparse
>>> mat = sparse.csr_matrix([[0, 1], [1, 0]])
>>> qml.math.expand_matrix(mat, wires=[1], wire_order=[0,1]).toarray()
array([[0., 1., 0., 0.],
[1., 0., 0., 0.],
[0., 0., 0., 1.],
[0., 0., 1., 0.]])
• qml.ctrl now uses Controlled instead of ControlledOperation. The new Controlled class wraps individual Operator‘s instead of a tape. It provides improved representations and integration. (#2990)
• qml.matrix can now compute the matrix of tapes and QNodes that contain multiple broadcasted operations or non-broadcasted operations after broadcasted ones. (#3025)
A common scenario in which this becomes relevant is the decomposition of broadcasted operations: the decomposition in general will contain one or multiple broadcasted operations as well as operations with no or fixed parameters that are not broadcasted.
• Lists of operators are now internally sorted by their respective wires while also taking into account their commutativity property. (#2995)
• Some methods of the QuantumTape class have been simplified and reordered to improve both readability and performance. (#2963)
• The qml.qchem.molecular_hamiltonian function is modified to support observable grouping. (#2997)
• qml.ops.op_math.Controlled now has basic decomposition functionality. (#2938)
• Automatic circuit cutting has been improved by making better partition imbalance derivations. Now it is more likely to generate optimal cuts for larger circuits. (#2517)
• By default, qml.counts only returns the outcomes observed in sampling. Optionally, specifying qml.counts(all_outcomes=True) will return a dictionary containing all possible outcomes. (#2889)
>>> dev = qml.device("default.qubit", wires=2, shots=1000)
>>>
>>> @qml.qnode(dev)
>>> def circuit():
... qml.CNOT(wires=[0, 1])
... return qml.counts(all_outcomes=True)
>>> result = circuit()
>>> result
{'00': 495, '01': 0, '10': 0, '11': 505}
• Internal use of in-place inversion is eliminated in preparation for its deprecation. (#2965)
• Controlled operators now work with qml.is_commuting. (#2994)
• qml.prod and qml.op_sum now support the sparse_matrix() method. (#3006)
>>> xy = qml.prod(qml.PauliX(1), qml.PauliY(1))
>>> op = qml.op_sum(xy, qml.Identity(0))
>>>
>>> sparse_mat = op.sparse_matrix(wire_order=[0,1])
>>> type(sparse_mat)
<class 'scipy.sparse.csr.csr_matrix'>
>>> sparse_mat.toarray()
[[1.+1.j 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 1.-1.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 1.+1.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j 1.-1.j]]
• Provided sparse_matrix() support for single qubit observables. (#2964)
• qml.Barrier with only_visual=True now simplifies via op.simplify() to the identity operator or a product of identity operators. (#3016)
• More accurate and intuitive outputs for printing some operators have been added. (#3013)
• Results for the matrix of the sum or product of operators are stored in a more efficient manner. (#3022)
• The computation of the (sparse) matrix for the sum or product of operators is now more efficient. (#3030)
• When the factors of qml.prod don’t share any wires, the matrix and sparse matrix are computed using a kronecker product for improved efficiency. (#3040)
• qml.grouping.is_pauli_word now returns False for operators that don’t inherit from qml.Observable instead of raising an error. (#3039)
• Added functionality to iterate over operators created from qml.op_sum and qml.prod. (#3028)
>>> op = qml.op_sum(qml.PauliX(0), qml.PauliY(1), qml.PauliZ(2))
>>> len(op)
3
>>> op[1]
PauliY(wires=[1])
>>> [o.name for o in op]
['PauliX', 'PauliY', 'PauliZ']
### Deprecations
• In-place inversion is now deprecated. This includes op.inv() and op.inverse=value. Please use qml.adjoint or qml.pow instead. Support for these methods will remain till v0.28. (#2988)
Don’t use:
>>> v1 = qml.PauliX(0).inv()
>>> v2 = qml.PauliX(0)
>>> v2.inverse = True
>>> qml.adjoint(qml.PauliX(0))
>>> qml.pow(qml.PauliX(0), -1)
PauliX(wires=[0])**-1
>>> qml.pow(qml.PauliX(0), -1, lazy=False)
PauliX(wires=[0])
>>> qml.PauliX(0) ** -1
PauliX(wires=[0])**-1
qml.adjoint takes the conjugate transpose of an operator, while qml.pow(op, -1) indicates matrix inversion. For unitary operators, adjoint will be more efficient than qml.pow(op, -1), even though they represent the same thing.
• The supports_reversible_diff device capability is unused and has been removed. (#2993)
### Breaking changes
• Measuring an operator that might not be hermitian now raises a warning instead of an error. To definitively determine whether or not an operator is hermitian, use qml.is_hermitian. (#2960)
• The ControlledOperation class has been removed. This was a developer-only class, so the change should not be evident to any users. It is replaced by Controlled. (#2990)
• The default execute method for the QubitDevice base class now calls self.statistics with an additional keyword argument circuit, which represents the quantum tape being executed. Any device that overrides statistics should edit the signature of the method to include the new circuit keyword argument. (#2820)
• The expand_matrix() has been moved from pennylane.operation to pennylane.math.matrix_manipulation (#3008)
• qml.grouping.utils.is_commuting has been removed, and its Pauli word logic is now part of qml.is_commuting. (#3033)
• qml.is_commuting has been moved from pennylane.transforms.commutation_dag to pennylane.ops.functions. (#2991)
### Documentation
• Updated the Fourier transform docs to use circuit_spectrum instead of spectrum, which has been deprecated. (#3018)
• Corrected the docstrings for diagonalizing gates for all relevant operations. The docstrings used to say that the diagonalizing gates implemented $$U$$, the unitary such that $$O = U \Sigma U^{\dagger}$$, where $$O$$ is the original observable and $$\Sigma$$ a diagonal matrix. However, the diagonalizing gates actually implement $$U^{\dagger}$$, since $$\langle \psi | O | \psi \rangle = \langle \psi | U \Sigma U^{\dagger} | \psi \rangle$$, making $$U^{\dagger} | \psi \rangle$$ the actual state being measured in the Z-basis. (#2981)
### Bug fixes
• Fixed a bug with qml.ops.Exp operators when the coefficient is autograd but the diagonalizing gates don’t act on all wires. (#3057)
• Fixed a bug where the tape transform single_qubit_fusion computed wrong rotation angles for specific combinations of rotations. (#3024)
• Jax gradients now work with a QNode when the quantum function was transformed by qml.simplify. (#3017)
• Operators that have num_wires = AnyWires or num_wires = AnyWires now raise an error, with certain exceptions, when instantiated with wires=[]. (#2979)
• Fixed a bug where printing qml.Hamiltonian with complex coefficients raises TypeError in some cases. (#3004)
• Added a more descriptive error message when measuring non-commuting observables at the end of a circuit with probs, samples, counts and allcounts. (#3065)
### Contributors
This release contains contributions from (in alphabetical order):
Juan Miguel Arrazola, Utkarsh Azad, Tom Bromley, Olivia Di Matteo, Isaac De Vlugt, Yiheng Duan, Lillian Marie Austin Frederiksen, Josh Izaac, Soran Jahangiri, Edward Jiang, Ankit Khandelwal, Korbinian Kottmann, Meenu Kumari, Christina Lee, Albert Mitjans Coma, Romain Moyard, Rashid N H M, Zeyue Niu, Mudit Pandey, Matthew Silverman, Jay Soni, Antal Száva, Cody Wang, David Wierichs.
orphan
## Release 0.25.1¶
### Bug fixes
• Fixed Torch device discrepencies for certain parametrized operations by updating qml.math.array and qml.math.eye to preserve the Torch device used. (#2967)
### Contributors
This release contains contributions from (in alphabetical order):
Romain Moyard, Rashid N H M, Lee James O’Riordan, Antal Száva
orphan
## Release 0.25.0¶
### New features since last release
#### Estimate computational resource requirements 🧠
• Functionality for estimating molecular simulation computations has been added with qml.resource. (#2646) (#2653) (#2665) (#2694) (#2720) (#2723) (#2746) (#2796) (#2797) (#2874) (#2944) (#2644)
The new resource module allows you to estimate the number of non-Clifford gates and logical qubits needed to implement quantum phase estimation algorithms for simulating materials and molecules. This includes support for quantum algorithms using first and second quantization with specific bases:
• First quantization using a plane-wave basis via the FirstQuantization class:
>>> n = 100000 # number of plane waves
>>> eta = 156 # number of electrons
>>> omega = 1145.166 # unit cell volume in atomic units
>>> algo = FirstQuantization(n, eta, omega)
>>> print(algo.gates, algo.qubits)
1.10e+13, 4416
• Second quantization with a double-factorized Hamiltonian via the DoubleFactorization class:
symbols = ["O", "H", "H"]
geometry = np.array(
[
[0.00000000, 0.00000000, 0.28377432],
[0.00000000, 1.45278171, -1.00662237],
[0.00000000, -1.45278171, -1.00662237],
],
)
mol = qml.qchem.Molecule(symbols, geometry, basis_name="sto-3g")
core, one, two = qml.qchem.electron_integrals(mol)()
algo = DoubleFactorization(one, two)
>>> print(algo.gates, algo.qubits)
103969925, 290
The methods of the FirstQuantization and the DoubleFactorization classes, such as qubit_cost (number of logical qubits) and gate_cost (number of non-Clifford gates), can be also accessed as static methods:
>>> qml.resource.FirstQuantization.qubit_cost(100000, 156, 169.69608, 0.01)
4377
>>> qml.resource.FirstQuantization.gate_cost(100000, 156, 169.69608, 0.01)
3676557345574
#### Differentiable error mitigation ⚙️
• Differentiable zero-noise-extrapolation (ZNE) error mitigation is now available. (#2757)
Elevate any variational quantum algorithm to a mitigated algorithm with improved results on noisy hardware while maintaining differentiability throughout.
In order to do so, use the qml.transforms.mitigate_with_zne transform on your QNode and provide the PennyLane proprietary qml.transforms.fold_global folding function and qml.transforms.poly_extrapolate extrapolation function. Here is an example for a noisy simulation device where we mitigate a QNode and are still able to compute the gradient:
# Describe noise
noise_gate = qml.DepolarizingChannel
noise_strength = 0.1
dev_ideal = qml.device("default.mixed", wires=1)
dev_noisy = qml.transforms.insert(noise_gate, noise_strength)(dev_ideal)
scale_factors = [1, 2, 3]
@mitigate_with_zne(
scale_factors,
qml.transforms.fold_global,
qml.transforms.poly_extrapolate,
extrapolate_kwargs={'order': 2}
)
@qml.qnode(dev_noisy)
def qnode_mitigated(theta):
qml.RY(theta, wires=0)
return qml.expval(qml.PauliX(0))
>>> theta = np.array(0.5, requires_grad=True)
0.5712737447327619
#### More native support for parameter broadcasting 📡
• default.qubit now natively supports parameter broadcasting, providing increased performance when executing the same circuit at various parameter positions compared to manually looping over parameters, or directly using the qml.transforms.broadcast_expand transform. (#2627)
dev = qml.device("default.qubit", wires=1)
@qml.qnode(dev)
def circuit(x):
qml.RX(x, wires=0)
return qml.expval(qml.PauliZ(0))
>>> circuit(np.array([0.1, 0.3, 0.2]))
Currently, not all templates have been updated to support broadcasting.
• Parameter-shift gradients now allow for parameter broadcasting internally, which can result in a significant speedup when computing gradients of circuits with many parameters. (#2749)
The gradient transform qml.gradients.param_shift now accepts the keyword argument broadcast. If set to True, broadcasting is used to compute the derivative:
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit(x, y):
qml.RX(x, wires=0)
qml.RY(y, wires=1)
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
>>> x = np.array([np.pi/3, np.pi/2], requires_grad=True)
>>> y = np.array([np.pi/6, np.pi/5], requires_grad=True)
(tensor([[-0.7795085, 0. ],
tensor([[-0.125, 0. ],
The following example highlights how to make use of broadcasting gradients at the QNode level. Internally, broadcasting is used to compute the parameter-shift rule when required, which may result in performance improvements.
@qml.qnode(dev, diff_method="parameter-shift", broadcast=True)
def circuit(x, y):
qml.RX(x, wires=0)
qml.RY(y, wires=1)
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
>>> x = np.array(0.1, requires_grad=True)
(array(-0.09195267), array(-0.38747287))
Here, only 2 circuits are created internally, rather than 4 with broadcast=False.
To illustrate the speedup, for a constant-depth circuit with Pauli rotations and controlled Pauli rotations, the time required to compute qml.gradients.param_shift(circuit, broadcast=False)(params) (“No broadcasting”) and qml.gradients.param_shift(circuit, broadcast=True)(params) (“Broadcasting”) as a function of the number of qubits is given here.
• Operations for quantum chemistry now support parameter broadcasting. (#2726)
>>> op = qml.SingleExcitation(np.array([0.3, 1.2, -0.7]), wires=[0, 1])
>>> op.matrix().shape
(3, 4, 4)
#### Intuitive operator arithmetic 🧮
• New functionality for representing the sum, product, and scalar-product of operators is available. (#2475) (#2625) (#2622) (#2721)
The following functionalities have been added to facilitate creating new operators whose matrix, terms, and eigenvalues can be accessed as per usual, while maintaining differentiability. Operators created from these new features can be used within QNodes as operations or as observables (where physically applicable).
• Summing any number of operators via qml.op_sum results in a “summed” operator:
>>> ops_to_sum = [qml.PauliX(0), qml.PauliY(1), qml.PauliZ(0)]
>>> summed_ops = qml.op_sum(*ops_to_sum)
>>> summed_ops
PauliX(wires=[0]) + PauliY(wires=[1]) + PauliZ(wires=[0])
>>> qml.matrix(summed_ops)
array([[ 1.+0.j, 0.-1.j, 1.+0.j, 0.+0.j],
[ 0.+1.j, 1.+0.j, 0.+0.j, 1.+0.j],
[ 1.+0.j, 0.+0.j, -1.+0.j, 0.-1.j],
[ 0.+0.j, 1.+0.j, 0.+1.j, -1.+0.j]])
>>> summed_ops.terms()
([1.0, 1.0, 1.0], (PauliX(wires=[0]), PauliY(wires=[1]), PauliZ(wires=[0])))
• Multiplying any number of operators via qml.prod results in a “product” operator, where the matrix product or tensor product is used correspondingly:
>>> theta = 1.23
>>> prod_op = qml.prod(qml.PauliZ(0), qml.RX(theta, 1))
>>> prod_op
PauliZ(wires=[0]) @ RX(1.23, wires=[1])
>>> qml.eigvals(prod_op)
[-1.39373197 -0.23981492 0.23981492 1.39373197]
• Taking the product of a coefficient and an operator via qml.s_prod produces a “scalar-product” operator:
>>> sprod_op = qml.s_prod(2.0, qml.PauliX(0))
>>> sprod_op
2.0*(PauliX(wires=[0]))
>>> sprod_op.matrix()
array([[ 0., 2.],
[ 2., 0.]])
>>> sprod_op.terms()
([2.0], [PauliX(wires=[0])])
Each of these new functionalities can be used within QNodes as operators or observables, where applicable, while also maintaining differentiability. For example:
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit(angles):
qml.prod(qml.PauliZ(0), qml.RY(angles[0], 1))
qml.op_sum(qml.PauliX(1), qml.RY(angles[1], 0))
return qml.expval(qml.op_sum(qml.PauliX(0), qml.PauliZ(1)))
>>> angles = np.array([1.23, 4.56], requires_grad=True)
>>> circuit(angles)
array([-0.9424888, 0. ])
• All PennyLane operators can now be added, subtracted, multiplied, scaled, and raised to powers using +, -, @, *, **, respectively. (#2849) (#2825) (#2891)
• You can now add scalars to operators, where the interpretation is that the scalar is a properly-sized identity matrix;
>>> sum_op = 5 + qml.PauliX(0)
>>> sum_op.matrix()
array([[5., 1.],
[1., 5.]])
• The + and - operators can be used to combine all Pennylane operators:
>>> sum_op = qml.RX(phi=1.23, wires=0) + qml.RZ(phi=3.14, wires=0) - qml.RY(phi=0.12, wires=0)
>>> sum_op
RX(1.23, wires=[0]) + RZ(3.14, wires=[0]) + -1*(RY(0.12, wires=[0]))
>>> qml.matrix(sum_op)
array([[-0.18063077-0.99999968j, 0.05996401-0.57695852j],
[-0.05996401-0.57695852j, -0.18063077+0.99999968j]])
Note that the behavior of + and - with observables is different; it still creates a Hamiltonian.
• The * and @ operators can be used to scale and compose all PennyLane operators.
>>> prod_op = 2*qml.RX(1, wires=0) @ qml.RY(2, wires=0)
>>> prod_op
2*(RX(1, wires=[0])) @ RY(2, wires=[0])
>>> qml.matrix(prod_op)
array([[ 0.94831976-0.80684536j, -1.47692053-0.51806945j],
[ 1.47692053-0.51806945j, 0.94831976+0.80684536j]])
• The ** operator can be used to raise PennyLane operators to a power.
>>> exp_op = qml.RZ(1.0, wires=0) ** 2
>>> exp_op
RZ**2(1.0, wires=[0])
>>> qml.matrix(exp_op)
array([[0.54030231-0.84147098j, 0. +0.j ],
[0. +0.j , 0.54030231+0.84147098j]])
• A new class called Controlled is available in qml.ops.op_math to represent a controlled version of any operator. This will eventually be integrated into qml.ctrl to provide a performance increase and more feature coverage. (#2634)
• Arithmetic operations can now be simplified using qml.simplify. (#2835) (#2854)
>>> op = qml.adjoint(qml.adjoint(qml.RX(x, wires=0)))
>>> op
>>> qml.simplify(op)
• A new function called qml.equal can be used to compare the equality of parametric operators. (#2651)
>>> qml.equal(qml.RX(1.23, 0), qml.RX(1.23, 0))
True
>>> qml.equal(qml.RY(4.56, 0), qml.RY(7.89, 0))
False
#### Marvelous mixed state features 🙌
• The default.mixed device now supports backpropagation with the "jax" interface, which can result in significant speedups. (#2754) (#2776)
dev = qml.device("default.mixed", wires=2)
@qml.qnode(dev, diff_method="backprop", interface="jax")
def circuit(angles):
qml.RX(angles[0], wires=0)
qml.RY(angles[1], wires=1)
return qml.expval(qml.PauliZ(0) + qml.PauliZ(1))
>>> angles = np.array([np.pi/6, np.pi/5], requires_grad=True)
array([-0.8660254 , -0.25881905])
Additionally, quantum channels now support Jax and TensorFlow tensors. This allows quantum channels to be used inside QNodes decorated by tf.function, jax.jit, or jax.vmap.
• The default.mixed device now supports readout error. (#2786)
A new keyword argument called readout_prob can be specified when creating a default.mixed device. Any circuits running on a default.mixed device with a finite readout_prob (upper-bounded by 1) will alter the measurements performed at the end of the circuit similarly to how a qml.BitFlip channel would affect circuit measurements:
>>> dev = qml.device("default.mixed", wires=2, readout_prob=0.1)
>>> @qml.qnode(dev)
... def circuit():
... return qml.expval(qml.PauliZ(0))
>>> circuit()
array(0.8)
#### Relative entropy is now available in qml.qinfo 💥
• The quantum information module now supports computation of relative entropy. (#2772)
We’ve enabled two cases for calculating the relative entropy:
• A QNode transform via qml.qinfo.relative_entropy:
dev = qml.device('default.qubit', wires=2)
@qml.qnode(dev)
def circuit(param):
qml.RY(param, wires=0)
qml.CNOT(wires=[0, 1])
return qml.state()
>>> relative_entropy_circuit = qml.qinfo.relative_entropy(circuit, circuit, wires0=[0], wires1=[0])
>>> x, y = np.array(0.4), np.array(0.6)
>>> relative_entropy_circuit((x,), (y,))
0.017750012490703237
• Support in qml.math for flexible post-processing:
>>> rho = np.array([[0.3, 0], [0, 0.7]])
>>> sigma = np.array([[0.5, 0], [0, 0.5]])
>>> qml.math.relative_entropy(rho, sigma)
#### New measurements, operators, and more! ✨
• A new measurement called qml.counts is available. (#2686) (#2839) (#2876)
QNodes with shots != None that return qml.counts will yield a dictionary whose keys are bitstrings representing computational basis states that were measured, and whose values are the corresponding counts (i.e., how many times that computational basis state was measured):
dev = qml.device("default.qubit", wires=2, shots=1000)
@qml.qnode(dev)
def circuit():
qml.CNOT(wires=[0, 1])
return qml.counts()
>>> circuit()
{'00': 495, '11': 505}
qml.counts can also accept observables, where the resulting dictionary is ordered by the eigenvalues of the observable.
dev = qml.device("default.qubit", wires=2, shots=1000)
@qml.qnode(dev)
def circuit():
qml.CNOT(wires=[0, 1])
return qml.counts(qml.PauliZ(0)), qml.counts(qml.PauliZ(1))
>>> circuit()
({-1: 470, 1: 530}, {-1: 470, 1: 530})
• A new experimental return type for QNodes with multiple measurements has been added. (#2814) (#2815) (#2860)
QNodes returning a list or tuple of different measurements return an intuitive data structure via qml.enable_return(), where the individual measurements are separated into their own tensors:
qml.enable_return()
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit(x):
qml.CRX(x, wires=[0, 1])
return (qml.probs(wires=[0]), qml.vn_entropy(wires=[0]), qml.probs(wires=0), qml.expval(wires=1))
>>> circuit(0.5)
In addition, QNodes that utilize this new return type support backpropagation. This new return type can be disabled thereafter via qml.disable_return().
• An operator called qml.FlipSign is now available. (#2780)
Mathematically, qml.FlipSign functions as follows: $$\text{FlipSign}(n) \vert m \rangle = (-1)^\delta_{n,m} \vert m \rangle$$, where $$\vert m \rangle$$ is an arbitrary qubit state and $n$ is a qubit configuration:
basis_state = [0, 1]
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit():
for wire in list(range(2)):
qml.FlipSign(basis_state, wires = list(range(2)))
return qml.state()
>>> circuit()
tensor([ 0.5+0.j -0.5+0.j 0.5+0.j 0.5+0.j], requires_grad=True)
• The simultaneous perturbation stochastic approximation (SPSA) optimizer is available via qml.SPSAOptimizer. (#2661)
The SPSA optimizer is suitable for cost functions whose evaluation may involve noise. Use the SPSA optimizer like you would any other optimizer:
max_iterations = 50
opt = qml.SPSAOptimizer(maxiter=max_iterations)
for _ in range(max_iterations):
params, cost = opt.step_and_cost(cost, params)
#### More drawing styles 🎨
• New PennyLane-inspired sketch and sketch_dark styles are now available for drawing circuit diagram graphics. (#2709)
### Improvements 📈
• default.qubit now natively executes any operation that defines a matrix except for trainable Pow operations. (#2836)
• Added expm to the qml.math module for matrix exponentiation. (#2890)
• When adjoint differentiation is requested, circuits are now decomposed so that all trainable operations have a generator. (#2836)
• A warning is now emitted for qml.state, qml.density_matrix, qml.vn_entropy, and qml.mutual_info when using a device with finite shots or a shot list since these measurements are always analytic. (#2918)
• The efficiency of the Hartree-Fock workflow has been improved by removing repetitive steps. (#2850)
• The coefficients of the non-differentiable molecular Hamiltonians generated with openfermion now have requires_grad = False by default. (#2865)
• Upgraded performance of the compute_matrix method of broadcastable parametric operations. (#2759)
• Jacobians are now cached with the Autograd interface when using the parameter-shift rule. (#2645)
• The qml.state and qml.density_matrix measurements now support custom wire labels. (#2779)
• Add trivial behaviour logic to qml.operation.expand_matrix. (#2785)
• Added an are_pauli_words_qwc function which checks if certain Pauli words are pairwise qubit-wise commuting. This new function improves performance when measuring hamiltonians with many commuting terms. (#2789)
### Breaking changes 💔
• The deprecated qml.hf module is removed. Users with code that calls qml.hf can simply replace qml.hf with qml.qchem in most cases, or refer to the qchem documentation and demos for more information. (#2795)
• default.qubit now uses stopping_condition to specify support for anything with a matrix. To override this behavior in inheriting devices and to support only a specific subset of operations, developers need to override stopping_condition. (#2836)
• Custom devices inheriting from DefaultQubit or QubitDevice can break due to the introduction of parameter broadcasting. (#2627)
A custom device should only break if all three following statements hold simultaneously:
1. The custom device inherits from DefaultQubit, not QubitDevice.
2. The device implements custom methods in the simulation pipeline that are incompatible with broadcasting (for example expval, apply_operation or analytic_probability).
3. The custom device maintains the flag "supports_broadcasting": True in its capabilities dictionary or it overwrites Device.batch_transform without applying broadcast_expand (or both).
The capabilities["supports_broadcasting"] is set to True for DefaultQubit. Typically, the easiest fix will be to change the capabilities["supports_broadcasting"] flag to False for the child device and/or to include a call to broadcast_expand in CustomDevice.batch_transform, similar to how Device.batch_transform calls it.
Separately from the above, custom devices that inherit from QubitDevice and implement a custom _gather method need to allow for the kwarg axis to be passed to this _gather method.
• The argument argnum of the function qml.batch_input has been redefined: now it indicates the indices of the batched parameters, which need to be non-trainable, in the quantum tape. Consequently, its default value (set to 0) has been removed. (#2873)
Before this breaking change, one could call qml.batch_input without any arguments when using batched inputs as the first argument of the quantum circuit.
dev = qml.device("default.qubit", wires=2, shots=None)
@qml.batch_input() # argnum = 0
@qml.qnode(dev, diff_method="parameter-shift", interface="tf")
def circuit(inputs, weights): # argument inputs is batched
qml.RY(weights[0], wires=0)
qml.AngleEmbedding(inputs, wires=range(2), rotation="Y")
qml.RY(weights[1], wires=1)
return qml.expval(qml.PauliZ(1))
With this breaking change, users must set a value to argnum specifying the index of the batched inputs with respect to all quantum tape parameters. In this example the quantum tape parameters are [ weights[0], inputs, weights[1] ], thus argnum should be set to 1, specifying that inputs is batched:
dev = qml.device("default.qubit", wires=2, shots=None)
@qml.batch_input(argnum=1)
@qml.qnode(dev, diff_method="parameter-shift", interface="tf")
def circuit(inputs, weights):
qml.RY(weights[0], wires=0)
qml.AngleEmbedding(inputs, wires=range(2), rotation="Y")
qml.RY(weights[1], wires=1)
return qml.expval(qml.PauliZ(1))
• PennyLane now depends on newer versions (>=2.7) of the semantic_version package, which provides an updated API that is incompatible which versions of the package prior to 2.7. If you run into issues relating to this package, please reinstall PennyLane. (#2744) (#2767)
### Documentation 📕
• Added a dedicated docstring for the QubitDevice.sample method. (#2812)
• Optimization examples of using JAXopt and Optax with the JAX interface have been added. (#2769)
• Updated IsingXY gate docstring. (#2858)
### Bug fixes 🐞
• Fixes qml.equal so that operators with different inverse properties are not equal. (#2947)
• Cleans up interactions between operator arithmetic and batching by testing supported cases and adding errors when batching is not supported. (#2900)
• Fixed a bug where the parameter-shift rule wasn’t defined for qml.kUpCCGSD. (#2913)
• Reworked the Hermiticity check in qml.Hermitian by using qml.math calls because calling .conj() on an EagerTensor from TensorFlow raised an error. (#2895)
• Fixed a bug where the parameter-shift gradient breaks when using both custom grad_recipes that contain unshifted terms and recipes that do not contain any unshifted terms. (#2834)
• Fixed mixed CPU-GPU data-locality issues for the Torch interface. (#2830)
• Fixed a bug where the parameter-shift Hessian of circuits with untrainable parameters might be computed with respect to the wrong parameters or might raise an error. (#2822)
• Fixed a bug where the custom implementation of the states_to_binary device method was not used. (#2809)
• qml.grouping.group_observables now works when individual wire labels are iterable. (#2752)
• The adjoint of an adjoint now has a correct expand result. (#2766)
• Fixed the ability to return custom objects as the expectation value of a QNode with the Autograd interface. (#2808)
• The WireCut operator now raises an error when instantiating it with an empty list. (#2826)
• Hamiltonians with grouped observables are now allowed to be measured on devices which were transformed using qml.transform.insert(). (#2857)
• Fixed a bug where qml.batch_input raised an error when using a batched operator that was not located at the beginning of the circuit. In addition, now qml.batch_input raises an error when using trainable batched inputs, which avoids an unwanted behaviour with duplicated parameters. (#2873)
• Calling qml.equal with nested operators now raises a NotImplementedError. (#2877)
• Fixed a bug where a non-sensible error message was raised when using qml.counts with shots=False. (#2928)
• Fixed a bug where no error was raised and a wrong value was returned when using qml.counts with another non-commuting observable. (#2928)
• Operator Arithmetic now allows Hamiltonian objects to be used and produces correct matrices. (#2957)
### Contributors
This release contains contributions from (in alphabetical order):
Juan Miguel Arrazola, Utkarsh Azad, Samuel Banning, Prajwal Borkar, Isaac De Vlugt, Olivia Di Matteo, Kristiyan Dilov, David Ittah, Josh Izaac, Soran Jahangiri, Edward Jiang, Ankit Khandelwal, Korbinian Kottmann, Meenu Kumari, Christina Lee, Sergio Martínez-Losa, Albert Mitjans Coma, Ixchel Meza Chavez, Romain Moyard, Lee James O’Riordan, Mudit Pandey, Bogdan Reznychenko, Shuli Shu, Jay Soni, Modjtaba Shokrian-Zini, Antal Száva, David Wierichs, Moritz Willmann
orphan
## Release 0.24.0¶
### New features since last release
#### All new quantum information quantities 📏
• Functionality for computing quantum information quantities for QNodes has been added. (#2554) (#2569) (#2598) (#2617) (#2631) (#2640) (#2663) (#2684) (#2688) (#2695) (#2710) (#2712)
This includes two new QNode measurements:
• The Von Neumann entropy via qml.vn_entropy:
>>> dev = qml.device("default.qubit", wires=2)
>>> @qml.qnode(dev)
... def circuit_entropy(x):
... qml.IsingXX(x, wires=[0,1])
... return qml.vn_entropy(wires=[0], log_base=2)
>>> circuit_entropy(np.pi/2)
1.0
• The mutual information via qml.mutual_info:
>>> dev = qml.device("default.qubit", wires=2)
>>> @qml.qnode(dev)
... def circuit(x):
... qml.IsingXX(x, wires=[0,1])
... return qml.mutual_info(wires0=[0], wires1=[1], log_base=2)
>>> circuit(np.pi/2)
2.0
New differentiable transforms are also available in the qml.qinfo module:
• The classical and quantum Fisher information via qml.qinfo.classical_fisher, qml.qinfo.quantum_fisher, respectively:
dev = qml.device("default.qubit", wires=3)
@qml.qnode(dev)
def circ(params):
qml.RY(params[0], wires=1)
qml.CNOT(wires=(1,0))
qml.RY(params[1], wires=1)
qml.RZ(params[2], wires=1)
return qml.expval(qml.PauliX(0) @ qml.PauliX(1) - 0.5 * qml.PauliZ(1))
params = np.array([0.5, 1., 0.2], requires_grad=True)
cfim = qml.qinfo.classical_fisher(circ)(params)
qfim = qml.qinfo.quantum_fisher(circ)(params)
These quantities are typically employed in variational optimization schemes to tilt the gradient in a more favourable direction — producing what is known as the natural gradient. For example:
>>> grad = qml.grad(circ)(params)
[ 5.94225615e-01 -2.61509542e-02 -1.18674655e-18]
[ 0.59422561 -0.02615095 -0.03989212]
• The fidelity between two arbitrary states via qml.qinfo.fidelity:
dev = qml.device('default.qubit', wires=1)
@qml.qnode(dev)
def circuit_rx(x):
qml.RX(x[0], wires=0)
qml.RZ(x[1], wires=0)
return qml.state()
@qml.qnode(dev)
def circuit_ry(y):
qml.RY(y, wires=0)
return qml.state()
>>> x = np.array([0.1, 0.3], requires_grad=True)
>>> fid_func = qml.qinfo.fidelity(circuit_rx, circuit_ry, wires0=[0], wires1=[0])
>>> fid_func(x, y)
0.9905158135644924
>>> df(x, y)
(array([-0.04768725, -0.29183666]), array(-0.09489803))
• Reduced density matrices of arbitrary states via qml.qinfo.reduced_dm:
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit(x):
qml.IsingXX(x, wires=[0,1])
return qml.state()
>>> qml.qinfo.reduced_dm(circuit, wires=[0])(np.pi/2)
[[0.5+0.j 0.+0.j]
[0.+0.j 0.5+0.j]]
• Similar transforms, qml.qinfo.vn_entropy and qml.qinfo.mutual_info exist for transforming QNodes.
Currently, all quantum information measurements and transforms are differentiable, but only support statevector devices, with hardware support to come in a future release (with the exception of qml.qinfo.classical_fisher and qml.qinfo.quantum_fisher, which are both hardware compatible).
For more information, check out the new qinfo module and measurements page.
• In addition to the QNode transforms and measurements above, functions for computing and differentiating quantum information metrics with numerical statevectors and density matrices have been added to the qml.math module. This enables flexible custom post-processing.
• qml.math.reduced_dm
• qml.math.vn_entropy
• qml.math.mutual_info
• qml.math.fidelity
For example:
>>> x = torch.tensor([1.0, 0.0, 0.0, 1.0], requires_grad=True)
>>> en = qml.math.vn_entropy(x / np.sqrt(2.), indices=[0])
>>> en
>>> en.backward()
tensor([-0.3069, 0.0000, 0.0000, -0.3069])
#### Faster mixed-state training with backpropagation 📉
• The default.mixed device now supports differentiation via backpropagation with the Autograd, TensorFlow, and PyTorch (CPU) interfaces, leading to significantly more performant optimization and training. (#2615) (#2670) (#2680)
As a result, the default differentiation method for the device is now "backprop". To continue using the old default "parameter-shift", explicitly specify this differentiation method in the QNode:
dev = qml.device("default.mixed", wires=2)
def circuit(x):
qml.RY(x, wires=0)
qml.CNOT(wires=[0, 1])
return qml.expval(qml.PauliZ(wires=1))
>>> x = np.array(0.5, requires_grad=True)
>>> circuit(x)
array(0.87758256)
-0.479425538604203
#### Support for quantum parameter broadcasting 📡
• Quantum operators, functions, and tapes now support broadcasting across parameter dimensions, making it more convenient for developers to execute their PennyLane programs with multiple sets of parameters. (#2575) (#2609)
Parameter broadcasting refers to passing tensor parameters with additional leading dimensions to quantum operators; additional dimensions will flow through the computation, and produce additional dimensions at the output.
For example, instantiating a rotation gate with a one-dimensional array leads to a broadcasted Operation:
>>> x = np.array([0.1, 0.2, 0.3], requires_grad=True)
>>> op = qml.RX(x, 0)
>>> op.batch_size
3
Its matrix correspondingly is augmented by a leading dimension of size batch_size:
>>> np.round(qml.matrix(op), 4)
tensor([[[0.9988+0.j , 0. -0.05j ],
[0. -0.05j , 0.9988+0.j ]],
[[0.995 +0.j , 0. -0.0998j],
[0. -0.0998j, 0.995 +0.j ]],
[[0.9888+0.j , 0. -0.1494j],
>>> qml.matrix(op).shape
(3, 2, 2)
This can be extended to quantum functions, where we may mix-and-match operations with batched parameters and those without. However, the batch_size of each batched Operator within the quantum function must be the same:
>>> dev = qml.device('default.qubit', wires=1)
>>> @qml.qnode(dev)
... def circuit_rx(x, z):
... qml.RX(x, wires=0)
... qml.RZ(z, wires=0)
... qml.RY(0.3, wires=0)
... return qml.probs(wires=0)
>>> circuit_rx([0.1, 0.2], [0.3, 0.4])
tensor([[0.97092256, 0.02907744],
Parameter broadcasting is supported on all devices, hardware and simulator. Note that if not natively supported by the underlying device, parameter broadcasting may result in additional quantum device evaluations.
• A new transform, qml.transforms.broadcast_expand, has been added, which automates the process of transforming quantum functions (and tapes) to multiple quantum evaluations with no parameter broadcasting. (#2590)
>>> dev = qml.device('default.qubit', wires=1)
>>> @qml.qnode(dev)
... def circuit_rx(x, z):
... qml.RX(x, wires=0)
... qml.RZ(z, wires=0)
... qml.RY(0.3, wires=0)
... return qml.probs(wires=0)
>>> print(qml.draw(circuit_rx)([0.1, 0.2], [0.3, 0.4]))
0: ──RX(0.10)──RZ(0.30)──RY(0.30)─┤ Probs
\
0: ──RX(0.20)──RZ(0.40)──RY(0.30)─┤ Probs
Under-the-hood, this transform is used for devices that don’t natively support parameter broadcasting.
• To specify that a device natively supports broadcasted tapes, the new flag Device.capabilities()["supports_broadcasting"] should be set to True.
• To support parameter broadcasting for new or custom operations, the following new Operator class attributes must be specified:
• Operator.ndim_params specifies expected number of dimensions for each parameter
Once set, Operator.batch_size and QuantumTape.batch_size will dynamically compute the parameter broadcasting axis dimension, if present.
#### Improved JAX JIT support 🏎
• JAX just-in-time (JIT) compilation now supports vector-valued QNodes, enabling new types of workflows and significant performance boosts. (#2034)
Vector-valued QNodes include those with:
• qml.probs;
• qml.state;
• qml.sample or
• multiple qml.expval / qml.var measurements.
Consider a QNode that returns basis-state probabilities:
dev = qml.device('default.qubit', wires=2)
x = jnp.array(0.543)
y = jnp.array(-0.654)
@jax.jit
@qml.qnode(dev, diff_method="parameter-shift", interface="jax")
def circuit(x, y):
qml.RX(x, wires=[0])
qml.RY(y, wires=[1])
qml.CNOT(wires=[0, 1])
return qml.probs(wires=[1])
>>> circuit(x, y)
Array([0.8397495 , 0.16025047], dtype=float32)
Note that computing the jacobian of vector-valued QNode is not supported with JAX JIT. The output of vector-valued QNodes can, however, be used in the definition of scalar-valued cost functions whose gradients can be computed.
For example, one can define a cost function that outputs the first element of the probability vector:
def cost(x, y):
return circuit(x, y)[0]
>>> jax.grad(cost, argnums=[0])(x, y)
(Array(-0.2050439, dtype=float32),)
#### More drawing styles 🎨
• New solarized_light and solarized_dark styles are available for drawing circuit diagram graphics. (#2662)
#### New operations & transforms 🤖
• The qml.IsingXY gate is now available (see 1912.04424). (#2649)
• The qml.ECR (echoed cross-resonance) operation is now available (see 2105.01063). This gate is a maximally-entangling gate and is equivalent to a CNOT gate up to single-qubit pre-rotations. (#2613)
• The adjoint transform adjoint can now accept either a single instantiated operator or a quantum function. It returns an entity of the same type / call signature as what it was given: (#2222) (#2672)
>>> qml.adjoint(qml.PauliX(0))
Now, adjoint wraps operators in a symbolic operator class qml.ops.op_math.Adjoint. This class should not be constructed directly; the adjoint constructor should always be used instead. The class behaves just like any other Operator:
>>> op = qml.adjoint(qml.S(0))
>>> qml.matrix(op)
array([[1.-0.j, 0.-0.j],
[0.-0.j, 0.-1.j]])
>>> qml.eigvals(op)
array([1.-0.j, 0.-1.j])
• A new symbolic operator class qml.ops.op_math.Pow represents an operator raised to a power. When decomposition() is called, a list of new operators equal to this one raised to the given power is given: (#2621)
>>> op = qml.ops.op_math.Pow(qml.PauliX(0), 0.5)
>>> op.decomposition()
[SX(wires=[0])]
>>> qml.matrix(op)
array([[0.5+0.5j, 0.5-0.5j],
[0.5-0.5j, 0.5+0.5j]])
• A new transform qml.batch_partial is available which behaves similarly to functools.partial, but supports batching in the unevaluated parameters. (#2585)
This is useful for executing a circuit with a batch dimension in some of its parameters:
dev = qml.device("default.qubit", wires=1)
@qml.qnode(dev)
def circuit(x, y):
qml.RX(x, wires=0)
qml.RY(y, wires=0)
return qml.expval(qml.PauliZ(wires=0))
>>> batched_partial_circuit = qml.batch_partial(circuit, x=np.array(np.pi / 4))
>>> y = np.array([0.2, 0.3, 0.4])
>>> batched_partial_circuit(y=y)
• A new transform qml.split_non_commuting is available, which splits a quantum function or tape into multiple functions/tapes determined by groups of commuting observables: (#2587)
dev = qml.device("default.qubit", wires=1)
@qml.transforms.split_non_commuting
@qml.qnode(dev)
def circuit(x):
qml.RX(x,wires=0)
return [qml.expval(qml.PauliX(0)), qml.expval(qml.PauliZ(0))]
>>> print(qml.draw(circuit)(0.5))
0: ──RX(0.50)─┤ <X>
\
0: ──RX(0.50)─┤ <Z>
### Improvements
• Expectation values of multiple non-commuting observables from within a single QNode are now supported: (#2587)
>>> dev = qml.device('default.qubit', wires=1)
>>> @qml.qnode(dev)
... def circuit_rx(x, z):
... qml.RX(x, wires=0)
... qml.RZ(z, wires=0)
... return qml.expval(qml.PauliX(0)), qml.expval(qml.PauliY(0))
>>> circuit_rx(0.1, 0.3)
• Selecting which parts of parameter-shift Hessians are computed is now possible. (#2538)
The argnum keyword argument for qml.gradients.param_shift_hessian is now allowed to be a two-dimensional Boolean array_like. Only the indicated entries of the Hessian will then be computed.
A particularly useful example is the computation of the diagonal of the Hessian:
dev = qml.device("default.qubit", wires=1)
@qml.qnode(dev)
def circuit(x):
qml.RX(x[0], wires=0)
qml.RY(x[1], wires=0)
qml.RX(x[2], wires=0)
return qml.expval(qml.PauliZ(0))
argnum = qml.math.eye(3, dtype=bool)
x = np.array([0.2, -0.9, 1.1], requires_grad=True)
>>> qml.gradients.param_shift_hessian(circuit, argnum=argnum)(x)
tensor([[-0.09928388, 0. , 0. ],
[ 0. , -0.27633945, 0. ],
[ 0. , 0. , -0.09928388]], requires_grad=True)
• Commuting Pauli operators are now measured faster. (#2425)
The logic that checks for qubit-wise commuting (QWC) observables has been improved, resulting in a performance boost that is noticable when many commuting Pauli operators of the same type are measured.
• It is now possible to add Observable objects to the integer 0, for example qml.PauliX(wires=[0]) + 0. (#2603)
• Wires can now be passed as the final argument to an Operator, instead of requiring the wires to be explicitly specified with keyword wires. This functionality already existed for Observables, but now extends to all Operators: (#2432)
>>> qml.S(0)
S(wires=[0])
>>> qml.CNOT((0,1))
CNOT(wires=[0, 1])
• The qml.taper function can now be used to consistently taper any additional observables such as dipole moment, particle number, and spin operators using the symmetries obtained from the Hamiltonian. (#2510)
• Sparse Hamiltonians’ representation has changed from Coordinate (COO) to Compressed Sparse Row (CSR) format. (#2561)
The CSR representation is more performant for arithmetic operations and matrix-vector products. This change decreases the expval() calculation time for qml.SparseHamiltonian, specially for large workflows. In addition, the CSR format consumes less memory for qml.SparseHamiltonian storage.
• IPython now displays the str representation of a Hamiltonian, rather than the repr. This displays more information about the object. (#2648)
• The qml.qchem tests have been restructured. (#2593) (#2545)
• OpenFermion-dependent tests are now localized and collected in tests.qchem.of_tests. The new module test_structure is created to collect the tests of the qchem.structure module in one place and remove their dependency to OpenFermion.
• Test classes have been created to group the integrals and matrices unit tests.
• An operations_only argument is introduced to the tape.get_parameters method. (#2543)
• The gradients module now uses faster subroutines and uniform formats of gradient rules. (#2452)
• Instead of checking types, objects are now processed in the QuantumTape based on a new _queue_category property. This is a temporary fix that will disappear in the future. (#2408)
• The QNode class now contains a new method best_method_str that returns the best differentiation method for a provided device and interface, in human-readable format. (#2533)
• Using Operation.inv() in a queuing environment no longer updates the queue’s metadata, but merely updates the operation in place. (#2596)
• A new method safe_update_info is added to qml.QueuingContext. This method is substituted for qml.QueuingContext.update_info in a variety of places. (#2612) (#2675)
• BasisEmbedding can accept an int as argument instead of a list of bits. (#2601)
For example, qml.BasisEmbedding(4, wires = range(4)) is now equivalent to qml.BasisEmbedding([0,1,0,0], wires = range(4)) (as 4==0b100).
• Introduced a new is_hermitian property to Operators to determine if an operator can be used in a measurement process. (#2629)
• Added separate requirements_dev.txt for separation of concerns for code development and just using PennyLane. (#2635)
• The performance of building sparse Hamiltonians has been improved by accumulating the sparse representation of coefficient-operator pairs in a temporary storage and by eliminating unnecessary kron operations on identity matrices. (#2630)
• Control values are now displayed distinctly in text and matplotlib drawings of circuits. (#2668)
• The TorchLayer init_method argument now accepts either a torch.nn.init function or a dictionary which should specify a torch.nn.init/torch.Tensor for each different weight. (#2678)
• The unused keyword argument do_queue for Operation.adjoint is now fully removed. (#2583)
• Several non-decomposable Adjoint operators are added to the device test suite. (#2658)
• The developer-facing pow method has been added to Operator with concrete implementations for many classes. (#2225)
• The ctrl transform and ControlledOperation have been moved to the new qml.ops.op_math submodule. The developer-facing ControlledOperation class is no longer imported top-level. (#2656)
### Deprecations
• qml.ExpvalCost has been deprecated, and usage will now raise a warning. (#2571)
Instead, it is recommended to simply pass Hamiltonians to the qml.expval function inside QNodes:
@qml.qnode(dev)
def ansatz(params):
some_qfunc(params)
return qml.expval(Hamiltonian)
### Breaking changes
• When using qml.TorchLayer, weights with negative shapes will now raise an error, while weights with size = 0 will result in creating empty Tensor objects. (#2678)
• PennyLane no longer supports TensorFlow <=2.3. (#2683)
• The qml.queuing.Queue class has been removed. (#2599)
• The qml.utils.expand function is now removed; qml.operation.expand_matrix should be used instead. (#2654)
• The module qml.gradients.param_shift_hessian has been renamed to qml.gradients.parameter_shift_hessian in order to distinguish it from the identically named function. Note that the param_shift_hessian function is unaffected by this change and can be invoked in the same manner as before via the qml.gradients module. (#2528)
• The properties eigval and matrix from the Operator class were replaced with the methods eigval() and matrix(wire_order=None). (#2498)
• Operator.decomposition() is now an instance method, and no longer accepts parameters. (#2498)
• Adds tests, adds no-coverage directives, and removes inaccessible logic to improve code coverage. (#2537)
• The base classes QubitDevice and DefaultQubit now accept data-types for a statevector. This enables a derived class (device) in a plugin to choose correct data-types: (#2448)
>>> dev = qml.device("default.qubit", wires=4, r_dtype=np.float32, c_dtype=np.complex64)
>>> dev.R_DTYPE
<class 'numpy.float32'>
>>> dev.C_DTYPE
<class 'numpy.complex64'>
### Bug fixes
• Fixed a bug where returning qml.density_matrix using the PyTorch interface would return a density matrix with wrong shape. (#2643)
• Fixed a bug to make param_shift_hessian work with QNodes in which gates marked as trainable do not have any impact on the QNode output. (#2584)
• QNodes can now interpret variations on the interface name, like "tensorflow" or "jax-jit", when requesting backpropagation. (#2591)
• Fixed a bug for diff_method="adjoint" where incorrect gradients were computed for QNodes with parametrized observables (e.g., qml.Hermitian). (#2543)
• Fixed a bug where QNGOptimizer did not work with operators whose generator was a Hamiltonian. (#2524)
• Fixed a bug with the decomposition of qml.CommutingEvolution. (#2542)
• Fixed a bug enabling PennyLane to work with the latest version of Autoray. (#2549)
• Fixed a bug which caused different behaviour for Hamiltonian @ Observable and Observable @ Hamiltonian. (#2570)
• Fixed a bug in DiagonalQubitUnitary._controlled where an invalid operation was queued instead of the controlled version of the diagonal unitary. (#2525)
• Updated the gradients fix to only apply to the strawberryfields.gbs device, since the original logic was breaking some devices. (#2485) (#2595)
• Fixed a bug in qml.transforms.insert where operations were not inserted after gates within a template. (#2704)
• Hamiltonian.wires is now properly updated after in place operations. (#2738)
### Documentation
• The centralized Xanadu Sphinx Theme is now used to style the Sphinx documentation. (#2450)
• Added a reference to qml.utils.sparse_hamiltonian in qml.SparseHamiltonian to clarify how to construct sparse Hamiltonians in PennyLane. (2572)
• Added a new section in the Gradients and Training page that summarizes the supported device configurations and provides justification. In addition, code examples were added for some selected configurations. (#2540)
• Added a note for the Depolarization Channel that specifies how the channel behaves for the different values of depolarization probability p. (#2669)
• The quickstart documentation has been improved. (#2530) (#2534) (#2564 (#2565 (#2566) (#2607) (#2608)
• The quantum chemistry quickstart documentation has been improved. (#2500)
• Testing documentation has been improved. (#2536)
• Documentation for the pre-commit package has been added. (#2567)
• Documentation for draw control wires change has been updated. (#2682)
### Contributors
This release contains contributions from (in alphabetical order):
Guillermo Alonso-Linaje, Mikhail Andrenkov, Juan Miguel Arrazola, Ali Asadi, Utkarsh Azad, Samuel Banning, Avani Bhardwaj, Thomas Bromley, Albert Mitjans Coma, Isaac De Vlugt, Amintor Dusko, Trent Fridey, Christian Gogolin, Qi Hu, Katharine Hyatt, David Ittah, Josh Izaac, Soran Jahangiri, Edward Jiang, Nathan Killoran, Korbinian Kottmann, Ankit Khandelwal, Christina Lee, Chae-Yeun Park, Mason Moreland, Romain Moyard, Maria Schuld, Jay Soni, Antal Száva, tal66, David Wierichs, Roeland Wiersema, WingCode.
orphan
## Release 0.23.1¶
### Bug fixes
• Fixed a bug enabling PennyLane to work with the latest version of Autoray. (#2548)
### Contributors
This release contains contributions from (in alphabetical order):
Josh Izaac
orphan
## Release 0.23.0¶
### New features since last release
#### More powerful circuit cutting ✂️
• Quantum circuit cutting (running N-wire circuits on devices with fewer than N wires) is now supported for QNodes of finite-shots using the new @qml.cut_circuit_mc transform. (#2313) (#2321) (#2332) (#2358) (#2382) (#2399) (#2407) (#2444)
With these new additions, samples from the original circuit can be simulated using a Monte Carlo method, using fewer qubits at the expense of more device executions. Additionally, this transform can take an optional classical processing function as an argument and return an expectation value.
The following 3-qubit circuit contains a WireCut operation and a sample measurement. When decorated with @qml.cut_circuit_mc, we can cut the circuit into two 2-qubit fragments:
dev = qml.device("default.qubit", wires=2, shots=1000)
@qml.cut_circuit_mc
@qml.qnode(dev)
def circuit(x):
qml.RX(0.89, wires=0)
qml.RY(0.5, wires=1)
qml.RX(1.3, wires=2)
qml.CNOT(wires=[0, 1])
qml.WireCut(wires=1)
qml.CNOT(wires=[1, 2])
qml.RX(x, wires=0)
qml.RY(0.7, wires=1)
qml.RX(2.3, wires=2)
return qml.sample(wires=[0, 2])
we can then execute the circuit as usual by calling the QNode:
>>> x = 0.3
>>> circuit(x)
tensor([[1, 1],
[0, 1],
[0, 1],
...,
[0, 1],
[0, 1],
Furthermore, the number of shots can be temporarily altered when calling the QNode:
>>> results = circuit(x, shots=123)
>>> results.shape
(123, 2)
The cut_circuit_mc transform also supports returning sample-based expectation values of observables using the classical_processing_fn argument. Refer to the UsageDetails section of the transform documentation for an example.
• The cut_circuit transform now supports automatic graph partitioning by specifying auto_cutter=True to cut arbitrary tape-converted graphs using the general purpose graph partitioning framework KaHyPar. (#2330) (#2428)
Note that KaHyPar needs to be installed separately with the auto_cutter=True option.
For integration with the existing low-level manual cut pipeline, refer to the documentation of the function .
@qml.cut_circuit(auto_cutter=True)
@qml.qnode(dev)
def circuit(x):
qml.RX(x, wires=0)
qml.RY(0.9, wires=1)
qml.RX(0.3, wires=2)
qml.CZ(wires=[0, 1])
qml.RY(-0.4, wires=0)
qml.CZ(wires=[1, 2])
return qml.expval(qml.grouping.string_to_pauli_word("ZZZ"))
>>> x = np.array(0.531, requires_grad=True)
>>> circuit(x)
0.47165198882111165
-0.276982865449393
#### Grand QChem unification ⚛️ 🏰
• Quantum chemistry functionality — previously split between an external pennylane-qchem package and internal qml.hf differentiable Hartree-Fock solver — is now unified into a single, included, qml.qchem module. (#2164) (#2385) (#2352) (#2420) (#2454)
(#2199) (#2371) (#2272) (#2230) (#2415) (#2426) (#2465)
The qml.qchem module provides a differentiable Hartree-Fock solver and the functionality to construct a fully-differentiable molecular Hamiltonian.
For example, one can continue to generate molecular Hamiltonians using
qml.qchem.molecular_hamiltonian:
symbols = ["H", "H"]
geometry = np.array([[0., 0., -0.66140414], [0., 0., 0.66140414]])
hamiltonian, qubits = qml.qchem.molecular_hamiltonian(symbols, geometry, method="dhf")
By default, this will use the differentiable Hartree-Fock solver; however, simply set method="pyscf" to continue to use PySCF for Hartree-Fock calculations.
• Functions are added for building a differentiable dipole moment observable. Functions for computing multipole moment molecular integrals, needed for building the dipole moment observable, are also added. (#2173) (#2166)
The dipole moment observable can be constructed using qml.qchem.dipole_moment:
symbols = ['H', 'H']
geometry = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
mol = qml.qchem.Molecule(symbols, geometry)
args = [geometry]
D = qml.qchem.dipole_moment(mol)(*args)
• The efficiency of computing molecular integrals and Hamiltonian is improved. This has been done by adding optimized functions for building fermionic and qubit observables and optimizing the functions used for computing the electron repulsion integrals. (#2316)
• The 6-31G basis set is added to the qchem basis set repo. This addition allows performing differentiable Hartree-Fock calculations with basis sets beyond the minimal sto-3g basis set for atoms with atomic number 1-10. (#2372)
The 6-31G basis set can be used to construct a Hamiltonian as
symbols = ["H", "H"]
geometry = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
H, qubits = qml.qchem.molecular_hamiltonian(symbols, geometry, basis="6-31g")
• External dependencies are replaced with local functions for spin and particle number observables. (#2197) (#2362)
#### Pattern matching optimization 🔎 💎
• Added an optimization transform that matches pieces of user-provided identity templates in a circuit and replaces them with an equivalent component. (#2032)
For example, consider the following circuit where we want to replace sequence of two pennylane.S gates with a pennylane.PauliZ gate.
def circuit():
qml.S(wires=0)
qml.PauliZ(wires=0)
qml.S(wires=1)
qml.CZ(wires=[0, 1])
qml.S(wires=1)
qml.S(wires=2)
qml.CZ(wires=[1, 2])
qml.S(wires=2)
return qml.expval(qml.PauliX(wires=0))
We specify use the following pattern that implements the identity:
with qml.tape.QuantumTape() as pattern:
qml.S(wires=0)
qml.S(wires=0)
qml.PauliZ(wires=0)
To optimize the circuit with this identity pattern, we apply the qml.transforms.pattern_matching transform.
>>> dev = qml.device('default.qubit', wires=5)
>>> qnode = qml.QNode(circuit, dev)
>>> optimized_qfunc = qml.transforms.pattern_matching_optimization(pattern_tapes=[pattern])(circuit)
>>> optimized_qnode = qml.QNode(optimized_qfunc, dev)
>>> print(qml.draw(qnode)())
0: ──S──Z─╭C──────────┤ <X>
1: ──S────╰Z──S─╭C────┤
2: ──S──────────╰Z──S─┤
>>> print(qml.draw(optimized_qnode)())
0: ──S⁻¹─╭C────┤ <X>
1: ──Z───╰Z─╭C─┤
2: ──Z──────╰Z─┤
For more details on using pattern matching optimization you can check the corresponding documentation and also the following paper.
#### Measure the distance between two unitaries📏
• Added the HilbertSchmidt and the LocalHilbertSchmidt templates to be used for computing distance measures between unitaries. (#2364)
Given a unitary U, qml.HilberSchmidt can be used to measure the distance between unitaries and to define a cost function (cost_hst) used for learning a unitary V that is equivalent to U up to a global phase:
# Represents unitary U
with qml.tape.QuantumTape(do_queue=False) as u_tape:
# Represents unitary V
def v_function(params):
qml.RZ(params[0], wires=1)
@qml.qnode(dev)
def hilbert_test(v_params, v_function, v_wires, u_tape):
qml.HilbertSchmidt(v_params, v_function=v_function, v_wires=v_wires, u_tape=u_tape)
return qml.probs(u_tape.wires + v_wires)
def cost_hst(parameters, v_function, v_wires, u_tape):
return (1 - hilbert_test(v_params=parameters, v_function=v_function, v_wires=v_wires, u_tape=u_tape)[0])
>>> cost_hst(parameters=[0.1], v_function=v_function, v_wires=[1], u_tape=u_tape)
#### More tensor network support 🕸️
• Adds the qml.MERA template for implementing quantum circuits with the shape of a multi-scale entanglement renormalization ansatz (MERA). (#2418)
MERA follows the style of previous tensor network templates and is similar to quantum convolutional neural networks.
def block(weights, wires):
qml.CNOT(wires=[wires[0],wires[1]])
qml.RY(weights[0], wires=wires[0])
qml.RY(weights[1], wires=wires[1])
n_wires = 4
n_block_wires = 2
n_params_block = 2
n_blocks = qml.MERA.get_n_blocks(range(n_wires),n_block_wires)
template_weights = [[0.1,-0.3]]*n_blocks
dev= qml.device('default.qubit',wires=range(n_wires))
@qml.qnode(dev)
def circuit(template_weights):
qml.MERA(range(n_wires),n_block_wires,block, n_params_block, template_weights)
return qml.expval(qml.PauliZ(wires=1))
It may be necessary to reorder the wires to see the MERA architecture clearly:
>>> print(qml.draw(circuit,expansion_strategy='device',wire_order=[2,0,1,3])(template_weights))
2: ───────────────╭C──RY(0.10)──╭X──RY(-0.30)───────────────┤
0: ─╭X──RY(-0.30)─│─────────────╰C──RY(0.10)──╭C──RY(0.10)──┤
1: ─╰C──RY(0.10)──│─────────────╭X──RY(-0.30)─╰X──RY(-0.30)─┤ <Z>
3: ───────────────╰X──RY(-0.30)─╰C──RY(0.10)────────────────┤
#### New transform for transpilation ⚙️
• Added a swap based transpiler transform. (#2118)
The transpile function takes a quantum function and a coupling map as inputs and compiles the circuit to ensure that it can be executed on corresponding hardware. The transform can be used as a decorator in the following way:
dev = qml.device('default.qubit', wires=4)
@qml.qnode(dev)
@qml.transforms.transpile(coupling_map=[(0, 1), (1, 2), (2, 3)])
def circuit(param):
qml.CNOT(wires=[0, 1])
qml.CNOT(wires=[0, 2])
qml.CNOT(wires=[0, 3])
qml.PhaseShift(param, wires=0)
return qml.probs(wires=[0, 1, 2, 3])
>>> print(qml.draw(circuit)(0.3))
0: ─╭C───────╭C──────────╭C──Rϕ(0.30)─┤ ╭Probs
1: ─╰X─╭SWAP─╰X────╭SWAP─╰X───────────┤ ├Probs
2: ────╰SWAP─╭SWAP─╰SWAP──────────────┤ ├Probs
3: ──────────╰SWAP────────────────────┤ ╰Probs
### Improvements
• QuantumTape objects are now iterable, allowing iteration over the contained operations and measurements. (#2342)
with qml.tape.QuantumTape() as tape:
qml.RX(0.432, wires=0)
qml.RY(0.543, wires=0)
qml.CNOT(wires=[0, 'a'])
qml.RX(0.133, wires='a')
qml.expval(qml.PauliZ(wires=[0]))
Given a QuantumTape object the underlying quantum circuit can be iterated over using a for loop:
>>> for op in tape:
... print(op)
RX(0.432, wires=[0])
RY(0.543, wires=[0])
CNOT(wires=[0, 'a'])
RX(0.133, wires=['a'])
expval(PauliZ(wires=[0]))
Indexing into the circuit is also allowed via tape[i]:
>>> tape[0]
RX(0.432, wires=[0])
A tape object can also be converted to a sequence (e.g., to a list) of operations and measurements:
>>> list(tape)
[RX(0.432, wires=[0]),
RY(0.543, wires=[0]),
CNOT(wires=[0, 'a']),
RX(0.133, wires=['a']),
expval(PauliZ(wires=[0]))]
• Added the QuantumTape.shape method and QuantumTape.numeric_type attribute to allow extracting information about the shape and numeric type of the output returned by a quantum tape after execution. (#2044)
dev = qml.device("default.qubit", wires=2)
a = np.array([0.1, 0.2, 0.3])
def func(a):
qml.RY(a[0], wires=0)
qml.RX(a[1], wires=0)
qml.RY(a[2], wires=0)
with qml.tape.QuantumTape() as tape:
func(a)
qml.state()
>>> tape.shape(dev)
(1, 4)
>>> tape.numeric_type
complex
• Defined a MeasurementProcess.shape method and a MeasurementProcess.numeric_type attribute to allow extracting information about the shape and numeric type of results obtained when evaluating QNodes using the specific measurement process. (#2044)
• The parameter-shift Hessian can now be computed for arbitrary operations that support the general parameter-shift rule for gradients, using qml.gradients.param_shift_hessian (#2319)
Multiple ways to obtain the gradient recipe are supported, in the following order of preference:
• A custom grad_recipe. It is iterated to obtain the shift rule for the second-order derivatives in the diagonal entries of the Hessian.
• Custom parameter_frequencies. The second-order shift rule can directly be computed using them.
• An operation’s generator. Its eigenvalues will be used to obtain parameter_frequencies, if they are not given explicitly for an operation.
• The strategy for expanding a circuit can now be specified with the qml.specs transform, for example to calculate the specifications of the circuit that will actually be executed by the device (expansion_strategy="device"). (#2395)
• The default.qubit and default.mixed devices now skip over identity operators instead of performing matrix multiplication with the identity. (#2356) (#2365)
• The function qml.eigvals is modified to use the efficient scipy.sparse.linalg.eigsh method for obtaining the eigenvalues of a SparseHamiltonian. This scipy method is called to compute $$k$$ eigenvalues of a sparse $$N \times N$$ matrix if k is smaller than $$N-1$$. If a larger $$k$$ is requested, the dense matrix representation of the Hamiltonian is constructed and the regular qml.math.linalg.eigvalsh is applied. (#2333)
• The function qml.ctrl was given the optional argument control_values=None. If overridden, control_values takes an integer or a list of integers corresponding to the binary value that each control value should take. The same change is reflected in ControlledOperation. Control values of 0 are implemented by qml.PauliX applied before and after the controlled operation (#2288)
• Operators now have a has_matrix property denoting whether or not the operator defines a matrix. (#2331) (#2476)
• Circuit cutting now performs expansion to search for wire cuts in contained operations or tapes. (#2340)
• The qml.draw and qml.draw_mpl transforms are now located in the drawer module. They can still be accessed via the top-level qml namespace. (#2396)
• Raise a warning where caching produces identical shot noise on execution results with finite shots. (#2478)
### Deprecations
• The ObservableReturnTypes Sample, Variance, Expectation, Probability, State, and MidMeasure have been moved to measurements from operation. (#2329) (#2481)
### Breaking changes
• The caching ability of devices has been removed. Using the caching on the QNode level is the recommended alternative going forward. (#2443)
One way for replicating the removed QubitDevice caching behaviour is by creating a cache object (e.g., a dictionary) and passing it to the QNode:
n_wires = 4
wires = range(n_wires)
dev = qml.device('default.qubit', wires=n_wires)
cache = {}
@qml.qnode(dev, diff_method='parameter-shift', cache=cache)
def expval_circuit(params):
qml.templates.BasicEntanglerLayers(params, wires=wires, rotation=qml.RX)
return qml.expval(qml.PauliZ(0) @ qml.PauliY(1) @ qml.PauliX(2) @ qml.PauliZ(3))
shape = qml.templates.BasicEntanglerLayers.shape(5, n_wires)
params = np.random.random(shape)
>>> expval_circuit(params)
>>> dev.num_executions
1
>>> expval_circuit(params)
>>> dev.num_executions
1
• The qml.finite_diff function has been removed. Please use qml.gradients.finite_diff to compute the gradient of tapes of QNodes. Otherwise, manual implementation is required. (#2464)
• The get_unitary_matrix transform has been removed, please use qml.matrix instead. (#2457)
• The update_stepsize method has been removed from GradientDescentOptimizer and its child optimizers. The stepsize property can be interacted with directly instead. (#2370)
• Most optimizers no longer flatten and unflatten arguments during computation. Due to this change, user provided gradient functions must return the same shape as qml.grad. (#2381)
• The old circuit text drawing infrastructure has been removed. (#2310)
• RepresentationResolver was replaced by the Operator.label method.
• qml.drawer.CircuitDrawer was replaced by qml.drawer.tape_text.
• qml.drawer.CHARSETS was removed because unicode is assumed to be accessible.
• Grid and qml.drawer.drawable_grid were removed because the custom data class was replaced by list of sets of operators or measurements.
• qml.transforms.draw_old was replaced by qml.draw.
• qml.CircuitGraph.greedy_layers was deleted, as it was no longer needed by the circuit drawer and did not seem to have uses outside of that situation.
• qml.CircuitGraph.draw was deleted, as we draw tapes instead.
• The tape method qml.tape.QuantumTape.draw now simply calls qml.drawer.tape_text.
• In the new pathway, the charset keyword was deleted, the max_length keyword defaults to 100, and the decimals and show_matrices keywords were added.
• The deprecated QNode, available via qml.qnode_old.QNode, has been removed. Please transition to using the standard qml.QNode. (#2336) (#2337) (#2338)
In addition, several other components which powered the deprecated QNode have been removed:
• The deprecated, non-batch compatible interfaces, have been removed.
• The deprecated tape subclasses QubitParamShiftTape, JacobianTape, CVParamShiftTape, and ReversibleTape have been removed.
• The deprecated tape execution method tape.execute(device) has been removed. Please use qml.execute([tape], device) instead. (#2339)
### Bug fixes
• Fixed a bug in the qml.PauliRot operation, where computing the generator was not taking into account the operation wires. (#2466)
• Fixed a bug where non-trainable arguments were shifted in the NesterovMomentumOptimizer if a trainable argument was after it in the argument list. (#2466)
• Fixed a bug with @jax.jit for grad when diff_method="adjoint" and mode="backward". (#2460)
• Fixed a bug where qml.DiagonalQubitUnitary did not support @jax.jit and @tf.function. (#2445)
• Fixed a bug in the qml.PauliRot operation, where computing the generator was not taking into account the operation wires. (#2442)
• Fixed a bug with the padding capability of AmplitudeEmbedding where the inputs are on the GPU. (#2431)
• Fixed a bug by adding a comprehensible error message for calling qml.probs without passing wires or an observable. (#2438)
• The behaviour of qml.about() was modified to avoid warnings being emitted due to legacy behaviour of pip. (#2422)
• Fixed a bug where observables were not considered when determining the use of the jax-jit interface. (#2427) (#2474)
• Fixed a bug where computing statistics for a relatively few number of shots (e.g., shots=10), an error arose due to indexing into an array using a Array. (#2427)
• PennyLane Lightning version in Docker container is pulled from latest wheel-builds. (#2416)
• Optimizers only consider a variable trainable if they have requires_grad = True. (#2381)
• Fixed a bug with qml.expval, qml.var, qml.state and qml.probs (when qml.probs is the only measurement) where the dtype specified on the device did not match the dtype of the QNode output. (#2367)
• Fixed a bug where the output shapes from batch transforms are inconsistent with the QNode output shape. (#2215)
• Fixed a bug caused by the squeezing in qml.gradients.param_shift_hessian. (#2215)
• Fixed a bug in which the expval/var of a Tensor(Observable) would depend on the order in which the observable is defined: (#2276)
>>> @qml.qnode(dev)
... def circ(op):
... qml.RX(0.12, wires=0)
... qml.RX(1.34, wires=1)
... qml.RX(3.67, wires=2)
... return qml.expval(op)
>>> op1 = qml.Identity(wires=0) @ qml.Identity(wires=1) @ qml.PauliZ(wires=2)
>>> op2 = qml.PauliZ(wires=2) @ qml.Identity(wires=0) @ qml.Identity(wires=1)
>>> print(circ(op1), circ(op2))
-0.8636111153905662 -0.8636111153905662
• Fixed a bug where qml.hf.transform_hf() would fail due to missing wires in the qubit operator that is prepared for tapering the HF state. (#2441)
• Fixed a bug with custom device defined jacobians not being returned properly. (#2485)
### Documentation
• The sections on adding operator and observable support in the “How to add a plugin” section of the plugins page have been updated. (#2389)
• The missing arXiv reference in the LieAlgebra optimizer has been fixed. (#2325)
### Contributors
This release contains contributions from (in alphabetical order):
Karim Alaa El-Din, Guillermo Alonso-Linaje, Juan Miguel Arrazola, Ali Asadi, Utkarsh Azad, Sam Banning, Thomas Bromley, Alain Delgado, Isaac De Vlugt, Olivia Di Matteo, Amintor Dusko, Anthony Hayes, David Ittah, Josh Izaac, Soran Jahangiri, Nathan Killoran, Christina Lee, Angus Lowe, Romain Moyard, Zeyue Niu, Matthew Silverman, Lee James O’Riordan, Maria Schuld, Jay Soni, Antal Száva, Maurice Weber, David Wierichs.
orphan
## Release 0.22.2¶
### Bug fixes
• Most compilation transforms, and relevant subroutines, have been updated to support just-in-time compilation with jax.jit. This fix was intended to be included in v0.22.0, but due to a bug was incomplete. (#2397)
### Documentation
• The documentation run has been updated to require jinja2==3.0.3 due to an issue that arises with jinja2 v3.1.0 and sphinx v3.5.3. (#2378)
### Contributors
This release contains contributions from (in alphabetical order):
Olivia Di Matteo, Christina Lee, Romain Moyard, Antal Száva.
orphan
## Release 0.22.1¶
### Bug fixes
• Fixes cases with qml.measure where unexpected operations were added to the circuit. (#2328)
### Contributors
This release contains contributions from (in alphabetical order):
Guillermo Alonso-Linaje, Antal Száva.
orphan
## Release 0.22.0¶
### New features since last release
#### Quantum circuit cutting ✂️
• You can now run N-wire circuits on devices with fewer than N wires, by strategically placing WireCut operations that allow their circuit to be partitioned into smaller fragments, at a cost of needing to perform a greater number of device executions. Circuit cutting is enabled by decorating a QNode with the @qml.cut_circuit transform. (#2107) (#2124) (#2153) (#2165) (#2158) (#2169) (#2192) (#2216) (#2168) (#2223) (#2231) (#2234) (#2244) (#2251) (#2265) (#2254) (#2260) (#2257) (#2279)
The example below shows how a three-wire circuit can be run on a two-wire device:
dev = qml.device("default.qubit", wires=2)
@qml.cut_circuit
@qml.qnode(dev)
def circuit(x):
qml.RX(x, wires=0)
qml.RY(0.9, wires=1)
qml.RX(0.3, wires=2)
qml.CZ(wires=[0, 1])
qml.RY(-0.4, wires=0)
qml.WireCut(wires=1)
qml.CZ(wires=[1, 2])
return qml.expval(qml.grouping.string_to_pauli_word("ZZZ"))
Instead of executing the circuit directly, it will be partitioned into smaller fragments according to the WireCut locations, and each fragment executed multiple times. Combining the results of the fragment executions will recover the expected output of the original uncut circuit.
>>> x = np.array(0.531, requires_grad=True)
>>> circuit(0.531)
0.47165198882111165
Circuit cutting support is also differentiable:
>>> qml.grad(circuit)(x)
-0.276982865449393
For more details on circuit cutting, check out the qml.cut_circuit documentation page or Peng et. al.
#### Conditional operations: quantum teleportation unlocked 🔓🌀
• Support for mid-circuit measurements and conditional operations has been added, to enable use cases like quantum teleportation, quantum error correction and quantum error mitigation. (#2211) (#2236) (#2275)
Two new functions have been added to support this capability:
• qml.measure() places mid-circuit measurements in the middle of a quantum function.
• qml.cond() allows operations and quantum functions to be conditioned on the result of a previous measurement.
For example, the code below shows how to teleport a qubit from wire 0 to wire 2:
dev = qml.device("default.qubit", wires=3)
input_state = np.array([1, -1], requires_grad=False) / np.sqrt(2)
@qml.qnode(dev)
def teleport(state):
# Prepare input state
qml.QubitStateVector(state, wires=0)
# Prepare Bell state
qml.CNOT(wires=[1, 2])
# Apply gates
qml.CNOT(wires=[0, 1])
# Measure first two wires
m1 = qml.measure(0)
m2 = qml.measure(1)
# Condition final wire on results
qml.cond(m2 == 1, qml.PauliX)(wires=2)
qml.cond(m1 == 1, qml.PauliZ)(wires=2)
# Return state on final wire
return qml.density_matrix(wires=2)
We can double-check that the qubit has been teleported by computing the overlap between the input state and the resulting state on wire 2:
>>> output_state = teleport(input_state)
>>> output_state
tensor([[ 0.5+0.j, -0.5+0.j],
>>> input_state.conj() @ output_state @ input_state
For a full description of new capabilities, refer to the Mid-circuit measurements and conditional operations section in the documentation.
• Train mid-circuit measurements by deferring them, via the new @qml.defer_measurements transform. (#2211) (#2236) (#2275)
If a device doesn’t natively support mid-circuit measurements, the @qml.defer_measurements transform can be applied to the QNode to transform the QNode into one with terminal measurements and controlled operations:
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
@qml.defer_measurements
def circuit(x):
m = qml.measure(0)
def op_if_true():
return qml.RX(x**2, wires=1)
def op_if_false():
return qml.RY(x, wires=1)
qml.cond(m==1, op_if_true, op_if_false)()
return qml.expval(qml.PauliZ(1))
>>> x = np.array(0.7, requires_grad=True)
>>> print(qml.draw(circuit, expansion_strategy="device")(x))
0: ──H─╭C─────────X─╭C─────────X─┤
1: ────╰RX(0.49)────╰RY(0.70)────┤ <Z>
>>> circuit(x)
Deferring mid-circuit measurements also enables differentiation:
>>> qml.grad(circuit)(x)
-0.651546965338656
#### Debug with mid-circuit quantum snapshots 📷
• A new operation qml.Snapshot has been added to assist in debugging quantum functions. (#2233) (#2289) (#2291) (#2315)
qml.Snapshot saves the internal state of devices at arbitrary points of execution.
Currently supported devices include:
• default.qubit: each snapshot saves the quantum state vector
• default.mixed: each snapshot saves the density matrix
• default.gaussian: each snapshot saves the covariance matrix and vector of means
During normal execution, the snapshots are ignored:
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev, interface=None)
def circuit():
qml.Snapshot()
qml.Snapshot("very_important_state")
qml.CNOT(wires=[0, 1])
qml.Snapshot()
return qml.expval(qml.PauliX(0))
However, when using the qml.snapshots transform, intermediate device states will be stored and returned alongside the results.
>>> qml.snapshots(circuit)()
{0: array([1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]),
'very_important_state': array([0.70710678+0.j, 0. +0.j, 0.70710678+0.j, 0. +0.j]),
2: array([0.70710678+0.j, 0. +0.j, 0. +0.j, 0.70710678+0.j]),
'execution_results': array(0.)}
#### Batch embedding and state preparation data 📦
• Added the @qml.batch_input transform to enable batching non-trainable gate parameters. In addition, the qml.qnn.KerasLayer class has been updated to natively support batched training data. (#2069)
As with other transforms, @qml.batch_input can be used to decorate QNodes:
dev = qml.device("default.qubit", wires=2, shots=None)
@qml.batch_input(argnum=0)
@qml.qnode(dev, diff_method="parameter-shift", interface="tf")
def circuit(inputs, weights):
# add a batch dimension to the embedding data
qml.AngleEmbedding(inputs, wires=range(2), rotation="Y")
qml.RY(weights[0], wires=0)
qml.RY(weights[1], wires=1)
return qml.expval(qml.PauliZ(1))
Batched input parameters can then be passed during QNode evaluation:
>>> x = tf.random.uniform((10, 2), 0, 1)
>>> w = tf.random.uniform((2,), 0, 1)
>>> circuit(x, w)
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([0.46230079, 0.73971315, 0.95666004, 0.5355225 , 0.66180948,
0.44519553, 0.93874261, 0.9483197 , 0.78737918, 0.90866411])>
#### Even more mighty quantum transforms 🐛➡🦋
• New functions and transforms of operators have been added:
• qml.matrix() for computing the matrix representation of one or more unitary operators. (#2241)
• qml.eigvals() for computing the eigenvalues of one or more operators. (#2248)
• qml.generator() for computing the generator of a single-parameter unitary operation. (#2256)
All operator transforms can be used on instantiated operators,
>>> op = qml.RX(0.54, wires=0)
>>> qml.matrix(op)
[[0.9637709+0.j 0. -0.26673144j]
[0. -0.26673144j 0.9637709+0.j ]]
Operator transforms can also be used in a functional form:
>>> x = torch.tensor(0.6, requires_grad=True)
>>> matrix_fn = qml.matrix(qml.RX)
>>> matrix_fn(x, wires=[0])
tensor([[0.9553+0.0000j, 0.0000-0.2955j],
In its functional form, it is fully differentiable with respect to gate arguments:
>>> loss = torch.real(torch.trace(matrix_fn(x, wires=0)))
>>> loss.backward()
tensor(-0.2955)
Some operator transform can also act on multiple operations, by passing quantum functions or tapes:
>>> def circuit(theta):
... qml.RX(theta, wires=1)
... qml.PauliZ(wires=0)
>>> qml.matrix(circuit)(np.pi / 4)
array([[ 0.92387953+0.j, 0.+0.j , 0.-0.38268343j, 0.+0.j],
[ 0.+0.j, -0.92387953+0.j, 0.+0.j, 0. +0.38268343j],
[ 0. -0.38268343j, 0.+0.j, 0.92387953+0.j, 0.+0.j],
[ 0.+0.j, 0.+0.38268343j, 0.+0.j, -0.92387953+0.j]])
• A new transform has been added to construct the pairwise-commutation directed acyclic graph (DAG) representation of a quantum circuit. (#1712)
In the DAG, each node represents a quantum operation, and edges represent non-commutation between two operations.
This transform takes into account that not all operations can be moved next to each other by pairwise commutation:
>>> def circuit(x, y, z):
... qml.RX(x, wires=0)
... qml.RX(y, wires=0)
... qml.CNOT(wires=[1, 2])
... qml.RY(y, wires=1)
... qml.CRZ(z, wires=[2, 0])
... qml.RY(-y, wires=1)
... return qml.expval(qml.PauliZ(0))
>>> dag_fn = qml.commutation_dag(circuit)
>>> dag = dag_fn(np.pi / 4, np.pi / 3, np.pi / 2)
Nodes in the commutation DAG can be accessed via the get_nodes() method, returning a list of the form (ID, CommutationDAGNode):
>>> nodes = dag.get_nodes()
>>> nodes
NodeDataView({0: <pennylane.transforms.commutation_dag.CommutationDAGNode object at 0x7f461c4bb580>, ...}, data='node')
Specific nodes in the commutation DAG can be accessed via the get_node() method:
>>> second_node = dag.get_node(2)
>>> second_node
<pennylane.transforms.commutation_dag.CommutationDAGNode object at 0x136f8c4c0>
>>> second_node.op
CNOT(wires=[1, 2])
>>> second_node.successors
[3, 4, 5, 6]
>>> second_node.predecessors
[]
### Improvements
• The text-based drawer accessed via qml.draw() has been optimized and improved. (#2128) (#2198)
The new drawer has:
• a decimals keyword for controlling parameter rounding
• a show_matrices keyword for controlling display of matrices
• a different algorithm for determining positions
• deprecation of the charset keyword
@qml.qnode(qml.device('lightning.qubit', wires=2))
def circuit(a, w):
qml.CRX(a, wires=[0, 1])
qml.Rot(*w, wires=[1])
qml.CRX(-a, wires=[0, 1])
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
>>> print(qml.draw(circuit, decimals=2)(a=2.3, w=[1.2, 3.2, 0.7]))
0: ──H─╭C─────────────────────────────╭C─────────┤ ╭<[email protected]>
1: ────╰RX(2.30)──Rot(1.20,3.20,0.70)─╰RX(-2.30)─┤ ╰<[email protected]>
• The frequencies of gate parameters are now accessible as an operation property and can be used for circuit analysis, optimization via the RotosolveOptimizer and differentiation with the parameter-shift rule (including the general shift rule). (#2180) (#2182) (#2227)
>>> op = qml.CRot(0.4, 0.1, 0.3, wires=[0, 1])
>>> op.parameter_frequencies
[(0.5, 1.0), (0.5, 1.0), (0.5, 1.0)]
When using qml.gradients.param_shift, either a custom grad_recipe or the parameter frequencies are used to obtain the shift rule for the operation, in that order of preference.
See Vidal and Theis (2018) and Wierichs et al. (2021) for theoretical background information on the general parameter-shift rule.
• No two-term parameter-shift rule is assumed anymore by default. (#2227)
Previously, operations marked for analytic differentiation that did not provide a generator, parameter_frequencies or a custom grad_recipe were assumed to satisfy the two-term shift rule. This now has to be made explicit for custom operations by adding any of the above attributes.
• Most compilation transforms, and relevant subroutines, have been updated to support just-in-time compilation with jax.jit. (#1894)
• The qml.draw_mpl transform supports a expansion_strategy keyword argument. (#2271)
• The qml.gradients module has been streamlined and special-purpose functions moved closer to their use cases, while preserving existing behaviour. (#2200)
• Added a new partition_pauli_group function to the grouping module for efficiently measuring the N-qubit Pauli group with 3 ** N qubit-wise commuting terms. (#2185)
• The Operator class has undergone a major refactor with the following changes:
• Matrices: the static method Operator.compute_matrices() defines the matrix representation of the operator, and the function qml.matrix(op) computes this for a given instance. (#1996)
• Eigvals: the static method Operator.compute_eigvals() defines the matrix representation of the operator, and the function qml.eigvals(op) computes this for a given instance. (#2048)
• Decompositions: the static method Operator.compute_decomposition() defines the matrix representation of the operator, and the method op.decomposition() computes this for a given instance. (#2024) (#2053)
• Sparse matrices: the static method Operator.compute_sparse_matrix() defines the sparse matrix representation of the operator, and the method op.sparse_matrix() computes this for a given instance. (#2050)
• Linear combinations of operators: The static method compute_terms(), used for representing the linear combination of coefficients and operators representing the operator, has been added. The method op.terms() computes this for a given instance. Currently, only the Hamiltonian class overwrites compute_terms() to store coefficients and operators. The Hamiltonian.terms property hence becomes a proper method called by Hamiltonian.terms(). (#2036)
• Diagonalization: The diagonalizing_gates() representation has been moved to the highest-level Operator class and is therefore available to all subclasses. A condition qml.operation.defines_diagonalizing_gates has been added, which can be used in tape contexts without queueing. In addition, a static compute_diagonalizing_gates method has been added, which is called by default in diagonalizing_gates(). (#1985) (#1993)
• Error handling has been improved for Operator representations. Custom errors subclassing OperatorPropertyUndefined are raised if a representation has not been defined. This replaces the NotImplementedError and allows finer control for developers. (#2064) (#2287)
• A Operator.hyperparameters attribute, used for storing operation parameters that are never trainable, has been added to the operator class. (#2017)
• The string_for_inverse attribute is removed. (#2021)
• The expand() method was moved from the Operation class to the main Operator class. (#2053) (#2239)
### Deprecations
• There are several important changes when creating custom operations: (#2214) (#2227) (#2030) (#2061)
• The Operator.matrix method has been deprecated and Operator.compute_matrix should be defined instead. Operator matrices should be accessed using qml.matrix(op). If you were previously defining the class method Operator._matrix(), this is a a breaking change — please update your operation to instead overwrite Operator.compute_matrix.
• The Operator.decomposition method has been deprecated and Operator.compute_decomposition should be defined instead. Operator decompositions should be accessed using Operator.decomposition().
• The Operator.eigvals method has been deprecated and Operator.compute_eigvals should be defined instead. Operator eigenvalues should be accessed using qml.eigvals(op).
• The Operator.generator property is now a method, and should return an operator instance representing the generator. Note that unlike the other representations above, this is a breaking change. Operator generators should be accessed using qml.generator(op).
• The Operation.get_parameter_shift method has been deprecated and will be removed in a future release.
Instead, the functionalities for general parameter-shift rules in the qml.gradients module should be used, together with the operation attributes parameter_frequencies or grad_recipe.
• Executing tapes using tape.execute(dev) is deprecated. Please use the qml.execute([tape], dev) function instead. (#2306)
• The subclasses of the quantum tape, including JacobianTape, QubitParamShiftTape, CVParamShiftTape, and ReversibleTape are deprecated. Instead of calling JacobianTape.jacobian() and JacobianTape.hessian(), please use a standard QuantumTape, and apply gradient transforms using the qml.gradients module. (#2306)
• qml.transforms.get_unitary_matrix() has been deprecated and will be removed in a future release. For extracting matrices of operations and quantum functions, please use qml.matrix(). (#2248)
• The qml.finite_diff() function has been deprecated and will be removed in an upcoming release. Instead, qml.gradients.finite_diff() can be used to compute purely quantum gradients (that is, gradients of tapes or QNode). (#2212)
• The MultiControlledX operation now accepts a single wires keyword argument for both control_wires and wires. The single wires keyword should be all the control wires followed by a single target wire. (#2121) (#2278)
### Breaking changes
• The representation of an operator as a matrix has been overhauled. (#1996)
The “canonical matrix”, which is independent of wires, is now defined in the static method compute_matrix() instead of _matrix. By default, this method is assumed to take all parameters and non-trainable hyperparameters that define the operation.
>>> qml.RX.compute_matrix(0.5)
[[0.96891242+0.j 0. -0.24740396j]
[0. -0.24740396j 0.96891242+0.j ]]
If no canonical matrix is specified for a gate, compute_matrix() raises a MatrixUndefinedError.
• The generator property has been updated to an instance method, Operator.generator(). It now returns an instantiated operation, representing the generator of the instantiated operator. (#2030) (#2061)
Various operators have been updated to specify the generator as either an Observable, Tensor, Hamiltonian, SparseHamiltonian, or Hermitian operator.
In addition, qml.generator(operation) has been added to aid in retrieving generator representations of operators.
• The argument wires in heisenberg_obs, heisenberg_expand and heisenberg_tr was renamed to wire_order to be consistent with other matrix representations. (#2051)
• The property kraus_matrices has been changed to a method, and _kraus_matrices renamed to compute_kraus_matrices, which is now a static method. (#2055)
• The pennylane.measure module has been renamed to pennylane.measurements. (#2236)
### Bug fixes
• The basis property of qml.SWAP was set to "X", which is incorrect; it is now set to None. (#2287)
• The qml.RandomLayers template now decomposes when the weights are a list of lists. (#2266)
• The qml.QubitUnitary operation now supports just-in-time compilation using JAX. (#2249)
• Fixes a bug in the JAX interface where Array objects were not being converted to NumPy arrays before executing an external device. (#2255)
• The qml.ctrl transform now works correctly with gradient transforms such as the parameter-shift rule. (#2238)
• Fixes a bug in which passing required arguments into operations as keyword arguments would throw an error because the documented call signature didn’t match the function definition. (#1976)
• The operation OrbitalRotation previously was wrongfully registered to satisfy the four-term parameter shift rule. The correct eight-term rule will now be used when using the parameter-shift rule. (#2180)
• Fixes a bug where qml.gradients.param_shift_hessian would produce an error whenever all elements of the Hessian are known in advance to be 0. (#2299)
### Documentation
• The developer guide on adding templates and the architecture overview were rewritten to reflect the past and planned changes of the operator refactor. (#2066)
• Links to the Strawberry Fields documentation for information on the CV model. (#2259)
• Fixes the documentation example for qml.QFT. (#2232)
• Fixes the documentation example for using qml.sample with jax.jit. (#2196)
• The qml.numpy subpackage is now included in the PennyLane API documentation. (#2179)
• Improves the documentation of RotosolveOptimizer regarding the usage of the passed substep_optimizer and its keyword arguments. (#2160)
• Ensures that signatures of @qml.qfunc_transform decorated functions display correctly in the docs. (#2286)
• Docstring examples now display using the updated text-based circuit drawer. (#2252)
• Add docstring to OrbitalRotation.grad_recipe. (#2193)
### Contributors
This release contains contributions from (in alphabetical order):
Catalina Albornoz, Jack Y. Araz, Juan Miguel Arrazola, Ali Asadi, Utkarsh Azad, Sam Banning, Thomas Bromley, Olivia Di Matteo, Christian Gogolin, Diego Guala, Anthony Hayes, David Ittah, Josh Izaac, Soran Jahangiri, Nathan Killoran, Christina Lee, Angus Lowe, Maria Fernanda Morris, Romain Moyard, Zeyue Niu, Lee James O’Riordan, Chae-Yeun Park, Maria Schuld, Jay Soni, Antal Száva, David Wierichs.
orphan
## Release 0.21.0¶
### New features since last release
#### Reduce qubit requirements of simulating Hamiltonians ⚛️
• Functions for tapering qubits based on molecular symmetries have been added, following results from Setia et al. (#1966) (#1974) (#2041) (#2042)
With this functionality, a molecular Hamiltonian and the corresponding Hartree-Fock (HF) state can be transformed to a new Hamiltonian and HF state that acts on a reduced number of qubits, respectively.
# molecular geometry
symbols = ["He", "H"]
geometry = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.4588684632]])
mol = qml.hf.Molecule(symbols, geometry, charge=1)
# generate the qubit Hamiltonian
H = qml.hf.generate_hamiltonian(mol)(geometry)
# determine Hamiltonian symmetries
generators, paulix_ops = qml.hf.generate_symmetries(H, len(H.wires))
opt_sector = qml.hf.optimal_sector(H, generators, mol.n_electrons)
# taper the Hamiltonian
H_tapered = qml.hf.transform_hamiltonian(H, generators, paulix_ops, opt_sector)
We can compare the number of qubits required by the original Hamiltonian and the tapered Hamiltonian:
>>> len(H.wires)
4
>>> len(H_tapered.wires)
2
For quantum chemistry algorithms, the Hartree-Fock state can also be tapered:
n_elec = mol.n_electrons
n_qubits = mol.n_orbitals * 2
hf_tapered = qml.hf.transform_hf(
generators, paulix_ops, opt_sector, n_elec, n_qubits
)
>>> hf_tapered
#### New tensor network templates 🪢
• Quantum circuits with the shape of a matrix product state tensor network can now be easily implemented using the new qml.MPS template, based on the work arXiv:1803.11537. (#1871)
def block(weights, wires):
qml.CNOT(wires=[wires[0], wires[1]])
qml.RY(weights[0], wires=wires[0])
qml.RY(weights[1], wires=wires[1])
n_wires = 4
n_block_wires = 2
n_params_block = 2
template_weights = np.array([[0.1, -0.3], [0.4, 0.2], [-0.15, 0.5]], requires_grad=True)
dev = qml.device("default.qubit", wires=range(n_wires))
@qml.qnode(dev)
def circuit(weights):
qml.MPS(range(n_wires), n_block_wires, block, n_params_block, weights)
return qml.expval(qml.PauliZ(wires=n_wires - 1))
The resulting circuit is:
>>> print(qml.draw(circuit, expansion_strategy="device")(template_weights))
0: ──╭C──RY(0.1)───────────────────────────────┤
1: ──╰X──RY(-0.3)──╭C──RY(0.4)─────────────────┤
2: ────────────────╰X──RY(0.2)──╭C──RY(-0.15)──┤
3: ─────────────────────────────╰X──RY(0.5)────┤ ⟨Z⟩
• Added a template for tree tensor networks, qml.TTN. (#2043)
def block(weights, wires):
qml.CNOT(wires=[wires[0], wires[1]])
qml.RY(weights[0], wires=wires[0])
qml.RY(weights[1], wires=wires[1])
n_wires = 4
n_block_wires = 2
n_params_block = 2
n_blocks = qml.MPS.get_n_blocks(range(n_wires), n_block_wires)
template_weights = [[0.1, -0.3]] * n_blocks
dev = qml.device("default.qubit", wires=range(n_wires))
@qml.qnode(dev)
def circuit(template_weights):
qml.TTN(range(n_wires), n_block_wires, block, n_params_block, template_weights)
return qml.expval(qml.PauliZ(wires=n_wires - 1))
The resulting circuit is:
>>> print(qml.draw(circuit, expansion_strategy="device")(template_weights))
0: ──╭C──RY(0.1)─────────────────┤
1: ──╰X──RY(-0.3)──╭C──RY(0.1)───┤
2: ──╭C──RY(0.1)───│─────────────┤
3: ──╰X──RY(-0.3)──╰X──RY(-0.3)──┤ ⟨Z⟩
#### Generalized RotosolveOptmizer 📉
• The RotosolveOptimizer has been generalized to arbitrary frequency spectra in the cost function. Also note the changes in behaviour listed under Breaking changes. (#2081)
Previously, the RotosolveOptimizer only supported variational circuits using special gates such as single-qubit Pauli rotations. Now, circuits with arbitrary gates are supported natively without decomposition, as long as the frequencies of the gate parameters are known. This new generalization extends the Rotosolve optimization method to a larger class of circuits, and can reduce the cost of the optimization compared to decomposing all gates to single-qubit rotations.
Consider the QNode
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def qnode(x, Y):
qml.RX(2.5 * x, wires=0)
qml.CNOT(wires=[0, 1])
qml.RZ(0.3 * Y[0], wires=0)
qml.CRY(1.1 * Y[1], wires=[1, 0])
return qml.expval(qml.PauliX(0) @ qml.PauliZ(1))
Its frequency spectra can be easily obtained via qml.fourier.qnode_spectrum:
>>> spectra = qml.fourier.qnode_spectrum(qnode)(x, Y)
>>> spectra
{'x': {(): [-2.5, 0.0, 2.5]},
'Y': {(0,): [-0.3, 0.0, 0.3], (1,): [-1.1, -0.55, 0.0, 0.55, 1.1]}}
We may then initialize the RotosolveOptimizer and minimize the QNode cost function by providing this information about the frequency spectra. We also compare the cost at each step to the initial cost.
>>> cost_init = qnode(x, Y)
>>> opt = qml.RotosolveOptimizer()
>>> for _ in range(2):
... x, Y = opt.step(qnode, x, Y, spectra=spectra)
... print(f"New cost: {np.round(qnode(x, Y), 3)}; Initial cost: {np.round(cost_init, 3)}")
New cost: 0.0; Initial cost: 0.706
New cost: -1.0; Initial cost: 0.706
The optimization with RotosolveOptimizer is performed in substeps. The minimal cost of these substeps can be retrieved by setting full_output=True.
>>> x = np.array(0.8, requires_grad=True)
>>> Y = np.array([-0.2, 1.5], requires_grad=True)
>>> opt = qml.RotosolveOptimizer()
>>> for _ in range(2):
... (x, Y), history = opt.step(qnode, x, Y, spectra=spectra, full_output=True)
... print(f"New cost: {np.round(qnode(x, Y), 3)} reached via substeps {np.round(history, 3)}")
New cost: 0.0 reached via substeps [-0. 0. 0.]
New cost: -1.0 reached via substeps [-1. -1. -1.]
However, note that these intermediate minimal values are evaluations of the reconstructions that Rotosolve creates and uses internally for the optimization, and not of the original objective function. For noisy cost functions, these intermediate evaluations may differ significantly from evaluations of the original cost function.
#### Improved JAX support 💻
• The JAX interface now supports evaluating vector-valued QNodes. (#2110)
Vector-valued QNodes include those with:
• qml.probs;
• qml.state;
• qml.sample or
• multiple qml.expval / qml.var measurements.
Consider a QNode that returns basis-state probabilities:
dev = qml.device('default.qubit', wires=2)
x = jnp.array(0.543)
y = jnp.array(-0.654)
@qml.qnode(dev, diff_method="parameter-shift", interface="jax")
def circuit(x, y):
qml.RX(x, wires=[0])
qml.RY(y, wires=[1])
qml.CNOT(wires=[0, 1])
return qml.probs(wires=[1])
The QNode can be evaluated and its jacobian can be computed:
>>> circuit(x, y)
Array([0.8397495 , 0.16025047], dtype=float32)
>>> jax.jacobian(circuit, argnums=[0, 1])(x, y)
(Array([-0.2050439, 0.2050439], dtype=float32, weak_type=True),
Array([ 0.26043, -0.26043], dtype=float32, weak_type=True))
Note that jax.jit is not yet supported for vector-valued QNodes.
#### Speedier quantum natural gradient ⚡
• A new function for computing the metric tensor on simulators, qml.adjoint_metric_tensor, has been added, that uses classically efficient methods to massively improve performance. (#1992)
This method, detailed in Jones (2020), computes the metric tensor using four copies of the state vector and a number of operations that scales quadratically in the number of trainable parameters.
Note that as it makes use of state cloning, it is inherently classical and can only be used with statevector simulators and shots=None.
It is particularly useful for larger circuits for which backpropagation requires inconvenient or even unfeasible amounts of storage, but is slower. Furthermore, the adjoint method is only available for analytic computation, not for measurements simulation with shots!=None.
dev = qml.device("default.qubit", wires=3)
@qml.qnode(dev)
def circuit(x, y):
qml.Rot(*x[0], wires=0)
qml.Rot(*x[1], wires=1)
qml.Rot(*x[2], wires=2)
qml.CNOT(wires=[0, 1])
qml.CNOT(wires=[1, 2])
qml.CNOT(wires=[2, 0])
qml.RY(y[0], wires=0)
qml.RY(y[1], wires=1)
qml.RY(y[0], wires=2)
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1)), qml.expval(qml.PauliY(1))
x = np.array([[0.2, 0.4, -0.1], [-2.1, 0.5, -0.2], [0.1, 0.7, -0.6]], requires_grad=False)
>>> qml.adjoint_metric_tensor(circuit)(x, y)
tensor([[ 0.25495723, -0.07086695],
Computational cost
The adjoint method uses $$2P^2+4P+1$$ gates and state cloning operations if the circuit is composed only of trainable gates, where $$P$$ is the number of trainable operations. If non-trainable gates are included, each of them is applied about $$n^2-n$$ times, where $$n$$ is the number of trainable operations that follow after the respective non-trainable operation in the circuit. This means that non-trainable gates later in the circuit are executed less often, making the adjoint method a bit cheaper if such gates appear later. The adjoint method requires memory for 4 independent state vectors, which corresponds roughly to storing a state vector of a system with 2 additional qubits.
#### Compute the Hessian on hardware ⬆️
• A new gradient transform qml.gradients.param_shift_hessian has been added to directly compute the Hessian (2nd order partial derivative matrix) of QNodes on hardware. (#1884)
The function generates parameter-shifted tapes which allow the Hessian to be computed analytically on hardware and software devices. Compared to using an auto-differentiation framework to compute the Hessian via parameter shifts, this function will use fewer device invocations and can be used to inspect the parameter-shifted “Hessian tapes” directly. The function remains fully differentiable on all supported PennyLane interfaces.
Additionally, the parameter-shift Hessian comes with a new batch transform decorator @qml.gradients.hessian_transform, which can be used to create custom Hessian functions.
The following code demonstrates how to use the parameter-shift Hessian:
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit(x):
qml.RX(x[0], wires=0)
qml.RY(x[1], wires=0)
return qml.expval(qml.PauliZ(0))
>>> hessian
tensor([[-0.97517033, 0.01983384],
### Improvements
• The qml.transforms.insert transform now supports adding operation after or before certain specific gates. (#1980)
• Added a modified version of the simplify function to the hf module. (#2103)
This function combines redundant terms in a Hamiltonian and eliminates terms with a coefficient smaller than a cutoff value. The new function makes construction of molecular Hamiltonians more efficient. For LiH, as an example, the time to construct the Hamiltonian is reduced roughly by a factor of 20.
• The QAOA module now accepts both NetworkX and RetworkX graphs as function inputs. (#1791)
• The CircuitGraph, used to represent circuits via directed acyclic graphs, now uses RetworkX for its internal representation. This results in significant speedup for algorithms that rely on a directed acyclic graph representation. (#1791)
• For subclasses of Operator where the number of parameters is known before instantiation, the num_params is reverted back to being a static property. This allows to programmatically know the number of parameters before an operator is instantiated without changing the user interface. A test was added to ensure that different ways of defining num_params work as expected. (#2101) (#2135)
• A WireCut operator has been added for manual wire cut placement when constructing a QNode. (#2093)
• The new function qml.drawer.tape_text produces a string drawing of a tape. This function differs in implementation and minor stylistic details from the old string circuit drawing infrastructure. (#1885)
• The RotosolveOptimizer now raises an error if no trainable arguments are detected, instead of silently skipping update steps for all arguments. (#2109)
• The function qml.math.safe_squeeze is introduced and gradient_transform allows for QNode argument axes of size 1. (#2080)
qml.math.safe_squeeze wraps qml.math.squeeze, with slight modifications:
• When provided the axis keyword argument, axes that do not have size 1 will be ignored, instead of raising an error.
• The keyword argument exclude_axis allows to explicitly exclude axes from the squeezing.
• The adjoint transform now raises and error whenever the object it is applied to is not callable. (#2060)
An example is a list of operations to which one might apply qml.adjoint:
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit_wrong(params):
# Note the difference: v v
return qml.state()
@qml.qnode(dev)
def circuit_correct(params):
# Note the difference: v v
return qml.state()
params = list(range(1, 3))
Evaluating circuit_wrong(params) now raises a ValueError and if we apply qml.adjoint correctly, we get
>>> circuit_correct(params)
[ 0.47415988+0.j 0. 0.73846026j 0. 0.25903472j
-0.40342268+0.j ]
• A precision argument has been added to the tape’s to_openqasm function to control the precision of parameters. (#2071)
• Interferometer now has a shape method. (#1946)
• The Barrier and Identity operations now support the adjoint method. (#2062) (#2063)
• qml.BasisStatePreparation now supports the batch_params decorator. (#2091)
• Added a new multi_dispatch decorator that helps ease the definition of new functions inside PennyLane. The decorator is used throughout the math module, demonstrating use cases. (#2082) (#2096)
We can decorate a function, indicating the arguments that are tensors handled by the interface:
>>> @qml.math.multi_dispatch(argnum=[0, 1])
... def some_function(tensor1, tensor2, option, like):
... # the interface string is stored in like.
... ...
Previously, this was done using the private utility function _multi_dispatch.
>>> def some_function(tensor1, tensor2, option):
... interface = qml.math._multi_dispatch([tensor1, tensor2])
... ...
• The IsingZZ gate was added to the diagonal_in_z_basis attribute. For this an explicit _eigvals method was added. (#2113)
• The IsingXX, IsingYY and IsingZZ gates were added to the composable_rotations attribute. (#2113)
### Breaking changes
• QNode arguments will no longer be considered trainable by default when using the Autograd interface. In order to obtain derivatives with respect to a parameter, it should be instantiated via PennyLane’s NumPy wrapper using the requires_grad=True attribute. The previous behaviour was deprecated in version v0.19.0 of PennyLane. (#2116) (#2125) (#2139) (#2148) (#2156)
from pennylane import numpy as np
@qml.qnode(qml.device("default.qubit", wires=2))
def circuit(x):
...
For the qml.grad and qml.jacobian functions, trainability can alternatively be indicated via the argnum keyword:
import numpy as np
@qml.qnode(qml.device("default.qubit", wires=2))
def circuit(hyperparam, param):
...
x = np.array([0.1, 0.2])
• qml.jacobian now follows a different convention regarding its output shape. (#2059)
Previously, qml.jacobian would attempt to stack the Jacobian for multiple QNode arguments, which succeeded whenever the arguments have the same shape. In this case, the stacked Jacobian would also be transposed, leading to the output shape (*reverse_QNode_args_shape, *reverse_output_shape, num_QNode_args)
If no stacking and transposing occurs, the output shape instead is a tuple where each entry corresponds to one QNode argument and has the shape (*output_shape, *QNode_arg_shape).
This breaking change alters the behaviour in the first case and removes the attempt to stack and transpose, so that the output always has the shape of the second type.
Note that the behaviour is unchanged — that is, the Jacobian tuple is unpacked into a single Jacobian — if argnum=None and there is only one QNode argument with respect to which the differentiation takes place, or if an integer is provided as argnum.
A workaround that allowed qml.jacobian to differentiate multiple QNode arguments will no longer support higher-order derivatives. In such cases, combining multiple arguments into a single array is recommended.
• qml.metric_tensor, qml.adjoint_metric_tensor and qml.transforms.classical_jacobian now follow a different convention regarding their output shape when being used with the Autograd interface (#2059)
See the previous entry for details. This breaking change immediately follows from the change in qml.jacobian whenever hybrid=True is used in the above methods.
• The behaviour of RotosolveOptimizer has been changed regarding its keyword arguments. (#2081)
The keyword arguments optimizer and optimizer_kwargs for the RotosolveOptimizer have been renamed to substep_optimizer and substep_kwargs, respectively. Furthermore they have been moved from step and step_and_cost to the initialization __init__.
The keyword argument num_freqs has been renamed to nums_frequency and is expected to take a different shape now: Previously, it was expected to be an int or a list of entries, with each entry in turn being either an int or a list of int entries. Now the expected structure is a nested dictionary, matching the formatting expected by qml.fourier.reconstruct This also matches the expected formatting of the new keyword arguments spectra and shifts.
For more details, see the RotosolveOptimizer documentation.
### Deprecations
• Deprecates the caching ability provided by QubitDevice. (#2154)
Going forward, the preferred way is to use the caching abilities of the QNode:
dev = qml.device("default.qubit", wires=2)
cache = {}
@qml.qnode(dev, diff_method='parameter-shift', cache=cache)
def circuit():
qml.RY(0.345, wires=0)
return qml.expval(qml.PauliZ(0))
>>> for _ in range(10):
... circuit()
>>> dev.num_executions
1
### Bug fixes
• Fixes a bug where an incorrect number of executions are recorded by a QNode using a custom cache with diff_method="backprop". (#2171)
• Fixes a bug where the default.qubit.jax device can’t be used with diff_method=None and jitting. (#2136)
• Fixes a bug where the Torch interface was not properly unwrapping Torch tensors to NumPy arrays before executing gradient tapes on devices. (#2117)
• Fixes a bug for the TensorFlow interface where the dtype of input tensors was not cast. (#2120)
• Fixes a bug where batch transformed QNodes would fail to apply batch transforms provided by the underlying device. (#2111)
• An error is now raised during QNode creation if backpropagation is requested on a device with finite shots specified. (#2114)
• Pytest now ignores any DeprecationWarning raised within autograd’s numpy_wrapper module. Other assorted minor test warnings are fixed. (#2007)
• Fixes a bug where the QNode was not correctly diagonalizing qubit-wise commuting observables. (#2097)
• Fixes a bug in gradient_transform where the hybrid differentiation of circuits with a single parametrized gate failed and QNode argument axes of size 1 where removed from the output gradient. (#2080)
• The available diff_method options for QNodes has been corrected in both the error messages and the documentation. (#2078)
• Fixes a bug in DefaultQubit where the second derivative of QNodes at positions corresponding to vanishing state vector amplitudes is wrong. (#2057)
• Fixes a bug where PennyLane didn’t require v0.20.0 of PennyLane-Lightning, but raised an error with versions of Lightning earlier than v0.20.0 due to the new batch execution pipeline. (#2033)
• Fixes a bug in classical_jacobian when used with Torch, where the Jacobian of the preprocessing was also computed for non-trainable parameters. (#2020)
• Fixes a bug in queueing of the two_qubit_decomposition method that originally led to circuits with >3 two-qubit unitaries failing when passed through the unitary_to_rot optimization transform. (#2015)
• Fixes a bug which allows using jax.jit to be compatible with circuits which return qml.probs when the default.qubit.jax is provided with a custom shot vector. (#2028)
• Updated the adjoint() method for non-parametric qubit operations to solve a bug where repeated adjoint() calls don’t return the correct operator. (#2133)
• Fixed a bug in insert() which prevented operations that inherited from multiple classes to be inserted. (#2172)
### Documentation
• Fixes an error in the signs of equations in the DoubleExcitation page. (#2072)
• Extends the interfaces description page to explicitly mention device compatibility. (#2031)
### Contributors
This release contains contributions from (in alphabetical order):
Juan Miguel Arrazola, Ali Asadi, Utkarsh Azad, Sam Banning, Thomas Bromley, Esther Cruz, Olivia Di Matteo, Christian Gogolin, Diego Guala, Anthony Hayes, David Ittah, Josh Izaac, Soran Jahangiri, Edward Jiang, Ankit Khandelwal, Nathan Killoran, Korbinian Kottmann, Christina Lee, Romain Moyard, Lee James O’Riordan, Maria Schuld, Jay Soni, Antal Száva, David Wierichs, Shaoming Zhang.
orphan
## Release 0.20.0¶
### New features since last release
#### Shiny new circuit drawer!🎨🖌️
• PennyLane now supports drawing a QNode with matplotlib! (#1803) (#1811) (#1931) (#1954)
dev = qml.device("default.qubit", wires=4)
@qml.qnode(dev)
def circuit(x, z):
qml.QFT(wires=(0,1,2,3))
qml.Toffoli(wires=(0,1,2))
qml.CSWAP(wires=(0,2,3))
qml.RX(x, wires=0)
qml.CRZ(z, wires=(3,0))
return qml.expval(qml.PauliZ(0))
fig, ax = qml.draw_mpl(circuit)(1.2345, 1.2345)
fig.show()
#### New and improved quantum-aware optimizers
• Added qml.LieAlgebraOptimizer, a new quantum-aware Lie Algebra optimizer that allows one to perform gradient descent on the special unitary group. (#1911)
dev = qml.device("default.qubit", wires=2)
H = -1.0 * qml.PauliX(0) - qml.PauliZ(1) - qml.PauliY(0) @ qml.PauliX(1)
@qml.qnode(dev)
def circuit():
qml.RX(0.1, wires=[0])
qml.RY(0.5, wires=[1])
qml.CNOT(wires=[0,1])
qml.RY(0.6, wires=[0])
return qml.expval(H)
opt = qml.LieAlgebraOptimizer(circuit=circuit, stepsize=0.1)
Note that, unlike other optimizers, the LieAlgebraOptimizer accepts a QNode with no parameters, and instead grows the circuit by appending operations during the optimization:
>>> circuit()
>>> circuit1, cost = opt.step_and_cost()
>>> circuit1()
For more details, see the LieAlgebraOptimizer documentation.
• The qml.metric_tensor transform can now be used to compute the full tensor, beyond the block diagonal approximation. (#1725)
This is performed using Hadamard tests, and requires an additional wire on the device to execute the circuits produced by the transform, as compared to the number of wires required by the original circuit. The transform defaults to computing the full tensor, which can be controlled by the approx keyword argument.
As an example, consider the QNode
dev = qml.device("default.qubit", wires=3)
@qml.qnode(dev)
def circuit(weights):
qml.RX(weights[0], wires=0)
qml.RY(weights[1], wires=0)
qml.CNOT(wires=[0, 1])
qml.RZ(weights[2], wires=1)
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
weights = np.array([0.2, 1.2, -0.9], requires_grad=True)
Then we can compute the (block) diagonal metric tensor as before, now using the approx="block-diag" keyword:
>>> qml.metric_tensor(circuit, approx="block-diag")(weights)
[[0.25 0. 0. ]
[0. 0.24013262 0. ]
[0. 0. 0.21846983]]
Instead, we now can also compute the full metric tensor, using Hadamard tests on the additional wire of the device:
>>> qml.metric_tensor(circuit)(weights)
[[ 0.25 0. -0.23300977]
[ 0. 0.24013262 0.01763859]
[-0.23300977 0.01763859 0.21846983]]
#### Faster performance with optimized quantum workflows
• The QNode has been re-written to support batch execution across the board, custom gradients, better decomposition strategies, and higher-order derivatives. (#1807) (#1969)
• Internally, if multiple circuits are generated for simultaneous execution, they will be packaged into a single job for execution on the device. This can lead to significant performance improvement when executing the QNode on remote quantum hardware or simulator devices with parallelization capabilities.
• Custom gradient transforms can be specified as the differentiation method:
@qml.gradients.gradient_transform
...
return tapes, processing_fn
def circuit():
For breaking changes related to the use of the new QNode, refer to the Breaking Changes section.
Note that the old QNode remains accessible at @qml.qnode_old.qnode, however this will be removed in the next release.
• Custom decompositions can now be applied to operations at the device level. (#1900)
For example, suppose we would like to implement the following QNode:
def circuit(weights):
qml.BasicEntanglerLayers(weights, wires=[0, 1, 2])
return qml.expval(qml.PauliZ(0))
original_dev = qml.device("default.qubit", wires=3)
original_qnode = qml.QNode(circuit, original_dev)
>>> weights = np.array([[0.4, 0.5, 0.6]])
>>> print(qml.draw(original_qnode, expansion_strategy="device")(weights))
0: ──RX(0.4)──╭C──────╭X──┤ ⟨Z⟩
1: ──RX(0.5)──╰X──╭C──│───┤
2: ──RX(0.6)──────╰X──╰C──┤
Now, let’s swap out the decomposition of the CNOT gate into CZ and Hadamard, and furthermore the decomposition of Hadamard into RZ and RY rather than the decomposition already available in PennyLane. We define the two decompositions like so, and pass them to a device:
def custom_cnot(wires):
return [
qml.CZ(wires=[wires[0], wires[1]]),
]
return [
qml.RZ(np.pi, wires=wires),
qml.RY(np.pi / 2, wires=wires)
]
# Can pass the operation itself, or a string
decomp_dev = qml.device("default.qubit", wires=3, custom_decomps=custom_decomps)
decomp_qnode = qml.QNode(circuit, decomp_dev)
Now when we draw or run a QNode on this device, the gates will be expanded according to our specifications:
>>> print(qml.draw(decomp_qnode, expansion_strategy="device")(weights))
0: ──RX(0.4)──────────────────────╭C──RZ(3.14)──RY(1.57)──────────────────────────╭Z──RZ(3.14)──RY(1.57)──┤ ⟨Z⟩
1: ──RX(0.5)──RZ(3.14)──RY(1.57)──╰Z──RZ(3.14)──RY(1.57)──╭C──────────────────────│───────────────────────┤
2: ──RX(0.6)──RZ(3.14)──RY(1.57)──────────────────────────╰Z──RZ(3.14)──RY(1.57)──╰C──────────────────────┤
A separate context manager, set_decomposition, has also been implemented to enable application of custom decompositions on devices that have already been created.
>>> with qml.transforms.set_decomposition(custom_decomps, original_dev):
... print(qml.draw(original_qnode, expansion_strategy="device")(weights))
0: ──RX(0.4)──────────────────────╭C──RZ(3.14)──RY(1.57)──────────────────────────╭Z──RZ(3.14)──RY(1.57)──┤ ⟨Z⟩
1: ──RX(0.5)──RZ(3.14)──RY(1.57)──╰Z──RZ(3.14)──RY(1.57)──╭C──────────────────────│───────────────────────┤
2: ──RX(0.6)──RZ(3.14)──RY(1.57)──────────────────────────╰Z──RZ(3.14)──RY(1.57)──╰C──────────────────────┤
• Given an operator of the form $$U=e^{iHt}$$, where $$H$$ has commuting terms and known eigenvalues, qml.gradients.generate_shift_rule computes the generalized parameter shift rules for determining the gradient of the expectation value $$f(t) = \langle 0|U(t)^\dagger \hat{O} U(t)|0\rangle$$ on hardware. (#1788) (#1932)
Given
$H = \sum_i a_i h_i,$
where the eigenvalues of $$H$$ are known and all $$h_i$$ commute, we can compute the frequencies (the unique positive differences of any two eigenvalues) using qml.gradients.eigvals_to_frequencies.
qml.gradients.generate_shift_rule can then be used to compute the parameter shift rules to compute $$f'(t)$$ using 2R shifted cost function evaluations. This becomes cheaper than the standard application of the chain rule and two-term shift rule when R is less than the number of Pauli words in the generator.
For example, consider the case where $$H$$ has eigenspectrum (-1, 0, 1):
>>> frequencies = qml.gradients.eigvals_to_frequencies((-1, 0, 1))
>>> frequencies
(1, 2)
>>> coeffs
array([ 0.85355339, -0.85355339, -0.14644661, 0.14644661])
>>> shifts
array([ 0.78539816, -0.78539816, 2.35619449, -2.35619449])
As we can see, generate_shift_rule returns four coefficients $$c_i$$ and shifts $$s_i$$ corresponding to a four term parameter shift rule. The gradient can then be reconstructed via:
$\frac{\partial}{\partial\phi}f = \sum_{i} c_i f(\phi + s_i),$
where $$f(\phi) = \langle 0|U(\phi)^\dagger \hat{O} U(\phi)|0\rangle$$ for some observable $$\hat{O}$$ and the unitary $$U(\phi)=e^{iH\phi}$$.
#### Support for TensorFlow AutoGraph mode with quantum hardware
• It is now possible to use TensorFlow’s AutoGraph mode with QNodes on all devices and with arbitrary differentiation methods. Previously, AutoGraph mode only support diff_method="backprop". This will result in significantly more performant model execution, at the cost of a more expensive initial compilation. (#1866)
Use AutoGraph to convert your QNodes or cost functions into TensorFlow graphs by decorating them with @tf.function:
dev = qml.device("lightning.qubit", wires=2)
def circuit(x):
qml.RX(x[0], wires=0)
qml.RY(x[1], wires=1)
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1)), qml.expval(qml.PauliZ(0))
@tf.function
def cost(x):
return tf.reduce_sum(circuit(x))
x = tf.Variable([0.5, 0.7], dtype=tf.float64)
loss = cost(x)
The initial execution may take slightly longer than when executing the circuit in eager mode; this is because TensorFlow is tracing the function to create the graph. Subsequent executions will be much more performant.
Note that using AutoGraph with backprop-enabled devices, such as default.qubit, will yield the best performance.
For more details, please see the TensorFlow AutoGraph documentation.
#### Characterize your quantum models with classical QNode reconstruction
• The qml.fourier.reconstruct function is added. It can be used to reconstruct QNodes outputting expectation values along a specified parameter dimension, with a minimal number of calls to the original QNode. The returned reconstruction is exact and purely classical, and can be evaluated without any quantum executions. (#1864)
The reconstruction technique differs for functions with equidistant frequencies that are reconstructed using the function value at equidistant sampling points, and for functions with arbitrary frequencies reconstructed using arbitrary sampling points.
As an example, consider the following QNode:
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev)
def circuit(x, Y, f=1.0):
qml.RX(f * x, wires=0)
qml.RY(Y[0], wires=0)
qml.RY(Y[1], wires=1)
qml.CNOT(wires=[0, 1])
qml.RY(3 * Y[1], wires=1)
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
It has three variational parameters overall: A scalar input x and an array-valued input Y with two entries. Additionally, we can tune the dependence on x with the frequency f. We then can reconstruct the QNode output function with respect to x via
>>> x = 0.3
>>> Y = np.array([0.1, -0.9])
>>> rec = qml.fourier.reconstruct(circuit, ids="x", nums_frequency={"x": {0: 1}})(x, Y)
>>> rec
{'x': {0: <function pennylane.fourier.reconstruct._reconstruct_equ.<locals>._reconstruction(x)>}}
As we can see, we get a nested dictionary in the format of the input nums_frequency with functions as values. These functions are simple float-to-float callables:
>>> univariate = rec["x"][0]
>>> univariate(x)
-0.880208251507
For more details on usage, reconstruction cost and differentiability support, please see the fourier.reconstruct docstring.
#### State-of-the-art operations and templates
• A circuit template for time evolution under a commuting Hamiltonian utilizing generalized parameter shift rules for cost function gradients is available as qml.CommutingEvolution. (#1788)
If the template is handed a frequency spectrum during its instantiation, then generate_shift_rule is internally called to obtain the general parameter shift rules with respect to CommutingEvolution‘s $$t$$ parameter, otherwise the shift rule for a decomposition of CommutingEvolution will be used.
The template can be initialized within QNode as:
import pennylane as qml
n_wires = 2
dev = qml.device('default.qubit', wires=n_wires)
coeffs = [1, -1]
obs = [qml.PauliX(0) @ qml.PauliY(1), qml.PauliY(0) @ qml.PauliX(1)]
hamiltonian = qml.Hamiltonian(coeffs, obs)
frequencies = (2,4)
@qml.qnode(dev)
def circuit(time):
qml.PauliX(0)
qml.CommutingEvolution(hamiltonian, time, frequencies)
return qml.expval(qml.PauliZ(0))
Note that there is no internal validation that 1) the input qml.Hamiltonian is fully commuting and 2) the eigenvalue frequency spectrum is correct, since these checks become prohibitively expensive for large Hamiltonians.
• The qml.Barrier() operator has been added. With it we can separate blocks in compilation or use it as a visual tool. (#1844)
• Added the identity observable to be an operator. Now we can explicitly call the identity operation on our quantum circuits for both qubit and CV devices. (#1829)
• Added the qml.QubitDensityMatrix initialization gate for mixed state simulation. (#1850)
• A thermal relaxation channel is added to the Noisy channels. The channel description can be found on the supplementary information of Quantum classifier with tailored quantum kernels. (#1766)
• Added a new qml.PauliError channel that allows the application of an arbitrary number of Pauli operators on an arbitrary number of wires. (#1781)
#### Manipulate QNodes to your ❤️s content with new transforms
• The merge_amplitude_embedding transformation has been created to automatically merge all gates of this type into one. (#1933)
from pennylane.transforms import merge_amplitude_embedding
dev = qml.device("default.qubit", wires = 3)
@qml.qnode(dev)
@merge_amplitude_embedding
def qfunc():
qml.AmplitudeEmbedding([0,1,0,0], wires = [0,1])
qml.AmplitudeEmbedding([0,1], wires = 2)
return qml.expval(qml.PauliZ(wires = 0))
>>> print(qml.draw(qnode)())
0: ──╭AmplitudeEmbedding(M0)──┤ ⟨Z⟩
1: ──├AmplitudeEmbedding(M0)──┤
2: ──╰AmplitudeEmbedding(M0)──┤
M0 =
[0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
• The undo_swaps transformation has been created to automatically remove all swaps of a circuit. (#1960)
dev = qml.device('default.qubit', wires=3)
@qml.qnode(dev)
@qml.transforms.undo_swaps
def qfunc():
qml.PauliX(wires=1)
qml.SWAP(wires=[0,1])
qml.SWAP(wires=[0,2])
qml.PauliY(wires=0)
return qml.expval(qml.PauliZ(0))
>>> print(qml.draw(qfunc)())
0: ──Y──┤ ⟨Z⟩
1: ──H──┤
2: ──X──┤
### Improvements
• Added functions for computing the values of atomic and molecular orbitals at a given position. (#1867)
The functions atomic_orbital and molecular_orbital can be used, as shown in the following codeblock, to evaluate the orbitals. By generating values of the orbitals at different positions, one can plot the spatial shape of a desired orbital.
symbols = ['H', 'H']
geometry = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]], requires_grad = False)
mol = hf.Molecule(symbols, geometry)
hf.generate_scf(mol)()
ao = mol.atomic_orbital(0)
mo = mol.molecular_orbital(1)
>>> print(ao(0.0, 0.0, 0.0))
>>> print(mo(0.0, 0.0, 0.0))
0.6282468778183719
0.018251285973461928
• Added support for Python 3.10. (#1964)
• The execution of QNodes that have
• multiple return types;
• a return type other than Variance and Expectation
now raises a descriptive error message when using the JAX interface. (#2011)
• The PennyLane qchem package is now lazily imported; it will only be imported the first time it is accessed. (#1962)
• qml.math.scatter_element_add now supports adding multiple values at multiple indices with a single function call, in all interfaces (#1864)
For example, we may set five values of a three-dimensional tensor in the following way:
>>> X = tf.zeros((3, 2, 9), dtype=tf.float64)
>>> indices = [(0, 0, 1, 2, 2), (0, 0, 0, 0, 1), (1, 3, 8, 6, 7)]
>>> values = [1 * i for i in range(1,6)]
<tf.Tensor: shape=(3, 2, 9), dtype=float64, numpy=
array([[[0., 1., 0., 2., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 0., 0., 3.],
[0., 0., 0., 0., 0., 0., 0., 0., 0.]],
[[0., 0., 0., 0., 0., 0., 4., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 5., 0.]]])>
• All instances of str.format have been replace with f-strings. (#1970)
• Tests do not loop over automatically imported and instantiated operations any more, which was opaque and created unnecessarily many tests. (#1895)
• A decompose() method has been added to the Operator class such that we can obtain (and queue) decompositions directly from instances of operations. (#1873)
>>> op = qml.PhaseShift(0.3, wires=0)
>>> op.decompose()
[RZ(0.3, wires=[0])]
• qml.circuit_drawer.tape_mpl produces a matplotlib figure and axes given a tape. (#1787)
• The AngleEmbedding, BasicEntanglerLayers and MottonenStatePreparation templates now support parameters with batch dimension when using the @qml.batch_params decorator. (#1812) (#1883) (#1893)
• qml.draw now supports a max_length argument to help prevent text overflows when printing circuits. (#1892)
• Identity operation is now part of both the ops.qubit and ops.cv modules. (#1956)
### Breaking changes
• The QNode has been re-written to support batch execution across the board, custom gradients, better decomposition strategies, and higher-order derivatives. (#1807) (#1969)
• Arbitrary $$n$$-th order derivatives are supported on hardware using gradient transforms such as the parameter-shift rule. To specify that an $$n$$-th order derivative of a QNode will be computed, the max_diff argument should be set. By default, this is set to 1 (first-order derivatives only). Increasing this value allows for higher order derivatives to be extracted, at the cost of additional (classical) computational overhead during the backwards pass.
• When decomposing the circuit, the default decomposition strategy expansion_strategy="gradient" will prioritize decompositions that result in the smallest number of parametrized operations required to satisfy the differentiation method. While this may lead to a slight increase in classical processing, it significantly reduces the number of circuit evaluations needed to compute gradients of complicated unitaries.
To return to the old behaviour, expansion_strategy="device" can be specified.
Note that the old QNode remains accessible at @qml.qnode_old.qnode, however this will be removed in the next release.
• Certain features deprecated in v0.19.0 have been removed: (#1981) (#1963)
• The qml.template decorator (use a QuantumTape <https://pennylane.readthedocs.io/en/stable/code/api/pennylane.tape.QuantumTape.html>_ as a context manager to record operations and its operations attribute to return them, see the linked page for examples);
• The default.tensor and default.tensor.tf experimental devices;
• The qml.fourier.spectrum function (use the qml.fourier.circuit_spectrum or qml.fourier.qnode_spectrum functions instead);
• The diag_approx keyword argument of qml.metric_tensor and qml.QNGOptimizer (pass approx='diag' instead).
• The default behaviour of the qml.metric_tensor transform has been modified. By default, the full metric tensor is computed, leading to higher cost than the previous default of computing the block diagonal only. At the same time, the Hadamard tests for the full metric tensor require an additional wire on the device, so that
>>> qml.metric_tensor(some_qnode)(weights)
will revert back to the block diagonal restriction and raise a warning if the used device does not have an additional wire. (#1725)
• The circuit_drawer module has been renamed drawer. (#1949)
• The par_domain attribute in the operator class has been removed. (#1907)
• The mutable keyword argument has been removed from the QNode, due to underlying bugs that result in incorrect results being returned from immutable QNodes. This functionality will return in an upcoming release. (#1807)
• The reversible QNode differentiation method has been removed; the adjoint differentiation method is preferred instead (diff_method='adjoint'). (#1807)
• QuantumTape.trainable_params now is a list instead of a set. This means that tape.trainable_params will return a list unlike before, but setting the trainable_params with a set works exactly as before. (#1904)
• The num_params attribute in the operator class is now dynamic. This makes it easier to define operator subclasses with a flexible number of parameters. (#1898) (#1909)
• The static method decomposition(), formerly in the Operation class, has been moved to the base Operator class. (#1873)
• DiagonalOperation is not a separate subclass any more. (#1889)
Instead, devices can check for the diagonal property using attributes:
from pennylane.ops.qubit.attributes import diagonal_in_z_basis
if op in diagonal_in_z_basis:
# do something
Custom operations can be added to this attribute at runtime via diagonal_in_z_basis.add("MyCustomOp").
### Bug fixes
• Fixes a bug with qml.probs when using default.qubit.jax. (#1998)
• Fixes a bug where output tensors of a QNode would always be put on the default GPU with default.qubit.torch. (#1982)
• Device test suite doesn’t use empty circuits so that it can also test the IonQ plugin, and it checks if operations are supported in more places. (#1979)
• Fixes a bug where the metric tensor was computed incorrectly when using gates with gate.inverse=True. (#1987)
• Corrects the documentation of qml.transforms.classical_jacobian for the Autograd interface (and improves test coverage). (#1978)
• Fixes a bug where differentiating a QNode with qml.state using the JAX interface raised an error. (#1906)
• Fixes a bug with the adjoint of qml.QFT. (#1955)
• Fixes a bug where the ApproxTimeEvolution template was not correctly computing the operation wires from the input Hamiltonian. This did not affect computation with the ApproxTimeEvolution template, but did cause circuit drawing to fail. (#1952)
• Fixes a bug where the classical preprocessing Jacobian computed by qml.transforms.classical_jacobian with JAX returned a reduced submatrix of the Jacobian. (#1948)
• Fixes a bug where the operations are not accessed in the correct order in qml.fourier.qnode_spectrum, leading to wrong outputs. (#1935)
• Fixes several Pylint errors. (#1951)
• Fixes a bug where the device test suite wasn’t testing certain operations. (#1943)
• Fixes a bug where batch transforms would mutate a QNodes execution options. (#1934)
• qml.draw now supports arbitrary templates with matrix parameters. (#1917)
• QuantumTape.trainable_params now is a list instead of a set, making it more stable in very rare edge cases. (#1904)
• ExpvalCost now returns corrects results shape when optimize=True with shots batch. (#1897)
• qml.circuit_drawer.MPLDrawer was slightly modified to work with matplotlib version 3.5. (#1899)
• qml.CSWAP and qml.CRot now define control_wires, and qml.SWAP returns the default empty wires object. (#1830)
• The requires_grad attribute of qml.numpy.tensor objects is now preserved when pickling/unpickling the object. (#1856)
• Device tests no longer throw warnings about the requires_grad attribute of variational parameters. (#1913)
• AdamOptimizer and AdagradOptimizer had small fixes to their optimization step updates. (#1929)
• Fixes a bug where differentiating a QNode with multiple array arguments via qml.gradients.param_shift throws an error. (#1989)
• AmplitudeEmbedding template no longer produces a ComplexWarning when the features parameter is batched and provided as a 2D array. (#1990)
• qml.circuit_drawer.CircuitDrawer no longer produces an error when attempting to draw tapes inside of circuits (e.g. from decomposition of an operation or manual placement). (#1994)
• Fixes a bug where using SciPy sparse matrices with the new QNode could lead to a warning being raised about prioritizing the TensorFlow and PyTorch interfaces. (#2001)
• Fixed a bug where the QueueContext was not empty when first importing PennyLane. (#1957)
• Fixed circuit drawing problem with Interferometer and CVNeuralNet. (#1953)
### Documentation
• Added examples in documentation for some operations. (#1902)
• Improves the Developer’s Guide Testing document. (#1896)
• Added documentation examples for AngleEmbedding, BasisEmbedding, StronglyEntanglingLayers, SqueezingEmbedding, DisplacementEmbedding, MottonenStatePreparation and Interferometer. (#1910) (#1908) (#1912) (#1920) (#1936) (#1937)
### Contributors
This release contains contributions from (in alphabetical order):
Catalina Albornoz, Guillermo Alonso-Linaje, Juan Miguel Arrazola, Ali Asadi, Utkarsh Azad, Samuel Banning, Benjamin Cordier, Alain Delgado, Olivia Di Matteo, Anthony Hayes, David Ittah, Josh Izaac, Soran Jahangiri, Jalani Kanem, Ankit Khandelwal, Nathan Killoran, Shumpei Kobayashi, Robert Lang, Christina Lee, Cedric Lin, Alejandro Montanez, Romain Moyard, Lee James O’Riordan, Chae-Yeun Park, Isidor Schoch, Maria Schuld, Jay Soni, Antal Száva, Rodrigo Vargas, David Wierichs, Roeland Wiersema, Moritz Willmann.
orphan
## Release 0.19.1¶
### Bug fixes
• Fixes several bugs when using parametric operations with the default.qubit.tensor device on GPU. The device takes the torch_device argument once again to allow running non-parametric QNodes on the GPU. (#1927)
• Fixes a bug where using JAX’s jit function on certain QNodes that contain the qml.QubitStateVector operation raised an error with earlier JAX versions (e.g., jax==0.2.10 and jaxlib==0.1.64). (#1924)
### Contributors
This release contains contributions from (in alphabetical order):
Josh Izaac, Christina Lee, Romain Moyard, Lee James O’Riordan, Antal Száva.
orphan
## Release 0.19.0¶
### New features since last release
#### Differentiable Hartree-Fock solver
• A differentiable Hartree-Fock (HF) solver has been added. It can be used to construct molecular Hamiltonians that can be differentiated with respect to nuclear coordinates and basis-set parameters. (#1610)
The HF solver computes the integrals over basis functions, constructs the relevant matrices, and performs self-consistent-field iterations to obtain a set of optimized molecular orbital coefficients. These coefficients and the computed integrals over basis functions are used to construct the one- and two-body electron integrals in the molecular orbital basis which can be used to generate a differentiable second-quantized Hamiltonian in the fermionic and qubit basis.
The following code shows the construction of the Hamiltonian for the hydrogen molecule where the geometry of the molecule is differentiable.
symbols = ["H", "H"]
geometry = np.array([[0.0000000000, 0.0000000000, -0.6943528941],
mol = qml.hf.Molecule(symbols, geometry)
args_mol = [geometry]
hamiltonian = qml.hf.generate_hamiltonian(mol)(*args_mol)
>>> hamiltonian.coeffs
tensor([-0.09041082+0.j, 0.17220382+0.j, 0.17220382+0.j,
0.16893367+0.j, 0.04523101+0.j, -0.04523101+0.j,
-0.04523101+0.j, 0.04523101+0.j, -0.22581352+0.j,
0.12092003+0.j, -0.22581352+0.j, 0.16615103+0.j,
The generated Hamiltonian can be used in a circuit where the atomic coordinates and circuit parameters are optimized simultaneously.
symbols = ["H", "H"]
geometry = np.array([[0.0000000000, 0.0000000000, 0.0],
mol = qml.hf.Molecule(symbols, geometry)
dev = qml.device("default.qubit", wires=4)
def generate_circuit(mol):
@qml.qnode(dev)
def circuit(*args):
qml.BasisState(np.array([1, 1, 0, 0]), wires=[0, 1, 2, 3])
qml.DoubleExcitation(*args[0][0], wires=[0, 1, 2, 3])
return qml.expval(qml.hf.generate_hamiltonian(mol)(*args[1:]))
return circuit
for n in range(25):
mol = qml.hf.Molecule(symbols, geometry)
args = [params, geometry] # initial values of the differentiable parameters
g_params = qml.grad(generate_circuit(mol), argnum = 0)(*args)
params = params - 0.5 * g_params[0]
forces = qml.grad(generate_circuit(mol), argnum = 1)(*args)
geometry = geometry - 0.5 * forces
print(f'Step: {n}, Energy: {generate_circuit(mol)(*args)}, Maximum Force: {forces.max()}')
In addition, the new Hartree-Fock solver can further be used to optimize the basis set parameters. For details, please refer to the differentiable Hartree-Fock solver documentation.
#### Integration with Mitiq
• Error mitigation using the zero-noise extrapolation method is now available through the transforms.mitigate_with_zne transform. This transform can integrate with the Mitiq package for unitary folding and extrapolation functionality. (#1813)
Consider the following noisy device:
noise_strength = 0.05
dev = qml.device("default.mixed", wires=2)
dev = qml.transforms.insert(qml.AmplitudeDamping, noise_strength)(dev)
We can mitigate the effects of this noise for circuits run on this device by using the added transform:
from mitiq.zne.scaling import fold_global
from mitiq.zne.inference import RichardsonFactory
n_wires = 2
n_layers = 2
shapes = qml.SimplifiedTwoDesign.shape(n_wires, n_layers)
np.random.seed(0)
w1, w2 = [np.random.random(s) for s in shapes]
@qml.transforms.mitigate_with_zne([1, 2, 3], fold_global, RichardsonFactory.extrapolate)
@qml.beta.qnode(dev)
def circuit(w1, w2):
qml.SimplifiedTwoDesign(w1, w2, wires=range(2))
return qml.expval(qml.PauliZ(0))
Now, when we execute circuit, errors will be automatically mitigated:
>>> circuit(w1, w2)
0.19113067083636542
#### Powerful new transforms
• The unitary matrix corresponding to a quantum circuit can now be generated using the new get_unitary_matrix() transform. (#1609) (#1786)
This transform is fully differentiable across all supported PennyLane autodiff frameworks.
def circuit(theta):
qml.RX(theta, wires=1)
qml.PauliZ(wires=0)
qml.CNOT(wires=[0, 1])
>>> theta = torch.tensor(0.3, requires_grad=True)
>>> matrix = qml.transforms.get_unitary_matrix(circuit)(theta)
>>> print(matrix)
tensor([[ 0.9888+0.0000j, 0.0000+0.0000j, 0.0000-0.1494j, 0.0000+0.0000j],
[ 0.0000+0.0000j, 0.0000+0.1494j, 0.0000+0.0000j, -0.9888+0.0000j],
[ 0.0000-0.1494j, 0.0000+0.0000j, 0.9888+0.0000j, 0.0000+0.0000j],
[ 0.0000+0.0000j, -0.9888+0.0000j, 0.0000+0.0000j, 0.0000+0.1494j]],
>>> loss = torch.real(torch.trace(matrix))
>>> loss.backward()
tensor(-0.1494)
• Arbitrary two-qubit unitaries can now be decomposed into elementary gates. This functionality has been incorporated into the qml.transforms.unitary_to_rot transform, and is available separately as qml.transforms.two_qubit_decomposition. (#1552)
As an example, consider the following randomly-generated matrix and circuit that uses it:
U = np.array([
[-0.03053706-0.03662692j, 0.01313778+0.38162226j, 0.4101526 -0.81893687j, -0.03864617+0.10743148j],
[-0.17171136-0.24851809j, 0.06046239+0.1929145j, -0.04813084-0.01748555j, -0.29544883-0.88202604j],
[ 0.39634931-0.78959795j, -0.25521689-0.17045233j, -0.1391033 -0.09670952j, -0.25043606+0.18393466j],
[ 0.29599198-0.19573188j, 0.55605806+0.64025769j, 0.06140516+0.35499559j, 0.02674726+0.1563311j ]
])
dev = qml.device('default.qubit', wires=2)
@qml.qnode(dev)
@qml.transforms.unitary_to_rot
def circuit(x, y):
qml.QubitUnitary(U, wires=[0, 1])
return qml.expval(qml.PauliZ(wires=0))
If we run the circuit, we can see the new decomposition:
>>> circuit(0.3, 0.4)
>>> print(qml.draw(circuit)(0.3, 0.4))
0: ──Rot(2.78, 0.242, -2.28)──╭X──RZ(0.176)───╭C─────────────╭X──Rot(-3.87, 0.321, -2.09)──┤ ⟨Z⟩
1: ──Rot(4.64, 2.69, -1.56)───╰C──RY(-0.883)──╰X──RY(-1.47)──╰C──Rot(1.68, 0.337, 0.587)───┤
• A new transform, @qml.batch_params, has been added, that makes QNodes handle a batch dimension in trainable parameters. (#1710) (#1761)
This transform will create multiple circuits, one per batch dimension. As a result, it is both simulator and hardware compatible.
@qml.batch_params
@qml.beta.qnode(dev)
def circuit(x, weights):
qml.RX(x, wires=0)
qml.RY(0.2, wires=1)
qml.templates.StronglyEntanglingLayers(weights, wires=[0, 1, 2])
The qml.batch_params decorator allows us to pass arguments x and weights that have a batch dimension. For example,
>>> batch_size = 3
>>> x = np.linspace(0.1, 0.5, batch_size)
>>> weights = np.random.random((batch_size, 10, 3, 3))
If we evaluate the QNode with these inputs, we will get an output of shape (batch_size,):
>>> circuit(x, weights)
• The insert transform has now been added, providing a way to insert single-qubit operations into a quantum circuit. The transform can apply to quantum functions, tapes, and devices. (#1795)
The following QNode can be transformed to add noise to the circuit:
dev = qml.device("default.mixed", wires=2)
@qml.qnode(dev)
@qml.transforms.insert(qml.AmplitudeDamping, 0.2, position="end")
def f(w, x, y, z):
qml.RX(w, wires=0)
qml.RY(x, wires=1)
qml.CNOT(wires=[0, 1])
qml.RY(y, wires=0)
qml.RX(z, wires=1)
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))
Executions of this circuit will differ from the noise-free value:
>>> f(0.9, 0.4, 0.5, 0.6)
>>> print(qml.draw(f)(0.9, 0.4, 0.5, 0.6))
0: ──RX(0.9)──╭C──RY(0.5)──AmplitudeDamping(0.2)──╭┤ ⟨Z ⊗ Z⟩
1: ──RY(0.4)──╰X──RX(0.6)──AmplitudeDamping(0.2)──╰┤ ⟨Z ⊗ Z⟩
• Common tape expansion functions are now available in qml.transforms, alongside a new create_expand_fn function for easily creating expansion functions from stopping criteria. (#1734) (#1760)
create_expand_fn takes the default depth to which the expansion function should expand a tape, a stopping criterion, an optional device, and a docstring to be set for the created function. The stopping criterion must take a queuable object and return a boolean.
For example, to create an expansion function that decomposes all trainable, multi-parameter operations:
>>> stop_at = ~(qml.operation.has_multipar & qml.operation.is_trainable)
>>> expand_fn = qml.transforms.create_expand_fn(depth=5, stop_at=stop_at)
The created expansion function can be used within a custom transform. Devices can also be provided, producing expansion functions that decompose tapes to support the native gate set of the device.
#### Batch execution of circuits
• A new, experimental QNode has been added, that adds support for batch execution of circuits, custom quantum gradient support, and arbitrary order derivatives. This QNode is available via qml.beta.QNode, and @qml.beta.qnode. (#1642) (#1646) (#1651) (#1804)
It differs from the standard QNode in several ways:
• Custom gradient transforms can be specified as the differentiation method:
@qml.gradients.gradient_transform
...
return tapes, processing_fn
def circuit():
• Arbitrary $$n$$-th order derivatives are supported on hardware using gradient transforms such as the parameter-shift rule. To specify that an $$n$$-th order derivative of a QNode will be computed, the max_diff argument should be set. By default, this is set to 1 (first-order derivatives only).
• Internally, if multiple circuits are generated for execution simultaneously, they will be packaged into a single job for execution on the device. This can lead to significant performance improvement when executing the QNode on remote quantum hardware.
• When decomposing the circuit, the default decomposition strategy will prioritize decompositions that result in the smallest number of parametrized operations required to satisfy the differentiation method. Additional decompositions required to satisfy the native gate set of the quantum device will be performed later, by the device at execution time. While this may lead to a slight increase in classical processing, it significantly reduces the number of circuit evaluations needed to compute gradients of complex unitaries.
In an upcoming release, this QNode will replace the existing one. If you come across any bugs while using this QNode, please let us know via a bug report on our GitHub bug tracker.
Currently, this beta QNode does not support the following features:
• Non-mutability via the mutable keyword argument
• The reversible QNode differentiation method
• The ability to specify a dtype when using PyTorch and TensorFlow.
It is also not tested with the qml.qnn module.
#### New operations and templates
• Added a new operation OrbitalRotation, which implements the spin-adapted spatial orbital rotation gate. (#1665)
An example circuit that uses OrbitalRotation operation is:
dev = qml.device('default.qubit', wires=4)
@qml.qnode(dev)
def circuit(phi):
qml.BasisState(np.array([1, 1, 0, 0]), wires=[0, 1, 2, 3])
qml.OrbitalRotation(phi, wires=[0, 1, 2, 3])
return qml.state()
If we run this circuit, we will get the following output
>>> circuit(0.1)
array([ 0. +0.j, 0. +0.j, 0. +0.j,
0.00249792+0.j, 0. +0.j, 0. +0.j,
-0.04991671+0.j, 0. +0.j, 0. +0.j,
-0.04991671+0.j, 0. +0.j, 0. +0.j,
0.99750208+0.j, 0. +0.j, 0. +0.j,
0. +0.j])
• Added a new template GateFabric, which implements a local, expressive, quantum-number-preserving ansatz proposed by Anselmetti et al. in arXiv:2104.05692. (#1687)
An example of a circuit using GateFabric template is:
coordinates = np.array([0.0, 0.0, -0.6614, 0.0, 0.0, 0.6614])
H, qubits = qml.qchem.molecular_hamiltonian(["H", "H"], coordinates)
ref_state = qml.qchem.hf_state(electrons=2, orbitals=qubits)
dev = qml.device('default.qubit', wires=qubits)
@qml.qnode(dev)
def ansatz(weights):
qml.templates.GateFabric(weights, wires=[0,1,2,3],
init_state=ref_state, include_pi=True)
return qml.expval(H)
For more details, see the GateFabric documentation.
• Added a new template kUpCCGSD, which implements a unitary coupled cluster ansatz with generalized singles and pair doubles excitation operators, proposed by Joonho Lee et al. in arXiv:1810.02327. (#1743)
An example of a circuit using kUpCCGSD template is:
coordinates = np.array([0.0, 0.0, -0.6614, 0.0, 0.0, 0.6614])
H, qubits = qml.qchem.molecular_hamiltonian(["H", "H"], coordinates)
ref_state = qml.qchem.hf_state(electrons=2, orbitals=qubits)
dev = qml.device('default.qubit', wires=qubits)
@qml.qnode(dev)
def ansatz(weights):
qml.templates.kUpCCGSD(weights, wires=[0,1,2,3], k=0, delta_sz=0,
init_state=ref_state)
return qml.expval(H)
#### Improved utilities for quantum compilation and characterization
• The new qml.fourier.qnode_spectrum function extends the former qml.fourier.spectrum function and takes classical processing of QNode arguments into account. The frequencies are computed per (requested) QNode argument instead of per gate id. The gate ids are ignored. (#1681) (#1720)
Consider the following example, which uses non-trainable inputs x, y and z as well as trainable parameters w as arguments to the QNode.
import pennylane as qml
import numpy as np
n_qubits = 3
dev = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev)
def circuit(x, y, z, w):
for i in range(n_qubits):
qml.RX(0.5*x[i], wires=i)
qml.Rot(w[0,i,0], w[0,i,1], w[0,i,2], wires=i)
qml.RY(2.3*y[i], wires=i)
qml.Rot(w[1,i,0], w[1,i,1], w[1,i,2], wires=i)
qml.RX(z, wires=i)
return qml.expval(qml.PauliZ(wires=0))
x = np.array([1., 2., 3.])
y = np.array([0.1, 0.3, 0.5])
z = -1.8
w = np.random.random((2, n_qubits, 3))
This circuit looks as follows:
>>> print(qml.draw(circuit)(x, y, z, w))
0: ──RX(0.5)──Rot(0.598, 0.949, 0.346)───RY(0.23)──Rot(0.693, 0.0738, 0.246)──RX(-1.8)──┤ ⟨Z⟩
1: ──RX(1)────Rot(0.0711, 0.701, 0.445)──RY(0.69)──Rot(0.32, 0.0482, 0.437)───RX(-1.8)──┤
2: ──RX(1.5)──Rot(0.401, 0.0795, 0.731)──RY(1.15)──Rot(0.756, 0.38, 0.38)─────RX(-1.8)──┤
Applying the qml.fourier.qnode_spectrum function to the circuit for the non-trainable parameters, we obtain:
>>> spec = qml.fourier.qnode_spectrum(circuit, encoding_args={"x", "y", "z"})(x, y, z, w)
>>> for inp, freqs in spec.items():
... print(f"{inp}: {freqs}")
"x": {(0,): [-0.5, 0.0, 0.5], (1,): [-0.5, 0.0, 0.5], (2,): [-0.5, 0.0, 0.5]}
"y": {(0,): [-2.3, 0.0, 2.3], (1,): [-2.3, 0.0, 2.3], (2,): [-2.3, 0.0, 2.3]}
"z": {(): [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]}
We can see that all three parameters in the QNode arguments x and y contribute the spectrum of a Pauli rotation [-1.0, 0.0, 1.0], rescaled with the prefactor of the respective parameter in the circuit. The three RX rotations using the parameter z accumulate, yielding a more complex frequency spectrum.
For details on how to control for which parameters the spectrum is computed, a comparison to qml.fourier.circuit_spectrum, and other usage details, please see the fourier.qnode_spectrum docstring.
• Two new methods were added to the Device API, allowing PennyLane devices increased control over circuit decompositions. (#1651)
• Device.expand_fn(tape) -> tape: expands a tape such that it is supported by the device. By default, performs the standard device-specific gate set decomposition done in the default QNode. Devices may overwrite this method in order to define their own decomposition logic.
Note that the numerical result after applying this method should remain unchanged; PennyLane will assume that the expanded tape returns exactly the same value as the original tape when executed.
• Device.batch_transform(tape) -> (tapes, processing_fn): preprocesses the tape in the case where the device needs to generate multiple circuits to execute from the input circuit. The requirement of a post-processing function makes this distinct to the expand_fn method above.
By default, this method applies the transform
$\left\langle \sum_i c_i h_i\right\rangle → \sum_i c_i \left\langle h_i \right\rangle$
if expval(H) is present on devices that do not natively support Hamiltonians with non-commuting terms.
• A new class has been added to store operator attributes, such as self_inverses, and composable_rotation, as a list of operation names. (#1763)
A number of such attributes, for the purpose of compilation transforms, can be found in ops/qubit/attributes.py, but the class can also be used to create your own. For example, we can create a new Attribute, pauli_ops, like so:
>>> from pennylane.ops.qubit.attributes import Attribute
>>> pauli_ops = Attribute(["PauliX", "PauliY", "PauliZ"])
We can check either a string or an Operation for inclusion in this set:
>>> qml.PauliX(0) in pauli_ops
True
False
We can also dynamically add operators to the sets at runtime. This is useful for adding custom operations to the attributes such as composable_rotations and self_inverses that are used in compilation transforms. For example, suppose you have created a new Operation, MyGate, which you know to be its own inverse. Adding it to the set, like so
>>> from pennylane.ops.qubit.attributes import self_inverses
will enable the gate to be considered by the cancel_inverses compilation transform if two such gates are adjacent in a circuit.
### Improvements
• The qml.metric_tensor transform has been improved with regards to both function and performance. (#1638) (#1721)
• If the underlying device supports batch execution of circuits, the quantum circuits required to compute the metric tensor elements will be automatically submitted as a batched job. This can lead to significant performance improvements for devices with a non-trivial job submission overhead.
• Previously, the transform would only return the metric tensor with respect to gate arguments, and ignore any classical processing inside the QNode, even very trivial classical processing such as parameter permutation. The metric tensor now takes into account classical processing, and returns the metric tensor with respect to QNode arguments, not simply gate arguments:
>>> @qml.qnode(dev)
... def circuit(x):
... qml.RX(x[0], wires=0)
... qml.CNOT(wires=[0, 1])
... qml.RY(x[1] ** 2, wires=1)
... qml.RY(x[1], wires=0)
... return qml.expval(qml.PauliZ(0))
>>> x = np.array([0.1, 0.2], requires_grad=True)
>>> qml.metric_tensor(circuit)(x)
array([[0.25 , 0. ],
[0. , 0.28750832]])
To revert to the previous behaviour of returning the metric tensor with respect to gate arguments, qml.metric_tensor(qnode, hybrid=False) can be passed.
>>> qml.metric_tensor(circuit, hybrid=False)(x)
array([[0.25 , 0. , 0. ],
[0. , 0.25 , 0. ],
[0. , 0. , 0.24750832]])
• The metric tensor transform now works with a larger set of operations. In particular, all operations that have a single variational parameter and define a generator are now supported. In addition to a reduction in decomposition overhead, the change also results in fewer circuit evaluations.
• The expansion rule in the qml.metric_tensor transform has been changed. (#1721)
If hybrid=False, the changed expansion rule might lead to a changed output.
• The ApproxTimeEvolution template can now be used with Hamiltonians that have trainable coefficients. (#1789)
Resulting QNodes can be differentiated with respect to both the time parameter and the Hamiltonian coefficients.
dev = qml.device('default.qubit', wires=2)
obs = [qml.PauliX(0) @ qml.PauliY(1), qml.PauliY(0) @ qml.PauliX(1)]
@qml.qnode(dev)
def circuit(coeffs, t):
H = qml.Hamiltonian(coeffs, obs)
qml.templates.ApproxTimeEvolution(H, t, 2)
return qml.expval(qml.PauliZ(0))
>>> t = np.array(0.54, requires_grad=True)
>>> coeffs = np.array([-0.6, 2.0], requires_grad=True)
(array([-1.07813375, -1.07813375]), array(-2.79516158))
All differentiation methods, including backpropagation and the parameter-shift rule, are supported.
• Quantum function transforms and batch transforms can now be applied to devices. Once applied to a device, any quantum function executed on the modified device will be transformed prior to execution. (#1809) (#1810)
dev = qml.device("default.mixed", wires=1)
dev = qml.transforms.merge_rotations()(dev)
@qml.beta.qnode(dev)
def f(w, x, y, z):
qml.RX(w, wires=0)
qml.RX(x, wires=0)
qml.RX(y, wires=0)
qml.RX(z, wires=0)
return qml.expval(qml.PauliZ(0))
>>> print(f(0.9, 0.4, 0.5, 0.6))
-0.7373937155412453
>>> print(qml.draw(f, expansion_strategy="device")(0.9, 0.4, 0.5, 0.6))
0: ──RX(2.4)──┤ ⟨Z⟩
• It is now possible to draw QNodes that have been transformed by a ‘batch transform’; that is, a transform that maps a single QNode into multiple circuits under the hood. Examples of batch transforms include @qml.metric_tensor and @qml.gradients. (#1762)
For example, consider the parameter-shift rule, which generates two circuits per parameter; one circuit that has the parameter shifted forward, and another that has the parameter shifted backwards:
dev = qml.device("default.qubit", wires=2)
@qml.beta.qnode(dev)
def circuit(x):
qml.RX(x, wires=0)
qml.CNOT(wires=[0, 1])
return qml.expval(qml.PauliZ(wires=0))
>>> print(qml.draw(circuit)(0.6))
0: ──RX(2.17)──╭C──┤ ⟨Z⟩
1: ────────────╰X──┤
0: ──RX(-0.971)──╭C──┤ ⟨Z⟩
1: ──────────────╰X──┤
• Support for differentiable execution of batches of circuits has been extended to the JAX interface for scalar functions, via the beta pennylane.interfaces.batch module. (#1634) (#1685)
For example using the execute function from the pennylane.interfaces.batch module:
from pennylane.interfaces.batch import execute
def cost_fn(x):
with qml.tape.JacobianTape() as tape1:
qml.RX(x[0], wires=[0])
qml.RY(x[1], wires=[1])
qml.CNOT(wires=[0, 1])
qml.var(qml.PauliZ(0) @ qml.PauliX(1))
with qml.tape.JacobianTape() as tape2:
qml.RX(x[0], wires=0)
qml.RY(x[0], wires=1)
qml.CNOT(wires=[0, 1])
qml.probs(wires=1)
result = execute(
[tape1, tape2], dev,
)
return (result[0] + result[1][0, 0])[0]
• All qubit operations have been re-written to use the qml.math framework for internal classical processing and the generation of their matrix representations. As a result these representations are now fully differentiable, and the framework-specific device classes no longer need to maintain framework-specific versions of these matrices. (#1749) (#1802)
• The use of expval(H), where H is a cost Hamiltonian generated by the qaoa module, has been sped up. This was achieved by making PennyLane decompose a circuit with an expval(H) measurement into subcircuits if the Hamiltonian.grouping_indices attribute is set, and setting this attribute in the relevant qaoa module functions. (#1718)
• Operations can now have gradient recipes that depend on the state of the operation. (#1674)
For example, this allows for gradient recipes that are parameter dependent:
class RX(qml.RX):
@property
# The gradient is given by [f(2x) - f(0)] / (2 sin(x)), by subsituting
# shift = x into the two term parameter-shift rule.
x = self.data[0]
c = 0.5 / np.sin(x)
return ([[c, 0.0, 2 * x], [-c, 0.0, 0.0]],)
• Shots can now be passed as a runtime argument to transforms that execute circuits in batches, similarly to QNodes. (#1707)
An example of such a transform are the gradient transforms in the qml.gradients module. As a result, we can now call gradient transforms (such as qml.gradients.param_shift) and set the number of shots at runtime.
>>> dev = qml.device("default.qubit", wires=1, shots=1000)
>>> @qml.beta.qnode(dev)
... def circuit(x):
... qml.RX(x, wires=0)
... return qml.expval(qml.PauliZ(0))
array([[-1., -1., -1., -1., -1., 0., -1., 0., -1., 0.]])
array([[-0.12298782]])
• Templates are now top level imported and can be used directly e.g. qml.QFT(wires=0). (#1779)
• qml.probs now accepts an attribute op that allows to rotate the computational basis and get the probabilities in the rotated basis. (#1692)
• Refactored the expand_fn functionality in the Device class to avoid any edge cases leading to failures with plugins. (#1838)
• Updated the qml.QNGOptimizer.step_and_cost method to avoid the use of deprecated functionality. (#1834)
• Added a custom torch.to_numpy implementation to pennylane/math/single_dispatch.py to ensure compabilitity with PyTorch 1.10. (#1824) (#1825)
• The default for an Operation‘s control_wires attribute is now an empty Wires object instead of the attribute raising a NonImplementedError. (#1821)
• qml.circuit_drawer.MPLDrawer will now automatically rotate and resize text to fit inside the rectangle created by the box_gate method. (#1764)
• Operators now have a label method to determine how they are drawn. This will eventually override the RepresentationResolver class. (#1678)
• The operation label method now supports string variables. (#1815)
• A new utility class qml.BooleanFn is introduced. It wraps a function that takes a single argument and returns a Boolean. (#1734)
After wrapping, qml.BooleanFn can be called like the wrapped function, and multiple instances can be manipulated and combined with the bitwise operators &, | and ~.
• There is a new utility function qml.math.is_independent that checks whether a callable is independent of its arguments. (#1700)
This function is experimental and might behave differently than expected.
Note that the test relies on both numerical and analytical checks, except when using the PyTorch interface which only performs a numerical check. It is known that there are edge cases on which this test will yield wrong results, in particular non-smooth functions may be problematic. For details, please refer to the is_indpendent docstring.
• The qml.beta.QNode now supports the qml.qnn module. (#1748)
• @qml.beta.QNode now supports the qml.specs transform. (#1739)
• qml.circuit_drawer.drawable_layers and qml.circuit_drawer.drawable_grid process a list of operations to layer positions for drawing. (#1639)
• qml.transforms.batch_transform now accepts expand_fns that take additional arguments and keyword arguments. In fact, expand_fn and transform_fn now must have the same signature. (#1721)
• The qml.batch_transform decorator is now ignored during Sphinx builds, allowing the correct signature to display in the built documentation. (#1733)
• The tests for qubit operations are split into multiple files. (#1661)
• The transform for the Jacobian of the classical preprocessing within a QNode, qml.transforms.classical_jacobian, now takes a keyword argument argnum to specify the QNode argument indices with respect to which the Jacobian is computed. (#1645)
An example for the usage of argnum is
@qml.qnode(dev)
def circuit(x, y, z):
qml.RX(qml.math.sin(x), wires=0)
qml.CNOT(wires=[0, 1])
qml.RY(y ** 2, wires=1)
qml.RZ(1 / z, wires=1)
return qml.expval(qml.PauliZ(0))
jac_fn = qml.transforms.classical_jacobian(circuit, argnum=[1, 2])
The Jacobian can then be computed at specified parameters.
>>> x, y, z = np.array([0.1, -2.5, 0.71])
>>> jac_fn(x, y, z)
(array([-0., -5., -0.]), array([-0. , -0. , -1.98373339]))
The returned arrays are the derivatives of the three parametrized gates in the circuit with respect to y and z respectively.
There also are explicit tests for classical_jacobian now, which previously was tested implicitly via its use in the metric_tensor transform.
For more usage details, please see the classical Jacobian docstring.
• A new utility function qml.math.is_abstract(tensor) has been added. This function returns True if the tensor is abstract; that is, it has no value or shape. This can occur if within a function that has been just-in-time compiled. (#1845)
• qml.circuit_drawer.CircuitDrawer can accept a string for the charset keyword, instead of a CharSet object. (#1640)
• qml.math.sort will now return only the sorted torch tensor and not the corresponding indices, making sort consistent across interfaces. (#1691)
• Specific QNode execution options are now re-used by batch transforms to execute transformed QNodes. (#1708)
• To standardize across all optimizers, qml.optimize.AdamOptimizer now also uses accumulation (in form of collections.namedtuple) to keep track of running quantities. Before it used three variables fm, sm and t. (#1757)
### Breaking changes
• The operator attributes has_unitary_generator, is_composable_rotation, is_self_inverse, is_symmetric_over_all_wires, and is_symmetric_over_control_wires have been removed as attributes from the base class. They have been replaced by the sets that store the names of operations with similar properties in ops/qubit/attributes.py. (#1763)
• The qml.inv function has been removed, qml.adjoint should be used instead. (#1778)
• The input signature of an expand_fn used in a batch_transform now must have the same signature as the provided transform_fn, and vice versa. (#1721)
• The default.qubit.torch device automatically determines if computations should be run on a CPU or a GPU and doesn’t take a torch_device argument anymore. (#1705)
• The utility function qml.math.requires_grad now returns True when using Autograd if and only if the requires_grad=True attribute is set on the NumPy array. Previously, this function would return True for all NumPy arrays and Python floats, unless requires_grad=False was explicitly set. (#1638)
• The operation qml.Interferometer has been renamed qml.InterferometerUnitary in order to distinguish it from the template qml.templates.Interferometer. (#1714)
• The qml.transforms.invisible decorator has been replaced with qml.tape.stop_recording, which may act as a context manager as well as a decorator to ensure that contained logic is non-recordable or non-queueable within a QNode or quantum tape context. (#1754)
• Templates SingleExcitationUnitary and DoubleExcitationUnitary have been renamed to FermionicSingleExcitation and FermionicDoubleExcitation, respectively. (#1822)
### Deprecations
• Allowing cost functions to be differentiated using qml.grad or qml.jacobian without explicitly marking parameters as trainable is being deprecated, and will be removed in an upcoming release. Please specify the requires_grad attribute for every argument, or specify argnum when using qml.grad or qml.jacobian. (#1773)
The following raises a warning in v0.19.0 and will raise an error in an upcoming release:
import pennylane as qml
dev = qml.device('default.qubit', wires=1)
@qml.qnode(dev)
def test(x):
qml.RY(x, wires=[0])
return qml.expval(qml.PauliZ(0))
par = 0.3
Preferred approaches include specifying the requires_grad attribute:
import pennylane as qml
from pennylane import numpy as np
dev = qml.device('default.qubit', wires=1)
@qml.qnode(dev)
def test(x):
qml.RY(x, wires=[0])
return qml.expval(qml.PauliZ(0))
Or specifying the argnum argument when using qml.grad or qml.jacobian:
import pennylane as qml
dev = qml.device('default.qubit', wires=1)
@qml.qnode(dev)
def test(x):
qml.RY(x, wires=[0])
return qml.expval(qml.PauliZ(0))
par = 0.3
• The default.tensor device from the beta folder is no longer maintained and has been deprecated. It will be removed in future releases. (#1851)
• The qml.metric_tensor and qml.QNGOptimizer keyword argument diag_approx is deprecated. Approximations can be controlled with the more fine-grained approx keyword argument, with approx="block-diag" (the default) reproducing the old behaviour. (#1721) (#1834)
• The template decorator is now deprecated with a warning message and will be removed in release v0.20.0. It has been removed from different PennyLane functions. (#1794) (#1808)
• The qml.fourier.spectrum function has been renamed to qml.fourier.circuit_spectrum, in order to clearly separate the new qnode_spectrum function from this one. qml.fourier.spectrum is now an alias for circuit_spectrum but is flagged for deprecation and will be removed soon. (#1681)
• The init module, which contains functions to generate random parameter tensors for templates, is flagged for deprecation and will be removed in the next release cycle. Instead, the templates’ shape method can be used to get the desired shape of the tensor, which can then be generated manually. (#1689)
To generate the parameter tensors, the np.random.normal and np.random.uniform functions can be used (just like in the init module). Considering the default arguments of these functions as of NumPy v1.21, some non-default options were used by the init module:
• All functions generating normally distributed parameters used np.random.normal by passing scale=0.1;
• Most functions generating uniformly distributed parameters (except for certain CVQNN initializers) used np.random.uniform by passing high=2*math.pi;
• The cvqnn_layers_r_uniform, cvqnn_layers_a_uniform, cvqnn_layers_kappa_uniform functions used np.random.uniform by passing high=0.1.
• The QNode.draw method has been deprecated, and will be removed in an upcoming release. Please use the qml.draw transform instead. (#1746)
• The QNode.metric_tensor method has been deprecated, and will be removed in an upcoming release. Please use the qml.metric_tensor transform instead. (#1638)
• The pad parameter of the qml.AmplitudeEmbedding template has been removed. It has instead been renamed to the pad_with parameter. (#1805)
### Bug fixes
• Fixes a bug where qml.math.dot failed to work with @tf.function autograph mode. (#1842)
• Fixes a bug where in rare instances the parameters of a tape are returned unsorted by Tape.get_parameters. (#1836)
• Fixes a bug with the arrow width in the measure of qml.circuit_drawer.MPLDrawer. (#1823)
• The helper functions qml.math.block_diag and qml.math.scatter_element_add now are entirely differentiable when using Autograd. Previously only indexed entries of the block diagonal could be differentiated, while the derivative w.r.t to the second argument of qml.math.scatter_element_add dispatched to NumPy instead of Autograd. (#1816) (#1818)
• Fixes a bug such that the original shot vector information of a device is preserved, so that outside the context manager the device remains unchanged. (#1792)
• Modifies qml.math.take to be compatible with a breaking change released in JAX 0.2.24 and ensure that PennyLane supports this JAX version. (#1769)
• Fixes a bug where the GPU cannot be used with qml.qnn.TorchLayer. (#1705)
• Fix a bug where the devices cache the same result for different observables return types. (#1719)
• Fixed a bug of the default circuit drawer where having more measurements compared to the number of measurements on any wire raised a KeyError. (#1702)
• Fix a bug where it was not possible to use jax.jit on a QNode when using QubitStateVector. (#1683)
• The device suite tests can now execute successfully if no shots configuration variable is given. (#1641)
• Fixes a bug where the qml.gradients.param_shift transform would raise an error while attempting to compute the variance of a QNode with ragged output. (#1646)
• Fixes a bug in default.mixed, to ensure that returned probabilities are always non-negative. (#1680)
• Fixes a bug where gradient transforms would fail to apply to QNodes containing classical processing. (#1699)
• Fixes a bug where the the parameter-shift method was not correctly using the fallback gradient function when all circuit parameters required the fallback. (#1782)
### Documentation
• Corrects the docstring of ExpvalCost by adding wires to the signature of the ansatz argument. (#1715)
• Updated docstring examples using the qchem.molecular_hamiltonian function. (#1724)
• All instances of qnode.draw() have been updated to instead use the transform qml.draw(qnode). (#1750)
• Add the jax interface in QNode Documentation. (#1755)
• Reorganized all the templates related to quantum chemistry under a common header Quantum Chemistry templates. (#1822)
### Contributors
This release contains contributions from (in alphabetical order):
Catalina Albornoz, Juan Miguel Arrazola, Utkarsh Azad, Akash Narayanan B, Sam Banning, Thomas Bromley, Jack Ceroni, Alain Delgado, Olivia Di Matteo, Andrew Gardhouse, Anthony Hayes, Theodor Isacsson, David Ittah, Josh Izaac, Soran Jahangiri, Nathan Killoran, Christina Lee, Guillermo Alonso-Linaje, Romain Moyard, Lee James O’Riordan, Carrie-Anne Rubidge, Maria Schuld, Rishabh Singh, Jay Soni, Ingrid Strandberg, Antal Száva, Teresa Tamayo-Mendoza, Rodrigo Vargas, Cody Wang, David Wierichs, Moritz Willmann.
orphan
|
{}
|
# Why doesn't this plumbing fitting exist?
### Help Support The Rocketry Forum:
#### Marc_G
##### Well-Known Member
This is just a mini rant in my "why is everything I do so much harder than it should be?" Series.
This Friday I'm getting a new fridge. Old fridge goes into the garage.
Anyway, the old fridge is over two decades old and has been a reliable workhorse. It will go to light duty to serve out its remaining time.
The ice maker in it is fed by my reverse osmosis water system that uses 3/8" OD PET tubing. There's a simple coupler valve using push connect fittings to connect the plastic tubing that is part of the fridge. Easy peasy.
The NEW fridge comes with a screw port for 1/4" compression fittings, and will arrive with a 6 foot stainless hose with the compression fittings (female) on both ends.
So I need to adapt my 3/8 feed line to 1/4" compression fittings, with a valve in there.
I would assume there should be exactly this piece for sale, as my situation of replacing old fridge is hardly unique.
Bit apparently they don't make a 3/8 push to 1/4 comp valve. I have to use three pieces: a valve with 3/8 push on both ends, then a short length of 3/8, then a 3/8 push to 1/4 MIP, then a 1/4 MIP to 1/4 compression adapter.
WTH? Why is this so complicated? And so many joints introduce lots of failure modes.
Am I missing something?
TRF Supporter
#### OverTheTop
##### Well-Known Member
TRF Supporter
Bit apparently they don't make a 3/8 push to 1/4 comp valve. I have to use three pieces: a valve with 3/8 push on both ends, then a short length of 3/8, then a 3/8 push to 1/4 MIP, then a 1/4 MIP to 1/4 compression adapter.
WTH? Why is this so complicated? And so many joints introduce lots of failure modes.
That is the way a lot of times when you start mixing fittings, usually ending up with two or three parts to make the transition successfully. There are so many threads, including parallel and taper in the same thread pitch, as well as metric and various styles of swage and flare-type fittings.
Maybe there is a valve with threads that you can put a 1/4" threaded fitting in one side and a 3/8" sharkbite on the other, screwing into the valve. That would eliminate the short length of tubing and a couple of push fittings anyway.
Are those Sharkbite fittings reliable? I have never used them but the are available for plumbing here. I use the standard PEX barbed fittings as I have the tool for terminating the fittings. Just curious on the Sharkbite though.
#### Wayco
##### Desert Rat Rocketeer
I used a Sharkbite cap to seal off a leaky line I no longer needed. It was in a difficult place to get to, and several years later is still working. A bit pricey, but handy for tight spaces.
##### Well-Known Member
TRF Supporter
Yeah, fun adapting to things. I run into the same fun with adapting different things for maple syrup stuff since things are odd-ish sizes or are just different all over the place. Teflon pipe dope/tape are your friends
As for reliability of those push to connect fittings, my maple reverse osmosis runs at nearly 120 psi all spring. No problems over several years of that. Reminds me....need to get another membrane for this upcoming season to increase processing rate...
#### Funkworks
##### Low Earth Orbit, obstructing Earth's view of Venus
Well to answer the question literally, we'd have to know sales numbers. If they had the part, sales would probably be lower than for other parts. But by producing less of them, their cost to produce would greater (per part), so profit margins would be less.
#### neil_w
TRF Supporter
Custom 3D-printed fittings would be cool. The print material and method would need to be watertight and durable. Probably not FDM. Thinking of this as an easy-to-access service, not so much a DIY thing (although anyone with the appropriate printer at home could do it I suppose).
#### Marc_G
##### Well-Known Member
Now that I have solution #1 (multiple parts with several push connects) I'm considering a better solution: one push connect to compression adapter, followed by a short compression line, connected to a valve with compression on both ends. Might be another trip to Lowes for me. Benefit would be more reliable connections.
#### Jim Hinton
##### Well-Known Member
TRF Supporter
Try to stay away from metal fittings. RO product water is very pure, and as such, highly aggressive. Plastic fittings are the best choice, if you are in a situation where metal fittings are needed, try to stay to stainless. RO water will murder brass and bronze fittings. It causes 'de-zincification' and the fittings are rapidly reduced to a copper sponge.
Jim
#### Rob Campbell
##### Well-Known Member
It's a conspiracy by Big Plumbing to make you buy more fittings.
Flex-Seal
#### Marc_G
##### Well-Known Member
Unfortunately not. That one assumes push connect input for the fridge I think. The ice maker on my new one comes with a female compression fitting for 1/4" tubing, so I need to end at that. The hook up hose is this one:
It comes with fittings to go from 1/4 copper to the hose, which is a common scenario, but getting there from 3/8" PET is apparently a bit more exotic.
I'm not too freaked out since I have something that will work (a bit concerned about the one brass fitting though). If I get time I will get back to the hardware store before delivery.
#### Jim Hinton
##### Well-Known Member
TRF Supporter
Unfortunately not. That one assumes push connect input for the fridge I think. The ice maker on my new one comes with a female compression fitting for 1/4" tubing, so I need to end at that. The hook up hose is this one:
View attachment 487530
It comes with fittings to go from 1/4 copper to the hose, which is a common scenario, but getting there from 3/8" PET is apparently a bit more exotic.
I'm not too freaked out since I have something that will work (a bit concerned about the one brass fitting though). If I get time I will get back to the hardware store before delivery.
Hi Marc;
Stainless valves are stupid expensive. I found a few items on the MSC website that might help. Take a look and see what you think.
MSC #'s 45435757, 32151920, 85552834. None of them are a one piece solution, but we are getting closer. McMaster Carr has similar stainless and plastic valves and fittings as well. MSC shows a stainless 3/8" comp. x 1/4" NPT adapter with a built in valve. For the selling price you could have all of your ice delivered.
Jim
Last edited:
#### 5x7
##### Well-Known Member
I just have to remark that this is the most on-topic thread ever for the Watering Hole.
#### Sandy H.
In my opinion, regardless of the solution, see if there is a good way to put a tray (or at least block off the perimeter of the fridge with a little curb) to trap water and put an audible water alert sensor in the tray.
Water can cause really annoying and expensive damage. I'm not the most social person, but I know 6 people who have had water damage from either in-wall plumbing (i.e. done by a professional when the house was built) or appliances like the fridge, dishwasher, garbage disposal etc. The cheapest fix was $5k, with much of the work being done by the homeowner (me. . . ) and the most expensive was around$50k (paid by insurance, as it was a hose-bib issue in a condo and the condo was required to maintain everything outside of the inside of the dwelling (whew for him!!!!)
Just a thought, as those little drips from multiple connections that weren't quite perfect add up over time. Even if your connections are perfect, the ones inside the appliance are made by the lowest bidder. . .
Sandy.
#### OverTheTop
##### Well-Known Member
TRF Supporter
I just have to remark that this is the most on-topic thread ever for the Watering Hole.
If you are worried about the tubing leaking or coming out you could always use some glue. What glue would you use?
TRF Supporter
TRF Supporter
Call a plumber.
#### DES
##### Well-Known Member
Suggestion - this lash up will require less parts if you don't use the push fittings. Use a single 3/8 x 3/8 compression stop valve; then you can get a 3/8 x 1/4 reducer that threads directly onto one end of that. Two pieces, and only three joints.
#### sjh1
##### Well-Known Member
That is the way a lot of times when you start mixing fittings, usually ending up with two or three parts to make the transition successfully. There are so many threads, including parallel and taper in the same thread pitch, as well as metric and various styles of swage and flare-type fittings.
Maybe there is a valve with threads that you can put a 1/4" threaded fitting in one side and a 3/8" sharkbite on the other, screwing into the valve. That would eliminate the short length of tubing and a couple of push fittings anyway.
Are those Sharkbite fittings reliable? I have never used them but the are available for plumbing here. I use the standard PEX barbed fittings as I have the tool for terminating the fittings. Just curious on the Sharkbite though.
I have tried the sharkbite twice and failed both times.
#### Crazyrocket
##### Well-Known Member
You may want to look at SwageLok. The have a multitude of different valves with connections using compression and push fittings. We use the push fittings for compressed at lines and they can hold our line pressure which is typically 100 psi. They come in SS. Might be a bit pricey though.
#### kuririn
##### BARGeezer
TRF Supporter
Suggestion - this lash up will require less parts if you don't use the push fittings. Use a single 3/8 x 3/8 compression stop valve; then you can get a 3/8 x 1/4 reducer that threads directly onto one end of that. Two pieces, and only three joints.
Marc will still need something to connect the pex tubing to the 3/8" valve.
#### Marc_G
##### Well-Known Member
I went back to Lowes today, and came away empty handed. I looked for and didn't find the 3/8 OD to 1/4" MIP that would have let me reduce the number of junctions. Maybe they were out of them.
After Lowe's I went to a real plumbing store, but they too didn't have a solution, oddly. They agreed with me that the brass fitting, over time measured in years, will fail. Somehow I seem to raise questions that most people don't bother with.
I've emailed my plumber, a good guy who installed my tankless water heater (new exhaust, gas line, and water piping) as well as moved a gas line for my new oven project. I don't expect a quick response but I'm sure he will get back to me eventually.
I just don't get why this takes effort at all. This is about the most common application out there in the "replacing old fridge" space.
I'm beginning to think that this will eventually go a different way: the fridge has a male 1/4" compression fitting; so that's non-negotiable. I could get a longer stainless steel supply line, say 20" ($11 on Amazon), and feed it under the floor, moving the "point of eventual failure" away from my main floor kitchen wood floors and instead over concrete basement floor with a floor drain going to the sump. Meanwhile, the sharkbite.com website sucks; I can't even find the products I already bought there (and their search feature sucks). I thought I could find the bits I bought, then search around to see if they had one in that same line that had the right ends. No luck. Grrr. #### Marc_G ##### Well-Known Member OK, so ideally you are looking for a stop valve with 1/4' threads for a compression fitting on one side and 3/8" push to fit fitting on the other side, right? This is not a valve but it will connect your 1/4" braided hose to the 3/8" pex line. Then you can splice in a stop valve in the pex line further down. U276LF - SharkBite U276LF - 1/4" Sharkbite x 3/8" MIP Reducing Dishwasher 90° Elbow Lead Free (supplyhouse.com) Thanks for this, but being brass it will degrade over time and leak. I'm still seeking my unicorn of a non-brass solution. #### Jim Hinton ##### Well-Known Member TRF Supporter I went back to Lowes today, and came away empty handed. I looked for and didn't find the 3/8 OD to 1/4" MIP that would have let me reduce the number of junctions. Maybe they were out of them. After Lowe's I went to a real plumbing store, but they too didn't have a solution, oddly. They agreed with me that the brass fitting, over time measured in years, will fail. Somehow I seem to raise questions that most people don't bother with. I've emailed my plumber, a good guy who installed my tankless water heater (new exhaust, gas line, and water piping) as well as moved a gas line for my new oven project. I don't expect a quick response but I'm sure he will get back to me eventually. I just don't get why this takes effort at all. This is about the most common application out there in the "replacing old fridge" space. I'm beginning to think that this will eventually go a different way: the fridge has a male 1/4" compression fitting; so that's non-negotiable. I could get a longer stainless steel supply line, say 20" ($11 on Amazon), and feed it under the floor, moving the "point of eventual failure" away from my main floor kitchen wood floors and instead over concrete basement floor with a floor drain going to the sump.
Meanwhile, the sharkbite.com website sucks; I can't even find the products I already bought there (and their search feature sucks). I thought I could find the bits I bought, then search around to see if they had one in that same line that had the right ends. No luck. Grrr.
What is the water like without RO? I'm guessing ugly. Is the RO purely to supply the fridge? Do you have any other in-house treatment? Lifespan on brass/bronze in that environment will be hard to predict. There is a known issue with corrosion but we don't know how fast or slow it will go. To use RO water and minimize failure points I would consider using plain 3/8" PEX or PE as a feed line with push fittings on the ends. Preferably plastic. Those are available for a reasonable price through MSC. For what it's worth, I used to do this kind of work a lot. You will usually have to order fittings through a specialty house if you want to do it right and I can tell you do.
Jim
#### Marc_G
##### Well-Known Member
What is the water like without RO? I'm guessing ugly. Is the RO purely to supply the fridge? Do you have any other in-house treatment? Lifespan on brass/bronze in that environment will be hard to predict. There is a known issue with corrosion but we don't know how fast or slow it will go. To use RO water and minimize failure points I would consider using plain 3/8" PEX or PE as a feed line with push fittings on the ends. Preferably plastic. Those are available for a reasonable price through MSC. For what it's worth, I used to do this kind of work a lot. You will usually have to order fittings through a specialty house if you want to do it right and I can tell you do.
Jim
Thanks Jim,
The water supply to the house is hard alkaline well water. It goes through a salt based softener on the way to the sink. Under the sink is a membrane based RO system (a good, well maintained 5 stage system), that supplies a small faucet at the sink and the ice maker.
Ice made from tap water sucks here. No go.
I don't know why the fridges now require these compression fittings. Seemed to work just fine when the fridge arrived with a coil of PE tubing in the back and you pushed it into a coupler and called it done.
#### kuririn
##### BARGeezer
TRF Supporter
One more alternative.
Let's try steel.
Get a Sharkbite 3/4 X 3/4 X 1/4 COMP stop valve.
Get an EvoPEX end cap
Cut a short length of pex tubing and push the end stop to the tubing, then push the tubing onto the valve. Discard the compression nut and sleeve that came with the new valve and connect your 1/4" compression fitting and 3/8" push to fit tubing to the valve. Done..
#### Marc_G
##### Well-Known Member
One more alternative.
Let's try steel.
Get a Sharkbite 3/4 X 3/4 X 1/4 COMP stop valve.
View attachment 487668
Get an EvoPEX end cap
View attachment 487670
Cut a short length of pex tubing and push the end stop to the tubing, then push the tubing onto the valve. Discard the compression nut and sleeve that came with the new valve and connect your 1/4" compression fitting and 3/8" push to fit tubing to the valve. Done..
We might have a winner here... Thank you.
#### Marc_G
##### Well-Known Member
Oops the one @kuririn mentioned has 3/4 ends not 3/8. I will have to check to see if they have similar with 3/8.
|
{}
|
## Stream: maths
### Topic: continuous functions into a topological X form an X
#### Scott Morrison (Apr 08 2019 at 05:58):
instance {α : Type u} {β : Type v} [topological_space α] [topological_space β] [has_mul β] : has_mul (subtype (@continuous α β _ _)) := ...
and all its friends available in mathlib?
#### Scott Morrison (Apr 08 2019 at 05:58):
Is there some efficient way to generate them all, if not?
#### Patrick Massot (Apr 08 2019 at 06:30):
We don't. We could hope to write a version of subtype_instance/pi_instance to get them, but I'm not sure the lemma naming convention are strong enough here
#### Scott Morrison (Apr 08 2019 at 06:56):
I guess it's actually pretty easy. You only have to do any work for has_add and has_mul; all the axioms are free.
Last updated: May 06 2021 at 18:20 UTC
|
{}
|
# Questions tagged [ifft]
The tag has no usage guidance.
217 questions
Filter by
Sorted by
Tagged with
32 views
### FFTW audio artifacts when modifying magnitudes in frequency domain
I'm currently working with FFT (FFTW3) that I'm using to apply treatments on audio files frequencies. The Forward/Backward test is passed, since I can get the exact same soundfile when processing ...
40 views
### Sweep synthesis in frequency domain: How to correctly adjust magnitude?
I'm attempting to synthesize a logarithmic sweep signal to measure the IR of a system. For the most part, I'm following Section 5.2 of the paper Transfer Function Measurement with Sweeps. To generate ...
14 views
### Noise and harmonic elements w/in a discrete time signal
So I have a discrete-time signal with around 300 samples. It is part of an environmental science application, so is not-strictly periodic although there clearly are periodic components. I have applied ...
59 views
### Can you use Bartlett method in reverse? [closed]
I'm wanting to do an inverse Fourier transform. Can I use the Welch method to generate this inverse, by replacing the FT used within with an IFT?
51 views
### Implementation of Cepstrum in Python
Actually I want to denoise a signal. I know how can I implement FFT in python to denoise it. This is the implementation which I use(From this Kaggle notebook): But I don't know : How can I use ...
163 views
### Sparse Inverse Fourier Transforms
I'm looking to compute an IDFT but my input is very sparse: Example: IDFT length: 7,408,800 (complex floats) Sparsity: 96.61% to 99.99% I found this Sparse FFT website but it looks like the library ...
137 views
### Inverse complex FFTW transform
I've been scanning all of the FFTW documentation, trying to figure out how to inverse FFT my FFT spectrum. The documentation only mentions how to inverse FFT real-to-complex transformations, using the ...
89 views
|
{}
|
Quasistationary Multiphase Power Sensor - MapleSim Help
Quasistationary Multiphase Power Sensor
Power sensor
Description
The Quasistationary Multiphase Power Sensor (or Power Sensor) component has $m$ quasistationary singlephase power sensor models each connected between corresponding phases of $\mathrm{currentP}$, $\mathrm{currentN}$, $\mathrm{voltageP}$, $\mathrm{voltageP}$, with the sum of the outputs at $y$.
Name Description Modelica ID $\mathrm{currentP}$ Positive plug, current path currentP $\mathrm{currentN}$ Negative plug, current path currentN $\mathrm{voltageP}$ Positive plug, voltage path voltageP $\mathrm{voltageN}$ Negative plug, voltage path voltageN $y$ Complex output; complex power in $W$ y
Parameters
Name Default Units Description Modelica ID $m$ $3$ Number of phases m
Modelica Standard Library The component described in this topic is from the Modelica Standard Library. To view the original documentation, which includes author and copyright information, click here.
|
{}
|
# Oscillation of two masses connected to springs and a fixed point
P: 1 Q: Two masses m are connected by identical springs of constants k and they lie on a perfectly smooth surface. The extremity of one spring is fixed on the wall, the other one is loose. Find the equations for the motion of the system. Find the frequencies of oscillations. 1. Relevant equations: F= m$\frac{(d2x)}/{(dt2)}$ F=kx 2. Attempt: Part 1: m$\frac{d2x}{dt2}$ = k(x2-x1)-kx1 m$\frac{d2x}{dt2}$ = k(x1-x2)-kx2 Part 2: x1=A1cosωt x2=A2cosωt and then substitute? Not sure if I even am getting anywhere with this.. Attached Thumbnails
|
{}
|
On n-periodic points of the exp() - A discussion with pictures and methods - Printable Version +- Tetration Forum (https://math.eretrandre.org/tetrationforum) +-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1) +--- Forum: Computation (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=8) +--- Thread: On n-periodic points of the exp() - A discussion with pictures and methods (/showthread.php?tid=1264) On n-periodic points of the exp() - A discussion with pictures and methods - Gottfried - 05/15/2020 On n-periodic points of the exp() - A discussion with pictures and methods Initially triggered by a completely unrelated (seemingly on a first glance) other question in MSE I worked on the problem of periodic points of the exponential function and got a -I think: marvelous- result which I like to share here: MSE - on periodic points of the exp()-function Perhaps I'll transfer the full copy of the text and the images here later, but for the moment I'm bit lazy after that intense researching, computing & documenting. update: another question which asks for generalization to arbitrary (real) bases and their 2-periodic points see MSE - on 2-periodic points for iterated b^z (Because this discussion has evolved much and is much worthful, I post another statement pointing at it) update2: a short exposé of my idea in mathoverflow.net and the relevant question: "is my method for finding n-periodic points of exhaustive?" update3 (7'20): A compilation into a draft article see here periodic points compact.pdf (Size: 309.84 KB / Downloads: 446) Gottfried RE: A discussion with pictures of the set of fixed- and n-priodic points of the exp() - Gottfried - 06/10/2020 A very nice discussion I've been involved, but which is now too long to be copied here, is Jun,20 in Math Stack Exchange . "how to compute the 2-periodic points of $w=z^{z^w}$ . I apply my newly found method to the example bases $z$ with protocols of errors and progresses in the iteration and method of finding. This has also developed into a (re-) discussion on Yiannis' generalization of the Lambert-W function, called "HyperW" of "HW()" which he has presented in his article in 2005 (available here in tetration-forum-library Galidakis2005). See Dominic, Yiannis Galidakis and me in discussion... Gottfried
|
{}
|
# Fortran Wiki bessel_yn
## Description
bessel_yn(n, x) computes the Bessel function of the second kind of order n of x.
bessel_yn(n1, n2, x) returns an array with the Bessel functions of the first kind of the orders n1 to n2.
## Standard
Fortran 2008 and later
## Class
Elemental function, except for the transformational variant.
## Syntax
result = bessel_yn(n, x)
result = bessel_yn(n1, n2, x)
## Arguments
• n - Shall be a scalar or an array of type integer.
• n1 - Shall be a non-negative scalar of type integer.
• n2 - Shall be a non-negative scalar of type integer.
• x - Shall be a scalar or an array of type real.
## Return value
The return value is a scalar of type real. It has the same kind as x.
## Example
program test_besyn
real(8) :: x = 1.0_8
x = bessel_yn(5,x)
end program test_besyn
category: intrinsics
|
{}
|
# Using indentation to automatically begin and end itemize environments
This is a question one might answer with "Why the hell would you want to do it?!", sort of an experiment.
What I am looking for is a way to write lists with many levels of nestings (notes for school) in a "natural" way. Previously, I used an elaborate system of Pandoc, LaTeX and Makefiles to generate notes with occasional LaTeX snippets, however, I would like a more integrated workflow (for example, embedding pseudocode in a codebox environment in a Pandoc/Markdown file is quite challenging, as Markdown was designed with HTML in mind).
If I make + an active character, like
\makeatletter
\mathchardef\@my@mathplus=\mathcode+
\catcode+=\active
\def{+}{\ifmmode\@my@mathplus\else\item\fi}
\mateatother
I can write itemizes like
\begin{itemize}
+ one
+ two
+ three
\end{itemize}
however, I would like to get rid of \begin{itemize} and \end{itemize} and write notes like
+ one
+ subitem one
+ subitem two
+ two
+ three
resulting in the nested list, neatly rendered.
Is there a way to achieve this, or I should consider some kind of pre-procession instead? (Creative and mildly insane answers are appreciated.)
-
+1 for the 'insanity' of the question! :) – Count Zero Oct 20 '11 at 18:40
Two comments. (1) It's \def+, not \def{+}. (2) It should better be \DeclareRobustCommand+{\ifmmode\@my@mathplus\else\expandafter\item\fi}; apart from robustness, you can write +[label] as you would with \item[label]. Such shortcuts, however, make the document hard to read and unstructured. – egreg Oct 20 '11 at 20:21
@egreg I didn't really aim for robustness, just a dirty proof of concept. Although thanks for pointing out the mistake with my code (I didn't copy and paste from my text editor but typed it again by hand... my bad). – Kristóf Marussy Oct 21 '11 at 15:45
ConTeXt can process markdown input directly. – Martin Schröder Oct 27 '11 at 17:07
@JuanA.Navarro: Start here. – Martin Schröder Oct 28 '11 at 16:56
Insane answer to an insane question ;)
\documentclass{article}
\def\+{+}
\makeatletter
\catcode\ =12\let\@nl@space= \catcode\ =10
\newcount\@nl@rlevel
\newcount\@nl@llevel
\@nl@llevel=-1
\def\@nl{%
\catcode\ =12
\global\@nl@rlevel=0
\futurelet\@nl@store\@nl@%
}
\def\@nl@gobble#1{\futurelet\@nl@store\@nl@}
\def\@nl@enditemize{
\ifnum\the\@nl@rlevel<\the\@nl@llevel%
\end{itemize}%
\egroup%
\expandafter\@nl@enditemize%
\else%
\ifnum\the\@nl@rlevel=\the\@nl@llevel\else%
\errmessage{Error: inconsistent identation}
\fi%
\fi%
}
\def\@nl@{%
\ifx\@nl@store\@nl@space%
\expandafter\@nl@gobble%
\else%
\catcode\ =10
\ifx\@nl@store+%
\ifnum\the\@nl@rlevel>\the\@nl@llevel%
\bgroup%
\@nl@llevel=\the\@nl@rlevel
\begin{itemize}%
\fi%
\@nl@enditemize%
\item \expandafter\expandafter\expandafter\@gobble%
\else%
\ifx\@nl@store\@nl%
\global\@nl@rlevel=-1\relax\@nl@enditemize\par
\else\space\fi%
\fi%
\fi%
}
\catcode\^^M=\active%
\AtBeginDocument{%
\catcode\^^M=\active%
\let^^M=\@nl%
}%
\catcode\^^M=5
\makeatother
\begin{document}
Some sample text.
+ foo
+ bar
+ a
+ b
+ c. A very long line
split into multiple lines.
+ hehe
Just like nothing happened. Ha!
\+ escaped starting plus
$\lim_{x\to\infty}\frac1x = 0$
\end{document}
-
Surprisingly this really works, and it is yet to explode right into my face, therefore, it is stable enough for my needs. :) – Kristóf Marussy Oct 21 '11 at 16:01
Okay, it seems to play awfully with Babel (at least with the Hungarian one). Replacing the \AtBeginDocument part with a new environment helps, although. – Kristóf Marussy Oct 21 '11 at 16:43
You really seem to like @. – Martin Schröder Oct 27 '11 at 17:05
@MartinSchröder: Better than Knuthian \nlst@re, \nlsp@ce, etc. :) – Aditya Aug 27 '13 at 5:38
You may need to redefine ^^M to get this feature. That will be somewhat complicated.
As a suggestion, you may use ++ for a subitem, and +++ for a subsubitem. That's much easier:
\documentclass{article}
\makeatletter
\catcode\+\active
\def\itemX{\@ifnextchar+{\subitemX}{\item}}
\def\subitemX#1{\@ifnextchar+{\subsubitemX}{\subitem}}
\def\subsubitemX#1{\subsubitem}
\def+{\ifmmode\string+\else\expandafter\itemX\fi}
\@makeother\+
\makeatother
\newenvironment{easylist}{\trivlist\item
\def\item{\par\noindent\textbullet\enspace}%
\catcode\+\active
}{\endtrivlist}
\begin{document}
\begin{easylist}
+ foo $a+b$
+ foo
++ bar
++ bar
+++ baz
+++ baz
+ foo
\end{easylist}
\end{document}
See also: nicetext package; pandoc tool.
-
Note: After reassigning the category codes of ^^M and space char, there is no essential difficulty to distinguish different level of items through indent. However, it is a bit dangerous to redefine newlines and spaces for normal text. – Leo Liu Oct 20 '11 at 18:43
Oh, finally I realize that there is a package easylist, which implement what I said. I just reinvent the wheel. – Leo Liu Oct 21 '11 at 10:29
+1 for "packaged" solution, always preferred. ;-) – DevSolar Oct 21 '11 at 16:05
This is a bit different rather use two commands, one to narrow the paragraph and another to widen it. I have used \i, \w. The first one to make the paragraph narrower (i.e indent but lose the dotless i and the second the \w for wider.
\documentclass{article}
\usepackage{lipsum}
\parskip12pt
\def\w{\wider}
\def\i{\narrower}
\begin{document}
\lipsum[1]
\i \lipsum[2]
\i \lipsum[2]
\i \lipsum[3]
\w \lipsum[4]
\w \lipsum[5]
\w \lipsum[6]
\end{document}
Advantages, never type more than is necessary! Style to suit!
-
The command \i is already defined. \expandafter\detokenize\expandafter{\i} – Marco Daniel Oct 20 '11 at 19:10
@MarcoDaniel ... I know that is why I mentioned you lose the dotless i, you don't need a dotless i do you? – Yiannis Lazarides Oct 20 '11 at 19:14
I didn't read this ;-(. Sorry. Nobody need this ;-) but I thought it is important to mention this. – Marco Daniel Oct 20 '11 at 19:15
@MarcoDaniel No problem, you only need it if you type turkish texts, I picked it for its mnemonics i.e, indent, then the indentation level stays until you change it. – Yiannis Lazarides Oct 20 '11 at 19:24
All of the solutions suggested here rely on catcode trickery. LuaTeX provides a sane way to do such input translation. The solution below translates the + at the beginning of line to \firstlevel and ++ at the beginning of line to \secondlevel. So, first lets define the \firstlevel and \secondlevel macros (in ConTeXt)
\define\firstlevel
{\endgraf
\blank
\noindentation
\hangindent=1em
\hangafter\plusone
\dontleavehmode\hbox to 1em {\symbol[1]}}
\define\secondlevel
{\endgraf
\blank[none]
\noindentation
\hangindent=2em
\hangafter\plusone
\null \quad \hbox to 1em {\symbol[2]}}
ConTeXt already have a module m-translate that allows you to translate the input while the file is being read and before the text is passed on to TeX. So, you can do:
\usemodule[translate]
\translateinput[++][\string\secondlevel]
\translateinput[+][\string\firstlevel]
\starttext
+ One
+ Two
+ Three
\enableinputtranslation
+ One, a really long line \input ward
++ Two
+ Three
\stoptext
which gives
The only trouble is that this translates all the + to \firstlevel. The m-translate module does not provide an interface to only match the character at the beginning of the line, but it is a short module, so we can override how the match is done.
In the following code, I have simply copied the m-translate module and changed the translators.translate() function.
\startluacode
local translators = { }
moduledata.translators = translators
local compiled, list = nil, nil
function translators.register(from,to)
local l = lpeg.P(from)/to
if not list then
list = l
else
list = list + l
end
compiled = nil
end
function translators.translate(s)
if list then
if not compiled then
compiled = lpeg.Cs((list)^0*(lpeg.P(1))^0)
end
return compiled:match(s)
else
return s
end
end
local textlineactions = resolvers.openers.helpers.textlineactions
utilities.sequencers.appendaction(textlineactions,"after","moduledata.translators.translate")
function translators.enable()
utilities.sequencers.enableaction(textlineactions,"moduledata.translators.translate")
end
function translators.disable()
utilities.sequencers.disableaction(textlineactions,"moduledata.translators.translate")
end
function translators.reset(s)
translators.enable()
list, compiled = nil, nil
end
translators.disable()
\stopluacode
\unprotect
\unexpanded\def\translateinput
{\dodoubleargument\module_translate_input}
\def\module_translate_input[#1][#2]%
{\ctxlua{moduledata.translators.register(\!!bs#1\!!es,\!!bs#2\!!es)}}
\unexpanded\def\resetinputtranslation
{\ctxlua{moduledata.translators.reset()}}
\unexpanded\def\enableinputtranslation
{\ctxlua{moduledata.translators.enable()}}
\unexpanded\def\disableinputtranslation
{\ctxlua{moduledata.translators.disable()}}
{\enableinputtranslation
\disableinputtranslation}
\protect
\define\firstlevel
{\endgraf
\blank
\noindentation
\hangindent=1em
\hangafter\plusone
\dontleavehmode\hbox to 1em {\symbol[1]}}
\define\secondlevel
{\endgraf
\blank[none]
\noindentation
\hangindent=2em
\hangafter\plusone
\null \quad \hbox to 1em {\symbol[2]}}
\translateinput[++][\string\secondlevel]
\translateinput[+][\string\firstlevel]
\starttext
+ One
+ Two
+ Three
\enableinputtranslation
+ One, a really long line \input ward
++ Two and math $a + b$ works
+ Three
\stoptext
which gives (notice that the + in the math mode has not changed).
-
I had trouble using the previous solutions with beamerposter, so I came up with another. It's a bit of a kludge, but it works for me, and preserves the bullet styles of the beamerposter:
\newcommand{\point}[1]{
\begin{itemize}
\item{#1}
\end{itemize}
}
\newcommand{\subpoint}[1]{
\begin{itemize}
\item[]
\begin{itemize}
\item{#1}
\end{itemize}
\end{itemize}
}
`
Perhaps someone with better code-fu can make this work with active characters instead of my clumsy environment-based way of doing this.
-
Welcome to TeX.SX! I think you misread the question, it is about indention in the source. – mafp Aug 26 '13 at 22:13
Thanks! As I said, I'm not familiar enough with LaTeX to use the catcode to achieve the desired input syntax. I was just offering part of a solution (how to make a bullet point of arbitrary depth with a single command instead of a nested list). – user2719544 Aug 27 '13 at 15:13
|
{}
|
# Squarefree products of a class of primes
Numbers which are the sum of two squares are the product of a square and a collection of distinct primes which are 1 or 2 mod 4.
Landau proved that there are $\sim kx/\sqrt{\log x}$ such numbers up to $x$ for a constant $k\approx0.76422.$ Restricting to the squarefree members reduces the incidence by a factor of $\zeta(2).$
So there are $\sim0.46459\ldots x/\sqrt{\log x}$ products of primes =1,2 mod 4 up to $x$. Does this generalize to other congruence classes of primes? That is, given $m$ and some set $S$ and numbers which are the product of distinct primes $\equiv s\pmod m$ for some $s\in S$ (with at least one $s$ coprime to $m$), is their density $kx/\sqrt{\log x}$ for some suitable constant $k$?
Bonus: Is there a good way to find the constant given $m$ and $S$?
• Sure. See proof method in volume 2 of LeVeque Topics in Number Theory. The connection with quadratic forms is that the set of "bad" primes is those for which $(\Delta|q) = -1.$ This reduces to representations by a single form with class number one, which happens just a few times for positive forms. – Will Jagy Apr 20 '15 at 18:48
• @WillJagy: Thanks -- post as an answer, maybe? – Charles Apr 20 '15 at 19:07
• Let $S\subset\mathcal{P}$ be some subset of primes with relative density $\delta$. Let $A$ be the set of integers which can be written as products of primes in $S$. Then we expect that $$\sum_{n\leq x}1_A(n)\sim\frac{kx}{(\log x)^{1-\delta}}$$ for some constant $k$ depending on $S$. We may need $S$ to be reasonably well behaved, such as the example of primes congruent to $s$ mod $m$, but in general this is the statement that we should be looking to prove. This is related to the singularity at $s=1$. – Eric Naslund Apr 27 '15 at 21:54
|
{}
|
Synopsis: Wind blowing over an ultracold sea
The interface between two Bose-Einstein condensates may provide new physical insights into fluid dynamics.
Kelvin-Helmholtz instabilities can occur at the interface between two fluids in relative motion. This happens, for example, when wind blows over the surface of the sea, forming waves, as well as in many similar situations involving immiscible classical fluids. It also occurs in more exotic cases, for instance, at the interface between two superfluids, such as the A and B phases of superfluid helium-$3$. On the other hand, if the two fluids are partially miscible and their interface is thick, a different dynamical instability, known as counter-superflow instability, may also arise.
In a paper published in Physical Review A, Naoya Suzuki at the University of Electro-Communications in Tokyo and collaborators, also in Japan, show that gaseous two-component Bose-Einstein condensates may represent an ideal testing ground for textbook concepts of fluid dynamics, because the miscibility and the interface thickness can be tuned by a clever use of Feshbach resonances and external potentials. Their numerical simulations, based on the solution of a nonlinear Schrödinger equation, illustrate how a Kelvin-Helmholtz instability converts into a counter-superflow instability when the interface thickness is continuously increased. The authors propose experiments to test their ideas, which should be within the reach of current technology. – Franco Dalfovo
More Features »
Announcements
More Announcements »
Fluid Dynamics
Graphene
Related Articles
Atomic and Molecular Physics
Synopsis: Detecting a Molecular Duet
Using a scanning tunneling microscope, researchers detect coupled vibrations between two molecules. Read More »
Atomic and Molecular Physics
Viewpoint: How to Create a Time Crystal
A detailed theoretical recipe for making time crystals has been unveiled and swiftly implemented by two groups using vastly different experimental systems. Read More »
Atomic and Molecular Physics
Viewpoint: What Goes Up Must Come Down
A molecular fountain, which launches molecules rather than atoms and allows them to be observed for long times, has been demonstrated for the first time. Read More »
|
{}
|
InVEST documentation
# Urban Cooling Model¶
## Summary¶
Urban heat mitigation (HM) is a priority for many cities that have undergone heat waves in recent years. Vegetation can help reduce the urban heat island (UHI) effect by providing shade, modifying thermal properties of the urban fabric, and increasing cooling through evapotranspiration. This has consequences for the health and wellbeing of citizens through reduced mortality and morbidity, increased comfort and productivity, and the reduced need for air conditioning (A/C). The InVEST urban cooling model calculates an index of heat mitigation based on shade, evapotranspiration, and albedo, as well as distance from cooling islands (e.g. parks). The index is used to estimate a temperature reduction by vegetation. Finally, the model estimates the value of the heat mitigation service using two (optional) valuation methods: energy consumption and work productivity.
## Introduction¶
UHIs affect many cities around the world, with major consequences for human health and wellbeing: high mortality or morbidity during heat waves, high A/C consumption, and reduced comfort or work productivity. The UHI effect, i.e. the difference between rural and urban temperatures, is a result of the unique characteristics of cities due to two main factors: the thermal properties of materials used in urban areas (e.g. concrete, asphalt), which store more heat, and the reduction of the cooling effect (through shade and evapotranspiration) of vegetation.
Natural infrastructure therefore plays a role in reducing UHIs in cities. Using the rapidly-growing literature on urban heat modeling (Deilami et al. 2018), the InVEST urban cooling model estimates the cooling effect of vegetation based on commonly available data on climate, land use/land cover (LULC), and (optionally) A/C use.
## The Model¶
### How It Works¶
#### Cooling Capacity Index¶
The model first computes the cooling capacity (CC) index for each pixel based on local shade, evapotranspiration, and albedo. This approach is based on the indices proposed by Zardo et al. 2017 and Kunapo et al. 2018, to which we add albedo, an important factor for heat reduction. The shade factor (‘shade’) represents the proportion of tree canopy (≥2 m in height) associated with each land use/land cover (LULC) category. Its value is comprised between 0 and 1. The evapotranspiration index (ETI) represents a normalized value of potential evapotranspiration, i.e. the evapotranspiration from vegetation (or evaporation from soil, for unvegetated areas). It is calculated for each pixel by multiplying the reference evapotranspiration ($$ET0$$, provided by the user) and the crop coefficient ($$Kc$$ , associated with the pixel’s LULC type), and dividing by the maximum value of the $$ET0$$ raster in the area of interest, $$ETmax$$.:
(102)$ETI = \frac{K_c \cdot ET0}{ET_{max}}$
Note that this equation assumes that vegetated areas are sufficiently irrigated (although Kc values can be reduced to represent water-limited evapotranspiration).
The albedo factor is a value between 0 and 1 representing the proportion of solar radiation reflected by the LULC type (Phelan et al. 2015).
The model combines the three factors in the CC index:
(103)$CC_i = 0.6 \cdot shade + 0.2\cdot albedo + 0.2\cdot ETI$
The recommended weighting (0.6; 0.2; 0.2) is based on empirical data and reflects the higher impact of shading compared to evapotranspiration. For example, Zardo et al. 2017 report that “in areas smaller than two hectares [evapotranspiration] was assigned a weight of 0.2 and shading of 0.8. In areas larger than two hectares the weights were changed to 0.6 and 0.4, for [evapotranspiration] and shading respectively”. In the present model, we propose to disaggregate the effects of shade and albedo in equation (83), and give albedo equal weight to ETI based on the results by Phelan et al. 2015 (see Table 2 in their study showing that vegetation and albedo have similar coefficients).
Note: alternative weights can be manually entered by the user to test the sensitivity of model outputs to this parameter (or if local knowledge is available).
Optionally, the model can consider another factor, intensity ($$building.intensity$$ for a given landcover classification), which captures the vertical dimension of built infrastructure. Building intensity is an important predictor of nighttime temperature since heat stored by buildings during the day is released during the night. To predict nighttime temperatures, users need to provide the building intensity factor for each land use class in the Biophysical Table and the model will change equation (103) to:
(104)$CC_i = 1 - building.intensity$
#### Urban Heat Mitigation Index (Effect of Large Green Spaces)¶
To account for the cooling effect of large green spaces (>2 ha) on surrounding areas (see discussion in Zardo et al. 2017 and McDonald et al. 2016), the model calculates the urban HM index: HM is equal to CC if the pixel is unaffected by any large green spaces, but otherwise set to a distance-weighted average of the CC values from the large green spaces and the pixel of interest.
To do so, the model first computes the area of green spaces within a search distance $$d_{cool}$$ around each pixel ($$GA_i$$), and the CC provided by each park ($$CC_{park_i}$$):
(105)${GA}_{i}=cell_{area}\cdot\sum_{j\in\ d\ radius\ from\ i} g_{j}$
(106)$CC_{park_i}=\sum_{j\in\ d\ radius\ from\ i} g_j \cdot CC_j \cdot e^{\left( \frac{-d(i,j)}{d_{cool}} \right)}$
where $$cell_{area}$$ is the area of a cell in ha, $$g_j$$ is 1 if pixel $$j$$ is green space or 0 if it is not, $$d(i,j)$$ is the distance between pixels $$i$$ and $$j$$, $$d_{cool}$$ is the distance over which a green space has a cooling effect, and $$CC_{park_i}$$ is the distance weighted average of the CC values attributable to green spaces. (Note that LULC classes that qualify as “green spaces” are determined by the user with the parameter ‘green_area’ in the Biophysical Table, see Input table in Section 3.) Next, the HM index is calculated as:
(107)$\begin{split}HM_i = \begin{Bmatrix} CC_i & if & CC_i \geq CC_{park_i}\ or\ GA_i < 2 ha \\ CC_{park_i} & & otherwise \end{Bmatrix}\end{split}$
#### Air Temperature Estimates¶
To estimate heat reduction throughout the city, the model uses the (city-scale) UHI magnitude, $$UHI_{max}$$. Users can obtain UHI values from local literature or global studies: for example, the Global Surface UHI Explorer developed by the University of Yale, provides estimates of annual, seasonal, daytime, and nighttime UHI (https://yceo.users.earthengine.app/view/uhimap). Note that UHI magnitude is defined for a specific period (e.g. current or future climate) and time (e.g. nighttime or daytime temperatures). The selection of period and time will affect the service quantification and valuation.
Air temperature without air mixing $$T_{air_{nomix}}$$ is calculated for each pixel as:
(108)$T_{air_{nomix},i}=T_{air,ref} + (1-HM_i)\cdot UHI_{max}$
Where $$T_{air,ref}$$ is the rural reference temperature and $$UHI_{max}$$ is the maximum magnitude of the UHI effect for the city (or more precisely, the difference between $$T_{air,ref}$$ and the maximum temperature observed in the city).
Due to air mixing, these temperatures average spatially. Actual air temperature (with mixing), $$T_{air}$$, is derived from $$T_{air_{nomix}}$$ using a Gaussian function with kernel radius $$r$$, defined by the user.
For each area of interest (which is a vector GIS layer provided by the user), we calculate average temperature and temperature anomaly $$(T_{air,i} - T_{air,ref})$$.
#### Value of Heat Reduction Service¶
The value of temperature reduction can be assessed in at least three ways:
1. energy savings from reduced A/C electricity consumption;
2. gain in work productivity for outdoor workers;
3. decrease in heat-related morbidity and mortality.
The model provides estimates of (i) energy savings and (ii) work productivity based on global regression analyses or local data.
Energy savings: the model uses a relationship between energy consumption and temperature (e.g. summarized by Santamouris et al. 2015) to calculate energy savings and associated costs for a building $$b$$:
(109)$Energy.savings(b)= consumption.increase(b) \cdot (\overline{T_{air,MAX} - T_{air,i}})$
Where:
• $$consumption.increase(b)$$ (kWh/° C/$$m^2$$) is the local estimate of the energy consumption increase per each degree of temperature per square meter of the building footprint, for building category $$b$$.
• $$T_{air,MAX}$$ (° C) is the maximum temperature over the landscape $$(T_{air,ref} + UHI_{max})$$;
• $$\overline{T_{air,MAX} - T_{air,i}}$$ (° C) is the average difference in air temperature for building $$b$$, with $$T_{air,i}$$ modeled in the previous steps.
If costs are provided for each building category, equation (109) is replaced by equation (110)
(110)$Energy.savings(b)= consumption.increase(b) \cdot (\overline{T_{air,MAX} - T_{air,i}}) \cdot cost(b)$
Where:
• $$cost(b)$$ is the estimate of energy cost per kWh for building category $$b$$. Note that this is very likely to be equal for all buildings.
To calculate total energy savings, we sum the pixel-level values over the area of interest.
Work Productivity: the model converts air temperature into Wet Bulb Globe Temperature (WBGT) to calculate the impacts of heat on work productivity. WBGT takes into account humidity, and can be estimated from standard meteorological data in the following way (American College of Sports Medicine, 1984, Appendix I):
(111)$WBGT_i = 0.567 \cdot T_{air,i} + 0.393 \cdot e_i + 3.94$
Where:
• $$T_{air}$$ = temperature provided by the model (dry bulb temperature (° C))
• $$e_i$$ = water vapor pressure (hPa)
Vapor pressure is calculated from temperature and relative humidity using the equation:
(112)$e_i = \frac{RH}{100} \cdot 6.105 \cdot e^{\left ( 17.27 \cdot \frac{T_{air,i}}{(237.7 + T_{air,i})} \right )}$
Where:
• $$RH$$ = average Relative Humidity (%) provided by the user
For each pixel, the model computes the estimated loss in productivity (%) for two work intensities: “light work” and “heavy work” (based on rest time needed at different work intensities, as per Table 2 in Kjellstrom et al. 2009):
(113)$\begin{split}Loss.light.work_i = \begin{Bmatrix} 0 & if & WBGT < 31.5\\ 25 & if & 31.5 \leq WBGT < 32.0 \\ 50 & if & 32.0 \leq WBGT < 32.5 \\ 75 & if & 32.5 \leq WBGT \\ \end{Bmatrix}\end{split}$
(114)$\begin{split}Loss.heavy.work_i = \begin{Bmatrix} 0 & if & WBGT < 27.5\\ 25 & if & 27.5 \leq WBGT < 29.5 \\ 50 & if & 29.5 \leq WBGT < 31.5 \\ 75 & if & 31.5 \leq WBGT \\ \end{Bmatrix}\end{split}$
Here, “light work” corresponds to approximately 200 Watts metabolic rate, i.e. office desk work and service industries, and “heavy work” corresponds to 400 W, i.e. construction or agricultural work. If city-specific data on distribution of gross labor sectors are not available, the user can estimate the working population of the city in 3 sectors (service, industry, agriculture) using national-level World Bank data (e.g. “employment in industry, male (%)” and similar). Loss of work time for a given temperature can be calculated using the resting times in Table 2 (Kjellstrom et al. 2009) and the proportion of working population in different sectors. If local data are available on average hourly salaries for the different sectors, these losses in work time can be translated into monetary losses.
Finally, for “light work”, note that A/C prevalence can play a role. If most office buildings are equipped with A/C, the user might want to reduce the loss of work time for the service sector by the same proportion as A/C prevalence.
## Limitations and Simplifications¶
Due to the simplifications described above, the model presents a number of limitations which are summarized here.
CC index: the CC index relies on empirical weights, derived from a limited number of case studies, which modulate the effect of key factors contributing to the cooling effect (equation (83)). This weighting step comprises high uncertainties, as reviewed in Zardo et al. 2017. To characterize and reduce this uncertainty, users can test the sensitivity of the model to these parameters or conduct experimental studies that provide insights into the relative effects of shade, albedo, and evapotranspiration.
Effect of large parks and air mixing: two parameters capture the effect of large green spaces and air mixing ( $$d_{cool}$$ and $$r$$). The value of these parameters is difficult to derive from the literature as they vary with vegetation properties, climate (effect of large green spaces), and wind patterns (air mixing). Similar to CC, users can characterize and reduce these uncertainties by testing the sensitivity of the model to these parameters and comparing spatial patterns of temperature estimated by the model with observed or modeled data (see Bartesaghi et al. 2018 and Deilami et al. 2018 for additional insights into such comparisons).
Valuation options: the valuation options currently supported by the model are related to A/C energy consumption and outdoor work productivity. For A/C energy consumption, users need to assess A/C prevalence, and reduce estimates accordingly (i.e. reduce energy consumption proportionally to actual use of A/C).
Valuation of the health effects of urban heat is not currently included in the model, despite their importance (McDonald et al. 2016). This is because these effects vary dramatically across cities and it is difficult to extrapolate current knowledge based predominantly in the Global North (Campbell et al. 2018). Possible options to obtain health impact estimates include:
• using global data from McMichael et al. 2003, who use a linear relationship above a threshold temperature to estimate the annual attributable fraction of deaths due to hot days or,
• for applications in the US, a methodology was developed based on national-scale relationships between mortality and temperature change: see McDonald et al. 2016.
Gasparrini et al. 2014 break down the increase in mortality attributable to heat for 384 cities in 13 countries. $$T_air$$ output from the InVEST model could be used to determine the mortality fraction attributable to heat (first determine in which percentile $$T_{air,i}$$ falls, then use Table S3 or Table S4 in the appendix).
## Data Needs¶
• Workspace (directory, required): The folder where all the model’s output files will be written. If this folder does not exist, it will be created. If data already exists in the folder, it will be overwritten.
• File Suffix (text, optional): Suffix that will be appended to all output file names. Useful to differentiate between model runs.
• Land Use/Land Cover (raster, required): Map of LULC for the area of interest. All values in this raster must have corresponding entries in the Biophysical Table. The model will use the resolution of this layer to resample all outputs. The resolution should be small enough to capture the effect of green spaces in the landscape, although LULC categories can comprise a mix of vegetated and non-vegetated covers (e.g. “residential”, which may have 30% canopy cover).
• Biophysical Table (CSV, required): A table mapping each LULC code to biophysical data for that LULC class. All values in the LULC raster must have corresponding entries in this table.
Columns:
• lucode (integer, required): LULC code corresponding to those in the LULC map.
• kc (number, required): Crop coefficient for this LULC class.
• green_area (true/false): Enter 1 to indicate that the LULC is considered a green area. Enter 0 to indicate that the LULC is not considered a green area. Green areas larger than 2 hectares have an additional cooling effect.
• shade (ratio, conditionally required): The proportion of area in this LULC class that is covered by tree canopy at least 2 meters high. Required if the ‘factors’ option is selected for the Cooling Capacity Calculation Method.
• albedo (ratio, conditionally required): The proportion of solar radiation that is directly reflected by this LULC class. Required if the ‘factors’ option is selected for the Cooling Capacity Calculation Method.
• building_intensity (ratio, conditionally required): The ratio of building floor area to footprint area, normalized between 0 and 1. Required if the ‘intensity’ option is selected for the Cooling Capacity Calculation Method.
• Evapotranspiration (raster, units: mm, required): Map of evapotranspiration values. These values can be for a specific date or monthly values can be used as a proxy.
• Area of Interest (vector, polygon/multipolygon, required): A map of areas over which to aggregate and summarize the final results. The AOI(s) will typically be city or neighborhood boundaries.
• Maximum Cooling Distance (number, units: m, required): Distance over which green areas larger than 2 hectares have a cooling effect. This is $$d_{cool}$$ in equation (106). Recommended value: 450 m.
• Reference Air Temperature (number, units: °C, required): Air temperature in a rural reference area where the urban heat island effect is not observed. This is $$T_{air,ref}$$ in equation (108). This could be nighttime or daytime temperature, for a specific date or an average over several days. The results will be given for the same period of interest.
• UHI Effect (number, units: °C, required): The magnitude of the urban heat island effect, i.e., the difference between the rural reference temperature and the maximum temperature observed in the city. This is $$UHI_{max}$$ in equation (108).
• Air Blending Distance (number, units: m, required): Radius over which to average air temperatures to account for air mixing. Recommended value range for initial run: 500 m to 600 m; see Schatz et al. 2014 and Londsdorf et al. 2021.
• Cooling Capacity Calculation Method (option, required): The air temperature predictor method to use.
Options:
• factors: {‘display_name’: ‘factors’, ‘description’: ‘Use the weighted shade, albedo, and ETI factors as a temperature predictor (for daytime temperatures).’}
• intensity: {‘display_name’: ‘intensity’, ‘description’: ‘Use building intensity as a temperature predictor (for nighttime temperatures).’}
• Buildings (vector, polygon/multipolygon, conditionally required): A map of built infrastructure footprints. Required if Run Energy Savings Valuation is selected.
Field:
• type (integer, required): Code indicating the building type. These codes must match those in the Energy Consumption Table.
• Run Energy Savings Valuation (true/false): Run the energy savings valuation model.
• Run Work Productivity Valuation (true/false): Run the work productivity valuation model.
• Energy Consumption Table (CSV, conditionally required): A table of energy consumption data for each building type. Required if Run Energy Savings Valuation is selected.
Columns
• type (integer, required): Building type codes matching those in the Buildings vector.
• consumption (number, units: kWh/(m² · °C), required): Energy consumption by footprint area for this building type.
Note
The consumption value is per unit of footprint area, not floor area. This value must be adjusted for the average number of stories for structures of this type.
• cost (number, units: currency/kWh, optional): The cost of electricity for this building type. If this column is provided, the energy savings outputs will be in the this currency unit rather than kWh. The values in this column are very likely to be the same for all building types.
• Average Relative Humidity (percent, conditionally required): The average relative humidity over the time period of interest. Required if Run Work Productivity Valuation is selected.
• Shade Weight (ratio, optional): The relative weight to apply to shade when calculating the cooling capacity index. If not provided, defaults to 0.6.
• Albedo Weight (ratio, optional): The relative weight to apply to albedo when calculating the cooling capacity index. If not provided, defaults to 0.2.
• Evapotranspiration Weight (ratio, optional): The relative weight to apply to ETI when calculating the cooling capacity index. If not provided, defaults to 0.2.
## Interpreting Results¶
• hm_[Suffix].tif: The calculated HMI.
• uhi_results_[Suffix].shp: A copy of the input vector “Area of Interest” with the following additional fields:
• “avg_cc” - Average CC value (-).
• “avg_tmp_v” - Average temperature value (degC).
• “avg_tmp_an” - Average temperature anomaly (degC).
• “avd_eng_cn” - (optional) Avoided energy consumption (kWh or \$ if optional energy cost input column was provided in the Energy Consumption Table).
• “avg_wbgt_v” - (optional) Average WBGT (degC).
• “avg_ltls_v” - (optional) Light work productivity loss (%).
• “avg_hvls_v” - (optional) Heavy work productivity loss (%).
• buildings_with_stats_[Suffix].shp: A copy of the input vector “Building Footprints” with the following additional fields:
• “energy_sav” - Energy savings value (kWh or currency if optional energy cost input column was provided in the Energy Consumption Table). Savings are relative to a theoretical scenario where the city contains NO natural areas nor green spaces; where CC = 0 for all LULC classes.
• “mean_t_air” - Average temperature value in building (degC).
The intermediate folder contains additional model outputs:
• cc_[Suffix].tif: Raster of CC values.
• T_air_[Suffix].tif: Raster of estimated air temperature values.
• T_air_nomix_[Suffix].tif: Raster of estimated air temperature values prior to air mixing (i.e. before applying the moving average algorithm).
• eti_[Suffix].tif: Raster of values of actual evapotranspiration (reference evapotranspiration times crop coefficient “Kc”).
• wbgt_[Suffix].tif: Raster of the calculated WBGT.
• reprojected_aoi_[Suffix].shp: The user-defined Area of Interest, reprojected to the Spatial Reference of the LULC.
• reprojected_buildings_[Suffix].shp: The user-defined buildings vector, reprojected to the Spatial Reference of the LULC.
## Appendix: Data Sources and Guidance for Parameter Selection¶
### Albedo¶
Albedo for urban built infrastructure can be found in local microclimate literature. Deilami et al. 2018 and Bartesaghi et al. 2018 provide a useful review. Stewart and Oke (2012) provide value ranges for typical LULC categories.
### Green Area Maximum Cooling Distance¶
Distance (meters) over which large urban parks (>2 ha) have a cooling effect. See a short review in Zardo et al. 2017, including a study that reports a cooling effect at a distance five times tree height. In the absence of local studies, an estimate of 450m can be used.
### Baseline Air Temperature¶
Rural reference temperature (°C) can be obtained from local temperature stations or global climate data.
### Magnitude of the UHI Effect¶
i.e. the difference between the maximum temperature in the city and the rural reference (baseline) air temperature. In the absence of local studies, users can obtain values from a global study conducted by Yale: https://yceo.users.earthengine.app/view/uhimap
### Air Temperature Maximum Blending Distance¶
Search radius (meters) used in the moving average to account for air mixing. A recommended initial value range of 500m to 600m can be used based on preliminary tests in pilot cities (Minneapolis-St Paul, USA and Paris, France). This parameter can be used as a calibration parameter if observed or modeled temperature data are available.
### Energy Consumption Table¶
Energy consumption (kWh/°C) varies widely across countries and cities. Santamouris et al. 2015 provide estimates of the energy consumption per °C for a number of cities worldwide. For the United States (US), EPA EnergyStar Portfolio Manager data may provide categorical averages as well as data for specific buildings: https://www.energystar.gov/buildings/facility-owners-and-managers/existing-buildings/use-portfolio-manager/understand-metrics/what-energy Note: If A/C prevalence is low, this valuation metric should not be used as it assumes that energy costs will increase with higher temperatures (and greater A/C use). A/C prevalence data for the US can be obtained from the American Housing Survey: https://www.census.gov/programs-surveys/ahs.html
### Average Relative Humidity¶
Average relative humidity (%) during heat waves can be obtained from local temperature stations or global climate data
## FAQs¶
• What is the output resolution?
Model outputs are of two types: rasters and vectors. Rasters have the same resolution as the LULC input (all other raster inputs are resampled to the same resolution).
• Why aren’t the health impacts calculated by the model?
The effects of heat on human health vary dramatically across cities and it is difficult to develop a generic InVEST model that accurately captures and quantifies these for all cities. See the point about “Valuation of the health effects of urban heat” in the model Limitations section for additional details and pathways to assess the health impacts of urban heat mitigation.
## References¶
Allen, R. G., Pereira, L. S., Raes, D., & Smith, M. (1998). Crop evapotranspiration - Guidelines for computing crop water requirements - FAO Irrigation and drainage paper 56. FAO, Rome, Italy.
American College of Sports Medicine (1984). Prevention of Thermal Injuries During Distance Running. Medicine and Science in Sports & Exercise, 16(5), ix-xiv. https://doi.org/10.1249/00005768-198410000-00017
Bartesaghi, C., Osmond, P., & Peters, A. (2018). Evaluating the cooling effects of green infrastructure : A systematic review of methods, indicators and data sources. Solar Energy, 166(February), 486-508. https://doi.org/10.1016/j.solener.2018.03.008
Campbell, S., Remenyi, T. A., White, C. J., & Johnston, F. H. (2018). Heatwave and health impact research: A global review. Health & Place, 53, 210-218. https://doi.org/https://doi.org/10.1016/j.healthplace.2018.08.017
Deilami, K., Kamruzzaman, M., & Liu, Y. (2018). Urban heat island effect: A systematic review of spatio-temporal factors, data, methods, and mitigation measures. International Journal of Applied Earth Observation and Geoinformation, 67, 30-42. https://doi.org/https://doi.org/10.1016/j.jag.2017.12.009
Gasparrini, A., Guo, Y., Hashizume, M., Lavigne, E., Zanobetti, A., Schwartz, J., Tobias, A., Tong, S., Rocklöv, J., Forsberg, B., Leone, M., De Sario, M., Bell, M. L., Guo, Y. L., Wu, C., Kan, H., Yi, S., Coelho, M. d., Saldiva, P. H., Honda, Y., Kim, H., & Armstrong, B. (2015). Mortality risk attributable to high and low ambient temperature: a multicountry observational study. The lancet, 386(9991), 369-375. https://doi.org/10.1016/S0140-6736(14)62114-0
Kjellstrom, T., Holmer, I., & Lemke, B. (2009). Workplace heat stress, health and productivity - an increasing challenge for low and middle-income countries during climate change. Global Health Action, 2, 10.3402/gha.v2i0.2047. https://doi.org/10.3402/gha.v2i0.2047
Kunapo, J., Fletcher, T. D., Ladson, A. R., Cunningham, L., & Burns, M. J. (2018). A spatially explicit framework for climate adaptation. Urban Water Journal, 15(2), 159-166. https://doi.org/10.1080/1573062X.2018.1424216
Londsdorf, E.V., Nootenboom, C., Janke, B., & Horgan, B.P. (2021). Assessing urban ecosystem services provided by green infrastructure: Golf courses in the Minneapolis-St. Paul metro area. Landscape and Urban Planning, 208. https://doi.org/10.1016/j.landurbplan.2020.104022
McDonald, R. I., Kroeger, T., Boucher, T., Wang, L., & Salem, R. (2016). Planting Healthy Air: A global analysis of the role of urban trees in addressing particulate matter pollution and extreme heat. CAB International, 128-139.
McMichael, A. J., Campbell-Lendrum, D. H., Corvalán, C. F., Ebi, K. L., Githeko, A. k., Scheraga, J. D., & Woodward, A. (2003). Climate change and human health: risks and responses. World Health Organization. Geneva, Switzerland.
Phelan, P. E., Kaloush, K., Miner, M., Golden, J., Phelan, B., Iii, H. S., & Taylor, R. A. (2015). Urban Heat Island : Mechanisms , Implications , and Possible Remedies. Annual Review of Environment and Resources, 285-309. https://doi.org/10.1146/annurev-environ-102014-021155
Santamouris, M., Cartalis, C., Synnefa, A., & Kolokotsa, D. (2015). On the impact of urban heat island and global warming on the power demand and electricity consumption of buildings - A review. Energy & Buildings, 98, 119-124. https://doi.org/10.1016/j.enbuild.2014.09.052
Shatz, J. & Kucharik, C.J. (2014). Seasonality of the Urban Heat Island Effect in Madison, Wisconsin. Journal of Applied Meterology and Climatology, 53(10), 2371-2386. https://doi.org/10.1175/JAMC-D-14-0107.1
Stewart, I. D., & Oke, T. R. (2012). Local climate zones for urban temperature studies. American Meteorological Society. https://doi.org/10.1175/BAMS-D-11-00019.1
Zardo, L., Geneletti, D., Prez-soba, M., & Eupen, M. Van. (2017). Estimating the cooling capacity of green infrastructures to support urban planning. Ecosystem Services, 26, 225-235. https://doi.org/10.1016/j.ecoser.2017.06.016
|
{}
|
[Solution] Equal by XORing solution codechef
Equal by XORing solution codechef – JJ has three integers AABB, and NN. He can apply the following operation on AA:
• Select an integer XX such that 1X<N1≤X<N and set A:=AXA:=A⊕X. (Here, denotes the bitwise XOR operation.)
[Solution] Equal by XORing solution codechef
JJ wants to make AA equal to BB.
Determine the minimum number of operations required to do so. Print 1−1 if it is not possible.
Input Format
• The first line contains a single integer TT — the number of test cases. Then the test cases follow.
• The first and only line of each test case contains three integers AABB, and NN — the parameters mentioned in the statement.
Output Format
For each test case, output the minimum number of operations required to make AA equal to BB. Output 1−1 if it is not possible to do so.
[Solution] Equal by XORing solution codechef
• 1T10001≤T≤1000
• 0A,B<2300≤A,B<230
• 1N2301≤N≤230
• Subtask 1 (30 points): NN is a power of 22.
• Subtask 2 (70 points): Original Constraints.
Sample Input 1
3
5 5 2
3 7 8
8 11 1
[Solution] Equal by XORing solution codechef
0
1
-1
Explanation
• Test Case 11: AA is already equal to BB. Hence we do not need to perform any operation.
• Test Case 22: We can perform the operation with X=4X=4 which is <8<8. Therefore A=34=7A=3⊕4=7. Thus, only one operation is required.
• Test Case 33: We can show that it is not possible to make AA equal to BB.
Sample Input 2
2
24 27 3
4 5 1000
[Solution] Equal by XORing solution codechef
2
1
Explanation
Note that the above sample case belongs to subtask 22.
• Test Case 1: We can first perform the operation with X=1X=1 which is <3<3. Therefore A=241=25A=24⊕1=25. Then we can perform the operation with X=2X=2 which is <3<3. Therefore A=252=27A=25⊕2=27. Therefore we can make AA equal to BB in 22 operations. It can be shown that it is not possible to do this in less than 22 operations.
|
{}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.