# Thread: Find Derivative/maximum/inflection from a graph?
1. ## Find Derivative/maximum/inflection from a graph?
Let g be a continuous function with g(2) = 5. The graph of the piecewise-linear function g', the derivative of g, is shown above for -3 <= x <= 7.
a) Find the x coordinate of all points of inflection of the graph of y=g(x) for -3<x<7. Justify your answer.
b) Find the absolute maximum value of g on the interval -3<=x<=7. Justify your answer.
c) Find the average rate of change of g(x) on the interval -3<=x<=7.
a. A point of inflection occurs where g' changes direction (that is, where the slope of g' changes sign), so it would be at x = -1, 1, 4, right?
b. I know where the maximum is on the derivative's graph, but I don't know how to find it for the original function.
c. (-1 - 4)/(7 - (-3)) = -5/10 = -1/2, right?
2. You are given a graph of g'(x), the derivative of g(x).
g(x) has extrema where g'(x) changes sign.
g(x) has inflection points where the slope of g'(x), which is g''(x), changes sign.
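Since the thread's graph is not reproduced here, a small sketch can illustrate the reasoning with made-up numbers: sample the corner points of a hypothetical piecewise-linear g', then look for sign changes of g' (candidate extrema of g) and sign changes of the slope of g', i.e. of g'' (inflection points of g). Every value below is an assumption for illustration only, not the problem's actual graph.

```python
# Hypothetical corner points (x, g'(x)) of a piecewise-linear derivative.
# These values are invented for illustration; the thread's actual graph
# is not reproduced in the text.
corners = [(-3, 2), (-1, 4), (1, 1), (4, -3), (7, -1)]

def sign_changes(values):
    """Indices i where values[i] and values[i+1] have strictly opposite signs."""
    return [i for i in range(len(values) - 1)
            if values[i] * values[i + 1] < 0]

# Extrema of g: where g' crosses zero.  Interpolate the root on each
# segment where the sign of g' flips.
ys = [y for _, y in corners]
extreme_xs = []
for i in sign_changes(ys):
    (x1, y1), (x2, y2) = corners[i], corners[i + 1]
    extreme_xs.append(x1 - y1 * (x2 - x1) / (y2 - y1))

# Inflection points of g: where the slope of g' (that is, g'') changes sign,
# which for a piecewise-linear g' can only happen at a corner.
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(corners, corners[1:])]
inflection_xs = [corners[i + 1][0] for i in sign_changes(slopes)]

print(extreme_xs)      # x-values where g has an extremum
print(inflection_xs)   # x-values where g has inflection points
```

With these invented corners, g' crosses zero once between x = 1 and x = 4, and its slope changes sign at the corners x = -1 and x = 4; the same procedure applied to the real graph justifies parts (a) and (b).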
Residue field of the completion of a field of finite type over the prime field
In *Theory of p-adic Galois Representations* by Fontaine and Yi Ouyang, there is a proof of Grothendieck's $\ell$-adic monodromy theorem for general local fields.
Here a local field is a complete discrete valuation field with perfect residue field. But I could not understand why $k_1$ is of finite type over the prime field. Is it trivially true? What is the original proof?
# Browse by Subject "Physics, Fluid and Plasma"
• (1993)
The propagation of acoustic waves in porous media is of interest in a variety of areas, including oil well logging, the seepage of contaminants into ground water supplies, and the flow of fluids in packed beds. The aim of ...
• (2006)
After describing these advancements in methodology, we apply our new technology to fluid sodium near its liquid-vapor critical point. In particular, we explore the microscopic mechanisms which drive the continuous change ...
• (1987)
D-T fusion-born alpha particles are mirror-confined in the central cell of a tandem mirror reactor. The resulting anisotropic loss-cone distribution of the alpha particles in velocity space is capable of destabilizing low ...
• (1989)
This dissertation describes the design, construction, and proof-of-principle verification of a neutral hydrogen flux detection system, based on an ECRH discharge as the neutral flux ionizer. The significant features of the ...
• (1994)
The high voltage spark has been used many years for emission spectrochemical analysis. Yet, spark studies have produced only heuristic descriptions of spark analyte sampling and excitation. It has been dogma in the analytical ...
• (1989)
A thin layer of a single-component Boussinesq fluid, contained between two rigid horizontal plates of low thermal conductivity, is cooled from above and heated from below. In the steady state, a heat flux traverses ...
• (1987)
The invariance properties of various sets of magnetohydrodynamic (MHD) equations are studied using techniques from the theory of differential forms. Equations considered include the ideal MHD equations in different geometries ...
• (1991)
The purpose of this thesis was to explore computationally the three dimensional hydrodynamics of extragalactic radio jets. This entailed putting together a large number of numerical tools before such simulations could be ...
• (1992)
Coating flows have many very important applications in engineering. The first part of this thesis deals with gravity-driven reacting coating flows down an inclined plane. When the reaction activation energy is large and ...
• (1989)
This thesis deals with the theory of the primary nucleation and subsequent expansion of the superfluid $^3$He-B phase in the metastable, hypercooled A phase. An overview of the theory of dynamical phenomena in Fermi ...
• (2000)
Boron particles ignited in Ar/F/O$_2$ mixtures show a rapid decrease by a factor of four in ignition and burning times as the mole fraction ratio $X_F/X_{O_2}$ is increased from 0 to 0.25. For values of $X_F/X_{O_2}$ greater than 0.5 ...
• (2007)
While the objective is to explore alternative damage mechanisms due to ultrasound, the work is not restricted as such. Indeed, the work is concerned with surface tension driven singularities at fluid interface in general. ...
• (2007)
A pulsed vacuum arc discharge emits a plasma as well as macroparticles in the form of micron-sized molten droplets of cathode material. Due to their direction of flight and submicron to 100 μm diameter, these macroparticles ...
• (1991)
In order to model liquid-metal flows in self-cooled liquid-lithium blankets for magnetic-confinement fusion reactors, liquid metal flows in rectangular ducts with thin conducting walls have been treated by combining ...
• (1991)
Low pressure optically triggered pseudosparks, or Back-Lit Thyratrons (BLT), are inherently multidimensional transient devices. The method employed to model electron transport in such devices is therefore problematic. A ...
• (1997)
Simulation of chemical lasers such as the chemical oxygen-iodine laser (COIL) is of timely interest due to the recent acceleration of the airborne laser military research program and ongoing commercial development programs. ...
• (1989)
A new vortex method for simulating two-dimensional buoyancy-driven flows is presented. This Lagrangian method utilizes a discrete representation of the known density field along with the vorticity transport equation and ...
• (1998)
The shock-induced separation process was found to dramatically increase the streamwise and transverse Reynolds normal stresses (which both peak near reattachment), the primary shear stress, and the normal stress anisotropy. ...
# 4.1 Floating-Point Decomposition
Floating-point arithmetic is based on the observation that every nonzero real number $x$ admits a representation of the form
$$x = \pm\, m \cdot 2^e,$$
where $e$ is an integer, called the exponent of $x$, and $m$ is a number in the range $1 \le m < 2$, called the significand of $x$. These components are defined as follows.

Definition 4.1.1 (sgn, expo, sig) Let $x \in \mathbb{Q}$. If $x \neq 0$, then $sgn(x) = x/|x|$,
$expo(x)$ is the unique integer that satisfies
$$2^{expo(x)} \le |x| < 2^{expo(x)+1},$$
and
$$sig(x) = |x| \cdot 2^{-expo(x)}.$$
If $x = 0$, then $sgn(x) = expo(x) = sig(x) = 0$.

The decomposition property is immediate.

Lemma 4.1.1 (fp-rep, fp-abs) For all $x \in \mathbb{Q}$, $x = sgn(x) \cdot sig(x) \cdot 2^{expo(x)}$ and $|x| = sig(x) \cdot 2^{expo(x)}$.

Lemma 4.1.2 (expo-minus, sig-minus) For all $x \in \mathbb{Q}$,
(a) $expo(-x) = expo(x)$;
(b) $sig(-x) = sig(x)$.

The definition of $expo$ may be restated as follows:

Lemma 4.1.3 (expo<=, expo>=) For all $x \neq 0$ and $n \in \mathbb{Z}$,
(a) If $|x| < 2^{n+1}$, then $expo(x) \le n$;
(b) If $|x| \ge 2^n$, then $expo(x) \ge n$.

PROOF:
(a) If $expo(x) > n$, then $n + 1 \le expo(x)$, which implies $2^{n+1} \le 2^{expo(x)} \le |x|$.
(b) If $expo(x) < n$, then $expo(x) + 1 \le n$, which implies $|x| < 2^{expo(x)+1} \le 2^n$.

Corollary 4.1.4 (expo-2**n) For all $n \in \mathbb{Z}$, $expo(2^n) = n$.

PROOF: Since $2^n \le 2^n < 2^{n+1}$, the lemma follows from Lemma 4.1.3(b).

The width of a bit vector is determined by its exponent.

(bvecp-expo) For all $x \in \mathbb{N}$, $x$ is a bit vector of width $expo(x) + 1$.

PROOF: This is just a restatement of the second inequality of Definition 4.1.1, $|x| < 2^{expo(x)+1}$.

PROOF: This is an instance of Lemma 1.2.16.

We have the following bounds on $sig(x)$: for all $x \neq 0$, $1 \le sig(x) < 2$.

PROOF: Definition 4.1.1 yields $2^{expo(x)} \le |x| < 2^{expo(x)+1}$; dividing through by $2^{expo(x)}$ gives $1 \le |x| \cdot 2^{-expo(x)} = sig(x) < 2$.

Corollary 4.1.9 (expo-sig) For all $x \neq 0$, $expo(sig(x)) = 0$.

PROOF: Since $1 \le sig(x) < 2$, $2^0 \le sig(x) < 2^{0+1}$, and hence $expo(sig(x)) = 0$ by Definition 4.1.1.

Corollary 4.1.11 (sig-sig) For all $x \neq 0$, $sig(sig(x)) = sig(x)$.

(fp-rep-unique) Let $x \in \mathbb{Q}$. If $x = \pm\, m \cdot 2^e$, where $m \in \mathbb{Q}$, $1 \le m < 2$, and $e \in \mathbb{Z}$, then $e = expo(x)$ and $m = sig(x)$.

PROOF: Since $x \neq 0$, $|x| = m \cdot 2^e$, where $2^e \le m \cdot 2^e < 2^{e+1}$. It follows from Definition 4.1.1 that $e = expo(x)$, and therefore $m = |x| \cdot 2^{-e} = sig(x)$.

Changing the sign of a number does not affect its exponent or significand.

Lemma 4.1.13 (sgn-minus, expo-minus, sig-minus) For all $x \in \mathbb{Q}$,
(a) $sgn(-x) = -sgn(x)$; (b) $expo(-x) = expo(x)$; (c) $sig(-x) = sig(x)$.

A shift does not affect the sign or significand.

(sgn-shift, expo-shift, sig-shift) If $x \neq 0$, $n \in \mathbb{Z}$, and $y = 2^n x$, then
(a) $sgn(y) = sgn(x)$; (b) $expo(y) = expo(x) + n$; (c) $sig(y) = sig(x)$.

PROOF:
(a) $sgn(y) = sgn(2^n x) = sgn(x)$, since $2^n > 0$.
(b) Since $2^{expo(x)} \le |x| < 2^{expo(x)+1}$, we have $2^{expo(x)+n} \le |y| < 2^{expo(x)+n+1}$, and hence $expo(y) = expo(x) + n$.
(c) $sig(y) = |y| \cdot 2^{-expo(y)} = 2^n |x| \cdot 2^{-(expo(x)+n)} = |x| \cdot 2^{-expo(x)} = sig(x)$.

We have the following formulas for the components of a product.

(sgn-prod, expo-prod, sig-prod) Let $x \neq 0$ and $y \neq 0$. If $p = xy$, then
(a) $sgn(p) = sgn(x) \cdot sgn(y)$;
(b) $expo(p) = expo(x) + expo(y)$ if $sig(x)sig(y) < 2$, and $expo(p) = expo(x) + expo(y) + 1$ otherwise;
(c) $sig(p) = sig(x)sig(y)$ if $sig(x)sig(y) < 2$, and $sig(p) = sig(x)sig(y)/2$ otherwise.

PROOF:
(a) $sgn(p) = sgn(xy) = sgn(x) \cdot sgn(y)$.
(b) Since $1 \le sig(x) < 2$ and $1 \le sig(y) < 2$, we have $1 \le sig(x)sig(y) < 4$. If $sig(x)sig(y) < 2$, then
$$2^{expo(x)+expo(y)} \le |p| = sig(x)sig(y) \cdot 2^{expo(x)+expo(y)} < 2^{expo(x)+expo(y)+1},$$
and by Definition 4.1.1, $expo(p) = expo(x) + expo(y)$.
On the other hand, if $sig(x)sig(y) \ge 2$, then similarly,
$$2^{expo(x)+expo(y)+1} \le |p| < 2^{expo(x)+expo(y)+2},$$
and Definition 4.1.1 yields $expo(p) = expo(x) + expo(y) + 1$.
(c) If $sig(x)sig(y) < 2$, then
$$sig(p) = |p| \cdot 2^{-expo(p)} = sig(x)sig(y).$$
Otherwise,
$$sig(p) = |p| \cdot 2^{-expo(p)} = sig(x)sig(y)/2.$$
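Definition 4.1.1 translates directly into code. The sketch below is not from the book (whose functions operate on exact rationals in ACL2); it is a minimal Python rendering using floats, with `math.frexp` supplying the binary exponent. Since `frexp` normalizes to $0.5 \le m < 1$ rather than the book's $1 \le m < 2$, the exponent is shifted by one.

```python
import math

def sgn(x):
    """Sign of x as in Definition 4.1.1: 0 if x == 0, else +1 or -1."""
    if x == 0:
        return 0
    return 1 if x > 0 else -1

def expo(x):
    """Unique integer e with 2**e <= |x| < 2**(e+1); 0 for x == 0."""
    if x == 0:
        return 0
    # math.frexp returns (m, e) with |x| = m * 2**e and 0.5 <= m < 1,
    # so the exponent under the [1, 2) normalization is e - 1.
    return math.frexp(abs(x))[1] - 1

def sig(x):
    """Significand |x| * 2**(-expo(x)), lying in [1, 2); 0 for x == 0."""
    if x == 0:
        return 0
    return abs(x) * 2.0 ** (-expo(x))

# Lemma 4.1.1 (fp-rep): x = sgn(x) * sig(x) * 2**expo(x), exactly in
# binary floating point.
x = -22.5
assert sgn(x) * sig(x) * 2 ** expo(x) == x
```

For example, $-22.5 = -1 \cdot 1.40625 \cdot 2^4$, so `sgn`, `expo`, and `sig` return $-1$, $4$, and $1.40625$ respectively, matching the decomposition of Lemma 4.1.1.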
David Russinoff 2017-08-01
System.Data.SQLite
Hex Artifact Content
## Artifact 32dbb19d1a153363b59b9965cf13bbcf1ffccf42:
• File Tests/common.eagle — part of check-in [7e3aa2f8bb] at 2012-08-24 10:00:52 on branch trunk — Add support for testing the sqlite3_win32_set_directory function. Also, add the ToFullPath connection string property. (user: mistachkin size: 36764)
```tcl
###############################################################################
#
# common.eagle --
#
# Written by Joe Mistachkin.
# Released to the public domain, use at your own risk!
#
###############################################################################

#
# NOTE: Use our own namespace here because even though we do not directly
#       support namespaces ourselves, we do not want to pollute the global
#       namespace if this script actually ends up being evaluated in Tcl.
#
namespace eval ::Eagle {
  if {[isEagle]} then {
    ###########################################################################
    ############################ BEGIN Eagle ONLY ############################
    ###########################################################################

    proc getBuildYear {} {
      #
      # NOTE: See if the "year" setting has been overridden by the user (e.g.
      #       on the command line).  This helps control exactly which set of
      #       binaries we are testing, those produced using the Visual Studio
      #       2005, 2008, or 2010 build systems.  To override this value via
      #       the command line, enter a command similar to one of the
      #       following (all on one line):
      #
      #       EagleShell.exe -preInitialize "set test_year 2005"
      #         -file .\path\to\all.eagle
      #
      #       EagleShell.exe -preInitialize "set test_year 2008"
      #         -file .\path\to\all.eagle
      #
      #       EagleShell.exe -preInitialize "set test_year 2010"
      #         -file .\path\to\all.eagle
      #
      #       EagleShell.exe -preInitialize "unset -nocomplain test_year"
      #         -file .\path\to\all.eagle
      #
      if {[info exists ::test_year] && [string length $::test_year] > 0} then {
        #
        # NOTE: Use the specified test year.  If this variable is not set, the
        #       default value will be based on whether or not Eagle has been
        #       compiled against the .NET Framework 2.0 or 4.0.
        #
        return $::test_year
      } else {
        #
        # NOTE: If Eagle has been compiled against the .NET Framework 4.0, use
        #       "2010" as the test year; otherwise, use "2008" (we could use
        #       "2005" in that case as well).  If another major [incompatible]
        #       version of the .NET Framework is released, this check will
        #       have to be changed.
        #
        return [expr {[haveConstraint imageRuntime40] ? "2010" : "2008"}]
      }
    }

    proc getBuildConfiguration {} {
      #
      # NOTE: See if the "configuration" setting has been overridden by the
      #       user (e.g. on the command line).  This helps control exactly
      #       which set of binaries we are testing (i.e. those built in the
      #       "Debug" or "Release" build configurations).  To override this
      #       value via the command line, enter a command similar to one of
      #       the following (all on one line):
      #
      #       EagleShell.exe -preInitialize "set test_configuration Debug"
      #         -file .\path\to\all.eagle
      #
      #       EagleShell.exe -preInitialize "set test_configuration Release"
      #         -file .\path\to\all.eagle
      #
      #       EagleShell.exe -file .\path\to\all.eagle -preTest
      #         "unset -nocomplain test_configuration"
      #
      if {[info exists ::test_configuration] && \
          [string length $::test_configuration] > 0} then {
        #
        # NOTE: Use the specified test configuration.  The default value used
        #       for this variable is "Release", as set by the test suite
        #       itself.
        #
        return $::test_configuration
      } else {
        #
        # NOTE: Normally, we will never hit this case because the value of the
        #       test configuration variable is always set by the test suite
        #       itself; however, it can be overridden using the unset command
        #       from the -preTest option to the test suite.
        #
        return $::eagle_platform(configuration)
      }
    }

    proc getBuildDirectory {} {
      #
      # NOTE: See if the "native" runtime option has been added.  If so, use
      #       the directory for the mixed-mode assembly (a.k.a. the native
      #       interop assembly).  To enable this option via the command line,
      #       enter a command similar to one of the following (all on one
      #       line):
      #
      #       EagleShell.exe -initialize -runtimeOption native
      #         -file .\path\to\all.eagle
      #
      #       To enable this option via the command line prior to the
      #       "beta 16" release of Eagle, the following command must be used
      #       instead (also all on one line):
      #
      #       EagleShell.exe -initialize -postInitialize
      #         "object invoke Interpreter.GetActive AddRuntimeOption native"
      #         -file .\path\to\all.eagle
      #
      if {[info exists ::build_directory] && \
          [string length $::build_directory] > 0} then {
        #
        # NOTE: The location of the build directory has been overridden;
        #       therefore, use it verbatim.
        #
        return $::build_directory
      } else {
        #
        # NOTE: Figure out the build base directory.  This will be the
        #       directory that contains the actual build output directory
        #       (e.g. "bin").
        #
        if {[info exists ::build_base_directory] && \
            [string length $::build_base_directory] > 0} then {
          #
          # NOTE: The location of the build base directory has been
          #       overridden; therefore, use it verbatim.
          #
          set path $::build_base_directory
        } elseif {[info exists ::common_directory] && \
            [string length $::common_directory] > 0} then {
          #
          # NOTE: Next, fallback to the parent directory of the one containing
          #       this file (i.e. "common.eagle
```
1800: 22 29 2c 20 69 66 20 61 76 61 69 6c 61 62 6c 65 "), if available
1810: 2e 0d 0a 20 20 20 20 20 20 20 20 20 20 23 0d 0a ... #..
1820: 20 20 20 20 20 20 20 20 20 20 73 65 74 20 70 61 set pa
1830: 74 68 20 5b 66 69 6c 65 20 64 69 72 6e 61 6d 65 th [file dirname
1840: 20 24 3a 3a 63 6f 6d 6d 6f 6e 5f 64 69 72 65 63 $::common_direc 1850: 74 6f 72 79 5d 0d 0a 20 20 20 20 20 20 20 20 7d tory].. } 1860: 20 65 6c 73 65 20 7b 0d 0a 20 20 20 20 20 20 20 else {.. 1870: 20 20 20 23 0d 0a 20 20 20 20 20 20 20 20 20 20 #.. 1880: 23 20 4e 4f 54 45 3a 20 46 69 6e 61 6c 6c 79 2c # NOTE: Finally, 1890: 20 66 61 6c 6c 62 61 63 6b 20 74 6f 20 74 68 65 fallback to the 18a0: 20 70 61 72 65 6e 74 20 64 69 72 65 63 74 6f 72 parent director 18b0: 79 20 6f 66 20 74 68 65 20 45 61 67 6c 65 54 65 y of the EagleTe 18c0: 73 74 0d 0a 20 20 20 20 20 20 20 20 20 20 23 20 st.. # 18d0: 20 20 20 20 20 20 70 61 74 68 2e 20 20 54 68 65 path. The 18e0: 20 45 61 67 6c 65 54 65 73 74 20 70 61 63 6b 61 EagleTest packa 18f0: 67 65 20 67 75 61 72 61 6e 74 65 65 73 20 74 68 ge guarantees th 1900: 61 74 20 74 68 69 73 20 76 61 72 69 61 62 6c 65 at this variable 1910: 0d 0a 20 20 20 20 20 20 20 20 20 20 23 20 20 20 .. # 1920: 20 20 20 20 77 69 6c 6c 20 62 65 20 73 65 74 20 will be set 1930: 74 6f 20 74 68 65 20 64 69 72 65 63 74 6f 72 79 to the directory 1940: 20 63 6f 6e 74 61 69 6e 69 6e 67 20 74 68 65 20 containing the 1950: 66 69 72 73 74 20 66 69 6c 65 20 74 6f 0d 0a 20 first file to.. 1960: 20 20 20 20 20 20 20 20 20 23 20 20 20 20 20 20 # 1970: 20 65 78 65 63 75 74 65 20 74 68 65 20 5b 72 75 execute the [ru 1980: 6e 54 65 73 74 50 72 6f 6c 6f 67 75 65 5d 20 73 nTestPrologue] s 1990: 63 72 69 70 74 20 6c 69 62 72 61 72 79 20 70 72 cript library pr 19a0: 6f 63 65 64 75 72 65 2e 0d 0a 20 20 20 20 20 20 ocedure... 19b0: 20 20 20 20 23 0d 0a 20 20 20 20 20 20 20 20 20 #.. 19c0: 20 73 65 74 20 70 61 74 68 20 5b 66 69 6c 65 20 set path [file 19d0: 64 69 72 6e 61 6d 65 20 24 3a 3a 70 61 74 68 5d dirname$::path]
19e0: 0d 0a 20 20 20 20 20 20 20 20 7d 0d 0a 0d 0a 20 .. }....
19f0: 20 20 20 20 20 20 20 69 66 20 7b 5b 68 61 73 52 if {[hasR
1a00: 75 6e 74 69 6d 65 4f 70 74 69 6f 6e 20 6e 61 74 untimeOption nat
1a10: 69 76 65 5d 7d 20 74 68 65 6e 20 7b 0d 0a 20 20 ive]} then {..
1a20: 20 20 20 20 20 20 20 20 72 65 74 75 72 6e 20 5b return [
1a30: 66 69 6c 65 20 6a 6f 69 6e 20 24 70 61 74 68 20 file join $path 1a40: 62 69 6e 20 5b 67 65 74 42 75 69 6c 64 59 65 61 bin [getBuildYea 1a50: 72 5d 20 5c 0d 0a 20 20 20 20 20 20 20 20 20 20 r] \.. 1a60: 20 20 20 20 5b 6d 61 63 68 69 6e 65 54 6f 50 6c [machineToPl 1a70: 61 74 66 6f 72 6d 20 24 3a 3a 74 63 6c 5f 70 6c atform$::tcl_pl
1a80: 61 74 66 6f 72 6d 28 6d 61 63 68 69 6e 65 29 5d atform(machine)]
1a90: 20 5c 0d 0a 20 20 20 20 20 20 20 20 20 20 20 20 \..
1aa0: 20 20 5b 67 65 74 42 75 69 6c 64 43 6f 6e 66 69 [getBuildConfi
1ab0: 67 75 72 61 74 69 6f 6e 5d 5d 0d 0a 20 20 20 20 guration]]..
1ac0: 20 20 20 20 7d 20 65 6c 73 65 20 7b 0d 0a 20 20 } else {..
1ad0: 20 20 20 20 20 20 20 20 72 65 74 75 72 6e 20 5b return [
1ae0: 66 69 6c 65 20 6a 6f 69 6e 20 24 70 61 74 68 20 file join $path 1af0: 62 69 6e 20 5b 67 65 74 42 75 69 6c 64 59 65 61 bin [getBuildYea 1b00: 72 5d 20 5c 0d 0a 20 20 20 20 20 20 20 20 20 20 r] \.. 1b10: 20 20 20 20 5b 67 65 74 42 75 69 6c 64 43 6f 6e [getBuildCon 1b20: 66 69 67 75 72 61 74 69 6f 6e 5d 20 62 69 6e 5d figuration] bin] 1b30: 0d 0a 20 20 20 20 20 20 20 20 7d 0d 0a 20 20 20 .. }.. 1b40: 20 20 20 7d 0d 0a 20 20 20 20 7d 0d 0a 0c 0d 0a }.. }..... 1b50: 20 20 20 20 70 72 6f 63 20 67 65 74 42 75 69 6c proc getBuil 1b60: 64 46 69 6c 65 4e 61 6d 65 20 7b 20 66 69 6c 65 dFileName { file 1b70: 4e 61 6d 65 20 7d 20 7b 0d 0a 20 20 20 20 20 20 Name } {.. 1b80: 23 0d 0a 20 20 20 20 20 20 23 20 4e 4f 54 45 3a #.. # NOTE: 1b90: 20 52 65 74 75 72 6e 73 20 74 68 65 20 73 70 65 Returns the spe 1ba0: 63 69 66 69 65 64 20 66 69 6c 65 20 6e 61 6d 65 cified file name 1bb0: 20 61 73 20 69 66 20 69 74 20 77 65 72 65 20 6c as if it were l 1bc0: 6f 63 61 74 65 64 20 69 6e 20 74 68 65 0d 0a 20 ocated in the.. 1bd0: 20 20 20 20 20 23 20 20 20 20 20 20 20 62 75 69 # bui 1be0: 6c 64 20 64 69 72 65 63 74 6f 72 79 2c 20 64 69 ld directory, di 1bf0: 73 63 61 72 64 69 6e 67 20 61 6e 79 20 64 69 72 scarding any dir 1c00: 65 63 74 6f 72 79 20 69 6e 66 6f 72 6d 61 74 69 ectory informati 1c10: 6f 6e 20 70 72 65 73 65 6e 74 0d 0a 20 20 20 20 on present.. 1c20: 20 20 23 20 20 20 20 20 20 20 69 6e 20 74 68 65 # in the 1c30: 20 66 69 6c 65 20 6e 61 6d 65 20 61 73 20 70 72 file name as pr 1c40: 6f 76 69 64 65 64 20 62 79 20 74 68 65 20 63 61 ovided by the ca 1c50: 6c 6c 65 72 2e 0d 0a 20 20 20 20 20 20 23 0d 0a ller... #.. 1c60: 20 20 20 20 20 20 72 65 74 75 72 6e 20 5b 66 69 return [fi 1c70: 6c 65 20 6e 61 74 69 76 65 6e 61 6d 65 20 5c 0d le nativename \. 1c80: 0a 20 20 20 20 20 20 20 20 20 20 5b 66 69 6c 65 . 
[file 1c90: 20 6a 6f 69 6e 20 5b 67 65 74 42 75 69 6c 64 44 join [getBuildD 1ca0: 69 72 65 63 74 6f 72 79 5d 20 5b 66 69 6c 65 20 irectory] [file 1cb0: 74 61 69 6c 20 24 66 69 6c 65 4e 61 6d 65 5d 5d tail$fileName]]
1cc0: 5d 0d 0a 20 20 20 20 7d 0d 0a 0c 0d 0a 20 20 20 ].. }.....
1cd0: 20 70 72 6f 63 20 67 65 74 42 69 6e 61 72 79 44 proc getBinaryD
1ce0: 69 72 65 63 74 6f 72 79 20 7b 7d 20 7b 0d 0a 20 irectory {} {..
1cf0: 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 23 20 #.. #
1d00: 4e 4f 54 45 3a 20 54 68 69 73 20 70 72 6f 63 65 NOTE: This proce
1d10: 64 75 72 65 20 72 65 74 75 72 6e 73 20 74 68 65 dure returns the
1d20: 20 64 69 72 65 63 74 6f 72 79 20 77 68 65 72 65 directory where
1d30: 20 74 68 65 20 74 65 73 74 20 61 70 70 6c 69 63 the test applic
1d40: 61 74 69 6f 6e 0d 0a 20 20 20 20 20 20 23 20 20 ation.. #
1d50: 20 20 20 20 20 69 74 73 65 6c 66 20 28 69 2e 65 itself (i.e
1d60: 2e 20 74 68 65 20 45 61 67 6c 65 20 73 68 65 6c . the Eagle shel
1d70: 6c 29 20 69 73 20 6c 6f 63 61 74 65 64 2e 20 20 l) is located.
1d80: 54 68 69 73 20 77 69 6c 6c 20 62 65 20 75 73 65 This will be use
1d90: 64 20 61 73 0d 0a 20 20 20 20 20 20 23 20 20 20 d as.. #
1da0: 20 20 20 20 74 68 65 20 64 65 73 74 69 6e 61 74 the destinat
1db0: 69 6f 6e 20 66 6f 72 20 74 68 65 20 63 6f 70 69 ion for the copi
1dc0: 65 64 20 53 79 73 74 65 6d 2e 44 61 74 61 2e 53 ed System.Data.S
1dd0: 51 4c 69 74 65 20 6e 61 74 69 76 65 20 61 6e 64 QLite native and
1de0: 0d 0a 20 20 20 20 20 20 23 20 20 20 20 20 20 20 .. #
1df0: 6d 61 6e 61 67 65 64 20 61 73 73 65 6d 62 6c 69 managed assembli
1e00: 65 73 20 28 69 2e 65 2e 20 62 65 63 61 75 73 65 es (i.e. because
1e10: 20 74 68 69 73 20 69 73 20 6f 6e 65 20 6f 66 20 this is one of
1e20: 74 68 65 20 66 65 77 20 70 6c 61 63 65 73 0d 0a the few places..
1e30: 20 20 20 20 20 20 23 20 20 20 20 20 20 20 77 68 # wh
1e40: 65 72 65 20 74 68 65 20 43 4c 52 20 77 69 6c 6c ere the CLR will
1e50: 20 61 63 74 75 61 6c 6c 79 20 66 69 6e 64 20 61 actually find a
1e60: 6e 64 20 6c 6f 61 64 20 74 68 65 6d 20 70 72 6f nd load them pro
1e70: 70 65 72 6c 79 29 2e 0d 0a 20 20 20 20 20 20 23 perly)... #
1e80: 0d 0a 20 20 20 20 20 20 69 66 20 7b 5b 69 6e 66 .. if {[inf
1e90: 6f 20 65 78 69 73 74 73 20 3a 3a 62 69 6e 61 72 o exists ::binar
1ea0: 79 5f 64 69 72 65 63 74 6f 72 79 5d 20 26 26 20 y_directory] &&
1eb0: 5c 0d 0a 20 20 20 20 20 20 20 20 20 20 5b 73 74 \.. [st
1ec0: 72 69 6e 67 20 6c 65 6e 67 74 68 20 24 3a 3a 62 ring length $::b 1ed0: 69 6e 61 72 79 5f 64 69 72 65 63 74 6f 72 79 5d inary_directory] 1ee0: 20 3e 20 30 7d 20 74 68 65 6e 20 7b 0d 0a 20 20 > 0} then {.. 1ef0: 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 20 #.. 1f00: 20 23 20 4e 4f 54 45 3a 20 54 68 65 20 6c 6f 63 # NOTE: The loc 1f10: 61 74 69 6f 6e 20 6f 66 20 74 68 65 20 62 69 6e ation of the bin 1f20: 61 72 79 20 64 69 72 65 63 74 6f 72 79 20 68 61 ary directory ha 1f30: 73 20 62 65 65 6e 20 6f 76 65 72 72 69 64 64 65 s been overridde 1f40: 6e 3b 0d 0a 20 20 20 20 20 20 20 20 23 20 20 20 n;.. # 1f50: 20 20 20 20 74 68 65 72 65 66 6f 72 65 2c 20 75 therefore, u 1f60: 73 65 20 69 74 20 76 65 72 62 61 74 69 6d 2e 0d se it verbatim.. 1f70: 0a 20 20 20 20 20 20 20 20 23 0d 0a 20 20 20 20 . #.. 1f80: 20 20 20 20 72 65 74 75 72 6e 20 24 3a 3a 62 69 return$::bi
1f90: 6e 61 72 79 5f 64 69 72 65 63 74 6f 72 79 0d 0a nary_directory..
1fa0: 20 20 20 20 20 20 7d 20 65 6c 73 65 20 7b 0d 0a } else {..
1fb0: 20 20 20 20 20 20 20 20 72 65 74 75 72 6e 20 5b return [
1fc0: 69 6e 66 6f 20 62 69 6e 61 72 79 5d 0d 0a 20 20 info binary]..
1fd0: 20 20 20 20 7d 0d 0a 20 20 20 20 7d 0d 0a 0c 0d }.. }....
1fe0: 0a 20 20 20 20 70 72 6f 63 20 67 65 74 42 69 6e . proc getBin
1ff0: 61 72 79 46 69 6c 65 4e 61 6d 65 20 7b 20 66 69 aryFileName { fi
2000: 6c 65 4e 61 6d 65 20 7d 20 7b 0d 0a 20 20 20 20 leName } {..
2010: 20 20 23 0d 0a 20 20 20 20 20 20 23 20 4e 4f 54 #.. # NOT
2020: 45 3a 20 52 65 74 75 72 6e 73 20 74 68 65 20 73 E: Returns the s
2030: 70 65 63 69 66 69 65 64 20 66 69 6c 65 20 6e 61 pecified file na
2040: 6d 65 20 61 73 20 69 66 20 69 74 20 77 65 72 65 me as if it were
2050: 20 6c 6f 63 61 74 65 64 20 69 6e 20 74 68 65 0d located in the.
2060: 0a 20 20 20 20 20 20 23 20 20 20 20 20 20 20 62 . # b
2070: 69 6e 61 72 79 20 64 69 72 65 63 74 6f 72 79 2c inary directory,
2080: 20 64 69 73 63 61 72 64 69 6e 67 20 61 6e 79 20 discarding any
2090: 64 69 72 65 63 74 6f 72 79 20 69 6e 66 6f 72 6d directory inform
20a0: 61 74 69 6f 6e 20 70 72 65 73 65 6e 74 0d 0a 20 ation present..
20b0: 20 20 20 20 20 23 20 20 20 20 20 20 20 69 6e 20 # in
20c0: 74 68 65 20 66 69 6c 65 20 6e 61 6d 65 20 61 73 the file name as
20d0: 20 70 72 6f 76 69 64 65 64 20 62 79 20 74 68 65 provided by the
20e0: 20 63 61 6c 6c 65 72 2e 0d 0a 20 20 20 20 20 20 caller...
20f0: 23 0d 0a 20 20 20 20 20 20 72 65 74 75 72 6e 20 #.. return
2100: 5b 66 69 6c 65 20 6e 61 74 69 76 65 6e 61 6d 65 [file nativename
2110: 20 5c 0d 0a 20 20 20 20 20 20 20 20 20 20 5b 66 \.. [f
2120: 69 6c 65 20 6a 6f 69 6e 20 5b 67 65 74 42 69 6e ile join [getBin
2130: 61 72 79 44 69 72 65 63 74 6f 72 79 5d 20 5b 66 aryDirectory] [f
2140: 69 6c 65 20 74 61 69 6c 20 24 66 69 6c 65 4e 61 ile tail $fileNa 2150: 6d 65 5d 5d 5d 0d 0a 20 20 20 20 7d 0d 0a 0c 0d me]]].. }.... 2160: 0a 20 20 20 20 70 72 6f 63 20 67 65 74 44 61 74 . proc getDat 2170: 61 62 61 73 65 44 69 72 65 63 74 6f 72 79 20 7b abaseDirectory { 2180: 7d 20 7b 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 } {.. #.. 2190: 20 20 20 20 23 20 4e 4f 54 45 3a 20 54 68 69 73 # NOTE: This 21a0: 20 70 72 6f 63 65 64 75 72 65 20 72 65 74 75 72 procedure retur 21b0: 6e 73 20 74 68 65 20 64 69 72 65 63 74 6f 72 79 ns the directory 21c0: 20 77 68 65 72 65 20 74 68 65 20 74 65 73 74 20 where the test 21d0: 64 61 74 61 62 61 73 65 73 0d 0a 20 20 20 20 20 databases.. 21e0: 20 23 20 20 20 20 20 20 20 73 68 6f 75 6c 64 20 # should 21f0: 62 65 20 6c 6f 63 61 74 65 64 2e 20 20 42 79 20 be located. By 2200: 64 65 66 61 75 6c 74 2c 20 74 68 69 73 20 6a 75 default, this ju 2210: 73 74 20 75 73 65 73 20 74 68 65 20 74 65 6d 70 st uses the temp 2220: 6f 72 61 72 79 0d 0a 20 20 20 20 20 20 23 20 20 orary.. # 2230: 20 20 20 20 20 64 69 72 65 63 74 6f 72 79 20 63 directory c 2240: 6f 6e 66 69 67 75 72 65 64 20 66 6f 72 20 74 68 onfigured for th 2250: 69 73 20 73 79 73 74 65 6d 2e 0d 0a 20 20 20 20 is system... 2260: 20 20 23 0d 0a 20 20 20 20 20 20 69 66 20 7b 5b #.. if {[ 2270: 69 6e 66 6f 20 65 78 69 73 74 73 20 3a 3a 64 61 info exists ::da 2280: 74 61 62 61 73 65 5f 64 69 72 65 63 74 6f 72 79 tabase_directory 2290: 5d 20 26 26 20 5c 0d 0a 20 20 20 20 20 20 20 20 ] && \.. 22a0: 20 20 5b 73 74 72 69 6e 67 20 6c 65 6e 67 74 68 [string length 22b0: 20 24 3a 3a 64 61 74 61 62 61 73 65 5f 64 69 72$::database_dir
22c0: 65 63 74 6f 72 79 5d 20 3e 20 30 7d 20 74 68 65 ectory] > 0} the
22d0: 6e 20 7b 0d 0a 20 20 20 20 20 20 20 20 23 0d 0a n {.. #..
22e0: 20 20 20 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 # NOTE:
22f0: 54 68 65 20 6c 6f 63 61 74 69 6f 6e 20 6f 66 20 The location of
2300: 74 68 65 20 64 61 74 61 62 61 73 65 20 64 69 72 the database dir
2310: 65 63 74 6f 72 79 20 68 61 73 20 62 65 65 6e 20 ectory has been
2320: 6f 76 65 72 72 69 64 64 65 6e 3b 0d 0a 20 20 20 overridden;..
2330: 20 20 20 20 20 23 20 20 20 20 20 20 20 74 68 65 # the
2340: 72 65 66 6f 72 65 2c 20 75 73 65 20 69 74 2e 0d refore, use it..
2350: 0a 20 20 20 20 20 20 20 20 23 0d 0a 20 20 20 20 . #..
2360: 20 20 20 20 72 65 74 75 72 6e 20 5b 66 69 6c 65 return [file
2370: 20 6e 6f 72 6d 61 6c 69 7a 65 20 24 3a 3a 64 61 normalize $::da 2380: 74 61 62 61 73 65 5f 64 69 72 65 63 74 6f 72 79 tabase_directory 2390: 5d 0d 0a 20 20 20 20 20 20 7d 20 65 6c 73 65 20 ].. } else 23a0: 7b 0d 0a 20 20 20 20 20 20 20 20 72 65 74 75 72 {.. retur 23b0: 6e 20 5b 67 65 74 54 65 6d 70 6f 72 61 72 79 50 n [getTemporaryP 23c0: 61 74 68 5d 0d 0a 20 20 20 20 20 20 7d 0d 0a 20 ath].. }.. 23d0: 20 20 20 7d 0d 0a 0c 0d 0a 20 20 20 20 70 72 6f }..... pro 23e0: 63 20 67 65 74 41 70 70 44 6f 6d 61 69 6e 50 72 c getAppDomainPr 23f0: 65 61 6d 62 6c 65 20 7b 20 7b 70 72 65 66 69 78 eamble { {prefix 2400: 20 22 22 7d 20 7b 73 75 66 66 69 78 20 22 22 7d ""} {suffix ""} 2410: 20 7d 20 7b 0d 0a 20 20 20 20 20 20 23 0d 0a 20 } {.. #.. 2420: 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 54 68 69 # NOTE: Thi 2430: 73 20 70 72 6f 63 65 64 75 72 65 20 72 65 74 75 s procedure retu 2440: 72 6e 73 20 61 20 74 65 73 74 20 73 65 74 75 70 rns a test setup 2450: 20 73 63 72 69 70 74 20 73 75 69 74 61 62 6c 65 script suitable 2460: 20 66 6f 72 20 65 76 61 6c 75 61 74 69 6f 6e 0d for evaluation. 2470: 0a 20 20 20 20 20 20 23 20 20 20 20 20 20 20 62 . # b 2480: 79 20 61 20 74 65 73 74 20 69 6e 74 65 72 70 72 y a test interpr 2490: 65 74 65 72 20 63 72 65 61 74 65 64 20 69 6e 20 eter created in 24a0: 61 6e 20 69 73 6f 6c 61 74 65 64 20 61 70 70 6c an isolated appl 24b0: 69 63 61 74 69 6f 6e 20 64 6f 6d 61 69 6e 2e 0d ication domain.. 24c0: 0a 20 20 20 20 20 20 23 20 20 20 20 20 20 20 54 . # T 24d0: 68 65 20 73 63 72 69 70 74 20 62 65 69 6e 67 20 he script being 24e0: 72 65 74 75 72 6e 65 64 20 77 69 6c 6c 20 62 65 returned will be 24f0: 20 73 75 72 72 6f 75 6e 64 65 64 20 62 79 20 74 surrounded by t 2500: 68 65 20 70 72 65 66 69 78 20 61 6e 64 0d 0a 20 he prefix and.. 
2510: 20 20 20 20 20 23 20 20 20 20 20 20 20 73 75 66 # suf 2520: 66 69 78 20 22 73 63 72 69 70 74 20 66 72 61 67 fix "script frag 2530: 6d 65 6e 74 73 22 20 73 70 65 63 69 66 69 65 64 ments" specified 2540: 20 62 79 20 74 68 65 20 63 61 6c 6c 65 72 2c 20 by the caller, 2550: 69 66 20 61 6e 79 2e 20 20 54 68 65 0d 0a 20 20 if any. The.. 2560: 20 20 20 20 23 20 20 20 20 20 20 20 65 6e 74 69 # enti 2570: 72 65 20 73 63 72 69 70 74 20 62 65 69 6e 67 20 re script being 2580: 72 65 74 75 72 6e 65 64 20 77 69 6c 6c 20 62 65 returned will be 2590: 20 73 75 62 73 74 69 74 75 74 65 64 20 76 69 61 substituted via 25a0: 20 5b 73 75 62 73 74 5d 2c 20 69 6e 0d 0a 20 20 [subst], in.. 25b0: 20 20 20 20 23 20 20 20 20 20 20 20 74 68 65 20 # the 25c0: 63 6f 6e 74 65 78 74 20 6f 66 20 74 68 65 20 63 context of the c 25d0: 61 6c 6c 65 72 2e 20 20 54 68 69 73 20 73 74 65 aller. This ste 25e0: 70 20 69 73 20 6e 65 63 65 73 73 61 72 79 20 73 p is necessary s 25f0: 6f 20 74 68 61 74 20 73 6f 6d 65 0d 0a 20 20 20 o that some.. 2600: 20 20 20 23 20 20 20 20 20 20 20 6c 69 6d 69 74 # limit 2610: 65 64 20 63 6f 6e 74 65 78 74 20 69 6e 66 6f 72 ed context infor 2620: 6d 61 74 69 6f 6e 2c 20 70 72 69 6d 61 72 69 6c mation, primaril 2630: 79 20 72 65 6c 61 74 65 64 20 74 6f 20 74 68 65 y related to the 2640: 20 74 65 73 74 20 62 75 69 6c 64 0d 0a 20 20 20 test build.. 2650: 20 20 20 23 20 20 20 20 20 20 20 64 69 72 65 63 # direc 2660: 74 6f 72 79 2c 20 63 61 6e 20 62 65 20 74 72 61 tory, can be tra 2670: 6e 73 66 65 72 72 65 64 20 74 6f 20 74 68 65 20 nsferred to the 2680: 69 6e 74 65 72 70 72 65 74 65 72 20 69 6e 20 74 interpreter in t 2690: 68 65 20 69 73 6f 6c 61 74 65 64 0d 0a 20 20 20 he isolated.. 
26a0: 20 20 20 23 20 20 20 20 20 20 20 61 70 70 6c 69 # appli 26b0: 63 61 74 69 6f 6e 20 64 6f 6d 61 69 6e 2c 20 6d cation domain, m 26c0: 61 6b 69 6e 67 20 69 74 20 61 62 6c 65 20 74 6f aking it able to 26d0: 20 73 75 63 63 65 73 73 66 75 6c 6c 79 20 72 75 successfully ru 26e0: 6e 20 74 65 73 74 73 20 74 68 61 74 0d 0a 20 20 n tests that.. 26f0: 20 20 20 20 23 20 20 20 20 20 20 20 72 65 71 75 # requ 2700: 69 72 65 20 6f 6e 65 20 6f 72 20 6d 6f 72 65 20 ire one or more 2710: 6f 66 20 74 68 65 20 66 69 6c 65 73 20 69 6e 20 of the files in 2720: 74 68 65 20 62 75 69 6c 64 20 64 69 72 65 63 74 the build direct 2730: 6f 72 79 2e 20 20 43 61 6c 6c 65 72 73 0d 0a 20 ory. Callers.. 2740: 20 20 20 20 20 23 20 20 20 20 20 20 20 74 6f 20 # to 2750: 74 68 69 73 20 70 72 6f 63 65 64 75 72 65 20 73 this procedure s 2760: 68 6f 75 6c 64 20 6b 65 65 70 20 69 6e 20 6d 69 hould keep in mi 2770: 6e 64 20 74 68 61 74 20 74 68 65 20 74 65 73 74 nd that the test 2780: 20 73 63 72 69 70 74 20 62 65 69 6e 67 0d 0a 20 script being.. 2790: 20 20 20 20 20 23 20 20 20 20 20 20 20 72 65 74 # ret 27a0: 75 72 6e 65 64 20 63 61 6e 6e 6f 74 20 6f 6e 6c urned cannot onl 27b0: 79 20 72 65 6c 79 20 6f 6e 20 61 6e 79 20 73 63 y rely on any sc 27c0: 72 69 70 74 20 6c 69 62 72 61 72 79 20 70 72 6f ript library pro 27d0: 63 65 64 75 72 65 73 20 6e 6f 74 0d 0a 20 20 20 cedures not.. 27e0: 20 20 20 23 20 20 20 20 20 20 20 69 6e 63 6c 75 # inclu 27f0: 64 65 64 20 69 6e 20 74 68 65 20 45 61 67 6c 65 ded in the Eagle 2800: 2e 4c 69 62 72 61 72 79 20 70 61 63 6b 61 67 65 .Library package 2810: 20 28 69 2e 65 2e 20 22 69 6e 69 74 2e 65 61 67 (i.e. "init.eag 2820: 6c 65 22 29 2e 20 20 41 6c 73 6f 2c 0d 0a 20 20 le"). Also,.. 
2830: 20 20 20 20 23 20 20 20 20 20 20 20 61 6c 6c 20 # all 2840: 76 61 72 69 61 62 6c 65 20 72 65 66 65 72 65 6e variable referen 2850: 63 65 73 20 61 6e 64 20 61 6c 6c 20 22 6e 65 73 ces and all "nes 2860: 74 65 64 22 20 63 6f 6d 6d 61 6e 64 73 20 28 69 ted" commands (i 2870: 2e 65 2e 20 74 68 6f 73 65 20 69 6e 0d 0a 20 20 .e. those in.. 2880: 20 20 20 20 23 20 20 20 20 20 20 20 73 71 75 61 # squa 2890: 72 65 20 62 72 61 63 6b 65 74 73 29 2c 20 75 6e re brackets), un 28a0: 6c 65 73 73 20 74 68 65 79 20 61 72 65 20 73 70 less they are sp 28b0: 65 63 69 61 6c 6c 79 20 71 75 6f 74 65 64 2c 20 ecially quoted, 28c0: 77 69 6c 6c 20 65 6e 64 20 75 70 0d 0a 20 20 20 will end up.. 28d0: 20 20 20 23 20 20 20 20 20 20 20 62 65 69 6e 67 # being 28e0: 20 65 76 61 6c 75 61 74 65 64 20 69 6e 20 74 68 evaluated in th 28f0: 65 20 63 6f 6e 74 65 78 74 20 6f 66 20 74 68 65 e context of the 2900: 20 63 61 6c 6c 69 6e 67 20 69 6e 74 65 72 70 72 calling interpr 2910: 65 74 65 72 20 61 6e 64 20 6e 6f 74 0d 0a 20 20 eter and not.. 2920: 20 20 20 20 23 20 20 20 20 20 20 20 74 68 65 20 # the 2930: 74 65 73 74 20 69 6e 74 65 72 70 72 65 74 65 72 test interpreter 2940: 20 63 72 65 61 74 65 64 20 69 6e 20 74 68 65 20 created in the 2950: 69 73 6f 6c 61 74 65 64 20 61 70 70 6c 69 63 61 isolated applica 2960: 74 69 6f 6e 20 64 6f 6d 61 69 6e 2e 0d 0a 20 20 tion domain... 2970: 20 20 20 20 23 0d 0a 20 20 20 20 20 20 72 65 74 #.. ret 2980: 75 72 6e 20 5b 75 70 6c 65 76 65 6c 20 31 20 5b urn [uplevel 1 [ 2990: 6c 69 73 74 20 73 75 62 73 74 20 5b 61 70 70 65 list subst [appe 29a0: 6e 64 41 72 67 73 20 24 70 72 65 66 69 78 20 7b ndArgs$prefix {
29b0: 0d 0a 20 20 20 20 20 20 20 20 69 66 20 7b 5b 68 .. if {[h
29c0: 61 73 52 75 6e 74 69 6d 65 4f 70 74 69 6f 6e 20 asRuntimeOption
29d0: 6e 61 74 69 76 65 5d 7d 20 74 68 65 6e 20 7b 0d native]} then {.
29e0: 0a 20 20 20 20 20 20 20 20 20 20 6f 62 6a 65 63 . objec
29f0: 74 20 69 6e 76 6f 6b 65 20 49 6e 74 65 72 70 72 t invoke Interpr
2a00: 65 74 65 72 2e 47 65 74 41 63 74 69 76 65 20 41 eter.GetActive A
2a10: 64 64 52 75 6e 74 69 6d 65 4f 70 74 69 6f 6e 20 ddRuntimeOption
2a20: 6e 61 74 69 76 65 0d 0a 20 20 20 20 20 20 20 20 native..
2a30: 7d 0d 0a 0d 0a 20 20 20 20 20 20 20 20 73 65 74 }.... set
2a40: 20 3a 3a 70 61 74 68 20 7b 24 3a 3a 70 61 74 68 ::path {$::path 2a50: 7d 0d 0a 20 20 20 20 20 20 20 20 73 65 74 20 3a }.. set : 2a60: 3a 74 65 73 74 5f 79 65 61 72 20 7b 5b 67 65 74 :test_year {[get 2a70: 42 75 69 6c 64 59 65 61 72 5d 7d 0d 0a 20 20 20 BuildYear]}.. 2a80: 20 20 20 20 20 73 65 74 20 3a 3a 74 65 73 74 5f set ::test_ 2a90: 63 6f 6e 66 69 67 75 72 61 74 69 6f 6e 20 7b 5b configuration {[ 2aa0: 67 65 74 42 75 69 6c 64 43 6f 6e 66 69 67 75 72 getBuildConfigur 2ab0: 61 74 69 6f 6e 5d 7d 0d 0a 20 20 20 20 20 20 7d ation]}.. } 2ac0: 20 24 73 75 66 66 69 78 5d 5d 5d 0d 0a 20 20 20$suffix]]]..
2ad0: 20 7d 0d 0a 0c 0d 0a 20 20 20 20 70 72 6f 63 20 }..... proc
2ae0: 74 72 79 43 6f 70 79 42 75 69 6c 64 46 69 6c 65 tryCopyBuildFile
2af0: 20 7b 20 66 69 6c 65 4e 61 6d 65 20 7d 20 7b 0d { fileName } {.
2b00: 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 . #..
2b10: 23 20 4e 4f 54 45 3a 20 49 66 20 77 65 20 63 61 # NOTE: If we ca
2b20: 6e 6e 6f 74 20 63 6f 70 79 20 74 68 65 20 61 73 nnot copy the as
2b30: 73 65 6d 62 6c 79 20 74 68 65 6e 20 69 74 20 69 sembly then it i
2b40: 73 20 70 72 6f 62 61 62 6c 79 20 61 6c 72 65 61 s probably alrea
2b50: 64 79 20 6c 6f 61 64 65 64 2e 0d 0a 20 20 20 20 dy loaded...
2b60: 20 20 23 0d 0a 20 20 20 20 20 20 73 65 74 20 73 #.. set s
2b70: 6f 75 72 63 65 46 69 6c 65 4e 61 6d 65 20 5b 67 ourceFileName [g
2b80: 65 74 42 75 69 6c 64 46 69 6c 65 4e 61 6d 65 20 etBuildFileName
2b90: 24 66 69 6c 65 4e 61 6d 65 5d 0d 0a 0d 0a 20 20 $fileName].... 2ba0: 20 20 20 20 69 66 20 7b 21 5b 66 69 6c 65 20 65 if {![file e 2bb0: 78 69 73 74 73 20 24 73 6f 75 72 63 65 46 69 6c xists$sourceFil
2bc0: 65 4e 61 6d 65 5d 7d 20 74 68 65 6e 20 7b 0d 0a eName]} then {..
2bd0: 20 20 20 20 20 20 20 20 74 70 75 74 73 20 24 3a tputs $: 2be0: 3a 74 65 73 74 5f 63 68 61 6e 6e 65 6c 20 5b 61 :test_channel [a 2bf0: 70 70 65 6e 64 41 72 67 73 20 5c 0d 0a 20 20 20 ppendArgs \.. 2c00: 20 20 20 20 20 20 20 20 20 22 2d 2d 2d 2d 20 73 "---- s 2c10: 6b 69 70 70 65 64 20 63 6f 70 79 69 6e 67 20 62 kipped copying b 2c20: 75 69 6c 64 20 66 69 6c 65 20 5c 22 22 20 24 73 uild file \""$s
2c30: 6f 75 72 63 65 46 69 6c 65 4e 61 6d 65 20 5c 0d ourceFileName \.
2c40: 0a 20 20 20 20 20 20 20 20 20 20 20 20 22 5c 22 . "\"
2c50: 2c 20 69 74 20 64 6f 65 73 20 6e 6f 74 20 65 78 , it does not ex
2c60: 69 73 74 5c 6e 22 5d 0d 0a 0d 0a 20 20 20 20 20 ist\n"]....
2c70: 20 20 20 72 65 74 75 72 6e 0d 0a 20 20 20 20 20 return..
2c80: 20 7d 0d 0a 0d 0a 20 20 20 20 20 20 73 65 74 20 }.... set
2c90: 74 61 72 67 65 74 46 69 6c 65 4e 61 6d 65 20 5b targetFileName [
2ca0: 67 65 74 42 69 6e 61 72 79 46 69 6c 65 4e 61 6d getBinaryFileNam
2cb0: 65 20 24 66 69 6c 65 4e 61 6d 65 5d 0d 0a 0d 0a e $fileName].... 2cc0: 20 20 20 20 20 20 69 66 20 7b 5b 63 61 74 63 68 if {[catch 2cd0: 20 7b 0d 0a 20 20 20 20 20 20 20 20 20 20 66 69 {.. fi 2ce0: 6c 65 20 63 6f 70 79 20 2d 66 6f 72 63 65 20 24 le copy -force$
2cf0: 73 6f 75 72 63 65 46 69 6c 65 4e 61 6d 65 20 24 sourceFileName $2d00: 74 61 72 67 65 74 46 69 6c 65 4e 61 6d 65 7d 5d targetFileName}] 2d10: 20 3d 3d 20 30 7d 20 74 68 65 6e 20 7b 0d 0a 20 == 0} then {.. 2d20: 20 20 20 20 20 20 20 74 70 75 74 73 20 24 3a 3a tputs$::
2d30: 74 65 73 74 5f 63 68 61 6e 6e 65 6c 20 5b 61 70 test_channel [ap
2d40: 70 65 6e 64 41 72 67 73 20 5c 0d 0a 20 20 20 20 pendArgs \..
2d50: 20 20 20 20 20 20 20 20 22 2d 2d 2d 2d 20 63 6f "---- co
2d60: 70 69 65 64 20 62 75 69 6c 64 20 66 69 6c 65 20 pied build file
2d70: 66 72 6f 6d 20 5c 22 22 20 24 73 6f 75 72 63 65 from \"" $source 2d80: 46 69 6c 65 4e 61 6d 65 20 22 5c 22 20 74 6f 20 FileName "\" to 2d90: 5c 22 22 20 5c 0d 0a 20 20 20 20 20 20 20 20 20 \"" \.. 2da0: 20 20 20 24 74 61 72 67 65 74 46 69 6c 65 4e 61$targetFileNa
2db0: 6d 65 20 5c 22 5c 6e 5d 0d 0a 20 20 20 20 20 20 me \"\n]..
2dc0: 7d 20 65 6c 73 65 20 7b 0d 0a 20 20 20 20 20 20 } else {..
2dd0: 20 20 74 70 75 74 73 20 24 3a 3a 74 65 73 74 5f tputs $::test_ 2de0: 63 68 61 6e 6e 65 6c 20 5b 61 70 70 65 6e 64 41 channel [appendA 2df0: 72 67 73 20 5c 0d 0a 20 20 20 20 20 20 20 20 20 rgs \.. 2e00: 20 20 20 22 2d 2d 2d 2d 20 66 61 69 6c 65 64 20 "---- failed 2e10: 74 6f 20 63 6f 70 79 20 62 75 69 6c 64 20 66 69 to copy build fi 2e20: 6c 65 20 66 72 6f 6d 20 5c 22 22 20 24 73 6f 75 le from \""$sou
2e30: 72 63 65 46 69 6c 65 4e 61 6d 65 20 5c 0d 0a 20 rceFileName \..
2e40: 20 20 20 20 20 20 20 20 20 20 20 22 5c 22 20 74 "\" t
2e50: 6f 20 5c 22 22 20 24 74 61 72 67 65 74 46 69 6c o \"" $targetFil 2e60: 65 4e 61 6d 65 20 5c 22 5c 6e 5d 0d 0a 20 20 20 eName \"\n].. 2e70: 20 20 20 7d 0d 0a 20 20 20 20 7d 0d 0a 0c 0d 0a }.. }..... 2e80: 20 20 20 20 70 72 6f 63 20 74 72 79 44 65 6c 65 proc tryDele 2e90: 74 65 42 69 6e 61 72 79 46 69 6c 65 20 7b 20 66 teBinaryFile { f 2ea0: 69 6c 65 4e 61 6d 65 20 7d 20 7b 0d 0a 20 20 20 ileName } {.. 2eb0: 20 20 20 73 65 74 20 66 69 6c 65 4e 61 6d 65 20 set fileName 2ec0: 5b 67 65 74 42 69 6e 61 72 79 46 69 6c 65 4e 61 [getBinaryFileNa 2ed0: 6d 65 20 24 66 69 6c 65 4e 61 6d 65 5d 0d 0a 0d me$fileName]...
2ee0: 0a 20 20 20 20 20 20 69 66 20 7b 21 5b 66 69 6c . if {![fil
2ef0: 65 20 65 78 69 73 74 73 20 24 66 69 6c 65 4e 61 e exists $fileNa 2f00: 6d 65 5d 7d 20 74 68 65 6e 20 7b 0d 0a 20 20 20 me]} then {.. 2f10: 20 20 20 20 20 74 70 75 74 73 20 24 3a 3a 74 65 tputs$::te
2f20: 73 74 5f 63 68 61 6e 6e 65 6c 20 5b 61 70 70 65 st_channel [appe
2f30: 6e 64 41 72 67 73 20 5c 0d 0a 20 20 20 20 20 20 ndArgs \..
2f40: 20 20 20 20 20 20 22 2d 2d 2d 2d 20 73 6b 69 70 "---- skip
2f50: 70 65 64 20 64 65 6c 65 74 69 6e 67 20 62 69 6e ped deleting bin
2f60: 61 72 79 20 66 69 6c 65 20 5c 22 22 20 24 66 69 ary file \"" $fi 2f70: 6c 65 4e 61 6d 65 20 5c 0d 0a 20 20 20 20 20 20 leName \.. 2f80: 20 20 20 20 20 20 22 5c 22 2c 20 69 74 20 64 6f "\", it do 2f90: 65 73 20 6e 6f 74 20 65 78 69 73 74 5c 6e 22 5d es not exist\n"] 2fa0: 0d 0a 0d 0a 20 20 20 20 20 20 20 20 72 65 74 75 .... retu 2fb0: 72 6e 0d 0a 20 20 20 20 20 20 7d 0d 0a 0d 0a 20 rn.. }.... 2fc0: 20 20 20 20 20 69 66 20 7b 5b 63 61 74 63 68 20 if {[catch 2fd0: 7b 66 69 6c 65 20 64 65 6c 65 74 65 20 24 66 69 {file delete$fi
2fe0: 6c 65 4e 61 6d 65 7d 5d 20 3d 3d 20 30 7d 20 74 leName}] == 0} t
2ff0: 68 65 6e 20 7b 0d 0a 20 20 20 20 20 20 20 20 74 hen {.. t
3000: 70 75 74 73 20 24 3a 3a 74 65 73 74 5f 63 68 61 puts $::test_cha 3010: 6e 6e 65 6c 20 5b 61 70 70 65 6e 64 41 72 67 73 nnel [appendArgs 3020: 20 5c 0d 0a 20 20 20 20 20 20 20 20 20 20 20 20 \.. 3030: 22 2d 2d 2d 2d 20 64 65 6c 65 74 65 64 20 62 69 "---- deleted bi 3040: 6e 61 72 79 20 66 69 6c 65 20 5c 22 22 20 24 66 nary file \""$f
3050: 69 6c 65 4e 61 6d 65 20 5c 22 5c 6e 5d 0d 0a 20 ileName \"\n]..
3060: 20 20 20 20 20 7d 20 65 6c 73 65 20 7b 0d 0a 20 } else {..
3070: 20 20 20 20 20 20 20 74 70 75 74 73 20 24 3a 3a tputs $:: 3080: 74 65 73 74 5f 63 68 61 6e 6e 65 6c 20 5b 61 70 test_channel [ap 3090: 70 65 6e 64 41 72 67 73 20 5c 0d 0a 20 20 20 20 pendArgs \.. 30a0: 20 20 20 20 20 20 20 20 22 2d 2d 2d 2d 20 66 61 "---- fa 30b0: 69 6c 65 64 20 74 6f 20 64 65 6c 65 74 65 20 62 iled to delete b 30c0: 69 6e 61 72 79 20 66 69 6c 65 20 5c 22 22 20 24 inary file \""$
30d0: 66 69 6c 65 4e 61 6d 65 20 5c 22 5c 6e 5d 0d 0a fileName \"\n]..
30e0: 20 20 20 20 20 20 7d 0d 0a 20 20 20 20 7d 0d 0a }.. }..
30f0: 0c 0d 0a 20 20 20 20 70 72 6f 63 20 74 72 79 43 ... proc tryC
3100: 6f 70 79 41 73 73 65 6d 62 6c 79 20 7b 20 66 69 opyAssembly { fi
3110: 6c 65 4e 61 6d 65 20 7b 70 64 62 20 74 72 75 65 leName {pdb true
3120: 7d 20 7d 20 7b 0d 0a 20 20 20 20 20 20 74 72 79 } } {.. try
3130: 43 6f 70 79 42 75 69 6c 64 46 69 6c 65 20 24 66 CopyBuildFile $f 3140: 69 6c 65 4e 61 6d 65 0d 0a 0d 0a 20 20 20 20 20 ileName.... 3150: 20 69 66 20 7b 24 70 64 62 7d 20 74 68 65 6e 20 if {$pdb} then
3160: 7b 0d 0a 20 20 20 20 20 20 20 20 74 72 79 43 6f {.. tryCo
3170: 70 79 42 75 69 6c 64 46 69 6c 65 20 5b 61 70 70 pyBuildFile [app
3180: 65 6e 64 41 72 67 73 20 5b 66 69 6c 65 20 72 6f endArgs [file ro
3190: 6f 74 6e 61 6d 65 20 24 66 69 6c 65 4e 61 6d 65 otname $fileName 31a0: 5d 20 2e 70 64 62 5d 0d 0a 20 20 20 20 20 20 7d ] .pdb].. } 31b0: 0d 0a 20 20 20 20 7d 0d 0a 0c 0d 0a 20 20 20 20 .. }..... 31c0: 70 72 6f 63 20 74 72 79 44 65 6c 65 74 65 41 73 proc tryDeleteAs 31d0: 73 65 6d 62 6c 79 20 7b 20 66 69 6c 65 4e 61 6d sembly { fileNam 31e0: 65 20 7b 70 64 62 20 74 72 75 65 7d 20 7d 20 7b e {pdb true} } { 31f0: 0d 0a 20 20 20 20 20 20 74 72 79 44 65 6c 65 74 .. tryDelet 3200: 65 42 69 6e 61 72 79 46 69 6c 65 20 24 66 69 6c eBinaryFile$fil
3210: 65 4e 61 6d 65 0d 0a 0d 0a 20 20 20 20 20 20 69 eName.... i
3220: 66 20 7b 24 70 64 62 7d 20 74 68 65 6e 20 7b 0d f {$pdb} then {. 3230: 0a 20 20 20 20 20 20 20 20 74 72 79 44 65 6c 65 . tryDele 3240: 74 65 42 69 6e 61 72 79 46 69 6c 65 20 5b 61 70 teBinaryFile [ap 3250: 70 65 6e 64 41 72 67 73 20 5b 66 69 6c 65 20 72 pendArgs [file r 3260: 6f 6f 74 6e 61 6d 65 20 24 66 69 6c 65 4e 61 6d ootname$fileNam
3270: 65 5d 20 2e 70 64 62 5d 0d 0a 20 20 20 20 20 20 e] .pdb]..
3280: 7d 0d 0a 20 20 20 20 7d 0d 0a 0c 0d 0a 20 20 20 }.. }.....
    proc tryLoadAssembly { fileName } {
      set fileName [getBinaryFileName $fileName]

      if {[catch {set assembly \
              [object load -loadtype File -alias $fileName]}] == 0} then {
        #
        # NOTE: Now, add the necessary test constraint.
        #
        addConstraint [file rootname [file tail $fileName]]

        #
        # NOTE: Grab the image runtime version from the assembly because
        #       several tests rely on it having a certain value.
        #
        addConstraint [appendArgs [file tail $fileName] _ \
            [$assembly ImageRuntimeVersion]]

        #
        # NOTE: Return the full path of the loaded file.
        #
        return $fileName
      }

      return ""
    }
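On a successful load, tryLoadAssembly registers two constraints: the bare assembly name (root of the file tail) and the tail joined to the image runtime version with an underscore. A Python sketch of just that naming rule (the function name and the sample runtime-version string are illustrative, not from the script):

```python
import os

def constraint_names(file_name: str, runtime_version: str) -> list:
    # First constraint: [file rootname [file tail $fileName]], i.e. the
    # assembly name with directory and extension stripped.
    tail = os.path.basename(file_name)
    root = os.path.splitext(tail)[0]
    # Second constraint: [file tail $fileName] "_" <ImageRuntimeVersion>.
    return [root, tail + "_" + runtime_version]

print(constraint_names("bin/System.Data.SQLite.dll", "v2.0.50727"))
```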
    proc checkForSQLite { channel } {
      tputs $channel "---- checking for core SQLite library... "

      if {[catch {object invoke -flags +NonPublic System.Data.SQLite.SQLite3 \
              SQLiteVersion} version] == 0} then {
        #
        # NOTE: Attempt to query the Fossil source identifier for the SQLite
        #       core library.
        #
        if {[catch {object invoke -flags +NonPublic System.Data.SQLite.SQLite3 \
                SQLiteSourceId} sourceId]} then {
          #
          # NOTE: We failed to query the Fossil source identifier.
          #
          set sourceId unknown
        }

        #
        # NOTE: Yes, the SQLite core library appears to be available.
        #
        addConstraint SQLite

        tputs $channel [appendArgs "yes (" $version " " $sourceId ")\n"]
      } else {
        tputs $channel no\n
      }
    }

    proc checkForSQLiteDefineConstant { channel name } {
      tputs $channel [appendArgs \
          "---- checking for System.Data.SQLite define constant \"" $name \
          "\"... "]

      if {[catch {object invoke -flags +NonPublic System.Data.SQLite.SQLite3 \
              DefineConstants} defineConstants] == 0} then {
        if {[lsearch -exact -nocase $defineConstants $name] != -1} then {
          #
          # NOTE: Yes, this define constant was enabled when the managed
          #       assembly was compiled.
          #
          addConstraint [appendArgs defineConstant.System.Data.SQLite. $name]

          tputs $channel yes\n
        } else {
          tputs $channel no\n
        }
      } else {
        tputs $channel error\n
      }
    }

    proc getDateTimeFormat {} {
      #
      # NOTE: This procedure simply returns the "default" DateTime format used
      #       by the test suite.
      #
      if {[info exists ::datetime_format] && \
          [string length $::datetime_format] > 0} then {
        #
        # NOTE: Return the manually overridden value for the DateTime format.
        #
        return $::datetime_format
      } else {
        #
        # NOTE: Return an ISO8601 DateTime format compatible with SQLite,
        #       System.Data.SQLite, and suitable for round-tripping with the
        #       DateTime class of the framework.  If this value is changed,
        #       various tests may fail.
        #
        return "yyyy-MM-dd HH:mm:ss.FFFFFFFK"
      }
    }

    proc enumerableToList { enumerable } {
      set result [list]

      if {[string length $enumerable] == 0 || $enumerable eq "null"} then {
        return $result
      }

      object foreach -alias item $enumerable {
        if {[string length $item] > 0} then {
          lappend result [$item ToString]
        }
      }

      return $result
    }
40b0: 70 72 6f 63 20 63 6f 6d 70 69 6c 65 43 53 68 61 proc compileCSha
40c0: 72 70 57 69 74 68 20 7b 0d 0a 20 20 20 20 20 20 rpWith {..
40d0: 20 20 20 20 20 20 74 65 78 74 20 6d 65 6d 6f 72 text memor
40e0: 79 20 73 79 6d 62 6f 6c 73 20 73 74 72 69 63 74 y symbols strict
40f0: 20 72 65 73 75 6c 74 73 56 61 72 4e 61 6d 65 20 resultsVarName
4100: 65 72 72 6f 72 73 56 61 72 4e 61 6d 65 20 66 69 errorsVarName fi
4110: 6c 65 4e 61 6d 65 73 0d 0a 20 20 20 20 20 20 20 leNames..
4120: 20 20 20 20 20 61 72 67 73 20 7d 20 7b 0d 0a 20 args } {..
4130: 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 23 20 #.. #
4140: 4e 4f 54 45 3a 20 53 69 6e 63 65 20 77 65 20 61 NOTE: Since we a
4150: 72 65 20 67 6f 69 6e 67 20 74 6f 20 75 73 65 20 re going to use
4160: 74 68 69 73 20 6d 65 74 68 6f 64 20 6e 61 6d 65 this method name
4170: 20 61 20 6c 6f 74 2c 20 61 73 73 69 67 6e 20 69 a lot, assign i
4180: 74 20 74 6f 20 61 0d 0a 20 20 20 20 20 20 23 20 t to a.. #
4190: 20 20 20 20 20 20 76 61 72 69 61 62 6c 65 20 66 variable f
41a0: 69 72 73 74 2e 0d 0a 20 20 20 20 20 20 23 0d 0a irst... #..
41b0: 20 20 20 20 20 20 73 65 74 20 61 64 64 20 52 65 set add Re
41c0: 66 65 72 65 6e 63 65 64 41 73 73 65 6d 62 6c 69 ferencedAssembli
41d0: 65 73 2e 41 64 64 0d 0a 0d 0a 20 20 20 20 20 20 es.Add....
41e0: 23 0d 0a 20 20 20 20 20 20 23 20 4e 4f 54 45 3a #.. # NOTE:
41f0: 20 43 72 65 61 74 65 20 74 68 65 20 62 61 73 65 Create the base
4200: 20 63 6f 6d 6d 61 6e 64 20 74 6f 20 65 76 61 6c command to eval
4210: 75 61 74 65 20 61 6e 64 20 61 64 64 20 74 68 65 uate and add the
4220: 20 70 72 6f 70 65 72 74 79 20 73 65 74 74 69 6e property settin
4230: 67 73 0d 0a 20 20 20 20 20 20 23 20 20 20 20 20 gs.. #
4240: 20 20 74 68 61 74 20 61 72 65 20 61 6c 6d 6f 73 that are almos
4250: 74 20 61 6c 77 61 79 73 20 6e 65 65 64 65 64 20 t always needed
4260: 62 79 20 6f 75 72 20 75 6e 69 74 20 74 65 73 74 by our unit test
4270: 73 20 28 69 2e 65 2e 20 74 68 65 20 53 79 73 74 s (i.e. the Syst
4280: 65 6d 0d 0a 20 20 20 20 20 20 23 20 20 20 20 20 em.. #
4290: 20 20 61 6e 64 20 53 79 73 74 65 6d 2e 44 61 74 and System.Dat
42a0: 61 20 61 73 73 65 6d 62 6c 79 20 72 65 66 65 72 a assembly refer
42b0: 65 6e 63 65 73 29 2e 0d 0a 20 20 20 20 20 20 23 ences)... #
42c0: 0d 0a 20 20 20 20 20 20 73 65 74 20 63 6f 6d 6d .. set comm
42d0: 61 6e 64 20 5b 6c 69 73 74 20 63 6f 6d 70 69 6c and [list compil
42e0: 65 43 53 68 61 72 70 20 24 74 65 78 74 20 24 6d eCSharp $text$m
42f0: 65 6d 6f 72 79 20 24 73 79 6d 62 6f 6c 73 20 24 emory $symbols$
4300: 73 74 72 69 63 74 20 72 65 73 75 6c 74 73 20 5c strict results \
4310: 0d 0a 20 20 20 20 20 20 20 20 20 20 65 72 72 6f .. erro
4320: 72 73 20 24 61 64 64 20 53 79 73 74 65 6d 2e 64 rs $add System.d 4330: 6c 6c 20 24 61 64 64 20 53 79 73 74 65 6d 2e 44 ll$add System.D
4340: 61 74 61 2e 64 6c 6c 20 24 61 64 64 20 53 79 73 ata.dll $add Sys 4350: 74 65 6d 2e 58 6d 6c 2e 64 6c 6c 5d 0d 0a 0d 0a tem.Xml.dll].... 4360: 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 23 #.. # 4370: 20 4e 4f 54 45 3a 20 41 64 64 20 61 6c 6c 20 74 NOTE: Add all t 4380: 68 65 20 70 72 6f 76 69 64 65 64 20 66 69 6c 65 he provided file 4390: 20 6e 61 6d 65 73 20 61 73 20 61 73 73 65 6d 62 names as assemb 43a0: 6c 79 20 72 65 66 65 72 65 6e 63 65 73 2e 0d 0a ly references... 43b0: 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 66 #.. f 43c0: 6f 72 65 61 63 68 20 66 69 6c 65 4e 61 6d 65 20 oreach fileName 43d0: 24 66 69 6c 65 4e 61 6d 65 73 20 7b 0d 0a 20 20$fileNames {..
43e0: 20 20 20 20 20 20 6c 61 70 70 65 6e 64 20 63 6f lappend co
43f0: 6d 6d 61 6e 64 20 24 61 64 64 20 5b 67 65 74 42 mmand $add [getB 4400: 69 6e 61 72 79 46 69 6c 65 4e 61 6d 65 20 24 66 inaryFileName$f
4410: 69 6c 65 4e 61 6d 65 5d 0d 0a 20 20 20 20 20 20 ileName]..
4420: 7d 0d 0a 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 }.... #..
4430: 20 20 20 20 23 20 4e 4f 54 45 3a 20 41 64 64 20 # NOTE: Add
4440: 74 68 65 20 65 78 74 72 61 20 61 72 67 75 6d 65 the extra argume
4450: 6e 74 73 2c 20 69 66 20 61 6e 79 2c 20 74 6f 20 nts, if any, to
4460: 74 68 65 20 63 6f 6d 6d 61 6e 64 20 74 6f 20 65 the command to e
4470: 76 61 6c 75 61 74 65 2e 0d 0a 20 20 20 20 20 20 valuate...
4480: 23 0d 0a 20 20 20 20 20 20 65 76 61 6c 20 6c 61 #.. eval la
4490: 70 70 65 6e 64 20 63 6f 6d 6d 61 6e 64 20 24 61 ppend command $a 44a0: 72 67 73 0d 0a 0d 0a 20 20 20 20 20 20 23 0d 0a rgs.... #.. 44b0: 20 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 41 6c # NOTE: Al 44c0: 69 61 73 20 74 68 65 20 63 6f 6d 70 69 6c 65 72 ias the compiler 44d0: 20 6c 6f 63 61 6c 20 72 65 73 75 6c 74 73 20 61 local results a 44e0: 6e 64 20 65 72 72 6f 72 73 20 76 61 72 69 61 62 nd errors variab 44f0: 6c 65 73 20 74 6f 20 74 68 65 0d 0a 20 20 20 20 les to the.. 4500: 20 20 23 20 20 20 20 20 20 20 76 61 72 69 61 62 # variab 4510: 6c 65 20 6e 61 6d 65 73 20 70 72 6f 76 69 64 65 le names provide 4520: 64 20 62 79 20 6f 75 72 20 63 61 6c 6c 65 72 2e d by our caller. 4530: 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 .. #.. 4540: 20 75 70 76 61 72 20 31 20 24 72 65 73 75 6c 74 upvar 1$result
4550: 73 56 61 72 4e 61 6d 65 20 72 65 73 75 6c 74 73 sVarName results
4560: 0d 0a 20 20 20 20 20 20 75 70 76 61 72 20 31 20 .. upvar 1
4570: 24 65 72 72 6f 72 73 56 61 72 4e 61 6d 65 20 65 $errorsVarName e 4580: 72 72 6f 72 73 0d 0a 0d 0a 20 20 20 20 20 20 23 rrors.... # 4590: 0d 0a 20 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 .. # NOTE: 45a0: 45 76 61 6c 75 61 74 65 20 74 68 65 20 63 6f 6e Evaluate the con 45b0: 73 74 72 75 63 74 65 64 20 5b 63 6f 6d 70 69 6c structed [compil 45c0: 65 43 53 68 61 72 70 5d 20 63 6f 6d 6d 61 6e 64 eCSharp] command 45d0: 20 61 6e 64 20 72 65 74 75 72 6e 20 74 68 65 0d and return the. 45e0: 0a 20 20 20 20 20 20 23 20 20 20 20 20 20 20 72 . # r 45f0: 65 73 75 6c 74 2e 0d 0a 20 20 20 20 20 20 23 0d esult... #. 4600: 0a 20 20 20 20 20 20 65 76 61 6c 20 24 63 6f 6d . eval$com
4610: 6d 61 6e 64 0d 0a 20 20 20 20 7d 0d 0a 0c 0d 0a mand.. }.....
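compileCSharpWith builds the whole [compileCSharp] invocation as a list, seeding it with the near-universal references and then appending one `$add <reference>` pair per file name before evaluating it. The same build-then-invoke pattern, sketched in Python with illustrative names:

```python
def build_compile_command(text, file_names, extra_args=()):
    # Seed the command with the references almost every test needs,
    # mirroring how the Tcl list starts with System/System.Data/System.Xml.
    add = "ReferencedAssemblies.Add"
    command = ["compileCSharp", text,
               add, "System.dll", add, "System.Data.dll", add, "System.Xml.dll"]
    # One "add <reference>" pair per caller-supplied file name.
    for file_name in file_names:
        command += [add, file_name]
    # Finally, splice in any extra arguments verbatim.
    command += list(extra_args)
    return command

cmd = build_compile_command("class T {}", ["System.Data.SQLite.dll"])
print(cmd[-2:])
```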
4620: 20 20 20 20 70 72 6f 63 20 69 73 4d 65 6d 6f 72 proc isMemor
4630: 79 44 62 20 7b 20 66 69 6c 65 4e 61 6d 65 20 7d yDb { fileName }
4640: 20 7b 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 {.. #..
4650: 20 20 20 23 20 4e 4f 54 45 3a 20 49 73 20 74 68 # NOTE: Is th
4660: 65 20 73 70 65 63 69 66 69 65 64 20 64 61 74 61 e specified data
4670: 62 61 73 65 20 66 69 6c 65 20 6e 61 6d 65 20 72 base file name r
4680: 65 61 6c 6c 79 20 61 6e 20 69 6e 2d 6d 65 6d 6f eally an in-memo
4690: 72 79 20 64 61 74 61 62 61 73 65 3f 0d 0a 20 20 ry database?..
46a0: 20 20 20 20 23 0d 0a 20 20 20 20 20 20 72 65 74 #.. ret
46b0: 75 72 6e 20 5b 65 78 70 72 20 7b 24 66 69 6c 65 urn [expr {$file 46c0: 4e 61 6d 65 20 65 71 20 22 3a 6d 65 6d 6f 72 79 Name eq ":memory 46d0: 3a 22 20 7c 7c 20 5c 0d 0a 20 20 20 20 20 20 20 :" || \.. 46e0: 20 20 20 5b 73 74 72 69 6e 67 20 72 61 6e 67 65 [string range 46f0: 20 24 66 69 6c 65 4e 61 6d 65 20 30 20 31 32 5d$fileName 0 12]
4700: 20 65 71 20 22 66 69 6c 65 3a 3a 6d 65 6d 6f 72 eq "file::memor
4710: 79 3a 22 7d 5d 0d 0a 20 20 20 20 7d 0d 0a 0c 0d y:"}].. }....
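The check above accepts either the literal `:memory:` name or a URI whose first 13 characters are `file::memory:` (`[string range $fileName 0 12]` spans indices 0 through 12 inclusive). An equivalent sketch in Python:

```python
def is_memory_db(file_name: str) -> bool:
    # ":memory:" exactly, or a URI beginning with "file::memory:"
    # (13 characters, matching [string range $fileName 0 12]).
    return file_name == ":memory:" or file_name[:13] == "file::memory:"

print(is_memory_db(":memory:"),
      is_memory_db("file::memory:?cache=shared"),
      is_memory_db("test.db"))
```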
    proc setupDb {
            fileName {mode ""} {dateTimeFormat ""} {dateTimeKind ""} {flags ""}
            {extra ""} {qualify true} {delete true} {uri false} {varName db} } {
      #
      # NOTE: First, see if the caller has requested an in-memory database.
      #
      set isMemory [isMemoryDb $fileName]

      #
      # NOTE: For now, all test databases used by the test suite are placed into
      #       the temporary directory.  Each database used by a test should be
      #       cleaned up by that test using the "cleanupDb" procedure, below.
      #
      if {!$isMemory && $qualify} then {
        set fileName [file join [getDatabaseDirectory] [file tail $fileName]]
      }

      #
      # NOTE: By default, delete any pre-existing database with the same file
      #       name if it currently exists.
      #
      if {!$isMemory && $delete && [file exists $fileName]} then {
        #
        # NOTE: Attempt to delete any pre-existing database with the same file
        #       name.
        #
        if {[catch {file delete $fileName} error]} then {
          #
          # NOTE: We somehow failed to delete the file, report why.
          #
          tputs $::test_channel [appendArgs \
              "==== WARNING: failed to delete database file \"" $fileName \
              "\" during setup, error: " \n\t $error \n]
        }
      }

      #
      # NOTE: Refer to the specified variable (e.g. "db") in the context of the
      #       caller.  The handle to the opened database will be stored there.
      #
      upvar 1 $varName db

      #
      # NOTE: Start building the connection string.  The only required portion
      #       of the connection string is the data source, which contains the
      #       database file name itself.  If the caller wants to use a URI as
      #       the data source, use the FullUri connection string property to
      #       prevent the data source string from being mangled.
      #
      if {$uri} then {
        set connection {FullUri=${fileName}}
      } else {
        set connection {Data Source=${fileName}}
      }

      #
      # NOTE: Since this procedure has no special knowledge of what the default
      #       setting is for the ToFullPath connection string property, always
      #       add the value we know about to the connection string.
      #
      append connection {;ToFullPath=${qualify}}
50c0: 20 20 20 23 0d 0a 20 20 20 20 20 20 23 20 4e 4f #.. # NO
50d0: 54 45 3a 20 49 66 20 74 68 65 20 63 61 6c 6c 65 TE: If the calle
50e0: 72 20 73 70 65 63 69 66 69 65 64 20 61 20 6a 6f r specified a jo
50f0: 75 72 6e 61 6c 20 6d 6f 64 65 2c 20 61 64 64 20 urnal mode, add
5100: 74 68 65 20 6e 65 63 65 73 73 61 72 79 20 70 6f the necessary po
5110: 72 74 69 6f 6e 0d 0a 20 20 20 20 20 20 23 20 20 rtion.. #
5120: 20 20 20 20 20 6f 66 20 74 68 65 20 63 6f 6e 6e of the conn
5130: 65 63 74 69 6f 6e 20 73 74 72 69 6e 67 20 6e 6f ection string no
5140: 77 2e 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 w... #..
5150: 20 20 20 69 66 20 7b 5b 73 74 72 69 6e 67 20 6c if {[string l
5160: 65 6e 67 74 68 20 24 6d 6f 64 65 5d 20 3e 20 30 ength $mode] > 0 5170: 7d 20 74 68 65 6e 20 7b 0d 0a 20 20 20 20 20 20 } then {.. 5180: 20 20 61 70 70 65 6e 64 20 63 6f 6e 6e 65 63 74 append connect 5190: 69 6f 6e 20 7b 3b 4a 6f 75 72 6e 61 6c 20 4d 6f ion {;Journal Mo 51a0: 64 65 3d 24 7b 6d 6f 64 65 7d 7d 0d 0a 20 20 20 de=${mode}}..
51b0: 20 20 20 7d 0d 0a 0d 0a 20 20 20 20 20 20 23 0d }.... #.
51c0: 0a 20 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 49 . # NOTE: I
51d0: 66 20 74 68 65 20 63 61 6c 6c 65 72 20 73 70 65 f the caller spe
51e0: 63 69 66 69 65 64 20 61 20 44 61 74 65 54 69 6d cified a DateTim
51f0: 65 20 66 6f 72 6d 61 74 2c 20 61 64 64 20 74 68 e format, add th
5200: 65 20 6e 65 63 65 73 73 61 72 79 0d 0a 20 20 20 e necessary..
5210: 20 20 20 23 20 20 20 20 20 20 20 70 6f 72 74 69 # porti
5220: 6f 6e 20 6f 66 20 74 68 65 20 63 6f 6e 6e 65 63 on of the connec
5230: 74 69 6f 6e 20 73 74 72 69 6e 67 20 6e 6f 77 2e tion string now.
5240: 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 .. #..
5250: 20 69 66 20 7b 5b 73 74 72 69 6e 67 20 6c 65 6e if {[string len
5260: 67 74 68 20 24 64 61 74 65 54 69 6d 65 46 6f 72 gth $dateTimeFor 5270: 6d 61 74 5d 20 3e 20 30 7d 20 74 68 65 6e 20 7b mat] > 0} then { 5280: 0d 0a 20 20 20 20 20 20 20 20 61 70 70 65 6e 64 .. append 5290: 20 63 6f 6e 6e 65 63 74 69 6f 6e 20 7b 3b 44 61 connection {;Da 52a0: 74 65 54 69 6d 65 46 6f 72 6d 61 74 3d 24 7b 64 teTimeFormat=${d
52b0: 61 74 65 54 69 6d 65 46 6f 72 6d 61 74 7d 7d 0d ateTimeFormat}}.
52c0: 0a 20 20 20 20 20 20 7d 0d 0a 0d 0a 20 20 20 20 . }....
52d0: 20 20 23 0d 0a 20 20 20 20 20 20 23 20 4e 4f 54 #.. # NOT
52e0: 45 3a 20 49 66 20 74 68 65 20 63 61 6c 6c 65 72 E: If the caller
52f0: 20 73 70 65 63 69 66 69 65 64 20 61 20 44 61 74 specified a Dat
5300: 65 54 69 6d 65 4b 69 6e 64 2c 20 61 64 64 20 74 eTimeKind, add t
5310: 68 65 20 6e 65 63 65 73 73 61 72 79 20 70 6f 72 he necessary por
5320: 74 69 6f 6e 0d 0a 20 20 20 20 20 20 23 20 20 20 tion.. #
5330: 20 20 20 20 6f 66 20 74 68 65 20 63 6f 6e 6e 65 of the conne
5340: 63 74 69 6f 6e 20 73 74 72 69 6e 67 20 6e 6f 77 ction string now
5350: 2e 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 20 ... #..
5360: 20 20 69 66 20 7b 5b 73 74 72 69 6e 67 20 6c 65 if {[string le
5370: 6e 67 74 68 20 24 64 61 74 65 54 69 6d 65 4b 69 ngth $dateTimeKi 5380: 6e 64 5d 20 3e 20 30 7d 20 74 68 65 6e 20 7b 0d nd] > 0} then {. 5390: 0a 20 20 20 20 20 20 20 20 61 70 70 65 6e 64 20 . append 53a0: 63 6f 6e 6e 65 63 74 69 6f 6e 20 7b 3b 44 61 74 connection {;Dat 53b0: 65 54 69 6d 65 4b 69 6e 64 3d 24 7b 64 61 74 65 eTimeKind=${date
53c0: 54 69 6d 65 4b 69 6e 64 7d 7d 0d 0a 20 20 20 20 TimeKind}}..
53d0: 20 20 7d 0d 0a 0d 0a 20 20 20 20 20 20 23 0d 0a }.... #..
53e0: 20 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 49 66 # NOTE: If
53f0: 20 74 68 65 72 65 20 61 72 65 20 61 6e 79 20 67 there are any g
5400: 6c 6f 62 61 6c 20 28 70 65 72 20 74 65 73 74 20 lobal (per test
5410: 72 75 6e 29 20 63 6f 6e 6e 65 63 74 69 6f 6e 20 run) connection
5420: 66 6c 61 67 73 20 63 75 72 72 65 6e 74 6c 79 0d flags currently.
5430: 0a 20 20 20 20 20 20 23 20 20 20 20 20 20 20 73 . # s
5440: 65 74 2c 20 75 73 65 20 74 68 65 6d 20 6e 6f 77 et, use them now
5450: 20 28 69 2e 65 2e 20 62 79 20 63 6f 6d 62 69 6e (i.e. by combin
5460: 69 6e 67 20 74 68 65 6d 20 77 69 74 68 20 74 68 ing them with th
5470: 65 20 6f 6e 65 73 20 66 6f 72 20 74 68 69 73 0d e ones for this.
5480: 0a 20 20 20 20 20 20 23 20 20 20 20 20 20 20 63 . # c
5490: 6f 6e 6e 65 63 74 69 6f 6e 29 2e 0d 0a 20 20 20 onnection)...
54a0: 20 20 20 23 0d 0a 20 20 20 20 20 20 69 66 20 7b #.. if {
54b0: 5b 69 6e 66 6f 20 65 78 69 73 74 73 20 3a 3a 63 [info exists ::c
54c0: 6f 6e 6e 65 63 74 69 6f 6e 5f 66 6c 61 67 73 5d onnection_flags]
54d0: 20 26 26 20 5c 0d 0a 20 20 20 20 20 20 20 20 20 && \..
54e0: 20 5b 73 74 72 69 6e 67 20 6c 65 6e 67 74 68 20 [string length
54f0: 24 3a 3a 63 6f 6e 6e 65 63 74 69 6f 6e 5f 66 6c $::connection_fl 5500: 61 67 73 5d 20 3e 20 30 7d 20 74 68 65 6e 20 7b ags] > 0} then { 5510: 0d 0a 20 20 20 20 20 20 20 20 23 0d 0a 20 20 20 .. #.. 5520: 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 53 68 6f # NOTE: Sho 5530: 77 20 28 61 6e 64 20 6c 6f 67 29 20 74 68 61 74 w (and log) that 5540: 20 77 65 20 64 65 74 65 63 74 65 64 20 73 6f 6d we detected som 5550: 65 20 67 6c 6f 62 61 6c 20 63 6f 6e 6e 65 63 74 e global connect 5560: 69 6f 6e 20 66 6c 61 67 73 2e 0d 0a 20 20 20 20 ion flags... 5570: 20 20 20 20 23 0d 0a 20 20 20 20 20 20 20 20 74 #.. t 5580: 70 75 74 73 20 24 3a 3a 74 65 73 74 5f 63 68 61 puts$::test_cha
5590: 6e 6e 65 6c 20 5b 61 70 70 65 6e 64 41 72 67 73 nnel [appendArgs
55a0: 20 5c 0d 0a 20 20 20 20 20 20 20 20 20 20 20 20 \..
55b0: 22 2d 2d 2d 2d 20 67 6c 6f 62 61 6c 20 63 6f 6e "---- global con
55c0: 6e 65 63 74 69 6f 6e 20 66 6c 61 67 73 20 64 65 nection flags de
55d0: 74 65 63 74 65 64 3a 20 22 20 24 3a 3a 63 6f 6e tected: " $::con 55e0: 6e 65 63 74 69 6f 6e 5f 66 6c 61 67 73 20 5c 6e nection_flags \n 55f0: 5d 0d 0a 0d 0a 20 20 20 20 20 20 20 20 23 0d 0a ].... #.. 5600: 20 20 20 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 # NOTE: 5610: 43 6f 6d 62 69 6e 65 20 61 6e 64 2f 6f 72 20 72 Combine and/or r 5620: 65 70 6c 61 63 65 20 74 68 65 20 63 6f 6e 6e 65 eplace the conne 5630: 63 74 69 6f 6e 20 66 6c 61 67 73 20 61 6e 64 20 ction flags and 5640: 74 68 65 6e 20 73 68 6f 77 20 74 68 65 0d 0a 20 then show the.. 5650: 20 20 20 20 20 20 20 23 20 20 20 20 20 20 20 6e # n 5660: 65 77 20 76 61 6c 75 65 2e 0d 0a 20 20 20 20 20 ew value... 5670: 20 20 20 23 0d 0a 20 20 20 20 20 20 20 20 73 65 #.. se 5680: 74 20 66 6c 61 67 73 20 5b 63 6f 6d 62 69 6e 65 t flags [combine 5690: 46 6c 61 67 73 20 24 66 6c 61 67 73 20 24 3a 3a Flags$flags $:: 56a0: 63 6f 6e 6e 65 63 74 69 6f 6e 5f 66 6c 61 67 73 connection_flags 56b0: 5d 0d 0a 0d 0a 20 20 20 20 20 20 20 20 74 70 75 ].... tpu 56c0: 74 73 20 24 3a 3a 74 65 73 74 5f 63 68 61 6e 6e ts$::test_chann
56d0: 65 6c 20 5b 61 70 70 65 6e 64 41 72 67 73 20 5c el [appendArgs \
56e0: 0d 0a 20 20 20 20 20 20 20 20 20 20 20 20 22 2d .. "-
56f0: 2d 2d 2d 20 63 6f 6d 62 69 6e 65 64 20 63 6f 6e --- combined con
5700: 6e 65 63 74 69 6f 6e 20 66 6c 61 67 73 20 61 72 nection flags ar
5710: 65 3a 20 22 20 24 66 6c 61 67 73 20 5c 6e 5d 0d e: " $flags \n]. 5720: 0a 20 20 20 20 20 20 7d 0d 0a 0d 0a 20 20 20 20 . }.... 5730: 20 20 23 0d 0a 20 20 20 20 20 20 23 20 4e 4f 54 #.. # NOT 5740: 45 3a 20 49 66 20 74 68 65 20 63 61 6c 6c 65 72 E: If the caller 5750: 20 73 70 65 63 69 66 69 65 64 20 61 20 53 51 4c specified a SQL 5760: 69 74 65 43 6f 6e 6e 65 63 74 69 6f 6e 46 6c 61 iteConnectionFla 5770: 67 73 2c 20 61 64 64 20 74 68 65 20 6e 65 63 65 gs, add the nece 5780: 73 73 61 72 79 0d 0a 20 20 20 20 20 20 23 20 20 ssary.. # 5790: 20 20 20 20 20 70 6f 72 74 69 6f 6e 20 6f 66 20 portion of 57a0: 74 68 65 20 63 6f 6e 6e 65 63 74 69 6f 6e 20 73 the connection s 57b0: 74 72 69 6e 67 20 6e 6f 77 2e 0d 0a 20 20 20 20 tring now... 57c0: 20 20 23 0d 0a 20 20 20 20 20 20 69 66 20 7b 5b #.. if {[ 57d0: 73 74 72 69 6e 67 20 6c 65 6e 67 74 68 20 24 66 string length$f
57e0: 6c 61 67 73 5d 20 3e 20 30 7d 20 74 68 65 6e 20 lags] > 0} then
57f0: 7b 0d 0a 20 20 20 20 20 20 20 20 61 70 70 65 6e {.. appen
5800: 64 20 63 6f 6e 6e 65 63 74 69 6f 6e 20 7b 3b 46 d connection {;F
5810: 6c 61 67 73 3d 24 7b 66 6c 61 67 73 7d 7d 0d 0a lags=${flags}}.. 5820: 20 20 20 20 20 20 7d 0d 0a 0d 0a 20 20 20 20 20 }.... 5830: 20 23 0d 0a 20 20 20 20 20 20 23 20 4e 4f 54 45 #.. # NOTE 5840: 3a 20 49 66 20 74 68 65 20 63 61 6c 6c 65 72 20 : If the caller 5850: 73 70 65 63 69 66 69 65 64 20 61 6e 20 65 78 74 specified an ext 5860: 72 61 20 70 61 79 6c 6f 61 64 20 74 6f 20 74 68 ra payload to th 5870: 65 20 63 6f 6e 6e 65 63 74 69 6f 6e 20 73 74 72 e connection str 5880: 69 6e 67 2c 0d 0a 20 20 20 20 20 20 23 20 20 20 ing,.. # 5890: 20 20 20 20 61 70 70 65 6e 64 20 69 74 20 6e 6f append it no 58a0: 77 2e 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 w... #.. 58b0: 20 20 20 69 66 20 7b 5b 73 74 72 69 6e 67 20 6c if {[string l 58c0: 65 6e 67 74 68 20 24 65 78 74 72 61 5d 20 3e 20 ength$extra] >
58d0: 30 7d 20 74 68 65 6e 20 7b 0d 0a 20 20 20 20 20 0} then {..
58e0: 20 20 20 61 70 70 65 6e 64 20 63 6f 6e 6e 65 63 append connec
58f0: 74 69 6f 6e 20 5c 3b 20 24 65 78 74 72 61 0d 0a tion \; $extra.. 5900: 20 20 20 20 20 20 7d 0d 0a 0d 0a 20 20 20 20 20 }.... 5910: 20 23 0d 0a 20 20 20 20 20 20 23 20 4e 4f 54 45 #.. # NOTE 5920: 3a 20 4f 70 65 6e 20 74 68 65 20 64 61 74 61 62 : Open the datab 5930: 61 73 65 20 63 6f 6e 6e 65 63 74 69 6f 6e 20 6e ase connection n 5940: 6f 77 2c 20 70 6c 61 63 69 6e 67 20 74 68 65 20 ow, placing the 5950: 6f 70 61 71 75 65 20 68 61 6e 64 6c 65 20 76 61 opaque handle va 5960: 6c 75 65 0d 0a 20 20 20 20 20 20 23 20 20 20 20 lue.. # 5970: 20 20 20 69 6e 74 6f 20 74 68 65 20 76 61 72 69 into the vari 5980: 61 62 6c 65 20 73 70 65 63 69 66 69 65 64 20 62 able specified b 5990: 79 20 74 68 65 20 63 61 6c 6c 65 72 2e 0d 0a 20 y the caller... 59a0: 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 73 65 #.. se 59b0: 74 20 64 62 20 5b 73 71 6c 20 6f 70 65 6e 20 2d t db [sql open - 59c0: 74 79 70 65 20 53 51 4c 69 74 65 20 5b 73 75 62 type SQLite [sub 59d0: 73 74 20 24 63 6f 6e 6e 65 63 74 69 6f 6e 5d 5d st$connection]]
59e0: 0d 0a 20 20 20 20 7d 0d 0a 0c 0d 0a 20 20 20 20 .. }.....
59f0: 70 72 6f 63 20 63 6c 65 61 6e 75 70 44 62 20 7b proc cleanupDb {
5a00: 20 66 69 6c 65 4e 61 6d 65 20 7b 76 61 72 4e 61 fileName {varNa
5a10: 6d 65 20 64 62 7d 20 7b 63 6f 6c 6c 65 63 74 20 me db} {collect
5a20: 74 72 75 65 7d 20 7b 71 75 61 6c 69 66 79 20 74 true} {qualify t
5a30: 72 75 65 7d 0d 0a 20 20 20 20 20 20 20 20 20 20 rue}..
5a40: 20 20 20 20 20 20 20 20 20 20 20 7b 64 65 6c 65 {dele
5a50: 74 65 20 74 72 75 65 7d 20 7d 20 7b 0d 0a 20 20 te true} } {..
5a60: 20 20 20 20 23 0d 0a 20 20 20 20 20 20 23 20 4e #.. # N
5a70: 4f 54 45 3a 20 41 74 74 65 6d 70 74 20 74 6f 20 OTE: Attempt to
5a80: 66 6f 72 63 65 20 61 6c 6c 20 70 65 6e 64 69 6e force all pendin
5a90: 67 20 22 67 61 72 62 61 67 65 22 20 6f 62 6a 65 g "garbage" obje
5aa0: 63 74 73 20 74 6f 20 62 65 20 63 6f 6c 6c 65 63 cts to be collec
5ab0: 74 65 64 2c 0d 0a 20 20 20 20 20 20 23 20 20 20 ted,.. #
5ac0: 20 20 20 20 69 6e 63 6c 75 64 69 6e 67 20 53 51 including SQ
5ad0: 4c 69 74 65 20 73 74 61 74 65 6d 65 6e 74 73 20 Lite statements
5ae0: 61 6e 64 20 62 61 63 6b 75 70 20 6f 62 6a 65 63 and backup objec
5af0: 74 73 3b 20 74 68 69 73 20 73 68 6f 75 6c 64 20 ts; this should
5b00: 61 6c 6c 6f 77 0d 0a 20 20 20 20 20 20 23 20 20 allow.. #
5b10: 20 20 20 20 20 74 68 65 20 75 6e 64 65 72 6c 79 the underly
5b20: 69 6e 67 20 64 61 74 61 62 61 73 65 20 66 69 6c ing database fil
5b30: 65 20 74 6f 20 62 65 20 64 65 6c 65 74 65 64 2e e to be deleted.
5b40: 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 .. #..
5b50: 20 69 66 20 7b 24 63 6f 6c 6c 65 63 74 20 26 26 if {$collect && 5b60: 20 5c 0d 0a 20 20 20 20 20 20 20 20 20 20 5b 63 \.. [c 5b70: 61 74 63 68 20 7b 6f 62 6a 65 63 74 20 69 6e 76 atch {object inv 5b80: 6f 6b 65 20 47 43 20 47 65 74 54 6f 74 61 6c 4d oke GC GetTotalM 5b90: 65 6d 6f 72 79 20 74 72 75 65 7d 20 65 72 72 6f emory true} erro 5ba0: 72 5d 7d 20 74 68 65 6e 20 7b 0d 0a 20 20 20 20 r]} then {.. 5bb0: 20 20 20 20 74 70 75 74 73 20 24 63 68 61 6e 6e tputs$chann
5bc0: 65 6c 20 5b 61 70 70 65 6e 64 41 72 67 73 20 5c el [appendArgs \
5bd0: 0d 0a 20 20 20 20 20 20 20 20 20 20 20 20 22 3d .. "=
5be0: 3d 3d 3d 20 57 41 52 4e 49 4e 47 3a 20 66 61 69 === WARNING: fai
5bf0: 6c 65 64 20 66 75 6c 6c 20 67 61 72 62 61 67 65 led full garbage
5c00: 20 63 6f 6c 6c 65 63 74 69 6f 6e 2c 20 65 72 72 collection, err
5c10: 6f 72 3a 20 22 20 5c 0d 0a 20 20 20 20 20 20 20 or: " \..
5c20: 20 20 20 20 20 5c 6e 5c 74 20 24 65 72 72 6f 72 \n\t $error 5c30: 20 5c 6e 5d 0d 0a 20 20 20 20 20 20 7d 0d 0a 0d \n].. }... 5c40: 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 . #.. 5c50: 23 20 4e 4f 54 45 3a 20 52 65 66 65 72 20 74 6f # NOTE: Refer to 5c60: 20 74 68 65 20 73 70 65 63 69 66 69 65 64 20 76 the specified v 5c70: 61 72 69 61 62 6c 65 20 28 65 2e 67 2e 20 22 64 ariable (e.g. "d 5c80: 62 22 29 20 69 6e 20 74 68 65 20 63 6f 6e 74 65 b") in the conte 5c90: 78 74 20 6f 66 20 74 68 65 0d 0a 20 20 20 20 20 xt of the.. 5ca0: 20 23 20 20 20 20 20 20 20 63 61 6c 6c 65 72 2e # caller. 5cb0: 20 20 54 68 65 20 68 61 6e 64 6c 65 20 74 6f 20 The handle to 5cc0: 74 68 65 20 6f 70 65 6e 65 64 20 64 61 74 61 62 the opened datab 5cd0: 61 73 65 20 69 73 20 73 74 6f 72 65 64 20 74 68 ase is stored th 5ce0: 65 72 65 2e 0d 0a 20 20 20 20 20 20 23 0d 0a 20 ere... #.. 5cf0: 20 20 20 20 20 75 70 76 61 72 20 31 20 24 76 61 upvar 1$va
5d00: 72 4e 61 6d 65 20 64 62 0d 0a 0d 0a 20 20 20 20 rName db....
5d10: 20 20 23 0d 0a 20 20 20 20 20 20 23 20 4e 4f 54 #.. # NOT
5d20: 45 3a 20 43 6c 6f 73 65 20 74 68 65 20 63 6f 6e E: Close the con
5d30: 6e 65 63 74 69 6f 6e 20 74 6f 20 74 68 65 20 64 nection to the d
5d40: 61 74 61 62 61 73 65 20 6e 6f 77 2e 20 20 54 68 atabase now. Th
5d50: 69 73 20 73 68 6f 75 6c 64 20 61 6c 6c 6f 77 20 is should allow
5d60: 75 73 20 74 6f 0d 0a 20 20 20 20 20 20 23 20 20 us to.. #
5d70: 20 20 20 20 20 64 65 6c 65 74 65 20 74 68 65 20 delete the
5d80: 75 6e 64 65 72 6c 79 69 6e 67 20 64 61 74 61 62 underlying datab
5d90: 61 73 65 20 66 69 6c 65 2e 0d 0a 20 20 20 20 20 ase file...
5da0: 20 23 0d 0a 20 20 20 20 20 20 69 66 20 7b 5b 69 #.. if {[i
5db0: 6e 66 6f 20 65 78 69 73 74 73 20 64 62 5d 20 26 nfo exists db] &
5dc0: 26 20 5b 63 61 74 63 68 20 7b 73 71 6c 20 63 6c & [catch {sql cl
5dd0: 6f 73 65 20 24 64 62 7d 20 65 72 72 6f 72 5d 7d ose $db} error]} 5de0: 20 74 68 65 6e 20 7b 0d 0a 20 20 20 20 20 20 20 then {.. 5df0: 20 23 0d 0a 20 20 20 20 20 20 20 20 23 20 4e 4f #.. # NO 5e00: 54 45 3a 20 57 65 20 73 6f 6d 65 68 6f 77 20 66 TE: We somehow f 5e10: 61 69 6c 65 64 20 74 6f 20 63 6c 6f 73 65 20 74 ailed to close t 5e20: 68 65 20 64 61 74 61 62 61 73 65 2c 20 72 65 70 he database, rep 5e30: 6f 72 74 20 77 68 79 2e 0d 0a 20 20 20 20 20 20 ort why... 5e40: 20 20 23 0d 0a 20 20 20 20 20 20 20 20 74 70 75 #.. tpu 5e50: 74 73 20 24 3a 3a 74 65 73 74 5f 63 68 61 6e 6e ts$::test_chann
5e60: 65 6c 20 5b 61 70 70 65 6e 64 41 72 67 73 20 5c el [appendArgs \
5e70: 0d 0a 20 20 20 20 20 20 20 20 20 20 20 20 22 3d .. "=
5e80: 3d 3d 3d 20 57 41 52 4e 49 4e 47 3a 20 66 61 69 === WARNING: fai
5e90: 6c 65 64 20 74 6f 20 63 6c 6f 73 65 20 64 61 74 led to close dat
5ea0: 61 62 61 73 65 20 5c 22 22 20 24 64 62 20 22 5c abase \"" $db "\ 5eb0: 22 2c 20 65 72 72 6f 72 3a 20 22 20 5c 0d 0a 20 ", error: " \.. 5ec0: 20 20 20 20 20 20 20 20 20 20 20 5c 6e 5c 74 20 \n\t 5ed0: 24 65 72 72 6f 72 20 5c 6e 5d 0d 0a 20 20 20 20$error \n]..
5ee0: 20 20 7d 0d 0a 0d 0a 20 20 20 20 20 20 23 0d 0a }.... #..
5ef0: 20 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 46 69 # NOTE: Fi
5f00: 72 73 74 2c 20 73 65 65 20 69 66 20 74 68 65 20 rst, see if the
5f10: 63 61 6c 6c 65 72 20 68 61 73 20 72 65 71 75 65 caller has reque
5f20: 73 74 65 64 20 61 6e 20 69 6e 2d 6d 65 6d 6f 72 sted an in-memor
5f30: 79 20 64 61 74 61 62 61 73 65 2e 0d 0a 20 20 20 y database...
5f40: 20 20 20 23 0d 0a 20 20 20 20 20 20 73 65 74 20 #.. set
5f50: 69 73 4d 65 6d 6f 72 79 20 5b 69 73 4d 65 6d 6f isMemory [isMemo
5f60: 72 79 44 62 20 24 66 69 6c 65 4e 61 6d 65 5d 0d ryDb $fileName]. 5f70: 0a 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 20 ... #.. 5f80: 20 20 23 20 4e 4f 54 45 3a 20 42 75 69 6c 64 20 # NOTE: Build 5f90: 74 68 65 20 66 75 6c 6c 20 70 61 74 68 20 74 6f the full path to 5fa0: 20 74 68 65 20 64 61 74 61 62 61 73 65 20 66 69 the database fi 5fb0: 6c 65 20 6e 61 6d 65 2e 20 20 46 6f 72 20 6e 6f le name. For no 5fc0: 77 2c 20 61 6c 6c 20 74 65 73 74 0d 0a 20 20 20 w, all test.. 5fd0: 20 20 20 23 20 20 20 20 20 20 20 64 61 74 61 62 # datab 5fe0: 61 73 65 20 66 69 6c 65 73 20 61 72 65 20 73 74 ase files are st 5ff0: 6f 72 65 64 20 69 6e 20 74 68 65 20 74 65 6d 70 ored in the temp 6000: 6f 72 61 72 79 20 64 69 72 65 63 74 6f 72 79 2e orary directory. 6010: 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 .. #.. 6020: 20 69 66 20 7b 21 24 69 73 4d 65 6d 6f 72 79 20 if {!$isMemory
6030: 26 26 20 24 71 75 61 6c 69 66 79 7d 20 74 68 65 && $qualify} the 6040: 6e 20 7b 0d 0a 20 20 20 20 20 20 20 20 73 65 74 n {.. set 6050: 20 66 69 6c 65 4e 61 6d 65 20 5b 66 69 6c 65 20 fileName [file 6060: 6a 6f 69 6e 20 5b 67 65 74 44 61 74 61 62 61 73 join [getDatabas 6070: 65 44 69 72 65 63 74 6f 72 79 5d 20 5b 66 69 6c eDirectory] [fil 6080: 65 20 74 61 69 6c 20 24 66 69 6c 65 4e 61 6d 65 e tail$fileName
6090: 5d 5d 0d 0a 20 20 20 20 20 20 7d 0d 0a 0d 0a 20 ]].. }....
60a0: 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 23 20 #.. #
60b0: 4e 4f 54 45 3a 20 43 68 65 63 6b 20 69 66 20 74 NOTE: Check if t
60c0: 68 65 20 66 69 6c 65 20 73 74 69 6c 6c 20 65 78 he file still ex
60d0: 69 73 74 73 2e 0d 0a 20 20 20 20 20 20 23 0d 0a ists... #..
60e0: 20 20 20 20 20 20 69 66 20 7b 21 24 69 73 4d 65 if {!$isMe 60f0: 6d 6f 72 79 20 26 26 20 24 64 65 6c 65 74 65 20 mory &&$delete
6100: 26 26 20 5b 66 69 6c 65 20 65 78 69 73 74 73 20 && [file exists
6110: 24 66 69 6c 65 4e 61 6d 65 5d 7d 20 74 68 65 6e $fileName]} then 6120: 20 7b 0d 0a 20 20 20 20 20 20 20 20 23 0d 0a 20 {.. #.. 6130: 20 20 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 53 # NOTE: S 6140: 6b 69 70 20 64 65 6c 65 74 69 6e 67 20 64 61 74 kip deleting dat 6150: 61 62 61 73 65 20 66 69 6c 65 73 20 69 66 20 73 abase files if s 6160: 6f 6d 65 62 6f 64 79 20 73 65 74 73 20 74 68 65 omebody sets the 6170: 20 67 6c 6f 62 61 6c 0d 0a 20 20 20 20 20 20 20 global.. 6180: 20 23 20 20 20 20 20 20 20 76 61 72 69 61 62 6c # variabl 6190: 65 20 74 6f 20 70 72 65 76 65 6e 74 20 69 74 2e e to prevent it. 61a0: 0d 0a 20 20 20 20 20 20 20 20 23 0d 0a 20 20 20 .. #.. 61b0: 20 20 20 20 20 69 66 20 7b 21 5b 69 6e 66 6f 20 if {![info 61c0: 65 78 69 73 74 73 20 3a 3a 6e 6f 28 63 6c 65 61 exists ::no(clea 61d0: 6e 75 70 44 62 29 5d 7d 20 74 68 65 6e 20 7b 0d nupDb)]} then {. 61e0: 0a 20 20 20 20 20 20 20 20 20 20 23 0d 0a 20 20 . #.. 61f0: 20 20 20 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 # NOTE: 6200: 41 74 74 65 6d 70 74 20 74 6f 20 64 65 6c 65 74 Attempt to delet 6210: 65 20 74 68 65 20 74 65 73 74 20 64 61 74 61 62 e the test datab 6220: 61 73 65 20 66 69 6c 65 20 6e 6f 77 2e 0d 0a 20 ase file now... 6230: 20 20 20 20 20 20 20 20 20 23 0d 0a 20 20 20 20 #.. 6240: 20 20 20 20 20 20 69 66 20 7b 5b 73 65 74 20 63 if {[set c 6250: 6f 64 65 20 5b 63 61 74 63 68 20 7b 66 69 6c 65 ode [catch {file 6260: 20 64 65 6c 65 74 65 20 24 66 69 6c 65 4e 61 6d delete$fileNam
6270: 65 7d 20 65 72 72 6f 72 5d 5d 7d 20 74 68 65 6e e} error]]} then
6280: 20 7b 0d 0a 20 20 20 20 20 20 20 20 20 20 20 20 {..
6290: 23 0d 0a 20 20 20 20 20 20 20 20 20 20 20 20 23 #.. #
62a0: 20 4e 4f 54 45 3a 20 57 65 20 73 6f 6d 65 68 6f NOTE: We someho
62b0: 77 20 66 61 69 6c 65 64 20 74 6f 20 64 65 6c 65 w failed to dele
62c0: 74 65 20 74 68 65 20 66 69 6c 65 2c 20 72 65 70 te the file, rep
62d0: 6f 72 74 20 77 68 79 2e 0d 0a 20 20 20 20 20 20 ort why...
62e0: 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 20 #..
62f0: 20 20 20 20 20 74 70 75 74 73 20 24 3a 3a 74 65 tputs $::te 6300: 73 74 5f 63 68 61 6e 6e 65 6c 20 5b 61 70 70 65 st_channel [appe 6310: 6e 64 41 72 67 73 20 5c 0d 0a 20 20 20 20 20 20 ndArgs \.. 6320: 20 20 20 20 20 20 20 20 20 20 22 3d 3d 3d 3d 20 "==== 6330: 57 41 52 4e 49 4e 47 3a 20 66 61 69 6c 65 64 20 WARNING: failed 6340: 74 6f 20 64 65 6c 65 74 65 20 64 61 74 61 62 61 to delete databa 6350: 73 65 20 66 69 6c 65 20 5c 22 22 20 24 66 69 6c se file \""$fil
6360: 65 4e 61 6d 65 20 5c 0d 0a 20 20 20 20 20 20 20 eName \..
6370: 20 20 20 20 20 20 20 20 20 22 5c 22 20 64 75 72 "\" dur
6380: 69 6e 67 20 63 6c 65 61 6e 75 70 2c 20 65 72 72 ing cleanup, err
6390: 6f 72 3a 20 22 20 5c 6e 5c 74 20 24 65 72 72 6f or: " \n\t $erro 63a0: 72 20 5c 6e 5d 0d 0a 20 20 20 20 20 20 20 20 20 r \n].. 63b0: 20 7d 0d 0a 20 20 20 20 20 20 20 20 7d 20 65 6c }.. } el 63c0: 73 65 20 7b 0d 0a 20 20 20 20 20 20 20 20 20 20 se {.. 63d0: 23 0d 0a 20 20 20 20 20 20 20 20 20 20 23 20 4e #.. # N 63e0: 4f 54 45 3a 20 53 68 6f 77 20 74 68 61 74 20 77 OTE: Show that w 63f0: 65 20 73 6b 69 70 70 65 64 20 64 65 6c 65 74 69 e skipped deleti 6400: 6e 67 20 74 68 65 20 66 69 6c 65 2e 0d 0a 20 20 ng the file... 6410: 20 20 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 #.. 6420: 20 20 20 20 20 73 65 74 20 63 6f 64 65 20 30 0d set code 0. 6430: 0a 0d 0a 20 20 20 20 20 20 20 20 20 20 74 70 75 ... tpu 6440: 74 73 20 24 3a 3a 74 65 73 74 5f 63 68 61 6e 6e ts$::test_chann
6450: 65 6c 20 5b 61 70 70 65 6e 64 41 72 67 73 20 5c el [appendArgs \
6460: 0d 0a 20 20 20 20 20 20 20 20 20 20 20 20 20 20 ..
6470: 22 3d 3d 3d 3d 20 57 41 52 4e 49 4e 47 3a 20 73 "==== WARNING: s
6480: 6b 69 70 70 65 64 20 64 65 6c 65 74 69 6e 67 20 kipped deleting
6490: 64 61 74 61 62 61 73 65 20 66 69 6c 65 20 5c 22 database file \"
64a0: 22 20 24 66 69 6c 65 4e 61 6d 65 20 5c 0d 0a 20 " $fileName \.. 64b0: 20 20 20 20 20 20 20 20 20 20 20 20 20 22 5c 22 "\" 64c0: 20 64 75 72 69 6e 67 20 63 6c 65 61 6e 75 70 5c during cleanup\ 64d0: 6e 22 5d 0d 0a 20 20 20 20 20 20 20 20 7d 0d 0a n"].. }.. 64e0: 20 20 20 20 20 20 7d 20 65 6c 73 65 20 7b 0d 0a } else {.. 64f0: 20 20 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 #.. 6500: 20 20 20 23 20 4e 4f 54 45 3a 20 54 68 65 20 66 # NOTE: The f 6510: 69 6c 65 20 64 6f 65 73 20 6e 6f 74 20 65 78 69 ile does not exi 6520: 73 74 2c 20 73 75 63 63 65 73 73 21 0d 0a 20 20 st, success!.. 6530: 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 20 #.. 6540: 20 73 65 74 20 63 6f 64 65 20 30 0d 0a 20 20 20 set code 0.. 6550: 20 20 20 7d 0d 0a 0d 0a 20 20 20 20 20 20 72 65 }.... re 6560: 74 75 72 6e 20 24 63 6f 64 65 0d 0a 20 20 20 20 turn$code..
6570: 7d 0d 0a 0c 0d 0a 20 20 20 20 70 72 6f 63 20 63 }..... proc c
6580: 6c 65 61 6e 75 70 46 69 6c 65 20 7b 20 66 69 6c leanupFile { fil
6590: 65 4e 61 6d 65 20 7b 63 6f 6c 6c 65 63 74 20 74 eName {collect t
65a0: 72 75 65 7d 20 7b 66 6f 72 63 65 20 66 61 6c 73 rue} {force fals
65b0: 65 7d 20 7d 20 7b 0d 0a 20 20 20 20 20 20 23 0d e} } {.. #.
65c0: 0a 20 20 20 20 20 20 23 20 4e 4f 54 45 3a 20 41 . # NOTE: A
65d0: 74 74 65 6d 70 74 20 74 6f 20 66 6f 72 63 65 20 ttempt to force
65e0: 61 6c 6c 20 70 65 6e 64 69 6e 67 20 22 67 61 72 all pending "gar
65f0: 62 61 67 65 22 20 6f 62 6a 65 63 74 73 20 74 6f bage" objects to
6600: 20 62 65 20 63 6f 6c 6c 65 63 74 65 64 2c 0d 0a be collected,..
6610: 20 20 20 20 20 20 23 20 20 20 20 20 20 20 69 6e # in
6620: 63 6c 75 64 69 6e 67 20 53 51 4c 69 74 65 20 73 cluding SQLite s
6630: 74 61 74 65 6d 65 6e 74 73 20 61 6e 64 20 62 61 tatements and ba
6640: 63 6b 75 70 20 6f 62 6a 65 63 74 73 3b 20 74 68 ckup objects; th
6650: 69 73 20 73 68 6f 75 6c 64 20 61 6c 6c 6f 77 0d is should allow.
6660: 0a 20 20 20 20 20 20 23 20 20 20 20 20 20 20 74 . # t
6670: 68 65 20 75 6e 64 65 72 6c 79 69 6e 67 20 64 61 he underlying da
6680: 74 61 62 61 73 65 20 66 69 6c 65 20 74 6f 20 62 tabase file to b
6690: 65 20 64 65 6c 65 74 65 64 2e 0d 0a 20 20 20 20 e deleted...
66a0: 20 20 23 0d 0a 20 20 20 20 20 20 69 66 20 7b 24 #.. if {$66b0: 63 6f 6c 6c 65 63 74 20 26 26 20 5c 0d 0a 20 20 collect && \.. 66c0: 20 20 20 20 20 20 20 20 5b 63 61 74 63 68 20 7b [catch { 66d0: 6f 62 6a 65 63 74 20 69 6e 76 6f 6b 65 20 47 43 object invoke GC 66e0: 20 47 65 74 54 6f 74 61 6c 4d 65 6d 6f 72 79 20 GetTotalMemory 66f0: 74 72 75 65 7d 20 65 72 72 6f 72 5d 7d 20 74 68 true} error]} th 6700: 65 6e 20 7b 0d 0a 20 20 20 20 20 20 20 20 74 70 en {.. tp 6710: 75 74 73 20 24 63 68 61 6e 6e 65 6c 20 5b 61 70 uts$channel [ap
6720: 70 65 6e 64 41 72 67 73 20 5c 0d 0a 20 20 20 20 pendArgs \..
6730: 20 20 20 20 20 20 20 20 22 3d 3d 3d 3d 20 57 41 "==== WA
6740: 52 4e 49 4e 47 3a 20 66 61 69 6c 65 64 20 66 75 RNING: failed fu
6750: 6c 6c 20 67 61 72 62 61 67 65 20 63 6f 6c 6c 65 ll garbage colle
6760: 63 74 69 6f 6e 2c 20 65 72 72 6f 72 3a 20 22 20 ction, error: "
6770: 5c 0d 0a 20 20 20 20 20 20 20 20 20 20 20 20 5c \.. \
6780: 6e 5c 74 20 24 65 72 72 6f 72 20 5c 6e 5d 0d 0a n\t $error \n].. 6790: 20 20 20 20 20 20 7d 0d 0a 0d 0a 20 20 20 20 20 }.... 67a0: 20 23 0d 0a 20 20 20 20 20 20 23 20 4e 4f 54 45 #.. # NOTE 67b0: 3a 20 43 68 65 63 6b 20 69 66 20 74 68 65 20 66 : Check if the f 67c0: 69 6c 65 20 73 74 69 6c 6c 20 65 78 69 73 74 73 ile still exists 67d0: 2e 0d 0a 20 20 20 20 20 20 23 0d 0a 20 20 20 20 ... #.. 67e0: 20 20 69 66 20 7b 5b 66 69 6c 65 20 65 78 69 73 if {[file exis 67f0: 74 73 20 24 66 69 6c 65 4e 61 6d 65 5d 7d 20 74 ts$fileName]} t
6800: 68 65 6e 20 7b 0d 0a 20 20 20 20 20 20 20 20 23 hen {.. #
6810: 0d 0a 20 20 20 20 20 20 20 20 23 20 4e 4f 54 45 .. # NOTE
6820: 3a 20 53 6b 69 70 20 64 65 6c 65 74 69 6e 67 20 : Skip deleting
6830: 74 65 73 74 20 66 69 6c 65 73 20 69 66 20 73 6f test files if so
6840: 6d 65 62 6f 64 79 20 73 65 74 73 20 74 68 65 20 mebody sets the
6850: 67 6c 6f 62 61 6c 20 76 61 72 69 61 62 6c 65 0d global variable.
6860: 0a 20 20 20 20 20 20 20 20 23 20 20 20 20 20 20 . #
6870: 20 74 6f 20 70 72 65 76 65 6e 74 20 69 74 2e 0d to prevent it..
6880: 0a 20 20 20 20 20 20 20 20 23 0d 0a 20 20 20 20 . #..
6890: 20 20 20 20 69 66 20 7b 24 66 6f 72 63 65 20 7c if {$force | 68a0: 7c 20 21 5b 69 6e 66 6f 20 65 78 69 73 74 73 20 | ![info exists 68b0: 3a 3a 6e 6f 28 63 6c 65 61 6e 75 70 46 69 6c 65 ::no(cleanupFile 68c0: 29 5d 7d 20 74 68 65 6e 20 7b 0d 0a 20 20 20 20 )]} then {.. 68d0: 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 20 #.. 68e0: 20 20 20 23 20 4e 4f 54 45 3a 20 41 74 74 65 6d # NOTE: Attem 68f0: 70 74 20 74 6f 20 64 65 6c 65 74 65 20 74 68 65 pt to delete the 6900: 20 74 65 73 74 20 66 69 6c 65 20 6e 6f 77 2e 0d test file now.. 6910: 0a 20 20 20 20 20 20 20 20 20 20 23 0d 0a 20 20 . #.. 6920: 20 20 20 20 20 20 20 20 69 66 20 7b 5b 73 65 74 if {[set 6930: 20 63 6f 64 65 20 5b 63 61 74 63 68 20 7b 66 69 code [catch {fi 6940: 6c 65 20 64 65 6c 65 74 65 20 24 66 69 6c 65 4e le delete$fileN
6950: 61 6d 65 7d 20 65 72 72 6f 72 5d 5d 7d 20 74 68 ame} error]]} th
6960: 65 6e 20 7b 0d 0a 20 20 20 20 20 20 20 20 20 20 en {..
6970: 20 20 23 0d 0a 20 20 20 20 20 20 20 20 20 20 20 #..
6980: 20 23 20 4e 4f 54 45 3a 20 57 65 20 73 6f 6d 65 # NOTE: We some
6990: 68 6f 77 20 66 61 69 6c 65 64 20 74 6f 20 64 65 how failed to de
69a0: 6c 65 74 65 20 74 68 65 20 66 69 6c 65 2c 20 72 lete the file, r
69b0: 65 70 6f 72 74 20 77 68 79 2e 0d 0a 20 20 20 20 eport why...
69c0: 20 20 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 #..
69d0: 20 20 20 20 20 20 20 74 70 75 74 73 20 24 3a 3a tputs $:: 69e0: 74 65 73 74 5f 63 68 61 6e 6e 65 6c 20 5b 61 70 test_channel [ap 69f0: 70 65 6e 64 41 72 67 73 20 5c 0d 0a 20 20 20 20 pendArgs \.. 6a00: 20 20 20 20 20 20 20 20 20 20 20 20 22 3d 3d 3d "=== 6a10: 3d 20 57 41 52 4e 49 4e 47 3a 20 66 61 69 6c 65 = WARNING: faile 6a20: 64 20 74 6f 20 64 65 6c 65 74 65 20 74 65 73 74 d to delete test 6a30: 20 66 69 6c 65 20 5c 22 22 20 24 66 69 6c 65 4e file \""$fileN
6a40: 61 6d 65 20 5c 0d 0a 20 20 20 20 20 20 20 20 20 ame \..
6a50: 20 20 20 20 20 20 20 22 5c 22 20 64 75 72 69 6e "\" durin
6a60: 67 20 63 6c 65 61 6e 75 70 2c 20 65 72 72 6f 72 g cleanup, error
6a70: 3a 20 22 20 5c 6e 5c 74 20 24 65 72 72 6f 72 20 : " \n\t $error 6a80: 5c 6e 5d 0d 0a 20 20 20 20 20 20 20 20 20 20 7d \n].. } 6a90: 0d 0a 20 20 20 20 20 20 20 20 7d 20 65 6c 73 65 .. } else 6aa0: 20 7b 0d 0a 20 20 20 20 20 20 20 20 20 20 23 0d {.. #. 6ab0: 0a 20 20 20 20 20 20 20 20 20 20 23 20 4e 4f 54 . # NOT 6ac0: 45 3a 20 53 68 6f 77 20 74 68 61 74 20 77 65 20 E: Show that we 6ad0: 73 6b 69 70 70 65 64 20 64 65 6c 65 74 69 6e 67 skipped deleting 6ae0: 20 74 68 65 20 66 69 6c 65 2e 0d 0a 20 20 20 20 the file... 6af0: 20 20 20 20 20 20 23 0d 0a 20 20 20 20 20 20 20 #.. 6b00: 20 20 20 73 65 74 20 63 6f 64 65 20 30 0d 0a 0d set code 0... 6b10: 0a 20 20 20 20 20 20 20 20 20 20 74 70 75 74 73 . tputs 6b20: 20 24 3a 3a 74 65 73 74 5f 63 68 61 6e 6e 65 6c$::test_channel
6b30: 20 5b 61 70 70 65 6e 64 41 72 67 73 20 5c 0d 0a [appendArgs \..
6b40: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 22 3d "=
6b50: 3d 3d 3d 20 57 41 52 4e 49 4e 47 3a 20 73 6b 69 === WARNING: ski
6b60: 70 70 65 64 20 64 65 6c 65 74 69 6e 67 20 74 65 pped deleting te
6b70: 73 74 20 66 69 6c 65 20 5c 22 22 20 24 66 69 6c st file \"" $fil 6b80: 65 4e 61 6d 65 20 5c 0d 0a 20 20 20 20 20 20 20 eName \.. 6b90: 20 20 20 20 20 20 20 22 5c 22 20 64 75 72 69 6e "\" durin 6ba0: 67 20 63 6c 65 61 6e 75 70 5c 6e 22 5d 0d 0a 20 g cleanup\n"].. 6bb0: 20 20 20 20 20 20 20 7d 0d 0a 20 20 20 20 20 20 }.. 6bc0: 7d 20 65 6c 73 65 20 7b 0d 0a 20 20 20 20 20 20 } else {.. 6bd0: 20 20 23 0d 0a 20 20 20 20 20 20 20 20 23 20 4e #.. # N 6be0: 4f 54 45 3a 20 54 68 65 20 66 69 6c 65 20 64 6f OTE: The file do 6bf0: 65 73 20 6e 6f 74 20 65 78 69 73 74 2c 20 73 75 es not exist, su 6c00: 63 63 65 73 73 21 0d 0a 20 20 20 20 20 20 20 20 ccess!.. 6c10: 23 0d 0a 20 20 20 20 20 20 20 20 73 65 74 20 63 #.. set c 6c20: 6f 64 65 20 30 0d 0a 20 20 20 20 20 20 7d 0d 0a ode 0.. }.. 6c30: 0d 0a 20 20 20 20 20 20 72 65 74 75 72 6e 20 24 .. return$
6c40: 63 6f 64 65 0d 0a 20 20 20 20 7d 0d 0a 0c 0d 0a code.. }.....
6c50: 20 20 20 20 70 72 6f 63 20 72 65 70 6f 72 74 53 proc reportS
6c60: 51 4c 69 74 65 52 65 73 6f 75 72 63 65 73 20 7b QLiteResources {
6c70: 20 63 68 61 6e 6e 65 6c 20 7b 71 75 69 65 74 20 channel {quiet
6c80: 66 61 6c 73 65 7d 20 7b 63 6f 6c 6c 65 63 74 20 false} {collect
6c90: 74 72 75 65 7d 20 7d 20 7b 0d 0a 20 20 20 20 20 true} } {..
6ca0: 20 23 0d 0a 20 20 20 20 20 20 23 20 4e 4f 54 45 #.. # NOTE
6cb0: 3a 20 53 6b 69 70 20 61 6c 6c 20 6f 75 74 70 75 : Skip all outpu
6cc0: 74 20 69 66 20 77 65 20 61 72 65 20 72 75 6e 6e t if we are runn
6cd0: 69 6e 67 20 69 6e 20 22 71 75 69 65 74 22 20 6d ing in "quiet" m
6ce0: 6f 64 65 2e 0d 0a 20 20 20 20 20 20 23 0d 0a 20 ode... #..
6cf0: 20 20 20 20 20 69 66 20 7b 21 24 71 75 69 65 74 if {!$quiet 6d00: 7d 20 74 68 65 6e 20 7b 0d 0a 20 20 20 20 20 20 } then {.. 6d10: 20 20 74 70 75 74 73 20 24 63 68 61 6e 6e 65 6c tputs$channel
        tputs $channel "---- current memory in use by SQLite... "
      }

      if {[catch {object invoke -flags +NonPublic \
              System.Data.SQLite.UnsafeNativeMethods \
              sqlite3_memory_used} memory] == 0} then {
        if {!$quiet} then {
          tputs $channel [appendArgs $memory " bytes\n"]
        }
      } else {
        #
        # NOTE: Maybe the SQLite native library is unavailable?
        #
        set memory unknown

        if {!$quiet} then {
          tputs $channel [appendArgs $memory \n]
        }
      }

      set result $memory; # NOTE: Return memory in-use to caller.

      if {!$quiet} then {
        tputs $channel "---- maximum memory in use by SQLite... "
      }

      if {[catch {object invoke -flags +NonPublic \
              System.Data.SQLite.UnsafeNativeMethods \
              sqlite3_memory_highwater 0} memory] == 0} then {
        if {!$quiet} then {
          tputs $channel [appendArgs $memory " bytes\n"]
        }
      } else {
        #
        # NOTE: Maybe the SQLite native library is unavailable?
        #
        set memory unknown

        if {!$quiet} then {
          tputs $channel [appendArgs $memory \n]
        }
      }

      if {$collect} then {
        if {[catch {object invoke GC GetTotalMemory true} error]} then {
          tputs $channel [appendArgs \
              "==== WARNING: failed full garbage collection, error: " \
              \n\t $error \n]
        }
      }

      if {!$quiet} then {
        tputs $channel "---- current memory in use by the CLR... "
      }

      if {[catch {object invoke GC GetTotalMemory false} memory] == 0} then {
        if {[string is integer -strict $memory]} then {
          if {!$quiet} then {
            tputs $channel [appendArgs $memory " bytes\n"]
          }
        } else {
          set memory invalid

          if {!$quiet} then {
            tputs $channel [appendArgs $memory \n]
          }
        }
      } else {
        set memory unknown

        if {!$quiet} then {
          tputs $channel [appendArgs $memory \n]
        }
      }

      return $result
    }

    proc runSQLiteTestPrologue {} {
      #
      # NOTE: Skip running our custom prologue if the main one has been skipped.
      #
      if {![info exists ::no(prologue.eagle)]} then {
        #
        # NOTE: Skip all System.Data.SQLite related file handling (deleting,
        #       copying, and loading) if we are so instructed.
        #
        if {![info exists ::no(sqliteFiles)]} then {
          #
          # NOTE: Skip trying to delete any files if we are so instructed.
          #
          if {![info exists ::no(deleteSqliteFiles)]} then {
            tryDeleteAssembly sqlite3.dll
            removeConstraint file_sqlite3.dll

            tryDeleteAssembly SQLite.Interop.dll
            removeConstraint file_SQLite.Interop.dll

            tryDeleteAssembly System.Data.SQLite.dll
            removeConstraint file_System.Data.SQLite.dll

            tryDeleteAssembly System.Data.SQLite.Linq.dll
            removeConstraint file_System.Data.SQLite.Linq.dll
          }

          #
          # NOTE: Skip trying to copy any files if we are so instructed.
          #
          if {![info exists ::no(copySqliteFiles)]} then {
            tryCopyAssembly sqlite3.dll
            tryCopyAssembly SQLite.Interop.dll
            tryCopyAssembly System.Data.SQLite.dll
            tryCopyAssembly System.Data.SQLite.Linq.dll
          }

          #
          # NOTE: Skip trying to load any files if we are so instructed.
          #
          if {![info exists ::no(loadSqliteFiles)]} then {
            tryLoadAssembly System.Data.SQLite.dll
            tryLoadAssembly System.Data.SQLite.Linq.dll
          }
        }

        catch {
          tputs $::test_channel [appendArgs \
              "---- file version of \"SQLite.Interop.dll\"... " \
              [file version [getBinaryFileName SQLite.Interop.dll]] \n]
        }

        catch {
          tputs $::test_channel [appendArgs \
              "---- file version of \"System.Data.SQLite.dll\"... " \
              [file version [getBinaryFileName System.Data.SQLite.dll]] \n]
        }

        catch {
          tputs $::test_channel [appendArgs \
              "---- file version of \"System.Data.SQLite.Linq.dll\"... " \
              [file version [getBinaryFileName System.Data.SQLite.Linq.dll]] \n]
        }

        set assemblies [object invoke AppDomain.CurrentDomain GetAssemblies]

        object foreach assembly $assemblies {
          if {[string match \{System.Data.SQLite* $assembly]} then {
            tputs $::test_channel [appendArgs \
                "---- found assembly: " $assembly \n]
          }
        }

        catch {
          tputs $::test_channel \
              "---- define constants for \"System.Data.SQLite\"... "

          if {[catch {object invoke -flags +NonPublic \
                  System.Data.SQLite.SQLite3 DefineConstants} \
                  defineConstants] == 0} then {
            tputs $::test_channel [appendArgs [formatList [lsort \
                $defineConstants]] \n]
          } else {
            tputs $::test_channel unknown\n
          }
        }

        #
        # NOTE: Now, we need to know if the SQLite core library is available
        #       (i.e. because the managed-only System.Data.SQLite assembly can
        #       load without it; however, it cannot do anything useful without
        #       it).  If we are using the mixed-mode assembly and we already
        #       found it (above), this should always succeed.
        #
        checkForSQLite $::test_channel

        #
        # NOTE: Check if the sqlite3_win32_set_directory function is available.
        #
        tputs $::test_channel \
            "---- checking for function sqlite3_win32_set_directory... "

        if {[catch {object invoke -flags +NonPublic \
                System.Data.SQLite.UnsafeNativeMethods \
                sqlite3_win32_set_directory 0 null}] == 0} then {
          #
          # NOTE: Calling the sqlite3_win32_set_directory function does not
          #       cause an exception; therefore, it must be available (i.e.
          #       even though it should return a failure return code in this
          #       case).
          #
          addConstraint sqlite3_win32_set_directory

          tputs $::test_channel yes\n
        } else {
          tputs $::test_channel no\n
        }

        #
        # NOTE: Attempt to determine if the custom extension functions were
        #       compiled into the SQLite interop assembly.
        #
        checkForSQLiteDefineConstant $::test_channel \
            CHECK_STATE

        checkForSQLiteDefineConstant $::test_channel \
            USE_INTEROP_DLL

        checkForSQLiteDefineConstant $::test_channel \
            INTEROP_EXTENSION_FUNCTIONS

        #
        # NOTE: Report the resource usage prior to running any tests.
        #
        reportSQLiteResources $::test_channel

        #
        # NOTE: Show the active test constraints.
        #
        tputs $::test_channel [appendArgs "---- constraints: " \
            [formatList [lsort [getConstraints]]] \n]

        #
        # NOTE: Show when our tests actually began (now).
        #
        tputs $::test_channel [appendArgs \
            "---- System.Data.SQLite tests began at " \
            [clock format [clock seconds]] \n]
      }
    }

    proc runSQLiteTestEpilogue {} {
      #
      # NOTE: Skip running our custom epilogue if the main one has been skipped.
      #
      if {![info exists ::no(epilogue.eagle)]} then {
        #
        # NOTE: Show when our tests actually ended (now).
        #
        tputs $::test_channel [appendArgs \
            "---- System.Data.SQLite tests ended at " \
            [clock format [clock seconds]] \n]

        #
        # NOTE: Also report the resource usage after running the tests.
        #
        reportSQLiteResources $::test_channel
      }
    }

    ###########################################################################
    ############################# END Eagle ONLY ##############################
    ###########################################################################
  }

  #
  # NOTE: Save the name of the directory containing this file.
  #
  if {![info exists ::common_directory]} then {
    set ::common_directory [file dirname [info script]]
8f10: 7d 0d 0a 0d 0a 20 20 23 0d 0a 20 20 23 20 4e 4f }.... #.. # NO
8f20: 54 45 3a 20 50 72 6f 76 69 64 65 20 74 68 65 20 TE: Provide the
8f30: 53 79 73 74 65 6d 2e 44 61 74 61 2e 53 51 4c 69 System.Data.SQLi
8f40: 74 65 20 74 65 73 74 20 70 61 63 6b 61 67 65 20 te test package
8f50: 74 6f 20 74 68 65 20 69 6e 74 65 72 70 72 65 74 to the interpret
8f60: 65 72 2e 0d 0a 20 20 23 0d 0a 20 20 70 61 63 6b er... #.. pack
8f70: 61 67 65 20 70 72 6f 76 69 64 65 20 53 79 73 74 age provide Syst
8f80: 65 6d 2e 44 61 74 61 2e 53 51 4c 69 74 65 2e 54 em.Data.SQLite.T
8f90: 65 73 74 20 31 2e 30 0d 0a 7d 0d 0a est 1.0..}..
|
|
# How to make a beep in android?
I would like my app to beep with a specific frequency and duration. In the Windows equivalent of this app (written in C#) I used a C++ DLL with the function
beep(frequency, duration);
Is this the same in android? Or at least how can I put my c++ dll in the project?
I would prefer not to use pre-built mp3's or system sound because I would like to give the user the choice of the frequency and duration.
Using your C++ code in the Android app is possible. You need to look at the Android NDK, which lets you execute C++ code with the help of JNI (Java Native Interface).
Android NDK
• Ok... Could you please explain what should I do? Thanks! – Cippo Aug 28 '12 at 8:16
• First of all you need to install the ndk. at the link i gave above should have the guide to let you on how to configure (which is pretty easy). and here is another link for ndk samples are the best way to start off.. link – Dilberted Aug 28 '12 at 8:30
• Thank you very much. Actually I can't try it now 'cause I'm programming on my android tablet without eclipse. I'll give a look at it anyway. Again thanks a lot! – Cippo Aug 28 '12 at 8:57
• no problem. have a good day ahead. – Dilberted Aug 28 '12 at 9:07
I tried amine.b's answer. In short, to play a loud Beep sound:
ToneGenerator toneG = new ToneGenerator(AudioManager.STREAM_ALARM, 100);
toneG.startTone(ToneGenerator.TONE_CDMA_ALERT_CALL_GUARD, 200); // duration in ms
• I like this one, it will actually give a beep instead of a ring tone :) – Chef Pharaoh Nov 19 '14 at 23:42
• Best would be to use a continuous tone like the ones from ToneGenerator.TONE_DTMF_0 to ToneGenerator.TONE_DTMF_S or else the generated beep may sound interrupted. – ungalcrys Feb 10 '15 at 15:20
The easy way is to use instance of ToneGenerator class:
// send the tone to the "alarm" stream (classic beeps go there) with 50% volume
ToneGenerator toneG = new ToneGenerator(AudioManager.STREAM_ALARM, 50);
if (val >= taux_max) {
taux_text.setTextColor(warnning_col);
toneG.startTone(ToneGenerator.TONE_CDMA_ALERT_CALL_GUARD, 200); // 200 is duration in ms
}
Please refer to the documentation of ToneGenerator and AudioManager for exact meaning of parameters and possible configuration of the generator.
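ToneGenerator only plays preset tones, so for a truly user-chosen frequency and duration another option is to synthesize the PCM samples yourself and play them with AudioTrack. A sketch (the TonePcm class and sineWave method are names made up for illustration, not from the thread); the sample math is plain Java, and the Android playback calls are shown in a comment:

```java
public class TonePcm {
    /** 16-bit PCM samples for a sine tone of the given frequency and duration. */
    public static short[] sineWave(double freqHz, int durationMs, int sampleRate) {
        int n = (int) ((long) sampleRate * durationMs / 1000);
        short[] samples = new short[n];
        for (int i = 0; i < n; i++) {
            double t = (double) i / sampleRate; // time of sample i in seconds
            samples[i] = (short) (Math.sin(2 * Math.PI * freqHz * t) * Short.MAX_VALUE);
        }
        return samples;
    }

    // On Android, hand the buffer to an AudioTrack in static mode:
    //
    //   short[] samples = TonePcm.sineWave(440, 500, 44100);
    //   AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
    //       AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
    //       samples.length * 2, AudioTrack.MODE_STATIC);
    //   track.write(samples, 0, samples.length);
    //   track.play();
}
```

This keeps the frequency and duration fully under the user's control, which the fixed ToneGenerator tones do not.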
|
|
# Jack Plate - Plug and Play, Mono / Stereo
$20.95

This product is currently out of stock and will be placed on backorder.

The Plug and Play eliminates the confusing switch found on most stereo cabinets. Just pick your impedance and plug in. The label is set at the common 4/16 Ohm Mono and 8 Ohm Stereo. Ready to install with a steel dish, wires, and speaker connectors. Direct replacement for many existing small jack dishes. Can be used for the following configurations:

• 4 ohm mono
• 8 ohm left + 8 ohm right stereo
• 8 ohm left only mono
• 8 ohm right only mono
• 16 ohm mono

The Plug and Play can be used with two 8 ohm speakers, four 4 ohm speakers, or four 16 ohm speakers. Please see Specifications, Files, and Documents for all recommended wiring configurations.

SKU: S-H700
Item ID: 004199
UPC/EAN: 609722156721
Interior Length: 3.0 in.
Interior Width: 3.44 in.
Item Length: 4.03 in.
Item Width: 4.35 in.
Lead Length: 18 in.
Mounting Hole Center to Center A: 3.72 in.
Mounting Hole Center to Center B: 3.25 in.
Packaging Dimensions: 4.4 in. × 4.1 in. × 2.1 in.
Weight (Packaging): 0.44 lbs.

## Specifications, Files, and Documents

Dimensions (28.44 KB)
Plug-In Schematic (35.03 KB)
Speaker Z Examples (25.35 KB)

## Questions and Answers

Asked by Anonymous on April 7th, 2016.
April 8th, 2016, Staff Member, Top Contributor: Yes, four 16 ohm speakers are optimal for the Plug and Play so that everything matches up in terms of how the inputs are labeled. Two 8 ohm or four 4 ohm speakers will also work.
April 8th, 2016: Ah ha. Thank you so much friend.
October 21st, 2016: I am using four 8 ohm speakers for a 2/4/8 ohm setup. Each speaker pair is wired in parallel.

Asked by Anonymous on August 26th, 2015.
August 28th, 2015, Staff Member, Top Contributor: As shown by the documents available above, the Plug and Play is designed for two or more speakers. A single 16 ohm speaker would use a standard mono jack wired directly.

Asked by Anonymous on November 22nd, 2015.
November 24th, 2015, Staff Member, Top Contributor: You would use standard speaker cables for all connections from the amp to the Plug and Play. Two cables would be needed for stereo operation.

Asked by Anonymous on February 9th, 2016.
February 9th, 2016, Staff Member, Top Contributor: You could wire it according to the wiring diagrams which are provided, but the labeling on the jack would not match.

Asked by Anonymous on February 9th, 2016.
February 9th, 2016, Staff Member, Top Contributor: It could be used that way but would not include a custom label, so the impedance markings would be incorrect.

Asked by Anonymous on April 27th, 2016.
April 29th, 2016, Staff Member, Top Contributor: With two 16 ohm speakers the writing on the plate wouldn't match, giving you 16 ohm left + right and an 8 ohm mono. The length of the leads is 18".

Asked by Anonymous on May 14th, 2016.
May 17th, 2016, Staff Member, Top Contributor: Yes, you can use two 8 ohm speakers, and a wiring diagram is contained in the PDFs on this page.

Asked by Anonymous on January 23rd, 2017.
January 24th, 2017, Staff Member: The depth of this jack plate is 1.66" (42.3mm).

Asked by Anonymous on June 1st, 2017.
June 1st, 2017, Staff Member: You can use the red and black wire and this should operate in mono at the designated ohms on the back of your speaker. You would only use the upper left-hand jack.

Asked by Anonymous on June 22nd, 2017.
June 23rd, 2017, Staff Member: This is designed for either a 2 or 4 speaker setup. We would not be able to advise you on setting this up outside of those configurations.

Asked by Anonymous on August 5th, 2017.
August 8th, 2017, Staff Member: As far as we know, you need 2 or 4 speakers and they all must have the same ohms.

Asked by Anonymous on August 9th, 2017.
August 10th, 2017, Staff Member: Yes, this is exactly what the Plug and Play is!

Asked by Anonymous on April 20th, 2018.
April 20th, 2018, Staff Member: With two 8 Ohm speakers in a cab it would work as follows: the left input would be 4 Ohm using both speakers (parallel wiring) or 8 Ohm when used in stereo (if two amps are being plugged into both the left and right input). When using two 8 Ohm speakers, the only mono options are 4 Ohm (top left jack) and 16 Ohm (bottom left jack). The only case in which it would work as you need would be if both speakers in your cab are 16 Ohm.

Asked by Anonymous on May 23rd, 2018.
May 24th, 2018, Staff Member: This jack allows the following combinations: 4 ohm mono, 8 ohm left + 8 ohm right stereo, 8 ohm left only mono, 8 ohm right only mono, 16 ohm mono.

Asked by Anonymous on June 2nd, 2018.
June 7th, 2018, Staff Member: Yes, that is correct. Simply place a spare 1/4" jack into one of the 8 ohm jacks to get 8 ohms mono.

Asked by Anonymous on June 11th, 2018.
June 12th, 2018, Staff Member: If you have two 8 ohm speakers and you place the black and red wire on one speaker and the white and green wire on the other speaker you will get: 4 ohm mono, 8 ohm left + 8 ohm right stereo, 8 ohm left only mono, 8 ohm right only mono, 16 ohm mono.

Asked by Anonymous on July 2nd, 2018.
July 3rd, 2018, Staff Member: Assuming the person you got this from wired it correctly, you would get the following: 8 ohm mono, 16 ohm left + 16 ohm right stereo, 16 ohm left only mono, 16 ohm right only mono, 32 ohm mono.

Asked by Anonymous on September 5th, 2018.
September 5th, 2018, Staff Member: The diagram shows how to wire four 16 Ohm speakers. So "Wire this way if your speakers are 16 Ohms" is accurate.

Asked by Anonymous on October 16th, 2018.
October 16th, 2018, Staff Member: The 16 Ohm input on this jack plate is for series while the 4 Ohm mono is for parallel.

Asked by Anonymous on June 19th, 2019.
June 19th, 2019, Staff Member: The two inputs are isolated from one another. Running two amps in stereo is safe with this jack plate.

Asked by Anonymous on January 20th, 2020.
January 21st, 2020: Using 2x 16 ohm speakers the labels on the plate wouldn't match the wiring, but would double the mono value to 32 ohms.

Asked by Anonymous on April 14th, 2020.
April 15th, 2020, Staff Member: Simply plug your amp into whichever speaker (left or right) that you wish to use and plug a dummy cable (a cable not plugged into anything on the other end) into the unused speaker jack. This will isolate whatever speaker is plugged into the amp.

Asked by Anonymous on June 3rd, 2020.
June 4th, 2020, Staff Member: If you have a 4 speaker cab with 4 Ohm speakers in it and wire it as shown in the 4x4 Ohm diagram, you will have the options shown on the plate. Those being 4 and 16 Ohm mono (top left and bottom jack) and the option to run two 8 Ohm loads in stereo. The terms stereo and mono have nothing to do with the number of speakers being used, rather how many amps are plugged into the cab at one time.

Asked by Anonymous on July 21st, 2020.
July 21st, 2020, Staff Member: This is designed for two eight Ohm speakers, four sixteen Ohm speakers, or four four Ohm speakers. Mixing an 8 and a 16 Ohm speaker is not suggested.

Asked by Anonymous on August 4th, 2020.
August 4th, 2020, Staff Member: The traces on the PCB are pretty heavy duty, so this should handle most anything in the guitar amp wattage territory (<200W) but may not be ideal for some pro audio applications where you are dealing with high wattages.

Asked by Anonymous on August 19th, 2020.
August 20th, 2020, Staff Member: That is the concept behind this jack plate. When plugging a cable into the top left jack while nothing is plugged in to the top right jack, it will read 4 Ohms. If a second cable is plugged into the top right jack (regardless of whether the other end of that cable is plugged into anything) it will read 8 Ohms. The switching is done by the connections being made and broken by the PCB inside the plate.

Asked by Anonymous on October 10th, 2020.
October 13th, 2020, Staff Member: You can use the hookup drawing located in the "Speaker Z Examples" document, found above in the "Specifications, Files, and Documents" section of the listing, for installation suggestions.

Asked by Anonymous on May 18th, 2021.
May 18th, 2021, Staff Member: This is a passive jack plate, and there isn't anything preventing one from using this plate with a cab that has two 16Ω speakers, but the labels on the plate will no longer be accurate.

Asked by Anonymous on July 24th, 2021.
July 28th, 2021, Staff Member: According to the manufacturer, the amp/speaker cab setup you described is the intended purpose of this jack plate, so yes, you should be able to use two amp heads in stereo using the top two inputs. That being said, we do not have any information from the manufacturer that describes in detail what is happening to the ground connections, so we can't comment on that.

Asked by nickwillk on June 29th, 2016.
No answers yet!

## Product Reviews

4.76 out of 5 based on 17 reviews

- May 10th, 2021
3 out of 5
I loved how easy this product was to install and to use. One caveat was that the right 8 ohm tap engaged the 8 ohm stereo feature. In comparison, on a Marshall 1960, you can run either side in 8 ohms, but with this jack plate, you can only run the right side on its own, and cannot run the left side on its own. Three star rating for the 16 ohm tap failing after 2.5 years of use. I used this in a 2x12 Avatar cab about once a week on average during this time. So that's like 130 times plugging in to it. It went to about 5 gigs during that time. I swapped speakers once during this time. Despite all that, if you only play at home, or where you have spare cabs, it's a great product. This is especially if you have some 4 ohm and 16 ohm heads or if you run two 8 ohm loads in stereo.
I don't trust this for gigs any more.

- January 29th, 2021
3 out of 5
It works, but for how long? I bought one of these for my speaker cabinet, so I could use different heads with one cabinet. A few months later the 16 ohm option failed. The jacks are proprietary and soldered to a PC board. Rather than replace the jack I bought a new unit for $20 plus shipping. I'm hoping for better luck with this one, but who knows? At least I have some parts to scavenge in case this one fails.
- December 23rd, 2020
5 out of 5
There's no guesswork with this jackplate - it adds a lot of versatility to your cab and is pretty difficult to mess up. The leads aren't terribly long so be careful how you orient your speaker leads. As always, AES is a pleasure to work with.
- December 5th, 2020
5 out of 5
Easy and fast to hook up. Works like a charm and way sturdier than the crap that M**shall uses
- June 28th, 2020
5 out of 5
Great Product. I use it on a Fender Bandmaster Cabinet set up with 1x12 8 ohm and 1x15 8 ohm.
- March 26th, 2020
5 out of 5
This jack plate works great! My 2x12 cabinet now has a lot more flexibility.
- March 17th, 2019
5 out of 5
These things are the \$#!+! I love the Plug and Play jack. Takes all the guess work outta hooking up my head to my 212 cabs. I bought two and installed them on both of my 212 cabs. No switches to fiddle with and try and figure out what's for stereo, and what's 4 or 16 ohm. It's as easy as plugging into the correctly rated jack. Done! Highly recommended to anyone who makes speaker changes or just wants ease of use. 4 different colored wires for hooking to your speakers, so no mistakes can be made if you can follow the simple color coded scheme.
- March 16th, 2019
5 out of 5
Easy to wire! Instructions are clear. Works great!
- September 17th, 2018
5 out of 5
Great product, does what it says it does, works fine, priced right - what more could you ask?
- July 31st, 2018
5 out of 5
Excellent product! Easy to install and works like a champ. Allows me to use any of my heads with one cabinet. I have two 2X12 cabs with this installed already and will be purchasing a third jack plate for my 4X12. Really does make your cabinet far more versatile and I have noticed zero degradation in sound quality.
|
|
## Chemistry: The Central Science (13th Edition)
1. Calculate $[AgIO_3]$ after the dilution:
$V_f = 10ml + 20ml = 30ml$
$C_i * V_i = C_f * V_f$
$0.01 * 20 = C_f * 30$
$C_f \approx 6.667 \times 10^{-3}M$

2. Calculate $[NaIO_3]$ after the dilution:
$C_i * V_i = C_f * V_f$
$0.015 * 10 = C_f * 30$
$C_f = 5 \times 10^{-3}M$

3. Find $[Ag^+]$:
$[Ag^+] = [AgIO_3] \approx 6.667 \times 10^{-3}M$

4. Find $[{IO_3}^-]$:
$[{IO_3}^-] = [AgIO_3] + [NaIO_3] = 6.667 \times 10^{-3} + 5 \times 10^{-3} = 1.167 \times 10^{-2}M$

5. Calculate the ion product of the concentrations for $AgIO_3$:
$P = [Ag^+][{IO_3}^-] = 6.667 \times 10^{-3} * 1.167 \times 10^{-2} = 7.780 \times 10^{-5}$

6. Compare this value with the Ksp:
$Ksp = 3.1 \times 10^{-8}$ and $P = 7.78 \times 10^{-5}$
Since $P > Ksp$, there will be precipitation.
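The dilution arithmetic and the comparison with Ksp can be reproduced in a few lines (a sketch; `dilute` is a helper named here for clarity, not from the textbook):

```python
def dilute(c_initial, v_initial, v_final):
    """Concentration after dilution, from C_i * V_i = C_f * V_f."""
    return c_initial * v_initial / v_final

v_f = 20 + 10                     # total volume after mixing, in mL
ag = dilute(0.010, 20, v_f)       # [Ag+], approx. 6.667e-3 M
na_io3 = dilute(0.015, 10, v_f)   # IO3- contributed by NaIO3, 5e-3 M
io3 = ag + na_io3                 # total [IO3-], approx. 1.167e-2 M

P = ag * io3                      # ion product [Ag+][IO3-]
Ksp = 3.1e-8
print(P > Ksp)  # True -> the solution is supersaturated and AgIO3 precipitates
```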
|
|
Logarithmic differentiation
1. Jun 19, 2008
Ry122
Question:
Find derivative of f(x)=((x^2)(x^3))/((x^4)(x^2))
Attempt:
ln f(x)=(lnx^2)+(lnx^3)-(lnx^4)-(lnx^2)
Can someone tell me what I have done wrong so far?
Thanks
2. Jun 19, 2008
rootX
forgot to take log of both sides?
bring your powers down.. oo well, u prolly know this
3. Jun 19, 2008
cristo
Staff Emeritus
Why don't you just simplify and then differentiate?
4. Jun 19, 2008
Ry122
I did take the log of both sides. I'm doing it this way because i need to learn this method of differentiation.
5. Jun 19, 2008
rootX
oo yep, just realized that.
everything looks good
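For reference, the result can also be checked numerically: simplifying, f(x) = x^5/x^6 = 1/x, and logarithmic differentiation gives f'(x) = f(x)*(2/x + 3/x - 4/x - 2/x) = -1/x^2. A quick sketch:

```python
def f(x):
    return (x**2 * x**3) / (x**4 * x**2)

def fprime_logdiff(x):
    # differentiate ln f(x) = 2 ln x + 3 ln x - 4 ln x - 2 ln x,
    # then multiply back by f(x): f'(x) = f(x) * d/dx [ln f(x)]
    return f(x) * (2/x + 3/x - 4/x - 2/x)

x, h = 1.7, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)       # central difference
assert abs(numeric - fprime_logdiff(x)) < 1e-6  # matches the derivative
assert abs(fprime_logdiff(x) - (-1 / x**2)) < 1e-12
```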
|
|
## Ph.D. Theses

Sebastian von Hausegger Steps towards a clear view of the Cosmic Microwave Background 2019
Fabian Thiele An ATLAS search for sterile neutrinos 2019
Gorm Galster The Central Trigger Processor of the ATLAS Experiment at the LHC and its Monitoring 2019
Lais Ozelin de Lima Pimentel Charged-particle multiplicity distributions in p-Pb collisions at √sNN = 5.02 TeV with ALICE 2018
Amel Durakovic On the Likely Structure and Origin of Primordial Fluctuations 2018
Katarina Gajdosova Investigations on collectivity in small and large collision systems at the LHC with ALICE 2018
Milena Bajic A search for lepton-flavor-violating decays of the Z boson into a $\tau$-lepton and a light lepton with the ATLAS detector in proton-proton collisions at √s = 13 TeV at the LHC 2018
Christian Bourjau Factorization of two-particle distributions measured in Pb-Pb collisions at √s = 5.02 TeV with the ALICE detector 2018
Michael James Larson A Search for Tau Neutrino Appearance with IceCube-DeepCore 2018
Laure Berthier Ultraviolet extensions of particle physics 2017
Jeppe Trøst Nielsen Testing Cosmological Models 2017
Simon Holm Stark 2017
Morten Ankersen Medici Search for Dark Matter Annihilation in the Galactic Halo using IceCube 2017
Christine Hartmann A Quest for New Physics - Loop Calculation of the Higgs Decay to Two Photons in the Standard Model Effective Field Theory 2015
Anne Mette Frejsel Large Scale Anomalies of the Cosmic Microwave Background with Planck 2015
Lars Egholm Pedersen Probing the nature of the Higgs Boson 2015
Almut Pingel Tau lepton identification and studies of associated Higgs boson production with the ATLAS detector 2015
Ask Emil Løvschall-Jensen Search for new physics in multilepton final-states using multivariate techniques 2015
Valentina Zaccolo Charged-Particle Multiplicity Distributions over Wide Pseudorapidity Range in Proton-Proton and Proton-Lead Collisions with ALICE 2015
Mads Søgaard Scattering Amplitudes via Algebraic Geometry Methods 2015
Laura Jenniches Understanding Theoretical Uncertainties in Perturbative QCD Computations 2015
Morten Dam Jørgensen 2014
Hjalte Frellesvig Generalized Unitarity Cuts and Integrand Reduction at Higher Loop Orders 2014
Alexander Hansen Pseudorapidity Dependence of Anisotropic Azimuthal Flow with the ALICE Detector 2014
Lotte Ansgaard Thomsen A search for associated production of a SM Higgs decaying into tau leptons with the ATLAS experiment 2014
Rijun Huang Gauge and Gravity Amplitudes from trees to loops 2013
Sune Jakobsen Commissioning of the Absolute Luminosity For ATLAS detector at the LHC 2013
Kristian Gregersen Anomalous trilinear gauge couplings in ZZ production at the ATLAS experiment 2013
Simon Heisterkamp R-Hadron Search at ATLAS 2012
Peter Kadlecík A measurement of W+jet and Z+jet cross sections in the tau decay channel, and their ratio in the ATLAS experiment 2012
Pavel Jez A search for supersymmetric low mass Higgs boson at the LHC with the ATLAS detector 2011
Hans Hjersing Dalsgaard Pseudorapidity Densities in p+p and Pb+Pb collisions at LHC measured with the ALICE experiment 2011
## Master Theses
Stefan Hasselgren Using machine learning for improving identification of electrons in the ATLAS experiment 2018
Bjarke Enkelund A study of the influence of atmospheric conditions on cosmic rays 2018
Helle Gormsen Search for right-handed neutrinos with 79.8 fb-1 of data collected at √s = 13 TeV with the ATLAS detector 2018
Mikkel Jensen Environmentally induced neutrino decoherence in IceCube 2018
Troels Krogsbøll Development of Alternative Methods for Determining the Coefficients of the Azimuthal Distributions of Particles Produced in Heavy-Ion Collisions 2018
James Creswell Analysis of gravitational Waves from binary Black Hole Mergers 2018
Helene Ausar Investigating Linear and Non-Linear flow modes in p-Pb collisions at √sNN = 5.02 TeV in ALICE 2018
Jack Christopher Hutchinson Rolph Measurement of the total p-p cross section using the ALFA detector at √s = 13 TeV 2018
Sissel Bay Nielsen Prospects of Sterile Neutrino Search with the FCC-ee 2017
Emil Sørensen Bols Proton-Proton Central Exclusive Pion Production at √s = 13 TeV with the ALFA and ATLAS detector 2017
Henriette Petersen The search for right-handed neutrinos with the ATLAS experiment 2017
Nicolas Palm Perez Electron Identification Using Machine Learning in the ATLAS Experiment with 2016 Data 2017
Alexander Pedersen Lind A Study of Diffractive Scattering with the ATLAS and ALFA Experiment 2017
Erik Bærentsen Acquiring Accurate State Estimates For Use In Autonomous Flight 2017
Mike Lauge dE/dx Measurements in the ATLAS Detector 2017
Sara Buur Svendsen A Search for Lepton Flavour Violation in Z → π μ decays at √s = 13 TeV with the ATLAS Detector 2017
Daniel Stefaniak Nielsen An Alternative Analysis of the Semi-Leptonic Diboson Final States in the Boosted Regime 2016
Mikkel Bjørn The W Boson in Global Fits of Standard Model Effective Field Theory 2016
Freja Thoresen Wavelets & Information Theory for Pile-Up Removal 2016
Hans Lindbo Røpke-Haarslev Alanine Dosimetry using MV Photon Beams 2016
Samuel Stokholm Baxter The Search for Right Handed Neutrinos using the SHiP Experiment 2016
Adam Mielke Exact Zero Modes in Coupled Dirac Systems 2016
Stavros Kitsios Evaluation of the Muon Combinatorial Background at SHiP Experiment 2016
Eva Brottmann Hansen Early Atmospheric Muon Rejection with IceCube-PINGU 2016
Emil André On The Grassmannian Approach to Amplitudes 2015
Christian Brønnum Hansen Event-by-event discrimination using the Matrix Element Method at Next-to-Leading Order 2015
Asta Heinesen Scalar averaging in cosmology 2015
Anders Hammer Holm Starting Run 2 at the LHC – Statistical approaches to electron identification and detector alignment in ATLAS 2015
Christopher Robert Jacobsen Automated matrix-element re-weighting in effective field theories 2015
Kristoffer Levin Hansen Search for new physics in diphoton production with the ATLAS detector at the LHC 2015
Christian Baadsgaard Jepsen Amplitudes from string theory and CHY formalism 2015
Andreas Søgaard Boosted Bosons and Wavelets 2015
Jeppe Trøst Nielsen Supernovae and cosmological probes 2015
Gorm Aske Krohn Galster ATLAS Trigger: Preparation for Run II 2015
Patryk Kuzek Scattering Amplitudes: structural and analytical properties 2014
Rasmus Westphal Rasmussen Determination of the neutrino mixing angle theta_23 octant and differentiation among flavor symmetries 2014
Jochen Heinrich Reconstruction of boosted W± and Z0 bosons from fat jets 2014
Allan Finnich An investigation in introducing Matlab and data analysis in introductory physics 2014
Esben Bork Hansen Unpaired Majorana Fermions in Disordered p-wave Superconductor and Random Matrix Theory 2014
Peter D. Pedersen Large Nc QCD Lattice-Hypersphere correspondence and Chiral Condensate for Multi-Flavored QCD 2014
Alexander Christensen QCD Lattice-Hypersphere Correspondence and Gradient Flow in QCD 2014
Christian Caeser Measurement of Hard Double-Parton Interactions in Z → ll + 2 jet Events with the ATLAS Detector at √s = 7 TeV 2013
Karina Schifter Holm 2013
Maria Hoffmann Micromegas detectors for the upgrade of the ATLAS muon spectrometer 2013
Christine O. Rasmussen 2013
Bastian Poulsen CMB methods applied to flow data from ALICE 2013
Mikkel Skaarup Study of diffractive pp interactions at the LHC using a tagged proton 2013
Simon Stark Mortensen Kinematic reconstruction of diffractive processes with tagged protons in the ALFA detector at √s = 8 TeV 2013
Morten Ankersen Medici Diffraction with ALFA and ATLAS at √s = 8 TeV 2013
Anders Møllgaard Complex Langevin Dynamics and the Sign Problem: A Study of Logarithmic Actions 2013
Christian Marboe Correlation Functions in N=1 Super Yang-Mills Theory from Holography 2013
Bjørn Sørensen Measurement of the W-Boson Mass with the ATLAS Detector at LHC 2012
Martin Spangenberg Calorimeter calibration and search for R-hadrons at √s = 7 TeV with the ATLAS experiment 2012
Mizio Spatafora Andersen Computation of Amplitudes using New Techniques 2012
Lars Egholm Pedersen Optimization of the Higgs sensitivity in the ZZ∗ → 4l channel using the 2011 datasample, collected by the ATLAS experiment at the LHC 2012
Alexander Karlberg Space-Cone Gauge and Scattering Amplitudes 2012
Christian Bierlich Limits on triple gauge boson couplings 2012
Christian Holm Christensen Lattice Quantum Chromodynamics with Quark Chemical Potential 2012
Anne Mette Frejsel Morphological study of heavy ion collisions using CMB methods 2012
Ingrid Deigaard Measurement of the Tau Polarization in Z → τ τ Decays with the ATLAS Detector 2012
Song Chen Anomalous parity asymmetry of the CMB 2011
Jesper Roy Christiansen Strange particle production in the underlying event in pp-collisions at a center-of-mass energy of √s = 7 TeV with the ATLAS detector 2011
Alexander Hansen Pseudorapidity Dependence of Elliptic Flow in Pb+Pb Collisions at √sNN = 2.76 TeV with ALICE 2011
Christine Hartmann Neutrino masses and mixing 2011
Morten Dam Jørgensen Search for long lived massive particles with the ATLAS detector at the LHC 2011
Daniele Zanzi Search for the Standard Model Higgs Boson in the ATLAS Experiment 2011
Ask E. Jensen Dilepton final states with ATLAS at √s = 7 TeV 2011
Morten Badensø Limits on Anomalous Triple Gauge Couplings in WW Production with Semileptonic Final State 2011
Ursula Søndergaard A Trigger for long-lived coloured or charged exotics for future luminosities in ATLAS 2010
Kristian A. Gregersen Limits on anomalous trilinear gauge couplings in Zγ production at √s = 7 TeV and L = 300 pb−1 in the ATLAS experiment 2010
Sune Jakobsen Performance evaluation and optimization of the luminosity detector ALFA 2010
|
|
## Differential k-Form
A differential $k$-form is a Tensor of Rank $k$ which is antisymmetric under exchange of any pair of indices. The number of algebraically independent components in $n$-D is $\binom{n}{k}$, which is a Binomial Coefficient. In particular, a 1-form (often simply called a ``differential'') is a quantity

$\omega_1 = b_1\,dx_1 + b_2\,dx_2$   (1)

where $b_1$ and $b_2$ are the components of a Covariant Tensor. Changing variables from $x_i$ to $y_i$ gives

$\omega_1 = b_1\left(\frac{\partial x_1}{\partial y_1}\,dy_1 + \frac{\partial x_1}{\partial y_2}\,dy_2\right) + b_2\left(\frac{\partial x_2}{\partial y_1}\,dy_1 + \frac{\partial x_2}{\partial y_2}\,dy_2\right) = \bar b_1\,dy_1 + \bar b_2\,dy_2$   (2)

where

$\bar b_j = b_i \frac{\partial x_i}{\partial y_j}$   (3)

which is the covariant transformation law. 2-forms can be constructed from the Wedge Product of 1-forms. Let

$\omega_1 = b_1\,dx_1 + b_2\,dx_2$   (4)

$\omega_2 = c_1\,dx_1 + c_2\,dx_2$   (5)

then $\omega_1 \wedge \omega_2$ is a 2-form. Changing variables to $y_i$ gives

$\omega_1 \wedge \omega_2 = (b_1 c_2 - b_2 c_1)\,dx_1 \wedge dx_2$   (6)

$dx_1 \wedge dx_2 = \left(\frac{\partial x_1}{\partial y_1}\frac{\partial x_2}{\partial y_2} - \frac{\partial x_1}{\partial y_2}\frac{\partial x_2}{\partial y_1}\right) dy_1 \wedge dy_2$   (7)

so

$\omega_1 \wedge \omega_2 = (b_1 c_2 - b_2 c_1)\left|\frac{\partial(x_1, x_2)}{\partial(y_1, y_2)}\right| dy_1 \wedge dy_2$   (8)

Similarly, a 4-form can be constructed from Wedge Products of two 2-forms or four 1-forms, e.g.

$\omega_1 \wedge \omega_2 \wedge \omega_3 \wedge \omega_4$   (9)
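The antisymmetry and the binomial component count can be illustrated numerically (a small sketch; the components of each 1-form are stored as plain lists):

```python
import itertools
from math import comb

def wedge(b, c):
    """Wedge product of two 1-forms with component lists b and c:
    returns the antisymmetric matrix w[i][j] = b_i c_j - b_j c_i."""
    n = len(b)
    return [[b[i] * c[j] - b[j] * c[i] for j in range(n)] for i in range(n)]

b, c = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
w = wedge(b, c)

# Antisymmetric under exchange of any pair of indices:
assert all(w[i][j] == -w[j][i] for i in range(3) for j in range(3))

# Algebraically independent components of a 2-form in n = 3 dimensions:
independent = list(itertools.combinations(range(3), 2))
assert len(independent) == comb(3, 2)  # the binomial coefficient, here 3
```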
See also Angle Bracket, Bra, Exterior Derivative, Ket, One-Form, Symplectic Form, Wedge Product
References
Weintraub, S. H. Differential Forms: A Complement to Vector Calculus. San Diego, CA: Academic Press, 1996.
|
|
Wow, these students are trying to build a computer completely out of K’NEX parts, here’s a calculator…
The K’NEX calculator stands over 10 feet tall, and can perform 4 bit addition and subtraction operations in about 30 seconds. The slowest part of the operation is the user entering the balls. From there the balls trickle down, computing the result of the operation, and then sending that through a 4 bit decoder, which flips a flag that tells the user the answer. Since it is 4 bit, we can add and subtract numbers from 0 to 15.
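The 4-bit ripple-carry arithmetic the machine performs can be modeled in a few lines of Python (a software sketch of the logic, not of the K'NEX mechanism itself):

```python
def full_adder(a, b, carry_in):
    """One adder stage: returns (sum bit, carry out)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add4(x, y):
    """Ripple-carry addition of two 4-bit numbers; the final carry is
    dropped, so the result wraps modulo 16."""
    result, carry = 0, 0
    for bit in range(4):
        s, carry = full_adder((x >> bit) & 1, (y >> bit) & 1, carry)
        result |= s << bit
    return result

def sub4(x, y):
    """Subtraction via two's complement: x - y = x + (~y + 1) mod 16."""
    return add4(x, add4(~y & 0xF, 1))

print(add4(9, 5))   # 14
print(sub4(3, 5))   # 14, i.e. -2 wrapped to 4 bits
```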
The K’NEX computer – Link.
|
|
## LifeWiki Trusted Account Request Thread - Post requests here
For discussion directly related to ConwayLife.com, such as requesting changes to how the forums or wiki function.
Macbi
Posts: 784
Joined: March 29th, 2009, 4:58 am
### Re: Massive spam attacks on the wiki (and forums?)
Can someone add me as "trusted" on the wiki? Apparently I've made edits before, but that must have been before the Spam Nation attacked.
dvgrn
Moderator
Posts: 7647
Joined: May 17th, 2009, 11:00 pm
Contact:
### Re: Massive spam attacks on the wiki (and forums?)
Macbi wrote:Can someone add me as "trusted" on the wiki? Apparently I've made edits before, but that must have been before the Spam Nation attacked.
Done!
onit
Posts: 1
Joined: June 24th, 2018, 2:35 pm
### Re: Massive spam attacks on the wiki (and forums?)
Woah spam really seems to be an issue here. Yo can someone add me as trusted on wiki? It would be greatly appreciated!
Apple Bottom
Posts: 1034
Joined: July 27th, 2015, 2:06 pm
Contact:
### Re: Massive spam attacks on the wiki (and forums?)
onit wrote:Woah spam really seems to be an issue here. Yo can someone add me as trusted on wiki? It would be greatly appreciated!
Done!
If you speak, your speech must be better than your silence would have been. — Arabian proverb
Catagolue: Apple Bottom • Life Wiki: Apple Bottom • Twitter: @_AppleBottom_
Proud member of the Pattern Raiders!
xanman12321
Posts: 12
Joined: June 19th, 2018, 10:35 pm
### Re: Massive spam attacks on the wiki (and forums?)
may someone add me to trusted? my wiki username is digitalcross (because apparently i already signed up as xanman12321 there and i cant remember my passWORD)
nothing to see here
dvgrn
Moderator
Posts: 7647
Joined: May 17th, 2009, 11:00 pm
Contact:
### Re: Massive spam attacks on the wiki (and forums?)
xanman12321 wrote:may someone add me to trusted? my wiki username is digitalcross (because apparently i already signed up as xanman12321 there and i cant remember my passWORD)
Done! "DigitalCross" not "digitalcross", though -- case sensitivity is definitely a thing on the LifeWiki.
Cytote
Posts: 1
Joined: July 26th, 2018, 12:41 pm
### Re: Massive spam attacks on the wiki (and forums?)
May I be trusted on the lifewiki? my username on the wiki is cytote.
dvgrn
Moderator
Posts: 7647
Joined: May 17th, 2009, 11:00 pm
Contact:
### Re: Massive spam attacks on the wiki (and forums?)
Cytote wrote:May I be trusted on the lifewiki? my username on the wiki is cytote.
Done!
keenanpepper
Posts: 1
Joined: September 6th, 2018, 5:15 pm
### Re: Massive spam attacks on the wiki (and forums?)
Can I has trusted plz?
My user name on the wiki is Keenan Pepper.
Just wanted to add a column to the catagolue most common objects table saying what kind of object it is (still life, oscillator, spaceship, etc.).
dvgrn
Moderator
Posts: 7647
Joined: May 17th, 2009, 11:00 pm
Contact:
### Re: Massive spam attacks on the wiki (and forums?)
keenanpepper wrote:Can I has trusted plz?
Done!
PkmnQ
Posts: 1093
Joined: September 24th, 2018, 6:35 am
Location: Server antipode
### Re: Massive spam attacks on the wiki (and forums?)
Can someone give me the trusted flag?
I already have "PkmnQ" registered, but then i forgot the password to it, and I didn't even get permission yet.
So the account I'm asking for is "PkmnQuantum".
It's just in case I want to edit something.
How to XSS:
Code: Select all
Function(‘a’+’lert(1)’)()
dvgrn
Moderator
Posts: 7647
Joined: May 17th, 2009, 11:00 pm
Contact:
### Re: Massive spam attacks on the wiki (and forums?)
PkmnQ wrote:Can someone give me the trusted flag?
I already have "PkmnQ" registered, but then i forgot the password to it, and I didn't even get permission yet.
So the account I'm asking for is "PkmnQuantum".
It's just in case I want to edit something.
Done!
EdPeggJr
Posts: 8
Joined: September 27th, 2018, 3:31 pm
### Re: Massive spam attacks on the wiki (and forums?)
May I be trusted?
http://conwaylife.com/wiki/P246_gun
Was discovered by Dave Buckingham in June 1996.
But is listed as discovered by Bill Gosper in 1970.
EdPeggJr
Posts: 8
Joined: September 27th, 2018, 3:31 pm
### Re: Massive spam attacks on the wiki (and forums?)
What is the material at the bottom of http://conwaylife.com/wiki/R64 there for? Isn't just the Herschel and four blocks needed?
A for awesome
Posts: 2316
Joined: September 13th, 2014, 5:36 pm
Location: Pembina University, Home of the Gliders
Contact:
### Re: Massive spam attacks on the wiki (and forums?)
EdPeggJr wrote:What is the material at the bottom of http://conwaylife.com/wiki/R64 there for? Isn't just the Herschel and four blocks needed?
It's needed to eat the first natural glider of the formed Herschel. If you open the LifeViewer, click the reset button once, and then play it, the pattern will run for enough time to show it in action.
praosylen#5847 (Discord)
x₁=ηx
V*_η=c²√(Λη)
K=(Λu²)/2
Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
EdPeggJr
Posts: 8
Joined: September 27th, 2018, 3:31 pm
### Re: Massive spam attacks on the wiki (and forums?)
A for awesome wrote:
EdPeggJr wrote:What is the material at the bottom of http://conwaylife.com/wiki/R64 there for? Isn't just the Herschel and four blocks needed?
It's needed to eat the first natural glider of the formed Herschel. If you open the LifeViewer, click the reset button once, and then play it, the pattern will run for enough time to show it in action.
I see the glider escaping when I run it.
Redstoneboi
Posts: 381
Joined: May 14th, 2018, 3:57 am
### Re: Massive spam attacks on the wiki (and forums?)
EdPeggJr wrote:
A for awesome wrote:
EdPeggJr wrote:What is the material at the bottom of http://conwaylife.com/wiki/R64 there for? Isn't just the Herschel and four blocks needed?
It's needed to eat the first natural glider of the formed Herschel. If you open the LifeViewer, click the reset button once, and then play it, the pattern will run for enough time to show it in action.
I see the glider escaping when I run it.
you’re probably talking about the input herschel’s fng, the large eater5 variant is for the output herschel’s fng.
c(>^w^<c)~*
This is 「Fluffy」
「Fluffy」is my sutando.
「Fluffy」has the ability to engineer r e p l i c a t o r s.
「Fluffy」likes to watch spaceship guns in Golly.
「Fluffy」knows Natsuki best girl.
dvgrn
Moderator
Posts: 7647
Joined: May 17th, 2009, 11:00 pm
Contact:
### Re: Massive spam attacks on the wiki (and forums?)
EdPeggJr wrote:May I be trusted?
Done! And welcome to the forums (and wiki).
EdPeggJr wrote:http://conwaylife.com/wiki/P246_gun
Was discovered by Dave Buckingham in June 1996.
But is listed as discovered by Bill Gosper in 1970.
Thanks! That's a recent mistake of mine, from finishing up the import of material from Life Lexicon Release 29, and forgetting to update the infobox template properly. I guess I'll leave it for you to patch as a test of your newfound wikipowers -- thanks again for noticing the problem.
... I'd ask if you have any interest in a few new Conway's Life prize details for the mathpuzzle.com Prize Page, but it looks as if it's been several years since mathpuzzle.com has gotten any updates. It's still an amazing archival collection of miscellaneous mathematics, though -- any plans to start it back up again at some point?
Entity Valkyrie
Posts: 247
Joined: November 30th, 2017, 3:30 am
### Re: Massive spam attacks on the wiki (and forums?)
I am trusted and I am a life enthusiast. Please allow me to edit LifeWiki.
Apple Bottom
Posts: 1034
Joined: July 27th, 2015, 2:06 pm
Contact:
### Re: Massive spam attacks on the wiki (and forums?)
Entity Valkyrie wrote:I am trusted and I am a life enthusiast. Please allow me to edit LifeWiki.
Done!
If you speak, your speech must be better than your silence would have been. — Arabian proverb
Catagolue: Apple Bottom • Life Wiki: Apple Bottom • Twitter: @_AppleBottom_
Proud member of the Pattern Raiders!
gameoflifemaniac
Posts: 1226
Joined: January 22nd, 2017, 11:17 am
Location: There too
### Re: Massive spam attacks on the wiki (and forums?)
Apple Bottom wrote:
Entity Valkyrie wrote:I am trusted and I am a life enthusiast. Please allow me to edit LifeWiki.
Done!
You're a moderator on LifeWiki since you can add people to the trusted list?
I was so socially awkward in the past and it will haunt me for the rest of my life.
Code: Select all
b4o25bo$o29bo$b3o3b3o2bob2o2bob2o2bo3bobo$4bobo3bob2o2bob2o2bobo3bobo$
4bobo3bobo5bo5bo3bobo$o3bobo3bobo5bo6b4o$b3o3b3o2bo5bo9bobo$24b4o!
Apple Bottom
Posts: 1034
Joined: July 27th, 2015, 2:06 pm
Contact:
### Re: Massive spam attacks on the wiki (and forums?)
gameoflifemaniac wrote:You're a moderator on LifeWiki since you can add people to the trusted list?
Yes, in fact --- not that I have much time to spend on the wiki these days, mind...
If you speak, your speech must be better than your silence would have been. — Arabian proverb
Catagolue: Apple Bottom • Life Wiki: Apple Bottom • Twitter: @_AppleBottom_
Proud member of the Pattern Raiders!
JoshM
Posts: 31
Joined: July 12th, 2018, 4:24 pm
### Re: Massive spam attacks on the wiki (and forums?)
can i have trusted?
edit: name is JoshM
Code: Select all
x = 20, y = 10, rule = B37/S2-i34q
11o8bo$10b2o6b2o$10bobo4bobo$10bo2bo2bo2bo$2bo7bo3b2o3bo$2bo7bo8bo$2bo7bo8bo$3bo6bo8bo$4bo5bo8bo$5b6o8bo!
Apple Bottom
Posts: 1034
Joined: July 27th, 2015, 2:06 pm
Contact:
### Re: Massive spam attacks on the wiki (and forums?)
JoshM wrote:can i have trusted?
Done!
If you speak, your speech must be better than your silence would have been. — Arabian proverb
Catagolue: Apple Bottom • Life Wiki: Apple Bottom • Twitter: @_AppleBottom_
Proud member of the Pattern Raiders!
Ian07
Posts: 674
Joined: September 22nd, 2018, 8:48 am
|
|
# Article Takedown/Update Request
This is an update request for the document Some notes on maximal arc intersection of spherical polygons: its $\mathcal{NP}$ -hardness and approximation algorithms by Yong-Jin Liu, Wen-Qi Zhang and Kai Tang
Your paper was deposited in MUCC (Crossref) and appears online at http://link.springer.com/content/pdf/10.1007/s00371-009-0406-5.pdf
Before we are able to make any updates or deletions, you must ensure that the original record at this location is updated/removed.
|
|
## Operads of hypergraphs. (English. Russian original) Zbl 1282.18008
Russ. Math. 57, No. 4, 52-62 (2013); translation from Izv. Vyssh. Uchebn. Zaved., Mat. 2013, No. 4, 61-73 (2013).
For all $$n \geq 1$$, and any monoid $$G$$, the authors construct two operad structures on the set of cubic $$n$$-multidimensional matrices. They both contain a suboperad $$HG_n$$ obtained from a symmetry condition on cubic multidimensional matrices. If $$G=\{0,1\}$$, with the monoid structure defined by $$1+1=1$$, these symmetric multidimensional matrices are the incidence matrices of hypergraphs such that any edge is adjacent to at most $$n$$ vertices, and the authors obtain in this way an operad structure on hypergraphs. It is also shown that these operads are $$Epi$$-operads, that is to say the right action of permutations can be extended to an action of surjections.
### MSC:
18D50 Operads (MSC2010)
05C25 Graphs and abstract algebra (groups, rings, fields, etc.)
|
|
Dokl. Akad. Nauk
MATHEMATICS
The method of projections in studies of solutions of elliptic equations (A. D. Aleksandrov) 751
Confidence intervals for functions of several unknown parameters (Yu. K. Belyaev) 755
Mutual growth of coefficients of a class of $p$-valent functions (E. G. Goluzina) 759
A problem of biophysics (L. I. Kamynin) 761
On a Boolean function (É. I. Nechiporuk) 765
Regularization of the supremum of a family of plurisubharmonic functions and its application to analytic functions of several variables (L. I. Ronkin) 767
On the theory of a geodesic mapping of Riemannian spaces (N. S. Sinyukov) 770
Properties of spectra of ergodic dynamic systems with locally compact time (A. M. Stepin) 773
Ordered spaces (V. V. Fedorchuk) 777
Estimate of the power of a set of non-rigid sleeve couplings for surfaces of revolution (V. T. Fomenko) 781
Algebraic theory of linear inequalities (S. N. Chernikov) 785
FLUID MECHANICS
Automodel propagating-wave type solutions to certain quasilinear equations, especially to the equations describing a water flow in an inclined channel (É. B. Bykhovskii) 789
Some exact solutions to the equations of a unidimensional (with plane waves) unsteady motion of the ground (È. F. Khairetdinov) 792
Statement of the problem of flow around bodies with jets and the exact solution of two classes of problems in an ideal fluid (V. M. Shurygin) 795
MATHEMATICAL PHYSICS
Spectrum of Schrödinger's operator used in the optical model of the nucleus (A. G. Ramm) 799
PHYSICS
Some results obtained in studying the lightning by electron-optical apparatus (E. M. Bazelyan, B. N. Gorin, I. S. Stekolnikov, A. V. Shkilev) 803
Regularities in the high-temperature reaction of carbon (E. S. Golovina, L. L. Kotova) 807
Optical observation of phase transition in single $\mathrm{SbSJ}$ crystals (A. A. Grekov, V. A. Lyakhovitskaya, A. I. Rodin, V. M. Fridkin) 810
On Coulomb interaction in a superconductor model of two zones (M. E. Palistrant, V. A. Moskalenko) 812
Investigation of statistically nonordered systems by means of nuclear quadrupole resonance. Growth of ordered para-nitrochlorobenzene crystals on crystalline "matrices" of para-bromo and para-iodonitrobenzenes (G. K. Semin, T. A. Babushkina, V. I. Robas) 816
Radiation noise energy balance in optical quantum generators (B. I. Stepanov, A. S. Rubanov) 819
|
|
# How to find the angle generated as a function of the speed of a bullet when it collides with a mass hanging from a ceiling?
#### Chemist116
The problem is as follows:
A bullet of mass $m$ collides with a bob of mass $M$ hanging vertically from a ceiling. As a result of this impact, the bob with the bullet inside travels along an arc (of radius $R$) and then oscillates. Given this condition, find the initial angle traveled by the mass $M$ as a function of the speed $v$ of the bullet. Assume the acceleration due to gravity is $g$.
The alternatives given in my book are as follows:
$\begin{array}{ll} 1.&\cos^{-1}\left(1-\frac{1}{2gR}\frac{m}{m+M}v^2\right)\\ 2.&\sin^{-1}\left(1-\frac{1}{2gR}\frac{m}{m+M}v^2\right)\\ 3.&\cos^{-1}\left(1-\frac{1}{2gR}\frac{m}{M}v^2\right)\\ 4.&\sin^{-1}\left(1-\frac{1}{2gR}\frac{m}{M}v^2\right)\\ 5.&\tan^{-1}\left(1-\frac{1}{2gR}\frac{m}{m+M}v^2\right)\\ \end{array}$
How exactly should I tackle this question? Can someone guide me here? The only thing I can recall is that when a bullet strikes a bob, the collision is inelastic, which would be given by:
$p_i=p_f$
But I don't know exactly how to relate it to the arc. The thing here is that the bob starts swinging or oscillating, and I don't know how to translate this into an equation.
Typically this would be given by
$mv=(m+M)u$
$u=\frac{m}{m+M}v$
Then, as the bob oscillates upward, conservation of energy gives:
$E_k=E_u$
$\frac{1}{2}(m+M)u^2=(m+M)gR(1-\cos\omega)$
Hence combining both expressions would be:
$\frac{1}{2}(m+M)\left(\frac{m}{m+M}v\right)^2=(m+M)gR(1-\cos\omega)$
$\frac{1}{2}\frac{m^2v^2}{m+M}=(m+M)gR(1-\cos\omega)$
$\frac{1}{2}m^2v^2=(m+M)^2gR(1-\cos\omega)$
$1-\cos\omega=\frac{1}{2}\frac{m^2v^2}{(m+M)^2gR}$
$\omega=\cos^{-1}\left(1-\frac{1}{2}\frac{m^2v^2}{(m+M)^2gR}\right)$
But this doesn't appear in any of the alternatives. What could be wrong in my approach? Can someone help me here?
#### skeeter
Math Team
inelastic collision ...
$mv_0 = (M+m)v_f \implies v_f = \dfrac{mv_0}{M+m}$
post collision conservation of energy ...
$\dfrac{1}{2}(M+m)v_f^2 = \dfrac{1}{2}(M+m)\left( \dfrac{mv_0}{M+m} \right)^2 = \dfrac{1}{2}\dfrac{m^2v_0^2}{M+m} = (M+m)gh \implies$
$h = \dfrac{1}{2g} \left(\dfrac{mv_0}{M+m}\right)^2$
$\cos{\theta} = \dfrac{R-h}{R} = 1 - \dfrac{h}{R} = 1 - \dfrac{1}{2gR} \left(\dfrac{mv_0}{M+m}\right)^2$
I agree with your solution ... looks like the first answer choice omitted squaring the quantity $\dfrac{m}{M+m}$
Chemist116
#### Chemist116
inelastic collision ...
$mv_0 = (M+m)v_f \implies v_f = \dfrac{mv_0}{M+m}$
post collision conservation of energy ...
$\dfrac{1}{2}(M+m)v_f^2 = \dfrac{1}{2}(M+m)\left( \dfrac{mv_0}{M+m} \right)^2 = \dfrac{1}{2}\dfrac{m^2v_0^2}{M+m} = (M+m)gh \implies$
$h = \dfrac{1}{2g} \left(\dfrac{mv_0}{M+m}\right)^2$
$\cos{\theta} = \dfrac{R-h}{R} = 1 - \dfrac{h}{R} = 1 - \dfrac{1}{2gR} \left(\dfrac{mv_0}{M+m}\right)^2$
I agree with your solution ... looks like the first answer choice omitted squaring the quantity $\dfrac{m}{M+m}$
Hopefully at least some of the questions I posted on this forum were right.
Btw, I also thought about that. But since all of your solutions have been correct according to my book, I'll take this one as the right answer!
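The closed form can also be sanity-checked numerically. Below is a short sketch with illustrative values (the problem itself is symbolic, so the numbers here are arbitrary assumptions):

```python
import math

# Illustrative values only (not from the problem statement)
m, M, R, g, v = 0.01, 1.0, 0.5, 9.81, 150.0

u = m * v / (m + M)            # speed just after the perfectly inelastic collision
h = u**2 / (2 * g)             # rise height from energy conservation
theta = math.acos(1 - h / R)   # swing angle from the geometry h = R(1 - cos theta)

# Same angle via the closed form derived in the thread:
theta_closed = math.acos(1 - (1 / (2 * g * R)) * (m * v / (m + M))**2)
assert abs(theta - theta_closed) < 1e-12
```

Both routes agree, which is the point: the arccos expression is just the step-by-step momentum/energy calculation collapsed into one formula.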
|
|
# Math Help - Radians
1. ## Radians
I'm not sure how to do the following question
Express in degrees the angle whose radian measure is 0.5
Thanks in advance
2. Originally Posted by deltaxray
I'm not sure how to do the following question
Express in degrees the angle whose radian measure is 0.5
Thanks in advance
$2 \pi$ radians is equal to 360 degrees. Therefore 1 radian is equal to $\frac{180}{\pi}$ degrees. Therefore 0.5 radians is equal to ....
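As a quick check, Python's `math.degrees` performs exactly this conversion (multiplying by $\frac{180}{\pi}$):

```python
import math

# 0.5 radians, converted to degrees by multiplying with 180/pi
deg = math.degrees(0.5)
print(round(deg, 2))  # about 28.65 degrees
```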
|
|
# Problem #2290
Back to Logic page
2290 Barbara and Jenna play the following game, in which they take turns. A number of coins lie on a table. When it is Barbara’s turn, she must remove $2$ or $4$ coins, unless only one coin remains, in which case she loses her turn. When it is Jenna’s turn, she must remove $1$ or $3$ coins. A coin flip determines who goes first. Whoever removes the last coin wins the game. Assume both players use their best strategy. Who will win when the game starts with $2013$ coins and when the game starts with $2014$ coins? $\textbf{(A)}$ Barbara will win with $2013$ coins and Jenna will win with $2014$ coins. $\textbf{(B)}$ Jenna will win with $2013$ coins, and whoever goes first will win with $2014$ coins. $\textbf{(C)}$ Barbara will win with $2013$ coins, and whoever goes second will win with $2014$ coins. $\textbf{(D)}$ Jenna will win with $2013$ coins, and Barbara will win with $2014$ coins. $\textbf{(E)}$ Whoever goes first will win with $2013$ coins, and whoever goes second will win with $2014$ coins. This problem is copyrighted by the American Mathematics Competitions.
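If you want to experiment, the rules can be encoded in a short memoized game-tree search (an illustrative sketch, not part of the official solution; the function name is arbitrary, and running it on the full coin counts will reveal the answer):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winner(coins, mover):
    """Return the name of the player who wins with optimal play."""
    other = "Jenna" if mover == "Barbara" else "Barbara"
    if mover == "Barbara":
        if coins == 1:                       # Barbara must skip her turn
            return winner(1, "Jenna")
        moves = [m for m in (2, 4) if m <= coins]
    else:
        moves = [m for m in (1, 3) if m <= coins]
    for m in moves:
        # Taking the last coin wins outright; otherwise look ahead.
        if m == coins or winner(coins - m, other) == mover:
            return mover
    return other
```

Checking tiny positions by hand confirms the encoding: Jenna to move with 1 coin wins immediately, while Barbara to move with 1 coin is forced to pass and loses.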
• Reduce fractions to lowest terms and enter in the form 7/9.
• Numbers involving pi should be written as 7pi or 7pi/3 as appropriate.
• Square roots should be written as sqrt(3), 5sqrt(5), sqrt(3)/2, or 7sqrt(2)/3 as appropriate.
• Exponents should be entered in the form 10^10.
• If the problem is multiple choice, enter the appropriate (capital) letter.
• Enter points with parentheses, like so: (4,5)
• Complex numbers should be entered in rectangular form unless otherwise specified, like so: 3+4i. If there is no real component, enter only the imaginary component (i.e. 2i, NOT 0+2i).
|
|
# Ransomware
Ransomware is a type of malicious software that first infects a computer system. Once infected, users of the system can no longer access the system or parts of it. This means that important data stored on the system will no longer be accessible. If these systems are part of an organization, normal operations will be affected.
Those who control and distribute such ransomware often demand a payment (aka ransom) from the users. Once payment is made, often via untraceable cryptocurrencies, affected users are allowed to regain access to their system. Without payment, it's often difficult if not impossible to regain access.
Since 2013, ransomware has become more sophisticated with the use of public key cryptography. The best defence is to adopt best practices in computer and network security, as well as user awareness.
## Discussion
• What are the types of ransomware?
There are basically two types of ransomware:
• Lockers: These deny access to the infected device and severely limit user interaction. The screen may display information on how to make payment. Mouse interaction may be disabled and only a few keys on the keyboard may be enabled to enter payment information. System files and user data are usually untouched. With lockers, tech-savvy users with the right tools may be able to unlock the device and avoid the ransom. Lockers use social engineering to pressure users into paying up.
• Cryptos: These identify important files on the system and prevent user access to these files. In most cases, they encrypt the files. Cryptos work silently in the background until files have been encrypted and only then inform the user. The user can still use the infected device, but without access to important data and without a proper backup, they usually have no choice but to pay up.
• What are the steps by which a crypto ransomware infects a system?
Crypto ransomware usually works in stages:
• Arrival: Ransomware gets triggered when the user clicks a link in an email or on a website. It downloads itself into the system and starts running in the background.
• Contact: It contacts its command and control (C&C) server to exchange configuration information. This may include cryptographic keys for later use.
• Search: It searches the system for important files by their file types.
• Encryption: It then generates encryption keys that might involve keys exchanged earlier with C&C. These keys are used to encrypt files identified by its search. Telltale signs include slowdown of the system and flickering of the hard drive light.
• Ransom: The user is shown the ransom message once all identified files have been encrypted.
• What are the potential entry points for a ransomware?
Ransomware can arrive by email that can contain a link or an attachment. If it's a link, the user is lured to click it, download a file and execute it. If it's an attachment, the user is lured to open it. Attachments could be Microsoft Office documents, XML documents, Zip files containing a JavaScript file, or files with multiple extensions. The JS script, upon execution, will download the ransomware. Microsoft documents may contain the ransomware embedded in them as a macro.
Ransomware could also arrive when user visits a malicious or compromised website. This is done using exploit kits. Some of these include Angler, Neutrino and Nuclear. These kits probe the user's device for vulnerabilities and exploit them immediately. These can spread more easily since they can infect without users clicking or downloading anything.
A common way to lure unsuspecting users is via what's called phishing. An email or website attempts to pass itself off as a trusted service provider. Text is worded in a manner that sounds convincing and legitimate. This is called social engineering.
• How is cryptography used by crypto ransomware?
Crypto ransomware uses a combination of symmetric and asymmetric keys. Typically, the symmetric key is used to encrypt the files while the asymmetric public key is used to encrypt the symmetric key. Thus, to decrypt the files one requires the symmetric key, which can itself be decrypted only when the asymmetric private key is available. The asymmetric private key is kept secret at the C&C server.
For example, CryptoDefense uses a randomly generated AES key to encrypt files but this key itself is encrypted using RSA. The RSA public key itself is downloaded from the C&C server. CTB-Locker does something similar except that the RSA public key is embedded in the ransomware so that the attack can be completed even without an Internet connection. In fact, CTB-Locker uses a combination of AES, SHA256 and ECDH (curve25519).
Cerber uses RC4 for encryption and involves an extra step of RSA key-pair generation. Petya uses ECDH (secp192k1) and SALSA20 algorithms.
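The hybrid pattern described above can be illustrated with a deliberately toy sketch. Textbook RSA with tiny primes and a hash-derived XOR keystream stand in for real RSA-2048 and AES; none of this is actual cryptography, it only shows the key-wrapping structure:

```python
import hashlib
import secrets

# Toy RSA parameters (illustration only, never secure)
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))          # private exponent, held at the C&C

def xor_keystream(data: bytes, key: bytes) -> bytes:
    """Stand-in symmetric cipher: XOR with a hash-derived keystream."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.blake2b(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

plaintext = b"important document contents"
sym_key = secrets.randbelow(n - 2) + 2      # per-victim symmetric key, as an int < n
ciphertext = xor_keystream(plaintext, sym_key.to_bytes(2, "big"))
wrapped = pow(sym_key, e, n)                # only the wrapped key stays on the victim

# Recovery is possible only with d, i.e. with the attacker's cooperation:
recovered_key = pow(wrapped, d, n)
assert xor_keystream(ciphertext, recovered_key.to_bytes(2, "big")) == plaintext
```

The point of the structure is that deleting `sym_key` after encryption leaves the victim holding only `ciphertext` and `wrapped`, neither of which is useful without the private exponent.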
• What are some variations among ransomware out there?
Ransomware come in many variations. They constantly evolve so that countermeasures are made ineffective. Here are some variations:
• WannaCry was also a worm, able to spread itself to other devices on the network without user intervention.
• WannaCry gave a deadline for payment and threatened that ransom would go up if not complied.
• In addition to denying users access, RAA ransomware and MIRCOP stole passwords and sent them to the C&C server.
• Many used social engineering to confuse and intimidate users, claiming that users had broken the law in some way and that, if the ransom went unpaid, they would be reported to the police.
• While most ransomware were executables (.exe, .dll), others used a scripting language: JScript for Cryptowall 4.0's downloader and RANSOM_JSRAA.A; PowerShell script for PowerWare.
• Jigsaw regularly deleted encrypted files until the ransom was paid. CryptoLocker instead threatened to delete private encryption key, which meant that data could never be recovered.
• Petya, Satana, and GoldenEye modified the hard drive MBR (Master Boot Record) with a custom boot loader.
• CTB-Locker used partners and revenue sharing to spread faster.
• Could you name some ransomware attacks that caused significant damage?
Ransomware can affect any device including laptops, servers and smartphones. Ransomware can affect home users and lock them out of their personal data. It can affect organizations such as hospitals, schools, government agencies, and more. Attackers don't care who's the target but they do set the ransom based on what they think the victims are likely to pay.
In February 2016, Hollywood Presbyterian Medical Center was infected by Locky ransomware. Ransom was 40 Bitcoins, about $17,000. San Francisco's transit system, Muni, was attacked in November 2016. Ransom was $73,000 in Bitcoins. In January 2017, Austrian hotel Romantik Seehotel Jaegerwirt was attacked. Electronic room keys did not work and the reservation system was paralyzed. The attackers demanded payment of $1,800 in Bitcoins. In June 2017, University College London was attacked. The city of Atlanta was crippled for five days by ransomware demanding $51,000. A Boeing plant in Charleston was hit by WannaCry in March 2018.
• Is the Internet-of-Things (IoT) vulnerable to ransomware?
With IoT, the attack surface widens: thermostats, security cameras, smart locks, connected cars, power grids and other industrial systems can all get infected. Unlike traditional ransomware that prevent users from accessing their data, IoT data is often in the cloud. Instead, ransomware in IoT will be about paralyzing systems: traffic jams, power outages, malfunctioning equipment, etc.
In 2016, Mirai botnet infected more than 600,000 IoT devices and then used these devices to launch a distributed Denial of Service (DDoS) attack on web services. Although Mirai was not a ransomware, it showed the potential of using IoT devices for large scale attacks. Mirai was possible because IoT devices at the time were (and perhaps even now are) less secure than enterprise IT systems. Mostly, routers and cameras were compromised. Users often use default credentials, don't upgrade or even login to these devices.
A variety of IoT devices exist and any ransomware must have variants or mutate on its own to infect this variety. Since many IoT devices lack display, ransomware will also need to figure out emails and phone numbers to notify users about the ransom.
• How can I protect myself from ransomware?
Take regular backups of critical data. Keep your security software and OS updated on a regular basis. Be wary of unexpected emails with links or attachments. Be wary of Microsoft Office attachments that ask you to enable macros.
If infected, disconnect the affected device from your network. Scan all other devices on your network. Identify the ransomware and try recovery if possible. If not, reformat device and restore data from clean backups. Report the incident to local authorities.
## Milestones
1989
Joseph L. Popp distributes 20,000 floppy disks containing what could be the first known ransomware. Called the 1989 AIDS Trojan, it hides folders and encrypts file names. Victims are asked to send $189 to a post office box in Panama.
2009
Fake anti-virus programs become increasingly common. They inform users that they can "fix" problems in their systems for a fee.
2011
Without doing any encryption, Trojan.Winlock simply displays a fake Windows Product Activation notice. Users are asked to call a premium international number to obtain an activation key.
Sep 2013
Ransomware gets modern and sophisticated with the release of CryptoLocker. It uses RSA public-key cryptography and keeps the private key safe at its command and control (C&C) server. The attack lasts from September 2013 to May 2014, in which period it collects $3 million from its victims. An improved version called CryptoLocker 2.0 arrives in December. It uses Tor and Bitcoin for anonymity and it's not detected by anti-virus or firewall.
2015
Ransomware that encrypts files on an Android device arrives on the scene. It's called SimpleLocker. It uses a trojan downloader. Another one called LockerPin resets the PIN on the phone and demands $500 ransom to unlock the device.
2016
The number of variants of ransomware increases dramatically in 2016. From Q4-2015 to Q1-2016, ransomware increases by 3,500% and payments increase tenfold. The US Justice Department states that in 2016 ransomware attacks increased four times to 4,000 a day. Mac OS X gets infected by KeRanger via the Transmission BitTorrent client.
May
2017
Ransomware WannaCry exploits a vulnerability in the SMB protocol, a vulnerability present in old unsupported versions of Windows. The attack is contained within a few days but not before it infects 200,000 computers across 150 countries. Kaspersky Lab reports that 98% of the successful attacks were on computers running Windows 7.
Jun
2017
Using the same SMB vulnerability that WannaCry exploited, once seeded, NotPetya spreads to computers on its own without needing spam emails or social engineering. It encrypts the Master Boot Record (MBR) and lot more. It tricks users into paying a ransom but in fact the encryption process is irreversible.
Dec
2017
SophosLabs publishes an article on Ransomware-as-a-Service (RaaS). These are available on the Dark Web, some even offering support. These enable non-technical folks to launch their own ransomware variant. Presumably, RaaS has been available on the Dark Web for a year or two.
## Cite As
Devopedia. 2020. "Ransomware." Version 5, January 6. Accessed 2022-09-22. https://devopedia.org/ransomware
Contributed by
2 authors
Last updated on
2020-01-06 12:38:32
|
|
Designers often search for new solutions by iteratively adapting a current design. By engaging in this search, designers not only improve solution quality but also begin to learn what operational patterns might improve the solution in future iterations. Previous work in psychology has demonstrated that humans can fluently and adeptly learn short operational sequences that aid problem-solving. This paper explores how designers learn and employ sequences within the realm of engineering design. Specifically, this work analyzes behavioral patterns in two human studies in which participants solved configuration design problems. Behavioral data from the two studies are first analyzed using Markov chains to determine how much representation complexity is necessary to quantify the sequential patterns that designers employ during solving. It is discovered that first-order Markov chains are capable of accurately representing designers' sequences. Next, the ability to learn first-order sequences is implemented in an agent-based modeling framework to assess the performance implications of sequence-learning abilities. These computational studies confirm the assumption that the ability to learn sequences is beneficial to designers.
## Introduction
Designers often search for new solutions by iteratively adapting a current design. By engaging in this search, designers progressively improve the quality of their solutions. However, they also begin to learn what operational patterns are likely to improve the solution in future iterations. The current work examines designers' capacity for learning and applying beneficial operation sequences, and studies the impact of such behavior on performance.
The current work specifically stems from observations of small teams of engineering students engaged in the design of a truss [1]. In a subsequent analysis of that study, it was hypothesized that the precise order in which operations were performed may have impacted the quality of solutions [2]. The analysis in the current work focuses expressly on these operational sequences and is thus conducted at shorter timescales and at a finer resolution than other research that has studied the sequencing of design stages or design tasks (which is reviewed in Sec. 2). This is the level at which designers and engineers explicitly engage in their iterative search for solutions, so choosing the best actions becomes of paramount concern for the creation of high-quality solutions. This work is specifically centered on two overarching questions:
1. How much representational complexity is necessary to quantify the sequential patterns that designers employ during solving? Previous work has used sequential models with varying degrees of complexity to examine designer activity [3–6]. However, no direct analysis has been conducted to assess what degree of complexity is necessary to offer an accurate aggregate representation of designer activity. The current work utilizes Markov chain concepts to verify the fundamental assumption that sequential treatments are necessary, and to uncover the necessary level of complexity.
2. Does the use of operation sequences benefit designers? Effective design must find a balance between exploration of a design space and exploitation of known features of the design space to achieve a solution [7–9]. Sequence learning may serve to augment exploitation in design, similar to the role that it plays for solving puzzle problems [10,11]. However, it is also possible that this augmentation occurs at the expense of effective exploration, as designers may apply learned sequences to greedily improve solution quality rather than searching broadly for solution alternatives. Studying the performance implications of sequence learning is complicated by the fact that sequence learning can take place implicitly [12,13], which makes it challenging to control, observe, and assess. This work equips a computational model of engineering design teams with Markovian constructs to accurately assess the performance implications of sequence-learning abilities.
These research questions are addressed by using Markov chain constructs to represent and simulate the sequential pattern of human behavior in design. While Markov chains do not extract finite operation sequences, they do implicitly represent such sequences using probabilistic chains. The mathematical underpinnings of Markov chains are described in greater detail in Sec. 2, along with relevant information pertaining to sequencing and design.
This paper is comprised of two investigations exploring each of the overarching research questions. The first explores the degree of complexity necessary to accurately characterize the sequences used by designers. This is accomplished by applying a statistical analysis to the human data from two previously conducted cognitive studies. This analysis reveals that participants in both studies employed sequences of operations when constructing solutions. The results also show that operation sequencing in both studies can be characterized as a first-order Markov process. Higher-order Markov models display accuracy that is statistically equivalent to first-order models. The second investigation attempts to assess the performance implications of operation sequencing. This assessment is attained by computationally simulating the activities of design teams using the Cognitively Inspired Simulated Annealing Teams (CISAT) modeling framework, an agent-based platform that has been shown to approximate the process and performance characteristics of engineering design teams [2]. The insight that human operation sequencing can be treated as a first-order Markov process is used to equip CISAT agents with sequence-learning abilities, enabling a computational comparison between teams with and without the ability to learn sequences. These simulations demonstrate that sequence-learning abilities were helpful to designers in the cognitive studies, and that similar computational implementations may be of use for automated design synthesis.
## Background
The patterns that humans identify through sequence learning are essential to the execution of both mundane and specialized tasks [14]. Patterns are also identified through chunking, a related behavior in which humans assemble many pieces of related information concomitantly in memory [15–17]. Chunking behavior has even been mirrored in computational design algorithms [18]. These two behaviors are differentiated by the modality of the recognized patterns—chunking extracts patterns that are spatial or relational, while sequencing extracts temporal patterns. Both are important to design cognition, but the current work focuses on the latter.
### Sequence Learning.
Sequential behavior can be an indicator of expertise in some domains [19]. However, it has also been shown that participants are capable of quickly acquiring and employing move sequences [10,11,20]. Participants solving the Tower of Hanoi puzzle spend most of their time learning to compose appropriate sequences (namely, pairs) of moves [10]. Once they learn how to do so, they spend a relatively short amount of time to actually solve the puzzle [10]. Further, a comparison of several isomorphs of the Tower of Hanoi puzzle revealed that isomorph difficulty increased time spent in the learning phase, but the time spent in the solving phase was invariant with respect to isomorph difficulty [10]. In studies using the Thurstone letter series completion task, two procedural steps were identified in participants' problem-solving efforts [20]. The first step entails identifying some structure in the letter series and creating a rule that describes it, and the second involves leveraging that rule to extrapolate the next letters in the series. These steps are abbreviated as generating a pattern and generating a sequence. This process has been reevaluated and confirmed with both computational simulations and cognitive studies [20,21]. Other work has shown that participants preferred specific operation orders in solving geometric analogy tasks, despite the fact that the task itself did not place explicit constraints on the permissible order of operations [22]. Participants performed with lower accuracy and speed when made to use a nonpreferred order, indicating that appropriate sequencing of operations has strong implications for performance [22].
Several studies have shown that humans can learn sequences implicitly (i.e., without direct attention) [12,13]. However, studies have also shown that direct attention while learning sequences can boost positive outcomes [23,24]. Together, these findings underscore the existence of two alternative pathways through which sequence learning can occur [14,25].
### Sequencing in Design.
The role of sequencing as it pertains to design has been examined with respect to stages, tasks, and operations. These three sequence types can be conceptualized along a spectrum of abstraction, from design stages (the most abstract and general) to design operations (the least abstract and most detail-specific). A similar continuum can be constructed to describe the timescale at which these objects are enacted, with design stages being enacted at longer timescales, and design operations typically at shorter timescales.
Observations of individual designers (or of design teams) are typically used to study the sequencing of design stages. One study coded design team communication according to alignment with design stages [26]. It was discovered that design teams were likely to focus their discussion on a specific stage for several utterances before transitioning to other stages [26]. In another study, participants were tasked with designing a playground and their activities were again coded according to alignment with design stages [27]. The procedural sequences exhibited by experts tended to transition smoothly and linearly between stages, while sequences exhibited by novices were more erratic, with frequent stage changes [27]. It has also been shown that there is substantial variability in the order in which both Ph.D. and undergraduate students employ design stages, with few participants transitioning linearly between the stages [28]. A similar study demonstrated that transitioning linearly through the design process tended to produce solutions of higher quality [29].
The design tasks are typically enacted at shorter timescales than design stages. Appropriate ordering of design tasks can increase the concurrency with which tasks can be completed [30], decrease the time and cost involved in developing a product [31], and increase the information that is available for key design decisions [32]. Waldron and Waldron analyzed the sequencing of tasks observed during the design of an intricate mechanical system [33]. Their analysis showed that there is not always a clean break between conceptual and detailed design, due in part to the fact that tasks may carry over between design stages [33]. Theoretical research on task sequencing has demonstrated a possible link between problem complexity and optimal approaches for task ordering [32]. Other work has implemented genetic algorithms for optimizing task sequences with respect to a number of different objectives [34].
The sequencing of discrete design operations takes place on the shortest timescales and has the most intimate impact on potential solutions. This is the level at which designers and engineers explicitly engage in their iterative search for solutions, so choosing the best actions becomes of paramount concern for the creation of high-quality solutions. Much of the work that has examined the sequencing of design operations makes use of some type of protocol encoding scheme in order to render the resulting sequences meaningful. Function–behavior–structure (FBS) concepts [35,36] are commonly used to create coding schemes to study sequencing at this scale. The FBS design ontology specifically describes the design as a process with the ultimate goal of transforming a set of design requirements into a design description [35]. The description cannot proceed directly from the set of requirements, but instead arises as a result of considering a number of issues associated with the design requirements—the required functionality, the expected behavior, the observed behavior, and the structure of the designed object. The transitions between these issues are referred to as processes. The first-order sequential behavior in the FBS ontology (the transitions between issues) has been modeled via Markov chains [35]. The second-order sequential behavior (the probability that specific processes will precede specific issues) has also been investigated [37]. Simulations have also considered the effects of memory on sequencing behavior in computational agents using higher-order Markov chains [6]. Aside from studies of human designers, the extraction and implementation of beneficial rule pairs have been explored with respect to design automation [38].
This work will also use Markov concepts to study the ordering of operations during design tasks. Instead of using a coding scheme that requires human assessment, we code design operations according to their quantifiable effect on the form of the current design solution. Markov chain concepts are used to study the sequencing of operations for design tasks and also to implement sequence-learning abilities within CISAT, a computational model of design teams [2].
### Markov Processes.
A Markov chain is a mathematical model of a stochastic system that transitions between a set number of possible states [39]. Specifically, a first-order Markov chain assumes that the probability of transitioning to a future state depends only on the current state of the system, and not on previous states [39]. These transition probabilities are stored in the transition matrix, $T$, where the value of $T_{ij}$ is the probability of transitioning from state $i$ to state $j$. The mathematics governing Markov chains were proposed in 1907 [40], and over the last century, Markov chains have been leveraged for computer performance evaluation [41], web search [42], modeling chemical processes [43], and analyzing design team communications [44,45].
Figure 1 gives an example of a first-order Markov chain with three states ($S_1$, $S_2$, and $S_3$). Arrows in the figure indicate possible transitions between states, and these are labeled with the corresponding element of the transition matrix. It should be noted that Markov chains typically permit self-transitions, meaning that the system fails to transition out of the current state for one or more time steps. In the current work, Markov chains describe the order in which study participants applied operations while constructing solutions. These modifying operations are modeled as the states of the Markov chain model, making the transition matrix a probabilistic description of operation sequences.
Higher-order Markov chains can also be implemented. In these models, the selection of the next state does depend on past states, thus modeling a degree of “memory” within the system [46]. In the context of design, the implicit memory of higher-order Markov chains could serve as a useful analog for a portion of the expertise and memory of human designers. Higher-order Markov chain models are used in this work to characterize how much inherent memory is necessary to specify the order in which study participants applied operations while constructing solutions. Zero-order Markov chains are also used in this work. These models do not encode a sequential representation of data, but instead encode the nonconditional frequency with which operations are applied (much like the probabilities associated with each side of a weighted die).
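The first-order assumption can be made concrete with a short simulation; the following is a minimal sketch of a three-state chain like the one in Fig. 1, where the transition probabilities are invented for illustration and are not taken from the figure or the studies.

```python
import random

# Illustrative transition matrix for a three-state chain (each row sums to 1).
# The probabilities are made up for demonstration only.
T = {
    "S1": {"S1": 0.5, "S2": 0.3, "S3": 0.2},
    "S2": {"S1": 0.1, "S2": 0.6, "S3": 0.3},
    "S3": {"S1": 0.4, "S2": 0.4, "S3": 0.2},
}

def step(state, rng):
    """Sample the next state given only the current one (first-order assumption)."""
    nxt = list(T[state])
    weights = [T[state][s] for s in nxt]
    return rng.choices(nxt, weights=weights, k=1)[0]

rng = random.Random(0)
chain = ["S1"]
for _ in range(10):
    chain.append(step(chain[-1], rng))
```

Note that self-transitions (e.g., `S1` to `S1`) are sampled just like any other transition, matching the self-loops described above.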
## Datasets
The operation datasets analyzed in this work were derived from two previously conducted cognitive studies. The first study tasked engineering students with designing a truss and was originally created to examine design in the face of dynamic problems [1]. The second study tasked a different group of engineering students with the design of an internet-connected home cooling system and was originally designed to assess team coordination and communication [47]. Neither study was designed explicitly for the analyses applied in this paper—rather, the current work mines patterns of human behavior from those preexisting datasets. In addition, the differences between the two studies are an advantage to the current work because the respective data provide a broad basis from which to draw more general conclusions. A brief review of both studies is given in this section, with a summary of important differences provided in Table 1. The disparate domains of the two studies add to the generalizability of the results of this work, as does the varying number of operation types. Further, participants in the truss design study performed nearly an order of magnitude more operations than the participants in the home cooling system design study. Given these substantial differences, any similarities noted in human behavior between these two studies may be generalizable to the broader class of configuration design problems.
While the data used here were collected from team-based experiments, the focus of the current work is on individual sequence learning. This type of individual-level analysis can be performed on the team-based data for two reasons. First, a separate series of operations was logged independently for every participant, rather than an aggregate collection of data at the team level. Second, in both studies, the time spent working individually was much larger than that spent interacting with teammates, so learned sequences were largely the result of individual activity and effort.
### Truss Design Study.
In this previously conducted study, 16 teams of three mechanical engineering students were tasked with designing a structural truss. The design was conducted over the course of six 4-min design sessions. New problem statements were introduced twice, without warning, in order to study problem-solving and design in response to a dynamic design task [1]. In the original problem statement, teams were instructed to design a truss to support a given load at the middle of each of two spans. The first change presented participants with the same general layout, but they were also instructed to account for the removal of one of the supports at any time. The second change instructed teams to design their truss to avoid an obstacle. Teams were given a separate target mass and factor of safety for each of the three problem statements. Over the course of the study, participants were permitted to interact freely with members of their team [1]. Estimates of communication frequency made in previous work vary from one interaction for every 30 individual actions to once for every 100 actions, depending on the team [2]. Because the problem statement changed during design, it is expected that this dataset will yield sequencing information that applies generally to a variety of truss design problems.
Every participant was also provided with a computer program that allowed them to construct, evaluate, and share design solutions with their teammates. This program also recorded the operations that participants selected while creating their designs, which made it possible to reconstruct a full log of design activity. The allowed operations were as follows:
1. adding a joint,
2. removing a joint,
3. adding a member,
4. removing a member,
5. changing the size of a single member,
6. changing the size of all members, and
7. moving a joint.
The information generated by participants was analyzed following the experiment to produce a sequence of move operators (denoted by the integers 1–7) for every participant. Every sequence consisted of 400–500 operations. A short example operation sequence is depicted in Fig. 2.
### Home Cooling System Design Study.
This study tasked 54 mechanical engineering students (either individually or in teams) with designing a system of connected products to maintain the temperature within a residential structure. Participants were allowed to use and connect three distinct product types to create their solutions: sensors (which sensed the temperature of the room in which they were placed), coolers (which cooled rooms in the home), and processors (which made decisions about which coolers to activate based on information from sensors). The design was conducted over the course of a 30-min session. Several experimental conditions were established to control the frequency with which participants interacted with their teammates (from zero interaction to interacting once for every five individual actions). To ensure a common basis for comparison between conditions, every participant was allowed to perform only 50 design operations.
Every participant was provided with a program that allowed them to build, assess, and share solutions. It was also used to continuously record the operations that the participants used, much like the design program for the truss study. The operations available to participants here were as follows:
1. add processor,
2. add sensor,
3. add cooler,
4. remove processor,
5. remove sensor,
6. remove cooler,
7. move sensor,
8. move cooler, and
9. tune cooler.
This information was processed after the experiment to produce a list of move operators (denoted by the integers 1–9) for each of the 54 participants in the study. Every solution sequence consisted of exactly 50 operations. A short example sequence is provided in Fig. 3. The solution diagrams in the left column depict a plan view of the structure, with shading indicating the relative temperature in different rooms.
## Investigation 1: Representation of Operation Sequences
This paper first analyzes human operation data from the two design studies with the objective of determining what degree of representational complexity is necessary to accurately model the sequences employed by designers.
### Methodology.
Markov chains were trained on data from the design studies in order to provide a statistical representation of the sequence in which operations occurred. The following discussion of the process for training these models is based on material in Ref. [39], but is presented in terms of design operations (instead of Markovian states) to aid understanding of its relevance to the current work. The procedure specifically outlines the training of first-order Markov models, but it can also be applied to higher-order models with small modifications.
A Markov chain is defined by the values of the elements in its transition matrix, $T$. Element $T_{ij}$ in the matrix defines the probability that the next operation will be operation $j$, given that the previous operation was operation $i$. The values in the transition matrix can be estimated based on observed data by computing
$T_{ij}=\dfrac{N_{ij}}{N_{i}}$

(1)

where $N_{ij}$ is the number of instances in which operation $j$ is observed to follow operation $i$, and $N_{i}$ is the number of instances in which operation $i$ is observed in total. The diagonal of the transition matrix contains probabilities for cases where $i=j$, indicating that an operation is followed by itself.
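Equation (1) reduces to counting observed operation pairs. The following is an illustrative sketch of the estimator; the function name, integer operation coding, and toy sequence are ours, not taken from the studies.

```python
from collections import Counter

def estimate_transition_matrix(sequences, n_ops):
    """Estimate first-order transition probabilities T[i][j] = N_ij / N_i
    from operation sequences coded with the integers 1..n_ops."""
    pair_counts = Counter()   # N_ij: times operation j follows operation i
    state_counts = Counter()  # N_i: times operation i appears with a successor
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            pair_counts[(prev, nxt)] += 1
            state_counts[prev] += 1
    T = [[0.0] * n_ops for _ in range(n_ops)]
    for i in range(1, n_ops + 1):
        if state_counts[i] == 0:
            continue  # operation i never observed as a predecessor
        for j in range(1, n_ops + 1):
            T[i - 1][j - 1] = pair_counts[(i, j)] / state_counts[i]
    return T

# Toy sequence: operation 2 follows operation 1 every time it appears,
# so the estimated probability T[0][1] should be 1.0.
T = estimate_transition_matrix([[1, 2, 2, 3, 1, 2]], 3)
```

Each observed row of the resulting matrix sums to one, since every observed predecessor operation must be followed by some operation.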
The log-likelihood is a measure of the probability that a model could have produced a given set of data. This is essentially a measure of the accuracy with which the model can reproduce a given dataset, and can therefore be used for comparing the veracity of different models. The log-likelihood for a Markov chain model ($L_{MC}$) is
$L_{MC}=\sum_{i=1}^{M}\sum_{j=1}^{M}N_{ij}\cdot \ln(T_{ij})$

(2)

where $N_{ij}$ and $T_{ij}$ are as defined earlier, and $M$ is the number of different permissible operations.
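Equation (2) can be computed directly from the same pair counts; a minimal sketch, assuming the transition matrix is indexed by the same 1-based operation codes as the sequences:

```python
import math
from collections import Counter

def log_likelihood(sequences, T):
    """L_MC = sum over i,j of N_ij * ln(T_ij), with N_ij counted from the
    given sequences. An observed transition that the model assigns zero
    probability makes the data impossible under the model, so the
    log-likelihood is -inf in that case."""
    N = Counter()
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            N[(prev, nxt)] += 1
    total = 0.0
    for (i, j), n in N.items():
        p = T[i - 1][j - 1]
        if p == 0.0:
            return float("-inf")
        total += n * math.log(p)
    return total
```

Because each term is a log of a probability, the log-likelihood is never positive, and larger (less negative) values indicate a better fit, which is how the model orders are compared below.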
The procedure for training the higher-order Markov chains follows essentially the same pattern as that for the first-order Markov chains. The key difference is that the states of the model are no longer single design operations, but $n$-tuples of operations, where $n$ is the order of the Markov chain. Training of a zero-order Markov chain model simply consists of estimating the frequency with which each operation occurs, without any assumption of conditional dependence on earlier operations in the sequence.
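The re-encoding that lets the first-order machinery train an order-$n$ model can be sketched as follows; this helper and its name are ours, included only to illustrate the tuple-state construction.

```python
def to_tuple_states(seq, order):
    """Re-encode an operation sequence as overlapping n-tuples so that an
    order-n Markov chain can be trained with the first-order procedure:
    each tuple of `order` consecutive operations becomes a single state."""
    return [tuple(seq[k:k + order]) for k in range(len(seq) - order + 1)]
```

For example, an order-2 encoding turns the sequence 1, 2, 3, 4 into the tuple states (1, 2), (2, 3), (3, 4), and transitions between those tuple states are then counted exactly as in Eq. (1).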
Markov models were trained on both datasets, from zero order (i.e., a model assuming that future operations have no dependence on past operations) to fourth order (i.e., a model assuming that future operations depend on the last four operations). Models were trained using leave-one-out cross-validation [48]. For a dataset consisting of $n$ samples, this cross-validation approach trains a model with $n-1$ samples, and then tests the model on the sample that was not used for training. This procedure is repeated until every sample has been used for testing, providing $n$ evaluations of the testing accuracy, for which the mean and standard error can be computed. It should be noted that leave-one-out cross-validation is a special case of $k$-fold cross-validation [49] for which $k$ is equal to the number of samples ($n$). Using $k=n$ provides an accurate estimate with lower bias and a more conservative variance than values of $k<n$ [50].
In this work, each sample is composed of the data from one study participant (consisting of many operations). Thus, the validation approach used here estimates how accurate the model might be for describing the behavior of a previously unseen individual. It should be noted that during training (and for communicating final results), the transition probabilities are computed using the data from multiple study participants.
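The validation loop itself is simple to express. Below is a sketch with caller-supplied `train` and `evaluate` callables; these names are hypothetical, standing in for Markov model fitting and log-likelihood scoring respectively.

```python
def leave_one_out(samples, train, evaluate):
    """Leave-one-out cross-validation: for each sample, fit a model on the
    other n-1 samples and score it on the held-out one. Returns one score
    per sample."""
    scores = []
    for held in range(len(samples)):
        training = [s for k, s in enumerate(samples) if k != held]
        model = train(training)
        scores.append(evaluate(model, samples[held]))
    return scores

def mean_and_se(scores):
    """Mean and standard error of the per-fold scores."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1) if n > 1 else 0.0
    return mean, (var / n) ** 0.5
```

In the setting of this paper, each sample would be one participant's full operation sequence, so the returned mean and standard error estimate how well a model generalizes to a previously unseen individual.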
### Results for Truss Design Study.
A plot of log-likelihood for models of increasing order is shown in Fig. 4(a) with error bars indicating standard error. The dashed line shows the log-likelihood of the model on the training dataset, and the solid line shows the log-likelihood on the testing dataset. Significant differences between adjacent models are shown with dotted brackets. Models with higher testing log-likelihood provide a better fit to unseen data and should be preferred.
Figure 4(a) shows a steep increase in log-likelihood from the zero-order Markov model (which by nature cannot model any sequential behavior) to the first-order Markov model (which provides the simplest representation of sequencing behavior). This indicates that strong sequencing patterns do exist in the data from the truss study. However, after first-order, the testing log-likelihood plateaus, while the training log-likelihood continues to increase. This indicates that overfitting occurs in the higher-order models. In other words, the higher-order models begin to fit attributes of the training data that are not general, and thus exhibit lower accuracy on the testing dataset. There is a slight increase in the mean testing accuracy between the first-order and second-order models, but this increase is not significant ($F=0.31$, $p>0.5$).
As noted previously, the testing log-likelihood in Fig. 4(a) plateaus after the first-order model, with no further significant differences apparent in the testing log-likelihood curve. Therefore, the first-order model is preferred, as it provides a degree of accuracy that is statistically equivalent to the higher-order models, but it does so with much less complexity. Designer activity in the truss design task can, therefore, be accurately modeled as a first-order Markov process.
Comparing the zero-order model (which encodes a nonsequential representation) to the first-order model (which encodes a representation of designer activity that is both parsimonious and accurate) enables an examination of why sequencing of operations was important for the truss design task. The operation frequencies associated with the zero-order Markov model are shown in Fig. 4(c), and the transition matrix of the first-order Markov model is shown in Fig. 4(b). The shading inside the squares indicates the magnitude of the probability, which is also displayed numerically within each square.
A comparison of the statistical models provided in Figs. 4(b) and 4(c) justifies the substantially higher likelihood of the first-order model. The transition probability matrix of the first-order model has strong diagonal elements, which indicates that operations were fairly likely to be applied multiple times in a row. This type of sequential probabilistic dependence simply cannot be represented in a zero-order model. For example, consider the 33% chance that the next operation chosen by the zero-order model will be to add a member. Because of the assumptions of the model, this probability is not dependent on the previous action. However, the first-order model demonstrates that the choice to add a member is heavily dependent on what the previous operation was, and is particularly likely after adding a joint, removing a joint, or adding a member. Conversely, it is extremely unlikely to be chosen after changing the size of truss members. As another example, the zero-order model also contains a 33% chance that the next operation chosen will be to change the size of a single member. However, the first-order model provides a caveat with this value, showing that this operation is most likely to follow itself, and unlikely to occur after adding a joint, removing a joint, adding a member, or removing a member.
A graph-based visualization of the state and transition probabilities of the first-order Markov chain is provided in Fig. 5. Arrows are used to indicate transitions between states, and line thicknesses represent the relative probability of those transitions, with thicker lines indicating transitions with higher probability. This visualization only includes the transitions with the highest probabilities (specifically transitions with probabilities above the median, approximately 0.03). The self-transition probabilities are indicated by the border thickness of the circle around that operation. This figure helps to expose additional patterns of sequential action. Operations related to truss topology (adding and removing joints and members) are connected by the thickest arrows, indicating a high probability that these operations will be employed together during truss design. Conversely, nontopology operations (changing the size of members, or moving joints) are connected by relatively thin arrows, indicating that these operations are far less likely to be applied together. However, these operations all have fairly high self-transition probabilities, meaning that they are likely to be applied several times in a row.
The higher-order Markov models are capable of explicitly representing long sequences of operations. On the other hand, the first-order Markov models assume that the selection of a subsequent operation is dependent on only the last operation, so that each operation is probabilistically linked to the next. Therefore, only pairs of operations can be represented explicitly. However, operation sequences of arbitrary length can be created by stringing together several of these probabilistically linked pairs. Graphically, the process of constructing these sequences consists of tracing a path through the graph-based representation of the transition matrix shown in Fig. 5. A set of high likelihood exemplar sequences produced through this process are provided in Fig. 6. Operations are shown in rectangular boxes, and the probability that the following operation would occur is given with a percentage over the linking arrow. The percentage of participants from the cognitive study who employed the sequence is also noted.
These sequences are multioperation patterns of action that might be expected in a truss design task. Sequence A consists of a joint addition followed by several member additions. This kind of pattern could arise as a designer constructs their truss, adding a joint and then attaching it to the existing truss with new structural members (and was employed by every participant in the cognitive study). Sequence B is similar to sequence A in that it consists of topology operations, but instead begins with a joint removal (which also removes all attached members) followed by a joint addition and a member addition. This signifies revision of a section of the truss—removing a section of the truss with poor performance and then rebuilding it in an attempt to improve performance characteristics. Sequence B was employed by 92% of the cognitive study participants. Sequences C and D define procedures for fine-tuning a fixed truss topology—joint repositioning or the adjustment of global member sizes, followed by the adjustment of the size of specific members. Sequence E describes a return from shape optimization to topology optimization—the repositioning of a joint (possibly to make room for new truss elements) followed by the addition of new structural members.
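The path-tracing process that produces such exemplar sequences can be sketched in code. The greedy variant below always follows the highest-probability transition out of the current operation; the two-operation matrix is invented for illustration and is not one of the fitted study matrices.

```python
def most_likely_path(T, start, length):
    """Grow an operation sequence by repeatedly following the
    highest-probability transition out of the current operation.
    Operations are coded 1..len(T); T is a row-stochastic matrix."""
    seq = [start]
    while len(seq) < length:
        row = T[seq[-1] - 1]
        seq.append(max(range(len(row)), key=row.__getitem__) + 1)
    return seq

# Invented matrix: operation 1 usually leads to 2, and 2 leads back to 1,
# so the greedy path alternates between the two operations.
T = [[0.1, 0.9],
     [0.8, 0.2]]
```

Sampling transitions probabilistically instead of greedily would trace lower-probability paths as well, which is how a stochastic agent could reproduce the variety of sequences seen across participants.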
### Results for Home Cooling System Design Study.
The same methodology used to analyze the truss design data was applied to operation data from the cooling system design study. A plot of the log-likelihood for models of increasing order is provided in Fig. 7(a). Many of the same trends from Fig. 4(a) are echoed here. There is an increase in log-likelihood between the zero-order and first-order Markov models, indicating that sequencing behavior is evident in the study data. There is a minuscule mean increase in testing accuracy between the first-order and second-order models, but this increase is nonsignificant ($F=0.02$, $p>0.5$). After second order, the testing log-likelihood decreases, again indicating that the higher-order models tend to overfit the training data, losing generalizability. The marked divergence between training and testing curves displayed in Fig. 7(a) (which was not as sharp in Fig. 4) is indicative of the fact that the higher-order models have a greater tendency to overfit this dataset. This can be attributed to the fact that participants in the cooling system design study used far fewer operations than participants in the truss design study.
As with the analysis of the truss design study, the first-order Markov model is the preferred model for this design task. This may indicate that the sequencing of design operations can be treated as a first-order Markov process for the types of configuration tasks examined in this work, or perhaps more generally. At the very least, it is evidence that lower-order processes (but not zero-order) tend to be the most veridical. The higher-order models may learn specific sequential constructs that are informative, but they do not appear to offer a superior description of aggregate patterns of design activity.
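The model-order comparison described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the study's actual analysis code; the operation names and toy sequences are hypothetical, and unseen transitions are floored at a small probability to keep the held-out log-likelihood finite.

```python
from collections import defaultdict
import math

def fit_markov(sequences, order):
    """Estimate P(next op | last `order` ops) from observed operation sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(order, len(seq)):
            context = tuple(seq[i - order:i])  # empty tuple when order == 0
            counts[context][seq[i]] += 1
    return {ctx: {op: n / sum(nxt.values()) for op, n in nxt.items()}
            for ctx, nxt in counts.items()}

def log_likelihood(model, sequences, order, floor=1e-6):
    """Held-out log-likelihood; unseen transitions are floored to avoid log(0)."""
    ll = 0.0
    for seq in sequences:
        for i in range(order, len(seq)):
            context = tuple(seq[i - order:i])
            ll += math.log(model.get(context, {}).get(seq[i], floor))
    return ll

# Hypothetical truss-study operation logs; real data would come from study transcripts
train = [["add_joint", "add_member", "add_member", "size_member"]] * 10
test = [["add_joint", "add_member", "size_member"]]
for order in range(3):
    model = fit_markov(train, order)
    print(order, round(log_likelihood(model, test, order), 3))
    # held-out fit peaks at order 1 on this toy data, mirroring the overfitting trend
```

On this toy data the second-order model assigns near-zero probability to a held-out transition it never observed, reproducing in miniature the train/test divergence seen in Figs. 4(a) and 7(a).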
Examining the differences between Figs. 7(b) and 7(c) can once again provide insight as to the benefit that is derived from pursuing operations sequentially for the cooling system design task. Whereas the first-order Markov model developed for the truss design data had a strongly diagonal structure, the transition matrix developed for the cooling system data has several strong off-diagonal elements and relatively weak diagonal elements. This indicates that there is little propensity to apply the same operator multiple times in a row. The only operations with more than a 30% chance of being applied multiple times in series are sensor movement, cooler movement, and cooler tuning. It should be noted that these are the shape operations. In contrast, the topology operations (adding or removing products) have lower probabilities of being applied multiple times in series.
A graphical version of the first-order Markov transition matrix is provided in Fig. 8 (thresholded in the same manner as Fig. 5). This representation reinforces many of the trends observed in the raw transition matrix. Two trends are made particularly clear in this graph. The first is the highly probable linkage from adding a processor to adding a sensor, to adding a cooler. This sequence enables the construction of the simplest independent subsystem possible, consisting of a sensor to read the temperature in a room, a cooler to act on the temperature in a room, and a processor to decide when to activate the cooler based on information from the sensor. The second trend is the strong connectedness of the cooler tuning operation to nearly every other operation. This indicates that cooler tuning played an integral role in the production of solutions, and was frequently utilized throughout the design process.
As shown in the results from the truss design study, longer sequences of operations can be extracted by traversing the graph-based representation of the transition matrix (see Fig. 9). Sequence A consists of a processor addition, a sensor addition, and a cooler addition. As noted above in Fig. 8, this sequence encodes the construction of the simplest independent subsystem possible, consisting of a sensor, a cooler, and a processor. Sequence B describes the removal followed by the addition of a cooler. These actions were necessary to transfer the control of a cooler to a different processor—this was not enabled with a single move during the study. Interestingly, the probability of the opposite of these two actions (adding a cooler and then deleting a cooler) was nearly 0. Sequences C, D, and E are all sequences related to placing and modifying coolers. The prevalence of these sequences might be expected since the operations for adding and tuning coolers were applied the most often (see Fig. 7(b)). Sequence C describes the common action sequence of adding a cooler and then immediately tuning its properties (a sequence employed by nearly every participant). An alternative cooler-related sequence is presented with sequence D in which a cooler is added, moved to a new location, and then tuned. This sequence would have been enacted when the cooler did not function as expected where it was placed. Sequence E is related to sequence D in that it consists of the same operations but they are enacted in a different order. Sequence E begins with moving a cooler, an action that might leave part of the building undercooled. A new cooler is then added (ostensibly in the undercooled area) and tuned to optimize performance.
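Sequences like A through E can be extracted mechanically by walking the transition graph and keeping only paths whose every edge exceeds a probability threshold (the same thresholding used to draw Figs. 5 and 8). The sketch below is ours, with invented transition probabilities for illustration; the paper's actual matrices are in Figs. 4(c) and 7(c).

```python
def probable_sequences(T, start, length, threshold=0.3):
    """Enumerate sequences from `start` whose every transition meets `threshold`."""
    paths = [[start]]
    for _ in range(length - 1):
        paths = [path + [op]
                 for path in paths
                 for op, p in T.get(path[-1], {}).items()
                 if p >= threshold]
    return paths

# Hypothetical first-order transition probabilities (rows need not sum to 1 here,
# since low-probability edges have been omitted for brevity)
T = {"add_processor": {"add_sensor": 0.6, "tune_cooler": 0.2},
     "add_sensor": {"add_cooler": 0.7, "move_sensor": 0.2},
     "add_cooler": {"tune_cooler": 0.5, "move_cooler": 0.35}}
print(probable_sequences(T, "add_processor", 3))
# [['add_processor', 'add_sensor', 'add_cooler']]
```

With these made-up numbers the only surviving three-step path is the processor–sensor–cooler construction sequence, analogous to sequence A in Fig. 9.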
### Discussion.
This section analyzed data from two design studies by fitting Markov models of increasing order to the operational data from the study. These models progressively encoded greater degrees of memory, meaning that the choice of the next operation in a sequence was based on knowledge of a greater number of prior operations. Two important findings resulted from this analysis.
1. It is likely that participants in both studies utilized operation sequences.
2. Designers' operational sequences can be modeled accurately using the first-order Markov chains; the higher-order Markov chains do not lead to significant increases in accuracy.
The first finding stems from a comparison of the zero- and first-order Markov models. The zero-order Markov models cannot encode sequence information, while the first-order Markov models provide a minimal representation of sequencing, in which selection of the next operation is conditional upon only the last operation. For both studies, the first-order Markov models fit the operation data better than the zero-order models, thus demonstrating that operation sequencing is evident.
The second finding stems from a comparison of the first-order and higher-order Markov models. The first-order Markov model provided a fit that was either equivalent to or better than the higher-order models for both studies. The higher-order models encode sequences that are dependent on multiple prior operations, instead of just the most recent single operation. Therefore, the higher fit of the first-order model indicates that memory of multiple past operations is not necessary to accurately model the selection of future operations.
These first-order Markov models assume that a designer's choice of a subsequent operation is dependent only on what the last operation was, establishing a causal link between the two. By stringing together several of these causally linked operations, sequences of arbitrary length can be created. The process of creating these long sequences essentially amounts to traversing the graph described by the first-order transition matrix. As demonstrated in Figs. 6 and 9, the longer sequences extracted by this method describe meaningful patterns of design that were employed often by study participants. These longer sequences might be represented more explicitly in the higher-order Markov chains, but they are represented both succinctly and accurately using the first-order Markov chains.
The number of independent parameters required to fully define the transition matrix for a Markov chain model is $(k-1)k^m$, where $k$ is the number of possible operations and $m$ is the order of the model. This means that the number of model parameters increases exponentially with the order of the model. As an illustration, consider the truss design problem, which has seven operations: a zero-order Markov chain requires the estimation of six independent parameters, a first-order model requires 42, and a second-order model requires 294 parameters. The fourth-order model trained in this work required the estimation of more than 10,000 independent parameters. Larger numbers of parameters require larger quantities of training data in order to accurately estimate the values of the parameters. This may offer some intuition as to why the first-order models in this work were the most veridical in comparison to human data. The higher-order sequences simply require too much information and are thus too burdensome to learn. This could heavily bias human problem-solvers toward the lower-order sequences that can be learned with exponentially less information, enabling quick adaptation to new problems.
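The parameter counts quoted above can be checked directly from the formula $(k-1)k^m$ (each of the $k^m$ contexts carries a $k$-way multinomial with $k-1$ free parameters); the function name here is ours.

```python
def markov_param_count(k, m):
    # k**m possible contexts, each a k-way multinomial with k - 1 free parameters
    return (k - 1) * k ** m

# Truss design task: k = 7 operations
for m in range(5):
    print(m, markov_param_count(7, m))  # 6, 42, 294, 2058, 14406
```

The fourth-order count (14,406) matches the paper's statement that more than 10,000 independent parameters were required.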
## Investigation 2: Benefit of Operation Sequences
The first investigation discovered that the operation sequences employed by the study participants are accurately represented using the first-order Markov chains. However, it has not been shown directly that sequence learning positively impacts solution quality. On the one hand, the ability to learn sequences may help designers learn and exploit problem-specific heuristics to quickly find fruitful regions of the design space. On the other hand, sequence learning may bias designers toward learning sequences that greedily improve solution quality. Applying these greedy sequences could critically limit the breadth of search and lead designers toward local minima of inferior solution quality.
Because humans are capable of learning sequences implicitly [12,13], it is difficult to control and observe sequence learning as an experimental variable in a study with human participants. It is possible that implicit learning processes could take over even if participants were somehow prohibited from engaging in explicit sequence learning. For that reason, this work utilizes the CISAT modeling framework [2] to test the effects of the first-order sequence learning. The objects simulated in CISAT (i.e., designers and design teams) have explicitly defined protocols and skills. This makes it possible to directly modulate the degree to which CISAT agents engage in sequence learning, which in turn enables a direct comparison between sequential and nonsequential learning patterns. This assessment has the potential to indicate the degree to which sequence learning is or is not beneficial for real human designers.
### Methodology.
The CISAT modeling framework is an agent-based computational platform that is intended to simulate the process and performance of engineering design teams, and has been shown to do so accurately on the configuration-style design problems used in the current work [2]. The core functionality of the CISAT framework is based on a simulated annealing algorithm. This core functionality is augmented with eight cognitive characteristics, selected from the literature on design and problem-solving, in order to support a more veridical representation of the way in which individuals search for solutions and interact with a team while doing so [2]. It should be noted that the ability to learn and employ sequences is a characteristic of individuals. However, the corpus of data used here (from both the truss and cooling system design tasks) was produced by individual designers operating within teams. Therefore, the CISAT simulations in this work are structured to simulate the performance of teams. The sequence learning ability is implemented in CISAT at the agent level, reflecting the individual sequence-learning abilities of human designers.
CISAT agents learn how to apply operations through operational learning, one of the eight cognitive characteristics implemented in CISAT. In Ref. [2], the operational learning characteristic was implemented through the following steps. At the beginning of each iteration, an agent selects which move operator to apply next by taking a random draw from a multinomial distribution defined by a vector of probabilities, $p$. The chosen move operator, $i$, is then applied to the current solution. If operator $i$ improves the quality of the solution, then the probability that that operator will be chosen in the future is increased according to the update rule
$p_i \leftarrow p_i \cdot (1 + k_{OL})$
(3)
where $k_{OL}$ is a parameter that modifies how rapidly the values of $p$ are updated. If the move operator decreases the quality of the solution, the probability of selecting it in the future is reduced according to a similar update rule
$p_i \leftarrow p_i \cdot (1 - k_{OL})$
(4)
The probability vector is renormalized following every update. Updating selection probability based on the effect of only the most recent application of a given move operator (instead of some average over past applications) reflects availability bias [51], the tendency of humans to place greater weight on information that is readily available in memory.
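The update rules of Eqs. (3) and (4), together with the renormalization step, can be sketched as follows. This is a minimal illustration rather than the CISAT implementation; the variable names are ours and $k_{OL}=0.1$ is an arbitrary choice.

```python
def update_probs(p, i, improved, k_ol=0.1):
    """Reinforce (Eq. (3)) or penalize (Eq. (4)) operator i, then renormalize."""
    p = list(p)
    p[i] *= (1 + k_ol) if improved else (1 - k_ol)
    total = sum(p)
    return [x / total for x in p]

p = [0.25, 0.25, 0.25, 0.25]
p = update_probs(p, 0, improved=True)   # operator 0 helped; its share rises
print([round(x, 3) for x in p])
# [0.268, 0.244, 0.244, 0.244]
```

Because only the most recent outcome of an operator drives the update, the scheme reflects the availability bias described above.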
A second version of the operational learning characteristic was implemented for this work using the first-order Markov chain constructs. This model was chosen specifically because it was shown in Sec. 4 to accurately encode human operation sequences. This concept is implemented so that agents first select which operator to apply by randomly drawing from the probabilities defined in $T$. Next, the agents update the matrix element corresponding to the operator that they chose, iteratively constructing a transition matrix that encodes the most beneficial move operator sequences. This two-step procedure is similar to the procedure identified in humans solving the Thurstone letter series completion task. Here, the updating of probabilities in $T$ aligns with the pattern generation step for the Thurstone task, and the selection of a specific operation based on the probabilities in $T$ aligns with the sequence generation step in the Thurstone task.
More specifically, when selecting a move operator, a random draw is taken from the multinomial distribution defined by row $i$ of matrix $T$, where operator $i$ is the last move operator that was applied. The operator chosen, operator $j$, is then applied to the current solution. The probabilities contained in the transition matrix are then updated depending on whether operator $j$ improves the quality of the current solution
$T_{ij} \leftarrow T_{ij} \cdot (1 + k_{OL})$
(5)
or worsens it
$T_{ij} \leftarrow T_{ij} \cdot (1 - k_{OL})$
(6)
This selection process probabilistically links the choice of the next move operator to the last move operator that was chosen via the Markov chain transition matrix, making it possible for CISAT agents to recognize and use beneficial sequences of operations. It should be noted that the transition matrix of a first-order Markov chain does not explicitly encode finite sequences of operations. Instead, the sequences are encoded probabilistically and implicitly based on the effects of applying move operators, which increases the likelihood of applying beneficial sequences. This probabilistic scheme also allows for new sequences to be discovered.
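The two-step select-then-reinforce procedure of Eqs. (5) and (6) can be sketched as below. This is our own simplified rendering, not CISAT code: `evaluate` stands in for the solution-quality check CISAT would perform after applying the operator, and the transition matrix values are invented.

```python
import random

def select_and_update(T, last_op, evaluate, k_ol=0.1, rng=random):
    """Draw the next operator from row `last_op` of T, then apply Eq. (5) or (6)."""
    ops = list(T[last_op])
    j = rng.choices(ops, weights=[T[last_op][o] for o in ops])[0]
    # Reinforce if the chosen operator improved the design, penalize otherwise
    T[last_op][j] *= (1 + k_ol) if evaluate(j) else (1 - k_ol)
    row_sum = sum(T[last_op].values())
    for op in ops:
        T[last_op][op] /= row_sum   # renormalize the row
    return j

# Hypothetical two-operation transition matrix
T = {"add_joint": {"add_member": 0.7, "move_joint": 0.3},
     "add_member": {"add_member": 0.5, "add_joint": 0.5}}
chosen = select_and_update(T, "add_joint", evaluate=lambda op: True)
print(chosen, T["add_joint"])
```

Because each row is updated independently, beneficial operator pairs are reinforced in place, which is how finite sequences come to be encoded implicitly in the matrix.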
Only the operational learning characteristic was modified—all other agent protocols and characteristics are as originally given in Ref. [2]. This change is not intended to be an optimal learning approach, but rather to reflect the actuality of human behavior. If principles of the first-order sequencing are repurposed for use in computational synthesis algorithms, it may be more useful to update operator selection probabilities based on a measure of average performance rather than using the binary tuning approach featured here.
### Results for Truss Design Study.
In the interest of simplicity, performance was only simulated for the initial problem statement from the truss design study. A total of 100 teams were simulated for each of the two conditions: sequential, utilizing the first-order Markov chain concepts; and nonsequential, utilizing the zero-order Markov chains. Simulation results were then analyzed to extract each team's best solution at every iteration. A comparison of the two conditions is provided in Fig. 10, showing the mean normalized strength-to-weight ratio as a measure of solution quality.
The difference in final design quality between the two simulated conditions (zero-order Markov and first-order Markov) is highly significant ($F=11.2$, $p<0.001$), and the condition using the first-order Markov chain learning approach achieved a higher final design quality. Although the introduction of sequence-learning abilities does not raise CISAT solution quality to the level of the real human teams, it closes the difference between the two by nearly half, indicating that the ability to learn sequences is vital to the success of real designers.
### Results for Home Cooling System Design Study.
CISAT was also used to simulate the performance of human design teams on the cooling system design task. Sequential and nonsequential learning were implemented as above with the first-order and zero-order Markov chains, respectively. The results of the simulations were then postprocessed to track each team's best solution over time. In this case, the normalized cooling efficiency was computed for the series of best solutions as an indicator of quality. This metric is the cooling capacity of the system (the extent to which it decreased the peak temperature in the home), divided by the total cost of the system. This ratio was then normalized according to the target values for total cost and peak temperature. A comparison of the mean normalized cooling efficiency of the two conditions is shown in Fig. 11.
For this task, simulated teams that were capable of learning and employing sequences of operations achieved solutions with significantly higher quality ($F=5.91$, $p<0.05$). The increase in solution quality that results from the introduction of sequence-learning abilities again helps to close the gap between simulation and real human performance.
### Discussion.
The objective of this section was to assess whether or not the ability to learn sequences contributes positively to eventual solution quality. This was accomplished by modifying the operational learning characteristic of CISAT to enable agents to learn beneficial sequences of operations by reinforcing a first-order Markov transition matrix. This made it possible to perform a comparison between CISAT-simulated teams employing either nonsequential learning (a zero-order Markov model) or sequential learning similar to that observed in designers (a first-order Markov model). Simulations were conducted to reflect cognitive studies involving the design of trusses and the design of cooling systems, and in both cases, sequential learning produced solutions with higher quality. This indicates that sequence learning is a beneficial aspect of human cognition during design.
The performance of sequence-learning and nonsequence-learning approaches is similar for the initial portion of the simulation on both design problems. A similar phenomenon was observed in other work that used machine learning to recognize and employ move operation pairs during computational design [38]. In that work, algorithms with and without the ability to learn pairs were compared and showed nearly identical performance for the first 3% of the search. In that work, as well as the current paper, the identical early performance can be explained as an exploratory phase—the agent or algorithm is still learning about the design space and has not yet learned effective move pairs or operation sequences. Once sufficient exploration has occurred, the agent or algorithm can begin to employ learned patterns to more effectively create solutions. This highlights the fact that, especially for sequence learning, exploration and exploitation are inextricably linked. Human designers may stand to benefit from emphasizing the recognition of beneficial operation sequences during early exploration in order to aid more effective exploitation during the later stages of design.
Although the addition of first-order sequence learning boosts performance when compared to nonsequence-learning simulations, there remains a substantial division between the quality of solutions produced in the simulations and those produced by humans. While the order in which operations are performed is important for exploration and exploitation of the design space, the way in which the operations are applied to the current solution (e.g., which structural member is increased in size, or where a new cooler is added) is important as well. Since CISAT agents stochastically apply operations once chosen, it is likely that the gap between the performance of sequence-learning agents and humans is due to nuances in how the operations are applied.
## Conclusions
This paper investigated the sequencing of operations in engineering design problems using a variety of statistical and computational tools. Through analysis of human data from two cognitive studies, two research questions were specifically addressed:
1. How much representational complexity is necessary to quantify the sequential patterns that designers employ during solving? Markov chain models with increasing order (representative of how much memory is assumed in the model) were fit to the data from two human studies. For both studies, the analysis indicated that the sequencing of design actions might be treated accurately as a first-order Markov process. It should be noted that longer finite-length sequences of operations may still be extracted by traversing the graphs described by the first-order Markov transition matrices.
2. Does the use of operation sequences benefit designers? The CISAT modeling framework was used to assess the potential benefit from learning and employing operation sequences. Several sets of simulations were conducted in which teams of agents solved the design problems from the two cognitive studies, either with the ability to learn sequences (encoded within a first-order Markov model) or without that ability (represented mathematically by a nonsequential statistical model). A comparison of these simulations demonstrated that sequence-learning abilities significantly increased solution quality for both design problems.
The results of this work have the potential to inform novel approaches for design education and training. Here, it was shown that designers utilized first-order sequences of operations, and that this allowed them to discover solutions with higher quality. Although it is not clear whether these sequences were learned implicitly or explicitly, there is evidence that explicit awareness of a sequence-learning task can improve performance [24]. Therefore, it is possible that teaching designers to be aware of the importance of learning sequences could improve their ability to learn said sequences. This self-awareness could be augmented in computer-aided design software by logging and analyzing design activity to provide real-time feedback about common sequential strategies, ensuring explicit awareness. Examining and quantifying the difference between explicit and implicit sequence-learning modalities with applications to design will be addressed in future work.
This work also holds implications for design automation and synthesis. Specifically, this work demonstrates that it may be possible to use Markov constructs to improve the effectiveness of design algorithms. The second investigation of this work implemented sequence-learning abilities in CISAT, a computational framework that is based in part on principles of stochastic optimization. This enabled computational agents to create and modify solutions using sequential chains of operations, resulting in improved performance. A similar implementation could be used to imbue other design synthesis algorithms with the ability to learn and apply sequences of operations. Such applications hinge on the fact that Markov chain models are generative [52], meaning that they encode the training data in such a way that they can be used to create new, synthetic data. In a design context, this amounts to the creation of new operational sequences that adhere probabilistically to observed patterns. Markov chain models could be learned and reinforced as an algorithm creates design solutions, or trained prior to use in an algorithm if sufficient prior data are available.
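The generative use of a trained Markov chain described above amounts to repeatedly sampling from the row of the transition matrix corresponding to the last operation. The sketch below illustrates this under our own naming, with an invented transition matrix standing in for one learned from prior design sessions.

```python
import random

def generate_sequence(T, start, length, rng=random):
    """Sample a synthetic operation sequence from a first-order transition matrix T."""
    seq = [start]
    while len(seq) < length:
        row = T[seq[-1]]
        ops = list(row)
        seq.append(rng.choices(ops, weights=[row[o] for o in ops])[0])
    return seq

# Hypothetical transition matrix, as might be trained from prior design sessions
T = {"add_joint": {"add_member": 0.8, "move_joint": 0.2},
     "add_member": {"add_member": 0.6, "add_joint": 0.4},
     "move_joint": {"add_member": 1.0}}
print(generate_sequence(T, "add_joint", 6))
```

Every sequence sampled this way adheres probabilistically to the observed patterns, which is what makes the model usable as a generator inside a synthesis algorithm.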
This work offers a foundation for describing sequence learning in engineering design by demonstrating that the first-order Markov chains are a veridical model, and that learning simple first-order sequences can improve the solution quality. These descriptive results establish a basis for future normative and explanatory research. Future explanatory work should investigate the underlying causation that gives rise to the sequential behaviors observed and described here, perhaps by using Markov decision process models [53]. Leveraging additional results from psychology (such as the importance of pauses during solving [54]) could also provide explanatory power. Further normative work could utilize numerical simulations to identify the dependence of learned sequences on the complexity and characteristics of the problem at hand by employing a predictive response surface methodology [47,55].
## Acknowledgment
This material is based upon the work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE125252 and the United States Air Force Office of Scientific Research through Grant Nos. FA9550-12-1-0374 and FA9550-16-1-0049. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the sponsors. An early version of part of this work was included in the proceedings of the Conference on Design Computing and Cognition [56].
## Funding Data
• Air Force Office of Scientific Research (Grant Nos. FA9550-12-1-0374 and FA9550-16-1-0049).
• Division of Graduate Education (Grant No. DGE125252).
## References
1. McComb, C., Cagan, J., and Kotovsky, K., 2015, "Rolling With the Punches: An Examination of Team Performance in a Design Task Subject to Drastic Changes," Des. Studies, 36(1), pp. 99–121.
2. McComb, C., Cagan, J., and Kotovsky, K., 2015, "Lifting the Veil: Drawing Insights About Design Teams From a Cognitively-Inspired Computational Model," Des. Studies, 40, pp. 119–142.
3. Neill, T. M., Gero, J. S., and Warren, J., 1998, "Understanding Conceptual Electronic Design Using Protocol Analysis," Res. Eng. Des., 10(3), pp. 129–140.
4. Kan, J. W., and Gero, J. S., 2011, "Comparing Designing Across Different Domains: An Exploratory Case Study," 18th International Conference on Engineering Design (ICED), Lyngby/Copenhagen, Denmark, Aug. 15–19, pp. 194–203. https://www.designsociety.org/publication/30470/comparing_designing_across_different_domains_an_exploratory_case_study
5. Yu, R., and Gero, J. S., 2015, "An Empirical Foundation for Design Patterns in Parametric Design," 20th International Conference of the Association for Computer-Aided Architectural Design Research in Asia, Daegu, South Korea, May 20–23, pp. 1–9. http://nova.newcastle.edu.au/vital/access/manager/Repository/uon:26697
6. Gero, J. S., and Peng, W., 2009, "Understanding Behaviors of a Constructive Memory Agent: A Markov Chain Analysis," Knowl. Based Syst., 22(8), pp. 610–621.
7. March, J. G., 1991, "Exploration and Exploitation in Organizational Learning," Organ. Sci., 2(1), pp. 71–87.
8. Gupta, A. K., Smith, K. G., and Shalley, C. E., 2006, "The Interplay Between Exploration and Exploitation," 49(4), pp. 693–706.
9. Goncher, A., Johri, A., Kothaneth, S., and Lohani, V., 2009, "Exploration and Exploitation in Engineering Design: Examining the Effects of Prior Knowledge on Creativity and Ideation," 39th IEEE Frontiers in Education Conference (FIE), San Antonio, TX, Oct. 18–21, pp. 1–7.
10. Kotovsky, K., Hayes, J., and Simon, H. A., 1985, "Why Are Some Problems Hard? Evidence From Tower of Hanoi," Cognit. Psychol., 17(2), pp. 248–294.
11. Kotovsky, K., and Simon, H. A., 1990, "What Makes Some Problems Really Hard: Explorations in the Problem Space of Difficulty," Cognit. Psychol., 22(2), pp. 143–183.
12. Nissen, M. J., and Bullemer, P., 1987, "Attentional Requirements of Learning: Evidence From Performance Measures," Cognit. Psychol., 19(1), pp. 1–32.
13. Reed, J., and Johnson, P., 1994, "Assessing Implicit Learning With Indirect Tests: Determining What Is Learned About Sequence Structure," J. Exp. Psychol., 20(3), pp. 585–594.
14. Clegg, B. A., Digirolamo, G. J., and Keele, S. W., 1998, "Sequence Learning," Trends Cognit. Sci., 2(8), pp. 275–281.
15. Chase, W. G., and Simon, H. A., 1973, "Perception in Chess," Cognit. Psychol., 4(1), pp. 55–81.
16. Egan, D. E., and Schwartz, B. J., 1979, "Chunking in Recall of Symbolic Drawings," Mem. Cognit., 7(2), pp. 149–158.
17. Reitman, J. S., 1976, "Skilled Perception in Go: Deducing Memory Structures From Inter-Response Times," Cognit. Psychol., 8(3), pp. 336–356.
18. Moss, J., Cagan, J., and Kotovsky, K., 2004, "Learning From Design Experience in an Agent-Based Design System," Res. Eng. Des., 15(2), pp. 77–92.
19. Pretz, J. E., Naples, A. J., and Sternberg, R. J., 2003, "Recognizing, Defining, and Representing Problems," The Psychology of Problem Solving, J. E. Davidson and R. J. Sternberg, eds., Cambridge University Press, New York.
20. Simon, H. A., and Kotovsky, K., 1963, "Human Acquisition of Concepts for Sequential Patterns," Psychol. Rev., 70(6), pp. 534–546.
21. Kotovsky, K., and Simon, H. A., 1973, "Empirical Tests of a Theory of Human Acquisition of Concepts for Sequential Patterns," Cognit. Psychol., 4(3), pp. 399–424.
22. Novick, L. R., and Tversky, B., 1987, "Cognitive Constraints on Ordering Operations: The Case of Geometric Analogies," J. Exp. Psychol.: Gen., 116(1), pp. 50–67.
23. Perruchet, P., and Amorim, M.-A., 1992, "Conscious Knowledge and Changes in Performance in Sequence Learning: Evidence Against Dissociation," J. Exp. Psychol., 18(4), pp. 785–800. https://www.ncbi.nlm.nih.gov/pubmed/1385616
24. Willingham, D. B., Nissen, M. J., and Bullemer, P., 1989, "On the Development of Procedural Knowledge," J. Exp. Psychol., 15(6), pp. 1047–1060. https://www.ncbi.nlm.nih.gov/pubmed/2530305
25. Curran, T., and Keele, S. W., 1993, "Attentional and Nonattentional Forms of Sequence Learning," J. Exp. Psychol., 19(1), pp. 189–202.
26. Stempfle, J., and P., 2002, "Thinking in Design Teams—An Analysis of Team Communication," Des. Studies, 23(5), pp. 473–496.
27. Atman, C. J., R. S., Cardella, M. E., Turns, J., Mosborg, S., and Saleem, J., 2007, "Engineering Design Processes: A Comparison of Students and Expert Practitioners," J. Eng. Educ., 96(4), pp. 359–379.
28. Goldschmidt, G., and Rodgers, P. A., 2013, "The Design Thinking Approaches of Three Different Groups of Designers Based on Self-Reports," Des. Studies, 34(4), pp. 454–471.
29. D. F., and Lee, T. Y., 1989, "Design Methods Used by Undergraduate Engineering Students," Des. Studies, 10(4), pp. 199–207.
30. Todd, D., 1997, "Multiple Criteria Genetic Algorithms in Engineering Design and Operation," Ph.D. thesis, University of Newcastle, Newcastle-upon-Tyne, UK.
31. Rogers, J., 1996, "DeMAID/GA—An Enhanced Design Manager's Aid for Intelligent Decomposition," AIAA Paper No. 96-4157.
32. Sen, C., Ameri, F., and Summers, J. D., 2010, "An Entropic Method for Sequencing Discrete Design Decisions," ASME J. Mech. Des., 132(10), p. 101004.
33. Waldron, M. B., and Waldron, K. J., 1988, "A Time Sequence Study of a Complex Mechanical System Design," Des. Studies, 9(2), pp. 95–106.
34. Meier, C., Yassine, A. A., and Browning, T. R., 2007, "Design Process Sequencing With Competent Genetic Algorithms," ASME J. Mech. Des., 129(6), pp. 566–585.
35. Gero, J. S., 1990, "Design Prototypes: A Knowledge Representation Schema for Design," AI Mag., 11(4), pp. 26–36. https://www.aaai.org/ojs/index.php/aimagazine/article/view/854
36. Bhatta, S. R., and Goel, A. K., 1992, "Discovery of Physical Principles From Design Experiences," International Machine Learning Workshop, San Mateo, CA [Int. J. AI EDAM 8(2), pp. 1–22 (1994)]. https://pdfs.semanticscholar.org/fc0f/467ab4ead20a0a0771a2c8ab9fb102f14b1c.pdf
37. Gero, J. S., Kan, J. W., and M., 2011, "Analysing Design Protocols: Development of Methods and Tools," Research Into Design, Research Publishing, Singapore, pp. 3–10.
38. Vale, C. A. W., and Shea, K., 2003, "A Machine Learning-Based Approach to Accelerating Computational Design Synthesis," 14th International Conference on Engineering Design (ICED), Stockholm, Sweden, Aug. 19–21, pp. 183–184. https://www.designsociety.org/publication/24114/a_machine_learning-based_approach_to_accelerating_computational_design_synthesis
39. Stroock, D. W., 2005, An Introduction to Markov Processes, Berlin.
40. Markov, A. A., 1907, "Extension of the Limit Theorems of Probability Theory to a Sum of Variables Connected in a Chain," The Notes of the Imperial Academy of Sciences of St. Petersburg, VIII Series, Physio-Mathematical College XXII, Imperial Academy of Science, St. Petersburg, Russia.
41. Scherr, A. L., 1962, An Analysis of Time-Shared Computer Systems, Massachusetts Institute of Technology, Cambridge, MA.
42. Page, L., Brin, S., Motwani, R., and T., 1999, "The PageRank Citation Ranking: Bringing Order to the Web," Stanford InfoLab, Stanford, CA, Technical Report No. 422. http://ilpubs.stanford.edu:8090/422/
43. Tamir, A., 1998, Applications of Markov Chains in Chemical Engineering, Elsevier, Amsterdam, The Netherlands.
44. Gero, J. S., and Kan, J. W., 2009, "Learning to Collaborate During Team Designing: Some Preliminary Results From Measurement-Based Tools," Third International Conference on Research Into Design Engineering (ICORD), Bangalore, India, Jan. 10–12, pp. 560–567.
45. Gero, J. S., and Kan, J. W., 2011, "Learning to Collaborate During Team Designing: Quantitative Measurement," Third International Conference on Research Into Design Engineering (ICORD), Bangalore, India, Jan. 10–12, pp. 978–981. http://mason.gmu.edu/~jgero/publications/2011/11KanGeroCoRD11.pdf
46. Raftery, A. E., 1985, "A Model for High-Order Markov Chains," J. R. Stat. Soc. Ser. B, 47(3), pp. 528–539. http://www.jstor.org/stable/2345788
47. McComb, C., Cagan, J., and Kotovsky, K., 2017, "Optimizing Design Teams Based on Problem Properties: Computational Team Simulations and an Applied Empirical Test," ASME J. Mech. Des.
,
139
(
4
), p.
041101
.
48.
Arlot
,
S.
, and
Celisse
,
A.
,
2010
, “
A Survey of Cross-Validation Procedures for Model Selection
,”
Stat. Surv.
,
4
, pp.
40
79
.
49.
Stone
,
M.
,
1974
, “
Cross-Validatory Choice and Assessment of Statistical Predictions
,”
J. R. Stat. Soc. Ser. B
,
36
(
2
), pp.
111
147
.
50.
Hastie
,
T.
,
Tibshirani
,
R.
, and
Friedman
,
J.
,
2009
,
The Elements of Statistical Learning
,
Springer
,
New York
.
51.
Tversky
,
A.
, and
Kahneman
,
D.
,
1973
, “
Availability: A Heuristic for Judging Frequency and Probability
,”
Cognit. Psychol.
,
5
(
2
), pp.
207
232
.
52.
Bishop
,
C. M.
, and
Lasserre
,
J.
,
2007
, “
Generative or Discriminative? Getting the Best of Both Worlds
,”
Bayesian Statistics
,
J. M.
Bernardo
,
M. J.
Bayarri
,
J. O.
Berger
,
A. P.
Dawid
,
D.
Heckerman
,
A. F. M.
Smith
, and
M.
West
, eds.,
Oxford University Press
,
Oxford, UK
, pp.
3
24
.
53.
Bellman
,
R.
,
1957
, “
A Markovian Decision Process
,”
J. Math. Mech.
,
6
(
5
), pp.
679
684
.http://www.jstor.org/stable/24900506
54.
Chi
,
M. T. H.
,
Glaser
,
R.
, and
Rees
,
E.
,
1982
, “
Expertise in Problem Solving
,”
Advances in the Psychology of Human Intelligence
,
Erlbaum
,
Hillsdale, NJ
, pp.
7
75
.
55.
Dinar
,
M.
,
Park
,
Y.-S.
,
Shah
,
J. J.
, and
Langley
,
P.
,
2015
, “
Patterns of Creative Design: Predicting Ideation From Problem Formulation
,”
ASME
Paper No. DETC2015-46537.
56.
McComb
,
C.
,
Cagan
,
J.
, and
Kotovsky
,
K.
,
2016
, “
Utilizing Markov Chains to Understand Operation Sequencing in Design Tasks
,”
Design Computing and Cognition’16
,
J. S.
Gero
, ed., Springer International Publishing, Cham, Switzerland, pp.
401
418
.
|
|
# In Q.1, Exercise 14.2, you were asked to prepare a frequency distribution table regarding the blood groups of 30 students of a class. Use this table to determine the probability that a student of this class, selected at random, has blood group AB.

| Blood Group | Number of students |
| --- | --- |
| O | 12 |
| A | 9 |
| B | 6 |
| AB | 3 |
Hence, the required probability is $$P = \frac{3}{30} = \frac{1}{10}$$.
Therefore, the probability that a student of this class, selected at random, has blood group AB is $$\frac{1}{10}$$.
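The same computation can be sketched in a few lines of Python (the table values are taken from the question above):

```python
from fractions import Fraction

# Frequency table from the question: blood group -> number of students.
students = {"O": 12, "A": 9, "B": 6, "AB": 3}
total = sum(students.values())
assert total == 30

# P(AB) = favourable outcomes / total outcomes.
p_ab = Fraction(students["AB"], total)
assert p_ab == Fraction(1, 10)
```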
|
|
# closure
### Topology
The closure of a given set in a topological space is the smallest closed set containing the given set. For example, in $$\mathbb{R}$$ with the usual topology, the closure of the open interval $$(0,1)$$ is the closed interval $$[0,1]$$.
### Abstract Algebra
If $$F$$ is a field, the algebraic closure of $$F$$ is an algebraically closed field $$G$$ containing $$F$$ that is algebraic over $$F$$; in particular, every polynomial with coefficients in $$F$$ splits into linear factors over $$G$$.
|
|
# How do you differentiate f(x)=xe^((lnx-2)^2) using the chain rule?
Nov 1, 2016
$f ' \left(x\right) = 2 \left(\ln x - 2\right) {e}^{{\left(\ln x - 2\right)}^{2}} + {e}^{{\left(\ln x - 2\right)}^{2}}$
#### Explanation:
$f \left(x\right) = x {e}^{{\left(\ln x - 2\right)}^{2}}$
Use product rule and chain rule
$f = x , g = {e}^{{\left(\ln x - 2\right)}^{2}}$
$f ' = 1 , g ' = {e}^{{\left(\ln x - 2\right)}^{2}} \cdot 2 \left(\ln x - 2\right) \frac{1}{x}$
$f ' \left(x\right) = f g ' + g f '$
$f ' \left(x\right) = 2 \left(\ln x - 2\right) {e}^{{\left(\ln x - 2\right)}^{2}} + {e}^{{\left(\ln x - 2\right)}^{2}}$
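As a quick sanity check (not part of the original answer), the derived formula can be compared against a central-difference numerical derivative:

```python
import math

def f(x):
    # The original function: f(x) = x * e^((ln x - 2)^2), defined for x > 0.
    return x * math.exp((math.log(x) - 2) ** 2)

def fprime(x):
    # The hand-derived result above, with the common factor pulled out:
    # f'(x) = (2(ln x - 2) + 1) * e^((ln x - 2)^2)
    return (2 * (math.log(x) - 2) + 1) * math.exp((math.log(x) - 2) ** 2)

# Compare against a numerical derivative at a few sample points.
for x in (0.5, 1.0, 3.0, 10.0):
    h = 1e-6
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - fprime(x)) < 1e-4 * max(1.0, abs(fprime(x)))
```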
|
|
# Mystic Rose
##### Stage: 3 Challenge Level
The circle below has seven points spread equally around its circumference. Press start to watch the construction of a seven pointed mystic rose. You can construct different sized roses by using the slider.
Watch the animation for some different sized mystic roses.
What did you see? Describe how to construct a mystic rose.
Now describe what a completed mystic rose looks like.
Alison and Charlie have been wondering how many lines are needed to draw a 10 pointed mystic rose.
Alison wrote down the calculation $9+8+7+6+5+4+3+2+1$.
Charlie wrote down the calculation $\frac{10 \times 9}{2}$
Who is right? Can you explain how the calculations relate to the diagram?
Investigate the number of lines needed in mystic roses of different sizes.
How would Alison work them out? How would Charlie do it?
Will they always get the same result?
What are the advantages of the alternative methods?
How many lines are needed for a 100 pointed mystic rose?
Which of the numbers below could be the number of lines needed to draw a very large mystic rose? How many points would each mystic rose have around its circumference?
• 4851
• 6214
• 3655
• 7626
• 8656
You may wish to try the problems Picturing Triangle Numbers and Handshakes. Can you see why we chose to publish these three problems together?
You may also be interested in reading the article Clever Carl, the story of a young mathematician who came up with an efficient method for adding lots of consecutive numbers.
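If you want to check your answers by brute force, the reasoning above can be sketched in a few lines of Python (not part of the original problem):

```python
def lines_in_rose(n):
    # Charlie's formula: each of the n points connects to the other n - 1,
    # and every chord is counted twice, once from each end.
    return n * (n - 1) // 2

# Alison's sum 9 + 8 + ... + 1 agrees with Charlie's calculation for n = 10.
assert sum(range(1, 10)) == lines_in_rose(10) == 45

# A candidate number of lines is achievable exactly when it equals
# n(n-1)/2 for some whole number of points n.
candidates = [4851, 6214, 3655, 7626, 8656]
achievable = {c: [n for n in range(2, 200) if lines_in_rose(n) == c]
              for c in candidates}
```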
|
|
# Hinge joint attached to spring gets stuck
Because Hinge Joint 2D components do not have "spring properties" like their 3D counterparts, I made one. I did this by adding a Hinge Joint to connect Bones 1 and 2 of the tree. I then attached a Spring Joint to the circle above the tree, and connected this to the second bone. I then applied angle limits so it swings back and forth between -20 and 20.
This works very well when you apply a one-time force to a Rigid Body attached to Bone 2:
rb.AddTorque(2000f, ForceMode2D.Force);
(I don't know why that much torque is needed to move it though). But it does lean and then spring back to the normal position.
However, the minute you add multiple forces within a short time span (for example, if you add forces within Update()), the tree will just fall to the left and get stuck there.
Even when the forces have stopped being applied, the tree will never spring back to its original position, as it does when only 1 force has been applied.
In this video, you can see the tree being operated first by a motor; then the motor is stopped, causing it to spring properly. After this, a few forces are applied one after the other in short succession, which causes it to get stuck on the left.
How do I fix this? Any help would be appreciated.
• That sheep looked rather confused : D Jul 19 at 12:54
• Glad you enjoyed the sheep! Jul 19 at 13:38
|
|
# Publication [4.13] of Tomás Oliveira e Silva
Do not bookmark this page, because its URI may change in the future.
Instead, bookmark its parent page (http://sweet.ua.pt/tos/bib.html).
## Reference
Stephen D. Cohen, Tomás Oliveira e Silva and Tim Trudgian, "A Proof of the Conjecture of Cohen and Mullen on Sums of Primitive Roots," Mathematics of Computation, 2014. Accepted for publication (8 pages).
## Abstract
We prove that for all q>61, every non-zero element in the finite field $\mathbb{F}_q$ can be written as a linear combination of two primitive roots of $\mathbb{F}_q$. This resolves a conjecture posed by Cohen and Mullen.
## BibTeX entry
```@Article
{
Cohen-2014-3-PCSPR-,
author = {Cohen, Stephen D.} # { and } # {Oliveira e Silva, Tom{\'a}s} #
{ and } # {Trudgian, Tim},
title = {A Proof of the Conjecture of {C}ohen and {M}ullen on Sums of
Primitive Roots},
journal = {Mathematics of Computation},
year = {2014},
note = {Accepted for publication (8 pages).}
}
```
|
|
# [NTG-context] How to start a \definedescription-defined description's body on a new line, not the same as the description's header?
Louis Strous Louis.Strous at intellimagic.com
Fri Oct 9 18:13:50 CEST 2015
With
[start sample]
...
\startcmddescription{mycommand}
This is a very useful command. It knows just what to do, and you don't even have to specify any options!
\stopcmddescription
[end sample]
the text "This is a very useful command" begins on the same line as the "mycommand" header. I want the text to begin on the next line, so that "mycommand" is on a line of its own. How should I adjust the \definedescription command to achieve this? I've tried all kinds of things with the 'before', 'inbetween', 'after', 'indentnext', 'command', and 'headcommand' options of the \definedescription command (based on http://wiki.contextgarden.net/Command/setupdescriptions), using \par, \crlf, and \vspace, but didn't get the desired results.
Regards,
Louis Strous
IntelliMagic - Availability Intelligence
T: +31 (0)71-579 6000
www.intellimagic.com
|
|
# Is it possible neutron stars are actual elements?
As you go up the periodic table (more protons), the ratio of neutrons to protons steadily increases as well. Are we sure there are absolutely no protons and electrons in a neutron star, or could there be so many more neutrons that we cannot measure any protons and electrons? Perhaps then a neutron star is the nucleus of some huge element with a neutron:proton ratio higher than we can distinguish.
from what is written in wiki I guess the short answer is no, as a neutron star contains ions, electrons and nuclei you could probably not call the whole thing an element: upload.wikimedia.org/wikipedia/commons/thumb/9/9e/… – DrCopyPaste Apr 7 '14 at 11:32
You should provide some reference to what is considered an element. – harogaston May 3 '14 at 3:12
|
|
Today’s goal was to understand and model VCO nonlinearity, specifically in $k_{VCO}$. Rather than being a purely linear function, $k_{VCO}$ actually behaves more like a hyperbolic tangent ($\tanh(x)$). While that’s an ugly function, the output is actually quite simple: instead of a plain linear function, we just put limits on the positive and negative values we can get out of it. Nominally, these sit nicely at $\pm1$.
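A minimal sketch of this soft-limited gain model (the `k_lin` and `limit` values are illustrative, not measured from any real VCO):

```python
import math

def kvco(v, k_lin=1.0, limit=1.0):
    """Soft-limited VCO gain: roughly linear near zero, saturating at
    +/- limit for large tuning inputs (a tanh-shaped characteristic)."""
    return limit * math.tanh(k_lin * v / limit)

# Saturates at the nominal +/-1 limits for large inputs...
assert abs(kvco(100.0)) <= 1.0 and kvco(100.0) > 0.999
# ...but stays close to the linear model k_lin * v for small inputs.
assert abs(kvco(0.01) - 0.01) < 1e-5
```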
|
|
# What is a binary loss, and should I use a binary loss or a softmax loss for classification?
I was trying to understand the final section of the paper Revisiting Baselines for Visual Question Answering. The authors state that their model performs better with a binary loss in comparison to a softmax loss.
What is a binary loss (in this case)? Is the softmax loss a synonym for binary cross-entropy? Should I use a binary loss or a softmax loss for classification?
There is a nice explanation here
Binary Cross-Entropy Loss is also called Sigmoid Cross-Entropy loss. It is a Sigmoid activation plus a Cross-Entropy loss. Unlike Softmax loss it is independent for each vector component (class), meaning that the loss computed for every vector component is not affected by other component values.
The term binary stands for number of classes = 2.
I think that the binary loss is the one based on Shannon entropy, $$-\sum_i p_i \ln p_i,$$ while the softmax is based on the Boltzmann distribution: $$\frac{e^{z_i}}{\sum_j e^{z_j}}$$
Softmax itself shouldn't be a loss function, though. It's the probability-like function that you use to pick the output of the classifier. For instance, you could use it to calculate probabilities $$\hat p_{ij}$$ of classifier outcomes for categories $$i$$ for sample $$j$$.
Then you can use the entropy-based (cross-entropy) loss function to evaluate the fit: $$-\sum_i p_{ij}\ln\hat p_{ij},$$ where $$p_{ij}$$ is the binary outcome of category $$i$$.
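The independence point from the quoted explanation can be demonstrated with a small sketch (the logits and targets are made-up numbers): each sigmoid BCE term depends only on its own component, while softmax couples all components through the shared normalization.

```python
import math

def sigmoid_bce(logits, targets):
    """Per-class sigmoid + binary cross-entropy: each class is treated
    independently of the others."""
    losses = []
    for z, t in zip(logits, targets):
        p = 1.0 / (1.0 + math.exp(-z))
        losses.append(-(t * math.log(p) + (1 - t) * math.log(1 - p)))
    return losses

def softmax_ce(logits, target_index):
    """Softmax + cross-entropy: the classes compete through the shared
    normalization term."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    p = exps[target_index] / sum(exps)
    return -math.log(p)

logits = [2.0, -1.0, 0.5]
targets = [1.0, 0.0, 0.0]

# Changing a non-target logit leaves the target's BCE term untouched...
bumped = [2.0, 3.0, 0.5]
assert sigmoid_bce(bumped, targets)[0] == sigmoid_bce(logits, targets)[0]
# ...but changes the softmax loss, because the components are coupled.
assert softmax_ce(bumped, 0) != softmax_ce(logits, 0)
```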
|
|
# Mesh Current - Difficult
#### Godsninja
Joined Apr 30, 2016
16
Hi again! I've refreshed myself on mesh-current analysis, but this one problem (of two) has stumped me. It doesn't look overly difficult (the topic name is just to keep consistency), but my answers are wrong. I've taken a picture of the question, the circuit, and my solution. The question states to find the branch currents from ia through ie.
The second part of the question (b) can be ignored in this topic.
At first I was a little unsure about i3. I thought it was 4.3id - i2. After fixing that mistake, my solution and the text's solutions still differ. The rest is evident. I don't know what else to say because I've looked over this numerous times and I can't find any flaws in my method.
The question is from Nilsson & Riedels, Electric Circuits 9th edition.
Thanks.
#### WBahn
Joined Mar 31, 2012
25,751
You claim that:
$$i_3 \; = \; 4.3 i_d$$
Are you sure you agree with this?
If so, then why isn't id = i2?
I'm very happy to see you making a real effort to use units, but you are doing two things wrong.
First, you have things like
$$i_3 \; = \; 4.3 i_d \; A$$
You have units of current squared, because id has units of current.
And what if id happens to be 10 mA. Now you have 43 mA².
Next, you start dropping your units partway through your work. You dropped the Ω and so you have an equation in which you have a current plus a current equals a voltage.
Then you further drop the units entirely when you solve for i1 and i2 and have them as pure numbers instead of currents.
Don't worry -- it takes time, patience, and attention to detail.
#### BlackMelon
Joined Mar 19, 2015
84
i3 = 4.3id..... or i3-i2 = 4.3id Be careful with the KVL method.
#### Godsninja
Joined Apr 30, 2016
16
You are right, I could have tried harder. And thank you BlackMelon, that makes perfect sense.
I would really like to upload a photo of the final solution, some of my best work (when it comes to using units properly), but the website won't let me, giving me an error. I tried multiple times, but it won't let me.
Here's an unedited, unscaled version directly from my drive though: https://photos.app.goo.gl/WJVH0XCDUbA3yHM72
Also, looks like the person who did the question for the publisher got ic and ie reversed.
Thanks again!
#### WBahn
Joined Mar 31, 2012
25,751
You are well on the way to doing exceptional work. Most of the next set of comments I was going to make about your prior attempt you have dealt with in the latest one.
You have now labeled your two mesh equations in a meaningful way that shows that the second one is a supermesh equation. This is extremely helpful to someone trying to understand your work.
Another problem your earlier work had is that the work to the right of the diagram involved results from work done below the diagram. This makes it very hard to follow. Your work should flow in a continuous direction down the page if at all possible.
About the only other recommendation I have at this point is to first get your equations done but don't manipulate them. So I would have put the first equation that you have to the right of the diagram as the first equation underneath it. Then the two mesh equations. Then draw a line under these (basically where the center hole in the page is). Everything above this line is the EE stuff. Everything below this line is just math. Before you proceed, review the three equations that you have above the line and be sure that you are satisfied that they are correct (this step would likely have caught your mistake). This is critical, because no amount of math can catch a mistake that you make in setting up the EE equations -- so you want those equations to be as obvious as you can make them. I think your basic problem was that you were trying to do two things at once to the right of your diagram -- apply the EE concepts AND do a bunch of math manipulations. Humans aren't good at multi-tasking, no matter what we claim. So focus on JUST the EE stuff. Check it. Then proceed. Once you are below the line, you are ONLY doing algebra and can focus on that, thus reducing the chances of letting a math error slip by because you are trying to do EE stuff at the same time.
The order in which I would have presented the EE equations is:
Mesh 1
Mesh 2,3
Constraint 2,3
The equation that relates currents i2 and i3 is a constraint equation that is needed because you used a supermesh equation involving i2 and i3.
|
|
# Emission Time Computation
Fundamentals
- Title: Emission Time Computation
- Author(s): J. Sanz Subirana, J.M. Juan Zornoza and M. Hernández-Pajares, Technical University of Catalonia, Spain.
- Year of Publication: 2011
Two different algorithms for the satellite transmission time computation from the receiver measurement time are presented as follows. The first of them is based on using the pseudorange measurements, which is a link between the receiver time tags (i.e., the reception time in the receiver clock) and the satellite transmission time (in the satellite clock). The second one is a pure geometric algorithm, which does not require any receiver measurement. It only needs the satellite coordinates and an approximate receiver position.
## A pseudorange based algorithm
The emission time can be directly obtained from the reception time, taking into account that the pseudorange $\displaystyle R$ is a direct measurement of the time difference between both epochs, each one of them measured in the corresponding clock:
$R=c\;\left(t_{rcv}[reception]-t^{sat}[emission]\right) \qquad \mbox{(1)}$
So, the signal emission time, measured with satellite clock ($\displaystyle t^{sat}$), is given by:
$t^{sat}[emission]=t_{rcv}[reception]-\Delta t \qquad \mbox{(2)}$
where,
$\Delta t= R/c \qquad \mbox{(3)}$
Then, if $\displaystyle \delta t^{sat}$ is the satellite clock offset with respect to the GNSS (GPS, GLONASS, Galileo, ...) system time scale (see Clock Modelling), the transmission time $\displaystyle T[emission]$ in this system time scale can be computed from the receiver measurement time tags ($\displaystyle t_{rcv}$) as:
$T[emission] = t^{sat}[emission] - \delta t^{sat} = t_{rcv}[reception] - R/c - \delta t^{sat} \qquad \mbox{(4)}$
The former equation (4) has the advantage of providing the signal emission time directly, without iterative calculation, although it does need pseudorange measurements in order to connect both epochs.
The accuracy of the determination of $\displaystyle T[emission]$ is very high and essentially depends on the $\displaystyle \delta t^{sat}$ error. For instance, in the case of the GPS system it is less than $10$ or $100$ nanoseconds with S/A=off and S/A=on, respectively. This allows calculating satellite coordinates with errors below one tenth of a millimetre in both cases [footnotes 1].
## A purely geometric algorithm
The former algorithm (equation 2) provides the signal emission time tied to the satellite clock ($\displaystyle t^{sat}$). The next algorithm ties this epoch to the receiver clock ($\displaystyle t_{rcv}$):
$t_{rcv}[emission]=t_{rcv}[reception]-\Delta t \qquad \mbox{(5)}$
where $\displaystyle \Delta t$ is now calculated by iteration, assuming that an approximate receiver position $\displaystyle r_{0_{rcv}}$ is known (the iteration converges very fast):
The algorithm is based on the following steps:
1. Calculate the position $\displaystyle {\mathbf r}^{sat}$ of the satellite at signal reception time $\displaystyle t_{rcv}$.
2. Calculate the geometric distance between satellite coordinates obtained previously and receiver position [footnotes 2], and from it, calculate the signal travelling time between both points: $\Delta t=\frac{\left\| {\mathbf r}^{sat}-{\mathbf r}_{0_{rcv}}\right\|}{c} \qquad \mbox{(6)}$
3. Calculate satellite position at the time: $t = t_{rcv} - \Delta t \Longrightarrow r^{sat}$.
4. Compare the new position $\displaystyle r^{sat}$ with the former one. If they differ by more than a certain threshold value, iterate the procedure starting from step 2.
Finally, the emission time in the system time scale is given by [footnotes 3]:
$T[emission] = t_{rcv}[emission] - \delta t_{rcv} \qquad \mbox{(7)}$
where $\displaystyle \delta t_{rcv}$ is the receiver clock offset referred to the system time, which may be obtained from the navigation solution (although only "a posteriori").
This algorithm for the satellite coordinate calculations at the reception epoch allows an efficient modularity because pseudorange measurements are not needed to compute the transmission time.
If the receiver clock offset is small [footnotes 4], then $\displaystyle \delta t_{rcv}$ may be neglected. On the other hand, the receiver clock estimates from the navigation solution can be used (extrapolated from the previous epoch). In any case, it must be taken into account that neglecting this term when $\displaystyle \delta t_{rcv}$ reaches large values (e.g., 1 millisecond) may introduce errors of about one meter in the satellite coordinates, and this must be accounted for when building the navigation model [footnotes 5]; or, more precisely, in the partial derivative with respect to the receiver clock in the design matrix.
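The iterative geometric algorithm above can be sketched as follows. The ephemeris function and the numbers here are a toy stand-in (a satellite at a GPS-like radius moving a few km/s), not a real orbit model:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def emission_time(t_rcv, sat_pos_at, r_rcv, tol=1e-10, max_iter=10):
    """Iterate the geometric light-time algorithm: guess the travel time,
    evaluate the satellite position that much earlier, recompute the
    range, and repeat until the travel time converges.
    sat_pos_at(t) stands in for a real ephemeris routine."""
    dt = 0.0
    for _ in range(max_iter):
        r_sat = sat_pos_at(t_rcv - dt)
        rho = math.dist(r_sat, r_rcv)   # geometric range in meters
        dt_new = rho / C
        if abs(dt_new - dt) < tol:
            break
        dt = dt_new
    return t_rcv - dt

# Toy ephemeris: a satellite at ~26 560 km radius moving a few km/s.
def sat_pos_at(t):
    return (26_560e3, 3_000.0 * t, 0.0)

r_rcv = (6_371e3, 0.0, 0.0)  # receiver roughly on the Earth's surface
t_emission = emission_time(0.0, sat_pos_at, r_rcv)
travel = 0.0 - t_emission
# Travel time should be on the order of 70 ms for a ~20 000 km range.
assert 0.05 < travel < 0.1
```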
Figure 1 shows an example illustrating the effect of neglecting the travelling time in the satellite coordinates computation for positioning. It corresponds to a receiver located in Barcelona, Spain (receiver coordinates $\lambda\simeq 2^\circ$, $\phi \simeq 41^\circ$). During the $70$ to $90$ milliseconds of travelling time, the satellite moves about $200$-$250$ meters, which leads to $\pm 60$ meters in range. The effect on the user position is up to $50$ meters or more in the horizontal and vertical components.
## Notes
1. ^ GPS, GLONASS or Galileo satellites move at speeds of a few km/s.
2. ^ Again, notice that satellite and receiver coordinates must be given in the same reference system, because satellite-receiver ray must be generated in a common reference system.
3. ^ Rigorously, equation (7) is: $T[emission] = f(T[reception]) = f(t_{rcv}[reception] - \delta t_{rcv}) \simeq t_{rcv}[emission] - \delta t_{rcv}$ where function $f(\cdot)$ represents geometric algorithm.
4. ^ Some receivers apply clock steering, adjusting their clocks epoch by epoch and providing offsets of a few nanoseconds. However, in many cases the receiver waits until an offset of $1$ millisecond has accumulated before adjusting the clock.
5. ^ In the "design matrix" or Jacobian matrix, obtained when linearising the model with respect to coordinates and receiver clock errors, see Code Based Positioning (SPS).
|
|
Vectors will be our friend for understanding motion happening in more than one dimension. Required fields are marked *. ... All CBSE Notes for Class 11 Physics Maths Notes Chemistry Notes Biology Notes. Torque, τ = dL/dt Conservation of Angular Momentum If the external torque acting on a system is zero, then its angular momentum remains conserved. Math Results And Formulas; Math Symbols; Basic Formulas and Results of Vectors. very very usefull for students to learn all formulas in physics of 11th class at a place thankyou so much byjus, thank u so much sir your formula sheet is very helpful for us u r awesome teacher sir great and thanks to once again sir, Nice very easy to study thanks for helping, IT IS THE BEST SHEET Class 11 physics all derivations are also very helpful in quick revision also. these list of physic formula of class 11 chapter Unit dimension & vector is useful and highly recommended for quick revision and final recap of chapter Unit dimension & vector. The physics formulas for Class 11 will not only help students to excel in their examination but also prepare them for various medical and engineering entrance exams. This first chapter can help you build a good foundation in physics for class 11 and class 12. Force can defined as so… Incident ray, reflected ray, and normal at the point of incidence lie in the same plane. Click here ( Short Notes) – Download Click Here Compl… Momentum is calculate using the formula: P = m (mass) x v (velocity) 2. You may also like physics formulas for class 11 and 12. I moved from hindi medium to english medium in class 11th and it had become my nightmare at that time. Dynamics of Rotational motion Rotational motion kinematics My favorite book in physics is University Physics with Modern Physics. Class 11 Physics Chapter 1 Physics World. Physics formulas for Class 11 is one of the best tools to prepare physics for Class 11 examination and various competitive examinations. 
We can define a vector as an object that has both a direction and a magnitude. list of physics formulas class 11 chapter Unit dimension & vector for CBSE ,IIT JEE & NEET, Download the free Pdf sheet of list of physics formulas class 11 for IIT JEE & NEET For chapter-Unit dimension & vector, Academic team of Entrancei prepared short notes and all important Physics formulas and bullet points of chapter Unit dimension & vector (class-11 Physics) . Its dimensional formula is [M 0 L 1 T -1]. I am sharing an Assignment on Vectors Chapter of JEE Physics Class 11 portion (as per requests received from students). We are giving a detailed and clear sheet on all Physics Notes that are very useful to understand the Basic Physics Concepts.. Circular Motion | Definition, Equations, Formulas, Types, Units – Motion in a Plane 2.The magnitude of position vector. Chapter wise Physics Quiz for class 9. Soln. Section Formula; Projection of a Vector on a Line; Q1. Circular Motion Definition Circular motion is the movement of an object in a circular path. Physics formulas for Class 11 is one of the best tools to prepare physics for Class 11 examination and various competitive examinations. Different sorts of subjects identified with dierent physical forces for example gravitational force, atomic force, electromagnetic force, and so on and fundamental laws of physics that oversee common Faraday, Coulomb, Ampere, Newton, Oersted, and others over the globe to comprehend concepts will be engaged. The unit vector = where the magnitude of unit vector is 1. We can define a vector as an object that has both a direction and a magnitude. Frankly speaking, i was at very bad position 8 years back in my 11th class. Force can defined as so… The course 'Vectors' is for students studying in class 11 (Medical & Non Medical). What is a Vector in Math? This course is explained in Hindi language (only study terms are in English). Chapter wise Physics Quiz for class 10. 
Class 11 Physics Notes Pdf: We know that last-minute revision and stuffing is never so easy during examinations. ... Velocity is a vector quantity its SI unit is meter per sec. A1. This app cover all the topic of NCERT and CBSE board also. The Physics Classroom Tutorial presents physics concepts and principles in an easy-to-understand language. The components of a vector defined by two points and are given as follows: In what follows , and are 3-D vectors given by their components as follows Momentum is calculate using the formula: P = m (mass) x v (velocity) 2. Each lesson includes informative graphics, occasional animations and videos, and Check Your Understanding sections that allow the user to practice what is taught. Formulas on Reflection of Light Laws of Reflection . Chapter wise Physics Quiz for class 12 These physics formula sheet for chapter Unit dimension & vector is useful for your CBSE , ICSE board exam as well as for entrance exam like JEE & NEET, Download the Pdf of class 11 physics formula sheet of chapter Unit dimension & vector from the link given below, A-1, Acharya Nikatan, Mayur Vihar, Phase-1, Central Market, New Delhi-110091. Not only physics notes pdf class 11 but we have Class 11 Chemistry Notes, Class 11 Biology Notes for class 11 also. This chapter will help you to construct a concrete foundation which thus can help you adapt further concepts of physics and its different elements. 2. The NCERT solutions for class 11 physics given in this article is updated to the latest syllabus. Class 11 physics all derivations are also very helpful in quick revision also. To find , shift vector such that its initial point coincides with the terminal point of vector . Thus study notes play a very important role here. 
CBSE Previous Year Question Papers Class 10, CBSE Previous Year Question Papers Class 12, NCERT Solutions Class 11 Business Studies, NCERT Solutions Class 12 Business Studies, NCERT Solutions Class 12 Accountancy Part 1, NCERT Solutions Class 12 Accountancy Part 2, NCERT Solutions For Class 6 Social Science, NCERT Solutions for Class 7 Social Science, NCERT Solutions for Class 8 Social Science, NCERT Solutions For Class 9 Social Science, NCERT Solutions For Class 9 Maths Chapter 1, NCERT Solutions For Class 9 Maths Chapter 2, NCERT Solutions For Class 9 Maths Chapter 3, NCERT Solutions For Class 9 Maths Chapter 4, NCERT Solutions For Class 9 Maths Chapter 5, NCERT Solutions For Class 9 Maths Chapter 6, NCERT Solutions For Class 9 Maths Chapter 7, NCERT Solutions For Class 9 Maths Chapter 8, NCERT Solutions For Class 9 Maths Chapter 9, NCERT Solutions For Class 9 Maths Chapter 10, NCERT Solutions For Class 9 Maths Chapter 11, NCERT Solutions For Class 9 Maths Chapter 12, NCERT Solutions For Class 9 Maths Chapter 13, NCERT Solutions For Class 9 Maths Chapter 14, NCERT Solutions For Class 9 Maths Chapter 15, NCERT Solutions for Class 9 Science Chapter 1, NCERT Solutions for Class 9 Science Chapter 2, NCERT Solutions for Class 9 Science Chapter 3, NCERT Solutions for Class 9 Science Chapter 4, NCERT Solutions for Class 9 Science Chapter 5, NCERT Solutions for Class 9 Science Chapter 6, NCERT Solutions for Class 9 Science Chapter 7, NCERT Solutions for Class 9 Science Chapter 8, NCERT Solutions for Class 9 Science Chapter 9, NCERT Solutions for Class 9 Science Chapter 10, NCERT Solutions for Class 9 Science Chapter 12, NCERT Solutions for Class 9 Science Chapter 11, NCERT Solutions for Class 9 Science Chapter 13, NCERT Solutions for Class 9 Science Chapter 14, NCERT Solutions for Class 9 Science Chapter 15, NCERT Solutions for Class 10 Social Science, NCERT Solutions for Class 10 Maths Chapter 1, NCERT Solutions for Class 10 Maths Chapter 2, NCERT Solutions for Class 10 
Maths Chapter 3, NCERT Solutions for Class 10 Maths Chapter 4, NCERT Solutions for Class 10 Maths Chapter 5, NCERT Solutions for Class 10 Maths Chapter 6, NCERT Solutions for Class 10 Maths Chapter 7, NCERT Solutions for Class 10 Maths Chapter 8, NCERT Solutions for Class 10 Maths Chapter 9, NCERT Solutions for Class 10 Maths Chapter 10, NCERT Solutions for Class 10 Maths Chapter 11, NCERT Solutions for Class 10 Maths Chapter 12, NCERT Solutions for Class 10 Maths Chapter 13, NCERT Solutions for Class 10 Maths Chapter 14, NCERT Solutions for Class 10 Maths Chapter 15, NCERT Solutions for Class 10 Science Chapter 1, NCERT Solutions for Class 10 Science Chapter 2, NCERT Solutions for Class 10 Science Chapter 3, NCERT Solutions for Class 10 Science Chapter 4, NCERT Solutions for Class 10 Science Chapter 5, NCERT Solutions for Class 10 Science Chapter 6, NCERT Solutions for Class 10 Science Chapter 7, NCERT Solutions for Class 10 Science Chapter 8, NCERT Solutions for Class 10 Science Chapter 9, NCERT Solutions for Class 10 Science Chapter 10, NCERT Solutions for Class 10 Science Chapter 11, NCERT Solutions for Class 10 Science Chapter 12, NCERT Solutions for Class 10 Science Chapter 13, NCERT Solutions for Class 10 Science Chapter 14, NCERT Solutions for Class 10 Science Chapter 15, NCERT Solutions for Class 10 Science Chapter 16, CBSE Previous Year Question Papers Class 10 Science, CBSE Previous Year Question Papers Class 12 Physics, CBSE Previous Year Question Papers Class 12 Chemistry, CBSE Previous Year Question Papers Class 12 Biology, ICSE Previous Year Question Papers Class 10 Physics, ICSE Previous Year Question Papers Class 10 Chemistry, ICSE Previous Year Question Papers Class 10 Maths, ISC Previous Year Question Papers Class 12 Physics, ISC Previous Year Question Papers Class 12 Chemistry, ISC Previous Year Question Papers Class 12 Biology, List of Physics Scientists and Their Inventions. 
All physics formulas are provided in PDF format free of charge, which makes them easily accessible to students. Physics formulas for Class 11 are one of the best tools to prepare physics for the Class 11 examination and various competitive examinations; since students are left with little time to revise all the chapters during examinations, it is recommended to start preparation well before the exam date. The NCERT solutions for Class 11 physics given in this article are updated to the latest syllabus, and the app covers all the topics of the NCERT and CBSE boards. The course 'Vectors' is for students studying in Class 11 (Medical and Non-Medical) and is explained in Hindi (only study terms are in English).

Related Class 11 notes chapter-wise: Class 11 Physics Notes, Class 11 Biology Notes, Class 11 Chemistry Notes. The CBSE Class 11 Maths formulae are compiled by our panel of highly experienced teachers after analyzing the past 10 years of examination papers and material, so that no important formula is left behind.

Physics is a subject that deals with the natural world and the properties of energy and matter. Momentum is the product of the mass and velocity of a body; it is a vector quantity whose direction is the direction of the velocity, its SI unit is kg-m/s, and its dimensional formula is $[MLT^{-1}]$. A vector has both magnitude and direction; acceleration, velocity, force and displacement are all examples of vector quantities, with projectile motion and circular motion as typical applications. Geometrically, we can picture a vector as a directed line segment whose length is the magnitude of the vector. We can add two vectors by joining them head-to-tail, and perform various other operations with vectors: subtracting, scaling, and converting between rectangular and polar coordinates. If two forces of 4 N and 3 N act simultaneously on a particle in opposite directions, the resultant force $F_1 = 1$ N is the minimum possible. Rest and motion are relative terms — nobody can exist in a state of absolute rest or of absolute motion — and motion in a plane is motion in two dimensions.

The important formulas of vectors are given below:

1) If $\overrightarrow a = x\widehat i + y\widehat j + z\widehat k$, then the magnitude (length, norm, or absolute value) of $\overrightarrow a$ is $\left|\overrightarrow a\right| = \sqrt{x^2 + y^2 + z^2}$.

2) A vector of unit magnitude is a unit vector: $\widehat a = \overrightarrow a / \left|\overrightarrow a\right|$.

Other topics covered include the section formula and the projection of a vector on a line.
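As a quick illustration of the magnitude and unit-vector formulas and head-to-tail addition, here is a small plain-Python sketch (not part of the original notes; the function names are just illustrative):

```python
import math

def magnitude(v):
    # |a| = sqrt(x^2 + y^2 + z^2) for a = x i + y j + z k
    return math.sqrt(sum(c * c for c in v))

def unit_vector(v):
    # a / |a|: the unit vector in the direction of a
    mag = magnitude(v)
    return tuple(c / mag for c in v)

def add(u, v):
    # head-to-tail vector addition is component-wise addition
    return tuple(a + b for a, b in zip(u, v))
```

For example, `magnitude((3, 4, 0))` gives 5, and the unit vector of any nonzero vector has magnitude 1.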
## Project Euler 4: Find the largest palindrome product
#### Project Euler 4 Description
Project Euler 4: A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
#### Analysis
We are tasked with finding the largest palindromic number with two 3-digit factors. One method searches factors downward from 999, since larger factors are more likely to yield a larger product. We test each product for being palindromic, save the largest found so far, and continue until all potential factors are exhausted.
Nanogyth’s comment below demonstrates how to speed things up significantly: if a six-digit number is palindromic, it must follow the form
$10^5 \cdot a + 10^4 \cdot b + 10^3 \cdot c + 100 \cdot c + 10 \cdot b + a$
$100001a + 10010b + 1100c$
$11(9091a + 910b + 100c)$
Where a, b, and c are integers such that 0 < a ≤ 9, 0 ≤ b, c ≤ 9
This means that one of our target factors has to be divisible by 11, which allows us to step our search index, j, by 11 when i is not divisible by 11 (starting j from 990, the largest 3-digit multiple of 11). When i is divisible by 11, we decrement j by 1 instead.
Here’s my initial simple brute-force solution; about 0.3 seconds, but not fast enough for bigger problems.
print max(i*j for i in xrange(1000) for j in xrange(i,1000) if str(i*j)==str(i*j)[::-1])
The final solution is presented below and simply performs an exhaustive investigation of 3-digit factors starting from the biggest factors until a maximum palindrome product is found. There’s an early out condition that breaks the inside loop (j) by finding a palindrome less than the current maximum. This is allowed because palindrome products are found in descending order from the perspective of the inside loop.
One subtle optimization was to check the limit before checking if the product is a palindrome as the palindrome check is more computationally expensive. If the first condition is false the succeeding conditions are ignored.
if p < L and is_palindromic(p): x, y, pmax = i, j, p
The two factors are printed along with their product. L was added to solve the HackerRank Project Euler 4 version which keeps the search to 3 digit factors but includes a limit to the palindrome product.
This program and method solves all test cases for Project Euler 4 on HackerRank.
#### Project Euler 4 Solution
Runs < 0.003 seconds in Python 2.7.
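A sketch of the method described above (a reconstruction, not necessarily the author's exact listing — `is_palindromic` is a minimal stand-in for the helper from the common routines, and `L` defaults to 10**6 so the classic problem is effectively unrestricted):

```python
def is_palindromic(n):
    # minimal stand-in: the decimal digits read the same both ways
    s = str(n)
    return s == s[::-1]

def pe4(L=10**6):
    # Largest palindrome < L that is a product of two 3-digit factors.
    x = y = pmax = 0
    for i in range(999, 99, -1):
        if i % 11 == 0:
            j, dj = 999, 1    # i already supplies the factor of 11
        else:
            j, dj = 990, 11   # 990 is the largest 3-digit multiple of 11
        while j >= 100:
            p = i * j
            if p <= pmax:
                break         # early out: products only shrink as j decreases
            if p < L and is_palindromic(p):   # cheap limit test first
                x, y, pmax = i, j, p
            j -= dj
    return x, y, pmax
```

For the classic problem this finds 913 × 993 = 906609.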
Use this link to get the Project Euler 4 Solution Python 2.7 source.
#### Afterthoughts
• Function is_palindromic is listed in Common Functions and Routines for Project Euler
• There are only 279 discrete 6-digit palindromes which are products of two 3-digit numbers.
• The largest palindrome for some other cases (Note: I restricted the search range to something sane):
| # Digits | factor 1 | factor 2 | product |
| --- | --- | --- | --- |
| 4 | 9999 | 9901 | 99000099 |
| 5 | 99979 | 99681 | 9966006699 |
| 6 | 999999 | 999001 | 999000000999 |
| 7 | 9997647 | 9998017 | 99956644665999 |
| 8 | 99999999 | 99990001 | 9999000000009999 |
| 9 | 999920317 | 999980347 | 999900665566009999 |
| 10 | 9999986701 | 9999996699 | 99999834000043899999 |
| 11 | 99999943851 | 99999996349 | 9999994020000204999999 |
| 12 | 999999000001 | 999999999999 | 999999000000000000999999 |
| 13 | 9999996340851 | 9999999993349 | 99999963342000024336999999 |
You can see that this method is becoming impractical for a larger number of digits.
Project Euler 4 Solution last updated
## Discussion
### 2 Responses to “Project Euler 4 Solution”
1. I think your comment is backwards in regard to when you step by 1 vs stepping by 11.
“This means that one of our target factors has to be divisible by 11 and allows us to step our search index, j, by 11 when i is divisible by 11. When i is not divisible by 11 we have to decrement j by 1.”
What you SHOULD have said is:
“This means that one of our target factors has to be divisible by 11 and allows us to step our search index, j, by 11 when i is not divisible by 11. When i is divisible by 11 we have to decrement j by 1.”
You can further specify that this makes sense since the starting point of j, 990 (vs. 999 for i) is the largest 3-digit number that is divisible by 11, so decrementing by 11 while starting at 990 will give you all of the 3-digit numbers that are divisible by 11, which of course is exactly what we want when i is NOT divisible by 11.
Posted by Ryan McGregor | June 1, 2020, 5:45 PM
2. I have some optimizations and several of the higher digit solutions posted on reddit.
### Constraining neural networks
Fully-connected layers with arbitrary connectivity are commonly used in neural networks (both feed-forward and recurrent), but is the full power of an unconstrained, all-to-all connectivity matrix really needed for the network to perform well? A lot of empirical evidence suggests that trained neural networks are highly compressible, suggesting that unconstrained connectivity may not be necessary. If we are going to do away with unconstrained connectivity matrices, what should we replace them with? Here are a few recent suggestions from the literature (you can check out the original papers for detailed performance comparisons, but the basic message is that these methods reduce computational or memory requirements of neural networks, or improve their training, with minimal performance loss):
1) Instead of an unconstrained, arbitrary $N\times D$ matrix $\mathbf{W}$, Yang et al. (2014) suggest matrices of the following form (called the adaptive fastfood transform):
$\mathbf{W} = \mathbf{SHG\Pi H B}$ (*)
where $\mathbf{S}$, $\mathbf{G}$, $\mathbf{B}$ are diagonal matrices, $\mathbf{\Pi}$ is a random permutation matrix, and $\mathbf{H}$ is the Hadamard matrix. Here $\mathbf{\Pi}$ and $\mathbf{H}$ are fixed, so one only learns the diagonal matrices $\mathbf{S}$, $\mathbf{G}$, $\mathbf{B}$. This decomposition has $O(N)$ parameters and requires $O(N\log D)$ operations to compute, as opposed to the much less efficient $O(ND)$ parameters and $O(ND)$ operations required for an unconstrained matrix. The particular matrix structure in (*) was introduced by Le et al. (2013) in earlier work, but in their case $\mathbf{S}$, $\mathbf{G}$, $\mathbf{B}$ were set randomly and left untrained. In that work, they showed that feature maps computed using this random non-adaptive transformation correspond to the Gaussian RBF kernel in expectation, or more precisely $\mathbf{E}_{S,G,B,\Pi}[\overline{\phi(x)}^\top\phi(x^{\prime})] = \exp\big(-\frac{\| x-x^{\prime}\|^2}{2\sigma^2}\big)$ where the features are $\phi_j(x) = \frac{1}{\sqrt{n}}\exp(i [\mathbf{W}x]_j)$ with $\mathbf{W}$ as in (*).
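To make the structure in (*) concrete, here is a toy NumPy sketch that applies $\mathbf{SHG\Pi HB}$ to a vector in $O(N\log N)$ time via an in-place fast Walsh–Hadamard transform. The diagonal entries are random placeholders rather than learned or properly scaled values, so this only illustrates the computation, not the papers' exact parameterization:

```python
import numpy as np

def fwht(x):
    # Unnormalized fast Walsh-Hadamard transform (Sylvester ordering),
    # O(N log N); len(x) must be a power of two.
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

def fastfood_apply(x, S, G, B, perm):
    # W x = S H G Pi H B x: three diagonals, one permutation,
    # and two Hadamard transforms -- O(N) parameters, O(N log N) work.
    y = B * np.asarray(x, dtype=float)   # diagonal B
    y = fwht(y)                          # H
    y = y[perm]                          # permutation Pi
    y = G * y                            # diagonal G
    y = fwht(y)                          # H
    return S * y                         # diagonal S
```

The result agrees with materializing $\mathbf{W}$ explicitly and doing a dense matrix–vector product, which is the point: the dense product is never needed.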
2) Moczulski et al. (2015) propose connectivity matrices of the following form:
$\mathbf{W} = \prod_{k=1}^K \mathbf{A}_k \mathbf{F} \mathbf{D}_k \mathbf{F}^{-1}$ (**)
where $\mathbf{A}_k$ and $\mathbf{D}_k$ are diagonal matrices and $\mathbf{F}$ is the discrete Fourier transform matrix. This has $O(KN)$ parameters and requires $O(KN\log N)$ operations to compute, which is an improvement over the $O(N^2)$ parameters and $O(N^2)$ operations required to compute a fully-connected unconstrained linear layer (with identical input and output dimensions, $N$), assuming of course $K$ is not $O(N)$. In the real version they ultimately prefer, $\mathbf{F}$ is replaced by the discrete cosine transform matrix (of type II) $\mathbf{C}$.
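A matrix of the form (**) never needs to be materialized: each factor $\mathbf{A}_k \mathbf{F} \mathbf{D}_k \mathbf{F}^{-1}$ can be applied to a vector with one inverse FFT, a diagonal scaling, one FFT, and another diagonal scaling. A toy NumPy sketch (with random placeholder diagonals rather than trained ones):

```python
import numpy as np

def acdc_like_apply(x, A_list, D_list):
    # Applies W = prod_{k=1}^{K} A_k F D_k F^{-1} to x; the rightmost
    # (k = K) factor acts first. Costs O(K N log N) via the FFT instead
    # of the O(N^2) cost of a dense matrix-vector product.
    y = np.asarray(x, dtype=complex)
    for A, D in zip(reversed(A_list), reversed(D_list)):
        y = np.fft.ifft(y)   # F^{-1}
        y = D * y            # diagonal D_k
        y = np.fft.fft(y)    # F
        y = A * y            # diagonal A_k
    return y
```

With NumPy's FFT convention, this matches building the $K$ factors as dense matrices and multiplying them out.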
A justifiable worry at this point is whether we are losing a lot of expressive power when we constrain our connectivity matrices in this way. The authors mention a theoretical guarantee assuring that almost all matrices $\mathbf{M} \in \mathbb{C}^{N \times N}$ can be factored as $\mathbf{M} = [ \prod_{i=1}^{N-1} \mathbf{D}_{2i-1} \mathbf{R}_{2i} ] \mathbf{D}_{2N-1}$ with $\mathbf{D}_{2i-1}$ diagonal and $\mathbf{R}_{2i}$ circulant, and this is precisely in the form (**) above. However, I’m not sure how useful this guarantee is, because $K$ is $O(N)$ in this result, hence we lose our computational savings in this limit.
3) A very nice idea has recently been proposed for constraining the connectivity of a recurrent neural network: Arjovsky et al. (2015) propose constraining the connectivity to be unitary (orthogonal in the real case). The advantage of a unitary connectivity matrix is that it exactly preserves the norm of a vector it is applied to, hence avoiding the vanishing or exploding gradient problems in recurrent neural network training. What particular unitary structure should we choose for the connectivity matrix? They suggest the following structure:
$\mathbf{W} = \mathbf{D}_3 \mathbf{R}_2 \mathbf{F}^{-1} \mathbf{D}_2 \mathbf{\Pi} \mathbf{R}_1 \mathbf{F} \mathbf{D}_1$ (***)
where $\mathbf{D}_1$, $\mathbf{D}_2$, $\mathbf{D}_3$ are diagonal matrices, $\mathbf{R}_1$, $\mathbf{R}_2$ are reflection matrices, $\mathbf{\Pi}$ is a permutation matrix and $\mathbf{F}$ is the discrete Fourier transform matrix. The authors mention that this particular structure was based on trial and error essentially. It is important to note that all the matrices in (***), except for the permutation matrix, have complex-valued entries.
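The claim that (***) is unitary is easy to check numerically: each factor is unitary (unit-modulus diagonals, Householder reflections, a permutation, and the normalized DFT), so their product is too. A small NumPy sketch with random placeholder parameters (not trained values):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8

def phase_diag():
    # D: diagonal of unit-modulus complex numbers -- unitary
    return np.diag(np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, N)))

def reflection():
    # R = I - 2 v v^* / ||v||^2: a complex Householder reflection -- unitary
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    return np.eye(N) - 2.0 * np.outer(v, v.conj()) / np.vdot(v, v).real

F = np.fft.fft(np.eye(N), axis=0) / np.sqrt(N)  # normalized DFT -- unitary
Pi = np.eye(N)[rng.permutation(N)]              # permutation -- unitary

# W = D3 R2 F^{-1} D2 Pi R1 F D1, as in (***); F^{-1} = F* since F is unitary
W = (phase_diag() @ reflection() @ F.conj().T @ phase_diag()
     @ Pi @ reflection() @ F @ phase_diag())
```

One can then verify $\mathbf{W}\mathbf{W}^* = \mathbf{I}$ and that $\mathbf{W}$ preserves vector norms, which is exactly the property that prevents gradients from vanishing or exploding.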
In summary, constraints are generally useful for reducing the complexity of neural networks (think convnets), making training easier and more efficient, and making the models more interpretable. However, finding the right set of constraints to impose on neural networks is not always easy: imposing too many constraints or the wrong set of constraints might unnecessarily limit the representational power of the network, imposing too few constraints might reduce the benefits that would otherwise be obtained with the right set of constraints.
# Description of $A^\bullet(G/H)$ [closed]
Let $$G$$ be a compact Lie group and let $$H$$ be a closed subgroup of $$G$$, with Lie algebras $$\mathfrak{g}$$ and $$\mathfrak{h}$$.
We denote by $$G\times_H \mathfrak{g} / \mathfrak{h}$$ the set of orbits $$(G \times \mathfrak{g} / \mathfrak{h})/H$$ of the right action of $$H$$ on $$G \times \mathfrak{g} / \mathfrak{h}$$ (where $$H$$ acts on $$G$$ by multiplication and on $$\mathfrak{g} / \mathfrak{h}$$ by the adjoint action).
We have this identification: $$T(G/H) \simeq G\times_H \mathfrak{g} / \mathfrak{h}$$.
My question is why the space of differential forms of $$G/H$$, $$A^\bullet(G/H)$$, satisfies $$A^\bullet(G/H) \simeq \Gamma (G/H, G\times_H {\bigwedge}^\bullet{(\mathfrak{g} / \mathfrak{h})}^*) \simeq {(C^\infty (G) \otimes {\bigwedge}^\bullet {(\mathfrak{g} / \mathfrak{h})}^*)}^H.$$
• For a principal $H$-bundle $P\to M$, the differential forms on $M$ can be identified with $H$-invariant forms on $P$ which are basic, i.e. vanish on the vector fields generating the $H$-action. In your example $P = G$, and forms on $G$ can be identified with functions to $\Lambda^*\mathfrak g^*$ using the trivialization of $TG$. The basic forms are then given by exterior powers of the annihilator of $\mathfrak h$, i.e. $(\mathfrak g/\mathfrak h)^*$. Mar 7, 2021 at 15:27
• This is a good exercise which you should work hard to complete yourself.
– mme
Mar 7, 2021 at 16:06
• Your other recent question suggests you are working through a reference on differential forms. In addition to the excellent suggestion to work these exercises yourself, if you do ask about them, then you should mention the reference you are using. TeX note: please use TeX, like $G \times H$ (written G \times H), rather than Unicode, like G × H. I have edited accordingly. Mar 9, 2021 at 19:55
By definition, $$k$$-forms are sections of the bundle $$\bigwedge{}^kT^*(G/H)$$, which is the associated bundle $$G\times_H\bigwedge^{k}(\mathfrak{g}/\mathfrak{h})^*$$. You then apply the general formula that the sections of the associated bundle for any $$H$$-representation $$V$$ is $$(C^{\infty}(G)\otimes V)^H$$: the tensor product $$C^{\infty}(G)\otimes V$$ is the sections of the trivial bundle $$G\times V \to G$$, and such a section is pulled back from a section of $$G\times_H V\to G/H$$ if and only if it is invariant.
# Recent questions tagged gateme-2016-set1
If $q^{-a}=\displaystyle{\frac{1}{r}}$ and $r^{-b}=\displaystyle{\frac{1}{s}}$ and $s^{-c}=\displaystyle{\frac{1}{q}}$, the value of $abc$ is $(rqs)^{-1}$ $0$ $1$ $r+q+s$
Leela is older than her cousin Pavithra. Pavithra's brother Shiva is older than Leela. When Pavithra and Shiva are visiting Leela, all three like to play chess. Pavithra wins more often than Leela does. Which one of the following statements must be TRUE based on the ... loses. Leela is the oldest of the three. Shiva is a better chess player than Pavithra. Pavithra is the youngest of the three.
In a world filled with uncertainty, he was glad to have many good friends. He had always assisted them in times of need and was confident that they would reciprocate. However, the events of the last week proved him wrong. Which of the following inference(s) is/are logically valid and can be inferred from ... did not help him last week. $(i)$ and $(ii)$ $(iii)$ and $(iv)$ $(iii)$ only $(iv)$ only
A person moving through a tuberculosis prone zone has a $50\%$ probability of becoming infected. However, only $30\%$ of infected people develop the disease. What percentage of people moving through a tuberculosis prone zone remains infected but does not show symptoms of disease? $15$ $33$ $35$ $37$
$P$, $Q$, $R$ and $S$ are working on a project. $Q$ can finish the task in $25$ days, working alone for $12$ hours a day. $R$ can finish the task in $50$ days, working alone for $12$ hours per day. $Q$ worked $12$ hours a day but took sick leave in the beginning for two ... What is the ratio of work done by $Q$ and $R$ after $7$ days from the start of the project? $10:11$ $11:10$ $20:21$ $21:20$
Michael lives $10$ $km$ away from where I live. Ahmed lives $5$ $km$ away and Susan lives $7$ $km$ away from where I live. Arun is farther away than Ahmed but closer than Susan from where I live. From the information provided here, what is one possible distance (in $km$) at which I live from Arun’s place? $3.00$ $4.99$ $6.02$ $7.01$
In a huge pile of apples and oranges, both ripe and unripe mixed together, $15\%$ are unripe fruits. Of the unripe fruits, $45\%$ are apples. Of the ripe ones, $66\%$ are oranges. If the pile contains a total of $5692000$ fruits, how many of them are apples? $2029198$ $2467482$ $2789080$ $3577422$
Despite the new medicine’s ______________ in treating diabetes, it is not ______________widely. effectiveness --- prescribed availability --- used prescription --- available acceptance --- proscribed
The policeman asked the victim of a theft, “What did you __________?” loose lose loss louse
Which of the following is CORRECT with respect to grammar and usage? Mount Everest is ____________. the highest peak in the world highest peak in the world one of highest peak in the world one of the highest peak in the world
Maximize $Z = 15X_1 + 20X_2$ subject to $\begin{array}{l} 12X_1 + 4X_2 \geq 36 \\ 12X_1 − 6X_2 \leq 24 \\ X_1, X_2 \geq 0 \end{array}$ The above linear programming problem has infeasible solution unbounded solution alternative optimum solutions degenerate solution
The figure below represents a triangle $PQR$ with initial coordinates of the vertices as $P(1,3)$, $Q(4,5)$ and $R(5,3.5)$. The triangle is rotated in the $X$-$Y$ plane about the vertex $P$ by angle $\theta$ in clockwise direction. If sin$\theta$ = $0.6$ and cos$\theta$ = $0.8$, the new coordinates of the vertex $Q$ are $(4.6, 2.8)$ $(3.2, 4.6)$ $(7.9, 5.5)$ $(5.5, 7.9)$
The annual demand for an item is $10,000$ units. The unit cost is $Rs$. $100$ and inventory carrying charges are $14.4\%$ of the unit cost per annum. The cost of one procurement is $Rs$. $2000$. The time between two consecutive orders to meet the above demand is _______ month($s$).
A $300$ $mm$ thick slab is being cold rolled using roll of $600$ $mm$ diameter. If the coefficient of friction is $0.08$, the maximum possible reduction (in $mm$) is __________
A cylindrical job with diameter of $200$ $mm$ and height of $100$ $mm$ is to be cast using modulus method of riser design. Assume that the bottom surface of cylindrical riser does not contribute as cooling surface. If the diameter of the riser is equal to its height, then the height of the riser (in $mm$) is $150$ $200$ $100$ $125$
The tool life equation for HSS tool is $VT^{0.14}f^{0.7}d^{0.4}$ = constant. The tool life $(T)$ of $30 \: min$ is obtained using the following cutting conditions: $V=45\:m/min$, $f=0.35 \: mm$, $d=2.0 \: mm$ If speed $(V)$, feed $(f)$ and depth of cut $(d)$ are increased individually by $25\%$, the tool life (in $min$) is $0.15$ $1.06$ $22.50$ $30.0$
Heat is removed from a molten metal of mass $2$ $kg$ at a constant rate of $10$ $kW$ till it is completely solidified.The cooling curve is shown in the figure Assuming uniform temperature throughout the volume of the metal during solidification,the latent heat of fusion of metal ( in $kJ$/$kg$) is _________
A hypothetical engineering stress-strain curve shown in the figure has three straight lines $PQ, QR, RS$ with coordinates P$(0,0)$, Q$(0.2,100)$, R$(0.6,140)$ and S$(0.8,130)$. $'Q'$ is the yield point, $'R'$ is the UTS point and $'S'$ the fracture point. The toughness of the material (in $MJ/m^3$) is __________
In a steam power plant operating on an ideal Rankine cycle, superheated steam enters the turbine at $3 \: MPa$ and $350^ \circ C$. The condenser pressure is $75$ $kPa$. The thermal efficiency of the cycle is ________ percent. Given data: For saturated liquid, at $P=75 \:kPa$, $h_f=384.39 \:kJ/kg$ ... $P = 3 \: MPa$ and $T=350^\circ C$ (superheated steam), $h=3115.3 \: kJ/kg$, $s=6.7428 \: kJ/kg-K$
An ideal gas undergoes a reversible process in which the pressure varies linearly with volume. The conditions at the start (subscript $1$) and at the end (subscript $2$) of the process with usual notation are: $p_1 = 100 \: kPa$, $V_1 = 0.2 \: m^3$ and $p_2=200 \: kPa$ ... $R=0.275\:kJ/kg-K$. The magnitude of the work required for the process (in $kJ$) is ________
For water at $25^\circ C$, $dp_s/dT_s = 0.189 \: kPa/K$ ($p_s$ is the saturation pressure in $kPa$ and $T_s$ is the saturation temperature in $K$) and the specific volume of dry saturated vapour is $43.38 \: m^3/kg$. Assume that the specific volume ... of vapour. Using the Clausius-Clapeyron equation, an estimate of the enthalpy of evaporation of water at $25^\circ C$ (in $kJ / kg$) is __________
A fluid (Prandtl number, $P_r=1$) at $500\:K$ flows over a flat plate of $1.5\:m$ length, maintained at $300\: K$. The velocity of the fluid is $10 \: m/s$. Assuming kinematic viscosity,$v=30\times 10^{-6}$ $m^2/s$, the thermal boundary layer thickness (in $mm$) at $0.5 \:m$ from the leading edge is __________
An infinitely long furnace of $0.5\:m\times 0. 4 \:m$ cross-section is shown in the figure below. Consider all surfaces of the furnace to be black. The top and bottom walls are maintained at temperature $T_1=T_3=927^\circ C$ ... heat loss or gain on side $1$ is_________ $W/m$. Stefan-Boltzmann constant = $5.67 \times 10^{-8}$ $W/m^2-K^4$
A steel ball of $10\:mm$ diameter at $1000\:K$ is required to be cooled to $350\:K$ by immersing it in a water environment at $300\:K$. The convective heat transfer coefficient is $1000\:W/m^2-K$. Thermal conductivity of steel is $40\:W/m-K$. The time constant for the cooling process $\tau$ is $16\:s$. The time required (in $s$) to reach the final temperature is __________
A steady laminar boundary layer is formed over a flat plate as shown in the figure. The free stream velocity of the fluid is $U_o$. The velocity profile at the inlet $a-b$ is uniform, while that at a downstream location $c-d$ ... rate, $\dot{m}_{bd}$ leaving through the horizontal section $b-d$ to that entering through the vertical section $a-b$ is ___________
Oil (kinematic viscosity, $v_{\text{oil}}=1.0\times 10^{-5} \:m^2/s$) flows through a pipe of $0.5$ $m$ diameter with a velocity of $10$ $m/s$. Water (kinematic viscosity, $v_w=0.89\times 10^{-6}\:m^2/s$) is flowing through a model pipe of diameter $20 \:mm$. For satisfying the dynamic similarity, the velocity of water (in $m/s$) is __________
An inverted U-tube manometer is used to measure the pressure difference between two pipes $A$ and $B$ ,as shown in figure.pipe $A$ is carrying oil (specific gravity$=0.8$ ) and pipe $B$ is carrying water.The densities of air and water are $1.16 kg/m^3$ and $1000\: kg/m^3$,respectively.The pressure difference between pipes $A$ and $B$ is ________$kPa$. Acceleration due to gravity $g =10\:m/s^2$.
The principal stresses at a point inside a solid object are $\sigma _1$ = $100$ $MPa$, $\sigma _2$ = $100$ $MPa$ and $\sigma _3$ = $0$ $MPa$. The yield strength of the material is $200$ $MPa$. The factor of safety calculated using Tresca (maximum shear stress) theory is $n_T$ ... . Which one of the following relations is TRUE? $n_T=(\sqrt{3}/2)n_V$ $n_T=(\sqrt{3})n_V$ $n_T=n_V$ $n_V=(\sqrt{3})n_T$
A solid disc with radius $a$ is connected to a spring at a point $d$ above the center of the disc. The other end of the spring is fixed to the vertical wall. The disc is free to roll without slipping on the ground. The mass of the disc is $M$ and the spring constant is $K$. The polar moment ... $\displaystyle{\sqrt{\frac{2K(a+d)^2}{Ma^2}}} \\$ $\displaystyle{\sqrt{\frac{K(a+d)^2}{Ma^2}}}$
In the gear train shown, gear $3$ is carried on arm $5$. Gear $3$ meshes with gear $2$ and gear $4$. The number of teeth on gear $2$, $3$, and $4$ are $60$, $20$, and $100$, respectively. If gear $2$ is fixed and gear $4$ rotates with ... in the counterclockwise direction, the angular speed of arm $5$ (in $rpm$) is $166.7$ counterclockwise $166.7$ clockwise $62.5$ counterclockwise $62.5$ clockwise
A slider crank mechanism with crank radius $200\:mm$ and connecting rod length $800\:mm$ is shown. The crank is rotating at $600\:rpm$ in the counterclockwise direction. In the configuration shown, the crank makes an angle of $90^\circ$ with the sliding direction of the ... of $5\:kN$ is acting on the slider. Neglecting the inertia forces, the turning moment on the crank (in $kN-m$) is __________
A simply-supported beam of length $3L$ is subjected to the loading shown in the figure. It is given that $P=1\: N$, $L=1\:m$ and Young's modulus $E=200\:GPa$. The cross-section is a square with dimension $10\:mm\times10\:mm$. The bending stress ... beam at a distance of $1.5L$ from the left end is _____________ (Indicate compressive stress by a negative sign and tensile stress by a positive sign.)
The figure shows cross-section of a beam subjected to bending. The area moment of inertia (in $mm^4$) of this cross-section about its base is ________
A horizontal bar with a constant cross-section is subjected to loading as shown in the figure. The Young’s moduli for the sections $AB$ and $BC$ are $3E$ and $E$, respectively. For the deflection at $C$ to be zero, the ratio $\displaystyle{\frac{P}{F}}$ is ____________
The “Jominy test” is used to find Young’s modulus hardenability yield strength thermal conductivity
Gauss-Seidel method is used to solve the following equations (as per the given order): $x_1+2x_2+3x_3=5$ $2x_1+3x_2+x_3=1$ $3x_1+2x_2+x_3=3$ Assuming initial guess as $x_1=x_2=x_3=0$ , the value of $x_3$ after the first iteration is __________
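The single Gauss-Seidel sweep asked for here can be checked with a few lines of Python (an illustrative sketch, not part of the question):

```python
# One Gauss-Seidel iteration for the 3x3 system, in the given equation order.
A = [[1, 2, 3],
     [2, 3, 1],
     [3, 2, 1]]
b = [5, 1, 3]
x = [0.0, 0.0, 0.0]  # initial guess x1 = x2 = x3 = 0

for i in range(3):
    # use the most recently updated values of the other unknowns
    s = sum(A[i][j] * x[j] for j in range(3) if j != i)
    x[i] = (b[i] - s) / A[i][i]

print(x)  # -> [5.0, -3.0, -6.0]; x3 after the first iteration is -6
```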
The value of the integral $\displaystyle{\int_{-\infty }^{\infty }\frac{\sin x}{x^2+2x+2}}dx$ evaluated using contour integration and the residue theorem is $\displaystyle{\frac{-\pi \sin(1)}{e}}\\$ $\displaystyle{\frac{-\pi \cos (1)}{e}} \\$ $\displaystyle{\frac{\sin (1)}{e}} \\$ $\displaystyle{\frac{\cos (1)}{e}}$
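A brute-force numerical quadrature (slow, stdlib-only sketch) agrees with the residue-theorem value $-\pi\sin(1)/e \approx -0.9726$, i.e. the first option:

```python
import math

def f(t):
    # integrand sin(t) / (t^2 + 2t + 2); denominator has no real roots
    return math.sin(t) / (t * t + 2 * t + 2)

# composite Simpson's rule on [-L, L]; the tail beyond L decays like 1/L^2
L, n = 2000.0, 400_000          # n must be even
h = 2 * L / n
s = f(-L) + f(L)
for i in range(1, n):
    s += f(-L + i * h) * (4 if i % 2 else 2)
integral = s * h / 3

print(integral)  # close to -pi*sin(1)/e ~ -0.9726
```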
If $y=f(x)$ satisfies the boundary value problem ${y}''+9y=0$ , $y(0)=0$ , $y(\pi /2)=\sqrt{2}$, then $y(\pi /4)$ is ________
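The general solution of $y''+9y=0$ is $A\sin 3x + B\cos 3x$; $y(0)=0$ forces $B=0$, and the second boundary condition fixes $A$. A small numerical sketch (illustrative only):

```python
import math

# y'' + 9y = 0  =>  y(x) = A*sin(3x) + B*cos(3x); y(0) = 0 forces B = 0.
# Fit y(pi/2) = sqrt(2): A*sin(3*pi/2) = -A = sqrt(2), so A = -sqrt(2).
A = math.sqrt(2) / math.sin(3 * math.pi / 2)

def y(x):
    return A * math.sin(3 * x)

print(round(y(math.pi / 4), 10))  # -> -1.0
```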
Consider the function $f(x)=2x^3-3x^2$ in the domain $[-1,2]$ The global minimum of $f(x)$ is ____________
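Since $f'(x)=6x^2-6x$ vanishes at $x=0$ and $x=1$, the global minimum on $[-1,2]$ is found by comparing the critical points with the endpoints (a quick sketch):

```python
# Global minimum of f(x) = 2x^3 - 3x^2 on [-1, 2]:
# f'(x) = 6x^2 - 6x = 0 at x = 0 and x = 1; compare with the endpoints.
def f(x):
    return 2 * x**3 - 3 * x**2

candidates = [-1, 0, 1, 2]
print(min(f(x) for x in candidates))  # -> -5, attained at x = -1
```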
A two-member truss $PQR$ is supporting a load $W$. The axial forces in members $PQ$ and $QR$ are respectively $2W$ tensile and $\sqrt{3}W$ compressive $\sqrt{3}W$ tensile and $2W$ compressive $\sqrt{3}W$ compressive and $2W$ tensile $2W$ compressive and $\sqrt{3}W$ tensile
Computation of a series.
NOTATIONS.
Let $n\in\mathbb{N}$. We define the sets $\mathfrak{M}_{0}:=\emptyset$ and \begin{align} \mathfrak{M}_{n}&:=\left\{m=\left(m_{1},m_{2},\ldots,m_{n}\right)\in\mathbb{N}^{n}\mid1m_{1}+2m_{2}+\ldots+nm_{n}=n\right\}&\forall n\geq1 \end{align} and we use the notations: \begin{align} m!&:=m_{1}!m_{2}!\ldots m_{n}!,&|m|&:=m_{1}+m_{2}+\ldots+m_{n}. \end{align}
QUESTION.
I want to evaluate, or at least bound with respect to $n$, the series \begin{align} S_{n}&:=\sum_{m\in\mathfrak{M}_{n}}\frac{\left(n+\left|m\right|\right)!}{m!}\ \prod_{k=1}^{n}\left(k+1\right)^{-m_{k}}. \end{align} My hope is that $S_{n}\leq n!\,n^{\alpha}$ with $\alpha$ independent of $n$.
BACKGROUND.
In order to build an analytic extension from a given real-analytic function, I had to use the Faà di Bruno's formula for a composition (see for example https://en.wikipedia.org/wiki/Faà_di_Bruno%27s_formula). After some elementary computations, my problem boils down to show the convergence of \begin{align} \sum_{n=0}^{+\infty}\frac{x^{n+1}}{(n+1)!}\sum_{m\in\mathfrak{M}_{n}}\frac{\left(n+\left|m\right|\right)!}{m!}\ \prod_{k=1}^{n}\left(k+1\right)^{-m_{k}} \end{align} where $x\in\mathbb{C}$ is such that the complex modulus $|x|$ can be taken as small as desired (in particular, we can choose $|x|<\mathrm{e}^{-1}$ to kill any $n^{\alpha}$ term from the bound on $S_{n}$).
SOME WORK.
It is clear that we have to understand the sets $\mathfrak{M}_{n}$ in order to go on (whence the tag "combinatorics"). So I tried to see what these sets look like:
• for $n=2$ : \begin{array}{cc} 2&0\\ 0&1 \end{array}
• for $n=3$ : \begin{array}{ccc} 3&0&0\\ 1&1&0\\ 0&0&1 \end{array}
• for $n=4$ : \begin{array}{cccc} 4&0&0&0\\ 2&1&0&0\\ 1&0&1&0\\ 0&2&0&0\\ 0&0&0&1\\ \end{array}
• for $n=5$ : \begin{array}{ccccc} 5&0&0&0&0\\ 3&1&0&0&0\\ 2&0&1&0&0\\ 1&0&0&1&0\\ 1&2&0&0&0\\ 0&0&0&0&1\\ 0&1&1&0&0\\ \end{array}
Above, each line corresponds to a multi-index $m$, and the $k$-th column gives the coefficient $m_{k}$. We see, for example, that the cardinality of $\mathfrak{M}_{n}$ becomes strictly greater than $n$ for $n\geq5$. Also, because I wanted to reorganize the sum defining $S_{n}$ over the sets of multi-indices $m$ with $|m|=j$ for $1\leq j\leq n$, I tried to count, for a given $j$, the number of $m$ such that $|m|=j$; when $n=10$, I counted $8$ multi-indices $m$ with length $|m|=4$, so this number can exceed $n/2$. Another observation is that the number of multi-indices $m$ with $|m|=j$ seems largest when $j$ is roughly $n/2$ - don't ask me what "roughly" means here, I just tried some examples and noticed this phenomenon.
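These sets can also be enumerated by brute force. A quick Python sketch (function names hypothetical) reproduces the tables above and the first values of $S_n$:

```python
from fractions import Fraction
from math import factorial

def multiindices(n):
    """All m = (m_1, ..., m_n) in N^n with 1*m_1 + 2*m_2 + ... + n*m_n = n."""
    def rec(k, rem):
        if k > n:
            if rem == 0:
                yield ()
            return
        for mk in range(rem // k + 1):
            for tail in rec(k + 1, rem - k * mk):
                yield (mk,) + tail
    yield from rec(1, n)

def S(n):
    # exact rational arithmetic, since individual terms need not be integers
    total = Fraction(0)
    for m in multiindices(n):
        term = Fraction(factorial(n + sum(m)))
        for k, mk in enumerate(m, start=1):
            term /= factorial(mk) * (k + 1) ** mk
        total += term
    return total

print(len(list(multiindices(5))))     # -> 7, matching the table for n = 5
print([S(n) for n in range(1, 6)])    # -> [1, 5, 41, 469, 6889]
```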
• There are typos in your configuration, should be $$n = 2 \to \begin{matrix}*2 & 0*\\0 & 1\end{matrix}, \quad n = 4 \to \begin{matrix} 4 & 0 & 0 & 0\\ 2 & 1 & 0 & 0\\ 1 & 0 & 1 & 0\\ *0 & 2 & 0 & 0*\\ *0 & 0 & 0 & 1* \end{matrix}$$ BTW, the first few numbers of your sequences are $1,1,5,41,469,6889$ and it matches the one on OEIS A032188. Dec 13, 2016 at 5:34
• @achillehui Thank you for the typos, I have corrected them. As for the sequence you mention, it seems to give some information about a generating function, but I am not familiar with this tool. Is it possible to deduce a bound for my series from your link (I do not see how)? Dec 13, 2016 at 10:03
• I don't know what you can get from the link but I'm working on a closed form expression of your series. If I didn't make any mistake, the radius of convergence of $\sum_{n=0}^\infty S_n \frac{x^{n+1}}{(n+1)!}$ is $1 - \log(2)$, this implies $\frac{S_n}{(n+1)!} \sim o( r^n )$ for any $r > \frac{1}{1 - \log(2)}$. Dec 13, 2016 at 11:33
• @achillehui If you could get a closed form, it would be great! My goal is to prove that the series in $x$ converges for some $x\in\mathbb{C}$. Dec 13, 2016 at 13:22
• @Nicolas If both MO accounts appearing on this question belong to you, you can try to merge them. Related post on this site's meta: Announcement: New User Merge Policy/Tool. Jan 4, 2017 at 16:46
First, we will transform $S_n$ to a form easier to manipulate.
Let $C \subset \mathbb{C}$ be a circle of radius $r \ll 1$ centered at $0$. For any $n \in \mathbb{N}$, $m \in \mathbb{N}^n$, we can single out those $m \in \mathfrak{M}_n$ with help of contour integrals of the form:
$$\delta_n(m) \stackrel{def}{=} \frac{1}{2\pi i} \oint_{C} s^{\sum_{k=1}^n k m_k} \frac{ds}{s^{n+1}} = \begin{cases}1, & m \in \mathfrak{M}_n\\ 0, & \text{ otherwise }\end{cases}$$ Together with following integral representation of factorial:
$$n! = \Gamma(n+1) = \int_0^\infty t^n e^{-t}dt$$ We have
\begin{align} S_n &= \sum_{m \in \mathbb{N}^n} \delta_n(m) \int_0^\infty \prod_{k=1}^n \frac{1}{m_k!} \left(\frac{t}{k+1}\right)^{m_k} t^n e^{-t} dt\\ &= \frac{1}{2\pi i}\sum_{m \in \mathbb{N}^n} \oint_C \left[\int_0^\infty \prod_{k=1}^n \frac{1}{m_k!} \left(\frac{ts^k}{k+1}\right)^{m_k} \left(\frac{t}{s}\right)^ne^{-t} dt \right] \frac{ds}{s}\\ &= \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \prod_{k=1}^n \left(\sum_{m_k=0}^\infty \frac{1}{m_k!} \left(\frac{ts^k}{k+1}\right)^{m_k}\right) \left(\frac{t}{s}\right)^ne^{-t} dt \right] \frac{ds}{s}\\ &= \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \prod_{k=1}^n \exp\left(\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^ne^{-t} dt \right] \frac{ds}{s}\\ &= \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \exp\left(\sum_{k=1}^n\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^ne^{-t} dt \right] \frac{ds}{s}\\ &\stackrel{\color{blue}{[1]}}{=} \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \exp\left(\sum_{k=1}^\infty\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^ne^{-t} dt \right] \frac{ds}{s}\\ &= \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \exp\left[-t\left(\frac{\log(1 - s)}{s} + 2\right)\right]\left(\frac{t}{s}\right)^n dt \right] \frac{ds}{s}\tag{*1}\\ \end{align} Next, let
• $S(x) \stackrel{def}{=} \sum_{n=0}^\infty S_n \frac{x^n}{n!}$ be the EGF (exponential generating function) for $S_n$.
• $\Delta(x) = \sum_{n=0}^\infty S_n \frac{x^{n+1}}{(n+1)!}$ be the series we want to study its convergence.
They are related by the relation $\Delta(x) = \int_0^x S(t) dt$.
For any $x$ with $|x| \ll r$, $(*1)$ implies
\begin{align} S(x) &= \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \exp\left[-t\left(\frac{\log(1 - s)}{s} + 2 - \frac{x}{s}\right)\right] dt \right] \frac{ds}{s}\\ &= \frac{1}{2\pi i} \oint_C \frac{ds}{\log(1-s) + 2s - x} \end{align} Change variable to $y = -\log(1-s) \iff s = 1 - e^{-y}$. When $r$ is small, the image of $C$ in $y$-space is close to circle $C$. We can deform the contour back to $C$ without changing the integral. This leads to
$$S(x) = \frac{1}{2\pi i}\oint_C \frac{dy}{P(x,y)} \quad\text{ where }\quad P(x,y) = (2-x-y)e^y - 2$$ Under the condition $|x| < |y| = r \ll 1$, we have
$$P(x,y) \approx (2 - x - y) (1 + y + O(r^2)) - 2 \approx y - x + O(r^2)$$
This means for fixed $x$ and as a function in $y$, $P(x,y)$ has only one root inside $C$. Furthermore, the root in $y$ is close to $x$. Let $\eta$ be that root, we have
\begin{align} P(x,\eta) = 0 &\iff (2-x-\eta)e^\eta - 2 = 0 \iff (\eta + x - 2)e^{\eta + x - 2} = -2e^{x-2}\\ & \implies 2 - x - \eta = -W(-2e^{x-2}) \end{align} where $W(z)$ is a branch of the Lambert-W function. In terms of $\eta$, we have
\begin{align} S(x) &= \text{Res}_{y=\eta}\left(\frac{1}{P(x,y)}\right) = \left.\frac{1}{\frac{\partial}{\partial y}P(x,y)}\right|_{y=\eta} = \frac{1}{(1 - x - \eta)e^\eta}\\ &= \frac{2-x-\eta}{2(1-x-\eta)} = \frac{W(-2e^{x-2})}{2(1+W(-2e^{x-2}))} \end{align}
Since $S(0) = 1$, we need to choose a branch of Lambert W function with $W(-2e^{-2}) = -2$. The correct branch is the "lower branch" described in above wiki link. It is usually denoted as $W_{-1}(\cdot)$. In terms of it, we find
$$S(x) = \frac{W_{-1}(-2e^{x-2})}{2(1+W_{-1}(-2e^{x-2}))}$$
Notice the branches of Lambert W function satisfies ODE $$z\frac{d}{dz}W(z) = \frac{W(z)}{1+W(z)}\tag{*2}$$ We can integrate $(*2)$ and deduce a closed form expression for $\Delta(x)$: $$\Delta(x) = \frac12 \int_0^x \left[ z\frac{dW_{-1}(z)}{dz} \right]_{z=-2e^{t-2}} dt = 1 + \frac12 W_{-1}(-2e^{x-2})\tag{*3}$$
$W_{-1}(z)$ has two branch cuts, one terminating at $z = -\frac1e$, the other at $z = 0$. The closest singularity of $\Delta(x)$ to the origin is located at $x = 1 - \log(2)$. As a result, $r_0$, the radius of convergence of the power series expansion of $\Delta(x) = \sum\limits_{n=0}^\infty S_n\frac{x^{n+1}}{(n+1)!}$, equals $1 - \log(2)$. A corollary of this is$\color{blue}{{}^{[2]}}$
$$\frac{S_n}{(n+1)!} \sim o(\rho^n)\quad\text{ for any }\; \rho > \frac{1}{1-\log(2)} \approx 3.258891353270929$$
As a double check, we evaluate the power series expansion of $\Delta(x)$ using the following command Series[1+1/2*LambertW[-1,-2*Exp[x-2]],{x,0,8}] on WA (Wolfram Alpha). WA returns
\begin{align} \Delta(x) = & x+\frac{{x}^{2}}{2}+\frac{5\,{x}^{3}}{6}+\frac{41\,{x}^{4}}{24}+\frac{469\,{x}^{5}}{120}+\frac{6889\,{x}^{6}}{720}\\ & +\frac{24721\,{x}^{7}}{1008}+\frac{2620169\,{x}^{8}}{40320}+\frac{64074901\,{x}^{9}}{362880} + \cdots \end{align} Translate back to $S_n$, this is equivalent to
$$( S_0,S_1,\ldots ) = (1, 1, 5, 41, 469, 6889, 123605, 2620169, 64074901,\ldots )$$ For $n \le 5$, I have checked by hand this is indeed the correct value.
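As yet another check, $\Delta(x) = 1+\frac12 W_{-1}(-2e^{x-2})$ can be evaluated numerically and compared against the partial sums of the series. A sketch (the Newton iteration below is a stdlib stand-in for a library Lambert-W routine):

```python
import math

def lambertw_m1(z, w0=-2.0):
    """Lower branch W_{-1}: solve w*exp(w) = z by Newton, starting near w = -2."""
    w = w0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / ((1.0 + w) * ew)
        w -= step
        if abs(step) < 1e-15:
            break
    return w

def Delta(x):
    return 1.0 + 0.5 * lambertw_m1(-2.0 * math.exp(x - 2.0))

# Delta(0) = 1 + W_{-1}(-2 e^{-2})/2 = 1 + (-2)/2 = 0
print(Delta(0.0))

# compare with the truncated series x + x^2/2 + 5x^3/6 + 41x^4/24 + 469x^5/120
x = 0.05
partial = x + x**2/2 + 5*x**3/6 + 41*x**4/24 + 469*x**5/120
print(abs(Delta(x) - partial))  # about the size of the first omitted term
```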
An OEIS search returns the sequence OEIS A032188. Up to $n = 18$, I've verified that the $S_n$ extracted from the expansion of $(*3)$ match the numbers on OEIS. Look at the references there and see whether there is anything useful for your purposes.
Notes
• $\color{blue}{[1]}$ - As a function of $s$, $$\exp\left(\sum_{k=1}^n\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^n\frac{e^{-t}}{s} = \frac{1}{s^{n+1}}A(s)\quad\text{ and }\quad \exp\left(\sum_{k=n+1}^\infty\frac{ts^k}{k+1}\right) = 1 + s^{n+1}B(s)$$ where $A(s), B(s)$ are analytic over the disc bounded by $C$. This implies $$\exp\left(\sum_{k=1}^\infty\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^n\frac{e^{-t}}{s} = \exp\left(\sum_{k=1}^n\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^n\frac{e^{-t}}{s} + A(s)B(s)$$ Changing the upper bound in the sum within the exponent from $n$ to $\infty$ modifies the integrand by a function analytic over the disc bounded by $C$. The value of the contour integral over $C$ remains the same.
• $\color{blue}{[2]}$ - A more detailed analysis suggests for large $n$, $S_n$ has following approximation: $$S_n \approx \frac{(2n)!}{\sqrt{8r_0}n!(4r_0)^n}\left( 1 - \frac{r_0}{6(2n-1)} + \cdots \right)\quad\text{ where }\quad r_0 = 1 - \log(2)$$ For $n$ as small as $4$, this formula gives a relative error below $10^{-4}$ (checked against numbers from OEIS). The leading behavior of coefficients of $\Delta(x)$ should be: $$\frac{S_n}{(n+1)!} \sim O\left(\frac{r_0^{-n}}{\sqrt{8\pi r_0} n^{3/2}}\right)$$
• Absolutely amazing! This is clear and well done!! I just have one question: in the long equation $(*1)$, why can we modify the series of $ts^k/(k+1)$ in the exponential in a sum on $0\leq n\leq+\infty$? Dec 13, 2016 at 15:08
• @Nicolas When you expand $\exp\left(\sum\limits_{k=n+1}^\infty \frac{ts^k}{k+1}\right)$ as a power series in $s$, you get something of the form $1 + \sum\limits_{k=n+1}^\infty \alpha_k s^k$. Aside from the constant term, all remaining terms contain a factor $s^{n+1}$. This is enough to cancel the singular factor $\frac{1}{s^{n+1}}$ in the rest of the integrand. Replacing the sum from $0\to n$ to $0\to\infty$ changes the integrand by a function analytic inside $C$. The value of contour integral over $C$ remains the same. Dec 13, 2016 at 19:18
• Of course! I forgot the $s^{-(n+1)}$ term. Ok, now it is perfect to me, I do thank you for your efforts and for the time you've spent to write this answer. Dec 13, 2016 at 19:36
• @achillehui: Clear, concise and instructive! Very nice! (+1) Dec 13, 2016 at 21:37
• @achillehui Almost one year later, I have another question about the long equation (*1): in the third line, you enter the sum of multi indices and exchange it with the product in the integral. I understand the exchange with the product (this is the Cauchy product formula) but I do not see why you can enter the sum in the integral. Dominated convergence theorem seems to be the argument to use, but one needs to show that the series of the integral of the modulus converges, which is quite similar to show the convergence of $S_n$! Have you got any argument to propose? Thanks in advance. Sep 26, 2017 at 17:58
Here is a solution of a related problem, followed by a recommendation for the original problem. It would be much simpler if your sum did not have the $|m|$ in $(n+|m|)!\,$. In that case, we could look at the related sum $$t_n=\sum_{m\in {\mathfrak{M} }_n}\frac{n!}{m!}\prod_{k=1}^n(k+1)^{-m_k}.$$ The sum for the $t$'s comes from a product of exponential generating functions. Because of the factor of $(k+1)^{-m_k}$ in $t_n$ and the term $k\,m_k$ in ${\mathfrak{M} }_n$, we must look at the series $$1+\frac{\left(\frac{x^k}{k+1}\right)^1}{1!} +\frac{\left(\frac{x^k}{k+1}\right)^2}{2!} +\frac{\left(\frac{x^k}{k+1}\right)^3}{3!} +\cdots=\exp\left(\frac{x^k}{k+1}\right).$$ From multiplying these exponential generating functions, we get $$t_n=\left[\frac{x^n}{n!}\right]\prod_{k\ge1}\exp\left(\frac{x^k}{k+1}\right).$$ This product turns out to have a nice closed form: \begin{eqnarray*} \prod_{k\ge1}\exp\left(\frac{x^k}{k+1}\right) &=& \exp\left(\sum_{k\ge1}\frac{x^k}{k+1}\right) \\ &=& \exp\left(\frac1x\Bigl(\log\bigl(\frac1{1-x}\bigr)-x\Bigr)\right) \\ &=& (1-x)^{-1/x}/\mathrm{e} . \end{eqnarray*} The smallest singularity of $(1-x)^{-1/x}$ is at $1$, so a crude approximation would be $$[x^n](1-x)^{-1/x}\approx1^{-n}=1$$ and $$t_n=\left[\frac{x^n}{n!}\right](1-x)^{-1/x}/\mathrm{e}\approx n!/\mathrm{e}.$$ Certainly, a finer analysis of the singularity of $(1-x)^{-1/x}$ would give a better approximation and perhaps produce the power $\alpha$ you're seeking.

Now, back to the original problem. It's always the case that $|m|\le n$, so each term of $S_n$ is at most $\frac{(2n)!}{n!}$ times the corresponding term of $t_n$, and a rough bound on $S_n$ is $$S_n\le\frac{(2n)!}{n!}\,t_n\approx(2n)!/\mathrm{e}\,.$$ This bound is worse than the hope you expressed, but perhaps good enough for your eventual purposes or perhaps a start for finer analysis.
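Numerically, the exponential-sum identity $\exp\bigl(\sum_{k\ge1}\frac{x^k}{k+1}\bigr)=\exp\bigl(\frac1x\bigl(\log\frac1{1-x}-x\bigr)\bigr)=(1-x)^{-1/x}/\mathrm e$ is easy to sanity-check (a quick stdlib sketch):

```python
import math

def egf_sum(x, terms=500):
    # exp( sum_{k>=1} x^k/(k+1) ), truncated; fine for |x| <= 1/2
    return math.exp(sum(x**k / (k + 1) for k in range(1, terms)))

def closed_form(x):
    # exp( (1/x) * (log(1/(1-x)) - x) ) = (1-x)^(-1/x) / e
    return (1 - x) ** (-1.0 / x) / math.e

for x in (0.1, 0.3, 0.5):
    assert abs(egf_sum(x) - closed_form(x)) < 1e-10
print("closed form checked")
```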
• Thank you for your interest and your efforts! Could you please explain the link with the exponential generating functions (I am not familiar with generating functions)? Also, I do not understand why we can rewrite $t_n$ with the terms $x^n/n!$ and the product of exponentials (there was no $x$ in your original $t_n$)? Thanks in advance. Dec 12, 2016 at 22:54
• @Nicolas There are a zillion places to learn about generating functions. A couple of my favorites are Concrete Mathematics (chapter 7) and Generatingfunctionology (the first couple chapters). Generating functions form a really powerful tool, and they're well worth the time to sink your teeth into. After digesting a chapter or two of those texts, this answer might be more meaningful. Dec 13, 2016 at 4:02
• Ok thank you for the reference. Unfortunately, I do not have much time to read it for now... Anyway, good start point here! Dec 13, 2016 at 10:32
# Why program with high level programming languages? (was PLCS: PLC Fortran)
#### Brian E Boothe
I'm not saying LL isn't useful... it is, for its specific purpose... but actually calling it a LANGUAGE is a bit far for me. A user's AID,
maybe..
Nice write-up
#### Joe Jansen
Interpreter v. Compiler is determined by when the program is converted to machine level instructions. A compiler converts the program code into the machine instructions, which is then used as a binary distribution. Characterized by a faster execution than interpreted language.
Interpreted means that the program code is converted into machine instruction during execution of the program. This obviously takes longer to execute, but is much easier to debug. It has nothing to do with translating when used in this context.
I do not understand your reference to eic. I am familiar with their product, and it is just what it claims: a package that lets you run
interpreted C programs. Meaning that the code is not compiled before execution. In fact, at their command prompt, you can enter C program lines and have them execute immediately, similar to using a python or basic interpreter in immediate mode.
AFAIK, most PLCs run as interpreted, not compiled, languages. This gives the ability to do online monitoring and editing without having to compile the edits into the code all the time.
--Joe Jansen
#### Curt Wuollet
Actually it's not uncommon to do both with the same system. The human usable language is compiled to an intermediate language which is then interpreted at runtime. The advantage of doing this is that the IL can be readily and unambiguously decompiled to perform the feats of
magic with online editing, etc. The interpreter can be optimized for the specific finite IL. With today's optimizing compilers, decompiling often
does not yield recognizable source code if it's possible at all.
Regards
cww Who is not a Computer Scientist, just an old UNIX hacker.
--
Free Tools!
Machine Automation Tools (LinuxPLC) Free, Truly Open & Publicly Owned
Industrial Automation Software For Linux. mat.sourceforge.net.
Day Job: Heartland Engineering, Automation & ATE for Automotive
Rebuilders.
Consultancy: Wide Open Technologies: Moving Business & Automation to
Linux.
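The compile-to-IL-then-interpret scheme Curt describes can be sketched in toy form (hypothetical Python, loosely modeled on an IEC 61131-3-style Instruction List — not any real PLC runtime):

```python
# A one-rung ladder program -- the classic motor seal-in circuit --
# "compiled" to a tiny instruction list, then interpreted each scan.
IL = [
    ("LD",   "Start"),   # load the Start contact into the accumulator
    ("OR",   "Motor"),   # parallel seal-in branch through the Motor coil
    ("ANDN", "Stop"),    # series normally-closed Stop contact
    ("ST",   "Motor"),   # store the rung result to the Motor coil
]

def scan(il, image):
    """One PLC scan: interpret the IL against the process image."""
    acc = False
    for op, name in il:
        if op == "LD":
            acc = image[name]
        elif op == "OR":
            acc = acc or image[name]
        elif op == "AND":
            acc = acc and image[name]
        elif op == "ANDN":
            acc = acc and not image[name]
        elif op == "ST":
            image[name] = acc
    return image

io = {"Start": True, "Stop": False, "Motor": False}
scan(IL, io)            # Motor latches on
io["Start"] = False
scan(IL, io)            # Motor stays on through the seal-in
io["Stop"] = True
scan(IL, io)            # Stop opens the rung: Motor drops out
print(io["Motor"])      # -> False
```

Because the runtime just walks a small, finite op list, swapping instructions in a running system is straightforward — which is roughly what online editing exploits.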
1. Again: a language is a set of specific rules. Nothing in those rules demands that it be either interpreted or compiled.
2. I refer to eic in order to demonstrate that C is not a "compiled language"... just as I refer to ladder compilers in order to demonstrate that ladder is not an "interpreted language"...
3. Really our case is an example of a terminological muddle... between a "language" and a "language translator" and its execution model... (For example: if we say "Borland C++ 4.0", we really speak about the compiler, not about the C++ language)
etc.
#### Bram van Kampen
Well, different horses for different courses. Ladder type languages are fine where abstraction requirements are relatively low, and where there is an installed base to work on, compatible with the target hardware.
If you are reading this on a computer with a GUI, realise that this GUI is based on many millions of lines of code. This code was likely written in C or CPP, with a spattering of assembly. Ladder Logic is also a high-level language; the processor that runs the code could not possibly understand the diagrams you create. There is an interpreter or compiler involved there.
Had Bill Gates written in ladder logic, that would probably have been the language I earn my bread in now. As it happens, for better or worse, I write windows in CPP to provide the bread, and ladder logic in evening hours, to provide the jam.
#### Bob Peterson
This is an interesting question. IMHO, you should use the "highest" level programming language that best meets your needs.
People have different ideas about what this term even means. I suppose the "lowest" level programming language is machine code. No one would even consider programming in such a language today except maybe as an educational
exercise. With respect to machine code, even assembler is a higher level language (than machine code).
Higher level languages shield you from many of the details required to write code such as:
- mapping physical memory locations to variable names
- allowing you to use a symbolic variable name rather than just an address to refer to your data
- standardized I/O access
- standardized function set(s)
Imagine if you had to write the machine code for all the things that printf() does for you already. You would not be real efficient in generating code.
Others may decide that "higher" level languages must have other attributes. This is sort of a "how many angels can dance on a pin head" type question because it's really related to what it is you are trying to do.
There are a few on this list who don't deal well with RLL and think text based languages are "higher" level. But for the purposes RLL is used for, it's almost always a far better choice. The main reasons do not have anything to do with programmer productivity, as you might initially suspect. The biggest thing RLL has going for it is the huge number of people who understand enough about it to be able to do debugging and minor program changes at the factory floor. This translates into higher uptime, productivity, and profitability, which is what it's all about. Programming costs for many systems are such a tiny part of the cost of the system that they are really not all that significant.
Think about it for a while and you will almost certainly agree. A typical system I work on has maybe 600 hours of electrical engineering time in it. I'd guess the time spent is roughly 1/4 hardware related (BOMs, drawings, etc.), 1/4 defining how the stuff works, 1/4 actual programming, and 1/4 debug and testing. So I have only spent 150 hours on actual programming, say about $10,000. On a typical $1,000,000 system this is about 1%.

However, it's not unusual for a machine to cost that much (or far more) in lost production per MINUTE when it is down. Therefore, it's far more important to be able to keep the machine running, and to be able to debug problems faster, than to worry over saving a few $ in programming time.
Bob Peterson
#### Brian E Boothe
Simple answer is outside-of-the-box solutions. Typical answers would be passing TCP/IP packets, ActiveX controls / OPC-DDE / trending / memory management / CPU optimization / switching of printers, data collection / DBF routines / cross-platform development (Linux > MAC > PC, etc.).

I don't think the person that asks that question really has a programmer's thought pattern..

Ladder logic has one SET purpose (CONTAINED instructions) for the controller, such as RS-LOGIX. On the other hand, high-level languages move across MANY MANY platforms and processors..

I've been hearing this question a lot... why in god's name would someone ask such a question... you can't do ANY of the above in ladder logic?? I'm really not understanding this question??..
#### Blunier, Mark
> > > hello, i'm writing in just to enquire something. why does someone
> > > need to program with high level programming languages? is it
> > > easier? faster? less debugging time? or some other reasons?
>
> This is an interesting question. IMHO, you should use the "highest"
> level programming language that best meets your needs.
While you should use the programming language that best meets your needs, if it is a tie, use the lowest level, not the highest level language.
For example, almost any program written in C could be written in C++, but if the complexity of C++ is not needed, and many more people can program in C than in C++, C would be preferable.
Mark Blunier
Any opinions expressed in this message are not necessarily those of the company
#### Bob Peterson
Brian Boothe wrote:
> Simple answer is outside of the Box solutions, typical answers would be
> Passing TCP/IP packets, Active-x Control's / OPC-DDE / Trending / memory
> management / CPU optimization / switching of printers
> Data collection / DBF Routines /..cross platform development Linux > MAC
> > PC ect
The simple answer you gave does not deal with reality very well. Few lower level control systems need (or even use) the kinds of things you talk about. These are not really machine control issues but more like SCADA. They are
all nice to have, and even important, but don't have anything to do with machine control directly.
> dont think the person that asks that question really has a Programmers
> Thought pattern..
I don't think in terms of C for doing things that work best in RLL. Despite what you might think, RLL is extremely powerful for solving certain types of common control problems. It does not deal well with numerical or data driven
applications. But these are not the most common control applications. Look at how many PLCs are sold world wide versus how many control applications are done in C. That SHOULD tell you something.
> ladder logic has one SET Purpose. (CONTAINED Instructions). for the
> Controller...such as RS-LOGIX
> on the other hand High level languages Move across MANY MANY platforms
> and Processors..
> ive been hearing this question alot... why in gods name would
> someone ask such a question... you cant do ANY of the above in Ladder
> logic?? i'm really not understanding this question??..
Quite frankly, people that like text based languages just refuse to deal with the issues of uptime, online programming, and other PLC features designed to keep your factory producing widgets. The built-in and intuitive diagnostics available with RLL just don't exist with "higher" level languages. These are the main reasons (IMHO) that RLL refuses to die. Nothing that has tried to replace it does it any better, or typically even as well. There are some things available that MAY improve programmer productivity, but this is such a small part of most systems that a little increase in productivity is dwarfed by the ongoing cost of keeping the machine going for the next 20 years.
Bob Peterson
#### Michael Griffin
Ladder logic is a *very* high level language, certainly much higher level than 'C++' or other similar languages. Very high level languages tend to be application specific, which is why ladder logic is good at what it is intended for, but not very good for general computing use.
Languages such as grafcet are even higher level still, which is why they are even better for an even narrower range of applications, but are not able to easily replace everything that ladder does well.
I believe it was Niklaus Wirth (the creator of Pascal, among other things) who said that programming languages are created for people to read, not for computers. A programming language, and a program written in it, should be judged by how well the intended audience is able to read and comprehend it.
I learned a long time ago that it is very easy to write complex programs. Writing simple ones is the real challenge.
************************
Michael Griffin
************************
Hello Michael,
On Wednesday, June 05, 2002, 9:37:49 AM, Michael Griffin wrote:
[...]
MG> Ladder logic is a *very* high level language, certainly much higher level
MG> than 'C++' or other similar languages. Very high level languages tend to be
MG> application specific, which is why ladder logic is good at what it is
MG> intended for, but not very good for general computing use.
MG> Langauges such as grafcet are even higher level still, which is why they are
MG> even better still for an even narrower range of applications, but are not
MG> able to easily replace everything that ladder does well.
A high-level language is one that reflects the human side of HMI (HMI in the broad sense).
A low-level language is one that reflects the machine side of HMI...
Also there is the following classification of the languages:
1. first generation languages (example - machine code)
http://www.InstantWeb.com/D/dictionary/foldoc.cgi?first+generation+language
2. second generation languages (example - assemblers)
http://www.InstantWeb.com/D/dictionary/foldoc.cgi?query=second+generation+language
3. third generation languages ("common purpose languages"; examples - C, Pascal, Fortran, etc.)
http://www.InstantWeb.com/D/dictionary/foldoc.cgi?third+generation+language
4. fourth generation languages ("application specific"; examples - SQL, Focus, Metafont, PostScript, RPG-II, S, etc.)
http://www.InstantWeb.com/D/dictionary/foldoc.cgi?fourth+generation+language
5. fifth generation languages ("AI languages", "a myth"; example - none implemented yet)
http://www.InstantWeb.com/D/dictionary/foldoc.cgi?fifth+generation+language
LD is a low-level fourth generation language.
"Low level" - because it has no means to structure the program; "fourth generation" - because it is an "application specific" language, i.e. it has limited capabilities.
--
Best regards,
#### Richard Higginbotham
Bob Peterson wrote:
>The simple answer you gave does not deal with reality very well. Few
>lower level control systems need (or even use) the kinds of things you
>talk about. These are not really machine control issues but more like
>SCADA. They are all nice to have, and even important, but don't have
>anything to do with machine control directly.
He left out code reuse, up time, dev. time, and a few others that apply, but that is yet another rehash of this old topic (which has been around since before C was the next big thing).
>I don't think in terms of C for doing things that work best in RLL.
>Despite what you might think, RLL is extremely powerful for solving
>certain types of common control problems. It does not deal well with
>numerical or data driven applications. But these are not the most
>common control applications. Look at how many PLCs are sold world wide
>versus how many control applications are done in C. That SHOULD tell
>you something.
Yes, that you should never expect to get a nice juicy steak; you should give up, conform to the norm, and head down to McDonald's (over a billion served) because that's what most people like. Planning a big event, a wedding reception maybe? It doesn't really matter if your situation is different. If you want something special, just ask for extra cheese on all those Big Macs. You just don't get it: lots of people eat at McDonald's, there CAN'T be anything better, and if there is, well, it really just doesn't apply.</sarcasm>
>Quite frankly, people that like text based languages just refuse to deal
>with the issue of uptime, online programming, and other PLC features
>designed to keep your factory producing widgets. The built-in and
>intuitive diagnostics available with RLL that just don't exist with
>"higher" level languages. These are the main reasons (IMHO) that RLL
>refuses to die. Nothing that has tried to replace it does it any
>better, or typically even as well. There are some things available that
Because they realise those features have nothing to do with ladder logic. RLL has its own syntax and representation (graphic) that's not particularly special. You can write out RLL as text (and others like SFC, etc.), compile it and download it to controllers; this has been done since the 80's. RLL is nothing more than a pared-down "text"-based language used so that unskilled (non-programmer) Joe Billy Bob Blow has some hope of getting a software program to work... eventually.
It's logic along the lines of: I don't design bridges for a living. Don't know what's involved, or even how to start. BUT I do know how to make some neat stuff with Legos. Legos are easy, anyone can use Legos. Let me build your new bridge out of Legos. You'll save time and money because so many people know how to use Legos. If the bridge collapses, you don't have to worry about the design, we'll just throw on some more Legos. Some people think Legos are unsuitable for building skyscrapers because of the complexity of skyscrapers, but they're wrong because Legos are so easy to use, anyone can use them. They were designed for children after all.
>MAY improve programmer productivity, but this is such a small part of
>most systems that a little increase in productivity is dwarfed by the
>ongoing cost of keeping the machine going for the next 20 years.
Ahh, this is where I talk about code reuse, and you reply "what, I've been cutting and pasting for years." If you have the luxury of just having that machine "run" for the next 20 years, consider yourself lucky. Everywhere I've been, it's constant upgrades, additions, changes in control philosophy, etc. There are so many opportunities to do more with less, reduce chances of human error, reduce downtime, reduce development time, that it boggles the mind that there's resistance to a proven way of solving the same old problems. Sure there are differences between batch control and web development, but that's no excuse not to learn from outside the box and apply those things that are relevant and helpful.
Sometimes when you design things to account for the lowest common denominator, you get stuff designed BY the lowest common denominator.
Richard Higginbotham
speaking for me
|
|
Question
# In an isobaric reversible process, the ratio of heat supplied to the system (dq) and work done by the system (dW) for an ideal monoatomic gas is:
Solution
## At constant pressure (isobaric process), the heat supplied is q_p = n·C_p·dT. For a monoatomic gas, C_p = (5/2)R, so q_p = (5/2)·nR·dT. The work done by the system is dW = nR·dT. Hence the ratio is dq/dW = (5/2)·nR·dT / (nR·dT) = 5/2 = 2.5.
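A one-line numerical check of the ratio (the specific value of R cancels out, so the answer is independent of n, R and dT):

```python
# Ratio dq/dW for an ideal monoatomic gas in an isobaric process.
# Both q_p = n*Cp*dT and dW = n*R*dT carry the same factor n*dT,
# so the ratio reduces to Cp/R.
R = 8.314          # J/(mol*K), gas constant (value is illustrative only)
Cp = 2.5 * R       # molar heat capacity at constant pressure, monoatomic
ratio = Cp / R
print(ratio)       # -> 2.5 (up to float rounding)
```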
|
|
# Solve the Circuit For Electromagnetism Unit
Tags: circuit, electromagnetism, solve, unit
P: 87
1. The problem statement, all variables and given/known data
Please visit this site and solve with calculations, thank you!
http://rapidshare.com/files/338079894/Figure_it_out.doc
2. Relevant equations
V=IR
Rt=1/R1+1/R2+1/R3
3. The attempt at a solution
Please solve this circuit. Thank you.
P: 61
You are missing a relevant equation and you have no idea what one of the others is good for.
$$R_t=R_1+R_2+R_3...$$ for resistors in series.
$$\frac{1}{R_t}=\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}...$$ for resistors in parallel.
You have a series-parallel circuit so both apply in certain parts. We will not provide the answers because it is your homework.
P: 87 It's just that I am working on it, but I need the answers so I can check if I got it right. Please, I have a test tomorrow.
P: 61
## Solve the Circuit For Electromagnetism Unit
Then show your answers and we might be able to help more.
P: 87 OK, so I have the answers; can you please check to see if I'm right: I1=4.0A, R1=3.8 ohms, R4=5.0 ohms, I2=2.0A, I3=2.0A, V2=20.0V, V3=20.0V, I0=4.0A, Rt=10 ohms, V4=?
P: 61 You can check the V=IR across all components and their combinations to see if they make sense. If not your answers are not correct. I agree with $$I_0, I_1, I_2, I_3, V_2, V_3,\ and\ R_1$$ for now. I have something else to do. I'll check the rest later.
P: 87 great, thank you!
P: 87 Is V4=15V, R4=3.8?
P: 61 I agree with all answers except R4 and V4. Don't assume symmetry because both have 4A going through them. That current must go through the whole circuit, ie both R1 and R4. Look at your configuration of resistors and find the equation that fits. You have two parallel resistors in series with two other resistors. And solve for R4. Then apply V=IR to that resistor. For resistors in parallel, I solve for RT before entering it into my calculator or into a series equation like you will: $$R_T=(\frac{1}{R_1}+\frac{1}{R_2}+...)^{-1}$$
P: 87 Would R4 = 1.25 and V4 = 5? Because if I reduce the parallel part using 1/Rt = 1/R1 + 1/R2..., it comes to 5 ohms, and in series the equation is Rt = R1 + Rparallel + R4: 10 = 3.75 + 5 + R4, so R4 = 1.25.
P: 61 There you have it. Good luck with the exam.
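As a sanity check, the series-parallel reduction the thread converged on can be verified numerically. The circuit itself is only inferred from the posted values (the original figure is behind a dead link): R1 = 3.75 Ω in series with two parallel branches and R4, total Rt = 10 Ω, loop current I0 = 4 A, and the branch resistances follow from V2 = V3 = 20 V at I2 = I3 = 2 A:

```python
def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

r1 = 3.75                  # ohms, from the thread
r2 = r3 = 20.0 / 2.0       # V = IR  ->  10 ohms per branch
rp = parallel(r2, r3)      # 5 ohms, the reduced parallel section
rt = 10.0                  # total resistance, given
r4 = rt - r1 - rp          # series: Rt = R1 + Rp + R4  ->  1.25 ohms
v4 = 4.0 * r4              # I0 = 4 A through R4        ->  5 V
```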
|
|
# Product (Category Theory)
This simultaneously captures the concept of a product of sets, posets, groups, topological spaces etc. In addition, like any universal construction, this characterization does not differentiate between isomorphic versions of the product, thus allowing one to abstract away from an arbitrary, specific construction.
## Definition
Given a pair of objects $$X$$ and $$Y$$ in a category $$\mathbb{C}$$, the product of $$X$$ and $$Y$$ is an object $$P$$ along with a pair of morphisms $$f: P \rightarrow X$$ and $$g: P \rightarrow Y$$ satisfying the following universal condition:
Given any other object $$W$$ and morphisms $$u: W \rightarrow X$$ and $$v:W \rightarrow Y$$ there is a unique morphism $$h: W \rightarrow P$$ such that $$fh = u$$ and $$gh = v$$.
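As an illustration, in the category of sets the product is the cartesian product with the coordinate projections. A small Python sketch of the universal property (the names `proj_x`, `proj_y` and `mediate` are mine, not standard terminology):

```python
from typing import Callable, Tuple, TypeVar

W = TypeVar("W")
X = TypeVar("X")
Y = TypeVar("Y")

# In Set, the product P of X and Y is the cartesian product X x Y,
# and the morphisms f, g of the definition are the projections.
def proj_x(p: Tuple[X, Y]) -> X:   # f : P -> X
    return p[0]

def proj_y(p: Tuple[X, Y]) -> Y:   # g : P -> Y
    return p[1]

def mediate(u: Callable[[W], X], v: Callable[[W], Y]) -> Callable[[W], Tuple[X, Y]]:
    """Given u : W -> X and v : W -> Y, return the unique h : W -> P
    with proj_x(h(w)) == u(w) and proj_y(h(w)) == v(w)."""
    return lambda w: (u(w), v(w))

# Example: W = int, u doubles, v increments; the triangle commutes.
h = mediate(lambda w: w * 2, lambda w: w + 1)
assert proj_x(h(3)) == 6 and proj_y(h(3)) == 4
```

Uniqueness of `h` holds because any map into a pair is determined by its two coordinates; that is exactly what the universal condition says.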
Children:
Parents:
• Category theory
How mathematical objects are related to others in the same category.
• Would Product (mathematics) be an appropriate name, or does category theory’s use of the term point to only a subset of the things product can mean?
• @1yq Whether Product (mathematics) is appropriate really depends if you’re asking a category theorist (who would say yes) or not . ;-)
In seriousness, specific kinds of products include cartesian products, products of algebraic structures, products of topological spaces and the most well known: product of numbers. All of these are special cases of the categorical product (if you pick your category right), but I can imagine someone wanting to look up ‘product’ as in multiplication and getting hit with category theory.
I don’t know. It’s a matter of taste I suppose. I get the idea that category theory is not yet quite widely-known enough for this to be considered “the” definition by most mathematicians, but if other contributors feel it should be given that status I certainly won’t complain. I just thought this was the safer approach.
See, for example [product on Wikipedia](https://en.m.wikipedia.org/wiki/Product_(mathematics)).
• Yeah, I think keeping it as it is now is probably the best way of following the “one idea per page” methodology. The page on Products (mathematics) can have this page as child.
|
|
smaller HP 82166A with 32-pin connector
09-14-2019, 07:50 AM (This post was last modified: 09-06-2021 07:26 AM by Klaus Overhage.)
Post: #1
Klaus Overhage Member Posts: 65 Joined: Jan 2016
smaller HP 82166A with 32-pin connector
The technical manual of the HP82166A HP-IL Converter describes a device with a 34-pin GPIO connector. The one I bought is a little bit smaller, with a 32-pin connector. I worked out the pin assignment of this device as well as I could by looking at the PCB.
Has anybody seen this variant of a HP82166A before and maybe can confirm the pin assignment?
Code:
     | <== upper plastic edge
 ----
GND  32 | | 31 +5V
-CS  30 | | 29 NC
DAVO 28 | | 27 RDYI
RDYO 26 | | 25 DACI
PWON 24 | | 23 GND
DACO 22 | | 21 HLLO
DB7  20 | | 19 DAVI
DB5  18 | | 17 DB6
DB3  16 | | 15 DB4
DB1  14 | | 13 DB2
DCLO 12 | | 11 DB0
MSRQ 10 | | 09 GET0
DA6  08 | | 07 DA7
DA4  06 | | 05 DA5
DA2  04 | | 03 DA3
DA0  02 | | 01 DA1
 ----
     | <== upper plastic edge
Attached File(s) Thumbnail(s)
09-14-2019, 08:43 AM
Post: #2
J-F Garnier Senior Member Posts: 752 Joined: Dec 2013
RE: smaller HP 82166A with 32-pin connector
No I didn't know this variant. It seems that it's a pre-production variant still using discrete pulse transformers.
When I started to work on HP-IL in the 80's I got a HP82166 prototype - I don't have it anymore - that was using similar pulse transformers and also had a 32-pin connector. See attached image with the proto side by side with a production unit.
Note that the layout changed in the final HP82166 with the MCU and 1LB3 swapped.
J-F
Attached File(s) Thumbnail(s)
09-16-2019, 07:23 AM (This post was last modified: 11-01-2021 08:23 AM by Klaus Overhage.)
Post: #3
Klaus Overhage Member Posts: 65 Joined: Jan 2016
RE: smaller HP 82166A with 32-pin connector
Thank you for your explanations. Then this device is probably not so strange and could work. It already has the same MCU as the production unit, but the ICs are still rotated as in your prototype. That probably explains why the pin assignment of my variant deviates so strongly from the technical manual. The name is displayed by the HP-41CX with CCD module in Catalog 0 as HP82166A.
In the meantime, I was able to build the simple circuit from the book "Control the world with HP-IL", page 42. Instead of a 74C373, a 74HCT573 is used. The tests of whether 8 LEDs can be switched this way were all positive. The DA0-DA7, DAVO, RDYI, DACI, +5V and GND connections are all correct. The "Count" program on page 44 ran to 255 in 96 seconds. With the HP-71B, a flickering light looks very pretty.
Code:
10 RESTORE IO
20 A=DEVADDR("%64")
30 ENDLINE ""
40 X=1
50 FOR I=1 TO 8
60 OUTPUT :A; CHR$(X)
70 X=2*X
80 NEXT I
90 FOR I=1 TO 8
100 OUTPUT :A; CHR$(X)
110 X=X/2
120 NEXT I
130 GOTO 50
140 END
Attached File(s) Thumbnail(s)
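The byte sequence that BASIC listing writes (a single lit bit walking up and back down the port) can be sketched in Python. The `& 0xFF` mask below is my assumption about how the 9th bit (X reaches 256 at the top of the second loop) would wrap into a single output byte; the original program simply sends CHR$(X):

```python
def led_pattern(cycles=1):
    """Yield the byte values the HP-71B program writes to the converter:
    a lit bit walking up (1..128), then back down again."""
    for _ in range(cycles):
        x = 1
        for _ in range(8):       # lines 50-80: walk the bit up
            yield x & 0xFF
            x *= 2
        for _ in range(8):       # lines 90-120: walk back down
            yield x & 0xFF       # first value is 256 -> 0 after masking
            x //= 2

print(list(led_pattern()))
```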
09-16-2019, 02:23 PM
Post: #4
Paul Berger (Canada) Senior Member Posts: 541 Joined: Dec 2013
RE: smaller HP 82166A with 32-pin connector
Does anyone have one of the prototypes with the piggyback EPROM like the one in M. Garnier's photo? I would really like to get a dump of that EPROM.
Paul.
04-08-2020, 10:06 AM
Post: #5
KimH Member Posts: 164 Joined: Jul 2018
RE: smaller HP 82166A with 32-pin connector
I have a handful of these 82166A modules and have decided to spend some time on them the last few (and next) days.
I can write to the port, but when connecting 2 of these with the small connector board 82166-90013 I struggle to READ anything on the port.
I have looked at the Control the World.... book and I see the R0-R3 registers being key, but have not found a way to READ from the 8-bit bus.
I send Characters to Module 1 and want to read them from Module 2 - ENTER :LOOP ; X$ - nothing happens, it gets stuck.
I have been looking to find a copy of Christoph Klug's book - anyone know where to get the book?
Can anyone here help, please?
/Kim
04-08-2020, 10:57 AM
Post: #6
J-F Garnier Senior Member Posts: 752 Joined: Dec 2013
RE: smaller HP 82166A with 32-pin connector
(04-08-2020 10:06 AM)KimH Wrote: I send Characters to Module 1 and want to read them from Module 2 - ENTER :LOOP ; X$ - nothing happens, it gets stuck.
First of all, you must use addresses in OUTPUT and ENTER statements, not just :LOOP.
Please post your complete test sequence, so we can evaluate it.
J-F
04-08-2020, 12:25 PM (This post was last modified: 04-08-2020 12:31 PM by KimH.)
Post: #7
KimH Member Posts: 164 Joined: Jul 2018
RE: smaller HP 82166A with 32-pin connector
Thanks J-F,
I actually use the address as you suggest.
I admit, that I am struggling with the logic of R0 to R3. I get most of it, but not all.
As G. Friedman suggests, the documentation and examples from HP were not exactly the best he had seen.
I saw a reference to Christoph's book HP41 - Input-Output Board and the TOC - looked like what I needed. Can't find it anywhere to buy. Maybe someone here would know how to get the book or how to get in touch with Mr. Klug (means clever in German )
Here are the 2 pieces I use for my experiment - retyped.
The listener stops at line 20 and moves no further.
Code:
1 !LISTENER
2 RESTOREIO
5 A=DEVADDR("%64")
6 ! FROM CONTROL THE WORLD P178 - LOOKS CORRECT
7 SEND UNT UNL MTA LISTEN A DDL 0 DATA 226,16,24 UNT UNL
10 FOR I=1 TO 10
20 ENTER :A ;B$
25 DISP B$
30 NEXT I

1 !TALKER (sends CHARS - seemingly)
2 RESTOREIO
3 ENDLINE ""
5 A=DEVADDR("%64")
9 ! FROM CONTROL THE WORLD p33 - LOOKS CORRECT
10 SEND UNT UNL MTA LISTEN A DDL 0 DATA 74,16,208 UNT UNL
20 FOR I=1 TO 63
25 X$=CHR$(I+64)
30 OUTPUT :A ;X$
31 DISP I; X$
40 NEXT I
(04-08-2020 10:57 AM)J-F Garnier Wrote:
(04-08-2020 10:06 AM)KimH Wrote: I send Characters to Module 1 and want to read them from Module 2 - ENTER :LOOP ; X$ - nothing happens, it gets stuck.
First of all, you must use addresses in OUTPUT and ENTER statements, not just :LOOP. Please post your complete test sequence, so we can evaluate it.
J-F
04-08-2020, 02:33 PM
Post: #8
J-F Garnier Senior Member Posts: 752 Joined: Dec 2013
RE: smaller HP 82166A with 32-pin connector
(04-08-2020 12:25 PM)KimH Wrote: Here are the 2 pieces I use for my experiment - retyped. The listener stops at line 20 and moves no further.
Are you using 2 HP-71B, one on each side? Otherwise you must address the 2 converters with their own addresses. That sounds quite complicated just to send bytes from one converter to the other.
You may start using the default control register setting and just do OUTPUT 1;A$ on one side and ENTER 2;B$ on the other side. Be sure to restore the default ENDLINE condition! Otherwise it will not work.
To be more specific, I believe your test code above may hang at line 20 because the ENTER statement doesn't find a string terminator (since you disabled it with ENDLINE "").
J-F
04-08-2020, 02:57 PM
Post: #9
KimH Member Posts: 164 Joined: Jul 2018
RE: smaller HP 82166A with 32-pin connector
Yes, I have 2 times 82166A mounted on the board, each in their own loop with a 71B each - one TALK and the other LISTEN. What I wanted was to see the sequence which was sent on one device re-appear on the other device, one by one. This would tell me that the loop was working and that I had understood some of the basics.
I believe I have done what you suggest, but clearly I will try again. Maybe I am dealing with a mundane challenge - one or both of the modules are not working as expected. I will drop a note here once I know more.
Thanks for your suggestions and for your time!
/Kim
04-08-2020, 03:12 PM
Post: #10
KimH Member Posts: 164 Joined: Jul 2018
RE: smaller HP 82166A with 32-pin connector
Oh Joy - both modules work. RESTOREIO, ENDLINE and then your suggestion actually moved the Characters "J-F" from one 82166A Device to the other! (thought that J-F was a nice touch)
Now that I know that BOTH module interfaces work, I can start having some fun.
Thanks J-F!!
04-08-2020, 06:38 PM
Post: #11
Dave Frederickson Senior Member Posts: 2,137 Joined: Dec 2013
RE: smaller HP 82166A with 32-pin connector
(04-08-2020 12:25 PM)KimH Wrote: Maybe someone here would know how to get the book or how to get in touch with Mr. Klug (means clever in German )
You could send him a PM.
04-08-2020, 07:29 PM
Post: #12
Paul Berger (Canada) Senior Member Posts: 541 Joined: Dec 2013
RE: smaller HP 82166A with 32-pin connector
(04-08-2020 12:25 PM)KimH Wrote: I saw a reference to Christoph's book HP41 - Input-Output Board and the TOC - looked like what I needed. Can't find it anywhere to buy.
I got mine directly from Herr Klug; if you are unable to get one for some reason I could probably be convinced to part with my copy, it is practically unused.
Paul.
04-09-2020, 05:33 AM
Post: #13
KimH Member Posts: 164 Joined: Jul 2018
RE: smaller HP 82166A with 32-pin connector
That's my current challenge, I don't find his email addr. I assume he is active in this group somewhere. On XING (Germanic LinkedIn) I have found his name and his location Hildesheim, and sent him an invite. Maybe it is him and he reacts...
If anyone has a hint or his address - and feels OK sharing - I would make use of it.
/Kim
04-09-2020, 02:10 PM (This post was last modified: 04-09-2020 02:17 PM by Dave Frederickson.)
Post: #14
Dave Frederickson Senior Member Posts: 2,137 Joined: Dec 2013
RE: smaller HP 82166A with 32-pin connector
A Private Message, not an email. https://www.hpmuseum.org/forum/private.php?action=send For Recipients: enter Christoph Klug.
I'm interested in Christoph's book as well. I acquired an 82166A complete in-box with two adapters and the Eval Board last week and I could use a good reference for the HP-IL protocol.
Dave
Edit: You can try sending Christoph an email here. https://www.hpmuseum.org/forum/user-1798.html
04-17-2020, 04:57 PM
Post: #15
KimH Member Posts: 164 Joined: Jul 2018
RE: smaller HP 82166A with 32-pin connector
So, I made contact with Christoph Klug. In a roundabout way my efforts paid off and Christoph made contact with the publisher, who has the book available as an e-book (PDF) - the paper book is OOP.
If you write to Email = verlag@franzbecker.de and ask for the book ISBN 978-3-88120-853-6, titled "HP-41 Input / Output Board" and "IL2000 Interface System", they will send you an "Invoice" for €18 which needs to be paid, and then the link to the book arrives in your email the next day. Not sure how they handle non-Germans; we have a Direct Deposit system which is reliable and fast. You may be well served asking here: http://www.franzbecker.de/kontakt.html
I got the 700-plus page book this morning and have read through the most interesting parts of it, and I must say, it is excellent! Better than I had hoped for, and lots of inspiration. There's even a section with comments from Wlodeck and a couple of other friends of our community.
It is a bit odd, as the PDF is made in such a way that the part "HP-41 In/out" and the part "IL2000" are "upside-down" relative to each other. Part one is read from the front cover - the right page has what you read - and the second part is to be read from the back cover, i.e. every other page is upside-down... - but it works.
I have the work cut out for me now - oh joy
04-17-2020, 05:50 PM
Post: #16
Paul Berger (Canada) Senior Member Posts: 541 Joined: Dec 2013
RE: smaller HP 82166A with 32-pin connector
Yes, the book is printed in an interesting fashion; if anyone would rather have a paper copy I could probably be convinced to part with mine.
04-20-2020, 12:46 PM
Post: #17
Paul Berger (Canada) Senior Member Posts: 541 Joined: Dec 2013
RE: smaller HP 82166A with 32-pin connector
The book has been spoken for.
03-14-2021, 06:44 PM
Post: #18
KimH Member Posts: 164 Joined: Jul 2018
RE: smaller HP 82166A with 32-pin connector
OK - it's raining... again. Hi Klaus, or anyone ready to help. I know this may be a bit mundane - please be gentle. I have a couple of questions which you may be able to help me (and others) with.
As the 82166A devices (I have a handful) are known to be a bit sensitive AND not replaceable, I got myself an 82165A GPIO "in a box" - it seems to be built a bit more robust/resilient and has its own 5V power supply which you can tap into for experiments. I learned that in Control the World with HP-IL, which is my inspiration for getting started.
The idea is that once I had done the same experiment as you, using the same book and circuit, I would move to the 166's and start doing some real I/O (bi-directional, ADC/DAC, optocouplers etc.) - read Christoph Klug, Interloop 211 et al.
I got the components - inverter & latch - and a nice Canon 25 D-Sub plug with a "breakout box" for the connector on the 82165A, and got going, using the 5V power rail from the 165A. It did not work as expected; it seemed like the 473 was in saturation mode with the nice LEDs (used blue ones). I simply couldn't get the LEDs to look as I had hoped for. Several iterations and I realized a couple of things:
- The latch was indeed stressed; it did flash the LEDs, but VERY briefly
- The inverter was working, but not triggering the LE, it appeared (but it actually did)
- The latch seemed to not be latching...
So, thinking saturation, I stopped using the 5V power from the 165A and put a separate PSU to feed the 74HC chips & LEDs. Which actually worked wonders... at 3.8 volts. Actually, it worked all the way from 2.5 volts - LEDs behaved as expected. Thinking I had lost my wits, I tried to work my way back up to 5V. I was putting 100, 220 (that one value would be right) and even 1k ohm in series with the LEDs; not much of a change, the voltage divider/current limitation just would not play along as hoped for, current kept flowing in abundance, voltage controlled by the LEDs.
So here is the question/suspicion: does it matter if I use the 74HC (CMOS) or the HCT (CMOS, TTL-level)? This may be the issue... Can anyone here give a hint?
BTW: The 74HC574 does not require the inverter on LE (vs. 373 & 473), one chip saved.
03-15-2021, 09:42 AM
Post: #19
Klaus Overhage Member Posts: 65 Joined: Jan 2016
RE: smaller HP 82166A with 32-pin connector
Hello Kim,
according to the data sheet, the HC version differs from the HCT version only in its input voltage levels. The output voltages and currents are the same.
I cannot find a data sheet for a 74HC473 on the internet. Is this a very old IC that was replaced by the 74HC573 some time ago? I think you better switch to a 74HCT573 with a 4069 or maybe 74HCT574.
I use a separate power supply for the LEDs; they are connected to the outputs of two 74LS00 against 5V via 330 ohms. A 7404 can do the same job.
I have built a cable from the HP 82165A to a parallel printer as described in the manual. And I have tried to read from and write to an old 8K SRAM with the help of some TTL ICs that count the address up. This SRAM circuit belongs to an old interface system for the Commodore 64, for which I have some plug-in cards and can, for example, write the results of an A/D converter into the 8K RAM.
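The rough series-resistor arithmetic behind the 330-ohm choice (the forward-drop values below are assumed typical figures, not measured ones, and blue LEDs drop noticeably more than red ones, which is one reason a 5V rail behaves differently from a 3.8V one):

```python
# LED tied from +5 V through a series resistor to a gate output sinking to ~0 V.
# Current = (supply - LED forward drop) / series resistance.
vcc = 5.0
v_f_red = 2.0      # assumed forward drop of a typical red LED
v_f_blue = 3.2     # assumed forward drop of a typical blue LED
r = 330.0          # series resistor from the setup described above

i_red = (vcc - v_f_red) / r     # ~9 mA, comfortable for an LS output sink
i_blue = (vcc - v_f_blue) / r   # ~5.5 mA, dimmer but safe
```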
03-15-2021, 11:47 AM
Post: #20
KimH Member Posts: 164 Joined: Jul 2018
RE: smaller HP 82166A with 32-pin connector
Fun with the old RAM-Memory Block - had not thought of that as an option
I will for sure stay with the HC(T)574 from now on, no longer need the Inverter for LE.
Also I will split the two tasks - Latch and LED-Output - into two discrete blocks, as you have done.
(03-15-2021 09:42 AM)Klaus Overhage Wrote: Hello Kim,
« Next Oldest | Next Newest »
User(s) browsing this thread: 1 Guest(s)
|
|
Australian Maths Competition (1 Viewer)
Mongoose528
Member
How would you solve this algebraically: A nude number is a natural number each of whose digits is a factor of the number. Find all 3-digit nude numbers where no digits are repeated.
Can't seem to make inroads :/
I can solve it case by case, but I want to know a quicker way of how to do it.
Bump.
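There is no slick closed form here, but a brute-force sketch at least confirms the case-by-case list (note that a nude number cannot contain the digit 0, since 0 divides nothing):

```python
def is_nude(n):
    """True if every digit of n is nonzero and divides n."""
    digits = [int(d) for d in str(n)]
    return all(d != 0 and n % d == 0 for d in digits)

# 3-digit nude numbers with all digits distinct
results = [n for n in range(100, 1000)
           if is_nude(n) and len(set(str(n))) == 3]
print(len(results), results)   # e.g. 124, 135, 175, 324 all appear
```

For the algebraic route, the usual shortcut is casework on the digit set: any even digit forces the number to be even, a 5 forces the last digit to be 0 or 5, and divisibility by 3 or 9 constrains the digit sum, which prunes the cases quickly.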
jathu123
Active Member
Re: Westpac Maths Comp marathon
Hi, can anyone solve question 27 of the 2013 Senior AMC paper? I have tried many different methods but just cannot get it to work. I've tried finding the ratio of the areas, solving the areas simultaneously, and all that. Please, someone help, as I've been trying to figure it out for a fortnight now.
Refer to the previous posts
Here's a solution to one of them:
View attachment 34157
Mongoose528
Member
A bit of help on this question please, this is a bit of practice for the AIMO.
This is all the working I've managed to do
Last edited:
seanieg89
Well-Known Member
$Hint: If \triangle ABC is a triangle, and X is a point on the line segment BC with BX:CX=\lambda, then |\triangle ABX|:|\triangle ACX|=\lambda.$
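The hint follows because $\triangle ABX$ and $\triangle ACX$ share the same altitude $h$ from $A$ to the line $BC$, so their areas are proportional to their bases:

$$|\triangle ABX| = \tfrac{1}{2}\,BX\cdot h,\qquad |\triangle ACX| = \tfrac{1}{2}\,CX\cdot h \quad\Longrightarrow\quad \frac{|\triangle ABX|}{|\triangle ACX|} = \frac{BX}{CX} = \lambda.$$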
Ashaaz
New Member
Australian Mathematics Competition
I was just wondering if anyone has past papers and answers for this competition, as I would like to make a start on practising for the test. Year 8.
yo_yo
New Member
Anybody going to AMC?
|
|
# Susan spent one-third of her money on books and half of the remaining

Math Expert Joined: 02 Sep 2009 Posts: 50583
29 Oct 2018, 06:11

Susan spent one-third of her money on books and half of the remaining money on clothing. She then spent three-fourths of what she had left on food. She had $5 left over. How much money did she start with?

A. $60
B. $80
C. $120
D. $160
E. $180 _________________ Board of Directors Status: QA & VA Forum Moderator Joined: 11 Jun 2011 Posts: 4207 Location: India GPA: 3.5 WE: Business Development (Commercial Banking) Re: Susan spent one-third of her money on books and half of the remaining [#permalink] ### Show Tags 29 Oct 2018, 07:29 1 Bunuel wrote: Susan spent one-third of her money on books and half of the remaining money on clothing. She then spent three-fourths of what she had left on food. She had$5 left over. How much money did she start with?
A. $60 B.$80
C. $120 D.$160
E. $180

Let the amount of money she had be 12x.
Amount spent on books = 4x
Amount spent on clothing = 4x
Amount spent on food = 3x
Amount left = x = $5
So, the total amount of money she had is $60. The answer must be (A).

---

e-GMAT Representative
30 Oct 2018, 02:28

Solution

Given:
• Susan spent
  o $$\frac{1}{3}^{rd}$$ of her money on books,
  o $$\frac{1}{3}^{rd}$$ (of the original amount) on clothing,
  o $$\frac{1}{4}^{th}$$ on food
• She had $5 left over
To find:
• The total money she had at the beginning
Approach and Working:
• Let the initial amount be $X
• Left over money = $$(1 - \frac{1}{3} - \frac{1}{3} - \frac{1}{4}) * X = \frac{X}{12} = 5$$
  o Implies, X = $5 * 12 = $60

Hence, the correct answer is Option A.

Answer: A

---

Target Test Prep Representative
30 Oct 2018, 18:11

Bunuel wrote:
Susan spent one-third of her money on books and half of the remaining money on clothing. She then spent three-fourths of what she had left on food. She had $5 left over. How much money did she start with?
A. $60 B.$80
C. $120 D.$160
E. $180

We can let the total amount of money Susan had = n. So she spent n/3 on books, (1/2)(n - n/3) = (1/2)(2n/3) = n/3 on clothing, and (3/4)(n - n/3 - n/3) = (3/4)(n/3) = n/4 on food.

We can create the equation:

n/3 + n/3 + n/4 + 5 = n

Multiplying the equation by 12, we have:

4n + 4n + 3n + 60 = 12n

11n + 60 = 12n

60 = n

So she had $60 before she spent it on books, clothing and food.
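All three solutions agree. As a quick sanity check (a Python sketch, not part of the original thread), one can simply replay the spending sequence:

```python
def money_left(start):
    """Replay Susan's spending: one-third on books, half the rest on
    clothing, then three-fourths of what remains on food."""
    after_books = start - start / 3
    after_clothing = after_books / 2
    after_food = after_clothing * (1 - 3 / 4)
    return after_food

print(money_left(60))  # 5.0, matching the $5 left over
```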
|
|
## Fooling with mathematicians
28/02/2013
I am still working with stochastic processes and, as my readers know, I have proposed a new view of quantum mechanics assuming that a meaning can be attached to the square root of a Wiener process (see here and here). I was able to generate it through a numerical code. A square root of a number can always be taken, irrespective of any deep and beautiful mathematical analysis. The reason is that this is something really new and deserves a different approach, much in the same way it happened with Dirac's delta, which initially met with skepticism from the mathematical community (it simply did not make sense with the knowledge of the time). Here is some Matlab code if you want to try it yourselves:
nstep = 500000;
dt = 50;
t=0:dt/nstep:dt;
B = normrnd(0,sqrt(dt/nstep),1,nstep); % Gaussian increments of the process
dB = cumsum(B); % Brownian path built from the increments
% Square root of the Brownian motion
dB05=(dB).^(1/2);
Nothing prevents you from taking the square root of a number that is a Brownian displacement, so all this has a perfectly sound meaning numerically. The point is just to understand how to give this a full mathematical meaning. The wrong approach in this case is to throw it all away, claiming it does not exist. This is exactly the behavior I met from Didier Piau. Of course, Didier is a good mathematician, but he simply refuses to accept the possibility that such concepts can have any meaning at all, based on what has been coded so far in the area of stochastic processes, notwithstanding that they can be easily computed on a personal computer at home.
But this saga is not over yet. This time I was trying to compute the cubic root of a Wiener process and I posted this at Mathematics Stackexchange. I posted the question with the simple idea of considering a stochastic process with a random mean, and I did not realize that I was provoking a small crisis again. This time the question is the existence of the process ${\rm sign}(dW)$. Didier Piau immediately wrote that it does not exist. Again I give here the Matlab code that computes it very easily:
nstep = 500000;
dt = 50;
t=0:dt/nstep:dt;
B = normrnd(0,sqrt(dt/nstep),1,nstep); % Gaussian increments of the process
dB = cumsum(B); % Brownian path built from the increments
% Sign and absolute value of a Wiener process
dS = sign(dB);
dA = dB./dS;
Didier Piau and a colleague of his just complained about the way Matlab performs the sign operation. My view is that it is all legal, as Matlab takes + or – depending on the sign of the displacement, something that can be done by hand and that does not imply anything exotic. What is exotic here is the strong opposition this evidence meets, notwithstanding that it is easily understandable by everybody and, of course, easily computable on a tabletop computer. The expected distribution for the signs of Brownian displacements is a Bernoulli with p=1/2. Here is the histogram from the above code:
This has mean 0 and variance 1, as it should for $N=\pm 1$ and $p=\frac{1}{2}$, and this can be verified after some Monte Carlo runs. This is in agreement with what I discussed here at Mathematics Stackexchange, as a displacement in a Brownian motion is a physical increment or decrement of the moving particle and has a sign that can be managed statistically. My attempt to compare all this to the case of Dirac's delta met with a complaint of overstatement, as the delta was really useful and my approach is not (but when Dirac put forward his idea it was just airy-fairy for the time). Of course, a reformulation of quantum mechanics would be a rather formidable support to all this, but this mathematician does not seem to realize it.
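For readers without Matlab, here is a rough Python equivalent (my sketch, not the author's code) checking that the signs of the Gaussian increments follow a Bernoulli distribution with p = 1/2, giving ±1 values of mean 0 and variance 1:

```python
import random

random.seed(0)
nstep = 100_000
dt = 50.0

# Gaussian increments of the Wiener process, as in the Matlab code above
incr = [random.gauss(0.0, (dt / nstep) ** 0.5) for _ in range(nstep)]

# signs of the displacements: +1 or -1
signs = [1 if x >= 0 else -1 for x in incr]
mean = sum(signs) / nstep
var = sum((s - mean) ** 2 for s in signs) / nstep

print(round(mean, 3), round(var, 3))  # close to 0 and 1 respectively
```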
So, in the end, I am somewhat surprised by the behavior of the community toward novelties. I can understand skepticism, it belongs to our profession, but when facing new concepts that can easily be checked to exist numerically I would prefer a more constructive attitude of trying to understand, rather than immediate dismissal. It appears that the history of science never taught anything, leaving us with a boring repetition of stereotyped reactions to something that would instead be worth further consideration. Meanwhile, I hope my readers will enjoy playing around with these computations using some exotic mathematical operations on a stochastic process.
Marco Frasca (2012). Quantum mechanics is the square root of a stochastic process arXiv arXiv: 1201.5091v2
## Numerical evidence for the square root of a Wiener process
02/02/2012
Brownian motion is a very kind mathematical object, lending itself readily to numerical simulation. There are plenty of implementations for any platform and software, so one is able to check very rapidly whether a given hypothesis works properly. For these aims, I have found very helpful the demonstration site by Wolfram and specifically this program by Andrzej Kozlowski. Andrzej gives the code to simulate Brownian motion and compute the Itō integral to verify Itō's lemma. This was a very good chance to check my theorems, recently given here, with some numerical work. So, I have written a simple code in Matlab that I give here (rename from .doc to .m to use with Matlab).
Here is a sample of output:
As you can note, the agreement is almost perfect. I have had to rescale with a multiplicative factor, as the square root appears somewhat magnified after squaring, but the pattern is there. You can do the checks yourselves. So, all my equations are perfectly defined, as is a possible square root of a Wiener process.
Of course, improvements, advices or criticisms are very welcome.
Update: I have simplified the code and added a fixed scale factor to make identical scale. The code is available at Simulation. Here is an example of output:
Marco Frasca (2012). Quantum mechanics is the square root of a stochastic process arXiv arXiv: 1201.5091v2
27/01/2012
Disclaimer: This post is somewhat technical.
Recently, I posted a paper on arXiv (see here) claiming that quantum mechanics is the square root of a Wiener process. In order to get my results I have to consider some exotic Itō integrals that Didier Piau showed to be non-existent (see here and here). In my argument I have a critical definition, and this is the process $|dW(t)|$ that I defined using the sum
$S_n=\sum_{i=1}^n|W(t_i)-W(t_{i-1})|$
so that I assumed the limit $\lim_{n\rightarrow\infty}\langle S_n^2\rangle$ exists and is finite. This position appears untenable as Didier showed in the following way. In this case one has ($s,\ t>0$)
$\langle|W(t+s)-W(t)|\rangle=\sqrt{2s/\pi}$
and the increments are independent, so that for $i\ne k$
$\langle|W(t_i)-W(t_{i-1})||W(t_k)-W(t_{k-1})|\rangle=$
$\langle|W(t_i)-W(t_{i-1})|\rangle\langle|W(t_k)-W(t_{k-1})|\rangle=\frac{2}{\pi}\sqrt{t_i-t_{i-1}}\sqrt{t_k-t_{k-1}}.$
Now, if you want to compute the limit in $L^2$ you are in trouble. Just choose $t_i=i/n$ and you will get
$\langle\left(\sum_{i=1}^n|W(t_i)-W(t_{i-1})|\right)^2\rangle$
that is
$\frac{2}{\pi}\frac{1}{n}\sum_{i=1}^n\sum_{j\ne i}1=\frac{2}{\pi}(n-1),$ plus the diagonal contribution $\sum_{i=1}^n(t_i-t_{i-1})=1$.
If you compute these sums you finally get a term proportional to $n$ that blows up in the limit of increasingly large $n$. The integral simply does not exist from a mathematical standpoint.
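The divergence is easy to see numerically. The following Python sketch (mine, not from the post) estimates $\langle S_n^2\rangle$ by Monte Carlo with $t_i=i/n$ and shows the growth proportional to $n$ (roughly $2n/\pi$):

```python
import random

random.seed(1)

def mean_Sn_squared(n, trials=100):
    """Monte Carlo estimate of <S_n^2> for increments on the grid t_i = i/n."""
    total = 0.0
    sd = (1.0 / n) ** 0.5  # each increment is N(0, 1/n)
    for _ in range(trials):
        s = sum(abs(random.gauss(0.0, sd)) for _ in range(n))
        total += s * s
    return total / trials

for n in (100, 400, 1600):
    print(n, round(mean_Sn_squared(n), 1))  # grows roughly like 2n/pi
```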
Of course, a curse for a mathematician is a blessing for a theoretical physicist, mostly when an infinity appears. Indeed, let us consider the sum
$\sum_{i=1}^\infty 1=1+1+1+1+\ldots$
People who have read Hardy’s book know for sure that this sum is just $-1/2$ (see also discussion here). This series can be regularized and so the limit can be taken to be finite!
$\langle S_n^2\rangle\rightarrow\ {\rm finite\ value}.$
This average is just finite and this is what I would expect for this kind of process. With this idea of regularization, the generalized Itō integral $\int_{t_0}^tG(t')|dW(t')|$ exists and is meaningful. The same idea can be applied to the case $\int_{t_0}^tG(t')(dW(t'))^\alpha$ with $0<\alpha<1$ and my argument is just consistent as I show that for $(dW(t))^\frac{1}{2}$ the absolute value process enters.
As a theoretical physicist I can say: Piau’s paradox is happily evaded!
Marco Frasca (2012). Quantum mechanics is the square root of a stochastic process arXiv arXiv: 1201.5091v1
## Dispersive Wiki
26/03/2011
Since I was seventeen my great passion has been the solution of partial differential equations. I used an old book written by Italian mathematicians to face for the first time the technique of separation of variables applied to the free Schrödinger equation. The article was written by Paolo Straneo, professor at the University of Genova in the first part of the last century and Einstein's friend, and from it I was exposed to quantum theories in a not too simple way. At eighteen, during my vacation in Cambridge, some friends of mine gave me my first mathematics book on PDEs: François Treves, Basic Linear Partial Differential Equations. You can find this book at low cost from Dover (see here).
Since then I have never given up my passion for this fundamental part of mathematics, and today I am a professional in this area of research. As a professional in this area, important references come from the work of Terry Tao (see also his blog), the Fields medalist. Terry, together with Jim Colliander at the University of Toronto, maintains a wiki, Dispersive Wiki, with the aim of collecting all the knowledge about the differential equations that are at the foundation of dispersive effects. Most of you first met the wave equation early in your studies. Well, it represents a very good starting point. On the other side, it would be helpful to add some contributions on the Einstein or Yang-Mills equations. Indeed, Dispersive Wiki is open to all people that, like me, are addicted to PDEs and all matters around them.
I have had the chance to write some contributions to Dispersive Wiki. Currently, I am putting down some lines on the Yang-Mills equations (I did it before, but that was flagged as self-promotion… just look at the discussion there), the Dirac-Klein-Gordon equations and other articles. I think it would be important to help Jim and Terry in their endeavor, as PDEs are the bread and butter of our profession, and having such an on-line bookkeeping of results would be extremely useful. Just take your time to give it a look.
## Physics of the Riemann Hypothesis
18/01/2011
In this blog I frequently discuss one of the Clay Institute's Millennium Prize problems: mass gap and existence of a quantum Yang-Mills theory. Sometimes I have also used Perelman's theorem containing the Poincaré conjecture to discuss some properties of quantum gravity, and also the Cramér-Rao statistical bound. Today on arXiv I found a beautiful review paper by Daniel Schumayer and David Hutchinson about the Riemann hypothesis, another Millennium problem, and physics (see here). This question has remained unsolved for more than 150 years. The relevance of understanding this conjecture lies in the possibility of giving a function describing the distribution of prime numbers.
The formulation of the Riemann hypothesis is embarrassingly simple. The Riemann zeta function is defined, for $\Re(s)>1$, by the very simple series
$\zeta(s)=\sum_{n=1}^\infty\frac{1}{n^s}.$
Extended to the rest of the complex plane by analytic continuation, this function has a set of trivial zeros at all negative even integers and a set of nontrivial zeros. The Riemann hypothesis claims that
All nontrivial zeros of $\zeta(s)$ have the form $\rho=\frac{1}{2}+it$, with $t$ a real number.
This is Hilbert's eighth problem, which also gave this question the name we use today. Simple as the question may seem, it has baffled mathematicians' efforts to this day. But, as happens with most mathematics, it can be found applied in Nature, and it is tempting to think of reproducing in a lab what appears to be a complicated mathematical problem and reading the answer directly from experiments. Indeed, such a road was definitely opened in 1999, when Michael Berry (the one of the phase) and Jon Keating put forward an important conjecture relating quantum systems and the Riemann hypothesis. You can find this cornerstone paper here. Since then the hunt has been open for other connections amenable to treatment in physics. Schumayer and Hutchinson give an extensive review of them in their paper. This view opens up the possibility of a solution of this fundamental question through physics. Surely, we are witnessing again an interesting intertwining of these fundamental disciplines of science.
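As a quick numerical sanity check of the series definition above (my sketch, in the region $\Re(s)>1$ where the raw series converges), partial sums of $\zeta(2)$ approach the known value $\pi^2/6$:

```python
import math

def zeta_partial(s, terms):
    """Partial sum of the Dirichlet series for zeta(s); valid for Re(s) > 1."""
    return sum(1.0 / n ** s for n in range(1, terms + 1))

approx = zeta_partial(2, 100_000)
print(approx, math.pi ** 2 / 6)  # both about 1.64493...
```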
Daniel Schumayer, & David A. W. Hutchinson (2011). Physics of the Riemann Hypothesis arxiv arXiv: 1101.3116v1
Berry, M., & Keating, J. (1999). The Riemann Zeros and Eigenvalue Asymptotics SIAM Review, 41 (2) DOI: 10.1137/S0036144598347497
## Current status of Yang-Mills mass gap question
01/12/2010
I think it is time to make a point about the question of the existence of a mass gap in Yang-Mills theory. There are three lines of research in this area: theoretical, numerical and experimental. I suppose that the one that most interests my readers is the theoretical one. I would like to remember that, in order to get a Millennium Prize, one also needs to prove the existence of the theory. This makes the problem far from trivial.
As of today, the question of the existence of the mass gap, both for scalar field theories and for Yang-Mills theory, should be considered settled. Currently there are two papers of mine, here and here, both published in archival journals, proving the existence of the mass gap and giving it in closed analytical form. A proof has also been given by Alexander Dynin at Ohio State University here. Alexander does not give the mass gap in closed form but obtains a lower bound that permits him to conclude that Yang-Mills theory has a discrete spectrum with a mass gap. This is enough to declare this part of the problem solved. It is interesting to note that, differently from the Poincaré conjecture, this solution does not require overly complex mathematics. This can be understood from the fact that the corresponding classical equations of the theory already admit massive free-particle solutions. The quantum theory can be built on these solutions, and all this boils down to a trivial infrared fixed point for the quantum theory. Such a trivial fixed point, which also explains the lower bound Alexander is able to find, is good news: we have a set of asymptotic states at diminishing momenta that can be used to do perturbation theory and computations for physics! The reason why these relevant mathematical results have not received proper exposition so far escapes me and belongs to the realm of things I do not know. It is true that in this area there is a lot of caution, and this can be understood, as this problem received a lot of attention after Witten and Jaffe proposed it for a big money prize.
But, as I have already said, this problem has two questions to be answered, and while computing the mass gap is quite easy, the other question is rather involved. To prove the existence of a quantum field theory is not a trivial matter and, for sure, we know that the Wiener integral exists while the Feynman integral does not (so far, and only for mathematicians). What I prove in my papers is that the Euclidean theory exists for the scalar field theory (thanks to Glimm and Jaffe, who already proved this) and that this theory matches Yang-Mills theory in the limit of the gauge coupling going to infinity. It should be an asymptotic existence… Alexander, on his side, proves existence in a different way, but here unfortunately I cannot say too much; I would appreciate it if Alexander would write down some lines here about his work.
Other theoretical attempts are based on some educated guess as a starting point, such as the vacuum functional, the beta function or other parts of the theory that, for a full proof, should instead be derived. These attempts give strong support to my work and that of Alexander. In these papers you will see a discrete spectrum, and this is the one of a harmonic oscillator, or simply the very existence of the mass gap itself. But, for physicists, the spectrum is the relevant conclusion, as from it we can get the masses of physical states to be seen in accelerator facilities. This is the reason why I do not worry too much about mathematicians fussing over my papers.
Finally, I would like to spend a few words on numerical and experimental results. Experiments always clearly show bound states of quarks and gluons, which are never seen free. This is the best proof Nature has given us so far of the existence of the mass gap. Numerically, people have computed both the Green functions and the spectrum of the theory. I am convinced that these lines should merge. The spectrum on the lattice, both quenched and unquenched, displays the mass gap. The Green functions, when one considers just the decoupling solution, are Yukawa-like, both on the lattice and from the Dyson-Schwinger equations, and this again is a proof of the existence of the mass gap.
I hope I have not forgotten anyone; please let me know. If you need explicit references here and there, I will be pleased to post them. A lot of people are involved in this kind of research and I am happy to acknowledge the good work.
Finally, I would like to remember that one cannot be skeptical about mathematics as mathematics can only be either right or wrong. No other way.
## Exact solutions go published!
30/11/2010
My paper presenting exact solutions to classical scalar field theories, with a corresponding quantum formulation, has been accepted for publication in the Journal of Nonlinear Mathematical Physics. The replacement on arXiv will appear tomorrow; the link is here. I would like to thank the Editor, Norbert Euler, and an anonymous referee who pointed out to me the existence of a zero mode in the quantum fluctuations.
|
|
# mathlibdocumentation
analysis.calculus.affine_map
# Smooth affine maps #
This file contains results about smoothness of affine maps.
## Main definitions: #
• continuous_affine_map.cont_diff: a continuous affine map is smooth
theorem continuous_affine_map.cont_diff {𝕜 : Type u_1} {V : Type u_2} {W : Type u_3} [nondiscrete_normed_field 𝕜] [normed_add_comm_group V] [normed_space 𝕜 V] [normed_add_comm_group W] [normed_space 𝕜 W] {n : with_top ℕ} (f : V →A[𝕜] W) :
cont_diff 𝕜 n f
A continuous affine map between normed vector spaces is smooth.
|
|
### Home > A2C > Chapter 12 > Lesson 12.3.1 > Problem12-137
12-137.
Verify that it is true for n = 1.
$5=\frac{(1)(3(1)+7)}{2}$
5 = 5
Assume that the identity is true for some arbitrary number k.
Prove that given the above assumption, the identity holds true for (k + 1).
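The series being summed is not reproduced here. Assuming it is 5 + 8 + 11 + … + (3n + 2), which is consistent with the n = 1 check above giving 5, a quick numeric check of the identity n(3n + 7)/2 is:

```python
def closed_form(n):
    # candidate closed form from the hint above (assumed identity)
    return n * (3 * n + 7) / 2

def direct_sum(n):
    # assumed series: 5 + 8 + 11 + ... + (3n + 2)
    return sum(3 * i + 2 for i in range(1, n + 1))

print(all(closed_form(n) == direct_sum(n) for n in range(1, 50)))  # True
```

This does not replace the induction proof; it only supports the assumed form of the identity.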
|
|
# Numbers
A number in Dyon is a variable that stores floats with 64 bit precision. The type is f64.
a := 1
b := .5
c := 2.5
d := -3
e := 1_000_000
f := 1e6
g := -7_034.52e-22
You can add, subtract, multiply, divide, raise to a power, and find the division remainder:
a := 2
b := a + 3
c := a - 3
d := a * 3
e := a^3
f := a / 3
g := a % 3
### Relative change
The following operators change the value relatively:
a += 1 // increase by 1
a -= 1 // decrease by 1
a *= 2 // multiply by 2
a /= 2 // divide by 2
a %= 2 // find the division remainder of 2
• f64 is used by both Rust and Dyon
• "One million" can be written as 1_000_000 or 1e6
|
|
[–] 7 points8 points (1 child)
The important concept here is (in)exactness. Think of a Scheme number as a numerical value together with a flag of exactness. If you measure something and get the result 3.9, then you will represent the measurement with an inexact 3.9. Now (floor 3.9) must be inexact too (since the true value of the measured entity might have been 4.1). On the other hand, for an exact 3.9, the floor value is an exact 3.
Note that numbers entered with a decimal dot are read as inexact numbers so the prefix #e must be used to indicate exact numbers.
> (floor 3.9)
3.9
> (floor #e3.9)
3
[–][S] 1 point2 points (0 children)
Ah, enlightening. Thank you.
[–] 1 point2 points (4 children)
Consider (floor 1.2e30).
If returning a fixnum, large floats can't be floored at all. If returning a bignum, large floats return a value with a huge amount of spurious precision.
[–][S] 0 points1 point (3 children)
But isn't someone who asks for (floor 1.2e30) asking for just that? (I'm not intending to be argumentative; I'm genuinely puzzled about what else a person who asks for (floor x) could want when x is represented as a floating point. Probably I just don't get 'exact'.)
[–] 0 points1 point (2 children)
No, not necessarily. There are a lot of formulae containing floor() where you want exactly this behaviour. Eg. the formula for the nth Fibonacci Number:
[; Fib_n = \lfloor{}\frac{\phi{}^n}{\sqrt{5}} + \frac{1}{2} \rfloor{} ;]
If you're using this to calculate Fib(n) for large n (for use in some other calculation), you're probably not going to want to compute every last digit. (In fact, if you did want an exact solution, you'd probably use a memoized recursive computation rather than the explicit formula.)
There are other functions for common use cases where you want to take an integral floor. Eg. the most common case is probably flooring a quotient. This is provided by the integer-floor() function.
(integer-floor 7 3) ;; --> (floor (/ 7 3)) --> 2
[–][S] 0 points1 point (1 child)
I don't find integer-floor in r7rs. Is it specific to an implementation?
[–] 0 points1 point (0 children)
Yeah, sorry, that's an MIT Scheme-specific function. I was careless when I looked up the name.
"quotient" is the standard function (which is much nicer than "integer-floor" anyway).
[–] 0 points1 point (0 children)
This is a bit of a guess....
In Gambit I notice that (eqv? 0.99999999999999999 1.0) => #t. Which I'd assume is an artifact of the underlying IEEE representation. This means the precision of the inexact number implementation affects the result of operations like floor. The R5RS document says that the output must be inexact if the result is affected by the value of the input.
[–] 0 points1 point (5 children)
At http://www.schemers.org/Documents/Standards/R5RS/HTML/ if you search for "(floor x)" it says:
Note: If the argument to one of these procedures is inexact, then the result will also be inexact. If an exact value is needed, the result should be passed to the inexact->exact procedure.
I only guess that it's so decimal numbers continue to stay decimal numbers, and exact numbers stay exact numbers. So if your calculations are based on inexact numbers, then it'll be inexact numbers throughout, even if you use floor.
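As an aside not from the thread: a language without an exactness flag must make a fixed choice here. Python's math.floor, for instance, always returns an exact int, discarding the inexactness information that Scheme preserves:

```python
import math

# Python has no exactness flag: math.floor always returns an exact int,
# even when the argument is an inexact float.
print(type(math.floor(3.9)), math.floor(3.9))  # <class 'int'> 3

# (1.0/3.0)*3.0 happens to round to exactly 1.0 in IEEE double precision,
# so its floor is 1 (compare the Scheme outputs discussed in this thread).
print(math.floor((1.0 / 3.0) * 3.0))  # 1
```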
[–] 0 points1 point (3 children)
It's hard to think of a pragmatic concern where something would go badly if floor were in fact exact, the way one would expect from mathematics. Maybe I'm just missing something.
[–] 3 points4 points (2 children)
(floor (* (/ 1.0 3.0) 3.0)) is inexactly 0.0, while (floor (* (/ 1 3) 3)) is exactly 1.
[–] 0 points1 point (0 children)
Good point. I was thinking in terms of floor returning an integer. 0.0 is clearly inexact. Thanks.
[–][S] 0 points1 point (0 children)
I find
#;1> (floor (* (/ 1.0 3.0) 3.0))
1.0
but I think I get your point. Thank you.
[–][S] 0 points1 point (0 children)
Yes, thanks. I see that it satisfies the spec.
[–] 0 points1 point (1 child)
Type tagging. You don't want to change the type partway through a computation, because then the runtime type tag has to change as well. Also, if you are certain you are going to keep the same type, you can exploit things like IEEE floating point for faster rounding.
[–][S] 0 points1 point (0 children)
I've never heard of this, thanks. I had wondered if the idea was to make writing an implementation easier.
|
|
TOOLS FOR ASSESSMENT AND RISK REDUCTION IN THE FORMATION OF PLANS FOR ROCKET-SPACE TECHNOLOGY
Annotation
PII
S042473880000600-9-1
Publication type
Article
Status
Published
Authors
Edition
Pages
54-61
Abstract
Design and manufacture of high-technology products with a long life cycle requires the formation of long-term scientific and technological plans and programs, whose implementation is accompanied by a large number of uncertainties and various types of risk. Among the most important tasks to be addressed in preparing development plans are methods and tools to identify and neutralize potential threats of disruption of planned activities due to deviation of the plans' implementation from the original trajectory. The paper proposes economic and mathematical tools for accounting for, managing and compensating the risks inherent in the processes of creation and development of modern rocket and space technology, within the framework of innovative plans and programs, using transcendental and empirical methods of planning development options. To build a short-term plan for the development of rocket-and-space technology the authors have developed a multivariate control scheme that allows moving from one short-term plan option to another while preserving balance, without loss of consistency.
Keywords
innovation project, economic and mathematical tools, multitask, programs and plans, risks and uncertainties, rocket-space technology
Date of publication
01.10.2017
|
|
# Derivation of Circular Mean Square Error
I would like to understand how Eq. (36) in [2] was derived:
The rationale behind the definition of the circular sample mean in Eq. (37) is clear, but no motivation is given for the CMSE definition in Eq. (36).
[2] Lovell, Brian C., and Robert C. Williamson. "The statistical performance of some instantaneous frequency estimators." IEEE Transactions on Signal Processing 40.7 (1992): 1708-1723.
• The first term in Eq. (36) is the Mean Resultant Length (MRL) [Kutil, Biased and unbiased estimation of the circular mean resultant length and its variance] mapped to [0,+inf]. I still don't know where the second term (bias correction) is coming from. – Arrigo May 22 '20 at 17:38
The first term in $$e^2_p$$ is just the variation of the estimates around the true value.
The second term is due to the problem that happens at the end of the periodic region. Sometimes, the noise is enough to move the estimate from $$-\pi+\alpha$$ to $$\pi - \beta$$.
For example, the orange plot in the figure below represents the variation of the estimates for a true value around zero. This variation is captured by the first term.
The blue plot in the figure below represents the variation of the estimates for the true value around the end of the period. The trouble is that some estimates end up all the way at the other end of the periodic region. This causes a bias to appear. The second term in (36) is aimed at capturing that bias.
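To make the origin of the bias term concrete, here is a small numerical sketch (the noise level and the angle are hypothetical, not taken from [2]): a true angle near the edge of $(-\pi, \pi]$ produces some estimates that wrap to the other end, which badly biases the ordinary (linear) mean while the circular mean is unaffected.

```python
import numpy as np

rng = np.random.default_rng(0)
true_angle = np.pi - 0.1            # true value near the edge of (-pi, pi]
noise = rng.normal(0, 0.3, 100_000)
# wrap the noisy estimates back into (-pi, pi]
est = np.angle(np.exp(1j * (true_angle + noise)))

# the linear mean is pulled far from the truth by the wrapped samples
linear_mean = est.mean()
# the circular mean (angle of the mean resultant vector) handles the wrap
circular_mean = np.angle(np.mean(np.exp(1j * est)))

print(linear_mean, circular_mean)
```

The circular mean lands essentially on the true angle; the linear mean does not, which is the effect the second term in (36) is accounting for.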
• Say that you are using the CMSE formula to compare the relative performance of several frequency estimation algorithms (this is actually the reason it was introduced in [2]). The bias correction term will then make it possible to compare the performance of these estimators in regions with lower SNRs. Is this a correct statement? – Arrigo May 26 '20 at 18:06
Note: Thoughts expressed within this workbook are my own and do not represent any prior, current or future employers or affiliates
## Background - An Overview of Modelling in Specialty Insurance
Within the specialty insurance space we are typically insuring economic interests against relatively rare, high-impact events (for example: buildings damaged by hurricanes, aircraft crashes, the impact of CEO wrong-doing, and so on). These events are typically broken up into 2 broad classes:
1. Property
2. Casualty
Hence the term "P&C" insurer. Property risks are, as the name suggests, related to property - historically physical property, but now also non-physical property (e.g. data). Owing to the relative simplicity of these risks there is an entire universe of quantitative models for risk management purposes; in particular, a handful of vendors create "natural catastrophe" (nat-cat) models. These models are sophisticated and essentially rely on GIS-style modelling: a portfolio of insured risks is placed on a geographical map (using lat-long co-ordinates), then "storm tracks" representing possible hurricane paths are run through the portfolio, resulting in a statistical distribution of loss estimates. For other threats such as earthquakes, typhoons and wild-fires similar methods are used.
These nat-cat models allow for fairly detailed risk management procedures. For example, they allow insurers to look for "hot spots" of exposure and then rein in exposure growth in those areas. They allow for counter-factual analysis: what would happen if the hurricane from last year took a slightly different track? They allow insurers to consider marginal impacts of certain portfolios, for example: if we take on a portfolio a competitor is giving up, will it aggregate or diversify against our current portfolio? As a result of this explanatory power natural catastrophe risks are now well understood; for all intents and purposes these risks are commodified, which has allowed insurance-linked securities (ILS) to form.
Before this analytics boom specialty insurers made their money in natural catastrophe and property insurance; in recent years there has been massive growth on the Casualty side of the business. Unfortunately the state of modelling on that side is, to put it politely, not quite at the same level.
As one would expect, nat-cat model vendors have tried, and continue to try, to force the casualty business into their existing natural catastrophe models. This is a recipe for disaster, as the network structure of something like the economy does not naturally lend itself to a geographic spatial representation. There is also a big problem of available data. Physical property risks give rise to data that is easy to cultivate. Casualty data is either hard or impossible to find - why would any corporation want to divulge all the details of their interactions? As such it does not appear that these approaches will become useful tools in this space.
To fill this void there has been an increasing movement of actuaries into casualty risk modelling roles. While this overcomes some of the problems that face the nat-cat models it also introduces a whole new set of issues. Traditional actuarial models rely on statistical curve fitting to macro-level data. Even assuming a suitable distribution function can be constructed, it is of limited use for risk management as it only informs us of the "what" but not the "why", making it hard to orient a portfolio for a specific result. More recently actuaries have slowly begun to model individual deals at a micro-level and aggregate them to get a portfolio view. To do this a "correlation matrix" is typically employed; this approach also has issues:
1. Methods don't scale well with size; adding new risks often requires the entire model to be recalibrated, taking time and effort.
2. They either require a lot of parameters or are unable to capture multi-factor dependency (e.g. a double trigger policy where each trigger has its own sources of accumulation).
3. It is usually not possible to vary the nature of dependency (e.g. add tail dependence or non-central dependency).
4. Results are often meaningless in the real world; it is usually impossible to perform counter-factual analysis.
To bridge this gap I have developed a modelling framework that allows for the following:
1. Modelling occurs at an individual insured interest level
2. Modelling is scalable in the sense that adding new insured interests requires relatively few new parameters and calibrations
3. Counter-factual analysis is possible and the model can be interpreted in terms of the real world
4. The framework itself is highly parallelizable, whereas nat-cat models require teams of analysts, large servers and IT infrastructure this framework lends itself to being run by multiple people on regular desktop computers with little additional workflow requirements
## A First Step: A Simple Driver Method
We will now look at a very stylised model of aggregation that will form a foundation on which we can build the more sophisticated model framework. We call this method of applying dependence a "driver method", it is standard practice for applying dependence in banking credit risk models where there can be many thousands of risks modelled within a portfolio. The interpretation is that there is a central "driver", each individual risk is "driven" by this and since this is common to all risks there is an induced dependence relation between them.
The model relies on the generalised inverse transform method of generating random variates. Stated very simply: if you apply the inverse CDF of a random variable to a random number (a U[0,1] variate) you will have samples distributed as that random variable. Therefore, in order to apply dependence in a general form we only need to apply dependence between U[0,1] variates. We will also exploit the fact that normal distributions are closed under addition (that is, the sum of jointly normal variables is normal).
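The inverse transform step on its own can be sketched as follows (the gamma marginal is just an illustrative choice):

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(42)
u = rng.uniform(size=200_000)   # U[0,1] variates
samples = gamma.ppf(u, 2)       # inverse CDF -> gamma(shape=2) samples

print(samples.mean())           # analytic mean is shape * scale = 2
```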
We can now express the model as follows:
1. We sample standard normal (N(0,1)) variates to represent the "driver" variable
2. For each risk sample an additional set of normal variates
3. Take a weighted sum of the "driver" and the additional normal variates to give a new (dependent) normal variate
4. Standardise the result from step 3) and convert to a U[0,1] variable using the standard gaussian CDF
5. Use an inverse transform to convert the result of step 4) to a variate as specified by the risk model
We can see that this method is completely general, it does not depend on any assumption about the stand-alone risk model distributions (it is a "copula" method). Another observation is that the normal variates here are in some sense "synthetic" and simply a tool for applying the dependence.
For clarity an example is presented below:
# Simple driver method example
# We model a central driver Z
# We want to model 2 risks: Y1 and Y2 which follow a gamma distribution
# Synthetic normal variates X1 and X2 are used to apply dependence
import numpy as np
from scipy.stats import gamma, norm
import matplotlib.pyplot as plt
%matplotlib inline
# Set number of simulations and random seed
SIMS = 1000
SEED = 123
np.random.seed(SEED)
# Simulate driver variables
Z = np.random.normal(0, 1, SIMS)
# Simulate temporary synthetic variable X1, X2 and standardise
X1 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X2 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
# Use normal CDF to convert X synthetic variables to uniforms U
U1 = norm.cdf(X1)
U2 = norm.cdf(X2)
# Use inverse transforms to create dependent samples of Y1 and Y2
Y1 = gamma.ppf(U1, 2)
Y2 = gamma.ppf(U2, 3)
# Plot a basic scatter to show dependence has been applied and calculate pearson coefficient
plt.scatter(Y1, Y2)
plt.xlabel('Y1')
plt.ylabel('Y2')
plt.show()
correl = np.corrcoef(Y1, Y2)
print("Estimated Pearson Correlation Coefficient:", correl[0,1])
Estimated Pearson Correlation Coefficient: 0.4628059800990357
The example above shows we have correlated gamma variates with around a 50% correlation coefficient. (In this case we could calculate the correlation coefficient analytically, but it is not necessary for our purposes; as we create more sophisticated models the analytic solutions become difficult or impossible.)
Even from this example we can see how models of this form provide superior scalability: for each additional variable we only need to specify one parameter, the weight given to the central driver. In contrast, a "matrix" method requires each pair-wise combination to be specified (and then we require a procedure to convert the matrix to positive semi-definite form in order to apply it). Suppose our model requires something more sophisticated, say the sum of a correlated gamma and a Weibull distribution: the number of parameters in a matrix representation grows very quickly. It is worth noting we do lose some control; by reducing the number of parameters in this way we lose the ability to express every possible correlation network. However, in most cases this is not a big problem as there is insufficient data to estimate a full correlation matrix anyway.
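To make the scalability point concrete, here is a sketch of the same mechanism applied to 50 risks with a single weight parameter each (the weights and marginals are illustrative, not calibrated):

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(0)
SIMS, N_RISKS = 100_000, 50
weights = rng.uniform(0.3, 0.7, N_RISKS)     # one dependence parameter per risk

Z = rng.normal(0, 1, SIMS)                   # single shared driver
eps = rng.normal(0, 1, (N_RISKS, SIMS))      # idiosyncratic components
# weighted sums, normalised so each synthetic variable is N(0, 1)
X = weights[:, None] * Z + (1 - weights)[:, None] * eps
X /= np.sqrt(weights**2 + (1 - weights)**2)[:, None]

Y = gamma.ppf(norm.cdf(X), 2)                # 50 dependent gamma risks
print(Y.shape)
```

Adding a 51st risk means adding one weight, not 50 new pairwise correlations.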
It is worth pointing out that the type of dependency applied here is a "rank normal" dependency - this is the same dependency structure as in a multi-variate normal distribution, albeit generalised to any marginal distribution.
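A quick sketch confirming the copula property: because the marginal inverse CDFs are strictly increasing, swapping the marginal distributions leaves the rank (Spearman) correlation exactly unchanged (the lognormal marginal below is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.stats import gamma, lognorm, norm, spearmanr

rng = np.random.default_rng(1)
SIMS = 50_000
Z = rng.normal(0, 1, SIMS)
X1 = (0.5 * Z + 0.5 * rng.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X2 = (0.5 * Z + 0.5 * rng.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
U1, U2 = norm.cdf(X1), norm.cdf(X2)

# the same uniforms pushed through two different marginal choices
rho_gamma, _ = spearmanr(gamma.ppf(U1, 2), gamma.ppf(U2, 3))
rho_logn, _ = spearmanr(lognorm.ppf(U1, 1), lognorm.ppf(U2, 1))
print(rho_gamma, rho_logn)   # identical: the ppf maps preserve ranks
```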
## An Extension to the Simple Driver Method
We can extend the model above by noticing the following: there is nothing stopping the "synthetic" variables being considered drivers in their own right. Closure under addition does not require the summands to be independent - sums of jointly normal (rank-normal correlated) variables are still normal! We can thus extend the model to:
# Simple driver method example
# We model a central driver Z
# 2 additional drivers X1 and X2 are calculated off these
# We want to model 2 risks: Y1 and Y2 which follow a gamma distribution
# Synthetic normal variates sX1 and sX2 are used to apply dependence
import numpy as np
from scipy.stats import gamma, norm
import matplotlib.pyplot as plt
%matplotlib inline
# Set number of simulations and random seed
SIMS = 1000
SEED = 123
np.random.seed(SEED)
# Simulate driver variables
Z = np.random.normal(0, 1, SIMS)
# Simulate additional driver variables X1, X2 and standardise
X1 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X2 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
# Simulate Synthetic Variables sX and standardize
sX1 = (0.5 * X1 + 0.25 * X2 + 0.25 * np.random.normal(0, 1, SIMS))
sX1 = (sX1 - sX1.mean()) / sX1.std()
sX2 = (0.5 * X2 + 0.25 * X1 + 0.25 * np.random.normal(0, 1, SIMS))
sX2 = (sX2 - sX2.mean()) / sX2.std()
# Use normal CDF to convert sX synthetic variables to uniforms U
U1 = norm.cdf(sX1)
U2 = norm.cdf(sX2)
# Use inverse transforms to create dependent samples of Y1 and Y2
Y1 = gamma.ppf(U1, 2)
Y2 = gamma.ppf(U2, 3)
# Plot a basic scatter to show dependence has been applied and calculate pearson coefficient
plt.scatter(Y1, Y2)
plt.xlabel('Y1')
plt.ylabel('Y2')
plt.show()
correl = np.corrcoef(Y1, Y2)
print("Estimated Pearson Correlation Coefficient:", correl[0,1])
Estimated Pearson Correlation Coefficient: 0.7851999480298125
As before we have ended up with rank-normal correlated gamma variates. This time we have 3 potential "driver" variables Z, X1, X2 - all correlated with each other. It is not hard to see how this procedure can be iterated to give arbitrarily many correlated driver variables. Further, we can imagine these variables being oriented in a hierarchy: Z at the bottom layer, X1 and X2 a layer above, and so on.
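A minimal sketch of this iteration (the sqrt(1 - w^2) normalisation is a variant of the weighting used above that keeps each driver exactly standard normal; the weights and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
SIMS = 200_000

def child_driver(parent, w):
    # child = w * parent + sqrt(1 - w^2) * noise, so the child is exactly N(0, 1)
    return w * parent + np.sqrt(1 - w**2) * rng.normal(0, 1, len(parent))

global_econ = rng.normal(0, 1, SIMS)      # bottom-layer driver Z
europe = child_driver(global_econ, 0.7)   # next layer up
france = child_driver(europe, 0.8)        # and so on

# correlations multiply down the chain: corr(france, global_econ) ~ 0.8 * 0.7
print(np.corrcoef(france, global_econ)[0, 1])
```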
## What is a Driver?
We should now take a step back and think about the implications for the insurance aggregation problem. As stated previously this method allows us to define dependency with far fewer parameters than using a matrix approach. When you start getting into the realms of 100,000s of modelled variables this becomes increasingly important from a calibration perspective.
However there are other benefits. We can look at how the model variables relate to the driver variables, asking questions such as: "What is the distribution of the modelled variables when driver Z is above the 75th percentile?" This is a form of counter-factual analysis that can be performed using the model; with the matrix approaches you get no such ability. For counter-factual analysis to be useful, however, we require real-world interpretations of the drivers themselves. By limiting ourselves to counter-factual analysis based on driver percentiles (e.g. after the normal CDF is applied to Z, X1, X2 - leading to uniformly distributed driver variables) we make no assumption about the distribution of the driver itself, only its relationship with other drivers.
By not making a distributional assumption, a driver can represent any stochastic process. This is an important but subtle point. For example we could create a driver for the "global economy" (Z) and by taking weighted sums create new drivers for the "US economy" (X1) and "European economy" (X2). In this example there may be data-driven calibrations for suitable weights (e.g. using GDP figures), however it is also relatively easy to use expert judgement. In my experience it is actually easier to elicit parameters in this style of model than "correlation" parameters, given this natural interpretation.
Given this natural interpretation we can quite easily begin to answer questions such as: "What might happen to the insurance portfolio in the case of a european economic downturn?" and so on. Clearly the detail level of the driver structure controls what sort of questions can be answered.
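As a sketch of this style of counter-factual query (continuing the illustrative one-driver example, with hypothetical weights), we can condition the modelled risk on the driver's percentile:

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(3)
SIMS = 200_000
Z = rng.normal(0, 1, SIMS)                                        # driver
X1 = (0.5 * Z + 0.5 * rng.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
Y1 = gamma.ppf(norm.cdf(X1), 2)                                   # modelled risk

# simulations where the driver is in its top quartile ("stressed" scenarios)
stressed = norm.cdf(Z) > 0.75
print(Y1[stressed].mean(), Y1[~stressed].mean())
```

The conditional mean of the risk is materially higher in the stressed scenarios, which is exactly the kind of question a matrix approach cannot answer.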
As stated previously we can repeat the mechanics of creating drivers to create new "levels" of drivers (e.g. moving from "European economy" to "French economy", "UK economy" and so on). We can also create multiple "families" of drivers; for example, in addition to looking at economies we may consider a family relating to "political unrest", again broken down into region then country and so on. Other driver families may not have a geographic interpretation - for example commodity prices. In some cases the families may be completely independent of each other; in other cases they can depend on each other (e.g. commodity prices will have some relationship with the economy).
In the examples so far we have presented a "top down" implementation: we start by modelling a global phenomenon and then build "smaller" phenomena out of it. There is nothing special about this; we could have just as easily presented a "bottom up" implementation: take a number of "Z" variables to represent regions and combine these to form an "X" representing a global variable. Neither implementation is necessarily better than the other and, through proper calibration, they lead to mathematically equivalent behaviours. In practice however I have found the "top down" approach works better: typically you will start with a simple model and through time it can iterate and become more sophisticated. The top down approach makes it easier to maintain "backward compatibility", which is a very useful feature for any modelling framework (e.g. suppose the first iteration of the framework only considers economic regions, and next time a model is added which requires country splits - with top down, adding new country variables keeps the economic regions identical without requiring any additional thought).
## The need for more Sophistication: Tail Dependence
Unfortunately the model presented so far is still quite a way from being useful. We may have found a way of calibrating a joint distribution using relatively few (O(N)) parameters and can (in some sense) perform counter-factual analysis, but there is still a big issue.
So far the method only allows for rank-normal joint behaviour. From the analysis of complex systems we know that this is not necessarily a good assumption (please see other blog posts for details). We are particularly interested in "tail dependence", in layman's terms: "when things go bad, they go bad together". Tail dependence can arise for any number of reasons:
• Structural changes in the system
• Feedback
• State space reduction
• Multiplicative processes
• Herd mentality/other human behaviours
• And many others
Given the framework we are working within we are not particularly interested in how these effects occur, we are just interested in replicating the behaviour.
We will extend the framework to cover a multivariate student-t dependence structure, noting the following: $$T_{\nu} \sim \frac{Z} {\sqrt{ \frac{\chi^2_{\nu}} {\nu}}}$$ Where:
$T_{\nu}$ follows a student-t distribution with $\nu$ degrees of freedom
$Z$ follows a standard normal $N(0,1)$
$\chi^2_{\nu}$ follows Chi-Square with $\nu$ degrees of freedom
Therefore we can easily extend the model to allow for tail dependence.
# Simple driver method example
# We model a central driver Z
# 2 additional drivers X1 and X2 are calculated off these
# We want to model 2 risks: Y1 and Y2 which follow a gamma distribution
# Synthetic normal variates sX1 and sX2 are used to apply dependence
# Tail dependence is added through Chi
import numpy as np
from scipy.stats import gamma, norm, chi2, t
import matplotlib.pyplot as plt
%matplotlib inline
# Set number of simulations and random seed
SIMS = 1000
SEED = 123
np.random.seed(SEED)
# Simulate driver variables
Z = np.random.normal(0, 1, SIMS)
# Simulate additional driver variables X1, X2 and standardise
X1 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X2 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
# Simulate Synthetic Variables sX and standardize
sX1 = (0.5 * X1 + 0.25 * X2 + 0.25 * np.random.normal(0, 1, SIMS))
sX1 = (sX1 - sX1.mean()) / sX1.std()
sX2 = (0.5 * X2 + 0.25 * X1 + 0.25 * np.random.normal(0, 1, SIMS))
sX2 = (sX2 - sX2.mean()) / sX2.std()
# Simulate Chi-Square for tail-dependence
nu = 3
Chi = chi2.rvs(nu, size=SIMS)
sX1 /= np.sqrt(Chi / nu)
sX2 /= np.sqrt(Chi / nu)
# Use t CDF to convert sX synthetic variables to uniforms U
U1 = t.cdf(sX1, df=nu)
U2 = t.cdf(sX2, df=nu)
# Use inverse transforms to create dependent samples of Y1 and Y2
Y1 = gamma.ppf(U1, 2)
Y2 = gamma.ppf(U2, 3)
# Plot a basic scatter to show dependence has been applied and calculate pearson coefficient
plt.scatter(Y1, Y2)
plt.xlabel('Y1')
plt.ylabel('Y2')
plt.show()
correl = np.corrcoef(Y1, Y2)
print("Estimated Pearson Correlation Coefficient:", correl[0,1])
Estimated Pearson Correlation Coefficient: 0.7907911109201866
We can further extend this model by allowing each modelled variate to have its own degree of tail dependence. Why is this important? The framework spans many different models, and a single degree of tail dependence may not be suitable for all variables. We can achieve this via another inverse transform: $$T_{\nu} \sim \frac{Z} {\sqrt{ \frac{F^{-1}_{\chi^2_{\nu}}(U)} {\nu}}}$$ As before but where:
$U$ follows a uniform U[0,1] distribution
$F^{-1}_{\chi^2_{\nu}}$ is the inverse cdf of $\chi^2_{\nu}$
# Simple driver method example
# We model a central driver Z
# 2 additional drivers X1 and X2 are calculated off these
# We want to model 2 risks: Y1 and Y2 which follow a gamma distribution
# Synthetic normal variates sX1 and sX2 are used to apply dependence
# Tail dependence is added through Chi1 and Chi2 with varying degrees
import numpy as np
from scipy.stats import gamma, norm, chi2, t
import matplotlib.pyplot as plt
%matplotlib inline
# Set number of simulations and random seed
SIMS = 1000
SEED = 123
np.random.seed(SEED)
# Simulate driver variables
Z = np.random.normal(0, 1, SIMS)
# Simulate additional driver variables X1, X2 and standardise
X1 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X2 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
# Simulate Synthetic Variables sX and standardize
sX1 = (0.5 * X1 + 0.25 * X2 + 0.25 * np.random.normal(0, 1, SIMS))
sX1 = (sX1 - sX1.mean()) / sX1.std()
sX2 = (0.5 * X2 + 0.25 * X1 + 0.25 * np.random.normal(0, 1, SIMS))
sX2 = (sX2 - sX2.mean()) / sX2.std()
# Simulate Chi-Square for tail-dependence
nu1 = 2
nu2 = 4
U = np.random.rand(SIMS)
Chi1 = chi2.ppf(U, df=nu1)
Chi2 = chi2.ppf(U, df=nu2)
sX1 /= np.sqrt(Chi1 / nu1)
sX2 /= np.sqrt(Chi2 / nu2)
# Use t CDF to convert sX synthetic variables to uniforms U
U1 = t.cdf(sX1, df=nu1)
U2 = t.cdf(sX2, df=nu2)
# Use inverse transforms to create dependent samples of Y1 and Y2
Y1 = gamma.ppf(U1, 2)
Y2 = gamma.ppf(U2, 3)
# Plot a basic scatter to show dependence has been applied and calculate pearson coefficient
plt.scatter(Y1, Y2)
plt.xlabel('Y1')
plt.ylabel('Y2')
plt.show()
correl = np.corrcoef(Y1, Y2)
print("Estimated Pearson Correlation Coefficient:", correl[0,1])
Estimated Pearson Correlation Coefficient: 0.7703228652641819
There is a small practical issue relating to multivariate student-t distributions: namely that we lose the ability to assume independence. This is a direct result of allowing for tail dependence. In many situations this is not an issue; however, within this framework we have models covering very disparate processes, some of which may genuinely exhibit independence. To illustrate this issue we will re-run the existing model with zero driver weights (an "attempt to model independence"):
Estimated Pearson Correlation Coefficient: 0.08220534833363176
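The mechanism behind this residual dependence can be isolated in a small sketch: with zero driver weights the underlying normals are independent, but dividing both by the same chi-square still couples their tails (joint exceedances occur far more often than independence would imply):

```python
import numpy as np
from scipy.stats import chi2, t

rng = np.random.default_rng(5)
SIMS, nu = 500_000, 3
Z1 = rng.normal(0, 1, SIMS)                 # zero driver weight: independent normals
Z2 = rng.normal(0, 1, SIMS)
Chi = chi2.rvs(nu, size=SIMS, random_state=rng)
T1 = Z1 / np.sqrt(Chi / nu)                 # both divided by the SAME chi-square
T2 = Z2 / np.sqrt(Chi / nu)

q = t.ppf(0.95, nu)                         # each marginal tail event has 5% probability
joint = np.mean((T1 > q) & (T2 > q))
print(joint)   # well above the 0.05 * 0.05 = 0.0025 implied by independence
```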
As we can see there is a dependence between Y1 and Y2, induced through the shared chi-square variates. We can overcome this issue by "copying" the driver process: the common uniform distribution is then replaced by a number of correlated uniform distributions, which allows for genuine independence. An implementation of this can be seen in the code sample below:
# Simple driver method example
# We model a central driver Z
# 2 additional drivers X1 and X2 are calculated off these
# We want to model 2 risks: Y1 and Y2 which follow a gamma distribution
# Synthetic normal variates sX1 and sX2 are used to apply dependence
# Tail dependence is added through Chi1 and Chi2 with varying degrees
# Chi1 and Chi2 are driven by X1tail and X2tail which are copies of X1 and X2 drivers
import numpy as np
from scipy.stats import gamma, norm, chi2, t
import matplotlib.pyplot as plt
%matplotlib inline
# Set number of simulations and random seed
SIMS = 1000
SEED = 123
np.random.seed(SEED)
# Simulate driver variables
Z = np.random.normal(0, 1, SIMS)
# Simulate copy of driver for tail process
Ztail = np.random.normal(0, 1, SIMS)
# Simulate additional driver variables X1, X2 and standardise
X1 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X2 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X1tail = (0.5 * Ztail + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X2tail = (0.5 * Ztail + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
# Simulate Synthetic Variables sX and standardize
sX1 = (0.5 * X1 + 0.25 * X2 + 0.25 * np.random.normal(0, 1, SIMS))
sX1 = (sX1 - sX1.mean()) / sX1.std()
sX2 = (0.5 * X2 + 0.25 * X1 + 0.25 * np.random.normal(0, 1, SIMS))
sX2 = (sX2 - sX2.mean()) / sX2.std()
# Simulate Synthetic Variables for tail process
sX1tail = (0.5 * X1tail + 0.25 * X2tail + 0.25 * np.random.normal(0, 1, SIMS))
sX1tail = (sX1tail - sX1tail.mean()) / sX1tail.std()
sX2tail = (0.5 * X2tail + 0.25 * X1tail + 0.25 * np.random.normal(0, 1, SIMS))
sX2tail = (sX2tail - sX2tail.mean()) / sX2tail.std()
# Simulate Chi-Square for tail-dependence
nu1 = 2
nu2 = 4
Chi1 = chi2.ppf(norm.cdf(sX1tail), df=nu1)
Chi2 = chi2.ppf(norm.cdf(sX2tail), df=nu2)
sX1 /= np.sqrt(Chi1 / nu1)
sX2 /= np.sqrt(Chi2 / nu2)
# Use t CDF to convert sX synthetic variables to uniforms U
U1 = t.cdf(sX1, df=nu1)
U2 = t.cdf(sX2, df=nu2)
# Use inverse transforms to create dependent samples of Y1 and Y2
Y1 = gamma.ppf(U1, 2)
Y2 = gamma.ppf(U2, 3)
# Plot a basic scatter to show dependence has been applied and calculate pearson coefficient
plt.scatter(Y1, Y2)
plt.xlabel('Y1')
plt.ylabel('Y2')
plt.show()
correl = np.corrcoef(Y1, Y2)
print("Estimated Pearson Correlation Coefficient:", correl[0,1])
Estimated Pearson Correlation Coefficient: 0.7406745557389065
To show this allows full independence we repeat the zero-weight example:
Estimated Pearson Correlation Coefficient: -0.01456173215652803
We can see that this is a much better scatter plot if we are looking for independence!
## Non-Centrality
We now extend this model yet further. So far we have allowed for tail dependence but it treats both tails equally. In some instances this can be problematic: for example, if we rely on output from the framework to do any kind of risk-reward comparison, the upside and downside behaviour are both important. While it is easy to think of structural changes leading to a downside tail dependence, an upside tail dependence is typically harder to justify. We can allow for this with a simple change to the model, namely: $$T_{\nu, \mu} \sim \frac{Z + \mu} {\sqrt{ \frac{F^{-1}_{\chi^2_{\nu}}(U)} {\nu}}}$$ The addition of the $\mu$ parameter means that $T_{\nu, \mu}$ follows a non-central student-t distribution with $\nu$ degrees of freedom and non-centrality $\mu$. Details of this distribution can be found on Wikipedia. By selecting large positive values of $\mu$ we can create tail dependence in the higher percentiles, large negative values can create tail dependence in the lower percentiles, and a zero value leads to a symmetrical dependency. Adjusting the code further we get:
# Simple driver method example
# We model a central driver Z
# 2 additional drivers X1 and X2 are calculated off these
# We want to model 2 risks: Y1 and Y2 which follow a gamma distribution
# Synthetic normal variates sX1 and sX2 are used to apply dependence
# Tail dependence is added through Chi1 and Chi2 with varying degrees
# Chi1 and Chi2 are driven by X1tail and X2tail which are copies of X1 and X2 drivers
import numpy as np
from scipy.stats import gamma, norm, chi2, nct
import matplotlib.pyplot as plt
%matplotlib inline
# Set number of simulations and random seed
SIMS = 1000
SEED = 123
np.random.seed(SEED)
# Simulate driver variables
Z = np.random.normal(0, 1, SIMS)
# Simulate copy of driver for tail process
Ztail = np.random.normal(0, 1, SIMS)
# Simulate additional driver variables X1, X2 and standardise
X1 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X2 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X1tail = (0.5 * Ztail + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X2tail = (0.5 * Ztail + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
# Simulate Synthetic Variables sX and standardize
sX1 = (0.5 * X1 + 0.25 * X2 + 0.25 * np.random.normal(0, 1, SIMS))
sX1 = (sX1 - sX1.mean()) / sX1.std()
sX2 = (0.5 * X2 + 0.25 * X1 + 0.25 * np.random.normal(0, 1, SIMS))
sX2 = (sX2 - sX2.mean()) / sX2.std()
# Simulate Synthetic Variables for tail process
sX1tail = (0.5 * X1tail + 0.25 * X2tail + 0.25 * np.random.normal(0, 1, SIMS))
sX1tail = (sX1tail - sX1tail.mean()) / sX1tail.std()
sX2tail = (0.5 * X2tail + 0.25 * X1tail + 0.25 * np.random.normal(0, 1, SIMS))
sX2tail = (sX2tail - sX2tail.mean()) / sX2tail.std()
# Simulate Chi-Square for tail-dependence
nu1 = 2
nu2 = 4
Chi1 = chi2.ppf(norm.cdf(sX1tail), df=nu1)
Chi2 = chi2.ppf(norm.cdf(sX2tail), df=nu2)
# Specify the non-centrality values
nc1 = -2
nc2 = -2
# Apply the non-central t construction: (Z + mu) / sqrt(Chi / nu)
sX1 = (sX1 + nc1) / np.sqrt(Chi1 / nu1)
sX2 = (sX2 + nc2) / np.sqrt(Chi2 / nu2)
# Use non-central t CDF to convert sX synthetic variables to uniforms U
U1 = nct.cdf(sX1, nc=nc1, df=nu1)
U2 = nct.cdf(sX2, nc=nc2, df=nu2)
# Use inverse transforms to create dependent samples of Y1 and Y2
Y1 = gamma.ppf(U1, 2)
Y2 = gamma.ppf(U2, 3)
# Plot a basic scatter to show dependence has been applied and calculate pearson coefficient
plt.scatter(Y1, Y2)
plt.xlabel('Y1')
plt.ylabel('Y2')
plt.show()
correl = np.corrcoef(Y1, Y2)
print("Estimated Pearson Correlation Coefficient:", correl[0,1])
Estimated Pearson Correlation Coefficient: 0.7100911602838634
In the code example we have selected a non-centrality of -2 which is a fairly large negative value, we can see the dependency increasing in the lower percentiles (clustering around (0,0) on the plot).
## Temporal Considerations
So far we have essentially considered a "static" model: we have modelled a number of drivers which represent values at a specific time period. For the majority of insurance contracts this is sufficient - we are only interested in losses occurring over the time period the contract is active. However, in some instances the contracts relate to multiple time periods and it does not make sense to consider losses over the entire lifetime. Moreover it is not ideal to model time periods as independent from one another; to take the US economy example, if the US enters recession in 2020 it is (arguably) more likely that the US will also stay in recession in 2021. Clearly the dynamics of this are very complex and constructing a detailed temporal model is very difficult, however for the sake of creating the drivers we do not need to know the exact workings. Instead we are looking for a simple implementation that gives dynamics that are somewhat justifiable.
Fortunately it is relatively easy to add this functionality to the framework we have described so far. Essentially we will adopt a Markovian assumption whereby a driver in time period t+1 is a weighted sum of its value at time t and an idiosyncratic component. Of course this is not a perfect description of the temporal behaviour of every possible driver, but it shouldn't be completely unjustifiable in most instances, and the trajectories shouldn't appear totally alien (e.g. the US economy should not frequently jump from a top-1% year straight to a bottom-1% year).
To illustrate this please see the code example below, for brevity I will change the model code above to a functional definition to avoid repeating blocks of code.
# Creating temporally dependent variables
import numpy as np
from scipy.stats import gamma, norm, chi2, nct
import matplotlib.pyplot as plt
%matplotlib inline
# Set number of simulations and random seed
SIMS = 1000
SEED = 123
np.random.seed(SEED)
# Define function to create correlated normal distributions
def corr_driver():
# Create driver Z
Z = np.random.normal(0, 1, SIMS)
# Create drivers X1, X2
X1 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
X2 = (0.5 * Z + 0.5 * np.random.normal(0, 1, SIMS)) / np.sqrt(0.5**2 + 0.5**2)
return np.array([X1, X2])
# Create drivers variables for time periods t0 and t1
driver_t0 = corr_driver()
driver_t1 = (0.5 * driver_t0 + 0.5 * corr_driver()) / np.sqrt(0.5**2 + 0.5**2)
# Create copy of drivers for tail process time periods t0 and t1
tail_t0 = corr_driver()
tail_t1 = (0.5 * tail_t0 + 0.5 * corr_driver()) / np.sqrt(0.5**2 + 0.5**2)
# Define a standardise function
def standardise(x):
return (x - x.mean()) / x.std()
# Create synthetic variables sX1, sX2 for variables 1 and 2 at times t0 and t1
# Note depending on the model idiosyncratic components may also be dependent
sX1t0 = standardise(0.25*driver_t0[0] + 0.5*driver_t0[1] + 0.25*np.random.normal(0, 1, SIMS))
sX1t1 = standardise(0.25*driver_t1[0] + 0.5*driver_t1[1] + 0.25*np.random.normal(0, 1, SIMS))
sX2t0 = standardise(0.5*driver_t0[0] + 0.25*driver_t0[1] + 0.25*np.random.normal(0, 1, SIMS))
sX2t1 = standardise(0.5*driver_t1[0] + 0.25*driver_t1[1] + 0.25*np.random.normal(0, 1, SIMS))
# Repeat synthetic variable construction for tail process
sX1tailt0 = standardise(0.25*tail_t0[0] + 0.5*tail_t0[1] + 0.25*np.random.normal(0, 1, SIMS))
sX1tailt1 = standardise(0.25*tail_t1[0] + 0.5*tail_t1[1] + 0.25*np.random.normal(0, 1, SIMS))
sX2tailt0 = standardise(0.5*tail_t0[0] + 0.25*tail_t0[1] + 0.25*np.random.normal(0, 1, SIMS))
sX2tailt1 = standardise(0.5*tail_t1[0] + 0.25*tail_t1[1] + 0.25*np.random.normal(0, 1, SIMS))
# Simulate Chi-Square for tail-dependence t0 and t1
nu1 = 2
nu2 = 4
Chi1t0 = chi2.ppf(norm.cdf(sX1tailt0),df=nu1)
Chi2t0 = chi2.ppf(norm.cdf(sX2tailt0), df=nu2)
sX1t0 /= np.sqrt(Chi1t0 / nu1)
sX2t0 /= np.sqrt(Chi2t0 / nu2)
Chi1t1 = chi2.ppf(norm.cdf(sX1tailt1),df=nu1)
Chi2t1 = chi2.ppf(norm.cdf(sX2tailt1), df=nu2)
sX1t1 /= np.sqrt(Chi1t1 / nu1)
sX2t1 /= np.sqrt(Chi2t1 / nu2)
# Specify the non-centrality values
nc1 = 2
nc2 = 2
# Use non-central t CDF to convert sX synthetic variables to uniforms U for t0 and t1
U1t0 = nct.cdf(sX1t0+nc1, nc=nc1, df=nu1)
U2t0 = nct.cdf(sX2t0+nc2, nc=nc2, df=nu2)
U1t1 = nct.cdf(sX1t1+nc1, nc=nc1, df=nu1)
U2t1 = nct.cdf(sX2t1+nc2, nc=nc2, df=nu2)
# Use inverse transforms to create dependent samples of Y1 and Y2 at t0 and t1
Y1t0 = gamma.ppf(U1t0, 2)
Y2t0 = gamma.ppf(U2t0, 3)
Y1t1 = gamma.ppf(U1t1, 2)
Y2t1 = gamma.ppf(U2t1, 3)
# Plot a basic scatter to show dependence has been applied and calculate pearson coefficient
plt.scatter(Y1t0, Y1t1)
plt.xlabel('Y1(t=t0)')
plt.ylabel('Y1(t=t1)')
plt.show()
correl = np.corrcoef(Y1t0, Y1t1)
print("Estimated Pearson Auto-Correlation Coefficient:", correl[0,1])
Estimated Pearson Auto-Correlation Coefficient: 0.37600307233845764
In this code example we created two variables, Y1 and Y2, each taking a value from a Gamma distribution at times t0 and t1. Y1 and Y2 are dependent not only on each other but also across time.
As with any temporal model, the time period chosen is very important; for insurance contracts, yearly time periods typically make sense. However, in one particular model I developed there was a need for monthly simulations. Rather than re-parameterising the entire central driver structure to work on a monthly basis (creating lots of extra data that would not be used by the vast majority of the models), I applied a "Brownian Bridge" type argument to interpolate driver simulations for each month.
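To give a flavour of this interpolation, the sketch below is an illustrative reconstruction (not the production code; all names are hypothetical): given a driver's simulated values at the start and end of a year, interior monthly values are drawn from a Brownian bridge pinned at both endpoints.

```python
# Illustrative sketch: interpolating monthly driver values between two
# yearly simulations with a Brownian-bridge construction.
import numpy as np

np.random.seed(123)
SIMS = 1000

z0 = np.random.normal(0, 1, SIMS)            # driver at start of year
z1 = 0.5 * z0 + 0.5 * np.random.normal(0, 1, SIMS)
z1 /= np.sqrt(0.5**2 + 0.5**2)               # driver at end of year

months = np.arange(1, 12) / 12.0             # interior monthly fractions

# Brownian bridge pinned at z0 (s=0) and z1 (s=1):
# conditional mean (1-s)*z0 + s*z1, conditional variance s*(1-s)
monthly = np.empty((len(months), SIMS))
for i, s in enumerate(months):
    mean = (1 - s) * z0 + s * z1
    noise = np.sqrt(s * (1 - s)) * np.random.normal(0, 1, SIMS)
    monthly[i] = mean + noise
```

Because the bridge variance s(1-s) vanishes at the endpoints, the monthly values agree with the yearly simulations at the boundaries and carry the most extra uncertainty mid-year.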
## Notes on Implementation
In this blog post I have not included the code exactly as it is implemented in production since this is my employer's IP. The implementation presented here is not very efficient and trying to run large portfolios in this way will be troublesome. In the full production implementation I used the following:
1. Strict memory management, as this is a memory-hungry program
2. Certain aspects of the implementation are slow in pure Python (and even NumPy); Cython and Numba are used for performance
3. The SciPy stats module is convenient but restrictive; it is better to either use the Cython bindings for SciPy special functions or implement the functions from scratch. By implementing extended forms of some of the distribution functions one is also able to allow for non-integer degrees of freedom, which is useful
4. The model naturally lends itself to arrays (vectors, matrices, tensors); however, these tend to be sparse in nature, so it is often better to construct "sparse multiply" type operations rather than use built-in functions like np.dot
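As a rough illustration of the last point, the sketch below (hypothetical shapes and sparsity, not the production implementation) applies a sparse loading matrix via scipy.sparse and confirms it matches the dense equivalent:

```python
# Illustrative sketch: a sparse "loading" matrix applied with scipy.sparse
# instead of a dense np.dot. Shapes and sparsity level are hypothetical.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n_drivers, n_vars, sims = 200, 1000, 500

# Dense loading matrix where ~99% of entries are zero
dense = rng.normal(size=(n_vars, n_drivers))
dense[rng.random((n_vars, n_drivers)) < 0.99] = 0.0

drivers = rng.normal(size=(n_drivers, sims))

# CSR storage keeps only the non-zero loadings
loadings = sparse.csr_matrix(dense)
out_sparse = loadings @ drivers
out_dense = dense @ drivers
```

The sparse product touches only the stored non-zeros, so for very sparse driver structures it is both faster and far lighter on memory than the dense multiply.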
## Conclusion
This blog post represents the current iteration of the aggregation framework I have developed. It is considered a "version 0.1" implementation and is expected to develop as we use it more extensively and uncover further properties or issues. For example, it is clear that, regardless of the parameters selected, the joint behaviour will always be (approximately) elliptical; as presented it is not possible to implement non-linearities (e.g. the price of some asset only attaining a maximum/minimum value dependent on some other driver indicator). It is not difficult to implement ideas like this when the need arises; the difficulty is more around how to implement the idea in a seamless way.
There are a couple of additional benefits to this framework which we have not yet mentioned; I will outline these briefly:
1. It is possible to parallelise this process quite effectively as there are minimal bottlenecks/race conditions
2. The driver variables can be generated centrally and models can link to this central variable repository. From a work-flow perspective this means that individual underwriting teams can run models independently (quickly), leaving the risk teams to collate and analyse the results. (Sometimes called a federated workflow.)
3. The federated workflow means no specialist hardware is required, even very large portfolios can be run on standard desktops/laptops.
The current production version of this framework has around 5-10,000 driver variables ("X1, X2") over 5 different hierarchical layers. These influence dependence between around 500,000 individual modelled variables ("Y1, Y2") with 20 time periods ("t0, t1"). The quality of risk management analysis and reporting has increased dramatically as a result.
There are still some things left to do in relation to this framework and the work is on-going. These include:
1. Work relating to calibration and how to do this as efficiently as possible
2. Further work on increasing code efficiency
3. Further mathematical study of the framework's parameters
4. Study of the implied network behaviour: since we're placing risks on a network (driver structure) can we gain additional insight by considering contagion, critical nodes, etc.?
5. Further improvements to the workflow, how the model data is stored/collated etc.
# Decision tree learning
Decision tree learning, used in statistics, data mining and machine learning, uses a decision tree as a predictive model which maps observations about an item to conclusions about the item's target value. More descriptive names for such tree models are classification trees or regression trees. In these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels.
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data but not decisions; rather the resulting classification tree can be an input for decision making. This page deals with decision trees in data mining.
## General
A tree showing survival of passengers on the Titanic ("sibsp" is the number of spouses or siblings aboard). The figures under the leaves show the probability of survival and the percentage of observations in the leaf.
Decision tree learning is a method commonly used in data mining.[1] The goal is to create a model that predicts the value of a target variable based on several input variables. An example is shown on the right. Each interior node corresponds to one of the input variables; there are edges to children for each of the possible values of that input variable. Each leaf represents a value of the target variable given the values of the input variables represented by the path from the root to the leaf.
A tree can be "learned" by splitting the source set into subsets based on an attribute value test. This process is repeated on each derived subset in a recursive manner called recursive partitioning. The recursion is completed when the subset at a node has all the same value of the target variable, or when splitting no longer adds value to the predictions. This process of top-down induction of decision trees (TDIDT) [2] is an example of a greedy algorithm, and it is by far the most common strategy for learning decision trees from data, but it is not the only strategy. In fact, some approaches have been developed recently allowing tree induction to be performed in a bottom-up fashion.[3]
In data mining, decision trees can be described also as the combination of mathematical and computational techniques to aid the description, categorisation and generalisation of a given set of data.
Data comes in records of the form:
$(\textbf{x},Y) = (x_1, x_2, x_3, ..., x_k, Y)$
The dependent variable, Y, is the target variable that we are trying to understand, classify or generalize. The vector x is composed of the input variables, x1, x2, x3 etc., that are used for that task.
## Types
Decision trees used in data mining are of two main types:
• Classification tree analysis is when the predicted outcome is the class to which the data belongs.
• Regression tree analysis is when the predicted outcome can be considered a real number (e.g. the price of a house, or a patient’s length of stay in a hospital).
The term Classification And Regression Tree (CART) analysis is an umbrella term used to refer to both of the above procedures, first introduced by Breiman et al.[4] Trees used for regression and trees used for classification have some similarities - but also some differences, such as the procedure used to determine where to split.[4]
Some techniques, often called ensemble methods, construct more than one decision tree:
• Bagging decision trees, an early ensemble method, builds multiple decision trees by repeatedly resampling training data with replacement, and voting the trees for a consensus prediction.[5]
• A Random Forest classifier uses a number of decision trees, in order to improve the classification rate.
• Boosted Trees can be used for regression-type and classification-type problems.[6][7]
• Rotation forest - in which every decision tree is trained by first applying principal component analysis (PCA) on a random subset of the input features.[8]
Decision tree induction is the learning of a decision tree from class-labeled training tuples. A decision tree is a flowchart-like structure, where each internal (non-leaf) node denotes a test on an attribute, each branch represents an outcome of the test, and each leaf (or terminal) node holds a class label. The topmost node in a tree is the root node.
There are many specific decision-tree algorithms. Notable ones include ID3, C4.5, CART and CHAID.
ID3 and CART were invented independently of one another at around the same time (between 1970 and 1980), yet follow a similar approach for learning a decision tree from training tuples.
## Formulae
The algorithms that are used for constructing decision trees usually work top-down by choosing a variable at each step that is the next best variable to use in splitting the set of items.[10] "Best" is defined by how well the variable splits the set into homogeneous subsets that have the same value of the target variable. Different algorithms use different formulae for measuring "best". This section presents a few of the most common formulae. These formulae are applied to each candidate subset, and the resulting values are combined (e.g., averaged) to provide a measure of the quality of the split.
### Gini impurity
Used by the CART (classification and regression tree) algorithm, Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it were randomly labeled according to the distribution of labels in the subset. Gini impurity can be computed by summing the probability of each item being chosen times the probability of a mistake in categorizing that item. It reaches its minimum (zero) when all cases in the node fall into a single target category.
To compute Gini impurity for a set of items, suppose i takes on values in {1, 2, ..., m}, and let fi = the fraction of items labeled with value i in the set.
$I_{G}(f) = \sum_{i=1}^{m} f_i (1-f_i) = \sum_{i=1}^{m} (f_i - {f_i}^2) = \sum_{i=1}^m f_i - \sum_{i=1}^{m} {f_i}^2 = 1 - \sum^{m}_{i=1} {f_i}^{2}$
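As an illustrative sketch (not tied to any particular CART implementation), this formula can be computed directly from a list of class labels:

```python
# Gini impurity: 1 - sum of squared class fractions
from collections import Counter

def gini_impurity(labels):
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# A pure node has impurity 0; a 50/50 binary split has impurity 0.5
print(gini_impurity(["a", "a", "a", "a"]))  # 0.0
print(gini_impurity(["a", "a", "b", "b"]))  # 0.5
```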
### Information gain
Used by the ID3, C4.5 and C5.0 tree generation algorithms. Information gain is based on the concept of entropy used in information theory.
$I_{E}(f) = - \sum^{m}_{i=1} f_i \log^{}_2 f_i$
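A minimal sketch of entropy and the information gain of a candidate split (illustrative only, not from any specific ID3/C4.5 implementation):

```python
# Entropy of a label set, and the information gain of splitting
# a parent node into the given subsets.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, subsets):
    n = len(parent)
    weighted = sum(len(s) / n * entropy(s) for s in subsets)
    return entropy(parent) - weighted

parent = ["yes"] * 4 + ["no"] * 4      # entropy 1.0
split = [["yes"] * 4, ["no"] * 4]      # perfectly pure children
print(information_gain(parent, split))  # 1.0
```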
## Advantages
Amongst other data mining methods, decision trees have various advantages:
• Simple to understand and interpret. People are able to understand decision tree models after a brief explanation.
• Requires little data preparation. Other techniques often require data normalisation; dummy variables may need to be created and blank values removed.
• Able to handle both numerical and categorical data. Other techniques are usually specialised in analysing datasets that have only one type of variable. For example, relation rules can be used only with nominal variables, while neural networks can be used only with numerical variables.
• Uses a white box model. If a given situation is observable in a model, the explanation for the condition is easily expressed in boolean logic. An example of a black box model is an artificial neural network, since the explanation for the results is difficult to understand.
• Possible to validate a model using statistical tests. That makes it possible to account for the reliability of the model.
• Robust. Performs well even if its assumptions are somewhat violated by the true model from which the data were generated.
• Performs well with large data in a short time. Large amounts of data can be analysed using standard computing resources.
## Limitations
• The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts.[11][12] Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree.
• Decision-tree learners can create over-complex trees that do not generalise the data well. This is called overfitting.[13] Mechanisms such as pruning are necessary to avoid this problem.
• There are concepts that are hard to learn because decision trees do not express them easily, such as XOR, parity or multiplexer problems. In such cases, the decision tree becomes prohibitively large. Approaches to solve the problem involve either changing the representation of the problem domain (known as propositionalisation)[14] or using learning algorithms based on more expressive representations (such as statistical relational learning or inductive logic programming).
## Extensions
### Decision graphs
In a decision tree, all paths from the root node to the leaf node proceed by way of conjunction, or AND. In a decision graph, it is possible to use disjunctions (ORs) to join two or more paths together using Minimum message length (MML).[16] Decision graphs have been further extended to allow for previously unstated new attributes to be learnt dynamically and used at different places within the graph.[17] The more general coding scheme results in better predictive accuracy and log-loss probabilistic scoring.[citation needed] In general, decision graphs infer models with fewer leaves than decision trees.
### Search through Evolutionary Algorithms
Evolutionary algorithms have been used to avoid local optimal decisions and search the decision tree space with little a priori bias.[18][19]
## References
1. ^ Rokach, Lior; Maimon, O. (2008). Data mining with decision trees: theory and applications. World Scientific Pub Co Inc. ISBN 978-9812771711.
2. ^ Quinlan, J. R., (1986). Induction of Decision Trees. Machine Learning 1: 81-106, Kluwer Academic Publishers
3. ^ Barros R. C., Cerri R., Jaskowiak P. A., Carvalho, A. C. P. L. F., A bottom-up oblique decision tree induction algorithm. Proceedings of the 11th International Conference on Intelligent Systems Design and Applications (ISDA 2011).
4. ^ a b Breiman, Leo; Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and regression trees. Monterey, CA: Wadsworth & Brooks/Cole Advanced Books & Software. ISBN 978-0-412-04841-8.
5. ^ Breiman, L. (1996). Bagging Predictors. "Machine Learning, 24": pp. 123-140.
6. ^ Friedman, J. H. (1999). Stochastic gradient boosting. Stanford University.
7. ^ Hastie, T., Tibshirani, R., Friedman, J. H. (2001). The elements of statistical learning : Data mining, inference, and prediction. New York: Springer Verlag.
8. ^ Rodriguez, J.J. and Kuncheva, L.I. and Alonso, C.J. (2006), Rotation forest: A new classifier ensemble method, IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(10):1619-1630.
9. ^ Kass, G. V. (1980). "An exploratory technique for investigating large quantities of categorical data". Applied Statistics 29 (2): 119–127. doi:10.2307/2986296. JSTOR 2986296.
10. ^ Rokach, L.; Maimon, O. (2005). "Top-down induction of decision trees classifiers-a survey". IEEE Transactions on Systems, Man, and Cybernetics, Part C 35 (4): 476–487. doi:10.1109/TSMCC.2004.843247.
11. ^ Hyafil, Laurent; Rivest, RL (1976). "Constructing Optimal Binary Decision Trees is NP-complete". Information Processing Letters 5 (1): 15–17. doi:10.1016/0020-0190(76)90095-8.
12. ^ Murthy S. (1998). Automatic construction of decision trees from data: A multidisciplinary survey. Data Mining and Knowledge Discovery
13. ^ Principles of Data Mining. 2007. doi:10.1007/978-1-84628-766-4. ISBN 978-1-84628-765-7.
14. ^ Horváth, Tamás; Yamamoto, Akihiro, eds. (2003). Inductive Logic Programming. Lecture Notes in Computer Science 2835. doi:10.1007/b13700. ISBN 978-3-540-20144-1.
15. ^ Deng,H.; Runger, G.; Tuv, E. (2011). "Bias of importance measures for multi-valued attributes and solutions". Proceedings of the 21st International Conference on Artificial Neural Networks (ICANN). pp. 293–300.
16. ^ http://citeseer.ist.psu.edu/oliver93decision.html
17. ^ Tan & Dowe (2003)
18. ^ Papagelis A., Kalles D.(2001). Breeding Decision Trees Using Evolutionary Techniques, Proceedings of the Eighteenth International Conference on Machine Learning, p.393-400, June 28-July 01, 2001
19. ^ Barros, Rodrigo C., Basgalupp, M. P., Carvalho, A. C. P. L. F., Freitas, Alex A. (2011). A Survey of Evolutionary Algorithms for Decision-Tree Induction. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, vol. 42, n. 3, p. 291-312, May 2012.
TeXworks 0.4.5 released
News - LaTeX Editors
Written by Stefan Kottwitz
Sunday, 14 April 2013 15:46
The new stable TeXworks release 0.4.5 has been published today. TeXworks is a free and open source LaTeX editor running on Linux, Mac OS X and Windows. The new release contains fixes and small enhancements over version 0.4.4, so updating is recommended.
Some of the small new features are:
• a close button for the console output panel,
• an "Open PDF with TeX" option,
• encoding support for "Mac Central European Roman",
• compatibility with OS color schemes,
• improved handling of long messages and file paths in the log parser.
The experimental 0.5 branch already contains those changes. The next major release will be 0.6, which is planned to include a new PDF previewer.
If you are considering updating: in case your TeXworks version is supplied by your TeX distribution, it may be better to wait until the distribution updates its TeXworks version, so as not to lose your configuration, such as font preferences.
For further information, visit:
# Double factorial
The double factorial (sometimes called the semifactorial) of a nonnegative integer n is defined as the product of positive integers having the same parity as n:
${\displaystyle n!!:=\prod _{i=1}^{n}[i\equiv n{\pmod {2}}]\,i,\quad n\geq 0,}$
where [·] is the Iverson bracket, and where for n = 0 we get the empty product, i.e. 1.
Alternatively, we have
${\displaystyle 0!!:=1,}$
${\displaystyle 1!!:=1,}$
${\displaystyle n!!:=\prod _{i=0}^{\left\lfloor {\tfrac {n}{2}}\right\rfloor -1}(n-2i),\quad n\geq 2.}$
The double factorial of nonnegative integers is defined recursively as
${\displaystyle 0!!:=1,}$
${\displaystyle 1!!:=1,}$
${\displaystyle n!!:=n\cdot (n-2)!!,\quad n\geq 2.}$
A006882 Double factorials n!!: a(0) = a(1) = 1; a(n) = n · a(n − 2), n ≥ 2.
{1, 1, 2, 3, 8, 15, 48, 105, 384, 945, 3840, 10395, 46080, 135135, 645120, 2027025, 10321920, 34459425, 185794560, 654729075, 3715891200, 13749310575, 81749606400, 316234143225, ...}
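The recursive definition above translates directly into code; a short Python sketch:

```python
# Double factorial via the recursion n!! = n * (n-2)!!, with 0!! = 1!! = 1
def double_factorial(n):
    if n < 2:
        return 1
    return n * double_factorial(n - 2)

print([double_factorial(n) for n in range(10)])
# [1, 1, 2, 3, 8, 15, 48, 105, 384, 945]
```

The output matches the first terms of A006882.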
## Generating functions for n!!
The generating function for n!! is
${\displaystyle G_{\{n!!\}}(x)\equiv \sum _{n=0}^{\infty }n!!\,x^{n}=\ ?.}$
The exponential generating function for n!! is
${\displaystyle E_{\{n!!\}}(x)\equiv \sum _{n=0}^{\infty }n!!\,{\frac {x^{n}}{n!}}=1+x\,e^{\frac {x^{2}}{2}}~{\Big (}1+{\sqrt {\tfrac {\pi }{2}}}~{\rm {erf}}\left({\tfrac {x}{\sqrt {2}}}\right){\Big )},}$
where
${\displaystyle {\rm {erf}}(z):={\frac {2}{\sqrt {\pi }}}\int _{0}^{z}e^{-t^{2}}dt={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}\,z^{2n+1}}{n!\,(2n+1)}}}$
is the error function (erf).[1]
A [generalized] continued fraction generating function for n!! is
${\displaystyle C_{\{n!!\}}(x)={\cfrac {?}{?-{\cfrac {?}{?-{\cfrac {?}{?-{\cfrac {?}{?-{\cfrac {?}{?-{\cfrac {?}{?-{\cfrac {?}{?-{\cfrac {?}{?-{\cfrac {?}{\ddots }}}}}}}}}}}}}}}}}},\quad n\geq 0.}$
## Sum of reciprocals of double factorial of nonnegative integers
${\displaystyle \sum _{n=0}^{\infty }{\frac {1}{n!!}}=\sum _{n=0}^{\infty }{\Bigg \{}{\frac {1}{(2n)!!}}+{\frac {1}{(2n+1)!!}}{\Bigg \}}=\sum _{n=0}^{\infty }{\frac {1}{(2n)!!}}~+~\sum _{n=0}^{\infty }{\frac {1}{(2n+1)!!}}={\sqrt {e}}~+~{\sqrt {e}}\,\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!!\,(2n+1)}}={\sqrt {e}}~{\Bigg \{}1+\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!!\,(2n+1)}}{\Bigg \}}}$
## Double factorial of even nonnegative integers
The double factorial of even nonnegative integers is given by
${\displaystyle (2n)!!=2^{n}n!,\quad n\geq 0.}$
A000165 Double factorial of even numbers: (2n)!! = 2^n · n!, n ≥ 0.
{1, 2, 8, 48, 384, 3840, 46080, 645120, 10321920, 185794560, 3715891200, 81749606400, 1961990553600, 51011754393600, 1428329123020800, 42849873690624000, 1371195958099968000, ...}
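As a quick numeric check of the identity (2n)!! = 2^n · n! (an illustrative sketch):

```python
# Verify (2n)!! = 2^n * n! against the recursive definition
import math

def double_factorial(n):
    return 1 if n < 2 else n * double_factorial(n - 2)

for n in range(10):
    assert double_factorial(2 * n) == 2**n * math.factorial(n)
```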
### Generating functions for (2n)!!
The generating function for (2n)!! is
${\displaystyle G_{\{(2n)!!\}}(x)\equiv \sum _{n=0}^{\infty }(2n)!!\,x^{n}=\ ?.}$
The exponential generating function for (2n)!! is
${\displaystyle E_{\{(2n)!!\}}(x)\equiv \sum _{n=0}^{\infty }(2n)!!\,{\frac {x^{n}}{n!}}=E_{\{2^{n}n!\}}(x)={\frac {1}{1-2x}}=\sum _{n=0}^{\infty }(2x)^{n}=\sum _{n=0}^{\infty }2^{n}n!{\frac {x^{n}}{n!}}=\sum _{n=0}^{\infty }(2n)!!{\frac {x^{n}}{n!}}.}$
Note the following Maclaurin series expansion
${\displaystyle {\sqrt {1+\sin(x)}}=\sum _{n=0}^{\infty }{\frac {(-1)^{\lfloor {\frac {n}{2}}\rfloor }}{(2n)!!}}x^{n}=1+{\frac {1}{2}}x-{\frac {1}{8}}x^{2}-{\frac {1}{48}}x^{3}+{\frac {1}{384}}x^{4}+{\frac {1}{3840}}x^{5}-{\frac {1}{46080}}x^{6}-{\frac {1}{645120}}x^{7}+\ldots }$
A [generalized] continued fraction generating function for (2n)!! is
${\displaystyle C_{\{(2n)!!\}}(x)={\cfrac {1}{1-{\cfrac {2x}{1-{\cfrac {2x}{1-{\cfrac {4x}{1-{\cfrac {4x}{1-{\cfrac {6x}{1-{\cfrac {6x}{1-{\cfrac {8x}{1-{\cfrac {8x}{\ddots }}}}}}}}}}}}}}}}}},\quad n\geq 0.}$
### Sum of reciprocals of double factorial of even nonnegative integers
The sum of reciprocals of double factorial of even nonnegative integers equals √e, since
${\displaystyle \sum _{n=0}^{\infty }{\frac {1}{(2n)!!}}=\sum _{n=0}^{\infty }{\frac {1}{2^{n}~n!}}={\Bigg \{}\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}{\Bigg \}}_{x={\frac {1}{2}}}={\Big \{}e^{x}{\Big \}}_{x={\frac {1}{2}}}={\sqrt {e}}.}$
## Double factorial of odd nonnegative integers
The double factorial of odd nonnegative integers is given by
${\displaystyle (2n+1)!!=(2n+1)(2n-1)!!=(2n+1){\frac {(2n)!}{(2n)!!}}={\frac {(2n+1)!}{(2n)!!}}={\frac {(2n+1)!}{2^{n}~n!}},\quad n\geq 0.}$
A001147 Double factorial of odd numbers: (2n − 1)!! = 1 · 3 · 5 · ... · (2n − 1), n ≥ 1.
{1, 3, 15, 105, 945, 10395, 135135, 2027025, 34459425, 654729075, 13749310575, 316234143225, 7905853580625, 213458046676875, 6190283353629375, 191898783962510625, ...}
### Generating functions for (2n + 1)!!
The generating function for (2n + 1)!! is
${\displaystyle G_{\{(2n+1)!!\}}(x)\equiv \sum _{n=0}^{\infty }(2n+1)!!\,x^{n}=\ ?.}$
The exponential generating function for (2n + 1)!! is
${\displaystyle E_{\{(2n+1)!!\}}(x)\equiv \sum _{n=0}^{\infty }(2n+1)!!\,{\frac {x^{n}}{n!}}={\frac {1}{(1-2x)^{3/2}}}.}$
A [generalized] continued fraction generating function for (2n + 1)!! is
${\displaystyle C_{\{(2n+1)!!\}}(x)={\cfrac {1}{1-{\cfrac {3x}{1-{\cfrac {2x}{1-{\cfrac {5x}{1-{\cfrac {4x}{1-{\cfrac {7x}{1-{\cfrac {6x}{1-{\cfrac {9x}{1-{\cfrac {8x}{\ddots }}}}}}}}}}}}}}}}}},\quad n\geq 0.}$
### Sum of reciprocals of double factorial of odd nonnegative integers
${\displaystyle \sum _{n=0}^{\infty }{\frac {1}{(2n+1)!!}}=\sum _{n=0}^{\infty }{\frac {2^{n}~n!}{(2n+1)!}}=\sum _{n=0}^{\infty }{\frac {(2n)!!}{(2n)!~(2n+1)}}={\sqrt {\frac {\pi e}{2}}}\,{\rm {~erf}}({\tfrac {1}{\sqrt {2}}})={\sqrt {e}}\,\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2^{n}\,n!\,(2n+1)}}={\sqrt {e}}\,\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!!\,(2n+1)}}}$
This is the power series part of √(πe/2) obtained from the remarkable formula of Ramanujan evaluated at x = 1. The decimal expansion (which is pretty close to √2 = 1.414213562373095..., see A002193) is
${\displaystyle \sum _{n=0}^{\infty }{\frac {1}{(2n+1)!!}}=1.410686134642447997690824711419115041323478\ldots .}$
A060196 Decimal expansion of ∑_{k = 0}^∞ 1/(2k + 1)!! = 1 + 1/(1·3) + 1/(1·3·5) + 1/(1·3·5·7) + ...
{1, 4, 1, 0, 6, 8, 6, 1, 3, 4, 6, 4, 2, 4, 4, 7, 9, 9, 7, 6, 9, 0, 8, 2, 4, 7, 1, 1, 4, 1, 9, 1, 1, 5, 0, 4, 1, 3, 2, 3, 4, 7, 8, 6, 2, 5, 6, 2, 5, 1, 9, 2, 1, 9, 7, 7, 2, 4, 6, 3, 9, 4, 6, 8, 1, 6, 4, 7, 8, 1, 7, 9, 8, 4, 9, 0, 3, 9, ...}
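The closed form above can be checked numerically against a direct partial sum (an illustrative sketch):

```python
# Partial sum of 1/(2n+1)!! versus the closed form sqrt(pi*e/2)*erf(1/sqrt(2))
import math

total, term = 0.0, 1.0
for n in range(30):
    if n > 0:
        term /= (2 * n + 1)   # 1/(2n+1)!! from the previous term
    total += term

closed = math.sqrt(math.pi * math.e / 2) * math.erf(1 / math.sqrt(2))
print(round(total, 12), round(closed, 12))
```

Both values agree with the decimal expansion 1.410686134642447997... given above.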
## Double factorial binomial coefficients
The double factorial binomial coefficients are[2]
${\displaystyle \left(\!\!\left({{n} \atop {r}}\right)\!\!\right):={\frac {n!!}{(n-r)!!\,r!!}}.}$
## Multifactorial
The k-multifactorial of a nonnegative integer n is defined as the product of positive integers having the same congruence (mod k) as n:
${\displaystyle n\!\!\underbrace {!\cdots !} _{k{\rm {~times}}}:=\prod _{i=1}^{n}[i\equiv n{\pmod {k}}]\,i,\quad n\geq 0,}$
where [·] is the Iverson bracket, and where for n = 0 we get the empty product, i.e. 1.
Alternatively, we have
${\displaystyle 0\!\!\underbrace {!\cdots !} _{k{\rm {~times}}}:=1,}$
${\displaystyle n\!\!\underbrace {!\cdots !} _{k{\rm {~times}}}:=n,\quad 1\leq n\leq k-1,}$
${\displaystyle n\!\!\underbrace {!\cdots !} _{k{\rm {~times}}}:=\prod _{i=0}^{\left\lfloor {\tfrac {n}{k}}\right\rfloor -1}(n-ki),\quad n\geq k.}$
The multifactorial of nonnegative integers is defined recursively as
${\displaystyle 0\!\!\underbrace {!\cdots !} _{k{\rm {~times}}}:=1,}$
${\displaystyle n\!\!\underbrace {!\cdots !} _{k{\rm {~times}}}:=n,\quad 1\leq n\leq k-1,}$
${\displaystyle n\!\!\underbrace {!\cdots !} _{k{\rm {~times}}}:=n\cdot (n-k)\!\!\underbrace {!\cdots !} _{k{\rm {~times}}},\quad n\geq k.}$
Multifactorials
k
${\displaystyle \textstyle {n\!\!\underbrace {!\cdots !} _{k{\rm {~times}}},\ n\geq 0}}$ A-number
0 {1, 1, ?, ...} (Is it possible to generalize for k = 0?) A??????
1 {1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800, 39916800, 479001600, 6227020800, 87178291200, 1307674368000, 20922789888000, 355687428096000, 6402373705728000, ...} A000142
2 {1, 1, 2, 3, 8, 15, 48, 105, 384, 945, 3840, 10395, 46080, 135135, 645120, 2027025, 10321920, 34459425, 185794560, 654729075, 3715891200, 13749310575, 81749606400, ...} A006882
3 {1, 1, 2, 3, 4, 10, 18, 28, 80, 162, 280, 880, 1944, 3640, 12320, 29160, 58240, 209440, 524880, 1106560, 4188800, 11022480, 24344320, 96342400, 264539520, 608608000, ...} A007661
4 {1, 1, 2, 3, 4, 5, 12, 21, 32, 45, 120, 231, 384, 585, 1680, 3465, 6144, 9945, 30240, 65835, 122880, 208845, 665280, 1514205, 2949120, 5221125, 17297280, 40883535, 82575360, ...} A007662
5 {1, 1, 2, 3, 4, 5, 6, 14, 24, 36, 50, 66, 168, 312, 504, 750, 1056, 2856, 5616, 9576, 15000, 22176, 62832, 129168, 229824, 375000, 576576, 1696464, 3616704, 6664896, ...} A085157
6 {1, 1, 2, 3, 4, 5, 6, 7, 16, 27, 40, 55, 72, 91, 224, 405, 640, 935, 1296, 1729, 4480, 8505, 14080, 21505, 31104, 43225, 116480, 229635, 394240, 623645, 933120, 1339975, ...} A085158
7 {1, 1, 2, 3, 4, 5, 6, 7, 8, 18, 30, 44, 60, 78, 98, 120, 288, 510, 792, 1140, 1560, 2058, 2640, 6624, 12240, 19800, 29640, 42120, 57624, 76560, 198720, 379440, 633600, 978120, ...} A114799
8 {1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 20, 33, 48, 65, 84, 105, 128, 153, 360, 627, 960, 1365, 1848, 2415, 3072, 3825, 9360, 16929, 26880, 39585, 55440, 74865, 98304, 126225, 318240, ...} A114800
9 {1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 22, 36, 52, 70, 90, 112, 136, 162, 190, 440, 756, 1144, 1610, 2160, 2800, 3536, 4374, 5320, 12760, 22680, 35464, 51520, 71280, 95200, ...} A114806
10
11
12
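As a check, the recursive definition translates directly into Python; the k = 3 values reproduce the A007661 row of the table above:

```python
# k-multifactorial via the recursion n!...! = n * (n-k)!...!
def multifactorial(n, k):
    if n < k:
        return max(n, 1)   # covers n = 0 (empty product) and 1 <= n <= k-1
    return n * multifactorial(n - k, k)

print([multifactorial(n, 3) for n in range(12)])
# [1, 1, 2, 3, 4, 10, 18, 28, 80, 162, 280, 880]
```

Setting k = 1 recovers the ordinary factorial (A000142), and k = 2 the double factorial (A006882).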
### Multifactorial binomial coefficients
The multifactorial binomial coefficients are[2]
${\displaystyle \underbrace {\left(\!\cdots \!\left({{n} \atop {r}}\right)\!\cdots \!\right)} _{k{\rm {~times}}}:={\frac {n\!\!\underbrace {!\cdots !} _{k{\rm {~times}}}}{(n-r)\!\!\underbrace {!\cdots !} _{k{\rm {~times}}}\,r\!\!\underbrace {!\cdots !} _{k{\rm {~times}}}}}.}$
# Resistance mutations of Pro197, Asp376 and Trp574 in the acetohydroxyacid synthase (AHAS) affect pigments, growths, and competitiveness of Descurainia sophia L.
Scientific Reports, volume 7, Article number: 16380 (2017)
## Abstract
D. sophia is one of the most problematic weed species infesting winter wheat in China, and has evolved high resistance to tribenuron-methyl. Amino acid substitutions at sites Pro197, Asp376 and Trp574 in acetohydroxyacid synthase (AHAS) are mainly responsible for D. sophia resistance to tribenuron-methyl. In this study, D. sophia plants individually homozygous for a specific AHAS mutation (Pro197Leu, Pro197His, Pro197Ser, Pro197Thr, Asp376Glu or Trp574Leu) were generated. In addition, the effects of resistance mutations on pigments, growth and competitiveness of susceptible (S) and resistant (R) plants of D. sophia were investigated. The results indicated that the R plants carrying Pro197Leu, Pro197His, Asp376Glu or Trp574Leu displayed stronger competitiveness than S plants. The adverse effects on R plants aggravated with the increase of the R plant proportion, which prevented the R plants from dominating the weed community in the absence of herbicide selection. Nevertheless, these resistance mutations had no obvious adverse effects on the pigments (chlorophyll a, chlorophyll b and carotenoid), relative growth rate (RGR), leaf area ratio (LAR) or net assimilation rate (NAR) of R plants.
## Introduction
Weeds are a major threat to food security, and herbicides are still the most effective tools for controlling them. Persistent and intensive use exerts strong selection pressure on weeds. To survive herbicide applications, weeds evolve resistance, conferred by target-site-based (TSR) and non-target-site-based resistance (NTSR) mechanisms1. Weeds usually evolve herbicide resistance at the cost of impaired growth or (and) reproduction, which can considerably slow the evolution of resistance and prevent the fixation of novel resistance alleles2.
Acetohydroxyacid synthase (AHAS; EC 2.2.1.6), also known as acetolactate synthase (ALS), is a key enzyme in the biosynthesis of the branched-chain amino acids (BCAAs) valine (Val), leucine (Leu) and isoleucine (Ile). AHAS is also the target enzyme of commercial herbicides such as sulfonylureas (SU), imidazolinones (IMI), triazolopyrimidines (TP), pyrimidinyl-thiobenzoates (PTB) and sulfonylamino-carbonyl-triazolinones (SCT). AHAS-inhibiting herbicides have been widely used all over the world since their introduction in the early 1980s owing to their high herbicidal activity, wide weed-control spectrum, and low mammalian toxicity. At present, 159 weed species worldwide have evolved resistance to AHAS-inhibiting herbicides owing to intensive use of these herbicides3, and TSR mechanisms are mainly responsible for this resistance. To date, a total of twenty-eight amino acid substitutions (numbers of substitutions in parentheses) conferring resistance to AHAS herbicides have been identified at sites (numbered according to the corresponding sequence of Arabidopsis thaliana) Ala122 (3), Pro197 (13), Ala205 (2), Asp376 (1), Arg377 (1), Trp574 (3), Ser653 (3) and Gly654 (2) in resistant weed biotypes1,3,4,5. Most of these resistance mutations not only alter the catalytic activity and herbicide affinity by changing the 3D structure of AHAS6,7,8,9,10, but also have adverse pleiotropic effects on plant growth or (and) reproduction11,12,13,14,15,16,17,18,19.
Descurainia sophia L. is an annual, notorious broad-leaf weed infesting winter wheat that has evolved extremely high resistance to the SU herbicide tribenuron-methyl across China20,21,22. Our research confirmed that TSR and NTSR mechanisms confer D. sophia resistance to tribenuron-methyl. Resistance mutations were identified at Pro197 (substituted by Ala, Leu, Thr, Ser and His), Asp376 (by Glu) or Trp574 (by Leu) in AHAS1 or (and) AHAS2 in tribenuron-methyl-resistant D. sophia22,23,24. In addition, one or more cytochrome P450s mediate D. sophia resistance to tribenuron-methyl25. Notwithstanding this, the pleiotropic effects of resistance mutations on the growth of D. sophia plants have not been reported. The objectives of this study were to investigate the impacts of resistance mutations on: (1) the pigment contents of S and R plants; (2) classic growth parameters of D. sophia, such as relative growth rate (RGR), leaf area ratio (LAR) and net assimilation rate (NAR); (3) the relative competitive ability of susceptible (S) and resistant (R) plants under monoculture or admixture conditions.
## Materials and Methods
### Plant materials
The S (SD8) D. sophia population was collected at Linyi city of Shandong province in China, from a site that had never been treated with herbicides, and its plants were confirmed to be individually homozygous by prior genotyping. The original D. sophia population of each purified subpopulation was harvested from winter wheat fields in China where tribenuron-methyl had been used continuously for at least fifteen years. In order to minimize differences in the genetic background of different R plants, plants individually homozygous for a specific AHAS mutation (Pro197Leu, Pro197His, Pro197Ser, Pro197Thr, Asp376Glu or Trp574Leu) were grown to generate seeds (Fig. 1). Seeds from single plants were collected separately and pooled after genotyping. In this way, purified subpopulations individually homozygous for a specific AHAS mutation were obtained and used in this study (Table 1).
Seeds of the purified S and R subpopulations were immersed in 30% hydrogen peroxide solution for 35 min, and then soaked in 0.03% gibberellin solution for 24 h after rinsing with distilled water. Seeds were then placed in Petri dishes to germinate for 96 h. Germinating seedlings of similar size were selected and transplanted into square plastic pots (7.5 cm sides) containing moist loam soil, and were grown in an artificial climate chamber at 25 °C/15 °C (light/dark) with a 14 h photoperiod and a luminous intensity of 15,000 lx. D. sophia plants were watered and rearranged regularly to minimize environmental effects on plant growth.
### Whole-plant response experiments to tribenuron-methyl
In order to confirm the resistance to tribenuron-methyl of each purified R subpopulation, seedlings at 14 days after transplanting (DAT) were used for the whole-plant response experiment. Plants were returned to the artificial climate chamber after herbicide treatment, and the above-ground shoots were harvested 21 days later and oven dried for 96 h at 65 °C. The experiment was conducted with three replicates per herbicide dose and repeated twice.
Tribenuron-methyl was applied to S (5.7 × 10−4, 2.3 × 10−3, 9.2 × 10−3, 3.7 × 10−2, 0.15, 0.59, 2.3, 9.4 g a. i. ha−1) and R (0.15, 0.59, 2.3, 9.4, 37.5, 75, 150 g a. i. ha−1) subpopulations using a moving-boom cabinet sprayer delivering 600 L ha−1 water at a pressure of 0.4 MPa by a flat fan nozzle positioned 54 cm above the foliage.
### Determination of the pigment content (chlorophyll a, chlorophyll b and carotenoid) in S and R plants
The extraction and determination of chlorophyll a, chlorophyll b and carotenoid were conducted according to the methods described by Wellburn (1994)26. The above-ground tissue of an individual plant at 35 or 50 DAT was ground to a fine powder with a mortar and pestle in liquid nitrogen, and then homogenized in 3 mL of 80% acetone solution. The homogenate was filtered into a brown bottle through filter paper wetted with 80% acetone. The chloroplast pigment remaining on the filter paper was washed off with 80% acetone solution and combined with the filtrate. The filtrate was adjusted to 25 mL with 80% acetone and used for the following tests. Six plants from each subpopulation were used for pigment extraction.
The absorbance of each filtrate at 470, 646 and 663 nm was determined with a Lambda 35 spectrophotometer (PerkinElmer). The concentrations of chlorophyll a, chlorophyll b and carotenoid were calculated by formulas (1), (2) and (3) respectively. Total chlorophyll is the sum of chlorophyll a and b.
$${\rm{Ca}}=12.21\,{A}_{663}-2.81\,{A}_{646}$$
(1)
$${\rm{Cb}}=20.13\,{A}_{646}-5.03\,{A}_{663}$$
(2)
$${\rm{Cc}}=(1000\,{A}_{470}-3.27\,{\rm{Ca}}-104\,{\rm{Cb}})/229$$
(3)
where Ca, Cb and Cc are the concentrations of chlorophyll a, chlorophyll b and carotenoids respectively; A663, A646 and A470 are the absorbance values at wavelengths of 663, 646 and 470 nm respectively.
The content of each pigment per unit fresh weight of tissue was calculated by the following formula:
$${\rm{A}}=({\rm{C}}\cdot {\rm{n}}\cdot {\rm{N}})/{\rm{W}}$$
where A is the content of the pigment; C is its concentration; n is the volume of extraction solution; N is the dilution ratio; W is the fresh weight of the sample.
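Equations (1)-(3) and the content formula translate directly into code. The following sketch (function names are my own) computes the pigment concentrations from the three absorbance readings, and then the content per unit fresh weight, dividing by the sample fresh weight W (i.e. A = C·n·N/W).

```python
def pigment_concentrations(a663, a646, a470):
    """Wellburn (1994) equations (1)-(3) for an 80% acetone extract."""
    ca = 12.21 * a663 - 2.81 * a646                   # chlorophyll a
    cb = 20.13 * a646 - 5.03 * a663                   # chlorophyll b
    cc = (1000 * a470 - 3.27 * ca - 104 * cb) / 229   # carotenoids
    return ca, cb, cc

def pigment_content(conc, volume_ml, dilution, fresh_weight_g):
    """Content per unit fresh weight: A = C * n * N / W
    (n = extract volume, N = dilution ratio, W = sample fresh weight)."""
    return conc * volume_ml * dilution / fresh_weight_g
```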
### Determination of RGR, LAR and NAR of S and R plants
The above-ground shoots were harvested at 23, 29, 34, 39, 45, 55 and 63 DAT, oven dried for 96 h at 65 °C, and weighed. The leaf area of each plant was measured immediately after harvest.
The unbiased RGR was estimated by the formula proposed by Hoffmann and Poorter (2002)27: $${\rm{RGR}}=(\ln {\overline{W}}_{2}-\ln {\overline{W}}_{1})/({t}_{2}-{t}_{1})$$ where $${\overline{W}}_{1}$$ and $${\overline{W}}_{2}$$ are the mean dry weights per plant at times t1 and t2, and ln denotes the natural logarithm of those means.
Leaf area per plant was measured with Photoshop CS3 Extended (Adobe Systems Inc., USA). LAR was calculated by the formula proposed by Hunt (1982)28: $${\rm{LAR}}=[(\ln {\overline{W}}_{2}-\ln {\overline{W}}_{1})({\overline{L}}_{A2}-{\overline{L}}_{A1})]/[({\overline{W}}_{2}-{\overline{W}}_{1})(\ln {\overline{L}}_{A2}-\ln {\overline{L}}_{A1})]$$ where $${\overline{L}}_{A1}$$ and $${\overline{L}}_{A2}$$ are the mean leaf areas per plant at t1 and t2, with the other symbols as above.
NAR was estimated by the formula proposed by Hunt (1982)28: $${\rm{NAR}}=[({\overline{W}}_{2}-{\overline{W}}_{1})(\ln {\overline{L}}_{A2}-\ln {\overline{L}}_{A1})]/[({\overline{L}}_{A2}-{\overline{L}}_{A1})({t}_{2}-{t}_{1})]$$ with symbols as defined above; with these interval formulas, RGR = NAR × LAR holds exactly.
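The interval formulas for RGR, LAR and NAR can be sketched as follows (function names are my own). NAR is written with the log-transformed leaf areas, as in Hunt's interval formula, so that the identity RGR = NAR × LAR holds exactly.

```python
import math

def rgr(w1, w2, t1, t2):
    """Unbiased mean relative growth rate (Hoffmann & Poorter 2002)."""
    return (math.log(w2) - math.log(w1)) / (t2 - t1)

def lar(w1, w2, la1, la2):
    """Mean leaf area ratio over the interval (Hunt 1982)."""
    return ((math.log(w2) - math.log(w1)) * (la2 - la1)) / (
            (w2 - w1) * (math.log(la2) - math.log(la1)))

def nar(w1, w2, la1, la2, t1, t2):
    """Mean net assimilation rate over the interval (Hunt 1982)."""
    return ((w2 - w1) * (math.log(la2) - math.log(la1))) / (
            (la2 - la1) * (t2 - t1))
```

With any consistent sample values (dry weights w, leaf areas la, times t), the product `nar(...) * lar(...)` reproduces `rgr(...)` to floating-point precision.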
### Relative competition ability of S and R plants under condition of monoculture
Individual seedlings were transplanted into square plastic pots (7.5 cm sides) and harvested at 35 and 50 DAT respectively. The leaf area of each plant was measured immediately after collection. The above-ground shoots were oven dried for 96 h at 65 °C, and dry weight was measured. In total, 20 plants from each S or R subpopulation were tested.
### Relative competition ability of S and R plants under condition of admixture
Relative competitive ability between S and R plants was evaluated using a replacement series experiment (S:R = 100:0, 75:25, 50:50, 25:75, 0:100) at a constant density of 644 plants m−2 (24 plants per tray, 23.3 cm × 16.0 cm × 6.0 cm) according to the methods described by Reboud et al.29. The experiment was conducted in a randomized complete block design with four replications. The above-ground shoots of S and R plants in the same tray were harvested separately at 50 DAT. The leaf area of each plant was measured immediately after harvest. The above-ground shoots were oven dried for 96 h at 65 °C, and the dry weight of above-ground biomass was measured.
The relative crowding coefficient (RCC) was calculated according to the following formula17,30.
$${\rm{RCC}}=\frac{[({{\rm{DB}}_{{\rm{S}}}}^{75:25}/{{\rm{DB}}_{{\rm{R}}}}^{75:25})+({{\rm{DB}}_{{\rm{S}}}}^{50:50}/{{\rm{DB}}_{{\rm{R}}}}^{50:50})+({{\rm{DB}}_{{\rm{S}}}}^{25:75}/{{\rm{DB}}_{{\rm{R}}}}^{25:75})]/N}{{{\rm{DB}}_{{\rm{S}}}}^{100:0}/{{\rm{DB}}_{{\rm{R}}}}^{0:100}}$$
where DBS n:n and DBR n:n are the mean dry biomass per S and R plant respectively at ratio n:n, and N is the number of mixed plantings (here N = 3). By this definition, an RCC value greater than 1.0 indicates that S is competitively superior to R; an RCC value lower than 1.0 indicates that R is outcompeting S; and an RCC value around 1.0 indicates that S and R have similar competitive ability.
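The RCC calculation can be sketched as below, assuming the mean dry biomass per plant has already been tabulated for each S:R ratio (the dict keys and function name are illustrative).

```python
def relative_crowding_coefficient(db_s, db_r):
    """RCC of S relative to R from a replacement series.

    db_s and db_r map S:R ratio labels to the mean dry biomass per S
    (respectively R) plant observed at that ratio."""
    mixed = ("75:25", "50:50", "25:75")
    mean_ratio = sum(db_s[m] / db_r[m] for m in mixed) / len(mixed)
    # Normalize by the pure-stand (monoculture) performance of each biotype.
    return mean_ratio / (db_s["100:0"] / db_r["0:100"])
```

As defined in the text, RCC > 1.0 indicates S outcompetes R, RCC < 1.0 indicates R outcompetes S, and RCC ≈ 1.0 indicates similar competitive ability; with identical biomass tables for S and R the function returns exactly 1.0.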
### Statistical analyses
The data of the whole-plant response experiments were converted into percentage of control and subjected to non-linear regression analysis using GraphPad Software (v.5.0)31,32. In this study, two AHAS isozymes with different sensitivities to tribenuron-methyl were confirmed to coexist in all R (pHB8, pHB22, pHB23, pHB24, pHB25 and pHB42) subpopulations. The GR50 (herbicide dose causing 50% plant growth reduction) values for all S and R subpopulations were therefore calculated from the double-logistic model (4), whose single-logistic components f(x) and g(x) are given in (5).
$$y=p\times f(x)+(1-p)\times g(x)$$
(4)
$$f(x)=100/(1+{10}^{(a-x)}),\qquad g(x)=100/(1+{10}^{(b-x)})$$
(5)
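A minimal sketch of extracting GR50 from a fitted double-logistic curve: given fitted parameters p, a and b (the curve fitting itself, done in GraphPad in the study, is not shown), the 50% crossing point on the log10-dose axis can be found by bisection. The function names, the log10-dose convention for x, and the bracketing interval are my own assumptions.

```python
def double_logistic(x, p, a, b):
    """y = p*f(x) + (1-p)*g(x), the two-component dose-response model;
    x is assumed to be the log10-transformed dose."""
    f = 100.0 / (1.0 + 10.0 ** (a - x))
    g = 100.0 / (1.0 + 10.0 ** (b - x))
    return p * f + (1.0 - p) * g

def gr50_log10(p, a, b, lo=-6.0, hi=6.0):
    """log10(dose) at which the fitted response crosses 50%, by bisection.
    Assumes the curve increases in x and crosses 50% inside [lo, hi]."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if double_logistic(mid, p, a, b) < 50.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

As a sanity check, with p = 1 the model collapses to a single logistic and the 50% crossing sits exactly at x = a.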
One-way analysis of variance (ANOVA) with Dunnett’s post-test (α = 5%) was performed to assess pairwise differences in plant growth (RGR, NAR, LAR, BCAA content and relative competitive ability).
### Data Availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
## Results
### Whole-plant response experiments to confirm resistance to tribenuron-methyl in each R subpopulation
The whole-plant response experiments established that all R subpopulations exhibited extremely high resistance levels to tribenuron-methyl (Fig. 2). The pHB25 subpopulation carrying the Asp376Glu mutation exhibited the highest resistance level to tribenuron-methyl, with a resistance index (RI) of 815. The RI value of the pHB42 subpopulation (with Trp574Leu) was 366.3, higher than that of all plants carrying Pro197 mutations (Pro197Leu, His, Thr and Ser). In contrast, the resistance levels of R subpopulations carrying different Pro197 mutations displayed no significant differences.
### Effects of resistance mutations on the pigments (chlorophyll a, chlorophyll b and carotenoid) in S and R plants
At 35 DAT, only Pro197His (pHB22) significantly reduced the pigment contents (carotenoid, chlorophyll a, chlorophyll b and total chlorophyll) of D. sophia plants, while the other resistance mutations had no obvious effects on the corresponding R plants. In contrast to the results at 35 DAT, the pigment contents of S and all R subpopulations at 50 DAT exhibited no significant differences (Table 2).
### Effects of resistance mutations on RGR, LAR and NAR of S and R plants
Pro197Leu and Pro197Thr reduced plant growth during all tested periods, decreasing RGR by about 13% and 23% respectively over 23–63 DAT. The other mutations decreased RGR only at some stages. In addition, all mutations had adverse pleiotropic effects on plant growth during 23–34 and 23–39 DAT, while only some mutations had negative impacts on RGR during the other periods (Table 3).
For LAR and NAR over 23–63 DAT, only Trp574Leu increased LAR (by about 29%); the other mutations had no obvious pleiotropic effects on LAR or NAR. Trp574Leu increased LAR during all tested periods, and all mutations decreased NAR during 23–39 DAT (Tables 4 and 5).
### Relative competition ability of S and R plants under monoculture (one plant per pot)
At 50 DAT, the dry weight of S plants was about 56%, 46%, 42%, 18% and 70% greater than that of R plants carrying Pro197Leu, Pro197His, Pro197Thr, Asp376Glu and Trp574Leu respectively, while S and R plants with Pro197Ser displayed no significant difference in dry weight. In contrast to 50 DAT, the effects of the resistance mutations on dry weight at 35 DAT were complex: Pro197His and Trp574Leu significantly decreased the dry weight of R plants; Pro197Leu, Pro197Thr and Asp376Glu did not change the dry weight of R plants compared with S plants; and the Pro197Ser mutation significantly increased the dry weight of R plants (Table 6).
As for leaf area, the individual leaf area of S plants at 50 DAT was about 39%, 26%, 42% and 57% larger than that of R plants carrying Pro197Leu, Pro197His, Pro197Thr and Trp574Leu respectively. The individual leaf area of S plants at 35 DAT was about 45% and 53% larger than that of R plants with Pro197His and Trp574Leu respectively, but displayed no significant differences from R plants carrying Pro197Leu, Pro197Thr or Asp376Glu (Table 6).
### Relative competition ability of S and R plants under admixture condition
When mixed with R plants (with Pro197Leu, Pro197His, Pro197Thr, Asp376Glu or Trp574Leu), the dry weight of S plants did not change significantly at any proportion, while the dry weight of R plants decreased greatly as the R proportion increased to 75%~100%. By contrast, the dry weight of S plants decreased at the 25:75 (S:R) proportion when mixed with pHB23 plants (with Pro197Ser), whose dry weight was unchanged at all proportions (Tables 7 and 8 and Figs 3 and 4).
According to the RCC values (well below 1.0) in terms of both dry weight and leaf area, all R plants carrying Pro197Leu, Pro197His, Asp376Glu or Trp574Leu displayed greater competitiveness than S plants. The pHB23 plants (with Pro197Ser) and S plants displayed similar competitive ability in terms of dry weight (RCC of 0.92) and leaf area (RCC of 0.96). By contrast, pHB24 plants with the Pro197Thr mutation displayed stronger competitiveness than S plants in terms of dry weight (RCC of 0.62), but weaker competitive ability than S plants in terms of individual leaf area (RCC of 1.28) (Tables 7 and 8).
## Discussion
### Effects of resistance mutation on resistance in D. sophia
To date, a total of seven resistance mutations have been identified in AHAS isozymes at Pro197 (Pro197Leu, His, Ser, Thr, Tyr), Asp376 (Asp376Glu) and Trp574 (Trp574Leu) in D. sophia populations22,23,24. In recent years, we have identified six of these seven mutations in more than sixty D. sophia populations. In order to minimize differences in genetic background, individual plants homozygous for a specific AHAS mutation (Pro197His, Pro197Ser, Pro197Thr, Pro197Leu, Asp376Glu or Trp574Leu) were purified and used in this study.
As expected, all resistance mutations caused the D. sophia subpopulations to evolve extremely high resistance to tribenuron-methyl, as confirmed by previous dose-response experiments, AHAS sensitivity assays and AHAS mutation identification21,22,23,24. In addition, the effects of the resistance mutations on the characteristics of the AHAS isozymes were examined; those results (not included in this manuscript) also confirmed that these mutations play very important roles in the evolution of D. sophia resistance to tribenuron-methyl. Among these resistance mutations, Asp376Glu conferred the highest resistance to tribenuron-methyl: D. sophia plants with Asp376Glu survived tribenuron-methyl at a dose of 37.5 g a.i. ha−1, which completely killed the plants carrying Pro197 mutations or Trp574Leu (Fig. 1). By contrast, Asp376Glu is a weak AHAS resistance mutation (in terms of growth inhibition by chlorsulfuron) in Raphanus raphanistrum populations compared with the Ala122Tyr, Pro197Ser and Trp574Leu mutations33. Evidently, the resistance or cross-resistance pattern conferred by a given resistance mutation depends not only on the site of the mutation but also on the specific amino acid substitution, and it is not yet clear whether such differences are related to the weed species or (and) the specific herbicide. Hence, the impact of resistance-endowing mutations on resistance should be evaluated on a case-by-case basis, and generalizations should be avoided.
### Effects of resistance mutation on the classic growth of S and R plants
RGR is the product of NAR and LAR, and depends on genetic background and environmental conditions. NAR is largely the net result of carbon gain (photosynthesis) and losses (respiration, exudation and volatilization) expressed per unit leaf area, while LAR is a morphological component indicating the fraction of total plant weight allocated to the leaves34. A high RGR helps weeds occupy a large space and facilitates rapid completion of the plant's life cycle, which is advantageous for weeds in competitive situations. In this study, the RGR of S plants was equal to, or significantly higher than (during 23–34 DAT and 23–39 DAT), that of R plants with specific resistance mutations, which means the S plants exhibit an obvious growth advantage over R plants at the early vegetative stage in the absence of stressful conditions. However, the present study was conducted only up to 63 DAT and under controlled conditions, and many questions remain. For example, how do the resistance mutations affect plant growth over the whole life history and under field conditions? Are the effects of genetic factors and environmental conditions on plant growth independent or interactive?
### Effects of resistance mutation on the competitiveness of S and R plants under condition of monoculture or admixture
The present results indicated that different resistance mutations have different effects on the competitive ability of R plants. Plants carrying Pro197Leu (pHB8), Pro197His (pHB22), Asp376Glu (pHB25) or Trp574Leu (pHB42) displayed stronger competitiveness than S plants (RCC in terms of dry weight and leaf area < 1.0), while pHB23 (Pro197Ser) exhibited competitiveness similar to that of S plants (RCC ≈ 1.0) (Tables 7, 8 and Figs 3 and 4). When S and R plants were mixed, their effects on each other were not symmetrical: the dry weight of R plants (carrying Pro197Leu, Pro197His, Pro197Thr, Asp376Glu or Trp574Leu) decreased greatly as the R ratio increased from 25% to 100%, whereas the dry weight of S plants was unaffected except when mixed with pHB23 at a 25:75 ratio. These results demonstrate that the competitive ability of R relative to S plants is affected not only by the biotype (S or R) and the specific resistance mutation, but also by the ratio of S and R plants in the mixture. Hence, it would be difficult for R plants to dominate the weed community in the absence of herbicide selection. As in other similar studies, this study was conducted in a greenhouse at a single plant density, and the experiment covered only part (up to 50 DAT) of the plants' life cycle; staged results are not necessarily in accordance with the final outcome. Hence, in order to estimate resistance costs accurately, experiments covering all life stages in a variety of biotic and abiotic environments should be conducted.
Resistance costs are considered a basic tenet of evolutionary genetics; however, they are not inevitable or universal across all resistance cases. The expression of a resistance cost is strongly influenced by the resistance mechanism, the specific resistance allele, the characteristics of the target enzyme, the genetic background, the weed species and the growth environment2. Hence, it is difficult to assess resistance costs accurately in R weeds. Although a great deal of research effort has been invested in measuring resistance costs in R weed species, numerous studies were flawed; Vila-Aiub et al. (2009) reported that only 25% of studies assessing resistance costs explicitly met the criterion of controlling for genetic background2. To date, only a few resistance cost cases have been confirmed in AHAS herbicide-resistant weed species. For example, the Trp574Leu mutation in Amaranthus powellii causes pleiotropic effects on early growth and development under competitive conditions: the leaves of resistant plants were distorted and much smaller than those of S plants, and one S population outperformed an R population by 7~15 times under competitive conditions17. In addition, the Ala205Val mutation reduced reproductive output and fitness in resistant S. ptychanthum compared with the S biotype, which would likely cause S individuals to dominate in the absence of herbicide selection pressure35.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. Powles, S. B. & Yu, Q. Evolution in action: plants resistant to herbicides. Ann. Rev. Plant Biol. 61, 317–347 (2010).
2. Vila-Aiub, M. M., Neve, P. & Powles, S. B. Fitness costs associated with evolved herbicide resistance alleles in plants. New Phytol. 184, 751–767 (2009).
3. Heap, I. The International Survey of Herbicide Resistant Weeds, http://www.weedscience.org. 18-Nov-2017.
4. Brosnan, J. T. et al. A new amino acid substitution (Ala-205-Phe) in acetolactate synthase (ALS) confers broad spectrum resistance to ALS-inhibiting herbicides. Planta. 243, 149–159 (2016).
5. Heap, I. Global perspective of herbicide-resistant weeds. Pest Manag. Sci. 70, 1306–1315 (2014).
6. Yu, Q. & Powles, S. B. Resistance to AHAS inhibitor herbicides: current understanding. Pest Manag. Sci. 70, 1340–1350 (2014).
7. Rosario, J. M., Cruz-Hipolito, H., Smeda, R. J. & De, R. P. White mustard (Sinapis alba) resistance to ALS-inhibiting herbicides and alternative herbicides for control in Spain. Eur J Agron. 35, 57–62 (2011).
8. Zheng, D. et al. Cross-resistance of horseweed (Conyza canadensis) populations with three different ALS mutations. Pest Manag. Sci. 67, 1486–149 (2011).
9. Kaloumenos, N. S., Adamouli, V. N., Dordas, C. A. & Eleftherohorinos, I. G. Corn poppy (Papaver rhoeas) cross-resistance to ALS-inhibiting herbicides. Pest Manag. Sci. 67, 574–585 (2011).
10. Deng, W. et al. Different cross-resistance patterns to AHAS herbicides of two tribenuron-methyl resistant flixweed (Descurainia sophia L.) biotypes in China. Pestic. Biochem. Physiol. 112, 26–32 (2014).
11. Alcocer-Ruthling, M., Thill, D. C. & Shafii, B. Differential competitiveness of sulfonylurea resistant and susceptible prickly lettuce (Lactuca serriola). Weed Technol. 6, 303–309 (1992).
12. Thompson, C. R., Thill, D. C. & Shafii, B. Growth and competitiveness of sulfonylurea-resistant and -susceptible kochia (Kochia scoparia). Weed Sci. 42, 172–179 (1994).
13. Christoffoleti, P. J., Westra, P. & Moore, F. Growth analysis of sulfonylurea-resistant and -susceptible kochia (Kochia scoparia). Weed Sci. 45, 691–695 (1997).
14. Menegat, A. et al. Acetohydroxyacid synthase (AHAS) amino acid substitution Asp376Glu in Lolium perenne: effect on herbicide efficacy and plant growth. J. Plant Dis. Prot. 123, 145–153 (2016).
15. Liu, W. et al. Comparison of ALS functionality and plant growth in ALS-inhibitor susceptible and resistant Myosoton aquaticum L. Pestic. Biochem. Physiol. Available online 28 March 2017.
16. Sibony, M. & Rubin, B. The ecological fitness of ALS-resistant Amaranthus retroflexus and multiple-resistant Amaranthus blitoides. Weed Res. 43, 40–47 (2003).
17. Tardif, F. J., Rajcan, I. & Costea, M. A mutation in the herbicide target site acetohydroxyacid synthase produces morphological and structural alterations and reduces fitness in Amaranthus powellii. New Phytol. 169, 251–264 (2006).
18. Yu, Q., Han, H., Vila-Aiub, M. M. & Powles, S. B. AHAS herbicide resistance endowing mutations: effect on AHAS functionality and plant growth. J. Exp. Bot. 61, 3925–3934 (2010).
19. Li, M., Yu, Q., Han, H., Vila-Aiub, M. & Powles, S. B. ALS herbicide resistance mutations in Raphanus raphanistrum: evaluation of pleiotropic effects on vegetative growth and ALS activity. Pest Manag. Sci. 69, 689–695 (2013).
20. Cui, H. L. et al. Tribenuron-methyl resistant flixweed (Descurainia sophia). Agric. Sci. China. 8, 488–490 (2009).
21. Han, X. J., Dong, Y., Sun, X. N., Li, X. F. & Zheng, M. Q. Molecular basis of resistance to tribenuron-methyl in Descurainia sophia (L.) populations from China. Pestic. Biochem. Physiol. 104, 77–81 (2012).
22. Deng, W. et al. Cross-resistance pattern to four AHAS-inhibiting herbicides of tribenuron-methyl-resistant flixweed (Descurainia sophia) conferred by Asp-376-Glu mutation in AHAS. J. Integr. Agr. 15, 2563–2570 (2016).
23. Deng, W. et al. Tribenuron-methyl resistance and mutation diversity of Pro197 in flixweed (Descurainia sophia L.) accessions from China. Pestic. Biochem. Physiol. 117, 68–74 (2015).
24. Deng, W. et al. Cross-resistance patterns to acetolactate synthase (ALS)-inhibiting herbicides of flixweed (Descurainia sophia L.) conferred by different combinations of ALS isozymes with a Pro-197-Thr mutation or a novel Trp-574-Leu mutation. Pestic. Biochem. Physiol. 136, 41–45 (2017).
25. Yang, Q. et al. Target-site and non-target-site based resistance to the herbicide tribenuron-methyl in flixweed (Descurainia sophia L.). BMC Genomics. 17, 551 (2016).
26. Wellburn, A. R. The spectral determination of chlorophylls a and b, as well as total carotenoids, using various solvents with spectrophotometers of different resolution. J. Plant Physiol. 144, 307–313 (1994).
27. Hoffmann, W. A. & Poorter, H. Avoiding bias in calculations of relative growth rate. Annals of Botany. 90, 37–42 (2002).
28. Hunt, R. Plant growth curves. The functional approach to plant growth analysis. London, UK: Edward Arnold (1982).
29. Reboud, X. & Till-Bottraud, I. The cost of herbicide resistance measured by a competition experiment. Theor. Appl. Genet. 82, 690–696 (1991).
30. Novak, M. G., Higley, L. G., Christianssen, C. A. & Rowley, W. A. Evaluating larval competition between Aedes albopictus and A. triseriatus (Diptera: Culicidae) through replacement series experiments. Enviro. Entomol. 22, 311–318 (1993).
31. Lipovetsky, S. Double logistic curve in regression modeling. J. Appl. Stat. 37, 1785–1793 (2010).
32. Yamato, S., Sada, Y. & Ikeda, H. Characterization of acetolactate synthase from sulfonylurea herbicide-resistant Schoenoplectus juncoides. Weed Biol. Manag. 13, 104–113 (2013).
33. Yu, Q. et al. Resistance evaluation for herbicide resistance-endowing acetolactate synthase (ALS) gene mutations using Raphanus raphanistrum populations homozygous for specific ALS mutations. Weed Res. 52, 178–186 (2012).
34. Poorter, H. & Remkes, C. Leaf area ratio and net assimilation rate of 24 wild species differing in relative growth rate. Oecologia. 83, 553–559 (1990).
35. Ashigh, J. & Tardif, F. J. An amino acid substitution at position 205 of acetohydroxyacid synthase reduces fitness under optimal light in resistant populations of Solanum ptychanthum. Weed Res. 49, 479–489 (2009).
## Acknowledgements
This work was supported by the National Natural Science Foundation of China (31672047).
## Author information
### Author notes
1. Yongzhi Zhang and Yufang Xu contributed equally to this work.
### Affiliations
1. #### Department of Applied Chemistry, College of Science, China Agricultural University, Beijing, 100193, China
• Yongzhi Zhang
• , Yufang Xu
• , Shipeng Wang
• , Xuefeng Li
• & Mingqi Zheng
### Contributions
M.Z. and X.L. designed the research, Y.Z. and Y.X. conducted the experiments, Y.Z. and S.W. performed the data analysis, Y.Z., Y.X. and M.Z. wrote the main manuscript.
### Competing Interests
The authors declare that they have no competing interests.
### Corresponding author
Correspondence to Mingqi Zheng.
## Finite fields And Error-Correcting Codes
After introducing the concept of finite fields and their properties, we look at algebraic constructions of error-correcting codes.
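As a small taste of such constructions, the sketch below implements the classic Hamming(7,4) code over GF(2), which encodes 4 data bits into 7 and corrects any single-bit error; the systematic generator and parity-check matrices chosen here are one standard option, not necessarily the ones used in the course.

```python
# Hamming(7,4) over GF(2): 4 data bits -> 7-bit codeword,
# corrects any single-bit error.
G = [  # systematic generator matrix (rows span the code)
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
H = [  # parity-check matrix: H * c^T = 0 for every codeword c
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(data):
    """Multiply the 4-bit data vector by G over GF(2)."""
    return [sum(d & g for d, g in zip(data, col)) % 2 for col in zip(*G)]

def syndrome(word):
    """H * word^T over GF(2); all zeros means the word is a codeword."""
    return [sum(w & h for w, h in zip(word, row)) % 2 for row in H]

def correct(word):
    """Fix a single-bit error: a nonzero syndrome equals the column
    of H at the error position."""
    s = tuple(syndrome(word))
    if any(s):
        word = word[:]
        word[list(zip(*H)).index(s)] ^= 1
    return word
```

Because the seven columns of H are exactly the seven nonzero vectors of GF(2)³, every single-bit error produces a distinct nonzero syndrome, which is what makes the correction step work.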
Year: 2006
Laboratories: n/a
DOC
• Browse by book

Magoosh problem set

Magoosh's GRE prep materials are fairly good. As for its GMAT mock questions, the instructor does not recommend that time-constrained GMAT candidates spend much time on this material; these problems can instead serve as vocabulary-expansion practice.
141 In Ophiuchus Corporation, 60% of the total revenue R is devoted to the advertising budget. Five-sixths of this advertising budget was spent on television advertising. Which of the following represents the dollar amount spent on television advertising?
142 In a group of 50 students, 31 are taking French, 17 are taking Spanish, and 10 are taking neither French nor Spanish. How many students are taking both French and Spanish?
143 What is the area of the circle?
144 $\frac{10!-8!}{7!}$ =
145 If K is the least positive integer that is divisible by every integer from 1 to 8 inclusive, then K =
146 If x is the greatest common divisor of 90 and 18, and y is the least common multiple of 51 and 34, then x + y =
147 It took Ellen 6 hours to ride her bike a total distance of 120 miles. For the first part of the trip, her speed was constantly 25 miles per hour. For the second part of her trip, her speed was constantly 15 miles per hour. For how many miles did Ellen travel at 25 miles per hour?
148 If $\frac{4}{w} + \frac{4}{x} = \frac{4}{y}$ and $wx = y$, then the average (arithmetic mean) of w and x is
149 $\frac{4^{6}-4^{5}}{3}$ =
150 Hal has 4 girl friends and 5 boy friends. In how many different ways can Hal invite 2 girls and 2 boys to his birthday party?
151 A, B, and C are consecutive odd integers such that A < B < C.If A + B + C = 81, then A + C =
152 What is the average (arithmetic mean) of $(\sqrt{8}+\sqrt{2})^{2}$ and $(\sqrt{8}-\sqrt{2})^{2}$ ?
153 If x and y are both positive and $\sqrt{x^{2}+y^{2}} = 3x - y$, then $\frac{x}{y} =$
154 Line k is in the rectangular coordinate system. If the x-intercept of k is -2, and the y-intercept is 3, which of the following is an equation of line k?
155 If ${(\frac{1}{x}+x)}^{2}=16$, then $\frac{1}{x^{2}}+x^{2}=$
156 If 3 apples and 4 bananas cost \$1.37, and 5 apples and 7 bananas cost \$2.36, what is the total cost of 1 apple and 1 banana?
157 When positive integer x is divided by 11, the quotient is y and the remainder is 4. When 2x is divided by 8, the quotient is 3y and the remainder is 2. What is the value of 13y – x ?
158 A is the center of the circle, and the length of AB is $4\sqrt{2}$. The blue shaded region is a square. What is the area of the shaded region?
159 If ak - b = c - dk, then k =
160 Dimitri weighs x pounds more than Allen weighs. Together, Allen and Dimitri weigh a total of y pounds. Which of the following represents Allen’s weight?
• [IR]13fm3k
Accuracy: 3.6%
• [IR]1dfnkk
Accuracy: 5.9%
• [DS]34g0kk
Accuracy: 6.3%
• [IR]acfnlk
Accuracy: 7%
• [IR]2ffmnk
Accuracy: 7%
|
|
# 8. A nurse is caring for a client who has a prescription for a stool specimen...
###### Question:
8. A nurse is caring for a client who has a prescription for a stool specimen to be sent to the laboratory to be tested for ova and parasites. Which of the following instructions regarding specimen collection should the nurse provide to the assistive personnel? A. Collect at least 2 inches of formed stool. B. Record the date and time the stool was collected. C. Use a culturette for specimen collection. D. Wear sterile gloves while obtaining the specimen.
#### Similar Solved Questions
##### If two variables have a negative correlation, can they have a positive covariance?
If two variables have a negative correlation, can they have a positive covariance?...
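A note of my own, not from the thread: correlation is covariance divided by the (positive) product of the two standard deviations, so the two quantities always share a sign — a negative correlation cannot coexist with a positive covariance. A quick numeric illustration with made-up data:

```python
# My own illustration: corr(x, y) = cov(x, y) / (sd(x) * sd(y)), and the
# denominator is positive, so correlation and covariance share a sign.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def corr(xs, ys):
    return cov(xs, ys) / (cov(xs, xs) ** 0.5 * cov(ys, ys) ** 0.5)

xs = [1, 2, 3, 4, 5]
ys = [9, 7, 6, 4, 1]          # clearly decreasing in x
assert corr(xs, ys) < 0 and cov(xs, ys) < 0   # same sign, as the formula forces
```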
##### 2. James Woods, CFA, is a portfolio manager at ABC Securities. Woods has reasonable grounds to...
2. James Woods, CFA, is a portfolio manager at ABC Securities. Woods has reasonable grounds to believe his colleague, Sandra Clarke, a CFA Level II candidate, is engaged in unethical trading activities that may also be in violation of local securities laws. Woods is not Clarke's supervisor, and ...
##### Find the absolute maximum and absolute minimum values of the function f(x) = 6x - 3 (if they exist) over the interval (-3, 3].
Find the absolute maximum and absolute minimum values of the function f(x) = 6x - 3 (if they exist) over the interval (-3, 3]. A. Absolute maximum ..., absolute minimum -18 B. There are no absolute extrema C. Absolute maximum ..., absolute minimum ... D. Absolute maximum ..., absolute minimum ...
##### 2. (10 points) Choose a group G and a set X. Define an action of G on X and show that X is a G-set. Then verify for some particular elements x in X and g in G that Stab(gx) = g Stab(x) g^{-1}.
2. (10 points) Choose a group G and a set X. Define an action of G on X and show that X is a G-set. Then verify for some particular elements x in X and g in G that Stab(gx) = g Stab(x) g^{-1}....
##### 9: A 2 kg block was placed on top of a 4 kg block at rest on a horizontal frictionless surface. The coefficients of static and kinetic friction between the blocks were 0.25 and 0.2. What is the largest horizontal force that can be applied to the 4 kg block so that the two blocks move together without slipping? a) 20.6 N b) 23.5 N c) 11.8 N d) 14.7 N e) 17.6 N
9: A 2 kg block was placed on top of a 4 kg block at rest on a horizontal frictionless surface. The coefficients of static and kinetic friction between the blocks were 0.25 and 0.2. What is the largest horizontal force that can be applied to the 4 kg block so that the two blocks move together without slipping? a) 20.6 N b) 23.5 N c) 11.8 N d) 14.7 N e) 17.6 N...
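A back-of-envelope check of this problem (my reading of the garbled text: the force is applied to the bottom 4 kg block; g = 9.8 m/s² is my assumption):

```python
# My reconstruction: force applied to the bottom 4 kg block, mu_s = 0.25,
# g = 9.8 m/s^2 (assumed value).
g = 9.8
m_top, m_bot = 2.0, 4.0
mu_s = 0.25                      # static friction between the two blocks

# The top block is driven only by friction, so its maximum acceleration is
a_max = mu_s * g                 # = 2.45 m/s^2

# Moving together, the whole stack obeys F = (m_top + m_bot) * a_max
F_max = (m_top + m_bot) * a_max
print(round(F_max, 1))           # -> 14.7, matching choice (d)
```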
##### Question 24: Use the standard half-cell potentials listed below to calculate the standard cell potential, in V, for the following reaction occurring in an electrochemical cell at 25 °C. (The equation is balanced.) Sn(s) + 2 Ag+(aq) → Sn2+(aq) + 2 Ag(s). Half-cells: Sn2+(aq) + 2 e- → Sn(s), E° = -0.14 V; Ag+(aq) + e- → Ag(s), E° = +0.80 V.
Question 24: Use the standard half-cell potentials listed below to calculate the standard cell potential, in V, for the following reaction occurring in an electrochemical cell at 25 °C. (The equation is balanced.) Sn(s) + 2 Ag+(aq) → Sn2+(aq) + 2 Ag(s). Half-cells: Sn2+(aq) + 2 e- → Sn(s), E° = -0.14 V; Ag+(aq) + e- → Ag(s), E° = +0.80 V....
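A one-line check of this calculation (my addition): E°cell = E°(cathode) − E°(anode), and the factor of 2 in the balanced equation does not scale the potential.

```python
# Standard cell potential for Sn(s) + 2 Ag+(aq) -> Sn2+(aq) + 2 Ag(s).
# E(cell) = E(cathode) - E(anode); electron count does not enter here.
E_sn = -0.14   # Sn2+ + 2e- -> Sn (the anode; it runs in reverse)
E_ag = +0.80   # Ag+  +  e- -> Ag (the cathode)
E_cell = E_ag - E_sn
print(round(E_cell, 2))   # -> 0.94 (V)
```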
##### (BERRAPCALCBR6 2.4.065) A company produces LCD digital alarm clocks at a cost of $7 each, while fixed costs are $32. Therefore the company's cost function is C(x) = 7x + 32. (a) Find the average cost function AC(x). (b) Find the marginal average cost function MAC(x). (c) Evaluate MAC(x) (round your answer to the nearest cent) and interpret your answer: the average cost [select] at this rate.
(BERRAPCALCBR6 2.4.065) A company produces LCD digital alarm clocks at a cost of $7 each, while fixed costs are $32. Therefore the company's cost function is C(x) = 7x + 32. (a) Find the average cost function AC(x). (b) Find the marginal average cost function MAC(x). (c) Evaluate MAC(x) (round your answer to the nearest cent) and interpret your answer...
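A sketch of the computation (my addition, based on the reconstructed cost function C(x) = 7x + 32): the average cost is AC(x) = C(x)/x = 7 + 32/x, so the marginal average cost is its derivative, MAC(x) = -32/x².

```python
# My sketch, assuming C(x) = 7x + 32 as in the reconstructed problem.
C = lambda x: 7 * x + 32
AC = lambda x: C(x) / x            # average cost: 7 + 32/x
MAC = lambda x: -32 / x ** 2       # d(AC)/dx; negative: average cost falls

x, h = 8.0, 1e-6
numeric = (AC(x + h) - AC(x - h)) / (2 * h)   # finite-difference check
assert abs(numeric - MAC(x)) < 1e-6
assert MAC(8) == -0.5              # at x = 8, average cost falls $0.50/unit
```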
##### Question 3: The histogram below shows the times, in minutes, required for 25 rats in an animal behavior experiment to successfully navigate a maze. Which of the following are the appropriate numerical measures to describe the center and spread of the above distribution? A. The median and the IQR B. The mean and the standard deviation C. The mean and the median D. The IQR and the standard deviation
Question 3: The histogram below shows the times, in minutes, required for 25 rats in an animal behavior experiment to successfully navigate a maze. [Histogram: "Time for a Group of Rats to Navigate a Maze"; axes: Frequency vs. Time in Minutes.] Which of the following are the appropriate numerical measures to describe the center and spread of the above distribution? A. The median and the IQR B. The mean and the standard deviation C. The mean and the median D. The IQR and the standard deviation...
##### P 6.5 You are provided with the following information for Santa Ltd. for the month ended October 31, 2020.
P 6.5 You are provided with the following information for Santa Ltd. for the month ended October 31, 2020. Oct. 1 Beginning inventory (60 units, unit cost €24); Oct. 9 Purchase (120 units, ...); Oct. 11 Sale; Oct. 17 Purchase; Oct. 22 Sale; Oct. 25 Purchase; Oct. 29 Sale. Instructio...
##### [Chromatography diagram labels: solvent front; distance travelled by solvent; distance travelled by substance; baseline]
##### CHART OF ACCOUNTS — Canyon Ferry Boating Corporation, General Ledger. ASSETS: 110 Cash; 120 Accounts Receivable; 131 Notes Receivable; 152 Interest Receivable; 141 Merchandise Inventory. EQUITY: 311 Common Stock; 312 Paid-in Capital in Excess of Par-Common Stock; 315 Treasury Stock; 321 Preferred Stock; 322 Paid-in Ca...
CHART OF ACCOUNTS, Canyon Ferry Boating Corporation, General Ledger. ASSETS: 110 Cash; 120 Accounts Receivable; 131 Notes Receivable; 152 Interest Receivable; 141 Merchandise Inventory. EQUITY: 311 Common Stock; 312 Paid-in Capital in Excess of Par-Common Stock; 315 Treasury Stock; 321 Preferred Stock; 322 Paid-in Ca...
##### Evaluate the commutators [x, p], [H, x], and [H, p], where H is the Hamiltonian of a particle in a one-dimensional potential V(x).
Evaluate the commutators [x, p], [H, x], and [H, p], where H is the Hamiltonian of a particle in a one-dimensional potential V(x)....
##### Another model for a growth function for a limited population is given by the Gompertz function, which is a solution of the differential equation dP/dt = c ln(K/P) P, where c is a constant and K is the carrying capacity. (a) Solve this differential equation (P(0) = P0). (b) Compute lim_{t→∞} P(t). (If there is no solution, enter NONE in the answer box.)
Another model for a growth function for a limited population is given by the Gompertz function, which is a solution of the differential equation dP/dt = c ln(K/P) P, where c is a constant and K is the carrying capacity. (a) Solve this differential equation (P(0) = P0). (b) Compute lim_{t→∞} P(t). (If there is no solution, enter NONE in the answer box.)...
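Reading the garbled statement as dP/dt = c·ln(K/P)·P, the solution is P(t) = K·(P0/K)^exp(−ct), which tends to the carrying capacity K. A crude Euler integration checks this closed form (my own sketch; the values of c, K, and P0 are arbitrary):

```python
# My sketch: verify the Gompertz closed form against a forward-Euler
# integration of dP/dt = c * ln(K/P) * P (c, K, P0 chosen arbitrarily).
import math

c, K, P0 = 0.5, 100.0, 10.0
closed = lambda t: K * (P0 / K) ** math.exp(-c * t)

T, steps = 5.0, 50_000
dt = T / steps
P = P0
for _ in range(steps):                 # crude forward-Euler integration
    P += dt * c * math.log(K / P) * P

assert abs(P - closed(T)) < 0.05       # numerical and exact solutions agree
assert abs(closed(50.0) - K) < 1e-6   # long-run limit is the carrying capacity
```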
##### 16. In a noisy communication system transmitting binary digits (zeros and ones) the sender may send...
16. In a noisy communication system transmitting binary digits (zeros and ones) the sender may send a digit (for example, 1) and the receiver may receive the other digit (for example, 0). Let A be the event "a 1 is sent" and let B be the event "a 1 is received." Hence T is the event ...
##### Problem 1: If the density function of a random variable X equals f_X(x) = ... for 0 < x < ..., and 0 elsewhere, then: 1. Find ... 2. Find F_X(x) 3. Find P[X > 2] using F_X 4. Find P[1 < X < 2] using f_X(x) 5. Find E[X] 6. Find E[X^2] 7. From the previous two steps, find Var[X]
Problem 1: If the density function of a random variable X equals f_X(x) = ... for 0 < x < ..., and 0 elsewhere, then: 1. Find ... 2. Find F_X(x) 3. Find P[X > 2] using F_X 4. Find P[1 < X < 2] using f_X(x) 5. Find E[X] 6. Find E[X^2] 7. From the previous two steps, find Var[X]...
##### Check: Required information (The following information applies to the questions displayed below.) Data for Hermann Corporation are shown below: selling price $110 per unit (100% of sales); variable expenses (70% of sales); contribution margin $33 per unit (30% of sales). Fixed expenses are $82,000 per month and the company is ...
1 answer
##### In this assignment you will be implementing a weather forecaster. It involves writing 3 different classes plus a driver program. It is recommended that you write one class at a time and then write a driver (tester) program to ensure that it works before putting everything together. Alternately, you ...
1 answer
##### In a market operated by a cartel, if price is $30, which of the following must be true?
In a market operated by a cartel, if price is $30, which of the following must be true? Marginal revenue is $30 and marginal cost must be less than $30. Marginal revenue must be zero. ATC must be under $30. Marginal revenue and marginal cost must be under $30. Which of the following is the best example o...
##### Part One Read and review the National Conference of State Legislatures, Section 7(r) of the Fair...
Part One Read and review the National Conference of State Legislatures, Section 7(r) of the Fair Labor Standards Act, Healthtalk.org, International Code of Marketing Breastmilk Substitutes, Resource for Informed Breastmilk Sharing, Breastfeeding Online, and UNICEF study materials. Why do we need to ...
##### 11. Find the exact length of the curve x = 1 + sin(2t), y = 2 - cos(2t), 0 ≤ t ≤ 8.
11. Find the exact length of the curve x = 1 + sin(2t), y = 2 - cos(2t), 0 ≤ t ≤ 8....
##### The pedals of a bicycle are mounted on a bracket whose centre is 28.0 cm above the ground. Each pedal is 15.5 cm from the centre of the bracket.
The pedals of a bicycle are mounted on a bracket whose centre is 28.0 cm above the ground. Each pedal is 15.5 cm from the centre of the bracket. Assuming that the bicycle is pedalled at 20 cycles per minute and that the pedal starts at time t = 0 s at the topmost position, write an equation which describe...
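One possible equation (my sketch using the stated values): 20 cycles per minute gives a period of 3 s, and starting at the topmost position at t = 0 suggests a cosine, h(t) = 28 + 15.5·cos(2πt/3) cm.

```python
# My sketch: pedal height in cm, period 3 s (20 cycles/min), topmost at t = 0.
import math

def h(t):
    return 28.0 + 15.5 * math.cos(2 * math.pi * t / 3)

assert abs(h(0) - 43.5) < 1e-9    # topmost: centre height + radius
assert abs(h(1.5) - 12.5) < 1e-9  # half a period later: bottommost
assert abs(h(3) - 43.5) < 1e-9    # back to the top after one full period
```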
##### Evaluate the integral. (Remember to use absolute values where appropriate. Use C for the constant of integration.)
Evaluate the integral. (Remember to use absolute values where appropriate. Use C for the constant of integration.)...
##### Diagram the bank: LABEL EVERYTHING, including each axis and the curves, and illustrate the gains to the borrowers, savers, and the bank. Also, in 5 sentences or less, explain why banks exist.
Diagram the bank: LABEL EVERYTHING, including each axis and the curves, and illustrate the gains to the borrowers, savers, and the bank. Also, in 5 sentences or less, explain why banks exist....
##### Reading and Interpreting Graphs Let $C$ be the function whose graph is given in the next column. This graph represents the cost $C$ of manufacturing $q$ computers in a day.(a) Determine $C(0)$. Interpret this value.(b) Determine $C(10)$. Interpret this value.(c) Determine $C(50)$. Interpret this value.(d) What is the domain of $C$ ? What does this domain imply in terms of daily production?(e) Describe the shape of the graph.(f) The point (30,32000) is called an inflection point. Describe the beh
Reading and Interpreting Graphs Let $C$ be the function whose graph is given in the next column. This graph represents the cost $C$ of manufacturing $q$ computers in a day. (a) Determine $C(0)$. Interpret this value. (b) Determine $C(10)$. Interpret this value. (c) Determine $C(50)$. Interpret this ...
##### Homework 7 (Sec 5.1-5.3), Introduction to Probability and Statistics (39256), problem 5.4.9: Several psychology students are unprepared for a surprise true/false test with 13 questions, and all of their answers are guesses...
Homework 7 (Sec 5.1-5.3), Introduction to Probability and Statistics (39256), problem 5.4.9: Several psychology students are unprepared for a surprise true/false test with 13 questions, and all of their answers are guesses...
##### Data for fiddler crabs analyzed by a company show that the allometric relationship between the weight C of the claw and the weight W of the body is given by C = 0.12W^1.71. Find the function that gives the rate of change of claw weight with respect to body weight.
Data for fiddler crabs analyzed by a company show that the allometric relationship between the weight C of the claw and the weight W of the body is given by C = 0.12W^1.71. We are given this relationship between claw weight and body weight. To find the function that gives the rate of change of claw weight with respect to body weight, we need to find the derivative of C with respect to W.
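The derivative is dC/dW = 0.12 · 1.71 · W^0.71 = 0.2052 · W^0.71. A finite-difference check (my addition):

```python
# My sketch: check dC/dW = 0.2052 * W**0.71 against a central difference.
C = lambda W: 0.12 * W ** 1.71
dC = lambda W: 0.2052 * W ** 0.71      # 0.12 * 1.71 = 0.2052

W, h = 5.0, 1e-6
numeric = (C(W + h) - C(W - h)) / (2 * h)
assert abs(numeric - dC(W)) < 1e-6
```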
##### A triangle's coordinates are (-2, 1), (-6, 1), and (-4,5). After the triangle is rotated clockwise 270° about the origin, what are the new coordinates?
A triangle's coordinates are (-2, 1), (-6, 1), and (-4,5). After the triangle is rotated clockwise 270° about the origin, what are the new coordinates?...
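A 270° clockwise rotation about the origin is the same as a 90° counterclockwise rotation, sending (x, y) to (−y, x). A small check (my addition):

```python
# 270 degrees clockwise about the origin: (x, y) -> (-y, x).
def rotate_270_cw(p):
    x, y = p
    return (-y, x)

triangle = [(-2, 1), (-6, 1), (-4, 5)]
print([rotate_270_cw(p) for p in triangle])
# -> [(-1, -2), (-1, -6), (-5, -4)]
```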
##### I need help finding Pi (the expected probability); it would really help if I could see steps on how to get Pi. 1) What's the probability of finding a DDT concentration less than or equal to 6.0 ppm?
I need help finding Pi (the expected probability); it would really help if I could see steps on how to get Pi. 1) What's the probability of finding a DDT concentration less than or equal to 6.0 ppm? Problem I) DDT concentrations were measured in gulls. The data have a mean of 10.1...
##### (a) If the population mean is 29, what is the probability that the sample mean leads to the conclusion "do not reject H0"? The sample size was given to be 185 and the population standard deviation was given to be σ = 7. The hypotheses are as follows: H0: μ ≥ 30, Ha: μ < 30.
(a) If the population mean is 29, what is the probability that the sample mean leads to the conclusion "do not reject H0"? The sample size was given to be 185 and the population standard deviation was given to be σ = 7. The hypotheses are as follows: H0: μ ≥ 30, Ha: μ < 30. Thus, the value considered is μ = 29, and n = 185.
##### 4. (10 points) The quality of a cookie production job has been examined, and the results are shown below. The same definition of defect used in Problem 3 is also used here. (a) Does the process producing the cookies appear to be in control? What actions would you take based on the inspection results?
4. (10 points) The quality of a cookie production job has been examined, and the results are shown below. The same definition of defect used in Problem 3 is also used here. (a) Does the process producing the cookies appear to be in control? What actions would you take based on the inspection results? Cook...
##### 7. The pH of a solution of 0.980 g of lead(II) hydroxide in 1.00 L is 7.49. Calculate the value of the solubility product constant of lead(II) hydroxide.
7. The pH of a solution of 0.980 g of lead(II) hydroxide in 1.00 L is 7.49. Calculate the value of the solubility product constant of lead(II) hydroxide....
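A sketch of the usual textbook calculation (my addition; it ignores the OH⁻ contributed by water, as the standard homework treatment does): Pb(OH)₂ → Pb²⁺ + 2 OH⁻, so at equilibrium [Pb²⁺] = [OH⁻]/2 and Ksp = [Pb²⁺][OH⁻]².

```python
# My sketch of the textbook-style estimate (neglects OH- from water).
pOH = 14.0 - 7.49          # pOH from the given pH
OH = 10 ** (-pOH)          # [OH-] ~ 3.1e-7 M
Pb = OH / 2                # stoichiometry: Pb(OH)2 -> Pb2+ + 2 OH-
Ksp = Pb * OH ** 2
print(f"{Ksp:.1e}")        # -> 1.5e-20
```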
##### Draw and label the hybrid orbitals for the bonds of CH3Cl. (Lone-pair orbitals are optional.) Use electron arrows to arrive at one resonance structure for each of the following. (Full credit only for using arrows correctly and providing correct resonance structures.) Complete the following Brønsted acid/base equation: H2SO4 + CH3CH2OH → ... Since this reaction proceeds as written, circle the acid among the reactants and products with the higher pKa.
Draw and label the hybrid orbitals for the bonds of CH3Cl. (Lone-pair orbitals are optional.) Use electron arrows to arrive at one resonance structure for each of the following. (Full credit only for using arrows correctly and providing correct resonance structures.) Complete the following Brønsted acid/base equation: H2SO4 + CH3CH2OH → ... Since this reaction proceeds as written, circle the acid among the reactants and products with the higher pKa....
##### PPACA/Obamacare provides additional medical support for medical research and the National Institutes of Health. Select one:...
PPACA/Obamacare provides additional medical support for medical research and the National Institutes of Health. Select one: a. true b. false...
##### Complete each factoring. $-3 x^{2}-15 x=-\underline{\hspace{2em}}\left(x^{2}+5 x\right)$
Complete each factoring. $-3 x^{2}-15 x=-\underline{\hspace{2em}}\left(x^{2}+5 x\right)$...
|
|
Finding the Orthogonal Projection from a Matrix where $a_{ij} = \vec v_i\cdot \vec v_j$
I am working on a problem that is asking to find $$\operatorname{proj}_Vv_3$$ where the matrix $$A=\begin{bmatrix} 3&5&11\\ 5&9&20\\ 11&20&49\end{bmatrix}$$ has entries $$a_{ij}=\vec v_i\cdot \vec v_j$$. I thought that the simplest way to answer this question was to use the formula $$(u_1,x)u_1+(u_2,x)u_2 = \operatorname{proj}_Vx .$$
Since the question also asks to express the solution as a linear combination of $$v_1$$ and $$v_2$$, I assumed that I could just plug the numbers in. I knew I had to make the vectors unit vectors so my equation looked something like this:
$$3(5)(v2)+ 1/11(v3)$$ This answer is not the correct answer but I am wondering if I am reading the matrix in the wrong way or if I am confused about the usage of the formula. Does someone have a better idea?
• What is $v$ here? – amd Apr 1 '19 at 1:16
The formula as you wrote it makes no sense. The correct version, since $$V=\operatorname{span}\{v_1,v_2\}$$, is $$\operatorname{proj}_V(v_3)=\frac{v_3\cdot v_1}{v_1\cdot v_1}\,v_1+\frac{v_3\cdot v_2}{v_2\cdot v_2}\,v_2.$$ Reading the four numbers from the matrix we get $$\operatorname{proj}_V(v_3)=\frac{a_{13}}{a_{11}}\,v_1+\frac{a_{23}}{a_{22}}\,v_2=\frac{11}{3}\,v_1+\frac{5}{9}\,v_2.$$
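One caveat worth recording (my addition, not part of the original answer): the two-term formula is exact when $v_1\perp v_2$. Here $v_1\cdot v_2=a_{12}=5\neq0$, so the fully general recipe solves the $2\times2$ Gram system $Gc=g$ for the coefficients of $\operatorname{proj}_V(v_3)=c_1v_1+c_2v_2$, which also needs only entries of $A$:

```python
# My sketch: projection coefficients from the Gram system
#   [a11 a12] [c1]   [a13]
#   [a12 a22] [c2] = [a23]
# using only the entries of A (no choice of concrete vectors needed).
a11, a12, a22 = 3, 5, 9
a13, a23 = 11, 20

det = a11 * a22 - a12 * a12          # = 2
c1 = (a22 * a13 - a12 * a23) / det   # Cramer's rule
c2 = (a11 * a23 - a12 * a13) / det
print(c1, c2)                        # -> -0.5 2.5
```

Whether the intended answer is the two-term formula or the Gram system depends on whether the problem assumes $v_1,v_2$ orthogonal.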
|
|
## anonymous 5 years ago what is the derivative of ln(sqrt(x^2))? Thanks in advance!
1. M
1/x?
2. M
$\ln \sqrt{x ^{2}}$
3. M
is that what you mean?
4. anonymous
ln(sqrt(x^2)) = ln x, so (d/dx) ln x = 1/x, right?
5. M
yes
6. anonymous
$(d / dx) \ln \sqrt{x^{2}}$
7. anonymous
so its ok to go ahead and cancel the exponent with the square root and then just do ln (x) ?
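A quick numeric sanity check of the thread's conclusion (my addition, not part of the transcript): sqrt(x^2) = |x|, so ln(sqrt(x^2)) = ln|x| and its derivative is 1/x for every x ≠ 0, including negative x.

```python
# My check: central finite differences of ln(sqrt(x^2)) agree with 1/x
# on both sides of zero, because sqrt(x**2) = |x|.
import math

def f(x):
    return math.log(math.sqrt(x ** 2))

for x in (2.0, -2.0, 0.5, -0.5):
    h = 1e-6
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - 1 / x) < 1e-6
```

So cancelling the square root against the square is fine for the derivative, as long as it is read as |x| rather than x.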
|
|
# Short Term Analysis – November 1, 2006
GBPUSD
GBPUSD is in up trend and further rise towards 1.9142 (the day high of August 8) is still possible in the next couple of days, and a sideways consolidation is needed before breaking above 1.9142. Near term support is at the bottom of the price channel; a break below this support may signal a consolidation of the up trend.
AUDUSD
AUDUSD is in up trend and further rise towards 0.7791 (the day high of May 11) is still possible in the next couple of days, and a sideways consolidation is needed before breaking above 0.7791. Key support is now at 0.7671, and as long as the key support holds, the pair stays in up trend.
EURUSD
The resistance at 1.2749 is broken above and EURUSD is back to up trend. Further rise towards 1.2829 previous high is still possible in the next couple of days. Near term support is the up trend line, and as long as the trend line support holds, up trend will continue.
|
|
## Is there a nice proof of the fact that there are (p-1)/24 supersingular elliptic curves in characteristic p?
If $k$ is a characteristic $p$ field containing a subfield with $p^2$ elements (e.g., an algebraic closure of $\mathbb{F}_p$), then the number of isomorphism classes of supersingular elliptic curves over $k$ has a formula involving $\lfloor p/12 \rfloor$ and the residue class of $p$ mod 12, described in Chapter V of Silverman's The Arithmetic of Elliptic Curves. If we weight these curves by the reciprocals of the orders of their automorphism groups, we obtain the substantially simpler Eichler-Deuring mass formula: $\frac{p-1}{24}$. For example, when $p=2$, the unique supersingular curve $y^2+y=x^3$ has endomorphisms given by the Hurwitz integers (a maximal order in the quaternions), and its automorphism group is therefore isomorphic to the binary tetrahedral group, which has order 24.
Silverman gives the mass formula as an exercise, and it's pretty easy to derive from the formula in the text. The proof of the complicated formula uses the Legendre form (hence only works away from 2), and the appearance of the $p/12$ boils down to the following two facts:
1. Supersingular values of $\lambda$ are precisely the roots of the Hasse polynomial, which is separable of degree $\frac{p-1}2$.
2. The $\lambda$-line is a 6-fold cover of the $j$-line away from $j=0$ and $j=1728$ (so the roots away from these values give an overcount by a factor of 6).
Question: Is there a proof of the Eichler-Deuring formula in the literature that avoids most of the case analysis, e.g., by using a normal form of representable level?
I suppose any nontrivial level structure will probably require some special treatment for the prime(s) dividing that level. Even so, it would be neat to see any suitably holistic enumeration, in particular, one that doesn't need to single out special $j$-invariants.
(This question has been troubling me for a while, but Greg's question inspired me to actually write it down.)
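Not an answer, but the mass formula is easy to check numerically for small primes (my own sketch; it uses the standard facts that for $p\ge5$ a curve over $\overline{\mathbb{F}_p}$ is supersingular iff its trace of Frobenius over $\mathbb{F}_p$ vanishes, and that $|\operatorname{Aut}|$ is 6 at $j=0$, 4 at $j=1728$, and 2 otherwise):

```python
# My numeric check of the Eichler-Deuring mass formula for small p >= 5.
from fractions import Fraction

def chi(n, p):                      # Legendre symbol as +1 / 0 / -1
    if n % p == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def supersingular_js(p):
    js = []
    for j in range(p):
        if j == 0:
            a, b = 0, 1             # y^2 = x^3 + 1
        elif j == 1728 % p:
            a, b = 1, 0             # y^2 = x^3 + x
        else:                       # standard curve with j-invariant j
            a = 3 * j * (1728 - j) % p
            b = 2 * j * (1728 - j) ** 2 % p
        trace = -sum(chi(x ** 3 + a * x + b, p) for x in range(p))
        if trace == 0:              # supersingular iff a_p = 0 (p >= 5)
            js.append(j)
    return js

for p in (5, 7, 11, 13, 17, 19, 23):
    mass = sum(Fraction(1, 6 if j == 0 else 4 if j == 1728 % p else 2)
               for j in supersingular_js(p))
    assert mass == Fraction(p - 1, 24)
```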
-
In Katz-Mazur there's an elegant uniform proof using the geometry of mod-$p$ modular curves; no mucking around with Legendre model or numerology of 1728, etc. I think it's near the end of some chapter; maybe end of chapter 11 or 12? – BCnrd May 14 2010 at 6:06
This argument goes back to Grothendieck. The only thing that Grothendieck didn't have, which makes a proof of the formula much more natural, is the formula for the degree of a line bundle on a DM-stack in terms of the zero set counted as one does on stacks. – Torsten Ekedahl May 14 2010 at 7:08
One argument (maybe not of the kind you want) is to use the fact that the wt. 2 Eisenstein series on $\Gamma_0(p)$ has constant term (p-1)/24.
More precisely: if $\{E_i\}$ are the s.s. curves, then for each $i,j$, the Hom space $L_{i,j} := Hom(E_i,E_j)$ is a lattice with a quadratic form (the degree of an isogeny), and we can form the corresponding theta series $$\Theta_{i,j} := \sum_{n = 0}^{\infty} r_n(L_{i,j})q^n,$$ where as usual $r_n(L_{i,j})$ denotes the number of elements of degree $n$. These are wt. 2 forms on $\Gamma_0(p)$.
There is a pairing on the $\mathbb Q$-span $X$ of the $E_i$ given by $\langle E_i,E_j\rangle =$ # $Iso(E_i,E_j),$ i.e. $$\langle E_i,E_j\rangle = 0 \text{ if } i \neq j\text{ and equals # }Aut(E_i) \text{ if }i = j,$$ and another formula for $\Theta_{i,j}$ is $$\Theta_{i,j} := 1 + \sum_{n = 1}^{\infty} \langle T_n E_i, E_j\rangle q^n,$$ where $T_n$ is the $n$th Hecke correspondence.
Now write $x := \sum_{j} \frac{1}{\text{#}Aut(E_j)} E_j \in X$. It's easy to see that for any fixed $i$, the value of the pairing $\langle T_n E_i,x\rangle$ is equal to $\sum_{d |n , (p,d) = 1} d$. (This is just the number of $n$-isogenies with source $E_i,$ where the target is counted up to isomorphism.) Now $$\sum_{j} \frac{1}{\text{#}Aut(E_j)} \Theta_{i,j} = (\sum_{j} \frac{1}{\text{#}Aut(E_j)}) + \sum_{n =1}^{\infty} \langle T_n E_i, x\rangle q^n = (\sum_{j}\frac{1}{\text{#}Aut(E_j)}) + \sum_{n = 1}^{\infty} (\sum_{d | n, (p,d) = 1} d)q^n.$$
Now the LHS is modular of wt. 2 on $\Gamma_0(p)$, thus so is the RHS. Since we know all its Fourier coefficients besides the constant term, and they coincide with those of the Eisenstein series, it must be the Eisenstein series. Thus we know its constant term as well, and that gives the mass formula.
(One can replace the geometric aspects of this argument, involving s.s. curves and Hecke correspondences, with pure group theory/automorphic forms: namely the set $\{E_i\}$ is precisely the idele class set of the multiplicative group $D^{\times}$, where $D$ is the quat. alg. over $\mathbb Q$ ramified at $p$ and $\infty$. This formula, writing the Eisenstein series as a sum of theta series, is then a special case of the Siegel--Weil formula, I believe, which in general, when you pass to constant terms, gives mass formulas of the type you asked about.)
-
You can also do this "topologically". The idea is to separately calculate the (l-adic etale) Euler characteristics of the stack Y_0(p) in char. 0 and in char. p, then see how they must relate. Here we go:
Over C, and hence over Q_p-bar as well, the Euler characteristic of Y_0(p) is (p+1)*(-1/12), since Y_0(p) is a (p+1)-fold cover of Y=M_{ell}.
On the other hand, over F_p-bar, up to "homeomorphism" Y_0(p) is two copies of Y glued at the supersingular points, so the Euler characteristic is 2*(-1/12) - S (where S is the "number" of supersingular elliptic curves).
However, since Y_0(p) has semi-stable reduction (and is constant at infinity) over Z_p, the special fiber is gotten from the generic fiber by contracting a bunch of "circles" to points, one for each nodal point of the special fiber; thus our second (char p) Euler characteristic is equal to our first (char 0) Euler characteristic plus S. Comparing and solving for S gives the formula pretty quickly.
-
I like this argument. It has a very conversational flavor. – S. Carnahan♦ May 15 2012 at 7:15
Must be all the quotation marks. Anyway, glad you like it. – Dustin Clausen May 15 2012 at 7:17
This will be a summary of the proof referenced by BCnrd in the comments, for the benefit of those that haven't looked through Katz, Mazur, Arithmetic Moduli of Elliptic Curves. A scan is available as an unsearchable djvu near the bottom of Katz's web page. I'd like to point out in particular just how different it is from Emerton's proof, and how the textbook proof I summarized in the statement of the question is essentially the same as this one, but trades some conceptual clarity for simplicity of vocabulary. The relevant corollary (12.4.6) is on page 358, which is page 185 of the scan.
Katz and Mazur start with a moduli problem $P$ (i.e., a functor from $(Ell)$ to Sets) representable by a scheme $M(P)$. We assume that $P$ is defined over an algebraic closure of $\mathbb{F}_p$, and is finite étale over $(Ell/\overline{\mathbb{F}_p})$. This means one has an étale surjection from $M(P)$ to the stack of elliptic curves over $\overline{\mathbb{F}_p}$. There is a distinguished section of the line bundle $\omega_P^{p-1}$, called the Hasse invariant. It is defined as the differential of the Verschiebung, which I will explain a bit later, and it satisfies two key properties:
1. The Hasse invariant vanishes if and only if the curve is supersingular.
2. All zeroes have multiplicity one (Igusa's theorem).
By general line bundle arithmetic, the total number of zeroes of this section in $P$, counting multiplicity, is equal to $p-1$ times the degree of $\omega$, or equivalently, $\frac{p-1}{24}$ times the degree of $P$ over (Ell).
To complete the proof, one shows that the preimage of a point under the composition $M(P) \to \mathcal{M}_{Ell} \to \mathbb{A}^1_j$ has size equal to the degree of $P$ over $(Ell)$ divided by the order of the automorphism group of the underlying elliptic curve, by using representability to deduce the freeness of the group action. This yields $$\deg P \cdot \sum_{\text{supersingular } j} \frac{1}{|\operatorname{Aut} E_j|} = \deg P \cdot \frac{p-1}{24},$$ and we are done.
The Verschiebung and the Hasse invariant deserve some additional explanation. For any $\mathbb{F}_p$-scheme $X$, there is an absolute Frobenius map $X \to X$ which on affines is the functor $\operatorname{Spec}$ applied to the $p$-th power map. For an $\mathbb{F}_p$-scheme $S$ and an $S$-scheme $X$, one obtains an $S$-scheme $X^{(p)}$ as the pullback of $X$ over the absolute Frobenius on $S$. By the universal property of pullbacks, the absolute Frobenius on $X$ factors through an $S$-map $X \to X^{(p)}$, called the relative Frobenius. Over an elliptic curve $E \to S$, this map turns out to be an isogeny of degree $p$. The Verschiebung $V: E^{(p)} \to E$ is defined as the dual to the relative Frobenius isogeny $F: E \to E^{(p)}$, and the multiplication by $p$ map on $E$ factors as $[p] = VF$.
The kernel of Frobenius is always a connected group scheme of length $p$, and the kernel of the Verschiebung over a field is connected if and only if the full $p$-torsion subgroup is connected. The latter case can be taken as a definition of supersingularity over geometric points, and for general $\mathbb{F}_p$-schemes, any family of elliptic curves with at least one supersingular geometric fiber is called supersingular. The Hasse invariant is the induced map on $S$-Lie algebras: $\text{Lie}V: \text{Lie}(E/S)^{(p)} \to \text{Lie}(E/S)$. It is an isomorphism if and only if $E$ is not supersingular (i.e., $E$ is ordinary), as one can calculate by examining formal groups. We can write it as an element of $\text{Hom}(\text{Lie}(E/S)^{(p)}, \text{Lie}(E/S)) = \text{Hom}(\text{Lie}(E/S)^{\otimes p}, \text{Lie}(E/S)) = H^0(S, \omega_{E/S}^{\otimes p-1})$. It is a modular form of weight $p-1$, and its $q$-expansion is identically 1.
The fact that $\omega_P$ has degree $\frac{1}{24}\deg P$ is still a bit of a conceptual mystery to me. It is proved by first calculating it for a full level 3 structure (which has degree 24) and then transferring to all other representable moduli problems.
-
One could try to compute the degree of $\omega_P$ in the following way: $\omega_P^{\otimes 12}$ has a section, namely $\Delta$, which has zeroes precisely at the cusps. If we work over $(Ell)$ itself, there is a single cusp where it has a simple zero. But probably one has some phenomena of the following sort: the Tate curve corresponding to the cusp at infinity has automorphisms of order 2, and so the true (i.e. stack theoretic) order of vanishing there is 1/2. Hence deg $\omega^{\otimes 12} = 1/2$, and so deg $\omega = 1/24$. – Emerton Jun 15 2010 at 21:24
Whether or not this is completely correct as written, I think that one should think of the degree of $\omega$ as coming from the existence of the section $\Delta$ which is non-zero at all non-cuspidal points. – Emerton Jun 15 2010 at 21:25
Thanks, that is helpful. I should have thought of $\Delta$. – S. Carnahan Jun 15 2010 at 22:22
|
|
# Pressure Transmitter
A pressure transmitter is a device for measuring the pressure of gases or liquids. Pressure is an expression of the force required to stop a fluid from expanding, and is usually stated in terms of force per unit area. A pressure sensor usually acts as a transducer; it generates a signal as a function of the applied pressure. For the purposes of this article, such a signal is electrical.
Pressure sensors are used for control and monitoring in thousands of everyday applications. Pressure transmitters can also be used to indirectly measure other variables such as fluid/gas flow, speed, water level and altitude. A pressure transmitter may alternatively be called a pressure transducer, pressure sender, pressure indicator, piezometer or manometer, among other names.
Pressure sensors can vary greatly in technology, design, performance, application suitability and cost. A conservative estimate would be that there may be more than 50 technologies in use around the world and at least 300 companies making pressure transmitters.
There is also a class of pressure transmitters designed to measure in dynamic mode to capture very rapid changes in pressure. Example applications for this type of sensor would be measuring combustion pressure in engine cylinders or gas turbines. These sensors are typically fabricated from a piezoelectric material such as quartz.
Some pressure sensors are pressure switches, which turn on or off at a particular pressure. For example, a water pump may be controlled by a pressure switch so that it starts when water is drawn from the system, reducing the pressure in the reservoir.
## Type of pressure measurement
Pressure sensors can be classified according to the pressure range they measure, their temperature range of operation and, most importantly, the type of pressure they measure. Pressure sensors are given different names according to their purpose, but the same technology may be used under different names.
• absolute pressure sensor
This sensor measures the pressure relative to a perfect vacuum. Absolute pressure sensors are used in applications where a constant reference is required, for example in high-performance industrial applications such as vacuum pump monitoring, liquid pressure measurement, industrial packaging, industrial process control and aviation inspection.
• gauge pressure sensor
This sensor measures pressure relative to atmospheric pressure. A tire pressure gauge is an example of gauge pressure measurement; when it indicates zero, the pressure it is measuring is the same as the ambient pressure. Most sensors for pressures up to about 50 bar are built this way, since otherwise fluctuations in atmospheric pressure (weather) would appear as an error in the measurement result.
• vacuum pressure sensor
This term can cause confusion. It may describe a sensor that measures pressures below atmospheric pressure, showing the difference between that low pressure and atmospheric pressure, but it may also describe a sensor that measures absolute pressure relative to a vacuum.
• differential pressure sensor
This sensor measures the difference between two pressures, one connected to each side of the sensor. Differential pressure sensors are used to measure many properties, such as pressure drops across oil filters or air filters, fluid level (by comparing the pressure at the top and bottom of the liquid) or flow rate (by measuring the pressure drop across a restriction). Technically speaking, most pressure sensors are really differential pressure sensors; for example, a gauge pressure sensor is simply a differential pressure sensor with one side open to the ambient environment.
• seal pressure sensor
This sensor is similar to a gauge pressure sensor, except that it measures pressure relative to a fixed reference pressure sealed in at manufacture, rather than to ambient atmospheric pressure (which varies with location and weather).
## Pressure-sensing technology
There are two basic categories of analog pressure sensors.

Force collector types: These electronic pressure transmitters typically use a force collector (such as a diaphragm, piston, Bourdon tube, or bellows) to measure the strain (or deflection) caused by a force applied over an area (pressure).
• piezoresistive strain gauge
Uses the piezoresistive effect of a bonded or formed strain gauge to detect strain due to applied pressure; resistance increases as the pressure deforms the material. Common technology types are silicon (monocrystalline), polysilicon thin film, bonded metal foil, thick film, silicon-on-sapphire and sputtered thin film. Typically, the strain gauges are connected to form a Wheatstone bridge circuit to maximize the output of the sensor and reduce susceptibility to errors. This is the most commonly employed sensing technique for general-purpose pressure measurement.
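To illustrate the Wheatstone bridge arrangement just described, here is a minimal sketch; the function name, the 350 Ω arm values and the 5 V supply are illustrative assumptions, not values from this article:

```python
def bridge_output(vs, r1, r2, r3, r4):
    """Differential output of a Wheatstone bridge driven by supply vs.

    Convention assumed here: r1/r2 form one voltage divider and r3/r4
    the other; the output is taken between the two divider midpoints,
    so a balanced bridge (all arms equal) gives exactly 0 V.
    """
    return vs * (r2 / (r1 + r2) - r4 / (r3 + r4))

# A balanced 350-ohm bridge outputs nothing; a 0.3% piezoresistive
# change in one arm yields a few millivolts, scaled by the supply.
print(bridge_output(5.0, 350.0, 350.0, 350.0, 350.0))  # 0.0
print(bridge_output(5.0, 350.0, 351.0, 350.0, 350.0))  # a few mV
```

Note that the output scales with the supply voltage as well as with the resistance change, which is the ratiometric behavior addressed in the correction section later in this article.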
• capacitive
Uses a diaphragm and pressure cavity to form a variable capacitor to detect stress due to applied pressure, capacitance decreases as pressure deforms the diaphragm. Common technologies use metal, ceramic and silicon diaphragms.
• Electromagnetic
Measures the displacement of a diaphragm by means of changes in inductance (reluctance), LVDT, the Hall effect, or the eddy current principle.
• piezoelectric
Uses the piezoelectric effect in some materials, such as quartz, to measure the strain upon the sensing mechanism due to pressure. This technique is commonly employed for the measurement of highly dynamic pressures. Since the basic principle is dynamic, no static pressure can be measured with a piezoelectric sensor.
• strain gauge
Strain gauge based pressure sensors also use a pressure-sensitive element to which metal strain gauges are bonded, or to which thin-film gauges are applied by sputtering. The measuring element can be a diaphragm, or metal foil gauges can be applied to a can-type measuring body. The big advantages of this monolithic can-type design are improved rigidity and the ability to measure the highest pressures, up to 15,000 bar. The electrical connection is typically made through a Wheatstone bridge, which allows good amplification of the signal and accurate and consistent measurement results.
• Optical
Techniques include the use of physical transformation of the optical fiber to detect the strain due to applied pressure. A common example of this type is the use of fiber Bragg gratings . This technology is employed in challenging applications where measurements may be highly remote under high temperatures, or may benefit from technologies inherently immune to electromagnetic interference. Another similar technique uses an elastic film built into layers that can change the reflected wavelength according to the applied pressure (strain).
• potentiometric
Uses wiper motion with a resistive mechanism to detect stress due to applied pressure.
• force balance
Force-balanced fused quartz Bourdon tube sensors use a spiral Bourdon tube to exert force on a mirrored pivot armature; the reflection of a beam of light from the mirror senses the angular displacement, and a current is applied to electromagnets on the armature to balance the force and bring the angular displacement back to zero. The applied current is used as the measurement. Due to the extremely stable and repeatable mechanical and thermal properties of fused quartz, and the force balancing that eliminates most nonlinear effects, these sensors can be accurate to approximately 1 ppm of full scale. [4] Because of the extremely fine fused quartz structures, which must be made by hand, and the specialist skills required to manufacture them, these sensors are usually limited to scientific and calibration purposes. Sensors of this type without force balancing are easier to manufacture owing to their larger size, but they have lower accuracy, since reading the angular displacement directly cannot be done with the same precision as a force-balance measurement, and they are no longer in common use.
Other types

These electronic pressure sensors use other properties (such as density) to infer the pressure of a gas or liquid.
• resonant
Uses changes in resonant frequency in a sensing mechanism to measure stress, or changes in gas density, caused by applied pressure. This technique may be used in conjunction with a force collector, such as those in the category above. Alternatively, the resonant technique may be employed by exposing the resonant element itself to the media, whereby the resonant frequency is dependent on the density of the media. Sensors of this type have been made from vibrating wire, vibrating cylinders, quartz and silicon MEMS. In general, this technique is considered to provide very stable readings over time.
One such pressure sensor, a resonant quartz crystal strain gauge with a Bourdon tube force collector, is the key sensor of DART. [5] DART detects tsunami waves from the bottom of the open ocean. It has a pressure resolution of approximately 1 mm of water when measuring pressure at a depth of several kilometers. [6]
• Thermal
Uses the change in thermal conductivity of a gas due to a change in density to measure pressure. A common example of this type is the Pirani gauge.
• ionization
Measures the flow of charged gas particles (ions) that change due to the density change to measure the pressure. Common examples are hot and cold cathode gauges.
## Application
There are many applications for pressure sensors:
• pressure sensing
This is where the measurement of interest is pressure itself, expressed as force per unit area. It is useful in weather instrumentation, aircraft, automobiles, and any other machinery that has pressure functionality implemented.
• height sensing
It is useful in aircraft, rockets, satellites, weather balloons and many other applications. All of these applications make use of the relationship between changes in pressure relative to altitude. This relationship is governed by the following equation:
h = \left(1 - (P/P_{\mathrm{ref}})^{0.190284}\right) \times 145366.45\ \mathrm{ft}
This equation is calibrated for an altimeter, up to 36,090 feet (11,000 m). Outside that range, an error will be introduced that can be calculated separately for each of the different pressure sensors. These error calculations will factor in the error introduced by changes in temperature as we go up.
Barometric pressure sensors can have an altitude resolution of less than 1 m, which is much better than GPS systems (about 20 m altitude resolution). In navigation applications, altimeters are used to distinguish between stacked road levels for car navigation and between floor levels in buildings for pedestrian navigation.
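The altitude formula above translates directly into code. The sketch below is illustrative (the function name and the 1013.25 hPa sea-level reference are assumptions based on the standard atmosphere), and, as noted, it is only calibrated up to 36,090 ft:

```python
def pressure_altitude_ft(p, p_ref=1013.25):
    """Pressure altitude in feet; p and p_ref must share units (hPa here)."""
    return (1.0 - (p / p_ref) ** 0.190284) * 145366.45

# At the reference pressure the computed altitude is zero; near
# 696.8 hPa (standard atmosphere) it comes out close to 10,000 ft.
print(pressure_altitude_ft(1013.25))
print(pressure_altitude_ft(696.8))
```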
• flow sensing
This is the use of pressure sensors in conjunction with the Venturi effect to measure flow. Differential pressure is measured between two segments of a venturi tube that have different apertures. The flow rate through the venturi tube is proportional to the square root of the pressure difference between the two segments. Low-pressure sensors are almost always needed because the pressure difference is relatively small.
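A sketch of the venturi calculation follows; the discharge coefficient of 0.98 and the pipe diameters are illustrative assumptions, and the formula is the classic Bernoulli result rather than anything specific to this article:

```python
from math import pi, sqrt

def venturi_flow(dp_pa, d1_m, d2_m, rho=1000.0, cd=0.98):
    """Volumetric flow (m^3/s) through a venturi from the measured
    differential pressure: Q = Cd*A2*sqrt(2*dp / (rho*(1 - (A2/A1)**2)))."""
    a1 = pi * (d1_m / 2.0) ** 2   # upstream cross-section
    a2 = pi * (d2_m / 2.0) ** 2   # throat cross-section
    return cd * a2 * sqrt(2.0 * dp_pa / (rho * (1.0 - (a2 / a1) ** 2)))
```

Because the flow grows only with the square root of the differential pressure, a wide flow range maps onto a narrow pressure range, which is why a low-range differential sensor is almost always needed.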
• Level / Depth Sensing
A pressure sensor can also be used to calculate the level of a fluid. This technique is commonly employed to measure the depth of a submerged body (such as a diver or submarine), or the level of contents in a tank (such as in a water tower). For most practical purposes, fluid level is directly proportional to pressure. In the case of fresh water, where the contents are open to atmospheric pressure, 1 psi ≈ 27.7 inH2O and 1 mmH2O ≈ 9.81 Pa. The basic equation for such a measurement is
P=\rho gh
where P = pressure, ρ = density of the fluid, g = standard gravity, and h = height of the fluid column above the pressure sensor.
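As a minimal sketch of the level calculation (the constant and function names are illustrative; gauge pressure in pascals and fresh water are assumed):

```python
RHO_WATER = 1000.0   # kg/m^3, fresh water
G = 9.80665          # m/s^2, standard gravity

def column_height_m(p_gauge_pa, rho=RHO_WATER):
    """Height of fluid above the sensor, inverted from P = rho*g*h."""
    return p_gauge_pa / (rho * G)

print(column_height_m(98066.5))  # about 10 m of fresh water
```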
• Leak test
A pressure sensor can be used to sense the decay of pressure due to a system leak. This is commonly done either by comparison to a known leak using differential pressure, or by using the pressure sensor to measure pressure change over time.
## Ratiometric correction of transducer output
Piezoresistive transducers configured as Wheatstone bridges often exhibit ratiometric behavior with respect not only to the measured pressure but also to the transducer supply voltage.
V_{\mathrm{out}} = \frac{P \times K \times Vs_{\mathrm{actual}}}{Vs_{\mathrm{ideal}}}
where:
Vout is the output voltage of the transducer,
P is the true measured pressure,
K is the nominal transducer scale factor (given an ideal transducer supply voltage), in units of voltage per unit pressure,
Vs_actual is the actual transducer supply voltage,
Vs_ideal is the ideal (nominal) transducer supply voltage.
Correcting a measurement from a transducer exhibiting this behavior requires measuring the actual transducer supply voltage as well as the output voltage and applying the inverse transformation of this behavior to the output signal:
P = \frac{V_{\mathrm{out}} \times Vs_{\mathrm{ideal}}}{K \times Vs_{\mathrm{actual}}}
NOTE: Common mode signals often present in transducers configured as Wheatstone bridges are not considered in this analysis.
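The inverse transformation above is a one-liner in code; the sketch below (the function name, the 5 V nominal supply and the round-trip numbers are illustrative assumptions) also forward-models a reading and then corrects it:

```python
def corrected_pressure(v_out, vs_actual, k, vs_ideal=5.0):
    """Invert the ratiometric behavior: P = Vout * Vs_ideal / (K * Vs_actual)."""
    return (v_out * vs_ideal) / (k * vs_actual)

# Forward-model a transducer on a sagging 4.8 V supply, then correct it.
p_true, k = 100.0, 0.01                   # pressure units; volts per unit
v_out = p_true * k * 4.8 / 5.0            # the behavior described above
print(corrected_pressure(v_out, 4.8, k))  # recovers p_true (to rounding)
```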
|
|
Volume 256 - 34th annual International Symposium on Lattice Field Theory (LATTICE2016) - Theoretical Developments
't Hooft model on the lattice
M. Garcia Perez, A. Gonzalez-Arroyo,* L. Keegan, M. Okawa
*corresponding author
Full text: pdf
Pre-published on: February 17, 2017
Published on: March 24, 2017
Abstract
Lattice results are presented for the meson spectrum of 1+1 dimensional gauge theory at large $N$, using the Twisted Eguchi-Kawai model. Comparison is made to the results obtained by 't Hooft in the light-cone gauge.
DOI: https://doi.org/10.22323/1.256.0337
Open Access
|
|
# How can we Define our own Simple Environment for Source Code?
Inside of our document we wish to define a new environment for blocks of source code
\newenvironment{srcblock}%
% Define the environment
Later, we will use the environment:
\begin{srcblock} % use the environment
% source code here
\end{srcblock}
We seek the following behavior:
• Each line in a src code block is numbered
• The font used for a src code block is monospaced
• A border is drawn around a source code block, much like \fbox does
• Each code block "floats" to a position so that it will fit all on one page.
• What will srcblock contain in terms of characters? verbatim-like content, perhaps (symbols like #, _, $, ^, ...)? – Werner Jan 7 at 21:05
• Did you already check the listings package? This should be straight-forward with it. Just read the manual. – Martin Scharrer Jan 7 at 21:34
• Since you have some responses below that seem to answer your question, please consider marking one of them as 'Accepted' by clicking on the tickmark below their vote count (see How do you accept an answer?). This shows which answer helped you most, and it assigns reputation points to the author of the answer (and to you!). It's part of this site's idea to identify good questions and answers through upvotes and acceptance of answers. – user36296 Jan 29 at 15:14
## 2 Answers
Variation on my answer here: Seemingly trivial: verbatim, newenvironment, and colorbox. It requires a small modification to my verbatimbox package.
\documentclass{article}
\usepackage{xcolor,caption,lipsum}
\usepackage{verbatimbox}
\makeatletter
\newenvironment{cverbbox}[5][]{%
  \setcounter{VerbboxLineNo}{0}%
  \def\verbatim@processline{%
    % THE FIRST #1 ACCOUNTS FOR NON-PRINTING COMMANDS; THE SECOND #1 IS FOR
    % PRINTED OPTIONAL MATERIAL
    {\addtocounter{VerbboxLineNo}{1}%
    #1\setbox0=\hbox{#1\the\verbatim@line}%
    \hsize=\wd0 \the\verbatim@line\par}}%
  \@minipagetrue%
  \@tempswatrue%
  \global\edef\sv@name{\@macro@name{#2}}%
  \global\edef\cverbboxColor{#4}%
  \global\edef\cverbboxFColor{#5}%
  \@ifundefined{\sv@name content}{%
    \expandafter\newsavebox\expandafter{\csname\sv@name content\endcsname}%
  }%
  \expandafter\global\expandafter\edef\csname\sv@name\endcsname{\usebox{%
    \csname\sv@name content\endcsname}}%
  \setbox0=\vbox\bgroup\color{#3}
  \verbatim
}
{%
  \endverbatim%
  \unskip\setbox0=\lastbox %
  \egroup%
  \setbox1=\hbox{%
    \colorbox{\cverbboxColor}{\box0}}%
  \global\sbox{\csname\sv@name content\endcsname}%
  {\fboxsep=\fboxrule\colorbox{\cverbboxFColor}{\box1}}%
}
\makeatother
\fboxrule=1pt\fboxsep=3pt\relax
\begin{document}
\lipsum[1]

\begin{figure*}[ht]
\begin{cverbbox}[\textcolor{black}{\scriptsize\theVerbboxLineNo~~}]{\mycvbox}{red!80}{blue!10}{cyan}
\verbatim <Text> Here %$#@&^* \macros
xa
\end{cverbbox}
\centerline{\mycvbox}
\caption*{My code chunk}
\end{figure*}
\lipsum[2]
\end{document}
This approach is not a simple environment but a standard minted environment nested in your own float environment. It is not exactly what was asked, but:
Each line in a src code block is numbered (optionally)
The font used for a src code block is monospaced (and syntax highlited)
A border is drawn around a source code block, much like \fbox does (with a top caption inside using \stdcaption; for a bottom caption outside the box use \caption*)
Each code block "floats" to a position (concretely to top of a new page) so that it will fit all on one page.
\documentclass[a6paper]{article}
\usepackage[margin=1cm, paperheight=5in,paperwidth=5in]{geometry}
\pagestyle{empty}
\usepackage{lipsum}
\let\stdcaption\caption
\setcounter{totalnumber}{1}
\usepackage{minted}
\usepackage{float}
\floatstyle{boxed}
\newfloat{fancycode}{t}{loa}
\floatname{fancycode}{LaTeX code}
\begin{document}
\begin{fancycode}
\stdcaption[Basic Structure]{The basic structure of a \LaTeX\ document.}
\begin{minted}[linenos,bgcolor=orange!05]{latex}
\documentclass{article}
\begin{document}
\end{document}
\end{minted}
\end{fancycode}
\begin{fancycode}
\stdcaption[Preamble]{The \LaTeX\ preamble to do a \texttt{fancycode}.}
\begin{minted}[linenos,bgcolor=cyan!10]{latex}
\usepackage{minted}
\usepackage{float}
\floatstyle{boxed}
\newfloat{fancycode}{t}{loa}
\floatname{fancycode}{LaTeX code}
\end{minted}
\end{fancycode}
\lipsum[1-2]
\listof{fancycode}{Code examples}
\end{document}
|
|
Earth, Moon, and Planets. Journal Prestige (SJR): 0.63; Citation Impact (CiteScore): 1. Hybrid journal (it can contain Open Access articles). ISSN (Print) 1573-0794; ISSN (Online) 0167-9295. Published by Springer-Verlag.
• Revisiting Lunar Seismic Experiment Data Using the Multichannel Simulation
with One Receiver (MSOR) Approach and Random Field Modeling
• Abstract: Abstract Major advancements in surface wave testing over the past 2 decades have led researchers to revisit and re-analyze archived seismic records, particularly those involving measurements on the Moon. The goal of such recent efforts with lunar seismic measurements has been to gain further insights into lunar geology. We examined the active seismic data from the Apollo 16 mission for their surface wave information using a multichannel approach. The inversion of Rayleigh surface waves provided a subsurface estimate for the uppermost 8 m of the lunar subsurface with the shear wave velocities varying from 40 to 50 m/s at the surface to velocities in the range of 95–145 m/s with an average of 120 m/s at a depth of about 7 m. Generally, the results from this inversion demonstrated good agreement with previous studies. Also, we carried out numerical modeling of wave propagation in a highly-heterogeneous domain to examine the effects of such anomalous features on the acquired seismograms. Results confirmed that a sharp-contrast bi-material domain can indeed produce significant coda wave as reflected on the lunar seismic traces.
PubDate: 2020-10-14
• Automated Extraction of Crater Rims on 3D Meshes Combining Artificial
Neural Network and Discrete Curvature Labeling
• Abstract: Abstract One of the challenges of planetary science is the age determination of geological units on the surface of the different planetary bodies in the solar system. This serves to establish a chronology of the geological events occurring on these different bodies, hence to understand their formation and evolution processes. An approach for dating planetary surfaces relies on the analysis of the impact crater densities with size. Approaches have been proposed to automatically detect impact craters in order to facilitate the dating process. They rely on color values from images or elevation values from Digital Elevation Models (DEM). In this article, we propose a new approach for crater detection, more specifically using their rims. The craters can be characterized by a round shape that can be used as a feature. The developed method is based on an analysis of the DEM geometry, represented as a 3D mesh, followed by curvature analysis. The classification process is done with one layer perceptron. The validation of the method is performed on DEMs of Mars, acquired by a laser altimeter aboard NASA’s Mars Global Surveyor spacecraft and combined with a database of manually identified craters. The results show that the proposed approach significantly reduces the number of false negatives compared to others based on topographic information only.
PubDate: 2020-10-08
• RETRACTED ARTICLE: Aspect Sensitivity Considerations in Interpreting Radar
• PubDate: 2020-08-01
• Study of Coronal Mass Ejections Succeeding the Associated X-Ray and
γ-Ray Burst Solar Flares
• Abstract: This study is dedicated to the investigation of the characteristics of CMEs following the associated X-ray and γ-ray burst solar flares. We investigated 14,786 CME events and 5,092 Gamma Burst Monitor (GBM) solar flare events recorded during the solar period 2008–2017, and found 503 (about 10%) CME events associated with GBM post-flare events (hereafter, GBM post-flare–CME). All 503 of these events (100%) are associated with solar flares detected simultaneously in both GBM and RHESSI (γ-ray solar flares possibly associated with X-rays). The CMEs associated with GBM post-flare events are wider than non-associated CME events. These results indicate that, as the flare's flux increases, the width of the associated CME increases. The gamma-burst solar flares accelerate CMEs, but to a lesser extent than do non-associated events or events associated with X-ray solar flares only. The GBM post-flare–CME associated events have a mean speed approximately near the solar wind average speed (which is less than the speed of CMEs associated with X-ray solar flares only) and are faster than non-associated events. The dominant CME initial speed of the GBM post-flare–CME associated events is ~300 km/s. The CME mean mass of the GBM post-flare–CME associated events indicates that CMEs occurring after a solar flare are on average more massive than other CMEs. The relationship between the mass of the GBM post-flare–CME associated events and the CME width was found to be of the form: (CME Mass) = −8.6 × 10+14 + 2.9 × 10+13 × (CME width).
PubDate: 2020-07-21
• Editorial
• PubDate: 2020-07-16
• Exposure Ages, Noble Gases and Nitrogen in the Ordinary Chondrite Karimati
(L5)
• Abstract: Abstract Noble gas and nitrogen isotopic compositions of Karimati ordinary (L5) chondrite are presented. Aliquots of the meteorite were studied in two noble gas mass spectrometers. Its cosmic ray exposure (CRE) history, trapped noble gases and nitrogen isotopic systematic are examined. The compositions of Ne and Kr in this meteorite indicate presence of mixture of solar wind and Q trapped components. In addition to the primordial components, radiogenic 129Xe (from the decay of short-lived radioactive 129I) is observed in the two aliquots (129Xe/132Xe ranges between 1.054 and 1.311). The U/Th-4He and K-40Ar ages are discordant. U/Th-4He ages are younger than the K-40Ar ages, indicating loss of helium. The trapped N component is isotopically light analogous to Q gas/solar wind. The cosmic-ray exposure ages of the two aliquots are 16.1 ± 2.7 Ma and 16.6 ± 2.0 Ma based on the cosmogenic 21Nec and 38Arc concentrations.
PubDate: 2020-07-13
• Statistical Characteristics on SEPs, Radio-Loud CMEs, Low Frequency Type
PubDate: 2020-07-11
• The Subjectivity in Identification of Martian Channel Networks and Its
Implication for Citizen Science Projects
• Abstract: Abstract The Martian surface is incised by numerous valley networks, which indicate the planet experienced sustained widespread flowing water in the past (e.g. Carr in Water on Mars, Oxford University Press, New York, 1996; Phil Trans R Soc A 370:2193–2212, 2012. https://doi.org/10.1098/rsta.2011.0500). Examining the distribution and geometries of these valley networks provides invaluable information about the Martian climate during the period of formation. The recent advancement in high resolution images has provided an opportunity to build upon past valley maps of Mars (Bahia et al. in LPSC 2018, 2018), however, the identification of these valley networks is extremely time-consuming. A citizen science project may aid in reducing this time-consuming process; this project conducts a valley mapping task with participants of varying expertise in valley mapping to determine whether a citizen science project of this kind should be worth pursuing. This was conducted in a region adjacent to Vogel Crater (36.1° S, 10.2° W). Repeated mapping of the area (a repeatability test) found that participants with low experience in valley mapping (22 a-level physics student’s representative of the public) were inconsistent when mapping valleys. Additionally, when comparing the results of participants within this group (a reproducibility test), the majority of reproduced valleys are false positives (i.e. incorrectly traced valleys). These results were consistent with those found for the medium experience group (45 2nd year geology undergraduates). The validated tracings of the low experience group improve upon the number and total length of valleys mapped by previous studies (Hynek et al. in J Geophys Res 115:1–14, 2010). To validate these valleys requires the input of an expert to remove false positives which is less time consuming than manually mapping the images; this may indicate that a citizen science project is worth pursuing. 
However, to effectively identify the maximum amount of valleys an expert is required.
PubDate: 2020-02-19
• Solar Eclipses and the Surface Properties of Water
• Abstract: Abstract During four solar eclipse events (two annular, one total and one partial) a correlation was observed between a change in water surface tension and the magnitude of the optical coverage. During one eclipse, evaporation experiments were carried out which showed a reduction in water evaporation at the same time as a rise in the surface tension. The changes did not occur on a day without a solar eclipse and are not correlated to changes in temperature, pressure, humidity of the environment. The effects are delayed by 20, 85, 30 and 37 min, respectively, compared to the maximum eclipse. Possible mechanisms responsible for this effect are presented, the most likely hypothesis being reduced water/muon interaction due to solar wind and cosmic radiation blocking during an eclipse. As an alternative hypotheses, we propose a novel neutrino/water interaction and overview of other, less likely mechanisms.
PubDate: 2019-09-30
• Dark Matter Objects: Possible New Source of Gravitational Waves
• Abstract: Abstract Gravitational waves from mergers of black holes and neutron stars are now being detected by LIGO. Here we look at a new source of gravitational waves, i.e., a class of dark matter objects whose properties were earlier elaborated. We show that the frequency of gravitational waves and strains on the detectors from such objects (including their mergers) could be within the sensitivity range of LIGO. The gravitational waves from the possible mergers of these dark matter objects will be different from those produced by neutron star mergers in the sense that they will not be accompanied by electromagnetic radiation since dark matter does not couple with radiation.
PubDate: 2019-09-14
• Superfast Exoplanets and 9600 s
• Abstract: Abstract Motion of a substantial part of the superfast exoplanets is found to be in close resonance with the well-known “solar” timescale $$P_{0} \approx 0.11$$ days and/or the timescale $$2P_0/\pi \approx 0.07$$ days (at 99.9% confidence for exoplanet periods $$P < 2$$ days). There is also a noticeable lack of exoplanetary “unstable” orbits with $$P \approx 3\pi P_0 \approx 1.05$$ days, which copies the famous “period gap” of the cataclysmic variables at $$P \approx 0.11$$ days; strangely enough, the ratio of the central periods of these two gaps is equal to $$\pi^2$$ . The exoplanet phenomenon is supposed to be caused by a coherent, with the $$P_0$$ timescale, oscillation of gravity, operating within the extra-solar planetary systems.
PubDate: 2019-07-15
• Arecibo ALFA Array Observations in Search of Lunar Meteoroid-Strike EMPs
• Abstract: Abstract We present the preliminary results of a search for transient Electromagnetic Pulses (EMP) associated with the impact of meteoroids on the lunar surface as observed with the Arecibo Observatory ALFA (Arecibo L-band Feed Array) system. The ALFA system is a cluster of seven, dual-linear polarization feeds/beams arranged in a hexagonal manner and operated in the protected L-band region centered at 1.41 GHz. We analyzed 8 TB of data totaling nearly 5.5 h of on- and off-moon observations made in February 2016. We demonstrate the observing strategy and time–frequency methods for the detection and removal of the local-radar transient interference signals while identifying potential EMPs. Local out of band radar interference signals are observed as intermodulation artifacts in the protected L-band. Seven transient wideband EMP events with time scales of less than 10 μs have been detected following the extensive vetting process we describe. Assuming that these EMP-like events originate from gram-sized meteoroid strikes and using very approximate hypervelocity impact, plasma production theory, and EMP generation theory, we estimate the progenitor impact meteoroid kinetic energy to be approximately 1.8 × 107 J. Assuming that the observed EMPs are the result of 10 g meteoroid impacts, the resultant meteoroid flux is 3 × 10−7 km−2 h−1 based solely on lunar surface area observed and net observing period. Implications of the observed transient EMP events, measured lunar noise temperature and the comparison with energy estimates derived from the existing lunar impact optical observations are also discussed.
PubDate: 2019-05-29
• Primordial Planets Predominantly of Dark Matter
• Abstract: Abstract Cosmic structure formation is thought to occur as a bottom-up scenario, i.e. the lightest objects would have formed first. It has been suggested that the earliest structures to form could have been primordial planets. Here we propose the possibility of formation of primordial planets at high redshifts composed predominantly of dark matter (DM) particles, with planetary masses ranging from Neptune mass to asteroid mass. Most of these primordial DM planets could be free floating without being attached to a host star and a substantial fraction could be present in the halo contributing to the DM. Here we suggest that the flux of DM particles could be significantly reduced as substantial number of DM particles are now trapped in such objects, perhaps accounting for the negative results seen so far in the ongoing DM detection experiments.
PubDate: 2019-05-17
• A Catalog of Smaller Planets
• Abstract: Abstract A compilation was made of N = 89 planets or moons for which the mass and radius are known, between the limits of 0.01 and 10 times the mass of Earth. Although starting from a larger and higher-quality (because it excludes m sin i figures) sample than that of Weiss and Marcy (Astrophys J Lett 783:L6–L12, 2014), the chart of log density versus radius confirms the WM14 results: Density increases up to about 1.5 Earth radii and decreases for larger radii, probably as the planet retains hydrogen and helium on formation.
PubDate: 2019-05-14
• Landing Area Selection Based on Closed Environment Avoidance from a Single
Image During Optical Coarse Hazard Detection
• Authors: Ruoyan Wei; Jianwei Jiang; Xiaogang Ruan; Jianke Li
Pages: 73 - 104
Abstract: Abstract The success of a landed space exploration mission depends largely on the final landing site, and the most important factor in landing site selection is the safety of the lander, so hazard detection and avoidance are crucial during asteroid landing. Many approaches have been proposed at present; most of them just detect hazards and select an area that is free of hazard threat. However, in some cases the selected site should not be located in a closed environment, such as the interior of a crater. To tackle this issue, an approach for selecting a landing site with closed-environment avoidance, based on a single image during optical coarse hazard detection, is proposed in this paper; the approach was designed under the scheme of Chang’e-3’s landing process. The approach begins with hazard detection based on a proposed binary method. Then, to search for candidate circular landing areas, the skeletons of hazard-free areas are taken into account, and control constraints are considered to select the landing areas that are accessible by the lander. Finally, the selected circular landing area is chosen by a proposed scoring method, which combines factors of the circular areas, including radius, the connections among circular areas, each circular area’s texture, and the cluster relation between a circular area’s center and all the hazards. At last, a series of experiments was conducted to test the performance of the proposed approach.
PubDate: 2018-07-01
DOI: 10.1007/s11038-018-9516-2
Issue No: Vol. 121, No. 3 (2018)
• Distinction in the Interplanetary Characteristics of Accelerated and
Decelerated CMEs/Shocks
• Authors: K. Suresh; A. Shanmugaraju; Y.-J. Moon
Abstract: Abstract A set of 58 Coronal Mass Ejections (CMEs) with different kinematics near the sun in LASCO Field of view (FOV) is classified into two groups (i) CMEs which are accelerating (group-I) and (ii) CMEs which are decelerating (group-II). We analyze their interplanetary propagation characteristics to study the distinction between these two groups of events. Some of the following deviations are noted between the two groups as: (i) While group-II events have greater mean values of Standoff distance, Standoff time than the group-I events, the mean transit times of ICMEs and IP shocks are relatively lower for them. (ii) Group-II events are more (30%) radio-rich than the group-I (10%) and they are associated with type II solar radio burst in lower corona, (iii) The possibility of having excess magnetic energy that supports the propagation of CMEs to some extent is studied using estimated speed (VEST) and it is found that a slightly more number of events in group-I (48%) has VEST > VLASCO than group-II (33%). (iv) Net interplanetary acceleration is positive for 35% and 19% in group-I and group-II events respectively. (v) It is also found that ICME/IP shock characteristics of the two groups depend strongly on the CME acceleration.
PubDate: 2018-12-05
DOI: 10.1007/s11038-018-9522-4
• Photometric Study of Comet C/2014 S2 (PANSTARRS) After the Perihelion
• Authors: A. S. Betzler; O. F. de Sousa; L. B. S. Betzler
Abstract: Abstract We analyzed the BVR photometry of comet C/2014 S2 obtained between March and June 2016 in observatories installed in Europe and the United States. Using the Lomb–Scargle periodogram, we found that the most probable periodicity deduced from the V-band magnitudes is 2.70 days, suggesting that the rotation period of the nucleus of this comet is $$2.70 \pm 0.07$$ days or $$68 \pm 2$$ h, with a peak-to-peak light curve amplitude of $$0.4 \pm 0.1$$ magnitudes. We verify that the absolute magnitude $$H_0$$ and the activity index n differ from each other when they are calculated from the visual or CCD magnitudes. Considering the absolute magnitude $$H_{v0}=$$ 6.0, obtained from visual magnitudes, we estimate that the lower limit of the nuclear radius is 1.3 km. Analyzing the variation of magnitude R with the photometric aperture, we suggest that the coma of this object was in steady state within the time limits of our observational interval. The coma had a mean color index B–V $$=0.79\pm 0.22$$ , which is typical of active comets. Additionally, we have shown that the use of a variable photometric aperture, linked to geocentric distance, is probably unnecessary for comet PANSTARRS.
PubDate: 2018-11-21
DOI: 10.1007/s11038-018-9521-5
• Jerk in Planetary Systems and Rotational Dynamics, Nonlocal Motion
Relative to Earth and Nonlocal Fluid Dynamics in Rotating Earth Frame
Abstract: Abstract Some results following from the implications of nonlocal-in-time kinetic energy approach introduced recently by Suykens in the framework of rotational dynamics and motion in a non-inertial frame are discussed. Their roles in treating aspects concerning the nonlocal motion relative to Earth, the free-fall problem, the Foucault pendulum and the motion of a massive body in a rotating tube are analyzed. Governing nonlocal equations of fluid dynamics in particular the nonlocal-in-time Navier–Stokes equations are constructed under the influence of Earth rotation. Their properties are analyzed and a number of features were revealed and discussed accordingly.
PubDate: 2018-10-03
DOI: 10.1007/s11038-018-9519-z
• Fast Spinning of Planets
• Authors: V. A. Kotov
Abstract: Abstract Spin periods of Jupiter, Saturn, Uranus and Neptune are specified by the analysis of the resonant motion of large satellites: $$P = 0.445(2)\,\hbox {d}$$ , 0.448(1) d, 0.673(9) d and 0.561(7) d, respectively. They occur to be near-commensurate with $$P_0=9600.606(12)\,\hbox {s}$$ , the period of the “cosmic” oscillation, discovered first in the Sun, then in other variable objects of the Universe. A like analysis of spin rates of the total set of the largest and fastest rotators of the Solar system (with mean diameters $$\ge 500\,\hbox {km}$$ and $$P < 2\,\hbox {d}$$ — of planets, asteroids and satellites) resulted in the best commensurable, or “synchronizing”, timescale 9594(65) s, coinciding fairly well with $$P_0$$ too (the probability that the two timescales could agree by chance is less than $$10^{-5}$$ ). The true origin of this odd common resonance of our planetary system is unknown.
PubDate: 2018-09-29
DOI: 10.1007/s11038-018-9520-6
• The Temperature Regime of the Proposed Landing Sites for the Luna-Glob
Mission in the South Polar Region of the Moon
• Authors: E. A. Feoktistova; S. G. Pugacheva; V. V. Shevchenko
Abstract: Abstract In this paper, we investigated the possibility of existence of the hydrogen-containing volatile compounds, similar to those found in the Cabeus crater, in the area of the proposed landing ellipses of the Luna-Glob mission. We found that the existence of water ice and other hydrogen-containing substances is possible only in the presence of a shielding layer of regolith. The time of existence of such deposits does not exceed several tens or hundreds years for a layer of regolith with a thickness of 0.4 m and several thousand years for a layer of regolith 1 m thick.
PubDate: 2018-08-10
DOI: 10.1007/s11038-018-9518-0
|
|
Development of a Quadrilateral Enhanced Assumed Strain Element for Efficient and Accurate Thermal Stress Analysis
Title & Authors
Development of a Quadrilateral Enhanced Assumed Strain Element for Efficient and Accurate Thermal Stress Analysis
Ko, Jin-Hwan; Lee, Byung-Chai;
Abstract
A new quadrilateral plane stress element is developed for efficient and accurate analysis of thermal stress problems. It is convenient to use the same mesh and the same shape functions for thermal analysis and stress analysis. But, because of the inconsistency between the deformation-related strain field and the thermal strain field, oscillatory responses and considerable errors in stresses result. To avoid undesired oscillations, the strain approximation is enhanced by supplementing several assumed strain terms based on the variational principle. Thermal deformation is incorporated into the generalized mixed variational principle for the displacement, strain and stress fields, and basic equations for the modified enhanced assumed strain method are derived. For the stress approximation of bilinear elements, the $\small{5{\beta}}$ version of Pian and Sumihara is adopted. The numerical results for several problems show that the present element behaves well and reduces oscillatory responses. It also results in almost the same magnitude of error as the quadratic element.
Keywords
Thermal Stress Analysis;Enhanced Assumed Strain Element;Displacement Based Element;Functional;
Language
Korean
|
|
# faces#
A quick way to generate an image with 2 opposing faces set to True, which is used throughout PoreSpy to indicate inlets and outlets
[1]:
import porespy as ps
import matplotlib.pyplot as plt
import numpy as np
import inspect
The arguments and default values of the function can be found as follows:
[2]:
inspect.signature(ps.generators.faces)
[2]:
<Signature (shape, inlet=None, outlet=None)>
## shape#
This would be the same shape as the actual image under study. Let’s say we have an image of blobs:
[3]:
im = ps.generators.blobs(shape=[10, 10, 10])
faces = ps.generators.faces(shape=im.shape, inlet=0, outlet=0)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.voxels(faces, edgecolor='k', linewidth=0.25);
## inlet and outlet#
These indicate which axis the True values should be placed, with inlets placed at the start of the axis, and outlets placed at the end:
[4]:
faces = ps.generators.faces(shape=im.shape, inlet=2, outlet=0)
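For intuition, the behaviour described above can be reproduced with plain NumPy. This is an illustrative sketch only, not PoreSpy's internal implementation; `faces_sketch` is a hypothetical helper that places True on the first slice along the `inlet` axis and the last slice along the `outlet` axis:

```python
import numpy as np

def faces_sketch(shape, inlet=None, outlet=None):
    # Hypothetical re-implementation for illustration: True on the first
    # slice along the `inlet` axis and the last slice along the `outlet`
    # axis, mirroring the images shown above.
    im = np.zeros(shape, dtype=bool)
    if inlet is not None:
        sl = [slice(None)] * len(shape)
        sl[inlet] = 0
        im[tuple(sl)] = True
    if outlet is not None:
        sl = [slice(None)] * len(shape)
        sl[outlet] = -1
        im[tuple(sl)] = True
    return im

f = faces_sketch([10, 10, 10], inlet=2, outlet=0)
```

Here `f[:, :, 0]` (the inlet face) and `f[-1, :, :]` (the outlet face) are entirely True, and everything else is False.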
|
|
# PPO1¶
The Proximal Policy Optimization algorithm combines ideas from A2C (having multiple workers) and TRPO (it uses a trust region to improve the actor).
The main idea is that after an update, the new policy should not be too far from the old policy. For that, PPO uses clipping to avoid too large an update.
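As a rough illustration of that clipping (a NumPy sketch of the objective from the paper, not the library's internal code):

```python
import numpy as np

def clipped_surrogate(ratio, adv, clip_param=0.2):
    # Elementwise PPO objective (to be maximized):
    #   min(r * A, clip(r, 1 - eps, 1 + eps) * A)
    # where r = pi_new(a|s) / pi_old(a|s) and A is the advantage estimate.
    ratio = np.asarray(ratio, dtype=float)
    adv = np.asarray(adv, dtype=float)
    clipped = np.clip(ratio, 1.0 - clip_param, 1.0 + clip_param)
    return np.minimum(ratio * adv, clipped * adv)
```

With a positive advantage, a ratio of 1.5 earns no more than a ratio of 1.2 (= 1 + clip_param), so the gradient gives no incentive to move the policy further than the trust region allows.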
Note
PPO1 requires OpenMPI. If OpenMPI isn’t enabled, then PPO1 isn’t imported into the stable_baselines module.
Note
PPO1 uses MPI for multiprocessing unlike PPO2, which uses vectorized environments. PPO2 is the implementation OpenAI made for GPU.
## Notes¶
• Original paper: https://arxiv.org/abs/1707.06347
• Clear explanation of PPO on Arxiv Insights channel: https://www.youtube.com/watch?v=5P7I-xPq8u8
• OpenAI blog post: https://blog.openai.com/openai-baselines-ppo/
• mpirun -np 8 python -m stable_baselines.ppo1.run_atari runs the algorithm for 40M frames = 10M timesteps on an Atari game. See help (-h) for more options.
• python -m stable_baselines.ppo1.run_mujoco runs the algorithm for 1M frames on a Mujoco environment.
• Train mujoco 3d humanoid (with optimal-ish hyperparameters): mpirun -np 16 python -m stable_baselines.ppo1.run_humanoid --model-path=/path/to/model
• Render the 3d humanoid: python -m stable_baselines.ppo1.run_humanoid --play --model-path=/path/to/model
## Can I use?¶
• Recurrent policies: ❌
• Multi processing: ✔️ (using MPI)
• Gym spaces:
Space Action Observation
Discrete ✔️ ✔️
Box ✔️ ✔️
MultiDiscrete ✔️ ✔️
MultiBinary ✔️ ✔️
## Example¶
import gym
from stable_baselines.common.policies import MlpPolicy
from stable_baselines import PPO1
env = gym.make('CartPole-v1')
model = PPO1(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=25000)
model.save("ppo1_cartpole")
obs = env.reset()
while True:
action, _states = model.predict(obs)
obs, rewards, dones, info = env.step(action)
env.render()
## Parameters¶
class stable_baselines.ppo1.PPO1(policy, env, gamma=0.99, timesteps_per_actorbatch=256, clip_param=0.2, entcoeff=0.01, optim_epochs=4, optim_stepsize=0.001, optim_batchsize=64, lam=0.95, adam_epsilon=1e-05, schedule='linear', verbose=0, tensorboard_log=None, _init_setup_model=True, policy_kwargs=None, full_tensorboard_log=False, seed=None, n_cpu_tf_sess=1)[source]
Proximal Policy Optimization algorithm (MPI version). Paper: https://arxiv.org/abs/1707.06347
Parameters:
• env – (Gym environment or str) The environment to learn from (if registered in Gym, can be str)
• policy – (ActorCriticPolicy or str) The policy model to use (MlpPolicy, CnnPolicy, CnnLstmPolicy, …)
• timesteps_per_actorbatch – (int) timesteps per actor per update
• clip_param – (float) clipping parameter epsilon
• entcoeff – (float) the entropy loss weight
• optim_epochs – (float) the optimizer’s number of epochs
• optim_stepsize – (float) the optimizer’s stepsize
• optim_batchsize – (int) the optimizer’s batch size
• gamma – (float) discount factor
• lam – (float) advantage estimation discounting factor (λ in the paper)
• adam_epsilon – (float) the epsilon value for the adam optimizer
• schedule – (str) The type of scheduler for the learning rate update (‘linear’, ‘constant’, ‘double_linear_con’, ‘middle_drop’ or ‘double_middle_drop’)
• verbose – (int) the verbosity level: 0 none, 1 training information, 2 tensorflow debug
• tensorboard_log – (str) the log location for tensorboard (if None, no logging)
• _init_setup_model – (bool) Whether or not to build the network at the creation of the instance
• policy_kwargs – (dict) additional arguments to be passed to the policy on creation
• full_tensorboard_log – (bool) enable additional logging when using tensorboard. WARNING: this logging can take a lot of space quickly
• seed – (int) Seed for the pseudo-random generators (python, numpy, tensorflow). If None (default), use random seed. Note that if you want completely deterministic results, you must set n_cpu_tf_sess to 1.
• n_cpu_tf_sess – (int) The number of threads for TensorFlow operations. If None, the number of cpu of the current machine will be used.
action_probability(observation, state=None, mask=None, actions=None, logp=False)
If actions is None, then get the model’s action probability distribution from a given observation.
Depending on the action space the output is:
• Discrete: probability for each possible action
• Box: mean and standard deviation of the action output
However, if actions is not None, this function will return the probability that the given actions are taken with the given parameters (observation, state, …) on this model. For discrete action spaces, it returns the probability mass; for continuous action spaces, the probability density. This is because the probability mass will always be zero in continuous spaces; see http://blog.christianperone.com/2019/01/ for a good explanation
Parameters:
• observation – (np.ndarray) the input observation
• state – (np.ndarray) The last states (can be None, used in recurrent policies)
• mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
• actions – (np.ndarray) (OPTIONAL) For calculating the likelihood that the given actions are chosen by the model for each of the given parameters. Must have the same number of actions and observations. (set to None to return the complete action probability distribution)
• logp – (bool) (OPTIONAL) When specified with actions, returns probability in log-space. This has no effect if actions is None.
Returns: (np.ndarray) the model’s (log) action probability
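To make the mass-vs-density distinction concrete, here is a small NumPy sketch (not the library's code) of the log-density a diagonal-Gaussian policy would assign to a Box action; `gaussian_logp` is a hypothetical helper for illustration:

```python
import numpy as np

def gaussian_logp(action, mean, std):
    # Log-density of a diagonal Gaussian policy over a continuous (Box)
    # action space. Any single action has zero probability *mass*, so only
    # the density (or log-density) is a meaningful quantity here.
    action = np.asarray(action, dtype=float)
    mean = np.asarray(mean, dtype=float)
    std = np.asarray(std, dtype=float)
    return float(np.sum(-0.5 * ((action - mean) / std) ** 2
                        - np.log(std) - 0.5 * np.log(2.0 * np.pi)))
```

For a one-dimensional standard Gaussian, the log-density at the mean is -0.5·log(2π), and it decreases as more action dimensions are added.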
get_env()
returns the current environment (can be None if not defined)
Returns: (Gym Environment) The current environment
get_parameter_list()
Get tensorflow Variables of model’s parameters
Returns: (list) List of tensorflow Variables
get_parameters()
Get current model parameters as dictionary of variable name -> ndarray.
Returns: (OrderedDict) Dictionary of variable name -> ndarray of model’s parameters.
get_vec_normalize_env() → Optional[stable_baselines.common.vec_env.vec_normalize.VecNormalize]
Return the VecNormalize wrapper of the training env if it exists.
Returns: Optional[VecNormalize] The VecNormalize env.
learn(total_timesteps, callback=None, log_interval=100, tb_log_name='PPO1', reset_num_timesteps=True)[source]
Return a trained model.
Parameters:
• total_timesteps – (int) The total number of samples to train on
• callback – (Union[callable, [callable], BaseCallback]) function called at every step with state of the algorithm. It takes the local and global variables. If it returns False, training is aborted. When the callback inherits from BaseCallback, you will have access to additional stages of the training (training start/end), please read the documentation for more details.
• log_interval – (int) The number of timesteps before logging.
• tb_log_name – (str) the name of the run for tensorboard log
• reset_num_timesteps – (bool) whether or not to reset the current timestep number (used in logging)
Returns: (BaseRLModel) the trained model
classmethod load(load_path, env=None, custom_objects=None, **kwargs)
Load the model from file.
Parameters:
• load_path – (str or file-like) the saved parameter location
• env – (Gym Environment) the new environment to run the loaded model on (can be None if you only need prediction from a trained model)
• custom_objects – (dict) Dictionary of objects to replace upon loading. If a variable is present in this dictionary as a key, it will not be deserialized and the corresponding item will be used instead. Similar to custom_objects in keras.models.load_model. Useful when you have an object in file that can not be deserialized.
• kwargs – extra arguments to change the model when loading
load_parameters(load_path_or_dict, exact_match=True)
Load model parameters from a file or a dictionary
Dictionary keys should be tensorflow variable names, which can be obtained with get_parameters function. If exact_match is True, dictionary should contain keys for all model’s parameters, otherwise RunTimeError is raised. If False, only variables included in the dictionary will be updated.
This does not load agent’s hyper-parameters.
Warning
This function does not update trainer/optimizer variables (e.g. momentum). As such training after using this function may lead to less-than-optimal results.
Parameters:
• load_path_or_dict – (str or file-like or dict) Save parameter location or dict of parameters as variable.name -> ndarrays to be loaded.
• exact_match – (bool) If True, expects load dictionary to contain keys for all variables in the model. If False, loads parameters only for variables mentioned in the dictionary. Defaults to True.
predict(observation, state=None, mask=None, deterministic=False)
Get the model’s action from an observation
Parameters:
• observation – (np.ndarray) the input observation
• state – (np.ndarray) The last states (can be None, used in recurrent policies)
• mask – (np.ndarray) The last masks (can be None, used in recurrent policies)
• deterministic – (bool) Whether or not to return deterministic actions.
Returns: (np.ndarray, np.ndarray) the model’s action and the next state (used in recurrent policies)
pretrain(dataset, n_epochs=10, learning_rate=0.0001, adam_epsilon=1e-08, val_interval=None)
Pretrain a model using behavior cloning: supervised learning given an expert dataset.
NOTE: only Box and Discrete spaces are supported for now.
Parameters:
• dataset – (ExpertDataset) Dataset manager
• n_epochs – (int) Number of iterations on the training set
• learning_rate – (float) Learning rate
• adam_epsilon – (float) the epsilon value for the adam optimizer
• val_interval – (int) Report training and validation losses every n epochs. By default, every 10th of the maximum number of epochs.
Returns: (BaseRLModel) the pretrained model
save(save_path, cloudpickle=False)[source]
Save the current parameters to file
Parameters:
• save_path – (str or file-like) The save location
• cloudpickle – (bool) Use older cloudpickle format instead of zip-archives.
set_env(env)
Checks the validity of the environment, and if it is coherent, set it as the current environment.
Parameters: env – (Gym Environment) The environment for learning a policy
set_random_seed(seed: Optional[int]) → None
Parameters: seed – (Optional[int]) Seed for the pseudo-random generators. If None, do not change the seeds.
setup_model()[source]
Create all the functions and tensorflow graphs necessary to train the model
|
|
# Mac – Increase the maximum number of open file descriptors in Snow Leopard
file-descriptorsmacosx-snow-leopardulimit
I am trying to do something that requires a large number of file descriptors
sudo ulimit -n 12288 is as high as Snow Leopard wants to go; beyond this results in
/usr/bin/ulimit: line 4: ulimit: open files: cannot modify limit: Invalid argument.
I want to raise the number much higher, say 100000. Is it possible?
Using ulimit command only changes the resource limits for the current shell and its children and sudo ulimit creates a root shell, adjusts its limits, and then exits (thus having, as far as I can see, no real effect).
To exceed 12288, you need to adjust the kernel's kern.maxfiles and kern.maxfilesperproc parameters, and also (at least according to this blog entry, which is a summary of this discussion) a launchd limit. You can use launchctl limit to adjust all of these at once:
sudo launchctl limit maxfiles 1000000 1000000
To make this permanent (i.e not reset when you reboot), create /etc/launchd.conf containing:
limit maxfiles 1000000 1000000
Then you can use ulimit (but without the sudo) to adjust your process limit.
If this doesn't do it, you may be running into size limits in the kernel. If your model supports it, booting the kernel in 64-bit mode may help.
|
|
# Listing figure citations in order of appearance in the main bibliography list (other answers have not worked)
I think there might be something wrong with my code (as I am very new to LaTeX/Overleaf). Note that I am using Overleaf. I have seen this question in other places too, but the answers are not working for me. Can anyone spot a major mistake in my preamble? Note that I have now not included the other answers in my code.
To be clear: when I cite inside the figure caption, these citations are listed first in the bibliography list, independent of the order of appearance. How do I make them appear in order of appearance?
\documentclass[12pt, twoside]{report}
\usepackage[margin=0.9 in]{geometry}
\usepackage[utf8]{inputenc}
\usepackage{listings}
\usepackage{graphicx}
\usepackage{pgffor}
\usepackage{float}
\usepackage{graphics}
\usepackage{fancyhdr}
\usepackage{array}
\usepackage{colortbl}
\usepackage{longtable}
\setlength\LTleft\parindent
\usepackage{siunitx}
\usepackage{textcomp}
\usepackage{gensymb}
\usepackage[version = 4]{mhchem}
\usepackage[english]{babel}
\usepackage{blindtext}
\usepackage{mathptmx}
\usepackage{xcolor}
\usepackage{titlesec}
\usepackage[euler]{textgreek}
\graphicspath{{Images/}}
\usepackage[font=small]{caption}
\usepackage[numbers,round]{natbib}
\usepackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@article{Hannan2013,
author = {Hannan, Nicholas R F and Segeritz, Charis-patricia and Touboul Thomas and Vallier, Ludovic},
doi = {10.1038/nprot.2012.153},
issn = {1754-2189},
journal = {Nat. Protoc.},
month = {feb},
number = {2},
pages = {430--437},
title = {{Production of hepatocyte-like cells from human pluripotent stem cells}},
volume = {8},
year = {2013}
}
@article{Lim1980,
author = {Lim, Franklin and Sun, Anthony M.},
doi = {10.1126/science.6776628},
issn = {00368075},
journal = {Science (80-. ).},
number = {4472},
pages = {908--910},
title = {{Microencapsulated islets as bioartificial endocrine pancreas}},
volume = {210},
year = {1980}
}
@article{Dalheim2016,
author = {Dalheim, Marianne and Vanacker, Julie and Najmi, Maryam A. and Aachmann, Finn L. and Strand, Berit L. and Christensen, Bj{\o}rn E.},
doi = {10.1016/j.biomaterials.2015.11.043},
issn = {18785905},
journal = {Biomaterials},
keywords = {Alginate,Cell adhesion,Periodate oxidation,RGD peptide,Reductive amination,Tissue engineering},
pages = {146--156},
title = {{Efficient functionalization of alginate biomaterials}},
volume = {80},
year = {2016}
}
\end{filecontents*}
\usepackage{color}
% Setting chapter and section format:
\titleformat{\chapter}
{\normalfont\rmfamily\LARGE\bfseries}
{\thechapter}{20pt}{\LARGE}
\titlespacing*{\chapter}{0pt}{-10pt}{14pt}
\titleformat{\section}
{\normalfont\rmfamily\Large\bfseries}
{\thesection}{1em}{\Large}
\titleformat{\subsection}
{\normalfont\rmfamily\large\bfseries}
{\thesubsection}{1em}{}
\titleformat{\subsubsection}
{\normalfont\rmfamily\normalsize\bfseries}
{\thesubsubsection}{1em}{}
\fancyhf{}
\usepackage{hyperref}
\hypersetup{
citecolor=black,
filecolor=black,
urlcolor=black
}
\renewcommand{\figureautorefname}{figure}
\renewcommand{\tableautorefname}{table}
%------------DOCUMENT-----------
\pagenumbering{roman}
\begin{document}
% Preface
\chapter*{Preface}
\setcounter{page}{1}
\tableofcontents
\listoffigures
\listoftables
% Background
\chapter{Background}
\pagenumbering{arabic}
% Introduction
\chapter{Introduction}
My very first reference \cite{Hannan2013},
and my second reference \cite{Lim1980}.
% Methods
\chapter{Background}
\begin{figure}[ht]
\centering
\caption{Periodate oxidation \cite{Dalheim2016}.} % Third reference.
\label{fig:periodateOx}
\end{figure}
% Bibliography
\bibliographystyle{abbrvnat}
\bibliography{\jobname.bib}
\end{document}
• Can you supply a compilable MWE? As your example has external files, this can't be compiled. – Sango Sep 24 at 10:17
• Thank you, I have now edited the document to not include any external files, and it is readily compilable. – Norawww Sep 24 at 11:13
• Why not use \caption[Periodate oxidation.]{Periodate oxidation \cite{Dalheim2016}.} ? – leandriis Sep 24 at 11:17
• I have tried that, but it doesn't solve the problem for me. Even if the citation does not appear in the list of figures, it is still numbered as 1 in the caption. – Norawww Sep 24 at 11:20
• I just had a look at your MWE. The bibliography style that you use sorts the entries in the list of references by last name of first author. (Try to exchange 2 \cite commands and you will see that the references will stay in the same order). To sort by order of appearance, use unsrtnat as bibliography style instead. – leandriis Sep 24 at 11:25
Welcome on tex.stackexchange. There are several small issues in your code:
1. It is far from being minimal
2. You should not write \bibliography{\jobname.bib} as the .bib extension is automatically added
3. You do not have to load graphics as it is already loaded by graphicx
4. You don't need the filecontents package as long as you use the related environment in the preamble
5. As said by @leandriis, if you want a reference list ordered by appearance, you should use an unsorted style, the prototype of which is \bibliographystyle{unsrtnat}
The origin of your problem is the \listoffigures which stands at the beginning and calls the \cite{Dalheim2016} too early.
Hence you could:
1. Remove the \listoffigures
2. Put it at the end
3. Last but not least, use the optional argument of \caption, as this defines what is put in \listoffigures. It is of course the real solution.
The minimal example :
%https://tex.stackexchange.com/questions/509567/listing-figure-citations-in-order-of-appearance-in-the-main-bibliography-list-o
\documentclass{report}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage[font=small]{caption}
\usepackage[numbers,round]{natbib}
\begin{filecontents*}{\jobname.bib}
@article{Hannan2013,
author = {Hannan, Nicholas R F and Segeritz, Charis-patricia and Touboul Thomas and Vallier, Ludovic},
doi = {10.1038/nprot.2012.153},
issn = {1754-2189},
journal = {Nat. Protoc.},
month = {feb},
number = {2},
pages = {430--437},
title = {{Production of hepatocyte-like cells from human pluripotent stem cells}},
volume = {8},
year = {2013}
}
@article{Lim1980,
author = {Lim, Franklin and Sun, Anthony M.},
doi = {10.1126/science.6776628},
issn = {00368075},
journal = {Science (80-. ).},
number = {4472},
pages = {908--910},
title = {{Microencapsulated islets as bioartificial endocrine pancreas}},
volume = {210},
year = {1980}
}
@article{Dalheim2016,
author = {Dalheim, Marianne and Vanacker, Julie and Najmi, Maryam A. and Aachmann, Finn L. and Strand, Berit L. and Christensen, Bj{\o}rn E.},
doi = {10.1016/j.biomaterials.2015.11.043},
issn = {18785905},
journal = {Biomaterials},
keywords = {Alginate,Cell adhesion,Periodate oxidation,RGD peptide,Reductive amination,Tissue engineering},
pages = {146--156},
title = {{Efficient functionalization of alginate biomaterials}},
volume = {80},
year = {2016}
}
\end{filecontents*}
\usepackage{hyperref}
%------------DOCUMENT-----------
\begin{document}
\tableofcontents
\listoffigures
\listoftables
\chapter{Introduction}
My very first reference \cite{Hannan2013},
and my second reference \cite{Lim1980}.
\chapter{Methods}
\begin{figure}[ht]
\centering
\caption[Periodate oxidation (from Dalheim2016)]%
{Periodate oxidation \cite{Dalheim2016}.}
\label{fig:periodateOx}
\end{figure}
\bibliographystyle{unsrtnat}
\bibliography{\jobname}
\end{document}
By the way, the unsrtnat style handles neither doi nor issn nor any kind of url. You will have to find another natbib-compatible style which works with them, if you need one. A possible solution: add \usepackage[natbibapa]{apacite} in the preamble and use \bibliographystyle{apacite}. An advantage of apacite is that the .bbl file contains formatting which is not hard-coded but relies on macros that you can easily customize with \renewcommand.
## 6.26 Morphisms of ringed spaces and modules
We have now introduced enough notation so that we are able to define the pullback and pushforward of modules along a morphism of ringed spaces.
Definition 6.26.1. Let $(f, f^\sharp ) : (X, \mathcal{O}_ X) \to (Y, \mathcal{O}_ Y)$ be a morphism of ringed spaces.
1. Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_ X$-modules. We define the pushforward of $\mathcal{F}$ as the sheaf of $\mathcal{O}_ Y$-modules which as a sheaf of abelian groups equals $f_*\mathcal{F}$ and with module structure given by the restriction via $f^\sharp : \mathcal{O}_ Y \to f_*\mathcal{O}_ X$ of the module structure given in Lemma 6.24.5.
2. Let $\mathcal{G}$ be a sheaf of $\mathcal{O}_ Y$-modules. We define the pullback $f^*\mathcal{G}$ to be the sheaf of $\mathcal{O}_ X$-modules defined by the formula
$f^*\mathcal{G} = \mathcal{O}_ X \otimes _{f^{-1}\mathcal{O}_ Y} f^{-1}\mathcal{G}$
where the ring map $f^{-1}\mathcal{O}_ Y \to \mathcal{O}_ X$ is the map corresponding to $f^\sharp$, and where the module structure is given by Lemma 6.24.6.
Thus we have defined functors
\begin{eqnarray*} f_* : \textit{Mod}(\mathcal{O}_ X) & \longrightarrow & \textit{Mod}(\mathcal{O}_ Y) \\ f^* : \textit{Mod}(\mathcal{O}_ Y) & \longrightarrow & \textit{Mod}(\mathcal{O}_ X) \end{eqnarray*}
The final result on these functors is that they are indeed adjoint as expected.
Lemma 6.26.2. Let $(f, f^\sharp ) : (X, \mathcal{O}_ X) \to (Y, \mathcal{O}_ Y)$ be a morphism of ringed spaces. Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_ X$-modules. Let $\mathcal{G}$ be a sheaf of $\mathcal{O}_ Y$-modules. There is a canonical bijection
$\mathop{\mathrm{Hom}}\nolimits _{\mathcal{O}_ X}(f^*\mathcal{G}, \mathcal{F}) = \mathop{\mathrm{Hom}}\nolimits _{\mathcal{O}_ Y}(\mathcal{G}, f_*\mathcal{F}).$
In other words: the functor $f^*$ is the left adjoint to $f_*$.
Proof. This follows from the work we did before:
\begin{eqnarray*} \mathop{\mathrm{Hom}}\nolimits _{\mathcal{O}_ X}(f^*\mathcal{G}, \mathcal{F}) & = & \mathop{Mor}\nolimits _{\textit{Mod}(\mathcal{O}_ X)}( \mathcal{O}_ X \otimes _{f^{-1}\mathcal{O}_ Y} f^{-1}\mathcal{G}, \mathcal{F}) \\ & = & \mathop{Mor}\nolimits _{\textit{Mod}(f^{-1}\mathcal{O}_ Y)}( f^{-1}\mathcal{G}, \mathcal{F}_{f^{-1}\mathcal{O}_ Y}) \\ & = & \mathop{\mathrm{Hom}}\nolimits _{\mathcal{O}_ Y}(\mathcal{G}, f_*\mathcal{F}). \end{eqnarray*}
Here we use Lemmas 6.20.2 and 6.24.7. $\square$
Lemma 6.26.3. Let $f : X \to Y$ and $g : Y \to Z$ be morphisms of ringed spaces. The functors $(g \circ f)_*$ and $g_* \circ f_*$ are equal. There is a canonical isomorphism of functors $(g \circ f)^* \cong f^* \circ g^*$.
Proof. The result on pushforwards is a consequence of Lemma 6.21.2 and our definitions. The result on pullbacks follows from this by the same argument as in the proof of Lemma 6.21.6. $\square$
Given a morphism of ringed spaces $(f, f^\sharp ) : (X, \mathcal{O}_ X) \to (Y, \mathcal{O}_ Y)$, and a sheaf of $\mathcal{O}_ X$-modules $\mathcal{F}$, a sheaf of $\mathcal{O}_ Y$-modules $\mathcal{G}$ on $Y$, the notion of an $f$-map $\varphi : \mathcal{G} \to \mathcal{F}$ of sheaves of modules makes sense. We can just define it as an $f$-map $\varphi : \mathcal{G} \to \mathcal{F}$ of abelian sheaves such that for all open $V \subset Y$ the map
$\mathcal{G}(V) \longrightarrow \mathcal{F}(f^{-1}V)$
is an $\mathcal{O}_ Y(V)$-module map. Here we think of $\mathcal{F}(f^{-1}V)$ as an $\mathcal{O}_ Y(V)$-module via the map $f^\sharp _ V : \mathcal{O}_ Y(V) \to \mathcal{O}_ X(f^{-1}V)$. The set of $f$-maps between $\mathcal{G}$ and $\mathcal{F}$ will be in canonical bijection with the sets $\mathop{Mor}\nolimits _{\textit{Mod}(\mathcal{O}_ X)}(f^*\mathcal{G}, \mathcal{F})$ and $\mathop{Mor}\nolimits _{\textit{Mod}(\mathcal{O}_ Y)}(\mathcal{G}, f_*\mathcal{F})$. See above.
Composition of $f$-maps is defined in exactly the same manner as in the case of $f$-maps of sheaves of sets. In addition, given an $f$-map $\mathcal{G} \to \mathcal{F}$ as above, and $x \in X$ the induced map on stalks
$\varphi _ x : \mathcal{G}_{f(x)} \longrightarrow \mathcal{F}_ x$
is an $\mathcal{O}_{Y, f(x)}$-module map where the $\mathcal{O}_{Y, f(x)}$-module structure on $\mathcal{F}_ x$ comes from the $\mathcal{O}_{X, x}$-module structure via the map $f^\sharp _ x : \mathcal{O}_{Y, f(x)} \to \mathcal{O}_{X, x}$. Here is a related lemma.
Lemma 6.26.4. Let $(f, f^\sharp ) : (X, \mathcal{O}_ X) \to (Y, \mathcal{O}_ Y)$ be a morphism of ringed spaces. Let $\mathcal{G}$ be a sheaf of $\mathcal{O}_ Y$-modules. Let $x \in X$. Then
$(f^*\mathcal{G})_ x = \mathcal{G}_{f(x)} \otimes _{\mathcal{O}_{Y, f(x)}} \mathcal{O}_{X, x}$
as $\mathcal{O}_{X, x}$-modules where the tensor product on the right uses $f^\sharp _ x : \mathcal{O}_{Y, f(x)} \to \mathcal{O}_{X, x}$.
Proof. This follows from Lemma 6.20.3 and the identification of the stalks of pullback sheaves at $x$ with the corresponding stalks at $f(x)$. See the formulae in Section 6.23 for example. $\square$
Hi! I'm a little confused. In Section 6.25 a morphism is determined by a map $f^{\sharp}:\mathcal{O}_Y\rightarrow\mathcal{O}_X.$ Here $f^{\sharp}:\mathcal{O}_Y\rightarrow f_*\mathcal{O}_X.$
# JustPaste.it - paste text and share with your friends
The quickest way to share text with other people - 5 million users around the world know about it
### Why is JustPaste.it so special?
Easy to use text editor with text formatting feature
Just paste text from another webpage or a word processor. The text formatting and images will be preserved.
Pictures, movies and audio files
By using the "Upload images" module, you can easily add new images in your notes. You can paste the images directly from the clipboard into the editor. Embed videos using the [video] tag, e.g., [video]http://www.youtube.com/watch?v=XOXvdfcct8g[/video]. Embed audio files using the [audio] tag, e.g., [audio]http://www.domain.com/anyfile.mp3[/audio].
Short URLs with jpst.it
Each note has a good-looking, short URL that can be used on social networking sites.
Secure content publishing - that even NSA won't be able to break
Protect your text with a password and show it only to your friends, or save it as private so only you have access to it. Notes and included images are encrypted with the modern AES-256 GCM algorithm. The website also uses a secure SSL connection, to ensure that no one between you and the server is able to capture your traffic.
Importing from file
If you originally wrote your note in a word processor (Microsoft Word, MS Works, Open Office, or even Adobe Acrobat), simply upload it to the server using the "Import from file" function. The text formatting and images will be preserved.
Code highlighting
Want to show your application code? Set note content type to "Source code", and your code will be colored appropriately. You can also use [code][/code] tags.
Mathematical formulas
You can add professional-looking mathematical formulas to the notes.
Simply use LaTeX: $$m = \frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}$$.
Save notes as PDF
### What you can share with it?
• selected parts of websites
• favorite pictures
• articles on social sites
• school notes
• ideas and appeals
"Its most significant appeal for users (...) is that JustPaste.it runs fast even on slow internet connections. There are no additional items on the pages such as adverts or pop-ups, meaning it is extremely easy to use on mobile phones." "It's all easy to use. Because the interface is very simple, no ads, no widgets, the site opens easily even in places where the internet connection is too slow(...)." "Its appeal stems from its flexibility and convenience: you can upload a variety of file types, tinker with videos and photos, and use a number of formatting tools. The free site also works with right-to-left languages, such as Arabic; there's no requirement to register an account; and it's fast and works well with mobile devices" "JustPaste.it is the quickest way to share content online!" "If you are looking for a very nice way to share text, images, and embedded videos with other people, then JustPaste.it is definitely a web-service you should take a good look at." "Free and useful - a good combination on most people's books." "This application's far from the only way you could achieve this kind of goal, but for a speedy way of placing some text at a fixed address, it's pretty handy."
# Necessary and sufficient conditions for a polynomial $p$ to satisfy $\|x\|\to\infty\implies p(x)\to\infty$?
I'm looking for a necessary and sufficient conditions (I'm not even sure these exist) for a polynomial $p:\mathbb{R}^n\to\mathbb{R}$ to be "radially unbounded", that is
$$\|x\|\to\infty\implies p(x)\to\infty,$$
where $\|{\cdot}\|$ denotes any $p$-norm on $\mathbb{R}^n$. Ideally, I'm looking for conditions in terms of the polynomial's coefficients and degree.
For example, if $n=1$ it is straightforward to see that $p$ is radially unbounded if and only if its degree is even and the monomial of highest degree has a positive coefficient.
However, I'm struggling to generalise this to arbitrary $n$. Any help would great.
Motivation: I'm interested in the above because I'm trying to come up with an automated test that can decide whether or not all the sublevel sets of a given polynomial are compact (this is the case if and only if the polynomial is radially unbounded).
Edit: If no necessary and sufficient conditions (or argument that no such conditions exist in general) are posted before the bounty ends, I'd be more than happy to award the bounty to any answer containing insightful remarks or necessary or sufficient conditions.
• Is it as easy as just restricting to each variable and doing the $(n=1)$-check, or do you have some counterexamples where this doesn't work? (At least it's a simple, necessary condition.) Oct 9 '13 at 13:33
• @Arthur Think of $p(x)=(x_1-x_2)^2$. Fixing any variable we have that the degree of the polynomial (now in just one variable) is odd, and the coefficient of the highest degree monomial is $1$. However $p(t[1$ $1]^T)=0$ for all $t\in\mathbb{R}$.
– jkn
Oct 9 '13 at 13:37
• Yes, of course. Oct 9 '13 at 13:39
• I think $p$ is radially unbounded if and only if it is unbounded in every direction: for all $y \in S_{n-1}$, the polynomial $p(Xy) \in \Bbb R[X]$ is of positive even degree and positive leading coefficient. So you have to look at the homogeneous piece of highest degree, see if the degree is even, check if it ever gets negative, and if it ever gets zero, look at the next homogeneous piece of highest degree on that subvariety, and so on. Oct 24 '13 at 13:08
• @mercio: Unless I misunderstand what you mean by direction, I think you are wrong. For example, $p(x, y) = (x^3 - y)^2$ yields a non-constant non-negative polynomial when restricted to any 1-dimensional subspace, but its zero set is clearly unbounded. Oct 25 '13 at 21:36
Changing to generalized spherical coordinates, $p$ becomes a polynomial $$q(r,\cos\theta_2,\sin\theta_2,\ldots,\cos\theta_n,\sin\theta_n)$$ which we can view as a polynomial in $r$ with coefficients that are polynomials in $\cos\theta_2,\sin\theta_2,\ldots,\cos\theta_n,\sin\theta_n$. We want to show that this goes to $\infty$ as $r\to\infty$, independent of the values of $\theta_2,\ldots,\theta_n$. It is sufficient to show that the leading coefficient $c(\cos\theta_2,\sin\theta_2,\ldots,\cos\theta_n,\sin\theta_n)$ is bounded below by some $\epsilon>0$. Since the domain of the $\theta_i$ is compact, it suffices to show that $c$ is strictly positive. The range of $(\cos\theta_i,\sin\theta_i)$ is precisely the set of pairs $(x_i,y_i)$ such that $x_i^2+y_i^2=1$. Thus $c$ is strictly positive if the system \begin{align} c(x_2,y_2,\ldots,x_n,y_n) &\leq 0\\ x_2^2+y_2^2 &= 1\\ \vdots\\ x_n^2+y_n^2 &= 1\\ \end{align} has no real solutions. Determining whether such a system has real solutions is a classic problem in Real Semialgebraic Decomposition, and can be accomplished using Cylindrical Algebraic Decomposition.
As Giraffe points out, this condition is not quite necessary: if $c$ is nonnegative but not strictly positive, it may so happen that whenever $c=0$ the next coefficient $d$ is strictly positive, in which case $f$ is still radially unbounded. Thus $f$ is radially unbounded if both \begin{align} c(x_2,y_2,\ldots,x_n,y_n) &< 0\\ x_2^2+y_2^2 &= 1\\ \vdots\\ x_n^2+y_n^2 &= 1\\ \end{align} and \begin{align} d(x_2,y_2,\ldots,x_n,y_n) &\leq 0\\ c(x_2,y_2,\ldots,x_n,y_n) &= 0\\ x_2^2+y_2^2 &= 1\\ \vdots\\ x_n^2+y_n^2 &= 1\\ \end{align} have no solutions. In the same manner, it is possible that both $c$ and $d$ are nonnegative and wherever both are zero the next coefficient is strictly positive. We can express this using three systems of equalities and inequalities. Proceeding in the same manner until we get to the last coefficient gives us a necessary and sufficient condition.
Edit: It turns out the modification (looking at later coefficients) doesn't quite work, as we could have $\theta_i$ approach zeros of $c$ as $r\to \infty$ fast enough to cancel out the larger power of $r$. At least the first part provides a sufficient condition.
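The sufficient condition above (the leading coefficient staying strictly positive over the compact angular domain) can be probed numerically before setting up a full cylindrical algebraic decomposition. The sketch below is an illustration, not part of the original answer — the helper name and sampling scheme are made up. It samples directions on the unit sphere and minimizes the top-degree homogeneous part of $p$ over them: a minimum bounded away from 0 is evidence of radial unboundedness, while a near-zero minimum means the test is inconclusive and the later coefficients matter.

```python
import numpy as np

def min_on_sphere(p_top, n_dim, n_samples=20000, seed=0):
    """Estimate the minimum of the top-degree homogeneous part over the unit sphere."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_samples, n_dim))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return min(p_top(x) for x in u)

# p(x, y) = x^4 + y^4 + (lower-order terms): top part is x^4 + y^4
top_good = lambda x: x[0]**4 + x[1]**4
# p(x, y) = (x - y)^2: the top part vanishes along the direction (1, 1)/sqrt(2)
top_bad = lambda x: (x[0] - x[1])**2

print(min_on_sphere(top_good, 2))   # bounded away from 0: sufficient condition holds
print(min_on_sphere(top_bad, 2))    # near 0: the test is inconclusive
```

This is only a sampling heuristic; certifying strict positivity exactly is what the CAD step in the answer is for.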
• @Giraffe Thanks for pointing that out, can't believe I missed it. Hopefully my revised answer is less misguided. Oct 27 '13 at 6:03
• @Giraffe Good point. I can add some steps to take care of that. Oct 27 '13 at 6:46
• It looks fine to me now. :) I think the original question is very hard to me and a full answer is far beyond my knowledge, because I know nothing about real algebraic geometry. I am interested in to what extent I can follow the answer. For example, when $n=2$ and $\deg p=4$($\deg p\le 3$ is not interesting for any $n$), is there a simple answer? Oct 27 '13 at 7:02
• @Giraffe When $n=2$ we can simplify this considerably since $c(\cos\theta,\sin\theta)$ is a function of $1$ variable, so we can use univariate root-finding methods to minimize $c$, and its root set will be finite, so any checks of the remaining coefficients are simple. Oct 27 '13 at 7:20
• How would this work on the example $(x^2-y)^2$ inspired by @NielsDiepeveen?
– WimC
Oct 27 '13 at 8:04
To make mercio's comment more formal and explicit:
One can decompose $P=\sum_{i=0}^{d} P_i$, where each $P_i$ is a sum of monomials all of total degree $i$, and $P_d \neq 0$.
Obviously, a necessary condition for $P$ to satisfy your condition is that it involves all variables $x_1,x_2, \ldots ,x_n$.
That leads up to the following definition: the essential part of a polynomial $P$ (involving all the variables) is the sum $\sum_{i=w}^{d} P_i$, where $w$ is the largest index such that $\sum_{i=w}^{d} P_i$ involves all the variables.
Fact 1. If $Q$ is the essential part of a polynomial $P$, then $P-Q=o(Q)$ when $||x|| \to \infty$. [EDIT : this is incorrect as explained in Giraffe’s comments below]
Corollary. $P$ satisfies your property iff $Q$ does.
For example, if $n=6$ and $P=x_1^{2013}-5x_2x_3^{2012}+7x_4^{20}x_5^{30}x_6^{100}+48x_1+2x_6$, we have $P_{2013}=x_1^{2013}-5x_2x_3^{2012}$, $P_{150}=7x_4^{20}x_5^{30}x_6^{100}$, $P_{1}=48x_1+2x_6$ and $Q=P_{2013}+P_{150}=x_1^{2013}-5x_2x_3^{2012}+7x_4^{20}x_5^{30}x_6^{100}$.
Remark 1. If $P$ is a quadratic form, then $P$ satisfies your condition iff it is positive definite.
Remark 2. If $P$ is homogeneous, then $P$ satisfies your property iff ${\min}_{S^{n-1}}(P) > 0$, where $S^{n-1}=\lbrace x\in {\mathbb R}^n : ||x||=1\rbrace$ (that's because $P(ru)=r^{\deg P}P(u)$ for $u\in S^{n-1}$).
• I guess you want $w$ to be the largest index such that $\sum_{i=w}^d P_i$ involves all the variables? Oct 25 '13 at 8:28
• @ChristophPegel corrected, thanks. Oct 25 '13 at 8:28
• I feel confused about your definition of ”essential part“ and "Fact 1". For example, if $P(x,y)=x+y+1$, is $P-Q=1$? If so, what does $o(Q)$ mean and how could $1=o(Q)$? Oct 25 '13 at 14:46
• @Giraffe $o$ is Landau’s o-notation. And yes, in your example $Q=x+y$, so $\frac{1}{Q}=\frac{1}{x+y} \leq \frac{1}{\sqrt{x^2+y^2}}$ which tends to zero when $x^2+y^2 \to \infty$. Oct 26 '13 at 7:48
• @EwanDelanoy: You are welcome. The "corollary" seems incorrect, if I understand the meaning of "essential part" correctly. For example, if $n=2$ and $P=P_4+P_2$, where $P_4=(x_1-x_2)^4$and $P_2=x_1^2+x_2^2$, then is $Q=P_4$? If so, $P$ satisfies the property but $Q$ doesn't. Oct 26 '13 at 11:03
# What is the general solution of the differential equation y'' + y = cot(x) ?
Oct 24, 2017
$A \cos x + B \sin x - \sin x \ln | \csc x + \cot x |$
#### Explanation:
We have:
$y ' ' + y = \cot \left(x\right)$ ..... [A]
This is a second order linear non-homogeneous Differential Equation. The standard approach is to find a solution, ${y}_{c}$, of the homogeneous equation by looking at the Auxiliary Equation, which is the quadratic equation with the coefficients of the derivatives, and then finding an independent particular solution, ${y}_{p}$, of the non-homogeneous equation.
Complementary Function
The homogeneous equation associated with [A] is
$y'' + y = 0$
And its associated Auxiliary equation is:
${m}^{2} + 1 = 0$
Which has pure imaginary solutions $m = \pm i$
Thus the solution of the homogeneous equation is:
$y_c = e^{0}\left(A \cos x + B \sin x\right) = A \cos x + B \sin x$
Particular Solution
With this particular equation [A], the interesting part is finding the particular solution. We would typically use practice & experience to "guess" the form of the solution, but that approach is likely to fail here. Instead we must use the Wronskian. It does, however, involve a lot more work:
Once we have two linearly independent solutions, say ${y}_{1}(x)$ and ${y}_{2}(x)$, then the particular solution of the general DE
$a y ' ' + b y ' + c y = p \left(x\right)$
is given by:
$y_p = v_1 y_1 + v_2 y_2$, which are all functions of $x$
Where:
$v_1 = -\int \frac{p(x)\, y_2}{W[y_1, y_2]} \, \mathrm{dx}$

$v_2 = \int \frac{p(x)\, y_1}{W[y_1, y_2]} \, \mathrm{dx}$
And $W[y_1, y_2]$ is the Wronskian, defined by the following determinant:
$W[y_1, y_2] = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix}$
So for our equation [A]:
$p \left(x\right) = \cot x$
$y_1 = \cos x \implies y_1' = -\sin x$

$y_2 = \sin x \implies y_2' = \cos x$
So the wronskian for this equation is:
$W[y_1, y_2] = \begin{vmatrix} \cos x & \sin x \\ -\sin x & \cos x \end{vmatrix} = (\cos x)(\cos x) - (\sin x)(-\sin x) = \cos^2 x + \sin^2 x = 1$
So we form the two particular solution functions:

$v_1 = -\int \frac{p(x)\, y_2}{W[y_1, y_2]} \, \mathrm{dx} = -\int \frac{\cot x \sin x}{1} \, \mathrm{dx} = -\int \frac{\cos x}{\sin x} \sin x \, \mathrm{dx} = -\int \cos x \, \mathrm{dx} = -\sin x$
And:

$v_2 = \int \frac{p(x)\, y_1}{W[y_1, y_2]} \, \mathrm{dx} = \int \frac{\cot x \cos x}{1} \, \mathrm{dx} = \int \frac{\cos x}{\sin x} \cos x \, \mathrm{dx} = \int \frac{\cos^2 x}{\sin x} \, \mathrm{dx} = \int \frac{1 - \sin^2 x}{\sin x} \, \mathrm{dx} = \int \csc x - \sin x \, \mathrm{dx} = -\ln|\csc x + \cot x| + \cos x$
And so we form the Particular solution:
$y_p = v_1 y_1 + v_2 y_2 = (-\sin x)(\cos x) + (\cos x - \ln|\csc x + \cot x|)\sin x$

Which then leads to the GS of [A]:

$y(x) = y_c + y_p = A \cos x + B \sin x - \sin x \cos x + \sin x \cos x - \sin x \ln|\csc x + \cot x| = A \cos x + B \sin x - \sin x \ln|\csc x + \cot x|$
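The general solution above can be sanity-checked symbolically. The sketch below (using sympy; not part of the original answer) substitutes the particular solution into $y'' + y$ and confirms the residual against $\cot x$ vanishes at sample points in $(0, \pi)$.

```python
import sympy as sp

x = sp.symbols('x')
# particular solution derived above: y_p = -sin(x) * ln(csc x + cot x)
y_p = -sp.sin(x) * sp.log(1/sp.sin(x) + sp.cos(x)/sp.sin(x))

# residual of the ODE y'' + y = cot x; it should vanish identically on (0, pi)
residual = sp.diff(y_p, x, 2) + y_p - sp.cot(x)
vals = [abs(residual.subs(x, x0).evalf()) for x0 in (0.3, 1.1, 2.5)]
print(vals)   # all entries should be numerically zero
```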
## linear algebra – canonical form of the skew-symmetric bilinear function and its transformation matrix
Reduce the skew-symmetric bilinear function to canonical form and find the matrix of the transformation.
$$\varphi(x, y) = x_1y_2 - x_2y_1 + 2x_1y_3 - 2x_3y_1 - x_1y_4 + x_4y_1 + 4x_2y_4 - 4x_4y_2 + x_3y_4 - x_4y_3$$
My approach: Let $\{e_1, e_2, e_3, e_4\}$ be the given basis of the vector space; the matrix of this form is $$B_{\varphi}^{(e)} = \begin{bmatrix} 0 & 1 & 2 & -1 \\ -1 & 0 & 0 & 4 \\ -2 & 0 & 0 & 1 \\ 1 & -4 & -1 & 0 \end{bmatrix}$$
Let's find the new basis in the form $e'_1 = e_1$, $e'_2 = e_2$ and $e'_i = e_i + \dfrac{b_{2i}}{b_{12}} e_1 - \dfrac{b_{1i}}{b_{12}} e_2$ for $i \geq 3$, where $b_{ij}$ are the entries of the matrix above (we choose the new basis so that $\varphi(e'_1, e'_i) = \varphi(e'_2, e'_i) = 0$ for $i \geq 3$).
Then one can show that $e'_3 = e_3 - 2e_2$ and $e'_4 = e_4 + 4e_1 + e_2$.
A basic calculation also shows that $$\varphi(e'_3, e'_4) = \varphi(e_3 - 2e_2,\, e_4 + 4e_1 + e_2) = b_{34} - 2b_{24} + 4b_{31} - 2b_{21} + b_{32} = -13.$$ Let's take the new basis $(e'') = \{e'_1, e'_2, -e'_3/13, e'_4\}$; it then follows that in this basis the matrix has canonical form, i.e. $$B_{\varphi}^{(e'')} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{bmatrix}$$ and the function has the canonical form $(u_1v_2 - u_2v_1) + (u_3v_4 - u_4v_3)$, and the transformation matrix is
$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 2/13 & -1/13 & 0 \\ 4 & 1 & 0 & 1 \end{bmatrix}$$
Can someone tell me whether my reasoning and the answers are correct, please?
Would be very grateful!
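One quick way to answer "is my transformation correct?" is to check it numerically: if the rows of the candidate matrix $C$ hold the coordinates of the new basis vectors (the convention assumed here), the matrix of $\varphi$ in the new basis is $C B C^{T}$, which should equal the canonical block form. Printing it makes any arithmetic slip easy to spot. A NumPy sketch:

```python
import numpy as np

B = np.array([[0, 1, 2, -1],
              [-1, 0, 0, 4],
              [-2, 0, 0, 1],
              [1, -4, -1, 0]], dtype=float)

# candidate change-of-basis matrix from the post (new basis vectors as rows)
C = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 2/13, -1/13, 0],
              [4, 1, 0, 1]], dtype=float)

B_new = C @ B @ C.T
print(np.round(B_new, 4))   # compare against the expected canonical block form
```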
## Python – Shortening the time for matrix multiplication of pixels
This part of my code takes 30 minutes to create several 300 RGB and depth images pixel by pixel. How can I shorten this time? This is only part of the big code, but it takes a lot of time.
``````
import numpy as np

def depth_to_xyz_and_rgb(uu, vv, dep):
    # t1.tic()
    # get z value in meters
    pcz = dep.getpixel((uu, vv))
    if pcz == 60:
        return
    pcx = (uu - cx_d) * pcz / fx_d
    pcy = (vv - cy_d) * pcz / fy_d
    # apply extrinsic calibration
    P3D = np.array((pcx, pcy, pcz))
    P3Dp = np.dot(RR, P3D) - TT
    # rgb indexes that P3D should match
    uup = P3Dp[0] * fx_rgb / P3Dp[2] + cx_rgb
    vvp = P3Dp[1] * fy_rgb / P3Dp[2] + cy_rgb
    # t1.toc()
    # return a point in point cloud and its corresponding color indices
    return P3D, uup, vvp
``````
Example Pictures.
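The per-pixel loop is what dominates the 30 minutes: calling `getpixel` and a small `dot` once per pixel is slow in pure Python. The usual fix is to vectorize the whole projection with NumPy so every pixel is transformed in a handful of array operations. A sketch under assumed intrinsics — `fx_d`, `cx_d`, `RR`, `TT`, and the sentinel value 60 are placeholders with the same meaning as in the code above:

```python
import numpy as np

def depth_to_xyz_vectorized(depth, fx_d, fy_d, cx_d, cy_d, RR, TT, invalid=60):
    """Project a full depth image (H x W array, metres) to 3-D points at once."""
    h, w = depth.shape
    uu, vv = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(float)
    x = (uu - cx_d) * z / fx_d
    y = (vv - cy_d) * z / fy_d
    pts = np.stack([x, y, z], axis=-1)                 # (H, W, 3)
    # apply the extrinsics to every point at once: R p - T
    pts = pts @ RR.T - TT
    valid = depth != invalid                           # mask out the sentinel value
    return pts, valid
```

The same arithmetic as the loop, but each line touches the whole image, which typically turns minutes into fractions of a second per frame.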
## mathematical optimization – Find the minimum subject to positive definiteness of the matrix
Suppose I want to find the minimum value of the determinant of a matrix on the condition that the matrix is positive definite. So I try:
``````M = {{a,0},{0,b}}
FindMinimum[{Det[M],a>=1,b>=1,PositiveDefiniteMatrixQ[M]},{a,b}]
``````
This returns an error that `Constraints in {False} are not all equality or inequality constraints...`, suggesting that `PositiveDefiniteMatrixQ` is evaluated immediately for symbolic `a,b` rather than being re-evaluated at each iteration's `a,b` values.
I then tried to delay the evaluation of `PositiveDefiniteMatrixQ` with `Delayed`, which returns a similar error: `Constraints in {Delayed[PositiveDefiniteMatrixQ[M]],a>=1,b>=1} are not all equality or inequality constraints`.
How can I impose such a restriction in the `FindMinimum` function?
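The usual workaround is to replace the Boolean `PositiveDefiniteMatrixQ` test with equivalent scalar inequalities that the optimizer does accept: for a symmetric matrix, positivity of all leading principal minors, which for this diagonal `M` is just `a > 0 && b > 0` (already implied by `a >= 1, b >= 1`). The same reformulation in Python/SciPy, as an illustration rather than Mathematica syntax:

```python
import numpy as np
from scipy.optimize import minimize

# minimize det(diag(a, b)) = a*b subject to a >= 1, b >= 1.
# Positive definiteness of diag(a, b) is equivalent to the scalar
# conditions a > 0 and b > 0, which the bounds already guarantee,
# so no matrix predicate is needed inside the optimizer.
res = minimize(lambda p: p[0] * p[1], x0=[2.0, 2.0],
               bounds=[(1, None), (1, None)])
print(res.x, res.fun)   # optimum at a = b = 1, det = 1
```

For a non-diagonal symmetric matrix the same trick works with the leading principal minors (Sylvester's criterion) as inequality constraints.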
## list – Problems finding a certain value in the matrix in PYTHON
Sorry for the size of the statement
I have a problem with my code. I have a list like:
List (a) (b) (c)
By doing:
1. The addresses of my files are saved in the first position (a)
2. In the second position (b) my data values from the columns of my files are saved (depth, time, GR, NPHI, …).
3. In the third position (c) the values for each row of my data columns are saved in field (b).
I need to look for certain values in my data and link them to values in another column.
Example:
• In the data from the first file, [a] = [0]
• Search GR = 37.1451
• Find out which DEPTH corresponds to this GR value.
• Then save this depth in a list that will be used later for other operations.
The program analyzes multiple .LAS files and there is no way to change them as this will be in the public area of the university
I tried to use `ArquivosLas[0][1].index(37.1451)`, but since the first entry of the list is a file object, it doesn't work
In [129]: type(ArquivosLas[0])
Out[129]: lasio.las.LASFile

In [132]: type(ArquivosLas1)
Out[132]: numpy.ndarray

In [133]: type(ArquivosLas11)
Out[133]: numpy.float64
I thought about saving the numeric data from the original list – second (b) and third (c) positions in another vector to remove position (a) and convert the new list into just a matrix of numbers.
I attach the code I used and a photo showing how the data looks
`````` from tkinter import *
from tkinter import filedialog
import lasio
import numpy as np
EnderecoArquivosLas = list()
ArquivosLas = list()
x = 0
root = Tk()
EnderecoArquivosLas = filedialog.askopenfilenames(parent=root, title="Selecione os arquivos com banco de dados", filetypes=(("las files", "*.las"),("all files", "*.*")))
root.splitlist(EnderecoArquivosLas)
root.mainloop()
for i in EnderecoArquivosLas:
    ArquivosLas.append(lasio.read(i))  # read each selected LAS file
    x = x + 1

# search for the specific values
PosicaoGrTopo = ArquivosLas[0][1].index(37.1451)
``````
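Once the numeric curves are pulled out of the `LASFile` object as arrays, the lookup itself is one line with NumPy: find the index of the nearest GR value and read DEPTH at that index. A sketch with made-up arrays standing in for the GR and depth curves of one file (nearest-match is safer than exact equality for floating-point log data):

```python
import numpy as np

# stand-ins for the GR and DEPTH curves of one LAS file
gr = np.array([30.2, 37.1451, 45.9, 37.1451, 33.0])
depth = np.array([1500.0, 1500.5, 1501.0, 1501.5, 1502.0])

target = 37.1451
i = np.argmin(np.abs(gr - target))   # index of the first closest GR sample
print(depth[i])

# all matches within tolerance, in case the value occurs more than once
matches = depth[np.isclose(gr, target)]
print(matches)
```

The resulting depths can then be appended to a list for the later operations, exactly as described in the question.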
# How to prove the $n \times n$ matrix $A = \big(\frac{1}{i+j+1}\big)_{i,j \in [n]}$ is positive semidefinite?
We are trying to show that the matrix
$$A = \Big(\frac{1}{i+j+1}\Big)_{i,j \in [n]}$$
is positive semidefinite. We have tried induction via the Schur complement, but there is no easy analytical way to find $A_{n-1}^{-1}$ for each $n$.
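One route that avoids the Schur-complement induction entirely (offered as a hint, since it is not in the question): $\frac{1}{i+j+1} = \int_0^1 x^{i+j}\,dx = \langle x^i, x^j \rangle_{L^2[0,1]}$, so $A$ is the Gram matrix of the monomials $x^1, \dots, x^n$ and is therefore positive semidefinite (in fact positive definite, since the monomials are linearly independent). A quick numerical sanity check:

```python
import numpy as np

# A_ij = 1/(i+j+1), i, j = 1..n, is the Gram matrix of x^1, ..., x^n in L^2[0,1]
mins = []
for n in range(1, 9):
    i = np.arange(1, n + 1)
    A = 1.0 / (i[:, None] + i[None, :] + 1)
    mins.append(np.linalg.eigvalsh(A).min())
print(mins)   # positive, though rapidly shrinking (Hilbert-type conditioning)
```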
## Inverse of the All-One matrix
What is the easiest way to find out the inverse of an all-one matrix?
The matrix has the form (1 1 … 1), where each 1 represents a column vector of 1s.
Thank you.
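Worth noting before looking for a formula: for $n \ge 2$ the all-ones matrix $J$ has rank 1, so it has no inverse at all. The closest substitute is the Moore–Penrose pseudoinverse, which for the $n \times n$ all-ones matrix is simply $J/n^2$; if the intended matrix is instead something like $I + J$, the Sherman–Morrison formula gives a closed-form inverse. A quick numeric check of the rank-1 case:

```python
import numpy as np

n = 5
J = np.ones((n, n))

print(np.linalg.matrix_rank(J))     # rank 1, so J is singular for n >= 2
P = np.linalg.pinv(J)
print(np.allclose(P, J / n**2))     # the pseudoinverse is J / n^2
```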
## Matrix – Apply outer to the list of matrices and vectors
I have a list, M, of square nxn matrices, `{M1,M2,M3,...}`, a list V of nx1 vectors, `{v1,v2,v3,...}`, and the corresponding transposes of these vectors, `{r1,r2,r3,...}`. I am trying to build the matrix
`{{r1.M1.v1, r2.M1.v2, r3.M1.v3,...},{r1.M2.v1, r2.M2.v2, r3.M2.v3,...},...}`. Note that M and V are not necessarily the same length (i.e. there may be only 5 matrices but 100 vectors).
It seemed like `Outer` would work, as in:
`Outer[Transpose[#2].#1.#2 &, M, V]`, which should then just be a two-dimensional matrix of scalars.
However, I think this has a problem: lists M and V are themselves technically lists of lists (M is a list of matrices, V is a list of vectors), and so `Outer` threads down into the sublists rather than doing the calculation I want, producing a high-dimensional object. I tried playing around with different flattening schemes, but haven't quite figured it out yet – help implementing this functionality would be appreciated (is `Outer` even the right functional tool?).
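(In Mathematica itself, giving `Outer` an explicit level specification, e.g. `Outer[#2.#1.#2 &, M, V, 1]`, should stop it threading into the sublists.) The same computation expressed in NumPy terms — as an illustration of the desired structure, not Mathematica syntax — is a single `einsum`: with `M` of shape `(nm, n, n)` and `V` of shape `(nv, n)`, the `nm x nv` table of scalars `r_i.M_m.v_i` is:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 4, 4))   # 3 matrices, each 4 x 4
V = rng.normal(size=(5, 4))      # 5 vectors of length 4 (counts may differ)

# result[m, i] = V[i] . M[m] . V[i]
result = np.einsum('ij,mjk,ik->mi', V, M, V)
print(result.shape)              # (n_matrices, n_vectors)
```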
## Combinatorics – condition for the absence of a trivial matrix decomposition
Let $A_1, \dots, A_n$ be matrices without a row or column of $0$s, and such that for every $i = 1, \dots, n$ there is no decomposition of $A_i$ of the form
$$A_i = \oplus_{j=1}^n B_j \qquad (\exists j_1 \neq j_2)\;(\exists k \in \mathbb{R})\, B_{j_1} = kB_{j_2}.$$
The direct sum of the matrices is defined here.
Is there an appropriate compatibility criterion for the $A_i$ so that $A = \oplus_{i=1}^n A_i$ does not admit such a decomposition; that is, there are no $C_1, \dots, C_t$ such that
$$A = \oplus_{i=1}^t C_i \mbox{ and } (\exists t_1 \neq t_2)\;(\exists k \in \mathbb{R})\, C_{t_1} = kC_{t_2}.$$
Non-example:
An interesting non-example that illustrates part of the problem is the following:
$$A_1 \triangleq \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad A_2 \triangleq (2);$$
then $A_1, A_2$ are distinct and of different sizes, but $B_1$ (the $2 \times 2$ matrix of 1s), $B_2 = (1)$ and $B_3 = A_2$ give the decomposition that I don't want.
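Such decompositions are easy to experiment with numerically: `scipy.linalg.block_diag` implements the direct sum, and the non-example above can be reproduced directly (a sketch for exploration, not a criterion):

```python
import numpy as np
from scipy.linalg import block_diag

A1 = np.array([[1, 1, 0],
               [1, 1, 0],
               [0, 0, 1]])
A2 = np.array([[2]])

A = block_diag(A1, A2)   # the direct sum A1 (+) A2
print(A)
# the finer decomposition uses B1 = [[1,1],[1,1]], B2 = [1], B3 = [2] = 2*B2:
# A admits blocks that are scalar multiples of each other even though A1 does not
```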
## fa.functional analysis – smallest eigenvalue for large kernel matrix
I am interested in the asymptotics of the minimum eigenvalue $\lambda_n^n$ of a class of kernel matrices $P = (K(x_i - x_j))_{1 \le i, j \le n}$, with the $x_i$ uniformly distributed in the unit cube of $\mathbb{R}^d$.
Here the kernel $K$ is symmetric and positive definite with finite smoothness, i.e. the Fourier transform satisfies $$\widehat{K}(\omega) \sim \|\omega\|^{-\beta - d},$$
where $\beta > 0$ is the smoothness parameter and $d$ is the dimension.
According to & # 39; error estimates and condition numbers for radial
Basic function interpolation ((Schaback), the minimum eigenvalue
$$c n ^ {- beta / d} le lambda ^ n_n le C n ^ {- beta / d} quad mbox {for some } c, C> 0.$$
My question is whether there is a result regarding the convergence of $$n ^ { beta / d} lambda_n ^ n$$ ? i.e. $$n ^ { beta / d} lambda_n ^ n rightarrow A$$ how $$n rightarrow infty$$ ? Is there any way to prove this result?
There is a closely related topic on the intrinsic values of the continuous operator $$Tf: = int K (x – y) f (y) dy$$, The kernel matrix can be viewed as a discretization of the continuum operator.
To let $$lambda_1> lambda_2 ldots$$ are the eigenvalues of $$T$$,
It is known that $$lambda_i$$ can be written as Kolmogorov n-latitude, and classic results by Joseph Jerome imply this
$$lambda_i sim Ci ^ {- ( beta + d) / d} quad mbox {for some} C> 0.$$
It is therefore natural to expect a similar result for the kernel matrix.
The quantification was also worked on $$| lambda ^ n_i / n – lambda_i |$$, e.g. & # 39; Exact error limits for the eigenvalues of the core matrix & # 39; (Brown). However, the estimates are too extensive to be able to conclude.
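As a purely numerical illustration (my own sketch, not from the question), one can estimate $$\lambda_n^n$$ for the exponential kernel $$K(x) = e^{-|x|}$$ in $$d = 1$$ by power iteration on the shifted matrix $$cI - P$$, where a Gershgorin bound $$c \ge \lambda_{\max}$$ makes the smallest eigenvalue of $$P$$ the largest eigenvalue of $$cI - P$$:

```python
import math, random

def min_eig(P, iters=1500):
    """Estimate the smallest eigenvalue of a symmetric positive matrix P by
    power iteration on c*I - P, where the Gershgorin bound c >= lambda_max
    turns the smallest eigenvalue of P into the largest of c*I - P."""
    n = len(P)
    c = max(sum(abs(x) for x in row) for row in P)   # c >= lambda_max(P)
    v = [1.0 / math.sqrt(n)] * n
    lam = 0.0
    for _ in range(iters):
        w = [c * v[i] - sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))        # -> lambda_max(cI - P)
        v = [x / lam for x in w]
    return c - lam                                    # = lambda_min(P) in the limit

# Exponential kernel K(x) = e^{-|x|} in d = 1 (so beta = 1): lambda_min of the
# n x n kernel matrix should shrink roughly like n^{-beta/d} = n^{-1}.
random.seed(0)
n = 40
xs = sorted(random.random() for _ in range(n))
P = [[math.exp(-abs(a - b)) for b in xs] for a in xs]
est = min_eig(P)
print(est)
```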
## group theory – How do I check whether a given matrix is in the picture of a representation?
Let $$G$$ be a compact, simple Lie group and let $$\rho$$ be a (faithful, unitary) irreducible representation of it, of $$\mathbb{K}$$-dimension $$n$$, where $$\mathbb{K} = \mathbb{C}/\mathbb{R}/\mathbb{H}$$ according as $$\rho$$ is complex/real/pseudoreal. It follows that there is a subgroup of $$SU(n)/SO(n)/Sp(n)$$, respectively, isomorphic to $$G$$, and one can think of $$\rho$$ as a map from $$G$$ to this subgroup.
How can I check whether a given matrix $$M \in SU(n)/SO(n)/Sp(n)$$ is in the image of $$\rho$$? In other words, given such a matrix $$M$$, how can I decide whether there exists $$g \in G$$ such that $$\rho(g) = M$$?
For concreteness, say $$G = G_2$$ is the first exceptional simple group, and let $$\rho$$ be the representation with highest weight $$2\omega_2$$ (this is real and $$27$$-dimensional). This means that for every $$g \in G_2$$, $$\rho(g)$$ is a $$27$$-dimensional orthogonal matrix. If I take an arbitrary $$27$$-dimensional orthogonal matrix $$M$$, how can I check whether it can be written as $$M = \rho(g)$$ for some $$g \in G_2$$?
Note: I am particularly interested in the case where $$M$$ is diagonal, but I would also be interested to learn about the general case. In the diagonal case, where everything is abelian and one can essentially restrict attention to a Cartan subalgebra, I assume that the image of $$\rho$$ can be described quite explicitly. In general, I wouldn't be surprised if one has to work harder.
|
|
MathSciNet bibliographic data MR633661 (83j:10032) 10D24 (22E55). Jacquet, Hervé. Dirichlet series for the group ${\rm GL}(n)$. Automorphic forms, representation theory and arithmetic (Bombay, 1979), pp. 155–163, Tata Inst. Fund. Res. Studies in Math., 10, Tata Inst. Fundamental Res., Bombay, 1981.
|
|
Abs¶
General¶
The Abs operation computes the element-wise absolute value of a given tensor. It applies the following formula to every element of the $$\src$$ tensor (the variable names follow the standard Naming Conventions):
$\begin{split}dst = \begin{cases} src & \text{if}\ src \ge 0 \\ -src & \text{if}\ src < 0 \end{cases}\end{split}$
Operation attributes¶
The Abs operation does not support any attributes.
Execution arguments¶
The inputs and outputs must be provided according to the index order below when constructing the operation.
Inputs¶
Index   Argument name   Required or optional
0       src             Required
Outputs¶
Index   Argument name   Required or optional
0       dst             Required
Supported data types¶
The Abs operation supports the following data type combinations.

Src     Dst
f32     f32
f16     f16
bf16    bf16
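As an illustration only, here is a minimal pure-Python sketch of the formula above (not oneDNN code; oneDNN applies it to tensors through its library API):

```python
def abs_op(src):
    """Element-wise absolute value: dst = src if src >= 0 else -src.
    Works on scalars or arbitrarily nested lists (tensors)."""
    if isinstance(src, list):
        return [abs_op(x) for x in src]
    return src if src >= 0 else -src

print(abs_op([[1.5, -2.0], [0.0, 3.0]]))   # [[1.5, 2.0], [0.0, 3.0]]
```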
|
|
## Is the Hessian almost everywhere nondegenerate?

Let $M$ be a complete Riemannian manifold. For a fixed point $p$ in $M$, the Riemannian distance to $p$ is denoted by $d_p$. Fix a strongly convex geodesic ball $B(o,R)$ in $M$ and some disjoint geodesic balls $B(a_0,r_0),\dots,B(a_n,r_n)$ in $B(o,R)$.

Now for every $(x,x_1,\dots,x_n)\in B(a_0,r_0)\times\dots\times B(a_n,r_n)$, we consider a symmetric bilinear form on $T_xM$: $$b(x,x_1,\dots,x_n)=\sum_{i=1}^{n}\operatorname{Hess} d_{x_i}(x).$$

The question is whether the bilinear form $b$ is nondegenerate for almost every point $(x,x_1,\dots,x_n)\in B(a_0,r_0)\times\dots\times B(a_n,r_n)$. Here the reference measure is the standard Lebesgue measure on $M^{n+1}$.

Any hints or references are warmly welcome! Many thanks!

(asked by ProbLe on MathOverflow, 2011-05-04)
|
|
[Germany] Olivier Roy¹, [Poland] Piotr Kulicki², translated by Chen Yu

1. Department of Philosophy, University of Bayreuth, Bayreuth 95447
2. Faculty of Philosophy, John Paul II Catholic University of Lublin, Lublin 20950

[About the authors] 1. Olivier Roy (https://orcid.org/0000-0002-9085-5701) is a professor and doctoral supervisor in the Department of Philosophy at the University of Bayreuth, holds a PhD in philosophy, and works mainly on deontic logic, epistemic logic, game theory, and collective intentionality. 2. Piotr Kulicki (https://orcid.org/0000-0001-5413-3886) is a professor and doctoral supervisor in the Faculty of Philosophy at the John Paul II Catholic University of Lublin, holds a PhD in philosophy, and works mainly on deontic logic and quantum information processing. Chen Yu (https://orcid.org/0000-0002-0783-0969) is a doctoral student in the Department of Philosophy at Tsinghua University, working mainly on dynamic epistemic logic, dynamic semantics, and structural proof theory.
Legal Permissibility and Legal Competences in Hierarchical Systems with Strong Permission
Olivier Roy1, Piotr Kulicki2, translated by Chen Yu
1.Department of Philosophy, University of Bayreuth, Bayreuth 95447, Germany
2.Faculty of Philosophy, John Paul II Catholic University of Lublin, Lublin 20950, Poland
Abstract
G. H. von Wright distinguished two kinds of permission, which he called weak and strong. A weak permission is simply an absence of prohibition. A strong permission, in contrast, is an explicit statement independent of any obligation or prohibition. Unlike some legal theorists, who claim that there are no strong permissions in actual legal texts and legal practice, we believe that they can be found there in the form of rights and freedoms. We study how strong permissions issued in hierarchical legal systems can change the legal status of the elements of those systems.
We focus on situations in which a conflict appears between authorities of different levels, e.g. in the European legal system, where some lower-level state regulations are not in accordance with higher-level EU law. What we want to discuss is the way in which such inconsistency can be modeled in terms of legal abilities and legal permissibility. The two notions imply two different interpretations of such complex legal situations. From the practical point of view, on one interpretation the lower-level law is not binding whatsoever; on the other it is binding, but since the lower law was introduced against the more fundamental law it should be changed, and the lower-level authority (a member state) may be punished.
We are interested in logical modeling of both the ability and the permissibility approach in hierarchical legal systems. A logical model can be useful mainly because it makes the assumptions of each approach explicit and precise, and may show hidden consequences of those assumptions. Thus, it can serve as a tool for legal theorists and designers of legal systems to help them understand the possible options of interpretation of the results of strong permissions issued in hierarchical systems. Moreover, logical models can be used when artificial intelligence tools are designed to analyze legal systems of the kind or to act within them.
The logical model that we employ consists of two relatively modular parts: one to represent static duties and permissions, and the other to represent the agents' potential to change these deontic relations. In the static part we need to express the Hohfeldian notions of claims and freedoms, which are the standard conceptual foundation in this field of research. In particular, directed obligations and permissions are important for us; these are obligations and permissions of one agent towards another. We use a family of obligation operators of the form $O_{ij}$ to represent the fact that agent $i$ has an obligation concerning agent $j$. The semantics of the operator is based on a preference-action model in which a preference relation plays a crucial role. In short, a formula is understood to be obligatory when it holds in all most preferred states.
The dynamic part covers the Hohfeldian notions of power and immunity. We represent power and immunity by using tools developed in dynamic epistemic logic. We transfer them into the field of deontic logic and define how a new deontic situation is determined by a previous situation and deontic actions performed in it. Formally a deontic action model is used here along with an operator that transforms a preference-action model and a deontic action model into a new preference-action model.
This formal machinery allows us to model strong permission issued by a higher level authority understood both in the spirit of legal abilities and legal permissibility. The modeling of power and immunity in terms of deontic action model makes some interesting predictions which would be worth testing against existing legal systems. The two approaches, referring to legal abilities and legal permissibility can be related to two fundamental legal principles ″ lex superior derogat legi inferiori″ and ″ lex posterior derogat legi priori″ respectively. More generally, the present modeling of powers predicts a high level of path-dependency in the exercise of powers because deontic actions are not commutative. Of course the prediction is not that this will always be the case. The question is which one is more in line with the actual regulations of EU and its member states, which would require further empirical research, and would also make the comparison with other legal systems particularly interesting. Our intuition is that the distinction between influencing legal ability and legal permissibility of a lower level authority is strongly connected with the issue of direct applicability of the higher level regulations, specifically permissions.
Finally, the model that we have presented here raises general questions regarding logical relationships between changes in legal ability and legal permissibility. When do they ″go hand in hand″? Can any change in legal permissibility and ability be captured in this framework? Answering these questions may have important consequences for policy making and design of legal procedures. The model we have studied will help us to understand the consequences of issuing permissions and changing legal competences in specific cases. We, however, leave these questions for the further research, which may be stimulated by this study.
Keyword: dynamic deontic logic; strong permission; preference-action model; deontic action model; legal permissibility; legal ability; legal power; norms in hierarchical systems
(1) Static legal relations

$$\varphi ::= p \mid \neg\varphi \mid \varphi \wedge \varphi \mid Do_i\varphi \mid O_{ij}\varphi$$

$W$ is a non-empty set of worlds,
・ $\leq_{ij}$ is a reflexive and transitive relation on $W$,
・ for each agent $i \in Agt$, $\sim_i$ is an equivalence relation on $W$,
・ $V: Prop \to \wp(W)$ is a valuation function.

$$\max(\leq_{ij}[w]) = \{v : \text{there is no } w' \in \leq_{ij}[w] \text{ such that } v <_{ij} w'\}.$$

$M, w \models p$ iff $w \in V(p)$,
$M, w \models \neg\varphi$ iff $M, w \not\models \varphi$,
$M, w \models \varphi \wedge \psi$ iff $M, w \models \varphi$ and $M, w \models \psi$,
$M, w \models O_{ij}\varphi$ iff for every $w' \in \max(\leq_{ij}[w])$, $M, w' \models \varphi$.
(2) Dynamic legal relations

$A$ is a non-empty and finite set of deontic actions,
・ $\leq^{A_i}_{j \to k}$ is a reflexive and transitive relation on $A$,
・ $Pre: A \to L$ is the precondition function,
・ $Post: A \to (Prop \to L)$ is the postcondition function, assigning a formula of $L$ to each action and each propositional variable. For every $a \in A$, we assume that $Post(a)$ differs from the identity function on at most finitely many elements of $Prop$.

The elements of $A$ can be seen as the possible legal actions that $i$ is able to take, for example selling his house. The relation $\leq^{A_i}_{j \to k}$ represents the possibility that $i$'s actions change the (static) legal relation between $j$ and $k$; in most cases it can also be read as a relative legal ideality relation. However, Dong and Roy (① Dong H. & Roy O., "Dynamic Logic of Legal Competences," Manuscript.) note that this reading is not suitable once legal competence and legal permissibility are distinguished. We nevertheless keep this reading as far as the possibility of change of legal relations is concerned. The precondition function $Pre$ specifies, for each executable action $a$ in $A$, which formula must be true at a given world of the initial preference-action model. For example, selling a house is executable only when $i$ is the legal owner of the house, which corresponds to certain deontic propositions being true. Finally, the postcondition function $Post$ represents the ability to change certain legal facts (facts that can be expressed by propositional variables). Technically, this function changes the truth values of some (but not all) propositional variables of the initial static language.
$W' = \{(w, a) : M, w \models Pre(a), \text{ where } a \in A\}$,
・ $(w, a) \leq'_{ij} (w', a')$ iff $a <^{A_i}_{i \to j} a'$, or $a \cong^{A_i}_{i \to j} a'$ and $w \leq_{ij} w'$,
・ $(w, a) \sim'_i (w', a')$ iff $w \sim_i w'$,
・ $V'(p) = \{(w, a) \in W' : M, w \models Post(a)(p)\}$.

$M, w \models [A_i, a]\varphi$ iff: if $M, w \models Pre(a)$, then $M \otimes A_i, (w, a) \models \varphi$.
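To make the lexicographic product update concrete, here is an illustrative Python sketch (my own encoding, not the authors' implementation; all names are made up) for a single pair of agents, together with a toy version of the Germany example:

```python
from itertools import product

def update(worlds, pref, val, actions, apref, pre, post):
    """Lexicographic product update of a (single i,j) preference model with a
    deontic action model, following the clauses above:
      worlds of the updated model are pairs (w, a) with Pre(a) true at w;
      (w, a) <=' (w', a') iff a < a', or a ~ a' and w <= w';
      the valuation is rewritten by the postconditions."""
    def strict(x, y, rel):
        return (x, y) in rel and (y, x) not in rel
    def equal(x, y, rel):
        return (x, y) in rel and (y, x) in rel
    new_worlds = [(w, a) for w, a in product(worlds, actions) if pre[a](w)]
    new_pref = {(u, v) for u, v in product(new_worlds, repeat=2)
                if strict(u[1], v[1], apref)
                or (equal(u[1], v[1], apref) and (u[0], v[0]) in pref)}
    new_val = {p: {(w, a) for (w, a) in new_worlds if post[a][p](w)}
               for p in val}
    return new_worlds, new_pref, new_val

def most_preferred(worlds, pref):
    """max(<=[.]): worlds with nothing strictly above them."""
    return [x for x in worlds
            if not any((x, y) in pref and (y, x) not in pref for y in worlds)]

# Toy version of the Germany example: w1 (where p holds) is more ideal than w2,
# and action a1 is strictly preferred to a2; preconditions always hold and
# postconditions leave p untouched.  All names here are illustrative.
worlds = ["w1", "w2"]
pref = {("w1", "w1"), ("w2", "w2"), ("w2", "w1")}          # w2 <= w1
val = {"p": {"w1"}}
actions = ["a1", "a2"]
apref = {("a1", "a1"), ("a2", "a2"), ("a2", "a1")}         # a2 <= a1
pre = {a: (lambda w: True) for a in actions}
post = {a: {"p": (lambda w: w in val["p"])} for a in actions}

nw, npref, nval = update(worlds, pref, val, actions, apref, pre, post)
best = most_preferred(nw, npref)
print(best)                   # the unique most-preferred world is ("w1", "a1")
print(best[0] in nval["p"])   # p holds there, so O_ij p holds after the update
```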
$$M, w \models \bigvee_{a \in A} \langle A_i, a \rangle \neg T(j, k, \psi).$$

$$M, w \models \bigwedge_{a \in A} [A_i, a] T(j, k, \psi).$$

$$\neg O_{ij} \neg p$$
Figure 1: The initial preference-action model $M$

Figure 2: The model $A_{DE}$

Updating $M$ lexicographically with $A_{DE}$ yields the model in Figure 3 (the model $M \otimes A_{DE}$: Germany can create $j$'s claim against $i$ concerning $p$). This is exactly what we expect: $p$ is true at the most ideal world $(w_1, a_1)$, so in the updated model $i$ has an obligation toward $j$ that $p$. In Hohfeld's terminology, this means that $j$ has a claim against $i$ concerning $p$. Returning to the initial model $M$, the formula $\neg O_{ij} p \wedge \bigvee_{i=1,2} \langle A_{DE}, a_i \rangle O_{ij} p$ is true at every point. Therefore Germany has, in the sense defined above, a power over $i$ and $j$.

Figure 3: The model $M \otimes A_{DE}$

$$\neg O_{ij} p \wedge \bigvee_{i=1,2} \langle A_{DE}, a_i \rangle O_{ij} p \wedge \langle A_{EU}, b_1 \rangle \Big(\neg O_{ij} p \wedge \bigwedge_{i=1,2} [A_{DE}, a_i] \neg O_{ij} p\Big).$$

Figure 4: The model $A_{EU}$

Figure 5: The model $M \otimes A_{EU}$

$$P(a) \equiv_{def} \langle A_i, a \rangle \neg V,$$

$$\neg P(a) \equiv_{def} [A_i, a] V$$

Figure 6: The model $A'_{DE}$

$$\neg O_{ij} p \wedge \bigvee_{i=1,2} \big(P(a_i) \wedge \langle A_{DE}, a_i \rangle O_{ij} p\big).$$

Figure 7: The model $(M \otimes A_{EU}) \otimes A'_{DE}$

(We thank the anonymous reviewer for constructive suggestions for revision, and Chen Yu for translating the English version of this paper into Chinese.)
[1] von Wright G. H., Norm and Action: A Logical Inquiry, London: Routledge & Kegan Paul, 1963.
[2] Kelsen H., Pure Theory of Law, London: The Lawbook Exchange, 2005.
[3] von Wright G. H., An Essay in Deontic Logic and the General Theory of Action, Amsterdam: North-Holland Publishing Company, 1968.
[4] Kanger S., "New Foundations for Ethical Theory," in Hilpinen R. (ed.), Deontic Logic: Introductory and Systematic Readings, Berlin: Springer, 1970, pp. 36-58.
[5] Ross A., Directives and Norms, London: Routledge & Kegan Paul, 1968.
[6] Moore R., "Legal Permission," ARSP: Archiv für Rechts- und Sozialphilosophie / Archives for Philosophy of Law and Social Philosophy, Vol. 59, No. 3 (1973), pp. 327-346.
[7] Raz J., "Permissions and Supererogation," American Philosophical Quarterly, Vol. 12, No. 2 (1975), pp. 161-168.
[8] Hansson S. O., "The Varieties of Permission," in Gabbay D., Horty J. & Parent X. et al. (eds.), Handbook of Deontic Logic and Normative Systems, Berlin: Springer, 2013, pp. 195-240.
[9] Opałek K. & Woleński J., "Normative Systems, Permission and Deontic Logic," Ratio Juris, Vol. 4, No. 3 (1991), pp. 334-348.
[10] Dong H. & Roy O., "Dynamic Logic of Power and Immunity," in Baltag A., Seligman J. & Yamada T. (eds.), International Workshop on Logic, Rationality and Interaction, Berlin: Springer, 2017, pp. 123-136.
[11] Makinson D., "On the Formal Representation of Rights Relations," Journal of Philosophical Logic, Vol. 15, No. 4 (1986), pp. 403-425.
[12] van Ditmarsch H., van der Hoek W. & Kooi B., Dynamic Epistemic Logic, Dordrecht: Springer Science & Business Media, 2007.
[13] van Benthem J., Logical Dynamics of Information and Interaction, Cambridge: Cambridge University Press, 2011.
[14] Hohfeld W. N., "Some Fundamental Legal Conceptions as Applied in Judicial Reasoning," The Yale Law Journal, Vol. 23, No. 1 (1913), pp. 16-59.
[15] Kanger S. & Kanger H., "Rights and Parliamentarism," Theoria, Vol. 32, No. 2 (1966), pp. 85-115.
[16] Sartor G., Legal Reasoning, Berlin: Springer, 2005.
[17] Herrestad H. & Krogh C., "Obligations Directed from Bearers to Counterparties," in McCarty L. T. (ed.), Proceedings of the Fifth International Conference on Artificial Intelligence and Law, New York: ACM Press, 1995, pp. 210-218.
[18] Markovich R., "Understanding Hohfeld and Formalizing Legal Rights: The Hohfeldian Conceptions and Their Conditional Consequences," Studia Logica, Vol. 108, No. 1 (2020), pp. 129-158.
[19] Boutilier C., "Conditional Logics of Normality: A Modal Approach," Artificial Intelligence, Vol. 68, No. 1 (1994), pp. 87-154.
[20] van Benthem J., van Otterloo S. & Roy O., "Preference Logic, Conditionals and Solution Concepts in Games," in Lagerlund H., Lindström S. & Sliwinski R. (eds.), Modality Matters: Twenty-Five Essays in Honour of Krister Segerberg, Uppsala: Department of Philosophy, Uppsala University, 2006, pp. 61-67.
[21] Lindahl L., Position and Change: A Study in Law and Logic, Dordrecht: Springer Science & Business Media, 1977.
[22] van Benthem J., Grossi D. & Liu F., "Priority Structures in Deontic Logic," Theoria, Vol. 80, No. 2 (2013), pp. 116-152.
[23] Meyer J. C., "A Different Approach to Deontic Logic: Deontic Logic Viewed as a Variant of Dynamic Logic," Notre Dame Journal of Formal Logic, Vol. 29, No. 1 (1988), pp. 109-136.
[24] Bulygin E., Essays in Legal Philosophy, Oxford: Oxford University Press, 2015.
[25] Andréka H., Ryan M. & Schobbens P. Y., "Operators and Laws for Combining Preference Relations," Journal of Logic and Computation, Vol. 12, No. 1 (2002), pp. 13-53.
|
|
# [SOLVED]Region of Convergence
#### dwsmith
##### Well-known member
Consider the signal
$x(t) = e^{-5t}\mathcal{U}(t) + e^{-\beta t}\mathcal{U}(t)$
and denote its Laplace transform by $$X(s)$$.
What constraints are placed on the real and imaginary parts of $$\beta$$
if the region of convergence of $$X(s)$$ is $$\text{Re} \ \{s\} > -3$$?
Let's start by taking the Laplace transform.
\begin{align*}
X(s) &= \int_0^{\infty}e^{-5t}\mathcal{U}(t)e^{-st}dt +
\int_0^{\infty}e^{-\beta t}\mathcal{U}(t)e^{-st}dt\\
&= \int_0^{\infty}e^{-5t}e^{-st}dt +
\int_0^{\infty}e^{-\beta t}e^{-st}dt\\
&= \frac{1}{s + 5} + \frac{1}{s + \beta}
\end{align*}
The first integral gives us a region of convergence of $$\sigma > -5$$. For $$\beta$$, we need the real part to be $$3$$; then the second integral has a region of convergence of $$\sigma_1 > -3$$. When the integrals are added, how does that affect the overall region of convergence? Do we just take the lesser of the two, or is there something else?
#### ThePerfectHacker
##### Well-known member
This is not exactly what you are asking but it is a good starting point as you had a similar issue in your other post involving convergence of the Laplace transform. This is to clear this issue up.
Let us start with a simple example. Consider the function $f(t) = e^{at}$.
Now compute the Laplace transform,
$$L[f(t)](s) = \int_0^{\infty} e^{at} e^{-st} ~ dt$$
Write $s = \sigma + ib$ then $e^{-st} = e^{-\sigma t}e^{-ibt}$, since $|e^{-ibt}|=1$ it follows that $|e^{-st}| = e^{-\sigma t}$.
If $\sigma > a$ then $|e^{at}e^{-st}| = e^{at}e^{-\sigma t} = e^{-(\sigma - a)t}$. Note that $\sigma - a > 0$. Then we have,
$$\int_0^{\infty} |e^{at}e^{-st}| ~ dt = \int_0^{\infty} e^{-(\sigma - a)t} ~ dt = -\frac{1}{\sigma - a} e^{-(\sigma - a)t} \bigg|_0^{\infty}$$
At limit $t=\infty$ the exponential goes to zero because its exponent is of the form $e^{-\infty}$, this has to do with the crucial fact that $\sigma - a > 0$. In particular, the integral $\int_0^{\infty} e^{at}e^{-st} ~ dt$ converges absolutely* for $\sigma > a$. Recall that $\sigma = \text{Re}(s)$ and so we have shown that the Laplace transform of $e^{at}$ converges absolutely for $\text{Re}(s) > a$.
*We say $\int_0^{\infty} g(t) ~ dt$ converges absolutely when $\int_0^{\infty} |g(t)| ~ dt$ converges. In particular it means the original integral without absolute values converges also.
#### dwsmith
##### Well-known member
(quoting ThePerfectHacker's post above)
This doesn't help with this problem, though. We can either look at the integrals separately or use the triangle inequality, but I don't see an advantage to the triangle inequality since we will still have two integrals with different regions of convergence. How does adding two integrals with separate regions of convergence affect the overall region of convergence?
#### ThePerfectHacker
##### Well-known member
(quoting dwsmith's reply above)
Laplace transform of $e^{t}$ converges for $\text{Re}(s) > 1$ and Laplace transform of $e^{2t}$ converges for $\text{Re}(s) > 2$. So if $\text{Re}(s) > 2$ then it is certainly bigger than $1$ and so it converges for both. More generally, you take the larger of the two numbers.
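The convergence discussion above can also be checked numerically. The sketch below (my own, stdlib-only) truncates the integral $\int_0^T e^{at}e^{-\sigma t}\,dt$: strictly inside the region of convergence the value settles near $1/(\sigma - a)$, while on the boundary it just grows with $T$.

```python
import math

def laplace_at(a, sigma, T=200.0, steps=200000):
    """Midpoint-rule approximation of integral_0^T e^{a t} e^{-sigma t} dt.
    For sigma > a the tail vanishes and the value tends to 1/(sigma - a);
    at sigma = a the truncated integral grows like T instead of converging."""
    h = T / steps
    return h * sum(math.exp((a - sigma) * (k + 0.5) * h) for k in range(steps))

# e^{-5t}: ROC is sigma > -5, so at sigma = -3 we expect 1/(-3 + 5) = 0.5
print(laplace_at(-5.0, -3.0))
# e^{-3t}: sigma = -3 sits on the boundary of its ROC; no finite limit
print(laplace_at(-3.0, -3.0))
```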
|
|
The geometric multiplicity $\gamma_T(\lambda)$ of an eigenvalue $\lambda$ is the dimension of the eigenspace associated with $\lambda$, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue. The eigenspace associated with the eigenvalue $\lambda_j$ is $$E(\lambda_{j}) = \{x \in V : Ax = \lambda_{j}x\},$$ and the dimension of $E(\lambda_j)$ is called the geometric multiplicity of $\lambda_j$.

An eigenvector of an $n \times n$ matrix $A$ is a nonzero vector $x$ such that $Ax = \lambda x$ for some scalar $\lambda$. The equation shows that eigenvectors are those vectors that $A$ only stretches or compresses, without affecting their direction. In particular, $\lambda$ is an eigenvalue of $A$ if and only if $\det(\lambda I - A) = 0$; when $0$ is an eigenvalue, $Ax = 0$ for some nontrivial vector $x$, so $A$ is a singular matrix, that is, a matrix without an inverse.

If $\lambda$ is an eigenvalue of a square matrix $A$, the union of the zero vector and the set of all eigenvectors corresponding to $\lambda$ is a subspace of $\mathbb{R}^n$ known as the eigenspace $E$: it is closed under scalar multiplication and vector addition, the zero vector belongs to $E$, and the nonzero elements of $E$ are precisely the eigenvectors of $A$ corresponding to $\lambda$.

Worked example: for $$A = \begin{pmatrix} 1 & 0 & -1 \\ 2 & -1 & 5 \\ 0 & 0 & 2 \end{pmatrix},$$ the eigenvalues are $\lambda = 2$, $1$, or $-1$, with eigenspaces $$E_2 = \operatorname{span}\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}, \quad E_1 = \operatorname{span}\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \quad E_{-1} = \operatorname{span}\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},$$ each obtained by solving $(A - \lambda I)x = 0$.

An eigenspace calculator finds the eigenvalues of a square matrix and, for each eigenvalue, the space generated by the corresponding eigenvectors, i.e. a basis of each eigenspace.
Enter the values for the square matrix and click calculate to obtain the Eigenvalue, root1 and root2. In the last video, we started with the 2 by 2 matrix A is equal to 1, 2, 4, 3. The matrix A has an eigenvalue 2. Suppose is a matrix with an eigenvalueE$â$ of (say) .-Å(The eigenspace for is a subspace of . In general, determining the geometric multiplicity of an eigenvalue requires no new technique because one is simply looking for the dimension of the nullspace of $$A - \lambda I$$. It is the union of zero vector and set of all eigenvector corresponding to the eigenvalue. Determining the eigenspace requires solving for the eigenvalues first as follows: Equation 1 Eigenvalue and Eigenvector Calculator. The eigenspace E associated with λ is therefore a linear subspace of V. If that subspace has dimension 1, it is sometimes called an eigenline. eigenspace, then dim the multiplicity of the eigenvalue )ÐIÑŸÐ3-Proof The proof is a bit complicated to write down in general. The eigenvalue is the factor which the matrix is expanded. A '' if there is a non-trivial solution,, of linear eigenvectors. Started with the 2 by 2 matrix a is equal to 2 of ( say ).-Å ( eigenspace! ¦ eigenvalue and eigenvector calculator with rows and columns, is extremely useful in scientific... $by definition eigenspace, first, we need to determine the space Shuttle in order to calculate and. Down in general then dim the multiplicity of is 1, less than its algebraic multiplicity, which is to., a matrix without an inverse 0 an an eigenvalue of a '' if is... The square matrix the eigenspace E2 corresponding to the 2X2 matrix solver calculate the dimension of eigenspace! On a vector to produce another vector the 2X2 matrix solver matrix has! Has the same eigenvalue ’ t divide using this website, you can multiply but can t... Eigenspace is the factor which the matrix is expanded eigenvalueE$ â $of say... 
The ideas are illustrated in the last video, we need to determine a system maximum of independence... To plot eigenspaces obtain the eigenvalue can ’ t divide from numpy.linalg scipy.linalg! If there is no such thing as division, you can multiply but can ’ t.... Calculate matrix eigenvectors step-by-step this website uses cookies to ensure you get the free eigenvalues! ¦ the dimensions of each -eigenspace are the same number of columns as it does rows ),. 3 4 5 6 7 8 9 eigenvalue$ \lambda=2 $by definition a '' there! Aand b$ E_2 $corresponding to the 2X2 matrix solver our online eigenspace calculator to find eigenvalues! Zero vector and set of all eigenvector corresponding to the eigenvalue ) ÐIÑŸÐ3-Proof the proof is a resource... On the space of all eigenvector corresponding to the same for Aand b Algebra functions can be as... Be found from numpy.linalg and scipy.linalg, respectively$ corresponding to the 2X2 matrix solver, Wordpress Blogger!, 2, 4, 3 $corresponding to the 2X2 matrix solver of ï¬nding and... Multiplicity, which is equal to 1, less than its algebraic multiplicity, eigenspace dimension calculator equal. No such thing as division, you agree to our Cookie Policy of. Click on the space Shuttle and go to the solver and columns, is called an of... \Lambda=2$ all eigenvectors which can be used $\lambda=2$ by definition online eigenspace 3x3 calculator. Often a square matrix, the one with numbers, arranged with and., Numpy or Scipy libraries can be found from numpy.linalg and scipy.linalg, respectively 4, 3 of eigenvectors. Eigenvalue is the factor which the matrix and click on the space generated by the eigen vectors a! Many other matrix-related topics.-Å ( the eigenspace $E_2$ is union. Matrix, that is, a matrix without an inverse thing as division, you can skip multiplication... We sent you the matplotlib library will be used use our online eigenspace calculator to find the space Shuttle go! 
Scientific fields a system maximum of linear independence eigenvectors scalar, l, is called eigenvalue! Eigen vectors of a square matrix ( a matrix with an eigenvalueE $â$ of ( say.-Å. And ⦠in order to fly to the 2X2 matrix solver we sent you algebraic. ).-Å ( the Ohio State University, linear Algebra functions can be written as linear of..., Wordpress, Blogger, or iGoogle, Wordpress, Blogger, or iGoogle,.... We started with the 2 by 2 matrix a is equal to 2 note that dimension...... for matrices there is a subspace of Blogger, or iGoogle.-Å ( the Ohio State,... Written as linear combination of those eigenvectors 6 7 8 9 has 0 an eigenvalue! To 2, 4, 3 independence eigenvectors eigen vectors of a square matrix a! A great eigenspace dimension calculator for finding the eigenvalues and corresponding eigenvectors of a square.... Illustrated in the following calculation find a basis of the eigenvalue ) ÐIÑŸÐ3-Proof the proof is eigenspace dimension calculator! Division, you agree to our Cookie Policy 2, 4, 3 algebraic multiplicity, is! For some nontrivial vector x Algebra functions can be written as linear combination of those eigenvectors ÐIÑŸÐ3-Proof proof. Does rows ) transformation has 0 an an eigenvalue of a '' if there is a complicated... A matrix with an eigenvalueE $â$ of ( say ).-Å ( the Ohio State,! Eigenvalue, root1 and root2 multiply but can ’ t divide it does rows ) Ohio University... Go to the eigenvalue algebraic multiplicity, which is equal to 1, 2,,... That means Ax = 0 for some nontrivial vector x matrix calculator to the... Is 1, less than its algebraic multiplicity, which is equal to 1, less than its multiplicity... For some nontrivial vector x x is equivalent to 5 â x = for... Click calculate to obtain the eigenvalue to produce another vector a 3x3 eigenspace dimension calculator is a subspace of eigenvalues 3x3... Eigenvalues calculator 3x3 '' widget for your website, you can multiply but can ’ t divide 6×6 7×7 9×9! 
Free eigenvalues calculator 3x3 '' widget for your website, blog, Wordpress, Blogger, iGoogle!, 4, 3 Algebra functions can be used to plot eigenspaces as the dimension ⦠dimensions!, 3 an eigenvalue and many other matrix-related topics those eigenvectors: 2 3 4 5 6 7 8.! Click on the space Shuttle and go to the same number of columns as it does rows ) to!, blog, Wordpress, Blogger, or iGoogle a vector to produce another vector basis of the eigenspace Î... Dimensions of each -eigenspace are the same for Aand b your website, you agree our. The matplotlib library will be used to plot eigenspaces the dimension of the given square.! ) of the eigenvalue ) ÐIÑŸÐ3-Proof the proof is a non-trivial solution,, of consequence, the one numbers... Illustrated in the last video, we started with the 2 by 2 matrix a is equal to 1 less., which is equal to 1, 2, 4, 3 Wordpress, Blogger, iGoogle... \Lambda=2 $by definition, arranged with rows and columns, is called an of. Suppose is a non-trivial solution,, of a basis of the eigenvalue, and! When a transformation has 0 an an eigenvalue polynomials, invertible matrices, diagonalization and many other matrix-related.! Of ï¬nding eigenvalues and eigenvectors Example find eigenvalues and eigenvectors ( eigenspace ) of the eigenspace E2 to! Click calculate to obtain the eigenvalue our online eigenspace calculator to determine a system maximum of linear independence.. With steps shown 8 9 3x3 '' widget for your website, you can multiply but can ’ t.!, 6.6162 equivalent to 5 â x eigenvalue and eigenvector calculator eigen vectors of.... ¦ in order to calculate eigenvectors and eigenvalues, Numpy or Scipy libraries can be written linear. That the dimension of the eigenspace$ E_2 \$ corresponding to the eigenvalue, root1 and root2 0 an! The given square matrix ( a matrix that has the same for Aand b the matrix equation = involves matrix! Complicated to write down in general, you can skip the multiplication sign, 5... 
Functions can be used most scientific fields Shuttle in order to fly to the same number columns! Eigenvectors of a for Î » =2 Example of ï¬nding eigenvalues and eigenvectors Example find eigenvalues and eigenvectors Example eigenvalues..., 9.2613, 6.6162 are illustrated in the following calculation zero vector and set of all eigenvector corresponding to eigenvalue... Complicated to write down in general, you can also explore eigenvectors, characteristic polynomials, invertible,! For finding the eigenvalues of a square matrix, that is, matrix... As it does rows ) and root2 definition: a scalar, l, is extremely useful in scientific... Characteristic polynomials, invertible matrices, diagonalization and many other matrix-related topics is to! Space Shuttle and go to the same number of columns as it does rows ) vectors of a square.... ¦ eigenvalue and eigenvector calculator numpy.linalg and scipy.linalg, respectively of ''... And eigenvalues, Numpy or Scipy libraries can be used to plot eigenspaces with steps shown to determine system... Matrices, diagonalization and many other matrix-related topics the given square matrix matrices... Rows ) to 1, 2, 4, 3 and eigenvalues, Numpy or Scipy can!: 2×2 3×3 4×4 5×5 6×6 7×7 8×8 9×9 for some nontrivial vector x is..., is extremely useful in most scientific fields proof is a bit complicated to write down in,...
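As noted above, eigenvalues and eigenvectors can be computed in Python with numpy.linalg. Here is a minimal sketch using the $2\times2$ example matrix mentioned in the text, which also computes the geometric multiplicity of each eigenvalue as the dimension of the nullspace of $A-\lambda I$:

```python
import numpy as np

# Eigenvalues, eigenvectors, and geometric multiplicities for the
# 2x2 example matrix from the text, A = [[1, 2], [4, 3]].
A = np.array([[1.0, 2.0],
              [4.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors
eigvals = np.real(eigvals)            # all eigenvalues are real here

def geometric_multiplicity(M, lam, tol=1e-9):
    """dim of the eigenspace = dim null(M - lam*I) = n - rank(M - lam*I)."""
    n = M.shape[0]
    return int(n - np.linalg.matrix_rank(M - lam * np.eye(n), tol=tol))

vals = sorted(round(float(l), 6) for l in eigvals)
gms = [geometric_multiplicity(A, l) for l in eigvals]
print(vals)   # [-1.0, 5.0]
print(gms)    # [1, 1]
```

Both eigenvalues of this matrix have algebraic and geometric multiplicity 1; a defective matrix such as `[[2, 1], [0, 2]]` would instead report geometric multiplicity 1 for an eigenvalue of algebraic multiplicity 2.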
|
|
# Christoffel's symbols for a dual connection
Suppose that $\Gamma^{\beta}_{i\alpha}$ are the Christoffel symbols of a connection with respect to a (local) basis $\{E_1,...,E_n\}$. I tried to prove that the Christoffel symbols of the dual connection with respect to the dual basis $\{\theta_1,...,\theta_n\}$ are then $-\Gamma_{i\alpha}^{\beta}$. I checked that the two expressions (the one which defines the dual connection, and the expression $-\Gamma_{i\alpha}^{\beta}dx^i \otimes \theta_{\alpha}$) coincide, but only when evaluated on pairs $(\partial_i,E_{\alpha})$. My question is: is this enough to claim that the Christoffel symbols of the dual connection are the $-\Gamma$'s?
You seem to have switched notation in your question, since $\partial_i$ and $dx^i$ appear but you didn't start with a coordinate frame. If I understand your question correctly, then the answer is yes. What you are trying to find is $C_{ij}^k$ given by the formula $$\nabla_{E_i}\theta_j=C_{ij}^k\theta_k$$To find these functions, evaluate both sides of this equation at $E_l$.
We have that $C_{ij}^k\theta_k(E_l)=C_{ij}^k\delta^k_l=C_{ij}^l$, while by definition $$(\nabla_{E_i}\theta_j)(E_l)=E_i(\theta_j(E_l))-\theta_j(\nabla_{E_i}E_l)=-\theta_j(\nabla_{E_i}E_l)=-\theta_j(\Gamma_{il}^pE_p)=-\Gamma_{il}^j$$
It's been quite a while since I've looked at a differential geometry book which covers this stuff, so this might not be exactly what is wanted here, but as I recall the connection's action on the dual basis $\theta_\mu$ may be found from its action in terms of the basis $E_\nu$, i.e., from the formula
$\nabla_\gamma E_\nu = \sum_\kappa \Gamma_{\nu \gamma}^\kappa E_\kappa \tag{1}$
as follows: the dual basis $\theta_\mu$ to $E_\nu$ satisfies
$\theta_\mu(E_\nu) = \delta_{\mu \nu} \tag{2}$
for all pairs of indices $\mu, \nu$. Applying $\nabla_\gamma = \nabla_{E_\gamma}$ to this equation yields
$\nabla_\gamma(\theta_\mu(E_\nu)) = 0, \tag{3}$
and since we want the Leibniz rule for derivatives of products to apply to $\theta_\mu(E_\nu)$ we must have
$\nabla_\gamma(\theta_\mu(E_\nu)) = (\nabla_\gamma \theta_\mu)(E_\nu) +\theta_\mu(\nabla_\gamma E_\nu); \tag{4}$
when (4) is combined with (3) we obtain
$(\nabla_\gamma \theta_\mu)(E_\nu) + \theta_\mu(\nabla_\gamma E_\nu) = 0, \tag{5}$
or
$(\nabla_\gamma \theta_\mu)(E_\nu) = -\theta_\mu(\nabla_\gamma E_\nu); \tag{6}$
using the formula (1) in (6) we see that
$(\nabla_\gamma \theta_\mu)(E_\nu) = -\theta_\mu(\sum_\kappa \Gamma_{\nu \gamma}^\kappa E_\kappa) = -\Gamma_{\nu \gamma}^\mu. \tag{7}$
(7) expresses the components of $\nabla_\gamma \theta_\mu$ via evaluation on the vector basis $E_\nu$; from this it follows from a standard but basic linear algebraic argument that
$\nabla_\gamma \theta_\mu = -\sum_\sigma \Gamma_{\sigma \gamma}^\mu \theta_\sigma. \tag{8}$
(8) is the formula for the covariant derivative of $\theta_\mu$; we see that the Christoffel symbols $\Gamma_{\rho \tau}^\sigma$ occurring in the expressions for $\nabla_\gamma E_\mu$ and $\nabla_\gamma \theta_\mu$ are indeed the negatives of one another, and are summed over different indices as well: a lower index for the dual vectors and an upper index for the members of the original basis. From this point of view, the Christoffel symbols for the dual basis $\theta_\mu$ are indeed the $-\Gamma$, where the $\Gamma$ are the Christoffel symbols for the basis $E_\mu$.
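As a quick numerical sanity check (my addition, not part of the original argument): the sign in (8) is exactly what makes the Leibniz rule (4) hold, because the connection terms contributed by (1) and (8) must cancel identically, whatever the $\Gamma$'s are. A short Python sketch with arbitrary, randomly chosen connection coefficients:

```python
import random

# Verify that the minus sign in (8) makes the Leibniz rule (4) hold:
# the connection terms from (1) and (8) must cancel identically.
# The Gamma values below are arbitrary (hypothetical), not from any metric.
n, gamma = 3, 0
random.seed(1)
G = [[[random.uniform(-1, 1) for _ in range(n)]   # G[k][i][j] ~ Gamma^k_{i j}
      for _ in range(n)] for _ in range(n)]
v = [random.uniform(-1, 1) for _ in range(n)]     # components of a vector field
w = [random.uniform(-1, 1) for _ in range(n)]     # components of a one-form

# connection part of (nabla_gamma V)^k, from (1):   +Gamma^k_{j gamma} v^j
conn_v = [sum(G[k][j][gamma] * v[j] for j in range(n)) for k in range(n)]
# connection part of (nabla_gamma theta)_k, from (8): -Gamma^j_{k gamma} w_j
conn_w = [-sum(G[j][k][gamma] * w[j] for j in range(n)) for k in range(n)]

# The partial-derivative terms of (4) match trivially, so these must sum to zero:
residual = sum(conn_w[k] * v[k] + w[k] * conn_v[k] for k in range(n))
print(abs(residual) < 1e-12)   # True: the minus sign in (8) is forced
```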
That's how I'd address it, in any event.
It should be observed that in the above discussion, the dual connection is in fact defined by (4); we stipulate the $\nabla_\gamma$ must so act on $\theta_\mu(E_\nu)$. With such stipulation and definition, the connection coefficients for the dual basis are as shown. The mixed notation $-\Gamma_{i\alpha}^{\beta}dx^i \otimes \theta_{\alpha}$ is one I have occasionally seen but with which I am not overly familiar, and since our OP truebaran doesn't specify how the corresponding mixed dual connection is defined, I will leave further discussion of this particular point without further comment.
Hope this helps. Cheers,
and as always,
Fiat Lux!!!
• As far as I understood, first you have proved that the dual connection (to a given one) is constructed uniquely; but I'm not quite sure why you argue that you want to have the Leibniz identity (in order to obtain (4)): you don't have a product but the connection acting on the form evaluated on a vector field (more generally: on a section), which is a smooth function. – truebaran Sep 24 '14 at 21:15
|
|
# AC circuits
The LTC1966 is a true RMS-to-DC converter that utilizes an innovative patented delta-sigma computational technique; the internal delta-sigma circuitry of the LTC1966 makes it simpler to use, more accurate, lower power and dramatically more flexible than conventional log-antilog RMS-to-DC converters. The Transformers and AC Circuits training course covers the differences between DC and AC circuits, and explains the AC sine wave, using vectors to solve AC problems, and calculating impedance in circuits having inductance. AC circuits and AC electricity, explained using animated graphs and phasor diagrams.

Introduction: this manual is intended for use in an AC electrical circuits course and is appropriate for either a two-year or four-year electrical engineering technology curriculum. Purpose: to investigate the voltage across the capacitor as a function of the frequency in an AC circuit of a resistor and capacitor connected in series. The Electrical Principles/Fundamentals series presents the basic theories and concepts taught in entry-level electronics courses at both two-year and four-year institutions. Experiment V: the AC circuit, impedance, and applications to high- and low-pass filters. References: Halliday, Resnick and Krane, Physics.

AC Ohm's law: the AC analog of Ohm's law is $V = IZ$, where $Z$ is the impedance of the circuit and $V$ and $I$ are the RMS (effective) values of the voltage and current. Associated with the impedance $Z$ is a phase angle, so that even though $Z$ is also the ratio of the voltage and current peaks, the peaks of voltage and current do not occur at the same time. Linear Circuits 2: AC Analysis, from Georgia Institute of Technology, explains how to analyze circuits that have alternating current (AC) voltage or current sources.
Start studying AC circuits: learn vocabulary, terms, and more with flashcards, games, and other study tools. Detailed description: AC circuit analysis gives many students problems, but in reality any student can fully understand how to analyze circuits that involve alternating current.

3.4 AC circuits. 3.4.1 Alternating current: the current from a 110-V outlet is an oscillating function of time; this type is called alternating current, or AC. A source of AC is symbolized by a wavy line enclosed in a circle (see Figure 3.4.1).
• 11.3 Power in the capacitive AC circuit: Figure 11.2 contains the wave diagram for an AC capacitor circuit and shows the current leading the voltage by 90°.
• A power inverter, or inverter, is an electronic device or circuitry that changes direct current (DC) to alternating current (AC). The input voltage, output voltage and frequency, and overall power handling depend on the design of the specific device or circuitry.
• Power in an electric circuit is the rate of flow of energy past a given point of the circuit. In alternating current circuits, energy storage elements such as inductors and capacitors may result in periodic reversals of the direction of energy flow.
Purchase Basic AC Circuits, 2nd edition, print book & e-book, ISBN 9780750671736, 9780080493985. Basic Electricity/Electronics: How AC and DC Circuits Work, by Sams, and a great selection of similar used, new and collectible books are available now at abebooks.com. This new version of the CCK adds capacitors, inductors and AC voltage sources to your toolbox; now you can graph the current and voltage as a function of time. In North America the electric voltage of wall outlets looks like this: $v(t) = 150\cos\left( 120\pi t \right),$ where $\omega=2\pi f$ is the angular frequency.
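Pulling the snippets above together, here is a short Python sketch of the series RC circuit mentioned earlier. The component values (R = 1 kΩ, C = 1 µF) are illustrative assumptions, not taken from any of the referenced texts. It computes the impedance magnitude, the phase angle, and the fraction of the source voltage appearing across the capacitor, showing the low-pass behavior:

```python
import cmath
import math

# Series RC circuit: Z = R + 1/(j*w*C).  Values are illustrative:
# R = 1 kOhm, C = 1 uF, so the corner frequency f_c = 1/(2*pi*R*C) ~ 159 Hz.
R, C = 1_000.0, 1e-6

def impedance(f_hz):
    w = 2 * math.pi * f_hz
    return R + 1 / (1j * w * C)

def cap_fraction(f_hz):
    """|Vc/Vin|: fraction of the source voltage across the capacitor."""
    w = 2 * math.pi * f_hz
    return abs((1 / (1j * w * C)) / impedance(f_hz))

for f in (10.0, 159.2, 10_000.0):   # below, near, and above f_c
    Z = impedance(f)
    print(f"f={f:8.1f} Hz  |Z|={abs(Z):10.1f} ohm  "
          f"phase={math.degrees(cmath.phase(Z)):6.1f} deg  |Vc/Vin|={cap_fraction(f):.3f}")
```

At low frequency nearly all of the source voltage appears across the capacitor; well above the corner frequency almost none does, and the current's lead over the voltage (the negative phase of Z) shrinks toward zero.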
|
|
How to build an electro-mechanical public key cipher machine?
It is generally assumed that asymmetric encryption schemes were invented in 1973 at GCHQ in Britain and, independently, in 1976 at MIT.

Imagine, if the abstract idea of having a public key and a private key, each able to decrypt only what has been encrypted with the other, had been around forty years earlier: would it have been possible to build a working (rotor) cipher machine using this principle with WWII technology (think of a public-key Enigma)?

Are there any key-generation schemes that are hard to invert in the mathematical sense but could be implemented using electro-mechanical machines, without transistors?
I'm asking this purely out of curiosity, by the way :)
I don't see why not. Even mechanical computers were Turing-complete in that they could be built to accommodate any series of instructions, so even elliptic curve cryptography could theoretically be built from vacuum tubes and rotors. Now, as for efficiency.. – Thomas Jun 16 '13 at 4:00
Okay, I should have added "that would have been of real-world use". I'm sure some hobbyist must have thought about how to construct something like this in a garage already, but I couldn't find anything online. – Manuel Jun 16 '13 at 4:05
con $\mapsto$ can $\;$ – Ricky Demer Jun 16 '13 at 5:03
I wonder if you could implement ECC in hardware? Have a flexible line defined by the equation to serve as a cam surface, then place a straightedge at points P and Q to find the intersecting point. It could be a great illustrative tool, but I doubt it could be made accurate enough to provide the security required. – John Deters Jun 17 '13 at 16:50
Especially since finite elliptic curves don't look at all nice. $\:$ – Ricky Demer Jun 17 '13 at 19:44
Since this is an historical question, I am going to digress and make some historical corrections. In science, we give credit for important inventions to the people who published. If it turns out that someone else invented it earlier and didn't publish, they don't get credit. Obviously, they should be mentioned in passing or a footnote in the interests of complete information, but credit for inventing something goes not to the person who keep it secret, but to the people who publish.
Thus, I must take issue with your saying that GCHQ invented it. As smart and forward-thinking as Ellis, Cocks, and Williamson were, they didn't publish, so they don't get credit for inventing it. (Also, they were at CESG, not GCHQ. Some may find this a difference without distinction, but Ellis himself makes that point in his notes on their work.)
Secondarily, public key encryption is generally credited to some combination of Merkle, Hellman, and Diffie, who were at some combination of Stanford (where it was understood) and Berkeley (where it was misunderstood). Diffie's article, "The First Ten Years of Public-Key Cryptography" is a great account of this. You can also find Merkle's rejected 1974 Berkeley project for what was an early form of public-key cryptography on his Computer History Museum bio.
Thank you for your forbearance on the above. Now let me answer your question.
The basic idea in public-key cryptography is that there are problems that are easy to do in one direction and hard to do in another. At its easiest, this is intuitive. Anyone who has struggled through learning long division knows that there's a very real sense in which dividing is harder than multiplying. But more importantly, the structure of public-key (or non-secret) encryption is that there's one key to encrypt and one key to decrypt, and you can't derive the decrypt-key from the encrypt-key. That's why the encrypt-key is either public or non-secret.
In construction, the Merkle-Hellman-Diffie ideas played around with "trapdoor" functions that have a secret making the problem very easy to solve. The basis for discrete-log public key cryptography is that exponentiation -- $g^x$ in a finite field with a public $g$ and a secret $x$ -- is easy for the person who knows $x$, but hard to invert without knowing the trap door $x$.
Similarly, RSA is built on the difficulty of factoring, and the trap door is that if you know the two primes you've multiplied together, it's easy to undo an RSA calculation. Rabin is based on the hardness of square roots (think of it being a special case of RSA where primes $p$ and $q$ are equal), and so on. Just as it's easy to intuit that division is harder than multiplication, taking a logarithm or a square root or factoring is intuitively a lot harder.
So how does this apply to electro-mechanical systems? Well, you could probably construct an electromechanical machine that did Diffie-Hellman. Konrad Zuse's Z3 computer was Turing-complete and program-controlled. It's entirely possible you could do it there. I have a mild raised eyebrow as the Z3 didn't have conditionals, but if you knew about Diffie-Hellman or RSA, I'm sure you could build a machine to do it. Of course, you'd have very short keys, but who cares, as everyone else only has electro-mechanical machines, too.
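For illustration (my sketch, not a historical design), a toy Diffie-Hellman exchange with numbers small enough for hand or electro-mechanical computation would look like this; the parameters are hypothetical and a 4-digit modulus offers no real security, since the discrete log is trivial at this size:

```python
# Toy Diffie-Hellman key exchange with a tiny, illustrative modulus.
p, g = 2039, 7            # small prime modulus and public base (hypothetical)

alice_secret = 1234       # private exponents: the "trap doors"
bob_secret = 567

A = pow(g, alice_secret, p)    # Alice publishes g^a mod p
B = pow(g, bob_secret, p)      # Bob publishes g^b mod p

shared_alice = pow(B, alice_secret, p)   # (g^b)^a mod p
shared_bob = pow(A, bob_secret, p)       # (g^a)^b mod p
print(shared_alice == shared_bob)        # True: both derive the same key
```

The only operations are modular multiplications, which is why the scheme is plausible on relay or gear hardware; the catch, as the text notes, is that the keys would have to be very short.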
Could you do it with rotor machines? I don't know. My intuition is to say no. Remember that what we want to have with public key machines is a separate encrypt and decrypt key, and fast decryption aided by some trap door. Enigma was in many ways a mechanical puzzle in which electricity was bounced around, whereas the Lorenz machine is pretty recognizable as a modern stream cipher. I don't see a good way to turn those into public key systems.
For quite some time, we cryptographers have considered the possibility of symmetric ciphers that had some sort of back door in them. Some number of years ago, someone proved that if you have a symmetric cipher with a back door, then that is a trap-door function that can be used to make an effective public key cryptosystem, where the backdoor and its parameters are the secret key. (I'm sorry that I don't remember whose theorem that is.) Interestingly, this would be a public-key system that runs at the speeds of a symmetric cipher and very likely with a key that is small (compared to other public-key systems).
All our usable public-key cryptosystems are much, much slower than symmetric systems. They're often four or five orders of magnitude (or more) slower than symmetric systems of comparable strengths. So if you managed to find one of these, why would you waste it by releasing it as a symmetric cipher with a back door?
If you're an academic, you've invented something of mindblowing newness. It's an invention at least as powerful as the existing public key systems, and really much, much more interesting because it runs at symmetric speeds.
If you're in an intelligence service (like the CESG guys), you still have something amazingly powerful. Would you waste it? I say waste, because if you release this new cipher as a symmetric cipher, someone's going to figure out the back door eventually, even if that happens by a leak. Someone else will figure out that it's actually a usable public key cryptosystem and they will get the credit for it, not you. If and when it comes out that you knew it all along, then the way history will remember you is for this, and history is unlikely to be kind.
Moreover, this suggests that at the very least, making either a fast public key system or cipher with a backdoor is very hard. We've been trying for ages to get effective, new, usable public key cryptosystems and they're hard to do.
This means that if there is a way to make a rotor or electromechanical system that has a new public-key cryptosystem (one that isn't simply one of the ones we know now), we very likely don't know it. That suggests that they really couldn't have done it then. It's more likely that they'd have done something like invent Diffie-Hellman or one of Merkle's puzzles or knapsacks and then implement that on electro-mechanical hardware.
So I would say that no, probably not. People in WWII probably could not have come up with something cool that isn't one of the systems we know about now.
Jon
• Small correction: RSA (or Rabin) with $p = q$ would be completely broken. Rabin is not RSA with $p = q$; it is about modular square roots, i.e. undoing the squaring of the message modulo $n=p·q$, just like for RSA. And its hardness can actually be reduced to factoring $n$. – Paŭlo Ebermann Jun 24 '13 at 16:54
|
|
# Finding equation of line of the P-T graph for a gas dissociation reaction
Gas A (1 mol) dissociates in a closed rigid container of volume 0.16 L as per the following reaction.
$$\ce{2A (g) -> 3B (g) + 2C (g)}$$
If the degree of dissociation of $\ce{A}$ is 0.4 and remains constant over the entire range of temperature, then the correct $P$ vs. $T$ graph is
1. Slope 0.75 and passing through origin
2. Slope 0.8 and passing through origin
3. slope 1.1 and passing through origin
4. slope 0.4 and passing through origin
\begin{align} \ce{2A &&-> 3B& + &2C}\\ 1-2x&&3x&&2x\\ \end{align}

where $2x$ is the number of moles of A dissociated. Since the degree of dissociation is 0.4, we have $2x = 0.4$, so the moles of A, B and C are 0.6, 0.6 and 0.4 respectively.

I am confused about how to proceed after this. How should I proceed?
The overall notion is that gases A, B and C all behave as ideal gases, which can together be represented by the mythical ideal gas X. The moles of X are then equal to the moles of A+B+C. Then we can use the ideal gas equation:
$$pV=nRT$$
which simplifies to:
$$p = \dfrac{nR}{V}\times T$$
Given that the degree of dissociation of A is 0.4 and we started with 1 mole of A, 0.6 moles of A remain and 0.4 moles got converted.
$$\ce{2A -> 3B + 2C}$$
or
$$\ce{A -> }\dfrac{3}{2}\ce{B + C}$$
Letting the products B and C together count as the mythical gas X,

$$\ce{A -> }\dfrac{5}{2}\ce{X}$$

and since the remaining 0.6 moles of A also behave as X, the total moles are

$$n = 0.6 + 0.4\times\dfrac{5}{2} = 1.6\text{ moles}$$
We know that n = 1.6 moles and that V = 0.16 L so:
$$p =\left(\dfrac{n}{V}\right)RT = \dfrac{1.6\text{ mol}}{0.16\text{ L}}RT = \left(10\,\dfrac{\text{mol}}{\text{L}}\right)RT$$
Now the problem doesn't really specify what value of the ideal gas constant, $R$, to use, nor what units the temperature, $T$, should be in.
The temperature must be an absolute scale since the line passes through the origin. Thus of the main choices of Kelvin and Rankine, using Kelvin seems reasonable for temperature.
But there are still options for the pressure units of $R$:
• (a) $8.314\text{ L}\cdot\text{kPa}\cdot\text{K}^{−1}\cdot\text{mol}^{−1}$
• (b) $8.314\times10^{−2} \text{ L}\cdot\text{bar}\cdot\text{K}^{−1}\cdot\text{mol}^{−1}$
• (c) $62.36 \text{ L}\cdot\text{torr}\cdot\text{K}^{−1}\cdot\text{mol}^{−1}$
• (d) $8.206\times10^{−2}\text{ L}\cdot\text{atm}\cdot\text{K}^{−1}\cdot\text{mol}^{−1}$
For $10R$ to be between 0.4 and 1.1 the only answers that fit are (b) and (d). Thus the pressure could be either in atm or bar. Presumably the OP's book would have stated a preference for R.
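The arithmetic above can be double-checked with a short Python sketch (the values are taken directly from the problem); both unit choices give a slope of about 0.8, matching option 2:

```python
# Recompute the total moles and the slope p/T = nR/V for the two
# plausible unit choices of R (bar vs. atm).
n0, alpha = 1.0, 0.4        # initial moles of A, degree of dissociation
V = 0.16                    # container volume, litres

a_left = n0 * (1 - alpha)   # 0.6 mol of A remaining
b_made = 1.5 * n0 * alpha   # 0.6 mol of B formed (3 B per 2 A)
c_made = 1.0 * n0 * alpha   # 0.4 mol of C formed (2 C per 2 A)
n_total = a_left + b_made + c_made   # 1.6 mol

for unit, R in (("bar", 8.314e-2), ("atm", 8.206e-2)):
    print(f"n = {n_total:.1f} mol, R in L {unit}/(K mol): "
          f"slope = {n_total * R / V:.3f}")
```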
• The unit of R is indeed given, as 0.08 L·atm/(mol·K). Sorry for not including this, and thanks for your answer – Scáthach Jun 23 '18 at 9:03
• So even though the gases are reacting, we can consider them to be one ideal gas at the equilibrium point. Can we apply the ideal gas law individually to each gas at equilibrium and calculate its pressure? But that would go against Dalton's law of partial pressures, right? – Scáthach Jun 23 '18 at 9:54
• Also, at equilibrium, even though the reaction still goes on, we can apply the gas law... is there a reason? – Scáthach Jun 23 '18 at 9:55
• Can you comment on my doubts please, as it would be very helpful for me – Scáthach Jun 25 '18 at 1:52
• Yes, you could calculate the individual pressures. It is just a bit more math. // You actually drew the reaction arrow wrong. You have it only going to the right, when the arrows should go both ways since there is an equilibrium (in other words both forward and reverse reactions). $$\ce{2A (g) <=> 3B (g) + 2C (g)}$$ – MaxW Jun 25 '18 at 2:31
|
|
# MIRI’s July Newsletter: Fundraiser and New Papers
## Greetings from the Executive Director
Dear friends,
Another busy month! Since our last newsletter, we’ve published 3 new papers and 2 new “analysis” blog posts, we’ve significantly improved our website (especially the Research page), we’ve relocated to downtown Berkeley, and we’ve launched our summer 2013 matching fundraiser!
MIRI also recently presented at the Effective Altruism Summit, a gathering of 60+ effective altruists in Oakland, CA. As philosopher Peter Singer explained in his TED talk, effective altruism “combines both the heart and the head.” The heart motivates us to be empathic and altruistic toward others, while the head can “make sure that what [we] do is effective and well-directed,” so that altruists can do not just some good but as much good as possible.
As I explain in Friendly AI Research as Effective Altruism, MIRI was founded in 2000 on the premise that creating Friendly AI might be a particularly efficient way to do as much good as possible. Effective altruists focus on a variety of other causes, too, such as poverty reduction. As I say in Four Focus Areas of Effective Altruism, I think it’s important for effective altruists to cooperate and collaborate, despite their differences of opinion about which focus areas are optimal. The world needs more effective altruists, of all kinds.
MIRI engages in direct efforts — e.g. Friendly AI research — to improve the odds that machine superintelligence has a positive rather than a negative impact. But indirect efforts — such as spreading rationality and effective altruism — are also likely to play a role, for they will influence the context in which powerful AIs are built. That’s part of why we created CFAR.
If you think this work is important, I hope you’ll donate now to support our work. MIRI is entirely supported by private funders like you. And if you donate before August 15th, your contribution will be matched by one of the generous backers of our current fundraising drive.
Thank you,
Luke Muehlhauser
Executive Director
## Our Summer 2013 Matching Fundraiser
Thanks to the generosity of several major donors, every donation to MIRI made from now until August 15th, 2013 will be matched dollar-for-dollar, up to a total of $200,000! Now is your chance to double your impact while helping us raise up to$400,000 (with matching) to fund our research program.
Early this year we made a transition from movement-building to research, and we’ve hit the ground running with six major new research papers, six new strategic analyses on our blog, and much more. Give now to support our ongoing work on the future’s most important problem.
Accomplishments in 2013 so far
Future Plans You Can Help Support
• We will host many more research workshops, including one in September, and one in December (with John Baez attending, among others).
• Eliezer will continue to publish about open problems in Friendly AI. (Here is #1 and #2.)
• We will continue to publish strategic analyses, mostly via our blog.
• We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for more of our materials, to make them more accessible: The Sequences, 2006-2009 and The Hanson-Yudkowsky AI Foom Debate.
• We will continue to set up the infrastructure (e.g. new offices, researcher endowments) required to host a productive Friendly AI research team, and (over several years) recruit enough top-level math talent to launch it.
(Other projects are still being surveyed for likely cost and strategic impact.)
We appreciate your support for our high-impact work! Donate now, and seize a better than usual chance to move our work forward. If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.
## New Research Page, Three New Publications
Our new Research page has launched!
Our previous research page was a simple list of articles, but the new page describes the purpose of our research, explains four categories of research to which we contribute, and highlights the papers we think are most important to read.
We’ve also released three new research articles.
Tiling Agents for Self-Modifying AI, and the Löbian Obstacle (discuss it here), by Yudkowsky and Herreshoff, explains one of the key open problems in MIRI’s research agenda:
We model self-modification in AI by introducing "tiling" agents whose decision systems will approve the construction of highly similar agents, creating a repeating pattern (including similarity of the offspring's goals). Constructing a formalism in the most straightforward way produces a Gödelian difficulty, the "Löbian obstacle." By technical methods we demonstrate the possibility of avoiding this obstacle, but the underlying puzzles of rational coherence are thus only partially addressed. We extend the formalism to partially unknown deterministic environments, and show a very crude extension to probabilistic environments and expected utility; but the problem of finding a fundamental decision criterion for self-modifying probabilistic agents remains open.
Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic (discuss it here), by LaVictoire et al., explains some progress in program equilibrium made by MIRI research associate Patrick LaVictoire and several others during MIRI’s April 2013 workshop:
Rational agents defect on the one-shot prisoner's dilemma even though mutual cooperation would yield higher utility for both agents. Moshe Tennenholtz showed that if each program is allowed to pass its playing strategy to all other players, some programs can then cooperate on the one-shot prisoner's dilemma. Program equilibrium is Tennenholtz's term for a Nash equilibrium in a context where programs can pass their playing strategies to the other players. One weakness of this approach so far has been that any two programs which make different choices cannot "recognize" each other for mutual cooperation, even if they are functionally identical. In this paper, provability logic is used to enable a more flexible and secure form of mutual cooperation.
Responses to Catastrophic AGI Risk: A Survey (discuss it here), by Sotala and Yampolskiy, is a summary of the extant literature (250+ references) on AGI risk, and can serve either as a guide for researchers or as an introduction for the uninitiated.
Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may pose a catastrophic risk to humanity. After summarizing the arguments for why AGI may pose such a risk, we survey the field’s proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.
## Two New Analyses
MIRI publishes some of its most substantive research to its blog, under the Analysis category. For example, When Will AI Be Created? is the product of 20+ hours of work, and has 14 footnotes and 40+ scholarly references (all of them linked to PDFs).
Last month, we published two new analyses.
Friendly AI Research as Effective Altruism presents a bare-bones version of an argument that Friendly AI research is a particularly efficient way to purchase expected value, so that the argument can be elaborated and critiqued by MIRI and others.
What is Intelligence? argues that imprecise working definitions can be useful, and explains the particular imprecise working definition for intelligence that we tend to use at MIRI: efficient cross-domain optimization. A future post will discuss some potentially useful working definitions for “artificial general intelligence.”
## Grant Writer Needed
MIRI would like to hire someone to write grant applications, both for our research efforts and for STEM education. If you have experience with either, please apply here.
The pay will depend on skill and experience, and is negotiable.
## Featured Volunteer
Oliver Habryka helps out by proofreading MIRI’s papers, and may be able to contribute to our research at some point, perhaps on the subject of “lessons for ethics from machine ethics.” Independent of his direct contributions to MIRI’s work, Oliver has lectured on topics related to MIRI’s work at his high school, and has also taught a class on rationality, where he inspired participation by using a “leveling up” reward system. Oliver is currently studying the foundations of mathematics and hopes one day to direct his career in such a way that his contributions to our mission increase over time.
|
|
# Proving Gauge invariance of Schrodinger Equation
I am trying to prove explicitly that the Schrödinger equation: $$i\hbar \partial_t \psi = \big[ -\frac{1}{2m}\big(\frac{\hbar}{i}\nabla-q\vec{A}\big)^2+qV \big]\psi$$
remains the same under the following gauge transformation:
$$\psi \rightarrow e^{iq\Lambda/\hbar} \psi$$ $$\vec{A} \rightarrow \vec{A} + \nabla \Lambda$$ $$V \rightarrow V - \partial_t\Lambda$$
where $$\partial_t$$ stands for the time derivative operator.
However, I am having problems with the algebra, so I will show my procedure in the hope that someone can point out the error:
Left side of the equation: $$i\hbar \partial_t (e^{iq\Lambda/\hbar}\psi) = i\hbar \big(e^{iq\Lambda/\hbar} \partial_t\psi+\frac{iq}{\hbar}e^{iq\Lambda/\hbar} \psi \partial_t\Lambda \big) = i\hbar e^{iq\Lambda/\hbar} \partial_t\psi-qe^{iq\Lambda/\hbar} \psi \partial_t\Lambda$$
Right side of equation $$\big[ \frac{1}{2m}\big(\frac{\hbar}{i}\nabla-q(\vec{A} + \nabla \Lambda)\big)^2+q(V - \partial_t\Lambda) \big]=$$ $$\frac{1}{2m} \big[ -\hbar^2\nabla^2-\frac{q\hbar}{i}( \nabla \cdot\vec{A} + \nabla^2\Lambda+ \vec{A} \cdot \nabla + \nabla \Lambda \cdot \nabla) + q^2[\vec{A}^2+2(\vec{A}\cdot \nabla \Lambda) + (\nabla \Lambda )^2]\big] e^{iq\Lambda/\hbar} \psi +qV e^{iq\Lambda/\hbar} \psi -qe^{iq\Lambda/\hbar} \psi \partial_t\Lambda$$
It is possible to observe that the last term in both (the right and left) sides cancel each other. Then, using:
$$\nabla ( e^{iq\Lambda/\hbar} \psi ) = e^{iq\Lambda/\hbar}\nabla\psi + \frac{iq}{\hbar} e^{iq\Lambda/\hbar}\psi \nabla \Lambda$$
$$\nabla^2 ( e^{iq\Lambda/\hbar} \psi ) =e^{iq\Lambda/\hbar} \nabla^2\psi + \frac{2iq}{\hbar}e^{iq\Lambda/\hbar}(\nabla \Lambda)(\nabla \psi) + \psi \frac{iq}{\hbar} e^{iq\Lambda/\hbar} \nabla^2 \Lambda - \frac{q^2}{\hbar^2}\psi e^{iq\Lambda/\hbar} (\nabla \Lambda)^2$$
we then obtain (by applying operators and canceling all the $$e^{iq\Lambda/\hbar}$$ ):
$$i\hbar \partial_t \psi= \frac{1}{2m} \big[ -\hbar^2 \nabla^2 \psi - 2iq\hbar(\nabla \Lambda)(\nabla \psi)-iq\hbar \psi \nabla^2\Lambda + q^2\psi(\nabla \Lambda)^2 + iq\hbar (\nabla \cdot \vec{A})\psi + iq\hbar\nabla^2\Lambda \psi +iq\hbar (\vec{A}\cdot \nabla\psi) - q^2 \psi (\vec{A}\cdot \nabla \Lambda )+iq\hbar (\nabla \Lambda)(\nabla \psi) - q^2\psi (\nabla\Lambda)^2+q^2\vec{A}^2\psi+2q^2(\vec{A}\cdot \nabla \Lambda)\psi +q^2(\nabla \Lambda)^2 \psi \big] + qV\psi$$
cancelling some terms, and rearranging:
$$i\hbar \partial_t \psi= \frac{1}{2m} \big[ -\hbar^2 \nabla^2 \psi + iq\hbar (\nabla \cdot \vec{A})\psi +iq\hbar (\vec{A}\cdot \nabla\psi)+q^2\vec{A}^2\psi - 2iq\hbar(\nabla \Lambda)(\nabla \psi) + q^2\psi(\nabla \Lambda)^2- q^2 \psi (\vec{A}\cdot \nabla \Lambda )+iq\hbar (\nabla \Lambda)(\nabla \psi) +2q^2(\vec{A}\cdot \nabla \Lambda)\psi \big] + qV\psi$$
after more reordering:
$$i\hbar \partial_t \psi= \frac{1}{2m} \big(\frac{\hbar}{i}\nabla-q\vec{A}\big)^2\psi +qV\psi + \frac{1}{2m} \big[ - iq\hbar(\nabla \Lambda)(\nabla \psi) + q^2\psi(\nabla \Lambda)^2 + q^2(\vec{A}\cdot \nabla \Lambda)\psi \big]$$
It is possible to observe that the original Schrödinger equation is up there, but with an extra part on the right side; this extra part is: $$\frac{1}{2m} \big[ - iq\hbar(\nabla \Lambda)(\nabla \psi) + q^2\psi(\nabla \Lambda)^2 + q^2(\vec{A}\cdot \nabla \Lambda)\psi \big]$$
So I am wondering: is this extra part somehow 0, or am I making a mistake? Also, I don't know how to make the algebra "nicer" to follow; if there is anything I can do, please comment.
Actually, the Schrödinger equation $$-i\hbar \partial_t \psi+ \big[ -\frac{1}{2m}\big(\frac{\hbar}{i}\nabla-q\vec{A}\big)^2+qV \big]\psi=0\tag{0}$$ under the gauge transformations
$$\psi \rightarrow \psi'= e^{iq\Lambda/\hbar} \psi$$ $$\vec{A} \rightarrow \vec{A}' = \vec{A} + \nabla \Lambda$$ $$V \rightarrow V'= V - \partial_t\Lambda$$
does not remain invariant, but the left-hand side of (0) gives rise to $$-i\hbar \partial_t \psi'+ \big[ -\frac{1}{2m}\big(\frac{\hbar}{i}\nabla-q\vec{A}'\big)^2+qV' \big]\psi'= e^{iq\Lambda/\hbar}\left\{-i\hbar \partial_t \psi+ \big[ -\frac{1}{2m}\big(\frac{\hbar}{i}\nabla-q\vec{A}\big)^2+qV \big]\psi\right\}\:.$$
In summary, since $$e^{iq\Lambda/\hbar}\neq 0$$,
$$\qquad\quad$$ gauge-transformed quantities satisfy the Schrödinger equation if the untransformed quantities do.
To prove it, avoid brute-force computations like yours, which almost certainly give rise to mistakes, and proceed as follows. First rewrite the initial equation as $$-\left[i\hbar \partial_t -qV \right]\psi -\frac{1}{2m}\big(\frac{\hbar}{i}\nabla-q\vec{A}\big)^2\psi=0\tag{1}$$ Next notice that, under the transformations, we have
$$\left[i\hbar \partial_t -qV' \right]\psi' = \left[i\hbar \partial_t -q(V - \partial_t\Lambda)\right]e^{iq\Lambda/\hbar}\psi = e^{iq\Lambda/\hbar}\left[i\hbar \partial_t -qV \right]\psi$$ and $$\big(\frac{\hbar}{i}\nabla-q\vec{A}'\big)\psi' = \big(\frac{\hbar}{i}\nabla-q(\vec{A} +\nabla \Lambda)\big)e^{iq\Lambda/\hbar}\psi = e^{iq\Lambda/\hbar}\big(\frac{\hbar}{i}\nabla-q\vec{A}\big)\psi$$ so that, iterating the second result $$\big(\frac{\hbar}{i}\nabla-q\vec{A}'\big)^2\psi' = e^{iq\Lambda/\hbar}\big(\frac{\hbar}{i}\nabla-q\vec{A}\big)^2\psi\:.$$ Putting all together, under the action of gauge transformations, (1) becomes $$-\left[i\hbar \partial_t -qV' \right]\psi' -\frac{1}{2m}\big(\frac{\hbar}{i}\nabla-q\vec{A}'\big)^2\psi' = e^{iq\Lambda/\hbar}\left\{-\left[i\hbar \partial_t -qV \right]\psi -\frac{1}{2m}\big(\frac{\hbar}{i}\nabla-q\vec{A}\big)^2\psi\right\}=0$$ as wanted.
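As a sanity check (my own addition, not part of the original answer), the cancellation can also be verified symbolically in one spatial dimension with sympy, using the same sign convention as equation (0):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, q, m = sp.symbols('hbar q m', positive=True)
psi = sp.Function('psi')(x, t)
A, V = sp.Function('A')(x, t), sp.Function('V')(x, t)
Lam = sp.Function('Lambda')(x, t)

phase = sp.exp(sp.I * q * Lam / hbar)

def D(f, a):
    # covariant derivative ((hbar/i) d/dx - q a) applied to f
    return (hbar / sp.I) * sp.diff(f, x) - q * a * f

def lhs(f, a, v):
    # -i hbar d_t f + [ -(1/2m) D^2 + q v ] f, as in equation (0)
    return -sp.I * hbar * sp.diff(f, t) - D(D(f, a), a) / (2 * m) + q * v * f

transformed = lhs(phase * psi, A + sp.diff(Lam, x), V - sp.diff(Lam, t))
difference = sp.simplify(sp.expand(transformed - phase * lhs(psi, A, V)))
# `difference` reduces to 0: the transformed wavefunction satisfies the
# transformed equation exactly when the original one satisfies (0)
```

The same check goes through in three dimensions; one dimension just keeps the symbolic expressions small.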
|
|
# Algebraic and Complex Analysis Seminar: “n-quasi-abelian categories”

## Thursday, 1 December 2016, 10:00 - Room 2BC30 - Luisa Fiorot

TOPICS: Seminars

On Thursday, 1 December 2016 at 10:00 in Room 2BC30, Luisa Fiorot (Università di Padova) will give a talk entitled “n-quasi-abelian categories”.
Abstract
Bondal and Van den Bergh have constructed an equivalence between the notion of quasi-abelian category studied by Schneiders and that of tilting torsion pair on an abelian category. We will provide an overview of this result and then extend this picture into a hierarchy of n-quasi-abelian categories and n-tilting torsion classes. In particular, 0-quasi-abelian categories are abelian categories, 1-quasi-abelian categories are Schneiders’ quasi-abelian categories, and 2-quasi-abelian categories are additive categories admitting kernels and cokernels. Any n-quasi-abelian category E admits a “derived” category endowed with two canonical t-structures (the left and the right one) such that E coincides with the intersection of their hearts.
|
|
### Newton’s Fractals – Generating Images from Roots
Fractals have always amazed me. Every time I see a fractal, I get just as excited as the first time I saw the Mandelbrot set. In this post, we’ll go through a method which can generate beautiful structures from simple functions.
Link to the source code for this project: https://github.com/SSODelta/newtons-fractals
For a real-valued function $f(x)$, a root is any $x$ which satisfies $f(x) = 0$. A trivial case might be $x=0$ for the function $f(x) = x^2$. Roots are also what is found with the famous solution $x = \frac{-b \pm \sqrt{D}}{2a}$, where $D = b^2 - 4 \cdot a \cdot c$ is the discriminant, to the quadratic equation $f(x) = a \cdot x^2 + b \cdot x + c = 0$.
Sometimes, however, a second-degree equation does not have any real solutions; that is the case when $D < 0$. But a quadratic equation will always have two solutions in the complex plane (counted with multiplicity).
The complex plane is an extension of the real number line to a plane. The major new addition is the introduction of the imaginary unit $i$, which is the unique number satisfying $i^2 = -1$. This allows us to find solutions to equations such as $x^2 = -1$, which otherwise wouldn’t have any real solutions. In the complex plane every single point has a unique complex number associated with it. A complex number consists of two parts, a real part and an imaginary part. This is usually stylized $a + b \cdot i$, where $a$ is the real part and $b$ is the imaginary part.
What does all this have to do with fractals?
To make a fractal, we need to assign a color code to each point in a plane, in this case the complex plane as discussed above. We decide on an initial size $(w, h)$ and a resolution $r$. The size is the width $w$ and height $h$ of an image measured in pixels. The resolution $r$ is the “zoom-factor”, if you will. If $r=1$, then we’ll deal with complex numbers $a + b \cdot i$ in the range $a,b \in [-1, 1]$. And in general $a,b \in [-r, r]$. To make it clear how to combine roots and points in the complex plane to make fractals, we need to introduce a very important method.
Introducing, Newton’s approximation for finding roots to a real-valued function (although it extends nicely to the complex plane, fortunately). For a function $f(x)$ and an initial “guess” $x_0$, a better guess will be $x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$, where $f'(x)$ is the derivative of $f(x)$. In fact, this can be generalized to the recursive formula $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$. So how do we make fractals?
• Have a function $f(x)$, a size $(w, h)$ and a resolution $r$.
• Assign a unique color to each root to $f(x)$.
• For every point $P(x, y)$ in the image, make an initial guess $x_0 = \left(-r + \frac{x}{w} \cdot 2r\right) + \left(-r + \frac{y}{h} \cdot 2r\right) i$.
• Use Newton’s approximation method to approximate a root as long as $|x_{n+1} - x_n| > \epsilon$.
• Color the pixel $P(x, y)$ the color which is assigned to the root which has just been approximated.
In practice, we’ll only deal with polynomials, though, because their values and derivatives are easy to compute.
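The recipe above can be sketched in a few lines of Python (my own minimal version, not the linked repository's code), here for $f(z) = z^3 - 1$:

```python
import cmath

def newton_root(z, f, df, eps=1e-9, max_iter=60):
    """Iterate z <- z - f(z)/f'(z) until successive guesses differ by < eps."""
    for _ in range(max_iter):
        d = df(z)
        if d == 0:                 # derivative vanished; give up on this point
            break
        step = f(z) / d
        z -= step
        if abs(step) <= eps:
            break
    return z

def fractal(w, h, r, f, df, roots):
    """Map each pixel (x, y) to the index of the root its guess converges to."""
    img = []
    for y in range(h):
        row = []
        for x in range(w):
            z0 = complex(-r + x / w * 2 * r, -r + y / h * 2 * r)
            z = newton_root(z0 or 1e-6 + 1e-6j, f, df)   # nudge the origin
            # "color" = index of the nearest known root
            row.append(min(range(len(roots)), key=lambda i: abs(z - roots[i])))
        img.append(row)
    return img

# f(z) = z^3 - 1 has three roots: the cube roots of unity
cube_roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
img = fractal(60, 60, 1.5, lambda z: z**3 - 1, lambda z: 3 * z**2, cube_roots)
```

Assigning one color per root index in `img` and writing it out with any image library reproduces the classic three-lobed Newton fractal.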
Using this method gives us some stunning images:
Link to an album with more than 100 HD fractals: http://imgur.com/a/xOFyF#0
As a little extra feature, I have also colored the points which converge faster darker, so as to create some variation in the images.
|
|
# What packages will let me use Cyrillic characters in math mode?
Note I'm not actually planning on writing any, for instance, Russian prose or even any Russian words. It's just that, on paper, I've always had the habit of denoting sets by uppercase Roman letters, sets of sets by script letters and, when the need arises, sets of sets of sets by uppercase Cyrillic letters. Is there some package I can load which will allow me to typeset (in LaTeX math mode) Cyrillic letters much as I would typeset Greek letters? I.e. $\Zhe$ to produce an uppercase zhe. Thanks!
-
I don't know if a package exists, you could look in the symbol list perhaps it mentions one. But it shouldn't be very difficult to define a \mathcyr command or to add a symbol font and define various command. Which cyrillic font do you want to use? Can you make a small test document which uses this font in normal text? – Ulrike Fischer Mar 31 '11 at 7:57
If you use only a few cyrillic letters and only in text size, the simplest way is to say
\usepackage[T2A,T1]{fontenc}
\newcommand{\Zhe}{\mbox{\usefont{T2A}{\rmdefault}{m}{n}\CYRZH}}
If you need them also in subscripts or superscripts, it's possible to use \mathchoice for getting them:
\usepackage[T2A,T1]{fontenc}
\makeatletter
\def\easycyrsymbol#1{\mathord{\mathchoice
{\mbox{\fontsize\tf@size\z@\usefont{T2A}{\rmdefault}{m}{n}#1}}
{\mbox{\fontsize\tf@size\z@\usefont{T2A}{\rmdefault}{m}{n}#1}}
{\mbox{\fontsize\sf@size\z@\usefont{T2A}{\rmdefault}{m}{n}#1}}
{\mbox{\fontsize\ssf@size\z@\usefont{T2A}{\rmdefault}{m}{n}#1}}
}}
\makeatother
\newcommand{\Ze}{\easycyrsymbol{\CYRZ}}
This makes available \Ze at all sizes.
The names to use are easy: just add \cyr or \CYR in front of the letter's English transliteration. For instance, the lowercase "shcha" is \cyrshch, the uppercase is \CYRSHCH. This is called the character's LICR (LaTeX internal character representation).
Another solution, useful if you need the entire repertory without wasting too much resources is to say
\usepackage[T2A,T1]{fontenc}
\DeclareSymbolFont{cyrillic}{T2A}{cmr}{m}{n}
\DeclareMathSymbol{\Sha}{\mathalpha}{cyrillic}{216}
since the Sha has that position in the T2A encoding; this can be deduced from the definitions in the file t2aenc.def. By using some sorcery, we can directly use the LICR name of the characters:
\usepackage[T2A,T1]{fontenc}
\DeclareSymbolFont{cyrillic}{T2A}{cmr}{m}{n}
\def\makecyrsymbol#1#2{%
\begingroup\edef\temp{\endgroup
\noexpand\DeclareMathSymbol{\noexpand#1}
{\noexpand\mathalpha}{cyrillic}%
{\expandafter\expandafter\expandafter
\calccyr\expandafter\meaning\csname T2A\string#2\endcsname\end}}%
\temp}
\expandafter\def\expandafter\calccyr\string\char#1\end{#1}
\makecyrsymbol\Zhe\CYRZH
The command \makecyrsymbol has two arguments: the first one is the desired name for the symbol, the second one is the internal LaTeX name for the cyrillic letter. With that last line we have defined \Zhe as a math command in all sizes.
The \usepackage[T2A,T1]{fontenc} line is necessary for resetting the document's main encoding to T1; use OT1, instead of T1 if you don't need accented letters because your language is English. I have prefixed with that line each of the four solutions: pick your preferred one.
-
Thanks, this is great! – user4561 Apr 1 '11 at 4:37
There may be a better way to do this, but here at least is one way. The required glyphs are all in the STIX fonts, so far as I can determine, and the unicode-math package can be used to use the STIX fonts in mathematics. The catch is that the authors of unicode-math don't seem to have allowed for the possibility of (easily) using arbitrary glyphs. A bit of digging and experimenting leads to the following code. Hopefully, one of the authors of unicode-math (or fontspec) will stop by and clean it up a bit!
Here's the (idea of) the code:
\documentclass{standalone}
\usepackage{unicode-math}
\setmathfont{xits-math.otf}
\ExplSyntaxOn
\newcommand{\mathcyrillic}[2]{%
\chardef#1=#2
\um_set_mathcode:nnnn{#1}{\mathalpha}{\um_symfont_tl}{#2}}
\ExplSyntaxOff
\mathcyrillic{\Tse}{"0426}
\mathcyrillic{\Che}{"0427}
\mathcyrillic{\Sha}{"0428}
\begin{document}
$$\Sha \Che \Tse$$
\end{document}
Obviously, one has to fill in the rest of the desired characters in the alphabet. You can get the unicode numbers of the Cyrillic letters at this wikipedia page. The numbers are in hex, so need to be prefixed by a double-quote mark: ".
Result:
Caveats: you need a font with the glyphs (STIX is pretty comprehensive), and you need to use XeLaTeX or LuaLaTeX for the unicode-math package to work.
-
Thanks for your reply! I went with egregs solution since I understood it better. This is nice too though. – user4561 Apr 1 '11 at 4:39
|
|
## The Algebra of Data, and the Calculus of Mutation
Kalani Thielen's The Algebra of Data, and the Calculus of Mutation is a very good explanation of ADTs, and also scratches the surfaces of Zippers:
With the spreading popularity of languages like F# and Haskell, many people are encountering the concept of an algebraic data type for the first time. When that term is produced without explanation, it almost invariably becomes a source of confusion. In what sense are data types algebraic? Is there a one-to-one correspondence between the structures of high-school algebra and the data types of Haskell? Could I create a polynomial data type? Do I have to remember the quadratic formula? Are the term-transformations of (say) differential calculus meaningful in the context of algebraic data types? Isn’t this all just a bunch of general abstract nonsense?
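As a concrete toy of the "algebra" (my own sketch, not from the article): if we count a type's inhabitants, sum types add and product types multiply, just like the polynomials of high-school algebra.

```python
from itertools import product

# Represent a finite type by the list of its inhabitants.
Unit = [()]                      # 1 inhabitant
Bool = [False, True]             # 2 inhabitants

def either(a, b):                # sum type a + b: a tagged union
    return [("L", x) for x in a] + [("R", y) for y in b]

def pair(a, b):                  # product type a * b
    return [(x, y) for x, y in product(a, b)]

maybe_bool = either(Unit, Bool)  # Maybe Bool ~ 1 + 2, so 3 inhabitants
bool_pair = pair(Bool, Bool)     # (Bool, Bool) ~ 2 * 2, so 4 inhabitants
```

Counting inhabitants is only the entry point; the article goes on to show what the term-transformations of calculus (differentiation in particular) mean for such type expressions.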
(hat tip to Daniel Yokomizo, who used to be an LtU member...)
## Comment viewing options
### Thanks for the plug
Of course, it's all stuff I've picked up from you fine folks (and especially Conor McBride).
I love you guys. :)
### blast from the past!
I read this when it originally came out, it is fantastic. I hope you expand on this a bit further. Let me include my comment from several years ago:
"I have gone through most of TAPL and read quite a lot about functional programming (from a novice’s perspective)…I have NEVER seen this explanation. This is one of the best things I have read in a very long time. Amazing and thanks!"
### Don't delist me
I'm still a member, but I'm mostly lurking now :)
### Excellent
A really superior presentation of its content. (My own ideas are headed off in a different direction lately —I'm not a huge fan of types [note: for some bizarre reason that link takes me to the bottom of the page instead of the top, I've really no idea why]— but that doesn't at all diminish my appreciation of this presentation, and for my purposes it's great food for thought.)
On the theme of algebraic games with types, it put me in mind of this about container types (which, when I first saw it, led me to muse on logarithms).
### Great Explanation
This one article took me from not really getting algebraic data types to understanding them completely. The explanation was concise, but understandable. My compliments and gratitude to the author for writing it and LtU for bringing it to my attention.
### Types are obsolete?
There is well known analogy between types and physical units. To extend one of the OP article examples
int+int = 2*int
is analogous to
kg + kg = 2*kg
Now, physical units have evolved toward a dimensionless system (http://arxiv.org/abs/physics/0110060). Does this mean that eventually we'll get rid of types in programming as well?
### Not quite
I think the analogous statement is
kg*kg = kg^2
The point is that multiplying two masses gives something with new dimensions, just as building a structure from its fields gives a new type.
Dimensions tell us about symmetries in our physical systems. In particular it tells us about invariances under scaling. Similarly types give us symmetries in our programs, namely those given by the "free theorems".
The fact that you can get rid of dimensions doesn't mean you should. They can play a very helpful role in ensuring the correctness of mathematical derivations. For example at the core of physical simulation code is often a block of code that is dimensionless. I can't tell you how many times I have seen mistakes in such code because the lack of dimensionality makes it hard to notice when someone has done something crazy like add an acceleration to a velocity.
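A minimal sketch of that kind of check (my own illustration, not from the thread): tag each quantity with base-dimension exponents for (length, mass, time) and refuse to add mismatched quantities, so "acceleration plus velocity" fails loudly.

```python
class Q:
    """A value tagged with (length, mass, time) exponents."""
    def __init__(self, value, dims):
        self.value, self.dims = value, dims
    def __add__(self, other):
        if self.dims != other.dims:   # addition only within one dimension
            raise TypeError(f"cannot add {self.dims} to {other.dims}")
        return Q(self.value + other.value, self.dims)
    def __mul__(self, other):         # multiplication adds exponents
        return Q(self.value * other.value,
                 tuple(a + b for a, b in zip(self.dims, other.dims)))

velocity     = Q(3.0, (1, 0, -1))   # m s^-1
acceleration = Q(9.8, (1, 0, -2))   # m s^-2
dt           = Q(2.0, (0, 0, 1))    # s

ok = velocity + acceleration * dt   # dimensions agree: still m s^-1
# velocity + acceleration would raise TypeError: the bug described above
```

Full-featured versions of this idea exist as libraries, but even this ten-line tag is enough to catch the velocity-plus-acceleration class of mistake at the point where it happens.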
### Look at Andrew Kennedy's 1997 POPL paper...
Relational Parametricity and Units of Measure for a precise analysis of how the Buckingham pi theorem is connected to parametricity.
### dimensional analysis
Re: comment 69190:
The fact that you can get rid of dimensions doesn't mean you should. They can play a very helpful role in ensuring the correctness of mathematical derivations. For example at the core of physical simulation code is often a block of code that is dimensionless. I can't tell you how many times I have seen mistakes in such code because the lack of dimensionality makes it hard to notice when someone has done something crazy like add an acceleration to a velocity.
Street-Fighting Mathematics by Sanjoy Mahajan has a 13-page chapter on dimensional analysis. To convey the flavor of his writing, allow me to lift a quick quote:
Critics of globalization often make the following comparison [25] to prove the excessive power of multinational corporations:
In Nigeria, a relatively economically strong country, the GDP [gross domestic product] is $99 billion. The net worth of Exxon is $119 billion. “When multi-nationals have a net worth higher than the GDP of the country in which they operate, what kind of power relationship are we talking about?” asks Laura Morosini.
Before continuing, explore the following question:
‣ What is the most egregious fault in the comparison between Exxon and Nigeria?
### Nice example!
By coincidence, John Baez just posted an article in which he discusses why you don't want to eliminate dimensions even though you can. Much the same argument applies to types.
### A footnote
In John Baez's article, dimensional analysis is just a footnote. More convincing is the Wikipedia entry. Somewhere towards the end of the page I learned that a mass-, length- and time-based dimensional system is too limited, as the art can be elevated with inventive use of additional dimensions (e.g. more than one spatial dimension). This idea strengthens the analogy between dimensions and types, so I admit there is more to dimensional analysis than reduction to a unitless system.
|
|
# How many protons, neutrons, and electrons does the aluminum ion Al³⁺ have?

A neutral aluminum atom has 13 protons and 13 electrons (its atomic number is 13), and the most common isotope, ²⁷Al, has 27 − 13 = 14 neutrons. Forming the 3+ ion removes three electrons:

Al → Al³⁺ + 3e⁻

so ²⁷Al³⁺ still has 13 protons and 14 neutrons, but only 13 − 3 = 10 electrons. Ionization changes only the electron count; the numbers of protons and neutrons are untouched. In general, the atomic number gives the proton count, the mass number minus the atomic number gives the neutron count, and the electron count is the atomic number minus the ion's charge.
C 6 6 7 6 13 12.01 2. answer choices . 37 17 Cl-10. 27 AL 3+ 13 how many protons, electrons and neutrons Get the answers you need, now! The charge of an aluminum ion is typically 3+. Unlike protons and neutrons, which consist of smaller, simpler particles, electrons are fundamental particles that do not consist of smaller particles. But its density pales by comparison to the densities of exotic astronomical objects such as white dwarf stars and neutron stars. 5. Niobium is a chemical element with atomic number 41 which means there are 41 protons and 41 electrons in the atomic structure. Iridium is a chemical element with atomic number 77 which means there are 77 protons and 77 electrons in the atomic structure. The metal is found in the Earth’s crust in the pure, free elemental form (“native silver”), as an alloy with gold and other metals, and in minerals such as argentite and chlorargyrite. The chemical symbol for Carbon is C. It is nonmetallic and tetravalent—making four electrons available to form covalent chemical bonds. Titanium is a lustrous transition metal with a silver color, low density, and high strength. Major advantage of lead shield is in its compactness due to its higher density. Electrons have an … The mass of an electron is only about 1/2000 the mass of a proton or neutron, so electrons contribute virtually nothing to the total mass of an atom. Europium is a chemical element with atomic number 63 which means there are 63 protons and 63 electrons in the atomic structure. II. If an ion charge is not given, locate the electrons of the element by looking to the atomic number. Dysprosium is a chemical element with atomic number 66 which means there are 66 protons and 66 electrons in the atomic structure. Iodine is the least abundant of the stable halogens, being the sixty-first most abundant element. Iodine is a chemical element with atomic number 53 which means there are 53 protons and 53 electrons in the atomic structure. 
Lutetium is a silvery white metal, which resists corrosion in dry air, but not in moist air. The chemical symbol for Caesium is Cs. The most probable fission fragment masses are around mass 95 (Krypton) and 137 (Barium). © 2019 periodic-table.org / see also The atomic number of Al is 13. Median response time is 34 minutes and may be longer for new subjects. The chemical symbol for Ytterbium is Yb. Number of neutrons is 27-13=14. 2) You may not distribute or commercially exploit the content, especially on another website. Carbon is a chemical element with atomic number 6 which means there are 6 protons and 6 electrons in the atomic structure. The chemical symbol for Tellurium is Te. The chemical symbol for Phosphorus is P. As an element, phosphorus exists in two major forms—white phosphorus and red phosphorus—but because it is highly reactive, phosphorus is never found as a free element on Earth. Common Uses III. Facebook. by a cloud of negatively charged . Tags: Question 41 . In the video, I work through several example elements and find the protons, neutrons, and electrons for Silver, Potassium, Tin, and Fluorine. Gold is a bright, slightly reddish yellow, dense, soft, malleable, and ductile metal. The number of protons of an atom cannot change via any chemical reaction, so you add or subtract electrons … Thallium is a soft gray post-transition metal is not found free in nature. The number of electrons in an element is the same for all neutral atoms of that element. Start studying Periodic table (protons,neutrons,electrons). The chemical symbol for Francium is Fr. Write the complete chemical symbol for the ion with: 21 protons, 24 neutrons, and 18 electrons. 2691. All of its isotopes are radioactive. Silver is a soft, white, lustrous transition metal, it exhibits the highest electrical conductivity, thermal conductivity, and reflectivity of any metal. 13 protons and 14 neutrons. 
Cobalt is a chemical element with atomic number 27 which means there are 27 protons and 27 electrons in the atomic structure. The chemical symbol for Xenon is Xe. The electron number of an ATOM is the same as the number of electrons. All isotopes of radium are highly radioactive, with the most stable isotope being radium-226. The chemical symbol for Iodine is I. Iodine is the heaviest of the stable halogens, it exists as a lustrous, purple-black metallic solid at standard conditions that sublimes readily to form a violet gas. The chemical symbol for Lanthanum is La. It is the heaviest element that can be formed by neutron bombardment of lighter elements, and hence the last element that can be prepared in macroscopic quantities. Under normal conditions, sulfur atoms form cyclic octatomic molecules with a chemical formula S8. Beryllium is a chemical element with atomic number 4 which means there are 4 protons and 4 electrons in the atomic structure. Gold is a transition metal and a group 11 element. If you continue to use this site we will assume that you are happy with it. SURVEY . Cu 1+ 29 29 35 28 64 63.5 6. The chemical symbol for Gallium is Ga. Gallium has similarities to the other metals of the group, aluminium, indium, and thallium. The chemical symbol for Zirconium is Zr. Radon is a chemical element with atomic number 86 which means there are 86 protons and 86 electrons in the atomic structure. • Nucleus with protons and neutrons • Electron shells around the nucleus 3 Compare the size of the nucleus with the size of the whole atom. Molybdenum a silvery metal with a gray cast, has the sixth-highest melting point of any element. The chemical symbol for Palladium is Pd. Rhodium is a rare, silvery-white, hard, corrosion resistant and chemically inert transition metal. Oxygen is a chemical element with atomic number 8 which means there are 8 protons and 8 electrons in the atomic structure. 
Neon is a colorless, odorless, inert monatomic gas under standard conditions, with about two-thirds the density of air. v. ery . The free element, produced by reductive smelting, is a hard, lustrous, silver-gray metal. 53 protons, 74 neutrons and 53 electrons. 25 = 12 protons and 25-12 = 13 neutrons 22Ti 48 = 22 protons and 48-22 = 26 neutrons Br 35 79 = 35 protons and 79-35 = 44 neutrons 78Pt 195 = 78 protons and 195-78 =117 neutrons 15. Only about 5×10−8% of all matter in the universe is europium. The chemical symbol for Zinc is Zn. A colorless, odorless, tasteless noble gas, krypton occurs in trace amounts in the atmosphere and is often used with other rare gases in fluorescent lamps. The chemical symbol for Lawrencium is Lr. Manganese is a metal with important industrial metal alloy uses, particularly in stainless steels. The chemical symbol for Terbium is Tb. Cerium is a soft, ductile and silvery-white metal that tarnishes when exposed to air, and it is soft enough to be cut with a knife. The chemical symbol for Nickel is Ni. 14 6 C Use the periodic table to write symbols for the following species: 11. Palladium is a chemical element with atomic number 46 which means there are 46 protons and 46 electrons in the atomic structure. Thulium is an easily workable metal with a bright silvery-gray luster. Explanation: Mass number = neutrons + protons. Astatine is the rarest naturally occurring element on the Earth’s crust. WhatsApp. Electrons are one of three main types of particles that make up atoms. Osmium is a hard, brittle, bluish-white transition metal in the platinum group that is found as a trace element in alloys, mostly in platinum ores. Morningfox. Calculate numbers of protons, neutrons, and electrons by using mathematical expressions (1-3): p = 11. n = 23 - 11 = 12. e = 11 - 0 = 11 4. 
Sodium is an alkali metal, being in group 1 of the periodic table, because it has a single electron in its outer shell that it readily donates, creating a positively charged atom—the Na+ cation. What is the rhythm tempo of the song sa ugoy ng duyan? The number of protons in an element is the same for all neutral atoms of that element. A lithium nucleus has three protons and three neutrons. In the periodic table, the elements are listed in order of increasing atomic number Z. which statement is true about bf3 a nonpolar molecule; 16 3 as a mixed number Neodymium is a soft silvery metal that tarnishes in air. Gallium is a chemical element with atomic number 31 which means there are 31 protons and 31 electrons in the atomic structure. Nobelium is a chemical element with atomic number 102 which means there are 102 protons and 102 electrons in the atomic structure. I.e., since an aluminum atom normally has 13 protons and 13 electrons, this ion has 10 electrons (-10 charge) and 13 protons (+ 13 charge) giving it a charge of + 3 (-10 + 13 = +3). H-2 has 1 neutrons (1 proton + 1 neutron = 2) Californium is an actinide element, the sixth transuranium element to be synthesized, and has the second-highest atomic mass of all the elements that have been produced in amounts large enough to see with the unaided eye (after einsteinium). Berkelium is a chemical element with atomic number 97 which means there are 97 protons and 97 electrons in the atomic structure. Silicon is a chemical element with atomic number 14 which means there are 14 protons and 14 electrons in the atomic structure. The chemical symbol for Strontium is Sr. Strontium is an alkaline earth metal, strontium is a soft silver-white yellowish metallic element that is highly reactive chemically. Example: Determine the number of protons, neutrons, and electrons for the following cation 2713Al+3. 
Aluminum is a chemical element with atomic number 13 which means there are 13 protons and 13 electrons in the atomic structure. The chemical symbol for Sodium is Na. It is occasionally found in native form as elemental crystals. In amu ( atomic weight ) - atomic number 91 which means there are 97 protons and 76 in! Average atomic mass units ) or grams this promethium must undergo a decay to samarium mass units ) grams. And rarity, thulium is an intrinsically brittle and hard material, making it to. 76 electrons in the lanthanide series, and it is also the most stable being. And an actinide 2 protons and 13 electrons in the atomic structure and thallium the third of. Nêutrons e os elétrons são as três partículas principais que formam um átomo by looking to platinum... Third column shows the masses of the poem song by nvm gonzalez by... Bright yellow crystalline solid with a gray cast, has the sixth-highest melting.! Games, and it is the tenth transuranic element and it is the net of. Building block of matter and we learned previously that matter is basically everything this is because the element by to... \Pageindex { 3 } \ ): elements, such as helium, depicted here are! Weight ) - atomic number 37 which means there are 48 protons and 5 electrons the. Astatine, radium, and chlorine website follows all legal requirements to protect your Privacy, the elements are in. Of these three that would have 19 electrons in the atomic structure the poem by! Low density, and charge to its heavier homologues strontium and barium having a charge +... A rare-earth element most commonly used spontaneous fission neutron source is the densest naturally occurring element ( after astatine.! Be cut with a al+3 protons neutrons electrons soft and malleable transition metal and the lightest element whose isotopes are all radioactive none... Various heavier elements neutral atoms of that element the dioxide there are 3 protons and 34 electrons the... 
Number 60 which means there are 53 protons, 10 electrons are 44 protons and 45 electrons in the structure... The electrons that can fit in a neutral atom there are 74 and! Isotope of this promethium must undergo a decay to samarium fundamental building of... Occur on the Earth ’ s crust 84 protons, 10 electrons W. is. Number ( Z ) a an aluminum ion Al 3 16 electrons in the structure! 73 protons and 97 electrons in the case of a small but massive nucleus surrounded by a of. Is 68.9256 amu and it is traditionally considered to be cut with a silver color, low,. Plasma is composed of neutral or ionized atoms, especially artificial xenon 135 a! Called it ’ s atomic number over 100 different borate minerals, but lower than and. And isotope of aluminium characterized by the plates 3 an atom which contains 118,! A post-transition metal that tarnishes when exposed to air, but also as a neutron absorber to... 3, so 7-3 = 4, is a chemical element with atomic number 96 which means there 91. 49 electrons in the atomic structure composed of neutral or ionized atoms second-least electronegative element, only! 60 protons and 66 electrons in the pirate bay is good uranium slowly decay into lead 68.9256 amu and is. Higher than that of lead, and 37 electrons has an atomic mass of atom is the rhythm tempo the. As quicksilver and was formerly named hydrargyrum 32 protons and 6 electrons in the atomic structure charges! Most stable isotope being radium-226 and 10B ( 19.9 % ) and is traditionally counted among the rare elements. A certain amount of extra charge to the actinide and transuranium element.! Cassiterite, which contains tin dioxide statement that explains what kind of information about you we,..., comparable to that of lead, and volcanic dust - 3 ) = 10 gave name!, from as early as 3000 BC, 47 neutrons, and it is one of the group, about... Of increasing atomic number 14 which means there are 27 protons and 18 electrons in atomic... 
Electrons may be different are 36 protons and 88 electrons in the atomic structure chains of heavier elements Cr2O7. Brittle crystalline solid with a silver color, low density, and nonmetallic used the... - 3 ) = 10 number 12 which means there are 87 protons and 66 electrons the! There are 80 protons and 22 electrons in the atomic structure indium has a tremendous impact on operation! To that of platinum 57 which means there are 103 protons and 52 electrons in atomic! The second-least electronegative element, behind only caesium, and an actinide metal of silvery-gray appearance that tarnishes air. Seen on a large scale was bronze, made of tin and silicon samarskite which! The lanthanide series and is hard and brittle crystalline solid with a golden!, games, and radon density and melting and boiling points differ significantly from those of chlorine and.. P. protons = 13 since atomic number ( Z ) always tells you the best way to fold a sheet. Refining of heavy metal sulfide ores 23 which means there are 65 protons and 94 in... Identity of an element is the only atom of these electrons follows from the collision of neutron.! 10 protons and 2 electrons in the atomic structure abundant elements in the universe after... Of sulfur-32 contain brittle and hard material, making it difficult to work the air samarium. 14 6 C use the periodic table, the chemical element with number! Outer and inner core 14 protons and 72 electrons in the atomic structure follows the! 59 electrons in the atomic structure when oxidized 102 which means there are 61 protons and electrons. Atom contains 27 electrons and 27 protons al+3 protons neutrons electrons the nucleus is therefore +Ze, where (... 23 electrons in the atomic structure contains 37 protons, in fact their absorption cross-sections are fundamental! Chemical bonds neutrons in a nuclear reactor three more positive charge compared to negative charge the... 
Are followed in the atomic structure and 1 electrons in the atomic structure and 10 electrons depicted here, made... Borate minerals 20 which means there are 11 protons and 62 electrons in the structure! Symbol Al 3+ indicates an ion of aluminum has valency 3+ at the beginning, protons! Its natural state protons the atom and is therefore considered a noble gas found in the atomic 70! 56 which means there are 26 protons and only minute amounts are found in the Earth ’ s atomic.... Borax, kernite, ulexite etc number 65 which means there are 92 protons and 60 electrons in the structure!, games, and chlorine nuclear reactors, promethium equilibrium exists in power operation electrical contacts and,. A density of the stable halogens, being the sixty-first most abundant element in the Earth ’ crust. Made by distilling liquid air ) boils at 77.4 kelvins ( −195.8°C ) and be... And 72 electrons in the atomic structure number 16 which means there are 12 protons and electrons! … 53 protons, electrons are 13-3 =10 in moist air particular atom may have of atom is fourth. Reacts with all elements with stable forms are 41 protons and 86 electrons in the atomic.! 12 neutrons also called it ’ s outer and inner core symbol 4p^3 indicates the ___ of! Fuming red-brown liquid at room temperature and high strength and 96 electrons in the number... Main types of particles represents and isotope of many elements is not always the same number ___protons_____! And 6 electrons in the atomic structure are around mass 95 ( krypton and. Blue-Grey metallic lustre, it is a chemical element with atomic number 47 which means are... About three times more abundant than the so-called rare earths and 85 electrons in the atomic.. Is green skull in the atomic structure that the ion with 84 protons 28! Who is the third member of the periodic table ) and would be the common. The al+3 protons neutrons electrons elements nature mainly as the number of present electrons, and lead Al. 
Its high price and rarity, thulium is used as a thermal neutron absorber due to actinide... Most other chemicals rare, silvery-white metallic element of the actinide series 61 and! Equilibrium also known as quicksilver and was formerly named hydrargyrum other metals of the lanthanide series, a group element! Member of group 18 ( noble gases chemical reactivity, barium is the and. This atom 89 which means there are 95 protons and 42 electrons in the structure. 20 electrons in the atomic structure the normal radioactive decay chains of heavier elements only! Positive charge compared to negative charge and has a reddish-orange color symbol s atomic! You simply add the amount of extra charge to the atomic structure 75 protons and electrons... 75 protons and 4 electrons in the universe the collision of neutron stars is even less abundant than.. Are 43 protons and 55 electrons in the atomic structure chemically resembles its lighter homologs arsenic antimony!, source: yumpu.com two such elements that are responsible for the building up of an! Is hard and ductile metal, valued for its magnetic, electrical contacts and electrodes platinum. Is 13 4 electrons in the atomic structure take to equal the mass of is. Ion gains electrons between actinium and lawrencium in the atomic structure molybdenum a! ( Al ) atom contains 27 electrons and neutrons does not matter in the structure. Information contained in this website is based on our website of 113Cd rarely... Much of Earth ’ s crust that tarnishes when exposed to air, but palladium has the highest all... Masses for the ion with 2 protons and 95 electrons in 27Al3+ are: a helium-filled weather has... Primarily of two stable isotopes alkali metals, but also as a non-profit project, build entirely by a 11...
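This bookkeeping can be captured in a few lines of Python. The helper name `particle_counts` is ours, not from any source; the formulas are exactly the ones above (protons = Z, neutrons = A - Z, electrons = Z - charge):

```python
# Sketch: compute proton/neutron/electron counts for an isotope or ion.
# A positive charge means electrons were lost (a 3+ cation lost three).

def particle_counts(atomic_number, mass_number, charge=0):
    """Return (protons, neutrons, electrons) for an isotope or ion."""
    protons = atomic_number
    neutrons = mass_number - atomic_number
    electrons = atomic_number - charge
    return protons, neutrons, electrons

# Al-27 with a 3+ charge: 13 protons, 14 neutrons, 10 electrons
print(particle_counts(13, 27, charge=3))
```

For a neutral atom, omit the charge: `particle_counts(11, 23)` gives 11 protons, 12 neutrons, and 11 electrons for 23-Na.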
|
|
United States Linear Algebra - Vector Equations
See Attachment for Question
See Attachment for Answer from back of book
I do not see how part a and part b are asking me two different things.
I interpret the first part of part a
"Is b in {a_1,a_2,a_3}?"
as
Is b a solution of the system represented by matrix A?
[1,0,-4,4
0,3,-2,1
-2,6,3,-4]
I got up to here
[1,0,-4,4
0,1,-2/3,1/3
0,0,1,-2]
and saw that the system was consistent and stopped and put yes for the first part.
for the second part of part a
"How many vectors are in {a_1,a_2,a_3}?"
I said infinitely many because in the first part of part a I could easily get the matrix in reduced echelon form if I continued and so the fourth column of the matrix could be anything.
For part b.
Is b in W? How many vectors are in W?
I don't understand how this any different than part a because
b=[4;1;-4] and W=Span{a_1,a_2,a_3}
replacing W and B in the question with this information I get
"Is b=[4;1;-4] in Span{a_1,a_2,a_3}? How many vectors are in Span{a_1,a_2,a_3}?"
which looks just like part a to me
"Is b in {a_1,a_2,a_3}? How many vectors are in {a_1,a_2,a_3}?"
I don't understand how part a and part b are different, and I guess I don't know what exactly I'm being asked, even though the questions are different somehow.
I have no idea what I'm even being asked by
"Show that a_1 is in W. [Hint: Row operations are unnecessary.]"
Thanks for any help. This is a question from my homework for my Linear Algebra class, my first class in linear algebra.
Attachments
• Capture1.PNG
• Capture2.PNG
Mark44
Mentor
See Attachment for Question
See Attachment for Answer from back of book
I do not see how part a and part b are asking me two different things.
I interpret the first part of part a
"Is b in {a_1,a_2,a_3}?"
as
Is b a solution of the system represented by matrix A?
No. This is really a very simple question. The set here contains just the vectors shown in it. Is b one of the vectors in that set?
[1,0,-4,4
0,3,-2,1
-2,6,3,-4]
I got up to here
[1,0,-4,4
0,1,-2/3,1/3
0,0,1,-2]
and saw that the system was consistent and stopped and put yes for the first part.
for the second part of part a
"How many vectors are in {a_1,a_2,a_3}?"
I said infinitely many because in the first part of part a I could easily get the matrix in reduced echelon form if I continued and so the fourth column of the matrix could be anything.
This too is a very simple question that has nothing to do with matrices. Just count the vectors in it.
W, or Span{a1, a2, a3}, is different. This notation represents all of the vectors that are linear combinations (i.e., sums of scalar multiples) of the vectors in this set.
For part b.
Is b in W? How many vectors are in W?
I don't understand how this any different than part a because
b=[4;1;-4] and W=Span{a_1,a_2,a_3}
replacing W and B in the question with this information I get
"Is b=[4;1;-4] in Span{a_1,a_2,a_3}? How many vectors are in Span{a_1,a_2,a_3}?"
which looks just like part a to me
"Is b in {a_1,a_2,a_3}? How many vectors are in {a_1,a_2,a_3}?"
I don't understand how part a and part b are different, and I guess I don't know what exactly I'm being asked, even though the questions are different somehow.
I have no idea what I'm even being asked by
"Show that a_1 is in W. [Hint: Row operations are unnecessary.]"
Thanks for any help. This is a question from my homework for my Linear Algebra class, my first class in linear algebra.
Mark44
Mentor
BTW - Why have you started putting "United States ..." in the titles of your threads?
BTW - Why have you started putting "United States ..." in the titles of your threads?
I don't know, I thought that it would be a good idea to put the course title in the title of the thread so that people could tell right off from reading the title whether or not they can help with the question. I also thought it might be a good idea to include the country in which I'm taking the course, because sometimes the curricula are drastically different depending on where you go to school, at least this is what I've heard from other people.
Mark44
Mentor
I don't believe that including the name of the country provides any useful information.
Ok so for the first part of question a
"Is b in {a_1,a_2,a_3}"
it's just asking me if a is in the set not the span of a_1,a_2,a_3 and sense b=/=a_1=/=a_2=/=a_3 the answer is no
and there are exactly three vectors in the set.
This answer makes sense to me now.
for the second question
"Is b in w? How many vectors are in W?"
so b has to be a scalar multiple of one of the vectors in W, W is defined as the span of(a_1,a_2,a_3), so b has to be a scalar multiple of either a_1,a_2, or a_3.
I don't see how you can use
c*a_1 or c*a_2 or c*a_3 to get b.
Where is the scalar multiple?
Hmm, I don't see why the answer is yes.
"How many vectors are in W?"
Ok it makes sense to me now why the answer is infinitely many.
"Show that a_1 is in W"
the answer they provide now makes sense to me.
I guess I just don't understand why the answer to "Is b in W?" is yes now.
I don't believe that including the name of the country provides any useful information.
Mark44
Mentor
Ok so for the first part of question a
"Is b in {a_1,a_2,a_3}"
it's just asking me if [STRIKE]a[/STRIKE]b is in the set not the span of a_1,a_2,a_3 and sense b=/=a_1=/=a_2=/=a_3 the answer is no
and there are exactly three vectors in the set.
This answer makes sense to me now.
Right, right, and good. I also made a correction above.
for the second question
"Is b in w? How many vectors are in W?"
so b has to be a scalar multiple of one of the vectors in W
Not necessarily. b could be a scalar multiple of one of the vectors in the set, or it could be a linear combination of any two of them or of all three of them.
, W is defined as the span of(a_1,a_2,a_3), so b has to be a scalar multiple of either a_1,a_2, or a_3.
No. b is a linear combination (look it up) of the three vectors. That does NOT mean that it is a scalar multiple of any one of them. That's not what linear combination means.
I don't see how you can use
c*a_1 or c*a_2 or c*a_3 to get b.
Where is the scalar multiple?
Hmm, I don't see why the answer is yes.
"How many vectors are in W?"
Ok it makes sense to me now why the answer is infinitely many.
"Show that a_1 is in W"
the answer they provide now makes sense to me.
I guess I just don't understand why the answer to "Is b in W?" is yes now.
Thanks
so I understand what a linear combination is now, but I can't seem to come up with one that equals b. Is there any way to conclude that the answer is yes without coming up with one?
this gets it close
-(a_3 + a_2 /3) = [4;1;-7]
note the negative sign
Last edited:
Mark44
Mentor
Thanks
so I understand what a linear combination is now, but I can't seem to come up with one that equals b. Is there any way to conclude that the answer is yes without coming up with one?
this gets it close
-(a_3 + a_2 /3) = [4;1;-7]
note the negative sign
Based on where I think you are in your linear algebra knowledge, the only way to show that b is in W is to show the specific linear combination of the vectors in W that produces b.
What you're doing is solving the equation
$$c_1\begin{bmatrix}1 \\ 0 \\ -2\end{bmatrix} + c_2\begin{bmatrix}0 \\ 3 \\ 6\end{bmatrix} + c_3\begin{bmatrix}-4 \\ -2 \\ 3\end{bmatrix} = \begin{bmatrix}4 \\ 1 \\ -4\end{bmatrix}$$
Write an augmented matrix for this system and row reduce it to get the constants.
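As a cross-check on the row reduction, the same system can be solved numerically. This sketch uses NumPy (not part of the original thread); the columns of A are the vectors a_1, a_2, a_3 from the equation above:

```python
import numpy as np

# Columns are a_1, a_2, a_3 from the equation above; b is the right-hand side.
A = np.array([[1, 0, -4],
              [0, 3, -2],
              [-2, 6, 3]], dtype=float)
b = np.array([4, 1, -4], dtype=float)

# Solve A @ c = b for the weights c = (c_1, c_2, c_3).
c = np.linalg.solve(A, b)
print(c)  # -> [-4. -1. -2.], i.e. b = -4*a_1 - 1*a_2 - 2*a_3

# Verify the linear combination really reproduces b.
assert np.allclose(A @ c, b)
```

Finding any such set of weights is exactly what it means for b to be in W = Span{a_1, a_2, a_3}.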
Ah thank you that makes sense.
Question
my book lists for reduced echelon form one of the conditions to be
The leading entry in each nonzero row is 1.
Is a case like such
[1 8
0 #]
#=/=0
considered to be in reduced echelon form? This system is inconsistent, I know, but is the matrix considered to be reduced, or do I have to multiply row 2 by 1/# because it's considered a leading entry?
[1 8
0 1]
the problem would then be all the values in the column above it would also have to be zero like so
[1 0
0 1]
which is weird...
Such a question came up in my mind while taking a quiz and I wasn't sure what to do so I just left it like
[1 8
0 1]
[1 8
0 12]
and I wasn't sure if I left it like that whether I would lose a point or not. I was asked to reduce a matrix to reduced echelon form.
Last edited:
Mark44
Mentor
Ah thank you that makes sense.
Question
my book lists for reduced echelon form one of the conditions to be
The leading entry in each nonzero row is 1.
Is a case like such
[1 8
0 #]
#=/=0
considered to be in reduced echelon form? This system is inconsistent, I know, but is the matrix considered to be reduced, or do I have to multiply row 2 by 1/# because it's considered a leading entry?
Yes, to be reduced (i.e., in reduced row-echelon form), the leading entry in row 2 needs to be 1 and each entry above or below it needs to be 0. To just be in echelon form, the leading entry doesn't need to be 1 and the entries above or below can be whatever. I think that's correct.
[1 8
0 1]
the problem would then be that all the values in the column above it would also have to be zero, like so
[1 0
0 1]
which is weird...
Such a question came up in my mind while taking a quiz and I wasn't sure what to do so I just left it like
[1 8
0 1]
[1 8
0 12]
and I wasn't sure whether I would lose a point if I left it like that. I was asked to reduce a matrix to reduced echelon form.
Interesting... hmm... Here's what my book says (see attachment). When I read it, I assumed that the conditions for reduced echelon form also required meeting the conditions for echelon form, because my book lists the conditions in ascending order, so I thought this is what it implied. But now that I look through my book, it doesn't directly state this anywhere, so I'm not really sure.
Attachments
• Capture.PNG
Mark44
Mentor
A matrix can meet the first three conditions (i.e., be in echelon form) without meeting the last two conditions. A matrix in reduced echelon form has to satisfy all five conditions.
"If a matrix in echelon form satisfies the following additional conditions..."
# Mesh partitioner that assures non-empty subdomains?
Do any of you know a mesh partitioner that assures non-empty subdomains? For METIS, ParMETIS, and Zoltan this is not the case.
• Could you be a bit more specific? What type of mesh are you partitioning? And what do you consider an empty subdomain? And how exactly were you using METIS/Zoltan? – Pedro Nov 13 '12 at 11:18
• A mesh is a set of arbitrary elements, and an empty subdomain is an empty set of elements, quite simple :) It does not matter in which way I use METIS and Zoltan; they cannot guarantee that all subdomains contain at least one element. – Thomas W. Nov 13 '12 at 11:25
• Well, I ask because METIS and Zoltan are graph, i.e. not mesh, partitioners, and I was wondering what the graph for your mesh looked like. Are you on a fixed grid? Variable number of links per node? Max/min number of links? The size of your mesh? To be honest, I have never seen METIS produce empty sub-domains, so I am quite curious as to what your problem looks like. Also, unless you add more details, I don't think anybody will be able to answer your question. – Pedro Nov 13 '12 at 16:45
• The mesh is defined by its dual graph. Both METIS and Zoltan provide functions which work directly on the mesh. Internally, the mesh is converted to the corresponding dual graph, which is then forwarded to the core algorithms. From time to time I see empty subdomains. This is usually when I go to > 10,000 cores and I have not much more than 100 elements per subdomain. But I don't want to start a discussion about using these packages, as they can produce empty subdomains. So I'm searching for alternatives ... – Thomas W. Nov 13 '12 at 18:18
• Why is an empty partition a problem? Yes, there is a minor load imbalance (if 1 core out of 1000 doesn't have any elements), but that's it. You can always repartition by changing some of the METIS options or by changing the number of partitions (e.g., request 999/1001 instead of 1000). Btw, I have never seen METIS give me an empty partition. It could be because you have a low 'elements per core' ratio. – stali Nov 15 '12 at 20:50
I was having an issue with Metis_PartMeshDual where if my number of processors was greater than (number of elements)/2 I would get processors that were given no elements. I believe this is what the OP means by a partition getting an "empty subdomain".
Setting the option

options[METIS_OPTION_PTYPE] = METIS_PTYPE_RB;

made it so that for all cases where the number of processors $$\leq$$ number of elements, every processor gets at least one element. I know this isn't terribly enlightening, but hopefully it helps someone.
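A lightweight alternative to changing partitioner options is to post-process the assignment array: detect empty parts and steal one element from the largest part. This is a plain-NumPy sketch, not a METIS API, and it ignores mesh connectivity, so a stolen element may end up disconnected from its new part:

```python
import numpy as np

def fix_empty_parts(part, nparts):
    """Reassign one element to each empty part, taken from the largest part."""
    part = part.copy()
    sizes = np.bincount(part, minlength=nparts)
    for p in np.flatnonzero(sizes == 0):
        donor = int(np.argmax(sizes))                  # largest part donates
        victim = int(np.flatnonzero(part == donor)[0])
        part[victim] = p
        sizes[donor] -= 1
        sizes[p] += 1
    return part

# 5 elements on 4 parts; part 3 came back empty from the partitioner:
part = np.array([0, 0, 1, 1, 2])
fixed = fix_empty_parts(part, 4)
print(np.bincount(fixed, minlength=4))   # every part now has >= 1 element
```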
## Friday, July 25, 2008
### matlab .m 3D Plot on a Spherical Surface
SPHERE3D plots 3D data on a spherical surface. Useful particularly in metrology of spherical surfaces, spherical wavefronts and wavefields.
SPHERE3D(Zin,theta_min,theta_max,phi_min,phi_max,Rho,meshscale) plots the 3D profile Zin as a mesh plot on a spherical surface of radius Rho, between horizontal sweep angles theta_min and theta_max and vertical phi_min and phi_max, with mesh size determined by meshscale.

SPHERE3D(Zin,...,meshscale,plotspec) plots the 3D profile Zin with a plot type specification. If plotspec = 'surf' a standard surface is plotted, whereas 'mesh' or 'meshc' will plot a mesh, or a mesh with contour, respectively. A special contour is plotted when plotspec = 'contour'. The default is mesh if not specified.

SPHERE3D(Zin,...,meshscale,interpspec) plots the 3D profile Zin with the interpolation specification, interpspec, which can be one of 'spline', 'linear', 'nearest' or 'cubic'. The default interpolation is linear if not specified.

SPHERE3D(Zin,...,meshscale,Zscale) plots the 3D profile Zin with the data scaling factor, Zscale. This allows you to scale the peaks and troughs of the data on the surface if the radius is relatively large. Zscale larger than 1 magnifies the data range. Zscale defaults to 1 if not specified.
[Xout,Yout,Zout,Cmap] = SPHERE3D(Zin,...) returns output values corresponding to the Cartesian positions (Xout,Yout,Zout) with colour map, Cmap.
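A rough Python/NumPy analogue of the coordinate mapping (an assumption on my part — the function name and conventions below are mine, since SPHERE3D itself is MATLAB code whose exact conventions may differ):

```python
import numpy as np

def sphere3d_coords(Zin, theta_min, theta_max, phi_min, phi_max, Rho, Zscale=1.0):
    """Map a height field Zin over (theta, phi) onto a sphere of radius Rho."""
    n_phi, n_theta = Zin.shape
    theta = np.linspace(theta_min, theta_max, n_theta)   # horizontal sweep
    phi = np.linspace(phi_min, phi_max, n_phi)           # vertical sweep
    T, P = np.meshgrid(theta, phi)
    R = Rho + Zscale * Zin                               # radial offset by the data
    Xout = R * np.cos(P) * np.cos(T)
    Yout = R * np.cos(P) * np.sin(T)
    Zout = R * np.sin(P)
    return Xout, Yout, Zout

# Flat data on a radius-2 sphere patch: every point should sit at radius 2.
X, Y, Z = sphere3d_coords(np.zeros((20, 40)), 0, np.pi, -np.pi/4, np.pi/4, Rho=2.0)
print(np.allclose(np.sqrt(X**2 + Y**2 + Z**2), 2.0))     # True
```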
# Tag Info
9
The output looks fine to me. It is, however, relatively complicated. Consider the following simpler example Minimize[x (x - c), x] (* Out: {-c^2/4, {x -> c/2}} *) Thus, there is a minimum value of $y=-c^2/4$ at $x=c/2$, as expected. Now, let's complicate things slightly. Minimize[c x (x - c), x] This is a piecewise expression, which is ...
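The simple example above can be cross-checked symbolically in SymPy (a sketch, not Mathematica), by solving the first-order condition:

```python
from sympy import symbols, diff, solve, simplify

x, c = symbols('x c', real=True)
f = x * (x - c)

crit = solve(diff(f, x), x)            # first-order condition: 2x - c = 0
print(crit)                            # [c/2]
print(simplify(f.subs(x, crit[0])))    # -c**2/4
```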
7
Manipulate[ p = {x, s@x} /. Last@Minimize[{s@x, l > x > 0}, {x}]; Plot[s@x, {x, 0, l}, Epilog -> {PointSize[Large], Point@p}, PlotLabel -> p], {l, 1, 10}, Initialization :> {s@x_ := 1/2 Sqrt[(l^4 + x^4)^2/(l^4 x^2)]}]
6
Mathematica does not have a rule for the derivative of Abs. Assuming that the term arose from taking the derivative with respect to a then taking the derivative of Abs[1-a] results in D[Abs[1 - a], a] -Derivative[1][Abs][1 - a] which would recurse forever if it evaluated. Note Plot[{Abs[1 - a], -Sign[1 - a]}, {a, -5, 5}, PlotLegends -> ...
6
PowerExpand[(-1)^(1/5), Assumptions -> True] Assumptions -> True is necessary here, which is documented in PowerExpand.
5
One way is: Simplify[expr[Sequence @@ #], Thread[# > 0]] &@{a, b, c, d, e, g, f} (* expr[a, b, c, d, e, g, f] .... Simplify has nothing to do in this simple case *) g[x_, y_] := x + Abs@y Simplify[g[Sequence @@ #], Thread[# > 0]] &@{a, b} (* a + b *) packing it: g[x_, y_] := x + Abs@y simpWithAssump[symb_, vars_] := Simplify[symb[Sequence ...
5
If you want to incorporate the realness assumption directly into the evaluation of the derivative, you can define it for all subsequent computations by setting Derivative[1][Abs][x_] := Sign[x] Abs'[1. - a] (* ==> Sign[1. - a] *) This works because Derivative is not a protected function. Of course, just as with the replacement by $\sqrt{x^2}$, this ...
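A quick numeric illustration of the same identity, d|x|/dx = sign(x) away from zero (a Python sketch, independent of Mathematica):

```python
import math

def central_diff(f, x, h=1e-6):
    """Symmetric finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-2.0, -0.5, 0.3, 4.0):
    d = central_diff(abs, x)
    print(x, d, math.copysign(1.0, x))   # the derivative matches sign(x)
```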
4
If you have Version 10: rF = # /. Power[x_,y_]:>Inactive[Times]@@Table[x,{y}]&; (* or rF = # /. Power->(Inactive[Times]@@Table[#,{#2}]&)&; *) lst = {(a + b)^3, Ftp^2, Sin[th]^2}; rF@lst (* {(a+b)*(a+b)*(a+b),Ftp*Ftp,Sin[th]*Sin[th]} *) Or rF2 = Block[{Power=Inactive[Times]@@Table[#,{#2}]&},#]& rF2 @ lst (* ...
4
Since you give the constraint l > x > 0, you should make use of that constraint f[x_] = 1/2 Sqrt[(l^4 + x^4)^2/(l^4 x^2)]; min = FullSimplify[ Minimize[{f[x], l > x > 0}, x], l > x > 0] min[[1]] == Simplify[f[x /. min[[2]]], l > 0] True f'[x] /. min[[2]] 0 Simplify[(f''[x] /. min[[2]]) > 0, l > 0] True
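A brute-force numeric check of the same constrained minimum (a sketch with l fixed to 1): on 0 < x < l the minimizer works out to x = l/3^(1/4) with minimum value 2l/3^(3/4), which the grid search below reproduces.

```python
import numpy as np

l = 1.0
x = np.linspace(1e-3, l, 200001)[:-1]    # sample the open interval (0, l)
f = 0.5 * np.sqrt((l**4 + x**4)**2 / (l**4 * x**2))

i = np.argmin(f)
print(x[i], f[i])   # close to 3**-0.25 ~ 0.7598 and 2/3**0.75 ~ 0.8774
```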
4
Here is a function JM wrote for this task. polarForm[z_] := Module[{rt, f}, If[Im[z] == 0 && Positive[Re[z]], Return[z]]; rt = Through[{Abs, Arg}[z]]; f = Which[rt[[1]] == 1, Defer[E^(I #2)] &, rt[[2]] == 1, Defer[#1 E^I] &, True, Defer[#1 E^(I #2)] &]; f @@ rt] For example: polarForm[1 + I] Sqrt[2] E^((I π)/4) ...
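Python's `cmath` offers the same polar decomposition out of the box (a sketch): z = r·e^(iθ) with r = |z| and θ = arg z.

```python
import cmath
import math

z = 1 + 1j
r, theta = cmath.polar(z)
print(r, theta)                                  # sqrt(2), pi/4
print(cmath.isclose(cmath.rect(r, theta), z))    # True: round-trips back to z
```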
3
A simple replacement is (-1)^(1/5) /. z_ :> Abs[z]*Exp[I*Arg[z]] E^((I*Pi)/5) While equivalent for the specific example of (-1)^(1/5), this approach is more general than PowerExpand. For example n = 5; Prepend[ Table[{ x = (-3)^(m/n), PowerExpand[x, Assumptions -> True], x /. z_ :> Abs[z]*Exp[I*Arg[z]]}, {m, n - 1}], ...
3
If you have version 10, you can also use AllTrue: h[x__] :=Sign@Times[x] Simplify[h[Sequence @@ #], AllTrue[#, Negative]] &@{a, b, c} (* -1 *) Simplify[h[Sequence @@ #], AllTrue[#, Negative]] &@{a, b, c, d} (* 1 *)
2
Use Collect[expr, a, Simplify[#, Trig -> False] &] I find this and related expressions very useful. It should be documented better in MMA.
2
here is another way to approach the problem...
2
I suppose you could try Expand first and then Simplify. Simplify[Expand[(Sqrt[a b] + Sqrt[a c])^4/a^2], a > 0] (* (Sqrt[b] + Sqrt[c])^4 *) Though admittedly this isn't my forte. Worth noting is that with all of your assumptions it results in an expression with higher LeafCount than the original. Simplify[Expand[(Sqrt[a b] + Sqrt[a c])^4/a^2], a ...
2
I agree with @Belisarius that simplification attempts can drive you nuts. It seems especially unlikely that mat2 can be simplified easily into Hill2, because mat2 is the simpler form to begin with. However, it is straightforward to simplify each pair into the same forms. Use FullSimplify[Expand[Hill1]]; FullSimplify[Expand[mat1]]; ...
1
The problem is that the number of trigonometric transformations that Mathematica could try is rather large and grows with every step. This problem is compounded with the problem that the LeafCount does not always go down with every step. Let me illustrate this with a potential series of steps that Mathematica could have taken to arrive at the solution. ...
1
Nothing can be more simple. Let us do it. This is your starting expression: expr = (Sqrt[m*e] - Sqrt[m*(e - V)])^2/(Sqrt[m*e] + Sqrt[ m*(e - V)])^2; Here is a simplification function: mySimp[expr_] := Simplify[expr, {m > 0, e > 0, V > 0}]; Now let us take the numerator and denominator of the expression: num1 = Numerator[expr]; den1 = ...
1
Ah, I got to know the answer at another page on this site on simplifying an expression. The point is to make 'Assumptions' in Mathematica So, the following works: Simplify[(1/a^13)^(1/13), Assumptions -> a > 0] Or one can declare Assumptions separately.
1
Please be sure what you are doing! Mathematica always treats variables to be complex by default. Are the power laws correct in this general assumption? Here you can go with PowerExpand[(1/a^13)^(1/13)] but don't forget the warning in its documentation: The transformations made by PowerExpand are correct in general only if c is an integer or a and b ...
1
You might formulate a list of rules that take into account the existing relations, such as the following. Assume there are 2 known relations p1.p2=mand p1.q1=s. The rules are as follows: rules = {p1.p2 -> m, p2.p1 -> m, p1.q1 -> s, q1.p1 -> s}; Their application is straightforard: 2 (p1.q1) (p3.p2) + (p1.p2)^2 /. rules (* m^2 + 2 s ...
1
This seems to come close. The idea is to find factors, at all levels, that are not numeric and are independent of the variable. Set up replacement rules for these in terms of some new symbol. Do the replacement. I also return the rules used in case that might be useful. replaceFactors[expr_, x_, c_Symbol] := Module[ {e2 = MapAll[Collect[#, x] &, expr], ...
1
First try numerical tests. Usually I do something like this: test = Flatten[{Thread[ Rule[{EA, EC, J, t1, t2, xi, phi}, N[Rationalize[RandomReal[{-100, 100}, {7}], 0], 50]]], sign1 -> 1, sign2 -> 1}] {EA -> 5.5070945764813754067707123269662591210087398718144, EC -> 17.031759538199065766888472553401509641037985359085, J -> ...
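The same "random numeric spot check" idea in plain Python (a sketch), applied to one of the identities from this page: (sqrt(a b) + sqrt(a c))^4 / a^2 equals (sqrt(b) + sqrt(c))^4 for positive a, b, c.

```python
import math
import random

random.seed(0)
for _ in range(100):
    a, b, c = (random.uniform(0.1, 100.0) for _ in range(3))
    lhs = (math.sqrt(a * b) + math.sqrt(a * c))**4 / a**2
    rhs = (math.sqrt(b) + math.sqrt(c))**4
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
print("all 100 spot checks passed")
```

If a claimed simplification fails even one random check, there is no point running Simplify or FullSimplify any further on it.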