How can one control the vertical space introduced by `\\` before a new row in the `aligned` environment? For example, how do I condense the following? $\begin{aligned} a & b \\ c & d \end{aligned}$ I am using this through the `btex-etex` mechanism in MetaPost, and `\begin{group}` does not seem to do it!
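For reference, a minimal sketch of the usual knob, assuming plain `amsmath`: `\\` accepts an optional length, and a negative value tightens the gap between rows (the `-2pt` is only an illustrative value).

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % \\[<len>] adjusts the space before the next row; negative lengths condense.
    $\begin{aligned}
      a & b \\[-2pt]
      c & d
    \end{aligned}$
    \end{document}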
There are countless titles of the form "the many faces of ...". A quick Google search finds nearly 500 million hits, starting with "The Many Faces of the Public Domain", "The Many Faces of the Freshman Seminar", "The Many Faces of Go" and "The Many Faces of Influence Infographic". What is the origin of the phrase "the many faces of ...", in particular when used in a title? The closest I came to finding an answer was a search using Google Ngram Viewer. This seems to show that use of "the many faces of ..." really took off around 1955. Thus, probably the origin of the phrase is neither the Bible nor Shakespeare.
In the linear regression model, the means of the errors are assumed to be zero. Furthermore, we can assume either that the errors are uncorrelated and have the same variance, or even that the errors are iid. Note that normality of the errors isn't assumed. What extra properties does assuming the errors are iid bring to the OLS estimates, compared to assuming the errors are uncorrelated with common variance? Thanks!
The last patch claimed that a bug was fixed and Health Kits would start spawning.

> -Fixed a bug where health packs were not spawning in world as they should

I've played through a few lives in the game after the update and still haven't gotten any med kits. They're not spawning in the open, I've opened the drawers etc. in a couple dozen rooms now, and I've killed lots of monsters without finding a single med kit. If it matters, I'm playing the free version, but the update is for the free version and mentions they should spawn. Where do I find med kits?
I have a Samsung Galaxy S3 with a broken screen. The screen does not respond to any touch and does not display anything at all; it is black at all times. CyanogenMod 11 is installed on the device. I need to securely wipe the device, but I've been having a very difficult time doing so, as CyanogenMod doesn't allow ADB to connect without user input, which in this case is impossible. I've done a factory reset on the phone, but that didn't wipe the user data on the internal storage (not an external SD card, but rather the "internal SD card" hosted on the main phone ROM). How can I securely wipe the phone and delete _all_ data left on the internal storage? I'd like to zero out the filesystem, but still have a semi-working phone that will at least boot into Android for occasional tinkering over adb.
I just inherited a system and I am trying to understand its partition table for the hard drive. (I'm new to this.)

    machine:~# fdisk -l /dev/sda

    Disk /dev/sda: 250.0 GB, 250000000000 bytes
    255 heads, 63 sectors/track, 30394 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000080

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1       30064   241489048+  fd  Linux raid autodetect
    /dev/sda2           30065       30394     2650725    5  Extended
    /dev/sda5           30065       30394     2650693+   fd  Linux raid autodetect

Why does the numbering go from 1 to 2 to 5? "What is on" sda2 and sda5?
In order to understand the output of one of my lme models, I produced a simpler example using lm (so no random factor). I noticed that my fitted model does not seem to fit the data correctly, as the predicted y-values deviate strongly from the given values when using predict(lm). My data set is:

    a = c(rep(1:10, 4))
    b = c(10,20,30,40,50,60,70,80,90,100,
          5,8,10,14,17,22,27,35,42,50,
          90,82,73,64,56,48,40,33,25,18,
          5,6,8,10,12,14,17,20,23,26)
    c = c(rep("male", 10), rep("female", 10), rep("male", 10), rep("female", 10))
    d = c(rep("low", 20), rep("high", 20))
    e = data.frame(yval = b, xval = a, sex = c, education = d)

Graphically it looks like this:

    library(car)
    scatterplot(yval ~ xval | education, smooth = F, grid = T, spread = F,
                reg.line = T, data = e, xlab = "x", ylab = "y")
    scatterplot(yval ~ xval | sex, smooth = F, grid = T, spread = F,
                reg.line = T, data = e, xlab = "x", ylab = "y")

and the linear model is:

    lm2 = lm(yval ~ xval + sex + xval:sex + education + xval:education, data = e)
    summary(lm2)

Using

    e_pred = e
    e_pred$pred = predict(lm2)

I get the predicted values, which do not match the real data at all:

    e = cbind(e, e_pred$pred)

Is this due to the fact that there is more than one significant interaction? Thanks a lot (in advance) for reading and perhaps answering!!
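A quick diagnostic sketch for the model above (nothing here beyond the data and fit already defined): plotting observed against fitted values shows which groups the specified interactions cannot capture, and comparing against a model that also includes the sex:education interaction makes the gap visible.

    # Observed vs fitted: well-fit points lie on the dashed identity line.
    plot(e$yval, predict(lm2), xlab = "observed yval", ylab = "fitted yval")
    abline(0, 1, lty = 2)

    # For comparison: the full-factorial model including sex:education.
    lm3 = lm(yval ~ xval * sex * education, data = e)
    points(e$yval, predict(lm3), pch = 2)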
Using second quantization for scalar, spinor, and vector fields, we can derive commutation and anticommutation relations for the creation and annihilation operators of the fields, which lead us to Bose or Fermi statistics. Is it possible to extend these results to a field of arbitrary spin (integer or half-integer), using the basic idea that each such field can be built as a combination of spin-$\frac{\hbar}{2}$ spinor fields?
Say I spot a sentry in the distance, and I take out the engineer. What's the fastest way to destroy the sentry?

1. Shoot a series of slow, fully-charged shots?
2. Shoot a series of fast, completely uncharged shots?
3. Shoot a series of half-charged shots?

Or is there an even faster way, e.g. shooting quickly without bothering to zoom in?
Is there a way to have a macro optionally prefixed with `\left` or `\right` and have it expand differently depending on that? For example, have `\bra{x}` expand to `\langle x |`, but `\left\bra{x}` to `\left\langle x \middle|`? Extra bonus if it also works with explicit size prefixes, giving that size to both `\langle` and `|`. Some examples of what you could do with such macros: assuming a corresponding `\ket` macro, one could then write e.g.

    \left\bra{x} \frac{\hat p}{m} \right\ket{x}

and have it expand to

    \left\langle x \middle| \frac{\hat p}{m} \middle| x \right\rangle

i.e. the size of the brackets would depend on both the arguments and what's in between. On the other hand,

    \bra{x} \frac{\hat p}{m} \ket{x}

would just expand to

    \langle x | \frac{\hat p}{m} | x \rangle

that is, without extension. Since there would be exactly one `\left` in the expansion of the `\left`-prefixed macro, and exactly one `\right` in the expansion of the `\right`-prefixed macro, one could also combine it with normal delimiters (including the "pseudo-delimiter" `.`). So, for example,

    \left\langle{x^2}\right.

would expand only according to what is inside the argument. (Note: I am aware of the braket package, so no need to point that out. Note that it doesn't do the `\left`/`\right` thing either.)
It's my understanding that when the Monarch arrives, the fortress becomes the civilization's capital. Hence I stop getting dwarven liaisons (which would mean I would stop getting dwarven caravans, which provide me with some necessary resources not available in my embark location). The wiki is also silent on what happens to said civilization when the fortress eventually crumbles. What I want to know is:

* Does receiving the Monarch or becoming the capital provide any tangible (economic, military, social) advantage?
* What happens after the fortress ends (or when the Monarch dies)? Does the civilization keep going?
I followed very simple instructions (http://liliputing.com/2012/03/how-to-dual-boot-cyanogenmod-7-nook-tablet-os-with-a-microsd-card.html) for extracting CyanogenMod to an SD card for dual booting on a Nook. Works great. As the instructions indicate, the file system is only 4 GB, even though I used a 16 GB SD card. I've googled all over the place to figure out how to expand the file system to use all available storage. My laptop is on Windows, and the Windows disk manager doesn't seem to know how to expand the partitions on the drive (or create a new one in unpartitioned space). Do I need to be running *nix to manage the file system? Is there an app on the marketplace that will manage the filesystem? Apologies if this is a duplicate question; I did a pretty good search before posting.
The sentence

> Although they looked totally inconspicuous at first glance, we knew they are unique and special.

is given. Now, what I thought is that "Although they looked totally inconspicuous at first glance," is a subordinate clause and the rest is the main clause. Then I realized that within the main clause a "that" is omitted, so that it could read "we knew that they are unique and special." Does this make "we knew" a matrix clause? Then the two predicators would make sense to me.
I first asked this question on SuperUser.com but got no responses. I have found how to align the partitions of my SSD using fdisk (SSD article on the Gentoo Wiki), but I haven't been able to find any resources about aligning the partitions of an HDD. Is this practice necessary, or should I just let something like GParted align them by default? If it's something I should do to the HDD as well, where can I find a resource for the size to use for the sector and head portions of the command?
I am considering a poster presentation that I would like to design using nonstandard margin shapes. The current standard essentially puts elements inside a rectangle determined by page size, header and footer spacing, margin widths, and likely other things I don't know. I think I can get the effect I want by specifying a slanted margin, e.g. specify spacing at the top of the page, and a slope for the left margin (separately for the right margin). Think of it as specifying typesetting for a trapezoidal sheet of paper. (Although slanted lines of text would be cool, I don't intend to use that: just slanted left and right margins. Knowing how to slant the text line (not the font) would be a bonus, though.) Is there a LaTeX package that would allow specification of slanted margins, especially on a per-page basis? (Yes, I am considering different slopes for different pages.) Is there one which would allow a shaped margin? (I doubt I would use it for this project, but having the margin slope change halfway down the page might have its uses. Curves and more interesting effects would be a bonus.) Suppose I use a package which lets me specify a slanted margin with slope 10 (so as I go down the page 10 inches, the margin slides over steadily until it is one inch further left). What are the readability issues in doing this? Is it a bad idea for a page with more than, say, 30 lines of text?
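One low-level route, sketched with plain TeX's `\parshape` primitive (a real primitive, though the indent values below are only illustrative and would in practice be computed per page): it sets the indent and width of each line of a single paragraph, which is exactly a per-line slanted margin.

    \documentclass{article}
    \begin{document}
    % \parshape n i1 l1 ... in ln : line k of the paragraph starts at
    % indent ik with length lk; lines beyond n reuse the last pair.
    \parshape 5
      0pt  \dimexpr\linewidth-0pt\relax
      5pt  \dimexpr\linewidth-10pt\relax
      10pt \dimexpr\linewidth-20pt\relax
      15pt \dimexpr\linewidth-30pt\relax
      20pt \dimexpr\linewidth-40pt\relax
    Some running text long enough to wrap over several lines, so that the
    slanted left and right margins produced by the indent sequence become
    visible in the output. More filler text, and more, and more, and more,
    and more, and more, and more filler text to reach five lines.
    \end{document}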
Consider this code:

    MemoryInUse[]
    T = Table[RandomComplex[], {i, 1, 6000}, {j, 1, 6000}];
    MemoryInUse[]
    T += T\[ConjugateTranspose];
    MemoryInUse[]
    {Es, Ys} = Eigensystem[T];
    MemoryInUse[]
    T = Table[RandomComplex[], {i, 1, 6000}, {j, 1, 6000}];
    MemoryInUse[]
    T += T\[ConjugateTranspose];
    MemoryInUse[]
    {Es, Ys} = Eigensystem[T];
    MemoryInUse[]
    $HistoryLength = 0;
    MemoryInUse[]
    Clear[T]
    MemoryInUse[]
    Clear[Es, Ys]
    MemoryInUse[]
    ClearSystemCache[]
    MemoryInUse[]

It gives me the following results:

> 15808208
> 880820520
> 1456822832
> 4919500424
> 5783503032
> 6359505096
> 9822181440
> 9822182648
> 9822182112
> 9822181384
> 9822162952

Clearly, only a negligible amount of memory is released by any of `ClearSystemCache`, `Clear`, or zeroing `$HistoryLength`. Repeating the execution leads to swapping, and once it starts I hurry to kill the MathKernel before my X server, WM, or anything else is OOM-killed. So what are the working ways to release the memory?
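A sketch of the usual pattern, with the caveat that it is preventive rather than curative: the big intermediates are retained by the `Out[]` (`%`) history, so `$HistoryLength = 0` has to be set before the computations for `Clear` to have any effect; clearing `Out` afterwards is the after-the-fact variant.

    (* Preventive: disable Out[] history before the big computations. *)
    $HistoryLength = 0;

    (* After the fact: drop whatever Out[] already holds. *)
    Unprotect[Out]; Clear[Out]; Protect[Out];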
Here in Pittsburgh, we have lots of "Let's go Steelers!" (and some diehards who also say "Let's go Bucs!", but they're dying out). What does that phrase even imply? I assume it's similar to "Go Steelers", whose implications I'm also not sure of. "Go Steelers... to victory!" would be a very strange way to phrase the sentence. Does anyone know where this phrase comes from?
Let's say I have 3 variables, `A`, `B`, and `C` and I want to generate data where `A` and `B` are correlated at `r=x`, `A` and `C` are correlated at `r=y`, and `B` and `C` are correlated at `r=z`. 1. Is there any algorithm that will tell me, given specified values for `x`, `y`, and `z`, if any set of variances for `A`, `B` and `C` will yield a positive definite covariance matrix? 2. Given the existence of such an algorithm, is there another algorithm that, in cases where a positive definite covariance matrix _is_ possible, will give me a set of variances that achieve a positive definite matrix?
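For the first part, a minimal numerical check (a sketch, not a closed-form algorithm): the matrix with unit variances and the three correlations is positive definite iff all its eigenvalues are positive, and since rescaling by any positive variances preserves positive definiteness, checking the correlation matrix alone already settles question 1.

    # TRUE iff correlations x, y, z admit a positive definite matrix
    pd_possible <- function(x, y, z) {
      R <- matrix(c(1, x, y,
                    x, 1, z,
                    y, z, 1), nrow = 3, byrow = TRUE)
      all(eigen(R, symmetric = TRUE, only.values = TRUE)$values > 0)
    }

    pd_possible(0.9, 0.9, 0.9)    # TRUE
    pd_possible(0.9, -0.9, 0.9)   # FALSE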
I am using the package `sidecap` for captions which appear left or right of a picture/table, like so:

    \documentclass[a4paper,twoside,11pt,openright]{scrbook}
    \usepackage{graphicx}
    \usepackage[wide]{sidecap}
    \usepackage[font=footnotesize, format=plain, labelfont={bf,sf},
                textfont={it}, width=10pt]{caption}

    \newcommand{\fig}[4]{
      \begin{SCfigure}
        \centering
        \includegraphics[width=\textwidth]{#1}
        \caption[#2]{#3}
        \label{fig:#4}
      \end{SCfigure}
    }

And then, within the document:

    \begin{document}
    \fig{background_flickr.png}{Flickr geotagging functionality}{Flickr
      geotagging functionality. By navigating and panning on a map, the user
      can place an image or video and specify the desired level of spatial
      granularity.}{background_flickr}
    \end{document}

Which yields:

![enter image description here](http://i.stack.imgur.com/uuRlW.png)

I am also using the package `listings` for code, and the settings look like this:

    \usepackage{listings}
    \lstset{
      backgroundcolor=\color{lightgray},
      extendedchars=true,
      basicstyle=\footnotesize\ttfamily,
      xleftmargin=20pt,
      showstringspaces=false,
      showspaces=false,
      numbers=left,
      numberstyle=\footnotesize,
      tabsize=2,
      breaklines=true,
      showtabs=false,
      captionpos=tb
    }

Inserting a code listing like this

    \begin{lstlisting}[caption={[JSON Example]A basic JSON example},
                       label=src:DataTwitterAPIJSON]
    {
      "firstName": "John",
      "lastName": "Smith",
      "age": 25,
      "address": {
        "streetAddress": "21 2nd Street",
        "city": "New York",
        "state": "NY",
        "postalCode": 10021
      },
      "phoneNumbers": [
        { "type": "home", "number": "212 555-1234" },
        { "type": "fax", "number": "646 555-4567" }
      ]
    }
    \end{lstlisting}

yields:

![enter image description here](http://i.stack.imgur.com/N2yIy.png)

There are two problems. First of all, the caption does not get displayed correctly at all (it seems to be very narrow). Secondly, I need it to be on the left of the listing on left pages and on the right of the listing on right pages (like the normal `sidecap` behavior above). I suspect that the "narrow" look somehow comes from the `sidecap` package. Is there a way to use `sidecap` not only for figures and tables but also for listings?
I have a Greek site with a Greek domain (.gr), and even so, Google's keyword list shows in the first places the Greek equivalents of common words like "in", "a", "and", etc. All of them would be Greek stop words. Thanks for any help.
Michael Swan in his "Practical English Usage" says that present passive forms can have meanings similar to present perfect passives.

> The vegetables _are_ all _cut up_ - what shall I do? = The vegetables _have been_ cut up.
>
> I got caught in the rain and my suit's _ruined_. = ... _has been ruined_.
>
> I think your ankle _is broken_. = ... _has been broken_.
>
> My suitcase _is packed_. = ... _has been packed_.

He states that this happens due to the fact that some verbs refer to _actions that produce a finished result_ (to cut, to build, to pack, to close), while others do not (to push, to live, to speak, to hit, to carry). He goes on: the past participles of finished-result verbs, and some of their passive tenses, can have two meanings. They can _refer to the action_, or they can _describe the result_ (rather like adjectives).

> The theatre was closed by the police on the orders of the mayor. (refers to the _action of closing_)
>
> When I got there I found that the theatre was closed. (refers to the state of being shut - the result of the action)

I'm not sure I get the difference between the two groups of verbs mentioned above. Could anyone, please, go into more detail and explain it to me? I need more examples _to feel_ what it really means.
Bootstrap is a well-known resampling method, but I want to know: what is blocked weighted bootstrap sampling? Why do we need it?
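For orientation, a minimal sketch of the plain moving-block bootstrap in R (my own illustration, not a definition of the blocked weighted variant, which additionally attaches weights to blocks or observations): resampling contiguous blocks rather than single points preserves short-range dependence in, e.g., time series.

    # Moving-block bootstrap of the mean of a dependent series x.
    # L is the block length; B the number of bootstrap replicates.
    block_boot_mean <- function(x, L, B = 1000) {
      n <- length(x)
      starts <- seq_len(n - L + 1)
      replicate(B, {
        s <- sample(starts, ceiling(n / L), replace = TRUE)   # block start points
        idx <- unlist(lapply(s, function(i) i:(i + L - 1)))   # expand to indices
        mean(x[idx[seq_len(n)]])                              # trim to length n
      })
    }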
I am trying to understand the idea of a force carrier with the following example. Let's say there are two charges $A$ and $B$ that are a fixed distance from each other. What is causing the force on $B$ by $A$? Classically, charge $A$ has an associated electric field which causes a force on $B$. From the standard model, photons are the force carrier for the electromagnetic force. On this view, does it mean that $A$ is constantly emitting photons, but in a way that the magnetic component cancels out? If that is the case, then doesn't that mean that charge $A$ is constantly losing energy?
What are some practices I should use in a product registration system I'm building? I likely can't stop all malicious hacking, but I'd like to slow attackers down a great deal. (Note: I know only PHP.) I'm talking about things like encrypting traffic, testing the encryption against attacks like man-in-the-middle, etc. The other concern I have is that this needs to work on most PHP5-based web hosting environments, which may not have mcrypt installed.
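One building block, sketched under the assumption of PHP 5.4+ with the OpenSSL extension (usually present where mcrypt is not): symmetric encryption of a payload with a per-message random IV. Key storage and the surrounding protocol are deliberately left out; this is an illustration, not a vetted design.

    <?php
    // Illustrative only: encrypt/decrypt a string with AES-256-CBC via OpenSSL.
    // $key must be 32 random bytes stored securely server-side (assumption).
    function encrypt_payload($plaintext, $key) {
        $iv = openssl_random_pseudo_bytes(openssl_cipher_iv_length('aes-256-cbc'));
        $ct = openssl_encrypt($plaintext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
        return base64_encode($iv . $ct);   // ship the IV alongside the ciphertext
    }

    function decrypt_payload($blob, $key) {
        $raw   = base64_decode($blob);
        $ivlen = openssl_cipher_iv_length('aes-256-cbc');
        return openssl_decrypt(substr($raw, $ivlen), 'aes-256-cbc', $key,
                               OPENSSL_RAW_DATA, substr($raw, 0, $ivlen));
    }
    ?>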
I would like to convert a list of $n$ complex equations to a list of $2n$ real ones. At the moment I am doing it like this:

    eqs = {a + I b == 0, c + I d == 0}
    Flatten[{ComplexExpand[Re[First[#]]] == 0 & /@ eqs,
             ComplexExpand[Im[First[#]]] == 0 & /@ eqs}]

I would like to know how I can write this more compactly, since I'm basically using the same command twice, with the only difference being changing the function `Re -> Im`. Perhaps I can use a pure function to map over a list of these two functions? Thanks!
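One possible compaction, a sketch along the lines the question suggests (iterating over the two component-extractors instead of writing the mapping twice):

    (* Table ranges f over the two functions Re and Im *)
    Flatten[Table[ComplexExpand[f[First[#]]] == 0 & /@ eqs, {f, {Re, Im}}]]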
I have a simple AJAX pagination that seemed to work just fine, calling one WordPress page (not ideal, I know) with a pagenum and a sort order argument, and it returned 4 custom posts per call. The problem I found while testing it is that if the items are ordered like this in the db:

    item1: 4 votes
    item2: 3 votes
    item3: 3 votes
    item4: 3 votes
    ----page2----
    item5: 3 votes
    item6: 2 votes
    ...

...there's a chance that any one of the items with 3 votes in the custom field will display as the first item on page 2 of the pagination. This effectively erases one of the items from the listing, and I can't seem to find a way to query get_posts properly. This is how it looks now:

    elseif($savjet_order == 'votes') :
        $offset = (($savjet_per_page * $savjet_page) /* - $savjet_per_page */ );
        //$offset = $offset < 0 ? 0 : $offset;
        $args = array(
            'numberposts' => $savjet_per_page,
            'offset'      => $offset,
            'meta_key'    => 'wpcf-glasova',
            'orderby'     => 'meta_value_num post_date',
            'order'       => 'DESC',
            //'paged'     => $savjet_page,
            'post_type'   => 'savjet',
            'post_status' => 'publish'
            /*'suppress_filters' => true*/
        );
        $savjeti = get_posts($args);

...adding post_date (or even date) to the orderby clause doesn't seem to change anything ('date' seems to make it worse by making the list ascend even though the order is DESC).
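A sketch of the usual fix, hedged because it relies on WordPress 4.0+: pass `orderby` as an array so ties in the meta value are broken deterministically by a unique column such as `ID`, which keeps page boundaries stable across requests.

    // Ties in 'wpcf-glasova' are broken by post ID, so the ordering is
    // identical on every request and no row can slip between pages.
    $args['orderby'] = array(
        'meta_value_num' => 'DESC',
        'ID'             => 'DESC',
    );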
I ran a multinomial logit model in JMP and got back results which included the AIC as well as chi-squared p-values for each parameter estimate. The model has one categorical outcome and 7 categorical explanatory vars. I then fit what I thought would be the same model in R, using the `multinom` function in the nnet package. The code was basically:

    fit1 <- multinom(y ~ x1 + x2 + ... + xn, data = mydata)
    summary(fit1)

However, the two give different results. With JMP the AIC is 2923.21, and with `nnet::multinom` the AIC is 3116.588. So my **first question** is: is one of the models wrong? The second thing is, JMP gives chi-squared p-values for each parameter estimate, which I need. Running summary on the multinom `fit1` does not - it just gives the estimates, AIC, and deviance. My **second question** is thus: is there a way to get the p-values for the model and estimates when using `nnet::multinom`? I know mlogit is another R package for this, and it looks like its output includes the p-values; however, I have not been able to run `mlogit` using my data. I think I had the data formatted right, but it said I had an invalid formula. I used the same formula that I used for `multinom`, but it seems like it requires a different format using a pipe, and I don't understand how that works. Thanks.
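For the second question, a commonly used sketch (Wald tests computed by hand from the `multinom` summary; the squared z statistics are 1-df chi-squared values under the usual asymptotics):

    s <- summary(fit1)
    z <- s$coefficients / s$standard.errors    # Wald z for each estimate
    p <- 2 * pnorm(abs(z), lower.tail = FALSE) # two-sided p-values
    p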
I'm trying to use ogrinfo to get some details on a shapefile I downloaded. Currently, the only way I know how to do this is to load it into QGIS and manually click around to find any information on it, like opening the attribute table. I just want to be able to see whatever metadata is tagged along with the features. If I do:

    ogrinfo -al USA_adm0.shp

I can see at the beginning there is a lot of useful information, but then it flies past with all the feature data. Can someone help me out?

EDIT

This is what I get on my Mac using the -ro and -so flags; it doesn't seem to be much help.

    ->ogrinfo -ro -so USA_adm0.shp
    INFO: Open of `USA_adm0.shp'
          using driver `ESRI Shapefile' successful.
    1: USA_adm0 (Polygon)
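For what it's worth, a sketch of the usual invocation: `-so` (summary only) suppresses the per-feature dump, but it needs `-al` (or an explicit layer name) to print the layer's schema and metadata rather than just the layer list.

    # Layer summary without the feature flood:
    ogrinfo -ro -so -al USA_adm0.shp
    # ...or name the layer explicitly:
    ogrinfo -ro -so USA_adm0.shp USA_adm0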
I have a URL: www.example.com/my_cool_pancakes. When a user goes to this URL, I would like the URL to instead be www.example.com/pancakes. How do I do this? Is this URL rewriting? I have studied URL rewriting, but I am still not sure how it works, or even whether it is the method I should use to do this. If it is, could you show me how? Thank you :) PS: www.example.com/my_cool_pancakes is an archive page template for a custom post type, if that makes any difference.
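Yes, this is URL rewriting. A minimal sketch with the WordPress rewrite API, assuming (from the PS) that `my_cool_pancakes` is the post type slug whose archive should answer at `/pancakes`; permalinks need to be flushed once afterwards (e.g. by re-saving Settings > Permalinks).

    // In the theme's functions.php or a small plugin:
    add_action('init', function () {
        // Serve the custom post type archive at /pancakes
        add_rewrite_rule(
            '^pancakes/?$',
            'index.php?post_type=my_cool_pancakes',
            'top'
        );
    });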
This script should remove the contents of the dustbin directory. If the `-a` option is used the script should remove _all_ files from the dustbin. Otherwise, the script should display the filenames in the dustbin one by one and ask the user for confirmation that they should be deleted.

    if test ! -f ~/TAM/dustbin/*
    then
        echo "this directory is empty"
    else
        for resfile in ~/TAM/dustbin/*
        do
            if test -f $resfile ; then
                echo "Do you want to delete $resfile"
                echo "Y/N"
                read ans
                if test $ans = Y ; then
                    rm $resfile
                    echo "File $resfile was deleted"
                fi
            fi
        done
    fi

The above is working; however, it reports a few errors, _even_ though it still carries out the code on the line after the error without crashing. Errors:

    ./remove: line 4: test: too many arguments

(This happens when there are more than 2 files in the dustbin.)

    ./remove: line 4: test: root/TAM/dustbin/NewFile2: binary operator expected

(This happens when the file is NewFile2 but not NewFile3.)

Also, does anyone have any input on how I could implement `-a` to delete everything in the folder without asking about each file separately?
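A sketch of both fixes under the assumptions visible above (dustbin at `~/TAM/dustbin`, `-a` passed as the first argument): the `test ! -f ~/TAM/dustbin/*` errors arise because the glob expands to several words inside `test`, so the emptiness check is rewritten, filenames are quoted, and `-a` bypasses the per-file prompt.

    #!/bin/sh
    bin=~/TAM/dustbin

    # -a: delete everything without asking
    if [ "$1" = "-a" ]; then
        rm -f "$bin"/*
        exit 0
    fi

    # Emptiness check that survives multiple files: ls -A prints nothing
    # for an empty directory, so the command substitution is empty.
    if [ -z "$(ls -A "$bin")" ]; then
        echo "this directory is empty"
        exit 0
    fi

    for resfile in "$bin"/*; do
        [ -f "$resfile" ] || continue   # quoted: filenames may contain spaces
        printf 'Do you want to delete %s? Y/N ' "$resfile"
        read ans
        if [ "$ans" = Y ]; then
            rm "$resfile"
            echo "File $resfile was deleted"
        fi
    done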
I found many solutions to similar problems here, but I still cannot solve this one. I'm using Springer's template like this:

    \documentclass[twocolumn, natbib]{svjour3}
    \usepackage{graphicx}
    \usepackage{natbib}
    ...
    \bibliographystyle{spbasic} % basic style, author-year citations
    \bibliography{my_bibliography}

and I get an error:

    Bibliography not compatible with author-year citations.

I really cannot figure out what I'm doing wrong.
I have some trouble understanding exactly what a mole represents. As I understand it, one mole unit is 1/12 of the mass of an atom of carbon-12 (thus it is the mass of one nucleon?). What is a mole, then?
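For reference, the standard definitions being mixed here: the "1/12 of carbon-12" clause defines the unified atomic mass unit u, while the mole counts entities.

$$ 1\,\mathrm{u} = \tfrac{1}{12}\, m\!\left({}^{12}\mathrm{C}\right) \approx 1.66\times 10^{-27}\,\mathrm{kg}, \qquad N_A \approx 6.022\times 10^{23}\,\mathrm{mol}^{-1} $$

so one mole of carbon-12 atoms has a mass of $12\,\mathrm{g}$.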
In my fresh install of Linux Mint 16 with MATE, I have no hibernate option in the Power Manager, only suspend and shutdown. In the Quit menu I do have hibernate as an option, and `sudo pm-hibernate` works from the command line. Any suggestions on how to enable hibernate in the Power Manager? I want to hibernate when the laptop lid closes. I have just enough swap space for hibernation to work:

    $ free -h
                 total       used       free     shared    buffers     cached
    Mem:          3.5G       1.6G       1.8G         0B        18M       406M
    -/+ buffers/cache:       1.2G       2.3G
    Swap:         3.6G        16M       3.6G
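A sketch of the fix commonly cited for Mint/Ubuntu of this era (hedged: the action names are the upower/logind ones in use at the time): hibernate is hidden by a polkit policy, and re-allowing it makes it reappear in the power menus after a reboot.

    # /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla
    [Re-enable hibernate by default in upower]
    Identity=unix-user:*
    Action=org.freedesktop.upower.hibernate
    ResultActive=yes

    [Re-enable hibernate by default in logind]
    Identity=unix-user:*
    Action=org.freedesktop.login1.hibernate
    ResultActive=yes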
What's the difference between:

> I will be eating cakes tomorrow.
>
> I will eat cakes tomorrow.

And when should I use the first form?
Are Web **slideshows** and **carousels** the same thing? If not, what is the difference? By _Web slideshow_, I mean HTML image galleries like:

* Flexslider by WooThemes
* Nivo Slider™
* Juicebox
If I extract Potential Evaporation (PET, W/m$^2$) from the National Centers for Environmental Prediction (NCEP) climate reanalysis data (downloadable as netCDF files here), there are some negative values; e.g. for summer in southern Manitoba (50N, 100W), I get the attached distribution of PET values:

![enter image description here](http://i.stack.imgur.com/G6Gct.png)

Is this a real physical process, such as condensation (dew formation), or is it an error (e.g. should I average over these values or set them to zero)?

* * *

**update** (requested metadata): The metadata (contained in the files) says that the valid range is -800, 5200; here is a further description of the 'pevpr' variable (provided in the data header):

* long_name = "Monthly Mean of Potential Evaporation Rate"
* valid_range = -800, 5200
* units = "W/m^2"
* add_offset = 0
* scale_factor = 1
* missing_value = -9.96921e+36
* precision = 1
* least_significant_digit = 0
* var_desc = "Potential Evaporation Rate"
* dataset = "CDC Derived NCEP Reanalysis Products"
* level_desc = "Surface"
* statistic = "Mean"
* parent_stat = "Individual Obs"
I am using the package moderncv. I have already implemented a conditional statement in the preamble:

    \usepackage{ifthen}
    \newif\ifresume
    \resumetrue % true for RESUME, false for CV

This allows me to set up different designs for generating either a resume or a CV from the same data. What I would like to do now is to also implement this condition in the

    \cventry{year--year}{Degree}{Institution}{City}{Grade}{Description}

data fields, in the following sense:

    \cventry{year--year}{Degree}{Institution}{City}{Grade}{(short) Description for resume}{(long) Description for CV}

I have already tried various ideas to implement this, but none succeeded. The obvious solution would be

    \cventry{year--year}{Degree}{Institution}{City}{Grade}{\ifresume
        %Text for RESUME
    \else
        %Text for CV
    \fi}

but that does not appear very practicable. Any ideas how to solve this smoothly? Cheers, Mil
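A minimal sketch that keeps the switch out of the data fields: a small helper (the name `\resumecv` is mine) that selects one of two texts, so each `\cventry` keeps its normal six arguments.

    % Picks #1 when \resumetrue was set, otherwise #2.
    \newcommand{\resumecv}[2]{\ifresume #1\else #2\fi}

    \cventry{year--year}{Degree}{Institution}{City}{Grade}
            {\resumecv{(short) Description for resume}
                      {(long) Description for CV}}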
I would like to organise a graph's vertices in levels. Consider

    g = {0 -> 1, 1 -> 2, 2 -> 3, 0 -> 4, 0 -> 5, 2 -> 6, 2 -> 7, 8 -> 3,
         4 -> 9, 5 -> 9, 6 -> 9, 6 -> 10, 7 -> 10, 8 -> 10, 9 -> 11,
         9 -> 12, 10 -> 11, 10 -> 12};
    Graph[g]

Using a nested list of vertices such as `{{0, 1, 2, 3}, {4, 5, 6, 7, 8}, {9, 10}, {11, 12}}`, I would like to see the graph with 4 levels, wherein vertices 0, 1, 2, 3 are placed in a line on the top level, vertices 4, 5, 6, 7, 8 in a line on the level below, and so on.

![enter image description here](http://i.stack.imgur.com/ZoubU.png)
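A minimal sketch of the explicit-coordinate route (the layout rule is my own choice; "MultipartiteEmbedding" would be another avenue): give each vertex an x position within its level and a y position of minus the level index.

    levels = {{0, 1, 2, 3}, {4, 5, 6, 7, 8}, {9, 10}, {11, 12}};
    coords = Flatten@MapIndexed[
        Thread[#1 -> Table[{x, -First[#2]}, {x, Length[#1]}]] &,
        levels];
    Graph[g, VertexCoordinates -> coords, VertexLabels -> "Name"]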
I understand the difference between the two architectures is the separation of instructions from data in the Harvard architecture. But how do I know which type of system I'm on? Is it possible to write a program such that the program determines whether the system is von Neumann or Harvard? Could there be another architecture or are these architectures the only ones known?
I am using OpenGeo and OpenLayers 3. I managed to set up a WMS layer from PostGIS ("Configure new SQL view") where I show points from coordinates I have in the database. I also managed (following a tutorial on docs.geoserver) to set a custom icon as the graphic. The next step is that this graphic icon should change on mouse over. I understand that only an image is transferred to the client from GeoServer since I am using WMS, but there must be a way to do that. Is there? I think I must use WFS, but maybe I am wrong. What then? There are not many examples for OpenLayers 3. Icon style:

    <FeatureTypeStyle>
      <Rule>
        <PointSymbolizer>
          <Graphic>
            <ExternalGraphic>
              <OnlineResource xlink:type="simple" xlink:href="image.png" />
              <Format>image/png</Format>
            </ExternalGraphic>
            <Size>32</Size>
          </Graphic>
        </PointSymbolizer>
      </Rule>
    </FeatureTypeStyle>
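If the points are loaded as vector features (e.g. over WFS into an `ol.layer.Vector`; the names `map`, `defaultStyle`, and `hoverStyle` below are my placeholders), the hover swap happens client-side in OpenLayers 3:

    // Swap a feature's style while the pointer is over it.
    var hovered = null;
    map.on('pointermove', function (evt) {
      if (hovered) {            // restore the previously hovered feature
        hovered.setStyle(defaultStyle);
        hovered = null;
      }
      map.forEachFeatureAtPixel(evt.pixel, function (feature) {
        feature.setStyle(hoverStyle);
        hovered = feature;
        return true;            // stop after the topmost feature
      });
    });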
I have rewritten some Delphi function to ORACLE DB functions for converting from lat-long to utm and mgrs. Anyone care to validate the output with real data they know is correct? package specification: create or replace package gedaco as function MGRS(lat in number, Lon in number, a in number, InverseFlattening in number, Coding in number, Digits in number) return varchar2; Function MGRSLatZone(lat in number) return varchar2; function SquareID(UTMzn in number, Northing in number, Easting in number, Coding in number) return varchar2; function UTM(lat in number, Lon in number, a in number, InverseFlattening in number) return varchar2; Function UTMX(UTMs in varchar2) return number; Function UTMY(UTMs in varchar2) return number; function UTMZone(lat in number, Lon in number) return number; end gedaco; package body: create or replace package body gedaco as function MGRS(lat in number, Lon in number, a in number, InverseFlattening in number, Coding in number, Digits in number) return varchar2 is result varchar2(32); UTMs1 varchar2(32); E1 number; N1 number; Zn number; Lzn varchar2(32); Sq varchar2(32); begin UTMs1 := UTM(lat, Lon, a, InverseFlattening) ; E1 := UTMX(UTMs1); N1 := UTMY(UTMs1); Zn := UTMZone(lat, Lon); Lzn := MGRSLatZone(lat); Sq := SquareID(Zn, N1, E1, Coding); result := replace( to_char(Zn,'00') || LZn || Sq || to_char(round(E1 - 100000 * trunc(E1/100000)),'00000') || to_char(round(N1 - 100000 * trunc(N1/100000)),'00000') ,' ', ''); return result; end MGRS; Function MGRSLatZone(lat in number) return varchar2 is result varchar2(1); GridZones CONSTANT varchar2(20) := 'CDEFGHJKLMNPQRSTUVW'; begin If (lat >= 72) Then Result := 'X'; Else Result := substr(GridZones, Trunc((lat + 88) / 8), 1); End If; return result; end MGRSLatZone; function SquareID(UTMzn in number, Northing in number, Easting in number, Coding in number) return varchar2 is result varchar2(32); N number; E number; ZoneSet number; Col varchar2(32); Rov varchar2(32); Col1 CONSTANT varchar2(20) := 'ABCDEFGH'; Col2 CONSTANT varchar2(20) := 'JKLMNPQR'; Col3 CONSTANT varchar2(20) := 'STUVWXYZ' ; Row1 CONSTANT varchar2(20) := 'ABCDEFGHJKLMNPQRSTUV'; Row2 CONSTANT varchar2(20) := 'FGHJKLMNPQRSTUVABCDE'; Row3 CONSTANT varchar2(20) := 'LMNPQRSTUVABCDEFGHJK'; Row4 CONSTANT varchar2(20) := 'RSTUVABCDEFGHJKLMNPQ'; begin N := Trunc(Northing / 100000); N := N - 20 * Trunc(N / 20); E := Trunc(Easting / 100000); ZoneSet := UTMzn - 6 * Trunc(UTMzn / 6); If ((ZoneSet = 1) Or (ZoneSet = 4)) Then Col := SubStr(Col1, E, 1); End If; If ((ZoneSet = 2) Or (ZoneSet = 5)) Then Col := SubStr(Col2, E, 1); End If; If ((ZoneSet = 3) Or (ZoneSet = 0)) Then Col := SubStr(Col3, E, 1); End If; ZoneSet := ZoneSet - 2 * Trunc(ZoneSet / 2); If ((Coding = 1) And (ZoneSet = 1)) Then Rov := SubStr(Row1, N + 1, 1); End If; If ((Coding = 1) And (ZoneSet = 0)) Then Rov := SubStr(Row2, N + 1, 1); End If; If ((Coding = 2) And (ZoneSet = 1)) Then Rov := SubStr(Row3, N + 1, 1); End If; If ((Coding = 2) And (ZoneSet = 0)) Then Rov := SubStr(Row4, N + 1, 1); End If; Result:= Col || Rov; return result; end SquareId; function UTM(lat in number, Lon in number, a in number, InverseFlattening in number) return varchar2 is result varchar2(320); ZoneWidth CONSTANT number := 6; CentralScaleFactor CONSTANT number := 0.9996; Zone1CentralMeridian CONSTANT number := -177; Zone0WestMeridian number; Zone0CentralMeridian number; FalseEasting CONSTANT number := 500000; Pi number; SemiMajorAxis number; Flattening number; Eccent2 number; Eccent4 number; Eccent6 number; A0 number; A2 
number; A4 number; A6 number; LatRad number; LonRad number; Sin1Lat number; Sin2Lat number; Sin4Lat number; Sin6Lat number; Rho number; Nu number; Psi number; Psi2 number; Psi3 number; Psi4 number; CosLat number; CosLat2 number; CosLat3 number; CosLat4 number; CosLat5 number; CosLat6 number; CosLat7 number; TanLat number; TanLat2 number; TanLat4 number; TanLat6 number; DifLon number; DifLon2 number; DifLon3 number; DifLon4 number; DifLon5 number; DifLon6 number; DifLon7 number; DifLon8 number; DistOverMeridian number; Zone number; CentralMeridian Integer; East1 number; East2 number; East3 number; East4 number; North1 number; North2 number; North3 number; North4 number; X number; Y number; Hemi varchar2(1); FalseNorthing number; begin Zone0WestMeridian := Zone1CentralMeridian - (1.5 * ZoneWidth); Zone0CentralMeridian := Zone0WestMeridian + ZoneWidth / 2; Pi := 3.141592653589793238462643383279502884197169399375105820974944592307816406; SemiMajorAxis := 1000 * a ; Flattening := 1.0 / InverseFlattening ; Eccent2 := 2.0 * Flattening - (Flattening * Flattening); Eccent4 := Eccent2 * Eccent2 ; Eccent6 := Eccent2 * Eccent4 ; A0 := 1 - (Eccent2 / 4.0) - ((3 * Eccent4) / 64.0) - ((5.0 * Eccent6) / 256.0); A2 := (3.0 / 8.0) * (Eccent2 + (Eccent4 / 4.0) + ((15.0 * Eccent6) / 128.0)) ; A4 := (15 / 256) * (Eccent4 + ((3.0 * Eccent6) / 4.0)); A6 := (35.0 * Eccent6) / 3072.0 ; -- ' Parameters to radians LatRad := lat / 180 * Pi; LonRad := Lon / 180 * Pi ; -- 'Sin of latitude and its multiples Sin1Lat := sIn(LatRad) ; Sin2Lat := sIn(2 * LatRad) ; Sin4Lat := sIn(4 * LatRad); Sin6Lat := sIn(6 * LatRad); -- 'Meridian Distance DistOverMeridian := SemiMajorAxis * (A0 * LatRad - A2 * Sin2Lat + A4 * Sin4Lat - A6 * Sin6Lat); -- 'Radii of Curvature Rho := SemiMajorAxis * (1 - Eccent2) /Power( (1 - (Eccent2 * Sin1Lat * Sin1Lat)) , 1.5); Nu := SemiMajorAxis /Power( (1 - (Eccent2 * Sin1Lat * Sin1Lat)) , 0.5); Psi := Nu / Rho ; Psi2 := Psi * Psi ; Psi3 := Psi * Psi2; Psi4 := Psi * Psi3 ; -- 'Powers of cos latitude CosLat := Cos(LatRad); CosLat2 := CosLat * CosLat ; CosLat3 := CosLat * CosLat2 ; CosLat4 := CosLat * CosLat3 ; CosLat5 := CosLat * CosLat4 ; CosLat6 := CosLat * CosLat5 ; CosLat7 := CosLat * CosLat6 ; -- 'Powers of tan latitude TanLat := Tan(LatRad) ; TanLat2 := TanLat * TanLat ; TanLat4 := TanLat2 * TanLat2 ; TanLat6 := TanLat2 * TanLat4 ; -- 'Zone -- 'Zone := Int((Lon - Zone0WestMeridian) / ZoneWidth) Zone := UTMZone(lat, Lon) ; CentralMeridian := Trunc((Zone * ZoneWidth) + Zone0CentralMeridian ) ; DifLon := (Lon - CentralMeridian) / 180 * Pi ; DifLon2 := DifLon * DifLon ; DifLon3 := DifLon * DifLon2 ; DifLon4 := DifLon * DifLon3 ; DifLon5 := DifLon * DifLon4 ; DifLon6 := DifLon * DifLon5 ; DifLon7 := DifLon * DifLon6 ; DifLon8 := DifLon * DifLon7 ; East1 := DifLon * CosLat ; East2 := DifLon3 * CosLat3 * (Psi - TanLat2) / 6.0; East3 := DifLon5 * CosLat5 * (4.0 * Psi3 * (1.0 - 6.0 * TanLat2) + Psi2 * (1.0 + 8.0 * TanLat2) -Psi * (2.0 * TanLat2) + TanLat4) / 120.0; East4 := DifLon7 * CosLat7 * (61.0 - 479.0 * TanLat2 + 179.0 * TanLat4 - TanLat6) / 5040.0 ; X := CentralScaleFactor * Nu * (East1 + East2 + East3 + East4) + FalseEasting ; If (lat >= 0) Then Hemi := 'N'; FalseNorthing := 0; Else Hemi := 'S'; FalseNorthing := 10000000; end if; North1 := Sin1Lat * DifLon2 * CosLat / 2.0 ; North2 := Sin1Lat * DifLon4 * CosLat3 * (4.0 * Psi2 + Psi - TanLat2) / 24.0 ; North3 := Sin1Lat * DifLon6 * CosLat5 * (8.0 * Psi4 * (11.0 - 24.0 * TanLat2) - 28.0 * Psi3 * (1.0 - 6.0 * TanLat2) + Psi2 * (1.0 - 32.0 * TanLat2) - 
Psi * (2.0 * TanLat2) + TanLat4) / 720; North4 := Sin1Lat * DifLon8 * CosLat7 * (1385 - 3111 * TanLat2 + 543 * TanLat4 - TanLat6) / 40320.0 ; Y := CentralScaleFactor * (DistOverMeridian + Nu * (North1 + North2 + North3 + North4)) + FalseNorthing; Result := Zone || Hemi || ' ' || to_char(round(X, 3),'0000000.000') || to_char(round(Y, 3),'0000000.000'); return result; End UTM; Function UTMX(UTMs in varchar2) return number is result number; begin Result := to_number(substr(UTMs, 6, 11)); return result; End UTMX; Function UTMY(UTMs in varchar2) return number is result number; begin Result := to_number(substr(UTMs, 18, 11)); return result; End UTMY; function UTMZone(lat in number, Lon in number) return number is result number; UTMZone number; e number; d number; ZoneWidth CONSTANT number := 6; Zone1CentralMeridian CONSTANT number := -177; Zone0WestMeridian number; begin Zone0WestMeridian := Zone1CentralMeridian - (1.5 * ZoneWidth); d:=ZoneWidth; UTMZone := Trunc((lon - Zone0WestMeridian) / d); --Special Cases for Norway & Svalbard CASE WHEN (lat > 55) AND (UTMZone = 31) AND (lat < 64) AND (lon > 2) THEN UTMZone := 32; WHEN (lat > 71) AND (UTMZone = 32) AND (lon < 9) THEN UTMZone := 31; WHEN (lat > 71) AND (UTMZone = 32) AND (lon > 8) THEN UTMZone := 33; WHEN (lat > 71) AND (UTMZone = 34) AND (lon < 21) THEN UTMZone := 33; WHEN (lat > 71) AND (UTMZone = 34) AND (lon > 20) THEN UTMZone := 35; WHEN (lat > 71) AND (UTMZone = 36) AND (lon < 33) THEN UTMZone := 35; WHEN (lat > 71) AND (UTMZone = 36) AND (lon > 32) THEN UTMZone := 37; ELSE UTMZone := UTMZone; END CASE; Result := UTMZone; return result; end UTMZone; end gedaco; The function is called as MGRS(:latitude, :longitude, 6378.137, 298.2572236, 1, 5) for WGS84 with 5-digit precision. List of datums (Datum, Radius, InverseFlattening): 'WGS84', 6378.137, 298.2572236 'NAD27', 6378.2064, 294.9786982 'NAD83', 6378.137, 298.2572221 'WGS66', 6378.145, 298.25 'GRS67', 6378.16, 298.2472 'IAU68', 6378.16, 298.2472 'WGS72', 6378.135, 298.26 'Clarke66', 6378.2064, 294.9786982 'GRS80', 6378.137, 298.2572221 'Krasovsky', 6378.2064, 298.3 'Bessel', 6377.397155, 299.1528128 Just to be completely specific, I have of course done some random checks with Earth Point, but I am asking for someone to check my functions with a substantial amount of data. A proper reply to my question can simply be a test of 10,000 records with no errors found; if errors are found, I am of course interested in the locations that are not calculated correctly, if that is possible.
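For anyone willing to run a bulk check, here is a minimal sketch of a validation harness. It assumes a hypothetical reference table `ref_points(lat, lon, expected_mgrs)` loaded from a trusted source (for example exported Earth Point results); only the `gedaco` package above is taken as given.

```sql
-- Hypothetical harness: compare the package output against a reference
-- table of known-good MGRS strings and report only the mismatches.
select r.lat,
       r.lon,
       r.expected_mgrs,
       gedaco.mgrs(r.lat, r.lon, 6378.137, 298.2572236, 1, 5) as computed_mgrs
  from ref_points r
 where gedaco.mgrs(r.lat, r.lon, 6378.137, 298.2572236, 1, 5)
       <> r.expected_mgrs;
```

Zero rows back would be exactly the "tested with 10,000 records without errors" answer asked for, and any rows returned pinpoint the locations that disagree.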
[ 0.002312229946255684, 0.009221464395523071, -0.0033367229625582695, 0.005451631732285023, 0.024507349357008934, 0.01304460596293211, 0.010030614212155342, 0.003931667655706406, -0.011432324536144733, -0.035363152623176575, -0.006163693033158779, 0.007318052463233471, 0.00910122599452734, 0...
[ -0.2063288390636444, 0.08008120954036713, 0.5493690967559814, -0.296029657125473, -0.15866544842720032, 0.43745899200439453, 0.16896605491638184, -0.5280286073684692, 0.06594273447990417, -0.4657639265060425, -0.2894510328769684, 0.3699592053890228, -0.22798237204551697, 0.0106790401041507...
Is it possible to use a style, or something equivalent, to specify the column expressions for `plot table`? The objective is to be able to use the same expressions for a large number of different plots with a minimum of repetitive typing. Here is an example (which does not compile) that suggests what I would like to be able to do: \documentclass{standalone} \usepackage{pgfplots} \pgfplotsset{/pgfplots/columns/.style={x=dof,y expr={\thisrow{L2}+\thisrow{Lmax}}}} \begin{document} \begin{tikzpicture} \begin{loglogaxis} \addplot table[columns] { % sample data from PGFplots manual dof L2 Lmax maxlevel 5 8.31160034e-02 1.80007647e-01 2 17 2.54685628e-02 3.75580565e-02 3 49 7.40715288e-03 1.49212716e-02 4 129 2.10192154e-03 4.23330523e-03 5 }; \end{loglogaxis} \end{tikzpicture} \end{document}
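One pattern that is reported to work is defining the style inside the `/pgfplots/table/` key directory, which is where the options of `\addplot table[...]` are evaluated. A sketch under that assumption (the style name `columns` is kept from the example above):

```latex
\documentclass{standalone}
\usepackage{pgfplots}
% Define the column selection once, in the key directory that
% \addplot table[...] uses for its options.
\pgfplotsset{
  /pgfplots/table/columns/.style={
    x=dof,
    y expr={\thisrow{L2}+\thisrow{Lmax}},
  },
}
\begin{document}
\begin{tikzpicture}
\begin{loglogaxis}
\addplot table[columns] {
dof L2 Lmax maxlevel
5 8.31160034e-02 1.80007647e-01 2
17 2.54685628e-02 3.75580565e-02 3
49 7.40715288e-03 1.49212716e-02 4
129 2.10192154e-03 4.23330523e-03 5
};
\end{loglogaxis}
\end{tikzpicture}
\end{document}
```

If that key path turns out to be wrong for a given pgfplots version, the manual's notes on the `/pgfplots/table` key directory are the place to check.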
[ 0.0024144789204001427, 0.016859911382198334, -0.008755841292440891, 0.019751403480768204, -0.002360126469284296, 0.01137950923293829, 0.006101816892623901, 0.011024349369108677, -0.01307293027639389, -0.01645875722169876, -0.000057450029999017715, 0.0043786270543932915, 0.010947690345346928,...
[ 0.4195646047592163, 0.017717991024255753, 0.4537319540977478, -0.043888576328754425, 0.09355370700359344, -0.05206657201051712, 0.12435685843229294, -0.23607419431209564, -0.18669021129608154, -0.40020665526390076, 0.11315205693244934, 0.6063638925552368, -0.12879516184329987, -0.294033735...
I am editing two different TeX files simultaneously, the manuscript and the response to the referees, and generating PDF files through pdfLaTeX. How can I switch from one PDF to the other in the internal PDF viewer without recompiling the other file?
[ 0.018935905769467354, 0.04429751634597778, 0.004569563549011946, 0.028610127046704292, -0.01046624593436718, -0.021198950707912445, 0.01488217618316412, 0.00852470938116312, -0.029789499938488007, -0.03039993904531002, -0.0172779131680727, 0.014818631112575531, 0.017952607944607735, 0.0168...
[ 0.2070300132036209, 0.10457742214202881, 0.4367973208427429, 0.0314093716442585, -0.15454690158367157, -0.046271804720163345, 0.03446485474705696, -0.35453489422798157, 0.1696477234363556, -0.7424562573432922, 0.09566868096590042, 0.7744981646537781, -0.1691569834947586, -0.116706587374210...
**The Data I have:** * `Bridges` (line layer) * `Municipal_roads` (line layer and does not include information in `Bridges`) * `State_roads` (line layer that doesn't include `Municipal_roads` or `Bridges`) **Question:** Since these layers are continuously getting updated/upgraded, especially in construction season, what is the _'best practice'_ way to connect these layers to produce one layer for routing purposes? **Edit:** I would like, if possible, to avoid merging the layers.
[ 0.004865599796175957, 0.009409250691533089, 0.001998539548367262, 0.012353427708148956, -0.0011121572460979223, -0.0056446450762450695, 0.007045356091111898, -0.004044770263135433, -0.016609404236078262, -0.01429890189319849, 0.0035420753993093967, 0.020583800971508026, -0.003864783328026533...
[ 0.44836628437042236, 0.3914170563220978, 0.607792854309082, 0.21116887032985687, -0.018091771751642227, 0.08138324320316315, -0.032514382153749466, -0.04163474962115288, -0.1404321789741516, -1.1720921993255615, 0.0981973260641098, 0.2094220519065857, 0.17870567739009857, 0.062519550323486...
The Print button is disabled in the free version of OpenGeo Suite 4.02. In the 3.x versions of OGS, Community Edition, the Print button was always enabled. Does anyone know why?
[ -0.042875926941633224, -0.001977338222786784, -0.04045898839831352, 0.035628125071525574, -0.046474162489175797, 0.0306712556630373, 0.01836857572197914, 0.043947815895080566, -0.032981026917696, -0.03180800750851631, -0.02458583377301693, 0.0136582525447011, -0.039299577474594116, 0.02146...
[ 0.2781446874141693, 0.03135905787348747, 0.4290505647659302, 0.09631587564945221, 0.17057327926158905, -0.3746400773525238, 0.561093270778656, 0.3121686279773712, -0.3017333149909973, -0.3589043915271759, -0.40264347195625305, 0.5053220391273499, -0.6087713837623596, -0.04853803664445877, ...
I am creating an Esri Flex API application (not the viewer), and have been creating a tool to use the World Places Locator. For now, I am just zooming to the top scoring candidate returned. If you search for Brighton, it returns 20 candidates, sorted by score. I read in an older blog post: > You can also filter the results by extent using client logic. The candidates > field to look for in this case are North_Lat, South_Lat, East_Lon, and > West_Lon. Source. I am thinking it would be a good idea to have a tickbox on my search that limits the results to the current extent of the map in the Flex app (or maybe buffer a little out from the view). I understand the logic on how to go about this, but was wondering if anyone has already done this (does not have to be Flex) so that I can save some time coding it. If not, I will have a go at this next weekend and post my answer.
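Not Flex, but since any language is allowed, here is a sketch of the client-side filtering logic in Python. It assumes each candidate carries the `North_Lat`, `South_Lat`, `East_Lon` and `West_Lon` fields mentioned in the blog post, and that the map extent arrives as a simple (west, south, east, north) tuple in the same coordinates; date-line wrap-around is not handled.

```python
def filter_by_extent(candidates, extent, buffer_frac=0.1):
    """Keep candidates whose bounding box intersects the (buffered) extent.

    candidates:  iterable of dicts with North_Lat/South_Lat/East_Lon/West_Lon
    extent:      (west, south, east, north) of the current map view
    buffer_frac: fraction of the extent's size to pad on each side
    """
    west, south, east, north = extent
    pad_x = (east - west) * buffer_frac
    pad_y = (north - south) * buffer_frac
    west, south = west - pad_x, south - pad_y
    east, north = east + pad_x, north + pad_y

    def intersects(c):
        # standard axis-aligned bounding-box overlap test
        return not (c["East_Lon"] < west or c["West_Lon"] > east or
                    c["North_Lat"] < south or c["South_Lat"] > north)

    return [c for c in candidates if intersects(c)]
```

The tickbox then simply decides whether the candidate list is passed through this filter before picking the top-scoring hit.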
[ 0.010161030106246471, 0.011359610594809055, -0.010736925527453423, -0.00133901194203645, -0.026080988347530365, 0.0011982121504843235, 0.006504138931632042, 0.029271095991134644, -0.01584778167307377, 0.00406840443611145, 0.0021694740280508995, 0.01814526692032814, -0.01868297904729843, 0....
[ 0.05201883986592293, -0.20659448206424713, 0.8955021500587463, 0.15953998267650604, 0.12194717675447464, 0.3350659906864166, -0.22272610664367676, -0.029245834797620773, -0.5115212798118591, -0.8268787264823914, -0.15584613382816315, -0.14097535610198975, 0.3492967486381531, 0.027493914589...
I am reading A. Zee, _QFT in a nutshell,_ and in appendix 1 he has: > _Meanwhile the principal value integral is defined by:_ $$\int dx\,{\cal > P}{1\over x}f(x)~=~ \lim_{\epsilon \rightarrow 0} \int dx\, {x\over > x^2+\epsilon^2}f(x)$$ Please can someone explain to me why this is the case? As I understood it the principal value integral is rather defined as $$\int_a^b dx\,{\cal P}{1\over x}f(x)~=~ \lim_{\epsilon \rightarrow 0^+} \int_a^{-\epsilon} dx\, {1\over x}f(x)+\lim_{\epsilon \rightarrow 0^+} \int_{\epsilon}^b dx\, {1\over x}f(x),$$ where $a<0<b$. But as far as I can see these two definitions are not equivalent.
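For what it's worth, a sketch of the standard argument for why the two prescriptions agree when $f$ is smooth near the origin (and $a<0<b$): away from $x=0$ the kernel $x/(x^2+\epsilon^2)$ tends to $1/x$ uniformly, so only a symmetric window around $0$ matters, and there the oddness of the kernel does the work.

```latex
% On a symmetric window (-d, d), the odd kernel kills the constant
% part of f, so f(x) may be replaced by f(x) - f(0):
\int_{-d}^{d} \frac{x}{x^{2}+\epsilon^{2}}\,f(x)\,dx
  \;=\; \int_{-d}^{d} \frac{x}{x^{2}+\epsilon^{2}}\,
        \bigl(f(x)-f(0)\bigr)\,dx .
% Since f(x) - f(0) = O(x), the integrand is bounded, the
% epsilon -> 0 limit exists, and it equals the same expression with
% 1/x in place of x/(x^2 + eps^2): exactly the symmetric-cutoff
% (principal value) prescription, which likewise kills f(0).
```

So the two definitions coincide on functions regular at $0$; they can differ on functions for which the principal value itself fails to exist.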
[ -0.002558111445978284, 0.004004036542028189, -0.008369697257876396, 0.00407925620675087, -0.009233526885509491, -0.017085453495383263, 0.006098410114645958, 0.004474358633160591, -0.010634388774633408, -0.0014394023455679417, -0.0001268376363441348, 0.00025655346689745784, -0.020659733563661...
[ -0.29059240221977234, -0.10788162797689438, 0.47892752289772034, -0.2972218692302704, 0.2583344876766205, -0.36631274223327637, -0.3092231750488281, -0.14818692207336426, 0.19914209842681885, -0.06144723296165466, 0.0277568269520998, 0.6900482177734375, -0.2706681191921234, 0.5874063372612...
My document has been compiling very well (thanks in no small part to Tex SX!), but I seem to have hit a snag. I added a more complicated dedication to the beginning of my thesis (using `\dedication \input{foo.tex}`). In the dedication I have some Chinese that I wanted to display vertically. I got that to work using the `CJKvert` package. However, now it seems that all of my Chinese text is displaying rotated! I think this MWE shows the problem: \documentclass{report} \usepackage{CJKutf8, CJKspace} \usepackage[usebaselinestretch]{CJKvert} \usepackage{rotating} \begin{document} \begin{center} \vspace*{-2cm} \parbox[c][5em][c]{15cm}{% \small In order to properly understand the big picture, we should fear becoming mentally clouded and obsessed with one small section of truth. \\% \\% Chapter 21 ``Dispelling Obsession'' \\% The \emph{Xunzi} \\ \\ \\ \\ \\ \\ } \vspace*{3cm} \begin{turn}{-90} \parbox[c][3cm][c]{24em}{% \begin{CJK}{UTF8}{gbsn}\CJKvert\CJKtilde\fontsize{12pt}{14pt}荀子\\% 解蔽篇第二十一\end{CJK} \\% \begin{CJK}{UTF8}{bkai}\CJKvert\CJKtilde\fontsize{18pt}{20pt}凡人之患,蔽于一曲,而闇于大理。\end{CJK} } \end{turn} \parbox[c][5em][c]{15cm}{To my Cousin's Brother's\\% Flatmate. } \end{center} \chapter{Some Chapter Title} The modern name of the People's Republic of China in Chinese remains \emph{Zhongguo} {[}\begin{CJK}{UTF8}{gbsn}中国\end{CJK}{]} which translates most directly as ``central state'' or in literary usage ``middle kingdom.'' \end{document} Do I need to `\renewcommand` or some such after the dedication? Ps. Please ignore the gross use of `\\\` to get the spacing right. I am hardly concerned about that ATM. EDIT 1: I know this can perhaps be done more easily with xetex/xelatex and xeCJK, but that would break all of the rest of my Chinese, causing a great deal of recoding.
[ -0.008446616120636463, 0.00834538135677576, -0.012786139734089375, 0.01820659637451172, -0.0031389915384352207, 0.006228264421224594, 0.006997447460889816, 0.022077782079577446, -0.010489847511053085, -0.014615117572247982, -0.009374233894050121, 0.001707820687443018, -0.004720060154795647, ...
[ 0.45257848501205444, 0.7259963750839233, 0.3780311942100525, -0.37139225006103516, -0.18794786930084229, -0.38548415899276733, 0.08515478670597076, -0.04090236872434616, -0.1503594070672989, -0.34244680404663086, 0.11348050832748413, 0.24169839918613434, 0.09425926208496094, -0.00973225943...
Sometimes a page on our site will show up in the Google SERPs that was previously not indexed (yet) due to a news story or blog posting on another site that points to it. From what I understand, the Google FreshBot has crawled it. The FreshBot's job is to find information that is current or newsworthy to the time and get it indexed and into the SERPs quickly. However, the pages that benefit from that drop out of the SERPs after two or three days. As a matter of fact, if I do an advanced query and request that page specifically from our site (using "site:"), it appears to not even be indexed any more. **What's happening to our Google FreshBot crawled pages?** **Do pages that are indexed by the FreshBot still need to be re-indexed by the DeepBot?** For what it's worth, we're ordinarily getting indexed daily at a healthy rate, but we don't yet have all of our pages indexed (large site). It has been encouraging to see the effect of the FreshBot indexing, but watching it fade away is hard to understand. On our pages indexed as part of our normal crawls, our SERPs are strong -- without the added relevancy of news or other off-page links.
[ -0.009640110656619072, 0.019252214580774307, -0.005676139146089554, 0.012661605142056942, -0.007770657539367676, -0.0009733437327668071, 0.006877736654132605, 0.008246829733252525, -0.012771164998412132, 0.012031088583171368, -0.011191862635314465, 0.019766002893447876, 0.010132066905498505,...
[ 0.6057878136634827, 0.15672652423381805, 0.6683409214019775, 0.3588225841522217, 0.2993539273738861, -0.5561431050300598, 0.06163420528173447, 0.36090752482414246, -0.2415541261434555, -0.6012426614761353, -0.38988813757896423, 0.08699671179056168, 0.028597531840205193, 0.48079460859298706...
How do I clear the page style of the `\tableofcontents` page in the book or report classes? \documentclass[a4paper]{book} \usepackage{lipsum} \begin{document} % I need to clear this page {\thispagestyle{empty} % but it does not work \tableofcontents} \chapter{Book tableofcontents without page number} \lipsum \end{document}
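The usual gotcha: in book and report, `\tableofcontents` starts a `\chapter*`, and `\chapter` issues its own `\thispagestyle{plain}`, so a `\thispagestyle{empty}` placed before it is overridden. Putting it after the command is the minimal fix; a sketch for a one-page ToC:

```latex
\documentclass[a4paper]{book}
\usepackage{lipsum}
\begin{document}
\tableofcontents
\thispagestyle{empty} % after \tableofcontents, so it wins over `plain'
\chapter{Book tableofcontents without page number}
\lipsum
\end{document}
```

For a ToC spanning several pages, one would combine `\pagestyle{empty}` before `\tableofcontents` (for the later pages) with the `\thispagestyle{empty}` after it (for the first page).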
[ 0.02374367043375969, 0.008598772808909416, -0.0012859023408964276, 0.02620883099734783, -0.03198856860399246, 0.021580969914793968, 0.011026672087609768, 0.011035722680389881, -0.01947147026658058, 0.006688825320452452, -0.01955558732151985, 0.0019015397410839796, -0.008151472546160221, 0....
[ -0.17099422216415405, 0.291311651468277, 0.8321844935417175, 0.05259234830737114, 0.40627115964889526, -0.24057377874851227, 0.5022201538085938, -0.31267327070236206, -0.16389620304107666, -0.47776809334754944, -0.3245168626308441, 0.46645158529281616, -0.024891037493944168, -0.08540313690...
Please provide R code which allows one to conduct a between-subjects ANOVA with -3, -1, 1, 3 contrasts. I understand there is a debate regarding the appropriate Sum of Squares (SS) type for such an analysis; however, the default SS type used in SAS and SPSS (Type III) is considered the standard in my area, so I would like the results of this analysis to match perfectly what is generated by those statistics programs. To be accepted, an answer must directly call aov(), but other answers may be voted up (especially if they are easy to understand/use). sample.data <- data.frame(IV=rep(1:4,each=20),DV=rep(c(-3,-3,1,3),each=20)+rnorm(80)) **Edit:** Please note, the contrast I am requesting is not a simple linear or polynomial contrast but a contrast derived from a theoretical prediction, i.e. the type of contrasts discussed by Rosenthal and Rosnow.
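A sketch of one way to do this with a direct `aov()` call: attach the theoretical contrast, plus an orthogonal filler basis, to the factor, then partition the 3 df of `IV` with `summary(..., split=...)`. In this balanced one-way design all SS types coincide, so the output should agree with the SPSS/SAS Type III numbers; the filler contrasts below are one arbitrary orthogonal completion.

```r
sample.data <- data.frame(IV = rep(1:4, each = 20),
                          DV = rep(c(-3, -3, 1, 3), each = 20) + rnorm(80))
sample.data$IV <- factor(sample.data$IV)

# Column 1 is the theoretical -3,-1,1,3 contrast; columns 2-3 are
# orthogonal fillers so the 3 df of IV are fully partitioned.
contrasts(sample.data$IV) <- cbind(c(-3, -1, 1, 3),
                                   c( 1, -1, -1, 1),
                                   c(-1,  3, -3, 1))

fit <- aov(DV ~ IV, data = sample.data)
summary(fit, split = list(IV = list(theory = 1, residual = 2:3)))
```

The `theory` row is the 1-df test of the -3,-1,1,3 prediction; the `residual` row tests whatever the contrast leaves unexplained.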
[ 0.014572320505976677, 0.0071579646319150925, -0.0050806584767997265, 0.022037047892808914, 0.011173740029335022, -0.0015098147559911013, 0.007705324329435825, -0.007912658154964447, -0.012648055329918861, 0.0012275171466171741, -0.006282341666519642, -0.004923341330140829, -0.006749799009412...
[ -0.08003299683332443, -0.14205504953861237, 0.3495709002017975, -0.07450560480356216, -0.35738375782966614, 0.23926995694637299, 0.42362865805625916, -0.6997550129890442, -0.06150570511817932, -0.4544515311717987, -0.4407947361469269, 0.1981482058763504, -0.16217437386512756, -0.1721013933...
I ran into this when I was working with an example from the Help Pages of V9. `Cells` is a new function added in V9. The following appears to work the first time it is evaluated in a notebook with `nb` assigned to some appropriate value, say, `EvaluationNotebook[]`. Scan[(CurrentValue[#, StyleNames] = "Title") &, Cells[nb, CellStyle -> "Section"]] However, if I try again to change the cells affected by the first evaluation to another style or back to the original style, nothing changes. Neither this Scan[(CurrentValue[#, StyleNames] = "Text") &, Cells[nb, CellStyle -> "Title"]] nor this Scan[(CurrentValue[#, StyleNames] = "Section") &, Cells[nb, CellStyle -> "Title"]] has any effect. For other `CurrentValue` targets such as `FontSize`, it's easy to change the value repeatedly. Scan[(CurrentValue[#, FontSize] = 100) &, Cells[nb, CellStyle -> "Section"]] Scan[(CurrentValue[#, FontSize] = 30) &, Cells[nb, CellStyle -> "Section"]]
[ -0.0018939949804916978, 0.00468169990926981, -0.016662709414958954, 0.008535642176866531, -0.02698490396142006, -0.0016848689410835505, 0.006990430876612663, 0.01904468610882759, -0.012963911518454552, 0.007113491650670767, -0.0020865220576524734, 0.00542003707960248, -0.00804651714861393, ...
[ -0.016172852367162704, -0.03537542000412941, 0.3997938334941864, -0.19142897427082062, 0.05127888545393944, -0.11223658174276352, 0.09662125259637833, -0.26286831498146057, -0.2756149470806122, -0.6265162825584412, 0.14647486805915833, 0.3868781328201294, -0.27445676922798157, -0.073345251...
So I just bought the book 'Requiem for a Dream' and I just "found out" that there are no quotation marks around the conversations in the book. So it's up to me to tell when a conversation starts and who is speaking! Does anyone know what this is: a traditional novel style or something? How can I get used to reading this? ![Snapshot of a page](http://i.stack.imgur.com/YAsW1.jpg)
[ 0.007850863970816135, 0.004421141929924488, 0.0030909304041415453, 0.022794639691710472, -0.007948344573378563, 0.00040346835157833993, 0.006227605044841766, -0.010113613680005074, -0.01922708749771118, -0.012904318049550056, -0.006686028558760881, 0.005683694034814835, 0.0039587002247571945...
[ 0.7367540001869202, 0.03658180683851242, -0.10575378686189651, 0.16255038976669312, -0.27753061056137085, -0.3325752317905426, 0.7655444145202637, 0.3272440433502197, -0.3520524203777313, -0.2110898345708847, 0.09662400186061859, 0.33465564250946045, 0.2265203446149826, 0.5069660544395447,...
I have to apologise for my lack of experience, but hopefully someone can clarify things for me. I am interested in looking at change in psychosocial functioning over time and comparing it between those with borderline personality disorder, another personality disorder and no personality disorder. My main focus is psychosocial functioning (PF), but I also want to show that full recovery (remission of symptoms plus improvement in functioning) is harder to attain than remission of symptoms alone. So I was going to do a repeated measures ANOVA. My dataset is composed of the following variables: * PF is measured at 4 time points (PF1, PF2, PF3 and PF4); * Symptoms are measured at 4 time points (S1, S2, S3, S4); * I can then calculate another outcome variable at time 4 that is recovery (yes or no); * And another outcome variable for remission (yes or no). There are three groups: BPD, OPD and NPD. To test my **first hypothesis** I was planning on doing a repeated measures ANOVA to see if change in PF was less for the BPD group than the other groups. This seems simple enough. However, it is my **second aim** where I get lost. I get stuck on how to test whether rates of recovery are less than rates of remission and compare the groups on this. I hypothesize that the BPD group will find it harder to achieve recovery than remission (i.e. the rates will be significantly different), while the NPD group will have similar recovery and remission rates, and the OPD group will be somewhere in between. I have no idea how to achieve this without using a mixed models approach. Unfortunately I cannot go down that route for reasons that are out of my control (and in my supervisors' control), but nonetheless I cannot. I have to keep the analysis as simple as I can.
[ 0.005932167638093233, 0.01606075093150139, -0.01729702576994896, 0.014326122589409351, -0.014373617246747017, -0.014778299257159233, 0.00832537841051817, 0.007511068135499954, -0.011967452242970467, -0.008959189057350159, -0.01322835311293602, 0.007659796625375748, 0.0006136114243417978, 0...
[ 0.14526699483394623, 0.17962783575057983, 0.6871185302734375, -0.2732551693916321, -0.42506182193756104, 0.6903223991394043, 0.7108033895492554, -0.675701379776001, -0.4011315107345581, -0.31293776631355286, 0.2236330509185791, 0.30946287512779236, -0.09645476192235947, 0.10778915882110596...
Is there some way in R to output not only the main effects and the interactions of the two factors using an aov()-like function, but also the variance of the subjects within the non-repeated factor? For example, I have an experiment dealing with the factor Dose (the between-subject variable, non-repeated) and Time (the within-subject variable, repeated) on 40 animals' AUC in a balanced design (10 animals in each of 4 doses measuring AUC at 2 times, visit 1 and visit 2, for a total of 80 observations). The aov() function outputs the Dose, Visit, and Dose:Visit interaction just fine, but the residual output contains pertinent information I'd like to have. This includes the Animal w/i Dose Sums of Squares as well as the within-animal error, both of which have a df of 36. How would I obtain these statistics in R? Any help is appreciated, thanks.
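A sketch of the usual way to surface that stratum with `aov()` is an explicit `Error()` term; the names below (`auc.data`, `AUC`, `Dose`, `Visit`, `Animal`) are hypothetical stand-ins for the actual data frame.

```r
# One row per animal per visit; Animal must be a factor with one level
# per animal (unique across doses, or nested via interaction(Dose, Animal)).
fit <- aov(AUC ~ Dose * Visit + Error(Animal/Visit), data = auc.data)
summary(fit)
```

The `Error: Animal` stratum then reports Dose against the animal-within-dose SS (36 df in this design), and the `Error: Animal:Visit` stratum reports Visit and Dose:Visit against the within-animal error (also 36 df).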
[ 0.0012743892148137093, 0.019396044313907623, -0.011645281687378883, 0.023770039901137352, -0.00607127882540226, -0.010470440611243248, 0.008871415629982948, -0.00579240033403039, -0.01593245007097721, -0.0008519222028553486, -0.011613158509135246, 0.012466585263609886, 0.0022682328708469868,...
[ 0.21483860909938812, -0.08422534167766571, 0.285835325717926, -0.1688457429409027, -0.18658484518527985, 0.595482349395752, 0.44137507677078247, -0.8600970506668091, -0.17335647344589233, -0.12872833013534546, 0.20646357536315918, 0.6325286626815796, -0.0840684249997139, 0.0015104878693819...
I'm a recent college graduate and have been hired to work for a business software giant. The job by itself is great, with amazing perks and decent pay. Also, there is the joy of working in core product development, building something which will be used by millions of people. However, a huge problem is that we do most of our development in a proprietary language which is not used by any other product development company. The only other companies using this language are our partners and customers who would like to implement/customize the software in their business or their clients' businesses. I'm afraid that working here for a length of time could make me irrelevant as a developer outside the company. While ideally one could say that software development is software development and the language does not matter, the fact is that most companies hire based on past experience, and in this context, my experience would be 0. My apprehensions are supported by the fact that the attrition rate in this company is much lower than at others, so you can find a lot of people who have not changed their job in the past 10-15 years. This is great from the company's perspective, and they take a lot of pride in it. However, I'm sure that many people do not leave because they are effectively locked in, since their skills are irrelevant outside of the company. I really love programming and want to remain a programmer in product development rather than become a manager. What should I do to be hireable by a new company if and when I choose to leave my current job? Thanks in advance.
[ 0.002778388559818268, 0.0020154330413788557, -0.009171594865620136, -0.0021115171257406473, -0.006563536822795868, 0.008214347995817661, 0.005192357115447521, -0.006192408036440611, -0.010392743162810802, -0.01717335730791092, -0.013604242354631424, 0.012812044471502304, 0.012420779094099998...
[ 0.8778386116027832, 0.5918692350387573, -0.1503102034330368, 0.02143782004714012, 0.4429321885108948, -0.10026978701353073, 0.2627786099910736, 0.6645361185073853, 0.07619787752628326, -0.44926801323890686, 0.22107329964637756, 0.49469688534736633, 0.2847461998462677, 0.2820294201374054, ...
In my plugin, I want to test whether jQuery or Prototype (or both) are going to be loaded by another plugin; that is, whether `wp_enqueue_script('jquery')` or `wp_enqueue_script('prototype')` has already been called. I have code appropriate to my plugin in the files `plugin.prototype.js` and `plugin.jquery.js`, and if Prototype is queued, my plugin will use `plugin.prototype.js`. This way I avoid loading more than necessary into the site. If neither has loaded, I will queue up whichever is smaller. How can I test what has been queued up? How do I make sure my code runs last?
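A sketch, assuming a late-priority hook is acceptable: `wp_script_is()` reports a handle's status, and registering on `wp_enqueue_scripts` with a large priority number runs the check after most other plugins have enqueued theirs.

```php
// Run late so other plugins have already enqueued their scripts.
add_action( 'wp_enqueue_scripts', 'myplugin_pick_library', 100 );

function myplugin_pick_library() {
    // 'enqueued' also covers handles pulled in as dependencies.
    if ( wp_script_is( 'prototype', 'enqueued' ) ) {
        wp_enqueue_script( 'myplugin',
            plugins_url( 'plugin.prototype.js', __FILE__ ),
            array( 'prototype' ) );
    } else {
        // Default to the jQuery flavour (core bundles jQuery anyway).
        wp_enqueue_script( 'myplugin',
            plugins_url( 'plugin.jquery.js', __FILE__ ),
            array( 'jquery' ) );
    }
}
```

"Runs last" is only as reliable as the priority number: another plugin could always hook later still, so 100 is merely a conventional choice.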
[ 0.01137879490852356, 0.011992880143225193, 0.017065465450286865, 0.013997182250022888, 0.005588035099208355, 0.019627194851636887, 0.009073448367416859, 0.006945302709937096, -0.013980992138385773, 0.01617501489818096, -0.006568005308508873, 0.01515215914696455, 0.0037573701702058315, 0.01...
[ 0.13146953284740448, -0.03930753096938133, 0.1525973230600357, -0.05817318335175514, -0.06659278273582458, -0.051149867475032806, -0.06639480590820312, 0.07410721480846405, -0.08939680457115173, -0.8104010224342346, -0.05046549811959267, 0.5372048616409302, -0.4819070100784302, -0.05755474...
Is there a way to exclude certain categories from a widgetised sidebar? I've got categories that are associated with custom post types and ones associated with my blog - I don't want to display the CPT categories when viewing my blog. Any ideas? All I can think of is that I have to hard code the sidebar instead and ditch the widgetised version.
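Before hard-coding the sidebar, it may be worth trying the `widget_categories_args` filter, which core applies to the arguments the Categories widget passes to `wp_list_categories()`. A sketch with placeholder category IDs:

```php
// Hide the CPT-related categories (IDs 7 and 12 are placeholders)
// from the Categories widget on blog views.
add_filter( 'widget_categories_args', function ( $args ) {
    if ( is_home() || is_category() ) {
        $args['exclude'] = '7,12';
    }
    return $args;
} );
```

The widgetised sidebar stays intact; only the widget's query changes.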
[ 0.03148679807782173, -0.003797580488026142, -0.00010551272862358019, 0.052113939076662064, -0.013526881113648415, 0.022414639592170715, 0.009721299633383751, 0.03239445760846138, -0.034026339650154114, 0.017572037875652313, -0.021167699247598648, 0.0024650043342262506, 0.008331513963639736, ...
[ 0.50449138879776, 0.4242837429046631, 0.10341285914182663, 0.11462947726249695, -0.09572809189558029, -0.04143830016255379, 0.3305352032184601, 0.327222615480423, -0.2510780394077301, -0.21487271785736084, 0.31297415494918823, 0.30693042278289795, -0.26545003056526184, 0.3419131338596344, ...
For simplicity's sake, suppose that Ember Spirit has a Quelling Blade which deals 50% bonus damage to non-hero units, that Ember Spirit has 50% cleave from whatever source, and that hypothetically Sleight of Fist does a static damage value of 100 to whatever unit it hits and this 100 damage is never reduced. If I hit a creep unit that stands beside a hero, how much **cleave damage** will the hero take? A: 50 damage, meaning no Quelling Blade bonus damage is added to cleave, i.e. the creep is hit for 150 damage, but only 100 damage is counted for cleaving purposes. B: 75 damage, as the creep is hit for 150 damage due to the hypothetical 50% bonus damage of Quelling Blade. C: None of the above.
[ -0.008927395567297935, 0.018477994948625565, -0.0034896708093583584, 0.006699917837977409, 0.0004916645120829344, -0.01972290314733982, 0.013984748162329197, 0.014636745676398277, -0.01424286887049675, 0.010203801095485687, -0.0034776320680975914, 0.02252357453107834, -0.011752678081393242, ...
[ -0.18824639916419983, -0.3094731867313385, 0.4238587021827698, 0.3033328950405121, -0.7456160187721252, 0.385389119386673, 0.49128463864326477, -0.5541717410087585, -0.15974736213684082, -0.29689934849739075, 0.07628876715898514, 0.20735155045986176, 0.10386496782302856, 0.1161769255995750...
When users upload very large photos and memory is tight, it seems that WordPress runs out of memory: it fails to resize the uploaded photos and does not create the necessary metadata (the `_wp_attachment_metadata` entry in `wp_postmeta` is not created). The worst part is that the user is never notified; at most I get an "HTTP error" message. Is it possible to somehow add an error message that will warn the user and remove the inconsistent file/database entries? How come this is not standard WP behavior?
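Not standard behaviour as far as I know, but here is a sketch of a guard one could bolt on: the `wp_generate_attachment_metadata` filter runs after the resize attempt, so an empty result there is a reasonable proxy for "resizing failed". Treat this as an untested outline; in a true memory-exhaustion crash, PHP may die before the filter ever runs.

```php
// Sketch: if intermediate sizes could not be generated, drop the
// attachment and stash the failure so an admin notice can pick it up.
add_filter( 'wp_generate_attachment_metadata', function ( $metadata, $attachment_id ) {
    if ( empty( $metadata ) || empty( $metadata['sizes'] ) ) {
        wp_delete_attachment( $attachment_id, true ); // removes file + DB rows
        set_transient( 'myplugin_upload_failed', $attachment_id, 60 );
    }
    return $metadata;
}, 10, 2 );
```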
[ -0.012098114006221294, -0.004437594208866358, 0.0038856114260852337, 0.02555619738996029, 0.011979919858276844, 0.017731549218297005, 0.009748877957463264, 0.017504680901765823, -0.01248126570135355, -0.020973356440663338, -0.013759282417595387, 0.014799980446696281, 0.0123612554743886, 0....
[ 0.38631919026374817, 0.15447837114334106, 0.4160202741622925, 0.08020874112844467, -0.15502247214317322, -0.36682307720184326, 0.6481828689575195, -0.1254969984292984, -0.473013311624527, -0.4200522303581238, 0.11567177623510361, 0.3122488856315613, -0.3890380561351776, 0.30576565861701965...
I have created several large tables in Excel and MATLAB. Since it is very time-consuming to manually copy and paste these tables in LaTeX, I would like to include them as figures in LaTeX. I would really appreciate it if someone could help me with the following questions: 1. Is it possible to include tables from MATLAB or Excel without having to manually copy and paste the cells in LaTeX? 2. If I import a table as a figure into LaTeX, how can I change the caption from "Figure" to "Table" and have it listed in the "List of Tables" instead of "List of Figures"? Thank you very much in advance for your time and help.
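For question 2, the float type, not the `\includegraphics` call, decides whether the caption says "Figure" or "Table" and which list the entry goes into, so an image of a table can simply live in a `table` float (requires `graphicx`). A minimal sketch with a placeholder file name:

```latex
\begin{table}[htbp]
  \centering
  \includegraphics[width=\linewidth]{my-exported-table}% placeholder name
  \caption{Results exported from MATLAB/Excel.}
  \label{tab:results}
\end{table}
```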
[ 0.02068042755126953, 0.002485219854861498, -0.007904107682406902, 0.01595339924097061, 0.03013882413506508, 0.00669613853096962, 0.009425388649106026, 0.014863749034702778, -0.023872297257184982, -0.034848153591156006, -0.009237561374902725, 0.009435276500880718, -0.0025691231712698936, 0....
[ 0.3801228404045105, 0.1367410272359848, 0.4379010796546936, 0.15070165693759918, 0.059135887771844864, 0.4941730499267578, -0.3581582307815552, -0.10349888354539871, -0.6719689965248108, -0.7844395637512207, 0.14301498234272003, 0.1275719702243805, -0.11872424930334091, -0.0065182736143469...
I'm working in a koding terminal running ubuntu 13.04, and I'm trying to run an application that requires an x-screen (even to run in terminal mode). I am unable to connect with x11 forwarding, and I'm wondering if there is a workaround that will allow me to run this application (lmms). $ lmms -v lmms: cannot connect to X server $ uname -a Linux vm-2.masd.koding.kd.io 3.9.0-0-generic #4userns5 SMP Mon May 13 06:15:34 PDT 2013 x86_64 x86_64 x86_64 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 13.04 Release: 13.04 Codename: raring Thanks in advance!
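If the VM lets you install packages, one workaround often suggested for headless machines is a virtual framebuffer, so the application sees an in-memory X server instead of a real display. A sketch (package name as on Ubuntu; whether lmms is happy under it is untested):

```
$ sudo apt-get install xvfb
$ xvfb-run -a lmms   # -a picks a free display number automatically
```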
[ 0.013464674353599548, -0.023345958441495895, -0.019156519323587418, 0.005846179556101561, -0.023017145693302155, -0.01037781685590744, 0.009858319535851479, 0.003170458599925041, -0.018977180123329163, 0.00610708212479949, -0.01827612891793251, 0.005863742902874947, -0.012366190552711487, ...
[ 0.2334917187690735, 0.2424696385860443, 0.5427132248878479, -0.027022717520594597, 0.11922626942396164, -0.07176826894283295, 0.10836803913116455, 0.29157331585884094, -0.32559487223625183, -0.7498293519020081, 0.0005630812374874949, 0.4784093499183655, -0.31395670771598816, 0.603948652744...
The linear SVM in textbooks takes the form of maximizing $L_D = \sum_i{a_i} - \frac{1}{2}\sum_{i,j}{a_ia_jy_iy_jx_i^Tx_j}$ over $a_i$, where $a_i \geq 0$ and $\sum_i{a_iy_i} = 0$. Since $w = \sum_i{a_iy_ix_i}$, the classifier will take the form $\text{Sgn}(wx - b)$. Thus, it seems that to solve a linear SVM, I need to figure out the $a_i$ with some gradient-based method. However, I recently came across a paper which states that they try to minimize the following form: $L_P = \frac{1}{2}||w||^2+C\sum_i{\text{max}(0, 1-y_if_w(x_i))}$, and they claim $C$ is a constant. It seems to me that this form is quite different from the primal form $L_P$ of the linear SVM because of the missing $a_i$. As far as the paper goes, it seems to me that they optimize $w$ directly. I am puzzled, as if I have missed something. Can you optimize $w$ directly in a linear SVM? Why is that possible?
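A sketch of why the $a_i$ disappear: the paper's objective is the soft-margin primal with the slack variables eliminated.

```latex
% Soft-margin primal with slacks:
\min_{w,\,b,\,\xi}\;\; \tfrac{1}{2}\|w\|^{2} + C\sum_i \xi_i
\quad\text{s.t.}\quad y_i\,(w^{T}x_i - b) \ge 1-\xi_i,\;\; \xi_i \ge 0 .
% At the optimum each slack is tight:
\xi_i \;=\; \max\bigl(0,\; 1 - y_i\,(w^{T}x_i - b)\bigr),
% so eliminating the xi_i yields exactly the unconstrained hinge-loss
% objective quoted from the paper.
```

That objective is convex in $w$, so it can be minimised over $w$ directly with (sub)gradient methods; the multipliers $a_i$ only appear if one chooses to solve the dual instead.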
[ 0.004179035313427448, 0.0068919602781534195, 0.00029836042085662484, 0.0022349657956510782, -0.012111689895391464, -0.009627848863601685, 0.004661481361836195, 0.00753027992323041, -0.004994023125618696, 0.004404233302921057, -0.013109407387673855, 0.006784664001315832, -0.01248372532427311,...
[ -0.453849196434021, -0.4122014045715332, 0.9065688848495483, -0.17514607310295105, -0.04857742413878441, 0.07194207608699799, 0.104449562728405, -0.28348109126091003, 0.05458445847034454, -0.36313432455062866, -0.3044545352458954, 0.5657175183296204, -0.1326504796743393, 0.1901841908693313...
I'm trying to use the following code to get a line between the perimeters of the circles. The `\--(b)++(135:16pt)` is the bit I'm having the problem with: it seems to draw to `(b)` and neglect the `++` part after it. Is there any way to bracket this so it draws to `((b)++(135:16pt))`? Also, the same goes for the definition of coordinate `(c)`: it is treated as the same as `(b)`, not `(b)++whatever`. \begin{document} \begin{center} \begin {tikzpicture}[scale=1] \coordinate (a) at (0pt,0); \coordinate (b) at (130pt,0); \coordinate(c) at (b)++(0,10pt); \draw (a) circle (28pt) (a) circle (22pt) (a)circle(20pt); \draw (b) circle (16pt) (b) circle (12pt) (b)circle(10pt); \draw(a)++(55:28pt)--(b)++(135:16pt); \end {tikzpicture} \end{center} \end{document}
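The `calc` TikZ library provides exactly this bracketing via `($ ... $)` coordinate arithmetic; a sketch of the relevant lines:

```latex
\usetikzlibrary{calc}% in the preamble

\coordinate (c) at ($(b)+(0,10pt)$);
\draw ($(a)+(55:28pt)$) -- ($(b)+(135:16pt)$);
```

Without `calc`, `(b)++(135:16pt)` on a path is parsed as two separate path operations (a line-to followed by a relative move-to), which is why the drawn segment stops at `(b)`.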
[ -0.011273978278040886, -0.004092661663889885, -0.013431371189653873, 0.01804850995540619, -0.01020556315779686, -0.008752233348786831, 0.004975362680852413, -0.005451759323477745, -0.01628967374563217, -0.01993812806904316, -0.0027004911098629236, 0.001816748408600688, -0.005688881501555443,...
[ 0.4128304421901703, -0.027471765875816345, 0.5389078855514526, -0.11535564064979553, -0.13240212202072144, -0.19333098828792572, 0.036282993853092194, -0.5428081154823303, 0.028988724574446678, -0.25891897082328796, 0.22359855473041534, 0.27482926845550537, -0.41950178146362305, 0.39279654...
I have only played MGS4. Is the Stun Knife available in any other game in the series?
[ 0.0024114721454679966, 0.016497764736413956, -0.027370231226086617, 0.0002770841238088906, 0.01816312037408352, 0.02785673551261425, 0.014107375405728817, -0.0023987633176147938, -0.03140455111861229, -0.0471218004822731, 0.009153679013252258, 0.06279043853282928, -0.015951141715049744, 0....
[ 0.0005924554425291717, 0.21305397152900696, 0.08571755141019821, -0.36019209027290344, -0.15871216356754303, -0.0698198676109314, 0.23021411895751953, -0.10613398253917694, -0.5413755774497986, -0.25679582357406616, 0.14339803159236908, 0.30222389101982117, 0.03903675824403763, -0.02737474...
I have encountered a weird pdflatex problem; maybe you know a solution. I have a LaTeX project on a server. When I compile this project + bibliography with pdflatex project bibtex project pdflatex project pdflatex project everything works fine: the last pdflatex properly includes all the bib files which were created before. But if I mount this server on my local computer over the network and run my local pdflatex on all the files, pdflatex fails to include all the references, although they were correctly included when I ran pdflatex on the server. The funny thing is that this had worked on a 2nd system (which I do not have anymore). Do you know why the local pdflatex does not include the bib files created on the server (and why this had worked fine with an older machine)? Edit: I forgot to mention the pdflatex versions on the machines: Server (where it works): pdfTeX 3.1415926-1.40.10-2.2 (TeX Live 2009/Debian) Local machine: pdfTeX 3.1415926-2.5-1.40.14 (TeX Live 2013/Debian) Edit 2: Solved the problem after trying several things. It really was a LaTeX version issue: I had to add [backend=bibtex], i.e. \usepackage[backend=bibtex]{biblatex}. In my older LaTeX installation the bibtex backend was the default backend, while the new installation expects files created with the biber backend if no backend is specified! Thanks for your replies!
[ 0.0063428631983697414, 0.012198224663734436, -0.0037189924623817205, 0.02581627294421196, 0.03631860762834549, -0.004736513365060091, 0.009617988020181656, 0.023649312555789948, -0.02054869756102562, -0.02371501922607422, -0.013252394273877144, 0.017064612358808517, -0.017246786504983902, ...
[ 0.13522592186927795, 0.28806987404823303, 0.7271292209625244, 0.34995561838150024, -0.3169049620628357, -0.3007097840309143, 0.2157234102487564, 0.06408071517944336, -0.3769654333591461, -0.7281450629234314, 0.13090282678604126, 0.42540860176086426, -0.4475029408931732, -0.0477212592959404...
I have used ITopologicalOperator to get the common area of two polygons, but the result is not correct. Can anyone please tell me how to get the common area of two features? //Make Topological operator of geometry here ITopologicalOperator pTopoOperator = (ITopologicalOperator)pGeometry; //intersect of the above topo operator with another geometry IGeometry pGeometry1 = pFeature1.Shape as IGeometry; IGeometry pGeomResult = pTopoOperator.Intersect(pGeometry1, esriGeometryDimension.esriGeometry2Dimension); //Typecasted to area viz. common area IArea pCommonArea = (IArea)pGeomResult; double dblCommonArea = pCommonArea.Area; The area value in dblCommonArea is not correct.
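Two things worth ruling out before suspecting `Intersect` itself: ArcObjects requires the inputs to be topologically simple, and `IArea.Area` is reported in the units of the geometry's spatial reference (geographic coordinates give square degrees, which rarely look "correct"). A hedged C# sketch of the simplify step:

```csharp
// Make both polygons simple before intersecting; Intersect on
// non-simple geometries gives undefined results.
ITopologicalOperator2 topoOp = (ITopologicalOperator2)pGeometry;
topoOp.IsKnownSimple_2 = false;
topoOp.Simplify();

ITopologicalOperator2 topoOp1 = (ITopologicalOperator2)pGeometry1;
topoOp1.IsKnownSimple_2 = false;
topoOp1.Simplify();

IGeometry pGeomResult = topoOp.Intersect(pGeometry1,
    esriGeometryDimension.esriGeometry2Dimension);

// Project to a planar coordinate system first if the inputs are in
// lat/long; otherwise Area comes back in square degrees.
double dblCommonArea = ((IArea)pGeomResult).Area;
```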
[ -0.005360523704439402, 0.007301577366888523, -0.006139211822301149, 0.02267242968082428, -0.015248536132276058, 0.011140584014356136, 0.00841047428548336, 0.0076546152122318745, -0.012301599606871605, 0.010259930044412613, -0.0086982985958457, 0.017907023429870605, 0.0013301705475896597, 0...
[ -0.08667914569377899, -0.13362042605876923, 0.596531867980957, 0.15427403151988983, -0.14484885334968567, 0.20899274945259094, -0.21464279294013977, -0.38732144236564636, 0.1764688491821289, -0.6567594408988953, 0.09474518150091171, 0.45621341466903687, 0.02198670245707035, 0.0095156067982...
If a _phobia_ is to have an irrational fear of something, what is the word for having an irrational affinity for something? For example a numerologist may fear the number 13, but be attracted to (or even have a love for) the number 8. (Some would say this is irrational.)
[ 0.011994398199021816, 0.029050715267658234, -0.012350840494036674, 0.021413711830973625, 0.002434825524687767, -0.014709560200572014, 0.01259208470582962, -0.020027538761496544, -0.0168100968003273, -0.006826523691415787, -0.014506520703434944, 0.005023983307182789, 0.00941817369312048, 0....
[ 0.36227455735206604, 0.35068029165267944, -0.1324259340763092, -0.029987450689077377, -0.3186630606651306, -0.1914568692445755, 0.6546465158462524, 0.0425918847322464, -0.4034757614135742, -0.14100611209869385, -0.05811189115047455, -0.14960439503192902, -0.8606475591659546, 0.047964949160...
Let $X_{1},..,X_{n}$ be independent, each with a exp($\lambda$) distribution. Let $Z=min(X_{1},..X_n)$. Show that $n\lambda Z$ has an exp$(1)$ distribution. I calculate that $P(Z>z)=e^{-n\lambda z}$. Hence the density function of $n\lambda Z$ is $f(z)=(n\lambda)^2e^{-n\lambda z}$. But an $exp(1)$ distribution would have density $e^{-z}$.
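For reference, a sketch of the change-of-variables step via the survival function, which avoids the density altogether:

```latex
P(n\lambda Z > t) \;=\; P\!\Bigl(Z > \tfrac{t}{n\lambda}\Bigr)
  \;=\; \exp\!\Bigl(-\,n\lambda\cdot\tfrac{t}{n\lambda}\Bigr)
  \;=\; e^{-t}, \qquad t \ge 0,
```

which is exactly the exp(1) survival function. The $(n\lambda)^2$ above comes from differentiating $e^{-n\lambda z}$ without transforming the variable: the density of $Y=n\lambda Z$ is $f_Y(y)=f_Z\!\big(y/(n\lambda)\big)\cdot\frac{1}{n\lambda}=e^{-y}$, i.e. the Jacobian divides rather than multiplies.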
[ -0.0008145438041538, 0.007875490933656693, -0.0075258188880980015, 0.007992076687514782, -0.01254948042333126, -0.013943955302238464, 0.005431516095995903, -0.006098769139498472, -0.008016055449843407, -0.00492499116808176, -0.0073603009805083275, 0.008503852412104607, -0.007029911037534475,...
[ 0.03195904195308685, -0.24447864294052124, 0.476800799369812, -0.21652790904045105, 0.15392561256885529, 0.47262948751449585, -0.38006392121315, -0.3173733949661255, -0.05365897715091705, -0.46023204922676086, -0.14615504443645477, 0.5972350835800171, -0.5570990443229675, 0.250043451786041...
I want to add a new custom "product type" to the WooCommerce plugin: ![enter image description here](http://i.stack.imgur.com/t2aF6.png) I tried duplicating one of the currently existing product type files (in the WooCommerce template structure) as a new file (changing the file name and the commented name inside), but it did not work! ![enter image description here](http://i.stack.imgur.com/wnPMl.png)
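For the record, duplicating template files alone does not register a type; WooCommerce builds that dropdown from a filterable list. A sketch using the `product_type_selector` filter (filter name and single-argument signature as in WooCommerce 2.x; treat them as an assumption for other versions), with a placeholder slug:

```php
// Add the new type to the product-type dropdown.
add_filter( 'product_type_selector', function ( $types ) {
    $types['my_custom_type'] = __( 'My custom type', 'my-textdomain' );
    return $types;
} );
```

Making the new type actually behave differently additionally needs a product class and/or matching admin panels, which is beyond this sketch.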
[ -0.010818907991051674, 0.006861318834125996, 0.010787279345095158, 0.03734269365668297, 0.012052059173583984, 0.008917576633393764, 0.007076924201101065, 0.02847663313150406, -0.024960005655884743, 0.008167228661477566, -0.008341276086866856, 0.003209670539945364, 0.006172538734972477, 0.0...
[ 0.8545592427253723, 0.027702590450644493, 0.6631320118904114, -0.14258627593517303, -0.1071229949593544, 0.07353480905294418, 0.08977992832660675, -0.5201829671859741, -0.24381032586097717, -0.46759670972824097, -0.029906662181019783, 0.5704516172409058, -0.23823024332523346, 0.17147976160...
I have a problem with my beamer presentation: it always gives me an empty slide after the "Plan" slide, and I don't know why. Here is a small part of my LaTeX presentation; can you spot what is wrong with it? \documentclass[11pt,a4paper]{beamer} \usepackage[applemac]{inputenc} \usepackage[frenchb]{babel} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{caption} \usepackage{subcaption} \usepackage{lmodern} \usepackage{color} \usepackage{hyperref} \title{Synthèse des Méthodes de la Segmentation d'images IRM cérébrales} \author{ALJI Mohamed} \institute{LaRIT - faculté des sciences - Université Ibn Tofail} \date{\today} %\usetheme{Warsaw} \usetheme{Madrid} \AtBeginSection[]{ \begin{frame} %\frametitle{Plan} \tableofcontents[currentsection, subsections] \end{frame} } %\AtBeginSubsection[]{ % \begin{frame} %\frametitle{Plan} %\tableofcontents[currentsection, currentsubsection] %\end{frame} %} \AtBeginSubsubsection[]{ \begin{frame} \frametitle{Plan} \tableofcontents[currentsubsection, currentsubsubsection] \end{frame} } \setbeamertemplate{footline}{\hspace{28em}\insertframenumber/\inserttotalframenumber\vspace{1em}\null} \begin{document} % slide Titre % \begin{frame} \titlepage \end{frame} % slide Plan % \begin{frame} \frametitle{Plan} \tableofcontents[] \end{frame} % -------------- Debut ------------- % \section*{Introduction} \begin{frame} \frametitle{Articles lus :} \begin{block}{Articles lus} \begin{description} \item[1995] MRI Segmentation : Methods and Applications \item[2008] Segmentation d’images cérébrales : État de l’art \item[2010] Review of brain MRI image segmentation methods \pause \item[2012a] Segmentation of Brain MRI - Advances in Brain Imaging \item[2012b] Segmentation of Brain MRI Image - A Review \item[2013] State of the art survey on MRI brain tumor segmentation \end{description} \end{block} \end{frame} \end{document}
[ 0.0021461877040565014, 0.0009910413064062595, 0.008404959924519062, 0.026002123951911926, -0.0020184465683996677, -0.02166597917675972, 0.0072975605726242065, 0.01655471697449684, -0.011971743777394295, -0.020291713997721672, -0.022823991253972054, -0.0036032211501151323, 0.01339887361973524...
[ 0.03541954606771469, 0.14545470476150513, 0.5494007468223572, 0.015007304027676582, -0.0357372984290123, 0.0570259615778923, 0.4454690217971802, 0.11288098245859146, -0.06229161098599434, -0.5628228783607483, 0.0714845210313797, 0.48410263657569885, -0.5808213949203491, -0.0865933746099472...
Does the WordPress code base use mysqli or PDO? I know PDO is superior to mysqli, but mysqli is not bad either. Also, one of the features that makes PDO superior to mysqli (being database agnostic) does not mean much to WordPress, as WordPress will always use a MySQL server. But binding params with data types is something PDO supports but mysqli does not, and it's a good thing. My gut tells me that WordPress does use mysqli, but I could not see it in the code base yet. My second question: if WordPress is using mysqli, is it because of speed concerns, or is it because back in the earlier days (when WP was being developed) PDO just wasn't there yet?
[ 0.019458085298538208, 0.013051794841885567, 0.005924972705543041, 0.016624804586172104, -0.003437422215938568, 0.0198549535125494, 0.009950470179319382, 0.00999175664037466, -0.018576405942440033, -0.03485887125134468, -0.024566806852817535, 0.02628730610013008, 0.01133046019822359, 0.0143...
[ 0.3074723780155182, 0.1720694899559021, 0.3881664574146271, 0.29449978470802307, -0.2390897274017334, -0.21582601964473724, 0.2037627249956131, -0.08099015802145004, -0.2991954982280731, -0.39443814754486084, 0.19598357379436493, 0.1615438312292099, -0.5803268551826477, 0.17120260000228882...
For the Gaussian distribution with unknown mean and variance, the sufficient statistics in the standard exponential family form is $T(x)=(x,x^2)$. I have a distribution that has $T(x)=(x,x^2,...,x^{2N})$, where N is kind of like a design parameter. Is there a corresponding known distribution for this kind of sufficient statistics vector? I need samples from this distribution so it is kind of crucial for me to get exact samples from the distribution. Thanks a lot.
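For what it's worth, writing the family out makes the structure explicit; a sketch of the natural-parameter form implied by that $T(x)$:

```latex
p(x \mid \theta) \;=\; \exp\!\Bigl(\,\sum_{k=1}^{2N} \theta_k x^{k}
  \;-\; A(\theta)\Bigr), \qquad \theta_{2N} < 0 ,
```

a polynomial (maximum-entropy) exponential family: $N=1$ recovers the Gaussian, while for $N>1$ the log-normaliser $A(\theta)$ has no closed form in general, which is exactly why exact sampling is awkward.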
[ 0.003130726283416152, 0.008891977369785309, -0.009149500168859959, 0.008411522023379803, 0.006896283011883497, 0.015407303348183632, 0.008119912818074226, -0.012659557163715363, -0.013155702501535416, -0.03730839863419533, -0.006187425926327705, 0.0017304702196270227, -0.006981102749705315, ...
[ 0.3093867599964142, -0.2905711531639099, 0.0675370916724205, -0.03103579394519329, -0.2684857249259949, 0.21148204803466797, 0.231351837515831, -0.23828767240047455, -0.0638837218284607, -0.4566005766391754, 0.04413123428821564, 0.5247746109962463, -0.43481916189193726, 0.40645313262939453...
I would like to ask a question about incandescent light. ![enter image description here](http://i.stack.imgur.com/jLnJ6.jpg) Through thermionic emission (the Edison effect), a heated tungsten filament emits electrons that can be collected by an anode (such as a foil connected to a positive voltage). The wiki also mentions that, in order to facilitate thermionic emission, tungsten is often treated with a mixture of barium, strontium and calcium. Does this emission **happen** in a normal incandescent lamp? In the bulb in which Edison discovered the effect, there is a plate (foil) inserted into the bulb from the base; this is absent in normal bulbs. Does the tungsten filament still emit electrons in this case? If it does, where do the emitted electrons **go**? Without an anode collector, will they be suspended in the vacuum space inside the bulb (assume a vacuum bulb)? This further raises a question about the energy balance (conservation) in a vacuum bulb: ![enter image description here](http://i.stack.imgur.com/UOj7l.jpg) This is the fundamental equation for all tungsten filament temperature calculations when the inside of the bulb is a vacuum, and it appears in **Irving Langmuir**'s 1936 paper and numerous others. Do we need to consider thermionic effects for an ordinary incandescent bulb? How should we modify the equation above? Wang
[ -0.003261097939684987, -0.0037918519228696823, -0.006405686493963003, 0.010579649358987808, -0.019177217036485672, -0.006687985733151436, 0.010056138038635254, 0.0009174281731247902, -0.013374912552535534, -0.00988362543284893, -0.002597658196464181, 0.01929062232375145, -0.02878621593117714...
[ 0.6800584197044373, 0.09012585133314133, -0.008173232898116112, 0.16110217571258545, -0.1809772104024887, -0.44868195056915283, -0.1754678636789322, -0.1922169327735901, 0.2212432324886322, 0.07136201858520508, -0.05491287633776665, 0.3595293462276459, -0.20466218888759613, 0.6078102588653...
Many questions on this forum, as well as in other places, really boil down to somebody coming from a Linux environment and then not being able to use the equivalent command on Solaris. Often this is because of different options supported, etc. This question intends to document (Q&A style) what a reasonable Solaris install should always include. Never again should a user be frustrated because something isn't available. We focus on the packages most often asked for by Solaris newbies/visitors in questions. This is about standard userland tools such as `find`, `grep` and what have you. If you are looking for a similar posting about development tools (e.g. compiler, make, etc.) then you should look here.
[ -0.00036945659667253494, -0.002190400380641222, -0.006633511744439602, 0.003300498239696026, 0.0010290571954101324, 0.016580117866396904, 0.008436387404799461, 0.004362098872661591, -0.017869733273983, -0.027994506061077118, 0.003648181911557913, 0.0034651635214686394, -0.010347602888941765,...
[ 0.6143203377723694, -0.02876150794327259, 0.011810975149273872, 0.22745369374752045, -0.06477046012878418, -0.08142746239900589, -0.2245607078075409, -0.11837142705917358, -0.18852032721042633, -0.5706588625907898, -0.2203802615404129, 0.864650547504425, -0.2258833646774292, 0.095040433108...
For a quantum system with time-reversal symmetry, other than the absence of a magnetic field, can we infer anything else about the system?
[ 0.0552653931081295, 0.04436144232749939, 0.0049141766503453255, 0.023026371374726295, 0.021521614864468575, 0.011836529709398746, 0.01844657212495804, -0.018676232546567917, -0.010611254721879959, -0.003824568586423993, -0.02308930642902851, 0.027032069861888885, -0.010599273256957531, -0....
[ 0.3436084985733032, -0.13979999721050262, 0.0010741923470050097, 0.2642032206058502, 0.3513058125972748, 0.08561784029006958, 0.08347199857234955, -0.21258452534675598, -0.11423775553703308, -0.23725932836532593, -0.14836956560611725, 0.1814359724521637, 0.02204388566315174, 0.274335175752...
I am new to all kinds of GIS software. For my project I need to open a WFS site which requires WFS standard 1.1.0. I tried to do it with MapInfo Professional 11.5, but it won't open it. I found a suggestion on the internet to append ?request=getCapabilities&version=1.1.0 to the end of my WFS URL, but it gives me the error "Unable to get capabilities from the server". Do I have to download any add-ons for MapInfo 11.5? Or can anybody suggest some other programs to get this link to work? Best Regards, Jarmo.
[ -0.01765826717019081, -0.016489792615175247, -0.01102815568447113, 0.015257490798830986, 0.005539596546441317, 0.03389293700456619, 0.008529864251613617, 0.028305772691965103, -0.019842125475406647, -0.0045575713738799095, -0.0041426848620176315, 0.01913117803633213, 0.0010774205438792706, ...
[ 0.8134409189224243, 0.24955320358276367, 0.3836442828178406, 0.015081901103258133, -0.11424916982650757, -0.2834677994251251, 0.23002344369888306, -0.032256994396448135, -0.06445354223251343, -1.0703344345092773, 0.1892479509115219, 0.4397786259651184, -0.19948622584342957, 0.3144939839839...
A related WPSE question asks how to get the term by specifying ID only, without specifying taxonomy. My question is more philosophical. Generally, stuff in WP core is there for a reason. I'm trying to understand why term_id can't be the primary key for the term - why do we need the taxonomy as well? Can a single term record be a member of multiple taxonomies? That's certainly not currently supported in the API. Is there a use case where this might be desirable? Or is the required `$taxonomy` parameter in `get_term()` a vestigial tail from an earlier incarnation of the database structure?
[ 0.011753465980291367, 0.009391426108777523, -0.0027706348337233067, 0.02528354339301586, -0.017111696302890778, 0.030689576640725136, 0.00919055100530386, 0.025463568046689034, -0.015452899038791656, -0.014792225323617458, -0.013059577904641628, 0.016424069181084633, -0.006098750978708267, ...
[ 0.5002952814102173, -0.0020712839905172586, -0.002640344202518463, 0.31866729259490967, -0.024860119447112083, -0.38375940918922424, -0.06344236433506012, -0.24323323369026184, -0.10014954954385757, -0.44282662868499756, -0.013933682814240456, 0.5793257355690002, -0.0468478761613369, 0.171...
I recently made a path across a snowy field using cobblestone, and when it snowed, my cobblestone path was buried, requiring me to perform the tedious task of shoveling out the snow. In order to not be doing chores in Minecraft that I'm oft too lazy to do in real life, I was thinking about using half-steps for the path. ### Is it possible for snow to stick to half-steps in Minecraft? What about double half-steps? Edit to add: Are there any solid blocks that snow cannot stick to? For reference purposes, assume Minecraft Beta version 1.6.6
[ -0.009382367134094238, 0.01815059967339039, -0.012140289880335331, 0.01606922782957554, -0.01795373111963272, -0.003703781869262457, 0.009546764194965363, -0.004328911192715168, -0.02164236269891262, -0.011645831167697906, -0.010676325298845768, 0.013742029666900635, 0.0035474118776619434, ...
[ 0.23257090151309967, 0.045568183064460754, 0.23228192329406738, 0.14015865325927734, -0.3418372869491577, 0.019715415313839912, 0.5616216063499451, -0.21754755079746246, -0.561596691608429, -0.3686741888523102, 0.1653444617986679, -0.20087553560733795, 0.3218550980091095, -0.11199598759412...
Every time there is a classical wave equation, the underlying system is bosonic. For example, EM waves are made from photons, sound from phonons (technically quasi-particles), etc. What would be the classical wave equation corresponding to a fermionic particle? Intuition says that it would be very different. For example, take the fundamental-mode standing wave in a 1m box. The wave equation admits solutions of any amplitude, which means there is no limit to the number of particles _with a 2m de Broglie wavelength_ that can be stuffed into the box. Fermions don't allow this, so if we replace photons with, for example, "massless" neutrinos, we would get something different in the classical limit. Note: you can make neutrinos "massless" by using such a high frequency that the energy is much greater than the rest mass.
[ 0.01094432920217514, 0.01824801042675972, 0.0009231476578861475, 0.01977185346186161, 0.0021330788731575012, -0.025504399091005325, 0.007711147423833609, 0.0013031316921114922, -0.023959370329976082, -0.0009872817900031805, -0.010294358246028423, 0.01682579517364502, -0.012377362698316574, ...
[ 0.18282191455364227, -0.15944181382656097, 0.3744649291038513, 0.034781910479068756, -0.3076151907444, -0.21066240966320038, 0.15245914459228516, -0.6739006638526917, -0.30470168590545654, -0.4403815269470215, -0.14448827505111694, 0.6311123371124268, -0.4292682111263275, 0.326220721006393...
What's the deal regarding ESRB ratings of `LEGO Star Wars III: The Clone Wars` in the DS and 3DS formats? I ask, because I need an explanation that can get this game past a picky mom of a <10 kid. The DS version shows as a "E" rating. The 3DS version shows an "E 10+" rating. The links above both say "Cartoon Violence" and "Comic Mischief", and I can't imagine that there is any real difference. I saw this game in a store, and I was certain that it said "Cartoon Violence" and "Crude Humor".
[ -0.02861660346388817, -0.006174862384796143, -0.026229117065668106, 0.010396209545433521, 0.0016308040358126163, 0.0028454428538680077, 0.010125272907316685, 0.0016472518909722567, -0.015675988048315048, 0.009135408326983452, -0.003633964341133833, 0.013180112466216087, 0.011280988343060017,...
[ 0.4530159831047058, 0.25359487533569336, 0.2765096127986908, 0.2579135298728943, -0.11625295877456665, -0.41057562828063965, -0.15794621407985687, 0.017245430499315262, -0.25125932693481445, -0.2859743535518646, -0.12543542683124542, 0.8703693747520447, -0.012949935160577297, 0.36210578680...
Suppose engineers built a large circular room in a rotating space station where if one looked directly up from any location, one could see the floor. If one used a ladder to reach the center of the room, could they balance an object in the center of the room's rotation, such that the object floated unsupported? Would it be easy to place the object there or quite difficult?
[ 0.015403385274112225, 0.02574022114276886, -0.012502370402216911, -0.0010122541571035981, -0.023228591307997704, -0.002028154209256172, 0.011142459698021412, -0.011320199817419052, -0.014156270772218704, -0.018065055832266808, -0.014036886394023895, 0.021250486373901367, 0.013393884524703026...
[ 0.21772782504558563, 0.022004462778568268, -0.1884850114583969, 0.23658646643161774, 0.3108166456222534, 0.44821473956108093, -0.180866077542305, -0.2460300475358963, -0.660478949546814, -0.38038092851638794, -0.006542977411299944, 0.18486955761909485, -0.12121294438838959, 0.1402681022882...
I have two nodes (drawn as rectangles) in my picture, one filled green, the other red. I wish to produce a third 'rectangle' which shows the effect of splicing these two rectangles and stitching them back together. In other words, I wish to produce a rectangle twice the width of my starting rectangles, subdivided into smaller segments, each segment filled alternating red and green. Given my picture:

    \begin{tikzpicture}[node distance=5mm]
      \node [fill=red!30,minimum width=40mm] (b1) {Bank 1 \unit[1024]{MiB}};
      \node [fill=green!30,minimum width=40mm,right=of b1] (b2) {Bank 2 \unit[1024]{MiB}};
    \end{tikzpicture}

I have been able to 'create' the desired effect using two \foreach loops:

    \foreach \n in {0,2,...,10}
      \draw [fill=red!30] (\n*6.66mm,1) +(-3.33mm,0) rectangle ++(3.33mm,5mm);
    \foreach \n in {1,3,...,11}
      \draw [fill=green!30] (\n*6.66mm,1) +(-3.33mm,0) rectangle ++(3.33mm,5mm);

However, this requires several manually computed factors (80 / 12 ≈ 6.66, 6.66 / 2 = 3.33), and I have been unable to position it under my existing two nodes due to the use of absolute coordinates. Are there better ways of doing this which would give me more/easier control of the positioning of the 'group'? I looked into splitting a rectangle, but that is seemingly limited to four horizontal splits.
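One way to avoid the hand-computed factors and the absolute coordinates, as an untested sketch (assumes `\usetikzlibrary{calc,positioning}`; the 80mm deliberately ignores the 5mm gap between the two banks): let pgfmath do the division and hang everything off b1's south-west anchor:

    \begin{scope}[shift={($(b1.south west)+(0,-7mm)$)}]
      % 12 segments of 80mm/12 each; pgfmath does the arithmetic
      \foreach \n in {0,2,...,10}
        \fill [red!30]   (\n*80mm/12,0) rectangle +(80mm/12,5mm);
      \foreach \n in {1,3,...,11}
        \fill [green!30] (\n*80mm/12,0) rectangle +(80mm/12,5mm);
    \end{scope}

Because the scope is shifted relative to a node anchor, the striped bar moves with the nodes, and changing the segment count only means editing the loop bounds and the divisor.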
[ 0.0014238199219107628, 0.01055497769266367, -0.009070568718016148, 0.02272723987698555, -0.04437018185853958, -0.012928200885653496, 0.008656078949570656, 0.01241037342697382, -0.013910208828747272, 0.017044518142938614, -0.009235838428139687, 0.002882986795157194, -0.007944939658045769, -...
[ 0.11341726779937744, -0.23998647928237915, 0.4906480312347412, -0.13136647641658783, 0.28656402230262756, 0.585792064666748, 0.05439159646630287, 0.0022003999911248684, -0.3768393099308014, -0.6322586536407471, 0.0985320508480072, 0.12772348523139954, -0.18463556468486786, 0.00832449737936...
This is a piecewise-defined function I have that I need to talk about:

    f(a,b) = \left\{
      \begin{array}{lr}
        \text{open} & : \text{RMSD}_\text{s-open}\ge6, \text{RMSD}_\text{closed}\ge6\\
        \text{closed} & : \text{RMSD}_\text{closed}\le2 \\
        \text{semiopen} & : \text{RMSD}_\text{s-open}\le2\\
        \text{transition} & : f(a,b)\notin\{\text{open}, \text{closed}, \text{semiopen}\}
      \end{array}
    \right.

What's the way to line up the colons (conditions) so that it all looks nice and pretty? Also, if I want to describe transition as the value of the function when none of the conditions of the previous three are satisfied, is writing it with `f(a,b)` not belonging to the set `{open,closed,semiopen}` a good way of doing it? It seems really unprofessional...
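One standard trick, as a sketch (spacing is a matter of taste): bake the colon into the column separator of the `array`, so every row gets it in the same place automatically:

    f(a,b) = \left\{
      \begin{array}{l@{\;:\;}l}
        \text{open}       & \text{RMSD}_\text{s-open}\ge6,\ \text{RMSD}_\text{closed}\ge6\\
        \text{closed}     & \text{RMSD}_\text{closed}\le2\\
        \text{semiopen}   & \text{RMSD}_\text{s-open}\le2\\
        \text{transition} & \text{otherwise}
      \end{array}
    \right.

Writing the last condition as plain "otherwise" also sidesteps the self-reference: $f(a,b)\notin\{\dots\}$ reads circularly, since $f(a,b)$ is exactly what is being defined.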
[ -0.011354264803230762, 0.018591994419693947, -0.008492749184370041, 0.0008025887655094266, -0.006487749051302671, -0.0003157369792461395, 0.004502737894654274, 0.02490168623626232, -0.009192435070872307, 0.010142737999558449, -0.009579810313880444, -0.001262238947674632, 0.003432552330195903...
[ -0.14124462008476257, 0.09455406665802002, 0.4269751310348511, -0.26122525334358215, 0.07122994214296341, 0.04655136168003082, -0.12614001333713531, -0.3669956624507904, 0.11647091060876846, -0.5247204899787903, -0.2154657393693924, 0.4392941892147064, -0.2565021812915802, 0.14856655895709...
Given that there is no closed-form general solution to the $N$-body problem (for $N \ge 3$), can it be concluded that the Universe is non-deterministic, even in the Newtonian case (ignoring relativistic and quantum effects)?
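For reference, the system the question is about (just the equations, not a verdict on the answer): the Newtonian $N$-body problem is the coupled ODE system

$$\ddot{\mathbf r}_i = \sum_{j \ne i} \frac{G m_j\,(\mathbf r_j - \mathbf r_i)}{\lvert \mathbf r_j - \mathbf r_i \rvert^3}, \qquad i = 1, \dots, N,$$

and the question is whether the lack of a closed-form expression for its solutions says anything about whether those solutions are uniquely determined by the initial positions and velocities.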
[ -0.009620558470487595, 0.025804463773965836, -0.012076896615326405, 0.018325382843613625, 0.014504010789096355, -0.021076861768960953, 0.012204383499920368, 0.038609858602285385, -0.01709642820060253, -0.013965178281068802, -0.02671957015991211, 0.016212547197937965, -0.021782424300909042, ...
[ 0.28410977125167847, 0.11817941069602966, 0.2602279484272003, 0.3688441216945648, 0.13228020071983337, -0.005661636125296354, 0.09603193402290344, -0.03562436252832413, -0.04335947334766388, 0.019451752305030823, -0.27471181750297546, 0.5154512524604797, -0.5009876489639282, 0.424928754568...
I am using GDAL and the MODIS Reprojection Tool in my Python project, and I want to know the best way to compute new indices from the HDF data (MODIS data). I also want to visualize them on the web. Which format is more flexible for doing the math and for visualizing my results?
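As a minimal sketch of the computation side (the band file names and the choice of NDVI are my assumptions, not from the question): read two reflectance bands with GDAL, compute an index as a NumPy array, and write a GeoTIFF, a format that both desktop GIS tools and common web map servers can consume:

    import numpy as np
    from osgeo import gdal

    red_ds = gdal.Open('red_band.tif')   # hypothetical band exported from the HDF
    nir_ds = gdal.Open('nir_band.tif')
    red = red_ds.GetRasterBand(1).ReadAsArray().astype('float32')
    nir = nir_ds.GetRasterBand(1).ReadAsArray().astype('float32')

    # NDVI as an example index; the epsilon avoids division by zero
    ndvi = (nir - red) / (nir + red + 1e-9)

    drv = gdal.GetDriverByName('GTiff')
    out = drv.Create('ndvi.tif', red_ds.RasterXSize, red_ds.RasterYSize,
                     1, gdal.GDT_Float32)
    out.SetGeoTransform(red_ds.GetGeoTransform())   # copy georeferencing
    out.SetProjection(red_ds.GetProjection())
    out.GetRasterBand(1).WriteArray(ndvi)
    out.FlushCache()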
[ -0.003359338967129588, 0.009798016399145126, -0.01449311152100563, 0.026665914803743362, -0.03920290246605873, -0.023625828325748444, 0.017234284430742264, 0.03426846116781235, -0.025830119848251343, -0.029340283945202827, 0.0016480055637657642, 0.006669209338724613, 0.0015036168042570353, ...
[ 0.1521245688199997, 0.06172795966267586, 0.09730718284845352, 0.3108161985874176, -0.3400024175643921, 0.010573328472673893, -0.05557001382112503, -0.07188344746828079, -0.2691909968852997, -0.6711673140525818, 0.11645301431417465, 0.6063845157623291, -0.14699870347976685, 0.06563685834407...
My basement cellar texture changed, and now the rooms are full of spider webs and fast, annoying spiders. What happened? Any hints on how to change it back, and how to kill these spiderlings? ![enter image description here](http://i.stack.imgur.com/2Nrtb.jpg)
[ 0.010104849934577942, 0.026099545881152153, -0.002203608164563775, 0.029851356521248817, -0.025883976370096207, -0.004944223444908857, 0.00480695441365242, 0.015712475404143333, -0.0215851292014122, -0.018220433965325356, -0.010867421515285969, 0.0124736949801445, 0.0001307639031438157, 0....
[ 0.4023159146308899, -0.00009170792327495292, 0.14873921871185303, 0.17904101312160492, 0.3948248028755188, -0.18640483915805817, 0.9634726047515869, 0.1621614396572113, -0.780306875705719, -0.5992780923843384, -0.1271747201681137, 0.008504279889166355, -0.00773445051163435, 0.5042650103569...
Is there some way to get notifications when deprecated API elements are used in WordPress? How is it done? Do I put WordPress in debug mode (which also shows all other types of errors), or is there another method that shows only the deprecation errors?
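A minimal sketch of the second option (run as a small must-use plugin; the log message wording is mine): WordPress fires dedicated hooks whenever deprecated code runs, so you can log just those without enabling full WP_DEBUG output:

    <?php
    // Log calls to deprecated functions, and nothing else.
    add_action('deprecated_function_run', function ($function, $replacement, $version) {
        error_log("Deprecated: $function() since $version; use $replacement() instead.");
    }, 10, 3);

    // Optionally silence the notice WordPress would trigger itself.
    add_filter('deprecated_function_trigger_error', '__return_false');

There are sibling hooks (`deprecated_argument_run`, `deprecated_file_included`) if you want the other flavours of deprecation as well.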
[ 0.010621113702654839, -0.002174810040742159, 0.001183619606308639, 0.03359769284725189, 0.005042501259595156, 0.043837692588567734, 0.013323642313480377, 0.036253996193408966, -0.030863402411341667, -0.00905702356249094, -0.019673863425850868, 0.01690627448260784, -0.03940785676240921, 0.0...
[ 0.4057271182537079, 0.08810063451528549, 0.1796804666519165, 0.3352140188217163, -0.056393660604953766, -0.20532207190990448, 0.544110119342804, 0.02100798860192299, -0.1890152096748352, -0.4455743432044983, 0.03546619415283203, 0.3593859076499939, -0.4026070833206177, 0.07049908488988876,...
Over on Stack Overflow, I keep seeing questions wherein posters say: *I have an item _named_ `SoAndSo`* (a table, a file, etc.). Shouldn't it be: *I have an item _called_ `SoAndSo`*? Is "named" an acceptable word in this context? Are those words specific to particular groups of English speakers, e.g. UK vs. USA vs. Australia, etc.?
[ -0.0022850160021334887, 0.011124975048005581, -0.0034808029886335135, 0.012867240235209465, 0.00013807389768771827, 0.028556372970342636, 0.00803351029753685, 0.019871268421411514, -0.00734905619174242, -0.01881401613354683, -0.009108081459999084, 0.00840743724256754, 0.005227209534496069, ...
[ 0.45908111333847046, 0.0968933179974556, 0.24541491270065308, 0.07844184339046478, -0.20807629823684692, -0.002297323662787676, -0.045433443039655685, 0.27412763237953186, -0.6719368100166321, -0.5268604159355164, -0.18589986860752106, -0.3939584791660309, -0.028050413355231285, 0.30790415...
A few years ago I registered a domain with Network Solutions. In recent years I've been using cheaper services such as namecheap, powerpipe, etc. Every time I need to renew one of the older domains with Network Solutions, I am surprised at how much more expensive they are. What is the reason for the price differences between the services? Why should I use a service like Network Solutions when there are so many companies out there that offer domain registration at a very cheap price?
[ 0.0027939784340560436, 0.007008771412074566, -0.007174856495112181, 0.012280729599297047, -0.013378728181123734, -0.006637488026171923, 0.009683280251920223, 0.03141253814101219, -0.01685098186135292, -0.010191217064857483, -0.002581286011263728, 0.018655389547348022, 0.0011903345584869385, ...
[ 0.9523639678955078, 0.16015587747097015, 0.2683957517147064, 0.1680670529603958, -0.23668856918811798, -0.020888226106762886, 0.2565472424030304, 0.34891748428344727, -0.31208428740501404, -0.4399399757385254, 0.6794744729995728, 0.22495684027671814, 0.11609649658203125, 0.8459100127220154...
It's Friday again, so how about some fun to get us into the weekend? What is the longest word you can come up with for which all the letters in that word are in alphabetical order? Rules:

* English words only
* Can't be the name of a place, person or other proper noun.
* If it contains the same letter twice in a row, that does not disqualify it.
* No fair looking it up on Google!

Update: One more rule to help you guys out.

* The word can be in either ascending or descending alphabetical order.
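In the spirit of the puzzle, a tiny checker (it only verifies candidates, it doesn't look anything up, so it stays within the rules): a word qualifies if each letter is no earlier, or no later, in the alphabet than the one before it:

    def qualifies(word):
        w = word.lower()
        ascending = all(a <= b for a, b in zip(w, w[1:]))
        descending = all(a >= b for a, b in zip(w, w[1:]))
        return ascending or descending

    print(qualifies("billowy"))  # True: b, i, l, l, o, w, y never go backwards

Using `<=` rather than `<` is what allows the doubled-letter case the rules explicitly permit.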
[ 0.0061402227729558945, 0.02260628156363964, -0.011253283359110355, 0.017599299550056458, -0.011481081135571003, -0.0029944097623229027, 0.006857633125036955, 0.02079884149134159, -0.0231215450912714, -0.00030303391395136714, -0.00221810070797801, -0.0001889751001726836, 0.015171214938163757,...
[ 0.7392665147781372, -0.325954794883728, 0.42477259039878845, 0.19480198621749878, -0.05922815576195717, -0.5878682136535645, 0.695436418056488, 0.49526336789131165, -0.47497913241386414, -0.10531440377235413, 0.21561364829540253, 0.5125728845596313, 0.02558756433427334, 0.10892659425735474...
I heard that you can "gild" all heroes on clickerheroes.com, which gives them 50% more efficiency per gild. Now I ask myself: does this in any way affect the amount of gold you need to upgrade the gilded hero?
[ -0.00014905419084243476, 0.013170678168535233, 0.004669440910220146, 0.005678921472281218, -0.0018670061836019158, -0.019122831523418427, 0.009540416300296783, -0.035600364208221436, -0.028380712494254112, 0.01927177608013153, 0.0007257357356138527, 0.016445955261588097, 0.008348300121724606...
[ 0.5257135033607483, 0.28153517842292786, 0.3154299557209015, 0.4411168098449707, -0.057607635855674744, -0.15491876006126404, 0.153141587972641, -0.15195055305957794, -0.6817589998245239, -0.1779848039150238, 0.38232895731925964, 0.5083553194999695, 0.1012042760848999, 0.22904013097286224,...
I'm having an issue with the `Eigensystem` command. I need to diagonalize a bunch of 3 by 3 complex-valued matrices, but more importantly, I need to keep the exact ordering of their eigenvalues once brought to their diagonal form. For example, if

    A = { {1.999, 0.000428712*I, 0} , {-0.000428712*I, 2.00072, 0} , {0, 0, -4.00057} }

then `Eigensystem[A]` returns the three eigenvalues (with their corresponding eigenvectors) listed in order of decreasing magnitude (absolute value). What is even more annoying is that if my loop runs into an already diagonal 3 by 3 matrix such as `B = {{2,0,0},{0,-3,0},{0,0,2}}`, it will reorder the eigenvalues as `{-3,2,2}`. Is there a command that gives me the eigenvalues without re-sorting them?
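There is no built-in "don't sort" switch that I know of, but as an untested sketch (the helper name is mine): one can undo the sorting for matrices like these by ordering the eigenpairs according to where each eigenvector's largest-magnitude component sits, which restores diagonal order for an already-diagonal matrix:

    (* reorder eigenpairs by the position of each eigenvector's
       dominant component; may still tie for degenerate eigenvalues
       with heavily mixed eigenvectors *)
    unsortedEigensystem[m_] := Module[{vals, vecs, pos},
      {vals, vecs} = Eigensystem[m];
      pos = Flatten[Ordering[Abs[#], -1] & /@ vecs];
      {vals[[Ordering[pos]]], vecs[[Ordering[pos]]]}
    ]

For `B = {{2,0,0},{0,-3,0},{0,0,2}}` this returns the eigenvalues as `{2, -3, 2}`, matching the diagonal.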
[ 0.011300732381641865, 0.018327485769987106, -0.014836503192782402, 0.019763940945267677, -0.026299001649022102, 0.025332452729344368, 0.004987803287804127, -0.01847851276397705, -0.013358192518353462, 0.014360269531607628, -0.011645043268799782, 0.006158079952001572, -0.019345566630363464, ...
[ 0.0464479960501194, 0.06923725455999374, 0.3313811123371124, -0.37894105911254883, -0.2168554961681366, 0.25049924850463867, 0.03991975635290146, -0.493169903755188, -0.022059613838791847, -0.4881812334060669, -0.18112331628799438, 0.3070674538612366, -0.30611544847488403, 0.13055488467216...
I'm working with a small network and I want to start the network explorer from a terminal. When I tried to type `xdg-open network:///server`, it opened Google Chrome and did nothing. I also tried `smb://server`, but that hasn't helped me either. I really need to run it from the terminal. Does anybody know how I can do it?
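A sketch of one workaround, assuming a GNOME-based desktop (the server name is a placeholder): call the file manager directly with the URI instead of going through `xdg-open`:

    nautilus network:///       # browse the whole network
    nautilus smb://server/     # open one server's shares

On other desktops the equivalent would be that desktop's file manager (e.g. `dolphin` on KDE), since the underlying issue is which application `xdg-open` hands these URI schemes to.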
[ -0.0047393799759447575, -0.01251767948269844, -0.0020333195570856333, -0.016109488904476166, -0.020909707993268967, -0.029928265139460564, 0.010484070517122746, 0.023320242762565613, -0.029836421832442284, -0.02825496904551983, -0.007860521785914898, 0.006099225487560034, 0.01321647781878709...
[ 0.3519158959388733, 0.16768860816955566, 0.387765109539032, -0.1436663269996643, -0.021666700020432472, -0.03229096904397011, 0.33366313576698303, 0.2582262456417084, -0.2041996717453003, -0.7288275361061096, 0.33832648396492004, 0.43310800194740295, -0.1271684467792511, 0.364568293094635,...
I'm sorry for my English. I have a problem with the function `wp_insert_post` in a script. I'm trying to create a new post when I receive a notification from PayPal. Everything works fine until I try to create the new post in the database. This is my code:

    <?php
    include_once($_SERVER['DOCUMENT_ROOT'].'/wp-load.php');

    // Send an email announcing "received IPN"
    // (note: $req is only assigned further down, so this first body is empty)
    $mail_From = "xxxxxx@gmail.com";
    $mail_To = "xxxxxx@gmail.com";
    $mail_Subject = "received IPN";
    $mail_Body = $req;
    mail($mail_To, $mail_Subject, $mail_Body, $mail_From);
    ?>
    <?php
    // STEP 1: read POST data
    // Reading POSTed data directly from $_POST causes serialization issues
    // with array data in the POST. Instead, read raw POST data from the input stream.
    $raw_post_data = file_get_contents('php://input');
    $raw_post_array = explode('&', $raw_post_data);
    $myPost = array();
    foreach ($raw_post_array as $keyval) {
        $keyval = explode('=', $keyval);
        if (count($keyval) == 2)
            $myPost[$keyval[0]] = urldecode($keyval[1]);
    }
    // read the IPN message sent from PayPal and prepend 'cmd=_notify-validate'
    $req = 'cmd=_notify-validate';
    $get_magic_quotes_exists = false; // initialize so the check below never reads an undefined variable
    if (function_exists('get_magic_quotes_gpc')) {
        $get_magic_quotes_exists = true;
    }
    foreach ($myPost as $key => $value) {
        if ($get_magic_quotes_exists == true && get_magic_quotes_gpc() == 1) {
            $value = urlencode(stripslashes($value));
        } else {
            $value = urlencode($value);
        }
        $req .= "&$key=$value";
    }
    // STEP 2: POST IPN data back to PayPal to validate
    $ch = curl_init('https://www.paypal.com/cgi-bin/webscr');
    curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $req);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 1);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
    curl_setopt($ch, CURLOPT_FORBID_REUSE, 1);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Connection: Close'));
    // In wamp-like environments that do not come bundled with root authority
    // certificates, please download 'cacert.pem' from
    // "http://curl.haxx.se/docs/caextract.html" and set the directory path
    // of the certificate as shown below:
    // curl_setopt($ch, CURLOPT_CAINFO, dirname(__FILE__) . '/cacert.pem');
    if (!($res = curl_exec($ch))) {
        // error_log("Got " . curl_error($ch) . " when processing IPN data");
        curl_close($ch);
        exit;
    }
    curl_close($ch);
    // STEP 3: Inspect IPN validation result and act accordingly
    if (strcmp($res, "VERIFIED") == 0) {
        // Send an email announcing "enter in verified IF"
        $mail_From = "xxxxxxx@gmail.com";
        $mail_To = "xxxxxxx@gmail.com";
        $mail_Subject = "enter in verified IF";
        $mail_Body = $req;
        mail($mail_To, $mail_Subject, $mail_Body, $mail_From);
        // You should validate against these values.
        $donCause = $_POST['item_number'];
        $txnID = $_POST['txn_id'];
        $firstName = $_POST['first_name'];
        $lastName = $_POST['last_name'];
        $addressCountry = $_POST['address_country'];
        $addressCity = $_POST['address_city'];
        $addressStreet = $_POST['address_street'];
        $addressZip = $_POST['address_zip'];
        $payerEmail = $_POST['payer_email'];
        $payment_gross = $_POST['mc_gross'];
        $payment_status = $_POST['payment_status'];
        if ($payment_status == 'Completed') {
            // Send an email announcing "enter in payment_status==completed"
            $mail_From = "xxxxxx@gmail.com";
            $mail_To = "xxxxxx@gmail.com";
            $mail_Subject = "enter in payment_status==completed";
            $mail_Body = $req;
            mail($mail_To, $mail_Subject, $mail_Body, $mail_From);
            // Create post object
            $my_post = array(
                'post_title' => $txnID,
                'post_status' => 'publish',
                'post_author' => 1,
                'comment_status' => 'closed',
                'ping_status' => 'closed',
                'post_type' => 'post_pledges',
            );
            $post_id = wp_insert_post($my_post, true);
            // Send an email announcing "after post_pledges creation"
            $mail_From = "ragazzin@gmail.com";
            $mail_To = "ragazzin@gmail.com";
            $mail_Subject = "after post_pledges creation";
            $mail_Body = $post_id;
            mail($mail_To, $mail_Subject, $mail_Body, $mail_From);
            add_post_meta($post_id, "wpl_pledge_cause", $donCause);
            add_post_meta($post_id, "wpl_pledge_transaction_id", $txnID);
            add_post_meta($post_id, "wpl_pledge_first_name", $firstName);
            add_post_meta($post_id, "wpl_pledge_last_name", $lastName);
            add_post_meta($post_id, "wpl_pledge_country", $addressCountry);
            add_post_meta($post_id, "wpl_pledge_city", $addressCity);
            add_post_meta($post_id, "wpl_pledge_address", $addressStreet);
            add_post_meta($post_id, "wpl_pledge_postal_code", $addressZip);
            add_post_meta($post_id, "wpl_pledge_email", $payerEmail);
            add_post_meta($post_id, "wpl_pledge_donation_amount", $payment_gross);
            add_post_meta($post_id, "wpl_pledge_payment_source", 'paypal');
            add_post_meta($post_id, "wpl_pledge_payment_Status", $payment_status);
        }
        // Response is VERIFIED
        // Send an email announcing the IPN message is VERIFIED
        $mail_From = "xxxxxx@gmail.com";
        $mail_To = "xxxxxxx@gmail.com";
        $mail_Subject = "VERIFIED IPN";
        $mail_Body = $req;
        mail($mail_To, $mail_Subject, $mail_Body, $mail_From);
    } else if (strcmp($res, "INVALID") == 0) {
        // IPN invalid, log for manual investigation
        // Notification protocol is NOT complete, begin error handling
        // Send an email announcing the IPN message is INVALID
        $mail_From = "xxxxxxx@gmail.com";
        $mail_To = "xxxxxxx@gmail.com";
        $mail_Subject = "INVALID IPN";
        $mail_Body = $req;
        mail($mail_To, $mail_Subject, $mail_Body, $mail_From);
    }
    ?>

I read about a problem with an infinite loop, and maybe that's also happening to me, because the script doesn't go on when it tries to execute the function. Do you have any ideas?
[ -0.006712728645652533, 0.003295803675428033, 0.0020059715025126934, 0.015192724764347076, -0.006833734922111034, 0.013313105329871178, 0.007533069234341383, 0.0010930539574474096, -0.015649113804101944, -0.005680979695171118, -0.01416833233088255, 0.008443801663815975, -0.015096481889486313,...
[ 0.2005515843629837, 0.36629047989845276, 0.5045239329338074, -0.25595954060554504, -0.4318959712982178, 0.08368610590696335, 0.6216983199119568, 0.35069888830184937, -0.04298662021756172, -0.6296824216842651, 0.15041151642799377, 0.055434782058000565, -0.3267853558063507, 0.236138433218002...
I wonder if feature scaling like this always makes sense for neural networks: Let $T$ be the training set and let $x_i \in \mathbb{R}^n$ be the feature vector of $d_i \in T$. Then add another preprocessing step so that $x_i' \gets \frac{x_i - \text{mean}(T)}{\max(T) - \min(T)}$ where $\max$ and $\min$ are applied separately to each dimension. This preprocessing step guarantees that each feature has a mean of $0$ and a range of $1$. I've heard that this is desired for neural nets. **Do you know any sources for that?** (Or sources that claim that feature normalization is not always good?) **Note**: The range is $1$, not necessarily the variance. The variance of a zero-mean random variable $X$ is calculated like this: $$\operatorname{Var}(X) = E(X^2) - (\underbrace{E(X)}_{=0})^2 = E(X^2).$$ If you have, for example, $X$ with $P(X=-0.5) = 0.5 = P(X=+0.5)$, you have a variance of $\operatorname{Var}(X) = E(X^2) - E(X)^2 = (0.5 \cdot 0.25 + 0.5 \cdot 0.25) - 0 = 0.25$. As $\max(X) - \min(X) = 0.5 - (-0.5) = 1$ and $\text{mean}(X) = 0$, this feature scaling would not change anything.
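For concreteness, a minimal sketch of the preprocessing step itself (plain NumPy; the function name is mine):

    import numpy as np

    def scale_features(X):
        # X: (num_samples, num_features); mean and range are taken per feature
        return (X - X.mean(axis=0)) / (X.max(axis=0) - X.min(axis=0))

    X = np.array([[1.0, 10.0], [2.0, 30.0], [3.0, 50.0]])
    Xs = scale_features(X)
    print(Xs.mean(axis=0))                   # ~[0, 0]
    print(Xs.max(axis=0) - Xs.min(axis=0))   # [1, 1]

Each column of the result has mean $0$ and range $1$, exactly as described above (a constant feature would divide by zero and needs special-casing).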
[ -0.00015371263725683093, 0.016348805278539658, -0.0012587818782776594, -0.0033518923446536064, -0.013798006810247898, 0.010145192965865135, 0.0068473247811198235, -0.0036053257063031197, -0.010880833491683006, -0.0007629045285284519, -0.014670971781015396, 0.014841591939330101, -0.0127983549...
[ 0.19394545257091522, -0.43790653347969055, 0.31262803077697754, -0.17975133657455444, -0.0279396902769804, 0.21938377618789673, -0.012292835861444473, -0.24000687897205353, 0.009075132198631763, -0.8148722052574158, 0.0691341906785965, 0.7972503900527954, -0.18577495217323303, 0.0193781927...
What are the fundamental differences between the mainstream *NIX shells, and what scenarios might prompt you to use one over the other? I understand that some of it probably comes down to user preference, but I've only ever used bash and I'm interested to hear where another shell might be useful. Also, is there an impact on user-written shell scripts when running under one shell or another, or is it simply a matter of changing the shell named at the top of the file (the shebang line)? My instinct says it's not that easy.
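A small illustration of why that instinct is right (a sketch; which shell `/bin/sh` points to varies by system): syntax that works in bash is not guaranteed to work under a stricter POSIX shell, so changing only the shebang can break a script:

    #!/bin/sh
    # [[ ... ]] is a bash/ksh/zsh extension; under dash (the /bin/sh on
    # Debian/Ubuntu) this line fails with "[[: not found" instead of testing
    [[ -n "$1" ]] && echo "got an argument"

The portable spelling would be `[ -n "$1" ]`, which every POSIX shell accepts.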
[ 0.0029929836746305227, 0.0173327773809433, -0.0075934287160634995, 0.024738922715187073, 0.008283887058496475, 0.008075428195297718, 0.00866694189608097, -0.003654312575235963, -0.026640092954039574, -0.00555709982290864, -0.02904057316482067, 0.006399065721780062, 0.005442027002573013, 0....
[ 0.5890827775001526, 0.09671472012996674, -0.3593655228614807, -0.02984299696981907, -0.061465680599212646, -0.031561821699142456, 0.2811061441898346, 0.05520348250865936, 0.03445082902908325, -0.5245262384414673, 0.3921595513820648, 0.9544633626937866, 0.03204937279224396, -0.1551447808742...
Most of the usage of "matter-of-factly" that I've seen is to describe a manner of speaking: "He said, matter-of-factly, ...", etc. A friend brought up the following usage, which seems wrong, but I can't pinpoint exactly what is wrong: "Matter of factly, I don't know. I know from my dad's experience." What's the view on this? A couple of points:

* The phrasal form "As a matter of fact, I don't know. ..." seems correct.
* A similar usage of "literally" works: "I literally don't know." or "Literally, I don't know."
[ -0.0008719214238226414, -0.002839423716068268, -0.017151368781924248, 0.006901318673044443, -0.007765737362205982, -0.011497492901980877, 0.00899866409599781, 0.0045438846573233604, -0.01431146077811718, 0.009059765376150608, 0.006372020114213228, 0.008681750856339931, 0.007937629707157612, ...
[ 0.5700746774673462, 0.4260849058628082, 0.03727070242166519, 0.06606121361255646, -0.4109534025192261, -0.03088447079062462, 0.31237897276878357, 0.31600356101989746, -0.2603808045387268, -0.14527510106563568, -0.012267866171896458, 0.09052953124046326, 0.21469131112098694, 0.5313092470169...
I come from a scientific biology background where we also use Python a lot. Now that I've started with Web development, I've consistently found myself wondering why it is that JavaScript is the primary client-side language on the Web. Is JavaScript's predominance a historical accident, or something else? Also, are there any hurdles to integrating Python into client-side scripting?
[ -0.014399481937289238, 0.008259239606559277, -0.01063845306634903, -0.0013041727943345904, -0.03343694657087326, -0.004924668464809656, 0.006831036414951086, 0.028321566060185432, -0.01959996484220028, -0.026804884895682335, -0.01396614033728838, 0.006227289326488972, 0.0026153353974223137, ...
[ 0.7397255897521973, 0.3584275245666504, -0.3678901493549347, -0.23090724647045135, -0.16565918922424316, -0.07786337286233902, 0.15302450954914093, 0.6071389317512512, -0.35082775354385376, -0.483626127243042, 0.06795985996723175, 0.5946360230445862, 0.005134667735546827, -0.04961468279361...
> **Possible Duplicate:**
> Absolute positioning in beamer
> Insert graphic at precise place on a page

I would like to add a picture to my title page; however, for effect, the picture needs to be aligned with the right side of the paper and the bottom of the paper. How can I do this? I'm using `scrartcl` as my main `documentclass`. I'm not averse to using the `titlepage` environment; I just haven't used it before (and I also wouldn't know how to do this on a normal page). Edit: this is what my relevant code looks like:

    \begin{document}
    \maketitle
    \vfill\hfill\includegraphics{../front_minifig}
    \pagebreak[4]
    \tableofcontents
    \pagebreak[4]
    \section{Lorem}
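One approach, as an untested sketch (assumes the `eso-pic` package is available; the starred command limits the effect to the current page): place the graphic on the page background, anchored to the paper's lower-right corner, independent of the text-block margins:

    \usepackage{eso-pic}   % in the preamble

    % just before \maketitle:
    \AddToShipoutPictureBG*{%
      \AtPageLowerLeft{%
        \makebox[\paperwidth][r]{\includegraphics{../front_minifig}}}}

`\AtPageLowerLeft` measures from the physical paper corner rather than the type area, which is what lets the image sit flush with the page edges regardless of the margins `scrartcl` sets.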
[ -0.005324074998497963, 0.0005767091060988605, -0.002838419983163476, 0.013847768306732178, -0.003552861977368593, 0.0022043464705348015, 0.0068160961382091045, 0.022487588226795197, -0.014679351821541786, -0.0030156837310642004, -0.022639626637101173, 0.006374262273311615, -0.008103094995021...
[ 0.15909093618392944, 0.09694457054138184, 0.830014169216156, 0.0015965630300343037, -0.045130275189876556, -0.061565887182950974, 0.35568907856941223, -0.09594158083200455, -0.1426691710948944, -0.9678360223770142, 0.09893929213285446, 0.5113198161125183, -0.11216197907924652, -0.062104206...