Columns: text (string, lengths 23 to 30.4k), embeddings_A (list), embeddings_B (list)
This is my second playthrough of Mass Effect 3. The first time it was flawless, but for my second playthrough I have all major single-player DLCs installed. I have Javik and Liara as my teammates, and at the point where you go to the Ardat-Yakshi monastery and meet Samara (depending on your ME2 choices) - after two battles with banshees you take an elevator to the great hall. In my case, however, the elevator keeps on going forever. The game doesn't hang, but the elevator never stops, and if I choose any option from the menu (load, save, squad, etc.) then the game freezes. The only solution I then have is to resort to `ctrl`+`alt`+`del`. I've restarted it several times and tried different combinations of squadmates, but the game always gets stuck on the elevator. Any ideas on how to resolve this issue?
[ 0.019543468952178955, 0.009572306647896767, 0.0029990533366799355, -0.0029330102261155844, 0.006170512177050114, -0.006325762718915939, 0.006833817809820175, -0.014111305586993694, -0.010378161445260048, -0.01080265361815691, -0.016903089359402657, 0.012998980470001698, 0.0021962025202810764...
[ 0.06948935985565186, -0.3183617889881134, 0.08164532482624054, -0.054136116057634354, -0.8753538727760315, 0.14895598590373993, 0.8979301452636719, -0.37660959362983704, -0.20305603742599487, -0.704650342464447, -0.1735447198152542, -0.15079203248023987, -0.14816832542419434, 0.04261517152...
A long time ago I was thinking about how the Imperial system of measurements is arbitrary and annoying, and I decided to design the best system of units ever (I wasn't very old then). I worked on this idea occasionally for years without making any progress. When I finally got serious about it, I discovered Planck units, and that seemed to settle the issue. Now my problem is that Planck units are so small that they can't be used for "normal" things without huge exponents, most of these being very different depending on the quantity being measured. Solving human-scale equations with these units by computer would thus either be very inaccurate due to floating-point errors (for numbers that are even within range) or very slow due to the need for extended precision. They also go "up" but not "down", which renders almost half of the possible signed floating-point values useless. I considered the idea of creating a new system by raising each Planck unit by a standard exponent or multiplier, but I think this would bring some into an acceptable range but not others. So what I want to know is whether a system of natural units exists that uses units that are appropriate for work with technology that humans can interact with, or can be made appropriate without introducing too many arbitrary elements.
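To make the scale mismatch concrete, here is a rough worked illustration (added for context, using approximate CODATA values, not from the original post; note how the exponents differ between quantities, and that the Planck mass, unlike the length and time, is not even small):

    \ell_P \approx 1.6\times10^{-35}\,\mathrm{m}\ \Rightarrow\ 1\,\mathrm{m}\approx 6.2\times10^{34}\,\ell_P
    t_P    \approx 5.4\times10^{-44}\,\mathrm{s}\ \Rightarrow\ 1\,\mathrm{s}\approx 1.9\times10^{43}\,t_P
    m_P    \approx 2.2\times10^{-8}\,\mathrm{kg}\ \Rightarrow\ 1\,\mathrm{kg}\approx 4.6\times10^{7}\,m_P

A single global multiplier therefore cannot normalize all of them at once, which is exactly the problem described above.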
[ 0.014559776522219181, 0.01176287792623043, -0.005475365091115236, 0.01434446033090353, -0.007891148328781128, -0.012998895719647408, 0.0057599819265306, -0.03466042876243591, -0.01854655146598816, -0.016525033861398697, -0.008864237926900387, 0.006437069270759821, 0.0033164932392537594, -0...
[ 0.3175157904624939, 0.35429647564888, 0.19566461443901062, 0.1684027761220932, -0.1923464834690094, 0.378949910402298, 0.08060715347528458, -0.12805862724781036, -0.11545909196138382, -0.6442341804504395, 0.378980815410614, 0.0104207219555974, 0.15943124890327454, 0.5134323239326477, 0.0...
I'm running Fedora 20, and I forgot to change my swap size allocation during installation. I have 16G of RAM, so I'd like to allocate 32G of swap space, but the installer created a swap of only 8G for me. The Fedora install is on an LVM2 partition. Is there a way I can increase the swap size without reinstalling Fedora?
[ 0.00807256530970335, 0.004883989226073027, -0.020780064165592194, 0.021137189120054245, -0.0005652482504956424, -0.03978767618536949, 0.012925395742058754, 0.013624906539916992, -0.02480444125831127, -0.01810608059167862, -0.007356762420386076, 0.011946742422878742, -0.012263080105185509, ...
[ 0.3901608884334564, -0.013854365795850754, 0.6475039124488831, -0.3199920654296875, 0.36903244256973267, 0.22300343215465546, -0.3183897137641907, -0.3965442478656769, -0.5664787888526917, -0.5859588980674744, -0.13553737103939056, 0.5425315499305725, -0.3202877640724182, -0.22393038868904...
I recently installed CyanogenMod 7.1 on my Samsung Galaxy S i9000 and I have some problems now to get ADB on my Windows 7 64-bit computer to recognize the device. Currently, `adb devices` returns no entry for my phone and the Device Manager shows a device called "Galaxy S" but with a yellow exclamation mark because a driver is missing. I've tried to install Samsung Kies as well as the Google USB driver for Windows Revision 4 and point the device manager at the directories with those drivers. It doesn't find anything it can use there. Which drivers do I actually need for the phone now that it is running CyanogenMod 7.1? And where do I get those and how can I install them?
[ -0.02528233267366886, -0.009152554906904697, -0.01837838813662529, 0.004559561610221863, -0.01816178299486637, 0.012800700031220913, 0.009851310402154922, 0.008754413574934006, -0.010005069896578789, -0.04074249416589737, -0.0138277318328619, -0.00007790152449160814, -0.02095280960202217, ...
[ 0.11220043152570724, 0.42649614810943604, 0.5922589898109436, -0.12739624083042145, 0.12010125815868378, -0.11130949854850769, 0.09630004316568375, 0.3349635601043701, -0.17138242721557617, -0.5912240743637085, -0.25406384468078613, 0.3450299799442291, -0.36754849553108215, 0.5376295447349...
How can you have a negative voltage? I don't really understand the concept of negative voltage; how can it exist?
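A short worked example (added for illustration; this is the standard sign convention, not anything specific to this post): voltage is a potential difference between two points, so its sign simply records which point is taken as the reference.

    V_{AB} = V_A - V_B = -(V_B - V_A) = -V_{BA}

If point A is 5 V above point B, then measuring from B you read +5 V at A, while measuring from A you read -5 V at B; the same physical situation yields a negative number once the reference is swapped.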
[ 0.027916837483644485, 0.01814616285264492, -0.0008022122783586383, -0.003368677804246545, -0.017949797213077545, -0.012970494106411934, 0.013966393657028675, -0.018915211781859398, -0.033143095672130585, 0.007687846664339304, -0.035297710448503494, 0.039621930569410324, -0.02395017072558403,...
[ 0.802856981754303, 0.16952653229236603, 0.17197275161743164, 0.15749980509281158, -0.36444947123527527, -0.39279088377952576, 0.40342462062835693, 0.05115445330739021, -0.07430752366781235, 0.18160557746887207, 0.32896167039871216, 0.2019604742527008, -0.45854446291923523, 0.52231937646865...
I've produced with mapnik my own tiles with EPSG:31256 on a shapefile. I've structured the tiles in an OSM-like tree: zoom/x/y.png, with zoom from 0 to 2 and x, y starting from 1:

    tiles
    ├── 0
    │   └── 1
    │       └── 1.png
    ├── 1
    │   ├── 2
    │   │   ├── 2.png
    │   │   └── 3.png
    │   └── 3
    │       ├── 2.png
    │       └── 3.png
    └── 2
        ├── 4
        │   ├── 4.png
        │   ├── 5.png
        │   ├── 6.png
        │   └── 7.png
        ├── 5
        │   ├── 4.png
        │   ├── 5.png
        │   ├── 6.png
        │   └── 7.png
        ├── 6
        │   ├── 4.png
        │   ├── 5.png
        │   ├── 6.png
        │   └── 7.png
        └── 7
            ├── 4.png
            ├── 5.png
            ├── 6.png
            └── 7.png

Tiles are 256x256 pixels, and the tile at zoom 0 covers the entire shape, which is 7500x7500 meters (I used this dimension to set up the scales). From zoom 1 on, tiles are split in two along each axis, and so on. The map div element is 512x512 pixels. To display the tiles, I would like to use an OpenLayers XYZ layer with my own `get_my_url` function, as described here:

    function create_map () {
        Proj4js.defs["EPSG:31256"] = "+proj=tmerc +lat_0=0 +lon_0=16.33333333333333 +k=1 +x_0=0 +y_0=-5000000 +ellps=bessel +towgs84=577.326,90.129,463.919,5.137,1.474,5.297,2.4232 +units=m +no_defs ";
        var proj = new OpenLayers.Projection("EPSG:31256");
        var bounds = OpenLayers.Bounds(5000,216250,12500,223750);
        var minZoom = 0;
        var maxZoom = 2;
        var map = new OpenLayers.Map({
            div: "map",
            allOverlays: true,
            projection: proj,
            maxExtent: bounds,
            allowSelection: true,
            scales: [7500, 3750, 1875],
            units: 'm',
            controls: [new OpenLayers.Control.PanZoomBar(),
                       new OpenLayers.Control.Navigation()]
        });
        var layer = new OpenLayers.Layer.XYZ(
            "my_tile_layer",
            '/tiles/',
            {
                maxZoomLevel: maxZoom,
                minZoomLevel: minZoom,
                getURL: get_my_url
            }
        );
        map.addLayer(layer);
        var center = bounds.getCenterLonLat();
        map.setCenter(center, minZoom);
    };

    function get_my_url (bounds) {
        var res = this.map.getResolution();
        // Note, I've added +1 because I do not have x and y coordinates equal to 0.
        var x = Math.round((bounds.left - this.maxExtent.left) / (res * this.tileSize.w) + 1);
        var y = Math.round((this.maxExtent.top - bounds.top) / (res * this.tileSize.h) + 1);
        var z = this.map.getZoom();
        var path = z + "/" + x + "/" + y + ".png";
        var url = this.url;
        if (url instanceof Array) {
            url = this.selectUrl(path, url);
        }
        return url + path;
    }

My problem is that tile retrieval is wrong: at zoom 0, I correctly load 0/1/1.png, but it is positioned at the extreme right of the map: I had to pan to make it visible. I do not understand why, because my center variable is correctly set to (8750, 220000). If I zoom in I see nothing, and I get nonexistent tile names such as 1/1/1.png. Could you please help me? Thanks in advance.
[ 0.0014879056252539158, 0.00877567008137703, -0.0015228313859552145, 0.02655007690191269, -0.008697407320141792, 0.01596800424158573, 0.006125404499471188, 0.003242124803364277, -0.012500114738941193, 0.007327395491302013, -0.007638940121978521, 0.0024551991373300552, -0.00285316607914865, ...
[ -0.10562743991613388, 0.12097486853599548, 0.9168111085891724, -0.06857027113437653, -0.1607992798089981, 0.6323823928833008, 0.06732303649187088, -0.505875825881958, -0.4175748825073242, -0.83766770362854, 0.10810603201389313, 0.21364131569862366, 0.16782931983470917, -0.17642861604690552...
I am in the beginning phase of creating a mobile MMO with my team. The server software will be written in JavaScript using NodeJS, and the client software in Lua using Corona. We need a tool to auto-generate documentation for both the server-side and client-side code. Are there any tools which can generate documentation for both Lua and JavaScript? And as a bonus: we are hosting our project on Bitbucket, and the Bitbucket wiki uses the Creole markup language, so if possible I want the tool to export to Creole. Edit: I know about tools for generating documentation for one of the two languages. However, I don't want two different documentation styles in one project; therefore one tool which can generate documentation for both languages would be great.
[ 0.0025166755076497793, 0.0008389110444113612, 0.0058492328971624374, -0.004361133091151714, -0.001390824094414711, -0.004467959050089121, 0.01043054461479187, 0.056375518441200256, -0.022197430953383446, 0.004804870579391718, -0.024236250668764114, 0.016312820836901665, 0.012847518548369408,...
[ 0.5801377892494202, 0.1496642380952835, 0.5100783705711365, -0.14323176443576813, 0.05279035121202469, 0.12681686878204346, -0.24098466336727142, 0.1049441248178482, -0.09929293394088745, -0.7084574699401855, 0.037525687366724014, 0.1945612132549286, 0.17184790968894958, -0.003303950652480...
How can I achieve modular title pages in LaTeX (XeLaTeX), so that I only have to change one word in the .tex file to switch title pages? All the files must be defined in the project's directory, not somewhere else on the system (the project is versioned and shared). I am defining my own class on top of `book` called `yapbook`. The relevant piece of the class:

    \NeedsTeXFormat{LaTeX2e}
    \ProvidesClass{yapbook}[2010/10/04 Yet Another Project''s book class]
    %------------------------------------------------------------------------------
    % useful for tex programming
    %------------------------------------------------------------------------------
    \RequirePackage{needspace}
    \RequirePackage[usenames,dvipsnames]{color}
    \RequirePackage{kvoptions}
    \SetupKeyvalOptions{
      family=YAPBOOK,
      prefix=YAPBOOK@
    }
    \DeclareStringOption[phpro]{titlepagestyle}
    \DeclareOption*{\PassOptionsToClass{\CurrentOption}{book}}
    \ProcessKeyvalOptions*
    \ProcessOptions
    \LoadClass{book}
    % here more code
    \def\@maketitle{
      \RequirePackage{titlepage-\YAPBOOK@titlepagestyle}
    }
    \renewcommand*{\maketitle}{
      \@maketitle
      % here more stuff
    }

And the file `titlepage-phpro` looks like this:

    \begin{titlepage}
      \thispagestyle{empty}
      \null
      \vskip 2em%
      \begin{center}%
        \textsc{\huge \@title}
        \vspace{1em}
        %\hrule
        \vspace{3em}
        \textit{\textbf{\shorttitle}}
        \vspace{3em}
        \hrule
        \vspace{8em}
        \authors \\
        \vspace{3em}
        \@date
      \end{center}
      \vfill
      \begin{flushright}
        O iniţiativă \emph{Yet Another Project}\\
        Homepage: \url{http://yet-another-project.github.com/}
      \end{flushright}
    \end{titlepage}

**Now I do realize** that this is completely wrong, but I don't know how to wire these pieces together correctly so that it works. The individual chunks of LaTeX code used to work. It is a requirement to have the definition of the title pages in individual files, to keep things modular, and to not have to specify too many things in the "client code" (the .tex master file). Another **very important** requirement is to make the usage of this infrastructure **semantic**, so there should be no `\include` in the client code. The complete code can be found at https://github.com/yet-another-project/booktemplate
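For what it's worth, one possible wiring (a sketch under my own assumptions, untested against the full template): `\RequirePackage` can only be issued in the preamble, so loading the title-page file at `\maketitle` time fails; reading it with `\input` instead, with the filename selected by the class option, keeps the per-style files and the one-word switch:

    % in yapbook.cls (sketch)
    \renewcommand*{\maketitle}{%
      \makeatletter % the title-page files use \@title, \@date, etc.
      \input{titlepage-\YAPBOOK@titlepagestyle}% e.g. reads titlepage-phpro.tex
      \makeatother
    }

The client code would then only change the option, e.g. `\documentclass[titlepagestyle=phpro]{yapbook}`.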
[ -0.008025013841688633, 0.01511194184422493, -0.0016162856481969357, 0.011016849428415298, -0.0027861008420586586, 0.01435096189379692, 0.007782677188515663, 0.01276854332536459, -0.012803297489881516, -0.020141253247857094, -0.006518895737826824, 0.011311275884509087, 0.011763093993067741, ...
[ 0.3080955743789673, 0.2849375307559967, 0.46083664894104004, -0.31361573934555054, 0.1896154135465622, 0.13498584926128387, 0.05097873508930206, -0.345810204744339, -0.2162448763847351, -0.5444336533546448, -0.07895958423614502, 0.5284392833709717, -0.28712812066078186, -0.0064650927670300...
We are building a standalone ArcGIS Engine application and we are going to add support for printing. There are several APIs for printing a PageLayout in ArcObjects; in ArcGIS 10 there are at least three different ways to do this:

  * The new PrintAndExport class, which the documentation seems to promote (Conceptual help - Printing maps).
  * It's also possible to print a layout using the PrintPageLayout method (PageLayoutControl Sample: Printing with the PageLayoutControl).
  * The Output method on IActiveView can also be used.

Can someone give recommendations on which API we should choose? Are there any known pitfalls in these APIs? Update: I found one pitfall during my research:

  * When working with ArcGIS Server layers, only PrintAndExport draws the patches/swatches on the legend when printing the layout.
[ -0.013269278220832348, 0.004878830164670944, -0.011234050616621971, 0.005996289197355509, -0.020274516195058823, 0.012776155956089497, 0.009155931882560253, 0.013719342648983002, -0.017237065359950066, -0.03478476032614708, -0.0012345912400633097, 0.01584581658244133, -0.0010217239614576101,...
[ 0.4561637043952942, 0.028629887849092484, 0.23883752524852753, 0.23482294380664825, -0.3645647168159485, -0.10187132656574249, -0.048283651471138, -0.17248386144638062, 0.038435112684965134, -0.7969251275062561, 0.42413491010665894, 0.5358954668045044, -0.15565131604671478, -0.263536095619...
I have seen discussions of unwanted nulls in the output in the context of building lists with conditions on the elements, but that is not involved here. I would like to know where the nulls come from and how to avoid generating them.

    Clear["Global`*"]
    localGroup = AstronomicalData["LocalGroup"];
    properties = {
       "AlphanumericName", "StandardName", "AlternateStandardNames",
       "NGCNumber", "ApparentMagnitude", "Constellation", "Declination",
       "RightAscension", "DistanceLightYears", "GalaxyType", "HubbleType",
       "RadialVelocity", "Redshift"
       };
    lgTable = {#,
        Table[
         {
          properties[[n]],
          AstronomicalData[#, properties[[n]]]
          },
         {n, 1, Length[properties]}
         ]
        } & /@ localGroup;
    displayTable[record_] := Module[{},
      Print[
         #[[1]] <> ": ",
         #[[2]]
         ] & /@ record[[2]];
      Print["\n"]
      ]
    (* display two records for illustration *)
    displayTable[#] & /@ lgTable[[1 ;; 2]]
[ 0.005410904064774513, 0.010156441479921341, 0.0011262840125709772, 0.005415656138211489, -0.02786070853471756, 0.006849590688943863, 0.0050179981626570225, 0.023885143920779228, -0.012056522071361542, 0.02144540846347809, 0.008767601102590561, 0.004464247263967991, 0.004490047227591276, 0....
[ 0.16016289591789246, 0.011778837069869041, 0.4292829632759094, -0.058740366250276566, 0.09345889091491699, -0.05833102762699127, 0.11204426735639572, 0.019465822726488113, 0.005220451857894659, -0.5451622605323792, -0.41396355628967285, 0.36895954608917236, -0.4025647044181824, 0.109024390...
I just stumbled over a global variable called `$content_width`. It was new to me; so far I have seen that it's used in themes, both the standard Twenty Ten one and third-party ones. It looks like it contains the width of the content area in pixels. But since global variables are not documented in the Codex, I have trouble finding valid and well-founded information about it. When was the `$content_width` global variable introduced, and for what reason? Is there a ticket related to it?
[ -0.038268521428108215, 0.004717240110039711, -0.002084864303469658, 0.017752887681126595, 0.00960833951830864, 0.006221357267349958, 0.00717764999717474, 0.0016813258407637477, -0.01865653321146965, -0.009811926633119583, -0.011521593667566776, 0.012228747829794884, -0.007969758473336697, ...
[ 0.4619337320327759, -0.19009026885032654, 0.5316044092178345, 0.41737157106399536, -0.27898815274238586, -0.21571537852287292, -0.4552355110645294, 0.34154507517814636, -0.49157950282096863, -0.5413285493850708, 0.2973484396934509, -0.09184585511684418, -0.1805517077445984, 0.6071892976760...
I would like to specify a Toeplitz variance matrix for the random effects of my `nlme` model in `R`. Is it possible? More precisely, my model looks like

    lme(y ~ dose, data = dat, random = list(Lot = pdSymm(~ 0 + dose)))

but I want a Toeplitz matrix instead of the unstructured matrix which is here specified by `pdSymm`. There's no pdToeplitz-like class available in `nlme`, but maybe there is another way to do this? I don't know if "Toeplitz" is standard terminology; this is the `SAS` terminology for a variance matrix with the only restriction that the diagonal entries are all equal. (EDIT) Sorry, my above definition of Toeplitz is erroneous. A Toeplitz matrix $\Sigma=(m_{ij})$ is the case when $m_{ij}$ depends on $i$ and $j$ only through $|i-j|$. This is not what I'm looking for; I really want a variance matrix with the only restriction that the diagonal entries are all equal.
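To illustrate the distinction drawn in the edit (example added for clarity), a 3x3 Toeplitz structure is

    \Sigma_{\mathrm{Toeplitz}} =
      \begin{pmatrix} \sigma^2 & \tau_1   & \tau_2 \\
                      \tau_1   & \sigma^2 & \tau_1 \\
                      \tau_2   & \tau_1   & \sigma^2 \end{pmatrix},

whereas the structure actually sought here constrains only the diagonal and leaves the off-diagonal entries free:

    \Sigma =
      \begin{pmatrix} \sigma^2  & \tau_{12} & \tau_{13} \\
                      \tau_{12} & \sigma^2  & \tau_{23} \\
                      \tau_{13} & \tau_{23} & \sigma^2 \end{pmatrix}.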
[ 0.02235494554042816, 0.010840240865945816, -0.005076471716165543, 0.01390942931175232, 0.0016278857365250587, 0.014156395569443703, 0.009290621615946293, 0.011080591939389706, -0.013255959376692772, 0.027457762509584427, -0.014738000929355621, 0.011683563701808453, -0.01476377621293068, 0....
[ 0.36572861671447754, -0.27458056807518005, 0.1667695939540863, -0.12856274843215942, -0.2743571102619171, 0.17297673225402832, -0.13234634697437286, -0.6030213236808777, 0.02965473383665085, -0.7693286538124084, 0.35618817806243896, 0.5784798264503479, -0.11576905846595764, 0.2478975653648...
I have quarterly unbalanced panel data and I want to de-trend my dependent variable to make it stationary. How do I do it? I don't want to take differences, as that would shorten my sample. The residual series that I get after regressing my dependent variable on a time trend does not remove the unit root. It should be noted that my independent variables are stationary; should I transform/detrend them as well? Also, can I regress differences on levels in a panel data setting, or should all variables be of the same order of integration? Would the Hodrick-Prescott filter be a good choice for detrending quarterly observations? My sample runs from 2002q2 to 2013q4.
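For reference (added for context; this is the standard definition, not from the original post), the Hodrick-Prescott filter picks the trend $\tau_t$ by minimizing

    \sum_{t=1}^{T} (y_t - \tau_t)^2 \;+\; \lambda \sum_{t=2}^{T-1} \big[ (\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1}) \big]^2,

with the conventional smoothing parameter $\lambda = 1600$ for quarterly data; the cycle $y_t - \tau_t$ is the detrended series. Note that in a panel it would have to be applied series by series within each cross-sectional unit.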
[ 0.005294158589094877, 0.021442154422402382, -0.01882156915962696, 0.015046180225908756, -0.0011677638394758105, -0.024346131831407547, 0.010499419644474983, 0.021963827311992645, -0.013675852678716183, -0.03431273624300957, -0.016213756054639816, 0.023748379200696945, -0.01970463991165161, ...
[ 0.2529318332672119, 0.006704211700707674, 0.37809550762176514, 0.269525408744812, -0.1032218337059021, 0.3543243706226349, 0.13168960809707642, -0.35602623224258423, -0.2866996228694916, -0.5962780117988586, 0.6342383027076721, 0.6673641800880432, -0.20911797881126404, 0.5458004474639893, ...
Here is an image of the desired layout, from a PDF-ed MS Word version of the document: ![](http://i.stack.imgur.com/54pMs.png) It's A5 format. What I exactly want is:

  * have it in two columns, each of them left-aligned
  * lyrics should take as much space as they need; so a simple `\begin{multicols}{2}` is not enough

How can I achieve such a layout using LaTeX?
[ 0.003197479760274291, 0.0009334917995147407, -0.006275255233049393, 0.017837626859545708, -0.004638042766600847, 0.003596847876906395, 0.007278713397681713, 0.009238862432539463, -0.01633496955037117, 0.012476860545575619, -0.007381603587418795, 0.000013205103641666938, 0.011229355819523335,...
[ -0.12597592175006866, 0.11552337557077408, 0.4930903911590576, 0.0118827223777771, -0.014593865722417831, 0.17060771584510803, 0.17865322530269623, -0.03994758054614067, -0.3011120855808258, -0.683245837688446, 0.057209305465221405, 0.5992527008056641, -0.11196445673704147, -0.000107123902...
I am trying to loop through all the posts to get the most-shared posts on social networks. I want to use the `date_query` parameter in `WP_Query` to get the posts of the last two days, last 5 days, last 7 days and last 9 days. How can I implement this using `date_query` in `WP_Query`? My `WP_Query` parameters are

    $args = array(
        'post_type'      => 'post',
        'order'          => 'desc',
        'posts_per_page' => 4,
        'orderby'        => 'meta_value',
        'meta_key'       => 'esml_socialcount_total',
    );
[ 0.01953251287341118, 0.007155970204621553, -0.01643626019358635, 0.012716063298285007, 0.010945922695100307, 0.009613292291760445, 0.009034387767314911, 0.02095555141568184, -0.012676181271672249, -0.022622954100370407, -0.00905437022447586, 0.01363408099859953, 0.010452192276716232, 0.015...
[ 0.5387367010116577, 0.01290291827172041, 0.8533027172088623, -0.1492631882429123, 0.054040245711803436, 0.24843867123126984, 0.24362871050834656, 0.06361737102270126, -0.15538866817951202, -0.8935801982879639, 0.12676703929901123, 0.12650683522224426, -0.15809400379657745, 0.45367515087127...
My phone's screen is broken and I can't access my data etc. on it anymore, since touch and display don't work. I'm wondering if it's possible to plug it into a PC and stream the screen to my PC (without an app installed on the phone), just like flashing an OS from a PC to Android. It's a Samsung Galaxy S Advance. Any help would be greatly appreciated!
[ -0.030414560809731483, -0.0004215411318000406, 0.0032275610137730837, -0.0016292976215481758, -0.018751436844468117, -0.02611258067190647, 0.005213632248342037, 0.010186059400439262, -0.022598406299948692, -0.02170754037797451, -0.008273693732917309, 0.010346460156142712, 0.00526157626882195...
[ 0.4211433231830597, 0.2884305715560913, 0.631600558757782, 0.22960889339447021, 0.3049374520778656, 0.11726986616849899, 0.6596618890762329, 0.30128562450408936, -0.104471854865551, -0.5689491033554077, 0.07556930184364319, 0.484255313873291, -0.1454186588525772, 0.1805707961320877, -0.0...
I use the `\includegraphics` command to insert an image into the document. But how can I add a border around this image without any margin between the border lines and the image? It seems like it should be very easy, but I can't find it in the documentation.
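A minimal sketch of one standard approach (added for illustration, not from the original post): `\fbox` draws a rule around its content, and setting `\fboxsep` to zero removes the gap between the rule and the image.

    \documentclass{article}
    \usepackage{graphicx}
    \begin{document}
    % zero gap between the frame and the image
    \setlength{\fboxsep}{0pt}%
    % 'myimage' is a placeholder filename
    \fbox{\includegraphics[width=5cm]{myimage}}
    \end{document}

`\fboxrule` controls the thickness of the border itself.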
[ -0.0037286055739969015, -0.006782973185181618, -0.01676342263817787, 0.03179709613323212, -0.01695258542895317, -0.011152146384119987, 0.009546968154609203, -0.01342152152210474, -0.034513603895902634, -0.025863012298941612, -0.013253497891128063, -0.0022426985669881105, -0.00723764346912503...
[ 0.7324255704879761, 0.30539023876190186, 0.4833037853240967, -0.09921129047870636, 0.1786133497953415, -0.2545787990093231, 0.22326518595218658, -0.2371106743812561, -0.14663095772266388, -0.6878052949905396, 0.25624021887779236, 0.4625456631183624, -0.34808751940727234, -0.011646905913949...
I've created a custom metabox with a textarea in it. How would I go about using Markdown in this textarea? I've seen some WP plugins but they seem to only be for the main editor.
[ 0.012864997610449791, 0.0018001575954258442, 0.005153120961040258, 0.02921767719089985, -0.006715963128954172, 0.02891944721341133, 0.009687376208603382, 0.0157924797385931, -0.03631260246038437, -0.00708214333280921, -0.0013632300542667508, 0.015322336927056313, 0.0034699568059295416, 0.0...
[ 0.38042256236076355, 0.07315376400947571, 0.13331885635852814, 0.09073128551244736, -0.19366762042045593, 0.031158048659563065, 0.2219950556755066, -0.02329545095562935, 0.06509149819612503, -0.4531269073486328, 0.2465089112520218, 0.39929795265197754, -0.16381798684597015, -0.062163323163...
Using Esri JavaScript API version 3.8, I am trying to use some cached tiles from our server at http://hexe.er.usgs.gov/ifhp/will/tiles/ using WebTiledLayer. I believe that our tiles are set up as ${level}/${row}/${col}.png, but my tiles still aren't showing up. Does this have something to do with setting up an extent parameter? I'd like the tiles to show up with the extent of 'xmin': -88.243751, 'ymin': 41.49263, 'xmax': -88.131789, 'ymax': 41.727235. This sample is the only example I've found that uses a web tile cache, and it doesn't use the extent parameter, but it's a world map whereas mine is a small county in Illinois. I haven't found any examples of using the extent option, so maybe I just don't have it set up correctly. Here's my code snippet with the attempt at the extent parameter.

    var tiles = new WebTiledLayer("http://hexe.er.usgs.gov/ifhp/will/tiles/overlay10/${level}/${row}/${col}.png", {
        "Extent": "'xmin': -88.243751, 'ymin': 41.49263, 'xmax': -88.131789, 'ymax': 41.727235"
    });
    map.addLayer(tiles);
[ 0.0005682077025994658, -0.002801181748509407, 0.002250104211270809, 0.010978811420500278, 0.011949051171541214, 0.0127110555768013, 0.008876865729689598, 0.001459550578147173, -0.01994302310049534, -0.03192766755819321, 0.003493320196866989, 0.008476187475025654, -0.014789633452892303, 0.0...
[ -0.0559205487370491, 0.1987372189760208, 0.8710953593254089, -0.03733396902680397, 0.2743634581565857, -0.2530195116996765, 0.24710051715373993, -0.16787225008010864, -0.050758279860019684, -0.9878172278404236, 0.07855909317731857, 0.31003066897392273, -0.05477882921695709, -0.088705338537...
Previously, whenever I downloaded or updated applications on my LG phone, there was no problem. But now, while I'm downloading or updating some applications, there is a problem:

> Authentication is required. You need to sign in.

But I'm signed in. What can I do to resolve this problem?
[ -0.010610494762659073, -0.0015909698558971286, 0.00468116719275713, 0.01724500209093094, -0.01441648043692112, 0.0037659332156181335, 0.008562183007597923, -0.031840935349464417, -0.02029135636985302, -0.009003306739032269, -0.02000073716044426, 0.016602594405412674, 0.012418997474014759, ...
[ 0.31926700472831726, 0.18658386170864105, 0.4685836136341095, -0.1608523577451706, 0.2408396154642105, -0.38387852907180786, 0.35458633303642273, -0.13744302093982697, -0.06784666329622269, -0.5712760090827942, 0.2397538423538208, 0.2623046040534973, -0.11342499405145645, 0.053897552192211...
How do I do a 'LIKE' comparison on a numeric value with `meta_query`? I have some code that does an autocomplete search for product SKUs in WooCommerce. It works fine for non-numeric SKUs, but I can't get it working with numeric SKUs. I'm using `compare => 'LIKE'` because my autocomplete script is set up to start searching after 2 characters have been entered (so that SKUs 10 characters long can be searched for instead of entered exactly). Here's what I'm working with now:

    $products1 = array(
        'post_type'      => array('product', 'product_variation'),
        'post_status'    => 'publish',
        'posts_per_page' => -1,
        'meta_query'     => array(
            array(
                'key'     => '_sku',
                'value'   => $_REQUEST['term'],
                'compare' => 'LIKE'
            )
        ),
        'fields' => 'ids'
    );
[ 0.004434189759194851, 0.008128596469759941, -0.016344143077731133, 0.010508273728191853, -0.01449054665863514, 0.008673313073813915, 0.009990829974412918, -0.007007731590420008, -0.018728621304035187, 0.019416026771068573, -0.01882062666118145, -0.003077227622270584, 0.006651677191257477, ...
[ 0.5364021062850952, 0.052178047597408295, 0.6799442768096924, -0.01922842301428318, -0.4057113826274872, 0.1082245409488678, 0.023519719019532204, -0.09393911063671112, -0.003193061100319028, -0.564913809299469, 0.21970388293266296, 0.4636370539665222, 0.07313594967126846, 0.21551904082298...
I've just embarked in an area with no water. No murky pools. No Aquifer. Also no soil, only stone. Eventually I may dig deep enough to find a water source, but Armok only knows how long that will take. Is it possible to set up any kind of farm with zero water? I'd assumed I could do an above ground farm, but I've harvested basically every bush on the map and it's still not showing anything plantable. Will my dwarves have to eat the harvested berries and leave seeds to plant? What are my options?
[ -0.00204672385007143, 0.016941864043474197, 0.0021318430081009865, 0.010044069029390812, -0.029357846826314926, -0.01670791208744049, 0.00648448197171092, 0.008307178504765034, -0.019729066640138626, -0.01944287121295929, -0.004521715454757214, 0.017321499064564705, -0.013128060847520828, ...
[ 0.42575496435165405, 0.4643583297729492, -0.20804154872894287, 0.062033820897340775, 0.20155972242355347, 0.1314847618341446, 0.6653121113777161, 0.24240773916244507, -0.4969887435436249, -0.44795647263526917, 0.17993617057800293, 0.026283858343958855, 0.07309994846582413, 0.21313926577568...
On my website, I allow new users to write posts. The problem is that when a user writes a post, its status is immediately "publish", so it appears on the front page. I want to review each post before it is published, i.e. approve it if it is useful or reject it if it is not useful. I hope you understand me. Thank you very much.
[ -0.01621909998357296, 0.013060767203569412, 0.01992657780647278, 0.01930657960474491, -0.002124333754181862, 0.021067215129733086, 0.012698468752205372, -0.004004553891718388, -0.017265645787119865, 0.007409955840557814, -0.016957012936472893, 0.014072378166019917, -0.004537568427622318, 0...
[ 0.5572734475135803, 0.5271864533424377, 0.4588366150856018, -0.08276229351758957, -0.29151758551597595, -0.4063308835029602, 0.30524492263793945, 0.3109285533428192, 0.11015217006206512, -0.7119857668876648, 0.2755577564239502, 0.05156576260924339, -0.13814806938171387, 0.3357178866863251,...
**Edit** (The almost ultimate solution) After sorting everything out and having some nice code - thanks to cjorssen :) - I came up with a general solution with which you are able to use the connector with two arbitrary nodes (the previous code needed the circled node to be at the origin (0,0)):

    % usage: \arcconnector[color]{Satellite Node}{Circled Node}{rim radius}
    \newcommand\arcconnector[4][black]%
    {%
      \path [name path=S--C] (#2) -- (#3);
      \path [name path=Rim] (#3.center) circle(#4);
      \path [name intersections={of=S--C and Rim}];
      \pgfmathanglebetweenpoints{%
        \pgfpointanchor{#3}{center}}{%
        \pgfpointanchor{intersection-1}{center}}
      \let\myendresult\pgfmathresult
      \path [draw,color=#1] (intersection-1)
        arc[start angle=\myendresult,delta angle=-40,x radius=#4,y radius=#4];
      \path [draw,color=#1] (intersection-1)
        arc[start angle=\myendresult,delta angle=40,x radius=#4,y radius=#4];
      \path [draw,#1] (#2) -- (intersection-1);
    }

So for the example depicted below we can do this:

    \documentclass{standalone}
    \usepackage{tikz}
    \usetikzlibrary{intersections}
    \begin{document}
    \begin{tikzpicture}
      \node [shape=circle,draw,minimum size=1cm,red] (C) {};
      \node at (0.8,1.5) [shape=rectangle,draw,blue] (P) {P};
      \arcconnector{P}{C}{0.6cm}
    \end{tikzpicture}
    \end{document}

Another useful parameter that you may want to adjust is the length of the arc wings (here a constant 40°).

**Edit** (after first answer and comments) I want to draw an arc at an intersection in such a way that the intersection is the middle of the arc. But I don't know how to define `\myendresult` to be the angle between (Origin intersection-1) and the x-axis.

    \documentclass{standalone}
    \usepackage{tikz}
    \usetikzlibrary{intersections}
    \begin{document}
    \begin{tikzpicture}
      \coordinate (Origin) at (0,0);
      \coordinate (Xaxis) at (1,0);
      % Note: the minimum size is the diameter, so radius = .5cm
      \node [shape=circle,draw,minimum size=1cm,red] (C) {};
      \node at (0.8,1.5) [shape=rectangle,draw,blue] (P) {P};
      \path [name path=P--C] (P) -- (C);
      \path [name path=Rim] (0,0) circle(0.6cm);
      \path [name intersections={of=P--C and Rim}];
      % How to define \myendresult?
      %\path [draw] (intersection-1) arc[start angle=\myendresult,delta
      %  angle=-40,radius=0.6cm];
      %\path [draw] (intersection-1) arc[start angle=\myendresult,delta
      %  angle=40,radius=0.6cm];
      \path [draw] (P) -- (intersection-1);
    \end{tikzpicture}
    \end{document}

This should look like the following: ![enter image description here](http://i.stack.imgur.com/TjbRY.png) The base problem is to find the correct start angle for the arc. Maybe this is also possible by using some tangent calculations?

* * *

**Original question** The base problem is that I'd like to compute the angle between two lines given by two coordinates in `tikz`. That seems to be difficult, and because one of the lines is the x-axis (1,0), we can reduce it to: calculate the asin of the y-value of the second coordinate (here `intersection-1`). But this seems to be a problem. I tried to use `\pgfextracty` with `\pgfmathasin`:

    \newdimen\myyvalue
    \pgfextracty{\myyvalue}{intersection-1}
    \node at (1,1) {\myyvalue};
    \pgfmathsetmacro{\myendresult}{asin(\myyvalue)}
    \path [draw,blue] (intersection-1)
      arc[start angle=\myendresult,delta angle=30,x radius=0.6cm,y radius=0.6cm];

At line 3 I get "missing number, treated as zero". So I tried using the `let` command:

    \path [name intersections={of=A and B},draw,blue]
      let \p1=(intersection-1) in
      (intersection-1) arc[start angle=\pgfmathasin{\y1},delta angle=30,
        x radius=0.6cm,y radius=0.6cm] (intersection-1);

But now I get the error:

    ! Incomplete \iffalse; all text was ignored after line 821.
    <inserted text> \fi

I don't understand the errors and am a little bit helpless. It seems no one else has ever tried to compute the angle between two vectors/coordinates/points in `tikz` (at least Google doesn't find anything).
[ 0.01899346150457859, 0.008124055340886116, -0.015362847596406937, 0.009131877683103085, -0.039942122995853424, -0.012633298523724079, 0.005369191057980061, 0.01825658045709133, -0.017131667584180832, -0.003087150864303112, -0.0023035791236907244, 0.0042237788438797, -0.023051656782627106, ...
[ 0.21674753725528717, -0.17055580019950867, 0.7807456254959106, 0.12697675824165344, 0.09445417672395706, 0.3285815417766571, -0.05002335086464882, -0.10152599215507507, -0.4655363857746124, -0.7411747574806213, 0.3573820888996124, 0.2724904716014862, -0.437350869178772, 0.14527003467082977...
In "Animal Crossing: New Leaf," the owner of Roost Cafe lets you work part- time and gives you some beans, according on how well you served the customers. The problem is, I don't know how to use them! Can someone help?
[ 0.04101496934890747, 0.025805221870541573, 0.009796296246349812, -0.005600090604275465, -0.042643483728170395, 0.013593779876828194, 0.011735056526958942, 0.008920813910663128, -0.030175749212503433, 0.030542531982064247, -0.05254356190562248, 0.015218800865113735, 0.02072952128946781, -0....
[ 0.6277672648429871, -0.057013049721717834, -0.24788883328437805, 0.0923955962061882, 0.4428979754447937, 0.2645757496356964, 0.3200400471687317, -0.003431646153330803, -0.31815609335899353, -0.2882404029369354, 0.37813645601272583, 0.1805705726146698, -0.11143188178539276, -0.0745377913117...
Silly question, and I'm not sure this is even necessarily the right forum, but it's the most appropriate on StackExchange, so here we are. Why is it, in older books, that years are sometimes redacted and replaced with a dash when writing the date in letters and so forth? Here is an example, from Mary Shelley's Frankenstein:

> Letter 1
>
> St. Petersburgh, Dec. 11th, **17--**
>
> TO Mrs. Saville, England
>
> You will rejoice to hear that no disaster has accompanied...

I've seen this in many (mostly older) books, and my only hypothesis is that it is/was a fashionable attempt to try not to make the book seem outdated quite so quickly; or a sort of faux attempt to feign respect for privacy, within the world of the novel itself. In a similar vein, in Frankenstein, several curse words (_D--n_) are also redacted. I assume this is a sort of Victorian modesty in not printing profanity, but if I'm wrong, I'd love to be corrected on that, as well.

* * *

**EDIT**: I just received this back from the reference librarian (libraries are so great!):

> It seems that there is no definitive explanation, but several explanations seem to come up over and over again. I am including the best of what I found online, rather than some of the random information that is posted (though, I will include one online discussion that might be interesting for you all the same).
>
> * From author John Barth: http://www.colby.edu/~isadoff/ss/barth.doc "Initials, blanks, or both were often substituted for proper names in nineteenth century fiction to enhance the illusion of reality. It is as if the author felt it necessary to delete the names for reasons of tact or legal liability. Interestingly, as with other aspects of realism, it is an illusion that is being enhanced, by purely artificial means."
>
> * Electronic Labyrinth: Postmodernism and the Postmodern Novel http://elab.eserver.org/hfl0256.html "... a literary convention of the time when many books and pamphlets were written criticising the government of the day, or important figures, by using false names... Some rather scurrilous stories were also printed which were thinly veiled parodies or criticisms of important figures. So when Jane Austen wrote 'the ____shire regiment', or 'the Earl of ____', she was a) avoiding the pitfall of being accused of inaccuracy and b) avoiding the pitfall of being accused of criticism of some important political figures."
>
> * Here is that discussion I mentioned: Republic of Pemberley Archive: More or less: http://www.pemberley.com/bin/archives/regarc1.pl?read=9221
>
> * Here is one more online discussion with a very nice and referenced answer, though the source page is no longer available. It discusses the use of this convention in epistolary novels (novels written in the form of letters): http://answerpool.com/eve/forums/a/tpc/f/436601891/m/6931055141

Since I think a couple of these links came up in the answers below, I'm just going to upvote them all and mark as answered the closest one (not that it was a quiz; but there were many good suggestions, and I can only mark one as the answer...).
[ -0.013087118044495583, 0.004544034134596586, -0.018361248075962067, 0.01582186482846737, 0.022841013967990875, -0.0089716212823987, 0.007272297516465187, 0.0027867406606674194, -0.012206928804516792, 0.013521764427423477, -0.0005229928065091372, -0.0009462455636821687, 0.004258001688867807, ...
[ 0.2675062417984009, -0.07562396675348282, 0.3270623981952667, 0.27474015951156616, 0.24289418756961823, 0.31019771099090576, -0.002761220093816519, 0.4043191373348236, -0.3667958378791809, -0.41092202067375183, 0.0669483169913292, -0.43202289938926697, -0.11025505512952805, 0.8163620233535...
I'm planning to enter a tournament. I have a team of legendaries, and I'm not sure if I'm allowed to enter them in the tournament. When I watch a tournament on the internet no one uses legendaries. Are they generally allowed?
[ 0.03975195065140724, 0.010779429227113724, -0.0008168048807419837, 0.017213566228747368, 0.026353120803833008, 0.027885744348168373, 0.009489650838077068, 0.007630503736436367, -0.02995087206363678, -0.04693134129047394, 0.00864567793905735, 0.03544152155518532, -0.004389071837067604, 0.03...
[ 0.704616129398346, 0.032107725739479065, 0.18119172751903534, 0.013019048608839512, 0.13721005618572235, -0.42125725746154785, 0.2157776653766632, 0.07609204202890396, -0.23821325600147247, -0.3762679398059845, 0.20406416058540344, 0.05563332885503769, 0.043419960886240005, -0.100743241608...
I'm using the advanced-custom-fields plugin and I'm about to modify my archive content page that is shown on the frontend. I use the file **personnel-archive.php** (because the custom post type used is personnel) in my theme, and that works OK. The beginning of this file looks like:

    <?php get_header(); ?>
    <?php if (have_posts()) : while (have_posts()) : the_post(); ?>
    <div class="post" id="post-<?php the_ID(); ?>">
        <h1 class="personnel"><?php the_title(); ?></h1>
        <div class="entry">
            <?php echo image_personnel(get_the_ID()); ?>
            <?php the_content('<p class="serif">L&auml;s resten &raquo;</p>'); ?>
            <?php wp_link_pages(array('before' => '<p><strong>Sidor:</strong> ', 'after' => '</p>', 'next_or_number' => 'number')); ?>

In my functions.php I have made this function:

    function image_personnel($post_ID) {
        $post_type = get_post_type( $post_ID );
        $content = '';
        // If custom type for shown post is personnel, then show image first
        if ($post_type == 'personnel') {
            $picture_personnel = get_post_custom_values('picture_personnel', $post_ID);
            if ($picture_personnel) {
                $image_fields = get_field('picture_personnel');
                $url_image    = $image_fields['url'];
                $alt_image    = $image_fields['alt'];
                $title_image  = $image_fields['title'];
                if (strlen($alt_image) > 0) {
                    $title_image = $alt_image;
                } else {
                    $alt_image = $title_image;
                }
                $content = '<img src="' . $url_image . '" alt="' . $alt_image . '" title="' . $title_image . '" />';
            }
        }
        return $content;
    }

I've basically just added a field "picture_personnel" to each personnel post and I want to show it in the content. Is it better to use a filter on the_content? Or is it just a matter of taste?
[ 0.005718784406781197, 0.006013298407196999, 0.007316305302083492, 0.013517722487449646, 0.01698382757604122, -0.004189951345324516, 0.0072112781926989555, -0.0035335011780261993, -0.011734035797417164, -0.008597003296017647, -0.0012834793888032436, 0.00029556037043221295, 0.01533948909491300...
[ 0.4342014491558075, 0.23659752309322357, 0.6774111390113831, -0.010986889712512493, 0.046659793704748154, -0.1314702332019806, 0.1425328552722931, -0.22215890884399414, -0.15554551780223846, -0.6871116161346436, -0.20539022982120514, 0.235817089676857, 0.003432940226048231, 0.2992538809776...
Some errors occurred when I installed Lucene, a Perl module:

    [xlwang@localhost Lucene-0.18]$ make test
    Running Mkbootstrap for Lucene ()
    chmod 644 Lucene.bs
    PERL_DL_NONLAZY=1 /home/xlwang/local/bin/perl "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
    t/00-load.t .. 1/1
    #   Failed test 'use Lucene;'
    #   at t/00-load.t line 6.
    #     Tried to use 'Lucene'.
    #     Error: Can't load '/home/xlwang/Lucene-0.18/blib/arch/auto/Lucene/Lucene.so' for module Lucene: /usr/lib/../lib64/libclucene.so.0: undefined symbol: pthread_mutexattr_settype at /home/xlwang/local/lib/perl5/5.20.0/x86_64-linux/DynaLoader.pm line 193.
    #  at t/00-load.t line 6.
    # Compilation failed in require at t/00-load.t line 6.
    # BEGIN failed--compilation aborted at t/00-load.t line 6.
    # Testing Lucene 0.18, Perl 5.020000, /home/xlwang/local/bin/perl
    # Looks like you failed 1 test of 1.
    t/00-load.t .. Dubious, test returned 1 (wstat 256, 0x100)
    Failed 1/1 subtests

    Test Summary Report
    -------------------
    t/00-load.t (Wstat: 256 Tests: 1 Failed: 1)
      Failed test: 1
      Non-zero exit status: 1
    Files=1, Tests=1, 1 wallclock secs ( 0.04 usr 0.01 sys + 0.08 cusr 0.03 csys = 0.16 CPU)
    Result: FAIL
    Failed 1/1 test programs. 1/1 subtests failed.
    make: *** [test_dynamic] error 1

My OS is Red Hat 6.4 and my Perl version is 5.20. Please help me to fix it.
[ -0.0015420522540807724, -0.000420829514041543, -0.007073584944009781, -0.0014766734093427658, -0.046985071152448654, 0.01756509393453598, 0.010263074189424515, -0.01713964156806469, -0.00982465036213398, -0.007604639045894146, -0.005127061158418655, 0.0023451275192201138, -0.0039969258941709...
[ 0.26674774289131165, -0.251568078994751, 0.3936103284358978, 0.16877003014087677, 0.23096142709255219, 0.019433561712503433, 0.511142373085022, -0.009801123291254044, -0.11070667207241058, -0.3244304656982422, 0.005683865398168564, 0.7653303146362305, -0.12032640725374222, -0.1328370124101...
> HP TouchPad vs. iPad vs. Xoom vs. PlayBook: the tale of the tape.
[ -0.017402173951268196, -0.007186380214989185, -0.00353623041883111, 0.016782166436314583, -0.020324552431702614, 0.025368571281433105, 0.024077527225017548, 0.02276977151632309, -0.0369705967605114, -0.058183059096336365, -0.05392223969101906, 0.0011986246099695563, -0.004365794360637665, ...
[ 0.11825087666511536, -0.2055189311504364, 0.5134225487709045, 0.5041648149490356, -0.14942742884159088, 0.00036876622471027076, -0.28741323947906494, -0.16410574316978455, 0.04342775046825409, -0.7691969871520996, 0.03222779184579849, 0.5770000219345093, 0.1789407879114151, 0.2045807838439...
I have been using GeoTools to read shapefiles and import them into PostGIS. Unfortunately, when large files are being imported, this leads to a massive lag (around 8000% of the time taken by shp2pgsql piped into psql), which I assume is caused mostly by individual inserts of data. Assuming that the amount of time it takes to write the features to a dump is not excessive, this should massively reduce the exorbitant amount of time required to import the shapefile. My plan is to just convert all of the feature attributes into rows of strings and then use the COPY functionality in the JDBC driver to bulk-load the data, but I do not know of any functions in the JDBC driver or elsewhere that can convert Java objects into PostgreSQL-dump-compatible strings. Below is a sample output from shp2pgsql, which is essentially what I want to replicate using Java.

    SET CLIENT_ENCODING TO UTF8;
    SET STANDARD_CONFORMING_STRINGS TO ON;
    BEGIN;
    CREATE TABLE "raw_data"."mud" (gid serial,
        "objectid" numeric(10,0),
        "tcmud_name" varchar(50),
        "shape_area" numeric,
        "shape_len" numeric);
    ALTER TABLE "raw_data"."mud" ADD PRIMARY KEY (gid);
    SELECT AddGeometryColumn('raw_data','mud','geom','2277','MULTIPOLYGON',2);
    COPY "raw_data"."mud" ("objectid","tcmud_name","shape_area","shape_len",geom) FROM stdin;
    14 LAKEWAY MUD 9.60083055833e+004 1.24005179036e+003 0106000020E50800000100000001030000000100000005000000E03113F38C20474188DA1ADB9142634140726A6B94204741B07B4EE669426341007FCF91FE1F4741D0BE4D2668426341A03E7819F71F474190631C1B90426341E03113F38C20474188DA1ADB91426341
    16 LAKEWAY MUD 1.99805337165e+004 5.90416771226e+002 0106000020E5080000010000000103000000010000000900000040CA100AE92F474110A13411B3466341E09D3C7DE42F474198B6CE82AA466341A0BA19BD912F4741F08A1B80AF466341940F2FE9912F4741E9D4FBCEB24663410005BDDD902F4741B0CC491BB646634120440ECA8C2F4741403B9018BC466341E0865D22ED2F4741D06DCE02BA466341E0AAB0DAEA2F474170988569B646634140CA100AE92F474110A13411B3466341
    17 LAKEWAY MUD 2.55495765553e+004 6.75755882634e+002 0106000020E5080000010000000103000000010000000800000080183EE7062F4741A03A533557466341208801F3F52E4741E836A58E55466341805FC108D32E4741685B85F96446634140DE1C01CF2E4741D8E7CCEA6A466341A063D3DFC92E47415008A8CE71466341E0CBB53AF42E4741A848726374466341C0F32A05292F4741588B715F5A46634180183EE7062F4741A03A533557466341
    20 LAKEWAY MUD 1.54327655151e+004 5.08037611343e+002 0106000020E5080000010000000103000000010000000500000000A3626F4F284741402A23586044634140FFC13E36284741A8F42ECF4D44634100E74A4A06284741604256135244634140943F531F284741409C39B66344634100A3626F4F284741402A235860446341
    27 WTCMUD NO. 7 6.92786595456e+005 3.61805221358e+003 0106000020E50800000100000001030000000100000018000000C0063953CF464741107CA79ED538634100190A95D246474110AE1AA1DA386341C0240A3EDE474741D866062A0439634180FC86331F484741B8A81D80EE386341E03C4D782848474190155C7FC2386341203E3FAAED4747418056C1F598386341C0BE4FB8ED474741B0F4E3B28A386341A08A5238C947474180DBCB047E386341E036F588A6474741389250786438634140FEF57170474741205693EF523863416028CABB3F4747414897B23D503863410081AAA732474741989B78844F386341601450391A474741903F8F535138634140948698EB46474170D55FC754386341A0068BD6D9464741703CF3B85C386341001E6513CC464741E0D4F5E06238634140A9C648CC464741C8048B7274386341A06B007ACC4647415043AEA684386341E07EA77ECC464741D8E2993086386341A00B156BE0464741B845446E91386341008BC1BB15474741C004DF82AF386341C05D1BB7EC464741401E7AF3C0386341C0EF4FB5CB464741B0DF1CFCCE386341C0063953CF464741107CA79ED5386341
    ...
    \.
    CREATE INDEX "mud_geom_gist" ON "raw_data"."mud" USING GIST ("geom");
    COMMIT;

To summarize, I am looking for the fastest/easiest/most robust way to bulk-upload any shapefile to PostGIS in Java. I do not care whether solutions use GeoTools, as I have found GeoTools very cumbersome for my very simple needs -- changing the name of the geometry column from "the_geom" to "geom", for example, requires generating a brand new schema from scratch with the new geometry name, iterating through all the features of the shape datastore, generating a new feature in the new schema, and then individually mapping each attribute of the old schema to the corresponding attribute in the new schema!
[ -0.013498058542609215, 0.01165002677589655, 0.0003495237324386835, 0.022261597216129303, 0.023335915058851242, 0.00308043509721756, 0.00959794595837593, 0.0153102558106184, -0.014096560887992382, -0.014893803745508194, -0.004323427099734545, 0.008600631728768349, -0.0029061068780720234, 0....
[ 0.2737874686717987, 0.1434699147939682, 0.3400428891181946, -0.08992554992437363, -0.08257821202278137, -0.005932603031396866, 0.09924998879432678, 0.04342357814311981, 0.07356822490692139, -0.8744915723800659, -0.043552856892347336, 0.4845108985900879, -0.08081941306591034, 0.123090326786...
Which open-source GIS software is preferable: QGIS, MapWindow or gvSIG? On the basis of:

  1. GIS analysis.
  2. Digitization.
  3. Quality checking (data correction).
  4. Storage of data (i.e. storage in a DB as well as in folders).
  5. Performance.
  6. Ease of learning.

Which software should I go for, based on these basic conditions?
[ -0.011642278172075748, 0.016063261777162552, -0.006976255215704441, 0.010402428917586803, 0.00044859928311780095, 0.020825611427426338, 0.01204992737621069, 0.03789595514535904, -0.01431873720139265, -0.032266803085803986, -0.023776013404130936, 0.0043059964664280415, 0.006190800108015537, ...
[ 0.39022475481033325, -0.05843508243560791, 0.1242319643497467, 0.47256219387054443, -0.30611586570739746, -0.06713786721229553, 0.14867505431175232, -0.2747671902179718, 0.015471802093088627, -0.7350159883499146, -0.20856034755706787, 0.6294838786125183, -0.017440885305404663, 0.0066381008...
I'm trying to use `Manipulate` to visually try out different values of lambda in a Box-Cox transformation. I've created a `boxcox` function with two definitions to deal with both the normal case and the case when lambda is 0:

    boxcox[data_, 0] := Log[data]
    boxcox[data_, l_] := (data^l - 1)/l

Then I use this function inside `Manipulate`, but I keep getting tons of errors. It looks like Manipulate is only using the general definition and starts complaining about dividing by zero.

    Manipulate[
     pdata = Partition[boxcox[data, u], 12];
     ranges = Max[#] - Min[#] & /@ pdata;
     means = Mean[#] & /@ pdata;
     mrdata = Transpose[{means, ranges}];
     mrlm = LinearModelFit[mrdata, x, x];
     Show[
      ListPlot[mrdata, Axes -> False, Frame -> True,
       AxesOrigin -> {Automatic, 0}],
      Plot[mrlm[x], {x, Min[means], Max[means]}]
      ],
     {u, 0.00, 1.00}
     ]

Here is the data I'm using in case it matters:

    data = {154., 96., 73., 49., 36., 59., 95., 169., 210., 278., 298., 245.,
       200., 118., 90., 79., 78., 91., 167., 169., 289., 347., 375., 203.,
       223., 104., 107., 85., 75., 99., 135., 211., 335., 460., 488., 326.,
       346., 261., 224., 141., 148., 145., 223., 272., 445., 560., 612.,
       467., 518., 404., 300., 210., 196., 186., 247., 343., 464., 680.,
       711., 610., 613., 392., 273., 322., 189., 257., 324., 404., 677.,
       858., 895., 664., 628., 308., 324., 248., 272.}
[ 0.012632761150598526, 0.010853691026568413, -0.010148250497877598, 0.01621498167514801, -0.017363175749778748, -0.012352021411061287, 0.008727967739105225, -0.010844981297850609, -0.017689090222120285, 0.002441512420773506, -0.015506993979215622, -0.0019850749522447586, -0.011032359674572945...
[ 0.14596958458423615, -0.06894275546073914, 0.14750070869922638, -0.18169474601745605, -0.02796308323740959, 0.3209351897239685, -0.08602684736251831, -0.5162249207496643, -0.033170558512210846, -0.3378894627094269, 0.5027638673782349, 0.366537868976593, -0.45330217480659485, 0.074836894869...
![](http://i.stack.imgur.com/a1tAv.jpg) I want to solve this PDE. I have tried to solve it with `NDSolve` but got the error 'Boundary values may only be specified for one independent variable. Initial values may only be specified at one value of the other independent variable'. Please help me to solve this problem; I am a beginner in Mathematica. This is a cylindrical PDE, and in the equation omega, lambda and phi have the constant values 2, 3 and 4. This is what I have tried.

    sol = NDSolve[{w^2 (1/p) (D[T[p, x] (D[T[p, x], {p, 1}]), {p, 1}]) +
        D[T[p, x], {x, 2}] - 2 l (D[T[p, x], {x, 1}]) -
        4 f^2 (T[p, x]) == 0,
       T[p, 0] == 1, T[p, 1] == 1, T[0, x] == 10, T[1, x] == 1},
      {T[p, x]}, {p, 0, 1}, {x, 0, 1}]
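For readers who cannot see the linked image, here is my transcription of the equation directly from the `NDSolve` code (my reading, with $w=\omega$, $l=\lambda$, $f=\phi$, and $p$ playing the role of the radial coordinate $\rho$):

    \frac{\omega^2}{\rho}\,\frac{\partial}{\partial\rho}\!\left(T\,\frac{\partial T}{\partial\rho}\right)
      + \frac{\partial^2 T}{\partial x^2}
      - 2\lambda\,\frac{\partial T}{\partial x}
      - 4\phi^2\,T = 0

with $T(\rho,0)=T(\rho,1)=1$, $T(0,x)=10$, $T(1,x)=1$. Note that conditions are imposed on both $\rho=0,1$ and $x=0,1$, which is exactly the combination the quoted `NDSolve` message complains about.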
[ -0.005564934108406305, 0.014863326214253902, -0.021044597029685974, 0.02139972150325775, -0.02191130444407463, 0.0035154540091753006, 0.008002011105418205, 0.038668759167194366, -0.01872745342552662, -0.008945861831307411, -0.007946117781102657, 0.01749459281563759, -0.022952906787395477, ...
[ -0.2604079246520996, 0.26924389600753784, 0.678678572177887, -0.23646345734596252, -0.12989966571331024, 0.03514642268419266, 0.38476094603538513, -0.3531244099140167, -0.3101007044315338, -0.3655174672603607, -0.13542349636554718, 0.5825316309928894, -0.1944529265165329, -0.18968664109706...
I am a newbie and I asked in "How to create rectangles like in this example?" about how to make some rectangles. I am trying to read the code and understand it, as I need rectangles rather than squares. But I cannot find on the net what these #1 to #4 are. As the notation is short, I can't even search for it. So I would be happy if somebody could help me with it.

    \newcommand\catalannumber[3]{
      % start point, size, Dyck word (size x 2 booleans)
      \fill[cyan!25] (#1) rectangle +(#2,#2);
      \fill[fill=lime] (#1)
        \foreach \dir in {#3}{
          \ifnum\dir=0 -- ++(1,0)
          \else -- ++(0,1)
          \fi
        } |- (#1);
      \draw[help lines] (#1) grid +(#2,#2);
      \draw[dashed] (#1) -- +(#2,#2);
      \coordinate (prev) at (#1);
      \foreach \dir in {#3}{
        \ifnum\dir=0
          \coordinate (dep) at (1,0);
        \else
          \coordinate (dep) at (0,1);
        \fi
        \draw[line width=2pt,-stealth] (prev) -- ++(dep) coordinate (prev);
      };
    }
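For context (a general LaTeX fact added for illustration, not specific to the linked answer): inside a `\newcommand` definition, `#1`, `#2`, ... are placeholders for the macro's arguments, and the `[3]` after `\newcommand\catalannumber` says the macro takes three of them. A tiny self-contained example:

    \documentclass{article}
    \newcommand\pair[2]{(#1, #2)}  % #1 = first argument, #2 = second
    \begin{document}
    \pair{a}{b}   % prints (a, b)
    \pair{3}{xy}  % prints (3, xy)
    \end{document}

So in a call to `\catalannumber`, `#1` is the start point, `#2` the size, and `#3` the Dyck word, exactly as the comment in the code says.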
[ 0.0037297122180461884, 0.013833257369697094, -0.006114973220974207, 0.01802915707230568, -0.0186045803129673, 0.0038779210299253464, 0.007120315916836262, 0.015904000028967857, -0.028613300994038582, -0.002439921721816063, -0.011936490423977375, -0.0008448263397440314, 0.011288042180240154, ...
[ 0.06394007802009583, 0.17138075828552246, 0.5739713907241821, -0.3010234236717224, 0.07754741609096527, 0.09155149757862091, 0.1658235788345337, -0.16823455691337585, -0.46298182010650635, -0.8303602337837219, -0.06005876511335373, 0.2667192816734314, -0.032199885696172714, 0.1772560477256...
`biblatex` has an option `firstinits` which will abbreviate first and middle names in the bibliography. Is there any way to limit this behavior to the `editor` field only, so that it doesn't abbreviate names in the `author` field? In the example below, the author should appear as `Lennon, John`, but the editors should appear as `P. McCartney, J. Lennon, G. Harrison, and R. Starkey`.

    \documentclass{article}
    \usepackage[style = authoryear-comp, maxnames = 99]{biblatex}
    \usepackage{filecontents}
    \begin{filecontents}{\jobname.bib}
    @incollection{lennon1965,
      AUTHOR = "John Lennon",
      BOOKTITLE = "A book with articles",
      EDITOR = "Paul McCartney and John Lennon and George Harrison and Richard Starkey",
      TITLE = "This is my article in this book",
      YEAR = "1965",
      LOCATION = "Liverpool",
      PAGES = "65--87",
      PUBLISHER = "Cavern Club"}
    \end{filecontents}
    \addbibresource{\jobname.bib}
    \begin{document}
    \nocite{*}
    \printbibliography
    \end{document}
[ 0.029714522883296013, 0.009782894514501095, -0.022229028865695, 0.026885194703936577, -0.0056487321853637695, 0.01611330732703209, 0.00966717954725027, 0.02489367499947548, -0.014642799273133278, 0.01102352887392044, -0.009669025428593159, 0.0002154357498511672, -0.006421476136893034, 0.00...
[ 0.3208044469356537, 0.45350828766822815, 0.14357081055641174, -0.13007360696792603, 0.07126428186893463, 0.24863789975643158, -0.25107139348983765, -0.06688196957111359, -0.5094971060752869, -0.18053056299686432, -0.48878562450408936, -0.057836372405290604, -0.41684120893478394, 0.16008627...
In the MC server I'm in, every player experiences a spawn issue; players spawn in the ground and start to suffocate. After a second or so you then spawn where you would normally spawn. The length of time in the ground depends on the connectivity to the server (the laggier the server, the longer you suffocate). When you're low on HP and you teleport home, the suffocation can kill you, and I noticed that, after spawning back in the correct location, some of the items lost during death are lying around and some are gone completely. Because of this I believe that players spawn directly below the spawn point by a few blocks, but I'm not sure... I've also mined out the area I suffocated in, hoping that if I ever spawned there again I would just be in an empty square, but the blocks always respawn. Does anyone know of a solution to this problem?
[ -0.01451314240694046, 0.018586356192827225, 0.0019958713091909885, 0.0011872141622006893, 0.0033937350381165743, -0.009263170883059502, 0.008549343794584274, 0.0059468625113368034, -0.01466686837375164, 0.024573709815740585, -0.014476553536951542, 0.036139488220214844, -0.0002726349048316479...
[ 0.3339807093143463, -0.15931615233421326, 0.7089563012123108, 0.16761384904384613, 0.012167931534349918, 0.01772775873541832, -0.048346079885959625, -0.17464527487754822, -0.3015446364879608, -0.7404168248176575, 0.21996872127056122, 0.31842783093452454, 0.10057274252176285, 0.632057487964...
I'm looking for a single word, possibly ending in "-consciously", that represents doing something socially common that you were not aware had a name or a greater context. For example, many people have played Huckle buckle beanstalk (the hotter or colder finding things game) without knowing the name or thinking that it is even considered a game. Someone learning the term for the first time might say "I've _subconsciously_ played that game with my friends for years." But that's not quite right since they were consciously engaging in the activity, they just didn't know it was a proper game. Is there a word that can replace _subconsciously_? (I know that something like "I've never heard that term but I've been playing that with my friends for years" would also work, but I feel it could be shorter.)
[ -0.009141349233686924, -0.0065516941249370575, -0.009020956233143806, 0.01752152480185032, 0.0024621854536235332, -0.018516577780246735, 0.009059768170118332, -0.001794070703908801, -0.008384507149457932, 0.013184888288378716, 0.004272820428013802, 0.011865377426147461, 0.01611083373427391, ...
[ 0.669293224811554, 0.1524399220943451, -0.2002381682395935, 0.06821391731500626, -0.1102563738822937, -0.4501863718032837, 0.3381127119064331, 0.2201712280511856, -0.3969897925853729, -0.42412272095680237, 0.18904204666614532, 0.05241779610514641, 0.009904898703098297, 0.2345988154411316, ...
> Arnold raced out of the door, and started... In its time, it was once reported, this was one of the most often-read lines of fiction in the English language: it is the sentence fragment shown in a brief close-up shot of mystery novelist Jessica Fletcher's typewriter in the opening credits of _Murder, She Wrote_ from 1984 to 1991. You can see it here. Even conceding that "door" can be used as a perfectly legitimate synonym for "doorway," this always bothered me. One may race _out of_ a room, and one may race _through_ a doorway, but I don't see how Arnold could have raced _out of a door_ —unless perhaps he had been standing still in the middle of the doorway before suddenly "racing" out of it, which seems unlikely. What's interesting is that "Arnold raced out the door" doesn't bother me as much without the _of_ , perhaps because I'm subconsciously putting an implied _through_ into the sentence: "Arnold raced out [through] the door." Even so, I was surprised and amused to see that, out of all the examples they could have chosen, Merriam-Webster illustrates its definition of _out_ as a preposition with the phrase "ran _out_ the door." (Were the writers of this definition _Murder, She Wrote_ fans, I wonder?) This doesn't seem to leave much room for my interpretation. How should the clause "Arnold raced out of the door" be evaluated? Is it ungrammatical, grammatical but poor form, or grammatical with no reservations?
[ -0.022997254505753517, 0.01431785523891449, -0.011380065232515335, 0.01159175019711256, -0.02051394432783127, 0.020000867545604706, 0.009937312453985214, 0.007783449254930019, -0.01654203236103058, 0.015245229005813599, -0.017534609884023666, 0.010950461961328983, 0.038238275796175, 0.0113...
[ 0.03460555523633957, 0.25825920701026917, 0.10940996557474136, -0.18160131573677063, 0.5205214619636536, 0.18433095514774323, 0.5724998116493225, -0.24830424785614014, -0.5955423712730408, 0.09655078500509262, 0.25298064947128296, 0.27262064814567566, -0.02768249809741974, 0.41296070814132...
After upgrading to a new release version, my `bash` scripts start spitting errors:

    bash: /dev/stderr: Permission denied

In previous versions, Bash would _internally recognize_ those file names (which is why this question is not a duplicate of this one) and _do the right thing (tm)_; however, this has stopped working now. What can I do to be able to run my scripts again successfully? I have tried adding the user running the script to the group `tty`, but this makes no difference (even after logging out and back in). I can reproduce this on the command line without problem:

    $ echo test > /dev/stdout
    bash: /dev/stdout: Permission denied
    $ echo test > /dev/stderr
    bash: /dev/stderr: Permission denied
    $ ls -l /dev/stdout /dev/stderr
    lrwxrwxrwx 1 root root 15 May 13 02:04 /dev/stderr -> /proc/self/fd/2
    lrwxrwxrwx 1 root root 15 May 13 02:04 /dev/stdout -> /proc/self/fd/1
    $ ls -lL /dev/stdout /dev/stderr
    crw--w---- 1 username tty 136, 1 May 13 05:01 /dev/stderr
    crw--w---- 1 username tty 136, 1 May 13 05:01 /dev/stdout
    $ echo $BASH_VERSION
    4.2.24(1)-release

On an older system (Ubuntu 10.04):

    $ echo $BASH_VERSION
    4.1.5(1)-release
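An untested workaround sketch (a suggestion, not from the original post): redirecting to the already-open file descriptor avoids reopening the device node, so it sidesteps the permission check entirely:

    echo test >&2                      # write to stderr via fd 2 directly
    err() { printf '%s\n' "$*" >&2; }  # hypothetical helper for scripts
    err "something went wrong"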
[ 0.0019075754098594189, 0.0073921396397054195, -0.01704498380422592, 0.008799990639090538, -0.0052017634734511375, -0.00008612382225692272, 0.007264891639351845, 0.002694374416023493, -0.0168447308242321, -0.002626853995025158, -0.01792026311159134, 0.008022383786737919, -0.007993858307600021...
[ 0.5254812836647034, 0.5082839727401733, 0.2700229287147522, -0.18612490594387054, -0.05981144309043884, -0.13376660645008087, 0.7487972974777222, -0.21441428363323212, -0.3016558289527893, -0.6105726361274719, 0.11260239779949188, 0.6863844990730286, -0.09455486387014389, 0.170360639691352...
Suppose I have a data set with a certain outcome $Y$, covariates $X$, and a certain status variable $Z$, which can take a finite (small) number of values, say 1, 2 and 3. Any of these variables may be missing in the data set, so I want to multiply impute my data. On top of the imputations from the model $Y|\{X,Z\}$, I want to obtain normalized imputations $Y|\{X,Z=1\}$ -- i.e., force one of my predictor variables to be set to a specific level. The context is somewhat similar to that of the BMI-liars exercise in Sec. 7.3 of Stef van Buuren's FIMD book. The status $Z$ corresponds to different sources of measurements on $Y$, and I suspect that the status $Z=1$ is the most accurate, so I want to get a feeling for what the outcomes on $Y$ would have been if everybody had been measured using the source $Z=1$. The difference, though, is that I don't have any parallel measures, like his self-reported and instrumented BMI. So what I need, computationally, is to run the burn-ins, calibrate the imputation model(s), and in the last iteration for $Y$, substitute $Z=1$ instead of its actual or predicted values. There may be a way to create a passive variable that is constant $=1$, but then it would be dropped from the imputation equation as collinear with the intercept term. If I just create a copy of $Y$ and make it missing for $Z \neq 1$, and put $Y$ and $Z$ as predictors, then I get perfect prediction with singular matrices, so that's a no-go either. Any ideas on how this can be implemented using reasonably standard packages? I would like to use Stata or R for this.
[ 0.0171600840985775, 0.016197392717003822, -0.006189568433910608, 0.014098826795816422, -0.003494740929454565, 0.000038749538362026215, 0.006323091685771942, -0.010983581654727459, -0.0070420438423752785, 0.007568529807031155, -0.011036064475774765, 0.0073061296716332436, -0.01085411384701728...
[ -0.1778779774904251, -0.13224154710769653, 0.17941072583198547, -0.044364187866449356, 0.007676573935896158, 0.4757358133792877, 0.041241906583309174, -0.26342761516571045, 0.20135706663131714, -0.5467137694358826, -0.07172736525535583, 0.41871944069862366, -0.44806233048439026, 0.25188624...
I have bought an Xbox 360 wireless controller and I want to use it on PC, but unfortunately I can't find the Xbox 360 wireless receiver. Is it possible to use a normal wireless adapter for the Xbox 360 controller?
[ 0.0010691970819607377, -0.0034381907898932695, -0.016460804268717766, 0.002508634002879262, 0.0023285329807549715, -0.039294544607400894, 0.009769577533006668, -0.0486304946243763, -0.023297471925616264, -0.05918549373745918, 0.017617113888263702, 0.029251551255583763, -0.02240888588130474, ...
[ 0.5041948556900024, -0.012146856635808945, 0.3109647333621979, 0.4744681715965271, 0.14957179129123688, -0.45415639877319336, 0.02825082093477249, 0.009201421402394772, 0.16096287965774536, -0.6135167479515076, 0.2847423553466797, 0.6245160698890686, -0.23155775666236877, 0.158767268061637...
Neutrons have no charge so they would not, I think, interact with photons. Would a neutron star be transparent?
[ 0.026816105470061302, 0.010136967524886131, 0.028607280924916267, 0.047814685851335526, -0.03931887447834015, -0.05909626558423042, 0.019233504310250282, -0.025379013270139694, -0.03348635882139206, 0.007645664270967245, -0.01170816458761692, 0.02643727883696556, 0.040227651596069336, 0.00...
[ 0.409263551235199, 0.15381187200546265, -0.13525106012821198, 0.37740379571914673, -0.24306072294712067, -0.32171550393104553, 0.2413870394229889, 0.4465182423591614, -0.19530849158763885, -0.3945710361003876, -0.1535378098487854, 0.3962666392326355, -0.241604283452034, 0.21623824536800385...
Usually, we use `proxy.ashx` for the same domain with this code (working well):

    esri.config.defaults.io.proxyUrl = "proxy.ashx";
    esri.config.defaults.io.alwaysUseProxy = true;

If another domain needs to use my proxy, is it possible to provide `proxy.ashx` for that other domain? I have tried from my localhost, but it is not working:

    esri.config.defaults.io.proxyUrl = "http://www.mydomain.com/proxy.ashx"; //can't access
    esri.config.defaults.io.alwaysUseProxy = true;

Thanks in advance.
[ 0.02297515980899334, 0.010975543409585953, -0.013913624919950962, 0.013501618057489395, -0.02167671173810959, -0.01649162732064724, 0.010590313002467155, -0.018106654286384583, -0.017830614000558853, -0.013700319454073906, 0.00020860519725829363, 0.004576375707983971, -0.02236967906355858, ...
[ 0.2137494534254074, 0.2889454960823059, 0.465553343296051, -0.09609463065862656, 0.07712242007255554, -0.2946411073207855, 0.3438864052295685, 0.21269147098064423, -0.12089569121599197, -0.6401674747467041, -0.09361259639263153, 0.38468053936958313, -0.21876971423625946, 0.2055727243423462...
I am reading about metaelliptical copulas, but I don't know the difference between elliptical Gaussian and multivariate Gaussian distributions. I would appreciate it if somebody could explain the difference in a simple way. This is the paper that I was reading, in case you need more clarification: http://onlinelibrary.wiley.com/doi/10.1029/2006WR005275/abstract
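For orientation, a hedged summary (standard textbook definitions, not drawn from the linked paper): an elliptically contoured distribution keeps the Gaussian's ellipsoidal density contours but allows a different radial generator $g$,

$$f(\mathbf{x}) = c_d\,|\Sigma|^{-1/2}\, g\!\big((\mathbf{x}-\boldsymbol\mu)^\top \Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)\big),$$

and the multivariate Gaussian is the special case $g(t)=e^{-t/2}$; the multivariate Student-$t$, with $g(t)=(1+t/\nu)^{-(\nu+d)/2}$, is another member of the same family.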
[ 0.0008060996769927442, -0.00123313267249614, 0.007525959517806768, 0.023461366072297096, 0.007291238754987717, 0.009733353741466999, 0.00833897478878498, -0.006190330255776644, -0.030128667131066322, -0.03882226347923279, 0.0028831996023654938, 0.010388141497969627, -0.018962232396006584, ...
[ 0.11732228100299835, -0.3559201657772064, -0.12674441933631897, -0.18974919617176056, -0.6110855340957642, 0.1128644123673439, -0.3020927906036377, 0.11741184443235397, -0.4205334186553955, -0.1659373790025711, 0.16015352308750153, 0.40478500723838806, -0.28986144065856934, 0.3419665396213...
It is common wisdom - and mathematically proven - that quantum entanglement cannot be used to bypass the relativistic speed limit and transfer information faster than light. So there must be something wrong with the following gedankenexperiment, but I can't figure out what: Let there be a device which (nearly) simultaneously creates lots of entangled, coherent photon pairs and sends them to Alice and Bob, each of them receiving one photon of each pair. Both are far away from the light source and have placed screens into the - highly focused - light beam, so that they detect some small area of light on the screen. Now Alice wants to transfer one bit of information to Bob. If it is a 0, she does nothing. If it is a 1, she puts an appropriate double-slit plate into the light path, immediately before the photon bunch arrives. This will create an interference pattern on her screen, and entanglement will instantly replicate the photons' paths and thus the pattern onto Bob's screen. So if he sees an interference pattern instead of a dot, he knows that it's a 1, and vice versa. Of course Bob's screen will not show an exact copy of what Alice sees: it will be overlaid by noise, e.g. from photons which bounced off Alice's double-slit plate or decohered through environment interaction. In the worst case, a statistical accident may create a pattern out of a 0-bit which looks like the result of interference. But we may optimize the signal/noise ratio through the experimental setup, and besides that _any_ real-world communication channel is subject to partial information loss (which must be compensated by error correction). So it seems like this experiment transfers information faster than light. What did I miss?
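One standard way to frame the resolution, sketched here as a hedged aside (the usual no-communication argument, not a verdict on this exact setup): Bob's local statistics are fully determined by his reduced density matrix, and no operation Alice performs locally can change it,

$$\rho_B' = \operatorname{Tr}_A\!\big[(\mathcal{E}_A\otimes \mathrm{id}_B)(\rho)\big] = \operatorname{Tr}_A[\rho] = \rho_B,$$

for any trace-preserving local channel $\mathcal{E}_A$ (slit plate included). On this account, interference fringes can only emerge in the *coincidence* counts between the two screens, and comparing those records requires a classical, light-speed-limited channel.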
[ 0.01608976721763611, 0.010408880189061165, -0.0031752437353134155, -0.004277470987290144, -0.009691284969449043, -0.029009610414505005, 0.0084396256133914, -0.013777428306639194, -0.012009432539343834, -0.03522029146552086, -0.012605870142579079, 0.016401078552007675, -0.009462005458772182, ...
[ 0.6161157488822937, -0.08252900093793869, 0.019215648993849754, 0.18298107385635376, -0.04756273329257965, 0.031238868832588196, 0.1547718197107315, -0.36127549409866333, -0.5254507660865784, -0.1883445680141449, -0.06740362197160721, 0.13297328352928162, -0.43319207429885864, 0.2987003624...
I've restarted playing and can't find my saves anywhere. I have never played any of the DLCs. Do the DLCs scale? Is there a max level to enjoy them? I'm playing through from level 1, and I am at 12 now. Where can I find level-range details on the new areas, since I can already warp to them with the travel system?
[ -0.00643417052924633, 0.02417728863656521, -0.004632791504263878, -0.015337426215410233, 0.03942706435918808, -0.014781294390559196, 0.008734581992030144, -0.03279503434896469, -0.030142640694975853, -0.0010424909414723516, 0.0003862844605464488, 0.02170911803841591, -0.01335886213928461, ...
[ 0.26858392357826233, -0.15945760905742645, 0.7922090291976929, 0.1412706822156906, 0.17487388849258423, -0.3222760558128357, 0.371993750333786, 0.1622193455696106, -0.6173508167266846, -0.8081239461898804, 0.015921251848340034, 0.3548237681388855, 0.23853521049022675, 0.2823493778705597, ...
Telling a person to repeat something they have said sounds better to me, but is it more correct to ask them to resay what they said? If I say something then resay it, then I have said it again. I don't peat, so why would I repeat? Do I peat? What does peat mean when referred to this way? Which is better, repeat or resay?
[ 0.02153926156461239, 0.023096799850463867, -0.02454027161002159, 0.01729917712509632, 0.0021855938248336315, 0.015101899392902851, 0.010240094736218452, -0.00972286332398653, -0.02044736035168171, -0.03526989743113518, -0.002447896171361208, 0.0126785384491086, 0.014424227178096771, -0.007...
[ 0.44928136467933655, 0.1830187737941742, 0.46605855226516724, -0.4244816303253174, -0.6973764300346375, -0.024723118171095848, 0.702801525592804, -0.2044680118560791, -0.18463528156280518, -0.10783018916845322, 0.15774483978748322, 0.6849910020828247, -0.18407867848873138, 0.30128467082977...
This is an integration problem I encountered during the calculation of the Bayes factor between two models given data $D$. One of the models, $M_0$, assumes the data follow a multinomial distribution with parameters $(\theta_1, \theta_2, \dots, \theta_k)$ and $\sum_{i=1}^k\theta_i= 1$. We also assume a Dirichlet prior, $\mathrm{Dir}(\alpha_1, \alpha_2, \dots,\alpha_k)$. Now we want to calculate the marginal likelihood $\mathrm{P}(D|M_0)$, which is proportional to $\int \prod_{i=1}^k \theta_i^{n_i + \alpha_i - 1} \,\mathrm{d}\boldsymbol\theta$. I am stuck here: how do I calculate this integral, which is subject to $\sum_{i=1}^k\theta_i= 1$?
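The missing step is the standard Dirichlet normalization identity (a well-known result, stated here for reference): integrating over the probability simplex $\Delta_{k-1}=\{\boldsymbol\theta : \theta_i\ge 0,\ \sum_i\theta_i=1\}$,

$$\int_{\Delta_{k-1}} \prod_{i=1}^k \theta_i^{n_i+\alpha_i-1}\,\mathrm{d}\boldsymbol\theta = \frac{\prod_{i=1}^k\Gamma(n_i+\alpha_i)}{\Gamma\!\big(\sum_{i=1}^k (n_i+\alpha_i)\big)},$$

i.e. the normalizing constant of a $\mathrm{Dir}(n_1+\alpha_1,\dots,n_k+\alpha_k)$ density.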
[ -0.011920507065951824, 0.016984064131975174, -0.0009301583049818873, 0.011485978029668331, 0.007792712189257145, -0.012113003991544247, 0.007618293631821871, -0.018748514354228973, -0.01000403705984354, -0.015150715596973896, -0.015705754980444908, 0.00778191676363349, -0.030345484614372253,...
[ -0.4849817752838135, -0.27553918957710266, 0.38608846068382263, -0.25562387704849243, 0.018272729590535164, 0.41204172372817993, 0.10747829079627991, -0.532944917678833, -0.0023341928608715534, -0.5705316066741943, -0.16490834951400757, 0.7514861822128296, -0.4594305157661438, 0.3708463013...
Consider two different data time-series, **_Data1_** and **_Data2_** , expressed using **inhomogeneous scales (units)**. Each of these two data series is itself a weighted average of a bunch of **standardized** individual series. I would like to aggregate these two data series into one unique "index" by taking the equal-weight average of the two series at each point in time. However, before doing that, I must put these two series on a common scale. The predominant method is standardization, i.e. subtracting the sample mean from the raw values and dividing this difference by the sample standard deviation. However, this assumes that the variables are normally distributed. **_This assumption is clearly violated in the case of the series I am using._** Another method suggests transforming the raw values on the basis of their empirical cumulative distribution function (CDF), involving the computation of order statistics. I am not sure if this method is valid in my case, for the following reason: - Let's take the first data series **_Data1_**. Imagine this series only has 5 data points: (2,-1,4,10,100). The transformed values using order statistics would be: (2/5, 1/5, 3/5, 4/5, 1). When you plot the raw values against time, the series seems steady for the first 4 time periods, and there is a jump at t=5. However, when you plot the transformed values against time, there is no evident jump from t=4 to t=5. Therefore, the interpretation of the transformed values is not intuitive. ![enter image description here](http://i.stack.imgur.com/DOMAH.jpg) ![enter image description here](http://i.stack.imgur.com/x7SzD.jpg) Can I still use order statistics to transform those two data series? If not, would there be any better way of aggregating them? Any help would be appreciated. Thank you!
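A minimal sketch of the rank/ECDF transform being discussed (illustration only; `ecdf_transform` is a hypothetical helper, and ties are broken by position rather than averaged):

    import numpy as np

    def ecdf_transform(x):
        """Map each value to its empirical CDF value, rank / n."""
        x = np.asarray(x, dtype=float)
        ranks = x.argsort().argsort() + 1  # 1-based ranks within the sample
        return ranks / x.size

    data1 = np.array([2, -1, 4, 10, 100])
    print(ecdf_transform(data1))  # [0.4 0.2 0.6 0.8 1. ]

This reproduces the (2/5, 1/5, 3/5, 4/5, 1) values above and makes the loss of the t=5 jump easy to see: the transform keeps only ordering, not magnitude.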
[ -0.007195288315415382, 0.024464674293994904, -0.015492850914597511, 0.010760501027107239, 0.01853388547897339, 0.009928852319717407, 0.008306861855089664, -0.015334979631006718, -0.013832474127411842, 0.012510119006037712, -0.0010414046701043844, 0.019583309069275856, -0.0010308553464710712,...
[ 0.15852035582065582, -0.4210452139377594, 0.008641102351248264, 0.19537192583084106, -0.025391479954123497, 0.2169020026922226, -0.35377493500709534, -0.16159863770008087, -0.3232433497905731, -0.7140452861785889, 0.18530365824699402, 0.1438700407743454, -0.10034876316785812, 0.38720795512...
I have a REST web service with an endpoint `www.foobar.com/service.svc/MAC` (migration authorisation code). Posting and getting to that adds and gets one MAC respectively. I now need to implement a new endpoint for all MACs. What would that be? `www.foobar.com/service.svc/MAC/All` seems wrong. What would be correct, and why?
[ 0.007944255135953426, 0.012680714949965477, -0.006055135279893875, 0.007182751782238483, 0.016928013414144516, 0.001543750986456871, 0.008592951111495495, 0.027228547260165215, -0.013358787633478642, -0.0355910062789917, 0.0006393440417014062, 0.0098445238545537, 0.008786069229245186, 0.00...
[ 0.09811330586671829, 0.32226628065109253, 0.9370260834693909, -0.24638259410858154, 0.013872470706701279, 0.005054507404565811, 0.23381787538528442, 0.10344713181257248, -0.25190234184265137, -0.4242313504219055, -0.1524980068206787, 0.4863901436328888, -0.18813878297805786, 0.085214078426...
I'm trying to implement this JavaFX code, where I want to call a remote Java class and pass a boolean flag:

    final CheckMenuItem toolbarSubMenuNavigation = new CheckMenuItem("Navigation");
    toolbarSubMenuNavigation.setOnAction(new EventHandler<ActionEvent>() {
        @Override
        public void handle(ActionEvent e) {
            //DataTabs.renderTab = toolbarSubMenuNavigation.isSelected();
            DataTabs.setRenderTab(toolbarSubMenuNavigation.isSelected()); // call the getter/setter here and send the boolean flag
            System.out.println("subsystem1 #1 Enabled!");
        }
    });

The Java class which I want to call:

    public class DataTabs {
        private static boolean renderTab; // make members *private*
        private static TabPane tabPane;

        public static boolean isRenderTab() {
            return DataTabs.renderTab;
        }

        public static void setRenderTab(boolean renderTab) {
            DataTabs.renderTab = renderTab;
            tabPane.setVisible(renderTab);
        }

        // somewhere below
        // set visible the tab pane
        TabPane tabPane = DataTabs.tabPane = new TabPane();
        tabPane.setVisible(renderTab);
    }

This implementation works, but I want to optimize it to use fewer static variables and objects. Can you tell me which sections of the code can be optimized, and how?
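One possible direction, sketched under assumptions (a suggested refactor, not the original code): hold a `DataTabs` instance where the menu is built, and let the `TabPane` itself carry the visibility state so no static boolean field is needed:

    import javafx.scene.control.TabPane;

    public class DataTabs {
        private final TabPane tabPane = new TabPane();

        public TabPane getTabPane() {
            return tabPane;
        }

        public void setRenderTab(boolean renderTab) {
            // the TabPane already stores its own visibility; no separate field required
            tabPane.setVisible(renderTab);
        }

        public boolean isRenderTab() {
            return tabPane.isVisible();
        }
    }

and at the call site:

    final DataTabs dataTabs = new DataTabs();
    toolbarSubMenuNavigation.setOnAction(
        e -> dataTabs.setRenderTab(toolbarSubMenuNavigation.isSelected()));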
[ 0.007902423851191998, 0.014227857813239098, 0.006164319347590208, -0.005917969159781933, 0.014380300417542458, 0.0016548337880522013, 0.009700866416096687, 0.016704395413398743, -0.014282409101724625, 0.009840480983257294, -0.029809316620230675, 0.01960904523730278, -0.008276214823126793, ...
[ -0.20101195573806763, -0.3927367329597473, 0.7906432151794434, -0.1787414699792862, 0.16715547442436218, 0.3507307469844818, 0.7798040509223938, -0.4290447533130646, -0.2144659012556076, -0.7960588932037354, 0.01440921425819397, 0.44610902667045593, 0.043918102979660034, -0.087902665138244...
To fix ideas, suppose C is consumption in dollars and Y is income in dollars. This is a time-series model. Suppose for some strange reason I need to estimate the model $Y = a + b\cdot \Delta X$, where $\Delta X$ is the first difference of X. How do I interpret this $b$ coefficient? If both sides are FDs, we can just interpret it as in levels, since the model comes from the levels (and a time trend). But what if the LHS is in levels and the RHS is in FDs, as in the above regression? Thanks!
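A hedged worked reading (one interpretation, under the usual linear-model assumptions): with an explicit time index the regression is

$$\mathbb{E}[Y_t] = a + b\,(X_t - X_{t-1}), \qquad b = \frac{\partial\,\mathbb{E}[Y_t]}{\partial\,\Delta X_t},$$

so $b$ measures how much higher the *level* of $Y$ is, in its own units, when the period-over-period *change* in $X$ is one unit larger.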
[ -0.006665879860520363, 0.015471626073122025, -0.01274292916059494, 0.006513721309602261, 0.012539382092654705, -0.015218700282275677, 0.006957643199712038, -0.00793460663408041, -0.01114715076982975, 0.006386558525264263, -0.012130730785429478, 0.006379526574164629, -0.019149256870150566, ...
[ 0.5033530592918396, -0.22022894024848938, 0.582761287689209, 0.302974671125412, -0.004754372872412205, 0.12740755081176758, -0.33734408020973206, -0.08357853442430496, -0.2529740333557129, -0.5588975548744202, 0.41915589570999146, 0.6371461749076843, -0.1021258533000946, 0.5954914689064026...
I am trying to create a service that returns the shortest path for a given start and end destination. What I did was use the shortest-path query provided by pgRouting to calculate the vertices and edges that form the shortest path. Then, for each of the edges, I calculated the corresponding coordinates and sent them as points in a GPX file. Now when I plotted the coordinates and joined them with straight lines, it gave me a path, but not the curve that I needed. I am wondering what the best way is to send that information, I mean the path information. Currently I am just sending the coordinates. So let's say for a long curved path, do I need to actually send all the intermediate points in the curve to plot it like a curve, or is there another way? What is the standard procedure? Thanks
[ 0.0013673303183168173, 0.012594771571457386, -0.005596613511443138, 0.006111619528383017, -0.007608545012772083, -0.0030700680799782276, 0.007999426685273647, 0.001405477523803711, -0.014560429379343987, -0.032099444419145584, -0.006301091983914375, 0.013516231440007687, -0.01529193948954343...
[ 0.1388203650712967, 0.429363489151001, 0.4229699969291687, 0.03969995677471161, -0.3197094202041626, 0.13059887290000916, 0.19571034610271454, 0.1536007970571518, -0.21702232956886292, -0.8031206727027893, 0.28063178062438965, 0.3290201425552368, -0.034800950437784195, 0.4210590422153473, ...
How to do proper line breaking (continuation) for commands, i.e. their options and/or their arguments? For example, in order to transform this:

    \usepackage[top=1.0cm, bottom=1.0cm, left=1.0cm, right=1.0cm, includehead, includefoot]{geometry}

Into this:

    \usepackage[top=1.0cm,
                bottom=1.0cm,
                left=1.0cm,
                right=1.0cm,
                includehead,
                includefoot]{geometry}
[ -0.001790148438885808, 0.015972085297107697, -0.01537309493869543, 0.029400387778878212, -0.01235695369541645, 0.008243897929787636, 0.007290298119187355, -0.014998226426541805, -0.01437927782535553, -0.016226932406425476, -0.013852855190634727, -0.0012162942439317703, -0.008197473362088203,...
[ -0.043536946177482605, -0.14338766038417816, 0.21566689014434814, -0.041543856263160706, 0.12777389585971832, -0.06334888935089111, 0.34572654962539673, -0.5754706263542175, -0.22199080884456635, -0.23163792490959167, -0.2670338451862335, 0.5176567435264587, -0.2437659651041031, -0.2672551...
I have a static HTML website. `www.example.com/?12345` (this page doesn't exist) redirects to `www.example.com`, and `www.example.com/page.html?12345` redirects to `www.example.com/page.html`. I don't know why this happens. Google said this is a soft 404 error and `www.example.com/page.html?12345` should return a 404 response, not a 200 OK response. How can I fix this? Here's my _.htaccess_ :

    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^www\.
    RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
    # Cache
    # 1 month
    <filesMatch ".(jpg|jpeg|png|swf)$">
    Header set Cache-Control "max-age=2592000, private"
    </filesMatch>
    # 3 days
    <filesMatch ".(txt|css|js)$">
    Header set Cache-Control "max-age=259200, must-revalidate, proxy-revalidate"
    </filesMatch>
    # 10 min
    <filesMatch ".(html|htm)$">
    Header set Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate"
    </filesMatch>
    # Include php
    <Files contact.htm>
    AddHandler application/x-httpd-php5 htm
    </Files>
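An untested sketch of one possible fix (assuming the static site never uses query strings legitimately): have mod_rewrite answer any URL that carries a query string with a genuine 404 instead of serving the page:

    RewriteEngine On
    # non-empty query string -> return a real 404 rather than 200 OK
    RewriteCond %{QUERY_STRING} .
    RewriteRule ^ - [R=404,L]

If some pages do take parameters, the condition would need to exclude them first.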
[ -0.02782054804265499, 0.0030788243748247623, 0.0036508541088551283, 0.02006625011563301, 0.0055428664200007915, 0.012997720390558243, 0.009095687419176102, 0.003346569137647748, -0.012876691296696663, -0.02744881622493267, -0.007013745605945587, 0.004835868254303932, 0.00742272287607193, 0...
[ 0.35536065697669983, 0.1927916258573532, 0.30475136637687683, 0.031142182648181915, -0.1600853055715561, 0.017575005069375038, 0.5430251359939575, 0.12532836198806763, -0.33526912331581116, -0.5178601145744324, 0.11526555567979813, 0.38746321201324463, -0.27476316690444946, 0.3107450902462...
Here is what I want: ![enter image description here](http://i.stack.imgur.com/qEseq.jpg) And here is how I want to achieve this: just like you have the `\label` and `\ref` commands for automatic section-number referencing, I wish to have **something similar which will auto-reference the specific item number (1.2.1 in my case) without me having to explicitly write it out every time**. This will be part of a big report I'm writing, hence I wish to make my work less cumbersome. Here is the code:

    \documentclass[]{article}
    \usepackage{enumerate}
    %opening
    \title{}
    \author{}
    \begin{document}
    \begin{enumerate}
      \item 1st item
      \begin{enumerate}[{1.}1]
        \item 1st nested item
        \item 2nd nested item
        \begin{enumerate}[{1.2.}1]
          \item Useful/Important Information
        \end{enumerate}
      \end{enumerate}
      \item 2nd item
    \end{enumerate}
    \noindent In this paragraph, I talk a bit about the Useful/Important Information and then reference the item number
    \end{document}
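A hedged sketch of one way to get referenceable item numbers (swapping the enumitem package in for enumerate; its label/ref keys are used as I remember them, so treat this as untested):

    \documentclass{article}
    \usepackage{enumitem}
    \begin{document}
    \begin{enumerate}[label=\arabic*., ref=\arabic*]
      \item 1st item
      \begin{enumerate}[label=\theenumi.\arabic*., ref=\theenumi.\arabic*]
        \item 1st nested item
        \item 2nd nested item
        \begin{enumerate}[label=\theenumii.\arabic*., ref=\theenumii.\arabic*]
          \item Useful/Important Information \label{itm:useful}
        \end{enumerate}
      \end{enumerate}
      \item 2nd item
    \end{enumerate}
    Item \ref{itm:useful} should then print as 1.2.1.
    \end{document}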
[ 0.016788821667432785, 0.011560034938156605, -0.008801144547760487, 0.02325471118092537, -0.0055254315957427025, -0.012954682111740112, 0.004044062457978725, 0.005327633116394281, -0.016862671822309494, 0.008997694589197636, -0.006199635565280914, -0.0011649706866592169, -0.001287770690396428...
[ 0.5105040669441223, 0.09425625205039978, 0.4355551600456238, -0.02203260362148285, -0.11058861017227173, -0.09871560335159302, -0.0727594792842865, -0.14210782945156097, -0.4800679087638855, -0.5962721705436707, 0.006268139462918043, 0.3909315764904022, -0.2009987235069275, 0.0669823810458...
I've read this very short paragraph from Landau & Lifshitz's Mechanics (Chap. 2, Par. 10) (that you can find here) about mechanical similarity. I was looking for some more detailed explanations of the matter, at a level like the one in the first chapters of Landau's book. I've been able to find this article, but it's a little too much for me. Can you give me some references on the subject that do not go into quantum mechanics, i.e. that refer only to classical mechanics and the Lagrangian formalism? Thank you in advance. **Content of the cited paragraph** As requested in the comments, I will summarize the paragraph's content; I will give my personal understanding of Landau's explanation, so I can be corrected if I'm wrong. Suppose a system of particles is described by a Lagrangian $\cal L(\mathbf{r_1},\mathbf{\dot{r_1}},\mathbf{r_2},\mathbf{\dot{r_2}},...,t)$ and suppose the potential energy is such that $U(\alpha \mathbf{r_1},\alpha \mathbf{r_2},...)=\alpha ^k \cdot U(\mathbf{r_1},\mathbf{r_2},...)$. Since multiplying the Lagrangian by a constant leaves the equations of motion unaltered, we may multiply $\cal L$ by $\alpha ^k$. In that case, the kinetic energy becomes (let's look at the kinetic energy of a generic particle): $$\alpha ^k \cdot T = \frac{m}{2} (\alpha ^{k/2}\dfrac{ \text d \mathbf r}{\text d t})^2=\frac{m}{2}(\dfrac{\text d \,\alpha \mathbf{r}}{\text d \, \alpha^{1-k/2}t})^2,$$ so letting $\mathbf{r'}=\alpha \mathbf{r}$ and $t'=\alpha^{1-k/2}t$ we get $$\alpha ^k \cal L (\mathbf{r_1},\mathbf{\dot{r_1}},\mathbf{r_2},\mathbf{\dot{r_2}},...,t)=\cal L (\mathbf{r_1'},\mathbf{\dot{r_1'}},\mathbf{r_2'},\mathbf{\dot{r_2'}},...,t').$$ In conclusion, if lengths and times are scaled respectively by factors $\alpha$ and $\alpha ^{1-k/2}$, the resulting equations of motion are identical and the paths followed by the system of particles are similar. From this we may infer the ratios of times along similar paths from the ratios of lengths between the two paths. For example, for the gravitational potential $k=-1$, so $$t/t'=(l/l')^{3/2},$$ which is Kepler's Third Law.
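To make the scaling rule concrete, two standard check cases (added examples in the same spirit as the Kepler one; both follow directly from $t'/t=(l'/l)^{1-k/2}$):

$$U\propto r^2\ (k=2):\quad \frac{t'}{t}=\Big(\frac{l'}{l}\Big)^{0}=1,$$

so the period of harmonic oscillations is independent of amplitude, and

$$U\propto r\ (k=1):\quad \frac{t'}{t}=\Big(\frac{l'}{l}\Big)^{1/2},$$

so in a uniform field the time of fall scales as the square root of the initial height.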
[ 0.000003899913281202316, 0.01982569321990013, 0.007128315977752209, 0.02518118917942047, 0.03704293817281723, 0.007439271081238985, 0.006970840040594339, -0.003541887504979968, -0.017672870308160782, 0.0047128647565841675, -0.0009273940231651068, 0.018259646371006966, -0.006654330063611269, ...
[ 0.5090495944023132, 0.32213762402534485, 0.0098018329590559, 0.0009597181924618781, -0.1431761384010315, 0.0375276654958725, -0.01840311475098133, 0.14576523005962372, -0.3247292935848236, -0.556402325630188, 0.1431426852941513, 0.16712075471878052, 0.12550055980682373, 0.7270382046699524,...
I am using LyX with the default "article" document class. All the citations appear as bracketed numbers [1], [2], etc. I want to have two different citation formats in the same document, according to the context, for example: * "It is impossible to solve the problem exactly (Author1, Year1); however, recently an approximate solution has been suggested by Author2 (Year2)". How can I do this?
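A hedged sketch with the natbib package (one common route; the two citation keys are hypothetical, and LyX has to be switched to natbib in its bibliography settings): its \citep and \citet commands give exactly this parenthetical/textual pair:

    \usepackage[round]{natbib}
    \bibliographystyle{plainnat}
    ...
    It is impossible to solve the problem exactly \citep{author1};
    however, recently an approximate solution has been
    suggested by \citet{author2}.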
[ 0.013970431871712208, 0.008456817828118801, -0.026282362639904022, 0.0245699193328619, 0.0009428336634300649, -0.0014765391824766994, 0.008126243948936462, 0.014650995843112469, -0.0188347939401865, -0.00992698036134243, -0.004149679094552994, -0.0016693847719579935, -0.0006328391027636826, ...
[ 0.10539709776639938, 0.3644118309020996, 0.3283160626888275, -0.08896303921937943, 0.03500555828213692, -0.19748827815055847, 0.059739649295806885, -0.10072421282529831, -0.13531902432441711, -0.4920129179954529, 0.0600077360868454, 0.2988218069076538, -0.08182372152805328, 0.0353871099650...
I am having a hard time figuring out how to get a jQuery script to run on a WordPress page of mine the "correct" way. I have followed this article and implemented what the first answer says to do. There are so many tutorials and articles I've read on how to get jQuery to work on your blog, but I just cannot seem to figure it out. I have a number of questions that I hope to get addressed by others who have had experience with it. First of all, most of the tutorials and things I've read are about how to get jQuery **_included_** on your blog. My first question is: shouldn't most themes that you download out there automatically include jQuery in your header anyway? For example, my site has these lines automatically included in every page right off the bat:

    <script type='text/javascript' src='http://<mysite>.com/wp-includes/js/jquery/jquery.js?ver=1.11.0'></script>
    <script type='text/javascript' src='http://<mysite>.com/wp-includes/js/jquery/jquery-migrate.min.js?ver=1.2.1'></script>

So, jQuery is included; I don't need to worry about that... Okay, it's included, now I need to figure out how to get just a sample page or post to run a jQuery script. I follow the directions on the page I listed above... I first go into the back end of my site, go into my child theme's folder, and make a new file called "jquery-script.js". In it, I write the following code and save the file:

    jQuery(document).ready(function($){
        $("button").click(function(){
            $("#div1").fadeIn();
            $("#div2").fadeIn("slow");
            $("#div3").fadeIn(3000);
        });
    });

Okay... I have my jQuery script file on the back end. Now, I need to reference or enqueue that script. According to the article, it is best to enqueue the script. So I follow the directions and add a new PHP function in my child theme's "functions.php" file. In this file, I add the following:

    function add_jquery_script() {
        wp_enqueue_script(
            'jquery-script', // name your script so that you can attach other scripts and de-register, etc.
            get_template_directory_uri() . '/jquery-script.js', // this is the location of your script file
            array('jquery') // this array lists the scripts upon which your script depends
        );
    }

So, that's done. I believe this should work. So, I go and create a "Test Page" on my site (not published or anything). I create the divs, button, and everything I need to get my script to do what I want it to do:

    <p>Demonstrate fadeIn() with different parameters.</p>
    <button>Click to fade in boxes</button>
    <br><br>
    <div id="div1" style="width:80px;height:80px;display:none;background-color:red;"></div><br>
    <div id="div2" style="width:80px;height:80px;display:none;background-color:green;"></div><br>
    <div id="div3" style="width:80px;height:80px;display:none;background-color:blue;"></div>

I preview the page, click the button... nothing. Nothing happens. Okay... maybe I need to do a little more in my functions.php file. So I go back in there and read on the page I'm following that perhaps you have to add a line to get it to work. So I add the following line after my function:

    add_action('wp_enqueue_scripts', 'add_jquery_script');

I save the file. I'm excited. This should work. I go and test my page... nothing at all again. Nothing happens when I click the button. Hmm... So now, I go and do something totally different in my functions.php file. Something that I've done in the past to get certain scripts to work...
So I completely remove the function and the "add_action" call and implement the following in my functions.php file:

    function add_jquery_script() {
        echo '<script type="text/javascript" src="http://<mysite>.com/wp-content/themes/responsive-childtheme-master/jquery-script.js"></script>' . "\n";
    }
    add_action('wp_head', 'add_jquery_script');

I then save the file and go test it. My page WORKS!!! Woohoo! It does what I want it to do. However, I'm not satisfied with that. I've read everywhere that you should always enqueue your scripts, using the method that I attempted but that failed. So a few closing questions that I hope to get answered:

* First, why did it not work when I tried to enqueue my script, but it did work with my second method?
* Secondly, the jQuery script I wrote now gets included on EVERY single page of my site... every post, page, etc. Is this a problem? Is there any way to have it included on only the ONE page where I want the script to run, so that it doesn't affect other pages?
* Third, why can't I include this script within the "text" tab of the TinyMCE editor that I use while writing a page/post? I tried using `<script>` tags and just putting my tiny script in between those, but that doesn't appear to work.
* Fourth, if I CAN'T get my script to run for only one page and it does have to be included in every single page, then I'll obviously have to target my HTML elements much more specifically. Obviously I won't be using a script that targets every button via `$("button")`, so would I just give my button on that specific page a unique ID such as `<button id="some-unique-id">` and then target it through my jQuery? That way, the script will only work for that button.

Sorry for the extremely long post and noob questions... I just have never messed with jQuery that much, and I'm trying to wrap my head around how to work with WordPress in order to include my own custom scripts. If anyone has any feedback on any of my questions, it would be GREATLY appreciated. Thanks so much!
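For the record, a hedged sketch of the enqueue variant that addresses the most likely culprit (an assumption, not confirmed above): in a child theme, `get_template_directory_uri()` resolves to the *parent* theme folder, so the enqueued URL probably 404s silently; `get_stylesheet_directory_uri()` points at the child theme, and `is_page()` can restrict loading to a single page (the 'test-page' slug is hypothetical):

    function add_jquery_script() {
        if ( ! is_page( 'test-page' ) ) {
            return; // only load on the one page that needs it
        }
        wp_enqueue_script(
            'jquery-script',
            get_stylesheet_directory_uri() . '/jquery-script.js', // child theme path
            array( 'jquery' ), // depend on the bundled jQuery
            '1.0',
            true               // print in the footer
        );
    }
    add_action( 'wp_enqueue_scripts', 'add_jquery_script' );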
[ 0.011078285053372383, 0.002383641665801406, 0.001850672997534275, 0.011440576985478401, -0.002430558670312166, 0.003538743359968066, 0.0043914406560361385, 0.021102748811244965, -0.021656639873981476, -0.026009656488895416, -0.005488913040608168, 0.01593669503927231, -0.018717505037784576, ...
[ 0.5812298655509949, 0.10528233647346497, 0.009545003063976765, 0.21479852497577667, -0.2619357407093048, -0.23725181818008423, 0.30849525332450867, 0.47339576482772827, -0.06407264620065689, -0.6625630259513855, 0.11696916073560715, 0.5823841094970703, 0.1885465830564499, 0.103124409914016...
I have four datasets: morphological measurements for a set of species (M1), ecological measurements for the same set of species (E1), morphological measurements for a second set of species (M2), and ecological measurements for this second set of species (E2). I am interested in finding the linear combinations of variables between M1 and E1, and between M2 and E2. That is, I'd like to know what combinations of morphological measurements are associated with what combination of ecological measurements--for each set of species separately. This seems like a good use of CCA (two separate CCAs). But here's where things get tricky for me. I'd like to see whether the same linear combinations from one set of species do a good job of explaining the variation in the second set of matrices. And I'd like to see how they differ, if possible...e.g. variable 3 from M2 would be more heavily loaded on a given axis if we didn't constrain the second CCA by the linear combinations found from the first. Is this making any sense? I'm not a statistician, so I admit my lack of experience up front. I could see simply running these as two separate CCAs, then comparing the results qualitatively. But that doesn't seem very rigorous. Should I be considering some other approach entirely? Thanks for any input.
[ 0.035125747323036194, 0.030139461159706116, -0.012560677714645863, 0.025266531854867935, 0.03767355531454086, 0.018956899642944336, 0.012204224243760109, -0.020198456943035126, -0.012622687965631485, -0.0715961903333664, 0.007299867924302816, 0.009003894403576851, -0.02385798841714859, 0.0...
[ 0.4591125249862671, 0.26018673181533813, -0.22012218832969666, 0.008655795827507973, 0.26287007331848145, 0.6843518018722534, 0.08826828747987747, -0.15639610588550568, -0.11907678097486496, -0.43057680130004883, 0.27943965792655945, 0.3075900375843048, -0.0008478449890390038, 0.3006701767...
I'm slowly learning `node.js` and have a small project I want to start. The project will have a lot of background processes (downloading data from external sites, parsing CSV files, etc.). A big "win" for me with Node is the fact that it uses JavaScript for both client and server. I code in Java and JavaScript in my day job but am also pretty good at Ruby. But, like I said, it seems attractive to use one language everywhere, and JS seems to fit that bill. However, I haven't had much experience in using JS for running background jobs. Ruby seems to excel at this, and I'm not opposed to using it. So what are your thoughts on going 100% JS for this? I realize very large projects require custom solutions. I'm just wondering if it's worth the effort. Or should I just stick with Ruby for those kinds of chores? Opinions appreciated. Thanks
[ -0.010998645797371864, -0.0016700729029253125, -0.006833282299339771, 0.0004229126498103142, -0.016776859760284424, -0.00021941703744232655, 0.005280520301312208, 0.008921336382627487, -0.01572900265455246, -0.010687603615224361, -0.00016568147111684084, 0.010146468877792358, 0.0021367589943...
[ 0.3075565993785858, 0.2605426609516144, -0.016753235831856728, -0.22890594601631165, -0.08694174140691757, -0.10571587830781937, 0.2648070454597473, 0.3543541133403778, -0.14652223885059357, -0.8758302927017212, 0.051491785794496536, 0.3572987914085388, -0.046974264085292816, 0.02791961468...
I have a large road network layer with an `AGE` attribute associated per segment. On some occasions, I have segments `A -> B -> C` with segment `B` having no value for the `AGE` field. In other situations I have segments `D -> E` with either segment `D` or `E` missing the `AGE` value. In the first case, the age of `B` could be taken as either the average or the max of `A` and `C`, and in the other case `D` or `E` could be assigned the age of the other segment. Is this possible in ArcGIS? _Note: Thought must be given to the plausible situation, which I haven't seen in the data yet, of the following case:`G <-> H <-> ... <-> I <-> K`, where `H`, `I`, and the segments in between have no `AGE` values._
[ -0.01451996061950922, 0.015085550025105476, -0.017812326550483704, 0.008486857637763023, 0.006644153036177158, 0.003630502847954631, 0.006180326454341412, -0.0006405553431250155, -0.008191509172320366, 0.005000860430300236, -0.004506336525082588, 0.010525020770728588, 0.00029444671235978603,...
[ 0.4201655387878418, 0.49344533681869507, 0.13149580359458923, -0.09014301002025604, 0.1715976744890213, 0.49252164363861084, 0.056630831211805344, 0.1048785150051117, -0.05384686961770058, -0.8344414234161377, -0.12572768330574036, 0.25187811255455017, 0.6743859648704529, 0.672623395919799...
What is the reason behind this? Does giving _Kunkka_ a **Quelling Blade** make the game unbalanced? Also, does this same rule carry over to **Dota2**?
[ 0.014529084786772728, 0.03931345418095589, 0.023451635614037514, 0.021839087828993797, 0.053394999355077744, 0.005432896316051483, 0.014950903132557869, 0.022069133818149567, -0.027111349627375603, -0.007799556013196707, -0.024326253682374954, 0.05114511400461197, -0.03929198905825615, -0....
[ 0.14664830267429352, 0.14610134065151215, 0.19543112814426422, -0.03017236292362213, -0.33295631408691406, -0.3872937262058258, 0.18519161641597748, -0.3516981899738312, -0.2919679880142212, -0.22796742618083954, 0.13180257380008698, -0.018177542835474014, -0.29192203283309937, 0.223447173...
I'm trying to create an SCfigure, but the formatting doesn't seem to be working. I have used this same code before and it worked fine, but I tried to compile again in a new session and this error showed.

    \documentclass[12pt,onecolumn]{extarticle}
    \usepackage[pdftex]{graphicx}
    \graphicspath{ {C:/Users/WKUUSER/Pictures/} }
    \usepackage{mathptmx}
    \usepackage{extsizes}
    \usepackage{setspace}
    \usepackage{amsmath}
    \doublespacing
    \usepackage[margin=1in]{geometry}
    \usepackage{sidecap}
    \begin{document}
    \begin{SCfigure}
    \centering
    \caption{A field line is traced from $\theta = 0$ to $\theta = 2\pi$ with initial conditions $\psi$ from 0.8 to 1.4 and $\phi$ from 0 to $2\pi$.}
    \label{lyapunov}
    \includegraphics[width=0.5\textwidth]{lastPtsMaxExc.png}
    \end{SCfigure}
    \end{document}

The error:

    Runaway argument?
    {\settowidth \labelwidth {\@biblabel {
    ! Paragraph ended before \list was complete.
    <to be read again>
    \par

Am I making an obvious error in formatting, or is there something deeper at play? This might be related: LaTeX warns me that my references are undefined no matter how many times I recompile. I am not using a bibtex file. I am using TeXworks.
[ 0.0006894241087138653, 0.001503507373854518, -0.0025369049981236458, 0.018317783251404762, 0.0086037777364254, 0.004785354249179363, 0.007348968647420406, 0.004556093830615282, -0.012498822063207626, -0.022163841873407364, -0.01081874594092369, -0.005449392832815647, 0.007786598522216082, ...
[ 0.28970348834991455, -0.18635891377925873, 0.6420086026191711, 0.025335030630230904, -0.08859517425298691, -0.013524982146918774, -0.0316305048763752, -0.029681451618671417, -0.2858213186264038, -0.9123616218566895, 0.2076537013053894, 0.39998728036880493, -0.324052095413208, -0.1948979794...
I want to estimate the shape parameter of a gamma distribution in WinBUGS. I select a gamma distribution as the prior for the shape parameter. The data set is generated using MATLAB as:

    gamrnd(5,1,[100 1])

A small part of the data loaded into the model is here:

    list(n=100,b=1)
    y[,1]
    9.85509822926424
    6.78794280129914
    2.37341388433267
    5.44664020179438
    14.7723695566505
    4.53177357981821
    .
    .
    END

The model in WinBUGS is simple, as follows:

    model;
    {
      for( i in 1 : n ) {
        y[i] ~ dgamma(a,b)
      }
      a ~ dgamma(3,1)
    }

The estimate is strongly dependent on the prior selected for **a** : for each prior distribution I tried (normal, gamma, uniform), the estimated value of **a** is around the mean of the selected prior. For example, choosing the above-mentioned prior ( **a ~ dgamma(3,1)** ) resulted in a shape estimate of 3.026; running the model for more iterations doesn't change the results much. What is the problem and how can I solve it?
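As a sanity check outside WinBUGS, a hedged sketch (an MLE rather than the posterior, and scipy also fits the scale, so this is only a rough cross-check): with 100 draws from Gamma(5, 1), the likelihood alone should put the shape near 5, far from the prior mean of 3; if the sampler keeps returning ~3, the data may not be reaching the model at all.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    y = rng.gamma(shape=5.0, scale=1.0, size=100)  # stand-in for MATLAB's gamrnd(5,1,[100 1])

    # Maximum-likelihood fit with the location pinned at zero
    shape_hat, loc_hat, scale_hat = stats.gamma.fit(y, floc=0)
    print(shape_hat, scale_hat)  # shape_hat should land near 5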
[ -0.00004114932380616665, 0.0026549396570771933, -0.003971331752836704, 0.0023969924077391624, 0.0024398714303970337, 0.007369872182607651, 0.005025174934417009, 0.011539030820131302, -0.012629710137844086, -0.029219744727015495, 0.0019554931204766035, 0.004316363483667374, -0.018706906586885...
[ -0.025981467217206955, 0.01851595565676689, 0.4567231237888336, -0.01893262006342411, -0.21336321532726288, 0.2017025351524353, 0.20238685607910156, -0.38473376631736755, 0.08463895320892334, -0.5097321271896362, 0.12804420292377472, 0.8170540928840637, -0.06896685808897018, 0.352272659540...
I have successfully used csvsimple to import .csv files. But I would like to have commas as number separators, so I modified the .csv file to use semicolons as separators and, as the csvsimple manual says, I defined it with `separator=semicolon`, like this:

    \begin{tabular}{c|c c|c}%
      & \bfseries A & \bfseries B & \\\hline
      \csvreader[head to column names, late after line=\\, separator=semicolon]%
      {csv/test.csv}{}%
      {\bfseries\cat & \A & \B & \acc\%}%
    \end{tabular}

but I keep getting the error `! Package pgfkeys Error: I do not know the key '/csv/separator' and I am going to ignore it.`
[ 0.017056837677955627, 0.01724802516400814, -0.010799124836921692, 0.01944069378077984, -0.00838396418839693, 0.010115984827280045, 0.007232848554849625, -0.019636239856481552, -0.017400557175278664, -0.007187072187662125, 0.007554595358669758, -0.003352854633703828, 0.015103401616215706, -...
[ -0.46561893820762634, 0.19957171380519867, 0.770920991897583, -0.39324209094047546, -0.40199288725852966, 0.1434946060180664, -0.047671884298324585, -0.2002115249633789, -0.4069330394268036, -0.4541763663291931, -0.42154011130332947, -0.09424541890621185, -0.3176863193511963, 0.09572734683...
Working in 10.2. I want to show two different line feature classes, one with just plain lines and the other with arrow-headed lines. The feature classes have the same categorical field I use to symbolize. I want to maintain the different line symbols but use the same color assignments. If I save one FC's symbology out as a layer, this (obviously) resets the actual line symbol. I've then messed with changing `Properties for All Symbols` back to the arrow-at-end line, but I can't seem to propagate that back to all symbols in the FeatureLayer. ![enter image description here](http://i.stack.imgur.com/wbkEK.png) I've been reading up on layers but don't see a clear path. Any thoughts appreciated!
[ 0.004562489688396454, 0.008268537931144238, -0.019587114453315735, 0.0033803367987275124, -0.027513964101672173, -0.011184146627783775, 0.0076601505279541016, 0.027293842285871506, -0.01151652354747057, -0.002476518740877509, -0.016010060906410217, 0.0077180638909339905, 0.013417462818324566...
[ 0.18990293145179749, -0.2933438718318939, 0.6614410281181335, -0.08628089725971222, 0.0259519275277853, -0.2126324474811554, 0.31433790922164917, -0.3228938579559326, -0.015120935626327991, -0.8445877432823181, -0.12148183584213257, 0.5606658458709717, -0.21007151901721954, -0.590100526809...
I'm currently studying circular motion and centripetal force in college, and there is a very simple question that confuses me (our teacher doesn't know how to explain it either :/), so I hope we can sort it out here >< I drew two pictures to show what I was thinking. ![1](http://i.stack.imgur.com/IXKXR.jpg) In pic 1 there is a hand rotating a ball attached to a piece of string in a circular motion; from the free-body diagram we can easily see that the net force produced by tension and gravity is the centripetal force, and it points toward the center of the circle. But in pic 2, shown below, ![2](http://i.stack.imgur.com/c5Ocp.jpg) when the hand is below the ball, the net force actually points downward, not toward the center of the circle. How would that circular motion happen if this free-body diagram doesn't make sense? Or is there some other force acting on the ball?
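A worked balance for the lowest point of the circle, as a hedged aside (assuming the ball passes the bottom with speed $v$; the tension is what the diagram must supply, not a given): taking "toward the center" (upward) as positive,

$$T - mg = \frac{m v^2}{r} \quad\Longrightarrow\quad T = m\Big(g + \frac{v^2}{r}\Big) > mg,$$

so at the bottom the string tension exceeds the weight and the net force does point up at the center; a diagram where the net force points down has simply drawn the tension too small.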
[ -0.00375670799985528, 0.007480432279407978, -0.01852210983633995, 0.01791907101869583, -0.029238693416118622, -0.02336316555738449, 0.006656899116933346, 0.008484731428325176, -0.01726357638835907, -0.038979627192020416, -0.007390475831925869, 0.002807832323014736, -0.0058120861649513245, ...
[ 0.08198399096727371, -0.3151845335960388, 0.44559916853904724, 0.20716102421283722, -0.4440348744392395, -0.1442452073097229, -0.5752405524253845, -0.1378069669008255, -0.878294050693512, -0.5052047371864319, 0.3564438819885254, -0.0378841757774353, -0.2412019819021225, 0.2788965106010437,...
This question is in reference to this particular version, the Sudoku that is available via Xbox Live on WP7. I'm on my way to the "Ultimate" achievement and want to know which gametype and combination of powerups will net the most XP in a single round. I suspect that "Lightning" mode, combined with "XP Bonus" and "Gamble", will do the trick, but I want to know if there is something I'm missing that would help get me the maximum amount of XP per round -- and also what that value would be.
[ 0.0005407985299825668, -0.00489830132573843, -0.0013375466223806143, 0.0023315888829529285, -0.011057639494538307, -0.016241405159235, 0.008081633597612381, -0.0009330252651125193, -0.018938817083835602, -0.009861007332801819, -0.008983604609966278, 0.014762526378035545, -0.00519247492775321...
[ 0.5560976266860962, 0.17986363172531128, 0.48816871643066406, 0.31455197930336, -0.3305966258049011, -0.11654894053936005, 0.21400073170661926, -0.2504195272922516, -0.39952942728996277, -0.37716302275657654, 0.1848311871290207, 0.4505419433116913, 0.48656395077705383, 0.017959484830498695...
Specifically in regard to online games, what is lag compensation? Should it affect the way I play?
[ -0.04159201309084892, 0.04807143285870552, 0.0013226286973804235, -0.008815213106572628, 0.047975774854421616, -0.021639127284288406, 0.013366719707846642, -0.02513958513736725, -0.0010456001618877053, 0.08215513080358505, -0.018444107845425606, 0.040043383836746216, -0.012394580990076065, ...
[ 0.40126582980155945, 0.11323469132184982, 0.10925226658582687, 0.19721324741840363, 0.16955281794071198, -0.12880824506282806, -0.07821843028068542, 0.30452829599380493, -0.32258060574531555, -0.38610461354255676, 0.3496415615081787, 0.6181066632270813, 0.15793755650520325, -0.016035478562...
I'm new to LaTeX and trying to create a resumé. I know there are alternatives like moderncv, but I'd like to try creating it from scratch. I would like to have the dates displayed on the left, sort of like this: http://www.artbizblog.com/2010/11/tables-for-resume.html Is there a nice way of doing this? I tried to create description lists inside of a table, but it doesn't work:

    \begin{tabular}{l|c}
      \textbf{date should be here} &
      \begin{description}
        \item[item should be here] description should be here
      \end{description} \\
    \end{tabular}
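A minimal sketch of an alternative (a suggestion; list environments inside tabular cells are fragile, so plain `p{}` columns may be easier):

    \documentclass{article}
    \begin{document}
    \noindent
    \begin{tabular}{@{}p{2.5cm}|p{9cm}@{}}
      \textbf{2010--2014} & \textbf{Item title.} Description text wraps
                            inside the right-hand column. \\[4pt]
      \textbf{2008--2010} & \textbf{Earlier item.} Another description. \\
    \end{tabular}
    \end{document}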
[ 0.01478867419064045, 0.009371464140713215, -0.009816378355026245, -0.0031265378929674625, 0.007799976039677858, 0.00009625032544136047, 0.005536837503314018, 0.029466591775417328, -0.018760990351438522, 0.028069576248526573, -0.012071565724909306, -0.0016724788583815098, 0.01397872157394886,...
[ 0.6412734389305115, -0.14436636865139008, 0.6475469470024109, -0.09932141751050949, 0.33581435680389404, 0.10752864927053452, -0.27592259645462036, 0.2104097455739975, -0.4488404095172882, -0.626268744468689, 0.4553423523902893, 0.4048205316066742, -0.015125991776585579, 0.0558254495263099...
We have a WordPress blog and had the Disqus plugin installed for several months. Around late August this year, the plugin created a ton of URLs that linked to non-existent locations on our website. For example - Correct URL: domain.com/correct-URL/ Disqus created - 1. domain.com/correct-URL/344322/ -> throws 404 2. domain.com/correct-URL/433466/ -> throws 404 So essentially, Google found a LARGE number of broken links that pointed to unknown locations on our own domain. As the count of those errors (404s) rose, our site suffered a massive drop in traffic, and the crawl rate dropped to 10% of what it was earlier. I wish to know - 1. Can a large number of internal broken links (we have over 99k of them) cause rankings to drop? 2. I've fixed the issue in one go by creating 301 redirects for each bad URL to the correct URL and removing Disqus. Google, however, drops the count by ~1000 daily as I mark errors as 'fixed' in Google Webmaster Tools. Is there any way to speed this up? 3. Should I set the custom crawl rate to 'Fast' in GWT to make Google crawl our website faster? I'd appreciate your inputs and experience sharing.
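An untested sketch (assuming the junk URLs always append a purely numeric trailing segment): a single pattern-based rule could stand in for the ~99k individual 301s:

    # mod_alias: redirect /anything/123456/ -> /anything/
    RedirectMatch 301 ^/(.+)/[0-9]+/$ /$1/

Watch for interactions with existing mod_rewrite rules, and check that no legitimate URLs match the pattern before deploying it.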
[ -0.003587431740015745, -0.012858220376074314, -0.0059002176858484745, 0.021132607012987137, 0.017888404428958893, -0.003551250323653221, 0.007811857387423515, 0.014599042013287544, -0.019601738080382347, -0.02119293436408043, -0.00929428543895483, 0.0014375406317412853, 0.0028524762019515038...
[ 0.4913093149662018, 0.11736617237329483, 0.47571149468421936, 0.4649685323238373, 0.30968132615089417, -0.0928075909614563, 0.47208526730537415, 0.3560323119163513, -0.03930651396512985, -0.5083135962486267, 0.11683335900306702, 0.025951381772756577, -0.10784044861793518, 0.332989007234573...
When I run `systat -if`, it shows that the `wlan0` out traffic/peak/total values are 0 KB(/s). I'm using `wlan0` for networking; internet download/upload are working, and the in traffic/peak/total values are OK. The WiFi chip is an Atheros 9285; the module is `ath`. The system is FreeBSD 10.0. How can I correct this? If you need more information, command output, etc., I'll share.
[ -0.006938027683645487, 0.00036870999610982835, -0.014095153659582138, 0.007778829429298639, -0.021506963297724724, -0.018636353313922882, 0.010551282204687595, -0.019960520789027214, -0.012402957305312157, 0.009944933466613293, -0.020136183127760887, 0.011023851111531258, -0.0063757169991731...
[ 0.2350023239850998, 0.11237451434135437, 0.40925541520118713, 0.14470060169696808, 0.15681049227714539, 0.01892785355448723, 0.20404677093029022, 0.14606721699237823, -0.05773117393255234, -0.7834919095039368, -0.38670217990875244, 0.6522441506385803, -0.11450137943029404, 0.11307784914970...
Regarding the matchmaking 1v1 mode that was implemented: do neutrals spawn in that mode? I haven't been able to test it yet.
[ -0.009363804012537003, 0.052166275680065155, 0.0024299921933561563, 0.011867526918649673, 0.009292024187743664, -0.004754675552248955, 0.008838036097586155, -0.0049545858055353165, -0.03927519544959068, 0.023416463285684586, -0.01014045998454094, 0.04183986783027649, -0.014819435775279999, ...
[ 0.4353750944137573, -0.07845187932252884, -0.05975828319787979, 0.28769344091415405, -0.5500693321228027, -0.6078243255615234, 0.16797246038913727, 0.19039049744606018, 0.11115363240242004, -0.10346989333629608, 0.518379271030426, 0.7651346921920776, -0.13105331361293793, -0.04616181179881...
I have been looking at a number of document databases (RavenDB, CouchDB, MongoDB), and there are two things about them that I really like and that make me want to use them as much as possible: the schema-less nature (not that they are schema-less per se, but the schema is very flexible and much easier to change than with an RDBMS), and the fact that there is a lot less impedance mismatch when mapping the database data to code. There have been a number of things that have prevented me from using a document database, because I find I need them all the time. For some of them I have found solutions, like unique fields: while document databases don't directly support this feature, an acceptable workaround is to create another document whose id is the email address and only insert the user if the insert of the email document succeeds. However, there is one big thing that, from my searching, I don't think a document database can provide: relationships to complex documents. One of the projects I wanted to use a document database for is a project management system. The issue is that there are a lot of places where I need relationships to large objects (and multiple relationships within one document). When it comes to something like a task or a user, those are complex documents, and embedding them would not be a good thing, as data mismatches with these items can't be allowed to happen. Now, if I just reference the document, am I really getting any benefit from using a document database? I will probably have to run a lot more queries compared to a relational database. While I was hoping to store most of my data in a document database, the more I look at it, the more it seems like most of the data needs to be in an RDBMS. Am I correct in assuming this type of relationship needs an RDBMS, or am I unaware of how this can be accomplished in a document database?
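To make the reference-versus-embed trade-off concrete, here is a minimal sketch with MongoDB via pymongo (the database, collection, and field names are invented for the example): the task stores only the user's _id, so reading the assignee costs the extra query the question worries about.

```python
from pymongo import MongoClient

db = MongoClient().project_mgmt  # assumes a local MongoDB instance

# Store the complex "user" document once and reference it from tasks,
# instead of embedding a copy that could drift out of sync.
user_id = db.users.insert_one({"name": "Alice", "role": "dev"}).inserted_id
db.tasks.insert_one({"title": "Ship v1", "assignee_id": user_id})

# Resolving the reference takes a second round trip.
task = db.tasks.find_one({"title": "Ship v1"})
assignee = db.users.find_one({"_id": task["assignee_id"]})
print(assignee["name"])
```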
[ 0.003944464959204197, 0.004687701817601919, -0.005413784645497799, 0.0147653017193079, -0.0023026051931083202, 0.002570672193542123, 0.00682208314538002, 0.0035409533884376287, -0.009239042177796364, -0.0179494246840477, -0.0012870421633124352, 0.008193043060600758, -0.003400504821911454, ...
[ 0.37351685762405396, 0.020986828953027725, -0.07837125658988953, 0.44366949796676636, -0.1874750703573227, -0.42696046829223633, 0.1245887279510498, -0.047717295587062836, -0.15243518352508545, -0.7023187875747681, 0.243539959192276, 0.8216457366943359, -0.2869626581668854, 0.0735836923122...
I've implemented a custom Isotope template for the NextGen Gallery plugin and have since made changes and bug fixes to the template, but when I refresh the page the template changes have not taken effect. I ruled out my browser's cache by making changes to other theme template files, such as _header.php_, etc. The browser recognizes those changes immediately, but the NextGen Gallery template is still on an old version. Any idea how to flush this cache and reload the template file with each new change?
[ 0.004608447197824717, -0.0031240300741046667, -0.006011863239109516, 0.02806260995566845, 0.016348756849765778, 0.003523770486935973, 0.007003726437687874, -0.0022163279354572296, -0.018912341445684433, 0.013069256208837032, -0.008688604459166527, 0.01853497140109539, -0.00616353889927268, ...
[ 0.6330573558807373, 0.0697842538356781, 0.4287952780723572, 0.06414993107318878, 0.23795056343078613, -0.17454779148101807, -0.09838061779737473, -0.20125792920589447, -0.13526538014411926, -0.6917003989219666, 0.23783168196678162, 0.46571433544158936, -0.048700593411922455, 0.224239870905...
The words 'desire' and 'motivation' often appear in different kinds of sentences for (what I assume are) grammatical reasons, but I have a really hard time separating them as concepts. When we talk about desire or motivation, it seems like we are ultimately talking about why we act; we are seeking some kind of explanation of our actions, and without desire or motivation there would be no action. We act because we desire/are motivated by X... What motivated you / what desires caused you to do that? Do you desire X / are you motivated by X? It can be a physical (causal) explanation, e.g. the increase in dopamine drove me; it can be a design explanation, e.g. humans are 'designed to' pursue sugar, fat, sex, etc.; or it can be an intentional explanation, e.g. I work hard because I want money. It might feel like an explanation is sometimes about desire and sometimes about motivation, but are we really talking about any conceptual difference? Please help me sort this out. A side point (about MY definition of a value): I understand that we can value something without being motivated by it. But a value to me is just a type of belief, i.e. a belief about what we desire or should desire. Hopefully our values correspond to our desires; our values can influence our desires over time, but believing we value something does not automatically make us act accordingly; we have to make it emotional if we want something to drive us to action, i.e. if we want something to motivate us / be a desire. WordNet says that desire is "the feeling that accompanies an unsatisfied state" or "an inclination to want things". The first definition, I would say, describes a force of motivation, i.e. the two words refer to the same phenomenon (the feeling which causes the action); the second definition describes a phenomenon which is either a force of motivation or what I referred to as a value.
[ -0.023217882961034775, 0.016336165368556976, -0.006917575374245644, 0.010972857475280762, -0.010936969891190529, -0.00021778536029160023, 0.007378889247775078, 0.006380863953381777, -0.011453826911747456, -0.0006714235059916973, -0.009057416580617428, 0.003856722265481949, -0.007756606675684...
[ 0.3665526211261749, 0.15376083552837372, -0.296373188495636, 0.19424639642238617, -0.44597679376602173, 0.48195162415504456, 0.3790861964225769, 0.1974734663963318, -0.12487157434225082, -0.4293566942214966, -0.01941201277077198, 0.5750143527984619, -0.4252609610557556, 0.3805471658706665,...
I have used the following code to create cited/non-cited bibliographies. \DeclareBibliographyCategory{cited} \AtEveryCitekey{\addtocategory{cited}{\thefield{entrykey}}} I want to further subdivide the 'cited' bibliography called by \printbibliography[category=cited, prenote={JoPE}] into two sections. The first section should contain every cited reference except those from the journal 'Foobar'. The second section is for cited references from the journal 'Foobar'. Both sections need to have prenotes.
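One possible direction, sketched but untested: biblatex's \defbibcheck can skip entries based on a field test, so two complementary checks on journaltitle might split the cited category ('Foobar' stands in for the real journal title, and whether your entries use journaltitle or the journal alias may need adjusting):

```latex
% Skip Foobar entries / keep only Foobar entries.
\defbibcheck{notfoobar}{%
  \iffieldequalstr{journaltitle}{Foobar}{\skipentry}{}}
\defbibcheck{onlyfoobar}{%
  \iffieldequalstr{journaltitle}{Foobar}{}{\skipentry}}

% Two sections, each with its own prenote:
\printbibliography[category=cited, check=notfoobar, prenote={JoPE}]
\printbibliography[category=cited, check=onlyfoobar, prenote={JoPEFoobar}]
```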
[ 0.00587938167154789, 0.015134335495531559, 0.00011295371223241091, 0.029302310198545456, -0.005015169270336628, 0.0020272070541977882, 0.00863543152809143, 0.007549826987087727, -0.013197967782616615, -0.006253180094063282, -0.009502439759671688, -0.002378697507083416, -0.012783179059624672,...
[ 0.20725686848163605, 0.49598807096481323, 0.24654103815555573, -0.0824321061372757, -0.11253315955400467, -0.08285151422023773, -0.31414085626602173, 0.051504865288734436, -0.348628431558609, -0.39376750588417053, 0.1339007019996643, 0.22633932530879974, -0.3607079088687897, 0.243735402822...
Can we say anything about the distribution of the sum of non-iid random variables?
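For concreteness, one thing that can always be said regardless of independence (a standard fact, added only as an example of the kind of statement available): expectations add exactly, and variances add with covariance corrections.

```latex
\[
  \mathbb{E}\Bigl[\sum_{i=1}^{n} X_i\Bigr] = \sum_{i=1}^{n}\mathbb{E}[X_i],
  \qquad
  \operatorname{Var}\Bigl(\sum_{i=1}^{n} X_i\Bigr)
    = \sum_{i=1}^{n}\operatorname{Var}(X_i)
      + 2\sum_{i<j}\operatorname{Cov}(X_i, X_j).
\]
```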
[ -0.04163466766476631, -0.007455648388713598, -0.02596862241625786, 0.05025182291865349, -0.03509853407740593, -0.047994665801525116, 0.02120717242360115, -0.018131032586097717, -0.0440293550491333, -0.0998636931180954, -0.008790617808699608, 0.051597606390714645, -0.06926405429840088, 0.00...
[ 0.2711176872253418, -0.2283974289894104, -0.05762353166937828, 0.15628503262996674, -0.20159362256526947, -0.1275484561920166, -0.10930042713880539, -0.0061793155036866665, -0.2030908614397049, -0.3374464213848114, 0.12795335054397583, 0.2884075343608856, -0.18915818631649017, 0.4067821204...
I found Learned Darian in Darian's Sanctuary, but I did it after completely clearing the Rising Bridge map (directly to the east), including dueling and killing the mad battle beta there, as well as wiping every other enemy off that map. Darian refuses to sell to me, saying she needs the items for the defense of the sanctuary while the battle alphas to the east are around. As I said, I fully cleared that map and already killed the battle beta. The walkthrough on GameFAQs suggests Darian will give you a quest to kill the battle beta and will sell to you once you complete it, but she's not actually giving me a quest. Did I screw this up by essentially completing the quest before she could give it to me, or is Rising Bridge not the area she's talking about? I'm especially confused since the FAQ says "kill the battle beta to the SW", but Learned Darian herself says east, and indeed there was a battle beta in the zone directly east.
[ -0.02004762925207615, 0.017692983150482178, -0.01481001079082489, -0.011654561385512352, 0.005382242612540722, 0.00460381805896759, 0.011305810883641243, 0.006516849622130394, -0.020480744540691376, -0.014067061245441437, -0.017279259860515594, 0.023101430386304855, -0.023787163197994232, ...
[ 0.3672054409980774, -0.12277861684560776, 0.1621323525905609, 0.4612996280193329, -0.2972598969936371, 0.2177266776561737, 0.22762362658977509, -0.5098096132278442, -0.0931507796049118, -0.4023868143558502, -0.10216236859560013, 0.09368039667606354, -0.05428397282958031, 0.6338910460472107...
For brevity's sake, consider the following scenario: part of my application is a wizard for bringing on new clients, and it's a dynamic page. One step contains billing information, another step is setup information, etc. Because it's one page, in my mind I have one controller for the entire wizard, but that also means I have models and functions for each step in one **controllername.js** file, in addition to using a bunch of ng-show directives to "show" steps. This doesn't feel quite right... have I overlooked something in the docs? Edit: To expand a bit, I feel like I'm violating the SRP and I feel dirty.
[ -0.002753856824710965, 0.028368927538394928, 0.00513059925287962, 0.0043798815459012985, -0.0020724590867757797, 0.015929358080029488, 0.006205213721841574, 0.002218674635514617, -0.011668115854263306, -0.017823651432991028, -0.00482373870909214, 0.011803430505096912, -0.0020196109544485807,...
[ 0.2318962812423706, 0.008860974572598934, 0.6465722322463989, 0.04135885462164879, 0.007421240210533142, -0.24227169156074524, 0.1785704642534256, -0.09074213355779648, -0.33764880895614624, -0.4450898766517639, 0.23555925488471985, 0.6949117183685303, -0.2119688242673874, 0.17250402271747...
We need to create an API for our system. How do I convince my boss that REST is a better option than SOAP (or XML-RPC)? I say REST is... * easier to implement and maintain * not much new to learn -- plain old HTTP * what lots of people have chosen: Yahoo ~ Facebook ~ Twitter * a lot quicker to code My boss says SOAP is... * richer and more expressive * all standard XML (SOAP, WSDL, UDDI) -- and so easier to consume * better standardized than REST * what Google uses a lot * preferable because it is better to adhere to SOAP standards than to create a custom XML schema in REST
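To make the "plain old HTTP" point concrete, this is roughly what a REST call looks like from client code (a sketch against a made-up endpoint, using the third-party requests library):

```python
import requests

# One HTTP GET, JSON straight into a dict -- no envelope, no WSDL.
resp = requests.get("https://api.example.com/users/42", timeout=10)
resp.raise_for_status()
user = resp.json()
print(user["name"])
```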
[ 0.004105865024030209, 0.0058159781619906425, -0.01739998161792755, -0.00414361571893096, -0.00545805087313056, 0.0032877293415367603, 0.008469032123684883, 0.026957683265209198, -0.014399794861674309, -0.02930000238120556, -0.011441465467214584, 0.007146521005779505, -0.000026169931516051292...
[ 0.5981341004371643, 0.04437418654561043, 0.44721776247024536, 0.1753617376089096, -0.3851369023323059, 0.09970888495445251, 0.1314261108636856, -0.21433575451374054, -0.03457806631922722, -0.6912367939949036, 0.13997390866279602, 0.6372945308685303, -0.3572880029678345, -0.1292823106050491...
I ran the Contour tool in my script to generate polylines: Contour("rectExtract", "C:/fakepath/Class1.shp", 50,0) Now I want to label the polylines with the contour value. If I execute this script: layer = arcpy.mapping.ListLayers(IMXD, "")[1] layer.showLabels=True arcpy.RefreshActiveView() it labels the polylines with the ID value. How can I change this? Thanks for your help.
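A sketch of one way this might work, untested and assuming the contour values live in the default "Contour" field that the Contour tool writes: set each label class's expression to that field before turning labels on.

```python
import arcpy

mxd = arcpy.mapping.MapDocument("CURRENT")   # stand-in for IMXD
layer = arcpy.mapping.ListLayers(mxd, "")[1]

if layer.supports("LABELCLASSES"):
    for lbl_class in layer.labelClasses:
        lbl_class.expression = "[Contour]"   # label field instead of ID
layer.showLabels = True
arcpy.RefreshActiveView()
```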
[ -0.0005974483210593462, 0.009502137079834938, -0.008822341449558735, 0.017308074980974197, -0.04578900709748268, -0.013574366457760334, 0.009770847856998444, 0.0071259792894124985, -0.018146902322769165, 0.0077598560601472855, -0.00679242517799139, 0.009062834084033966, -0.005960215348750353...
[ 0.5405453443527222, -0.27578407526016235, 0.47440364956855774, -0.010318762622773647, -0.2274341583251953, -0.15300419926643372, 0.3215230405330658, -0.0994955450296402, -0.1765332669019699, -0.8896474242210388, 0.11229407787322998, 0.5561509132385254, -0.3344498574733734, 0.04846977069973...
I have just downloaded the demo of Dungeons and am on the level where you first capture a pentagram and then have to kill heroes to get soul power. It then asks me to extend my area of influence by building pentagrams, but I have no idea how. I cannot find an icon or keyboard shortcut or anything on how to build a pentagram. I hope I don't just have to capture pentagrams, because I cannot find any others on the map either... Does anyone know how to do this?
[ -0.013777983374893665, -0.000013748973287874833, -0.001641560927964747, 0.0056158737279474735, -0.008764944039285183, -0.006465422920882702, 0.0075079468078911304, 0.002090160734951496, -0.027049828320741653, 0.008338780142366886, -0.012404127046465874, 0.006319646257907152, -0.0317959412932...
[ 0.6413072347640991, 0.10834945738315582, 0.2829265296459198, 0.20268478989601135, -0.16959115862846375, -0.37838777899742126, 0.2507733702659607, -0.1708022505044937, -0.11708385497331619, -0.5332810282707214, 0.2931410074234009, 0.3074873685836792, 0.2607206702232361, 0.14087505638599396,...
On page 192 of Analysing spatial point patterns in R (Baddeley 2011), there are plots of the Gcross function for the amacrine dataset. I am looking for an interpretation of the plot: off/off and on/on are far below the CSR line, but on/off and off/on cross the CSR line -- what does this mean, please?
[ -0.013087201863527298, 0.02490217611193657, -0.010574137791991234, 0.029872005805373192, 0.011095504276454449, -0.022021569311618805, 0.01239802036434412, -0.009028383530676365, -0.016189299523830414, -0.003994010854512453, -0.0019725547172129154, 0.015140159986913204, -0.01082692202180624, ...
[ 0.2939385771751404, -0.022334106266498566, 0.4803297519683838, -0.21254637837409973, -0.12254663556814194, 0.00960233062505722, -0.3134179711341858, -0.17324784398078918, -0.3233281075954437, -0.4850306510925293, 0.015530595555901527, 0.264483243227005, -0.4428243935108185, 0.2590516209602...
I haven't found a good explanation yet of how to display a map in MapServer (localhost). I have MS4W with p.mapper on my PC. I have created a base .map file, shapefiles, and an HTML file. Could someone tell me the exact steps -- into which folders do I put these files so they are displayed in p.mapper?
[ -0.03531202673912048, -0.0006098433514125645, 0.0063252742402255535, 0.007681430783122778, -0.011682411655783653, 0.01022629626095295, 0.006665635854005814, 0.02318384125828743, -0.033142294734716415, -0.04286700487136841, 0.00465113902464509, 0.010942753404378891, 0.008212890475988388, 0....
[ 0.22027930617332458, -0.0010503290686756372, 0.6927764415740967, 0.08726578205823898, -0.14071427285671234, -0.061550628393888474, 0.1006125882267952, 0.14081059396266937, -0.3095852732658386, -1.153341293334961, 0.2849554419517517, 0.25674840807914734, 0.09280523657798767, 0.0201779641211...
In the article Oh & Berry (2009), p. 1506, in the note for Table 2, a certain statistic is used: "Operational (true) validity is the LISREL estimated correlation corrected for measurement error in the criterion measure" Can anyone explain a) what this means, b) why/when it's used (why not just use path coefficients as in a standard structural equations model?), and c) how to interpret this statistic? ### Reference Oh, I.-S., & Berry, C. M. (2009). The five-factor model of personality and managerial performance: validity gains through the use of 360 degree performance ratings. The Journal of Applied Psychology, 94(6), 1498–513. doi:10.1037/a0017221
[ 0.007962658070027828, 0.015136439353227615, -0.010090907104313374, 0.012848836369812489, 0.0002969140186905861, 0.007737644016742706, 0.007869819179177284, -0.017458712682127953, -0.007027515210211277, -0.03895627707242966, -0.00759270740672946, 0.0077899848110973835, 0.0006363252177834511, ...
[ 0.18045435845851898, 0.1267167031764984, 0.3225691318511963, 0.12625212967395782, -0.5630946755409241, 0.2442646026611328, 0.2674349546432495, -0.3856354057788849, -0.07635390013456345, -0.09024445712566376, 0.05912112072110176, 0.6097551584243774, 0.27331411838531494, 0.06856414675712585,...
I reset my Dawnguard DLC. I had played Dawnguard against the vampires, and when I finished the main quest I downloaded the Serana marriage mod from Nexus and married her, but then I decided to give Dawnguard another run. So I turned it off and made a new save. I dropped the ring of matrimony and everything that could connect me to Serana, and this time I sided with the vampires. But when I finished the main quest and told Valerica she could return to the castle, which she did, I put the Amulet of Mara on and tried to propose to Serana again. However, this time after the first question (whether she had something on her mind), nothing more appeared. There was just one small message in the left corner stating that Serana is available for proposal. I can't marry her or anyone else in Skyrim, because the real question (whether they're interested in me) never appears. I have already tried some console codes to work around this situation -- nothing. I downloaded a new mod for Serana -- still nothing. I turned the marriage mod off and on -- still nothing. I have played Skyrim for a while now, and it's not the first time I've tried to marry someone, but this time it's killing me. Why can I not marry her? How can I fix this?
[ 0.03732689470052719, 0.022352535277605057, -0.002658800221979618, -0.003174730110913515, 0.01080214511603117, -0.005289819557219744, 0.009752335026860237, 0.003908384591341019, -0.0143723851069808, -0.015259761363267899, -0.009510321542620659, 0.02450636774301529, -0.021625949069857597, 0....
[ -0.10650809854269028, -0.04692266881465912, 0.47348752617836, 0.2785790264606476, -0.23071089386940002, 0.0575646236538887, 0.5336989164352417, -0.3147127032279968, 0.08651061356067657, -0.4536084830760956, 0.18372762203216553, 0.534010112285614, -0.08476448804140091, 0.20829890668392181, ...
When I open Gedit in my Linux Mint 17 Cinnamon 64-bit, syntax highlighting is off by default. I can then activate it via the menu, or by saving my file with the correct file extension (e.g., every *.f95 file will automatically be shown with Fortran 95 syntax highlighting). My question is: since I am programming in Fortran most of the time, would it be possible for me to create a new Gedit shortcut named "Gedit-Fortran" in my menu that would open it with Fortran's syntax highlighting already activated? Just note that I don't want this to be the default behaviour of Gedit: I still want the standard shortcut to open a blank file without any syntax highlighting. Would this be possible, maybe via some terminal command, so I can create such a shortcut?
[ -0.013041031546890736, -0.0005259818281047046, -0.019314264878630638, 0.02127986215054989, -0.0074698845855891705, -0.019140131771564484, 0.010115008801221848, -0.0050634415820240974, -0.020432747900485992, 0.014617210254073143, -0.019664369523525238, 0.009241150692105293, 0.0045333546586334...
[ -0.06813590228557587, 0.2323099821805954, 0.45416542887687683, -0.45901063084602356, -0.3965436816215515, -0.23902149498462677, 0.5208536386489868, 0.13757343590259552, 0.13242392241954803, -0.819428563117981, -0.4355306923389435, 0.8942375779151917, -0.17012657225131989, -0.21844244003295...
I'm using ESRI's LocalGovernmentMI.gdb data model and am managing parcel data in a ParcelFabric dataset. I'm trying to use a search cursor on the ParcelFabric_Parcels dataset, but I receive this ... > RuntimeError: cannot open > 'P:\Mapping\Data\LocalGovernmentMI.gdb\ParcelEditing\ParcelFabric\ParcelFabric_Parcels' I'm using this bit of code ... import arcpy fc = r'P:\Mapping\Data\LocalGovernmentMI.gdb\ParcelEditing\ParcelFabric\ParcelFabric_Parcels' field = "Name" with arcpy.da.SearchCursor(fc, field) as cursor: for row in cursor: print (row) If I export the dataset and run the search cursor on that data, it works. Any thoughts?
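Since the question notes that an exported copy works, here is a sketch that merely automates that workaround (untested; whether CopyFeatures can read the fabric sublayer where the cursor cannot is an assumption, and in_memory\parcels is a placeholder output):

```python
import arcpy

src = r'P:\Mapping\Data\LocalGovernmentMI.gdb\ParcelEditing\ParcelFabric\ParcelFabric_Parcels'

# Export to a scratch feature class that the cursor can open.
tmp = arcpy.CopyFeatures_management(src, r'in_memory\parcels')

with arcpy.da.SearchCursor(tmp, ["Name"]) as cursor:
    for row in cursor:
        print(row[0])
```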
[ -0.015281863510608673, 0.005899657029658556, -0.009568534791469574, 0.024373847991228104, -0.011192484758794308, 0.001980791101232171, 0.010492787696421146, 0.02019316330552101, -0.011253157630562782, -0.036547765135765076, -0.01805548183619976, 0.012315074913203716, -0.0011074701324105263, ...
[ 0.09145153313875198, 0.04886215180158615, 0.5321775078773499, -0.12407314032316208, 0.10162071138620377, 0.01693977229297161, 0.09543830156326294, -0.49806180596351624, -0.14195634424686432, -0.5371996760368347, -0.13972774147987366, 0.24357308447360992, -0.2862538695335388, 0.210052490234...
Is there an easy way to mark dimensions in a technical drawing with TikZ? ![dimensions in technical drawing](http://i.stack.imgur.com/XaPdA.png) Is there a library or something? ## Edit I am using XeLaTeX. ## Update I chose Martin's answer because it serves me well for the moment. Ultimately, the best solution would be a library that, in an easy and consistent manner, would allow changing the arrow/dimension line styles, would support polar coordinates, would allow choosing 2 nodes and the vertical distance at which you want the dimension to be typeset, etc.
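In the absence of a dedicated library, a minimal hand-rolled sketch of one horizontal dimension line (extension lines, arrow tips, centred label; the coordinates and the 40 mm label are placeholders):

```latex
\documentclass[tikz]{standalone}
\usetikzlibrary{arrows.meta}
\begin{document}
\begin{tikzpicture}
  \draw (0,0) rectangle (4,2);                  % the measured part
  \draw (0,2.2) -- (0,2.8)  (4,2.2) -- (4,2.8); % extension lines
  \draw[{Latex}-{Latex}] (0,2.6) -- node[above] {40\,mm} (4,2.6);
\end{tikzpicture}
\end{document}
```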
[ 0.00970072578638792, 0.008722677826881409, -0.014912556856870651, 0.010592280887067318, 0.021244902163743973, -0.004101907834410667, 0.006036980077624321, 0.02397879585623741, -0.01653367653489113, -0.024580445140600204, -0.006627288181334734, 0.0032740095630288124, -0.0022444899659603834, ...
[ 0.16582319140434265, 0.16843928396701813, 0.7592856287956238, 0.19464555382728577, -0.1659381240606308, 0.1339028924703598, -0.03751888871192932, -0.2818383574485779, -0.18228773772716522, -0.5459420680999756, 0.12326361984014511, 0.2813694179058075, 0.017696896567940712, -0.17660209536552...
I have sampled 8 individuals (birds) from each of two regions. For each of these 16 individuals I have sampled 9 feathers that each grew in sequential order (from 1 to 9). Next, I measured both carbon and nitrogen isotopes in each of the feather samples. I have plotted my data (Fig. 1), and in some individuals the relationship between the isotope value and feather position is linear, in some cases monotonic, and in a few cases neither. ![Delta15N by feather position for all individuals in the "South" region](http://i.stack.imgur.com/Nz6oi.png) I am looking for a non-parametric (?) method to test these three alternative hypotheses for each individual in my dataset. H1: If the 9 sequentially grown feathers on an individual grew in the same place under the same diet, the isotope values will vary little with feather position and the regression line will be essentially flat. H2: If an individual moves or changes its diet in a systematic way, the isotope values and feather position will be correlated and the regression line will be either positive or negative. H3: If the individual abruptly moves or changes its diet, the isotope values will be uncorrelated with feather position and the relationship will not be linear. Finally, I would like to test the hypothesis that the relationship between feather isotopes and feather position differs between individuals in the two regions. This would be more intuitive if all the data (for each individual) were linear or monotonic, but they are not. From my nascent understanding, Pearson's correlation and GLMs assume the data are linear, while Spearman's assumes the data are monotonic. Sample Data: WW_Wing_SI <- structure(list(Individual_ID = c("WW_08A_02", "WW_08A_02", "WW_08A_02", "WW_08A_02", "WW_08A_02", "WW_08A_02", "WW_08A_02", "WW_08A_02", "WW_08A_02", "WW_08A_03", "WW_08A_03", "WW_08A_03", "WW_08A_03", "WW_08A_03", "WW_08A_03", "WW_08A_03", "WW_08A_03", "WW_08A_03", "WW_08A_04", "WW_08A_04", "WW_08A_04", "WW_08A_04", "WW_08A_04", "WW_08A_04", "WW_08A_04", "WW_08A_04", "WW_08A_04", "WW_08A_05", "WW_08A_05", "WW_08A_05", "WW_08A_05", "WW_08A_05", "WW_08A_05", "WW_08A_05", "WW_08A_05", "WW_08A_05", "WW_08A_06", "WW_08A_06", "WW_08A_06", "WW_08A_06", "WW_08A_06", "WW_08A_06", "WW_08A_06", "WW_08A_06", "WW_08A_06", "WW_08A_08", "WW_08A_08", "WW_08A_08", "WW_08A_08", "WW_08A_08", "WW_08A_08", "WW_08A_08", "WW_08A_08", "WW_08A_08", "WW_08A_09", "WW_08A_09", "WW_08A_09", "WW_08A_09", "WW_08A_09", "WW_08A_09", "WW_08A_09", "WW_08A_09", "WW_08A_09", "WW_08A_13", "WW_08A_13", "WW_08A_13", "WW_08A_13", "WW_08A_13", "WW_08A_13", "WW_08A_13", "WW_08A_13", "WW_08A_13", "WW_08B_02", "WW_08B_02", "WW_08B_02", "WW_08B_02", "WW_08B_02", "WW_08B_02", "WW_08B_02", "WW_08B_02", "WW_08B_02", "WW_08G_01", "WW_08G_01", "WW_08G_01", "WW_08G_01", "WW_08G_01", "WW_08G_01", "WW_08G_01", "WW_08G_01", "WW_08G_01", "WW_08G_02", "WW_08G_02", "WW_08G_02", "WW_08G_02", "WW_08G_02", "WW_08G_02", "WW_08G_02", "WW_08G_02", "WW_08G_02", "WW_08G_05", "WW_08G_05", "WW_08G_05", "WW_08G_05", "WW_08G_05", "WW_08G_05", "WW_08G_05", "WW_08G_05", "WW_08G_05", "WW_08G_07", "WW_08G_07", "WW_08G_07", "WW_08G_07", "WW_08G_07", "WW_08G_07", "WW_08G_07", "WW_08G_07", "WW_08G_07", "WW_08I_01", "WW_08I_01", "WW_08I_01", "WW_08I_01", "WW_08I_01", "WW_08I_01", "WW_08I_01", "WW_08I_01", "WW_08I_01", "WW_08I_03", "WW_08I_03", "WW_08I_03", "WW_08I_03", "WW_08I_03", "WW_08I_03", "WW_08I_03", "WW_08I_03", "WW_08I_03", "WW_08I_07", "WW_08I_07", "WW_08I_07", "WW_08I_07", 
"WW_08I_07", "WW_08I_07", "WW_08I_07", "WW_08I_07", "WW_08I_07", "WW_08I_12", "WW_08I_12", "WW_08I_12", "WW_08I_12", "WW_08I_12", "WW_08I_12", "WW_08I_12", "WW_08I_12", "WW_08I_12" ), Feather = c("1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9", "1", "2", "3", "4", "5", "6", "7", "8", "9" ), Delta13C = c(-18.67, -19.16, -20.38, -20.96, -21.61, -21.65, -21.31, -20.8, -21.28, -20.06, -20.3, -21.21, -22.9, -22.87, -21.13, -20.68, -20.58, -20.69, -16.54, -15.6, -16.61, -19.65, -20.98, -21.18, -21.7, -21.18, -21.33, -20.33, -20.28, -20.58, -20.8, -21.24, -20.94, -20.54, -21.04, -20.42, -21.28, -21.24, -21.22, -21.2, -21.47, -21.23, -21.89, -21.89, -21.6, -23.86, -23.95, -24, -24.16, -24.93, -24.93, -24.48, -24.17, -23.1, -21.3, -21.44, -21.49, -21.49, -21.1, -20.84, -20.78, -21.58, -20.76, -21.34, -24.13, -23.03, -21.77, -21.4, -21.57, -21.45, -21.32, -21.59, -20.87, -20.95, -20.76, -20.9, -21.02, -20.84, -21.11, -20.64, -20.11, -20.32, -20.02, -19.92, -20.05, -20.23, -20.73, -20.91, -19.87, -19.58, -19.35, -19.38, -19.7, -19.94, -20.43, -20.08, -20.81, -20.9, -19.24, -21.2, -21.29, -21.85, -22.22, -22.34, -22.42, -22.69, -22.75, -22.73, -21.61, -21.42, -21.84, -21.68, -21.79, -21.49, -21.88, -21.62, -21.54, -18.3, -18.53, -19.55, -20.18, -20.96, -21.08, -21.5, -17.42, -13.18, -22.3, -22.2, -22.18, -22.14, -21.55, -20.85, -23.1, -20.75, -20.9, -21.6, -21.77, -22.17, -22.21, -22.24, -22.47, -22.19, -21.89, -21.89, -24.12, -24.08, -24, -24.2, -24.16, -22.87, -22.51, -22.12, -22.3), Delta15N = c(7.35, 7.27, 7.23, 7.07, 7.13, 7.38, 6.98, 6.88, 6.72, 5.72, 5.76, 5.51, 6.12, 5.8, 5.34, 5.47, 5.78, 6.2, 7.33, 7.45, 7.3, 7.19, 7.56, 7.54, 8.12, 7.71, 7.44, 9.45, 9.81, 9.7, 9.08, 8.6, 9.34, 10.38, 9.67, 10.48, 7.71, 7.76, 7.95, 7.73, 7.69, 7.24, 6.64, 6.42, 7.31, 8.26, 8.1, 8.07, 8.7, 8.98, 9.44, 7.84, 7.26, 6.05, 8.04, 7.73, 7.55, 6.77, 6.99, 6.84, 7.09, 6.78, 7.07, 6.96, 6, 5.91, 6.48, 7.06, 7.27, 8.32, 7.85, 7.45, 6.9, 6.73, 6.97, 6.67, 6.76, 6.59, 6.58, 6.42, 6.3, 11.64, 11.83, 11.66, 11.3, 11.32, 11.29, 10.91, 10.77, 11.4, 9.5, 9.55, 9.22, 8.84, 8.89, 9.14, 9.8, 9.13, 8.51, 7.7, 7.8, 8.29, 9.65, 10.25, 13.67, 14.66, 13.48, 13.76, 8.7, 8.7, 8.36, 8.11, 8.47, 8.13, 6.88, 7.21, 7.16, 14.07, 13.91, 14.07, 14.26, 13.99, 13.51, 13.77, 14.83, 15.13, 10.93, 10.85, 11.31, 11.28, 11.96, 13.41, 8.12, 12.96, 12.03, 8.16, 8.29, 8.43, 8.53, 8.1, 7.65, 7.6, 7.51, 7.38, 6.44, 6.18, 6.33, 6.49, 6.34, 8.65, 7.73, 7.13, 7.07), Region = c("South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", 
"South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "South", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North", "North")), .Names = c("Individual_ID", "Feather", "Delta13C", "Delta15N", "Region"), row.names = c(NA, 153L), class = "data.frame")
[ 0.01593055948615074, 0.01350998692214489, -0.015933649614453316, 0.013902386650443077, -0.003501815488561988, 0.004036462865769863, 0.007784025743603706, -0.003671707585453987, -0.01751711592078209, -0.0056355660781264305, 0.010000446811318398, 0.008579837158322334, -0.012962440960109234, ...
[ 0.28860974311828613, 0.0913398414850235, 0.1474037617444992, -0.34145456552505493, 0.21707652509212494, 0.634489893913269, 0.36779364943504333, -0.5261785387992859, -0.6003113389015198, -0.3588908612728119, 0.1011369451880455, -0.07735895365476608, -0.05007600784301758, 0.19486120343208313...
I have a request, though I am not certain it is attainable. Recently I have gone from taking notes in Evernote to taking notes with my LaTeX distribution, because whenever I had to type an equation in Evernote I had to go to the website http://www.codecogs.com/latex/eqneditor.php and generate an image, which I found particularly annoying. I also like using LaTeX because it allows for a lot of customization. However, one thing that I did enjoy about Evernote was that all my notes were contained in one place, and every single note could be searched at once. Here comes my request: does anyone know of a way of making all of the PDF files that I generate from my LaTeX distribution searchable, without actually having to open each individual PDF and search through it?
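A rough sketch of one possible approach (not a polished tool): walk the notes directory and grep each PDF's extracted text using the third-party pypdf library. The folder and search term are placeholders.

```python
from pathlib import Path
from pypdf import PdfReader  # pip install pypdf

QUERY = "euler"                           # placeholder search term
NOTES_DIR = Path("~/notes").expanduser()  # placeholder notes folder

for pdf_path in sorted(NOTES_DIR.rglob("*.pdf")):
    for page_no, page in enumerate(PdfReader(pdf_path).pages, start=1):
        text = page.extract_text() or ""  # defensive: treat missing text as empty
        if QUERY.lower() in text.lower():
            print(f"{pdf_path}: page {page_no}")
```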
[ 0.014082231558859348, -0.0006471225060522556, -0.014164583757519722, 0.004958854988217354, -0.004688550718128681, 0.0032018115743994713, 0.00700528034940362, 0.019029196351766586, -0.01750342920422554, -0.0074918936006724834, -0.006709157954901457, 0.010074401274323463, -0.003989696968346834...
[ 0.2979312539100647, -0.016531312838196754, 0.3855383098125458, 0.3450920283794403, -0.09239307045936584, 0.06223162263631821, 0.15179605782032013, 0.3661069869995117, -0.18394385278224945, -0.9539446830749512, 0.4217362403869629, 0.06291797757148743, -0.14541052281856537, -0.06681930273771...
Can anyone tell me how to set a directory to read/write-only mode, as below? -rw-rw-r-- I tried with 640 and 644 but was not able to achieve it...
[ 0.0003955687570851296, 0.022177942097187042, -0.030753150582313538, 0.014420188032090664, -0.021850338205695152, 0.0010082208318635821, 0.011385245248675346, -0.011457415297627449, -0.0347801074385643, 0.014350028708577156, -0.019673114642500877, -0.0031442344188690186, -0.0168598685413599, ...
[ 0.3566802144050598, 0.4160252809524536, 0.3276607394218445, 0.13694213330745697, 0.1392742395401001, -0.10973165184259415, 0.21177256107330322, -0.12595516443252563, -0.21120448410511017, -0.5303971767425537, 0.03465836122632027, 0.7363956570625305, -0.23599158227443695, -0.121802218258380...
I sent an email to my colleague saying that if he had any questions, he could feel free to ask me. This is his reply: > Will do, just got sidetracked for a bit here… what else is new. =) Should I understand "what else is new" as a question or as something else? It seems to be an informal way of saying something, but I can't work out the informal meaning, even though I know what it says literally. TheFreeDictionary.com defines it: > Inf. This isn't new. It has happened before; Not this again. But personally, I can't fit that definition into my colleague's sentence.
[ -0.006577468477189541, 0.0003384319134056568, -0.009620851837098598, 0.009118013083934784, -0.0098775215446949, -0.010664118453860283, 0.006483360193669796, -0.007134799379855394, -0.015950946137309074, 0.003801148384809494, -0.003554447554051876, 0.009166882373392582, 0.02594338357448578, ...
[ 0.3337152302265167, -0.15494000911712646, 0.3349341154098511, -0.09673725813627243, -0.14415797591209412, -0.28276270627975464, 0.1323596090078354, -0.1030694842338562, -0.39434361457824707, -0.7589554786682129, 0.06653929501771927, 0.2036990374326706, 0.21591632068157196, 0.24287271499633...
Is there a way to determine which features/variables of a dataset are the most important/dominant within a k-means cluster solution generated in R?
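The question asks about R, but as an illustration of one common heuristic -- rank each feature by how strongly its cluster means separate, via a per-feature one-way ANOVA F statistic across the k-means labels -- here is a sketch in Python with scikit-learn and scipy (the iris data is only a stand-in):

```python
from scipy.stats import f_oneway
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

iris = load_iris()
X, names = iris.data, iris.feature_names
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Higher F = this feature's means differ more across clusters.
scores = [f_oneway(*(X[labels == k, j] for k in range(3))).statistic
          for j in range(X.shape[1])]
for name, f in sorted(zip(names, scores), key=lambda t: -t[1]):
    print(f"{name}: F = {f:.1f}")
```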
[ 0.01733112893998623, 0.007701855152845383, -0.02584088034927845, 0.005465298891067505, -0.05616217479109764, 0.05740506947040558, 0.016539718955755234, -0.02172868140041828, -0.014618528075516224, 0.02528638020157814, -0.004769324790686369, 0.031999219208955765, -0.011914622038602829, 0.01...
[ -0.021190669387578964, -0.06812053918838501, 0.21842379868030548, 0.391991525888443, 0.31047192215919495, 0.38351547718048096, -0.3313360810279846, -0.2813563048839569, -0.2325720340013504, -0.2618516683578491, -0.25659263134002686, 0.304096519947052, -0.016936080530285835, -0.003118230961...
In my quest to find a way to limit outgoing bandwidth for a running instance of bitcoind, I came across this guide that explains how to rate limit traffic to a particular destination IP: tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 10mbit tc class add dev $DEV parent 1: classid 1:1 cbq rate 512kbit allot 1500 prio 5 bounded isolated tc filter add dev $DEV parent 1: protocol ip prio 16 u32 match ip dst 195.96.96.97 flowid 1:1 I'm not trying to limit traffic to a certain destination IP, though, but to and from a specific port, so I found this guide which tells me how to match traffic by source and destination port: tc filter add dev eth0 protocol ip parent 10: prio 1 u32 match ip dport 22 0xffff flowid 10:1 tc filter add dev eth0 protocol ip parent 10: prio 1 u32 match ip sport 80 0xffff flowid 10:1 The combination of the first script with the port matching of the second script leads me to the following set of commands that should limit outgoing traffic to port 21 to 160 kbit/s. I'm testing the setup using FTP first, because limiting bitcoind, which uses port 8333, isn't optimal for testing since I can't decide when traffic is sent. tc qdisc add dev eth0 root handle 1: cbq avpkt 1000 bandwidth 800kbit tc class add dev eth0 parent 1: classid 1:1 cbq rate 160kbit allot 1500 prio 5 bounded isolated tc filter add dev eth0 parent 1: protocol ip prio 16 u32 match ip dport 21 0xffff flowid 1:1 As far as I can see, this should limit outgoing traffic to port 21 on the eth0 interface with an outgoing bandwidth of 800kbit/s to 160kbit/s, but it's not working: NetHogs version 0.8.0 PID USER PROGRAM DEV SENT RECEIVED 23653 rune filezilla eth0 102.609 2.978 KB/sec The FTP connection consists of two connections to port 21: $ netstat -n|grep "21 " tcp 0 0 192.168.1.33:59967 194.192.207.26:21 ESTABLISHED tcp 0 0 192.168.1.33:59974 194.192.207.26:21 ESTABLISHED What am I doing wrong? I'm running Ubuntu Raring, in case that's relevant.
[ 0.001635002437978983, 0.00486043281853199, -0.009309993125498295, -0.0027680727653205395, -0.017048049718141556, -0.01322844810783863, 0.009922342374920845, 0.02195538952946663, -0.01772533357143402, 0.0008727125823497772, -0.010595133528113365, 0.01082097738981247, -0.002511679893359542, ...
[ 0.24798358976840973, -0.29074567556381226, 0.6268861889839172, 0.27960702776908875, -0.26805055141448975, 0.06211470067501068, 0.080999456346035, -0.2690331041812897, -0.05499677732586861, -0.6109340190887451, 0.15144285559654236, 0.23858757317066193, -0.543366014957428, 0.2255753129720688...
I'm certain that at one point I was able to look at a breakdown of inbound URLs. Now I can't find it anywhere! It seems like this would be a common use of GA, but I couldn't find instructions for this either on this SE or elsewhere on the Internet. The closest I found was this question: “Can I track conversion rates from specific sources?” but one answer there only gets down to the level of seeing all direct traffic (I want names of sites) and the other answer recommends setting up custom goals, which I know I didn't do in the past. Can anyone point me in the right direction to see traffic by source website again?
[ 0.018475688993930817, -0.0007858928292989731, 0.0012265394907444715, 0.011200745590031147, 0.013522692024707794, 0.0016979638021439314, 0.004190927837044001, -0.0017135352827608585, -0.01577002741396427, 0.008933402597904205, 0.01141614094376564, 0.009396339766681194, -0.01006375253200531, ...
[ 0.5993799567222595, 0.08978019654750824, 0.44454425573349, 0.23901398479938507, -0.29058635234832764, -0.0867498442530632, 0.3792514204978943, 0.07268549501895905, -0.44658878445625305, -0.37492918968200684, 0.4583036005496979, 0.540050745010376, -0.18407830595970154, 0.16568981111049652, ...
While studying quantum mechanics from standard textbooks, I always felt there was a conceptual gap that was never mentioned or explained. In what follows I have tried to formulate my question; please be patient with me. For a quantum particle in an infinite potential well, the stationary states are labelled by the quantum number $n$, which labels the eigenenergies. An eigenenergy, which corresponds to a stationary state, does not change with time and hence is a conserved quantity. For a spinless electron in a Coulomb potential, used to model the hydrogen atom, we have the same story again: the stationary states are labelled by the quantum numbers $n$, $l$, $m$, which correspond to conserved quantities. My question is rather general, since I am trying to understand conceptually why only conserved quantities are used to label the quantum states. I mean, how would someone know in advance that he has to look for conserved quantities, and then use such conserved quantities to label the states?
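One standard textbook step that connects these dots (background, not from the question): by Ehrenfest's theorem, for an observable $\hat{A}$ with no explicit time dependence,

```latex
\[
  \frac{d}{dt}\langle \hat{A} \rangle
    = \frac{i}{\hbar}\,\bigl\langle [\hat{H}, \hat{A}] \bigr\rangle ,
\]
```

so $[\hat{H},\hat{A}]=0$ makes $\langle\hat{A}\rangle$ constant in time; such observables can be diagonalised simultaneously with $\hat{H}$, and their time-stable eigenvalues are exactly the 'good quantum numbers' used to label the states.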
[ 0.01798461750149727, 0.024214118719100952, 0.004874958656728268, 0.018005989491939545, -0.011196797713637352, -0.005612486973404884, 0.009219903498888016, -0.018067415803670883, -0.010587574914097786, 0.0038367202505469322, -0.01036797184497118, 0.021928828209638596, -0.008514541201293468, ...
[ 0.16898448765277863, -0.13594916462898254, 0.03952877223491669, -0.008715225383639336, -0.04361213743686676, 0.028126992285251617, -0.09806569665670395, -0.440996378660202, -0.03435986116528511, -0.5390443801879883, 0.0991605669260025, 0.24469681084156036, -0.29610008001327515, 0.738062024...