Columns: text (string, lengths 23 to 30.4k) | embeddings_A (list) | embeddings_B (list)
Have you ever submitted a LaTeX paper to a journal that wants _separate_ figures? When you want to include a figure composed of a table of images, you must build it separately. When I run `latexpdf`, I get an entire 8.5 x 11 page, even when I remove the page number. Is there a way to "autocrop" my built pdf as I compile it from LaTeX? I suppose I could open it in inkscape and crop and save it, but I'd really like to learn how to do this the _right way_.
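For what it's worth, two common routes, sketched below with a placeholder figure name: build each figure with the standalone document class, which crops the page to its content by itself, or run the pdfcrop utility that ships with TeX Live on the full-page PDF afterwards.

    \documentclass[border=2pt]{standalone}
    \usepackage{graphicx}
    \begin{document}
    % the image table / tikzpicture that makes up the figure goes here
    \includegraphics{my-figure-panel}
    \end{document}

Alternatively, compiling as usual and then running `pdfcrop figure.pdf figure-cropped.pdf` from the command line produces a tightly cropped copy without touching the source.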
[ -0.005295025650411844, 0.007917569018900394, 0.0027847429737448692, 0.01485208049416542, 0.013858472928404808, 0.0016617809887975454, 0.007618437986820936, -0.006341969594359398, -0.022756842896342278, -0.005148535128682852, -0.017647715285420418, 0.0026627322658896446, -0.00275225006043911,...
[ 0.6071259379386902, -0.10458255559206009, 0.24403133988380432, 0.17749853432178497, -0.12066368758678436, 0.163497194647789, -0.36366790533065796, -0.0348270907998085, -0.5480908751487732, -0.4515759348869324, 0.4045000672340393, 0.16972112655639648, -0.09625906497240067, -0.24962158501148...
I have a document that contains formatting macros. That document includes a document that contains the content. I'd like to be able to create a second document that redefines the formatting macros so that the resulting PDF can be converted to plain text very successfully. I'm using PDFMiner's pdf2text.py (python), but the document itself needs to be properly suited for conversion (no columns, drawing marks, typeface changes, etc.). The only problem I'm having is with line wrapping. Is there a way to turn off wrapping for (every) paragraph? I think this would solve my problem. If there's another way to convert a LaTeX document to text (without me having to code the tool to do it), I'm willing to consider alternatives to my current plan.
[ -0.007710476405918598, 0.009918974712491035, -0.001289444975554943, 0.016115887090563774, -0.005975643638521433, -0.001756751909852028, 0.005875221453607082, 0.013242384418845177, -0.010593138635158539, -0.024732442572712898, -0.005882871802896261, -0.0026123947463929653, 0.00596994999796152...
[ 0.248228058218956, 0.21077848970890045, 0.39385175704956055, 0.11836358904838562, -0.1427565962076187, -0.022198490798473358, 0.07911588996648788, -0.20714905858039856, -0.22658948600292206, -0.6232731938362122, -0.18520382046699524, 0.5449174642562866, -0.25915151834487915, -0.15425412356...
Is there a way to use wp_enqueue_script() for inline scripts? I'm doing this because my inline script depends on another script and I would like the flexibility of inserting it after that script has loaded. Also, it's an inline script because I'm passing PHP variables into the JavaScript (like the theme path, etc.). Thanks in advance.
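A common way to attach inline data to a registered handle so that it is printed only when that script loads is wp_localize_script(). The handle names and values below are placeholders, not anything from the question; treat this as a rough sketch rather than a drop-in answer.

    function mytheme_enqueue_scripts() {
        // the script the inline data depends on
        wp_enqueue_script( 'my-main', get_template_directory_uri() . '/js/main.js', array( 'jquery' ), '1.0', true );

        // printed as an inline <script> right before my-main is output
        wp_localize_script( 'my-main', 'myThemeData', array(
            'themePath' => get_template_directory_uri(),
        ) );
    }
    add_action( 'wp_enqueue_scripts', 'mytheme_enqueue_scripts' );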
[ 0.006469824351370335, 0.0181114599108696, 0.015123267658054829, 0.023336615413427353, -0.013075309805572033, 0.022381717339158058, 0.009581551887094975, -0.011590753681957722, -0.02962573617696762, -0.016941582784056664, -0.011033061891794205, 0.026981443166732788, -0.021320076659321785, 0...
[ 0.3746507167816162, -0.016282759606838226, 0.23157702386379242, 0.21577656269073486, -0.14243115484714508, -0.29453763365745544, -0.001950943493284285, 0.038011062890291214, -0.16165876388549805, -0.39255014061927795, 0.40082088112831116, 0.644243597984314, -0.14853551983833313, -0.0903215...
I'm having trouble with a website I'm working on. I initially set up a robots.txt file to prevent robots from indexing it while I was working on it. However, now it's live and the robots.txt file has been deleted, but it still has not been crawled and shows that robots are disallowed access, even in the absence of a robots.txt file. The site is a WordPress-based website - everything seems to suggest that there should be no block for any crawlers. Running a search for `site:claimsadvicecentre.co.uk` should bring up at least 5 pages, however it's only listing the main page. What could be wrong here?
[ 0.00338916527107358, 0.008984735235571861, -0.0042605288326740265, 0.014293458312749863, 0.025119993835687637, 0.010194343514740467, 0.007402375340461731, 0.005991026293486357, -0.015966907143592834, -0.01704765111207962, -0.012859571725130081, 0.010987957939505577, -0.005426203832030296, ...
[ 0.5788829326629639, 0.1289149969816208, 0.510475754737854, -0.14596644043922424, 0.15625369548797607, -0.18501655757427216, 0.2406155914068222, 0.23618857562541962, -0.37894317507743835, -0.6238489151000977, 0.19578687846660614, 0.05627971887588501, -0.21468424797058105, 0.504771888256073,...
When clicking on the lollipop tree on the lollipop farm, the icon changes. The possible values, when cycled, are `*`, `cnd`, `!`, `+`, `?`, `/|\`, and `~`. Does this have any in game effect other than some basic customization?
[ -0.005310175009071827, 0.012422348372638226, -0.014593410305678844, 0.012652784585952759, -0.017591897398233414, -0.013740204274654388, 0.008172709494829178, 0.012815388850867748, -0.02197333611547947, 0.021418198943138123, -0.014264687895774841, 0.009384281001985073, -0.015595736913383007, ...
[ 0.023663947358727455, -0.12384643405675888, 0.5181592106819153, -0.06870352476835251, 0.126447394490242, 0.5508358478546143, 0.056911829859018326, 0.5620262026786804, -0.41376227140426636, -0.3638961911201477, -0.12301758676767349, 0.335840106010437, -0.2048824578523636, 0.1965697407722473...
I have a Warrior/Rogue-heavy character that has gotten down to the bottom floor once now. When I got there, Dredmor used a spell that killed me after one or two rounds. I'm considering adding Emomancy into the mix so I can get _Dampening Field of Angst_ (the antimagic field). Since I play on perma-death, I can't easily test this. I don't want to waste a skill slot for a single spell that might not even work. Does this spell affect him, or does he resist it like the final boss in most games?
[ 0.003009407315403223, 0.014092797413468361, -0.014905299060046673, 0.0018432815559208393, -0.019993571564555168, -0.03148699551820755, 0.007030665874481201, -0.006095031276345253, -0.011306319385766983, 0.014535943977534771, -0.014593919739127159, 0.01581932045519352, -0.01598750799894333, ...
[ 0.23931683599948883, 0.07331479340791702, 0.11191948503255844, 0.09224068373441696, -0.33318063616752625, 0.2697594165802002, 0.24613390862941742, -0.4015240967273712, -0.4146185517311096, -0.48134005069732666, 0.09694657474756241, 0.15464600920677185, 0.12531116604804993, 0.14035224914550...
There are four shadow achievements in Cookie Clicker which involve dungeons:

* ![basic](http://i.stack.imgur.com/UFiWq.png) **Getting even with the oven**: Defeat the **Sentient Furnace** in the factory dungeons
* ![basic](http://i.stack.imgur.com/UFiWq.png) **Now this is pod-smashing**: Defeat the **Ascended Baking Pod** in the factory dungeons
* ![chirpy](http://i.stack.imgur.com/aGqkO.png) **Chirped out**: Find and defeat **Chirpy**, the dysfunctional alarm robot
* ![bunny](http://i.stack.imgur.com/wDgpJ.png) **Follow the white rabbit**: Find and defeat the elusive **sugar bunny**

How can I get to the dungeons and earn these achievements in the live version of Cookie Clicker?
[ 0.001757109072059393, 0.0004339241422712803, -0.0033446678426116705, 0.0022828199435025454, -0.019839465618133545, 0.0022632665932178497, 0.004969730973243713, 0.0191018208861351, -0.011427805759012699, 0.018793970346450806, -0.017738042399287224, 0.004802226088941097, 0.0029211994260549545,...
[ -0.18253083527088165, -0.289546936750412, 0.26184985041618347, 0.18457455933094025, -0.35917553305625916, 0.24694551527500153, 0.25781169533729553, 0.10216851532459259, -0.5041592717170715, -0.8137629628181458, 0.20289643108844757, 0.41595304012298584, -0.1504821628332138, -0.0917788743972...
We are attempting to speed up the installation of Oracle nodes for a RAC installation. This requires that we get ssh installed and configured so that it doesn't prompt for a password. The problem is: on first usage, we are prompted with

    RSA key fingerprint is 96:a9:23:5c:cc:d1:0a:d4:70:22:93:e9:9e:1e:74:2f.
    Are you sure you want to continue connecting (yes/no)? yes

Is there a way to avoid that, or are we doomed to connect at least once on every server from every server manually?
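Two usual ways around the prompt, sketched below with placeholder host names: pre-populate known_hosts with ssh-keyscan, or disable strict host-key checking for the cluster hosts in ~/.ssh/config (the latter trades away protection against man-in-the-middle attacks).

    # collect the host keys of all nodes up front (node1..node3 are placeholders)
    ssh-keyscan -t rsa node1 node2 node3 >> ~/.ssh/known_hosts

    # or, in ~/.ssh/config on every node:
    Host node*
        StrictHostKeyChecking no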
[ 0.0030260537751019, 0.0009777036029845476, -0.006393912248313427, 0.007469001226127148, -0.023675797507166862, -0.0004556485218927264, 0.005389830097556114, -0.006501789204776287, -0.0071813082322478294, -0.001347661018371582, -0.0029778238385915756, -0.0019101849757134914, -0.01120964065194...
[ 0.3496522605419159, 0.154164120554924, 0.44872069358825684, 0.2446049004793167, 0.5382049679756165, -0.20450837910175323, 0.4127599596977234, 0.08635365962982178, -0.18458378314971924, -0.33250758051872253, -0.08539265394210815, 0.45229843258857727, -0.37294119596481323, 0.2328386753797531...
Here is the commonly recommended SQL command for removing post revisions and cleaning up the wp database:

    DELETE a,b,c FROM `wp_posts` a
    LEFT JOIN `wp_term_relationships` b ON (a.ID = b.object_id)
    LEFT JOIN `wp_postmeta` c ON (a.ID = c.post_id)
    WHERE a.post_type = 'revision';

How can I modify it to keep, let's say, the last 3 revisions?
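One possible shape for such a query, keeping only revisions that have fewer than three newer siblings of the same parent post. This is only a sketch (back up the database first); the extra derived-table wrapper is there because MySQL refuses to delete from a table that is referenced directly in a subquery (error 1093), and orphaned postmeta and term relationships would still need the original cleanup pass afterwards.

    DELETE FROM wp_posts
    WHERE post_type = 'revision'
      AND ID IN (
        SELECT ID FROM (
          SELECT r.ID
          FROM wp_posts r
          WHERE r.post_type = 'revision'
            AND (
              SELECT COUNT(*) FROM wp_posts newer
              WHERE newer.post_type = 'revision'
                AND newer.post_parent = r.post_parent
                AND newer.post_date > r.post_date
            ) >= 3
        ) AS old_revisions
      );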
[ 0.016922371461987495, 0.01834263652563095, -0.013629157096147537, 0.013075619004666805, -0.022189535200595856, 0.015070278197526932, 0.007547405548393726, 0.018857665359973907, -0.008023700676858425, 0.0037028163205832243, -0.014434834010899067, 0.009652707725763321, 0.0011058044619858265, ...
[ 0.045026764273643494, 0.18770618736743927, 0.6592872738838196, -0.16086287796497345, 0.12921692430973053, 0.04974818974733353, 0.3509596288204193, -0.3795225918292999, -0.20619216561317444, -0.5017192363739014, -0.03179410099983215, 0.39799249172210693, -0.15915724635124207, 0.653822839260...
I have to forecast sales, that is, how much of a product will be sold in a particular store. I have time series data for the last two years and am forecasting for 2014. The variables are promotion flag (Yes/No), promotion period, location in the store, and price discount. These are all categorical variables. For this I am using a regression method where the dependent variable is sales and the independent variables are the categorical variables mentioned above. The analysis is done in SPSS, where I have used step-wise and backward regression. I want to know why the regression model is under-forecasting. Is there a way to improve the forecast?
[ 0.02710121124982834, 0.01214690413326025, -0.010651757940649986, 0.018614664673805237, 0.006163467653095722, 0.0012421037536114454, 0.010032976046204567, 0.009246679954230785, -0.009827209636569023, -0.022845014929771423, -0.020384399220347404, 0.005205028690397739, -0.010692013427615166, ...
[ 0.34935128688812256, 0.050695180892944336, 0.6419644951820374, -0.051323361694812775, 0.0331953726708889, 0.28598570823669434, 0.012673011049628258, 0.015503072179853916, -0.38172441720962524, -0.25893789529800415, 0.22795556485652924, 0.5569377541542053, 0.2829654812812805, 0.575317859649...
Suppose you have a set of charges that are Newtonian (not quantum and not fast-moving) point particles. They are subject to known (but not necessarily constant) external forces ($F_{ext}$), as well as mutual electrostatic forces ($F_s$). We want to estimate the _electromagnetic_ forces ($F_m$) on each charge; these forces will over time sap energy from the system. We want an estimate of $F_m$ that is accurate, in other words the _relative_ error in $F_m$ goes to zero in the Newtonian limit ($F_m$ itself will go to zero as well, hence my specification of relative and not absolute error). Is there a way to estimate this easily (without resorting to Maxwell's equations)? If so, how?
[ 0.012624835595488548, 0.0056172204203903675, -0.010981195606291294, 0.012447277083992958, 0.0017685159109532833, -0.018780667334794998, 0.006748995278030634, -0.017001882195472717, -0.009327664971351624, 0.0030716503970324993, 0.006246475037187338, 0.022238966077566147, -0.002729012630879879...
[ 0.18252374231815338, -0.05588701739907265, 0.3700452446937561, 0.051572661846876144, 0.3154280185699463, 0.23309224843978882, -0.35268473625183105, -0.5256469249725342, -0.4155655801296234, -0.3298161029815674, -0.1640433967113495, 0.37925469875335693, -0.467949777841568, 0.188915804028511...
I'm running a linear regression model on a post-intervention test score, controlling for the pre-intervention test score. I used a Box-Cox transformation on the post-intervention test score to normalize it. Since there is no normality assumption imposed on the independent variables, I plan not to transform the pre-intervention test score; but since the pre- and post-test are on the same scale, should I transform the pre-test as well just to be consistent?
[ 0.01470690406858921, 0.01583024486899376, -0.004860688932240009, 0.02226175181567669, -0.002440447686240077, 0.009066242724657059, 0.01512450072914362, -0.036713942885398865, -0.01918913424015045, -0.02003665082156658, -0.001263490179553628, 0.012821851298213005, -0.010012158192694187, 0.0...
[ 0.05947398766875267, -0.26028046011924744, 0.07717248052358627, 0.07028988003730774, 0.15675826370716095, 0.2753830552101135, 0.030648810788989067, -0.22270292043685913, -0.0960523709654808, -0.4953538775444031, 0.39682674407958984, 0.4033016264438629, -0.16620081663131714, 0.4422873556613...
Let's assume that Johnny Physicist has decided to move from his poor, dingy second-story apartment into his much-deserved home. Without making modifications to the existing structures, what would be the most work-efficient way of moving all of his possessions? [Let's assume that the apartment is outside-facing and has stairs leading up to it.] [Also let's assume that his possessions are a mix of carryable boxes and large items.]
[ 0.00216917647048831, 0.026323514059185982, -0.005858732853084803, 0.015835337340831757, -0.014211805537343025, -0.019814500585198402, 0.00797228142619133, 0.014325244352221489, -0.016594527289271355, -0.011497044004499912, -0.020242435857653618, 0.0166562981903553, 0.0003392851212993264, -...
[ 0.09242598712444305, 0.34896594285964966, 0.2458697408437729, 0.21901941299438477, 0.510019063949585, 0.09805023670196533, 0.2535978853702545, 0.04136069118976593, -0.33122313022613525, -0.3646080791950226, -0.6088437438011169, -0.6408411860466003, -0.22832801938056946, 0.7865971922874451,...
I've seen _funniest_ a few times in that context, but isn't that a derivation of _funny_? Is there a superlative of _fun_ or do we really use _funniest_ for the lack of one?
[ 0.005730795674026012, 0.016484960913658142, -0.017277270555496216, 0.010839972645044327, -0.038249362260103226, -0.022194353863596916, 0.009436906315386295, 0.019444504752755165, -0.023300819098949432, -0.017163999378681183, 0.001355451880954206, 0.008918355219066143, 0.015031250193715096, ...
[ 0.13139179348945618, 0.23448419570922852, -0.16126005351543427, 0.2716067135334015, -0.33243128657341003, -0.13554468750953674, 0.36611947417259216, 0.46172410249710083, -0.6750458478927612, -0.052027538418769836, 0.11054091900587082, 0.43660426139831543, -0.02129419706761837, 0.3527251780...
I recently installed a full version of `TeXLive 2012` (around 3.2 GB installation). Following is a simple `LaTeX` code that I am unable to compile:

    \listfiles
    \documentclass{article}
    \usepackage{hyperref}
    \title{\LaTeX\ Document}
    \author{Joe the Student}
    \date{\today}
    \begin{document}
    \maketitle
    Hello world! My email is \href{mailto:my_email@server.org}{my\_email@server.org}
    \end{document}

I get the following error when I try to compile with `pdflatex`:

    This is pdfTeX, Version 3.1415926-2.4-1.40.13 (TeX Live 2012/Debian)
     restricted \write18 enabled.
    entering extended mode
    (./tex.tex
    LaTeX2e <2011/06/27>
    Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, loaded.
    (/usr/share/texlive/texmf-dist/tex/latex/base/article.cls
    Document Class: article 2007/10/19 v1.4h Standard LaTeX document class
    (/usr/share/texlive/texmf-dist/tex/latex/base/size10.clo))
    (/usr/share/texlive/texmf-dist/tex/latex/hyperref/hyperref.sty
    (/usr/share/texlive/texmf-dist/tex/generic/oberdiek/hobsub-hyperref.sty
    (/usr/share/texlive/texmf-dist/tex/generic/oberdiek/hobsub-generic.sty))
    (/usr/share/texlive/texmf-dist/tex/latex/graphics/keyval.sty)
    (/usr/share/texlive/texmf-dist/tex/generic/ifxetex/ifxetex.sty)
    (/usr/share/texlive/texmf-dist/tex/latex/oberdiek/kvoptions.sty)
    (/usr/share/texlive/texmf-dist/tex/latex/hyperref/pd1enc.def)
    (/usr/share/texlive/texmf-dist/tex/latex/latexconfig/hyperref.cfg)

    ! LaTeX Error: File `url.sty' not found.

    Type X to quit or <RETURN> to proceed,
    or enter new name. (Default extension: sty)

    Enter file name: )
    Package hyperref Message: Driver (autodetected): hpdftex.

    (/usr/share/texlive/texmf-dist/tex/latex/hyperref/hpdftex.def
    (/usr/share/texlive/texmf-dist/tex/latex/oberdiek/rerunfilecheck.sty))
    (./tex.aux)
    (/usr/share/texlive/texmf-dist/tex/latex/hyperref/nameref.sty
    (/usr/share/texlive/texmf-dist/tex/generic/oberdiek/gettitlestring.sty))
    (./tex.out) (./tex.out)
    [1{/var/lib/texmf/fonts/map/pdftex/updmap/pdftex.map}] (./tex.aux)

     *File List*
    article.cls         2007/10/19 v1.4h Standard LaTeX document class
    size10.clo          2007/10/19 v1.4h Standard LaTeX file (size option)
    hyperref.sty        2012/05/13 v6.82q Hypertext links for LaTeX
    hobsub-hyperref.sty 2012/05/28 v1.13 Bundle oberdiek, subset hyperref (HO)
    hobsub-generic.sty  2012/05/28 v1.13 Bundle oberdiek, subset generic (HO)
    hobsub.sty          2012/05/28 v1.13 Construct package bundles (HO)
    infwarerr.sty       2010/04/08 v1.3 Providing info/warning/error messages (HO)
    ltxcmds.sty         2011/11/09 v1.22 LaTeX kernel commands for general use (HO)
    ifluatex.sty        2010/03/01 v1.3 Provides the ifluatex switch (HO)
    ifvtex.sty          2010/03/01 v1.5 Detect VTeX and its facilities (HO)
    intcalc.sty         2007/09/27 v1.1 Expandable calculations with integers (HO)
    ifpdf.sty           2011/01/30 v2.3 Provides the ifpdf switch (HO)
    etexcmds.sty        2011/02/16 v1.5 Avoid name clashes with e-TeX commands (HO)
    kvsetkeys.sty       2012/04/25 v1.16 Key value parser (HO)
    kvdefinekeys.sty    2011/04/07 v1.3 Define keys (HO)
    pdftexcmds.sty      2011/11/29 v0.20 Utility functions of pdfTeX for LuaTeX (HO)
    pdfescape.sty       2011/11/25 v1.13 Implements pdfTeX's escape features (HO)
    bigintcalc.sty      2012/04/08 v1.3 Expandable calculations on big integers (HO)
    bitset.sty          2011/01/30 v1.1 Handle bit-vector datatype (HO)
    uniquecounter.sty   2011/01/30 v1.2 Provide unlimited unique counter (HO)
    letltxmacro.sty     2010/09/02 v1.4 Let assignment for LaTeX macros (HO)
    hopatch.sty         2012/05/28 v1.2 Wrapper for package hooks (HO)
    xcolor-patch.sty    2011/01/30 xcolor patch
    atveryend.sty       2011/06/30 v1.8 Hooks at the very end of document (HO)
    atbegshi.sty        2011/10/05 v1.16 At begin shipout hook (HO)
    refcount.sty        2011/10/16 v3.4 Data extraction from label references (HO)
    hycolor.sty         2011/01/30 v1.7 Color options for hyperref/bookmark (HO)
    keyval.sty          1999/03/16 v1.13 key=value parser (DPC)
    ifxetex.sty         2010/09/12 v0.6 Provides ifxetex conditional
    kvoptions.sty       2011/06/30 v3.11 Key value format for package options (HO)
    pd1enc.def          2012/05/13 v6.82q Hyperref: PDFDocEncoding definition (HO)
    hyperref.cfg        2002/06/06 v1.2 hyperref configuration of TeXLive
    hpdftex.def         2012/05/13 v6.82q Hyperref driver for pdfTeX
    rerunfilecheck.sty  2011/04/15 v1.7 Rerun checks for auxiliary files (HO)
    nameref.sty         2010/04/30 v2.40 Cross-referencing by name of section
    gettitlestring.sty  2010/12/03 v1.4 Cleanup title references (HO)
    tex.out
    tex.out
     ***********
    )</usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmr10.pfb>
    </usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmr12.pfb>
    </usr/share/texlive/texmf-dist/fonts/type1/public/amsfonts/cm/cmr17.pfb>
    Output written on tex.pdf (1 page, 34458 bytes).
    Transcript written on tex.log.

Compilation halts at the prompt:

    Type X to quit or <RETURN> to proceed,
    or enter new name. (Default extension: sty)

    Enter file name:

and then proceeds and **does create** the pdf as expected. However I don't understand why I get the compilation error in the first place. I **successfully verified** that the packages `hyperref` and `url` are installed using the command:

    tlmgr show <pkg-name>

However, I read somewhere that `url` is part of `ltxmisc`, and the output of

    ls /usr/share/texlive/texmf-dist/tex/latex/ltxmisc/

has `url.sty` missing. Please let me know what could be wrong.
[ -0.006889287382364273, -0.00195880513638258, -0.008186878636479378, 0.00858115591108799, 0.02313660830259323, 0.01680479571223259, 0.007163808215409517, -0.0033688731491565704, -0.00912158191204071, -0.019206328317523003, 0.000170400133356452, -0.0033897673711180687, 0.01742006465792656, 0...
[ 0.0428621731698513, 0.3188421428203583, 0.8554659485816956, -0.24002692103385925, 0.3206336498260498, -0.043574970215559006, 0.571828305721283, 0.15915463864803314, -0.0388982892036438, -0.6826740503311157, 0.04932057112455368, 0.28709977865219116, 0.07621712982654572, 0.32454192638397217,...
I'm trying to plot some data where the X axis is logarithmic. The data runs from ~30 microseconds up to 10 milliseconds. It looks much cleaner to have the x-ticks looking like {0.1 ms, 1 ms, 10 ms} than {10^-4 s, 10^-3 s, 10^-2 s}. In other words, I would like my tick labels to be presented in fixed point (i.e., not as exponentials), and scaled (multiplied by 1000). To achieve this effect, I've tried using

    \documentclass{standalone}
    \usepackage{pgfplots}
    \begin{document}
    \begin{tikzpicture}
    \begin{semilogxaxis}
      [xmin=1e-6, xmax=1e-3,
       domain=1e-6:1e-3,
       scaled x ticks=real:1e-3,
       xtick scale label code/.code={},
       log ticks with fixed point]
      \addplot {x};
    \end{semilogxaxis}
    \end{tikzpicture}
    \end{document}

but logarithmic axes seem to ignore the "scaled x ticks" instructions. Any help would be much appreciated. Thanks,
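One blunt workaround (just a sketch, with tick positions chosen to match the axis limits in the example above) is to bypass the automatic formatting entirely and give the ticks and their labels by hand:

    \begin{semilogxaxis}
      [xmin=1e-6, xmax=1e-3, domain=1e-6:1e-3,
       xtick={1e-6,1e-5,1e-4,1e-3},
       xticklabels={0.001 ms, 0.01 ms, 0.1 ms, 1 ms}]
      \addplot {x};
    \end{semilogxaxis}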
[ 0.0005780498031526804, 0.008040102198719978, -0.03274215757846832, -0.010081465356051922, 0.014642716385424137, -0.010274712927639484, 0.009035837836563587, 0.011613675393164158, -0.010393082164227962, -0.007984403520822525, -0.010132846422493458, -0.005970992147922516, -0.010784327052533627...
[ 0.43184787034988403, -0.17119985818862915, 0.31054002046585083, 0.11729216575622559, -0.0030721465591341257, 0.4401800036430359, 0.16590547561645508, -0.15543167293071747, -0.09792639315128326, -0.812165379524231, 0.5001064538955688, 0.14072374999523163, -0.10939453542232513, 0.43683037161...
I am using the (I think default) **Calendar** application on my Samsung Galaxy S phone. It shows blue blocks for my events in the 7-day view. Is there a way to **change this blue color**? So that, e.g., I can mark private events in green and appointments with customers in red? Or is there a way to **show some text in the blocks**? Or, if it is not possible with the default calendar, do you know of another one which does allow this? Thanks.
[ 0.00719744898378849, 0.0016610054299235344, -0.010558752343058586, 0.009152241051197052, -0.0044952211901545525, 0.00030562205938622355, 0.008841974660754204, 0.03447486832737923, -0.013791055418550968, -0.008908976800739765, -0.016351696103811264, -0.0032034534960985184, 0.01942914351820945...
[ 0.27588188648223877, -0.21388491988182068, 0.741179347038269, -0.13236266374588013, -0.15951287746429443, -0.1422838419675827, 0.576904833316803, 0.6072053909301758, -0.45954492688179016, -0.71735018491745, -0.16881848871707916, 0.4677409529685974, -0.19518759846687317, -0.0132111357524991...
Is there a Hamiltonian reformulation of gravity? If so, and if we use the usual quantization scheme, can we not quantize gravity? And in terms of a gauge theory with the potential $ A_{\mu}^{i} $, how can we get the Schrödinger equation?
[ -0.010754643008112907, 0.010899312794208527, 0.005006193183362484, 0.01504463143646717, 0.0015465464675799012, -0.014813615009188652, 0.012405338697135448, -0.006525424774736166, -0.018014604225754738, -0.025671128183603287, -0.014725932851433754, 0.01208349410444498, -0.030061451718211174, ...
[ 0.12235260009765625, -0.20687928795814514, 0.3598082959651947, 0.08738363534212112, -0.00982842780649662, 0.1544775366783142, 0.12443508952856064, -0.4929392635822296, -0.26711201667785645, -0.2633723020553589, -0.10087922215461731, 0.5197401642799377, -0.2798597514629364, 0.18726763129234...
I simply want to use the `\multimapboth` symbol, which belongs to the `txfonts` package; however, I don't want to install this package, as it changes some formatting in my document... How can I do this? Thank you in advance. EDIT: I'm looking for something like the answer to this question, I just don't know how I can find more details about my symbol...
[ 0.0005811623414047062, 0.0036188382655382156, -0.007681895047426224, 0.008738595061004162, -0.00943751260638237, 0.010580062866210938, 0.005533750168979168, 0.02670714072883129, -0.01839602179825306, -0.01613403670489788, -0.009401364251971245, 0.0007085349643602967, -0.003061472438275814, ...
[ 0.5611993074417114, 0.20432904362678528, 0.41694214940071106, -0.003473342629149556, 0.16875381767749786, -0.36821797490119934, 0.5654206871986389, 0.03258632868528366, -0.10470551997423172, -0.7267662286758423, -0.15291467308998108, 0.44149303436279297, -0.22605963051319122, 0.22292597591...
My question is: if I'm running the ArcMap Dissolve tool through a Python script, how can I tell it to check for attributes in one field, and if that field is empty, to perform the tool on another field? To be more specific, I would like the Dissolve tool to check for route numbers in a field called [rt_shrt_nm] (which are provided when the route name is a numerical value). But if that field is empty (which it sometimes is, when the route has a textual name), it should perform the tool on the [rt_long_nm] field instead. This is what I'm working with now, just to give you a better idea, even though I know it's probably wildly wrong:

    if [rt_shrt_nm] is null:
        arcpy.Dissolve_management(OutShapesFCname, outGDB, ["rt_long_nm"], "", "", "")
    elif:
        arcpy.Dissolve_management(OutShapesFCname, outGDB, ["rt_shrt_nm"], "", "", "")
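Since Dissolve runs on the whole feature class rather than row by row, one way to get a per-feature fallback is to write a helper field holding whichever name is populated and dissolve on that instead. The sketch below reuses the field and variable names from the question plus a new text field invented here ("rt_name"); it is an outline, not a tested script.

    import arcpy

    fc = OutShapesFCname  # the input feature class from the question

    # add a helper field and fill it with the short name, falling back to the long name
    arcpy.AddField_management(fc, "rt_name", "TEXT", field_length=100)
    with arcpy.da.UpdateCursor(fc, ["rt_shrt_nm", "rt_long_nm", "rt_name"]) as rows:
        for row in rows:
            row[2] = row[0] if row[0] else row[1]
            rows.updateRow(row)

    # dissolve once, on the combined field
    arcpy.Dissolve_management(fc, outGDB, ["rt_name"], "", "", "")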
[ 0.01227486040443182, 0.008665468543767929, -0.00664969626814127, 0.020061058923602104, -0.03298335522413254, -0.01116333156824112, 0.009160486981272697, -0.0017919940873980522, -0.01428405474871397, 0.02173921838402748, -0.006450274493545294, 0.009593721479177475, -0.021695531904697418, 0....
[ 0.23343689739704132, 0.27922162413597107, 0.1039387658238411, -0.021471526473760605, 0.05911608412861824, -0.32861384749412537, 0.059682611376047134, -0.3972708284854889, -0.022637929767370224, -0.42618632316589355, 0.15290376543998718, 0.5960226655006409, -0.2474241554737091, -0.001775637...
WordPress automatically embeds YouTube videos if I use:

    [embed] http://www.youtube.com/watch?v=Xog1T5dUxcw [/embed]

This is great, but it doesn't work if I use it in a template file. I have a custom field where the admin can put a URL to a YouTube video. I want to get the video in the single post using the following code:

    <?php $custom = get_post_custom($post->ID); $url = $custom['_videoLink'][0]; ?>
    <div class="video">
        [embed]<?php $url; ?>[/embed]
    </video>

How can I convert the YouTube URL into an embed URL using the standard WordPress [embed] function?
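One function that exists for exactly this situation, outside the content filter, is wp_oembed_get(). A rough sketch using the custom-field key from the question, with only a minimal fallback:

    <?php
    $custom = get_post_custom( $post->ID );
    $url    = $custom['_videoLink'][0];
    $embed  = wp_oembed_get( $url );          // returns the embed HTML, or false
    ?>
    <div class="video">
        <?php echo $embed ? $embed : esc_url( $url ); ?>
    </div>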
[ 0.006531521212309599, -0.002560738008469343, -0.0014664039481431246, 0.016090940684080124, -0.004127041436731815, 0.0023242912720888853, 0.008258218877017498, 0.013298653066158295, -0.01734386757016182, -0.014571363106369972, -0.01524255983531475, 0.005164747126400471, 0.006612143013626337, ...
[ 0.4746027886867523, -0.039386484771966934, 0.6110738515853882, 0.07132630050182343, -0.14338377118110657, 0.27629372477531433, 0.3510562777519226, -0.025946347042918205, 0.3320329785346985, -0.751082718372345, -0.18818062543869019, 0.6785998344421387, -0.4950384795665741, -0.14892080426216...
There are a lot of Android resource types: layouts, strings, drawables and so on. I understand that the readability of their names is important, but I cannot come up with a table of rules for how to name them in the best way. Are there any best practices for that?
[ 0.016545869410037994, 0.022352809086441994, -0.0022005566861480474, 0.014100953936576843, -0.0247182659804821, 0.02816244773566723, 0.010438459925353527, 0.03987177833914757, -0.023005006834864616, -0.02511589601635933, -0.013683474622666836, 0.005760107655078173, 0.0007478729239664972, 0....
[ 0.6214422583580017, 0.36174148321151733, -0.040713295340538025, 0.1706615537405014, 0.23470544815063477, 0.29433050751686096, -0.026715287938714027, 0.16610057651996613, -0.09136780351400375, -0.549881637096405, 0.16472212970256805, 0.4853040874004364, -0.15574392676353455, -0.140014499425...
I'm using Mathematica 9.0.1. Sorry for using an image, but pasting code messed up the output formatting. I'd like to represent 25 Btu/(hr ft F). ![enter image description here](http://i.stack.imgur.com/LVn2r.jpg) All units are spelled correctly as identified in the second attempt... not sure what is going wrong in the first attempt? I've also tried: "DegreesFahrenheit" "DegreesFahrenheitDifference" "BritishThermalUnitsIT" No luck. I've seen other posts where people used the Wolfram Alpha capabilities to "interpret" which units were used... I hope I don't need internet connectivity for this simple task?
[ 0.0035998355597257614, 0.010050242766737938, -0.011825504712760448, 0.005573722999542952, -0.001759770791977644, -0.02563285082578659, 0.005168782081454992, 0.003762832609936595, -0.019329477101564407, -0.018702849745750427, 0.00595529330894351, 0.00457618897780776, -0.008961228653788567, ...
[ 0.3298657536506653, 0.3985055983066559, 0.851385235786438, -0.11721184104681015, -0.20910046994686127, 0.28254204988479614, 0.3374418020248413, -0.34749749302864075, -0.06482455134391785, -0.6281096339225769, -0.18628141283988953, 0.25773707032203674, 0.27718985080718994, 0.276768118143081...
Steam is now selling more than just games, and the non-game software clutters up the main page that shows new releases. ![enter image description here](http://i.stack.imgur.com/3aZq0.png) Is there a filter option or anything else I can do to keep Steam from showing me its Software selections? I see that there's a "Software" tab if I want to see just the Software selections, but the corresponding Games tab makes me filter down to a specific category of games, whereas I want to see all games.
[ -0.020565064623951912, -0.004494258668273687, 0.005759891588240862, 0.011493573896586895, 0.0012135040014982224, -0.004978800658136606, 0.006092133931815624, 0.001973153557628393, -0.012519320473074913, 0.02723173424601555, -0.008040702901780605, 0.01319056749343872, -0.004981246776878834, ...
[ 0.586702287197113, 0.09316074103116989, 0.35652780532836914, -0.16103729605674744, -0.2414175570011139, -0.4903901517391205, -0.07653848826885223, 0.1893666386604309, -0.4108690917491913, -0.22406570613384247, 0.3402779996395111, 0.3585807979106903, 0.2174210399389267, 0.6298001408576965, ...
I am really new to CFD simulation, and started on some simple algorithms recently. I then got introduced to the Riemann invariants. 1. Can anyone provide some physical interpretation? 2. Also, why is it the case that when we have an open tube and the flow is entering at a subsonic speed, only one characteristic, $dx/dt = u + a$, exists at that point?
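For reference, the textbook statement for 1-D isentropic flow (a worked-out version of what the question refers to, not tied to any particular solver): the quantities $$ J_{\pm} = u \pm \frac{2a}{\gamma - 1} $$ are constant along the characteristics $dx/dt = u \pm a$, with a third family $dx/dt = u$ carrying the entropy in the non-isentropic case. Physically, $J_+$ and $J_-$ are the pieces of information carried by right- and left-running acoustic waves riding on the flow. At a subsonic inflow boundary ($0 < u < a$), the speeds $u$ and $u + a$ point into the domain while $u - a$ points out of it, so only one characteristic reaches the boundary from the interior, and the remaining information has to be prescribed as boundary data.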
[ -0.027621636167168617, 0.022474780678749084, -0.006225492339581251, 0.01406491082161665, -0.006057423539459705, -0.025446200743317604, 0.00936951395124197, 0.011230450123548508, -0.015159585513174534, -0.02383533865213394, 0.0017210511723533273, 0.00978402141481638, -0.012192833237349987, ...
[ 0.2662058174610138, -0.196993887424469, 0.43812498450279236, 0.19219183921813965, -0.2913849651813507, -0.09291492402553558, -0.07283775508403778, -0.1301393210887909, 0.11003965139389038, -0.6280509829521179, 0.07187102735042572, 0.20463259518146515, -0.21665245294570923, 0.65442252159118...
This problem is in Di Francesco's book I. It's exercise 7.1: Calculate the norm of the following vector, where $\lvert h\rangle$ is the state of highest weight. $$L_{-1}^n\lvert h\rangle$$ I have tried to use the commutation relations of the $L$ operators and the fact that $L_1$ acting on $\lvert h\rangle$ gives $0$. But as the calculation goes on, things begin to be troublesome; I just find them too complicated. Commutation relations: $$[L_n,L_m]=(n-m)L_{n+m}+\frac{c}{12}\delta_{n+m,0}n(n^2-1) $$ It is in fact asking us to calculate $\langle h|L_1^nL_{-1}^n|h\rangle $, and we have the following relations: $$\langle h\rvert L_{-1}=0 \quad\mbox{and}\quad L_1\lvert h\rangle=0. $$
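A sketch of the standard recursion (assuming the highest-weight state is normalized, $\langle h\lvert h\rangle = 1$): using $[L_1,L_{-1}]=2L_0$ and $L_0 L_{-1}^{\,m}\lvert h\rangle=(h+m)L_{-1}^{\,m}\lvert h\rangle$, one finds $$ L_1 L_{-1}^{\,n}\lvert h\rangle=\sum_{k=0}^{n-1}L_{-1}^{\,k}\,[L_1,L_{-1}]\,L_{-1}^{\,n-1-k}\lvert h\rangle=2\sum_{k=0}^{n-1}(h+n-1-k)\,L_{-1}^{\,n-1}\lvert h\rangle=n(2h+n-1)\,L_{-1}^{\,n-1}\lvert h\rangle. $$ Writing $N_n=\langle h\rvert L_1^{\,n}L_{-1}^{\,n}\lvert h\rangle$, this gives $N_n=n(2h+n-1)\,N_{n-1}$ with $N_0=1$, hence $$ N_n=n!\,\prod_{k=1}^{n}(2h+k-1)=n!\,\frac{\Gamma(2h+n)}{\Gamma(2h)}. $$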
[ -0.006085533183068037, 0.01735982857644558, -0.008706962689757347, 0.006222670432180166, -0.013223482295870781, -0.02292622998356819, 0.005620677024126053, -0.02892087958753109, -0.011623602360486984, -0.008405857719480991, 0.0007480913773179054, 0.009374048560857773, -0.015536049380898476, ...
[ -0.213614821434021, -0.176823690533638, 0.44246724247932434, -0.32819175720214844, -0.2313368022441864, 0.1512850672006607, -0.08136378228664398, -0.287534236907959, -0.392528235912323, -0.532277524471283, 0.13044486939907074, 0.4429100453853607, -0.4457332491874695, 0.17574431002140045, ...
I'm building a custom homepage for a client where they want to have a few changeable boxes to link to specific pages, or posts, within the site. I've added custom fields to the homepage so that they need only enter the page/post ID, and then it will display the proper post or page. And I'd like it to be flexible enough that they can use either a post or a page. Right now, my code for the box is

    <?php query_posts('p='.$topright); ?>
    <?php while (have_posts()) : the_post(); ?>
    {title and featured image}
    <?php endwhile; ?>

where $topright is a variable already defined. (I've tested the variable with an echo and it is returning the proper ID number.) Unfortunately, WordPress seems to require that I use p=ID if it is a post, and page_id=ID if it is a page. So, if I designate the ID for a post, it is working fine, but not if I designate the ID for a page. Is there an alternative syntax I could use? Or, is there a conditional of some kind that might look at the ID and recognize whether it is a post or a page, so I could run the query with an IF ELSE?
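One possible route (a sketch, not tested against this exact setup) is to skip query_posts() and build a secondary WP_Query whose post_type is 'any', which lets the same 'p' parameter match posts, pages and public custom post types:

    <?php
    $box_query = new WP_Query( array(
        'p'         => (int) $topright,  // ID from the custom field
        'post_type' => 'any',
    ) );
    while ( $box_query->have_posts() ) : $box_query->the_post(); ?>
        {title and featured image}
    <?php endwhile;
    wp_reset_postdata(); ?>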
[ 0.00800306349992752, 0.007733371574431658, 0.0008619982982054353, 0.00670572929084301, -0.0029982198029756546, 0.013671666383743286, 0.006525167264044285, 0.006395428907126188, -0.014912893995642662, -0.009467607364058495, -0.0003573472495190799, 0.003684484399855137, -0.004771376959979534, ...
[ 0.9520999789237976, 0.08325944095849991, 0.5411201119422913, -0.1761120706796646, 0.03312167152762413, 0.0365748405456543, -0.04744889587163925, -0.34231874346733093, -0.3314664363861084, -0.6275765299797058, -0.10971176624298096, 0.09354642778635025, -0.5123934745788574, 0.035844095051288...
I would like to use a tablet for some mobile programming. It would be nice to read a PDF and immediately try the code on the same device. I've found various videos and tutorials on how to get Ubuntu running on an ex-Android tablet, so I'd like to ask:

* Has anybody here ever used Linux on ARM?
* How about driver support? I heard Nvidia is releasing Tegra drivers for Linux, but I guess a tablet without working WLAN wouldn't be too nice either.
* Did you ever program on ARM? Any problems with compilers or IDEs? Does Eclipse work?

Just to be clear: I'm not sure if Linux on ARM is a good idea to code on. Are there any major problems or limitations you know of?
[ -0.0021540375892072916, -0.0008140932768583298, -0.0035032378509640694, -0.0037834718823432922, -0.03874213621020317, -0.006166283506900072, 0.007330646272748709, 0.02236110158264637, -0.019411511719226837, -0.04252547770738602, -0.004022040404379368, 0.008499333634972572, -0.001911460072733...
[ 0.5747579336166382, 0.1783648580312729, -0.14482221007347107, 0.43314915895462036, -0.41650891304016113, -0.01436561718583107, -0.21947965025901794, 0.3556886613368988, -0.5258516073226929, -0.522739827632904, 0.29941901564598083, 0.5674618482589722, 0.05755434185266495, 0.0153682846575975...
After reaching level 80 and earning enough XP to prestige (81?), what happens if I choose not to prestige? Do I continue to rack up or accumulate XP (that can be seen when I eventually prestige), or am I stagnant at the max XP cap for the first set of ranks? I understand that guns will continue to level and that I can unlock titles and such, which is partly why I'm holding off leveling; I am close to finishing a challenge on the MK14 and don't want to lose my progress.
[ 0.013632550835609436, 0.012047088705003262, 0.004061034414917231, 0.0037292365450412035, -0.010879079811275005, -0.003394236322492361, 0.005758810322731733, -0.022344544529914856, -0.013992167077958584, 0.0024059906136244535, 0.0024620969779789448, 0.02350740320980549, 0.0005971903447061777,...
[ 0.3386852741241455, 0.1456538736820221, 0.66477370262146, 0.02995527721941471, -0.16712306439876556, -0.1887788474559784, 0.6977063417434692, -0.25032275915145874, -0.47181522846221924, -0.37554702162742615, 0.29926416277885437, 0.8461205363273621, 0.47812479734420776, 0.14716173708438873,...
I want to test how my site would behave when being spidered. However, I want to exclude all URLs containing the word "page". I tried:

    $ wget -r -R "*page*" --spider --no-check-certificate -w 1 http://mysite.com/

The `-R` flag is supposed to reject URL patterns containing the word "page". Except that it doesn't seem to work:

    Spider mode enabled. Check if remote file exists.
    --2014-06-10 12:34:56--  http://mysite.com/?sort=post&page=87729
    Reusing existing connection to [mysite.com]:80.
    HTTP request sent, awaiting response... 200 OK

How do I exclude spidering of such URLs?
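For what it's worth, `-R`/`--reject` is matched against the file-name part of the URL, not the query string. Newer wget builds (1.14 and later, if I recall correctly) have `--reject-regex`, which is matched against the complete URL; a sketch:

    wget -r --spider -w 1 --no-check-certificate \
         --reject-regex "page" http://mysite.com/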
[ -0.009167568758130074, 0.0013411163818091154, -0.0010284667368978262, 0.015346502885222435, -0.01883753389120102, 0.02595442906022072, 0.008033677004277706, -0.006432469934225082, -0.014369895681738853, 0.001986018382012844, -0.007154746446758509, -0.0032560538966208696, 0.005364391021430492...
[ 0.3701194226741791, 0.333006888628006, 0.608063817024231, -0.0935322716832161, -0.2114706039428711, -0.020386988297104836, 0.9760401844978333, 0.11177261173725128, -0.4843340814113617, -0.7293261885643005, 0.010057434439659119, 0.3888896703720093, -0.35441458225250244, 0.312598317861557, ...
I am trying to type (and evaluate) expressions of the following form: $$ G^{a,b} $$ in Mathematica. I've tried the obvious

    G^(a, b)

or

    Superscript[G, a, b]

But both of these give the same error:

    Syntax::tsntxi: "a,b" is incomplete; more input is needed.

I know one solution is to use Symbolize from the Notation package

    << Notation`
    Symbolize[ParsedBoxWrapper[SuperscriptBox["G", RowBox[{"_", ",", "_"}]]]]

however this solution will not suffice, as I want to perform operations such as the following:

    In[3]:= Sum[G^(a, b), {a, 1, 2}, {b, 1, 2}]
    Out[3]= 4 G^(a, b)

Is there any sensible way to get around this? Why is Superscript so much more picky than Subscript? Thanks in advance.
[ 0.006895733065903187, -0.003891641041263938, -0.009033136069774628, 0.014233918860554695, 0.008581374771893024, 0.007485773880034685, 0.009341687895357609, 0.010937739163637161, -0.01875392533838749, -0.022155452519655228, -0.015412955544888973, -0.005827547982335091, -0.021646954119205475, ...
[ 0.0027783175464719534, 0.27474668622016907, 0.20500412583351135, -0.056879498064517975, -0.13690759241580963, 0.2654396593570709, 0.3585166931152344, -0.265807569026947, 0.18272244930267334, -0.7301324605941772, 0.08979631215333939, 0.340027391910553, -0.4558049738407135, -0.17321975529193...
I recently bought Doodle Jump for Android, and keep dying because I accidentally land on the broken platforms. Do these ever serve a useful purpose, e.g. to slow you down if you're travelling at high speed? Or are they there merely to distract from the viable platforms?
[ -0.03006969951093197, 0.010804702527821064, -0.02474360167980194, -0.00039891450433060527, -0.03226865828037262, -0.02599811926484108, 0.007551165763288736, 0.028536217287182808, -0.01134627964347601, -0.002521438291296363, -0.017190489917993546, 0.021444065496325493, -0.0005251027178019285,...
[ 0.5077734589576721, 0.05828889459371567, -0.09693660587072372, 0.7134377956390381, -0.012101463042199612, -0.21175514161586761, 0.17007993161678314, 0.31964725255966187, -0.32541653513908386, -0.25241750478744507, 0.4702048897743225, -0.08022089302539825, -0.07237046957015991, -0.008772887...
Which is correct: > * Today is one of the warmer days this month. > * Today is one of the warmest days this month. > I hear the first used almost exclusively on television news.
[ -0.0368148609995842, 0.01313813216984272, -0.07990916818380356, 0.0181104838848114, -0.004857932683080435, 0.02346303127706051, 0.018949022516608238, -0.04993215203285217, -0.01680068112909794, -0.018754512071609497, 0.008839407935738564, 0.00818469375371933, -0.009827966801822186, 0.00416...
[ 0.3795190453529358, 0.19255420565605164, 0.42565566301345825, -0.3230896592140198, -0.1883392632007599, -0.18520109355449677, 0.5164005756378174, 0.8538001775741577, -0.5242090821266174, -0.6215917468070984, -0.2854471504688263, 0.39302241802215576, 0.06328132003545761, 0.21118615567684174...
I have a package (.rpm) on CentOS 6.5 that requires a .so file, which I also have on my machine. When I attempt to install the package I get an error stating unresolved dependencies regarding the .so file. I have tried placing the .so file in the same directory, in /lib/, in /usr/lib/, and setting LD_LIBRARY_PATH to /usr/lib/, but regardless, the package does not resolve the dependency. How can I make the .so available to the package I am installing?
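RPM resolves library dependencies against its own database of what installed packages provide, not against files sitting on disk, so dropping the .so somewhere on the library path isn't enough on its own. A rough sketch of the usual options (package and library names below are placeholders):

    # see exactly which sonames the package asks for
    rpm -qp --requires mypackage.rpm

    # find an installable package that provides the library
    yum provides '*/libfoo.so.1'

    # last resort: skip RPM's dependency check; the library must still be
    # findable at run time (e.g. a file in /etc/ld.so.conf.d/ plus ldconfig)
    rpm -ivh --nodeps mypackage.rpm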
[ 0.006992511451244354, -0.0033241487108170986, -0.018505394458770752, 0.02209807001054287, -0.007116254884749651, 0.020487261936068535, 0.0073499297723174095, -0.011764667928218842, -0.011782877147197723, -0.01337440125644207, -0.008873173967003822, 0.002198663540184498, -0.025362595915794373...
[ 0.4073178172111511, -0.02862670086324215, 0.4898894727230072, -0.028015069663524628, 0.21114473044872284, -0.3246663510799408, 0.1396348476409912, -0.18893377482891083, -0.21779708564281464, -0.8063457012176514, -0.19835130870342255, 0.8848245739936829, -0.37242236733436584, 0.016891123726...
I would like to somehow retrieve a list of all current admin menu items, even the ones created by themes/plugins. Is it possible?
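The registered entries end up in the global $menu and $submenu arrays, so one low-tech sketch is to dump them from a late admin_menu hook, after themes and plugins have added theirs:

    add_action( 'admin_menu', function () {
        global $menu, $submenu;
        error_log( print_r( $menu, true ) );
        error_log( print_r( $submenu, true ) );
    }, 999 );  // late priority so plugin/theme entries are already registered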
[ 0.04565136134624481, 0.018124302849173546, -0.0100655946880579, -0.004019813612103462, -0.012195120565593243, -0.003018368035554886, 0.010871564969420433, 0.015122991986572742, -0.027308762073516846, 0.02451801300048828, -0.012172195129096508, 0.026793669909238815, 0.02234741672873497, -0....
[ 0.47547343373298645, 0.14749212563037872, 0.4445863664150238, 0.3170727491378784, 0.38552916049957275, 0.06672258675098419, -0.30250638723373413, 0.29168257117271423, -0.30357876420021057, -0.36406174302101135, -0.1750839203596115, 0.07137307524681091, 0.001493789372034371, 0.0805315449833...
In a repeated measures design we measure a particular variable at different time points from the same subjects. In animal experiments, if animals are sacrificed at every time point to measure a variable, then the measurements at every time point are from different animals, even though the measured variable is the same. Can such an experimental setting be called a repeated measures design?
[ 0.0214063823223114, 0.026223404332995415, -0.006190939340740442, 0.028928939253091812, 0.043731749057769775, 0.0060921404510736465, 0.014735453762114048, -0.0042790574952960014, -0.01743282377719879, -0.014074312523007393, -0.003737101098522544, 0.022136157378554344, 0.014426657930016518, ...
[ 0.7870995998382568, -0.30687063932418823, -0.19816477596759796, 0.29199665784835815, -0.06291689723730087, 0.418367475271225, 0.2644912302494049, -0.3244001269340515, -0.25747138261795044, -0.4277256429195404, 0.24852728843688965, 0.2629868984222412, -0.47069302201271057, 0.552967131137847...
Is there any tool that tells me the min and max battery consumption rate of an app?
[ 0.020969130098819733, -0.030971618369221687, 0.004305948968976736, 0.003801533253863454, -0.019589122384786606, -0.0240112766623497, 0.019440289586782455, -0.021284226328134537, -0.027911126613616943, 0.018738524988293648, -0.0035984793212264776, 0.020060980692505836, 0.025406956672668457, ...
[ 0.7655277252197266, 0.06325468420982361, 0.13453838229179382, 0.40777814388275146, 0.26403236389160156, 0.03910036012530327, 0.1892128735780716, 0.3781569004058838, -0.22311252355575562, -0.10269822180271149, 0.3751092851161957, 0.5799769163131714, 0.07455708831548691, -0.3449266850948334,...
I write math papers in LaTeX. I wonder whether there exists some program which performs a semantic check. I want it to warn me, for example, about grammar mistakes, and to know how to handle LaTeX: warn me, for example, if I open a ( but do not close it, and check for common math mistakes, e.g. +\ldots+ usually should be +\cdots+.
[ 0.019875729456543922, -0.010115636512637138, -0.0169588103890419, 0.022063221782445908, 0.029448052868247032, 0.022818326950073242, 0.009255761280655861, -0.0006993198185227811, -0.014308160170912743, 0.02749582752585411, 0.0062611219473183155, -0.0018788111628964543, 0.010786485858261585, ...
[ -0.2617042064666748, 0.36145102977752686, 0.19066955149173737, -0.1122923344373703, -0.22432170808315277, -0.032470230013132095, 0.4039878249168396, 0.05180303007364273, -0.15602464973926544, -0.4432467222213745, 0.012055894359946251, 0.08487333357334137, -0.1480763852596283, 0.18854622542...
I am an electronics and communication engineer, specializing in signal processing. I have some touch with the mathematics concerning communication systems and also with signal processing. I want to utilize this knowledge to study and understand Quantum Mechanics from the perspective of an engineer. I am not interested in reading about the historical development of QM, and I am also not interested in the particle formalism. I know things started from the wave-particle duality, but my current interest is not to study QM from that angle. What I am interested in is to start studying a treatment of QM from very abstract notions such as 'what is an observable? (without referring to any particular physical system)' and 'what is meant by incompatible observables?', and then go on to what a state vector is and its mathematical properties. I am okay with dealing with the mathematics and abstract notions, but I somehow do not like the notions of a particle, velocity, and momentum and such physical things, as they directly contradict my intuition, which is based on classical mechanics (basic stuff, and not the mathematical treatment involving phase space, as I am not much aware of it). I request you to give some suggestions on the advantages and pitfalls of venturing into such a thing. I also request you to provide good reference books or textbooks which give such a treatment of QM without assuming any previous knowledge of QM.
[ 0.016350936144590378, 0.011025415733456612, -0.0017184833995997906, -0.003994242288172245, -0.01142465602606535, -0.005144067108631134, 0.006949977949261665, 0.0031963384244590998, -0.01189368311315775, -0.006586805917322636, 0.0026957860682159662, 0.016262654215097427, -0.018391236662864685...
[ 0.6627761721611023, 0.28480130434036255, 0.0099448561668396, -0.19466881453990936, -0.21932141482830048, -0.15350519120693207, 0.2180957943201065, 0.26364758610725403, -0.13246268033981323, -0.44323328137397766, 0.025102511048316956, 0.5947982668876648, -0.02701987884938717, 0.371815502643...
I learned earlier today that you can move imperial levels to a lower-numbered floor to change the required items for completing an imperial objective. I had an imperial level under construction and I moved it to floor -1. Now I'm unable to click on the hologram level to view my current objectives. Will this fix itself when the floor is done being constructed? I'm just scared I broke my game.
[ 0.02191535010933876, 0.019312508404254913, -0.018091212958097458, -0.009227276779711246, 0.009230135008692741, 0.005723306443542242, 0.012206584215164185, -0.0013313726522028446, -0.02951594442129135, 0.00877049658447504, -0.03266602009534836, 0.020269664004445076, -0.016036836430430412, 0...
[ 0.4140184223651886, 0.3575504422187805, 0.5076079368591309, 0.34125712513923645, 0.18970094621181488, 0.22805927693843842, 0.024290915578603745, -0.1889500916004181, -0.6377609968185425, -0.7679622173309326, 0.2049563229084015, 0.0868007242679596, 0.010148242115974426, 0.1648767739534378, ...
I have moved (mv) a pretty large directory on my NAS (Linux based), but had to interrupt the procedure. Not being a regular Linux user, I thought I could just continue and merge the rest in later.

    mv /oldisk/a /newdisk

The procedure is halfway done, so the rest of /oldisk/a still exists, and /newdisk/a with the already copied files is already present. I have no idea which files have already been copied. BTW, under /oldisk/a there are, of course, plenty of subdirectories. What would be the best way to move / merge the remaining files to /newdisk/a?
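A sketch of one common way to finish such a merge with rsync, assuming both trees are still intact: it re-copies anything missing or half-written, deletes each source file only after it has been transferred, and then clears out the empty directories left behind.

    rsync -a --remove-source-files /oldisk/a/ /newdisk/a/
    find /oldisk/a -depth -type d -empty -delete

The trailing slashes matter: they make rsync merge the contents of /oldisk/a into /newdisk/a rather than creating /newdisk/a/a.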
[ 0.02110789157450199, 0.017942525446414948, -0.008066286332905293, 0.02989927865564823, 0.01780647784471512, 0.010981276631355286, 0.009532390162348747, 0.0028984458185732365, -0.024573946371674538, 0.0031832277309149504, -0.010260052978992462, 0.022028818726539612, 0.005361699499189854, 0....
[ 0.377418577671051, 0.19034411013126373, 0.8289231657981873, -0.22299854457378387, 0.12214343249797821, -0.15539705753326416, 0.22057074308395386, -0.09258505702018738, -0.6210383176803589, -0.488493412733078, -0.052851077169179916, 0.5564066171646118, -0.07817748188972473, 0.51596617698669...
I'm trying to create a script which runs a command and compares the output of that command to a certain string to see if it contains that substring, ignoring the case of the string needed to be compared. I hope what I'm asking is clear, though.
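A sketch of the usual pattern (the command and search string are placeholders): capture the output, then do a case-insensitive substring test, either with grep -i or with bash's case-folding expansion.

    output=$(some_command)

    # portable: grep -i ignores case, -q keeps it silent
    if printf '%s' "$output" | grep -qi 'needle'; then
        echo "substring found"
    fi

    # bash-only alternative (bash 4+)
    if [[ "${output,,}" == *"needle"* ]]; then
        echo "substring found"
    fi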
[ 0.0045328387059271336, 0.022580143064260483, -0.0004053035518154502, 0.015349273569881916, -0.025600546970963478, 0.03138967230916023, 0.010805407539010048, 0.007679241243749857, -0.03206923231482506, 0.017391283065080643, -0.017398910596966743, -0.0057446095161139965, 0.00781421922147274, ...
[ 0.611454963684082, 0.15675809979438782, -0.13512486219406128, 0.040859464555978775, -0.09380742907524109, 0.2113611251115799, 0.0020830167923122644, 0.052204884588718414, 0.04534987360239029, -0.38948214054107666, 0.16229866445064545, 0.36183151602745056, -0.23943792283535004, 0.1431626975...
I have an issue with the pagination function in WordPress. Although the anchor text is correct, when I hit the URL, for some reason WP changes it from `www.site.com/page?page=2` to `www.site.com/page/2`, thus generating a 404 error. My pagination call is like the following:

    $paginate_links = paginate_links( array(
        'base' => preg_replace('/\?.*/', '/', get_pagenum_link()) . '%_%',
        'prev_text' => __('«'),
        'next_text' => __('»'),
        'mid_size' => 5,
        'current' => $current,
        'total' => $totalpages
    ));

The strange thing is that using the same function on an archive page I get this working properly. Any idea on what is wrong?
[ -0.016948772594332695, 0.013211284764111042, -0.0005948650650680065, 0.019982794299721718, 0.0051780566573143005, -0.004251422360539436, 0.007107567507773638, -0.0037103439681231976, -0.014314436353743076, 0.0032218010164797306, -0.003057230496779084, 0.0028466135263442993, -0.01314420253038...
[ -0.023746909573674202, 0.051278453320264816, 0.5033450126647949, -0.11948920786380768, -0.023223934695124626, 0.08881288766860962, 0.38472244143486023, 0.35434630513191223, -0.21373137831687927, -0.7270051836967468, -0.09051229059696198, 0.19338610768318176, -0.4012832045555115, 0.35744550...
I am trying to create a GeoTIFF and the software (Raster Design) is asking for a code that I thought was an EPSG code. However, while trying to find out which EPSG code corresponds to us48, I noticed that the dialog actually says GeoTIFF code. Now I am really in a loop. AutoCAD Map contains a coordinate system named us48 albers equal area, nad 27 meter, orig lat 23, orig long -96, northern standard par 45.5, southern standard par 29.5. ESRI defines the same system as...

> 'PROJCS["USA_Contiguous_Albers_Equal_Area_Conic_USGS_version",GEOGCS["GCS_North_American_1983",DATUM["D_North_American_1983",SPHEROID["GRS_1980",6378137.0,298.257222101]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.017453292519943295]],PROJECTION["Albers"],PARAMETER["False_Easting",0.0],PARAMETER["False_Northing",0.0],PARAMETER["Central_Meridian",-96.0],PARAMETER["Standard_Parallel_1",29.5],PARAMETER["Standard_Parallel_2",45.5],PARAMETER["Latitude_Of_Origin",23.0],UNIT["Meter",1.0]]'

and it says USGS after it. This is the same as EPSG 6703, but I cannot find anything that matches in the Raster Design codes. I found USA US National Atlas Equal Area, but it is Lambert azimuthal 45, -100; the code for it in Raster Design is 2163. Apparently the Autodesk version is supposed to be the same as EPSG, as described in this document: http://svn.osgeo.org/fdo/trunk/Providers/GenericRdbms/com/ExtendedCoordSys.txt/ The values in this document are what I am using in the Raster Design dialog that I thought were EPSG. Can someone straighten me out? I want an EPSG code for US48.
[ -0.018919367343187332, 0.0021697520278394222, -0.014362786896526814, 0.01011657901108265, 0.0029613561928272247, 0.03679265081882477, 0.008820590563118458, -0.0002683837665244937, -0.018395807594060898, -0.02058669924736023, -0.004831365309655666, 0.009319182485342026, 0.002519301138818264, ...
[ 0.17175725102424622, 0.03099558874964714, 1.0319914817810059, -0.20587629079818726, -0.06790352612733841, 0.01972033455967903, 0.25793400406837463, -0.0010611111065372825, -0.2838610112667084, -0.6227505207061768, 0.06041150540113449, -0.010181933641433716, -0.11126392334699631, 0.15264463...
I have got the following question relating to random walks. I would like to determine the moment when a random walk changes from being a simple random walk to starting to drift at a certain time. This sort of scenario could happen when temperatures start to rise or sea levels start to drop. The idea is to detect that change as soon as possible. Clearly if the system is left to drift for a long time, it is fairly obvious it has changed from a given initial level (given by the simple random walk at the start), but I would like to know if it is possible to detect that change very quickly after it happens. What sort of analysis would be needed? Any suggestions greatly appreciated. Many thanks!! Michael and whuber, I really appreciate your comments. My initial question was in fact related to Quality Control, as I am trying to model a CUSUM as a Random Walk: when the system is in control, I see it as a simple random walk; when it starts to drift and eventually goes out of control, I see it as a drifting random walk. Thus my question as to how (and critically, how quickly) to detect this change from stationary to drifting within random walks. I can see it is not easy to detect the drift, especially when the drift is small, due to the stochastic nature of the system. But I think the ARIMA idea will help me; I had not looked at this theory before and thank you for this advice. Mili.
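For concreteness, a minimal one-sided CUSUM sketch on the increments of the walk (assuming the in-control increments have mean zero; k and h are the usual allowance and decision threshold in standard-deviation units, and the values here are just illustrative defaults):

    import numpy as np

    def cusum_first_alarm(x, k=0.5, h=5.0):
        """Return the first index at which upward drift is flagged, or None."""
        d = np.diff(x)                      # increments of the walk
        z = d / d.std(ddof=1)               # standardize; in-control mean assumed 0
        s = 0.0
        for i, zi in enumerate(z, start=1):
            s = max(0.0, s + zi - k)        # one-sided cumulative sum
            if s > h:
                return i
        return None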
[ -0.004687711130827665, 0.024042844772338867, -0.014495212584733963, 0.014039892703294754, -0.008401187136769295, -0.012054843828082085, 0.007063155993819237, -0.005161766894161701, -0.013554361648857594, 0.0034496006555855274, -0.0014164461754262447, 0.019566334784030914, 0.01277790032327175...
[ 0.37301525473594666, -0.26373493671417236, 0.5731704831123352, 0.21800638735294342, -0.05041816085577011, -0.1286819875240326, 0.14556559920310974, 0.27780720591545105, -0.7355960607528687, -0.5538864135742188, 0.33836373686790466, -0.2255512773990631, 0.11880501359701157, 0.41540202498435...
Using the MWE below:

    \documentclass{standalone}
    \usepackage{tikz}
    \usepackage{tikz-qtree}
    \usepackage{tikz-qtree-compat}
    \usepackage{ textcomp }
    \begin{document}
    \begin{tikzpicture}
    \Tree [ .TP [ .T' \node(C){T+verb};
      [ .vP \qroof{`ana}.DP
        [ .v' \node(B){v+{\textlangle}verb{\textrangle}};
          [ .VP [ .V' \node(A){V+{\textlangle}verb{\textrangle}};
            \qroof{taalib}.DP ] ] ] ] ] ]
    \draw [semithick,->] (A) to[out=270,in=180] (B);
    \draw [semithick,->] (B) to[out=270,in=180] (C);
    \end{tikzpicture}
    \end{document}

I get output like this: ![End of first arrow and beginning of second arrow don't match](http://i.stack.imgur.com/2b2xU.jpg) The tree would look a lot better, and I think be more intuitively correct, if the endpoint of the arrow from "V+" and the starting point of the arrow from "v+" were in the same place, perhaps with a small black dot to connect them. That would better show that the same verb is doing both movements. Does anyone know how to accomplish this?
[ 0.018335707485675812, -0.0024818291421979666, 0.0009160572662949562, 0.016430921852588654, 0.004927573725581169, 0.02271616831421852, 0.008786223828792572, 0.014059117063879967, -0.011538136750459671, -0.0010396016295999289, -0.005604061763733625, -0.00031798001145944, -0.00954410433769226, ...
[ -0.012284329161047935, -0.04328108951449394, 0.4182729125022888, 0.0333871990442276, 0.0679498091340065, 0.08828327059745789, 0.3973279297351837, -0.8581475615501404, -0.1661345660686493, -0.658522367477417, -0.10222233086824417, 0.6769027709960938, -0.12947848439216614, 0.2222664505243301...
I have created a page that uses custom posts: http://www.africanhealthleadership.org/resources/toolkit/ Each tool (Preparation, Assessment, etc.) is a custom post. On the WP Admin, each tool is a category; each category has a "description" field. I would like to output those descriptions on the Toolkit page. I tried using this and nothing displayed: `<?php echo category_description( $category ); ?>` Right now, the descriptions are hard-coded in to the page. The one for preparation begins "Preparation tools establish..." Thank you for any ideas! Jeff * * * Here is the loop that spits out the custom post type: <?php query_posts( array( 'post_type' => 'portfolio', 'toolkit' => 'preparation' ) ); //the loop start here if ( have_posts() ) : while ( have_posts() ) : the_post(); ?> <?php the_content(); ?> <?php endwhile; endif; wp_reset_query(); ?> And Here is the code from functions.php add_action('init', 'portfolio_register'); function portfolio_register() { $labels = array( 'name' => _x('Toolkit', 'post type general name'), 'singular_name' => _x('Tool', 'post type singular name'), 'add_new' => _x('Add New Tool', 'tool'), 'add_new_item' => __('Add New Tool'), 'edit_item' => __('Edit Tool'), 'new_item' => __('New Tool'), 'view_item' => __('View Tool'), 'search_items' => __('Search Toolkit'), 'not_found' => __('Nothing found'), 'not_found_in_trash' => __('Nothing found in Trash'), 'parent_item_colon' => '' ); $args = array( 'labels' => $labels, 'public' => true, 'publicly_queryable' => true, 'show_ui' => true, 'query_var' => true, 'menu_icon' => get_stylesheet_directory_uri() . '/article16.png', 'rewrite' => true, 'capability_type' => 'post', 'hierarchical' => false, 'menu_position' => null, 'supports' => array('title','editor','thumbnail') ); register_post_type( 'portfolio' , $args ); } register_taxonomy("toolkit", array("portfolio"), array("hierarchical" => true, "label" => "Tool Categories", "singular_label" => "Tool", "rewrite" => true));
[ -0.0010330926161259413, 0.0029073788318783045, 0.015027659013867378, 0.02867911197245121, -0.003907732665538788, -0.0002553604426793754, 0.008460871875286102, 0.01217132993042469, -0.015604005195200443, 0.006084360182285309, -0.013898154720664024, 0.004486643709242344, 0.005583073012530804, ...
[ 0.6549689769744873, 0.35117170214653015, 0.023142268881201744, -0.14454257488250732, -0.05694909021258354, -0.06932031363248825, 0.20698890089988708, -0.33916881680488586, -0.39975255727767944, -0.4575824737548828, 0.11948834359645844, 0.03352545201778412, -0.3025866746902466, 0.3692468404...
I have a custom post type that is going to use a loop in a couple of different places and I wanted to make maintaining those loops easier. I remembered that get_template_part() is available and figured this would be an optimal time to get used to it. However, what has me at a standstill is how to set up a file for the loops so that I can call the specific parts accurately. Google, thus far, has not been helpful in understanding how get_template_part() and twentyten's loop.php actually work and call the 3 parts. I have it duplicated, stripped down and ready for altering for my CPT version, but that is it thus far. Any help?
[ 0.017226647585630417, 0.01682671718299389, 0.0017000657971948385, 0.00866708718240261, 0.012685631401836872, 0.021631833165884018, 0.00665234075859189, 0.013942738994956017, -0.018608346581459045, -0.014613145031034946, -0.009844768792390823, 0.002024511806666851, -0.012168025597929955, 0....
[ 0.873573899269104, 0.04850192740559578, 0.12324945628643036, 0.022259065881371498, -0.1332908719778061, 0.14077961444854736, 0.41272079944610596, -0.10763048380613327, -0.20744408667087555, -0.6913468837738037, 0.43095314502716064, 0.2879061698913574, -0.19147539138793945, 0.17116858065128...
> **Possible Duplicate:** > If a 1kg mass was accelerated close to the speed of light would it turn > into a black hole? Imagine a rod of length **L** moving with a velocity approaching the speed of light with respect to a human observer on Earth. Due to Lorentz contraction, the rod will be observed to be very short. And since all laws of physics hold true in every frame of reference, the law of gravitation, which states that the force is inversely proportional to the square of the distance between two masses, will be acting between the various parts of the rod. Now, as the velocity approaches **c**, **L** will approach 0. This should cause an enormous gravitational force, enough to form a black hole. Isn't this suggesting a black hole can be formed when an object's velocity comes near the speed of light? According to the rod's frame of reference, it will see itself as stationary and the man who observes it as moving, so according to the rod, the human must be a black hole. Isn't this a paradox?
[ 0.007019344717264175, 0.01662380062043667, -0.0014500193065032363, 0.004942653235048056, -0.002716033486649394, -0.008260494098067284, 0.007789243943989277, -0.013780375942587852, -0.0129833510145545, -0.027270305901765823, -0.001522055477835238, 0.019466744735836983, -0.004561533220112324, ...
[ 0.34473690390586853, 0.06152280047535896, 0.6057561039924622, 0.10539166629314423, -0.20735758543014526, 0.28096407651901245, -0.032170847058296204, -0.35530728101730347, -0.4340205490589142, -0.7261520624160767, 0.03349959850311279, 0.20017269253730774, -0.2846985459327698, 0.617866992950...
I have read tutorials asking you to construct "lit" shelters. What is the benefit of lighting your shelter?
[ 0.05340725928544998, 0.020532995462417603, 0.0025626032147556543, -0.00788394920527935, -0.07945506274700165, 0.02826930396258831, 0.01709592342376709, 0.017517605796456337, -0.04842771962285042, -0.03536136820912361, -0.00544334901496768, 0.03747202828526497, -0.030749531462788582, -0.002...
[ 1.0307841300964355, 0.439920037984848, -0.3586154282093048, 0.23423552513122559, -0.07442528009414673, 0.06963996589183807, 0.05325660482048988, -0.21022376418113708, 0.0769425705075264, -0.6615179777145386, 0.3717418313026428, 0.010673482902348042, -0.23211517930030823, -0.181261718273162...
Is there any reason why I should use `rewrite_rules_array` instead of `generate_rewrite_rules`? `generate_rewrite_rules` works out of the box, but I couldn't get `rewrite_rules_array` to work. And I am told that any array actions [when adding custom rewrite rules] should be done via `rewrite_rules_array` and the rest of `*_rewrite_rule` filters (e.g. `add_rewrite_rule`). It's unclear why.
[ 0.005899196490645409, 0.020129287615418434, -0.010012323036789894, 0.02670808508992195, 0.006214636377990246, 0.004687882028520107, 0.008457640185952187, -0.0024453294463455677, -0.016168326139450073, -0.00009150244295597076, -0.0059907822869718075, 0.004825962707400322, -0.01410232391208410...
[ 0.28083154559135437, 0.21161983907222748, 0.1351400464773178, -0.219209223985672, -0.08383417874574661, -0.5334952473640442, 0.3451001048088074, -0.6002767086029053, -0.42200368642807007, -0.44462889432907104, 0.14754894375801086, 0.6580672264099121, -0.49303048849105835, 0.034438755363225...
For the unweighted variance $$\text{Var}(X):=\frac{1}{n}\sum_i(x_i - \mu)^2$$ there exists the bias-corrected sample variance, when the mean was estimated from the same data: $$\text{Var}(X):=\frac{1}{n-1}\sum_i(x_i - E[X])^2$$ I'm looking into weighted mean and variance, and wondering what the appropriate bias correction for the weighted variance is. Using: $$\text{mean}(X):=\frac{1}{\sum_i \omega_i}\sum_i \omega_i x_i$$ The "naive", non-corrected variance I'm using is this: $$\text{Var}(X):=\frac{1}{\sum_i \omega_i}\sum_i\omega_i(x_i - \text{mean}(X))^2$$ So I'm wondering whether the correct way of correcting bias is A) $$\text{Var}(X):=\frac{1}{\sum_i \omega_i - 1}\sum_i\omega_i(x_i - \text{mean}(X))^2$$ or B) $$\text{Var}(X):=\frac{n}{n-1}\frac{1}{\sum_i \omega_i}\sum_i\omega_i(x_i - \text{mean}(X))^2$$ or C) $$\text{Var}(X):=\frac{\sum_i \omega_i}{(\sum_i \omega_i)^2-\sum_i \omega_i^2}\sum_i\omega_i(x_i - \text{mean}(X))^2$$ A) does not make sense to me when the weights are small. The normalization value could be 0 or even negative. But how about B) ($n$ is the number of observations) - is this the correct approach? Do you have some reference that shows this? I believe "Updating mean and variance estimates: an improved method", D.H.D. West, 1979 uses this. The third, C), is my interpretation of the answer to this question: http://mathoverflow.net/questions/22203/unbiased-estimate-of-the-variance-of-an-unnormalised-weighted-mean For C) I have just realized that the denominator looks a lot like $\text{Var}(\Omega)$. Is there some general connection here? I think it does not entirely align; and obviously there is the connection that we are trying to compute the variance... All three of them seem to "survive" the sanity check of setting all $\omega_i=1$. So which one should I use, under which premises? **Update:** whuber suggested to also do the sanity check with $\omega_1=\omega_2=.5$ and all remaining $\omega_i=\epsilon$ tiny. This seems to rule out A and B.
[ 0.009325632825493813, 0.005764916073530912, -0.005932077765464783, 0.014726893976330757, 0.01067808922380209, -0.006578600965440273, 0.0074360305443406105, -0.000021202489733695984, -0.012152804993093014, 0.0047499043866992, 0.000023399945348501205, 0.007112892810255289, -0.02661993727087974...
[ 0.22445189952850342, -0.29039716720581055, 0.40559121966362, 0.11262470483779907, -0.17830586433410645, 0.31633976101875305, -0.24695608019828796, -0.4461626410484314, -0.09739534556865692, -0.18953467905521393, 0.2187146693468094, 0.5722864270210266, -0.24622929096221924, 0.26926079392433...
I'm still a newbie to Android, and I'm a little confused about whether my Android phone is fine or damaged, or whether my sdcard is broken, because when I use the phone to play games, watch movies, or leave it idle, the sdcard suddenly unmounts and then mounts again (like doing a re-mount). A running game or movie suddenly closes itself, and a notification appears: "Preparing SD card". Does anyone know why this could happen? Here is my phone information: RAM: 256MB, ROM: 256MB, Model number: Device-01, Android Version: 2.3.6, baseband version: VENUS_BP_00.03.63.b501, Kernel Version: 2.6.38.6-perf zly @ HL120 # 1. My sdcard: V-Gen 2GB. Thank you.
[ -0.023014014586806297, 0.009743131697177887, 0.007875867187976837, 0.010609215125441551, 0.010858609341084957, -0.0065817637369036674, 0.009920774027705193, -0.001430968288332224, -0.013274533674120903, 0.019058402627706528, -0.030246898531913757, 0.01079426147043705, 0.011347081512212753, ...
[ 0.32930412888526917, 0.04982931166887283, 0.46009719371795654, -0.0602269172668457, 0.2676744759082794, -0.009132140316069126, 0.23748721182346344, -0.21544812619686127, -0.336078941822052, -0.6940068602561951, 0.01360622514039278, 0.7818456888198853, -0.1056394949555397, 0.058516472578048...
How can one import the coordinates stored in a binary CHARMM/NAMD DCD file? This is the structure of the file: HDR NSET ISTRT NSAVC 5-ZEROS NATOM-NFREAT DELTA 9-ZEROS `CORD' #files step 1 step zeroes (zero) timestep (zeroes) interval C*4 INT INT INT 5INT INT DOUBLE 9INT ========================================================================== NTITLE TITLE INT (=2) C*MAXTITL (=32) ========================================================================== NATOM #atoms INT ========================================================================== X(I), I=1,NATOM (DOUBLE) Y(I), I=1,NATOM Z(I), I=1,NATOM ========================================================================== A sample pdb and dcd file is here. A C plugin for reading the DCD can be found here. As with any file format without a rigid formal specification, there is a lot of variation and different edge cases... * * * Surprisingly, googling did not turn up any ready-made recipes. I'm hoping somebody has something on the shelf :) Of course this is trivial to implement using `BinaryReadList`, so if nobody else answers, I will answer this question myself, thereby still making it useful to the community.
[ -0.004586114082485437, 0.01012252178043127, -0.0017728512175381184, 0.013513186015188694, 0.01038100104779005, -0.0019089491106569767, 0.006905782967805862, 0.01540178619325161, -0.01453853864222765, -0.029898973181843758, -0.01704287715256214, 0.000540088047273457, -0.007762497756630182, ...
[ -0.019751591607928276, -0.32721269130706787, 0.9530921578407288, 0.07625029981136322, -0.017841439694166183, 0.02260470949113369, -0.05344211310148239, -0.509601354598999, 0.11861609667539597, -0.6155675649642944, 0.12771522998809814, 0.3619122803211212, -0.06197991967201233, 0.21307237446...
Part of the FSF's instructions for placing a program under the GPL is including the following "copying permission statement" at the top of your file, under the copyright notice: This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. I am wondering what is the significance of each paragraph in this statement. In particular, for a program that I am about to release under the GPL, I am considering omitting the second and third paragraphs to reduce the length of the statement, and I'm wondering what would be the negative consequences (if any) of doing so.
[ -0.012338731437921524, 0.002702985890209675, 0.007324609439820051, 0.01624005287885666, 0.013772455044090748, 0.0052430215291678905, 0.007619969081133604, 0.01298707164824009, -0.019409041851758957, -0.012542035430669785, -0.009110426530241966, 0.021689072251319885, -0.002952799666672945, ...
[ 0.652347981929779, 0.466034859418869, 0.2901056110858917, -0.12078765779733658, 0.23552395403385162, -0.9362995624542236, -0.1769663691520691, 0.11291946470737457, -0.37938353419303894, -0.31663843989372253, -0.5052679181098938, 0.3220910131931305, -0.4143463373184204, 0.28292879462242126,...
μ = 0.15; λ = 0.4; ρ = 1; r = 0.04; i = 3; θ = 1; κ = 0.1; σ = 0.15; γ[κ_, μ_, λ_, ρ_, σ_, r_] = (2 κ - 2 μ + 2 λ ρ σ + σ^2 + Sqrt[8 r σ^2 + (-2 κ + 2 μ - 2 λ ρ σ - σ^2)^2])/(2 σ^2); b[κ_, μ_, λ_, ρ_, σ_, r_] = 2 - 2 γ[κ, μ, λ, ρ, σ, r] + 2 (κ - μ + λ ρ σ)/σ^2; B[κ_, μ_, λ_, ρ_, σ_, r_] = -Gamma[1 - γ[κ, μ, λ, ρ, σ, r] - b[κ, μ, λ, ρ, σ, r]]* Gamma[b[κ, μ, λ, ρ, σ, r]]/ Gamma[-γ[κ, μ, λ, ρ, σ, r]]/ Gamma[2 - b[κ, μ, λ, ρ, σ, r]]; h[V] = Hypergeometric1F1[-γ[κ, μ, λ, ρ, σ, r], b[κ, μ, λ, ρ, σ, r], (2 κ θ)/(σ^2 V)]; g[V] = Hypergeometric1F1[ 1 - γ[κ, μ, λ, ρ, σ, r] - b[κ, μ, λ, ρ, σ, r], 2 - b[κ, μ, λ, ρ, σ, r], (2 κ θ)/(σ^2 V)]; F[A, V] = A*(h[V] + B[κ, μ, λ, ρ, σ, r]*((2 κ θ)/(σ^2 V))^(1 - b[κ, μ, λ, ρ, σ, r])*g[V])* V^γ[κ, μ, λ, ρ, σ, r]; x = V /. FindRoot[{F[A, V] == V - i, D[F[A, V], V] == 1}, {A, 0.01}, {V, 1.01}] I think this should be an easy question, but I am new to _Mathematica_ and cannot figure it out. Variable `x` is what I would like to obtain, and given the parameter values at the beginning of the code, the value of `x` can be found. It should be 6.3632. Here is the question: how can I change the code and establish a loop to evaluate the values of `x` with σ taking different values, ranging from 0.1 to 0.2 in steps of 0.01? Thus, σ is no longer a constant input as presented in my code but an iterating variable. Also, if it is convenient, how can I plot `x` against the values of σ?
[ 0.007571154274046421, 0.004104469902813435, -0.010743847116827965, 0.010590344667434692, -0.013356742449104786, 0.0014012395404279232, 0.0031385982874780893, -0.015097019262611866, -0.005928037688136101, -0.0012384953442960978, -0.004817181266844273, 0.004073174670338631, -0.0234300009906291...
[ -0.0918852835893631, 0.13710306584835052, 0.28488245606422424, -0.35737916827201843, -0.002550200093537569, 0.7080574035644531, 0.28793492913246155, -0.9589121341705322, -0.13894404470920563, -0.5082687735557556, -0.26143530011177063, 0.493324875831604, -0.49532461166381836, 0.347224682569...
I have a curious problem: Google Webmaster Tools shows that 330 images from my sitemap were submitted and only 6 are indexed. I have tried adding alt tags, descriptive names, everything by the book, but still no results. What can you suggest to improve image indexing by Google or any other search engine? :)
[ -0.002043248387053609, 0.0018718853825703263, -0.027946840971708298, 0.03210606426000595, 0.013601908460259438, 0.0013642353005707264, 0.008896897546947002, -0.008622527122497559, -0.031019845977425575, -0.008172670379281044, 0.0023483880795538425, 0.014013522304594517, -0.0317639485001564, ...
[ 0.47154051065444946, 0.2210533320903778, 0.3739874064922333, 0.17648813128471375, -0.2316373586654663, 0.01529201865196228, -0.005853385664522648, 0.05608197674155235, -0.3880750834941864, -0.48041629791259766, 0.27845942974090576, 0.31882959604263306, -0.2513723075389862, 0.59824049472808...
I've been looking up this thread on how software piracy can be prevented. It's got a number of interesting suggestions ranging from using copy protection to providing free versions, and not surprisingly, none of them will satisfy you 100%. It will perhaps be apt to experiment with each of the popular anti-piracy measures to gauge success. Are there reliable surveys that point to the success or failure of such measures? Furthermore, surveys are often biased because of countless factors, and perhaps the software product developer himself should attempt to gauge success or failure of a particular measure. If I decide to resort to the second option, what would you suggest to obtain hard data about software piracy patterns for _my_ specific product?
[ 0.010605172254145145, 0.015410825610160828, -0.0070930710062384605, 0.011698199436068535, -0.007404612377285957, -0.005150675307959318, 0.005807148292660713, 0.010759297758340836, -0.015106257051229477, -0.016919974237680435, -0.007889123633503914, 0.01512810681015253, -0.006450773682445288,...
[ 0.5872675776481628, 0.1189015582203865, 0.03413287550210953, 0.38424956798553467, 0.034215666353702545, -0.0962022915482521, 0.0869484394788742, 0.20001426339149475, -0.20226134359836578, -0.1267484873533249, 0.11844313889741898, 0.8225526213645935, 0.008666503243148327, 0.1893183290958404...
I want to show my custom post type in list pattern like ## S.No Title * 6 samplePost1 - replypost - replypost * 5 samplepost2 - replypost * 4 samplepost3 Here i am setting post_per_page count from admin so If admin set post count to 4 then on list page it should show 4 post including reply post. I am posting my code below please let me know how can i resolve this issue. <?php $args = array( 'post_type' => 'customposttype', 'posts_per_page' => $posts_per_page, 'paged' => $paged, 'orderby' => 'post_date', 'order' => 'DESC', 'post_status' => 'publish', //'post_parent'=>0, 'tax_query' => array( array( 'taxonomy' => 'custompostcategory', 'field' => 'slug', 'terms' => $cat ) ) ); $my_query = new WP_Query($args); $max_num_pages = $my_query->max_num_pages; $post_count = $my_query->found_posts; $i = $post_count - (($paged - 1 ) * $posts_per_page) ; ?> <div id="warpper"> <?php if ( $my_query->have_posts() ) : ?> <div> <ul> <li>Sno.</li> <li>Title</li> </ul> </div> <div> <?php while ( $my_query->have_posts() ) : $my_query->the_post();?> <ul> <?php //if($my_query->post->post_parent < 1) :?> <li><?php echo $i; ?></li> <li> <a href="#"><?php echo mb_substr(get_the_title(),0,30)?> </a></li> <?php $i--;// else: ?> <?php //if($my_query->post->post_parent > 1) :?> <?php $r_args = array( 'post_type' => 'customposttype', 'orderby' => 'post_date', 'order' => 'DESC', 'post_status' => 'publish', 'post_parent' => $my_query->post->ID ); $Rquery = new WP_Query($r_args); ?> <?php if ( $Rquery->have_posts() ) : ?> <div class="cmb_comment_warp"> <?php while ( $Rquery->have_posts() ) : $Rquery->the_post(); ?> <ul> <li>&nbsp;</li> <li><a href="#"><?php echo mb_substr(get_the_title(),0,30)?> </a> </li> </ul> <?php endwhile; ?> </div> <?php endif;//endif;?> </ul> <?php endwhile; // End While?> </div> </div>
[ 0.013548544608056545, 0.0029630784410983324, 0.0017986893653869629, 0.026624487712979317, 0.017237678170204163, 0.00865454226732254, 0.008641550317406654, 0.007202471140772104, -0.015626097097992897, 0.007627557963132858, 0.0021450400818139315, 0.009900011122226715, 0.010028835386037827, 0...
[ 0.35593923926353455, 0.298888623714447, 0.7018537521362305, -0.20952025055885315, -0.415617436170578, 0.388600617647171, 0.2244778275489807, -0.4535579979419708, -0.1669779121875763, -0.5847098231315613, 0.30008941888809204, 0.3433210849761963, -0.31390729546546936, 0.15280964970588684, ...
I would like to add a feature to a page to display a short list of people, like the one seen here. I have done this on other sites by styling lists with css. This time however, it's for a client and I can't trust them to copy and paste a `<li>`, editing the name, job title and img name without messing it up. What is the best way to go about creating this so that it can be easily reduced/added to in the dashboard? Basically, so that the user simply clicks "add person" then fills in some fields before updating the page. Here is the architecture of what I've used in the past (to be styled with css)- <ul class="people"> <li> <img src="http://image_path.jpg" alt="a person" /> Jakie <span> her job</span> </li> </ul>
[ 0.003382668364793062, -0.0017267672810703516, -0.005627838894724846, -0.0011011289898306131, 0.01135467179119587, -0.0000832676887512207, 0.005861973389983177, 0.00735575333237648, -0.021834377199411392, 0.012915330938994884, -0.006882097106426954, 0.01181788370013237, -0.0040514105930924416...
[ 0.7577939033508301, 0.26221024990081787, 0.05305808037519455, -0.030086275190114975, 0.029472213238477707, 0.09479409456253052, -0.11144473403692245, 0.18485699594020844, -0.2583065330982208, -0.6814782023429871, 0.2710326313972473, 0.3731929361820221, -0.14487914741039276, 0.0508268773555...
I am trying to write a function that I will put on all my machines in order to make it easy to send files to a fixed place on my network. Here is my script so far. Some folders may have duplicate names on my machines, so I'm adding a uuid at the end of the folder name. function putOnSG3() { uuid=`uuidgen` if [[ -d $1 ]]; then scp -rv "$1" shiny:/Volumes/Seagate3To/"$1.$uuid"; else echo $1 " is not a directory. Not copying."; fi; } I'm invoking it like this: $ putOnSG testFo\[l\}der Here is the problem: zsh:1: bad pattern: /Volumes/Seagate3To/testFo[l}der.d84abc26-501b-4f89-a636-518b4059a770 How can I manage these nasty filenames? The target filesystem is case-sensitive hfsplus, the source filesystems are various extfs from Linux machines and NTFS.
[ 0.002480232622474432, 0.010362917557358742, -0.0006991551490500569, 0.014812182635068893, -0.007958905771374702, -0.015969373285770416, 0.007362929172813892, -0.0022942260839045048, -0.016993779689073563, 0.0005791509756818414, -0.015900272876024246, 0.0031272005289793015, 0.0000818148255348...
[ 0.1016501933336258, -0.16182959079742432, 0.510922372341156, -0.29810187220573425, 0.06425977498292923, 0.07105815410614014, 0.3439348638057709, -0.15394435822963715, -0.6319897770881653, -0.7765065431594849, 0.18828441202640533, 0.414991170167923, -0.2625274062156677, 0.42177239060401917,...
I shot down the Overseer UFO with an EMP-fitted Firestorm. However, I did not dispatch the Skyranger, and after a few days the crash site disappeared. Will another Overseer eventually appear? Is my game "stuck" because I didn't do that mission?
[ -0.01667572557926178, 0.04478174448013306, 0.0031550514977425337, -0.0031016210559755564, -0.016514915972948074, -0.02172302082180977, 0.011813347227871418, 0.019423915073275566, -0.027375372126698494, -0.025484571233391762, 0.0015607274835929275, 0.04916345700621605, 0.021285809576511383, ...
[ 0.3238013982772827, 0.12263204157352448, 0.3204253315925598, 0.4750511944293976, -0.07007081806659698, -0.031823333352804184, 0.24678446352481842, 0.006462893448770046, -0.30374497175216675, -0.2729378938674927, -0.09079059213399887, 0.3313906490802765, -0.14946109056472778, 0.287669926881...
Is it possible to load a static library (.lib) compiled in C++ using NETLink or something similar in Mathematica?
[ 0.03846307471394539, 0.015597840771079063, -0.014023398980498314, 0.011342553421854973, 0.028881244361400604, -0.025706520304083824, 0.0185114536434412, -0.03301462158560753, -0.022383423522114754, -0.0025532403960824013, 0.0026766315568238497, 0.02746516279876232, -0.023764800280332565, 0...
[ 0.14040584862232208, -0.1662875860929489, 0.09052067995071411, 0.2341122180223465, -0.06819849461317062, -0.2700805068016052, -0.08560303598642349, -0.03536571189761162, -0.3031623065471649, -0.15817882120609283, -0.16041123867034912, 0.3245958685874939, -0.48781752586364746, -0.2754519283...
How is it possible for a grad student to do research in any modern area of string theory like AdS/CFT or ABJM if they need to start grad school by having to learn QFT from scratch? Is there a time-line over which this is even possible? Or does one necessarily need to come to grad school knowing at least Polchinski-level string theory to be able to understand AdS/CFT or ABJM? I mean, even if one has worked through enough of Polchinski's book, one is likely to stare blankly at the ABJM paper. Then is it even possible for someone to get to research level with something like this if they start from basic QFT in grad school? I would like to know what a practical path to this would look like.
[ -0.0038963269907981157, 0.02643524669110775, -0.01210021786391735, 0.017661405727267265, -0.009078609757125378, -0.013207023963332176, 0.010654578916728497, 0.009168872609734535, -0.020164217799901962, -0.02327727898955345, -0.017527785152196884, 0.023561403155326843, -0.0160320196300745, ...
[ 0.38782382011413574, 0.2090696096420288, 0.017171988263726234, 0.20449727773666382, 0.01826251856982708, -0.4018462598323822, 0.02326730266213417, 0.007317071780562401, -0.1646251231431961, -0.15603090822696686, 0.20211394131183624, 0.30599990487098694, 0.1862718015909195, 0.24208141863346...
Is it possible to turn an Android phone into a USB gamepad for the PC? If yes, how?
[ -0.07666812092065811, 0.020806826651096344, 0.004129729233682156, -0.025134462863206863, -0.023985233157873154, -0.01810036413371563, 0.01395624503493309, -0.03488423675298691, -0.01244395598769188, -0.03039446659386158, -0.002891827840358019, 0.0432085245847702, 0.011034549213945866, 0.02...
[ 0.2171117514371872, -0.11098500341176987, 0.254733145236969, 0.681452214717865, 0.5062810778617859, 0.035250235348939896, -0.2716766893863678, 0.1517137587070465, -0.07811549305915833, -0.45637670159339905, 0.16532735526561737, 0.6284551620483398, -0.17218658328056335, -0.34693270921707153...
Say I am using a simple recursive algo for Fibonacci, which would be executed as: fib(5) -> fib(4)+fib(3), then fib(4) -> fib(3)+fib(2) and fib(3) -> fib(2)+fib(1), and so on. Now, the execution will still be sequential. Instead of that, how would I code this so that `fib(4)` and `fib(3)` are calculated by spawning 2 separate threads, then in `fib(4)`, 2 threads are spawned for `fib(3)` and `fib(2)`? Same for when `fib(3)` is split into `fib(2)` and `fib(1)`? (I'm aware that dynamic programming would be a much better approach for Fibonacci, I just used it as an easy example here.) (If someone could share a code sample in C/C++/C# as well, that would be ideal.)
[ -0.002398357493802905, 0.028944900259375572, -0.004999908618628979, 0.015489148907363415, 0.010190464556217194, -0.017423143610358238, 0.009017163887619972, 0.015076774172484875, -0.014595589600503445, 0.004358108155429363, -0.004447687417268753, 0.00712712574750185, -0.022336136549711227, ...
[ 0.2121494710445404, -0.19369550049304962, 0.3077205419540405, -0.3645259141921997, 0.1852380484342575, 0.2498997151851654, 0.1609022170305252, -0.6670212149620056, -0.4362652599811554, -0.40043750405311584, 0.02161056362092495, 0.44867557287216187, -0.664340078830719, 0.38324958086013794, ...
Suppose I have the following probabilities: $$P(A|B) = 0.86 $$ $$ P(A|B^C) = 0.35 $$ $$ P(B) = 0.80 $$ $$ P(A) = 0.758$$ Is the necessary information given to calculate $P(B^C|A^C)$? If so, please guide me on how. Thanks in advance.
[ -0.008681995794177055, 0.014499587938189507, -0.00992356427013874, 0.014713033102452755, 0.004265409894287586, -0.011065388098359108, 0.0060652573592960835, -0.023198341950774193, -0.013021668419241905, -0.007981971837580204, -0.0093313530087471, 0.0037370778154581785, -0.014923353679478168,...
[ 0.07375360280275345, 0.427548885345459, 0.36580997705459595, -0.5133834481239319, 0.1537214070558548, 0.15678811073303223, 0.179518461227417, -0.1178726926445961, 0.06641633808612823, -0.4048117399215698, 0.17112497985363007, 0.6455885767936707, -0.39719292521476746, 0.19119232892990112, ...
Basically, I want the header fonts to use a different (smaller) font size than the size I set using `\documentclass`. What option should I be using? \documentclass[12pt]{article} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage[margin=1in, headsep=10pt]{geometry} \usepackage{fancyhdr} \pagestyle{fancy} \fancyhead[L]{Personal Statement} % What other option should I use to get a custom font size ?
[ 0.007890230976045132, 0.004118719138205051, -0.004069487564265728, 0.024561338126659393, 0.012453172355890274, -0.009914889931678772, 0.007612679153680801, -0.007907580584287643, -0.011577065102756023, -0.03439008817076683, -0.006566409952938557, -0.0045644803903996944, 0.001613036496564746,...
[ 0.2463577538728714, 0.003391520818695426, 0.3336952328681946, 0.08547498285770416, 0.18413278460502625, 0.31284287571907043, -0.052657730877399445, -0.24299415946006775, 0.00355447456240654, -0.7993903756141663, 0.08746298402547836, 0.40165913105010986, -0.2301352620124817, -0.229590669274...
I remember that before (and at least partially during) Cataclysm, a Mage's Polymorph pulled aggro from all enemy NPCs near the polymorphed NPC. Either during Cataclysm or after, Polymorph was changed to no longer aggro nearby enemy NPCs. I can't find when this change was implemented, however. Which patch/hotfix implemented this change?
[ -0.005037905648350716, 0.020427824929356575, -0.012074117548763752, 0.012941380962729454, -0.01387201901525259, -0.024899885058403015, 0.01191553846001625, -0.030025795102119446, -0.026493951678276062, 0.04859893023967743, -0.03187916800379753, 0.013853148557245731, -0.0023128960747271776, ...
[ 0.24102917313575745, -0.007883050478994846, 0.09137977659702301, 0.2904714345932007, -0.42383432388305664, -0.05357413738965988, -0.059813492000103, 0.08858313411474228, -0.3487052619457245, -0.39627331495285034, -0.23672577738761902, 0.34323155879974365, -0.20416447520256042, 0.1325124204...
I have 2 HTML files, where part of the content looks like this: In FILE1: <td width="48%" align="right" valign="top"> <b>mom. Wirkleistung P+ tot.: </b><br> <b>mom. Wirkleistung P+ L1: </b><br> <b>mom. Wirkleistung P+ L2: </b><br> <b>mom. Wirkleistung P+ L3: </b><br> </td><td width="4%" align="middle"> &nbsp; </td><td width="48%" valign="top"> <b>114,00 W </b><br> <b> 2,00 W </b><br> <b>109,00 W </b><br> <b> 2,00 W </b><br> </td></tr></table> <p></td> and in File2: <b>mom. Wirkleistung P- tot.: </b><br> <b>mom. Wirkleistung P- L1: </b><br> <b>mom. Wirkleistung P- L2: </b><br> <b>mom. Wirkleistung P- L3: </b><br> </td><td width="4%" align="middle"> &nbsp; </td><td width="48%" valign="top"> <b> 45,00 W </b><br> <b> 0,00 W </b><br> <b> 0,00 W </b><br> <b> 0,00 W </b><br> </td></tr></table> Where I want to use for either files the first Watt value (114.00 and 45.00, which of course does change every 5 seconds) and put the SUM together. I'm using a RASPBERRY PI (do Debian Linux), is there a way to extract these values from the two files and add it together, so that it works even if it contains a 5.00 or 66.70 or 1444.24 value. ATTACHED below the FULL FILE.... <html><head> <title>FacilityWeb</title> <meta http-equiv="cache-control" content="no-cache"> <style type="text/css"> #idHF {font-family:Arial; font-size:30px; color:#FFFFFF } a {font-family:Arial; font-size:20px; color:#FFFFFF } table {font-family:Arial; font-size:20px; color:#FFFFFF } input {font-family:Arial; font-size:20px; font-weight:bold; color:#000000 } select {font-family:Arial; font-size:20px; font-weight:bold; color:#000000 } </style> </head> <body bgcolor="#000000" link=#ffffff vlink=#ffffff alink=#ffffff> <table align="center" border="0" width="960" cellspacing="0" cellpadding="8"> <tr><td id="idHF" align="right" valign="middle" bgcolor="#0074B2"> <b><i>Lingg &amp; Janke&nbsp;</i></b></td></tr></table> <p><table align="center" border="0" width="960" bgcolor="#2f2f2f"><tr> <!-- BCU part begin --> <td align="center"> <a href="valpap">[ LEISTUNG P+ ]</a> <a href="valpan">[ LEISTUNG P- ]</a> <a href="valprp">[ LEISTUNG Q+ ]</a> <a href="valprn">[ LEISTUNG Q- ]</a><br> <a href="valv">[ SPANNUNG ]</a> <a href="valc">[ STROM ]</a> <a href="valx">[ COS PHI ]</a><br> <a href="valpapt">[ GRENZWERTE P+ tot. ]</a><br><a href="valpap1">[ GRENZWERTE P+ L1 ]</a> <a href="valpap2">[ GRENZWERTE P+ L2 ]</a> <a href="valpap3">[ GRENZWERTE P+ L3 ]</a><br> <a href="/1.1.2/">[ HOME ]</a> <p><b>Wirkleistungen P+ (Bezug)</b><p> <table width="100%"><tr> <td width="48%" align="right" valign="top"> <b>mom. Wirkleistung P+ tot.: </b><br> <b>mom. Wirkleistung P+ L1: </b><br> <b>mom. Wirkleistung P+ L2: </b><br> <b>mom. Wirkleistung P+ L3: </b><br> </td><td width="4%" align="middle"> &nbsp; </td><td width="48%" valign="top"> <b> 70,00 W </b><br> <b> 2,00 W </b><br> <b> 64,00 W </b><br> <b> 2,00 W </b><br> </td></tr></table> <p></td> <!-- BCU part end --> </tr></table><p> <table align="center" border="0" width="960" cellspacing="0" cellpadding="8"> <tr><td align="center" valign="middle" bgcolor="#0074B2"> <a id="idHF" href="/en/main.htm"><b>HOME</b></a></td></tr></table> </body></html>
[ -0.015279985964298248, 0.012056127190589905, -0.0047659664414823055, 0.021269869059324265, -0.002335358178243041, 0.013260533101856709, 0.007190278731286526, 0.027852384373545647, -0.0105403121560812, 0.029863420873880386, -0.014078861102461815, 0.0035871455911546946, -0.0068796840496361256,...
[ 0.21022920310497284, 0.11439123004674911, 0.699873685836792, -0.38706064224243164, -0.14140035212039948, 0.39212027192115784, 0.1262035369873047, -0.40641823410987854, 0.03584643453359604, -0.8726240992546082, 0.0685231164097786, 0.2266855090856552, -0.07091184705495834, -0.104224607348442...
Is it possible to backup an android by just copying all the files onto a PC? Is it really necessary to use a backup program? I just dragged and dropped all the files from my android internal storage, and they appear to be all there. The question is whether they can actually be used if I need to restore something. Seems like everywhere I look there are programs to archive everything into one backup file, and then save the backup file on a cloud or you copy it to your PC. Is this really necessary? Is it possible to get by just copying all the files?
[ 0.004128364380449057, 0.014869335107505322, -0.004775659181177616, 0.013630966655910015, -0.0014073303900659084, 0.003162197070196271, 0.007130593527108431, 0.00802881270647049, -0.02134721539914608, -0.04757149890065193, 0.0003077451838180423, 0.016340259462594986, 0.02200465276837349, 0....
[ 0.2759725749492645, 0.15495505928993225, 0.26109760999679565, 0.34524545073509216, 0.31162363290786743, 0.09968101978302002, 0.30588871240615845, 0.19510193169116974, -0.6029500365257263, -0.30203285813331604, 0.06043720245361328, 0.9006033539772034, -0.3302561640739441, -0.126273423433303...
I have a working bash script, that may run from 0 to 8-9 minutes. It requires `sudo` if it finds a problem and needs to change permissions/ownership on a file. I do not want to wait for the `sudo` prompt, as this might be a few minutes into the run, so I do `sudo -v` at the beginning of the script. If I expect the script to take long, I often walk over to the cafeteria. So some time ago I included a trap handler, that calls a function, that does `sudo -k` to drop my credentials. This way pressing `Ctrl`+`C` does not leave someone with access to `sudo` while I am not back yet. I do also call that function at the end of the script, in case the script terminates before I am back. If I already did a `sudo` command before calling the script, `sudo -v`, it doesn't ask me for my credentials. That is nice. Depending on where I start I know the script is not going to take long, and I wait for it to finish. If I started it after having just done a `sudo` command, **every time I end up without`sudo` credentials** after it finishes. I did check the return value of `sudo -v`. On exit 0, that doesn't tell me if the credentials were already there (from before running the script) or that the password was typed in correctly just before. That doesn't help me to know if I should run `sudo -k` at the end or not. I thought about making two versions of the script, one with and one without `sudo -v`/`sudo -k`, but I don't think that is a nice solution and I am bound to select the wrong version at times. Is there a better way to solve this? Am I missing something?
[ -0.009468844160437584, 0.012440220452845097, -0.016825314611196518, 0.0011651675449684262, -0.009708080440759659, -0.0017970288172364235, 0.007507420144975185, -0.017366455867886543, -0.01462186686694622, 0.004799452144652605, -0.01630891114473343, -0.00621807062998414, -0.000511069316416978...
[ 0.5721701383590698, 0.22609968483448029, 0.23293711245059967, -0.22625941038131714, 0.10616130381822586, -0.43129467964172363, 0.4242096245288849, -0.3616175651550293, -0.17700207233428955, -0.46274325251579285, 0.05193380266427994, 0.6813325881958008, 0.3856521248817444, -0.2144775390625,...
I'm trying to determine which type of learning algorithm is best for making predictions on my data. My data set consists of several independent variables, each of which is accompanied by an "indicator" variable that represents the sample size used to acquire its specific value. Let's say I'm attempting to assign expected future conversion rates to an assortment of web advertisements (conversion rate = orders/clicks). For some ads, I have a lot of specific traffic history to draw from. But for newer ads that have minimal to no traffic history, I need to rely on higher level characteristics. My data set consists of historical conversion rates at different levels/traits of the ad: ![enter image description here](http://i.stack.imgur.com/uM4Gl.jpg) For ad #5, I would want my model to rely almost completely on the historical conversion rate for the specific ad (4.23%) since it has enough past clicks for me to trust that value. For ad #1, however, I would like the model to ignore the "specific ad" value and rely mostly on the historical rates for the "brand" and "product category" of the ad (since 15 clicks is not enough to establish a reliable conversion rate). I've had some success with separating ads into several different groups by similarity of click volume, then using a different regression formula for each group. This allows each variable to be properly weighted. However, this approach ends up spreading the data too thin and likely does not learn as well as one consolidated model could. What is a good approach for using one model to make predictions on this data? I need the model to recognize that the "clicks" variables should be used to weight their respective conversion rates accordingly. (I've also had some small success with the nnet package in R, but the results are very hit-and-miss). Please don't hesitate to ask for clarification. Thanks!
[ 0.0007767202332615852, 0.009677756577730179, -0.007997771725058556, 0.003358004614710808, -0.0041833664290606976, 0.01035824604332447, 0.007052087690681219, -0.002862487453967333, -0.010086935013532639, 0.018984027206897736, -0.007349880877882242, 0.0016565547557547688, -0.000832902966067194...
[ 0.6929945945739746, 0.5258396863937378, 0.5534477829933167, 0.2029402107000351, -0.034888677299022675, 0.4810120165348053, 0.0023661740124225616, 0.020364094525575638, 0.05181950703263283, -0.5981447696685791, 0.5159105658531189, 0.5109699368476868, -0.2893180847167969, 0.10692372918128967...
Consider the following document: \documentclass{article} \usepackage{unicode-math} \usepackage{hyperref} \begin{document} aaa \end{document} When I compile it with `lualatex`, I get the following error: ! Undefined control sequence. <argument> \Url@FormatString l.5037 \let \HyOrg@url\url ? If I put `hyperref` before `unicode-math`, I get a different error: ! Undefined control sequence. <argument> \Url@FormatString l.2317 } The strange part is, the first document (more complicated one to be exact, the one I've reduced to this example) worked for me just yesterday. I have not updated any part of the system; I tried reverting back to previous commits and that did not help either. Does anyone know what can cause this problem? System details: * OSX 10.8.5 * TexLive 2013 * luatex r30581, hyperref r28213, unicode-math r30504 Edit: the result of `\listfiles`: article.cls 2007/10/19 v1.4h Standard LaTeX document class size10.clo 2007/10/19 v1.4h Standard LaTeX file (size option) unicode-math.sty 2013/05/04 v0.7e Unicode maths in XeLaTeX and LuaLaTeX ifxetex.sty 2010/09/12 v0.6 Provides ifxetex conditional ifluatex.sty 2010/03/01 v1.3 Provides the ifluatex switch (HO) expl3.sty 2013/07/28 v4582 L3 Experimental code bundle wrapper l3names.sty 2012/12/07 v4346 L3 Namespace for primitives l3bootstrap.sty 2013/07/28 v4581 L3 Experimental bootstrap code luatex.sty 2010/03/09 v0.4 LuaTeX basic definition package (HO) infwarerr.sty 2010/04/08 v1.3 Providing info/warning/error messages (HO) etex.sty 1998/03/26 v2.0 eTeX basic definition package (PEB) luatex-loader.sty 2010/03/09 v0.4 Lua module loader (HO) pdftexcmds.sty 2011/11/29 v0.20 Utility functions of pdfTeX for LuaTeX (HO) ltxcmds.sty 2011/11/09 v1.22 LaTeX kernel commands for general use (HO) ifpdf.sty 2011/01/30 v2.3 Provides the ifpdf switch (HO) l3basics.sty 2013/07/28 v4581 L3 Basic definitions l3expan.sty 2013/07/24 v4565 L3 Argument expansion l3tl.sty 2013/07/28 v4581 L3 Token lists l3seq.sty 2013/07/28 v4581 L3 Sequences and stacks l3int.sty 2013/07/28 v4581 L3 Integers l3quark.sty 2013/07/21 v4564 L3 Quarks l3prg.sty 2013/07/28 v4581 L3 Control structures l3clist.sty 2013/07/28 v4581 L3 Comma separated lists l3token.sty 2013/07/28 v4581 L3 Experimental token manipulation l3prop.sty 2013/07/28 v4581 L3 Property lists l3msg.sty 2013/07/28 v4581 L3 Messages l3file.sty 2013/07/28 v4581 L3 File and I/O operations l3skip.sty 2013/07/28 v4581 L3 Dimensions and skips l3keys.sty 2013/07/28 v4581 L3 Experimental key-value interfaces l3fp.sty 2013/07/09 v4521 L3 Floating points l3box.sty 2013/07/28 v4581 L3 Experimental boxes l3coffins.sty 2012/09/09 v4212 L3 Coffin code layer l3color.sty 2012/08/29 v4156 L3 Experimental color support l3luatex.sty 2013/07/28 v4581 L3 Experimental LuaTeX-specific functions l3candidates.sty 2013/07/24 v4576 L3 Experimental additions to l3kernel xparse.sty 2013/07/28 v4582 L3 Experimental document command parser l3keys2e.sty 2013/07/28 v4582 LaTeX2e option processing using LaTeX3 keys fontspec.sty 2013/05/20 v2.3c Font selection for XeLaTeX and LuaLaTeX luaotfload.sty 2013/07/23 v2.3b OpenType layout system luatexbase.sty 2013/05/11 v0.6 Resource management for the LuaTeX macro progr ammer luatexbase-compat.sty 2011/05/24 v0.4 Compatibility tools for LuaTeX luatexbase-modutils.sty 2013/05/11 v0.6 Module utilities for LuaTeX luatexbase-loader.sty 2013/05/11 v0.6 Lua module loader for LuaTeX luatexbase-regs.sty 2011/05/24 v0.4 Registers allocation for LuaTeX luatexbase-attr.sty 2013/05/11 v0.6 Attributes allocation for 
LuaTeX luatexbase-cctb.sty 2013/05/11 v0.6 Catcodetable allocation for LuaTeX luatexbase-mcb.sty 2013/05/11 v0.6 Callback management for LuaTeX fontspec-patches.sty 2013/05/20 v2.3c Font selection for XeLaTeX and LuaLaTeX fixltx2e.sty 2006/09/13 v1.1m fixes to LaTeX fontspec-luatex.sty 2013/05/20 v2.3c Font selection for XeLaTeX and LuaLaTeX fontenc.sty eu2enc.def 2010/05/27 v0.1h Experimental Unicode font encodings eu2lmr.fd 2009/10/30 v1.6 Font defs for Latin Modern xunicode.sty 2011/09/09 v0.981 provides access to latin accents and many othe r characters in Unicode lower plane eu2lmss.fd 2009/10/30 v1.6 Font defs for Latin Modern graphicx.sty 1999/02/16 v1.0f Enhanced LaTeX Graphics (DPC,SPQR) keyval.sty 1999/03/16 v1.13 key=value parser (DPC) graphics.sty 2009/02/05 v1.0o Standard LaTeX Graphics (DPC,SPQR) trig.sty 1999/03/16 v1.09 sin cos tan (DPC) graphics.cfg 2010/04/23 v1.9 graphics configuration of TeX Live pdftex.def 2011/05/27 v0.06d Graphics/color for pdfTeX fontspec.cfg catchfile.sty 2011/03/01 v1.6 Catch the contents of a file (HO) etexcmds.sty 2011/02/16 v1.5 Avoid name clashes with e-TeX commands (HO) fix-cm.sty 2006/09/13 v1.1m fixes to LaTeX ts1enc.def 2001/06/05 v3.0e (jk/car/fm) Standard LaTeX file filehook.sty 2011/10/12 v0.5d Hooks for input files unicode-math-luatex.sty lualatex-math.sty 2013/08/03 v1.3 Patches for mathematics typesetting with Lu aLaTeX etoolbox.sty 2011/01/03 v2.1 e-TeX tools for LaTeX unicode-math-table.tex hyperref.sty 2012/11/06 v6.83m Hypertext links for LaTeX hobsub-hyperref.sty 2012/05/28 v1.13 Bundle oberdiek, subset hyperref (HO) hobsub-generic.sty 2012/05/28 v1.13 Bundle oberdiek, subset generic (HO) hobsub.sty 2012/05/28 v1.13 Construct package bundles (HO) ifvtex.sty 2010/03/01 v1.5 Detect VTeX and its facilities (HO) intcalc.sty 2007/09/27 v1.1 Expandable calculations with integers (HO) kvsetkeys.sty 2012/04/25 v1.16 Key value parser (HO) kvdefinekeys.sty 2011/04/07 v1.3 Define keys (HO) pdfescape.sty 2011/11/25 v1.13 Implements pdfTeX's escape features (HO) bigintcalc.sty 2012/04/08 v1.3 Expandable calculations on big integers (HO) bitset.sty 2011/01/30 v1.1 Handle bit-vector datatype (HO) uniquecounter.sty 2011/01/30 v1.2 Provide unlimited unique counter (HO) letltxmacro.sty 2010/09/02 v1.4 Let assignment for LaTeX macros (HO) hopatch.sty 2012/05/28 v1.2 Wrapper for package hooks (HO) xcolor-patch.sty 2011/01/30 xcolor patch atveryend.sty 2011/06/30 v1.8 Hooks at the very end of document (HO) atbegshi.sty 2011/10/05 v1.16 At begin shipout hook (HO) refcount.sty 2011/10/16 v3.4 Data extraction from label references (HO) hycolor.sty 2011/01/30 v1.7 Color options for hyperref/bookmark (HO) auxhook.sty 2011/03/04 v1.3 Hooks for auxiliary files (HO) kvoptions.sty 2011/06/30 v3.11 Key value format for package options (HO) pd1enc.def 2012/11/06 v6.83m Hyperref: PDFDocEncoding definition (HO) hyperref.cfg 2002/06/06 v1.2 hyperref configuration of TeXLive url.sty 1999/03/02 ver 1.4 Verb mode for urls, email addresses, and fi le names hpdftex.def 2012/11/06 v6.83m Hyperref driver for pdfTeX rerunfilecheck.sty 2011/04/15 v1.7 Rerun checks for auxiliary files (HO) t3cmr.fd 2001/12/31 TIPA font definitions supp-pdf.mkii epstopdf-base.sty 2010/02/09 v2.5 Base part for package epstopdf grfext.sty 2010/08/19 v1.1 Manage graphics extensions (HO) epstopdf-sys.cfg 2010/07/13 v1.3 Configuration of (r)epstopdf for TeX Live nameref.sty 2012/10/27 v2.43 Cross-referencing by name of section gettitlestring.sty 2010/12/03 v1.4 Cleanup title references (HO) 
smallcaps.out smallcaps.out
[ -0.006669468712061644, 0.010438437573611736, 0.0003030856605619192, 0.017262548208236694, 0.02320951595902443, 0.01991959474980831, 0.007601907476782799, 0.004240495152771473, -0.008311976678669453, -0.03318513557314873, -0.009523414075374603, -0.0009046255145221949, -0.0147425951436162, 0...
[ -0.1127357929944992, 0.18121245503425598, 0.5624709725379944, -0.18744897842407227, 0.20607659220695496, 0.22961720824241638, -0.2156195044517517, -0.07562875747680664, -0.2893017828464508, -0.6634038686752319, -0.2063482701778412, 0.6522276997566223, -0.298517644405365, 0.2598574161529541...
I am using the plugin Ignitewoo Wholesale price and have coded the single-product page to show two prices, with the idea of one being the wholesale price and the other the normal retail price, but at the moment it's showing the wholesale price twice. How can I change one of them to be only the normal price? <p itemprop="price" class="price"><?php echo woocommerce_price($product->get_price_including_tax()); ?> <span class="pcat"><?php if ( is_user_logged_in() ) { ?> trade <?php } else { ?> rrp <?php } ?> </span></p> <?php if ( is_user_logged_in() ) { ?> <p itemprop="price" class="rrpprice"><?php echo woocommerce_price($product->get_price_including_tax()); ?> <span class="pcat">rrp</span> </p> <?php } else { ?> <?php } ?>
[ 0.0002330453135073185, 0.027700964361429214, 0.001409625168889761, 0.02741285413503647, -0.04343701899051666, 0.007989222183823586, 0.010265071876347065, 0.016265347599983215, -0.017624396830797195, 0.031524352729320526, -0.021401632577180862, 0.006991098634898663, -0.0029109385795891285, ...
[ 0.5319100618362427, 0.17339321970939636, 0.5129550099372864, -0.2697572708129883, 0.17667701840400696, 0.13993388414382935, -0.17530375719070435, -0.9555176496505737, 0.049351032823324203, -0.419299840927124, -0.018050288781523705, 0.812609851360321, 0.024591252207756042, 0.135714292526245...
Since wordpress does not have a menu icon (aka "user friendly") way to add the command to a post or page, I'm trying to create a shortcode to do it. However, as logic would have it, it's not that easy. Doing this with a shortcode merely inserts it into the markup, which happens post-process, not pre-process, as wordpress does its substitution in pre-process. Can anyone shed light on how this can be done?
[ 0.006161301396787167, 0.007689518388360739, 0.010438232682645321, 0.01783459633588791, -0.00713471882045269, 0.009893937967717648, 0.008227581158280373, -0.0014735197182744741, -0.019111741334199905, 0.008733009919524193, -0.027383359149098396, 0.012236197479069233, 0.0029220429714769125, ...
[ 0.5210455060005188, -0.07025764137506485, 0.17441165447235107, 0.1280861794948578, 0.04732133075594902, 0.11880466341972351, 0.08504562824964523, 0.36219650506973267, -0.019947877153754234, -0.615544855594635, 0.13368238508701324, 0.17098569869995117, -0.3979516625404358, 0.016045842319726...
In Realm of the Mad God, in the Wine Cellar, you somehow get wine that regenerates HP or MP. I was wondering if you get it from the barrels, Oryx's minions, or even Oryx himself.
[ 0.04633152857422829, 0.025284739211201668, -0.005384947173297405, 0.005364735145121813, -0.03234647959470749, -0.03488309681415558, 0.014379761181771755, 0.0032158747781068087, -0.03318377956748009, 0.022701414301991463, -0.04118511825799942, 0.025892335921525955, 0.009909753687679768, 0.0...
[ 0.7517896294593811, 0.41188204288482666, -0.17643137276172638, 0.45254629850387573, -0.3895096182823181, -0.10945659130811691, 0.5952959656715393, 0.3920898735523224, -0.18003632128238678, -0.23874446749687195, -0.00011141308641526848, -0.03909134119749069, -0.22209785878658295, 0.52837210...
I was trying to eliminate the vertical space before and after the title of a section by using the `titlesec` package and the command `\titlespacing*{\section}{0pt}{0pt}{0pt}`, but it only works in eliminating the vertical space after the section title. Before the section title there is still a stretch of double spacing. I've searched the database and didn't find a solution.
[ 0.009731514379382133, 0.007903313264250755, -0.016421183943748474, 0.024597465991973877, 0.0021991776302456856, 0.03416871652007103, 0.0114535978063941, 0.004831025842577219, -0.021395577117800713, 0.01505269855260849, -0.02941221557557583, 0.012430486269295216, -0.005661728326231241, 0.02...
[ 0.5530990362167358, 0.10436694324016571, 0.4005683958530426, 0.0993083193898201, 0.22177650034427643, -0.22511440515518188, 0.031635962426662445, 0.010554591193795204, -0.1345607340335846, -0.4230515658855438, -0.022286806255578995, 0.4313375949859619, -0.14835241436958313, 0.4702656567096...
I'm about to make a new child theme and really would like to learn a bit about making it responsive. The design/changes will be rather small. I've done some child themes for both Twenty ten and Twenty eleven but not sure which one to choose for trying to learn responsive design. You could argue Twenty eleven is the latest but I've read somewhere Twenty ten is better maintained. I know there are many barebones/frameworks, but these two I know quite well. Any thoughts on this?
[ 0.009473740123212337, 0.00810613576322794, -0.031264375895261765, -0.00273865251801908, 0.006471789442002773, 0.003200638573616743, 0.009253822267055511, 0.006780616473406553, -0.015991218388080597, -0.04404483363032341, -0.011575127020478249, 0.0019975858740508556, 0.008323793299496174, 0...
[ 0.39836186170578003, 0.08554735034704208, 0.2227764129638672, 0.09332873672246933, 0.05164347589015961, 0.7330450415611267, 0.09761392325162888, 0.21660128235816956, -0.4938426911830902, -1.0261685848236084, 0.4397731423377991, 0.39033591747283936, 0.20426888763904572, 0.4172549545764923, ...
What’s the difference between _old-fashioned_ , _out of fashion_ , _unfashionable_ and _outdated?_ * She wears old-fashioned clothes. * She wears unfashionable clothes. * She wears outdated clothes. * The clothes she wears are out of fashion.
[ 0.0002938929246738553, 0.019364118576049805, 0.003831841517239809, 0.02525397203862667, -0.03432647883892059, -0.023421483114361763, 0.019022010266780853, 0.027611784636974335, -0.012413771823048592, -0.0062016695737838745, -0.03339666128158569, 0.002313129371032119, 0.03315677493810654, -...
[ 0.8791753649711609, 0.31133192777633667, -0.12905879318714142, 0.07186758518218994, 0.06912337243556976, 0.38127219676971436, 0.3478212356567383, 0.10773152112960815, -0.4788007140159607, -0.44993719458580017, -0.08026473969221115, 0.28466111421585083, 0.14725661277770996, 0.66343843936920...
I'm trying to tame the Ghost Saber with my hunter. It's level 19. I first tried when I was level 16 and the game told me the saber was too high-level for me to tame. I figured I should reduce the level so that it's yellow to me, but I still got the same message when I tried to tame it at 17. What are the level requirements for taming a new pet?
[ -0.0036282632499933243, 0.0029923405963927507, -0.01990179345011711, -0.0058153835125267506, 0.030249718576669693, 0.011253084056079388, 0.010997118428349495, -0.0037126580718904734, -0.02307429537177086, -0.01933632418513298, -0.003443991532549262, 0.016025956720113754, -0.01385054178535938...
[ 0.249439999461174, -0.27772122621536255, 0.4786487817764282, 0.011782768182456493, -0.43167349696159363, -0.20423796772956848, 0.7344651222229004, -0.315898597240448, 0.15587514638900757, -0.4108618199825287, 0.39778992533683777, 0.47440382838249207, -0.1938881278038025, 0.1544265896081924...
I'm trying to understand a passage in Koenker's Quantile Regression book (p. 33). It says: (note that y and x are vectors and w is the direction vector) ![enter image description here](http://i.stack.imgur.com/KZVlI.png) With the first part of the outcome I have no problem: I apply the product rule for derivatives, and the derivative of the indicator function is 0 since the discontinuity of the indicator function doesn't occur where y-x'b is different from zero. But what happens at y-x'b=0? My logical mind tells me that there is no derivative of the indicator function there, short of some complex approximation, so how did Koenker come up with that result? Thanks in advance!
[ -0.010661162436008453, 0.008725582621991634, -0.01464967243373394, 0.007957758381962776, -0.03233087435364723, -0.0008811969310045242, 0.008093063719570637, 0.015612270683050156, -0.010437548160552979, -0.009082265198230743, -0.01019489485770464, 0.010147840715944767, -0.0015886626206338406,...
[ 0.03203585371375084, -0.31528928875923157, 0.5640358924865723, -0.16547705233097076, 0.02612152136862278, 0.04554348438978195, 0.07843529433012009, -0.20274236798286438, 0.09687165915966034, -0.4557391107082367, 0.07628888636827469, 0.6758776903152466, 0.006188450381159782, 0.3114918172359...
Suppose we are interested in English scores ($E_{ij}$) and Math scores ($M_{ij}$) in students in various classes. The scores are in different scales from each other. We perform two linear regressions. The regression with the English scores has $X$ as the covariates. The regression with the Math scores has $Z$ and $W$ as the covariates. Is there a way to combine the two regressions into one regression? Note that we use GEE linear regression with an exchangeable working variance covariance matrix. So we have: $$E(E_{ij}) = \beta_{0}+ \beta_{1}X_{ij}$$ $$E(M_{ij}) = \gamma_{0}+ \gamma_{1} Z_{ij} + \gamma_{2}W_{ij}$$ Could we somehow combine the covariates? For example, would it make sense to consider: $$E(M_{ij}-E_{ij}) = (\gamma_{0}-\beta_{0})+(\gamma_{1}Z_{ij}-\beta_{1}X_{ij})+ \gamma_{2} W_{ij}$$
[ -0.0038073186296969652, 0.014665248803794384, -0.009905193001031876, 0.013386154547333717, -0.01169451791793108, -0.005945125129073858, 0.009593494236469269, -0.02227040007710457, -0.01103643886744976, -0.01988672837615013, -0.004418603610247374, 0.014466345310211182, -0.009924056008458138, ...
[ 0.033354252576828, -0.1268942803144455, 0.09280425310134888, 0.1564110964536667, 0.007036298979073763, 0.3322478234767914, 0.1332622915506363, -0.0915478989481926, 0.24216896295547485, -0.7662912011146545, 0.0830719992518425, 0.48645925521850586, -0.10846523940563202, 0.009831857867538929,...
I am investigating a discrepancy between male and female self-reports of sexual experiences. The original survey consists of a female version (asking about victimization) and a male version (asking about perpetration). Typically, when given the original version, females' reported rates of victimization are about two-thirds higher than male rates of perpetration. I have modified the original survey (both male and female versions) in order to determine whether the wording of the modified version will have an impact on the female/victim--male/perpetrator discrepancy. One of my hypotheses is that the modified version will produce a narrower discrepancy between female reports of victimization and male reports of perpetration. I need to figure out what test (or series of tests) I can use to determine if there is a significant difference between the discrepancy rate of the original survey and the discrepancy rate of my modified version. Additional info:

* males and females are not matched, and I have different sample sizes of males and females
* each subject was administered both versions (original and modified) of the survey, according to gender
* subjects answered the original survey first, and then were given the modified survey
* my data will be nominal -- e.g. "Yes" I've had this experience, or "no" I haven't had this experience.
[ -0.012417088262736797, 0.021188659593462944, -0.018978949636220932, 0.025151332840323448, 0.020088884979486465, -0.03402729332447052, 0.009963471442461014, 0.004984832368791103, -0.01676202192902565, 0.0028390572406351566, 0.011300807818770409, 0.008583808317780495, -0.0034452308900654316, ...
[ 0.379902184009552, 0.2329649180173874, -0.021431220695376396, -0.07713315635919571, -0.19804325699806213, 0.3730264902114868, 0.34827563166618347, -0.10814108699560165, -0.11956308037042618, -0.291486918926239, 0.3318367004394531, 0.2120494544506073, 0.02849276177585125, 0.1851643770933151...
I have a bunch of GML datasets which I need to import into Esri feature classes. I am using the Data Interoperability extension for ArcGIS, which should support this. However, when trying to import my features I get the following error: "Error when writing Schema: A unique destination featuretype name could not be generated for """ Here is a snippet of the data (the XML tags were stripped when pasting; what remains shows a GML 3.2 document with schemaLocation http://www.opengis.net/gml/3.2 http://schemas.opengis.net/gml/3.2.1/gml.xsd, gml:id="zones", an inspireId of UK_zone, a language code of eng, and the name Teesside Urban Area). Thanks
[ 0.0018550631357356906, 0.005024039652198553, -0.004573211073875427, 0.02088889479637146, 0.016560867428779602, 0.011045520193874836, 0.008842991665005684, 0.029551442712545395, -0.013962147757411003, -0.025907166302204132, 0.0015636812895536423, 0.018450135365128517, 0.000733130844309926, ...
[ 0.3149505853652954, 0.29237279295921326, 0.2841319739818573, -0.055028099566698074, -0.1854286789894104, -0.2738484740257263, 0.12151508033275604, 0.24450427293777466, -0.30371665954589844, -0.6223199367523193, -0.0008879824308678508, 0.543766438961029, 0.05229329317808151, 0.0149382799863...
I'm interested in the possibility of trying to take as minimal a team as possible through the game to maximise the funds available to upgrade my runner. What archetype mix is considered the most solid for the various tasks required in a run? I do understand that in some missions you will have to take a story character along, or a token decker. Would a solo strategy revolve around a mage with healing? A rigger with bots for damage and healing? Or would it require a mix of more than one archetype, such as a Rigger Shaman?
[ -0.00023434031754732132, 0.022378336638212204, 0.006149317137897015, -0.004329632967710495, -0.01492786779999733, -0.007534036412835121, 0.007735667284578085, -0.003142524976283312, -0.01830647699534893, -0.01192120835185051, -0.011624102480709553, 0.020688805729150772, 0.010188366286456585,...
[ 0.15966472029685974, -0.12678483128547668, -0.19136099517345428, 0.37290075421333313, -0.5038117170333862, 0.1390700787305832, 0.18901598453521729, -0.377368301153183, -0.4136759638786316, -0.43882623314857483, 0.13338229060173035, 0.49987050890922546, 0.0696474015712738, -0.46420410275459...
I have a pgfplots plot. At the moment the scale on the x-axis is frequency in Hz, but it should be rad/s. To save myself the time of exporting the data file again, is it possible to easily multiply the x-coordinate of every data point by a certain factor in pgfplots? In this case 2*pi.
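For context, here is a minimal sketch of the kind of solution I am hoping for, based on my reading of the pgfplots coordinate-filter mechanism (untested; the file name data.dat and the column names f/amplitude are placeholders):

    \begin{tikzpicture}
      \begin{axis}[
          xlabel={angular frequency (rad/s)},
          % rescale every x coordinate from Hz to rad/s on the fly
          x filter/.code={\pgfmathparse{2*pi*#1}},
        ]
        % data.dat (placeholder) is assumed to have two columns: f amplitude
        \addplot table[x=f, y=amplitude] {data.dat};
      \end{axis}
    \end{tikzpicture}

If only a single plot needs rescaling, something like `x expr=\thisrow{f}*2*pi` in the table options might also work, assuming the installed pgfplots version supports that key.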
[ -0.011200601235032082, 0.024836057797074318, -0.019128138199448586, 0.021079793572425842, 0.012521103955805302, -0.01826503686606884, 0.010675669647753239, 0.02316758967936039, -0.026738788932561874, -0.030371399596333504, -0.0010589079465717077, 0.0092054083943367, -0.014141058549284935, ...
[ 0.29397934675216675, -0.1733243763446808, 0.6985602378845215, -0.0027529194485396147, -0.17002369463443756, 0.01677800342440605, -0.031535860151052475, -0.31192246079444885, -0.19797901809215546, -0.5644020438194275, 0.3584028482437134, 0.2077397257089615, -0.19887518882751465, 0.149012550...
I'm pretty new to this and was wondering how to turn my site URLs into SEO friendly URLs using .htaccess and mod_rewrite? My URLs take the form `mydomain.com/index.php?pid=1&pagename=Some Page` I would like it be `mydomain.com/Some Page` OR `mydomain.com/somepage.html` I know this is possible with mod_rewrite and .htaccess, but I'm having trouble finding an exact answer on how to accomplish this?
[ -0.010972924530506134, 0.01661178097128868, 0.011269602924585342, 0.014084935188293457, -0.014297958463430405, 0.0077390484511852264, 0.009154392406344414, 0.012056279927492142, -0.027130205184221268, -0.012817158363759518, -0.013663927093148232, -0.0031993647571653128, -0.010184058919548988...
[ 0.3023287057876587, 0.13642257452011108, 0.8229668736457825, -0.06256145983934402, -0.17518073320388794, -0.5483266711235046, 0.0976082906126976, 0.06736304610967636, -0.19802755117416382, -0.5909055471420288, -0.11391463130712509, 0.32612472772598267, -0.021887915208935738, -0.06314448267...
Background: During a conference an analyst pointed out in a tweet that developers hate scrum. Myself and another person responded that this was not the case, and we started discussing different scenarios for why developers would dislike scrum. One of the scenarios was that lazy developers are not able to hide in a scrum project; they are constantly challenged by the team to contribute. This discussion resulted in a blog post and video http://elsewhat.com/2010/05/20/lazy-developers-hate-agile-and%C2%A0scrum/ I've gotten three comments which I've tried to answer in a neutral way, but the comments do point out that there are some people who loathe scrum (and I am always 100% certain they are not lazy developers). **Question** **Has there ever been a survey among developers on the degree to which developers like or hate scrum?**
[ -0.003394349245354533, 0.007860965095460415, 0.005628032144159079, 0.020254572853446007, 0.020377323031425476, -0.014228436164557934, 0.005800280719995499, 0.004465084057301283, -0.010016567073762417, -0.013968939892947674, -0.007983138784766197, 0.015047606080770493, 0.019846858456730843, ...
[ 0.7710835933685303, 0.15684738755226135, -0.18847765028476715, -0.24007326364517212, -0.04511052742600441, 0.18817771971225739, 0.09405089169740677, 0.19079746305942535, -0.2597801089286804, -0.2434416264295578, 0.20409546792507172, 0.6149222254753113, -0.12553708255290985, -0.023286640644...
The question is slightly deceiving, but still simple for the academics here, so I present it: if I take a paramagnetic material and put it very close to a homogeneous magnetic field, the paramagnetic material will start to accelerate towards the magnetic body, so it gains kinetic energy (but I ensure it does not hit the magnet, stopping it with my fingers). Does the magnetic field or body then lose its magnetism, or does the magnetism stay the same? I ask because Wikipedia says: > Energy is needed to generate a magnetic field both to work against the > electric field that a changing magnetic field creates and to change the > magnetization of any material within the magnetic field However, I'm sure that paramagnetic materials are not magnetized, so where is this energy coming from? Or is my understanding wrong? If so, can someone link me to a good website which uses mathematics and theory to show how paramagnetic materials work? Thanks!
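For what it's worth, the rough textbook picture I have pieced together (assuming a small body of volume $V$, linear response, and small positive susceptibility $\chi$) is $$\vec m \approx \frac{\chi V}{\mu_0}\vec B,\qquad U \approx -\frac{\chi V}{2\mu_0}B^2,\qquad \vec F=-\nabla U \approx \frac{\chi V}{2\mu_0}\nabla\!\bigl(B^2\bigr),$$ so the body is pulled towards regions of stronger field, and the work done on it is accounted for by the field-plus-source energy rather than by any permanent magnetization of the body. Incidentally, this would also mean that in a perfectly homogeneous field there is no net force at all, since $\nabla(B^2)=0$. I would appreciate confirmation that this is the right way to look at it.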
[ 0.032230839133262634, 0.010649750009179115, -0.0010400572791695595, 0.007188594900071621, -0.015029214322566986, 0.0013854724820703268, 0.008184956386685371, -0.00380961736664176, -0.013384771533310413, -0.028301335871219635, -0.0028144458774477243, 0.02180948108434677, 0.004607125651091337,...
[ 0.5042354464530945, -0.12156252562999725, 0.27123454213142395, 0.36330646276474, -0.1911500096321106, -0.20144261419773102, -0.21497763693332672, -0.6794090270996094, -0.21799324452877045, -0.5064062476158142, -0.11501793563365936, 0.47661855816841125, 0.044198088347911835, 0.3275514841079...
I was in Google maps today and noticed near Baltimore, US where the river is, there's a huge brown spot. Is this due to all of the waste? If so, will this keep going all the way into the ocean? ![enter image description here](http://i.stack.imgur.com/5c8Le.png)
[ -0.007360599469393492, 0.004878134001046419, 0.00522195128723979, 0.008826328441500664, -0.012676604092121124, -0.005366960074752569, 0.005176926963031292, 0.027415916323661804, -0.01798546127974987, -0.0029936078935861588, 0.00482379412278533, 0.012644818052649498, -0.017200017347931862, ...
[ 0.3827313184738159, -0.3418521285057068, 0.2634623944759369, -0.03971228748559952, 0.6807579398155212, 0.15825004875659943, 0.03909740597009659, 0.763718843460083, -0.8945560455322266, -0.7072928547859192, 0.3822246491909027, 0.13828012347221375, -0.3575545847415924, 0.2834184765815735, ...
I am using the native internet calling feature provided on Jelly Bean. (I believe this feature has been available since Gingerbread.) I can make outgoing calls without issue. Since my SIP account is connected to a PSTN (i.e. phone number) I can also receive calls. However, when I answer an incoming call the calling party can hear me, but I cannot hear the caller on the other end. My guess is that this is some kind of NAT problem. See here, for example. With other third-party SIP apps it is possible to use STUN to eliminate the problem. I don't see any corresponding options for the native SIP interface, though. So I have two related questions: **1. Is it possible to enable/use STUN with native internet calling?** **2. How can I troubleshoot problems with native internet calling?**
[ -0.009651228785514832, -0.013504591770470142, -0.016555923968553543, 0.010395284742116928, -0.010645129717886448, -0.002604628913104534, 0.009669603779911995, 0.024329861626029015, -0.014489288441836834, -0.016281940042972565, -0.022404786199331284, 0.014348181895911694, -0.01537882816046476...
[ 0.3893339931964874, 0.19927021861076355, 0.2732858955860138, -0.026382816955447197, -0.12201137095689774, -0.22617653012275696, 0.3814798891544342, 0.43673449754714966, -0.2162563055753708, -0.7337009310722351, 0.043144211173057556, 0.6353073120117188, -0.17365522682666779, 0.0496857278048...
We are doing a cost benefit analysis on a migration project. It would be nice to be able to say that future changes will be x percent cheaper due to the migration. Does anyone have any experience or know of any study that shows what benefit we could expect?
[ 0.03187938034534454, 0.03708161041140556, -0.002526595490053296, 0.038353435695171356, 0.005878967233002186, -0.01887981966137886, 0.00906504224985838, -0.00630821380764246, -0.02652566321194172, -0.015678297728300095, 0.013798100873827934, 0.02655024640262127, -0.01578328013420105, 0.0018...
[ 0.828119158744812, -0.09838207811117172, 0.026398736983537674, 0.3502987027168274, 0.401865154504776, 0.07507168501615524, -0.06576286256313324, 0.4190625548362732, -0.3250502049922943, -0.4928773045539856, 0.2642350196838379, 0.3036564588546753, 0.11286698281764984, 0.0015484322793781757,...
I was taught by an old-time programmer who called this nested code

    if(condition){
        <div>
            <b>text <u>text</u></b><br/>
            <table>
                <tr>
                    <td>a</td>
                    <td>b</td>
                </tr>
            </table>
        </div>
    } else {
        log();
    }

as opposed to this code, which was considered to be not nested:

    if(condition){<div>
    <b>text <u>text</u></b>
    <table><tr><td>a</td><td>b</td></tr></table>
    </div>
    } else {log();}

When I started reading tutorials I found early on that I had the wrong definition of nested. Messy vs clean is not sufficient for this because messy code encompasses other problems that I am not talking about, for example confusing flow of logic, using static initializers inappropriately, poor use of variable names, using twice the code to get the job done in a more confusing manner, etc. What is the right word for this, where the first example is ... and the second example is not ...?
[ -0.00550483725965023, 0.014491491951048374, -0.0042813499458134174, 0.0017516955267637968, 0.004725053906440735, 0.014016570523381233, 0.005320199765264988, -0.01586042158305645, -0.010711540468037128, 0.005104420240968466, -0.009736813604831696, 0.0019143441459164023, 0.002317172707989812, ...
[ 0.1650315821170807, 0.1378149539232254, 0.04241122677922249, -0.1742989420890808, -0.04059969261288643, -0.2230553925037384, 0.2685598134994507, -0.4040151834487915, 0.03412819281220436, -0.8016576170921326, 0.2236015498638153, 0.11184995621442795, -0.40794000029563904, -0.0120702646672725...
> **Possible Duplicate:** > What are the most important things I need to do to encourage Google > Sitelinks? I just met a fellow designer who runs a website with the same PageRank as mine. When googling both our names, search results show sitelinks below his website, but not mine. Do you know which factors Google uses to decide when and how these links are shown in search results?
[ -0.007673963438719511, -0.00969045702368021, -0.006384157110005617, 0.021603599190711975, -0.013683815486729145, -0.007351202890276909, 0.007320340257138014, 0.019573602825403214, -0.0257666427642107, -0.01283914316445589, 0.008782160468399525, 0.018892912194132805, 0.00787703599780798, 0....
[ 0.6585536003112793, 0.4228346049785614, 0.22245411574840546, 0.1755773425102234, -0.013596716336905956, 0.0012119817547500134, 0.14112837612628937, 0.13966159522533417, -0.2954793870449066, -0.5748781561851501, 0.41454747319221497, 0.3686167299747467, 0.13818830251693726, 0.201308324933052...
I have a data set, total_data and I applied a model to it. For instance, the model has one parameter $\beta$, and I calculated the log-likelihood of the fitted model (using maximum likelihood method). In the mean time, I have a stratification variable that could divide total_data into two subsets: high_risk_data and low_risk_data. Now, I could apply the same model to each subset, obtaining $\beta_1$ and $\beta_2$. I could sum up the log-likelihood from each fitting and obtain some kind of overall log-likelihood for the total_data under model 2. Model 1: parameter $\beta$ fitted on total_data; Model 2: parameters $\beta_1$ and $\beta_2$ fitted on each subset of data. Can I consider them as nested models, since I have one fewer parameter in model 1? Can I apply the likelihood ratio test to select a better model? I know a more standard way to obtain something similar to model2: introduce a second stratification variable-group and apply the model ($\beta$,$\beta_{group}$) to the total_data. The likelihood ratio test would be to test against H0:$\beta_{group}=0$. Is this the same as the method I described above?
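To spell out what I have in mind (assuming for the moment a genuine likelihood and independence between the two strata; I realize the GEE working-correlation structure complicates this), the statistic I would compute is $$\Lambda = 2\Bigl[\bigl(\ell_{\text{high}}(\hat\beta_1)+\ell_{\text{low}}(\hat\beta_2)\bigr)-\ell_{\text{total}}(\hat\beta)\Bigr] \;\overset{\text{approx.}}{\sim}\; \chi^2_{q},$$ where $q$ is the number of extra free parameters in model 2: $q=1$ if $\beta$ is the only parameter allowed to differ between the strata, and larger if intercepts or other parameters differ as well.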
[ 0.015155363827943802, 0.020139245316386223, 0.0008259767782874405, 0.006788778118789196, 0.02007811702787876, 0.0001244288869202137, 0.008509421721100807, -0.010945947840809822, -0.008875526487827301, 0.0006283549591898918, -0.016487382352352142, 0.006724879611283541, -0.0183425210416317, ...
[ 0.047355491667985916, -0.031243136152625084, 0.04543514549732208, -0.10357318073511124, -0.011139526031911373, 0.5462650656700134, 0.09515698999166489, -0.48191237449645996, -0.012907443568110466, -0.5555053949356079, 0.11419126391410828, 0.6622744798660278, -0.4551942050457001, 0.22174619...
(Edited for clarity, sorry!) I have a table in which many records have long/lat info, but many others do not. In other words the locations for those records are currently unknown. Therefore there are many records that are visible in table view, but not in map view. This is not an error, it is just the current state of my database. I would like to be able to add the locations manually, case by case, as they become known. I was hoping there was a way to do that. I know you can add new records via the map view and table view. And you can edit existing records in the table view. I am wondering if you can edit an existing record in the map view by clicking on the map to indicate that location. The long and lat where clicked would populate the table. If not, through the existing UI, what would be the best way to implement such functionality? Thanks! Kenny
[ 0.009838779456913471, 0.0067442008294165134, 0.0026340095791965723, 0.013189217075705528, 0.0007828481029719114, -0.004400692414492369, 0.004694798961281776, 0.02668662555515766, -0.0171254463493824, 0.005190434865653515, -0.006394804921001196, 0.017657741904258728, -0.018741074949502945, ...
[ 0.2797783315181732, 0.1401403844356537, 0.32369035482406616, 0.3714418411254883, 0.17961902916431427, -0.048647262156009674, 0.12839432060718536, 0.39724352955818176, -0.5517069101333618, -0.8709173202514648, 0.021969281136989594, 0.42646482586860657, -0.10585132241249084, 0.32172003388404...
I have a question. I found that the effect of a variable is non-significant (p > .05), but the effect size d is 0.2 (a small effect). So I used GPower to calculate the power and found it is 0.14, which means I would need a larger sample size to get a significant result. So is the hypothesis rejected or supported, given that it would presumably be significant with a larger sample size? Thanks. Let me make it clear. My hypothesis is that **priming improves creative performance.** So if p > .05 and d = 0, the hypothesis is rejected: **priming has no effect on creative performance**. My question is: if p > .05 and d = 0.2 (a small effect), is the hypothesis rejected or supported?
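As a back-of-the-envelope check (assuming a two-sided, two-independent-samples comparison at $\alpha=.05$ with target power $0.80$; my actual design may differ), the usual normal approximation gives the sample size needed to detect $d=0.2$: $$n \approx \frac{2\,(z_{1-\alpha/2}+z_{1-\beta})^2}{d^2} = \frac{2\,(1.96+0.84)^2}{0.2^2} \approx 392 \text{ per group},$$ which is why my current study, at power 0.14, is far too small to be informative either way.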
[ 0.0014551240019500256, 0.0038894645404070616, -0.0006468947976827621, 0.021653588861227036, -0.0005586179904639721, 0.004645338281989098, 0.007163693197071552, -0.012259998358786106, -0.010431147180497646, -0.028280965983867645, 0.002437473740428686, 0.00949886254966259, -0.01508769951760768...
[ 0.38145264983177185, 0.024463584646582603, 0.05844041332602501, 0.09620559215545654, -0.11919557303190231, 0.3165942132472992, 0.17791379988193512, -0.31543347239494324, -0.19989794492721558, -0.33477485179901123, 0.15511272847652435, 0.5912181735038757, -0.015202377922832966, 0.0905356481...
What is the meaning of the term _“just-in-time jobs”_? Of my dictionaries, both Macmillan and Oxford have a definition of _just-in-time_. Macmillan says that it means: > bought, sent, or produced at the last possible time Meanwhile, Oxford says that _just-in-time_ is: > denoting a manufacturing system in which materials or components are > delivered immediately before they are required in order to minimize storage > costs But I still cannot ascertain the meaning of _“just-in-time jobs”_ as a phrase.
[ 0.0022408151999115944, 0.019807450473308563, -0.00509753217920661, 0.005631075240671635, 0.03349490463733673, -0.0001816821750253439, 0.007401398383080959, 0.012011095881462097, -0.00901728868484497, 0.021508708596229553, -0.008952884934842587, 0.01671837642788887, 0.020835621282458305, -0...
[ 0.35649311542510986, 0.001576232141815126, 0.09383714944124222, 0.025488222017884254, 0.4242013990879059, 0.4261711537837982, -0.07997329533100128, 0.3466396629810333, -0.3307592570781708, -0.5187481641769409, -0.42212092876434326, 0.14429426193237305, -0.05818391591310501, 0.3729085326194...
Not totally sure if this is the right place but here goes. I understand that you are able to use X11 forwarding over ssh to run GUIs and view them, without the server having a full GUI system like GNOME running/installed. My end goal is to be able to "broadcast" my coding sessions so people can view them online. Right now I have set it up so that a restricted tmux session will basically always be mirroring my own personal tmux coding session window. So if you ssh onto the server on a restricted account, you can hop onto the restricted session and follow my coding. However, I want to be able to let people watch this from their browser. I suspect that the best way is to have some terminal emulator on X11 running on my server attached to the tmux session... and then somehow have the browser view that terminal emulator? I'm not really too familiar with this domain, so does anyone know if this is possible? Or is there a better approach I should be taking? Note that I code entirely on a remote headless server that I ssh onto.
[ 0.014138948172330856, 0.005466687958687544, -0.0015496235573664308, -0.004331517033278942, -0.034493155777454376, -0.018869925290346146, 0.007393076084554195, 0.00047331233508884907, -0.017834920436143875, -0.012030981481075287, -0.012742122635245323, 0.008556146174669266, -0.007382683455944...
[ 0.9064035415649414, -0.034219998866319656, 0.043970346450805664, 0.067731574177742, -0.30897992849349976, -0.35040083527565, 0.47304144501686096, 0.26273179054260254, -0.20170706510543823, -0.8055792450904846, 0.04480849578976631, 0.6919555068016052, -0.2835770547389984, -0.084839403629302...