locator.getPatientServiceImplPort();
```

.NET:

```
PatientServiceImplService service = new PatientServiceImplService();
```

I'd hop over to <http://www.w3.org/TR/wsdl.html>, which I think explains Port, Service, and Endpoint reasonably well. A locator is an implementation-specific mechanism that some WS stacks use to provide access to service endpoints.
For my blog I want to use the Output Cache to save a cached version of a particular post for around 10 minutes, and that's fine:

```
<%@ OutputCache Duration="600" VaryByParam="*" %>
```

However, if someone posts a comment, I want to clear the cache so that the page is refreshed and the comment can be seen. How do I do this in ASP.NET C#?

I've found the answer I was looking for:

```
HttpResponse.RemoveOutputCacheItem("/caching/CacheForever.aspx");
```
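One way to wire that up (a sketch only: `SaveComment` and the page path are hypothetical; `HttpResponse.RemoveOutputCacheItem` is the real static method) is to evict the cached page from the comment handler:

```
// Hypothetical comment-submission handler on the post page.
protected void OnCommentSubmitted(object sender, EventArgs e)
{
    SaveComment(); // hypothetical persistence call

    // Evict the cached copy of this page from the output cache;
    // the next request re-renders it with the new comment.
    HttpResponse.RemoveOutputCacheItem("/posts/my-post.aspx");
}
```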
What is the best way of doing case-insensitive string comparison in C++ without transforming a string to all uppercase or all lowercase? Please indicate whether the methods are Unicode-friendly and how portable they are.

Boost includes a handy algorithm for this:

```
#include <boost/algorithm/string.hpp>
// Or, for fewer header dependencies:
//#include <boost/algorithm/string/predicate.hpp>

std::string str1 = "hello, world!";
std::string str2 = "HELLO, WORLD!";

if (boost::iequals(str1, str2)) {
    // Strings are identical
}
```
Here is the sample code for my accordion:

```
<mx:Accordion x="15" y="15" width="230" height="599" styleName="myAccordion">
    <mx:Canvas id="pnlSpotlight" label="SPOTLIGHT" height="100%" width="100%" horizontalScrollPolicy="off">
        <mx:VBox width="100%" height="80%" paddingTop="2" paddingBottom="1" verticalGap="1">
            <mx:Repeater id="rptrSpotlight" dataProvider="{aSpotlight}">
                <sm:SmallCourseListItem
                    viewClick="PlayFile(event.currentTarget.getRepeaterItem().fileID);"
                    Description="{rptrSpotlight.currentItem.fileDescription}"
                    FileID="{rptrSpotlight.currentItem.fileID}"
                    detailsClick="{detailsView.SetFile(event.currentTarget.getRepeaterItem().fileID,this)}"
                    Title="{rptrSpotlight.currentItem.fileTitle}"
                    FileIcon="{iconLibrary.getIcon(rptrSpotlight.currentItem.fileExtension)}" />
            </mx:Repeater>
        </mx:VBox>
    </mx:Canvas>
</mx:Accordion>
```

I would like to include a button in each header like so:

![wishful](https://i.stack.imgur.com/EN3kP.jpg)

Thanks, I got it working using [FlexLib](http://code.google.com/p/flexlib/)'s CanvasButtonAccordionHeader.
Does anybody recommend a design pattern for taking a binary data file, parsing parts of it into objects, and storing the resultant data into a database? I think a similar pattern could be used for taking an XML or tab-delimited file and parsing it into its representative objects.

A common data structure would include:

> (Header) (DataElement1) (DataElement1SubData1) (DataElement1SubData2) (DataElement2) (DataElement2SubData1) (DataElement2SubData2) (EOF)

I think a good design would include a way to change out the parsing definition based on the file type or some defined metadata included in the header. So a [Factory Pattern](http://www.oodesign.com/factory-method-pattern.html) would be part of the overall design for the
Parser part.

1. Just write your file parser, using whatever techniques come to mind.
2. Write lots of unit tests for it to make sure all your edge cases are covered.

Once you've done this, you will actually have a reasonable idea of the problem/solution. Right now you just have theories floating around in your head, most of which will turn out to be misguided.

Step 3: Refactor mercilessly. Your aim should be to delete about half of your code.

You'll find that your code at the end will either resemble an existing design pattern, or you'll have created a new one. You'll then be qualified
to answer this question :-)
I am intentionally leaving this quite vague at first. I'm looking for discussion, and what issues are important, more than I'm looking for hard answers.

I'm in the middle of designing an app that does something like portfolio management. The design I have so far is:

* Problem: a problem that needs to be solved
* Solution: a proposed solution to one or more problems
* Relationship: a relationship among two problems, two solutions, or a problem and a solution. Further broken down into:
  + Parent-child - some sort of categorization / tree hierarchy
  + Overlap - the degree to which two solutions or two problems
really address the same concept
  + Addresses - the degree to which a problem addresses a solution

My question is about the temporal nature of these things. Problems crop up, then fade. Solutions have an expected resolution date, but that might be modified as they are developed. The degree of a relationship might change over time as problems and solutions evolve.

So, the question: what is the best design for versioning of these things so I can get both a current and an historical perspective of my portfolio?

*Later: perhaps I should make this a more specific question, though @Eric Beard's answer is worth
an up.*

I've considered three database designs. I'll show enough of each to reveal its drawbacks. My question is: which should I pick, or can you think of something better?

1: Problems (and separately, Solutions) are self-referential in versioning.
---------------------------------------------------------------------------

```
table problems
  int id | string name | text description | datetime created_at | int previous_version_id

foreign key previous_version_id -> problems.id
```

This is problematic because every time I want a new version, I have to duplicate the entire row, including that long `description` column.

2: Create a new Relationship type: Version.
-------------------------------------------

```
table problems
  int id | string name | text description | datetime created_at
```

This simply moves
the relationship from the Problems and Solutions tables into the Relationships table. Same duplication problem, but perhaps a little "cleaner" since I already have an abstract Relationship concept.

3: Use a more Subversion-like structure; move all Problem and Solution attributes into a separate table and version them.
-------------------------------------------------------------------------------------------------------------------------

```
table problems
  int id

table attributes
  int id | int thing_id | string thing_type | string name | string value | datetime created_at | int previous_version_id

foreign key (thing_id, thing_type) -> problems.id or solutions.id
foreign key previous_version_id -> attributes.id
```

This means that to load the current version of a Problem or Solution I
have to fetch all versions of the attribute, sort them by date, and then use the most current. That might not be terrible. What seems really bad to me is that I can't type-check these attributes in the database. That `value` column has to be free text. I can make the `name` column a reference into a separate `attribute_names` table that has a `type` column, but that doesn't *force* the correct type in the `attributes` table.

*Later still: response to @Eric Beard's comments about multi-table foreign keys:*

Alas, what I've described is simplistic: there are only two types of Things (Problems and Solutions).
I actually have about 9 or 10 different types of Things, so I'd have 9 or 10 columns of foreign keys under your strategy. I wanted to use single-table inheritance, but the Things have so little in common that it would be *extremely* wasteful to combine them into one table.

Hmm, sounds kind of like this site...

As far as a database design goes, a versioning system kind of like SVN, where you never actually do any updates, just inserts (with a version number) when things change, might be what you need. This is called MVCC, Multiversion Concurrency Control. A
wiki is another good example of this.
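Sketched in the same schema pseudocode as the designs above (the `version` and `is_current` columns are my assumptions, not part of the original designs), an insert-only MVCC variant of design 1 might look like:

```
table problems
  int id | int version | string name | text description | datetime created_at | bool is_current

primary key (id, version)
```

To "update" a problem you insert a new row with the same `id` and `version + 1`, and flip `is_current` off on the previous row. History is every row for an `id`; the current portfolio view simply filters on `is_current`.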
On my **Windows XP** machine, Visual Studio 2003, 2005, and 2008 all complain that I cannot start debugging my **web application** because I must either be a member of the Debug Users group or of the Administrators group. I am an Administrator, and I added myself to Debug Users just in case, and it still complains. Short of reformatting my machine and starting over, has anyone encountered this and fixed it [with some undocumented command]?

Which users and/or groups are in your "Debug programs" right (under User Rights Assignment)? Maybe that setting got overridden by group policy (Daniel's answer), or just got out
of whack for some reason. It should, obviously, include the "Debug Users" group.
I am developing an application to install a large number of data files from multiple DVDs. The application will prompt the user to insert the next disk; however, Windows will automatically try to open that disk, either in an Explorer window or by asking the user what to do with the new disk. How can I intercept and cancel AutoPlay messages from my application?

There are two approaches that I know of. The first and simplest is to register the special Windows message "QueryCancelAutoPlay" and simply return 1 when the message is handled. This only works for the current window,
and not a background application. The second approach requires inserting an object that implements the `IQueryCancelAutoPlay` COM interface into the Running Object Table.
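A minimal sketch of the first approach in a WinForms window (the class name is mine; `RegisterWindowMessage` and the `WndProc` override are the standard Win32/WinForms mechanism). Note it only suppresses AutoPlay while this window is in the foreground:

```
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class InstallerForm : Form
{
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    private static extern uint RegisterWindowMessage(string lpString);

    // The shell broadcasts this registered message to the foreground
    // window before it starts AutoPlay for a newly inserted disc.
    private static readonly uint queryCancelAutoPlay =
        RegisterWindowMessage("QueryCancelAutoPlay");

    protected override void WndProc(ref Message m)
    {
        if ((uint)m.Msg == queryCancelAutoPlay)
        {
            // A non-zero result tells the shell to cancel AutoPlay.
            m.Result = (IntPtr)1;
            return;
        }
        base.WndProc(ref m);
    }
}
```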
How can I present a control to the user that allows him/her to select a directory? There don't seem to be any native .NET controls that do this.

The [FolderBrowserDialog class](http://msdn.microsoft.com/en-us/library/system.windows.forms.folderbrowserdialog.aspx) is the best option.
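A typical usage sketch (the property and method names are the real `System.Windows.Forms` API; the description text is arbitrary):

```
using System.Windows.Forms;

// Show the dialog and read back the chosen path.
using (var dialog = new FolderBrowserDialog())
{
    dialog.Description = "Select a directory";
    if (dialog.ShowDialog() == DialogResult.OK)
    {
        string selectedPath = dialog.SelectedPath;
        // use selectedPath...
    }
}
```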
What's the general consensus on supporting Windows 2000 for software distribution? Are people supporting Windows XP SP2+ for new software development, or is this still too restrictive?

"OK" is a subjective judgement. You'll need to take a look at your client base and see what they're using. Having said that, I dropped support for Win2K over a year ago with no negative impact.
I'm working on a web service at the moment, and there is the potential that the returned results could be quite large (> 5 MB). It's perfectly valid for this set of data to be this large, and the web service can be called either sync or async, but I'm wondering what people's thoughts are on the following:

1. If the connection is lost, the entire result set will have to be regenerated and sent again. Is there any way I can do any sort of "resume" if the connection is lost or reset?
2. Is sending a result set this large even appropriate? Would it be better
to implement some sort of "paging" where the result set is generated and stored on the server and the client can then download chunks of the result set in smaller amounts and re-assemble the set at their end?

I have seen all three approaches: **paged**, **store and retrieve**, and **massive push**. I think the solution to your problem depends to some extent on why your result set is so large and how it is generated. Do your results grow over time, are they calculated all at once and then pushed, do you want to stream them back as soon as you have them?

**Paging Approach**
-------------------

In
my experience, using a paging approach is appropriate when the client needs quick access to reasonably sized chunks of the result set, similar to pages in search results. Considerations here are the overall chattiness of your protocol, caching of the entire result set between client page requests, and/or the processing time it takes to generate a page of results.

**Store and retrieve**
----------------------

Store and retrieve is useful when the results are not random access and the result set grows in size as the query is processed. Issues to consider here are complexity for clients and whether you can provide the user with partial
results, or if you need to calculate all results before returning anything to the client (think sorting of results from distributed search engines).

**Massive Push**
----------------

The massive push approach is almost certainly flawed. Even if the client needs all of the information and it needs to be pushed in a monolithic result set, I would recommend taking the approach of `WS-ReliableMessaging` (either directly or through your own simplified version) and chunking your results. By doing this you:

1. ensure that the pieces reach the client
2. can discard the chunk as soon as you get a receipt from the client
3. can reduce the
possible issues with memory consumption from having to retain 5 MB of XML, DOM, or whatever in memory (assuming that you aren't processing the results in a streaming manner) on the server and client sides.

Like others have said, though, don't do anything until you know your result set size, how it is generated, and overall performance to be actual issues.
This [question and answer](https://stackoverflow.com/questions/11782/file-uploads-via-web-services) shows how to send a file as a byte array through an XML web service. How much overhead is generated by using this method for file transfer? I assume the data looks something like this:

```
<?xml version="1.0" encoding="UTF-8" ?>
<bytes>
    <byte>16</byte>
    <byte>28</byte>
    <byte>127</byte>
    ...
</bytes>
```

If this format is correct, the bytes must first be converted to UTF-8 characters. Each of these characters allocates 8 bytes. Are the bytes stored in base 10, hex, or binary characters? How much larger does the file appear as it is being
sent due to the XML data and character encoding? Is compression built into web services?

Typically a byte array is sent as a `base64`-encoded string, not as individual bytes in tags: <http://en.wikipedia.org/wiki/Base64>

The `base64`-encoded version is about **137%** of the size of the original content.
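The 4-characters-per-3-bytes expansion is easy to check (a sketch; the 3,000-byte payload is arbitrary):

```
using System;

class Base64Overhead
{
    static void Main()
    {
        // Hypothetical 3,000-byte payload, just for illustration.
        byte[] payload = new byte[3000];
        string encoded = Convert.ToBase64String(payload);

        // Base64 emits 4 characters for every 3 input bytes (plus padding),
        // so the raw expansion is 4/3, i.e. about 133% of the original size,
        // before any XML element overhead is added.
        Console.WriteLine(encoded.Length);                          // 4000 chars
        Console.WriteLine((double)encoded.Length / payload.Length); // ~1.33
    }
}
```

The commonly quoted 137% figure assumes MIME-style line breaks every 76 characters on top of the raw 133% expansion.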
My website will be using only OpenID for authentication. I'd like to pull user details down via Attribute Exchange, but Attribute Exchange seems to have caused a lot of grief for Stack Overflow. What is the current state of play in the industry? Does any OpenID provider do a decent job of Attribute Exchange? Should I just steer away from OpenID Attribute Exchange altogether? How can I deal with inconsistent support for functionality?

Here on Stack Overflow, we're just using the [Simple Registration](http://openid.net/specs/openid-simple-registration-extension-1_0.html) extension for now, as there were some issues with Attribute Exchange (AX). The biggest was OpenID Providers (OPs) not agreeing on which [attribute
type URLs](http://www.axschema.org/types/) to use. The finalized spec for AX says that attribute URLs should come from <http://www.axschema.org/>. However, some OPs, especially our favorite <http://myopenid.com>, recognize [other](http://openid.net/pipermail/general/2008-February/004158.html) [URLs](http://rakuto.blogspot.com/2008/03/ruby-fetch-some-attributes-from.html). I wasn't going to keep a list of which ones were naughty and which were nice!

The other problem was that most of the OPs I tried just didn't return information when queried with AX - I might have been doing something wrong (happens quite frequently :) ), but I had made the relevant details public on my profiles, and we're using the latest, most excellent .NET library, [DotNetOpenId](http://code.google.com/p/dotnetopenid/).

We'll definitely revisit AX here on Stack
Overflow when we get a little more time, as a seamless user experience is very important to us!
How would you recommend handling RSS feeds in ASP.NET MVC? Using a third-party library? Using the RSS stuff in the BCL? Just making an RSS view that renders the XML? Or something completely different?

Here is what I recommend:

1. Create a class called RssResult that inherits from the abstract base class ActionResult.
2. Override the ExecuteResult method.
3. ExecuteResult has the ControllerContext passed to it by the caller, and with this you can get the data and content type.
4. Once you change the content type to RSS, you will want to serialize the data to RSS (using your own code or another library) and
write to the response.

5. Create an action on a controller that you want to return RSS and set the return type as RssResult. Grab the data from your model based on what you want to return.
6. Then any request to this action will receive the RSS of whatever data you choose.

That is probably the quickest and most reusable way of returning RSS as a response to a request in ASP.NET MVC.
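The steps above can be sketched like this, using the BCL's `SyndicationFeed`/`Rss20FeedFormatter` types (real .NET 3.5 classes from `System.ServiceModel.Syndication`); the `RssResult` shape here is one possible implementation, not the only one:

```
using System.ServiceModel.Syndication; // reference System.ServiceModel.Web
using System.Web.Mvc;
using System.Xml;

public class RssResult : ActionResult
{
    private readonly SyndicationFeed feed;

    public RssResult(SyndicationFeed feed)
    {
        this.feed = feed;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        // Set the RSS content type, then serialize the feed to the response.
        context.HttpContext.Response.ContentType = "application/rss+xml";
        using (var writer = XmlWriter.Create(context.HttpContext.Response.Output))
        {
            new Rss20FeedFormatter(feed).WriteTo(writer);
        }
    }
}
```

A controller action then just builds a `SyndicationFeed` from its model and does `return new RssResult(feed);`.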
If I inherit from a base class and want to pass something from the constructor of the inherited class to the constructor of the base class, how do I do that? For example, if I inherit from the Exception class, I want to do something like this:

```
class MyExceptionClass : Exception
{
    public MyExceptionClass(string message, string extraInfo)
    {
        //This is where it's all falling apart
        base(message);
    }
}
```

Basically what I want is to be able to
pass the string message to the base Exception class.

Modify your constructor to the following so that it calls the base class constructor properly:

```
public class MyExceptionClass : Exception
{
    public MyExceptionClass(string message, string extrainfo)
        : base(message)
    {
        //other stuff here
    }
}
```

Note that a constructor is not something that you can call at any time within a method. That's the reason you're getting errors in your call in the constructor body.
I'm sure many readers on SO have used [Lutz Roeder](https://www.lutzroeder.com/dotnet/)'s [.NET Reflector](http://www.reflector.net/) to decompile their .NET code. I was amazed just how accurately our source code could be reconstructed from our compiled assemblies. I'd be interested in hearing how many of you use obfuscation, and for what sort of products. I'm sure this is a much more important issue for, say, a .NET application that you offer for download over the internet, as opposed to something that is built bespoke for a particular client.

I wouldn't worry about it too much. I'd rather focus on putting out an awesome product, getting
a good user base, and treating your customers right than worry about the minimal percentage of users concerned with stealing your code or looking at the source.
Our team is creating a new recruitment workflow system to replace an old one. I have been tasked with migrating the old data into the new schema. I have decided to do this by creating a small Windows Forms project, as the schemas are radically different and straight T-SQL scripts are not an adequate solution.

The main sealed class `ImportController` that does the work declares the following delegate event:

```
public delegate void ImportProgressEventHandler(object sender, ImportProgressEventArgs e);
public static event ImportProgressEventHandler importProgressEvent;
```

The main window starts a static method in that class using a new thread:

```
Thread dataProcessingThread = new Thread(new ParameterizedThreadStart(ImportController.ImportData));
dataProcessingThread.Name = "Data Importer: Data Processing Thread";
dataProcessingThread.Start(settings);
```

The `ImportProgressEventArgs` carries a string message, a max int value for the progress bar, and a current progress int value. The Windows Form subscribes to the event:

```
ImportController.importProgressEvent += new ImportController.ImportProgressEventHandler(ImportController_importProgressEvent);
```

And responds to the event in this manner, using its own delegate:

```
private delegate void TaskCompletedUIDelegate(string completedTask, int currentProgress, int progressMax);

private void ImportController_importProgressEvent(object sender, ImportProgressEventArgs e)
{
    this.Invoke(new TaskCompletedUIDelegate(this.DisplayCompletedTask), e.CompletedTask, e.CurrentProgress, e.ProgressMax);
}
```

Finally, the progress bar and listbox are updated:

```
private void DisplayCompletedTask(string completedTask, int currentProgress, int progressMax)
{
    string[] items = completedTask.Split(new string[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries);
    foreach (string item in items)
    {
        this.lstTasks.Items.Add(item);
} if (currentProgress >= 0 && progressMax > 0 && currentProgress <= progressMax) { this.ImportProgressBar.Maximum = progressMax; this.ImportProgressBar.Value = currentProgress; } } ``` The thing is the ListBox seems to update
[ -0.13314270973205566, -0.09176122397184372, 0.6312204599380493, -0.326116144657135, 0.49489298462867737, -0.2832884192466736, 0.3623790442943573, -0.2793034613132477, -0.31989437341690063, -0.35801181197166443, -0.26044967770576477, 0.6879697442054749, -0.2693006992340088, 0.28902566432952...
very quickly, but the progress bar never moves until the batch is almost complete anyway ??? what gives ? @John Thanks for the links. @Will There's no gain from threadpooling as I know it will only ever spawn one thread. The use of a thread is purely to have a responsive UI while SQL Server is being pounded with reads and writes. It's certainly not a short lived thread. Regarding sledge-hammers you're right. But, as it turns out my problem was between screen and chair after all. I seem to have an unusal batch of data that has many many many more foreign key
[ 0.4398603141307831, 0.2208951860666275, 0.3619440793991089, 0.2644447982311249, -0.14984162151813507, -0.023440659046173096, 0.4602016806602478, 0.2714982032775879, -0.4545852541923523, -0.5469982028007507, 0.3681179881095886, 0.4747859835624695, 0.13223589956760406, 0.30054908990859985, ...
records than the other batches and just happens to get selected early in the process meaning the currentProgress doesn't get ++'d for a good 10 seconds. @All Thanks for all your input, it got me thinking, which got me looking elsewhere in the code, which led to my ahaa moment of humility where I prove yet again the error is usually human :)
[ 0.3696689307689667, 0.28396666049957275, 0.13680487871170044, 0.21311983466148376, 0.1202475056052208, -0.01593584567308426, 0.6722079515457153, 0.05107840523123741, -0.1914631724357605, -0.39748942852020264, 0.209468811750412, 0.4255838096141815, -0.04423923417925835, 0.2135452926158905, ...
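For readers outside .NET, the shape of the pattern above (a worker thread publishes progress events; the UI thread marshals them onto itself) can be sketched language-agnostically. This is a hypothetical Python analogue, with a thread-safe queue standing in for `Control.Invoke`; all names are illustrative, not from the original code:

```python
import queue
import threading

# The worker never touches the UI directly; it publishes
# (message, current, maximum) tuples and the UI thread drains the queue.
progress_events = queue.Queue()

def import_batches(batch_sizes):
    total = sum(batch_sizes)
    done = 0
    for i, size in enumerate(batch_sizes):
        done += size  # pretend this batch was just imported
        progress_events.put((f"Imported batch {i}", done, total))

worker = threading.Thread(target=import_batches, args=([10, 250, 10],))
worker.start()
worker.join()

# In a real UI this drain would happen on the UI thread (e.g. on a timer).
received = []
while not progress_events.empty():
    received.append(progress_events.get())
```

Note how one oversized batch (250 of 270 units here) produces exactly the symptom described: the bar sits still for most of the run, then jumps.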
When I run the following statement:

```
@filtered = map {s/&nbsp;//g} @outdata;
```

it returns an empty list instead of the filtered list that I expected. What I am trying to do is remove every occurrence of `&nbsp;` from an array of strings (which is an XML file). Obviously, I am not understanding something. Can anyone tell me what the correct way to do this might be, and why this isn't working for me as is?

Try this:

```
@filtered = map {s/&nbsp;//g; $_} @outdata;
```

The problem is that the `s` operator in Perl modifies `$_` but actually returns the number of changes it made. So, the extra `$_` at the end causes Perl to return the modified string for each element of `@outdata`.
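For comparison (a sketch, not part of the original exchange): in Python the substitution returns the new string rather than a count, so the expression form works directly without the extra step Perl needs:

```python
outdata = ["a&nbsp;b", "c&nbsp;&nbsp;d", "plain"]

# str.replace returns the modified string (strings are immutable, so it
# cannot mutate in place), so the comprehension yields filtered values directly.
filtered = [line.replace("&nbsp;", "") for line in outdata]
```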
We were having a problem with our build server not checking out modifications from source control despite recognizing that there had been changes. It was traced to the control folder (not sure what its real name is); the existing working builds were using `_svn`. Clearing the working folder forced a new complete checkout, and I noticed that now the control folder is `.svn`. It looks like originally our integration routines were checking out code using `_svn`, but now it is using `.svn`.

*The svn.exe being used during integration is from VisualSVN Server; can I set this up to use `_svn` again?*

How the original working copies came to use `_svn` I don't know! We only ever used VisualSVN Server and haven't changed this. We had set up TortoiseSVN to use `_svn` following the recommendation that this works better for Visual Studio, and have also installed TortoiseSVN on the build server in case it is ever needed. Could this be the cause?

*Also, is this really necessary? As MSBuild is Microsoft's, is it recommended as it is for Visual Studio?*

The business about `_svn` vs. `.svn` was an issue with Visual Studio web projects only (and I'm fairly sure it was fixed in VS2005 anyway); it's not a general "`_svn` works better with VS" thing. It's also only a working-copy issue, not a repository issue - i.e. it doesn't matter if some users of SVN are using clients set up to do `_svn` and some are using `.svn` - the repository won't know or care (unless somehow you end up with a load of these `_svn`/`.svn` folders actually checked into the repository, which would be confusing in the extreme).

Unless you have absolute concrete evidence that `.svn` is causing you problems, I would stick with that wherever you can.
I installed VS SP1 and played around with Entity Framework. I created a schema from an existing database and tried some basic operations. Most of it went well, except the database schema update. I changed the database in every basic way:

* added a new table
* deleted a table
* added a new column to an existing table
* deleted a column from an existing table
* changed the type of an existing column

The first three went well, but the type change and the column deletion did not follow the database changes. Is there any way to make it work from the designer? Or is it not supported at the moment? I didn't find any related material yet, but I'm still searching.

I would guess that those don't happen because they would break the build for existing code, but that's just a guess on my part. Here's my logic:

First, EF is supposed to be more than 1:1 table mapping, so it's quite possible that just because you are deleting a column from table A doesn't mean that, for that entity, there shouldn't be a property Description. You might just map that property to another table.

Second, changing a type could just break builds. That's the only rationale there.
I have a database that contains a date, and we are using the MaskedEditExtender (MEE) and MaskedEditValidator to make sure the dates are appropriate. However, we want the admins to be able to go in and change the data (specifically the date) if necessary. How can I have the MEE field pre-populate with the database value when the data is shown on the page? I've tried to use 'bind' in the 'InitialValue' property, but it doesn't populate the textbox. Thanks.

We found out this morning why our code was mishandling the extender. Since the db was handling the date as a date/time, it was returning the date in the format 99/99/9999 99:99:99, but we had the extender mask looking for the format 99/99/9999 99:99.

```
Mask="99/99/9999 99:99:99"
```

The above code fixed the problem. Thanks to everyone for their help.
Hi, has anyone worked with the N2 Content Management System (<http://www.codeplex.com/n2>)? If so, how does it perform under heavy load? It seems pretty simple and easy to use.

Adrian

Maybe try this question at <http://www.codeplex.com/n2/Thread/List.aspx>. They might be able to tell you about performance limitations or bottlenecks.
I've got a Repeater that lists all the `web.sitemap` child pages on an ASP.NET page. Its `DataSource` is a `SiteMapNodeCollection`. But I don't want my registration form page to show up there.

```
Dim Children As SiteMapNodeCollection = SiteMap.CurrentNode.ChildNodes

'remove registration page from collection
For Each n As SiteMapNode In SiteMap.CurrentNode.ChildNodes
    If n.Url = "/Registration.aspx" Then
        Children.Remove(n)
    End If
Next

RepeaterSubordinatePages.DataSource = Children
```

The `SiteMapNodeCollection.Remove()` method throws a

> NotSupportedException: "Collection is read-only".

How can I remove the node from the collection before DataBinding the Repeater?

You shouldn't need CType:

```
Dim children = _
    From n In SiteMap.CurrentNode.ChildNodes.Cast(Of SiteMapNode)() _
    Where n.Url <> "/Registration.aspx" _
    Select n
```
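The underlying fix generalizes beyond VB and LINQ: when a collection is read-only, filter into a new sequence instead of mutating it in place. A hypothetical Python sketch of the same idea (names are illustrative only):

```python
# A read-only source collection (a tuple here, playing the role of the
# read-only SiteMapNodeCollection).
children = ("/Default.aspx", "/Registration.aspx", "/About.aspx")

# Instead of Remove(), build a new filtered sequence to bind against.
filtered = [url for url in children if url != "/Registration.aspx"]
```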
How would you attach a property-changed callback to a property that is inherited? Like so:

```
class A {
    DependencyProperty prop;
}

class B : A {
    //...
    prop.AddListener(PropertyChangeCallback);
}
```

(edited to remove the recommendation to use DependencyPropertyDescriptor, which is not available in Silverlight)

[PropertyDescriptor AddValueChanged Alternative](http://agsmith.wordpress.com/2008/04/07/propertydescriptor-addvaluechanged-alternative/)
What tools are useful for automating clicking through a Windows Forms application? Is this even useful? I see the testers at my company doing this a great deal, and it seems like a waste of time.

Check out <https://github.com/TestStack/White> and <http://nunitforms.sourceforge.net/>. We've used the White project with success.
Is there a quick one-liner to call datepart in SQL Server and get back the name of the day instead of just the number?

```
select datepart(dw, getdate());
```

This will return 1-7, with Sunday being 1. I would like 'Sunday' instead of 1.

```
select datename(weekday, getdate());
```
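Most standard libraries expose the same weekday-name lookup; e.g. a Python sketch, purely illustrative and not part of the original answer:

```python
import datetime

# 2024-01-07 is a known Sunday, used here only for illustration.
d = datetime.date(2024, 1, 7)
day_name = d.strftime("%A")  # locale-dependent; "Sunday" under the default C locale
```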
Let's say we have a simple function defined in a pseudo language:

```
List<Numbers> SortNumbers(List<Numbers> unsorted, bool ascending);
```

We pass in an unsorted list of numbers and a boolean specifying ascending or descending sort order. In return, we get a sorted list of numbers. In my experience, some people are better at capturing boundary conditions than others. The question is, "How do you know when you are 'done' capturing test cases?" We can start listing cases now, and some clever person will undoubtedly think of 'one more' case that isn't covered by any of the previous ones.

Don't waste too much time trying to think of *every* boundary condition. Your tests won't be able to catch *every* bug the first time around. The idea is to have tests that are *pretty good*, and then each time a bug *does* surface, write a new test specifically for that bug so that you never hear from it again.

Another note I want to make is about code coverage tools. In a language like C# or Java where you have many get/set and similar methods, you should **not** be shooting for 100% coverage. That means you are wasting too much time writing tests for trivial code. You *only* want 100% coverage on your complex business logic. If your full codebase is closer to 70-80% coverage, you are doing a good job.

If your code coverage tool allows multiple coverage metrics, the best one is 'block coverage', which measures coverage of 'basic blocks'. Other types are class and method coverage (which don't give you as much information) and line coverage (which is too fine-grained).
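To make the "tests that are pretty good, plus one new test per bug" advice concrete, here is a hypothetical Python sketch of a boundary-focused case table for the `SortNumbers` signature above (the implementation is a stand-in, not anyone's real code):

```python
def sort_numbers(unsorted, ascending):
    # Hypothetical stand-in for the pseudo-language SortNumbers.
    return sorted(unsorted, reverse=not ascending)

# Boundary-style cases: empty input, single element, duplicates,
# already sorted, descending order, negatives. When a bug surfaces
# later, it gets its own row here so it never comes back.
cases = [
    ([], True, []),
    ([5], True, [5]),
    ([2, 1, 2], True, [1, 2, 2]),
    ([1, 2, 3], True, [1, 2, 3]),
    ([1, 2, 3], False, [3, 2, 1]),
    ([-1, -3, 2], True, [-3, -1, 2]),
]
results = [sort_numbers(xs, asc) == expected for xs, asc, expected in cases]
```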
Let's say I have a drive such as **C:\**, and I want to find out if it's shared and what its share name (e.g. **C$**) is.

To find out if it's shared, I can use [NetShareCheck](https://learn.microsoft.com/en-us/windows/desktop/api/Lmshare/nf-lmshare-netsharecheck).

How do I then map the drive to its share name? I thought that [NetShareGetInfo](https://learn.microsoft.com/en-us/windows/desktop/api/Lmshare/nf-lmshare-netsharegetinfo) would do it, but it looks like that takes the share name, not the local drive name, as an input.

If all else fails, you could always use [NetShareEnum](https://learn.microsoft.com/en-us/windows/win32/api/lmshare/nf-lmshare-netshareenum) and call [NetShareGetInfo](https://learn.microsoft.com/windows/desktop/api/lmshare/nf-lmshare-netsharegetinfo) on each.
We're developing a web app to cover all aspects of a printing company, from finances to payroll to job costing. It's important to be able to control who can access which parts of these applications. We don't want a line employee giving himself a raise, etc.

I've heard of the concept of ACLs and ACOs, but haven't found a good example that we could adapt to our project. Does anyone know where I can find good information to work from?

A brief rundown on ACLs, where they should be used, and how they should be structured and implemented for various applications and user levels, can be found here: [LINK](http://c2.com/cgi/wiki?AccessControlList)
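As a starting point, an ACL can be as simple as a map from (role, resource) pairs to permitted actions, checked with deny-by-default semantics. The names below are hypothetical; this is a minimal sketch, not a production design:

```python
# Minimal hypothetical ACL: (role, resource) -> set of permitted actions.
acl = {
    ("admin", "payroll"): {"read", "write"},
    ("line_employee", "payroll"): {"read"},
}

def allowed(role, resource, action):
    # Deny by default: anything not explicitly granted is refused,
    # so a line employee cannot give himself a raise.
    return action in acl.get((role, resource), set())
```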
What's the most efficient way to resize large images in PHP?

I'm currently using the [GD](http://en.wikipedia.org/wiki/GD_Graphics_Library) function imagecopyresampled to take high-resolution images and cleanly resize them down to a size for web viewing (roughly 700 pixels wide by 700 pixels tall). This works great on small (under 2 MB) photos, and the entire resize operation takes less than a second on the server. However, the site will eventually serve photographers who may be uploading images up to 10 MB in size (or images up to 5000x4000 pixels). Doing this kind of resize operation with large images tends to increase the memory usage by a very large margin (larger images can spike the memory usage for the script past 80 MB). Is there any way to make this resize operation more efficient? Should I be using an alternate image library such as [ImageMagick](http://en.wikipedia.org/wiki/ImageMagick)?

Right now, the resize code looks something like this:

```
function makeThumbnail($sourcefile, $endfile, $thumbwidth, $thumbheight, $quality) {
    // Takes the sourcefile (path/to/image.jpg) and makes a thumbnail from it
    // and places it at endfile (path/to/thumb.jpg).

    // Load image and get image size.
    $img = imagecreatefromjpeg($sourcefile);
    $width = imagesx( $img );
    $height = imagesy( $img );

    if ($width > $height) {
        $newwidth = $thumbwidth;
        $divisor = $width / $thumbwidth;
        $newheight = floor( $height / $divisor );
    } else {
        $newheight = $thumbheight;
        $divisor = $height / $thumbheight;
        $newwidth = floor( $width / $divisor );
    }

    // Create a new temporary image.
    $tmpimg = imagecreatetruecolor( $newwidth, $newheight );

    // Copy and resize old image into new image.
    imagecopyresampled( $tmpimg, $img, 0, 0, 0, 0, $newwidth, $newheight, $width, $height );

    // Save thumbnail into a file.
    imagejpeg( $tmpimg, $endfile, $quality );

    // Release the memory.
    imagedestroy($tmpimg);
    imagedestroy($img);
}
```

People say that ImageMagick is much faster. At best, just compare both libraries and measure:

1. Prepare 1000 typical images.
2. Write two scripts -- one for GD, one for ImageMagick.
3. Run both of them a few times.
4. Compare the results (total execution time, CPU and I/O usage, result image quality).

What is best for everyone else may not be the best for you. Also, in my opinion, ImageMagick has a much better API.
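The dimension math in the GD snippet (fit the longer side to the target box, derive the other side from the aspect ratio) is independent of the image library. A hypothetical Python transcription for clarity:

```python
import math

def thumbnail_size(width, height, thumb_w, thumb_h):
    # Mirrors the GD snippet: scale the longer side to the target,
    # then floor the other side to preserve the aspect ratio.
    if width > height:
        divisor = width / thumb_w
        return thumb_w, math.floor(height / divisor)
    divisor = height / thumb_h
    return math.floor(width / divisor), thumb_h
```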
Let's say I woke up today and wanted to create a clone of StackOverflow.com and reap the financial windfall of millions of $0.02 ad clicks. Where do I start?

My understanding of web technologies is:

* HTML is what is ultimately displayed
* CSS is a mechanism for making HTML look pleasing
* ASP.NET lets you add functionality using .NET(?)
* JavaScript does stuff
* AJAX does asynchronous stuff
* ... and the list goes on!

To write a good website, do I just need to buy seven books and read them all? Are Web 2.0 sites really the synergy of all these technologies? Where does someone go to get started down the path to creating professional-looking web sites, and what steps are there along the way?

I think that this series of [Opera Articles](http://www.opera.com/wsc/) will give you a good idea of web standards and the basic concepts of web development.

*2014 update*: the Opera docs were relocated in 2012 to this section of [webplatform.org](http://webplatform.org): <http://docs.webplatform.org/wiki/Main_Page>
Has anyone built a website with IronPython and ASP.NET? What were your experiences, and is the combination ready for prime time?

The current version of ASP.NET integration for IronPython is not very up to date and is more of a proof of concept. I don't think I'd build a production website based on it.

**Edit:** I have a very high level of expectation for how things like this should work, and I might be setting the bar a little high. Maybe you should take what's in "ASP.NET Futures", write a test application for it, and see how it works for you. If you're successful, I'd like to hear about it. Otherwise, I think there should be a newer CTP of this in the next six months. (I'm a developer on IronPython and IronRuby.)

**Edit 2:** Since I originally posted this, a [newer version](http://www.codeplex.com/aspnet/Wiki/View.aspx?title=Dynamic%20Language%20Support) has been released.
In a C++ program, I am trying to #import the TLB of a .NET out-of-proc server. I get errors like:

> z:\server.tlh(111) : error C2146: syntax error : missing ';' before identifier 'GetType'
>
> z:\server.tlh(111) : error C2501: '_TypePtr' : missing storage-class or type specifiers
>
> z:\server.tli(74) : error C2143: syntax error : missing ';' before 'tag::id'
>
> z:\server.tli(74) : error C2433: '_TypePtr' : 'inline' not permitted on data declarations
>
> z:\server.tli(74) : error C2501: '_TypePtr' : missing storage-class or type specifiers
>
> z:\server.tli(74) : fatal error C1004: unexpected end of file found

The TLH looks like:

```
_bstr_t GetToString();
VARIANT_BOOL Equals (const _variant_t & obj);
long GetHashCode();
_TypePtr GetType();
long Open();
```

I am not really interested in having the base .NET object methods like GetType(), Equals(), etc., but GetType() seems to be causing problems. Some Google research indicates I could `#import mscorlib.tlb` (or put it in the path), but I can't get that to compile either. Any tips?

Added no_namespace and raw_interfaces_only to my #import:

```
#import "server.tlb" no_namespace named_guids
```

Also, using TLBEXP.EXE instead of REGASM.EXE seems to help with this issue.
This is driving me crazy. I have this one PHP file on a test server at work which does not work... I kept deleting stuff from it till it became:

```
<? print 'Hello'; ?>
```

It outputs:

> Hello

If I create a new file and copy/paste the same script into it, it works! Why does this one file give me the strange characters all the time?

That's the [BOM (Byte Order Mark)](http://en.wikipedia.org/wiki/Byte_Order_Mark) you are seeing. In your editor, there should be a way to force saving without the BOM, which will remove the problem.
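What the editor's "save without BOM" option removes can be shown directly. A small Python sketch (the file contents are hypothetical):

```python
# A UTF-8 BOM is the three bytes EF BB BF at the start of the file;
# PHP sends them to the browser before any output, which appears as
# the "strange characters" before Hello.
BOM = b"\xef\xbb\xbf"
raw = BOM + b"<? print 'Hello'; ?>"  # hypothetical file contents

# Saving "without BOM" is equivalent to stripping that prefix.
cleaned = raw[len(BOM):] if raw.startswith(BOM) else raw
```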
I'm trying to use `jQuery` to format code blocks, specifically to add a `<pre>` tag inside the `<code>` tag:

```
$(document).ready(function() {
    $("code").wrapInner("<pre></pre>");
});
```

Firefox applies the formatting correctly, but IE puts the entire code block on one line. If I add an alert

```
alert($("code").html());
```

I see that IE has inserted some additional text into the pre tag:

```
<PRE jQuery1218834632572="null">
```

If I reload the page, the number following jQuery changes. If I use `wrap()` instead of `wrapInner()`, to wrap the `<pre>` outside the `<code>` tag, both IE and Firefox handle it correctly. But shouldn't `<pre>` work *inside* `<code>` as well? I'd prefer to use `wrapInner()` because I can then add a CSS class to the `<pre>` tag to handle all formatting, but if I use `wrap()`, I have to put page-formatting CSS in the `<pre>` tag and text/font formatting in the `<code>` tag, or Firefox and IE both choke. Not a huge deal, but I'd like to keep it as simple as possible. Has anyone else encountered this? Am I missing something?

That's the difference between [block and inline](http://www.w3.org/TR/html4/struct/global.html#h-7.5.3) elements. [`pre` is a block-level element](http://www.w3.org/TR/html4/sgml/dtd.html#block). It's not legal to put it inside a `code` tag, which [can only contain inline content](http://www.w3.org/TR/html4/struct/text.html#h-9.2.1). Because browsers have to support whatever godawful tag soup they might find on the real web, Firefox tries to do what you mean. IE happens to handle it differently, which is fine by the spec; behavior in that case is unspecified, because it should never happen.

* Could you instead *replace* the `code` element with the `pre`? (Because of the block/inline issue, technically that should only work if the elements are inside [an element with "flow" content](http://www.w3.org/TR/html4/sgml/dtd.html#flow), but the browsers might do what you want anyway.)
* Why is it a `code` element in the first place, if you want `pre`'s behavior?
* You could also give the `code` element `pre`'s whitespace-preserving power with the CSS [`white-space: pre`](http://www.blooberry.com/indexdot/css/properties/text/whitespace.htm), but apparently [IE 6 only honors that in Strict Mode](http://www.quirksmode.org/css/whitespace.html).
I have two separate mercurial repositories. At this point it makes sense that they "become one" because I want to work on the two projects simultaneously. I'd really like the two projects to each be a subdirectory in the new repository. 1. How do I merge the two projects? 2. Is this a good idea, or should I keep them separate? It seems I ought to be able to push from one repository to the other... Maybe this is really straight forward? I was able to combine my two repositories in this way: 1. Use `hg clone first_repository` to clone one of the repositories. 2. Use `hg pull
[ 0.503300130367279, -0.1436997950077057, 0.20215101540088654, 0.11400371789932251, 0.2592220604419708, 0.06827111542224884, -0.37161561846733093, -0.005046603735536337, -0.1875872015953064, -0.7465401291847229, 0.2206006497144699, 0.12305867671966553, -0.13569672405719757, 0.481813937425613...
-f other_repository` to pull the code in from the other repository. The `-f` (force) flag on the pull is the key -- it says to ignore the fact that the two repositories are not from the same source. Here are [the docs](https://www.mercurial-scm.org/wiki/MergingUnrelatedRepositories) for this feature.
I just get the beach ball all day long (it's been doing nothing for hours). It's not taking CPU, not reading from disk, not using the network. I'm using **Java 1.6** on **Mac OS X 10.5.4**. It worked once, now even restarts of the computer won't help. Activity Monitor says it's "(Not Responding)". Only thing that I can do is kill -9 that sucker. When I sample the process I see this: ``` mach_msg_trap 16620 read 831 semaphore_wait_trap
831 ``` An acceptable answer that doesn't fix this would include a URL for a decent free Oracle client for the Mac. Edit: @Mark Harrison sadly this happens every time I start it up, it's not an old connection. I'd like to avoid running Windows on my laptop. I'm giving some plugins for my IDE a whirl, but still no solution for me. @Matthew Schinckel Navicat seems to only have a non-commercial Oracle product...I need a commercial-friendly one (even if it costs money). I get the same problem after there's been an active connection sitting idle for a while. I
solve it by restarting SQL Developer every once in a while. I also have Toad for Oracle running on a VMware XP session, and it works great. If you don't mind the money, try that.
I have a large database of normalized order data that is becoming very slow to query for reporting. Many of the queries that I use in reports join five or six tables and have to examine tens or hundreds of thousands of rows. There are lots of queries and most have been optimized as much as possible to reduce server load and increase speed. I think it's time to start keeping a copy of the data in a denormalized format. Any ideas on an approach? Should I start with a couple of my worst queries and go from there? I know more
about mssql than mysql, but I don't think the number of joins or number of rows you are talking about should cause you too many problems with the correct indexes in place. Have you analyzed the query plan to see if you are missing any? <http://dev.mysql.com/doc/refman/5.0/en/explain.html> That being said, once you are satisfied with your indexes and have exhausted all other avenues, de-normalization might be the right answer. If you just have one or two queries that are problems, a manual approach is probably appropriate, whereas some sort of data warehousing tool might be better for creating a platform to develop data
cubes. Here's a site I found that touches on the subject: <http://www.meansandends.com/mysql-data-warehouse/?link_body%2Fbody=%7Bincl%3AAggregation%7D> Here's a simple technique that you can use to keep denormalizing queries simple, if you're just doing a few at a time (and I'm not replacing your OLTP tables, just creating a new one for reporting purposes). Let's say you have this query in your application: ``` select a.name, b.address from tbla a join tblb b on b.fk_a_id = a.id where a.id=1 ``` You could create a denormalized table and populate it with almost the same query: ``` create table tbl_ab (a_id, a_name, b_address); -- (types elided) ``` Notice the underscores match the table aliases you use ``` insert tbl_ab select a.id,
a.name, b.address from tbla a join tblb b on b.fk_a_id = a.id -- no where clause because you want everything ``` Then to fix your app to use the new denormalized table, switch the dots for underscores. ``` select a_name as name, b_address as address from tbl_ab where a_id = 1; ``` For huge queries this can save a lot of time and makes it clear where the data came from, and you can re-use the queries you already have. Remember, I'm only advocating this as a last resort. I bet there are a few indexes that would help you. And when you de-normalize, don't forget to account
for the extra space on your disks, and figure out when you will run the query to populate the new tables. This should probably be at night, or whenever activity is low. And the data in that table, of course, will never be exactly up to date. [Yet another edit] Don't forget that the new tables you create need to be indexed too! The good part is that you can index to your heart's content and not worry about update lock contention, since aside from your bulk insert the table will only see selects.
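The alias-to-underscore technique above can be sketched end to end. This is a minimal, self-contained illustration using SQLite (an assumption for portability; the answer's examples are MySQL), with the same hypothetical `tbla`/`tblb`/`tbl_ab` names from the snippets:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized source tables, named as in the example above
cur.execute("CREATE TABLE tbla (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE tblb (id INTEGER PRIMARY KEY, fk_a_id INTEGER, address TEXT)")
cur.execute("INSERT INTO tbla VALUES (1, 'Alice'), (2, 'Bob')")
cur.execute("INSERT INTO tblb VALUES (10, 1, '12 Main St'), (11, 2, '34 Oak Ave')")

# Denormalized reporting table: column names are alias + underscore + column
cur.execute("CREATE TABLE tbl_ab (a_id INTEGER, a_name TEXT, b_address TEXT)")

# Bulk-populate with the same join, minus the WHERE clause
cur.execute("""
    INSERT INTO tbl_ab
    SELECT a.id, a.name, b.address
    FROM tbla a JOIN tblb b ON b.fk_a_id = a.id
""")

# The report query now reads one flat table: the dots become underscores
row = cur.execute(
    "SELECT a_name AS name, b_address AS address FROM tbl_ab WHERE a_id = 1"
).fetchone()
print(row)
```

The mechanical rename is the point: converting an existing report query only requires swapping `a.name` for `a_name`, so the denormalized table can be adopted query by query.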
I need to know how much space all the databases inside a SQL Server 2000 instance occupy. I did some research but could not find any script to help me out. Source: <http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1313431,00.html> Works with SQL 2000, 2005 and 2008: ``` USE master; GO IF OBJECT_ID('dbo.sp_SDS', 'P') IS NOT NULL DROP PROCEDURE dbo.sp_SDS; GO CREATE PROCEDURE dbo.sp_SDS @TargetDatabase sysname = NULL, -- NULL: all dbs @Level varchar(10) = 'Database', -- or "File" @UpdateUsage bit = 0, -- default no update @Unit char(2) =
'MB' -- Megabytes, Kilobytes or Gigabytes AS /************************************************************************************************** ** ** author: Richard Ding ** date: 4/8/2008 ** usage: list db size AND path w/o SUMmary ** test code: sp_SDS -- default behavior ** sp_SDS 'maAster' ** sp_SDS NULL, NULL, 0 ** sp_SDS NULL, 'file', 1, 'GB' **
sp_SDS 'Test_snapshot', 'Database', 1 ** sp_SDS 'Test', 'File', 0, 'kb' ** sp_SDS 'pfaids', 'Database', 0, 'gb' ** sp_SDS 'tempdb', NULL, 1, 'kb' ** **************************************************************************************************/ SET NOCOUNT ON; IF @TargetDatabase IS NOT NULL AND DB_ID(@TargetDatabase) IS NULL BEGIN RAISERROR(15010, -1, -1, @TargetDatabase); RETURN (-1) END IF OBJECT_ID('tempdb.dbo.##Tbl_CombinedInfo', 'U') IS NOT NULL
DROP TABLE dbo.##Tbl_CombinedInfo; IF OBJECT_ID('tempdb.dbo.##Tbl_DbFileStats', 'U') IS NOT NULL DROP TABLE dbo.##Tbl_DbFileStats; IF OBJECT_ID('tempdb.dbo.##Tbl_ValidDbs', 'U') IS NOT NULL DROP TABLE dbo.##Tbl_ValidDbs; IF OBJECT_ID('tempdb.dbo.##Tbl_Logs', 'U') IS NOT NULL DROP TABLE dbo.##Tbl_Logs; CREATE TABLE dbo.##Tbl_CombinedInfo ( DatabaseName sysname NULL, [type] VARCHAR(10) NULL, LogicalName sysname NULL, T dec(10, 2) NULL, U dec(10, 2) NULL, [U(%)] dec(5, 2) NULL, F dec(10, 2) NULL, [F(%)] dec(5, 2) NULL, PhysicalName sysname NULL ); CREATE TABLE dbo.##Tbl_DbFileStats ( Id int identity, DatabaseName sysname NULL, FileId int NULL, FileGroup int NULL,
TotalExtents bigint NULL, UsedExtents bigint NULL, Name sysname NULL, FileName varchar(255) NULL ); CREATE TABLE dbo.##Tbl_ValidDbs ( Id int identity, Dbname sysname NULL ); CREATE TABLE dbo.##Tbl_Logs ( DatabaseName sysname NULL, LogSize dec (10, 2) NULL, LogSpaceUsedPercent dec (5, 2) NULL, Status int NULL ); DECLARE @Ver varchar(10), @DatabaseName sysname, @Ident_last int, @String varchar(2000), @BaseString varchar(2000); SELECT
@DatabaseName = '', @Ident_last = 0, @String = '', @Ver = CASE WHEN @@VERSION LIKE '%9.0%' THEN 'SQL 2005' WHEN @@VERSION LIKE '%8.0%' THEN 'SQL 2000' WHEN @@VERSION LIKE '%10.0%' THEN 'SQL 2008'
END; SELECT @BaseString = ' SELECT DB_NAME(), ' + CASE WHEN @Ver = 'SQL 2000' THEN 'CASE WHEN status & 0x40 = 0x40 THEN ''Log'' ELSE ''Data'' END' ELSE ' CASE type WHEN 0 THEN ''Data'' WHEN 1 THEN ''Log'' WHEN 4 THEN ''Full-text'' ELSE ''reserved'' END' END + ', name, ' + CASE WHEN @Ver = 'SQL 2000' THEN 'filename' ELSE 'physical_name' END + ', size*8.0/1024.0 FROM ' + CASE WHEN @Ver = 'SQL 2000' THEN 'sysfiles' ELSE 'sys.database_files' END + ' WHERE ' + CASE WHEN @Ver = 'SQL 2000' THEN
' HAS_DBACCESS(DB_NAME()) = 1' ELSE 'state_desc = ''ONLINE''' END + ''; SELECT @String = 'INSERT INTO dbo.##Tbl_ValidDbs SELECT name FROM ' + CASE WHEN @Ver = 'SQL 2000' THEN 'master.dbo.sysdatabases' WHEN @Ver IN ('SQL 2005', 'SQL 2008') THEN 'master.sys.databases' END + ' WHERE HAS_DBACCESS(name) =
1 ORDER BY name ASC'; EXEC (@String); INSERT INTO dbo.##Tbl_Logs EXEC ('DBCC SQLPERF (LOGSPACE) WITH NO_INFOMSGS'); -- For data part IF @TargetDatabase IS NOT NULL BEGIN SELECT @DatabaseName = @TargetDatabase; IF @UpdateUsage <> 0 AND DATABASEPROPERTYEX (@DatabaseName,'Status') = 'ONLINE' AND DATABASEPROPERTYEX (@DatabaseName, 'Updateability') <> 'READ_ONLY' BEGIN SELECT @String = 'USE [' + @DatabaseName + '] DBCC UPDATEUSAGE (0)'; PRINT '*** ' + @String + ' *** ';
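The core arithmetic in the script is `size * 8.0 / 1024.0`: `sysfiles`/`sys.database_files` records each file's size in 8 KB pages, so pages times page size gives bytes. The same idea can be sketched in miniature against SQLite (an assumption for the sake of a runnable example, not part of the original script), where `PRAGMA page_count` and `PRAGMA page_size` play the role of the catalog columns:

```python
import os
import sqlite3
import tempfile

# Create a small on-disk database so there is a real file to measure
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
conn.commit()

# Pages-times-page-size, analogous to size * 8.0 / 1024.0 over sysfiles
pages = conn.execute("PRAGMA page_count").fetchone()[0]
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
size_kb = pages * page_size / 1024.0
print(f"{pages} pages x {page_size} bytes = {size_kb:.1f} KB")
```

For a committed, non-WAL SQLite database this product matches the file size on disk, just as summing page counts over every file in every database is what lets the stored procedure report per-file and per-database totals without touching the data itself.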