```
} // end function find_WeekdaysFromThisTo

public function set_Day($newday) {
    if (is_int($newday) && $newday > 0 && $newday < 32
            && checkdate($this->format('m'), $newday, $this->format('Y')))
        $this->setDate($this->format('Y'), $this->format('m'), $newday);
} // end function set_Day

public function set_Month($newmonth) {
    if (is_int($newmonth) && $newmonth > 0 && $newmonth < 13)
        $this->setDate($this->format('Y'), $newmonth, $this->format('d'));
} // end function set_Month

public function set_Year($newyear) {
    if (is_int($newyear) && $newyear > 0)
        $this->setDate($newyear, $this->format('m'), $this->format('d'));
} // end function set_Year

} // end class Extended_DateTime
```
Does a new `SessionFactory` and `Session` object have to be created for each database? I have a data store for my application data, and a separate data store for my employee security, which is used to validate users. Do I have to create a new `SessionFactory` and `Session` object for calls to the two different databases?

OK, so this doesn't answer your question directly, but it might offer an insight as to why you should create a separate session object for each datastore. [This article](http://www.codeproject.com/KB/architecture/NHibernateBestPractices.aspx) explains how you can implement a thread-safe lazy singleton for each type of `Session` you need, so that you only have one session per datastore, shared across the entire application. So at most you're only ever going to have two session objects. To directly answer your question, however: yes, you will need one session object per database.
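The per-datastore singleton idea from the linked article can be sketched outside of C#. Below is a minimal, hypothetical Python illustration (the class and method names are invented, and a plain dict stands in for a real NHibernate `SessionFactory`), showing how each datastore lazily gets exactly one shared factory:

```python
import threading

class SessionFactoryRegistry:
    """Lazily create exactly one shared factory per datastore (illustrative)."""
    _lock = threading.Lock()
    _factories = {}

    @classmethod
    def for_datastore(cls, name):
        # Double-checked locking so concurrent threads still build
        # at most one factory per datastore.
        factory = cls._factories.get(name)
        if factory is None:
            with cls._lock:
                factory = cls._factories.get(name)
                if factory is None:
                    factory = {"datastore": name}  # stand-in for a real SessionFactory
                    cls._factories[name] = factory
        return factory

# One factory for application data, a separate one for employee security.
app_factory = SessionFactoryRegistry.for_datastore("appdata-db")
sec_factory = SessionFactoryRegistry.for_datastore("security-db")
assert app_factory is not sec_factory  # two datastores, two factories
assert app_factory is SessionFactoryRegistry.for_datastore("appdata-db")  # reused, not rebuilt
```

Sessions would then be opened per unit of work from whichever factory matches the target database.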
I'm not a SQL expert, and I'm reminded of that fact every time I need to do something beyond the basics. I have a test database that is not large in size, but the transaction log definitely is. How do I clear out the transaction log?

Making a log file smaller should really be reserved for scenarios where it encountered unexpected growth which you do not expect to happen again. If the log file will grow to the same size again, not very much is accomplished by shrinking it temporarily. Now, depending on the recovery goals of your database, these are the actions you should take.

First, take a full backup
=========================

Never make any changes to your database without ensuring you can restore it should something go wrong.

If you care about point-in-time recovery
========================================

(And by point-in-time recovery, I mean you care about being able to restore to anything other than a full or differential backup.)

Presumably your database is in `FULL` recovery mode. If not, then make sure it is:

```
ALTER DATABASE testdb SET RECOVERY FULL;
```

Even if you are taking regular full backups, the log file will grow and grow until you perform a *log* backup - this is for your protection, not to needlessly eat away at your disk space. You should be performing these log backups quite frequently, according to your recovery objectives. For example, if you have a business rule that states you can afford to lose no more than 15 minutes of data in the event of a disaster, you should have a job that backs up the log every 15 minutes. Here is a script that will generate timestamped file names based on the current time (but you can also do this with maintenance plans etc.; just don't choose any of the shrink options in maintenance plans, they're awful).

```
DECLARE @path NVARCHAR(255) = N'\\backup_share\log\testdb_'
  + CONVERT(CHAR(8), GETDATE(), 112) + '_'
  + REPLACE(CONVERT(CHAR(8), GETDATE(), 108), ':', '') + '.trn';

BACKUP LOG testdb TO DISK = @path WITH INIT, COMPRESSION;
```

Note that `\\backup_share\` should be on a different machine that represents a different underlying storage device. Backing these up to the same machine (or to a different machine that uses the same underlying disks, or a different VM that's on the same physical host) does not really help you, since if the machine blows up, you've lost your database *and* its backups. Depending on your network infrastructure it may make more sense to back up locally and then transfer the backups to a different location behind the scenes; in either case, you want to get them off the primary database machine as quickly as possible.

Now, once you have regular log backups running, it should be reasonable to shrink the log file to something more reasonable than whatever it's blown up to now. This does *not* mean running `SHRINKFILE` over and over again until the log file is 1 MB - even if you are backing up the log frequently, it still needs to accommodate the sum of any concurrent transactions that can occur. Log file autogrow events are expensive, since SQL Server has to zero out the files (unlike data files when instant file initialization is enabled), and user transactions have to wait while this happens. You want to do this grow-shrink-grow-shrink routine as little as possible, and you certainly don't want to make your users pay for it. Note that you may need to back up the log twice before a shrink is possible (thanks Robert).

So, you need to come up with a practical size for your log file. Nobody here can tell you what that is without knowing a lot more about your system, but if you've been frequently shrinking the log file and it has been growing again, a good watermark is probably 10-50% higher than the largest it's been. Let's say that comes to 200 MB, and you want any subsequent autogrowth events to be 50 MB; then you can adjust the log file size this way:

```
USE [master];
GO
ALTER DATABASE yourdb MODIFY FILE
  (NAME = yourdb_log, SIZE = 200MB, FILEGROWTH = 50MB);
GO
```

Note that if the log file is currently > 200 MB, you may need to run this first:

```
USE yourdb;
GO
DBCC SHRINKFILE(yourdb_log, 200);
GO
```

If you don't care about point-in-time recovery
==============================================

If this is a test database, and you don't care about point-in-time recovery, then you should make sure that your database is in `SIMPLE` recovery mode.

```
ALTER DATABASE testdb SET RECOVERY SIMPLE;
```

Putting the database in `SIMPLE` recovery mode will make sure that SQL Server re-uses portions of the log file (essentially phasing out inactive transactions) instead of growing to keep a record of *all* transactions (like `FULL` recovery does until you back up the log). `CHECKPOINT` events will help control the log and make sure that it doesn't need to grow unless you generate a lot of t-log activity between `CHECKPOINT`s.

Next, you should make absolutely sure that this log growth was truly due to an abnormal event (say, an annual spring cleaning or rebuilding your biggest indexes), and not due to normal, everyday usage. If you shrink the log file to a ridiculously small size, and SQL Server just has to grow it again to accommodate your normal activity, what did you gain? Were you able to make use of that disk space you freed up only temporarily? If you need an immediate fix, then you can run the following:

```
USE yourdb;
GO
CHECKPOINT;
GO
CHECKPOINT; -- run twice to ensure file wrap-around
GO
DBCC SHRINKFILE(yourdb_log, 200); -- unit is set in MBs
GO
```

Otherwise, set an appropriate size and growth rate. As per the example in the point-in-time recovery case, you can use the same code and logic to determine what file size is appropriate and set reasonable autogrowth parameters.

Some things you don't want to do
================================

* **Back up the log with the `TRUNCATE_ONLY` option and then `SHRINKFILE`**. For one, this `TRUNCATE_ONLY` option has been deprecated and is no longer available in current versions of SQL Server. Second, if you are in the `FULL` recovery model, this will destroy your log chain and require a new, full backup.
* **Detach the database, delete the log file, and re-attach**. I can't emphasize enough how dangerous this can be. Your database may not come back up, it may come up as suspect, you may have to revert to a backup (if you have one), etc.
* **Use the "shrink database" option**. `DBCC SHRINKDATABASE` and the maintenance plan option to do the same are bad ideas, especially if you really only need to resolve a log problem. Target the file you want to adjust and adjust it independently, using `DBCC SHRINKFILE` or `ALTER DATABASE ... MODIFY FILE` (examples above).
* **Shrink the log file to 1 MB**. This looks tempting because, hey, SQL Server will let me do it in certain scenarios, and look at all the space it frees! Unless your database is read only (and if it is, you should mark it as such using `ALTER DATABASE`), this will absolutely just lead to many unnecessary growth events, as the log has to accommodate current transactions regardless of the recovery model. What is the point of freeing up that space temporarily, just so SQL Server can take it back slowly and painfully?
* **Create a second log file**. This will provide temporary relief for the drive that has filled up, but this is like trying to fix a punctured lung with a band-aid. You should deal with the problematic log file directly instead of just adding another potential problem. Other than redirecting some transaction log activity to a different drive, a second log file really does nothing for you (unlike a second data file), since only one of the files can ever be used at a time. [Paul Randal also explains why multiple log files can bite you later](http://www.sqlskills.com/blogs/paul/multiple-log-files-and-why-theyre-bad/).

Be proactive
============

Instead of shrinking your log file to some small amount and letting it constantly autogrow at a small rate on its own, set it to some reasonably large size (one that will accommodate the sum of your largest set of concurrent transactions) and set a reasonable autogrow setting as a fallback, so that it doesn't have to grow multiple times to satisfy single transactions and so that it will be relatively rare for it to ever have to grow during normal business operations.

The worst possible settings here are 1 MB growth or 10% growth. Funny enough, these are the defaults for SQL Server (which I've complained about and [asked for changes to no avail](https://web.archive.org/web/20140108204835/http://connect.microsoft.com:80/SQLServer/feedback/details/415343)) - 1 MB for data files, and 10% for log files. The former is much too small in this day and age, and the latter leads to longer and longer events every time (say, your log file is 500 MB, first growth is 50 MB, next growth is 55 MB, next growth is 60.5 MB, etc. - and on slow I/O, believe me, you will really notice this curve).

Further reading
===============

Please don't stop here; while much of the advice you see out there about shrinking log files is inherently bad and even potentially disastrous, there are some people who care more about data integrity than freeing up disk space.

[A blog post I wrote in 2009, when I saw a few "here's how to shrink the log file" posts spring up](https://sqlblog.org/2009/07/27/oh-the-horror-please-stop-telling-people-they-should-shrink-their-log-files).

[A blog post Brent Ozar wrote four years ago, pointing to multiple resources, in response to a SQL Server Magazine article that should *not* have been published](http://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/).

[A blog post by Paul Randal explaining why t-log maintenance is important](http://www.sqlskills.com/blogs/paul/importance-of-proper-transaction-log-size-management/) and [why you shouldn't shrink your data files, either](http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/).

[Mike Walsh has a great answer covering some of these aspects too, including reasons why you might not be able to shrink your log file immediately](https://dba.stackexchange.com/questions/29829/why-does-the-transaction-log-keep-growing-or-run-out-of-space).
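As a back-of-the-envelope check on the "Be proactive" point above, here is a small Python sketch (using the hypothetical 500 MB log file from the example) showing why percentage-based autogrowth produces ever larger, ever slower growth events:

```python
# Simulate SQL Server's default 10% log autogrowth on a 500 MB log file.
size_mb = 500.0
growth_events = []
for _ in range(3):
    grow_by = size_mb * 0.10   # each event is 10% of the *current* size
    growth_events.append(round(grow_by, 1))
    size_mb += grow_by

print(growth_events)  # [50.0, 55.0, 60.5] - the curve described above
```

A fixed growth increment (e.g. 50 MB) keeps every growth event the same, predictable cost instead.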
Slashdot has a little widget that allows you to tweak your comment threshold to filter out down-modded comments. It will be in one place if you scroll to the top of the page, and as you scroll down, at the point where its original home is about to scroll off the page, it switches to fixed position and stays on your screen. (To see an example, click [here](http://news.slashdot.org/news/08/09/10/2257242.shtml).) My question is, how can I accomplish the same effect of having a menu be in one place when scrolled up, and switch to fixed position as the user scrolls down? I know this will involve a combination of CSS and JavaScript. I'm not necessarily looking for a full example of working code, but what steps will my code need to go through?

Okay, I figured it out. I will post it here in case it helps anyone else. This solution uses Prototype, and an internal library that gives me the `registerEvent`, `getElementX` and `getElementY` functions, which do what you would think.

```
var MenuManager = Class.create({
    initialize: function initialize(menuElt) {
        this.menu = $(menuElt);
        this.homePosn = { x: getElementX(this.menu), y: getElementY(this.menu) };
        registerEvent(document, 'scroll', this.handleScroll.bind(this));
        this.handleScroll();
    },
    handleScroll: function handleScroll() {
        this.scrollOffset = document.viewport.getScrollOffsets().top;
        if (this.scrollOffset > this.homePosn.y) {
            this.menu.style.position = 'fixed';
            this.menu.style.top = '0';
            this.menu.style.left = this.homePosn.x + 'px';
        } else {
            this.menu.style.position = 'absolute';
            this.menu.style.top = null;
            this.menu.style.left = null;
        }
    }
});
```

Just call the constructor with the id of your menu, and the class will take it from there.
If you have an `NSMutableArray`, how do you shuffle the elements randomly?

(I have my own answer for this, which is posted below, but I'm new to Cocoa and I'm interested to know if there is a better way.)

---

Update: As noted by @Mukesh, as of iOS 10+ and macOS 10.12+, there is an `-[NSArray shuffledArray]` method that can be used to shuffle. See <https://developer.apple.com/documentation/foundation/nsarray/1640855-shuffledarray?language=objc> for details. (But note that this creates a new array, rather than shuffling the elements in place.)

You don't need the swapObjectAtIndex method. [exchangeObjectAtIndex:withObjectAtIndex:](https://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/Classes/NSMutableArray_Class/Reference/Reference.html#//apple_ref/occ/instm/NSMutableArray/exchangeObjectAtIndex:withObjectAtIndex:) already exists.
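For reference, the classic in-place approach (what repeated `exchangeObjectAtIndex:withObjectAtIndex:` calls amount to) is the Fisher-Yates shuffle. Here is a sketch in Python rather than Objective-C, with tuple swapping standing in for the NSMutableArray exchange method:

```python
import random

def shuffle_in_place(items):
    # Fisher-Yates: walk backwards, swapping each slot with a randomly
    # chosen slot at or before it; every permutation is equally likely.
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)                 # inclusive bounds
        items[i], items[j] = items[j], items[i]  # the "exchange" step

deck = list(range(10))
shuffle_in_place(deck)
assert sorted(deck) == list(range(10))  # same elements, possibly new order
```

The same loop translates directly to Objective-C by replacing the tuple swap with the exchange method.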
I am curious whether it is OK to copy a directory that is under version control and start working on both copies. I know it can be different from one VCS to another, but I intentionally don't specify any VCS, since I am curious about the different cases. I was talking to a coworker recently about doing it in SVN. I think it should be OK, but I am still not 100% sure, since I don't know exactly what SVN stores in the working copy. However, if we talk about the DVCS world, things might be even more unclear, since every working copy is a repository by itself. Being faced with doing this in bzr now, I decided to ask the question.

Later edit: Some people asked why I would want to do that. Here is the whole story: In the case of SVN, it was because, being out of the office, the connection to the SVN server was really slow, so my coworker and I decided to check out the sources only once and make a local copy. That's what we did and it worked OK, but I am still wondering whether it is guaranteed to work, or it just happened to. In the bzr case, I am planning to move the "main" repo to another server, so I was thinking of just copying it there and starting to consider that the main repo. I guess the safest is to make a clone, though.

In Subversion, every .svn folder has whatever is necessary for the containing folder. And since all local paths are stored as relative, you are safe copying whole or partial trees outside the original checkout tree. They will continue to function in their new homes. I frequently copy subtrees from my trunk outside, switch the new copies to other branches/tags and do whatever is necessary on the "cloned" local copies. This way, if, for any reason, I need to go back and do something in the trunk, I have an undisturbed trunk copy in the original location.

Copying source-controlled directories **into** other source-controlled trees, on the other hand, is **unsafe**. If you overwrite any .svn folders, you'll most probably corrupt your target copies.
In Windows, is there an easy way to tell if a folder has a subfile that has changed? I verified, and the last modified date on the folder does not get updated when a subfile changes. Is there a registry entry I can set that will modify this behavior? If it matters, I am using an NTFS volume. I would ultimately like to have this ability from a C++ program. **Scanning an entire directory recursively will not work for me because the folder is much too large.**

**Update: I really need a way to do this without a process running while the change occurs. So installing a file system watcher is not optimal for me.**

**Update 2: The archive bit will also not work because it has the same problem as the last modification date. The file's archive bit will be set, but the folders' will not.**

This [article](http://msdn.microsoft.com/en-us/library/aa365261(VS.85).aspx) should help. Basically, you create one or more notification objects such as:

```
HANDLE dwChangeHandles[2];

dwChangeHandles[0] = FindFirstChangeNotification(
    lpDir,                         // directory to watch
    FALSE,                         // do not watch subtree
    FILE_NOTIFY_CHANGE_FILE_NAME); // watch file name changes

if (dwChangeHandles[0] == INVALID_HANDLE_VALUE)
{
    printf("\n ERROR: FindFirstChangeNotification function failed.\n");
    ExitProcess(GetLastError());
}

// Watch the subtree for directory creation and deletion.
dwChangeHandles[1] = FindFirstChangeNotification(
    lpDrive,                       // directory to watch
    TRUE,                          // watch the subtree
    FILE_NOTIFY_CHANGE_DIR_NAME);  // watch dir name changes

if (dwChangeHandles[1] == INVALID_HANDLE_VALUE)
{
    printf("\n ERROR: FindFirstChangeNotification function failed.\n");
    ExitProcess(GetLastError());
}
```

and then you wait for a notification:

```
while (TRUE)
{
    // Wait for notification.
    printf("\nWaiting for notification...\n");
    DWORD dwWaitStatus = WaitForMultipleObjects(2, dwChangeHandles, FALSE, INFINITE);

    switch (dwWaitStatus)
    {
    case WAIT_OBJECT_0:
        // A file was created, renamed, or deleted in the directory.
        // Restart the notification.
        if (FindNextChangeNotification(dwChangeHandles[0]) == FALSE)
        {
            printf("\n ERROR: FindNextChangeNotification function failed.\n");
            ExitProcess(GetLastError());
        }
        break;

    case WAIT_OBJECT_0 + 1:
        // Restart the notification.
        if (FindNextChangeNotification(dwChangeHandles[1]) == FALSE)
        {
            printf("\n ERROR: FindNextChangeNotification function failed.\n");
            ExitProcess(GetLastError());
        }
        break;

    case WAIT_TIMEOUT:
        // A time-out occurred. This would happen if some value other
        // than INFINITE is used in the Wait call and no changes occur.
        // In a single-threaded environment, you might not want an
        // INFINITE wait.
        printf("\nNo changes in the time-out period.\n");
        break;

    default:
        printf("\n ERROR: Unhandled dwWaitStatus.\n");
        ExitProcess(GetLastError());
        break;
    }
}
```
Consider the class below that represents a Broker:

```
public class Broker
{
    public string Name = string.Empty;
    public int Weight = 0;

    public Broker(string n, int w)
    {
        this.Name = n;
        this.Weight = w;
    }
}
```

I'd like to randomly select a Broker from an array, taking into account their weights. What do you think of the code below?

```
class Program
{
    private static Random _rnd = new Random();

    public static Broker GetBroker(List<Broker> brokers, int totalWeight)
    {
        // totalWeight is the sum of all brokers' weight
        int randomNumber = _rnd.Next(0, totalWeight);

        Broker selectedBroker = null;
        foreach (Broker broker in brokers)
        {
            if (randomNumber <= broker.Weight)
            {
                selectedBroker = broker;
                break;
            }
            randomNumber = randomNumber - broker.Weight;
        }
        return selectedBroker;
    }

    static void Main(string[] args)
    {
        List<Broker> brokers = new List<Broker>();
        brokers.Add(new Broker("A", 10));
        brokers.Add(new Broker("B", 20));
        brokers.Add(new Broker("C", 20));
        brokers.Add(new Broker("D", 10));

        // total the weight
        int totalWeight = 0;
        foreach (Broker broker in brokers)
        {
            totalWeight += broker.Weight;
        }

        while (true)
        {
            Dictionary<string, int> result = new Dictionary<string, int>();
            Broker selectedBroker = null;

            for (int i = 0; i < 1000; i++)
            {
                selectedBroker = GetBroker(brokers, totalWeight);
                if (selectedBroker != null)
                {
                    if (result.ContainsKey(selectedBroker.Name))
                    {
                        result[selectedBroker.Name] = result[selectedBroker.Name] + 1;
[ -0.23988568782806396, -0.39087656140327454, 0.032575320452451706, -0.07509003579616547, 0.20691704750061035, 0.04775913804769516, -0.2144109457731247, -0.6051388382911682, 0.04569654539227486, -0.17878949642181396, -0.3984110653400421, 0.030467359349131584, -0.23940572142601013, 0.33071312...
} else { result.Add(selectedBroker.Name, 1);
[ -0.12316226959228516, -0.3417205512523651, -0.11362344026565552, -0.11221089959144592, 0.2394084632396698, 0.08859442919492722, -0.09197413921356201, -0.4542614221572876, 0.14971154928207397, -0.1991497129201889, -0.313434362411499, 0.36853712797164917, -0.31240254640579224, 0.045403670519...
} } } Console.WriteLine("A\t\t" + result["A"]); Console.WriteLine("B\t\t" + result["B"]); Console.WriteLine("C\t\t" + result["C"]);
[ -0.5893613696098328, -0.05593603476881981, 0.3567519783973694, -0.2909320890903473, 0.6015700101852417, 0.04423525929450989, -0.015994267538189888, 0.09179317206144333, 0.3923235833644867, -0.6076172590255737, -0.684578001499176, 0.7636126279830933, -0.44259113073349, -0.04265877604484558,...
Console.WriteLine("D\t\t" + result["D"]); result.Clear(); Console.WriteLine(); Console.ReadLine(); } } } ``` I'm not so confident. When I run this, Broker A always gets more hits than
[ -0.22561222314834595, 0.06896830350160599, 0.5509864091873169, -0.3399573564529419, 0.14385339617729187, -0.23160842061042786, 0.34988564252853394, -0.15168620645999908, 0.31962981820106506, -0.777522087097168, -0.03828591853380203, 0.6618906259536743, -0.26505717635154724, 0.1576524823904...
Broker D, and they have the same weight. Is there a more accurate algorithm? Thanks! Your algorithm is nearly correct. However, the test should be `<` instead of `<=`: ``` if (randomNumber < broker.Weight) ``` This is because 0 is inclusive in the random number while `totalWeight` is exclusive. In other words, a broker with weight 0 would still have a small chance of being selected – not at all what you want. This accounts for broker A having more hits than broker D. Other than that, your algorithm is fine and in fact the canonical way of solving this problem.
[ 0.09455366432666779, -0.1684974730014801, 0.14710748195648193, 0.2581627368927002, -0.43224236369132996, -0.0635044053196907, -0.13966576755046844, -0.4325927793979645, -0.13982202112674713, -0.843813955783844, 0.2859388291835785, 0.67775958776474, 0.04751307889819145, 0.18297313153743744,...
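For readers outside .NET, the corrected selection loop (with the strict `<` comparison recommended above) can be sketched in Python; the broker names and weights below mirror the example, and the empirical counts are just a sanity check:

```python
import random

def pick_weighted(brokers, rnd=random.random):
    """Select one (name, weight) pair with probability proportional to weight.

    Mirrors the corrected loop above: draw r in [0, total_weight), then walk
    the list subtracting weights until r falls inside a bucket. The strict
    '<' comparison means a zero-weight entry can never be selected.
    """
    total = sum(w for _, w in brokers)
    r = rnd() * total          # r is in [0, total)
    for name, weight in brokers:
        if r < weight:         # strict comparison, as in the corrected answer
            return name
        r -= weight
    return brokers[-1][0]      # guard against float rounding at the top edge

brokers = [("A", 10), ("B", 20), ("C", 20), ("D", 10)]
counts = {name: 0 for name, _ in brokers}
rng = random.Random(42)        # fixed seed so the run is reproducible
for _ in range(60000):
    counts[pick_weighted(brokers, rng.random)] += 1
# With weights 10/20/20/10, A and D should each land near 10000 hits,
# and B and C near 20000.
```

With the biased `<=` version, the first broker in the list systematically steals hits from later ones, which is exactly the A-versus-D asymmetry described in the question.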
What would be the best way to write Objective-C on the Windows platform? Cygwin and gcc? Is there a way I can somehow integrate this into Visual Studio? Along those lines - are there any suggestions as to how to link in and use the Windows SDK for something like this? It's a different beast, but I know I can write assembly and link in the Windows DLLs, giving me access to those calls, but I don't know how to do this without googling and getting piecemeal directions. Is anyone aware of a good online or book resource to do or explain these
kinds of things? Expanding on the two previous answers, if you just want Objective-C but not any of the Cocoa frameworks, then gcc will work on any platform. You can use it through Cygwin or get MinGW. However, if you want the Cocoa frameworks, or at least a reasonable subset of them, then GNUStep and Cocotron are your best bets. Cocotron implements a lot of stuff that GNUStep does not, such as CoreGraphics and CoreData, though I can't vouch for how complete their implementation is on a specific framework. Their aim is to keep Cocotron up to date with the latest version
of OS X so that any viable OS X program can run on Windows. Because GNUStep typically uses the latest version of gcc, they also add in support for Objective-C++ and a lot of the Objective-C 2.0 features. I haven't tested those features with GNUStep, but if you use a sufficiently new version of gcc, you might be able to use them. I was not able to use Objective-C++ with GNUStep a few years ago. However, GNUStep does compile from just about any platform. Cocotron is a very mac-centric project. Although it is probably possible to compile it on other platforms,
it comes with XCode project files, not makefiles, so you can only compile its frameworks out of the box on OS X. It also comes with instructions on compiling Windows apps in XCode, but not on any other platform. Basically, it's probably possible to set up a Windows development environment for Cocotron, but it's not as easy as setting one up for GNUStep, and you'll be on your own, so GNUStep is definitely the way to go if you're developing on Windows as opposed to just for Windows. For what it's worth, Cocotron is licensed under the MIT license, and GNUStep is licensed
under the LGPL.
Just got a request from my boss for an application I'm working on. Basically we're getting an email address set up for an external client to submit Excel files to. What I need is a way to automatically pick up any email sent to this address, so I can take the attachment, process it and save it to a folder. Any information on even where to start would be helpful.

Note: We're using a Lotus Notes server to do this, but a generic way would be more helpful (if possible).

Email -> mailserver -> [something] -> file-on-disk. File on disk is pretty easy to parse, use
[JavaMail](http://java.sun.com/products/javamail/javadocs/javax/mail/package-summary.html). The [something] could be: * listener for smtp connections (overkill)! * [Pop3](http://java.sun.com/products/javamail/javadocs/com/sun/mail/pop3/package-summary.html)/[imap](http://java.sun.com/products/javamail/javadocs/com/sun/mail/imap/package-summary.html) client * [Maildir](http://en.wikipedia.org/wiki/Maildir)/Mailbox
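Once the [something] has dropped a message to disk, extracting the attachment is straightforward in most languages. The JavaMail links above cover Java; as a language-neutral illustration, here is a hedged Python sketch using the standard `email` module (the addresses and filename are invented for the example):

```python
import email
import email.policy
from email.message import EmailMessage

def extract_attachments(raw_message_bytes):
    """Parse an RFC 822 message and return {filename: payload_bytes}
    for every attachment, e.g. the submitted Excel files."""
    msg = email.message_from_bytes(raw_message_bytes, policy=email.policy.default)
    attachments = {}
    for part in msg.iter_attachments():
        name = part.get_filename()
        if name:
            attachments[name] = part.get_payload(decode=True)
    return attachments

# Build a fake on-disk message to demonstrate the round trip:
msg = EmailMessage()
msg["From"] = "client@example.com"        # hypothetical addresses
msg["To"] = "submissions@example.com"
msg["Subject"] = "Weekly numbers"
msg.set_content("File attached.")
msg.add_attachment(b"fake-xls-bytes", maintype="application",
                   subtype="vnd.ms-excel", filename="report.xls")

found = extract_attachments(msg.as_bytes())
# found == {"report.xls": b"fake-xls-bytes"}
```

From there, writing each payload out to the processing folder is a plain file write.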
Is there a way to access file system info via some type of Windows API? If not, what other methods are available to a user-mode developer?

Not very clean, but you can use `DeviceIoControl()`: open the volume as a file, then pass the resulting handle to `DeviceIoControl()` together with a control code. Check MSDN for control codes; there is something like "read journal record".
Is there a difference (performance, overhead) between these two ways of merging data sets?

```
MyTypedDataSet aDataSet = new MyTypedDataSet();
aDataSet.Merge(anotherDataSet);
aDataSet.Merge(yetAnotherDataSet);
```

and

```
MyTypedDataSet aDataSet = anotherDataSet;
aDataSet.Merge(yetAnotherDataSet);
```

Which do you recommend?

While Keith is right, I suppose the example was simply badly chosen. Generally, it is better to initialize to the "right" object from the beginning and *not* construct an intermediate, empty object as in your case. Two reasons:

1. Performance. This should be obvious: object creation costs time, so creating fewer objects is better.
2. *Much* more important however, it better states your **intent**. You do generally *not* intend
to create objects with some state or content. Do it. No need to create a useless (because empty) temporary.
I was reviewing some code that a consultant checked in and noticed they were using SQLCLR. I don't have any experience with it, so I thought I would research what it was about. I noticed that they used

```
Dim cn As New SqlConnection("server=LOCALHOST;integrated security=yes;database=" & sDb)
```

instead of

```
Dim conn As New SqlConnection("context connection=true")
```

I'm wondering what the difference is, since it's localhost on the first?

The context connection uses the user's already established connection to the server. So you inherit things like their database context, connection options, etc. Using localhost will connect to the server using a normal shared memory connection. This can be useful if you
don't want to use the user's connection (i.e. if you want to connect to a different database, or with different options, etc). In most cases you should use the context connection, since it doesn't create a separate connection to the server. Also, be warned that using a separate connection means you are not part of the user's transaction and are subject to normal locking semantics.
I'm trying to consume a SharePoint webservice from ColdFusion via cfinvoke ('cause I don't want to deal with (read: parse) the SOAP response itself). The SOAP response includes a byte-order-mark character (BOM), which produces the following exception in CF:

```
"Cannot perform web service invocation GetList. The fault returned when invoking the web service operation is:
'AxisFault faultCode: {http://www.w3.org/2003/05/soap-envelope}Server.userException faultSubcode:
faultString: org.xml.sax.SAXParseException: Content is not allowed in prolog."
```

The standard for UTF-8 encoding optionally includes the BOM character (<http://unicode.org/faq/utf_bom.html#29>). Microsoft almost universally includes the BOM character with UTF-8 encoded streams. From what I can tell there’s no way to change that in IIS. The XML parser that JRun (ColdFusion) uses by default doesn’t handle the BOM character for UTF-8 encoded XML streams. So, it appears that the way to fix this is to change the XML parser used by JRun (<http://www.bpurcell.org/blog/index.cfm?mode=entry&entry=942>). Adobe says that it doesn't handle the BOM character (see comments from anonymous and halL on May 2nd and 5th): <http://livedocs.adobe.com/coldfusion/8/htmldocs/Tags_g-h_09.html#comments>

I'm going to say that the answer to your question (is it possible?) is no. I don't know that definitively, but the poster who commented just above halL ([in the comments on this page](http://livedocs.adobe.com/coldfusion/8/htmldocs/help.html?content=Tags_g-h_09.html#comments)) gave a work-around for the problem -- so I assume it is possible
to deal with when parsing manually. You say that you're using CFInvoke because you don't want to deal with the soap response yourself. It looks like you don't have any choice.
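Since the parser chokes only on the leading BOM, the usual workaround when parsing manually is to fetch the response as raw bytes, strip the BOM if present, and hand the cleaned content to the XML parser yourself. A minimal sketch of that step in Python (the envelope below is an invented stand-in for SharePoint's real response):

```python
import codecs
import xml.etree.ElementTree as ET

def parse_without_bom(response_bytes):
    """Drop a leading UTF-8 byte-order mark, if present, before XML parsing."""
    if response_bytes.startswith(codecs.BOM_UTF8):
        response_bytes = response_bytes[len(codecs.BOM_UTF8):]
    return ET.fromstring(response_bytes)

# A UTF-8 response with a BOM prepended, as IIS tends to emit:
raw = codecs.BOM_UTF8 + (
    b'<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
    b'<soap:Body/></soap:Envelope>'
)
root = parse_without_bom(raw)
```

Whether a given parser tolerates the BOM varies by implementation (the Axis/SAX stack in the error above clearly does not), so stripping it defensively is cheap insurance.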
A simple question, but could someone provide sample code as to how someone would call a web service from within the JBoss Seam framework, and process the results? I need to be able to integrate with a search platform being provided by a private vendor who is exposing his functionality as a web service. So, I'm just looking for some guidance as to what the code for calling a given web service would look like. (Any sample web service can be chosen as an example.)

There's roughly a gajillion HTTP client libraries (Restlet is quite a bit more than that, but I
already had that code snippet for something else), but they should all provide support for sending GET requests. Here's a rather less featureful snippet that uses [HttpClient](http://hc.apache.org/httpclient-3.x/tutorial.html) from Apache Commons:

```
HttpClient client = new HttpClient();
HttpMethod method = new GetMethod("http://api.search.yahoo.com/WebSearchService/V1/webSearch?appid=restbook&query=HttpClient");
client.executeMethod(method);
```
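The same GET is easy to express in other stacks. For comparison only (not part of the original answer), here is a hedged Python sketch that builds the identical request with the standard library; the request object is constructed but deliberately not sent:

```python
from urllib.parse import urlencode, urlparse, parse_qs
from urllib.request import Request

base = "http://api.search.yahoo.com/WebSearchService/V1/webSearch"
params = {"appid": "restbook", "query": "HttpClient"}
url = base + "?" + urlencode(params)

# urllib.request.urlopen(req) would actually execute the call.
req = Request(url)

# Sanity-check what was built:
parsed = parse_qs(urlparse(req.full_url).query)
# parsed == {"appid": ["restbook"], "query": ["HttpClient"]}
```

The point is the one the answer makes: any reasonable HTTP client library reduces this to a few lines.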
Ran into an “Out of Stack Space” error trying to serialize an ASP.Net AJAX Array object. Here is the scenario with simplified code:

1. `Default.aspx`
2. `MainScript.js`

```
function getObject(){
    return new Array();
}

function function1(obj){
    var s = Sys.Serialization.JavaScriptSerializer.serialize(obj);
    alert(s);
}

function function2(){
    var obj = getObject();
    var s = Sys.Serialization.JavaScriptSerializer.serialize(obj);
    alert(s);
}
```

3. `Content.aspx`
4. `ContentScript.js`

```
function serializeObject(){
    var obj = window.top.getObject();
    window.top.function1(obj); // <-- This works fine

    obj = new Array();
    window.top.function1(obj); // <-- This causes an Out of Stack Space error
}
```

The code for the sample pages and JavaScript
is [here](http://braincells2pixels.wordpress.com/2008/02/14/aspnet-ajax-javascript-serialization-error/). Posting the code for the aspx pages here posed a problem. So please check the above link to see the code for the aspx pages. A web page (default.aspx) with an IFrame on that hosts a content page (content.aspx). Clicking the “Serialize Object” button calls the JavaScript function serializeObject(). The serialization works fine for Array objects created in the top window (outside the frame). However if the array object is created in the IFrame, serialization bombs with an out of stack space error. I stepped through ASP.Net AJAX JS files and what I discovered is, the process goes into an
endless loop trying to figure out the type of the array object. Endless calls to Number.IsInstanceOf, and pretty soon you get an out-of-stack error. Any ideas?

This problem happens because Sys.Serialization.JavaScriptSerializer can't serialize objects from other frames, only objects which were instantiated in the current window (the one that calls the serialize() method). The only workaround known to me is to make a clone of the object from the other frame before calling the serialize() method. An example of the clone() method can be found here (comments in Russian): [link text](http://snowcore.net/clone-javascript-object)
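The clone workaround amounts to rebuilding the object graph with the current window's own constructors before serializing, so a serializer that inspects constructors sees only types it recognizes. The idea is language-agnostic; purely as an illustration, here is a Python sketch where a JSON round-trip plays the role of the clone() helper:

```python
import json

def clone(obj):
    """Rebuild a JSON-serializable object graph from scratch.

    Mirrors the JavaScript workaround: the copy consists solely of
    freshly constructed 'local' objects and shares no structure
    with the original.
    """
    return json.loads(json.dumps(obj))

foreign = {"ids": [1, 2, 3], "label": "from the iframe"}
local = clone(foreign)
# local compares equal to foreign but is an entirely separate graph
```

In the actual browser scenario the clone must of course be done in JavaScript, in the window that will call serialize().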
In follow up to [this question](https://stackoverflow.com/questions/56820/round-in-python-doesnt-seem-to-be-rounding-properly), it appears that some numbers cannot be represented by floating point at all, and instead are approximated. How are floating point numbers stored? Is there a common standard for the different sizes? What kind of gotchas do I need to watch out for if I use floating point? Are they cross-language compatible (ie, what conversions do I need to deal with to send a floating point number from a python program to a C program over TCP/IP)? As mentioned, the [Wikipedia article on IEEE 754](http://en.wikipedia.org/wiki/IEEE_754-1985) does a good job of showing how floating point numbers are stored on most
systems. Now, here are some common gotchas:

* The biggest is that you almost never want to compare two floating point numbers for equality (or inequality). You'll want to use greater than/less than comparisons instead.
* The more operations you do on a floating point number, the more significant rounding errors can become.
* Precision is limited by the size of the fraction, so you may not be able to correctly add numbers that are separated by several orders of magnitude. (For example, you won't be able to add 1E-30 to 1E30.)
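The gotchas above are easy to demonstrate in a few lines of Python; any language using IEEE 754 doubles behaves the same way:

```python
import math

# Gotcha 1: don't compare floats for equality.
print(0.1 + 0.2 == 0.3)              # False: both sides are approximations
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with a tolerance instead

# Gotcha 2: rounding error accumulates over repeated operations.
total = sum([0.1] * 10)
print(total == 1.0)                  # False after ten additions of 0.1

# Gotcha 3: limited precision across orders of magnitude.
print(1e30 + 1e-30 == 1e30)          # True: the tiny addend is lost entirely
```

This is also why tolerance-based comparison helpers (like `math.isclose` here) exist in most standard libraries.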
I am writing some software to identify tracking numbers (in the same way that Google identifies FedEx or UPS numbers when you search for them). Most couriers use a system, such as a "weighted average mod system", which can be used to identify if a number is a valid tracking number. Does anyone know if TNT consignment numbers use such a system, and if so, what it is? I have asked TNT support, and the rep told me they do not... but I'd like to double-check.

OK, so it's three months since you asked, but I stumbled across this as I'm
writing a similar piece of software. As far as we know, TNT uses the [S10](http://en.wikipedia.org/wiki/S10_(UPU_standard)) tracking number system, which means that their numbers will be of the type AA#########AA, with the last two letters corresponding to an ISO/IATA country code. Having said that, TNT uses WW, which we believe must stand for worldwide. This is not quite an answer (at least it's not about checksums or algorithms), but it might be useful. Hope that helps, Willow
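For what it's worth, the S10 standard linked above does define a check digit: the 9th digit is computed over the eight serial digits with weights 8, 6, 4, 2, 3, 5, 9, 7, summed, taken mod 11 and subtracted from 11 (10 maps to 0, 11 maps to 5). A hedged Python sketch based on that Wikipedia description, not on anything confirmed by TNT:

```python
import re

S10_FORMAT = re.compile(r"^([A-Z]{2})(\d{8})(\d)([A-Z]{2})$")
WEIGHTS = (8, 6, 4, 2, 3, 5, 9, 7)

def s10_check_digit(serial8):
    """Check digit over the 8 serial digits, per the S10 (UPU) standard
    as described in the linked Wikipedia article."""
    total = sum(int(d) * w for d, w in zip(serial8, WEIGHTS))
    check = 11 - (total % 11)
    if check == 10:
        return 0
    if check == 11:
        return 5
    return check

def is_valid_s10(number):
    """True if the number matches AA#########XX and the check digit agrees."""
    m = S10_FORMAT.match(number)
    if not m:
        return False
    _, serial8, check, _ = m.groups()
    return int(check) == s10_check_digit(serial8)
```

If TNT really applies no checksum of its own, the format match alone (two letters, nine digits, two letters such as WW) may still be a useful filter.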
I have ASP.NET web pages for which I want to build automated tests (using WatiN & MBUnit). How do I start the ASP.Net Development Server from my code? I do not want to use IIS.

From what I know, you can fire up the dev server from the command prompt with the following path/syntax:

```
C:\Windows\Microsoft.NET\Framework\v2.0.50727\Webdev.WebServer.exe /port:[PORT NUMBER] /path:[PATH TO ROOT]
```

...so I could imagine you could easily use Process.Start() to launch the particulars you need through some code. Naturally you'll want to adjust that version number to whatever is most recent/desired for you.
I'm going to be starting a project soon that requires support for large-ish binary files. I'd like to use Ruby on Rails for the webapp, but I'm concerned with the BLOB support. In my experience with other languages, frameworks, and databases, BLOBs are often overlooked and thus have poor, difficult, and/or buggy functionality. Does RoR support BLOBs adequately? Are there any gotchas that creep up once you're already committed to Rails? BTW: I want to be using PostgreSQL and/or MySQL as the backend database. Obviously, BLOB support in the underlying database is important. For the moment, I want to avoid focusing on
the DB's BLOB capabilities; I'm more interested in how Rails itself reacts. Ideally, Rails should be hiding the details of the database from me, and so I should be able to switch from one to the other. If this is *not* the case (ie: there's some problem with using Rails with a particular DB) then please do mention it. UPDATE: Also, I'm not just talking about ActiveRecord here. I'll need to handle binary files on the HTTP side (file upload effectively). That means getting access to the appropriate HTTP headers and streams via Rails. I've updated the question title and
description to reflect this. +1 for attachment\_fu I use attachment\_fu in one of my apps and MUST store files in the DB (for annoying reasons which are outside the scope of this convo). The (one?) tricky thing dealing w/BLOB's I've found is that you need a separate code path to send the data to the user -- you can't simply in-line a path on the filesystem like you would if it was a plain-Jane file. e.g. if you're storing avatar information, you can't simply do: ``` <%= image_tag @youruser.avatar.path %> ``` you have to write some wrapper logic and use send\_data, e.g. (below is JUST an example w/attachment\_fu,
in practice you'd need to DRY this up)

```
send_data(@youruser.avatar.current_data,
          :type => @youruser.avatar.content_type,
          :filename => @youruser.avatar.filename,
          :disposition => 'inline')
```

Unfortunately, as far as I know attachment\_fu (I don't have the latest version) does not do clever wrapping for you -- you've gotta write it yourself.

P.S. Seeing your question edit - Attachment\_fu handles all that annoying stuff that you mention -- about needing to know file paths and all that crap -- EXCEPT the one little issue when storing in the DB. Give it a try; it's the standard for rails apps. IF you insist on re-inventing the wheel, the source code for attachment\_fu
should document most of the gotchas, too!
I know I can call the GetVersionEx Win32 API function to retrieve the Windows version. In most cases the returned value reflects the version of my Windows, but sometimes that is not so. If a user runs my application under the compatibility layer, then GetVersionEx won't be reporting the real version but the version enforced by the compatibility layer. For example, if I'm running Vista and execute my program in "Windows NT 4" compatibility mode, GetVersionEx won't return version 6.0 but 4.0. Is there a way to bypass this behaviour and get the true Windows version?

The best approach I know is to check if a specific API is exported from some DLL. Each new Windows version adds new functions, and by checking the existence of those functions one can tell which OS the application is running on. For example, Vista exports [GetLocaleInfoEx](http://msdn.microsoft.com/en-us/library/ms724451(VS.85).aspx) from kernel32.dll while previous Windows versions didn't.

To cut the long story short, here is one such list containing only exports from kernel32.dll.

```
function:               implemented in
GetLocaleInfoEx:        Vista
GetLargePageMinimum:    Vista, Server 2003
GetDLLDirectory:        Vista, Server 2003, XP SP1
GetNativeSystemInfo:    Vista, Server 2003, XP SP1, XP
ReplaceFile:            Vista, Server 2003, XP SP1, XP, 2000
OpenThread:             Vista, Server 2003, XP SP1, XP, 2000, ME
GetThreadPriorityBoost: Vista, Server 2003, XP SP1, XP, 2000, NT 4
IsDebuggerPresent:      Vista, Server 2003, XP SP1, XP, 2000, ME, NT 4, 98
GetDiskFreeSpaceEx:     Vista, Server 2003, XP SP1, XP, 2000, ME, NT 4, 98, 95 OSR2
ConnectNamedPipe:       Vista, Server 2003, XP SP1, XP, 2000, NT 4, NT 3
Beep:                   Vista, Server 2003, XP SP1, XP, 2000, ME, 98, 95 OSR2, 95
```

Writing the function to determine the real OS version is simple; just proceed from newest OS to oldest and use [GetProcAddress](http://msdn.microsoft.com/en-us/library/ms683212.aspx) to check exported APIs. Implementing this in any language should be trivial. The following code in Delphi was extracted from the free [DSiWin32](http://gp.17slon.com/gp/dsiwin32.htm) library:

```
TDSiWindowsVersion = (wvUnknown, wvWin31, wvWin95, wvWin95OSR2, wvWin98,
  wvWin98SE, wvWinME, wvWin9x, wvWinNT3, wvWinNT4, wvWin2000, wvWinXP,
  wvWinNT, wvWinServer2003, wvWinVista);

function DSiGetWindowsVersion: TDSiWindowsVersion;
var
  versionInfo: TOSVersionInfo;
begin
  versionInfo.dwOSVersionInfoSize := SizeOf(versionInfo);
  GetVersionEx(versionInfo);
  Result := wvUnknown;
  case versionInfo.dwPlatformID of
    VER_PLATFORM_WIN32s: Result := wvWin31;
    VER_PLATFORM_WIN32_WINDOWS:
      case versionInfo.dwMinorVersion of
        0: if Trim(versionInfo.szCSDVersion[1]) = 'B' then
             Result := wvWin95OSR2
           else
             Result := wvWin95;
        10: if Trim(versionInfo.szCSDVersion[1]) = 'A' then
              Result := wvWin98SE
            else
              Result := wvWin98;
        90: if (versionInfo.dwBuildNumber = 73010104) then
              Result := wvWinME
            else
              Result := wvWin9x;
      end; //case versionInfo.dwMinorVersion
    VER_PLATFORM_WIN32_NT:
      case versionInfo.dwMajorVersion of
        3: Result := wvWinNT3;
```
4: Result := wvWinNT4; 5: case versionInfo.dwMinorVersion of 0: Result := wvWin2000; 1: Result := wvWinXP; 2: Result := wvWinServer2003; else Result := wvWinNT end;
[ -0.31324440240859985, -0.13371263444423676, 0.6460137963294983, -0.19868706166744232, -0.012290297076106071, -0.12918277084827423, 0.2466888129711151, -0.7522892355918884, 0.03709053620696068, -0.27145951986312866, -0.3579363524913788, 0.6651504039764404, -0.11624659597873688, 0.2295298576...
//case versionInfo.dwMinorVersion 6: Result := wvWinVista; end; //case versionInfo.dwMajorVersion end; //versionInfo.dwPlatformID end; { DSiGetWindowsVersion } function DSiGetTrueWindowsVersion: TDSiWindowsVersion; function ExportsAPI(module: HMODULE; const apiName: string): boolean; begin Result := GetProcAddress(module, PChar(apiName)) <> nil; end; { ExportsAPI } var hKernel32: HMODULE; begin { DSiGetTrueWindowsVersion } hKernel32 := GetModuleHandle('kernel32'); Win32Check(hKernel32 <> 0); if ExportsAPI(hKernel32, 'GetLocaleInfoEx') then Result := wvWinVista else if ExportsAPI(hKernel32, 'GetLargePageMinimum') then Result := wvWinServer2003 else if ExportsAPI(hKernel32, 'GetNativeSystemInfo') then
[ -0.248409703373909, -0.20842401683330536, 0.7442654967308044, -0.41420966386795044, 0.10076934099197388, 0.12753593921661377, 0.3560653626918793, -0.34020382165908813, -0.1566602885723114, -0.48726770281791687, -0.45189377665519714, 0.6842526197433472, -0.19912278652191162, 0.2157227396965...
Result := wvWinXP else if ExportsAPI(hKernel32, 'ReplaceFile') then Result := wvWin2000 else if ExportsAPI(hKernel32, 'OpenThread') then Result := wvWinME else if ExportsAPI(hKernel32, 'GetThreadPriorityBoost') then Result := wvWinNT4 else if ExportsAPI(hKernel32, 'IsDebuggerPresent') then //is also in NT4! Result := wvWin98 else if ExportsAPI(hKernel32, 'GetDiskFreeSpaceEx') then //is also in NT4! Result := wvWin95OSR2 else if ExportsAPI(hKernel32, 'ConnectNamedPipe') then Result := wvWinNT3 else if ExportsAPI(hKernel32, 'Beep') then Result := wvWin95 else
[ -0.18359260261058807, 0.06944326311349869, 0.5566997528076172, -0.3361421525478363, 0.11562290787696838, 0.0028484161011874676, 0.17315185070037842, -0.5349864959716797, -0.07732654362916946, -0.4353821873664856, -0.5896793603897095, 0.5706129670143127, -0.12307053804397583, -0.07007107883...
// we have no idea Result := DSiGetWindowsVersion; end; { DSiGetTrueWindowsVersion } ``` --- updated 2009-10-09 It turns out that it gets very hard to do an "undocumented" OS detection on Vista SP1 and higher. A look at the [API changes](http://msdn.microsoft.com/en-us/library/aa383687(VS.85).aspx) shows that all Windows 2008 functions are also implemented in Vista SP1 and that all Windows 7 functions are also implemented in Windows 2008 R2. Too bad :( --- end of update FWIW, this is a problem I encountered in practice. We (the company I work for) have a program that was not really Vista-ready when Vista was released (and some weeks
after that ...). It was not working under the compatibility layer either. (Some DirectX problems. Don't ask.) We didn't want too-smart-for-their-own-good users to run this app on Vista at all - compatibility mode or not - so I had to find a solution (a guy smarter than me pointed me in the right direction; the stuff above is not my brainchild). Now I'm posting it for your pleasure and to help all the poor souls that will have to solve this problem in the future. Google, please index this article! If you have a better solution (or an upgrade and/or fix for mine), please
post an answer here ...
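The probe-newest-to-oldest pattern above isn't Windows-specific. As a hedged illustration (my analogy, not part of the original answer - the marker class names are merely plausible choices), the same feature-detection idea in Java can probe for classes introduced in successive JDK releases instead of trusting a reported version string:

```java
public class JdkVersionProbe {
    // Analogous to ExportsAPI: does this runtime provide the named class?
    public static boolean hasClass(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    // Probe from newest to oldest, exactly like the kernel32 export checks:
    // the first marker class that resolves gives the minimum platform level.
    public static String detectJdkLevel() {
        if (hasClass("java.lang.Record"))         return "16 or newer";
        if (hasClass("java.net.http.HttpClient")) return "11 or newer";
        if (hasClass("java.util.stream.Stream"))  return "8 or newer";
        if (hasClass("java.nio.file.Path"))       return "7 or newer";
        return "6 or older";
    }

    public static void main(String[] args) {
        System.out.println("Detected JDK level: " + detectJdkLevel());
    }
}
```

As with the kernel32 approach, the ordering matters: a newer platform exports everything the older ones do, so you must test the newest marker first.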
While creating classes in Java I often find myself creating instance-level collections that I know ahead of time will be very small - less than 10 items in the collection. But I don't know the exact number of items ahead of time, so I typically opt for a dynamic collection (ArrayList, Vector, etc.).

```
class Foo {
    ArrayList<Bar> bars = new ArrayList<Bar>(10);
}
```

A part of me keeps nagging at me that it's wasteful to use complex dynamic collections for something this small in size. Is there a better way of implementing something like this? Or is this the norm? Note, I'm not hit with any
(noticeable) performance penalties or anything like that. This is just me wondering if there isn't a better way to do things.

The `ArrayList` class in Java has only two data members, a reference to an `Object[]` array and a size - which you need anyway if you don't use an `ArrayList`. So the only advantage to not using an `ArrayList` is saving one object allocation, which is unlikely ever to be a big deal. If you're creating and disposing of many, many instances of your container class (and by extension your `ArrayList` instance) every second, you *might* have a slight problem with garbage collection churn - but that's something to worry about only if it ever occurs. Garbage collection is typically the least of your worries.
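To make the point concrete, here is a small sketch (class and item names are illustrative): presizing the list to the known upper bound means the backing `Object[]` is allocated once and never grown, so the only remaining cost versus a raw array is the single `ArrayList` object itself:

```java
import java.util.ArrayList;
import java.util.List;

public class SmallListDemo {
    public static void main(String[] args) {
        // Presized to the expected maximum (10), so no internal
        // array reallocation ever happens for this list.
        List<String> bars = new ArrayList<>(10);
        for (int i = 0; i < 10; i++) {
            bars.add("bar-" + i);
        }
        System.out.println(bars.size()); // 10
    }
}
```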
Has anyone had any luck using Oracle from .NET on a 64-bit machine, along with the UDT capabilities of Oracle? I've been able to use an x64 ODP.Net client, but cannot find one with any support for UDTs. Thanks, Nick [Edit] I've posted an answer below. The latest (as of December 2008) release is 11.1.0.7. This has support for 64 bit and UDT. You need to use the 11.1.0.7 release. This has UDT support and works with both 32 and 64 bit.
How do I get the history of commits that have been made to the repository by a particular user? I am able to access CVS either through the command line or TortoiseCVS, so a solution using either method is sufficient.

```
cvs history -u username
```

gives a history of the changes the user has made.
I would like to put a link to a webpage in an alert dialog box so that I can give a more detailed description of how to fix the error that caused the dialog box to appear. How can I make the dialog box show something like this:

```
There was an error. Go to this page to fix it.
www.TheWebPageToFix.com
```

Thanks.

You could try asking whether they wish to visit via `window.prompt`:

```
if(window.prompt('Do you wish to visit the following website?','http://www.google.ca'))
    location.href='http://www.google.ca/';
```

Also, Internet Explorer supports modal dialogs, so you could try showing one of those:

```
if (window.showModalDialog)
    window.showModalDialog("mypage.html","popup","dialogWidth:255px;dialogHeight:250px");
else
    window.open("mypage.html","name","height=255,width=250,toolbar=no,directories=no,status=no,menubar=no,scrollbars=no,resizable=no,modal=yes");
```
I have a Windows Forms app with a TabControl. One of the TabPages in the TabControl contains a ToolStrip. Frequently, the Windows Forms designer will spontaneously set the Visible property of the ToolStrip to False. To get the ToolStrip to reappear in my form, I have to manually find the control and set the property back to True. It's really annoying. Does anyone know why this happens, or how to stop it? I've tried setting the minimum height of the control to a non-zero value, but that has no effect. I'm using VS2008, VB.NET and the .NET Framework 2.0, however I've struggled