For example:

> This is main body of my content. I have a footnote link for this line [1]. Then, I have some more content. Some of it is interesting and it has some footnotes as well [2].
>
> [1] Here is my first footnote.
>
> [2] Another footnote.

So, if I click the "[1]" link, the page jumps to the first footnote reference, and so on. How exactly do I accomplish this in HTML?

Give a container an `id`, then use `#` in the link's `href` to refer to that id, e.g.:

```html
<p>This is main body of my content. I have a footnote link for this line <a href="#footnote-1">[1]</a>. Then, I have some more content. Some of it is interesting and it has some footnotes as well <a href="#footnote-2">[2]</a>.</p>

<p id="footnote-1">[1] Here is my first footnote.</p>
<p id="footnote-2">[2] Another footnote.</p>
```
I would like to create a cross-platform drawing program. The one requirement for my app is pixel-level control over the canvas. For instance, I want to write my own line drawing algorithm rather than rely on someone else's, and I do not want any form of anti-aliasing (again, pixel-level control is required). I would like the user's interactions on screen to be quick and responsive (pending my ability to write fast algorithms). Ideally, I would like to write this in Python, or perhaps Java as a second choice. The ability to easily make the final app cross-platform is a must. I will submit to different APIs on different OSes if necessary, as long as I can write an abstraction layer around them. Any ideas?

Addendum: I need the ability to draw *on-screen*. Drawing out to a file I've got figured out.

I just this week put together some slides and demo code for doing 2D graphics using OpenGL from Python using the library pyglet. Here's a representative post: [Pyglet week 2, better vertex throughput](https://www.tartley.com/posts/pyglet-week-2-better-vertex-throughput/) (or [3D stuff using the same basic ideas](https://www.tartley.com/posts/flying-high-hobbyist-opengl-from-python/)). It is very fast (relatively speaking, for Python); I have managed to get around 1,000 independently
positioned and oriented objects moving around the screen, each with about 50 vertices. It is very portable; all the code I have written in this environment works on Windows, Linux, and Mac (and even obscure environments like PyPy) without me ever having to think about it. Some of these posts are very old, with broken links between them. You should be able to find all the relevant posts [using the 'graphics' tag](https://www.tartley.com/categories/graphics/).
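Since the asker specifically wants to write their own line drawing algorithm with no anti-aliasing, a classic starting point is Bresenham's algorithm. Here is a minimal, self-contained Python sketch; the generator just yields pixel coordinates, standing in for whatever canvas/plot call your drawing backend provides:

```python
def bresenham_line(x0, y0, x1, y1):
    """Yield integer pixel coordinates of a line from (x0, y0) to (x1, y1).

    Classic integer-only Bresenham: no floating point and no anti-aliasing,
    so every pixel is under your control.
    """
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield (x0, y0)
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

# Example: a shallow line from (0, 0) to (5, 2)
pixels = list(bresenham_line(0, 0, 5, 2))
# → [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```

In a pyglet/OpenGL setup you would feed these coordinates to point or quad rendering instead of collecting them in a list.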
I would like to get data from different webpages, such as addresses of restaurants or dates of different events for a given location, and so on. What is the best library I can use for extracting this data from a given set of sites?

If using Python, take a good look at Beautiful Soup (<http://crummy.com/software/BeautifulSoup>). An extremely capable library; it makes scraping a breeze.
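As a small illustrative sketch (the HTML snippet and the `address` class name are made-up placeholders, and Beautiful Soup must be installed, e.g. `pip install beautifulsoup4`):

```python
from bs4 import BeautifulSoup

# A made-up page fragment standing in for a fetched restaurant listing.
html = """
<div class="listing">
  <span class="address">12 Example St</span>
  <span class="address">34 Sample Ave</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
# Pull the text of every element with the (hypothetical) "address" class.
addresses = [tag.get_text() for tag in soup.find_all("span", class_="address")]
# → ["12 Example St", "34 Sample Ave"]
```

In practice you would fetch `html` with an HTTP library first; Beautiful Soup only does the parsing and extraction.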
It strikes me that properties in C# should be used when trying to manipulate a field in the class. But when there are complex calculations or a database involved, we should use a getter/setter method. Is this correct? When do you use a getter/setter over properties?

The .NET design guidelines provide some answers to this question in the [Properties vs. Methods](http://msdn.microsoft.com/en-us/library/bzwdh01d(VS.71).aspx#cpconpropertyusageguidelinesanchor1) section. Basically, properties have the same semantics as a field. You shouldn't let a property throw exceptions, properties shouldn't have side effects, order shouldn't matter, and properties should return relatively quickly. If any of those things could happen, it's better to use a method. The guidelines also
recommend using methods for returning arrays. When deciding whether to use a property or a method, it helps if I think of it like a field. I think about the behavior of the property and ask myself, "If this were a field on the class, would I be surprised if it behaved the way it does?" Consider, for example, the [TcpClient.GetStream method](http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.getstream.aspx). It can throw several exceptions depending on whether the connection has been made, and it's important that the TcpClient is configured before you try to get the stream. Because of this, it is a Get method rather than a property. If you
take a good look at the design guidelines, you'll see that it's usually *not* a matter of preference; there's good reasons to use methods instead of properties in certain cases.
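The same "would this surprise me as a field?" test carries over to other languages. As a rough sketch in Python (the class and names are made up for illustration): a property stays cheap, side-effect-free, and exception-free, while anything that can block or fail is an ordinary method:

```python
class TempSensor:
    def __init__(self, celsius):
        self._celsius = celsius

    @property
    def celsius(self):
        # Field-like: cheap, no side effects, never raises.
        return self._celsius

    def read_hardware(self):
        # Method, not a property: a real sensor read could block or
        # raise IOError. (Stubbed here with a fixed offset.)
        return self._celsius + 0.5

s = TempSensor(20.0)
s.celsius          # → 20.0, behaves exactly like a field
s.read_hardware()  # → 20.5, explicit call signals "work happens here"
```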
What is Windows' best I/O event notification facility? By best I mean something that...

1. doesn't have a limit on the number of input file descriptors
2. works on all file descriptors (disk files, sockets, ...)
3. provides various notification modes (edge triggered, level triggered)

In Windows, async operations are done per file operation, not per descriptor. There are several ways to wait on file operations to complete asynchronously. For example, if you want to know when data is available on a network socket, issue an async read request on the socket; when it completes, the data was available and has been retrieved. In Win32, async
operations use the [`OVERLAPPED`](http://msdn.microsoft.com/en-us/library/ms684342(VS.85).aspx) structure to contain state about an outstanding IO operation.

1. Associate the files with an [IO Completion Port](http://msdn.microsoft.com/en-us/library/aa365198(VS.85).aspx) and dispatch async IO requests. When an operation completes, it will put a completion message on the queue, which your worker thread(s) can wait on and retrieve as messages arrive. You can also put user-defined messages into the queue. There is no limit to how many files or queued messages can be used with a completion port.
2. Dispatch each IO operation with an event. The event associated with an operation will become signaled (satisfy a wait) when it completes. Use [`WaitForMultipleObjects`](http://msdn.microsoft.com/en-us/library/ms687025(VS.85).aspx) to wait on all the events at once. This has the disadvantage of only being able to wait on `MAXIMUM_WAIT_OBJECTS` objects at once (64). You can also wait on other types of objects at the same time (process/thread termination, mutexes, events, semaphores).
3. Use a [thread pool](http://msdn.microsoft.com/en-us/library/ms686760(VS.85).aspx). The thread pool can take an unlimited number of objects and file operations to wait on, and execute a [user-defined function](http://msdn.microsoft.com/en-us/library/ms684124(VS.85).aspx) upon each completion.
4. Use [`ReadFileEx`](http://msdn.microsoft.com/en-us/library/aa365468(VS.85).aspx) and [`WriteFileEx`](http://msdn.microsoft.com/en-us/library/aa365748(VS.85).aspx) to queue [Asynchronous Procedure Calls](http://msdn.microsoft.com/en-us/library/ms681951(VS.85).aspx) (APCs) to the calling thread, and [`SleepEx`](http://msdn.microsoft.com/en-us/library/ms686307(VS.85).aspx) (or `WaitFor{Single|Multiple}ObjectsEx`) with the alertable flag set to `TRUE` to receive a notification message
for each operation when it completes. This method is similar to an IO completion port, but it only works for one thread. The Windows NT kernel makes no internal distinction between socket, disk file, pipe, and other file operations: all of the options above work with all the file types.
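For what it's worth, option 1 is what Python's `asyncio` builds on: the default event loop on Windows is a proactor backed by an IO completion port. A minimal self-contained sketch of completion-style async IO (an ephemeral localhost echo server and a client; the names are just this example's):

```python
import asyncio

async def main():
    # Echo handler: the awaits complete when the underlying overlapped
    # read/write operations finish (via IOCP on Windows).
    async def handle(reader, writer):
        data = await reader.read(100)
        writer.write(data)
        await writer.drain()
        writer.close()

    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping")
    await writer.drain()
    reply = await reader.read(100)
    writer.close()

    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(main())
# → b"ping"
```

The same script runs unchanged on Linux or macOS, where the loop is selector-based instead; the completion-port machinery is hidden behind the event loop.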
The MySQL manual at [MySQL](http://web.archive.org/web/20160504181056/https://dev.mysql.com/doc/refman/5.1/en/rename-database.html) covers this. Usually I just dump the database and re-import it with a new name. This is not an option for very big databases. Apparently `RENAME {DATABASE | SCHEMA} db_name TO new_db_name;` [does bad things, exists only in a handful of versions, and is a bad idea overall](http://web.archive.org/web/20160504181056/https://dev.mysql.com/doc/refman/5.1/en/rename-database.html). This needs to work with [InnoDB](http://en.wikipedia.org/wiki/InnoDB), which stores things very differently from [MyISAM](http://en.wikipedia.org/wiki/MyISAM).

For **InnoDB**, the following seems to work: create the new empty database, then rename each table in turn into the new database:

```
RENAME TABLE old_db.table TO new_db.table;
```

You will need to adjust the permissions after that. For scripting in a
shell, you can use either of the following:

```
mysql -u username -ppassword old_db -sNe 'show tables' | while read table; \
  do mysql -u username -ppassword -sNe "rename table old_db.$table to new_db.$table"; done
```

OR

```
for table in `mysql -u root -ppassword -s -N -e "use old_db;show tables from old_db;"`; do
  mysql -u root -ppassword -s -N -e "use old_db;rename table old_db.$table to new_db.$table;";
done;
```

---

Notes:

* There is no space between the option `-p` and the password. If your database has no password, remove the `-u username -ppassword` part.
* If some table has a trigger, it cannot be moved to another
database using the above method (it will result in a `Trigger in wrong schema` error). If that is the case, use the traditional way to clone the database and then drop the old one: `mysqldump old_db | mysql new_db`
* If you have stored procedures, you can copy them afterwards: `mysqldump -R old_db | mysql new_db`
I have an application that uses NHibernate as its ORM and sometimes it experiences performance issues due to how the data is being accessed by it. What kind of things can be done to improve the performance of NHibernate? (Please limit to one recommendation per answer) The first and most dramatic performance problem that you can run into with NHibernate is if you are creating a new session factory for every session you create. Only one session factory instance should be created for each application execution and all sessions should be created by that factory. Along those lines, you should continue using
the same session as long as it makes sense. This will vary by application, but for most web applications, a single session per request is recommended. If you throw away your session frequently, you aren't gaining the benefits of its cache. Intelligently using the session cache can change a routine with a linear (or worse) number of queries to a constant number without much work. Equally important is that you want to make sure that you are lazy loading your object references. If you are not, entire object graphs could be loaded for even the most simple queries. There are
only certain reasons not to do this, but it is always better to start with lazy loading and switch back as needed. That brings us to eager fetching, the opposite of lazy loading. While traversing object hierarchies or looping through collections, it can be easy to lose track of how many queries you are making and you end up with an exponential number of queries. Eager fetching can be done on a per query basis with a FETCH JOIN. In rare circumstances, such as if there is a particular pair of tables you always fetch join, consider turning off lazy loading
for that relationship. As always, SQL Profiler is a great way to find queries that are running slow or being made repeatedly. At my last job we had a development feature that counted queries per page request as well. A high number of queries for a routine is the most obvious indicator that your routine is not working well with NHibernate. If the number of queries per routine or request looks good, you are probably down to database tuning; making sure you have enough memory to store execution plans and data in the cache, correctly indexing your data, etc. One tricky little
problem we ran into was with SetParameterList(). The function allows you to easily pass a list of parameters to a query. NHibernate implemented this by creating one parameter for each item passed in. This results in a different query plan for every number of parameters, so our execution plans were almost always getting evicted from the cache. Also, numerous parameters can significantly slow down a query. We did a custom hack of NHibernate to send the items as a delimited list in a single parameter. The list was split in SQL Server by a table-valued function that our hack automatically
inserted into the IN clause of the query. There could be other land mines like this depending on your application. SQL Profiler is the best way to find them.
I am implementing a quite simple state-machine order processing application. It is an e-commerce application with a few twists. The users of the application will not be editing workflows themselves. Microsoft claims that ASP.NET and Windows Workflow can be combined. How hard is it to install and maintain a combination of ASP.NET and Windows Workflow? I would be keeping the workflow state in SQL Server. Is it easier for me to roll my own state machine code, or is Windows Workflow the right tool for the job?

ASP.NET and WF get along just fine, and WF doesn't add much maintenance overhead. Whether or
not this is the right design for you depends a lot on your needs. If you have a lot of event driven actions then WF might be worthwhile, otherwise the overhead of rolling your own tracking would probably add less complexity to the system. WF is reasonably easy to work with so I'd suggest working up a prototype and experimenting with it. Also, in my opinion, based on your requirements, I doubt WF would be the right solution for you.
I am working on a Cocoa application, and in order to keep the GUI responsive during a massive data import (Core Data) I need to run the import outside the main thread. Is it safe to access those objects, even though I created them in the main thread, without using locks, **if** I don't explicitly access those objects while the thread is running?

With Core Data, you should have a separate managed object context to use for your import thread, connected to the same coordinator and persistent store. You cannot simply throw objects created in a context used by the main thread
into another thread and expect them to work. Furthermore, you cannot do your own locking for this; you must at minimum lock the managed object context the objects are in, as appropriate. But if those objects are bound to by your views and controls, there are no "hooks" to which you can add that locking of the context. There's no free lunch. Ben Trumbull explains some of the reasons why you need to use a separate context, and why "just reading" isn't as simple or as safe as you might think, in [this great post from late 2004 on the webobjects-dev list](http://lists.apple.com/archives/webobjects-dev/2004/Dec/msg00255.html
"Re: locking problem"). (The whole thread is great.) He's discussing the Enterprise Objects Framework and WebObjects, but his advice is fully applicable to Core Data as well. Just replace "EC" with "NSManagedObjectContext" and "EOF" with "Core Data" in the meat of his message. The solution to the problem of sharing data between threads in Core Data, like the Enterprise Objects Framework before it, is "don't." If you've thought about it further and you really, honestly do have to share data between threads, then the solution is to keep independent object graphs in thread-isolated contexts, and use the information in the save
notification from one context to tell the other context what to re-fetch. [`-[NSManagedObjectContext refreshObject:mergeChanges:]`](http://developer.apple.com/documentation/Cocoa/Reference/CoreDataFramework/Classes/NSManagedObjectContext_Class/Reference/Reference.html#//apple_ref/occ/instm/NSManagedObjectContext/refreshObject:mergeChanges: "-[NSManagedObjectContext refreshObject:mergeChanges:]") is specifically designed to support this use.
We have a WinForms application written in C# that uses the AxAcroPDFLib.AxAcroPDF component to load and print a PDF file. It has been working without any problems on Windows XP. I have moved my development environment to Vista 64-bit, and now the application will not run (on Vista 64) unless I remove the AxAcroPDF component. I get the following error when the application runs: "System.Runtime.InteropServices.COMException: Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG))." I have been advised on the Adobe Forums that the reason for the error is that they do not have a 64-bit version of the AxAcroPDF ActiveX control.
Is there some way around this problem? For example, can I convert the 32-bit ActiveX control to a 64-bit control myself?

You can't convert Adobe's ActiveX control to 64-bit yourself, but you can force your application to run in 32-bit mode by setting the platform target to x86. For instructions for your version of Visual Studio, see section 1.44 of [Issues When Using Microsoft Visual Studio 2005](http://msdn.microsoft.com/en-gb/vstudio/aa718685.aspx).
Does anybody know a "technique" to discover memory leaks caused by smart pointers? I am currently working on a large project written in **C++** that heavily uses smart pointers with reference counting. Obviously we have some memory leaks caused by smart pointers that are still referenced somewhere in the code, so their memory never gets freed. It's very hard to find the line of code with the "needless" reference that causes the corresponding object not to be freed (even though it's no longer of use). I found some advice on the web that proposed collecting call stacks of
the increment/decrement operations of the reference counter. This gives me a good hint as to which piece of code has caused the reference counter to be increased or decreased. But what I need is some kind of algorithm that groups the corresponding "increase/decrease call stacks" together. After removing these pairs of call stacks, I would hopefully have (at least) one "increase call stack" left over that shows me the piece of code with the "needless" reference that caused the corresponding object not to be freed. Then it would be no big deal to fix the leak! But does anybody have an idea for an "algorithm" that
does the grouping? Development takes place under **Windows XP**. (I hope someone understood what I tried to explain...)

EDIT: I am talking about leaks caused by circular references.

Note that one source of leaks with **reference-counting smart pointers** is pointers with **circular dependencies**. For example, A has a smart pointer to B, and B has a smart pointer to A. Neither A nor B will be destroyed. You will have to find, and then break, the dependencies. If possible, use Boost smart pointers: use `shared_ptr` for pointers which are supposed to own the data, and `weak_ptr` for pointers that are not supposed to call delete.
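The same owner/observer split exists outside C++. As a hedged illustration in Python (standing in for a C++ sketch), `weakref.ref` plays the role of `weak_ptr`: the back-reference can be followed while the owner exists, but it does not keep its target alive, so the A↔B cycle never pins memory:

```python
import weakref

class Node:
    def __init__(self, name):
        self.name = name
        self.child = None   # owning ("shared_ptr"-like) reference
        self.parent = None  # weak back-reference, set below

a = Node("A")
b = Node("B")
a.child = b
b.parent = weakref.ref(a)   # weak: does not keep A alive

# The weak reference resolves while the owner exists...
assert b.parent() is a

del a  # ...and goes dead once the owner is gone
# (In CPython, reference counting reclaims A immediately here,
#  because the only remaining reference to it was weak.)
parent_alive = b.parent() is not None
# → False
```

With two strong references instead, neither object's count could reach zero, which is exactly the leak pattern described above.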
When I use the PrintOut method to print a Worksheet object to a printer, the "Printing" dialog (showing filename, destination printer, pages printed, and a Cancel button) is displayed even though I have set DisplayAlerts = False. The code below works in an Excel macro, but the same thing happens if I use this code in a VB or VB.Net application (with the reference changes required to use the Excel object).

```
Public Sub TestPrint()
    Dim vSheet As Worksheet

    Application.ScreenUpdating = False
    Application.DisplayAlerts = False

    Set vSheet = ActiveSheet
    vSheet.PrintOut Preview:=False

    Application.DisplayAlerts = True
    Application.ScreenUpdating = True
End Sub
```

EDIT: The answer below sheds more light on this (that it may be a Windows dialog and not an Excel dialog) but does not answer my question. Does anyone know how to prevent it from being displayed?

EDIT: Thank you for your extra research, Kevin. It looks very much like this is what I need. I'm just not sure I want to blindly accept API code like that. Does anyone else have any knowledge about these API calls and whether they're doing what the author purports?

When you say the "Printing"
dialog, I assume you mean the "Now printing xxx on " dialog rather than the standard print dialog (select printer, number of copies, etc.). Taking your example above and trying it out, that is the behaviour I saw: "Now printing..." was displayed briefly and then auto-closed. What you're trying to control may not be tied to Excel, but may instead be Windows-level behaviour. If it is controllable, you'd need to a) disable it, b) perform your print, and c) re-enable it. If your code fails in between, there is a risk it is not re-enabled for other applications.

EDIT: Try this solution: [How do you prevent printing
dialog when using Excel PrintOut method](http://www.mrexcel.com/archive2/11900/13336.htm). It seems to describe exactly what you are after.
Why isn't there a Team Foundation Server Express Edition?

Almost 3 years and 16 answers later, [**TFS Express**](http://www.visualstudio.com/en-us/products/visual-studio-express-vs) is now a fact.
I'm fairly new to the STL, so I was wondering whether there are any dynamically sortable containers? At the moment my current thinking is to use a vector in conjunction with the various sort algorithms, but I'm not sure whether there's a more appropriate selection given the (presumably) linear complexity of inserting entries into a sorted vector. To clarify "dynamically", I am looking for a container whose sorting order I can modify at runtime - e.g. sort it in an ascending order, then later re-sort in a descending order. If you know you're going to be sorting on a single value, ascending and descending, then `set` is your friend. Use a reverse iterator when you want to "sort" in the opposite direction. If your objects are complex and you're going to be sorting in many different ways based on the member fields within the objects, then you're probably better off using a `vector` and `sort`. Try to do your inserts all at once, and then call `sort` once. If that isn't feasible, then `deque` may be a better option than `vector` for large collections of objects. I think that if you're interested in *that* level of optimization, you had better be profiling your code using actual data. (Which is probably the best advice anyone here can give: it may not matter that you call `sort` after each insert if you're only doing it once in a blue moon.)
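The re-sort-at-runtime idea is language-agnostic. As a quick sketch (in Python rather than C++, with made-up `(name, age)` records), sorting one collection by different keys and directions at runtime looks like:

```python
# Sketch: one collection, re-sorted by different criteria at runtime.
# The (name, age) records are invented for illustration.
people = [("alice", 30), ("bob", 25), ("carol", 35)]

ascending = sorted(people, key=lambda p: p[1])                 # by age, ascending
descending = sorted(people, key=lambda p: p[1], reverse=True)  # same key, reversed
by_name = sorted(people, key=lambda p: p[0])                   # a different field
```

The C++ equivalent is a `std::vector` plus `std::sort` with a different comparator per ordering, as the answer suggests.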
Let's say the first N integers divisible by 3 starting with 9. I'm sure there is some one line solution using lambdas, I just don't know that area of the language well enough yet. Just to be different (and to avoid using a where statement) you could also do:

```
var numbers = Enumerable.Range(0, n).Select(i => i * 3 + 9);
```

**Update** This also has the benefit of not running out of numbers.
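For comparison (not part of the original answer), the same Range/Select projection in Python is a list comprehension over a range:

```python
def multiples_of_three(n):
    """First n integers divisible by 3, starting with 9.

    Mirrors Enumerable.Range(0, n).Select(i => i * 3 + 9)."""
    return [i * 3 + 9 for i in range(n)]
```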
I'd like to have a `TabNavigator` component that has a close button for some of the tabs. How do I do that? It seems that the `TabNavigator` component does not allow (or I could not find) extensibility of this form. Help. Thanks You should take a look at the SuperTabNavigator component from the [FlexLib](http://code.google.com/p/flexlib/) project: * [SuperTabNavigator example](http://flexlib.googlecode.com/svn/trunk/examples/SuperTabNavigator_Sample.swf) * [SuperTabNavigator documentation](http://flexlib.googlecode.com/svn/trunk/docs/flexlib/containers/SuperTabNavigator.html) * [FlexLib Component list](http://code.google.com/p/flexlib/wiki/ComponentList) If you don't want all of the tabs to have close buttons (I understand from the question that you don't) it looks like you could use the [setClosePolicyForTab()](http://flexlib.googlecode.com/svn/trunk/docs/flexlib/containers/SuperTabNavigator.html#setClosePolicyForTab()) method for specifying which tabs you want to have them.
I'm looking for a profiler in order to find the bottleneck in my C++ code. I'd like to find a free, non-intrusive, and good profiling tool. I'm a game developer, and I use PIX for Xbox 360 and found it very good, but it's not free. I know of Intel [VTune](https://en.wikipedia.org/wiki/VTune), but it's not free either. [CodeXL](https://gpuopen.com/archived/legacy-codexl/) has now superseded the end-of-life'd [AMD Code Analyst](https://web.archive.org/web/20120607044224/http://developer.amd.com/tools/CodeAnalyst/Pages/default.aspx) and both are free, but not as advanced as VTune. There's also [Sleepy](http://www.codersnotes.com/sleepy/), which is very simple, but does the job in many cases. Note: **All three of the tools above have been unmaintained for several years.**
I know and have Xcode, but I was wondering if there were any other complete development environments that support Objective-C? I'm not looking for solutions with vim or emacs, nor editors like BBEdit that support syntax highlighting, but a full fledged IDE with:

* code completion
* compilation
* debugging
* refactoring

Extra points for being cross platform, supporting vi key bindings and supporting other languages.

### Note:

I've updated and accepted my answer below as Jetbrains has released Early Access for *AppCode*, their new Objective-C IDE. Since this has been a fairly popular question, I thought it worthwhile to update the information. I recently learned that [Jetbrains](http://www.jetbrains.com/), the maker of my favorite IDE (IDEA), may support Objective-C (though it is unclear how much it will work for iPhone/iPad development). See the thread [here](http://www.jetbrains.net/devnet/thread/282271?tstart=0) for early discussion on this. In the last year or two, they have started adding additional language support both in their flagship IDE as well as specialized IDEs (for Ruby, Python, PHP). I guess this is just another step in the process. I for one would love to have another option other than Xcode, and I couldn't think of one that I'd love more. This is obviously vaporware at the moment, but I think it is something to keep an eye on. This is now a real product, albeit still in Early Access. See [here](http://blog.jetbrains.com/objc/) for the blog on this new product, which will give you pointers to check out the EAP. UPDATE: [AppCode](http://www.jetbrains.com/objc/) has now been released and offers a true alternative to using Xcode for Objective-C and iPhone/iPad/Mac development. It does still rely on Interface Builder for layout and wiring of GUI components and uses the iOS simulator, but all coding, including a slew of refactorings, smart templating and static analysis, is available through AppCode.
I'm looking for a way to extract the audio part of a FLV file. I'm recording from the user's microphone and the audio is encoded using the [Nellymoser Asao Codec](http://en.wikipedia.org/wiki/Nellymoser_Asao_Codec). This is the default codec and there's no way to change this. [ffMpeg](http://ffmpeg.mplayerhq.hu/) is the way to go! It worked for me with SVN Rev 14277. The command I used is: `ffmpeg -i source.flv -vn -f mp3 destination.mp3` **GOTCHA**: If you get this error message: `Unsupported audio codec (n)`, check the [FLV Spec](https://www.adobe.com/content/dam/Adobe/en/devnet/flv/pdfs/video_file_format_spec_v10.pdf) in the **Audio Tags** section. ffMpeg can decode n=6 (Nellymoser). But for n=4 (Nellymoser 8-kHz mono) and n=5 (Nellymoser 16-kHz mono) it doesn't work. To fix this, **use the default microphone rate** when recording your streams, otherwise ffMpeg is unable to decode them. Hope this helps!
Using Oracle 10g with our testing server, what is the most efficient/easy way to backup and restore a database to a static point, assuming that you always want to go back to the given point once a backup has been created. A sample use case would be the following:

1. install and configure all software
2. modify data to the base testing point
3. take a backup somehow (this is part of the question, how to do this)
4. do testing
5. return to step 3 state (restore back to backup point, this is the other half of the question)

**Optimally this would be completed through sqlplus or rman or some other scriptable method.**

You do not need to take a backup at your base time. Just enable flashback database, create a guaranteed restore point, run your tests and flashback to the previously created restore point. The steps for this would be:

1. Startup the instance in mount mode: `startup force mount;`
2. Create the restore point: `create restore point before_test guarantee flashback database;`
3. Open the database: `alter database open;`
4. Run your tests.
5. Shutdown and mount the instance: `shutdown immediate; startup mount;`
6. Flashback to the restore point: `flashback database to restore point before_test;`
7. Open the database: `alter database open;`
My `master` and `development` branches are tracked remotely on [GitHub](http://en.wikipedia.org/wiki/GitHub). How do I clone both these branches? First, clone a remote [Git](http://en.wikipedia.org/wiki/Git_%28software%29) repository and [`cd`](http://en.wikipedia.org/wiki/Cd_%28command%29) into it:

```
$ git clone git://example.com/myproject
$ cd myproject
```

Next, look at the local branches in your repository:

```
$ git branch
* master
```

But there are other branches hiding in your repository! See these using the `-a` flag:

```
$ git branch -a
* master
  remotes/origin/HEAD
  remotes/origin/master
  remotes/origin/v1.0-stable
  remotes/origin/experimental
```

To take a quick peek at an upstream branch, check it out directly:

```
$ git checkout origin/experimental
```

To work on that branch, create a local tracking branch, which is done automatically by:

```
$ git checkout experimental
Branch experimental set up to track remote branch experimental from origin.
Switched to a new branch 'experimental'
```

Here, "new branch" simply means that the branch is taken from the index and created locally for you. As the *previous* line tells you, the branch is being set up to track the remote branch, which usually means the origin/branch\_name branch. Your local branches should now show:

```
$ git branch
* experimental
  master
```

You can track more than one remote repository using `git remote`:

```
$ git remote add win32 git://example.com/users/joe/myproject-win32-port
$ git branch -a
* master
  remotes/origin/HEAD
  remotes/origin/master
  remotes/origin/v1.0-stable
  remotes/origin/experimental
  remotes/win32/master
  remotes/win32/new-widgets
```

At this point, things are getting pretty crazy, so run `gitk` to see what's going on:

```
$ gitk --all &
```
Has anyone come up with a good way of performing full text searches (`FREETEXT() CONTAINS()`) for any number of arbitrary keywords using standard LinqToSql query syntax? I'd obviously like to avoid having to use a Stored Proc or have to generate a Dynamic SQL calls. Obviously I could just pump the search string in on a parameter to a SPROC that uses FREETEXT() or CONTAINS(), but I was hoping to be more creative with the search and build up queries like: "pepperoni pizza" and burger, not "apple pie". Crazy I know - but wouldn't it be neat to be able to do this directly
from LinqToSql? Any tips on how to achieve this would be much appreciated. Update: I think I may be on to something [here](http://tomasp.net/blog/linq-expand-update.aspx)... Also: I rolled back the change made to my question title because it actually changed the meaning of what I was asking. I *know* that full text search is not supported in LinqToSql - I would have asked that question if I wanted to know that. Instead - I have updated my title to appease the edit-happy-trigger-fingered masses. Unfortunately LINQ to SQL does not support Full Text Search. There are a bunch of products out there that I think could help: Lucene.NET and NHibernate Search come to mind. LINQ for NHibernate combined with NHibernate Search would probably give that functionality, but both are still way deep in beta.
If I add 300-400 databases to a single SQL Server instance, will I encounter scaling issues introduced by the large number of databases? This is one of those questions best answered by: Why are you trying to do this in the first place? What is the concurrency against those databases? Are you generating databases when you could have normalized tables to do the same functionality? That said, yes, MSSQL 2005 will handle that number of databases per installation. It is more what you are doing with those databases (incoming connections, CPU usage, etc.) that will seriously affect your performance.
I'd love to learn Cocoa, it seems like the best systems language for Mac OS X. Can you recommend any useful ways to learn the language? Books, websites, example projects or even classes to take? Andy Matuschak has a great [blog post](http://andymatuschak.org/articles/2007/09/09/getting-started-with-cocoa-a-friendlier-approach/) that leads you through several good Cocoa tutorials, explaining why you are reading each one. [Cocoa Dev Central](http://www.cocoadevcentral.com/) has loads of tutorials. For books, I echo Dave and Phillip Bowden with [Cocoa Programming for Mac OS X by Aaron Hillegass](https://rads.stackoverflow.com/amzn/click/com/0321503619).
I'm looking to have Windows recognize that certain folders are associated to my application - maybe by naming the folder 'folder.myExt'. Can this be done via the registry? A bit more info: this is for an x-platform app (that's why I suggested the folder with an extension - Mac can handle that), and the RAD I'm using doesn't read/write binary data efficiently enough, as the size of this 'folder' will be upwards of 2000 files and 500Mb. Folders in Windows aren't subject to the name.extension rules at all; there's only one entry in the registry's file
type handling for "folder" types. (If you try to change it you're going to have very, very rough times ahead) The only simple way to get the effect you're after would be to do what OpenOffice, MS Office 2007, and large video games have been doing for some time, use a ZIP file for a container. (It doesn't have to be a "ZIP" exactly, but some type of readily available container file type is better than writing your own) Like OO.org and Office 2K7 you can just use a custom extension and designate your app as the handler. This will also
work on Macs, so it can be cross-platform. It may not be fast however. Using low or no compression may help with that.
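A minimal sketch of the container idea (shown with Python's `zipfile` for brevity; the container name and entry names are hypothetical): store the "folder" contents in one uncompressed ZIP, which keeps reads and writes fast at the cost of size.

```python
import io
import zipfile

# Write the "folder" as one container file.
# (In memory here; in practice this would be a .myExt file on disk.)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as z:  # ZIP_STORED = no compression
    z.writestr("manifest.txt", "version=1")
    z.writestr("data/items.bin", b"\x00\x01\x02")

# Read it back as if it were a directory of files.
with zipfile.ZipFile(buf) as z:
    names = z.namelist()
    manifest = z.read("manifest.txt").decode()
```

Registering the custom extension with your app as the handler then works the same as for any ordinary file type.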
If I have a Resource bundle property file, A.properties:

```
thekey={0} This is a test
```

And then I have Java code that loads the resource bundle:

```
ResourceBundle labels = ResourceBundle.getBundle("A", currentLocale);
labels.getString("thekey");
```

How can I replace the {0} text with some value?

```
labels.getString("thekey", "Yes!!!");
```

Such that the output comes out as:

```
Yes!!! This is a test.
```

There are no methods that are part of ResourceBundle to do this. Also, I am in Struts; is there some way to use MessageProperties to do the replacement? The class you're looking for is java.text.MessageFormat; specifically, calling

```
MessageFormat.format("{0} This {1} a test", new Object[] {"Yes!!!", "is"});
```

or

```
MessageFormat.format("{0} This {1} a test", "Yes!!!", "is");
```

will return

```
"Yes!!! This is a test"
```

[Unfortunately, I can't help with the Struts connection, although [this](http://www.jguru.com/faq/view.jsp?EID=915891) looks relevant.]
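(For comparison only, not from the original answer: Python's `str.format` performs the same positional substitution as `MessageFormat.format`.)

```python
# Positional placeholders, analogous to java.text.MessageFormat.
template = "{0} This {1} a test"
result = template.format("Yes!!!", "is")
```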
I'm not overly familiar with Tomcat, but my team has inherited a complex project that revolves around a Java Servlet being hosted in Tomcat across many servers. Custom configuration management software is used to write out the server.xml, and various resources (connection pools, beans, server variables, etc) written into server.xml configure the servlet. This is all well and good. However, the names of some of the resources aren't known in advance. For example, the Servlet may need access to any number of "Anonymizers" as configured by the operator. Each anonymizer has a unique name associated with it. We create and configure
each anonymizer using Java beans similar to the following:

```
<Resource name="bean/Anonymizer_toon"
          type="com.company.tomcatutil.AnonymizerBean"
          factory="org.apache.naming.factory.BeanFactory"
          className="teAnonymizer"
          databaseId="50" />
<Resource name="bean/Anonymizer_default"
          type="com.company.tomcatutil.AnonymizerBean"
          factory="org.apache.naming.factory.BeanFactory"
          className="teAnonymizer"
          databaseId="54" />
```

However, this appears to require us to have explicit entries in the Servlet's context.xml file for each and every possible resource name in advance. I'd like to replace the explicit context.xml entries with wildcards, or know if there is a better solution to this type of problem. Currently:

```
<ResourceLink name="bean/Anonymizer_default"
              global="bean/Anonymizer_default"
              type="com.company.tomcatutil.AnonymizerBean"/>
<ResourceLink name="bean/Anonymizer_toon"
              global="bean/Anonymizer_toon"
              type="com.company.tomcatutil.AnonymizerBean"/>
```

Replaced with something like:

```
<ResourceLink name="bean/Anonymizer_*"
              global="bean/Anonymizer_*"
              type="com.company.tomcatutil.AnonymizerBean"/>
```

However, I haven't been able to figure out if this is possible or what the correct syntax might
be. Can anyone make any suggestions about better ways to handle this? I've had some luck with [the Java Service Wrapper](http://wrapper.tanukisoftware.org/doc/english/introduction.html)
Is it possible to get gdb, or some other tool, to create a core dump of a running process and its symbol table? It would be great if there's a way to do this without terminating the process. If this is possible, what commands would you use? (I'm trying to do this on a Linux box.)

```
$ gdb --pid=26426
(gdb) gcore
Saved corefile core.26426
(gdb) detach
```
Let’s say I'm developing a helpdesk application that will be used by multiple departments. Every URL in the application will include a key indicating the specific department. The key will always be the first parameter of every action in the system. For example ``` http://helpdesk/HR/Members http://helpdesk/HR/Members/PeterParker http://helpdesk/HR/Categories http://helpdesk/Finance/Members http://helpdesk/Finance/Members/BruceWayne http://helpdesk/Finance/Categories ``` The problem is that in each action on each request, I have to take this parameter and then retrieve the Helpdesk Department model from the repository based on that key. From that model I can retrieve the list of members, categories etc., which is different for each Helpdesk Department. This obviously violates DRY. My question is, how can I
create a base controller which does this for me, so that the particular Helpdesk Department specified in the URL is available to all derived controllers, and I can just focus on the actions? I have a similar scenario in one of my projects, and I'd tend to use a ModelBinder rather than a separate inheritance hierarchy. You can make a ModelBinder attribute to fetch the entity/entities from the RouteData:

```
public class HelpdeskDepartmentBinder : CustomModelBinderAttribute, IModelBinder
{
    public override IModelBinder GetBinder()
    {
        return this;
    }

    public object GetValue(ControllerContext controllerContext, string modelName,
                           Type modelType, ModelStateDictionary modelState)
    {
        // ... extract the appropriate value from RouteData and fetch the
        // corresponding entity from the database.
    }
}
```

...then you can use it to make the HelpdeskDepartment available to all your actions:

```
public class MyController : Controller
{
    public ActionResult Index([HelpdeskDepartmentBinder] HelpdeskDepartment department)
    {
        return View();
    }
}
```
When defining a method on a class in Python, it looks something like this:

```
class MyClass(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
```

But in some other languages, such as C#, you have a reference to the object that the method is bound to with the "this" keyword without declaring it as an argument in the method prototype. Was this an intentional language design decision in Python, or are there some implementation details that require the passing of "self" as an argument? I like to quote Peters' Zen of Python: "Explicit is better than implicit." In Java and C++, '`this.`' can be deduced, except when you have variable names that make it impossible to deduce. So you sometimes need it and sometimes don't. Python elects to make things like this explicit rather than based on a rule. Additionally, since nothing is implied or assumed, parts of the implementation are exposed. `self.__class__`, `self.__dict__` and other "internal" structures are available in an obvious way.
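A small illustration of that explicitness: `self` is an ordinary first parameter, and the "internal" structures it exposes are plain attributes.

```python
class MyClass(object):
    def __init__(self, x, y):
        self.x = x  # 'self' is just the first parameter, named by convention
        self.y = y

obj = MyClass(1, 2)

# Nothing is implicit, so the internals are ordinary attributes:
cls_name = obj.__class__.__name__  # the class, reachable from the instance
attrs = obj.__dict__               # the instance's attribute dictionary

# A bound method call is sugar for passing the instance explicitly:
MyClass.__init__(obj, 10, 20)      # equivalent to obj.__init__(10, 20)
```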
Working on a project at the moment and we have to implement soft deletion for the majority of users (user roles). We decided to add an `is_deleted='0'` field on each table in the database and set it to `'1'` if particular user roles hit a delete button on a specific record. For future maintenance now, each `SELECT` query will need to ensure they do not include records `where is_deleted='1'`. Is there a better solution for implementing soft deletion? Update: I should also note that we have an Audit database that tracks changes (field, old value, new value, time, user, ip) to all tables/fields
within the Application database. You could perform all of your queries against a view that contains the `WHERE IS_DELETED='0'` clause.
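A sketch of the view approach, using SQLite via Python so it is runnable (table and column names are illustrative): the `is_deleted = '0'` filter is written once, in the view definition, and every query that goes through the view gets it for free.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, is_deleted TEXT DEFAULT '0');
    INSERT INTO users (name, is_deleted) VALUES ('alice', '0');
    INSERT INTO users (name, is_deleted) VALUES ('bob', '1');   -- soft-deleted

    -- The filter lives in one place, not in every SELECT.
    CREATE VIEW active_users AS SELECT * FROM users WHERE is_deleted = '0';
""")

visible = [name for (name,) in conn.execute("SELECT name FROM active_users")]
```

Soft-deleted rows stay in the base table (and in the audit trail), but routine queries against the view never see them.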
At what level of complexity is it mandatory to switch to an existing framework for web development? What measurement of complexity is practical for web development? Code length? Feature list? Database Size? If you work on several different sites then by using a common framework across all of them you can spend time working on the code rather than trying to remember what is located where and why. I'd always use a framework of some sort, even if it's your own, as the uniformity will help you structure your project. Unless it's a one page static HTML project. There is no mandatory limit however.
Using the svnmerge.py tool it is possible to merge between branches, up and down. It is hard to find the details for doing this. Hopefully, v1.5 will have a neat method for doing this without using svnmerge.py - details requested! It looks like you're asking about 1.5 merge tracking. Here's a quick overview for doing merges to/from trunk (or another branch): <http://blog.red-bean.com/sussman/?p=92>
If I use the following code I lose the ability to right click on variables in the code behind and refactor (rename in this case) them ``` <a href='<%# "/Admin/Content/EditResource.aspx?ResourceId=" + Eval("Id").ToString() %>'>Edit</a> ``` I see this practice everywhere but it seems weird to me as I no longer am able to get compile time errors if I change the property name. My preferred approach is to do something like this ``` <a runat="server" id="MyLink">Edit</a> ``` and then in the code behind ``` MyLink.Href= "/Admin/Content/EditResource.aspx?ResourceId=" + myObject.Id; ``` I'm really interested to hear if people think the above approach is better since that's what I always see on popular coding sites and blogs
(e.g. Scott Guthrie) and it's less code, but I tend to use ASP.NET because it is compiled and I prefer to know at compile time, not run time, if something is broken. I wouldn't call it bad practice (some would disagree, but why did they give us that option in the first place?), but I would say that you'll improve overall readability and maintainability if you do not submit to this practice. You already made a good point, and that is the IDE feature limitation (i.e., design-time inspection, compile-time warnings, etc.). I could go on and on about how many principles
it violates (code reuse, separation of concerns, etc.), but I can think of many applications out there that break nearly every principle yet still work after several years. I, for one, prefer to make my code as modular and maintainable as possible.
I like the XMLReader class for its simplicity and speed. But I like the xml\_parse associated functions as they better allow for error recovery. It would be nice if the XMLReader class would throw exceptions for things like invalid entity refs instead of just issuing a warning. I'd avoid SimpleXML if you can. Though it looks very tempting by letting you avoid a lot of "ugly" code, it's just what the name suggests: simple. For example, it can't handle this:

```
<p>
 Here is <strong>a very simple</strong> XML document.
</p>
```

Bite the bullet and go to the DOM functions. The power of
the DOM far outweighs the little bit of extra complexity. If you're familiar at all with DOM manipulation in JavaScript, you'll feel right at home with this library.
Often, I find myself wanting to write a unit test for a portion of code that accesses HTTP resources as part of its normal function. Have you found any good ways to write these kinds of tests? Extract the part that accesses the HTTP resources out of your main code. Create an interface for that new component, In your test, mock the interface and return data that you can control reliably. You can test the HTTP access as an integration test.
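A small sketch of that pattern in Python using `unittest.mock` (the `HttpFetcher` interface and `get_temperature` function are invented here for illustration):

```python
from unittest import mock

# Production code depends on a narrow interface, not on urllib/requests directly.
class HttpFetcher:
    def fetch_json(self, url):
        raise NotImplementedError  # the real implementation makes the network call

def get_temperature(fetcher, city):
    """Business logic under test: it never touches the network itself."""
    data = fetcher.fetch_json(f"https://api.example.com/weather?city={city}")
    return data["temp_c"]

# In the unit test, mock the interface and return data you control reliably.
fake = mock.Mock(spec=HttpFetcher)
fake.fetch_json.return_value = {"temp_c": 21.5}

assert get_temperature(fake, "Oslo") == 21.5
fake.fetch_json.assert_called_once_with("https://api.example.com/weather?city=Oslo")
```

The real `HttpFetcher` implementation, which actually performs the HTTP request, is then exercised separately in a slower integration test.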
I would like to parse a string such as `p1=6&p2=7&p3=8` into a `NameValueCollection`. What is the most elegant way of doing this when you don't have access to the `Page.Request` object? There's a built-in .NET utility for this: [HttpUtility.ParseQueryString](http://msdn.microsoft.com/en-us/library/ms150046.aspx) ```cs // C# NameValueCollection qscoll = HttpUtility.ParseQueryString(querystring); ``` ```vb ' VB.NET Dim qscoll As NameValueCollection = HttpUtility.ParseQueryString(querystring) ``` You may need to replace `querystring` with `new Uri(fullUrl).Query`.
Is there any performance difference between tuples and lists when it comes to instantiation and retrieval of elements? The [`dis`](https://docs.python.org/3/library/dis.html) module disassembles the byte code for a function and is useful to see the difference between tuples and lists. In this case, you can see that accessing an element generates identical code, but that assigning a tuple is much faster than assigning a list.

```
>>> def a():
...     x=[1,2,3,4,5]
...     y=x[2]
...
>>> def b():
...     x=(1,2,3,4,5)
...     y=x[2]
...
>>> import dis
>>> dis.dis(a)
  2           0 LOAD_CONST               1 (1)
              3 LOAD_CONST               2 (2)
              6 LOAD_CONST               3 (3)
              9 LOAD_CONST               4 (4)
             12 LOAD_CONST               5 (5)
             15 BUILD_LIST               5
             18 STORE_FAST               0 (x)

  3          21 LOAD_FAST                0 (x)
             24 LOAD_CONST               2 (2)
             27 BINARY_SUBSCR
             28 STORE_FAST               1 (y)
             31 LOAD_CONST               0 (None)
             34 RETURN_VALUE
>>> dis.dis(b)
  2           0 LOAD_CONST               6 ((1, 2, 3, 4, 5))
              3 STORE_FAST               0 (x)

  3           6 LOAD_FAST                0 (x)
              9 LOAD_CONST               2 (2)
             12 BINARY_SUBSCR
             13 STORE_FAST               1 (y)
             16 LOAD_CONST               0 (None)
             19 RETURN_VALUE
```
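To put rough numbers on the disassembly, a quick `timeit` comparison (absolute times vary by machine and interpreter; the list/tuple gap is what matters):

```python
import timeit

# A tuple literal of constants is a single cached LOAD_CONST, while a
# list literal must be rebuilt from scratch on every execution.
list_time = timeit.timeit("x = [1, 2, 3, 4, 5]", number=1_000_000)
tuple_time = timeit.timeit("x = (1, 2, 3, 4, 5)", number=1_000_000)
print(f"build list:  {list_time:.3f}s  build tuple: {tuple_time:.3f}s")

# Retrieval compiles to the same BINARY_SUBSCR in both cases,
# so indexing times are essentially identical.
list_get = timeit.timeit("x[2]", setup="x = [1, 2, 3, 4, 5]", number=1_000_000)
tuple_get = timeit.timeit("x[2]", setup="x = (1, 2, 3, 4, 5)", number=1_000_000)
print(f"index list:  {list_get:.3f}s  index tuple: {tuple_get:.3f}s")
```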
Suppose I have: 1. Toby 2. Tiny 3. Tory 4. Tily Is there an algorithm that can easily create a list of common characters in the same positions in all these strings? (in this case the common characters are 'T' at position 0 and 'y' at position 3) I tried looking at some of the algorithms used for DNA sequence matching but it seems most of them are just used for finding common substrings regardless of their positions. Finding a list of characters that are common in ALL strings at a certain position is trivially simple. Just iterate on each string for each character position 1 character
position at a time. If any string's character does not match its closest neighbour string's character, then the position does not contain a common character. For any x = 0 to length - 1: once you find Si[x] != Si+1[x], you can skip to the next position x+1, where Si is the ith string in the list and [x] is the character at position x.
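That position-by-position comparison can be written directly in Python (the function name is mine):

```python
def common_positions(strings):
    """Return (index, char) pairs where every string has the same character."""
    if not strings:
        return []
    # Only positions that exist in every string can be common.
    shortest = min(len(s) for s in strings)
    result = []
    for x in range(shortest):
        ch = strings[0][x]
        # As soon as any string disagrees, move on to the next position.
        if all(s[x] == ch for s in strings[1:]):
            result.append((x, ch))
    return result

print(common_positions(["Toby", "Tiny", "Tory", "Tily"]))  # → [(0, 'T'), (3, 'y')]
```

This is O(n·m) for n strings of length m, which is as good as it gets since every character must be inspected at least once in the worst case.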
During a discussion about security, a developer on my team asked if there was a way to tell if the viewstate has been tampered with. I'm embarrassed to say that I didn't know the answer. I told him I would find out, but thought I would give someone on here a chance to answer first. I know there is some automatic validation, but is there a way to do it manually if event validation is not enabled? Look at the `EnableViewStateMac` page directive: when it is enabled, ASP.NET appends a message authentication code to the viewstate and rejects any viewstate that fails validation on postback.
[Nevrona Designs'](http://www.nevrona.com/) [Rave Reports](http://www.nevrona.com/Products/RaveReports/StandardEdition/tabid/66/Default.aspx) is a Report Engine for use by [Embarcadero's](http://www.embarcadero.com/) [Delphi](http://www.codegear.com/products/delphi/win32) IDE. This is what I call the Rave Endless Loop bug. In Rave Reports version 6.5.0 (VCL10) that comes bundled with Delphi 2006, there is a notorious bug that plagues many Rave report developers. If you have a non-empty dataset, and the data rows for this dataset fit exactly into a page (that is to say there are zero widow rows), then upon PrintPreview, Rave will get stuck in an infinite loop generating pages. This problem has been previously reported in this newsgroup under the following headings:

1. "error: generating infinite pages"; Hugo Hiram 20/9/2006 8:44PM
2. "Rave loop bug. Please help"; Tomas Lazar 11/07/2006 7:35PM
3. "Loop on full page of data?"; Tony Chistiansen 23/12/2004 3:41PM
4. reply to (3) by another complainant; Oliver Piche
5. "Endless lopp print bug"; Richso 9/11/2004 4:44PM

In each of these postings, there was no response from Nevrona, and no solution was reported. Possibly, the problem has also been reported on an allied newsgroup (nevrona.public.rave.reports.general), to wit:

6. "Continuously generating report"; Jobard 20/11/2005

Although it is not clear to me if (6) is the Rave Endless Loop bug or another problem. This posting did get a reply from Nevrona, but it was more in relation to multiple regions ("There is a problem when using multiple regions that go over a page-break.") than the problem of zero widows. I first posted this work-around on the Nevrona newsgroup (Group=nevrona.public.rave.developer.delphi.rave; Subject="Are you suffering from the Rave Endless Loop bug?: Work-around announced."; Date=13/11/2006 7:06 PM). So here is my solution. It is more of a work-around than a good long-term solution, and I hope that Nevrona will give this issue some serious attention in the near future.

1. Given your particular report layout, count the maximum number of rows per page. Let us say that this is 40.
2. Set up a counter to count the rows within the page (as opposed to rows within the whole report). You could do this either by event script or by a CalcTotal component.
3. Define an OnBeforePrint scripted event handler for the main data band.
4. In this event handler, set the FinishNewPage property of the main data band to True when the row-per-page count is one or two below the max (in our example, this would be 38), and set it to False in all other cases.

The effect of this is to give every page a non-zero number of widows (in this case 1..38), thus