Reflector tells me that SortedList uses a ThrowHelper class to throw exceptions instead of throwing them directly, for example: ``` public TValue this[TKey key] { get { int index = this.IndexOfKey(key); if (index >= 0) return this.values[index]; ThrowHelper.ThrowKeyNotFoundException(); return default(TValue); } } ``` where ThrowKeyNotFoundException does nothing more than just: ``` throw new KeyNotFoundException(); ``` Note how this requires a duff statement "return default(TValue)" which is unreachable. I must conclude that this is a pattern with benefits large enough to justify this. What are these benefits?
According to [ThrowHelper.cs](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Private.CoreLib/src/System/ThrowHelper.cs) source code the main purpose is to reduce the JITted code size. Below is a direct copy paste from the link: ``` // This file defines an internal class used to throw exceptions in BCL code. // The main purpose is to reduce code size. // // The old way to throw an exception generates quite a lot IL code and assembly code. // Following is an example: // C# source // throw new ArgumentNullException("key", Environment.GetResourceString("ArgumentNull_Key")); // IL code: // IL_0003: ldstr "key" // IL_0008: ldstr "ArgumentNull_Key" // IL_000d: call string System.Environment::GetResourceString(string) // IL_0012: newobj instance void System.ArgumentNullException::.ctor(string,string) // IL_0017: throw // which is 21bytes in IL. // // So we want to get rid of the ldstr and call to Environment.GetResource in IL. // In order to do that, I created two enums: ExceptionResource, ExceptionArgument to represent the // argument name and resource name in a small integer. The source code will be changed to // ThrowHelper.ThrowArgumentNullException(ExceptionArgument.key, ExceptionResource.ArgumentNull_Key); // // The IL code will be 7 bytes. // IL_0008: ldc.i4.4 // IL_0009: ldc.i4.4 // IL_000a: call void System.ThrowHelper::ThrowArgumentNullException(valuetype System.ExceptionArgument) // IL_000f: ldarg.0 // // This will also reduce the Jitted code size a lot. ```
Look at what ThrowHelper does. It gets resources and stuff for the error messages. In this particular instance, there's no error text, so it seems like it's useless, but their pattern probably requires it, so the developer who wrote it followed the pattern like s/he should.
Why does SortedList implementation use ThrowHelper instead of throwing directly?
[ "c#" ]
Is there any way to manually decorate function names in MS C++? I'm going to guess any solution would be Microsoft-specific, and I'm cool with that. Alternatively, is there a way to declare the function "`??_M@YGXPAXIHP6EX0@Z@Z`" so the compiler can understand it? FYI, `undname` lists it as: `` void _ _ stdcall `eh vector destructor iterator'(void * , unsigned int, int, void (_ _thiscall * )(void * )) `` Obviously, the compiler is not happy with apostrophes in my identifiers. **EDIT:** The solution was actually extremely simple. I compiled the .obj files, and hexedited the desired decorated names right over my placeholder names, padding them with nulls. The linker ate it right up, no questions asked. The functions are stubs, calling external functions so I don't have to keep hexediting every time I make a small change. Thanks for the ideas everyone.
If you declare your function as *extern "C"* and use the linker option that disables underscore prefix, the name you give in the code is the name you'll have in the .obj file. But this won't solve all your problems, since the compiler will balk when you try to declare a function with special characters in the name. I guess the solution (which is a lot of work IMO) is to give to the desired functions unique names in the source code (like PLACEHOLDER\_01) and then replace these in the .obj file. You'll need somehow to parse the object file and change the symbol names. Notice also that `eh vector destructor iterator' is a mangled name, the real (symbol) name shouldn't have apostrophes or spaces in it. When dumping the .obj/.dll, you'll have to disable name-mangling.
> `eh vector destructor iterator' You sure the apostrophes[sic] are part of the identifier? Did you try to use the function without them? Looks like they exist to improve readability. > Alternatively, is there a way to declare the function "??\_M@YGXPAXIHP6EX0@Z@Z" I seriously doubt if you can 'declare' a mangled name if we mean the same thing by declaration. Declaration is pre-compilation, mangling occurs much later. > ??\_M@YGXPAXIHP6EX0@Z@Z This is a mangled name. All symbols with C++ linkage are mangled. This is sort of a hash of the signature, return type and calling convention information. Read more on [MSDN](http://msdn.microsoft.com/en-us/library/2ax8kbk1(VS.80).aspx).
How to manually decorate linker names in MSVC 2005?
[ "c++", "linker" ]
I want to suppress specific warnings from g++. I'm aware of the `-Wno-XXX` flag, but I'm looking for something more specific. I want **some** of the warnings in `-Weffc++`, but not **all** of them. Something like what you can do with lint - disable specific messages. Is there a built in way in gcc to do this? Do I have to write a wrapper script?
~~Unfortunately, this feature isn't provided by g++.~~ In VC++, you could use [#pragma warning](http://msdn.microsoft.com/en-us/library/2c8f766e(VS.80).aspx) to disable some specific warnings. In gcc, the closest you can have is [diagnostic pragmas](http://gcc.gnu.org/onlinedocs/gcc/Diagnostic-Pragmas.html), which let you enable/disable certain types of diagnostics for certain files or projects. *Edit*: GCC supports pushing/popping warnings since 4.6.4 (see [changelog](https://gcc.gnu.org/gcc-4.6/changes.html))
For some warnings, there is a command line switch to disable them. In order to know which switch to use, pass `-fdiagnostics-show-option` to gcc.
How to suppress specific warnings in g++
[ "c++", "gcc", "g++" ]
I want to build a django site where a certain group of trusted users can edit their profile information. Does it make sense to have each trusted user go through the django admin interface? I'd only want them to be able to see and edit their own information (obviously). It doesn't seem like this fits the way the django people define "trust", especially the bolded bit... From [The Django Book, Chapter 18](http://74.125.95.132/search?q=cache:fw9E1c2Oa7AJ:www.djangobook.com/en/beta/chapter18/+django+admin+use+cases&hl=en&ct=clnk&cd=2&gl=ca&client=firefox-a): > The admin is designed to be used by > people who you, the developer, trust. > This doesn’t just mean “people who > have been authenticated;” it means > that Django assumes that your content > editors can be trusted to do the right > thing. > > This means that there’s no “approval” > process for editing content — if you > trust your users, nobody needs to > approve of their edits. It also means > that the permission system, while > powerful, has no support for limiting > access on a per-object basis. **If you > trust someone to edit their own > stories, you trust them not to edit > anyone else’s without permission.** Is this one of those use cases that fits with django's admin module, or is it just a specialized view for a non-trusted user?
No, the Django admin is not suited for individual user profiles: each user would be able to see, and edit, all other user profiles. This is suited more to an administrator who has to manage all the users at once. What you need to build is a user profile page. Django already has a nice login system courtesy of the django.contrib.auth module. You can easily integrate this into your pages, and it's exactly what the Django admin uses to authenticate users. Next you'll have to build a simple page that exposes that specific user's profile information based on their User model. This should be relatively painless as it will only require one view and one template, and the template can take advantage of ModelForms.
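The per-object restriction that the admin lacks is straightforward to enforce in your own view. A framework-free sketch of the idea (the names and the in-memory `profiles` dict are illustrative, not Django API):

```python
class PermissionDenied(Exception):
    """Raised when a user tries to edit a profile they do not own."""

def edit_profile(request_user_id, profile_owner_id, updates, profiles):
    """Apply updates to a profile only if the requester owns it."""
    if request_user_id != profile_owner_id:
        raise PermissionDenied("users may only edit their own profile")
    profiles[profile_owner_id].update(updates)
    return profiles[profile_owner_id]
```

In a real Django view, the requester would come from `request.user` and the profile from the ORM; the ownership check is the only extra logic the admin cannot express per-object.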
I would suggest you create a Person model which contains a OneToOneField to the User model (the admin site's User model), somewhat like this: ``` from django.contrib.auth.models import User from django.db import models class Person(models.Model): """The person class FKs to the User class and contains additional user information including userImage, country, etc""" user = models.OneToOneField(User, related_name='person_fk') url = models.URLField(max_length=255, blank=True) country = models.CharField(max_length=2, blank=True) state = models.CharField(max_length=50, blank=True) zipCode = models.IntegerField(blank=True, null=True) userImage = models.ImageField(upload_to=generate_filename, blank=True, null=True) # generate_filename is your own upload-path callable ```
Valid use case for django admin?
[ "python", "django", "django-admin" ]
Consider the following 2 tables: ``` Table A: id event_time Table B: id start_time end_time ``` Every record in table A is mapped to exactly 1 record in table B. This means table B has no overlapping periods. Many records from table A can be mapped to the same record in table B. I need a query that returns all A.id, B.id pairs. Something like: ``` SELECT A.id, B.id FROM A, B WHERE A.event_time BETWEEN B.start_time AND B.end_time ``` I am using MySQL and I cannot optimize this query. With ~980 records in table A and 130,000 in table B this takes forever. I understand this has to perform 980 lookups, but taking more than 15 minutes on a beefy machine is strange. Any suggestions? P.S. I cannot change the database schema, but I can add indexes. However, an index (with 1 or 2 fields) on the time fields doesn't help.
You may want to try something like this ``` Select A.ID, (SELECT B.ID FROM B WHERE A.EventTime BETWEEN B.start_time AND B.end_time LIMIT 1) AS B_ID FROM A ``` If you have an index on the Start\_Time,End\_Time fields for B, then this should work quite well.
I'm not sure this can be optimized fully. I tried it on MySQL 5.1.30. I also added an index on `{B.start_time, B.end_time}` as suggested by other folks. Then I got a report from `EXPLAIN`, but the best I could get is a [Range Access Method](http://dev.mysql.com/doc/refman/5.1/en/range-access-multi-part.html): ``` EXPLAIN SELECT A.id, B.id FROM A JOIN B ON A.event_time BETWEEN B.start_time AND B.end_time; +----+-------------+-------+------+---------------+------+---------+------+------+------------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+------+---------------+------+---------+------+------+------------------------------------------------+ | 1 | SIMPLE | A | ALL | event_time | NULL | NULL | NULL | 8 | | | 1 | SIMPLE | B | ALL | start_time | NULL | NULL | NULL | 96 | Range checked for each record (index map: 0x4) | +----+-------------+-------+------+---------------+------+---------+------+------+------------------------------------------------+ ``` See the note on the far right. The optimizer thinks it *might* be able to use the index on `{B.start_time, B.end_time}` but it ended up deciding not to use that index. Your results may vary, because your data distribution is more representative. 
Compare with the index usage if you compare `A.event_time` to a constant range: ``` EXPLAIN SELECT A.id FROM A WHERE A.event_time BETWEEN '2009-02-17 09:00' and '2009-02-17 10:00'; +----+-------------+-------+-------+---------------+------------+---------+------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+-------+---------------+------------+---------+------+------+-------------+ | 1 | SIMPLE | A | range | event_time | event_time | 8 | NULL | 1 | Using where | +----+-------------+-------+-------+---------------+------------+---------+------+------+-------------+ ``` And compare with the dependent sub-query form given by @Luke and @Kibbee, which seems to make use of indexes more effectively: ``` EXPLAIN SELECT A.id AS id_from_a, ( SELECT B.id FROM B WHERE A.id BETWEEN B.start_time AND B.end_time LIMIT 0, 1 ) AS id_from_b FROM A; +----+--------------------+-------+-------+---------------+---------+---------+------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+--------------------+-------+-------+---------------+---------+---------+------+------+-------------+ | 1 | PRIMARY | A | index | NULL | PRIMARY | 8 | NULL | 8 | Using index | | 2 | DEPENDENT SUBQUERY | B | ALL | start_time | NULL | NULL | NULL | 384 | Using where | +----+--------------------+-------+-------+---------------+---------+---------+------+------+-------------+ ``` Weirdly, EXPLAIN lists `possible_keys` as NULL (i.e. no indexes could be used) but then decides to use the primary key after all. Could be an idiosyncrasy of MySQL's EXPLAIN report?
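For intuition about why the non-overlapping guarantee matters: it means each `A.event_time` needs only one binary search over B's sorted start times, rather than a scan of B. A sketch of that access pattern in Python (illustrative of what an ideal plan would do, not of MySQL itself):

```python
import bisect

# Non-overlapping periods standing in for "table B", sorted by start time.
periods = [(1, 0, 9), (2, 10, 19), (3, 20, 29)]  # (id, start, end)
starts = [p[1] for p in periods]

def lookup(event_time):
    """Return the B.id whose [start, end] contains event_time, or None."""
    i = bisect.bisect_right(starts, event_time) - 1
    if i >= 0 and periods[i][1] <= event_time <= periods[i][2]:
        return periods[i][0]
    return None
```

This is O(log n) per row of A; a plan that instead scans B for every row of A is what produces the 15-minute runtimes.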
Optimize SQL that uses between clause
[ "sql", "mysql", "query-optimization" ]
As is well known, in XHR (aka AJAX) web applications no history for your app is built and clicking the refresh button often moves the user out of his/her current activity. I stumbled upon location.hash (e.g. `http://anywhere/index.html#somehashvalue`) to circumvent the refresh problem (use location.hash to inform your app of its current state and use a page load handler to reset that state). It's really nice and simple. This brought me to thinking about using location.hash to track the history of my app. I don't want to use existing libraries, because they use iframes etc. So here's my nickel and dime: when the application page loads I start this: ``` setInterval( function(){ if (location.hash !== appCache.currentHash) { appCache.currentHash = location.hash; appCache.history.push(location.hash); /* ... [load state using the hash value] ... */ return true; } return false; }, 250 ); ``` (*appCache* is a predefined object containing application variables) The idea is to trigger every action in the application from the hash value. In decent browsers a hash value change adds an entry to the history, in IE (<= 7) it doesn't. In all browsers, navigating back or forward to a page with another hash value doesn't trigger a page refresh. That's where the intervalled function takes over. With the function, every time a hash value change is detected (programmatically, or by clicking back or forward) the app can take appropriate action. The application can keep track of its own history and I should be able to present history buttons in the application (especially for IE users). As far as I can tell this works cross browser and there's no cost in terms of memory or processor resources. So my question is: would this be a viable solution to manage the history in XHR-apps? What are the pros and cons? Update: because I use my homebrew framework, I didn't want to use one of the existing frameworks.
To be able to use location.hash in IE and have it in its history too, I created a simple script (yes, it needs an iframe) which may be of use to you. I published it [on my site](http://www.nicon.nl/hash2history/), feel free to use/modify/criticize it.
I think you'll have a tricky time knowing if a user went forward or back. Say the url starts /myapp#page1 so you start tracking states. Then the user does something to make the url /myapp#page2 Then the user does something to make the url /myapp#page1 again. Now their history is ambiguous and you won't know what to remove or not. The history frameworks use iframes to get around the browser inconsistencies you mentioned. You only need to use iframes in the browsers that need them. Another con is that users will always go for their browsers back button before they will go for your custom back button. I have a feeling the delay on reading the history every 250ms will be noticeable too. Maybe you can do the interval even tighter, but then I don't know if that'll make things perform badly. I've used yui's history manager, and although it doesn't work perfectly all the time in all browsers (especially ie6), it's been used by a lot of users and developers. The pattern they use is pretty flexible too.
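The back/forward ambiguity is easy to demonstrate with a toy model of the bookkeeping (`recordHashChange` is a hypothetical helper, not part of any library): a "did we go back?" heuristic cannot distinguish pressing Back from navigating forward to a hash that happens to equal the previous one.

```javascript
// Hypothetical hash-history bookkeeping with a naive heuristic:
// if the new hash equals the entry just below the top of the stack,
// guess "back"; otherwise treat the change as forward navigation.
function recordHashChange(stack, newHash) {
  if (stack.length >= 2 && stack[stack.length - 2] === newHash) {
    stack.pop(); // assume Back: discard the abandoned state
    return 'back';
  }
  stack.push(newHash); // assume forward navigation
  return 'forward';
}

const stack = ['#page1'];
recordHashChange(stack, '#page2');
const guess = recordHashChange(stack, '#page1');
// guess is 'back' even if the user actually navigated to #page1 again,
// in which case the app's notion of its history is now wrong. The hash
// change alone cannot distinguish the two cases.
```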
There are 3 issues that tend to get munged together by most solutions: 1. back button 2. bookmarkability 3. refresh button The `window.location.hash` based solutions can solve all three for most cases: the value in the `hash` maps to a state of the application/webpage, so a user can press one of "back"/"forward"/"refresh" and jump to the state now in the hash. They can also bookmark because the value in the address bar has changed. (Note that a hidden `iframe` is needed for IE because hash changes don't affect the browser's history there.) I just wanted to note however that an iframe-only solution can be used without monitoring `window.location.hash` for a very effective solution too. Google maps is a great example of this. The state captured for each user action is way too large to be placed into window.location.hash (map centroid, search results, satellite vs map view, info windows, etc). So they save state into a form embedded in a hidden `iframe`. Incidentally this solves the [soft] "refresh" issue too. They solve bookmarkability separately via a "Link to this page" button. I just thought it's worth knowing/separating the problem domains you are thinking about.
Is monitoring location.hash a solution for history in XHR apps?
[ "javascript", "ajax", "browser-history", "xmlhttprequest" ]
Say I have an IEnumerable. For example, {2,1,42,0,9,6,5,3,8}. I need to get "runs" of items that match a predicate. For example, if my predicate was ``` bool isSmallerThanSix(int number){...} ``` I would want to get the following output: {{2,1},{0},{5,3}} Is there a built-in function that accomplishes this? So far I have this: ``` public static IEnumerable<IEnumerable<T>> GetSequences<T>(this IEnumerable<T> source, Func<T, bool> selector) { if (source == null || selector == null) { yield break; } IEnumerable<T> rest = source.SkipWhile(obj => !selector(obj)); while (rest.Count() > 0) { yield return rest.TakeWhile(obj => selector(obj)); rest = rest .SkipWhile(obj => selector(obj)) .SkipWhile(obj => !selector(obj)); } } ``` which seems to work, but was written by me in the middle of the night and thus inefficient fifteen ways from Tuesday. Is there a better, preferably built-in (and therefore well-tested) way? Thank y'all so much for your time, Ria.
There is not a built-in method as far as I'm aware. However, calling the `Count` extension method on an `IEnumerable` isn't very efficient as it has to enumerate the list to get the count. Therefore, I've come up with this that has the same effect. ``` public static IEnumerable<IEnumerable<T>> GetSequences<T>(this IEnumerable<T> source, Func<T, bool> selector) { // omitted null checks for brevity var list = new List<T>(); foreach(var item in source) { if (selector.Invoke(item)) { list.Add(item); } else if (list.Count > 0) { yield return list; list = new List<T>(); } } if (list.Count > 0) yield return list; } ``` As Jon Skeet mentioned, the use of `SkipWhile` and `TakeWhile` also seems pretty inefficient in this case as they will create iterator upon iterator upon iterator. You can check this out when debugging your example: it goes a bit crazy as you step through it trying to find the next sequence, even though the example is simple.
I suspect your code won't actually work in all cases. In particular, SkipWhile and TakeWhile are lazily evaluated - if the calling code doesn't actually read through all of the yielded `IEnumerable<T>s` (or worse, buffers them up and reads them in a different order!) I strongly suspect you'll get the wrong results. I suspect you really need to do something like: ``` public static IEnumerable<IEnumerable<T>> GetSequences<T>( this IEnumerable<T> source, Func<T, bool> selector) { List<T> current = new List<T>(); foreach (T element in source) { if (selector(element)) { current.Add(element); } else if (current.Count > 0) { yield return current; current = new List<T>(); } } if (current.Count > 0) { yield return current; } } ``` (This ignores error checking - due to the deferred execution of iterator blocks, you'd want to do that in a separate method which then calls this method as a private one - it's a very common pattern when writing production-quality iterator blocks.) The choice of `List<T>` is somewhat arbitrary, btw - you could certainly use a `LinkedList<T>` instead, for example. Or if you *do* go for List, you could return `IEnumerable<IList<T>>` instead of `IEnumerable<IEnumerable<T>>` which *may* make it easier for callers to process the results.
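For comparison, the same single-pass, eagerly-buffered idea that the C# answers implement can be expressed in a few lines of Python with `itertools.groupby` — useful as a cross-check of the expected grouping behavior:

```python
from itertools import groupby

def get_sequences(source, predicate):
    """Yield maximal runs of consecutive items satisfying predicate."""
    for matched, run in groupby(source, key=predicate):
        if matched:
            yield list(run)

# The example from the question: runs of numbers smaller than six.
runs = list(get_sequences([2, 1, 42, 0, 9, 6, 5, 3, 8], lambda n: n < 6))
# runs == [[2, 1], [0], [5, 3]]
```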
Is there a better way to get sub-sequences where each item matches a predicate?
[ "c#", "linq" ]
In several places in our code, we notice that if running under the debugger it'll show that there's an unhandled exception in the code, however if running outside the debugger it will just ignore the exception completely as if it were caught. We have an exception handler that pops up an error submit dialog that's hooked up to Application.ThreadException and AppDomain.CurrentDomain.UnhandledException And neither of those appear to be catching them either. We also log our exceptions and nothing appears in the log. What are some possible reasons for this? Edit: It seems that it isn't dependent on the type of exception thrown, but rather where it is thrown. This was tested by just adding: ``` throw new Exception("Test Exception"); ``` It'll show up under the debugger but doesn't show up outside, so in our case it's not a ThreadAbortException or anything that's dependent on it being a specific type of exception.
One place where this can occur is when there's an exception in the UnhandledException event handler itself. An easy way to see this: in the Form.Load event handler throw any old exception, and in the Application.ThreadException event handler put something similar to the following: ``` static void Application_ThreadException(object sender, System.Threading.ThreadExceptionEventArgs e) { string b = null; int i = b.Length; } ``` Under the debugger it'll show your exception was unhandled by user code, and then after that it'll show a null reference exception in the ThreadException handler, but if you run it outside the debugger it'll just swallow the exception as if it were handled.
There are some special exceptions that don't get bubbled up or caught, it sounds like you're dealing with one of them: see [ThreadAbortException](http://msdn.microsoft.com/en-us/library/system.threading.threadabortexception.aspx)
When can an exception in a .NET WinForms app just get eaten without being caught or bubbling up to a windows exception?
[ "c#", "winforms", "unhandled-exception" ]
I would like to save my MS SQL Server 2005 stored procedures to .sql files automatically (I would prefer a tool I can call via .bat) so I don't have to click each single sproc manually and save it. I have already found SMOscript from devio IT, but it gathers all tables and sprocs, which takes some time. Is there any similar tool where I can define which sproc(s) to export? Also I'm missing the `USE <DB>` clause, which SMOscript doesn't add to the exported file, in contrast to manually exporting the sproc as a CREATE script.
Create a batch file with this script (sorry about formatting, but it really should be inline to execute as a batch): ``` osql -U %1 -P %2 -S %3 -d %4 -h-1 -Q "SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.Routines WHERE ROUTINE_TYPE = 'PROCEDURE'" -n -o "sp_list.txt" for /f %%a in (sp_list.txt) do osql -U %1 -P %2 -S %3 -d %4 -h-1 -Q "SELECT ROUTINE_DEFINITION FROM INFORMATION_SCHEMA.Routines WHERE ROUTINE_NAME = '%%a'" -n -o "%%a.sql" ``` Name it "run.bat". Now, to execute the batch use params: run.bat [username] [password] [servername] [database] for example: run.bat sa pwd111 localhost\SQLEXPRESS master First all stored procedure names are stored in the file sp\_list.txt, then the procedures are scripted one by one into separate files. The only issue - the last line of each script contains the result count - I'm workin' on it :) **edited**: bug in query fixed **Removing "Rows affected" line** Ok, now we need to create one more batch: ``` type %1 | findstr /V /i %2 > xxxtmpfile copy xxxtmpfile %1 /y /v del xxxtmpfile ``` Name it "line\_del.bat". See, the first param is the file to process, the 2nd the string to search for in lines to remove.
Now modify the main batch (again, sorry about formatting): ``` osql -U %1 -P %2 -S %3 -d %4 -h-1 -Q "SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.Routines WHERE ROUTINE_TYPE = 'PROCEDURE'" -n -o "sp_list.txt" call line_del sp_list.txt "rows affected" call line_del sp_list.txt "row affected" for /f %%a in (sp_list.txt) do osql -U %1 -P %2 -S %3 -d %4 -h-1 -Q "SELECT ROUTINE_DEFINITION FROM INFORMATION_SCHEMA.Routines WHERE ROUTINE_NAME = '%%a'" -n -o "%%a.sql" for /f %%a in (sp_list.txt) do call line_del %%a.sql "rows affected" for /f %%a in (sp_list.txt) do call line_del %%a.sql "row affected" ``` See related articles: [Simple programming commands in a batch environment](http://www.aumha.org/a/batches.php) [osql Utility](http://msdn.microsoft.com/en-us/library/aa214012(SQL.80).aspx) [MSSQL: How do you script Stored Procedure creation with code?](https://stackoverflow.com/questions/513158/mssql-how-do-you-script-stored-procedure-creation-with-code) [Delete certain lines in a txt file via a batch file](https://stackoverflow.com/questions/418916/delete-certain-lines-in-a-txt-file-via-a-batch-file) :) you may notice, last two are from SO!
There is an alternative in SQL Server Management Studio, scripting the database... Expand the Object Explorer view to find the database, right-click and select "Tasks : Generate Scripts" From there you can script all objects, just stored procedures, or anything in between. There are quite a few options on one page, though the main one I change is: - "Include IF NOT EXISTS" By setting that option to "FALSE" you just get a whole list of CREATE statements. You can then choose to script the objects to a new query window, or a file.
How to save SQL stored procedures to .sql files via batch
[ "sql", "sql-server-2005", "stored-procedures", "export" ]
What do you think is best practice when creating public header files in C++? 1. Should header files contain no, brief or massive documentation? I've seen everything from almost no documentation (relying on some external documentation) to large specifications of invariants, valid parameters, return values etc. I'm not sure exactly what I prefer: large documentation is nice since you always have access to it from your editor; on the other hand, a header file with very brief documentation can often show a complete interface on one or two pages of text, giving a much better overview of what's possible to do with a class. 2. Let's say I go with something like brief or massive documentation. I want something similar to javadoc where I document return values, parameters etc. What's the best convention for that in c++? As far as I can remember doxygen does good stuff with javadoc-style documentation, but are there any other conventions and tools for this I should be aware of before going for javadoc-style documentation?
Usually I put documentation for the interface (parameters, return value, *what* the function does) in the interface file (.h), and the documentation for the implementation (*how* the function does) in the implementation file (.c, .cpp, .m). I write an overview of the class just before its declaration, so the reader has immediate basic information. The tool I use is Doxygen.
1. I would definitely have some documentation in the header files themselves. It greatly improves debugging to have the information next to the code, and not in separate documents. As a rule of thumb, I would document the API (return values, arguments, state changes, etc) next to the code, and high-level architectural overviews in separate documents (to give a broader view of how everything is put together; it's hard to place this together with the code, since it usually references several classes at once). 2. Doxygen is fine from my experience.
C/C++ Header file documentation
[ "c++", "c", "documentation", "javadoc" ]
I've started writing an interface for FedEx's webservice APIs. They have 3 different APIs that I'm interested in: Rate, Ship, and Track. I am generating the service proxies with SvcUtil.exe. The different service endpoints are each specified by FedEx in their own WSDL files. Each service endpoint has its own XML namespace (e.g. <http://fedex.com/ws/rate/v5> and <http://fedex.com/ws/ship/v5>) The service endpoints do use quite a few identical types such as Address, Measurements, Weight, AuthenticationDetail, ClientDetail, etc... And here is where the problem lies: I can provide all the WSDL files at the same time to SvcUtil.exe and normally it would coalesce any identical types into a single shared type, but since each of FedEx's services are in their own namespace, and they redeclare these types in each WSDL file under that namespace, what I end up with instead is an Address, Address1, and Address2, one for each namespace. To solve that issue, what I do now is to run each WSDL through svcutil separately and put them each in their own .NET namespace (e.g. FedEx.Rate, FedEx.Ship, FedEx.Track). The problem with this is that now I have a distinct address type in each namespace (FedEx.Rate.Address, FedEx.Ship.Address). This makes it difficult to generalize the code used between the services, like a GetAuthenticationDetail() factory method, so I don't have to repeat that code in every place I use the different services. Is there any way in C# to coerce FedEx.Rate.Address to FedEx.Ship.Address?
If the types are identical, and you have control over the source classes, you can define a [conversion operator](http://msdn.microsoft.com/en-us/library/85w54y0a.aspx) in the class, and any function that takes a `Ship.Address` will then automatically accept a `Rate.Address` as well. For example: ``` namespace Rate { class Address { string Street; string City; // ... public static implicit operator Ship.Address(Rate.Address addr) { Ship.Address ret = new Ship.Address(); ret.Street = addr.Street; ret.City = addr.City; // ... return ret; } } } ``` My C# is a little rusty but I hope you get the idea.
So here is how I implemented the implicit conversion operators using reflection. SvcUtil creates partial classes so I added an implicit conversion operator for each direction of the conversion so in the client code you can just type `Type1 = Type2`. In this snippet WebAuthenticationCredential is a property of WebAuthenticationDetail, so while iterating the properties of the source object, if the types aren't the same (built-ins) it checks the name of the types (without the namespace) and recursively calls the copy function with those properties. ``` internal class ReflectionCopy { public static ToType Copy<ToType>(object from) where ToType : new() { return (ToType)Copy(typeof(ToType), from); } public static object Copy(Type totype, object from) { object to = Activator.CreateInstance(totype); PropertyInfo[] tpis = totype.GetProperties(BindingFlags.Public | BindingFlags.Instance); PropertyInfo[] fpis = from.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance); // Go through each property on the "to" object Array.ForEach(tpis, tpi => { // Find a matching property by name on the "from" object PropertyInfo fpi = Array.Find(fpis, pi => pi.Name == tpi.Name); if (fpi != null) { // Do the source and destination have identical types (built-ins)? if (fpi.PropertyType == tpi.PropertyType) { // Transfer the value tpi.SetValue(to, fpi.GetValue(from, null), null); } else { // If type names are the same (ignoring namespace) copy them recursively if (fpi.PropertyType.Name == tpi.PropertyType.Name) tpi.SetValue(to, Copy(tpi.PropertyType, fpi.GetValue(from, null)), null); } } }); return to; } } namespace Rate { partial class WebAuthenticationDetail { public static implicit operator Ship.WebAuthenticationDetail(WebAuthenticationDetail from) { return ReflectionCopy.Copy<Ship.WebAuthenticationDetail>(from); } } partial class WebAuthenticationCredential { public static implicit operator Ship.WebAuthenticationCredential(WebAuthenticationCredential from) { return ReflectionCopy.Copy<Ship.WebAuthenticationCredential>(from); } } } namespace Ship { partial class WebAuthenticationDetail { public static implicit operator Rate.WebAuthenticationDetail(WebAuthenticationDetail from) { return ReflectionCopy.Copy<Rate.WebAuthenticationDetail>(from); } } partial class WebAuthenticationCredential { public static implicit operator Rate.WebAuthenticationCredential(WebAuthenticationCredential from) { return ReflectionCopy.Copy<Rate.WebAuthenticationCredential>(from); } } } ```
Coerce types in different namespaces with Identical layout in C#
[ "", "c#", ".net", "web-services", "types", "coerce", "" ]
I am doing a code review on some large class libraries and I was wondering if anyone knows of an easy way to generate a list of all the methods (and possibly properties/variables too) and their access modifiers. For example, I would like something like this: ``` private MyClass.Method1() internal MyClass.Method2() public MyOtherClass.Method1() ``` Something kind of like a C++ header file, but for C#. This would put everything in one place for quick review, then we can investigate whether some methods really need to be marked as internal/public.
Yup, use reflection: ``` foreach (Type type in assembly.GetTypes()) { foreach (MethodInfo method in type.GetMethods(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance)) { Console.WriteLine("{0} {1}{2}.{3}", GetFriendlyAccess(method), method.IsStatic ? "static " : "", type.Name, method.Name); } } ``` I'll leave `GetFriendlyAccess` as an exercise to the reader - use IsFamily, IsPrivate, IsPublic, IsAssembly etc - or the Attributes property.
Well, you can certainly use reflection to enumerate the methods.
Is it possible to show all methods and their access modifiers?
[ "", "c#", "methods", "" ]
I am trying to take a person and display their current insurance along with their former insurance. I guess one could say that I'm trying to flaten my view of customers or people. I'm running into an issue where I'm getting multiple records back due to multiple records existing within my left join subqueries. I had hoped I could solve this by adding "TOP 1" to the subquery, but that actually returns nothing... Any ideas? ``` SELECT p.person_id AS 'MIRID' , p.firstname AS 'FIRST' , p.lastname AS 'LAST' , pg.name AS 'GROUP' , e.name AS 'AOR' , p.leaddate AS 'CONTACT DATE' , [dbo].[GetPICampaignDisp](p.person_id, '2009') AS 'PI - 2009' , [dbo].[GetPICampaignDisp](p.person_id, '2008') AS 'PI - 2008' , [dbo].[GetPICampaignDisp](p.person_id, '2007') AS 'PI - 2007' , a_disp.name AS 'CURR DISP' , a_ins.name AS 'CURR INS' , a_prodtype.name AS 'CURR INS TYPE' , a_t.date AS 'CURR INS APP DATE' , a_t.effdate AS 'CURR INS EFF DATE' , b_disp.name AS 'PREV DISP' , b_ins.name AS 'PREV INS' , b_prodtype.name AS 'PREV INS TYPE' , b_t.date AS 'PREV INS APP DATE' , b_t.effdate AS 'PREV INS EFF DATE' , b_t.termdate AS 'PREV INS TERM DATE' FROM [person] p LEFT OUTER JOIN [employee] e ON e.employee_id = p.agentofrecord_id INNER JOIN [dbo].[person_physician] pp ON p.person_id = pp.person_id INNER JOIN [dbo].[physician] ph ON ph.physician_id = pp.physician_id INNER JOIN [dbo].[clinic] c ON c.clinic_id = ph.clinic_id INNER JOIN [dbo].[d_Physgroup] pg ON pg.d_physgroup_id = c.physgroup_id LEFT OUTER JOIN ( SELECT tr1.* FROM [transaction] tr1 LEFT OUTER JOIN [d_vendor] ins1 ON ins1.d_vendor_id = tr1.d_vendor_id LEFT OUTER JOIN [d_product_type] prodtype1 ON prodtype1.d_product_type_id = tr1.d_product_type_id LEFT OUTER JOIN [d_commission_type] ctype1 ON ctype1.d_commission_type_id = tr1.d_commission_type_id WHERE prodtype1.name <> 'Medicare Part D' AND tr1.termdate IS NULL ) AS a_t ON a_t.person_id = p.person_id LEFT OUTER JOIN [d_vendor] a_ins ON a_ins.d_vendor_id = a_t.d_vendor_id LEFT OUTER 
JOIN [d_product_type] a_prodtype ON a_prodtype.d_product_type_id = a_t.d_product_type_id LEFT OUTER JOIN [d_commission_type] a_ctype ON a_ctype.d_commission_type_id = a_t.d_commission_type_id LEFT OUTER JOIN [d_disposition] a_disp ON a_disp.d_disposition_id = a_t.d_disposition_id LEFT OUTER JOIN ( SELECT tr2.* FROM [transaction] tr2 LEFT OUTER JOIN [d_vendor] ins2 ON ins2.d_vendor_id = tr2.d_vendor_id LEFT OUTER JOIN [d_product_type] prodtype2 ON prodtype2.d_product_type_id = tr2.d_product_type_id LEFT OUTER JOIN [d_commission_type] ctype2 ON ctype2.d_commission_type_id = tr2.d_commission_type_id WHERE prodtype2.name <> 'Medicare Part D' AND tr2.termdate IS NOT NULL ) AS b_t ON b_t.person_id = p.person_id LEFT OUTER JOIN [d_vendor] b_ins ON b_ins.d_vendor_id = b_t.d_vendor_id LEFT OUTER JOIN [d_product_type] b_prodtype ON b_prodtype.d_product_type_id = b_t.d_product_type_id LEFT OUTER JOIN [d_commission_type] b_ctype ON b_ctype.d_commission_type_id = b_t.d_commission_type_id LEFT OUTER JOIN [d_disposition] b_disp ON b_disp.d_disposition_id = b_t.d_disposition_id WHERE pg.d_physgroup_id = @PhysGroupID ```
The pattern I normally use for this is: > SELECT whatever > FROM person > LEFT JOIN subtable AS s1 > ON s1.personid = person.personid > > ... > > WHERE NOT EXISTS > ( SELECT 1 FROM subtable > WHERE personid = person.personid > AND orderbydate > s1.orderbydate > ) Which avoids the TOP 1 clause and maybe makes it a little clearer. BTW, I like the way you've put this query together in general, except I'd leave out the brackets, assuming you have rationally named tables and columns; and you might even gain some performance (but at least elegance) by listing columns for tr1 and tr2, rather than "tr1.\*" and "tr2.\*".
In SQL Server 2005 you can use OUTER APPLY ``` SELECT p.person_id, s.employee_id FROM person p OUTER APPLY (SELECT TOP 1 * FROM Employee WHERE /*JOINCONDITION*/ ORDER BY /*Something*/ DESC) s ``` <http://technet.microsoft.com/en-us/library/ms175156.aspx>
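For readers following along outside SQL: the "top 1 per person" step is just a keyed maximum. A small Python sketch (the rows and field names are invented for illustration) mirrors what `OUTER APPLY (SELECT TOP 1 ... ORDER BY date DESC)` computes:

```python
transactions = [
    {"person_id": 1, "date": "2008-01-01", "ins": "Acme"},
    {"person_id": 1, "date": "2009-06-01", "ins": "Globex"},
    {"person_id": 2, "date": "2007-03-15", "ins": "Initech"},
]

latest = {}
for t in transactions:                       # keep only the newest row per person
    cur = latest.get(t["person_id"])
    if cur is None or t["date"] > cur["date"]:
        latest[t["person_id"]] = t

assert latest[1]["ins"] == "Globex"          # i.e. TOP 1 ... ORDER BY date DESC
assert latest[2]["ins"] == "Initech"
```

Persons with no transactions simply don't appear in `latest`, which is the NULL-extended row an OUTER APPLY would give you.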
Top 1 on Left Join SubQuery
[ "", "sql", "sql-server", "" ]
[Scala Mixins](http://www.scala-lang.org/node/117)
No, but it can be faked to varying degrees with templates: ``` #include <string> #include <iostream> #include <algorithm> template<typename AbsIterator> class RichIterator : public AbsIterator { public: template<typename FuncType> void foreach(FuncType f) { while( hasNext() ) f( next() ); } }; class StringIterator { std::string m_s; std::string::size_type i; public: typedef char T; StringIterator() : m_s(), i(0) {} // Unfortunately need this, or // else RichIterator // gets way more complicated StringIterator(const std::string &s) : m_s(s), i(0) {} void swap(StringIterator& other) { m_s.swap(other.m_s); std::swap(i, other.i); } void reset_str(const std::string& s) { StringIterator(s).swap(*this); } bool hasNext() { return i < m_s.length(); } char next() { return m_s[i++]; } }; template<typename Outputable> void println(const Outputable& o) { std::cout << o << std::endl; } int main(int argc, char **argv) { typedef RichIterator<StringIterator> Iter; Iter iter; iter.reset_str(argv[1]); iter.foreach(&println<Iter::T>); } ``` To be totally honest, I haven't tested this by compiling it, but you should get the idea.
Some aspects of Scala mixins can be satisfied using multiple (virtual) inheritance. Unfortunately, this often introduces more problems than it solves. Also, you can't mix and match superclasses on the fly *a la*: ``` val me = new Human with Coder with Musician ``` If you really, *really* want true mixins, you almost have to go with something like the template solution proposed in the answer by @[Logan Capaldo](https://stackoverflow.com/questions/505686/are-there-scala-like-mixins-for-c/505757#505757).
Are there scala-like mixins for C++?
[ "", "c++", "scala", "traits", "mixins", "template-mixins", "" ]
In a PHP script, what regex should I use to check for mismatched parentheses in a string? Things that I want to allow include: * This is (ok) * This (is) (ok) Things I want to prevent: * This is )bad( * This is also (bad * This is (bad (too) Thanks! Update: You guys all rock. Doing this with a regex seemed trickier than it should have, and these kinds of 2nd level answers are what makes stackoverflow beautiful. Thanks for the links and the pseudocode. I'm not sure who to give the answer to, so I apologize to everyone whose answers I can't accept.
Regex is not the right tool for the job. Scan a string manually. Pseudo-code: ``` depth = 0 for character in some_string: depth += character == '(' depth -= character == ')' if depth < 0: break if depth != 0: print "unmatched parentheses" ```
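The pseudo-code above is nearly runnable Python already; here is a concrete version (the function name is mine), checked against the question's own examples:

```python
def parens_balanced(s):
    """Return True if every ')' closes an earlier '(' and all '(' get closed."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:      # a ')' appeared before any matching '('
                return False
    return depth == 0          # a leftover '(' means unbalanced

# The examples from the question:
assert parens_balanced("This is (ok)")
assert parens_balanced("This (is) (ok)")
assert not parens_balanced("This is )bad(")
assert not parens_balanced("This is also (bad")
assert not parens_balanced("This is (bad (too)")
```

The same single-pass counter translates directly to PHP with `strlen`/`$s[$i]` or `str_split`.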
You *can* do this with a regular expression -- PCRE, as used by PHP, allows recursive patterns. The PHP Manual gives an [example](http://php.net/manual/en/regexp.reference.php#regexp.reference.recursive) that is almost exactly what you want: ``` \(((?>[^()]+)|(?R))*\) ``` This matches any correctly parenthesised substring as long as it begins and ends with parentheses. If you want to ensure the entire string is balanced, allowing strings like "wiggedy(wiggedy)(wiggedy(wack))", here's what I came up with: ``` ^((?:[^()]|\((?1)\))*+)$ ``` Here's an explanation of the pattern, which may be more illuminating than obfuscatory: ``` ^ Beginning of the string ( Start the "balanced substring" group (to be called recursively) (?: Start the "minimal balanced substring" group [^()] Minimal balanced substring is either a non-paren character | or \((?1)\) a set of parens containing a balanced substring ) Finish the "minimal balanced substring" group * Our balanced substring is a maximal sequence of minimal balanced substrings + Don't backtrack once we've matched a maximal sequence ) Finish the "balanced substring" pattern $ End of the string ``` There are lots of considerations of efficiency and correctness that come up with these sorts of regexes. Be careful.
Regex for checking if a string has mismatched parentheses?
[ "", "php", "regex", "" ]
I have one table called **gallery**. For each row in **gallery** there are several rows in the table **picture**. One picture belongs to one gallery. Then there is the table **vote**. There each row is an upvote or a downvote for a certain gallery. Here is the (simplified) structure: ``` gallery ( gallery_id ) picture ( picture_id, picture_gallery_ref ) vote ( vote_id, vote_value, vote_gallery_ref ) ``` Now I want one query to give me the following information: All galleries with their own data fields and the number of pictures that are connected to the gallery and the sumarized value of the votes. Here is my query, but due to the multiple joining the aggregated values are not the right ones. (At least when there is more than one row of either pictures or votes.) ``` SELECT *, SUM( vote_value ) as score, COUNT( picture_id ) AS pictures FROM gallery LEFT JOIN vote ON gallery_id = vote_gallery_ref LEFT JOIN picture ON gallery_id = picture_gallery_ref GROUP BY gallery_id ``` Because I have noticed that `COUNT( DISTINCT picture_id )` gives me the correct number of pictures I tried this: ``` ( SUM( vote_value ) / GREATEST( COUNT( DISTINCT picture_id ), 1 ) ) AS score ``` It works in this example, but what if there were more joins in one query? Just want to know whether there is a better or more 'elegant' way this problem can be solved. Also I'd like to know whether my solution is MySQL-specific or standard SQL?
This quote from William of Ockham applies here: > Entia non sunt multiplicanda praeter necessitatem > > (Latin for "entities are not to be multiplied beyond necessity"). You should reconsider *why* this needs to be done in a single query. It's true that a single query has less overhead than multiple queries, but if the nature of that single query becomes too complex, both for you to develop and for the RDBMS to execute, **then run separate queries.**
Or just use subqueries... I don't know if this is valid MySQL syntax, but you might be able to do something similar to: ``` SELECT gallery.*, a.score, b.pictures FROM gallery LEFT JOIN ( select vote_gallery_ref, sum(vote_value) as score from vote group by vote_gallery_ref ) a ON gallery_id = vote_gallery_ref LEFT JOIN ( select picture_gallery_ref, count(picture_id) as pictures from picture group by picture_gallery_ref ) b ON gallery_id = picture_gallery_ref ```
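Both answers address the same underlying arithmetic: joining votes and pictures to the same gallery row produces one output row per (vote, picture) pair, so the vote sum is multiplied by the picture count. A tiny Python sketch with invented numbers reproduces the inflation the question describes:

```python
votes = [1, 1, -1]          # vote_value rows for one gallery (true score = 1)
pictures = ["p1", "p2"]     # two pictures in the same gallery

# The two LEFT JOINs yield the cross product: one row per (vote, picture) pair.
joined = [(v, p) for v in votes for p in pictures]

naive_score = sum(v for v, _ in joined)     # SUM(vote_value) over the joined rows
print(naive_score)                          # 2 -- the true score (1) times len(pictures)
assert naive_score == sum(votes) * len(pictures)
```

That multiplication is exactly why dividing by `COUNT(DISTINCT picture_id)` happened to work, and why aggregating each table in its own subquery (or its own query) avoids the problem entirely.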
Aggregation with two Joins (MySQL)
[ "", "sql", "join", "aggregation", "" ]
I know anonymous classes save typing when it comes to implementing Listener and similar stuff. They try to be a replacement for some usages of closures. But what does the community think about the value of this language-feature? Does it make sense and do you use it regularly? Does it make the code clearer, more understandable and more maintainable? Or do anonymous classes make the code less readable? What is your opinion, and please have examples/arguments handy to support your opinion?
I tend to use anonymous inner classes in situations where I don't need a full-blown class just to perform some task, for example when I want to implement an `ActionListener` or a `Runnable` but don't think a separate inner class is warranted. For instance, for starting a simple `Thread`, using an anonymous inner class might be more readable: ``` public void someMethod() { new Thread(new Runnable() { public void run() { // do stuff } }).start(); } ``` In certain cases, such as the example above, it can increase readability, especially for one-time tasks, as the code that is to be executed is all written in one spot. Using an inner class would "delocalize" the code: ``` public void someMethod() { new Thread(new MyRunnable()).start(); } // ... several methods down ... // class MyRunnable implements Runnable { public void run() { // do stuff } } ``` That said, if the same thing is going to be repeated in several places, it should indeed be a separate class, be it a regular class or an inner class. I tend to use anonymous inner classes in programs where I am just trying things out, rather than as a central feature of an actual application.
One more good use of anonymous inner class is when you need to initialize collections like ArrayList and Set. This practice is also known as **double brace initialization** For example , ``` private static final Set<String> VALID_CODES = new HashSet<String>() {{ add("XZ13s"); add("AB21/X"); add("YYLEX"); add("AR2D"); }}; ``` Obviously, this is not limited to collections; it can be used to initialize any kind of object -- for example Gui objects: ``` add(new JPanel() {{ setLayout(...); setBorder(...); add(new JLabel(...)); add(new JSpinner(...)); }}); ```
Is usage of anonymous classes in Java considered bad style or good?
[ "", "java", "coding-style", "anonymous-class", "" ]
The majority of content on my company's website starts life as a Word document (Windows-1252 encoded) and is eventually copied-and-pasted into our UTF-8-encoded content management system. The conversion usually chokes on a few characters (special break characters, smart quotes, scientific notations) which have to be cleaned up manually, but of course a few always slip through. What do you think the best way would be to detect these?
How exactly are you doing the conversion? The whole copying-from-Word problem is something I've come across more often, but it should really be easy to solve. Those chararacters you mention are all in the `0x80` - `0x9F` range in which [the Windows-1252 code page differs from the ISO-8859-1 code page](http://en.wikipedia.org/wiki/Windows-1252). That range is undefined in ISO-8859-1. You must be doing the conversion from ISO-8859-1 (or perhaps ISO-8859-15) instead of Windows-1252, causing it to choke on characters in that range. You should either adjust the source encoding of your conversion or, if that's somehow not possible (I'm not familiar with C#, but I doubt it), use the code page chart to fix the 32 problem characters separate from the main conversion.
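The 0x80-0x9F difference is easy to demonstrate from Python (using byte 0x93, the left curly double quote Word emits, as the example):

```python
smart_quote = b"\x93"   # a byte Word/Windows-1252 uses for a left curly quote

as_cp1252 = smart_quote.decode("cp1252")    # correct source encoding
as_latin1 = smart_quote.decode("latin-1")   # the wrong assumption

print(ascii(as_cp1252))   # '\u201c' -- LEFT DOUBLE QUOTATION MARK
print(ascii(as_latin1))   # '\x93'  -- an invisible C1 control character

# Decoding with the right code page round-trips cleanly to UTF-8:
assert as_cp1252.encode("utf-8") == b"\xe2\x80\x9c"
assert as_cp1252 == "\u201c" and as_latin1 == "\x93"
```

The same check works for the other problem characters (en/em dashes, single smart quotes, the ellipsis) since they all live in that 32-byte range.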
Can you save the text as .rtf and then parse it using some other program? Can you use Word's VBA to save the text as something sane?
Detecting encoding conversion problems
[ "", "c#", "encoding", "utf-8", "windows-1252", "" ]
We have OCRed thousands of pages of newspaper articles. The newspaper, issue, date, page number and OCRed text of each page has been put into a mySQL database. We now want to build a Google-like search engine in PHP to find the pages given a query. It's got to be fast, and take no more than a second for any search. How should we do it?
You can also try out [SphinxSearch](http://www.sphinxsearch.com/). Craigslist uses sphinx and it can connect to both mysql and postgresql.
There are some interesting search engines for you to take a look at. I don't know what you mean by "Google like" so I'm just going to ignore that part. * Take a look at the [Lucene](http://lucene.apache.org) engine. The original is high performance but written in Java. There is a [port of Lucene to PHP](http://framework.zend.com/manual/en/zend.search.lucene.html) (already mentioned elsewhere) but it is too slow. * Take a serious look at the [Xapian Project](http://xapian.org/). It's fast. It's written in C++ so you'll most probably have to build it for your target server(s) but has PHP bindings.
Google-like Search Engine in PHP/mySQL
[ "", "php", "mysql", "search-engine", "" ]
I have a very big problem solving this issue. I have 2 tables. The first table is [http://lh3.ggpht.com/\_uMvfbnPHuko/SYPJQ1ZsBDI/AAAAAAAAAKs/eq49xquy3jM/s800/Untitled-2.jpg](http://picasaweb.google.com/lh/photo/iDp4kgFFjk5d2h5dUZtbvg?feat=embedwebsite), which contains modules (id and name). The second table is [http://lh6.ggpht.com/\_uMvfbnPHuko/SYPJQ4IXGwI/AAAAAAAAAK0/zmTIO6V-UEQ/s800/Untitled-3.jpg](http://picasaweb.google.com/lh/photo/PsT4H6bZ0lH8rN7ZPRMbjQ?feat=embedwebsite), which contains user ids and their preferred modules' numbers. Now let's move to the problem: I want to generate a list of checkboxes. For example, for userid 1, which has the modules "3, 2 and 4", the result will be [http://lh4.ggpht.com/\_uMvfbnPHuko/SYPJQ\_LJeVI/AAAAAAAAAK8/6RV8a1S8eCY/s800/Untitled-4.jpg](http://picasaweb.google.com/lh/photo/KSQIDk_jeuIfyFabCsgXow?feat=embedwebsite). Notice that the "recommended for you" checkbox is not checked because user 1 does not have module id "1". I spent a long time trying to solve this problem using C# and LINQ; see the result of my effort :) (the following code does not work) ``` var q = from m in db.modules from um in db.usersModules.Where(um2 => um2.userId == myUserId).Single().modules.Trim().Split(',') where um.Contains(m.moduleId.ToString()) select new { moduleid = here i want the module id from modules table modulename = here i want the modulename from modules table ischecked = here i want "true" or "false" depending on if the user has this module or not }; ``` If LINQ is easy for you, please try to solve this one. Any questions?
Your query doesn't work because LINQ to SQL tries to translate everything to plain SQL. If you cannot refactor your schema to use an intermediate table (you clearly have a many-to-many relationship), you can do something like this: ``` var userModules = db.userModules.SingleOrDefault(m => m.userId == myUserId).modules; // Get the modules of the user (a comma delimited string) var integerModules = userModules.Split(',').Select(m => int.Parse(m)); // Convert the comma delimited string to IEnumerable<int> var query = db.modules.Select(x => new { x.moduleId, x.moduleName, isChecked = integerModules.Contains(x.moduleId) }); // And finally do the query ```
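The shaping itself, split the comma list and flag each module, is trivial once it runs in memory rather than in SQL; for comparison, here is the same idea sketched in Python (the table contents are invented to match the question's example):

```python
modules = {1: "Recommended for you", 2: "News", 3: "Weather", 4: "Sports"}
user_modules_csv = "3,2,4"            # the usersModules.modules column for user 1

chosen = {int(x) for x in user_modules_csv.split(",")}
checkboxes = [
    {"moduleid": mid, "modulename": name, "ischecked": mid in chosen}
    for mid, name in sorted(modules.items())
]

assert checkboxes[0] == {"moduleid": 1, "modulename": "Recommended for you",
                         "ischecked": False}
assert all(cb["ischecked"] for cb in checkboxes[1:])
```

The set-membership test plays the role of `integerModules.Contains(x.moduleId)` in the LINQ version.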
How come you are using a comma-delimited list? The better approach would be to have a table in between which stores the mapping between users and modules.
complex query using LINQ and C#
[ "", "c#", "asp.net", "database", "linq", "" ]
This should be a fairly trivial problem. I'm trying to open an ofstream using a std::string (or std::wstring) and having problems getting this to work without a messy conversion. ``` std::string path = ".../file.txt"; ofstream output; output.open(path); ``` Ideally I don't want to have to convert this by hand or involve c-style char pointers if there's a nicer way of doing this?
In the path string, use two dots instead of three. Also, you may use the `c_str()` method on the string to get the underlying C string: ``` output.open(path.c_str()); ```
this should work: `output.open(path.c_str())`
Opening a file with std::string
[ "", "c++", "stl", "file-io", "" ]
I have a database with some stored procedures. My question is: is there a problem with doing this: GRANT EXECUTE ON [dbo].[StoredProcedureXPTO] TO [Public] i.e. granting execute permission to public? Or is there a security issue?
The answer entirely depends on what your stored procedure does and whether you want every man and his dog to have access to it. In my experience, it would be unusual, since I provide quite narrowly scoped users and roles for my applications. But it makes me wonder whether there is another issue behind your question. Why do you ask?
There is no problem if that's what you want. If you want to restrict access to the stored procedure then this is not the way to go.
Grant Permission stored procedures
[ "", "sql", "" ]
I have a pkcs12 file. I need to use this to connect to a webpage using https protocol. I came across some code where in order to connect to a secure web page i need to set the following system properties: ``` System.setProperty("javax.net.ssl.trustStore", "myTrustStore"); System.setProperty("javax.net.ssl.trustStorePassword", "changeit"); System.setProperty("javax.net.ssl.keyStoreType", "pkcs12"); System.setProperty("javax.net.ssl.keyStore", "new_cert.p12"); System.setProperty("javax.net.ssl.keyStorePassword", "newpass"); ``` I have the p12(pkcs12) file. All I need is a truststore file. I extracted the certificates using: ``` openssl.exe pkcs12 -in c:/mykey.p12 -out c:/cert.txt -nokeys -clcerts ``` Now converted the cert PEM file to der ``` openssl.exe x509 -in c:/cert.txt -outform DER -out c:/CAcert.der ``` Now adding the der file to a keystore ``` keytool -import -file C:/Cacert.der -keystore mytruststore ``` Now I have the truststore, but when I use it, I get the following error ``` Exception in thread "main" java.net.SocketException: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: com.sun.net.ssl.internal.ssl.DefaultSSLContextImpl) ``` Update: After removing certain properties and setting only the "trustStore", "trustStorePassword" and "trustStoreType" property, I got the following exception ``` java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty ``` Please Help.
For anyone encountering a similar situation, I was able to solve the issue above as follows: 1. Regenerate your pkcs12 file as follows: ``` openssl pkcs12 -in oldpkcs.p12 -out keys -passout pass:tmp openssl pkcs12 -in keys -export -out new.p12 -passin pass:tmp -passout pass:newpasswd ``` 2. Import the CA certificate from the server into a TrustStore (either your own, or the Java keystore in `$JAVA_HOME/jre/lib/security/cacerts`, password: `changeit`). 3. Set the following system properties: ``` System.setProperty("javax.net.ssl.trustStore", "myTrustStore"); System.setProperty("javax.net.ssl.trustStorePassword", "changeit"); System.setProperty("javax.net.ssl.keyStoreType", "pkcs12"); System.setProperty("javax.net.ssl.keyStore", "new.p12"); System.setProperty("javax.net.ssl.keyStorePassword", "newpasswd"); ``` 4. Test your URL. Courtesy of <http://forums.sun.com/thread.jspa?threadID=5296333>
I cannot comment because of the 50pts threshold, but I don't think that the answer provided in <https://stackoverflow.com/a/537344/1341220> is correct. What you are actually describing is how you insert server certificates into the system's default truststore: ``` $JAVA_HOME/jre/lib/security/cacerts, password: changeit ``` This works, indeed, but it means that you did not really specify a trust store local to your project, but rather accepted the certificate universally in your system. You actually never use your own truststore that you defined here: ``` System.setProperty("javax.net.ssl.trustStore", "myTrustStore"); System.setProperty("javax.net.ssl.trustStorePassword", "changeit"); ```
How to connect to a secure website using SSL in Java with a pkcs12 file?
[ "", "java", "ssl", "" ]
Alright, this is probably a really silly question but I am new to Python/Django so I can't really wrap my head around its scoping concepts just yet. Right now I am writing a [middleware](http://docs.djangoproject.com/en/dev/topics/http/middleware/#topics-http-middleware) class to handle some stuff, and I want to set 'global' variables that my views and templates can access. What is the "right" way of doing this? I considered doing something like this: ## middleware.py ``` from django.conf import settings class BeforeFilter(object): def process_request(self, request): settings.my_var = 'Hello World' return None ``` ## views.py ``` from django.conf import settings from django.http import HttpResponse def myview(request): return HttpResponse(settings.my_var) ``` Although this works, I am not sure if it is the "Django way" or the "Python way" of doing this. So, my questions are: 1. Is this the right way? 2. If it is the right way, what is the right way of adding variables that can be used in the actual template from the middleware? Say I want to evaluate something and I want to set a variable `headername` as 'My Site Name' in the middleware, and I want to be able to do `{{ headername }}` in all templates. Doing it the way I have it now I'd have to add `headername` to the context inside every view. Is there anyway to bypass this? I am thinking something along the lines of CakePHP's `$this->set('headername','My Site Name');` 3. I am using the middleware class as an equivalent of CakePHP's `beforeFilter` that runs before every view (or controller in CakePHP) is called. Is this the right way of doing this? 4. Completely unrelated but it is a small question, what is a nice way of printing out the contents of a variable to the browser ala `print_r`? Say I want to see all the stuff inside the `request` that is passed into the view? Is `pprint` the answer?
1. It's not the best way. You could set my\_var on the request rather than on the settings. Settings are global and apply to the whole site. You don't want to modify it for every request. There could be concurrency issues with multiple request updating/reading the variable at the same time. 2. To access request.my\_var in your templates you could do **{{ request.my\_var }}**. To get access to the request variable in your template you will have to add **django.core.context\_processors.request** to your **TEMPLATE\_CONTEXT\_PROCESSORS** setting. 3. Yes. Other terminology to describe request middleware would be request pre-processor/filter/interceptor. Also, if you want to use a common Site name for the header in your templates, you might want to check out the Django Sites application which provides a site name variable for your use.
Here's what we do. We use a context processor like this... ``` def context_myApp_settings(request): """Insert some additional information into the template context from the settings. Specifically, the LOGOUT_URL, MEDIA_URL and BADGES settings. """ from django.conf import settings additions = { 'MEDIA_URL': settings.MEDIA_URL, 'LOGOUT_URL': settings.LOGOUT_URL, 'BADGES': settings.BADGES, 'DJANGO_ROOT': request.META['SCRIPT_NAME'], } return additions ``` Here is the setting that activates it. ``` TEMPLATE_CONTEXT_PROCESSORS = ( "django.core.context_processors.auth", "django.core.context_processors.debug", "django.core.context_processors.i18n", "django.core.context_processors.media", "django.core.context_processors.request", "myapp.context_myApp_settings", ) ``` This provides "global" information in the context of each template that gets rendered. This is the standard Django solution. See <http://docs.djangoproject.com/en/dev/ref/templates/api/#ref-templates-api> for more information on context processors. --- "what is a nice way of printing out the contents of a variable to the browser ala print_r?" In the view? You can provide a `pprint.pformat` string to a template to be rendered for debugging purposes. In the log? You have to use Python's `logging` module and send stuff to a separate log file. The use of simple print statements to write stuff to the log doesn't work consistently for all Django implementations (mod\_python, for example, loses all the stdout and stderr stuff.)
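On the `print_r` sub-question, a minimal sketch of the `pprint.pformat` approach (the dict below stands in for `request.META`, and the commented line shows how it might be used inside a view):

```python
from pprint import pformat

meta = {"REQUEST_METHOD": "GET", "SCRIPT_NAME": "", "HTTP_HOST": "example.com"}

# Inside a Django view you could return the dump directly while debugging:
#   return HttpResponse("<pre>%s</pre>" % pformat(request.META))
dump = pformat(meta)
print(dump)
assert "REQUEST_METHOD" in dump and "'GET'" in dump
```

`pformat` returns the same nicely indented, key-sorted output that `pprint` prints, which is about as close to PHP's `print_r` as the standard library gets.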
Django - having middleware communicate with views/templates
[ "", "python", "django", "" ]
I have the following classes (trimmed to only show the basic structure): ``` public abstract class BaseModel { public bool PersistChanges() { // Context is of type "ObjectContext" DatabaseHelper.Context.SafelyPersistChanges(this); } } public static class ObjectContextExtensions { public static bool SafelyPersistChanges<T>(this ObjectContext oc, T obj) { // Persist the object using a transaction } } [Persistent("LEADS")] public class Lead : BaseModel { // Extra properties } public class LeadsController : Controller { public ActionResult Save(Lead lead) { lead.PersistChanges() } } ``` My **Lead** class derives from **BaseModel**, which contains a method to persist the object's changes to the database using a transaction. I implemented the transactional persist with an extension method. The problem is that by passing **this** to **SafelyPersistChanges** in my **BaseModel** class, the generic T on the extension method is set to **BaseModel**. However, since **BaseModel** isn't marked as a persistent object (which it cannot be), the ORM framework throws an exception. Example: ``` Lead lead = LeadRepository.FindByNumber(2); lead.SalesmanNumber = 4; // Calls "ObjectContextExtensions.SafelyPersistChanges<BaseModel>(BaseModel obj)" // instead of "ObjectContextExtensions.SafelyPersistChanges<Lead>(Lead obj)" lead.PersistChanges(); ``` The above block raises the following exception: > Cannot create mapping for type 'SalesWeb.Data.BaseModel' without persistent attribute. Any ideas?
Extension methods are statically bound at compile time. At the point where `SafelyPersistChanges` is called, `this` is typed as `BaseModel`, hence your exception. In order to get the behavior you want, you'll either need to do an ugly if statement with lots of casting or force the call to the derived class. Make PersistChanges an abstract method. Then implement the call in the derived classes with exactly the same code. ``` public class Lead : BaseModel { public override bool PersistChanges() { // Context is of type "ObjectContext" return DatabaseHelper.Context.SafelyPersistChanges(this); } } ``` Now `this` will properly be typed as `Lead`.
I would have designed this differently, making "public bool PersistChanges()" call a virtual method, that is overridden in each subclass.
Generic method is picking up type of base class
[ "", "c#", "generics", "inheritance", "" ]
Given: `Throwable` is `Exception`'s superclass. When I read texts on writing your own 'exceptions', I see examples of `Throwable` being used in the `catch` block and other texts show `new Exception()` being used in the `catch` block. I have yet to see an explanation of when one should use each. My question is this, when should `Throwable` be used and when should `new Exception()` be used? Inside the `catch` or `else` block using either: ``` throw throwable; ``` or ``` throw new Exception(); ```
> (from comments) The issue that brought this up is that > I need to pass an 'exception' to a > piece of code a coworker is building > if a collection does not get built. In that case, you might want to throw a *checked exception*. You could throw an [`Exception`](http://java.sun.com/javase/6/docs/api/java/lang/Exception.html), an appropriate existing subclass of it (except [`RuntimeException`](http://java.sun.com/javase/6/docs/api/java/lang/RuntimeException.html) and its subclasses which are *unchecked*), or a custom subclass of `Exception` (e.g. "`CollectionBuildException`"). See the [Java Tutorial on Exceptions](http://java.sun.com/docs/books/tutorial/essential/exceptions/index.html) to get up to speed with Java exceptions.
Always throw an `Exception` (never a `Throwable`). You generally don't catch `Throwable` either, but you can. Throwable is the superclass to `Exception` and `Error`, so you would catch `Throwable` if you wanted to not only catch `Exception`s but `Error`s, that's the point in having it. The thing is, `Error`s are generally things which a normal application wouldn't and shouldn't catch, so just use `Exception` unless you have a specific reason to use `Throwable`.
When should Throwable be used instead of new Exception?
[ "", "java", "exception", "throwable", "" ]
I am using [buildout](http://pypi.python.org/pypi/zc.buildout) to automatically download and setup the many dependencies of my Plone installation. buildout more or less uses easy\_install to download and install a bunch of Python eggs. This usually works, but it doesn't work if any of the dependencies cannot be downloaded or if I want buildout to find an internal package not appropriate for pypi. How can I set up my own local version of pypi to automatically mirror the packages I need? Will it still depend on third-party servers for packages that use pypi for their metadata but not their code?
Here are [instructions on how to setup your own PyPi mirror](http://www.zopyx.de/blog/creating-a-local-pypi-mirror). The homepage of this project is [here](http://www.openplans.org/projects/pypi-mirroring/project-home). There also seems to be a growing number of mirrors out there. For instructions on how to setup your own package index, check out [this blog post](http://plope.com/Members/chrism/distribution_links_considered_harmful) where one solution is explained at the end. Then you can also host your own internal packages in there. The advantage is also that the versions are fixed that way. (For a way to pin the versions directly in buildout, check out [this post](http://maurits.vanrees.org/weblog/archive/2008/01/easily-creating-repeatable-buildouts)). If there is only metadata on PyPI and the archive is stored somewhere else you might of course copy that over to your index as well. If you just use a PyPI mirror I assume that you still need access to these servers.
You can also use a mirror. Put this in the "[global]" section of "~/.pip/pip.conf": ``` index-url = http://d.pypi.python.org/simple/ ``` This is a recent feature as announced [here](http://mail.python.org/pipermail/catalog-sig/2010-July/003132.html).
How do I use easy_install and buildout when pypi is down?
[ "", "python", "plone", "easy-install", "buildout", "" ]
I am creating a program in C++, and I want to be able to give users the option to have it auto start in Windows. So when a user starts his computer, Windows will auto start this program. I have read about modifying the registry or putting it in the startup folder, but what would be the best "clean" way to do this?
Startup folder is clean enough. Gives the user the possibility to remove it if needed.
There are many ways to autostart an application, but the easiest, most common and IMO best are: 1. Put a shortcut in the autostart folder 2. Add an autostart entry to the registry (Software\Microsoft\Windows\CurrentVersion\Run) The end result is the same for both. I believe the registry way is executed earlier in the logon process than the startup way, but I am not certain. It does not make any difference for most cases anyway. I prefer the registry, but that is personal taste. You can create and delete the registry key or the shortcut programmatically in your app. With both options you can use either one setting for all users (the All Users startup folder, or under an HKLM key in the registry) or a user-specific one (the user startup folder, or under an HKCU key). In general it is better to use the per-user options, because you can be certain to have write privileges in those areas; and every user on the computer can have his/her own setting.
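Option 1 above boils down to dropping a shortcut into a well-known per-user folder. As a language-neutral illustration of how that folder path is composed (Python used here for brevity; the helper name is mine, and the layout shown is the Windows Vista-and-later one — an assumption for older versions):

```python
import os

# Hypothetical helper: compose the per-user Startup folder path from
# the APPDATA location, the way a C++ app would via SHGetKnownFolderPath.
def user_startup_folder(appdata: str) -> str:
    return os.path.join(appdata, "Microsoft", "Windows",
                        "Start Menu", "Programs", "Startup")

print(user_startup_folder(r"C:\Users\alice\AppData\Roaming"))
```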
How to create an auto startup c++ program
[ "", "c++", "windows", "autostart", "" ]
How can I get list of all colors I can pick in Visual Studio Designer (which is `System.Windows.Media.Colors`, but that isn't a collection) and put them into my own `ComboBox` using WPF and XAML markup?
Here is the pure XAML solution. In your resources section, you would use this: ``` <!-- Make sure this namespace is declared so that it's in scope below --> .. xmlns:sys="clr-namespace:System;assembly=mscorlib" .. <ObjectDataProvider MethodName="GetType" ObjectType="{x:Type sys:Type}" x:Key="colorsTypeOdp"> <ObjectDataProvider.MethodParameters> <sys:String>System.Windows.Media.Colors, PresentationCore, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35</sys:String> </ObjectDataProvider.MethodParameters> </ObjectDataProvider> <ObjectDataProvider ObjectInstance="{StaticResource colorsTypeOdp}" MethodName="GetProperties" x:Key="colorPropertiesOdp"> </ObjectDataProvider> ``` Or, as [CodeNaked points out](https://stackoverflow.com/questions/562682/how-can-i-list-colors-in-wpf-with-xaml/562840#comment9495865_562840), it can be reduced to one tag: ``` <ObjectDataProvider ObjectInstance="{x:Type Colors}" MethodName="GetProperties" x:Key="colorPropertiesOdp" /> ``` And then the combobox would look like this: ``` <ComboBox Name="comboBox1" ItemsSource="{Binding Source={StaticResource colorPropertiesOdp}}" DisplayMemberPath="Name" SelectedValuePath="Name" /> ```
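The ObjectDataProvider above simply reflects over the static properties of the Colors class. The same enumerate-by-reflection idea, sketched in Python with a stand-in class (the class and its color values are illustrative, not the real WPF palette):

```python
# Stand-in for System.Windows.Media.Colors: a class whose attributes are
# the named colors. We enumerate them by reflection, exactly as the
# ObjectDataProvider does with GetProperties().
class Colors:
    AliceBlue = (0xF0, 0xF8, 0xFF)
    Black = (0x00, 0x00, 0x00)
    White = (0xFF, 0xFF, 0xFF)

def color_names(cls) -> list:
    # Skip dunder attributes; keep only the "properties".
    return sorted(name for name in vars(cls) if not name.startswith("_"))

print(color_names(Colors))  # -> ['AliceBlue', 'Black', 'White']
```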
Here is a great ItemTemplate to use for a combobox using [casperOne's code](https://stackoverflow.com/questions/562682/how-can-i-list-colors-with-wpf/562840#562840): ``` <ComboBox Name="cboColors" ItemsSource="{Binding Source={StaticResource colorPropertiesOdp}}" SelectedValuePath="Name"> <ComboBox.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal" Height="18" Margin="0,0,0,2"> <Border BorderThickness="1" CornerRadius="2" BorderBrush="Black" Width="50" VerticalAlignment="Stretch" Background="{Binding Name}"/> <TextBlock Text="{Binding Name}" Margin="8,0,0,0"/> </StackPanel> </DataTemplate> </ComboBox.ItemTemplate> </ComboBox> ```
How can I list colors in WPF with XAML?
[ "", "c#", "wpf", "xaml", "" ]
I got an `Int16` value, from the database, and need to convert this to an enum type. This is unfortunately done in a layer of the code that knows very little about the objects except for what it can gather through reflection. As such, it ends up calling `Convert.ChangeType` which fails with an invalid cast exception. I found what I consider a smelly workaround, like this: ``` String name = Enum.GetName(destinationType, value); Object enumValue = Enum.Parse(destinationType, name, false); ``` Is there a better way, so that I don't have to move through this String operation? Here's a short, but complete, program that can be used if anyone need to experiment: ``` using System; public class MyClass { public enum DummyEnum { Value0, Value1 } public static void Main() { Int16 value = 1; Type destinationType = typeof(DummyEnum); String name = Enum.GetName(destinationType, value); Object enumValue = Enum.Parse(destinationType, name, false); Console.WriteLine("" + value + " = " + enumValue); } } ```
`Enum.ToObject(....` is what you're looking for! **C#** ``` StringComparison enumValue = (StringComparison)Enum.ToObject(typeof(StringComparison), 5); ``` **VB.NET** ``` Dim enumValue As StringComparison = CType([Enum].ToObject(GetType(StringComparison), 5), StringComparison) ``` If you do a lot of Enum converting, try using the following class; it will save you a lot of code. ``` public class Enum<EnumType> where EnumType : struct, IConvertible { /// <summary> /// Retrieves an array of the values of the constants in a specified enumeration. /// </summary> /// <returns></returns> /// <remarks></remarks> public static EnumType[] GetValues() { return (EnumType[])Enum.GetValues(typeof(EnumType)); } /// <summary> /// Converts the string representation of the name or numeric value of one or more enumerated constants to an equivalent enumerated object. /// </summary> /// <param name="name"></param> /// <returns></returns> /// <remarks></remarks> public static EnumType Parse(string name) { return (EnumType)Enum.Parse(typeof(EnumType), name); } /// <summary> /// Converts the string representation of the name or numeric value of one or more enumerated constants to an equivalent enumerated object. /// </summary> /// <param name="name"></param> /// <param name="ignoreCase"></param> /// <returns></returns> /// <remarks></remarks> public static EnumType Parse(string name, bool ignoreCase) { return (EnumType)Enum.Parse(typeof(EnumType), name, ignoreCase); } /// <summary> /// Converts the specified object with an integer value to an enumeration member. /// </summary> /// <param name="value"></param> /// <returns></returns> /// <remarks></remarks> public static EnumType ToObject(object value) { return (EnumType)Enum.ToObject(typeof(EnumType), value); } } ``` Now instead of writing `(StringComparison)Enum.ToObject(typeof(StringComparison), 5);` you can simply write `Enum<StringComparison>.ToObject(5);`.
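For comparison, the int-to-member round trip that Enum.ToObject performs looks like this in Python's enum module (the stand-in enum and its values are assumptions mirroring the C# example):

```python
from enum import Enum

# Stand-in enum mirroring the StringComparison example above;
# the member values here are illustrative.
class StringComparison(Enum):
    ORDINAL = 4
    ORDINAL_IGNORE_CASE = 5

member = StringComparison(5)        # int -> member, like Enum.ToObject
print(member.name)                  # -> ORDINAL_IGNORE_CASE
print(member.value)                 # -> 5
print(StringComparison["ORDINAL"])  # name -> member, like Enum.Parse
```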
Based on @Peter's answer, here is the method for `Nullable<int>` to `Enum` conversion: ``` public static class EnumUtils { public static bool TryParse<TEnum>(int? value, out TEnum result) where TEnum: struct, IConvertible { if(!value.HasValue || !Enum.IsDefined(typeof(TEnum), value)){ result = default(TEnum); return false; } result = (TEnum)Enum.ToObject(typeof(TEnum), value); return true; } } ``` Using `EnumUtils.TryParse<YourEnumType>(someNumber, out result)` becomes useful for many scenarios. For example, a WebApi controller in ASP.NET does not have default protection against invalid Enum params. ASP.NET will just use the `default(YourEnumType)` value, even if someone passes `null`, `-1000`, `500000`, `"garbage string"` or totally ignores the parameter. Moreover, `ModelState` will be valid in all these cases, so one solution is to use an `int?` type with a custom check: ``` public class MyApiController: Controller { [HttpGet] public IActionResult Get(int? myEnumParam){ MyEnumType myEnumParamParsed; if(!EnumUtils.TryParse<MyEnumType>(myEnumParam, out myEnumParamParsed)){ return BadRequest($"Error: parameter '{nameof(myEnumParam)}' is not specified or incorrect"); } return this.Get(myEnumParamParsed); } private IActionResult Get(MyEnumType myEnumParam){ // here we can guarantee that myEnumParam is valid return Ok(myEnumParam); } } ```
Convert.ChangeType and converting to enums?
[ "", "c#", "enums", "changetype", "" ]
I am trying this query: ``` SELECT ARTICLE_NO, USERNAME, ACCESSSTARTS, ARTICLE_NAME, date_format( ACCESSSTARTS, '%d %m %Y' ) AS shortDate FROM AUCTIONS WHERE upper( ARTICLE_NAME ) LIKE '%hardy%' LIMIT 0 , 10; ``` Which works fine, but shortDate is null, and I am unsure why. The contents of the database are like so: ``` ARTICLE_NO USERNAME ACCESSSTARTS ARTICLE_NAME shortDate 110313660559 banger-wear 17.11.2008 13:24:56 acolshirt NULL ``` edit: The ACCESSSTARTS field is not a datetime field but a varchar. I am unsure of what to do. Should I simply strip everything after the first space in ACCESSSTARTS to display only the date? Or would it be better to convert the column to a datetime field, and if so, how would I specify that it should be in %d.%m.%Y format instead of the default, which apparently starts with %Y?
The formatting of ACCESSSTARTS looks like the date you have there is a varchar, but DATE\_FORMAT expects a DATE or DATETIME value. You could try using [STR\_TO\_DATE](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_str-to-date) to turn that string into a date value first, e.g. ``` SELECT ACCESSSTARTS, date_format(str_to_date(ACCESSSTARTS, '%d.%m.%Y %k:%i:%s'), '%d %m %Y' ) AS shortDate FROM AUCTIONS WHERE upper( ARTICLE_NAME ) LIKE '%hardy%' LIMIT 0 , 10; ```
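The parse-then-reformat pair above (STR_TO_DATE followed by DATE_FORMAT) has a direct equivalent in most languages. A quick Python check of the same conversion on the sample value (note that Python's strptime codes differ slightly from MySQL's):

```python
from datetime import datetime

raw = "17.11.2008 13:24:56"  # the varchar value from the question
parsed = datetime.strptime(raw, "%d.%m.%Y %H:%M:%S")  # like STR_TO_DATE
short_date = parsed.strftime("%d %m %Y")              # like DATE_FORMAT
print(short_date)  # -> 17 11 2008
```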
ACCESSSTARTS looks like a string type of column in stead of a datetime. Try replacing your ``` date_format( ACCESSSTARTS, '%d %m %Y' ) ``` by this: ``` date_format(str_to_date(ACCESSSTARTS, '%d.%m.%Y %H:%i:%S'), '%d %m %Y'); ```
mysql formatting a date
[ "", "sql", "mysql", "" ]
I'm currently using the following function which is based on an example from the web, it is called every second to display the current progress through a video stream. Is there something I could do to make this more efficient? ``` function secondstominutes(secs){ var s; if(secs > 60){ var min = Math.floor(secs / 60); s = min < 10 ? "0" : ""; s += min +":"; secs = secs - min * 60; } else { s = "00:"; } if(secs < 10){ s+= "0" + Math.floor(secs); } else { s += Math.floor(secs); } return s; } ```
``` function secondstominutes(secs) { var mins = Math.floor(secs / 60); secs = Math.floor(secs % 60); return (mins < 10 ? "0" + mins : mins) + ":" + (secs < 10 ? "0" + secs : secs); } ```
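The same zero-padded formatting, written with divmod in Python for comparison (an illustrative sketch, not the answer's code):

```python
# Split total seconds into minutes and remainder in one step, then pad.
def seconds_to_minutes(secs: float) -> str:
    mins, rem = divmod(int(secs), 60)
    return "{:02d}:{:02d}".format(mins, rem)

print(seconds_to_minutes(5))    # -> 00:05
print(seconds_to_minutes(61))   # -> 01:01
print(seconds_to_minutes(600))  # -> 10:00
```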
Yes, it can be a little simpler, and it uses fewer Math.floor calls, local variables etc. Here is my proposition: ``` function secondstominutes(secs) { return (Math.floor(secs/60))+":"+secs%60; } ``` This will give results like: 0:1 for 1 sec 0:10 for 10 sec 1:1 for 61 sec etc. If you want leading zeros etc., it could be done like this: ``` function formatZero(number) { return (number>9) ? number : "0"+number; } function secondstominutes(secs) { return formatZero((Math.floor(secs/60)))+":"+formatZero(secs%60); } ``` And this one is for obsessive one-line-function people ;) ``` function secondstominutes(secs) { return ((arguments[1]=(Math.floor(secs/60)))<10?"0":"")+arguments[1]+":"+((arguments[2]=secs%60)<10?"0":"") + arguments[2]; } ```
Efficiently calculate minutes and seconds for MM:ss formatted display from seconds, using OpenLaszlo / javascript?
[ "", "javascript", "openlaszlo", "" ]
I have a word document which is a blank form. I need to be able to fill it in programatically using .NET, and print out the result. The form I have is a Word document, but I could obviously convert this to PDF if it is needed.
As Josef said, if it's OpenXML (Office 2007) document you can use the managed .net classes to easily update the document, which is basically modifying a bunch of xml files, zipped and renamed to .docx . Visual Studio tools for Office (VSTO) should help you out if need be for Office 2000 and 2003. For previous versions of office you'd need to use Office Automation COM Classes.. Now for updating fields in the word document, you'd need to identify where to insert text to. So if you have some bookmarks or markers to identify the places where you'd like to insert text... you can seek to that position and insert text. Printing the word doc should be simple since Word has printing support built in. Should be as easy as calling the right method.
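As noted above, an Office Open XML document is just a ZIP of XML parts, which is what makes the read-modify-write approach workable. A small Python sketch demonstrating the idea on an in-memory archive (the part name word/document.xml matches the real WordprocessingML layout, but the XML body and placeholder here are simplified assumptions):

```python
import io
import zipfile

# Build a toy docx-like archive in memory.
src = io.BytesIO()
with zipfile.ZipFile(src, "w") as zf:
    zf.writestr("word/document.xml", "<w:t>{NAME}</w:t>")

# Read-modify-write: copy every part, substituting the placeholder,
# the same cycle used when filling a form field programmatically.
dst = io.BytesIO()
with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
    for part in zin.namelist():
        text = zin.read(part).decode()
        zout.writestr(part, text.replace("{NAME}", "Jane Doe"))

with zipfile.ZipFile(dst) as zf:
    print(zf.read("word/document.xml").decode())  # -> <w:t>Jane Doe</w:t>
```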
Do you have Word document in Open XML format or is it in old binary format? In Open XML this task can as easy as manipulation of XML inside a package (ZIP file). If you have binary Word file this can be tricky. You will need to use .NET Programmability Support for Office and [Microsoft.Office.Interop.Word namespace](http://msdn.microsoft.com/en-us/library/microsoft.office.interop.word(VS.80).aspx).
Populate a form and print out document
[ "", "c#", ".net", "pdf", "ms-word", "" ]
How would I compare 2 strings to determine if they refer to the same path in Win32 using C/C++? While this will handle a lot of cases it misses some things: ``` _tcsicmp(szPath1, szPath2) == 0 ``` For example: * forward slashes / backslashes * relative / absolute paths. [Edit] Title changed to match an existing C# question.
Open both files with `CreateFile`, call `GetFileInformationByHandle` for both, and compare `dwVolumeSerialNumber`, `nFileIndexLow`, `nFileIndexHigh`. If all three are equal they both point to the same file: [`GetFileInformationByHandle` function](http://msdn.microsoft.com/en-us/library/aa364952(VS.85).aspx) [`BY_HANDLE_FILE_INFORMATION` Structure](http://msdn.microsoft.com/en-us/library/aa363788.aspx)
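The approach above — compare file identity, not path strings — is what higher-level runtimes wrap for you. Python's os.path.samefile, for instance, performs the equivalent stat-based comparison, shown here on two different strings that name the same file:

```python
import os
import tempfile

# Two different path strings referring to the same file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path_a = tmp.name

path_b = os.path.join(os.path.dirname(path_a), ".", os.path.basename(path_a))

same = os.path.samefile(path_a, path_b)  # identity check via stat
print(path_a == path_b)  # -> False (the strings differ)
print(same)              # -> True  (the underlying file is the same)
os.unlink(path_a)
```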
## Filesystem library Since C++17 you can use the [standard filesystem library](http://en.cppreference.com/w/cpp/filesystem). Include it using `#include <filesystem>`. You can access it even in older versions of C++, see footnote. The function you are looking for is `equivalent`, under namespace `std::filesystem`: ``` bool std::filesystem::equivalent(const std::filesystem::path& p1, const filesystem::path& p2 ); ``` To summarize from the [documentation](http://en.cppreference.com/w/cpp/filesystem/equivalent): this function takes two paths as parameters and returns true if they reference the same file or directory, false otherwise. There is also a `noexcept` overload that takes a third parameter: an `std::error_code` in which to save any possible error. ## Example ``` #include <filesystem> #include <iostream> //... int main() { std::filesystem::path p1 = "."; std::filesystem::path p2 = fs::current_path(); std::cout << std::filesystem::equivalent(p1, p2); //... } ``` Output: ``` 1 ``` ## Using filesystem before C++17 To use this library in versions prior to C++17 you have to enable experimental language features in your compiler and include the library in this way: `#include <experimental/filesystem>`. You can then use its functions under the namespace `std::experimental::filesystem`. Please note that the experimental filesystem library may differ from the C++17 one. See the documentation [here](http://en.cppreference.com/w/cpp/experimental/fs). For example: ``` #include <experimental/filesystem> //... std::experimental::filesystem::equivalent(p1, p2); ```
Best way to determine if two path reference to same file in Windows?
[ "", "c++", "winapi", "path", "" ]
Say I have three threads that need access to a collection and I use a lock block around the access in each thread. The following happens... (1) Thread 1 gets the lock on the collection (2) Thread 2 gets blocked (3) Thread 3 gets blocked When Thread 1 releases the lock, who gets to take the lock next? Is it FIFO access? Thanks
You should not care who gets the lock next. The order is deliberately unspecified: `Monitor` (which the `lock` statement uses under the hood) does not guarantee FIFO fairness, so blocked threads may acquire the lock in any order.
Your question implies that you are looking for a FIFO behaviour? Then you might want to try this code by Jakub Sloup: **[Monitor/lock which remember order in C# to simulate FIFO](http://www.jakubsloup.cz/Blog/March-2009/Monitor-lock-which-remember-order-in-C--to-simulat.aspx)** As already mentioned in the other answers there is no guaranteed order waiting threads will receive a lock.
Lock Statement C#
[ "", "c#", "locking", "" ]
I want to match a web address through regex which should capture <http://www.google.com> as well as www.google.com i.e. with and without protocol.
Well it's going to depend on exactly what you want to capture ("FTP"? "/index.htm"?) because a general URI capture based on the RFC standard is very hard, but you could start with: ``` /^((https?\:\/\/)?([\w\d\-]+\.){2,}([\w\d]{2,})((\/[\w\d\-\.]+)*(\/[\w\d\-]+\.[\w\d]{3,4}(\?.*)?)?)?)$/ ``` Complicated see?
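For a quick sanity check of the protocol-optional idea, here is a deliberately simplified pattern in Python (it ignores FTP, ports, query strings and most RFC corner cases, as the answer warns a general solution must not):

```python
import re

# Simplified: optional http/https scheme, a dotted host, optional path.
URL_RE = re.compile(r"^(https?://)?([\w-]+\.)+[A-Za-z]{2,}(/\S*)?$")

for candidate in ("http://www.google.com", "www.google.com",
                  "google.com", "not a url"):
    print(candidate, bool(URL_RE.match(candidate)))
```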
Try [RegexLib](http://regexlib.com/DisplayPatterns.aspx?cattabindex=1&categoryId=2).
Matching a web address through regex
[ "", "c#", "regex", "url", "" ]
PHP must track the amount of CPU time a particular script has used in order to enforce the max\_execution\_time limit. Is there a way to get access to this inside of the script? I'd like to include some logging with my tests about how much CPU was burnt in the actual PHP (the time is not incremented when the script is sitting and waiting for the database). I am using a Linux box.
On unixoid systems (and in php 7+ on Windows as well), you can use [getrusage](http://php.net/getrusage), like: ``` // Script start $rustart = getrusage(); // Code ... // Script end function rutime($ru, $rus, $index) { return ($ru["ru_$index.tv_sec"]*1000 + intval($ru["ru_$index.tv_usec"]/1000)) - ($rus["ru_$index.tv_sec"]*1000 + intval($rus["ru_$index.tv_usec"]/1000)); } $ru = getrusage(); echo "This process used " . rutime($ru, $rustart, "utime") . " ms for its computations\n"; echo "It spent " . rutime($ru, $rustart, "stime") . " ms in system calls\n"; ``` Note that you don't need to calculate a difference if you are spawning a php instance for every test.
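The distinction getrusage draws — CPU time spent computing vs. wall-clock time spent waiting — exists in other runtimes too. For illustration, Python separates the two as time.process_time vs. time.perf_counter:

```python
import time

cpu_start = time.process_time()   # CPU time, akin to utime + stime
wall_start = time.perf_counter()  # wall-clock time

total = sum(i * i for i in range(200_000))  # computation burns CPU
time.sleep(0.05)                            # waiting (e.g. on a DB) burns none

cpu_ms = (time.process_time() - cpu_start) * 1000
wall_ms = (time.perf_counter() - wall_start) * 1000
print("CPU: %.1f ms, wall: %.1f ms" % (cpu_ms, wall_ms))
```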
If all you need is the wall-clock time, rather than the CPU execution time, then it is simple to calculate: ``` //place this before any script you want to calculate time $time_start = microtime(true); //sample script for($i=0; $i<1000; $i++){ //do anything } $time_end = microtime(true); //dividing with 60 will give the execution time in minutes otherwise seconds $execution_time = ($time_end - $time_start)/60; //execution time of the script echo '<b>Total Execution Time:</b> '.$execution_time.' Mins'; // if you get weird results, use number_format((float) $execution_time, 10) ``` Note that this will include the time that PHP is sat waiting for external resources such as disks or databases, which is not used for `max_execution_time`.
Tracking the script execution time in PHP
[ "", "php", "performance", "" ]
From what I know, a vector is guaranteed to be contiguous, so I can write a chunk of memory to it and do a send or fwrite with it. All I need to do is make sure I call .resize() to force it to be the minimum length I need; then I can use it as a normal char array? Would this code be correct: ``` v.resize(numOfElements); v.clear(); // so I won't get numOfElements + len when I push back vector<char> v2; v2.resize(numOfElements*SizeOfType); while(...) { ... v.push_back(x); } compress(&v2[0], len, &v[0], len); fwrite(&v2[0], ....) ``` Note that I never push back or pop v2; I only resize it once and use it as a char array. Would this be safe? And if I also dumped v, would that also be safe? (I do push back and clear; I may dump it for testing.)
``` v.resize(numOfElements); v.clear(); // so I won't get numOfElements + len when I push back ``` Well, the code snippet above is in effect allocating and creating elements, just to destroy them again. It's in effect the same as: ``` v.reserve(numOfElements); ``` Just that this code is way faster. So, `v.size() == 0` in both cases and `v.capacity()` *might be the same as* `numOfElements` in both cases too (although this is *not guaranteed*). In the second case, however, the capacity is *at least numOfElements*, which means the internal buffer will not be reallocated until you have push_back'ed that many elements to your vector. Note that in both cases it is *invalid* if you try accessing any elements - because there are zero elements actually contained. Apart from that, I haven't found a problem in your code. It's safe, and I would encourage you to use it instead of a raw `new` or `malloc` because of the added safety it provides. I'm however not sure what you mean by "dump v".
Indeed, std::vector is guaranteed to be contiguous, in order to be layout-compatible with a C array. However, you must be aware that many operations of the vector invalidate all pointers pointing to its elements, so you'd better stick to one type of use: avoid mixing pointer arithmetic and method calls on the vector. Apart from that it is perfectly correct, except the first line: what you want is ``` v.reserve(numOfElements); ``` which will allocate enough place to store `numOfElements` into the vector, whereas ``` v.resize(numOfElements); ``` will do the following: ``` // pseudo-code if (v.size() < numOfElements) insert (numOfElements - size) elements default constructed at the end of the vector if (v.size() > numOfElements) erase the last elements so that size = numOfElements ``` To sum up, after a `reserve` you are sure that vector **capacity** is superior or equal to numOfElements, and after a `resize` you are sure that vector **size** is equal to numOfElements.
vector and dumping
[ "", "c++", "vector", "" ]
We have a form which displays media items in tab pages of a tab control, and I'm implementing a feature which allows users to 'pop out' the tab pages into their own forms. However, when I add the media player to a form rather than a TabPage, the background switches from the gradient fill of a tab page to the plain SystemColors.Control background of the parent form. I need to add the the media player to a control which has the same background as a TabControl, but which doesn't display a tab at the top. I tried adding the media player to the TabControl's control collection, but that just throws an exception. How do I get a control which looks like a TabControl with no tabs? Should I keep trying to add the media player to a TabControl, or should I try to write a Panel with a custom-drawn background? If the latter, how do I make sure that works with all possible themes?
Thanks to Henk - I eventually went with: ``` protected override void OnPaintBackground(PaintEventArgs pe) { if (TabRenderer.IsSupported && Application.RenderWithVisualStyles) { TabRenderer.DrawTabPage(pe.Graphics, this.ClientRectangle); } else { base.OnPaintBackground(pe); ControlPaint.DrawBorder3D(pe.Graphics, this.ClientRectangle, Border3DStyle.Raised); } } ```
The question seems to be about `UseVisibleBackgroundStyle`. AFAIK only buttons and TabPages have this property. The following is a very dirty hack, just to get you started: 1) Derive a custom control from Panel and add "using System.Windows.Forms.VisualStyles;" 2) Add the following code ``` //warning: incomplete, add error checking etc private readonly VisualStyleElement element = VisualStyleElement.Tab.Body.Normal; public bool UseVisibleBackgroundStyle { get; set; } protected override void OnPaint(PaintEventArgs pe) { if (UseVisibleBackgroundStyle) { var x = new VisualStyleRenderer(element); x.DrawBackground(pe.Graphics, this.ClientRectangle); } else { base.OnPaint(pe); } } ```
How do I get a control which looks like a TabControl with no tabs?
[ "", "c#", ".net", "winforms", "tabcontrol", "" ]
I'm hoping there are some fellow Doctrine users out there. Here is a simplified YAML of my relationships: ``` Collection: columns: id: { type: integer(4), notnull: true, primary: true, autoincrement: true } name: { type: string(255), notnull: true, unique: true } relations: Items: class: Item refClass: CollectionItem foreignAlias: Collections type: many foreignType: many Item: columns: id: { type: integer(4), notnull: true, primary: true, autoincrement: true } name: { type: string(255), notnull: true } CollectionItem: columns: id: { type: integer(4), notnull: true, primary: true, autoincrement: true } collection_id: { type: integer(4) } item_id: { type: integer(4) } relations: Collection: foreignAlias: CollectionItem foreignType: one Item: foreignAlias: CollectionItem foreignType: one ``` I want a collection to be able to hold many copies of the same item, but when I use the generated classes to load items like so: ``` $collection = Doctrine::getTable('Collection')->find(1); $items = $collection->Items; ``` $items doesn't contain my duplicates. The generated SQL seems to correctly return duplicate rows: ``` SELECT i.id AS i__id, i.name AS i__name, c.id AS c__id, c.collection_id AS c__collection_id, c.item_id AS c__item_id FROM item i LEFT JOIN collection_item c ON i.id = c.item_id WHERE c.collection_id IN (?) - (1) ``` I know I can get around this by making specific DQL queries instead, but does anyone know if there is a simple setting somewhere to allow the Items collection to have duplicates?
you have to change the hydration mode to HYDRATE\_SCALAR: > This hydration mode creates a > flat/rectangular result set that can > contain duplicate data. ``` $res = $q->execute(array(), Doctrine::HYDRATE_SCALAR); ``` (or to HYDRATE\_NONE) as stated in <http://www.doctrine-project.org/documentation/manual/1_1/en/working-with-models#fetching-data>
In Doctrine you can't have duplicate objects. Each object retrieved from the database is stored only once in Doctrine. If you query for the same object twice you'll get a pointer the the same object you had already retrieved. You can clone the object and store it in your `Doctrine_Collection`, but that will actually create another row in the database when you save the collection.
How can I allow Duplicate Records in a Doctrine Collection
[ "", "php", "orm", "doctrine", "" ]
I have a bunch of code that has lots of integers with different meanings (I'd rather have a general solution, but for a specific example: day-of-the-month vs. month-of-the-year vs. year, etc.). I want to be able to overload a class constructor based on these meanings. For example ``` int a; // takes role A int b; // takes role B var A = new Foo(a); // should call one constructor var B = new Foo(b); // should call another constructor ``` Now clearly that won't work, but if I could define a type (not just an alias) that is an `int` in all but name like this: ``` typedef int TypeA; // stealing the C syntax typedef int TypeB; ``` I could do the overloading I need and let the type system keep track of what things are what. *In particular this would allow me to be sure that values are not mixed up, for example a value returned from a function as a year is not used as a day-of-the-month.* Is there any way, short of `class` or `struct` wrappers, to do this in C#? --- It would be nice if the solution would also work for floats and doubles.
There is no direct typedef equivalent, but you can do the following: ``` using TypeA = int; using TypeB = int; ``` However, this just aliases the type rather than creating a new strong type. Therefore, the compiler will still treat them as an `int` when resolving method calls. A better solution might be to create simple wrapper classes that wrap `int` and provide implicit casting, such as: ``` struct TypeA { public TypeA(int value) { this.realValue = value; } private int realValue; public static implicit operator int(TypeA value) { return value.realValue; } public static implicit operator TypeA(int value) { return new TypeA(value); } } ``` However, in most situations, an `enum` would be more appropriate.
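The same gap exists in Python, where typing.NewType gives a checker-visible but runtime-transparent distinct type (the analogue of the `using` alias), while a tiny wrapper class gives true runtime separation (the analogue of the struct above). An illustrative sketch:

```python
from typing import NewType

# Checker-only distinction: a static type checker flags mixing Year and
# DayOfMonth, but at runtime both are plain ints -- like the C# alias.
Year = NewType("Year", int)
DayOfMonth = NewType("DayOfMonth", int)

print(Year(2009) + 1)  # -> 2010 (no runtime enforcement)

# Runtime distinction: a minimal wrapper, like the struct-based TypeA.
class Month:
    def __init__(self, value: int) -> None:
        self.value = value

print(isinstance(Month(3), int))  # -> False: genuinely a new type
```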
This may be way off, but couldnt you use an enum for this? Enum base is int, but is typed, and you could define different constructors based on the type of enum passed.
typedef equivalent for overloading in c#
[ "", "c#", "types", "overloading", "typedef", "" ]
I want to create an empty array and append items to it, one at a time. ``` xs = [] for item in data: xs.append(item) ``` Can I use this list-style notation with [NumPy](http://en.wikipedia.org/wiki/NumPy) arrays?
That is the wrong mental model for using NumPy efficiently. NumPy arrays are stored in contiguous blocks of memory. To append rows or columns to an existing array, the entire array needs to be copied to a new block of memory, creating gaps for the new elements to be stored. This is very inefficient if done repeatedly. Instead of appending rows, allocate a suitably sized array, and then assign to it row-by-row: ``` >>> import numpy as np >>> a = np.zeros(shape=(3, 2)) >>> a array([[ 0., 0.], [ 0., 0.], [ 0., 0.]]) >>> a[0] = [1, 2] >>> a[1] = [3, 4] >>> a[2] = [5, 6] >>> a array([[ 1., 2.], [ 3., 4.], [ 5., 6.]]) ```
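To see why growing an array by repeated reallocation hurts, here is a small language-agnostic count of element copies (pure Python, no NumPy needed) comparing reallocate-per-append against preallocation; the helper names are mine:

```python
# Growing by reallocation copies the whole existing block on every
# append: 0 + 1 + ... + (n-1) element copies in total.
def copies_when_appending(n: int) -> int:
    return sum(range(n))

# Preallocating writes each element exactly once.
def copies_when_preallocating(n: int) -> int:
    return n

n = 1000
print(copies_when_appending(n))      # -> 499500 (quadratic growth)
print(copies_when_preallocating(n))  # -> 1000 (linear)
```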
A NumPy array is a very different data structure from a list and is designed to be used in different ways. Your use of `hstack` is potentially very inefficient... every time you call it, all the data in the existing array is copied into a new one. (The `append` function will have the same issue.) If you want to build up your matrix one column at a time, you might be best off to keep it in a list until it is finished, and only then convert it into an array. e.g. ``` mylist = [] for item in data: mylist.append(item) mat = numpy.array(mylist) ``` `item` can be a list, an array or any iterable, as long as each `item` has the same number of elements. In this particular case (`data` is some iterable holding the matrix columns) you can simply use ``` mat = numpy.array(data) ``` (Also note that using `list` as a variable name is probably not good practice since it masks the built-in type by that name, which can lead to bugs.) EDIT: If for some reason you really do want to create an empty array, you can just use `numpy.array([])`, but this is rarely useful!
How do I create an empty array and then append to it in NumPy?
[ "", "python", "arrays", "numpy", "" ]
I am reading each line of a CSV file and need to get the individual values in each column. So right now I am just using: ``` values = line.Split(delimiter); ``` where `line` is a string that holds the values separated by the delimiter. Measuring the performance of my `ReadNextRow` method, I noticed that it spends 66% of its time in `String.Split`, so I was wondering if someone knows of a faster method to do this. Thanks!
It should be pointed out that `split()` is a questionable approach for parsing CSV files in case you come across commas in the file eg: ``` 1,"Something, with a comma",2,3 ``` The other thing I'll point out without knowing how you profiled is be careful about profiling this kind of low level detail. The granularity of the Windows/PC timer might come into play and you may have a significant overhead in just looping so use some sort of control value. That being said, `split()` is built to handle regular expressions, which are obviously more complex than you need (and the wrong tool to deal with escaped commas anyway). Also, `split()` creates lots of temporary objects. So if you want to speed it up (and I have trouble believing that performance of this part is really an issue) then you want to do it by hand and you want to reuse your buffer objects so you're not constantly creating objects and giving the garbage collector work to do in cleaning them up. The algorithm for that is relatively simple: * Stop at every comma; * When you hit quotes continue until you hit the next set of quotes; * Handle escaped quotes (ie \") and arguably escaped commas (\,). Oh and to give you some idea of the cost of regex, there was a question (Java not C# but the principle was the same) where someone wanted to replace every n-th character with a string. I suggested using `replaceAll()` on String. Jon Skeet manually coded the loop. Out of curiosity I compared the two versions and his was an order of magnitude better. So if you really want performance, it's time to hand parse. Or, better yet, use someone else's optimized solution like this [fast CSV reader](http://www.codeproject.com/KB/database/CsvReader.aspx?fid=142714&df=90&mpp=25&noise=3&sort=Position&view=Quick&fr=126&select=2741699). 
By the way, while this is in relation to Java it concerns the performance of regular expressions in general (which is universal) and `replaceAll()` vs a hand-coded loop: [Putting char into a java string for each N characters](https://stackoverflow.com/questions/537174/putting-char-into-a-java-string-for-each-n-characters).
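The hand-parsing algorithm described above can be sketched roughly like this (in Python for illustration, since it is the parsing logic rather than the language that matters here; the function name is made up, and it deliberately ignores fields that span multiple lines):

```python
def split_csv_line(line, delimiter=","):
    """Split one CSV line, honoring double-quoted fields.

    Minimal sketch of the hand-parsing algorithm described above;
    handles quoted delimiters and doubled quotes ("" -> ") but not
    fields that span multiple lines. The function name is made up.
    """
    fields = []
    buf = []
    in_quotes = False
    i, n = 0, len(line)
    while i < n:
        ch = line[i]
        if in_quotes:
            if ch == '"':
                if i + 1 < n and line[i + 1] == '"':
                    buf.append('"')  # doubled quote inside a field
                    i += 1
                else:
                    in_quotes = False  # closing quote
            else:
                buf.append(ch)
        elif ch == '"':
            in_quotes = True  # opening quote
        elif ch == delimiter:
            fields.append("".join(buf))  # field boundary
            buf = []
        else:
            buf.append(ch)
        i += 1
    fields.append("".join(buf))  # last field
    return fields

print(split_csv_line('1,"Something, with a comma",2,3'))
```

For production use the linked CSV reader (or an equivalent library) is still the safer choice; the sketch just shows how little work the inner loop needs compared to a regex, and why it creates far fewer temporary objects.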
The BCL implementation of string.Split is actually quite fast, I've done some testing here trying to outperform it and it's not easy. But there's one thing you can do and that's to implement this as a generator: ``` public static IEnumerable<string> GetSplit( this string s, char c ) { int l = s.Length; int i = 0, j = s.IndexOf( c, 0, l ); if ( j == -1 ) // No such substring { yield return s; // Return original and break yield break; } while ( j != -1 ) { if ( j - i > 0 ) // Non empty? { yield return s.Substring( i, j - i ); // Return non-empty match } i = j + 1; j = s.IndexOf( c, i, l - i ); } if ( i < l ) // Has remainder? { yield return s.Substring( i, l - i ); // Return remaining trail } } ``` The above method is not necessarily faster than string.Split for small strings but it returns results as it finds them, this is the power of lazy evaluation. If you have long lines or need to conserve memory, this is the way to go. The above method is bounded by the performance of IndexOf and Substring which do too much index-out-of-range checking, and to be faster you need to optimize away these and implement your own helper methods. You can beat the string.Split performance but it's gonna take clever int-hacking. You can read my post about that [here](https://stackoverflow.com/questions/399798/memory-efficiency-and-performance-of-string-replace-net-framework/400065#400065).
Does anyone know of a faster method to do String.Split()?
[ "c#", ".net", "performance", "string", "csv" ]
Is there any Java library allowing to build a simple standalone webservice server without any application server framework?
Java 6 contains JAX-WS, which makes it very easy to host a web service in a stand-alone application: ``` javax.xml.ws.Endpoint.publish("http://localhost:8000/myService/", myServiceImplementation); ```
[Axis 2](http://ws.apache.org/axis2/) has a simple standalone server (see <http://ws.apache.org/axis2/1_4_1/installationguide.html>)
Lightweight Webservice producing in Java (without an application server)
[ "java", "web-services" ]
I've read elsewhere on here that to capture the "Enter" keystroke in a text box and use it as if pushing a button I should set the KeyPreview property of the form to true and check the value of KeyDown. I want to be able to use this functionality on several TextBox controls, each of which is associated with a different Button. My question is: how do I know which control caused the KeyPress event? The sender is listed as the form itself.
I've found a solution which appears to be working. ``` private void DeviceForm_KeyDown(object sender, KeyEventArgs e) { if (e.KeyValue == 13 && tstxtDeviceFilter.Focused) { filterByDeviceSN(); } } ``` I can't help but think there must be a better way though! --EDIT--EDIT--EDIT--EDIT--EDIT-- Well, after looking at the suggestions below (thank you) I've found a 'better' way for me in this circumstance. ``` this.tstxtDeviceFilter.KeyDown += new System.Windows.Forms.KeyEventHandler(this.tstxtDeviceFilter_KeyDown); private void tstxtDeviceFilter_KeyDown(object sender, KeyEventArgs e) { if (e.KeyValue == 13) { filterByDeviceSN(); } } ``` Obviously by trapping the event on the textbox itself rather than the form I don't need to worry about focus. Once again I feel dumb for not thinking of that for so long!
Each form has a property for an "Accept" button & "Cancel" button; these are the buttons that get "clicked" when the user presses enter and escape respectively. You can change the default button as each control gets the focus (you can have one got-focus event handler per button, and share it with a set of text boxes). If you do this then the appearance of the buttons changes, giving the user a visual cue telling them which button is the default. Alternatively, if you don't want to do that, you can use the "ActiveControl" property, and test to see which of the sets of text boxes it belongs to. Have you asked yourself, what should the default button be if it's not one of these text boxes?
How to determine which control on form has focus?
[ "c#", ".net", "winforms", "controls", "focus" ]
I'm looking for a JavaScript Testing Framework that I can easily use in whatever context, be it browser, console, XUL, etc. Is there such a framework, or a way to easily retrofit an existing framework so its context agnostic? Edit: The testing framework should **not** be tied to any other framework such as jQuery or Prototype.js and shouldn't depend on a DOM (or document object) being present. I'm looking for something to test **pure JavaScript**.
OK, here's something I just brewed based on some earlier work. I hope this would meet your needs. ## [jsUnity](http://jsunity.com/) Lightweight Universal JavaScript Testing Framework > **jsUnity** is a lightweight universal JavaScript testing framework that is > context-agnostic. It doesn't rely on > any browser capabilities and therefore > can be run inside HTML, ASP, WSH or > any other context that uses > JavaScript/JScript/ECMAScript. ## Sample usage inside HTML ``` <pre> <script type="text/javascript" src="../jsunity.js"></script> <script type="text/javascript"> function sampleTestSuite() { function setUp() { jsUnity.log("set up"); } function tearDown() { jsUnity.log("tear down"); } function testLessThan() { assertTrue(1 < 2); } function testPi() { assertEquals(Math.PI, 22 / 7); } } // optionally wire the log function to write to the context jsUnity.log = function (s) { document.write(s + "</br>"); }; var results = jsUnity.run(sampleTestSuite); // if result is not false, // access results.total, results.passed, results.failed </script> </pre> ``` The output of the above: ``` 2 tests found set up tear down [PASSED] testLessThan set up tear down [FAILED] testPi: Actual value does not match what's expected: [expected] 3.141592653589793, [actual] 3.142857142857143 1 tests passed 1 tests failed ```
[Jasmine](http://pivotal.github.com/jasmine/) looks interesting. According to the developers, it was written because none of the other JS test frameworks met all their needs in a single offering and not requiring things like DOM, jQuery, or the `window` object is one of the explicit design points. I'm thinking of using it with [env.js](http://www.envjs.com/) and Rhino/SpiderMonkey/V8/etc. to write client-side tests for my web apps which can be easily run in all the same situations as Python unit tests. (`setup.py test`, BuildBot, etc.)
Context agnostic JavaScript Testing Framework
[ "javascript", "unit-testing", "testing", "frameworks" ]
I'm assuming I need to use Connector/J and JDBC to achieve this, but I can't seem to 'install' Connector/J using the CLASSPATH thing. How do I do that? I use the IntelliJ IDE if that's relevant. I'm looking for a way to talk to a mysql database and execute and print out a few basic queries but I'm not getting anywhere because I can't even talk to the database. Any help is appreciated.
You might follow an example such as this one: <http://www.kitebird.com/articles/jdbc.html> But, you might want to consider using [Hibernate](http://www.hibernate.org/) unless you're doing only a couple basic queries.
See [Using MySQL with Java](http://dev.mysql.com/usingmysql/java/) -- the first link from [Google](http://www.google.com/search?q=mysql+java).
How do I interact with MySQL using Java?
[ "java", "mysql" ]
I am using Doxygen to generate documentation for our API, written in C#. However, it exposes private/protected members. Is there a way to hide those? I figured out how to hide files: EXCLUDE = List of file names Yet, I need more granularity and thus shield users from unnecessary API noise. A sample Doxygen file would be appreciated as well as tips/tricks. What tools do you use to generate API from the source code? I feel somewhat left in the 18th century as I use Doxygen in C# by way of C++.
I don't know how well C# is supported by Doxygen. To hide private members, set this in your `Doxyfile` configuration file: ``` EXTRACT_PRIVATE = NO ``` Many other options can be set for various kinds of extracting/hiding code elements, e.g., citing `Doxyfile` itself: ``` # If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in # documentation are documented, even if no documentation was available. # Private class members and static file members will be hidden unless # the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES EXTRACT_ALL = YES # If the EXTRACT_PRIVATE tag is set to YES all private members of a class # will be included in the documentation. EXTRACT_PRIVATE = YES # If the EXTRACT_STATIC tag is set to YES all static members of a file # will be included in the documentation. EXTRACT_STATIC = YES # If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) # defined locally in source files will be included in the documentation. # If set to NO only classes defined in header files are included. EXTRACT_LOCAL_CLASSES = YES # This flag is only useful for Objective-C code. When set to YES local # methods, which are defined in the implementation section but not in # the interface are included in the documentation. # If set to NO (the default) only methods in the interface are included. EXTRACT_LOCAL_METHODS = YES # If this flag is set to YES, the members of anonymous namespaces will be # extracted and appear in the documentation as a namespace called # 'anonymous_namespace{file}', where file will be replaced with the base # name of the file that contains the anonymous namespace. By default # anonymous namespace are hidden. EXTRACT_ANON_NSPACES = NO # If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all # undocumented members of documented classes, files or namespaces. # If set to NO (the default) these members will be included in the # various overviews, but no documentation section is generated. 
# This option has no effect if EXTRACT_ALL is enabled. HIDE_UNDOC_MEMBERS = NO # If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all # undocumented classes that are normally visible in the class hierarchy. # If set to NO (the default) these classes will be included in the various # overviews. This option has no effect if EXTRACT_ALL is enabled. HIDE_UNDOC_CLASSES = NO # If the HIDE_FRIEND_COMPOUNDS tag is set to YES, Doxygen will hide all # friend (class|struct|union) declarations. # If set to NO (the default) these declarations will be included in the # documentation. HIDE_FRIEND_COMPOUNDS = NO ```
Check out the @cond flag for doxygen. In C# I hide some of our password encryption members like this: ``` //! @cond private const String ENCRYPTEDFLAG = "xxxENCFLAGxxx"; private const String SEED = "hi_i_r_@_seed"; //! @endcond ``` The doxygen documentation would have you believe that you need a conditional symbol defined to doxygen and used on the @cond line, but that did not work for me. This method did.
Doxygen: hiding private/protected methods...and tips
[ "c#", "documentation", "doxygen" ]
I have a single process that queries a table for records where `PROCESS_IND` = 'N', does some processing, and then updates the `PROCESS_IND` to 'Y'. I'd like to allow for multiple instances of this process to run, but don't know what the best practices are for avoiding concurrency problems. Where should I start?
The pattern I'd use is as follows: * Create columns "lockedby" and "locktime" which are a thread/process/machine ID and timestamp respectively (you'll need the machine ID when you split the processing between several machines) * Each task would do a query such as: UPDATE taskstable SET lockedby=(my id), locktime=now() WHERE lockedby IS NULL ORDER BY ID LIMIT 10 Where 10 is the "batch size". * Then each task does a SELECT to find out which rows it has "locked" for processing, and processes those * After each row is complete, you set lockedby and locktime back to NULL * All this is done in a loop for as many batches as exist. * A cron job or scheduled task periodically resets the "lockedby" of any row whose locktime is too long ago, as they were presumably done by a task which has hung or crashed. Someone else will then pick them up The LIMIT 10 is MySQL specific but other databases have equivalents. The ORDER BY is important to avoid the query being nondeterministic.
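A runnable sketch of this claim-a-batch pattern, in Python with SQLite standing in for MySQL (SQLite has no `UPDATE ... LIMIT`, so the ids are selected first inside one transaction; the table and column names follow the answer, and the worker ids are made up):

```python
import sqlite3
import time

def claim_batch(conn, worker_id, batch_size=10):
    # Claim up to batch_size unlocked rows for this worker.
    # SQLite lacks UPDATE ... LIMIT, so select the ids first;
    # this runs serially here -- with truly concurrent workers,
    # MySQL's single UPDATE statement makes the claim atomic.
    with conn:  # commit (or roll back) as one transaction
        rows = conn.execute(
            "SELECT id FROM taskstable WHERE lockedby IS NULL "
            "ORDER BY id LIMIT ?", (batch_size,)).fetchall()
        ids = [r[0] for r in rows]
        conn.executemany(
            "UPDATE taskstable SET lockedby = ?, locktime = ? "
            "WHERE id = ? AND lockedby IS NULL",
            [(worker_id, time.time(), i) for i in ids])
    return ids

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE taskstable "
             "(id INTEGER PRIMARY KEY, lockedby TEXT, locktime REAL)")
conn.executemany("INSERT INTO taskstable (id) VALUES (?)",
                 [(i,) for i in range(1, 26)])

first = claim_batch(conn, "worker-A")   # claims rows 1-10
second = claim_batch(conn, "worker-B")  # claims rows 11-20
```

The point of the `AND lockedby IS NULL` guard in the UPDATE is that even if two workers race between the SELECT and the UPDATE, neither can steal a row the other already marked.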
Although I understand the intention I would disagree on going to row level locking immediately. This will reduce your response time and may actually make your situation worse. If after testing you are seeing concurrency issues with APL you should do an iterative move to “datapage” locking first! To really answer this question properly more information would be required about the table structure and the indexes involved, but to explain further. DOL, datarow locking uses a lot more locks than allpage/page level locking. The overhead in managing all the locks and hence the decrease of available memory due to requests for more lock structures within the cache will decrease performance and counter any gains you may have by moving to a more concurrent approach. Test your approach without the move first on APL (all page locking ‘default’) then if issues are seen move to DOL (datapage first then datarow). Keep in mind when you switch a table to DOL all responses on that table become slightly worse, the table uses more space and the table becomes more prone to fragmentation which requires regular maintenance. So in short don’t move to datarows straight off try your concurrency approach first then if there are issues use datapage locking first then last resort datarows.
Best practices for multithreaded processing of database records
[ "sql", "database", "multithreading", "concurrency", "sybase" ]
I am struggling with a sensible logic loop for stripping out nodes from an XML file too large to use with XPath-supporting .NET classes. I am attempting to replace the single line of code I had (that called SelectNodes with an XPath query string) with code that does the same but uses an XmlTextReader. I have to go several levels down as illustrated by the previously used XPath query (which was for reference): ``` ConfigurationRelease/Profiles/Profile[Name='MyProfileName']/Screens/Screen[Id='MyScreenId']/Settings/Setting[Name='MySettingName'] ``` I thought this would be annoying but simple. However, I just can't seem to get the loop right. I need to get a node, check a node under that to see if the value matches a target string and then walk down further if it does or skip that branch if it doesn't. In fact, I think my problem is that I don't know how to ignore a branch if I'm not interested in it. I can't allow it to walk irrelevant branches as the element names are not unique (as illustrated by the XPath query). I thought I could maintain some booleans e.g. bool expectingProfileName that gets set to true when I hit a Profile node. However, if it's not the particular profile node I want, I can't get out of that branch. So...hopefully this makes sense to someone...I've been staring at the problem for a couple hours and may just be missing something obvious..... I'd like to post a portion of the file up but can't figure out how, so the structure is roughly: ``` ConfigRelease > Profiles > Profile > Name > Screens > Screen > Settings > Setting > Name ``` I will know ProfileName, ScreenName and SettingName and I need the Setting node. I am trying to avoid reading the whole file in one hit e.g. at app start-up because half the stuff in it won't ever be used. I also have no control over what generates the xml file so cannot change it to, say, produce multiple smaller files. Any tips will be greatly appreciated. UPDATE I have re-opened this. 
A poster suggested XPathDocument which should have been perfect. Unfortunately, I didn't mention that this is a mobile app and XPathDocument is not supported. The file isn't large by most standards which is why the system was originally coded to use XmlDocument. It is currently 4MB which is apparently large enough to crash a Mobile App when it is loaded into an XmlDocument. It's probably just as well it came up now as the file is expected to get much bigger. Anyway, I am now trying the DataSet suggestion but am still open to other ideas. UPDATE 2 I got suspicious because quite a few people have said they would not expect a file this size to crash the system. Further experiments have shown that this is an intermittent crash. Yesterday it crashed every time but this morning after I reset the device, I can't reproduce it. I am now trying to figure out a reliable set of reproduction steps. And also decide the best way to handle the problem which I'm sure is still there. I can't just leave it because if the app can't access this file, it is useless, and I don't think I can tell my users that they can't run anything else on their devices when my app is running.......
I'm adding this as the issue is now dead but the selected solution doesn't match anything listed so far. Our technical architect took this issue over and decided that we should never have implemented Xml in the first place. This decision was partly due to this issue but also due to some complaints about the level of data transfer charges. His verdict is that we should have implemented a custom file format (with indexing) optimised for size and speed of query. So, the issue is on hold until that work is approved and properly specced. The end for now.
Have a look at `XPathDocument`. XPathDocument is more light-weight than XmlDocument and is optimized for read-only XPath queries.
Large XML file, XmlDocument not feasible but need to be able to search
[ "c#", "xml", "mobile", "compact-framework", "xmltextreader" ]
I am confused as to when I can use the DOM properties and when I could use the Jquery methods on a Jquery object. Say, I use a selector ``` var $elemSel = $('#myDiv').find('[id *= \'select\']') ``` At this point, $elemSel is a jquery object which I understand to be a wrapper around the array of DOM elements. I could get a reference to the DOM elements by iterating through the $elemSel object/array (Correct?) My questions: 1. Is there a way to convert this $elemSel into a non JQuery regular array of DOM elements? 2. Can I combine DOM properties and JQuery methods at the same time (something like this) ``` $elemSel.children('td').nodeName ``` (nodeName is DOM related, children is JQuery related) EDIT: What's wrong with this? ``` $elemSel.get(0).is(':checked') ``` **EDIT 2:** Thanks for the responses. I understand now that I can use the get(0) to get a DOM element. Additional questions: 1. How would I convert a DOM element to a JQuery object? 2. If I assign "this" to a variable, is that new var DOM or JQuery? If it's JQuery, how can I convert this to a DOM element? (Since I can't use get(0)) var $elemTd = $(this); 3. When I do a assignment like the one above, I have seen some code samples not include the $ sign for the variable name. Why? 4. And as for my original question, can I combine the DOM properties and JQuery functions at the same time on a JQuery object? $elemSel.children('td').nodeName
You'll need to .get(0) the result to get the DOM-ready object. ``` var myBox = $("div#myBox"); alert(myBox.get(0).id); // "myBox" ``` Read "[Peeling Away the jQuery Wrapper and Finding an Array](http://www.learningjquery.com/2008/12/peeling-away-the-jquery-wrapper)" by Cody Lindley --- **Re: Edit:** `.is()` is not a native javascript method. When you run `.get(0)`, you are no longer working off of the jQuery object, therefore you cannot expect to run jQuery methods from it. If you want to run `.is()` on a specific result, use the [`:eq(index) selector`](http://docs.jquery.com/Selectors/eq#index), or the [`.eq(index) method`](http://docs.jquery.com/Traversing/eq#index): ``` $("div:eq(1)").is(":checked"); // gets second div $("div").eq(1).is(":checked"); // gets second div ``` --- **Re: Edit # 2** > Bob, you really should create new > questions, rather than asking more and > more here. Converting a DOM element to a jQuery object is done by passing it to the jQuery function as a selector: ``` var myBox = document.createElement("div"); var myBoxJQ = $(myBox); ``` Assigning `this` to a variable: it depends on when you do it. If by "this" you're referring to a jQuery object, then `this` will be a jQuery object. You can convert it by following `this` with `.get(0)`. When `this` is referring to a jQuery object, you don't need to wrap it in the $(). This is redundant. And lastly, `$elemSel.children('td').nodeName` can be done like this: `$elemSel.children('td')[0].nodeName` or `$elemSel.children('td').get(0).nodeName`, where the 0 is the index of which item to access.
There is also a shortcut for the get(index) function: ``` $(selector)[0].nodeName ``` Source: <http://docs.jquery.com/Core/get#index>
JQuery methods and DOM properties
[ "javascript", "jquery", "dom" ]
Suppose you have a list of acronyms that define a value (ex. AB1,DE2,CC3) and you need to check a string value (ex. "Happy:DE2|234") to see if an acronym is found in the string. For a short list of acronyms I would usually create a simple RegEx that used a separator (ex. (AB1|DE2|CC3) ) and just look for a match. But how would I tackle this if there are over 30 acronyms to match against? Would it make sense to use the same technique (ugly) or is there a more efficient and elegant way to accomplish this task? Keep in mind the example acronym list and example string are not the actual data format that I am working with, rather just a way to express my challenge. BTW, I read a SO [related question](https://stackoverflow.com/questions/192957/efficiently-querying-one-string-against-multiple-regexes) but didn't think it applied to what I was trying to accomplish. EDIT: I forgot to include my need to capture the matched value, hence the choice to use Regular Expressions...
Personally I don't think 30 is particularly large for a regex so I wouldn't be too quick to rule it out. You can create the regex with a single line of code: ``` var acronyms = new[] { "AB", "BC", "CD", "ZZAB" }; var regex = new Regex(string.Join("|", acronyms), RegexOptions.Compiled); for (var match = regex.Match("ZZZABCDZZZ"); match.Success; match = match.NextMatch()) Console.WriteLine(match.Value); // returns AB and CD ``` So the code is relatively elegant and maintainable. If you know the upper bound for the number of acronyms I would do some testing, who knows what kind of optimizations there are already built into the regex engine. You'll also be able to benefit for free from future regex engine optimizations. Unless you have reason to believe performance will be an issue keep it simple. On the other hand regex may have other limitations e.g. by default if you have acronyms AB, BC and CD then it'll only return two of these as a match in "ABCD". So it's good at telling you there is an acronym but you need to be careful about catching multiple matches. When performance became an issue for me (> 10,000 items) I put the 'acronyms' in a HashSet and then searched each substring of the text (from min acronym length to max acronym length). This was ok for me because the source text was very short. I'd not heard of it before, but at first look the Aho-Corasick algorithm, referred to in the question you reference, seems like a better general solution to this problem.
If acronyms have a fixed size (like in the above example), you could calculate a hash for all of them (could be done once per application life) and then split the string into such overlapping pieces and calculate hashes for them too. Then all you'd have to do is to search for values from one array in the other. You probably could create a suffix/prefix tree or something similar from the acronyms and search using this information, there's plenty of algorithms in Wikipedia to do just that. You could also create a deterministic automaton for each of the acronyms but it's very similar to the previous approach.
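The fixed-width sliding-window idea above can be sketched like this (Python for illustration; the helper name is made up, and it assumes all the acronyms share one length):

```python
def find_acronyms(text, acronyms):
    """Return the known acronyms that occur in text, in order found.

    Sketch of the fixed-width sliding-window approach described
    above: put the acronyms in a hash set and test every substring
    of matching length. Assumes all acronyms have the same length.
    """
    width = len(next(iter(acronyms)))
    wanted = set(acronyms)  # O(1) membership tests
    found = []
    for i in range(len(text) - width + 1):
        piece = text[i:i + width]  # overlapping window
        if piece in wanted and piece not in found:
            found.append(piece)
    return found

print(find_acronyms("Happy:DE2|234 and AB1", {"AB1", "DE2", "CC3"}))
```

This also naturally captures the matched value, which the question asks for; for mixed-length acronyms you would repeat the scan once per distinct length, or move to Aho-Corasick as suggested.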
Best way to test for existing string against a large list of comparables
[ "c#", "regex", "string", "compare" ]
Hi I'm looking to parse spreadsheets (xls/ods) in Groovy. I have been using the Roo library for Ruby and was looking to try the same tasks in Groovy, as Java is already installed on a development server I use, and I would like to keep the number of technologies on the server to a simple core few. I am aware that the ods format is zipped XML, and so can be parsed as such, but I would like to process the file using spreadsheet concepts, not XML concepts. The ability to process xls files is not of major importance, but would save me having to save multiple xls files to ods (as this is for parsing data from clients). Thanks
I would suggest [Apache POI](http://poi.apache.org/) for access to .xls files. I've never had to work with the .ods format, so no information on that one.
There's also [JExcelAPI](http://jexcelapi.sourceforge.net/), which has a nice, clean, simple interface (for the most part). Can't help you with ODS Files though.
Spreadsheet Parser in Java/Groovy
[ "java", "parsing", "groovy", "spreadsheet" ]
Is it possible to get the pydev debugger to break on exception?
~~On **any** exception?~~ If my memory serves me right, in PyDev (in Eclipse) this is possible. --- **EDIT:** went through it again, checked [pdb documentation](http://docs.python.org/library/pdb.html), can't find a way to set an exception breakpoint. If I may suggest a really crude workaround, but if you must, you can call your program from within a `try-except` block, set a breakpoint there, and once it breaks in the `except` block just go up the stack and debug your error. **Another edit** This functionality [has been added to PyDev](https://stackoverflow.com/a/6655894/17523)
This was added by the PyDev author, under Run > Manage Python Exception Breakpoints
Break on exception in pydev
[ "python", "eclipse", "debugging", "exception", "pydev" ]
Is there a method/pattern/library to do something like that (in pseudo-code): ``` task_queue.push_back(ObjectType object1, method1); task_queue.push_back(OtherObjectType object2, method2); ``` so that I could do the *something* like: ``` for(int i=0; i<task_queue.size(); i++) { task_queue[i].object -> method(); } ``` so that it would call: ``` obj1.method1(); obj2.method2(); ``` Or is that an impossible dream? And if there's a way to add a number of parameters to call - that would be the best. **Doug T. please see this** Excellent answer! **Dave Van den Eynde**'s version works well too.
Yes you would want to combine [boost::bind](http://www.boost.org/doc/libs/1_38_0/libs/bind/bind.html) and [boost::functions](http://www.boost.org/doc/libs/1_38_0/doc/html/function.html) its very powerful stuff. **This version now compiles, thanks to Slava!** ``` #include <boost/function.hpp> #include <boost/bind.hpp> #include <iostream> #include <vector> class CClass1 { public: void AMethod(int i, float f) { std::cout << "CClass1::AMethod(" << i <<");\n"; } }; class CClass2 { public: void AnotherMethod(int i) { std::cout << "CClass2::AnotherMethod(" << i <<");\n"; } }; int main() { boost::function< void (int) > method1, method2; CClass1 class1instance; CClass2 class2instance; method1 = boost::bind(&CClass1::AMethod, class1instance, _1, 6.0) ; method2 = boost::bind(&CClass2::AnotherMethod, class2instance, _1) ; // does class1instance.AMethod(5, 6.0) method1(5); // does class2instance.AMethod(5) method2(5); // stored in a vector of functions... std::vector< boost::function<void(int)> > functionVec; functionVec.push_back(method1); functionVec.push_back(method2); for ( int i = 0; i < functionVec.size(); ++i) { functionVec[i]( 5); }; return 0; }; ```
Since C++ does not support heterogeneous containers, your objects will have to have a shared base (so you can get away with having a container for pointers to this base class). ``` class shared_base { public: virtual void method() = 0; // force subclasses to do something about it }; typedef std::list<shared_base*> obj_list; class object : public shared_base { public: virtual void method() { methodx(); } private: void methodx(); }; // ... list.push_back(new object); // ... std::for_each(list.begin(), list.end(), std::mem_fun(&shared_base::method)); ``` Are you trying to implement the [Hollywood principle](http://en.wikipedia.org/wiki/Hollywood_Principle) otherwise known as Inversion-of-Control (and also, poor man's error handling)? Look up both the [Observer](http://en.wikipedia.org/wiki/Observer_pattern) and [Visitor](http://en.wikipedia.org/wiki/Visitor_pattern) patterns -- they might be of interest.
C++ bound method queue (task manager/scheduler?)
[ "c++", "queue", "task" ]
In python-mode (for emacs), hitting Control-C\Control-C will execute the current buffer. However, when execution is finished, the output buffer springs up and splits my editing window in half. This is a *complete* pain, especially given that there's generally no output in the buffer anyway! Is there a way to stop the buffer from appearing? Also, how can I send painful electric shocks to the programmer who thought unexpectedly interrupting my train of thought with an empty buffer was a good idea? **Edit:** Apparently, there are uses for this behavior, most notably in seeing the output of a program. That's fine, but if there's no output (as in the program with which I'm tinkering), it is *really* dumb to cut my buffer in half with a blank window.
What python-mode are you using? Unfortunately, there are *several*. Type C-h k and then C-c C-c .. what does it say in the modeline? Does it say "py-execute-buffer" or does it say "python-send-buffer", the first kind indicates you're using Tim Peters' python-mode, while the second indicates you're using Dave Love's python-mode. I myself prefer Dave Love's python mode, and it's the version that ships with Emacs 23 (and maybe earlier too?). Dave Love's does not pop up the buffer when you run C-c C-c, you have to explicitly switch to the \*Python\* buffer if you want to see it. If you're really intent on sticking with Tim Peters' version, you could always put something like the following in your .emacs and you'll never see the output buffer again unless explicitly moved to. ``` (defadvice py-execute-buffer (after advice-delete-output-window activate) (delete-windows-on "*Python Output*")) ```
Sorry that I can't help you with emacs, but as for your second question, a modified aluminum keyboard from Apple may be a solution: <http://www.apple.com/keyboard/>
How do I suppress python-mode's output buffer?
[ "python", "emacs" ]
We are looking at switching from phpundercontrol to Hudson (it looks to have some really cool features!) but I can't figure out how to get phpunit logs to show up. I have phpunit running fine in Hudson with ant, and --log-xml is putting a phpunit.xml in the appropriate builds/ folder for that build. But I can't figure out how to get that to show up for the build, so that we can see the tests that ran and which failed, if any. After I figure that out, getting coverage, metrics, and api will be next :) It seems like it should be trivial for anything which generates its own HTML to tell Hudson for example "For this project show a link to 'API' for each build and link to builds/$BUILDNUM/api/index.html".
I installed the xUnit plugin, pointed it at my log file (from job config), and it works like a charm. It appears there is no longer a need for any custom hacks. <http://wiki.hudson-ci.org/display/HUDSON/xUnit+Plugin>
With the last answer being from 2009 and [a lot of people migrating from Hudson to Jenkins](http://wiki.jenkins-ci.org/display/JENKINS/Upgrading+from+Hudson+to+Jenkins) now [due to Oracle](http://jenkins-ci.org/content/jenkins), consider using the **Jenkins Template for PHP**, offering a free and convenient template for all your configuration needs of [PHPQATools](http://phpqatools.org/), like pdepend, phpmd, phpcs and phpunit in one convenient template. * <http://jenkins-php.org/> * <http://edorian.posterous.com/setting-up-jenkins-for-php-projects> There is also the [IRC channel #jenkins-php on Freenode](http://webchat.freenode.net/) for support.
How might I integrate phpunit with Hudson CI?
[ "", "php", "hudson", "phpunit", "" ]
I have a weird situation where I open a form from my application, hide it, and when I try to show it a second time, the Visible property says true, TopMost true also, location is correct, yet I can't see it. Any ideas where and what to look at to work out what the problem is? (Note: there may be issues with event handlers/delegate, but I'm not too sure where to start) *UPDATE* - I set ShowInTaskbar to true - and I see it in the taskbar, but no way to bring it to front, and right click on the taskbar button does nothing. Alt-Tab neither. Doesn't show up in TaskManager. Tried Crack.NET, but can't find any properties which let me get 'into' the object instance. .Net Memory Profiler doesn't help for this, I believe, and Winspector I'm not sure what to do with. **UPDATE2** - I lied, it's not directly a WinForm, I was inheriting from a custom class which in turn inherits from WinForm. I tried inheriting directly from Form and it works. So now I've got to go and look in the code of the custom class. I'll keep you posted...
OK, nailed it. The Custom form class has a 'cool' function - it fades in and out. But only once, apparently. So the reason I couldn't see the form is because the Opacity was 0!
Could it be out of the screen boundaries? (that is, moved out). Use Spy++ or [Winspector](http://www.windows-spy.com/) and try to find this missing window and see exactly where/why it's hiding.
.Net winform property says Visible but I can't see it
[ "", "c#", ".net", "winforms", "" ]
Is there a way that I can execute the same line of code for each "Case" but only have to type it in once instead of having the same code specified for all Cases ? ``` switch (SomeTest) { case "test1": { // Do something for test 1 break; } case "test2": { // Do something for test 2 break; } case "test3": { // Do something for test 3 break; } // =====> Then do something generic here for example if case is test1, test2 or test3 } ```
Are you possibly over thinking it?

```
switch(SomeTest) {
   // specific stuff
}

// code you want running for every case
```

Otherwise the best you can do without setting a flag or something is:

```
switch(SomeTest) {
   // specific stuff
}

switch(SomeTest) {
   case "Test1":
   case "Test2":
   case "Test3":
      // stuff for the matching cases
      break;
}
```

Or if you want to run the code for every case you match:

```
bool runGenericStuff = true;
switch(SomeTest) {
   // specific stuff
   default:
      runGenericStuff = false;
      break;
}

if (runGenericStuff) {
   // run generic stuff
}
```

That saves you having to set the flag in every case.
Put the common logic in a separate method and call it on each case label that requires it.
Is there a way of performing some code within a Switch statement that executes only if any of the cases have been passed?
[ "", "c#", "switch-statement", "" ]
I'm attempting to declare a `Row` and a `Column` class, with the `Row` having a private `std::map` with values pointing to a templated `Column`. Something like this:

```
template <typename T>
class DataType
{
   private:
      T type;
};

template <typename T>
class Field
{
   private:
      T value;
      DataType<T> type;
};

class Row
{
   private:
      std::map<unsigned long,Field*> column;
};
```

Well, I suppose in principle the `Row` class shouldn't have to know which kind of `Field` (or `Column`) we'd like to use, i.e. whether it's a `Field<int>` in column 1 or a `Field<double>` in column 2. But I'm not sure what's the correct syntax for the `Row::column` declaration, or if the `std::map` is limited in this sense and I should be using something else. I appreciate your suggestions and thank you for them in advance.
`Field` alone is not a type, but a template which can generate a family of types, such as `Field<int>` and `Field<double>`. All these fields are not related such that the one is somehow derived from the other or such. So you have to establish some relation between all these generated types. One way is to use a common non-template base class: ``` class FieldBase { }; template <typename T> class Field : public FieldBase { private: T value; DataType<T> type; }; class Row { private: std::map<unsigned long,FieldBase*> column; }; ``` And consider using smart pointer instead of that raw pointer in the code. Anyway, now the problem is that the type-information is lost - whether you point to a `Field<double>` or to a `Field<int>` is not known anymore and can only be detected by keeping some sort of type-flag in the base which is set by the templated derived class - or by asking RTTI using ``` dynamic_cast<Field<int>*>(field) != 0 ``` But that's ugly. Especially because what you want there is a value semantic. I.e you would want to be able to copy your row, and it would copy all the fields in it. And you would want to get a double when a double is stored - without first using RTTI to hack your way to the derived type. One way of doing it is to use a discriminated union. That is basically an union for some arbitrary types and in addition a type-flag, which stores what value is currently stored in that field (e.g whether a double, int, ...). For example: ``` template <typename T> class Field { private: T value; DataType<T> type; }; class Row { private: std::map<unsigned long, boost::variant< Field<int>, Field<double> > > column; }; ``` boost::variant does all the work for you. You can use visitation to make it call a functor using the right overload. Have a look at its [manual](http://www.boost.org/doc/libs/1_38_0/doc/html/variant.html)
1. You've got an error there: you have two "value" members in Field (one should probably be "type"). 2. Please don't keep raw pointers in the map's value. Use [boost::shared\_ptr](http://www.boost.org/doc/libs/1_38_0/libs/smart_ptr/shared_ptr.htm). 3. Also, you should have a good reason for writing such classes when there is plenty of DB/table handling code out there already which you can probably use. So, if it's applicable, consider using something existing and not writing your own table handling code. Now, to answer your question :), the Field<> classes can inherit from a common base class that's shared by all data types. This way a container such as your column map can keep pointers (make that **shared** pointers) to derived objects that are instances of a template class.
C++ std::map of template-class values
[ "", "c++", "templates", "methods", "stdmap", "" ]
Thanks for the response. Here's the problem....the Oildatasetstatusid's (5-11) are mapped to labid=4. Labid=4 has input data for multiple customers, so the update cannot be performed based on oildatasetstatusid. The updates have to be based on samplepointid. That's why 3 tables are being used. I am among the SQL ignorant and have been asked to perform a task that is way out of my realm of understanding. Our customer wants to change their existing severity levels, e.g. change from Low Warning (severityid=7) to Low (severityid=18). The severity information is entered in the database by a 3rd party oil lab. I want to generate one list using information from three different tables, then change the severity mapping on that list. Have I provided enough information below? Can someone please help???? ``` table: samplepoint - samplepointid table: oildataset - oildatasetid - oillabid (4=R&G) - samplepointid table: Oildatasetstatus - Oildatasetid - Oildatasetstatusid (5,6,7,8,9,10,11) needs to be changed to (16,17,18,19,20) - 5=16 - 6=17 - 7,8,9=18 - 10=19 - 11=20 ```
> I want to generate one list using information from three different tables The schema you've posted has no actual information in the samplepoint table, so it'd actually just be information from two tables. Essentially I'm guessing you simply want to join the ‘oillabid’ and ‘samplepointid’ values from ‘oildataset’ into each row of ‘oildatasetstatus’? ``` SELECT * FROM oildatasetstatus JOIN oildataset ON oildataset.oildatasetid=oildatasetstatus.oildatasetid ``` > then change the severity mapping on that list. I'm guessing you mean you want your query to have different severities, rather than actually UPDATEing your stored ‘oildatasetstatus’ table? If so, you can use the CASE operator: ``` SELECT oildataset.oildatasetid, oildataset.oillabid, oildataset.samplepointid, CASE oildatasetstatus.oildatasetstatusid WHEN 5 THEN 16 WHEN 6 THEN 17 WHEN 7 THEN 18 WHEN 8 THEN 18 WHEN 9 THEN 18 WHEN 10 THEN 19 WHEN 11 THEN 20 END AS newstatus FROM oildatasetstatus JOIN oildataset ON oildataset.oildatasetid=oildatasetstatus.oildatasetid ```
I don't know if you'll have to do anything else, but if you just want to replace those severities, do this:

```
UPDATE Oildatasetstatus SET Oildatasetstatusid=16 WHERE Oildatasetstatusid=5;
UPDATE Oildatasetstatus SET Oildatasetstatusid=17 WHERE Oildatasetstatusid=6;
UPDATE Oildatasetstatus SET Oildatasetstatusid=18 WHERE Oildatasetstatusid IN (7, 8, 9);
UPDATE Oildatasetstatus SET Oildatasetstatusid=19 WHERE Oildatasetstatusid=10;
UPDATE Oildatasetstatus SET Oildatasetstatusid=20 WHERE Oildatasetstatusid=11;
```

If **Oildatasetid** is the primary key then this should do it. Be careful and make a backup of the data. What is the connection between **Oildatasetstatus** and **oildataset**?
SQL Statement to Pull a List Using Data from Multiple Tables
[ "", "sql", "" ]
Is there a method to create a copy of an animated gif image using C#? What I want is to generate a copy of a given gif image using the height and width parameters that the user provides. I have tried for a couple of hours to accomplish this but the resulting image does not preserve the original animations.
You need to loop through the frames in the animated GIF and resize each one. May also want to take a look at [GifLib](http://www.codeplex.com/GifLib).
Took me a while to find this, but finally found a solution: Install **Magick.NET** via NuGet, license can be found here: <https://magick.codeplex.com/license> Example code: ``` var newWidth = 100; using (var collection = new MagickImageCollection(new FileInfo(@"C:\test.gif"))) { collection.Coalesce(); foreach (var image in collection) { image.Resize(newWidth, 0); } collection.Write(@"c:\resized.gif"); } ``` From my tests, this works with alpha channels and varying frame rates. Seems to be perfect!
How to resize an animated gif image using C#?
[ "", "c#", "image", "gif", "animated-gif", "" ]
I'm hoping to find an open source .net common library, but I'm having a hard time finding one. What I'm looking for is something that contains: Logging, Caching, String Manipulation, Config Reading, etc. Do you know if anything like this exists or is this more likely something that I will need to create myself?
Use Microsoft Enterprise Application Blocks Get it from here <http://msdn.microsoft.com/en-us/library/cc467894.aspx> Source code here <http://www.codeplex.com/entlib>
* [Microsoft Enterprise Library](http://msdn.microsoft.com/en-us/library/cc467894.aspx) - Built by Microsoft, source code is available * [Spring.NET](http://www.springframework.net/) - Open source development framework
C# Common Library
[ "", "c#", ".net", "architecture", "" ]
Say I have a function similar to this: ``` function getGroupValue(group) { for (var i=0; i < group.length; i++) { if (group[i].checked) { return group[i].value; } } } ``` that i can call with something like `getGroupValue(document.forms[0].myGroup)` How do I convert this to just passing in the name of a html radio button group like `getGroupValue('myGroupName')` ?
This should work: ``` function getGroupValue(group) { var g = document.getElementsByName(group); for (var i=0; i < g.length; i++) { if (g[i].checked) { return g[i].value; } } } ```
I'm assuming there's only one form on your page, given the way you were originally calling the function. If that's a valid assumption to make, then you can do the following: ``` function getGroupValue(groupName) { var group = document.forms[0][groupName]; for (var i = 0, n = group.length; i < n; i ++) { if (group[i].checked) { return group[i].value; } } } ``` Javascript lets you specify attributes/children of the form using the brackets, so you can just use the groupName you passed in as a child of the form itself.
How to pass the html name of button group to a javascript function
[ "", "javascript", "" ]
I am trying to import financial information using the OFX protocol which is used by Quicken and Money. I have had a pretty good look around for an open source library but so far have come up empty. Does anyone know of anything?
[GnuCash](http://www.gnucash.org/) can do it, and it looks like they use [LibOFX](http://libofx.sourceforge.net/). Also, there are some python scripts and OFX resources [here](http://www.jongsma.org/gc/).
How about ["Class to transform ofx (Microsoft Money) file into DataSet"](http://www.codeproject.com/KB/aspnet/Ofx_to_DataSet.aspx)? Note I'm not hugely familiar with OFX - that is just a quick search result...
Importing OFX files
[ "", "c#", "currency", "ofx", "" ]
I am looking for a simple and uncatchable way to terminate the Mac port of my C++ application. In Windows I was using ``` TerminateProcess(GetCurrentProcess, 0); ``` What's the equivalent command I can use with Mac OS X / XCode / GCC?
Actually you want `_exit` if you want to have the same semantics as `TerminateProcess`. `exit` semantics are more closely aligned with `ExitProcess`.
A closer equivalent to [TerminateProcess](http://msdn.microsoft.com/en-us/library/ms686714(VS.85).aspx) would be to send a [SIGKILL](http://en.wikipedia.org/wiki/SIGKILL) with [kill](http://developer.apple.com/DOCUMENTATION/DARWIN/Reference/ManPages/man2/kill.2.html); both terminate the current process immediately and can't be trapped. This is the same as [\_exit](http://developer.apple.com/DOCUMENTATION/DARWIN/Reference/ManPages/man2/_exit.2.html).

```
kill(getpid(), SIGKILL);
```
Mac OS X equivalent for TerminateProcess(GetCurrentProcess,0);
[ "", "c++", "macos", "terminate", "" ]
Is there a built in method or defacto default plugin that will let you automatically assign an unique ID to an element in jQuery, or do you need to implement something like this yourself? I'm looking for the jQuery equivalent to [Prototype's identify method](http://prototypejs.org/api/element/identify) Here's an example. I have some HTML structure on a page that looks like this ``` <span id="prefix_1">foo bar</span> ... <div id="foo"> <span></span> <span></span> <span></span> </div> ``` I want to assign each of the spans an ID that will be unique to the page. So after calling something like this ``` $('#foo span').identify('prefix'); //fake code, no such method ``` The rendered DOM would look something like this ``` <span id="prefix_1">foo bar</span> ... <div id="foo"> <span id="prefix_2"></span> <span id="prefix_3"></span> <span id="prefix_4"></span> </div> ``` Is there anything official-ish/robust for jQuery, or is this something most jQuery developers roll on their own?
``` jQuery.fn.identify = function(prefix) { var i = 0; return this.each(function() { if(this.id) return; do { i++; var id = prefix + '_' + i; } while($('#' + id).length > 0); $(this).attr('id', id); }); }; $('span').identify('test'); ``` Tested this on: ``` <span id='test_2'></span> <span>test1</span> <span>test2</span> <span>test3</span> ``` Turned it to: ``` <span id="test_2"></span> <span id="test_1">test1</span> <span id="test_3">test2</span> <span id="test_4">test3</span> ```
Not that I know of, but you could do this yourself by adding the `id` attribute and supplying a GUID. To add the attribute: ``` $(foo).attr( "id", createGuid() ); ``` For the `createGuid()` implementation, [see this question](https://stackoverflow.com/questions/105034/how-to-create-a-guid-in-javascript). Note that you could easily turn this into a [jQuery plug-in](http://blog.jeremymartin.name/2008/02/building-your-first-jquery-plugin-that.html) so that this function (you could even name it `identify()`) is available on any `$-`expression.
Does jQuery have an equivalent to Prototype's Element.identify?
[ "", "javascript", "jquery", "ajax", "dom", "prototypejs", "" ]
So I understand that there are a few options available as far as parsing straight XML goes: NSXMLParser, TouchXML from TouchCode, etc. That's all fine, and seems to work fine for me. The real problem here is that there are dozens of small variations in RSS feeds (and Atom feeds too), so supporting all possible permutations of feeds available out on the Internet gets very difficult to manage. I searched around for a library that would handle all of these low-level details for me, but came out without anything. Since one could link to an external C/C++ library in Objective-C, I was wondering if there is a library out there that would be best suited for this task? Someone must have already created something like this, it's just difficult to find the "right" option from the thousands of results in Google. Anyway, what's the best way to parse RSS/Atom feeds in an iPhone application?
"Best" is relative. The best performance you'll need to go the SAX route and implement the handlers. I don't know of anything out there open source available (start a google code project and release it for the rest of us to use!) Whatever you do, it's probably a really bad idea to try and load the whole XML file into memory and act on it like a DOM. Chances are you'll get feeds that are much larger than you can handle on the device leading to frequent memory warnings and crashes.
I've just released an open source [RSS/Atom Parser for iPhone](http://github.com/mwaterfall/MWFeedParser) and hopefully it might be of some use. I'd love to hear your thoughts on it too!
What's the best way to parse RSS/Atom feeds for an iPhone application?
[ "", "c++", "iphone", "c", "objective-c", "" ]
Every coding standard I've ever seen has a recommended or absolute limit on number of characters in a line. There are various ways of working within this limitation, but I've not seen any specific guidance in this regard. Obviously, if possible, don't write excessively long lines. But what if that's not practical? How should long lines be handled? Here are a couple of examples ``` if ($Stmt = $Mysqli->prepare("SELECT color, pattern, size, manufacturer, mfgSku, storeLocation, aisle, status FROM tblItems WHERE ourSku = ?")) { ``` or ``` $flavors = array ('chocolate', 'strawberry', 'vanilla', 'cookie dough', 'chocolate chip', 'mint chocolate chip', 'rocky road', 'peach', 'fudge brownie', 'coffee', 'mocha chip'); ``` or ``` $Stmt->bind_result( $this->_firstName, $this->_lastName, $this->_BillToAddress->address1, $this->_BillToAddress->address2, $this->_BillToAddress->city, $this->_BillToAddress->state, $this->_BillToAddress->zip, $this->_BillToAddress->country, $this->_email, $this->_status, $this->_primaryPhone, $this->_mobilePhone ); ``` In each of these examples, the indenting of lengthy code is different. Is there a better or more "standard" way of doing this? Should extra lines always be indented the same way. Or is this OK?
There is a pattern you can see in each example - they are indented to the first parameter of the function. This is a good standard to follow as it transposes the data from horizontal to vertical and the columns allow easy reading. For other line length issues, such as lengthy computations, the preferred method is to break them down. Calculating the Julian date, or Easter, is done in several steps instead of one long calculation.
My personal preference is the following; ``` $Stmt->bind_result( $this->_firstName, $this->_lastName, $this->_BillToAddress->address1, $this->_BillToAddress->address2, $this->_BillToAddress->city, $this->_BillToAddress->state, $this->_BillToAddress->zip, $this->_BillToAddress->country, $this->_email, $this->_status, $this->_primaryPhone, $this->_mobilePhone ); ``` That way the closing bracket and semi-colon are on the same indent as the opening call. Not all languages support having parameters on another line to the method call though...
Coding standards and line length
[ "", "php", "coding-style", "" ]
This might seem like a pretty detailed question about Easymock, but I'm having a hard time finding a support site/forum/mailing list for this library. I'm encountering a bug when using the `captures()` method that seems to return the captured parameters out of order. Here's a simplified version of what I am testing: ``` public class CaptureTest extends TestCase { // interface we will be mocking interface Processor { void process(String x); } // class that uses the interface above which will receive the mock class Component { private Processor processor; private String[] s = { "one", "two", "three", "four" }; Component(Processor processor) { this.processor = processor; } public void doSomething() { for (int i = 0; i < s.length; i++) { processor.process(s[i]); } } } public void testCapture() { //create the mock, wire it up Processor mockProcessor = createMock(Processor.class); Component component = new Component(mockProcessor); //we're going to call the process method four times //with different arguments, and we want to capture //the value passed to the mock so we can assert against it later Capture<String> cap1 = new Capture<String>(); Capture<String> cap2 = new Capture<String>(); Capture<String> cap3 = new Capture<String>(); Capture<String> cap4 = new Capture<String>(); mockProcessor.process(and(isA(String.class), capture(cap1))); mockProcessor.process(and(isA(String.class), capture(cap2))); mockProcessor.process(and(isA(String.class), capture(cap3))); mockProcessor.process(and(isA(String.class), capture(cap4))); replay(mockProcessor); component.doSomething(); //check what values were passed to the mock assertEquals("one", cap1.getValue()); assertEquals("two", cap2.getValue()); assertEquals("three", cap3.getValue()); assertEquals("four", cap4.getValue()); verify(mockProcessor); } } ``` (Please note that this is just a simplified test case - I know that I could specify the exact value of the arguments I expect passed to my mock, but in my real case the arguments 
are complex objects with a handful of fields, and I want to capture the object so I can assert against just a few of those fields without re-creating the entire object in my test case). When I run the test, it fails at: > junit.framework.ComparisonFailure: expected:<[one]> but was:<[four]> Meaning that the parameter that EasyMock is capturing in `cap1` is not the first call to the method, but the last (since the value is `four`). I get the same results if I reverse the `captures()` declarations, i.e. use `cap4` with the first method call, etc. This seems like it might be a bug within EasyMock - different parameters passed to the same method in different invocations don't seem to be capture correctly. Is anyone else using `capture()` with EasyMock and having similar problems? Is there an easy workaround you know of, or a different way I can capture the parameters being passed to my mock's methods? *Update 1*: fixed code sample to show I am using `createMock`, not `createStrictMock`, but I get the same error with both (although the actual value of what is captured changes).
I've received [an answer](https://sourceforge.net/tracker/?func=detail&atid=567837&aid=2617107&group_id=82958) on the bug I submitted to the Easymock sourceforge site, and a developer has confirmed it is indeed a bug with this version of Easymock. > It is indeed a bug. The capture is done even if it was already done. The > current workaround is to implement your own capture object and override > setValue to do this: ``` @Override public void setValue(T value) { if(!hasCaptured()) { super.setValue(value); } } ```
I was playing around with your test and could not solve it. However I extended the Capture class to see if the values were set in a different order (I was suspicious that EasyMock internally was using a hash with a key generated from the methods and the parameters). I was wrong: the methods are set in the correct order. But there is something really weird going on.. It seems that the algorithm does some kind of assigning pattern.. Well let me show the code and the strange output.... BTW the changes from mock, niceMock and strictMock didn't make any difference..

```
class MyCapture extends Capture<String> {
    private String id;

    public MyCapture(String id) {
        super();
        System.out.printf("Constructor %s expecting %s\n", id, this.getClass().getName());
        this.id = id;
    }

    private static final long serialVersionUID = 1540983654657997692L;

    @Override
    public void setValue(String value) {
        System.out.printf("setting value %s expecting %s \n", value, id);
        super.setValue(value);
    }

    @Override
    public String getValue() {
        System.out.printf("getting value %s expecting %s \n", super.getValue(), id);
        return super.getValue();
    }
}

public void testCapture() {
    // create the mock, wire it up
    Processor mockProcessor = createStrictMock(Processor.class);
    Component component = new Component(mockProcessor);

    // we're going to call the process method four times
    // with different arguments, and we want to capture
    // the value passed to the mock so we can assert against it later
    Capture<String> cap1 = new MyCapture("A");
    Capture<String> cap2 = new MyCapture("B");
    Capture<String> cap3 = new MyCapture("C");
    Capture<String> cap4 = new MyCapture("D");

    mockProcessor.process(and(isA(String.class), capture(cap1)));
    mockProcessor.process(and(isA(String.class), capture(cap2)));
    mockProcessor.process(and(isA(String.class), capture(cap3)));
    mockProcessor.process(and(isA(String.class), capture(cap4)));
    replay(mockProcessor);

    component.doSomething();

    // check what values were passed to the mock
    assertEquals("A", cap1.getValue());
    assertEquals("B", cap2.getValue());
    assertEquals("C", cap3.getValue());
    assertEquals("D", cap4.getValue());
    verify(mockProcessor);
}
```

**And this is the output:**

```
Constructor A expecting com.comp.core.dao.impl.CaptureTest$MyCapture
Constructor B expecting com.comp.core.dao.impl.CaptureTest$MyCapture
Constructor C expecting com.comp.core.dao.impl.CaptureTest$MyCapture
Constructor D expecting com.comp.core.dao.impl.CaptureTest$MyCapture
calling process A
setting value A expecting A
calling process B
setting value B expecting A <<Setting the wrong guy
setting value B expecting A <<Setting the wrong guy
setting value B expecting B <<Ops this is the right one..stop
calling process C
setting value C expecting B <<Setting the wrong guy
setting value C expecting B <<Setting the wrong guy
setting value C expecting C <<Setting the wrong guy
calling process D
setting value D expecting C <<Setting the wrong guy
setting value D expecting C <<Setting the wrong guy
setting value D expecting D <<Ops this is the right one..stop
getting value B expecting A
```

Sorry I can't help you more. It might indeed be a bug in EasyMock.
Easymock: does the order of captures matter?
[ "", "java", "mocking", "easymock", "" ]
On Windows/c++, I want to customize the assert dialog box to ignore an assertion forever, so I can be more aggressive with assertions. I understand how hard it is to write a correct assert macro, and do not wish to do this, just hook the dialog code. Is there an easy way (or concise hack) to do this? [article on assert macro dangers](http://tinyurl.com/csgqbn) (googlecache) update: more aggressive => use far more frequently and for noncrash **bugs**. I want to be able to ignore an assertion forever so if a minor bug assertion occurs in a loop it doesn't effectively halt my process.
Look into the [\_CrtSetReportHook](http://msdn.microsoft.com/en-us/library/0yysf5e6(VS.80).aspx) function or the newer [\_CrtSetReportHook2](http://msdn.microsoft.com/en-us/library/94a21kwy(VS.80).aspx). You can use it to install a hook that remembers "seen" messages, and reports them as handled when seen again.
If by "more aggressive" you mean using assertions for error handling, then you're better off using exceptions.
customizing assert macro
[ "", "c++", "windows", "assert", "" ]
In C# the Main class has string[] args parameter. What is that for and where does it get used?
From the [C# programming guide on MSDN](http://msdn.microsoft.com/en-us/library/cb20e19t.aspx): **The parameter of the Main method is a String array that represents the command-line arguments** So, if I had a program (MyApp.exe) like this: ``` class Program { static void Main(string[] args) { foreach (var arg in args) { Console.WriteLine(arg); } } } ``` That I started at the command line like this: ``` MyApp.exe Arg1 Arg2 Arg3 ``` The Main method would be passed an array that contained three strings: "Arg1", "Arg2", "Arg3". If you need to pass an argument that contains a space then wrap it in quotes. For example: ``` MyApp.exe "Arg 1" "Arg 2" "Arg 3" ``` Command line arguments commonly get used when you need to pass information to your application at runtime. For example if you were writing a program that copies a file from one location to another you would probably pass the two locations as command line arguments. For example: ``` Copy.exe C:\file1.txt C:\file2.txt ```
Further to everyone else's answers, you should note that the parameters are optional in C# if your application does not use command line arguments. This code is perfectly valid:

```
internal static class Program
{
    private static void Main()
    {
        // Get on with it, without any arguments...
    }
}
```
What is "string[] args" in Main class for?
[ "", "c#", "" ]
I am observing strange behaviour of std::map::clear(). This method is supposed to call element's destructor when called, however memory is still accessible after call to clear(). For example:

```
struct A
{
    ~A() { x = 0; }
    int x;
};

int main( void )
{
    std::map< int, A * > my_map;

    A *a = new A();
    a->x = 5;

    my_map.insert( std::make_pair< int, A * >( 0, a ) );

    // addresses will be the same, will print 5
    std::cout << a << " " << my_map[0] << " " << my_map[0]->x << std::endl;

    my_map.clear();

    // will be 0
    std::cout << a->x << std::endl;

    return 0;
}
```

The question is, why is variable `a` still accessible after its destructor was called by map::clear()? Do I need to write `delete a;` after calling `my_map.clear()` or is it safe to overwrite the contents of `a`? Thanks in advance for your help, sneg
std::map does not manage the memory pointed to by the pointer values - it's up to you to do it yourself. If you don't want to use smart pointers, you can write a general purpose free & clear function like this:

```
template <typename M>
void FreeClear( M & amap )
{
    for ( typename M::iterator it = amap.begin(); it != amap.end(); ++it ) {
        delete it->second;
    }
    amap.clear();
}
```

And use it:

```
std::map< int, A * > my_map;
// populate
FreeClear( my_map );
```
If you store pointers on a map (or a list, or anything like that) **YOU** are the responsible for deleting the pointers, since the map doesn't know if they have been created with new, or not. The clear function only invokes destructors if you don't use pointers. Oh, and one more thing: invoking a destructor (or even calling delete) doesn't mean the memory can't be accessed anymore. It only means that you will be accessing garbage if you do.
Why is memory still accessible after std::map::clear() is called?
[ "", "c++", "stdmap", "" ]
[psyco](http://psyco.sourceforge.net/) seems to be quite helpful in optimizing Python code, and it does it in a very non-intrusive way. Therefore, one has to wonder. Assuming you're always on a x86 architecture (which is where most apps run these days), why not just always use `psyco` for all Python code? Does it make mistakes sometimes and ruins the correctness of the program? Increases the runtime for some weird cases? Have you had any negative experiences with it? My most negative experience so far was that it made my code faster by only 15%. Usually it's better. Naturally, using psyco is not a replacement for efficient algorithms and coding. But if you can improve the performance of your code for the cost of two lines (importing and calling psyco), I see no good reason not to.
1) The memory overhead is the main one, as described in other answers. You also pay the compilation cost, which can be prohibitive if you aren't selective. From the [user reference](http://psyco.sourceforge.net/psycoguide/module-psyco.html): > Compiling everything is often overkill for medium- or large-sized applications. The drawbacks of compiling too much are in the time spent compiling, plus the amount of memory that this process consumes. It is a subtle balance to keep. 2) Performance can actually be harmed by Psyco compilation. Again from the user guide (["known bugs"](http://psyco.sourceforge.net/psycoguide/tutknownbugs.html) section): > There are also performance bugs: situations in which Psyco slows down the code instead of accelerating it. It is difficult to make a complete list of the possible reasons, but here are a few common ones: > > * The built-in `map` and `filter` functions must be avoided and replaced by list comprehension. For example, `map(lambda x: x*x, lst)` should be replaced by the more readable but more recent syntax `[x*x for x in lst]`. > * The compilation of regular expressions doesn't seem to benefit from Psyco. (The execution of regular expressions is unaffected, since it is C code.) Don't enable Psyco on this module; if necessary, disable it explicitely, e.g. by calling `psyco.cannotcompile(re.compile)`. 3) Finally, there are some relatively obscure situations where using Psyco will actually introduce bugs. Some of them are [listed here](http://psyco.sourceforge.net/psycoguide/bugs.html#bugs).
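A hedged sketch of the "be selective" advice above: bind Psyco to a single hot function and degrade gracefully where Psyco is unavailable. The function name is made up for illustration; only `psyco.bind` is from Psyco's documented API.

```python
def inner_loop(n):
    # Hypothetical hot spot worth compiling; everything else stays interpreted.
    total = 0
    for i in range(n):
        total += i * i
    return total

try:
    import psyco
    psyco.bind(inner_loop)  # compile only the hot function
except ImportError:
    pass  # no Psyco on this platform/interpreter: plain bytecode still works

print(inner_loop(10))
```

This keeps the compilation time and memory cost confined to the code that actually benefits, instead of `psyco.full()`'s compile-everything approach.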
> Psyco currently uses a lot of memory. > It only runs on Intel 386-compatible > processors (under any OS) right now. > There are some subtle semantic > differences (i.e. bugs) with the way > Python works; they should not be > apparent in most programs. See also the [caveats section](http://psyco.sourceforge.net/psycoguide/node25.html). For a hard example, I noticed that my web app with Cheetah-generated templates and DB I/O gained no appreciable speedup.
Why not always use psyco for Python code?
[ "", "python", "optimization", "psyco", "" ]
Is it possible to assign variables to the client scope using JavaScript? More specifically, I'm trying to assign a value after it is returned from an AJAX call. I understand that ColdFusion is run on the server side and JavaScript on the client side, but with the proliferation of AJAX, I'm curious if something like this might be possible. Thank you.
Peter Boughton was pretty much dead on in his concept, but you asked how to write client variables, and he didn't test his code. Also, what you're trying to do would be called a ClientFacade, so I've written (and tested) a ClientFacade CFC and companion JavaScript. Note that I'm using jQuery because I never go anywhere without it. ;) **ClientFacade.cfc:** ``` <cfcomponent output="false" hint="acts as a remote facade for the client scope"> <cffunction name="set" output="false" access="remote" returntype="boolean" hint="sets a value into the client scope"> <cfargument name="name" type="string" required="true"/> <cfargument name="val" type="any" required="true"/> <!--- you should sanitize input here to prevent code injection ---> <cfscript> try { client[arguments.name] = arguments.val; return(true); }catch (any e){ return(false); } </cfscript> </cffunction> <cffunction name="get" output="false" access="remote" returntype="any" hint="gets a value from the client scope"> <cfargument name="name" type="string" required="true"/> <cfargument name="defaultVal" type="any" required="false" default=""/> <!--- you should sanitize input here to prevent code injection ---> <cfscript> if (structKeyExists(client, arguments.name)){ return(client[arguments.name]); }else{ if (len(trim(arguments.defaultVal)) eq 0){ return(''); }else{ return(arguments.defaultVal); } } </cfscript> </cffunction> </cfcomponent> ``` **test.cfm:** ``` <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js"> </script> foo:<input type="text" name="foo" id="foo"/> <button id="setValue">Set Value</button> <button id="alertValue">Alert Value</button> <script type="text/javascript"> $(document).ready(function(){ //attach functionality to our alert button $("#alertValue").click(function(e){ var clientVal = 'initialdata'; $.getJSON( 'clientFacade.cfc', {method:"get", returnFormat:"json", name:"foo"}, function(d){ clientVal = d; alert(clientVal); } ); 
e.preventDefault();//prevent the button from doing anything else }); //attach functionality to our set button $("#setValue").click(function(e){ var success = false; var valu = $("#foo").val(); $.getJSON( 'clientFacade.cfc', { method:"set", returnFormat:"json", name:"foo", "val":valu }, function(d){ success = eval(d); if (!success){ alert('Was not able to set the client var :('); } } ); e.preventDefault();//prevent the button from doing anything else }); }); </script> ``` I think that's everything you wanted. Let me know if I missed anything.
I don't know anything about CF, but usually this is done by posting a name/value pair to the server, where the server picks up that value and sticks it into a variable. You can also use GET if you want to, and you can do both of these things using AJAX. Another silly hack is to do something like this in JavaScript: `var img = new Image(); img.src = "http://myserver.com/something.cfm?name=value&anothername=anothervalue";` After that, the browser will perform a GET and pass those two values to the server. No image will be displayed to the client because: a) you don't add it to the DOM, and b) it's not really an image anyway.
Assign ColdFusion Client Variables Using Javascript
[ "", "javascript", "ajax", "coldfusion", "" ]
Is there a function like `GETDATE()` in SQL Server 2005 that lets you get the max possible date? I do not want to find the highest date in a table; I want to get the max possible date that SQL Server is able to store. Basically, I want an expiration date of never.
The [documentation](http://msdn.microsoft.com/en-us/library/ms187819.aspx) says the range is January 1, 1753, through December 31, 9999. I don't think there is a built-in function, but you could create one that returns the maximum datetime value. Note that `datetime` is only accurate to about 3.33 milliseconds, so `23:59:59.997` is the largest time it can store. ``` CREATE FUNCTION fn_max_date() RETURNS datetime AS BEGIN RETURN CAST('12/31/9999 23:59:59.997' AS datetime) END ```
`CAST()` seems to be dependent on the SQL Server language/culture. On my **German** SQL Servers 2008 R2 and 2012 (`@@language = 'Deutsch'`), the following cast throws an error: ``` CAST('12/31/9999 23:59:59.997' AS DATETIME) ``` > The conversion of a varchar data type to a datetime data type resulted in an out-of-range value. Whereas this one works just fine: ``` CAST('31.12.9999 23:59:59.997' AS DATETIME) ``` **SOLUTION** I think the safest approach is to specify the format with `CONVERT()`: ``` /* ISO 8601 */ CONVERT(DATETIME, '9999-12-31T23:59:59.997', 126) ```
finding max possible date in ms sql server 2005+
[ "", "sql", "sql-server", "" ]
I want to store some values in a balanced binary search tree using C#. I looked through the collections in the generics namespace and I haven't found an equivalent of the stl set. What generic collection can I use? (I don't want to store key/value pairs... just values.)
1. If you require sorted set, use [`SortedDictionary<T,U>`](http://msdn.microsoft.com/en-us/library/f7fta44c.aspx). This is implemented using a binary search tree. Admittedly, you will be using 64-bits per entry because you are storing a key-value pair underneath. You can write a wrapper around it like this: ``` class Set<T> : SortedDictionary<T, bool> { public void Add(T item) { this.Add(item, true); } } ``` 2. If you don't require a sorted set, use [`HashSet<T>`](http://msdn.microsoft.com/en-us/library/bb359438.aspx). 3. Otherwise, check out [C5 Generic Collection Library](http://www.itu.dk/research/c5/). In particular [`TreeSet<T>`](http://www.itu.dk/research/c5/Release1.1/c5doc/types/C5.TreeSet_1.htm). It is a red-black tree and only stores the values.
You could use an [HashSet](http://msdn.microsoft.com/en-us/library/bb359438.aspx) > The `HashSet<T>` class provides high performance set operations. A set is a collection that contains **no duplicate elements**, and whose elements are in no particular order. The capacity of a `HashSet<T>` object is the number of elements that the object can hold. A `HashSet<T>` object's capacity automatically increases as elements are added to the object.
What is the C# equivalent of the stl set?
[ "", "c#", "data-structures", "" ]
If we look at the Java Object class then we can find some of the methods like: ``` public native int hashCode() protected native Object clone() ``` What are these natives and how do these methods work?
These methods are either *intrinsic* or written outside Java in "native" code, that is, specific to the given machine. The ones you mention are *intrinsic* and part of the JDK, but you can also write native methods yourself using the [Java Native Interface](http://java.sun.com/docs/books/jni/) (JNI). This would normally use C to write the methods, but a lot of other languages, such as Python, allow you to write methods this way fairly easily. Code is written this way either for performance, or because it needs to access platform-specific infrastructure which cannot be done in plain Java. In the case of `hashCode()`, this is implemented by the JVM. This is because often the hashcode will be related to something only the JVM knows. On early JVMs this was related to the object's location in memory - on other JVMs the object may move in memory, and so a more complicated (but still very fast) scheme may be used.
Most native methods are implemented using JNI, as mentioned in other answers. However, performance-critical methods such as `Object.hashCode` are typically implemented as intrinsics: when the byte code is compiled into machine code, the JIT compiler recognises the method call and inlines appropriate code directly. This is obviously going to be much faster than going through JNI for a trivial method. Many people will claim that `Object.hashCode` will return the address of the object representation in memory. In modern implementations objects actually move within memory. Instead, an area of the object header is used to store the value, which may be lazily derived from the memory address at the time that the value is first requested.
What is a native implementation in Java?
[ "", "java", "java-native-interface", "" ]
This is one I struggled with for ages, so I thought I'd document it somewhere. (Apologies for asking and answering a question.) (C# .NET 2.0) I had a class that was being serialized by `XmlSerializer`; I added a new public property, however it wasn't being included in the output XML. It's not mentioned in the docs anywhere I could find, but public properties must have a set as well as a get to be serialized! I guess this is because it assumes that if you're going to serialize then you'll want to deserialize from the same file, so it only serializes properties that have both a set and a get.
As mentioned, most properties must have both a getter and setter; the main exception to this is lists - for example: ``` private readonly List<Foo> bar = new List<Foo>(); public List<Foo> Bar {get { return bar; } } // works fine ``` which will work fine; however, if `XmlSerializer` *finds* a setter - it demands that it is public; the following will **not** work: ``` public List<Foo> Bar {get; private set;} // FAIL ``` Other reasons it might not serialize: * it isn't public with get and set (or is `readonly` for a field) * it has a `[DefaultValue]` attribute, and is with that value * it has a public `bool ShouldSerializeFoo()` method that returned false * it has a public `bool FooSpecified {get;set;}` property or field that returned false * it is marked `[XmlIgnore]` * it is marked `[Obsolete]` Any of these will cause it not to serialize
The point about getter+setter is made in the 3rd paragraph on the "[Intro to Xml Serialization](http://msdn.microsoft.com/en-us/library/182eeyhh(VS.85).aspx)" page. It's actually in a call-out box. Can't miss it! [Intro-to-XML Serialization http://www.freeimagehosting.net/uploads/2f04fea2db.png](http://www.freeimagehosting.net/uploads/2f04fea2db.png) (having a little too much fun with Freeimagehosting.net)
Why isn't my public property serialized by the XmlSerializer?
[ "", "c#", ".net", "xml-serialization", "" ]
Given a start date of 1/1/2009 and an end date of 12/31/2009, how can I iterate through each date and retrieve a DateTime value using c#? Thanks!
I would use a loop that looks like this ``` for(DateTime date = begin; date <= end; date = date.AddDays(1)) { } ``` Set begin and end accordingly
**Another option implementing the Iterator design pattern:** This may sound unnecessary, but depending on how you use this functionality, you may also want to implement the [Iterator design pattern](http://en.wikipedia.org/wiki/Iterator_pattern). Think on this: suppose that everything works just fine, and you copy/paste the "for" statement all over the place. And suddenly, as part of the requirements, you have to iterate all the days but skip some of them (like in a calendar: skip holidays, weekends, custom dates, etc.). You would have to create a new "snippet" and use a Calendar instead, then search and replace all your for's. In OOP, this could be achieved using the Iterator pattern. From Wikipedia: > *In object-oriented programming, the Iterator pattern is a design pattern in which iterators are used to access the elements of an aggregate object sequentially **without exposing its underlying representation**. An Iterator object encapsulates the internal structure of how the iteration occurs.* So the idea is to use a construct like this: ``` DateTime fromDate = DateTime.Parse("1/1/2009"); DateTime toDate = DateTime.Parse("12/31/2009"); // Create an instance of the collection class DateTimeEnumerator dateTimeRange = new DateTimeEnumerator( fromDate, toDate ); // Iterate with foreach foreach (DateTime day in dateTimeRange ) { System.Console.Write(day + " "); } ``` And then, if needed, you could create subclasses to implement different algorithms: one that uses AddDays(1), another that uses AddDays(7), or another that simply uses a Calendar instead. Etc., etc. The idea is to lower the coupling between objects. Again, this would be overkill for most cases, but if the iteration forms a **relevant** part of a system (let's say you are generating some kind of notification for an enterprise, and it should adhere to different globalization rules), the pattern pays off. The basic implementation would of course use the for loop. 
``` public class DateTimeEnumerator : System.Collections.IEnumerable { private DateTime begin; private DateTime end; public DateTimeEnumerator ( DateTime begin , DateTime end ) { // probably create a defensive copy here... this.begin = begin; this.end = end; } public System.Collections.IEnumerator GetEnumerator() { for(DateTime date = begin; date < end; date = date.AddDays(1)) { yield return date; } } } ``` Just an idea :)
How do you iterate through every day of the year?
[ "", "c#", "datetime", "" ]
In our asp.net 2.0 web application, there is a user control with validation. For some reason, when the validation fails, the pages looks fine, however, no controls (asp buttons, scroll bars, or third party like Telerik text editor) respond to mouse inputs. The only work around is to resize the browser window which will make the UI responsive. Has anyone seen this issue before? What could be causing it?
Check your CSS and/or JavaScript. It sounds like there is a transparent element (like a div) being placed over the elements in the page.
Does it happen in other browsers? Are you talking client-side validation or server-side (does it actually post back)? Have you tried removing all CSS and seeing if it still happens? If it doesn't, add the CSS back bit by bit until it breaks again, and you've found the culprit!
IE 7 stops responding
[ "", "asp.net", "javascript", "ajax", "" ]
I have an application which sends a POST request to the VB forum software and logs someone in (without setting cookies or anything). Once the user is logged in I create a variable that creates a path on their local machine. c:\tempfolder\date\username The problem is that some usernames are throwing "Illegal chars" exception. For example if my username was `mas|fenix` it would throw an exception.. ``` Path.Combine( _ Environment.GetFolderPath(System.Environment.SpecialFolder.CommonApplicationData), _ DateTime.Now.ToString("ddMMyyhhmm") + "-" + form1.username) ``` I don't want to remove it from the string, but a folder with their username is created through FTP on a server. And this leads to my second question. If I am creating a folder on the server can I leave the "illegal chars" in? I only ask this because the server is Linux based, and I am not sure if Linux accepts it or not. **EDIT: It seems that URL encode is NOT what I want.. Here's what I want to do:** ``` old username = mas|fenix new username = mas%xxfenix ``` Where %xx is the ASCII value or any other value that would easily identify the character.
**Edit: Note that this answer is now out of date. See [Siarhei Kuchuk's answer below](https://stackoverflow.com/a/7427556/21539) for a better fix** UrlEncoding will do what you are suggesting here. With C#, you simply use `HttpUtility`, as mentioned. You can also Regex the illegal characters and then replace, but this gets far more complex, as you will have to have some form of state machine (switch ... case, for example) to replace with the correct characters. Since `UrlEncode` does this up front, it is rather easy. As for Linux versus windows, there are some characters that are acceptable in Linux that are not in Windows, but I would not worry about that, as the folder name can be returned by decoding the Url string, using `UrlDecode`, so you can round trip the changes.
I've been experimenting with the various methods .NET provide for URL encoding. Perhaps the following table will be useful (as output from a test app I wrote): ``` Unencoded UrlEncoded UrlEncodedUnicode UrlPathEncoded EscapedDataString EscapedUriString HtmlEncoded HtmlAttributeEncoded HexEscaped A A A A A A A A %41 B B B B B B B B %42 a a a a a a a a %61 b b b b b b b b %62 0 0 0 0 0 0 0 0 %30 1 1 1 1 1 1 1 1 %31 [space] + + %20 %20 %20 [space] [space] %20 ! ! ! ! ! ! ! ! %21 " %22 %22 " %22 %22 &quot; &quot; %22 # %23 %23 # %23 # # # %23 $ %24 %24 $ %24 $ $ $ %24 % %25 %25 % %25 %25 % % %25 & %26 %26 & %26 & &amp; &amp; %26 ' %27 %27 ' ' ' &#39; &#39; %27 ( ( ( ( ( ( ( ( %28 ) ) ) ) ) ) ) ) %29 * * * * %2A * * * %2A + %2b %2b + %2B + + + %2B , %2c %2c , %2C , , , %2C - - - - - - - - %2D . . . . . . . . %2E / %2f %2f / %2F / / / %2F : %3a %3a : %3A : : : %3A ; %3b %3b ; %3B ; ; ; %3B < %3c %3c < %3C %3C &lt; &lt; %3C = %3d %3d = %3D = = = %3D > %3e %3e > %3E %3E &gt; > %3E ? %3f %3f ? %3F ? ? ? 
%3F @ %40 %40 @ %40 @ @ @ %40 [ %5b %5b [ %5B %5B [ [ %5B \ %5c %5c \ %5C %5C \ \ %5C ] %5d %5d ] %5D %5D ] ] %5D ^ %5e %5e ^ %5E %5E ^ ^ %5E _ _ _ _ _ _ _ _ %5F ` %60 %60 ` %60 %60 ` ` %60 { %7b %7b { %7B %7B { { %7B | %7c %7c | %7C %7C | | %7C } %7d %7d } %7D %7D } } %7D ~ %7e %7e ~ ~ ~ ~ ~ %7E Ā %c4%80 %u0100 %c4%80 %C4%80 %C4%80 Ā Ā [OoR] ā %c4%81 %u0101 %c4%81 %C4%81 %C4%81 ā ā [OoR] Ē %c4%92 %u0112 %c4%92 %C4%92 %C4%92 Ē Ē [OoR] ē %c4%93 %u0113 %c4%93 %C4%93 %C4%93 ē ē [OoR] Ī %c4%aa %u012a %c4%aa %C4%AA %C4%AA Ī Ī [OoR] ī %c4%ab %u012b %c4%ab %C4%AB %C4%AB ī ī [OoR] Ō %c5%8c %u014c %c5%8c %C5%8C %C5%8C Ō Ō [OoR] ō %c5%8d %u014d %c5%8d %C5%8D %C5%8D ō ō [OoR] Ū %c5%aa %u016a %c5%aa %C5%AA %C5%AA Ū Ū [OoR] ū %c5%ab %u016b %c5%ab %C5%AB %C5%AB ū ū [OoR] ``` The columns represent encodings as follows: * UrlEncoded: `HttpUtility.UrlEncode` * UrlEncodedUnicode: `HttpUtility.UrlEncodeUnicode` * UrlPathEncoded: `HttpUtility.UrlPathEncode` * EscapedDataString: `Uri.EscapeDataString` * EscapedUriString: `Uri.EscapeUriString` * HtmlEncoded: `HttpUtility.HtmlEncode` * HtmlAttributeEncoded: `HttpUtility.HtmlAttributeEncode` * HexEscaped: `Uri.HexEscape` **NOTES:** 1. `HexEscape` can only handle the first 255 characters. Therefore it throws an `ArgumentOutOfRange` exception for the Latin A-Extended characters (eg Ā). 2. This table was generated in .NET 4.0 (see Levi Botelho's comment below that says the encoding in .NET 4.5 is slightly different). **EDIT:** I've added a second table with the encodings for .NET 4.5. See this answer: <https://stackoverflow.com/a/21771206/216440> **EDIT 2:** Since people seem to appreciate these tables, I thought you might like the source code that generates the table, so you can play around yourselves. It's a simple C# console application, which can target either .NET 4.0 or 4.5: ``` using System; using System.Collections.Generic; using System.Text; // Need to add a Reference to the System.Web assembly. 
using System.Web; namespace UriEncodingDEMO2 { class Program { static void Main(string[] args) { EncodeStrings(); Console.WriteLine(); Console.WriteLine("Press any key to continue..."); Console.Read(); } public static void EncodeStrings() { string stringToEncode = "ABCD" + "abcd" + "0123" + " !\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~" + "ĀāĒēĪīŌōŪū"; // Need to set the console encoding to display non-ASCII characters correctly (eg the // Latin A-Extended characters such as ĀāĒē...). Console.OutputEncoding = Encoding.UTF8; // Will also need to set the console font (in the console Properties dialog) to a font // that displays the extended character set correctly. // The following fonts all display the extended characters correctly: // Consolas // DejaVu Sana Mono // Lucida Console // Also, in the console Properties, set the Screen Buffer Size and the Window Size // Width properties to at least 140 characters, to display the full width of the // table that is generated. Dictionary<string, Func<string, string>> columnDetails = new Dictionary<string, Func<string, string>>(); columnDetails.Add("Unencoded", (unencodedString => unencodedString)); columnDetails.Add("UrlEncoded", (unencodedString => HttpUtility.UrlEncode(unencodedString))); columnDetails.Add("UrlEncodedUnicode", (unencodedString => HttpUtility.UrlEncodeUnicode(unencodedString))); columnDetails.Add("UrlPathEncoded", (unencodedString => HttpUtility.UrlPathEncode(unencodedString))); columnDetails.Add("EscapedDataString", (unencodedString => Uri.EscapeDataString(unencodedString))); columnDetails.Add("EscapedUriString", (unencodedString => Uri.EscapeUriString(unencodedString))); columnDetails.Add("HtmlEncoded", (unencodedString => HttpUtility.HtmlEncode(unencodedString))); columnDetails.Add("HtmlAttributeEncoded", (unencodedString => HttpUtility.HtmlAttributeEncode(unencodedString))); columnDetails.Add("HexEscaped", (unencodedString => { // Uri.HexEscape can only handle the first 255 characters so for the // Latin 
A-Extended characters, such as A, it will throw an // ArgumentOutOfRange exception. try { return Uri.HexEscape(unencodedString.ToCharArray()[0]); } catch { return "[OoR]"; } })); char[] charactersToEncode = stringToEncode.ToCharArray(); string[] stringCharactersToEncode = Array.ConvertAll<char, string>(charactersToEncode, (character => character.ToString())); DisplayCharacterTable<string>(stringCharactersToEncode, columnDetails); } private static void DisplayCharacterTable<TUnencoded>(TUnencoded[] unencodedArray, Dictionary<string, Func<TUnencoded, string>> mappings) { foreach (string key in mappings.Keys) { Console.Write(key.Replace(" ", "[space]") + " "); } Console.WriteLine(); foreach (TUnencoded unencodedObject in unencodedArray) { string stringCharToEncode = unencodedObject.ToString(); foreach (string columnHeader in mappings.Keys) { int columnWidth = columnHeader.Length + 1; Func<TUnencoded, string> encoder = mappings[columnHeader]; string encodedString = encoder(unencodedObject); // ASSUMPTION: Column header will always be wider than encoded string. Console.Write(encodedString.Replace(" ", "[space]").PadRight(columnWidth)); } Console.WriteLine(); } } } } ``` [Click here to run code on dotnetfiddle.net](https://dotnetfiddle.net/jRsnt7)
URL Encoding using C#
[ "", "c#", ".net", "urlencode", "" ]
I would like to set up a cookie that never expires. Would that even be possible? ``` document.cookie = "name=value; expires=date; path=path;domain=domain; secure"; ``` I don't want to make the date really large, I am just wondering if there was a value for the expires parameter on the cookie that told it never to expire. Thanks.
Nope. That can't be done. The best 'way' of doing that is just making the expiration date be like 2100. **Edit** As of August 2022, Chrome cookies can be set to expire after [400 days only](https://developer.chrome.com/blog/cookie-max-age-expires/).
There is no syntax for what you want. Not setting expires causes the cookie to expire at the end of the session. The only option is to pick some arbitrarily large value. Be aware that some browsers have problems with dates past 2038 (when unix epoch time exceeds a 32-bit int).
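To make the "pick a large value" advice concrete, here is a small helper (my own sketch, not a browser API) that builds the cookie string with an expiry the requested number of days out; in a browser you would assign the result to `document.cookie`:

```javascript
// Sketch: build a cookie string with a far-future expiry.
// Note that Chrome now clamps expiry to at most 400 days, so asking for
// more is silently shortened there; other browsers may still accept 2100.
function buildCookie(name, value, days) {
  var expires = new Date(Date.now() + days * 24 * 60 * 60 * 1000);
  return name + "=" + encodeURIComponent(value) +
         "; expires=" + expires.toUTCString() + "; path=/";
}

// Usage in a browser (not runnable outside one):
//   document.cookie = buildCookie("name", "value", 400);
```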
Javascript Cookie with no expiration date
[ "", "javascript", "cookies", "" ]
To my knowledge SQL Server 2008 will only allow one clustered index per table. For the sake of this question let's say I have a list of user-submitted stories that contains the following columns. ID (int, primary key) Title (nvarchar) Url (nvarchar) UniqueName (nvarchar) This is the url slug (blah-blah-blah) CategoryID (int, FK to Category table) Most of the time stories will never be queried by ID. Most of the queries will be done either by the CategoryID or by the UniqueName. I'm new to indexing, so I assumed that it would be best to place 2 nonclustered indexes on this table: one on UniqueName and one on CategoryID. After doing some reading about indexes it seems like having a clustered index on UniqueName would be very beneficial. Considering UniqueName is... unique, would it be advantageous to place the primary key on UniqueName and get rid of the ID field? As for CategoryID I assume a nonclustered index will do just fine. Thanks.
In the first place, you can put the clustered index on UniqueName; it doesn't have to be on the id field. If you do little or no joining to this table you could get rid of the id. In any event I would put a unique index on the UniqueName field (you may find in doing so that it isn't as unique as you thought it would be!). If you do a lot of joining though, I would keep the id field; it is smaller and more efficient to join on. Since you say you are new at indexing, I will point out that while primary keys have an index created automatically when they are defined, foreign keys do not. You almost always want to index your foreign key fields.
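For illustration, the indexes the answer recommends could be created like this (the table and index names are my own, assuming the story table from the question):

```sql
-- Unique index on the natural key; also enforces the uniqueness assumption
CREATE UNIQUE NONCLUSTERED INDEX IX_Story_UniqueName
    ON dbo.Story (UniqueName);

-- SQL Server does not index foreign key columns automatically
CREATE NONCLUSTERED INDEX IX_Story_CategoryID
    ON dbo.Story (CategoryID);
```

If you instead make UniqueName the clustered primary key, drop the first index and let the PK constraint create the clustered one.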
Just out of habit, I always create an Identity field "ID" like you have as the PK. It makes things consistent. If all "master" tables have a field named "ID" that is INT Identity, then it's always obvious what the PK is. Additionally, if I need to make a bridge entity, I'll be storing two (or more) columns of type INT instead of type nvarchar(). So in your example, I would keep ID as the PK and create a unique index on UniqueName.
Where to place a primary key
[ "", "sql", "sql-server-2008", "indexing", "" ]
I have a MySQL database and I want to get the last ID from a field in a table. Example: ``` id Value 1 david 2 jone 3 chris ``` I want a function that returns 3 if this table exists.
If you want to select the ID of the most recently inserted row in a table with an AUTO\_INCREMENT column, you will likely be interested in MySQL's [LAST\_INSERT\_ID](http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_last-insert-id) function.
You can use: ``` SELECT MAX(id) FROM table ```
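The difference between the two answers matters under concurrency: `LAST_INSERT_ID()` is maintained per connection, while `MAX(id)` can pick up a row another client just inserted. A sketch, assuming the question's table (name is illustrative) has an `AUTO_INCREMENT` id:

```sql
INSERT INTO mytable (Value) VALUES ('david');

-- Per-connection: safe even while other clients are inserting
SELECT LAST_INSERT_ID();

-- Whole-table: may return another client's id under concurrent inserts
SELECT MAX(id) FROM mytable;
```

From PHP, the same per-connection value is available via `mysql_insert_id()` right after the insert.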
How to get the last field in a Mysql database with PHP?
[ "", "php", "mysql", "database", "" ]
I want to show a title and description from a db query in each form, but I don't want it to be in a charfield; I want it to be HTML-formatted text. Sample template code (note the added `</div>` closing the `row` div, which was unbalanced): ``` {% for form, data in zipped_data %} <div class="row"> <div class="first_col"> <span class="title">{{ data.0 }}</span> <div class="desc"> {{ data.1|default:"None" }} </div> </div> {% for field in form %} <div class="fieldWrapper" style="float: left; "> {{ field.errors }} {{ field }} </div> {% endfor %} </div> {% endfor %} ``` Is this the most idiomatic way of doing this? Or, is there a way to add text that will not be displayed inside of a textarea or text input to my model: ``` class ReportForm(forms.Form): comment = forms.CharField() ``` ?
Instead of zipping your forms with the additional data, you can override the constructor on your form and hold your title/description as *instance-level* member variables. This is a bit more object-oriented and learning how to do this will help you solve other problems down the road such as dynamic choice fields. ``` class MyForm (forms.Form): def __init__ (self, title, desc, *args, **kwargs): self.title = title self.desc = desc super (MyForm, self).__init__ (*args, **kwargs) # call base class ``` Then in your view code: ``` form = MyForm ('Title A', 'Description A') ``` Adjust accordingly if you need these values to come from the database. Then in your template, you access the instance variables just like you do anything else, e.g.: ``` <h1>{{ form.title }}</h1> <p>{{ form.desc }}</p> ``` From the way you phrased your question, I think you probably have some confusion around the way Django uses Python *class attributes* to provide a declarative form API versus *instance-level* attributes that you apply to individual instances of a class, in this case your form objects. * [Check out this link for a good discussion on the distinction](https://stackoverflow.com/questions/207000/python-difference-between-class-and-instance-attributes) * [And this one](https://stackoverflow.com/questions/206734/why-do-attribute-references-act-like-this-with-python-inheritance)
I just created a read-only widget by subclassing the text input field one: ``` class ReadOnlyText(forms.TextInput): input_type = 'text' def render(self, name, value, attrs=None): if value is None: value = '' return value ``` And: ``` class ReportForm(forms.Form): comment = forms.CharField(widget=ReadOnlyText, label='comment') ```
How do I add plain text info to forms in a formset in Django?
[ "", "python", "django", "django-forms", "" ]
I use [Simple HTML DOM](http://simplehtmldom.sourceforge.net/) to scrape a page for the latest news, and then generate an RSS feed using this [PHP class](http://www.phpclasses.org/browse/package/4427.html). This is what I have now: ``` <?php // This is a minimum example of using the class include("FeedWriter.php"); include('simple_html_dom.php'); $html = file_get_html('http://www.website.com'); foreach($html->find('td[width="380"] p table') as $article) { $item['title'] = $article->find('span.title', 0)->innertext; $item['description'] = $article->find('.ingress', 0)->innertext; $item['link'] = $article->find('.lesMer', 0)->href; $item['pubDate'] = $article->find('span.presseDato', 0)->plaintext; $articles[] = $item; } //Creating an instance of FeedWriter class. $TestFeed = new FeedWriter(RSS2); //Use wrapper functions for common channel elements $TestFeed->setTitle('Testing & Checking the RSS writer class'); $TestFeed->setLink('http://www.ajaxray.com/projects/rss'); $TestFeed->setDescription('This is test of creating a RSS 2.0 feed Universal Feed Writer'); //Image title and link must match with the 'title' and 'link' channel elements for valid RSS 2.0 $TestFeed->setImage('Testing the RSS writer class','http://www.ajaxray.com/projects/rss','http://www.rightbrainsolution.com/images/logo.gif'); foreach($articles as $row) { //Create an empty FeedItem $newItem = $TestFeed->createNewItem(); //Add elements to the feed item $newItem->setTitle($row['title']); $newItem->setLink($row['link']); $newItem->setDate($row['pubDate']); $newItem->setDescription($row['description']); //Now add the feed item $TestFeed->addItem($newItem); } //OK. Everything is done. Now generate the feed. $TestFeed->genarateFeed(); ?> ``` How can I make this code simpler? Right now there are two foreach statements; how can I combine them? Because the news scraped is in Norwegian, I need to apply `html_entity_decode()` to the title. 
I've tried it here, but I couldn't get it to work: ``` foreach($html->find('td[width="380"] p table') as $article) { $item['title'] = html_entity_decode($article->find('span.title', 0)->innertext, ENT_NOQUOTES, 'UTF-8'); $item['description'] = "<img src='" . $article->find('img[width="100"]', 0)->src . "'><p>" . $article->find('.ingress', 0)->innertext . "</p>"; $item['link'] = $article->find('.lesMer', 0)->href; $item['pubDate'] = unix2rssdate(strtotime($article->find('span.presseDato', 0)->plaintext)); $articles[] = $item; } ``` Thanks :)
Well, for just a simple combination of the two loops, you could create the feed as you parse through the HTML: ``` <?php include("FeedWriter.php"); include('simple_html_dom.php'); $html = file_get_html('http://www.website.com'); //Creating an instance of FeedWriter class. $TestFeed = new FeedWriter(RSS2); $TestFeed->setTitle('Testing & Checking the RSS writer class'); $TestFeed->setLink('http://www.ajaxray.com/projects/rss'); $TestFeed->setDescription( 'This is test of creating a RSS 2.0 feed Universal Feed Writer'); $TestFeed->setImage('Testing the RSS writer class', 'http://www.ajaxray.com/projects/rss', 'http://www.rightbrainsolution.com/images/logo.gif'); //parse through the HTML and build up the RSS feed as we go along foreach($html->find('td[width="380"] p table') as $article) { //Create an empty FeedItem $newItem = $TestFeed->createNewItem(); //Look up and add elements to the feed item $newItem->setTitle($article->find('span.title', 0)->innertext); $newItem->setDescription($article->find('.ingress', 0)->innertext); $newItem->setLink($article->find('.lesMer', 0)->href); $newItem->setDate($article->find('span.presseDato', 0)->plaintext); //Now add the feed item $TestFeed->addItem($newItem); } $TestFeed->genarateFeed(); ?> ``` What's the issue you're seeing with `html_entity_decode`? If you give us a link to a page it doesn't work on, that might help.
It seems that you loop through the `$html` to build an array of articles, then loop through these adding to a feed - you can skip a whole loop here by adding items to the feed as they're found. To do this you'll need to move your `FeedWriter` constructor up a bit in the execution flow.

I'd also add a couple of methods to help with readability, which may help maintainability in the long run. Encapsulating your feed creation, item modification, etc. should make it easier if you ever need to plug a different provider class in for the feed, change parsing rules, etc.

There are further improvements that can be made to the code below (`html_entity_decode` is on a separate line from the `$item['title']` assignment, etc.) but you get the general idea.

What is the issue you're having with `html_entity_decode`? Have you a sample input/output?

```
<?php
// This is a minimum example of using the class
include("FeedWriter.php");
include('simple_html_dom.php');

// Create new instance of a feed
$TestFeed = create_new_feed();

$html = file_get_html('http://www.website.com');

// Loop through html pulling feed items out
foreach($html->find('td[width="380"] p table') as $article) {
    // Get a parsed item
    $item = get_item_from_article($article);

    // Get the item formatted for feed
    $formatted_item = create_feed_item($TestFeed, $item);

    //Now add the feed item
    $TestFeed->addItem($formatted_item);
}

//OK. Everything is done. Now generate the feed.
$TestFeed->generateFeed();

// HELPER FUNCTIONS

/**
 * Create new feed - encapsulated in method here to allow
 * for change in feed class etc
 */
function create_new_feed() {
    //Creating an instance of FeedWriter class.
    $TestFeed = new FeedWriter(RSS2);

    //Use wrapper functions for common channel elements
    $TestFeed->setTitle('Testing & Checking the RSS writer class');
    $TestFeed->setLink('http://www.ajaxray.com/projects/rss');
    $TestFeed->setDescription('This is test of creating a RSS 2.0 feed Universal Feed Writer');

    //Image title and link must match with the 'title' and 'link' channel elements for valid RSS 2.0
    $TestFeed->setImage('Testing the RSS writer class','http://www.ajaxray.com/projects/rss','http://www.rightbrainsolution.com/images/logo.gif');

    return $TestFeed;
}

/**
 * Take in html article segment, and convert to usable $item
 */
function get_item_from_article($article) {
    $item['title'] = $article->find('span.title', 0)->innertext;
    $item['title'] = html_entity_decode($item['title'], ENT_NOQUOTES, 'UTF-8');
    $item['description'] = $article->find('.ingress', 0)->innertext;
    $item['link'] = $article->find('.lesMer', 0)->href;
    $item['pubDate'] = $article->find('span.presseDato', 0)->plaintext;

    return $item;
}

/**
 * Given an $item with feed data, create a
 * feed item
 */
function create_feed_item($TestFeed, $item) {
    //Create an empty FeedItem
    $newItem = $TestFeed->createNewItem();

    //Add elements to the feed item
    $newItem->setTitle($item['title']);
    $newItem->setLink($item['link']);
    $newItem->setDate($item['pubDate']);
    $newItem->setDescription($item['description']);

    return $newItem;
}
?>
```
Scrape and generate RSS feed
[ "", "php", "foreach", "rss", "screen-scraping", "" ]
I am heavily using byte arrays to transfer objects and primitive data over the network and back. I adapted Java's approach by having a type implement ISerializable, which contains two methods as part of the interface, ReadObjectData and WriteObjectData. Any class using this interface would write data into the byte array. Something like this:

```
class SerializationType : ISerializable
{
    void ReadObjectData (/*Type that manages the write/reads into the byte array*/){}
    void WriteObjectData(/*Type that manages the write/reads into the byte array*/){}
}
```

After writing is complete for all objects, I send the array over the network.

---

This is actually a two-fold question.

Is this the right way to send data over the network for the most efficiency (in terms of speed and size)?

Would you use this approach to write objects into a file, as opposed to the typical XML serialization?

**Edit #1**

*Joel Coehoorn* mentioned BinaryFormatter. I have never used this class. Would you elaborate and provide a good example, references, recommendations, and current practices, in addition to what I currently see on MSDN?
This should be fine, but you're doing work that is already done for you. Look at the [`System.Runtime.Serialization.Formatters.Binary.BinaryFormatter`](http://msdn.microsoft.com/en-us/library/system.runtime.serialization.formatters.binary.binaryformatter.aspx) class. Rather than needing to implement your own Read/WriteObjectData() methods for each specific type, you can just use this class, which can already handle almost any object. It basically takes an exact copy of the memory representation of almost any .NET object and writes it to or reads it from a stream:

```
BinaryFormatter bf = new BinaryFormatter();
bf.Serialize(outputStream, objectToSerialize);
objectToDeserialize = bf.Deserialize(inputStream) as DeserializedType;
```

Make sure you read through the linked documents: there can be issues with Unicode strings, and an exact memory representation isn't always appropriate (things like open Sockets, for example).
If you are after simple, lightweight and efficient binary serialization, consider [protobuf-net](http://code.google.com/p/protobuf-net/); based on google's protocol buffers format, but implemented from scratch for typical .NET usage. In particular, it can be used either standalone (via protobuf-net's `Serializer`), or via `BinaryFormatter` by implementing `ISerializable` (and delegating to `Serializer`). Apart from being efficient, this format is designed to be extensible and portable (i.e. compatible with java/php/C++ "protocol buffers" implementations), unlike BinaryFormatter that is both implementation-specific and version-intolerant. And it means you don't have to mess around writing any serialization code...
Serialization byte array vs XML file
[ "", "c#", ".net", "networking", "file-io", "" ]
```
public class MyExample
{
    public static void Main(String[] args)
    {
        string input = "<a href=\"http://tvrss.net/search/?show_name=The+Venture+Bros&amp;show_name_exact=true\">The Venture Bros</a></p></li>";

        // Call Regex.Match
        Match m = Regex.Match(input, "/show_name=(.*?)&amp;show_name_exact=true\">(.*?)</i");

        // Check Match instance
        if (m.Success)
        {
            // Get Group value
            string key = m.Groups[1].Value;
            Console.WriteLine(key); // alternate-1
        }
    }
}
```

I want "The Venture Bros" as output (in this example).
Try this:

```
string input = "<a href=\"http://tvrss.net/search/?show_name=The+Venture+Bros&amp;show_name_exact=true\">The Venture Bros</a></p></li>";

// Call Regex.Match
Match m = Regex.Match(input, "show_name=(.*?)&amp;show_name_exact=true\">(.*?)</a");

// Check Match instance
if (m.Success)
{
    // Get Group value
    string key = m.Groups[2].Value;
    Console.WriteLine(key); // alternate-1
}
```
I think it's because you're trying to use the Perl-style slashes at the front and the end. A couple of other answerers have been confused by this already.

The way he's written it, he's trying to do a case-insensitive match by starting and ending with / and putting an i on the end, the way you'd do it in Perl. But I'm pretty sure that .NET regexes don't work that way, and that's what's causing the problem.

Edit: to be more specific, look into RegexOptions; an example I pulled from MSDN is like this:

```
Dim rx As New Regex("\b(?<word>\w+)\s+(\k<word>)\b", RegexOptions.Compiled Or RegexOptions.IgnoreCase)
```

The key there is "RegexOptions.IgnoreCase"; that'll give the effect you were trying for with /pattern/i.
Any ideas why this does not work? C#
[ "", "c#", "regex", "" ]
I have this object:

```
function formBuddy() {
    var fields = new Array();
    var labels = new Array();
    var rules = new Array();
    var count = 0;

    this.addField = function(field, label, rule) {
        fields[count] = field;
        labels[field] = label;
        rules[field] = rule;
        count = ++count;
    }
}
```

It's used in this way:

```
var cForm = new formBuddy();
cForm.addField("c_first_name","First Name","required");
cForm.addField("c_last_name","Last Name","required");
```

The problem is, in the `addField()` function the `fields` array is being set correctly (perhaps because a numerical index is being used to refer to it) but the other 2 arrays (`labels` and `rules`) aren't being touched at all. Doing a `console.log` shows them as empty in Firebug.

What do I need to change to make them work? I'd still like to refer to the rules and labels by the string index of the field.
Use objects instead:

```
function formBuddy() {
    var fields = {};
    var labels = {};
    var rules = {};
    var count = 0;

    this.addField = function(field, label, rule) {
        fields[count] = field;
        labels[field] = label;
        rules[field] = rule;
        count++;
    }
}
```

---

But as [Christoph already mentioned](https://stackoverflow.com/revisions/575533/list#spacer-762632), I would store this information in a single data structure too. For example:

```
function formBuddy() {
    var fields = {};

    this.addField = function(name, label, rule) {
        fields[name] = {
            name: name,
            label: label,
            rule: rule
        };
    };

    this.getField = function(name) {
        return fields[name];
    };
}

var cForm = new formBuddy();
cForm.addField("c_first_name","First Name","required");
cForm.addField("c_last_name","Last Name","required");
alert(cForm.getField("c_last_name").label);
```
`fields` should be an array, whereas `labels` and `rules` should be objects, as you want to use strings as keys. Also, `addField()` is the same for each instance of `FormBuddy()` (names of constructor functions should be capitalized) and should reside in the prototype, i.e.

```
function FormBuddy() {
    this.fields = []; // this is the same as `new Array()`
    this.labels = {}; // this is the same as `new Object()`
    this.rules = {};
}

FormBuddy.prototype.addField = function(field, label, rule) {
    this.fields.push(field);
    this.labels[field] = label;
    this.rules[field] = rule;
};
```

You can access the labels/rules via

```
var buddy = new FormBuddy();
buddy.addField('foo', 'bar', 'baz');
alert(buddy.labels['foo']);
alert(buddy.rules.foo);
```

---

Just to further enrage Luca ;), here's another version which also doesn't encapsulate anything:

```
function FormBuddy() {
    this.fields = [];
}

FormBuddy.prototype.addField = function(id, label, rule) {
    var field = {
        id : id,
        label : label,
        rule : rule
    };
    this.fields.push(field);
    this['field ' + id] = field;
};

FormBuddy.prototype.getField = function(id) {
    return this['field ' + id];
};

var buddy = new FormBuddy();
buddy.addField('foo', 'label for foo', 'rule for foo');
```

It's similar to Gumbo's second version, but his `fields` object is merged into the `FormBuddy` instance. An array called `fields` is added instead to allow for fast iteration.

To access a field's label, rule, or id, use

```
buddy.getField('foo').label
```

To iterate over the fields, use

```
// list rules:
for(var i = 0, len = buddy.fields.length; i < len; ++i)
    document.writeln(buddy.fields[i].rule);
```
Associative arrays in javascript
[ "", "javascript", "arrays", "oop", "" ]
We have a C# Windows service polling a folder, waiting for an FTP’ed file to be posted in. To avoid using the file while it is still being written to, we attempt to get a lock on the file first. However, there seem to be occasions where we get a lock on the file after the FTP’ed file is created but before the file is written to, so we end up opening an empty file. Is there a reliable way to tell if the FTP is complete?
What about using a folder watcher to index the contents? If a file's size does not change within 5 minutes, you can pretty much guarantee the upload has finished. The timeout could be tied to the timeout of your FTP server too.

<http://www.codeproject.com/KB/files/MonitorFolderActivity.aspx>
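A minimal sketch of that size-stability check (in Python purely for illustration; the poll interval and check count are made-up values that should be tuned against the FTP server's idle timeout, as suggested above):

```python
import os
import time

def wait_until_stable(path, poll_seconds=2.0, stable_checks=3):
    """Block until the file's size has stopped changing for
    `stable_checks` consecutive polls, then return the final size.

    This assumes the uploader writes more or less continuously; a
    stalled-but-unfinished upload would also look "stable".
    """
    last_size = -1
    stable = 0
    while stable < stable_checks:
        size = os.path.getsize(path)
        if size == last_size:
            stable += 1
        else:
            stable = 0
            last_size = size
        time.sleep(poll_seconds)
    return size
```

Only once this returns would the service go on to take its lock and process the file.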
You could possibly change the filename before upload, then rename it after it's done. That way it will look like the file doesn't exist until the upload is finished.
How can I tell if a file has finished being FTPed?
[ "", "c#", "file-io", "ftp", "" ]
Say I have a Point2 class, and I want to implement the following Equals:

```
public override bool Equals ( object obj )
public bool Equals ( Point2 obj )
```

This is from the Effective C# 3 book:

```
public override bool Equals ( object obj )
{
    // STEP 1: Check for null
    if ( obj == null )
    {
        return false;
    }

    // STEP 3: equivalent data types
    if ( this.GetType ( ) != obj.GetType ( ) )
    {
        return false;
    }

    return Equals ( ( Point2 ) obj );
}

public bool Equals ( Point2 obj )
{
    // STEP 1: Check for null if nullable (e.g., a reference type)
    if ( obj == null )
    {
        return false;
    }

    // STEP 2: Check for ReferenceEquals if this is a reference type
    if ( ReferenceEquals ( this, obj ) )
    {
        return true;
    }

    // STEP 4: Possibly check for equivalent hash codes
    if ( this.GetHashCode ( ) != obj.GetHashCode ( ) )
    {
        return false;
    }

    // STEP 5: Check base.Equals if base overrides Equals()
    System.Diagnostics.Debug.Assert ( base.GetType ( ) != typeof ( object ) );
    if ( !base.Equals ( obj ) )
    {
        return false;
    }

    // STEP 6: Compare identifying fields for equality.
    return ( ( this.X.Equals ( obj.X ) ) && ( this.Y.Equals ( obj.Y ) ) );
}
```

Is that overkill?
Supporting equality with an inheritance hierarchy is tricky. You need to work out exactly what you mean. Do you really need inheritance here? If not - if Point2 derives directly from System.Object and you can make it sealed - life becomes a bit easier. In that case I would use:

```
public override bool Equals (object obj)
{
    return Equals(obj as Point2);
}

public bool Equals (Point2 obj)
{
    // STEP 1: Check for null if nullable (e.g., a reference type)
    // Note use of ReferenceEquals in case you overload ==.
    if (object.ReferenceEquals(obj, null))
    {
        return false;
    }

    // STEP 2: Check for ReferenceEquals if this is a reference type
    // Skip this or not? With only two fields to check, it's probably
    // not worth it. If the later checks are costly, it could be.
    if (object.ReferenceEquals(this, obj))
    {
        return true;
    }

    // STEP 4: Possibly check for equivalent hash codes
    // Skipped in this case: would be *less* efficient

    // STEP 5: Check base.Equals if base overrides Equals()
    // Skipped in this case

    // STEP 6: Compare identifying fields for equality.
    // In this case I'm using == instead of Equals for brevity
    // - assuming X and Y are of a type which overloads ==.
    return this.X == obj.X && this.Y == obj.Y;
}
```
Not really - you are accounting for pretty much every possibility. If this code is for anything other than a scratch application you should consider the benefits of this approach because logical errors due to weird object equality behavior are painful to debug.
Most robust Equals implementation for custom classes for value equality in C#
[ "", "c#", ".net", "class", "equals", "" ]
Is there a limit to the number of joins permitted in a JPA/Hibernate query?

Since Hibernate [doesn't automatically join](http://www.hibernate.org/118.html#A23), I have to explicitly specify the joins in my JPA/Hibernate query. For example, a person has an address, and an address has a state. The following query retrieves person(s) with address and state fully loaded:

```
select p, a, s from person p left join p.address a left join a.state s where ...
```

As I keep adding joins, I eventually (after 12-13 left joins) reach a limit where Hibernate generates invalid SQL:

```
Caused by: java.sql.SQLException: Column 'something69_2_' not found.
```

I do have Hibernate's dialect set to my database implementation, MySQL:

```
<property name="hibernate.dialect">org.hibernate.dialect.MySQLInnoDBDialect</property>
```

Is there a limit to the number of joins Hibernate can handle in a single query?

**Edit 1:** The following is in the log file:

```
could not read column value from result set: something69_2_; Column 'something69_2_' not found.
```

However, `something69_2_` doesn't appear in the SQL query. It's like Hibernate generated a SQL query and is expecting `something69_2_` to be in the results, which it is not.

**Edit 2:** Similar problem documented as an [unfixed Hibernate bug HHH-3035](http://opensource.atlassian.com/projects/hibernate/browse/HHH-3035)

**Edit 3:** This is a documented [Hibernate bug HHH-3636](http://opensource.atlassian.com/projects/hibernate/browse/HHH-3636), which has been fixed but is not part of any release yet.

**Edit 4:** I built hibernate-core 3.3.2-SNAPSHOT, which includes bug fix HHH-3636, and it did not address this problem.

**Edit 5:** The bug behavior seems to be triggered by multiple `LEFT JOIN FETCH` clauses on ManyToMany or OneToMany relationships. One will work; two or three trigger the bug.
**Edit 6:** Here's the stack trace: ``` javax.persistence.PersistenceException: org.hibernate.exception.SQLGrammarException: could not execute query at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:629) at org.hibernate.ejb.QueryImpl.getResultList(QueryImpl.java:73) Caused by: org.hibernate.exception.SQLGrammarException: could not execute query at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:67) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43) at org.hibernate.loader.Loader.doList(Loader.java:2214) at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2095) at org.hibernate.loader.Loader.list(Loader.java:2090) at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:388) at org.hibernate.hql.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:338) at org.hibernate.engine.query.HQLQueryPlan.performList(HQLQueryPlan.java:172) at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1121) at org.hibernate.impl.QueryImpl.list(QueryImpl.java:79) at org.hibernate.ejb.QueryImpl.getResultList(QueryImpl.java:64) ... 69 more Caused by: java.sql.SQLException: Column 'something69_2_' not found. 
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1055) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:926) at com.mysql.jdbc.ResultSetImpl.findColumn(ResultSetImpl.java:1136) at com.mysql.jdbc.ResultSetImpl.getInt(ResultSetImpl.java:2777) at org.hibernate.type.IntegerType.get(IntegerType.java:28) at org.hibernate.type.NullableType.nullSafeGet(NullableType.java:113) at org.hibernate.type.NullableType.nullSafeGet(NullableType.java:102) at org.hibernate.loader.Loader.getKeyFromResultSet(Loader.java:1088) at org.hibernate.loader.Loader.getRowFromResultSet(Loader.java:553) at org.hibernate.loader.Loader.doQuery(Loader.java:689) at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:224) at org.hibernate.loader.Loader.doList(Loader.java:2211) ... 77 more ``` **Edit 7:** The reason for all these joins is to avoid Hibernate doing n+1 queries, see [Hibernate FAQ on How can I avoid n+1 SQL SELECT queries when running a Hibernate query?](http://www.hibernate.org/118.html#A23)
The question is why are you trying to make such a complex query in the first place? Have you considered different approaches? The documentation on [improving performance](http://www.hibernate.org/hib_docs/v3/reference/en-US/html/performance.html) makes some suggestions.
I once hit the MySQL 5.0 61-table limit with Hibernate:

```
ERROR 1116 (HY000): Too many tables; MySQL can only use 61 tables in a join
```
JPA/Hibernate maximum number of joins?
[ "", "java", "hibernate", "jpa", "join", "" ]
I am creating a `TextBox` and a `Button` dynamically using the following code:

```
Button btnClickMe = new Button();
btnClickMe.Content = "Click Me";
btnClickMe.Name = "btnClickMe";
btnClickMe.Click += new RoutedEventHandler(this.ClickMeClick);
someStackPanel.Children.Add(btnClickMe);

TextBox txtNumber = new TextBox();
txtNumber.Name = "txtNumber";
txtNumber.Text = "1776";
someStackPanel.Children.Add(txtNumber);
```

I hook up a click event to the `Click Me` button, and its click event fires correctly. However, I cannot find the `TextBox` I added dynamically.

Here is my click handler:

```
protected void ClickMeClick(object sender, RoutedEventArgs e)
{
    // Find the phone number
    TextBox txtNumber = this.someStackPanel.FindName("txtNumber") as TextBox;

    if (txtNumber != null)
    {
        string message = string.Format("The number is {0}", txtNumber.Text);
        MessageBox.Show(message);
    }
    else
    {
        MessageBox.Show("Textbox is null");
    }
}
```

How can I find the `TextBox` `txtNumber`?
Josh G had the clue that fixed this code: use RegisterName().

Three benefits here:

1. Doesn't use a member variable to save the reference to the dynamically created TextBox.
2. Compiles.
3. Complete code.

```
using System;
using System.Windows;
using System.Windows.Controls;

namespace AddControlsDynamically
{
    public partial class Window1 : Window
    {
        public void Window_Loaded(object sender, RoutedEventArgs e)
        {
            GenerateControls();
        }

        public void GenerateControls()
        {
            Button btnClickMe = new Button();
            btnClickMe.Content = "Click Me";
            btnClickMe.Name = "btnClickMe";
            btnClickMe.Click += new RoutedEventHandler(this.CallMeClick);
            someStackPanel.Children.Add(btnClickMe);

            TextBox txtNumber = new TextBox();
            txtNumber.Name = "txtNumber";
            txtNumber.Text = "1776";
            someStackPanel.Children.Add(txtNumber);
            someStackPanel.RegisterName(txtNumber.Name, txtNumber);
        }

        protected void CallMeClick(object sender, RoutedEventArgs e)
        {
            TextBox txtNumber = (TextBox) this.someStackPanel.FindName("txtNumber");
            string message = string.Format("The number is {0}", txtNumber.Text);
            MessageBox.Show(message);
        }
    }
}
```
Another method is to set the associated `TextBox` as the `Button`'s `Tag` when instantiating them:

```
btnClickMe.Tag = txtNumber;
```

This way you can retrieve it back in the event handler:

```
protected void ClickMeClick(object sender, RoutedEventArgs e)
{
    Button btnClickMe = sender as Button;

    if (btnClickMe != null)
    {
        TextBox txtNumber = btnClickMe.Tag as TextBox;
        // ...
    }
}
```
WPF how do I create a textbox dynamically and find the textbox on a button click?
[ "", "c#", ".net", "wpf", "textbox", "mouseevent", "" ]
What is the most elegant method for dumping a list in Python into an SQLite3 DB as binary data (i.e., a BLOB cell)?

```
data = [ 0, 1, 2, 3, 4, 5 ]

# now write this to db as binary data
# 0000 0000
# 0000 0001
# ...
# 0000 0101
```
Assuming you want it treated as a sequence of 8-bit unsigned values, use the `array` module:

```
a = array.array('B', data)

>>> a.tostring()
'\x00\x01\x02\x03\x04\x05'
```

Use a different typecode than `'B'` if you want to treat the data as a different type, e.g. `'b'` for a sequence of signed bytes, or `'i'` for signed integers.
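For completeness, here is a sketch of the full round trip into an SQLite BLOB. (Note: on modern Python 3 the method is spelled `tobytes()` rather than `tostring()`, and `bytes` can be bound to a BLOB parameter directly.)

```python
import array
import sqlite3

data = [0, 1, 2, 3, 4, 5]
blob = array.array('B', data).tobytes()  # .tostring() on older Pythons

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (payload BLOB)")
conn.execute("INSERT INTO t VALUES (?)", (blob,))

# Read it back and unpack into the original list of ints
stored = conn.execute("SELECT payload FROM t").fetchone()[0]
print(list(array.array('B', stored)))  # [0, 1, 2, 3, 4, 5]
```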
It seems that Brian's solution fits your needs, but keep in mind that with that method you're just storing the data as a string.

If you want to store the raw binary data in the database (so it doesn't take up as much space), convert your data to a **Binary** sqlite object and then add it to your database:

```
query = u'''insert into testtable VALUES(?)'''
b = sqlite3.Binary(some_binarydata)
cur.execute(query,(b,))
con.commit()
```

(For some reason this doesn't seem to be documented in the Python documentation.)

Here are some notes on sqlite BLOB data restrictions: <http://effbot.org/zone/sqlite-blob.htm>
Python, Sqlite3 - How to convert a list to a BLOB cell
[ "", "python", "sqlite", "" ]
In Windows when you create a window, you must define a (C++)

```
LRESULT CALLBACK message_proc(HWND Handle, UINT Message, WPARAM WParam, LPARAM LParam);
```

to handle all the messages sent from the OS to the window, like keypresses and such.

I'm looking to do some reading on how the same system works in Linux. Maybe it is because I fall a bit short on the terminology, but I fail to find anything on this through Google (although I'm sure there must be plenty!).

* Is it still just one single C function that handles all the communication?
* Does the function definition differ between different WMs (GNOME, KDE), or is it handled at a lower level in the OS?

**Edit**: I've looked into tools like Qt and wxWidgets, but those frameworks seem to be geared more towards developing GUI-extensive applications. I'm rather looking for a way to create a basic window (restrict resizing, borders/decorations) for my OGL graphics and retrieve input on more than one platform. And according to my initial research, this kind of function is the only way to retrieve that input.

What would be the best route? Reading up, learning and then using Qt or wxWidgets? Or learning how the systems work and implementing those few basic features I want myself?
In principle it is absolutely the same. However, it has nothing to do with communication with the OS (nor does it on Win32; using user32.dll is entirely optional).

A GUI application has an event loop somewhere, which processes messages from a queue at some level. There are a lot of libraries typically used to "hide" this behaviour - you can use them (and indeed, you should). If anything, the Xlib event system is even more perverse than Win32's user32.dll one, and is less widely understood, therefore fewer people use it directly.

---

In Linux or in Windows, applications can use the low-level GUI, or can use a library. Most use a library. Applications can also choose to do neither and operate without a GUI (server applications typically do this). Applications can create multiple threads, one of which sits in an event loop, and others work differently. This is a popular approach too.

* Most GUI applications use a higher-level library for their GUI
* Non-interactive applications, e.g. server applications, don't use the GUI at all and don't use the libraries (e.g. Xlib, user32.dll)
* Applications which don't lend themselves to an "event loop" (e.g. games) typically use a separate thread to process their event loop.
* These things are largely true on both Win32 and Linux.
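The "event loop processing messages from a queue" shape is the same on both platforms. A toy sketch (Python here, purely for illustration; the message names are made up) of the pattern every GUI framework wraps for you:

```python
from collections import deque

def run_event_loop(messages, handlers):
    """Pop (message, payload) pairs off the queue and dispatch each to a
    handler until QUIT arrives -- the moral equivalent of the Win32
    GetMessage/DispatchMessage loop, with the handler table playing the
    role of a window procedure.
    """
    handled = []
    while messages:
        message, payload = messages.popleft()
        if message == "QUIT":
            break
        handler = handlers.get(message)
        if handler is not None:
            handled.append(handler(payload))
    return handled

queue = deque([("KEYDOWN", "a"), ("PAINT", None), ("QUIT", None)])
handlers = {
    "KEYDOWN": lambda key: "key " + key,
    "PAINT": lambda _: "repaint",
}
print(run_event_loop(queue, handlers))  # ['key a', 'repaint']
```

The loop itself is trivial; all the interesting behaviour lives in the handler table, which is why the libraries that hide it cost you nothing.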
Well, at the very basic level you have the X Window protocol <http://en.wikipedia.org/wiki/X_Window_System_core_protocol>, which can be pretty complex to handle if you want to write any real application.

Next on the stack there's Xlib <http://en.wikipedia.org/wiki/Xlib>, which is a "convenient" wrapper around the X protocol, but still complex for "real life" applications.

It's on top of Xlib that most other frameworks are built, trying to simplify application development. The best known are: Xt, GTK, Qt, etc.

Like in Windows, you have an "event loop", and if you want you can implement a GetMessage/DispatchMessage metaphor on top of it to mimic the Windows behavior. That way you may have a WNDPROC, but natively X doesn't provide such a thing.

Before reinventing the wheel, it is preferable to take a look at what similar applications are using. If you need something simple you can try SDL <http://www.libsdl.org/>, a cross-platform library aimed at developing games and simple applications. Another alternative is the Allegro game library <http://www.talula.demon.co.uk/allegro/>.
Window message procedures in Linux vs Windows
[ "", "c++", "windows", "linux", "cross-platform", "" ]
I have taken over maintenance of a very large (>2M SLOC) software project, all written in C#. There is very little documentation. I now want to make a change to a module that has public interfaces (about 400), but I don't know what other modules (there are about 50 total) in the solution may be using this public interface. How would you create an interface dependency usage tree for a situation such as this? The codebase is too big to simply navigate the Project Explorer and read source code.

What tools or methods have you used to create this type of dependency analysis tree? I do not want to purchase any tools. The Visual Studio tools such as Class View don't seem to handle a project of this size very well.

I have thought about writing my own sed/awk/perl-ish script that simply walks the source code and uses pattern matching to build my own dependency/interface usage database, but I don't want to do something the hard way if there is an easy way.

Thanks!
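**Edit:** To illustrate the kind of quick-and-dirty scan I mean, a tiny sketch (in Python rather than sed/awk; the interface and file names here are invented for the example). I realise it would miss aliased references and flag names inside comments:

```python
import re
from collections import defaultdict

def find_usages(sources, interface_names):
    """Map each interface name to the set of files whose text mentions it.

    `sources` is {filename: file contents}; in a real run it would be
    built by walking the .cs files on disk.
    """
    patterns = {name: re.compile(r"\b" + re.escape(name) + r"\b")
                for name in interface_names}
    usages = defaultdict(set)
    for filename, text in sources.items():
        for name, pattern in patterns.items():
            if pattern.search(text):
                usages[name].add(filename)
    return dict(usages)

sources = {
    "Billing.cs": "public class Billing : IInvoice { }",
    "Report.cs":  "// nothing relevant here",
}
print(find_usages(sources, ["IInvoice"]))  # {'IInvoice': {'Billing.cs'}}
```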
You probably ought to buy something like [NDepend](http://www.ndepend.com/). If there were another FREE tool that would provide similar value, I'd recommend it. However, I truly believe NDepend is your best bet. With a code base of that size, it won't take long for a $400 tool to pay for itself.
I seriously suggest just finding an existing tool to do this. You are never going to be able to get the whole thing sorted out by hand, and the time saved using a tool will definitely pay for the cost of the tool itself. A guy I work with had this problem - his manager wouldn't pay for a profiler, and so they spent 5 developer days optimizing the wrong parts of the program. Using the profiler, he found and fixed the problem within half a day.
Discovering Interface Dependencies
[ "", "c#", "interface", "dependencies", "" ]
I have an application on my site where people can sign up to receive newsletters, and then someone in a paid group can write and send one immediately to everybody who signed up...

...meaning that I need an efficient way to loop through my database of subscribers and mail them copies of the email using PHP.

Clearly, there is the mail() function... I could put that in a loop... is there a better way?
I'd suggest finding a way to loop through and remember who you have mailed already, because if it becomes a large list of people, your script might end and you'd have to reload it.

I have done it once using AJAX, which gave me a great way to track where I was in the sending process. I counted how many people to mail, put the IDs in an array, and had JavaScript loop and make separate calls to a PHP mail page...

-edit-
You can have a script in PHP with a simple while loop, but then you should add a check in the DB to see if a mail was already sent to each person. If the script exceeds the memory limit, just reload the page, and it will only send to the ones that haven't received it yet...
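The "check the DB, send, mark as sent" loop might look like this (sketched in Python with SQLite so it is self-contained and runnable; in PHP the shape is the same with mail() and your own DB layer, and the table/column names here are invented):

```python
import sqlite3

def send_pending(conn, send_mail, batch_size=50):
    """Send the newsletter to subscribers not yet marked as sent.

    Marking each row right after a successful send means the script can
    die mid-run (or be reloaded, as suggested above) and safely resume.
    `send_mail` stands in for PHP's mail(); returns how many were sent.
    """
    rows = conn.execute(
        "SELECT id, email FROM subscribers WHERE sent = 0 "
        "ORDER BY id LIMIT ?", (batch_size,)).fetchall()
    for subscriber_id, email in rows:
        send_mail(email)
        conn.execute("UPDATE subscribers SET sent = 1 WHERE id = ?",
                     (subscriber_id,))
        conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscribers "
             "(id INTEGER PRIMARY KEY, email TEXT, sent INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO subscribers (email) VALUES (?)",
                 [("a@example.com",), ("b@example.com",)])

sent_to = []
send_pending(conn, sent_to.append)
print(sent_to)  # ['a@example.com', 'b@example.com']
```

Re-running `send_pending` after a crash simply picks up the remaining `sent = 0` rows.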
PEAR's [mail queue](http://pear.php.net/package/Mail_Queue)? > The Mail\_Queue class puts mails in a > temporary container, waiting to be fed > to the MTA (Mail Transport Agent), and > sends them later (e.g. a certain > amount of mails every few minutes) by > crontab or in other way.
Automated mailing list - PHP
[ "", "php", "email", "" ]
I'm a bit of a database novice, so pardon my naivety here.

I have the following SQL statement:

```
SELECT DISTINCT dbo.tb_user.familyName, dbo.user_email.email
FROM dbo.tb_user
INNER JOIN dbo.user_email ON (dbo.tb_user.id = dbo.user_email.userID)
```

This returns records where an email address exists. What I would like to do is retrieve all the records, so I can see which users do and do not have email addresses recorded in the DB.

Can this be achieved? Any hints/help much appreciated.
You need to use a left outer join:

```
SELECT DISTINCT dbo.tb_user.familyName, dbo.user_email.email
FROM dbo.tb_user
LEFT OUTER JOIN dbo.user_email ON (dbo.tb_user.id = dbo.user_email.userID)
```
What you want is called an "outer join". In an outer join, the order the tables are listed in matters, unlike the inner join you have listed. The second one listed is the outer one and will show NULL in those cases where there is no match (at least in MySQL, the only place I've used outer joins so far).

SELECT DISTINCT dbo.tb\_user.familyName, dbo.user\_email.email FROM dbo.tb\_user LEFT **OUTER** JOIN dbo.user\_email ON (dbo.tb\_user.id = dbo.user\_email.userID)
Retrieving All records - Inner join
[ "", "sql", "database", "join", "" ]
I need the ability to convert and merge various documents into a single PDF. The documents could be of varying types, such as Word, OpenOffice, images, text, and web pages (by URL), and the PDF would usually consist of 2-3 documents.

At the moment, we are using BCL Technologies easyPDF with Microsoft Office installed on the server. This handles most documents, but we haven't had it doing OpenOffice ones yet. We currently produce around 100-1000 of these PDFs per day.

The reason I am asking the question is that performance is a key issue. The PDF is generated for users on the fly, so the waiting times we are currently getting of 30-60 seconds are becoming unacceptable. We have done some caching around documents when they are initially uploaded, so the main task that happens when a user requests a PDF is merging a number of already-generated PDFs.

Does anyone else have any other tools they have used that work reliably for most common document types and, above all, quickly? When put like that, it seems like I'm asking a lot!

Edit: Thanks for all the great advice; I'll look into some of these and compare performance. Just to add to all this, money is not really an object. We're more than happy to pay for different applications to perform each task, as well as looking into various hardware options to distribute the load as much as possible.
Merging multiple PDF documents is normally simple enough (as long as they don't need to be merged onto the same page) - you could compare your merge performance with something like iTextSharp (the .NET version of iText) to be sure it isn't a bottleneck - otherwise the conversion from other formats to PDF is likely the bottleneck.

In almost all cases, the method used to convert *X* to PDF is to execute the application's print command, targeted at a software PDF printer, to create a temporary PDF file. This means:

* The target application (for example Office) is opened and closed
* The document has to travel through the printing service

In your situation, are you converting arbitrary documents submitted by the users, or do the documents come from a stored library of files? If it's a library, you could make a PDF copy of each file as it is added to the library (instead of when the user makes a request), and then only merge the PDF files.
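The convert-at-upload idea from the last paragraph, as a sketch (Python purely for shape; the converter and merger here are dummy placeholders standing in for whatever is actually used, e.g. the Office print driver and iTextSharp):

```python
pdf_cache = {}  # document id -> converted PDF bytes

def store_document(doc_id, raw_bytes, convert_to_pdf):
    """Run the slow conversion once, at upload time, and cache the result."""
    pdf_cache[doc_id] = convert_to_pdf(raw_bytes)

def build_bundle(doc_ids, merge_pdfs):
    """At request time only the (cheap) merge of cached PDFs remains."""
    return merge_pdfs([pdf_cache[d] for d in doc_ids])

# Dummy converter/merger just to show the flow; a real one would produce
# and combine actual PDF streams.
store_document("doc1", b"hello", convert_to_pdf=bytes.upper)
store_document("doc2", b"world", convert_to_pdf=bytes.upper)
print(build_bundle(["doc1", "doc2"], merge_pdfs=b" ".join))  # b'HELLO WORLD'
```

The point is simply that the 30-60 second conversion cost is paid per upload, not per user request.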
We use ABC Pdf. I don't know if it will be fast enough for your needs, but it seems to work for our use.
ASP.Net Converting and Merging documents into single PDF
[ "", "c#", "asp.net", "pdf", "pdf-generation", "" ]